https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/20%3A_Electrochemistry/20.5%3A_Batteries%3A_Producing_Electricity_Through_Chemical_Reactions
Because galvanic cells can be self-contained and portable, they can be used as batteries and fuel cells. A battery is a galvanic cell (or a series of galvanic cells) that contains all the reactants needed to produce electricity. In contrast, a fuel cell is a galvanic cell that requires a constant external supply of one or more reactants to generate electricity. In this section, we describe the chemistry behind some of the more common types of batteries and fuel cells. There are two basic kinds of batteries: disposable, or primary, batteries, in which the electrode reactions are effectively irreversible and which cannot be recharged; and rechargeable, or secondary, batteries, which form an insoluble product that adheres to the electrodes. These batteries can be recharged by applying an electrical potential in the reverse direction. The recharging process temporarily converts a rechargeable battery from a galvanic cell to an electrolytic cell. Batteries are cleverly engineered devices that are based on the same fundamental laws as galvanic cells. The major difference between batteries and the galvanic cells we have previously described is that commercial batteries use solids or pastes rather than solutions as reactants to maximize the electrical output per unit mass. The use of highly concentrated or solid reactants has another beneficial effect: the concentrations of the reactants and the products do not change greatly as the battery is discharged; consequently, the output voltage remains remarkably constant during the discharge process. This behavior is in contrast to that of the Zn/Cu cell, whose output decreases logarithmically as the reaction proceeds (Figure \(\PageIndex{1}\)). When a battery consists of more than one galvanic cell, the cells are usually connected in series—that is, with the positive (+) terminal of one cell connected to the negative (−) terminal of the next, and so forth. 
The overall voltage of the battery is therefore the sum of the voltages of the individual cells. The major difference between batteries and ordinary galvanic cells is that commercial batteries typically use solids or pastes rather than solutions as reactants to maximize the electrical output per unit mass. An obvious exception is the standard car battery, which uses solution-phase chemistry. The dry cell, by far the most common type of battery, is used in flashlights, electronic devices such as the Walkman and Game Boy, and many other devices. Although the dry cell was patented in 1866 by the French chemist Georges Leclanché and more than 5 billion such cells are sold every year, the details of its electrode chemistry are still not completely understood. In spite of its name, the Leclanché dry cell is actually a “wet cell”: the electrolyte is an acidic water-based paste containing \(MnO_2\), \(NH_4Cl\), \(ZnCl_2\), graphite, and starch (part (a) in Figure \(\PageIndex{1}\)). The half-reactions at the cathode and the anode can be summarized as follows: \[\ce{2MnO2(s) + 2NH4^{+}(aq) + 2e^{−} -> Mn2O3(s) + 2NH3(aq) + H2O(l)} \nonumber \] \[\ce{Zn(s) -> Zn^{2+}(aq) + 2e^{−}} \nonumber \] The \(\ce{Zn^{2+}}\) ions formed by the oxidation of \(\ce{Zn(s)}\) at the anode react with \(\ce{NH_3}\) formed at the cathode and \(\ce{Cl^{−}}\) ions present in solution, so the overall cell reaction is as follows: \[\ce{2MnO2(s) + 2NH4Cl(aq) + Zn(s) -> Mn2O3(s) + Zn(NH3)2Cl2(s) + H2O(l)} \label{Eq3} \] The dry cell produces about 1.55 V and is inexpensive to manufacture. It is not, however, very efficient in producing electrical energy because only the relatively small fraction of the \(\ce{MnO2}\) that is near the cathode is actually reduced and only a small fraction of the zinc anode is actually consumed as the cell discharges. 
In addition, dry cells have a limited shelf life because the \(\ce{Zn}\) anode reacts spontaneously with \(\ce{NH4Cl}\) in the electrolyte, causing the case to corrode and allowing the contents to leak out. The alkaline battery is essentially a Leclanché cell adapted to operate under alkaline, or basic, conditions. The half-reactions that occur in an alkaline battery are as follows: \[\ce{2MnO2(s) + H2O(l) + 2e^{−} -> Mn2O3(s) + 2OH^{−}(aq)} \nonumber \] \[\ce{Zn(s) + 2OH^{−}(aq) -> ZnO(s) + H2O(l) + 2e^{−}} \nonumber \] \[\ce{Zn(s) + 2MnO2(s) -> ZnO(s) + Mn2O3(s)} \nonumber \] This battery also produces about 1.5 V, but it has a longer shelf life and more constant output voltage as the cell is discharged than the Leclanché dry cell. Although the alkaline battery is more expensive to produce than the Leclanché dry cell, the improved performance makes this battery more cost-effective. Although some of the small button batteries used to power watches, calculators, and cameras are miniature alkaline cells, most are based on a completely different chemistry. In these "button" batteries, the anode is a zinc–mercury amalgam rather than pure zinc, and the cathode uses either \(\ce{HgO}\) or \(\ce{Ag2O}\) as the oxidant rather than \(\ce{MnO2}\) (part (b) in Figure \(\PageIndex{1}\)). Two half-reactions occur at the anode, but the overall reactions and approximate cell outputs for these two types of button batteries are as follows: \(\ce{Zn(s) + HgO(s) -> ZnO(s) + Hg(l)}\) for the mercury cell (about 1.35 V) and \(\ce{Zn(s) + Ag2O(s) -> ZnO(s) + 2Ag(s)}\) for the silver cell (about 1.6 V). The major advantages of the mercury and silver cells are their reliability and their high output-to-mass ratio. These factors make them ideal for applications where small size is crucial, as in cameras and hearing aids. The disadvantages are the expense and the environmental problems caused by the disposal of heavy metals, such as \(\ce{Hg}\) and \(\ce{Ag}\). 
None of the batteries described above is actually “dry.” They all contain small amounts of liquid water, which adds significant mass and causes potential corrosion problems. Consequently, substantial effort has been expended to develop water-free batteries. One of the few commercially successful water-free batteries is the lithium–iodine battery. The anode is lithium metal, and the cathode is a solid complex of \(I_2\). Separating them is a layer of solid \(LiI\), which acts as the electrolyte by allowing the diffusion of \(Li^+\) ions. The electrode reactions are as follows: \[I_{2(s)} + 2e^− \rightarrow {2I^-}_{(LiI)}\label{Eq11} \] \[2Li_{(s)} \rightarrow 2Li^+_{(LiI)} + 2e^− \label{Eq12} \] \[2Li_{(s)}+ I_{2(s)} \rightarrow 2LiI_{(s)} \label{Eq12a} \] with \(E_{cell} = 3.5 \, V\) As shown in part (c) in Figure \(\PageIndex{1}\), a typical lithium–iodine battery consists of two cells separated by a nickel metal mesh that collects charge from the anode. Because of the high internal resistance caused by the solid electrolyte, only a low current can be drawn. Nonetheless, such batteries have proven to be long-lived (up to 10 yr) and reliable. They are therefore used in applications where frequent replacement is difficult or undesirable, such as in cardiac pacemakers and other medical implants and in computers for memory protection. These batteries are also used in security transmitters and smoke alarms. Other batteries based on lithium anodes and solid electrolytes are under development, using \(TiS_2\), for example, for the cathode. Dry cells, button batteries, and lithium–iodine batteries are disposable and cannot be recharged once they are discharged. Rechargeable batteries, in contrast, offer significant economic and environmental advantages because they can be recharged and discharged numerous times. As a result, manufacturing and disposal costs drop dramatically for a given number of hours of battery usage. 
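To connect the lithium–iodine chemistry above to the recurring theme of electrical output per unit mass, here is a rough Python sketch of the cell's theoretical specific energy, using the 3.5 V output and the overall reaction \(2Li + I_2 \rightarrow 2LiI\) from the text. The molar masses are standard values (my addition, not from the source), and a real packaged cell delivers well below this theoretical ceiling.

```python
# Theoretical specific energy of the Li/I2 cell: 2 Li + I2 -> 2 LiI
# (a back-of-the-envelope ceiling; real cells deliver much less).
F = 96485.0                        # Faraday constant, C per mol of electrons
E_CELL = 3.5                       # V, from the text
N_ELECTRONS = 2                    # electrons per formula reaction
M_REACTANTS_G = 2 * 6.94 + 253.81  # g per mole of reaction (2 Li + I2)

energy_joules = N_ELECTRONS * F * E_CELL                 # J per mole of reaction
wh_per_kg = (energy_joules / 3600.0) / (M_REACTANTS_G / 1000.0)

print(round(wh_per_kg))  # roughly 700 Wh/kg
```

The same arithmetic applied to water-based cells (heavier reactants, lower voltages) gives much smaller numbers, which is the quantitative content of the "output per unit mass" argument.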
Two common rechargeable batteries are the nickel–cadmium battery and the lead–acid battery, which we describe next. The nickel–cadmium, or NiCad, battery is used in small electrical appliances and devices like drills, portable vacuum cleaners, and AM/FM digital tuners. It is a water-based cell with a cadmium anode and a highly oxidized nickel cathode that is usually described as the nickel(III) oxo-hydroxide, NiO(OH). As shown in Figure \(\PageIndex{2}\), the design maximizes the surface area of the electrodes and minimizes the distance between them, which decreases internal resistance and makes a rather high discharge current possible. The electrode reactions during the discharge of a \(NiCad\) battery are as follows: \[2NiO(OH)_{(s)} + 2H_2O_{(l)} + 2e^− \rightarrow 2Ni(OH)_{2(s)} + 2OH^-_{(aq)} \label{Eq13} \] \[Cd_{(s)} + 2OH^-_{(aq)} \rightarrow Cd(OH)_{2(s)} + 2e^- \label{Eq14} \] \[Cd_{(s)} + 2NiO(OH)_{(s)} + 2H_2O_{(l)} \rightarrow Cd(OH)_{2(s)} + 2Ni(OH)_{2(s)} \label{Eq15} \] \(E_{cell} = 1.4 \, V\) Because the products of the discharge half-reactions are solids that adhere to the electrodes [\(Cd(OH)_2\) and \(Ni(OH)_2\)], the overall reaction is readily reversed when the cell is recharged. Although NiCad cells are lightweight, rechargeable, and high in capacity, they have certain disadvantages. For example, they tend to lose capacity quickly if not allowed to discharge fully before recharging, they do not store well for long periods when fully charged, and they present significant environmental and disposal problems because of the toxicity of cadmium. A variation on the NiCad battery is the nickel–metal hydride battery (NiMH) used in hybrid automobiles, wireless communication devices, and mobile computing. 
The overall chemical equation for this type of battery is as follows: \[NiO(OH)_{(s)} + MH_{(s)} \rightarrow Ni(OH)_{2(s)} + M_{(s)} \label{Eq16} \] where \(M\) is a hydrogen-absorbing metal alloy and \(MH\) is its hydride. The NiMH battery has a 30%–40% improvement in capacity over the NiCad battery; it is more environmentally friendly, so storage, transportation, and disposal are not subject to environmental control; and it is not as sensitive to the recharging memory effect. It is, however, subject to a 50% greater self-discharge rate, a limited service life, and higher maintenance, and it is more expensive than the NiCad battery. Directive 2006/66/EC of the European Union prohibits the placing on the market of portable batteries that contain more than 0.002% of cadmium by weight. The aim of this directive was to improve "the environmental performance of batteries and accumulators". The lead–acid battery is used to provide the starting power in virtually every automobile and marine engine on the market. Marine and car batteries typically consist of multiple cells connected in series. The total voltage generated by the battery is the potential per cell (\(E°_{cell}\)) times the number of cells. As shown in Figure \(\PageIndex{3}\), the anode of each cell in a lead storage battery is a plate or grid of spongy lead metal, and the cathode is a similar grid containing powdered lead dioxide (\(PbO_2\)). The electrolyte is usually an approximately 37% solution (by mass) of sulfuric acid in water, with a density of 1.28 g/mL (about 4.5 M \(H_2SO_4\)). Because the redox-active species are solids, there is no need to separate the electrodes into different compartments. 
The electrode reactions in each cell during discharge are as follows: \[PbO_{2(s)} + HSO^−_{4(aq)} + 3H^+_{(aq)} + 2e^− \rightarrow PbSO_{4(s)} + 2H_2O_{(l)} \label{Eq17} \] with \(E^°_{cathode} = 1.685 \; V\) \[Pb_{(s)} + HSO^−_{4(aq)} \rightarrow PbSO_{4(s) }+ H^+_{(aq)} + 2e^−\label{Eq18} \] with \(E^°_{anode} = −0.356 \; V\) \[Pb_{(s)} + PbO_{2(s)} + 2HSO^−_{4(aq)} + 2H^+_{(aq)} \rightarrow 2PbSO_{4(s)} + 2H_2O_{(l)} \label{Eq19} \] and \(E^°_{cell} = 2.041 \; V\) As the cell is discharged, a powder of \(PbSO_4\) forms on the electrodes. Moreover, sulfuric acid is consumed and water is produced, decreasing the density of the electrolyte and providing a convenient way of monitoring the status of a battery by simply measuring the density of the electrolyte. This is often done with the use of a hydrometer. When an external voltage in excess of 2.04 V per cell is applied to a lead–acid battery, the electrode reactions reverse, and \(PbSO_4\) is converted back to metallic lead and \(PbO_2\). If the battery is recharged too vigorously, however, electrolysis of water can occur: \[ 2H_2O_{(l)} \rightarrow 2H_{2(g)} +O_{2 (g)} \label{EqX} \] This results in the evolution of potentially explosive hydrogen gas. The gas bubbles formed in this way can dislodge some of the \(PbSO_4\) or \(PbO_2\) particles from the grids, allowing them to fall to the bottom of the cell, where they can build up and cause an internal short circuit. Thus the recharging process must be carefully monitored to optimize the life of the battery. With proper care, however, a lead–acid battery can be discharged and recharged thousands of times. In automobiles, the alternator supplies the electric current that causes the discharge reaction to reverse. A fuel cell is a galvanic cell that requires a constant external supply of reactants because the products of the reaction are continuously removed. 
Unlike a battery, it does not store chemical or electrical energy; a fuel cell allows electrical energy to be extracted directly from a chemical reaction. In principle, this should be a more efficient process than, for example, burning the fuel to drive an internal combustion engine that turns a generator, which is typically less than 40% efficient, and in fact, the efficiency of a fuel cell is generally between 40% and 60%. Unfortunately, significant cost and reliability problems have hindered the wide-scale adoption of fuel cells. In practice, their use has been restricted to applications in which mass may be a significant cost factor, such as manned space vehicles. These space vehicles use a hydrogen/oxygen fuel cell that requires a continuous input of \(H_2(g)\) and \(O_2(g)\), as illustrated in Figure \(\PageIndex{4}\). The electrode reactions are as follows: \[O_{2(g)} + 4H^+ + 4e^− \rightarrow 2H_2O_{(g)} \label{Eq20} \] \[2H_{2(g)} \rightarrow 4H^+ + 4e^− \label{Eq21} \] \[2H_{2(g)} + O_{2(g)} \rightarrow 2H_2O_{(g)} \label{Eq22} \] The overall reaction represents an essentially pollution-free conversion of hydrogen and oxygen to water, which in space vehicles is then collected and used. Although this type of fuel cell should produce 1.23 V under standard conditions, in practice the device achieves only about 0.9 V. One of the major barriers to achieving greater efficiency is the fact that the four-electron reduction of \(O_2 (g)\) at the cathode is intrinsically rather slow, which limits the current that can be achieved. All major automobile manufacturers have major research programs involving fuel cells: one of the most important goals is the development of a better catalyst for the reduction of \(O_2 (g)\). Commercial batteries are galvanic cells that use solids or pastes as reactants to maximize the electrical output per unit mass. 
A battery is a contained unit that produces electricity, whereas a fuel cell is a galvanic cell that requires a constant external supply of one or more reactants to generate electricity. One type of battery is the Leclanché dry cell, which contains an electrolyte in an acidic water-based paste. This battery is called an alkaline battery when adapted to operate under alkaline conditions. Button batteries have a high output-to-mass ratio; lithium–iodine batteries consist of a solid electrolyte; the nickel–cadmium (NiCad) battery is rechargeable; and the lead–acid battery, which is also rechargeable, does not require the electrodes to be in separate compartments. A fuel cell requires an external supply of reactants as the products of the reaction are continuously removed. In a fuel cell, energy is not stored; electrical energy is provided by a chemical reaction. 
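As a quick numerical check of the lead–acid figures quoted earlier (half-cell potentials of 1.685 V and −0.356 V, and cells connected in series), the familiar 12 V car battery follows directly:

```python
# Lead-acid battery: cell potential from the half-cell values in the text,
# and the total voltage of six cells in series.
e_cathode = 1.685    # V, PbO2/PbSO4 half-reaction
e_anode = -0.356     # V, Pb/PbSO4 half-reaction

e_cell = e_cathode - e_anode   # E_cell = E_cathode - E_anode
battery = 6 * e_cell           # cells in series: voltages add

print(round(e_cell, 3), round(battery, 2))  # 2.041 12.25
```

The six-cell count is the standard automotive configuration (an assumption here, not stated in the text); the arithmetic is just the "sum of the voltages of the individual cells" rule from this section.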
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Organic_Chemistry_Lab_Techniques_(Nichols)/01%3A_General_Techniques/1.03%3A_Transferring_Methods/1.3B%3A_Transferring_Methods_-_Liquids
When transferring liquids with volumes greater than \(5 \: \text{mL}\), they can be poured directly into vessels. Graduated cylinders and beakers have an indentation in their mouth, so they can be poured controllably as long as the two pieces of glass touch one another (Figure 1.17a). If pouring from an Erlenmeyer flask, or transferring a liquid into a vessel with a narrow mouth (e.g. a round-bottomed flask), a funnel should be used. Funnels can be securely held with a ring clamp (Figure 1.17b), or held with one hand while pouring with the other (Figure 1.17c). In order to determine a meaningful yield for a chemical reaction, it is important to have precise measurements of the limiting reactant. It is less important to be precise when manipulating a reagent that is in excess, especially if the reagent is in several times excess. A portion of the liquid measured by a graduated cylinder always clings to the glassware after pouring, meaning that the true volume dispensed is never equivalent to the markings on the cylinder. Therefore, graduated cylinders can be used for dispensing solvents or liquids that are in excess, while more accurate methods (e.g. mass, calibrated pipettes or syringes) should be used when dispensing or measuring the limiting reactant. A graduated cylinder may be used to dispense a limiting reactant if a subsequent mass will be determined to find the precise quantity actually dispensed. When determining the mass of a vessel on a balance, it's best not to rely on the mass of a cork ring (Figure 1.18a) or other support (e.g. the beaker in Figure 1.18b) staying constant: a cork ring might get wet, have reagents spilled on it, or have pieces of cork fall out, leading to changes in mass that cannot be accounted for. Beakers used to support flasks can get mixed up, and not every \(100\)-\(\text{mL}\) beaker has the same mass. 
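The weigh-by-difference bookkeeping described above reduces to simple arithmetic. A minimal sketch, where the masses and the molar mass are invented for illustration:

```python
# Quantity actually dispensed, determined by mass difference: weigh the
# vessel before and after adding the reagent, then convert mass to moles.
def moles_dispensed(mass_before_g, mass_after_g, molar_mass_g_per_mol):
    """Moles of reagent transferred into the vessel."""
    return (mass_after_g - mass_before_g) / molar_mass_g_per_mol

# Hypothetical example: a flask weighs 52.10 g empty and 54.30 g after
# adding a liquid reagent whose molar mass is 88.11 g/mol.
n = moles_dispensed(52.10, 54.30, 88.11)
print(round(n, 4))  # about 0.025 mol
```

Because the calculation uses only the mass difference, any liquid left clinging to the graduated cylinder used for rough dispensing does not affect the result.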
It is also best to transport vessels containing chemicals to the balance in sealed containers, so as to minimize vapors and prevent possible spillage during transport. Pasteur pipettes (or pipets) are the most commonly used tool for transferring small volumes of liquids (< \(5 \: \text{mL}\)) from one container to another. They are considered disposable, although some institutions may clean and reuse them if they have a method for preventing the fragile tips from breaking. Pasteur pipettes come in two sizes (Figure 1.19a): short (5.75") and long (9"). Each can hold about \(1.5 \: \text{mL}\) of liquid, although the volume delivered is dependent on the size of the dropper bulb. The general guideline that "\(1 \: \text{mL}\) is equivalent to 20 drops" does not always hold for Pasteur pipettes, and may be inconsistent between different pipettes. The drop ratio for a certain pipette and solution can be determined by counting drops until \(1 \: \text{mL}\) is accumulated in a graduated cylinder. Alternatively, a pipette can be roughly calibrated by withdrawing \(1 \: \text{mL}\) of liquid from a graduated cylinder and marking the volume line with a permanent marker (Figure 1.19b). To use a pipette, attach a dropper bulb and place the pipette tip into a liquid. Squeeze then release the bulb to create suction, which will cause liquid to withdraw into the pipette (Figures 1.20 a+b). Keeping the pipette vertical, bring it to the flask where it is to be transferred, and position the pipette tip below the joint of the flask but not touching the sides before depressing the bulb to deliver the material to the flask (Figure 1.20c). The bulb can be squeezed a few times afterward to "blow out" residual liquid from the pipette. If the receiving flask has a ground glass joint, the pipette tip should be below the joint while delivering so that liquid does not splash onto the joint, which sometimes causes pieces to freeze together when connected. 
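The drop-counting calibration described above is simple arithmetic: count drops into a graduated cylinder until 1 mL accumulates, then use the resulting ratio. A sketch, where the 27 drops/mL figure is invented (the text notes the "20 drops per mL" rule is unreliable for Pasteur pipettes):

```python
# Calibrate a pipette's drop ratio, then use it to estimate volumes.
def drops_per_ml(drops_counted, volume_collected_ml):
    """Drop ratio for a particular pipette/solution pair."""
    return drops_counted / volume_collected_ml

def volume_from_drops(n_drops, drop_ratio):
    """Approximate volume (mL) delivered by counting drops."""
    return n_drops / drop_ratio

ratio = drops_per_ml(27, 1.0)       # this pipette/solution: 27 drops per mL
vol = volume_from_drops(10, ratio)  # so 10 drops is roughly 0.37 mL
print(round(vol, 2))
```

The ratio must be redetermined for each pipette and solution, since drop size depends on the tip geometry and the liquid's surface tension.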
If the pipette is to be reused (for example, if it is the designated pipette for a reagent bottle), the pipette should be held so it does not touch the glassware, where it may become contaminated by other reagents in the flask (Figure 1.20d). When some precision is needed in dispensing small volumes of liquid (\(1\)-\(2 \: \text{mL}\)), a graduated cylinder is not ideal as the pouring action results in a significant loss of material. Calibrated plastic pipettes have markings at \(0.25 \: \text{mL}\) increments for a \(1 \: \text{mL}\) pipette, and are economical ways to dispense relatively accurate volumes. To use a calibrated plastic pipette, withdraw some of the liquid to be transferred into the bulb as usual (Figure 1.21b). Then squeeze the bulb just enough so that the liquid drains to the desired volume (Figure 1.21c), and maintain your position. While keeping the bulb depressed so the liquid still reads to the desired volume, quickly move the pipette to the transfer flask (Figure 1.21d), and depress the bulb further to deliver liquid to the flask (Figure 1.21e). When a high level of precision is needed while dispensing liquids, calibrated glass pipettes (volumetric or graduated) can be used. Volumetric pipettes have a glass bulb at the top of their neck, and are capable of dispensing only one certain volume (for example, the top pipette in Figure 1.22 is a \(10.00 \: \text{mL}\) pipette). Graduated pipettes (Mohr pipettes) have markings that allow them to deliver many volumes. Both pipettes need to be connected to a pipette bulb to provide suction. The volume markings on a graduated pipette indicate the volume delivered, which may seem a bit "backward" at first. For example, when a graduated pipette is held vertically, the highest marking is \(0.0 \: \text{mL}\), which indicates that no volume has been delivered when the pipette is still full. 
As liquid is drained into a vessel, the volume markings increase down the pipette, with the lowest marking often being the total capacity of the pipette (e.g. \(1.0 \: \text{mL}\) for a \(1.0 \: \text{mL}\) pipette). Graduated pipettes can deliver any volume of liquid made possible by differences in the volume markings. For example, a \(1.0 \: \text{mL}\) pipette could be used to deliver \(0.4 \: \text{mL}\) of liquid by: a) Withdrawing liquid to the \(0.0 \: \text{mL}\) mark, then draining and delivering liquid to the \(0.4 \: \text{mL}\) mark, or b) Withdrawing liquid to the \(0.2 \: \text{mL}\) mark and draining and delivering liquid to the \(0.6 \: \text{mL}\) mark (or any combination where the difference in volumes is \(0.4 \: \text{mL}\)). It is important to look carefully at the markings on a graduated pipette. Three different \(1 \: \text{mL}\) pipettes are shown in Figure 1.23a. The left-most pipette has markings every \(0.1 \: \text{mL}\) but no intermediary markings, so it is less precise than the other two pipettes in Figure 1.23a. The other two pipettes differ in the markings on the bottom. The lowest mark on the middle pipette is \(1 \: \text{mL}\), while the lowest mark on the right-most pipette is \(0.9 \: \text{mL}\). To deliver \(1.00 \: \text{mL}\) with the middle pipette, the liquid must be drained from the \(0.00 \: \text{mL}\) to the \(1.00 \: \text{mL}\) mark, and the liquid remaining below the final mark should be retained. To deliver \(1.00 \: \text{mL}\) with the right-most pipette, liquid must be drained from the \(0.00 \: \text{mL}\) mark completely out the tip, with the intent to deliver its total capacity. Pipettes are calibrated "to deliver" (TD) or "to contain" (TC) the marked volume. Pipettes are marked with T.C. or T.D. to differentiate between these two kinds, and to-deliver pipettes are also marked with a double ring near the top (Figure 1.23b). 
After draining a "to-deliver" pipette, the tip should be touched to the side of the flask to withdraw any clinging drops, and a small amount of residual liquid will remain in the tip. A "to-deliver" pipette is calibrated to deliver only the liquid that freely drains from the tip. However, after draining a "to-contain" pipette, the residual liquid in the tip should be "blown out" with pressure from a pipette bulb. "To-contain" pipettes may be useful for dispensing viscous liquids, where solvent can be used to wash out the entire contents. In this section are described methods on how to use a calibrated glass pipette. These methods are for use with a clean and dry pipette. If residual liquid is in the tip of the pipette from water or from previous use with a different solution, a fresh pipette should be used. Alternatively, if the reagent is not particularly expensive or reactive, the pipette can be "conditioned" with the reagent to remove residual liquid. To condition a pipette, rinse the pipette twice with a full volume of the reagent and collect the rinsings in a waste container. After two rinses, any residual liquid in the pipette will have been replaced by the reagent. When the reagent is then withdrawn into the pipette it will not be diluted or altered in any way. To use the pipette: (1) Place the pipette tip in the reagent bottle, squeeze the pipette bulb, and connect it to the pipette, then partially release your hand to create suction; do not let go completely, or liquid will be withdrawn forcibly and possibly into the bulb. (2) Remove the pipette bulb and place your finger atop the pipette. (3) Allow tiny amounts of air into the top of the pipette by wiggling your finger or slightly releasing pressure, and drain the liquid to the desired mark. (4) Holding the pipette tightly closed with your finger, bring it to the transfer flask and deliver the reagent to the desired mark. (5) Touch the pipette to the side of the container to dislodge the drip at the end of the pipette. 
When attempting to dispense highly volatile liquids (e.g. diethyl ether) via pipette, it is very common that liquid drips out of the pipette even without pressure from the dropper bulb! This occurs as the liquid evaporates into the pipette's headspace, and the additional vapor causes the headspace pressure to exceed the atmospheric pressure. To prevent a pipette from dripping, withdraw and expel the liquid from the pipette several times. Once the headspace is saturated with solvent vapors, the pipette will no longer drip. It may be difficult to manipulate a vessel of hot liquid with your bare hands. If pouring a hot liquid from a beaker, a silicone hot hand protector can be used (Figure 1.26a) or beaker tongs (Figures 1.26b+c). When pouring a hot liquid from an Erlenmeyer flask, hot hand protectors can also be used, but they do not hold the awkward shape of the flask very securely. Pouring from hot Erlenmeyer flasks can be accomplished using a makeshift holder. A long section of paper towel is folded several times in one direction to the thickness of approximately one inch (and secured with lab tape if desired, Figure 1.27a). This folded paper towel can be wrapped around the top of a beaker or Erlenmeyer flask and pinched to hold the flask (Figures 1.26d + 1.27b). When pouring hot liquid from an Erlenmeyer flask, the paper towel holder should be narrow enough that the towel does not reach the top of the flask. If it does, liquid will wick toward the paper as it is poured, thus weakening the holder and also removing possibly valuable solution (Figure 1.27c). When the paper towel is a distance away from the top of the flask, liquid can be poured from the flask without absorbing the liquid (Figure 1.27d). 
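The mark-difference arithmetic for graduated pipettes described earlier (delivering 0.4 mL either from the 0.0 mark to the 0.4 mark, or from the 0.2 mark to the 0.6 mark) can be sketched as:

```python
# Volume delivered by a graduated pipette is the difference between the
# final and initial markings, because the marks count volume *delivered*,
# not volume remaining in the pipette.
def delivered_volume(start_mark_ml, end_mark_ml):
    if end_mark_ml < start_mark_ml:
        raise ValueError("drain toward larger markings to deliver liquid")
    return end_mark_ml - start_mark_ml

v1 = delivered_volume(0.0, 0.4)   # drain from the 0.0 mark to the 0.4 mark
v2 = delivered_volume(0.2, 0.6)   # or from the 0.2 mark to the 0.6 mark
print(round(v1, 2), round(v2, 2))  # 0.4 0.4
```

Any start/end pair with the same difference delivers the same volume, which is why the text says "any combination where the difference in volumes is 0.4 mL" works.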
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Map%3A_Organic_Chemistry_(Bruice)/08%3A_Substitution_Reactions_of_Alkyl_Halides/8.08%3A_Competition_Between_(S_N2)_and_(S_N1)_Reactions
Having discussed the many factors that influence nucleophilic substitution and elimination reactions of alkyl halides, we must now consider the practical problem of predicting the most likely outcome when a given alkyl halide is reacted with a given nucleophile. As we noted earlier, several variables must be considered. In general, in order for an SN1 or E1 reaction to occur, the relevant carbocation intermediate must be relatively stable. Strong nucleophiles favor substitution, and strong bases, especially strong hindered bases (such as tert-butoxide), favor elimination. The nature of the halogen substituent on the alkyl halide is usually not very significant if it is Cl, Br or I. In cases where both SN2 and E2 reactions compete, chlorides generally give more elimination than do iodides, since the greater electronegativity of chlorine increases the acidity of beta-hydrogens. Indeed, although alkyl fluorides are relatively unreactive, when reactions with basic nucleophiles are forced, elimination occurs (note the high electronegativity of fluorine). The following table summarizes the expected outcome of alkyl halide reactions with nucleophiles. It is assumed that the alkyl halides have one or more beta-hydrogens, making elimination possible, and that low dielectric solvents (e.g. acetone, ethanol, tetrahydrofuran & ethyl acetate) are used. In the original table, cases where a high dielectric solvent would significantly influence the reaction were noted in red.
(Table: expected outcome versus alkyl group for three classes of nucleophile — weak bases that are good nucleophiles, e.g. \(I^-\), \(Br^-\), \(SCN^-\), \(N_3^-\), \(CH_3CO_2^-\), \(RS^-\), \(CN^-\); strong bases, e.g. \(HO^-\), \(RO^-\); and weak bases that are poor nucleophiles, e.g. \(H_2O\), \(ROH\), \(RSH\), \(R_3N\). The body of the table is not reproduced here.)
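The trends in this section can be caricatured as a decision procedure. The sketch below is a teaching simplification of my own, not the source's table: it ignores solvent, leaving group, and temperature, all of which the text notes can tip the balance.

```python
# Crude substitution-vs-elimination predictor for alkyl halides.
# substrate:   'methyl', 'primary', 'secondary', or 'tertiary'
# nucleophile: 'strong_base'        (e.g. HO-, RO-),
#              'weak_base_good_nuc' (e.g. I-, CN-, RS-),
#              'weak_nuc'           (e.g. H2O, ROH)
def predict(substrate, nucleophile):
    if substrate in ("methyl", "primary"):
        # SN2 dominates; bulky strong bases push primary halides toward E2
        if nucleophile == "strong_base":
            return "SN2 (E2 with bulky strong bases)"
        return "SN2"
    if substrate == "secondary":
        if nucleophile == "strong_base":
            return "E2"
        if nucleophile == "weak_base_good_nuc":
            return "SN2"
        return "SN1/E1 via carbocation"
    # tertiary: backside attack is blocked by sterics, so no SN2
    return "E2" if nucleophile == "strong_base" else "SN1/E1 via carbocation"

print(predict("tertiary", "weak_nuc"))           # SN1/E1 via carbocation
print(predict("primary", "weak_base_good_nuc"))  # SN2
```

The carbocation-stability requirement from the text appears here as the rule that SN1/E1 outcomes are only offered for secondary and tertiary substrates.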
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Reactions/Substitution_Reactions/Electrophilic_Substitution_Reactions/The_Nitration_of_Benzene
This page gives you the facts and a simple, uncluttered mechanism for the electrophilic substitution reaction between benzene and a mixture of concentrated nitric acid and concentrated sulfuric acid. Benzene is treated with a mixture of concentrated nitric acid and concentrated sulfuric acid at a temperature not exceeding 50°C. As the temperature increases, there is a greater chance of getting more than one nitro group, \(-NO_2\), substituted onto the ring. Nitrobenzene is formed: \[ C_6H_6 + HNO_3 \rightarrow C_6H_5NO_2 + H_2O\] The concentrated sulfuric acid is acting as a catalyst. The electrophile is the "nitronium ion" or the "nitryl cation", \(NO_2^+\). This is formed by reaction between the nitric acid and the sulfuric acid: \[ HNO_3 + 2H_2SO_4 \rightarrow NO_2^+ + 2HSO_4^- + H_3O^+\] Jim Clark 
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Reactions/Reactivity/Nucleophilic_Substitution_at_Tetrahedral_Carbon/NS9._Enolate_Nucleophiles
Enolates and related nucleophiles deserve a closer look because they are widely used and because they have their own issues of regiochemistry. Remember, an enolate is just the conjugate base of an enol. An enolate can also be thought of as the conjugate base of a related carbonyl. Because the enolate is a delocalized anion, it can be protonated in two different places to give two different conjugate acids. Enols typically are not seen because of a rapid equilibrium with that related carbonyl compound. As soon as an enol forms, if there is any way for it to transfer a proton to get to the carbonyl, it will do so. This kind of equilibrium is called "tautomerism," involving the transfer of one proton from one place to another within the molecule. The enol and its related carbonyl are referred to as "tautomers." "Tautomers" describes the relationship between these two molecules. Enamines are very similar to enolates, but with a nitrogen atom in place of the oxygen. Hence, they are amines instead of alcohols. Enamines, enolates and enols are all turbo-charged nucleophiles. The nucleophilic atom is the alpha carbon. Although that carbon can be thought of as a double-bonded carbon with no lone pair, that position is motivated to donate electrons because of pi donation from the oxygen (or nitrogen). One of the issues with these nucleophiles has to do with asymmetry about the carbonyl (or the would-be carbonyl). If one alpha position next to the carbonyl isn't the same as the other one, two possible enolates could result from removal of a proton. That means we potentially have two different nucleophiles from the same starting compound. Sometimes, mixtures of products result from enolate reactions. Nevertheless, enolates and enamines are very broadly used in the synthesis of important things like pharmaceuticals, precisely because they can be controlled so well. If you want the enolate on one side of the carbonyl -- we'll call it the more-substituted side -- then you can have it. 
If you want the enolate on the other side of the carbonyl -- the less-substituted side -- you can have that, instead. Let's think about what is different about those two sides of the carbonyl. One side is more substituted. It has more stuff on it. It's more crowded. We will focus on the formation of enolate ions. To get the proton off and turn a carbonyl compound into an enolate requires a base. Some control over which proton is removed might come from the choice of base. Maybe to get the proton off the more crowded position, you need a smaller base. Conversely, to get the proton exclusively from the least crowded position, and have very little chance of getting it from the more crowded spot, you could use a really big base. But there's something else about enolates that is apparent only when you look at the ions in one resonance form. Enolate ions can be thought of as alkenes, of course. Depending on which proton we remove, we get two different alkenes. There may be factors that make one of these two alkenes more stable. If so, there may be ways to form that one instead of the other. In general, more-substituted alkenes are more stable than less-substituted ones. The more-substituted alkene is formed via loss of the proton at the more crowded position. Forming a product based on its relative stability means relying on thermodynamics. One way to do that is to allow the deprotonation to happen reversibly. Given multiple chances, the more stable enolate will form eventually. On the other hand, if you intend to take the proton off the least substituted position, you don't want any reversibility. Given the chance, the wrong enolate will eventually form. This is a case in which we need kinetic control to get one product: we want the least-substituted enolate, and we depend on it forming more quickly than the other enolate. After all, once the other enolate forms, it is more stable; it isn't likely to come back. For the other product, we need thermodynamic control.
We depend on the eventual stability of the more-substituted enolate to pull the reaction through.
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Book3A_Bioinorganic_Chemistry_(Bertini_et_al.)/04%3A_Biological_and_Synthetic_Dioxygen_Carriers/4.10%3A_Hazards_of_Life_with_Dioxygen |
The binding of dioxygen is normally a reversible process: \[M + O_{2} \rightleftharpoons MO_{2} \tag{4.22}\] Under some circumstances, such as in the presence of added nucleophiles and protons, coordinated dioxygen is displaced as the superoxide anion radical, \(O_{2}^{-}\), leaving the metal center oxidized by one electron and unreactive to dioxygen: \[MO_{2} \rightleftharpoons M^{+} + O_{2}^{-} \tag{4.23}\] For hemoglobin there exists a flavoprotein reductase system, comprising a reduced pyridine nucleotide (e.g., NADH), cytochrome \(b_{5}\) reductase, and cytochrome \(b_{5}\), that reduces the ferric iron back to the ferrous state, so that it may coordinate dioxygen again. In addition, all aerobically respiring organisms and many air-tolerant anaerobes contain a protein, superoxide dismutase, that very efficiently catalyzes the dismutation of superoxide ion to dioxygen and hydrogen peroxide: \[2O_{2}^{-} + 2H^{+} \rightarrow O_{2} + H_{2}O_{2} \tag{4.24}\] However, the physiological effects of the superoxide moiety remain controversial. Finally, there is a third enzyme, the hemoprotein catalase, that converts the toxic hydrogen peroxide into water and dioxygen: \[2H_{2}O_{2} \rightarrow O_{2} + 2H_{2}O \tag{4.25}\] This topic is discussed further in Chapter 5.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_-_The_Central_Science_(Brown_et_al.)/07%3A_Periodic_Properties_of_the_Elements/7.06%3A_Metals_Nonmetals_and_Metalloids |
An element is the simplest form of matter that cannot be split into simpler substances or built from simpler substances by any ordinary chemical or physical method. There are 118 elements known to us, of which 92 are naturally occurring, while the rest have been prepared artificially. Elements are further classified into metals, non-metals, and metalloids based on their properties, which are correlated with their placement in the periodic table. With the exception of hydrogen, all elements that form positive ions by losing electrons during chemical reactions are called metals. Thus metals are electropositive elements with relatively low ionization energies. They are characterized by bright luster, hardness, the ability to resonate sound, malleability, and ductility, and they are excellent conductors of heat and electricity. Metals are solids under normal conditions except for mercury, which is a liquid. Metals are electropositive elements that generally form basic or amphoteric oxides with oxygen, and they lose electrons readily in chemical reactions: \[\ce{Na^0 \rightarrow Na^+ + e^{-}} \label{1.1} \] \[\ce{Mg^0 \rightarrow Mg^{2+} + 2e^{-}} \label{1.2} \] \[\ce{Al^0 \rightarrow Al^{3+} + 3e^{-}} \label{1.3} \] Compounds of metals with non-metals tend to be ionic in nature. Most metal oxides are basic oxides and dissolve in water to form metal hydroxides: \[\ce{Na2O(s) + H2O(l) \rightarrow 2NaOH(aq)}\label{1.4} \] \[\ce{CaO(s) + H2O(l) \rightarrow Ca(OH)2(aq)} \label{1.5} \] Metal oxides exhibit their basic chemical nature by reacting with acids to form metal salts and water: \[\ce{MgO(s) + 2HCl(aq) \rightarrow MgCl2(aq) + H2O(l)} \label{1.6} \] \[\ce{NiO(s) + H2SO4(aq) \rightarrow NiSO4(aq) + H2O(l)} \label{1.7} \] What is the chemical formula for aluminum oxide? Al has a 3+ charge and the oxide ion is \(O^{2-}\), thus \(Al_2O_3\). Would you expect it to be solid, liquid, or gas at room temperature?
Oxides of metals are characteristically solid at room temperature. Write the balanced chemical equation for the reaction of aluminum oxide with nitric acid: \[\ce{Al2O3(s) + 6HNO3(aq) \rightarrow 2Al(NO3)3(aq) + 3H2O(l)} \nonumber \] Elements that tend to gain electrons to form anions during chemical reactions are called non-metals. These are electronegative elements with high ionization energies. They are non-lustrous, brittle, and poor conductors of heat and electricity (except graphite). Non-metals can be gases, liquids, or solids. Non-metals have a tendency to gain or share electrons with other atoms. They are electronegative in character. Nonmetals, when reacting with metals, tend to gain electrons (typically attaining a noble-gas electron configuration) and become anions: \[\ce{3Br2(l) + 2Al(s) \rightarrow 2AlBr3(s)} \nonumber \] Compounds composed entirely of nonmetals are covalent substances. They generally form acidic or neutral oxides with oxygen that dissolve in water to form acids: \[\ce{CO2(g) + H2O(l)} \rightarrow \underset{\text{carbonic acid}}{\ce{H2CO3(aq)}} \nonumber \] As you may know, carbonated water is slightly acidic (carbonic acid). Nonmetal oxides can combine with bases to form salts. \[\ce{CO2(g) + 2NaOH(aq) \rightarrow Na2CO3(aq) + H2O(l)} \nonumber \] Metalloids have properties intermediate between the metals and nonmetals, and they are useful in the semiconductor industry. Metalloids are all solid at room temperature. They can form alloys with other metals. Some metalloids, such as silicon and germanium, can act as electrical conductors under the right conditions, thus they are called semiconductors. Silicon, for example, appears lustrous but is neither malleable nor ductile (it is brittle, a characteristic of some nonmetals). It is a much poorer conductor of heat and electricity than the metals. The physical properties of metalloids tend to be metallic, but their chemical properties tend to be non-metallic.
The oxidation number of a metalloid can range from +5 to −2, depending on the group in which it is located. Metallic character is strongest for the elements in the leftmost part of the periodic table and tends to decrease as we move to the right in any period (nonmetallic character increases with increasing electronegativity and ionization energy values). Within any group of elements (columns), the metallic character increases from top to bottom (the electronegativity and ionization energy values generally decrease as we move down a group). This general trend is not necessarily observed with the transition metals.
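The charge-crossing used above to get \(Al_2O_3\) from \(Al^{3+}\) and \(O^{2-}\) can be sketched as a small helper. This is an illustration, not part of the original text; the function name is invented, and it assumes the ion charges are supplied as positive magnitudes.

```python
from math import gcd

def ionic_formula_subscripts(cation_charge: int, anion_charge: int) -> tuple[int, int]:
    """Return (cation_subscript, anion_subscript) for a neutral ionic compound.

    Crossing the charges and reducing by their greatest common divisor
    balances the total positive and negative charge.
    """
    g = gcd(cation_charge, anion_charge)
    return anion_charge // g, cation_charge // g

# Al(3+) with O(2-) gives Al2O3; Ca(2+) with O(2-) reduces to CaO.
print(ionic_formula_subscripts(3, 2))  # (2, 3)
print(ionic_formula_subscripts(2, 2))  # (1, 1)
```

The gcd reduction is what turns, e.g., a naive Ca2O2 into the empirical formula CaO.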
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/01%3A_Matter-_Its_Properties_And_Measurement/1.6%3A_Uncertainties_in_Scientific_Measurements |
All measurements have a degree of uncertainty regardless of precision and accuracy. This is caused by two factors: the limitations of the measuring instrument (systematic error) and the skill of the experimenter making the measurements (random error). The graduated buret in Figure \(\Page {1}\) contains a certain amount of water (with yellow dye) to be measured. The amount of water is somewhere between 19 ml and 20 ml according to the marked lines. By checking to see where the bottom of the meniscus lies, referencing the ten smaller lines, the amount of water lies between 19.8 ml and 20 ml. The next step is to estimate the uncertainty between 19.8 ml and 20 ml. Making an approximate guess, the level is less than 20 ml but greater than 19.8 ml. We then report that the measured amount is approximately 19.9 ml. The graduated cylinder itself may be distorted such that the graduation marks contain inaccuracies, providing readings slightly different from the actual volume of liquid present. When we use tools meant for measurement, we assume that they are correct and accurate; however, measuring tools are not always right. In fact, they have errors that naturally occur, called systematic errors. Systematic errors tend to be consistent in magnitude and/or direction. If the magnitude and direction of the error is known, accuracy can be improved by additive or proportional corrections. An additive correction involves adding or subtracting a constant adjustment factor to each measurement; a proportional correction involves multiplying the measurement(s) by a constant. Random errors: Sometimes called human error, random error is determined by the experimenter's skill or ability to perform the experiment and read scientific measurements. These errors are random since the results yielded may be too high or too low. Often random error determines the precision of the experiment or limits the precision. For example, if we were to time a revolution of a steadily rotating turntable, the random error would be the reaction time.
Our reaction time would vary due to a delay in starting (an underestimate of the actual result) or a delay in stopping (an overestimate of the actual result). Unlike systematic errors, random errors vary in magnitude and direction. It is possible to calculate the average of a set of measured positions, however, and that average is likely to be more accurate than most of the measurements. Measurements may be accurate, meaning that the measured value is the same as the true value; they may be precise, meaning that multiple measurements give nearly identical values (i.e., reproducible results); they may be both accurate and precise; or they may be neither accurate nor precise. The goal of scientists is to obtain measured values that are both accurate and precise. Figure \(\Page {1}\) helps to illustrate the difference between precision (small expected difference between multiple measurements) and accuracy (difference between the result and a known value). Suppose, for example, that the mass of a sample of gold was measured on one balance and found to be 1.896 g. On a different balance, the same sample was found to have a mass of 1.125 g. Which was correct? Careful and repeated measurements, including measurements on a calibrated third balance, showed the sample to have a mass of 1.895 g. The three masses obtained from balance 2, for example, were 1.125 g, 1.158 g, and 1.067 g. Whereas the measurements obtained from balances 1 and 3 are reproducible (precise) and are close to the accepted value (accurate), those obtained from balance 2 are neither. Even if the measurements obtained from balance 2 had been precise (if, for example, they had been 1.125, 1.124, and 1.125), they still would not have been accurate. We can assess the precision of a set of measurements by calculating the average deviation of the measurements as follows: 1. Calculate the average value of all the measurements. 2.
Calculate the deviation of each measurement, which is the absolute value of the difference between each measurement and the average value: \[deviation = |\text{measurement − average}| \label{1.6.2}\] where | | means absolute value (i.e., convert any negative number to a positive number). 3. Add all the deviations and divide by the number of measurements to obtain the average deviation: \[ \text{average deviation} = \dfrac{\text{sum of deviations} }{\text{number of measurements}} \label{Eq1} \] For the three masses obtained from balance 2, the average value is \[ \text{average} = \dfrac{\text{sum of measurements} }{\text{number of measurements}} = {1.125 \;g + 1.158 \;g + 1.067\; g \over 3} = 1.117 \;g \] The deviations are 0.008 g, 0.041 g, and 0.050 g, so the average deviation is \[ {0.008 \:g + 0.041 \;g + 0.050 \;g \over 3} = 0.033\; g \] The precision of this set of measurements is therefore \[ {0.033\;g \over 1.117\;g} \times 100 = 3.0 \% \] When a series of measurements is precise but not accurate, the error is usually systematic. Systematic errors can be caused by faulty instrumentation or faulty technique. The following archery targets show marks that represent the results of four sets of measurements.

a. The expected mass of a 2-carat diamond is 2 × 200.0 mg = 400.0 mg. The average of the three measurements is 457.3 mg, about 13% greater than the true mass. These measurements are not particularly accurate. The deviations of the measurements are 7.3 mg, 1.7 mg, and 5.7 mg, respectively, which give an average deviation of 4.9 mg and a precision of \[ {4.9\; mg \over 457.3\; mg } \times 100 = 1.1 \% \nonumber \] These measurements are rather precise. b. The average values of the measurements are 93.2% zinc and 2.8% copper versus the true values of 97.6% zinc and 2.4% copper. Thus these measurements are not very accurate, with errors of −4.5% and +17% for zinc and copper, respectively.
(The sum of the measured zinc and copper contents is only 96.0% rather than 100%, which tells us that either there is a significant error in one or both measurements or some other element is present.) The deviations of the measurements are 0.0%, 0.3%, and 0.3% for both zinc and copper, which give an average deviation of 0.2% for both metals. We might therefore conclude that the measurements are equally precise, but that is not the case. Recall that precision is the average deviation divided by the average value times 100. Because the average value of the zinc measurements is much greater than the average value of the copper measurements (93.2% versus 2.8%), the copper measurements are much less precise. \[ \text {precision (Zn)} = \dfrac {0.2 \%}{93.2 \% } \times 100 = 0.2 \% \nonumber \] \[ \text {precision (Cu)} = \dfrac {0.2 \%}{2.8 \% } \times 100 = 7 \% \nonumber \] No measurement is free from error. Error is introduced by (1) the limitations of instruments and measuring devices (such as the size of the divisions on a graduated cylinder) and (2) the imperfection of human senses. Although errors in calculations can be enormous, they do not contribute to uncertainty in measurements. Chemists describe the estimated degree of error in a measurement as the uncertainty of the measurement, and they are careful to report all measured values using only significant figures, numbers that describe the value without exaggerating the degree to which it is known to be accurate. Chemists report as significant all numbers known with absolute certainty, plus one more digit that is understood to contain some uncertainty. The uncertainty in the final digit is usually assumed to be ±1, unless otherwise stated. 
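The average-deviation procedure above can be sketched numerically. This is an illustrative script (not from the text), using the three balance 2 masses quoted in the worked example:

```python
masses = [1.125, 1.158, 1.067]  # grams, the three balance 2 measurements

# Step 1: average value of all the measurements.
average = sum(masses) / len(masses)

# Step 2: deviation of each measurement from the average.
deviations = [abs(m - average) for m in masses]

# Step 3: average deviation, then precision as a percentage of the average.
average_deviation = sum(deviations) / len(deviations)
precision = average_deviation / average * 100

print(f"average = {average:.3f} g")                      # 1.117 g
print(f"average deviation = {average_deviation:.3f} g")  # 0.033 g
print(f"precision = {precision:.1f} %")                  # 3.0 %
```

The same three lines of arithmetic reproduce the diamond example as well (average deviation 4.9 mg, precision 1.1%).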
The following rules have been developed for counting the number of significant figures in a measurement or calculation: 1. All nonzero digits are significant. 2. Zeros between nonzero digits are significant. 3. Leading zeros, which serve only to locate the decimal point, are not significant. 4. Trailing zeros are significant if the number contains a decimal point. 5. Exact numbers, such as integers obtained by counting objects or quantities that arise from definitions (e.g., \(1\; ft = 12\; in\)), have an infinite number of significant figures. An effective method for determining the number of significant figures is to convert the measured or calculated value to scientific notation because any zero used as a placeholder is eliminated in the conversion. When 0.0800 is expressed in scientific notation as \(8.00 \times 10^{-2}\), it is more readily apparent that the number has three significant figures rather than five; in scientific notation, the number preceding the exponential (i.e., N) determines the number of significant figures. Give the number of significant figures in each. Identify the rule for each. Which measuring apparatus would you use to deliver 9.7 mL of water as accurately as possible? To how many significant figures can you measure that volume of water with the apparatus you selected? Use the 10 mL graduated cylinder, which will be accurate to two significant figures. Mathematical operations are carried out using all the digits given and then rounding the final result to the correct number of significant figures to obtain a reasonable answer. This method avoids compounding inaccuracies by successively rounding intermediate calculations. After you complete a calculation, you may have to round the last significant figure up or down depending on the value of the digit that follows it. If the digit is 5 or greater, then the number is rounded up. For example, when rounded to three significant figures, 5.215 is 5.22, whereas 5.213 is 5.21. Similarly, to three significant figures, 5.005 kg becomes 5.01 kg, whereas 5.004 kg becomes 5.00 kg. The procedures for dealing with significant figures are different for addition and subtraction versus multiplication and division.
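The counting rules above can be sketched as a string-based helper. This is an illustration, not part of the text; it assumes the value is written with an explicit decimal point, since bare trailing zeros in an integer (e.g., "100") are ambiguous:

```python
def significant_figures(value: str) -> int:
    """Count significant figures in a decimal string such as '0.0800'.

    Leading zeros are placeholders and are dropped; every remaining
    digit, including trailing zeros after the decimal point, counts.
    """
    digits = value.lstrip("+-").replace(".", "").lstrip("0")
    return len(digits)

print(significant_figures("0.0800"))  # 3  (same count as 8.00 x 10^-2)
print(significant_figures("1.125"))   # 4
print(significant_figures("0.005"))   # 1
```

Stripping the leading zeros mirrors the scientific-notation trick: only the mantissa digits survive.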
When we add or subtract measured values, the value with the fewest significant figures to the right of the decimal point determines the number of significant figures to the right of the decimal point in the answer. Drawing a vertical line to the right of the column corresponding to the smallest number of significant figures is a simple method of determining the proper number of significant figures for the answer: \[ \begin{array}{r} 3240.7\;\;\;\; \\ +\; 21.2|36 \\ \hline 3261.9|36 \end{array} \nonumber \] The line indicates that the digits 3 and 6 are not significant in the answer. These digits are not significant because the values for the corresponding places in the other measurement are unknown (3240.7??). Consequently, the answer is expressed as 3261.9, with five significant figures. Again, numbers greater than or equal to 5 are rounded up. If our second number in the calculation had been 21.256, then we would have rounded 3261.956 to 3262.0 to complete our calculation. When we multiply or divide measured values, the answer is limited to the smallest number of significant figures in the calculation; thus, 42.9 × 8.323 = 357.057 = 357. Although the second number in the calculation has four significant figures, we are justified in reporting the answer to only three significant figures because the first number in the calculation has only three significant figures. An exception to this rule occurs when multiplying a number by an integer, as in 12.793 × 12. In this case, the number of significant figures in the answer is determined by the number 12.793, because we are in essence adding 12.793 to itself 12 times. The correct answer is therefore 153.516, an increase of one significant figure, not 153.52. When you use a calculator, it is important to remember that the number shown in the calculator display often shows more digits than can be reported as significant in your answer. When a measurement reported as 5.0 kg is divided by 3.0 L, for example, the display may show 1.666666667 as the answer.
We are justified in reporting the answer to only two significant figures, giving 1.7 kg/L as the answer, with the last digit understood to have some uncertainty. In calculations involving several steps, slightly different answers can be obtained depending on how rounding is handled, specifically whether rounding is performed on intermediate results or postponed until the last step. Rounding to the correct number of significant figures should always be performed at the end of a series of calculations because rounding of intermediate results can sometimes cause the final answer to be significantly in error. In practice, chemists generally work with a calculator and carry all digits forward through subsequent calculations. When working on paper, however, we often want to minimize the number of digits we have to write out. Because successive rounding can compound inaccuracies, intermediate roundings need to be handled correctly. When working on paper, always round an intermediate result so as to retain at least one more digit than can be justified and carry this number into the next step in the calculation. The final answer is then rounded to the correct number of significant figures at the very end. In the worked examples in this text, we will often show the results of intermediate steps in a calculation. In doing so, we will show the results to only the correct number of significant figures allowed for that step, in effect treating each step as a separate calculation. This procedure is intended to reinforce the rules for determining the number of significant figures, but in some cases it may give a final answer that differs in the last digit from that obtained using a calculator, where all digits are carried through to the last step. Significant Figures: Complete the calculations and report your answers using the correct number of significant figures.
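The addition and multiplication rules can be checked numerically. In the sketch below (an illustration, not the text's own procedure), `round` enforces the decimal-place rule for addition and the `g` format string enforces a significant-figure count for multiplication:

```python
# Addition/subtraction: the result keeps the fewest decimal places (here one).
total = round(3240.7 + 21.236, 1)
print(total)  # 3261.9

# Multiplication/division: the result keeps the fewest significant figures.
# 42.9 (3 sig figs) x 8.323 (4 sig figs) -> report 3 sig figs.
product = float(f"{42.9 * 8.323:.3g}")
print(product)  # 357.0
```

Note that `round` is applied only once, at the end, which matches the advice to postpone rounding until the final step.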
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/DeVoes_Thermodynamics_and_Chemistry/13%3A_The_Phase_Rule_and_Phase_Diagrams/13.04%3A_Chapter_13_Problems |
An underlined problem number or problem-part letter indicates that the numerical answer appears in Appendix I.
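Most of the problems below apply the Gibbs phase rule; as a reminder (a standard result, not restated in the original problem list):

```latex
% Gibbs phase rule: F = number of degrees of freedom,
% C = number of independent components, P = number of coexisting phases.
% Each independent reaction or stoichiometric constraint among the
% s species reduces the component count: C = s - r.
F = C - P + 2
```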
13.1 Consider a single-phase system that is a gaseous mixture of \(\mathrm{N}_{2}, \mathrm{H}_{2}\), and \(\mathrm{NH}_{3}\). For each of the following cases, find the number of degrees of freedom and give an example of the independent intensive variables that could be used to specify the equilibrium state, apart from the total amount of gas. (a) There is no reaction. (b) The reaction \(\mathrm{N}_{2}(\mathrm{~g})+3 \mathrm{H}_{2}(\mathrm{~g}) \rightarrow 2 \mathrm{NH}_{3}(\mathrm{~g})\) is at equilibrium. (c) The reaction is at equilibrium and the system is prepared from \(\mathrm{NH}_{3}\) only. 13.2 How many components has a mixture of water and deuterium oxide in which the equilibrium \(\mathrm{H}_{2} \mathrm{O}+\mathrm{D}_{2} \mathrm{O} \rightleftharpoons 2\) HDO exists? 13.3 Consider a system containing only \(\mathrm{NH}_{4} \mathrm{Cl}(\mathrm{s}), \mathrm{NH}_{3}(\mathrm{~g})\), and \(\mathrm{HCl}(\mathrm{g}) .\) Assume that the equilibrium \(\mathrm{NH}_{4} \mathrm{Cl}(\mathrm{s}) \rightleftharpoons \mathrm{NH}_{3}(\mathrm{~g})+\mathrm{HCl}(\mathrm{g})\) exists. (a) Suppose you prepare the system by placing solid \(\mathrm{NH}_{4} \mathrm{Cl}\) in an evacuated flask and heating to \(400 \mathrm{~K}\). Use the phase rule to decide whether you can vary the pressure while both phases remain in equilibrium at \(400 \mathrm{~K}\). (b) According to the phase rule, if the system is not prepared as described in part (a) could you vary the pressure while both phases remain in equilibrium at \(400 \mathrm{~K}\) ? Explain. (c) Rationalize your conclusions for these two cases on the basis of the thermodynamic equilibrium constant. Assume that the gas phase is an ideal gas mixture and use the approximate expression \(K=p_{\mathrm{NH}_{3}} p_{\mathrm{HCl}} /\left(p^{\circ}\right)^{2}\). 13.4 Consider the lime-kiln process \(\mathrm{CaCO}_{3}(\mathrm{~s}) \rightarrow \mathrm{CaO}(\mathrm{s})+\mathrm{CO}_{2}(\mathrm{~g})\). 
Find the number of intensive variables that can be varied independently in the equilibrium system under the following conditions: (a) The system is prepared by placing calcium carbonate, calcium oxide, and carbon dioxide in a container. (b) The system is prepared from calcium carbonate only. (c) The temperature is fixed at \(1000 \mathrm{~K}\). 13.5 What are the values of \(C\) and \(F\) in systems consisting of solid \(\mathrm{AgCl}\) in equilibrium with an aqueous phase containing \(\mathrm{H}_{2} \mathrm{O}, \mathrm{Ag}^{+}(\mathrm{aq}), \mathrm{Cl}^{-}(\mathrm{aq}), \mathrm{Na}^{+}(\mathrm{aq})\), and \(\mathrm{NO}_{3}^{-}(\mathrm{aq})\) prepared in the following ways? Give examples of intensive variables that could be varied independently. (a) The system is prepared by equilibrating excess solid \(\mathrm{AgCl}\) with an aqueous solution of \(\mathrm{NaNO}_{3}\). (b) The system is prepared by mixing aqueous solutions of \(\mathrm{AgNO}_{3}\) and \(\mathrm{NaCl}\) in arbitrary proportions; some solid \(\mathrm{AgCl}\) forms by precipitation. 13.6 How many degrees of freedom has a system consisting of solid \(\mathrm{NaCl}\) in equilibrium with an aqueous phase containing \(\mathrm{H}_{2} \mathrm{O}, \mathrm{Na}^{+}(\mathrm{aq}), \mathrm{Cl}^{-}(\mathrm{aq}), \mathrm{H}^{+}(\mathrm{aq})\), and \(\mathrm{OH}^{-}(\mathrm{aq})\) ? Would it be possible to independently vary \(T, p\), and \(m_{\mathrm{OH}^{-}}\)? If so, explain how you could do this. 13.7 Consult the phase diagram shown in Fig. \(13.4\) on page 430. Suppose the system contains \(36.0 \mathrm{~g}\) (2.00 mol) \(\mathrm{H}_{2} \mathrm{O}\) and \(58.4 \mathrm{~g}\) (1.00 mol) \(\mathrm{NaCl}\) at \(25^{\circ} \mathrm{C}\) and 1 bar. (a) Describe the phases present in the equilibrium system and their masses. (b) Describe the changes that occur at constant pressure if the system is placed in thermal contact with a heat reservoir at \(-30^{\circ} \mathrm{C}\). 
(c) Describe the changes that occur if the temperature is raised from \(25^{\circ} \mathrm{C}\) to \(120^{\circ} \mathrm{C}\) at constant pressure. (d) Describe the system after \(200 \mathrm{~g} \mathrm{H}_{2} \mathrm{O}\) is added at \(25^{\circ} \mathrm{C}\). Table 13.1 Aqueous solubilities of sodium sulfate decahydrate and anhydrous sodium sulfate \({ }^{a}\) \begin{tabular}{lccc}
\hline \(\mathrm{Na}_{2} \mathrm{SO}_{4} \cdot 10 \mathrm{H}_{2} \mathrm{O}\) & & \multicolumn{2}{c}{\(\mathrm{Na}_{2} \mathrm{SO}_{4}\)} \\
\cline{1-2} \cline{3-4} \(t /{ }^{\circ} \mathrm{C}\) & \(x_{\mathrm{B}}\) & \(t /{ }^{\circ} \mathrm{C}\) & \(x_{\mathrm{B}}\) \\
\hline 10 & \(0.011\) & 40 & \(0.058\) \\
15 & \(0.016\) & 50 & \(0.056\) \\
20 & \(0.024\) & & \\
25 & \(0.034\) & & \\
30 & \(0.048\) & & \\
\hline\({ }^{a}\) Ref. [59], p. 179-180. & & \\
& & &
\end{tabular} 13.8 Use the following information to draw a temperature-composition phase diagram for the binary system of \(\mathrm{H}_{2} \mathrm{O}(\mathrm{A})\) and \(\mathrm{Na}_{2} \mathrm{SO}_{4}(\mathrm{~B})\) at \(p=1\) bar, confining \(t\) to the range \(-20\) to \(50^{\circ} \mathrm{C}\) and \(z_{\mathrm{B}}\) to the range \(0-0.2\). The solid decahydrate, \(\mathrm{Na}_{2} \mathrm{SO}_{4} \cdot 10 \mathrm{H}_{2} \mathrm{O}\), is stable below \(32.4^{\circ} \mathrm{C}\). The anhydrous salt, \(\mathrm{Na}_{2} \mathrm{SO}_{4}\), is stable above this temperature. There is a peritectic point for these two solids and the solution at \(x_{\mathrm{B}}=0.059\) and \(t=32.4^{\circ} \mathrm{C}\). There is a eutectic point for ice, \(\mathrm{Na}_{2} \mathrm{SO}_{4} \cdot 10 \mathrm{H}_{2} \mathrm{O}\), and the solution at \(x_{\mathrm{B}}=0.006\) and \(t=-1.3^{\circ} \mathrm{C}\). Table \(13.1\) gives the temperature dependence of the solubilities of the ionic solids. Table 13.2 Data for Problem 13.9. Temperatures of saturated solutions of aqueous iron(III) chloride at \(p=1\) bar \(\left(\mathrm{A}=\mathrm{FeCl}_{3}, \mathrm{~B}=\mathrm{H}_{2} \mathrm{O}\right)^{a}\) \begin{tabular}{crcccr}
\hline\(x_{\mathrm{A}}\) & \(t /{ }^{\circ} \mathrm{C}\) & \(x_{\mathrm{A}}\) & \(t /{ }^{\circ} \mathrm{C}\) & \(x_{\mathrm{A}}\) & \(t /{ }^{\circ} \mathrm{C}\) \\
\hline \(0.000\) & \(0.0\) & \(0.119\) & \(35.0\) & \(0.286\) & \(56.0\) \\
\(0.020\) & \(-10.0\) & \(0.143\) & \(37.0\) & \(0.289\) & \(55.0\) \\
\(0.032\) & \(-20.5\) & \(0.157\) & \(36.0\) & \(0.293\) & \(60.0\) \\
\(0.037\) & \(-27.5\) & \(0.173\) & \(33.0\) & \(0.301\) & \(69.0\) \\
\(0.045\) & \(-40.0\) & \(0.183\) & \(30.0\) & \(0.318\) & \(72.5\) \\
\(0.052\) & \(-55.0\) & \(0.195\) & \(27.4\) & \(0.333\) & \(73.5\) \\
\(0.053\) & \(-41.0\) & \(0.213\) & \(32.0\) & \(0.343\) & \(72.5\) \\
\(0.056\) & \(-27.0\) & \(0.222\) & \(32.5\) & \(0.358\) & \(70.0\) \\
\(0.076\) & \(0.0\) & \(0.232\) & \(30.0\) & \(0.369\) & \(66.0\) \\
\(0.083\) & \(10.0\) & \(0.238\) & \(35.0\) & \(0.369\) & \(80.0\) \\
\(0.093\) & \(20.0\) & \(0.259\) & \(50.0\) & \(0.373\) & \(100.0\) \\
\(0.106\) & \(30.0\) & \(0.277\) & \(55.0\) & & \\
\hline
\end{tabular}
\({ }^{a}\) Data from Ref. [59], page 193. 13.9 Iron(III) chloride forms various solid hydrates, all of which melt congruently. Table \(13.2\) on the preceding page lists the temperatures \(t\) of aqueous solutions of various compositions that are saturated with respect to a solid phase. (a) Use these data to construct a \(t-z_{\mathrm{B}}\) phase diagram for the binary system of \(\mathrm{FeCl}_{3}\) (A) and \(\mathrm{H}_{2} \mathrm{O}\) (B). Identify the formula and melting point of each hydrate. Hint: derive a formula for the mole ratio \(n_{\mathrm{B}} / n_{\mathrm{A}}\) as a function of \(x_{\mathrm{A}}\) in a binary mixture. (b) For the following conditions, determine the phase or phases present at equilibrium and the composition of each.
1. \(t=-70.0^{\circ} \mathrm{C}\) and \(z_{\mathrm{A}}=0.100\)
2. \(t=50.0^{\circ} \mathrm{C}\) and \(z_{\mathrm{A}}=0.275\)
13.10 Figure \(13.19\) is a temperature-composition phase diagram for the binary system of water (A) and phenol (B) at 1 bar. These liquids are partially miscible below \(67^{\circ} \mathrm{C}\). Phenol is more dense than water, so the layer with the higher mole fraction of phenol is the bottom layer. Suppose you place \(4.0 \mathrm{~mol}\) of \(\mathrm{H}_{2} \mathrm{O}\) and \(1.0 \mathrm{~mol}\) of phenol in a beaker at \(30^{\circ} \mathrm{C}\) and gently stir to allow the layers to equilibrate. (a) What are the compositions of the equilibrated top and bottom layers? (b) Find the amount of each component in the bottom layer. (c) As you gradually stir more phenol into the beaker, maintaining the temperature at \(30^{\circ} \mathrm{C}\), what changes occur in the volumes and compositions of the two layers? Assuming that one layer eventually disappears, what additional amount of phenol is needed to cause this to happen? 13.11 The standard boiling point of propane is \(-41.8{ }^{\circ} \mathrm{C}\) and that of \(n\)-butane is \(-0.2{ }^{\circ} \mathrm{C}\). Table \(13.3\) on the next page lists vapor pressure data for the pure liquids. Assume that the liquid mixtures obey Raoult's law. (a) Calculate the compositions, \(x_{\mathrm{A}}\), of the liquid mixtures with boiling points of \(-10.0^{\circ} \mathrm{C}\), \(-20.0^{\circ} \mathrm{C}\), and \(-30.0^{\circ} \mathrm{C}\) at a pressure of \(1 \mathrm{bar}\). (b) Calculate the compositions, \(y_{\mathrm{A}}\), of the equilibrium vapor at these three temperatures. Table \(13.3\) Saturation vapor pressures of propane (A) and \(n\)-butane (B) \begin{tabular}{ccc}
\hline\(t /{ }^{\circ} \mathrm{C}\) & \(p_{\mathrm{A}}^{*} /\) bar & \(p_{\mathrm{B}}^{*} / \mathrm{bar}\) \\
\hline\(-10.0\) & \(3.360\) & \(0.678\) \\
\(-20.0\) & \(2.380\) & \(0.441\) \\
\(-30.0\) & \(1.633\) & \(0.275\) \\
\hline
\end{tabular} (c) Plot the temperature-composition phase diagram at \(p=1\) bar using these data, and label the areas appropriately. (d) Suppose a system containing \(10.0 \mathrm{~mol}\) propane and \(10.0 \mathrm{~mol} n\)-butane is brought to a pressure of 1 bar and a temperature of \(-25^{\circ} \mathrm{C}\). From your phase diagram, estimate the compositions and amounts of both phases. Table 13.4 Liquid and gas compositions in the two-phase system of 2-propanol (A) and benzene at \(45^{\circ} \mathrm{C}^{a}\) \begin{tabular}{llcccc}
\hline\(x_{\mathrm{A}}\) & \(y_{\mathrm{A}}\) & \(p / \mathrm{kPa}\) & \(x_{\mathrm{A}}\) & \(y_{\mathrm{A}}\) & \(p / \mathrm{kPa}\) \\
\hline 0 & 0 & \(29.89\) & \(0.5504\) & \(0.3692\) & \(35.32\) \\
\(0.0472\) & \(0.1467\) & \(33.66\) & \(0.6198\) & \(0.3951\) & \(34.58\) \\
\(0.0980\) & \(0.2066\) & \(35.21\) & \(0.7096\) & \(0.4378\) & \(33.02\) \\
\(0.2047\) & \(0.2663\) & \(36.27\) & \(0.8073\) & \(0.5107\) & \(30.28\) \\
\(0.2960\) & \(0.2953\) & \(36.45\) & \(0.9120\) & \(0.6658\) & \(25.24\) \\
\(0.3862\) & \(0.3211\) & \(36.29\) & \(0.9655\) & \(0.8252\) & \(21.30\) \\
\(0.4753\) & \(0.3463\) & \(35.93\) & \(1.0000\) & \(1.0000\) & \(18.14\) \\
\hline
\end{tabular}
\({ }^{a}\) Ref. [24].

13.12 Use the data in Table \(13.4\) to draw a pressure-composition phase diagram for the 2-propanol-benzene system at \(45^{\circ} \mathrm{C}\). Label the axes and each area.

Table 13.5 Liquid and gas compositions in the two-phase system of acetone (A) and chloroform at \(35.2{ }^{\circ} \mathrm{C}^{a}\) \begin{tabular}{lllccc}
\hline\(x_{\mathrm{A}}\) & \(y_{\mathrm{A}}\) & \(p / \mathrm{kPa}\) & \(x_{\mathrm{A}}\) & \(y_{\mathrm{A}}\) & \(p / \mathrm{kPa}\) \\
\hline 0 & 0 & \(39.08\) & \(0.634\) & \(0.727\) & \(36.29\) \\
\(0.083\) & \(0.046\) & \(37.34\) & \(0.703\) & \(0.806\) & \(38.09\) \\
\(0.200\) & \(0.143\) & \(34.92\) & \(0.815\) & \(0.896\) & \(40.97\) \\
\(0.337\) & \(0.317\) & \(33.22\) & \(0.877\) & \(0.936\) & \(42.62\) \\
\(0.413\) & \(0.437\) & \(33.12\) & \(0.941\) & \(0.972\) & \(44.32\) \\
\(0.486\) & \(0.534\) & \(33.70\) & \(1.000\) & \(1.000\) & \(45.93\) \\
\(0.577\) & \(0.662\) & \(35.09\) & & & \\
\hline
\end{tabular}
\({ }^{a}\) Ref. [179], p. \(286 .\)
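As a numerical check on parts (a) and (b) of Problem 13.11, the Raoult's-law bubble-point relations can be evaluated directly: at total pressure \(p\), \(x_{\mathrm{A}} = (p - p_{\mathrm{B}}^{*})/(p_{\mathrm{A}}^{*} - p_{\mathrm{B}}^{*})\) and \(y_{\mathrm{A}} = x_{\mathrm{A}} p_{\mathrm{A}}^{*}/p\). The Python sketch below is not part of the original text; the function name is our own, and the vapor pressures are the Table 13.3 values.

```python
# Bubble-point compositions for an ideal propane (A) / n-butane (B) mixture,
# assuming Raoult's law:  p = x_A*pA_star + (1 - x_A)*pB_star

P_TOTAL = 1.000  # bar

vapor_pressures = {  # t/degC: (pA*/bar, pB*/bar), from Table 13.3
    -10.0: (3.360, 0.678),
    -20.0: (2.380, 0.441),
    -30.0: (1.633, 0.275),
}

def bubble_point_composition(pA_star, pB_star, p=P_TOTAL):
    """Liquid (x_A) and equilibrium-vapor (y_A) mole fractions of A."""
    x_A = (p - pB_star) / (pA_star - pB_star)
    y_A = x_A * pA_star / p
    return x_A, y_A

for t, (pA, pB) in sorted(vapor_pressures.items()):
    x_A, y_A = bubble_point_composition(pA, pB)
    print(f"t = {t:6.1f} degC: x_A = {x_A:.3f}, y_A = {y_A:.3f}")
```

At \(-10.0^{\circ} \mathrm{C}\) this gives \(x_{\mathrm{A}} \approx 0.120\) and \(y_{\mathrm{A}} \approx 0.403\); the vapor is much richer in the more volatile propane, as expected.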
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.07%3A_Liquid-Liquid_Extractions |
A liquid–liquid extraction is an important separation technique for environmental, clinical, and industrial laboratories. A standard environmental analytical method illustrates the importance of liquid–liquid extractions. Municipal water departments routinely monitor public water supplies for trihalomethanes (CHCl\(_3\), CHBrCl\(_2\), CHBr\(_2\)Cl, and CHBr\(_3\)) because they are known or suspected carcinogens. Before their analysis by gas chromatography, trihalomethanes are separated from their aqueous matrix using a liquid–liquid extraction with pentane [“The Analysis of Trihalomethanes in Drinking Water by Liquid Extraction,” EPA Method 501.2 (EPA 500-Series, November 1979)]. The Environmental Protection Agency (EPA) also publishes two additional methods for trihalomethanes. Method 501.1 and Method 501.3 use a purge-and-trap to collect the trihalomethanes prior to a gas chromatographic analysis with a halide-specific detector (Method 501.1) or a mass spectrometer as the detector (Method 501.3). You will find more details about gas chromatography, including detectors, in a later chapter. In a simple liquid–liquid extraction the solute partitions itself between two immiscible phases. One phase usually is an aqueous solvent and the other phase is an organic solvent, such as the pentane used to extract trihalomethanes from water. Because the phases are immiscible they form two layers, with the denser phase on the bottom. The solute initially is present in one of the two phases; after the extraction it is present in both phases. The extraction efficiency—that is, the percentage of solute that moves from one phase to the other—is determined by the equilibrium constant for the solute's partitioning between the phases and by any other side reactions that involve the solute. Examples of other reactions that affect extraction efficiency include acid–base reactions and complexation reactions. As we learned earlier in this chapter, a solute's partitioning between two phases is described by a partition coefficient, \(K_\text{D}\).
If we extract a solute from an aqueous phase into an organic phase \[S_{a q} \rightleftharpoons S_{o r g} \nonumber\] then the partition coefficient is \[K_{\mathrm{D}}=\frac{\left[S_{org}\right]}{\left[S_{a q}\right]} \nonumber\] A large value for \(K_\text{D}\) indicates that extraction of solute into the organic phase is favorable. To evaluate an extraction's efficiency we must consider the solute's total concentration in each phase, which we define as a distribution ratio, \(D\). \[D=\frac{\left[S_{o r g}\right]_{\text { total }}}{\left[S_{a q}\right]_{\text { total }}} \nonumber\] The partition coefficient and the distribution ratio are identical if the solute has only one chemical form in each phase; however, if the solute exists in more than one chemical form in either phase, then \(K_\text{D}\) and \(D\) usually have different values. For example, if the solute exists in two forms in the aqueous phase, \(A\) and \(B\), only one of which, \(A\), partitions between the two phases, then \[D=\frac{\left[S_{o r g}\right]_{A}}{\left[S_{a q}\right]_{A}+\left[S_{a q}\right]_{B}} \leq K_{\mathrm{D}}=\frac{\left[S_{o r g}\right]_{A}}{\left[S_{a q}\right]_{A}} \nonumber\] This distinction between \(K_\text{D}\) and \(D\) is important. The partition coefficient is a thermodynamic equilibrium constant and has a fixed value for the solute's partitioning between the two phases. The distribution ratio's value, however, changes with solution conditions if the relative amounts of \(A\) and \(B\) change. If we know the solute's equilibrium reactions within each phase and between the two phases, we can derive an algebraic relationship between \(K_\text{D}\) and \(D\). In a simple liquid–liquid extraction, the only reaction that affects the extraction efficiency is the solute's partitioning between the two phases (Figure 7.7.1
). In this case the distribution ratio and the partition coefficient are equal. \[D=\frac{\left[S_{o r g}\right]_{\text { total }}}{\left[S_{aq}\right]_{\text { total }}} = K_\text{D} = \frac {[S_{org}]} {[S_{aq}]} \label{7.1}\] Let's assume the solute initially is present in the aqueous phase and that we wish to extract it into the organic phase. A conservation of mass requires that the moles of solute initially present in the aqueous phase equal the combined moles of solute in the aqueous phase and the organic phase after the extraction. \[\left(\operatorname{mol} \ S_{a q}\right)_{0}=\left(\operatorname{mol} \ S_{a q}\right)_{1}+\left(\operatorname{mol} \ S_{org}\right)_{1} \label{7.2}\] where the subscripts indicate the extraction number with 0 representing the system before the extraction and 1 the system following the first extraction. After the extraction, the solute's concentration in the aqueous phase is \[\left[S_{a q}\right]_{1}=\frac{\left(\operatorname{mol} \ S_{a q}\right)_{1}}{V_{a q}} \label{7.3}\] and its concentration in the organic phase is \[\left[S_{o r g}\right]_{1}=\frac{\left(\operatorname{mol} \ S_{o r g}\right)_{1}}{V_{o r g}} \label{7.4}\] where \(V_{aq}\) and \(V_{org}\) are the volumes of the aqueous phase and the organic phase.
Solving Equation \ref{7.2} for \((\operatorname{mol} \ S_{org})_1\) and substituting into Equation \ref{7.4} leaves us with \[\left[S_{o r g}\right]_{1} = \frac{\left(\operatorname{mol} \ S_{a q}\right)_{0}-\left(\operatorname{mol} \ S_{a q}\right)_{1}}{V_{o r g}} \label{7.5}\] Substituting Equation \ref{7.3} and Equation \ref{7.5} into Equation \ref{7.1} gives \[D = \frac {\frac {(\text{mol }S_{aq})_0-(\text{mol }S_{aq})_1} {V_{org}}} {\frac {(\text{mol }S_{aq})_1} {V_{aq}}} = \frac{\left(\operatorname{mol} \ S_{a q}\right)_{0} \times V_{a q}-\left(\operatorname{mol} \ S_{a q}\right)_{1} \times V_{a q}}{\left(\operatorname{mol} \ S_{a q}\right)_{1} \times V_{o r g}} \nonumber\] Rearranging and solving for the fraction of solute that remains in the aqueous phase after one extraction, \((q_{aq})_1\), gives \[\left(q_{aq}\right)_{1} = \frac{\left(\operatorname{mol} \ S_{aq}\right)_{1}}{\left(\operatorname{mol} \ S_{a q}\right)_{0}} = \frac{V_{aq}}{D V_{o r g}+V_{a q}} \label{7.6}\] The fraction present in the organic phase after one extraction, \((q_{org})_1\), is \[\left(q_{o r g}\right)_{1}=\frac{\left(\operatorname{mol} S_{o r g}\right)_{1}}{\left(\operatorname{mol} S_{a q}\right)_{0}}=1-\left(q_{a q}\right)_{1}=\frac{D V_{o r g}}{D V_{o r g}+V_{a q}} \nonumber\] Example 7.7.1
shows how we can use Equation \ref{7.6} to calculate the efficiency of a simple liquid-liquid extraction. A solute has a \(K_\text{D}\) between water and chloroform of 5.00. Suppose we extract a 50.00-mL sample of a 0.050 M aqueous solution of the solute using 15.00 mL of chloroform. (a) What is the separation's extraction efficiency? (b) What volume of chloroform do we need if we wish to extract 99.9% of the solute? For a simple liquid–liquid extraction the distribution ratio, \(D\), and the partition coefficient, \(K_\text{D}\), are identical. (a) The fraction of solute that remains in the aqueous phase after the extraction is given by Equation \ref{7.6}. \[\left(q_{aq}\right)_{1}=\frac{V_{a q}}{D V_{org}+V_{a q}}=\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}=0.400 \nonumber\] The fraction of solute in the organic phase is 1–0.400, or 0.600. Extraction efficiency is the percentage of solute that moves into the extracting phase; thus, the extraction efficiency is 60.0%. (b) To extract 99.9% of the solute \((q_{aq})_1\) must be 0.001. Solving Equation \ref{7.6} for \(V_{org}\), and making appropriate substitutions for \((q_{aq})_1\) and \(D\) gives \[V_{o r g}=\frac{V_{a q}-\left(q_{a q}\right)_{1} V_{a q}}{\left(q_{a q}\right)_{1} D}=\frac{50.00 \ \mathrm{mL}-(0.001)(50.00 \ \mathrm{mL})}{(0.001)(5.00)}=9990 \ \mathrm{mL} \nonumber\] This is a large volume of chloroform. Clearly, a single extraction is not reasonable under these conditions. In Example 7.7.1
, a single extraction provides an extraction efficiency of only 60%. If we carry out a second extraction, the fraction of solute remaining in the aqueous phase, \((q_{aq})_2\), is \[\left(q_{a q}\right)_{2}=\frac{\left(\operatorname{mol} \ S_{a q}\right)_{2}}{\left(\operatorname{mol} \ S_{a q}\right)_{1}}=\frac{V_{a q}}{D V_{org}+V_{a q}} \nonumber\] If \(V_{aq}\) and \(V_{org}\) are the same for both extractions, then the cumulative fraction of solute that remains in the aqueous layer after two extractions, \((Q_{aq})_2\), is the product of \((q_{aq})_1\) and \((q_{aq})_2\), or \[\left(Q_{aq}\right)_{2}=\frac{\left(\operatorname{mol} \ S_{aq}\right)_{2}}{\left(\operatorname{mol} \ S_{aq}\right)_{0}}=\left(q_{a q}\right)_{1} \times\left(q_{a q}\right)_{2}=\left(\frac{V_{a q}}{D V_{o r g}+V_{a q}}\right)^{2} \nonumber\] In general, for a series of identical extractions, the fraction of analyte that remains in the aqueous phase after the last extraction is \[\left(Q_{a q}\right)_{n}=\left(\frac{V_{a q}}{D V_{o r g}+V_{a q}}\right)^{n} \label{7.7}\] For the extraction described in Example 7.7.1
, determine (a) the extraction efficiency for two identical extractions and for three identical extractions; and (b) the number of extractions required to ensure that we extract 99.9% of the solute. (a) The fraction of solute remaining in the aqueous phase after two extractions and three extractions is \[\left(Q_{aq}\right)_{2}=\left(\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}\right)^{2}=0.160 \nonumber\] \[\left(Q_{a q}\right)_{3}=\left(\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}\right)^{3}=0.0640 \nonumber\] The extraction efficiencies are 84.0% for two extractions and 93.6% for three extractions. (b) To determine the minimum number of extractions for an efficiency of 99.9%, we set \((Q_{aq})_n\) to 0.001 and solve for \(n\) using Equation \ref{7.7}. \[0.001=\left(\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}\right)^{n}=(0.400)^{n} \nonumber\] Taking the log of both sides and solving for \(n\) \[\begin{aligned} \log (0.001) &=n \log (0.400) \\ n &=7.54 \end{aligned} \nonumber\] we find that a minimum of eight extractions is necessary. The last two examples provide us with an important observation—for any extraction efficiency, we need less solvent if we complete several extractions using smaller portions of solvent instead of one extraction using a larger volume of solvent. For the conditions in Examples 7.7.1 and 7.7.2, an extraction efficiency of 99.9% requires one extraction with 9990 mL of chloroform, or 120 mL when using eight 15-mL portions of chloroform. Although extraction efficiency increases dramatically with the first few extractions, the effect diminishes quickly as we increase the number of extractions (Figure 7.7.2
). In most cases there is little improvement in extraction efficiency after five or six extractions. For the conditions in Example 7.7.2
, we reach an extraction efficiency of 99% after five extractions and need three additional extractions to obtain the extra 0.9% increase in extraction efficiency. To plan a liquid–liquid extraction we need to know the solute’s distribution ratio between the two phases. One approach is to carry out the extraction on a solution that contains a known amount of solute. After the extraction, we isolate the organic phase and allow it to evaporate, leaving behind the solute. In one such experiment, 1.235 g of a solute with a molar mass of 117.3 g/mol is dissolved in 10.00 mL of water. After extracting with 5.00 mL of toluene, 0.889 g of the solute is recovered in the organic phase. (a) What is the solute’s distribution ratio between water and toluene? (b) If we extract 20.00 mL of an aqueous solution that contains the solute using 10.00 mL of toluene, what is the extraction efficiency? (c) How many extractions will we need to recover 99.9% of the solute? (a) The solute’s distribution ratio between water and toluene is \[D=\frac{\left[S_{o r g}\right]}{\left[S_{a q}\right]}=\frac{0.889 \ \mathrm{g} \times \frac{1 \ \mathrm{mol}}{117.3 \ \mathrm{g}} \times \frac{1}{0.00500 \ \mathrm{L}}}{(1.235 \ \mathrm{g}-0.889 \ \mathrm{g}) \times \frac{1 \ \mathrm{mol}}{117.3 \ \mathrm{g}} \times \frac{1}{0.01000 \ \mathrm{L}}}=5.14 \nonumber\] (b) The fraction of solute remaining in the aqueous phase after one extraction is \[\left(q_{a q}\right)_{1}=\frac{V_{a q}}{D V_{org}+V_{a q}}=\frac{20.00 \ \mathrm{mL}}{(5.14)(10.00 \ \mathrm{mL})+20.00 \ \mathrm{mL}}=0.280 \nonumber\] The extraction efficiency, therefore, is 72.0%. (c) To extract 99.9% of the solute requires \[\left(Q_{aq}\right)_{n}=0.001=\left(\frac{20.00 \ \mathrm{mL}}{(5.14)(10.00 \ \mathrm{mL})+20.00 \ \mathrm{mL}}\right)^{n}=(0.280)^{n} \nonumber\] \[\begin{aligned} \log (0.001) &=n \log (0.280) \\ n &=5.4 \end{aligned} \nonumber\] a minimum of six extractions. 
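The single- and multiple-extraction relationships in Equations \ref{7.6} and \ref{7.7} are easy to verify numerically. The following Python sketch is not part of the original text (the function names are our own); it reproduces the numbers from the worked examples above.

```python
import math

def fraction_remaining(D, V_aq, V_org, n=1):
    """Fraction of solute left in the aqueous phase after n identical
    extractions (Equation 7.7; n = 1 recovers Equation 7.6)."""
    return (V_aq / (D * V_org + V_aq)) ** n

def extractions_needed(D, V_aq, V_org, target_remaining):
    """Smallest number of identical extractions that leaves no more than
    target_remaining of the solute in the aqueous phase."""
    q = V_aq / (D * V_org + V_aq)
    return math.ceil(math.log(target_remaining) / math.log(q))

# Conditions from the examples: D = K_D = 5.00, 50.00 mL water, 15.00 mL CHCl3
q1 = fraction_remaining(5.00, 50.00, 15.00)        # 0.400 -> 60.0% efficiency
Q2 = fraction_remaining(5.00, 50.00, 15.00, n=2)   # 0.160 -> 84.0% efficiency
n = extractions_needed(5.00, 50.00, 15.00, 0.001)  # 8 extractions for 99.9%
print(q1, Q2, n)
```

The same two helpers answer both questions in the examples: one call gives the cumulative fraction remaining for any \(n\), and the logarithm step that yields \(n = 7.54\) is rounded up with `math.ceil`.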
As we see in Equation \ref{7.1}, in a simple liquid–liquid extraction the distribution ratio and the partition coefficient are identical. As a result, the distribution ratio does not depend on the composition of the aqueous phase or the organic phase. A change in the pH of the aqueous phase, for example, will not affect the solute's extraction efficiency when \(K_\text{D}\) and \(D\) have the same value. If the solute participates in one or more additional equilibrium reactions within a phase, then the distribution ratio and the partition coefficient may not be the same. For example, Figure 7.7.3
shows the equilibrium reactions that affect the extraction of the weak acid, HA, by an organic phase in which ionic species are not soluble. In this case the partition coefficient and the distribution ratio are \[K_{\mathrm{D}}=\frac{\left[\mathrm{HA}_{org}\right]}{\left[\mathrm{HA}_{a q}\right]} \label{7.8}\] \[D=\frac{\left[\mathrm{HA}_{org}\right]_{\text { total }}}{\left[\mathrm{HA}_{a q}\right]_{\text { total }}} =\frac{\left[\mathrm{HA}_{org}\right]}{\left[\mathrm{HA}_{a q}\right]+\left[\mathrm{A}_{a q}^{-}\right]} \label{7.9}\] Because the position of an acid–base equilibrium depends on pH, the distribution ratio, \(D\), is pH-dependent. To derive an equation for \(D\) that shows this dependence, we begin with the acid dissociation constant for HA. \[K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}_{\mathrm{aq}}^{+}\right]\left[\mathrm{A}_{\mathrm{aq}}^{-}\right]}{\left[\mathrm{HA}_{\mathrm{aq}}\right]} \label{7.10}\] Solving Equation \ref{7.10} for the concentration of \(\text{A}^-\) in the aqueous phase \[\left[\mathrm{A}_{a q}^{-}\right]=\frac{K_{\mathrm{a}} \times\left[\mathrm{HA}_{a q}\right]}{\left[\mathrm{H}_{3} \mathrm{O}_{a q}^{+}\right]} \nonumber\] and substituting into Equation \ref{7.9} gives \[D = \frac {[\text{HA}_{org}]} {[\text{HA}_{aq}] + \frac {K_a \times [\text{HA}_{aq}]}{[\text{H}_3\text{O}_{aq}^+]}} \nonumber\] Factoring \([\text{HA}_{aq}]\) from the denominator, replacing \([\text{HA}_{org}]/[\text{HA}_{aq}]\) with \(K_\text{D}\) (Equation \ref{7.8}), and simplifying leaves us with the following relationship between the distribution ratio, \(D\), and the pH of the aqueous solution. \[D=\frac{K_{\mathrm{D}}\left[\mathrm{H}_{3} \mathrm{O}_{aq}^{+}\right]}{\left[\mathrm{H}_{3} \mathrm{O}_{aq}^{+}\right]+K_{a}} \label{7.11}\] An acidic solute, HA, has a \(K_a\) of \(1.00 \times 10^{-5}\) and a \(K_\text{D}\) between water and hexane of 3.00. Calculate the extraction efficiency if we extract a 50.00 mL sample of a 0.025 M aqueous solution of HA, buffered to a pH of 3.00, with 50.00 mL of hexane. Repeat for pH levels of 5.00 and 7.00.
When the pH is 3.00, [\(\text{H}_3\text{O}_{aq}^+\)] is \(1.0 \times 10^{-3}\) and the distribution ratio is \[D=\frac{(3.00)\left(1.0 \times 10^{-3}\right)}{1.0 \times 10^{-3}+1.00 \times 10^{-5}}=2.97 \nonumber\] The fraction of solute that remains in the aqueous phase is \[\left(q_{aq}\right)_{1}=\frac{50.00 \ \mathrm{mL}}{(2.97)(50.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}=0.252 \nonumber\] The extraction efficiency, therefore, is almost 75%. The same calculation at a pH of 5.00 gives the extraction efficiency as 60%. At a pH of 7.00 the extraction efficiency is just 3%. The extraction efficiency in Example 7.7.3
is greater at more acidic pH levels because HA is the solute's predominant form in the aqueous phase. At a more basic pH, where A\(^-\) is the solute's predominant form, the extraction efficiency is smaller. A graph of extraction efficiency versus pH is shown in Figure 7.7.4
. Note that extraction efficiency essentially is independent of pH for pH levels more acidic than HA's p\(K_a\), and that it is essentially zero for pH levels more basic than HA's p\(K_a\). The greatest change in extraction efficiency occurs at pH levels where both HA and A\(^-\) are predominant species. The ladder diagram for HA along the graph's \(x\)-axis helps illustrate this effect. The liquid–liquid extraction of the weak base B is governed by the following equilibrium reactions: \[\begin{array}{c}{\mathrm{B}(a q) \rightleftharpoons \mathrm{B}(org) \quad K_{D}=5.00} \\ {\mathrm{B}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{OH}^{-}(a q)+\mathrm{HB}^{+}(a q) \quad K_{b}=1.0 \times 10^{-4}}\end{array} \nonumber\] Derive an equation for the distribution ratio, \(D\), and calculate the extraction efficiency if 25.0 mL of a 0.025 M solution of B, buffered to a pH of 9.00, is extracted with 50.0 mL of the organic solvent. Because the weak base exists in two forms, only one of which extracts into the organic phase, the partition coefficient, \(K_\text{D}\), and the distribution ratio, \(D\), are not identical.
\[K_{\mathrm{D}}=\frac{\left[\mathrm{B}_{org}\right]}{\left[\mathrm{B}_{aq}\right]} \nonumber\] \[D = \frac {[\text{B}_{org}]_\text{total}} {[\text{B}_{aq}]_\text{total}} = \frac {[\text{B}_{org}]} {[\text{B}_{aq}] + [\text{HB}_{aq}^+]} \nonumber\] Using the expression for the weak base \[K_{\mathrm{b}}=\frac{\left[\mathrm{OH}_{a q}^{-}\right]\left[\mathrm{HB}_{a q}^{+}\right]}{\left[\mathrm{B}_{a q}\right]} \nonumber\] we solve for the concentration of HB\(^+\) and substitute back into the equation for \(D\), obtaining \[D = \frac {[\text{B}_{org}]} {[\text{B}_{aq}] + \frac {K_b \times [\text{B}_{aq}]} {[\text{OH}_{aq}^-]}} = \frac {[\text{B}_{org}]} {[\text{B}_{aq}]\left(1+\frac {K_b} {[\text{OH}_{aq}^-]} \right)} =\frac{K_{D}\left[\mathrm{OH}_{a q}^{-}\right]}{\left[\mathrm{OH}_{a q}^{-}\right]+K_{\mathrm{b}}} \nonumber\] At a pH of 9.0, the [OH\(^-\)] is \(1.0 \times 10^{-5}\) M and the distribution ratio has a value of \[D=\frac{K_{D}\left[\mathrm{OH}_{a q}^{-}\right]}{\left[\mathrm{OH}_{aq}^{-}\right]+K_{\mathrm{b}}}=\frac{(5.00)\left(1.0 \times 10^{-5}\right)}{1.0 \times 10^{-5}+1.0 \times 10^{-4}}=0.455 \nonumber\] After one extraction, the fraction of B remaining in the aqueous phase is \[\left(q_{aq}\right)_{1}=\frac{25.00 \ \mathrm{mL}}{(0.455)(50.00 \ \mathrm{mL})+25.00 \ \mathrm{mL}}=0.524 \nonumber\] The extraction efficiency, therefore, is 47.6%. At a pH of 9, most of the weak base is present as HB\(^+\), which explains why the overall extraction efficiency is so poor. One important application of a liquid–liquid extraction is the selective extraction of metal ions using an organic ligand. Unfortunately, many organic ligands are not very soluble in water or undergo hydrolysis or oxidation reactions in aqueous solutions. For these reasons the ligand is added to the organic solvent instead of the aqueous phase. Figure 7.7.5
shows the relevant equilibrium reactions (and equilibrium constants) for the extraction of the metal ion M\(^{n+}\) by the ligand HL, including the ligand's extraction into the aqueous phase (\(K_\text{D,HL}\)), the ligand's acid dissociation reaction (\(K_a\)), the formation of the metal–ligand complex (\(\beta_n\)), and the complex's extraction into the organic phase (\(K_\text{D,c}\)). If the ligand's concentration is much greater than the metal ion's concentration, then the distribution ratio is \[D=\frac{\beta_{n} K_{\mathrm{D}, c}\left(K_{a}\right)^{n}\left(C_{\mathrm{HL}}\right)^{n}}{\left(K_{\mathrm{D}, \mathrm{HL}}\right)^{n}\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{n}+\beta_{n}\left(K_{\mathrm{a}}\right)^{n}\left(C_{\mathrm{HL}}\right)^{n}} \label{7.12}\] where \(C_\text{HL}\) is the ligand's initial concentration in the organic phase. As shown in Example 7.7.4
, the extraction efficiency for metal ions shows a marked pH dependency. A liquid–liquid extraction of the divalent metal ion, M\(^{2+}\), uses the scheme outlined in Figure 7.7.5
. The partition coefficients for the ligand, \(K_\text{D,HL}\), and for the metal–ligand complex, \(K_\text{D,c}\), are \(1.0 \times 10^4\) and \(7.0 \times 10^4\), respectively. The ligand's acid dissociation constant, \(K_a\), is \(5.0 \times 10^{-5}\), and the formation constant for the metal–ligand complex, \(\beta_2\), is \(2.5 \times 10^{16}\). What is the extraction efficiency if we extract 100.0 mL of a \(1.0 \times 10^{-6}\) M aqueous solution of M\(^{2+}\), buffered to a pH of 1.00, with 10.00 mL of an organic solvent that is 0.1 mM in the chelating agent? Repeat the calculation at a pH of 3.00. When the pH is 1.00 the distribution ratio is \[D=\frac{\left(2.5 \times 10^{16}\right)\left(7.0 \times 10^{4}\right)\left(5.0 \times 10^{-5}\right)^{2}\left(1.0 \times 10^{-4}\right)^{2}}{\left(1.0 \times 10^{4}\right)^{2}(0.10)^{2}+\left(2.5 \times 10^{16}\right)\left(5.0 \times 10^{-5}\right)^{2}\left(1.0 \times 10^{-4}\right)^{2}} \nonumber\] or a \(D\) of 0.0438. The fraction of metal ion that remains in the aqueous phase is \[\left(q_{aq}\right)_{1}=\frac{100.0 \ \mathrm{mL}}{(0.0438)(10.00 \ \mathrm{mL})+100.0 \ \mathrm{mL}}=0.996 \nonumber\] At a pH of 1.00, we extract only 0.40% of the metal into the organic phase. Changing the pH to 3.00, however, increases the extraction efficiency to 97.8%. Figure 7.7.6
shows how the pH of the aqueous phase affects the extraction efficiency for M\(^{2+}\). One advantage of using a ligand to extract a metal ion is the high degree of selectivity that it brings to a liquid–liquid extraction. As seen in Figure 7.7.6
, a divalent metal ion’s extraction efficiency increases from approximately 0% to 100% over a range of 2 pH units. Because a ligand’s ability to form a metal–ligand complex varies substantially from metal ion to metal ion, significant selectivity is possible if we carefully control the pH. Table 7.7.1
shows the minimum pH for extracting 99% of a metal ion from an aqueous solution using an equal volume of 4 mM dithizone in CCl\(_4\). Using Table 7.7.1
, explain how we can separate the metal ions in an aqueous mixture of Cu\(^{2+}\), Cd\(^{2+}\), and Ni\(^{2+}\) by extracting with an equal volume of dithizone in CCl\(_4\). From Table 7.7.1
, a quantitative separation of Cu\(^{2+}\) from Cd\(^{2+}\) and from Ni\(^{2+}\) is possible if we acidify the aqueous phase to a pH of less than 1. This pH is greater than the minimum pH for extracting Cu\(^{2+}\) and significantly less than the minimum pH for extracting either Cd\(^{2+}\) or Ni\(^{2+}\). After the extraction of Cu\(^{2+}\) is complete, we shift the pH of the aqueous phase to 4.0, which allows us to extract Cd\(^{2+}\) while leaving Ni\(^{2+}\) in the aqueous phase.
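The pH dependence in Equation \ref{7.11}, together with the single-extraction efficiency, can be checked with a few lines of Python. The sketch below is our own (the helper names are not from the text); it reproduces the weak-acid results above for pH 3.00, 5.00, and 7.00.

```python
def weak_acid_D(K_D, K_a, pH):
    """Distribution ratio for a weak acid HA (Equation 7.11)."""
    h = 10.0 ** (-pH)  # [H3O+] in the aqueous phase
    return K_D * h / (h + K_a)

def efficiency(D, V_aq, V_org):
    """Percent of solute moved to the organic phase in one extraction."""
    return 100.0 * D * V_org / (D * V_org + V_aq)

# Conditions from the weak-acid example: K_D = 3.00, K_a = 1.00e-5,
# equal 50.00-mL aqueous and hexane volumes
for pH in (3.00, 5.00, 7.00):
    D = weak_acid_D(3.00, 1.00e-5, pH)
    print(f"pH {pH:.2f}: D = {D:.3f}, efficiency = {efficiency(D, 50.00, 50.00):.1f}%")
```

Running the loop gives efficiencies of about 74.8%, 60.0%, and 2.9%, matching the almost 75%, 60%, and 3% quoted above and tracing the sigmoidal efficiency-versus-pH curve described for Figure 7.7.4.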
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.02%3A_Designing_a_Sampling_Plan |
A sampling plan must support the goals of an analysis. For example, a material scientist interested in characterizing a metal's surface chemistry is more likely to choose a freshly exposed surface, created by cleaving the sample under vacuum, than a surface previously exposed to the atmosphere. In a qualitative analysis, a sample need not be identical to the original substance provided there is sufficient analyte present to ensure its detection. In fact, if the goal of an analysis is to identify a trace-level component, it may be desirable to discriminate against major components when collecting samples. For an interesting discussion of the importance of a sampling plan, see Burger, J. et al., “Do Scientists and Fishermen Collect the Same Size Fish? Possible Implications for Exposure Assessment,” 34–41. For a quantitative analysis, the sample's composition must represent accurately the target population, a requirement that necessitates a careful sampling plan. Among the issues we need to consider are questions such as from where within the target population we should collect samples and what type of sample we should collect. A sampling error occurs whenever a sample's composition is not identical to its target population. If the target population is homogeneous, then we can collect individual samples without giving consideration to where we collect each sample. Unfortunately, in most situations the target population is heterogeneous and attention to where we collect samples is important. For example, due to settling, a medication available as an oral suspension may have a higher concentration of its active ingredients at the bottom of the container. The composition of a clinical sample, such as blood or urine, may depend on when it is collected. A patient's blood glucose level, for instance, will change in response to eating and exercise. Other target populations show both a spatial and a temporal heterogeneity. The concentration of dissolved O\(_2\) in a lake is heterogeneous due both to a change in seasons and to point sources of pollution.
The composition of a homogeneous target population is the same regardless of where we sample, when we sample, or the size of our sample. For a heterogeneous target population, the composition is not the same at different locations, at different times, or for different sample sizes. If the analyte's distribution within the target population is a concern, then our sampling plan must take this into account. When feasible, homogenizing the target population is a simple solution, although this often is impracticable. In addition, homogenizing a sample destroys information about the analyte's spatial or temporal distribution within the target population, information that may be of importance. The ideal sampling plan provides an unbiased estimate of the target population's properties. A random sampling is the easiest way to satisfy this requirement [Cohen, R. D., 902–903]. Despite its apparent simplicity, a truly random sample is difficult to collect. Haphazard sampling, in which samples are collected without a sampling plan, is not random and may reflect an analyst's unintentional biases. Here is a simple method to ensure that we collect random samples. First, we divide the target population into equal units and assign to each unit a unique number. Then, we use a random number table to select the units to sample. Example 7.2.1
provides an illustrative example. A random number table that you can use to design a sampling plan is included in the text's appendices. To analyze a polymer's tensile strength, individual samples of the polymer are held between two clamps and stretched. To evaluate a production lot, the manufacturer's sampling plan calls for collecting ten 1 cm \(\times\) 1 cm samples from a 100 cm \(\times\) 100 cm polymer sheet. Explain how we can use a random number table to ensure that we collect these samples at random. As shown by the grid below, we divide the polymer sheet into 10 000 1 cm \(\times\) 1 cm squares, each identified by its row number and its column number, with numbers running from 0 to 99. For example, the square in row 98 and column 1 is identified by the four digits 9801. To select ten squares at random, we enter the random number table at an arbitrary point and let the entry's last four digits represent the row number and the column number for the first sample. We then move through the table in a predetermined fashion, selecting random numbers until we have 10 samples. For our first sample, let's use the second entry in the third column of the table, which is 76831. The first sample, therefore, is row 68 and column 31. If we proceed by moving down the third column, then the 10 samples are as follows: 66558, … When we collect a random sample we make no assumptions about the target population, which makes this the least biased approach to sampling. On the other hand, a random sample often requires more time and expense than other sampling strategies because we need to collect a greater number of samples to ensure that we adequately sample the target population, particularly when that population is heterogeneous [Borgman, L. E.; Quimby, W. F. in Keith, L. H., ed., American Chemical Society: Washington, D. C., 1988, 25–43]. The opposite of random sampling is selective, or judgmental sampling, in which we use prior information about the target population to help guide our selection of samples.
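In practice, a pseudorandom number generator can stand in for the printed random number table. The Python sketch below is our own construction, not part of the original method; it selects ten distinct squares from the 100 × 100 grid of the polymer-sheet example.

```python
import random

def random_grid_samples(n_rows, n_cols, n_samples, seed=None):
    """Select n_samples distinct grid squares uniformly at random,
    returning (row, column) pairs -- a programmatic stand-in for
    reading entries from a random number table."""
    rng = random.Random(seed)
    cells = rng.sample(range(n_rows * n_cols), n_samples)
    return [(cell // n_cols, cell % n_cols) for cell in cells]

# Ten random 1 cm x 1 cm squares from the 100 cm x 100 cm polymer sheet
for row, col in random_grid_samples(100, 100, 10, seed=42):
    print(f"row {row:02d}, column {col:02d}")
```

Sampling without replacement (`rng.sample`) mirrors the rule of discarding repeated table entries, and fixing the seed makes the plan reproducible for an audit trail.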
Judgmental sampling is more biased than random sampling, but requires fewer samples. Judgmental sampling is useful if we wish to limit the number of independent variables that might affect our results. For example, if we are studying the bioaccumulation of PCB's in fish, we may choose to exclude fish that are too small, too young, or that appear diseased. Random sampling and judgmental sampling represent extremes in bias and in the number of samples needed to characterize the target population. Systematic sampling falls in between these extremes. In systematic sampling we sample the target population at regular intervals in space or time. Figure 7.2.1
shows an aerial photo of the Great Salt Lake in Utah. A railroad line divides the lake into two sections that have different chemical compositions. To compare the lake's two sections—and to evaluate spatial variations within each section—we use a two-dimensional grid to define sampling locations, collecting samples at the center of each location. When a population's composition is heterogeneous in time, as is common in clinical and environmental studies, then we might choose to collect samples at regular intervals in time. If a target population's properties have a periodic trend, a systematic sampling will lead to a significant bias if our sampling frequency is too small. This is a common problem when sampling electronic signals where the problem is known as aliasing. Consider, for example, a signal that is a simple sine wave. Figure 7.2.2
a shows how an insufficient sampling frequency underestimates the signal's true frequency. The apparent signal, shown by the dashed red line that passes through the five data points, is significantly different from the true signal shown by the solid blue line. According to the Nyquist theorem, to determine accurately the frequency of a periodic signal, we must sample the signal at least twice during each cycle or period. If we collect samples at an interval of \(\Delta t\), then the highest frequency we can monitor accurately is \((2 \Delta t)^{-1}\). For example, if we collect one sample each hour, then the highest frequency we can monitor is \((2 \times 1 \text{ hr})^{-1}\), or 0.5 hr\(^{-1}\), which corresponds to a period of 2 hr. If our signal's period is less than 2 hours (a frequency of more than 0.5 hr\(^{-1}\)), then we must use a faster sampling rate. Ideally, we use a sampling rate that is at least 3–4 times greater than the highest frequency signal of interest. If our signal has a period of one hour, then we should collect a new sample every 15–20 minutes. Combinations of the three primary approaches to sampling also are possible [Keith, L. H., 610–617]. One such combination is systematic–judgmental sampling, in which we use prior knowledge about a system to guide a systematic sampling plan. For example, when monitoring waste leaching from a landfill, we expect the plume to move in the same direction as the flow of groundwater—this helps focus our sampling, saving money and time. The systematic–judgmental sampling plan in Figure 7.2.3
includes a rectangular grid for most of the samples and linear transects to explore the plume’s limits [Flatman, G. T.; Englund, E. J.; Yfantis, A. A. in Keith, L. H., ed. , American Chemical Society: Washington, D. C., 1988, 73–84]. Another combination of the three primary approaches to sampling is judgmental–random, or stratified sampling. Many target populations consist of distinct units, or strata. For example, suppose we are studying particulate Pb in urban air. Because particulates come in a range of sizes—some visible and some microscopic—and come from many sources—such as road dust, diesel soot, and fly ash to name a few—we can subdivide the target population by size or by source. If we choose a random sampling plan, then we collect samples without considering the different strata, which may bias the sample toward larger particulates. In a stratified sampling we divide the target population into strata and collect random samples from within each stratum. After we analyze the samples from each stratum, we pool their respective means to give an overall mean for the target population. The advantage of stratified sampling is that individual strata usually are more homogeneous than the target population. The overall sampling variance for stratified sampling always is at least as good, and often is better than that obtained by simple random sampling. Because a stratified sampling requires that we collect and analyze samples from several strata, it often requires more time and money. One additional method of sampling deserves mention. In convenience sampling we select sample sites using criteria other than minimizing sampling error and sampling variance. In a survey of rural groundwater quality, for example, we can choose to drill wells at sites selected at random or we can choose to take advantage of existing wells; the latter usually is the preferred choice.
In this case cost, expedience, and accessibility are more important than ensuring a random sample. Having determined from where to collect samples, the next step in designing a sampling plan is to decide on the type of sample to collect. There are three common methods for obtaining samples: grab sampling, composite sampling, and in situ sampling. The most common type of sample is a grab sample, in which we collect a portion of the target population at a specific time or location, providing a “snapshot” of the target population. If our target population is homogeneous, a series of random grab samples allows us to establish its properties. For a heterogeneous target population, systematic grab sampling allows us to characterize how its properties change over time and/or space. A composite sample is a set of grab samples that we combine into a single sample before analysis. Because information is lost when we combine individual samples, normally we analyze each grab sample separately. In some situations, however, there are advantages to working with a composite sample. One situation where composite sampling is appropriate is when our interest is in the target population’s average composition over time or space. For example, wastewater treatment plants must monitor and report the average daily composition of the treated water they release to the environment. The analyst can collect and analyze a set of individual grab samples and report the average result, or she can save time and money by combining the grab samples into a single composite sample and reporting the result of her analysis of that composite sample. Composite sampling also is useful when a single sample does not supply sufficient material for the analysis. For example, analytical methods for the quantitative analysis of PCB’s in fish often require as much as 50 g of tissue, an amount that may be difficult to obtain from a single fish.
Combining and homogenizing tissue samples from several fish makes it easy to obtain the necessary 50-g sample. A significant disadvantage of grab samples and composite samples is that we cannot use them to monitor continuously a time-dependent change in the target population. In situ sampling, in which we insert an analytical sensor into the target population, allows us to monitor the target population without removing individual grab samples. For example, we can monitor the pH of a solution in an industrial production line by immersing a pH electrode in the solution’s flow. A study of the relationship between traffic density and the concentrations of Pb, Cd, and Zn in roadside soils uses the following sampling plan [Nabulo, G.; Oryem-Origa, H.; Diamond, M. , , 42–52]. Samples of surface soil (0–10 cm) are collected at distances of 1, 5, 10, 20, and 30 m from the road. At each distance, 10 samples are taken from different locations and mixed to form a single sample. What type of sampling plan is this? Explain why this is an appropriate sampling plan. This is a systematic–judgmental sampling plan using composite samples. These are good choices given the goals of the study. Automobile emissions release particulates that contain elevated concentrations of Pb, Cd, and Zn—this study was conducted in Uganda where leaded gasoline was still in use—which settle out on the surrounding roadside soils as “dry rain.” Samples collected near the road and samples collected at fixed distances from the road provide sufficient data for the study, while minimizing the total number of samples. Combining samples from the same distance into a single, composite sample has the advantage of decreasing sampling uncertainty. To minimize sampling errors, samples must be of an appropriate size. If a sample is too small its composition may differ substantially from that of the target population, which introduces a sampling error.
Samples that are too large, however, require more time and money to collect and analyze, without providing a significant improvement in the sampling error. Let’s assume our target population is a homogeneous mixture of two types of particles. Particles of type A contain a fixed concentration of analyte, and particles of type B are analyte-free. Samples from this target population follow a binomial distribution. If we collect a sample of \(n\) particles, then the expected number of particles that contain analyte, \(n_A\), is \[n_{A}=n p \nonumber\] where \(p\) is the probability of selecting a particle of type A. The standard deviation for sampling is \[s_{samp}=\sqrt{n p(1-p)} \label{7.1}\] To calculate the relative standard deviation for sampling, \(\left( s_{samp} \right)_{rel}\), we divide Equation \ref{7.1} by \(n_A\), obtaining \[\left(s_{samp}\right)_{r e l}=\frac{\sqrt{n p(1-p)}}{n p} \nonumber\] Solving for \(n\) allows us to calculate the number of particles we need to provide a desired relative sampling variance. \[n=\frac{1-p}{p} \times \frac{1}{\left(s_{s a m p}\right)_{rel}^{2}} \label{7.2}\] Suppose we are analyzing a soil where the particles that contain analyte represent only \(1 \times 10^{-7}\)% of the population. How many particles must we collect to give a percent relative standard deviation for sampling of 1%? Since the particles of interest account for \(1 \times 10^{-7}\)% of all particles, the probability, \(p\), of selecting one of these particles is \(1 \times 10^{-9}\). Substituting into Equation \ref{7.2} gives \[n=\frac{1-\left(1 \times 10^{-9}\right)}{1 \times 10^{-9}} \times \frac{1}{(0.01)^{2}}=1 \times 10^{13} \nonumber\] To obtain a relative standard deviation for sampling of 1%, we need to collect \(1 \times 10^{13}\) particles. Depending on the particle size, a sample of \(1 \times 10^{13}\) particles may be fairly large. Suppose this is equivalent to a mass of 80 g. Working with a sample this large clearly is not practical.
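Equation 7.2 translates directly into code. The short helper below is an illustration, not part of the original text; it reproduces the soil example, where \(p = 1 \times 10^{-9}\) and the desired relative standard deviation is 1% (0.01).

```python
def particles_needed(p, rsd_rel):
    """Number of particles, n = (1 - p)/p * 1/rsd_rel**2 (Equation 7.2).

    p       : probability that a particle contains analyte
    rsd_rel : desired relative standard deviation for sampling,
              expressed as a fraction (0.01 for 1%)
    """
    return (1 - p) / p * 1 / rsd_rel ** 2

# Soil example: analyte-bearing particles are 1e-7 % of the population,
# so p = 1e-9; the target relative standard deviation is 1%.
n = particles_needed(1e-9, 0.01)
print(f"{n:.1e}")  # about 1e13 particles
```

Because \(n\) scales as \(1/p\) and as the inverse square of the desired precision, trace-level analytes drive the particle count up very quickly.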
Does this mean we must work with a smaller sample and accept a larger relative standard deviation for sampling? Fortunately the answer is no. An important feature of Equation \ref{7.2} is that the relative standard deviation for sampling is a function of the number of particles instead of their combined mass. If we crush and grind the particles to make them smaller, then a sample of \(1 \times 10^{13}\) particles will have a smaller mass. If we assume that a particle is spherical, then its mass is proportional to the cube of its radius. \[\operatorname{mass} \propto r^{3} \nonumber\] If we decrease a particle’s radius by a factor of 2, for example, then we decrease its mass by a factor of \(2^3\), or 8. This assumes, of course, that the process of crushing and grinding particles does not change the composition of the particles. Assume that a sample of \(1 \times 10^{13}\) particles from Example 7.2.3
weighs 80 g and that the particles are spherical. By how much must we reduce a particle’s radius if we wish to work with 0.6-g samples? To reduce the sample’s mass from 80 g to 0.6 g, we must change its mass by a factor of \[\frac{80}{0.6}=133 \times \nonumber\] To accomplish this we must decrease a particle’s radius by a factor of \[\begin{aligned} r^{3} &=133 \times \\ r &=\sqrt[3]{133}=5.1 \times \end{aligned} \nonumber\] Decreasing the radius by a factor of approximately 5 allows us to decrease the sample’s mass from 80 g to 0.6 g. Treating a population as though it contains only two types of particles is a useful exercise because it shows us that we can improve the relative standard deviation for sampling by collecting more particles. Of course, a real population likely contains more than two types of particles, with the analyte present at several levels of concentration. Nevertheless, the sampling of many well-mixed populations approximates binomial sampling statistics because they are homogeneous on the scale at which they are sampled. Under these conditions the following relationship between the mass of a random grab sample, \(m\), and the percent relative standard deviation for sampling, \(R\), often is valid \[m R^{2}=K_{s} \label{7.3}\] where \(K_s\) is a sampling constant equal to the mass of a sample that produces a percent relative standard deviation for sampling of ±1% [Ingamells, C. O.; Switzer, P. , , 547–568]. The following data were obtained in a preliminary determination of the amount of inorganic ash in a breakfast cereal. What is the value of \(K_s\) and what size sample is needed to give a percent relative standard deviation for sampling of ±2.0%? Predict the percent relative standard deviation and the absolute standard deviation if we collect 5.00-g samples. To determine the sampling constant, \(K_s\), we need to know the average mass of the cereal samples and the relative standard deviation for the amount of ash in those samples. The average mass of the cereal samples is 1.0007 g.
The average %w/w ash and its absolute standard deviation are, respectively, 1.298 %w/w and 0.03194 %w/w. The percent relative standard deviation, \(R\), therefore, is \[R=\frac{s_{\text { samp }}}{\overline{X}}=\frac{0.03194 \% \ \mathrm{w} / \mathrm{w}}{1.298 \% \ \mathrm{w} / \mathrm{w}} \times 100=2.46 \% \nonumber\] Solving for \(K_s\) gives its value as \[K_{s}=m R^{2}=(1.0007 \mathrm{g})(2.46)^{2}=6.06 \ \mathrm{g} \nonumber\] To obtain a percent relative standard deviation of ±2%, samples must have a mass of at least \[m=\frac{K_{s}}{R^{2}}=\frac{6.06 \mathrm{g}}{(2.0)^{2}}=1.5 \ \mathrm{g} \nonumber\] If we use 5.00-g samples, then the expected percent relative standard deviation is \[R=\sqrt{\frac{K_{s}}{m}}=\sqrt{\frac{6.06 \mathrm{g}}{5.00 \mathrm{g}}}=1.10 \% \nonumber\] and the expected absolute standard deviation is \[s_{\text { samp }}=\frac{R \overline{X}}{100}=\frac{(1.10)(1.298 \% \mathrm{w} / \mathrm{w})}{100}=0.0143 \% \mathrm{w} / \mathrm{w} \nonumber\] Olaquindox is a synthetic growth promoter in medicated feeds for pigs. In an analysis of a production lot of feed, five samples with nominal masses of 0.95 g were collected and analyzed, with the results shown in the following table. What is the value of \(K_s\) and what size samples are needed to obtain a percent relative standard deviation for sampling of 5.0%? By how much do you need to reduce the average particle size if samples must weigh no more than 1 g? To determine the sampling constant, \(K_s\), we need to know the average mass of the samples and the percent relative standard deviation for the concentration of olaquindox in the feed. The average mass for the five samples is 0.95792 g. The average concentration of olaquindox in the samples is 23.14 mg/kg with a standard deviation of 2.200 mg/kg.
The percent relative standard deviation, \(R\), is \[R=\frac{s_{\text { samp }}}{\overline{X}} \times 100=\frac{2.200 \ \mathrm{mg} / \mathrm{kg}}{23.14 \ \mathrm{mg} / \mathrm{kg}} \times 100=9.507 \approx 9.51 \nonumber\] Solving for \(K_s\) gives its value as \[K_{s}=m R^{2}=(0.95792 \mathrm{g})(9.507)^{2}=86.58 \ \mathrm{g} \approx 86.6 \ \mathrm{g} \nonumber\] To obtain a percent relative standard deviation of 5.0%, individual samples need to have a mass of at least \[m=\frac{K_{s}}{R^{2}}=\frac{86.58 \ \mathrm{g}}{(5.0)^{2}}=3.5 \ \mathrm{g} \nonumber\] To reduce the sample’s mass from 3.5 g to 1 g, we must change the mass by a factor of \[\frac{3.5 \ \mathrm{g}}{1 \ \mathrm{g}}=3.5 \times \nonumber\] If we assume that the sample’s particles are spherical, then we must reduce a particle’s radius by a factor of \[\begin{aligned} r^{3} &=3.5 \times \\ r &=\sqrt[3]{3.5}=1.5 \times \end{aligned} \nonumber\] In the previous section we considered how much sample we need to minimize the standard deviation due to sampling. Another important consideration is the number of samples to collect. If the results from our analysis of the samples are normally distributed, then the confidence interval for the sampling error is \[\mu=\overline{X} \pm \frac{t s_{samp}}{\sqrt{n_{samp}}} \label{7.4}\] where \(n_{samp}\) is the number of samples and \(s_{samp}\) is the standard deviation for sampling. Rearranging Equation \ref{7.4} and substituting \(e\) for the quantity \(\overline{X} - \mu\) gives the number of samples as \[n_{samp}=\frac{t^{2} s_{samp}^{2}}{e^{2}} \label{7.5}\] Because the value of \(t\) depends on \(n_{samp}\), the solution to Equation \ref{7.5} is found iteratively. When we use Equation \ref{7.5}, we must express the standard deviation for sampling, \(s_{samp}\), and the error, \(e\), in the same way. If \(s_{samp}\) is reported as a percent relative standard deviation, then the error, \(e\), is reported as a percent relative error. When you use Equation \ref{7.5}, be sure to check that you are expressing \(s_{samp}\) and \(e\) in the same way.
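The iteration behind Equation 7.5 is mechanical enough to sketch in code. The function below is a hypothetical helper, not from the text; the two-tailed t values in the small lookup table are the ones quoted in the cereal-ash example that follows, and only the degrees of freedom that example visits are tabulated.

```python
def samples_needed(s_samp, e, t_func, max_iter=50):
    """Iteratively solve n_samp = t**2 * s_samp**2 / e**2 (Equation 7.5).

    s_samp : standard deviation for sampling (relative or absolute)
    e      : desired sampling error, expressed the same way as s_samp
    t_func : returns the two-tailed t value for a given degrees of
             freedom (None means infinite degrees of freedom)
    """
    # First estimate uses t for infinite degrees of freedom.
    n = round((t_func(None) ** 2 * s_samp ** 2) / e ** 2)
    for _ in range(max_iter):
        # Recalculate with t for n - 1 degrees of freedom, as in the text.
        n_new = round((t_func(n - 1) ** 2 * s_samp ** 2) / e ** 2)
        if n_new == n:  # two successive calculations agree
            return n
        n = n_new
    return n

# t values at alpha = 0.05 as quoted in the worked example below;
# the key None stands for infinite degrees of freedom.
t_table = {None: 1.960, 23: 2.073, 26: 2.060}
print(samples_needed(2.0, 0.80, lambda df: t_table[df]))  # prints 27
```

The iteration converges quickly here (24 → 27 → 27), matching the worked example; with a full t table, the same function handles any \(s_{samp}\) and \(e\) expressed in the same units.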
Earlier we determined that we need 1.5-g samples to establish a percent relative standard deviation for sampling of ±2.0% for the amount of inorganic ash in cereal. How many 1.5-g samples do we need to collect to obtain a percent relative sampling error of ±0.80% at the 95% confidence level? Because the value of \(t\) depends on the number of samples—a result we have yet to calculate—we begin by letting \(n_{samp} = \infty\) and using \(t(0.05, \infty)\) for \(t\). From a table of t values, \(t(0.05, \infty)\) is 1.960. Substituting known values into Equation \ref{7.5} gives the number of samples as \[n_{samp}=\frac{(1.960)^{2}(2.0)^{2}}{(0.80)^{2}}=24.0 \approx 24 \nonumber\] Letting \(n_{samp} = 24\), the value of \(t(0.05, 23)\) is 2.073. Recalculating gives \[n_{samp}=\frac{(2.073)^{2}(2.0)^{2}}{(0.80)^{2}}=26.9 \approx 27 \nonumber\] When \(n_{samp} = 27\), the value of \(t(0.05, 26)\) is 2.060. Recalculating gives \[n_{samp}=\frac{(2.060)^{2}(2.0)^{2}}{(0.80)^{2}}=26.52 \approx 27 \nonumber\] Because two successive calculations give the same value for \(n_{samp}\), we have an iterative solution to the problem. We need 27 samples to achieve a percent relative sampling error of ±0.80% at the 95% confidence level. Assuming that the percent relative standard deviation for sampling in the determination of olaquindox in medicated feed is 5.0% (see the earlier exercise), how many samples do we need to analyze to obtain a percent relative sampling error of ±2.5% at \(\alpha\) = 0.05? Because the value of \(t\) depends on the number of samples—a result we have yet to calculate—we begin by letting \(n_{samp} = \infty\) and using \(t(0.05, \infty)\) for the value of \(t\). From a table of t values, \(t(0.05, \infty)\) is 1.960. Our first estimate for \(n_{samp}\) is \[n_{samp}=\frac{t^{2} s_{s a m p}^{2}}{e^{2}} = \frac{(1.96)^{2}(5.0)^{2}}{(2.5)^{2}}=15.4 \approx 15 \nonumber\] Letting \(n_{samp} = 15\), the value of \(t(0.05, 14)\) is 2.145. Recalculating gives \[n_{samp}=\frac{t^{2} s_{samp}^{2}}{e^{2}}=\frac{(2.145)^{2}(5.0)^{2}}{(2.5)^{2}}=18.4 \approx 18 \nonumber\] Letting \(n_{samp} = 18\), the value of \(t(0.05, 17)\) is 2.103.
Recalculating gives \[n_{samp}=\frac{t^{2} s_{samp}^{2}}{e^{2}}=\frac{(2.103)^{2}(5.0)^{2}}{(2.5)^{2}}=17.7 \approx 18 \nonumber\] Because two successive calculations give the same value for \(n_{samp}\), we need 18 samples to achieve a sampling error of ±2.5% at the 95% confidence level. Equation \ref{7.5} provides an estimate for the smallest number of samples that will produce the desired sampling error. The actual sampling error may be substantially larger if \(s_{samp}\) for the samples we collect during the subsequent analysis is greater than the value of \(s_{samp}\) used to calculate \(n_{samp}\). This is not an uncommon problem. For a target population with a relative sampling variance of 50 and a desired relative sampling error of ±5%, Equation \ref{7.5} predicts that 10 samples are sufficient. In a simulation using 1000 samples of size 10, however, only 57% of the trials resulted in a sampling error of less than ±5% [Blackwood, L. G. , , 1366–1367]. Increasing the number of samples to 17 was sufficient to ensure that the desired sampling error was achieved 95% of the time. For an interesting discussion of why the number of samples is important, see Kaplan, D.; Lacetera, N.; Kaplan, C. “Sample Size and Precision in NIH Peer Review,” PLoS One, 2008, 3(7), 1–3. When reviewing grants, individual reviewers report a score between 1.0 and 5.0 (two significant figures). NIH reports the average score to three significant figures, implying that a difference of 0.01 is significant. If the individual scores have a standard deviation of 0.1, then a difference of 0.01 is significant at \(\alpha = 0.05\) only if there are 384 reviews. The authors conclude that NIH review panels are too small to provide a statistically meaningful separation between proposals receiving similar scores. A final consideration when we develop a sampling plan is how we can minimize the overall variance for the analysis.
The overall variance for an analysis is a function of the variance due to the method, \(s_{meth}^2\), and the variance due to sampling, \(s_{samp}^2\). As we learned earlier, we can improve the sampling variance by collecting more samples of the proper size. Increasing the number of times we analyze each sample improves the method’s variance. If \(s_{samp}^2\) is significantly greater than \(s_{meth}^2\), we can ignore the method’s contribution to the overall variance and use Equation \ref{7.5} to estimate the number of samples to analyze. Analyzing any sample more than once will not improve the overall variance, because the method’s variance is insignificant. If \(s_{meth}^2\) is significantly greater than \(s_{samp}^2\), then we need to collect and analyze only one sample. The number of replicate analyses, \(n_{rep}\), we need to minimize the error due to the method is given by an equation similar to Equation \ref{7.5}. \[n_{rep}=\frac{t^{2} s_{m e t h}^{2}}{e^{2}} \nonumber\] Unfortunately, the simple situations described above often are the exception. For many analyses, both the sampling variance and the method variance are significant, and both multiple samples and replicate analyses of each sample are necessary. The overall error in this case is \[e=t \sqrt{\frac{s_{samp}^{2}}{n_{samp}} + \frac{s_{meth}^{2}}{n_{sam p} n_{rep}}} \label{7.6}\] Equation \ref{7.6} does not have a unique solution as different combinations of \(n_{samp}\) and \(n_{rep}\) give the same overall error. How many samples we collect and how many times we analyze each sample is determined by other concerns, such as the cost of collecting and analyzing samples, and the amount of available sample. An analytical method has a relative sampling variance of 0.40% and a relative method variance of 0.070%. Evaluate the percent relative error (\(\alpha = 0.05\)) if you collect 5 samples and analyze each twice, and if you collect 2 samples and analyze each 5 times. Both sampling strategies require a total of 10 analyses.
From a table of t values we find that \(t(0.05, 9)\) is 2.262. Using Equation \ref{7.6}, the relative error for the first sampling strategy is \[e=2.262 \sqrt{\frac{0.40}{5}+\frac{0.070}{5 \times 2}}=0.67 \% \nonumber\] and that for the second sampling strategy is \[e=2.262 \sqrt{\frac{0.40}{2}+\frac{0.070}{2 \times 5}}=1.0 \% \nonumber\] Because the method variance is smaller than the sampling variance, we obtain a smaller relative error if we collect more samples and analyze each sample fewer times. An analytical method has a relative sampling variance of 0.10% and a relative method variance of 0.20%. The cost of collecting a sample is $20 and the cost of analyzing a sample is $50. Propose a sampling strategy that provides a maximum relative error of ±0.50% (\(\alpha = 0.05\)) and a maximum cost of $700. If we collect a single sample (cost $20), then we can analyze that sample 13 times (cost $650) and stay within our budget. For this scenario, the percent relative error is \[e=t \sqrt{\frac{s_{samp}^{2}}{n_{samp}} + \frac{s_{meth}^{2}}{n_{sam p} n_{rep}}} = 2.179 \sqrt{\frac{0.10}{1}+\frac{0.20}{1 \times 13}}=0.74 \% \nonumber\] where \(t(0.05, 12)\) is 2.179. Because this percent relative error is larger than ±0.50%, this is not a suitable sampling strategy. Next, we try two samples (cost $40), analyzing each six times (cost $600). For this scenario, the percent relative error is \[e=t \sqrt{\frac{s_{samp}^{2}}{n_{samp}} + \frac{s_{meth}^{2}}{n_{sam p} n_{rep}}} = 2.2035 \sqrt{\frac{0.10}{2}+\frac{0.20}{2 \times 6}}=0.57 \% \nonumber\] where \(t(0.05, 11)\) is 2.2035. Because this percent relative error is larger than ±0.50%, this also is not a suitable sampling strategy. Next we try three samples (cost $60), analyzing each four times (cost $600). For this scenario, the percent relative error is \[e=t \sqrt{\frac{s_{samp}^{2}}{n_{samp}} + \frac{s_{meth}^{2}}{n_{sam p} n_{rep}}} = 2.2035 \sqrt{\frac{0.10}{3}+\frac{0.20}{3 \times 4}}=0.49 \% \nonumber\] where \(t(0.05, 11)\) is 2.2035.
Because both the total cost ($660) and the percent relative error meet our requirements, this is a suitable sampling strategy. There are other suitable sampling strategies that meet both goals. The strategy that requires the least expense is to collect eight samples, analyzing each once, for a total cost of $560 and a percent relative error of ±0.46%. Collecting 10 samples and analyzing each one time gives a percent relative error of ±0.39% at a cost of $700.
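Equation 7.6 is easy to evaluate in code. The sketch below is an illustration, not part of the original text; it reproduces the two ten-analysis strategies from the earlier example, using the same \(t(0.05, 9) = 2.262\).

```python
import math

def overall_error(t, s2_samp, s2_meth, n_samp, n_rep):
    """Percent relative error from Equation 7.6.

    t       : t value for n_samp * n_rep - 1 degrees of freedom
    s2_samp : relative variance due to sampling
    s2_meth : relative variance due to the method
    """
    return t * math.sqrt(s2_samp / n_samp + s2_meth / (n_samp * n_rep))

# Ten total analyses either way; t(0.05, 9) = 2.262.
e1 = overall_error(2.262, 0.40, 0.070, n_samp=5, n_rep=2)
e2 = overall_error(2.262, 0.40, 0.070, n_samp=2, n_rep=5)
print(round(e1, 2), round(e2, 1))  # 0.67 1.0
```

Looping this function over candidate \((n_{samp}, n_{rep})\) pairs, together with their costs, is one way to search for strategies that satisfy both an error budget and a dollar budget, as in the exercise above.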
A sampling plan must support the goals of an analysis. For example, a materials scientist interested in characterizing a metal’s surface chemistry is more likely to choose a freshly exposed surface, created by cleaving the sample under vacuum, than a surface previously exposed to the atmosphere. In a qualitative analysis, a sample need not be identical to the original substance provided there is sufficient analyte present to ensure its detection. In fact, if the goal of an analysis is to identify a trace-level component, it may be desirable to discriminate against major components when collecting samples. For an interesting discussion of the importance of a sampling plan, see Burger, J. et al. “Do Scientists and Fishermen Collect the Same Size Fish? Possible Implications for Exposure Assessment,” , , 34–41. For a quantitative analysis, the sample’s composition must represent accurately the target population, a requirement that necessitates a careful sampling plan. Among the issues we need to consider are these five questions: from where within the target population should we collect samples, what type of samples should we collect, what is the minimum amount of sample needed for each analysis, how many samples should we analyze, and how can we minimize the overall variance for the analysis? A sampling error occurs whenever a sample’s composition is not identical to its target population. If the target population is homogeneous, then we can collect individual samples without giving consideration to where we collect each sample. Unfortunately, in most situations the target population is heterogeneous and attention to where we collect samples is important. For example, due to settling, a medication available as an oral suspension may have a higher concentration of its active ingredients at the bottom of the container. The composition of a clinical sample, such as blood or urine, may depend on when it is collected. A patient’s blood glucose level, for instance, will change in response to eating and exercise. Other target populations show both a spatial and a temporal heterogeneity. The concentration of dissolved O\(_2\) in a lake is heterogeneous due both to a change in seasons and to point sources of pollution.
The composition of a homogeneous target population is the same regardless of where we sample, when we sample, or the size of our sample. For a heterogeneous target population, the composition is not the same at different locations, at different times, or for different sample sizes. If the analyte’s distribution within the target population is a concern, then our sampling plan must take this into account. When feasible, homogenizing the target population is a simple solution, although this often is impracticable. In addition, homogenizing a sample destroys information about the analyte’s spatial or temporal distribution within the target population, information that may be of importance. The ideal sampling plan provides an unbiased estimate of the target population’s properties. A random sample is the easiest way to satisfy this requirement [Cohen, R. D. , , 902–903]. Despite its apparent simplicity, a truly random sample is difficult to collect. Haphazard sampling, in which samples are collected without a sampling plan, is not random and may reflect an analyst’s unintentional biases. Here is a simple method to ensure that we collect random samples. First, we divide the target population into equal units and assign to each unit a unique number. Then, we use a random number table to select the units to sample. Example 7.2.1
provides an illustrative example. A table of random numbers can be used to design such a sampling plan. To analyze a polymer’s tensile strength, individual samples of the polymer are held between two clamps and stretched. To evaluate a production lot, the manufacturer’s sampling plan calls for collecting ten 1 cm \(\times\) 1 cm samples from a 100 cm \(\times\) 100 cm polymer sheet. Explain how we can use a random number table to ensure that we collect these samples at random. As shown by the grid below, we divide the polymer sheet into 10 000 1 cm \(\times\) 1 cm squares, each identified by its row number and its column number, with numbers running from 0 to 99. For example, one such square is in row 98 and in column 1. To select ten squares at random, we enter the random number table at an arbitrary point and let the entry’s last four digits represent the row number and the column number for the first sample. We then move through the table in a predetermined fashion, selecting random numbers until we have 10 samples. For our first sample, let’s use the second entry in the third column of the table, which is 76831. The first sample, therefore, is row 68 and column 31. If we proceed by moving down the third column, then the next entry is 66558, and so on until we have all 10 samples. When we collect a random sample we make no assumptions about the target population, which makes this the least biased approach to sampling. On the other hand, a random sample often requires more time and expense than other sampling strategies because we need to collect a greater number of samples to ensure that we adequately sample the target population, particularly when that population is heterogeneous [Borgman, L. E.; Quimby, W. F. in Keith, L. H., ed. , American Chemical Society: Washington, D. C., 1988, 25–43]. The opposite of random sampling is selective, or judgmental sampling, in which we use prior information about the target population to help guide our selection of samples.
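The grid-selection procedure in the polymer example can also be automated. This sketch is illustrative and not part of the original text; `random_grid_squares` is a hypothetical helper that uses Python’s random module in place of a printed random number table.

```python
import random

def random_grid_squares(n, rows=100, cols=100, seed=None):
    """Select n distinct (row, column) squares at random from a grid,
    mimicking the random number table procedure described above."""
    rng = random.Random(seed)
    all_squares = [(r, c) for r in range(rows) for c in range(cols)]
    # rng.sample draws without replacement, so no square repeats.
    return rng.sample(all_squares, n)

samples = random_grid_squares(10, seed=42)
print(samples)  # ten distinct (row, column) pairs
```

Sampling without replacement matters here: drawing the same square twice would reduce the effective number of samples below the ten the sampling plan calls for.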
Judgmental sampling is more biased than random sampling, but requires fewer samples. Judgmental sampling is useful if we wish to limit the number of independent variables that might affect our results. For example, if we are studying the bioaccumulation of PCB’s in fish, we may choose to exclude fish that are too small, too young, or that appear diseased. Random sampling and judgmental sampling represent extremes in bias and in the number of samples needed to characterize the target population. falls in between these extremes. In systematic sampling we sample the target population at regular intervals in space or time. Figure 7.2.1
shows an aerial photo of the Great Salt Lake in Utah. A railroad line divides the lake into two sections that have different chemical compositions. To compare the lake’s two sections—and to evaluate spatial variations within each section—we use a two-dimensional grid to define sampling locations, collecting samples at the center of each location. When a population’s is heterogeneous in time, as is common in clinical and environmental studies, then we might choose to collect samples at regular intervals in time. If a target population’s properties have a periodic trend, a systematic sampling will lead to a significant bias if our sampling frequency is too small. This is a common problem when sampling electronic signals where the problem is known as aliasing. Consider, for example, a signal that is a simple sign wave. Figure 7.2.2
a shows how an insufficient sampling frequency underestimates the signal’s true frequency. The apparent signal, shown by the dashed red line that passes through the five data points, is significantly different from the true signal shown by the solid blue line. According to the , to determine accurately the frequency of a periodic signal, we must sample the signal at least twice during each cycle or period. If we collect samples at an interval of \(\Delta t\), then the highest frequency we can monitor accurately is \((2 \Delta t)^{-1}\). For example, if we collect one sample each hour, then the highest frequency we can monitor is (2 \(\times\) 1 hr) or 0.5 hr , a period of less than 2 hr. If our signal’s period is less than 2 hours (a frequency of more than 0.5 hr ), then we must use a faster sampling rate. Ideally, we use a sampling rate that is at least 3–4 times greater than the highest frequency signal of interest. If our signal has a period of one hour, then we should collect a new sample every 15-20 minutes. Combinations of the three primary approaches to sampling also are possible [Keith, L. H. , , 610–617]. One such combination is , in which we use prior knowledge about a system to guide a systematic sampling plan. For example, when monitoring waste leaching from a landfill, we expect the plume to move in the same direction as the flow of groundwater—this helps focus our sampling, saving money and time. The systematic–judgmental sampling plan in Figure 7.2.3
includes a rectangular grid for most of the samples and linear transects to explore the plume’s limits [Flatman, G. T.; Englund, E. J.; Yfantis, A. A. in Keith, L. H., ed. , American Chemical Society: Washington, D. C., 1988, 73–84]. Another combination of the three primary approaches to sampling is judgmental–random, or . Many target populations consist of distinct units, or strata. For example, suppose we are studying particulate Pb in urban air. Because particulates come in a range of sizes—some visible and some microscopic—and come from many sources—such as road dust, diesel soot, and fly ash to name a few—we can subdivide the target population by size or by source. If we choose a random sampling plan, then we collect samples without considering the different strata, which may bias the sample toward larger particulates. In a stratified sampling we divide the target population into strata and collect random samples from within each stratum. After we analyze the samples from each stratum, we pool their respective means to give an overall mean for the target population. The advantage of stratified sampling is that individual strata usually are more homogeneous than the target population. The overall sampling variance for stratified sampling always is at least as good, and often is better than that obtained by simple random sampling. Because a stratified sampling requires that we collect and analyze samples from several strata, it often requires more time and money. One additional method of sampling deserves mention. In we select sample sites using criteria other than minimizing sampling error and sampling variance. In a survey of rural groundwater quality, for example, we can choose to drill wells at sites selected at random or we can choose to take advantage of existing wells; the latter usually is the preferred choice. 
In this case cost, expedience, and accessibility are more important than ensuring a random sample. Having determined from where to collect samples, the next step in designing a sampling plan is to decide on the type of sample to collect. There are three common methods for obtaining samples: grab sampling, composite sampling, and in situ sampling. The most common type of sample is a grab sample, in which we collect a portion of the target population at a specific time or location, providing a “snapshot” of the target population. If our target population is homogeneous, a series of random grab samples allows us to establish its properties. For a heterogeneous target population, systematic grab sampling allows us to characterize how its properties change over time and/or space. A composite sample is a set of grab samples that we combine into a single sample before analysis. Because information is lost when we combine individual samples, normally we analyze each grab sample separately. In some situations, however, there are advantages to working with a composite sample. One situation where composite sampling is appropriate is when our interest is in the target population’s average composition over time or space. For example, wastewater treatment plants must monitor and report the average daily composition of the treated water they release to the environment. The analyst can collect and analyze a set of individual grab samples and report the average result, or she can save time and money by combining the grab samples into a single composite sample and report the result of her analysis of the composite sample. Composite sampling also is useful when a single sample does not supply sufficient material for the analysis. For example, analytical methods for the quantitative analysis of PCB’s in fish often require as much as 50 g of tissue, an amount that may be difficult to obtain from a single fish.
Combining and homogenizing tissue samples from several fish makes it easy to obtain the necessary 50-g sample. A significant disadvantage of grab samples and composite samples is that we cannot use them to monitor continuously a time-dependent change in the target population. In situ sampling, in which we insert an analytical sensor into the target population, allows us to monitor the target population without removing individual grab samples. For example, we can monitor the pH of a solution in an industrial production line by immersing a pH electrode in the solution’s flow. A study of the relationship between traffic density and the concentrations of Pb, Cd, and Zn in roadside soils uses the following sampling plan [Nabulo, G.; Oryem-Origa, H.; Diamond, M. , , 42–52]. Samples of surface soil (0–10 cm) are collected at distances of 1, 5, 10, 20, and 30 m from the road. At each distance, 10 samples are taken from different locations and mixed to form a single sample. What type of sampling plan is this? Explain why this is an appropriate sampling plan. This is a systematic–judgmental sampling plan using composite samples. These are good choices given the goals of the study. Automobile emissions release particulates that contain elevated concentrations of Pb, Cd, and Zn—this study was conducted in Uganda where leaded gasoline was still in use—which settle out on the surrounding roadside soils as “dry rain.” Samples collected near the road and samples collected at fixed distances from the road provide sufficient data for the study, while minimizing the total number of samples. Combining samples from the same distance into a single, composite sample has the advantage of decreasing sampling uncertainty. To minimize sampling errors, samples must be of an appropriate size. If a sample is too small its composition may differ substantially from that of the target population, which introduces a sampling error.
Samples that are too large, however, require more time and money to collect and analyze, without providing a significant improvement in the sampling error. Let’s assume our target population is a homogeneous mixture of two types of particles. Particles of type A contain a fixed concentration of analyte, and particles of type B are analyte-free. Samples from this target population follow a binomial distribution. If we collect a sample of \(n\) particles, then the expected number of particles that contain analyte, \(n_A\), is \[n_{A}=n p \nonumber\] where \(p\) is the probability of selecting a particle of type A. The standard deviation for sampling is \[s_{samp}=\sqrt{n p(1-p)} \label{7.1}\] To calculate the relative standard deviation for sampling, \(\left( s_{samp} \right)_{rel}\), we divide Equation \ref{7.1} by \(n_A\), obtaining \[\left(s_{samp}\right)_{r e l}=\frac{\sqrt{n p(1-p)}}{n p} \nonumber\] Solving for \(n\) allows us to calculate the number of particles we need to provide a desired relative sampling variance. \[n=\frac{1-p}{p} \times \frac{1}{\left(s_{s a m p}\right)_{rel}^{2}} \label{7.2}\] Suppose we are analyzing a soil where the particles that contain analyte represent only \(1 \times 10^{-7}\)% of the population. How many particles must we collect to give a percent relative standard deviation for sampling of 1%? Since the particles of interest account for \(1 \times 10^{-7}\)% of all particles, the probability, \(p\), of selecting one of these particles is \(1 \times 10^{-9}\). Substituting into Equation \ref{7.2} gives \[n=\frac{1-\left(1 \times 10^{-9}\right)}{1 \times 10^{-9}} \times \frac{1}{(0.01)^{2}}=1 \times 10^{13} \nonumber\] To obtain a relative standard deviation for sampling of 1%, we need to collect \(1 \times 10^{13}\) particles. Depending on the particle size, a sample of \(10^{13}\) particles may be fairly large. Suppose this is equivalent to a mass of 80 g. Working with a sample this large clearly is not practical.
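As a quick numeric check of Equation \ref{7.2}, the following Python sketch (the function name is ours) reproduces the particle count from this example:

```python
def particles_needed(p, rsd_rel):
    """Equation 7.2: n = (1 - p)/p * 1/(s_samp)_rel^2.

    p is the probability of selecting an analyte-bearing particle;
    rsd_rel is the target relative standard deviation as a fraction.
    """
    return (1 - p) / p / rsd_rel**2

# Analyte particles are 1e-7 % of the population, so p = 1e-9;
# a 1% relative standard deviation (rsd_rel = 0.01) requires ~1e13 particles.
n = particles_needed(1e-9, 0.01)
print(f"{n:.2e}")  # 1.00e+13
```
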
Does this mean we must work with a smaller sample and accept a larger relative standard deviation for sampling? Fortunately the answer is no. An important feature of Equation \ref{7.2} is that the relative standard deviation for sampling is a function of the number of particles instead of their combined mass. If we crush and grind the particles to make them smaller, then a sample of \(10^{13}\) particles will have a smaller mass. If we assume that a particle is spherical, then its mass is proportional to the cube of its radius. \[\operatorname{mass} \propto r^{3} \nonumber\] If we decrease a particle’s radius by a factor of 2, for example, then we decrease its mass by a factor of \(2^3\), or 8. This assumes, of course, that the process of crushing and grinding particles does not change the composition of the particles. Assume that a sample of \(10^{13}\) particles from Example 7.2.3
weighs 80 g and that the particles are spherical. By how much must we reduce a particle’s radius if we wish to work with 0.6-g samples? To reduce the sample’s mass from 80 g to 0.6 g, we must change its mass by a factor of \[\frac{80}{0.6}=133 \times \nonumber\] To accomplish this we must decrease a particle’s radius by a factor of \[\begin{aligned} r^{3} &=133 \times \\ r &=5.1 \times \end{aligned} \nonumber\] Decreasing the radius by a factor of approximately 5 allows us to decrease the sample’s mass from 80 g to 0.6 g. Treating a population as though it contains only two types of particles is a useful exercise because it shows us that we can improve the relative standard deviation for sampling by collecting more particles. Of course, a real population likely contains more than two types of particles, with the analyte present at several levels of concentration. Nevertheless, the sampling of many well-mixed populations approximates binomial sampling statistics because they are homogeneous on the scale at which they are sampled. Under these conditions the following relationship between the mass of a random grab sample, \(m\), and the percent relative standard deviation for sampling, \(R\), often is valid \[m R^{2}=K_{s} \label{7.3}\] where \(K_s\) is a sampling constant equal to the mass of a sample that produces a percent relative standard deviation for sampling of ±1% [Ingamells, C. O.; Switzer, P. , , 547–568]. The following data were obtained in a preliminary determination of the amount of inorganic ash in a breakfast cereal. What is the value of \(K_s\) and what size sample is needed to give a percent relative standard deviation for sampling of ±2.0%? Predict the percent relative standard deviation and the absolute standard deviation if we collect 5.00-g samples. To determine the sampling constant, \(K_s\), we need to know the average mass of the cereal samples and the relative standard deviation for the amount of ash in those samples. The average mass of the cereal samples is 1.0007 g.
The average %w/w ash and its absolute standard deviation are, respectively, 1.298 %w/w and 0.03194 %w/w. The percent relative standard deviation, \(R\), therefore, is \[R=\frac{s_{\text { samp }}}{\overline{X}}=\frac{0.03194 \% \ \mathrm{w} / \mathrm{w}}{1.298 \% \ \mathrm{w} / \mathrm{w}} \times 100=2.46 \% \nonumber\] Solving for \(K_s\) gives its value as \[K_{s}=m R^{2}=(1.0007 \mathrm{g})(2.46)^{2}=6.06 \ \mathrm{g} \nonumber\] To obtain a percent relative standard deviation of ±2%, samples must have a mass of at least \[m=\frac{K_{s}}{R^{2}}=\frac{6.06 \mathrm{g}}{(2.0)^{2}}=1.5 \ \mathrm{g} \nonumber\] If we use 5.00-g samples, then the expected percent relative standard deviation is \[R=\sqrt{\frac{K_{s}}{m}}=\sqrt{\frac{6.06 \mathrm{g}}{5.00 \mathrm{g}}}=1.10 \% \nonumber\] and the expected absolute standard deviation is \[s_{\text { samp }}=\frac{R \overline{X}}{100}=\frac{(1.10)(1.298 \% \mathrm{w} / \mathrm{w})}{100}=0.0143 \% \mathrm{w} / \mathrm{w} \nonumber\] Olaquindox is a synthetic growth promoter in medicated feeds for pigs. In an analysis of a production lot of feed, five samples with nominal masses of 0.95 g were collected and analyzed, with the results shown in the following table. What is the value of \(K_s\) and what size samples are needed to obtain a percent relative standard deviation for sampling of 5.0%? By how much do you need to reduce the average particle size if samples must weigh no more than 1 g? To determine the sampling constant, \(K_s\), we need to know the average mass of the samples and the percent relative standard deviation for the concentration of olaquindox in the feed. The average mass for the five samples is 0.95792 g. The average concentration of olaquindox in the samples is 23.14 mg/kg with a standard deviation of 2.200 mg/kg.
The percent relative standard deviation, \(R\), is \[R=\frac{s_{\text { samp }}}{\overline{X}} \times 100=\frac{2.200 \ \mathrm{mg} / \mathrm{kg}}{23.14 \ \mathrm{mg} / \mathrm{kg}} \times 100=9.507 \approx 9.51 \nonumber\] Solving for \(K_s\) gives its value as \[K_{s}=m R^{2}=(0.95792 \mathrm{g})(9.507)^{2}=86.58 \ \mathrm{g} \approx 86.6 \ \mathrm{g} \nonumber\] To obtain a percent relative standard deviation of 5.0%, individual samples need to have a mass of at least \[m=\frac{K_{s}}{R^{2}}=\frac{86.58 \ \mathrm{g}}{(5.0)^{2}}=3.5 \ \mathrm{g} \nonumber\] To reduce the sample’s mass from 3.5 g to 1 g, we must change the mass by a factor of \[\frac{3.5 \ \mathrm{g}}{1 \ \mathrm{g}}=3.5 \times \nonumber\] If we assume that the sample’s particles are spherical, then we must reduce a particle’s radius by a factor of \[\begin{aligned} r^{3} &=3.5 \times \\ r &=1.5 \times \end{aligned} \nonumber\] In the previous section we considered how much sample we need to minimize the standard deviation due to sampling. Another important consideration is the number of samples to collect. If the results from our analysis of the samples are normally distributed, then the confidence interval for the sampling error is \[\mu=\overline{X} \pm \frac{t s_{samp}}{\sqrt{n_{samp}}} \label{7.4}\] where \(n_{samp}\) is the number of samples and \(s_{samp}\) is the standard deviation for sampling. Rearranging Equation \ref{7.4} and substituting \(e\) for the quantity \(\overline{X} - \mu\) gives the number of samples as \[n_{samp}=\frac{t^{2} s_{samp}^{2}}{e^{2}} \label{7.5}\] Because the value of \(t\) depends on \(n_{samp}\), the solution to Equation \ref{7.5} is found iteratively. When we use Equation \ref{7.5}, we must express the standard deviation for sampling, \(s_{samp}\), and the error, \(e\), in the same way. If \(s_{samp}\) is reported as a percent relative standard deviation, then the error, \(e\), is reported as a percent relative error. When you use Equation \ref{7.5}, be sure to check that you are expressing \(s_{samp}\) and \(e\) in the same way.
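The iteration behind Equation \ref{7.5} is easy to automate. The Python sketch below is ours, not part of the original text: it uses the \(t(0.05, df)\) values quoted in the examples that follow and falls back to \(t(0.05, \infty) = 1.960\) for other degrees of freedom; a fuller implementation could compute \(t\) with scipy.stats.t.ppf instead.

```python
# t(0.05, df) values quoted in the text; other df fall back to t(0.05, inf).
T_05 = {23: 2.073, 26: 2.060}
T_INF = 1.960

def n_samples(s_rel, e_rel, max_iter=50):
    """Iteratively solve Equation 7.5: n = t^2 s^2 / e^2.

    s_rel and e_rel are the percent relative standard deviation for
    sampling and the percent relative error, expressed the same way.
    """
    n = round(T_INF**2 * s_rel**2 / e_rel**2)   # first pass with t(0.05, inf)
    for _ in range(max_iter):
        t = T_05.get(n - 1, T_INF)              # t for df = n - 1
        n_new = round(t**2 * s_rel**2 / e_rel**2)
        if n_new == n:                          # two successive passes agree
            return n
        n = n_new
    raise RuntimeError("iteration did not converge")

print(n_samples(2.0, 0.80))  # 27
```

With \(s_{samp}\) = ±2.0% and \(e\) = ±0.80%, the iteration converges to 27 samples, matching the worked example below.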
In the previous example we determined that we need 1.5-g samples to establish a percent relative standard deviation for sampling of ±2.0% for the amount of inorganic ash in cereal. How many 1.5-g samples do we need to collect to obtain a percent relative sampling error of ±0.80% at the 95% confidence level? Because the value of \(t\) depends on the number of samples—a result we have yet to calculate—we begin by letting \(n_{samp} = \infty\) and using \(t(0.05, \infty)\) for \(t\). The value of \(t(0.05, \infty)\) is 1.960. Substituting known values into Equation \ref{7.5} gives the number of samples as \[n_{samp}=\frac{(1.960)^{2}(2.0)^{2}}{(0.80)^{2}}=24.0 \approx 24 \nonumber\] Letting \(n_{samp}\) = 24, the value of \(t(0.05, 23)\) is 2.073. Recalculating gives \[n_{samp}=\frac{(2.073)^{2}(2.0)^{2}}{(0.80)^{2}}=26.9 \approx 27 \nonumber\] When \(n_{samp}\) = 27, the value of \(t(0.05, 26)\) is 2.060. Recalculating gives \[n_{samp}=\frac{(2.060)^{2}(2.0)^{2}}{(0.80)^{2}}=26.52 \approx 27 \nonumber\] Because two successive calculations give the same value for \(n_{samp}\), we have an iterative solution to the problem. We need 27 samples to achieve a percent relative sampling error of ±0.80% at the 95% confidence level. Assuming that the percent relative standard deviation for sampling in the determination of olaquindox in medicated feed is 5.0% (see the previous exercise), how many samples do we need to analyze to obtain a percent relative sampling error of ±2.5% at \(\alpha\) = 0.05? Because the value of \(t\) depends on the number of samples—a result we have yet to calculate—we begin by letting \(n_{samp} = \infty\) and using \(t(0.05, \infty)\) for the value of \(t\). The value of \(t(0.05, \infty)\) is 1.960. Our first estimate for \(n_{samp}\) is \[n_{samp}=\frac{t^{2} s_{s a m p}^{2}}{e^{2}} = \frac{(1.96)^{2}(5.0)^{2}}{(2.5)^{2}}=15.4 \approx 15 \nonumber\] Letting \(n_{samp}\) = 15, the value of \(t(0.05, 14)\) is 2.145. Recalculating gives \[n_{samp}=\frac{t^{2} s_{samp}^{2}}{e^{2}}=\frac{(2.145)^{2}(5.0)^{2}}{(2.5)^{2}}=18.4 \approx 18 \nonumber\] Letting \(n_{samp}\) = 18, the value of \(t(0.05, 17)\) is 2.103.
Recalculating gives \[n_{samp}=\frac{t^{2} s_{samp}^{2}}{e^{2}}=\frac{(2.103)^{2}(5.0)^{2}}{(2.5)^{2}}=17.7 \approx 18 \nonumber\] Because two successive calculations give the same value for \(n_{samp}\), we need 18 samples to achieve a sampling error of ±2.5% at the 95% confidence level. Equation \ref{7.5} provides an estimate for the smallest number of samples that will produce the desired sampling error. The actual sampling error may be substantially larger if \(s_{samp}\) for the samples we collect during the subsequent analysis is greater than the value of \(s_{samp}\) used to calculate \(n_{samp}\). This is not an uncommon problem. For a target population with a relative sampling variance of 50 and a desired relative sampling error of ±5%, Equation \ref{7.5} predicts that 10 samples are sufficient. In a simulation using 1000 samples of size 10, however, only 57% of the trials resulted in a sampling error of less than ±5% [Blackwood, L. G. , , 1366–1367]. Increasing the number of samples to 17 was sufficient to ensure that the desired sampling error was achieved 95% of the time. For an interesting discussion of why the number of samples is important, see Kaplan, D.; Lacetera, N.; Kaplan, C. “Sample Size and Precision in NIH Peer Review,” PLoS One, 2008, 3(7), 1–3. When reviewing grants, individual reviewers report a score between 1.0 and 5.0 (two significant figures). NIH reports the average score to three significant figures, implying that a difference of 0.01 is significant. If the individual scores have a standard deviation of 0.1, then a difference of 0.01 is significant at \(\alpha = 0.05\) only if there are 384 reviews. The authors conclude that NIH review panels are too small to provide a statistically meaningful separation between proposals receiving similar scores. A final consideration when we develop a sampling plan is how we can minimize the overall variance for the analysis.
The overall variance for an analysis is a function of the variance due to the method, \(s_{meth}^2\), and the variance due to sampling, \(s_{samp}^2\). As we learned earlier, we can improve the sampling variance by collecting more samples of the proper size. Increasing the number of times we analyze each sample improves the method’s variance. If \(s_{samp}^2\) is significantly greater than \(s_{meth}^2\), we can ignore the method’s contribution to the overall variance and use Equation \ref{7.5} to estimate the number of samples to analyze. Analyzing any sample more than once will not improve the overall variance, because the method’s variance is insignificant. If \(s_{meth}^2\) is significantly greater than \(s_{samp}^2\), then we need to collect and analyze only one sample. The number of replicate analyses, \(n_{rep}\), we need to minimize the error due to the method is given by an equation similar to Equation \ref{7.5}. \[n_{rep}=\frac{t^{2} s_{m e t h}^{2}}{e^{2}} \nonumber\] Unfortunately, the simple situations described above often are the exception. For many analyses, both the sampling variance and the method variance are significant, and both multiple samples and replicate analyses of each sample are necessary. The overall error in this case is \[e=t \sqrt{\frac{s_{samp}^{2}}{n_{samp}} + \frac{s_{meth}^{2}}{n_{sam p} n_{rep}}} \label{7.6}\] Equation \ref{7.6} does not have a unique solution because different combinations of \(n_{samp}\) and \(n_{rep}\) give the same overall error. How many samples we collect and how many times we analyze each sample are determined by other concerns, such as the cost of collecting and analyzing samples, and the amount of available sample. An analytical method has a relative sampling variance of 0.40% and a relative method variance of 0.070%. Evaluate the percent relative error (\(\alpha = 0.05\)) if you collect 5 samples and analyze each twice, and if you collect 2 samples and analyze each 5 times. Both sampling strategies require a total of 10 analyses.
The value of \(t(0.05, 9)\) is 2.262. Using Equation \ref{7.6}, the relative error for the first sampling strategy is \[e=2.262 \sqrt{\frac{0.40}{5}+\frac{0.070}{5 \times 2}}=0.67 \% \nonumber\] and that for the second sampling strategy is \[e=2.262 \sqrt{\frac{0.40}{2}+\frac{0.070}{2 \times 5}}=1.0 \% \nonumber\] Because the method variance is smaller than the sampling variance, we obtain a smaller relative error if we collect more samples and analyze each sample fewer times. An analytical method has a relative sampling variance of 0.10% and a relative method variance of 0.20%. The cost of collecting a sample is $20 and the cost of analyzing a sample is $50. Propose a sampling strategy that provides a maximum relative error of ±0.50% (\(\alpha = 0.05\)) and a maximum cost of $700. If we collect a single sample (cost $20), then we can analyze that sample 13 times (cost $650) and stay within our budget. For this scenario, the percent relative error is \[e=t \sqrt{\frac{s_{samp}^{2}}{n_{samp}} + \frac{s_{meth}^{2}}{n_{sam p} n_{rep}}} = 2.179 \sqrt{\frac{0.10}{1}+\frac{0.20}{1 \times 13}}=0.74 \% \nonumber\] where \(t(0.05, 12)\) is 2.179. Because this percent relative error is larger than ±0.50%, this is not a suitable sampling strategy. Next, we try two samples (cost $40), analyzing each six times (cost $600). For this scenario, the percent relative error is \[e=t \sqrt{\frac{s_{samp}^{2}}{n_{samp}} + \frac{s_{meth}^{2}}{n_{sam p} n_{rep}}} = 2.2035 \sqrt{\frac{0.10}{2}+\frac{0.20}{2 \times 6}}=0.57 \% \nonumber\] where \(t(0.05, 11)\) is 2.2035. Because this percent relative error is larger than ±0.50%, this also is not a suitable sampling strategy. Next we try three samples (cost $60), analyzing each four times (cost $600). For this scenario, the percent relative error is \[e=t \sqrt{\frac{s_{samp}^{2}}{n_{samp}} + \frac{s_{meth}^{2}}{n_{sam p} n_{rep}}} = 2.2035 \sqrt{\frac{0.10}{3}+\frac{0.20}{3 \times 4}}=0.49 \% \nonumber\] where \(t(0.05, 11)\) is 2.2035.
Because both the total cost ($660) and the percent relative error meet our requirements, this is a suitable sampling strategy. There are other suitable sampling strategies that meet both goals. The strategy that requires the least expense is to collect eight samples, analyzing each once for a total cost of $560 and a percent relative error of ±0.46%. Collecting 10 samples and analyzing each one time gives a percent relative error of ±0.39% at a cost of $700.
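The strategies above can be checked numerically. The Python sketch below is ours: it evaluates Equation \ref{7.6} and the total cost for each strategy, using the \(t(0.05, df)\) values quoted in the text and falling back to \(t(0.05, \infty) = 1.960\) for other degrees of freedom.

```python
import math

# t(0.05, df) values quoted in the text; other df fall back to t(0.05, inf).
T_05 = {7: 2.365, 9: 2.262, 11: 2.2035, 12: 2.179}

def rel_error(n_samp, n_rep, s2_samp=0.10, s2_meth=0.20):
    """Equation 7.6: overall percent relative error for a strategy."""
    t = T_05.get(n_samp * n_rep - 1, 1.960)
    return t * math.sqrt(s2_samp / n_samp + s2_meth / (n_samp * n_rep))

def cost(n_samp, n_rep):
    """$20 to collect each sample, $50 for each analysis."""
    return 20 * n_samp + 50 * n_samp * n_rep

for ns, nr in [(3, 4), (8, 1), (10, 1)]:
    print(f"{ns} samples x {nr} reps: ${cost(ns, nr)}, "
          f"e = {rel_error(ns, nr):.2f}%")
# 3 samples x 4 reps: $660, e = 0.49%
# 8 samples x 1 reps: $560, e = 0.46%
# 10 samples x 1 reps: $700, e = 0.39%
```

The output reproduces the three suitable strategies discussed above; a simple double loop over \(n_{samp}\) and \(n_{rep}\) would enumerate every combination under a given budget.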
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_-_The_Central_Science_(Brown_et_al.)/24%3A_Chemistry_of_Life-_Organic_and_Biological_Chemistry/24.10%3A_Carbohydrates |
All carbohydrates consist of carbon, hydrogen, and oxygen atoms and are polyhydroxy aldehydes or ketones or are compounds that can be broken down to form such compounds. Examples of carbohydrates include starch, fiber, the sweet-tasting compounds called sugars, and structural materials such as cellulose. The term carbohydrate had its origin in a misinterpretation of the molecular formulas of many of these substances. For example, because its formula is \(\ce{C6H12O6}\), glucose was once thought to be a “carbon hydrate” with the structure \(\mathrm{C_6 \cdot 6 H_2O}\). Which compounds would be classified as carbohydrates? Green plants are capable of synthesizing glucose (\(\ce{C6H12O6}\)) from carbon dioxide (\(\ce{CO2}\)) and water (\(\ce{H2O}\)) by using solar energy in the process known as photosynthesis: \[\ce{6CO_2 + 6H_2O} + \text{686 kcal} \rightarrow \ce{C_6H_{12}O_6 + 6O_2} \nonumber\] (The 686 kcal come from solar energy.) Plants can use the glucose for energy or convert it to larger carbohydrates, such as starch or cellulose. Starch provides energy for later use, perhaps as nourishment for a plant’s seeds, while cellulose is the structural material of plants. We can gather and eat the parts of a plant that store energy—seeds, roots, tubers, and fruits—and use some of that energy ourselves. Carbohydrates are also needed for the synthesis of nucleic acids and many proteins and lipids. Animals, including humans, cannot synthesize carbohydrates from carbon dioxide and water and are therefore dependent on the plant kingdom to provide these vital compounds. We use carbohydrates not only for food (about 60%–65% by mass of the average diet) but also for clothing (cotton, linen, rayon), shelter (wood), fuel (wood), and paper (wood). The simplest carbohydrates—those that cannot be hydrolyzed to produce even smaller carbohydrates—are called monosaccharides.
Two or more monosaccharides can link together to form chains that contain from two to several hundred or thousand monosaccharide units. Prefixes are used to indicate the number of such units in the chains. Disaccharide molecules have two monosaccharide units, trisaccharide molecules have three units, and so on. Chains with many monosaccharide units joined together are called polysaccharides. All these so-called higher saccharides can be hydrolyzed back to their constituent monosaccharides. Compounds that cannot be hydrolyzed will not react with water to form two or more smaller compounds. Carbohydrates are an important group of biological molecules that includes sugars and starches. Photosynthesis is the process by which plants use energy from sunlight to synthesize carbohydrates. A monosaccharide is the simplest carbohydrate and cannot be hydrolyzed to produce a smaller carbohydrate molecule. Disaccharides contain two monosaccharide units, and polysaccharides contain many monosaccharide units.
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Equilibria/Heterogeneous_Equilibria/The_Bends |
The bends is an illness that arises from the rapid release of nitrogen gas from the bloodstream and is caused by bubbles forming in the blood and other tissues when a diver ascends to the surface of the ocean too rapidly. It is also referred to as Caisson sickness, decompression sickness (DCS), and Divers’ Disease. As divers descend into the ocean, the external pressure on their bodies increases by about 1 atm every 10.06 m. To balance this it is necessary to increase the pressure of the air they breathe from tanks or pumped to them from the surface so that their chests and lungs do not collapse. Unfortunately, our bodies aren’t used to the pressurized air (because we normally breathe air under normal atmospheric conditions). With higher air pressure in the lungs, Henry’s Law tells us that gases such as nitrogen, helium (when used in diving gas mixtures) and oxygen become increasingly soluble in the blood. Unlike oxygen, which is metabolized, nitrogen and helium build up throughout the body. When divers want to emerge from the water, they have to make sure they don’t ascend to the surface too quickly because they risk numerous bubbles forming as the nitrogen/helium re-equilibrates, much as when a pressurized bottle of soda is suddenly opened. When nitrogen (\(\ce{N2}\)) gas forms bubbles, it accumulates and saturates the muscles and blood, causing pain. Called the bends, this condition can also cause injuries involving the nervous system. The solubility of a gas is the ability of the gas to dissolve in a solvent (in our case, blood, which although it contains organic components is essentially an aqueous solution). Both temperature and pressure affect the solubility of a gas. English chemist William Henry discovered that as the pressure increases, the solubility of a gas increases. \[ C =k P_{gas} \] As an example, determine the Henry’s Law constant, \(k\), given that the aqueous solubility of \(\ce{N2}\) at 10 degrees Celsius and 1 atm is 11.5 mL \(\ce{N2}\)/L.
\[ k= \dfrac {11.5 \ \mathrm{mL \ N_2/L}}{1 \ \mathrm{atm}} \] Now if the pressure of \(\ce{N2}\) increases to 5 atm: \[ P_{\mathrm{N_2}}=\dfrac {C}{k} \] \[ 5 \ \mathrm{atm} =\dfrac {C}{11.5 \ \mathrm{mL \ N_2 / L} \cdot \mathrm{atm}^{-1}} \] Solving for \(C\) gives C = 57.5 mL \(\ce{N2}\)/L. Thus, as the pressure increases from 1 atm to 5 atm, the solubility of the \(\ce{N2}\) gas increases from 11.5 to 57.5 mL \(\ce{N2}\)/L. This supports Henry’s Law. Most symptoms occur within 24 hours after decompression, but can occur up to 3 days after.
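Henry's Law calculations like the one above are easy to script. A minimal Python sketch (the function name is ours) using the \(\ce{N2}\) constant derived above:

```python
# Henry's law: C = k * P_gas, with k from the solubility data above
# (11.5 mL N2/L at 1 atm and 10 degrees Celsius).
k = 11.5 / 1.0  # (mL N2 per L of solution) per atm

def n2_solubility(p_atm):
    """Dissolved N2, in mL per L of solution, at pressure p_atm."""
    return k * p_atm

print(n2_solubility(1))  # 11.5 mL/L at the surface (1 atm)
print(n2_solubility(5))  # 57.5 mL/L when breathing air at 5 atm
```

The five-fold pressure increase produces the five-fold increase in dissolved \(\ce{N2}\) worked out above, which is exactly the excess gas that comes out of solution if a diver ascends too quickly.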
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/04%3A_Evaluating_Analytical_Data/4.04%3A_The_Distribution_of_Measurements_and_Results |
Earlier we reported results for a determination of the mass of a circulating United States penny, obtaining a mean of 3.117 g and a standard deviation of 0.051 g. Table 4.4.1
shows results for a second, independent determination of a penny’s mass, as well as the data from the first experiment. Although the means and standard deviations for the two experiments are similar, they are not identical. The difference between the two experiments raises some interesting questions. Are the results for one experiment better than the results for the other experiment? Do the two experiments provide equivalent estimates for the mean and the standard deviation? What is our best estimate of a penny’s expected mass? To answer these questions we need to understand how we might predict the properties of all pennies using the results from an analysis of a small sample of pennies. We begin by making a distinction between populations and samples. A population is the set of all objects in the system we are investigating. For the data in Table 4.4.1
, the population is all United States pennies in circulation. This population is so large that we cannot analyze every member of the population. Instead, we select and analyze a limited subset, or sample, of the population. The data in Table 4.4.1
, for example, shows the results for two such samples drawn from the larger population of all circulating United States pennies. Table 4.4.1
provides the means and the standard deviations for two samples of circulating United States pennies. What do these samples tell us about the population of pennies? What is the largest possible mass for a penny? What is the smallest possible mass? Are all masses equally probable, or are some masses more common? To answer these questions we need to know how the masses of individual pennies are distributed about the population’s average mass. We represent the distribution of a population by plotting the probability or frequency of obtaining a specific result as a function of the possible results. Such plots are called probability distributions. There are many possible probability distributions; in fact, the probability distribution can take any shape depending on the nature of the population. Fortunately many chemical systems display one of several common probability distributions. Two of these distributions, the binomial distribution and the normal distribution, are discussed in this section. The binomial distribution describes a population in which the result is the number of times a particular event occurs during a fixed number of trials. Mathematically, the binomial distribution is defined as \[P(X, N) = \frac {N!} {X!(N - X)!} \times p^X \times (1 - p)^{N - X} \nonumber\] where \(P(X, N)\) is the probability that an event occurs \(X\) times during \(N\) trials, and \(p\) is the event’s probability for a single trial. If you flip a coin five times, \(P(2,5)\) is the probability the coin will turn up “heads” exactly twice. The term \(N!\) reads as \(N\)-factorial and is the product \(N \times (N – 1) \times (N – 2) \times \cdots \times 1\). For example, 4! is \(4 \times 3 \times 2 \times 1 = 24\). Your calculator probably has a key for calculating factorials. A binomial distribution has well-defined measures of central tendency and spread. The expected mean value is \[\mu = Np \nonumber\] and the expected spread is given by the variance \[\sigma^2 = Np(1 - p) \nonumber\] or the standard deviation.
\[\sigma = \sqrt{Np(1 - p)} \nonumber\] The binomial distribution describes a population whose members have only specific, discrete values. When you roll a die, for example, the possible values are 1, 2, 3, 4, 5, or 6. A roll of 3.45 is not possible. As shown in Worked Example 4.4.1
, one example of a chemical system that obeys the binomial distribution is the probability of finding a particular isotope in a molecule. Carbon has two stable, non-radioactive isotopes, \(^{12}\text{C}\) and \(^{13}\text{C}\), with relative isotopic abundances of, respectively, 98.89% and 1.11%.
(a) What are the mean and the standard deviation for the number of \(^{13}\text{C}\) atoms in a molecule of cholesterol (\(\text{C}_{27}\text{H}_{46}\text{O}\))?
(b) What is the probability that a molecule of cholesterol has no atoms of \(^{13}\text{C}\)? Solution The probability of finding an atom of \(^{13}\text{C}\) in a molecule of cholesterol follows a binomial distribution, where \(X\) is the number of \(^{13}\text{C}\) atoms, \(N\) is the number of carbon atoms in a molecule of cholesterol, and \(p\) is the probability that an atom of carbon is \(^{13}\text{C}\). For (a), the mean number of \(^{13}\text{C}\) atoms in a molecule of cholesterol is \[\mu = Np = 27 \times 0.0111 = 0.300 \nonumber\] with a standard deviation of \[\sigma = \sqrt{Np(1 - p)} = \sqrt{27 \times 0.0111 \times (1 - 0.0111)} = 0.544 \nonumber\] For (b), the probability of finding a molecule of cholesterol without an atom of \(^{13}\text{C}\) is \[P(0, 27) = \frac {27!} {0! \: (27 - 0)!} \times (0.0111)^0 \times (1 - 0.0111)^{27 - 0} = 0.740 \nonumber\] There is a 74.0% probability that a molecule of cholesterol will not have an atom of \(^{13}\text{C}\), a result consistent with the observation that the mean number of \(^{13}\text{C}\) atoms per molecule of cholesterol, 0.300, is less than one. A portion of the binomial distribution for atoms of \(^{13}\text{C}\) in cholesterol is shown in Figure 4.4.1
. Note in particular that there is little probability of finding more than two atoms of \(^{13}\text{C}\) in any molecule of cholesterol. A binomial distribution describes a population whose members have only certain discrete values. This is the case with the number of \(^{13}\text{C}\) atoms in cholesterol. A molecule of cholesterol, for example, can have two \(^{13}\text{C}\) atoms, but it cannot have 2.5 atoms of \(^{13}\text{C}\). A population is continuous if its members may take on any value. The efficiency of extracting cholesterol from a sample, for example, can take on any value between 0% (no cholesterol is extracted) and 100% (all cholesterol is extracted). The most common continuous distribution is the Gaussian, or normal distribution, the equation for which is \[f(X) = \frac {1} {\sqrt{2 \pi \sigma^2}} e^{- \frac {(X - \mu)^2} {2 \sigma^2}} \nonumber\] where \(\mu\) is the expected mean for a population with \(n\) members \[\mu = \frac {\sum_{i = 1}^n X_i} {n} \nonumber\] and \(\sigma^2\) is the population’s variance. \[\sigma^2 = \frac {\sum_{i = 1}^n (X_i - \mu)^2} {n} \label{4.1}\] Examples of three normal distributions, each with an expected mean of 0 and with variances of 25, 100, or 400, respectively, are shown in Figure 4.4.2
. Two features of these normal distribution curves deserve attention. First, note that each normal distribution has a single maximum that corresponds to \(\mu\), and that the distribution is symmetrical about this value. Second, increasing the population’s variance increases the distribution’s spread and decreases its height; the area under the curve, however, is the same for all three distributions. The area under a normal distribution curve is an important and useful property as it is equal to the probability of finding a member of the population within a particular range of values. In Figure 4.4.2
, for example, 99.99% of the population shown in curve (a) have values of \(X\) between –20 and +20. For curve (c), 68.26% of the population’s members have values of \(X\) between –20 and +20. Because a normal distribution depends solely on \(\mu\) and \(\sigma^2\), the probability of finding a member of the population between any two limits is the same for all normally distributed populations. Figure 4.4.3
, for example, shows that 68.26% of the members of a normal distribution have a value within the range \(\mu \pm 1 \sigma\), and that 95.44% of the population’s members have values within the range \(\mu \pm 2 \sigma\). Only 0.27% of a population’s members have values that lie more than \(\pm 3 \sigma\) from the expected mean. Additional ranges and probabilities are gathered together in the probability table included in the appendix. As shown in Example 4.4.2
, if we know the mean and the standard deviation for a normally distributed population, then we can determine the percentage of the population between any defined limits. The amount of aspirin in the analgesic tablets from a particular manufacturer is known to follow a normal distribution with \(\mu\) = 250 mg and \(\sigma\) = 5 mg. In a random sample of tablets from the production line, what percentage are expected to contain between 243 and 262 mg of aspirin? We do not determine directly the percentage of tablets between 243 mg and 262 mg of aspirin. Instead, we first find the percentage of tablets with less than 243 mg of aspirin and the percentage of tablets having more than 262 mg of aspirin. Subtracting these results from 100% gives the percentage of tablets that contain between 243 mg and 262 mg of aspirin. To find the percentage of tablets with less than 243 mg of aspirin or more than 262 mg of aspirin we calculate the deviation, \(z\), of each limit from \(\mu\) in terms of the population’s standard deviation, \(\sigma\) \[z = \frac {X - \mu} {\sigma} \nonumber\] where \(X\) is the limit in question. The deviation for the lower limit is \[z_{lower} = \frac {243 - 250} {5} = -1.4 \nonumber\] and the deviation for the upper limit is \[z_{upper} = \frac {262 - 250} {5} = +2.4 \nonumber\] Using the probability table in the appendix, we find that the percentage of tablets with less than 243 mg of aspirin is 8.08%, and that the percentage of tablets with more than 262 mg of aspirin is 0.82%. Therefore, the percentage of tablets containing between 243 and 262 mg of aspirin is \[100.00 \% - 8.08 \% - 0.82 \% = 91.10 \% \nonumber\] Figure 4.4.4
shows the distribution of aspirin in the tablets, with the area in blue showing the percentage of tablets containing between 243 mg and 262 mg of aspirin. What percentage of aspirin tablets will contain between 240 mg and 245 mg of aspirin if the population’s mean is 250 mg and the population’s standard deviation is 5 mg? To find the percentage of tablets that contain less than 245 mg of aspirin we first calculate the deviation, \(z\), \[z = \frac {245 - 250} {5} = -1.00 \nonumber\] and then look up the corresponding probability in the appendix, obtaining a value of 15.87%. To find the percentage of tablets that contain less than 240 mg of aspirin we find that \[z = \frac {240 - 250} {5} = -2.00 \nonumber\] which corresponds to 2.28%. The percentage of tablets containing between 240 and 245 mg of aspirin is 15.87% – 2.28% = 13.59%. If we select at random a single member from a population, what is its most likely value? This is an important question, and, in one form or another, it is at the heart of any analysis in which we wish to extrapolate from a sample to the sample’s parent population. One of the most important features of a population’s probability distribution is that it provides a way to answer this question. Figure 4.4.3 shows that for a normal distribution, 68.26% of the population’s members have values within the range \(\mu \pm 1\sigma\). Stating this another way, there is a 68.26% probability that the result for a single sample drawn from a normally distributed population is in the interval \(\mu \pm 1\sigma\). In general, if we select a single sample we expect that its value, \(X_i\), is in the range \[X_i = \mu \pm z \sigma \label{4.2}\] where the value of \(z\) reflects how confident we are in assigning this range. Values reported in this fashion are called confidence intervals. Equation \ref{4.2}, for example, is the confidence interval for a single member of a population. Table 4.4.2
gives the confidence intervals for several values of \(z\). For reasons discussed later in the chapter, a 95% confidence level is a common choice in analytical chemistry. When \(z = 1\), we call this the 68.26% confidence interval. What is the 95% confidence interval for the amount of aspirin in a single analgesic tablet drawn from a population for which \(\mu\) is 250 mg and for which \(\sigma\) is 5 mg? Using Table 4.4.2
, we find that \(z\) is 1.96 for a 95% confidence interval. Substituting this into Equation \ref{4.2} gives the confidence interval for a single tablet as \[X_i = \mu \pm 1.96\sigma = 250 \text{ mg} \pm (1.96 \times 5) = 250 \text{ mg} \pm 10 \text{ mg} \nonumber\] A confidence interval of 250 mg ± 10 mg means that 95% of the tablets in the population contain between 240 and 260 mg of aspirin. Alternatively, we can rewrite Equation \ref{4.2} so that it gives the confidence interval for \(\mu\) based on the population’s standard deviation and the value of a single member drawn from the population. \[\mu = X_i \pm z \sigma \label{4.3}\] The population standard deviation for the amount of aspirin in a batch of analgesic tablets is known to be 7 mg of aspirin. If you randomly select and analyze a single tablet and find that it contains 245 mg of aspirin, what is the 95% confidence interval for the population’s mean? The 95% confidence interval for the population mean is given as \[\mu = X_i \pm z \sigma = 245 \text{ mg} \pm (1.96 \times 7) \text{ mg} = 245 \text{ mg} \pm 14 \text{ mg} \nonumber\] Therefore, based on this one sample, we estimate that there is a 95% probability that the population’s mean, \(\mu\), lies within the range of 231 mg to 259 mg of aspirin. Note the qualification that the prediction for \(\mu\) is based on one sample; a different sample likely will give a different 95% confidence interval. Our result here, therefore, is an estimate for \(\mu\) based on this one sample. It is unusual to predict the population’s expected mean from the analysis of a single sample; instead, we collect \(n\) samples drawn from a population of known \(\sigma\), and report the mean, \(\overline{X}\).
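The interval calculations in the aspirin examples are simple enough to check numerically. The sketch below (Python, standard library only; the function name is invented for illustration, and the numbers are taken from the worked examples above) evaluates \(X_i = \mu \pm z\sigma\) and \(\mu = X_i \pm z\sigma\).

```python
import math

def confidence_interval(center, sigma, n=1, z=1.96):
    """Return (low, high) for center ± z·sigma/sqrt(n).

    With n = 1 this reproduces the single-member intervals of
    Equations 4.2 and 4.3; with n > 1 and center set to a sample
    mean it gives the interval for mu when sigma is known.
    """
    half_width = z * sigma / math.sqrt(n)
    return center - half_width, center + half_width

# Single tablet drawn from mu = 250 mg, sigma = 5 mg:
print(confidence_interval(250, 5))   # approximately (240.2, 259.8): 250 mg ± 10 mg
# Estimate of mu from one tablet that assays at 245 mg, sigma = 7 mg:
print(confidence_interval(245, 7))   # approximately (231.3, 258.7): 245 mg ± 14 mg
```

Increasing `n` shows how the interval narrows as more tablets are averaged, which is the effect of the standard error of the mean.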
The standard deviation of the mean, \(\sigma_{\overline{X}}\), which also is known as the standard error of the mean, is \[\sigma_{\overline{X}} = \frac {\sigma} {\sqrt{n}} \nonumber\] The confidence interval for the population’s mean, therefore, is \[\mu = \overline{X} \pm \frac {z \sigma} {\sqrt{n}} \nonumber\] What is the 95% confidence interval for the analgesic tablets in the previous example, if an analysis of five tablets yields a mean of 245 mg of aspirin? In this case the confidence interval is \[\mu = 245 \text{ mg} \pm \frac {1.96 \times 7} {\sqrt{5}} \text{ mg} = 245 \text{ mg} \pm 6 \text{ mg} \nonumber\] We estimate a 95% probability that the population’s mean is between 239 mg and 251 mg of aspirin. As expected, the confidence interval when using the mean of five samples is smaller than that for a single sample. An analysis of seven aspirin tablets from a population known to have a standard deviation of 5 mg gives the following results in mg aspirin per tablet: \(246 \quad 249 \quad 255 \quad 251 \quad 251 \quad 247 \quad 250\) What is the 95% confidence interval for the population’s expected mean? The mean is 249.9 mg aspirin/tablet for this sample of seven tablets. For a 95% confidence interval the value of \(z\) is 1.96, which makes the confidence interval \[249.9 \pm \frac {1.96 \times 5} {\sqrt{7}} = 249.9 \pm 3.7 \approx 250 \text{ mg} \pm 4 \text{ mg} \nonumber\] In Examples 4.4.2
–4.4.5
we assumed that the amount of aspirin in analgesic tablets is normally distributed. Without analyzing every member of the population, how can we justify this assumption? In a situation where we cannot study the whole population, or when we cannot predict the mathematical form of a population’s probability distribution, we must deduce the distribution from a limited sampling of its members. Let’s return to the problem of determining a penny’s mass to explore further the relationship between a population’s distribution and the distribution of a sample drawn from that population. The two sets of data in Table 4.4.1
are too small to provide a useful picture of a sample’s distribution, so we will use the larger sample of 100 pennies shown in Table 4.4.3
. The mean and the standard deviation for this sample are 3.095 g and 0.0346 g, respectively. A histogram (Figure 4.4.5
) is a useful way to examine the data in Table 4.4.3
. To create the histogram, we divide the sample into intervals, by mass, and determine the percentage of pennies within each interval (Table 4.4.4
). Note that the sample’s mean is the midpoint of the histogram. Figure 4.4.5
also includes a normal distribution curve for the population of pennies, based on the assumption that the mean and the variance for the sample are appropriate estimates for the population’s mean and variance. Although the histogram is not perfectly symmetric in shape, it provides a good approximation of the normal distribution curve, suggesting that the sample of 100 pennies is normally distributed. It is easy to imagine that the histogram will approximate more closely a normal distribution if we include additional pennies in our sample. We will not offer a formal proof that the sample of pennies in Table 4.4.3
and the population of all circulating U. S. pennies are normally distributed; however, the evidence in Figure 4.4.5
strongly suggests this is true. Although we cannot claim that the results of all experiments are normally distributed, in most cases our data are normally distributed. According to the central limit theorem, when a measurement is subject to a variety of indeterminate errors, the results for that measurement will approximate a normal distribution [Mark, H.; Workman, J. , , 44–48]. The central limit theorem holds true even if the individual sources of indeterminate error are not normally distributed. The chief limitation to the central limit theorem is that the sources of indeterminate error must be independent and of similar magnitude so that no one source of error dominates the final distribution. An additional feature of the central limit theorem is that a distribution of means for samples drawn from a population with any distribution will approximate closely a normal distribution if the size of each sample is sufficiently large. For example, Figure 4.4.6
shows the distribution for two samples of 10 000 drawn from a uniform distribution in which every value between 0 and 1 occurs with an equal frequency. For samples of size \(n = 1\), the resulting distribution closely approximates the population’s uniform distribution. The distribution of the means for samples of size \(n = 10\), however, closely approximates a normal distribution. You might reasonably ask whether this aspect of the central limit theorem is important as it is unlikely that we will complete 10 000 analyses, each of which is the average of 10 individual trials. This is deceiving. When we acquire a sample of soil, for example, it consists of many individual particles each of which is an individual sample of the soil. Our analysis of this sample, therefore, gives the mean for this large number of individual soil particles. Because of this, the central limit theorem is relevant. For a discussion of circumstances where the central limit theorem may not apply, see “Do You Reckon It’s Normally Distributed?”, the full reference for which is Majewsky, M.; Wagner, M.; Farlin, J. , , 408–409. Did you notice the differences between the equation for the variance of a population and the variance of a sample? If not, here are the two equations: \[\sigma^2 = \frac {\sum_{i = 1}^n (X_i - \mu)^2} {n} \nonumber\] \[s^2 = \frac {\sum_{i = 1}^n (X_i - \overline{X})^2} {n - 1} \nonumber\] Both equations measure the variance around the mean, using \(\mu\) for a population and \(\overline{X}\) for a sample. Although the equations use different measures for the mean, the intention is the same for both the sample and the population. A more interesting difference is between the denominators of the two equations. When we calculate the population’s variance we divide the numerator by the population’s size, \(n\); for the sample’s variance, however, we divide by \(n - 1\), where \(n\) is the sample’s size. Why do we divide by \(n - 1\) when we calculate the sample’s variance?
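One way to see the answer empirically is a short simulation. The sketch below (Python, standard library only; the population parameters and sample size are arbitrary choices for illustration) draws many small samples from a normal population of known variance and averages the squared deviations divided by n and by n − 1.

```python
import random

random.seed(1)
# Assumed population for the demonstration: normal, mu = 0, sigma = 5,
# so the true variance is 25.
n, trials = 5, 20_000

sum_biased = sum_unbiased = 0.0
for _ in range(trials):
    sample = [random.gauss(0, 5) for _ in range(n)]
    mean = sum(sample) / n
    ss = sum((x - mean) ** 2 for x in sample)
    sum_biased += ss / n          # divide by n
    sum_unbiased += ss / (n - 1)  # divide by n - 1

print(sum_biased / trials)    # near 20: dividing by n underestimates sigma^2
print(sum_unbiased / trials)  # near 25: dividing by n - 1 recovers sigma^2
```

Dividing by n systematically underestimates \(\sigma^2\) by the factor \((n-1)/n\) because the sample mean is itself fitted to the same data; dividing by n − 1 compensates for that lost degree of freedom, which is the point the following paragraph develops.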
A variance is the average squared deviation of individual results relative to the mean. When we calculate an average we divide the sum by the number of independent measurements, or degrees of freedom, in the calculation. For the population’s variance, the degrees of freedom is equal to the population’s size, \(n\). When we measure every member of a population we have complete information about the population. When we calculate the sample’s variance, however, we replace \(\mu\) with \(\overline{X}\), which we also calculate using the same data. If there are \(n\) members in the sample, we can deduce the value of the \(n\)th member from the remaining \(n - 1\) members and the mean. For example, if \(n = 5\) and we know that the first four samples are 1, 2, 3 and 4, and that the mean is 3, then the fifth member of the sample must be \[X_5 = (\overline{X} \times n) - X_1 - X_2 - X_3 - X_4 = (3 \times 5) - 1 - 2 - 3 - 4 = 5 \nonumber\] Because we have just four independent measurements, we have lost one degree of freedom. Using \(n - 1\) in place of \(n\) when we calculate the sample’s variance ensures that \(s^2\) is an unbiased estimator of \(\sigma^2\). Here is another way to think about degrees of freedom. We analyze samples to make predictions about the underlying population. When our sample consists of \(n\) measurements we cannot make more than \(n\) independent predictions about the population. Each time we estimate a parameter, such as the population’s mean, we lose a degree of freedom. If there are \(n\) degrees of freedom for calculating the sample’s mean, then \(n - 1\) degrees of freedom remain when we calculate the sample’s variance. Earlier we introduced the confidence interval as a way to report the most probable value for a population’s mean, \(\mu\) \[\mu = \overline{X} \pm \frac {z \sigma} {\sqrt{n}} \label{4.4}\] where \(\overline{X}\) is the mean for a sample of size \(n\), and \(\sigma\) is the population’s standard deviation. For most analyses we do not know the population’s standard deviation.
We can still calculate a confidence interval, however, if we make two modifications to Equation \ref{4.4}. The first modification is straightforward—we replace the population’s standard deviation, \(\sigma\), with the sample’s standard deviation, \(s\). The second modification is not as obvious. The values of \(z\) in the probability table are for a normal distribution, which is a function of \(\sigma^2\), not \(s^2\). Although the sample’s variance, \(s^2\), is an unbiased estimate of the population’s variance, \(\sigma^2\), the value of \(s^2\) will only rarely equal \(\sigma^2\). To account for this uncertainty in estimating \(\sigma^2\), we replace the variable \(z\) in Equation \ref{4.4} with the variable \(t\), where \(t\) is defined such that \(t \ge z\) at all confidence levels. \[\mu = \overline{X} \pm \frac {t s} {\sqrt{n}} \label{4.5}\] Values for \(t\) at the 95% confidence level are shown in Table 4.4.5
. Note that \(t\) becomes smaller as the number of degrees of freedom increases, and that it approaches \(z\) as the number of degrees of freedom approaches infinity. The larger the sample, the more closely its confidence interval (Equation \ref{4.5}) approaches the confidence interval when \(\sigma\) is known (Equation \ref{4.4}). The appendix provides additional values of \(t\) for other confidence levels. What are the 95% confidence intervals for the two samples of pennies in Table 4.4.1? The mean and the standard deviation for the first experiment are, respectively, 3.117 g and 0.051 g. Because the sample consists of seven measurements, there are six degrees of freedom. The value of \(t\), from the appendix, is 2.447. Substituting into Equation \ref{4.5} gives \[\mu = 3.117 \text{ g} \pm \frac {2.447 \times 0.051 \text{ g}} {\sqrt{7}} = 3.117 \text{ g} \pm 0.047 \text{ g} \nonumber\] For the second experiment the mean and the standard deviation are 3.081 g and 0.037 g, respectively, with four degrees of freedom. The 95% confidence interval is \[\mu = 3.081 \text{ g} \pm \frac {2.776 \times 0.037 \text{ g}} {\sqrt{5}} = 3.081 \text{ g} \pm 0.046 \text{ g} \nonumber\] Based on the first experiment, the 95% confidence interval for the population’s mean is 3.070–3.164 g. For the second experiment, the 95% confidence interval is 3.035–3.127 g. Although the two confidence intervals are not identical—remember, each confidence interval provides a different estimate for \(\mu\)—the mean for each experiment is contained within the other experiment’s confidence interval. There also is an appreciable overlap of the two confidence intervals. Both of these observations are consistent with samples drawn from the same population. Note that our comparison of these two confidence intervals at this point is somewhat vague and unsatisfying. We will return to this point in the next section, when we consider a statistical approach to comparing the results of experiments. What is the 95% confidence interval for the sample of 100 pennies in Table 4.4.3?
The mean and the standard deviation for this sample are 3.095 g and 0.0346 g, respectively. Compare your result to the confidence intervals for the samples of pennies in Table 4.4.1. With 100 pennies, we have 99 degrees of freedom for the mean. Although Table 4.4.5
does not include a value for \(t(0.05, 99)\), we can approximate its value by using the values for \(t(0.05, 60)\) and \(t(0.05, 100)\) and by assuming a linear change in its value. \[t(0.05, 99) = t(0.05, 60) - \frac {39} {40} \left\{ t(0.05, 60) - t(0.05, 100) \right\} \nonumber\] \[t(0.05, 99) = 2.000 - \frac {39} {40} \left\{ 2.000 - 1.984 \right\} = 1.9844 \nonumber\] The 95% confidence interval for the pennies is \[3.095 \pm \frac {1.9844 \times 0.0346} {\sqrt{100}} = 3.095 \text{ g} \pm 0.007 \text{ g} \nonumber\] From the earlier example, the 95% confidence intervals for the two samples in Table 4.4.1 are 3.117 g ± 0.047 g and 3.081 g ± 0.046 g. As expected, the confidence interval for the sample of 100 pennies is much smaller than that for the two smaller samples of pennies. Note, as well, that the confidence interval for the larger sample fits within the confidence intervals for the two smaller samples. There is a temptation when we analyze data simply to plug numbers into an equation, carry out the calculation, and report the result. This is never a good idea, and you should develop the habit of reviewing and evaluating your data. For example, if you analyze five samples and report an analyte’s mean concentration as 0.67 ppm with a standard deviation of 0.64 ppm, then the 95% confidence interval is \[\mu = 0.67 \text{ ppm} \pm \frac {2.776 \times 0.64 \text{ ppm}} {\sqrt{5}} = 0.67 \text{ ppm} \pm 0.79 \text{ ppm} \nonumber\] This confidence interval estimates that the analyte’s true concentration is between –0.12 ppm and 1.46 ppm. Including a negative concentration within the confidence interval should lead you to reevaluate your data or your conclusions. A closer examination of your data may convince you that the standard deviation is larger than expected, making the confidence interval too broad, or you may conclude that the analyte’s concentration is too small to report with confidence. We will return to the topic of detection limits near the end of this chapter.
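The interpolation and confidence-interval calculations above take only a few lines of code to reproduce. This is a sketch (Python, standard library only; the helper names are invented, and the tabulated t values come from the text, so the interpolated value is an approximation to the exact t distribution):

```python
import math

def t_interp(t_lo, t_hi, df_lo, df_hi, df):
    """Linearly interpolate between two tabulated t values."""
    return t_lo + (df - df_lo) / (df_hi - df_lo) * (t_hi - t_lo)

def conf_interval(mean, s, n, t):
    """mean ± t*s/sqrt(n), as in Equation 4.5."""
    half = t * s / math.sqrt(n)
    return mean - half, mean + half

t99 = t_interp(2.000, 1.984, 60, 100, 99)      # approximately 1.9844
print(conf_interval(3.095, 0.0346, 100, t99))  # approximately 3.095 g ± 0.007 g

# A sanity check worth automating: does the interval include an
# impossible value, such as a negative concentration?
low, high = conf_interval(0.67, 0.64, 5, 2.776)
print(low < 0)  # a lower limit below zero signals data worth reexamining
```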
Here is a second example of why you should closely examine your data: results obtained on samples drawn at random from a normally distributed population must be random. If the results for a sequence of samples show a regular pattern or trend, then the underlying population either is not normally distributed or there is a time-dependent determinate error. For example, if we randomly select 20 pennies and find that the mass of each penny is greater than that for the preceding penny, then we might suspect that our balance is drifting out of calibration.
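Closing the loop on this section, the binomial calculation that opened it (¹³C atoms in cholesterol) is also easy to verify numerically. A sketch using only the standard library; the constants come from the worked example:

```python
from math import comb, sqrt

N, p = 27, 0.0111   # carbon atoms per cholesterol molecule; 13C abundance

def binom_pmf(x, n, prob):
    """P(x, n) for a binomial distribution."""
    return comb(n, x) * prob**x * (1 - prob)**(n - x)

mu = N * p                     # expected 13C atoms per molecule, about 0.300
sigma = sqrt(N * p * (1 - p))  # about 0.544
p_zero = binom_pmf(0, N, p)    # probability of no 13C atoms, about 0.740

print(round(mu, 3), round(sigma, 3), round(p_zero, 3))
```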
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Basic_Principles_of_Organic_Chemistry_(Roberts_and_Caserio)/04%3A_Alkanes/4.03%3A_Chemical_Reactions_of_Alkanes._Combustion_of_Alkanes |
As a class, alkanes generally are unreactive. The names saturated hydrocarbon, or "paraffin," which literally means "not enough affinity" [L. parum, not enough, \(+\) affinis, affinity], arise because their chemical "affinity" for most common reagents may be regarded as "saturated" or satisfied. Thus none of the \(C-H\) or \(C-C\) bonds in a typical saturated hydrocarbon, for example ethane, are attacked at ordinary temperatures by a strong acid, such as sulfuric acid (\(H_2SO_4\)), or by an oxidizing agent, such as bromine (in the dark), oxygen, or potassium permanganate (\(KMnO_4\)). Under ordinary conditions, ethane is similarly stable to reducing agents such as hydrogen, even in the presence of catalysts such as platinum, palladium, or nickel. However, all saturated hydrocarbons are attacked by oxygen at elevated temperatures and, if oxygen is in excess, complete combustion to carbon dioxide and water occurs. Vast quantities of hydrocarbons from petroleum are utilized as fuels for the production of heat and power by combustion, although it is becoming quite clear that few of the nations of the world are going to continue to satisfy their needs (or desires) for energy through the use of petroleum the way it has been possible in the past. Petroleums differ considerably in composition depending on their source. However, a representative petroleum\(^1\) on distillation yields the following fractions: The way in which petroleum is refined and the uses for it depend very much on supply and demand, which always are changing. However, the situation for the United States in 1974 is summarized in Figure 4-3, which shows roughly how much of one barrel of oil (160 liters) is used for specific purposes. In the past three decades, petroleum technology has outpaced coal technology, and we now are reliant on petroleum as the major source of fuels and chemicals. Faced with dwindling oil reserves, however, it is inevitable that coal again will become a major source of raw materials.
When coal is heated at high temperatures in the absence of air, it carbonizes to coke and gives off a gaseous mixture of compounds. Some of these gases condense to a black viscous oil (coal tar), others produce an aqueous condensate called ammoniacal liquor, and some remain gaseous (coal gas). The residue is coke, which is used both as a fuel and as a source of carbon for the production of steel. The major component in coal gas is methane. Coal tar is an incredible mixture of compounds, mostly hydrocarbons, a substantial number of which are arenes. Coal and coal tar can be utilized to produce alkanes, but the technology involved is more complex and costly than petroleum refining. It seems inevitable that the cost of hydrocarbon fuel will continue to rise as supply problems become more difficult. And there is yet no answer to what will happen when the world's limited quantities of petroleum and coal are exhausted. \(^1\)See F. D. Rossini, "Hydrocarbons in Petroleum," , 554 (1960). and (1977)
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Analytical_Chemiluminescence/4%3A_Instrumentation/4.02%3A_Flow_Injection_Analysis_(FIA) |
Batch techniques for measuring the intensity of chemiluminescence are sometimes used, some of which incorporate automation to improve sample throughput, but flow methods are applied much more often. A suitable flow injection manifold is shown in figure D2.1. Flow injection manifolds are constructed from polytetrafluoroethylene (PTFE) tubing to contain the sample while it is chemically or physically modified prior to detection. Liquid is usually transported from reservoirs by means of a peristaltic pump with suitable tubing. An accurately measured volume of sample is reproducibly introduced into a carrier stream by means of a rotary injection valve. The detector is connected to some means of data storage. The signal depends on the rate of the reaction producing it and on flow-rate, tubing dimensions, reagent addition order and flow-cell volume, which should be large enough to ensure that a high proportion of the total emission enters the detector; optimisation will favour conditions that lead to emission occurring during the passage of the sample through the flow-cell. The flow-cell should be so positioned as to make this possible, e.g., directly in front of the window of a photomultiplier tube and in a box that excludes ambient light. FIA has important advantages over batch methods. It makes use of simple and relatively inexpensive apparatus, which is readily miniaturised and has great potential for adaptation and modification. Easy operation and high sampling rates are possible.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_-_The_Central_Science_(Brown_et_al.)/24%3A_Chemistry_of_Life-_Organic_and_Biological_Chemistry/24.11%3A_Nucleic_Acids |
The repeating, or monomer, units that are linked together to form nucleic acids are known as nucleotides. The deoxyribonucleic acid (DNA) of a typical mammalian cell contains about 3 × 10⁹ nucleotides. Nucleotides can be further broken down to phosphoric acid (H₃PO₄), a pentose sugar (a sugar with five carbon atoms), and a nitrogenous base (a base containing nitrogen atoms). \[\mathrm{nucleic\: acids \underset{down\: into}{\xrightarrow{can\: be\: broken}} nucleotides \underset{down\: into}{\xrightarrow{can\: be\: broken}} H_3PO_4 + nitrogen\: base + pentose\: sugar} \nonumber \] If the pentose sugar is ribose, the nucleotide is more specifically referred to as a ribonucleotide, and the resulting nucleic acid is ribonucleic acid (RNA). If the sugar is 2-deoxyribose, the nucleotide is a deoxyribonucleotide, and the nucleic acid is deoxyribonucleic acid (DNA). The nitrogenous bases found in nucleotides are classified as pyrimidines or purines. Pyrimidines are heterocyclic amines with two nitrogen atoms in a six-member ring and include uracil, thymine, and cytosine. Purines are heterocyclic amines consisting of a pyrimidine ring fused to a five-member ring with two nitrogen atoms. Adenine and guanine are the major purines found in nucleic acids (Figure \(\PageIndex{1}\)). The formation of a bond between C1′ of the pentose sugar and N1 of the pyrimidine base or N9 of the purine base joins the pentose sugar to the nitrogenous base. In the formation of this bond, a molecule of water is removed. Table \(\PageIndex{1}\) summarizes the similarities and differences in the composition of nucleotides in DNA and RNA. The numbering convention is that primed numbers designate the atoms of the pentose ring, and unprimed numbers designate the atoms of the purine or pyrimidine ring. The names and structures of the major ribonucleotides and one of the deoxyribonucleotides are given in Figure \(\PageIndex{2}\). Apart from being the monomer units of DNA and RNA, the nucleotides and some of their derivatives have other functions as well.
Adenosine diphosphate (ADP) and adenosine triphosphate (ATP), shown in Figure \(\PageIndex{3}\), have a role in cell metabolism. Moreover, a number of coenzymes, including flavin adenine dinucleotide (FAD), nicotinamide adenine dinucleotide (NAD⁺), and coenzyme A, contain adenine nucleotides as structural components. Nucleotides are composed of phosphoric acid, a pentose sugar (ribose or deoxyribose), and a nitrogen-containing base (adenine, cytosine, guanine, thymine, or uracil). Ribonucleotides contain ribose, while deoxyribonucleotides contain deoxyribose.
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Aldehydes_and_Ketones/Properties_of_Aldehydes_and_Ketones/Properties_of_Aldehydes_and_Ketones |
This page explains what aldehydes and ketones are, and looks at the way their bonding affects their reactivity. It also considers their simple physical properties such as solubility and boiling points. Aldehydes and ketones are simple compounds which contain a carbonyl group - a carbon-oxygen double bond. They are simple in the sense that they don't have other reactive groups like -OH or -Cl attached directly to the carbon atom in the carbonyl group - as you might find, for example, in carboxylic acids containing -COOH. In aldehydes, the carbonyl group has a hydrogen atom attached to it together with either a second hydrogen atom or, more commonly, a hydrocarbon group which might be an alkyl group or one containing a benzene ring. For the purposes of this section, we shall ignore those containing benzene rings. Notice that these all have exactly the same end to the molecule. All that differs is the complexity of the other group attached. When you are writing formulae for these, the aldehyde group (the carbonyl group with the hydrogen atom attached) is always written as -CHO - never as COH. That could easily be confused with an alcohol. Ethanal, for example, is written as CH₃CHO; methanal as HCHO. The name counts the total number of carbon atoms in the longest chain - including the one in the carbonyl group. If you have side groups attached to the chain, notice that you always count from the carbon atom in the carbonyl group as being number 1. In ketones, the carbonyl group has two hydrocarbon groups attached. Again, these can be either alkyl groups or ones containing benzene rings. Again, we'll concentrate on those containing alkyl groups just to keep things simple. Notice that ketones never have a hydrogen atom attached to the carbonyl group. Propanone is normally written CH₃COCH₃. Notice the need for numbering in the longer ketones.
In pentanone, the carbonyl group could be in the middle of the chain or next to the end - giving either pentan-3-one or pentan-2-one. Oxygen is far more electronegative than carbon and so has a strong tendency to pull electrons in a carbon-oxygen bond towards itself. One of the two pairs of electrons that make up a carbon-oxygen double bond is even more easily pulled towards the oxygen. That makes the carbon-oxygen double bond very highly polar. The slightly positive carbon atom in the carbonyl group can be attacked by nucleophiles. A nucleophile is a negatively charged ion (for example, a cyanide ion, CN⁻), or a slightly negatively charged part of a molecule (for example, the lone pair on a nitrogen atom in ammonia, NH₃). During the reaction, the carbon-oxygen double bond gets broken. The net effect of all this is that the carbonyl group undergoes addition reactions, often followed by the loss of a water molecule. This gives a reaction known as addition-elimination or condensation. You will find examples of simple addition reactions and addition-elimination if you explore the aldehydes and ketones menu (link at the bottom of the page). Both aldehydes and ketones contain a carbonyl group. That means that their reactions are very similar in this respect. An aldehyde differs from a ketone by having a hydrogen atom attached to the carbonyl group. This makes the aldehydes very easy to oxidise. For example, ethanal, CH₃CHO, is very easily oxidised to either ethanoic acid, CH₃COOH, or ethanoate ions, CH₃COO⁻. Ketones don't have that hydrogen atom and are resistant to oxidation. They are only oxidised by powerful oxidising agents which have the ability to break carbon-carbon bonds. You will find the oxidation of aldehydes and ketones discussed if you follow a link from the aldehydes and ketones menu (see the bottom of this page). Methanal is a gas (boiling point -21°C), and ethanal has a boiling point of +21°C. That means that ethanal boils at close to room temperature.
The other aldehydes and the ketones are liquids, with boiling points rising as the molecules get bigger. The size of the boiling point is governed by the strengths of the intermolecular forces. Notice that the aldehyde (with dipole-dipole attractions as well as dispersion forces) has a boiling point higher than the similarly sized alkane which only has dispersion forces. However, the aldehyde's boiling point isn't as high as the alcohol's. In the alcohol, there is hydrogen bonding as well as the other two kinds of intermolecular attraction. Although the aldehydes and ketones are highly polar molecules, they don't have any hydrogen atoms attached directly to the oxygen, and so they can't hydrogen bond with each other.

The small aldehydes and ketones are freely soluble in water, but solubility falls with chain length. For example, methanal, ethanal and propanone - the common small aldehydes and ketones - are miscible with water in all proportions. The reason for the solubility is that although aldehydes and ketones can't hydrogen bond with themselves, they can hydrogen bond with water molecules. One of the slightly positive hydrogen atoms in a water molecule can be sufficiently attracted to one of the lone pairs on the oxygen atom of an aldehyde or ketone for a hydrogen bond to be formed. There will also, of course, be dispersion forces and dipole-dipole attractions between the aldehyde or ketone and the water molecules. Forming these attractions releases energy which helps to supply the energy needed to separate the water molecules and aldehyde or ketone molecules from each other before they can mix together.

As chain lengths increase, the hydrocarbon "tails" of the molecules (all the hydrocarbon bits apart from the carbonyl group) start to get in the way. By forcing themselves between water molecules, they break the relatively strong hydrogen bonds between water molecules without replacing them with anything as good.
This makes the process energetically less profitable, and so solubility decreases.

Jim Clark
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Equilibria/Le_Chateliers_Principle/Ice_Tables |
An ICE (Initial, Change, Equilibrium) table is a simple matrix formalism used to simplify the calculations in reversible equilibrium reactions (e.g., weak acids and weak bases or complex ion formation). ICE tables are composed of the concentrations of molecules in solution in different stages of a reaction, and are usually used to calculate K, the equilibrium constant, of a reaction (in some instances, K may be given, and one or more of the concentrations in the table will be the unknown to be solved for). ICE tables automatically set up and organize the variables and constants needed when calculating the unknown. ICE is a simple acronym for the titles of the first column of the table.

The procedure for filling out an ICE table is best illustrated through example. Use an ICE table to determine \(K_c\) for the following balanced general reaction:

\[ \ce{ 2X(g) <=> 3Y(g) + 4Z(g)} \nonumber\]

where the capital letters represent the products and reactants.

Desired Unknown

\[ K_c = ? \nonumber \]

The equilibrium constant expression is expressed as products over reactants, each raised to the power of their respective stoichiometric coefficients:

\[ K_c = \dfrac{[Y]^3[Z]^4}{[X]^2} \nonumber \]

The equilibrium concentrations of Y and Z are unknown, but they can be calculated using the ICE table. This is the first step in setting up the ICE table. As mentioned above, the ICE mnemonic is vertical and the equation heads the table horizontally, giving the rows and columns of the table, respectively. The numerical amounts were given. Any amount not directly given is unknown. Notice that the equilibrium in this equation is shifted to the right, meaning that some amount of reactant will be taken away and some amount of product will be added (for the Change row).
The change in amount (\(x\)) can be calculated using algebra:

\[ Equilibrium \; Amount = Initial \; Amount + Change \; in \; Amount \nonumber \]

Solving for the change in the amount of X gives:

\[ 0.350 \; mol - 0.500 \; mol = -0.150 \; mol \nonumber \]

The change in reactants and the balanced equation of the reaction are known, so the change in products can be calculated. The stoichiometric coefficients indicate that for every 2 mol of X reacted, 3 mol of Y and 4 mol of Z are produced. The relationship is as follows:

\[ \begin{eqnarray} Change \; in \; Product &=& -\left(\dfrac{\text{Stoichiometric Coefficient of Product}}{\text{Stoichiometric Coefficient of Reactant}}\right)(\text{Change in Reactant}) \\ Change \; in \; Y &=& -\left(\dfrac{3}{2}\right)(-0.150 \; mol) \\ &=& +0.225 \; mol \end{eqnarray} \nonumber \]

Try obtaining the change in Z with this method (the answer is already in the ICE table). If the initial amounts of Y and/or Z were nonzero, then they would be added together with the change in amounts to determine equilibrium amounts. However, because there was no initial amount for the two products, the equilibrium amount is simply equal to the change:

\[\begin{eqnarray} Equilibrium \; Amount &=& Initial \; Amount + Change \; in \; Amount \\ Equilibrium \; Amount \; of \; Y &=& 0.000 \; mol + 0.225 \; mol \\ &=& +0.225 \; mol \end{eqnarray} \nonumber \]

Use the same method to find the equilibrium amount of Z. Convert the equilibrium amounts to concentrations. Recall that the volume of the system is 0.750 liters.
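The stoichiometric bookkeeping in the Change row can be verified numerically. This is an illustrative sketch (the variable names are my own, not from the original table), using the values of the worked example:

```python
# Propagate the measured change in X to Y and Z using the coefficients
# of 2X <=> 3Y + 4Z. Amounts are in mol, from the worked example above.
initial_X, equilibrium_X = 0.500, 0.350
change_X = equilibrium_X - initial_X      # -0.150 mol (X is consumed)
change_Y = -(3 / 2) * change_X            # +0.225 mol (Y is produced)
change_Z = -(4 / 2) * change_X            # +0.300 mol (Z is produced)
```

The negative sign in front of the coefficient ratio encodes the fact that reactant and product changes always have opposite signs.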
\[[Equilibrium \; Concentration \; of \; Substance] = \dfrac{Amount \; of \; Substance}{Volume \; of \; System}\nonumber \]

\[ [X] = \dfrac{0.350 \; mol}{0.750 \; L} = 0.467 \; M \nonumber \]

\[ [Y] = \dfrac{0.225 \; mol}{0.750 \; L} = 0.300 \; M \nonumber \]

\[ [Z] = \dfrac{0.300 \; mol}{0.750 \; L} = 0.400 \; M \nonumber \]

Use the concentration values to solve the \(K_c\) equation:

\[ \begin{eqnarray} K_c &=& \dfrac{[Y]^3[Z]^4}{[X]^2} \\ &=& \dfrac{[0.300]^3[0.400]^4}{[0.467]^2} \\ K_c &=& 3.17 \times 10^{-3} \end{eqnarray}\nonumber \]

In this example an ICE table is used to find the equilibrium concentration of the reactants and products. (This example will be less in depth than the previous example, but the same concepts are applied.) These calculations are often carried out for weak acid titrations. Find the concentration of \(A^-\) for the generic acid dissociation reaction:

\[ \ce{HA(aq) + H_2O(l) <=> A^{-}(aq) + H_3O^{+}(aq)} \nonumber \]

with \([HA (aq)]_{initial} = 0.150 M\) and \(K_a = 1.6 \times 10^{-2}\)

This equation describes a weak acid reaction in solution with water. The acid (HA) dissociates into its conjugate base (\(A^-\)) and protons (H₃O⁺). Notice that water is a liquid, so its concentration is not relevant to these calculations. The expression for \(K_a\) is written by dividing the concentrations of the products by the concentrations of the reactants.
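The arithmetic of Example 1 can be re-done in a few lines as a sanity check (a sketch; it simply reproduces the calculation above):

```python
# Convert equilibrium amounts (mol) to concentrations (M) in the 0.750 L
# system, then evaluate Kc = [Y]^3 [Z]^4 / [X]^2.
V = 0.750
conc_X = 0.350 / V        # ~0.467 M
conc_Y = 0.225 / V        # 0.300 M
conc_Z = 0.300 / V        # 0.400 M
Kc = (conc_Y**3 * conc_Z**4) / conc_X**2   # ~3.17e-3
```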
Plugging the values at equilibrium into the equation for \(K_a\) gives the following:

\[K_a = \dfrac{x^2}{0.150-x} = 1.6 \times 10^{-2} \nonumber\]

To find the concentration x, rearrange this equation to its quadratic form, and then use the quadratic formula to find x:

\[\begin{align*} (1.6 \times 10^{-2})({0.150-x}) &= {x^2} \\[4pt] x^2+(1.6 \times 10^{-2})x-(0.150)(1.6 \times 10^{-2}) &= 0 \end{align*}\]

This is the typical form for a quadratic equation:

\[Ax^{2}+Bx+C=0\nonumber \]

where, in this case, \(A = 1\), \(B = 1.6 \times 10^{-2}\), and \(C = -(0.150)(1.6 \times 10^{-2})\). The quadratic formula gives two solutions (but only one physical solution) for x:

\[x = \dfrac{-B+\sqrt{B^2-4AC}}{2A}\nonumber \] and \[x = \dfrac{-B-\sqrt{B^2-4AC}}{2A}\nonumber \]

Intuition must be used in determining which solution is correct. If one gives a negative concentration, it can be eliminated, because negative concentrations are unphysical. The x value can be used to calculate the equilibrium concentrations of each product and reactant by plugging it into the elements in the E row of the ICE table. [Solution: x = 0.0416, -0.0576. x = 0.0416 makes chemical sense and is therefore the correct answer.]

For some problems like Example 2, if x is expected to be significantly smaller than the initial concentration, then the x in the denominator can be omitted and the calculated concentration will not be greatly affected. This will make calculations faster by eliminating the necessity of the quadratic formula. Partial pressure may also be substituted for concentration in the ICE table, if desired (i.e., if the concentrations are not known, \(K_p\) instead of \(K_c\) is desired, etc.). "Amount" is also acceptable (the ICE table may be done in amounts until the equilibrium amounts are found, after which they will be converted to concentrations). For simplicity, assume that the word "concentration" can be replaced with "partial pressure" or "amounts" when formulating ICE tables.

0.200 M acetic acid is added to water. What is the concentration of H₃O⁺ in solution if \(K_c = 1.8 \times 10^{-6}\)?
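The quadratic step is easy to get wrong by hand; here is a minimal sketch that solves the same equation \(x^2 + K_a x - (0.150)K_a = 0\) numerically and keeps only the physical root:

```python
import math

# Solve Ka = x^2 / (0.150 - x) with Ka = 1.6e-2, rearranged to
# x^2 + Ka*x - 0.150*Ka = 0. Only the positive root is physical.
Ka, HA0 = 1.6e-2, 0.150
A, B, C = 1.0, Ka, -HA0 * Ka
disc = B**2 - 4 * A * C
x_plus = (-B + math.sqrt(disc)) / (2 * A)    # ~0.0416 M, the answer
x_minus = (-B - math.sqrt(disc)) / (2 * A)   # ~-0.0576 M, unphysical
```

Both roots match the bracketed solution quoted above; the negative one is discarded because a negative concentration is meaningless.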
5.99×10

If the initial concentration of NH₃ is 0.350 M and the concentration at equilibrium is 0.325 M, what is \(K_c\) for this reaction?

1.92×10

How is \(K_c\) derived from \(K_p\)? \(K_p = K_c(RT)^{\Delta n}\); then solve for \(K_c\).

Complete this ICE table:
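To make the \(K_p\)/\(K_c\) relation concrete, here is a hedged sketch using the gas-phase reaction from Example 1; the temperature of 298 K is an assumed value for illustration, not part of the original exercise:

```python
# Kp = Kc * (RT)^dn for 2X(g) <=> 3Y(g) + 4Z(g), where dn = (3 + 4) - 2 = 5.
R = 0.08206            # L*atm/(mol*K), consistent with concentrations in M
T = 298.0              # K (assumed for illustration)
delta_n = (3 + 4) - 2
Kc = 3.17e-3           # from Example 1
Kp = Kc * (R * T)**delta_n
Kc_recovered = Kp / (R * T)**delta_n   # "solve for Kc", as the exercise asks
```

Because \(\Delta n > 0\) here, \((RT)^{\Delta n}\) is large at room temperature and \(K_p\) is much bigger than \(K_c\); for a reaction with no change in moles of gas, the two constants would be equal.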
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Physical_Properties_of_Matter/All_About_Water |
The chemistry of water is not as narrow as one might think! Yes, we start with the atom, and then go on to the rules governing the kinds of structural units that can be made from them. We are taught early on to predict the properties of bulk matter from these geometric arrangements. And then we come to H₂O, and are shocked to find that many of these predictions are way off, and that water (and by implication, life itself) should not even exist on our planet! But we soon learn that this tiny combination of three nuclei and eight electrons possesses special properties that make it unique among the more than 15 million chemical species we presently know. When we stop to ponder the consequences of this, chemistry moves from being an arcane science to a voyage of wonder and pleasure as we learn to relate the microscopic world of the atom to the greater world in which we all live.

A molecule is an aggregation of atomic nuclei and electrons that is sufficiently stable to possess observable properties — and there are few molecules that are more stable and difficult to decompose than H₂O. In water, each hydrogen nucleus is bound to the central oxygen atom by a pair of electrons that are shared between them; chemists call this shared electron pair a covalent chemical bond. In H₂O, only two of the six outer-shell electrons of oxygen are used for this purpose, leaving four electrons which are organized into two non-bonding pairs. The four electron pairs surrounding the oxygen tend to arrange themselves as far from each other as possible in order to minimize repulsions between these clouds of negative charge. This would ordinarily result in a tetrahedral geometry in which the angle between electron pairs (and therefore the H-O-H bond angle) is 109.5°. However, because the two non-bonding pairs remain closer to the oxygen atom, these exert a stronger repulsion against the two covalent bonding pairs, effectively pushing the two hydrogen atoms closer together.
The result is a distorted tetrahedral arrangement in which the H—O—H angle is 104.5°.

Although the water molecule carries no net electric charge, its eight electrons are not distributed uniformly; there is slightly more negative charge (purple) at the oxygen end of the molecule, and a compensating positive charge (green) at the hydrogen end. The resulting polarity is largely responsible for water's unique properties. Because molecules are smaller than light waves, they cannot be observed directly, and must be "visualized" by alternative means. This computer-generated image comes from calculations that model the electron distribution in the H₂O molecule. The outer envelope shows the effective "surface" of the molecule as defined by the extent of the cloud of negative electric charge created by the eight electrons.

The H₂O molecule is electrically neutral, but the positive and negative charges are not distributed uniformly. This is illustrated by the gradation in color in the schematic diagram here. The electronic (negative) charge is concentrated at the oxygen end of the molecule, owing partly to the nonbonding electrons (solid blue circles), and to oxygen's high nuclear charge which exerts stronger attractions on the electrons. This charge displacement constitutes an electric dipole, represented by the arrow at the bottom; you can think of this dipole as the electrical "image" of a water molecule.

As we all learned in school, opposite charges attract, so the partially-positive hydrogen atom on one water molecule is electrostatically attracted to the partially-negative oxygen on a neighboring molecule. This process is called (somewhat misleadingly) hydrogen bonding. Notice that the hydrogen bond (shown by the dashed green line) is somewhat longer than the covalent O—H bond. This means that it is considerably weaker; it is so weak, in fact, that a given hydrogen bond cannot survive for more than a tiny fraction of a second.
Water has long been known to exhibit many physical properties that distinguish it from other small molecules of comparable mass. Chemists refer to these as the "anomalous" properties of water, but they are by no means mysterious; all are entirely predictable consequences of the way the size and nuclear charge of the oxygen atom conspire to distort the electronic charge clouds of the atoms of other elements when these are chemically bonded to the oxygen.

Water is one of the few known substances whose solid form is less dense than the liquid. The plot at the right shows how the volume of water varies with the temperature; the large increase (about 9%) on freezing shows why ice floats on water and why pipes burst when they freeze. The expansion between 4° and 0° is due to the formation of larger hydrogen-bonded aggregates. Above 4°, thermal expansion sets in as vibrations of the O—H bonds become more vigorous, tending to shove the molecules farther apart.

The other widely-cited anomalous property of water is its high boiling point. As this graph shows, a molecule as light as H₂O "should" boil at around –90°C; that is, it would exist in the world as a gas rather than a liquid if H-bonding were not present. Notice that H-bonding is also observed with fluorine and nitrogen.

Have you ever watched an insect walk across the surface of a pond? The water strider takes advantage of the fact that the water surface acts like an elastic film that resists deformation when a small weight is placed on it. (If you are careful, you can also "float" a small paper clip or steel staple on the surface of water in a cup.) This is all due to the surface tension of the water. A molecule within the bulk of a liquid experiences attractions to neighboring molecules in all directions, but since these average out to zero, there is no net force on the molecule.
For a molecule that finds itself at the surface, the situation is quite different; it experiences forces only sideways and downward, and this is what creates the stretched-membrane effect. The distinction between molecules located at the surface and those deep inside is especially prominent in H₂O, owing to the strong hydrogen-bonding forces. The difference between the forces experienced by a molecule at the surface and one in the bulk liquid gives rise to the liquid's surface tension.

This drawing highlights two H₂O molecules, one at the surface, and the other in the bulk of the liquid. The surface molecule is attracted to its neighbors below and to either side, but there are no attractions pointing in the 180° solid angle above the surface. As a consequence, a molecule at the surface will tend to be drawn into the bulk of the liquid. But since there must always be some surface, the overall effect is to minimize the surface area of a liquid. The geometric shape that has the smallest ratio of surface area to volume is the sphere, so very small quantities of liquids tend to form spherical drops. As the drops get bigger, their weight deforms them into the typical tear shape.

Take a plastic mixing bowl from your kitchen, and splash some water around in it. You will probably observe that the water does not cover the inside surface uniformly, but remains dispersed into drops. The same effect is seen on a dirty windshield; turning on the wipers simply breaks hundreds of drops into thousands. By contrast, water poured over a clean glass surface will wet it, leaving a uniform film. When a liquid is in contact with a solid surface, its behavior depends on the relative magnitudes of the surface tension forces and the attractive forces between the molecules of the liquid and of those comprising the surface. If an H₂O molecule is more strongly attracted to its own kind, then surface tension will dominate, increasing the curvature of the interface.
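The claim that the sphere exposes the least surface for a given volume can be checked with a quick numerical comparison (a sketch, comparing a sphere against a cube of equal volume):

```python
import math

# For a fixed volume V, compare the surface area of a sphere with that of a cube.
V = 1.0                                    # arbitrary volume (e.g., cm^3)
r = (3 * V / (4 * math.pi)) ** (1 / 3)     # sphere radius giving volume V
sphere_area = 4 * math.pi * r**2           # ~4.84 area units for V = 1
cube_area = 6 * V ** (2 / 3)               # = 6.0 (cube side length V^(1/3))
# The sphere exposes less surface for the same volume, which is why
# small drops of liquid pull themselves into spheres.
```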
This is what happens at the interface between water and a hydrophobic surface such as a plastic mixing bowl or a windshield coated with oily material. A clean glass surface, by contrast, has -OH groups sticking out of it which readily attach to water molecules through hydrogen bonding; this causes the water to spread out evenly over the surface, or to wet it. A liquid will wet a surface if the angle at which it makes contact with the surface is less than 90°. The value of this contact angle can be predicted from the properties of the liquid and solid separately.

If we want water to wet a surface that is not ordinarily wettable, we add a detergent (a surfactant) to the water to reduce its surface tension. A detergent is a special kind of molecule in which one end is attracted to H₂O molecules but the other end is not, so these ends stick out above the surface and repel each other, cancelling out the surface tension forces due to the water molecules alone.

The nature of liquid water and how the H₂O molecules within it are organized and interact are questions that have attracted the interest of chemists for many years. There is probably no liquid that has received more intensive study, and there is now a huge literature on this subject.

A variety of techniques including infrared absorption, neutron scattering, and nuclear magnetic resonance have been used to probe the microscopic structure of water. The information garnered from these experiments and from theoretical calculations has led to the development of around twenty "models" that attempt to explain the structure and behavior of water. More recently, computer simulations of various kinds have been employed to explore how well these models are able to predict the observed physical properties of water. This work has led to a gradual refinement of our views about the structure of liquid water, but it has not produced any definitive answer.
There are several reasons for this, but the principal one is that the very concept of "structure" (and of water "clusters") depends on both the time frame and volume under consideration.

The view first developed in the 1950's that water is a collection of "flickering clusters" of varying sizes (right) has gradually been abandoned as being unable to account for many of the observed properties of the liquid. The present thinking, influenced greatly by molecular modeling simulations beginning in the 1980s, is that on a very short time scale (less than a picosecond), water is more like a "gel" consisting of a single, huge hydrogen-bonded cluster. On a 10⁻¹² to 10⁻⁹ sec time scale, rotations and other thermal motions cause individual hydrogen bonds to break and re-form in new configurations, inducing ever-changing local discontinuities whose extent and influence depends on the temperature and pressure. Recent work from Richard Saykally's lab shows that the hydrogen bonds in liquid water break and re-form so rapidly (often in distorted configurations) that the liquid can be regarded as a continuous network of hydrogen-bonded molecules. This computer-generated nanoscale view of liquid water is from the lab of Gene Stanley of Boston University. The oxygen atoms are red, the hydrogen atoms white.

It is quite likely that over very small volumes, localized (H₂O)ₙ polymeric clusters may have a fleeting existence, and many theoretical calculations have been made showing that some combinations are more stable than others. While this might prolong their lifetimes, it does not appear that they remain intact long enough to detect as directly observable entities in ordinary bulk water at normal pressures. Theoretical models suggest that the average cluster may encompass as many as 90 H₂O molecules at 0°C, so that very cold water can be thought of as a collection of ever-changing ice-like structures.
At 70°C, the average cluster size is probably no greater than about 25. It must be emphasized that no stable clustered unit or arrangement has ever been isolated or identified in pure bulk liquid water. A 2006 report suggests that a simple tetrahedral arrangement is the only long-range structure that persists at time scales of a picosecond or beyond. And a 2007 study suggests that infrared radiation can stabilize clathrate-like clusters for up to several hours. Water clusters are of considerable interest as models for the study of water and water surfaces, and many articles on them are published every year. Some notable work reported in 2004 extended our view of water to the femtosecond time scale. The principal finding was that 80 percent of the water molecules are bound in chain-like fashion to only two other molecules at room temperature, thus supporting the prevailing view of a dynamically-changing, disordered water structure.

Ice, like all solids, has a well-defined structure; each water molecule is surrounded by four neighboring H₂Os. Two of these are hydrogen-bonded to the oxygen atom on the central H₂O molecule, and each of the two hydrogen atoms is similarly bonded to another neighboring H₂O. The hydrogen bonds are represented by the dashed lines in this 2-dimensional schematic diagram. In reality, the four bonds from each O atom point toward the four corners of a tetrahedron centered on the O atom. This basic assembly repeats itself in three dimensions to build the ice crystal.

When ice melts to form liquid water, the uniform three-dimensional tetrahedral organization of the solid breaks down as thermal motions disrupt, distort, and occasionally break hydrogen bonds. The methods used to determine the positions of molecules in a solid do not work with liquids, so there is no unambiguous way of determining the detailed structure of water.
The illustration here is probably typical of the arrangement of neighbors around any particular H₂O molecule, but very little is known about the extent to which an arrangement like this gets propagated to more distant molecules.

Here are three-dimensional views of a typical local structure of water (left) and ice (right). Notice the greater openness of the ice structure, which is necessary to ensure the strongest degree of hydrogen bonding in a uniform, extended crystal lattice. The more crowded and jumbled arrangement in liquid water can be sustained only by the greater amount of thermal energy available above the freezing point.

The stable arrangement of hydrogen-bonded water molecules in ice gives rise to the beautiful hexagonal symmetry that reveals itself in every snowflake. At temperatures as low as 200 K, the surface of ice is highly disordered and water-like. As the temperature approaches the freezing point, this region of disorder extends farther down from the surface and acts as a lubricant. The illustration is taken from an article in the April 7, 2008 issue of C&EN honoring the physical chemist Gabor Somorjai, who pioneered modern methods of studying surfaces.

To a chemist, the term "pure" has meaning only in the context of a particular application or process. The distilled or de-ionized water we use in the laboratory contains dissolved atmospheric gases and occasionally some silica, but their small amounts and relative inertness make these impurities insignificant for most purposes. When water of the highest obtainable purity is required for certain types of exacting measurements, it is commonly filtered, de-ionized, and triple-vacuum distilled.
But even this "chemically pure" water is a mixture of isotopic species: there are two stable isotopes of both hydrogen (H and H , the latter often denoted by D) and oxygen (O and O ) which give rise to combinations such as H O , HDO , etc., all of which are readily identifiable in the infrared spectra of water vapor. And to top this off, the two hydrogen atoms in water contain protons whose magnetic moments can be parallel or antiparallel, giving rise to and water, respectively. The two forms are normally present in a ratio of 3:1. The amount of the varies enough from place to place that it is now possible to determine the age and source of a particular water sample with some precision. These differences are reflected in the H and O isotopic profiles of organisms. Thus the isotopic analysis of human hair can be a useful tool for crime investigations and anthropology research. It has recently been found ( 2003, 19, 6851-6856) that freshly distilled water takes a surprisingly long time to equilibrate with the atmosphere, that it undergoes large fluctuations in pH and redox potential, and that these effects are greater when the water is exposed to a magnetic field. The reasons for this behavior are not clear, but one possibility is that dissolved O molecules, which are paramagnetic, might be involved. Our ordinary drinking water, by contrast, is never chemically pure, especially if it has been in contact with sediments. Groundwaters (from springs or wells) always contain ions of calcium and magnesium, and often iron and manganese as well; the positive charges of these ions are balanced by the negative ions carbonate/bicarbonate, and occasionally some chloride and sulfate. Groundwaters in some regions contain unacceptably high concentrations of naturally-occuring toxic elements such as selenium and arsenic. 
One might think that rain or snow would be exempt from contamination, but when water vapor condenses out of the atmosphere it always does so on a particle of dust which releases substances into the water, and even the purest air contains carbon dioxide which dissolves to form carbonic acid. Except in highly polluted atmospheres, the impurities picked up by snow and rain are too minute to be of concern. Various governments have established upper limits on the amounts of contaminants allowable in drinking water; the best known of these are the U.S. EPA Drinking Water Standards.

I am not aware of any evidence indicating that any one type of water (including highly "pure" water) is more beneficial to health than any other, as long as the water is pathogen-free and meets accepted standards such as those mentioned above. For those who are sensitive to residual chlorine or still have concerns, a good activated-carbon filter is usually satisfactory. More extreme measures such as reverse-osmosis or distillation are only justified in demonstrably extreme situations.

"Pure" rainwater always contains some dissolved carbon dioxide which makes it slightly acidic. When this water comes into contact with sediments, it tends to dissolve them, and in the process becomes alkaline. The pH of drinking water can vary from around 5 to 9, and it has no effect on one's health. The idea that alkaline water is better to drink than acidic water is widely promoted by alternative-health hucksters who market worthless "water ionizer" machines for this purpose. Acidic water is sometimes described by engineers as "aggressive"; this refers to its tendency to corrode metal distribution pipes, but in this sense it is no more active than the hydrochloric acid already present in your gastric fluid!

One occasionally hears that mineral-free water, and especially distilled water, are unhealthy because they "leach out" required minerals from the body.
There is no truth to this; the fact is that mineral ions do not pass through cell walls by ordinary osmotic diffusion, but rather are actively transported by metabolic processes. An extensive 2008 study failed to confirm earlier reports that low calcium/magnesium in drinking water correlates with cardiovascular disease. Any well-balanced diet should supply all the mineral substances we need.

It is well known that people who are engaged in heavy physical activity or are in a very hot environment should avoid drinking large quantities of even ordinary water. In order to prevent serious electrolyte imbalance problems, it is necessary to make up for the salts lost through perspiration. This can be accomplished by ingestion of salted foods or beverages (including "sports beverages"), or salt tablets.

About two-thirds of the weight of an adult human consists of water. About two-thirds of this water is located within cells, while the remaining third consists of extracellular water, mostly in the blood plasma and in the interstitial fluid that bathes the cells. This water, amounting to about five percent of body weight (about 5 L in the adult), serves as a supporting fluid for the blood cells and acts as a means of transporting chemicals between cells and the external environment. It is basically a 0.15 M solution of salt (NaCl) containing smaller amounts of other electrolytes, the most important of which are bicarbonate (HCO₃⁻) and protein anions.

The water content of our bodies is tightly controlled in respect to both total volume and its content of dissolved substances, particularly ions. Drinking constitutes only one source of our water; many foods, especially those containing cells (fruits, vegetables, meats), are an important secondary source. In addition, a considerable amount of water (350-400 mL/day) is produced metabolically — that is, from the oxidation of glucose derived from foods. The quantity of water exchanged within various parts of our bodies is surprisingly large.
The kidneys process about 180 L/day, returning most of the water to the blood stream. Lymph flow amounts to 1-2.5 L/day, and turnover of fluids in the bowel to 8-9 L/day. These figures are dwarfed by the 80,000 L/day of water that diffuses in both directions through capillary walls. The idea that everyone should drink "eight glasses" of water a day is one of those urban legends that never seems to go away; it is nicely debunked at this medical myths site.

Ultimately, total water intake plus metabolic production must balance water loss. For a healthy unstressed adult, the figures shown here are typical minimum values. Notice that the major loss is through simple breathing. The minimal urinary loss is determined by the need to remove salts and other solutes taken in with foods or produced by metabolic processes. Individuals (such as many elderly) having reduced kidney function produce more dilute urine, and must therefore take in more water. And of course stress factors such as strenuous exercise, exposure to very high temperatures, or diarrhea can greatly increase the need for water intake. Consumption of overly large quantities of water can lead to electrolyte imbalance resulting in water intoxication. Children, with their low body masses, are especially susceptible.

As we explained above, bulk liquid water consists of a seething mass of various-sized chain-like groups and clusters that flicker in and out of existence on a time scale of picoseconds. But in the vicinity of a solid surface or of another molecule or ion that possesses an unbalanced electric charge, water molecules can become oriented and sometimes even bound into relatively stable structures.

Water molecules interact strongly with ions, which are electrically-charged atoms or molecules. Dissolution of ordinary salt (NaCl) in water yields a solution containing the ions Na⁺ and Cl⁻.
Owing to its high polarity, the H₂O molecules closest to the dissolved ion are strongly attached to it, forming what is known as the inner or primary hydration shell. Positively-charged ions such as Na⁺ attract the negative (oxygen) ends of the H₂O molecules, as shown in the diagram below. The ordered structure within the primary shell creates, through hydrogen-bonding, a region in which the surrounding waters are also somewhat ordered; this is the outer hydration shell, or cybotactic region. In 2003, some chemists in India found ( 44(4) pp 816 - 818) that a suitable molecular backbone (above) can cause water molecules to form a "thread" that can snake its way through the more open space of the larger molecules. What all of these examples show is that water can have highly organized local structures when it interacts with molecules capable of imposing these structures on the water.

It has long been known that the intracellular water very close to any membrane or organelle (sometimes called vicinal water) is organized very differently from bulk water, and that this structured water plays a significant role in governing the shape (and thus biological activity) of large folded biopolymers. It is important to bear in mind, however, that the structure of the water in these regions is imposed solely by the geometry of the surrounding hydrogen bonding sites. Water can hydrogen-bond not only to itself, but also to any other molecules that have -OH or -NH units hanging off of them. This includes simple molecules such as alcohols, surfaces such as glass, and macromolecules such as proteins. The biological activity of proteins (of which enzymes are an important subset) is critically dependent not only on their composition but also on the way these huge molecules are folded; this folding involves hydrogen-bonded interactions with water, and also between different parts of the molecule itself. Anything that disrupts these intramolecular hydrogen bonds will denature the protein and destroy its biological activity.
This is essentially what happens when you boil an egg; the bonds that hold the egg white protein in its compact folded arrangement break apart so that the molecules unfold into a tangled, insoluble mass which, like Humpty Dumpty, cannot be restored to its original form. Note that hydrogen-bonding need not always involve water; thus the two parts of the DNA double helix are held together by H—N—H hydrogen bonds. This image, taken from the work of William Royer Jr. of the U. Mass. Medical School, shows the water structure (small green circles) that exists in the space between the two halves of a kind of dimeric hemoglobin. The thin dotted lines represent hydrogen bonds. Owing to the geometry of the hydrogen-bonding sites on the heme protein backbones, the H₂O molecules within this region are highly ordered; the local water structure is stabilized by these hydrogen bonds, and the resulting water cluster in turn stabilizes this particular geometric form of the hemoglobin dimer.

Not really. For water to act as a fuel, there must be some combination of oxygen and hydrogen that is energetically more stable than H₂O, and no such molecule is known. This fact has failed to put to rest the venerable urban legend that some obscure inventor discovered a process to do this, but the invention was secretly bought up by the oil companies in order to preserve their monopoly. However, adding water to the fuel-air mixture in an internal combustion engine, a process known as water injection, has been employed for many years as a method of improving the performance of both piston and turbine engines. Water injection kits are widely available, many offered by hucksters whose marketing falsely implies that their products allow you to "run your car on water". Don't believe it! And get some solid advice before you try this on a modern computer-controlled high-compression engine. In 2007, a widely-cited YouTube video appeared that showed a sample of salt water "burning".
This occurs only in the presence of a strong radio-frequency field, which supposedly dissociates the water into H₂ and O₂. These two gases then recombine, producing the flame. Although there has been much uninformed hype about this being some kind of a breakthrough as a source of "energy from water", there is no reason to believe that the First Law of Thermodynamics has been repealed. If the energy supplied by the radio-frequency source is taken into account, you can be sure that there has been no net energy gain. The actual mechanism of the process remains unclear. The fact that salt or some other ionic solute is required suggests that ions at the water's surface might be accelerated in the local field produced by the plasma discharge, helping to break up the molecules in the water vapor.
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Diffraction_Scattering_Techniques/Powder_X-ray_Diffraction |
When an X-ray beam is shone on a crystal, it diffracts in a pattern characteristic of the structure. In powder X-ray diffraction, the diffraction pattern is obtained from a powder of the material, rather than an individual crystal. Powder diffraction is often easier and more convenient than single crystal diffraction since it does not require individual crystals to be made. Powder X-ray diffraction (XRD) also obtains a diffraction pattern for the bulk material of a crystalline solid, rather than of a single crystal, which doesn't necessarily represent the overall material. A diffraction pattern plots intensity against the angle of the detector, \(2\theta\).

Since most materials have unique diffraction patterns, compounds can be identified by using a database of diffraction patterns. The purity of a sample can also be determined from its diffraction pattern, as well as the composition of any impurities present. A diffraction pattern can also be used to determine and refine the lattice parameters of a crystal structure. A theoretical structure can also be refined using a method known as Rietveld refinement. The particle size of the powder can also be determined by using the Scherrer formula, which relates the particle size to the peak width. The Scherrer formula is \[t = \dfrac{0.9 \lambda}{\sqrt{B^2_M-B^2_S} \cos \theta}\] where \(t\) is the particle size, \(\lambda\) is the X-ray wavelength, \(B_M\) is the measured peak width (full width at half maximum), \(B_S\) is the corresponding width of a peak from a standard, which corrects for instrumental broadening, and \(\theta\) is half the detector angle \(2\theta\) of the peak. An example XRD pattern for \(Ba_{24}Ge_{100}\) plots the intensity on the y axis against \(2\theta\) on the x axis.

X-rays are partially scattered by atoms when they strike the surface of a crystal. The part of the X-ray that is not scattered passes through to the next layer of atoms, where again part of the X-ray is scattered and part passes through to the next layer. This causes an overall diffraction pattern, similar to how a grating diffracts a beam of light. In order for an X-ray to diffract, the sample must be crystalline and the spacing between atom layers must be close to the radiation wavelength.
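The Scherrer formula is straightforward to apply numerically. The sketch below uses the Cu Kα wavelength mentioned later on this page; the peak widths and peak position are made-up illustrative values, not data from a real pattern:

```python
import math

def scherrer_size(wavelength_nm, fwhm_measured_deg, fwhm_standard_deg,
                  two_theta_deg, shape_factor=0.9):
    """Estimate crystallite size t (nm) from X-ray peak broadening.

    B_M and B_S are the measured and standard peak widths (FWHM),
    converted to radians; theta is half the detector angle 2*theta.
    """
    b_m = math.radians(fwhm_measured_deg)
    b_s = math.radians(fwhm_standard_deg)
    theta = math.radians(two_theta_deg / 2.0)
    broadening = math.sqrt(b_m**2 - b_s**2)   # remove instrumental width
    return shape_factor * wavelength_nm / (broadening * math.cos(theta))

# Hypothetical numbers: Cu K-alpha (0.15418 nm), 0.30 deg measured FWHM,
# 0.10 deg instrumental width, peak at 2*theta = 40 deg
print(round(scherrer_size(0.15418, 0.30, 0.10, 40.0), 1))  # about 30 nm
```

Note that narrower peaks (smaller \(B_M\)) give larger estimated particle sizes, which is the qualitative point of the formula.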
If beams diffracted by two different layers are in phase, constructive interference occurs and the diffraction pattern shows a peak; however, if they are out of phase, destructive interference occurs and there is no peak. Diffraction peaks only occur if \[\sin \theta = \dfrac{n\lambda}{2d}\] where \(n\) is an integer, \(\lambda\) is the wavelength of the radiation, and \(d\) is the spacing between atom layers. Since a highly regular structure is needed for diffraction to occur, only crystalline solids will diffract; amorphous materials will not show up in a diffraction pattern.

A powder X-ray diffractometer consists of an X-ray source (usually an X-ray tube), a sample stage, a detector and a way to vary the angle \(\theta\). The X-ray is focused on the sample at some angle \(\theta\), while the detector opposite the source reads the intensity of the X-ray it receives at \(2\theta\) away from the source path. The incident angle is then increased over time while the detector angle always remains \(2\theta\) above the source path. While other sources such as radioisotopes and secondary fluorescence exist, the most common source of X-rays is an X-ray tube. The tube is evacuated and contains a copper block with a metal target anode, and a tungsten filament cathode with a high voltage between them. The filament is heated by a separate circuit, and the large potential difference between the cathode and anode fires electrons at the metal target. The accelerated electrons knock core electrons out of the metal, and electrons in the outer orbitals drop down to fill the vacancies, emitting X-rays. The X-rays exit the tube through a beryllium window. Due to the massive amounts of heat produced in this process, the copper block must usually be water cooled. While older machines used film as a detector, most modern equipment uses transducers that produce an electrical signal when exposed to radiation. These detectors are often used as photon counters, so intensities are determined by the number of counts in a certain amount of time.
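The diffraction condition can also be run forward, predicting where peaks should appear for a given set of plane spacings. A small sketch, using the Cu Kα wavelength (1.5418 Å, quoted later on this page) and illustrative d spacings:

```python
import math

def two_theta_deg(d_angstrom, wavelength_angstrom=1.5418, n=1):
    """Detector angle 2*theta (degrees) at which planes of spacing d
    satisfy sin(theta) = n*lambda / (2*d); None if no solution."""
    s = n * wavelength_angstrom / (2.0 * d_angstrom)
    if s > 1.0:
        return None   # this d, wavelength, and order give no peak
    return 2.0 * math.degrees(math.asin(s))

# Illustrative plane spacings in angstroms, Cu K-alpha radiation
for d in (3.0, 2.0, 1.0):
    print(d, round(two_theta_deg(d), 2))
```

Smaller spacings diffract to larger angles, which is why high-angle data probe the fine detail of a structure.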
A gas-filled transducer consists of a metal chamber filled with an inert gas, with the walls of the chamber as a cathode and a long anode in the center of the chamber. As an X-ray enters the chamber, its energy ionizes many molecules of the gas. The free electrons then migrate towards the anode and the cations towards the cathode, with some recombining before they reach the electrodes. The electrons that reach the anode cause current to flow, which can be detected. The sensitivity and dead time (when the transducer will not respond to radiation) both depend on the voltage the transducer is operated at. At high voltage, the transducer will be very sensitive but have a long dead time, and at low voltage the transducer will have a short dead time but low sensitivity.

In a scintillation counter, a phosphor is placed in front of a photomultiplier tube. When X-rays strike the phosphor, it produces flashes of light, which are detected by the photomultiplier tube. A semiconductor transducer has a gold-coated p-type semiconductor layered on a lithium-containing intrinsic semiconductor zone, followed by an n-type semiconductor on the other side of the intrinsic zone. The semiconductor is usually composed of silicon; germanium is used if the radiation wavelength is very short. The n-type semiconductor is coated by an aluminum contact, which is connected to a preamplifier. The entire crystal has a voltage applied across it. When an X-ray strikes the crystal, it elevates many electrons in the semiconductor into the conduction band, which causes a pulse of current.

Copper emits radiation at 1.5418 Å. If a diffraction pattern taken with a copper X-ray tube source shows a peak at 40°, what is the corresponding d spacing? (Hint: Don't forget that diffraction patterns are plotted in \(2\theta\), not \(\theta\).)
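The exercise can be checked numerically by inverting the diffraction condition: \(\theta\) is half the plotted \(2\theta = 40°\), and taking \(n = 1\) gives \(d = \lambda / (2 \sin\theta)\):

```python
import math

wavelength = 1.5418            # Cu K-alpha wavelength, angstroms
two_theta = 40.0               # peak position read off the pattern
theta = math.radians(two_theta / 2.0)   # the hint: theta = 2*theta / 2

d = 1 * wavelength / (2.0 * math.sin(theta))   # n = 1
print(round(d, 2))             # -> 2.25 angstroms
```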
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Aldehydes_and_Ketones/Synthesis_of_Aldehydes_and_Ketones/Preparation_of_Aldehydes_and_Ketones |
This page explains how aldehydes and ketones are made in the lab by the oxidation of primary and secondary alcohols. The oxidizing agent used in these reactions is normally a solution of sodium or potassium dichromate(VI) acidified with dilute sulfuric acid. If oxidation occurs, the orange solution containing the dichromate(VI) ions is reduced to a green solution containing chromium(III) ions. The net effect is that an oxygen atom from the oxidizing agent removes a hydrogen from the -OH group of the alcohol and one from the carbon to which it is attached. [O] is often used to represent oxygen coming from an oxidising agent.

R and R' are alkyl groups or hydrogen. They could also be groups containing a benzene ring, but I'm ignoring these to keep things simple. If at least one of these groups is a hydrogen atom, then you will get an aldehyde. If they are both alkyl groups then you get a ketone. If you now think about where they are coming from, you will get an aldehyde if your starting molecule has a hydrogen atom on the carbon carrying the -OH group. In other words, if you start from a primary alcohol, you will get an aldehyde. You will get a ketone if your starting molecule has the -OH group on a carbon carrying the two alkyl groups R and R'. Secondary alcohols oxidize to give ketones.

Aldehydes are made by oxidising primary alcohols. There is, however, a problem. The aldehyde produced can be oxidised further to a carboxylic acid by the acidified potassium dichromate(VI) solution used as the oxidising agent. In order to stop at the aldehyde, you have to prevent this from happening. To stop the oxidation at the aldehyde, you use an excess of the alcohol and distil off the aldehyde as soon as it forms. If you used ethanol as a typical primary alcohol, you would produce the aldehyde ethanal, CH₃CHO. The full equation for this reaction is fairly complicated, and you need to understand about electron-half-equations in order to work it out. In organic chemistry, simplified versions are often used which concentrate on what is happening to the organic substances.
To do that, oxygen from an oxidising agent is represented as [O]. That would produce the much simpler equation: CH₃CH₂OH + [O] → CH₃CHO + H₂O. Secondary alcohols are oxidised to ketones. There is no further reaction which might complicate things. For example, if you heat the secondary alcohol propan-2-ol with sodium or potassium dichromate(VI) solution acidified with dilute sulphuric acid, you get propanone formed. Playing around with the reaction conditions makes no difference whatsoever to the product. Using the simple version of the equation: CH₃CH(OH)CH₃ + [O] → CH₃COCH₃ + H₂O. Jim Clark
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Supplemental_Modules_and_Websites_(Inorganic_Chemistry)/Coordination_Chemistry/Complex_Ion_Chemistry/Stereoisomerism_in_complex_ions |
Some complex ions can show either optical or geometric isomerism. Geometric isomerism occurs in planar complexes like the Pt(NH₃)₂Cl₂ we've just looked at. There are two completely different ways in which the ammonias and chloride ions could arrange themselves around the central platinum ion: The two structures drawn are isomers because there is no way that you can just twist one to turn it into the other. The complexes are both locked into their current forms. The terms cis and trans are used in the same way as they are in organic chemistry. Trans implies "opposite" - notice that the ammine ligands are arranged opposite each other in that version, and so are the chloro ligands. Cis means "on the same side" - in this instance, that just means that the ammine and chloro ligands are next door to each other.

You recognize optical isomers because they have no plane of symmetry. In the organic case, it is fairly easy to recognize the possibility of this by looking for a carbon atom with four different things attached to it. It isn't quite so easy with the complex ions - either to draw or to visualize! The examples you are most likely to need occur in octahedral complexes which contain bidentate ligands - ions like [Ni(NH₂CH₂CH₂NH₂)₃]²⁺ or [Cr(C₂O₄)₃]³⁻. The diagram below shows a simplified view of one of these ions. Essentially, they all have the same shape - all that differs is the nature of the "headphones". The charges are left off the ion, because obviously they will vary from case to case. The shape shown applies to any ion of this kind. If your visual imagination will cope, you may be able to see that this ion has no plane of symmetry. If you find this difficult to visualize, the only solution is to make the ion out of a lump of plasticene (or a bit of clay or dough) and three bits of cardboard cut to shape. A substance showing optical isomerism exists as two isomers - one of which is the mirror image of the other.
One of the isomers will rotate the plane of polarization of plane polarized light clockwise; the other rotates it counter-clockwise. In this case, the two isomers are: If you have a really impressive visual imagination, you may be able to see that there is no way of rotating the second isomer in space so that it looks exactly the same as the first one. Jim Clark
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Supplemental_Modules_and_Websites_(Inorganic_Chemistry)/Coordination_Chemistry/Complex_Ion_Chemistry/Origin_of_Color_in_Complex_Ions |
This page is going to take a simple look at the origin of color in complex ions - in particular, why so many transition metal ions are colored. If you pass white light through a prism it splits into all the colors of the rainbow. Visible light is simply a small part of an electromagnetic spectrum most of which we cannot see - gamma rays, X-rays, infra-red, radio waves and so on. Each of these has a particular wavelength, ranging from 10⁻¹⁶ meters for gamma rays to several hundred meters for radio waves. Visible light has wavelengths from about 400 to 750 nm. (1 nanometer = 10⁻⁹ meters.)

So, what causes transition metal ions to absorb wavelengths from visible light (causing color) whereas non-transition metal ions do not? And why does the color vary so much from ion to ion? Simple tetrahedral complexes have four ligands arranged around the central metal ion. Again the ligands have an effect on the energy of the d electrons in the metal ion. This time, of course, the ligands are arranged differently in space relative to the shapes of the d orbitals. The net effect is that when the d orbitals split into two groups, three of them have a greater energy, and the other two a lesser energy (the opposite of the arrangement in an octahedral complex). Apart from this difference of detail, the explanation for the origin of color in terms of the absorption of particular wavelengths of light is exactly the same as for octahedral complexes. Jim Clark
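The connection between an absorbed wavelength and the size of the d-orbital splitting is just the photon energy \(E = hc/\lambda\), scaled to a mole. The sketch below uses an illustrative 500 nm absorption (green light), not data for any particular complex:

```python
# Convert an absorbed wavelength into the corresponding d-orbital
# splitting energy per mole of complex.  The 500 nm input is purely
# illustrative; the constants are CODATA values.
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
N_A = 6.02214076e23   # Avogadro constant, 1/mol

def splitting_kj_per_mol(wavelength_nm):
    energy_per_photon = H * C / (wavelength_nm * 1e-9)  # joules
    return energy_per_photon * N_A / 1000.0             # kJ/mol

print(round(splitting_kj_per_mol(500.0), 1))  # roughly 239 kJ/mol
```

Shorter absorbed wavelengths correspond to larger splittings, which is why the absorbed color shifts as the ligands change.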
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Spectrometer/How_an_FTIR_instrument_works/FTIR%3A_Hardware |
Below you will discover a detailed review of the physical components of a Fourier Transform Infrared (FTIR) Spectrometer. This module focuses on the physical equipment/components which make up the instrument, and not the mathematical aspects of analyzing the resulting data; the mathematical treatment of FTIR data is covered elsewhere. The history of the FTIR is a twisted and somewhat confusing tale, involving the development of technology, math, and materials. The beginnings of the first commercial FTIR spectrometer have been attributed to the work of M.J. Block and his research team in the small company 'Digilab'. Block's personal memoirs of the experience are both interesting and entertaining, involving highly classified information, money laundering, and fraud charges (follow the link if you wish to discover for yourself s-a-s.org/epstein/block/index.htm ). Otherwise let it be enough to say that once the FTIR spectrometer was developed, its impact on the scientific community was profound. Suddenly it was possible to acquire extremely accurate data in a much shorter amount of time than with traditional IR, as well as allowing for the analysis of exceedingly dilute samples. The device itself is surprisingly simple, with only one moving part. It’s no surprise that the instrument has been growing in popularity ever since its introduction, finding applications in chemistry, biology, materials science, process engineering, pharmaceutical science, and many other professions. FTIR instruments are relatively inexpensive, sturdy, stable, flexible, and fast. Through the years, this instrument has steadily evolved, and new applications are continually being developed. Expanded computer power, the trend towards miniaturization, and more sophisticated imaging have all inspired some important new innovations. FTIR measurements are conducted in the time domain.
This is accomplished by directing the radiation from a broadband IR source to a beam splitter, which divides the light into two optical paths. Mirrors in the paths reflect the light back to the beam splitter, where the two beams recombine, and this modulated beam passes through the sample and hits the detector. In a typical interferometer, one mirror remains fixed, and the other retreats from the beam splitter at a constant speed. As the mirror moves, the beams go in and out of phase with each other, which generates a repeating interference pattern—a plot of intensity versus optical path difference—called an interferogram. The interferogram can be converted into the frequency domain via a Fourier transform, which yields the familiar single beam spectrum. The resolution of this spectrum is determined by the distance that the moving mirror traveled.

Analyses generally fall into three categories, determined by the wavelength of the radiation. Near-IR (NIR) covers roughly 700 nm–2500 nm. Midrange IR covers roughly 2500 nm–25 µm (4000–400 cm⁻¹), where strong absorptions from fundamental molecular vibrations are measured. Far IR extends from about 25 µm to 1 mm. Infrared radiation is relatively low-energy light. All physical objects give off infrared radiation, the wavelength of which depends upon the temperature of the object. This phenomenon is known as black body radiation. The ideal IR source would emit radiation across the entire IR spectrum. As this is very difficult, a good compromise is a source which emits continuous mid-infrared radiation. Thankfully this can be achieved by most high temperature black bodies. Black body radiation was studied in depth by Max Planck, and it is through his equations that the spectral energy density at a given wave number from a blackbody source of a given temperature can be calculated. He was also the discoverer of the properties of energy quanta.
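The interferogram-to-spectrum step described above can be sketched numerically: a toy interferogram is built from two monochromatic lines, then a discrete Fourier transform recovers a spectrum whose peaks sit at the original wavenumbers. The line positions and sampling step are illustrative values, not real instrument parameters:

```python
import math

# Toy interferogram for a source containing two sharp lines (positions
# in cm^-1 are illustrative).  x = k*dx is the optical path difference.
lines_cm = [1000.0, 1600.0]
n, dx = 400, 1.0 / 8000.0                  # samples and step size (cm)
interferogram = [sum(math.cos(2 * math.pi * line * k * dx)
                     for line in lines_cm) for k in range(n)]

def spectrum_magnitude(signal, bins):
    """|DFT| of a real signal at bins 0..bins-1 (plain O(n^2) form)."""
    size = len(signal)
    mags = []
    for j in range(bins):
        re = sum(y * math.cos(2 * math.pi * j * k / size)
                 for k, y in enumerate(signal))
        im = sum(y * math.sin(2 * math.pi * j * k / size)
                 for k, y in enumerate(signal))
        mags.append(math.hypot(re, im))
    return mags

mags = spectrum_magnitude(interferogram, n // 2)
resolution = 1.0 / (n * dx)                # 20 cm^-1 per bin here
top_two = sorted(mags)[-2:]
peak_positions = sorted(round(j * resolution, 6)
                        for j, m in enumerate(mags) if m in top_two)
print(peak_positions)                      # -> [1000.0, 1600.0]
```

The `resolution = 1/(n*dx)` line is the point made in the text: the spectral resolution is set by the total optical path difference the moving mirror sweeps out.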
For this Max Planck received the 1918 Nobel Prize in Physics in recognition of the services he rendered to the advancement of Science. Now take a moment to examine the plot of energy density vs. wavelength below. At first glance it would seem that the source temperature should be as high as possible to maximize the results—this is rarely the case. For example, consider a typical incandescent light bulb. The tungsten filament glows at a temperature of 3000 K, which would emit massive amounts of IR. The bulb portion of a light bulb is responsible for its lack of use as an IR source. The bulb is made of glass, which seals the tungsten filament in a vacuum. The vacuum is necessary to keep the tungsten from oxidizing at such a high temperature, but the glass serves as an IR absorber, blocking its path to the sample. Any source we choose must be in direct contact with the atmosphere; because of this, there are drastic limits on the temperature at which we may operate an IR source.

There are several other limiting factors that require consideration when choosing an IR source. The material should be thermodynamically stable; otherwise it would quickly break down and need replacing, which would obviously be an expensive and undesirable approach. There is also the possibility that the source may produce an excess of IR radiation. This would saturate the detector and possibly overload the analog-to-digital converter. The most ubiquitous IR source used in FTIR is a resistively heated silicon carbide rod. This device is commonly and somewhat simply referred to as a Globar. An electric current is passed through the rod, which becomes very hot, producing large amounts of IR radiation. A Globar can reach temperatures of 1300 K, and in the past required water cooling to keep from damaging the electrical components. Advances in ceramic metal alloys have led to the production of Globars that no longer require water cooling.
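Planck's law makes the temperature trade-off concrete, and Wien's displacement law gives the wavelength of peak emission directly. The comparison below (a 1300 K Globar vs. a 3000 K filament, temperatures from the text; the constants are standard values) shows why the hotter filament peaks in the near-IR while the Globar peaks in the mid-IR:

```python
import math

H = 6.62607015e-34       # Planck constant, J s
C = 2.99792458e8         # speed of light, m/s
KB = 1.380649e-23        # Boltzmann constant, J/K
WIEN_B = 2.897771955e-3  # Wien displacement constant, m K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def peak_wavelength_um(temp_k):
    """Wien's displacement law: wavelength of maximum emission, in um."""
    return WIEN_B / temp_k * 1e6

print(round(peak_wavelength_um(1300.0), 2))   # Globar: ~2.23 um (mid-IR)
print(round(peak_wavelength_um(3000.0), 2))   # filament: ~0.97 um (near-IR)
```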
However, these newer Globars are typically not operated at temperatures as high as 1300 K. Nichrome and Kanthal wire coils were also once popular IR sources. They too did not require water cooling, but ran at lower temperatures than a Globar and possessed lower emissivity. Nernst Glowers are an IR source capable of hotter temperatures than a Globar. Nernst Glowers are fabricated from a mixture of refractory oxides. Despite being capable of higher temperatures than a Globar, the Nernst Glower is not capable of producing IR radiation above 2000 cm⁻¹. As long as the frequency of IR to be examined is below 2000 cm⁻¹, the Nernst Glower is an exceptional IR source, but if the entire mid-IR range is necessary then using a Nernst Glower would result in low signal-to-noise ratios.

It should be noted that the carbon IR sources used in many spectrometers today, similar to the Globar discussed above, are different from the carbon arcs that you may be familiar with. A carbon arc occurs when an electrical discharge passes between two carbon electrodes. These arcs are incredibly bright, reaching temperatures as high as 6000 K. IR sources capitalizing on the large IR output of these arcs have ultimately been shown to possess more drawbacks than advantages. Because the carbon electrodes are consumed in the arcing process, it would be necessary to continuously feed new rod forward to maintain the arc. The rods would also require an inert atmosphere to avoid combustion of the carbon. These limiting features and added complications make carbon arcs unfit as IR sources.

The creation of today’s FTIR would not have been possible had it not been for the existence of the Michelson interferometer. This essential piece of optical equipment was invented by Albert Abraham Michelson. He received the Nobel Prize in 1907 for his accurate measurements of the wavelengths of light. His Nobel-winning experiments were made possible by his invention of the interferometer.
Albert Michelson was in fact the first American to receive a Nobel Prize in the sciences, solidifying the U.S. as a world leader in science. Michelson did not invent the interferometer to perform infrared spectroscopy; in fact his experiments had nothing to do with any kind of spectroscopy. Michelson’s goal was to discover evidence for luminiferous aether, the material once believed to permeate the universe allowing for the propagation of light waves. Of course it is now known that no such aether exists and that light is capable of propagating in a vacuum.

The stationary mirror in an FTIR interferometer is nothing more than a flat, highly reflective surface. The beauty of the FTIR spectrometer's design lies in its simplicity. There is only one moving part in an FTIR spectrometer: its oscillating mirror. Air bearings are used in FTIR spectrometers because of the high speed at which the oscillating mirror is required to move. The air bearings eliminate the friction that would inevitably cause the moving parts of the mirror to break down, as is the case for mechanical bearings. The air bearing has nearly replaced the mechanical bearing in all modern FTIR spectrometers. The older mechanical bearings required expensive ruby ball bearings, as they were the only material strong enough to endure the high physical demands of oscillating once every millisecond.

Infrared detectors are classified into two categories: thermal and quantum models. A thermal detector uses the energy of the infrared beam as heat, while the quantum mechanical detector uses the IR beam as light and provides for a more sensitive detector. A thermal detector operates by detecting the changes in temperature of an absorbing material.
Their output may be in the form of an electromotive force (thermocouples), a change in resistance of a conductor (bolometer) or semiconductor (thermistor bolometer), or the movement of a diaphragm caused by the expansion of a gas (pneumatic detector). There exist major limitations to these forms of IR detectors. Their response time (several milliseconds) is much too slow to follow the modulation produced by the oscillating mirror in FTIR. The mirror oscillates with a frequency of approximately 1.25 kHz; therefore an IR detector employed in FTIR must have a response time of less than 1 ms. A response time of less than one millisecond is obtainable with cryogenically cooled thermal detectors. These detectors are commonly too expensive to be preferred over other forms of detectors. There is one kind of thermal detector that is both inexpensive and possesses a response time fast enough to be appropriate, with the additional benefit of operating at room temperature. This detector is the pyroelectric bolometer. These detectors incorporate as their heat-sensing element ferroelectric materials that exhibit a large spontaneous electrical polarization at temperatures below their Curie point. If the temperature of the ferroelectric material is changed, the degree of polarization also changes, causing an electric current. A pyroelectric bolometer is based on a pyroelectric crystal (usually LiTaO₃ or PZT) covered by an absorbing layer (silver, or silver blackened with carbon).

Because of their higher sensitivity and faster response times, quantum well detectors are much more common in FTIR. The detection mechanism of a Quantum Well Infrared Photodetector (QWIP) involves photoexcitation of electrons between the ground and first excited states of a single or multi-quantum-well structure. The parameters are designed so that these photoexcited carriers can escape from the well and be collected as photocurrent.
These quantum wells can be realized by placing thin layers of two different high-bandgap semiconductor materials alternately, where the bandgap discontinuity creates potential wells associated with the conduction and valence bands. When IR photons strike these materials they induce a current that is then transformed into a digital signal via an analog-to-digital converter. These detectors work more effectively (with increased sensitivity) at lower temperatures. This is in part due to the higher degree of instrumental noise associated with a higher thermal background. Today a wide range of these photodetecting diodes that do not require cooling is available. The finer details of the detector are numerous and dependent on the parameters of the equipment, and are therefore beyond the scope of this module.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_General_Chemistry%3A_Principles_Patterns_and_Applications_(Averill)/24%3A_Nuclear_Chemistry/24.06%3A_Applied_Nuclear_Chemistry |
The ever-increasing energy needs of modern societies have led scientists and engineers to develop ways of harnessing the energy released by nuclear reactions. To date, all practical applications of nuclear power have been based on nuclear fission reactions. Although nuclear fusion offers many advantages in principle, technical difficulties in achieving the high energies required to initiate nuclear fusion reactions have thus far precluded using fusion for the controlled release of energy. In this section, we describe the various types of nuclear power plants that currently generate electricity from nuclear reactions, along with some possible ways to harness fusion energy in the future. In addition, we discuss some of the applications of nuclear radiation and radioisotopes, which have innumerable uses in medicine, biology, chemistry, and industry. When a critical mass of a fissile isotope is achieved, the resulting flux of neutrons can lead to a self-sustaining reaction. A variety of techniques can be used to control the flow of neutrons from such a reaction, which allows nuclear fission reactions to be maintained at safe levels. Many levels of control are required, along with a fail-safe design, because otherwise the chain reaction can accelerate so rapidly that it releases enough heat to melt or vaporize the fuel and the container, a situation that can release enough radiation to contaminate the surrounding area. Uncontrolled nuclear fission reactions are relatively rare, but they have occurred at least 18 times in the past. The most recent event resulted from the damaged Fukushima Dai-ichi plant after the March 11, 2011, earthquake and tsunami that devastated Japan. The plant used fresh water for cooling nuclear fuel rods to maintain controlled, sustainable nuclear fission. When the water supply was disrupted, so much heat was generated that a partial meltdown occurred. 
Radioactive iodine levels in contaminated seawater from the plant were over 4300 times the regulated safety limit. To put this in perspective, drinking one liter of fresh water with this level of contamination is equivalent to receiving double the annual radiation dose that is typical for a person. Dismantling the plant and decontaminating the site is estimated to require 30 years at a cost of approximately $12 billion.

There is compelling evidence that uncontrolled nuclear chain reactions occurred naturally in the early history of our planet, about 1.7 billion years ago in uranium deposits near Oklo in Gabon, West Africa. The natural abundance of ²³⁵U 2 billion years ago was about 3%, compared with 0.72% today; in contrast, the “fossil nuclear reactor” deposits in Gabon now contain only 0.44% ²³⁵U. An unusual combination of geologic phenomena in this region apparently resulted in the formation of deposits of essentially pure uranium oxide containing 3% ²³⁵U, which coincidentally is identical to the fuel used in many modern nuclear plants. When rainwater or groundwater saturated one of these deposits, the water acted as a natural moderator that decreased the kinetic energy of the neutrons emitted by radioactive decay of ²³⁵U, allowing the neutrons to initiate a chain reaction. As a result, the entire deposit “went critical” and became an uncontrolled nuclear chain reaction, which is estimated to have produced about 100 kW of power. It is thought that these natural nuclear reactors operated only intermittently, however, because the heat released would have vaporized the water. Removing the water would have shut down the reactor until the rocks cooled enough to allow water to reenter the deposit, at which point the chain reaction would begin again. This on–off cycle is believed to have been repeated for more than 100,000 years, until so much ²³⁵U was consumed that the deposits could no longer support a chain reaction.
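The abundance figures quoted above can be checked with a short back-calculation from the two uranium isotopes' half-lives. The half-life and present-day abundance values used below are standard ones assumed for illustration, not taken from this text:

```python
# Back-calculate the natural abundance of U-235 at the time of the Oklo
# natural reactors, using today's abundance and the two half-lives.
T_HALF_U235 = 7.04e8   # years (assumed standard value)
T_HALF_U238 = 4.47e9   # years (assumed standard value)

def u235_abundance(years_ago, abundance_today=0.0072):
    """Atom fraction of U-235 in natural uranium `years_ago` years ago."""
    # Running radioactive decay backward multiplies each isotope by 2**(t / t_half).
    n235 = abundance_today * 2 ** (years_ago / T_HALF_U235)
    n238 = (1 - abundance_today) * 2 ** (years_ago / T_HALF_U238)
    return n235 / (n235 + n238)

print(f"{u235_abundance(1.7e9):.1%}")  # U-235 abundance when the Oklo deposits went critical
```

Running the decay backward to the age of the Oklo deposits (about 1.7 billion years) gives an abundance close to the roughly 3% the text cites as sufficient for a water-moderated chain reaction.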
In addition to the incident in Japan, another recent instance of an uncontrolled nuclear chain reaction occurred on April 25–26, 1986, at the Chernobyl nuclear power plant in the former Union of Soviet Socialist Republics (USSR; now in Ukraine; Figure \(\Page {2}\)). During testing of the reactor’s turbine generator, a series of mechanical and operational failures caused a chain reaction that quickly went out of control, destroying the reactor core and igniting a fire that destroyed much of the facility and released a large amount of radioactivity. Thirty people were killed immediately, and the high levels of radiation in a 20 mi radius forced nearly 350,000 people to be resettled or evacuated. In addition, the accident caused a disruption to the Soviet economy that is estimated to have cost almost $13 billion. It is somewhat surprising, however, that the long-term health effects on the 600,000 people affected by the accident appear to be much less severe than originally anticipated. Initially, it was predicted that the accident would result in tens of thousands of premature deaths, but an exhaustive study almost 20 yr after the event suggests that about 4000 people will die prematurely from radiation exposure due to the accident. Although significant, this represents only about a 3% increase in the cancer rate among the 600,000 people most affected, of whom about a quarter would be expected to die of cancer eventually even if the accident had not occurred.

If, on the other hand, the neutron flow in a reactor is carefully regulated so that only enough heat is released to boil water, then the resulting steam can be used to produce electricity. Thus a nuclear reactor is similar in many respects to a conventional power plant that burns coal or natural gas to generate electricity; the only difference is the source of the heat that converts water to steam.
We begin our description of nuclear power plants with light-water reactors, which are used extensively to produce electricity in countries such as Japan, Israel, South Korea, Taiwan, and France—countries that lack large reserves of fossil fuels. The essential components of a light-water reactor are described next. All existing nuclear power plants have similar components, although different designs use different fuels and operating conditions. Fuel rods containing a fissile isotope in a structurally stabilized form (such as uranium oxide pellets encased in a corrosion-resistant zirconium alloy) are suspended in a cooling bath that transfers the heat generated by the fission reaction to a secondary cooling system. The heat is used to generate steam for the production of electricity. In addition, control rods are used to absorb neutrons and thereby control the rate of the nuclear chain reaction. Control rods are made of a substance that efficiently absorbs neutrons, such as boron, cadmium, or, in nuclear submarines, hafnium. Pulling the control rods out increases the neutron flux, allowing the reactor to generate more heat, whereas inserting the rods completely stops the reaction, a process called “scramming the reactor.”

Despite this apparent simplicity, many technical hurdles must be overcome for nuclear power to be an efficient and safe source of energy. Natural uranium contains only 0.72% uranium-235, the only naturally occurring fissile isotope of uranium. Because this abundance is not enough to support a chain reaction, the uranium fuel must be at least partially enriched in ²³⁵U, to a concentration of about 3%, for it to sustain a chain reaction. At this level of enrichment, a nuclear explosion is impossible; far higher levels of enrichment (90% or more) are required for military applications such as nuclear weapons or the nuclear reactors in submarines.
Enrichment is accomplished by converting uranium oxide to UF₆, which is volatile and contains discrete UF₆ molecules. Because ²³⁵UF₆ and ²³⁸UF₆ have different masses, they have slightly different rates of diffusion, and they can be separated using a gas diffusion process.

Another difficulty is that neutrons produced by nuclear fission are too energetic to be absorbed readily by neighboring nuclei, and they escape from the material without inducing fission in nearby ²³⁵U nuclei. Consequently, a moderator must be used to slow the neutrons enough to allow them to be captured by other ²³⁵U nuclei. High-speed neutrons are scattered by substances such as water or graphite, which decreases their kinetic energy and increases the probability that they will react with another ²³⁵U nucleus. The moderator in a light-water reactor is the water that is used as the primary coolant. The system is highly pressurized to about 100 atm to keep the water from boiling at 100°C.

All nuclear reactors require a powerful cooling system to absorb the heat generated in the reactor core and create the steam that drives a turbine to generate electricity. In 1979, an accident occurred when the main water pumps used for cooling at the nuclear power plant at Three Mile Island in Pennsylvania stopped running, which prevented the steam generators from removing heat. Eventually, the zirconium casing of the fuel rods ruptured, resulting in a meltdown of about half of the reactor core. Although there was no loss of life and only a small release of radioactivity, the accident produced sweeping changes in nuclear power plant operations: the US Nuclear Regulatory Commission tightened its oversight to improve safety.

The main disadvantage of nuclear fission reactors is that the spent fuel, which contains too little of the fissile isotope for power generation, is much more radioactive than the unused fuel, due to the presence of many daughter nuclei with shorter half-lives than ²³⁵U.
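The gas diffusion separation described above works because, by Graham's law, the lighter ²³⁵UF₆ diffuses slightly faster than ²³⁸UF₆; the ideal per-stage separation factor is only √(352/349) ≈ 1.004, which is why many cascaded stages are required. A rough, idealized estimate follows (molar masses are assumed standard values; real cascades are less efficient than this):

```python
# Estimate the number of ideal gaseous-diffusion stages needed to enrich
# uranium from natural (0.72% U-235) to reactor grade (~3% U-235).
from math import sqrt, log

M_235UF6 = 349.03  # g/mol (assumed)
M_238UF6 = 352.04  # g/mol (assumed)

# Ideal per-stage separation factor from Graham's law, ~1.0043.
alpha = sqrt(M_238UF6 / M_235UF6)

def stages_needed(x_feed=0.0072, x_product=0.03):
    """Ideal stage count to raise the U-235 atom fraction from x_feed to x_product."""
    ratio_feed = x_feed / (1 - x_feed)          # isotope ratio in the feed
    ratio_product = x_product / (1 - x_product) # isotope ratio in the product
    return log(ratio_product / ratio_feed) / log(alpha)

print(round(stages_needed()))  # hundreds of stages even in the ideal case
```

Even under ideal assumptions, several hundred stages are needed, which is consistent with the enormous scale of historical diffusion plants.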
The decay of these daughter isotopes generates so much heat that the spent fuel rods must be stored in water for as long as 5 yr before they can be handled. Even then, the radiation levels are so high that the rods must be stored for many, many more years to allow the daughter isotopes to decay to nonhazardous levels. How to store these spent fuel rods for hundreds of years is a pressing issue that has not yet been successfully resolved. As a result, some people are convinced that nuclear power is not a viable option for providing our future energy needs, although a number of other countries continue to rely on nuclear reactors for a large fraction of their energy.

Deuterium (²H) absorbs neutrons much less effectively than does hydrogen (¹H), but it is about twice as effective at slowing neutrons. Consequently, a nuclear reactor that uses D₂O instead of H₂O as the moderator is so efficient that it can use unenriched uranium as fuel. Using a lower grade of uranium reduces operating costs and eliminates the need for plants that produce enriched uranium. Because of the expense of D₂O, however, only countries like Canada, which has abundant supplies of hydroelectric power for generating D₂O by electrolysis, have made a major investment in heavy-water reactors.

A breeder reactor is a nuclear fission reactor that produces more fissionable fuel than it consumes. This does not violate the first law of thermodynamics because the fuel produced is not the same as the fuel consumed. Under heavy neutron bombardment, the nonfissile ²³⁸U isotope is converted to ²³⁹Pu, which can undergo fission, by neutron capture followed by two successive β decays:

\[^{238}_{92}\textrm{U} + \,^{1}_{0}\textrm{n} \rightarrow \,^{239}_{92}\textrm{U} \xrightarrow{\beta^-} \,^{239}_{93}\textrm{Np} \xrightarrow{\beta^-} \,^{239}_{94}\textrm{Pu}\]

The overall reaction is thus the conversion of nonfissile ²³⁸U to fissile ²³⁹Pu, which can be chemically isolated and used to fuel a new reactor. An analogous series of reactions converts nonfissile ²³²Th to ²³³U, which can also be used as a fuel for a nuclear reactor.
Typically, about 8–10 yr are required for a breeder reactor to produce twice as much fissile material as it consumes, which is enough to fuel a replacement for the original reactor plus a new reactor. The products of the fission of ²³⁹Pu, however, have substantially longer half-lives than the fission products formed in light-water reactors.

Although nuclear fusion reactions are thermodynamically spontaneous, the positive charge on both nuclei results in a large electrostatic energy barrier to the reaction (remember that thermodynamic spontaneity is unrelated to the reaction rate). Extraordinarily high temperatures (about 1.0 × 10⁸ °C) are required to overcome electrostatic repulsions and initiate a fusion reaction. Even the most feasible such reaction, deuterium–tritium fusion (D–T fusion), requires a temperature of about 4.0 × 10⁷ °C. Achieving these temperatures and controlling the materials to be fused are extraordinarily difficult problems, as is extracting the energy released by the fusion reaction, because a commercial fusion reactor would require such high temperatures to be maintained for long periods of time. Several different technologies are currently being explored, including the use of intense magnetic fields to contain ions in the form of a dense, high-energy plasma at a temperature high enough to sustain fusion (Figure \(\Page {4a}\)). Another concept employs focused laser beams to heat and compress fuel pellets in controlled miniature fusion explosions (Figure \(\Page {4b}\)). Nuclear reactions such as these are called thermonuclear reactions because a great deal of thermal energy must be invested to initiate the reaction. The amount of energy released by the reaction, however, is several orders of magnitude greater than the energy needed to initiate it. In principle, a nuclear fusion reaction should thus result in a significant net production of energy.
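The energy payoff of D–T fusion can be computed directly from the mass defect of the reaction ²H + ³H → ⁴He + n. The atomic masses and the u-to-MeV conversion below are standard assumed values, not taken from this text:

```python
# Q-value of D-T fusion, 2H + 3H -> 4He + n, from the mass defect.
M_D, M_T = 2.014102, 3.016049    # atomic masses of deuterium and tritium, u
M_HE4, M_N = 4.002602, 1.008665  # atomic masses of helium-4 and the neutron, u
MEV_PER_U = 931.494              # energy equivalent of 1 u, MeV

def dt_fusion_energy_mev():
    """Energy released per D-T fusion event, in MeV."""
    mass_defect = (M_D + M_T) - (M_HE4 + M_N)
    return mass_defect * MEV_PER_U

print(f"{dt_fusion_energy_mev():.1f} MeV released per fusion event")
```

The result, about 17.6 MeV per event, is several orders of magnitude larger than the keV-scale kinetic energies the nuclei need to surmount the electrostatic barrier, which is the sense in which fusion yields a large net energy gain.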
In addition, Earth’s oceans contain an essentially inexhaustible supply of both deuterium and tritium, which suggests that nuclear fusion could provide a limitless supply of energy. Unfortunately, however, the technical requirements for a successful nuclear fusion reaction are so great that net power generation by controlled fusion has yet to be achieved.

Nuclear radiation can damage biological molecules, thereby disrupting normal functions such as cell division. Because radiation is particularly destructive to rapidly dividing cells such as tumor cells and bacteria, it has been used medically to treat cancer since 1904, when radium-226 was first used to treat a cancerous tumor. Many radioisotopes are now available for medical use, and each has specific advantages for certain applications. In modern radiation therapy, radiation is often delivered by a source planted inside the body. For example, tiny capsules containing an isotope such as ¹⁹²Ir, coated with a thin layer of chemically inert platinum, are inserted into the middle of a tumor that cannot be removed surgically. The capsules are removed when the treatment is over. In some cases, physicians take advantage of the body’s own chemistry to deliver a radioisotope to the desired location. For example, the thyroid glands in the lower front of the neck are the only organs in the body that use iodine. Consequently, radioactive iodine is taken up almost exclusively by the thyroid (Figure \(\Page {5a}\)). Thus when radioactive isotopes of iodine (¹²⁵I or ¹³¹I) are injected into the blood of a patient suffering from thyroid cancer, the thyroid glands filter the radioisotope from the blood and concentrate it in the tissue to be destroyed. In cases where a tumor is surgically inaccessible (e.g., when it is located deep in the brain), an external radiation source such as a ⁶⁰Co “gun” is used to aim a tightly focused beam of γ rays at it.
Unfortunately, radiation therapy damages healthy tissue in addition to the target tumor and results in severe side effects, such as nausea, hair loss, and a weakened immune system. Although radiation therapy is generally not a pleasant experience, in many cases it is the only choice.

A second major medical use of radioisotopes is medical imaging, in which a radioisotope is temporarily localized in a particular tissue or organ, where its emissions provide a “map” of the tissue or the organ. Medical imaging uses radioisotopes that cause little or no tissue damage but are easily detected. One of the most important radioisotopes for medical imaging is ⁹⁹ᵐTc. Depending on the particular chemical form in which it is administered, technetium tends to localize in bones and soft tissues, such as the heart or the kidneys, which are almost invisible in conventional x-rays (Figure \(\Page {5b}\)). Several other radioisotopes with suitable properties are also used for medical imaging. Because γ rays produced by isotopes such as ¹³¹I and ⁹⁹ᵐTc are emitted randomly in all directions, it is impossible to achieve high levels of resolution in images that use such isotopes. However, remarkably detailed three-dimensional images can be obtained using an imaging technique called positron emission tomography (PET). The technique uses radioisotopes that decay by positron emission, and the resulting positron is annihilated when it collides with an electron in the surrounding matter. In the annihilation process, both particles are converted to energy in the form of two γ rays that are emitted simultaneously and at 180° to each other:

\[^{0}_{+1}\beta + \,^{0}_{-1}\textrm{e} \rightarrow 2\gamma\]

With PET, biological molecules that have been “tagged” with a positron-emitting isotope such as ¹⁸F or ¹¹C can be used to probe the functions of organs such as the brain.

Another major health-related use of ionizing radiation is the irradiation of food, an effective way to kill bacteria such as Salmonella in chicken and eggs and potentially lethal strains of Escherichia coli in beef.
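The two annihilation γ rays each carry the rest energy of one electron, E = mₑc², which fixes the photon energy every PET scanner is built to detect. A quick check from fundamental constants (standard assumed values):

```python
# The rest energy of one electron, E = m_e * c**2, sets the energy of each
# annihilation photon detected in PET.
M_E = 9.10938e-31        # electron mass, kg (standard value)
C = 2.99792458e8         # speed of light, m/s
J_PER_EV = 1.602176634e-19  # joules per electron volt

def annihilation_photon_kev():
    """Energy of each of the two gamma rays from e+/e- annihilation, in keV."""
    return M_E * C ** 2 / J_PER_EV / 1e3

print(f"{annihilation_photon_kev():.0f} keV per photon")
```

The familiar 511 keV line is what PET detectors look for, in coincidence and at 180°, to reconstruct where each annihilation occurred.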
Collectively, such organisms cause almost 3 million cases of food poisoning annually in the United States, resulting in hundreds of deaths. Figure \(\Page {6}\) shows how irradiation dramatically extends the storage life of foods such as strawberries. Although US health authorities have given only limited approval of this technique, the growing number of illnesses caused by antibiotic-resistant bacteria is increasing the pressure to expand the scope of food irradiation.

One of the more unusual effects of radioisotopes is in dentistry. Because dental enamels contain a mineral called feldspar (KAlSi₃O₈, which is also found in granite rocks), teeth contain a small amount of naturally occurring radioactive ⁴⁰K. The radiation caused by the decay of ⁴⁰K results in the emission of light (fluorescence), which gives the highly desired “pearly white” appearance associated with healthy teeth.

In a sign of how important nuclear medicine has become in diagnosing and treating illnesses, the medical community has become alarmed at the global shortage of ⁹⁹ᵐTc, a radioisotope used in more than 30 million procedures a year worldwide. Two reactors that produce 60% of the world’s radioactive ⁹⁹Mo, which decays to ⁹⁹ᵐTc, are operating beyond their intended life expectancies. Moreover, because most of the reactors producing ⁹⁹Mo use weapons-grade uranium (²³⁵U), which yields ⁹⁹Mo as a fission product, governments are working to phase out civilian uses of this technology because of concerns that it can be used to produce nuclear weapons. Engineers are currently focused on making key medical isotopes by alternative routes that do not require fission. One promising option is removing a neutron from ¹⁰⁰Mo, a stable isotope that makes up about 10% of natural molybdenum, transmuting it to ⁹⁹Mo. Beyond these medical uses, radioisotopes have literally hundreds of other applications.
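The supply problem with ⁹⁹Mo is driven by its short half-life (about 66 hours, a standard value assumed here): a stock decays appreciably while in transit from reactor to hospital. A minimal sketch of the decay arithmetic:

```python
# Decay of a Mo-99 stock (the parent of Tc-99m) during storage and shipping.
from math import exp, log

T_HALF_MO99_H = 66.0  # hours; assumed standard half-life of Mo-99

def mo99_fraction_remaining(hours):
    """Fraction of the original Mo-99 activity left after `hours`."""
    return exp(-log(2) * hours / T_HALF_MO99_H)

print(f"{mo99_fraction_remaining(7 * 24):.0%} of a shipment survives one week")
```

Losing most of a shipment in a week of transit is why ⁹⁹Mo production must be continuous and geographically distributed, and why the aging of the few producing reactors is such a concern.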
For example, smoke detectors contain a tiny amount of ²⁴¹Am, which ionizes the air in the detector so an electric current can flow through it. Smoke particles reduce the number of ionized particles and decrease the electric current, which triggers an alarm. Another application is the “go-devil” used to detect leaks in long pipelines. It is a packaged radiation detector that is inserted into a pipeline and propelled through the pipe by the flowing liquid. Sources of ⁶⁰Co are attached to the pipe at regular intervals; as the detector travels along the pipeline, it sends a radio signal each time it passes a source. When a massive leak causes the go-devil to stop, the repair crews know immediately which section of the pipeline is damaged. Finally, radioactive substances are used in gauges that measure and control the thickness of sheets and films. As shown in Figure \(\Page {7}\), thickness gauges rely on the absorption of either β particles (by paper, plastic, and very thin metal foils) or γ rays (for thicker metal sheets); the amount of radiation absorbed can be measured accurately and is directly proportional to the thickness of the material.

To summarize: all practical applications of nuclear power have been based on nuclear fission reactions, which nuclear power plants use to generate electricity. Light-water reactors use enriched uranium as a fuel. They include fuel rods, a moderator, control rods, and a powerful cooling system to absorb the heat generated in the reactor core. Heavy-water reactors use unenriched uranium as a fuel because they use D₂O as the moderator, which scatters and slows neutrons much more effectively than H₂O. A breeder reactor produces more fissionable fuel than it consumes. A nuclear fusion reactor requires very high temperatures; fusion reactions are called thermonuclear reactions because they require high temperatures for initiation.
Radioisotopes are used in both radiation therapy and medical imaging.
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Atomic_Theory/Simple_View_of_Atomic_Structure
The relative masses and relative charges of protons, neutrons, and electrons are: proton, mass 1, charge +1; neutron, mass 1, charge 0; electron, mass about 1/1836, charge −1. In reality, protons and neutrons do not have exactly the same mass; neither of them has a mass of exactly 1 on the carbon-12 scale (the scale on which the relative masses of atoms are measured). On this scale, a proton has a mass of 1.0073, and a neutron a mass of 1.0087. The masses are given as 1 for simplicity and convenience.

If a beam containing each of these particles is passed between two electrically charged plates—one positive and one negative—the protons are deflected toward the negative plate, the electrons are deflected toward the positive plate, and the neutrons pass through undeflected. The magnitude of the deflections depends on whether the particles have the same energy or the same speed. If beams of the three types of particles, all with the same energy, are passed between two electrically charged plates, the magnitude of the deflection is exactly the same for the electron beam as for the proton beam, but the deflections occur in opposite directions. If the electric field is strong enough, the electron and proton beams can curve enough to hit their respective plates. If instead the beams all travel with the same speed, the lighter electrons are deflected far more strongly than the heavier protons.

The nucleus, located at the center of the atom, contains the protons and neutrons. Protons and neutrons are collectively known as nucleons. Virtually all the mass of the atom is concentrated in the nucleus, because electrons weigh so little in comparison to the nucleons.

Number of protons = ATOMIC NUMBER of the atom. The atomic number is also given the more descriptive name of proton number. Number of protons + number of neutrons = MASS NUMBER of the atom. The mass number is also called the nucleon number.
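The same-energy and same-speed claims above follow from the kinematics of a charged particle between the plates: the transverse deflection is y = ½(qE/m)(L/v)², which equals qEL²/(4K) for kinetic energy K = ½mv², so at equal energy the mass cancels. A small numerical check under idealized assumptions (uniform field, non-relativistic motion; the field strength and plate length are arbitrary illustration values, and the particle constants are standard ones):

```python
# Deflection of a charged particle crossing parallel plates of length L in a
# uniform transverse field E: y = (1/2)(qE/m)(L/v)**2 = qEL**2/(4K), where
# K = (1/2)mv**2.  At equal K the mass cancels, so electron and proton beams
# deflect equally (in opposite directions); at equal speed y scales as 1/m.
Q_E = 1.602e-19        # elementary charge, C (standard value)
M_ELECTRON = 9.11e-31  # kg
M_PROTON = 1.673e-27   # kg

def deflection(q, m, v, field=1e4, length=0.1):
    """Transverse deflection (m) after crossing the plates at speed v."""
    transit_time = length / v
    acceleration = q * field / m
    return 0.5 * acceleration * transit_time ** 2

# Equal kinetic energy: choose speeds so (1/2) m v**2 is the same.
K = 1e-16  # joules
v_e = (2 * K / M_ELECTRON) ** 0.5
v_p = (2 * K / M_PROTON) ** 0.5
same_energy_ratio = deflection(Q_E, M_ELECTRON, v_e) / deflection(Q_E, M_PROTON, v_p)

# Equal speed: the lighter electron is deflected ~1836 times as much.
v = 1.0e6  # m/s
same_speed_ratio = deflection(Q_E, M_ELECTRON, v) / deflection(Q_E, M_PROTON, v)

print(same_energy_ratio, round(same_speed_ratio))
```

The equal-energy ratio comes out as 1 (identical deflection magnitudes), while the equal-speed ratio comes out as the proton-to-electron mass ratio, matching the text's two observations.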
For fluorine, this information can be expressed as \(^{19}_{9}\textrm{F}\): the atomic number is the number of protons (9); the mass number counts protons + neutrons (19). If there are 9 protons, there must be 10 neutrons, adding up to a total of 19 nucleons in the atom. The atomic number is tied to the position of the element in the periodic table; the number of protons therefore defines the element of interest. If an atom has 8 protons (atomic number = 8), it must be oxygen. If an atom has 12 protons (atomic number = 12), it must be magnesium. Similarly, every chlorine atom (atomic number = 17) has 17 protons; every uranium atom (atomic number = 92) has 92 protons.

The number of neutrons in an atom can vary within small limits. For example, there are three kinds of carbon atom: ¹²C, ¹³C, and ¹⁴C. They all have the same number of protons, but the number of neutrons varies. Atoms with the same atomic number but different mass numbers are called isotopes. Varying numbers of neutrons have no effect on the chemical properties of the atom.

Atoms are electrically neutral, and the positive charge from the protons is balanced by negative charge from the electrons. It follows that in a neutral atom: number of electrons = number of protons. Therefore, if an oxygen atom (atomic number = 8) has 8 protons, it must also have 8 electrons; if a chlorine atom (atomic number = 17) has 17 protons, it must also have 17 electrons.

Electrons are found at considerable distances from the nucleus, arranged in successive energy levels. Each energy level can hold only a certain number of electrons. The first level (nearest the nucleus) holds two electrons, and the second and third levels each hold eight. These levels can be visualized as getting successively further from the nucleus. Electrons always occupy the lowest possible energy level (nearest the nucleus), provided there is space. Chlorine (17 electrons), for example, fills its levels as 2, 8, 7. After the third level, the pattern is altered by the transition series.
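The 2-8-8 filling rule described above can be sketched as a tiny function. Like the text's treatment, this is a deliberate simplification that is valid only up to about element 20, where the transition series alters the pattern:

```python
# Fill electrons into energy levels using the introductory 2-8-8 rule.
# This simplification breaks down after element 20 (the transition series).
def shell_structure(atomic_number):
    """Electron count per energy level for a neutral atom (2-8-8 rule)."""
    shells = []
    remaining = atomic_number  # neutral atom: electrons = protons
    for capacity in (2, 8, 8):
        if remaining <= 0:
            break
        filled = min(capacity, remaining)
        shells.append(filled)
        remaining -= filled
    return shells

print(shell_structure(17))  # chlorine: [2, 8, 7]
```

The same function reproduces the text's other examples: oxygen (8 electrons) gives [2, 6] and magnesium (12 electrons) gives [2, 8, 2].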
Introductory chemistry courses generally depict the electronic structures of atoms such as hydrogen and carbon as circles of electrons drawn around the nucleus. This is a simplification and can be misleading: it gives the impression that the electrons are circling the nucleus in orbits like planets around the sun, whereas it is impossible to know exactly how they are actually moving. The circles simply show energy levels, representing increasing distances from the nucleus. If the circles are straightened, the electronic structure can be shown as a simple energy diagram; for carbon (2, 4), such a diagram shows two electrons in the first level and four in the second. This visualization of the arrangement of the electrons is useful in understanding electronic structure.
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Basic_Principles_of_Organic_Chemistry_(Roberts_and_Caserio)/19%3A_More_on_Stereochemistry/19.05%3A_Absolute_And_Relative_Configuration
The sign of rotation of plane-polarized light by an enantiomer is not easily related to its configuration. This is true even for substances with very similar structures. Thus, given lactic acid, \(\ce{CH_3CHOHCO_2H}\), with a specific rotation of \(+3.82^\text{o}\), and methyl lactate, \(\ce{CH_3CHOHCO_2CH_3}\), with a specific rotation of \(-8.25^\text{o}\), we cannot tell from the rotations alone whether the acid and ester have the same or a different arrangement of groups about the chiral center. Their relative configurations have to be obtained by other means. If we convert \(\left( + \right)\)-lactic acid into its methyl ester, we can be reasonably certain that the ester will be related in configuration to the acid, because esterification should not affect the configuration about the chiral carbon atom. It happens that the methyl ester so obtained is levorotatory, so we know that \(\left( + \right)\)-lactic acid and \(\left( - \right)\)-methyl lactate have the same relative configuration at the asymmetric carbon, even though they possess opposite signs of optical rotation. However, we still do not know the absolute configuration; that is, we are unable to tell which of the two possible configurations of lactic acid, \(2a\) or \(2b\), corresponds to the dextro or \(\left( + \right)\)-acid and which to the levo or \(\left( - \right)\)-acid.

Until 1956, the absolute configuration of no optically active compound was known. Instead, configurations were assigned relative to a standard compound, glyceraldehyde, which originally was chosen by E. Fischer (around 1885) for the purpose of correlating the configurations of carbohydrates. Fischer arbitrarily assigned the configuration \(3a\) to dextrorotatory glyceraldehyde, which was known as \(D\)-\(\left( + \right)\)-glyceraldehyde. The levorotatory enantiomer, \(3b\), is designated \(L\)-\(\left( - \right)\)-glyceraldehyde. (If you are unsure of the terminology \(D\) and \(L\), or of the rules for writing Fischer projection formulas, review those topics before proceeding.)
The configurations of many compounds besides sugars now have been related to glyceraldehyde, including \(\alpha\)-amino acids, terpenes, steroids, and other biochemically important substances. Compounds whose configurations are related to \(D\)-\(\left( + \right)\)-glyceraldehyde are said to belong to the \(D\) series, and those related to \(L\)-\(\left( - \right)\)-glyceraldehyde belong to the \(L\) series. At the time the choice of absolute configuration for glyceraldehyde was made, there was no way of knowing whether the configuration of \(\left( + \right)\)-glyceraldehyde was in reality \(3a\) or \(3b\). However, the choice had a \(50\%\) chance of being correct, and we now know that \(3a\), the \(D\) configuration, is in fact the correct configuration of \(\left( + \right)\)-glyceraldehyde. This was established through use of a special x-ray crystallographic technique, which permitted determination of the absolute disposition of the atoms in space of sodium rubidium \(\left( + \right)\)-tartrate. The configuration of \(\left( + \right)\)-tartaric acid previously had been shown by chemical means to be opposite to that of \(\left( + \right)\)-glyceraldehyde. Consequently the absolute configuration of any compound now is known once it has been correlated, directly or indirectly, with glyceraldehyde.

When there are several chiral carbons in a molecule, the configuration at one center usually is related directly or indirectly to glyceraldehyde, and the configurations at the other centers are determined relative to the first. Thus, in the aldehyde form of the important sugar \(\left( + \right)\)-glucose, there are four chiral centers, and so there are \(2^4 = 16\) possible stereoisomers. The projection formula of the isomer that corresponds to the aldehyde form of natural glucose is \(4\). By convention for sugars, the configuration of the highest-numbered chiral carbon is referred to glyceraldehyde to determine the overall configuration of the molecule.
For glucose, this atom is \(\ce{C5}\), next to the \(\ce{CH_2OH}\) group, and it has the hydroxyl group on the right. Therefore, naturally occurring glucose, which has a \(\left( + \right)\) rotation, belongs to the \(D\) series and is properly called \(D\)-\(\left( + \right)\)-glucose. By contrast, the configurations of \(\alpha\)-amino acids possessing more than one chiral carbon are determined by the lowest-numbered chiral carbon, which is the carbon adjacent to the carboxyl group. Thus, even though the natural \(\alpha\)-amino acid threonine has exactly the same kind of arrangement of substituents as the natural sugar threose, threonine by the amino-acid convention belongs to the \(L\) series, whereas threose by the sugar convention belongs to the \(D\) series. A serious ambiguity therefore arises for compounds such as the active tartaric acids. If the amino-acid convention is used, \(\left( + \right)\)-tartaric acid falls in the \(D\) series; by the sugar convention, it has the \(L\) configuration. One way out of this dilemma is to use the subscripts \(s\) and \(g\) to denote the amino-acid and carbohydrate conventions, respectively. Then the absolute configuration of \(\left( + \right)\)-tartaric acid can be designated as either \(D_s\)-\(\left( + \right)\)-tartaric acid or \(L_g\)-\(\left( + \right)\)-tartaric acid.
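The \(2^n\) count of stereoisomers for \(n\) independent chiral centers can be enumerated directly. In the sketch below the two possible configurations at each center are represented by generic "R"/"S" labels purely as placeholders; they are not the \(D\)/\(L\) convention discussed in this chapter:

```python
# Each independent chiral center has two possible configurations, so n
# centers give 2**n stereoisomers; the 4 centers of the open-chain form
# of glucose give 16 stereoisomers, of which D-(+)-glucose is one.
from itertools import product

def stereoisomers(n_centers):
    """Return every combination of configurations for n independent centers."""
    return list(product("RS", repeat=n_centers))

print(len(stereoisomers(4)))  # 16 stereoisomers = 8 pairs of enantiomers
```

Swapping every label in one tuple gives its mirror-image partner, which is why the 16 isomers group into 8 enantiomeric pairs.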
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Book%3A_Bioinorganic_Chemistry_(Bertini_et_al.)/05%3A_Dioxygen_Reactions/5.05%3A_Dioxygen_Toxicity
Before we consider the enzymatically controlled reactions of dioxygen in living systems, it is instructive to consider the uncontrolled and deleterious reactions that must also occur in aerobic organisms. Life originally appeared on Earth at a time when the atmosphere contained only low concentrations of dioxygen, and was reducing rather than oxidizing, as it is today. With the appearance of photosynthetic organisms approximately 2.5 billion years ago, however, the conversion to an aerobic, oxidizing atmosphere exposed the existing anaerobic organisms to a gradually increasing level of oxidative stress. Modern-day anaerobic bacteria, the descendants of the original primitive anaerobic organisms, evolved in ways that enabled them to avoid contact with normal atmospheric concentrations of dioxygen. Modern-day aerobic organisms, by contrast, evolved by developing aerobic metabolism to harness the oxidizing power of dioxygen and thus to obtain usable metabolic energy. This remarkably successful adaptation enabled life to survive and flourish as the atmosphere became aerobic, and also allowed larger, multicellular organisms to evolve. An important aspect of dioxygen chemistry that enabled the development of aerobic metabolism is the relatively slow rate of dioxygen reactions in the absence of catalysts. Thus, enzymes could be used to direct and control the oxidation of substrates either for energy generation or for biosynthesis. Nevertheless, the balance achieved between constructive and destructive oxidation is a delicate one, maintained in aerobic organisms by several means, e.g.: compartmentalization of oxidative reactions in mitochondria, peroxisomes, and chloroplasts; scavenging or detoxification of toxic byproducts of dioxygen reactions; repair of some types of oxidatively damaged species; and degradation and replacement of other species. 
The classification "anaerobic" actually includes organisms with varying degrees of tolerance for dioxygen: strict anaerobes, for which even small concentrations of O₂ are toxic; moderate anaerobes, which can tolerate low levels of dioxygen; and microaerophiles, which require low concentrations of O₂ for growth but cannot tolerate normal atmospheric concentrations, i.e., 21 percent O₂ at 1 atm pressure. Anaerobic organisms thrive in places protected from the atmosphere, for example, in rotting organic material, decaying teeth, the colon, and gangrenous wounds. Dioxygen appears to be toxic to anaerobic organisms largely because it depletes the reducing equivalents in the cell that are needed for normal biosynthetic reactions.

Aerobic organisms can, of course, live in environments in which they are exposed to normal atmospheric concentrations of O₂. Nevertheless, there is much evidence that O₂ is toxic to these organisms as well. For example, plants grown in varying concentrations of O₂ have been observed to grow faster in lower-than-normal concentrations of O₂. The bacterium Escherichia coli grown under 5 atm of O₂ ceased to grow unless the growth medium was supplemented with branched-chain amino acids or their precursors; the high concentration of O₂ had damaged the enzyme dihydroxy acid dehydratase, an important component in the biosynthetic pathway for those amino acids.

In mammals, elevated levels of O₂ are clearly toxic, leading first to coughing and soreness of the throat, and then to convulsions when a level of 5 atm of 100 percent O₂ is reached. Eventually, elevated concentrations of O₂ lead to pulmonary edema and irreversible lung damage, with obvious damage to other tissues as well. The effects of high concentrations of O₂ on humans are of some medical interest, since dioxygen is used therapeutically for patients experiencing difficulty breathing, or for those suffering from infection by anaerobic organisms. The major biochemical targets of O₂ toxicity appear to be lipids, DNA, and proteins.
The chemical reactions accounting for the damage to each type of target are probably different, not only because of the different reactivities of these three classes of molecules, but also because of the different environment for each one inside the cell. Lipids, for example, are essential components of membranes and are extremely hydrophobic. The oxidative damage that is observed is due to free-radical autoxidation (see Reactions 5.16 to 5.21), and the products observed are lipid hydroperoxides (see Reaction 5.23). The introduction of the hydroperoxide group into the interior of the lipid bilayer apparently causes that structure to be disrupted, as the configuration of the lipid rearranges in order to bring that polar group out of the hydrophobic membrane interior and up to the membrane-water interface. DNA, by contrast, is in the interior of the cell, and its exposed portions are surrounded by an aqueous medium. It is particularly vulnerable to oxidative attack at the base or at the sugar, and multiple products are formed when samples are exposed to oxidants. Since oxidation of DNA may lead to mutations, this type of damage is potentially very serious. Proteins also suffer oxidative damage, with amino-acid side chains, particularly the sulfur-containing residues cysteine and methionine, appearing to be the most vulnerable sites. The biological defense systems protecting against oxidative damage and its consequences are summarized below. Some examples of small-molecule antioxidants are \(\alpha\)-tocopherol (vitamin E; 5.24), which is found dissolved in cell membranes and protects them against lipid peroxidation, and ascorbate (vitamin C; 5.25) and glutathione (5.26), which are found in the cytosol of many cells. Several others are known as well.
(Structures 5.24, 5.25, and 5.26 — \(\alpha\)-tocopherol, ascorbate, and glutathione — are not reproduced here.) The enzymatic antioxidants are (a) catalase and the various peroxidases, whose presence lowers the concentration of hydrogen peroxide, thereby preventing it from entering into potentially damaging reactions with various cell components (see Section VI and Reactions 5.82 and 5.83), and (b) the superoxide dismutases, whose presence provides protection against dioxygen toxicity that is believed to be mediated by the superoxide anion, \(O_2^-\) (see Section VII and Reaction 5.95). Some of the enzymatic and nonenzymatic antioxidants in the cell are illustrated in Figure 5.1. Redox-active metal ions are present in the cell in their free, uncomplexed state only in extremely low concentrations. They are instead sequestered by metal-ion storage and transport proteins, such as ferritin and transferrin for iron (see Chapter 1) and ceruloplasmin for copper. This arrangement prevents such metal ions from catalyzing deleterious oxidative reactions, but makes them available for incorporation into metalloenzymes as they are needed. In vitro experiments have shown quite clearly that redox-active metal ions such as iron or copper are extremely good catalysts for oxidation of sulfhydryl groups by \(O_2\) (Reaction 5.27). \[4RSH + O_{2} \xrightarrow{M^{n+}} 2RSSR + 2H_{2}O \tag{5.27}\] In addition, in the reducing environment of the cell, redox-active metal ions catalyze a very efficient one-electron reduction of hydrogen peroxide to produce hydroxyl radical, one of the most potent and reactive oxidants known (Reactions 5.28 to 5.30). \[M^{n+} + Red^{-} \rightarrow M^{(n-1)+} + Red \tag{5.28}\] \[M^{(n-1)+} + H_{2}O_{2} \rightarrow M^{n+} + OH^{-} + HO \cdotp \tag{5.29}\] \[Red^{-} + H_{2}O_{2} \rightarrow Red + OH^{-} + HO \cdotp \tag{5.30}\] \[(Red^{-} = reducing\; agent)\] Binding those metal ions in a metalloprotein usually prevents them from entering into these types of reactions.
For example, transferrin, the iron-transport protein in serum, is normally only 30 percent saturated with iron. Under conditions of increasing iron overload, the empty iron-binding sites on transferrin are observed to fill, and symptoms of iron poisoning are not observed until after transferrin has been totally saturated with iron. Ceruloplasmin and metallothionein may play a similar role in preventing copper toxicity. It is very likely that both iron and copper toxicity are largely due to catalysis of oxidation reactions by those metal ions. Repair of oxidative damage must go on constantly, even under normal conditions of aerobic metabolism. For lipids, repair of peroxidized fatty-acid chains is catalyzed by phospholipase \(A_2\), which recognizes the structural changes at the lipid-water interface caused by the fatty-acid hydroperoxide, and catalyzes removal of the fatty acid at that site. The repair is then completed by enzymatic reacylation. Although some oxidatively damaged proteins are repaired, more commonly such proteins are recognized, degraded at accelerated rates, and then replaced. For DNA, several multi-enzyme systems exist whose function is to repair oxidatively damaged DNA. For example, one such system catalyzes recognition and removal of damaged bases, removal of the damaged part of the strand, synthesis of new DNA to fill in the gaps, and religation to restore the DNA to its original, undamaged state. Mutant organisms that lack these repair enzymes are found to be hypersensitive to \(O_2\), \(H_2O_2\), or other oxidants. One particularly interesting aspect of oxidant stress is that most aerobic organisms can survive in the presence of normally lethal levels of oxidants if they have first been exposed to lower, nontoxic levels of oxidants. This phenomenon has been observed in animals, plants, yeast, and bacteria, and suggests that low levels of oxidants cause antioxidant systems to be induced.
In certain bacteria, the mechanism of this induction is at least partially understood. A DNA-binding regulatory protein named OxyR that exists in two redox states has been identified in these systems. Increased oxidant stress presumably increases the concentration of the oxidized form, which then acts to turn on the transcription of the genes for some of the antioxidant enzymes. A related phenomenon may occur when bacteria and yeast switch from anaerobic to aerobic metabolism. When dioxygen is absent, these microorganisms live by fermentation, and do not waste energy by synthesizing the enzymes and other proteins needed for aerobic metabolism. However, when they are exposed to dioxygen, the synthesis of the respiratory apparatus is turned on. The details of this induction are not known completely, but some steps at least depend on the presence of heme, the prosthetic group of hemoglobin and other heme proteins, whose synthesis requires the presence of dioxygen. What has been left out of the preceding discussion is the identification of the species responsible for oxidative damage, i.e., the agents that directly attack the various vulnerable targets in the cell. They were left out because the details of the chemistry responsible for dioxygen toxicity are largely unknown. In 1954, Rebeca Gerschman formulated the "free-radical theory of oxygen toxicity" after noting that tissues subjected to ionizing radiation resemble those exposed to elevated levels of dioxygen. Fourteen years later, Irwin Fridovich proposed that the free radical responsible for dioxygen toxicity was superoxide, \(O_2^-\), based on his identification of the first of the superoxide dismutase enzymes. Today it is still not known if superoxide is the principal agent of dioxygen toxicity, and, if so, what the chemistry responsible for that toxicity is.
There is no question that superoxide is formed during the normal course of aerobic metabolism, although it is difficult to obtain estimates of the amount under varying conditions, because, even in the absence of a catalyst, superoxide disproportionates quite rapidly to dioxygen and hydrogen peroxide (Reaction 5.4) and therefore never accumulates to any great extent in the cell under normal conditions of pH. One major problem in this area is that a satisfactory chemical explanation for the purported toxicity of superoxide has never been found, despite much indirect evidence from experiments that the presence of superoxide can lead to undesirable oxidation of various cell components and that such oxidation can be inhibited by superoxide dismutase. The mechanism most commonly proposed is production of hydroxyl radicals via Reactions (5.28) to (5.30) with \(Red^{-} = O_2^-\), which is referred to as the "Metal-Catalyzed Haber-Weiss Reaction". The role of superoxide in this mechanism is to reduce oxidized metal ions, such as \(Cu^{2+}\) or \(Fe^{3+}\), present in the cell in trace amounts, to a lower oxidation state. Hydroxyl radical is an extremely powerful and indiscriminate oxidant. It can abstract hydrogen atoms from organic substrates, and oxidize most reducing agents very rapidly. It is also a very effective initiator of free-radical autoxidation reactions (see Section II.C above). Therefore, reactions that produce hydroxyl radical in a living cell will probably be very deleterious. The problem with this explanation for superoxide toxicity is that the only role played by superoxide here is that of a reducing agent of trace metal ions. The interior of a cell is a highly reducing environment, however, and other reducing agents naturally present in the cell, such as ascorbate anion, can also act as \(Red^{-}\) in Reaction (5.28), and the resulting oxidation reactions due to hydroxyl radical are therefore no longer inhibitable by SOD.
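Substituting \(Red^{-} = O_2^{-}\) into Reactions (5.28) and (5.29) and summing them makes the catalytic role of the trace metal ion explicit; the following is simply a restatement of those two reactions, with the metal regenerated and cancelling from the net equation:

\[M^{n+} + O_{2}^{-} \rightarrow M^{(n-1)+} + O_{2}\]

\[M^{(n-1)+} + H_{2}O_{2} \rightarrow M^{n+} + OH^{-} + HO \cdotp\]

\[\textrm{Net:}\quad O_{2}^{-} + H_{2}O_{2} \rightarrow O_{2} + OH^{-} + HO \cdotp\]

The net transformation is the Haber-Weiss reaction itself; in the absence of a metal catalyst it is far too slow to be significant, which is why the metal-catalyzed pathway is the one invoked here.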
Other possible explanations for superoxide toxicity exist, of course, but none has ever been demonstrated experimentally. Superoxide might bind to a specific enzyme and inhibit it, much as cytochrome oxidase is inhibited by cyanide or hemoglobin by carbon monoxide. Certain enzymes may be extraordinarily sensitive to direct oxidation by superoxide, as has been suggested for the enzyme aconitase, an iron-sulfur enzyme that contains an exposed iron atom. Another possibility is that the protonated and therefore neutral form of superoxide, \(HO_2\), dissolves in membranes and acts as an initiator of lipid peroxidation. It has also been suggested that superoxide may react with nitric oxide, NO, in the cell, producing peroxynitrite, a very potent oxidant. One particularly appealing mechanism for superoxide toxicity that has gained favor in recent years is the "Site-Specific Haber-Weiss Mechanism." The idea here is that traces of redox-active metal ions such as copper and iron are bound to macromolecules under normal conditions in the cell. Most reducing agents in the cell are too bulky to come into close proximity to these sequestered metal ions. Superoxide, however, in addition to being an excellent reducing agent, is very small, and could penetrate to these metal ions and reduce them. The reduced metal ions could then react with hydrogen peroxide, generating hydroxyl radical, which would immediately attack at a site near the location of the bound metal ion. This mechanism is very similar to that of the metal complexes that cause DNA cleavage; by reacting with hydrogen peroxide while bound to DNA, they generate powerful oxidants that react with DNA with high efficiency because of their proximity to it (see Chapter 8). Although we are unsure what specific chemical reactions superoxide might undergo inside the cell, there nevertheless does exist strong evidence that the superoxide dismutases play an important role in protection against dioxygen-induced damage.
Mutant strains of bacteria and yeast that lack superoxide dismutases are killed by elevated concentrations of dioxygen that have no effect on the wild-type cells. This extreme sensitivity to dioxygen is alleviated when the gene coding for a superoxide dismutase is reinserted into the cell, even if the new SOD is of another type and from a different organism. In summary, we know a great deal about the sites that are vulnerable to oxidative damage in biological systems, about the agents that protect against such damage, and about the mechanisms that repair such damage. Metal ions are involved in all this chemistry, both as catalysts of deleterious oxidative reactions and as cofactors in the enzymes that protect against and repair such damage. What we still do not know at this time, however, is how dioxygen initiates the sequence of chemical reactions that produce the agents that attack the vulnerable biological targets.
Organic compounds most commonly appear colorless on the white background of a TLC plate, which means that after running a TLC, chemists often cannot simply see where compounds are located. The compounds have to be "visualized" after elution, which means to temporarily convert them into something visible. Visualization methods can be either non-destructive (the compound is unchanged after the process) or destructive (the compound is converted into something new after the process). Viewing a TLC plate under ultraviolet light is non-destructive, while using a chemical stain is destructive. Below is a summary of various visualization techniques, and the functional groups that generally react with each. A more detailed discussion of each technique is provided later in this section. The most common non-destructive visualization method for TLC plates is ultraviolet (UV) light. A UV lamp can be used to shine either short-waved \(\left( 254 \: \text{nm} \right)\) or long-waved \(\left( 365 \: \text{nm} \right)\) ultraviolet light on a TLC plate with the touch of a button. Most commercially bought TLC plates contain a fluorescent material (e.g. zinc sulfide) in the silica or alumina, so the background of the plate will appear green when viewing with short-waved UV light. If a compound absorbs \(254 \: \text{nm}\) UV light, it will appear dark, as the compound prevents the fluorescent material from receiving the UV light. This method is so quick and easy that it is often the first visualization method tried. It is most useful for visualizing aromatic compounds and highly conjugated systems, as these strongly absorb UV. Most other functional groups do not absorb UV light at the wavelengths used and will not appear dark under the UV lamp even though they are still there. It doesn't hurt to try UV after performing TLC with all compounds just in case. Since the compounds remain unchanged after viewing with UV light, a further visualization technique can be used afterwards on the same plate.
A commonly used semi-destructive visualization method is to expose a developed TLC plate to iodine \(\left( \ce{I_2} \right)\) vapor. An "iodine chamber" can be created by adding a few iodine crystals to a TLC chamber, or by adding a few iodine crystals to a chamber containing a portion of powdered silica or alumina (Figure 2.33a). When a developed TLC plate is placed in the chamber and capped, the iodine sublimes and reacts with the compounds on the plate, forming yellow-brown spots (Figure 2.33d). The coloration occurs because iodine forms colored complexes with many organic compounds. This stain will work with approximately half the compounds you may encounter. This method is considered "semi-destructive" because complexation is reversible, and the iodine will eventually evaporate from the TLC plate, leaving the original compound behind. When the coloration fades, it is theoretically possible to use another visualization technique on the TLC plate, although it's possible the compound may have also evaporated by that time. There are a variety of destructive visualization methods that can turn colorless compounds on a TLC plate into colored spots. A plate is either sprayed with or dipped in a reagent that undergoes a chemical reaction with a compound on the TLC plate to convert it into a colored compound, enabling the spot to be seen with the naked eye. Since a chemical reaction is occurring in the process, it is common to gently heat a plate after exposure to the reagent to speed up the reaction, although this may be unnecessary with some stains. Not every compound can be visualized with every reagent if they do not react together, and stains are often designed to work with only certain functional groups. The specific stain should be chosen based on the presumed structure of the compounds you want to visualize. 
The p-anisaldehyde and vanillin stains are general purpose, and work for many strong and weak nucleophiles (alcohols, amines), and for many aldehydes and ketones. They do not work on alkenes, aromatics, esters, or carboxylic acids. The TLC plates need to be mildly heated, and will develop a light pink to dark pink background. A TLC of four samples visualized with three different techniques is shown in Figure 2.36. The plate is visualized with UV light (Figure 2.36b), p-anisaldehyde stain (Figure 2.36c), and vanillin stain (Figure 2.36d). 4-Heptanone (lane #1) and acetophenone (lane #2) showed similar colorations using the two stains. Ethyl benzoate (lane #4) was unreactive to both. Cinnamaldehyde (lane #3) was reactive to p-anisaldehyde but not vanillin, while its impurity (cinnamic acid, on the baseline of lane #3) showed the opposite behavior. (p-Anisaldehyde): \(135 \: \text{mL}\) absolute ethanol, \(5 \: \text{mL}\) concentrated \(\ce{H_2SO_4}\), \(1.5 \: \text{mL}\) glacial acetic acid, and \(3.7 \: \text{mL}\) p-anisaldehyde. This stain is susceptible to degradation by light, so store wrapped in aluminum foil (Figure 2.37e), ideally in the refrigerator when not in use. Compared to other stains, this stain has a somewhat short shelf life (approximately half a year). The stain will at first be colorless (Figure 2.37a), but over time will turn to a light then dark pink (Figure 2.37b-d). The stain is less potent when it darkens, but is often still usable. Wear gloves while using this highly acidic stain. (Vanillin): \(250 \: \text{mL}\) ethanol, \(15 \: \text{g}\) vanillin, and \(2.5 \: \text{mL}\) concentrated \(\ce{H_2SO_4}\). This stain is light sensitive and should be stored wrapped in aluminum foil in the refrigerator. It is originally light yellow, but darkens over time (Figure 2.37f+g). It should be discarded if it acquires a blue color. Wear gloves while using this highly acidic stain.
The p-anisaldehyde and vanillin stains react in a similar manner, and commonly undergo aldol and acetalization reactions to produce highly conjugated (and thus colored) compounds on TLC plates. Under the acidic conditions of the stain, some aldehydes or ketones can undergo a keto-enol tautomerism, and the enol can undergo acid-catalyzed nucleophilic addition to p-anisaldehyde or vanillin through an aldol mechanism. Dehydration of the aldol product (encouraged by heating the TLC plate) results in a highly conjugated compound (Figure 2.38d), which is why spots become colored. For example, a TLC plate containing acetophenone and benzophenone (as seen with UV, Figure 2.38a) was stained with p-anisaldehyde and vanillin stains. Acetophenone produced a colored spot with these stains (Figures 2.38b+c) while benzophenone did not. The main difference is that benzophenone cannot form an enol, or act as a nucleophile toward p-anisaldehyde, so the stain is unreactive. Some alcohols react with p-anisaldehyde and vanillin stains through acetalization reactions. A proposed reaction of cresol with p-anisaldehyde is shown in Figure 2.39b to produce a highly conjugated cation, a possible structure of the pink spot on the TLC plate in lane #2 of Figure 2.39a. This cationic structure may look unusual, but is a feasible structure in the highly acidic conditions of the stain. The permanganate ion \(\left( \ce{MnO_4^-} \right)\) is a deep purple color, and when it reacts with compounds on a TLC plate (and is consumed), the plate is left with a yellow color (Figure 2.40a). The stain easily visualizes alkenes and alkynes by undergoing addition reactions (Figure 2.40d), and the color change is often immediate with these functional groups. Permanganate is also capable of oxidizing many functional groups (e.g. aldehydes, lane 1 in Figure 2.40c), and so is considered by some to be a universal stain.
Heat may be required to visualize some functional groups, and often improves the contrast between spots and the background. Heating may be done (if needed) until the background color just begins to yellow, but a brown background means the plate was overheated. \(1.5 \: \text{g}\) \(\ce{KMnO_4}\), \(10 \: \text{g} \: \ce{K_2CO_3}\), \(1.25 \: \text{mL} \: 10\% \: \ce{NaOH} \left( aq \right)\), and \(200 \: \text{mL}\) water. Wear gloves while using this stain, as permanganate is corrosive and will stain skin brown. The phosphomolybdic acid stain (PMA) is considered by some a universal stain, able to visualize a wide variety of compounds (alcohols, alkenes, alkyl iodides, and many carbonyl compounds). The yellow-green PMA reagent \(\left( \ce{Mo^{6+}} \right)\) oxidizes the compound on the plate while itself being reduced to molybdenum blue (\(\ce{Mo^{5+}}\) or \(\ce{Mo^{4+}}\)). Vigorous heating is required to develop the spots, but the plate is overheated when the background begins to darken. There is typically no color differentiation between spots, as most compounds visualize as green or blue spots (Figure 2.41c). \(5 \: \text{g}\) phosphomolybdic acid in \(500 \: \text{mL}\) ethanol. The stain is light sensitive and so should be stored in a jar under aluminum foil. The reagent is expensive, but the stain has a very long shelf life (5+ years). The ferric chloride \(\left( \ce{FeCl_3} \right)\) stain is highly specific, and is used mainly to visualize phenols \(\left( \ce{ArOH} \right)\). Some carbonyl compounds with high enol content may also be visualized. \(\ce{Fe^{3+}}\) forms colored complexes with phenols (often faint blue), in the general sense of what is shown in Figure 2.42c. The actual structure of these complexes is debated\(^6\). The coloration fades rather quickly with this stain, so observations should be recorded immediately. \(1\%\) \(\ce{FeCl_3}\) in water and \(\ce{CH_3OH}\) (\(50\%\) each). This stain has a high shelf life (5+ years).
The bromocresol green stain is specific for acidic compounds, and should be able to visualize compounds that produce a solution lower than pH 5. Experience has shown that carboxylic acids work moderately well (first lane in Figure 2.43d) but phenols are only barely visible (indicated with an arrow in Figure 2.43d). In theory, the plate does not need to be heated after exposure to this stain, but in practice heating often improves the contrast between the spots and the background. \(100 \: \text{mL}\) absolute ethanol, \(0.04 \: \text{g}\) bromocresol green, and \(0.10 \: \text{M} \: \ce{NaOH} \left( aq \right)\) added dropwise until the solution turns from yellow to blue (green works as well, as in Figure 2.43b). This stain uses an acid-base indicator, which works in a similar manner to phenolphthalein. Bromocresol green is yellow below pH 3.8 and blue above pH 5.4 (Figure 2.44a). When an acidic compound is spotted on the plate, the acid lowers the pH and causes the indicator to shift to the lower-pH yellow form (Figure 2.44b). Even when a compound has certainly been applied on the baseline of a TLC plate, it is possible that the compound is not seen on the plate after elution. There are several possible reasons for this: the sample may be too dilute, the compound may have evaporated, or the visualization method may be unsuitable for the compound (each case is discussed below). A simple solution to a dilution problem is to add more compound to the original sample and run the TLC again using a new plate. If the compound is expected to be UV active (i.e. if it contains an aromatic ring), it is a good idea to view the TLC plate under UV light before eluting the plate (Figure 2.45a). If the sample spot is not visible before elution it will not be visible afterwards, as compounds diffuse during elution. If the problem is determined to be only that the sample is too dilute, the material can be deposited multiple times before elution (Figure 2.45b). To do this, deliver a small spot of sample on the baseline, and let it dry (it helps to blow on it) before delivering another spot over top of the first.
If the spots are not allowed to dry in between applications, the spot will be too large. If the compound is expected to be UV active, check the plate under UV light, and if necessary spot more times before elution. A TLC plate should be visualized immediately after elution, so if a moderate amount of time passed between running the TLC and visualizing it, evaporation may be the cause of the problem. A solution to this problem is to run the TLC again and visualize it immediately. If the compound has a low boiling point, it probably evaporated during elution. For example, 2-pentene (boiling point \(36^\text{o} \text{C}\)) was spotted in lane #1 of Figure 2.45c. It did not stain with permanganate after elution even though the compound is reactive to the stain (an undiluted, uneluted sample of 2-pentene did stain somewhat on a scrap TLC plate, Figure 2.45d). Compounds with boiling points lower than approximately \(120^\text{o} \text{C}\) are difficult to analyze through TLC. Visualization techniques are often tailored toward certain functional groups. For example, ultraviolet light is generally good at visualizing aromatic compounds but poor at other functional groups. If UV, iodine, or a stain fails to visualize a compound, it could mean the compound is simply not reactive to the technique, and another method should be tried. For example, Figure 2.46 shows four different compounds visualized with UV (Figure 2.46a), p-anisaldehyde stain (Figure 2.46b), and iodine (Figure 2.46c). The compound in lane #1 of all the plates (4-heptanone) was only visible with p-anisaldehyde stain (blue spot), and not with UV or \(\ce{I_2}\). The compound in lane #4 of all the plates (ethyl benzoate) was unreactive to p-anisaldehyde stain, but could be visualized with UV and \(\ce{I_2}\). The impurity present on the baseline of lane #3 (cinnamic acid) was strongly UV active, but could hardly be seen with the other stains.
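The troubleshooting advice above (too dilute, too volatile, or unreactive to the chosen technique) can be collected into a small decision helper. The function and its inputs below are hypothetical illustrations; only the reasoning and the approximate \(120^\text{o} \text{C}\) boiling-point threshold come from the text.

```python
# Illustrative decision helper for a compound that cannot be seen on a TLC
# plate after elution. The logic mirrors the troubleshooting discussion above;
# the function itself is a made-up sketch, not a standard procedure.
def why_is_my_spot_missing(visible_before_elution: bool,
                           boiling_point_c: float,
                           reactive_to_technique: bool) -> str:
    if boiling_point_c < 120:  # low-boiling compounds are hard to analyze by TLC
        return "Compound likely evaporated during (or soon after) elution."
    if not visible_before_elution:
        return "Sample may be too dilute: spot more material, drying between applications."
    if not reactive_to_technique:
        return "Compound may be unreactive to this visualization method: try another."
    return "Re-run the TLC and visualize immediately after elution."

# Example: a UV-invisible but high-boiling sample is probably just too dilute.
print(why_is_my_spot_missing(False, 180.0, True))
```

For instance, 2-pentene (bp \(36^\text{o} \text{C}\)) would be flagged as an evaporation problem, matching the observation in Figure 2.45c.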
Ultraviolet light is often the first visualization technique attempted on an eluted TLC plate because it is nondestructive and rather simple to carry out. If a dark spot is seen with a UV lamp, it is customary to circle the spot with pencil (as in Figure 2.46b), as the spot will be invisible when the lamp is removed. Another visualization technique is often carried out after viewing the plate under UV, and it is not uncommon that the subsequent stain extends to a smaller or larger region than the pencil marking. For example, the compound in lane 2 of Figure 2.46 (acetophenone) can be easily seen with ultraviolet light (Figure 2.46a), but on the plate visualized with iodine (Figure 2.46c), the pencil markings encapsulate a larger region than is seen darkened by the iodine. This is because acetophenone is very strongly UV active, but only mildly complexes with iodine. It is not uncommon for one technique to visualize a compound more strongly than another technique. It is therefore important to be cautious in using TLC to interpret the quantity of material present in a sample, for example when assessing the quantity of an impurity (such as in lane #3 of Figure 2.46, which contains cinnamaldehyde and its impurity cinnamic acid). It may be that a large spot is present in a greater quantity than a small spot, but it could also be that the large spot is simply more responsive to the visualization technique. It is very common for the coloration produced by a stain to fade with time, as the compounds eventually evaporate from the plate or other slower reactions take place. For this reason, it may be a good idea to circle the spots with pencil immediately after a plate is visualized, although as spots are generally circled after viewing with UV, additional markings may cause confusion as to which compounds are UV-active. Another alternative is to place clear tape across the plate to prevent the spots from evaporating.
It is possible that the coloration produced by a stain will change with extended heating, or with time. For example, the plate in Figure 2.47 was visualized with p-anisaldehyde stain, and Figure 2.47a shows how the plate appeared immediately after heating. Figure 2.47b shows how the same plate appeared after sitting at room temperature for 30 minutes. The compound in lane #2 (acetophenone) had the most dramatic change in color during that time, changing from a bright orange to a green color. Observations recorded into a lab notebook should be of the original color of a spot. Certain visualization methods work best for certain functional groups, so a positive result with a stain can give clues about the identity of an unknown spot. However, sometimes a compound stains when it isn't "supposed to", and this can be confusing. For example, a TLC of benzaldehyde visualized with UV light (Figure 2.48a) shows two spots, and based on relative \(R_f\) values, it would make sense that the dark spot is benzaldehyde and the fainter spot near the baseline is benzoic acid (caused by an oxidation of benzaldehyde). Staining of the plate with bromocresol green (a stain for acidic compounds) supports this hypothesis, as the lower acidic spot is visualized with this method (Figure 2.48b). This is an example of when the staining results "make structural sense", and can even support the identification of unknown spots. However, in a similar experiment with cinnamaldehyde, both aldehyde and carboxylic acid spots were strongly visualized with bromocresol green (Figure 2.48d), even though only one is an acidic compound. This result does not at first "make sense", and theories can only be postulated for why the aldehyde reacted with the stain. The significance of Figure 2.48 is that interpretation of staining or lack of staining can be used to infer structure, but there may be exceptions that are difficult to explain.
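Bromocresol green's yellow-to-blue transition, described earlier (yellow below pH 3.8, blue above pH 5.4), follows ordinary acid-base equilibrium arithmetic. The sketch below estimates the fraction of the indicator in its blue (deprotonated) form via the Henderson-Hasselbalch relationship; the pKa of about 4.7 is an assumed literature value, not a number from this text.

```python
# Fraction of bromocresol green in the blue (deprotonated) form vs. pH,
# from the Henderson-Hasselbalch equation: pH = pKa + log10([A-]/[HA]).
PKA = 4.7  # assumed approximate pKa of bromocresol green (literature value)

def fraction_blue(pH: float, pKa: float = PKA) -> float:
    """Fraction of indicator present as the deprotonated (blue) form."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

for pH in (3.8, PKA, 5.4):
    print(f"pH {pH:>4}: {fraction_blue(pH):6.1%} blue form")
# At pH 3.8 only ~11% of the indicator is blue (the plate looks yellow);
# at pH 5.4 it is ~83% blue, consistent with the transition range quoted above.
```

This is why an acidic spot reads as a yellow patch on a blue-green background: locally lowering the pH by a unit or two shifts almost all of the indicator into its protonated yellow form.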
\(^6\)See Nature 165, 1012 (24 June 1950); DOI: 10.1038/1651012b0

\(^7\)The TLC plate was left in the \(\ce{I_2}\) chamber for only about 2 minutes, and the spots may have developed further with additional time.
Mixtures of gases are really solutions, but we tend not to think of them this way because they mix together freely and with no limits to their compositions; we say that gases are completely miscible. To the extent that gases behave ideally (because they consist mostly of empty space), their mixing does not involve energy changes at all; the mixing of gases is driven entirely by the increase in entropy as each kind of molecule occupies and shares the space and kinetic energy of the other. Your nose can be a remarkably sensitive instrument for detecting components of gaseous solutions, even at the parts-per-million level. The olfactory experiences resulting from cooking cabbage, eating asparagus, and bodily emanations that are not mentionable in polite society are well known. Can solids or liquids "dissolve" in a gaseous solvent? In a very narrow sense they can, but only to a very small extent. Dissolution of a condensed phase of matter into a gas is formally equivalent to evaporation (of a liquid) or sublimation (of a solid), so the process really amounts to the mixing of gases. The energy required to remove molecules from their neighbors in a liquid or solid and bring them into the gaseous phase is generally too great to be compensated by the greater entropy they enjoy in the larger volume of the mixture, so solids tend to have relatively low vapor pressures. The same is true of liquids at temperatures well below their boiling points.
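The claim that ideal-gas mixing is driven entirely by entropy can be made quantitative with the standard entropy-of-mixing formula \(\Delta S_{mix} = -nR\sum_i x_i \ln x_i\). The sketch below (textbook thermodynamics, not taken from this page) evaluates it for an equimolar binary mixture.

```python
import math

R = 8.314  # J/(mol K), molar gas constant

def entropy_of_mixing(mole_fractions, n_total=1.0):
    """Ideal entropy of mixing, dS = -n R sum(x_i ln x_i), in J/K."""
    assert abs(sum(mole_fractions) - 1.0) < 1e-9
    return -n_total * R * sum(x * math.log(x) for x in mole_fractions if x > 0)

# Equimolar binary mixture of ideal gases (1 mol total):
dS = entropy_of_mixing([0.5, 0.5])
print(f"dS_mix = {dS:.2f} J/K")  # = R ln 2, about 5.76 J/K

# For ideal gases dH_mix = 0, so dG_mix = -T dS_mix < 0 at any temperature,
# which is the thermodynamic statement that gases mix spontaneously in all proportions:
T = 298.15  # K
print(f"dG_mix at {T} K = {-T * dS:.0f} J")
```

Because \(\Delta S_{mix} > 0\) for every composition and \(\Delta H_{mix} = 0\) for ideal gases, \(\Delta G_{mix}\) is negative at all temperatures and compositions, which is exactly the "completely miscible" behavior described above.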
Many of the reactions of alkenes begin with a protonation step. The cation that forms then undergoes a second step in which it combines with the counterion from the acid. In the first step, the alkene's π bond is the nucleophile and the proton is the electrophile. In the second step, the bromide is the nucleophile and the cation is the electrophile. If you are familiar with nucleophilic aliphatic substitution, you will already know that the presence of a cationic intermediate signals some potential complications in this reaction. One issue is the problem of stereochemical control. A carbocation is trigonal planar, because the carbon with the positive charge has only three groups attached to it. Because the cation is trigonal planar, the bromide ion that combines with it can approach from either side: it can come from above or below the trigonal plane. That fact may have no effect whatsoever. However, if the alkene (and the cation it forms) is prochiral, meaning it has the potential to form a new chiral center during this reaction, then there is a choice of which enantiomer to make. A prochiral carbocation is easy to recognize because the cationic carbon has three different groups attached to it. Adding a fourth group, the nucleophile, would leave four different groups attached to that carbon, making it a chiral center. In order to recognize a prochiral alkene, you can picture what the alkene would look like after the reaction has taken place: will there be four different groups? Which of the following alkenes are prochiral? Addition of the nucleophile to one face of the alkene will result in a stereocenter with R configuration; that face is called the re face. Adding it to the other face will lead to formation of the S configuration; that face is called the si face. In the following alkenes, identify whether we are looking at the re face or the si face. Draw the products of the following reactions, paying attention to stereochemistry.
In addition to the problem of stereochemistry, electrophilic additions to alkenes also present potential regiochemical complications. As in aliphatic nucleophilic substitutions, formation of a cation often opens the door to rapid rearrangement via 1,2-hydride shifts. There may be one hydride shift or there may be many of them in a row. These hydride shifts happen pretty easily. Overlap of a hydrogen atom with the empty p orbital of the adjacent cation leads to a short hop from one carbon to the next. A hydride shift from one secondary carbon to the next, as illustrated in the above example, is thermodynamically pretty neutral. Because the barrier is low, it happens quickly, but there isn't a driving force for the hydride to shift one way or the other. Instead, both cations result; there is a mixture. However, in a case in which the cation can form in a more stable position, such as a tertiary position, there is a driving force for the reaction to go one way. The barrier would be too high for it to get back. As a result, when the counterion combines with the cation, it may do so in a position away from the original double bond. Draw the products of the following reactions, paying attention to stereochemistry and regiochemistry.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_ChemPRIME_(Moore_et_al.)/03%3A_Using_Chemical_Equations_in_Calculations/3.09%3A_Hess'_Law/3.9.04%3A_Geology-_Iron_and_its_Ores |
Perhaps the most useful feature of thermochemical equations is that they can be combined to determine ΔH values for other chemical reactions. For example, iron forms several oxides, including iron(II) oxide or wüstite (FeO), iron(III) oxide or hematite (Fe₂O₃), and finally, iron(II,III) oxide or magnetite (FeO·Fe₂O₃, or Fe₃O₄). These oxides form by thermochemical reactions which depend on, and influence, their environment by producing or absorbing heat. Hematite exists in several phases (denoted α-hematite, β, γ (maghemite), and ε), and they are all different from ordinary rust, which is also often given the formula Fe₂O₃. Fe₂O₃ is the chief iron ore used in production of iron metal. FeO is nonstoichiometric. Magnetite is the most magnetic of all the naturally occurring minerals on Earth. Naturally magnetized pieces of magnetite, called lodestone, will attract small pieces of iron. We'll see evidence below that Fe₃O₄ is not simply a mixture of FeO and Fe₂O₃.

Iron(II) oxide, wüstite; iron(III) oxide, hematite; iron(II,III) oxide, magnetite.

Consider, for example, the following two-step sequence. Step 1 is the reaction of 2 mol Fe(s) and 1 mol O₂(g) to form 2 mol FeO(s):

(1) 2 Fe(s) + O₂(g) → 2 FeO(s)  ΔH₁ = −544 kJ

In step 2 the 2 mol FeO react with an additional 0.5 mol O₂, yielding 1 mol Fe₂O₃:

(2) 2 FeO(s) + ½ O₂(g) → Fe₂O₃(s)  ΔH₂ = −280.2 kJ

(Note that since the equation refers to moles, not molecules, fractional coefficients are permissible.) The net result of this two-step process is production of 1 mol Fe₂O₃ from the original 2 mol Fe and 1.5 mol O₂ (1 mol in the first step and 0.5 mol in the second step). All the FeO produced in step 1 is used up in step 2. On paper this net result can be obtained by adding the two chemical equations as though they were algebraic equations.
The FeO produced is canceled by the FeO consumed, since it is both a reactant and a product of the overall process:

(1) 2 Fe(s) + O₂(g) → 2 FeO(s)  ΔH₁ = −544 kJ
(2) 2 FeO(s) + ½ O₂(g) → Fe₂O₃(s)  ΔH₂ = −280.2 kJ
(3) 2 Fe(s) + 1.5 O₂(g) → Fe₂O₃(s)  ΔH_net

Experimentally it is found that the enthalpy change for the net reaction is the sum of the enthalpy changes for steps 1 and 2:

ΔH_net = −544 kJ + (−280.2 kJ) = −824.2 kJ = ΔH₁ + ΔH₂

That is, the thermochemical equation

(3) 2 Fe(s) + 1.5 O₂(g) → Fe₂O₃(s)  ΔH = −824.2 kJ

is the correct one for the overall reaction. In the general case it is always true that when two or more chemical equations are added to give a net equation, their enthalpy changes add to give the enthalpy change of the net reaction. This principle is known as Hess' law. If it were not true, it would be possible to think up a series of reactions in which energy would be created but which would end up with exactly the same substances we started with. This would contradict the law of conservation of energy. Hess' law enables us to obtain ΔH values for reactions which cannot be carried out experimentally, as the next example shows.

Magnetite has been very important in understanding the conditions under which rocks form and evolve. Magnetite reacts with oxygen to produce hematite, and the mineral pair forms a buffer that can control the activity of oxygen. One way magnetite is formed is by the decomposition of FeO, which is thermodynamically unstable below 575 °C, disproportionating to metal and Fe₃O₄:

(4) 4 FeO(s) → Fe(s) + Fe₃O₄(s)

The direct reaction of iron with oxygen does not occur in nature, because iron does not occur in the elemental form in the presence of oxygen, but we know the enthalpy of the reaction from laboratory studies:

(5) 3 Fe(s) + 2 O₂(g) → Fe₃O₄(s)  ΔH₅ = −1118.4 kJ

Calculate the enthalpy change for Reaction (4) from the enthalpies of the other reactions given on this page. We use the following strategy: manipulate the experimental equations so that, when added, they yield Eq. (4). Since the target reaction (4) has FeO on the left, but reaction (1) above has FeO on the right, we reverse (1), changing the sign on ΔH₁:

(1b) 2 FeO(s) → 2 Fe(s) + O₂(g)  ΔH = +544 kJ = −ΔH₁

But the target reaction requires 4 mol FeO on the left, so we multiply this reaction, and its associated enthalpy change, by 2:

(1c) 4 FeO(s) → 4 Fe(s) + 2 O₂(g)  ΔH = +1088 kJ = −2 × ΔH₁

Since the target equation has 1 mol Fe₃O₄ on the right, as does equation (5) above, we can combine equation (5) with (1c):

(1c) 4 FeO(s) → 4 Fe(s) + 2 O₂(g)  ΔH = +1088 kJ = −2 × ΔH₁
(5) 3 Fe(s) + 2 O₂(g) → Fe₃O₄(s)  ΔH₅ = −1118.4 kJ

Adding these equations, canceling the 2 O₂ that appear on both sides, and canceling 3 Fe on the left against 3 of the 4 Fe on the right (leaving 1 Fe on the right), we get equation (4). The enthalpy change is the sum of the enthalpy changes for (1c) and (5):

ΔH = −2ΔH₁ + ΔH₅ = +1088 kJ + (−1118.4 kJ) = −30.4 kJ

Fe₃O₄ is not simply a mixture of FeO and Fe₂O₃, but a distinct structure. Prove this by using the thermochemical equations on this page to calculate the enthalpy change for reaction (6) below; if the enthalpy change were zero, no significant chemical change would occur.

(6) FeO(s) + Fe₂O₃(s) → Fe₃O₄(s)  ΔH = ?

It appears that we could start with (5), which has Fe₃O₄ on the right, like the target equation:

(5) 3 Fe(s) + 2 O₂(g) → Fe₃O₄(s)  ΔH₅ = −1118.4 kJ

We can introduce the Fe₂O₃ needed on the left of the target equation by using the reverse of equation (2), changing the sign on ΔH₂:

(2b) Fe₂O₃(s) → 2 FeO(s) + ½ O₂(g)  ΔH = −(−280.2 kJ) = −ΔH₂

This introduces 2 FeO on the right, and since the target equation needs 1 FeO on the left, we need a total of 3 FeO on the left. There are also 3 Fe on the left of equation (5) that need to be canceled.
We can accomplish both by adding the reverse of equation (1):

(1b) 2 FeO(s) → 2 Fe(s) + O₂(g)  ΔH = −(−544 kJ) = −ΔH₁

Since we need 3 FeO on the left, we multiply (1b), and its enthalpy change, by 3/2:

(1c) 3 FeO(s) → 3 Fe(s) + 1.5 O₂(g)  ΔH = −3/2 × (−544 kJ) = +816 kJ

Combining (5), (2b), and (1c) we get the target equation, and its ΔH is obtained by combining the corresponding ΔH values:

(6) FeO(s) + Fe₂O₃(s) → Fe₃O₄(s)
ΔH = −1118.4 kJ + 280.2 kJ + 816 kJ = −22.2 kJ

Since this is a significantly exothermic change, it appears that a chemical change occurs when FeO and Fe₂O₃ combine to make Fe₃O₄. Significant enthalpy changes also occur when solutions are prepared (the dangerous heating observed when water is added to sulfuric acid is a prime example), but these always indicate that bonds have been broken or formed.
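Hess' law amounts to taking a linear combination of reactions: the stoichiometries and the enthalpy changes are scaled and added with the same weights. The bookkeeping in the examples above can be checked with a short script; the reaction encoding and the `combine` helper are mine, while the ΔH values come from the text:

```python
from collections import Counter

# Reactions as (species -> signed stoichiometric coefficient, dH in kJ);
# products are positive, reactants negative. Values from the text.
rxn1 = (Counter({"Fe": -2, "O2": -1, "FeO": 2}), -544.0)       # (1)
rxn2 = (Counter({"FeO": -2, "O2": -0.5, "Fe2O3": 1}), -280.2)  # (2)
rxn5 = (Counter({"Fe": -3, "O2": -2, "Fe3O4": 1}), -1118.4)    # (5)

def combine(*weighted_rxns):
    """Hess' law: a weighted sum of reactions adds both the
    stoichiometries and the enthalpy changes with the same weights."""
    total, dH = Counter(), 0.0
    for weight, (stoich, h) in weighted_rxns:
        for species, coeff in stoich.items():
            total[species] += weight * coeff
        dH += weight * h
    # Drop species that cancel exactly (like FeO in the first example).
    return {sp: c for sp, c in total.items() if abs(c) > 1e-9}, dH

# Eq. (3): (1) + (2) gives 2 Fe + 1.5 O2 -> Fe2O3, dH = -824.2 kJ
net3, dH3 = combine((1, rxn1), (1, rxn2))
print(net3, round(dH3, 1))

# Eq. (4): -2*(1) + (5) gives 4 FeO -> Fe + Fe3O4, dH = -30.4 kJ
net4, dH4 = combine((-2, rxn1), (1, rxn5))
print(net4, round(dH4, 1))
```

Reversing a reaction is just a weight of −1, and scaling it is a fractional weight, which is exactly how equations (1b), (1c), and (2b) were built above.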
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_ChemPRIME_(Moore_et_al.)/11%3A_Reactions_in_Aqueous_Solutions/11.20%3A_Substances_Which_Are_Both_Oxidizing_and_Reducing_Agents |
In the section on acids and bases, we saw that some substances can act as both an acid and a base. In the world of redox chemistry there exist substances that can act as both a reducing agent and an oxidizing agent, and a couple of examples are given below.

We have seen that strong oxidizing agents, such as fluorine, can oxidize water to oxygen. There are also strong reducing agents, such as lithium, which can reduce water to hydrogen. In terms of redox, water behaves much as it did in acid-base reactions, where we found it to be amphiprotic. In the presence of a strong electron donor (strong reducing agent), water serves as an oxidizing agent. In the presence of a strong electron acceptor (strong oxidizing agent), water serves as a reducing agent. Water is rather weak as an oxidizing or as a reducing agent, however, so there are not many substances which reduce or oxidize it. Thus it makes a good solvent for redox reactions. This also parallels water's acid-base behavior, since it is also a very weak acid and a very weak base.

In hydrogen peroxide the oxidation number of oxygen is −1. This is halfway between O₂ (0) and H₂O (−2), and so hydrogen peroxide can either be reduced or oxidized. When it is reduced, it acts as an oxidizing agent: \[\ce{H2O2 + 2H^+ + 2e^{–} -> 2H2O} \nonumber \] When it is oxidized, it serves as a reducing agent: \[\ce{H2O2 -> O2 + 2H^+ + 2e^{–}} \nonumber \] Hydrogen peroxide is considerably stronger as an oxidizing agent than as a reducing agent, especially in acidic solutions.
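Each half-equation above must balance in both atoms and charge, with the electrons counted as charge. A quick bookkeeping check in Python (the species encoding and the `balanced` helper are mine, purely illustrative):

```python
from collections import Counter

# Element composition and charge of each species in the two half-reactions.
ELEMENTS = {"H2O2": {"H": 2, "O": 2}, "H+": {"H": 1}, "H2O": {"H": 2, "O": 1},
            "O2": {"O": 2}, "e-": {}}
CHARGE = {"H2O2": 0, "H+": 1, "H2O": 0, "O2": 0, "e-": -1}

def balanced(reactants, products):
    """True if atoms and total charge match on both sides."""
    def atoms(side):
        c = Counter()
        for sp, n in side.items():
            for el, k in ELEMENTS[sp].items():
                c[el] += n * k
        return c
    def charge(side):
        return sum(n * CHARGE[sp] for sp, n in side.items())
    return atoms(reactants) == atoms(products) and charge(reactants) == charge(products)

reduction = ({"H2O2": 1, "H+": 2, "e-": 2}, {"H2O": 2})       # H2O2 as oxidizing agent
oxidation = ({"H2O2": 1}, {"O2": 1, "H+": 2, "e-": 2})        # H2O2 as reducing agent
print(balanced(*reduction), balanced(*oxidation))  # True True
```

Note that adding the two half-equations cancels the electrons and the H⁺, giving the familiar disproportionation 2 H₂O₂ → 2 H₂O + O₂.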
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Basic_Principles_of_Organic_Chemistry_(Roberts_and_Caserio)/25%3A_Amino_Acids_Peptides_and_Proteins/25.06%3A_Synthesis_of_-Amino_Acids |
Many of the types of reactions that are useful for the preparation of amino acids have been discussed previously in connection with separate syntheses of carboxylic acids and amino compounds. Examples include the \(S_\text{N}2\) displacement of halogen from \(\alpha\)-halo acids by ammonia, and the Strecker synthesis, which, in its first step, bears a close relationship to cyanohydrin formation. Other general synthetic methods introduce the \(\alpha\)-amino acid grouping, \(\ce{H_2N-CH-CO_2H}\), by way of enolate anions. Two selected examples follow. Notice that in each a carbanion is generated and alkylated, and that the \(\ce{H_2N}-\) group is introduced as a protected amide or imide group. With those amino acids that are very soluble in water, it usually is necessary to isolate the product either by evaporation of an aqueous solution or by precipitation induced by addition of an organic solvent such as an alcohol. Difficulty may be encountered in obtaining a pure product when inorganic salts are coproducts of the synthesis. The best general method for removal of inorganic salts involves passage of the solutions through columns of suitable ion-exchange resins. The products of laboratory syntheses, starting with achiral reagents, are of course racemic \(\alpha\)-amino acids. To obtain the natural amino acids, the \(D\),\(L\) mixtures must be resolved.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/18%3A_Solubility_and_Complex-Ion_Equilibria/18.2%3A_Relationship_Between_Solubility_and_Ksp |
Considering the relation between solubility and \(K_{sp}\) is important when describing the solubility of slightly soluble ionic compounds. This article discusses ionic compounds that are difficult to dissolve; they are considered "slightly soluble" or "almost insoluble." Solubility product constants (\(K_{sp}\)) are assigned to such solutes, and these constants can be used to find the molar solubility of the compounds; conversely, the relationship lets us find the \(K_{sp}\) of a slightly soluble solute from its measured solubility.

Recall that the definition of solubility is the maximum possible concentration of a solute in a solution at a given temperature and pressure. We can determine the solubility product of a slightly soluble solid from a measurement of its solubility at a given temperature and pressure, provided that the only significant reaction that occurs when the solid dissolves is its dissociation into solvated ions, that is, the only equilibrium involved is: \[\ce{M}_p\ce{X}_q(s)⇌p\mathrm{M^{m+}}(aq)+q\mathrm{X^{n−}}(aq)\] In this case, we calculate the solubility product from the solid's solubility expressed in units of moles per liter (mol/L), known as its molar solubility.

We began the chapter with an informal discussion of how the mineral fluorite is formed. Fluorite, \(\ce{CaF2}\), is a slightly soluble solid that dissolves according to the equation: \[\ce{CaF2}(s)⇌\ce{Ca^2+}(aq)+\ce{2F-}(aq)\nonumber \] The concentration of Ca²⁺ in a saturated solution of CaF₂ is 2.1 × 10⁻⁴ M; therefore, that of F⁻ is 4.2 × 10⁻⁴ M, that is, twice the concentration of \(\ce{Ca^{2+}}\). What is the solubility product of fluorite? First, write out the \(K_{sp}\) expression, then substitute in the concentrations and solve for \(K_{sp}\): \[\ce{CaF2(s) <=> Ca^{2+}(aq) + 2F^{-}(aq)} \nonumber\] A saturated solution is a solution at equilibrium with the solid. Thus: \[\begin{align*} K_\ce{sp} &= [\ce{Ca^{2+}}][\ce{F^{-}}]^2 \\[4pt] &=(2.1×10^{−4})(4.2×10^{−4})^2 \\[4pt] &=3.7×10^{−11}\end{align*}\] As with other equilibrium constants, we do not include units with \(K_{sp}\).

In a saturated solution that is in contact with solid Mg(OH)₂, the concentration of Mg²⁺ is 3.7 × 10⁻⁵ M. What is the solubility product for Mg(OH)₂? \[\ce{Mg(OH)2}(s)⇌\ce{Mg^2+}(aq)+\ce{2OH-}(aq)\nonumber\] Answer: 2.0 × 10⁻¹³

The \(K_{sp}\) of copper(I) bromide, \(\ce{CuBr}\), is 6.3 × 10⁻⁹. Calculate the molar solubility of copper(I) bromide. The reaction is: \[\ce{CuBr}(s)⇌\ce{Cu+}(aq)+\ce{Br-}(aq)\nonumber\] First, write out the solubility product equilibrium constant expression: \[K_\ce{sp}=[\ce{Cu+}][\ce{Br^{-}}]\nonumber\] At equilibrium: \[ \begin{align*} K_\ce{sp} &=[\ce{Cu+}][\ce{Br^{-}}] \\[4pt] 6.3×10^{−9} &=(x)(x)=x^2 \\[4pt] x&=\sqrt{6.3×10^{−9}}=7.9×10^{−5} \end{align*}\] Therefore, the molar solubility of \(\ce{CuBr}\) is 7.9 × 10⁻⁵ M.

Solubility is the maximum amount of solute that can be dissolved in a solvent at equilibrium. Equilibrium is the state at which the concentrations of products and reactants are constant after the reaction has taken place. The solubility product constant (\(K_{sp}\)) describes the equilibrium between a solid and its constituent ions in a solution, and its value identifies the degree to which the compound can dissociate in water: the higher the \(K_{sp}\), the more soluble the compound is. Strictly, \(K_{sp}\) is defined in terms of activity rather than concentration, because activity is a measure of effective concentration that depends on conditions such as temperature, pressure, and composition. \(K_{sp}\) is used to describe a saturated solution of an ionic compound, that is, a solution in equilibrium between the dissolved, dissociated ions and the undissolved solid.
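The two worked examples follow a simple pattern: for a 1:1 salt such as \(\ce{CuBr}\), \(K_{sp} = s^2\); for a 1:2 salt such as \(\ce{CaF2}\), \(K_{sp} = s(2s)^2 = 4s^3\). A short sketch reproducing the arithmetic of the examples (the function names are mine; the numerical values are from the text):

```python
def ksp_1_2(s):
    """Ksp for MX2 <-> M^2+ + 2 X^-: [M][X]^2 = s * (2s)^2 = 4 s^3."""
    return s * (2 * s) ** 2

def solubility_1_1(ksp):
    """Molar solubility for MX <-> M+ + X-: Ksp = s^2, so s = sqrt(Ksp)."""
    return ksp ** 0.5

# CaF2: s = 2.1e-4 M gives Ksp ~ 3.7e-11, matching the worked example.
print(f"{ksp_1_2(2.1e-4):.1e}")         # 3.7e-11
# CuBr: Ksp = 6.3e-9 gives s ~ 7.9e-5 M, matching the worked example.
print(f"{solubility_1_1(6.3e-9):.1e}")  # 7.9e-05
```

The same pattern covers the Mg(OH)₂ exercise, since Mg(OH)₂ is also a 1:2 salt: `ksp_1_2(3.7e-5)` gives about 2.0 × 10⁻¹³.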
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_ChemPRIME_(Moore_et_al.)/06%3A_Chemical_Bonding_-_Electron_Pairs_and_Octets/6.18%3A_Ionic_Compounds_Containing_Polyatomic_Ions |
When polyatomic ions are included, the number of ionic compounds increases significantly. Indeed, most ionic compounds contain polyatomic ions. Well-known examples are sodium hydroxide (NaOH) with OH⁻ as the polyatomic anion, calcium carbonate (CaCO₃), and ammonium nitrate (NH₄NO₃), which contains two polyatomic ions: NH₄⁺ and NO₃⁻. A list of the more important polyatomic ions is given in the following table, which can be used for reference while learning the charges of polyatomic ions. A great many of them are oxyanions (polyatomic ions that contain oxygen).

The properties of compounds containing polyatomic ions are very similar to those of binary ionic compounds. The ions are arranged in a regular lattice and held together by coulombic forces of attraction. The resulting crystalline solids usually have high melting points (1500 °F for CaCO₃), and all conduct electricity when molten. Most are soluble in water and form conducting solutions in which the ions can move around as independent entities. In general polyatomic ions are colorless unless, like CrO₄²⁻ or MnO₄⁻, they contain a transition-metal atom. The more negatively charged polyatomic ions, like their monatomic counterparts, show a distinct tendency to react with water, producing hydroxide ions; for example, \[\ce{PO_{4}^{3-} + H_{2}O \rightarrow HPO_{4}^{2-} + OH^{-}} \nonumber \] It is important to realize that compounds containing polyatomic ions must be electrically neutral. In a crystal of calcium sulfate, for instance, there must be equal numbers of Ca²⁺ and SO₄²⁻ ions in order for the charges to balance; the formula is thus CaSO₄. In the case of sodium sulfate, by contrast, the Na⁺ ion has only a single charge. In this case we need two Na⁺ ions for each SO₄²⁻ ion in order to achieve electroneutrality; the formula is thus Na₂SO₄. Structurally, polyatomic ions are similar to the ionic solids we saw earlier. An example of a simple ionic compound, NaCl, is seen below, alongside a more complex ionic solid, AgClO₃.
Notice how both are tightly packed and form a repeating pattern, which lends both compounds strength and brittleness. In the silver chlorate (AgClO₃), however, polyatomic ions occupy the positions that the Cl⁻ ions occupy in the sodium chloride (NaCl). What is the formula of the ionic compound calcium phosphate? It is necessary to have the correct ratio of calcium ions, Ca²⁺, and phosphate ions, PO₄³⁻, in order to achieve electroneutrality. The required ratio is the inverse of the ratio of the charges on the ions. Since the charges are in the ratio of 2:3, the numbers must be in the ratio of 3:2. In other words, the solid salt must contain three calcium ions for every two phosphate ions. The formula for calcium phosphate is thus Ca₃(PO₄)₂.
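The "inverse ratio of the charges" rule in the example can be written as a tiny function; this is an illustrative sketch (the function name is mine), with the counts reduced by the greatest common divisor so that, for example, a 2+/2− pairing comes out 1:1 rather than 2:2:

```python
from math import gcd

def formula_ratio(cation_charge, anion_charge):
    """Smallest whole-number (cation, anion) counts giving a neutral
    formula unit: the count ratio is the inverse of the charge ratio."""
    c, a = abs(anion_charge), abs(cation_charge)
    g = gcd(c, a)
    return c // g, a // g

print(formula_ratio(2, -3))  # Ca^2+ with PO4^3- -> (3, 2), i.e. Ca3(PO4)2
print(formula_ratio(1, -2))  # Na+ with SO4^2-   -> (2, 1), i.e. Na2SO4
print(formula_ratio(2, -2))  # Ca^2+ with SO4^2- -> (1, 1), i.e. CaSO4
```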
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/02%3A_Gas_Laws/2.12%3A_Van_der_Waals'_Equation |
An equation due to van der Waals extends the ideal gas equation in a straightforward way. Van der Waals' equation is \[\left(P+\frac{an^2}{V^2}\right)\left(V-nb\right)=nRT\] It fits pressure-volume-temperature data for a real gas better than the ideal gas equation does. The improved fit is obtained by introducing two parameters (designated "\(a\)" and "\(b\)") that must be determined experimentally for each gas. Van der Waals' equation is particularly useful in our effort to understand the behavior of real gases, because it embodies a simple physical picture for the difference between a real gas and an ideal gas. In deriving Boyle's law from Newton's laws, we assume that the gas molecules do not interact with one another. Simple arguments show that this can be only approximately true. Real gas molecules must interact with one another. At short distances they repel one another. At somewhat longer distances, they attract one another. The ideal gas equation can also be derived from the basic assumptions that we make in Chapter 10 by an application of the theory of statistical thermodynamics. By making different assumptions about molecular properties, we can apply statistical thermodynamics to derive\({}^{5}\) van der Waals' equation. The required assumptions are that the molecules occupy a finite volume and that they attract one another with a force that varies as the inverse of a power of the distance between them. (The attractive force is usually assumed to be proportional to \(r^{-6}\).) To recognize that real gas molecules both attract and repel one another, we need only remember that any gas can be liquefied by reducing its temperature and increasing the pressure applied to it. If we cool the liquid further, it freezes to a solid. Now, two distinguishing features of a solid are that it retains its shape and that it is almost incompressible.
We attribute the incompressibility of a solid to repulsive forces between its constituent molecules; they have come so close to one another that repulsive forces between them have become important. To compress the solid, the molecules must be pushed still closer together, which requires inordinate force. On the other hand, if we throw an ice cube across the room, all of its constituent water molecules fly across the room together. Evidently, the water molecules in the solid are attracted to one another, otherwise they would all go their separate ways—throwing the ice cube would be like throwing a handful of dry sand. But water molecules are the same molecules whatever the temperature or pressure, so if there are forces of attraction and repulsion between them in the solid, these forces must be present in the liquid and gas phases also. In the gas phase, molecules are far apart; in the liquid or the solid phase, they are packed together. At its boiling point, the volume of a liquid is much less than the volume of the gas from which it is condensed. At the freezing point, the volume of a solid is only slightly different from the volume of the liquid from which it is frozen, and it is certainly greater than zero. These commonplace observations are readily explained by supposing that any molecule has a characteristic volume. We can understand this, in turn, to be a consequence of the nature of the intermolecular forces; evidently, these forces become stronger as the distance between a pair of molecules decreases. Since a liquid or a solid occupies a definite volume, the repulsive force must increase more rapidly than the attractive force when the intermolecular distance is small. Often it is useful to talk about the molar volume of a condensed phase. By molar volume, we mean the volume of one mole of a pure substance. 
The molar volume of a condensed phase is determined by the intermolecular distance at which there is a balance between intermolecular forces of attraction and repulsion. Evidently molecules are very close to one another in condensed phases. If we suppose that the empty spaces between molecules are negligible, the volume of a condensed phase is approximately equal to the number of molecules in the sample multiplied by the volume of a single molecule. Then the molar volume is Avogadro's number times the volume occupied by one molecule. If we know the density, \(D\), and the molar mass, \(\overline{M}\), we can find the molar volume, \(\overline{V}\), as \[\overline{V}=\frac{\overline{M}}{D}\] The volume occupied by a molecule, \(V_{molecule}\), becomes \[V_{molecule}=\frac{\overline{V}}{\overline{N}}\] The pressure and volume appearing in van der Waals' equation are the pressure and volume of the real gas. We can relate the terms in van der Waals' equation to the ideal gas equation: it is useful to think of the terms \(\left(P+{{an}^2}/{V^2}\right)\) and \(\left(V-nb\right)\) as the pressure and volume of a hypothetical ideal gas. That is \[ \begin{align*} P_{ideal\ gas}V_{ideal\ gas} &=\left(P_{real\ gas}+\frac{an^2}{V^2_{real\ gas}}\right)\left(V_{real\ gas}-nb\right) \\[4pt] &=nRT \end{align*}\] Then we have \[V_{real\ gas}=V_{ideal\ gas}+nb\] We derive the ideal gas equation from a model in which the molecules are non-interacting point masses. So the volume of an ideal gas is the volume occupied by a gas whose individual molecules have zero volume. If the individual molecules of a real gas effectively occupy a volume \({b}/{\overline{N}}\), then \(n\) moles of them effectively occupy a volume \[\left({b}/{\overline{N}}\right)\left(n\overline{N}\right)=nb.\] Van der Waals' equation says that the volume of a real gas is the volume that would be occupied by non-interacting point masses, \(V_{ideal\ gas}\), plus the effective volume of the gas molecules themselves.
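The molar-volume relations \(\overline{V} = \overline{M}/D\) and \(V_{molecule} = \overline{V}/\overline{N}\) are easy to put to work. A sketch using liquid water as the example; the function names and the input values (M = 18.0 g/mol, D = 1.00 g/cm³) are mine, not from the text:

```python
AVOGADRO = 6.022e23  # molecules per mole

def molar_volume_cm3(molar_mass_g, density_g_cm3):
    """Molar volume of a condensed phase: V = M / D, in cm^3/mol."""
    return molar_mass_g / density_g_cm3

def molecule_volume_cm3(molar_mass_g, density_g_cm3):
    """Volume per molecule: the molar volume divided by Avogadro's number."""
    return molar_volume_cm3(molar_mass_g, density_g_cm3) / AVOGADRO

# Liquid water: M = 18.0 g/mol, D = 1.00 g/cm^3
print(round(molar_volume_cm3(18.0, 1.00), 1))    # 18.0 cm^3/mol
print(f"{molecule_volume_cm3(18.0, 1.00):.1e}")  # 3.0e-23 cm^3 per molecule
```

A volume of about 3 × 10⁻²³ cm³ per molecule corresponds to a molecular diameter of roughly 3 × 10⁻⁸ cm, which is the right order of magnitude for a small molecule.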
(When data for real gas molecules are fit to the van der Waals' equation, the value of \(b\) is usually somewhat greater than the volume estimated from the liquid density and molecular weight. See problem 24.) Similarly, we have \[P_{\text{real gas}}=P_{\text{ideal gas}}-\frac{an^2}{V^2_{\text{real gas}}}\] We can understand this as a logical consequence of attractive interactions between the molecules of the real gas. With \(a>0\), it says that the pressure of the real gas is less than the pressure of the hypothetical ideal gas, by an amount that is proportional to \({\left({n}/{V}\right)}^2\). The proportionality constant is \(a\). Since \({n}/{V}\) is the molar density (moles per unit volume) of the gas molecules, it is a measure of concentration. The number of collisions between molecules of the same kind is proportional to the square of their concentration. (We consider this point in more detail in Chapters 4 and 5.) So \({\left({n}/{V}\right)}^2\) is a measure of the frequency with which the real gas molecules come into close contact with one another. If they attract one another when they come close to one another, the effect of this attraction should be proportional to \({\left({n}/{V}\right)}^2\). So van der Waals' equation is consistent with the idea that the pressure of a real gas differs from the pressure of the hypothetical ideal gas by an amount that is proportional to the frequency and strength of attractive interactions. But why should attractive interactions have this effect; why should the pressure of the real gas be less than that of the hypothetical ideal gas? Perhaps the best way to develop a qualitative picture is to recognize that attractive intermolecular forces tend to cause the gas molecules to clump up. After all, it is these attractive forces that cause the molecules to aggregate to a liquid at low temperatures.
Above the boiling point, the ability of gas molecules to go their separate ways limits the effects of this tendency; however, even in the gas, the attractive forces must act in a way that tends to reduce the volume occupied by the molecules. Since the volume occupied by the gas is dictated by the size of the container, not by the properties of the gas itself, this clumping-up tendency finds expression as a decrease in pressure. It is frequently useful to describe the interaction between particles or chemical moieties in terms of a potential energy versus distance diagram. Van der Waals' equation corresponds to the case that the repulsive interaction between molecules is non-existent until the molecules come into contact. Once they come into contact, the energy required to move them still closer together becomes arbitrarily large. Often this is described by saying that they behave like "hard spheres". The attractive force between two molecules decreases as the distance between them increases. When they are very far apart the attractive interaction is very small. We say that the energy of interaction is zero when the molecules are infinitely far apart. If we initially have two widely separated, stationary, mutually attracting molecules, they will spontaneously move toward one another, gaining kinetic energy as they go. Their potential energy decreases as they approach one another, reaching its smallest value when the molecules come into contact. Thus, van der Waals' equation implies the potential energy versus distance diagram sketched in Figure 5.
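To see the attractive correction in action, compare the ideal and van der Waals pressures at moderate density. A sketch in Python; the CO₂ constants \(a\) = 3.59 L² atm mol⁻² and \(b\) = 0.0427 L mol⁻¹ are commonly tabulated values assumed here, not taken from the text:

```python
R = 0.08206  # gas constant in L atm mol^-1 K^-1

def pressure_ideal(n, V, T):
    """Ideal gas: P = nRT / V."""
    return n * R * T / V

def pressure_vdw(n, V, T, a, b):
    """Van der Waals: P = nRT/(V - nb) - a n^2 / V^2."""
    return n * R * T / (V - n * b) - a * n ** 2 / V ** 2

# 1 mol of CO2 in 1 L at 300 K, with assumed constants a = 3.59, b = 0.0427.
p_ideal = pressure_ideal(1, 1.0, 300)
p_vdw = pressure_vdw(1, 1.0, 300, 3.59, 0.0427)
print(round(p_ideal, 2))  # 24.62 atm
print(round(p_vdw, 2))    # lower: the a n^2/V^2 attraction term dominates here
```

At this density the attractive term outweighs the excluded-volume correction, so the van der Waals pressure comes out below the ideal value, consistent with the qualitative argument above.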
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/DeVoes_Thermodynamics_and_Chemistry/09%3A_Mixtures/9.04%3A_Liquid_and_Solid_Mixtures_of_Nonelectrolytes |
\( \newcommand{\dt}{\dif\hspace{0.05em} t} % dt\)
\( \newcommand{\difp}{\dif\hspace{0.05em} p} % dp\)
\( \newcommand{\Del}{\Delta}\)
\( \newcommand{\Delsub}[1]{\Delta_{\text{#1}}}\)
\( \newcommand{\pd}[3]{(\partial #1 / \partial #2 )_{#3}} % \pd{}{}{} - partial derivative, one line\)
\( \newcommand{\Pd}[3]{\left( \dfrac {\partial #1} {\partial #2}\right)_{#3}} % Pd{}{}{} - Partial derivative, built-up\)
\( \newcommand{\bpd}[3]{[ \partial #1 / \partial #2 ]_{#3}}\)
\( \newcommand{\bPd}[3]{\left[ \dfrac {\partial #1} {\partial #2}\right]_{#3}}\)
\( \newcommand{\dotprod}{\small\bullet}\)
\( \newcommand{\fug}{f} % fugacity\)
\( \newcommand{\g}{\gamma} % solute activity coefficient, or gamma in general\)
\( \newcommand{\G}{\varGamma} % activity coefficient of a reference state (pressure factor)\)
\( \newcommand{\ecp}{\widetilde{\mu}} % electrochemical or total potential\)
\( \newcommand{\Eeq}{E\subs{cell, eq}} % equilibrium cell potential\)
\( \newcommand{\Ej}{E\subs{j}} % liquid junction potential\)
\( \newcommand{\mue}{\mu\subs{e}} % electron chemical potential\)
\( \newcommand{\defn}{\,\stackrel{\mathrm{def}}{=}\,} % "equal by definition" symbol\) \( \newcommand{\D}{\displaystyle} % for a line in built-up\)
\( \newcommand{\s}{\smash[b]} % use in equations with conditions of validity\)
\( \newcommand{\cond}[1]{\\[-2.5pt]{}\tag*{#1}}\)
\( \newcommand{\nextcond}[1]{\\[-5pt]{}\tag*{#1}}\)
\( \newcommand{\R}{8.3145\units{J$\,$K$\per\,$mol$\per$}} % gas constant value\)
\( \newcommand{\Rsix}{8.31447\units{J$\,$K$\per\,$mol$\per$}} % gas constant value - 6 sig figs\) \( \newcommand{\jn}{\hspace3pt\lower.3ex{\Rule{.6pt}{2ex}{0ex}}\hspace3pt} \)
\( \newcommand{\ljn}{\hspace3pt\lower.3ex{\Rule{.6pt}{.5ex}{0ex}}\hspace-.6pt\raise.45ex{\Rule{.6pt}{.5ex}{0ex}}\hspace-.6pt\raise1.2ex{\Rule{.6pt}{.5ex}{0ex}} \hspace3pt} \)
Homogeneous liquid and solid mixtures are condensed phases of variable composition. Most of the discussion of condensed-phase mixtures in this section focuses on liquids. The same principles, however, apply to homogeneous solid mixtures, often called solid solutions. These solid mixtures include most metal alloys, many gemstones, and doped semiconductors. The relations derived in this section apply to mixtures of nonelectrolytes—substances that do not dissociate into charged species. Solutions of electrolytes behave quite differently in many ways, and will be discussed in the next chapter.

In 1888, the French physical chemist François Raoult published his finding that when a dilute liquid solution of a volatile solvent and a nonelectrolyte solute is equilibrated with a gas phase, the partial pressure \(p_{\text{A}}\) of the solvent in the gas phase is proportional to the mole fraction \(x_{\text{A}}\) of the solvent in the solution: \begin{equation} p_{\text{A}} = x_{\text{A}}\, p_{\text{A}}^* \tag{9.4.1} \end{equation} Here \(p_{\text{A}}^*\) is the saturation vapor pressure of the pure solvent (the pressure at which the pure liquid and pure gas phases are in equilibrium).

Consider the solvent, A, of a solution that is dilute enough to be in the ideal-dilute range. In this range, the solvent fugacity obeys Raoult’s law, and the partial molar quantities of the solvent are the same as those in an ideal mixture. Formulas for these quantities were given in Eqs. 9.4.8–9.4.13 and are collected in the first column of Table 9.2. The formulas show that the chemical potential and partial molar entropy of the solvent, at constant \(T\) and \(p\), vary with the solution composition and, in the limit of infinite dilution (\(x_{\text{A}} \rightarrow 1\)), approach the values for the pure solvent. The partial molar enthalpy, volume, internal energy, and heat capacity, on the other hand, are independent of composition in the ideal-dilute region and are equal to the corresponding molar quantities for the pure solvent.

Next consider a solute, B, of a binary ideal-dilute solution. The solute obeys Henry’s law, and its chemical potential is given by \(\mu_{\text{B}} = \mu_{x,\text{B}}^{\text{ref}} + RT \ln x_{\text{B}}\) (Eq. 9.4.24), where \(\mu_{x,\text{B}}^{\text{ref}}\) is a function of \(T\) and \(p\), but not of composition. \(\mu_{\text{B}}\) varies with the composition and goes to \(-\infty\) as the solution becomes infinitely dilute (\(x_{\text{A}} \rightarrow 1\) and \(x_{\text{B}} \rightarrow 0\)).

For the partial molar entropy of the solute, we use \(S_{\text{B}} = -\left(\partial \mu_{\text{B}}/\partial T\right)_{p,\{n_i\}}\) (Eq. 9.2.48) and obtain \begin{equation} S_{\text{B}} = -\left(\frac{\partial \mu_{x,\text{B}}^{\text{ref}}}{\partial T}\right)_{p} - R \ln x_{\text{B}} \tag{9.4.36} \end{equation} The term \(-\left(\partial \mu_{x,\text{B}}^{\text{ref}}/\partial T\right)_{p}\) represents the partial molar entropy \(S_{x,\text{B}}^{\text{ref}}\) of B in the fictitious reference state of unit solute mole fraction. Thus, we can write Eq. 9.4.36 in the form \begin{equation} S_{\text{B}} = S_{x,\text{B}}^{\text{ref}} - R \ln x_{\text{B}} \tag{9.4.37} \end{equation} (ideal-dilute solution of a nonelectrolyte). This equation shows that the partial molar entropy varies with composition and goes to \(+\infty\) in the limit of infinite dilution. From the expressions of Eqs. 9.4.27 and 9.4.28, we can derive similar expressions for \(S_{\text{B}}\) in terms of the solute reference states on a concentration or molality basis.

The relation \(H_{\text{B}} = \mu_{\text{B}} + T S_{\text{B}}\) (from Eq. 9.2.46), combined with Eqs. 9.4.24 and 9.4.37, yields \begin{equation} H_{\text{B}} = \mu_{x,\text{B}}^{\text{ref}} + T S_{x,\text{B}}^{\text{ref}} = H_{x,\text{B}}^{\text{ref}} \tag{9.4.38} \end{equation} showing that at constant \(T\) and \(p\), the partial molar enthalpy of the solute is constant throughout the ideal-dilute solution range. Therefore, we can write \begin{equation} H_{\text{B}} = H_{\text{B}}^{\infty} \tag{9.4.39} \end{equation} (ideal-dilute solution of a nonelectrolyte), where \(H_{\text{B}}^{\infty}\) is the partial molar enthalpy at infinite dilution. By similar reasoning, using Eqs. 9.2.49–9.2.52, we find that the partial molar volume, internal energy, and heat capacity of the solute are constant in the ideal-dilute range and equal to the values at infinite dilution. The expressions are listed in the second column of Table 9.2. When the pressure is equal to the standard pressure \(p^{\circ}\), the quantities \(H_{\text{B}}^{\infty}\), \(V_{\text{B}}^{\infty}\), \(U_{\text{B}}^{\infty}\), and \(C_{p,\text{B}}^{\infty}\) are the same as the standard values \(H_{\text{B}}^{\circ}\), \(V_{\text{B}}^{\circ}\), \(U_{\text{B}}^{\circ}\), and \(C_{p,\text{B}}^{\circ}\).
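A short numeric sketch of Raoult’s law (Eq. 9.4.1) and the ideal-dilute solute entropy (Eq. 9.4.37). The vapor pressure and reference entropy below are made-up illustrative values, not data from the text:

```python
import math

R = 8.3145  # gas constant, J K^-1 mol^-1

def raoult_partial_pressure(x_A, p_A_star):
    """Raoult's law, Eq. 9.4.1: p_A = x_A * p_A*."""
    return x_A * p_A_star

def solute_partial_molar_entropy(S_ref, x_B):
    """Eq. 9.4.37: S_B = S_B(ref) - R ln x_B (ideal-dilute nonelectrolyte)."""
    return S_ref - R * math.log(x_B)

# Dilute aqueous solution with x_A = 0.99; p_A* = 3.17 kPa is an assumed
# illustrative saturation vapor pressure (roughly water near 25 °C).
p_A = raoult_partial_pressure(0.99, 3.17)            # slightly below p_A*

# S_B increases without bound as x_B -> 0, as the text notes.
S_dilute = solute_partial_molar_entropy(50.0, 1e-2)       # S_ref assumed 50 J/K/mol
S_very_dilute = solute_partial_molar_entropy(50.0, 1e-6)
```

The two entropy calls confirm the qualitative claim in the text: the more dilute the solute, the larger its partial molar entropy.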
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Organic_Chemistry_Lab_Techniques_(Nichols)/01%3A_General_Techniques/1.03%3A_Transferring_Methods/1.3A%3A_Transferring_Methods_-_Solids |
A solid can be dispensed from its reagent jar directly into a vessel or onto a weighing boat or creased piece of paper. If a solid is to be transferred into a vessel with a narrow mouth (such as a round-bottomed flask), a "powder funnel" or wide-mouth funnel can be used (Figure 1.15a). Alternatively, the solid can be nudged off a creased piece of paper in portions using a spatula (Figures 1.15 b+c). If the solid is the limiting reagent in a chemical reaction, it should ideally be dispensed from the reagent jar directly into the vessel (Figure 1.16a). However, if using a weighing boat, residue should be rinsed off with the solvent that will be used in the reaction (only if the boat is unreactive toward the solvent) in order to transfer the reagent in its entirety. Residue clinging to ground glass joints should also be dislodged with a KimWipe or rinsed into the flask with solvent, both to prevent the joints from sticking and to make sure the entire reagent makes it to the reaction vessel. Certain solid compounds (e.g. \(\ce{KOH}\), \(\ce{K2CO3}\), \(\ce{CaCl2}\)) are sticky or hygroscopic (readily absorb water from the air), and these reagents should be dispensed onto glossy weighing paper (used in Figure 1.15b). This weighing paper has a wax coating so that sticky reagents slide off its surface more easily. For transfer into vessels with very narrow mouths (e.g. NMR tubes), it is sometimes easier to dissolve solids in their eventual solvent and transfer the solution via pipette (Figures 1.16 b+c).
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Organic_Chemistry_Lab_Techniques_(Nichols)/02%3A_Chromatography/2.02%3A_Chromatography_Generalities/2.2A%3A_Overview_of_Chromatography |
The first uses of chromatography involved separating the colored components of plants in the early 1900's. The pigments in a plant can be separated into yellow, orange, and green colors (xanthophylls, carotenes, and chlorophylls, respectively) through this method. The Greek word for color is chroma, and graphein is 'to write,' so chromatography can be thought of as "color writing." The general idea of chromatography can be demonstrated with food dyes in your kitchen. Commercial green food dye does not contain any green-colored components at all, and chromatography can show that green food dye is actually a mixture of blue and yellow dyes. If a drop of green food dye is placed in the middle of a paper towel followed by a few drops of water, the water will creep outwards as it wets the paper (Figure 2.1). As the water expands, the dye will travel with it. If you let the dye expand long enough, you'll see that the edges will be tinted with blue (Figure 2.1d). This is the beginning of the separation of the blue and yellow components in the green dye by the paper and water. A complete separation of the green food dye can be accomplished using paper chromatography. A dilute sample is deposited on the bottom edge of a piece of paper, the paper is rolled into a cylinder, stapled, and placed vertically in a closed container containing a small amount of solvent\(^1\) (Figure 2.2a). The solvent is allowed to wick up the paper through capillary action (called "elution," Figure 2.2b), and through this method complete separation of the blue and yellow components can be achieved (Figure 2.2d). \(^1\)The solvent used in this separation is a solution made from a 1:3:1 volume ratio of \(6 \: \text{M} \: \ce{NH_4OH}\):1-pentanol:ethanol.
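The footnote's 1:3:1 volume ratio translates into actual volumes like so (a trivial helper; the 25 mL total is an arbitrary example, not a quantity from the text):

```python
def mix_volumes(total_ml, ratio=(1, 3, 1)):
    """Split a total volume by the footnote's 1:3:1 ratio of
    6 M NH4OH : 1-pentanol : ethanol."""
    parts = sum(ratio)
    return tuple(total_ml * r / parts for r in ratio)

# e.g. to prepare 25 mL of the developing solvent:
nh4oh, pentanol, ethanol = mix_volumes(25.0)
print(nh4oh, pentanol, ethanol)   # 5.0 15.0 5.0
```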
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/26%3A_Structure_of_Organic_Compounds/26.7%3A_Organic_Compounds_Containing_Functional_Groups |
Functional groups are atoms or small groups of atoms (two to four) that exhibit a characteristic reactivity. A particular functional group will almost always display its characteristic chemical behavior when it is present in a compound. Because of their importance in understanding organic chemistry, functional groups have characteristic names that often carry over into the naming of individual compounds incorporating specific groups. As we progress in our study of organic chemistry, it will become extremely important to be able to quickly recognize the most common functional groups, because they are the key structural elements that define how organic molecules react. For now, we will only worry about drawing and recognizing each functional group, as depicted by Lewis and line structures. Much of the remainder of your study of organic chemistry will be taken up with learning about how the different functional groups tend to behave in organic reactions. We have already seen the simplest possible example of an alcohol functional group in methanol. In the alcohol functional group, a carbon is single-bonded to an OH group (this OH group, by itself, is referred to as a hydroxyl group). If the central carbon in an alcohol is bonded to only one other carbon, we call the group a primary alcohol. In secondary alcohols and tertiary alcohols, the central carbon is bonded to two and three carbons, respectively. Methanol, of course, is in a class by itself in this respect. Compounds in which an OH group is attached directly to an aromatic ring are designated ArOH and called phenols. Phenols differ from alcohols in that they are slightly acidic in water. They react with aqueous sodium hydroxide (NaOH) to form salts. \[ArOH_{(aq)} + NaOH_{(aq)} \rightarrow ArONa_{(aq)} + H_2O\] The parent compound, \(\ce{C6H5OH}\), is itself called phenol. (An old name, emphasizing its slight acidity, was carbolic acid.) Phenol is a white crystalline compound that has a distinctive ("hospital smell") odor.
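The primary/secondary/tertiary classification of alcohols described above can be captured in a short sketch (a simplified model that assumes you already know how many carbons are bonded to the carbon bearing the OH group):

```python
def classify_alcohol(carbons_on_central_carbon):
    """Classify an alcohol by the number of carbons bonded to the
    carbon bearing the OH group, per the definitions in the text."""
    labels = {
        0: "methanol (a class by itself)",
        1: "primary alcohol",
        2: "secondary alcohol",
        3: "tertiary alcohol",
    }
    return labels[carbons_on_central_carbon]

print(classify_alcohol(1))  # primary alcohol  (e.g. ethanol, CH3-CH2-OH)
print(classify_alcohol(3))  # tertiary alcohol (e.g. tert-butanol, (CH3)3C-OH)
```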
There are two primary methods to make alcohols in the laboratory: hydration of an alkene and hydrolysis of an alkyl halide. Ethanol is manufactured by reacting ethene with steam. The catalyst used is solid silicon dioxide coated with phosphoric(V) acid. The reaction is reversible. You might expect to get either propan-1-ol or propan-2-ol depending on which way around the water adds to the double bond. In practice what you get is propan-2-ol. If you add a molecule H-X across a carbon-carbon double bond, the hydrogen nearly always gets attached to the carbon with the most hydrogens on it already - in this case the CH2 rather than the CH. The effect of this is that there are bound to be some alcohols which it is impossible to make by reacting alkenes with steam, because the addition would be the wrong way around. The other common method to make alcohols is a substitution reaction: the halogen atom is replaced by an -OH group to give an alcohol. For example, 2-bromopropane is converted into propan-2-ol. The halogenoalkane is heated under reflux with a solution of sodium or potassium hydroxide. Heating under reflux means heating with a condenser placed vertically in the flask to prevent loss of volatile substances from the mixture. The solvent is usually a 50/50 mixture of ethanol and water, because everything will dissolve in that. The halogenoalkane is insoluble in water. If you used water alone as the solvent, the halogenoalkane and the sodium hydroxide solution wouldn't mix and the reaction could only happen where the two layers met. Like other fuels, alcohols burn completely in a good supply of oxygen: \[CH_3CH_2OH + 3O_2 \rightarrow 2CO_2 + 3H_2O\] \[2CH_3OH + 3O_2 \rightarrow 2CO_2 + 4H_2O\] Ethanol can be used as a petrol additive to improve combustion, and its use as a fuel in its own right is under investigation. Furthermore, most methanol is used to make other compounds, for example, methanal (formaldehyde), ethanoic acid, and methyl esters of various acids. In most cases, these are then converted into further products.
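The balanced combustion equation for ethanol fixes the mole ratios, which a quick sketch can apply (the 0.5 mol input is an arbitrary example):

```python
def ethanol_combustion(mol_ethanol):
    """Mole amounts from CH3CH2OH + 3 O2 -> 2 CO2 + 3 H2O.
    Returns (mol O2 consumed, mol CO2 produced, mol H2O produced)."""
    return 3 * mol_ethanol, 2 * mol_ethanol, 3 * mol_ethanol

o2, co2, h2o = ethanol_combustion(0.5)
print(o2, co2, h2o)   # 1.5 1.0 1.5
```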
Phenols are widely used as antiseptics (substances that kill microorganisms on living tissue) and as disinfectants (substances intended to kill microorganisms on inanimate objects such as furniture or floors). The first widely used antiseptic was phenol. Joseph Lister used it for antiseptic surgery in 1867. Phenol is toxic to humans, however, and can cause severe burns when applied to the skin. In the bloodstream, it is a systemic poison—that is, one that is carried to and affects all parts of the body. Its severe side effects led to searches for safer antiseptics, a number of which have been found. In an ether functional group, a central oxygen is bonded to two carbons. Below are the line and Lewis structures of diethyl ether, a common laboratory solvent and also one of the first medical anaesthesia agents. Acid-catalyzed dehydration of small 1º-alcohols constitutes a specialized method of preparing symmetrical ethers. As shown in the following two equations, the success of this procedure depends on the temperature. At 110 ºC to 130 ºC an S\(_\text{N}\)2 reaction of the alcohol conjugate acid leads to an ether product. At higher temperatures (over 150 ºC) an E2 elimination takes place. In this reaction the alcohol has to be used in excess and the temperature has to be maintained around 413 K. If the alcohol is not used in excess or the temperature is higher, the alcohol will preferentially undergo dehydration to yield an alkene. Thus ethanol is dehydrated to ethene in the presence of sulfuric acid at 433 K, but at around 413 K ethoxyethane is the main product. The dehydration of secondary and tertiary alcohols to get the corresponding ethers is unsuccessful, as alkenes are formed too easily in these reactions. This reaction cannot be employed to prepare unsymmetrical ethers, because a mixture of products is likely to be obtained. There are a number of functional groups that contain a carbon-oxygen double bond, which is commonly referred to as a carbonyl group.
Ketones and aldehydes are two closely related carbonyl-based functional groups that react in very similar ways. In a ketone, the carbon atom of a carbonyl is bonded to two other carbons. In an aldehyde, the carbonyl carbon is bonded on one side to a hydrogen, and on the other side to a carbon. The exception to this definition is formaldehyde, in which the carbonyl carbon has bonds to two hydrogens. Aldehydes and ketones can be prepared using a wide variety of reactions. Although these reactions are discussed in greater detail in other sections, they are listed here as a summary and to help with planning multistep synthetic pathways. A common way to synthesize aldehydes is the hydration of an alkyne: an addition reaction of a hydroxyl group to an alkyne causes tautomerization, which subsequently forms a carbonyl. If a carbonyl carbon is bonded on one side to a carbon (or hydrogen) and on the other side to a heteroatom (in organic chemistry, this term generally refers to oxygen, nitrogen, sulfur, or one of the halogens), the functional group is considered to be one of the 'carboxylic acid derivatives,' a designation that describes a grouping of several functional groups. The eponymous member of this grouping is the carboxylic acid functional group, in which the carbonyl is bonded to a hydroxyl (OH) group. As the name implies, carboxylic acids are acidic, meaning that they are readily deprotonated to form the conjugate base form, called a carboxylate (much more about carboxylic acids in the acid-base chapter!). The oxidation of aldehydes or primary alcohols forms carboxylic acids: in the presence of an oxidizing agent, ethanol is oxidized to acetaldehyde, which is then oxidized to acetic acid. This process also occurs in the liver, where enzymes catalyze the oxidation of ethanol to acetic acid.
\(\mathrm{CH_3CH_2OH \xrightarrow[\text{oxidizing agent}]{\text{alcohol dehydrogenase}} CH_3CHO \xrightarrow[\text{oxidizing agent}]{\text{alcohol dehydrogenase}} CH_3COOH}\) Acetic acid can be further oxidized to carbon dioxide and water. In esters, the carbonyl carbon is bonded to an oxygen which is itself bonded to another carbon. Another way of thinking of an ester is that it is a carbonyl bonded to an alcohol. Thioesters are similar to esters, except a sulfur is in place of the oxygen. Some esters can be prepared by esterification, a reaction in which a carboxylic acid and an alcohol, heated in the presence of a mineral acid catalyst, form an ester and water: the reaction is reversible. As a specific example of an esterification reaction, butyl acetate can be made from acetic acid and 1-butanol. Esters are common solvents. Ethyl acetate is used to extract organic solutes from aqueous solutions—for example, to remove caffeine from coffee. It also is used to remove nail polish and paint. Cellulose nitrate is dissolved in ethyl acetate and butyl acetate to form lacquers. The solvent evaporates as the lacquer "dries," leaving a thin film on the surface. High-boiling esters are used as softeners (plasticizers) for brittle plastics. In amides, the carbonyl carbon is bonded to a nitrogen. The nitrogen in an amide can be bonded either to hydrogens, to carbons, or to both. Another way of thinking of an amide is that it is a carbonyl bonded to an amine. The addition of ammonia (\(\ce{NH3}\)) to a carboxylic acid forms an amide, but the reaction is very slow in the laboratory at room temperature. Water molecules are split out, and a bond is formed between the nitrogen atom and the carbonyl carbon atom. In living cells, amide formation is catalyzed by enzymes. Proteins are polyamides; they are formed by joining amino acids into long chains. In proteins, the amide functional group is called a peptide bond.
With the exception of formamide (\(\ce{HCONH2}\)), which is a liquid, all simple amides are solids (Table \(\PageIndex{1}\)). The lower members of the series are soluble in water, with borderline solubility occurring in those that have five or six carbon atoms. Like the esters, solutions of amides in water usually are neutral—neither acidic nor basic. The amides generally have high boiling points and melting points. These characteristics and their solubility in water result from the polar nature of the amide group and hydrogen bonding (Figure \(\PageIndex{1}\)). (Similar hydrogen bonding plays a critical role in determining the structure and properties of proteins, deoxyribonucleic acid [DNA], ribonucleic acid [RNA], and other giant molecules so important to life processes.) Ammonia is the simplest example of a functional group called an amine. Just as there are primary, secondary, and tertiary alcohols, there are primary, secondary, and tertiary amines. One of the most important properties of amines is that they are basic, and are readily protonated to form cations. \[ CH_3CH_2NH_3^+Br^- + NH_3 \rightleftharpoons CH_3CH_2NH_2 + NH_4^+ Br^-\] A single compound often contains several functional groups. The six-carbon sugar molecules glucose and fructose, for example, contain aldehyde and ketone groups, respectively, and both contain five alcohol groups (a compound with several alcohol groups is often referred to as a 'polyol'). Capsaicin, the compound responsible for the heat in hot peppers, contains phenol, ether, amide, and alkene functional groups. The male sex hormone testosterone contains ketone, alkene, and secondary alcohol groups, while acetylsalicylic acid (aspirin) contains aromatic, carboxylic acid, and ester groups. While not in any way a complete list, this section has covered most of the important functional groups that we will encounter in biological and laboratory organic chemistry.
The table on the inside back cover provides a summary of all of the groups listed in this section, plus a few more that will be introduced later in the text. Identify the functional groups in the following organic compounds. State whether alcohols and amines are primary, secondary, or tertiary. Draw one example each (there are many possible correct answers) of compounds fitting the descriptions below, using line structures. Be sure to designate the location of all non-zero formal charges. All atoms should have complete octets (phosphorus may exceed the octet rule). by (University of Minnesota, Morris)
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chemistry_1e_(OpenSTAX)/19%3A_Transition_Metals_and_Coordination_Chemistry/19.1%3A_Properties_of_Transition_Metals_and_Their_Compounds |
We have daily contact with many transition metals. Iron occurs everywhere—from the rings in your spiral notebook and the cutlery in your kitchen to automobiles, ships, buildings, and the hemoglobin in your blood. Titanium is useful in the manufacture of lightweight, durable products such as bicycle frames, artificial hips, and jewelry. Chromium is useful as a protective plating on plumbing fixtures and automotive detailing. Transition metals are defined as those elements that have (or readily form) partially filled d orbitals. As shown in Figure \(\PageIndex{2}\), the d-block elements in groups 3–11 are transition elements. The f-block elements, also called inner transition metals (the lanthanides and actinides), also meet this criterion because the f orbital is partially occupied before the d orbitals. The d orbitals fill with the copper family (group 11); for this reason, the next family (group 12) are technically not transition elements. However, the group 12 elements do display some of the same chemical properties and are commonly included in discussions of transition metals. Some chemists do treat the group 12 elements as transition metals. The d-block elements are divided into the first transition series (the elements Sc through Cu), the second transition series (the elements Y through Ag), and the third transition series (the element La and the elements Hf through Au). Actinium, Ac, is the first member of the fourth transition series, which also includes Rf through Rg. The f-block elements are the elements Ce through Lu, which constitute the lanthanide series (or lanthanoid series), and the elements Th through Lr, which constitute the actinide series (or actinoid series). Because lanthanum behaves very much like the lanthanide elements, it is considered a lanthanide element, even though its electron configuration makes it the first member of the third transition series. Similarly, the behavior of actinium means it is part of the actinide series, although its electron configuration makes it the first member of the fourth transition series. Review how to write electron configurations, covered in the chapter on electronic structure and periodic properties of elements.
Recall that for the transition and inner transition metals, it is necessary to remove the s electrons before the d or f electrons. Then, for each ion, give the electron configuration. For the examples that are transition metals, determine to which series they belong. For ions, the s-valence electrons are lost prior to the d or f electrons. Give an example of an ion from the first transition series with no d electrons. \(\ce{V^5+}\) is one possibility. Other examples include \(\ce{Sc^3+}\), \(\ce{Ti^4+}\), \(\ce{Cr^6+}\), and \(\ce{Mn^7+}\). Lanthanides (elements 57–71) are fairly abundant in the earth’s crust, despite their historic characterization as rare earth elements. Thulium, the rarest naturally occurring lanthanoid, is more common in the earth’s crust than silver (\(4.5 \times 10^{-5}\%\) versus \(0.79 \times 10^{-5}\%\) by mass). There are 17 rare earth elements, consisting of the 15 lanthanoids plus scandium and yttrium. They are called rare because they were once difficult to extract economically, so it was rare to have a pure sample; due to similar chemical properties, it is difficult to separate any one lanthanide from the others. However, newer separation methods, such as ion exchange resins similar to those found in home water softeners, make the separation of these elements easier and more economical. Most ores that contain these elements have low concentrations of all the rare earth elements mixed together. The commercial applications of lanthanides are growing rapidly. For example, europium is important in flat screen displays found in computer monitors, cell phones, and televisions. Neodymium is useful in laptop hard drives and in the processes that convert crude oil into gasoline (Figure \(\PageIndex{3}\)). Holmium is found in dental and medical equipment. In addition, many alternative energy technologies rely heavily on lanthanoids. Neodymium and dysprosium are key components of hybrid vehicle engines and the magnets used in wind turbines. As the demand for lanthanide materials has increased faster than supply, prices have also increased.
In 2008, dysprosium cost $110/kg; by 2014, the price had increased to $470/kg. Increasing the supply of lanthanoid elements is one of the most significant challenges facing the industries that rely on the optical and magnetic properties of these materials. The transition elements have many properties in common with other metals. They are almost all hard, high-melting solids that conduct heat and electricity well. They readily form alloys and lose electrons to form stable cations. In addition, transition metals form a wide variety of stable coordination compounds, in which the central metal atom or ion acts as a Lewis acid and accepts one or more pairs of electrons. Many different molecules and ions can donate lone pairs to the metal center, serving as Lewis bases. In this chapter, we shall focus primarily on the chemical behavior of the elements of the first transition series. Transition metals demonstrate a wide range of chemical behaviors. As can be seen from their reduction potentials (Table P1), some transition metals are strong reducing agents, whereas others have very low reactivity. For example, the lanthanides all form stable 3+ aqueous cations. The driving force for such oxidations is similar to that of alkaline earth metals such as Be or Mg, forming \(\ce{Be^2+}\) and \(\ce{Mg^2+}\). On the other hand, materials like platinum and gold have much higher reduction potentials. Their ability to resist oxidation makes them useful materials for constructing circuits and jewelry. Ions of the lighter d-block elements, such as \(\ce{Cr^3+}\), \(\ce{Fe^3+}\), and \(\ce{Co^2+}\), form colorful hydrated ions that are stable in water. However, ions in the period just below these (\(\ce{Mo^3+}\), \(\ce{Ru^3+}\), and \(\ce{Ir^2+}\)) are unstable and react readily with oxygen from the air. The majority of simple, water-stable ions formed by the heavier d-block elements are oxyanions such as \(\ce{MoO4^2-}\) and \(\ce{ReO4-}\). Ruthenium, osmium, rhodium, iridium, palladium, and platinum are the platinum metals.
With difficulty, they form simple cations that are stable in water, and, unlike the earlier elements in the second and third transition series, they do not form stable oxyanions. Both the d- and f-block elements react with nonmetals to form binary compounds; heating is often required. These elements react with halogens to form a variety of halides ranging in oxidation state from 1+ to 6+. On heating, oxygen reacts with all of the transition elements except palladium, platinum, silver, and gold. The oxides of these latter metals can be formed using other reactants, but they decompose upon heating. The f-block elements, the elements of group 3, and the elements of the first transition series except copper react with aqueous solutions of acids, forming hydrogen gas and solutions of the corresponding salts. Transition metals can form compounds with a wide range of oxidation states. Some of the observed oxidation states of the elements of the first transition series are shown in Figure \(\PageIndex{4}\). As we move from left to right across the first transition series, we see that the number of common oxidation states increases at first to a maximum towards the middle of the table, then decreases. The values in the table are typical values; there are other known values, and it is possible to synthesize new additions. For example, in 2014, researchers were successful in synthesizing a new oxidation state of iridium (9+). For the elements scandium through manganese (the first half of the first transition series), the highest oxidation state corresponds to the loss of all of the electrons in both the s and d orbitals of their valence shells. The titanium(IV) ion, for example, is formed when the titanium atom loses its two 3d and two 4s electrons. These highest oxidation states are the most stable forms of scandium, titanium, and vanadium. However, it is not possible to continue to remove all of the valence electrons from metals as we continue through the series.
Iron is known to form oxidation states from 2+ to 6+, with iron(II) and iron(III) being the most common. Most of the elements of the first transition series form ions with a charge of 2+ or 3+ that are stable in water, although those of the early members of the series can be readily oxidized by air. The elements of the second and third transition series generally are more stable in higher oxidation states than are the elements of the first series. In general, the atomic radius increases down a group, which leads to the ions of the second and third series being larger than are those in the first series. Removing electrons from orbitals that are located farther from the nucleus is easier than removing electrons close to the nucleus. For example, molybdenum and tungsten, members of group 6, are limited mostly to an oxidation state of 6+ in aqueous solution. Chromium, the lightest member of the group, forms stable \(\ce{Cr^3+}\) ions in water and, in the absence of air, less stable \(\ce{Cr^2+}\) ions. The sulfide with the highest oxidation state for chromium is \(\ce{Cr2S3}\), which contains the \(\ce{Cr^3+}\) ion. Molybdenum and tungsten form sulfides in which the metals exhibit oxidation states of 4+ and 6+. Which is the strongest oxidizing agent in acidic solution: dichromate ion, which contains chromium(VI), permanganate ion, which contains manganese(VII), or titanium dioxide, which contains titanium(IV)? First, we need to look up the reduction half reactions (Table P1) for each oxide in the specified oxidation state: \[\ce{Cr2O7^2- + 14H+ + 6e- ⟶ 2Cr^3+ + 7H2O} \hspace{20px} \mathrm{+1.33\: V} \nonumber \] \[\ce{MnO4- + 8H+ + 5e- ⟶ Mn^2+ + 4H2O} \hspace{20px} \mathrm{+1.51\: V} \nonumber \] \[\ce{TiO2 + 4H+ + 2e- ⟶ Ti^2+ + 2H2O} \hspace{20px} \mathrm{−0.50\: V} \nonumber \] A larger reduction potential means that it is easier to reduce the reactant. Permanganate, with the largest reduction potential, is the strongest oxidizer under these conditions. 
Dichromate is next, followed by titanium dioxide as the weakest oxidizing agent (the hardest to reduce) of this set. Predict what reaction (if any) will occur between HCl and Co(s), and between HBr and Pt(s). You will need to use the standard reduction potentials from (Table P1). \(\ce{Co}(s)+\ce{2HCl}⟶\ce{H2}+\ce{CoCl2}(aq)\); no reaction occurs between HBr and Pt(s) because Pt(s) will not be oxidized by \(\ce{H+}\). Ancient civilizations knew about iron, copper, silver, and gold. The time periods in human history known as the Bronze Age and Iron Age mark the advancements in which societies learned to isolate certain metals and use them to make tools and goods. Naturally occurring ores of copper, silver, and gold can contain high concentrations of these metals in elemental form (Figure \(\Page {5}\)). Iron, on the other hand, occurs on earth almost exclusively in oxidized forms, such as rust (\(\ce{Fe2O3}\)). The earliest known iron implements were made from iron meteorites. Surviving iron artifacts dating from approximately 4000 to 2500 BC are rare, but all known examples contain specific alloys of iron and nickel that occur only in extraterrestrial objects, not on earth. It took thousands of years of technological advances before civilizations developed iron smelting, the ability to extract a pure element from its naturally occurring ores, and for iron tools to become common. Generally, the transition elements are extracted from minerals found in a variety of ores. However, the ease of their recovery varies widely, depending on the concentration of the element in the ore, the identity of the other elements present, and the difficulty of reducing the element to the free metal. In general, it is not difficult to reduce ions of the d-block elements to the free element. Carbon is a sufficiently strong reducing agent in most cases. However, like the ions of the more active main group metals, ions of the f-block elements must be isolated by electrolysis or by reduction with an active metal such as calcium. 
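The ranking in the worked example above follows mechanically from the tabulated potentials. A short Python sketch (species labels and E° values taken from the half-reactions quoted in the example) makes the comparison explicit:

```python
# Rank oxidizing agents by standard reduction potential (volts).
# A larger E° means the oxidized species is easier to reduce,
# i.e., it is the stronger oxidizing agent.
potentials = {
    "MnO4- (Mn(VII))":   +1.51,
    "Cr2O7^2- (Cr(VI))": +1.33,
    "TiO2 (Ti(IV))":     -0.50,
}

ranked = sorted(potentials, key=potentials.get, reverse=True)
for species in ranked:
    print(f"{species}: {potentials[species]:+.2f} V")
# strongest oxidizer first: permanganate, then dichromate, then TiO2
```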
We shall discuss the processes used for the isolation of iron, copper, and silver because these three processes illustrate the principal means of isolating most of the d-block metals. In general, each of these processes involves three principal steps: preliminary treatment, smelting, and refining. The early application of iron to the manufacture of tools and weapons was possible because of the wide distribution of iron ores and the ease with which iron compounds in the ores could be reduced by carbon. For a long time, charcoal was the form of carbon used in the reduction process. The production and use of iron became much more widespread about 1620, when coke was introduced as the reducing agent. Coke is a form of carbon formed by heating coal in the absence of air to remove impurities. The first step in the metallurgy of iron is usually roasting the ore (heating the ore in air) to remove water, decomposing carbonates into oxides, and converting sulfides into oxides. The oxides are then reduced in a blast furnace that is 80–100 feet high and about 25 feet in diameter (Figure \(\Page {6}\)) in which the roasted ore, coke, and limestone (impure \(\ce{CaCO3}\)) are introduced continuously into the top. Molten iron and slag are withdrawn at the bottom. The entire stock in a furnace may weigh several hundred tons. Near the bottom of a furnace are nozzles through which preheated air is blown into the furnace. As soon as the air enters, the coke in the region of the nozzles is oxidized to carbon dioxide with the liberation of a great deal of heat. The hot carbon dioxide passes upward through the overlying layer of white-hot coke, where it is reduced to carbon monoxide: \[\ce{CO2}(g)+\ce{C}(s)⟶\ce{2CO}(g) \nonumber \] The carbon monoxide serves as the reducing agent in the upper regions of the furnace. The individual reactions are indicated in Figure \(\Page {6}\). The iron oxides are reduced in the upper region of the furnace. 
In the middle region, limestone (calcium carbonate) decomposes, and the resulting calcium oxide combines with silica and silicates in the ore to form slag. The slag is mostly calcium silicate and contains most of the commercially unimportant components of the ore: \[\ce{CaO}(s)+\ce{SiO2}(s)⟶\ce{CaSiO3}(l) \nonumber \] Just below the middle of the furnace, the temperature is high enough to melt both the iron and the slag. They collect in layers at the bottom of the furnace; the less dense slag floats on the iron and protects it from oxidation. Several times a day, the slag and molten iron are withdrawn from the furnace. The iron is transferred to casting machines or to a steelmaking plant (Figure \(\Page {7}\)). Much of the iron produced is refined and converted into steel. Steel is made from iron by removing impurities and adding substances such as manganese, chromium, nickel, tungsten, molybdenum, and vanadium to produce alloys with properties that make the material suitable for specific uses. Most steels also contain small but definite percentages of carbon (0.04%–2.5%). However, a large part of the carbon contained in iron must be removed in the manufacture of steel; otherwise, the excess carbon would make the iron brittle. The most important ores of copper contain copper sulfides (such as covellite, CuS), although copper oxides (such as tenorite, CuO) and copper hydroxycarbonates [such as malachite, \(\ce{Cu2(OH)2CO3}\)] are sometimes found. In the production of copper metal, the concentrated sulfide ore is roasted to remove part of the sulfur as sulfur dioxide. The remaining mixture, which consists of \(\ce{Cu2S}\), FeS, FeO, and \(\ce{SiO2}\), is mixed with limestone, which serves as a flux (a material that aids in the removal of impurities), and heated. 
Molten slag forms as the iron and silica are removed by Lewis acid-base reactions: \[\ce{CaCO3}(s)+\ce{SiO2}(s)⟶\ce{CaSiO3}(l)+\ce{CO2}(g) \nonumber \] \[\ce{FeO}(s)+\ce{SiO2}(s)⟶\ce{FeSiO3}(l) \nonumber \] In these reactions, the silicon dioxide behaves as a Lewis acid, which accepts a pair of electrons from the Lewis base (the oxide ion). Reduction of the \(\ce{Cu2S}\) that remains after smelting is accomplished by blowing air through the molten material. The air converts part of the \(\ce{Cu2S}\) into \(\ce{Cu2O}\). As soon as copper(I) oxide is formed, it is reduced by the remaining copper(I) sulfide to metallic copper: \[\ce{2Cu2S}(l)+\ce{3O2}(g)⟶\ce{2Cu2O}(l)+\ce{2SO2}(g) \nonumber \] \[\ce{2Cu2O}(l)+\ce{Cu2S}(l)⟶\ce{6Cu}(l)+\ce{SO2}(g) \nonumber \] The copper obtained in this way is called blister copper because of its characteristic appearance, which is due to the air blisters it contains (Figure \(\Page {8}\)). This impure copper is cast into large plates, which are used as anodes in the electrolytic refining of the metal (which is described in the chapter on electrochemistry). Silver sometimes occurs in large nuggets (Figure \(\Page {9}\)) but more frequently in veins and related deposits. At one time, panning was an effective method of isolating both silver and gold nuggets. Due to their low reactivity, these metals, and a few others, occur in deposits as nuggets. The discovery of platinum was due to Spanish explorers in Central America mistaking platinum nuggets for silver. When the metal is not in the form of nuggets, it is often useful to employ a process called hydrometallurgy to separate silver from its ores. Hydrometallurgy involves the separation of a metal from a mixture by first converting it into soluble ions and then extracting and reducing them to precipitate the pure metal. In the presence of air, alkali metal cyanides readily form the soluble dicyanoargentate(I) ion, \(\ce{[Ag(CN)2]-}\), from silver metal or silver-containing compounds such as \(\ce{Ag2S}\) and AgCl. 
Representative equations are: \[\ce{4Ag}(s)+\ce{8CN-}(aq)+\ce{O2}(g)+\ce{2H2O}(l)⟶\ce{4[Ag(CN)2]-}(aq)+\ce{4OH-}(aq) \nonumber \] \[\ce{2Ag2S}(s)+\ce{8CN-}(aq)+\ce{O2}(g)+\ce{2H2O}(l)⟶\ce{4[Ag(CN)2]-}(aq)+\ce{2S}(s)+\ce{4OH-}(aq) \nonumber \] \[\ce{AgCl}(s)+\ce{2CN-}(aq)⟶\ce{[Ag(CN)2]-}(aq)+\ce{Cl-}(aq) \nonumber \] The silver is precipitated from the cyanide solution by the addition of either zinc or iron(II) ions, which serve as the reducing agent: \[\ce{2[Ag(CN)2]-}(aq)+\ce{Zn}(s)⟶\ce{2Ag}(s)+\ce{[Zn(CN)4]^2-}(aq) \nonumber \] One of the steps for refining silver involves converting silver into dicyanoargentate(I) ions: \[\ce{4Ag}(s)+\ce{8CN-}(aq)+\ce{O2}(g)+\ce{2H2O}(l)⟶\ce{4[Ag(CN)2]-}(aq)+\ce{4OH-}(aq) \nonumber \] Explain why oxygen must be present to carry out the reaction. Why does the reaction not occur as: \[\ce{4Ag}(s)+\ce{8CN-}(aq)⟶\ce{4[Ag(CN)2]-}(aq)? \nonumber \] The charges, as well as the atoms, must balance in reactions. The silver atom is being oxidized from the 0 oxidation state to the 1+ state. Whenever something loses electrons, something must also gain electrons (be reduced) to balance the equation. Oxygen is a good oxidizing agent for these reactions because it can gain electrons to go from the 0 oxidation state to the 2− state. During the refining of iron, carbon must be present in the blast furnace. Why is carbon necessary to convert iron oxide into iron? The carbon is converted into CO, which is the reducing agent that supplies electrons so that iron(III) can be reduced to iron(0). The bonding in the simple compounds of the transition elements ranges from ionic to covalent. In their lower oxidation states, the transition elements form ionic compounds; in their higher oxidation states, they form covalent compounds or polyatomic ions. The variation in oxidation states exhibited by the transition elements gives these compounds a metal-based, oxidation-reduction chemistry. 
The chemistry of several classes of compounds containing elements of the transition series follows. Anhydrous halides of each of the transition elements can be prepared by the direct reaction of the metal with halogens. For example: \[\ce{2Fe}(s)+\ce{3Cl2}(g)⟶\ce{2FeCl3}(s) \nonumber \] Heating a metal halide with additional metal can be used to form a halide of the metal with a lower oxidation state: \[\ce{Fe}(s)+\ce{2FeCl3}(s)⟶\ce{3FeCl2}(s) \nonumber \] The stoichiometry of the metal halide that results from the reaction of the metal with a halogen is determined by the relative amounts of metal and halogen and by the strength of the halogen as an oxidizing agent. Generally, fluorine forms fluorides with the metals in their highest oxidation states. The other halogens may not form analogous compounds. In general, stable aqueous solutions of the halides of the metals of the first transition series are prepared by the addition of a hydrohalic acid to carbonates, hydroxides, oxides, or other compounds that contain basic anions. Sample reactions are: \[\ce{NiCO3}(s)+\ce{2HF}(aq)⟶\ce{NiF2}(aq)+\ce{H2O}(l)+\ce{CO2}(g) \nonumber \] \[\ce{Co(OH)2}(s)+\ce{2HBr}(aq)⟶\ce{CoBr2}(aq)+\ce{2H2O}(l) \nonumber \] Most of the first transition series metals also dissolve in acids, forming a solution of the salt and hydrogen gas. For example: \[\ce{Cr}(s)+\ce{2HCl}(aq)⟶\ce{CrCl2}(aq)+\ce{H2}(g) \nonumber \] The polarity of bonds with transition metals varies based not only upon the electronegativities of the atoms involved but also upon the oxidation state of the transition metal. Remember that bond polarity is a continuous spectrum with electrons being shared evenly (covalent bonds) at one extreme and electrons being transferred completely (ionic bonds) at the other. No bond is ever 100% ionic, and the degree to which the electrons are evenly distributed determines many properties of the compound. Transition metal halides with low oxidation numbers form more ionic bonds. 
For example, titanium(II) chloride and titanium(III) chloride (\(\ce{TiCl2}\) and \(\ce{TiCl3}\)) have high melting points that are characteristic of ionic compounds, but titanium(IV) chloride (\(\ce{TiCl4}\)) is a volatile liquid, consistent with having covalent titanium-chlorine bonds. All halides of the heavier d-block elements have significant covalent characteristics. The covalent behavior of the transition metals with higher oxidation states is exemplified by the reaction of the metal tetrahalides with water. Like covalent silicon tetrachloride, both the titanium and vanadium tetrahalides react with water to give solutions containing the corresponding hydrohalic acids and the metal oxides: \[\ce{SiCl4}(l)+\ce{2H2O}(l)⟶\ce{SiO2}(s)+\ce{4HCl}(aq) \nonumber \] \[\ce{TiCl4}(l)+\ce{2H2O}(l)⟶\ce{TiO2}(s)+\ce{4HCl}(aq) \nonumber \] As with the halides, the nature of bonding in oxides of the transition elements is determined by the oxidation state of the metal. Oxides with low oxidation states tend to be more ionic, whereas those with higher oxidation states are more covalent. These variations in bonding are because the electronegativities of the elements are not fixed values. The electronegativity of an element increases with increasing oxidation state. Transition metals in low oxidation states have lower electronegativity values than oxygen; therefore, these metal oxides are ionic. Transition metals in very high oxidation states have electronegativity values close to that of oxygen, which leads to these oxides being covalent. The oxides of the first transition series can be prepared by heating the metals in air. These oxides are \(\ce{Sc2O3}\), \(\ce{TiO2}\), \(\ce{V2O5}\), \(\ce{Cr2O3}\), \(\ce{Mn3O4}\), \(\ce{Fe3O4}\), \(\ce{Co3O4}\), NiO, and CuO. Alternatively, these oxides and other oxides (with the metals in different oxidation states) can be produced by heating the corresponding hydroxides, carbonates, or oxalates in an inert atmosphere. 
Iron(II) oxide can be prepared by heating iron(II) oxalate, and cobalt(II) oxide is produced by heating cobalt(II) hydroxide: \[\ce{FeC2O4}(s)⟶\ce{FeO}(s)+\ce{CO}(g)+\ce{CO2}(g) \nonumber \] \[\ce{Co(OH)2}(s)⟶\ce{CoO}(s)+\ce{H2O}(g) \nonumber \] With the exception of \(\ce{CrO3}\) and \(\ce{Mn2O7}\), transition metal oxides are not soluble in water. They can react with acids and, in a few cases, with bases. Overall, oxides of transition metals with the lowest oxidation states are basic (and react with acids), the intermediate ones are amphoteric, and the highest oxidation states are primarily acidic. Basic metal oxides at a low oxidation state react with aqueous acids to form solutions of salts and water. Examples include the reaction of cobalt(II) oxide accepting protons from nitric acid, and scandium(III) oxide accepting protons from hydrochloric acid: \[\ce{CoO}(s)+\ce{2HNO3}(aq)⟶\ce{Co(NO3)2}(aq)+\ce{H2O}(l) \nonumber \] \[\ce{Sc2O3}(s)+\ce{6HCl}(aq)⟶\ce{2ScCl3}(aq)+\ce{3H2O}(l) \nonumber \] The oxides of metals with oxidation states of 4+ are amphoteric, and most are not soluble in either acids or bases. Vanadium(V) oxide, chromium(VI) oxide, and manganese(VII) oxide are acidic. They react with solutions of hydroxides to form salts of the oxyanions \(\ce{VO4^3-}\), \(\ce{CrO4^2-}\), and \(\ce{MnO4-}\). For example, the complete ionic equation for the reaction of chromium(VI) oxide with a strong base is given by: \[\ce{CrO3}(s)+\ce{2Na+}(aq)+\ce{2OH-}(aq)⟶\ce{2Na+}(aq)+\ce{CrO4^2-}(aq)+\ce{H2O}(l) \nonumber \] Chromium(VI) oxide and manganese(VII) oxide react with water to form the acids \(\ce{H2CrO4}\) and \(\ce{HMnO4}\), respectively. When a soluble hydroxide is added to an aqueous solution of a salt of a transition metal of the first transition series, a gelatinous precipitate forms. For example, adding a solution of sodium hydroxide to a solution of cobalt sulfate produces a gelatinous pink or blue precipitate of cobalt(II) hydroxide. 
The net ionic equation is: \[\ce{Co^2+}(aq)+\ce{2OH-}(aq)⟶\ce{Co(OH)2}(s) \nonumber \] In this and many other cases, these precipitates are hydroxides containing the transition metal ion, hydroxide ions, and water coordinated to the transition metal. In other cases, the precipitates are hydrated oxides composed of the metal ion, oxide ions, and water of hydration: \[\ce{2Fe^3+}(aq)+\ce{6OH-}(aq)+\ce{nH2O}(l)⟶\ce{Fe2O3⋅(n + 3)H2O}(s) \nonumber \] These substances do not contain hydroxide ions. However, both the hydroxides and the hydrated oxides react with acids to form salts and water. When precipitating a metal from solution, it is necessary to avoid an excess of hydroxide ion, as this may lead to complex ion formation as discussed later in this chapter. The precipitated metal hydroxides can be separated for further processing or for waste disposal. Many of the elements of the first transition series form insoluble carbonates. It is possible to prepare these carbonates by the addition of a soluble carbonate salt to a solution of a transition metal salt. For example, nickel carbonate can be prepared from solutions of nickel nitrate and sodium carbonate according to the following net ionic equation: \[\ce{Ni^2+}(aq)+\ce{CO3^2-}(aq)⟶\ce{NiCO3}(s) \nonumber \] The reactions of the transition metal carbonates are similar to those of the active metal carbonates. They react with acids to form metal salts, carbon dioxide, and water. Upon heating, they decompose, forming the transition metal oxides. In many respects, the chemical behavior of the elements of the first transition series is very similar to that of the main group metals. In particular, the same types of reactions that are used to prepare salts of the main group metals can be used to prepare simple ionic salts of these elements. 
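The carbonate precipitation above lends itself to a quick limiting-reagent calculation. The solution volumes and concentrations below are invented for illustration (they are not from the text); the molar mass comes from standard atomic masses:

```python
# Mass of NiCO3(s) from Ni^2+(aq) + CO3^2-(aq) -> NiCO3(s),
# using hypothetical solution volumes and concentrations.
M_NiCO3 = 58.69 + 12.01 + 3 * 16.00   # g/mol, Ni + C + 3 O

mol_Ni  = 0.0500 * 0.200              # 50.0 mL of 0.200 M Ni(NO3)2
mol_CO3 = 0.0500 * 0.300              # 50.0 mL of 0.300 M Na2CO3

mol_product = min(mol_Ni, mol_CO3)    # 1:1 stoichiometry; Ni^2+ limits here
mass_NiCO3 = mol_product * M_NiCO3
print(f"{mass_NiCO3:.2f} g NiCO3")    # about 1.19 g
```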
A variety of salts can be prepared from metals that are more active than hydrogen by reaction with the corresponding acids: Scandium metal reacts with hydrobromic acid to form a solution of scandium bromide: \[\ce{2Sc}(s)+\ce{6HBr}(aq)⟶\ce{2ScBr3}(aq)+\ce{3H2}(g) \nonumber \] The common compounds that we have just discussed can also be used to prepare salts. The reactions involved include the reactions of oxides, hydroxides, or carbonates with acids. For example: \[\ce{Ni(OH)2}(s)+\ce{2H3O+}(aq)+\ce{2ClO4-}(aq)⟶\ce{Ni^2+}(aq)+\ce{2ClO4-}(aq)+\ce{4H2O}(l) \nonumber \] Substitution reactions involving soluble salts may be used to prepare insoluble salts. For example: \[\ce{Ba^2+}(aq)+\ce{2Cl-}(aq)+\ce{2K+}(aq)+\ce{CrO4^2-}(aq)⟶\ce{BaCrO4}(s)+\ce{2K+}(aq)+\ce{2Cl-}(aq) \nonumber \] In our discussion of oxides in this section, we have seen that reactions of the covalent oxides of the transition elements with hydroxides form salts that contain oxyanions of the transition elements. A superconductor is a substance that conducts electricity with no resistance. This lack of resistance means that there is no energy loss during the transmission of electricity. This would lead to a significant reduction in the cost of electricity. Most currently used, commercial superconducting materials, such as NbTi and \(\ce{Nb3Sn}\), do not become superconducting until they are cooled below 23 K (−250 °C). This requires the use of liquid helium, which has a boiling temperature of 4 K and is expensive and difficult to handle. The cost of liquid helium has deterred the widespread application of superconductors. One of the most exciting scientific discoveries of the 1980s was the characterization of compounds that exhibit superconductivity at temperatures above 90 K. (Compared to liquid helium, 90 K is a high temperature.) Typical among the high-temperature superconducting materials are oxides containing yttrium (or one of several rare earth elements), barium, and copper in a 1:2:3 ratio. 
The formula of the ionic yttrium compound is \(\ce{YBa2Cu3O7}\). The new materials become superconducting at temperatures close to 90 K (Figure \(\Page {10}\)), temperatures that can be reached by cooling with liquid nitrogen (boiling temperature of 77 K). Not only are liquid nitrogen-cooled materials easier to handle, but the cooling costs are also about 1000 times lower than for liquid helium. Although the brittle, fragile nature of these materials presently hampers their commercial applications, they have tremendous potential, and researchers are working hard to improve processing methods that may help realize it. Superconducting transmission lines would carry current for hundreds of miles with no loss of power due to resistance in the wires. This could allow generating stations to be located in areas remote from population centers and near the natural resources necessary for power production. The first project demonstrating the viability of high-temperature superconductor power transmission was established in New York in 2008. Researchers are also working on using this technology to develop other applications, such as smaller and more powerful microchips. In addition, high-temperature superconductors can be used to generate magnetic fields for applications such as medical devices, magnetic levitation trains, and containment fields for nuclear fusion reactors (Figure \(\Page {11}\)). The transition metals are elements with partially filled d orbitals, located in the d-block of the periodic table. The reactivity of the transition elements varies widely from very active metals such as scandium and iron to almost inert elements, such as the platinum metals. The type of chemistry used in the isolation of the elements from their ores depends upon the concentration of the element in its ore and the difficulty of reducing ions of the elements to the metals. Metals that are more active are more difficult to reduce. Transition metals exhibit chemical behavior typical of metals. 
For example, they oxidize in air upon heating and react with elemental halogens to form halides. Those elements that lie above hydrogen in the activity series react with acids, producing salts and hydrogen gas. Oxides, hydroxides, and carbonates of transition metal compounds in low oxidation states are basic. Halides and other salts are generally stable in water, although oxygen must be excluded in some cases. Most transition metals form a variety of stable oxidation states, allowing them to demonstrate a wide range of chemical reactivity.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_ChemPRIME_(Moore_et_al.)/06%3A_Chemical_Bonding_-_Electron_Pairs_and_Octets/6.05%3A_Ions_and_Noble-Gas_Electron_Configurations
When considering the formation of an ionic compound such as lithium hydride, one aspect deserves explanation. If the transfer of one electron from Li to H is energetically favorable, why is the same not true for the transfer of a second electron to produce \(\ce{Li^2+H^2-}\)? Certainly the double charges on \(\ce{Li^2+}\) and \(\ce{H^2-}\) would attract more strongly than the single charges on \(\ce{Li+}\) and \(\ce{H-}\), and the doubly charged ions would be held more tightly in the crystal lattice. The answer to this question can be found by looking back at the diagram below.
Figure 6.5.1 Removal of a second electron from Li would require much more energy than the removal of the first because this second electron would be a 1s electron rather than a 2s electron. Not only is this second electron much closer to the nucleus, but it also is very poorly shielded from the nucleus, meaning its attraction to the nucleus is strong. It is not surprising, therefore, that the second ionization energy of Li (the energy required to remove this second electron) is 7297 kJ mol⁻¹, almost 14 times as large as the first ionization energy! Such a colossal energy requirement is enough to ensure that only the outermost electron (the 2s electron) of Li will be removed and that the inner \(1s^2\) kernel with its helium-type electron configuration will remain intact. A similar argument applies to the acceptance of a second electron by the H atom to form the \(\ce{H^2-}\) ion. If such an ion were to be formed, the extra electron would have to occupy the 2s orbital (outside the mix of red and grey dots of the \(\ce{H-}\) ion pictured above). Its electron cloud would extend far from the nucleus (even farther than for the 2s electron in Li, because the nuclear charge in \(\ce{H^2-}\) would only be +1, as opposed to +3 in Li), and it would be quite high in energy. So much energy would be needed to force a second electron to move around the H nucleus in this way that only one electron is transferred. The ion formed has the formula \(\ce{H-}\) and a helium-type \(1s^2\) electronic structure instead of an \(\ce{H^2-}\) ion with a \(1s^2 2s^1\) electronic structure. The simple example of lithium hydride is typical of all ionic compounds which can be formed by combination of two elements. Invariably we find that one of the two elements has a relatively low ionization energy and is capable of easily losing one or more electrons. The other element has a relatively high electron affinity and is able to accept one or more electrons into its structure. The ions formed by this transfer of electrons almost always have an electronic structure which is the same as that of a noble gas, and all electrons are paired in each ion. 
The resulting compound is an ionic solid in which the ions are arranged in a three-dimensional array or crystal lattice similar to, though not always identical with, that shown in the LiH crystal lattice below. In such a solid the nearest neighbors of each anion are always cations and vice versa, and the solid is held together by the forces of attraction between the ions of opposite sign. An everyday example of such an ionic compound is ordinary table salt, sodium chloride, whose formula is NaCl. As we shall see in the next section, sodium is an element with a low ionization energy, and chlorine is an element with a high electron affinity. On the microscopic level crystals of sodium chloride consist of an array of sodium ions, \(\ce{Na+}\), and chloride ions, \(\ce{Cl-}\), packed together in a lattice like that shown for lithium hydride. The chloride ions are chlorine atoms which have gained an electron and thus have the electronic structure \(1s^2 2s^2 2p^6 3s^2 3p^6\), the same as that of the noble gas argon. The sodium ions are sodium atoms which have lost an electron, giving them the structure \(1s^2 2s^2 2p^6\), the same as that of the noble gas neon. All electrons in both kinds of ions are paired.
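The "almost 14 times" figure in the ionization-energy argument above is easy to verify numerically. The first ionization energy of lithium (about 520 kJ mol⁻¹) is a standard literature value, not stated in the text, so it is an assumption here:

```python
# Ratio of lithium's second to first ionization energy.
IE1 = 520.0    # kJ/mol, literature value for Li (assumed; not in the text)
IE2 = 7297.0   # kJ/mol, second ionization energy quoted above

ratio = IE2 / IE1
print(f"IE2/IE1 = {ratio:.1f}")   # roughly 14, as stated
```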
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Book3A_Bioinorganic_Chemistry_(Bertini_et_al.)/06%3A_Electron_Transfer/6.08%3A_Marcus_Theory
In classical transition-state theory, the expression for the rate constant of a bimolecular reaction in solution is \[k = \kappa \nu_{n} \exp \left( \frac{-\Delta G^{\ast}}{RT} \right), \tag{6.16}\] where \(\nu_{n}\), the nuclear frequency factor, is approximately \(10^{11}\, M^{-1}\, s^{-1}\) for small molecules, and \(\Delta\)G* is the Gibbs-free-energy difference between the activated complex and the precursor complex. This theoretical framework provides the starting point for classical electron-transfer theory. Usually the transmission coefficient \(\kappa\) is initially assumed to be unity. Thus, the problem of calculating the rate constant involves the calculation of \(\Delta\)G*, which Marcus partitioned into several parameters: \[\Delta G^{\ast} = w^{r} + \left(\dfrac{\lambda}{4}\right) \left(1 + \dfrac{\Delta G^{o\; \prime}}{\lambda}\right)^{2}, \tag{6.17}\] \[\Delta G^{o\; \prime} = \Delta G^{o} + w^{p} - w^{r} \ldotp \tag{6.18}\] Here \(w^{r}\) is the electrostatic work involved in bringing the reactants to the mean reactant separation distance in the activated complex, and \(w^{p}\) is the analogous work term for dissociation of the products. These terms vanish in situations where one of the reactants (or products) is uncharged. \(\Delta\)G° is the Gibbs-free-energy change when the two reactants and products are an infinite distance apart, and \(\Delta\)G°' is the free energy of the reaction when the reactants are a distance r apart in the medium; \(\Delta\)G° is the standard free energy of the reaction, obtainable from electrochemical measurements (the quantity \(-\Delta G^{o}\) is called the driving force of the reaction). The reorganization energy \(\lambda\) is a parameter that contains both inner-sphere (\(\lambda_{i}\)) and outer-sphere (\(\lambda_{o}\)) components; \(\lambda = \lambda_{i} + \lambda_{o}\). The inner-sphere reorganization energy is the free-energy change associated with changes in the bond lengths and angles of the reactants. 
The \(\lambda_{i}\) term can be evaluated within the simple harmonic-oscillator approximation: \[\lambda_{i} = \left(\dfrac{1}{2}\right) \sum_{j} k_{j} (\Delta x_{j})^{2}, \tag{6.19}\] where the \(k_{j}\) values are normal-mode force constants, and the \(\Delta x_{j}\) values are differences in equilibrium bond lengths between the reduced and oxidized forms of a redox center. The outer-sphere reorganization energy reflects changes in the polarization of solvent molecules during electron transfer: \[\lambda_{o} = e^{2} \bigg[\left(\dfrac{1}{2r_{A}}\right) + \left(\dfrac{1}{2r_{B}}\right) - \left(\dfrac{1}{d}\right) \bigg] \bigg[\left(\dfrac{1}{D_{op}}\right) - \left(\dfrac{1}{D_{s}}\right) \bigg] ; \tag{6.20}\] \(d\) is the distance between centers in the activated complex, generally taken to be the sum of the reactant radii \(r_{A}\) and \(r_{B}\); \(D_{op}\) is the optical dielectric constant of the medium (or, equivalently, the square of the refractive index); and \(D_{s}\) is the static dielectric constant. This simple model for the effect of solvent reorganization assumes that the reactants are spherical, and that the solvent behaves as a dielectric continuum. (Sometimes the latter approximation is so rough that there is no correspondence between theory and experiment.) Variations in \(\lambda\) can have enormous effects on electron-transfer rates. Some of the possible variations are apparent from inspection of Equation (6.20). First, \(\lambda_{o}\) decreases with increasing reactant size. Second, the dependence of the reaction rate on separation distance attributable to \(\lambda_{o}\) occurs via the \(\frac{1}{d}\) term. Third, \(\lambda_{o}\) decreases markedly as the solvent polarity decreases. For nonpolar solvents, \(D_{op} \simeq D_{s} \simeq\) 1.5 to 4.0. It is significant to note that protein interiors are estimated to have \(D_{s} \simeq 4\), whereas \(D_{s} \simeq 78\) for water. 
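Equation (6.20) is straightforward to evaluate numerically. The sketch below works in atomic-scale units, using e²/4πε₀ ≈ 14.40 eV·Å; the radii, separation, and dielectric constants are illustrative assumptions (two 3.5-Å reactants in contact, with D_op ≈ 1.78 and D_s ≈ 78 for water), not values from the text:

```python
# Outer-sphere reorganization energy, Eq. (6.20), in the two-sphere
# dielectric-continuum model. Distances in Angstroms, result in eV.
E2_EV_ANG = 14.40   # e^2 / (4*pi*eps0) in eV*Angstrom

def lambda_outer(r_a, r_b, d, d_op, d_s):
    geometric = 1.0 / (2 * r_a) + 1.0 / (2 * r_b) - 1.0 / d
    dielectric = 1.0 / d_op - 1.0 / d_s   # optical minus static term
    return E2_EV_ANG * geometric * dielectric

# Illustrative: two 3.5-Å spheres in contact (d = r_a + r_b) in water.
lam_water = lambda_outer(3.5, 3.5, 7.0, d_op=1.78, d_s=78.0)
# Same geometry in a low-dielectric medium (D_s ~ 4, the estimate the
# text gives for protein interiors; D_op kept the same for comparison).
lam_low = lambda_outer(3.5, 3.5, 7.0, d_op=1.78, d_s=4.0)
print(f"water: {lam_water:.2f} eV; low-dielectric: {lam_low:.2f} eV")
```

Consistent with the text's point, the low-dielectric value comes out markedly smaller, which is why buried redox cofactors need not experience large outer-sphere reorganization energies.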
An important conclusion is that metalloproteins that contain buried redox cofactors need not experience large outer-sphere reorganization energies. The key result of Marcus theory is that the free energy of activation displays a quadratic dependence on \(\Delta\)G° and \(\lambda\) (ignoring work terms). Hence, the reaction rate may be written as \[k_{et} = \nu_{n} \kappa\, \exp \left[ \frac{-(\lambda + \Delta G^{o})^{2}}{4 \lambda RT} \right] \ldotp \tag{6.21}\] For intramolecular reactions, the nuclear frequency factor (\(\nu_{n}\)) is ~\(10^{13}\, s^{-1}\). One of the most striking predictions of Marcus theory follows from this equation: as the driving force of the reaction increases, the reaction rate increases, reaching a maximum at \(-\Delta G^{o} = \lambda\); when \(-\Delta G^{o}\) is greater than \(\lambda\), the rate decreases as the driving force increases (Figure 6.23). Two free-energy regions, depending on the relative magnitudes of \(-\Delta G^{o}\) and \(\lambda\), are thus distinguished. The normal free-energy region is defined by \(-\Delta G^{o} < \lambda\). In this region, \(\Delta\)G* decreases if \(-\Delta G^{o}\) increases or if \(\lambda\) decreases. If \(-\Delta G^{o} = \lambda\), there is no free-energy barrier to the reaction. In the inverted region, defined by \(-\Delta G^{o} > \lambda\), \(\Delta\)G* increases if \(\lambda\) decreases or if \(-\Delta G^{o}\) increases. Another widely used result of Marcus theory deals with the extraction of useful kinetic relationships for cross reactions from parameters for self-exchange reactions. Consider the cross reaction, Equation (6.22), for which the rate \[A_{1}(ox) + A_{2}(red) \rightarrow A_{1}(red) + A_{2}(ox) \tag{6.22}\] and equilibrium constants are \(k_{12}\) and \(K_{12}\), respectively. 
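The normal and inverted regions described above follow directly from Equation (6.21). A numerical sketch with assumed values (λ = 1.0 eV, νₙκ = 10¹³ s⁻¹, kT ≈ 0.0257 eV at 298 K; all illustrative):

```python
import math

def k_et(dG0, lam, nu_kappa=1e13, kT=0.0257):
    """Eq. (6.21); dG0 and lam in eV, rate in s^-1 (assumed parameters)."""
    dG_act = (lam + dG0) ** 2 / (4.0 * lam)   # activation free energy
    return nu_kappa * math.exp(-dG_act / kT)

lam = 1.0  # eV, assumed reorganization energy
rates = {drive: k_et(-drive, lam) for drive in (0.5, 1.0, 1.5)}
# Normal region: the rate rises toward -dG0 = lam, where the reaction
# becomes barrierless; in the inverted region (-dG0 > lam) the rate
# falls again even though the driving force keeps growing.
```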
Two self-exchange reactions are pertinent here: \[A_{1}(ox) + A_{1}(red) \rightarrow A_{1}(red) + A_{1}(ox) \tag{6.23a}\] \[A_{2}(ox) + A_{2}(red) \rightarrow A_{2}(red) + A_{2}(ox) \tag{6.23b}\] These reactions are characterized by rate constants \(k_{11}\) and \(k_{22}\), respectively. The reorganization energy (\(\lambda_{12}\)) for the cross reaction can be approximated as the mean of the reorganization energies for the relevant self-exchange reactions: \[\lambda_{12} = \frac{1}{2} (\lambda_{11} + \lambda_{22}) \tag{6.24}\] Substitution of Equation (6.24) into Equation (6.17) leads to the relation \[\Delta G_{12}^{\ast} = \frac{1}{2}(\Delta G_{11}^{\ast} + \Delta G_{22}^{\ast}) + \frac{1}{2}\Delta G_{12}^{o}(1 + \alpha), \tag{6.25a}\] where \[\alpha = \frac{\Delta G_{12}^{o}}{4(\Delta G_{11}^{\ast} + \Delta G_{22}^{\ast})}\ldotp \tag{6.25b}\] When the self-exchange rates \(k_{11}\) and \(k_{22}\) are corrected for work terms or when the latter nearly cancel, the cross-reaction rate \(k_{12}\) is given by the Marcus cross relation, \[k_{12} = (k_{11}k_{22}K_{12}f_{12})^{\frac{1}{2}}, \tag{6.26a}\] where \[\ln f_{12} = \frac{(\ln K_{12})^{2}}{4\; \ln \left(\dfrac{k_{11}k_{22}}{\nu_{n}^{2}}\right)}\ldotp \tag{6.26b}\] This relation has been used to predict and interpret both self-exchange and cross-reaction rates (or even \(K_{12}\)), depending on which of the quantities have been measured experimentally. Alternatively, one could study a series of closely related electron-transfer reactions (to maintain a nearly constant \(\lambda\)) as a function of \(\Delta G_{12}^{o}\); a plot of \(\ln k_{12}\) vs. \(\ln K_{12}\) is predicted to be linear, with slope 0.5 and intercept 0.5 \(\ln (k_{11}k_{22})\). The Marcus prediction (for the normal free-energy region) amounts to a linear free-energy relation (LFER) for outer-sphere electron transfer.
Given the measured self-exchange rate constant for stellacyanin (\(k_{11} = 1.2 \times 10^{5}\; M^{-1} s^{-1}\)), the Marcus cross relation (Equation 6.26a) can be used to calculate the reaction rates for the reduction of Cu\(^{II}\)-stellacyanin by Fe(EDTA)\(^{2-}\) and the oxidation of Cu\(^{I}\)-stellacyanin by Co(phen)\(_{3}^{3+}\). E°(Cu\(^{II/I}\)) for stellacyanin is 0.18 V vs. NHE, and the reduction potentials and self-exchange rate constants for the inorganic reagents are given in Table 6.3. For relatively small \(\Delta E^{o}\) values, \(f_{12}\) is ~1; here a convenient form of the Marcus cross relation is log \(k_{12}\) = 0.5[log \(k_{11}\) + log \(k_{22}\) + 16.9\(\Delta E_{12}^{o}\)]. Calculations with \(k_{11}\), \(k_{22}\), and \(\Delta E_{12}^{o}\) from experiments give \(k_{12}\) values that accord quite closely with the measured rate constants. \[Cu^{II}St + Fe(EDTA)^{2-} \rightarrow Cu^{I}St + Fe(EDTA)^{-}\] \[k_{12}(calc.) = 2.9 \times 10^{5} M^{-1}s^{-1} \qquad (\Delta E_{12}^{o} = 0.06 V)\] \[k_{12}(obs.) = 4.3 \times 10^{5} M^{-1}s^{-1} \qquad \qquad \qquad \qquad \qquad \] \[Cu^{I}St + Co(phen)_{3}^{3+} \rightarrow Cu^{II}St + Co(phen)_{3}^{2+}\] \[k_{12}(calc.) = 1.4 \times 10^{5} M^{-1}s^{-1} \qquad (\Delta E_{12}^{o} = 0.19 V)\] \[k_{12}(obs.) = 1.8 \times 10^{5} M^{-1}s^{-1} \qquad \qquad \qquad \qquad \qquad \] The success of the Marcus cross relation with stellacyanin indicates that the copper site in the protein is accessible to inorganic reagents. The rate constants for the reactions of other blue copper proteins with inorganic redox agents show deviations from cross-relation predictions (Table 6.4). These deviations suggest the following order of surface accessibilities of blue copper sites: stellacyanin > plastocyanin > azurin. Rate constants for protein-protein electron transfers also have been subjected to cross-relation analysis.
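The simplified form log \(k_{12}\) = 0.5[log \(k_{11}\) + log \(k_{22}\) + 16.9\(\Delta E_{12}^{o}\)] is easy to script. In the sketch below the stellacyanin \(k_{11}\) and the 0.19-V driving force come from the text, but the Co(phen)\(_{3}^{3+/2+}\) self-exchange constant is an assumed placeholder, since Table 6.3 is not reproduced here.

```python
import math

def k_cross(k11, k22, dE0):
    """Simplified Marcus cross relation (f12 ~ 1):
    log k12 = 0.5*(log k11 + log k22 + 16.9*dE0), with dE0 in volts."""
    return 10 ** (0.5 * (math.log10(k11) + math.log10(k22) + 16.9 * dE0))

# Stellacyanin self-exchange (from the text); the k22 below is an
# illustrative placeholder -- the real value lives in Table 6.3.
k11 = 1.2e5   # M^-1 s^-1
k22 = 1.0e2   # M^-1 s^-1, assumed for illustration
dE0 = 0.19    # V, Cu(I)St + Co(phen)3 3+

print(f"k12 (calc) = {k_cross(k11, k22, dE0):.2e} M^-1 s^-1")  # ~1.4e5 with these inputs

# Same prediction written as k12 = sqrt(k11*k22*K12), with log K12 = 16.9*dE0
# (16.9 V^-1 is 1/0.0592, i.e., nFE/RT in log10 form for n = 1):
K12 = 10 ** (16.9 * dE0)
assert math.isclose(k_cross(k11, k22, dE0), math.sqrt(k11 * k22 * K12))
```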
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/19%3A_Spontaneous_Change%3A_Entropy_and_Gibbs_Energy/19.5%3A_Standard_Gibbs_Energy_Change_G |
The Gibbs free energy (\(G\)), often called simply free energy, was named in honor of J. Willard Gibbs (1838–1903), an American physicist who first developed the concept. It is defined in terms of three other state functions with which you are already familiar: enthalpy, temperature, and entropy: \[ G = H − TS \label{Eq1}\] Because it is a combination of state functions, \(G\) is also a state function. The criterion for predicting spontaneity is based on \(\Delta G\), the change in free energy, at constant temperature and pressure. Although very few chemical reactions actually occur under conditions of constant temperature and pressure, most systems can be brought back to the initial temperature and pressure without significantly affecting the value of thermodynamic state functions such as \(G\). At constant temperature and pressure, \[ \Delta G=\Delta H-T\Delta S \label{18.5.2} \] where all thermodynamic quantities are those of the system. Under standard conditions, Equation \(\ref{18.5.2}\) becomes \[ \Delta G^o=\Delta H^o-T\Delta S^o \label{18.5.3} \] Since \(G\) is a state function, \(\Delta G^o\) can be obtained from tabulated standard free energies of formation via a relationship similar to the one used to calculate other state functions like \(\Delta H^o\) and \(\Delta S^o\): \[\Delta G^o = \sum n \Delta G_f^o\;(products) - \sum m \Delta G_f^o\; (reactants) \label{19.7}\] Consider the decomposition of yellow mercury(II) oxide. \[\ce{HgO}(s,\,\ce{yellow})⟶\ce{Hg}(l)+\dfrac{1}{2}\ce{O2}(g) \nonumber\] Calculate the standard free energy change at room temperature, \(ΔG^\circ_{298}\), using (a) standard free energies of formation and (b) standard enthalpies of formation and standard entropies. Do the results indicate the reaction to be spontaneous or nonspontaneous under standard conditions? The required data are available in Appendix G and are shown here.
(a) Using free energies of formation: \[ΔG^\circ_{298}=∑νΔG^\circ_{298}(\ce{products})−∑νΔG^\circ_{298}(\ce{reactants}) \nonumber\] \[=\left[1ΔG^\circ_{298}\ce{Hg}(l)+\dfrac{1}{2}ΔG^\circ_{298}\ce{O2}(g)\right]−1ΔG^\circ_{298}\ce{HgO}(s,\,\ce{yellow}) \nonumber\] \[\mathrm{=\left[1\:mol(0\: kJ/mol)+\dfrac{1}{2}mol(0\: kJ/mol)\right]−1\: mol(−58.43\: kJ/mol)=58.43\: kJ/mol} \nonumber\] (b) Using enthalpies and entropies of formation: \[ΔH^\circ_{298}=∑νΔH^\circ_{298}(\ce{products})−∑νΔH^\circ_{298}(\ce{reactants}) \nonumber\] \[=\left[1ΔH^\circ_{298}\ce{Hg}(l)+\dfrac{1}{2}ΔH^\circ_{298}\ce{O2}(g)\right]−1ΔH^\circ_{298}\ce{HgO}(s,\,\ce{yellow}) \nonumber\] \[\mathrm{=[1\: mol(0\: kJ/mol)+\dfrac{1}{2}mol(0\: kJ/mol)]−1\: mol(−90.46\: kJ/mol)=90.46\: kJ/mol} \nonumber\] \[ΔS^\circ_{298}=∑νΔS^\circ_{298}(\ce{products})−∑νΔS^\circ_{298}(\ce{reactants}) \nonumber\] \[=\left[1ΔS^\circ_{298}\ce{Hg}(l)+\dfrac{1}{2}ΔS^\circ_{298}\ce{O2}(g)\right]−1ΔS^\circ_{298}\ce{HgO}(s,\,\ce{yellow}) \nonumber\] \[\mathrm{=\left[1\: mol(75.9\: J/mol\: K)+\dfrac{1}{2}mol(205.2\: J/mol\: K)\right]−1\: mol(71.13\: J/mol\: K)=107.4\: J/mol\: K} \nonumber\] Now use these values in Equation \(\ref{18.5.3}\) to get \(ΔG^o\): \[ΔG°=ΔH°−TΔS°=\mathrm{90.46\: kJ−298.15\: K×107.4\: J/K⋅mol×\dfrac{1\: kJ}{1000\: J}} \nonumber\] \[ΔG°=\mathrm{(90.46−32.01)\:kJ/mol=58.45\: kJ/mol} \nonumber\] Both ways to calculate the standard free energy change at 25 °C give the same numerical value (to three significant figures), and both predict that the process is nonspontaneous (not spontaneous) at room temperature (since \(ΔG^o > 0\)). Calculate ΔG° using (a) free energies of formation and (b) enthalpies of formation and entropies (Appendix G). Do the results indicate the reaction to be spontaneous or nonspontaneous at 25 °C? \[\ce{C2H4}(g)⟶\ce{H2}(g)+\ce{C2H2}(g) \nonumber\] 141.5 kJ/mol; nonspontaneous
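The example's arithmetic is easy to check with a short script; all numerical values below are the ones quoted in the example.

```python
# Check: dG for HgO(s, yellow) -> Hg(l) + 1/2 O2(g) at 298.15 K,
# using the formation values quoted in the example.
T = 298.15  # K

# Route (a): standard free energies of formation (kJ/mol)
dGf = {"Hg": 0.0, "O2": 0.0, "HgO": -58.43}
dG_a = (1 * dGf["Hg"] + 0.5 * dGf["O2"]) - 1 * dGf["HgO"]

# Route (b): dG = dH - T*dS from enthalpies (kJ/mol) and entropies (J/mol K)
dHf = {"Hg": 0.0, "O2": 0.0, "HgO": -90.46}
S = {"Hg": 75.9, "O2": 205.2, "HgO": 71.13}
dH = (1 * dHf["Hg"] + 0.5 * dHf["O2"]) - 1 * dHf["HgO"]
dS = (1 * S["Hg"] + 0.5 * S["O2"]) - 1 * S["HgO"]
dG_b = dH - T * dS / 1000  # J -> kJ

print(f"route (a): {dG_a:.2f} kJ/mol")  # 58.43
print(f"route (b): {dG_b:.2f} kJ/mol")  # 58.45
print("nonspontaneous" if dG_b > 0 else "spontaneous")
```

Both routes agree to three significant figures, and the positive sign confirms the nonspontaneous verdict.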
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/05%3A_Introduction_To_Reactions_In_Aqueous_Solutions/5.1%3A_The_Nature_of_Aqueous_Solutions |
The solvent in aqueous solutions is water, which makes up about 70% of the mass of the human body and is essential for life. Many of the chemical reactions that keep us alive depend on the interaction of water molecules with dissolved compounds. Moreover, the presence of large amounts of water on Earth’s surface helps maintain its surface temperature in a range suitable for life. In this section, we describe some of the interactions of water with various substances and introduce you to the characteristics of aqueous solutions. As shown in Figure \(\Page {1}\), the individual water molecule consists of two hydrogen atoms bonded to an oxygen atom in a bent (V-shaped) structure. As is typical of group 16 elements, the oxygen atom in each O–H covalent bond attracts electrons more strongly than the hydrogen atom does. Consequently, the oxygen and hydrogen nuclei do not equally share electrons. Instead, hydrogen atoms are electron poor compared with a neutral hydrogen atom and have a partial positive charge, which is indicated by \(\delta^{+}\). The oxygen atom, in contrast, is more electron rich than a neutral oxygen atom, so it has a partial negative charge. This charge must be twice as large as the partial positive charge on each hydrogen for the molecule to have a net charge of zero. Thus its charge is indicated by \(2\delta^{-}\). This unequal distribution of charge creates a polar bond in which one portion of the molecule carries a partial negative charge, while the other portion carries a partial positive charge (Figure \(\Page {1}\)). Because of the arrangement of polar bonds in a water molecule, water is described as a polar substance. Because of the asymmetric charge distribution in the water molecule, adjacent water molecules are held together by attractive electrostatic (\(\delta^{-} \cdots \delta^{+}\)) interactions between the partially negatively charged oxygen atom of one molecule and the partially positively charged hydrogen atoms of adjacent molecules (Figure \(\Page {2}\)).
Energy is needed to overcome these electrostatic attractions. In fact, without them, water would evaporate at a much lower temperature, and neither Earth’s oceans nor we would exist! As you learned previously, ionic compounds such as sodium chloride (NaCl) are also held together by electrostatic interactions—in this case, between oppositely charged ions in the highly ordered solid, where each ion is surrounded by ions of the opposite charge in a fixed arrangement. In contrast to an ionic solid, the structure of liquid water is not completely ordered because the interactions between molecules in a liquid are constantly breaking and reforming. The unequal charge distribution in polar liquids such as water makes them good solvents for ionic compounds. When an ionic solid dissolves in water, the ions become hydrated. That is, the partially negatively charged oxygen atoms of the H\(_2\)O molecules surround the cations (Na\(^+\) in the case of NaCl), and the partially positively charged hydrogen atoms in H\(_2\)O surround the anions (Cl\(^-\); Figure \(\Page {3}\)). Individual cations and anions that are each surrounded by their own shell of water molecules are called hydrated ions. We can describe the dissolution of NaCl in water as \(NaCl(s) \xrightarrow{H_2O(l)} Na^+ (aq) + Cl^- (aq) \label{5.1.1}\) where (aq) indicates that Na\(^+\) and Cl\(^-\) are hydrated ions. Polar liquids are good solvents for ionic compounds. When electricity, in the form of an electric current, is applied to a solution, ions in solution migrate toward the oppositely charged rod or plate to complete an electrical circuit, whereas neutral molecules in solution do not (Figure \(\Page {4}\)). Thus solutions that contain ions conduct electricity, while solutions that contain only neutral molecules do not. Electrical current will flow through the circuit shown in Figure \(\Page {4}\) and the bulb will glow if ions are present. The lower the concentration of ions in solution, the weaker the current and the dimmer the glow.
Pure water, for example, contains only very low concentrations of ions, so it is a poor electrical conductor. Solutions that contain ions conduct electricity. An electrolyte is any compound that can form ions when dissolved in water (c.f. nonelectrolytes). Electrolytes may be strong or weak. When strong electrolytes dissolve, the constituent ions dissociate completely due to strong electrostatic interactions with the solvent, producing aqueous solutions that conduct electricity very well (Figure \(\Page {4}\)). Examples include ionic compounds such as barium chloride (\(BaCl_2\)) and sodium hydroxide (NaOH), which are both strong electrolytes and dissociate as follows: \( BaCl_2 (s) \xrightarrow{H_2O(l)} Ba^{2+} (aq) + 2Cl^- (aq) \label{5.1.2}\) \( NaOH(s) \xrightarrow{H_2O(l)} Na^+ (aq) + OH^- (aq) \label{5.1.3}\) The single arrows from reactant to products in Equation 5.1.2 and Equation 5.1.3 indicate that dissociation is complete. When weak electrolytes dissolve, they produce relatively few ions in solution. This does not mean that the compounds do not dissolve readily in water; many weak electrolytes contain polar bonds and are therefore very soluble in a polar solvent such as water. They do not completely dissociate to form ions, however, because of their weaker electrostatic interactions with the solvent. Because very few of the dissolved particles are ions, aqueous solutions of weak electrolytes do not conduct electricity as well as solutions of strong electrolytes. One such compound is acetic acid (CH\(_3\)CO\(_2\)H), which contains the –CO\(_2\)H unit. Although it is soluble in water, it is a weak acid and therefore also a weak electrolyte. Similarly, ammonia (NH\(_3\)) is a weak base and therefore a weak electrolyte. The behavior of weak acids and weak bases will be described in more detail elsewhere.
Nonelectrolytes dissolve in water as neutral molecules and thus have essentially no effect on conductivity. Examples of nonelectrolytes that are very soluble in water but that are essentially nonconductive are ethanol, ethylene glycol, glucose, and sucrose, all of which contain the –OH group that is characteristic of alcohols. The topic of why alcohols and carboxylic acids behave differently in aqueous solution is for a different Module; for now, however, you can simply look for the presence of the –OH and –CO\(_2\)H groups when trying to predict whether a substance is a strong electrolyte, a weak electrolyte, or a nonelectrolyte. The distinctions between soluble and insoluble substances and between strong, weak, and nonelectrolytes are illustrated in Figure \(\Page {5}\). Ionic substances and carboxylic acids are electrolytes; alcohols, aldehydes, and ketones are nonelectrolytes. Predict whether each compound is a strong electrolyte, a weak electrolyte, or a nonelectrolyte in water. Given: compound. Asked for: relative ability to form ions in water. Strategy: Classify the compound as ionic or covalent. If the compound is ionic and dissolves, it is a strong electrolyte that will dissociate in water completely to produce a solution that conducts electricity well. If the compound is covalent and organic, determine whether it contains the carboxylic acid group. If the compound contains this group, it is a weak electrolyte. If not, it is a nonelectrolyte. Predict whether each compound is a strong electrolyte, a weak electrolyte, or a nonelectrolyte in water. Most chemical reactions are carried out in solutions, which are homogeneous mixtures of two or more substances. In a solution, a solute (the substance present in the lesser amount) is dispersed in a solvent (the substance present in the greater amount). Aqueous solutions contain water as the solvent, whereas nonaqueous solutions have solvents other than water.
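The strategy in the example above is essentially a small decision tree. As a toy sketch (a rough heuristic that mirrors the text's rules, not a general chemistry classifier), it might look like:

```python
# Toy encoding of the example's decision rules.
def classify_electrolyte(is_ionic, has_carboxylic_acid=False):
    """Return 'strong', 'weak', or 'nonelectrolyte' per the text's strategy."""
    if is_ionic:
        return "strong"            # ionic compounds dissociate completely
    if has_carboxylic_acid:
        return "weak"              # -CO2H group: only partial ionization
    return "nonelectrolyte"        # neutral covalent molecules (e.g., alcohols)

print(classify_electrolyte(is_ionic=True))                          # e.g., NaOH
print(classify_electrolyte(False, has_carboxylic_acid=True))        # e.g., acetic acid
print(classify_electrolyte(False))                                  # e.g., ethanol
```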
Polar substances, such as water, contain asymmetric arrangements of polar bonds, in which electrons are shared unequally between bonded atoms. Polar substances and ionic compounds tend to be most soluble in water because they interact favorably with its structure. In aqueous solution, dissolved ions become hydrated; that is, a shell of water molecules surrounds them. Substances that dissolve in water can be categorized according to whether the resulting aqueous solutions conduct electricity. Strong electrolytes dissociate completely into ions to produce solutions that conduct electricity well. Weak electrolytes produce a relatively small number of ions, resulting in solutions that conduct electricity poorly. Nonelectrolytes dissolve as uncharged molecules and have no effect on the electrical conductivity of water.
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Reactions/Reactivity/Nucleophilic_Substitution_at_Tetrahedral_Carbon/NS1._Introduction_to_ANS |
Aliphatic nucleophilic substitution is a mouthful, but each piece tells you something important about this kind of reaction. In substitution reactions, one piece of a molecule is replaced by another. For example, ligands can be replaced in coordination complexes. Oxygen atoms can be replaced by nitrogen atoms or sulfur atoms, in a particular variation of substitution. These reactions all involve the addition of a nucleophile to an electrophilic atom or ion. They are all nucleophilic substitution reactions. Aliphatic systems involve chains of saturated hydrocarbons, in which carbons are attached to each other only through single bonds. Aliphatic nucleophilic substitution is the substitution of a nucleophile at a tetrahedral or sp\(^3\) carbon. Aliphatic nucleophilic substitutions do not play a glamorous, central role in the world of chemistry. They don't happen in every important process, the way carbonyl additions and carboxyloid substitutions appear to in biochemistry. Instead, they are ubiquitous little reactions that play important, small roles in all kinds of places. For example, polyethylene glycol (PEG) is a commonly used polymer in lots of biomedical applications. PEG frequently has hydroxyl groups at each end of the polymer. Capping the ends of the polymer through reaction with another group can lead to very different physical properties. For another example, many biochemical processes require prenylation of proteins. That would involve a nucleophilic substitution in which a sulfur in a cysteine residue adds to a tetrahedral carbon in a prenyl group, replacing a phosphate group. In order to be an electrophile, that tetrahedral carbon should have at least some partial positive charge on it. In the simplest cases, this electrophilic carbon is attached to a halogen: chlorine, bromine or iodine. These compounds are called alkyl halides (or alkyl chlorides, alkyl bromides and alkyl iodides). Draw structures of the following alkyl halides.
a) 2-bromopentane b) 2-methyl-2-chlorobutane c) benzyl iodide d) allyl chloride Lots of things can be nucleophiles in these reactions. Sometimes, the nucleophile is a neutral compound with a lone pair, such as ammonia or water (or, by extension, an amine or an alcohol). Sometimes, addition of a mild base is helpful in reactions of neutral nucleophiles. Show, with mechanistic arrows, how potassium carbonate (K\(_2\)CO\(_3\)) would play a role in the reaction. The third row analogs of these nucleophiles, in which the nucleophilic atom is a phosphorus or a sulfur, are also good nucleophiles in these reactions. Sometimes, the nucleophile is an anion. Cyanide anion is a good nucleophile, as are the structurally similar acetylides. Enols, enolates and enamines are also very good nucleophiles in this type of reaction. Semi-anionic nucleophiles such as Grignard (or organomagnesium) reagents and alkyl lithium reagents can sometimes act as nucleophiles in these reactions, but they are not very reliable. Complications often lead to other reactions instead. Gilman (or organocopper) reagents, in which a carbon atom is attached to a copper atom, can usually react with alkyl halides. However, they probably act via a different mechanism from the ones described in this chapter.
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Exercises%3A_Physical_and_Theoretical_Chemistry/Exercises%3A_Zielinski/8._The_Hydrogen_Atom_(Exercises) |
Calculate the probability density for a hydrogen 1s electron at a distance 3a from the proton along the z-axis (a is the Bohr radius). Calculate the radial probability density for a hydrogen 1s electron to be 3a from the proton. Calculate the probability that a hydrogen 1s electron is within a distance 3a from the nucleus. Calculate and compare the average distances of the electron from the proton for the hydrogen 1s orbital and the 2s orbital. What insight do you gain from this comparison? What is the percent error in the energy of the 1s orbital if the electron mass is used to calculate the energy rather than the reduced mass? Calculate the energies (in units of electron volts and wavenumbers) of the three 1s to 2p transitions for a hydrogen atom in a magnetic field of 10 Tesla. Calculate the frequency of radiation that would be absorbed due to a change in the electron spin state of a hydrogen atom in a magnetic field of 10 Tesla. Compare the energy of this transition to the energy of the 1s to 2p transitions in the previous problem. What insight do you gain from this comparison? Which is larger for the hydrogen atom, the Zeeman splitting due to spin motion (electron in the 1s orbital) or the Zeeman splitting due to orbital motion (electron in the 2p orbitals neglecting spin)? Why is one larger than the other? What is the difference between the average value of \(r\) and the most probable value of \(r\) where \(r\) is the distance of the electron from the nucleus? Show that orbitals directed along the x and y axis can be formed by taking linear combinations of the spherical harmonics \(Y^{+1}_1\) and \(Y^{-1}_1\). These orbitals are called \(p_x\) and \(p_y\). Why do you think chemists prefer to use px and py rather than the angular momentum eigenfunctions? What are the expectation values of \(\hat {L}_x, \hat {L}_y \), and \(\hat {L}_z\) for the three 2p wavefunctions? Why can \(\hat {H}, \hat {L} ^2\), and \(\hat {L} _z\) have the same eigenfunctions? 
Derive the selection rules for electronic transitions in the hydrogen atom. See Section 8.3 above and selection rules in Chapter 7. Use Mathcad to generate the radial probability densities for the 3s, 3p, and 3d atomic orbitals of hydrogen. What insight do you gain by comparing these plots? Examine the Periodic Table and explain the relationship between the number and types of atomic orbitals, including spin, and the columns and rows.
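For the exercise on the probability that a 1s electron lies within 3a of the nucleus, the radial distribution \(4 r^{2} e^{-2r}\) (with r in units of the Bohr radius) integrates to a standard closed form. The sketch below evaluates it and cross-checks the result with a simple midpoint-rule integration:

```python
import math

def prob_within(rho):
    """P(r <= rho * a0) for the hydrogen 1s orbital:
    1 - exp(-2 rho) (1 + 2 rho + 2 rho^2), the closed-form integral
    of the radial probability density 4 r^2 exp(-2r)."""
    return 1 - math.exp(-2 * rho) * (1 + 2 * rho + 2 * rho ** 2)

def prob_numeric(rho, n=100_000):
    """Midpoint-rule integration of 4 r^2 exp(-2r) from 0 to rho."""
    h = rho / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        total += 4 * r * r * math.exp(-2 * r) * h
    return total

print(f"P(r <= 3 a0) = {prob_within(3):.4f}")  # 0.9380
```

So about 94% of the 1s probability lies within three Bohr radii of the proton.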
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/12%3A_Chromatographic_and_Electrophoretic_Methods/12.05%3A_High-Performance_Liquid_Chromatography |
In high-performance liquid chromatography (HPLC) we inject the sample, which is in solution form, into a liquid mobile phase. The mobile phase carries the sample through a packed or capillary column that separates the sample’s components based on their ability to partition between the mobile phase and the stationary phase. Figure 12.5.1
shows an example of a typical HPLC instrument, which has several key components: reservoirs that store the mobile phase; a pump for pushing the mobile phase through the system; an injector for introducing the sample; a column for separating the sample into its component parts; and a detector for monitoring the eluent as it comes off the column. Let’s consider each of these components. A solute’s retention time in HPLC is determined by its interaction with the stationary phase and the mobile phase. There are several different types of solute/stationary phase interactions, including liquid–solid adsorption, liquid–liquid partitioning, ion-exchange, and size-exclusion. This chapter deals exclusively with HPLC separations based on liquid–liquid partitioning. Other forms of liquid chromatography receive consideration in . An HPLC typically includes two columns: an analytical column, which is responsible for the separation, and a guard column that is placed before the analytical column to protect it from contamination. The most common type of HPLC column is a stainless steel tube with an internal diameter between 2.1 mm and 4.6 mm and a length between 30 mm and 300 mm (Figure 12.5.2
). The column is packed with 3–10 µm porous silica particles with either an irregular or a spherical shape. Typical column efficiencies are 40000–60000 theoretical plates/m. Assuming a \(V_\text{max}/V_\text{min}\) of approximately 50, a 25-cm column with 50 000 plates/m has 12 500 theoretical plates and a peak capacity of 110. Capillary columns use less solvent and, because the sample is diluted to a lesser extent, produce larger signals at the detector. These columns are made from fused silica capillaries with internal diameters from 44–200 μm and lengths of 50–250 mm. Capillary columns packed with 3–5 μm particles have been prepared with column efficiencies of up to 250 000 theoretical plates [Novotony, M. , , , 51–57]. One limitation to a packed capillary column is the back pressure that develops when pumping the mobile phase through the small interstitial spaces between the particulate micron-sized packing material (Figure 12.5.3
). Because the tubing and fittings that carry the mobile phase have pressure limits, a higher back pressure requires a lower flow rate and a longer analysis time. Monolithic columns, in which the solid support is a single, porous rod, offer column efficiencies equivalent to a packed capillary column while allowing for faster flow rates. A monolithic column—which usually is similar in size to a conventional packed column, although smaller, capillary columns also are available—is prepared by forming the monolithic rod in a mold and covering it with PTFE tubing or a polymer resin. Monolithic rods made of a silica-gel polymer typically have macropores with diameters of approximately 2 μm and mesopores—pores within the macropores—with diameters of approximately 13 nm [Cabrera, K. Chromatography Online, April 1, 2008]. Two problems tend to shorten the lifetime of an analytical column. First, solutes that bind irreversibly to the stationary phase degrade the column’s performance by decreasing the amount of stationary phase available for effecting a separation. Second, particulate material injected with the sample may clog the analytical column. To minimize these problems we place a guard column before the analytical column. A guard column usually contains the same particulate packing material and stationary phase as the analytical column, but is significantly shorter and less expensive—a length of 7.5 mm and a cost one-tenth of that for the corresponding analytical column is typical. Because they are intended to be sacrificial, guard columns are replaced regularly. If you look closely at Figure 12.5.1, you will see the small guard column just above the analytical column. In liquid–liquid chromatography the stationary phase is a liquid film coated on a packing material, typically 3–10 μm porous silica particles. Because the stationary phase may be partially soluble in the mobile phase, it may elute, or bleed from the column over time.
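The plate count and peak capacity quoted earlier for the 25-cm column can be reproduced using the peak-capacity relation \(n_{c} = 1 + (\sqrt{N}/4)\ln(V_\text{max}/V_\text{min})\); the formula itself is assumed here from general chromatographic theory rather than restated in this section.

```python
import math

plates_per_m = 50_000
L = 0.25                   # column length in meters
N = plates_per_m * L       # theoretical plates for the column

V_ratio = 50               # assumed V_max / V_min, as in the text

# Peak capacity relation (assumed from chromatographic theory):
n_c = 1 + (math.sqrt(N) / 4) * math.log(V_ratio)

print(f"N = {N:.0f} theoretical plates")   # 12500
print(f"peak capacity ~ {n_c:.0f}")        # ~110
```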
To prevent the loss of stationary phase, which shortens the column’s lifetime, it is bound covalently to the silica particles. Bonded stationary phases are created by reacting the silica particles with an organochlorosilane of the general form Si(CH\(_3\))\(_2\)RCl, where R is an alkyl or substituted alkyl group. To prevent unwanted interactions between the solutes and any remaining –SiOH groups, Si(CH\(_3\))\(_3\)Cl is used to convert unreacted sites to \(–\text{SiOSi(CH}_3)_3\); such columns are designated as end-capped. The properties of a stationary phase depend on the organosilane’s alkyl group. If R is a polar functional group, then the stationary phase is polar. Examples of polar stationary phases include those where R contains a cyano (–C\(_2\)H\(_4\)CN), a diol (–C\(_3\)H\(_6\)OCH\(_2\)CHOHCH\(_2\)OH), or an amino (–C\(_3\)H\(_6\)NH\(_2\)) functional group. Because the stationary phase is polar, the mobile phase is a nonpolar or a moderately polar solvent. The combination of a polar stationary phase and a nonpolar mobile phase is called normal-phase chromatography. In reversed-phase chromatography, which is the more common form of HPLC, the stationary phase is nonpolar and the mobile phase is polar. The most common nonpolar stationary phases use an organochlorosilane where the R group is an \(n\)-octyl (C\(_8\)) or \(n\)-octyldecyl (C\(_{18}\)) hydrocarbon chain. Most reversed-phase separations are carried out using a buffered aqueous solution as a polar mobile phase, or using other polar solvents, such as methanol and acetonitrile. Because the silica substrate may undergo hydrolysis in basic solutions, the pH of the mobile phase must be less than 7.5. It seems odd that the more common form of liquid chromatography is identified as reverse-phase instead of normal phase. You might recall that one of the earliest examples of chromatography was Mikhail Tswett’s separation of plant pigments using a polar column of calcium carbonate and a nonpolar mobile phase of petroleum ether. The assignment of normal and reversed, therefore, is all about precedence. The elution order of solutes in HPLC is governed by polarity.
For a normal-phase separation, a solute of lower polarity spends proportionally less time in the polar stationary phase and elutes before a solute that is more polar. Given a particular stationary phase, retention times in normal-phase HPLC are controlled by adjusting the mobile phase’s properties. For example, if the resolution between two solutes is poor, switching to a less polar mobile phase keeps the solutes on the column for a longer time and provides more opportunity for their separation. In reversed-phase HPLC the order of elution is the opposite of that in a normal-phase separation, with more polar solutes eluting first. Increasing the polarity of the mobile phase leads to longer retention times. Shorter retention times require a mobile phase of lower polarity. There are several indices that help in selecting a mobile phase, one of which is the polarity index [Snyder, L. R.; Glajch, J. L.; Kirkland, J. J. , Wiley-Interscience: New York, 1988]. Table 12.5.1
provides values of the polarity index, \(P^{\prime}\), for several common mobile phases, where larger values of \(P^{\prime}\) correspond to more polar solvents. Mixing together two or more mobile phases—assuming they are miscible—creates a mobile phase of intermediate polarity. For example, a binary mobile phase made by combining solvent A and solvent B has a polarity index, \(P_{AB}^{\prime}\), of \[P_{A B}^{\prime}=\Phi_{A} P_{A}^{\prime}+\Phi_{B} P_{B}^{\prime} \label{12.1}\] where \(P_A^{\prime}\) and \(P_B^{\prime}\) are the polarity indices for solvents A and B, and \(\Phi_A\) and \(\Phi_B\) are the volume fractions for the two solvents. A reversed-phase HPLC separation is carried out using a mobile phase of 60% v/v water and 40% v/v methanol. What is the mobile phase’s polarity index? Using Equation \ref{12.1} and the values in Table 12.5.1
, the polarity index for a 60:40 water–methanol mixture is \[P_{A B}^{\prime}=\Phi_\text{water} P_\text{water}^{\prime}+\Phi_\text{methanol} P_\text{methanol}^{\prime} \nonumber\] \[P_{A B}^{\prime}=0.60 \times 10.2+0.40 \times 5.1=8.2 \nonumber\] Suppose you need a mobile phase with a polarity index of 7.5. Explain how you can prepare this mobile phase using methanol and water. If we let \(x\) be the fraction of water in the mobile phase, then \(1 - x\) is the fraction of methanol. Substituting these values into Equation \ref{12.1} and solving for \(x\) \[7.5=10.2 x+5.1(1-x) \nonumber\] \[7.5=10.2 x+5.1-5.1 x \nonumber\] \[2.4=5.1 x \nonumber\] gives \(x\) as 0.47. The mobile phase is 47% v/v water and 53% v/v methanol. As a general rule, a two unit change in the polarity index corresponds to an approximately 10-fold change in a solute’s retention factor. Here is a simple example. If a solute’s retention factor, \(k\), is 22 when using water as a mobile phase (\(P^{\prime}\) = 10.2), then switching to a mobile phase of 60:40 water–methanol (\(P^{\prime}\) = 8.2) decreases \(k\) to approximately 2.2. Note that the retention factor becomes smaller because we are switching from a more polar mobile phase to a less polar mobile phase in a reversed-phase separation. Changing the mobile phase’s polarity index changes a solute’s retention factor. As we learned in , however, a change in \(k\) is not an effective way to improve resolution when the initial value of \(k\) is greater than 10. To effect a better separation between two solutes we must improve the selectivity factor, \(\alpha\). There are two common methods for increasing \(\alpha\): adding a reagent to the mobile phase that reacts with the solutes in a secondary equilibrium reaction or switching to a different mobile phase. Taking advantage of a secondary equilibrium reaction is a useful strategy for improving a separation [(a) Foley, J. P. , , , 118–128; (b) Foley, J. P.; May, W. E. Anal. Chem. 1987, 59, 102–109; (c) Foley, J. P.; May, W. E. Anal. Chem.
1987, 59, 110–115]. A chromatogram we considered earlier in this chapter shows the reversed-phase separation of four weak acids—benzoic acid, terephthalic acid, p-aminobenzoic acid, and p-hydroxybenzoic acid—on a nonpolar C18 column using an aqueous buffer of acetic acid and sodium acetate as the mobile phase. The retention times for these weak acids are shorter when using a less acidic mobile phase because each solute is present in an anionic, weak base form that is less soluble in the nonpolar stationary phase. If the mobile phase’s pH is sufficiently acidic, the solutes are present as neutral weak acids that are more soluble in the stationary phase and take longer to elute. Because the weak acid solutes do not have identical pKa values, the pH of the mobile phase has a different effect on each solute’s retention time, allowing us to find the optimum pH for effecting a complete separation of the four solutes. Acid–base chemistry is not the only example of a secondary equilibrium reaction. Other examples include ion-pairing, complexation, and the interaction of solutes with micelles. We will consider the last of these when we discuss micellar electrokinetic capillary chromatography. We learned earlier how to adjust the mobile phase’s polarity by blending together two solvents. A polarity index, however, is just a guide, and binary mobile phase mixtures with identical polarity indices may not resolve equally a pair of solutes. Table 12.5.2
, for example, shows retention times for four weak acids in two mobile phases with nearly identical values for \(P^{\prime}\). Although the order of elution is the same for both mobile phases, each solute’s retention time is affected differently by the choice of organic solvent. If we switch from using acetonitrile to tetrahydrofuran, for example, we find that benzoic acid elutes more quickly and that p-hydroxybenzoic acid elutes more slowly. Although we can resolve fully these two solutes using a mobile phase that is 16% v/v acetonitrile, we cannot resolve them if the mobile phase is 10% tetrahydrofuran. 16% acetonitrile (CH3CN) 84% pH 4.11 aqueous buffer (\(P^{\prime}\) = 9.5) 10% tetrahydrofuran (THF) 90% pH 4.11 aqueous buffer (\(P^{\prime}\) = 9.6) Key: BA is benzoic acid; PH is p-hydroxybenzoic acid; PA is p-aminobenzoic acid; TP is terephthalic acid
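The polarity-index arithmetic discussed above—the mixing rule of Equation 12.1 and the two-units-per-decade rule of thumb for the retention factor—can be sketched in a short script. The function names are illustrative, not from any HPLC software; the numbers are the ones used in the worked examples.

```python
# Polarity index of a binary mobile phase (Equation 12.1):
#   P'_AB = phi_A * P'_A + phi_B * P'_B, with phi_B = 1 - phi_A
def polarity_index(phi_a: float, p_a: float, p_b: float) -> float:
    return phi_a * p_a + (1.0 - phi_a) * p_b

# Solve the mixing rule for the volume fraction of solvent A
# that gives a target polarity index.
def fraction_for_target(p_target: float, p_a: float, p_b: float) -> float:
    return (p_target - p_b) / (p_a - p_b)

# Rule of thumb: a two-unit change in P' corresponds to a ~10-fold
# change in the retention factor k.
def scaled_retention_factor(k_initial: float, p_initial: float,
                            p_final: float) -> float:
    return k_initial * 10 ** ((p_final - p_initial) / 2.0)

P_WATER, P_MEOH = 10.2, 5.1   # polarity indices from Table 12.5.1

# 60:40 water-methanol gives P' ~ 8.2 (8.16 before rounding)
p_mix = polarity_index(0.60, P_WATER, P_MEOH)

# What water fraction gives P' = 7.5?  -> ~0.47, i.e. 47% v/v water
x = fraction_for_target(7.5, P_WATER, P_MEOH)

# k = 22 in pure water drops to ~2.2 in the 60:40 mixture (P' = 8.2)
k_new = scaled_retention_factor(22, P_WATER, 8.2)
```

The same helper makes it easy to see why the rule of thumb is a coarse one: small rounding of \(P^{\prime}\) shifts the predicted k noticeably because it sits in an exponent.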
Harvey, D. T.; Byerly, S.; Bowman, A.; Tomlin, J. “Optimization of HPLC and GC Separations Using Response Surfaces,” 162–168. One strategy for finding the best mobile phase is to use the solvent triangle shown in Figure 12.5.4
, which allows us to explore a broad range of mobile phases with only seven experiments. We begin by adjusting the amount of acetonitrile in the mobile phase to produce the best possible separation within the desired analysis time. Next, we use Table 12.5.3
to estimate the composition of methanol/H2O and tetrahydrofuran/H2O mobile phases that will produce similar analysis times. Four additional mobile phases are prepared using the binary and ternary mobile phases shown in Figure 12.5.4
. When we examine the chromatograms from these seven mobile phases we may find that one or more provides an adequate separation, or we may identify a region within the solvent triangle where a separation is feasible. Figure 12.5.5
shows a resolution map for the reversed-phase separation of benzoic acid, terephthalic acid, p-aminobenzoic acid, and p-hydroxybenzoic acid on a nonpolar C18 column in which the maximum desired analysis time is set to 6 min [Harvey, D. T.; Byerly, S.; Bowman, A.; Tomlin, J., 162–168]. The shaded areas show mobile phase compositions that do not provide baseline resolution. The unshaded area represents mobile phase compositions where a separation is possible. The choice to start with acetonitrile is arbitrary—we can just as easily choose to begin with methanol or with tetrahydrofuran. A separation using a mobile phase that has a fixed composition is an isocratic elution. One difficulty with an isocratic elution is that an appropriate mobile phase strength for resolving early-eluting solutes may lead to unacceptably long retention times for late-eluting solutes. Optimizing the mobile phase for late-eluting solutes, on the other hand, may provide an inadequate separation of early-eluting solutes. Changing the mobile phase’s composition as the separation progresses is one solution to this problem. For a reversed-phase separation we use an initial mobile phase that is more polar. As the separation progresses, we adjust the composition of mobile phase so that it becomes less polar (see Figure 12.5.6
). Such separations are called gradient elutions. In a gas chromatograph the pressure from a compressed gas cylinder is sufficient to push the mobile phase through the column. Pushing a liquid mobile phase through a column, however, takes a great deal more effort, generating pressures in excess of several hundred atmospheres. In this section we consider the basic plumbing needed to move the mobile phase through the column and to inject the sample into the mobile phase. A typical HPLC includes between 1–4 reservoirs for storing mobile phase solvents. A typical instrument, for example, has two mobile phase reservoirs that are used for an isocratic elution or a gradient elution by drawing solvents from one or both reservoirs. Before using a mobile phase solvent we must remove dissolved gases, such as N2 and O2, and small particulate matter, such as dust. Because there is a large drop in pressure across the column—the pressure at the column’s entrance is as much as several hundred atmospheres, but it is atmospheric pressure at the column’s exit—gases dissolved in the mobile phase are released as gas bubbles that may interfere with the detector’s response. Degassing is accomplished in several ways, but the most common are the use of a vacuum pump or sparging with an inert gas, such as He, which has a low solubility in the mobile phase. Particulate materials, which may clog the HPLC tubing or column, are removed by filtering the solvents. Bubbling an inert gas through the mobile phase releases volatile dissolved gases. This process is called sparging. The mobile phase solvents are pulled from their reservoirs by the action of one or more pumps. Figure 12.5.7
shows a close-up view of the pumps for a typical instrument. The working pump and the equilibrating pump each have a piston whose back and forth movement maintains a constant flow rate of up to several mL/min and provides the high output pressure needed to push the mobile phase through the chromatographic column. In this particular instrument, each pump sends its mobile phase to a mixing chamber where they combine to form the final mobile phase. The relative speed of the two pumps determines the mobile phase’s final composition. The back and forth movement of a reciprocating pump creates a pulsed flow that contributes noise to the chromatogram. To minimize these pulses, each pump in Figure 12.5.7
has two cylinders. During the working cylinder’s forward stroke it fills the equilibrating cylinder and establishes flow through the column. When the working cylinder is on its reverse stroke, the flow is maintained by the piston in the equilibrating cylinder. The result is a pulse-free flow. There are other possible ways to control the mobile phase’s composition and flow rate. For example, instead of the two pumps in Figure 12.5.7
, we can place a solvent proportioning valve before a single pump. The solvent proportioning valve connects two or more solvent reservoirs to the pump and determines how much of each solvent is pulled during each of the pump’s cycles. Another approach for eliminating a pulsed flow is to include a pulse damper between the pump and the column. A pulse damper is a chamber filled with an easily compressed fluid and a flexible diaphragm. During the piston’s forward stroke the fluid in the pulse damper is compressed. When the piston withdraws to refill the pump, pressure from the expanding fluid in the pulse damper maintains the flow rate. The operating pressure within an HPLC is sufficiently high that we cannot inject the sample into the mobile phase by inserting a syringe through a septum, as is possible in gas chromatography. Instead, we inject the sample using a loop injector, a diagram of which is shown in Figure 12.5.8
. In the load position a sample loop—which is available in a variety of sizes ranging from 0.5 μL to 5 mL—is isolated from the mobile phase and open to the atmosphere. The sample loop is filled using a syringe with a capacity several times that of the sample loop, with excess sample exiting through the waste line. After loading the sample, the injector is turned to the inject position, which redirects the mobile phase through the sample loop and onto the column. Many instruments use an autosampler to inject samples. Instead of using a syringe to push the sample into the sample loop, the syringe draws sample into the sample loop. Many different types of detectors have been used to monitor HPLC separations, most of which use spectroscopic or electrochemical techniques. The most popular HPLC detectors take advantage of an analyte’s UV/Vis absorption spectrum. These detectors range from simple designs, in which the analytical wavelength is selected using appropriate filters, to a modified spectrophotometer in which the sample compartment includes a flow cell. Figure 12.5.9
shows the design of a typical flow cell when using a diode array spectrometer as the detector. The flow cell has a volume of 1–10 μL and a path length of 0.2–1 cm. When using a UV/Vis detector the resulting chromatogram is a plot of absorbance as a function of elution time (see Figure 12.5.10
). If the detector is a diode array spectrometer, then we also can display the result as a three-dimensional chromatogram that shows absorbance as a function of wavelength and elution time. One limitation to using absorbance is that the mobile phase cannot absorb at the wavelengths we wish to monitor. Tables of the minimum useful UV wavelength for several common HPLC solvents are available. Absorbance detectors provide detection limits of as little as 100 pg–1 ng of injected analyte. If an analyte is fluorescent, we can place the flow cell in a spectrofluorimeter. As shown in Figure 12.5.11
, a fluorescence detector provides additional selectivity because only a few of a sample’s components are fluorescent. Detection limits are as little as 1–10 pg of injected analyte. Another common group of HPLC detectors are those based on electrochemical measurements such as amperometry, voltammetry, coulometry, and conductivity. Figure 12.5.12
, for example, shows an amperometric flow cell. Effluent from the column passes over the working electrode—held at a constant potential relative to a downstream reference electrode—that completely oxidizes or reduces the analytes. The current flowing between the working electrode and the auxiliary electrode serves as the analytical signal. Detection limits for amperometric electrochemical detection are from 10 pg–1 ng of injected analyte. Several other detectors have been used in HPLC. Measuring a change in the mobile phase’s refractive index is analogous to monitoring the mobile phase’s thermal conductivity in gas chromatography. A refractive index detector is nearly universal, responding to almost all compounds, but has a relatively poor detection limit of 0.1–1 μg of injected analyte. An additional limitation of a refractive index detector is that it cannot be used for a gradient elution unless the mobile phase components have identical refractive indexes. Another useful detector is a mass spectrometer. Figure 12.5.13
shows a block diagram of a typical HPLC–MS instrument. The effluent from the column enters the mass spectrometer’s ion source using an interface that removes most of the mobile phase, an essential need because of the incompatibility between the liquid mobile phase and the mass spectrometer’s high vacuum environment. In the ionization chamber the remaining molecules—a mixture of the mobile phase components and solutes—undergo ionization and fragmentation. The mass spectrometer’s mass analyzer separates the ions by their mass-to-charge ratio (m/z). A detector counts the ions and displays the mass spectrum. There are several options for monitoring the chromatogram when using a mass spectrometer as the detector. The most common method is to continuously scan the entire mass spectrum and report the total signal for all ions reaching the detector during each scan. This total ion scan provides universal detection for all analytes. As seen in Figure 12.5.14
, we can achieve some degree of selectivity by monitoring only specific mass-to-charge ratios, a process called selective-ion monitoring. The advantages of using a mass spectrometer in HPLC are the same as for gas chromatography. Detection limits are very good, typically 0.1–1 ng of injected analyte, with values as low as 1–10 pg for some samples. In addition, a mass spectrometer provides qualitative, structural information that can help to identify the analytes. The interface between the HPLC and the mass spectrometer is technically more difficult than that in a GC–MS because of the incompatibility of a liquid mobile phase with the mass spectrometer’s high vacuum requirement. For more details on mass spectrometry see Introduction to Mass Spectrometry by Michael Samide and Olujide Akinbo, a resource that is part of the Analytical Sciences Digital Library. High-performance liquid chromatography is used routinely for both qualitative and quantitative analyses of environmental, pharmaceutical, industrial, forensic, clinical, and consumer product samples. Samples in liquid form are injected into the HPLC after a suitable clean-up to remove any particulate materials, or after a suitable extraction to remove matrix interferents. In determining polyaromatic hydrocarbons (PAH) in wastewater, for example, an extraction with CH2Cl2 serves the dual purpose of concentrating the analytes and isolating them from matrix interferents. Solid samples are first dissolved in a suitable solvent or the analytes of interest brought into solution by extraction. For example, an HPLC analysis for the active ingredients and the degradation products in a pharmaceutical tablet often begins by extracting the powdered tablet with a portion of mobile phase. Gas samples are collected by bubbling them through a trap that contains a suitable solvent. Organic isocyanates in industrial atmospheres are collected by bubbling the air through a solution of 1-(2-methoxyphenyl)piperazine in toluene.
The reaction between the isocyanates and 1-(2-methoxyphenyl)piperazine both stabilizes them against degradation before the HPLC analysis and converts them to a chemical form that can be monitored by UV absorption. A quantitative HPLC analysis is often easier than a quantitative GC analysis because a fixed volume sample loop provides a more precise and accurate injection. As a result, most quantitative HPLC methods do not need an internal standard and, instead, use external standards and a normal calibration curve. An internal standard is necessary when using HPLC–MS because the interface between the HPLC and the mass spectrometer does not allow for a reproducible transfer of the column’s eluent into the MS’s ionization chamber. The concentration of polynuclear aromatic hydrocarbons (PAH) in soil is determined by first extracting the PAHs with methylene chloride. The extract is diluted, if necessary, and the PAHs separated by HPLC using a UV/Vis or fluorescence detector. Calibration is achieved using one or more external standards. In a typical analysis a 2.013-g sample of dried soil is extracted with 20.00 mL of methylene chloride. After filtering to remove the soil, a 1.00-mL portion of the extract is removed and diluted to 10.00 mL with acetonitrile. Injecting 5 μL of the diluted extract into an HPLC gives a signal of 0.217 (arbitrary units) for the PAH fluoranthene. When 5 μL of a 20.0-ppm fluoranthene standard is analyzed using the same conditions, a signal of 0.258 is measured. Report the parts per million of fluoranthene in the soil. 
For a single-point external standard, the relationship between the signal, S, and the concentration, C, of fluoranthene is \[S = kC \nonumber\] Substituting in values for the standard’s signal and concentration gives the value of k as \[k=\frac{S}{C}=\frac{0.258}{20.0 \text{ ppm}}=0.0129 \text{ ppm}^{-1} \nonumber\] Using this value for k and the sample’s HPLC signal gives a fluoranthene concentration of \[C=\frac{S}{k}=\frac{0.217}{0.0129 \text{ ppm}^{-1}}=16.8 \text{ ppm} \nonumber\] for the extracted and diluted soil sample. The concentration of fluoranthene in the soil is \[\frac{16.8 \ \mu \text{g} / \mathrm{mL} \times \frac{10.00 \text{ mL}}{1.00 \text{ mL}} \times 20.00 \text{ mL}}{2.013 \text{ g} \text { sample }}=1670 \text{ ppm} \text { fluoranthene } \nonumber\] The concentration of caffeine in beverages is determined by a reversed-phase HPLC separation using a mobile phase of 20% acetonitrile and 80% water, and using a nonpolar C18 column. Results for a series of 10-μL injections of caffeine standards are in the following table. What is the concentration of caffeine in a sample if a 10-μL injection gives a peak area of 424195? The data in this problem comes from Kusch, P.; Knupp, G. “Simultaneous Determination of Caffeine in Cola Drinks and Other Beverages by Reversed-Phase HPTLC and Reversed-Phase HPLC,” 201–205. The figure below shows the calibration curve and calibration equation for the set of external standards. Substituting the sample’s peak area into the calibration equation gives the concentration of caffeine in the sample as 94.4 mg/L. The best way to appreciate the theoretical and the practical details discussed in this section is to carefully examine a typical analytical method. Although each method is unique, the following description of the determination of fluoxetine in serum provides an instructive example of a typical procedure. The description here is based on Smyth, W. F., Wiley Teubner: Chichester, England, 1996, pp. 187–189.
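The single-point external-standard arithmetic in the fluoranthene example above can be sketched as follows. The function names are my own; the numbers and dilution factors come directly from the worked solution.

```python
# Single-point external standard: S = k*C, so k = S_std / C_std
def sensitivity(signal_std: float, conc_std_ppm: float) -> float:
    return signal_std / conc_std_ppm

def sample_concentration(signal_sample: float, k: float) -> float:
    return signal_sample / k

# Values from the worked example
k = sensitivity(0.258, 20.0)                 # ~0.0129 ppm^-1
c_extract = sample_concentration(0.217, k)   # ~16.8 ppm in the diluted extract

# Scale back through the 1.00 -> 10.00 mL dilution and the 20.00 mL
# extraction of a 2.013 g soil sample.  For the liquid, ppm is ug/mL;
# for the soil, ppm is ug/g.
c_soil = c_extract * (10.00 / 1.00) * 20.00 / 2.013   # ~1670 ppm fluoranthene
```

Carrying the unrounded extract concentration through gives 1671 ppm rather than 1670 ppm; the text rounds the intermediate value to 16.8 ppm before scaling.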
Fluoxetine is another name for the antidepressant drug Prozac. The determination of fluoxetine in serum is an important part of monitoring its therapeutic use. The analysis is complicated by the complex matrix of serum samples. A solid-phase extraction followed by an HPLC analysis using a fluorescence detector provides the necessary selectivity and detection limits. Add a known amount of the antidepressant protriptyline, which serves as an internal standard, to each serum sample and to each external standard. To remove matrix interferents, pass a 0.5-mL aliquot of each serum sample or standard through a C18 solid-phase extraction cartridge. After washing the cartridge to remove the interferents, elute the remaining constituents, including the analyte and the internal standard, by washing the cartridge with 0.25 mL of a 25:75 v/v mixture of 0.1 M HClO4 and acetonitrile. Inject a 20-μL aliquot onto a 15-cm \(\times\) 4.6-mm column packed with a 5 μm C18-bonded stationary phase. The isocratic mobile phase is 37.5:62.5 v/v acetonitrile and water (that contains 1.5 g of tetramethylammonium perchlorate and 0.1 mL of 70% v/v HClO4). Monitor the chromatogram using a fluorescence detector set to an excitation wavelength of 235 nm and an emission wavelength of 310 nm. 1. The solid-phase extraction is important because it removes constituents in the serum that might interfere with the analysis. What types of interferences are possible? Blood serum, which is a complex mixture of compounds, is approximately 92% water, 6–8% soluble proteins, and less than 1% each of various salts, lipids, and glucose. A direct injection of serum is not advisable for three reasons. First, any particulate materials in the serum will clog the column and restrict the flow of mobile phase. Second, some of the compounds in the serum may adsorb too strongly to the stationary phase, degrading the column’s performance.
Finally, although an HPLC can separate and analyze complex mixtures, an analysis is difficult if the number of constituents exceeds the column’s peak capacity. 2. One advantage of an HPLC analysis is that a loop injector often eliminates the need for an internal standard. Why is an internal standard used in this analysis? What assumption(s) must we make when using the internal standard? An internal standard is necessary because of uncertainties introduced during the solid-phase extraction. For example, the volume of serum transferred to the solid-phase extraction cartridge, 0.5 mL, and the volume of solvent used to remove the analyte and internal standard, 0.25 mL, are very small. The precision and accuracy with which we can measure these volumes is not as good as when we use larger volumes. For example, if we extract the analyte into a volume of 0.24 mL instead of a volume of 0.25 mL, then the analyte’s concentration increases by slightly more than 4%. In addition, the concentration of eluted analytes may vary from trial-to-trial due to variations in the amount of solution held up by the cartridge. Using an internal standard compensates for these variations. To be useful we must assume that the analyte and the internal standard are retained completely during the initial loading, that they are not lost when the cartridge is washed, and that they are extracted completely during the final elution. 3. Why does the procedure monitor fluorescence instead of monitoring UV absorption? Fluorescence is a more selective technique for detecting analytes. Many other commonly prescribed antidepressants (and their metabolites) elute with retention times similar to that of fluoxetine. These compounds, however, either do not fluoresce or are only weakly fluorescent. 4. If the peaks for fluoxetine and protriptyline are resolved insufficiently, how might you alter the mobile phase to improve their separation?
Decreasing the amount of acetonitrile and increasing the amount of water in the mobile phase will increase retention times, providing more time to effect a separation. With a few exceptions, the scale of operation, accuracy, precision, sensitivity, selectivity, analysis time, and cost for an HPLC method are similar to GC methods. Injection volumes for an HPLC method usually are larger than for a GC method because HPLC columns have a greater capacity. Because it uses a loop injection, the precision of an HPLC method often is better than a GC method. HPLC is not limited to volatile analytes, which means we can analyze a broader range of compounds. Capillary GC columns, on the other hand, have more theoretical plates, and can separate more complex mixtures.
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Kinetics/10%3A_Using_logarithms_-_Log_vs._Ln |
A common question exists regarding the use of logarithm base 10 (\(\log\) or \(\log_{10}\)) vs. logarithm base \(e\) (\(\ln\)). The logarithm base \(e\) is called the natural logarithm since it arises from the integral: \[ \ln (a) = \int_1^a \dfrac{dx}{x}\nonumber \] Of course, one can convert from \(\ln\) to \(\log\) \[ \ln (10^{\log a}) = \log (a) \ln(10) \approx 2.3025 \log (a)\nonumber \] but \( 10^{\log a}=a\) so \[ \ln a \approx 2.3025 \log a\nonumber \] The analysis of the reaction order and rate constant using the method of initial rates is performed using the \(\log_{10}\) function. This could have been done using the \(\ln\) function just as well. The initial rate is given by \[r_o=k'[A]_0^a\nonumber \] The analysis can proceed by taking the logarithm base 10 of each side of the equation \[ \log r_o = \log k' + a\log [A]_0\nonumber \] or the \(\ln\) of each side of the equation \[ \ln r_o = \ln k' + a\ln [A]_0\nonumber \] as long as one is consistent. One can think of the \(\log\) or the \(\ln\) as a way to linearize an equation that has some kind of power law dependence. The only difference between these two functions is a scaling factor (\(\ln 10 \approx 2.3025\)) in the slope. Obviously, if you multiply both sides of the equation by the same number the relative values of the constants remains the same on both sides.
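Both points above—the ln-to-log conversion factor and the base-independence of the fitted order—can be checked numerically. This is a sketch with assumed rate-law values (k′ = 0.5, order a = 2), not data from the text:

```python
import math

# ln(a) = ln(10) * log10(a) ~ 2.3025 * log10(a)
a = 37.0
assert math.isclose(math.log(a), math.log(10) * math.log10(a))

# Linearize an assumed power law r0 = k' * [A]0^a.  Plotting log r0 vs
# log [A]0 (any base) gives a line whose slope is the order a; only the
# intercept (ln k' vs log k') depends on the chosen base.
k_prime, order = 0.5, 2.0
concs = [0.1, 0.2, 0.4, 0.8]
rates = [k_prime * c**order for c in concs]

# Slope from the endpoints, computed in base 10 and in base e:
slope_log10 = (math.log10(rates[-1]) - math.log10(rates[0])) / (
    math.log10(concs[-1]) - math.log10(concs[0]))
slope_ln = (math.log(rates[-1]) - math.log(rates[0])) / (
    math.log(concs[-1]) - math.log(concs[0]))
```

Both slopes recover the order a = 2, which is the "as long as one is consistent" point in the text.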
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/OCLUE%3A_Organic_Chemistry_Life_the_Universe_and_Everything_(Copper_and_Klymkowsky)/07%3A_Nucleophilic_attack_at_the_carbonyl_carbon- |
There is a set of organic compounds that incorporates the carbonyl group (\(\mathrm{C=O}\)), which includes aldehydes, ketones, carboxylic acids, and carboxylic acid derivatives such as esters, amides, acid anhydrides, and acid chlorides (as shown in Table \(7.0.1\)). Table \(7.0.1\): Functional groups that contain a carbonyl group. Aldehyde (named by removing -ane and adding -al): ethanal (IUPAC), acetaldehyde (common). Ketone: 2-propanone (IUPAC), acetone (common). Carboxylic acid: ethanoic acid (IUPAC), acetic acid (common). Acid anhydride: ethanoic anhydride (IUPAC), acetic anhydride (common). As we discussed in Chapter \(6\), the carbonyl carbon is highly polarized; the large \(\delta^{+}\) on the carbon makes it susceptible to nucleophilic attack. There are a large number of reactions that begin by the attack of a nucleophile on a carbonyl group. To make understanding these reactions more manageable (intelligible), we will consider these reactions in a sequence of increasing complexity, beginning with reactions of aldehydes and ketones. We will then cycle back around and visit similar reactions involving acids and their derivatives.
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Aldehydes_and_Ketones/Synthesis_of_Aldehydes_and_Ketones/Grignard_and_Organolithium_Reagents |
The alkali metals (Li, Na, K, etc.) and the alkaline earth metals (Mg and Ca, together with Zn) are good reducing agents, the former being stronger than the latter. These same metals reduce the carbon–halogen bonds of alkyl halides. The halogen is converted to a halide anion, and the carbon bonds to the metal, which gives it characteristics similar to a carbanion (R:-). Many organometallic reagents are commercially available; however, it is often necessary to make them. The following equations illustrate these reactions for the commonly used metals lithium and magnesium (R may be hydrogen or alkyl groups in any combination). \[ \ce{R3C-X} + \ce{2Li} \rightarrow \ce{R3C-Li} + \ce{LiX}\] \[\ce{R3C-X} + \ce{Mg} \rightarrow \ce{R3C-MgX}\] Halide reactivity in these reactions increases in the order Cl < Br < I; fluorides are usually not used. The alkyl magnesium halides described in the second reaction are called Grignard reagents after the French chemist, Victor Grignard, who discovered them and received the Nobel Prize in 1912 for this work. The other metals mentioned above react in a similar manner, but Grignard and alkyl lithium reagents are the most widely used. Although the formulas drawn here for the alkyl lithium and Grignard reagents reflect the stoichiometry of the reactions and are widely used in the chemical literature, they do not accurately depict the structural nature of these remarkable substances. Mixtures of polymeric and other associated and complexed species are in equilibrium under the conditions normally used for their preparation. A suitable solvent must be used. For alkyl lithium formation, pentane or hexane are usually used. Diethyl ether can also be used, but the subsequent alkyl lithium reagent must be used immediately after preparation due to an interaction with the solvent. Ethyl ether or THF are essential for Grignard reagent formation. Lone pair electrons from two ether molecules form a complex with the magnesium in the Grignard reagent (as pictured below).
This complex helps stabilize the organometallic reagent and increases its ability to react. These reactions are obviously substitution reactions, but they cannot be classified as nucleophilic substitutions, as were the earlier reactions of alkyl halides. Because the functional carbon atom has been reduced, the polarity of the resulting functional group is inverted (an originally electrophilic carbon becomes nucleophilic). This change, shown below, makes alkyl lithium and Grignard reagents excellent nucleophiles and useful reactants in synthesis. Because organometallic reagents react as their corresponding carbanion, they are excellent nucleophiles. The basic reaction involves the nucleophilic attack of the carbanionic carbon in the organometallic reagent on the electrophilic carbon in the carbonyl to form alcohols. Both Grignard and organolithium reagents will perform these reactions. Addition to formaldehyde gives 1° alcohols. Addition to aldehydes gives 2° alcohols. Addition to ketones gives 3° alcohols. Addition to carbon dioxide (CO2) forms a carboxylic acid. The mechanism for a Grignard reagent is shown; the mechanism for an organolithium reagent is the same. 1) Nucleophilic attack 2) Protonation These reagents are very strong bases (pKa's of saturated hydrocarbons range from 42 to 50). Although not usually done with Grignard reagents, organolithium reagents can be used as strong bases. Both Grignard reagents and organolithium reagents react with water to form the corresponding hydrocarbon. This is why so much care is needed to ensure dry glassware and solvents when working with organometallic reagents. In fact, the reactivity of Grignard reagents and organolithium reagents can be exploited to create a new method for the conversion of alkyl halides to the corresponding hydrocarbon (illustrated below). The halide is converted to an organometallic reagent and then subsequently reacted with water to form an alkane.
As discussed above, Grignard and organolithium reagents are powerful bases. Because of this they cannot be used as nucleophiles on compounds which contain acidic hydrogens. If they are used, they will act as a base and deprotonate the acidic hydrogen rather than act as a nucleophile and attack the carbonyl. A partial list of functional groups which cannot be used includes: alcohols, amides, 1° amines, 2° amines, carboxylic acids, and terminal alkynes. 1) Please write the product of the following reactions. 2) Please indicate the starting material required to produce the product. 3) Please give a detailed mechanism and the final product of this reaction. 4) Please show two sets of reactants which could be used to synthesize the following molecule using a Grignard reaction.
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Concepts_in_Biophysical_Chemistry_(Tokmakoff)/05%3A_Cooperativity/19%3A_Self-Assembly/19.04%3A_Shape_of_Self-Assembled_Amphiphiles |
Empirically it is observed that certain features of the molecular structure of amphiphilic molecules and surfactants are correlated with the shape of the larger structures that they self-assemble into. For instance, single long hydrocarbon tails with a sulfate head group (like SDS) tend to aggregate into spherical micelles, whereas phospholipids with two hydrocarbon chains (like DMPC) prefer to form bilayers. Since structure formation is largely governed by the hydrophobic effect, condensing the hydrophobic tails and driving the charged groups to a water interface, this leads to the conclusion that the volume and packing of the hydrophobic tail plays a key role in shape. While the molecular volume and the head group size and charge are fixed, the fluid nature of the hydrocarbon chain allows the molecule to pack into different configurations. This structural variability is captured by the packing parameter: \( p = \dfrac{V_0}{a_e \ell_0} \) where \(V_0\) and \(\ell_0\) are the volume and length of the hydrocarbon chain, and \(a_e\) is the average surface area per charged head group. \(V_0 / \ell_0 \) is relatively constant at ~0.2 nm\(^2\), but the shape of the chain may vary from extended (cylindrical) to compact (conical), which will favor a particular packing. Empirically it is found that systems with p < ⅓ typically form micelles, cylindrical structures form for ⅓ < p < ½, and bilayer structures for ½ < p < 1. Simple geometric arguments can be made to rationalize this observation.
Taking a spherical aggregate with radius R and aggregation number n as an example, we expect the ratio of the volume to the surface area to be \[ \dfrac{V}{A} = \dfrac{nV_0}{na_e} = \dfrac{R}{3} \quad \rightarrow \quad V_0 = \dfrac{a_eR}{3} \] Substituting into the packing parameter: \[ p = \dfrac{V_0}{a_e \ell_0} = \dfrac{R}{3\ell_0} \] Now, even though the exact conformation of the hydrocarbon chain is not known, the radius of the micelle cannot be larger than the extended length of the hydrocarbon tail, i.e., \(\ell_0 \geq R \). Therefore \[ \therefore p \leq \dfrac{1}{3} \qquad (spheres) \] Similar arguments can be used to explain why extended lipid bilayers have \(p \approx 1 \) and cylinders have p ≈ ½. In a more general sense, we note that the packing parameter is related to the curvature of the aggregate surface. As p decreases below one, the aggregate forms an increasingly curved surface. (Thus vesicles are expected to have ½ < p < 1.) It is also possible to have p > 1. In this case, the curvature also increases with increasing p, although the sign of the curvature inverts (from convex to concave). Such conditions result in inverted structures, such as reverse micelles, in which water is confined in a spherical pool in contact with the charged headgroups and the hydrocarbon tails project outward into a hydrophobic solvent. _____________________________________________________________________ J. N. Israelachvili, Intermolecular and Surface Forces, 3rd ed. (Academic Press, Burlington, MA, 2011).
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Aldehydes_and_Ketones/Reactivity_of_Aldehydes_and_Ketones/Oxidation_of_Aldehydes_and_Ketones |
\[RCHO + H_2O \rightarrow RCOOH + 2H^+ +2e^- \tag{1}\] \[RCHO + 3OH^- \rightarrow RCOO^- + 2H_2O +2e^- \tag{2}\] \[ Cr_2O_7^{2-} + 14H^+ + 6e^- \rightarrow 2Cr^{3+} + 7H_2O \tag{3}\] \[RCHO + H_2O \rightarrow RCOOH + 2H^+ +2e^- \tag{4}\] \[3RCHO + Cr_2O_7^{2-} + 8H^+ \rightarrow 3RCOOH +2Cr^{3+}+ 4H_2O \tag{5}\] \[RCHO + 3OH^- \rightarrow RCOO^- + 2H_2O +2e^- \tag{7}\] \[2Ag(NH_3)_2^+ + RCHO + 3OH^- \rightarrow 2Ag + RCOO^- + 4NH_3 +2H_2O \tag{8}\] \[RCHO + 3OH^- \rightarrow RCOO^- + 2H_2O +2e^- \tag{10}\] \[RCHO + 2Cu^{2+}_{complexed} + 5OH^- \rightarrow RCOO^- + Cu_2O + 3H_2O \tag{11}\] Jim Clark
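As a sanity check, the summed dichromate equation (combining the aldehyde half-reaction with the dichromate half-reaction) can be verified for atom and charge balance. A minimal sketch, taking R = CH3 (ethanal to ethanoic acid) purely as an illustrative assumption:

```python
from collections import Counter

# Sketch: verify atom and charge balance of the dichromate oxidation
#   3 RCHO + Cr2O7^2- + 8 H+  ->  3 RCOOH + 2 Cr^3+ + 4 H2O
# R is taken as CH3 (ethanal -> ethanoic acid) purely for illustration.

def side_totals(species):
    """Sum atom counts and net charge over (coefficient, atoms, charge) tuples."""
    atoms, charge = Counter(), 0
    for coeff, counts, q in species:
        for el, n in counts.items():
            atoms[el] += coeff * n
        charge += coeff * q
    return atoms, charge

left = [
    (3, {"C": 2, "H": 4, "O": 1}, 0),    # CH3CHO
    (1, {"Cr": 2, "O": 7}, -2),          # Cr2O7^2-
    (8, {"H": 1}, +1),                   # H+
]
right = [
    (3, {"C": 2, "H": 4, "O": 2}, 0),    # CH3COOH
    (2, {"Cr": 1}, +3),                  # Cr^3+
    (4, {"H": 2, "O": 1}, 0),            # H2O
]

assert side_totals(left) == side_totals(right)  # atoms and charge both balance
```

The same check applied to the Tollens' and Fehling's equations above confirms their coefficients as written.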
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Reactions/Reactivity_of_Alpha_Hydrogens/Acidity_of_Alpha_Hydrogens_and_Keto-enol_Tautomerism |
Alkyl hydrogen atoms bonded to a carbon atom in an α (alpha) position relative to a carbonyl group display unusual acidity. While the pK\(_a\) values for alkyl C-H bonds are typically on the order of 40-50, pK\(_a\) values for these alpha hydrogens are more on the order of 19-20. This can most easily be explained by resonance stabilization of the product carbanion, as illustrated in the diagram below. In the presence of a proton source, the product can either revert back into the starting ketone or aldehyde or can form a new product, the enol. The equilibrium reaction between the ketone or aldehyde and the enol form is commonly referred to as "keto-enol tautomerism". The ketone or aldehyde is generally strongly favored in this reaction. Because carbonyl groups are sp\(^2\) hybridized, the carbon and oxygen both have unhybridized p orbitals which can overlap to form the C=O \(\pi\) bond. The presence of these overlapping p orbitals gives α hydrogens (hydrogens on carbons adjacent to carbonyls) special properties. In particular, α hydrogens are weakly acidic because the conjugate base, called an enolate, is stabilized through conjugation with the \(\pi\) orbitals of the carbonyl. The effect of the carbonyl is seen when comparing the pK\(_a\) for the α hydrogens of aldehydes (~16-18), ketones (~19-21), and esters (~23-25) to the pK\(_a\) of an alkane (~50). Of the two resonance structures of the enolate ion, the one which places the negative charge on the oxygen is the most stable. This is because the negative charge will be better stabilized by the greater electronegativity of the oxygen. Because of the acidity of α hydrogens, carbonyls undergo keto-enol tautomerism. Tautomers are rapidly interconverted constitutional isomers, usually distinguished by a different bonding location for a labile hydrogen atom and a differently located double bond. The equilibrium between tautomers is not only rapid under normal conditions, but it often strongly favors one of the isomers (acetone, for example, is 99.999% keto tautomer).
Even in such one-sided equilibria, evidence for the presence of the minor tautomer comes from the chemical behavior of the compound. Tautomeric equilibria are catalyzed by traces of acids or bases that are generally present in most chemical samples. (Department of Chemistry, Kent State University Stark Campus)
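The one-sidedness of the acetone equilibrium can be expressed as a standard free-energy difference. A minimal sketch, assuming 298 K and taking the 99.999% keto figure quoted above at face value:

```python
import math

# Sketch: free-energy difference implied by the keto/enol ratio quoted in the
# text (acetone ~99.999% keto). The temperature of 298 K is an assumption.
R = 8.314  # J/(mol K)
T = 298.0

keto_fraction = 0.99999
K = keto_fraction / (1 - keto_fraction)   # [keto]/[enol], roughly 1e5
dG = -R * T * math.log(K)                 # J/mol; negative sign favors keto

print(f"K = {K:.3g}, dG = {dG / 1000:.1f} kJ/mol")
```

A ratio of about 10^5 corresponds to the keto form lying roughly 28-29 kJ/mol below the enol in free energy.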
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Basic_Principles_of_Organic_Chemistry_(Roberts_and_Caserio)/05%3A_Stereoisomerism_of_Organic_Molecules/5.03%3A_Conformational_Isomers |
When using ball-and-stick models, if one allows the sticks to rotate in the holes, it will be found that for ethane, \(CH_3-CH_3\), an infinite number of different atomic orientations are possible, depending on the angular relationship (the so-called torsional angle) between the hydrogens on each carbon. Two extreme orientations, or conformations, are shown in Figure 5-5. In end-on views of the models, the eclipsed conformation is seen to have the hydrogens on the forward carbon directly in front of those on the back carbon. The staggered conformation has each of the hydrogens on the forward carbon set between each of the hydrogens on the back carbon. It has not been possible to obtain separate samples of ethane that correspond to these or intermediate orientations because actual ethane molecules appear to have essentially "free rotation" about the single bond joining the carbons. Free, or at least rapid, rotation is possible around all \(C-C\) single bonds, except when the carbons are part of a ring as in cyclopropane or cyclohexane. For ethane and its derivatives, the staggered conformations are more stable than the eclipsed conformations. The reason for this in ethane is not wholly clear, but doubtless depends on the fact that, in the staggered conformation, the \(C-H\) bonding electrons are as far away from one another as possible and give the least interelectronic repulsion. With groups larger than hydrogen atoms substituted on ethane carbons, space-filling models usually show less interference (steric hindrance) for staggered conformations than for eclipsed conformations. The energy difference between eclipsed and staggered ethane is approximately \(3 \: \text{kcal mol}^{-1}\).\(^4\) This is shown in Figure 5-6 as the height of the peaks (eclipsed forms) separating the valleys (staggered forms) on a curve showing the potential energy of ethane as the methyl groups rotate with respect to each other through \(360^\text{o}\).
Rotation then is not strictly "free" because there is a \(3\)-\(\text{kcal mol}^{-1}\) energy barrier to overcome on eclipsing the hydrogens. Even so, the barrier is low enough that rotation is very rapid at room temperature, occurring on the order of \(10^{10}\) times per second. In butane, \(CH_3CH_2CH_2CH_3\), a \(360^\text{o}\) rotation about the central \(C-C\) bond allows the molecule to pass through three different eclipsed arrangements (\(8\), \(10\), \(12\)), and three different staggered arrangements (\(7\), \(9\), \(11\)), as shown in Figure 5-7. Experiment shows that butane favors the staggered form \(7\) in which the methyl groups are farthest apart. This form is called the anti (or trans) conformation, and \(63\%\) of the molecules of butane exist in this form at room temperature. The other two staggered forms \(9\) and \(11\) are called gauche (or skew) conformations and have a torsional angle of \(60^\text{o}\) between the two methyl groups. Forms \(9\) and \(11\) actually are nonidentical mirror images, but bond rotation is so rapid that the separate enantiomeric conformations cannot be isolated. The populations of the two gauche forms are equal at room temperature (\(18.5\%\) of each) so any optical rotation caused by one form is exactly canceled by an opposite rotation caused by the other. The populations of the eclipsed forms of butane, like the eclipsed forms of ethane, are small and represent energy maxima for the molecule as rotation occurs about the central \(C-C\) bond. The energy differences between the butane conformations are represented diagrammatically in Figure 5-8. The valleys correspond to staggered forms and the energy difference between the anti and gauche forms is \(0.8\)-\(0.9 \: \text{kcal mol}^{-1}\). Pioneering work in the field of conformational analysis was contributed by O. Hassel (Norway) and D. R. H. Barton (Britain), for which they shared the Nobel Prize in chemistry in 1969.
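The quoted room-temperature populations follow, to a good approximation, from a Boltzmann distribution over the three staggered minima (one anti, two gauche). A minimal sketch, assuming a 0.8 kcal mol⁻¹ gauche-anti gap and neglecting the high-energy eclipsed forms:

```python
import math

# Sketch: Boltzmann populations of butane's staggered conformers.
# One anti minimum at E = 0 and two gauche minima at E = 0.8 kcal/mol
# (the text quotes 0.8-0.9 kcal/mol); eclipsed forms are neglected.
R = 0.0019872  # kcal/(mol K)
T = 298.0

w_anti = 1.0
w_gauche = math.exp(-0.8 / (R * T))  # weight per gauche form

Z = w_anti + 2 * w_gauche
p_anti = w_anti / Z
p_gauche_each = w_gauche / Z

print(f"anti: {p_anti:.0%}, each gauche: {p_gauche_each:.0%}")
```

This simple energy-only estimate gives roughly two thirds anti and a sixth per gauche form, close to the experimental 63%/18.5%; the residual gap reflects the difference between the potential-energy gap used here and the true free-energy difference.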
Hassel's work involved the physical determination of preferred conformations of small molecules, whereas Barton was the first to show the general importance of conformation to chemical reactivity. Study of conformations and conformational equilibria has direct application to explaining the extraordinary specificity exhibited by compounds of biological importance. The compounds of living systems are tailor-made to perform highly specific or even unique functions by virtue of their particular configurations and conformations. \(^4\)This is by no means a trivial amount of energy - the difference in energy between the staggered and eclipsed forms of \(1 \: \text{mol}\) (\(30 \: \text{g}\)) of ethane being enough to heat \(30 \: \text{g}\) of water from \(0^\text{o}\) to \(100^\text{o}\).
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Map%3A_Physical_Chemistry_for_the_Biosciences_(Chang)/10%3A_Enzyme_Kinetics/10.05%3A_Enzyme_Inhibition |
Enzymes can be regulated in ways that either promote or reduce their activity. There are many different kinds of molecules that inhibit or promote enzyme function, and various mechanisms exist for doing so. In some cases of enzyme inhibition, for example, an inhibitor molecule is similar enough to a substrate that it can bind to the active site and simply block the substrate from binding. When this happens, the enzyme is inhibited through competitive inhibition, because an inhibitor molecule competes with the substrate for active site binding. On the other hand, in noncompetitive inhibition, an inhibitor molecule binds to the enzyme in a location other than the active site (an allosteric site) and still manages to block substrate binding to the active site. When an inhibitor interacts with an enzyme it decreases the enzyme’s catalytic efficiency. An irreversible inhibitor covalently binds to the enzyme’s active site, producing a permanent loss in catalytic efficiency even if we decrease the inhibitor’s concentration. A reversible inhibitor forms a noncovalent complex with the enzyme, resulting in a temporary decrease in catalytic efficiency. If we remove the inhibitor, the enzyme’s catalytic efficiency returns to its normal level. There are several pathways for the reversible binding of an inhibitor to an enzyme, as shown in Figure \(\Page {1}\). In competitive inhibition the substrate and the inhibitor compete for the same active site on the enzyme. Because the substrate cannot bind to an enzyme–inhibitor complex, EI, the enzyme’s catalytic efficiency for the substrate decreases. With noncompetitive inhibition the substrate and the inhibitor bind to different sites on the enzyme, forming an enzyme–substrate–inhibitor, or ESI complex. The formation of an ESI complex decreases catalytic efficiency because only the enzyme–substrate complex reacts to form the product. Finally, in uncompetitive inhibition the inhibitor binds to the enzyme–substrate complex, forming an inactive ESI complex.
We can identify the type of reversible inhibition by observing how a change in the inhibitor’s concentration affects the relationship between the rate of reaction and the substrate’s concentration. As shown in Figure 13.14, when we display kinetic data using a Lineweaver–Burk plot it is easy to determine which mechanism is in effect. For example, an increase in slope, a decrease in the x-intercept, and no change in the y-intercept indicates competitive inhibition. Because the inhibitor’s binding is reversible, we can still obtain the same maximum velocity—thus the constant value for the y-intercept—by adding enough substrate to completely displace the inhibitor. Because it takes more substrate, the value of K\(_m\) increases, which explains the increase in the slope and the decrease in the x-intercept’s value. Practice Exercise 13.3 provides kinetic data for the oxidation of catechol (the substrate) to o-quinone by the enzyme o-diphenyl oxidase in the absence of an inhibitor. The following additional data are available when the reaction is run in the presence of p-hydroxybenzoic acid, PBHA. Is PBHA an inhibitor for this reaction and, if so, what type of inhibitor is it? Figure \(\Page {3}\) shows the resulting Lineweaver–Burk plot for the data in Practice Exercise 13.3 and Example 13.7. Although the y-intercepts are not identical in value—the result of uncertainty in measuring the rates—the plot suggests that PBHA is a competitive inhibitor for the enzyme’s reaction with catechol. Practice Exercise 13.3 provides kinetic data for the oxidation of catechol (the substrate) to o-quinone by the enzyme o-diphenyl oxidase in the absence of an inhibitor. The following additional data are available when the reaction is run in the presence of phenylthiourea. Is phenylthiourea an inhibitor for this reaction and, if so, what type of inhibitor is it? The data in this exercise are adapted from jkimball.
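The diagnostic pattern described here (a competitive inhibitor raises the slope but leaves the y-intercept at 1/Vmax) can be reproduced with a toy Michaelis–Menten model; the rate constants below are invented for illustration, not the catechol/PBHA data:

```python
# Sketch: competitive inhibition in double-reciprocal (Lineweaver-Burk) form.
# v = Vmax*S/(Km*(1 + I/Ki) + S). The constants below are invented for
# illustration only.
Vmax, Km, Ki = 10.0, 2.0, 1.0

def rate(S, I=0.0):
    """Michaelis-Menten rate with a competitive inhibitor at concentration I."""
    return Vmax * S / (Km * (1.0 + I / Ki) + S)

def lineweaver_burk(I):
    """Slope and y-intercept of the 1/v vs 1/S line (exact, from two points)."""
    S1, S2 = 1.0, 5.0
    x1, y1 = 1 / S1, 1 / rate(S1, I)
    x2, y2 = 1 / S2, 1 / rate(S2, I)
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

s0, b0 = lineweaver_burk(0.0)   # no inhibitor
s1, b1 = lineweaver_burk(2.0)   # with inhibitor
# slope increases with [I]; the y-intercept (1/Vmax) is unchanged
print(s0, s1, b0, b1)
```

Because 1/v is exactly linear in 1/S for this mechanism, two points fix the line; both fits share the intercept 1/Vmax = 0.1 while the inhibited slope is larger by the factor (1 + I/Ki).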
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Supplemental_Modules_and_Websites_(Inorganic_Chemistry)/Coordination_Chemistry/Complex_Ion_Equilibria/Hard_and_Soft_Acids_and_Bases |
The thermodynamic stability of a metal complex depends greatly on the properties of the ligand and the metal ion and on the type of bonding. Metal–ligand interaction is an example of a Lewis acid–base interaction. Lewis bases can be divided into two categories: hard bases, which contain small, relatively nonpolarizable donor atoms (such as N, O, and F), and soft bases, which contain larger, relatively polarizable donor atoms (such as P, S, and Cl). Metal ions with the highest affinities for hard bases are hard acids, whereas metal ions with the highest affinity for soft bases are soft acids. Some examples of hard and soft acids and bases are given in Table \(\Page {1}\). Notice that hard acids are usually cations of electropositive metals; consequently, they are relatively nonpolarizable and have higher charge-to-radius ratios. Conversely, soft acids tend to be cations of less electropositive metals; consequently, they have lower charge-to-radius ratios and are more polarizable. Chemists can predict the relative stabilities of such complexes with a remarkable degree of accuracy by using a simple rule: hard acids prefer to bind to hard bases, and soft acids prefer to bind to soft bases. Because the interaction between hard acids and hard bases is primarily electrostatic in nature, the stability of complexes involving hard acids and hard bases increases as the positive charge on the metal ion increases and as its radius decreases. For example, the complex of Al\(^{3+}\) (r = 53.5 pm) with four fluoride ligands (AlF\(_4^-\)) is about 10\(^8\) times more stable than InF\(_4^-\), the corresponding fluoride complex of In\(^{3+}\) (r = 80 pm). In general, the stability of complexes of divalent first-row transition metals with a given ligand varies inversely with the radius of the metal ion, as shown in Table \(\Page {2}\). The inversion in the order at copper is due to the anomalous structure of copper(II) complexes, which will be discussed shortly.
Because a hard metal interacts with a base in much the same way as a proton, by binding to a lone pair of electrons on the base, the stability of complexes of hard acids with hard bases increases as the ligand becomes more basic. For example, because ammonia is a stronger base than water, metal ions bind preferentially to ammonia. Consequently, adding ammonia to aqueous solutions of many of the first-row transition-metal cations results in the formation of the corresponding ammonia complexes. In contrast, the interaction between soft metals (such as the second- and third-row transition metals and Cu\(^+\)) and soft bases is largely covalent in nature. Most soft-metal ions have a filled or nearly filled d subshell, which suggests that metal-to-ligand π bonding is important. Complexes of soft metals with soft bases are therefore much more stable than would be predicted based on electrostatic arguments. The hard acid–hard base/soft acid–soft base concept also allows us to understand why metals are found in nature in different kinds of ores. Recall that most of the first-row transition metals are isolated from oxide ores but that copper and zinc tend to occur naturally in sulfide ores. This is consistent with the increase in the soft character of the metals across the first row of the transition metals from left to right. Recall also that most of the second- and third-row transition metals occur in nature as sulfide ores, consistent with their greater soft character.
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Organic_Chemistry_Lab_Techniques_(Nichols)/05%3A_Distillation/5.03%3A_Fractional_Distillation/5.3C%3A_Uses_of_Fractional_Distillation |
Crude oil (petroleum) is composed of mostly hydrocarbons (alkanes and aromatics), and is a tremendous mixture of compounds consisting of between 5 and 40 carbon atoms.\(^{11}\) The components in oil are incredibly useful as fuels and lubricants, but not when they are mixed together. Fractional distillation is used in oil refineries (Figure 5.41) to separate the complex mixture into fractions that contain similar boiling points and therefore similar molecular weights and properties. Gasoline, diesel fuel, kerosene, and jet fuel are some of the different fractions produced by an oil refinery. Cyclopentadiene is used in many chemical reactions, including Diels-Alder reactions and polymerizations. The reagent is so reactive, however, that it undergoes a Diels-Alder reaction with itself in the reagent bottle to form dicyclopentadiene (Figure 5.42a). Therefore, chemical companies do not sell cyclopentadiene, and chemists are instead required to distill commercial dicyclopentadiene (Figure 5.42b) to reverse the dimerization reaction and obtain cyclopentadiene (Figure 5.42c). At temperatures above \(150^\text{o} \text{C}\) the dimer reverts to the monomer through a retro Diels-Alder reaction (driven by the favorable change in entropy, Figure 5.42c). Distillation can be used to remove the monomer as it forms. Although the two components (dimer and monomer) have dramatically different boiling points, the temperature required for the reverse reaction is so similar to the boiling point of dicyclopentadiene that its vapor pressure cannot be ignored. Therefore, a fractional distillation is required for this process. \(^{11}\)About \(6\%\) of crude oil contains hydrocarbons with greater than 40 carbon atoms, a fraction that eventually becomes used for asphalt.
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Alkenes/Reactivity_of_Alkenes/Free_Radical_Reactions_of_Alkenes/Radical_Allylic_Halogenation |
When halogens are in the presence of molecules such as alkenes, the expected reaction is addition to the double bond carbons, resulting in a vicinal dihalide (halogens on adjacent carbons). However, when the halogen concentration is low enough, alkenes containing allylic hydrogens undergo substitution at the allylic position rather than addition at the double bond. The product is an allylic halide (halogen on carbon next to double bond carbons), which is acquired through a radical chain mechanism. As the table below shows, the dissociation energy for the allylic C-H bond is lower than the dissociation energies for the C-H bonds at the vinylic and alkylic positions. This is because the radical formed when the allylic hydrogen is removed is resonance-stabilized. Hence, given that the halogen concentration is low, substitution at the allylic position is favored over competing reactions. However, when the halogen concentration is high, addition at the double bond is favored because a polar reaction outcompetes the radical chain reaction. NBS (N-bromosuccinimide) is the most commonly used reagent to produce low concentrations of bromine. When suspended in carbon tetrachloride (CCl\(_4\)), NBS reacts with trace amounts of HBr to produce a low enough concentration of bromine to facilitate the allylic bromination reaction. Once the pre-initiation step involving NBS produces small quantities of Br\(_2\), the bromine molecules are homolytically cleaved by light to produce bromine radicals. One bromine radical produced by homolytic cleavage in the initiation step removes an allylic hydrogen of the alkene molecule. A radical intermediate is generated, which is stabilized by resonance. The stability provided by resonance delocalization of the radical in the alkene intermediate is the reason that substitution at the allylic position is favored over competing reactions such as addition at the double bond.
The intermediate radical then reacts with a Br\(_2\) molecule to generate the allylic bromide product and regenerate the bromine radical, which continues the radical chain mechanism. If the alkene reactant is asymmetric, two distinct product isomers are formed. The radical chain mechanism of allylic bromination can be terminated by any of the possible steps shown below. Like bromination, chlorination at the allylic position of an alkene is achieved when low concentrations of Cl\(_2\) are present. The reaction is run at high temperatures to achieve the desired results. Allylic chlorination has important practical applications in industry. Since chlorine is inexpensive, allylic chlorinations of alkenes have been used in the industrial production of valuable products. For example, 3-chloropropene, which is necessary for the synthesis of products such as epoxy resin, is acquired through radical allylic chlorination (shown below).
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Aldehydes_and_Ketones/Reactivity_of_Aldehydes_and_Ketones/Reduction_of_Aldehydes_and_Ketones |
Despite the fearsome names, the structures of the two reducing agents are very simple. In each case, there are four hydrogens ("tetrahydrido") around either aluminium or boron in a negative ion (shown by the "ate" ending). The "(III)" shows the oxidation state of the aluminium or boron, and is often left out because these elements only ever show the +3 oxidation state in their compounds. The formulae of the two compounds are \(LiAlH_4\) and \(NaBH_4\). Their structures are: In each of the negative ions, one of the bonds is a co-ordinate covalent (dative covalent) bond using the lone pair on a hydride ion (H\(^-\)) to form a bond with an empty orbital on the aluminium or boron. You get exactly the same organic product whether you use lithium tetrahydridoaluminate or sodium tetrahydridoborate. For example, with ethanal you get ethanol: Notice that this is a simplified equation where [H] means "hydrogen from a reducing agent". In general terms, reduction of an aldehyde leads to a primary alcohol. Again the product is the same whichever of the two reducing agents you use. For example, with propanone you get propan-2-ol: Reduction of a ketone leads to a secondary alcohol. Lithium tetrahydridoaluminate is much more reactive than sodium tetrahydridoborate. It reacts violently with water and alcohols, and so any reaction must exclude these common solvents. The reactions are usually carried out in solution in a carefully dried ether such as ethoxyethane (diethyl ether). The reaction happens at room temperature, and takes place in two separate stages. In the first stage, a salt is formed containing a complex aluminium ion. The following equations show what happens if you start with a general aldehyde or ketone. R and R' can be any combination of hydrogen or alkyl groups. The product is then treated with a dilute acid (such as dilute sulfuric acid or dilute hydrochloric acid) to release the alcohol from the complex ion.
The alcohol formed can be recovered from the mixture by fractional distillation. Sodium tetrahydridoborate is a more gentle (and therefore safer) reagent than lithium tetrahydridoaluminate. It can be used in solution in alcohols or even in water - provided the solution is alkaline. Solid sodium tetrahydridoborate is added to a solution of the aldehyde or ketone in an alcohol such as methanol, ethanol or propan-2-ol. Depending on which recipe you read, it is either heated under reflux or left for some time around room temperature. This almost certainly varies depending on the nature of the aldehyde or ketone. At the end of this time, a complex similar to the previous one is formed. In the second stage of the reaction, water is added and the mixture is boiled to release the alcohol from the complex. Again, the alcohol formed can be recovered from the mixture by fractional distillation. Jim Clark
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Concepts_in_Biophysical_Chemistry_(Tokmakoff)/06%3A_Dynamics_and_Kinetics/21%3A_Binding_and_Association/21.03%3A_DNA_Hybridization |
To illustrate the use of statistical thermodynamics to describe binding, we discuss simple models for the hybridization or melting of DNA. These models are similar to our description of the helix–coil transition in their approach. They do not distinguish the different nucleobases, only considering nucleotides along a chain that are paired (bp) or free (f). Consider the case of the pairing between self-complementary oligonucleotides. \[S+S \rightleftharpoons D \nonumber \] S refers to any fully dissociated ssDNA and D to any dimer form that involves two strands with at least one base pair formed. We can then follow the expressions for monomer–dimer equilibria above. The equilibrium constant for the association of single strands is \[K_a = \dfrac{c_D}{c_S^2}\] This equilibrium constant is determined by the concentration-dependent free-energy barrier for two strands to diffuse into contact and create the first base pair. If every molecule present is either a monomer or a dimer, the total strand concentration is \[ C_{tot} = c_S + 2c_D \] the fraction of the DNA strands in the dimer form is \[ \theta_D = \dfrac{2c_D}{C_{tot}} \] and combining these relations leads to \[ \theta_D = 1+(4K_aC_{tot})^{-1}-\sqrt{(1+(4K_aC_{tot})^{-1})^2-1} \] We see that at the total concentration which results in a dimer fraction \(\theta_D = 0.5\), the association constant is obtained from \(K_a=C_{tot}^{-1} \). This is a traditional description of the thermodynamics of a monomer–dimer equilibrium. We can calculate \(K_a\) from the molecular partition functions for the S and D states: \[K_a = \dfrac{q_D}{q_S^2} \nonumber \] Different models for hybridization will vary in the form of these partition functions. For either state, we can separate the partition function into contributions from the internal (conformational) degrees of freedom relevant to base-pairing and hybridization, and the remaining external degrees of freedom: \(q_i = q_{i,int}\,q_{i,ext}\).
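The closed-form dimer fraction can be checked numerically against the underlying mass-action relation. A minimal sketch in Python, treating \(K_a C_{tot}\) as a single dimensionless control parameter:

```python
import math

# Sketch: dimer fraction theta_D for self-complementary strands, using the
# closed-form expression in the text, checked against mass action
# Ka = c_D/c_S^2 with c_S = (1-theta)*Ctot and c_D = theta*Ctot/2.

def theta_D(Ka_Ctot):
    """Fraction of strands in the dimer form for a given Ka*Ctot product."""
    x = 1.0 / (4.0 * Ka_Ctot)
    return 1.0 + x - math.sqrt((1.0 + x) ** 2 - 1.0)

# Consistency check: mass action implies Ka*Ctot = theta/(2*(1-theta)^2)
for kc in (0.1, 1.0, 10.0):
    th = theta_D(kc)
    assert abs(th / (2.0 * (1.0 - th) ** 2) - kc) < 1e-9

# The strands are half dimerized when Ka*Ctot = 1
assert abs(theta_D(1.0) - 0.5) < 1e-12
```

The last assertion makes the midpoint condition explicit: the formula gives \(\theta_D = 0.5\) exactly when \(K_a C_{tot} = 1\).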
Assuming that the external degrees of freedom will be largely of an entropic nature, we neglect an explicit calculation and factor out the external degrees of freedom by defining the variable γ: \[\gamma = \dfrac{q_{D,ext}C_{tot}}{q_{S,ext}^2} \] then \[ \theta_D = 1+\dfrac{q_{S,int}^2}{4\gamma q_{D,int}}-\sqrt{\left( 1+ \dfrac{q_{S,int}^2}{4\gamma q_{D,int}} \right)^2-1} \] Short Oligonucleotides: The Zipper Model. For short oligonucleotide hybridization, a common (and reasonable) approximation is the single-stretch model, which assumes that base-pairing will only occur as a single continuous stretch of base pairs. This is reasonable for short oligomers (n < 20), where two distinct helical stretches separated by a bubble (loop) are unlikely given the persistence length of dsDNA. The zipper model refers to the single-stretch case with “perfect matching”, in which only pairing between the bases in precisely sequence-aligned DNA strands is counted. As a result of these two approximations, the only dissociated base pairs observed in this model appear at the ends of a chain (fraying). The number of bases in a single strand is n and the number of bases that are paired is \(n_{bp}\). For the dimer, we consider all configurations that have at least one base pair formed. The dimer partition function can be written as \[ \begin{aligned} q_{D,int}(n) &=\sigma \sum_{n_{bp}=1}^ng(n,n_{bp})s^{n_{bp}} \\ &=\sigma \sum_{n_{bp}=1}^n (n-n_{bp} +1)s^{n_{bp}} \end{aligned} \] Here g is the number of ways of arranging \(n_{bp}\) continuous base pairs on a strand with length n; σ is the statistical weight for nucleating the first base pair; and s is the statistical weight for forming a base pair next to an already-paired segment: \( s=e^{-\Delta \varepsilon_{bp}/k_BT}\). Therefore, in the zipper model, the equilibrium constant defined above between ssDNA and dimers involving at least one intact base pair is K = σs.
In the case of homogeneous polynucleotide chains, in which sliding of registry between chains is allowed: \( q_{D,int}(n) =\sigma \sum_{n_{bp}=1}^n (n-n_{bp} +1)^2s^{n_{bp}} \). The sum in the single-stretch partition function can be evaluated exactly, giving \[q_{D,int}(n) = \dfrac{\sigma s}{(s-1)^2}\left[ s^{n+1} -(n+1)s+n \right] \] In the case that s > 1 ( \( \Delta \varepsilon_{bp} < 0 \) ) and n ≫ 1, \(q_{D,int} \rightarrow \sigma s^{n+2}/(s-1)^2\). Also, the probability distribution of helical segments is \[ P_{bp}(n,n_{bp}) = \dfrac{(n-n_{bp}+1)\sigma s^{n_{bp}}}{q_{D,int}}\qquad 1\leq n_{bp} \leq n \] The plot below shows illustrations of the probability density and associated energy landscape for a narrow range of s across the helix–coil transition. These figures illustrate a duplex state that always has a single free-energy minimum characterized by frayed configurations. In addition to the fraction of molecules that associate to form a dimer, we must also consider the fraction of contacts that successfully form a base pair in the dimer state \[ \theta_{bp} = \dfrac{ \langle n_{bp} \rangle }{n} \nonumber \] We can evaluate this using the identity \[ \langle n_{bp} \rangle = \dfrac{s}{q} \dfrac{\partial q}{\partial s} \nonumber \] Using the closed form for \(q_{D,int}\) we have \[ \theta_{bp} = \dfrac{ns^{n+2}-(n+2)s^{n+1}+(n+2)s - n}{n(s-1)(s^{n+1}-s(n+1)+n)} \nonumber \] Similar to the helix–coil transition in polypeptides, \(\theta_{bp}\) shows cooperative behavior with a transition centered at s = 1, which gets steeper with increasing n and decreasing σ. The total fraction of paired nucleotides is then \[\theta_{tot} = \theta_D \theta_{bp} \nonumber \]
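The closed-form results above can be checked numerically against the direct sum defining the zipper-model partition function. A minimal sketch with illustrative values of n, σ, and s:

```python
# Sketch of the zipper model for a short duplex: compare the direct sum for
# q_D,int with the closed-form expression, and compute the paired fraction
# theta_bp = <n_bp>/n. The values of n, sigma, and s are illustrative.

def q_direct(n, sigma, s):
    """q_D,int = sigma * sum over n_bp of (n - n_bp + 1) * s^n_bp."""
    return sigma * sum((n - k + 1) * s**k for k in range(1, n + 1))

def q_closed(n, sigma, s):
    """Closed form sigma*s*(s^(n+1) - (n+1)*s + n)/(s-1)^2, valid for s != 1."""
    return sigma * s * (s**(n + 1) - (n + 1) * s + n) / (s - 1) ** 2

def theta_bp(n, sigma, s):
    """<n_bp>/n from the Boltzmann-weighted average over paired stretches."""
    q = q_direct(n, sigma, s)
    avg = sigma * sum(k * (n - k + 1) * s**k for k in range(1, n + 1)) / q
    return avg / n

n, sigma = 14, 1e-3
for s in (0.8, 1.2, 2.0):
    rel = abs(q_direct(n, sigma, s) - q_closed(n, sigma, s)) / q_direct(n, sigma, s)
    assert rel < 1e-12

# pairing rises cooperatively as s increases through 1
assert theta_bp(n, sigma, 0.8) < theta_bp(n, sigma, 1.2) < theta_bp(n, sigma, 2.0)
```

Summing the weighted average directly avoids differentiating the closed form, and the assertions confirm the two routes agree for s on either side of the transition.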
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Basic_Principles_of_Organic_Chemistry_(Roberts_and_Caserio)/20%3A_Carbohydrates/20.10%3A_Formation_of_Carbohydrates_by_Photosynthesis |
Carbohydrates are formed in green plants by photosynthesis, which is the chemical combination, or "fixation", of carbon dioxide and water by utilization of energy from the absorption of visible light. The overall result is the reduction of carbon dioxide to carbohydrate and the formation of oxygen: If the carbohydrate formed is cellulose, then the reaction in effect is the reverse of the burning of wood, and obviously requires considerable energy input. Because of its vital character to life as we know it, photosynthesis has been investigated intensively and the general features of the process are now rather well understood. The principal deficiencies in our knowledge include just how the light absorbed by the plants is converted to chemical energy and the details of how the many complex enzyme-induced reactions involved take place. The ingredients in green plants that carry on the work of photosynthesis are contained in highly organized, membrane-covered units called chloroplasts. The specific substances that absorb the light are the plant pigments, chlorophyll \(a\) and chlorophyll \(b\), whose structures are shown in Figure 20-6. These highly conjugated substances are very efficient light absorbers, and the energy so gained is used in two separate processes, which are represented diagrammatically in Figure 20-7. One photoprocess reduces nicotinamide adenine dinucleotide phosphate \(\left( \ce{NADP}^\oplus \right)\) to \(\ce{NADPH}\). These dinucleotides, shown below, differ from \(\ce{NAD}^\oplus\) and \(\ce{NADH}\) in having a phosphate group at \(\ce{C_2}\) of one of the ribose units.
The oxidized form, \(\ce{NADP}^\oplus\), behaves like \(\ce{NAD}^\oplus\) and receives the equivalent of \(\ce{H}^\ominus\) at \(\ce{C_4}\) of the nicotinamide ring to form \(\ce{NADPH}\): The other important photoreaction is oxidation of water to oxygen by the reaction: \[\ce{H_2O} \rightarrow 2 \ce{H}^\oplus + \frac{1}{2} \ce{O_2} + 2 \ce{e}^\ominus\] The oxygen formed clearly comes from \(\ce{H_2O}\) and not from \(\ce{CO_2}\), because photosynthesis in the presence of water labeled with \(\ce{^{18}O}\) produces oxygen labeled with \(\ce{^{18}O}\), whereas carbon dioxide labeled with \(\ce{^{18}O}\) does not give oxygen labeled with \(\ce{^{18}O}\). Notice that the oxidation of the water produces two electrons, and that the formation of \(\ce{NADPH}\) from \(\ce{NADP}^\oplus\) requires two electrons. These reactions occur at different locations within the chloroplasts, and in the process of transferring electrons from the water oxidation site to the \(\ce{NADP}^\oplus\) reduction site, adenosine diphosphate (ADP) is converted to adenosine triphosphate (ATP). Thus electron transport between the two photoprocesses is coupled to phosphorylation. This process is called photophosphorylation (Figure 20-7). The end result of the photochemical part of photosynthesis is the formation of \(\ce{O_2}\), \(\ce{NADPH}\), and ATP. Much of the oxygen is released to the atmosphere, but the \(\ce{NADPH}\) and ATP are utilized in a series of dark reactions that achieve the reduction of carbon dioxide to the level of a carbohydrate (fructose). A balanced equation is \[6 \ce{CO_2} + 12 \ce{NADPH} + 12 \ce{H}^\oplus \rightarrow \ce{C_6H_{12}O_6} + 12 \ce{NADP}^\oplus + 6 \ce{H_2O}\] The cycle of reactions that converts carbon dioxide to carbohydrates is called the Calvin cycle, after M. Calvin, who received the Nobel Prize in chemistry in 1961 for his work on determining the path of carbon in photosynthesis. 
Carbon enters the cycle as carbon dioxide. The key reaction by which the \(\ce{CO_2}\) is "fixed" involves enzymatic carboxylation of a pentose, \(D\)-ribulose 1,5-diphosphate.\(^8\) A subsequent hydrolytic cleavage of the \(\ce{C_2}\)-\(\ce{C_3}\) bond of the carboxylation product (this amounts to a reverse Claisen condensation) yields two molecules of \(D\)-3-phosphoglycerate.\(^9\) In subsequent steps, ATP is utilized to phosphorylate the carboxyl group of 3-phosphoglycerate to create 1,3-diphosphoglycerate (a mixed anhydride of glyceric and phosphoric acids). This substance then is reduced by \(\ce{NADPH}\) to glyceraldehyde 3-phosphate: Two glyceraldehyde 3-phosphates are utilized to build the six-carbon chain of fructose by an aldol condensation \(\left( \ce{C_3} + \ce{C_3} \rightarrow \ce{C_6} \right)\), but the donor nucleophile in this reaction is the phosphate ester of dihydroxypropanone, which is an isomer of glyceraldehyde 3-phosphate. Rearrangement of the \(\ce{C_3}\) aldose to the \(\ce{C_3}\) ketose therefore precedes the aldol addition. The fructose 1,6-diphosphate formed is then hydrolyzed to fructose 6-phosphate: From what we have described thus far, only one atom of carbon has been added from the atmosphere, and although we have reached fructose, five previously reduced carbons were consumed in the process. Thus the plant has to get back a five-carbon sugar from a six-carbon sugar to perpetuate the cycle. Rather than split off one carbon and use that as a building block to construct other sugars, an amazing series of transformations is carried on that can be summarized by the following equations: These reactions have several features in common. They all involve phosphate esters of aldoses or ketoses, and they resemble aldol or reverse-aldol condensations. Their mechanisms will not be considered here. 
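As a quick arithmetic check of the balanced dark-reaction equation given above (6 CO2 + 12 NADPH + 12 H+ → C6H12O6 + 12 NADP+ + 6 H2O), the sketch below tallies atoms and net charge on each side. Treating the NADP skeleton as an inert bookkeeping unit "N" that carries one transferable H in NADPH is my simplifying assumption, not chemistry from the text:

```python
# Sketch: verify atom and charge balance of the Calvin-cycle summary equation.
from collections import Counter

# species name -> (composition Counter, charge); "N" = inert NADP skeleton
species = {
    "CO2":      (Counter({"C": 1, "O": 2}), 0),
    "NADPH":    (Counter({"N": 1, "H": 1}), 0),
    "H+":       (Counter({"H": 1}), +1),
    "C6H12O6":  (Counter({"C": 6, "H": 12, "O": 6}), 0),
    "NADP+":    (Counter({"N": 1}), +1),
    "H2O":      (Counter({"H": 2, "O": 1}), 0),
}

def side_totals(stoich):
    """Total atom counts and net charge for one side of the equation."""
    atoms, charge = Counter(), 0
    for name, coeff in stoich.items():
        comp, q = species[name]
        for el, k in comp.items():
            atoms[el] += coeff * k
        charge += coeff * q
    return atoms, charge

left = side_totals({"CO2": 6, "NADPH": 12, "H+": 12})
right = side_totals({"C6H12O6": 1, "NADP+": 12, "H2O": 6})
assert left == right, (left, right)
print("balanced:", dict(left[0]), "charge", left[1])
```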
Their summation is \(\ce{C_6} + 3 \ce{C_3} \rightarrow 3 \ce{C_5}\), which means that fructose 6-phosphate as the \(\ce{C_6}\) component reacts with a total of three \(\ce{C_3}\) units (two glyceraldehyde 3-phosphates and one dihydroxypropanone phosphate) to give, ultimately, three ribulose 5-phosphates. Although the sequence may seem complex, it avoids building up pentose or hexose chains one carbon at a time from one-carbon intermediates. The Calvin cycle is completed by the phosphorylation of \(D\)-ribulose 5-phosphate with ATP. The resulting \(D\)-ribulose 1,5-diphosphate then is used to start the cycle again by combining with carbon dioxide. The fructose formed in excess of that needed to regenerate the ribulose is used to build other carbohydrates, notably glucose, starch, and cellulose. \(^8\)All of the reactions we will be discussing are mediated by enzymes, and we will omit henceforth explicit mention of this fact. But it should not be forgotten that these are enzyme-induced processes, for which we have few, if any, laboratory reagents to duplicate on the particular compounds involved. \(^9\)We will henceforth, in equations, designate the various acids we encounter as the phosphate and the carboxylate anions, although this is hardly reasonable at the pH values normal in living cells. Glyceric and phosphoric acids are only partially ionized at pH 7-8. However, it would be equally unrealistic to represent the acids as being wholly undissociated.
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/32%3A_Math_Chapters/32.01%3A_Complex_Numbers |
Let us think of the ordinary numbers as set out on a line which goes to infinity in both positive and negative directions. We could start by taking a stretch of the line near the origin (that is, the point representing the number zero) and putting in the integers as follows: Next, we could add in rational numbers, such as ½, 23/11, etc., then the irrationals like \(\sqrt{2}\), then numbers like \(\pi\), and so on, so any number you can think of has its place on this line. Now let’s take a slightly different point of view, and think of the numbers as represented by a vector from the origin to that number, so 1 is represented by a vector of unit length pointing in the positive direction and, for example, –2 is represented by: Note that if a number is multiplied by –1, the corresponding vector is turned through 180 degrees. In pictures, The “vector” 2 is turned through \(\pi\), or 180 degrees, when you multiply it by –1. What are the square roots of 4? Well, 2, obviously, but also –2, because multiplying the backwards pointing vector –2 by –2 not only doubles its length, but also turns it through 180 degrees, so it is now pointing in the positive direction. We seem to have invented a hard way of stating that multiplying two negatives gives a positive, but thinking in terms of turning vectors through 180 degrees will pay off soon. In solving the standard quadratic equation \[ax^2 + bx + c = 0 \label{A.1} \] we find the solution to be: \[ x =\dfrac{-b \pm \sqrt{b^2-4ac}}{2a} \label{A.2} \] The problem with this is that sometimes the expression inside the square root is negative. What does that signify? For some problems in physics, it means there is no solution. For example, if I throw a ball directly upwards at 10 meters per sec, and ask when will it reach a height of 20 meters, taking \(g\) = 10 m per sec\(^2\), the solution of the quadratic equation for the time has a negative number inside the square root, and that means that the ball doesn’t get to 20 meters, so the question didn’t really make sense. 
We shall find, however, that there are other problems, in wide areas of physics, where negative numbers inside square roots have an important physical significance. For that reason, we need to come up with a scheme for interpreting them. The simplest quadratic equation that gives trouble is: \[x^2 + 1 = 0 \label{A.3} \] the solutions being \[x = \pm \sqrt{-1}\label{A.4} \] What does that mean? We’ve just seen that the square of a positive number is positive, and the square of a negative number is also positive, since multiplying one negative number, which points backwards, by another, which turns any vector through 180 degrees, gives a positive vector. Another way of saying the same thing is to regard the minus sign itself, −, as an operator which turns the number it is applied to through 180 degrees. Now \((-2)\times (-2)\) has two such rotations in it, giving the full 360 degrees back to the positive axis. Let’s concentrate for the moment on the square root of –1, from the quadratic equation above. Think of –1 as the operator – acting on the vector 1, so the – turns the vector through 180 degrees. We need to find the square root of this operator, the operator which applied twice gives the rotation through 180 degrees. Put like that, it is pretty obvious that the operator we want rotates the vector 1 through 90 degrees. But if we take a positive number, such as 1, and rotate its vector through 90 degrees only, it isn’t a number at all, at least in our original sense, since we put all known numbers on one line, and we’ve now rotated 1 away from that line. The new number created in this way is called a pure imaginary number, and is denoted by \(i\). Once we’ve found the square root of –1, we can use it to write the square root of any other negative number—for example, \(2i\) is the square root of \(–4\). Putting together a real number from the original line with an imaginary number (a multiple of \(i\)) gives a complex number. 
Evidently, complex numbers fill the entire two-dimensional plane. Taking ordinary Cartesian coordinates, any point \(P\) in the plane can be written as \((x, y)\) where the point is reached from the origin by going \(x\) units in the direction of the positive real axis, then \(y\) units in the direction defined by \(i\), in other words, the imaginary axis. Thus the point with coordinates \((x, y)\) can be identified with the complex number \(z\), where \[z = x + iy. \label{A.5} \] The plane is often called the complex plane, and representing complex numbers in this way is sometimes referred to as an Argand Diagram. Visualizing the complex numbers as two-dimensional vectors, it is clear how to add two of them together. If \(z_1 = x_1 + iy_1\) and \(z_2 = x_2 + iy_2\), then \(z_1 + z_2 = (x_1 + x_2) + i(y_1 + y_2)\). The real parts and imaginary parts are added separately, just like vector components. Multiplying two complex numbers together does not have quite such a simple interpretation. It is, however, quite straightforward—ordinary algebraic rules apply, with \(i^2\) replaced where it appears by −1. So for example, to multiply \(z_1 = x_1 + iy_1\) by \(z_2 = x_2 + iy_2\), \[z_1z_2 = (x_1 + iy_1)( x_2 + iy_2) = (x_1x_2 - y_1y_2) + i(x_1y_2 + x_2y_1). \label{A.6} \] Some properties of complex numbers are most easily understood if they are represented by using the polar coordinates \(r, \theta\) instead of \((x, y)\) to locate \(z\) in the complex plane. Note that \(z = x + iy\) can be written \(r(\cos \theta + i \sin \theta)\) from the diagram above. 
In fact, this representation leads to a clearer picture of multiplication of two complex numbers: \[\begin{align} z_1z_2 &= r_1 ( \cos\theta_1 + i\sin \theta_1)\, r_2( \cos\theta_2 + i\sin \theta_2) \label{A.7} \\[4pt] & = r_1r_2 \left[ (\cos \theta_1 \cos \theta_2 - \sin \theta_1 \sin \theta_2) + i (\sin \theta_1 \cos \theta_2 + \cos \theta_1 \sin \theta_2) \right] \label{A.8} \\[4pt] & = r_1r_2 \left[ \cos(\theta_1+\theta_2) + i\sin (\theta_1+\theta_2) \right] \label{A.9} \end{align} \] So, if \[ z = r(\cos \theta + i\sin \theta ) = z_1z_2 \label{A.10} \] then \[r = r_1r_2 \label{A.11} \] and \[\theta=\theta_1+\theta_2 \label{A.12} \] That is to say, to multiply together two complex numbers, we multiply the \(r\)’s – called the moduli – and add the phases, the \(\theta\)’s. The modulus \(r\) is often denoted by \(|z|\) and called the absolute value; the phase \(\theta\) is sometimes referred to as the argument of \(z\). For example, \(|i| = 1\), \(\text{arg}\; i = \pi/2\). We can now see that, although we had to introduce these complex numbers to have a \(\sqrt{-1}\), we do not need to bring in new types of numbers to get \(\sqrt{i}\). Clearly, \(|\sqrt{i}|=1\), \( \text{arg}\, \sqrt{i} = 45°\). It is on the circle of unit radius centered at the origin, at 45°, and squaring it just doubles the angle. In fact this circle—called the unit circle—plays an important part in the theory of complex numbers and every point on the circle has the form \[ z = \cos \theta + i \sin \theta = Cis(\theta) \label{A.13} \] Since all points on the unit circle have \(|z| = 1\), by definition, multiplying any two of them together just amounts to adding the angles, so our new function \(Cis(\theta)\) satisfies \[ Cis(\theta_1)Cis(\theta_2)=Cis(\theta_1+\theta_2). \label{A.14} \] But that is just how multiplication works for exponents! 
That is, \[a^{\theta_1}a^{\theta_2} = a^{\theta_1+\theta_2} \label{A.15} \] for \(a\) any constant, which strongly suggests that maybe our function \(Cis(\theta)\) is nothing but some constant \(a\) raised to the power \(\theta\), that is, \[ Cis(\theta) = a^{\theta}\label{A.16} \] It turns out to be convenient to write \(a^{\theta} = e^{(\ln a)\theta} = e^{A \theta}\), where \(A = \ln a\). This line of reasoning leads us to write \[\cos \theta + i\sin \theta = e^{A\theta} \label{A.17} \] Now, for the above “addition formula” to work for multiplication, \(A\) must be a constant, independent of \(\theta\). Therefore, we can find the value of \(A\) by choosing \(\theta\) for which things are simple. We take \(\theta\) to be very small—in this limit: \[ \cos \theta = 1 \nonumber \] \[ \sin \theta = \theta \nonumber \] \[ e^{A\theta} = 1+ A\theta \nonumber \] where we drop terms of order \(\theta^2\) and higher. Substituting these values into Equation \ref{A.17} gives \(1 + i\theta = 1 + A\theta\), so \(A = i\). So we find: \[ \cos \theta + i \sin \theta = e ^{i \theta} \label{A.18} \] To test this result, we expand \(e^{i \theta}\): \[ \begin{align} e^{i \theta} &= 1 + i\theta + \dfrac{(i\theta)^2}{2!} + \dfrac{(i\theta)^3}{3!} + \dfrac{(i\theta)^4}{4!} + \dfrac{(i\theta)^5}{5!} ... \label{A.19a} \\[4pt] &= 1 + i\theta - \dfrac{\theta^2}{2!} - \dfrac{i\theta^3}{3!} +\dfrac{\theta^4}{4!} +\dfrac{i\theta^5}{5!} ... \label{A.19b} \\[4pt] &= \left( 1 - \dfrac{\theta^2}{2!} + \dfrac{\theta^4}{4!} ... \right) + i \left(\theta - \dfrac{\theta^3}{3!}+\dfrac{\theta^5}{5!} ... \right) \label{A.19c} \\[4pt] &= \cos \theta + i\sin \theta \label{A.19d} \end{align} \] We write \(\cos \theta + i\sin \theta\) in Equation \ref{A.19d} because the series in the brackets are precisely the Taylor series for \(\cos \theta\) and \(\sin \theta\), confirming our equation for \(e^{i\theta}\). 
Changing the sign of \(\theta\), it is easy to see that \[ e^{-i \theta} = \cos \theta - i\sin \theta \label{A.20} \] so the two trigonometric functions can be expressed in terms of exponentials of complex numbers: \[\cos (\theta) = \dfrac{1}{2} \left( e^{i\theta} + e^{-i \theta} \right) \nonumber \] \[\sin (\theta) = \dfrac{1}{2i} \left( e^{i\theta} - e^{-i \theta} \right) \nonumber \] The Euler formula states that \[e^{i \theta} = \cos \theta + i\sin \theta \nonumber \] so that any complex number can be written \(z = r e^{i\theta}\).
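A short numerical sketch (my addition, using Python's built-in complex arithmetic; the sample phases are arbitrary) confirming the key identities of this section: Euler's formula, the multiply-moduli/add-phases rule, and the exponential expressions for cosine and sine.

```python
# Sketch: numerically verify Euler's formula and polar multiplication.
import cmath
import math

theta1, theta2 = 0.7, 1.9
z1 = 2.0 * cmath.exp(1j * theta1)   # r1 = 2,   phase theta1
z2 = 0.5 * cmath.exp(1j * theta2)   # r2 = 0.5, phase theta2

# Euler: e^{i theta} = cos theta + i sin theta
assert cmath.isclose(cmath.exp(1j * theta1),
                     complex(math.cos(theta1), math.sin(theta1)))

# Multiplication: moduli multiply, phases add
prod = z1 * z2
assert math.isclose(abs(prod), abs(z1) * abs(z2))
assert math.isclose(cmath.phase(prod), theta1 + theta2)  # valid while sum < pi

# cos and sin as combinations of e^{+i theta} and e^{-i theta}
assert math.isclose((cmath.exp(1j * theta1) + cmath.exp(-1j * theta1)).real / 2,
                    math.cos(theta1))
assert math.isclose(((cmath.exp(1j * theta1) - cmath.exp(-1j * theta1)) / 2j).real,
                    math.sin(theta1))
print("all identities hold")
```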
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/07%3A_Mixtures_and_Solutions/7.08%3A_Non-ideality_in_Solutions_-_Activity |
The bulk of the discussion in this chapter dealt with ideal solutions. However, real solutions will deviate from this kind of behavior. Much as in the case of gases, where fugacity was introduced to allow us to use the ideal models, activity is used to allow for the deviation of real solutes from limiting ideal behavior. The activity of a solute is related to its concentration by \[ a_B=\gamma \dfrac{m_B}{m^o} \nonumber \] where \(\gamma \) is the activity coefficient, \(m_B\) is the molality of the solute, and \(m^o\) is unit molality. The activity coefficient is unitless in this definition, and so the activity itself is also unitless. Furthermore, the activity coefficient approaches unity as the molality of the solute approaches zero, ensuring that dilute solutions behave ideally. The use of activity to describe the solute allows us to use the simple model for chemical potential by inserting the activity of a solute in place of its mole fraction: \[ \mu_B =\mu_B^o + RT \ln a_B \nonumber \] The problem that then remains is the measurement of the activity coefficients themselves, which may depend on temperature, pressure, and even concentration. For an ionic substance that dissociates upon dissolving \[ MX(s) \rightarrow M^+(aq) + X^-(aq) \nonumber \] the chemical potential of the cation can be denoted \(\mu_+\) and that of the anion as \(\mu_-\). For a solution, the total molar Gibbs function of the solutes is given by \[G = \mu_+ + \mu_- \nonumber \] where \[ \mu = \mu^* + RT \ln a \nonumber \] where \(\mu^*\) denotes the chemical potential of an ideal solution, and \(a\) is the activity of the solute. 
Substituting this into the above relationship yields \[G = \mu^*_+ + RT \ln a_+ + \mu_-^* + RT \ln a_- \nonumber \] Using a molal definition for the activity coefficient \[a_i = \gamma_im_i \nonumber \] the expression for the total molar Gibbs function of the solutes becomes \[G = \mu_+^* + RT \ln \gamma_+ m_+ + \mu_-^* + RT \ln \gamma_- m_- \nonumber \] This expression can be rearranged to yield \[ G = \mu_+^* + \mu_-^* + RT \ln m_+m_- + RT \ln \gamma_+\gamma _- \nonumber \] where all of the deviation from ideal behavior comes from the last term. Unfortunately, it is impossible to deconvolute the last term experimentally into the specific contributions of the two ions. So instead, we use a geometric average to define the mean activity coefficient, \(\gamma _\pm\). \[\gamma_{\pm} = \sqrt{\gamma_+\gamma_-} \nonumber \] For a substance that dissociates according to the general process \[ M_xX_y(s) \rightarrow x M^{y+} (aq) + yX^{x-} (aq) \nonumber \] the expression for the mean activity coefficient is given by \[ \gamma _{\pm} = (\gamma_+^x \gamma_-^y)^{1/(x+y)} \nonumber \] In 1923, Debye and Hückel (Debye & Hückel, 1923) suggested a means of calculating the mean activity coefficients from experimental data. Briefly, they suggest that \[ \log _{10} \gamma_{\pm} = -\dfrac{1.824 \times 10^6}{(\epsilon T)^{3/2}} |z_+z_- | \sqrt{I} \nonumber \] where \(\epsilon\) is the dielectric constant of the solvent, \(T\) is the temperature in K, \(z_+\) and \(z_-\) are the charges on the ions, and \(I\) is the ionic strength of the solution. \(I\) is given by \[ I = \dfrac{1}{2} \dfrac{m_+ z_+^2 + m_-z_-^2}{m^o} \nonumber \] For a solution in water at 25 °C, the prefactor \(1.824 \times 10^6/(\epsilon T)^{3/2}\) evaluates to approximately 0.509.
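The Debye–Hückel limiting law above is easy to evaluate numerically. The sketch below is illustrative: the 0.010 mol/kg NaCl example and the dielectric constant ε = 78.54 for water at 25 °C are my assumed inputs, not values from the text.

```python
# Sketch: Debye-Hueckel limiting law for the mean activity coefficient.
import math

def ionic_strength(molalities_and_charges, m0=1.0):
    """I = (1/2) * sum(m_i * z_i^2) / m0 over the listed ions."""
    return 0.5 * sum(m * z ** 2 for m, z in molalities_and_charges) / m0

def log10_gamma_pm(z_plus, z_minus, I, eps=78.54, T=298.15):
    """log10 of gamma_pm; the prefactor 1.824e6/(eps*T)^(3/2) ~ 0.509 here."""
    A = 1.824e6 / (eps * T) ** 1.5
    return -A * abs(z_plus * z_minus) * math.sqrt(I)

# 0.010 mol/kg NaCl: m+ = m- = 0.010, z+ = +1, z- = -1  (assumed example)
I = ionic_strength([(0.010, +1), (0.010, -1)])
gamma = 10 ** log10_gamma_pm(+1, -1, I)
print(f"I = {I:.3f}, gamma_pm = {gamma:.3f}")   # gamma_pm ~ 0.89
```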
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/08%3A_Phase_Equilibrium/8.08%3A_Non-ideality_-_Henry's_Law_and_Azeotropes |
The preceding discussion was based on the behaviors of ideal solutions of volatile compounds, for which both compounds follow Raoult’s Law. Henry’s Law can be used to describe the deviations of the solute from this ideal behavior: \[ p_B = k_H \chi_B \nonumber \] for which the Henry’s Law constant (\(k_H\)) is determined for the specific compound. Henry’s Law is often used to describe the solubilities of gases in liquids. The relationship to Raoult’s Law is summarized in Figure \(\Page {1}\). Henry’s Law is depicted by the upper straight line and Raoult’s Law by the lower. The solubility of \(CO_2(g)\) in water at 25 °C is 3.32 × 10\(^{-2}\) M with a partial pressure of \(CO_2\) over the solution of 1 bar. Assuming the density of a saturated solution to be 1 kg/L, calculate the Henry’s Law constant for \(CO_2\). In one L of solution, there is 1000 g of water (assuming the mass of \(CO_2\) dissolved is negligible.) \[ (1000 \,g) \left( \dfrac{1\, mol}{18.02\,g} \right) = 55.5\, mol\, H_2O \nonumber \] The solubility of \(CO_2\) can be used to find the number of moles of \(CO_2\) dissolved in 1 L of solution also: \[ \dfrac{3.32 \times 10^{-2} mol}{L} \cdot 1 \,L = 3.32 \times 10^{-2} mol\, CO_2 \nonumber \] and so the mole fraction of \(CO_2\) is \[ \chi_b = \dfrac{3.32 \times 10^{-2} mol}{55.5 \, mol} = 5.98 \times 10^{-4} \nonumber \] And so \[10^5\, Pa = 5.98 \times 10^{-4} k_H \nonumber \] or \[ k_H = 1.67 \times 10^8\, Pa \nonumber \] An azeotrope is a mixture for which the vapor and the liquid have the same composition. Azeotropes can be either maximum boiling or minimum boiling, as shown in Figure \(\Page {2; left}\). Regardless, distillation cannot purify past the azeotrope point, since the vapor and liquid phases have the same composition. 
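The worked CO2 example above can be reproduced in a few lines. This sketch is my addition; it also keeps the small CO2 contribution in the mole-fraction denominator, which the hand calculation neglects (the difference is negligible here).

```python
# Sketch: recompute chi_CO2 from the solubility, then k_H from p = k_H * chi.
M_H2O = 18.02                         # g/mol
n_water = 1000.0 / M_H2O              # mol of water in 1 L of solution
n_co2 = 3.32e-2                       # mol of dissolved CO2 per L
chi_co2 = n_co2 / (n_water + n_co2)   # mole fraction of CO2

p_co2 = 1.0e5                         # Pa (1 bar partial pressure)
k_H = p_co2 / chi_co2
print(f"chi_CO2 = {chi_co2:.2e}, k_H = {k_H:.2e} Pa")
```

The result agrees with the value worked out in the text, about 1.67 × 10⁸ Pa.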
If a system forms a minimum boiling azeotrope and also has a range of compositions and temperatures at which two liquid phases exist, the phase diagram might look like Figure \(\Page {2; right}\): Another possibility that is common is for two substances to form a two-phase liquid, form a minimum boiling azeotrope, but for the azeotrope to boil at a temperature below which the two liquid phases become miscible. In this case, the phase diagram will look like Figure \(\Page {3}\). In the diagram, the makeup of the system in each region is summarized below the diagram. The point e indicates the azeotrope composition and boiling temperature. Within each two-phase region (III, IV, and the two-phase liquid region), the lever rule will apply to describe the composition of each phase present. So, for example, the system with the composition and temperature represented by point b (a single-phase liquid which is mostly compound A, designated by the composition at point a, and vapor with a composition designated by that at point c) will be described by the lever rule using the lengths of the corresponding tie-line segments.
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(LibreTexts)/24%3A_Solutions_I_-_Volatile_Solutes/24.02%3A_The_Gibbs-Duhem_Equation |
At constant temperature and pressure, there is no net change in chemical potential for the system at equilibrium: \[\sum_i n_i d\mu_i = 0 \label{eq1}\] This is the Gibbs-Duhem relationship and it places a compositional constraint upon any changes in the chemical potential in a mixture at constant temperature and pressure for a given composition. This result is easily derived when one considers that \(\mu_i\) represents the partial molar Gibbs function for component \(i\). And as with other partial molar quantities: \[ G_\text{tot} = \sum_i n_i \mu_i\] Taking the derivative of both sides yields: \[ dG_\text{tot} = \sum_i n_i d \mu_i + \sum_i \mu_i d n_i \] But \(dG\) can also be expressed as: \[dG = Vdp - SdT + \sum_i \mu_i d n_i\] Setting these two expressions equal to one another: \[ \sum_i n_i d \mu_i + \cancel{ \sum_i \mu_i d n_i } = Vdp - SdT + \cancel{ \sum_i \mu_i d n_i} \] And after canceling terms, one gets: \[ \sum_i n_i d \mu_i = Vdp - SdT \label{eq41}\] For a system at constant temperature and pressure: \[Vdp - SdT = 0 \label{eq42}\] Substituting Equation \ref{eq42} into \ref{eq41} results in the Gibbs-Duhem equation (Equation \ref{eq1}). This expression relates how the chemical potential can change for a given composition while the system maintains equilibrium. 
For a binary system consisting of two components, \(A\) and \(B\): \[ n_Bd\mu_B + n_Ad\mu_A = 0 \] Rearranging: \[ d\mu_B = -\dfrac{n_A}{n_B} d\mu_A\] Consider a Gibbs free energy that only includes \(\mu\)-\(n\) conjugate variables, as we obtained it from our scaling experiment at constant \(T\) and \(P\): \[G = \mu_An_A + \mu_Bn_B \nonumber \] Consider a change in \(G\): \[dG = d(\mu_An_A) + d(\mu_Bn_B) \nonumber \] \[dG = n_Ad\mu_A+\mu_Adn_A + n_Bd\mu_B+\mu_Bdn_B \nonumber \] However, if we simply write out a change in \(G\) due to the number of moles we have: \[dG = \mu_Adn_A +\mu_Bdn_B \nonumber \] Consequently the other terms must add up to zero: \[0 = n_Ad\mu_A+ n_Bd\mu_B \nonumber \] \[d\mu_A= - \dfrac{n_B}{n_A}d\mu_B \nonumber \] \[d\mu_A= - \dfrac{x_B}{x_A}d\mu_B \nonumber \] In the last step we have simply divided both denominator and numerator by the total number of moles. This expression is the Gibbs-Duhem equation for a 2-component system. It relates the change in one thermodynamic potential (\(d\mu_A\)) to the other (\(d\mu_B\)). In the ideal case we have: \[\mu_B = \mu^*_B + RT \ln x_B \nonumber \] Gibbs-Duhem gives: \[d\mu_A = - \dfrac{x_B}{x_A} d\mu_B \nonumber \] As: \[d\mu_B = \dfrac{RT}{x_B}\, dx_B \nonumber \] with \(x_B\) being the only active variable at constant temperature, we get: \[d\mu_A = - \dfrac{x_B}{x_A} \dfrac{RT}{x_B}\, dx_B = -\dfrac{RT}{x_A}\, dx_B = \dfrac{RT}{x_A}\, dx_A \nonumber \] where the last equality uses \(dx_A = -dx_B\) for a binary mixture. If we now wish to find \(\mu_A\) we need to integrate \(d\mu_A\), e.g. from pure \(A\) (\(x_A = 1\)) to \(x_A\). This produces: \[\mu_A = \mu^*_A + RT \ln x_A \nonumber \] This demonstrates that if Raoult's law holds over the whole range for one component, it also holds for the other over the whole range.
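The Gibbs-Duhem constraint for the ideal binary case can also be verified numerically: with \(\mu_i = \mu_i^* + RT \ln x_i\), the combination \(x_A\,d\mu_A + x_B\,d\mu_B\) should vanish at every composition. A finite-difference sketch (my addition; the reference potentials are set to zero since they drop out of the differentials):

```python
# Sketch: check x_A * dmu_A/dx_B + x_B * dmu_B/dx_B = 0 for an ideal mixture.
import math

R, T = 8.314, 298.15   # J/(mol K), K
dx = 1e-6

def mu_A(x_B): return R * T * math.log(1.0 - x_B)   # mu_A* taken as 0
def mu_B(x_B): return R * T * math.log(x_B)         # mu_B* taken as 0

for x_B in (0.1, 0.5, 0.9):
    x_A = 1.0 - x_B
    # central finite differences for dmu_i/dx_B
    dmu_A = (mu_A(x_B + dx) - mu_A(x_B - dx)) / (2 * dx)
    dmu_B = (mu_B(x_B + dx) - mu_B(x_B - dx)) / (2 * dx)
    total = x_A * dmu_A + x_B * dmu_B   # per mole of mixture
    assert abs(total) < 1e-4 * R * T
    print(f"x_B = {x_B}: x_A*dmu_A + x_B*dmu_B = {total:.2e} J/mol")
```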
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Physical_Chemistry_(Fleming)/04%3A_Putting_the_First_Law_to_Work/4.02%3A_Total_and_Exact_Differentials |
The fact that we can define the constant volume heat capacity as \[ C_V \equiv \left( \dfrac{\partial U}{\partial T} \right)_V \label{compress} \] suggests that the internal energy depends very intimately on two variables: volume and temperature. In fact, we will see that for a single component system, state variables are always determined when two state variables are defined. In the case of internal energy, we might write \(U=f(V,T)\) or \(U(V,T)\). This suggests that the way to change \(U\) is to change either \(V\) or \(T\) (or both!) And if there is a mathematical function that relates the internal energy to these two variables, it should be easy to see how it changes when either (or both!) are changed. This can be written as a total differential: \[ dU = \left( \dfrac{\partial U}{\partial V} \right)_T dV + \left( \dfrac{\partial U}{\partial T} \right)_V dT \label{total} \] Even without knowing the actual mathematical function relating the variables to the property, we can imagine how to calculate changes in the property from this expression. \[ \Delta U = \int _{V_1}^{V_2} \left( \dfrac{\partial U}{\partial V} \right)_T dV + \int _{T_1}^{T_2} \left( \dfrac{\partial U}{\partial T} \right)_V dT \nonumber \] In words, this implies that we can think of a change in \(U\) occurring due to an isothermal change in volume followed by an isochoric change in temperature. And all we need to know is the slope of the surface in each pathway direction. There are a couple of very important experiments people have done to explore the measurement of those kinds of slopes. Understanding them, it turns out, depends on two very important physical properties of substances. We have seen that the total differential of \(U(V, T)\) can be expressed as Equation \ref{total}. 
In general, if a differential can be expressed as \[ df(x,y) = P\,dx + Q\,dy \nonumber \] the differential will be an exact differential if it follows the Euler relation \[\left( \dfrac{\partial P}{\partial y} \right)_x = \left( \dfrac{\partial Q}{\partial x} \right)_y \label{euler} \] In order to illustrate this concept, consider \(p(V, T)\) using the ideal gas law. \[p= \dfrac{RT}{V} \nonumber \] The total differential of \(p\) can be written \[ dp = \left( - \dfrac{RT}{V^2} \right) dV + \left( \dfrac{R}{V} \right) dT \label{Eq10} \] Does Equation \ref{Eq10} follow the Euler relation (Equation \ref{euler})? Let’s confirm! \[ \begin{align*} \left[ \dfrac{\partial}{\partial T} \left( - \dfrac{RT}{V^2} \right) \right]_V &\stackrel{?}{=} \left[ \dfrac{\partial}{\partial V} \left( \dfrac{R}{V} \right) \right]_T \\[4pt] \left( - \dfrac{R}{V^2} \right) &\stackrel{\checkmark }{=} \left( - \dfrac{R}{V^2} \right) \end{align*} \nonumber \] \(dp\) is, in fact, an exact differential. The differentials of all of the thermodynamic functions that are state functions will be exact. Heat and work are not exact differentials, and \(dw\) and \(dq\) are called inexact differentials instead.
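The Euler test for \(dp\) can also be checked by finite differences rather than symbolically. A small sketch (my addition; the sample values of V and T are illustrative):

```python
# Sketch: numerically verify (dP/dT)_V = (dQ/dV)_T for dp = P dV + Q dT
# with p = RT/V, using central finite differences.
R = 8.314   # J/(mol K)

def P(V, T): return -R * T / V ** 2    # (partial p / partial V)_T
def Q(V, T): return R / V              # (partial p / partial T)_V

V, T, h = 0.0244, 298.15, 1e-6         # m^3/mol, K, step size
dP_dT = (P(V, T + h) - P(V, T - h)) / (2 * h)
dQ_dV = (Q(V + h, T) - Q(V - h, T)) / (2 * h)
print(f"dP/dT = {dP_dT:.6g}, dQ/dV = {dQ_dV:.6g}")
assert abs(dP_dT - dQ_dV) < 1e-6 * abs(dQ_dV)
```

Both mixed partials come out to \(-R/V^2\), as the symbolic check in the text shows.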
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Reactions/Reactivity/Electrophilic_Aromatic_Substitution/AR6._Solutions_to_Selected_Problems |
In the case of uncatalyzed bromination reactions, there is clear evidence that the Br-Br bond-breaking step does not start the reaction off. If that were the first step, there would presumably be an equilibrium between Br\(_2\) and Br\(^+\)/Br\(^-\) ions. That equilibrium would be shifted back toward Br\(_2\) if bromide salts were added. In that case, the amount of bromine cation would be suppressed and the reaction would slow down. No such salt effects are observed, however. That evidence suggests that, in the uncatalyzed reaction, the aromatic reacts directly with Br\(_2\). In each case, a base must remove the proton from the cationic intermediate. An anion that would be present in solution has been chosen for this role. a) b) c) d) The primary cation formed is very unstable. As a result, there is a high barrier to cation formation. The cation that results is stabilized via π-donation from oxygen. This is a substituted alkyl group. An alkyl group should be moderately activating, but the presence of a halogen exerts an inductive electron-withdrawing effect. The cation-stabilizing effect of the alkyl substituent is completely counteracted by the halogen. a) activating b) deactivating c) activating d) deactivating e) deactivating The tertiary cations that result during ortho and para substitution offer extra stability, leading to preferential formation of these cations. The π-donation that occurs in the cations arising from ortho and para substitution results in extra stability, leading to preferential formation of these cations. The cation directly adjacent to the carbonyl is destabilized by the electron-withdrawing effect of the ketone. By default, the other intermediate is preferentially formed. The π-donation that occurs in the cations arising from ortho and para substitution results in extra stability, leading to preferential formation of these cations. In cases leading to mixtures of ortho and para products, only one product was chosen, based on minimal steric interactions.
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Reactions/Substitution_Reactions/Carboxyl_Substitution/CX12._Solutions_For_Selected_Problems |
These heteroatoms are all found in the upper right-hand corner of the periodic table. They are all pretty electronegative and they all have lone pairs. We might expect carboxyloids with the most electronegative elements attached to the carbonyl to be the most reactive and least stable towards substitution (in other words, carboxyloids with the most electronegative heteroatoms would become substituted the most easily). In that case, we would predict that the carboxyloids with the most electronegative substituent (oxygen) would be the most reactive. There are a number of different kinds and we will think about how they relate to each other shortly. After the oxygen derivatives we would predict either the nitrogen derivatives or the chloride, depending on what electronegativity scale we happen to use (remember, electronegativity is not an experimentally pure property, but the result of a calculation that can be performed in different ways). The sulfur derivative would be least reactive. There are still several different oxygen derivatives to compare: carboxylic acids (OH), carboxylates (O-), esters (OR, in which R is an alkyl or carbon chain) and acid anhydrides (OC=O). The easiest to differentiate is the carboxylate, because of its negative charge. It must be less attractive to a nucleophile than the other oxygen derivatives, because it would offer more repulsion to an incoming lone pair. However, we can't really predict whether it would be any less reactive than the nitrogen, chlorine or sulfur analogues, because who knows whether the charge or the nature of the atom matters more? As it happens, the charge probably matters more. We learn that simply by looking at the experimental trend and seeing that the carboxylate is the least reactive of all the carboxyloids. 
Turning to the other three oxygen derivatives, it would be difficult to differentiate between the effect of a remote hydrogen atom versus an alkyl chain in the ester versus the carboxylic acid, so we'll say those two are about the same. On the other hand, the additional electron-withdrawing carbonyl group in the acid anhydride probably has a profound effect, so we would expect that compound to attract nucleophiles more strongly. Of course, the series we have produced above is not the "right answer". It does not match the experimentally observed series of carboxyloid reactivities. Nevertheless, it is very useful in terms of building an understanding of carboxyloids. It tells us that electronegativity may play a role here, but that it can't be the only factor. Some other factor is putting some of the derivatives out of order. In particular, the acid chloride (C=OCl) and the thioester (C=OSR) do not fit. Electronegativity is an obvious factor that could influence an atom's ability to π-donate, but we just looked at that factor in the previous section, so let's look at another atomic property instead. Of course, different atoms have different sizes. In particular, if we look at the atoms involved in carboxyloid substituents, we can divide them into 2nd row atoms and 3rd row atoms. It's actually well-documented that the degree of overlap between two orbitals influences how well they bond together. Since carbon is in the second row, it is about the same size as, and overlaps pretty well with, other second row atoms. Third row atoms are a little too big, on the other hand. That factor breaks the carboxyloids into two different groups. Assuming π-donation is a major factor, sulfur and chlorine may be placed above the others in terms of reactivity. They cannot donate as well as oxygen or nitrogen can. From there, differences among the atoms from the same row may be sorted out based on electronegativity differences. Amide bonds are among the most stable carboxyloids possible. 
That stability makes them well-suited to form useful structures that will not decompose easily. Remember, any change that occurs in matter occurs through chemical reactions, including the formation and decomposition of biomaterials. Shutting down a potential chemical reaction means a material will be more durable.

Because acid chlorides are at the top of the carboxyloid reactivity diagram (the ski hill), and other halides are likely to be similar in reactivity to the chloride, this reaction would be uphill from the other carboxyloids. Amides and carboxylates are the least reactive carboxyloids, so it might not be too surprising that they do not react with these nucleophiles. Acid chlorides typically react with these cuprate reagents. Borohydrides could presumably react with acid chlorides, anhydrides and thioesters, which are the most reactive carboxyloids. They probably can't react with amides or carboxylate ions, which are even farther downhill than esters.

This change in charge results because, although amines are easily protonated, amides are not. Protonation of an amide would result in a cation adjacent to the very positive carbonyl carbon, leading to a buildup of localized positive charge. That wouldn't be easy. Furthermore, the amide nitrogen is not very likely to donate its electrons to a proton in the first place. Its electrons are too busy: they are tied up in conjugation with the carbonyl, so they really aren't available to act as the lone pair of a base.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Principles_of_Modern_Chemistry_(Oxtoby_et_al.)/Unit_3%3A_The_States_of_Matter/11%3A_Solutions/11.1%3A_Composition_of_Solutions |
There are several different ways to quantitatively describe the concentration of a solution. For example, molarity is a useful way to describe solution concentrations for reactions that are carried out in solution. Mole fractions are used not only to describe gas concentrations but also to determine the vapor pressures of mixtures of similar liquids. Example \(\Page {1}\) reviews the methods for calculating the molarity and mole fraction of a solution when the masses of its components are known.

Commercial vinegar is essentially a solution of acetic acid in water. A bottle of vinegar has 3.78 g of acetic acid per 100.0 g of solution. Assume that the density of the solution is 1.00 g/mL. Calculate the molarity and the mole fraction of acetic acid in the solution.

Given: mass of substance and mass and density of solution

Asked for: molarity and mole fraction

Solution:

The molarity is the number of moles of acetic acid per liter of solution. We can calculate the number of moles of acetic acid as its mass divided by its molar mass. \[ \begin{align*} \text{moles } \ce{CH_3CO_2H} &=\dfrac{3.78\; \cancel{\ce{g}}\; \ce{CH_3CO_2H}}{60.05\; \cancel{\ce{g}}/\ce{mol}} \\[4pt] &=0.0629 \; \ce{mol} \end{align*} \nonumber \] The volume of the solution equals its mass divided by its density. \[ \begin{align*} \text{volume} &=\dfrac{\text{mass}}{\text{density}} \\[4pt] &=\dfrac{100.0\; \cancel{\ce{g}}\; \text{solution}}{1.00\; \cancel{\ce{g}}/\ce{mL}}=100\; mL \end{align*} \nonumber \] Then calculate the molarity directly. \[ \begin{align*} \text{molarity of } \ce{CH_3CO_2H} &=\dfrac{\text{moles } \ce{CH3CO2H} }{\text{liter solution}} \\[4pt] &=\dfrac{0.0629\; mol\; \ce{CH_3CO_2H}}{(100\; \cancel{\ce{mL}})(1\; L/1000\; \cancel{\ce{mL}})}=0.629\; M \; \ce{CH_3CO_2H} \end{align*} \nonumber \] This result makes intuitive sense. If 100.0 g of aqueous solution (equal to 100 mL) contains 3.78 g of acetic acid, then 1 L of solution will contain 37.8 g of acetic acid, which is a little more than \(\frac{1}{2}\) mole.
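The arithmetic of this worked example can be sketched in a few lines of Python (an illustrative script, not part of the original text; the function name is my own):

```python
# Molarity from solute mass, solute molar mass, and solution mass/density.
def molarity(mass_solute_g, molar_mass_g_mol, mass_solution_g, density_g_mL):
    moles = mass_solute_g / molar_mass_g_mol          # mol of solute
    volume_L = mass_solution_g / density_g_mL / 1000  # mL -> L
    return moles / volume_L

# Vinegar: 3.78 g acetic acid (60.05 g/mol) per 100.0 g solution, d = 1.00 g/mL
M_acetic = molarity(3.78, 60.05, 100.0, 1.00)   # ~0.629 M, matching the text
```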
Keep in mind, though, that the mass and volume of a solution are related by its density; concentrated aqueous solutions often have densities greater than 1.00 g/mL.

To calculate the mole fraction of acetic acid in the solution, we need to know the number of moles of both acetic acid and water. The number of moles of acetic acid is 0.0629 mol, as calculated above. We know that 100.0 g of vinegar contains 3.78 g of acetic acid; hence the solution also contains (100.0 g − 3.78 g) = 96.2 g of water. We have \[moles\; \ce{H_2O}=\dfrac{96.2\; \cancel{\ce{g}}\; \ce{H_2O}}{18.02\; \cancel{\ce{g}}/mol}=5.34\; mol\; \ce{H_2O}\nonumber \] The mole fraction \(\chi\) of acetic acid is the ratio of the number of moles of acetic acid to the total number of moles of substances present: \[ \begin{align*} \chi_{\ce{CH3CO2H}} &=\dfrac{moles\; \ce{CH_3CO_2H}}{moles \; \ce{CH_3CO_2H} + moles\; \ce{H_2O}} \\[4pt] &=\dfrac{0.0629\; mol}{0.0629 \;mol + 5.34\; mol} \\[4pt] &=0.0116=1.16 \times 10^{−2} \end{align*} \nonumber \] This answer makes sense, too. There are approximately 100 times as many moles of water as moles of acetic acid, so the ratio should be approximately 0.01.

A solution of \(\ce{HCl}\) gas dissolved in water (sold commercially as “muriatic acid,” a solution used to clean masonry surfaces) has 20.22 g of \(\ce{HCl}\) per 100.0 g of solution, and its density is 1.10 g/mL. Calculate the molarity and the mole fraction of \(\ce{HCl}\) in this solution.

Answer: 6.10 M HCl; \(\chi_{HCl} = 0.111\)

The concentration of a solution can also be described by its molality (m), the number of moles of solute per kilogram of solvent: \[ \text{molality (m)} =\dfrac{\text{moles solute}}{\text{kilogram solvent}} \label{Eq1} \] Molality, therefore, has the same numerator as molarity (the number of moles of solute) but a different denominator (kilogram of solvent rather than liter of solution). For dilute aqueous solutions, the molality and molarity are nearly the same because dilute solutions are mostly solvent.
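The muriatic-acid exercise above can be checked the same way (an illustrative sketch; the function name is mine):

```python
# Mole fraction of the solute in a two-component solution.
def mole_fraction(mass_solute, mm_solute, mass_solvent, mm_solvent):
    n_solute = mass_solute / mm_solute
    n_solvent = mass_solvent / mm_solvent
    return n_solute / (n_solute + n_solvent)

# 20.22 g HCl (36.46 g/mol) per 100.0 g solution; the rest is water (18.02 g/mol)
x_hcl = mole_fraction(20.22, 36.46, 100.0 - 20.22, 18.02)   # ~0.111

# Molarity: 100.0 g of solution at 1.10 g/mL occupies ~90.9 mL
M_hcl = (20.22 / 36.46) / (100.0 / 1.10 / 1000)             # ~6.10 M
```

Both values reproduce the stated answers (6.10 M, χ = 0.111).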
Thus because the density of water under standard conditions is very close to 1.0 g/mL, the volume of 1.0 kg of \(H_2O\) under these conditions is very close to 1.0 L, and a 0.50 M solution of \(KBr\) in water, for example, has approximately the same concentration as a 0.50 m solution. Another common way of describing concentration is as the ratio of the mass of the solute to the total mass of the solution. The result can be expressed as mass percentage, parts per million (ppm), or parts per billion (ppb): \[ \begin{align} \text{mass percentage}&=\dfrac{\text{mass of solute}}{\text{mass of solution}} \times 100 \label{Eq2} \\[4pt] \text{parts per million (ppm)} &=\dfrac{\text{mass of solute}}{\text{mass of solution}} \times 10^{6} \label{Eq3} \\[4pt] \text{parts per billion (ppb)}&=\dfrac{\text{mass of solute}}{\text{mass of solution}} \times 10^{9} \label{Eq4} \end{align} \] In the health sciences, the concentration of a solution is often expressed as parts per thousand (ppt), indicated as a proportion. For example, adrenalin, the hormone produced in high-stress situations, is available in a 1:1000 solution, or one gram of adrenalin per 1000 g of solution. The labels on bottles of commercial reagents often describe the contents in terms of mass percentage. Sulfuric acid, for example, is sold as a 95% aqueous solution, or 95 g of \(\ce{H_2SO_4}\) per 100 g of solution. Parts per million and parts per billion are used to describe concentrations of highly dilute solutions. These measurements correspond to milligrams and micrograms of solute per kilogram of solution, respectively. For dilute aqueous solutions, this is equal to milligrams and micrograms of solute per liter of solution (assuming a density of 1.0 g/mL). Several years ago, millions of bottles of mineral water were contaminated with benzene at ppm levels. This incident received a great deal of attention because the lethal concentration of benzene in rats is 3.8 ppm. 
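The three mass-ratio units defined above differ only in their scale factor, which a short sketch makes explicit (illustrative, not from the text):

```python
def mass_percentage(m_solute, m_solution):
    return m_solute / m_solution * 100

def ppm(m_solute, m_solution):
    return m_solute / m_solution * 1e6

def ppb(m_solute, m_solution):
    return m_solute / m_solution * 1e9

# 95 g of H2SO4 per 100 g of solution -> 95% by mass
pct = mass_percentage(95.0, 100.0)

# 12.7 mg of benzene per kilogram (1000 g) of solution -> 12.7 ppm
benzene_ppm = ppm(12.7e-3, 1000.0)
```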
A 250 mL sample of mineral water has 12.7 ppm of benzene. Because the contaminated mineral water is a very dilute aqueous solution, we can assume that its density is approximately 1.00 g/mL. Calculate (a) the molarity of benzene and (b) the mass of benzene in the sample.

Given: volume of sample, solute concentration, and density of solution

Asked for: molarity of solute and mass of solute in 250 mL

Solution:

(a) To calculate the molarity of benzene, we need to determine the number of moles of benzene in 1 L of solution. We know that the solution contains 12.7 ppm of benzene. Because 12.7 ppm is equivalent to 12.7 mg/1000 g of solution and the density of the solution is 1.00 g/mL, the solution contains 12.7 mg of benzene per liter (1000 mL). The molarity is therefore \[\begin{align*} \text{molarity}&=\dfrac{\text{moles}}{\text{liter solution}} \\[4pt] &=\dfrac{(12.7\; \cancel{mg}) \left(\frac{1\; \cancel{g}}{1000\; \cancel{mg}}\right)\left(\frac{1\; mol}{78.114\; \cancel{g}}\right)}{1.00\; L} \\[4pt] &=1.63 \times 10^{-4} M\end{align*} \nonumber \]

(b) We are given that there are 12.7 mg of benzene per 1000 g of solution, which is equal to 12.7 mg/L of solution. Hence the mass of benzene in 250 mL (250 g) of solution is \[\begin{align*} \text{mass of benzene} &=\dfrac{(12.7\; mg\; \text{benzene})(250\; \cancel{mL})}{1000\; \cancel{mL}} \\[4pt] &=3.18\; mg \\[4pt] &=3.18 \times 10^{-3}\; g\; \text{benzene} \end{align*} \nonumber \]

The maximum allowable concentration of lead in drinking water is 9.0 ppb. Calculate the molarity of lead and the mass of lead in a 250 mL sample of drinking water at this concentration.

Answer: \(4.3 \times 10^{-8}\; M\); \(2 \times 10^{-6}\; g\)

How do chemists decide which units of concentration to use for a particular application? Although molarity is commonly used to express concentrations for reactions in solution or for titrations, it does have one drawback—molarity is the number of moles of solute divided by the volume of the solution, and the volume of a solution depends on its density, which is a function of temperature.
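Under the dilute-solution assumption used in this example (density ≈ 1.00 g/mL, so 1 kg of solution occupies about 1 L), ppm converts directly to molarity; a sketch (illustrative only, the helper name is mine):

```python
def ppm_to_molarity(conc_ppm, molar_mass, density_g_mL=1.00):
    # ppm = mg of solute per kg of solution; with density ~1 g/mL,
    # 1 kg of solution occupies ~1 L, so ppm is ~mg per L
    mg_per_L = conc_ppm * density_g_mL
    return mg_per_L / 1000.0 / molar_mass   # mg -> g, then g -> mol

M_benzene = ppm_to_molarity(12.7, 78.114)   # ~1.63e-4 M, as in the example
M_lead = ppm_to_molarity(9.0e-3, 207.2)     # 9.0 ppb of lead -> ~4.3e-8 M
```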
Because volumetric glassware is calibrated at a particular temperature, typically 20°C, the molarity may differ from the original value by several percent if a solution is prepared or used at a significantly different temperature, such as 40°C or 0°C. For many applications this may not be a problem, but for precise work these errors can become important. In contrast, mole fraction, molality, and mass percentage depend on only the masses of the solute and solvent, which are independent of temperature. Mole fraction is not very useful for experiments that involve quantitative reactions, but it is convenient for calculating the partial pressure of gases in mixtures, as discussed previously. Mole fractions are also useful for calculating the vapor pressures of certain types of solutions. Molality is particularly useful for determining how properties such as the freezing or boiling point of a solution vary with solute concentration. Because mass percentage and parts per million or billion are simply different ways of expressing the ratio of the mass of a solute to the mass of the solution, they enable us to express the concentration of a substance even when the molecular mass of the substance is unknown. Units of ppb or ppm are also used to express very low concentrations, such as those of residual impurities in foods or of pollutants in environmental studies. Table \(\Page {1}\) summarizes the different units of concentration and typical applications for each. When the molar mass of the solute and the density of the solution are known, it becomes relatively easy with practice to convert among the units of concentration we have discussed, as illustrated in Example \(\Page {3}\). Vodka is essentially a solution of ethanol in water. Typical vodka is sold as “80 proof,” which means that it contains 40.0% ethanol by volume. The density of pure ethanol is 0.789 g/mL at 20°C. 
If we assume that the volume of the solution is the sum of the volumes of the components (which is not strictly correct), calculate the following for the ethanol in 80-proof vodka: (a) the mass percentage, (b) the mole fraction, (c) the molarity, and (d) the molality.

Given: volume percent and density

Asked for: mass percentage, mole fraction, molarity, and molality

Strategy: The key to this problem is to use the density of pure ethanol to determine the mass of ethanol (\(CH_3CH_2OH\)), abbreviated as EtOH, in a given volume of solution. We can then calculate the number of moles of ethanol and the concentration of ethanol in any of the required units.

Solution:

A Because we are given a percentage by volume, we assume that we have 100.0 mL of solution. The volume of ethanol will thus be 40.0% of 100.0 mL, or 40.0 mL of ethanol, and the volume of water will be 60.0% of 100.0 mL, or 60.0 mL of water. The mass of ethanol is obtained from its density: \[mass\; of\; EtOH=(40.0\; \cancel{mL})\left(\dfrac{0.789\; g}{\cancel{mL}}\right)=31.6\; g\; EtOH\nonumber \] If we assume the density of water is 1.00 g/mL, the mass of water is 60.0 g. We now have all the information we need to calculate the concentration of ethanol in the solution.

B The mass percentage of ethanol is the ratio of the mass of ethanol to the total mass of the solution, expressed as a percentage: \[ \begin{align*} \%EtOH &=\left(\dfrac{mass\; of\; EtOH}{mass\; of\; solution}\right)(100) \\[4pt] &=\left(\dfrac{31.6\; \cancel{g}\; EtOH}{31.6\; \cancel{g} \;EtOH +60.0\; \cancel{g} \; H_2O} \right)(100) \\[4pt]&= 34.5\%\end{align*} \nonumber \]

C The mole fraction of ethanol is the ratio of the number of moles of ethanol to the total number of moles of substances in the solution.
Because 40.0 mL of ethanol has a mass of 31.6 g, we can use the molar mass of ethanol (46.07 g/mol) to determine the number of moles of ethanol in 40.0 mL: \[ \begin{align*} moles\; \ce{CH_3CH_2OH}&=(31.6\; \cancel{g\; \ce{CH_3CH_2OH}}) \left(\dfrac{1\; mol}{46.07\; \cancel{g\; \ce{CH_3CH_2OH}}}\right) \\[4pt] &=0.686 \;mol\; \ce{CH_3CH_2OH} \end{align*} \nonumber \] Similarly, the number of moles of water is \[ moles \;\ce{H_2O}=(60.0\; \cancel{g \; \ce{H_2O}}) \left(\dfrac{1 \;mol\; \ce{H_2O}}{18.02\; \cancel{g\; \ce{H_2O}}}\right)=3.33\; mol\; \ce{H_2O}\nonumber \] The mole fraction of ethanol is thus \[ \chi_{\ce{CH_3CH_2OH}}=\dfrac{0.686\; \cancel{mol}}{0.686\; \cancel{mol} + 3.33\;\cancel{ mol}}=0.171\nonumber \]

D The molarity of the solution is the number of moles of ethanol per liter of solution. We already know the number of moles of ethanol per 100.0 mL of solution, so the molarity is \[ M_{\ce{CH_3CH_2OH}}=\left(\dfrac{0.686\; mol}{100\; \cancel{mL}}\right)\left(\dfrac{1000\; \cancel{mL}}{1\; L}\right)=6.86\; M\nonumber \] The molality of the solution is the number of moles of ethanol per kilogram of solvent. Because we know the number of moles of ethanol in 60.0 g of water, the calculation is again straightforward: \[ m_{\ce{CH_3CH_2OH}}=\left(\dfrac{0.686\; mol\; EtOH}{60.0\; \cancel{g}\; H_2O } \right) \left(\dfrac{1000\; \cancel{g}}{kg}\right)=\dfrac{11.4\; mol\; EtOH}{kg\; H_2O}=11.4 \;m\nonumber \]

A solution is prepared by mixing 100.0 mL of toluene with 300.0 mL of benzene. The densities of toluene and benzene are 0.867 g/mL and 0.874 g/mL, respectively. Assume that the volume of the solution is the sum of the volumes of the components. Calculate the following for toluene: the mass percentage, the mole fraction, the molarity, and the molality.

Answer: mass percentage toluene = 24.8%; \(\chi_{toluene} = 0.219\); 2.35 M toluene; 3.59 m toluene

Different units are used to express the concentrations of a solution depending on the application. The concentration of a solution is the quantity of solute in a given quantity of solution.
It can be expressed in several ways: molarity (moles of solute per liter of solution); mole fraction, the ratio of the number of moles of solute to the total number of moles of substances present; mass percentage, the ratio of the mass of the solute to the mass of the solution times 100; parts per thousand (ppt), grams of solute per kilogram of solution; parts per million (ppm), milligrams of solute per kilogram of solution; parts per billion (ppb), micrograms of solute per kilogram of solution; and molality (m), the number of moles of solute per kilogram of solvent.
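All of the units summarized above can be generated from the same few inputs, as in the 80-proof vodka example earlier in this section. A consolidated sketch (illustrative, not from the text; small rounding differences from the worked values arise because the text rounds the ethanol mass to 31.6 g first):

```python
# 40.0 mL ethanol (d = 0.789 g/mL, 46.07 g/mol) + 60.0 mL water (d = 1.00 g/mL)
m_etoh = 40.0 * 0.789           # ~31.6 g ethanol
m_water = 60.0 * 1.00           # 60.0 g water
n_etoh = m_etoh / 46.07         # mol ethanol
n_water = m_water / 18.02       # mol water

mass_pct = m_etoh / (m_etoh + m_water) * 100   # ~34.5 %
mole_frac = n_etoh / (n_etoh + n_water)        # ~0.171
molar = n_etoh / 0.100                         # 100.0 mL of solution -> ~6.9 M
molal = n_etoh / (m_water / 1000)              # ~11.4 mol per kg of water
```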
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/08%3A_Solutions/8.02%3A_Thermodynamics_of_Solutions/8.2.2B%3A_8.2.2B%3A_Solutions_of_Gaseous_Solutes_in_Liquid_Solvents |
Make sure you thoroughly understand the following essential ideas: Gases dissolve in liquids, but usually only to a small extent. When a gas dissolves in a liquid, the ability of the gas molecules to move freely throughout the volume of the solvent is greatly restricted. If this latter volume is small, as is often the case, the gas is effectively being compressed. Both of these effects amount to a decrease in the entropy of the gas that is not usually compensated by the entropy increase due to mixing of the two kinds of molecules. Such processes greatly restrict the solubility of gases in liquids.

One important consequence of the entropy decrease when a gas dissolves in a liquid is that the solubility of a gas decreases at higher temperatures; this is in contrast to most other situations, where a rise in temperature usually leads to increased solubility. Bringing a liquid to its boiling point will completely remove a gaseous solute. Some typical gas solubilities, expressed in the number of moles of gas at 1 atm pressure that will dissolve in a liter of water at 25°C, are given below:

As we indicated above, the only gases that are readily soluble in water are those whose polar character allows them to interact strongly with it. Inspection of the above table reveals that ammonia is a champion in this regard. At 0°C, one liter of water will dissolve about 90 g (5.3 mol) of ammonia. The reaction of ammonia with water according to \[\ce{NH_3 + H_2O → NH_4^{+} + OH^{–}}\] makes no significant contribution to its solubility; the equilibrium lies heavily on the left side (as evidenced by the strong odor of ammonia solutions). Only about four out of every 1000 \(\ce{NH3}\) molecules are in the form of ammonium ions at equilibrium. This is truly impressive when one calculates that this quantity of \(\ce{NH3}\) would occupy \((5.3\; mol) \times (22.4\; L\; mol^{-1}) = 119\; L\) at STP. Thus one volume of water will dissolve over 100 volumes of this gas.
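The 119 L figure quoted above, and the compression pressure discussed next, follow from the ideal molar volume and Boyle's law; a small sketch (illustrative only):

```python
# Volume at STP of the ammonia dissolved by 1 L of water at 0 °C
n_nh3 = 90.0 / 17.03     # ~5.3 mol in 90 g of NH3 (molar mass 17.03 g/mol)
v_stp = n_nh3 * 22.4     # L, using the ideal molar volume at STP -> ~119 L

# Boyle's law: pressure needed to squeeze that gas into 1 L at constant T
# P1*V1 = P2*V2, with P1 = 1 atm and V2 = 1 L
p_required = 1.0 * v_stp / 1.0   # ~119 atm
```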
It is even more impressive when you realize that in order to compress 119 L of an ideal gas into a volume of 1 L, a pressure of 119 atm would need to be applied! This, together with the observation that dissolution of ammonia is accompanied by the liberation of a considerable amount of heat, tells us that the high solubility of ammonia is due to the formation of more hydrogen bonds (to \(\ce{H2O}\)) than are broken within the water structure in order to accommodate the \(\ce{NH3}\) molecule. If we actually compress 90 g of pure \(\ce{NH3}\) gas to 1 L, it will liquefy, and the vapor pressure of the liquid would be about 9 atm. In other words, the escaping tendency of \(\ce{NH3}\) molecules from \(\ce{H2O}\) is only about 1/9th of what it is from liquid \(\ce{NH3}\). One way of interpreting this is that the strong intermolecular (dipole-dipole) attractions between \(\ce{NH3}\) and the solvent \(\ce{H2O}\) give rise to a force that has the effect of a negative pressure of 9 atm.

This classic experiment nicely illustrates the high solubility of gaseous ammonia in water. A flask fitted with a tube as shown is filled with ammonia gas and inverted so that the open end of the tube is submerged in a container of water. A small amount of water is pushed up into the flask to get the process started. As the gas dissolves in the water, its pressure is reduced, creating a partial vacuum that draws additional water into the flask. The effect can be made more dramatic by adding an indicator dye such as phenolphthalein to the water, which turns pink as the water emerges from the "fountain" and becomes alkaline.

In old textbooks, ammonia's extraordinarily high solubility in water was incorrectly attributed to the formation of the non-existent compound "ammonium hydroxide" \(\ce{NH4OH}\). Although this formula is still occasionally seen, the name ammonium hydroxide is now used as a synonym for "aqueous ammonia" whose formula is simply \(\ce{NH3}\). As can also be seen in the above table, the gases \(\ce{CO2}\) and \(\ce{SO2}\) also exhibit higher solubilities in water.
The main product in each case is a loosely-bound hydrate of the gas, denoted by \(\ce{CO2 (aq)}\) or \(\ce{SO2 (aq)}\). A very small fraction of the hydrate \(\ce{CO2·H2O}\) then combines to form \(\ce{H2CO3}\). Recall that entropy is a measure of the ability of thermal energy to spread and be shared and exchanged by molecules in the system. Higher temperature exerts a kind of multiplying effect on a positive entropy change by increasing the amount of thermal energy available for sharing.

Have you ever noticed the tiny bubbles that form near the bottom of a container of water when it is placed on a hot stove? These bubbles contain air that was previously dissolved in the water, but reaches its solubility limit as the water is warmed. You can completely rid a liquid of any dissolved gases (including unwanted ones such as \(\ce{Cl2}\) or \(\ce{H2S}\)) by boiling it in an open container. This is quite different from the behavior of most (but not all) solutions of solid or liquid solutes in liquid solvents. The reason for this behavior is the very large entropy increase that gases undergo when they are released from the confines of a condensed phase.

Fresh water at sea level dissolves 14.6 mg of oxygen per liter at 0°C and 8.2 mg/L at 25°C. These saturation levels ensure that fish and other gilled aquatic animals are able to extract sufficient oxygen to meet their respiratory needs. But in actual aquatic environments, the presence of decaying organic matter or nitrogenous runoff can reduce these levels far below saturation. The health and survival of these organisms is severely curtailed when oxygen concentrations fall to around 5 mg/L. The temperature dependence of the solubility of oxygen in water is an important consideration for the well-being of aquatic life; thermal pollution of natural waters (due to the influx of cooling water from power plants) has been known to reduce the dissolved oxygen concentration to levels low enough to kill fish.
The advent of summer temperatures in a river can have the same effect if the oxygen concentration has already been partially depleted by reaction with organic pollutants. The pressure of a gas is a measure of its "escaping tendency" from a phase. So it stands to reason that raising the pressure of a gas in contact with a solvent will cause a larger fraction of it to "escape" into the solvent phase. The direct proportionality of gas solubility to pressure was discovered by William Henry (1775-1836) and is known as Henry's law. It is usually written as \[P = k_H C \label{7b.2.1}\] in which \(P\) is the partial pressure of the gas above the liquid, \(C\) is its concentration in the liquid, and \(k_H\) is the Henry's law constant. For Table 7b.2.X, \(k_H\) is given as \[ k_H = \dfrac{\text{partial pressure of gas in atm}}{\text{concentration in liquid} \; mol \;L^{–1}}\]

Some vendors of bottled waters sell pressurized "oxygenated water" that is (falsely) purported to enhance health and athletic performance by supplying more oxygen to the body. If such water is bottled under an oxygen partial pressure of 2.0 atm, the dissolved oxygen concentration (taking \(k_H = 769\; L\; atm\; mol^{–1}\) for \(\ce{O2}\)) would be \[C = \dfrac{P}{k_H} = \dfrac{2.0\; atm}{769\; L\; atm \;mol^{–1}} = 0.0026\; mol\; L^{–1}\]

Artificially carbonated water was first prepared by Joseph Priestley (who later discovered oxygen) in 1767 and was commercialized in 1783 by Joseph Schweppe, a Swiss-German jeweler. Naturally-carbonated spring waters have long been reputed to have curative values, and these became popular tourist destinations in the 19th century. The term "seltzer water" derives from one such spring in Niederselters, Germany. Of course, carbonation produced by fermentation has been known since ancient times. The tingling sensation that carbonated beverages produce in the mouth comes from the carbonic acid produced when bubbles of carbon dioxide come into contact with the mucous membranes of the mouth and tongue: \[CO_2 + H_2O → H_2CO_3\]
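Henry's law rearranges directly to C = P/k_H; a minimal sketch using the oxygen constant quoted above (illustrative only):

```python
def henry_concentration(p_atm, k_h):
    # C = P / kH, with kH in L·atm/mol as defined in the text
    return p_atm / k_h

K_H_O2 = 769.0   # L·atm/mol for O2 in water at 25 °C (value used in the text)
c_o2 = henry_concentration(2.0, K_H_O2)   # ~0.0026 mol/L at 2.0 atm
```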
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Organic_Chemistry_Lab_Techniques_(Nichols)/02%3A_Chromatography/2.02%3A_Chromatography_Generalities/2.2B%3A_General_Separation_Theory |
The main chromatographic techniques (thin layer chromatography, column chromatography, and gas chromatography) follow the same general principles in terms of how they are able to separate mixtures. In all chromatographic methods, a sample is first applied onto a stationary material that either adsorbs or absorbs the sample: adsorption is when molecules or ions in a sample adhere to a surface, while absorption is when the sample particles penetrate into the interior of another material. A paper towel absorbs water because the water molecules form intermolecular forces (in this case hydrogen bonds) with the cellulose in the paper towel. In chromatography, a sample is typically adsorbed onto a surface, and can form a variety of intermolecular forces with this surface.

After adsorption, the sample is then exposed to a liquid or gas traveling in one direction. The sample may overcome its intermolecular forces with the stationary surface and transfer into the moving material, due to some attraction or sufficient thermal energy. The sample will later readsorb to the stationary material, and transition between the two materials in a constant equilibrium (Equation \ref{1}). If there is to be any separation between components in a mixture, it is crucial that there are many equilibrium "steps" in the process (summarized in Figure 2.3). \[\ce{X}_\text{(stationary)} \leftrightharpoons \ce{X}_\text{(mobile)} \label{1}\]

The material the sample adsorbs onto is referred to as the "stationary phase" because it retains the sample's position. The moving material is called the "mobile phase" because it can cause the sample to move from its original position. The main principle that allows chromatography to separate components of a mixture is that components will spend different amounts of time interacting with the stationary and mobile phases. A compound that spends a large amount of time mobile will move quickly away from its original location, and will separate from a compound that spends a larger amount of time stationary.
The main factor that determines the amount of time spent in each phase is the strength of the intermolecular forces experienced in that phase. If a compound has strong intermolecular forces with the stationary phase it will remain adsorbed for a longer amount of time than a compound that has weaker intermolecular forces. This causes compounds with different strengths of intermolecular forces to move at different rates. How these general ideas apply to each chromatographic technique (thin layer chromatography, column chromatography, and gas chromatography) will be explained in greater detail in each section.
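The idea that separation emerges from many small adsorb/desorb equilibrium steps can be illustrated with a toy "plate" simulation: at each step, a compound's material either stays put (stationary phase) or advances one position (mobile phase), with the split set by its relative affinity for the two phases. This is a schematic sketch of the general principle, not a model taken from the text:

```python
def migrate(frac_mobile, n_steps):
    """Distribute one compound over positions after n equilibrium steps.

    frac_mobile: fraction transferred into the mobile phase at each step.
    Returns a list giving the amount of compound at each position.
    """
    positions = [1.0]  # all material starts at position 0
    for _ in range(n_steps):
        new = [0.0] * (len(positions) + 1)
        for i, amount in enumerate(positions):
            new[i] += amount * (1 - frac_mobile)   # stays adsorbed
            new[i + 1] += amount * frac_mobile     # carried forward
        positions = new
    return positions

# Compound A interacts weakly with the stationary phase; B interacts strongly.
a = migrate(0.8, 50)
b = migrate(0.3, 50)
peak_a = max(range(len(a)), key=a.__getitem__)   # band center of A (~40)
peak_b = max(range(len(b)), key=b.__getitem__)   # band center of B (~15)
```

After 50 steps the two bands are well separated, which is why a large number of equilibrium "steps" is essential for resolution.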
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/26%3A_Structure_of_Organic_Compounds/26.8%3A_From_Molecular_Formula_to_Molecular_Structure |
There are many ways one can go about determining the structure of an unknown organic molecule. Although nuclear magnetic resonance (NMR) and infrared (IR) spectroscopy are the primary ways of determining molecular structures, calculating the degrees of unsaturation is useful information, since knowing the degrees of unsaturation makes it easier to figure out the molecular structure; it helps one double-check the number of \(\pi\) bonds and/or cyclic rings.

In the lab, saturation may be thought of as the point when a solution cannot dissolve any more of a substance added to it. In terms of degrees of unsaturation, a molecule only containing single bonds with no rings is considered saturated. Unlike saturated molecules, unsaturated molecules contain double bond(s), triple bond(s) and/or ring(s). Degree of Unsaturation (DoU) is also known as the index of hydrogen deficiency (IHD). If the molecular formula is given, plug in the numbers into this formula: \[ DoU= \dfrac{2C+2+N-X-H}{2} \]

As stated before, a saturated molecule contains only single bonds and no rings. Another way of interpreting this is that a saturated molecule has the maximum number of hydrogen atoms possible to be an acyclic alkane. Thus, the number of hydrogens can be represented by 2C+2, which is the general molecular representation of an alkane. As an example, for the molecular formula \(\ce{C3H4}\) the number of hydrogens needed for the compound to be saturated is 8 (that is, 2×3+2). The compound has only 4 hydrogens, so it needs 4 more hydrogens in order to be fully saturated. Degrees of unsaturation is equal to 2, or half the number of hydrogens the molecule needs to be classified as saturated. Hence, the DoU formula divides by 2.

The formula subtracts the number of X's because a halogen (X) replaces a hydrogen in a compound. For instance, in chloroethane, \(\ce{C2H5Cl}\), there is one less hydrogen compared to ethane, \(\ce{C2H6}\). For a compound to be saturated, there is one more hydrogen in a molecule when nitrogen is present. Therefore, we add the number of nitrogens (N).
This can be seen with \(\ce{C2H7N}\) compared to \(\ce{C2H6}\). Oxygen and sulfur are not included in the formula because saturation is unaffected by these elements. As seen in alcohols, the same number of hydrogens in ethanol, \(\ce{C2H5OH}\), matches the number of hydrogens in ethane, \(\ce{C2H6}\).

The following chart illustrates the possible combinations of the number of double bond(s), triple bond(s), and/or ring(s) for a given degree of unsaturation. Each row corresponds to a different combination. Remember, the degrees of unsaturation only gives the sum of double bonds, triple bonds and/or rings. For instance, a degree of unsaturation of 3 can contain 3 rings, 2 rings + 1 double bond, 1 ring + 2 double bonds, 1 ring + 1 triple bond, 1 double bond + 1 triple bond, or 3 double bonds.

What is the Degree of Unsaturation for benzene? The molecular formula for benzene is \(\ce{C6H6}\). Thus, DoU = 4, where C=6, N=0, X=0, and H=6. 1 DoU can equal 1 ring or 1 double bond. This corresponds to benzene containing 1 ring and 3 double bonds. However, when given the molecular formula \(\ce{C6H6}\), benzene is only one of many possible structures (isomers). The following structures all have a DoU of 4 and have the same molecular formula as benzene.
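The DoU formula is easy to encode and check against small examples (an illustrative sketch; the compound choices in the comments are mine):

```python
def degrees_of_unsaturation(c, h, n=0, x=0):
    # DoU = (2C + 2 + N - X - H) / 2
    return (2 * c + 2 + n - x - h) / 2

dou_benzene = degrees_of_unsaturation(6, 6)            # C6H6 -> 4
dou_propyne = degrees_of_unsaturation(3, 4)            # C3H4 -> 2
dou_chloroethane = degrees_of_unsaturation(2, 5, x=1)  # C2H5Cl -> 0 (saturated)
```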
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/03%3A__The_Vocabulary_of_Analytical_Chemistry/3.04%3A_Selecting_an_Analytical_Method |
A method is the application of a technique to a specific analyte in a specific matrix. We can develop an analytical method to determine the concentration of lead in drinking water using any of the techniques mentioned in the previous section. A gravimetric method, for example, might precipitate the lead as \(\ce{PbSO4}\) or as \(\ce{PbCrO4}\), and use the precipitate's mass as the analytical signal. Lead forms several soluble complexes, which we can use to design a complexation titrimetric method. As shown earlier, we can use graphite furnace atomic absorption spectroscopy to determine the concentration of lead in drinking water. Finally, lead's multiple oxidation states (\(\ce{Pb^0}\), \(\ce{Pb^{2+}}\), \(\ce{Pb^{4+}}\)) make feasible a variety of electrochemical methods. Ultimately, the requirements of the analysis determine the best method. In choosing among the available methods, we give consideration to some or all of the following design criteria: accuracy, precision, sensitivity, selectivity, robustness, ruggedness, scale of operation, analysis time, availability of equipment, and cost.

Accuracy is how closely the result of an experiment agrees with the "true" or expected result. We can express accuracy as an absolute error, \[e = \text{obtained result} - \text{expected result} \nonumber\] or as a percentage relative error, \[\% e_r = \frac {\text{obtained result} - \text{expected result}} {\text{expected result}} \times 100 \nonumber\]

A method's accuracy depends on many things, including the signal's source, the value of \(k_A\), and the ease of handling samples without loss or contamination. A total analysis technique, such as gravimetry or titrimetry, often produces more accurate results than does a concentration technique because we can measure mass and volume with high accuracy, and because the value of \(k_A\) is known exactly through stoichiometry. Because it is unlikely that we know the true result, we use an expected or accepted result to evaluate accuracy.
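The two error expressions above translate directly into code (an illustrative sketch; the function names and example values are mine):

```python
def absolute_error(obtained, expected):
    # e = obtained result - expected result
    return obtained - expected

def percent_relative_error(obtained, expected):
    # %e_r = (obtained - expected) / expected * 100
    return (obtained - expected) / expected * 100

# e.g. a method reports 9.8 ppm for a standard whose accepted value is 10.0 ppm
e = absolute_error(9.8, 10.0)             # -0.2 (absolute error)
e_r = percent_relative_error(9.8, 10.0)   # -2.0 %
```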
For example, we might use a standard reference material, which has an accepted value, to establish an analytical method’s accuracy. You will find a more detailed treatment of accuracy in , including a discussion of sources of errors. When a sample is analyzed several times, the individual results vary from trial-to-trial. is a measure of this variability. The closer the agreement between individual analyses, the more precise the results. For example, the results shown in the upper half of Figure 3.4.1
for the concentration of K⁺ in a sample of serum are more precise than those in the lower half of Figure 3.4.1
. It is important to understand that precision does not imply accuracy. That the data in the upper half of Figure 3.4.1
are more precise does not mean that the first set of results is more accurate. In fact, neither set of results may be accurate. A method’s precision depends on several factors, including the uncertainty in measuring the signal and the ease of handling samples reproducibly. In most cases we can measure the signal for a total analysis technique with a higher precision than is the case for a concentration method. Confusing accuracy and precision is a common mistake. See Ryder, J.; Clark, A. , , 1–3, and Tomlinson, J.; Dyson, P. J.; Garratt, J. , , 16–23 for discussions of this and other common misconceptions about the meaning of error. A more detailed treatment of precision, including a discussion of sources of errors, appears later in this text. The ability to demonstrate that two samples have different amounts of analyte is an essential part of many analyses. A method’s sensitivity is a measure of its ability to establish that such a difference is significant. Sensitivity is often confused with a method’s detection limit, which is the smallest amount of analyte we can determine with confidence. Confidence is a statistical concept that builds on the idea of a population of results; for this reason, we will postpone our discussion of detection limits until later. For now, the definition of a detection limit given here is sufficient. Sensitivity is equivalent to the proportionality constant, \(k_A\), between the signal and the amount or concentration of analyte [IUPAC Compendium of Chemical Terminology, Electronic version]. If \(\Delta S_A\) is the smallest difference we can measure between two signals, then the smallest detectable difference in the absolute amount or the relative amount of analyte is \[\Delta n_A = \frac {\Delta S_A} {k_A} \quad \text{ or } \quad \Delta C_A = \frac {\Delta S_A} {k_A} \nonumber\] Suppose, for example, that our analytical signal is a measurement of mass using a balance whose smallest detectable increment is ±0.0001 g.
If our method’s sensitivity is 0.200, then our method can conceivably detect a difference in mass of as little as \[\Delta n_A = \frac {\pm 0.0001 \text{ g}} {0.200} = \pm 0.0005 \text{ g} \nonumber\] For two methods with the same \(\Delta S_A\), the method with the greater sensitivity—that is, the method with the larger \(k_A\)—is better able to discriminate between smaller amounts of analyte. An analytical method is specific if its signal depends only on the analyte [Persson, B-A; Vessman, J. , , 117–119; Persson, B-A; Vessman, J. , , 526–532]. Although specificity is the ideal, few analytical methods are free from interferences. When an interferent contributes to the signal, we expand our equations for the sample’s signal to include its contribution, \[S_{samp} = S_A + S_I = k_A n_A + k_I n_I \label{3.1}\] \[S_{samp} = S_A + S_I = k_A C_A + k_I C_I \label{3.2}\] where \(S_I\) is the interferent’s contribution to the signal, \(k_I\) is the interferent’s sensitivity, and \(n_I\) and \(C_I\) are the moles (or grams) and the concentration of interferent in the sample, respectively. Selectivity is a measure of a method’s freedom from interferences [Valcárcel, M.; Gomez-Hens, A.; Rubio, S. , , 386–393]. A method’s selectivity for an interferent relative to the analyte is defined by a selectivity coefficient, \[K_{A,I} = \frac {k_I} {k_A} \label{3.3}\] which may be positive or negative depending on the signs of \(k_I\) and \(k_A\). The selectivity coefficient is greater than +1 or less than –1 when the method is more selective for the interferent than for the analyte. Although \(k_A\) and \(k_I\) usually are positive, they can be negative. For example, some analytical methods work by measuring the concentration of a species that remains after it reacts with the analyte. As the analyte’s concentration increases, the concentration of the species that produces the signal decreases, and the signal becomes smaller. If the signal in the absence of analyte is assigned a value of zero, then the subsequent signals are negative.
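The balance example above (\(\Delta S_A\) = ±0.0001 g, \(k_A\) = 0.200) works out as follows; this is just the text’s arithmetic in code form.

```python
# Smallest detectable difference in analyte: Δn_A = ΔS_A / k_A
delta_S_A = 0.0001            # g, the balance's smallest detectable increment
k_A = 0.200                   # the method's sensitivity (from the text)
delta_n_A = delta_S_A / k_A   # 0.0005 g
```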
Determining the selectivity coefficient’s value is easy if we already know the values for \(k_A\) and \(k_I\). As shown by Example 3.4.1
, we also can determine \(K_{A,I}\) by measuring \(S_{samp}\) in the presence of and in the absence of the interferent. A method for the analysis of Ca²⁺ in water suffers from an interference in the presence of Zn²⁺. When the concentration of Ca²⁺ is 100 times greater than that of Zn²⁺, an analysis for Ca²⁺ has a relative error of +0.5%. What is the selectivity coefficient for this method? Since only relative concentrations are reported, we can arbitrarily assign absolute concentrations. To make the calculations easy, we will let \(C_\text{Ca}\) = 100 (arbitrary units) and \(C_\text{Zn}\) = 1. A relative error of +0.5% means the signal in the presence of Zn²⁺ is 0.5% greater than the signal in the absence of Zn²⁺. Again, we can assign values to make the calculation easier. If the signal for Ca²⁺ in the absence of Zn²⁺ is 100 (arbitrary units), then the signal in the presence of Zn²⁺ is 100.5. The value of \(k_\text{Ca}\) is determined using \[k_\text{Ca} = \frac {S_\text{Ca}} {C_\text{Ca}} = \frac {100} {100} = 1 \nonumber\] In the presence of Zn²⁺ the signal is given by Equation \ref{3.2}; thus \[S_{samp} = 100.5 = k_\text{Ca} C_\text{Ca} + k_\text{Zn} C_\text{Zn} = (1 \times 100) + k_\text{Zn} \times 1 \nonumber\] Solving for \(k_\text{Zn}\) gives its value as 0.5. The selectivity coefficient is \[K_\text{Ca,Zn} = \frac {k_\text{Zn}} {k_\text{Ca}} = \frac {0.5} {1} = 0.5 \nonumber\] If you are unsure why, in the above example, the signal in the presence of zinc is 100.5, note that the percentage relative error for this problem is given by \[\frac {\text{obtained result} - 100} {100} \times 100 = +0.5 \% \nonumber\] Solving gives an obtained result of 100.5. Wang and colleagues describe a fluorescence method for the analysis of Ag⁺ in water. When analyzing a solution that contains \(1.0 \times 10^{-9}\) M Ag⁺ and \(1.1 \times 10^{-7}\) M Ni²⁺, the fluorescence intensity (the signal) was +4.9% greater than that obtained for a sample of \(1.0 \times 10^{-9}\) M Ag⁺. What is \(K_\text{Ag,Ni}\) for this analytical method? The full citation for the data in this exercise is Wang, L.; Liang, A.
N.; Chen, H.; Liu, Y.; Qian, B.; Fu, J. , , 170-176. Because the signal for Ag⁺ in the presence of Ni²⁺ is reported as a relative error, we will assign a value of 100 as the signal for \(1 \times 10^{-9}\) M Ag⁺. With a relative error of +4.9%, the signal for the solution of \(1 \times 10^{-9}\) M Ag⁺ and \(1.1 \times 10^{-7}\) M Ni²⁺ is 104.9. The sensitivity for Ag⁺ is determined using the solution that does not contain Ni²⁺; thus \[k_\text{Ag} = \frac {S_\text{Ag}} {C_\text{Ag}} = \frac {100} {1 \times 10^{-9} \text{ M}} = 1.0 \times 10^{11} \text{ M}^{-1} \nonumber\] Substituting into Equation \ref{3.2} the values for \(S_{samp}\), \(k_\text{Ag}\), and the concentrations of Ag⁺ and Ni²⁺ \[104.9 = (1.0 \times 10^{11} \text{ M}^{-1}) \times (1 \times 10^{-9} \text{ M}) + k_\text{Ni} \times (1.1 \times 10^{-7} \text{ M}) \nonumber\] and solving gives \(k_\text{Ni}\) as \(4.5 \times 10^7\) M⁻¹. The selectivity coefficient is \[K_\text{Ag,Ni} = \frac {k_\text{Ni}} {k_\text{Ag}} = \frac {4.5 \times 10^7 \text{ M}^{-1}} {1.0 \times 10^{11} \text{ M}^{-1}} = 4.5 \times 10^{-4} \nonumber\] A selectivity coefficient provides us with a useful way to evaluate an interferent’s potential effect on an analysis. Solving Equation \ref{3.3} for \(k_I\) \[k_I = K_{A,I} \times k_A \label{3.4}\] and substituting into Equation \ref{3.1} and Equation \ref{3.2}, and simplifying gives \[S_{samp} = k_A \{ n_A + K_{A,I} \times n_I \} \label{3.5}\] \[S_{samp} = k_A \{ C_A + K_{A,I} \times C_I \} \label{3.6}\] An interferent will not pose a problem as long as the term \(K_{A,I} \times n_I\) in Equation \ref{3.5} is significantly smaller than \(n_A\), or if \(K_{A,I} \times C_I\) in Equation \ref{3.6} is significantly smaller than \(C_A\). Barnett and colleagues developed a method to determine the concentration of codeine (structure shown below) in poppy plants [Barnett, N. W.; Bowser, T. A.; Geraldi, R. D.; Smith, B. , , 309–317]. As part of their study they evaluated the effect of several interferents.
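The two selectivity-coefficient calculations just worked (Ca²⁺/Zn²⁺ and Ag⁺/Ni²⁺) can be verified numerically; signal values are in the same arbitrary units used in the text.

```python
# Ca2+/Zn2+: C_Ca = 100, C_Zn = 1 (arbitrary units), +0.5% relative error
k_Ca = 100.0 / 100.0                       # sensitivity for Ca2+
k_Zn = (100.5 - k_Ca * 100.0) / 1.0        # from S_samp = k_Ca*C_Ca + k_Zn*C_Zn
K_Ca_Zn = k_Zn / k_Ca                      # 0.5

# Ag+/Ni2+: +4.9% relative error at C_Ag = 1.0e-9 M, C_Ni = 1.1e-7 M
k_Ag = 100.0 / 1.0e-9                      # 1.0e11 M^-1
k_Ni = (104.9 - k_Ag * 1.0e-9) / 1.1e-7    # ≈ 4.5e7 M^-1
K_Ag_Ni = k_Ni / k_Ag                      # ≈ 4.5e-4
```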
For example, the authors found that equimolar solutions of codeine and the interferent 6-methoxycodeine gave signals, respectively, of 40 and 6 (arbitrary units). (a) What is the selectivity coefficient for the interferent, 6-methoxycodeine, relative to that for the analyte, codeine? (b) If we need to know the concentration of codeine with an accuracy of ±0.50%, what is the maximum relative concentration of 6-methoxycodeine that we can tolerate? (a) The signals due to the analyte, \(S_A\), and the interferent, \(S_I\), are \[S_A = k_A C_A \quad \quad S_I = k_I C_I \nonumber\] Solving these equations for \(k_A\) and for \(k_I\), and substituting into Equation \ref{3.3} gives \[K_{A,I} = \frac {S_I / C_I} {S_A / C_A} \nonumber\] Because the concentrations of analyte and interferent are equimolar (\(C_A = C_I\)), the selectivity coefficient is \[K_{A,I} = \frac {S_I} {S_A} = \frac {6} {40} = 0.15 \nonumber\] (b) To achieve an accuracy of better than ±0.50% the term \(K_{A,I} \times C_I\) in Equation \ref{3.6} must be less than 0.50% of \(C_A\); thus \[K_{A,I} \times C_I \le 0.0050 \times C_A \nonumber\] Solving this inequality for the ratio \(C_I/C_A\) and substituting in the value for \(K_{A,I}\) from part (a) gives \[\frac {C_I} {C_A} \le \frac {0.0050} {K_{A,I}} = \frac {0.0050} {0.15} = 0.033 \nonumber\] Therefore, the concentration of 6-methoxycodeine must be less than 3.3% of codeine’s concentration. When a method’s signal is the result of a chemical reaction—for example, when the signal is the mass of a precipitate—there is a good chance that the method is not very selective and that it is susceptible to an interference. Mercury(II) also is an interferent in the fluorescence method for Ag⁺ developed by Wang and colleagues. The selectivity coefficient, \(K_\text{Ag,Hg}\), has a value of \(-1.0 \times 10^{-3}\). (a) What is the significance of the selectivity coefficient’s negative sign? (b) Suppose you plan to use this method to analyze solutions with concentrations of Ag⁺ no smaller than 1.0 nM.
What is the maximum concentration of Hg²⁺ you can tolerate if your percentage relative errors must be less than ±1.0%? (a) A negative value for \(K_\text{Ag,Hg}\) means that the presence of Hg²⁺ decreases the signal from Ag⁺. (b) In this case we need to consider an error of –1%, since the effect of Hg²⁺ is to decrease the signal from Ag⁺. To achieve this error, the term \(K_{A,I} \times C_I\) in Equation \ref{3.6} must be equal to –1% of \(C_A\); thus \[K_\text{Ag,Hg} \times C_\text{Hg} = -0.01 \times C_\text{Ag} \nonumber\] Substituting in known values for \(K_\text{Ag,Hg}\) and \(C_\text{Ag}\), we find that the maximum concentration of Hg²⁺ is \(1.0 \times 10^{-8}\) M. Problems with selectivity also are more likely when the analyte is present at a very low concentration [Rodgers, L. B. , , 3–6]. Recall Fresenius’ analytical method for the determination of nickel in ores. The reason there are so many steps in this procedure is that precipitation reactions generally are not very selective. The dimethylglyoxime method includes fewer steps because dimethylglyoxime is a more selective reagent. Even so, if an ore contains palladium, additional steps are needed to prevent the palladium from interfering. For a method to be useful it must provide reliable results. Unfortunately, methods are subject to a variety of chemical and physical interferences that contribute uncertainty to the analysis. If a method is relatively free from chemical interferences, we can use it to analyze an analyte in a wide variety of sample matrices. Such methods are considered robust. Random variations in experimental conditions introduce uncertainty. If a method’s sensitivity, \(k_A\), is too dependent on experimental conditions, such as temperature, acidity, or reaction time, then a slight change in any of these conditions may give a significantly different result. A rugged method is relatively insensitive to changes in experimental conditions.
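Both tolerance calculations above follow the same pattern: bound the interferent term by the allowed error. A quick numerical check using the text’s values:

```python
# Codeine vs 6-methoxycodeine: K_AI from equimolar signals, then the
# maximum interferent-to-analyte ratio for ±0.50% accuracy
K_codeine = 6.0 / 40.0                 # 0.15
max_ratio = 0.0050 / K_codeine         # ≈ 0.033, i.e. 3.3% of codeine's conc.

# Ag+/Hg2+: maximum Hg2+ for a -1% relative error at C_Ag = 1.0 nM
K_Ag_Hg = -1.0e-3
C_Ag = 1.0e-9                          # M
C_Hg_max = (-0.01 * C_Ag) / K_Ag_Hg    # 1.0e-8 M
```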
Another way to narrow the choice of methods is to consider three potential limitations: the amount of sample available for the analysis, the expected concentration of analyte in the samples, and the minimum amount of analyte that will produce a measurable signal. Collectively, these limitations define the analytical method’s scale of operations. We can display the scale of operations visually (Figure 3.4.2
) by plotting the sample’s size on the x-axis and the analyte’s concentration on the y-axis. For convenience, we divide samples into macro (>0.1 g), meso (10 mg–100 mg), micro (0.1 mg–10 mg), and ultramicro (<0.1 mg) sizes, and we divide analytes into major (>1% w/w), minor (0.01% w/w–1% w/w), trace (\(10^{-7}\)% w/w–0.01% w/w), and ultratrace (<\(10^{-7}\)% w/w) components. Together, the analyte’s concentration and the sample’s size provide a characteristic description for an analysis. For example, in a microtrace analysis the sample weighs between 0.1 mg and 10 mg and contains a concentration of analyte between \(10^{-7}\)% w/w and \(10^{-2}\)% w/w. The diagonal lines connecting the axes show combinations of sample size and analyte concentration that contain the same absolute mass of analyte. As shown in Figure 3.4.2
, for example, a 1-g sample that is 1% w/w analyte has the same amount of analyte (10 mg) as a 100-mg sample that is 10% w/w analyte, or a 10-mg sample that is 100% w/w analyte. We can use Figure 3.4.2
to establish limits for analytical methods. If a method’s minimum detectable signal is equivalent to 10 mg of analyte, then it is best suited to a major analyte in a macro or meso sample. Extending the method to an analyte with a concentration of 0.1% w/w requires a sample of 10 g, which rarely is practical due to the complications of carrying such a large amount of material through the analysis. On the other hand, a small sample that contains a trace amount of analyte places significant restrictions on an analysis. For example, a 1-mg sample that is \(10^{-4}\)% w/w in analyte contains just 1 ng of analyte. If we isolate the analyte in 1 mL of solution, then we need an analytical method that reliably can detect it at a concentration of 1 ng/mL. It should not surprise you to learn that a total analysis technique typically requires a macro or a meso sample that contains a major analyte. A concentration technique is particularly useful for a minor, trace, or ultratrace analyte in a macro, meso, or micro sample. Finally, we can compare analytical methods with respect to their equipment needs, the time needed to complete an analysis, and the cost per sample. Methods that rely on instrumentation are equipment-intensive and may require significant operator training. For example, the graphite furnace atomic absorption spectroscopic method for determining lead in water requires a significant capital investment in the instrument and an experienced operator to obtain reliable results. Other methods, such as titrimetry, require less expensive equipment and less training. The time to complete an analysis for one sample often is fairly similar from method-to-method. This is somewhat misleading, however, because much of this time is spent preparing samples, preparing reagents, and gathering together equipment. Once the samples, reagents, and equipment are in place, the sampling rate may differ substantially.
For example, it takes just a few minutes to analyze a single sample for lead using graphite furnace atomic absorption spectroscopy, but several hours to analyze the same sample using gravimetry. This is a significant factor in selecting a method for a laboratory that handles a high volume of samples. The cost of an analysis depends on many factors, including the cost of equipment and reagents, the cost of hiring analysts, and the number of samples that can be processed per hour. In general, methods that rely on instruments cost more per sample than other methods. Unfortunately, the design criteria discussed in this section are not mutually independent [Valcárcel, M.; Ríos, A. , , 781A–787A]. Working with smaller samples or improving selectivity often comes at the expense of precision. Minimizing cost and analysis time may decrease accuracy. Selecting a method requires carefully balancing the various design criteria. Usually, the most important design criterion is accuracy, and the best method is the one that gives the most accurate result. When the need for a result is urgent, as is often the case in clinical labs, analysis time may become the critical factor. In some cases it is the sample’s properties that determine the best method. A sample with a complex matrix, for example, may require a method with excellent selectivity to avoid interferences. Samples in which the analyte is present at a trace or ultratrace concentration usually require a concentration method. If the quantity of sample is limited, then the method must not require a large amount of sample. Determining the concentration of lead in drinking water requires a method that can detect lead at the parts per billion concentration level. Selectivity is important because other metal ions are present at significantly higher concentrations. A method that uses graphite furnace atomic absorption spectroscopy is a common choice for determining lead in drinking water because it meets these specifications.
The same method is also useful for determining lead in blood, where its ability to detect low concentrations of lead using a few microliters of sample is an important consideration.
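The sample-size/analyte-concentration arithmetic from the scale-of-operations discussion above reduces to a one-line product; the helper name below is ours, and the inputs are the examples quoted in the text.

```python
def analyte_mass_g(sample_mass_g, percent_ww):
    """Absolute mass of analyte in a sample of a given size and %w/w."""
    return sample_mass_g * percent_ww / 100.0

m_major = analyte_mass_g(1.0, 1.0)        # 1-g sample, 1% w/w -> 0.01 g (10 mg)
m_trace = analyte_mass_g(1.0e-3, 1.0e-4)  # 1-mg sample, 1e-4% w/w -> 1e-9 g (1 ng)
```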
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/24%3A_Complex_Ions_and_Coordination_Compounds/24.09%3A_Acid-Base_Reactions_of_Complex_Ions |
The pH's of solutions containing hexaaqua ions vary a lot from one metal to another (assuming you are comparing solutions of equal concentrations). However, the underlying explanation is the same for all of them. Consider the hexaaquairon(III) ion, \([Fe(H_2O)_6]^{3+}\), in which six water molecules are attached to the central iron(III) ion via a co-ordinate bond using one of the lone pairs on the oxygen. Imagine for the moment that the 3+ charge is located entirely on the iron. When the lone pairs on the oxygen atoms form co-ordinate bonds with the iron, there is obviously a movement of electrons towards the iron. That has an effect on the electrons in the O-H bonds. These electrons, in turn, get pulled towards the oxygen even more than usual. That leaves the hydrogen nuclei more exposed than normal. The overall effect is that each of the hydrogen atoms is more positive than it is in ordinary water molecules. The 3+ charge is no longer located entirely on the iron, but spreads out over the whole ion - much of it on the hydrogen atoms. The hydrogen atoms attached to the water ligands are sufficiently positive that they can be pulled off in a reaction involving water molecules in the solution. The first stage of this process is: \[[Fe(H_2O)_6]^{3+}_{(aq)} + H_2O_{(l)} \rightleftharpoons [Fe(H_2O)_5(OH)]^{2+} _{(aq)} + H_3O^+_{(aq)} \label{Eqa1}\] The complex ion is acting as an acid by donating a hydrogen ion to water molecules in the solution. The water is, of course, acting as a base by accepting the hydrogen ion.
Because of the confusing presence of water from two different sources (the ligands and the solution), it is easier to simplify Equation \(\ref{Eqa1}\): \[ [Fe(H_2O)_6]^{3+} _{(aq)} \rightleftharpoons [Fe(H_2O)_5(OH)]^{2+} _{(aq)} + H^+ _{(aq)} \] However, if you write it like this, remember that the hydrogen ion is not just falling off the complex ion. It is being pulled off by a water molecule in the solution. The hexaaquairon(III) ion is quite strongly acidic, giving solutions with pH's around 1.5, depending on concentration. You can get further loss of hydrogen ions as well, from a second and a third water molecule. Losing a second hydrogen ion: \[ [ Fe(H_2O)_5(OH)]^{2+} _{(aq)} \rightleftharpoons [ Fe(H_2O)_4(OH)_2]^{+} _{(aq)} + H^+ _{(aq)} \] . . . and a third one: \[ [ Fe(H_2O)_4(OH)_2]^{+} _{(aq)} \rightleftharpoons [ Fe(H_2O)_3(OH)_3] _{(s)} + H^+ _{(aq)} \] This time you end up with a neutral \([ Fe(H_2O)_3(OH)_3]_{(s)}\) complex that is weakly soluble in water and precipitates. Looking at the equilibrium showing the loss of the first hydrogen ion (Equation \(\ref{Eqa1}\)): The color of the new complex ion on the right-hand side is so strong that it completely masks the color of the hexaaqua ion. In concentrated solutions, the equilibrium position will be even further to the right-hand side (Le Chatelier's Principle), and so the color darkens. You will also get significant loss of other hydrogen ions leading to some formation of the neutral complex - and so you get some precipitate. The position of this equilibrium can be shifted by adding extra hydrogen ions from a concentrated acid, i.e., by lowering the pH. The new hydrogen ions push the position of the equilibrium to the left so that you can see the color of the hexaaqua ion: Solutions containing 3+ hexaaqua ions tend to have pH's in the range from 1 to 3.
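To see where a pH around 1.5 comes from, treat \([Fe(H_2O)_6]^{3+}\) as a weak monoprotic acid. The acid constant used below (Ka ≈ 6.3 × 10⁻³) is a typical literature value we are supplying, not a number from this page, and the 0.10 M concentration is illustrative.

```python
import math

Ka = 6.3e-3   # assumed literature Ka for [Fe(H2O)6]3+ losing its first H+
C = 0.10      # mol/L, an illustrative concentration

# Solve x^2 / (C - x) = Ka for x = [H3O+] using the quadratic formula
x = (-Ka + math.sqrt(Ka**2 + 4 * Ka * C)) / 2
pH = -math.log10(x)   # ≈ 1.7, inside the 1-3 range quoted for 3+ hexaaqua ions
```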
Solutions containing 2+ ions have higher pH's - typically around 5 - 6, although they can go down to about 3. Remember that the reason that these ions are acidic is because of the pull of the electrons towards the positive central ion. An ion with 3+ charges on it is going to pull the electrons more strongly than one with only 2+ charges. In 3+ ions, the electrons in the O-H bonds will be pulled further away from the hydrogens than in 2+ ions. That means that the hydrogen atoms in the ligand water molecules will have a greater positive charge in a 3+ ion, and so will be more attracted to water molecules in the solution. If they are more attracted, they will be more readily lost - and so the 3+ ions are more acidic. If you have ions of the same charge, it seems reasonable that the smaller the volume this charge is packed into, the greater the distorting effect on the electrons in the O-H bonds. Ions with the same charge but in a smaller volume (a higher charge density) would be expected to be more acidic. You would therefore expect to find that the smaller the radius of the metal ion, the stronger the acid. Unfortunately, it's not that simple! There probably is a relationship between ionic radius and acid strength, but it is nothing like as simple and straightforward as most books at this level pretend. The problem is that there are other more important effects operating as well (quite apart from differences in charge) and these can completely swamp the effect of the changes in ionic radius. You have to look in far more detail at the bonding in the hexaaqua ions and the product ions.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_A_Molecular_Approach_(Tro)/05%3A_Gases/5.05%3A_Applications_of_the_Ideal_Gas_Law-_Molar_Volume_Density_and_Molar_Mass_of_a_Gas |
With the ideal gas law, we can use the relationship between the amounts of gases (in moles) and their volumes (in liters) to calculate the stoichiometry of reactions involving gases, if the pressure and temperature are known. This is important for several reasons. Many reactions that are carried out in the laboratory involve the formation or reaction of a gas, so chemists must be able to quantitatively treat gaseous products and reactants as readily as they quantitatively treat solids or solutions. Furthermore, many, if not most, industrially important reactions are carried out in the gas phase for practical reasons. Gases mix readily, are easily heated or cooled, and can be transferred from one place to another in a manufacturing facility via simple pumps and plumbing. The ideal-gas equation can be manipulated to solve a variety of different types of problems. For example, the density, \(\rho\), of a gas depends on the number of gas molecules in a constant volume. To determine this value, we rearrange the ideal gas equation to \[\dfrac{n}{V}=\dfrac{P}{RT}\label{10.5.1} \] Density of a gas is generally expressed in g/L (mass over volume). Multiplication of the left and right sides of Equation \ref{10.5.1} by the molar mass in g/mol (\(M\)) of the gas gives \[\rho= \dfrac{m}{V}=\dfrac{PM}{RT} \label{10.5.2} \] This allows us to determine the density of a gas when we know the molar mass, or vice versa. What is the density of nitrogen gas (\(\ce{N_2}\)) at 248.0 Torr and 18 °C? \[(248 \; \rm{Torr}) \times \dfrac{1 \; \rm{atm}}{760 \; \rm{Torr}} = 0.3263 \; \rm{atm} \nonumber \] \[18\,^oC + 273 = 291\; K\nonumber \] Write down all known equations: \[PV = nRT \nonumber \] \[\rho=\dfrac{m}{V} \nonumber \] where \(\rho\) is density, \(m\) is mass, and \(V\) is volume. \[m=M \times n \nonumber \] where \(M\) is molar mass and \(n\) is the number of moles.
Now take the definition of density \[\rho=\dfrac{m}{V} \nonumber \] Keeping in mind \(m=M \times n\), replace \(M \times n\) for \(m\) within the density formula. \[\begin{align*} \rho &=\dfrac{M \times n}{V} \\[4pt] \dfrac{\rho}{M} &= \dfrac{n}{V} \end{align*} \nonumber \] Now manipulate the Ideal Gas Equation \[ \begin{align*} PV &= nRT \\[4pt] \dfrac{n}{V} &= \dfrac{P}{RT} \end{align*} \nonumber \] \((n/V)\) appears in both equations. \[ \begin{align*} \dfrac{n}{V} &= \dfrac{\rho}{M} \\[4pt] &= \dfrac{P}{RT} \end{align*} \nonumber \] Now combine them. \[\dfrac{\rho}{M} = \dfrac{P}{RT}\nonumber \] Isolate density. \[\rho = \dfrac{PM}{RT} \nonumber \] \[ \begin{align*} \rho &= \dfrac{PM}{RT} \\[4pt] &= \dfrac{(0.3263\; \rm{atm})(2 \times 14.01 \; \rm{g/mol})}{(0.08206\, L\, atm/K\, mol)(291 \; \rm{K})} \\[4pt] &= 0.3828 \; g/L \end{align*} \nonumber \] An example of varying density for a useful purpose is the hot air balloon, which consists of a bag (called the envelope) that is capable of containing heated air. As the air in the envelope is heated, it becomes less dense than the surrounding cooler air (Equation \(\ref{10.5.2}\)), which gives the balloon enough lifting power (due to buoyancy) to float and rise into the air. Constant heating of the air is required to keep the balloon aloft. As the air in the balloon cools, it contracts, allowing outside cool air to enter, and the density increases. When this is carefully controlled by the pilot, the balloon can land as gently as it rose. The ideal gas law can be used to calculate the volume of gases consumed or produced. The ideal-gas equation frequently is used to interconvert between volumes and molar amounts in chemical equations. What volume of carbon dioxide gas is produced at STP by the decomposition of 0.150 g \(\ce{CaCO_3}\) via the equation: \[ \ce{CaCO3(s) \rightarrow CaO(s) + CO2(g)} \nonumber \] Begin by converting the mass of calcium carbonate to moles.
\[ \dfrac{0.150\;g}{100.1\;g/mol} = 0.00150\; mol \nonumber \] The stoichiometry of the reaction dictates that the number of moles of \(\ce{CaCO_3}\) decomposed equals the number of moles of \(\ce{CO2}\) produced. Use the ideal-gas equation to convert moles of \(\ce{CO2}\) to a volume. \[ \begin{align*} V &= \dfrac{nRT}{P} \\[4pt] &= \dfrac{(0.00150\;mol)\left( 0.08206\; \frac{L \cdot atm}{mol \cdot K} \right) ( 273.15\;K)}{1\;atm} \\[4pt] &= 0.0336\;L \; or \; 33.6\;mL \end{align*} \nonumber \] A 3.00 L container is filled with \(\ce{Ne(g)}\) at 770 mmHg and 27 °C. A \(0.633\;\rm{g}\) sample of \(\ce{CO2}\) vapor is then added. What is the partial pressure of each gas, and what is the total pressure in the container? Step 1: Write down all given information, and convert as necessary. Before the \(\ce{CO2}\) is added: \(V\) = 3.00 L, \(P\) = 770 mmHg, \(T\) = 27 °C = 300 K. Unknowns: \(n_{\ce{CO2}}\) = ? and \(n_{Ne}\) = ? Step 2: Calculate the moles of each gas. \[n_{CO_2} = 0.633\; \rm{g} \;CO_2 \times \dfrac{1 \; \rm{mol}}{44\; \rm{g}} = 0.0144\; \rm{mol} \; CO_2 \nonumber \] \[ \begin{align*} n_{Ne} &= \dfrac{PV}{RT} \\[4pt] &= \dfrac{(1.01\; \rm{atm})(3.00\; \rm{L})}{(0.08206\;atm\;L/mol\;K)(300\; \rm{K})} \\[4pt] &= 0.123 \; \rm{mol} \end{align*} \nonumber \] Because the container held only \(\ce{Ne}\) before the \(\ce{CO2}\) was added, that pressure is the partial pressure of \(\ce{Ne}\). After converting it to atm, you have already answered part of the question! \[P_{Ne} = 1.01\; \rm{atm} \nonumber \] Step 3: Now that you have the pressure for \(\ce{Ne}\), you must find the partial pressure for \(CO_2\). Use the ideal gas equation. \[ \dfrac{P_{Ne}\cancel{V}}{n_{Ne}\cancel{RT}} = \dfrac{P_{CO_2}\cancel{V}}{n_{CO_2}\cancel{RT}} \nonumber \] but because both gases share the same volume (\(V\)) and temperature (\(T\)), and since the gas constant (\(R\)) is a constant, all three terms cancel. \[ \begin{align*} \dfrac{P_{Ne}}{n_{Ne}} &= \dfrac{P_{CO_2}}{n_{CO_2}} \\[4pt] \dfrac{1.01 \; \rm{atm}}{0.123\; \rm{mol} \;Ne} &= \dfrac{P_{CO_2}}{0.0144\; \rm{mol} \;CO_2} \\[4pt] P_{CO_2} &= 0.118 \; \rm{atm} \end{align*} \nonumber \] This is the partial pressure of \(\ce{CO_2}\). Step 4: Now find the total pressure.
\[\begin{align*} P_{total} &= P_{Ne} + P_{CO_2} \\[4pt] &= 1.01 \; \rm{atm} + 0.118\; \rm{atm} \\[4pt] &= 1.128\; \rm{atm} \\[4pt] &\approx 1.13\; \rm{atm} \; \text{(with appropriate significant figures)} \end{align*} \nonumber \] Sulfuric acid, the industrial chemical produced in greatest quantity (almost 45 million tons per year in the United States alone), is prepared by the combustion of sulfur in air to give \(\ce{SO2}\), followed by the reaction of \(\ce{SO2}\) with \(\ce{O2}\) in the presence of a catalyst to give \(\ce{SO3}\), which reacts with water to give \(\ce{H2SO4}\). The overall chemical equation is as follows: \[\ce {2S(s) + 3O2(g) + 2H2O(l) \rightarrow 2H2SO4(aq)} \nonumber \] What volume of \(\ce{O2}\) (in liters) at 22°C and 745 mmHg pressure is required to produce 1.00 ton (907.18 kg) of \(\ce{H2SO4}\)? Given: reaction, temperature, pressure, and mass of one product. Asked for: volume of gaseous reactant. Calculate the number of moles of \(\ce{H2SO4}\) in 1.00 ton. From the stoichiometric coefficients in the balanced chemical equation, calculate the number of moles of \(\ce{O2}\) required. Use the ideal gas law to determine the volume of \(\ce{O2}\) required under the given conditions. Be sure that all quantities are expressed in the appropriate units.
mass of \(\ce{H2SO4}\) → moles of \(\ce{H2SO4}\) → moles of \(\ce{O2}\) → volume of \(\ce{O2}\) We begin by calculating the number of moles of \(\ce{H2SO4}\) in 1.00 ton: \[\rm\dfrac{907.18\times10^3\;g\;H_2SO_4}{(2\times1.008+32.06+4\times16.00)\;g/mol}=9250\;mol\;H_2SO_4 \nonumber \] We next calculate the number of moles of \(\ce{O2}\) required: \[\rm9250\;mol\;H_2SO_4\times\dfrac{3mol\; O_2}{2mol\;H_2SO_4}=1.389\times10^4\;mol\;O_2 \nonumber \] After converting all quantities to the appropriate units, we can use the ideal gas law to calculate the volume of \(\ce{O2}\): \[\begin{align*} V&=\dfrac{nRT}{P} \\[4pt] &=\rm\dfrac{1.389\times10^4\;mol\times0.08206\dfrac{L\cdot atm}{mol\cdot K}\times(273+22)\;K}{745\;mmHg\times\dfrac{1\;atm}{760\;mmHg}} \\[4pt] &=3.43\times10^5\;L \end{align*} \nonumber \] The answer means that more than 300,000 L of oxygen gas are needed to produce 1 ton of sulfuric acid. These numbers may give you some appreciation for the magnitude of the engineering and plumbing problems faced in industrial chemistry. Charles used a balloon containing approximately 31,150 L of \(\ce{H2}\) for his initial flight in 1783. The hydrogen gas was produced by the reaction of metallic iron with dilute hydrochloric acid according to the following balanced chemical equation: \[\ce{ Fe(s) + 2 HCl(aq) \rightarrow H2(g) + FeCl2(aq)} \nonumber \] How much iron (in kilograms) was needed to produce this volume of \(\ce{H2}\) if the temperature were 30°C and the atmospheric pressure was 745 mmHg? Answer: 68.6 kg of Fe (approximately 150 lb) Sodium azide (\(\ce{NaN_3}\)) decomposes to form sodium metal and nitrogen gas according to the following balanced chemical equation: \[\ce{ 2NaN3 \rightarrow 2Na(s) + 3N2(g)} \nonumber \] This reaction is used to inflate the air bags that cushion passengers during automobile collisions. The reaction is initiated in air bags by an electrical impulse and results in the rapid evolution of gas.
If the \(\ce{N_2}\) gas that results from the decomposition of a 5.00 g sample of \(\ce{NaN_3}\) could be collected by displacing water from an inverted flask, what volume of gas would be produced at 21°C and 762 mmHg? Given: reaction, mass of compound, temperature, and pressure. Asked for: volume of nitrogen gas produced. Calculate the number of moles of \(\ce{N_2}\) gas produced. From tabulated vapor pressures of water, determine the partial pressure of \(\ce{N_2}\) gas in the flask. Use the ideal gas law to find the volume of \(\ce{N_2}\) gas produced. Because we know the mass of the reactant and the stoichiometry of the reaction, our first step is to calculate the number of moles of \(\ce{N_2}\) gas produced: \[\rm\dfrac{5.00\;g\;NaN_3}{(22.99+3\times14.01)\;g/mol}\times\dfrac{3mol\;N_2}{2mol\;NaN_3}=0.115\;mol\; N_2 \nonumber \] The pressure given (762 mmHg) is the pressure in the flask, which is the sum of the pressures due to the N gas and the water vapor present. A table of water vapor pressures tells us that the vapor pressure of water is 18.65 mmHg at 21°C (294 K), so the partial pressure of the \(\ce{N_2}\) gas in the flask is only \[\begin{align*} \rm(762 − 18.65)\;mmHg \times\dfrac{1\;atm}{760\;mmHg} &= 743.4\; \cancel{mmHg} \times\dfrac{1\;atm}{760\;\cancel{mmHg}} \\[4pt] &= 0.978\; atm. \end{align*} \nonumber \] Solving the ideal gas law for \(V\) and substituting the other quantities (in the appropriate units), we get \[V=\dfrac{nRT}{P}=\rm\dfrac{0.115\;mol\times0.08206\dfrac{atm\cdot L}{mol\cdot K}\times294\;K}{0.978\;atm}=2.84\;L \nonumber \] A 1.00 g sample of zinc metal is added to a solution of dilute hydrochloric acid. It dissolves to produce \(\ce{H2}\) gas according to the equation \[\ce{ Zn(s) + 2 HCl(aq) → H2(g) + ZnCl2(aq)}. \nonumber \] The resulting \(\ce{H2}\) gas is collected in a water-filled bottle at 30°C and an atmospheric pressure of 760 mmHg. What volume does it occupy?
0.397 L The relationship between the amounts of products and reactants in a chemical reaction can be expressed in units of moles or masses of pure substances, of volumes of solutions, or of volumes of gaseous substances. The ideal gas law can be used to calculate the volume of gaseous products or reactants as needed. In the laboratory, gases produced in a reaction are often collected by the displacement of water from filled vessels; the amount of gas can then be calculated from the volume of water displaced and the atmospheric pressure. A gas collected in such a way is not pure, however, but contains a significant amount of water vapor. The measured pressure must therefore be corrected for the vapor pressure of water, which depends strongly on the temperature.
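The air-bag example above can be sketched as a short calculation. This is a minimal sketch, not part of the original text; the function names are my own, and all numbers come from the worked example (5.00 g \(\ce{NaN3}\), 21°C, 762 mmHg total, 18.65 mmHg water vapor).

```python
R = 0.08206  # ideal gas constant, L·atm/(mol·K)

def moles_n2_from_nan3(mass_g):
    """Moles of N2 from decomposing `mass_g` of NaN3 (2 NaN3 -> 2 Na + 3 N2)."""
    molar_mass_nan3 = 22.99 + 3 * 14.01  # g/mol
    return (mass_g / molar_mass_nan3) * (3 / 2)

def gas_volume_over_water(n_mol, t_celsius, p_total_mmhg, p_water_mmhg):
    """Volume (L) of a gas collected over water, after subtracting the
    water vapor pressure from the total pressure in the flask."""
    p_gas_atm = (p_total_mmhg - p_water_mmhg) / 760.0
    return n_mol * R * (t_celsius + 273) / p_gas_atm

n_n2 = moles_n2_from_nan3(5.00)                       # ≈ 0.115 mol
v_n2 = gas_volume_over_water(n_n2, 21, 762, 18.65)    # ≈ 2.84 L
```

The same two helpers reproduce the zinc exercise if the moles of \(\ce{H2}\) and the vapor pressure of water at 30°C are substituted.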
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/14%3A_Chemical_Kinetics/14.03%3A_Effect_of_Concentration_on_Reaction_Rates%3A_The_Rate_Law |
Many factors affect the rate of a chemical reaction and may determine whether a desired product is formed. In this section, we will show you how to quantitatively determine the reaction rate. Typically, reaction rates decrease with time because reactant concentrations decrease as reactants are converted to products. Reaction rates generally increase when reactant concentrations are increased. This section examines mathematical expressions called rate laws, which describe the relationships between reaction rates and reactant concentrations. Rate laws are mathematical descriptions of experimentally verifiable data. Rate laws may be written from either of two different but related perspectives. A differential rate law expresses the reaction rate in terms of changes in the concentration of one or more reactants (Δ[R]) over a specific time interval (Δt). In contrast, an integrated rate law describes the reaction rate in terms of the initial concentration ([R]\(_0\)) and the measured concentration of one or more reactants ([R]) after a given amount of time (t); integrated rate laws are discussed in more detail later. The integrated rate law is derived by using calculus to integrate the differential rate law. Whether using a differential rate law or integrated rate law, always make sure that the rate law gives the proper units for the reaction rate, usually moles per liter per second (M/s). For a reaction with the general equation: \[aA + bB \rightarrow cC + dD \label{14.3.1} \] the experimentally determined rate law usually has the following form: \[\text{rate} = k[A]^m[B]^n \label{14.3.2}\] The proportionality constant (\(k\)) is called the rate constant, and its value is characteristic of the reaction and the reaction conditions. A given reaction has a particular rate constant value under a given set of conditions, such as temperature, pressure, and solvent; varying the temperature or the solvent usually changes the value of the rate constant.
The numerical value of \(k\), however, does not change as the reaction progresses under a given set of conditions. The reaction rate thus depends on the rate constant for the given set of reaction conditions and the concentration of A and B raised to the powers \(m\) and \(n\), respectively. The values of \(m\) and \(n\) are derived from experimental measurements of the changes in reactant concentrations over time and indicate the reaction order, the degree to which the reaction rate depends on the concentration of each reactant; \(m\) and \(n\) need not be integers. For example, Equation \(\ref{14.3.2}\) tells us that the reaction is \(m\)th order in reactant A and \(n\)th order in reactant B. It is important to remember that \(m\) and \(n\) are not related to the stoichiometric coefficients \(a\) and \(b\) in the balanced chemical equation and must be determined experimentally. The overall reaction order is the sum of all the exponents in the rate law: \(m + n\). Under a given set of conditions, the value of the rate constant does not change as the reaction progresses. Although differential rate laws are generally used to describe what is occurring on a molecular level during a reaction, integrated rate laws are used to determine the reaction order and the value of the rate constant from experimental measurements. (See the link for a presentation of the general forms for integrated rate laws.) To illustrate how chemists interpret a differential rate law, consider the experimentally derived rate law for the hydrolysis of tert-butyl bromide in 70% aqueous acetone.
This reaction produces tert-butanol according to the following equation: \[(CH_3)_3CBr_{(soln)} + H_2O_{(soln)} \rightarrow (CH_3)_3COH_{(soln)} + HBr_{(soln)} \label{14.3.3}\] Combining the rate expression in Equation \(\ref{14.3.2}\) with the definition of average reaction rate \[\textrm{rate}=-\dfrac{\Delta[\textrm A]}{\Delta t}\] gives a general expression for the differential rate law: \[\textrm{rate}=-\dfrac{\Delta[\textrm A]}{\Delta t}=k[\textrm A]^m[\textrm B]^n \label{14.3.4}\] Inserting the identities of the reactants into Equation \(\ref{14.3.4}\) gives the following expression for the differential rate law for the reaction: \[\textrm{rate}=-\dfrac{\Delta[\mathrm{(CH_3)_3CBr}]}{\Delta t}=k[\mathrm{(CH_3)_3CBr}]^m[\mathrm{H_2O}]^n \label{14.3.5}\] Experiments to determine the rate law for the hydrolysis of tert-butyl bromide show that the reaction rate is directly proportional to the concentration of (CH\(_3\))\(_3\)CBr but is independent of the concentration of water. Therefore, \(m\) and \(n\) in Equation \(\ref{14.3.4}\) are 1 and 0, respectively, and, \[\text{rate} = k[(CH_3)_3CBr]^1[H_2O]^0 = k[(CH_3)_3CBr] \label{14.3.6}\] Because the exponent for the reactant is 1, the reaction is first order in (CH\(_3\))\(_3\)CBr. It is zeroth order in water because the exponent for [H\(_2\)O] is 0. (Recall that anything raised to the zeroth power equals 1.) Thus, the overall reaction order is 1 + 0 = 1. The reaction orders state in practical terms that doubling the concentration of (CH\(_3\))\(_3\)CBr doubles the reaction rate of the hydrolysis reaction, halving the concentration of (CH\(_3\))\(_3\)CBr halves the reaction rate, and so on. Conversely, increasing or decreasing the concentration of water has no effect on the reaction rate. (Again, when working with rate laws, there is no simple correlation between the stoichiometry of the reaction and the rate law. The values of \(k\), \(m\), and \(n\) in the rate law must be determined experimentally.) Experimental data show that \(k\) has the value \(5.15 \times 10^{-4}\;\text{s}^{-1}\) at 25°C.
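The rate law in Equation 14.3.6 is easy to explore numerically. A minimal sketch, not from the original text, taking \(k = 5.15 \times 10^{-4}\) s\(^{-1}\) as quoted for 25°C; the function name and the concentrations passed in are illustrative assumptions:

```python
k = 5.15e-4  # s^-1, rate constant quoted in the text for 25 °C

def hydrolysis_rate(c_tbubr, c_h2o):
    """Rate (M/s) for rate = k[(CH3)3CBr]^1 [H2O]^0:
    first order in t-butyl bromide, zeroth order in water."""
    return k * c_tbubr**1 * c_h2o**0

r1 = hydrolysis_rate(0.10, 55.5)  # arbitrary illustrative concentrations (M)
r2 = hydrolysis_rate(0.20, 10.0)  # doubling [t-BuBr] doubles the rate;
                                  # changing [H2O] has no effect at all
```

The ratio `r2 / r1` is exactly 2, which is the "doubling the concentration doubles the rate" behavior described above.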
The rate constant has units of reciprocal seconds (s\(^{-1}\)) because the reaction rate is defined in units of concentration per unit time (M/s). The units of a rate constant depend on the rate law for a particular reaction. Under conditions identical to those for the tert-butyl bromide reaction, the experimentally derived differential rate law for the hydrolysis of methyl bromide (CH\(_3\)Br) is as follows: \[\textrm{rate}=-\dfrac{\Delta[\mathrm{CH_3Br}]}{\Delta t}=k'[\mathrm{CH_3Br}] \label{14.3.7}\] This reaction also has an overall reaction order of 1, but the rate constant in Equation \(\ref{14.3.7}\) is approximately \(10^6\) times smaller than that for tert-butyl bromide. Thus, methyl bromide hydrolyzes about 1 million times more slowly than tert-butyl bromide, and this information tells chemists how the reactions differ on a molecular level. Frequently, changes in reaction conditions also produce changes in a rate law. In fact, chemists often alter reaction conditions to study the mechanics of a reaction. For example, when tert-butyl bromide is hydrolyzed in an aqueous acetone solution containing OH\(^-\) ions rather than in aqueous acetone alone, the differential rate law for the hydrolysis reaction does not change. For methyl bromide, in contrast, the differential rate law becomes rate = \(k''\)[CH\(_3\)Br][OH\(^-\)], with an overall reaction order of 2. Although the two reactions proceed similarly in neutral solution, they proceed very differently in the presence of a base, providing clues as to how the reactions differ on a molecular level. Differential rate laws are generally used to describe what is occurring on a molecular level during a reaction, whereas integrated rate laws are used for determining the reaction order and the value of the rate constant from experimental measurements. Below are three reactions and their experimentally determined differential rate laws.
For each reaction, give the units of the rate constant, give the reaction order with respect to each reactant, give the overall reaction order, and predict what happens to the reaction rate when the concentration of the first species in each chemical equation is doubled. balanced chemical equations and differential rate laws units of rate constant, reaction orders, and effect of doubling reactant concentration The exponent in the rate law is 2, so the reaction is second order in HI. Because HI is the only reactant and the only species that appears in the rate law, the reaction is also second order overall. If the concentration of HI is doubled, the reaction rate will increase from \(k[\mathrm{HI}]^2\) to \(k(2[\mathrm{HI}])^2 = 4k[\mathrm{HI}]^2\). The reaction rate will therefore quadruple. The rate law tells us that the reaction rate is constant and independent of the N\(_2\)O concentration. That is, the reaction is zeroth order in N\(_2\)O and zeroth order overall. Because the reaction rate is independent of the N\(_2\)O concentration, doubling the concentration will have no effect on the reaction rate. The only concentration in the rate law is that of cyclopropane, and its exponent is 1. This means that the reaction is first order in cyclopropane. Cyclopropane is the only species that appears in the rate law, so the reaction is also first order overall. Doubling the initial cyclopropane concentration will increase the reaction rate from \(k[\mathrm{cyclopropane}]\) to \(2k[\mathrm{cyclopropane}]\). This doubles the reaction rate. Given the following two reactions and their experimentally determined differential rate laws: determine the units of the rate constant if time is in seconds, determine the reaction order with respect to each reactant, give the overall reaction order, and predict what will happen to the reaction rate when the concentration of the first species in each equation is doubled. a.
\[\textrm{CH}_3\textrm N\textrm{=NCH}_3\textrm{(g)}\rightarrow\mathrm{C_2H_6(g)}+\mathrm{N_2(g)}\hspace{5mm}\] with \[ \begin{align} \textrm{rate}=-\frac{\Delta[\textrm{CH}_3\textrm N\textrm{=NCH}_3]}{\Delta t}=k[\textrm{CH}_3\textrm N\textrm{=NCH}_3] \end{align} \] b. \[\mathrm{2NO_2(g)}+\mathrm{F_2(g)}\rightarrow\mathrm{2NO_2F(g)}\hspace{5mm}\] with \[ \begin{align} \textrm{rate}=-\frac{\Delta[\mathrm{F_2}]}{\Delta t}=-\frac{1}{2}\left ( \frac{\Delta[\mathrm{NO_2}]}{\Delta t} \right )=k[\mathrm{NO_2}][\mathrm{F_2}]\end{align}\] The number of fundamentally different mechanisms (sets of steps in a reaction) is actually rather small compared to the large number of chemical reactions that can occur. Thus understanding reaction mechanisms can simplify what might seem to be a confusing variety of chemical reactions. The first step in discovering the reaction mechanism is to determine the reaction’s rate law. This can be done by designing experiments that measure the concentration(s) of one or more reactants or products as a function of time. For the reaction \(A + B \rightarrow products\), for example, we need to determine \(k\) and the exponents \(m\) and \(n\) in the following equation: \[\text{rate} = k[A]^m[B]^n \label{14.4.11}\] To do this, we might keep the initial concentration of B constant while varying the initial concentration of A and calculating the initial reaction rate. This information would permit us to deduce the reaction order with respect to A. Similarly, we could determine the reaction order with respect to B by studying the initial reaction rate when the initial concentration of A is kept constant while the initial concentration of B is varied. In earlier examples, we determined the reaction order with respect to a given reactant by comparing the different rates obtained when only the concentration of the reactant in question was changed. An alternative way of determining reaction orders is to set up a proportion using the rate laws for two different experiments.
Rate data for a hypothetical reaction of the type \(A + B \rightarrow products\) were collected in a series of experiments. The general rate law for the reaction is given in Equation \(\ref{14.4.11}\). We can obtain \(m\) or \(n\) directly by using a proportion of the rate laws for two experiments in which the concentration of one reactant is the same, such as Experiments 1 and 3. \[\dfrac{\mathrm{rate_1}}{\mathrm{rate_3}}=\dfrac{k[\textrm A_1]^m[\textrm B_1]^n}{k[\textrm A_3]^m[\textrm B_3]^n}\] Inserting the appropriate values, \[\dfrac{8.5\times10^{-3}\textrm{ M/min}}{34\times10^{-3}\textrm{ M/min}}=\dfrac{k[\textrm{0.50 M}]^m[\textrm{0.50 M}]^n}{k[\textrm{1.00 M}]^m[\textrm{0.50 M}]^n}\] Because 1.00 to any power is 1, [1.00 M]\(^m\) = 1.00. We can cancel like terms to give 0.25 = [0.50]\(^m\), which can also be written as 1/4 = [1/2]\(^m\). Thus we can conclude that \(m = 2\) and that the reaction is second order in A. By selecting two experiments in which the concentration of B is the same, we were able to solve for \(m\). Conversely, by selecting two experiments in which the concentration of A is the same (e.g., Experiments 5 and 1), we can solve for \(n\). \(\dfrac{\mathrm{rate_1}}{\mathrm{rate_5}}=\dfrac{k[\mathrm{A_1}]^m[\mathrm{B_1}]^n}{k[\mathrm{A_5}]^m[\mathrm{B_5}]^n}\) Substituting the appropriate values, \[\dfrac{8.5\times10^{-3}\textrm{ M/min}}{8.5\times10^{-3}\textrm{ M/min}}=\dfrac{k[\textrm{0.50 M}]^m[\textrm{0.50 M}]^n}{k[\textrm{0.50 M}]^m[\textrm{1.00 M}]^n}\] Canceling leaves 1.0 = [0.50]\(^n\), which gives \(n = 0\); that is, the reaction is zeroth order in \(B\). The experimentally determined rate law is therefore rate = \(k\)[A]\(^2\)[B]\(^0\) = \(k\)[A]\(^2\). We can now calculate the rate constant by inserting the data from any row into the experimentally determined rate law and solving for \(k\). Using Experiment 2, we obtain \[19 \times 10^{-3}\textrm{ M/min} = k(0.75\textrm{ M})^2\] \[3.4 \times 10^{-2}\textrm{ M}^{-1}\cdot\textrm{min}^{-1} = k\] You should verify that using data from any other row gives the same rate constant.
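The proportion method used in this example can be checked with a short calculation. A sketch, not from the original text; the function name is my own, and the rates and concentrations are the ones quoted above (Experiments 1, 3, and 5, plus Experiment 2 for the rate constant):

```python
import math

def order_from_pair(rate_a, conc_a, rate_b, conc_b):
    """Reaction order for one reactant from two runs in which only that
    reactant's concentration changes:
      rate_a / rate_b = (conc_a / conc_b)**order
      => order = log(rate ratio) / log(concentration ratio)."""
    return math.log(rate_a / rate_b) / math.log(conc_a / conc_b)

# Vary [A] with [B] fixed (Experiments 1 and 3): m ≈ 2
m = order_from_pair(8.5e-3, 0.50, 34e-3, 1.00)
# Vary [B] with [A] fixed (Experiments 1 and 5): n = 0
n = order_from_pair(8.5e-3, 0.50, 8.5e-3, 1.00)
# Experiment 2 with rate = k[A]^2: k ≈ 3.4e-2 M^-1 min^-1
k = 19e-3 / 0.75**round(m)
```

The logarithm form is the same proportion as in the text, just solved for the exponent instead of read off by inspection.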
This must be true as long as the experimental conditions, such as temperature and solvent, are the same. Nitric oxide is produced in the body by several different enzymes and acts as a signal that controls blood pressure, long-term memory, and other critical functions. The major route for removing NO from biological fluids is via reaction with \(O_2\) to give \(NO_2\), which then reacts rapidly with water to give nitrous acid and nitric acid: These reactions are important in maintaining steady levels of NO. The following table lists kinetics data for the reaction of NO with O\(_2\) at 25°C: \[2NO(g) + O_2(g) \rightarrow 2NO_2(g)\] Determine the rate law for the reaction and calculate the rate constant. balanced chemical equation, initial concentrations, and initial rates rate law and rate constant Comparing Experiments 1 and 2 shows that as [O\(_2\)] is doubled at a constant value of [NO], the reaction rate approximately doubles. Thus the reaction rate is proportional to [O\(_2\)]\(^1\), so the reaction is first order in O\(_2\). Comparing Experiments 1 and 3 shows that the reaction rate essentially quadruples when [NO] is doubled and [O\(_2\)] is held constant. That is, the reaction rate is proportional to [NO]\(^2\), which indicates that the reaction is second order in NO. Using these relationships, we can write the rate law for the reaction: rate = \(k\)[NO]\(^2\)[O\(_2\)] The data in any row can be used to calculate the rate constant. Using Experiment 1, for example, gives \[k=\dfrac{\textrm{rate}}{[\mathrm{NO}]^2[\mathrm{O_2}]}=\dfrac{7.98\times10^{-3}\textrm{ M/s}}{(0.0235\textrm{ M})^2(0.0125\textrm{ M})}=1.16\times10^3\;\mathrm{ M^{-2}\cdot s^{-1}}\] Alternatively, using Experiment 2 gives \[k=\dfrac{\textrm{rate}}{[\mathrm{NO}]^2[\mathrm{O_2}]}=\dfrac{15.9\times10^{-3}\textrm{ M/s}}{(0.0235\textrm{ M})^2(0.0250\textrm{ M})}=1.15\times10^3\;\mathrm{ M^{-2}\cdot s^{-1}}\] The difference is minor and associated with significant digits and likely experimental error in making the table.
The overall reaction order \((m + n) = 3\), so this is a third-order reaction whose rate is determined by three reactant concentration terms. The units of the rate constant become more complex as the overall reaction order increases. The peroxydisulfate ion (\(\ce{S2O8^{2-}}\)) is a potent oxidizing agent that reacts rapidly with iodide ion in water: \[S_2O^{2−}_{8(aq)} + 3I^−_{(aq)} \rightarrow 2SO^{2−}_{4(aq)} + I^−_{3(aq)}\] The following table lists kinetics data for this reaction at 25°C. Determine the rate law and calculate the rate constant. rate = \(k[\ce{S2O8^{2-}}][\ce{I^-}]\); \(k\) = 20 M\(^{-1}\)·s\(^{-1}\) Initial Rates and Rate Law Expressions: The rate law for a reaction is a mathematical relationship between the reaction rate and the concentrations of species in solution. Rate laws can be expressed either as a differential rate law, describing the change in reactant or product concentrations as a function of time, or as an integrated rate law, describing the actual concentrations of reactants or products as a function of time. The rate constant (\(k\)) of a rate law is a constant of proportionality between the reaction rate and the reactant concentration. The exponent to which a concentration is raised in a rate law indicates the reaction order, the degree to which the reaction rate depends on the concentration of a particular reactant.
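As a numerical check on the NO/O\(_2\) example above, the rate constant can be recomputed from two rows of the quoted data. A sketch, not from the original text; the helper name is my own:

```python
def k_third_order(rate, c_no, c_o2):
    """Rate constant (M^-2 s^-1) for rate = k[NO]^2[O2]."""
    return rate / (c_no**2 * c_o2)

# Experiment 1: rate 7.98e-3 M/s, [NO] = 0.0235 M, [O2] = 0.0125 M
k1 = k_third_order(7.98e-3, 0.0235, 0.0125)   # ≈ 1.16e3 M^-2 s^-1
# Experiment 2: rate 15.9e-3 M/s, [NO] = 0.0235 M, [O2] = 0.0250 M
k2 = k_third_order(15.9e-3, 0.0235, 0.0250)   # ≈ 1.15e3 M^-2 s^-1
```

The two values agree to within rounding, which is the consistency check the example describes.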
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/02%3A_Gas_Laws/2.11%3A_The_Barometric_Formula |
We can measure the pressure of the atmosphere at any location by using a barometer. A mercury barometer is a sealed tube that contains a vertical column of liquid mercury. The space in the tube above the liquid mercury is occupied by mercury vapor. Since the vapor pressure of liquid mercury at ordinary temperatures is very low, the pressure at the top of the mercury column is very low and can usually be ignored. The pressure at the bottom of the column of mercury is equal to the pressure of a column of air extending from the elevation of the barometer all the way to the top of the earth’s atmosphere. As we take the barometer to higher altitudes, we find that the height of the mercury column decreases, because less and less of the atmosphere is above the barometer. If we assume that the atmosphere is composed of an ideal gas and that its temperature is constant, we can derive an equation for atmospheric pressure as a function of altitude. Imagine a cylindrical column of air extending from the earth’s surface to the top of the atmosphere (Figure 4). The force exerted by this column at its base is the weight of the air in the column; the pressure is this weight divided by the cross-sectional area of the column. Let the cross-sectional area of the column be \(A\). Consider a short section of this column. Let the bottom of this section be a distance \(h\) from the earth’s surface, while its top is a distance \(h+\Delta h\) from the earth’s surface. The volume of this cylindrical section is then \(V_S=A\Delta h\). Let the mass of the gas in this section be \(M_S\). The pressure at \(h+\Delta h\) is less than the pressure at \(h\) by the weight of this gas divided by the cross-sectional area. The weight of the gas is \(M_Sg\). The pressure difference is \(\Delta P=-{M_Sg}/{A}\).
We have \[\frac{P\left(h+\Delta h\right)-P\left(h\right)}{\Delta h}=\frac{\Delta P}{\Delta h}=\frac{-M_Sg}{A\Delta h}=\frac{-M_Sg}{V_S}\] Since we are assuming that the sample of gas in the cylindrical section behaves ideally, we have \(V_S={n_SRT}/{P}\). Substituting for \(V_S\) and taking the limit as \(\Delta h\to 0\), we find \[\frac{dP}{dh}=\left(\frac{{-M}_Sg}{n_SRT}\right)P=\left(\frac{{-n}_S\overline{M}g}{n_SRT}\right)P=\left(\frac{-mg}{kT}\right)P\] where we introduce \(n_S\) as the number of moles of gas in the sample, \(\overline{M}\) as the molar mass of this gas, and \(m\) as the mass of an individual gas molecule. The last equality on the right makes use of the identities \(\overline{M}=m\overline{N}\) and \(R=\overline{N}k\). Separating variables and integrating between limits \(P\left(0\right)=P_0\) and \(P\left(h\right)=P\), we find \[\int^P_{P_0}{\frac{dP}{P}}=\left(\frac{-mg}{kT}\right)\int^h_0{dh}\] so that \[{ \ln \left(\frac{P}{P_0}\right)\ }=\frac{-mgh}{kT}\] and \[P=P_0\mathrm{exp}\left(\frac{-mgh}{kT}\right)\] Either of the latter relationships is frequently called the barometric formula. If we let \(\eta\) be the number of molecules per unit volume, \(\eta ={N}/{V}\), we can write \(P={NkT}/{V}=\eta kT\) and \(P_0={\eta }_0kT\) so that the barometric formula can be expressed in terms of these number densities as \[\eta ={\eta }_0\mathrm{exp}\left(\frac{-mgh}{kT}\right)\]
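The barometric formula is convenient to evaluate per mole, which is equivalent to the per-molecule form above since \(\overline{M}=m\overline{N}\) and \(R=\overline{N}k\). A sketch, not from the original text; the molar mass, altitude, and temperature below are illustrative assumptions:

```python
import math

R = 8.314  # gas constant, J/(mol·K)
G = 9.81   # gravitational acceleration, m/s^2

def barometric_pressure(p0, molar_mass_kg, height_m, temp_k):
    """Pressure at altitude `height_m` for an isothermal ideal-gas atmosphere:
    P = P0 * exp(-M g h / (R T)), the molar form of P = P0 * exp(-m g h / (k T))."""
    return p0 * math.exp(-molar_mass_kg * G * height_m / (R * temp_k))

# Dry air (M ≈ 0.02896 kg/mol) at an assumed 288 K: the pressure falls to
# roughly half of its sea-level value near 5.9 km.
p_5_9km = barometric_pressure(1.000, 0.02896, 5900, 288)
```

The same function with \(\eta_0\) in place of \(P_0\) gives the number-density form, since both decay by the identical exponential factor.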
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/07%3A_Obtaining_and_Preparing_Samples_for_Analysis/7.07%3A_Liquid-Liquid_Extractions |
A liquid–liquid extraction is an important separation technique for environmental, clinical, and industrial laboratories. A standard environmental analytical method illustrates the importance of liquid–liquid extractions. Municipal water departments routinely monitor public water supplies for trihalomethanes (CHCl\(_3\), CHBrCl\(_2\), CHBr\(_2\)Cl, and CHBr\(_3\)) because they are known or suspected carcinogens. Before their analysis by gas chromatography, trihalomethanes are separated from their aqueous matrix using a liquid–liquid extraction with pentane [“The Analysis of Trihalomethanes in Drinking Water by Liquid Extraction,” EPA Method 501.2 (EPA 500-Series, November 1979)]. The Environmental Protection Agency (EPA) also publishes two additional methods for trihalomethanes. Method 501.1 and Method 501.3 use a purge-and-trap to collect the trihalomethanes prior to a gas chromatographic analysis with a halide-specific detector (Method 501.1) or a mass spectrometer as the detector (Method 501.3). You will find more details about gas chromatography, including detectors, in Chapter 12. In a simple liquid–liquid extraction the solute partitions itself between two immiscible phases. One phase usually is an aqueous solvent and the other phase is an organic solvent, such as the pentane used to extract trihalomethanes from water. Because the phases are immiscible they form two layers, with the denser phase on the bottom. The solute initially is present in one of the two phases; after the extraction it is present in both phases. The extraction efficiency—that is, the percentage of solute that moves from one phase to the other—is determined by the equilibrium constant for the solute’s partitioning between the phases and any other side reactions that involve the solute. Examples of other reactions that affect extraction efficiency include acid–base reactions and complexation reactions. As we learned earlier in this chapter, a solute’s partitioning between two phases is described by a partition coefficient, \(K_\text{D}\).
If we extract a solute from an aqueous phase into an organic phase \[S_{a q} \rightleftharpoons S_{o r g} \nonumber\] then the partition coefficient is \[K_{\mathrm{D}}=\frac{\left[S_{org}\right]}{\left[S_{a q}\right]} \nonumber\] A large value for \(K_\text{D}\) indicates that extraction of solute into the organic phase is favorable. To evaluate an extraction’s efficiency we must consider the solute’s total concentration in each phase, which we define as a distribution ratio, \(D\). \[D=\frac{\left[S_{o r g}\right]_{\text { total }}}{\left[S_{a q}\right]_{\text { total }}} \nonumber\] The partition coefficient and the distribution ratio are identical if the solute has only one chemical form in each phase; however, if the solute exists in more than one chemical form in either phase, then \(K_\text{D}\) and \(D\) usually have different values. For example, if the solute exists in two forms in the aqueous phase, \(A\) and \(B\), only one of which, \(A\), partitions between the two phases, then \[D=\frac{\left[S_{o r g}\right]_{A}}{\left[S_{a q}\right]_{A}+\left[S_{a q}\right]_{B}} \leq K_{\mathrm{D}}=\frac{\left[S_{o r g}\right]_{A}}{\left[S_{a q}\right]_{A}} \nonumber\] This distinction between \(K_\text{D}\) and \(D\) is important. The partition coefficient is a thermodynamic equilibrium constant and has a fixed value for the solute’s partitioning between the two phases. The distribution ratio’s value, however, changes with solution conditions if the relative amounts of \(A\) and \(B\) change. If we know the solute’s equilibrium reactions within each phase and between the two phases, we can derive an algebraic relationship between \(K_\text{D}\) and \(D\). In a simple liquid–liquid extraction, the only reaction that affects the extraction efficiency is the solute’s partitioning between the two phases (Figure 7.7.1
). In this case the distribution ratio and the partition coefficient are equal. \[D=\frac{\left[S_{o r g}\right]_{\text { total }}}{\left[S_{aq}\right]_{\text { total }}} = K_\text{D} = \frac {[S_{org}]} {[S_{aq}]} \label{7.1}\] Let’s assume the solute initially is present in the aqueous phase and that we wish to extract it into the organic phase. A conservation of mass requires that the moles of solute initially present in the aqueous phase equal the combined moles of solute in the aqueous phase and the organic phase after the extraction. \[\left(\operatorname{mol} \ S_{a q}\right)_{0}=\left(\operatorname{mol} \ S_{a q}\right)_{1}+\left(\operatorname{mol} \ S_{org}\right)_{1} \label{7.2}\] where the subscripts indicate the extraction number with 0 representing the system before the extraction and 1 the system following the first extraction. After the extraction, the solute’s concentration in the aqueous phase is \[\left[S_{a q}\right]_{1}=\frac{\left(\operatorname{mol} \ S_{a q}\right)_{1}}{V_{a q}} \label{7.3}\] and its concentration in the organic phase is \[\left[S_{o r g}\right]_{1}=\frac{\left(\operatorname{mol} \ S_{o r g}\right)_{1}}{V_{o r g}} \label{7.4}\] where \(V_{aq}\) and \(V_{org}\) are the volumes of the aqueous phase and the organic phase.
Solving Equation \ref{7.2} for \(\left(\operatorname{mol} \ S_{org}\right)_1\) and substituting into Equation \ref{7.4} leaves us with \[\left[S_{o r g}\right]_{1} = \frac{\left(\operatorname{mol} \ S_{a q}\right)_{0}-\left(\operatorname{mol} \ S_{a q}\right)_{1}}{V_{o r g}} \label{7.5}\] Substituting Equation \ref{7.3} and Equation \ref{7.5} into Equation \ref{7.1} gives \[D = \frac {\frac {(\text{mol }S_{aq})_0-(\text{mol }S_{aq})_1} {V_{org}}} {\frac {(\text{mol }S_{aq})_1} {V_{aq}}} = \frac{\left(\operatorname{mol} \ S_{a q}\right)_{0} \times V_{a q}-\left(\operatorname{mol} \ S_{a q}\right)_{1} \times V_{a q}}{\left(\operatorname{mol} \ S_{a q}\right)_{1} \times V_{o r g}} \nonumber\] Rearranging and solving for the fraction of solute that remains in the aqueous phase after one extraction, \(\left(q_{aq}\right)_1\), gives \[\left(q_{aq}\right)_{1} = \frac{\left(\operatorname{mol} \ S_{aq}\right)_{1}}{\left(\operatorname{mol} \ S_{a q}\right)_{0}} = \frac{V_{aq}}{D V_{o r g}+V_{a q}} \label{7.6}\] The fraction present in the organic phase after one extraction, \(\left(q_{org}\right)_1\), is \[\left(q_{o r g}\right)_{1}=\frac{\left(\operatorname{mol} S_{o r g}\right)_{1}}{\left(\operatorname{mol} S_{a q}\right)_{0}}=1-\left(q_{a q}\right)_{1}=\frac{D V_{o r g}}{D V_{o r g}+V_{a q}} \nonumber\] Example 7.7.1
shows how we can use Equation \ref{7.6} to calculate the efficiency of a simple liquid-liquid extraction. A solute has a \(K_\text{D}\) between water and chloroform of 5.00. Suppose we extract a 50.00-mL sample of a 0.050 M aqueous solution of the solute using 15.00 mL of chloroform. (a) What is the separation’s extraction efficiency? (b) What volume of chloroform do we need if we wish to extract 99.9% of the solute? For a simple liquid–liquid extraction the distribution ratio, \(D\), and the partition coefficient, \(K_\text{D}\), are identical. (a) The fraction of solute that remains in the aqueous phase after the extraction is given by Equation \ref{7.6}. \[\left(q_{aq}\right)_{1}=\frac{V_{a q}}{D V_{org}+V_{a q}}=\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}=0.400 \nonumber\] The fraction of solute in the organic phase is 1–0.400, or 0.600. Extraction efficiency is the percentage of solute that moves into the extracting phase; thus, the extraction efficiency is 60.0%. (b) To extract 99.9% of the solute \(\left(q_{aq}\right)_1\) must be 0.001. Solving Equation \ref{7.6} for \(V_{org}\), and making appropriate substitutions for \(\left(q_{aq}\right)_1\) and \(V_{aq}\) gives \[V_{o r g}=\frac{V_{a q}-\left(q_{a q}\right)_{1} V_{a q}}{\left(q_{a q}\right)_{1} D}=\frac{50.00 \ \mathrm{mL}-(0.001)(50.00 \ \mathrm{mL})}{(0.001)(5.00)}=9990 \ \mathrm{mL} \nonumber\] This is a large volume of chloroform. Clearly, a single extraction is not reasonable under these conditions. In Example 7.7.1
, a single extraction provides an extraction efficiency of only 60%. If we carry out a second extraction, the fraction of solute remaining in the aqueous phase, \(\left(q_{aq}\right)_2\), is \[\left(q_{a q}\right)_{2}=\frac{\left(\operatorname{mol} \ S_{a q}\right)_{2}}{\left(\operatorname{mol} \ S_{a q}\right)_{1}}=\frac{V_{a q}}{D V_{org}+V_{a q}} \nonumber\] If \(V_{aq}\) and \(V_{org}\) are the same for both extractions, then the cumulative fraction of solute that remains in the aqueous layer after two extractions, \(\left(Q_{aq}\right)_2\), is the product of \(\left(q_{aq}\right)_1\) and \(\left(q_{aq}\right)_2\), or \[\left(Q_{aq}\right)_{2}=\frac{\left(\operatorname{mol} \ S_{aq}\right)_{2}}{\left(\operatorname{mol} \ S_{aq}\right)_{0}}=\left(q_{a q}\right)_{1} \times\left(q_{a q}\right)_{2}=\left(\frac{V_{a q}}{D V_{o r g}+V_{a q}}\right)^{2} \nonumber\] In general, for a series of \(n\) identical extractions, the fraction of analyte that remains in the aqueous phase after the \(n\)th extraction is \[\left(Q_{a q}\right)_{n}=\left(\frac{V_{a q}}{D V_{o r g}+V_{a q}}\right)^{n} \label{7.7}\] For the extraction described in Example 7.7.1
, determine (a) the extraction efficiency for two identical extractions and for three identical extractions; and (b) the number of extractions required to ensure that we extract 99.9% of the solute. (a) The fraction of solute remaining in the aqueous phase after two extractions and three extractions is \[\left(Q_{aq}\right)_{2}=\left(\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}\right)^{2}=0.160 \nonumber\] \[\left(Q_{a q}\right)_{3}=\left(\frac{50.0 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}\right)^{3}=0.0640 \nonumber\] The extraction efficiencies are 84.0% for two extractions and 93.6% for three extractions. (b) To determine the minimum number of extractions for an efficiency of 99.9%, we set \(\left(Q_{aq}\right)_n\) to 0.001 and solve for \(n\) using Equation \ref{7.7}. \[0.001=\left(\frac{50.00 \ \mathrm{mL}}{(5.00)(15.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}\right)^{n}=(0.400)^{n} \nonumber\] Taking the log of both sides and solving for \(n\) \[\begin{aligned} \log (0.001) &=n \log (0.400) \\ n &=7.54 \end{aligned} \nonumber\] we find that a minimum of eight extractions is necessary. The last two examples provide us with an important observation—for any extraction efficiency, we need less solvent if we complete several extractions using smaller portions of solvent instead of one extraction using a larger volume of solvent. For the conditions in Example 7.7.1 and Example 7.7.2, an extraction efficiency of 99.9% requires one extraction with 9990 mL of chloroform, or 120 mL when using eight 15-mL portions of chloroform. Although extraction efficiency increases dramatically with the first few extractions, the effect diminishes quickly as we increase the number of extractions (Figure 7.7.2
). In most cases there is little improvement in extraction efficiency after five or six extractions. For the conditions in Example 7.7.2
, we reach an extraction efficiency of 99% after five extractions and need three additional extractions to obtain the extra 0.9% increase in extraction efficiency. To plan a liquid–liquid extraction we need to know the solute’s distribution ratio between the two phases. One approach is to carry out the extraction on a solution that contains a known amount of solute. After the extraction, we isolate the organic phase and allow it to evaporate, leaving behind the solute. In one such experiment, 1.235 g of a solute with a molar mass of 117.3 g/mol is dissolved in 10.00 mL of water. After extracting with 5.00 mL of toluene, 0.889 g of the solute is recovered in the organic phase. (a) What is the solute’s distribution ratio between water and toluene? (b) If we extract 20.00 mL of an aqueous solution that contains the solute using 10.00 mL of toluene, what is the extraction efficiency? (c) How many extractions will we need to recover 99.9% of the solute? (a) The solute’s distribution ratio between water and toluene is \[D=\frac{\left[S_{o r g}\right]}{\left[S_{a q}\right]}=\frac{0.889 \ \mathrm{g} \times \frac{1 \ \mathrm{mol}}{117.3 \ \mathrm{g}} \times \frac{1}{0.00500 \ \mathrm{L}}}{(1.235 \ \mathrm{g}-0.889 \ \mathrm{g}) \times \frac{1 \ \mathrm{mol}}{117.3 \ \mathrm{g}} \times \frac{1}{0.01000 \ \mathrm{L}}}=5.14 \nonumber\] (b) The fraction of solute remaining in the aqueous phase after one extraction is \[\left(q_{a q}\right)_{1}=\frac{V_{a q}}{D V_{org}+V_{a q}}=\frac{20.00 \ \mathrm{mL}}{(5.14)(10.00 \ \mathrm{mL})+20.00 \ \mathrm{mL}}=0.280 \nonumber\] The extraction efficiency, therefore, is 72.0%. (c) To extract 99.9% of the solute requires \[\left(Q_{aq}\right)_{n}=0.001=\left(\frac{20.00 \ \mathrm{mL}}{(5.14)(10.00 \ \mathrm{mL})+20.00 \ \mathrm{mL}}\right)^{n}=(0.280)^{n} \nonumber\] \[\begin{aligned} \log (0.001) &=n \log (0.280) \\ n &=5.4 \end{aligned} \nonumber\] a minimum of six extractions. 
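Equation 7.7 and the toluene practice problem above translate directly into code. A sketch, not from the original text; the function names are my own, and \(D = 5.14\) and the 20.00 mL/10.00 mL volumes come from the problem:

```python
import math

def fraction_remaining(d_ratio, v_aq, v_org, n_extractions):
    """(Q_aq)_n: fraction of solute left in the aqueous phase after n
    identical extractions (Equation 7.7)."""
    return (v_aq / (d_ratio * v_org + v_aq)) ** n_extractions

def extractions_needed(d_ratio, v_aq, v_org, target_q):
    """Smallest whole number of extractions that leaves at most
    `target_q` of the solute in the aqueous phase."""
    q1 = v_aq / (d_ratio * v_org + v_aq)
    return math.ceil(math.log(target_q) / math.log(q1))

q1 = fraction_remaining(5.14, 20.00, 10.00, 1)      # ≈ 0.280 -> 72.0% efficiency
n_req = extractions_needed(5.14, 20.00, 10.00, 0.001)  # 99.9% recovery -> 6
```

The `math.ceil` call encodes the rounding-up step done by hand in the text (5.4 extractions rounds up to six).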
As we see in Equation \ref{7.1}, in a simple liquid–liquid extraction the distribution ratio and the partition coefficient are identical. As a result, the distribution ratio does not depend on the composition of the aqueous phase or the organic phase. A change in the pH of the aqueous phase, for example, will not affect the solute's extraction efficiency when \(D\) and \(K_\text{D}\) have the same value. If the solute participates in one or more additional equilibrium reactions within a phase, then the distribution ratio and the partition coefficient may not be the same. For example, Figure 7.7.3
shows the equilibrium reactions that affect the extraction of the weak acid, HA, by an organic phase in which ionic species are not soluble. In this case the partition coefficient and the distribution ratio are \[K_{\mathrm{D}}=\frac{\left[\mathrm{HA}_{org}\right]}{\left[\mathrm{HA}_{a q}\right]} \label{7.8}\] \[D=\frac{\left[\mathrm{HA}_{org}\right]_{\text { total }}}{\left[\mathrm{HA}_{a q}\right]_{\text { total }}} =\frac{\left[\mathrm{HA}_{org}\right]}{\left[\mathrm{HA}_{a q}\right]+\left[\mathrm{A}_{a q}^{-}\right]} \label{7.9}\] Because the position of an acid–base equilibrium depends on pH, the distribution ratio, \(D\), is pH-dependent. To derive an equation for \(D\) that shows this dependence, we begin with the acid dissociation constant for HA. \[K_{\mathrm{a}}=\frac{\left[\mathrm{H}_{3} \mathrm{O}_{\mathrm{aq}}^{+}\right]\left[\mathrm{A}_{\mathrm{aq}}^{-}\right]}{\left[\mathrm{HA}_{\mathrm{aq}}\right]} \label{7.10}\] Solving Equation \ref{7.10} for the concentration of A\(^-\) in the aqueous phase \[\left[\mathrm{A}_{a q}^{-}\right]=\frac{K_{\mathrm{a}} \times\left[\mathrm{HA}_{a q}\right]}{\left[\mathrm{H}_{3} \mathrm{O}_{a q}^{+}\right]} \nonumber\] and substituting into Equation \ref{7.9} gives \[D = \frac {[\text{HA}_{org}]} {[\text{HA}_{aq}] + \frac {K_a \times [\text{HA}_{aq}]}{[\text{H}_3\text{O}_{aq}^+]}} \nonumber\] Factoring \([\text{HA}_{aq}]\) from the denominator, replacing \([\text{HA}_{org}]/[\text{HA}_{aq}]\) with \(K_\text{D}\) (Equation \ref{7.8}), and simplifying leaves us with the following relationship between the distribution ratio, \(D\), and the pH of the aqueous solution. \[D=\frac{K_{\mathrm{D}}\left[\mathrm{H}_{3} \mathrm{O}_{aq}^{+}\right]}{\left[\mathrm{H}_{3} \mathrm{O}_{aq}^{+}\right]+K_{a}} \label{7.11}\] An acidic solute, HA, has a \(K_a\) of \(1.00 \times 10^{-5}\) and a \(K_\text{D}\) between water and hexane of 3.00. Calculate the extraction efficiency if we extract a 50.00 mL sample of a 0.025 M aqueous solution of HA, buffered to a pH of 3.00, with 50.00 mL of hexane. Repeat for pH levels of 5.00 and 7.00.
When the pH is 3.00, [\(\text{H}_3\text{O}_{aq}^+\)] is \(1.0 \times 10^{-3}\) and the distribution ratio is \[D=\frac{(3.00)\left(1.0 \times 10^{-3}\right)}{1.0 \times 10^{-3}+1.00 \times 10^{-5}}=2.97 \nonumber\] The fraction of solute that remains in the aqueous phase is \[\left(Q_{aq}\right)_{1}=\frac{50.00 \ \mathrm{mL}}{(2.97)(50.00 \ \mathrm{mL})+50.00 \ \mathrm{mL}}=0.252 \nonumber\] The extraction efficiency, therefore, is almost 75%. The same calculation at a pH of 5.00 gives the extraction efficiency as 60%. At a pH of 7.00 the extraction efficiency is just 3%. The extraction efficiency in Example 7.7.3
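Equation \ref{7.11} and the efficiency calculation above are simple enough to combine into a small helper. The sketch below is illustrative (the function name is an assumption, not from the text):

```python
# Sketch: extraction efficiency of the weak acid HA as a function of pH
# via Equation 7.11, with Ka = 1.00e-5, KD = 3.00, and equal 50.00 mL
# phase volumes, as in the example above.
KD, Ka = 3.00, 1.00e-5
V_aq = V_org = 50.00   # mL

def efficiency(pH):
    h = 10.0 ** (-pH)                  # [H3O+]
    D = KD * h / (h + Ka)              # Equation 7.11
    q = V_aq / (D * V_org + V_aq)      # fraction left in the aqueous phase
    return 100 * (1 - q)

for pH in (3.00, 5.00, 7.00):
    print(pH, round(efficiency(pH), 1))   # 74.8, 60.0, 2.9 (%)
```

The three printed efficiencies match the "almost 75%", 60%, and ~3% values quoted above.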
is greater at more acidic pH levels because HA is the solute's predominant form in the aqueous phase. At a more basic pH, where A\(^-\) is the solute's predominant form, the extraction efficiency is smaller. A graph of extraction efficiency versus pH is shown in Figure 7.7.4
. Note that extraction efficiency essentially is independent of pH for pH levels more acidic than HA's p\(K_a\), and that it is essentially zero for pH levels more basic than HA's p\(K_a\). The greatest change in extraction efficiency occurs at pH levels where both HA and A\(^-\) are predominant species. The ladder diagram for HA along the graph's x-axis helps illustrate this effect. The liquid–liquid extraction of the weak base B is governed by the following equilibrium reactions: \[\begin{array}{c}{\mathrm{B}(a q) \rightleftharpoons \mathrm{B}(org) \quad K_{D}=5.00} \\ {\mathrm{B}(a q)+\mathrm{H}_{2} \mathrm{O}(l)\rightleftharpoons \mathrm{OH}^{-}(a q)+\mathrm{HB}^{+}(a q) \quad K_{b}=1.0 \times 10^{-4}}\end{array} \nonumber\] Derive an equation for the distribution ratio, \(D\), and calculate the extraction efficiency if 25.0 mL of a 0.025 M solution of B, buffered to a pH of 9.00, is extracted with 50.0 mL of the organic solvent. Because the weak base exists in two forms, only one of which extracts into the organic phase, the partition coefficient, \(K_\text{D}\), and the distribution ratio, \(D\), are not identical.
\[K_{\mathrm{D}}=\frac{\left[\mathrm{B}_{org}\right]}{\left[\mathrm{B}_{aq}\right]} \nonumber\] \[D = \frac {[\text{B}_{org}]_{\text{total}}} {[\text{B}_{aq}]_{\text{total}}} = \frac {[\text{B}_{org}]} {[\text{B}_{aq}] + [\text{HB}_{aq}^+]} \nonumber\] Using the \(K_b\) expression for the weak base \[K_{\mathrm{b}}=\frac{\left[\mathrm{OH}_{a q}^{-}\right]\left[\mathrm{HB}_{a q}^{+}\right]}{\left[\mathrm{B}_{a q}\right]} \nonumber\] we solve for the concentration of HB\(^+\) and substitute back into the equation for \(D\), obtaining \[D = \frac {[\text{B}_{org}]} {[\text{B}_{aq}] + \frac {K_b \times [\text{B}_{aq}]} {[\text{OH}_{aq}^-]}} = \frac {[\text{B}_{org}]} {[\text{B}_{aq}]\left(1+\frac {K_b} {[\text{OH}_{aq}^-]} \right)} =\frac{K_{D}\left[\mathrm{OH}_{a q}^{-}\right]}{\left[\mathrm{OH}_{a q}^{-}\right]+K_{\mathrm{b}}} \nonumber\] At a pH of 9.0, the [OH\(^-\)] is \(1 \times 10^{-5}\) M and the distribution ratio has a value of \[D=\frac{K_{D}\left[\mathrm{OH}_{a q}^{-}\right]}{\left[\mathrm{OH}_{aq}^{-}\right]+K_{\mathrm{b}}}=\frac{(5.00)\left(1.0 \times 10^{-5}\right)}{1.0 \times 10^{-5}+1.0 \times 10^{-4}}=0.455 \nonumber\] After one extraction, the fraction of B remaining in the aqueous phase is \[\left(q_{aq}\right)_{1}=\frac{25.00 \ \mathrm{mL}}{(0.455)(50.00 \ \mathrm{mL})+25.00 \ \mathrm{mL}}=0.524 \nonumber\] The extraction efficiency, therefore, is 47.6%. At a pH of 9, most of the weak base is present as HB\(^+\), which explains why the overall extraction efficiency is so poor. One important application of a liquid–liquid extraction is the selective extraction of metal ions using an organic ligand. Unfortunately, many organic ligands are not very soluble in water or undergo hydrolysis or oxidation reactions in aqueous solutions. For these reasons the ligand is added to the organic solvent instead of the aqueous phase. Figure 7.7.5
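A quick numerical check of the weak-base result (the variable names below are illustrative, not from the text):

```python
# Sketch: D = KD[OH-] / ([OH-] + Kb) for the weak base B, followed by a
# single extraction of 25.0 mL of aqueous phase with 50.0 mL of solvent.
KD, Kb = 5.00, 1.0e-4
OH = 10.0 ** (9.00 - 14.00)     # [OH-] at pH 9.00, assuming pKw = 14
D = KD * OH / (OH + Kb)
q = 25.0 / (D * 50.0 + 25.0)    # fraction of B left in the aqueous phase
print(round(D, 3), round(100 * (1 - q), 1))   # 0.455 47.6
```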
shows the relevant equilibrium reactions (and equilibrium constants) for the extraction of M\(^{n+}\) by the ligand HL, including the ligand's extraction into the aqueous phase (\(K_{D,HL}\)), the ligand's acid dissociation reaction (\(K_a\)), the formation of the metal–ligand complex (\(\beta_n\)), and the complex's extraction into the organic phase (\(K_{D,c}\)). If the ligand's concentration is much greater than the metal ion's concentration, then the distribution ratio is \[D=\frac{\beta_{n} K_{\mathrm{D}, c}\left(K_{a}\right)^{n}\left(C_{\mathrm{HL}}\right)^{n}}{\left(K_{\mathrm{D}, \mathrm{HL}}\right)^{n}\left[\mathrm{H}_{3} \mathrm{O}^{+}\right]^{n}+\beta_{n}\left(K_{\mathrm{a}}\right)^{n}\left(C_{\mathrm{HL}}\right)^{n}} \label{7.12}\] where \(C_{HL}\) is the ligand's initial concentration in the organic phase. As shown in Example 7.7.4
, the extraction efficiency for metal ions shows a marked pH dependency. A liquid–liquid extraction of the divalent metal ion, M\(^{2+}\), uses the scheme outlined in Figure 7.7.5
. The partition coefficients for the ligand, \(K_{D,HL}\), and for the metal–ligand complex, \(K_{D,c}\), are \(1.0 \times 10^4\) and \(7.0 \times 10^4\), respectively. The ligand's acid dissociation constant, \(K_a\), is \(5.0 \times 10^{-5}\), and the formation constant for the metal–ligand complex, \(\beta_2\), is \(2.5 \times 10^{16}\). What is the extraction efficiency if we extract 100.0 mL of a \(1.0 \times 10^{-6}\) M aqueous solution of M\(^{2+}\), buffered to a pH of 1.00, with 10.00 mL of an organic solvent that is 0.1 mM in the chelating agent? Repeat the calculation at a pH of 3.00. When the pH is 1.00 the distribution ratio is \[D=\frac{\left(2.5 \times 10^{16}\right)\left(7.0 \times 10^{4}\right)\left(5.0 \times 10^{-5}\right)^{2}\left(1.0 \times 10^{-4}\right)^{2}}{\left(1.0 \times 10^{4}\right)^{2}(0.10)^{2}+\left(2.5 \times 10^{16}\right)\left(5.0 \times 10^{-5}\right)^{2}\left(1.0 \times 10^{-4}\right)^{2}} \nonumber\] or a \(D\) of 0.0438. The fraction of metal ion that remains in the aqueous phase is \[\left(Q_{aq}\right)_{1}=\frac{100.0 \ \mathrm{mL}}{(0.0438)(10.00 \ \mathrm{mL})+100.0 \ \mathrm{mL}}=0.996 \nonumber\] At a pH of 1.00, we extract only 0.40% of the metal into the organic phase. Changing the pH to 3.00, however, increases the extraction efficiency to 97.8%. Figure 7.7.6
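Equation \ref{7.12} lends itself to the same kind of numerical check. The sketch below recomputes the two extraction efficiencies from the constants quoted in the example (the helper name is an assumption for illustration):

```python
# Sketch: Equation 7.12 for the divalent metal ion (n = 2) with the
# constants quoted above, then the single-extraction efficiency.
beta2 = 2.5e16     # formation constant of the metal-ligand complex
KD_c  = 7.0e4      # partition coefficient of the complex
KD_HL = 1.0e4      # partition coefficient of the ligand
Ka    = 5.0e-5     # ligand acid dissociation constant
C_HL  = 1.0e-4     # M, ligand concentration in the organic phase (0.1 mM)
n     = 2

def efficiency(pH, V_aq=100.0, V_org=10.00):   # volumes in mL
    h = 10.0 ** (-pH)
    num = beta2 * KD_c * Ka**n * C_HL**n
    den = KD_HL**n * h**n + beta2 * Ka**n * C_HL**n
    D = num / den                              # Equation 7.12
    q = V_aq / (D * V_org + V_aq)
    return 100 * (1 - q)

print(round(efficiency(1.00), 2))   # ~0.4% extracted at pH 1.00
print(round(efficiency(3.00), 1))   # 97.8% extracted at pH 3.00
```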
shows how the pH of the aqueous phase affects the extraction efficiency for M\(^{2+}\). One advantage of using a ligand to extract a metal ion is the high degree of selectivity that it brings to a liquid–liquid extraction. As seen in Figure 7.7.6
, a divalent metal ion’s extraction efficiency increases from approximately 0% to 100% over a range of 2 pH units. Because a ligand’s ability to form a metal–ligand complex varies substantially from metal ion to metal ion, significant selectivity is possible if we carefully control the pH. Table 7.7.1
shows the minimum pH for extracting 99% of a metal ion from an aqueous solution using an equal volume of 4 mM dithizone in CCl\(_4\). Using Table 7.7.1
, explain how we can separate the metal ions in an aqueous mixture of Cu\(^{2+}\), Cd\(^{2+}\), and Ni\(^{2+}\) by extracting with an equal volume of dithizone in CCl\(_4\). From Table 7.7.1
, a quantitative separation of Cu\(^{2+}\) from Cd\(^{2+}\) and from Ni\(^{2+}\) is possible if we acidify the aqueous phase to a pH of less than 1. This pH is greater than the minimum pH for extracting Cu\(^{2+}\) and significantly less than the minimum pH for extracting either Cd\(^{2+}\) or Ni\(^{2+}\). After the extraction of Cu\(^{2+}\) is complete, we shift the pH of the aqueous phase to 4.0, which allows us to extract Cd\(^{2+}\) while leaving Ni\(^{2+}\) in the aqueous phase.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/02%3A_Atoms_and_The_Atomic_Theory/2.2%3A_Electrons_and_Other_Discoveries_in_Atomic_Physics |
Long before the end of the 19th century, it was well known that applying a high voltage to a gas contained at low pressure in a sealed tube (called a gas discharge tube) caused electricity to flow through the gas, which then emitted light (Figure \(\Page {1}\)). Researchers trying to understand this phenomenon found that an unusual form of energy was also emitted from the cathode, or negatively charged electrode; this form of energy was called a cathode ray. In 1897, the British physicist J. J. Thomson (1856–1940) proved that atoms were not the most basic form of matter. He demonstrated that cathode rays could be deflected, or bent, by magnetic or electric fields, which indicated that cathode rays consist of charged particles (Figure \(\Page {2}\)). More important, by measuring the extent of the deflection of the cathode rays in magnetic or electric fields of various strengths, Thomson was able to calculate the mass-to-charge ratio of the particles. These particles were emitted by the negatively charged cathode and repelled by the negative terminal of an electric field. Because like charges repel each other and opposite charges attract, Thomson concluded that the particles had a net negative charge; these particles are now called electrons. Most relevant to the field of chemistry, Thomson found that the mass-to-charge ratio of cathode rays is independent of the nature of the metal electrodes or the gas, which suggested that electrons were fundamental components of all atoms. Subsequently, the American scientist Robert Millikan (1868–1953) carried out a series of experiments using electrically charged oil droplets, which allowed him to calculate the charge on a single electron. With this information and Thomson’s mass-to-charge ratio, Millikan determined the mass of an electron: \[\dfrac {mass}{charge} \times {charge} ={mass}\] It was at this point that two separate lines of investigation began to converge, both aimed at determining how and why matter emits energy. 
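To see how the two measurements combine, here is a one-line check of the relation above; the numbers are modern values, not Thomson's or Millikan's original measurements:

```python
# Illustrative only: modern values, not the historical measurements.
m_over_e = 5.686e-12    # kg/C, electron mass-to-charge ratio (Thomson's quantity)
e = 1.602e-19           # C, elementary charge (Millikan's oil-drop quantity)
mass = m_over_e * e     # mass = (mass/charge) x charge
print(mass)             # ~9.11e-31 kg, the electron mass
```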
The second line of investigation began in 1896, when the French physicist Henri Becquerel (1852–1908) discovered that certain minerals, such as uranium salts, emitted a new form of energy. Becquerel's work was greatly extended by Marie Curie (1867–1934) and her husband, Pierre (1854–1906); all three shared the Nobel Prize in Physics in 1903. Marie Curie coined the term radioactivity (from the Latin radius, meaning "ray") to describe the emission of energy rays by matter. She found that one particular uranium ore, pitchblende, was substantially more radioactive than most, which suggested that it contained one or more highly radioactive impurities. Starting with several tons of pitchblende, the Curies isolated two new radioactive elements after months of work: polonium, which was named for Marie's native Poland, and radium, which was named for its intense radioactivity. Pierre Curie carried a vial of radium in his coat pocket to demonstrate its greenish glow, a habit that caused him to become ill from radiation poisoning well before he was run over by a horse-drawn wagon and killed instantly in 1906. Marie Curie, in turn, died of what was almost certainly radiation poisoning. Building on the Curies' work, the British physicist Ernest Rutherford (1871–1937) performed decisive experiments that led to the modern view of the structure of the atom. While working in Thomson's laboratory shortly after Thomson discovered the electron, Rutherford showed that compounds of uranium and other elements emitted at least two distinct types of radiation. One was readily absorbed by matter and seemed to consist of particles that had a positive charge and were massive compared to electrons. Because it was the first kind of radiation to be discovered, Rutherford called these substances α particles.
Rutherford also showed that the particles in the second type of radiation, β particles, had the same charge and mass-to-charge ratio as Thomson’s electrons; they are now known to be high-speed electrons. A third type of radiation, γ rays, was discovered somewhat later and found to be similar to a lower-energy form of radiation called x-rays, now used to produce images of bones and teeth. These three kinds of radiation—α particles, β particles, and γ rays—are readily distinguished by the way they are deflected by an electric field and by the degree to which they penetrate matter. As Figure \(\Page {3}\) illustrates, α particles and β particles are deflected in opposite directions; α particles are deflected to a much lesser extent because of their higher mass-to-charge ratio. In contrast, γ rays have no charge, so they are not deflected by electric or magnetic fields. Figure \(\Page {5}\) shows that α particles have the least penetrating power and are stopped by a sheet of paper, whereas β particles can pass through thin sheets of metal but are absorbed by lead foil or even thick glass. In contrast, γ-rays can readily penetrate matter; thick blocks of lead or concrete are needed to stop them. Atoms, the smallest particles of an element that exhibit the properties of that element, consist of negatively charged electrons around a central nucleus composed of more massive positively charged protons and electrically neutral neutrons. Radioactivity is the emission of energetic particles and rays (radiation) by some substances. Three important kinds of radiation are α particles (helium nuclei), β particles (electrons traveling at high speed), and γ rays (similar to x-rays but higher in energy). | 5,647 | 2,292 |
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Book3A_Bioinorganic_Chemistry_(Bertini_et_al.)/06%3A_Electron_Transfer/6.02%3A_Coupling_Electron_Transfers_and_Substrate_Activation |
Electron transfers are key steps in many enzymatic reactions involving the oxidation or reduction of a bound substrate. Relevant examples include cytochrome c oxidase (O\(_2 \rightarrow\) 2H\(_2\)O) and nitrogenase (N\(_2 \rightarrow\) 2NH\(_3\)). To reinforce the claim that electron-transfer steps are of widespread importance, several other redox systems, representative of diverse metabolic processes, will be mentioned here. Xanthine oxidase (275 kDa; \(\alpha_{2}\) dimer) catalyzes the two-electron oxidation of xanthine to uric acid (Equation 6.7). The first step in the biosynthesis of DNA involves the reduction of ribonucleotides (Equation 6.8) catalyzed by ribonucleotide reductase. The enzyme is an \(\alpha_{2} \beta_{2}\) tetramer composed of a B1 protein (160 kDa) and a B2 protein (78 kDa). The B1 protein (a dimer) contains redox-active dithiol groups, binding sites for ribonucleotide substrates, and regulatory binding sites for nucleotide diphosphates. Protein B2, also a dimer, possesses a phenolate radical (Tyr-122) that is stabilized by an antiferromagnetically coupled binuclear iron center (Figure 6.18). This radical is essential for enzyme activity, and is ~10 Å from the protein-B1/protein-B2 interface. Hence it cannot directly participate in an H-atom abstraction from the substrate (bound to protein B1). Instead, the x-ray structure of the B2 protein suggests that a long-range electron transfer from the Tyr radical to a residue (perhaps Trp-48) on the B1 protein is operative during enzyme turnover. Most of the presently known metal-containing mono- and dioxygenases are multicomponent, requiring the involvement of additional proteins (electron transferases) to shuttle electrons from a common biological reductant (usually NADH or NADPH) to the metallooxygenase. Cytochrome P-450, whose substrate oxidation chemistry was discussed in detail in Chapter 5, serves as an excellent example.
Figure 5.10 presented a catalytic cycle for cytochrome P-450-dependent hydroxylations that begins with substrate (RH) binding to the ferric enzyme (RH is camphor for cytochrome P-450). To hydroxylate the camphor substrate, the monooxygenase must be reduced via the electron-transport chain in Equation (6.9). The ferredoxin reductase receives two electrons from NADH and passes them on, one at a time, to putidaredoxin, a [2Fe-2S] iron-sulfur protein. Thus, two single-electron-transfer steps from reduced putidaredoxin to cytochrome P-450 are required to complete one enzyme turnover. The activity of the enzyme appears to be regulated at the first reduction step. In a 1:1 putidaredoxin-cytochrome P-450 complex, the reduction potential of putidaredoxin is -196 mV, but that of cytochrome P-450 is -340 mV in the absence of camphor; reduction of the cytochrome P-450 is thus thermodynamically unfavorable (k ~ 0.22 s\(^{-1}\)). Upon binding camphor, the reduction potential of cytochrome P-450 shifts to -173 mV, and the electron-transfer rate in the protein complex accordingly increases to 41 s\(^{-1}\). "Costly" reducing equivalents are not wasted, and there are no appreciable amounts of noxious oxygen-reduction products when substrate is not present. In the third step, molecular oxygen binds to the camphor adduct of ferrous cytochrome P-450. This species, in the presence of reduced putidaredoxin, accepts a second electron, and catalyzes the hydroxylation of the bound camphor substrate. The turnover rate for the entire catalytic cycle is 10-20 s\(^{-1}\), and the second electron-transfer step appears to be rate-determining. The bulk of the interest in electron-transfer reactions of redox proteins has been directed toward questions dealing with long-range electron transfer and the nature of protein-protein complexes whose structures are optimized for rapid intramolecular electron transfer.
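The effect of camphor binding on the first electron transfer can be made concrete with \(\Delta G = -nF \Delta E\) (n = 1). This calculation is a sketch added for illustration, not part of the original text:

```python
# Sketch: driving force for the first electron transfer from the quoted
# reduction potentials, dG = -n*F*dE with n = 1.
F = 96485.0   # C/mol, Faraday constant

def dG_kJ(E_donor_mV, E_acceptor_mV):
    dE = (E_acceptor_mV - E_donor_mV) / 1000.0   # cell potential, V
    return -F * dE / 1000.0                      # kJ/mol, n = 1

print(round(dG_kJ(-196, -340), 1))  # +13.9 kJ/mol: uphill without camphor
print(round(dG_kJ(-196, -173), 1))  # -2.2 kJ/mol: downhill with camphor bound
```

The sign change mirrors the observed jump in electron-transfer rate from ~0.22 to 41 s⁻¹ upon camphor binding.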
Before we undertake a discussion of these issues, it is worth noting that studies of the reactions of redox proteins at electrodes are attracting increasing attention. Direct electron transfer between a variety of redox proteins and electrode surfaces has been achieved. Potential applications include the design of substrate-specific biosensors, the development of biofuel cells, and electrochemical syntheses. An interesting application of bioelectrochemical technology is the oxidation of p-cresol to p-hydroxybenzaldehyde (Figure 6.19). | 4,365 | 2,293 |
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Exercises%3A_Physical_and_Theoretical_Chemistry/Exercises%3A_Simons_and_Nichols/3%3A_Trapped_Particles |
simons.hec.utah.edu/TheoryPag...&Solutions.pdf A particle of mass \(m\) moves in a one-dimensional box of length \(L\), with boundaries at \(x = 0\) and \(x = L\). Thus, \(V(x) = 0\) for \(0 ≤ x ≤ L\), and \(V(x) = ∞\) elsewhere. The normalized eigenfunctions of the Hamiltonian for this system are given by \[Ψ_{n} (x) = \sqrt{\dfrac{2}{L}} \sin \left(\dfrac{n\pi x}{L} \right)\] with \[E_n = \dfrac{n^2 π^2 \hbar^2}{ 2mL^2}\] where the quantum number \(n\) can take on the values \(n=1,2,3,....\) A particle is confined to a one-dimensional box of length \(L\) having infinitely high walls and is in its lowest quantum state. Calculate \(\langle x \rangle\), \(\langle x^2 \rangle\), \(\langle p \rangle\), and \(\langle p^2 \rangle\). Use the definition of the uncertainty \(\sigma_A\) of an observable \(A\), \[\sigma_A = \sqrt{\langle A^2 \rangle − \langle A \rangle ^2},\] to verify the Heisenberg uncertainty principle. It has been claimed that as the quantum number \(n\) increases, the motion of a particle in a box becomes more classical. In this problem you will have an opportunity to convince yourself of this fact:
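The ground-state expectation values this exercise asks for can be checked numerically. The sketch below is not the exercise's official solution; it works in units where \(\hbar = m = L = 1\), evaluates \(\langle x \rangle\) and \(\langle x^2 \rangle\) with a midpoint rule, and uses \(\langle p \rangle = 0\) (by symmetry) together with \(\langle p^2 \rangle = 2mE_1 = (\pi\hbar/L)^2\):

```python
# Sketch: ground-state (n = 1) uncertainties for the particle in a box,
# in units where hbar = m = L = 1.
import math

L = 1.0
N = 100_000
dx = L / N
xs = [(i + 0.5) * dx for i in range(N)]   # midpoint grid

def psi(x):
    # n = 1 normalized eigenfunction
    return math.sqrt(2 / L) * math.sin(math.pi * x / L)

ex  = sum(psi(x) * x * psi(x) for x in xs) * dx        # <x>   -> L/2
ex2 = sum(psi(x) * x * x * psi(x) for x in xs) * dx    # <x^2>
ep  = 0.0                    # <p> vanishes by symmetry
ep2 = (math.pi / L) ** 2     # <p^2> = 2mE_1 for the ground state

sigma_x = math.sqrt(ex2 - ex ** 2)
sigma_p = math.sqrt(ep2 - ep ** 2)
print(round(sigma_x * sigma_p, 4))   # 0.5679
```

The product ≈ 0.57 ℏ exceeds ℏ/2, consistent with the Heisenberg uncertainty principle.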
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Chromatography/V._Chromatography/E._Paper_Chromatography |
The paper is suspended in a container with a shallow layer of a suitable solvent or mixture of solvents in it. It is important that the solvent level is below the line with the spots on it. The next diagram doesn't show details of how the paper is suspended because there are too many possible ways of doing it and it clutters the diagram. Sometimes the paper is just coiled into a loose cylinder and fastened with paper clips top and bottom. The cylinder then just stands in the bottom of the container. The reason for covering the container is to make sure that the atmosphere in the beaker is saturated with solvent vapour. Saturating the atmosphere in the beaker with vapour stops the solvent from evaporating as it rises up the paper. As the solvent slowly travels up the paper, the different components of the ink mixtures travel at different rates and the mixtures are separated into different colored spots. The diagram shows what the plate might look like after the solvent has moved almost to the top. It is fairly easy to see from the final chromatogram that the pen that wrote the message contained the same dyes as pen 2. You can also see that pen 1 contains a mixture of two different blue dyes - one of which might be the same as the single dye in pen 3. Some compounds in a mixture travel almost as far as the solvent does; some stay much closer to the base line. The distance travelled relative to the solvent is a constant for a particular compound as long as you keep everything else constant - the type of paper and the exact composition of the solvent, for example. The distance travelled relative to the solvent is called the R\(_f\) value.
For each compound it can be worked out using the formula: R\(_f\) = distance travelled by the compound ÷ distance travelled by the solvent. For example, if one component of a mixture travelled 9.6 cm from the base line while the solvent had travelled 12.0 cm, then the R\(_f\) value for that component is: 9.6 / 12.0 = 0.80. In the example we looked at with the various pens, it wasn't necessary to measure R\(_f\) values because you are making a direct comparison just by looking at the chromatogram. You are making the assumption that if you have two spots in the final chromatogram which are the same color and have travelled the same distance up the paper, they are most likely the same compound. It isn't necessarily true of course - you could have two similarly colored compounds with very similar R\(_f\) values. We'll look at how you can get around that problem further down the page. In some cases, it may be possible to make the spots visible by reacting them with something which produces a colored product. A good example of this is in chromatograms produced from amino acid mixtures. Suppose you had a mixture of amino acids and wanted to find out which particular amino acids the mixture contained. For simplicity we'll assume that you know the mixture can only possibly contain five of the common amino acids. A small drop of a solution of the mixture is placed on the base line of the paper, and similar small spots of the known amino acids are placed alongside it. The paper is then stood in a suitable solvent and left to develop as before. In the diagram, the mixture is M, and the known amino acids are labeled 1 to 5. The position of the solvent front is marked in pencil and the chromatogram is allowed to dry and is then sprayed with a solution of ninhydrin. Ninhydrin reacts with amino acids to give colored compounds, mainly brown or purple. The left-hand diagram shows the paper after the solvent front has almost reached the top. The spots are still invisible. The second diagram shows what it might look like after spraying with ninhydrin.
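The R\(_f\) arithmetic above amounts to a single division; the helper below is a trivial sketch (the function name is illustrative):

```python
# Minimal sketch of the Rf calculation described above.
def rf(distance_compound_cm, distance_solvent_cm):
    return distance_compound_cm / distance_solvent_cm

print(round(rf(9.6, 12.0), 2))   # 0.8
```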
There is no need to measure the R\(_f\) values because you can easily compare the spots in the mixture with those of the known amino acids - both from their positions and their colors. In this example, the mixture contains the amino acids labeled as 1, 4 and 5. And what if the mixture contained amino acids other than the ones we have used for comparison? There would be spots in the mixture which didn't match those from the known amino acids. You would have to re-run the experiment using other amino acids for comparison. Two way paper chromatography gets around the problem of separating out substances which have very similar R\(_f\) values. I'm going to go back to talking about colored compounds because it is much easier to see what is happening. You can perfectly well do this with colorless compounds - but you have to use quite a lot of imagination in the explanation of what is going on! This time a chromatogram is made starting from a single spot of mixture placed towards one end of the base line. It is stood in a solvent as before and left until the solvent front gets close to the top of the paper. In the diagram, the position of the solvent front is marked in pencil before the paper dries out. This is labeled as SF1 - the solvent front for the first solvent. We shall be using two different solvents. If you look closely, you may be able to see that the large central spot in the chromatogram is partly blue and partly green. Two dyes in the mixture have almost the same R\(_f\) values. They could equally well, of course, both have been the same color - in which case you couldn't tell whether there was one or more dye present in that spot. What you do now is to wait for the paper to dry out completely, and then rotate it through 90°, and develop the chromatogram again in a different solvent. It is very unlikely that the two confusing spots will have the same R\(_f\) values in the second solvent as well as the first, and so the spots will move by a different amount.
The next diagram shows what might happen to the various spots on the original chromatogram. The position of the second solvent front is also marked. You wouldn't, of course, see these spots in both their original and final positions - they have moved! The final chromatogram would look like this: Two way chromatography has completely separated out the mixture into four distinct spots. If you want to identify the spots in the mixture, you obviously can't do it with comparison substances on the same chromatogram as we looked at earlier with the pens or amino acids examples. You would end up with a meaningless mess of spots. You can, though, work out the R\(_f\) values for each of the spots in both solvents, and then compare these with values that you have measured for known compounds under exactly the same conditions. Paper is made of cellulose fibres, and cellulose is a polymer of the simple sugar, glucose. The key point about cellulose is that the polymer chains have -OH groups sticking out all around them. To that extent, it presents the same sort of surface as silica gel or alumina in thin layer chromatography. It would be tempting to try to explain paper chromatography in terms of the way that different compounds are adsorbed to different extents on to the paper surface. In other words, it would be nice to be able to use the same explanation for both thin layer and paper chromatography. Unfortunately, it is more complicated than that! The complication arises because the cellulose fibres attract water vapour from the atmosphere as well as any water that was present when the paper was made. You can therefore think of paper as being cellulose fibres with a very thin layer of water molecules bound to the surface. It is the interaction with this water which is the most important effect during paper chromatography. Suppose you use a non-polar solvent such as hexane to develop your chromatogram.
Non-polar molecules in the mixture that you are trying to separate will have little attraction for the water molecules attached to the cellulose, and so will spend most of their time dissolved in the moving solvent. Molecules like this will therefore travel a long way up the paper carried by the solvent. They will have relatively high R\(_f\) values. On the other hand, polar molecules will have a high attraction for the water molecules and much less for the non-polar solvent. They will therefore tend to dissolve in the thin layer of water around the cellulose fibres much more than in the moving solvent. Because they spend more time dissolved in the stationary phase and less time in the mobile phase, they aren't going to travel very fast up the paper. The tendency for a compound to divide its time between two immiscible solvents (solvents such as hexane and water which won't mix) is known as partition. Paper chromatography using a non-polar solvent is therefore a type of partition chromatography. A moment's thought will tell you that partition can't be the explanation if you are using water as the solvent for your mixture. If you have water as the mobile phase and the water bound on to the cellulose as the stationary phase, there can't be any meaningful difference between the amount of time a substance spends in solution in either of them. All substances should be equally soluble (or equally insoluble) in both. And yet the first chromatograms that you made were probably of inks using water as your solvent. If water works as the mobile phase as well as being the stationary phase, there has to be some quite different mechanism at work - and that must be equally true for other polar solvents like the alcohols, for example. Partition only happens between solvents which don't mix with each other. Polar solvents like the small alcohols do mix with water. In researching this topic, I haven't found any easy explanation for what happens in these cases.
Most sources ignore the problem altogether and just quote the partition explanation without making any allowance for the type of solvent you are using. Other sources quote mechanisms which have so many strands to them that they are far too complicated for this introductory level. I'm therefore not taking this any further - you shouldn't need to worry about this at UK A level, or its various equivalents. | 9,886 | 2,297 |
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Microscopy/Scanning_Probe_Microscopy/03_Basic_Theory/02_Atomic_Force_Microscopy_(AFM) |
AFM provides a 3D profile of the surface on a nanoscale, by measuring forces between a sharp probe (<10 nm) and surface at very short distance (0.2-10 nm probe-sample separation). The probe is supported on a flexible cantilever. The AFM tip "gently" touches the surface and records the small force between the probe and the surface. The probe is placed on the end of a cantilever (which one can think of as a spring). The amount of force between the probe and sample is dependent on the spring constant (stiffness) of the cantilever and the distance between the probe and the sample surface. This force can be described using Hooke's Law: \[\mathrm{F=-k·x}\nonumber\] F = Force k = spring constant x = cantilever deflection If the spring constant of the cantilever (typically ~ 0.1-1 N/m) is less than that of the surface, the cantilever bends and the deflection is monitored. This typically results in forces ranging from nN (10\(^{-9}\)) to µN (10\(^{-6}\)) in the open air. Probes are typically made from Si\(_3\)N\(_4\) or Si. Different cantilever lengths, materials, and shapes allow for varied spring constants and resonant frequencies. A description of the variety of different probes can be found at various vendor sites. Probes may be coated with other materials for additional SPM applications such as chemical force microscopy (CFM) and magnetic force microscopy (MFM). The motion of the probe across the surface is controlled similarly to the STM using feedback loops and piezoelectric scanners. The primary difference in instrumentation design is how the forces between the probe and sample surface are monitored. The deflection of the probe is typically measured by a "beam bounce" method. A semiconductor diode laser is bounced off the back of the cantilever onto a position sensitive photodiode detector. This detector measures the bending of the cantilever as the tip is scanned over the sample. The measured cantilever deflections are used to generate a map of the surface topography.
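As an order-of-magnitude illustration of Hooke's law here (the k and x values below are assumed, chosen from the typical ranges quoted above):

```python
# Illustrative Hooke's law estimate for an AFM cantilever; k and x are
# assumed values, not measurements from the text.
k = 0.5       # N/m, spring constant (typical range 0.1-1 N/m)
x = 2e-9      # m, assumed cantilever deflection of 2 nm
F = -k * x    # Hooke's law
print(abs(F))   # 1e-09 N, i.e. 1 nN -- within the quoted nN-to-uN range
```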
For a visual depiction of the "beam bounce" method of detection in AFM, you can refer to the following demonstration, which utilizes Legos®, magnets, and a laser pointer to demonstrate this concept. The dominant interactions at short probe-sample distances in AFM are Van der Waals (VdW) interactions. However, long-range interactions (i.e., capillary, electrostatic, magnetic) are significant further away from the surface; these are important in other SPM methods of analysis. During contact with the sample, the probe predominantly experiences repulsive forces (contact mode); this leads to the tip deflection described previously. As the tip moves further away from the surface, attractive forces are dominant (non-contact mode). The common imaging modes compare as follows.

Contact mode. Advantages: fast scanning; good for rough samples; used in friction analysis. Disadvantages: at times, forces can damage/deform soft samples (however, imaging in liquids often resolves this issue).

Oscillating/tapping mode (oscillation amplitude: 20-100 nm). Advantages: allows high resolution of samples that are easily damaged and/or loosely held to a surface; good for biological samples. Disadvantages: more challenging to image in liquids; slower scan speeds needed.

Non-contact mode. Advantages: VERY low force exerted on the sample (~10⁻¹² N); extended probe lifetime. Disadvantages: generally lower resolution; a contaminant layer on the surface can interfere with oscillation; usually needs ultra-high vacuum (UHV) for the best imaging.

Force curves measure the amount of force felt by the cantilever as the probe tip is brought close to - and even indented into - a sample surface and then pulled away. In a force curve analysis, the probe is repeatedly brought toward the surface and then retracted (Figure 5). Force curve analyses can be used to determine chemical and mechanical properties such as adhesion, elasticity, hardness, and rupture bond lengths. The slope of the deflection (C) provides information on the hardness of a sample. The adhesion (D) provides information on the interaction between the probe and sample surface as the probe tries to break free.
Direct measurements of the interactions between molecules and molecular assemblies can be achieved by functionalizing probes with molecules of interest. The AFM can be used to study a wide variety of samples (i.e., plastics, metals, glasses, semiconductors, and biological samples such as the walls of cells and bacteria). However, there are limitations in achieving atomic resolution. The physical probe used in AFM imaging is not ideally sharp. As a consequence, an AFM image does not reflect the true sample topography, but rather represents the interaction of the probe with the sample surface. This is called tip convolution (Figure 6). Commercially available probes with very high aspect ratios are becoming more widely available; these are made with materials such as carbon nanotubes. However, these probes are still very expensive to use for everyday image analysis.
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Aldehydes_and_Ketones/Synthesis_of_Aldehydes_and_Ketones/Synthesis_of_Aldehydes_and_Ketones |
Aldehydes and ketones can be prepared using a wide variety of reactions. Although these reactions are discussed in greater detail in other sections, they are listed here as a summary and to help with planning multistep synthetic pathways. Please use the appropriate links to see more details about the reactions. Anti-Markovnikov addition of a hydroxyl group to an alkyne forms an aldehyde: the addition of the hydroxyl group to the alkyne causes tautomerization, which subsequently forms the carbonyl. Oxidation typically uses the Jones reagent (CrO₃ in H₂SO₄), but many other reagents can be used. Markovnikov addition of a hydroxyl group to an alkyne forms a ketone: the addition of the hydroxyl group to the alkyne causes tautomerization, which subsequently forms the carbonyl. This is an example of a reaction.
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Supplemental_Modules_and_Websites_(Inorganic_Chemistry)/Descriptive_Chemistry/Periodic_Trends_of_Elemental_Properties/Table_Basics |
The periodic table organizes the known elements in order of increasing atomic number. It starts in the top left-hand corner with hydrogen and continues from left to right, then repeats in the horizontal row below the last element. It is not just a list of elements; it is organized around shared properties as well as atomic mass. At first glance the periodic table may seem disorganized, with only a couple of elements on the top row and a block on the last row, but it is very specific in the way the elements are arranged. Elements can be classified as metals, non-metals, or metalloids. Metals are typically shiny, malleable, and good conductors of heat and electricity; non-metals are the complete opposite, so they are dull, brittle, and poor conductors; metalloids have properties of both metals and non-metals. One way that the elements are organized is vertically, in groups or families. An example of this is group 18, the noble gases, which include helium, neon, and argon. Group 1 has a specific name: the alkali metals. Group 1 isn't the only family with a special name; group 17, which includes fluorine and chlorine, is called the halogens. When looking at groups, elements at the top are the beginning of the group and the ones at the bottom are the end of the group. The main-group elements are the ones in groups 1, 2, and 13-18, and the metals in groups 3 through 12 are the transition metals. Elements are organized horizontally in periods. There are a total of 7 periods in the periodic table, and each varies in how many elements it contains. Period 1 has only two elements, hydrogen and helium. Period 2 has eight elements, whereas period 6 has the most, with the addition of the top section of the block of elements at the bottom of the table called the lanthanides. Below them are the actinides, which follow the same rules as the lanthanides but in period 7. The individual elements are represented in their own blocks, stating the atomic number (Z) at the top of the box, the element's symbol in the middle, and the average atomic mass of the element at the bottom. This information can differ depending on the type of periodic table you are looking at.
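The information in each element block (atomic number, symbol, average atomic mass) together with the period/group organization can be represented as a simple lookup. This is a minimal sketch in Python; the data structure and the `describe` helper are our own hypothetical illustration (not a real library), populated with standard values for a few elements.

```python
# Minimal sketch of the data shown in a periodic-table block:
# atomic number (Z), average atomic mass, and the period/group
# organization described above. Illustrative structure only.

elements = {
    "H":  {"Z": 1,  "mass": 1.008,  "period": 1, "group": 1},
    "He": {"Z": 2,  "mass": 4.003,  "period": 1, "group": 18},
    "F":  {"Z": 9,  "mass": 18.998, "period": 2, "group": 17},
    "Na": {"Z": 11, "mass": 22.990, "period": 3, "group": 1},
}

def describe(symbol):
    e = elements[symbol]
    return (f"{symbol}: Z={e['Z']}, mass={e['mass']} amu, "
            f"period {e['period']}, group {e['group']}")

print(describe("Na"))
```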
Some periodic tables can include information such as the actual chemical name, or omit information described above. Determine whether the following elements are metals, non-metals, or metalloids. Identify the group and period that the following elements are in. Classify which elements are considered main-group elements or transition metals; if they are transition metals, state whether they are lanthanides or actinides. The elements are: Arrange the elements from the lowest to highest group number: nitrogen, fluorine, boron, oxygen, and carbon. Arrange the following elements from the lowest to highest period number: aluminum, polonium, germanium, and antimony. From looking at the periodic table, give information about the following elements: boron, carbon, nitrogen, oxygen, fluorine, aluminum, germanium, antimony, and polonium 6. From looking at the periodic table, give information about the following elements: This depends on what periodic table you use!
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Atomic_Theory/The_Atom/Sub-Atomic_Particles |
A typical atom consists of three subatomic particles: protons, neutrons, and electrons (as seen in the helium atom below). Other particles exist as well, such as alpha and beta particles (which are discussed below). The figure below shows the three basic subatomic particles in a simple manner. Most of an atom's mass is in the nucleus, a small, dense area at the center of every atom, composed of nucleons. Nucleons include protons and neutrons. All the positive charge of an atom is contained in the nucleus, and originates from the protons. Neutrons are neutrally-charged. Electrons, which are negatively-charged, are located outside of the nucleus. Protons were discovered by Ernest Rutherford in 1919. In his gold foil experiment, he projected alpha particles (helium nuclei) at gold foil, and some of the positively-charged alpha particles were deflected; he concluded that the positive charge of an atom is concentrated in a nucleus. The atomic number or proton number is the number of protons present in an atom. The atomic number determines an element (e.g., the element of atomic number 6 is carbon). Electrons were discovered by Sir Joseph John (J.J.) Thomson in 1897. After many experiments involving cathode rays, J.J. Thomson demonstrated the ratio of mass to electric charge of cathode rays. He confirmed that cathode rays are fundamental particles that are negatively-charged; these cathode rays became known as electrons. Robert Millikan, through oil drop experiments, found the value of the electronic charge. Electrons are located in an electron cloud, which is the area surrounding the nucleus of the atom. There is usually a higher probability of finding an electron closer to the nucleus of an atom. Electrons can be abbreviated as e⁻. Electrons have a negative charge that is equal in magnitude to the positive charge of the protons.
However, their mass is considerably less than that of a proton or neutron (and as such is usually considered insignificant). Unequal amounts of protons and electrons create ions: positive cations or negative anions. Neutrons were discovered by James Chadwick in 1932, when he demonstrated that penetrating radiation incorporated beams of neutral particles. Neutrons are located in the nucleus with the protons. Along with protons, they make up almost all of the mass of the atom. The number of neutrons is called the neutron number and can be found by subtracting the proton number from the atomic mass number. The neutrons in an element determine the isotope of an atom, and often its stability. The number of neutrons is not necessarily equal to the number of protons. Both of the following are appropriate ways of representing the composition of a particular atom: Often the proton number is not indicated because the elemental symbol conveys the same information. Consider a neutral atom of carbon: \(\ce{^{12}_{6}C}\). The atomic mass number of this carbon atom is 12, the proton number is 6, and it has no charge. In neutral atoms, the charge is omitted. Above is the atomic symbol for helium from the periodic table, with the atomic number, elemental symbol, and mass indicated. Every element has a specific number of protons, so the proton number is not always written (as in the second method above). Note: The atomic mass number is not the same as the atomic mass seen on the periodic table. Many of the particles described in detail below are emitted through radioactive decay. Also note that many forms of radioactive decay emit gamma rays, which are not particles. Alpha particles can be denoted by He²⁺, \(\ce{^{4}_{2}He}\), or simply α. They are helium nuclei, which consist of two protons and two neutrons. The net spin on an alpha particle is zero. They result from large, unstable atoms through a process called alpha decay. Alpha decay is the process by which an atom emits an alpha particle, thereby becoming a new element.
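The bookkeeping described above (neutrons = mass number − proton number; electrons = protons − net charge) can be written as a short helper. A sketch in Python; the function name and dictionary layout are our own illustration:

```python
# For a nuclide with mass number A, atomic number Z, and net charge q:
#   neutrons  = A - Z
#   electrons = Z - q   (neutral atom: q = 0)

def particle_counts(A, Z, charge=0):
    return {"protons": Z, "neutrons": A - Z, "electrons": Z - charge}

# Neutral carbon-12 from the example above: 6 protons, 6 neutrons, 6 electrons.
print(particle_counts(12, 6))
# A sodium-23 cation (Na+): 11 protons, 12 neutrons, 10 electrons.
print(particle_counts(23, 11, charge=+1))
```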
This only occurs in elements with large, radioactive nuclei. The smallest noted element that emits alpha particles is element 52, tellurium. Alpha particles are generally not harmful. They can be easily stopped by a single sheet of paper or by one's skin. However, they can cause considerable damage to the insides of one's body. Alpha decay is used as a safe power source for radioisotope generators used in artificial heart pacemakers and space probes. Beta particles (β) are either free electrons or positrons with high energy and high speed; they are emitted in a process called beta decay. Positrons have the exact same mass as an electron, but are positively-charged. There are two forms of beta decay: the emission of electrons, and the emission of positrons. Beta particles, which are 100 times more penetrating than alpha particles, can be stopped by household items like wood or an aluminum plate or sheet. Beta particles have the ability to penetrate living matter and can sometimes alter the structure of molecules they strike. The alteration usually is considered damage, and can cause cancer and death. In contrast to their harmful effects, beta particles can also be used in radiation therapy to treat cancer. Electron emission may result when excess neutrons make the nucleus of an atom unstable. As a result, one of the neutrons decays into a proton, an electron, and an anti-neutrino. The proton remains in the nucleus, and the electron and anti-neutrino are emitted. The electron is called a beta particle. The equation for this process is given below: \[ _{0}^{1}\textrm{n}\rightarrow {_{1}^{1}\textrm{p}}^+ + \textrm{e}^- + \bar{\nu}_{e} \] Positron emission occurs when an excess of protons makes the atom unstable. In this process, a proton is converted into a neutron, a positron, and a neutrino. While the neutron remains in the nucleus, the positron and the neutrino are emitted. The positron can be called a beta particle in this instance.
The equation for this process is given below: \[ { _{1}^{1}\textrm{p}}^+ \rightarrow _{0}^{1}\textrm{n} + \textrm{e}^+ + \nu_{e} \] 1. Identify the number of protons, electrons, and neutrons in the following atom. 2. Identify the subatomic particles (protons, electrons, neutrons, and positrons) present in the following: 3. Given the following, identify the subatomic particles present. (The periodic table is required to solve these problems) 4. Arrange the following elements in order of increasing (a) number of protons; (b) number of neutrons; (c) mass. Co, when A=59; Fe, when Z=26; Na, when A=23; Br, when Z=35; Cu, when N=30; Mn, when Z=25 5. Fill in the rest of the table: 1. There are 4 protons, 5 neutrons, and 4 electrons. This is a neutral beryllium atom. 2. Identify the subatomic particles present in the following: 3. Given the following, identify the subatomic particles present. (The periodic table is required to solve these problems) 4. Arrange the following elements in order of increasing (a) number of protons; (b) number of neutrons; (c) atomic mass. a) b) Note: Cu, Fe, Mn are all equal in their number of neutrons, which is 30. c) Note: This is the same order as the number of protons, because as atomic number (Z) increases, so does atomic mass. 5. Fill in the rest of the table: Note: Atomic Number = Number of Protons = Number of Electrons, and Mass Number = Number of Protons + Number of Neutrons
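Both decay equations above conserve mass number and charge, which can be checked mechanically. A sketch in Python; representing each particle as an (A, q) pair is our own illustrative convention, not standard notation:

```python
# Check that mass number (A) and charge (q) balance in the beta-decay
# equations above. Each particle is an (A, q) pair.

NEUTRON, PROTON = (1, 0), (1, 1)
ELECTRON, POSITRON = (0, -1), (0, 1)
NEUTRINO = (0, 0)  # neutrinos/antineutrinos carry no mass number or charge

def balanced(reactants, products):
    def totals(side):
        return (sum(A for A, q in side), sum(q for A, q in side))
    return totals(reactants) == totals(products)

# Electron emission: n -> p+ + e- + antineutrino
print(balanced([NEUTRON], [PROTON, ELECTRON, NEUTRINO]))   # True
# Positron emission: p+ -> n + e+ + neutrino
print(balanced([PROTON], [NEUTRON, POSITRON, NEUTRINO]))   # True
```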
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Map%3A_Organic_Chemistry_(Wade)_Complete_and_Semesters_I_and_II/Map%3A_Organic_Chemistry_(Wade)/15%3A_Ethers_Epoxides_and_Thioethers |
After reading this chapter and completing ALL the exercises, a student can:
prepare ethers and epoxides via
a) Williamson ether synthesis (refer to Section 15.3)
b) alkoxymercuration-demercuration (refer to Section 15.4)
c) peroxyacid epoxidation (refer to Chapter 9, Section 12)
d) base-promoted cyclization of halohydrins (refer to Section 15.7)
and predict the products of
a) acidic cleavage of ethers (refer to Section 15.5)
b) opening of epoxides (refer to Section 15.8)
c) reactions of epoxides with organometallic reagents (refer to Section 15.10)
Please note: IUPAC nomenclature and important common names of alcohols were explained in Chapter 3.
| 614 | 2,302 |
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Supplemental_Modules_and_Websites_(Inorganic_Chemistry)/Descriptive_Chemistry/Main_Group_Reactions/The_s-Block_Elements_in_Biology |
The s-block elements play important roles in biological systems. Covalent hydrides, for example, are the building blocks of organic compounds, and other compounds and ions containing s-block elements are found in tissues and cellular fluids. In this section, we describe some ways in which biology depends on the properties of the group 1 and group 2 elements. There are three major classes of hydrides—covalent, ionic, and metallic—but only covalent hydrides occur in living cells and have any biochemical significance. Carbon and hydrogen have similar electronegativities, and the C–H bonds in organic molecules are strong and essentially nonpolar. Little acid–base chemistry is involved in the cleavage or formation of these bonds. In contrast, because hydrogen is less electronegative than oxygen and nitrogen (symbolized by Z), the H–Z bond in the hydrides of these elements is polarized (Hδ⁺–Zδ⁻). Consequently, the hydrogen atoms in these H–Z bonds are relatively acidic. Moreover, S–H bonds are relatively weak due to poor s orbital overlap, so they are readily cleaved to give a proton. Hydrides in which H is bonded to O, N, or S atoms are therefore polar, hydrophilic molecules that form hydrogen bonds. They also undergo acid–base reactions by transferring a proton. Covalent hydrides in which H is bonded to O, N, or S atoms are polar and hydrophilic, form hydrogen bonds, and transfer a proton in their acid-base reactions. Hydrogen bonds are crucial in biochemistry, in part because they help hold proteins in their biologically active folded structures. Hydrogen bonds also connect the two intertwining strands of DNA (deoxyribonucleic acid), the substance that contains the genetic code for all organisms. Because hydrogen bonds are easier to break than the covalent bonds that form the individual DNA strands, the two intertwined strands can be separated to give intact single strands, which is essential for the duplication of genetic information.
In addition to the importance of hydrogen bonds in biochemical molecules, the extensive hydrogen-bonding network in water is one of the keys to the existence of life on our planet. Based on its molecular mass, water should be a gas at room temperature (20°C), but the strong intermolecular interactions in liquid water greatly increase its boiling point. Hydrogen bonding also produces the relatively open molecular arrangement found in ice, which causes ice to be less dense than water. Because ice floats on the surface of water, it creates an insulating layer that allows aquatic organisms to survive during cold winter months. These same strong intermolecular hydrogen-bonding interactions are also responsible for the high heat capacity of water and its high heat of fusion. A great deal of energy must be removed from water for it to freeze. Consequently, large bodies of water act as “thermal buffers” that have a stabilizing effect on the climate of adjacent land areas. Perhaps the most striking example of this effect is the fact that humans can live comfortably at very high latitudes. For example, palm trees grow in southern England at the same latitude (51°N) as the southern end of frigid Hudson Bay and northern Newfoundland in North America, areas known more for their moose populations than for their tropical vegetation. Warm water from the Gulf Stream current in the Atlantic Ocean flows clockwise from the tropical climate at the equator past the eastern coast of the United States and then turns toward England, where heat stored in the water is released. The temperate climate of Europe is largely attributable to the thermal properties of water. Strong intermolecular hydrogen-bonding interactions are responsible for the high heat capacity of water and its high heat of fusion. 
The members of group 1 and group 2 that are present in the largest amounts in organisms are sodium, potassium, magnesium, and calcium, all of which form monatomic cations with a charge of +1 (group 1, M⁺) or +2 (group 2, M²⁺). Biologically, these elements can be classified as macrominerals (Table 1.6). For example, calcium is found in the form of relatively insoluble calcium salts that are used as structural materials in many organisms. Hydroxyapatite [Ca₅(PO₄)₃OH] is the major component of bones, calcium carbonate (CaCO₃) is the major component of the shells of mollusks and the eggs of birds and reptiles, and calcium oxalate [Ca(O₂CCO₂)] is found in many plants. Because calcium and strontium have similar sizes and charge-to-radius ratios, small quantities of strontium are always found in bone and other calcium-containing structural materials. Normally this is not a problem because the Sr²⁺ ions occupy sites that would otherwise be occupied by Ca²⁺ ions. When trace amounts of radioactive ⁹⁰Sr are released into the atmosphere from nuclear weapons tests or a nuclear accident, however, the radioactive strontium eventually reaches the ground, where it is taken up by plants that are consumed by dairy cattle. The isotope then becomes concentrated in cow’s milk, along with calcium. Because radioactive strontium coprecipitates with calcium in the hydroxyapatite that surrounds the bone marrow (where white blood cells are produced), children, who typically ingest more cow’s milk than adults, are at substantially increased risk for leukemia, a type of cancer characterized by the overproduction of white blood cells. The Na⁺, K⁺, Mg²⁺, and Ca²⁺ ions are important components of intracellular and extracellular fluids. Both Na⁺ and Ca²⁺ are found primarily in extracellular fluids, such as blood plasma, whereas K⁺ and Mg²⁺ are found primarily in intracellular fluids.
Substantial inputs of energy are required to establish and maintain these concentration gradients and prevent the system from reaching equilibrium. Thus energy is needed to transport each ion across the cell membrane toward the side with the higher concentration. The biological machines that are responsible for the selective transport of these metal ions are complex assemblies of proteins called ion pumps. Ion pumps recognize and discriminate between metal ions in the same way that crown ethers and cryptands do, with a high affinity for ions of a certain charge and radius. Defects in the ion pumps or their control mechanisms can result in major health problems. For example, cystic fibrosis, the most common inherited disease in the United States, is caused by a defect in an ion transport system (in this case, chloride ions). Similarly, in many cases, hypertension, or high blood pressure, is thought to be due to defective Na⁺ uptake and/or excretion. If too much Na⁺ is absorbed from the diet (or if too little is excreted), water diffuses from tissues into the blood to dilute the solution, thereby decreasing the osmotic pressure in the circulatory system. The increased volume increases the blood pressure, and ruptured arteries called aneurysms can result, often in the brain. Because high blood pressure causes other medical problems as well, it is one of the most important biomedical disorders in modern society. For patients who suffer from hypertension, low-sodium diets that use NaCl substitutes, such as KCl, are often prescribed. Although KCl and NaCl give similar flavors to foods, the K⁺ is not readily taken up by the highly specific Na⁺-uptake system. This approach to controlling hypertension is controversial, however, because direct correlations between dietary Na⁺ content and blood pressure are difficult to demonstrate in the general population.
More important, recent observations indicate that high blood pressure may correlate more closely with inadequate intake of calcium in the diet than with excessive sodium levels. This finding is important because the typical “low-sodium” diet is also low in good sources of calcium, such as dairy products. Some of the most important biological functions of the group 1 and group 2 metals are due to small changes in the cellular concentrations of the metal ion. The transmission of nerve impulses, for example, is accompanied by an increased flux of Na⁺ ions into a nerve cell. Similarly, the binding of various hormones to specific receptors on the surface of a cell leads to a rapid influx of Ca²⁺ ions; the resulting sudden rise in the intracellular Ca²⁺ concentration triggers other events, such as muscle contraction, the release of neurotransmitters, enzyme activation, or the secretion of other hormones. Within cells, K⁺ and Mg²⁺ often activate particular enzymes by binding to specific, negatively charged sites in the enzyme structure. Chlorophyll, the green pigment used by all plants to absorb light and drive the process of photosynthesis, contains magnesium. During photosynthesis, CO₂ is reduced to form sugars such as glucose. The structure of the central portion of a chlorophyll molecule resembles a crown ether (part (a) in Figure 13.7), with four five-membered nitrogen-containing rings linked together to form a large ring that provides a “hole” of the proper size to tightly bind Mg²⁺. Because the health of cells depends on maintaining the proper levels of cations in intracellular fluids, any change that affects the normal flux of metal ions across cell membranes could well cause an organism to die. Molecules that facilitate the transport of metal ions across membranes are generally called ionophores (ion plus phore from the Greek phorein, meaning “to carry”). Many ionophores are potent antibiotics that can kill or inhibit the growth of bacteria.
An example is valinomycin, a cyclic molecule with a central cavity lined with oxygen atoms (part (a) in Figure 21.14) that is similar to the cavity of a crown ether (part (a) in Figure 13.7). Like a crown ether, valinomycin is highly selective: its affinity for K⁺ is about 1000 times greater than that for Na⁺. By increasing the flux of K⁺ ions into cells, valinomycin disrupts the normal K⁺ gradient across a cell membrane, thereby killing the cell (part (b) in Figure 21.14). Given several candidate ions and their ionic radii, we can assess their suitability as replacements for Ca²⁺. Use periodic trends to arrange the ions from least effective to most effective as a replacement for Ca²⁺. The most important properties in determining the affinity of a biological molecule for a metal ion are the size and charge-to-radius ratio of the metal ion. Of the possible Ca²⁺ replacements listed, the F⁻ ion has the opposite charge, so it should have no affinity for a Ca²⁺-binding site. Na⁺ is approximately the right size, but with a +1 charge it will bind much more weakly than Ca²⁺. Although Eu²⁺, Sr²⁺, and Pb²⁺ are all a little larger than Ca²⁺, they are probably similar enough in size and charge to bind. Based on its ionic radius, Eu²⁺ should bind most tightly of the three. La³⁺ is nearly the same size as Ca²⁺ and more highly charged. With a higher charge-to-radius ratio and a similar size, La³⁺ should bind tightly to a Ca²⁺ site and be the most effective replacement for Ca²⁺. The order is F⁻ << Na⁺ << Pb²⁺ ~ Sr²⁺ ~ Eu²⁺ < La³⁺. The ionic radius of K⁺ is 138 pm. Arrange the following ions in order of increasing affinity for a K⁺-binding site in an enzyme (numbers in parentheses are ionic radii): Na⁺ (102 pm), Rb⁺ (152 pm), Ba²⁺ (135 pm), Cl⁻ (181 pm), and Tl⁺ (150 pm). Cl⁻ << Na⁺ < Tl⁺ ~ Rb⁺ < Ba²⁺ Covalent hydrides in which hydrogen is bonded to oxygen, nitrogen, or sulfur are polar, hydrophilic molecules that form hydrogen bonds and undergo acid–base reactions.
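The reasoning in the exercise above can be made quantitative by comparing each ion's charge-to-radius ratio and its size mismatch with the binding site. A rough Python sketch using the radii quoted in the exercise; the "size mismatch" metric is an illustrative simplification, not a real binding model:

```python
# Compare candidate ions against a K+ binding site (radius 138 pm,
# charge +1), using the ionic radii quoted in the exercise above.

site_radius_pm = 138

ions = {"Na+": (102, +1), "Rb+": (152, +1), "Ba2+": (135, +2),
        "Cl-": (181, -1), "Tl+": (150, +1)}

def charge_to_radius(radius_pm, charge):
    return charge / radius_pm

for name, (r, q) in sorted(ions.items()):
    print(f"{name:5s} charge/radius = {charge_to_radius(r, q):+.4f} per pm, "
          f"size mismatch vs K+ = {abs(r - site_radius_pm)} pm")
```

An anion (negative ratio) is immediately disqualified, a matching charge with a small size mismatch scores best, and a higher charge-to-radius ratio (as for Ba²⁺) predicts tighter binding, in line with the answer given above.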
Hydrogen-bonding interactions are crucial in stabilizing the structure of proteins and DNA and allow genetic information to be duplicated. The hydrogen-bonding interactions in water and ice also allow life to exist on our planet. The group 1 and group 2 metals present in organisms are macrominerals, which are important components of intracellular and extracellular fluids. Small changes in the cellular concentration of a metal ion can have a significant impact on biological functions. Metal ions are selectively transported across cell membranes by ion pumps, which bind ions based on their charge and radius. Ionophores, many of which are potent antibiotics, facilitate the transport of metal ions across membranes.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_-_The_Central_Science_(Brown_et_al.)/01%3A_Introduction_-_Matter_and_Measurement/1.05%3A_Uncertainty_in_Measurement |
Measurements may be accurate, meaning that the measured value is the same as the true value; they may be precise, meaning that multiple measurements give nearly identical values (i.e., reproducible results); they may be both accurate and precise; or they may be neither accurate nor precise. The goal of scientists is to obtain measured values that are both accurate and precise. Suppose, for example, that the mass of a sample of gold was measured on one balance and found to be 1.896 g. On a different balance, the same sample was found to have a mass of 1.125 g. Which was correct? Careful and repeated measurements, including measurements on a calibrated third balance, showed the sample to have a mass of 1.895 g. The masses obtained from the three balances are in the following table: Whereas the measurements obtained from balances 1 and 3 are reproducible (precise) and are close to the accepted value (accurate), those obtained from balance 2 are neither. Even if the measurements obtained from balance 2 had been precise (if, for example, they had been 1.125, 1.124, and 1.125), they still would not have been accurate. We can assess the precision of a set of measurements by calculating the average deviation of the measurements as follows: 1. Calculate the average value of all the measurements: \[ \text{average} = \dfrac{\text{sum of measurements} }{\text{number of measurements}} \label{Eq1} \] 2. Calculate the deviation of each measurement, which is the absolute value of the difference between each measurement and the average value: \[ \text{deviation} = |\text{measurement − average}| \label{Eq2} \] where \(|\, |\) means absolute value (i.e., convert any negative number to a positive number). 3. 
Add all the deviations and divide by the number of measurements to obtain the average deviation: \[ \text{average deviation} = \dfrac{\text{sum of deviations}}{\text{number of measurements}} \label{Eq3} \] Then we can express the precision as a percentage by dividing the average deviation by the average value of the measurements and multiplying the result by 100. In the case of balance 2, the average value is \[ {1.125 \;g + 1.158 \;g + 1.067\; g \over 3} = 1.117 \;g \nonumber \] The deviations are 0.008 g, 0.041 g, and 0.050 g, so the average deviation is \[ {0.008 \:g + 0.041 \;g + 0.050 \;g \over 3} = 0.033\; g \nonumber \] The precision of this set of measurements is therefore \[ {0.033\;g \over 1.117\;g} \times 100 = 3.0 \% \nonumber \] When a series of measurements is precise but not accurate, the error is usually systematic. Systematic errors can be caused by faulty instrumentation or faulty technique. The following archery targets show marks that represent the results of four sets of measurements. Which target shows a. The expected mass of a 2-carat diamond is 2 × 200.0 mg = 400.0 mg. The average of the three measurements is 457.3 mg, about 13% greater than the true mass. These measurements are not particularly accurate. The deviations of the measurements are 7.3 mg, 1.7 mg, and 5.7 mg, respectively, which give an average deviation of 4.9 mg and a precision of \[ {4.9\; mg \over 457.3\; mg } \times 100 = 1.1 \% \nonumber \] These measurements are rather precise. b. The average values of the measurements are 93.2% zinc and 2.8% copper versus the true values of 97.6% zinc and 2.4% copper. Thus these measurements are not very accurate, with errors of −4.5% and +17% for zinc and copper, respectively. (The sum of the measured zinc and copper contents is only 96.0% rather than 100%, which tells us that either there is a significant error in one or both measurements or some other element is present.)
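The three-step recipe above (average, average deviation, percent precision) translates directly into code. The following Python sketch reproduces the balance-2 numbers: average 1.117 g, average deviation 0.033 g, precision 3.0%.

```python
# Average, average deviation, and percent precision, as defined in
# Equations 1-3 above. The data are the balance-2 masses from the text.

def average(values):
    return sum(values) / len(values)

def average_deviation(values):
    avg = average(values)
    return sum(abs(v - avg) for v in values) / len(values)

def percent_precision(values):
    return average_deviation(values) / average(values) * 100

masses = [1.125, 1.158, 1.067]  # g, balance 2
print(f"average           = {average(masses):.3f} g")            # 1.117 g
print(f"average deviation = {average_deviation(masses):.3f} g")  # 0.033 g
print(f"precision         = {percent_precision(masses):.1f} %")  # 3.0 %
```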
The deviations of the measurements are 0.0%, 0.3%, and 0.3% for both zinc and copper, which give an average deviation of 0.2% for both metals. We might therefore conclude that the measurements are equally precise, but that is not the case. Recall that precision is the average deviation divided by the average value times 100. Because the average value of the zinc measurements is much greater than the average value of the copper measurements (93.2% versus 2.8%), the copper measurements are much less precise. \[\begin{align*} \text {precision (Zn)} &= \dfrac {0.2 \%}{93.2 \% } \times 100 = 0.2 \% \\[4pt] \text {precision (Cu)} &= \dfrac {0.2 \%}{2.8 \% } \times 100 = 7 \% \end{align*} \nonumber \] No measurement is free from error. Error is introduced by the limitations of instruments and measuring devices (such as the size of the divisions on a graduated cylinder) and the imperfection of human senses (i.e., detection). Although errors in calculations can be enormous, they do not contribute to uncertainty in measurements. Chemists describe the estimated degree of error in a measurement as the uncertainty of the measurement, and they are careful to report all measured values using only significant figures, numbers that describe the value without exaggerating the degree to which it is known to be accurate. Chemists report as significant all numbers known with absolute certainty, plus one more digit that is understood to contain some uncertainty. The uncertainty in the final digit is usually assumed to be ±1, unless otherwise stated. The following rules have been developed for counting the number of significant figures in a measurement or calculation: An effective method for determining the number of significant figures is to convert the measured or calculated value to scientific notation because any zero used as a placeholder is eliminated in the conversion. 
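The scientific-notation method of counting significant figures can be automated for simple cases. A hedged Python sketch: it treats any trailing zeros written in the given string as significant (as in 0.0800), and it makes no attempt to resolve genuinely ambiguous forms such as "100".

```python
# Count significant figures by discarding the placeholder zeros that
# scientific notation would strip. Assumes trailing zeros in the input
# string are intended to be significant; ambiguous integers like "100"
# are NOT handled specially.

def count_sig_figs(number_string):
    digits = number_string.lstrip("+-").replace(".", "")
    digits = digits.lstrip("0")  # leading zeros are placeholders
    return len(digits)

print(count_sig_figs("0.0800"))  # 3, matching 8.00 x 10^-2
print(count_sig_figs("1.125"))   # 4
```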
When 0.0800 is expressed in scientific notation as 8.00 × 10⁻², it is more readily apparent that the number has three significant figures rather than five; in scientific notation, the number preceding the exponential (i.e., N) determines the number of significant figures. Give the number of significant figures in each. Identify the rule for each. Which measuring apparatus would you use to deliver 9.7 mL of water as accurately as possible? To how many significant figures can you measure that volume of water with the apparatus you selected? Use the 10 mL graduated cylinder, which will be accurate to two significant figures. Mathematical operations are carried out using all the digits given and then rounding the final result to the correct number of significant figures to obtain a reasonable answer. This method avoids compounding inaccuracies by successively rounding intermediate calculations. After you complete a calculation, you may have to round the last significant figure up or down depending on the value of the digit that follows it. If the digit is 5 or greater, then the number is rounded up. For example, when rounded to three significant figures, 5.215 is 5.22, whereas 5.213 is 5.21. Similarly, to three significant figures, 5.005 kg becomes 5.01 kg, whereas 5.004 kg becomes 5.00 kg. The procedures for dealing with significant figures are different for addition and subtraction versus multiplication and division. When we add or subtract measured values, the value with the fewest significant figures to the right of the decimal point determines the number of significant figures to the right of the decimal point in the answer. Drawing a vertical line to the right of the column corresponding to the smallest number of significant figures is a simple method of determining the proper number of significant figures for the answer: \[3240.7 + 21.236 = 3261.9|36 \nonumber \] The line indicates that the digits 3 and 6 are not significant in the answer.
These digits are not significant because the values for the corresponding places in the other measurement are unknown (3240.7??). Consequently, the answer is expressed as 3261.9, with five significant figures. Again, numbers greater than or equal to 5 are rounded up. If our second number in the calculation had been 21.256, then we would have rounded 3261.956 to 3262.0 to complete our calculation. When we multiply or divide measured values, the answer is limited to the smallest number of significant figures in the calculation; thus, \[42.9 × 8.323 = 357.057 = 357. \nonumber \] Although the second number in the calculation has four significant figures, we are justified in reporting the answer to only three significant figures because the first number in the calculation has only three significant figures. An exception to this rule occurs when multiplying a number by an integer, as in 12.793 × 12. In this case, the number of significant figures in the answer is determined by the number 12.793, because we are in essence adding 12.793 to itself 12 times. The correct answer is therefore 153.516, an increase of one significant figure, not 153.52. When you use a calculator, it is important to remember that the number shown in the calculator display often shows more digits than can be reported as significant in your answer. When a measurement reported as 5.0 kg is divided by 3.0 L, for example, the display may show 1.666666667 as the answer. We are justified in reporting the answer to only two significant figures, giving 1.7 kg/L as the answer, with the last digit understood to have some uncertainty. In calculations involving several steps, slightly different answers can be obtained depending on how rounding is handled, specifically whether rounding is performed on intermediate results or postponed until the last step.
Rounding to the correct number of significant figures should always be performed at the end of a series of calculations because rounding of intermediate results can sometimes cause the final answer to be significantly in error. Complete the calculations and report your answers using the correct number of significant figures. In practice, chemists generally work with a calculator and carry all digits forward through subsequent calculations. When working on paper, however, we often want to minimize the number of digits we have to write out. Because successive rounding can compound inaccuracies, intermediate roundings need to be handled correctly. When working on paper, always round an intermediate result so as to retain at least one more digit than can be justified and carry this number into the next step in the calculation. The final answer is then rounded to the correct number of significant figures at the very end. In the worked examples in this text, we will often show the results of intermediate steps in a calculation. In doing so, we will show the results to only the correct number of significant figures allowed for that step, in effect treating each step as a separate calculation. This procedure is intended to reinforce the rules for determining the number of significant figures, but in some cases it may give a final answer that differs in the last digit from that obtained using a calculator, where all digits are carried through to the last step.
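The round-half-up convention described above differs from the banker's rounding used by Python's built-in `round`, so a short sketch with the standard-library `decimal` module reproduces the worked examples exactly (the helper name `round_half_up` is my own, not from the text):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(value: str, places: int) -> Decimal:
    """Round a decimal string to `places` digits, rounding a trailing 5 up
    (the convention used in the text)."""
    quantum = Decimal(1).scaleb(-places)  # e.g. places=2 -> Decimal('0.01')
    return Decimal(value).quantize(quantum, rounding=ROUND_HALF_UP)

# The worked examples from the text, rounded to three significant figures:
print(round_half_up("5.215", 2))  # 5.22  (digit dropped is 5, so round up)
print(round_half_up("5.213", 2))  # 5.21
print(round_half_up("5.005", 2))  # 5.01
print(round_half_up("5.004", 2))  # 5.00
```

Constructing the `Decimal` from a string (not a float) matters here: `Decimal("5.215")` is exact, whereas the float `5.215` is not, and the float version could round the wrong way.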
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Supplemental_Modules_and_Websites_(Inorganic_Chemistry)/Chemical_Reactions/Chemical_Reactions_Examples/Chemical_Reactions_Overview |
Chemical reactions are the processes by which chemicals interact to form new chemicals with different compositions. Simply stated, a chemical reaction is the process where reactants are transformed into products. How chemicals react is dictated by the chemical properties of the element or compound: the ways in which a compound or element undergoes changes in composition. Chemical reactions are constantly occurring in the world around us; everything from the rusting of an iron fence to the metabolic pathways of a human cell is an example of a chemical reaction. Chemistry is an attempt to classify and better understand these reactions. A chemical reaction is typically represented by a chemical equation, which represents the change from reactants to products. The left hand side of the equation represents the reactants, while the right hand side represents the products. A typical chemical reaction is written with stoichiometric coefficients, which show the relative amounts of products and reactants involved in the reaction. Each compound is followed by a parenthetical note of the compound's state of matter: (l) for liquid, (s) for solid, (g) for gas. The symbol (aq) is also commonly used in order to represent an aqueous solution, in which compounds are dissolved in water. A reaction might take the following form: \[\ce{A (aq) + B (g) \rightarrow C (s) + D (l)} \nonumber \] In the above example, \(A\) and \(B\), known as the reactants, reacted to form \(C\) and \(D\), the products. To write an accurate chemical equation, two things must occur: each product and reactant must be written using its chemical formula, and the number of atoms of each element must be equal on both sides of the equation, as in \[\ce{2Mg + O_2 \rightarrow 2MgO} \nonumber \] Hydrogen and nitrogen react together to produce ammonia gas; write the chemical equation of this reaction. Step 1: Write each product and reactant using its chemical formula. \[\ce{H_2 + N_2 \rightarrow NH_3} \nonumber \] Step 2: Ensure the number of atoms of each element is equal on both sides of the equation.
\[\ce{3H_2 + N_2 \rightarrow 2NH_3} \nonumber \] In order to balance this equation, coefficients must be used. Since there are 2 nitrogen atoms on the left side of the equation but only 1 in \(NH_3\), a coefficient of 2 must be added to \(NH_3\); balancing the resulting 6 hydrogen atoms then requires a coefficient of 3 for \(H_2\). The coefficient used for balancing the equation is called the stoichiometric coefficient. The coefficients tell us the ratio of each element in a chemical equation. For example, \[\ce{2Mg + O_2 \rightarrow 2MgO} \nonumber\] means that 2 atoms of magnesium react with 1 molecule of oxygen to produce 2 formula units of magnesium oxide. When all of the reactants of a reaction are completely consumed, the reaction is in perfect stoichiometric proportions. Often, however, a reaction is not in perfect stoichiometric proportions, leading to a situation in which the entirety of one reactant is consumed, but there is some of another reactant remaining. The reactant that is entirely consumed is called the limiting reagent, and it determines how much of the products are produced. 4.00 g of hydrogen gas is mixed with 20.0 g of oxygen gas. How many grams of water are produced? \[n(H_2)=\dfrac{4.00\,g}{(1.008 \times 2)\,g/mol}=1.98\,mol\] So, by the stoichiometry of \(\ce{2H_2 + O_2 -> 2H_2O}\), it theoretically requires 0.99 mol of \(O_2\): \[n(O_2) = n(H_2) \times \dfrac{1\,mol\,O_2}{2\,mol\,H_2} = 0.99\,mol\] \[m(O_2) = n(O_2) \times 32.0\,g/mol = 31.7\,g\,O_2\] Because only 20.0 g of \(O_2\) is available, less than the required 31.7 g, \(O_2\) is limiting. The 20.0 g of \(O_2\) is 0.625 mol, which produces \(2 \times 0.625 = 1.25\,mol\) of water, or \(1.25\,mol \times 18.02\,g/mol = 22.5\,g\) of water. Often, reactants do not react completely, resulting in a smaller amount of product formed than anticipated. The amount of product expected to be formed from the chemical equation is called the theoretical yield. The amount of product that is produced during a reaction is the actual yield. To determine the percent yield: Percent yield = actual yield/theoretical yield × 100% Chemical reactions do not only happen in the gas phase; they also occur in solution. In a solution, the solute is the compound that is dissolved, and the solvent is the substance in which the solute is dissolved. The molarity of a solution is the number of moles of solute divided by the number of liters of solution.
\[\ Molarity=\dfrac{ \text{amount of solute (mol)}}{\text{volume of solution (L)}} \] \[\ M=\dfrac{n}{V}\] 100.0 g NaCl is dissolved in 50.00 mL water. What is the molarity of the solution? a) Find the amount of solute in moles. 100.0 g/(22.99 g/mol + 35.45 g/mol) = 1.711 moles b) Convert mL to L. 50.00 mL = 0.05000 L c) Find the molarity. 1.711 moles/0.05000 L = 34.22 mol/L Physical change is the change in physical properties. Physical changes usually occur during chemical reactions, but they do not change the nature of the substances. The most common physical changes during reactions are the change of color, scent and evolution of gas. However, a physical change alone does not necessarily mean that a chemical reaction has occurred. A reaction that occurs when aqueous solutions of anions (negatively charged ions) and cations (positively charged ions) combine to form a compound that is insoluble is known as precipitation. The insoluble solid is called the precipitate, and the remaining liquid is called the supernate. See Figure 2.1 Real life example: The white precipitate formed by acid rain on a marble statue: \[CaCO_3(s)+H_2SO_4(aq) \rightarrow CaSO_4(s)+H_2O(l)+CO_2(g) \nonumber \] An example of a precipitation reaction is the reaction between silver nitrate and sodium iodide. This reaction is represented by the chemical equation: AgNO₃ (aq) + NaI (aq) → AgI (s) + NaNO₃ (aq) Since all of the above species are in aqueous solutions, they are written as ions, in the form: Ag⁺ (aq) + NO₃⁻ (aq) + Na⁺ (aq) + I⁻ (aq) → AgI (s) + Na⁺ (aq) + NO₃⁻ (aq) Ions that appear on both sides of the equation are called spectator ions. These ions do not affect the reaction and are removed from both sides of the equation to reveal the net ionic equation, as written below: Ag⁺ (aq) + I⁻ (aq) → AgI (s) In this reaction, the solid, AgI, is known as the precipitate. The formation of a precipitate is one of the many indicators that a chemical reaction has taken place. A neutralization reaction occurs when an acid and base are mixed together.
An acid is a substance that produces H⁺ ions in solution, whereas a base is a substance that produces OH⁻ ions in solution. A typical neutralization reaction will produce an ionic compound called a salt, along with water. A typical acid-base reaction is the reaction between hydrochloric acid and sodium hydroxide. This reaction is represented by the equation: \[\ce{HCl (aq) + NaOH (aq) \rightarrow NaCl (aq)+ H_2O (l)} \nonumber \] In this reaction, \(HCl\) is the acid, \(NaOH\) is the base, and \(NaCl\) is the salt. Real life example: The reaction of baking soda with vinegar is a neutralization reaction. A redox reaction occurs when the oxidation numbers of atoms involved in the reaction change. Oxidation is the process by which an atom's oxidation number is increased, and reduction is the process by which an atom's oxidation number is decreased. If the oxidation states of any elements in a reaction change, the reaction is an oxidation-reduction reaction. An atom that undergoes oxidation is called the reducing agent, and the atom that undergoes reduction is called the oxidizing agent. An example of a redox reaction is the reaction between hydrogen gas and fluorine gas: \[H_2 (g) + F_2 (g) \rightarrow 2HF (g) \label{redox1}\] In this reaction, hydrogen is oxidized from an oxidation state of 0 to +1, and is thus the reducing agent. Fluorine is reduced from 0 to -1, and is thus the oxidizing agent. Real life example: The cut surface of an apple turns brownish after being exposed to the air for a while. A combustion reaction is a type of redox reaction during which a fuel reacts with an oxidizing agent, resulting in the release of energy as heat. Such reactions are exothermic, meaning that energy is given off during the reaction. An endothermic reaction is one which absorbs heat. A typical combustion reaction has a hydrocarbon as the fuel source, and oxygen gas as the oxidizing agent. The products in such a reaction would be \(CO_2\) and \(H_2O\).
\[C_xH_yO_z+O_2 \rightarrow CO_2+H_2O \;\;\; \text{(unbalanced)}\] Such a reaction would be the combustion of glucose in the following equation \[C_6H_{12}O_6 (s) + 6O_2 (g) \rightarrow 6CO_2 (g) + 6H_2O (g)\] Real life example: explosion; burning. A synthesis reaction occurs when two or more substances combine to form a more complex compound. The simplest form of a synthesis reaction is \(A + B \rightarrow AB\). An example of such a reaction is the reaction of silver with oxygen gas to form silver oxide: \[4Ag (s) + O_2 (g) \rightarrow 2Ag_2O (s)\] Real life example: Hydrogen gas is burned in air (reacts with oxygen) to form water: \[2H_2(g) + O_2(g) \rightarrow 2H_2O(l)\] A decomposition reaction is the opposite of a synthesis reaction. During a decomposition reaction, a more complex compound breaks down into multiple simpler compounds. A classic example of this type of reaction is the decomposition of hydrogen peroxide into water and oxygen gas: \[2H_2O_2 (l) \rightarrow 2H_2O (l) + O_2 (g)\] A single-replacement reaction is a type of oxidation-reduction reaction in which an element in a compound is replaced by another element. An example of such a reaction is: \[Cu (s) + 2AgNO_3 (aq) \rightarrow 2Ag(s) + Cu(NO_3)_2 (aq)\] This is also a redox reaction. 1) \(C_xH_yO_z + O_2 \rightarrow CO_2 (g) + H_2O (g)\)
a) What type of reaction is this?
b) Is it exothermic or endothermic? Explain.
2) Given the oxidation-reduction reaction:
Fe (s) + CuSO₄ (aq) → FeSO₄ (aq) + Cu (s)
a) Which element is the oxidizing agent and which is the reducing agent?
b) How do the oxidation states of these species change?
3) Given the equation:
AgNO₃ (aq) + KBr (aq) → AgBr (s) + KNO₃ (aq)
a) What is the net ionic reaction?
b) Which species are spectator ions?
2 HNO₃ (aq) + Sr(OH)₂ (aq) → Sr(NO₃)₂ (aq) + 2 H₂O (l)
a) In this reaction, which species is the acid and which is the base?
b) Which species is the salt?
c) If 2 moles of HNO₃ and 1 mole of Sr(OH)₂ are used, resulting in 0.85 moles of Sr(NO₃)₂, what is the percent yield (with respect to moles) of Sr(NO₃)₂?
5) Identify the type of the following reactions:
a) Al(OH)₃ (aq) + 3HCl (aq) → AlCl₃ (aq) + 3H₂O (l)
b) MnO₂ + 4H⁺ + 2Cl⁻ → Mn²⁺ + 2H₂O (l) + Cl₂ (g)
c) P₄ (s) + 6Cl₂ (g) → 4PCl₃ (l)
d) Ca (s) + 2H₂O (l) → Ca(OH)₂ (aq) + H₂ (g)
e) AgNO₃ (aq) + NaCl (aq) → AgCl (s) + NaNO₃ (aq) 1a) It is a combustion reaction 1b) It is exothermic, because combustion reactions give off heat 2a) Cu is the oxidizing agent and Fe is the reducing agent 2b) Fe changes from 0 to +2, and Cu changes from +2 to 0. 3a) Ag⁺ (aq) + Br⁻ (aq) → AgBr (s) 3b) The spectator ions are K⁺ and NO₃⁻ 4a) HNO₃ is the acid and Sr(OH)₂ is the base 4b) Sr(NO₃)₂ is the salt 4c) According to the stoichiometric coefficients, the theoretical yield of Sr(NO₃)₂ is one mole. The actual yield was 0.85 moles. Therefore the percent yield is: (0.85/1.0) × 100% = 85% 5a) Acid-base 5b) Oxidation-reduction 5c) Synthesis 5d) Single-replacement reaction 5e) Double-replacement reaction
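The limiting-reagent, percent-yield, and molarity arithmetic worked through on this page can be sketched in a few lines of Python. The function names are illustrative only; the molar masses are the ones quoted in the text.

```python
def moles(mass_g: float, molar_mass: float) -> float:
    """Convert grams to moles: n = m / M."""
    return mass_g / molar_mass

def percent_yield(actual: float, theoretical: float) -> float:
    """Percent yield = actual / theoretical x 100."""
    return actual / theoretical * 100

def molarity(n_solute: float, volume_L: float) -> float:
    """Molarity = moles of solute / liters of solution."""
    return n_solute / volume_L

# Limiting reagent for 2 H2 + O2 -> 2 H2O, with 4.00 g H2 and 20.0 g O2:
n_h2 = moles(4.00, 2 * 1.008)        # ~1.98 mol H2
n_o2_needed = n_h2 / 2               # stoichiometry: 1 mol O2 per 2 mol H2
m_o2_needed = n_o2_needed * 32.00    # ~31.7 g required, but only 20.0 g on hand
limiting = "O2" if 20.0 < m_o2_needed else "H2"

# Exercise 4c: 0.85 mol Sr(NO3)2 obtained versus 1.0 mol theoretical
yield_pct = percent_yield(0.85, 1.0)  # 85.0

# Molarity example: 100.0 g NaCl (22.99 + 35.45 = 58.44 g/mol) in 50.00 mL
M_nacl = molarity(moles(100.0, 58.44), 0.05000)  # ~34.2 mol/L
```

Note that the code carries full precision through every step and only the printed or reported result would be rounded, matching the rounding advice in the significant-figures discussion earlier in this document.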
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Instrumentation_and_Analysis/Mass_Spectrometry/Fragmentation_Patterns_in_Mass_Spectra |
This page looks at how fragmentation patterns are formed when organic molecules are fed into a mass spectrometer, and how you can get information from the mass spectrum. When the vaporized organic sample passes into the ionization chamber of a mass spectrometer, it is bombarded by a stream of electrons. These electrons have a high enough energy to knock an electron off an organic molecule to form a positive ion. This ion is called the molecular ion. The molecular ion is often given the symbol \(\ce{M^{+}}\) or \(\ce{M^{\cdot +} }\) - the dot in this second version represents the fact that somewhere in the ion there will be a single unpaired electron. That's one half of what was originally a pair of electrons - the other half is the electron which was removed in the ionization process. The molecular ions are energetically unstable, and some of them will break up into smaller pieces. The simplest case is that a molecular ion breaks into two parts - one of which is another positive ion, and the other is an uncharged free radical. \[M^{\cdot +} \rightarrow X^+ + Y^{\cdot}\] The uncharged free radical won't produce a line on the mass spectrum. Only charged particles will be accelerated, deflected and detected by the mass spectrometer. These uncharged particles will simply get lost in the machine - eventually, they get removed by the vacuum pump. The ion, X⁺, will travel through the mass spectrometer just like any other positive ion - and will produce a line on the stick diagram. All sorts of fragmentations of the original molecular ion are possible - and that means that you will get a whole host of lines in the mass spectrum. For example, the mass spectrum of pentane looks like this: It's important to realize that the pattern of lines in the mass spectrum of an organic compound tells you something quite different from the pattern of lines in the mass spectrum of an element. With an element, each line represents a different isotope of that element.
With a compound, each line represents a different fragment produced when the molecular ion breaks up. In the stick diagram showing the mass spectrum of pentane, the line produced by the heaviest ion passing through the machine (at m/z = 72) is due to the molecular ion. The tallest line in the stick diagram (in this case at m/z = 43) is called the base peak. This is usually given an arbitrary height of 100, and the height of everything else is measured relative to this. The base peak is the tallest peak because it represents the commonest fragment ion to be formed - either because there are several ways in which it could be produced during fragmentation of the parent ion, or because it is a particularly stable ion. This section will ignore the information you can get from the molecular ion (or ions). That is covered in three other pages which you can get at via the mass spectrometry menu. You will find a link at the bottom of the page. Let's have another look at the mass spectrum for pentane: What causes the line at m/z = 57? How many carbon atoms are there in this ion? There can't be 5 because 5 x 12 = 60. What about 4? 4 x 12 = 48. That leaves 9 to make up a total of 57. How about C₄H₉ then? C₄H₉ would be [CH₃CH₂CH₂CH₂]⁺, and this would be produced by the following fragmentation: The methyl radical produced will simply get lost in the machine. The line at m/z = 43 can be worked out similarly. If you play around with the numbers, you will find that this corresponds to a break producing a 3-carbon ion: The line at m/z = 29 is typical of an ethyl ion, [CH₃CH₂]⁺: The other lines in the mass spectrum are more difficult to explain. For example, lines with m/z values 1 or 2 less than one of the easy lines are often due to loss of one or more hydrogen atoms during the fragmentation process. This time the base peak (the tallest peak - and so the commonest fragment ion) is at m/z = 57. But this isn't produced by the same ion as the same m/z value peak in pentane.
If you remember, the m/z = 57 peak in pentane was produced by [CH₃CH₂CH₂CH₂]⁺. If you look at the structure of pentan-3-one, it's impossible to get that particular fragment from it. Work along the molecule mentally chopping bits off until you come up with something that adds up to 57. With a small amount of patience, you'll eventually find [CH₃CH₂CO]⁺ - which is produced by this fragmentation: You would get exactly the same products whichever side of the CO group you split the molecular ion. The m/z = 29 peak is produced by the ethyl ion - which once again could be formed by splitting the molecular ion either side of the CO group. The more stable an ion is, the more likely it is to form. The more of a particular sort of ion that's formed, the higher its peak height will be. We'll look at two common examples of this. Summarizing the most important conclusion from the page on carbocations, stability increases in the order: primary < secondary < tertiary. Applying the logic of this to fragmentation patterns, it means that a split which produces a secondary carbocation is going to be more successful than one producing a primary one. A split producing a tertiary carbocation will be more successful still. Let's look at the mass spectrum of 2-methylbutane. 2-methylbutane is an isomer of pentane - isomers are molecules with the same molecular formula, but a different spatial arrangement of the atoms. Look first at the very strong peak at m/z = 43. This is caused by a different ion than the corresponding peak in the pentane mass spectrum. This peak in 2-methylbutane is caused by: The ion formed is a secondary carbocation - it has two alkyl groups attached to the carbon with the positive charge. As such, it is relatively stable. The peak at m/z = 57 is much taller than the corresponding line in pentane. Again a secondary carbocation is formed - this time, by: You would get the same ion, of course, if the left-hand CH₃ group broke off instead of the bottom one as we've drawn it.
In these two spectra, this is probably the most dramatic example of the extra stability of a secondary carbocation. Ions with the positive charge on the carbon of a carbonyl group, C=O, are also relatively stable. This is fairly clearly seen in the mass spectra of ketones like pentan-3-one. The base peak, at m/z = 57, is due to the [CH₃CH₂CO]⁺ ion. We've already discussed the fragmentation that produces this. Suppose you had to suggest a way of distinguishing between pentan-2-one and pentan-3-one using their mass spectra. Each of these is likely to split to produce ions with a positive charge on the CO group. In the pentan-2-one case, there are two different ions like this: That would give you strong lines at m/z = 43 and 71. With pentan-3-one, you would only get one ion of this kind: In that case, you would get a strong line at 57. You don't need to worry about the other lines in the spectra - the 43, 57 and 71 lines give you plenty of difference between the two. The 43 and 71 lines are missing from the pentan-3-one spectrum, and the 57 line is missing from the pentan-2-one one. The two mass spectra look like this: As you've seen, the mass spectrum of even very similar organic compounds will be quite different because of the different fragmentations that can occur. Provided you have a computer data base of mass spectra, any unknown spectrum can be computer analysed and simply matched against the data base. Jim Clark
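The trial-and-error arithmetic used above to assign a formula to a fragment peak (12 per carbon, 1 per hydrogen, 16 per oxygen, using nominal isotope masses) is easy to automate. This brute-force sketch is illustrative only; the bound of 2n + 2 hydrogens is the usual saturation limit for a fragment with n carbons.

```python
def fragment_formulas(target_mz: int, max_c: int = 6, max_o: int = 1):
    """Return (nC, nH, nO) combinations whose nominal mass equals target_mz."""
    hits = []
    for c in range(1, max_c + 1):
        for o in range(0, max_o + 1):
            h = target_mz - 12 * c - 16 * o
            # keep only chemically plausible hydrogen counts (0 .. 2n+2)
            if 0 <= h <= 2 * c + 2:
                hits.append((c, h, o))
    return hits

# m/z = 57 matches both the butyl cation C4H9+ (pentane) and the
# propanoyl cation C3H5O+, i.e. [CH3CH2CO]+ (pentan-3-one):
print(fragment_formulas(57))  # → [(3, 5, 1), (4, 9, 0)]
```

Run against m/z = 29, the same search returns both the ethyl cation C₂H₅⁺ discussed above and the formyl cation CHO⁺, a reminder that a nominal m/z value alone does not fix the formula.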
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Concepts_in_Biophysical_Chemistry_(Tokmakoff)/06%3A_Dynamics_and_Kinetics/20%3A_Protein_Folding/20.02%3A_Two-State_Thermodynamics |
These models have helped drive theoretical developments that provide alternate perspectives on how proteins fold. The statistical perspective is important. The standard way of talking about folding is in terms of activated processes, in which we describe states that have defined structures and which exchange across barriers along a reaction coordinate, with the emphasis on interpreting these states molecularly. There is nothing formally wrong with that, except that it is an unsatisfying way of treating problems where one has entropic barriers; the statistical perspective helps with entropic barriers. Reprinted with permission from K. A. Dill, Protein Sci. 8, 1166-1180 (1999). John Wiley and Sons 1999.
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Supplemental_Modules_and_Websites_(Inorganic_Chemistry)/Descriptive_Chemistry/Elements_Organized_by_Group/Group_17%3A_The_Halogens |
The halogens are located to the left of the noble gases on the periodic table. These five toxic, non-metallic elements make up Group 17 of the periodic table and consist of: fluorine (F), chlorine (Cl), bromine (Br), iodine (I), and astatine (At). Although astatine is radioactive and has only short-lived isotopes, it behaves similarly to iodine and is often included in the halogen group. Because the halogen elements have seven valence electrons, they require only one additional electron to form a full octet. This characteristic makes them more reactive than the other non-metal groups.
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Organic_Chemistry_Lab_Techniques_(Nichols)/05%3A_Distillation/5.04%3A_Vacuum_Distillation/5.4B%3A_Predicting_the_Boiling_Temperature |
The boiling point of a liquid or solution drops when the pressure is reduced in a distillation apparatus. It is helpful to be able to predict the altered boiling point depending on the pressure inside the apparatus. The lowest pressure attainable inside the apparatus depends largely on the vacuum source and the integrity of the seal on the joints. Lower pressures are attainable when using a portable vacuum pump\(^{14}\) than when using a water aspirator or the building's house vacuum (Figure 5.49). Due to the very low pressures possible with oil pumps in portable vacuums, these vacuum distillations should be conducted in the fume hood behind a blast shield. Water aspirators are the most common vacuum source in teaching labs because they are inexpensive. When a water aspirator is used, the vacuum pressure is always limited by the intrinsic vapor pressure of water, which is often between \(17.5 \: \text{mm} \: \ce{Hg}\) \(\left( 20^\text{o} \text{C} \right)\) and \(23.8 \: \text{mm} \: \ce{Hg}\) \(\left( 25^\text{o} \text{C} \right)\).\(^{15}\) The vacuum pressure is also very dependent on water flow, which can vary greatly. If an entire lab section uses the water lines at the same time, the water flow can be significantly compromised, leading to a much higher pressure than \(25 \: \text{mm} \: \ce{Hg}\) inside an apparatus. The number of students using aspirators at one time should be limited as much as possible. If a manometer is available, the distillation apparatus should be set up and evacuated without heating to measure the pressure. The expected boiling point of a compound can then be roughly estimated using a pressure-temperature nomograph (found in a CRC or online) or through the general guidelines in Table 5.9. If a manometer is not available and a water aspirator is to be used, the expected boiling point can be estimated using an approximate pressure of \(20 \: \text{mm} \: \ce{Hg}\), although the pressure will likely be higher than this.
\(^{14}\)A Kugelrohr apparatus can obtain pressures as low as \(0.05 \: \text{mm} \: \ce{Hg}\), as reported by the Sigma-Aldrich operating instructions. \(^{15}\)J. A. Dean, , 15\(^\text{th}\) ed., McGraw-Hill, , Sect 5.28. \(^{16}\)Selected values from: A. J. Gordon and R. J. Ford, , Wiley & Sons, , p 32-33.
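Where no nomograph is at hand, a rough boiling-point correction can also be computed from the Clausius-Clapeyron equation, with the heat of vaporization estimated by Trouton's rule (roughly \(88 \: \text{J mol}^{-1} \text{K}^{-1} \times T_b\)). This is only an order-of-magnitude sketch under those stated assumptions, not a replacement for a nomograph or the guidelines in Table 5.9; the function name is my own.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def bp_at_pressure(bp_760_C: float, p_mmHg: float) -> float:
    """Estimate the boiling point (deg C) at p_mmHg for a liquid that boils at
    bp_760_C at 760 mmHg. Uses Clausius-Clapeyron with dHvap approximated by
    Trouton's rule (~88 J/mol/K x Tb); a rough sketch only."""
    t1 = bp_760_C + 273.15              # normal boiling point, K
    dh_vap = 88.0 * t1                  # Trouton's-rule estimate, J/mol
    inv_t2 = 1.0 / t1 - R * math.log(p_mmHg / 760.0) / dh_vap
    return 1.0 / inv_t2 - 273.15

# A compound boiling at 250 C at atmospheric pressure is predicted to boil
# near 116 C at a 20 mmHg aspirator vacuum.
print(round(bp_at_pressure(250.0, 20.0)))  # → 116
```

The predicted drop of roughly 130 °C for a two-orders-of-magnitude pressure reduction is consistent with the general rule of thumb that halving the pressure lowers the boiling point by a modest, repeatable increment, but any real distillation should still be planned against measured pressure and tabulated data.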