https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/04%3A_Molecules_to_Medicines/4.01%3A_Medicine_Hunting
While the discovery of potential medicines sometimes falls to researchers' good luck, most often pharmacologists, chemists, and other scientists looking for new drugs plod along methodically for years, taking suggestions from nature or clues from knowledge about how the body works. Finding chemicals' cellular targets can teach scientists how drugs work. Aspirin's molecular target, the enzyme cyclooxygenase (COX), was discovered this way in the early 1970s in Nobel Prize-winning work by pharmacologist John Vane, then at the Royal College of Surgeons in London, England. Another example is colchicine, a relatively old drug that is still widely used to treat gout, an excruciatingly painful type of arthritis in which needle-like crystals of uric acid clog joints, leading to swelling, heat, pain, and stiffness. Lab experiments with colchicine led scientists to this drug's molecular target, a cell-scaffolding protein called tubulin. Colchicine works by attaching itself to tubulin, causing certain parts of a cell's architecture to crumble, and this action can interfere with a cell's ability to move around. Researchers suspect that in the case of gout, colchicine works by halting the migration of immune cells called granulocytes that are responsible for the inflammation characteristic of gout.

As pet owners know, you can teach some old dogs new tricks. In a similar vein, scientists have in some cases found new uses for "old" drugs: drugs used to treat bone ailments, for example, may be useful for treating infectious diseases like malaria. Remarkably, the potential new uses often have little in common with a drug's product label (its "old" use).
For example, chemist Eric Oldfield of the University of Illinois at Urbana-Champaign discovered that one class of drugs called bisphosphonates, which are currently approved to treat osteoporosis and other bone disorders, may also be useful for treating malaria, Chagas' disease, leishmaniasis, and AIDS-related infections like toxoplasmosis. Previous research by Oldfield and his coworkers had hinted that the active ingredient in the bisphosphonate medicines Fosamax, Actonel, and Aredia blocks a critical step in the metabolism of parasites, the microorganisms that cause these diseases. To test whether this was true, Oldfield gave the medicines to five different types of parasites, each grown along with human cells in a plastic lab dish. The scientists found that small amounts of the osteoporosis drugs killed the parasites while sparing human cells. The researchers are now testing the drugs in animal models of the parasitic diseases and so far have obtained cures, in mice, of certain types of leishmaniasis. If these studies prove that bisphosphonate drugs work in larger animal models, the next step will be to find out whether the medicines can thwart these parasitic diseases in humans.

Current estimates indicate that scientists have identified roughly 500 to 600 molecular targets where medicines may have effects in the body. Medicine hunters can strategically "discover" drugs by designing molecules to "hit" these targets. That has already happened in some cases. Researchers knew just what they were looking for when they designed the successful AIDS drugs called HIV protease inhibitors. Previous knowledge of the three-dimensional structure of certain HIV proteins (the target) guided researchers to develop drugs shaped to block their action. Protease inhibitors have extended the lives of many people with AIDS. However, sometimes even the most targeted approaches can end in big surprises.
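The "killed the parasites while sparing human cells" result described above is the kind of outcome drug screeners usually quantify with a selectivity index: the ratio of the concentration toxic to host cells to the concentration effective against the parasite. Below is a minimal sketch of that calculation; the function name, the parasite list, and every number are illustrative assumptions, not data from Oldfield's study.

```python
# Illustrative sketch only: a common way to quantify "kills the parasite
# while sparing human cells" is a selectivity index,
# SI = CC50 (concentration toxic to human cells) / IC50 (antiparasitic potency).
# All values below are invented for demonstration.

def selectivity_index(cc50_human_uM, ic50_parasite_uM):
    """Higher SI means the drug is more selective for the parasite over host cells."""
    return cc50_human_uM / ic50_parasite_uM

# Hypothetical screening results for one bisphosphonate against five parasites
screen = {
    "Leishmania donovani":   {"ic50": 2.0,  "cc50": 400.0},
    "Trypanosoma cruzi":     {"ic50": 5.0,  "cc50": 400.0},
    "Plasmodium falciparum": {"ic50": 8.0,  "cc50": 400.0},
    "Toxoplasma gondii":     {"ic50": 4.0,  "cc50": 400.0},
    "Trypanosoma brucei":    {"ic50": 10.0, "cc50": 400.0},
}

for parasite, d in screen.items():
    si = selectivity_index(d["cc50"], d["ic50"])
    print(f"{parasite}: SI = {si:.0f}")
```

A high SI in a lab dish is only a starting point; as the text notes, the next hurdles are animal models and then human trials.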
The New York City pharmaceutical firm Pfizer had a blood pressure-lowering drug in mind when instead its scientists discovered Viagra, a best-selling drug approved to treat erectile dysfunction. Initially, researchers had planned to create a heart drug, using knowledge they had about molecules that make blood clot and molecular signals that instruct blood vessels to relax. What the scientists did not know was how their candidate drug would fare in clinical trials. Sildenafil (Viagra's chemical name) did not work very well as a heart medicine, but many men who participated in the clinical testing phase of the drug noted one side effect in particular: erections. Viagra works by boosting levels of a natural molecule called cyclic GMP that plays a key role in cell signaling in many body tissues. This molecule does a good job of opening blood vessels in the penis, leading to an erection.

[Figure: Colchicine, a treatment for gout, was originally derived from the stem and seeds of the meadow saffron (autumn crocus). Credit: National Agricultural Library, ARS, USDA]
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Anti-Cancer_Drugs_II
Hydroxyurea blocks an enzyme that converts the cytosine nucleotide into its deoxy derivative. In addition, DNA synthesis is further inhibited because hydroxyurea blocks the incorporation of the thymidine nucleotide into the DNA strand. Mercaptopurine (6-MP), a chemical analog of the purine adenine, inhibits the biosynthesis of adenine nucleotides by acting as an antimetabolite. In the body, 6-MP is converted to the corresponding ribonucleotide. 6-MP ribonucleotide is a potent inhibitor of the conversion of a compound called inosinic acid to adenine nucleotides. Without adenine, DNA cannot be synthesized. 6-MP also works by being incorporated into nucleic acids as thioguanosine, rendering the resulting nucleic acids (DNA, RNA) unable to direct proper protein synthesis. Thioguanine is an antimetabolite in the synthesis of guanine nucleotides.

Alkylating agents react with guanine in DNA. These drugs add methyl or other alkyl groups onto molecules where they do not belong, which in turn inhibits correct utilization by base pairing and causes miscoding of DNA. There are six groups of alkylating agents: nitrogen mustards, ethylenimines, alkylsulfonates, triazenes, piperazines, and nitrosoureas. Cyclophosphamide is a classical example of the role of host metabolism in the activation of an alkylating agent, and it is one of the most widely used agents of this class. It was hoped that cancer cells might possess enzymes capable of accomplishing the cleavage, thus resulting in the selective production of an activated nitrogen mustard in the malignant cells. Compare the top and bottom structures in the graphic on the left.

A number of antibiotics, such as the anthracyclines, dactinomycin, bleomycin, adriamycin, and mithramycin, bind to DNA and inactivate it, thus preventing the synthesis of RNA.
General properties of these drugs include interaction with DNA in a variety of ways, including intercalation (squeezing between the base pairs), DNA strand breakage, and inhibition of the enzyme topoisomerase II. Most of these compounds have been isolated from natural sources as antibiotics. However, they lack the specificity of the antimicrobial antibiotics and thus produce significant toxicity. They are among the most important antitumor drugs available. Doxorubicin is widely used for the treatment of several solid tumors, while daunorubicin and idarubicin are used exclusively for the treatment of leukemia. These agents have a number of important effects, including intercalation into DNA, which affects many functions of the DNA, including DNA and RNA synthesis. Breakage of the DNA strand can also occur through inhibition of the enzyme topoisomerase II. At low concentrations dactinomycin inhibits DNA-directed RNA synthesis, and at higher concentrations DNA synthesis is inhibited as well. All types of RNA are affected, but ribosomal RNA is the most sensitive. Dactinomycin binds to double-stranded DNA, permitting RNA chain initiation but blocking chain elongation. Binding to the DNA depends on the presence of guanine.

Plant alkaloids like vincristine prevent cell division, or mitosis. There are several phases of mitosis, one of which is metaphase. During metaphase, the cell pulls duplicated DNA chromosomes to either side of the parent cell via structures called spindles. These spindles ensure that each new cell gets a full set of DNA. Spindles are microtubular fibers formed with the help of the protein tubulin. Vincristine binds to tubulin, thus preventing the formation of spindles and cell division. Paclitaxel (Taxol) was first isolated from the bark of the Pacific yew (Taxus brevifolia). Docetaxel is a more potent analog that is produced semisynthetically.
In contrast to other microtubule antagonists, taxol disrupts the equilibrium between free tubulin and microtubules by shifting it in the direction of assembly, rather than disassembly. As a result, taxol treatment causes both the stabilization of microtubules and the formation of abnormal bundles of microtubules. The net effect is still the disruption of mitosis.

Intercalating agents wedge between bases along the DNA. The intercalated drug molecules affect the structure of the DNA, preventing polymerase and other DNA-binding proteins from functioning properly. The result is prevention of DNA synthesis, inhibition of transcription, and induction of mutations.

A related group of drugs covalently binds to DNA, with preferential binding to the N-7 position of guanine and adenine. These agents are able to bind to two different sites on DNA, producing cross-links, either intrastrand (within the same DNA molecule) or interstrand (between strands), which results in inhibition of DNA synthesis and transcription.
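As a study aid, the drug classes surveyed in this article can be collected into a small lookup table. The groupings below follow the text; the table and the helper function are an illustrative sketch, not a pharmacology reference.

```python
# Summary table of the anticancer drug classes described above. Groupings
# follow the article's text; this is a study aid, not a clinical reference.
MECHANISMS = {
    "antimetabolites":       ["hydroxyurea", "mercaptopurine", "thioguanine"],
    "alkylating agents":     ["cyclophosphamide"],
    "antitumor antibiotics": ["doxorubicin", "dactinomycin", "bleomycin"],
    "mitotic inhibitors":    ["vincristine", "paclitaxel", "docetaxel"],
}

def class_of(drug):
    """Return the mechanistic class a drug is filed under, or None if absent."""
    for cls, drugs in MECHANISMS.items():
        if drug in drugs:
            return cls
    return None

print(class_of("paclitaxel"))  # mitotic inhibitors
```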
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Metabolism/Anabolism/Gluconeogenesis
Gluconeogenesis is the metabolic process by which organisms produce sugars (namely glucose) for catabolic reactions from non-carbohydrate precursors. Glucose is the only energy source used by the brain (with the exception of ketone bodies during times of fasting), testes, erythrocytes, and kidney medulla. In mammals this process occurs in the liver and kidneys.

The need for energy is important to sustain life. Organisms have evolved ways of producing the substrates required for the catabolic reactions necessary to sustain life when the desired substrates are unavailable. The main source of energy for eukaryotes is glucose. When glucose is unavailable, organisms are capable of synthesizing glucose from other non-carbohydrate precursors. The process that converts pyruvate into glucose is called gluconeogenesis. Another way organisms derive glucose is from energy stores like glycogen and starch.

Gluconeogenesis is much like glycolysis, only the process occurs in reverse. However, there are exceptions. In glycolysis there are three highly exergonic steps (steps 1, 3, and 10), which are also regulatory steps catalyzed by the enzymes hexokinase, phosphofructokinase, and pyruvate kinase. Biological reactions can occur in both the forward and reverse directions. If a reaction occurs in the reverse direction, the energy normally released by that reaction is now required. If gluconeogenesis were simply glycolysis run in reverse, these steps would require too much energy to be profitable to the organism. To overcome this problem, nature has evolved three other enzymes to replace the glycolytic enzymes hexokinase, phosphofructokinase, and pyruvate kinase during gluconeogenesis: glucose-6-phosphatase, fructose-1,6-bisphosphatase, and the pyruvate carboxylase/PEP carboxykinase pair, respectively.

Because it is important for organisms to conserve energy, they have derived ways to regulate the metabolic pathways that require and release the most energy. In glycolysis and gluconeogenesis, seven of the ten steps occur at or near equilibrium.
In gluconeogenesis, the conversion of pyruvate to PEP, the conversion of fructose-1,6-bisphosphate to fructose-6-phosphate, and the conversion of glucose-6-phosphate to glucose all occur far from equilibrium (they are strongly exergonic), which is why these are the steps that are highly regulated. It is important for the organism to conserve as much energy as possible: when there is an excess of energy available, gluconeogenesis is inhibited; when energy is required, gluconeogenesis is activated.
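The bypass reactions discussed above pair each irreversible glycolytic enzyme with the gluconeogenic enzyme(s) that replace it. A minimal sketch of that pairing as a lookup table (the pairings themselves are standard textbook biochemistry):

```python
# The three glycolytic steps that gluconeogenesis must bypass, paired with
# the enzyme(s) that run each bypass. Standard textbook pairings.
BYPASSES = {
    # glycolytic enzyme     -> gluconeogenic bypass enzyme(s)
    "hexokinase":           ["glucose-6-phosphatase"],
    "phosphofructokinase":  ["fructose-1,6-bisphosphatase"],
    "pyruvate kinase":      ["pyruvate carboxylase", "PEP carboxykinase"],
}

def bypass_for(glycolytic_enzyme):
    """Return the enzyme(s) gluconeogenesis uses in place of the given
    irreversible glycolytic enzyme."""
    return BYPASSES[glycolytic_enzyme]

print(bypass_for("pyruvate kinase"))  # ['pyruvate carboxylase', 'PEP carboxykinase']
```

Note that the pyruvate kinase step is the only one whose bypass requires two enzymes, which is why that detour (pyruvate to oxaloacetate to PEP) costs extra energy.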
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Carbohydrates/Polysaccharides/Starch
Polysaccharides are carbohydrate polymers consisting of tens to hundreds to several thousand monosaccharide units. All of the common polysaccharides contain glucose as the monosaccharide unit. Polysaccharides are synthesized by plants and animals to be stored as food, to provide structural support, or to be metabolized for energy.

Plants store glucose as the polysaccharide starch. The cereal grains (wheat, rice, corn, oats, barley) as well as tubers such as potatoes are rich in starch. Starch can be separated into two fractions: amylose and amylopectin. Natural starches are mixtures of amylose (10-20%) and amylopectin (80-90%). Amylose forms a colloidal dispersion in hot water, while amylopectin, though soluble, demands more extensive heating than amylose.

The structure of amylose consists of long polymer chains of glucose units connected by alpha acetal linkages. The graphic on the left shows a very small portion of an amylose chain. All of the monomer units are alpha-D-glucose, and all the alpha acetal links connect C #1 of one glucose and C #4 of the next glucose. Carbon #1 is called the anomeric carbon and is the center of an acetal functional group. A carbon that has two ether oxygens attached is an acetal. The alpha orientation is defined as the ether oxygen being on the opposite side of the ring as the C #6. In the chair structure this results in a downward projection. This is the same definition as for the alpha -OH in a hemiacetal. As a result of the bond angles in the alpha acetal linkage, amylose actually forms a spiral much like a coiled spring. Amylose is responsible for the formation of a deep blue-black color in the presence of iodine; the iodine molecule slips inside of the amylose coil.

The graphic on the left shows a very small portion of an amylopectin-type structure showing two branch points [drawn closer together than they should be]. The acetal linkages are alpha, connecting C #1 of one glucose to C #4 of the next glucose. The branches are formed by linking C #1 to a C #6 through an acetal linkage. Amylopectin has 12-20 glucose units between the branches. Natural starches are mixtures of amylose and amylopectin.
In glycogen, the branches occur at intervals of 8-10 glucose units, while in amylopectin the branches are separated by 10-12 glucose units.
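The branch-spacing figures above lend themselves to a quick back-of-envelope estimate: dividing a chain length by the average interval between branches approximates the number of branch points. The chain length below is an arbitrary example, not a measured value.

```python
# Rough arithmetic illustrating the branching difference stated above:
# estimate branch points in a chain of N glucose units given the average
# spacing between branches. Intervals follow the text; chain length is invented.
def approx_branch_points(n_glucose, units_between_branches):
    return n_glucose // units_between_branches

n = 6000  # hypothetical polymer of 6,000 glucose units
print("glycogen (every ~9 units):    ", approx_branch_points(n, 9))
print("amylopectin (every ~11 units):", approx_branch_points(n, 11))
```

The denser branching of glycogen gives it many more non-reducing ends per chain, which is consistent with its role as a rapidly mobilized glucose store.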
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Medicinal_Chemistry/Adrenergic_Drugs
The compounds ordinarily classified as central stimulants are drugs that increase behavioral activity, thought processes, and alertness, or elevate the mood of an individual. These drugs differ widely in their molecular structures and mechanisms of action; thus, describing a drug as a stimulant does not adequately describe its medicinal chemistry. The convulsions induced by a stimulant such as strychnine, for example, are very different from the behavioral stimulation and psychomotor agitation induced by a stimulant such as amphetamine.

The three main catecholamines (catechol is ortho-dihydroxybenzene) are epinephrine (EP), norepinephrine (NE), and dopamine (DA). The host of physiological and metabolic responses that follows stimulation of sympathetic nerves in mammals is usually mediated by the neurotransmitter norepinephrine. As part of the response to stress, the adrenal medulla is also stimulated, resulting in elevation of the concentrations of EP and NE in the circulation. The actions of these two catecholamines are very similar at some sites but differ significantly at others. For example, both compounds stimulate the myocardium; however, EP dilates blood vessels to skeletal muscle, whereas NE has a minimal constricting effect on them. DA is found predominantly in the basal ganglia of the CNS and at very low levels in peripheral tissues.

The synthesis of the neurotransmitters DA, NE, and EP and of the hormones NE and EP takes place by a pathway that involves five enzymes (see figure below). Tyrosine is generally considered the starting point, although phenylalanine hydroxylase can hydroxylate phenylalanine to tyrosine in the event of a tyrosine deficiency. Tyrosine hydroxylase is the rate-limiting enzyme in this pathway. Its addition of the 3-OH, yielding L-3,4-dihydroxyphenylalanine (L-DOPA), requires O2, tetrahydropteridine, and Fe as cofactors.
One of the oxygen atoms in O2 is incorporated into the organic substrate and the other is reduced to water. Because this is the rate-limiting step, inhibition of this enzyme is the most likely way to reduce NE, DA, or EP levels significantly. Particularly effective inhibitors are the alpha-methyltyrosine analogs, especially those containing an iodine atom in the benzene ring. The drug alpha-methyltyrosine is useful in the management of malignant hypertension and of pheochromocytoma. The latter is a chromaffin-cell tumor that produces and spills copious amounts of NE and EP into the circulation.

DOPA is then converted to dopamine by the enzyme DOPA decarboxylase. The cofactor for this enzyme is pyridoxal (the aldehyde form of pyridoxine, vitamin B6). The copper-containing enzyme dopamine-beta-monooxygenase then converts dopamine to NE, and finally norepinephrine N-methyltransferase converts NE to EP. Genetic defects in, or complete absence of, the first of these five enzymes (phenylalanine hydroxylase) leads to a disease called phenylketonuria (PKU), which will cause severe mental impairment if not treated at an early stage after birth.

Research experiments using different drugs that mimic the action of norepinephrine on sympathetic effector organs have shown that there are two major types of adrenergic receptors, alpha receptors and beta receptors. The beta receptors are in turn divided into beta-1 and beta-2 receptors because certain drugs affect only some beta receptors. There is also a less distinct division of alpha receptors into alpha-1 and alpha-2 receptors. Just as in the muscarinic receptor, and in most other G protein-coupled receptors that bind biogenic amines, the adrenergic receptors possess an aspartate residue in the third transmembrane domain. This aspartate residue appears to interact with the amine group of norepinephrine and other adrenergic ligands. Conserved serine residues in TM5 may play a role in the binding of adrenergic ligands through hydrogen-bond interactions.
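The five-enzyme pathway described above, from tyrosine (or phenylalanine) through L-DOPA and dopamine to norepinephrine and epinephrine, can be written out as data. This is only a summary of the text, not a simulation; the small helper illustrates why inhibiting the rate-limiting tyrosine hydroxylase step lowers DA, NE, and EP all at once.

```python
# The catecholamine biosynthetic pathway as (substrate, enzyme, product)
# triples, following the text. A summary table, not a kinetic model.
PATHWAY = [
    ("phenylalanine",  "phenylalanine hydroxylase",           "tyrosine"),
    ("tyrosine",       "tyrosine hydroxylase",                "L-DOPA"),  # rate-limiting
    ("L-DOPA",         "DOPA decarboxylase",                  "dopamine"),
    ("dopamine",       "dopamine-beta-monooxygenase",         "norepinephrine"),
    ("norepinephrine", "norepinephrine N-methyltransferase",  "epinephrine"),
]

def products_downstream_of(metabolite):
    """Everything synthesized after the given metabolite; blocking the enzyme
    at that step would lower all of these (cf. tyrosine hydroxylase)."""
    names = [s for s, _, _ in PATHWAY] + [PATHWAY[-1][2]]
    i = names.index(metabolite)
    return names[i + 1:]

print(products_downstream_of("tyrosine"))
# ['L-DOPA', 'dopamine', 'norepinephrine', 'epinephrine']
```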
Aromatic amino acid residues, such as a phenylalanine in TM6, may also contribute to the binding of ligands through pi-pi interactions.

Norepinephrine and epinephrine, both of which are secreted into the blood by the adrenal medulla, have somewhat different effects in exciting the alpha and beta receptors. Norepinephrine excites mainly alpha receptors but excites the beta receptors to a lesser extent as well. Epinephrine, on the other hand, excites both types of receptors approximately equally. Therefore, the relative effects of norepinephrine and epinephrine on different effector organs are determined by the types of receptors in those organs; if they are all beta receptors, epinephrine will be the more effective excitant. It should be emphasized that not all tissues have both of these receptors; usually a tissue is associated with only one type of receptor or the other.

EP dilates blood vessels (relaxes smooth muscle) in the skeletal muscle and liver vascular beds; NE constricts the same vascular beds. EP decreases resistance in the hepatic and skeletal vascular smooth muscle beds; NE increases resistance. In contrast to their opposite effects on the vascular smooth muscle of the liver and skeletal muscle, both EP and NE cause vasoconstriction (contraction of smooth muscle) in the blood vessels supplying the skin and mucosa. EP decreases diastolic blood pressure; NE increases diastolic blood pressure. EP relaxes bronchial smooth muscle; NE has little effect. Both EP and NE stimulate an increased rate of beating when applied directly to a heart muscle removed from the body and isolated from nervous input. In contrast, NE given intravenously causes a profound reflex bradycardia, due to a baroreceptor/vagal response (and increased release of acetylcholine onto the heart) to the vasopressor effect of NE.

Binding to the adrenergic receptor triggers a series of reactions that eventually results in a characteristic response.
Two of the proteins that are phosphorylated in this process break down glycogen and stop glycogen synthesis. There are three main ways in which catecholamines are removed from a receptor: recycling back into the presynaptic neuron by an active-transport reuptake mechanism; degradation to inactive compounds through the sequential actions of catechol-O-methyltransferase (COMT) and monoamine oxidase (MAO); and simple diffusion (see figure below). MAO catalyzes the oxidative deamination of catecholamines, serotonin, and other monoamines. It is one of several oxidase-type enzymes whose coenzyme is flavin adenine dinucleotide (FAD), covalently bound as a prosthetic group. The isoalloxazine ring system is viewed as the catalytically functional component of the enzyme; in this view, N-5 and C-4a are where the redox reaction takes place, although the whole region undoubtedly participates.

Norepinephrine (NE) is the neurotransmitter of most postganglionic sympathetic fibers and of many central neurons (e.g., in the locus ceruleus and hypothalamus). Upon release, NE interacts with adrenergic receptors. This action is terminated largely by the reuptake of NE back into the prejunctional neurons. Tyrosine hydroxylase and MAO regulate intraneuronal NE levels. Metabolism of NE occurs via MAO and catechol-O-methyltransferase to inactive metabolites (e.g., normetanephrine, 3-methoxy-4-hydroxyphenylethylene glycol, 3-methoxy-4-hydroxymandelic acid).

Epinephrine is a potent stimulator of both alpha- and beta-adrenergic receptors, and its effects on target organs are thus complex. Most of the effects that occur after injection are listed in the table on alpha- and beta-receptors shown above. Particularly prominent are the actions on the heart and on vascular and other smooth muscle. Epinephrine is one of the most potent vasopressor drugs known. Given intravenously, it evokes a characteristic effect on blood pressure, which rises rapidly to a peak that is proportional to the dose.
The increase in systolic pressure is greater than that in diastolic pressure, so that the pulse pressure increases. As the response wanes, the mean pressure falls below normal before returning to normal. The mechanism of the rise in blood pressure due to epinephrine is threefold: a direct myocardial stimulation that increases the strength of ventricular contraction; an increased heart rate; and, most important, vasoconstriction in many vascular beds, especially in the vessels of the skin, mucosa, and kidney, together with constriction in the veins. Owing to this increased blood pressure, and to a powerful beta-2-receptor vasodilator action that is partially counterbalanced by vasoconstrictor action on the alpha receptors that are also present, blood flow to the skeletal muscles and central nervous system is increased.

The effects of epinephrine on the smooth muscles of different organs and systems depend upon the type of adrenergic receptor in the muscle. It has a powerful bronchodilator action, most evident when bronchial muscle is contracted, as in bronchial asthma. In such situations, epinephrine has a striking therapeutic effect as a physiological antagonist to the constrictor influences, since it is not limited to specific competitive antagonism, such as occurs with antihistaminic drugs against histamine-induced bronchospasm.

Epinephrine has a wide variety of clinical uses in medicine and surgery. In general, these are based on the actions of the drug on blood vessels, the heart, and bronchial muscle. The most common uses of epinephrine are to relieve respiratory distress due to bronchospasm and to provide rapid relief of hypersensitivity reactions to drugs and other allergens. Its cardiac effects may be of use in restoring cardiac rhythm in patients with cardiac arrest. It is also used as a topical hemostatic on bleeding surfaces.

Norepinephrine is the chemical mediator liberated by mammalian postganglionic adrenergic nerves.
It differs from epinephrine only in lacking the methyl substitution on the amino group. Norepinephrine constitutes 10 to 20% of the catecholamine content of the human adrenal medulla. Norepinephrine is a potent agonist at alpha receptors and has little action on beta receptors; however, it is somewhat less potent than epinephrine at the alpha receptors of most organs. Most of the effects that occur after injection are listed in the table on alpha- and beta-receptors shown above. Norepinephrine has only limited therapeutic value.

Amphetamine, racemic beta-phenylisopropylamine, has powerful CNS stimulant actions in addition to the peripheral alpha and beta actions common to indirectly acting sympathomimetic drugs. Unlike epinephrine, it is effective after oral administration, and its effects last for several hours. Although amphetamine and methamphetamine are almost structurally identical to norepinephrine and epinephrine, these drugs have an indirect sympathomimetic action rather than directly exciting adrenergic effector receptors. Their effect is to cause the release of norepinephrine from its storage vesicles in the sympathetic nerve endings; the released norepinephrine in turn causes the sympathetic effects.

Ephedrine occurs naturally in plants of the genus Ephedra. It was used in China for at least 2,000 years before being introduced into Western medicine in 1924. Its central actions are less pronounced than those of the amphetamines. Ephedrine stimulates both alpha and beta receptors and has clinical uses related to both types of action. The drug owes part of its peripheral action to the release of norepinephrine, but it also has direct effects on receptors. Since ephedrine contains two chiral carbon atoms, four stereoisomers are possible. Clinically, D-ephedrine is used to a large extent as an anti-asthmatic and, formerly, as a pressor amine to restore low blood pressure after trauma. L-pseudoephedrine is used primarily as a nasal decongestant.
Ephedrine differs from epinephrine mainly in its efficacy after oral administration, its much longer duration of action, its more pronounced central actions, and its much lower potency. The cardiovascular effects of ephedrine are in many ways similar to those of epinephrine, but they persist about ten times as long. The drug elevates the systolic and diastolic pressure in man, and pulse pressure increases. Bronchial muscle relaxation is less prominent but more sustained with ephedrine than with epinephrine. The main clinical uses of ephedrine are in bronchospasm, as a nasal decongestant, and in certain allergic disorders. The drug is also used, although perhaps unwisely, as a weight-loss agent.

The monoamine oxidase inhibitors (MAOIs) comprise a chemically heterogeneous group of drugs that share the ability to block the oxidative deamination of naturally occurring monoamines. These drugs have numerous other effects, many of which are still poorly understood. For example, they lower blood pressure and were at one time used to treat hypertension. Their use in psychiatry has also become very limited as the tricyclic antidepressants have come to dominate the treatment of depression and allied conditions. Thus, MAOIs are used most often when tricyclic antidepressants give unsatisfactory results. In addition, whereas severe depression may not be the primary indication for these agents, certain neurotic illnesses with depressive features, and also those with anxiety and phobias, may respond especially favorably.

Two main problems are associated with the MAOIs. The first is that an amine called tyramine (derived from the amino acid tyrosine) may cause a hypertensive reaction in some people taking MAOIs. Therefore, foods containing tyramine must be avoided, and alcohol and caffeine must also be eliminated from the diet. Certain medications may react dangerously when combined with MAOIs, so it is crucial to tell the prescribing doctor about all medications (including over-the-counter drugs) being taken.
The second problem associated with MAOIs is the possibility of side effects. MAOIs inhibit not only MAO but other enzymes as well, and they interfere with the hepatic metabolism of many drugs. The dietary restrictions and side effects deter many people from staying on MAOIs. Phenelzine is the hydrazine analog of phenylethylamine, a substrate of MAO. This and several other MAOIs, such as isocarboxazid, are structurally related to amphetamine and were synthesized in an attempt to enhance central stimulant properties.

Cocaine blocks the reuptake of dopamine by presynaptic neurons. More about this can be found under the topic Illegal Drugs.

Dopamine is the immediate metabolic precursor of NE and EP; it is a central neurotransmitter and possesses important intrinsic pharmacological properties. DA is a substrate for both MAO and COMT and thus is ineffective when administered orally. Parkinson's disease can be characterized as a DA deficiency in the brain. The pathology can be traced to certain large neurons in the substantia nigra of the basal ganglia, whose degeneration is directly related to the DA deficiency. One of the principal roles of the basal ganglia is to control complex patterns of motor activity; when there is damage to the basal ganglia, one's writing, for example, becomes crude. Logic would dictate that increasing brain levels of DA should ameliorate the symptoms of Parkinson's disease. Direct parenteral DA administration is useless, since the compound does not penetrate the blood-brain barrier. It has been shown, however, that oral dosing with L-DOPA can successfully act as a pro-drug to the extent that it enters the brain and is then decarboxylated to DA there. The clinical results, in terms of decreased tremors and rigidity, are dramatic. However, there are complications that produce intense side effects, including nausea and vomiting, presumably due to chemoreceptor trigger zone stimulation by the large amounts of DA produced peripherally.
The reason for this situation is the relatively high peripheral level of the decarboxylase enzyme compared with brain concentrations: about 95% of a given oral dose is converted to DA before reaching the brain. This can be prevented by using L-DOPA in combination with a drug called carbidopa.

Amantadine, introduced as an antiviral agent for influenza, was unexpectedly found to cause symptomatic improvement in patients with parkinsonism. Amantadine is a basic amine like dopamine, but the lipophilic nature of its cage structure enhances its ability to cross the blood-brain barrier. The drug acts by releasing dopamine from the intact dopaminergic terminals that remain in the nigrostriatum of patients with Parkinson's disease. Because of this facilitated release of dopamine, the therapeutic efficacy of amantadine appears to be enhanced by the concurrent administration of levodopa. Amantadine has also been shown to delay the reuptake of dopamine by neural cells, and it may have anticholinergic effects as well. A three-dimensional view of amantadine may provide a better understanding of the structure.

The above therapies are based on the manipulation of endogenous stores of dopamine. Dopamine agonists, which stimulate the receptor directly, are also of therapeutic value. The drugs acting as dopaminergic agonists include the ergot alkaloid derivative bromocriptine, which is used particularly when L-DOPA therapy fails during the advanced stages of the disease. Bromocriptine is a derivative of lysergic acid (a precursor of LSD); its structure is shown below. The addition of the bromine atom renders this alkaloid a potent dopamine agonist, and virtually all of its actions result from stimulation of dopamine receptors.

Schizophrenia results from excessive excitement of a group of neurons that secrete dopamine in the behavioral centers of the brain, including the frontal lobes.
Therefore, drugs used to treat this disorder either decrease the amount of dopamine released from these neurons or antagonize dopamine at its receptors. We will discuss these drugs in detail later under the topic Psychoactive Drugs. Strychnine does not directly affect adrenergic mechanisms, and technically it should not be listed in this category, although its stimulant effects are often discussed alongside adrenergic agents. In addition, strychnine has no demonstrated therapeutic value, despite a long history of unwarranted popularity. However, the mechanism of action of strychnine is thoroughly understood, and it is a valuable pharmacological tool for studies of inhibition in the CNS. Poisoning with strychnine results in a predictable sequence of dramatic symptoms that may be lethal unless interrupted by established therapeutic measures. Strychnine is the principal alkaloid present in nux vomica, the seeds of a tree native to India. Strychnine produces excitation of all portions of the CNS. This effect, however, does not result from direct synaptic excitation: strychnine increases the level of neuronal excitability by selectively blocking inhibition. Nerve impulses are normally confined to appropriate pathways by inhibitory influences; when inhibition is blocked by strychnine, ongoing neuronal activity is enhanced and sensory stimuli produce exaggerated reflex effects. Strychnine is a powerful convulsant, and the convulsion it produces has a characteristic motor pattern. Inasmuch as strychnine reduces inhibition, including the reciprocal inhibition existing between antagonistic muscles, the pattern of convulsion is determined by the most powerful muscles acting at a given joint. In most laboratory animals, this convulsion is characterized by tonic extension of the body and of all limbs. The convulsant action of strychnine is due to interference with postsynaptic inhibition that is mediated by glycine. 
Glycine is an important inhibitory transmitter to motor neurons and interneurons in the spinal cord, and strychnine acts as a selective, competitive antagonist, blocking the inhibitory effects of glycine at all glycine receptors. Competitive receptor-binding studies indicate that both strychnine and glycine interact with the same receptor complex, although possibly at different sites. The first symptom of strychnine poisoning to be noticed is stiffness of the face and neck muscles. Heightened reflex excitability soon becomes evident: any sensory stimulus may produce a violent motor response. In the early stages this is a coordinated extensor thrust; in the later stages it may be a full tetanic convulsion. All voluntary muscles, including those of the face, are soon in full contraction. Respiration ceases due to the contraction of the diaphragm and the thoracic and abdominal muscles.
https://chem.libretexts.org/Bookshelves/Introductory_Chemistry/Introductory_Chemistry_(CK-12)/17%3A_Thermochemistry/17.13%3A_Heat_of_Solution
When preparing dilutions of concentrated sulfuric acid, the directions usually call for adding the acid slowly to water with frequent stirring. When this acid is mixed with water, a great deal of heat is released in the dissolution process. If water were added to the acid instead, the water would quickly heat and splatter, causing harm to the person making the solution. Enthalpy changes also occur when a solute undergoes the physical process of dissolving into a solvent. Hot packs and cold packs (see figure below) use this property. Many hot packs use calcium chloride, which releases heat when it dissolves, according to the equation below. \[\ce{CaCl_2} \left (s \right) \rightarrow \ce{Ca^{2+}} \left( aq \right) + 2 \ce{Cl^-} \left( aq \right) + 82.8 \: \text{kJ}\nonumber \] The molar heat of solution of a substance is the heat absorbed or released when one mole of the substance is dissolved in water. For calcium chloride, \(\Delta H_\text{soln} = -82.8 \: \text{kJ/mol}\). Many cold packs use ammonium nitrate, which absorbs heat from the surroundings when it dissolves. \[\ce{NH_4NO_3} \left( s \right) + 25.7 \: \text{kJ} \rightarrow \ce{NH_4^+} \left( aq \right) + \ce{NO_3^-} \left( aq \right)\nonumber \] Cold packs are typically used to treat muscle strains and sore joints. The cold pack is activated and applied to the affected area. As the ammonium nitrate dissolves, it absorbs heat from the body and helps to limit swelling. For ammonium nitrate, \(\Delta H_\text{soln} = +25.7 \: \text{kJ/mol}\). The molar heat of solution, \(\Delta H_\text{soln}\), of \(\ce{NaOH}\) is \(-44.51 \: \text{kJ/mol}\). In a certain experiment, \(50.0 \: \text{g}\) of \(\ce{NaOH}\) is completely dissolved in \(1.000 \: \text{L}\) of \(20.0^\text{o} \text{C}\) water in a foam cup calorimeter. Assuming no heat loss, calculate the final temperature of the water. This is a multiple-step problem: 1) Grams of \(\ce{NaOH}\) are converted to moles. 2) Moles are multiplied by the molar heat of solution. 
3) The joules of heat released in the dissolution process are used with the specific heat equation and the total mass of the solution to calculate \(\Delta T\). 4) \(T_\text{final}\) is determined from \(\Delta T\). \[50.0 \: \text{g} \: \ce{NaOH} \times \frac{1 \: \text{mol} \: \ce{NaOH}}{40.00 \: \text{g} \: \ce{NaOH}} \times \frac{-44.51 \: \text{kJ}}{1 \: \text{mol} \: \ce{NaOH}} \times \frac{1000 \: \text{J}}{1 \: \text{kJ}} = -5.56 \times 10^4 \: \text{J}\nonumber \] The heat released by the dissolution is absorbed by the solution, so \[\Delta T = \frac{-\Delta H}{c_p \times m} = \frac{5.56 \times 10^4 \: \text{J}}{4.18 \: \text{J/g}^\text{o} \text{C} \times 1050 \: \text{g}} = 12.7^\text{o} \text{C}\nonumber \] \[T_\text{final} = 20.0^\text{o} \text{C} + 12.7^\text{o} \text{C} = 32.7^\text{o} \text{C}\nonumber \] The dissolution process releases a large amount of heat, which causes the temperature of the solution to rise. Care must be taken when preparing concentrated solutions of sodium hydroxide, because of the large amount of heat released.
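The steps of this worked example can be checked with a short Python sketch. All quantities are taken from the problem statement; note the sign handling — the heat released by dissolution (negative \(\Delta H\)) is the heat absorbed by the 1050 g of solution, which with these numbers gives \(\Delta T \approx 12.7^\text{o}\text{C}\):

```python
# Worked example: 50.0 g NaOH dissolved in 1.000 L of 20.0 °C water.
molar_mass_naoh = 40.00      # g/mol
dH_soln = -44.51             # kJ/mol (negative: heat is released)
mass_naoh = 50.0             # g
mass_water = 1000.0          # g (1.000 L of water)
cp = 4.18                    # J/(g·°C), assumed to apply to the whole solution
T_initial = 20.0             # °C

moles = mass_naoh / molar_mass_naoh
q_released_J = -(moles * dH_soln) * 1000      # heat absorbed by the solution, in J
mass_solution = mass_naoh + mass_water        # total mass warmed by the heat
dT = q_released_J / (cp * mass_solution)
T_final = T_initial + dT
print(f"q = {q_released_J:.3g} J, dT = {dT:.1f} °C, T_final = {T_final:.1f} °C")
```

The variable names are illustrative only; the point is that each step of the multi-step solution maps onto one line of arithmetic.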
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/06%3A_Properties_of_Gases/6.04%3A_Kinetic_Molecular_Theory_(Overview)
Make sure you thoroughly understand the following essential ideas which are presented below. It is especially important that you know the principal assumptions of the kinetic-molecular theory. These can be divided into those that refer to the nature of the molecules themselves, and those that describe the nature of their motions: Properties such as temperature, pressure, and volume, together with others dependent on them (density, thermal conductivity, etc.) are known as macroscopic properties of matter; these are properties that can be observed in bulk matter, without reference to its underlying structure or molecular nature. By the late 19th century the atomic theory of matter was sufficiently well accepted that scientists began to relate these macroscopic properties to the behavior of the individual molecules, which are described by the microscopic properties of matter. The outcome of this effort was the kinetic-molecular theory of gases. This theory applies strictly only to a hypothetical substance known as an ideal gas; we will see, however, that under many conditions it describes the behavior of real gases at ordinary temperatures and pressures quite accurately, and serves as the starting point for dealing with more complicated states of matter. The "kinetic-molecular theory of gases" may sound rather imposing, but it is based on a series of easily-understood assumptions that, taken together, constitute a model that greatly simplifies our understanding of the gaseous state of matter. The five basic tenets of the kinetic-molecular theory are as follows: (1) a gas is composed of molecules whose separations are much larger than the molecules themselves, so the volume of the molecules is negligible compared with that of their container; (2) the molecules exert no attractive or repulsive forces on one another; (3) the molecules are in constant random motion, traveling in straight lines between collisions; (4) all collisions, with each other and with the container walls, are perfectly elastic; and (5) the average kinetic energy of the molecules is proportional to the absolute temperature. If gases do in fact consist of widely-separated particles, then the observable properties of gases must be explainable in terms of the simple mechanics that govern the motions of the individual molecules. The kinetic molecular theory makes it easy to see why a gas should exert a pressure on the walls of a container. Any surface in contact with the gas is constantly bombarded by the molecules. At each collision, a molecule moving with momentum \(mv\) strikes the surface. 
Since the collisions are elastic, the molecule bounces back with the same velocity in the opposite direction. This change in velocity \(\Delta v\) is equivalent to an acceleration; according to Newton's second law, a force \(F\) is thus exerted on the surface of area \(A\), exerting a pressure \(P = F/A\). According to the kinetic molecular theory, the average kinetic energy of an ideal gas is directly proportional to the absolute temperature. Kinetic energy is the energy a body has by virtue of its motion: \[ K.E. = \dfrac{mv^2}{2}\] As the temperature of a gas rises, the average velocity of the molecules will increase; because the kinetic energy varies as \(v^2\), a doubling of the temperature increases this velocity by a factor of \(\sqrt{2}\). Collisions with the walls of the container will transfer more momentum, and thus more kinetic energy, to the walls. If the walls are cooler than the gas, they will get warmer, returning less kinetic energy to the gas, and causing it to cool until thermal equilibrium is reached. Because temperature depends on the kinetic energy, the concept of temperature only applies to a statistically meaningful sample of molecules. We will have more to say about molecular velocities and kinetic energies farther on. The molecules of a gas are in a state of perpetual motion in which the velocity (that is, the speed and direction) of each molecule is completely random and independent of that of the other molecules. This fundamental assumption of the kinetic-molecular model helps us understand a wide range of commonly-observed phenomena. Diffusion refers to the transport of matter through a concentration gradient; the rule is that substances move (or tend to move) from regions of higher concentration to those of lower concentration. The diffusion of tea out of a tea bag into water, or of perfume from a person, are common examples; we would not expect to see either process happening in reverse! It might at first seem strange that the random motions of molecules can lead to a completely predictable drift in their ultimate distribution. 
The key to this apparent paradox is the distinction between an individual molecule and the population of molecules. Although we can say nothing about the fate of an individual molecule, the behavior of a large collection ("population") of molecules is subject to the laws of statistics. This is exactly analogous to the manner in which insurance actuarial tables can accurately predict the average longevity of people at a given age, but provide no information on the fate of any single person. If a tiny hole is made in the wall of a vessel containing a gas, then the rate at which gas molecules leak out of the container will be proportional to the number of molecules that collide with unit area of the wall per second, and thus with the average velocity of the gas molecules. This process, when carried out under idealized conditions, is known as effusion. Around 1830, the Scottish chemist Thomas Graham (1805-1869) discovered that the relative rates at which two different gases, at the same temperature and pressure, will effuse through identical openings are inversely proportional to the square roots of their molar masses: \[v \propto \dfrac{1}{\sqrt{M}}\] Graham's law, as this relation is known, is a simple consequence of the square-root relation between the velocity of a body and its kinetic energy. According to the kinetic molecular theory, the molecules of two gases at the same temperature will possess the same average kinetic energy. If \(v_1\) and \(v_2\) are the average velocities of the two kinds of molecules, then at any given temperature \(KE_1 = KE_2\) and \[\dfrac{m_1v_1^2}{2} = \dfrac{m_2v_2^2}{2}\] or, in terms of molar masses \(M\), \[ \color{red} { \dfrac{v_1}{v_2} = \sqrt{\dfrac{M_2}{M_1}}}\] Thus the average velocity of the lighter molecules must be greater than that of the heavier molecules, and the ratio of these velocities will be given by the inverse ratio of the square roots of the molecular weights. 
Although Graham's law applies exactly only when a gas diffuses into a vacuum, the law gives useful estimates of relative diffusion rates under more practical conditions, and it provides insight into a wide range of phenomena that depend on the relative average velocities of molecules of different masses. The glass tube shown above has cotton plugs inserted at either end. The plug on the left is moistened with a few drops of aqueous ammonia, from which \(NH_3\) gas slowly escapes. The plug on the right is similarly moistened with a strong solution of hydrochloric acid, from which gaseous \(HCl\) escapes. The gases diffuse in opposite directions within the tube; at the point where they meet, they combine to form solid ammonium chloride, which appears first as a white fog and then begins to coat the inside of the tube. The reaction is \[NH_{3(g)} + HCl_{(g)} \rightarrow NH_4Cl_{(s)}\] The lighter ammonia molecules will diffuse more rapidly, so the point where the two gases meet will be somewhere in the right half of the tube. The ratio of the diffusion velocities of ammonia (\(v_1\)) and hydrogen chloride (\(v_2\)) can be estimated from Graham's law: \[ \dfrac{v_1}{v_2} = \sqrt{\dfrac{36.5}{17}} = 1.46\] We can therefore assign relative velocities of the two gases as \(v_1 = 1.46\) and \(v_2 = 1\). Clearly, the meeting point will be directly proportional to \(v_1\). It will, in fact, be proportional to the ratio \(v_1/(v_1 + v_2)\)*: \[ \dfrac{v_1}{v_1+v_2} \times 100\; cm = \dfrac{1.46}{1.46 + 1.00} \times 100\, cm = 59 \;cm \] *In order to see how this ratio was deduced, consider what would happen in the three special cases in which \(v_1 = 0\), \(v_1 = v_2\), and \(v_2 = 0\), for which the distances (from the left end) would be 0, 50, and 100 cm, respectively. It should be clear that the simpler ratio \(v_1/v_2\) would lead to absurd results. Note that the above calculation is only an estimate. Graham's law is strictly valid only under special conditions, the most important one being that no other gases are present. 
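The ammonia/hydrogen chloride estimate above is easy to reproduce numerically. Molar masses are as given in the text; `graham_ratio` is just an illustrative helper name:

```python
import math

def graham_ratio(M2, M1):
    """Graham's law: v1/v2 = sqrt(M2/M1) for two gases at the same temperature."""
    return math.sqrt(M2 / M1)

M_NH3, M_HCl = 17.0, 36.5                   # g/mol
ratio = graham_ratio(M_HCl, M_NH3)          # NH3 is lighter, so ratio > 1
meeting_point = ratio / (ratio + 1) * 100   # cm from the NH3 end of a 100 cm tube
print(f"v(NH3)/v(HCl) = {ratio:.3f}, fog forms near {meeting_point:.0f} cm")
```

Running this reproduces the text's estimate: a velocity ratio of about 1.46 and a meeting point near 59 cm from the ammonia end.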
Contrary to what is written in some textbooks and is often taught, Graham's law does not accurately predict the relative rates of escape of the different components of a gaseous mixture into the outside air, nor does it give the rates at which two gases will diffuse through another gas such as air. See the article by Stephen J. Hawkes, J. Chem. Educ. 1993, 70(10), 836-837. One application of this principle that was originally suggested by Graham himself, but was not realized on a practical basis until a century later, is the separation of isotopes. The most important example is the enrichment of uranium in the production of nuclear fission fuel. The K-25 Gaseous Diffusion Plant was one of the major sources of enriched uranium during World War II. It was completed in 1945 and employed 12,000 workers. Owing to the secrecy of the Manhattan Project, the women who operated the system were unaware of the purpose of the plant; they were trained to simply watch the gauges and turn the dials for what they were told was a "government project". Natural uranium consists mostly of U-238, with only 0.7% of the fissionable isotope U-235. Uranium is of course a metal, but it reacts with fluorine to form a gaseous hexafluoride, \(UF_6\). In the very successful gaseous-diffusion process, the \(UF_6\) diffuses repeatedly through a porous wall. Each time, the lighter isotope passes through a bit more rapidly than the heavier one, yielding a mixture that is minutely richer in U-235. The process must be repeated over a thousand times to achieve the desired degree of enrichment. The development of a large-scale diffusion plant was a key part of the U.S. development of the first atomic bomb in 1945. This process is now obsolete, having been replaced by other methods. Diffusion ensures that molecules will quickly distribute themselves throughout the volume occupied by the gas in a thoroughly uniform manner. 
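As a rough numerical illustration of why so many passes are needed, the ideal (effusion-limit) single-stage separation factor for the two uranium hexafluorides can be computed from Graham's law. Real barrier stages perform well below this ideal, which is part of why thousands of passes were required in practice; the stage-count estimate below is a back-of-the-envelope sketch, not a description of the actual K-25 cascade:

```python
import math

# Molar masses of the hexafluorides: 235 + 6*19 = 349 and 238 + 6*19 = 352 g/mol.
alpha = math.sqrt(352 / 349)   # ideal per-pass separation factor
print(f"alpha = {alpha:.5f}")  # ≈ 1.0043, i.e. ~0.43% enrichment per ideal pass

# Ideal number of stages to go from 0.7% to 3% U-235,
# working with abundance ratios x/(1 - x):
r0, r1 = 0.007 / 0.993, 0.03 / 0.97
n = math.log(r1 / r0) / math.log(alpha)
print(f"~{n:.0f} ideal stages")
```

Even under these idealized assumptions, hundreds of stages are required; with realistic per-stage performance the count climbs into the thousands, as the text notes.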
The chances are virtually zero that sufficiently more molecules might momentarily find themselves near one side of a container than the other to result in an observable temporary density or pressure difference. This is a result of simple statistics. But statistical predictions are only valid when the sample population is large. Consider what would happen if we examine extremely small volumes of space: cubes that are about \(10^{-6}\) cm on each side, for example. Such a cell would contain only a few molecules, and at any one instant we would expect some to contain more or fewer than others, although in time they would average out to the same value. The effect of this statistical behavior is to give rise to random fluctuations in the density of a gas over distances comparable to the dimensions of visible light waves. When light passes through a medium whose density is non-uniform, some of the light is scattered. The kind of scattering due to random density fluctuations is called Rayleigh scattering, and it has the property of affecting (scattering) shorter wavelengths more effectively than longer wavelengths. The clear sky appears blue in color because the blue (shorter wavelength) component of sunlight is scattered more. The longer wavelengths remain in the path of the sunlight, available to delight us at sunrise or sunset. What we have been discussing is a form of what is known as Brownian motion: the random fluctuations in the pressure of a gas on either side of a small object do not completely cancel when the density of molecules (i.e., the pressure) is quite small. An interesting application involving several aspects of the kinetic molecular behavior of gases is the use of a gas, usually argon, to extend the lifetime of incandescent lamp bulbs. As a light bulb is used, tungsten atoms evaporate from the filament and condense on the cooler inner wall of the bulb, blackening it and reducing light output. 
As the filament gets thinner in certain spots, the increased electrical resistance results in a higher local power dissipation, more rapid evaporation, and eventually the filament breaks. The pressure inside a lamp bulb must be sufficiently low that the mean free path of the gas molecules is fairly long; otherwise heat would be conducted from the filament too rapidly, and the bulb would melt. (Thermal conduction depends on intermolecular collisions, and a longer mean free path means a lower collision frequency.) A complete vacuum would minimize heat conduction, but this would result in such a long mean free path that the tungsten atoms would rapidly migrate to the walls, resulting in a very short filament life and extensive bulb blackening. Around 1910, the General Electric Company hired Irving Langmuir as one of the first chemists to be employed as an industrial scientist in North America. Langmuir quickly saw that bulb blackening was a consequence of the long mean free path of vaporized tungsten atoms, and he showed that the addition of a small amount of argon will reduce the mean free path, increasing the probability that an outward-moving tungsten atom will collide with an argon atom. A certain proportion of these will eventually find their way back to the filament, partially reconstituting it. Krypton would be a better choice of gas than argon, since its greater mass would be more effective in changing the direction of the rather heavy tungsten atom. Unfortunately, krypton, being a rarer gas, is around 50 times as expensive as argon, so it is used only in “premium” light bulbs. The more recently developed halogen-cycle lamp is an interesting chemistry-based method of prolonging the life of a tungsten-filament lamp. Gases, like all fluids, exhibit a resistance to flow, a property known as viscosity. 
In order to force a fluid through a pipe or tube, an additional non-random translational motion must be superimposed on the thermal motion. There is a slight problem, however: molecules flowing near the center of the pipe collide mostly with molecules moving in the same direction at about the same velocity, but those that happen to find themselves near the wall will experience frequent collisions with the wall. Since the molecules in the wall of the pipe are not moving in the direction of the flow, they tend to absorb more kinetic energy than they return, with the result that the gas molecules closest to the wall of the pipe lose some of their forward momentum. Their random thermal motion will eventually take them deeper into the stream, where they will collide with other flowing molecules and slow them down. This gives rise to a resistance to flow known as viscous drag; this is the reason why long gas transmission pipelines need to have pumping stations every 100 km or so. As you know, liquids such as syrup or honey exhibit smaller viscosities at higher temperatures, as the increased thermal energy reduces the influence of intermolecular attractions, thus allowing the molecules to slip around each other more easily. Gases, however, behave in just the opposite way: gas viscosity arises from collision-induced transfer of momentum from rapidly-moving molecules to slow ones that have been released from the boundary layer. The higher the temperature, the more rapidly the molecules move and collide with each other, so the higher the viscosity. Everyone knows that air pressure decreases with altitude. This effect is easily understood qualitatively through the kinetic molecular theory. Random thermal motion tends to move gas molecules in all directions equally. In the presence of a gravitational field, however, motions in a downward direction are slightly favored. 
This causes the concentration, and thus the pressure, of a gas to be greater at lower elevations and to decrease without limit at higher elevations. The pressure at any elevation in a vertical column of a fluid is due to the weight of the fluid above it. This causes the pressure to decrease exponentially with height. The exact functional relationship between pressure and altitude is known as the barometric formula; it is easily derived using first-year calculus. For air at 25°C, the pressure at an altitude of \(h\) kilometers is given approximately by \[P_h = P_o e^{–0.115h}\] in which \(P_o\) is the pressure at sea level. This is a form of the very common exponential decay law, which we will encounter in several different contexts in this course. An exponential decay (or growth) law describes any quantity whose rate of change is directly proportional to its current value, such as the amount of money in a compound-interest savings account or the density of a column of gas at any altitude. The most important feature of any quantity described by this law is that the fractional rate of change of the quantity in question (in this case \(\Delta P/P\), or \(dP/P\) in calculus) is a constant. This means that the increase in altitude required to reduce the pressure by half is also a constant, about 6 km in the Earth's case. Because heavier molecules will be more strongly affected by gravity, their concentrations will fall off more rapidly with elevation. For this reason the partial pressures of the various components of the atmosphere will tend to vary with altitude. The difference in pressure is also affected by the temperature; at higher temperatures there is more thermal motion, and hence a less rapid fall-off of pressure with altitude. Owing to atmospheric convection and turbulence, these effects are not observed in the lower part of the atmosphere, but in the uppermost parts of the atmosphere the heavier molecules do tend to drift downward. At very low pressures, mean free paths are sufficiently great that collisions between molecules become rather infrequent. 
Under these conditions, highly reactive species such as ions, atoms, and molecular fragments that would ordinarily be destroyed on every collision can persist for appreciable periods of time. The most important example of this occurs at the top of the Earth's atmosphere, at an altitude of around 200 km, where the pressure is roughly \(10^{-7}\) atm. Here the mean free path will be about \(10^{7}\) times its value at 1 atm, or about 1 m. In this part of the atmosphere, known as the thermosphere, the chemistry is dominated by atomic oxygen and other ionic and radical species formed by the action of intense solar ultraviolet light on the normal atmospheric gases near the top of the stratosphere. The high concentrations of electrically charged species in these regions (sometimes also called the ionosphere) reflect radio waves and are responsible for around-the-world transmission of mid-frequency radio signals. The ion density in the lower part of the ionosphere (about 80 km altitude) is so great that the radiation from broadcast-band radio stations is absorbed in this region before these waves can reach the reflective high-altitude layers. However, the pressure in this region (known as the D-layer) is great enough that the ions recombine soon after local sunset, causing the D-layer to disappear and allowing the waves to reflect off of the upper (F-layer) part of the ionosphere. This is the reason that distant broadcast stations can only be heard at night.
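The barometric formula quoted earlier can be explored numerically. The decay constant used below, 0.115 per km, is an assumed value chosen to be consistent with the roughly 6 km half-pressure altitude mentioned in the text:

```python
import math

def pressure(h_km, P0=1.0, k=0.115):
    """Barometric formula P = P0·exp(-k·h), with h in km.

    k ≈ 0.115 km⁻¹ for air near 25 °C is an assumed value,
    consistent with a half-pressure altitude of about 6 km.
    """
    return P0 * math.exp(-k * h_km)

# The constant fractional fall-off means a constant "half-height":
half_height = math.log(2) / 0.115
print(f"half-pressure altitude ≈ {half_height:.1f} km")

for h in (0, 6, 12, 18):
    print(h, "km:", round(pressure(h), 3), "atm")
```

Each additional 6 km of altitude cuts the pressure roughly in half (1 → 0.50 → 0.25 → 0.13 atm), the defining behavior of an exponential decay law.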
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Nucleic_Acids/DNA/DNA%3A_Replication
The hereditary material in a cell is coded in the sequence of the heterocyclic amine bases of DNA. There are normally 46 strands of DNA, called chromosomes, in human cells. Specific regions, called genes, on each chromosome contain the hereditary information which distinguishes individuals from each other. The genes also contain the coded information required for the synthesis of the proteins and enzymes needed for the normal functions of the cells. Bacterial cells may have a few thousand genes, while the human genome contains tens of thousands. A single E. coli (bacterial) chromosome of double-helical DNA consists of 3.4 million base pairs. Prior to cell division, the DNA material in the original cell must be duplicated so that after cell division, each new cell contains the full amount of DNA material. The process of DNA duplication is usually called replication. The replication is termed semiconservative, since each new cell contains one strand of original DNA and one newly synthesized strand of DNA. The original polynucleotide strand of DNA serves as a template to guide the synthesis of the new complementary polynucleotide of DNA. A template is a guide that may be used, for example, by a carpenter to cut intricate designs in wood; in the same way, the DNA single-strand template guides the synthesis of a complementary strand of DNA. Several enzymes and proteins are involved in the replication of DNA. At a specific point, the double helix of DNA is unwound by the enzyme helicase, possibly in response to the initial synthesis of a short RNA strand, and proteins hold the unwound DNA strands in position. Each strand of DNA then serves as a template to guide the synthesis of its complementary strand of DNA, and DNA polymerase III joins the appropriate nucleotide units together. The replication process is shown in the graphic on the left: template #1 guides the formation of a new complementary #2 strand. 
The DNA template guides the formation of a complementary strand of DNA - not an exact copy of itself. For example, looking at template #2, the heterocyclic amine adenine (A) codes for, or guides, the incorporation of only thymine (T) into the newly synthesized #1 strand. The replication of DNA is guided by the base-pairing principle, so that no other heterocyclic amine nucleotide can hydrogen bond and fit correctly opposite its partner base. The next heterocyclic amine, cytosine (C), guides the incorporation of guanine (G), and similar arguments apply to the other bases. Exactly the complementary reaction occurs using template #1, where cytosine (C) guides the incorporation of guanine (G) to form a new complementary #2 strand. It is so important that cells duplicate the DNA genetic material exactly that the sequence of newly synthesized nucleotides is checked by two different polymerase enzymes. The second enzyme can check for, and actually correct, any mistake of mismatched base pairs in the sequence: the mismatched nucleotides are hydrolyzed and cut out, and new, correct ones are inserted. The details of DNA replication are not thoroughly understood, because so many molecules are involved in the process. This example focuses on the bacteriophage T7 DNA replication complex because it consists of relatively few proteins, and the mechanism of T7 DNA replication is a good model for other DNA replication. This molecule is based on the work of Doublié et al. (1998). In the graphic below, the DNA polymerase enzyme is shown with a short section of DNA. The green color represents the DNA template, while the magenta color represents the newly synthesized DNA. In the close-up, a guanine triphosphate nucleotide is shown in the active site, guided by the cytosine in the template matching it through hydrogen bonds. Only a few of the enzyme protein side-chain interactions with the nucleotide are shown. 
Magnesium ions also help stabilize the triphosphate through ionic interactions. Eventually two of the phosphates are hydrolyzed, and the remaining phosphate is bonded in a phosphate ester linkage to the deoxyribose at the end of the newly forming DNA chain.
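The template rule described above (A pairs with T, C pairs with G) can be sketched in a few lines of Python. This toy sketch ignores strand polarity and the enzymes involved, and the sequence used is made up for illustration:

```python
# Watson-Crick base-pairing rule used by the template strand.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complementary_strand(template):
    """Return the strand a template would guide into existence, base by base."""
    return "".join(PAIRS[base] for base in template)

template_1 = "ATCGGTA"                       # hypothetical template sequence
new_strand = complementary_strand(template_1)
print(new_strand)                            # TAGCCAT

# Semiconservative logic: complementing the new strand regenerates the template.
assert complementary_strand(new_strand) == template_1
```

The final assertion captures why each daughter helix carries one old and one new strand: the complement of the complement is the original sequence.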
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Carbohydrates/Monosaccharides/Ribose
Ribose and its related compound, deoxyribose, are the building blocks of the backbone chains in nucleic acids, better known as DNA and RNA. Ribose is used in RNA and deoxyribose is used in DNA. The deoxy- designation refers to the lack of an alcohol (-OH) group, as will be shown in detail further down. Ribose and deoxyribose are classified as monosaccharides, pentoses, and aldoses, and are reducing sugars. The ring form of ribose follows a similar pattern as that for glucose, with one exception. Since ribose has an aldehyde functional group, the ring closure occurs at carbon #1, which is the same as glucose. See the graphic on the left. The exception is that ribose is a pentose, with five carbons, and therefore a five-membered ring is formed: the -OH on carbon #4 is converted into the ether linkage to close the ring with carbon #1. This makes a five-membered ring - four carbons and one oxygen. The ring structures are always written with the orientation depicted above to avoid confusion. Carbon #1 is now called the anomeric carbon and is the center of a hemiacetal functional group: a carbon that has both an ether oxygen and an alcohol group is a hemiacetal. The presence or absence of the -OH group on carbon #2 is an important distinction between ribose and deoxyribose. Ribose has an alcohol group at carbon #2, while deoxyribose does not have the alcohol group. See the red -OH and H in the structures below. The beta position is defined as the anomeric -OH being on the same side of the ring as carbon #5; in the ring structure this results in a cis arrangement. The alpha position is defined as the -OH being on the opposite side of the ring from carbon #5, a trans arrangement. The alpha and beta labels are not applied to any other carbon - only the anomeric carbon, in this case #1.
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Fundamentals/How_to_Draw_Organic_Molecules
This page explains the various ways that organic molecules can be represented on paper or on screen - including molecular formulae, and various forms of structural formulae. A molecular formula simply counts the numbers of each sort of atom present in the molecule, but tells you nothing about the way they are joined together. For example, the molecular formula of butane is \(C_4H_{10}\), and the molecular formula of ethanol is \(C_2H_6O\). Molecular formulae are very rarely used in organic chemistry, because they do not give useful information about the bonding in the molecule. About the only place where you might come across them is in equations for the complete combustion of hydrocarbons, for example: \[ C_5H_{12} + 8O_2 \rightarrow 5CO_2 + 6H_2O\] In cases like this, the bonding in the organic molecule isn't important. A structural formula shows how the various atoms are bonded. There are various ways of drawing this and you will need to be familiar with all of them. A displayed formula shows all the bonds in the molecule as individual lines. You need to remember that each line represents a pair of shared electrons. For example, this is a model of methane together with its displayed formula: Notice that the way the methane is drawn bears no resemblance to the actual shape of the molecule. Methane isn't flat with 90° bond angles. This mismatch between what you draw and what the molecule actually looks like can lead to problems if you aren't careful. For example, consider the simple molecule with the molecular formula \(CH_2Cl_2\). You might think that there were two different ways of arranging these atoms if you drew a displayed formula. The chlorines could be opposite each other or at right angles to each other. But these two structures are actually exactly the same. Look at how they appear as models. One structure is in reality a simple rotation of the other one. Consider a slightly more complicated molecule, \(C_2H_5Cl\). 
The displayed formula could be written as either of these: But, again, these are exactly the same. Look at the models. For anything other than the most simple molecules, drawing a fully displayed formula is a bit of a bother - especially all the carbon-hydrogen bonds. You can simplify the formula by writing, for example, \(CH_3\) or \(CH_2\) instead of showing all these bonds. For example, ethanoic acid would be shown in a fully displayed form and a simplified form as: You could even condense it further to \(CH_3COOH\), and would probably do this if you had to write a simple chemical equation involving ethanoic acid. You do, however, lose something by condensing the acid group in this way, because you can't immediately see how the bonding works. You still have to be careful in drawing structures in this way. Remember from above that these two structures both represent the same molecule: The next three structures all represent butane. All of these are just versions of four carbon atoms joined up in a line. The only difference is that there has been some rotation about some of the carbon-carbon bonds. You can see this in a couple of models. Not one of the structural formulae accurately represents the shape of butane. The convention is that we draw it with all the carbon atoms in a straight line - as in the first of the structures above. This is even more important when you start to have branched chains of carbon atoms. The following structures again all represent the same molecule - 2-methylbutane. The two structures on the left are fairly obviously the same - all we've done is flip the molecule over. The other one isn't so obvious until you look at the structure in detail. There are four carbons joined up in a row, with a \(CH_3\) group attached to the next-to-end one. That's exactly the same as the other two structures. If you had a model, the only difference between these three diagrams is that you have rotated some of the bonds and turned the model around a bit.
To overcome this possible confusion, the convention is that you always look for the longest possible chain of carbon atoms, and then draw it horizontally. Anything else is simply hung off that chain. It does not matter in the least whether you draw any side groups pointing up or down. All of the following represent exactly the same molecule. If you made a model of one of them, you could turn it into any other one simply by rotating one or more of the carbon-carbon bonds. There are occasions when it is important to be able to show the precise 3-D arrangement in parts of some molecules. To do this, the bonds are shown using conventional symbols: For example, you might want to show the 3-D arrangement of the groups around the carbon which has the -OH group in butan-2-ol. Butan-2-ol has the structural formula: Using conventional bond notation, you could draw it as, for example: The only difference between these is a slight rotation of the bond between the centre two carbon atoms. This is shown in the two models below. Look carefully at them - particularly at what has happened to the lone hydrogen atom. In the left-hand model, it is tucked behind the carbon atom. In the right-hand model, it is in the same plane. The change is very slight. It doesn't matter in the least which of the two arrangements you draw. You could easily invent other ones as well. Choose one of them and get into the habit of drawing 3-dimensional structures that way. My own habit (used elsewhere on this site) is to draw two bonds going back into the paper and one coming out - as in the left-hand diagram above. Notice that no attempt was made to show the whole molecule in 3-dimensions in the structural formula diagrams. The \(CH_3CH_2\) group was left in a simple form. Keep diagrams simple - trying to show too much detail makes the whole thing amazingly difficult to understand!
In a skeletal formula, all the hydrogen atoms are removed from carbon chains, leaving just a carbon skeleton with functional groups attached to it. For example, we've just been talking about butan-2-ol. The normal structural formula and the skeletal formula look like this: In a skeletal diagram of this sort, there is a carbon atom at each junction between bonds and at the end of each bond (unless something else, like the -OH group in the example, is already there), and each carbon is assumed to carry enough hydrogen atoms to bring its total number of bonds up to 4. Beware! Diagrams of this sort take practice to interpret correctly - and may well not be acceptable to your examiners (see below). There are, however, some very common cases where they are frequently used. These cases involve rings of carbon atoms which are surprisingly awkward to draw tidily in a normal structural formula. Cyclohexane, \(C_6H_{12}\), is a ring of carbon atoms each with two hydrogens attached. This is what it looks like in both a structural formula and a skeletal formula. And this is cyclohexene, which is similar but contains a double bond: But the commonest of all is the benzene ring, \(C_6H_6\), which has a special symbol of its own. There's no easy, all-embracing answer to the question of which type of formula to use. It depends more than anything else on experience - a feeling that a particular way of writing a formula is best for the situation you are dealing with. Don't worry about this - as you do more and more organic chemistry, you will probably find it will come naturally. You'll get so used to writing formulae in reaction mechanisms, or for the structures for isomers, or in simple chemical equations, that you won't even think about it. Jim Clark
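Coming back to the molecular formulae from the start of this page: the idea that a molecular formula is just an atom count is easy to sketch in code. The snippet below is an illustrative Python aside (not part of the original page) that tallies atoms in formulas such as \(C_5H_{12}\) and checks that the pentane combustion equation conserves every element.

```python
import re
from collections import Counter

def atom_counts(formula: str) -> Counter:
    """Count atoms in a simple molecular formula such as 'C4H10'."""
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] += int(num) if num else 1
    return counts

def is_balanced(reactants: dict, products: dict) -> bool:
    """Check element conservation for {formula: coefficient} dicts."""
    def total(side):
        t = Counter()
        for formula, coeff in side.items():
            for elem, n in atom_counts(formula).items():
                t[elem] += coeff * n
        return t
    return total(reactants) == total(products)

# Combustion of pentane: C5H12 + 8 O2 -> 5 CO2 + 6 H2O
print(atom_counts("C2H6O"))  # ethanol: 2 C, 6 H, 1 O
print(is_balanced({"C5H12": 1, "O2": 8}, {"CO2": 5, "H2O": 6}))  # True
```

Note that a formula written this way still says nothing about bonding, which is exactly the point made above.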
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Medicinal_Chemistry/Anticancer_Drugs
3D structure of nitrogen mustard and uracil nitrogen mustard Efforts to modify the chemical structure of mechlorethamine to achieve greater selectivity for neoplastic tissues led to the development of cyclophosphamide. After studies of the pharmacological activity of cyclophosphamide, clinical investigations by European workers demonstrated its effectiveness in selected malignant neoplasms. Cyclophosphamide is a classical example of the role of host metabolism in the activation of an alkylating agent and is one of the most widely used agents of this class. The original rationale that guided its molecular design was twofold. First, if a cyclic phosphamide group replaced the N-methyl of mechlorethamine, the compound might be relatively inert, presumably because the bis-(2-chloroethyl) group of the molecule could not ionize until the cyclic phosphamide was cleaved at the phosphorus-nitrogen linkage. Second, it was hoped that neoplastic tissues might possess phosphatase or phosphamidase activity capable of accomplishing this cleavage, thus resulting in the selective production of an activated nitrogen mustard in the malignant cells. In accord with these predictions, cyclophosphamide displays only weak cytotoxic, mutagenic, or alkylating activity and is relatively stable in aqueous solution. However, when administered to experimental animals or patients bearing susceptible tumors, marked chemotherapeutic effects, as well as mutagenicity and carcinogenicity, are seen. Although a definite role for phosphatases or phosphamidases in the mechanism of action of cyclophosphamide has not yet been demonstrated, it is clearly established that the drug initially undergoes metabolic activation by the cytochrome P-450 mixed-function oxidase system of the liver, with subsequent transport of the activated intermediate to sites of action.
Thus, a crucial factor in the structure-activity relationship of cyclophosphamide concerns its capacity to undergo metabolic activation in the liver, rather than to alkylate malignant cells directly. It also appears that the selectivity of cyclophosphamide against certain malignant tissues may result in part from the capacity of normal tissues, such as liver, to protect themselves against cytotoxicity by further degrading the activated intermediates. None of the severe acute CNS manifestations reported with the typical nitrogen mustards has been noted with cyclophosphamide. Nausea and vomiting, however, may occur. Although the general cytotoxic action of this drug is similar to that of other alkylating agents, some notable differences have been observed. When compared with mechlorethamine, damage to the megakaryocytes and thrombocytopenia are less common. Another unusual manifestation of selectivity consists of more prominent damage to the hair follicles, resulting frequently in alopecia (baldness). The drug is not a vesicant, and local irritation does not occur. Uracil mustard was synthesized in an unsuccessful attempt to produce an active-site alkylator by linking the bis-(2-chloroethyl) group to the pyrimidine base uracil. Its activity in experimental neoplasms was demonstrated shortly thereafter. No relationship has been demonstrated, however, with the biological function of uracil. Cancer is a group of diseases characterized by abnormal and uncontrolled cell division. One important approach to antitumor agents is the design of compounds with structures related to those of the pyrimidines and purines that are involved in the biosynthesis of DNA. These compounds are known as antimetabolites because they interfere with the formation or utilization of a normal cellular metabolite.
This interference generally results from the inhibition of an enzyme in the biosynthetic pathway of the metabolite, or from the incorporation of the compound, as a false building block, into vital macromolecules such as proteins or nucleic acids. Uracil is not a component of DNA. Rather, DNA contains thymine, the methylated analog of uracil. The enzyme thymidylate synthase is required to catalyze this finishing touch: deoxyuridylate (dUMP) is methylated to deoxythymidylate (dTMP) (see figure below). The methyl donor in this reaction is methylenetetrahydrofolate. Rapidly dividing cells require an abundant supply of deoxythymidylate for the synthesis of DNA. Therefore, the vulnerability of these cells to the inhibition of dTMP synthesis can be exploited in cancer therapy. The rationale for 5-fluorouracil (5-FU) was to block DNA synthesis by inhibiting the biosynthesis of dTMP, by virtue of its close structural analogy to uracil. Fluorine, being the smallest atom that would substitute for hydrogen at the 5 position, was assumed to create the smallest possible molecular perturbation, and thus to be converted to the nucleotide and accepted by the reactive site of thymidylate synthase as a substrate imposter. In fact, this was the case. The van der Waals radius of the F atom (1.35 Å) is only slightly larger than that of the H atom (1.20 Å). Therefore, 5-fluorouracil is a fluorinated pyrimidine analogue which stops cell proliferation by blocking DNA synthesis and RNA processing. Fluorouracil is converted in vivo into fluorodeoxyuridylate (F-dUMP). This analog of dUMP irreversibly inhibits thymidylate synthase after acting as a normal substrate through part of the catalytic cycle. First a sulfhydryl group of the enzyme adds to C-6 of the bound F-dUMP (see figure below). Methylenetetrahydrofolate then adds to C-5 of this intermediate. In the case of dUMP, a hydride ion of the folate is subsequently shifted to the methylene group, and a proton is taken away from C-5 of the bound nucleotide.
However, F⁺ cannot be abstracted from F-dUMP by the enzyme, and so catalysis is blocked at the stage of the covalent complex formed by F-dUMP, methylenetetrahydrofolate, and the sulfhydryl group of the enzyme. We see here an example of suicide inhibition, in which an enzyme converts a substrate into a reactive inhibitor that immediately inactivates its catalytic activity.
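The claim that fluorine is "only slightly larger" than hydrogen is easy to quantify from the radii quoted above; this tiny Python check is purely illustrative.

```python
# Van der Waals radii quoted in the text (angstroms)
r_H = 1.20  # hydrogen
r_F = 1.35  # fluorine

# Relative size increase on substituting F for H at the 5 position
increase = (r_F - r_H) / r_H
print(f"F is {increase:.1%} larger than H")  # 12.5% larger
```

A perturbation of this size is small enough that F-dUMP still fits the active site, which is what lets it act as a substrate imposter.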
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Supplemental_Modules_and_Websites_(Inorganic_Chemistry)/Macromolecules/Catenation
Catenation of chemical bonds leads to the formation of inorganic polymers. However, inorganic polymers are mostly solids in the form of crystals. Typical inorganic polymers are diamond, graphite, silicates, and other solids in which all atoms are connected by covalent bonds. During the 20th century, the investigation of the material world turned to the very heart of the material world - the structure of atoms. The discovery of electrons in 1897 by J.J. Thomson showed that there were more fundamental particles present in atoms. Fourteen years later, Rutherford discovered that most of the mass of an atom resides in a tiny nucleus whose radius is 100,000 times smaller than that of the atom. In the meantime, light beams were discovered to be made of photons, which are equivalent to particles of wave motion. These discoveries created new concepts. When these concepts and discoveries are integrated, new ideas emerge. The result is quantum theory. This theory gives good interpretations of the phenomena of the atomic and subatomic world. In this microscopic world, distances are measured in nanometers (10⁻⁹ meter) and femtometers (10⁻¹⁵ meter, also called fermi, in honour of Fermi, who built the first nuclear reactor). The electrons in an atom are confined by the electromagnetic force of the atomic nuclei. At this level, we need a quantum mechanical approach to understand the energy states of the electrons in the atom. However, we do not have the time to discuss this in detail. Quantum mechanics applied to atomic structure is a mathematical approach to describing the behavior of electrons in atoms. The electrons are represented by wavefunctions, and each of them is characterized by a set of numbers. Each set of numbers represents a state, which is often called an orbital.
Quantum Numbers and Atomic Orbitals are pages that give a bit more detail on this subject, but a summary of the atomic orbitals is given below: 1s; 2s 2p; 3s 3p 3d; 4s 4p 4d 4f; 5s 5p 5d 5f 5g; 6s 6p 6d 6f 6g 6h; 7s 7p 7d 7f 7g 7h 7i. A film has shown how these are related to the Periodic Table of the Elements, and it also shows the shapes and concepts of atomic orbitals. These concepts are vital for the understanding of bonding, such as the bonds formed between carbon atoms of diamond, silicon, graphite, etc. In the assignment, you have been asked to apply these concepts to describe the bonding for carbon. The same argument also applies to the bonding of silicon. The electronic configuration of an element or atom shows the energy states of electrons in it. The Pauli exclusion principle and Hund's rules are some of the theories involved in assigning electronic configurations. For the discussion of bonding in some light elements, please note the following (electrons after * are valence electrons): H: 1s¹; He: 1s²; Li: 1s² *2s¹; Be: 1s² *2s²; B: 1s² *2s² 2p¹; C: 1s² *2s² 2p²; N: 1s² *2s² 2p³; O: 1s² *2s² 2p⁴; F: 1s² *2s² 2p⁵; Ne: 1s² *2s² 2p⁶; ... Si: 1s² 2s² 2p⁶ *3s² 3p²; P: 1s² 2s² 2p⁶ *3s² 3p³; S: 1s² 2s² 2p⁶ *3s² 3p⁴; Cl: 1s² 2s² 2p⁶ *3s² 3p⁵; Ar: 1s² 2s² 2p⁶ 3s² 3p⁶; K: [Ar] *4s¹ ... Electrons in an atom may have properties of several orbitals, and they share each other's characters. In other words, atomic orbitals may be combined to form hybrid orbitals. These hybrid orbitals are particularly useful in the discussion of chemical bonding. For carbon, the hybrid orbitals are made up of the 2s, 2px, 2py and 2pz orbitals. Since one s and three p orbitals are used, the four orbitals sharing s and p character are called sp³ hybrid orbitals. The shapes and directions of these orbitals should have been demonstrated in lectures, and diagrams are needed here. The bonding of diamond is beautifully described using the sp³ hybrid orbitals. The bonding of benzene should have been fully discussed in the organic chemistry course you have taken.
Simply, the orbitals used to form the sigma bonds are sp² hybrid orbitals resulting from combining the 2s, 2px and 2py orbitals. Furthermore, the overlap of the 2pz orbitals leads to the formation of the pi bond. Again, the structure of benzene serves as an excellent example of the concept called resonance. If one insisted that the 3 double bonds and 3 single bonds in benzene alternate along the ring, one could start with either a single or a double bond. Neither structure represents the structure of benzene, because all 6 bond lengths are about the same. Thus a combination of the two structures is used to represent the structure of benzene, and such an approach is called resonance. In other words, the electrons in the double bonds delocalize over the entire ring. The bonding description for benzene can be applied to that of a sheet of graphite. The electrons are delocalized on the two faces of each sheet in graphite. Thus, it is no surprise that graphite is a good conductor along the sheet. The graphite structure is the result of expanding the pi electrons into planes. Since all rings in graphite consist of 6 carbon atoms, the sheets are flat. If the hybrid orbitals are somewhat flexible, it is easy to understand that 5-member rings are also possibilities. However, formation of 5-member rings results in buckling of a flat structure, and we usually do not think this will happen. The discoverers of the buckminsterfullerenes spent a long time figuring out the structure of an allotrope of carbon consisting of 60 carbon atoms, which is represented by C₆₀. However, once they had deduced the structure, its shape turned out to be very common. The carbon atoms are at the junctions of lines on a soccer ball (called football in much of the world). A geometric description is a truncated icosahedron. The fullerenes, or buckyballs, have become the talk of the news media since the award of the Nobel Prize to their discoverers. The fullerenes actually are common, and their discovery adds a nice touch to theories of bonding and electronic structures.
The electrons on the surface of the ball behave, perhaps, as if they are on a very large atom. The discoverers are still very active in the study of fullerenes. A compound with equal numbers of boron and nitrogen atoms, BN, has on average 4 valence electrons per atom, the same as diamond or graphite in carbon. Thus, we anticipate BN to form solids with similar bonding and structures to diamond and graphite. In fact, the bonding of boron and boron compounds is also very interesting. The following items are mentioned here for future development. Give the electronic configuration of an element.
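The closing exercise asks for the electronic configuration of an element. The configurations listed earlier follow a simple filling rule, which the Python sketch below reproduces; it assumes the Madelung (n + l) ordering and ignores well-known exceptions such as Cr and Cu.

```python
# Minimal aufbau-order generator (Madelung n + l rule) - an
# illustrative sketch, not a full treatment of electronic structure.
def aufbau_order(max_n=7):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    # Fill in order of increasing n + l; ties broken by lower n
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(z):
    """Ground-state configuration for atomic number z (no exceptions handled)."""
    labels = "spdfghi"
    parts = []
    for n, l in aufbau_order():
        if z <= 0:
            break
        e = min(z, 2 * (2 * l + 1))   # subshell capacity is 2(2l + 1)
        parts.append(f"{n}{labels[l]}{e}")
        z -= e
    return " ".join(parts)

print(configuration(6))   # carbon:  1s2 2s2 2p2
print(configuration(14))  # silicon: 1s2 2s2 2p6 3s2 3p2
```

Comparing the output for Si with the list above also shows why 4s fills before 3d: for 3d, n + l = 5, while for 4s it is only 4.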
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/17%3A_Chemical_Kinetics_and_Dynamics/17.06%3A_Catalysts_and_Catalysis
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the green-highlighted terms in the context of this topic. It almost seems like magic! A mixture of gaseous \(H_2\) and \(O_2\) can coexist indefinitely without any detectable reaction, but if a platinum wire is heated and then immersed in the gaseous mixture, the reaction proceeds gently as the heat liberated by the reaction makes the wire glow red-hot. Catalysts play an essential role in our modern industrial economy, in our stewardship of the environment, and in all biological processes. This lesson will give you a glimpse into the wonderful world of catalysts, helping you to understand what they are and how they work. Catalysts have no effect on the equilibrium constant and thus on the equilibrium composition. Catalysts are substances that speed up a reaction but which are not consumed by it and do not appear in the net reaction equation. Also — and this is very important — a catalyst accelerates the forward and reverse reactions equally; this means that it leaves the equilibrium constant, and thus the equilibrium composition, unchanged. Thus a catalyst (in this case, sulfuric acid) can be used to speed up a reversible reaction such as ester formation or its reverse, ester hydrolysis: The catalyst has no effect on the equilibrium constant or the direction of the reaction. The direction can be controlled by adding or removing water (Le Chatelier principle). Catalysts function by allowing the reaction to take place through an alternative mechanism that requires a smaller activation energy. This change is brought about by a specific interaction between the catalyst and the reaction components. You will recall that the rate constant of a reaction is an exponential function of the activation energy, so even a modest reduction of \(E_a\) can yield an impressive increase in the rate. Catalysts provide alternative reaction pathways. Catalysts are conventionally divided into two categories: homogeneous and heterogeneous.
Enzymes, the natural biological catalysts, are often included in the former group, but because they share some properties of both yet exhibit some very special properties of their own, we will treat them here as a third category. When heated by itself, a sugar cube (sucrose) melts at 185°C but does not burn. But if the cube is rubbed in cigarette ashes, it burns before melting owing to the catalytic action of trace metal compounds in the ashes. The surface of metallic platinum is an efficient catalyst for the oxidation of many fuel vapors. This property is exploited in flameless camping stoves (left). The image at the right shows a glowing platinum wire heated by the slow combustion of ammonia on its surface. However, if you dip a heated Pt wire into ammonia, you get a miniature explosion: see video below. Hydrogen peroxide is thermodynamically unstable according to the reaction \[\ce{2 H2O2 → 2 H2O + O2 } \quad \quad ΔG^o = –210\, kJ\, mol^{–1}\] In the absence of contaminants this reaction is very slow, but a variety of substances, ranging from iodine and metal oxides to trace amounts of metals, greatly accelerate the reaction, in some cases almost explosively owing to the rapid release of heat. The most effective catalyst of all is the enzyme catalase, present in blood and intracellular fluids; adding a drop of blood to a solution of 30% hydrogen peroxide induces a vigorous reaction. This same reaction has been used to power a racing car! Each kind of catalyst facilitates a different pathway with its own activation energy. Because the rate is an exponential function of \(E_a\) (Arrhenius equation), even relatively small differences in \(E_a\) can have dramatic effects on reaction rates. Note especially the value for catalase; the chemist is still a rank amateur compared to what Nature can accomplish through natural selection! Changes in the rate constant or in the activation energy are obvious ways of measuring the efficacy of a catalyst.
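The exponential dependence of the rate constant on \(E_a\) can be made concrete with a quick Arrhenius-ratio calculation; the snippet below is an illustrative Python sketch, and the 20 kJ/mol reduction is a made-up example value, not a figure from the text.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def rate_enhancement(delta_Ea_kJ, T=298.0):
    """Factor by which k grows when Ea drops by delta_Ea_kJ.
    From k = A exp(-Ea/RT), the pre-exponential factors cancel in the ratio."""
    return math.exp(delta_Ea_kJ * 1e3 / (R * T))

# A hypothetical catalyst lowering Ea by 20 kJ/mol at room temperature:
print(f"rate increases {rate_enhancement(20):.0f}-fold")  # about 3200-fold
```

A "modest" 20 kJ/mol reduction thus buys more than three orders of magnitude in rate, which is why small differences between catalysts matter so much.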
But two other terms have come into use that have special relevance in industrial applications. The turnover number (TON) is the average number of cycles a catalyst can undergo before its performance deteriorates. Reported TONs for common industrial catalysts span a very wide range, with the largest values approaching the limits of diffusion transport. The turnover frequency (TOF), a term originally applied to enzyme-catalyzed reactions, has come into more general use. It is simply the number of times the overall catalyzed reaction takes place per catalyst (or per active site on an enzyme or heterogeneous catalyst) per unit time. The number of active sites on a heterogeneous catalyst is often difficult to estimate, so it is often replaced by the total area of the exposed catalyst, which is usually experimentally measurable. TOFs for heterogeneous reactions generally fall between roughly 10⁻² and 10² s⁻¹. As the name implies, homogeneous catalysts are present in the same phase (gas or liquid solution) as the reactants. Homogeneous catalysts generally enter directly into the chemical reaction (by forming a new compound or complex with a reactant), but are released in their initial form after the reaction is complete, so that they do not appear in the net reaction equation. Unless you are taking an organic chemistry course in which your instructor indicates otherwise, there is no need to memorize the mechanisms that follow. They are presented here for the purpose of convincing you that catalysis is not black magic, and to familiarize you with some of the features of catalyzed mechanisms. It should be sufficient for you to merely convince yourself that the individual steps make chemical sense. You will recall that cis-trans isomerism is possible when atoms connected to each of two doubly-bonded carbons can be on the same (cis) or opposite (trans) sides of the bond. This reflects the fact that rotation about a double bond is not possible.
Conversion of an alkene between its cis- and trans- forms can only occur if the double bond is temporarily broken, thus freeing the two ends to rotate. Processes that cleave covalent bonds have high activation energies, so cis-trans isomerization reactions tend to be slow even at high temperatures. Iodine is one of several catalysts that greatly accelerate this process, so the isomerization of 2-butene serves as a good introductory example of homogeneous catalysis. The mechanism of the iodine-catalyzed reaction is believed to involve the attack of iodine atoms (formed by the dissociation equilibrium \(I_2 \rightleftharpoons 2I\)) on one of the doubly-bonded carbons: During its brief existence, the free-radical activated complex can undergo rotation about the C—C bond, so that when it decomposes by releasing the iodine atom, a portion of the reconstituted butene will be in the other geometric form. Finally, the iodine atoms recombine into diiodine. Since the dissociation and recombination steps cancel out, iodine does not appear in the net reaction equation — a requirement for a true catalyst. Many reactions are catalyzed by the presence of an acid or a base; in many cases, both acids and bases will catalyze the same reaction. As one might expect, the mechanism involves the addition or removal of a proton, changing the reactant into a more kinetically labile form. A simple example is the addition of iodine to propanone: \[\ce{I2 + (CH3)2C=O → (CH2I)(CH3)C=O + HI}\] The mechanism for the acid-catalyzed process involves several steps. The role of the acid is to provide a proton that attaches to the carbonyl oxygen, forming an unstable oxonium ion. The latter rapidly rearranges into an enol (i.e., a carbon connected to both a double bond (C=C) and a hydroxyl (–OH) group). This completes the catalytic part of the process, which is basically an acid-base (proton-transfer) reaction in which the role of the proton is to withdraw an electron from the ketone oxygen. In the second stage, the enol reacts with the iodine. The curved arrows indicate shifts in electron locations.
In the first step below, an electron is withdrawn from the π orbital of the double bond by one of the atoms of the \(I_2\) molecule. This induces a shift of electrons in the latter, causing half of this molecule to be expelled as an iodide ion. The other half of the iodine is now an iodonium ion I⁺, which displaces a proton from one of the methyl groups. The resultant carbonium ion then expels the –OH proton to yield the final neutral product. Perhaps the most well-known acid-catalyzed reaction is the hydrolysis (or formation) of an ester — a reaction that most students encounter in an organic chemistry laboratory course. This is a more complicated process involving five steps; its mechanism is discussed elsewhere. See also this U. Calgary site, which describes both the acid- and base-catalyzed reactions. Many oxidation-reduction (electron-transfer) reactions, including direct oxidation by molecular oxygen, are quite slow. Ions of transition metals capable of existing in two oxidation states can often materially increase the rate. An example would be the reduction of iron(III) by the vanadium(III) ion: \[\ce{V^{3+} + Fe^{3+} → V^{4+} + Fe^{2+}}\] This reaction is catalyzed by either Cu⁺ or Cu²⁺, and the rate is proportional to the concentrations of V³⁺ and of the copper ion, but independent of the Fe³⁺ concentration. The mechanism is believed to involve two steps: (If Cu⁺ is used as the catalyst, it is first oxidized to Cu²⁺ by step 2.) Ions capable of being oxidized by an oxidizing agent such as \(H_2O_2\) can serve as catalysts for its decomposition. Thus \(H_2O_2\) oxidizes iodide ion to hypoiodite, which then reduces another \(H_2O_2\) molecule, returning an I⁻ ion to start the cycle over again: \[H_2O_2 + I^– → H_2O + IO^–\] \[H_2O_2 + IO^– → H_2O + O_2 + I^–\] Iron(II) can do the same thing. Even traces of metallic iron can yield enough Fe²⁺ to decompose solutions of hydrogen peroxide.
\[\ce{2 Fe^{2+} + H2O2 + 2 H^+ → 2 Fe^{3+} + 2 H2O}\] \[\ce{2 Fe^{3+} + H2O2 → 2 Fe^{2+} + O2 + 2 H^+}\] As its name implies, a heterogeneous catalyst exists as a separate phase (almost always a solid) from the one (most commonly a gas) in which the reaction takes place. The catalytic effect arises from disruption (often leading to dissociation) of the reactant molecules brought about by their interaction with the surface of the catalyst. You will recall that one universal property of matter is the weak attractive forces that arise when two particles closely approach each other. When the particles have opposite electric charges or enter into covalent bonding, these far stronger attractions dominate and define the "chemistry" of the interacting species. The molecular units within the bulk of a solid are bound to their neighbors through these forces, which act in opposing directions to keep each under a kind of "tension" that restricts its movement and contributes to the cohesiveness and rigidity of the solid. At the surface of any kind of condensed matter, things are quite different. The atoms or molecules that reside on the surface experience unbalanced forces which prevent them from assuming the same low potential energies that characterize the interior units. (The same thing happens in liquids, and gives rise to a variety of interfacial effects such as surface tension.) But in the case of a solid, in which the attractive forces tend to be stronger, something much more significant happens. The molecular units that reside on the surface can be thought of as partially buried in it, with their protruding parts (and the intermolecular attractions that emerge from them) exposed to the outer world. The strength of the attractive force field which emanates from a solid surface varies depending on the nature of the atoms or molecules that make up the solid.
Don't confuse adsorption with absorption; the latter refers to the bulk uptake of a substance into the interior of a porous material. At the microscopic level, of course, absorption also involves adsorption. The process in which molecules in a gas or a liquid come into contact with and attach themselves to a solid surface is known as adsorption. Adsorption is almost always an exothermic process and its strength is conventionally expressed by the enthalpy or "heat" of adsorption \(ΔH_{ads}\). Two general categories of adsorption are commonly recognized, depending on the extent to which the electronic or bonding structure of the attached molecule is affected. When the attractive forces arise from relatively weak van der Waals interactions, there is little such effect and \(ΔH_{ads}\) tends to be small. This condition is described as physisorption (physical adsorption). Physisorption of a gas to a surface is energetically similar to the condensation of the gas to a liquid; it usually builds up multiple layers of adsorbed molecules, and it proceeds with zero activation energy. Of more relevance to catalytic phenomena is chemisorption, in which the adsorbate is bound to the surface by what amounts to a chemical bond. The resulting disruption of the electron structure of the adsorbed species "activates" it and makes it amenable to a chemical reaction (often dissociation) that could not be readily achieved through thermal activation in the gas or liquid phase. In contrast to physisorption, chemisorption generally involves an activation energy (supplied by \(ΔH_{ads}\)) and the adsorbed species is always confined to a monolayer. The simplest heterogeneous process is chemisorption followed by bond-breaking as described above. The most common and thoroughly-studied of these is the dissociation of hydrogen, which takes place on the surface of most transition metals. The single 1s electron of each hydrogen atom coordinates with the d orbitals of the metal, forming a pair of chemisorption bonds (indicated by the red dashed lines).
Although these new bonds are more stable than the single covalent bond they replace, the resulting hydrogen atoms are able to migrate along the surface owing to the continuous extent of the d-orbital conduction band. Although the adsorbed atoms ("adatoms") are not free radicals, they are nevertheless highly reactive, so if a second, different molecular species adsorbs onto the same surface, an interchange of atoms may be possible. Thus carbon monoxide can be oxidized to \(CO_2\) by the process illustrated below: In this example, only the \(O_2\) molecule undergoes dissociation. The CO molecule adsorbs without dissociation, configured perpendicular to the surface with the chemisorption bond centered over a hollow space between the metal atoms. After the two adsorbed species have migrated near each other, the oxygen atom switches its attachment from the metal surface to form a more stable C=O bond with the carbon, followed by release of the product molecule. An alternative mechanism (the Eley-Rideal mechanism) eliminates the second chemisorption step; the oxygen adatoms react directly with the gaseous CO molecules by replacing the chemisorption bond with a new C–O bond as they swoop over the surface: Examples of both mechanisms are known, but the first scheme, the Langmuir-Hinshelwood mechanism, is the more important in that it exploits the activation of the adsorbed reactant. In the case of carbon monoxide oxidation, studies involving molecular beam experiments support this scheme. A key piece of evidence is the observation of a short time lag between contact of a CO molecule with the surface and release of the \(CO_2\), suggesting that CO remains chemisorbed during the interval. To be effective, these processes of adsorption, reaction, and desorption must be orchestrated in a way that depends critically on the properties of the catalyst in relation to the chemisorption properties (\(ΔH_{ads}\)) of the reactants and products.
The importance of choosing a catalyst that achieves the proper balance of the heats of adsorption of the various reaction components is known as the Sabatier principle, but it is sometimes referred to as the "just-right" or "Goldilocks" principle. Remember the story of Goldilocks and the Three Bears? ... or see this YouTube video. In its application to catalysis, this principle is frequently illustrated by a "volcano diagram" in which the rate of the catalyzed reaction is plotted as a function of ΔH_ads of a substrate such as H₂ on a transition metal surface. The plot at the left shows the relative effectiveness of various metals in catalyzing the decomposition of formic acid HCOOH. The vertical axis is plotted as temperature, the idea being that the better the catalyst, the lower the temperature required to maintain a given rate. The term catalytic cycle refers to the idealized sequence of steps between the adsorption of a reactant onto the catalyst and the desorption of the product, culminating in restoration of the catalyst to its original condition. A typical catalytic cycle for the hydrogenation of propene is illustrated below. Catalyst poisoning, brought about by irreversible binding of a substance to the catalyst's surface, can be permanent or temporary. In the latter case the catalyst can be regenerated, usually by heating to a high temperature. In organisms, many of the substances we know as "poisons" act as catalytic poisons on enzymes. If catalysts truly remain unchanged, they should last forever, but in actual practice, various events can occur that limit the useful lifetime of many catalysts. Catalysts tend to be rather expensive, so it is advantageous if they can be reprocessed or regenerated to restore their activity. It is a common industrial practice to periodically shut down process units to replace spent catalysts. The actual mechanisms by which adsorption of a molecule onto a catalytic surface facilitates the cleavage of a bond vary greatly from case to case.
We give here only one example, that of the dissociation of dioxygen O₂ on the surface of a catalyst capable of temporarily donating an electron which enters an oxygen antibonding molecular orbital that will clearly destabilize the O–O bond. (Once the bond has been broken, the electron is given back to the catalyst.) Heterogeneous catalysts mostly depend on one or more of the following kinds of surface properties: Since heterogeneous catalysis requires direct contact between the reactants and the catalytic surface, the surface area goes at the top of the list. In the case of a metallic film, this is not the same as the nominal area of the film as measured by a ruler; at the microscopic level, even apparently smooth surfaces are highly irregular, and some cavities may be too small to accommodate reactant molecules. Consider, for example, that a 1-cm cube of platinum (costing roughly $1000) has a nominal surface area of only 6 cm². If this is broken up into 10¹² smaller cubes whose sides are 10⁻⁶ m, the total surface area would be 60,000 cm², capable in principle of increasing the rate of a Pt-catalyzed reaction by a factor of 10⁴. These very finely-divided (and often very expensive) metals are typically attached to an inert support to maximize their exposure to the reactants. At the microscopic level, even an apparently smooth surface is pitted and uneven, and some sites will be more active than others. Penetration of molecules into and out of some of the smaller channels of a porous surface may become rate-limiting. An otherwise smooth surface will always possess a variety of defects such as steps and corners which offer greater exposure and may be either the only active sites on the surface, or overly active so as to permanently bind to a reactant, reducing the active area of the surface. In one study, it was determined that kink defects constituting just 5 percent of the platinum surface were responsible for over 90% of the catalytic activity in a certain reaction.
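The platinum surface-area estimate above is easy to verify with a few lines of arithmetic. This sketch simply redoes the calculation from the text: a 1-cm cube subdivided into cubes with 10⁻⁶-m (1 µm) sides.

```python
# Back-of-the-envelope check of the platinum surface-area claim.
side_big = 1e-2      # 1 cm cube, expressed in metres
side_small = 1e-6    # side of each small cube: 10^-6 m (1 micrometre)

n_cubes = (side_big / side_small) ** 3           # number of small cubes (~10^12)
area_big_cm2 = 6 * (side_big * 100) ** 2         # nominal area: 6 cm^2
area_total_m2 = n_cubes * 6 * side_small ** 2    # total area of all small cubes
area_total_cm2 = area_total_m2 * 1e4             # convert m^2 -> cm^2

gain = area_total_cm2 / area_big_cm2             # ~10^4-fold increase
```

Running this reproduces the numbers quoted in the text: about 60,000 cm² of surface, a 10⁴-fold increase over the original 6 cm².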
When chemisorption occurs at two or more locations on the reactant, efficient catalysis requires that the spacing of the active centers on the catalytic surface be such that surface bonds can be formed without significant angular distortion. Thus activation of the ethylene double bond on a nickel surface proceeds efficiently because the angle between the C–Ni bonds and the C–C bond is close to the tetrahedral value of 109.5° required for sp³ carbon hybrid orbital formation. Similarly, we can expect that the hydrogenation of benzene should proceed efficiently on a surface in which the active sites are spaced in the range of 150 to 250 pm. This is one reason why many metallic catalysts exhibit different catalytic activity on different crystal faces. As the particle size of a catalyst is reduced, the fraction of more highly exposed step, edge, and corner atoms increases. An extreme case occurs with nano-sized (1-2 nm) metal cluster structures composed typically of 10-50 atoms. Metallic gold, well known for its chemical inertness, exhibits very high catalytic activity when it is deposited as metallic clusters on an oxide support. For example, O₂ dissociates readily on Au clusters, which have been found to efficiently catalyze the oxidation of hydrocarbons. Zeolites are clay-like aluminosilicate solids that form open-framework microporous structures that may contain linked cages, cavities or channels whose dimensions can be tailored to the sizes of the reactants and products. To those molecules able to diffuse through these spaces, zeolites are in effect "all surface", making them highly efficient. This size-selectivity makes them important for adsorption, separation, ion-exchange, and catalytic applications. Many zeolites occur as minerals, but others are made synthetically in order to optimize their properties.
As catalysts, zeolites offer a number of advantages that have made them especially important in "green chemistry" operations in which the number of processing steps, unwanted byproducts, and waste stream volumes are minimized. This distortion of Edward FitzGerald's already-distorted translation of the famous quatrain from the wonderful Rubáiyát of Omar Khayyám underlines the central role that enzymes and their technology have played in civilization since ancient times. Fermentation and wine-making have been a part of human history and culture for at least 8000 years, but recognition of the role of catalysis in these processes had to wait until the late nineteenth century. By the 1830's, numerous similar agents, such as those that facilitate protein digestion in the stomach, had been discovered. The term "enzyme", meaning "in yeast", was coined by the German physiologist Wilhelm Kühne in 1876. In 1900, Eduard Buchner (1860-1917, 1907 Nobel Prize in Chemistry) showed that fermentation, previously believed to depend on a mysterious "life force" contained in living organisms such as yeast, could be achieved by a cell-free "press juice" that he squeezed out of yeast. By this time it was recognized that enzymes are a form of catalyst (a term introduced by Berzelius in 1835), but their exact chemical nature remained in question. They appeared to be associated with proteins, but the general realization that enzymes are proteins began only in the 1930s when the first pure enzyme was crystallized, and did not become generally accepted until the 1950s. It is now clear that nearly all enzymes are proteins, the major exception being a small but important class of RNA-based enzymes known as ribozymes. Proteins are composed of long sequences of amino acids strung together by amide bonds; this sequence defines the primary structure of the protein.
Their huge size (typically 200-2000 amino acid units, with total molecular weights of 20,000-200,000) allows them to fold in complicated ways (known as secondary and tertiary structures) whose configurations are essential to their catalytic function. Because enzymes are generally very much larger than the reactant molecules they act upon (known in biochemistry as substrates), enzymatic catalysis is in some ways similar to heterogeneous catalysis. The main difference is that the binding of a substrate to the enzyme is much more selective. Most enzymes come into being as inactive precursors (zymogens) which are converted to their active forms at the time and place they are needed. Conversion to the active form may involve a simple breaking up of the protein by hydrolysis of an appropriate peptide bond or the addition of a phosphate or similar group to one of the amino acid residues. Many enzyme proteins also require "helper" molecules, known as cofactors, to make them catalytically active. These may be simple metal ions (many of the trace nutrient ions of Cu, Mn, Mo, V, etc.) or they may be more complex organic molecules, which are called coenzymes. Many of the latter are what we commonly refer to as vitamins. Other molecules, known as inhibitors, decrease enzyme activity; many drugs and poisons act in this way. The standard model of enzyme kinetics consists of a two-step process in which an enzyme E binds reversibly to its substrate S (the reactant) to form an enzyme-substrate complex ES: E + S ⇌ ES. The enzyme-substrate complex plays a role similar to that of the activated complex in conventional kinetics, but the main function of the enzyme is to stabilize the transition state. In the second, essentially irreversible step, the product P and the enzyme are released: ES → E + P. The basic kinetic treatment of this process involves the assumption that the concentrations [E] and [ES] reach steady-state values which do not change with time. (The detailed treatment, which is beyond the scope of this course, can be found here.)
The overall process is described by the Michaelis-Menten equation, which is plotted here. The Michaelis constant K_M is defined as shown, but can be simplified to the ES dissociation constant k₋₁/k₁ in cases when dissociation of the complex is the rate-limiting step. The quantity V_max is not observed directly, but can be determined from the kinetic data as shown here. In order to understand enzymes and how they catalyze reactions, it is first necessary to review a few basic concepts relating to proteins and the amino acids of which they are composed. The 21 amino acids that make up proteins all possess the basic structure shown here, where R represents either hydrogen or a side chain which may itself contain additional –NH₂ or –COOH groups. Both kinds of groups can hydrogen-bond with water and with the polar parts of substrates, and therefore contribute to the amino acid's polarity and hydrophilic nature. Side chains that contain longer carbon chains and especially benzene rings have the opposite effect, and tend to render the amino acid non-polar and hydrophobic. Both the –NH₂ and –COOH groups are ionizable (i.e., they can act as proton donors or acceptors) and when they are in their ionic forms, they will have an electric charge. The –COOH groups have pKa's in the range 1.8-2.8, and will therefore be in their ionized forms –COO⁻ at ordinary cellular pH values of around 7.4. The amino group pKa's are around 8.8-10.6, so these will also normally be in their ionized forms –NH₃⁺. This means that at ordinary cellular pH, both the carboxyl and amino groups will be ionized. But because the charges have opposite signs, an amino acid that has no extra ionizable groups in its side chain will have a net charge of zero. But if the side chain contains an extra amino or carboxyl group, the amino acid can carry a net electric charge. The following diagram illustrates typical amino acids that fall into each of the groups described above.
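The Michaelis-Menten rate law described above, v = V_max[S]/(K_M + [S]), is easy to explore numerically. This is a minimal sketch; the V_max and K_M values in the example are arbitrary illustrative numbers.

```python
# Minimal sketch of the Michaelis-Menten rate law: v = Vmax*[S]/(Km + [S]).
def mm_rate(s, vmax, km):
    """Initial reaction velocity at substrate concentration s."""
    return vmax * s / (km + s)

# Two characteristic behaviors of the curve:
#  - at [S] = Km, the rate is exactly half of Vmax
#  - at [S] >> Km, the rate saturates toward Vmax (all enzyme tied up as ES)
v_half = mm_rate(2.0, vmax=10.0, km=2.0)   # [S] equal to Km
v_sat = mm_rate(1e9, vmax=10.0, km=2.0)    # [S] vastly greater than Km
```

This is why K_M can be read off a plot as the substrate concentration at which the velocity reaches half its maximum value.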
Proteins are made up of one or more chains of amino acids linked to each other through peptide bonds formed by elimination of a water molecule. The product shown above is called a peptide; specifically, it is a dipeptide because it contains two amino acid residues (what is left after the water has been removed). Proteins are simply very long polypeptide chains, or combinations of them. (The distinction between a long polypeptide and a small protein is rather fuzzy!) Most enzymes fall into the category of globular proteins. In contrast to the fibrous proteins that form the structural components of tissues, globular proteins are soluble in water and rarely have any systematic tertiary structures. They are made up of one or more amino-acid ("polypeptide") chains which fold into various shapes that can roughly be described as spherical, hence the term "globular", and the suffix "globin" that is frequently appended to their names, as in "hemoglobin". Protein folding is a spontaneous process that is influenced by a number of factors. One of these is obviously the primary amino-acid sequence, which facilitates formation of intramolecular bonds between amino acids in different parts of the chain. These consist mostly of hydrogen bonds, although disulfide bonds S–S between sulfur-containing amino acids are not uncommon. In addition to these intramolecular forces, interactions with the surroundings play an important role. The most important of these is the hydrophobic effect, which favors folding conformations in which polar amino acids (which form hydrogen bonds with water) are on the outside, while the so-called hydrophobic amino acids remain in protected locations within the folds. The catalytic process mediated by an enzyme takes place in a depression or cleft that exposes the substrate to only a few of the hundreds-to-thousands of amino acid residues in the protein chain.
The high specificity and activity of enzyme catalysis is sensitively dependent on the shape of this cavity and on the properties of the surrounding amino acids. In 1894, long before it was clear that enzymes are proteins, the German chemist Emil Fischer suggested the so-called lock-and-key model as a way of understanding how a given enzyme can act specifically on only one kind of substrate molecule. This model is essentially an elaboration of the one we still use for explaining heterogeneous catalysis. Although the basic lock-and-key model continues to be useful, it has been modified into what is now called the induced-fit model. This assumes that when the substrate enters the active site and interacts with the surrounding parts of the amino acid chain, it reshapes the active site (and perhaps other parts of the enzyme) so that it can engage more fully with the substrate. One important step in this process is to squeeze out any water molecules that are bound to the substrate and which would interfere with its optimal positioning. Within the active site, specific interactions between the substrate and appropriately charged, hydrophilic and hydrophobic amino acids of the active site then stabilize the transition state by distorting the substrate molecule in such a way as to lead to a transition state having a substantially lower activation energy than can be achieved by ordinary non-enzymatic catalysis. Beyond this point, the basic catalytic steps are fairly conventional, with acid/base and nucleophilic catalysis being the most common. For a very clear and instructive illustration of a multi-step sequence of a typical enzyme-catalyzed reaction, see Mark Bishop's online textbook, from which this illustration is taken. If all the enzymes in an organism were active all the time, the result would be runaway chaos.
Most cellular processes such as the production and utilization of energy, cell division, and the breakdown of metabolic products must operate in an exquisitely choreographed, finely-tuned manner, much like a large symphony orchestra; no place for jazz-improv here! Nature has devised various ways of achieving this; we described the action of precursors and coenzymes above. Here we focus on one of the most important (and chemically-interesting) regulatory mechanisms. There is an important class of enzymes that possess special sites (distinct from the catalytically active sites) to which certain external molecules can reversibly bind. Although these allosteric sites, as they are called, may be quite far removed from the catalytic sites, the effect of binding or release of these molecules is to trigger a rapid change in the folding pattern of the enzyme that alters the shape of the active site. The effect is to enable a signalling or regulatory molecule (often a very small one such as NO) to modulate the catalytic activity of the active site, effectively turning the enzyme on or off. In some instances, the product of an enzyme-catalyzed reaction can itself bind to an allosteric site, decreasing the activity of the enzyme and thus providing negative feedback that helps keep the product at the desired concentration. It is believed that concentrations of plasma ions such as calcium, and of energy-supplying ATP, are regulated in this way. Allosteric enzymes are more than catalysts: they act as control points in metabolic and cellular signalling networks. Allosteric enzymes frequently stand at the beginning of a sequence of enzymes in a metabolic chain, or at branch points where two such chains diverge, acting very much like traffic signals at congested intersections. As is the case with heterogeneous catalysts, certain molecules other than the normal substrate may be able to enter and be retained in the active site so as to competitively inhibit an enzyme's activity.
This is how penicillin and related antibiotics work; these molecules covalently bond to amino acid residues in the active site of an enzyme that catalyzes the formation of an essential component of bacterial cell walls. When the cell divides, the newly-formed progeny essentially fall apart. Enzymes have been widely employed in the food, pulp-and-paper, and detergent industries for a very long time, but mostly as impure whole-cell extracts. In recent years, developments in biotechnology and the gradual move of industry from reliance on petroleum-based feedstocks and solvents to so-called "green" chemistry have made enzymes more attractive as industrial catalysts. Compared with conventional catalysts, however, purified enzymes tend to be expensive, difficult to recycle, and unstable outside of rather narrow ranges of temperature, pH, and solvent composition. Many of the problems connected with the use of free enzymes can be overcome by immobilizing the enzyme. This can be accomplished in several ways:
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_ChemPRIME_(Moore_et_al.)/10%3A_Solids_Liquids_and_Solutions/10.02%3A_Solids
The most obvious distinguishing feature of a solid is its rigidity. In the image below, you see fool's gold, or pyrite. Like any typical solid, it is hard and rigid, especially when compared to a liquid or a gas. On the microscopic level this corresponds to strong forces between the atoms, ions, or molecules relative to the degree of motion of those particles. The only movements within a solid crystal lattice are relatively restricted vibrations about an average position. This restricted vibration is due to the tight packing of the atoms, as seen in the microscopic depiction of a solid below. Thus we often think and speak of crystalline solids as having atoms, ions, or molecules in fixed positions.
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Polymer_Chemistry_(Schaller)/04%3A_Polymer_Properties/4.06%3A_Microphase_Separation
Crystalline domains provide additional strength to polymer materials. The strong attraction possible between closely-aligned chains results in long segments of the polymer being held more firmly in position. Consequently, chain flow is more limited, and the material becomes more rigid. Sometimes, more rigid segments of a polymer are deliberately built into the structure. For example, in block co-polymers, softer, more flexible blocks are often paired with harder, more rigid blocks. The soft blocks may have greater conformational flexibility, or weaker intermolecular attractions between themselves, or both. The hard blocks may be more conformationally rigid or they may have stronger intermolecular attractions, such as strong dipoles or hydrogen bonds. If the block lengths are the right size, the two segments are able to separate into two phases. As a result of stronger intermolecular attractions, lengths of chains containing hard segments cluster together, pushing out the soft segments that would otherwise get in the way of these intermolecular attractions. This phenomenon is called microphase separation. The result is that the material contains islands of strength and rigidity in a matrix of flexible polymer chains. That can be a very useful combination. The flexible chains of the soft segments allow the polymer to be distorted, bent or compressed, but the hard segments put limits on that flexibility, keeping the material firmly together. Because we are usually dealing with very large numbers of enchained monomers, the difference between the two kinds of segments need not even be dramatic. A copolymer of butadiene and styrene, both hydrocarbons, can form microphase-separated materials. In this case, intermolecular attractions are dominated by weak London dispersion forces, but the aromatic groups of the styrene, with their delocalized pi systems, have London dispersion forces that are slightly stronger. As a result, the polystyrene blocks can cluster together, surrounded by the softer polybutadiene blocks.
Identify the hard segment and the soft segment in each of the following block-co-polymers. Sometimes, the separation between these phases can be directly observed via microscopy. Transmission electron microscopy (TEM) is a technique that can generate images of a cross-sectional slice of the material. The material is generally stained with a heavy metal, such as osmium, that binds preferentially to one phase or the other. The stained phase shows up darker under TEM than the phase that isn't stained. X-ray diffraction techniques can often be used to measure distances between hard segments. Small-angle X-ray scattering (SAXS) is very similar to wide-angle X-ray scattering (WAXS). Because of the inverse relationship between scattering angle and distance, SAXS is used to probe regularly repeating structures at greater distances than those seen in WAXS. That makes it possible to see peaks if the hard segments are distributed regularly enough within the soft matrix. Note that, in SAXS, the x-axis is usually labeled as q, the scattering vector: q = 4πsinθ/λ. But since 1/d = 2sinθ/λ, then q = 2π/d, or d = 2π/q. That gives us a pretty straightforward way of calculating distances between regularly-spaced hard segments (or any other regularly-spaced objects). Once again, just as in WAXS, there is an inverse relationship between the quantity shown on the x-axis and distances through space. Calculate the approximate distances revealed in the following SAXS results.
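The conversion d = 2π/q described above is a one-liner in practice. This sketch assumes q is reported in inverse ångströms, so d comes out in ångströms; the example q value is arbitrary.

```python
from math import pi

# Convert a SAXS peak position q into a real-space repeat distance d = 2*pi/q.
# Units: if q is in inverse angstroms, d is in angstroms.
def d_spacing(q):
    return 2 * pi / q

# Smaller q means larger spacing, reflecting the inverse relationship
# between scattering vector and real-space distance.
d_example = d_spacing(0.0628)   # a hypothetical peak near q = 0.0628 A^-1
```

A peak at q ≈ 0.0628 Å⁻¹ would therefore correspond to hard segments spaced roughly 100 Å apart.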
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Map%3A_Organic_Chemistry_(Smith)/05%3A_Stereochemistry/5.01%3A_Starch_and_Cellulose
The polysaccharides are the most abundant carbohydrates in nature and serve a variety of functions, such as energy storage or as components of plant cell walls. Polysaccharides are very large polymers composed of tens to thousands of monosaccharides joined together by glycosidic linkages. The three most abundant polysaccharides are starch, glycogen, and cellulose. These three are referred to as homopolysaccharides because each yields only one type of monosaccharide (glucose) after complete hydrolysis. Heteropolysaccharides may contain sugar acids, amino sugars, or noncarbohydrate substances in addition to monosaccharides. Heteropolysaccharides are common in nature (gums, pectins, and other substances) but will not be discussed further in this textbook. The polysaccharides are nonreducing carbohydrates, are not sweet tasting, and do not undergo mutarotation. Starch is the most important source of carbohydrates in the human diet and accounts for more than 50% of our carbohydrate intake. It occurs in plants in the form of granules, and these are particularly abundant in seeds (especially the cereal grains) and tubers, where they serve as a storage form of carbohydrates. The breakdown of starch to glucose nourishes the plant during periods of reduced photosynthetic activity. We often think of potatoes as a "starchy" food, yet other plants contain a much greater percentage of starch (potatoes 15%, wheat 55%, corn 65%, and rice 75%). Commercial starch is a white powder. Starch is a mixture of two polymers: amylose and amylopectin. Natural starches consist of about 10%–30% amylose and 70%–90% amylopectin. Amylose is a linear polysaccharide composed entirely of D-glucose units joined by the α-1,4-glycosidic linkages we saw in maltose (part (a) of Figure 5.1.1). Experimental evidence indicates that amylose is not a straight chain of glucose units but instead is coiled like a spring, with six glucose monomers per turn (part (b) of Figure 5.1.1).
When coiled in this fashion, amylose has just enough room in its core to accommodate an iodine molecule. The characteristic blue-violet color that appears when starch is treated with iodine is due to the formation of the amylose-iodine complex. This color test is sensitive enough to detect even minute amounts of starch in solution. Amylopectin is a branched-chain polysaccharide composed of glucose units linked primarily by α-1,4-glycosidic bonds but with occasional α-1,6-glycosidic bonds, which are responsible for the branching. A molecule of amylopectin may contain many thousands of glucose units with branch points occurring about every 25–30 units (Figure 5.1.2). The helical structure of amylopectin is disrupted by the branching of the chain, so instead of the deep blue-violet color amylose gives with iodine, amylopectin produces a less intense reddish brown. Dextrins are glucose polysaccharides of intermediate size. The shine and stiffness imparted to clothing by starch are due to the presence of dextrins formed when clothing is ironed. Because of their characteristic stickiness with wetting, dextrins are used as adhesives on stamps, envelopes, and labels; as binders to hold pills and tablets together; and as pastes. Dextrins are more easily digested than starch and are therefore used extensively in the commercial preparation of infant foods. The complete hydrolysis of starch yields, in successive stages, glucose: starch → dextrins → maltose → glucose. In the human body, several enzymes known collectively as amylases degrade starch sequentially into usable glucose units. Glycogen is the energy reserve carbohydrate of animals. Practically all mammalian cells contain some stored carbohydrates in the form of glycogen, but it is especially abundant in the liver (4%–8% by weight of tissue) and in skeletal muscle cells (0.5%–1.0%). Like starch in plants, glycogen is found as granules in liver and muscle cells.
When fasting, animals draw on these glycogen reserves during the first day without food to obtain the glucose needed to maintain metabolic balance. Note: About 70% of the total glycogen in the body is stored in muscle cells. Although the percentage of glycogen (by weight) is higher in the liver, the much greater mass of skeletal muscle stores a greater total amount of glycogen. Glycogen is structurally quite similar to amylopectin, although glycogen is more highly branched (8–12 glucose units between branches) and the branches are shorter. When treated with iodine, glycogen gives a reddish brown color. Glycogen can be broken down into its D-glucose subunits by acid hydrolysis or by the same enzymes that catalyze the breakdown of starch. In animals, the enzyme phosphorylase catalyzes the breakdown of glycogen to phosphate esters of glucose. Cellulose, a fibrous carbohydrate found in all plants, is the structural component of plant cell walls. Because the earth is covered with vegetation, cellulose is the most abundant of all carbohydrates, accounting for over 50% of all the carbon found in the vegetable kingdom. Cotton fibrils and filter paper are almost entirely cellulose (about 95%), wood is about 50% cellulose, and the dry weight of leaves is about 10%–20% cellulose. The largest use of cellulose is in the manufacture of paper and paper products. Although the use of noncellulose synthetic fibers is increasing, rayon (made from cellulose) and cotton still account for over 70% of textile production. Like amylose, cellulose is a linear polymer of glucose. It differs, however, in that the glucose units are joined by β-1,4-glycosidic linkages, producing a more extended structure than amylose (part (a) of Figure 5.1.3). This extreme linearity allows a great deal of hydrogen bonding between OH groups on adjacent chains, causing them to pack closely into fibers (part (b) of Figure 5.1.3). As a result, cellulose exhibits little interaction with water or any other solvent.
Cotton and wood, for example, are completely insoluble in water and have considerable mechanical strength. Because cellulose does not have a helical structure, it does not bind to iodine to form a colored product. Cellulose yields D-glucose after complete acid hydrolysis, yet humans are unable to metabolize cellulose as a source of glucose. Our digestive juices lack enzymes that can hydrolyze the β-glycosidic linkages found in cellulose, so although we can eat potatoes, we cannot eat grass. However, certain microorganisms can digest cellulose because they make the enzyme cellulase, which catalyzes the hydrolysis of cellulose. The presence of these microorganisms in the digestive tracts of herbivorous animals (such as cows, horses, and sheep) allows these animals to degrade the cellulose from plant material into glucose for energy. Termites also contain cellulase-secreting microorganisms and thus can subsist on a wood diet. This example once again demonstrates the extreme stereospecificity of biochemical processes.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/20%3A_Electrochemistry/20.E%3A_Exercises
reduction: SO₄²⁻(aq) + 9H⁺(aq) + 8e⁻ → HS⁻(aq) + 4H₂O(l); oxidation: C₆H₁₂O₆(aq) + 12H₂O(l) → 6HCO₃⁻(aq) + 30H⁺(aq) + 24e⁻; overall: C₆H₁₂O₆(aq) + 3SO₄²⁻(aq) → 6HCO₃⁻(aq) + 3H⁺(aq) + 3HS⁻(aq) oxidation: Zn(s) → Zn²⁺(aq) + 2e⁻; anode; overall: Zn(s) + 2H⁺(aq) → Zn²⁺(aq) + H₂(g) oxidation: H₂(g) → 2H⁺(aq) + 2e⁻; anode; overall: 2AgCl(s) + H₂(g) → 2H⁺(aq) + 2Ag(s) + 2Cl⁻(aq) oxidation: H₂(g) → 2H⁺(aq) + 2e⁻; anode; overall: 2Fe³⁺(aq) + H₂(g) → 2H⁺(aq) + 2Fe²⁺(aq) \(E^\circ_{\textrm{anode}} \\ E^\circ_{\textrm{cathode}} \\ E^\circ_{\textrm{cell}}\) \( \mathrm{Ni^{2+}(aq)}+\mathrm{2e^-}\rightarrow\mathrm{Ni(s)};\;-\textrm{0.257 V} \\ \mathrm{2H^+(aq)}+\mathrm{2e^-}\rightarrow\mathrm{H_2(g)};\textrm{ 0.000 V} \\ \mathrm{2H^+(aq)}+\mathrm{Ni(s)}\rightarrow\mathrm{H_2(g)}+\mathrm{Ni^{2+}(aq)};\textrm{ 0.257 V} \) 2Fe₂O₃·xH₂O(s) + 3C(s) → 4Fe(l) + 3CO₂(g) + 2xH₂O(g) Write the two half-reactions for this overall reaction. 5. This reaction has ΔH° = −2877 kJ/mol. Calculate E° and then determine ΔG°. Is this a spontaneous process? What is the change in entropy that accompanies this process at 298 K? Pb(s) + PbO₂(s) + 2H₂SO₄(aq) → 2PbSO₄(s) + 2H₂O(l) If you have a battery with an electrolyte that has a density of 1.15 g/cm³ and contains 30.0% sulfuric acid by mass, is the potential greater than or less than that of the standard cell?
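The free-energy questions in these exercises all rest on the standard relationship ΔG° = −nFE°. This sketch just encodes that relationship; the example cell (n = 2, E° = +1.10 V, roughly the Zn/Cu Daniell cell) is illustrative and not one of the exercises above.

```python
# Relationship between cell potential and free energy: dG = -n * F * E.
F = 96485.0  # Faraday constant, C per mole of electrons

def delta_g_kj(n, e_cell):
    """Standard free-energy change (kJ/mol) for an n-electron cell
    with standard potential e_cell (volts)."""
    return -n * F * e_cell / 1000.0

# A positive E° gives a negative dG°, i.e. a spontaneous cell reaction.
dg_example = delta_g_kj(2, 1.10)   # approx -212 kJ/mol
```

The sign convention is the key point: any cell with E° > 0 has ΔG° < 0 and runs spontaneously as written.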
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Reactions/Named_Reactions/Cope_Elimination
When a tertiary amine oxide bearing one or more beta hydrogens is heated, it is converted to an alkene. The reaction is known as the Cope elimination or Cope reaction, not to be confused with the Cope rearrangement. For example: The net reaction is an elimination, hence the name Cope elimination. mechanism: The Cope elimination is an intramolecular E2 reaction. It is also a pericyclic reaction. Intermolecular E2 reactions occur preferentially from the conformation of the substrate in which the leaving group and the beta hydrogen abstracted by the base are antiperiplanar, which is not possible in intramolecular E2 reactions in which the base is built into the leaving group, because the basic atom is too far away from the beta hydrogen anti to the leaving group. Intramolecular E2 reactions occur preferentially from the conformation of the substrate in which the leaving group and the beta hydrogen abstracted by the base are synperiplanar. The basic atom and the beta hydrogen abstracted by it are closest to each other in this conformation. For example: mechanism: The Cope elimination is regioselective. Unlike intermolecular E2 reactions, it does not follow Zaitsev's rule; the major product is always the least stable alkene, i.e., the alkene with the least highly substituted double bond. For example: This trend is most likely due to the fact that the less highly substituted β-carbon bears more hydrogen atoms than the more highly substituted one; at a given moment, in a sample of the substrate, there are more molecules in which a hydrogen atom on the less highly substituted beta carbon is synperiplanar to the leaving group than there are in which a hydrogen atom on the more highly substituted beta carbon is.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/13%3A_Acid-Base_Equilibria/13.02%3A_Strong_Monoprotic_Acids_and_Bases
Make sure you thoroughly understand the following essential concepts that have been presented above. To a good approximation, strong acids, in the forms we encounter in the laboratory and in much of the industrial world, have no real existence; they are all really solutions of \(\ce{H3O^{+}}\). So if you think about it, the labels on those reagent bottles you see in the lab are not strictly true! However, if the strong acid is highly diluted, the amount of \(\ce{H3O^{+}}\) it contributes to the solution becomes comparable to that which derives from the autoprotolysis of water. Under these conditions, we need to develop a more systematic way of working out equilibrium concentrations. A strong acid, you will recall, is one that is stronger than the hydronium ion \(\ce{H3O^{+}}\). This means that in the presence of water, the proton on a strong acid such as HCl will "fall" into the "sink" provided by \(\ce{H2O}\), converting the latter into its conjugate acid \(\ce{H3O^{+}}\). In other words, \(\ce{H3O^{+}}\) is the strongest acid that can exist in aqueous solution. As we explained in the preceding lesson, all strong acids appear to be equally strong in aqueous solution because there are always plenty of \(\ce{H2O}\) molecules to accept their protons. This is called the "leveling effect". This greatly simplifies our treatment of strong acids because there is no need to deal with equilibria such as that for hydrochloric acid \[\ce{HCl + H_2O → H_3O^{+} + Cl^{–}}\] The equilibrium constants for such reactions are so overwhelmingly large that we can usually consider the concentrations of acid species such as "HCl" to be indistinguishable from zero. As we will see further on, this is not strictly true for highly-concentrated solutions of strong acids (Figure \(\PageIndex{3}\)). Over the normal range of concentrations we commonly work with (indicated by the green shading on this plot), the pH of a strong acid solution is given by the negative logarithm of its concentration in mol L\(^{-1}\). 
Note that in very dilute solutions, the plot levels off, showing that this simple relation breaks down. What will be the pH of a 0.025 mol/L solution of hydrochloric acid? If we assume that all the hydronium ion originates from the added acid, then \([\ce{H3O^{+}}] = 0.025\; M\), and we just find the negative logarithm of the concentration \[pH = -\log_{10} [\ce{H3O^{+}}] = –\log_{10} 0.025 = 1.6\] Since this pH is so far away from 7, our assumption is reasonable. Mineral acids are those that are totally inorganic. Not all mineral acids are strong; boric and carbonic acids are common examples of very weak ones. However, in ordinary usage, the term often implies one of those described below. With the exception of perchloric acid, which requires special handling, these are all widely used in industry and are almost always found in chemistry laboratories. Most have been known since ancient times. You should know the names and formulas of all four of these widely-encountered strong mineral acids. There is a class of superacids that are stronger than some of the common mineral acids. According to the classical definition, a superacid is an acid with an acidity greater than that of 100% pure sulfuric acid. Some, like fluorosulfuric acid \(FSO_3H\), are commercially available. Strong superacids are prepared by the combination of a strong Lewis acid and a strong Brønsted acid. The strongest known superacid is fluoroantimonic acid (\(H_2FSbF_6\)). This acid is so corrosive that the fumes alone will attack fume hoods, glass and plastic beakers, human skin, bone and most synthetic compounds. The only strong bases that are commonly encountered are solutions of hydroxides, mainly NaOH and KOH. Unlike most metal hydroxides, these solids are highly soluble in water, and can thus yield concentrated solutions of hydroxide ion, the strongest base that can exist in water — the ultimate aquatic proton sink. 
Sodium hydroxide is by far the most important of these strong bases; its common names "lye" and "caustic soda" (or, in industry, often just "caustic") reflect the diverse uses of NaOH. Solid NaOH is usually sold in the form of pellets. When exposed to air, they become wet (deliquescence) and absorb CO₂, becoming contaminated with sodium carbonate. NaOH is the most soluble of the Group 1 hydroxides, dissolving in less than its own weight of water (111 g/100 mL, about 2.8 mol per 100 mL) at 20°C. However, as with strong acids, the pH of such a solution cannot be reliably calculated from such a high concentration. At higher concentrations, intermolecular interactions and ion-pairing can cause the effective concentration (known as the activity) of \(\ce{H3O^{+}}\) to deviate from the value corresponding to the nominal or "analytical" concentration of the acid. Activities are important because only these work properly in equilibrium calculations. Also, pH is defined as the negative logarithm of the hydrogen ion activity, not its concentration. The relation between the concentration of a species and its activity is expressed by the activity coefficient \(\gamma\): \[a = \gamma C\] As a solution becomes more dilute, \(\gamma\) approaches unity. At ionic concentrations not exceeding about 2 M, concentrations of typical strong acids can generally be used in place of activities without serious error. Note that activities of single ions other than \(\ce{H3O^{+}}\) cannot be determined, so activity coefficients in ionic solutions are always the average, or mean, of those for the ionic species present. This quantity is denoted as \(\gamma_±\). Because activities of single ions cannot be measured, these mean values are the closest we can get to \(\{H^+\}\) in solutions of strong acids. Activity is a practical consideration when dealing with strong mineral acids, which are available at concentrations of 10 M or greater. In a 12 M solution of hydrochloric acid, for example, the mean ionic activity coefficient is 17.25. 
This means that under these conditions, with \([\ce{H3O^{+}}]\) = 12 M, the activity \(\{\ce{H3O^{+}}\}\) = 12 × 17.25 = 207, corresponding to a pH of about –2.3, instead of the –1.1 that might be predicted if concentrations were being used. These very high activity coefficients also explain another phenomenon: why you can detect the odor of HCl over a concentrated hydrochloric acid solution even though this acid is supposedly 100% dissociated. It turns out that the activity {HCl} (which represents the "escaping tendency" of HCl from the solution) is almost 49,000 for a 10 M solution! The source of this gas is best described as the result of ion pairing. Similarly, in a solution prepared by adding 0.5 mole of the very strong acid HClO₄ to sufficient water to make the volume 1 liter, freezing-point depression measurements indicate that the concentrations of hydronium and perchlorate ions are only about 0.4 M. This does not mean that the acid is only 80% dissociated; there is no evidence of HClO₄ molecules in the solution. What has happened is that about 20% of the \(\ce{H3O^{+}}\) and \(\ce{ClO4^{-}}\) ions have formed ion-pair complexes in which the oppositely-charged species are loosely bound by electrostatic forces (Figure \(\PageIndex{4}\)). Ion-pairing reduces effective dissociation at high concentrations. If you have worked with concentrated hydrochloric acid in the lab, you will have noticed that the choking odor of hydrogen chloride gas is very apparent. How can this happen if this strong acid is really "100 percent dissociated" as all strong acids are said to be? At very high concentrations, too few \(\ce{H2O}\) molecules are available to completely fill the extended hydration shells that normally help keep the ions apart, reducing the fraction of "free" \(\ce{H3O^{+}}\) ions capable of acting independently. Under these conditions, the term "dissociation" begins to lose its meaning. 
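The activity arithmetic above takes only a couple of lines; this sketch uses the mean ionic activity coefficient of 17.25 quoted for 12 M HCl:

```python
import math

# pH from the hydrogen-ion activity {H3O+} = gamma * C
def pH_from_activity(C, gamma):
    return -math.log10(gamma * C)

print(round(pH_from_activity(12.0, 17.25), 1))  # about -2.3, as stated in the text
print(round(-math.log10(12.0), 1))              # -1.1 if concentration alone were used
```

The gap between the two numbers is exactly the point being made: at these concentrations, pH calculated from concentration alone is badly wrong.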
Although the concentration of HCl is never very high, its own activity coefficient can be as great as 2000 (Table \(\PageIndex{2}\)), which means that its escaping tendency from the solution is extremely high, so that the presence of even a tiny amount is very noticeable. Estimate the pH of a 10.0 M solution of hydrochloric acid in which the mean ionic activity coefficient \(\gamma_±\) is 10.4. pH = –log {H⁺} ≈ –log (10.4 × 10.0) = –log 104 = –2.0. Compare this result with what you would get by using –log [H⁺] = –1.0. In this section, we will derive expressions that relate the pH of a solution of a strong acid to its concentration in a solution of pure water. We will use hydrochloric acid as an example. When HCl gas is dissolved in water, the resulting solution contains the ions H₃O⁺, OH⁻, and Cl⁻. However, except in very concentrated solutions, the concentration of HCl is negligible; for all practical purposes, molecules of "hydrochloric acid", HCl, do not exist in dilute aqueous solutions. To specify the concentrations of the three species present in an aqueous solution of HCl, we need three independent relations between them. These relations are obtained by observing that certain conditions must always be true in any solution of HCl. These are: 1. The ion product of water must always be satisfied: \[[H_3O^+][OH^–] = K_w \label{4-1}\] 2. For any acid-base system, one can write a mass balance equation that relates the concentrations of the various dissociation products of the substance to its "nominal concentration", which we designate here as \(C_a\). For a solution of HCl, this equation would be \[[HCl] + [Cl^–] = C_a \label{4-2}\] However, since HCl is a strong acid and therefore no "HCl" exists in the solution, we can neglect the first term, so the mass balance equation becomes simply \[[Cl^–] = C_a \label{4-3}\] 3. In any ionic solution, the sum of the positive and negative electric charges must be zero; in other words, all solutions are electrically neutral. This is known as the electroneutrality condition. 
\[[H_3O^+] = [OH^–] + [Cl^–] \label{4-4}\] The next step is to combine these three equations into a single expression that relates the hydronium ion concentration to \(C_a\). This is best done by starting with an equation that relates several quantities, such as Equation \(\ref{4-4}\), and substituting the terms that we want to eliminate. Thus we can get rid of the [Cl⁻] term by substituting Equation \(\ref{4-3}\) into Equation \(\ref{4-4}\): \[[H_3O^+] = [OH^–] + C_a \label{4-5}\] The [OH⁻] term can be eliminated by use of Equation \(\ref{4-1}\): \[[H_3O^+] = C_a + \dfrac{K_w}{[H_3O^+]} \label{4-6}\] This equation tells us that the hydronium ion concentration will be the same as the nominal concentration of a strong acid as long as the solution is not very dilute. Notice that Equation \(\ref{4-6}\) is a quadratic equation. Recalling that \(K_w = 10^{-14}\), it is apparent that the final term of the above equation will ordinarily be very small in comparison to the other terms, so it can ordinarily be dropped, yielding the simple relation \[[H_3O^+] \approx C_a \label{4-7}\] Only in extremely dilute solutions, around \(10^{-6}\; M\) or below (where the plot curves), does this approximation become untenable. However, even then, the effect is tiny. After all, the hydronium ion concentration in a solution of a strong acid can never fall below \(10^{-7}\; M\); no amount of dilution can make the solution alkaline! So for almost all practical purposes, Equation \(\ref{4-7}\) is all you will ever need for a solution of a strong acid.
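The quadratic in Equation \(\ref{4-6}\) can be solved exactly, which shows both the ordinary regime (where pH = –log C) and the very dilute regime (where the pH approaches 7). A minimal sketch:

```python
import math

def strong_acid_pH(Ca, Kw=1e-14):
    """pH of a strong monoprotic acid of nominal concentration Ca (mol/L),
    from the quadratic [H3O+]^2 - Ca*[H3O+] - Kw = 0 (Equation 4-6)."""
    h = (Ca + math.sqrt(Ca**2 + 4 * Kw)) / 2  # positive root only
    return -math.log10(h)

print(round(strong_acid_pH(0.025), 2))  # 1.6, matching the worked example above
print(round(strong_acid_pH(1e-8), 2))   # ~6.98, not 8: dilution never makes the acid alkaline
```

The second call illustrates why Equation \(\ref{4-7}\) fails in extremely dilute solutions: the naive –log(10⁻⁸) = 8 would wrongly predict an alkaline solution.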
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Hallucinogenic_Drugs
Hallucinogenic agents, also called psychomimetic agents, are capable of producing hallucinations, sensory illusions and bizarre thoughts. The primary effect of these compounds is to consistently alter thought and sensory perceptions. Some of these drugs are used in medicine to produce model psychoses as aids in psychotherapy. Another purpose is to investigate the relationship of mind, brain, and biochemistry with the purpose of elucidating mental diseases such as schizophrenia. A large body of evidence links the action of hallucinogenic agents to effects at serotonin receptor sites in the central nervous system. Whether the receptor site is stimulated or blocked is not exactly known. The serotonin receptor site may consist of three polar or ionic areas to complement the structure of serotonin as shown in the graphic on the left. The drugs shown in the graphic can be isolated from natural sources: lysergic acid amide from morning glory seeds, psilocybin from the "magic mushroom", Psilocybe mexicana. The hallucinogenic molecules fit into the same receptors as the neuro-transmitter, and over-stimulate them, leading to false signals being created. Mescaline is isolated from a peyote cactus. The natives of Central America first made use of these drugs in religious ceremonies, believing the vivid, colorful hallucinations had religious significance. The Aztecs even had professional mystics and prophets who achieved their inspiration by eating the mescaline-containing peyote cactus (Lophophora williamsii). Indeed, the cactus was so important to the Aztecs that they named it teo-nancacyl, or "God's Flesh". This plant was said to have been distributed to the guests at the coronation of Montezuma to make the ceremony seem even more spectacular. LSD is one of the most powerful hallucinogenic drugs known. LSD stimulates centers of the sympathetic nervous system in the midbrain, which leads to pupillary dilation, increase in body temperature, and rise in the blood-sugar level. 
LSD also has a serotonin-blocking effect. The hallucinogenic effects of lysergic acid diethylamide (LSD) are also the result of the complex interactions of the drug with both the serotoninergic and dopaminergic systems. During the first hour after ingestion, the user may experience visual changes with extreme changes in mood. The user may also suffer impaired depth and time perception, with distorted perception of the size and shape of objects, movements, color, sound, touch and the user's own body image. Serotonin (5-hydroxytryptamine or 5-HT) is a monoamine neurotransmitter found in cardiovascular tissue, in endothelial cells, in blood cells, and in the central nervous system. The role of serotonin in neurological function is diverse, and there is little doubt that serotonin is an important CNS neurotransmitter. Although some of the serotonin is metabolized by monoamine oxidase, most of the serotonin released into the post-synaptic space is removed by the neuron through a reuptake mechanism inhibited by the tricyclic antidepressants and the newer, more selective antidepressant reuptake inhibitors such as fluoxetine and sertraline.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Structure_and_Properties_(Tro)/10%3A_Thermochemistry/10.11%3A_Lattice_Energy
The lattice energy, \(U\), is the amount of energy required to separate a mole of the solid (s) into a gas (g) of its ions. \[\ce{M_{a}X_{b}(s) \rightarrow a M^{b+}(g) + b X^{a-}(g)} \label{eq1}\] This quantity cannot be experimentally determined directly, but it can be estimated using a thermochemical cycle approach in the form of the Born-Haber cycle. It can also be calculated from the electrostatic consideration of its crystal structure. As defined in Equation \ref{eq1}, the lattice energy is positive, because energy is always required to separate the ions. For the reverse process of Equation \ref{eq1}: \[\ce{a M^{b+}(g) + b X^{a-}(g) \rightarrow M_{a}X_{b}(s)}\] the energy released is called the energy of crystallization (\(E_{cryst}\)). Therefore, \[U_{lattice} = - E_{cryst}\] Values of lattice energies for various solids have been given in the literature, especially for some common solids. Some are given here. The following trends are obvious at a glance of the data in Table \(\PageIndex{1}\): Estimating lattice energy using the Born-Haber cycle has been discussed in Ionic Solids. For a quick review, the following is an example that illustrates the estimate of the energy of crystallization of NaCl. Hsub of Na = 108 kJ/mol (heat of sublimation) D of Cl2 = 244 (bond dissociation energy) IP of Na(g) = 496 (ionization potential or energy) EA of Cl(g) = -349 (electron affinity of Cl) Hf of NaCl = -411 (enthalpy of formation) The Born-Haber cycle to evaluate \(E_{cryst}\) is shown below: \(E_{cryst}\) = -411 - (108 + 496 + 244/2) - (-349) kJ/mol = -788 kJ/mol. The value calculated for \(E_{cryst}\) depends on the data used. Data from various sources differ slightly, and so does the result. The lattice energy for NaCl most often quoted in other texts is about 765 kJ/mol. Compare with the method shown below. There are many other factors to be considered, such as covalent character and electron-electron interactions in ionic solids. But for simplicity, let us consider the ionic solids as a collection of positive and negative ions. 
In this simple view, an appropriate number of cations and anions come together to form a solid. The positive ions experience both attraction and repulsion from ions of opposite charge and ions of the same charge. As an example, let us consider the NaCl crystal. In the following discussion, let \(r\) be the distance between the Na⁺ and Cl⁻ ions. The nearest neighbors of Na⁺ are 6 Cl⁻ ions at a distance \(r\), 12 Na⁺ ions at a distance \(\sqrt{2}\,r\), 8 Cl⁻ at \(\sqrt{3}\,r\), 6 Na⁺ at \(2r\), 24 Cl⁻ at \(\sqrt{5}\,r\), and so on. Thus, the energy due to one ion is \[ E = -\dfrac{Z^2e^2}{4\pi\epsilon_or} M \label{6.13.1}\] The Madelung constant, \(M\), is a poorly converging series of interaction energies: \[ M= \dfrac{6}{1} - \dfrac{12}{\sqrt{2}} + \dfrac{8}{\sqrt{3}} - \dfrac{6}{2} + \dfrac{24}{\sqrt{5}} - \cdots \label{6.13.2}\] with \(M = 1.747558\) for NaCl. The above discussion is valid only for the sodium chloride (also called rock salt) structure type. This is a geometrical factor, depending on the arrangement of ions in the solid. The Madelung constant depends on the structure type, and its values for several structural types are given in Table 6.13.1, along with the number of anions coordinated to each cation and the number of cations coordinated to each anion. Madelung constants for a few more types of crystal structures are available from the Handbook Menu. There are other factors to consider for the evaluation of the energy of crystallization, and the treatment by Born and Landé led to the formula for the evaluation of the crystallization energy \(E_{cryst}\), for a mole of ionic solid: \[ E_{cryst} = -\dfrac{N M Z^2e^2}{4\pi \epsilon_o r} \left( 1 - \dfrac{1}{n} \right)\label{6.13.3a} \] where \(N\) is Avogadro's number (6.022×10²³ mol⁻¹), \(M\) is the Madelung constant, and \(n\) is the Born exponent, a number related to the electronic configurations of the ions involved. The \(n\) values and the electronic configurations (e.c.) of the corresponding inert gases are given below. The following values of \(n\) have been suggested for some common solids: Estimate the energy of crystallization for \(\ce{NaCl}\). 
Using the values given in the discussion above, the estimation is given by Equation \ref{6.13.3a}: \[ \begin{align*} E_{cryst} &= -\dfrac{(6.022 \times 10^{23}\,\text{mol}^{-1})\, (1.6022 \times 10^{-19}\,\text{C})^2\, (1.747558)}{ 4\pi \, (8.854 \times 10^{-12}\,\text{C}^2\,\text{J}^{-1}\,\text{m}^{-1})\, (282 \times 10^{-12}\,\text{m})} \left( 1 - \dfrac{1}{9.1} \right) \\[4pt] &= -766\; \text{kJ/mol} \end{align*}\] Much more should be considered in order to evaluate the lattice energy accurately, but the above calculation leads you to a good start. When methods to evaluate the energy of crystallization or lattice energy lead to reliable values, these values can be used in the Born-Haber cycle to evaluate other chemical properties, for example the electron affinity, which is really difficult to determine directly by experiment. Which one of the following has the largest lattice energy? LiF, NaF, CaF₂, AlF₃ Explain the trend of lattice energy. Which one of the following has the largest lattice energy? LiCl, NaCl, CaCl₂, Al₂O₃ Corundum, Al₂O₃, has some covalent character in the solid as well as the higher charge of the ions. Lime, CaO, is known to have the same structure as NaCl, and the edge length of the unit cell for CaO is 481 pm. Thus, the Ca-O distance is 241 pm. Evaluate the energy of crystallization, \(E_{cryst}\), for CaO. Evaluate the lattice energy and know what values are needed. Assume the interionic distance to be the same as that of NaCl (\(r\) = 282 pm), and assume the structure to be of the fluorite type (\(M\) = 2.512). Evaluate the energy of crystallization, \(E_{cryst}\). This number has not been checked. If you get a different value, please let me know.
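The worked NaCl estimate above can be reproduced numerically. This is a sketch of the same Born-Landé arithmetic, using only the constants quoted in the example:

```python
import math

# Born-Landé estimate of the energy of crystallization of NaCl
N_A = 6.022e23    # Avogadro's number, 1/mol
e = 1.6022e-19    # elementary charge, C
eps0 = 8.854e-12  # vacuum permittivity, C^2 J^-1 m^-1
M = 1.747558      # Madelung constant for the rock-salt structure
r = 282e-12       # Na-Cl interionic distance, m
n = 9.1           # Born exponent used in the example

E_cryst = -(N_A * M * e**2) / (4 * math.pi * eps0 * r) * (1 - 1/n)  # J/mol
print(round(E_cryst / 1000))  # -766 kJ/mol, matching the worked example
```

Swapping in \(M\) = 2.512 and the appropriate ionic charges would adapt the same sketch to the fluorite-type exercise above.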
https://chem.libretexts.org/Bookshelves/General_Chemistry/General_Chemistry_Supplement_(Eames)/Phases_and_Intermolecular_Forces/Phase_Changes
Most phase changes occur at specific temperature-pressure combinations. For instance, at atmospheric pressure, water melts at 0 °C and boils at 100 °C. In this section, we will talk about when and how they happen. The names of the different phase changes are shown below: We can predict the relative temperature at which phase changes will happen using intermolecular forces. If the intermolecular forces are strong, then the melting point and boiling point will be high. If the intermolecular forces are weak, the melting and boiling point will be low. London forces vary widely in strength based on the number of electrons present. The number of electrons is related to the molecular or atomic weight. Heavy elements or molecules, like iodine or wax, are solids at room temperature because they have relatively strong London forces, which correlate with big molecular weights. London forces are always present, but in small molecules or atoms, like helium, they are quite weak. Dipole-dipole forces are present in molecules with a permanent dipole. We can predict this by drawing a Lewis structure, identifying polar bonds using electronegativity, predicting the shape of the molecule, and seeing if the bond dipoles on different molecules can touch. If they can, there will be dipole-dipole forces. The bigger the dipoles (bigger electronegativity difference, etc.) and the closer together they can get, the bigger the dipole-dipole forces are. Hydrogen bonds occur only when there are H atoms bonded to N, O, or F and lone pairs on N, O, or F. Look for both of these in molecules to see if they can hydrogen bond. If you can find which types of intermolecular forces are present in a molecule, you can make some guesses about which molecules have higher or lower melting or boiling points. For instance, let's compare methane (\(\ce{CH4}\)), silane (\(\ce{SiH4}\)), hydrogen sulfide (\(\ce{H2S}\)) and water (\(\ce{H2O}\)). 
Methane and silane are non-polar, because of the tetrahedral shape and also the small electronegativity differences. Because these don't have dipole-dipole forces, the boiling point will depend on how strong the London forces are. Silane is heavier, so it has bigger London forces and a higher boiling point. Between water and hydrogen sulfide, both are polar and have dipole-dipole forces, so they have higher boiling points than methane or silane. But water has hydrogen bonds, which are extra-strong dipole-dipole forces. Water boils much hotter than hydrogen sulfide. We can't really explain phase changes in terms of energy without entropy, which we haven't talked about yet. For now, we can just say that as we add energy to a substance, it usually gets hotter and the particles have more kinetic energy. This will make it easier for them to go from solid to liquid, or liquid to gas. Gases have more energy than liquids, which have more energy than solids. As we increase the temperature, the stable form of the substance goes from solid to liquid to gas. The transition temperatures (melting point, boiling point) are the temperatures at which both phases are stable and in equilibrium. Actually, there will be some gas in equilibrium with solid and liquid all the time, because a few molecules can always escape the solid/liquid, but the solid or liquid won't be present above certain temperatures. For instance, imagine heating a solid. The molecules start moving more, and the temperature increases as predicted by the heat capacity. At some point, they have so much energy that it's hard for them to stay in the orderly solid, so the solid starts to melt. As we add more heat, the temperature doesn't change, because all the heat we add goes into melting the solid. The solid can't get any hotter than it is, and the liquid can't increase its temperature because its kinetic energy is absorbed to melt the remaining solid. The amount of energy needed to melt the solid is the heat of fusion. 
When all the solid is melted, if we keep adding heat, the temperature will rise again. As the temperature rises, the vapor pressure increases, because more molecules have enough kinetic energy to escape. Still, most of the molecules are in the liquid form, because the total pressure pushes on the liquid and keeps it from expanding into a gas. When the temperature increases to the boiling point, the vapor pressure will be equal to the outside pressure. Now, because the vapor pressure is equal to the atmospheric pressure, bubbles form in the liquid. It can expand into a gas, because its pressure is the same as the atmospheric pressure. The temperature will stay constant again as all the liquid becomes gas, while you add the heat of vaporization. Then if you keep heating, the temperature of the gas will increase. This is shown in the diagram below.
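The heating-curve reasoning above can be sketched numerically. The specific heat and the heats of fusion and vaporization used here are standard textbook values for water, not figures from this page:

```python
# Heat required to take 10 g of ice at 0 °C all the way to steam at 100 °C.
# Assumed textbook values for water:
#   heat of fusion 334 J/g, heat of vaporization 2260 J/g,
#   specific heat of liquid water 4.18 J/(g·K)
m = 10.0                 # grams of water
q_melt = m * 334         # temperature stays at 0 °C while the solid melts
q_warm = m * 4.18 * 100  # liquid warms from 0 to 100 °C (q = m c ΔT)
q_boil = m * 2260        # temperature stays at 100 °C while the liquid boils
total = q_melt + q_warm + q_boil
print(round(total / 1000, 1))  # about 30.1 kJ in total
```

Note that the two flat segments of the curve (melting and boiling) together account for most of the heat, even though the temperature does not change during them.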
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/02%3A_The_Hydrosphere/2.01%3A_Water_Water_Everywhere...
Water is the most abundant substance at the earth's surface. Almost all of it is in the oceans, which cover 70% of the surface area of the earth. However, the amounts of water present in the atmosphere and on land (as surface runoff, lakes and streams) are great enough to make it a significant agent in transporting substances between the continents and the oceans. Water interacts with both the atmosphere and the lithosphere, acquiring solutes from each, and thus provides the major chemical link between these two realms. The various transformations undergone by water through the different stages of the hydrologic cycle act to transport both dissolved and particulate substances between different geographic locations. The composition of the ocean has attracted the attention of some of the more famous names in science, including Robert Boyle, Antoine Lavoisier and Edmund Halley. Their early investigations tended to be difficult to reproduce, owing to the different conditions under which they crystallized the various salts. As many as 54 salts, double salts and hydrated salts can be obtained by evaporating seawater to dryness. At least 73 elements are now known to be present in seawater. The best way of characterizing seawater is in terms of its ionic content, shown above. The remarkable thing about seawater is the constancy of its relative ionic composition. The overall salt content, known as the salinity (grams of salts contained in 1 kg of seawater), varies slightly within the range of 32-37.5 g/kg, corresponding to a solution of about 0.7 mol/L total salt content. The ratios of the concentrations of the different ions, however, are quite constant, so that a measurement of the Cl⁻ concentration is sufficient to determine the overall composition and total salinity. Although most elements are found in seawater only at trace levels, marine organisms may selectively absorb them and make them more detectable. 
Iodine, for example, was discovered in marine algae (seaweeds) 14 years before it was found in seawater. Other elements that were not detected in seawater until after they were found in marine organisms include barium, cobalt, copper, lead, nickel, silver and zinc. Radioactive ³²Si, presumably deriving from cosmic-ray bombardment of Ar, has been discovered in marine sponges. Reflecting this constant ionic composition is the pH, which is usually maintained in the narrow range of 7.8-8.2, compared with 1.5 to 11 for fresh water. The major buffering action derives from the carbonate system, although ion exchange between Na⁺ in the water and H⁺ in clay sediments has recently been recognized to be a significant factor. The major ionic constituents whose concentrations can be determined from the salinity are known as conservative substances. Their constant relative concentrations are due to the large amounts of these species in the oceans in comparison to their small inputs from river flow. This is another way of saying that their residence times are very large. A number of other species, mostly connected with biological activity, are subject to wide variations in concentration. These include the nutrients NO₃⁻, NO₂⁻, NH₄⁺, and HPO₄²⁻, which may become depleted near the surface in regions of warmth and light. As was explained in the preceding subsection on coastal upwelling, offshore prevailing winds tend to drive western coastal surface waters out to sea, causing deeper and more nutrient-rich water to be drawn to the surface. This upwelled water can support a large population of phytoplankton and thus of zooplankton and fish. The best-known example of this is the anchovy fishery off the coast of Peru, but the phenomenon occurs to some extent on the west coasts of most continents, including our own. Other non-conservative components include Ca²⁺ and SiO₂. These ions are incorporated into the solid parts of marine organisms, which sink to greater depths after the organisms die. 
The silica gradually dissolves, since the water is everywhere undersaturated in this substance. Calcium carbonate dissolves at intermediate depths, but may reprecipitate in deep waters owing to the higher pressure. Thus the concentrations of Ca²⁺ and of SiO₂ tend to vary with depth. The gases O₂ and CO₂, being intimately involved with biological activity, are also non-conservative, as are N₂O and CO. Most of the organic carbon in seawater is present as dissolved material, with only about 1-2% in particulates. The total organic carbon content ranges between 0.5 mg/L in deep water and 1.5 mg/L near the surface. There is still considerable disagreement about the composition of the dissolved organic matter; much of it appears to be of high molecular weight, and may be polymeric. Substances qualitatively similar to the humic acids found in soils can be isolated. The greenish color that is often associated with coastal waters is due to a mixture of fluorescent, high molecular weight substances of undetermined composition known as Gelbstoff. It is likely that the significance of the organic fraction of seawater may be much greater than its low abundance would suggest. For one thing, many of these substances are lipid-like and tend to adsorb onto surfaces. It has been shown that any particle entering the ocean is quickly coated with an organic surface film that may influence the rate and extent of its dissolution or decomposition. Certain inorganic ions may be strongly complexed by humic-like substances. The surface of the ocean is mostly covered with an organic film, only a few molecular layers thick. This is believed to consist of hydrocarbons, lipids, and the like, but glycoproteins and proteoglycans have been reported. If this film is carefully removed from a container of seawater, it will quickly be reconstituted. How significant this film is in its effects on gas exchange with the atmosphere is not known. 
The salinity of the ocean appears to have been about the same for at least the last 200 million years. There have been changes in the relative amounts of some species, however; the ratio of Na/K has increased from about 1:1 in ancient ocean sediments to its present value of 28:1. Incorporation of calcium into sediments by the action of marine organisms has depleted the Ca/Mg ratio from 1:1 to 1:3. If the composition of the ocean has remained relatively unchanged with time, the continual addition of new mineral substances by the rivers and other sources must be exactly balanced by their removal as sediment, possibly passing through one or more biological systems in the process. In 1715 Edmund Halley suggested that the age of the ocean (and thus presumably of the world) might be estimated from the rate of salt transport by rivers. When this measurement was actually carried out in 1899, it gave an age of only 90 million years. This is somewhat better than the calculation made in 1654 by James Ussher, the Anglican Archbishop of Armagh, Ireland, based on his interpretation of the Biblical book of Genesis, that the world was created at 9 A.M. on October 23, 4004 BC, but it is still far too recent, being about when the dinosaurs became extinct. What Halley actually described was the residence time, which is about right for Na but much too long for some of the minor elements of seawater. The commonly stated view that the salt content of the oceans derives from surface runoff that contains the products of weathering and soil leaching is not consistent with the known compositions of the major river waters (See Table). The halide ions are particularly over-represented in seawater, compared to fresh water. These were once referred to as "excess volatiles", and were attributed to volcanic emissions. 
With the discovery of plate tectonics, it became apparent that the locations of seafloor spreading at which fresh basalt flows up into the ocean from the mantle are also sources of mineral-laden water. Some of this may be seawater that has cycled through a hot porous region and has been able to dissolve some of the mineral material owing to the high temperature. Much of the water, however, is “juvenile” water that was previously incorporated into the mantle material and has never before been in the liquid phase. The substances introduced by this means (and by volcanic activity) are just the elements that are “missing” from river waters. Estimates of what fraction of the total volume of the oceans is due to juvenile water (most of it added in the early stages of mantle differentiation that began a billion years ago) range from 30 to 90%. The oceans can be regarded as a product of a giant acid-base titration in which the carbonic acid present in rain reacts with the basic materials of the lithosphere. The juvenile water introduced at locations of ocean-floor spreading is also acidic, and is partly neutralized by the basic components of the basalt with which it reacts. Surface rocks mostly contain aluminum, silicon and oxygen combined with alkali and alkaline-earth metals, mainly potassium, sodium and calcium. The CO₂ and volcanic gases in rainwater react with this material to form a solution of the metal ion and HCO₃⁻, in which is suspended some hydrated SiO₂. The solid material left behind is a clay such as kaolinite, Al₂Si₂O₅(OH)₄. This first forms as a friable coating on the surface of the weathered rock; later it becomes a soil material, then an alluvial deposit, and finally it may reach the sea as a suspended sediment. Here it may undergo a number of poorly-understood transformations to other clay sediments such as illites.
Sea floor spreading eventually transports these sediments to a subduction region under a continental block, where the high temperatures and pressures permit reactions that transform them into hard rock such as granite, thus completing the geochemical cycle. Deep-sea hydrothermal vents are now recognized to be another significant route for both the addition and removal of ionic substances from seawater. Although the relative concentrations of most of the elements in seawater are constant throughout the oceans, there are certain elements that tend to have highly uneven distributions vertically, and to a lesser extent horizontally. Neglecting the highly localized effects of undersea springs and volcanic vents, these variations are direct results of the removal of these elements from seawater by organisms; if the sea were sterile, its chemical composition would be almost uniform. Plant life can exist only in the upper part of the ocean where there is sufficient light available to drive photosynthesis. These plants, together with the animals that consume them, extract nutrients from the water, reducing the concentrations of certain elements in the upper part of the sea. When these organisms die, they fall toward the lower depths of the ocean as particulate material. On the way down, some of the softer particles, deriving from tissue, may be consumed by other animals and recycled. Eventually, however, the nutrient elements that were incorporated into organisms in the upper part of the ocean will end up in the colder, dark, and essentially lifeless lower part. Mixing between the upper and lower reservoirs of the ocean is quite slow, owing to the higher density of the colder water; the average residence time of a water molecule in the lower reservoir is about 1600 years. Since the volume of the upper reservoir is only about 1/20 of that of the lower, a water molecule stays in the upper reservoir for only about 80 years.
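The two residence-time figures quoted above are simple proportionality: with a fixed rate of exchange between the reservoirs, residence time scales with reservoir volume. As a quick check of the numbers:

```python
# Residence time scales with reservoir volume at a fixed exchange rate.
lower_residence_yr = 1600            # quoted for the deep reservoir
volume_ratio_upper_to_lower = 1 / 20 # quoted volume ratio

upper_residence_yr = lower_residence_yr * volume_ratio_upper_to_lower
print(upper_residence_yr)  # 80.0 years, as stated
```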
Except for dissolved oxygen, all elements required by living organisms are depleted in the upper part of the ocean with respect to the lower part. In the case of the major nutrients P, N and Si, the degree of depletion is sufficiently complete (around 95%) to limit the growth of organisms at the surface. These three elements are said to be biolimiting. A few other biointermediate elements show partial depletion in surface waters: Ca (1%), C (15%), Ba (75%). The organic component of plants and animals has the average composition C₈₀N₁₅P. It is remarkable that the ratio of N:P in seawater (both surface and deep) is also 15:1; this raises the interesting question of to what extent the ocean and life have co-evolved. In the deep part of the ocean the elemental ratio corresponds to C₈₀₀N₁₅P, but of course with much larger absolute amounts of these elements. Eventually some of this deeper water returns to the surface, where the N and P are quickly taken up by plants. But since plants can only utilize 80 out of every 800 carbon atoms, 90 percent of the carbon will remain in dissolved form, mostly as HCO₃⁻. To work out the balance of Ca and Si used in the hard parts of organisms, we add these elements to the average composition of the lower reservoir, giving a formula of the form Ca₃₂₀₀SiₓC₈₀₀N₁₅P (the silicon subscript is far smaller than that of calcium). Particulate carbon falls into the deep ocean in the ratio of about two atoms in organic tissue to one atom in the form of calcite. This makes the overall composition of detrital material something like C₁₂₀N₁₅P; i.e., 80 organic C’s and 40 in CaCO₃. Accompanying these 40 calcite units will be 40 Ca atoms, but this represents a minor depletion of the 3200 Ca atoms that eventually return to the surface, so this element is only slightly depleted in the upper waters. Silicon, being far less abundant, is depleted to a much greater extent. A continual rain of particulate material from dead organisms falls through the ocean.
This shower is comprised of three major kinds of material: calcite (CaCO₃), silica (SiO₂), and organic matter. The first two come from the hard parts of both plants and animals (mainly microscopic animals such as foraminifera and radiolarians). The organic matter is derived mainly from the soft tissues of organisms, and from animal fecal material. Some of this solid material dissolves before it reaches the ocean floor, but not usually before it enters the deep ocean, where it will remain for about 1600 years. The remainder of this material settles onto the floor of the sea, where it forms one component of a layer of sediments that provide important information about the evolution of the sea and of the earth. Over a short time scale of months to years, these sediments are in quasi-equilibrium with the seawater. On a scale of millions of years, the sediments are merely way-stations in the geochemical cycling of material between the earth’s surface and its interior. The oceanic sediments have three main origins, but our main interest lies with the silica and calcium carbonate, since these substances form a crucial part of the biogeological cycle. Also, their distributions in the ocean are not uniform, a fact that must tell us something. The skeletons of diatoms and radiolarians are the principal sources of silica sediments. Since the ocean is everywhere undersaturated with respect to silica, only the most resistant parts of these skeletons reach the bottom of the deep ocean and get incorporated into sediments. Silica sediments are less common in the Atlantic ocean, owing to its lower content of dissolved silica. The parts of the ocean where these sediments are increasing most rapidly correspond to regions of upwelling, where deep water that is rich in dissolved silica rises to the surface, where the silica is rapidly fixed by organisms. Where upwelling is absent, the growth of the organisms is limited, and little silica is precipitated.
Since deep waters tend to flow from the Atlantic into the Pacific ocean where most of the upwelling occurs, Atlantic waters are depleted in silica, and silica sediments are not commonly found in this ocean. For calcium carbonate, the situation is quite different. In the first place, surface waters are everywhere supersaturated with respect to both calcite and aragonite, the two common crystal forms of CaCO₃. Secondly, Ca²⁺ and HCO₃⁻ are never limiting factors in the growth of the coccoliths (plants) and forams (animals) that precipitate CaCO₃; their production depends on the availability of phosphate and nitrogen. Because these elements are efficiently recycled before they fall into the deep ocean, their supply does not depend on upwelling, and so the production of solid CaCO₃ is more uniformly distributed over the world’s oceans. More importantly, however, the chances that a piece of carbonate skeleton will end up as sediment will be highly dependent on both the local CO₂ concentration and the depth of the ocean floor. These factors give rise to variations in the accumulation of carbonate sediments that can be quite wide-ranging. New crust is being generated and moving away from the crests of the mid-ocean ridges at a rate of a few centimetres per year. Although the crests of these ridges are relatively high points, projecting to within about 3000 m of the surface, the continual injection of new material prevents sediments from accumulating in these areas. Farther from the crests, carbonate sediments do build up, eventually reaching a thickness of about 500 m, but by this time the elevation has dropped off below the saturation horizon, so from this point on the carbonate sediments are overlaid by red clay. If we drill a hole down through a part of the ocean floor that is presently below the saturation horizon, the top part of the drill core will consist of clay, followed by CaCO₃ at greater depths. The core may also contain regions in which silica predominates.
Since silica production is very high in equatorial regions, the appearance of such a layer suggests that this particular region of the oceanic crust has moved across the equator.
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Carbohydrates/Disaccharides/Sucrose
Sucrose or table sugar is obtained from sugar cane or sugar beets. Sucrose is made from glucose and fructose units, which are joined by an acetal oxygen bridge in the alpha orientation. The structure is easy to recognize because it contains the six-membered ring of glucose and the five-membered ring of fructose. To recognize glucose, look for the horizontal projection of the -OH on carbon # 4. The alpha acetal is really part of a double acetal, since the two monosaccharides are joined at the hemiacetal of glucose and the hemiketal of the fructose. There are no hemiacetals remaining in the sucrose, and therefore sucrose is a non-reducing sugar. Sugar, or more specifically sucrose, is a carbohydrate that occurs naturally in every fruit and vegetable. It is the major product of photosynthesis, the process by which plants transform the sun's energy into food. Sugar occurs in greatest quantities in sugar cane and sugar beets, from which it is separated for commercial use. In the first stage of processing, the natural sugar stored in the cane stalk or beet root is separated from the rest of the plant material by physical methods. For sugar cane, this is accomplished by crushing the stalks to extract the juice, which is then clarified, concentrated, and crystallized into raw sugar. Beet sugar processing is similar, but it is done in one continuous process without the raw sugar stage. The sugar beets are washed, sliced and soaked in hot water to separate the sugar-containing juice from the beet fiber. The sugar-laden juice is purified, filtered, concentrated and dried in a series of steps similar to cane sugar processing. Carbon # 1 (red on left) is called the anomeric carbon and is the center of an acetal functional group. A carbon that has two ether oxygens attached is an acetal. The alpha orientation is defined as the ether oxygen being on the opposite side of the ring from C # 6. In the chair structure this results in a downward projection. This is the same definition as the alpha -OH in a hemiacetal. A second acetal grouping is defined by the green atoms.
This results because the formation reaction of the disaccharide is between the hemiacetal of glucose and the hemiketal of the fructose. When sucrose is hydrolyzed it forms a 1:1 mixture of glucose and fructose. This mixture is the main ingredient in honey. It is called invert sugar because the angle of the specific rotation of the plane-polarized light changes from a positive to a negative value owing to the optical isomers in the mixture of glucose and fructose sugars. In the hydrolysis of any di- or polysaccharide, a water molecule helps to break the acetal bond, as shown in red. The acetal bond is broken, the H from the water is added to the oxygen on the glucose, and the -OH is then added to the carbon on the fructose. Is glucose alpha or beta? The -OH on carbon # 1 is projected down; therefore, alpha. Is fructose alpha or beta? The -OH on carbon # 1 is projected down and is on the same side of the ring as C # 6 (extreme right on fructose); therefore, beta.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Exercises%3A_General_Chemistry/Exercises%3A_Brown_et_al./10.E%3A_Gases_(Exercises)
Explain the differences between the microscopic and the macroscopic properties of matter. Is the boiling point of a compound a microscopic or macroscopic property? Molecular mass? Why? How do the microscopic properties of matter influence the macroscopic properties? Can you relate molecular mass to boiling point? Why or why not? For a substance that has gas, liquid, and solid phases, arrange these phases in order of increasing density. Which elements of the periodic table exist as gases at room temperature and pressure? Of these, which are diatomic molecules and which are monatomic? Which elements are liquids at room temperature and pressure? Which portion of the periodic table contains elements whose binary hydrides are most likely gases at room temperature? What four quantities must be known to completely describe a sample of a gas? What units are commonly used for each quantity? If the applied force is constant, how does the pressure exerted by an object change as the area on which the force is exerted decreases? In the real world, how does this relationship apply to the ease of driving a small nail versus a large nail? As the force on a fixed area increases, does the pressure increase or decrease? With this in mind, would you expect a heavy person to need smaller or larger snowshoes than a lighter person? Explain. What do we mean by barometric pressure? Is the barometric pressure at the summit of Mt. Rainier greater than or less than the pressure in Miami, Florida? Why? Which has the highest barometric pressure—a cave in the Himalayas, a mine in South Africa, or a beach house in Florida? Which has the lowest? Mars has an average barometric pressure of 0.007 atm. Would it be easier or harder to drink liquid from a straw on Mars than on Earth? Explain your answer. Is the pressure exerted by a 1.0 kg mass on a 2.0 m² area greater than or less than the pressure exerted by a 1.0 kg mass on a 1.0 m² area?
What is the difference, if any, between the pressure of the atmosphere exerted on a 1.0 m² piston and a 2.0 m² piston? If you used water in a barometer instead of mercury, what would be the major difference in the instrument? Calculate the pressure in pascals and in atmospheres exerted by a carton of milk that weighs 1.5 kg and has a base of 7.0 cm × 7.0 cm. If the carton were lying on its side (height = 25 cm), would it exert more or less pressure? Explain your reasoning. If barometric pressure at sea level is 1.0 × 10⁵ Pa, what is the mass of air in kilograms above a 1.0 cm² area of your skin as you lie on the beach? If barometric pressure is 8.2 × 10⁴ Pa on a mountaintop, what is the mass of air in kilograms above a 4.0 cm² patch of skin? Complete the following table: The SI unit of pressure is the pascal, which is equal to 1 N/m². Show how the product of the mass of an object and the acceleration due to gravity results in a force that, when exerted on a given area, leads to a pressure in the correct SI units. What mass in kilograms applied to a 1.0 cm² area is required to produce a given pressure? If you constructed a manometer to measure gas pressures over the range 0.60–1.40 atm using the liquids given in the following table, how tall a column would you need for each liquid? The density of mercury is 13.5 g/cm³. Based on your results, explain why mercury is still used in barometers, despite its toxicity. Sketch a graph of the volume of a gas versus the pressure on the gas. What would the graph of V versus P look like if volume was directly proportional to pressure? What properties of a gas are described by Boyle’s law, Charles’s law, and Avogadro’s law? In each law, what quantities are held constant? Why does the constant in Boyle’s law depend on the amount of gas used and the temperature at which the experiments are carried out? Use Charles’s law to explain why cooler air sinks.
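The milk-carton exercise above is a direct application of P = F/A. A minimal worked sketch (g = 9.81 m/s² is assumed; the masses and dimensions are from the problem):

```python
# Pressure exerted by a 1.5 kg milk carton with a 7.0 cm x 7.0 cm base: P = F/A.
g = 9.81                       # m/s^2, assumed value
m = 1.5                        # kg
area_upright = 0.070 * 0.070   # m^2
f = m * g                      # weight in newtons

p_upright_pa = f / area_upright
p_upright_atm = p_upright_pa / 101325

# Lying on its side the supporting face is 7.0 cm x 25 cm, so the area is
# larger and the pressure is smaller.
area_side = 0.070 * 0.25
p_side_pa = f / area_side

print(f"upright: {p_upright_pa:.0f} Pa = {p_upright_atm:.4f} atm")
print(f"on its side: {p_side_pa:.0f} Pa")
```

The same F = ma bookkeeping answers the following question about SI units: kg · (m/s²) gives newtons, and N/m² gives pascals.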
Use Boyle’s law to explain why it is dangerous to heat even a small quantity of water in a sealed container. A 1.00 mol sample of gas at 25°C and 1.0 atm has an initial volume of 22.4 L. Calculate the results of each change, assuming all the other conditions remain constant. A 1.00 mol sample of gas is at 300 K and 4.11 atm. What is the volume of the gas under these conditions? The sample is compressed to 6.0 atm at constant temperature, giving a volume of 3.99 L. Is this result consistent with Boyle’s law? For an ideal gas, is volume directly proportional or inversely proportional to temperature? What is the volume of an ideal gas at absolute zero? What is meant by STP? If a gas is at STP, what further information is required to completely describe the state of the gas? For a given amount of a gas, the volume, temperature, and pressure under any one set of conditions are related to the volume, the temperature, and the pressure under any other set of conditions by the equation P₁V₁/T₁ = P₂V₂/T₂. Derive this equation from the ideal gas law. At constant temperature, this equation reduces to one of the gas laws discussed previously; which one? At constant pressure, this equation reduces to one of the laws discussed in Section 10.3; which one? Predict the effect of each change on one variable if the other variables are held constant. What would the ideal gas law be if the following were true? Given the following initial and final values, what additional information is needed to solve the problem using the ideal gas law? Given the following information and using the ideal gas law, what equation would you use to solve the problem? Using the ideal gas law as a starting point, derive the relationship between the density of a gas and its molar mass. Which would you expect to be denser—nitrogen or oxygen? Why does radon gas accumulate in basements and mine shafts?
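For the 1.00 mol sample at 300 K and 4.11 atm above, the ideal gas law gives the volume directly, and Boyle's law provides the constant-temperature comparison the exercise asks for. A sketch:

```python
# Volume of 1.00 mol of an ideal gas at 300 K and 4.11 atm: V = nRT/P.
R = 0.082057  # L*atm/(mol*K)
n, T, P1 = 1.00, 300.0, 4.11

V1 = n * R * T / P1
print(f"V1 = {V1:.2f} L")  # about 5.99 L

# Boyle's-law check for the compression step: at constant T, P1*V1 = P2*V2.
P2, V2_stated = 6.0, 3.99
V2_boyle = P1 * V1 / P2
print(f"Boyle's law predicts V2 = {V2_boyle:.2f} L versus the stated {V2_stated} L")
```

Comparing the Boyle's-law prediction with the stated 3.99 L is exactly the consistency test the question intends.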
Use the ideal gas law to derive an equation that relates the remaining variables for a sample of an ideal gas if the following are held constant. Tennis balls that are made for Denver, Colorado, feel soft and do not bounce well at lower altitudes. Use the ideal gas law to explain this observation. Will a tennis ball designed to be used at sea level be harder or softer and bounce better or worse at higher altitudes? Calculate the number of moles in each sample at STP. Calculate the number of moles in each sample at STP. Calculate the mass of each sample at STP. Calculate the mass of each sample at STP. Calculate the volume in liters of each sample at STP. Calculate the volume in liters of each sample at STP. Calculate the volume of each gas at STP. Calculate the volume of each gas at STP. An 8.60 L tank of nitrogen gas at a pressure of 455 mmHg is connected to an empty tank with a volume of 5.35 L. What is the final pressure in the system after the valve connecting the two tanks is opened? Assume that the temperature is constant. 281 mmHg At constant temperature, what pressure in atmospheres is needed to compress 14.2 L of gas initially at 25.2 atm to a volume of 12.4 L? What pressure is needed to compress 27.8 L of gas to 20.6 L under similar conditions? One method for preparing hydrogen gas is to pass HCl gas over hot aluminum; the other product of the reaction is AlCl₃. If you wanted to use this reaction to fill a balloon with a volume of 28,500 L at sea level and a temperature of 78°F, what mass of aluminum would you need? What volume of HCl at STP would you need? 20.9 kg Al, 5.20 × 10⁴ L HCl A 3.50 g sample of acetylene is burned in excess oxygen according to the following reaction: \[\ce{2 C2H2(g) + 5 O2(g) → 4 CO2(g) + 2 H2O(l)}\] At STP, what volume of CO₂(g) is produced? Calculate the density of ethylene (C₂H₄) under each set of conditions. Determine the density of O₂ under each set of conditions.
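The nitrogen-tank answer of 281 mmHg above can be verified with Boyle's law, since the gas expands into the combined volume at constant temperature:

```python
# Two connected tanks at constant temperature: Boyle's law, P1*V1 = P2*(V1+V2).
p1 = 455.0           # mmHg in the full 8.60 L nitrogen tank
v1, v2 = 8.60, 5.35  # L; the second tank starts empty

p2 = p1 * v1 / (v1 + v2)
print(f"{p2:.0f} mmHg")  # ~281 mmHg, matching the stated answer
```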
At 140°C, the pressure of a diatomic gas in a 3.0 L flask is 635 kPa. The mass of the gas is 88.7 g. What is the most likely identity of the gas? What volume must a balloon have to hold 6.20 kg of H₂ for an ascent from sea level to an elevation of 20,320 ft, where the temperature is −37°C and the pressure is 369 mmHg? What must be the volume of a balloon that can hold 313.0 g of helium gas and ascend from sea level to an elevation of 1.5 km, where the temperature is 10.0°C and the pressure is 635.4 mmHg? 2174 L A typical automobile tire is inflated to a pressure of 28.0 lb/in². Assume that the tire is inflated when the air temperature is 20°C; the car is then driven at high speeds, which increases the temperature of the tire to 43°C. What is the pressure in the tire? If the volume of the tire had increased by 8% at the higher temperature, what would the pressure be? The average respiratory rate for adult humans is 20 breaths per minute. If each breath has a volume of 310 mL of air at 20°C and 0.997 atm, how many moles of air does a person inhale each day? If the density of air is 1.19 kg/m³, what is the average molecular mass of air? Kerosene has a self-ignition temperature of 255°C. It is a common accelerant used by arsonists, but its presence is easily detected in fire debris by a variety of methods. If a 1.0 L glass bottle containing a mixture of air and kerosene vapor at an initial pressure of 1 atm and an initial temperature of 23°C is pressurized, at what pressure would the kerosene vapor ignite? Why are so many industrially important reactions carried out in the gas phase? The volume of gas produced during a chemical reaction can be measured by collecting the gas in an inverted container filled with water. The gas forces water out of the container, and the volume of liquid displaced is a measure of the volume of gas. What additional information must be considered to determine the number of moles of gas produced?
The volume of some gases cannot be measured using this method. What property of a gas precludes the use of this method? Equal masses of two solid compounds (A and B) are placed in separate sealed flasks filled with air at 1 atm and heated to 50°C for 10 hours. After cooling to room temperature, the pressure in the flask containing A was 1.5 atm. In contrast, the pressure in the flask containing B was 0.87 atm. Suggest an explanation for these observations. Would the masses of samples A and B still be equal after the experiment? Why or why not? Balance each chemical equation and then determine the volume of the indicated reactant at STP that is required for complete reaction. Assuming complete reaction, what is the volume of the products? During the smelting of iron, carbon reacts with oxygen to produce carbon monoxide, which then reacts with iron(III) oxide to produce iron metal and carbon dioxide. If 1.82 L of CO₂ at STP is produced, Complete decomposition of a sample of potassium chlorate produced 1.34 g of potassium chloride and oxygen gas. The combustion of a 100.0 mg sample of an herbicide in excess oxygen produced 83.16 mL of CO₂ and 72.9 mL of H₂O vapor at STP. A separate analysis showed that the sample contained 16.44 mg of chlorine. If the sample is known to contain only C, H, Cl, and N, determine the percent composition and the empirical formula of the herbicide. The combustion of a 300.0 mg sample of an antidepressant in excess oxygen produced 326 mL of CO₂ and 164 mL of H₂O vapor at STP. A separate analysis showed that the sample contained 23.28% oxygen. If the sample is known to contain only C, H, O, and N, determine the percent composition and the empirical formula of the antidepressant. Percent composition: 58.3% C, 4.93% H, 23.28% O, and 13.5% N; empirical formula: C₁₀H₁₀O₃N₂ Dalton’s law of partial pressures makes one key assumption about the nature of the intermolecular interactions in a mixture of gases. What is it?
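The antidepressant's percent composition can be turned into an empirical formula by routine mole-ratio arithmetic: convert each mass percent to moles, normalize to the least-abundant element, then clear the fractional ratio. A sketch (the atomic masses are standard values, not given in the text):

```python
# Empirical formula from percent composition: moles of each element per 100 g,
# normalized to the least-abundant element, then scaled to whole numbers.
percent = {"C": 58.3, "H": 4.93, "O": 23.28, "N": 13.5}
atomic_mass = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007}

moles = {el: pct / atomic_mass[el] for el, pct in percent.items()}
smallest = min(moles.values())
ratios = {el: m / smallest for el, m in moles.items()}
print(ratios)  # roughly C 5.0, H 5.1, O 1.5, N 1.0

# The O ratio of ~1.5 means the whole set must be doubled to reach integers:
subscripts = {el: round(2 * r) for el, r in ratios.items()}
print(subscripts)
```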
What is the relationship between the partial pressure of a gas and its mole fraction in a mixture? What is the partial pressure of each gas if the following amounts of substances are placed in a 25.0 L container at 25°C? What is the total pressure of each mixture? What is the partial pressure of each gas in the following 3.0 L mixtures at 37°C, as well as the total pressure? In a mixture of helium, oxygen, and methane in a 2.00 L container, the partial pressures of He and O₂ are 13.6 kPa and 29.2 kPa, respectively, and the total pressure inside the container is 95.4 kPa. What is the partial pressure of methane? If the methane is ignited to initiate its combustion with oxygen and the system is then cooled to the original temperature of 30°C, what is the final pressure inside the container (in kilopascals)? 52.6 kPa, 66.2 kPa A 2.00 L flask originally contains 1.00 g of ethane (C₂H₆) and 32.0 g of oxygen at 21°C. During ignition, the ethane reacts completely with oxygen to produce CO₂ and water vapor, and the temperature of the flask increases to 200°C. Determine the total pressure and the partial pressure of each gas before and after the reaction. If a 20.0 L cylinder at 19°C is charged with 5.0 g each of sulfur dioxide and oxygen, what is the partial pressure of each gas? The sulfur dioxide is ignited in the oxygen to produce sulfur trioxide gas, and the mixture is allowed to cool to 19°C at constant pressure. What is the final volume of the cylinder? What is the partial pressure of each gas in the piston? The highest point on the continent of Europe is Mt. Elbrus in Russia, with an elevation of 18,476 ft. The highest point on the continent of South America is Mt. Aconcagua in Argentina, with an elevation of 22,841 ft. Which of the following processes represents effusion, and which represents diffusion? Which postulate of the kinetic molecular theory of gases most readily explains the observation that a helium-filled balloon is round?
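The helium/oxygen/methane exercise above is a one-line application of Dalton's law, and its stated answer of 52.6 kPa is easy to confirm:

```python
# Dalton's law: the total pressure is the sum of the partial pressures,
# so the methane partial pressure is whatever remains after He and O2.
p_total, p_he, p_o2 = 95.4, 13.6, 29.2  # kPa

p_ch4 = p_total - p_he - p_o2
print(f"{p_ch4:.1f} kPa")  # 52.6 kPa, matching the stated answer
```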
Why is it relatively easy to compress a gas? How does the compressibility of a gas compare with that of a liquid? A solid? Why? Which of the postulates of the kinetic molecular theory of gases most readily explains these observations? What happens to the average kinetic energy of a gas if the rms speed of its particles increases by a factor of 2? How is the rms speed different from the average speed? Which gas—radon or helium—has a higher average kinetic energy at 100°C? Which has a higher average speed? Why? Which postulate of the kinetic molecular theory of gases most readily supports your answer? What is the relationship between the average speed of a gas particle and the temperature of the gas? What happens to the distribution of molecular speeds if the temperature of a gas is increased? Decreased? Qualitatively explain the relationship between the number of collisions of gas particles with the walls of a container and the pressure of a gas. How does increasing the temperature affect the number of collisions? What happens to the average kinetic energy of a gas at constant temperature if the What happens to the density of a gas at constant temperature if the Use the kinetic molecular theory of gases to describe how a decrease in volume produces an increase in pressure at constant temperature. Similarly, explain how a decrease in temperature leads to a decrease in volume at constant pressure. Graham’s law is valid only if the two gases are at the same temperature. Why? If we lived in a helium atmosphere rather than in air, would we detect odors more or less rapidly than we do now? Explain your reasoning. Would we detect odors more or less rapidly at sea level or at high altitude? Why? At a given temperature, what is the ratio of the rms speed of the atoms of Ar gas to the rms speed of molecules of H₂ gas? At any temperature, the rms speed of hydrogen is 4.45 times that of argon.
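The quoted factor of 4.45 follows from the kinetic-theory result that rms speed varies inversely with the square root of molar mass. A quick check:

```python
import math

# At a given temperature, v_rms is proportional to 1/sqrt(M), so
# v(H2)/v(Ar) = sqrt(M_Ar / M_H2).
m_ar, m_h2 = 39.948, 2.016  # g/mol

ratio = math.sqrt(m_ar / m_h2)
print(f"{ratio:.2f}")  # 4.45, as quoted
```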
At a given temperature, what is the ratio of the rms speed of molecules of CO₂ gas to the rms speed of molecules of H₂S gas? What is the ratio of the rms speeds of argon and oxygen at any temperature? Which diffuses more rapidly? What is the ratio of the rms speeds of Kr and NO at any temperature? Which diffuses more rapidly? Deuterium (D) and tritium (T) are heavy isotopes of hydrogen. Tritium has an atomic mass of 3.016 amu and has a natural abundance of 0.000138%. The effusion of hydrogen gas (containing a mixture of H₂, HD, and HT molecules) through a porous membrane can be used to obtain samples of hydrogen that are enriched in tritium. How many membrane passes are necessary to give a sample of hydrogen gas in which 1% of the hydrogen molecules are HT? Samples of HBr gas and NH₃ gas are placed at opposite ends of a 1 m tube. If the two gases are allowed to diffuse through the tube toward one another, at what distance from each end of the tube will the gases meet and form solid NH₄Br? What factors cause deviations from ideal gas behavior? Use a sketch to explain your answer based on interactions at the molecular level. Explain the effect of nonzero atomic volume on the ideal gas law at high pressure. Draw a typical graph of volume versus 1/P for an ideal gas and a real gas. For an ideal gas, the product of pressure and volume should be constant, regardless of the pressure. Experimental data for methane, however, show that the value of PV decreases significantly over the pressure range 0 to 120 atm at 0°C. The decrease in PV over the same pressure range is much smaller at 100°C. Explain why PV decreases with increasing pressure. Why is the decrease less significant at higher temperatures? What is the effect of intermolecular forces on the liquefaction of a gas? At constant pressure and volume, does it become easier or harder to liquefy a gas as its temperature increases? Explain your reasoning.
What is the effect of increasing the pressure on the liquefaction temperature? Describe qualitatively what a and b, the two empirical constants in the van der Waals equation, represent. In the van der Waals equation, why is the term that corrects for volume negative and the term that corrects for pressure positive? Why is n/V squared? Liquefaction of a gas depends strongly on two factors. What are they? As temperature is decreased, which gas will liquefy first—ammonia, methane, or carbon monoxide? Why? What is a cryogenic liquid? Describe three uses of cryogenic liquids. Air consists primarily of O₂, N₂, Ar, Ne, Kr, and Xe. Use the concepts discussed in this chapter to propose two methods by which air can be separated into its components. Which component of air will be isolated first? How can gas liquefaction facilitate the storage and transport of fossil fuels? What are the potential drawbacks to these methods? The van der Waals constants for xenon are a = 4.19 (L²·atm)/mol² and b = 0.0510 L/mol. If a 0.250 mol sample of xenon in a container with a volume of 3.65 L is cooled to −90°C, what is the pressure of the sample assuming ideal gas behavior? What would the pressure be if the gas instead obeyed the van der Waals equation? The van der Waals constants for water vapor are a = 5.46 (L²·atm)/mol² and b = 0.0305 L/mol. If a 20.0 g sample of water in a container with a volume of 5.0 L is heated to 120°C, what is the pressure of the sample assuming ideal gas behavior? What would the pressure be if the gas instead obeyed the van der Waals equation?
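The xenon exercise can be worked both ways side by side. A sketch using the constants quoted above, with −90°C converted to kelvin:

```python
# Ideal-gas vs. van der Waals pressure for 0.250 mol Xe in 3.65 L at -90 C.
R = 0.082057          # L*atm/(mol*K)
a, b = 4.19, 0.0510   # van der Waals constants for Xe: (L^2*atm)/mol^2, L/mol
n, V, T = 0.250, 3.65, 183.15

p_ideal = n * R * T / V

# van der Waals equation: (P + a*n^2/V^2) * (V - n*b) = n*R*T, solved for P.
p_vdw = n * R * T / (V - n * b) - a * n**2 / V**2

print(f"ideal: {p_ideal:.3f} atm, van der Waals: {p_vdw:.3f} atm")
```

At this low temperature the attractive a-term dominates the small b-correction, so the van der Waals pressure comes out slightly below the ideal-gas prediction.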
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Carbohydrates/Disaccharides/Lactose
Lactose or milk sugar occurs in the milk of mammals: 4-6% in cow's milk and 5-8% in human milk. It is also a by-product in the manufacture of cheese. Lactose is made from galactose and glucose units. The galactose and glucose units are joined by an acetal oxygen bridge in the beta orientation. To recognize galactose, look for the upward projection of the -OH on carbon # 4. See details towards the bottom of the page. Lactose intolerance is the inability to digest significant amounts of lactose, the predominant sugar of milk. This inability results from a shortage of the enzyme lactase, which is normally produced by the cells that line the small intestine. Lactase breaks down the lactose, milk sugar, into glucose and galactose that can then be absorbed into the bloodstream. When there is not enough lactase to digest the amount of lactose consumed, the undigested lactose can produce some uncomfortable symptoms. Some adults have low levels of lactase. This leads to lactose intolerance. The ingested lactose is not absorbed in the small intestine, but instead is fermented by bacteria in the large intestine, producing uncomfortable volumes of carbon dioxide gas. While not all persons deficient in lactase have symptoms, those who do are considered to be lactose intolerant. Common symptoms include nausea, cramps, bloating, gas, and diarrhea, which begin about 30 minutes to 2 hours after eating or drinking foods containing lactose. The severity of symptoms varies depending on the amount of lactose each individual can tolerate. Fortunately, lactose intolerance is relatively easy to manage by controlling the diet. No cure or treatment exists to improve the body's ability to produce lactase. Young children with lactase deficiency should not eat any foods containing lactose. Most older children and adults need not avoid lactose completely, but individuals differ in the amounts and types of foods they can handle. 
Dietary control of lactose intolerance depends on each person's learning through trial and error how much lactose he or she can handle. Carbon # 1 (red on left) is called the anomeric carbon and is the center of an acetal functional group. A carbon that has two ether oxygens attached is an acetal. The beta orientation is defined as the ether oxygen being on the same side of the ring as the C # 6. In the chair structure this results in a horizontal projection. This is the same definition as the beta -OH in a hemiacetal. The position of the oxygen in the acetal on the anomeric carbon (# 1) is an important distinction for disaccharide chemistry. The beta acetal is defined as the oxygen in the acetal being on the same side of the ring as the C # 6. In the chair structure this results in a horizontal projection. The alpha acetal is defined as the oxygen in the acetal being on the opposite side of the ring as the C # 6. In the chair structure this results in a downward projection. The alpha and beta acetal label is not applied to any other carbon - only the anomeric carbon of the left monosaccharide, in this case # 1 (red). To further identify lactose and maltose, identify the presence of galactose (in lactose) in the left-most structure by the upward -OH on the carbon # 4. Identify glucose (in maltose) in the left-most structure by the horizontal -OH on the carbon # 4.
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/09%3A_Spices/9.03%3A_Origins_of_Salt
In some countries, salt is mined from salt beds approximately 150 m to 300 m (490 ft. to 985 ft.) below Earth's surface. Sometimes, impurities such as clay make it impossible to use rock salt without purification. Purification makes it possible to get the desired flavor and color, thus making it edible. Edible salt is highly refined: pure and snow white. Salt can also be mined from natural salt beds by using water to extract the salt in the form of a brine, which saves having to construct a mine. Holes are drilled approximately 20 cm (8 in.) in diameter until the salt deposits are reached. A pipe is then driven into the salt beds and another pipe is driven inside the larger pipe further into the deposits. Pressurized water is forced through the outer pipe into the salt beds, and then pumped back out through the smaller pipe to the refineries. Through separation of the impurities, eventually all water in the brine will evaporate, leaving crystallized salt, which then can be dried, sifted, and graded in different sizes. In some countries, especially those with dry and warm climates, salt is recovered straight from the ocean or salt lakes. The salt water is collected in large shallow ponds (also called salt gardens) where, through the heat of the sun, the water slowly evaporates. Moving the salt solution from one pond to another until the salt crystals become clear and the water has evaporated eliminates impurities. The salt is then purified, dried completely, crushed, sifted, and graded.
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Supplemental_Modules_(Environmental_Chemistry)/Acid_Rain/Acid_Rain_Transport
The reactions of sulfur oxides to form sulfuric acid are quite slow. Sulfur dioxide may remain airborne for 3-4 days. As a consequence, acid rain derived from sulfur oxides may travel for hundreds of miles or even a thousand miles. Nitrogen oxides may persist for only about half a day and therefore may travel only tens or hundreds of miles. Once airborne, the sulfur and nitrogen oxides eventually come down in one form or another. Where they come down depends on the height of the smokestack and the prevailing weather conditions. In general, prevailing winds in North America transport pollutants from west to east or northeast. The nine largest coal-burning states are in the Midwest and the Ohio River valley. It is estimated that two thirds of the acid rain in the Northeast and Eastern Canada comes from these sources. (Figure: the blue arrow shows the upper winds, which travel from the west to the east or northeast, carrying pollutants from the Midwest to the Northeast.) In addition, a copper-nickel smelter in Sudbury, Ontario, just north of Lake Huron, is the most significant sulfur oxide source in Canada. The winds may also carry the sulfur oxide clouds to the Northeast in the U.S., where they may be converted to acid rain. (Figure: maps showing how the areas of lower pH spread over the 30 years 1955-1988; the darkest area is the lowest pH.) Since the Clean Air Act Amendments of 1990, there have been significant decreases in the amount of sulfur oxides escaping from electric power plants. As a result there has been a measurable reduction in the amount of acid rain, which is actually seen as an increase in pH levels (higher pH means less acid).  
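The closing remark about pH can be made concrete with a short calculation. This is a minimal Python sketch of my own; the two pH values are illustrative, not measurements from the text. Since pH = −log₁₀[H₃O⁺], each one-unit rise in pH corresponds to a ten-fold drop in hydronium-ion concentration.

```python
def hydronium(pH):
    """Hydronium-ion concentration (mol/L) from pH = -log10[H3O+]."""
    return 10 ** (-pH)

# Illustrative (hypothetical) pH readings for rain before and after
# emission controls; the text gives no specific numbers.
before, after = 4.6, 5.6
drop = hydronium(before) / hydronium(after)
print(f"[H3O+] fell by a factor of {drop:.0f}")
```

A one-unit pH increase thus corresponds to a factor-of-ten decrease in acidity, which is why even modest pH changes on the maps represent large chemical changes.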
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Conjugation/The_Diels-Alder_Cycloaddition
Conjugated double bond systems can participate in a variety of reactions. The Diels-Alder reaction is one in which a conjugated diene bonds with an alkene to produce a cyclohexene molecule. As aforementioned, the reaction forms a cyclohexene ring. The process by which the reaction occurs is cycloaddition. This means that the electrons are transferred in a cyclic fashion between the diene and the alkene to form the cyclic structure. Although heat is not required for Diels-Alder reactions, heating up the reaction will improve the yield. To go into more detail, the alkene that reacts with the diene is commonly referred to as the dienophile. Although this reaction occurs readily, it does not always give a very good yield. This reaction tends to work best with dienes that are electron rich and dienophiles that are electron poor. To solve this problem we add an electron withdrawing group (EWG) to our dienophile. These EWGs pull electron density away from the dienophile, allowing the pi electrons from the diene to interact with those of the dienophile and bond to form our product. Good EWGs include keto groups, aldehyde groups, nitrile groups, nitro groups, trifluoromethyl groups, etc. Before we begin, there are a few things to consider when carrying out the reaction. Diels-Alder reactions are concerted, stereospecific, and follow the endo rule. The Diels-Alder reaction is a concerted reaction; this means it occurs in only one step. Moreover, all of the atoms that are participating in the reaction form bonds simultaneously. Secondly, Diels-Alder reactions are stereospecific. This means that the substituents attached to both the diene and the dienophile retain their stereochemistry throughout the reaction. For example, if the functional groups on the dienophile are trans to each other in the reactants, they should remain trans to each other in the products. View the illustration below to clear up any confusion. 
Thirdly, Diels-Alder reactions are governed by the endo rule. This means that whenever a bridged ring is formed, the substituents bonded to the dienophile are either trans or cis to the bridge. What if there are more than two things attached to the dienophile? Well, two of them will point towards the endo side and the other two will go towards the exo side. As common conventions have pointed out, the functional groups bonded on the right side of the dienophile go towards the endo side (meaning away from the bridge) and the groups attached to the left of the dienophile point towards the exo side (meaning towards the bridge). The second part of the rule is that substituents on the right side of the dienophile are considered to be on the endo side in the product and that substituents bonded to the left side are considered to be exo. What this means is that endo substituents point down and exo substituents point up in the final product. An example of this can be seen below.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/10%3A_Fundamentals_of_Acids_and_Bases/10.05%3A_Lewis_Acids_and_Bases
Make sure you thoroughly understand the following essential ideas which have been presented above. It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic. According to Lewis, an acid is an electron-pair acceptor and a base is an electron-pair donor. In modern chemistry, electron donors are often referred to as nucleophiles, while acceptors are electrophiles. Just as any Arrhenius acid is also a Brønsted acid, any Brønsted acid is also a Lewis acid, so the various acid-base concepts are all "upward compatible". Although we do not really need to think about electron-pair transfers when we deal with ordinary aqueous-solution acid-base reactions, it is important to understand that it is the opportunity for electron-pair sharing that enables proton transfer to take place. This equation for a simple acid-base neutralization shows how the Brønsted and Lewis definitions are really just different views of the same process. Take special note of the following points: The point about the electron-pair remaining on the donor species is especially important to bear in mind. For one thing, it distinguishes a Lewis acid-base reaction from an oxidation-reduction reaction, in which a physical transfer of one or more electrons from donor to acceptor does occur. The product of a Lewis acid-base reaction is known formally as an "adduct" or "complex", although we do not ordinarily use these terms for simple proton-transfer reactions such as the one in the above example. Here, the proton combines with the hydroxide ion to form the "adduct" H₂O. The following examples illustrate these points for some other proton-transfer reactions that you should already be familiar with. Another example, showing the autoprotolysis of water. Note that the conjugate base is also the adduct. Ammonia is both a Brønsted and a Lewis base, owing to the unshared electron pair on the nitrogen. The reverse of this reaction represents the acid dissociation of the ammonium ion. Because \(\ce{HF}\) is a weak acid, fluoride salts behave as bases in aqueous solution. 
As a Lewis base, F⁻ accepts a proton from water, which is transformed into a hydroxide ion. The bisulfite ion is amphiprotic and can act as an electron donor or acceptor. The major utility of the Lewis definition is that it extends the concept of acids and bases beyond the realm of proton transfer reactions. The classic example is the reaction of boron trifluoride with ammonia to form an adduct: \[\ce{BF_3 + NH_3 \rightarrow F_3B-NH_3}\] One of the most commonly-encountered kinds of Lewis acid-base reactions occurs when electron-donating ligands form coordination complexes with transition-metal ions. Here are several more examples of Lewis acid-base reactions that cannot be accommodated within the Brønsted or Arrhenius models. Identify the Lewis acid and Lewis base in each reaction. Although organic chemistry is beyond the scope of these lessons, it is instructive to see how electron donors and acceptors play a role in chemical reactions. The following two diagrams show the mechanisms of two common types of reactions initiated by simple inorganic Lewis acids: In each case, the species labeled "Complex" is an intermediate that decomposes into the products, which are conjugates of the original acid and base pairs. The electric charges indicated in the complexes are formal charges, but those in the products are "real". In reaction 1, the incomplete octet of the aluminum atom in \(\ce{AlCl3}\) serves as a better electron acceptor to the chlorine atom than does the isobutyl part of the base. In reaction 2, the pair of non-bonding electrons on the dimethyl ether coordinates with the electron-deficient boron atom, leading to a complex that breaks down by releasing a bromide ion. We ordinarily think of Brønsted-Lowry acid-base reactions as taking place in aqueous solutions, but this need not always be the case. A more general view encompasses a variety of acid-base solvent systems, of which the aqueous system is only one (Table \(\PageIndex{1}\)). 
Each of these has as its basis an amphiprotic solvent (one capable of undergoing autoprotolysis), in parallel with the familiar case of water. The ammonia system is one of the most common non-aqueous systems in chemistry. Liquid ammonia boils at –33 °C, and can conveniently be maintained as a liquid by cooling with dry ice (–77 °C). It is a good solvent for substances that also dissolve in water, such as ionic salts and organic compounds, since it is capable of forming hydrogen bonds. However, many other familiar substances can also serve as the basis of protonic solvent systems, as Table \(\PageIndex{1}\) indicates: One use of nonaqueous acid-base systems is to examine the relative strengths of the strong acids and bases, whose strengths are "leveled" by the fact that they are all totally converted into H₃O⁺ or OH⁻ ions in water. By studying them in appropriate non-aqueous solvents which are poorer acceptors or donors of protons, their relative strengths can be determined.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chemistry_1e_(OpenSTAX)/10%3A_Liquids_and_Solids/10.5%3A_The_Solid_State_of_Matter
When most liquids are cooled, they eventually freeze and form crystalline solids, solids in which the atoms, ions, or molecules are arranged in a definite repeating pattern. It is also possible for a liquid to freeze before its molecules become arranged in an orderly pattern. The resulting materials are called amorphous solids or noncrystalline solids (or, sometimes, glasses). The particles of such solids lack an ordered internal structure and are randomly arranged (Figure \(\PageIndex{1}\)). Metals and ionic compounds typically form ordered, crystalline solids. Substances that consist of large molecules, or a mixture of molecules whose movements are more restricted, often form amorphous solids. For example, candle waxes are amorphous solids composed of large hydrocarbon molecules. Some substances, such as boron oxide (Figure \(\PageIndex{2}\)), can form either crystalline or amorphous solids, depending on the conditions under which they are produced. Also, amorphous solids may undergo a transition to the crystalline state under appropriate conditions. Crystalline solids are generally classified according to the nature of the forces that hold their particles together. These forces are primarily responsible for the physical properties exhibited by the bulk solids. The following sections provide descriptions of the major types of crystalline solids: ionic, metallic, covalent network, and molecular. Ionic solids, such as sodium chloride and nickel oxide, are composed of positive and negative ions that are held together by electrostatic attractions, which can be quite strong (Figure \(\PageIndex{3}\)). Many ionic crystals also have high melting points. This is due to the very strong attractions between the ions—in ionic compounds, the attractions between full charges are (much) larger than those between the partial charges in polar molecular compounds. This will be looked at in more detail in a later discussion of lattice energies. Although they are hard, they also tend to be brittle, and they shatter rather than bend. 
Ionic solids do not conduct electricity; however, they do conduct when molten or dissolved because their ions are free to move. Many simple compounds formed by the reaction of a metallic element with a nonmetallic element are ionic. Metallic solids such as crystals of copper, aluminum, and iron are formed by metal atoms (Figure \(\PageIndex{4}\)). The structure of metallic crystals is often described as a uniform distribution of atomic nuclei within a "sea" of delocalized electrons. The atoms within such a metallic solid are held together by a unique force known as metallic bonding that gives rise to many useful and varied bulk properties. All metallic solids exhibit high thermal and electrical conductivity, metallic luster, and malleability. Many are very hard and quite strong. Because of their malleability (the ability to deform under pressure or hammering), they do not shatter and, therefore, make useful construction materials. The melting points of the metals vary widely. Mercury is a liquid at room temperature, and the alkali metals melt below 200 °C. Several post-transition metals also have low melting points, whereas the transition metals melt at temperatures above 1000 °C. These differences reflect differences in strengths of metallic bonding among the metals. Covalent network solids include crystals of diamond, silicon, some other nonmetals, and some covalent compounds such as silicon dioxide (sand) and silicon carbide (carborundum, the abrasive on sandpaper). Many minerals have networks of covalent bonds. The atoms in these solids are held together by a network of covalent bonds, as shown in Figure \(\PageIndex{5}\). To break or to melt a covalent network solid, covalent bonds must be broken. Because covalent bonds are relatively strong, covalent network solids are typically characterized by hardness, strength, and high melting points. For example, diamond is one of the hardest substances known and melts above 3500 °C. Molecular solids, such as ice, sucrose (table sugar), and iodine, as shown in Figure \(\PageIndex{6}\), are composed of neutral molecules. 
The strengths of the attractive forces between the units present in different crystals vary widely, as indicated by the melting points of the crystals. Small symmetrical molecules (nonpolar molecules), such as H₂, N₂, O₂, and F₂, have weak attractive forces and form molecular solids with very low melting points (below −200 °C). Substances consisting of larger, nonpolar molecules have larger attractive forces and melt at higher temperatures. Molecular solids composed of molecules with permanent dipole moments (polar molecules) melt at still higher temperatures. Examples include ice (melting point, 0 °C) and table sugar (melting point, 185 °C). A crystalline solid, like those listed in Table \(\PageIndex{1}\), has a precise melting temperature because each atom or molecule of the same type is held in place with the same forces or energy. Thus, the attractions between the units that make up the crystal all have the same strength and all require the same amount of energy to be broken. The gradual softening of an amorphous material differs dramatically from the distinct melting of a crystalline solid. This results from the structural nonequivalence of the molecules in the amorphous solid. Some forces are weaker than others, and when an amorphous material is heated, the weakest intermolecular attractions break first. As the temperature is increased further, the stronger attractions are broken. Thus amorphous materials soften over a range of temperatures. Carbon is an essential element in our world. The unique properties of carbon atoms allow the existence of carbon-based life forms such as ourselves. Carbon forms a huge variety of substances that we use on a daily basis, including those shown in Figure \(\PageIndex{7}\). You may be familiar with diamond and graphite, the two most common allotropes of carbon. (Allotropes are different structural forms of the same element.) Diamond is one of the hardest-known substances, whereas graphite is soft enough to be used as pencil lead. 
These very different properties stem from the different arrangements of the carbon atoms in the different allotropes. You may be less familiar with a recently discovered form of carbon: graphene. Graphene was first isolated in 2004 by using tape to peel off thinner and thinner layers from graphite. It is essentially a single sheet (one atom thick) of graphite. Graphene, illustrated in Figure \(\PageIndex{8}\), is not only strong and lightweight, but it is also an excellent conductor of electricity and heat. These properties may prove very useful in a wide range of applications, such as vastly improved computer chips and circuits, better batteries and solar cells, and stronger and lighter structural materials. The 2010 Nobel Prize in Physics was awarded to Andre Geim and Konstantin Novoselov for their pioneering work with graphene. In a crystalline solid, the atoms, ions, or molecules are arranged in a definite repeating pattern, but occasional defects may occur in the pattern. Several types of defects are known, as illustrated in Figure \(\PageIndex{9}\). Vacancies are defects that occur when positions that should contain atoms or ions are vacant. Less commonly, some atoms or ions in a crystal may occupy positions, called interstitial sites, located between the regular positions for atoms. Other distortions are found in impure crystals, as, for example, when the cations, anions, or molecules of the impurity are too large to fit into the regular positions without distorting the structure. Trace amounts of impurities are sometimes added to a crystal (a process known as doping) in order to create defects in the structure that yield desirable changes in its properties. For example, silicon crystals are doped with varying amounts of different elements to yield suitable electrical properties for their use in the manufacture of semiconductors and computer chips. 
Some substances form crystalline solids consisting of particles in a very organized structure; others form amorphous (noncrystalline) solids with an internal structure that is not ordered. The main types of crystalline solids are ionic solids, metallic solids, covalent network solids, and molecular solids. The properties of the different kinds of crystalline solids are due to the types of particles of which they consist, the arrangements of the particles, and the strengths of the attractions between them. Because their particles experience identical attractions, crystalline solids have distinct melting temperatures; the particles in amorphous solids experience a range of interactions, so they soften gradually and melt over a range of temperatures. Some crystalline solids have defects in the definite repeating pattern of their particles. These defects (which include vacancies, atoms or ions not in the regular positions, and impurities) change physical properties such as electrical conductivity, which is exploited in the silicon crystals used to manufacture computer chips.
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Chirality/Stereoisomers/Ethane_Conformers
The simple alkane ethane provides a good introduction to conformational analysis. Here there is only one carbon-carbon bond, and the rotational structures (rotamers) that it may assume fall between two extremes, staggered and eclipsed. In the following description of these conformers, several structural notations are used. The first views the ethane molecule from the side, with the carbon-carbon bond being horizontal to the viewer. The hydrogens are then located in the surrounding space by wedge (in front of the plane) and hatched (behind the plane) bonds. If this structure is rotated so that carbon #1 is canted down and brought closer to the viewer, the "sawhorse" projection is presented. Finally, if the viewer looks down the carbon-carbon bond with carbon #1 in front of #2, the Newman projection is seen. (Table: each named conformer is drawn as a wedge-hatched bond structure, a sawhorse structure, and a Newman projection.) As a result of bond-electron repulsions, illustrated in Figure 2, the eclipsed conformation is less stable than the staggered conformation by roughly 3 kcal/mol (eclipsing strain). The most severe repulsions in the eclipsed conformation are depicted by the red arrows. There are six other less strong repulsions that are not shown. In the staggered conformation there are six equal bond repulsions, four of which are shown by the blue arrows, and these are all substantially less severe than the three strongest eclipsed repulsions. Consequently, the potential energy associated with the various conformations of ethane varies with the dihedral angle of the bonds, as shown below. Although the conformers of ethane are in rapid equilibrium with each other, the 3 kcal/mol energy difference leads to a substantial preponderance of staggered conformers (> 99.9%) at any given time.
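The quoted preponderance of staggered conformers follows from the Boltzmann distribution. Below is a rough two-state Python estimate at an assumed 298 K (my own illustration, not from the text); it slightly undercounts the staggered fraction because the eclipsed form is an energy maximum rather than a populated conformer, but it already puts the staggered population above 99%.

```python
from math import exp

# Two-state Boltzmann estimate: treat "eclipsed" as a single state
# lying 3 kcal/mol above "staggered" (the eclipsing strain in the text).
R = 1.987e-3          # gas constant, kcal/(mol*K)
dE = 3.0              # kcal/mol, energy difference from the text
T = 298.0             # K, assumed room temperature
ratio = exp(-dE / (R * T))         # eclipsed/staggered population ratio
f_staggered = 1.0 / (1.0 + ratio)
print(f"staggered fraction ~ {f_staggered:.4f}")
```

Even this crude count shows why, at any instant, essentially every ethane molecule in a sample is found near a staggered geometry.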
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Esters/Synthesis_of_Esters/Esterification
The π bond of the carbonyl group can act as a base to a strong inorganic acid due to the distortion of the electrons from the electronegativity difference between the oxygen atom and the carbon atom and also the resonance dipole. The cation produced in the reaction with sulfuric acid will have resonance stabilization. Step 1: Formation of the cation. Step 2: The methanol can act as a nucleophile to the carbocation. Remember that there are many methanol molecules in the solution; methanol is always in excess in this reaction. The protonated ether can leave as methanol, but that will not accomplish anything. A proton can be transferred to one of the hydroxyl groups, which makes it a good leaving group. The alcohol oxygen atom from the hydroxy group can donate a pair of electrons to the carbon atom, making a π bond and eliminating water. The water will not be a viable nucleophile that would reverse the reaction, because its concentration will be low compared to the concentration of the methanol. Step 5: The water will be in too low a concentration to reverse the reaction.
https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Supplemental_Modules_and_Websites_(Inorganic_Chemistry)/Molecular_Geometry/VSEPR
Valence Shell Electron Pair Repulsion (VSEPR) theory is used to predict the geometric shape of molecules based on electron repulsive forces. There are some limitations to VSEPR. The shapes of molecules are determined mainly by the electrons surrounding the central atom. Therefore, VSEPR theory gives simple directions on how to predict the shape of molecules. The VSEPR model combines the original ideas of Sidgwick and Powell with the further development of Nyholm and Gillespie. In a molecule EXₙ, the valence shell electron pairs around the central atom E and the E-X single bonds are very important due to the repulsions which determine the shape of the molecule. The repulsions decrease in the order: lone pair-lone pair, lone pair-bonding pair, bonding pair-bonding pair. At the same time, the repulsions decrease in the order: triple bond-single bond, double bond-single bond, and single bond-single bond if the central atom E has multiple bonds. The difference between the electronegativities of E and X also determines the repulsive force between the bonding pairs: the more electron density is drawn away from the central atom E, the smaller the electron-electron repulsive force. The VSEPR model works better for simple halides of the p-block elements but can also be used with other substituents. It does not take steric factors, i.e. the size of the substituents, into account. The shape of a molecule is arranged so that the energy is minimized. Lone pair electrons are also taken into account. When lone pair electrons are present, the "parent structure" is used as a guideline for determining the shape. 1. What is VSEPR used for in chemistry? It is used to predict the molecular shape of molecules. 2. How do you predict a molecule's structure using VSEPR theory? The first step is to count the total number of valence electrons. After the total number of electrons is determined, this number is divided by two to give the total number of electron pairs. 
With the electron pairs of the molecule, the shape of the molecule is determined from the standard VSEPR parent geometries. 3. What is the shape of PF₅? It is trigonal bipyramidal because it has a total of 20 electron pairs. Each fluorine atom gives 1 electron to the phosphorus central atom, which creates a total of 5 pairs. Also, each fluorine atom has 3 lone electron pairs. With the presence of 5 fluorine atoms, there are 15 more electron pairs, so there are 20 electron pairs total.
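The counting procedure in question 2 and the PF₅ example can be sketched in a few lines. The valence-electron table and parent-geometry lookup below are my own illustration of the standard VSEPR scheme, not code from the text:

```python
# Valence electrons for a few main-group elements (standard values).
VALENCE = {"H": 1, "B": 3, "C": 4, "N": 5, "O": 6, "F": 7, "P": 5, "S": 6, "Cl": 7}

def total_electron_pairs(atoms):
    """Steps from question 2: sum valence electrons, divide by two."""
    return sum(VALENCE[a] for a in atoms) // 2

# Shape follows from the number of electron domains on the *central* atom
# (bonding pairs plus lone pairs), via the usual parent geometries.
PARENT_SHAPE = {2: "linear", 3: "trigonal planar", 4: "tetrahedral",
                5: "trigonal bipyramidal", 6: "octahedral"}

pairs = total_electron_pairs(["P"] + ["F"] * 5)   # PF5: 5 + 5*7 = 40 electrons
domains = 5                                        # five P-F bonding pairs, no lone pairs on P
print(pairs, PARENT_SHAPE[domains])                # 20 pairs, trigonal bipyramidal
```

Note that only the five electron domains around phosphorus set the geometry; the other fifteen pairs sit on the fluorine atoms as lone pairs.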
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Chirality/Stereoisomers/Butane_Conformers
The hydrocarbon butane has a larger and more complex set of conformations associated with its constitution than does ethane. Of particular interest and importance are the conformations produced by rotation about the central carbon-carbon bond. Among these we shall focus on two staggered conformers (A and C) and two eclipsed conformers (B and D), shown below in several stereo-representations. As in the case of ethane, the staggered conformers are more stable than the eclipsed conformers by 2.8 to 4.5 kcal/mol. Since the staggered conformers represent the chief components of a butane sample, they have been given the identifying prefix designations anti for A and gauche for C. The following diagram illustrates the change in potential energy that occurs with rotation about the C2–C3 bond.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_ChemPRIME_(Moore_et_al.)/17%3A_Electrochemical_Cells/17.02%3A_Electrolysis
A typical electrolytic cell can be made as shown in Figure \(\PageIndex{1}\). Two electrical conductors (electrodes) are immersed in the liquid to be electrolyzed. These electrodes are often made of an inert material such as stainless steel, platinum, or graphite. The liquid to be electrolyzed must be able to conduct electricity, and so it is usually an aqueous solution of an electrolyte or a molten ionic compound. The electrodes are connected by wires to a battery or other source of direct current. This current source may be thought of as an "electron pump" which takes in electrons from one electrode and forces them out into the other electrode. The electrode from which electrons are removed becomes positively charged, while the electrode to which they are supplied has an excess of electrons and a negative charge. The negatively charged electrode will attract positive ions (cations) toward it from the solution. It can donate some of its excess electrons to such cations or to other species in the liquid being electrolyzed. Hence this electrode is in effect a reducing agent. In any electrochemical cell (electrolytic or galvanic), the electrode at which reduction occurs is called the cathode. The positive electrode, on the other hand, will attract negative ions (anions) toward itself. This electrode can accept electrons from those negative ions or other species in the solution and hence behaves as an oxidizing agent. In any electrochemical cell, the electrode at which oxidation occurs is called the anode. An easy way to remember which electrode is which is that anode and oxidation begin with vowels while cathode and reduction begin with consonants. The following video shows this process taking place in a neutral solution of water with some electrolytes present. As an example of how electrolysis can cause a chemical reaction to occur, suppose we pass a direct electrical current through 1 M HCl. The H₃O⁺ ions in this solution will be attracted to the cathode, and the \(\ce{Cl^{–}}\) ions will migrate toward the anode. 
At the cathode, \(\ce{H3O^{+}}\) will be reduced to \(\ce{H2}\) gas according to the half-equation \[\text{2H}^{+} + \text{2e}^{-} \rightarrow \text{H}_2\label{1} \] (As seen earlier, we shall write \(\ce{H^{+}}\) instead of \(\ce{H3O^{+}}\) in half-equations to save time.) At the anode, electrons will be accepted from \(\ce{Cl^{–}}\) ions, oxidizing them to \(\ce{Cl2}\): \[\text{2Cl}^{-} \rightarrow \text{Cl}_2 + \text{2e}^{-} \label{2} \] During electrolysis \(\ce{H2(g)}\) and \(\ce{Cl2(g)}\) bubble from the cathode and anode, respectively. The overall equation for the electrolysis is the sum of Equations \ref{1} and \ref{2}: \[\text{2H}^{+}(aq) + \text{2Cl}^{-}(aq) \rightarrow \text{H}_2(g) + \text{Cl}_2(g)\label{3} \] or \[\text{2H}_3\text{O}^{+}(aq) + \text{2Cl}^{-}(aq) \rightarrow \text{H}_2(g) + \text{Cl}_2(g) + \text{2H}_2\text{O}(l) \nonumber \] The net reaction in Equation \ref{3} is the reverse of the spontaneous combination of \(\ce{H2(g)}\) with \(\ce{Cl2(g)}\) to form \(\ce{HCl(aq)}\). Such a result is true of electrolysis in general. Although electrolysis always reverses a spontaneous redox reaction, the result of a given electrolysis may not always be the reaction we want. In an aqueous solution, for example, there are always a great many water molecules in the vicinity of both the anode and cathode. These water molecules can donate electrons to the anode or accept electrons from the cathode just as anions or cations can. Consequently the electrolysis may oxidize and/or reduce water instead of causing the dissolved electrolyte to react. An example of this problem is electrolysis of lithium fluoride, \(\ce{LiF}\). We might expect reduction of \(\ce{Li^{+}}\) at the cathode and oxidation of \(\ce{F^{–}}\) at the anode, according to the half-equations \[\text{Li}^{+}(aq) + \text{e}^{-} \rightarrow \text{Li}(s)\label{5} \] \[\text{2F}^{-}(aq) \rightarrow \text{F}_2(g) + \text{2e}^{-} \nonumber \] However, \(\ce{Li^{+}}\) is a very poor electron acceptor, and so it is very difficult to force Equation \ref{5} to occur.
Consequently, excess electrons from the cathode are accepted by water molecules instead: \[\text{2H}_2\text{O}(l) + \text{2e}^{-} \rightarrow \text{2OH}^{-}(aq) + \text{H}_2(g)\label{7} \] A similar situation arises at the anode. \(\ce{F^{–}}\) ions are extremely weak reducing agents—much weaker than \(\ce{H2O}\) molecules—so the anode half-equation is \[\text{2H}_2\text{O}(l) \rightarrow \text{O}_2(g) + \text{4H}^{+}(aq) + \text{4e}^{-}\label{8} \] The overall equation can be obtained by multiplying Equation \ref{7} by 2, adding it to Equation \ref{8}, and combining \(\ce{H^{+}}\) with \(\ce{OH^{–}}\) to form \(\ce{H2O}\): \[\text{2H}_2\text{O}(l) \rightarrow \text{2H}_2(g) + \text{O}_2(g) \nonumber \] The following video shows the electrolysis of water taking place, using sulfuric acid as a bridge to allow for the transfer of charge. After the electrolysis is complete, the identities of the gases formed are verified using burning splint tests. Thus this electrolysis reverses the spontaneous combination of \(\ce{H2}\) and \(\ce{O2}\) to form \(\ce{H2O}\). In discussing redox reactions we mention several oxidizing agents which are strong enough to oxidize \(\ce{H2O}\). At the same time we describe reducing agents which are strong enough to reduce \(\ce{H2O}\), such as the alkali metals and the heavier alkaline earths. As a general rule such substances cannot be produced by electrolysis of aqueous solutions because \(\ce{H2O}\) is oxidized or reduced instead. Substances which undergo spontaneous redox reaction with \(\ce{H2O}\) are usually produced by electrolysis of molten salts or in some other solvent. There are some exceptions to this rule, however, because some electrode reactions are slower than others. Using Table 11.5, for example, we would predict that \(\ce{H2O}\) is a better reducing agent than \(\ce{Cl^{–}}\). Hence we would expect \(\ce{O2}\), not \(\ce{Cl2}\), to be produced by electrolysis of 1 M HCl, in contradiction of Equation \ref{2}. It turns out that \(\ce{O2}\) is produced more slowly than \(\ce{Cl2}\), and the latter bubbles out of solution before the \(\ce{H2O}\) can be oxidized.
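The bookkeeping used above (double the cathode half-equation so the electrons cancel, add the anode half-equation, then pair the leftover H⁺ and OH⁻ into water) can be checked with a short script. This is an illustrative sketch, not part of the original text; the species labels and helper functions are ad hoc.

```python
from collections import Counter

# Half-reactions as net species changes (products positive, reactants
# negative); "e-" tracks electrons. Coefficients from the text:
#   cathode:  2 H2O + 2 e-  ->  2 OH- + H2
#   anode:    2 H2O         ->  O2 + 4 H+ + 4 e-
cathode = Counter({"H2O": -2, "e-": -2, "OH-": 2, "H2": 1})
anode = Counter({"H2O": -2, "O2": 1, "H+": 4, "e-": 4})

def scale(rxn, n):
    """Multiply every coefficient in a half-reaction by n."""
    return Counter({sp: n * c for sp, c in rxn.items()})

def add(a, b):
    """Sum two half-reactions species by species (keeping negatives)."""
    total = Counter(a)
    for sp, c in b.items():
        total[sp] += c
    return total

# Double the cathode half-equation so the electrons cancel, then add.
overall = add(scale(cathode, 2), anode)
assert overall["e-"] == 0  # electrons must balance

# Combine H+ with OH- into H2O (4 H+ + 4 OH- -> 4 H2O).
n = min(overall["H+"], overall["OH-"])
overall["H+"] -= n
overall["OH-"] -= n
overall["H2O"] += n

net = {sp: c for sp, c in overall.items() if c != 0}
print(net)  # i.e. 2 H2O -> 2 H2 + O2
```

Running it leaves only water on the reactant side and hydrogen and oxygen on the product side, matching the overall equation for the electrolysis of water.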
For this reason the relative strengths of oxidizing and reducing agents cannot always be used to predict what will happen in an electrolysis.
https://chem.libretexts.org/Bookshelves/Introductory_Chemistry/Introductory_Chemistry_(CK-12)/18%3A_Kinetics/18.01%3A_Chemical_Reaction_Rate
Drag racing is a sport that involves two cars starting from a dead stop, and driving as fast as they can down a quarter-mile strip. At the end of the strip are timers that determine both elapsed time (how long it took for the cars to cover the quarter mile) and top speed (how fast they were going as they went through the timer chute). Both pieces of data are important. One car may accelerate faster and get ahead that way, while the other car may be slower off the line, but can get up to a higher top speed at the end of the run. Chemical reactions vary widely in terms of the speed with which they occur. Some reactions occur very quickly. If a lit match is brought in contact with lighter fluid or another flammable liquid, it erupts into flames instantly and burns fast. Other reactions occur very slowly. A container of milk in the refrigerator will be good to drink for weeks before it begins to turn sour. Millions of years were required for dead plants under Earth's surface to accumulate and eventually turn into fossil fuels such as coal and oil. Chemists need to be concerned with the rates at which chemical reactions occur. Rate is another word for speed. If a sprinter takes \(11.0 \: \text{s}\) to run a \(100 \: \text{m}\) dash, his rate is given by the distance traveled divided by the time. \[\text{speed} = \frac{\text{distance}}{\text{time}} = \frac{100 \; \text{m}}{11.0 \: \text{s}} = 9.09 \: \text{m/s}\nonumber \] The sprinter's average running rate for the race is \(9.09 \: \text{m/s}\). We say that it is his average rate because he did not run at that speed for the entire race. At the very beginning of the race, while coming from a standstill, his rate must be slower until he is able to get up to his top speed. His top speed must then be greater than \(9.09 \: \text{m/s}\) so that, taken over the entire race, the average ends up at \(9.09 \: \text{m/s}\). Chemical reactions can't be measured in units of meters per second, as that would not make any sense. 
A reaction rate is the change in concentration of a reactant or product with time. Suppose that a simple reaction were to take place in which a \(1.00 \: \text{M}\) aqueous solution of substance \(\ce{A}\) was converted to substance \(\ce{B}\). \[\ce{A} \left( aq \right) \rightarrow \ce{B} \left( aq \right)\nonumber \] Suppose that after 20.0 seconds, the concentration of \(\ce{A}\) had dropped from \(1.00 \: \text{M}\) to \(0.72 \: \text{M}\) as \(\ce{A}\) was slowly being converted to \(\ce{B}\). We can express the rate of this reaction as the change in concentration of \(\ce{A}\) divided by time. \[\text{rate} = -\frac{\Delta \left[ \ce{A} \right]}{\Delta t} = -\frac{\left[ \ce{A} \right]_\text{final} - \left[ \ce{A} \right]_\text{initial}}{\Delta t}\nonumber \] A bracket around a symbol or formula means the concentration in molarity of that substance. The change in concentration of \(\ce{A}\) is its final concentration minus its initial concentration. Because the concentration of \(\ce{A}\) is decreasing over time, the negative sign is used. Thus, the rate for the reaction is positive and the units are molarity per second or \(\text{M/s}\). \[\text{rate} = -\frac{0.72 \: \text{M} - 1.00 \: \text{M}}{20.0 \: \text{s}} = -\frac{-0.28 \: \text{M}}{20.0 \: \text{s}} = 0.014 \: \text{M/s}\nonumber \] The molarity of \(\ce{A}\) decreases by an average rate of \(0.014 \: \text{M}\) every second. In summary, the rate of a chemical reaction is measured by the change in concentration over time for a reactant or product. The unit of measurement for a reaction rate is molarity per second \(\left( \text{M/s} \right)\).
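The average-rate arithmetic above can be mirrored in a few lines of Python (a sketch; the variable names are ours, not the text's):

```python
# [A] falls from 1.00 M to 0.72 M over 20.0 s.
A_initial = 1.00   # M
A_final = 0.72     # M
dt = 20.0          # s

# Negative sign converts the decrease of a reactant into a positive rate.
rate = -(A_final - A_initial) / dt
print(f"{rate:.3f} M/s")  # -> 0.014 M/s
```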
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/05%3A_Atoms_and_the_Periodic_Table/5.03%3A_Light_Particles_and_Waves
Make sure you thoroughly understand the following essential ideas. Our intuitive view of the "real world" is one in which objects have definite masses, sizes, locations and velocities. Once we get down to the atomic level, this simple view begins to break down. It becomes totally useless when we move down to the subatomic level and consider the lightest of all chemically-significant particles, the electron. The chemical properties of a particular kind of atom depend on the arrangement and behavior of the electrons which make up almost the entire volume of the atom. The electronic structure of an atom can only be determined indirectly by observing the manner in which atoms absorb and emit light. Light, as you already know, has wavelike properties, so we need to know something about waves in order to interpret these observations. But because the electrons are themselves quantum particles and therefore have wavelike properties of their own, we will find that an understanding of the behavior of electrons in atoms can only be gained through the language of waves. Atoms are far too small to see directly, even with the most powerful optical microscopes. But atoms do interact with and under some circumstances emit light in ways that reveal their internal structures in amazingly fine detail. It is through the "language of light" that we communicate with the world of the atom. This section will introduce you to the rudiments of this language. In the early 19th century, the English scientist Thomas Young carried out the famous double-slit experiment which demonstrated that a beam of light, when split into two beams and then recombined, will show interference effects that can only be explained by assuming that light is a wavelike disturbance. By 1820, Augustin Fresnel had put this theory on a sound mathematical basis, but the exact nature of the waves remained unclear until the 1860's when James Clerk Maxwell developed his electromagnetic theory.
But Einstein's 1905 explanation of the photoelectric effect showed that light also exhibits a particle-like nature. The photon is the smallest possible packet (quantum) of light; it has zero mass but a definite energy. When light-wave interference experiments are conducted with extremely low intensities of light, the wave theory breaks down; instead of recording a smooth succession of interference patterns as shown above, an extremely sensitive detector sees individual pulses— that is, individual photons. Suppose we conduct the double-slit interference experiment using a beam of light so weak that only one photon at a time passes through the apparatus (it is experimentally possible to count single photons, so this is a practical experiment.) Each photon passes through the first slit, and then through one or the other of the second set of slits, eventually striking the photographic film where it creates a tiny dot. If we develop the film after a sufficient number of photons have passed through, we find the very same interference pattern we obtained with higher-intensity light whose behavior could be explained by wave interference. There is something strange here. Each photon, acting as a particle, must pass through one or the other of the pair of slits, so we would expect to get only two groups of spots on the film, each opposite one of the two slits. Instead, it appears that each particle, on passing through one slit, "knows" about the other, and adjusts its final trajectory so as to build up a wavelike interference pattern. It gets even stranger: suppose that we set up a detector to determine which slit a photon is heading for, and then block off the other slit with a shutter. We find that the photon sails straight through the open slit and onto the film without trying to create any kind of an interference pattern. Apparently, any attempt to observe the photon as a discrete particle causes it to behave like one.
Later on, virtually the same experiment was repeated with electrons, thus showing that particles can have wavelike properties (as the French physicist Louis de Broglie predicted in 1923), just as what were conventionally thought to be electromagnetic waves possess particle-like properties. For large bodies (most atoms, baseballs, cars) there is no question: the wave properties are insignificant, and the laws of classical mechanics can adequately describe their behaviors. But for particles as tiny as electrons, the situation is quite different: instead of moving along well defined paths, a quantum particle seems to have an infinity of paths which thread their way through space, seeking out and collecting information about all possible routes, and then adjusting its behavior so that its final trajectory, when combined with that of others, produces the same overall effect that we would see from a train of waves of wavelength \(\lambda = h/mv\). Taking this idea of quantum indeterminacy to its most extreme, the physicist Erwin Schrödinger proposed a "thought experiment" in which the radioactive decay of an atom would initiate a chain of events that would lead to the death of a cat placed in a closed box. The atom has a 50% chance of decaying in an hour, meaning that its wave representation will contain both possibilities until an observation is made. The question, then, is will the cat be simultaneously in an alive-and-dead state until the box is opened? If so, this raises all kinds of interesting questions about the nature of being. We use the term "wave" to refer to a quantity which changes with time. Waves in which the changes occur in a repeating or periodic manner are of special importance and are widespread in nature; think of the motions of the ocean surface, the pressure variations in an organ pipe, or the vibrations of a plucked guitar string. What is interesting about all such repeating phenomena is that they can be described by the same mathematical equations.
Wave motion arises when a periodic disturbance of some kind is propagated through a medium; pressure variations through air, transverse motions along a guitar string, or variations in the intensities of the local electric and magnetic fields in space, which constitutes electromagnetic radiation. For each medium, there is a characteristic velocity at which the disturbance travels. There are three measurable properties of wave motion: amplitude, wavelength, and frequency, the number of vibrations per second. The relation between the wavelength \(\lambda\) (Greek lambda) and the frequency \(\nu\) (Greek nu) of a wave is determined by the propagation velocity \[v = \nu \lambda\] What is the wavelength of the musical note A = 440 Hz when it is propagated through air in which the velocity of sound is 343 m s\(^{–1}\)? \[\lambda = \dfrac{v}{\nu} = \dfrac{343\; m \,s^{–1}}{440\, s^{–1}} = 0.78\; m\] Michael Faraday's discovery that electric currents could give rise to magnetic fields and vice versa raised the question of how these effects are transmitted through space. Around 1870, the Scottish physicist James Clerk Maxwell (1831-1879) showed that this electromagnetic radiation can be described as a train of perpendicular oscillating electric and magnetic fields. Maxwell was able to calculate the speed at which electromagnetic disturbances are propagated, and found that this speed is the same as that of light. He therefore proposed that light is itself a form of electromagnetic radiation whose wavelength range forms only a very small part of the entire electromagnetic spectrum. Maxwell's work served to unify what were once thought to be entirely separate realms of wave motion. The electromagnetic spectrum is conventionally divided into various parts as depicted in the diagram below, in which the four logarithmic scales correlate the wavelength of electromagnetic radiation with its frequency in hertz (units of s\(^{–1}\)) and the energy per photon, expressed both in joules and electron-volts.
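The wavelength calculation for the 440 Hz note can be sketched in Python (an illustration of the rearranged relation \(\lambda = v/\nu\); the variable names are ours):

```python
v = 343.0    # speed of sound in air, m/s (value from the text)
nu = 440.0   # frequency of the note A, Hz (vibrations per second)

wavelength = v / nu  # lambda = v / nu
print(f"{wavelength:.2f} m")  # -> 0.78 m
```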
The other items shown on the diagram, from the top down, are given there. It's worth noting that radiation in the ultraviolet range can have direct chemical effects by ionizing atoms and disrupting chemical bonds. Longer-wavelength radiation can interact with atoms and molecules in ways that provide a valuable means of identifying them and revealing particular structural features. It is useful to develop some feeling for the various magnitudes of energy that we must deal with. The basic SI unit of energy is the joule; the appearance of this unit in Planck's constant allows us to express the energy equivalent of light in joules. For example, light of wavelength 500 nm, which appears blue-green to the human eye, would have a frequency of about \(6.0 \times 10^{14}\; s^{–1}\). The quantum of energy carried by a single photon of this frequency is about \(4.0 \times 10^{–19}\; J\). Another energy unit that is commonly employed in atomic physics is the electron-volt; this is the kinetic energy that an electron acquires upon being accelerated across a 1-volt potential difference. The relationship 1 eV = 1.6022E–19 J gives an energy of 2.5 eV for the photons of blue-green light. Two small flashlight batteries will produce about 2.5 volts, and thus could, in principle, give an electron about the same amount of kinetic energy that blue-green light can supply. Because the energy produced by a battery derives from a chemical reaction, this quantity of energy is representative of the magnitude of the energy changes that accompany chemical reactions. In more familiar terms, one mole of 500-nm photons would have an energy equivalent of Avogadro's number times 4E–19 J, or 240 kJ per mole. This is comparable to the amount of energy required to break some chemical bonds. Many substances are able to undergo chemical reactions following light-induced disruption of their internal bonding; such molecules are said to be photosensitive. Any body whose temperature is above absolute zero emits radiation covering a broad range of wavelengths.
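The chain of conversions for 500-nm light (frequency, photon energy in joules and electron-volts, energy per mole) can be reproduced numerically. A minimal sketch, using standard constant values:

```python
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
N_A = 6.022e23   # Avogadro's number, mol^-1
eV = 1.6022e-19  # J per electron-volt

wavelength = 500e-9          # m, blue-green light
nu = c / wavelength          # frequency, s^-1
E_photon = h * nu            # energy per photon, J

print(f"{nu:.2e} Hz")                          # -> 6.00e+14 Hz
print(f"{E_photon:.2e} J")                     # -> 3.97e-19 J
print(f"{E_photon / eV:.2f} eV")               # -> 2.48 eV
print(f"{E_photon * N_A / 1000:.0f} kJ/mol")   # -> 239 kJ/mol
```

The rounded results (about 4E–19 J, 2.5 eV, and 240 kJ/mol) match the figures quoted in the text.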
At very low temperatures the predominant wavelengths are in the radio and microwave regions. As the temperature increases, the wavelengths decrease; at room temperature, most of the emission is in the infrared. At still higher temperatures, objects begin to emit in the visible region, at first in the red, and then moving toward the blue as the temperature is raised. These are described as continuous spectra, since all wavelengths within the broad emission range are present. The source of thermal emission most familiar to us is the Sun. When sunlight is refracted by rain droplets into a rainbow or by a prism onto a viewing screen, we see the visible part of the spectrum. Red hot, white hot, blue hot... your rough guide to temperatures of hot objects. Heat a piece of iron up to near its melting point and it will emit a broad continuous spectrum that the eye perceives as orange-yellow. But if you zap the iron with an electric spark, some of the iron atoms will vaporize and have one or more of their electrons temporarily knocked out of them. As they cool down the electrons will re-combine with the iron ions, losing energy as they move in toward the nucleus and giving up this excess energy as light. The spectrum of this light is anything but continuous; it consists of a series of discrete wavelengths which we call a line spectrum. Each chemical element has its own characteristic line spectrum which serves very much like a "fingerprint" capable of identifying a particular element in a complex mixture. Shown below is what you would see if you could look at several different atomic line spectra directly. If you live in a city, you probably see atomic line light sources every night! "Neon" signs are the most colorful and spectacular, but high-intensity street lighting is the most widespread source. A look at the emission spectrum (above) of sodium explains the intense yellow color of these lamps. The spectrum of mercury (not shown) similarly has its strongest lines in the blue-green region.
There is one more fundamental concept you need to know before we can get into the details of atoms and their spectra. If light has a particle nature, why should particles not possess wavelike characteristics? In 1923 a young French physicist, Louis de Broglie, published an argument showing that matter should indeed have a wavelike nature. The de Broglie wavelength of a body is inversely proportional to its momentum \(mv\): \[ \lambda =\dfrac{h}{mv}\] If you explore the magnitude of the quantities in this equation (recall that \(h\) is around \(10^{–33}\) J s), it will be apparent that the wavelengths of all but the lightest bodies are insignificantly small fractions of their dimensions, so that the objects of our everyday world all have definite boundaries. Even individual atoms are sufficiently massive that their wave character is not observable in most kinds of experiments. Electrons, however, are another matter; the electron was in fact the first particle whose wavelike character was seen experimentally, following de Broglie's prediction. Its small mass (9.1E–31 kg) made it an obvious candidate, and velocities of around 100 km/s are easily obtained, yielding a value of λ in the above equation that well exceeds what we think of as the "radius" of the electron. At such velocities the electron behaves as if it is "spread out" to atomic dimensions; a beam of these electrons can be diffracted by the ordered rows of atoms in a crystal in much the same way as visible light is diffracted by the closely-spaced grooves of a CD recording. Electron diffraction has become an important tool for investigating the structures of molecules and of solid surfaces. A more familiar exploitation of the wavelike properties of electrons is seen in the electron microscope, whose utility depends on the fact that the wavelength of the electrons is much less than that of visible light, thus allowing the electron beam to reveal detail on a correspondingly smaller scale.
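Plugging the electron's mass and the 100 km/s velocity from the text into the de Broglie relation gives a wavelength of atomic scale. A quick numerical sketch:

```python
h = 6.626e-34    # Planck constant, J s
m_e = 9.109e-31  # electron mass, kg
v = 1.0e5        # 100 km/s, the velocity quoted in the text

wavelength = h / (m_e * v)  # de Broglie: lambda = h / (m v)
print(f"{wavelength:.2e} m")  # -> 7.27e-09 m, i.e. about 7 nm
```

A wavelength of several nanometers is far larger than the classical "radius" of the electron, which is why the electron behaves as if spread out to atomic dimensions.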
In 1927, the German physicist Werner Heisenberg pointed out that the wave nature of matter leads to a profound and far-reaching conclusion: no method of observation, however perfectly it is carried out, can reveal both the exact location and momentum (and thus the velocity) of a particle. This is the origin of the widely known concept that the very process of observation will change the value of the quantity being observed. The Heisenberg principle can be expressed mathematically by the inequality \[ \Delta{x}\Delta{p} \geq \dfrac{h}{2\pi}\] in which the \(\Delta\) (deltas) represent the uncertainties with which the location and momentum are known. Suppose that you wish to measure the exact location of a particle that is at rest (zero momentum). To accomplish this, you must "see" the particle by illuminating it with light or other radiation. But the light acts like a beam of photons, each of which possesses the momentum h/λ in which λ is the wavelength of the light. When a photon collides with the particle, it transfers some of its momentum to the particle, thus altering both its position and momentum. Notice how the form of this expression predicts that if the location of an object is known exactly (\(\Delta{x} = 0\)), then the uncertainty in the momentum must be infinite, meaning that nothing at all about the velocity can be known. Similarly, if the velocity were specified exactly, then the location would be entirely uncertain and the particle could be anywhere. One interesting consequence of this principle is that even at a temperature of absolute zero, the molecules in a crystal must still possess a small amount of kinetic energy, sufficient to limit the precision to which we can measure their locations in the crystal lattice.
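To get a feel for the magnitudes involved, one can take the product \(\Delta x \Delta p\) at its minimum value \(h/2\pi\) and ask what confining an electron to atomic dimensions implies. This is an illustrative sketch; the 0.1-nm confinement distance is our assumed value, not the text's:

```python
import math

h = 6.626e-34    # Planck constant, J s
m_e = 9.109e-31  # electron mass, kg

dx = 1.0e-10                 # confine an electron to ~atomic size, m (assumed)
dp = h / (2 * math.pi * dx)  # minimum momentum uncertainty
dv = dp / m_e                # corresponding velocity uncertainty

print(f"{dp:.2e} kg m/s")  # -> 1.05e-24 kg m/s
print(f"{dv:.2e} m/s")     # -> 1.16e+06 m/s
```

A velocity uncertainty of about a thousand kilometers per second illustrates why an electron bound in an atom cannot be pictured as following a definite path.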
An equivalent formulation of the uncertainty principle relates the uncertainties associated with a measurement of the energy of a system to the time \(\Delta{t}\) taken to make the measurement: \[ \Delta{E}\Delta{t} \geq \dfrac{h}{2 \pi}\] The "uncertainty" referred to here goes much deeper than merely limiting our ability to measure the quantity \(\Delta{x}\Delta{p}\) to a greater precision than \(h/2\pi\). It means, rather, that this product has no exact value, nor, by extension, do location and momentum on a microscopic scale. A more appropriate term would be indeterminacy, which is closer to Heisenberg's original meaning. The revolutionary nature of Heisenberg's uncertainty principle soon extended far beyond the arcane world of physics; its consequences quickly entered the realm of ideas and have inspired numerous creative works in the arts— few of which really have much to do with the Principle! A possible exception is Michael Frayn's widely acclaimed play Copenhagen (see below) that has brought a sense of Heisenberg's thinking to a wide segment of the public.
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/10%3A_Spectroscopic_Methods/10.09%3A_Problems
1. Provide the missing information in the following table. 2. Provide the missing information in the following table. 3. A solution’s transmittance is 35.0%. What is the transmittance if you dilute 25.0 mL of the solution to 50.0 mL? 4. A solution’s transmittance is 85.0% when measured in a cell with a pathlength of 1.00 cm. What is the %T if you increase the pathlength to 10.00 cm? 5. The accuracy of a spectrophotometer is evaluated by preparing a solution of 60.06 ppm \(\ce{K2Cr2O7}\) in 0.0050 M \(\ce{H2SO4}\), and measuring its absorbance at a wavelength of 350 nm in a cell with a pathlength of 1.00 cm. The expected absorbance is 0.640. What is the expected molar absorptivity of \(\ce{K2Cr2O7}\) at this wavelength? 6. A chemical deviation to Beer’s law may occur if the concentration of an absorbing species is affected by the position of an equilibrium reaction. Consider a weak acid, HA, for which \(K_a\) is \(2 \times 10^{-5}\). Construct Beer’s law calibration curves of absorbance versus the total concentration of weak acid (\(C_{\text{total}}\) = [HA] + [A\(^-\)]), using values for \(C_{\text{total}}\) of \(1.0 \times 10^{-5}\), \(3.0 \times 10^{-5}\), \(5.0 \times 10^{-5}\), \(7.0 \times 10^{-5}\), \(9.0 \times 10^{-5}\), \(11 \times 10^{-5}\), and \(13 \times 10^{-5}\) M for the following sets of conditions and comment on your results: (a) \(\varepsilon_{HA} = \varepsilon_{A^-} = 2000\) M\(^{-1}\) cm\(^{-1}\); unbuffered solution. (b) \(\varepsilon_{HA} = 2000\) M\(^{-1}\) cm\(^{-1}\); \(\varepsilon_{A^-} = 500\) M\(^{-1}\) cm\(^{-1}\); unbuffered solution. (c) \(\varepsilon_{HA} = 2000\) M\(^{-1}\) cm\(^{-1}\); \(\varepsilon_{A^-} = 500\) M\(^{-1}\) cm\(^{-1}\); solution buffered to a pH of 4.5. Assume a constant pathlength of 1.00 cm for all samples. 7. One instrumental limitation to Beer’s law is the effect of polychromatic radiation. Consider a line source that emits radiation at two wavelengths, \(\lambda^{\prime}\) and \(\lambda^{\prime \prime}\).
When treated separately, the absorbances at these wavelengths, \(A^{\prime}\) and \(A^{\prime\prime}\), are \[A^{\prime}=-\log \frac{P_{\mathrm{T}}^{\prime}}{P_{0}^{\prime}}=\varepsilon^{\prime} b C \quad \quad A^{\prime \prime}=-\log \frac{P_{\mathrm{T}}^{\prime \prime}}{P_{0}^{\prime \prime}}=\varepsilon^{\prime \prime} b C \nonumber\] If both wavelengths are measured simultaneously the absorbance is \[A=-\log \frac{\left(P_{\mathrm{T}}^{\prime}+P_{\mathrm{T}}^{\prime \prime}\right)}{\left(P_{0}^{\prime}+P_{0}^{\prime \prime}\right)} \nonumber\] (a) Show that if the molar absorptivities at \(\lambda^{\prime}\) and \(\lambda^{\prime \prime}\) are the same (\(\varepsilon^{\prime} = \varepsilon^{\prime \prime} = \varepsilon\)), then the absorbance is equivalent to \[A=\varepsilon b C \nonumber\] (b) Construct Beer’s law calibration curves over the concentration range of zero to \(1 \times 10^{-4}\) M using \(\varepsilon^{\prime} = 1000\) M\(^{-1}\) cm\(^{-1}\) and \(\varepsilon^{\prime \prime} = 1000\) M\(^{-1}\) cm\(^{-1}\), and \(\varepsilon^{\prime} = 1000\) M\(^{-1}\) cm\(^{-1}\) and \(\varepsilon^{\prime \prime} = 100\) M\(^{-1}\) cm\(^{-1}\). Assume a value of 1.00 cm for the pathlength and that \(P_0^{\prime} = P_0^{\prime \prime} = 1\). Explain the difference between the two curves. 8. A second instrumental limitation to Beer’s law is stray radiation. The following data were obtained using a cell with a pathlength of 1.00 cm when stray light is insignificant (\(P_{\text{stray}} = 0\)). Calculate the absorbance of each solution when \(P_{\text{stray}}\) is 5% of \(P_0\), and plot Beer’s law calibration curves for both sets of data. Explain any differences between the two curves. (Hint: Assume \(P_0\) is 100). 9. In the process of performing a spectrophotometric determination of iron, an analyst prepares a calibration curve using a single-beam spectrophotometer similar to that shown earlier. After preparing the calibration curve, the analyst drops and breaks the cuvette. The analyst acquires a new cuvette, measures the absorbance of the sample, and determines the %w/w Fe in the sample.
Does the change in cuvette lead to a determinate error in the analysis? Explain. 10. The spectrophotometric methods for determining Mn in steel and for determining glucose use a chemical reaction to produce a colored species whose absorbance we can monitor. In the analysis of Mn in steel, colorless \(\ce{Mn^{2+}}\) is oxidized to give the purple \(\text{MnO}_4^{-}\) ion. To analyze for glucose, which is also colorless, we react it with a yellow colored solution of the \(\text{Fe(CN)}_6^{3-}\) ion, forming the colorless \(\text{Fe(CN)}_6^{4-}\) ion. The directions for the analysis of Mn do not specify precise reaction conditions, and samples and standards are treated separately. The conditions for the analysis of glucose, however, require that the samples and standards are treated simultaneously at exactly the same temperature and for exactly the same length of time. Explain why these two experimental procedures are so different. 11. One method for the analysis of \(\ce{Fe^{3+}}\), which is used with a variety of sample matrices, is to form the highly colored \(\ce{Fe^{3+}}\)–thioglycolic acid complex. The complex absorbs strongly at 535 nm. Standardizing the method is accomplished using external standards. A 10.00-ppm \(\ce{Fe^{3+}}\) working standard is prepared by transferring a 10-mL aliquot of a 100.0 ppm stock solution of \(\ce{Fe^{3+}}\) to a 100-mL volumetric flask and diluting to volume. Calibration standards of 1.00, 2.00, 3.00, 4.00, and 5.00 ppm are prepared by transferring appropriate amounts of the 10.0 ppm working solution into separate 50-mL volumetric flasks, each of which contains 5 mL of thioglycolic acid, 2 mL of 20% w/v ammonium citrate, and 5 mL of 0.22 M \(\ce{NH3}\). After diluting to volume and mixing, the absorbances of the external standards are measured against an appropriate blank. Samples are prepared for analysis by taking a portion known to contain approximately 0.1 g of \(\ce{Fe^{3+}}\), dissolving it in a minimum amount of \(\ce{HNO3}\), and diluting to volume in a 1-L volumetric flask.
A 1.00-mL aliquot of this solution is transferred to a 50-mL volumetric flask, along with 5 mL of thioglycolic acid, 2 mL of 20% w/v ammonium citrate, and 5 mL of 0.22 M \(\ce{NH3}\) and diluted to volume. The absorbance of this solution is used to determine the concentration of \(\ce{Fe^{3+}}\) in the sample. (a) What is an appropriate blank for this procedure? (b) Ammonium citrate is added to prevent the precipitation of \(\ce{Al^{3+}}\). What is the effect on the reported concentration of iron in the sample if there is a trace impurity of \(\ce{Fe^{3+}}\) in the ammonium citrate? (c) Why does the procedure specify that the sample contain approximately 0.1 g of \(\ce{Fe^{3+}}\)? (d) Unbeknownst to the analyst, the 100-mL volumetric flask used to prepare the 10.00 ppm working standard of \(\ce{Fe^{3+}}\) has a volume that is significantly smaller than 100.0 mL. What effect will this have on the reported concentration of iron in the sample? 12. A spectrophotometric method for the analysis of iron has a linear calibration curve for standards of 0.00, 5.00, 10.00, 15.00, and 20.00 mg Fe/L. An iron ore sample that is 40–60% w/w Fe is analyzed by this method. An approximately 0.5-g sample is taken, dissolved in a minimum of concentrated HCl, and diluted to 1 L in a volumetric flask using distilled water. A 5.00 mL aliquot is removed with a pipet. To what volume—10, 25, 50, 100, 250, 500, or 1000 mL—should it be diluted to minimize the uncertainty in the analysis? Explain. 13. Lozano-Calero and colleagues developed a method for the quantitative analysis of phosphorous in cola beverages based on the formation of the blue-colored phosphomolybdate complex, \(\ce{(NH4)3[PO4(MoO3)12]}\) [Lozano-Calero, D.; Martín-Palomeque, P.; Madueño-Loriguillo, S. , , 1173–1174]. The complex is formed by adding \(\ce{(NH4)6Mo7O24}\) to the sample in the presence of a reducing agent, such as ascorbic acid. The concentration of the complex is determined spectrophotometrically at a wavelength of 830 nm, using an external standards calibration curve.
In a typical analysis, a set of standard solutions that contain known amounts of phosphorous is prepared by placing appropriate volumes of a 4.00 ppm solution of \(\text{P}_2\text{O}_5\) in a 5-mL volumetric flask, adding 2 mL of an ascorbic acid reducing solution, and diluting to volume with distilled water. Cola beverages are prepared for analysis by pouring a sample into a beaker and allowing it to stand for 24 h to expel the dissolved \(\text{CO}_2\). A 2.50-mL sample of the degassed sample is transferred to a 50-mL volumetric flask and diluted to volume. A 250-μL aliquot of the diluted sample is then transferred to a 5-mL volumetric flask, treated with 2 mL of the ascorbic acid reducing solution, and diluted to volume with distilled water. (a) The authors note that this method can be applied only to noncolored cola beverages. Explain why this is true. (b) How might you modify this method so that you can apply it to any cola beverage? (c) Why is it necessary to remove the dissolved gases? (d) Suggest an appropriate blank for this method. (e) The authors report a calibration curve of \[A=-0.02+\left(0.72 \ \mathrm{ppm}^{-1}\right) \times C_{\mathrm{P}_{2} \mathrm{O}_{5}} \nonumber\] A sample of Crystal Pepsi, analyzed as described above, yields an absorbance of 0.565. What is the concentration of phosphorous, reported as ppm P, in the original sample of Crystal Pepsi? Crystal Pepsi was a colorless, caffeine-free soda produced by PepsiCo. It was available in the United States from 1992 to 1993. 14. EDTA forms colored complexes with a variety of metal ions that may serve as the basis for a quantitative spectrophotometric method of analysis. The molar absorptivities of the EDTA complexes of \(\text{Cu}^{2+}\), \(\text{Co}^{2+}\), and \(\text{Ni}^{2+}\) at three wavelengths are summarized in the following table (all values of \(\varepsilon\) are in \(\text{M}^{-1} \text{ cm}^{-1}\)).
Using this information determine the following, assuming a pathlength of 1.00 cm for all measurements: (a) The concentration of \(\text{Cu}^{2+}\) in a solution that has an absorbance of 0.338 at a wavelength of 732.0 nm. (b) The concentrations of \(\text{Cu}^{2+}\) and \(\text{Co}^{2+}\) in a solution that has an absorbance of 0.453 at a wavelength of 732.0 nm and 0.107 at a wavelength of 462.9 nm. (c) The concentrations of \(\text{Cu}^{2+}\), \(\text{Co}^{2+}\), and \(\text{Ni}^{2+}\) in a sample that has an absorbance of 0.423 at a wavelength of 732.0 nm, 0.184 at a wavelength of 462.9 nm, and 0.291 at a wavelength of 378.7 nm. 15. The concentration of phenol in a water sample is determined by using steam distillation to separate the phenol from non-volatile impurities, followed by reacting the phenol in the distillate with 4-aminoantipyrine and \(\text{K}_3\text{Fe(CN)}_6\) at pH 7.9 to form a colored antipyrine dye. A phenol standard with a concentration of 4.00 ppm has an absorbance of 0.424 at a wavelength of 460 nm using a 1.00 cm cell. A water sample is steam distilled and a 50.00-mL aliquot of the distillate is placed in a 100-mL volumetric flask and diluted to volume with distilled water. The absorbance of this solution is 0.394. What is the concentration of phenol (in parts per million) in the water sample? 16. Saito describes a quantitative spectrophotometric procedure for iron based on a solid-phase extraction using bathophenanthroline in a poly(vinyl chloride) membrane [Saito, T. , , 351–355]. In the absence of \(\text{Fe}^{2+}\) the membrane is colorless, but when immersed in a solution of \(\text{Fe}^{2+}\) and \(\text{I}^-\), the membrane develops a red color as a result of the formation of an \(\text{Fe}^{2+}\)–bathophenanthroline complex. A calibration curve determined using a set of external standards with known concentrations of \(\text{Fe}^{2+}\) gave a standardization relationship of \[A=\left(8.60 \times 10^{3} \ \mathrm{M}^{-1}\right) \times\left[\mathrm{Fe}^{2+}\right] \nonumber\] What is the concentration of iron, in mg Fe/L, for a sample with an absorbance of 0.100? 17.
In the DPD colorimetric method for the free chlorine residual, which is reported as mg \(\text{Cl}_2\)/L, the oxidizing power of free chlorine converts the colorless amine N,N-diethyl-p-phenylenediamine to a colored dye that absorbs strongly over the wavelength range of 440–580 nm. Analysis of a set of calibration standards gave the following results. A sample from a public water supply is analyzed to determine the free chlorine residual, giving an absorbance of 0.113. What is the free chlorine residual for the sample in mg \(\text{Cl}_2\)/L? 18. Lin and Brown described a quantitative method for methanol based on its effect on the visible spectrum of methylene blue [Lin, J.; Brown, C. W. , , 48–51]. In the absence of methanol, methylene blue has two prominent absorption bands at 610 nm and 663 nm, which correspond to the monomer and the dimer, respectively. In the presence of methanol, the intensity of the dimer’s absorption band decreases, while that for the monomer increases. For concentrations of methanol between 0 and 30% v/v, the ratio of the two absorbances is a linear function of the amount of methanol. Use the following standardization data to determine the %v/v methanol in a sample whose two absorbances are 0.75 and 1.07. 19. The concentration of the barbiturate barbital in a blood sample is determined by extracting 3.00 mL of blood with 15 mL of \(\text{CHCl}_3\). The chloroform, which now contains the barbital, is extracted with 10.0 mL of 0.45 M NaOH (pH ≈ 13). A 3.00-mL sample of the aqueous extract is placed in a 1.00-cm cell and an absorbance of 0.115 is measured. The pH of the sample in the absorption cell is then adjusted to approximately 10 by adding 0.50 mL of 16% w/v \(\text{NH}_4\text{Cl}\), giving an absorbance of 0.023. When 3.00 mL of a standard barbital solution with a concentration of 3 mg/100 mL is taken through the same procedure, the absorbance at pH 13 is 0.295 and the absorbance at a pH of 10 is 0.002. Report the mg barbital/100 mL in the sample. 20.
Jones and Thatcher developed a spectrophotometric method for analyzing analgesic tablets that contain aspirin, phenacetin, and caffeine [Jones, M.; Thatcher, R. L. , , 957–960]. The sample is dissolved in \(\text{CHCl}_3\) and extracted with an aqueous solution of \(\text{NaHCO}_3\) to remove the aspirin. After the extraction is complete, the chloroform is transferred to a 250-mL volumetric flask and diluted to volume with \(\text{CHCl}_3\). A 2.00-mL portion of this solution is then diluted to volume in a 200-mL volumetric flask with \(\text{CHCl}_3\). The absorbance of the final solution is measured at wavelengths of 250 nm and 275 nm, at which the absorptivities, in \(\text{ppm}^{-1} \text{ cm}^{-1}\), for caffeine and phenacetin are given in the following table. Aspirin is determined by neutralizing the \(\text{NaHCO}_3\) in the aqueous solution and extracting the aspirin into \(\text{CHCl}_3\). The combined extracts are diluted to 500 mL in a volumetric flask. A 20.00-mL portion of the solution is placed in a 100-mL volumetric flask and diluted to volume with \(\text{CHCl}_3\). The absorbance of this solution is measured at 277 nm, where the absorptivity of aspirin is 0.00682 \(\text{ppm}^{-1} \text{ cm}^{-1}\). An analgesic tablet treated by this procedure is found to have absorbances of 0.466 at 250 nm, 0.164 at 275 nm, and 0.600 at 277 nm when using a cell with a 1.00 cm pathlength. Report the milligrams of aspirin, caffeine, and phenacetin in the analgesic tablet. 21. The concentration of \(\text{SO}_2\) in a sample of air is determined by the p-rosaniline method. The \(\text{SO}_2\) is collected in a 10.00-mL solution of \(\text{HgCl}_4^{2-}\), where it reacts to form \(\text{Hg(SO}_3)_2^{2-}\), by pulling air through the solution for 75 min at a rate of 1.6 L/min. After adding p-rosaniline and formaldehyde, the colored solution is diluted to 25 mL in a volumetric flask. The absorbance is measured at 569 nm in a 1-cm cell, yielding a value of 0.485. A standard sample is prepared by substituting a 1.00-mL sample of a standard solution that contains the equivalent of 15.00 ppm \(\text{SO}_2\) for the air sample. The absorbance of the standard is found to be 0.181.
Report the concentration of \(\text{SO}_2\) in the air in mg \(\text{SO}_2\)/L. The density of air is 1.18 g/liter. 22. Seaholtz and colleagues described a method for the quantitative analysis of CO in automobile exhaust based on the measurement of infrared radiation at 2170 cm\(^{-1}\) [Seaholtz, M. B.; Pence, L. E.; Moe, O. A. Jr. , , 820–823]. A calibration curve is prepared by filling a 10-cm IR gas cell with a known pressure of CO and measuring the absorbance using an FT-IR, giving a calibration equation of \[A=-1.1 \times 10^{-4}+\left(9.9 \times 10^{-4}\right) \times P_{\mathrm{CO}} \nonumber\] Samples are prepared by using a vacuum manifold to fill the gas cell. After measuring the total pressure, the absorbance at 2170 cm\(^{-1}\) is measured. Results are reported as %CO (v/v). The analysis of five exhaust samples from a 1973 coupe gives the following results. Determine the %CO for each sample, and report the mean and the 95% confidence interval. 23. One example of a disposable IR sample card is made using a thin sheet of polyethylene. To prepare an analyte for analysis, it is dissolved in a suitable solvent and a portion of the sample placed on the IR card. After the solvent evaporates, leaving the analyte behind as a thin film, the sample’s IR spectrum is obtained. Because the thickness of the polyethylene film is not uniform, the primary application of IR cards is for a qualitative analysis. Zhao and Malinowski reported how an internal standardization with KSCN can be used for a quantitative IR analysis of polystyrene [Zhao, Z.; Malinowski, E. R. , , 44–49]. Polystyrene is monitored at 1494 cm\(^{-1}\) and KSCN at 2064 cm\(^{-1}\). Standard solutions are prepared by placing weighed portions of polystyrene in a 10-mL volumetric flask and diluting to volume with a solution of 10 g/L KSCN in methyl isobutyl ketone. A typical set of results is shown here. When a 0.8006-g sample of a poly(styrene/maleic anhydride) copolymer is analyzed, the following results are obtained.
What is the %w/w polystyrene in the copolymer? Given that the reported %w/w polystyrene is 67%, is there any evidence for a determinate error at \(\alpha\) = 0.05? 24. The following table lists molar absorptivities for the Arsenazo complexes of copper and barium [Grossman, O.; Turanov, A. N. , , 195–202]. Suggest appropriate wavelengths for analyzing mixtures of copper and barium using their Arsenazo complexes. 25. Blanco and colleagues report several applications of multiwavelength linear regression analysis for the simultaneous determination of two-component mixtures [Blanco, M.; Iturriaga, H.; Maspoch, S.; Tarin, P. , , 178–180]. For each of the following, determine the molar concentration of each analyte in the mixture. (a) Titanium and vanadium are determined by forming complexes with \(\text{H}_2\text{O}_2\). Results for a mixture of Ti(IV) and V(V) and for standards of 63.1 ppm Ti(IV) and 96.4 ppm V(V) are listed in the following table. (b) Copper and zinc are determined by forming colored complexes with 2-pyridyl-azo-resorcinol (PAR). The absorbances for PAR, a mixture of \(\text{Cu}^{2+}\) and \(\text{Zn}^{2+}\), and standards of 1.00 ppm \(\text{Cu}^{2+}\) and 1.00 ppm \(\text{Zn}^{2+}\) are listed in the following table. Note that you must correct the absorbances for each metal for the contribution from PAR. 26. The stoichiometry of a metal–ligand complex, \(\text{ML}_n\), is determined by the method of continuous variations. A series of solutions is prepared in which the combined concentrations of M and L are held constant at \(5.15 \times 10^{-4}\) M. The absorbances of these solutions are measured at a wavelength where only the metal–ligand complex absorbs. Using the following data, determine the formula of the metal–ligand complex. 27. The stoichiometry of a metal–ligand complex, \(\text{ML}_n\), is determined by the mole-ratio method. A series of solutions are prepared in which the metal’s concentration is held constant at \(3.65 \times 10^{-4}\) M and the ligand’s concentration is varied from \(1 \times 10^{-4}\) M to \(1 \times 10^{-3}\) M.
Using the following data, determine the stoichiometry of the metal–ligand complex. 28. The stoichiometry of a metal–ligand complex, \(\text{ML}_n\), is determined by the slope-ratio method. Two sets of solutions are prepared. For the first set of solutions the metal’s concentration is held constant at 0.010 M and the ligand’s concentration is varied. The following data are obtained at a wavelength where only the metal–ligand complex absorbs. For the second set of solutions the concentration of the ligand is held constant at 0.010 M, and the concentration of the metal is varied, yielding the following absorbances. Using this data, determine the stoichiometry of the metal–ligand complex. 29. Kawakami and Igarashi developed a spectrophotometric method for nitrite based on its reaction with 5,10,15,20-tetrakis(4-aminophenyl)porphyrin (TAPP). As part of their study they investigated the stoichiometry of the reaction between TAPP and \(\text{NO}_2^-\). The following data are derived from a figure in their paper [Kawakami, T.; Igarashi, S. , , 175–180]. What is the stoichiometry of the reaction? 30. The equilibrium constant for an acid–base indicator is determined by preparing three solutions, each of which has a total indicator concentration of \(1.35 \times 10^{-5}\) M. The pH of the first solution is adjusted until it is acidic enough to ensure that only the acid form of the indicator is present, yielding an absorbance of 0.673. The absorbance of the second solution, whose pH is adjusted to give only the base form of the indicator, is 0.118. The pH of the third solution is adjusted to 4.17 and has an absorbance of 0.439. What is the acidity constant for the acid–base indicator? 31. The acidity constant for an organic weak acid is determined by measuring its absorbance as a function of pH while maintaining a constant total concentration of the acid. Using the data in the following table, determine the acidity constant for the organic weak acid. 32.
Suppose you need to prepare a set of calibration standards for the spectrophotometric analysis of an analyte that has a molar absorptivity of 1138 \(\text{M}^{-1} \text{ cm}^{-1}\) at a wavelength of 625 nm. To maintain an acceptable precision for the analysis, the %T for the standards should be between 15% and 85%. (a) What is the concentration for the most concentrated and for the least concentrated standard you should prepare, assuming a 1.00-cm sample cell? (b) Explain how you will analyze samples with concentrations that are 10 μM, 0.1 mM, and 1.0 mM in the analyte. 33. When using a spectrophotometer whose precision is limited by the uncertainty of reading %T, the analysis of highly absorbing solutions can lead to an unacceptable level of indeterminate errors. Consider the analysis of a sample for which the molar absorptivity is \(1.0 \times 10^4\) \(\text{M}^{-1} \text{ cm}^{-1}\) and for which the pathlength is 1.00 cm. (a) What is the relative uncertainty in concentration for an analyte whose concentration is \(2.0 \times 10^{-4}\) M if the uncertainty in \(T\) is ±0.002? (b) What is the relative uncertainty in the concentration if the spectrophotometer is calibrated using a blank that consists of a \(1.0 \times 10^{-4}\) M solution of the analyte? 34. Hobbins reported the following calibration data for the flame atomic absorption analysis for phosphorous [Hobbins, W. B. “Direct Determination of Phosphorous in Aqueous Matricies by Atomic Absorption,” Varian Instruments at Work, Number AA-19, February 1982]. To determine the purity of a sample of \(\text{Na}_2\text{HPO}_4\), a 2.469-g sample is dissolved and diluted to volume in a 100-mL volumetric flask. Analysis of the resulting solution gives an absorbance of 0.135. What is the purity of the \(\text{Na}_2\text{HPO}_4\)? 35. Bonert and Pohl reported results for the atomic absorption analysis of several metals in the caustic suspensions produced during the manufacture of soda by the ammonia-soda process [Bonert, K.; Pohl, B.
“The Determination of Cd, Cr, Cu, Ni, and Pb in Concentrated \(\text{CaCl}_2\)/NaCl solutions by AAS,” AA Instruments at Work (Varian) Number 98, November, 1990]. (a) The concentration of Cu is determined by acidifying a 200.0-mL sample of the caustic solution with 20 mL of concentrated \(\text{HNO}_3\), adding 1 mL of 27% w/v \(\text{H}_2\text{O}_2\), and boiling for 30 min. The resulting solution is diluted to 500 mL in a volumetric flask, filtered, and analyzed by flame atomic absorption using matrix matched standards. The results for a typical analysis are shown in the following table. Determine the concentration of Cu in the caustic suspension. (b) The determination of Cr is accomplished by acidifying a 200.0-mL sample of the caustic solution with 20 mL of concentrated \(\text{HNO}_3\), adding 0.2 g of \(\text{Na}_2\text{SO}_3\) and boiling for 30 min. The Cr is isolated from the sample by adding 20 mL of \(\text{NH}_3\), producing a precipitate that includes the chromium as well as other oxides. The precipitate is isolated by filtration, washed, and transferred to a beaker. After acidifying with 10 mL of \(\text{HNO}_3\), the solution is evaporated to dryness. The residue is redissolved in a combination of \(\text{HNO}_3\) and HCl and evaporated to dryness. Finally, the residue is dissolved in 5 mL of HCl, filtered, diluted to volume in a 50-mL volumetric flask, and analyzed by atomic absorption using the method of standard additions. The atomic absorption results are summarized in the following table. Report the concentration of Cr in the caustic suspension. 36. Quigley and Vernon report results for the determination of trace metals in seawater using a graphite furnace atomic absorption spectrophotometer and the method of standard additions [Quigley, M. N.; Vernon, F. , , 671–673]. The trace metals are first separated from their complex, high-salt matrix by coprecipitating with \(\text{Fe(OH)}_3\). In a typical analysis a 5.00-mL portion of 2000 ppm \(\text{Fe}^{3+}\) is added to 1.00 L of seawater. The pH is adjusted to 9 using \(\text{NH}_4\text{OH}\), and the precipitate of \(\text{Fe(OH)}_3\) allowed to stand overnight.
After isolating and rinsing the precipitate, the \(\text{Fe(OH)}_3\) and coprecipitated metals are dissolved in 2 mL of concentrated \(\text{HNO}_3\) and diluted to volume in a 50-mL volumetric flask. To analyze for \(\text{Mn}^{2+}\), a 1.00-mL sample of this solution is diluted to 100 mL in a volumetric flask. The following samples are injected into the graphite furnace and analyzed. Report the ppb \(\text{Mn}^{2+}\) in the sample of seawater. 37. The concentration of Na in plant materials is determined by flame atomic emission. The material to be analyzed is prepared by grinding, homogenizing, and drying at 103°C. A sample of approximately 4 g is transferred to a quartz crucible and heated on a hot plate to char the organic material. The sample is heated in a muffle furnace at 550°C for several hours. After cooling to room temperature the residue is dissolved by adding 2 mL of 1:1 \(\text{HNO}_3\) and evaporated to dryness. The residue is redissolved in 10 mL of 1:9 \(\text{HNO}_3\), filtered, and diluted to 50 mL in a volumetric flask. The following data are obtained during a typical analysis for the concentration of Na in a 4.0264-g sample of oat bran. Report the concentration in μg Na/g sample. 38. Yan and colleagues developed a method for the analysis of iron based on its formation of a fluorescent metal–ligand complex with the ligand 5-(4-methylphenylazo)-8-aminoquinoline [Yan, G.; Shi, G.; Liu, Y. , , 121–124]. In the presence of the surfactant cetyltrimethyl ammonium bromide the analysis is carried out using an excitation wavelength of 316 nm with emission monitored at 528 nm. Standardization with external standards gives the following calibration curve. \[I_{f}=-0.03+\left(1.594 \ \mathrm{mg}^{-1} \ \mathrm{L}\right) \times \frac{\mathrm{mg} \ \mathrm{Fe}^{3+}}{\mathrm{L}} \nonumber\] A 0.5113-g sample of dry dog food is ashed to remove organic materials, and the residue dissolved in a small amount of HCl and diluted to volume in a 50-mL volumetric flask. Analysis of the resulting solution gives a fluorescent emission intensity of 5.72.
Determine the mg Fe/L in the sample of dog food. 39. A solution of \(5.00 \times 10^{-5}\) M 1,3-dihydroxynaphthalene in 2 M NaOH has a fluorescence intensity of 4.85 at a wavelength of 459 nm. What is the concentration of 1,3-dihydroxynaphthalene in a solution that has a fluorescence intensity of 3.74 under identical conditions? 40. The following data is recorded for the phosphorescent intensity of several standard solutions of benzo[a]pyrene. What is the concentration of benzo[a]pyrene in a sample that yields a phosphorescent emission intensity of 4.97? 41. The concentration of acetylsalicylic acid, \(\text{C}_9\text{H}_8\text{O}_4\), in aspirin tablets is determined by hydrolyzing it to the salicylate ion, \(\text{C}_7 \text{H}_5 \text{O}_3^-\), and determining its concentration spectrofluorometrically. A stock standard solution is prepared by weighing 0.0774 g of salicylic acid, \(\text{C}_7\text{H}_6\text{O}_3\), into a 1-L volumetric flask and diluting to volume. A set of calibration standards is prepared by pipetting 0, 2.00, 4.00, 6.00, 8.00, and 10.00 mL of the stock solution into separate 100-mL volumetric flasks that contain 2.00 mL of 4 M NaOH and diluting to volume. Fluorescence is measured at an emission wavelength of 400 nm using an excitation wavelength of 310 nm with results shown in the following table. Several aspirin tablets are ground to a fine powder in a mortar and pestle. A 0.1013-g portion of the powder is placed in a 1-L volumetric flask and diluted to volume with distilled water. A portion of this solution is filtered to remove insoluble binders and a 10.00-mL aliquot transferred to a 100-mL volumetric flask that contains 2.00 mL of 4 M NaOH. After diluting to volume the fluorescence of the resulting solution is 8.69. What is the %w/w acetylsalicylic acid in the aspirin tablets? 42. Selenium (IV) in natural waters is determined by complexing with ammonium pyrrolidine dithiocarbamate and extracting into \(\text{CHCl}_3\). This step serves to concentrate the Se(IV) and to separate it from Se(VI).
The Se(IV) is then extracted back into an aqueous matrix using \(\text{HNO}_3\). After complexing with 2,3-diaminonaphthalene, the complex is extracted into cyclohexane. Fluorescence is measured at 520 nm following its excitation at 380 nm. Calibration is achieved by adding known amounts of Se(IV) to the water sample before beginning the analysis. Given the following results, what is the concentration of Se(IV) in the sample? 43. Fibrinogen is a protein that is produced by the liver and found in human plasma. Its concentration in plasma is clinically important. Many of the analytical methods used to determine the concentration of fibrinogen in plasma are based on light scattering following its precipitation. For example, da Silva and colleagues describe a method in which fibrinogen precipitates in the presence of ammonium sulfate in a guanidine hydrochloride buffer [da Silva, M. P.; Fernandez-Romero, J. M.; Luque de Castro, M. D. , , 101–106]. Light scattering is measured nephelometrically at a wavelength of 340 nm. Analysis of a set of external calibration standards gives the following calibration equation \[I_{\mathrm{s}}=-4.66+9907.63 C \nonumber\] where \(I_s\) is the intensity of scattered light and \(C\) is the concentration of fibrinogen in g/L. A 9.00-mL sample of plasma is collected from a patient and mixed with 1.00 mL of an anticoagulating agent. A 1.00-mL aliquot of this solution is diluted to 250 mL in a volumetric flask and is found to have a scattering intensity of 44.70. What is the concentration of fibrinogen, in gram per liter, in the plasma sample?
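Problems like the last one chain a calibration equation with a series of dilution corrections. As a sketch of how that bookkeeping can be organized (using the calibration equation and volumes stated in problem 43; the function name is ours, not part of the procedure):

```python
def conc_from_scattering(intensity, intercept=-4.66, slope=9907.63):
    """Invert the linear calibration I_s = intercept + slope * C (C in g/L)."""
    return (intensity - intercept) / slope

# Fibrinogen concentration in the diluted aliquot (I_s = 44.70).
c_diluted = conc_from_scattering(44.70)

# Undo the 1.00 mL -> 250 mL dilution, then account for the 9.00 mL of
# plasma having been mixed with 1.00 mL of anticoagulant (10.00 mL total).
c_mixture = c_diluted * (250.0 / 1.00)
c_plasma = c_mixture * (10.00 / 9.00)

print(round(c_plasma, 2))  # -> 1.38 (g fibrinogen per L of plasma)
```

Writing each dilution as an explicit factor makes it easy to audit the direction of every correction, which is where most errors in this kind of problem occur.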
We will assume an understanding of the postulates of the Kinetic Molecular Theory and of the energetics of chemical reactions. We will also assume an understanding of phase equilibrium and reaction equilibrium, including the temperature dependence of equilibrium constants. We have carefully examined the observation that chemical reactions come to equilibrium. Depending on the reaction, the equilibrium conditions can be such that there is a mixture of reactants and products, or virtually all products, or virtually all reactants. We have not considered the time scale for the reaction to achieve these conditions, however. In many cases, the speed of the reaction might be of more interest than the final equilibrium conditions of the reaction. Some reactions proceed so slowly towards equilibrium as to appear not to occur at all. For example, metallic iron will eventually oxidize in the presence of aqueous salt solutions, but the time is sufficiently long for this process that we can reasonably expect to build a boat out of iron. On the other hand, some reactions may be so rapid as to pose a hazard. For example, hydrogen gas will react with oxygen gas so rapidly as to cause an explosion. In addition, the time scale for a reaction can depend very strongly on the amounts of reactants and their temperature. In this concept development study, we seek an understanding of the rates of chemical reactions. We will define and measure reaction rates and develop a quantitative analysis of the dependence of the reaction rates on the conditions of the reaction, including concentration of reactants and temperature. This quantitative analysis will provide us with insight into the process of a chemical reaction and thus lead us to develop a model to provide an understanding of the significance of reactant concentration and temperature. We will find that many reactions proceed quite simply, with reactant molecules colliding and exchanging atoms.
In other cases, we will find that the process of reaction can be quite complicated, involving many molecular collisions and rearrangements leading from reactant molecules to product molecules. The rate of the chemical reaction is determined by these steps. We begin by considering a fairly simple reaction on a rather elegant molecule. One oxidized form of buckminsterfullerene, \(\ce{C_{60}}\), is \(\ce{C_{60}O_3}\), with a three oxygen bridge as shown in Figure 16.1. \(\ce{C_{60}O_3}\) is prepared from \(\ce{C_{60}}\) dissolved in toluene solution at temperatures of \(0^\text{o} \text{C}\) or below. When the solution is warmed, \(\ce{C_{60}O_3}\) decomposes, releasing \(\ce{O_2}\) and creating \(\ce{C_{60}O}\) in a reaction which goes essentially to completion. We can actually watch this process happen in time by measuring the amount of light of a specific frequency absorbed by the \(\ce{C_{60}O_3}\) molecules, called the absorbance. The absorbance is proportional to the concentration of the \(\ce{C_{60}O_3}\) in the toluene solution, so observing the absorbance as a function of time is essentially the same as observing the concentration as a function of time. One such set of data is given in Table 16.1, and is shown in the graph in Figure 16.2. The rate at which the decomposition reaction is occurring is clearly related to the rate of change of the concentration \(\left[ \ce{C_{60}O_3} \right]\), which is proportional to the slope of the graph in Figure 16.2. Therefore, we define the rate of this reaction as \[\text{Rate} = -\frac{d \left[ \ce{C_{60}O_3} \right]}{dt} \cong -\frac{\Delta \left[ \ce{C_{60}O_3} \right]}{\Delta t}\] We want the rate of reaction to be positive, since the reaction is proceeding forward. However, because we are measuring the rate of disappearance of the reactant in this case, that rate is negative. We include a negative sign in this definition of rate so that the rate in the equation is a positive number.
Note also that the slope of the graph in Figure 16.2 should be taken as the derivative of the graph, since the graph is not a straight line. We will approximate that derivative by estimating the slope at each time in the data, taking the change in the absorbance of the \(\ce{C_{60}O_3}\) divided by the change in time at each time step. The rate, calculated in this way, is plotted as a function of time in Figure 16.3. It is clear that the slope of the graph in Figure 16.2 changes over the course of time. Correspondingly, Figure 16.3 shows that the rate of the reaction decreases as the reaction proceeds. The reaction is at first very fast but then slows considerably as the reactant \(\ce{C_{60}O_3}\) is depleted. The shape of the graph for rate versus time (Figure 16.3) is very similar to the shape of the graph for concentration versus time (Figure 16.2). This suggests that the rate of the reaction is related to the concentration of \(\ce{C_{60}O_3}\) at each time. Therefore, in Figure 16.4, we plot the rate of the reaction, defined in the equation above and shown in Figure 16.3, versus the absorbance of the \(\ce{C_{60}O_3}\). We find that there is a very simple proportional relationship between the rate of the reaction and the concentration of the reactant. Therefore we can write \[\begin{align} \text{Rate} &= -\frac{d \left[ \ce{C_{60}O_3} \right]}{dt} \\ &= k \left[ \ce{C_{60}O_3} \right] \end{align}\] where \(k\) is a proportionality constant. This equation shows that, early in the reaction when \(\left[ \ce{C_{60}O_3} \right]\) is large, the reaction proceeds rapidly, and that as \(\ce{C_{60}O_3}\) is consumed, the reaction slows down. The rate equation above is an example of a rate law, expressing the relationship between the rate of a reaction and the concentrations of the reactant or reactants. Rate laws are expressions of the relationship between experimentally observed rates and concentrations.
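The finite-difference estimate of the rate, and the check that Rate/[A] stays constant for a first-order reaction, can be sketched numerically. The data below are synthetic, generated from an assumed first-order decay with an assumed rate constant; they are not the \(\ce{C_{60}O_3}\) measurements of Table 16.1:

```python
import math

# Synthetic first-order decay, [A](t) = [A]0 * exp(-k t), with assumed k = 0.05 /s.
k_true = 0.05
times = [0.0, 3.0, 6.0, 9.0, 12.0, 15.0]             # s
conc = [1.0 * math.exp(-k_true * t) for t in times]  # arbitrary concentration units

# Estimate Rate = -d[A]/dt by a finite difference over each interval, and
# pair each estimate with the average concentration on that interval.
rates, mid_conc = [], []
for i in range(len(times) - 1):
    rates.append(-(conc[i + 1] - conc[i]) / (times[i + 1] - times[i]))
    mid_conc.append(0.5 * (conc[i] + conc[i + 1]))

# For a first-order reaction, Rate/[A] is (approximately) the same constant
# at every interval, and that constant approximates k.
ratios = [r / c for r, c in zip(rates, mid_conc)]
print([round(x, 4) for x in ratios])
```

The recovered ratios cluster near the assumed \(k\); the small offset comes from approximating the derivative with a finite difference over a nonzero time step.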
As a second example of a reaction rate, we consider the dimerization reaction of butadiene gas, \(\ce{CH_2=CH-CH=CH_2}\). Two butadiene molecules can combine to form vinylcyclohexene, shown in Figure 16.5. Table 16.2 provides experimental data on the gas phase concentration of butadiene \(\left[ \ce{C_4H_6} \right]\) as a function of time at \(T = 250^\text{o} \text{C}\). We can estimate the rate of reaction at each time step as in the rate equation shown earlier, and these data are presented in Table 16.2 as well. Again we see that the rate of reaction decreases as the concentration of butadiene decreases. This suggests that the rate is given by an expression like the rate law. To test this, we calculate \(\frac{\text{Rate}}{\left[ \ce{C_4H_6} \right]}\) in Table 16.2 for each time step. We note that this ratio is not constant, so the rate law above does not describe the relationship between the rate of reaction and the concentration of butadiene. Instead we calculate \(\frac{\text{Rate}}{\left[ \ce{C_4H_6} \right]^2}\) in Table 16.2. We discover that this ratio is a constant throughout the reaction. Therefore, the relationship between the rate of the reaction and the concentration of the reactant in this case is given by \[\begin{align} \text{Rate} &= -\frac{d \left[ \ce{C_4H_6} \right]}{dt} \\ &= k \left[ \ce{C_4H_6} \right]^2 \end{align}\] which is the rate law for the reaction in Figure 16.5. This is a very interesting result when compared to the rate law given above. In both cases, the results demonstrate that the rate of reaction depends on the concentration of the reactant. However, we now also know that the way in which the rate varies with the concentration depends on what the reaction is. Each reaction has its own rate law, observed experimentally. We would like to understand what determines the specific dependence of the reaction rate on concentration in each reaction.
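The same ratio test distinguishes first- from second-order behavior. A sketch with synthetic data generated from the integrated second-order law \([A] = [A]_0 / (1 + k [A]_0 t)\), using an assumed \(k\) and \([A]_0\) rather than the butadiene data of Table 16.2:

```python
# Synthetic second-order decay with assumed k = 0.014 /(M s) and [A]0 = 0.87 M.
k_true, a0 = 0.014, 0.87
times = [0.0, 50.0, 100.0, 150.0, 200.0]            # s
conc = [a0 / (1 + k_true * a0 * t) for t in times]  # M

# Finite-difference rates paired with the average concentration per interval.
rates, mid = [], []
for i in range(len(times) - 1):
    rates.append(-(conc[i + 1] - conc[i]) / (times[i + 1] - times[i]))
    mid.append(0.5 * (conc[i] + conc[i + 1]))

first_order = [r / c for r, c in zip(rates, mid)]      # drifts: wrong order
second_order = [r / c**2 for r, c in zip(rates, mid)]  # nearly constant


def spread(xs):
    """Relative spread of a list: (max - min) / mean."""
    return (max(xs) - min(xs)) / (sum(xs) / len(xs))


# The ratio that stays (nearly) constant identifies the reaction order.
print(spread(first_order) > spread(second_order))  # -> True
```

Here the second-order ratio is nearly constant while the first-order ratio drifts by roughly a factor of two, mirroring the comparison made with Table 16.2.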
In the first case considered above, the rate depends on the concentration of the reactant to the first power. We refer to this as a first order reaction. In the second case above, the rate depends on the concentration of the reactant to the second power, so this is called a second order reaction. There are also third order reactions, and even zeroth order reactions, whose rates do not depend on the amount of the reactant. We need more observations of rate laws for different reactions. The approach used in the previous section to determine a reaction's rate law is fairly clumsy and at this point difficult to apply. We consider here a more systematic approach. First, consider the decomposition of \(\ce{N_2O_5} \left( g \right)\). \[2 \ce{N_2O_5} \left( g \right) \rightarrow 4 \ce{NO_2} \left( g \right) + \ce{O_2} \left( g \right)\] We can create an initial concentration of \(\ce{N_2O_5}\) in a flask and measure the rate at which the \(\ce{N_2O_5}\) first decomposes. We can then create a different initial concentration of \(\ce{N_2O_5}\) and measure the new rate at which the \(\ce{N_2O_5}\) decomposes. By comparing these rates, we can find the order of the decomposition reaction. The rate law for decomposition of \(\ce{N_2O_5} \left( g \right)\) is of the general form: \[\text{Rate} = k \left[ \ce{N_2O_5} \right]^m\] so we need to determine the exponent \(m\). For example, at \(25^\text{o} \text{C}\) we observe that the rate of decomposition is \(1.4 \times 10^{-3} \: \frac{\text{M}}{\text{s}}\) when the concentration of \(\ce{N_2O_5}\) is \(0.020 \: \text{M}\). If instead we begin with \(\left[ \ce{N_2O_5} \right] = 0.010 \: \text{M}\), we observe that the rate of decomposition is \(7.0 \times 10^{-4} \: \frac{\text{M}}{\text{s}}\). We can compare the rate from the first measurement, Rate 1, to the rate from the second measurement, Rate 2.
From the equation above, we can write that \[\begin{align} \frac{\text{Rate 1}}{\text{Rate 2}} &= \frac{k \left[ \ce{N_2O_5} \right]^m_1}{k \left[ \ce{N_2O_5} \right]^m_2} \\ &= \frac{1.4 \times 10^{-3} \: \frac{\text{M}}{\text{s}}}{7.0 \times 10^{-4} \: \frac{\text{M}}{\text{s}}} \\ &= \frac{k \left( 0.020 \: \text{M} \right)^m}{k \left( 0.010 \: \text{M} \right)^m} \end{align}\] This can be simplified on both sides of the equation to give \[2.0 = 2.0^m\] Clearly, then, \(m = 1\) and the decomposition is a first order reaction. We can also then find the first order rate constant \(k\) for this reaction by simply plugging in one of the initial rate measurements to the rate law equation. We find that \(k = 0.070 \: \text{s}^{-1}\). This approach to finding reaction order is called the method of initial rates, since it relies on fixing the concentration at specific initial values and measuring the initial rate associated with each concentration. So far we have considered only reactions which have a single reactant. Consider a second example of the method of initial rates involving the reaction of hydrogen gas and iodine gas: \[\ce{H_2} \left( g \right) + \ce{I_2} \left( g \right) \rightarrow 2 \ce{HI} \left( g \right)\] In this case, we expect to find that the rate of the reaction depends on the concentrations for both reactants. As such, we need more initial rate observations to determine the rate law. In Table 16.3, observations are reported for the initial rate for three sets of initial concentrations of \(\ce{H_2}\) and \(\ce{I_2}\). 
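The method of initial rates arithmetic above can be checked with a short script, using the two measurements quoted in the text:

```python
import math

# The two measurements quoted above for N2O5 decomposition at 25 C
rate1, conc1 = 1.4e-3, 0.020    # M/s, M
rate2, conc2 = 7.0e-4, 0.010

# Rate 1 / Rate 2 = (conc1/conc2)^m, so m = ln(rate ratio) / ln(conc ratio)
m = math.log(rate1 / rate2) / math.log(conc1 / conc2)
k = rate1 / conc1 ** round(m)   # rate constant from the first measurement

print(f"m = {m:.2f}")           # -> m = 1.00 (first order)
print(f"k = {k:.3f} 1/s")       # -> k = 0.070 1/s
```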
Following the same process we used in the \(\ce{N_2O_5}\) example, we write the general rate law for the reaction as \[\text{Rate} = k \left[ \ce{H_2} \right]^n \left[ \ce{I_2} \right]^m\] By comparing Experiment 1 to Experiment 2, we can write \[\begin{align} \frac{\text{Rate 1}}{\text{Rate 2}} &= \frac{k \left[ \ce{H_2} \right]^n_1 \left[ \ce{I_2} \right]^m_1}{k \left[ \ce{H_2} \right]^n_2 \left[ \ce{I_2} \right]^m_2} \\ &= \frac{3.00 \times 10^{-4} \: \frac{\text{M}}{\text{s}}}{6.00 \times 10^{-4} \: \frac{\text{M}}{\text{s}}} \\ &= \frac{k \left( 0.10 \: \text{M} \right)^n \left( 0.10 \: \text{M} \right)^m}{k \left( 0.20 \: \text{M} \right)^n \left( 0.10 \: \text{M} \right)^m} \end{align}\] This simplifies to \[0.50 = 0.50^n 1.00^m\] from which it is clear that \(n = 1\). Similarly, we can find that \(m = 1\). The reaction is therefore first order in each reactant and is second order overall. \[\text{Rate} = k \left[ \ce{H_2} \right] \left[ \ce{I_2} \right]\] Once we know the rate law, we can use any of the data from Table 16.3 to determine the rate constant, simply by plugging in concentrations and rate into the rate law equation. We find that \(k = 3.00 \times 10^{-2} \: \frac{1}{\text{Ms}}\). This procedure can be applied to any number of reactions. The challenge is to prepare the initial conditions precisely and to measure the initial change in concentration versus time accurately. Table 16.4 provides an overview of the rate laws for several reactions. A variety of reaction orders are observed, and they cannot be easily correlated with the stoichiometry of the reaction. Once we know the rate law for a reaction, we should be able to predict how fast a reaction will proceed. From this, we should also be able to predict how much reactant remains or how much product has been produced at any given time in the reaction. We will focus on the reactions with a single reactant to illustrate these ideas. 
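The two-reactant case can be sketched the same way. Experiments 1 and 2 below are the values quoted in the text; the third row is a hypothetical experiment added so that \(m\) can be isolated (it is chosen to be consistent with the rate law derived above, not taken from Table 16.3):

```python
import math

experiments = [
    # [H2] (M), [I2] (M), initial rate (M/s)
    (0.10, 0.10, 3.00e-4),   # Experiment 1 (from the text)
    (0.20, 0.10, 6.00e-4),   # Experiment 2 (from the text)
    (0.20, 0.20, 1.20e-3),   # Experiment 3 (hypothetical, consistent with the result)
]
(h1, i1, r1), (h2, i2, r2), (h3, i3, r3) = experiments

# [I2] is fixed between experiments 1 and 2, so their ratio isolates n;
# [H2] is fixed between experiments 2 and 3, so their ratio isolates m.
n = math.log(r2 / r1) / math.log(h2 / h1)
m = math.log(r3 / r2) / math.log(i3 / i2)
k = r1 / (h1 ** round(n) * i1 ** round(m))

print(f"n = {n:.2f}, m = {m:.2f}")   # first order in each reactant
print(f"k = {k:.3e} 1/(M s)")
```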
Consider a first order reaction like \(\ce{A} \rightarrow \text{products}\), for which the rate law must be \[\begin{align} \text{Rate} &= -\frac{d \left[ \ce{A} \right]}{dt} \\ &= k \left[ \ce{A} \right] \end{align}\] From calculus, it is possible to use the above equation to find the function \(\left[ \ce{A} \right] \left( t \right)\) which tells us the concentration \(\left[ \ce{A} \right]\) as a function of time. The result is \[\left[ \ce{A} \right] = \left[ \ce{A} \right]_0 e^{-kt}\] or equivalently \[\text{ln} \left( \left[ \ce{A} \right] \right) = \text{ln} \left( \left[ \ce{A} \right]_0 \right) - kt\] The above equation reveals that, if a reaction is first order, we can plot \(\text{ln} \left( \left[ \ce{A} \right] \right)\) versus time and get a straight line with slope equal to \(-k\). Moreover, if we know the rate constant and the initial concentration, we can predict the concentration at any time during the reaction. An interesting point in the reaction is the time at which exactly half of the original concentration of \(\ce{A}\) has been consumed. We call this time the half-life of the reaction and denote it as \(t_\frac{1}{2}\). At that time, \(\left[ \ce{A} \right] = \frac{1}{2} \left[ \ce{A} \right]_0\). From the above equation and using the properties of logarithms, we find that, for a first order reaction \[t_\frac{1}{2} = \frac{\text{ln} \left( 2 \right)}{k}\] This equation tells us that the half-life of a first order reaction does not depend on how much material we start with. It takes exactly the same amount of time for the reaction to proceed from all of the starting material to half of the starting material as it does to proceed from half of the starting material to one-fourth of the starting material. In each case, we halve the remaining material in a time equal to the constant half-life shown in the equation above. These conclusions are only valid for first order reactions. Consider then a second order reaction, such as the butadiene dimerization discussed above. 
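The constant first order half-life can be verified numerically before moving on. The rate constant below is the \(k = 0.070 \: \text{s}^{-1}\) found for \(\ce{N_2O_5}\) above; the initial concentration is an arbitrary choice:

```python
import math

k = 0.070            # 1/s, the N2O5 rate constant found above
A0 = 0.020           # M, an arbitrary starting concentration

t_half = math.log(2) / k        # half-life, independent of [A]0

def conc(t, a0=A0):
    """First-order integrated law: [A] = [A]0 exp(-kt)."""
    return a0 * math.exp(-k * t)

print(round(conc(t_half) / A0, 3))         # 0.5 remains after one half-life
print(round(conc(2 * t_half) / A0, 3))     # 0.25 remains after two half-lives
print(round(conc(t_half, a0=1.0), 3))      # still 0.5 with a different [A]0
```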
The general second order reaction \(\ce{A} \rightarrow \text{products}\) has the rate law \[\begin{align} \text{Rate} &= -\frac{d \left[ \ce{A} \right]}{dt} \\ &= k \left[ \ce{A} \right]^2 \end{align}\] Again, we can use calculus to find the function \(\left[ \ce{A} \right] \left( t \right)\) from the above equation. The result is most easily written as \[\frac{1}{\left[ \ce{A} \right]} = \frac{1}{\left[ \ce{A} \right]_0} + kt\] Note that, as \(t\) increases, \(\frac{1}{\left[ \ce{A} \right]}\) increases, so \(\left[ \ce{A} \right]\) decreases. The equation reveals that, for a reaction which is second order in the reactant \(\ce{A}\), we can plot \(\frac{1}{\left[ \ce{A} \right]}\) as a function of time to get a straight line with slope equal to \(k\). Again, if we know the rate constant and the initial concentration, we can find the concentration \(\left[ \ce{A} \right]\) at any time of interest during the reaction. The half-life of a second order reaction differs from the half-life of a first order reaction. From the above equation, if we take \(\left[ \ce{A} \right] = \frac{1}{2} \left[ \ce{A} \right]_0\), we get \[t_\frac{1}{2} = \frac{1}{k \left[ \ce{A} \right]_0}\] This shows that, unlike a first order reaction, the half-life for a second order reaction depends on how much material we start with. From this equation, the more concentrated the reactant is, the shorter the half-life. It is a common observation that reactions tend to proceed more rapidly with increasing temperature. Similarly, cooling reactants can have the effect of slowing a reaction to a near halt. How is this change in rate reflected in the rate law equation? One possibility is that there is a slight dependence on temperature of the concentrations, since volumes do vary with temperature. However, this is insufficient to account for the dramatic changes in rate typically observed. 
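The dependence of the second order half-life on the initial concentration can be checked the same way; the rate constant here is an arbitrary illustrative value:

```python
k = 0.04   # 1/(M s), an assumed second-order rate constant

def conc(t, a0):
    """Second-order integrated law: 1/[A] = 1/[A]0 + kt."""
    return 1.0 / (1.0 / a0 + k * t)

def t_half(a0):
    """Half-life of a second-order reaction: 1/(k [A]0)."""
    return 1.0 / (k * a0)

for a0 in (0.02, 0.04):
    th = t_half(a0)
    # at t = t_half, exactly half of [A]0 remains
    print(f"[A]0 = {a0} M: t_half = {th:.0f} s, fraction left = {conc(th, a0) / a0:.2f}")
```

Doubling the starting concentration halves the half-life, as the equation above predicts.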
Therefore, the temperature dependence of reaction rate is primarily found in the rate constant, \(k\). Consider for example the reaction of hydrogen gas with iodine gas at high temperatures. The rate constant of this reaction at each temperature can be found using the method of initial rates, as discussed above, and we find in Table 16.5 that the rate constant increases dramatically as the temperature increases. As shown in Figure 16.6, the rate constant appears to increase exponentially with temperature. After a little experimentation with the data, we find in Figure 16.7 that there is a simple linear relationship between \(\text{ln} \left( k \right)\) and \(\frac{1}{T}\). From Figure 16.7, we can see that the data in Table 16.5 fit the equation \[\text{ln} \left( k \right) = a \frac{1}{T} + b\] where \(a\) and \(b\) are constant for this reaction. It turns out that, for our purposes, all reactions have rate constants which fit this equation, but with different constants \(a\) and \(b\) for each reaction. Figure 16.7 is referred to as an Arrhenius plot, after Svante Arrhenius. It is very important to note that the form of the above equation and the appearance of Figure 16.7 are both the same as the equations and graphs for the temperature dependence of the equilibrium constant for an endothermic reaction. This suggests a model to account for the temperature dependence of the rate constant, based on the energetics of the reaction. In particular, it appears that the reaction rate is related to the amount of energy required for the reaction to occur. We will develop this further in the next section. At this point, we have only observed the dependence of reaction rates on concentration of reactants and on temperature, and we have fit these data to equations called rate laws. Although this is very convenient, it does not provide us insight into why a particular reaction has a specific rate law or why the temperature dependence should obey the equation shown above. 
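The linear relationship between \(\text{ln} \left( k \right)\) and \(\frac{1}{T}\) can be demonstrated with a least-squares fit. Since Table 16.5 is not reproduced here, the rate constants below are synthetic, generated from assumed constants \(a\) and \(b\); the fit recovers them from the line:

```python
import math

a_true, b_true = -19845.0, 25.33              # assumed constants for ln(k) = a/T + b
temps = [500.0, 550.0, 600.0, 650.0, 700.0]   # K
ks = [math.exp(a_true / T + b_true) for T in temps]   # synthetic rate constants

x = [1.0 / T for T in temps]        # 1/T
y = [math.log(k) for k in ks]       # ln(k)

# Least-squares slope and intercept of y vs x
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
a_fit = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
        / sum((xi - xbar) ** 2 for xi in x)
b_fit = ybar - a_fit * xbar

print(a_fit, b_fit)   # recovers a_true and b_true
```

Later in the section this slope is identified with \(-\frac{E_a}{R}\).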
Rate laws also do not provide any physical insight into the order of the reaction or the meaning of the constants \(a\) and \(b\) in the equation. We begin by asking why the reaction rate should depend on the concentration of the reactants. To answer this, we consider a simple reaction between two molecules in which atoms are transferred between the molecules during the reaction. For example, a reaction important in the decomposition of ozone \(\ce{O_3}\) by aerosols is \[\ce{O_3} \left( g \right) + \ce{Cl} \left( g \right) \rightarrow \ce{O_2} \left( g \right) + \ce{ClO} \left( g \right)\] What must happen for a reaction to occur between an \(\ce{O_3}\) molecule and a \(\ce{Cl}\) atom? Obviously, for these two particles to react, they must come into close proximity to one another so that an \(\ce{O}\) atom can be transferred from one to the other. In general, two molecules cannot trade atoms to produce new product molecules unless they are close enough together for the atoms of the two molecules to interact. This requires a collision between molecules. The rate of collisions depends on the concentrations of the reactants, since the more molecules there are in a confined space, the more likely they are to run into each other. To write this relationship in an equation, we can think in terms of probability, and we consider the reaction above. The probability for an \(\ce{O_3}\) molecule to be near a specific point increases with the number of \(\ce{O_3}\) molecules, and therefore increases with the concentration of \(\ce{O_3}\) molecules. The probability for a \(\ce{Cl}\) atom to be near that specific point is also proportional to the concentration of \(\ce{Cl}\) atoms. Therefore, the probability for an \(\ce{O_3}\) molecule and a \(\ce{Cl}\) atom to be in close proximity to the same specific point at the same time is proportional to \(\left[ \ce{O_3} \right]\) times \(\left[ \ce{Cl} \right]\). 
It is important to remember that not all collisions between \(\ce{O_3}\) molecules and \(\ce{Cl}\) atoms will result in a reaction. There are other factors to consider including how the molecules approach one another. The atoms may not be positioned properly to exchange between molecules, in which case the molecules will simply bounce off one another without reacting. For example, if the \(\ce{Cl}\) atom approaches the center \(\ce{O}\) atom of the \(\ce{O_3}\) molecule, that \(\ce{O}\) atom will not transfer. Another factor is the energy associated with the reaction. Clearly, though, a collision must occur for the reaction to occur, and therefore the rate of the reaction can be no faster than the rate of collisions between the reactant molecules. Therefore, we can say that, in a one-step reaction, where two molecules collide and react, the rate of the reaction will be proportional to the product of the concentrations of the reactants. For the reaction of \(\ce{O_3}\) with \(\ce{Cl}\), the rate must therefore be proportional to \(\left[ \ce{O_3} \right] \left[ \ce{Cl} \right]\), and we observe this in the experimental rate law in Table 16.4. Thus, it appears that we can understand the rate law by understanding the collisions which must occur for the reaction to take place. We also need our model to account for the temperature dependence of the rate constant. As noted at the end of the last section, the temperature dependence of the rate constant is the same as the temperature dependence of the equilibrium constant for an endothermic reaction. This suggests that the temperature dependence is due to an energetic factor required for the reaction to occur. However, we find experimentally that the rate constant equation describes the rate constant temperature dependence regardless of whether the reaction is endothermic or exothermic. Therefore, whatever the energetic factor is that is required for the reaction to occur, it is not just the endothermicity of the reaction. 
It must be that all reactions, regardless of the overall change in energy, require energy to occur. A model to account for this is the concept of activation energy. For a reaction to occur, at least some bonds in the reactant molecule must be broken, so that atoms can rearrange and new bonds can be created. At the time of collision, bonds are stretched and broken as new bonds are made. Breaking these bonds and rearranging the atoms during the collision requires the input of energy. The minimum amount of energy required for the reaction to occur is called the activation energy, \(E_a\). This is illustrated in Figure 16.8, showing conceptually how the energy of the reactants varies as the reaction proceeds. In Figure 16.8a, the energy is low early in the reaction, when the molecules are still arranged as reactants. As the molecules approach and begin to rearrange, the energy rises sharply to a maximum in the middle of the reaction. This sharp rise in energy is the activation energy, as illustrated. After the middle of the reaction has passed and the molecules are arranged more as products than reactants, the energy begins to fall again. However, the energy does not fall to its original value, so this is an endothermic reaction. Figure 16.8b shows the analogous situation for an exothermic reaction. Again, as the reactants approach one another, the energy rises as the atoms begin to rearrange. At the middle of the collision, the energy maximizes and then falls as the product molecules form. In an exothermic reaction, the product energy is lower than the reactant energy. Figure 16.8 thus shows that an energy barrier must be surmounted for the reaction to occur, regardless of whether the energy of the products is greater than (Figure 16.8a) or less than (Figure 16.8b) the energy of the reactants. This barrier accounts for the temperature dependence of the reaction rate. 
We know from the kinetic molecular theory that as temperature increases the average energy of the molecules in a sample increases. Therefore, as temperature increases, the fraction of molecules with sufficient energy to surmount the reaction activation barrier increases. Although we will not show it here, kinetic molecular theory shows that the fraction of molecules with energy greater than \(E_a\) at temperature \(T\) is proportional to \(e^{-\frac{E_a}{RT}}\). This means that the reaction rate and therefore also the rate constant must be proportional to \(e^{-\frac{E_a}{RT}}\). Therefore we can write \[k \left( T \right) = Ae^{-\frac{E_a}{RT}}\] where \(A\) is a proportionality constant. If we take the logarithm of both sides of the equation, we find that \[\text{ln} \left( k \left( T \right) \right) = -\frac{E_a}{RT} + \text{ln} \left( A \right)\] This equation matches the experimentally observed equation. We recall that a graph of \(\text{ln} \left( k \right)\) versus \(\frac{1}{T}\) is observed to be linear. Now we can see that the slope of that graph is equal to \(-\frac{E_a}{R}\). As a final note on the above equation, the constant \(A\) must have some physical significance. We have accounted for the probability of collision between two molecules and we have accounted for the energetic requirement for a successful reactive collision. We have not accounted for the probability that a collision will have the appropriate orientation of the reactant molecules during the collision. Moreover, not every collision which occurs with proper orientation and sufficient energy will actually result in a reaction. There are other random factors relating to the internal structure of each molecule at the instant of collision. The factor \(A\) accounts for all of these factors, and is essentially the probability that a collision with sufficient energy for reaction will indeed lead to reaction. \(A\) is commonly called the frequency factor. 
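The equation \(k \left( T \right) = Ae^{-\frac{E_a}{RT}}\) makes the temperature sensitivity concrete. The sketch below computes the factor by which \(k\) grows for a 10 K rise near room temperature at a few assumed activation energies; the prefactor \(A\) cancels in the ratio:

```python
import math

R = 8.314  # J/(mol K)

def ratio(Ea, T1=298.15, T2=308.15):
    """k(T2)/k(T1) for k = A exp(-Ea/RT); A cancels in the ratio."""
    return math.exp(-Ea / (R * T2)) / math.exp(-Ea / (R * T1))

for Ea_kJ in (25, 50, 75):          # assumed activation energies
    print(f"Ea = {Ea_kJ} kJ/mol: factor = {ratio(Ea_kJ * 1e3):.2f}")
    # factors of roughly 1.4, 1.9, and 2.7 respectively
```

An activation energy of roughly 50 kJ/mol reproduces the common rule of thumb that reaction rates double for a 10 °C rise near room temperature.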
Our collision model in the previous section accounts for the concentration and temperature dependence of the reaction rate, as expressed by the rate law. The concentration dependence arises from calculating the probability of the reactant molecules being in the same vicinity at the same instant. Therefore, we should be able to predict the rate law for any reaction by simply multiplying together the concentrations of all reactant molecules in the balanced stoichiometric equation. The order of the reaction should therefore be simply related to the stoichiometric coefficients in the reaction. However, Table 16.4 shows that this is incorrect for many reactions. Consider for example the apparently simple reaction \[2 \ce{ICl} \left( g \right) + \ce{H_2} \left( g \right) \rightarrow 2 \ce{HCl} \left( g \right) + \ce{I_2} \left( g \right)\] Based on the collision model, we would assume that the reaction occurs by \(2 \ce{ICl}\) molecules colliding with a single \(\ce{H_2}\) molecule. The probability for such a collision should be proportional to \(\left[ \ce{ICl} \right]^2 \left[ \ce{H_2} \right]\). However, experimentally we observe (see Table 16.4) that the rate law for this reaction is \[\text{Rate} = k \left[ \ce{ICl} \right] \left[ \ce{H_2} \right]\] As a second example, consider the reaction \[\ce{NO_2} \left( g \right) + \ce{CO} \left( g \right) \rightarrow \ce{NO} \left( g \right) + \ce{CO_2} \left( g \right)\] It would seem reasonable to assume that this reaction occurs as a single collision in which an oxygen atom is exchanged between the two molecules. However, the experimentally observed rate law for this reaction is \[\text{Rate} = k \left[ \ce{NO_2} \right]^2\] In this case, the \(\left[ \ce{CO} \right]\) concentration does not affect the rate of the reaction at all, and the \(\left[ \ce{NO_2} \right]\) concentration is squared. 
These examples demonstrate that the rate law for a reaction cannot be predicted from the stoichiometric coefficients and therefore that the collision model does not account for the rate of the reaction. There must be something seriously incomplete with the collision model. The key assumption of the collision model is that the reaction occurs by a single collision. Since this assumption leads to incorrect predictions of rate laws in some cases, the assumption must be invalid in at least those cases. It may well be that reactions require more than a single collision to occur, even in reactions involving just two types of molecules. Moreover, if more than two molecules are involved, the chance of a single collision involving all of the reactive molecules becomes very small. We conclude that many reactions must occur as a result of several collisions occurring in sequence, rather than a single collision. The rate of the chemical reaction must be determined by the rates of the individual steps in the reaction. Each step in a complex reaction is a single collision, often referred to as an elementary process. For a single elementary process, our collision model should correctly predict the rate of that step. The sequence of such elementary processes leading to the overall reaction is referred to as the reaction mechanism. Determining the mechanism for a reaction can require gaining substantially more information than simply the rate data we have considered here. However, we can make some progress just from the rate law. Consider for example the reaction of nitrogen dioxide and carbon monoxide. Since the rate law involves \(\left[ \ce{NO_2} \right]^2\), one step in the reaction mechanism must involve the collision of two \(\ce{NO_2}\) molecules. Furthermore, this step must determine the rate of the overall reaction. Why would that be? 
In any multi-step process, if one step is considerably slower than all of the other steps, the rate of the multi-step process is determined entirely by that slowest step, because the overall process cannot go any faster than the slowest step. It does not matter how rapidly the rapid steps occur. The slowest step in a multi-step process is thus called the rate determining or rate limiting step. This argument suggests that the reaction proceeds via a slow step in which two \(\ce{NO_2}\) molecules collide, followed by at least one other rapid step leading to the products. A possible mechanism is therefore \[\text{Step 1:} \quad \ce{NO_2} + \ce{NO_2} \rightarrow \ce{NO_3} + \ce{NO}\] \[\text{Step 2:} \quad \ce{NO_3} + \ce{CO} \rightarrow \ce{NO_2} + \ce{CO_2}\] If Step 1 is much slower than Step 2, the rate of the reaction is entirely determined by the rate of Step 1. From our collision model, the rate law for Step 1 must be \(k \left[ \ce{NO_2} \right]^2\), which is consistent with the experimentally observed rate law for the overall reaction. This suggests that the mechanism is the correct description of the reaction process, with the first step as the rate determining step. There are a few important notes about the mechanism. First, one product of the reaction is produced in the first step, and the other is produced in the second step. Therefore, the mechanism does lead to the overall reaction, consuming the correct amount of reactants and producing the correct amount of products. Second, the first reaction produces a new molecule, \(\ce{NO_3}\), which is neither a reactant nor a product. The second step then consumes that molecule, and \(\ce{NO_3}\) therefore does not appear in the overall reaction. As such, \(\ce{NO_3}\) is called a reaction intermediate. Intermediates play important roles in the rates of many reactions. If the first step in a mechanism is rate determining as in this case, it is easy to find the rate law for the overall expression from the mechanism. 
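The claim that the slow step controls the overall rate can be illustrated by integrating the proposed mechanism numerically. The rate constants and initial concentrations below are illustrative choices (with \(k_2 \gg k_1\)), not measured values:

```python
# Euler integration of the proposed two-step mechanism:
#   Step 1 (slow): NO2 + NO2 -> NO3 + NO     (rate constant k1)
#   Step 2 (fast): NO3 + CO  -> NO2 + CO2    (rate constant k2)
k1, k2 = 1e-3, 10.0                      # 1/(M s), assumed
no2, no3, co, co2 = 1.0, 0.0, 1.0, 0.0   # M, assumed initial mixture
dt = 1e-3                                # s, integration step

for _ in range(int(10 / dt)):            # integrate to t = 10 s
    r1 = k1 * no2 ** 2                   # rate of the slow step
    r2 = k2 * no3 * co                   # rate of the fast step
    no2 += (-2 * r1 + r2) * dt
    no3 += (r1 - r2) * dt
    co += -r2 * dt
    co2 += r2 * dt

overall_rate = k2 * no3 * co             # instantaneous rate of CO2 formation
predicted = k1 * no2 ** 2                # rate law of the slow step alone
print(overall_rate, predicted)           # nearly equal: the slow step controls
print(no3)                               # the intermediate stays at trace levels
```

After a brief transient, the intermediate \(\ce{NO_3}\) settles at a tiny steady-state concentration and the rate of \(\ce{CO_2}\) formation matches \(k_1 \left[ \ce{NO_2} \right]^2\), the rate law of the slow step.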
If the second step or later steps are rate determining, determining the rate law is slightly more involved.

When \(\ce{C_{60}O_3}\) in toluene solution decomposes, \(\ce{O_2}\) is released leaving \(\ce{C_{60}O}\) in solution. Based on the data in Figure 16.2 and Figure 16.3, plot the concentration of \(\ce{C_{60}O}\) as a function of time. How would you define the rate of the reaction in terms of the slope of the graph from Figure 16.3? How is the rate of appearance of \(\ce{C_{60}O}\) related to the rate of disappearance of \(\ce{C_{60}O_3}\)? Based on this, plot the rate of appearance of \(\ce{C_{60}O}\) as a function of time.

The reaction \(2 \ce{N_2O_5} \left( g \right) \rightarrow 4 \ce{NO_2} \left( g \right) + \ce{O_2} \left( g \right)\) was found in this study to have rate law given by \(\text{Rate} = k \left[ \ce{N_2O_5} \right]\) with \(k = 0.070 \: \text{s}^{-1}\). How is the rate of appearance of \(\ce{NO_2}\) related to the rate of disappearance of \(\ce{N_2O_5}\)? Which rate is larger? Based on the rate law and rate constant, sketch a plot of \(\left[ \ce{N_2O_5} \right]\), \(\left[ \ce{NO_2} \right]\), and \(\left[ \ce{O_2} \right]\) versus time all on the same graph.

For which of the reactions listed in Table 16.4 can you be certain that the reaction does not occur as a single step collision? Explain your reasoning.

Consider two decomposition reactions for two hypothetical materials, \(\ce{A}\) and \(\ce{B}\). The decomposition of \(\ce{A}\) is found to be first order, and the decomposition of \(\ce{B}\) is found to be second order. Assuming that the two reactions have the same rate constant at the same temperature, sketch \(\left[ \ce{A} \right]\) and \(\left[ \ce{B} \right]\) versus time on the same graph for the same initial conditions, i.e. \(\left[ \ce{A} \right]_0 = \left[ \ce{B} \right]_0\). Compare the half-lives of the two reactions. Under what conditions will the half-life of \(\ce{B}\) be less than the half-life of \(\ce{A}\)? 
Under what conditions will the half-life of \(\ce{B}\) be greater than the half-life of \(\ce{A}\)?

A graph of the logarithm of the equilibrium constant for a reaction versus \(\frac{1}{T}\) is linear but can have either a negative slope or a positive slope, depending on the reaction. However, the graph of the logarithm of the rate constant for a reaction versus \(\frac{1}{T}\) has a negative slope for essentially every reaction. Using equilibrium arguments, explain why the graph for the rate constant must have a negative slope.

Using the rate constant equation involving activation energy and the data in Table 16.5, determine the activation energy for the reaction \(\ce{H_2} \left( g \right) + \ce{I_2} \left( g \right) \rightarrow 2 \ce{HI} \left( g \right)\).

We found that the rate law for the reaction \(\ce{H_2} \left( g \right) + \ce{I_2} \left( g \right) \rightarrow 2 \ce{HI} \left( g \right)\) is \(\text{Rate} = k \left[ \ce{H_2} \right] \left[ \ce{I_2} \right]\). Therefore, the reaction is second order overall but first order in \(\ce{H_2}\). Imagine that we start with \(\left[ \ce{H_2} \right]_0 = \left[ \ce{I_2} \right]_0\) and we measure \(\left[ \ce{H_2} \right]\) versus time. Will a graph of \(\text{ln} \left( \left[ \ce{H_2} \right] \right)\) versus time be linear or will a graph of \(\frac{1}{\left[ \ce{H_2} \right]}\) versus time be linear? Explain your reasoning.

As a rough estimate, chemists often assume, as a rule of thumb, that the rate of any reaction will double when the temperature is increased by \(10^\text{o} \text{C}\). What does this suggest about the activation energies of reactions? Using the rate constant equation involving activation energy, calculate the activation energy of a reaction whose rate doubles when the temperature is raised from \(25^\text{o} \text{C}\) to \(35^\text{o} \text{C}\). Does this rule of thumb estimate depend on the temperature range? 
To find out, calculate the factor by which the rate constant increases when the temperature is raised from \(100^\text{o} \text{C}\) to \(110^\text{o} \text{C}\), assuming the same activation energy you found above. Does the rate double in this case?

Consider a very simple hypothetical reaction \(\ce{A} + \ce{B} \rightleftharpoons 2 \ce{C}\) which comes to equilibrium. At equilibrium, what must be the relationship between the rate of the forward reaction, \(\ce{A} + \ce{B} \rightarrow 2 \ce{C}\), and the reverse reaction, \(2 \ce{C} \rightarrow \ce{A} + \ce{B}\)? Assume that both the forward and reverse reactions are elementary processes occurring by a single collision. What is the rate law for the forward reaction? What is the rate law for the reverse reaction? Using these results, show that the equilibrium constant for this reaction can be calculated from \(K_c = \frac{k_f}{k_r}\), where \(k_f\) is the rate constant for the forward reaction and \(k_r\) is the rate constant for the reverse reaction.

Consider a very simple hypothetical reaction \(\ce{A} + \ce{B} \rightleftharpoons \ce{C} + \ce{D}\). By examining Figure 16.8, provide and explain the relationship between the activation energy in the forward direction, \(E_{a,f}\), and in the reverse direction, \(E_{a,r}\). Does this relationship depend on whether the reaction is endothermic (Figure 16.8a) or exothermic (Figure 16.8b)? Explain.

For the reaction \(\ce{H_2} \left( g \right) + \ce{I_2} \left( g \right) \rightarrow 2 \ce{HI} \left( g \right)\), the rate law is \(\text{Rate} = k \left[ \ce{H_2} \right] \left[ \ce{I_2} \right]\). Although this suggests that the reaction is a one-step elementary process, there is evidence that the reaction occurs in two steps, and the second step is the rate determining step: \[\text{Step 1:} \quad \ce{I_2} \rightleftharpoons 2 \ce{I}\] \[\text{Step 2:} \quad \ce{H_2} + 2 \ce{I} \rightarrow 2 \ce{HI}\] where Step 1 is fast and Step 2 is slow. 
If both the forward and reverse reactions in Step 1 are much faster than Step 2, explain why Step 1 can be considered to be at equilibrium. What is the rate law for the rate determining step? Since this rate law depends on the concentration of an intermediate \(\ce{I}\), we need to find that intermediate's concentration. Calculate \(\left[ \ce{I} \right]\) from Step 1, assuming that Step 1 is at equilibrium. Substitute \(\left[ \ce{I} \right]\) into the rate law found previously to find the overall rate law for the reaction. Is this result consistent with the experimental observation?
Atoms that have the same number of protons but a different number of neutrons are isotopes. To identify an isotope we use the notation \({}_Z^A E\), where \(E\) is the element’s atomic symbol, \(Z\) is the element’s atomic number, and \(A\) is the element’s atomic mass number. Although an element’s different isotopes have the same chemical properties, their nuclear properties are not identical. The most important difference between isotopes is their stability. The nuclear configuration of a stable isotope remains constant with time. Unstable isotopes, however, disintegrate spontaneously, emitting radioactive particles as they transform into a more stable form. An element’s atomic number, \(Z\), is equal to the number of protons and its atomic mass number, \(A\), is equal to the sum of the number of protons and neutrons. We represent the isotope carbon-13 as \(_{6}^{13} \text{C}\) because carbon has six protons and seven neutrons. Sometimes we omit \(Z\) from this notation; identifying both the element and the atomic number is repetitive, because all isotopes of carbon have six protons and any atom that has six protons is an isotope of carbon. Thus, \(^{13}\text{C}\) and C–13 are alternative notations for this isotope of carbon. The most important types of radioactive particles are alpha particles, beta particles, gamma rays, and X-rays. An alpha particle, \(\alpha\), is equivalent to a helium nucleus, \({}_2^4 \text{He}\). When an atom emits an alpha particle, the product is a new atom whose atomic number and atomic mass number are, respectively, 2 and 4 less than those of its unstable parent. The decay of uranium to thorium is one example of alpha emission. \[_{92}^{238} \text{U} \longrightarrow _{90}^{234} \text{Th}+\alpha \nonumber\] A beta particle, \(\beta\), comes in one of two forms. A negatron, \(_{-1}^0 \beta\), is produced when a neutron changes into a proton, increasing the atomic number by one, as shown here for lead. 
\[_{82}^{214} \mathrm{Pb} \longrightarrow_{83}^{214} \mathrm{Bi} + _{-1}^{0} \beta \nonumber\] The conversion of a proton to a neutron results in the emission of a positron, \(_{1}^0 \beta\). \[_{15}^{30} \mathrm{P} \longrightarrow_{14}^{30} \mathrm{Si} + _{1}^{0} \beta \nonumber\] A negatron, which is the more common type of beta particle, is equivalent to an electron. The emission of an alpha or a beta particle often produces an isotope in an unstable, high energy state. This excess energy is released as a gamma ray, \(\gamma\), or as an X-ray. Gamma ray and X-ray emission may also occur without the release of an alpha particle or a beta particle. A radioactive isotope’s rate of decay, or activity, follows first-order kinetics \[A=-\frac{d N}{d t}=\lambda N \label{13.1}\] where \(A\) is the isotope’s activity, \(N\) is the number of radioactive atoms present in the sample at time \(t\), and \(\lambda\) is the isotope’s decay constant. Activity is expressed as the number of disintegrations per unit time. As with any first-order process, we can rewrite Equation \ref{13.1} in an integrated form. \[N_{t}=N_{0} e^{-\lambda t} \label{13.2}\] Substituting Equation \ref{13.2} into Equation \ref{13.1} gives \[A=\lambda N_{0} e^{-\lambda t}=A_{0} e^{-\lambda t} \label{13.3}\] If we measure a sample’s activity at time \(t\) we can determine the sample’s initial activity, \(A_0\), or the number of radioactive atoms originally present in the sample, \(N_0\). An important characteristic property of a radioactive isotope is its half-life, \(t_{1/2}\), which is the amount of time required for half of the radioactive atoms to disintegrate. For first-order kinetics the half-life is \[t_{1 / 2}=\frac{0.693}{\lambda} \label{13.4}\] Because the half-life is independent of the number of radioactive atoms, it remains constant throughout the decay process. For example, if 50% of the radioactive atoms remain after one half-life, then 25% remain after two half-lives, and 12.5% remain after three half-lives. 
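This halving pattern follows directly from Equations \ref{13.3} and \ref{13.4}. A quick sketch using the decay constant for \(_{38}^{90}\text{Sr}\) quoted below in the text (the initial activity is an arbitrary choice):

```python
import math

lam = 0.0247                     # decay constant for Sr-90, 1/yr (from the text)
t_half = 0.693 / lam             # Equation 13.4
print(round(t_half, 1))          # half-life in years, 28.1

A0 = 1000.0                      # initial activity, disintegrations/s (arbitrary)
for n in range(4):               # activity after 0, 1, 2, and 3 half-lives
    print(n, round(A0 * math.exp(-lam * n * t_half), 1))
    # halves each time: 1000, then ~500, ~250, ~125
```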
Suppose we begin with an initial population, \(N_0\), of 1200 atoms. During the first half-life, 600 atoms disintegrate and 600 remain. During the second half-life, 300 of the 600 remaining atoms disintegrate, leaving 300 atoms, or 25% of the original 1200 atoms. Of the 300 remaining atoms, only 150 remain after the third half-life, or 12.5% of the original 1200 atoms. Kinetic information about a radioactive isotope usually is given in terms of its half-life because it provides a more intuitive sense of the isotope’s stability. Knowing, for example, that the decay constant for \(_{38}^{90}\text{Sr}\) is 0.0247 yr\(^{-1}\) does not give an immediate sense of how fast it disintegrates. On the other hand, knowing that its half-life is 28.1 yr makes it clear that the concentration of \(_{38}^{90}\text{Sr}\) in a sample remains essentially constant over a short period of time. Alpha particles, beta particles, gamma rays, and X-rays are measured by using the particle’s energy to produce an amplified pulse of electrical current in a detector. These pulses are counted to give the rate of disintegration. There are three common types of detectors: gas-filled detectors, scintillation counters, and semiconductor detectors. A gas-filled detector consists of a tube that contains an inert gas, such as Ar. When a radioactive particle enters the tube it ionizes the inert gas, producing an Ar\(^+\)/e\(^-\) ion-pair. Movement of the electron toward the anode and of the Ar\(^+\) toward the cathode generates a measurable electrical current. A Geiger counter is one example of a gas-filled detector. A scintillation counter uses a fluorescent material to convert radioactive particles into easy-to-measure photons. For example, one solid-state scintillation counter consists of a NaI crystal that contains 0.2% TlI, which produces several thousand photons for each radioactive particle. Finally, in a semiconductor detector, adsorption of a single radioactive particle promotes thousands of electrons to the semiconductor’s conduction band, increasing conductivity.
In this section we consider three common quantitative radiochemical methods of analysis: the direct analysis of a radioactive isotope by measuring its rate of disintegration, neutron activation, and isotope dilution. The concentration of a long-lived radioactive isotope remains essentially constant during the period of analysis. As shown in Example 13.3.1, we can use the sample’s activity to calculate the number of radioactive particles in the sample. The activity in a 10.00-mL sample of wastewater that contains \(_{38}^{90}\text{Sr}\) is \(9.07 \times 10^6\) disintegrations/s. What is the molar concentration of \(_{38}^{90}\text{Sr}\) in the sample? The half-life for \(_{38}^{90}\text{Sr}\) is 28.1 yr. Solving Equation \ref{13.4} for \(\lambda\), substituting into Equation \ref{13.1}, and solving for \(N\) gives \[N=\frac{A \times t_{1 / 2}}{0.693} \nonumber\] Before we can determine the number of atoms of \(_{38}^{90}\text{Sr}\) in the sample we must express its activity and its half-life using the same units. Converting the half-life to seconds gives \(t_{1/2}\) as \(8.86 \times 10^8\) s; thus, there are \[\frac{\left(9.07 \times 10^{6} \text { disintegrations/s }\right)\left(8.86 \times 10^{8} \text{ s}\right)}{0.693} = 1.16 \times 10^{16} \text{ atoms} _{38}^{90}\text{Sr} \nonumber\] The concentration of \(_{38}^{90}\text{Sr}\) in the sample is \[\frac{1.16 \times 10^{16} \text { atoms } _{38}^{90} \text{Sr}}{\left(6.022 \times 10^{23} \text { atoms/mol }\right)(0.01000 \mathrm{L})} = 1.93 \times 10^{-6} \text{ M } _{38}^{90}\text{Sr} \nonumber\] The direct analysis of a short-lived radioactive isotope using the method outlined in Example 13.3.1 is less useful because it provides only a transient measure of the isotope’s concentration. Instead, we can measure its activity after an elapsed time, \(t\), and use Equation \ref{13.3} to calculate \(N_0\). Few analytes are naturally radioactive.
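The arithmetic in Example 13.3.1 can be verified in a few lines of Python (the variable names are mine; the year-to-second conversion assumes 365.25 days per year):

```python
AVOGADRO = 6.022e23

# Example 13.3.1: 9.07e6 disintegrations/s in 10.00 mL; t1/2 of Sr-90 is 28.1 yr
activity = 9.07e6                         # disintegrations per second
half_life_s = 28.1 * 365.25 * 24 * 3600   # 28.1 yr in seconds (~8.87e8 s)

n_atoms = activity * half_life_s / 0.693        # N = A * t_half / 0.693
molarity = n_atoms / (AVOGADRO * 0.01000)       # 10.00 mL = 0.01000 L

print(f"{n_atoms:.2e} atoms, {molarity:.2e} M")
```

The result matches the worked example: about \(1.16 \times 10^{16}\) atoms and \(1.93 \times 10^{-6}\) M.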
For many analytes, however, we can induce radioactivity by irradiating the sample with neutrons in a process called neutron activation analysis (NAA). The radioactive element formed by neutron activation decays to a stable isotope by emitting a gamma ray, and, possibly, other nuclear particles. The rate of gamma-ray emission is proportional to the analyte’s initial concentration in the sample. For example, if we place a sample containing non-radioactive \(_{13}^{27}\text{Al}\) in a nuclear reactor and irradiate it with neutrons, the following nuclear reaction takes place. \[_{13}^{27} \mathrm{Al}+_{0}^{1} \mathrm{n} \longrightarrow_{13}^{28} \mathrm{Al} \nonumber\] The radioactive isotope \(_{13}^{28}\text{Al}\) has a characteristic decay process that includes the release of a beta particle and a gamma ray. \[_{13}^{28} \mathrm{Al} \longrightarrow_{14}^{28} \mathrm{Si}+_{-1}^{0} \beta + \gamma \nonumber\] When irradiation is complete, we remove the sample from the nuclear reactor, allow any short-lived radioactive interferences to decay into the background, and measure the rate of gamma-ray emission. The initial activity at the end of irradiation depends on the number of \(_{13}^{28}\text{Al}\) atoms that are present. This, in turn, is equal to the difference between the rate of formation for \(_{13}^{28}\text{Al}\) and its rate of disintegration \[\frac {dN_{_{13}^{28} \text{Al}}} {dt} = \Phi \sigma N_{_{13}^{27} \text{Al}} - \lambda N_{_{13}^{28} \text{Al}} \label{13.5}\] where \(\Phi\) is the neutron flux and \(\sigma\) is the reaction cross-section, or probability that a \(_{13}^{27}\text{Al}\) nucleus captures a neutron.
Integrating Equation \ref{13.5} over the time of irradiation, \(t\), and multiplying by \(\lambda\) gives the initial activity, \(A_0\), at the end of irradiation as \[A_0 = \lambda N_{_{13}^{28}\text{Al}} = \Phi \sigma N_{_{13}^{27}\text{Al}} (1-e^{-\lambda t}) \nonumber\] If we know the values for \(A_0\), \(\Phi\), \(\sigma\), \(\lambda\), and \(t\), then we can calculate the number of atoms of \(_{13}^{27}\text{Al}\) initially present in the sample. A simpler approach is to use one or more external standards. Letting \((A_0)_x\) and \((A_0)_s\) represent the analyte’s initial activity in an unknown and in an external standard, and letting \(w_x\) and \(w_s\) represent the analyte’s weight in the unknown and in the external standard, we obtain the following pair of equations \[\left(A_{0}\right)_{x}=k w_{x} \label{13.6}\] \[\left(A_{0}\right)_{s}=k w_{s} \label{13.7}\] that we can solve to determine the analyte’s mass in the sample. As noted earlier, gamma-ray emission is measured following a period during which we allow short-lived interferents to decay into the background. As shown in Figure 13.3.1, we determine the sample’s or the standard’s initial activity by extrapolating a curve of activity versus time back to \(t = 0\). Alternatively, if we irradiate the sample and the standard at the same time, and if we measure their activities at the same time, then we can substitute these activities for \((A_0)_x\) and \((A_0)_s\). This is the strategy used in the following example. The concentration of Mn in steel is determined by a neutron activation analysis using the method of external standards. A 1.000-g sample of an unknown steel sample and a 0.950-g sample of a standard steel known to contain 0.463% w/w Mn are irradiated with neutrons for 10 h in a nuclear reactor. After a 40-min delay the gamma-ray emission is 2542 cpm (counts per minute) for the unknown and 1984 cpm for the external standard. What is the %w/w Mn in the unknown steel sample?
Combining Equations \ref{13.6} and \ref{13.7} gives \[w_{x}=\frac{A_{x}}{A_{s}} \times w_{s} \nonumber\] The weight of Mn in the external standard is \[w_{s}=\frac{0.00463 \text{ g } \text{Mn}}{\text{ g } \text { steel }} \times 0.950 \text{ g} \text { steel }=0.00440 \text{ g} \text{ Mn} \nonumber\] Substituting into the above equation gives \[w_{x}=\frac{2542 \text{ cpm}}{1984 \text{ cpm}} \times 0.00440 \text{ g} \text{ Mn}=0.00564 \text{ g} \text{ Mn} \nonumber\] Because the original mass of steel is 1.000 g, the %w/w Mn is 0.564%. Among the advantages of neutron activation are its applicability to almost all elements in the periodic table and that it is nondestructive to the sample. Consequently, NAA is an important technique for analyzing archeological and forensic samples, as well as works of art. Another important radiochemical method for the analysis of nonradioactive analytes is isotope dilution. An external source of analyte is prepared in a radioactive form with a known activity, \(A_T\), for its radioactive decay—we call this form of the analyte a tracer. To prepare a sample for analysis we add a known mass of the tracer, \(w_T\), to a portion of sample that contains an unknown mass, \(w_x\), of analyte. After homogenizing the sample and tracer, we isolate \(w_A\) grams of analyte by using a series of appropriate chemical and physical treatments. Because these chemical and physical treatments cannot distinguish between radioactive and nonradioactive forms of the analyte, the isolated material contains both. Finally, we measure the activity of the isolated sample, \(A_A\). If we recover all the analyte—both the radioactive tracer and the nonradioactive analyte—then \(A_A\) and \(A_T\) are equal and \(w_x = w_A - w_T\). Normally, we fail to recover all the analyte. In this case \(A_A\) is less than \(A_T\), and \[A_{A}=A_{T} \times \frac{w_{A}}{w_{x}+w_{T}} \label{13.8}\] The ratio of weights in Equation \ref{13.8} accounts for any loss of activity that results from our failure to recover all the analyte.
Solving Equation \ref{13.8} for \(w_x\) gives \[w_{x}=\frac{A_{T}}{A_{A}} w_{A}-w_{T} \label{13.9}\] How we process the sample depends on the analyte and the sample’s matrix. We might, for example, digest the sample to bring the analyte into solution. After filtering the sample to remove the residual solids, we might precipitate the analyte, isolate it by filtration, dry it in an oven, and obtain its weight. Given that the goal of an analysis is to determine the amount of nonradioactive analyte in our sample, the realization that we might not recover all the analyte might strike you as unsettling. Recall, however, that a single liquid–liquid extraction rarely has an extraction efficiency of 100%. One advantage of isotope dilution is that the extraction efficiency for the nonradioactive analyte and for the tracer are the same. If we recover 50% of the tracer, then we also recover 50% of the nonradioactive analyte. Because we know how much tracer we added to the sample, we can determine how much of the nonradioactive analyte is in the sample. The concentration of insulin in a production vat is determined by isotope dilution. A 1.00-mg sample of insulin labeled with \(^{14}\text{C}\) having an activity of 549 cpm is added to a 10.0-mL sample taken from the production vat. After homogenizing the sample, a portion of the insulin is separated and purified, yielding 18.3 mg of pure insulin. The activity for the isolated insulin is measured at 148 cpm. How many mg of insulin are in the original sample? Substituting known values into Equation \ref{13.9} gives \[w_{x}=\frac{549 \text{ cpm}}{148 \text{ cpm}} \times 18.3 \text{ mg}-1.00 \text{ mg}=66.9 \text{ mg} \text { insulin } \nonumber\] Equation \ref{13.8} and Equation \ref{13.9} are valid only if the tracer’s half-life is considerably longer than the time it takes to conduct the analysis. If this is not the case, then the decrease in activity is due both to the incomplete recovery and the natural decrease in the tracer’s activity.
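Equation \ref{13.9} maps directly onto a one-line function; a Python sketch using the insulin example’s numbers (the function name is mine):

```python
def isotope_dilution(a_tracer, a_isolated, w_isolated, w_tracer):
    """w_x = (A_T / A_A) * w_A - w_T  (Equation 13.9)."""
    return (a_tracer / a_isolated) * w_isolated - w_tracer

# insulin example: 549 cpm of tracer added (1.00 mg); 18.3 mg isolated at 148 cpm
w_x = isotope_dilution(549, 148, 18.3, 1.00)
print(f"{w_x:.1f} mg insulin")
```

This reproduces the 66.9 mg result of the worked example.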
Table 13.3.1 provides a list of several common tracers for isotope dilution. An important feature of isotope dilution is that it is not necessary to recover all the analyte to determine the amount of analyte present in the original sample. Isotope dilution, therefore, is useful for the analysis of samples with complex matrices, where a complete recovery of the analyte is difficult. One example of a characterization application is the determination of a sample’s age based on the decay of a radioactive isotope naturally present in the sample. The most common example is carbon-14 dating, which is used to determine the age of natural organic materials. As cosmic rays pass through the upper atmosphere, some \(_7^{14}\text{N}\) atoms in the atmosphere capture high-energy neutrons, converting them into \(_6^{14}\text{C}\). The \(_6^{14}\text{C}\) then migrates into the lower atmosphere where it oxidizes to form C-14-labeled CO\(_2\). Animals and plants subsequently incorporate this labeled CO\(_2\) into their tissues. Because this is a steady-state process, all plants and animals have the same ratio of \(_6^{14}\text{C}\) to \(_6^{12}\text{C}\) in their tissues. When an organism dies, the radioactive decay of \(_6^{14}\text{C}\) to \(_7^{14}\text{N}\) by \(_{-1}^0 \beta\) emission (\(t_{1/2}\) = 5730 years) leads to a predictable reduction in the \(_6^{14}\text{C}\) to \(_6^{12}\text{C}\) ratio. We can use the change in this ratio to date samples that are as much as 30000 years old, although the precision of the analysis is best when the sample’s age is less than 7000 years. The accuracy of carbon-14 dating depends upon our assumption that the natural \(_6^{14}\text{C}\) to \(_6^{12}\text{C}\) ratio in the atmosphere is constant over time. Some variation in the ratio has occurred as the result of the increased consumption of fossil fuels and the production of \(_6^{14}\text{C}\) during the testing of nuclear weapons.
A calibration curve prepared using samples of known age—examples of samples include tree rings, deep ocean sediments, coral samples, and cave deposits—limits this source of uncertainty. There is no need to prepare a calibration curve for each analysis. Instead, there is a universal calibration curve known as IntCal. The most recent such curve, IntCal13, is described in the following paper: Reimer, P. J., et al. “IntCal13 and Marine13 Radiocarbon Age Calibration Curves 0–50,000 Years Cal BP,” Radiocarbon, 2013, 55, 1869–1887. This calibration spans 50 000 years before the present (BP). To determine the age of a fabric sample, the relative ratio of \(_6^{14}\text{C}\) to \(_6^{12}\text{C}\) was measured, yielding a result of 80.9% of that found in modern fibers. How old is the fabric? Equation \ref{13.3} and Equation \ref{13.4} provide us with a method to convert a change in the ratio of \(_6^{14}\text{C}\) to \(_6^{12}\text{C}\) to the fabric’s age. Letting \(A_0\) be the ratio of \(_6^{14}\text{C}\) to \(_6^{12}\text{C}\) in modern fibers, we assign it a value of 1.00. The ratio of \(_6^{14}\text{C}\) to \(_6^{12}\text{C}\) in the sample, \(A\), is 0.809. Solving gives \[t=\ln \frac{A_{0}}{A} \times \frac{t_{1 / 2}}{0.693}=\ln \frac{1.00}{0.809} \times \frac{5730 \text { yr }}{0.693}=1750 \text { yr } \nonumber\] Other isotopes can be used to determine a sample’s age. The age of rocks, for example, has been determined from the ratio of the number of \(_{92}^{238}\text{U}\) atoms to the number of stable \(_{82}^{206}\text{Pb}\) atoms produced by radioactive decay. For rocks that do not contain uranium, dating is accomplished by comparing the ratio of radioactive \(_{19}^{40}\text{K}\) to the stable \(_{18}^{40}\text{Ar}\). Another example is the dating of sediments collected from lakes by measuring the amount of \(_{82}^{210}\text{Pb}\) that is present. Radiochemical methods routinely are used for the analysis of trace analytes in macro and meso samples.
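The carbon-14 dating formula used in the fabric example can be sketched as a short Python function (the function name and defaults are mine):

```python
import math

def c14_age(ratio_sample, ratio_modern=1.00, half_life=5730):
    """Age from t = ln(A0/A) * t_half / 0.693 (Equations 13.3 and 13.4)."""
    return math.log(ratio_modern / ratio_sample) * half_life / 0.693

# fabric example: 14C/12C ratio is 80.9% of that in modern fibers
print(f"{c14_age(0.809):.0f} yr")
```

The result agrees with the worked answer of approximately 1750 yr.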
The accuracy and precision of radiochemical methods generally are within the range of 1–5%. We can improve the precision—which is limited by the random nature of radioactive decay—by counting the emission of radioactive particles for as long a time as is practical. If the number of counts, \(M\), is reasonably large (\(M \geq 100\)), and the counting period is significantly less than the isotope’s half-life, then the percent relative standard deviation for the activity, \((\sigma_A)_{rel}\), is approximately \[\left(\sigma_{A}\right)_{\mathrm{rel}}=\frac{1}{\sqrt{M}} \times 100 \nonumber\] For example, if we determine the activity by counting 10 000 radioactive particles, then the relative standard deviation is 1%. A radiochemical method’s sensitivity is inversely proportional to \((\sigma_A)_{rel}\), which means we can improve the sensitivity by counting more particles. Selectivity rarely is of concern when using a radiochemical method because most samples have only a single radioactive isotope. When several radioactive isotopes are present, we can determine each isotope’s activity by taking advantage of differences in the energies of their respective radioactive particles or differences in their respective decay rates. In comparison to most other analytical techniques, radiochemical methods usually are more expensive and require more time to complete an analysis. Radiochemical methods also are subject to significant safety concerns due to the analyst’s potential exposure to high-energy radiation and the need to safely dispose of radioactive waste.
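The counting-statistics relation above can be expressed directly in code; a minimal Python sketch (the function name is mine):

```python
import math

def rel_std_dev_percent(counts):
    """Percent relative standard deviation of an activity from M counts."""
    return 100 / math.sqrt(counts)

# counting 10,000 particles gives a 1% relative standard deviation
print(rel_std_dev_percent(10_000))
```

Quadrupling the counting time halves the relative standard deviation, which is why long counting periods improve precision.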
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Metabolism/Catabolism/Fermentation
Fermentation is the process by which living organisms recycle \(NADH \rightarrow NAD^+\). \(NAD^+\) is a required molecule necessary for the oxidation of glyceraldehyde-3-phosphate to produce the high-energy molecule 1,3-bisphosphoglycerate. Fermentation occurs in the cytosol of cells. Because \(NAD^+\) is used in glycolysis it is important that living cells have a way of recycling \(NAD^+\) from \(NADH\). One way that a cell recycles \(NAD^+\) is through the process of respiration, a set of sequential electron transfers through an electron transport chain (ETC) to a terminal electron acceptor. In aerobic organisms, the terminal electron acceptor is oxygen. In anaerobic organisms, the terminal electron acceptor can vary from species to species and includes, but is not limited to, various metals like Fe(III), Mn(IV) and Co(III), as well as CO\(_2\), nitrate, and sulfur. This process oxidizes \(NADH\) back to \(NAD^+\), which can then be used again in step 6 of glycolysis or in other redox reactions in the cell. Another way that \(NAD^+\) is recycled from \(NADH\) is by a process called fermentation. Lactic acid fermentation occurs by converting pyruvate into lactate using the enzyme lactate dehydrogenase and producing \(NAD^+\) in the process. This process takes place in oxygen-depleted muscle and some bacteria. It is responsible for the sour taste of sauerkraut and yogurt. \(NAD^+\) is required for the oxidation of glyceraldehyde-3-P to produce 1,3-bisphosphoglycerate (step 6 of glycolysis). If the supply of \(NAD^+\) is not replenished by the ETC or fermentation, glycolysis is unable to proceed. Fermentation is a necessary process for anaerobic organisms to produce energy. The yield of energy is much less than if the organism were to continue on through the TCA cycle and ETC, but energy is produced nonetheless. The purpose of fermentation in yeast is the same as that in muscle and bacteria, to replenish the supply of \(NAD^+\) for glycolysis, but this process occurs in two steps.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/16%3A_Electrochemistry/16.10%3A_Electrolytic_Cells_and_Electrolysis
Make sure you thoroughly understand the following essential ideas. Electrolysis refers to the decomposition of a substance by an electric current. The electrolysis of sodium and potassium hydroxides, first carried out in 1808 by Sir Humphry Davy, led to the discovery of these two metallic elements and showed that these two hydroxides, which had previously been considered un-decomposable and thus elements, were in fact compounds. Electrolysis of molten alkali halides is the usual industrial method of preparing the alkali metals. Ions in aqueous solutions can undergo similar reactions; thus if a solution of nickel chloride undergoes electrolysis at platinum electrodes, nickel metal is deposited at the cathode and chlorine gas is evolved at the anode. Both of these processes are carried out in electrochemical cells which are forced to operate in the "reverse", or non-spontaneous direction, as indicated by the negative cell potential for the above cell reaction. The free energy is supplied in the form of electrical work done on the system by the outside world (the surroundings). This is the only fundamental difference between an electrolytic cell and a galvanic cell, in which the free energy supplied by the cell reaction is extracted as work done on the surroundings. A common misconception about electrolysis is that "ions are attracted to the oppositely-charged electrode." This is true only in the very thin interfacial region near the electrode surface. Ionic motion throughout the bulk of the solution occurs mostly by diffusion, which is the transport of molecules in response to a concentration gradient. Migration—the motion of a charged particle due to an applied electric field—is only a minor player, producing only about one non-random jump out of around 100,000 random ones for a 1 volt cm\(^{-1}\) electric field. Only those ions that are near the interfacial region are likely to undergo migration.
Water is capable of undergoing both oxidation \[2 H_2O \rightarrow O_{2(g)} + 4 H^+ + 4 e^– \;\;\; E^o = -1.23 \;V\] and reduction \[2 H_2O + 2 e^– \rightarrow H_{2(g)} + 2 OH^– \;\;\; E^o = -0.83 \;V\] Thus if an aqueous solution is subjected to electrolysis, one or both of the above reactions may be able to compete with the electrolysis of the solute. For example, if we try to electrolyze a solution of sodium chloride, hydrogen is produced at the cathode instead of sodium. Electrolysis of salt ("brine") is carried out on a huge scale and is the basis of the chloralkali industry. Pure water is an insulator and cannot undergo significant electrolysis without adding an electrolyte. If the object is to produce hydrogen and oxygen, the electrolyte must be energetically more difficult to oxidize or reduce than water itself. Electrolysis of a solution of sulfuric acid or of a salt such as NaNO\(_3\) results in the decomposition of water at both electrodes. Electrolytic production of hydrogen is usually carried out with a dilute solution of sulfuric acid. This process is generally too expensive for industrial production unless highly pure hydrogen is required. However, it becomes more efficient at higher temperatures, where thermal energy reduces the amount of electrical energy required, so there is now some interest in developing high-temperature electrolytic processes. Most hydrogen gas is manufactured by the steam reforming of natural gas. One mole of electric charge (96,500 coulombs), when passed through a cell, will discharge half a mole of a divalent metal ion such as Cu\(^{2+}\). This relation was first formulated by Faraday in 1832 in the form of two laws of electrolysis. The equivalent weight of a substance is defined as the molar mass divided by the number of electrons required to oxidize or reduce each unit of the substance. Thus one mole of V\(^{3+}\) corresponds to three equivalents of this species, and will require three faradays of charge to deposit it as metallic vanadium.
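Faraday's relation between charge passed and amount deposited reduces to a one-line calculation; a Python sketch (the helper name is mine; the constant is the modern Faraday value, which the text rounds to 96,500 C):

```python
FARADAY = 96485  # coulombs per mole of electrons

def mass_deposited(current_A, time_s, molar_mass, n_electrons):
    """Mass of metal deposited by a steady current, from Faraday's laws."""
    moles_e = current_A * time_s / FARADAY    # moles of electrons passed
    return moles_e / n_electrons * molar_mass

# 0.22 A for 5400 s depositing Cu (Cu2+ + 2e- -> Cu, M = 63.54 g/mol)
print(round(mass_deposited(0.22, 5400, 63.54, 2), 2))
```

This gives about 0.39 g of copper; the text's rounded arithmetic (0.012 F) yields 0.38 g.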
Most stoichiometric problems involving electrolysis can be solved without explicit use of Faraday's laws. The "chemistry" in these problems is usually very elementary; the major difficulties usually stem from unfamiliarity with the basic electrical units. For example, suppose a metallic object to be plated with copper is placed in a solution of CuSO\(_4\) and a current of 0.22 amp passes through the cell for 5400 seconds. The charge passed is (0.22 amp) × (5400 sec) = 1200 C, or (1200 C) ÷ (96500 C F\(^{-1}\)) = 0.012 F. Since the reduction of one mole of Cu\(^{2+}\) ion requires the addition of two moles of electrons, the mass of Cu deposited will be (63.54 g mol\(^{-1}\)) × (0.5 mol Cu/F) × (0.012 F) = 0.38 g. Similarly, one can ask how much electric power is required to produce 1 metric ton (1000 kg) of chlorine from brine, assuming the cells operate at 2.0 volts and assuming 100% efficiency. (In the last step of such a calculation, recall that 1 W = 1 J/s, so 1 kW-h = 3.6 MJ.) For many industrial-scale operations involving the oxidation or reduction of both inorganic and organic substances, and especially for the production of the more active metals such as sodium, calcium, magnesium, and aluminum, the most cost-effective reducing agent is electrons supplied by an external power source. The two most economically important of these processes are described below. The electrolysis of brine is carried out on a huge scale for the industrial production of chlorine and caustic soda (sodium hydroxide). Because the reduction potential of Na\(^+\) is much more negative than that of water, the latter substance undergoes decomposition at the cathode, yielding hydrogen gas and OH\(^-\): \[2 H_2O + 2 e^- \rightarrow H_{2(g)} + 2 OH^- \nonumber\] At the anode, either chloride ion or water can be oxidized: \[2 Cl^- \rightarrow Cl_{2(g)} + 2 e^- \nonumber\] \[4 OH^- \rightarrow O_{2(g)} + 2 H_2O + 4 e^- \nonumber\] A comparison of the E°s would lead us to predict that the oxidation of water would be favored over that of chloride. This is certainly the case from a purely energetic standpoint, but as was mentioned in the section on fuel cells, electrode reactions involving O\(_2\) are notoriously slow (that is, they are kinetically hindered), so the anodic process here is under kinetic rather than thermodynamic control.
The reduction of water is energetically favored over that of Na\(^+\), so the net result of the electrolysis of brine is the production of Cl\(_2\) and NaOH ("caustic"), both of which are of immense industrial importance: \[\ce{2 NaCl + 2 H2O -> 2 NaOH + Cl2(g) + H2(g)} \nonumber\] Since chlorine reacts with both OH\(^-\) and H\(_2\), it is necessary to physically separate the anode and cathode compartments. In modern plants this is accomplished by means of an ion-selective polymer membrane, but prior to 1970 a more complicated cell was used that employed a pool of mercury as the cathode. A small amount of this mercury would normally find its way into the plant's waste stream, and this has resulted in serious pollution of many major river systems and estuaries and devastation of their fisheries. Aluminum is present in most rocks and is the most abundant metallic element in the earth's crust (eight percent by weight). However, its isolation is very difficult and expensive to accomplish by purely chemical means, as evidenced by the strongly negative E° (–1.66 V) of the Al\(^{3+}\)/Al couple. For the same reason, aluminum cannot be isolated by electrolysis of aqueous solutions of its compounds, since the water would be electrolyzed preferentially. And if you have ever tried to melt a rock, you will appreciate the difficulty of electrolyzing a molten aluminum ore! Aluminum was in fact considered an exotic and costly metal until 1886, when Charles Hall (U.S.A.) and Paul Héroult (France) independently developed a practical electrolytic reduction process. The Hall–Héroult process takes advantage of the principle that the melting point of a substance is reduced by admixture with another substance with which it forms a homogeneous phase. Instead of using the pure alumina ore Al\(_2\)O\(_3\), which melts at 2050°C, it is mixed with cryolite, which is a natural mixture of NaF and AlF\(_3\), thus reducing the temperature required to a more manageable 1000°C.
The anodes of the cell are made of carbon (actually a mixture of pitch and coal), and this plays a direct role in the process: the carbon gets oxidized (by the oxide ions left over from the reduction of Al\(^{3+}\)) to CO\(_2\), and the free energy of this reaction helps drive the aluminum reduction, lowering the voltage that must be applied and thus reducing the power consumption. This is important, because aluminum refining is the largest consumer of industrial electricity, accounting for about 5% of all electricity generated in North America. Since aluminum cells commonly operate at about 100,000 amperes, even a slight reduction in voltage can result in a large saving of power. The net reaction is \[\ce{2 Al_2O_3 + 3 C \rightarrow 4 Al + 3 CO_2} \nonumber\] However, large quantities of CO\(_2\) and of HF (from the cryolite), and hydrocarbons (from the electrodes) are formed in various side reactions, and these can be serious sources of environmental pollution.
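As a rough illustration of why even small voltage reductions matter at this scale, the charge needed per kilogram of aluminum follows directly from Faraday's laws. A Python sketch (the function name and the 4 V cell voltage are my illustrative assumptions, not values from the text):

```python
FARADAY = 96485   # coulombs per mole of electrons
M_AL = 26.98      # molar mass of aluminum, g/mol

def charge_per_kg_al():
    """Coulombs required to reduce 1 kg of Al3+ to Al metal (3 e- per atom)."""
    moles_al = 1000 / M_AL
    return 3 * moles_al * FARADAY

q = charge_per_kg_al()
print(f"{q:.2e} C per kg Al")                 # roughly 1.07e7 C
print(f"{q * 4.0 / 3.6e6:.1f} kWh at 4 V")    # assumed cell voltage of 4 V
```

At an assumed 4 V this is about 12 kWh per kilogram before any losses, so shaving even a fraction of a volt across a 100,000-ampere cell saves substantial power.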
https://chem.libretexts.org/Bookshelves/General_Chemistry/General_Chemistry_Supplement_(Eames)/Chemical_Reactions_and_Interactions/Overview
So far we've seen a variety of types of reaction. This section is intended to help you fit all the pieces together. You've seen combination reactions and decomposition reactions. Sometimes these are the same reactions, just going in opposite directions. Likewise, dissolution and precipitation are opposite processes. Many chemists think of reactions as falling into two main categories: acid-base type reactions and redox reactions. In an acid-base reaction, an under-populated nucleus makes a bond with an over-populated nucleus, but the electrons don't change their primary loyalty. (The electrons from the over-populated nucleus do appreciate the better benefits they get from the under-populated nucleus, which has more pension money than it can spend on its own population.) In the classic acid-base reaction, the electrons on water really like oxygen as a home, but they are feeling a little crowded and poor; alliance with a hydrogen ion provides lots of money to make them happier, and a nice convenient vacation destination. In contrast, a redox reaction is any reaction in which electrons change their primary loyalties. Bonds between nuclei may change or not, but oxidation numbers do change. Try going through all the examples and deciding which category they fit and why.
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Data_Analysis/Dimensional_Analysis
Dimensional analysis is amongst the most valuable tools physical scientists use. Simply put, it is the conversion between an amount in one unit to the corresponding amount in a desired unit using various conversion factors. This is valuable because certain measurements are more accurate or easier to find than others. The use of units in a calculation to ensure that we obtain the final proper units is called dimensional analysis. For example, if we observe experimentally that an object's potential energy is related to its mass, its height from the ground, and to a gravitational force, then when multiplied, the units of mass, height, and the force of gravity must give us units corresponding to those of energy. Energy is typically measured in joules, calories, or electron volts (eV). Performing dimensional analysis begins with finding the appropriate conversion factors. Then, you simply multiply the values together such that the units cancel by having equal units in the numerator and the denominator. To understand this process, let us walk through a few examples. Imagine that a chemist wants to measure out 0.214 mL of benzene, but lacks the equipment to accurately measure such a small volume. The chemist, however, is equipped with an analytical balance capable of measuring to \(\pm 0.0001 \;g\). Looking in a reference table, the chemist learns the density of benzene (\(\rho=0.8765 \;g/mL\)). How many grams of benzene should the chemist use? \[0.214 \; \cancel{mL} \left( \dfrac{0.8765\; g}{1\;\cancel{mL}}\right)= 0.187571\; g \nonumber\] Notice that the mL are being divided by mL, an equivalent unit. We can cancel these out, which leaves us with 0.187571 g. However, this is not our final answer, since this result has too many significant figures and must be rounded down to three significant digits. This is because 0.214 mL has three significant digits and the conversion factor had four significant digits. Since the fourth digit, 5, is greater than or equal to 5, we must round the preceding 7 up to 8.
Hence, the chemist should weigh out 0.188 g of benzene to have 0.214 mL of benzene. To illustrate the use of dimensional analysis to solve energy problems, let us calculate the kinetic energy in joules of a 320 g object traveling at 123 cm/s. To obtain an answer in joules, we must convert grams to kilograms and centimeters to meters. Using the expression for kinetic energy, \(KE = \frac{1}{2}mv^2\), the calculation may be set up as follows: \[ \begin{align*} KE &=\dfrac{1}{2}mv^2 \\[4pt] &=\dfrac{1}{2}\, 320\; \cancel{g} \left( \dfrac{1\; kg}{1000\;\cancel{g}}\right) \left[\left(\dfrac{123\;\cancel{cm}}{1 \;s}\right) \left(\dfrac{1 \;m}{100\; \cancel{cm}}\right) \right]^2 \\[4pt] &=\dfrac{1}{2}\, 0.320\; kg \left[ \dfrac{(123)^2\; m^2}{(100)^2\; s^2} \right] \\[4pt] &= 0.242\; \dfrac{kg⋅m^2}{s^2} = 0.242\; J \end{align*}\] Notice that the units multiply out to kg⋅m²/s², which is the definition of the joule. Alternatively, the conversions may be carried out in a stepwise manner: Step 1: convert \(g\) to \(kg\) \[320\; \cancel{g} \left( \dfrac{1\; kg}{1000\;\cancel{g}}\right) = 0.320 \; kg \nonumber\] Step 2: convert \(cm\) to \(m\) \[123\;\cancel{cm} \left(\dfrac{1 \;m}{100\; \cancel{cm}}\right) = 1.23\; m \nonumber \] Now the natural units for calculating joules are used to get the final result: \[ KE=\dfrac{1}{2}\, 0.320\; kg \left(1.23 \;m/s\right)^2=\dfrac{1}{2}\, 0.320\; kg \left(1.513 \dfrac{m^2}{s^2}\right)= 0.242\; \dfrac{kg⋅m^2}{s^2}= 0.242\; J \nonumber\] Of course, steps 1 and 2 can be done in the opposite order with no effect on the final results. However, this second method involves an additional step. Now suppose you wish to report the number of kilocalories of energy contained in a 7.00 oz piece of chocolate in units of kilojoules per gram.
To obtain an answer in kilojoules per gram, we must convert 7.00 oz to grams and kilocalories to kilojoules. Food reported to contain a value in Calories actually contains that same value in kilocalories: if the chocolate wrapper lists the caloric content as 120 Calories, the chocolate contains 120 kcal of energy. If we choose to use multiple steps to obtain our answer, we can begin with the conversion of kilocalories to kilojoules: \[120\; \cancel{kcal} \left(\dfrac{1000 \;\cancel{cal}}{1\;\cancel{kcal}}\right)\left(\dfrac{4.184 \;\cancel{J}}{1\; \cancel{cal}}\right)\left(\dfrac{1 \;kJ}{1000\; \cancel{J}}\right)= 502\; kJ \nonumber\] We next convert the 7.00 oz of chocolate to grams: \[7.00\;\cancel{oz} \left(\dfrac{28.35\; g}{1\; \cancel{oz}}\right)= 198\; g \nonumber\] The number of kilojoules per gram is therefore \[\dfrac{ 502 \;kJ}{198\; g}= 2.54\; kJ/g \nonumber\] Alternatively, we could solve the problem in one step with all the conversions included: \[\left(\dfrac{120\; \cancel{kcal}}{7.00\; \cancel{oz}}\right)\left(\dfrac{1000 \;\cancel{cal}}{1 \;\cancel{kcal}}\right)\left(\dfrac{4.184 \;\cancel{J}}{1 \; \cancel{cal}}\right)\left(\dfrac{1 \;kJ}{1000 \;\cancel{J}}\right)\left(\dfrac{1 \;\cancel{oz}}{28.35\; g}\right)= 2.53 \; kJ/g \nonumber\] The small discrepancy between the two answers is attributable to rounding to the correct number of significant figures after each step when carrying out the calculation in a stepwise manner. Recall that all digits in the calculator should be carried forward when carrying out a calculation in multiple steps. In this problem, we first converted kilocalories to kilojoules and then converted ounces to grams. Skill Builder ES2 allows you to practice making multiple conversions between units in a single step.
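The single-step version of the chocolate conversion can likewise be sketched, carrying all digits forward; the conversion constants are the ones quoted above:

```python
CAL_PER_KCAL = 1000
J_PER_CAL = 4.184
G_PER_OZ = 28.35

kcal, oz = 120.0, 7.00

kJ = kcal * CAL_PER_KCAL * J_PER_CAL / 1000.0  # kcal -> kJ, unrounded
grams = oz * G_PER_OZ                           # oz -> g, unrounded
print(round(kJ / grams, 2))  # -> 2.53 kJ/g
```

Because no intermediate rounding occurs, this reproduces the one-step answer rather than the slightly different stepwise one.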
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Microscopy/Atomic_Force_Microscopy
Atomic force microscopy utilizes a microscale probe to produce three-dimensional images of surfaces at sub-nanometer scales. The atomic force microscope obtains images by measuring the attractive and repulsive forces acting on a microscale probe interacting with the surface of a sample. Ideally the interaction occurs at an atomically fine probe tip being attracted and repulsed by the atoms of the surface, giving atomically resolved surface images. The atomic force microscope (AFM) probe is mounted on a flexible cantilever that is manipulated by a vertical piezoelectric into interacting with the sample. The piezoelectric expands and exerts a force on the cantilever proportional to the applied voltage. This force is balanced by the forces acting on the probe through its interaction with the surface and by the strain on the cantilever. The flexible cantilever bends in proportion to the force acting upon it, in accordance with Hooke's law. By measuring the reflection of a laser source off the cantilever it is possible to determine the degree of the bend and, through a feedback loop, control the force exerted by the cantilever. By using the strain as a restoring force, a piezoelectric element can be used to drive the probe as a mechanical oscillator with a calculable resonant frequency, allowing for tapping-mode microscopy. The laser acts on a photodiode array to give a measurement of the deflection both horizontally and vertically. Using the deflection it is possible to calculate the quantity of force acting on the probe in both the horizontal and vertical directions. From the force applied by the vertical piezoelectric and the force acting on the probe it is possible to obtain a measure of the relative height of the probe. As the probe encounters a feature, it rises with the feature, causing a deflection measured by the photodiode and a change in the force on the vertical piezoelectric.
The potential may be adjusted to minimize the deflection using feedback from the photodiode array; knowing the expansion rate of the vertical piezoelectric then allows for a direct computation of the height (this is the z-sensor). By computation from the total deflection, and thus the strain on the cantilever, it is also possible to obtain the relative height of a feature given a known spring constant of the cantilever. The probe is scanned across the surface, with either the probe or the sample being moved by piezoelectric elements. This allows the interaction forces to be measured across the entire sample, allowing the surface to be rendered as a three-dimensional image. The force exerted by the probe has the potential to alter the surface by etching or simply moving loosely bound surface features. As such, this microscopy technique can potentially be used to write (etch), as well as read, atomic-scale surface features. The strain on the probe tip may cause deformation that leads to loss of resolution due to flattening, or to the generation of artifacts due to secondary tips. The probe tip is idealized to be an atomically perfect spherical surface with a nanoscopic radius of curvature, leading to a single point of contact between the probe tip and the surface. Tips, however, may have multiple points of contact, leading to image artifacts such as doubled images or shadowing. Alternatively, tips may be flattened or even indented, causing a lower resolution as smaller surface features are passed over. Significant error may arise from the expansion of the piezoelectric materials as they become heated. This problem is typically mitigated by attempting to maintain an isothermal environment. Drift may be measured and accounted for by repeat measurement of a known point and normalizing the data to that known height.
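The Hooke's-law and resonant-oscillator reasoning above can be illustrated numerically. The spring constant, deflection, and effective mass below are assumed, order-of-magnitude values, not data from any particular instrument:

```python
import math

k = 40.0            # cantilever spring constant, N/m (assumed)
deflection = 2e-9   # measured cantilever deflection, m (assumed)

# Hooke's law: the restoring force is proportional to the bend
force = k * deflection  # newtons

# Resonant frequency of the cantilever as a harmonic oscillator,
# f = (1/2π)·sqrt(k/m), with an assumed effective mass
m_eff = 1e-11  # kg
f_res = math.sqrt(k / m_eff) / (2 * math.pi)  # hertz

print(f"restoring force ≈ {force:.1e} N")
print(f"resonant frequency ≈ {f_res/1e3:.0f} kHz")
```

With these values the force is on the order of tens of nanonewtons and the resonance lands in the hundreds of kilohertz, the regime used for tapping-mode imaging.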
There are multiple methods of imaging a surface using an atomic force microscope; these imaging modes each utilize the probe and its interaction with the surface in a distinct way to obtain data. If a constant potential is maintained for the vertical piezoelectric, the probe will maintain continuous contact with the surface. It is then possible to use the deflection and the z-sensor to yield accurate height information for surface features. When the probe is in contact with the sample, the resistance acting on the probe's horizontal motion causes it to be strained horizontally. This strain is proportional to the resistance, i.e. friction, allowing for a direct measurement of the friction between the sample and the probe surface. Also known as non-contact mode, another method utilizes the attractive forces acting on the probe. To measure these, the probe is pulled from the sample by the vertical piezoelectric with a set force; as a feature is encountered, the attractive force acting on the probe increases, causing it to deflect downward. The downward deflection is counteracted by the vertical piezoelectric in a similar manner to contact mode, reaching the same equilibrium of forces acting on the cantilever. By using a piezoelectric device it is possible to drive the cantilever as a harmonic oscillator with a resonant frequency determined by the known spring constant. The probe then moves with an amplitude proportional to the driving force, which is controlled, and a frequency which depends on the spring constant. When the probe tip contacts the surface, the effective restoring force increases, increasing the frequency. The total change in frequency is proportional to the feature's height, and the vertical piezoelectric can then be used to raise the cantilever and restore the frequency to its original value, giving additional data from the z-sensor regarding the feature's height.
Additionally, as the probe contacts the surface it acts as a driving force, deforming the surface, which is restored by the internal stress (which acts on the probe to repel it). This phenomenon is proportional to the Young's modulus of compressibility of the sample and will cause a phase shift between the oscillation of the piezoelectric driver of the probe and the probe itself. Use of specialized probes allows a further expansion of atomic force microscopy's role in nanoscience. By alterations in probe design it is possible to: directly obtain data about the surface's interaction with other functional groups, alter the surface by etching or causing chemical change, or deposit substrates onto a surface; all at the nanoscopic scale. Functionalizing a probe can be accomplished by binding a protein or functional groups to the probe tip surface. The probe then takes a direct measure of the interaction between the surface and the functional group. This technique is particularly useful for biological applications, e.g. the affinity of a protein for the binding site of a membrane. The strain exerted by the tip on the surface has the potential to manipulate or alter the surface features, allowing for the mechanical etching of the surface. By using specialized heated tips, it also becomes possible to heat the surface. Either technique can be used to precisely carve into or otherwise alter a surface at the scales necessary for many nanotechnological advances. Using a special tip similar to a fountain-pen head, it is possible to deposit units of a substrate onto a surface. Precise deposition allows for building very precise surface structures, e.g. protein binding sites, onto an atomically flat surface. AFM has been demonstrated as a potential means of data storage by the IBM corporation. By using a heated tip it is possible to alter a polymer surface by a reversible polymerization reaction. The indentation created may be read by contact or tapping mode, allowing for written data to be read.
Data may be erased by use of the thermal tip to cause a polymerization on the surface, sealing the indentation.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_ChemPRIME_(Moore_et_al.)/04%3A_The_Structure_of_Atoms/4.10%3A_The_Nucleus
The results of Thomson’s and other experiments implied that electrons were constituents of all matter and hence of all atoms. Since macroscopic samples of the elements are found to be electrically neutral, this meant that each atom probably contained a positively charged portion to balance the negative charge of its electrons. In an attempt to learn more about how positive and negative charges were distributed in atoms, Ernest Rutherford (1871 to 1937) and his coworkers performed numerous experiments in which α particles emitted from a radioactive element such as polonium were allowed to strike thin sheets of metals such as gold or platinum. It was already known that the α particles carried a positive charge and traveled rapidly through gases in straight lines. Rutherford reasoned that in a solid, where the atoms were packed tightly together, there would be numerous collisions of α particles with electrons or with the unknown positive portions of the atoms. Since the mass of an individual electron was quite small, a great many collisions would be necessary to deflect an α particle from its original path, and Rutherford’s preliminary calculations indicated that most would go right through the metal targets or be deflected very little by the electrons. In 1909, confirmation of this expected result was entrusted to Hans Geiger and a young student, Ernest Marsden, who was working on his first research project. The results of Geiger and Marsden’s work (using apparatus whose design is shown schematically in Figure 1) were quite striking. Most of the α particles went straight through the sample or were deflected very little. These were observed by means of continuous luminescence of the ZnS screen at position 1 in the diagram. Observations made at greater angles from the initial path of the α particles (positions 2 and 3) revealed fewer and fewer flashes of light, but even at an angle nearly 180° from the initial path (position 4), a small number of flashes was still observed.
This result amazed Rutherford. In his own words, “It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you. On consideration, I realized that this scattering backwards must be the result of a single collision, and when I made calculations I saw that it was impossible to get anything of that order of magnitude unless you took a system in which the greater part of the mass of an atom was concentrated in a minute nucleus.” Rutherford’s interpretation of Geiger and Marsden’s experiment is shown schematically in Figure 2. Figure 2: Rutherford’s microscopic interpretation of the results of Geiger and Marsden’s experiment. Quantitative calculations using these experimental results showed that the diameter of the nucleus was about one ten-thousandth that of the atom. The positive charge on the nucleus was found to be +Ze, where Z is the number which indicates the position of an element in the periodic table. (For example, H is the first element and has Z = 1. Helium is the second element and Z = 2. The twentieth element in the periodic table constructed earlier is Ca, and the nucleus of each Ca atom therefore has a charge of \(+20e = 20 \times 1.60 \times 10^{-19}\; C = 32.0 \times 10^{-19}\; C\).) In order for an atom to remain electrically neutral, it must have a total of Z electrons outside the nucleus. These provide a charge of −Ze to balance the positive nuclear charge. The number Z, which indicates the positive charge on the nucleus and the number of electrons in an atom, is called the atomic number. The significance of the atomic number was firmly established in 1914 when H. G. Moseley (1888 to 1915) published the results of experiments in which he bombarded a large number of different metallic elements with electrons in a cathode-ray tube.
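The nuclear-charge arithmetic for calcium can be written out explicitly, using the elementary charge quoted above:

```python
E_CHARGE = 1.60e-19  # elementary charge, coulombs

Z_ca = 20                    # atomic number of calcium
q_nucleus = Z_ca * E_CHARGE  # nuclear charge +Ze
print(q_nucleus)  # -> 3.2e-18 C, i.e. 32.0 × 10⁻¹⁹ C
```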
Wilhelm Roentgen (1845 to 1923) had discovered earlier that in such an experiment, rays were given off which could penetrate black paper or other materials opaque to visible light. He called this unusual radiation x-rays, the x indicating their unknown nature. Moseley found that the frequency of the x-rays was unique for each different metal. It depended on the atomic number (but not on the atomic weight) of the metal. (If you are not familiar with electromagnetic radiation or the term frequency, read the sections where they are discussed more fully.) Using his x-ray frequencies, Moseley was able to establish the correct ordering in the periodic table for elements such as Co and Ni whose atomic weights disagreed with the positions to which Mendeleev had assigned them. His work confirmed the validity of Mendeleev’s assumption that chemical properties were more important than atomic weights.
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Metabolism/Catabolism/Kreb's_Cycle
Organisms derive the majority of their energy from the Krebs cycle, also known as the TCA cycle. The Krebs cycle is an aerobic process consisting of eight definite steps. In order to enter the Krebs cycle, pyruvate from glycolysis must first be converted into acetyl-CoA by the pyruvate dehydrogenase complex found in the mitochondria, an oxidative process wherein NADH and CO₂ are formed. Another source of acetyl-CoA is beta oxidation of fatty acids. Only in the presence of oxygen are organisms capable of using the Krebs cycle. The reason oxygen is required is that the NADH and FADH₂ produced in the Krebs cycle are oxidized in the electron transport chain (ETC), thus replenishing the supply of NAD⁺ and FAD.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_(Zumdahl_and_Decoste)/7%3A_Acids_and_Bases/7.07_Polyprotic_Acids
The name "polyprotic" literally means many protons. Therefore, in this section we will be observing some specific acids and bases which either lose or accept more than one proton. Then, we will discuss the equations used in finding the degree of dissociation. Finally, with given examples, we will be able to approach problems dealing with polyprotic acids and bases. Polyprotic acids are specific acids that are capable of losing more than a single proton per molecule in acid-base reactions. (In other words, acids that have more than one ionizable H atom per molecule.) Protons are lost through several stages (one at each stage), with the first proton being the fastest and most easily lost. Contrast this with monoprotic acids, which can donate only one proton. From the table above, we see that sulfuric acid is the strongest. It is important to know that \(K_{a1} > K_{a2} > K_{a3}\), where \(K_a\) stands for the acidity constant, or acid dissociation constant (first, second, and third, respectively). These constants are used to measure the degree of dissociation of hydrogens in the acid. To find \(K_{a1}\) of hydrosulfuric acid (H₂S), you must first write the reaction: \[H_2S \rightleftharpoons H^+ + HS^- \nonumber \] Dividing the products by the reactants, we then have: \[K_{a1} = \dfrac{[H^+] [HS^-]}{[H_2S]} \nonumber \] To find \(K_{a2}\), we start with the reaction: \[HS^- \rightleftharpoons H^+ + S^{2-} \nonumber \] Then, as when finding \(K_{a1}\), write the products over the reactants: \[K_{a2} = \dfrac{[H^+] [S^{2-}]}{[HS^-]} \nonumber \] From these reactions we can observe that it takes two steps to fully remove the H⁺ ions. This also means that this reaction will produce two equivalence points. The equivalence point, by definition, is the point during an acid-base titration in which there has been an equal amount of acid and base reacted. If we were to graph this, we would be able to see exactly what two equivalence points look like.
Let's check it out: Note the multiple equivalence points, and notice that the curve is nearly vertical at those points, indicating equal added quantities of acid and base. In strong acid + strong base titrations, the pH changes slowly at first, rapidly through the equivalence point of pH = 7, and then slows down again. If an acid is titrated with a strong base, the pH will go up as the base is added to it. Conversely, if a base is titrated with a strong acid, the pH will fall as acid is added. Next, let's take a look at sulfuric acid. This unique polyprotic acid is the only common one whose first deprotonation step goes essentially to completion: \[H_2SO_{4(aq)} + H_2O_{(l)} \rightleftharpoons H_3O^+_{(aq)} + HSO^-_{4(aq)} \nonumber \] Now let's try something a little harder. The ionization of phosphoric acid (three dissociation reactions this time) can be written like this. Start with H₃PO₄: \[K_{a1}: H_3PO_{4(aq)} \rightleftharpoons H^+_{(aq)} + H_2PO^-_{4(aq)} \nonumber \] \[K_{a2} : H_2PO^-_{4(aq)} \rightleftharpoons HPO^{2-}_{4(aq)} + H^+_{(aq)} \nonumber \] \[K_{a3} : HPO^{2-}_{4(aq)} \rightleftharpoons H^+_{(aq)} + PO^{3-}_{4(aq)} \nonumber \] So from the above reactions we can see that it takes three steps to fully remove the H⁺ ions. This also means that this reaction will produce three equivalence points. Polyprotic bases are bases that can accept at least one H⁺ ion, or proton, in acid-base reactions. For a triprotic base A³⁻, first, start with the reaction \[A^{3-} + H_2O \rightleftharpoons HA^{2-} + OH^- \qquad K_{b1} = \dfrac{[OH^-][HA^{2-}]}{[A^{3-}]} = \dfrac{K_w}{K_{a3}} \nonumber\] Then, we plug in the products over the reactants for the second step: \[HA^{2-} + H_2O \rightleftharpoons H_2A^- + OH^- \qquad K_{b2} = \dfrac{[OH^-][H_2A^-]}{[HA^{2-}]} = \dfrac{K_w}{K_{a2}} \nonumber\] Finally, we are left with the third dissociation, \(K_{b3}\): \[H_2A^- + H_2O \rightleftharpoons H_3A + OH^- \qquad K_{b3} = \dfrac{[OH^-][H_3A]}{[H_2A^-]} = \dfrac{K_w}{K_{a1}} \nonumber\]
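The relation \(K_b = K_w/K_a\) used above can be sketched for the conjugate base of phosphoric acid. The \(K_a\) values below are approximate textbook values, an assumption on our part:

```python
KW = 1.0e-14  # ion product of water at 25 °C

# Approximate textbook acid dissociation constants for H3PO4 (assumed)
Ka1, Ka2, Ka3 = 7.5e-3, 6.2e-8, 4.8e-13

# Each base-hydrolysis step pairs with the *opposite* acid step:
Kb1 = KW / Ka3  # PO4^3-  + H2O <-> HPO4^2- + OH-
Kb2 = KW / Ka2  # HPO4^2- + H2O <-> H2PO4^- + OH-
Kb3 = KW / Ka1  # H2PO4^- + H2O <-> H3PO4   + OH-

print(Kb1, Kb2, Kb3)
```

Note that because \(K_{a1} > K_{a2} > K_{a3}\), the base constants come out in the reverse order, \(K_{b1} > K_{b2} > K_{b3}\).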
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Reactions/Named_Reactions/TES_protection
Below is a standard procedure for the TES (triethylsilyl) protection of an alcohol. To an ice-cold (0 °C) solution of the alcohol (1 mmol) in DMF (2 mL, 0.5 M) under N₂ is added imidazole (3 mmol). TESCl (2 mmol) is then added dropwise. After complete disappearance of the starting material by TLC, the reaction mixture is quenched by addition of water (1 mL), diluted with Et₂O (10 mL) and the layers are separated. The organic layer is washed with water (10 x 3 mL) and brine (2 mL), dried (Na₂SO₄), filtered and concentrated under reduced pressure. The residue is purified by flash chromatography on silica gel. An alternative procedure uses TESOTf: to a solution of the alcohol (1 mmol) in DCM (2 mL, 0.5 M) under N₂ at -78 °C is added dry 2,6-lutidine (1.5 mmol). TESOTf (1.1 mmol) is then added dropwise. After complete disappearance of the starting material by TLC, the reaction mixture is quenched by addition of sat. aq. NaHCO₃ (2 mL), diluted with DCM (5 mL) and brought back to rt. The layers are separated and the aqueous layer is extracted with DCM (3 x 2 mL). The combined organic layers are washed with brine (2 mL), dried (Na₂SO₄), filtered and concentrated under reduced pressure. The residue is purified by flash chromatography on silica gel.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/16%3A_Electrochemistry/16.05%3A_Applications_of_the_Nernst_Equation
Make sure you thoroughly understand the following essential ideas: We ordinarily think of the oxidation potential as being controlled by the concentrations of the oxidized and reduced forms of a redox couple, as given by the Nernst equation. Under certain circumstances it becomes more useful to think of the potential as an independent variable that can be used to control the ratio of oxidized to reduced forms in the Nernst equation. This usually occurs when two redox systems are present, one being much more concentrated or kinetically active than the other. By far the most important example of this is the way atmospheric oxygen governs the composition of the many redox systems connected with biological activity. The presence of oxygen in the atmosphere has a profound effect on the redox properties of the aquatic environment— that is, on natural waters exposed directly or indirectly to the atmosphere, and by extension, on organisms that live in an aerobic environment. This is due, of course, to its being an exceptionally strong oxidizing agent and thus a low-lying sink for electrons from most of the elements and all organic compounds. Those parts of the environment that are protected from atmospheric oxygen are equally important because it is only here that electrons are sufficiently available to produce the "reducing" conditions that are essential for processes varying from photosynthesis to nitrogen fixation. Estimate the redox potential of a natural water that is in equilibrium with the atmosphere at pH 7 and 298 K. What fraction of a dilute solution of Fe²⁺ will be in its oxidized form Fe³⁺ in such a water? The relevant E°s are 1.23 V for the O₂/H₂O couple and 0.77 V for Fe³⁺/Fe²⁺. The potential (with respect to the SHE, of course) is given by the Nernst equation \[E = 1.23 - 0.059\, pH \nonumber\] which works out to E = 0.82 volt at pH 7. As the Le Chatelier principle predicts, the higher pH (lower [H⁺] compared to that at the "standard" pH of zero) reduces the electron-accepting tendency of oxygen. The Nernst equation for the reduction of Fe³⁺ is \(E = 0.77 - 0.059 \log Q\), in which \(Q\) is the ratio [Fe²⁺]/[Fe³⁺].
With E set by the O₂/H₂O couple, this becomes \[0.82 = 0.77 - 0.059 \log Q \nonumber\] which gives \(Q = 10^{-0.85}\), or [Fe²⁺]/[Fe³⁺] = 0.14/1, so the fraction of the iron in its oxidized form is 1/1.14 = 0.88. As you will recall from your study of acid-base chemistry, the pH of a solution (defined as –log {H⁺}) is a measure of the availability (technically, the activity) of protons in the solution. As is explained in more detail elsewhere, protons tend to "fall" (in free energy) from filled donor levels (acids) to lower acceptor levels (bases). Through the relation \[[H^+] \approx K_a \dfrac{C_a}{C_b}\] which can be rewritten as \[\dfrac{C_a}{C_b} \approx \dfrac{[H^+]}{K_a}\] the pH is treated as an independent variable that controls the ratio of the conjugate forms of any acid-base pairs in the solution: \[\log \left(\dfrac{C_a}{C_b}\right) \approx pH – pK_a\] In the same way, we can define the pE as the negative log of the electron activity in the solution: \[pE = –\log\{e^–\}\] A very large part of chemistry is concerned, either directly or indirectly, with determining the concentrations of ions in solution. Any method that can accomplish such measurements using relatively simple physical techniques is bound to be widely exploited. Cell potentials are fairly easy to measure, and although the Nernst equation relates them to ionic activities rather than to concentrations, the difference between them becomes negligible in solutions where the total ionic concentration is less than about 10⁻³ M. The concentrations of ions in equilibrium with a sparingly soluble salt are sufficiently low that their direct determination can be quite difficult. A far simpler and more common procedure is to set up a cell in which one of the electrode reactions involves the insoluble salt, and whose net cell reaction corresponds to the dissolution of the salt.
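The iron speciation example can be reproduced numerically, using the E° values and the 0.059 V factor from the text (a sketch; the variable names are ours):

```python
E_O2_H2O = 1.23  # E° for O2 + 4H+ + 4e- -> 2H2O, volts
E_FE = 0.77      # E° for Fe3+ + e- -> Fe2+, volts
pH = 7.0

# Potential poised by atmospheric oxygen: E = 1.23 - 0.059·pH
E = round(E_O2_H2O - 0.059 * pH, 2)  # -> 0.82 V

# Nernst equation for the iron couple: E = 0.77 - 0.059·log([Fe2+]/[Fe3+])
ratio = 10 ** ((E_FE - E) / 0.059)   # [Fe2+]/[Fe3+] ≈ 0.14
frac_oxidized = 1 / (1 + ratio)      # fraction present as Fe3+ ≈ 0.88

print(E, round(ratio, 2), round(frac_oxidized, 2))
```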
For example, to determine the \(K_{sp}\) for silver chloride, we could use the cell \[Ag_{(s)} | Ag^+(?\; M) || Ag^+,Cl^– | AgCl_{(s)} | Ag_{(s)}\] whose net equation corresponds to the dissolution of silver chloride: \[AgCl_{(s)} \rightleftharpoons Ag^+ + Cl^- \nonumber\] The standard potential for the net reaction refers to a hypothetical solution in which the activities of the two ions are unity. The cell potential we actually observe corresponds to E in the Nernst equation, which is then solved for Q, which gives \(K_{sp}\) directly. In many situations, accurate determination of an ion concentration by direct measurement of a cell potential is impossible due to the presence of other ions and a lack of information about activity coefficients. In such cases it is often possible to determine the ion indirectly by titration with some other ion. For example, the initial concentration of an ion such as Fe²⁺ can be found by titrating with a strong oxidizing agent such as Ce⁴⁺. The titration is carried out in one side of a cell whose other half is a reference electrode: Pt | Fe²⁺, Fe³⁺ || reference electrode. Initially the left cell contains only Fe²⁺. As the titrant is added, the ferrous ion is oxidized to Fe³⁺ in a reaction that is virtually complete: \[Fe^{2+} + Ce^{4+} → Fe^{3+} + Ce^{3+}\] The cell potential is followed as the Ce⁴⁺ is added in small increments. Once the first drop of ceric ion titrant has been added, the potential of the left cell is controlled by the ratio of oxidized and reduced iron according to the Nernst equation \[E = 0.68 - 0.059 \; \log \dfrac{[Fe^{2+}]}{[Fe^{3+}]}\] which causes the potential to rise as more iron becomes oxidized. When the equivalence point is reached, the Fe²⁺ will have been totally consumed (the large equilibrium constant ensures that this will be so), and the potential will then be controlled by the concentration ratio of Ce⁴⁺/Ce³⁺. The idea is that both species of a redox couple must be present in reasonable concentrations to poise an electrode (that is, to control its potential according to the Nernst equation).
If one works out the actual cell potentials for various concentrations of all these species, the resulting titration curve looks much like the familiar acid-base titration curve. The end point is found not by measuring a particular cell voltage, but by finding what volume of titrant gives the steepest part of the curve. Since pH is actually defined in terms of hydrogen ion activity and not its concentration, a hydrogen electrode allows a direct measure of {H⁺} and thus of –log {H⁺}, which is the pH. All you need is to measure the voltage of a cell H₂(g, 1 atm) | Pt | H⁺(? M) || reference electrode. In theory this is quite simple, but when it was first employed in the pre-electronics era, it required some rather formidable-looking apparatus (such as the L&N vibrating-reed electrometer setup from the 1920's shown here) and the use of explosive hydrogen gas. Although this arrangement (in which the reference electrode could be a standard hydrogen electrode) has been used for high-precision determinations since that time, it would be impractical for routine pH measurements of the kinds that are widely done, especially outside the research laboratory. In 1914 it was discovered that a thin glass membrane enclosing a solution of HCl can produce a potential that varies with the hydrogen ion activity {H⁺} in about the same way as that of the hydrogen electrode. Glass electrodes are manufactured in huge numbers for both laboratory and field measurements. They contain a built-in Ag-AgCl reference electrode in contact with the HCl solution enclosed by the membrane. The potential of a glass electrode is given by a form of the Nernst equation very similar to that of an ordinary hydrogen electrode, but of course without the H₂: \[E = A + \dfrac{RT}{F} \ln (\{H^+\} + C) \nonumber\] in which A and C are constants that depend on the particular glass membrane. The reason a glass membrane would behave in this way was not understood until around 1970.
It now appears that hydrogen ions in the external solution diffuse through the glass and push out a corresponding number of the Na⁺ ions which are normally present in most glasses. These sodium ions diffuse to whichever side of the membrane has the lower concentration, where they remain mostly confined to the surface of the glass, which has a porous, gelatinous nature. It is the excess charge produced by these positive ions that gives rise to the pH-dependent potential. The first commercial pH meter was developed by Arnold Beckman (1900-2004) while he was a chemistry professor at CalTech. He was unable to interest any of the instrumentation companies in marketing it, so he founded his own company and eventually became a multi-millionaire philanthropist. The function of the membrane in the glass electrode is to allow hydrogen ions to pass through and thus change its potential, while preventing other cations from doing the same thing (this selectivity is never perfect; most glass electrodes will respond to moderate concentrations of sodium ions, and to high concentrations of some others). A glass electrode is thus one form of ion-selective electrode. Since about 1970, various other membranes have been developed which show similar selectivities to certain other ions. These are widely used in industrial, biochemical, and environmental applications. You may recall the phenomena of osmosis and osmotic pressure that are observed when two solutions having different solute concentrations are separated by a thin film or membrane whose porosity allows small ions and molecules to diffuse through, but which holds back larger particles. If one solution contains a pair of oppositely-charged ionic species whose sizes are very different, the smaller ions may pass through the semipermeable membrane while the larger ones are retained. This will produce a charge imbalance between the two solutions, with the original solution having the charge sign of the larger ion.
Eventually the electrical work required to bring about further separation of charges becomes too large to allow any further net diffusion to take place, and the system settles into an equilibrium state in which a constant potential difference (usually around a volt or less) is maintained. This potential difference is usually called a membrane potential or Donnan potential, after the chemist who first described this phenomenon. If the smaller ions are able to diffuse through the membrane but the larger ions cannot, a potential difference will develop between the two solutions. This membrane potential can be observed by introducing a pair of platinum electrodes. The figure shows a simple system containing the potassium salt of a protein on one side of a membrane, and potassium chloride on the other. The proteinate anion, being too large to diffuse through the membrane, gives rise to the potential difference. The value of this potential difference can be expressed by a relation that is essentially the same as the Nernst equation, although its derivation is different. The membrane potential can be expressed in terms of the ratio of either the K⁺ or Cl⁻ ion activities on the two sides of the membrane. The membrane surrounding most living cells contains sites or "channels" through which K⁺ ions are selectively transported so that the concentration of K⁺ inside the cell is 10-30 times that of the extracellular fluid. Taking the activity ratio as about 20, the above relation predicts that the potential difference will be on the order of 75 mv, which is consistent with observed values. Transport of an ion such as K⁺ from a region of low concentration into the more concentrated intracellular fluid requires a source of free energy, which is supplied by ATP under enzymatic control. The metabolic processes governing this action are often referred to as "ion pumps".
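The membrane-potential estimate can be sketched as follows; body temperature and the activity ratio of 20 are the stated assumptions, and the Nernst-like form \(\Delta\phi = -(RT/F)\ln(\{K^+\}_{in}/\{K^+\}_{out})\) is the relation described in the text:

```python
import math

R = 8.314   # gas constant, J/(mol·K)
F = 96485   # Faraday constant, C/mol
T = 310.0   # body temperature, K (assumed)

ratio = 20.0  # {K+}inside / {K+}outside, from the text

# Potential of the cell interior relative to the exterior, volts
delta_phi = -(R * T / F) * math.log(ratio)
print(round(delta_phi * 1000))  # -> -80 (mV)
```

At 310 K the result is about −80 mV; at room temperature it is closer to −77 mV, either way in the range of observed resting potentials.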
Transmission of signals through the nervous system occurs not by the movement of a charge carrier through the nerve, but by waves of differential ion concentrations that travel along the length of the nerve. These concentration gradients are dissipated through protein-based ion channels and restored by ATP-activated (and energy-consuming) ion pumps specific to Na+ and K+ ions. We sometimes think of our nerves as the body's wiring, but the "electricity" that they transmit is not a flow of electrons, but a rapidly-traveling wave of depolarization involving the transport of ions through the nerve membrane. The normal potential difference between the inner and outer parts of nerve cells is about –70 mV, as estimated above. Transmission of a nerve impulse is initiated by a reduction of this potential difference to about –20 mV. This has the effect of temporarily opening the Na+ channels; the influx of these ions causes the membrane potential of the adjacent portion of the nerve to collapse, leading to an effect that is transmitted along the length of the nerve. As this pulse passes, K+ and Na+ pumps restore the nerve to its resting condition.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Book%3A_ChemPRIME_(Moore_et_al.)/09%3A_Gases/9.11%3A_The_Law_of_Combining_Volumes
In effect, the preceding example used the factor P/RT to convert from volume to amount of gas. The reciprocal of this factor, RT/P, can be used to convert from amount of gas to volume. This is emphasized if we write the Ideal Gas Equation as \[V=\frac{RT}{P}n \nonumber \] This indicates that when we write a chemical equation involving gases, the coefficients not only tell us what amount of each substance is consumed or produced, they also indicate the relative volumes of each gas consumed or produced. For example, \[ \ce{ 2H2 (g) + O2 (g) \rightarrow 2 H2O (g)} \label{eq2} \] means that for every 2 mol \(\ce{H2(g)}\) consumed there will be 1 mol \(\ce{O2(g)}\) consumed and 2 mol \(\ce{H2O(g)}\) produced. It also implies that for every \(\left( \text{2 mol }\times \frac{RT}{P} \right)\text{L}\) H2 there will be \(\left( \text{1 mol }\times \frac{RT}{P} \right)\text{L}\) O2 and \(\left( \text{2 mol }\times \frac{RT}{P} \right)\text{L}\) H2O. In the image below, we see a more literal example. As Gay-Lussac discovered, if you mix 2 L of H2 gas with 1 L of O2, you get 2 L of H2O vapor. The ratio of volumes matches the stoichiometric ratio of the chemical reaction in Equation \ref{eq2}. This is an example of the law of combining volumes: When gases combine at constant temperature and pressure, the volumes involved are always in the ratio of simple whole numbers. Since the factor RT/P would be the same for all three gases, the volume of O2(g) consumed must be half the volume of H2(g) consumed. The volume of H2O(g) produced would be only two-thirds the total volume [of H2(g) and O2(g)] consumed, and so at the end of the reaction the total volume must be less than at the beginning. The law of combining volumes was proposed by Gay-Lussac at about the same time that Dalton published his atomic theory. Shortly thereafter, Avogadro suggested the hypothesis that equal volumes of gases contained equal numbers of molecules.
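The volume bookkeeping above can be sketched numerically: at fixed T and P the factor RT/P is the same for every gas, so the mole ratios 2 : 1 : 2 become volume ratios (a Python sketch; the chosen conditions are arbitrary assumptions):

```python
R = 0.08206   # L atm K^-1 mol^-1
T = 298.15    # K (arbitrary, but identical for all three gases)
P = 1.00      # atm

# 2 H2(g) + O2(g) -> 2 H2O(g): stoichiometric coefficients
n_H2, n_O2, n_H2O = 2, 1, 2

# V = (RT/P) * n, so each volume is the coefficient times a common factor
factor = R * T / P
V_H2, V_O2, V_H2O = n_H2 * factor, n_O2 * factor, n_H2O * factor

# The volume ratios reduce to the coefficient ratios 2 : 1 : 2
print(V_H2 / V_O2, V_H2O / V_O2)   # 2.0 2.0
```

Because the common factor RT/P cancels out of every ratio, the particular T and P chosen have no effect on the result, which is exactly the content of the law of combining volumes.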
Dalton strongly opposed Avogadro’s hypothesis because it required that some molecules contain more than the minimum number of atoms. For example, according to Dalton, the formula for hydrogen gas should be the simplest possible, e.g., H. Similarly, Dalton proposed the formula O for oxygen gas. His equation for formation of water vapor was \[\underset{\begin{smallmatrix} \text{1 volume} \\ \text{hydrogen} \end{smallmatrix}}{\mathop{\text{H}}}\,\text{ + }\underset{\begin{smallmatrix} \text{1 volume} \\ \text{oxygen} \end{smallmatrix}}{\mathop{\text{O}}}\,\text{ }\to \text{ }\underset{\begin{smallmatrix} \text{1 volume} \\ \text{water vapor} \end{smallmatrix}}{\mathop{\text{HO}}}\, \nonumber \] But experiments showed that twice as great a volume of hydrogen as of oxygen was required for complete reaction. Furthermore, the volume of water vapor produced was twice the volume of oxygen consumed. Avogadro proposed (correctly, as it turned out) that the formulas for hydrogen, oxygen, and water were H2, O2, and H2O, and he explained the volume data in much the same way as we have done for Eq. (2). Dalton, who had originally conceived the idea of atoms and molecules, was unwilling to concede that substances such as hydrogen or water might have formulas more complicated than was absolutely necessary. Partly as a result of Dalton’s opposition, it took almost half a century before Avogadro’s Italian countryman Stanislao Cannizzaro (1826 to 1910) was able to convince chemists that Avogadro’s hypothesis was correct. The blindness of chemists to Avogadro’s ideas for so long makes one wonder whether today’s Nobel prize winners might not be equally wrong about some other aspect of chemistry. Who knows but that some forgotten Argentinian Avogadro is still waiting for a Cannizzaro to explain his or her ideas to the scientific world.
Because the amount of gas is related to volume by the ideal gas law, it is possible to calculate the volume of a gaseous substance consumed or produced in a reaction. Molar mass and stoichiometric ratio are employed in the same way as in earlier examples, and the factor RT/P is used to convert from amount of gas to volume. Oxygen was first prepared by Joseph Priestley by heating mercury(II) oxide, \(\ce{HgO}\), then called "calx of mercury", according to the equation \[\ce{2HgO(s) \rightarrow 2 Hg (l) + O2 (g)} \nonumber \] What volume (in cubic centimeters) of O2 can be prepared from 1.00 g \(\ce{HgO}\)? The volume is measured at 20°C and 0.987 atm. The mass of HgO can be converted to amount of HgO and this can be converted to amount of O2 by means of a stoichiometric ratio. Finally, the ideal gas law is used to obtain the volume of O2. Schematically, \[m_{\text{HgO}}\xrightarrow{M_{\text{HgO}}}n_{\text{HgO}}\xrightarrow{S\left( \text{O}_{\text{2}}\text{/HgO} \right)}n_{\text{O}_{\text{2}}}\xrightarrow{RT/P}V_{\text{O}_{\text{2}}} \nonumber \] \[V_{\text{O}_{\text{2}}}=\text{1.00 g HgO }\times \text{ }\frac{\text{1 mol HgO}}{\text{216}\text{.59 g HgO}}\text{ }\times \text{ }\frac{\text{1 mol O}_{\text{2}}}{\text{2 mol HgO}}\text{ } \nonumber \] \[\times \text{ }\frac{\text{0}\text{.0820 liter atm}}{\text{1 K mol O}_{\text{2}}}\text{ }\times \text{ }\frac{\text{293}\text{.15 K}}{\text{0}\text{.987 atm}}=\text{0}\text{.0562 liter}=\text{56}\text{.2 cm}^{\text{3}} \nonumber \]
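The factor-label chain in the worked example can be checked directly (a Python sketch using the same data; a slightly more precise value of R is used, so the last digit differs marginally from the text):

```python
# Volume of O2 from decomposition of 1.00 g HgO at 20 C and 0.987 atm
m_HgO = 1.00      # g
M_HgO = 216.59    # g/mol
R = 0.08206       # L atm K^-1 mol^-1
T = 293.15        # K (20 C)
P = 0.987         # atm

n_HgO = m_HgO / M_HgO    # mol HgO
n_O2 = n_HgO / 2         # 1 mol O2 per 2 mol HgO
V_O2 = n_O2 * R * T / P  # liters

print(f"{V_O2 * 1000:.1f} cm^3")   # about 56 cm^3, as in the text
```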
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/05%3A_Atoms_and_the_Periodic_Table/5.01%3A_Primer_on_Quantum_Theory
A particle is a discrete unit of matter having the attributes of mass, momentum (and thus kinetic energy) and optionally of electric charge. A wave, in contrast, is a periodic variation of some quantity as a function of location or time. For example, the wave motion of a vibrating guitar string is defined by the displacement of the string from its center as a function of distance along the string. A sound wave consists of variations in the pressure with location. A wave is characterized by its wavelength \(\lambda\) (lambda) and frequency \(\nu\) (nu), which are connected by the relation \[ \lambda =\dfrac{u}{\nu} \] in which \(u\) is the velocity of propagation of the disturbance in the medium. The velocity of sound in the air is 330 m s\(^{-1}\). What is the wavelength of A440 on the piano keyboard? \[ \lambda =\dfrac{330\, m\, s^{-1}}{440 \,s^{-1}} = 0.75\,m\] Two other attributes of waves are the amplitude (the height of the wave crests with respect to the base line) and the phase, which measures the position of a crest with respect to some fixed point. The square of the amplitude gives the intensity of the wave: the energy transmitted per unit time. A unique property of waves is their ability to combine constructively or destructively, depending on the relative phases of the combining waves. Is light a particle or a wave? Phrasing the question in this way reflects the deterministic mode of Western thought which assumes that something cannot "be" two quite different things at the same time. The short response to this question is that all we know about light (or anything else, for that matter) are the results of experiments, and that some kinds of experiments show that light behaves like waves, and that other experiments reveal light to have the properties of particles. In the early 19th century, the English scientist Thomas Young carried out the famous two-slit experiment which demonstrated that a beam of light, when split into two beams and then recombined, will show interference effects that can only be explained by assuming that light is a wavelike disturbance.
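The wavelength-frequency relation in the piano example above can be checked in two lines (a trivial Python sketch using the values from the text):

```python
# Wavelength of A440 in air, using lambda = u / nu
u = 330.0    # speed of sound in air, m s^-1 (value from the text)
nu = 440.0   # frequency of A440, s^-1

wavelength = u / nu
print(wavelength, "m")   # 0.75 m, as in the text
```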
By 1820, Augustin Fresnel had put this theory on a sound mathematical basis, but the exact nature of the waves remained unclear until the 1860's when James Clerk Maxwell developed his electromagnetic theory. From the laws of electromagnetic induction that were discovered in the period 1820-1830 by Hans Christian Oersted and Michael Faraday, it was known that a moving electric charge gives rise to a magnetic field, and that a changing magnetic field can induce electric charges to move. Maxwell showed theoretically that when an electric charge is accelerated (by being made to oscillate within a piece of wire, for example), electrical energy will be lost, and an equivalent amount of energy is radiated into space, spreading out as a series of waves extending in all directions. What is "waving" in electromagnetic radiation? According to Maxwell, it is the strengths of the electric and magnetic fields as they travel through space. The two fields are oriented at right angles to each other and to the direction of travel. As the electric field changes, it induces a magnetic field, which then induces a new electric field, etc., allowing the wave, once formed, to propagate itself through space, essentially feeding on itself. In one of the most brilliant mathematical developments in the history of science, Maxwell expounded a detailed theory, and even showed that these waves should travel at about 3E8 m s\(^{-1}\), a value which experimental observations had shown corresponded to the speed of light.
In 1887, the German physicist Heinrich Hertz demonstrated that an oscillating electric charge (in what was in essence the world's first radio transmitting antenna) actually does produce electromagnetic radiation just as Maxwell had predicted, and that these waves behave exactly like light. It is now understood that visible light is electromagnetic radiation that falls within a range of wavelengths that can be perceived by the eye. The entire electromagnetic spectrum runs from radio waves at the long-wavelength end, through heat and light, to X-rays and gamma radiation. Quantum theory did not arise from any attempt to explain the behavior of light itself; by 1890 it was generally accepted that the electromagnetic theory could explain all of the properties of light that were then known. Certain aspects of the interaction between light and matter that were observed during the next decade proved rather troublesome, however. The relation between the temperature of an object and the peak wavelength emitted by it was established empirically by Wilhelm Wien in 1893. This put on a quantitative basis what everyone knows: the hotter the object, the "bluer" the light it emits. All objects above the temperature of absolute zero emit electromagnetic radiation consisting of a broad range of wavelengths described by a distribution curve whose peak wavelength \(\lambda_{peak}\) at absolute temperature \(T\) for a "perfect radiator" known as a black body is given by Wien's law: \[ \lambda_{peak} = \dfrac{0.0029\,m\,K}{T}\] At ordinary temperatures this radiation is entirely in the infrared region of the spectrum, but as the temperature rises above about 1000 K, more energy is emitted in the visible wavelength region and the object begins to glow, first with red light, and then shifting toward the blue as the temperature is increased. This type of radiation has two important characteristics. First, the spectrum is a continuous one, meaning that all wavelengths are emitted, although with intensities that vary smoothly with wavelength.
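Wien's law can be evaluated for a few temperatures to make the infrared-to-visible shift concrete (a Python sketch; the temperatures chosen are illustrative assumptions, not from the text):

```python
# Wien's law: lambda_peak = b / T, with b = 0.0029 m K (the value in the text)
b = 0.0029   # m K

for T in (300, 1000, 5800):   # room temperature, a just-glowing object, the Sun's surface
    lam = b / T               # peak wavelength in meters
    print(f"T = {T} K: lambda_peak = {lam * 1e9:.0f} nm")
```

At 300 K the peak (about 9700 nm) lies deep in the infrared, while at 5800 K it falls near 500 nm, in the middle of the visible region, consistent with the qualitative description above.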
The other curious property of black body radiation is that it is independent of the composition of the object; all that is important is the temperature. Black body radiation, like all electromagnetic radiation, must originate from oscillations of electric charges, which in this case were assumed to be the electrons within the atoms of an object acting somewhat as miniature Hertzian oscillators. It was presumed that since all wavelengths seemed to be present in the continuous spectrum of a glowing body, these tiny oscillators could send or receive any portion of their total energy. However, all attempts to predict the actual shape of the emission spectrum of a glowing object on the basis of classical physical theory proved futile. In 1900, the great German physicist Max Planck (who earlier in the same year had worked out an empirical formula giving the detailed shape of the black body emission spectrum) showed that the shape of the observed spectrum could be exactly predicted if the energies emitted or absorbed by each oscillator were restricted to integral multiples of \(h\nu\), where \(\nu\) ("nu") is the frequency and \(h\) is a constant 6.626E–34 J s which we now know as Planck's constant. The allowable energies of each oscillator are quantized, but the emission spectrum of the body remains continuous because of differences in frequency among the uncountable numbers of oscillators it contains. This modification of classical theory, the first use of the quantum concept, was as unprecedented as it was simple, and it set the stage for the development of modern quantum physics. Shortly after J.J. Thomson's experiments led to the identification of the elementary charged particles we now know as electrons, it was discovered that the illumination of a metallic surface by light can cause electrons to be emitted from the surface. This phenomenon, the photoelectric effect, is studied by illuminating one of two metal plates in an evacuated tube.
The kinetic energy of the photoelectrons causes them to move to the opposite electrode, thus completing the circuit and producing a measurable current. However, if an opposing potential (the retarding potential) is imposed between the two plates, the kinetic energy can be reduced to zero so that the electron current is stopped. By observing the value of the retarding potential \(V_0\), the kinetic energy of the photoelectrons can be calculated from the electron charge \(e\), its mass \(m\), and the frequency \(\nu\) of the incident light: \[ \tfrac{1}{2}mv^2 = eV_0 \] (In diagrams from a web page by Joseph Alward of the University of the Pacific, a plot of this kinetic energy shows how it falls to zero at the critical wavelength corresponding to frequency \(\nu_0\).) Although the number of electrons ejected from the metal surface per second depends on the intensity of the light, as expected, the kinetic energies of these electrons (as determined by measuring the retarding potential needed to stop them) does not, and this was definitely not expected. Just as a more intense physical disturbance will produce higher energy waves on the surface of the ocean, it was supposed that a more intense light beam would confer greater energy on the photoelectrons. But what was found, to everyone's surprise, is that the photoelectron energy is controlled by the wavelength of the light, and that there is a critical wavelength above which no photoelectrons are emitted at all. Albert Einstein quickly saw that if the kinetic energy of the photoelectrons depends on the wavelength of the light, then so must its energy. Further, if Planck was correct in supposing that energy must be exchanged in packets restricted to certain values, then light must similarly be organized into energy packets. But a light ray consists of electric and magnetic fields that spread out in a uniform, continuous manner; how can a continuously-varying wave front exchange energy in discrete amounts?
Einstein's answer was that the energy contained in each packet of the light must be concentrated into a tiny region of the wave front. This is tantamount to saying that light has the nature of a quantized particle whose energy is given by the product of Planck's constant and the frequency: \[ E = h\nu \] Einstein's publication of this explanation in 1905 led to the rapid acceptance of Planck's idea of energy quantization, which had not previously attracted much support from the physics community of the time. It is interesting to note, however, that this did not make Planck happy at all. Planck, ever the conservative, had been reluctant to accept that his own quantized-energy hypothesis was much more than an artifice to explain black-body radiation; to extend it to light seemed an absurdity that would negate the well-established electromagnetic theory and would set science back to the time before Maxwell. Einstein's special theory of relativity arose from his attempt to understand why the laws of physics that describe the current induced in a fixed conductor when a magnet moves past it are not formulated in the same way as the ones that describe the magnetic field produced by a moving conductor. The details of this development are not relevant to our immediate purpose, but some of the conclusions that this line of thinking led to very definitely are. Einstein showed that the velocity of light, unlike that of a material body, has the same value no matter what velocity the observer has. Further, the mass of any material object, which had previously been regarded as an absolute, is itself a function of the velocity of the body relative to that of the observer (hence "relativity"), the relation being given by \[ m = \dfrac{m_0}{\sqrt{1-(v/c)^2}} \] in which \(m_0\) is the rest mass of the particle, \(v\) is its velocity with respect to the observer, and \(c\) is the velocity of light. According to this formula, the mass of an object increases without limit as the velocity approaches that of light. Where does the increased mass come from?
Einstein's answer was that the increased mass is that of the kinetic energy of the object; that is, energy itself has mass, so that mass and energy are equivalent according to the famous formula \(E = mc^2\). The only particle that can move at the velocity of light is the photon itself, due to its zero rest mass. Although the photon has no rest mass, its energy, given by \(h\nu\), confers upon it an effective mass of \(h\nu /c^2\) and a momentum of \(h\nu /c = h/\lambda\). In 1924, the French physicist Louis de Broglie proposed (in his doctoral thesis) that just as light possesses particle-like properties, so should particles of matter exhibit a wave-like character. Within two years this hypothesis had been confirmed experimentally by observing the diffraction (a wave interference effect) produced by a beam of electrons as they were scattered by the row of atoms at the surface of a metal. de Broglie showed that the wavelength of a particle is inversely proportional to its momentum: \[ \lambda = \dfrac{h}{mv} \] Notice that the wavelength of a stationary particle is infinitely large, while that of a particle of large mass approaches zero. For most practical purposes, the only particle of interest to chemistry that is sufficiently small to exhibit wavelike behavior is the electron (mass 9.11E–31 kg). We pointed out earlier that a wave is a change that varies with location in a periodic, repeating way. What kind of a change do the crests and hollows of a "matter wave" trace out? The answer is that the wave represents the value of a quantity whose square is a measure of the probability of finding the particle in that particular location. In other words, what is "waving" is the value of a probability function. In 1927, Werner Heisenberg proposed that certain pairs of properties of a particle cannot simultaneously have exact values. In particular, the position and the momentum of a particle have associated with them uncertainties \(\Delta x\) and \(\Delta p\) given by \[ \Delta x \, \Delta p \geq \dfrac{h}{4\pi} \] As with the de Broglie particle wavelength, this has practical consequences only for electrons and other particles of very small mass.
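The de Broglie and Heisenberg relations discussed above can be put on a numerical footing for an electron (a Python sketch; the electron velocity and the confinement distance are illustrative assumptions, not values from the text):

```python
import math

h = 6.626e-34    # Planck's constant, J s
m_e = 9.11e-31   # electron rest mass, kg (value used in the text)

# de Broglie wavelength lambda = h/(m v) for an electron at an assumed 1.0e6 m/s
v = 1.0e6
lam = h / (m_e * v)
print(f"de Broglie wavelength: {lam:.2e} m")   # ~7.3e-10 m, atomic dimensions

# Heisenberg relation: confining the electron to dx ~ 1e-10 m (roughly an
# atomic diameter) forces a minimum momentum uncertainty dp = h/(4 pi dx)
dx = 1.0e-10
dp = h / (4 * math.pi * dx)
print(f"minimum velocity uncertainty: {dp / m_e:.2e} m/s")
```

The velocity uncertainty that comes out, several hundred kilometers per second, is comparable to the assumed velocity itself, which is why the classical notion of a definite electron trajectory within an atom has to be abandoned.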
It is very important to understand that these "uncertainties" are not merely limitations related to experimental error or observational technique, but instead they express an underlying fact that Nature does not allow a particle to possess definite values of position and momentum at the same time. This principle (which would be better described by the term "indeterminacy" than "uncertainty") has been thoroughly verified and has far-reaching practical consequences which extend to chemical bonding and molecular structure. Does an uncertainty in position really imply an uncertainty in momentum, and vice versa? Yes; either one really implies the other. Consider the following two limiting cases. A particle whose velocity is known to within a very small uncertainty will have a sharply-defined energy (because its kinetic energy is known) which can be represented by a probability wave having a single, sharply-defined frequency. A "monochromatic" wave of this kind must extend infinitely in space: But if the peaks of the wave represent locations at which the particle is most likely to manifest itself, we are forced to the conclusion that it can "be" virtually anywhere, since the number of such peaks is infinite! Now think of the opposite extreme: a particle whose location is closely known. Such a particle would be described by a short wave train having only a single peak; the smaller the uncertainty in position, the more narrow the peak. To help you see how waveforms of different wavelength combine, two such combinations are shown below: It is apparent that as more waves of different frequency are mixed, the regions in which they add constructively diminish in extent. The extreme case would be a wave train in which destructive interference occurs at all locations except one, resulting in a single pulse: Is such a wave possible, and if so, what is its wavelength? Such a wave is possible, but only as the sum (interference) of other waves whose wavelengths are all slightly different.
Each component wave possesses its own energy (momentum), and adds that value to the range of momenta carried by the particle, thus increasing the uncertainty \(\Delta p\). In the extreme case of a quantum particle whose location is known exactly, the probability wavelet would have zero width, which could be achieved only by combining waves of all wavelengths -- an infinite number of wavelengths, and thus an infinite range of momentum \(\Delta p\) and thus of kinetic energy. Suppose we direct a beam of photons (or electrons; the experiment works with both) toward a piece of metal having a narrow opening. On the other side there are two more openings, or slits. Finally the particles impinge on a photographic plate or some other recording device. Taking into account their wavelike character, we would expect the probability waves to produce an interference pattern of the kind that is well known for sound and light waves, and this is exactly what is observed; the plate records a series of alternating dark and light bands, thus demonstrating beyond doubt that electrons and light have the character of waves. Now let us reduce the intensity of the light so that only one photon at a time passes through the apparatus (it is experimentally possible to count single photons, so this is a practical experiment). Each photon passes through the first slit, and then through one or the other of the second set of slits, eventually striking the photographic film where it creates a tiny dot. If we develop the film after a sufficient number of photons have passed through, we find the very same interference pattern we obtained previously. There is something strange here. Each photon, acting as a particle, must pass through one or the other of the pair of slits, so we would expect to get only two groups of spots on the film, each opposite one of the two slits.
Instead, it appears that each particle, on passing through one slit, "knows" about the other, and adjusts its final trajectory so as to build up a wavelike interference pattern. It gets even stranger: suppose that we set up a detector to determine which slit a photon is heading for, and then block off the other slit with a shutter. We find that the photon sails straight through the open slit and onto the film without trying to create any kind of an interference pattern. Apparently, any attempt to observe the photon as a discrete particle causes it to behave like one. The only conclusion possible is that quantum particles have no well defined paths; each photon (or electron) seems to have an infinity of paths which thread their way through space, seeking out and collecting information about all possible routes, and then adjusting its behavior so that its final trajectory, when combined with that of others, produces the same overall effect that we would see from a train of waves of wavelength \(\lambda = h/mv\). We have already seen that a glowing body (or actually, any body whose temperature is above absolute zero) emits and absorbs radiation of all wavelengths in a continuous spectrum. In striking contrast is the spectrum of light produced when certain substances are volatilized in a flame, or when an electric discharge is passed through a tube containing gaseous atoms of an element. The light emitted by such sources consists entirely of discrete wavelengths. This kind of emission is known as a discrete spectrum or line spectrum (the "lines" that appear on photographic images of the spectrum are really images of the slit through which the light passes before being dispersed by the prism in the spectrograph). Every element has its own line spectrum which serves as a sensitive and useful tool for detecting the presence and relative abundance of the element, not only in terrestrial samples but also in stars.
(As a matter of fact, the element helium was discovered in the sun, through its line spectrum, before it had been found on Earth.) In some elements, most of the energy in the visible part of the emission spectrum is concentrated into just a few lines, giving their light characteristic colors: yellow-orange for sodium, blue-green for mercury (these are commonly seen in street lights) and orange for neon. Line spectra were well known early in the 19th century, and were widely used for the analysis of ores and metals. The German spectroscopist R.W. Bunsen, now famous for his gas burner, was then best known for discovering two new elements, rubidium and cesium, from the line spectrum he obtained from samples of mineral spring waters. Until 1885, line spectra were little more than "fingerprints" of the elements; extremely useful in themselves, but incapable of revealing any more than the identity of the individual atoms from which they arise. In that year a Swiss school teacher named Johann Balmer published a formula that related the wavelengths of the four known lines in the emission spectrum of hydrogen in a simple way. Balmer's formula was not based on theory; it was probably a case of cut-and-try, but it worked: he was able to predict the wavelength of a fifth, yet-to-be discovered emission line of hydrogen, and as spectroscopic and astronomical techniques improved (the only way of observing highly excited hydrogen atoms at the time was to observe the solar spectrum during an eclipse), a total of 35 lines were discovered, all having wavelengths given by the formula which we write in the modern manner as \[ \dfrac{1}{\lambda} = R\left( \dfrac{1}{n_1^2} - \dfrac{1}{n_2^2} \right) \] in which \(n_1\) = 2 and \(R\) is a constant (the Rydberg constant, after the Swedish spectroscopist) whose value is 1.09678E7 m\(^{-1}\). The variable \(n_2\) is an integer whose values 3, 4, etc. give the wavelengths of the different lines. It was soon discovered that by replacing \(n_1\) with integers other than 2, other series of hydrogen lines could be accounted for.
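The Balmer wavelengths follow directly from the Rydberg formula \(1/\lambda = R(1/n_1^2 - 1/n_2^2)\) with \(n_1 = 2\); a short Python sketch (the range of \(n_2\) values shown is an arbitrary choice):

```python
R = 1.09678e7    # Rydberg constant, m^-1 (value given in the text)

n1 = 2           # Balmer series
for n2 in range(3, 7):
    inv_lam = R * (1 / n1**2 - 1 / n2**2)   # 1/lambda, m^-1
    lam_nm = 1e9 / inv_lam                  # wavelength in nm
    print(f"n2 = {n2}: {lam_nm:.1f} nm")
```

The first line (about 656 nm) is the familiar red H-alpha line, and the succeeding lines march toward the violet end of the visible spectrum, exactly the pattern Balmer fit in 1885.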
These series, which span the wavelength region from the ultraviolet through infrared, are named after their discoverers: \(n_1\) = 1 (Lyman), 2 (Balmer), 3 (Paschen), 4 (Brackett), and 5 (Pfund). Attempts to adapt Balmer's formula to describe the spectra of atoms other than hydrogen generally failed, although certain lines of some of the spectra seemed to fit this same scheme, with the same value of \(R\). There is no upper limit to \(n\); values in the hundreds have been observed, although doing so is very difficult because of the increasingly close spacing of successive levels as \(n\) becomes large. Atoms excited to very high values of \(n\) are said to be in Rydberg states. As \(n\) becomes larger, the spacing between neighboring levels diminishes and the discrete lines merge into a continuum. This can mean only one thing: the energy levels converge as \(n\) approaches infinity. This convergence limit corresponds to the energy required to completely remove the electron from the atom; it is the ionization energy. At energies in excess of this, the electron is no longer bound to the rest of the atom, which is now of course a positive ion. But an unbound system is not quantized; the kinetic energy of the ion and electron can now have any value in excess of the ionization energy. When such an ion and electron pair recombine to form a new atom, the light emitted will have a wavelength that falls in the continuum region of the spectrum. Spectroscopic observation of the convergence limit is an important method of measuring the ionization energies of atoms. Rutherford's demonstration that the mass and the positive charge of the atom is mostly concentrated in a very tiny region called the nucleus forced the question of just how the electrons are disposed outside the nucleus. By analogy with the solar system, a planetary model was suggested: if the electrons were orbiting the nucleus, there would be a centrifugal force that could oppose the electrostatic attraction and thus keep the electrons from falling into the nucleus.
This of course is similar to the way in which the centrifugal force produced by an orbiting planet exactly balances the force due to its gravitational attraction to the sun. The planetary model suffers from one fatal weakness: electrons, unlike planets, are electrically charged. An electric charge revolving in an orbit is continually undergoing a change of direction, that is, acceleration. It has been well known since the time of Hertz that an accelerating electric charge radiates energy. We would therefore expect all atoms to act as miniature radio stations. Even worse, conservation of energy requires that any energy that is radiated must be at the expense of the kinetic energy of the orbital motion of the electron. Thus the electron would slow down, reducing the centrifugal force and allowing the electron to spiral closer and closer to the nucleus, eventually falling into it. In short, no atom that operates according to the planetary model would last long enough for us to talk about it. As if this were not enough, the planetary model was totally unable to explain any of the observed properties of atoms, including their line spectra. Niels Bohr was born in the same year (1885) that Balmer published his formula for the line spectrum of hydrogen. Beginning in 1913, the brilliant Danish physicist published a series of papers that would ultimately derive Balmer's formula from first principles. Bohr's first task was to explain why the orbiting electron does not radiate energy as it moves around the nucleus. This energy loss, if it were to occur at all, would do so gradually and smoothly. But Planck had shown that black body radiation could only be explained if energy changes were limited to jumps instead of gradual changes. If this were a universal characteristic of energy- that is, if all energy changes were quantized, then very small changes in energy would be impossible, so that the electron would in effect be "locked in" to its orbit. 
From this, Bohr went on to propose that there are certain stable orbits in which the electron can exist without radiating and thus without falling into a "death spiral". This supposition was a daring one at the time because it was inconsistent with classical physics, and the theory which would eventually lend it support would not come along until the work of de Broglie and Heisenberg more than ten years later. Since Planck's quanta came in multiples of \(h\), Bohr restricted his allowed orbits to those in which the product of the radius and the momentum of the electron, \(mvr\) (which has the same units, J s, as \(h\)), is an integral multiple of \(h/2\pi\): \[ mvr = n\dfrac{h}{2\pi} \qquad (n = 1, 2, 3, ...) \] Each orbit corresponds to a different energy, with the electron normally occupying the one having the lowest energy, which would be the innermost orbit of the hydrogen atom. Taking the lead from Einstein's explanation of the photoelectric effect, Bohr assumed that each spectral line emitted by an atom that has been excited by absorption of energy from an electrical discharge or a flame represents a change in energy given by \(\Delta E = h\nu = hc/\lambda\), the energy lost when the electron falls from a higher orbit (value of \(n\)) into a lower one. Finally, as a crowning triumph, Bohr derived an expression giving the radius of the nth orbit for the electron in hydrogen as \[ r_n = \dfrac{n^2 h^2 \varepsilon_0}{\pi m e^2} \] Substitution of the observed values of the electron mass and electron charge into this equation yielded a value of 0.529E–10 m for the radius of the first orbit, a value that corresponds to the radius of the hydrogen atom obtained experimentally from the kinetic theory of gases. Bohr was also able to derive a formula giving the value of the Rydberg constant, and thus in effect predict the entire emission spectrum of the hydrogen atom. There were two kinds of difficulties. First, there was the practical limitation that it only works for atoms that have one electron -- that is, for H, He+, Li2+, etc.
The second problem was that Bohr was unable to provide any theoretical justification for his assumption that electrons in orbits described by the preceding equation would not lose energy by radiation. This reflects the fundamental underlying difficulty: because de Broglie's picture of matter waves would not come until a decade later, Bohr had to regard the electron as a classical particle traversing a definite orbital path. Once it became apparent that the electron must have a wavelike character, things began to fall into place. The possible states of an electron confined to a fixed space are in many ways analogous to the allowed states of a vibrating guitar string. These states are described as standing waves that must possess integral numbers of nodes. The states of vibration of the string are described by a series of integers n = 1, 2, ... which we call the fundamental, first overtone, second overtone, etc. The energy of vibration is proportional to n². Each mode of vibration contains one more complete wave than the one below it. In exactly the same way, the mathematical function that defines the probability of finding the electron at any given location within a confined space possesses n peaks and corresponds to states in which the energy is proportional to n². The electron in a hydrogen atom is bound to the nucleus by the spherically symmetrical electrostatic field of the nucleus, and should therefore exhibit a similar kind of wave behavior. This is most easily visualized in a two-dimensional cross section that corresponds to the conventional electron orbit. But if the particle picture is replaced by de Broglie's probability wave, this wave must follow a circular path, and, most important of all, its wavelength (and consequently its energy) is restricted so that an integral number n = 1, 2, ... of wavelengths fits around the circumference:

\[2\pi r = n\lambda\]

for otherwise the wave would collapse owing to self-interference. 
That is, the energy of the electron must be quantized; what Bohr had taken as a daring but arbitrary assumption was now seen as a fundamental requirement. Indeed the above equation can be derived very simply by combining Bohr's quantum condition 2πrmv = nh with the expression λ = h/mv for the de Broglie wavelength of a particle. Viewing the electron as a standing-wave pattern also explains its failure to lose energy by radiating. Classical theory predicts that an accelerating electric charge will act as a radio transmitter; an electron traveling around a circular wire would certainly act in this way, and so would one rotating in an orbit around the nucleus. In a standing wave, however, the charge is distributed over space in a regular and unchanging way; there is no motion of the charge itself, and thus no radiation. Because the classical view of an electron as a localizable particle is now seen to be untenable, so is the concept of a definite trajectory, or "orbit". Instead, we now use the word orbital to describe the state of existence of an electron. An orbital is really no more than a mathematical function describing the standing wave that gives the probability of the electron manifesting itself at any given location in space. More commonly (and loosely) we use the word to describe the region of space in which an electron is likely to be found. Each kind of orbital is characterized by a set of quantum numbers n, l, and m. These relate, respectively, to the average distance of the electron from the nucleus, to the shape of the orbital, and to its orientation in space. In its lowest state in the hydrogen atom (in which l = 0) the electron has zero angular momentum, so electrons in s orbitals are not in motion. In orbitals for which l > 0 the electron does have an effective angular momentum, and since the electron also has a definite rest mass, m = 9.11×10⁻³¹ kg, it must possess an effective velocity. 
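The consistency between Bohr's condition and de Broglie's relation can be demonstrated numerically: each allowed orbit holds exactly n de Broglie wavelengths around its circumference. A sketch assuming the standard Bohr-model expressions for the orbit radius and speed (the helper names are mine):

```python
import math

h, eps0, m_e, e = 6.626e-34, 8.854e-12, 9.109e-31, 1.602e-19

def orbit_radius(n):
    """r_n = n^2 h^2 eps0 / (pi m e^2), from the Bohr model."""
    return n**2 * h**2 * eps0 / (math.pi * m_e * e**2)

def orbit_speed(n):
    """v_n = e^2 / (2 eps0 n h), the Bohr-model orbital speed."""
    return e**2 / (2 * eps0 * n * h)

def de_broglie(v):
    """lambda = h / (m v) for an electron moving at speed v."""
    return h / (m_e * v)

for n in (1, 2, 3):
    waves = 2 * math.pi * orbit_radius(n) / de_broglie(orbit_speed(n))
    print(n, round(waves, 6))   # each circumference holds exactly n wavelengths
```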
Its value can be estimated from the uncertainty principle: if the region in which the electron is confined is about 10⁻¹⁰ m across, then the uncertainty in its momentum is at least h/(10⁻¹⁰ m) = 6.6×10⁻²⁴ kg m s⁻¹, which implies an effective velocity of around 10⁷ m s⁻¹, a few percent of the velocity of light. The stronger the electrostatic force of attraction by the nucleus, the faster the effective electron velocity. In fact, the innermost electrons of the heavier elements have effective velocities so high that relativistic effects set in; that is, the effective mass of the electron significantly exceeds its rest mass. This has direct chemical effects; it is the cause, for example, of the low melting point of metallic mercury and of the color of gold. The negatively-charged electron is attracted to the positive charge of the nucleus. What prevents it from falling in? This question can be answered in various ways at various levels. All start with the statement that the electron, being a quantum particle, has a dual character and cannot be treated solely by the laws of Newtonian mechanics. We saw above that in its wavelike guise, the electron exists as a standing wave which must circle the nucleus at a sufficient distance to allow at least one wavelength to fit on its circumference. This means that the smaller the radius of the circle, the shorter must be the wavelength of the electron, and thus the higher the energy. Thus it ends up "costing" the electron energy if it gets too close to the nucleus. The normal orbital radius represents the balance between the electrostatic force trying to pull the electron in, and what we might call the "confinement energy" that opposes the electrostatic energy. This confinement energy can be related to both the particle and wave character of the electron. 
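The uncertainty-principle estimate above is a one-line calculation; this short sketch reproduces the numbers quoted in the text (the confinement distance of 10⁻¹⁰ m is the atomic-scale assumption stated there):

```python
h = 6.626e-34      # Planck constant, J s
m_e = 9.109e-31    # electron rest mass, kg
c = 3.0e8          # speed of light, m/s

dx = 1e-10         # confinement distance, roughly the size of an atom, m
dp = h / dx        # minimum momentum uncertainty, ~6.6e-24 kg m/s
v = dp / m_e       # implied effective speed, ~7e6 m/s

print(dp, v, v / c)   # v comes out to a few percent of the speed of light
```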
If the electron as a particle were to approach the nucleus, the uncertainty in its position would become so small (owing to the very small volume of space close to the nucleus) that the momentum, and therefore the energy, would have to become very large. The electron would, in effect, be "kicked out" of the nuclear region by the confinement energy. The standing-wave patterns of an electron in a box can be calculated quite easily. For a spherical enclosure of diameter d, the energy is given by

\[E_n = \dfrac{n^2 h^2}{8md^2} \qquad (n = 1, 2, 3, \ldots)\]

Each electron in an atom has associated with it a magnetic field whose direction is quantized; there are only two possible values, which point in opposite directions. We usually refer to these as "up" and "down", but the actual directions are parallel and antiparallel to the local magnetic field associated with the orbital motion of the electron. The term spin implies that this magnetic moment is produced by the electron charge as the electron rotates about its own axis. Although this conveys a vivid mental picture of the source of the magnetism, the electron is not an extended body and its rotation is meaningless. Electron spin has no classical counterpart and no simple explanation; the magnetic moment is a consequence of relativistic shifts in local space and time due to the high effective velocity of the electron in the atom. This effect was predicted theoretically by P.A.M. Dirac in 1928.
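The particle-in-a-box energy formula quoted above, E = n²h²/(8md²), can be evaluated for an electron confined to an atom-sized enclosure; this illustrative sketch (the diameter of 10⁻¹⁰ m is my assumption for "atomic scale") shows that the confinement energy is tens of electron volts:

```python
h, m_e = 6.626e-34, 9.109e-31   # Planck constant (J s), electron mass (kg)
eV = 1.602e-19                  # joules per electron volt

def box_energy(n, d):
    """E_n = n^2 h^2 / (8 m d^2) for an electron confined to diameter d (m)."""
    return n**2 * h**2 / (8 * m_e * d**2)

E1 = box_energy(1, 1e-10)
print(E1, E1 / eV)   # ~6e-18 J, i.e. tens of eV for atomic-scale confinement
```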
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Reactions/Named_Reactions/Named_Reagents
( )-(−)-1-Methyl-3,3-diphenylhexahydropyrrolo[1,2-c][1,3,2]oxazaborole, 112022-81-8
(3a ,4 ,5 ,6a )-(+)-Hexahydro-5-hydroxy-4-(hydroxymethyl)-2H-cyclopenta[b]furan-2-one, 76704-05-7
( )-tert-Butyl 4-formyl-2,2-dimethyloxazolidine-3-carboxylate, 95715-87-0
( )-7a-Methyl-2,3,7,7a-tetrahydro-1H-indene-1,5(6H)-dione, 17553-89-8
( )-3,3,3-Trifluoro-2-methoxy-2-phenylpropanoic acid, 17257-71-5
Methyl ( )-(+)-3-hydroxy-2-methylpropionate, 72657-23-9
(−)-(5 )-2,8-Dimethyl-6,12-dihydro-5,11-methanodibenzo[b,f][1,5]diazocine, 14645-24-0
( )-(−)-8a-Methyl-3,4,8,8a-tetrahydro-2H-naphthalene-1,6-dione, 100348-93-4
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/06%3A_Gases/6.2%3A_The_Simple_Gas_Laws
Early scientists explored the relationships among the pressure of a gas (\(P\)) and its temperature (\(T\)), volume (\(V\)), and amount (\(n\)) by holding two of the four variables constant (amount and temperature, for example), varying a third (such as pressure), and measuring the effect of the change on the fourth (in this case, volume). The history of their discoveries provides several excellent examples of the scientific method. As the pressure on a gas increases, the volume of the gas decreases because the gas particles are forced closer together. Conversely, as the pressure on a gas decreases, the gas volume increases because the gas particles can now move farther apart. Weather balloons get larger as they rise through the atmosphere to regions of lower pressure because the volume of the gas has increased; that is, the atmospheric gas exerts less pressure on the surface of the balloon, so the interior gas expands until the internal and external pressures are equal. The Irish chemist Robert Boyle (1627–1691) carried out some of the earliest experiments that determined the quantitative relationship between the pressure and the volume of a gas. Boyle used a J-shaped tube partially filled with mercury, as shown in Figure \(\PageIndex{1}\). In these experiments, a small amount of a gas or air is trapped above the mercury column, and its volume is measured at atmospheric pressure and constant temperature. More mercury is then poured into the open arm to increase the pressure on the gas sample. The pressure on the gas is atmospheric pressure plus the difference in the heights of the mercury columns, and the resulting volume is measured. This process is repeated until either there is no more room in the open arm or the volume of the gas is too small to be measured accurately. Data such as those from one of Boyle's own experiments may be plotted in several ways (Figure \(\PageIndex{2}\)). 
A simple plot of \(V\) versus \(P\) gives a curve called a hyperbola and reveals an inverse relationship between pressure and volume: as the pressure is doubled, the volume decreases by a factor of two. This relationship between the two quantities is described as follows: \[PV = \rm constant \label{6.2.1}\] Dividing both sides by \(P\) gives an equation illustrating the inverse relationship between \(P\) and \(V\): \[V = \dfrac{\rm constant}{P} \label{6.2.2}\] or \[V \propto \dfrac{1}{P} \label{6.2.3}\] where the ∝ symbol is read "is proportional to." A plot of \(V\) versus \(1/P\) is thus a straight line whose slope is equal to the constant in Equations \ref{6.2.1} and \ref{6.2.3}. Dividing both sides of Equation \ref{6.2.1} by \(V\) instead of \(P\) gives a similar relationship between \(P\) and \(1/V\). The numerical value of the constant depends on the amount of gas used in the experiment and on the temperature at which the experiments are carried out. This relationship between pressure and volume is known as Boyle's law, after its discoverer, and can be stated as follows: at constant temperature, the volume of a fixed amount of a gas is inversely proportional to its pressure. Hot air rises, which is why hot-air balloons ascend through the atmosphere and why warm air collects near the ceiling and cooler air collects at ground level. Because of this behavior, heating registers are placed on or near the floor, and vents for air-conditioning are placed on or near the ceiling. The fundamental reason for this behavior is that gases expand when they are heated. Because the same amount of substance now occupies a greater volume, hot air is less dense than cold air. The substance with the lower density—in this case hot air—rises through the substance with the higher density, the cooler air. The first experiments to quantify the relationship between the temperature and the volume of a gas were carried out in 1783 by an avid balloonist, the French chemist Jacques Alexandre César Charles (1746–1823). 
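Boyle's law, \(PV = \rm constant\), reduces to a one-line calculation; a minimal sketch (the function name is a hypothetical helper, and pressure/volume may be in any consistent units):

```python
def boyle_volume(p1, v1, p2):
    """V2 from P1 V1 = P2 V2, at fixed temperature and amount of gas."""
    return p1 * v1 / p2

v = boyle_volume(1.0, 10.0, 2.0)  # doubling the pressure...
print(v)                          # ...halves the volume: 5.0
```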
Charles's initial experiments showed that a plot of the volume of a given sample of gas versus temperature (in degrees Celsius) at constant pressure is a straight line. Similar but more precise studies were carried out by another balloon enthusiast, the Frenchman Joseph-Louis Gay-Lussac (1778–1850), who showed that a plot of V versus T was a straight line that could be extrapolated to a point at zero volume, a theoretical condition now known to correspond to −273.15°C (Figure \(\PageIndex{3}\)). A sample of gas cannot really have a volume of zero because any sample of matter must have some volume. Furthermore, at 1 atm pressure all gases liquefy at temperatures well above −273.15°C. Note from part (a) in Figure \(\PageIndex{3}\) that the slope of the plot of V versus T varies for the same gas at different pressures but that the intercept remains constant at −273.15°C. Similarly, as shown in part (b) in Figure \(\PageIndex{3}\), plots of V versus T for different amounts of varied gases are straight lines with different slopes but the same intercept on the T axis. The significance of the invariant T intercept in plots of V versus T was recognized in 1848 by the British physicist William Thomson (1824–1907), later Lord Kelvin. He postulated that −273.15°C was the lowest possible temperature that could theoretically be achieved, for which he coined the term absolute zero (0 K). We can state Charles's and Gay-Lussac's findings in simple terms: At constant pressure, the volume of a fixed amount of gas is directly proportional to its absolute temperature (in kelvins). This relationship, illustrated in part (b) in Figure \(\PageIndex{3}\), is often referred to as Charles's law and is stated mathematically as \[V ={\rm const.}\; T \label{6.2.4}\] or \[V \propto T \label{6.2.5}\] with \(T\) expressed in kelvins, not in degrees Celsius. Charles's law is valid for virtually all gases at temperatures well above their boiling points. 
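The extrapolation that Gay-Lussac and Kelvin relied on can be reproduced with two points on a V-versus-T line. A sketch using hypothetical data consistent with Charles's law (the volumes below are invented for illustration, chosen so that \(V \propto T + 273.15\)):

```python
def zero_volume_temp(t1, v1, t2, v2):
    """Extrapolate the straight V-vs-T (Celsius) line through two points to V = 0."""
    slope = (v2 - v1) / (t2 - t1)
    return t1 - v1 / slope

# hypothetical (T in Celsius, V in litres) data lying on a Charles's-law line
t = zero_volume_temp(0.0, 22.41, 100.0, 30.614)
print(round(t, 1))   # close to -273.15 C, i.e. absolute zero
```

The same intercept appears for any amount of gas or pressure, which is exactly the "invariant T intercept" described above.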
We can demonstrate the relationship between the volume and the amount of a gas by filling a balloon; as we add more gas, the balloon gets larger. The specific quantitative relationship was discovered by the Italian chemist Amedeo Avogadro, who recognized the importance of Gay-Lussac's work on combining volumes of gases. In 1811, Avogadro postulated that, at the same temperature and pressure, equal volumes of gases contain the same number of gaseous particles (Figure \(\PageIndex{4}\)). This is the historic "Avogadro's hypothesis." A logical corollary to Avogadro's hypothesis (sometimes called Avogadro's law) describes the relationship between the volume and the amount of a gas: at constant temperature and pressure, the volume of a sample of gas is directly proportional to the number of moles of gas in the sample. Stated mathematically, \[V = {\rm const.}\; n\] or \[V \propto n\] This relationship is valid for most gases at relatively low pressures, but deviations from strict linearity are observed at elevated pressures. For a sample of gas, \(V\) increases as \(n\) increases. The relationships among the volume of a gas and its pressure, temperature, and amount are summarized in Figure \(\PageIndex{5}\): volume increases with increasing temperature or amount but decreases with increasing pressure. Boyle showed that the volume of a sample of a gas is inversely proportional to its pressure (Boyle's law), Charles and Gay-Lussac demonstrated that the volume of a gas is directly proportional to its temperature (in kelvins) at constant pressure (Charles's law), and Avogadro postulated that the volume of a gas is directly proportional to the number of moles of gas present (Avogadro's law). Plots of the volume of gases versus temperature extrapolate to zero volume at −273.15°C, which is absolute zero, the lowest temperature possible. Charles's law implies that the volume of a gas is directly proportional to its absolute temperature.
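The three proportionalities summarized above can be combined into one scaling rule, V ∝ nT/P. A minimal sketch (the helper name is hypothetical; T must be in kelvins):

```python
def scale_volume(v1, p1, t1, n1, p2, t2, n2):
    """Scale a gas volume using the three simple gas laws:
    V is proportional to n and T (kelvin), and inversely proportional to P."""
    return v1 * (n2 / n1) * (t2 / t1) * (p1 / p2)

# doubling T (in kelvins) and halving P quadruples V; amount unchanged
print(scale_volume(1.0, 1.0, 300.0, 1.0, 0.5, 600.0, 1.0))   # 4.0
```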
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Supplemental_Modules_(Environmental_Chemistry)/Atmospheric_Chemistry/Ozone
Most of the ozone in the atmosphere is in the stratosphere, with about 8% in the lower troposphere. The ozone is formed by photochemical reactions. The ozone level is measured in Dobson units (DU), named after G.M.B. Dobson, who investigated ozone between 1920 and 1960. One Dobson unit (DU) is defined as a 0.01 mm thickness of ozone at STP when all the ozone in the air column above an area is collected and spread over the entire area. Thus, 100 DU corresponds to a layer 1 mm thick. In the electromagnetic radiation spectrum, the region beyond the violet (wavelength ~400 nanometers, nm), invisible to the eye, is called ultraviolet (UV). Its wavelength is shorter than 400 nm. UV is divided into three regions: UV-A (320–400 nm), UV-B (280–320 nm), and UV-C (shorter than 280 nm). Obviously, photons of UV-C are the most energetic. UV-A radiation is needed by humans for the synthesis of vitamin D; however, too much UV-A causes photoaging (toughening of the skin), suppression of the immune system and, to a lesser degree, reddening of the skin and cataract formation. Ozone strongly absorbs UV-B and UV-C, but the absorption decreases as the wavelength increases to 320 nm. Very little UV-C reaches the Earth's surface due to ozone absorption. When an oxygen molecule receives a photon (\(h\nu\)), it dissociates into reactive oxygen atoms. These atoms attack an oxygen molecule to form ozone, O3. \[\ce{O2 + h\nu \rightarrow O + O}\label{1}\] \[\ce{O2 + O \rightarrow O3} \label{2}\] The last reaction requires a third molecule to take away the energy released when \(O^{\cdot}\) combines with \(O_2\), and the reaction can be represented by \[\ce{O2 + O + M \rightarrow O3 + M*} \label{3}\] The overall reaction between oxygen and ozone formation is: \[\ce{3 O2 \rightleftharpoons 2 O3} \label{4}\] The absorption of UV-B and UV-C leads to the destruction of ozone: \[\ce{O3 + h\nu \rightarrow O + O2} \label{5}\] \[\ce{O3 + O \rightarrow 2 O2} \label{6}\] A dynamic equilibrium between ozone formation and destruction is established in these reactions. 
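The Dobson unit definition above reduces to a one-line conversion; a minimal sketch (the function name is a hypothetical helper, and the 300 DU figure in the example is only a typical mid-latitude order of magnitude, not from the text):

```python
def ozone_thickness_mm(dobson_units):
    """Thickness (mm) of the ozone column compressed to STP: 1 DU = 0.01 mm."""
    return dobson_units * 0.01

print(ozone_thickness_mm(100))   # 1.0 mm, as stated in the text
print(ozone_thickness_mm(300))   # 3.0 mm, a typical whole-column amount
```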
The ozone concentration varies due to the amount of radiation received from the sun. The enthalpy of formation of ozone is 142.7 kJ/mol. The bond energy of O2 is 498 kJ/mol. What is the average O=O bond energy of the bent ozone molecule O=O=O? The overall reaction is \[\ce{3 O2 \rightarrow 2 O3} \;\;\; \Delta H = 286\, kJ\] Note that 3 O=O bonds of oxygen are broken, and 4 O-O bonds of ozone are formed. If the average bond energy of ozone is \(E\), then \[ \begin{align*} E &= \dfrac{(3 \times 498 + 286)\, kJ}{4\, mol} \\[4pt] &= 445\, kJ/mol \end{align*} \] The ozone bonds are slightly weaker than the oxygen bonds. The average bond energy is not the bond energy for the removal of one oxygen from ozone: \[\ce{O3 + h\nu \rightarrow O + O2}\] Can the energy to remove one oxygen be estimated from the data given here? The technique used in this calculation is based on Hess's law. The bond energy of O2 is 498 kJ/mol. What is the maximum wavelength of the photon that has enough energy to break the O=O bond of oxygen? The energy per O=O bond is: \[\dfrac{498000\, J/mol}{6.022 \times 10^{23}\, bonds/mol} = 8.27 \times 10^{-19}\, J/bond\] The wavelength \(\lambda\) of the photons can be evaluated using \[E = \dfrac{h c}{\lambda}\] \[ \begin{align*} \lambda &= \dfrac{(6.626 \times 10^{-34}\, J \cdot s)(3 \times 10^8\, m/s)}{8.27 \times 10^{-19}\, J} \\[4pt] &= 2.40 \times 10^{-7} \,m = 240\, nm \end{align*} \] The visible region ranges from 400 nm to 700 nm, and radiation with a wavelength of 240 nm is in the ultraviolet region (Figure \(\PageIndex{1}\)). Visible light cannot break the O=O bond, but UV light of sufficiently short wavelength has enough energy to do so. Chemist Roy J. Plunkett discovered tetrafluoroethylene resin while researching refrigerants at DuPont. Known by its trade name, Teflon, Plunkett's discovery was found to be extremely heat-tolerant and stick-resistant. After ten years of research, Teflon was introduced in 1949. His continued research led to the use of chlorofluorocarbons, known as CFCs or Freon, as refrigerants. 
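The two worked examples above (the Hess's-law average bond energy and the photon-wavelength threshold) can be reproduced in a few lines; a sketch with rounded constants (the function name is a hypothetical helper):

```python
h = 6.626e-34        # Planck constant, J s
c = 3.0e8            # speed of light, m/s
N_A = 6.022e23       # Avogadro's number, mol^-1

def max_wavelength_nm(bond_energy_kj_per_mol):
    """Longest wavelength whose photon can break one bond of the given energy."""
    e_per_bond = bond_energy_kj_per_mol * 1000 / N_A   # J per bond
    return h * c / e_per_bond * 1e9                    # metres -> nm

print(round(max_wavelength_nm(498)))   # O=O in O2: ~240 nm, in the UV

# average O-O bond energy in ozone from the Hess's-law estimate in the text
print((3 * 498 + 286) / 4)             # 445.0 kJ/mol
```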
CFCs are made up of carbon, hydrogen, fluorine, and chlorine. DuPont used a number system to distinguish their products based on three digits. The digits are related to the molecular formulas. For example, CFC (or Freon) 123 has the formula C2HF3Cl2. The number of chlorine atoms can be deduced from the structural formula of saturated carbon chains. CFCs containing only one carbon atom per molecule have only two digits. Freon 12, used for refrigerators and automobile air conditioners, has the formula CF2Cl2. The nontoxic and nonflammable CFCs were widely used as refrigerants, in aerosol sprays, as dry cleaning liquids, as foam blowing agents, and as cleansers for electronic components in the 70s, 80s, and early 90s. In 1973, James Lovelock demonstrated that the CFCs produced up to that time had not been destroyed, but had spread globally throughout the troposphere (the report was later published: J.E. Lovelock, R.J. Maggs, and R.J. Wade, Nature, 241, 194). In the article, CFC concentrations of some parts per 10¹¹ by volume were reported, and the authors deduced that at such concentrations, CFCs are not destroyed over the years. In 1974, Mario J. Molina published an article in Nature describing the depletion of ozone by CFCs (see M.J. Molina and F.S. Rowland (1974), Nature, 249, 810). NASA later confirmed that HF was present in the stratosphere, and this compound has no natural source other than the decomposition of CFCs. Molina and Rowland suggested that the chlorine radicals from CFCs catalyze the decomposition of ozone, as discussed below. A relatively recent concern is the depletion of ozone, O3, due to the presence of chlorine in the troposphere, and eventually its migration to the stratosphere. A major source of chlorine is Freons: CFCl3 (Freon 11), CF2Cl2 (Freon 12), C2F3Cl3 (Freon 113), C2F4Cl2 (Freon 114). Freons decompose in the troposphere. 
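The numbering scheme described above is commonly summarized as the "rule of 90": adding 90 to the CFC number gives three digits counting the C, H, and F atoms, with Cl filling the remaining bonds of the saturated carbon skeleton. A sketch (the function name is a hypothetical helper):

```python
def cfc_formula(number):
    """Apply the 'rule of 90': number + 90 gives digits C, H, F;
    Cl fills the remaining bonds of a saturated (2C + 2 bond) chain."""
    digits = f"{number + 90:03d}"
    c, h, f = (int(d) for d in digits)
    cl = 2 * c + 2 - h - f
    return {"C": c, "H": h, "F": f, "Cl": cl}

print(cfc_formula(12))    # {'C': 1, 'H': 0, 'F': 2, 'Cl': 2}  -> CF2Cl2
print(cfc_formula(123))   # {'C': 2, 'H': 1, 'F': 3, 'Cl': 2}  -> C2HF3Cl2
```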
For example, \[\ce{CFCl3 \rightarrow CFCl2 + Cl}\] \[\ce{CF2Cl2 \rightarrow CF2Cl + Cl}\] The chlorine atoms catalyze the decomposition of ozone, \[\ce{Cl + O3 \rightarrow ClO + O2}\] and ClO molecules further react with O atoms generated by the photochemical decomposition of ozone: \[\ce{O3 + h\nu \rightarrow O + O2}\] \[\ce{ClO + O \rightarrow Cl + O2}\] \[\ce{O + O3 \rightarrow O2 + O2}\] The net reaction is \[\ce{2 O3 \rightarrow 3 O2}\] Thus, the use of CFCs is now a worldwide concern. In 1987, one hundred and forty-nine (149) nations signed the Montreal Protocol. They agreed to reduce the manufacturing of CFCs by half in 1998; they also agreed to phase out CFCs. Ozone depletion in the polar regions is different from other regions. The debate over ozone depletion often involves the North and South Poles. In these regions, when temperatures drop to 190 K, ice clouds form. The ice crystals act as heterogeneous catalysts converting HCl and ClONO2 into \(HNO_3\), \(Cl_2\), and HOCl: \[\ce{HCl + ClONO2 \rightarrow HNO3 + Cl2}\] \[\ce{H2O + ClONO2 \rightarrow HNO3 + HOCl}\] Both Cl2 and HOCl are easily photolyzed to Cl atoms, which catalyze the depletion of ozone, as discussed in the previous section. The U.S. and Canadian governments have banned the use of Freons in aerosol sprays, but their use in air conditioners and cooling machines continues. In order to eliminate Freon from the atmosphere, international concerted effort and determination are required. However, sound and reliable scientific information is also required. The banning of CFCs opens a research opportunity for another invention to find a substitute. Who knows what other problems the new product will bring? Define a unit you use. Describe UV radiation. Describe the formation of ozone. Explain a photodecomposition reaction. Explain the mechanism of the catalytic reaction.
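The chlorine cycle above can be checked for atom balance mechanically: each step, and the net reaction, must conserve every element. A sketch with a tiny formula parser (helper names are mine; it handles simple formulas like ClONO2 but not parentheses):

```python
import re
from collections import Counter

def atoms(formula):
    """Count atoms in a simple formula such as 'ClONO2' or 'O3' (no parentheses)."""
    counts = Counter()
    for element, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(n) if n else 1
    return counts

def balanced(reactants, products):
    """True if both sides of a reaction contain the same atoms."""
    left = sum((atoms(f) for f in reactants), Counter())
    right = sum((atoms(f) for f in products), Counter())
    return left == right

# the two steps of the chlorine catalytic cycle, and the polar-cloud step
assert balanced(["Cl", "O3"], ["ClO", "O2"])
assert balanced(["ClO", "O"], ["Cl", "O2"])
assert balanced(["HCl", "ClONO2"], ["HNO3", "Cl2"])
print("all steps balanced")
```

Note that summing the two cycle steps regenerates Cl, which is what makes the destruction catalytic.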
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Fermentation_in_Food_Chemistry/01%3A_Modules/1.06%3A_Acetic_Acid_Fermentation
The first description of microbial vinegar fermentation was made by Pasteur in 1862. He recognized that vinegar was produced by a living organism. Acetic acid bacteria (AAB), such as the genus Acetobacter, are a group of Gram-negative bacteria which oxidize sugars or ethanol and produce acetic acid during fermentation. There are several different genera in the family Acetobacteraceae. AAB are found in sugary, alcoholic, and acidic niches such as fruits, flowers, and particularly fermented beverages. Given sufficient oxygen, these bacteria produce acetic acid (vinegar) from ethanol. Several species of acetic acid bacteria are used in industry for the production of certain foods and chemicals. Commonly used feeds include apple cider, wine, and fermented grain mashes. AAB are also involved in the production of other foods such as cocoa powder and kombucha. However, they can also be considered spoilage organisms. List 2-3 places/times that acetic acid bacteria would be considered spoilage organisms. AAB make acetic acid by two successive catalytic reactions of alcohol dehydrogenase (ADH) and a membrane-bound aldehyde dehydrogenase (ALDH), both bound to the periplasmic side of the cytoplasmic membrane. Ethanol, acetaldehyde, and acetic acid can be quite toxic for living organisms. However, AAB are able to live in both alcoholic and acidic media because of a few adaptations. AAB are able to oxidize ethanol to acetic acid using membrane-bound ADH and ALDH complexes with a PQQ cofactor. This enzyme is capable of oxidizing a few primary alcohols (C2 to C6) but not methanol or secondary alcohols. Add a curved arrow mechanism for the oxidation of ethanol to acetaldehyde using this PQQ cofactor. How many electrons are transferred from the ethanol molecule to the PQQ in this step? In the second step, acetaldehyde forms a hydrate. Show the mechanism for this step. The acetaldehyde hydrate then reacts with another PQQ to form acetic acid. Propose a curved arrow mechanism for this transformation. 
The electrons are transferred to ubiquinone (UQ), which is tightly linked to the respiratory chain (oxidative phosphorylation). Some Acetobacter and Gluconacetobacter strains can metabolize acetic acid to carbon dioxide and water using Krebs cycle enzymes. In vinegar, for instance, Acetobacter species exhibit a biphasic growth curve, where the first phase corresponds to EtOH oxidation with AcOH production. The second spike in growth is due to 'overoxidation', wherein the bacteria move the ethanol and/or acetic acid into the cytoplasm to metabolize it using the TCA cycle and oxidative phosphorylation. The overall chemical reaction facilitated by these bacteria is: \[\ce{C2H5OH + O2 → CH3CHO → CH3COOH + H2O} \nonumber\] Propose a mechanism for the conversion of ethanol to acetaldehyde (reverse of the reduction done by yeast) utilizing NAD⁺. In the second step, acetaldehyde forms a hydrate which is then converted to acetic acid. Propose a mechanism for the conversion of acetaldehyde to acetic acid utilizing NAD⁺. In the third step, acetic acid is converted to acetyl CoA for use in the TCA cycle. Propose the missing biological 'reagents' for this conversion.
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Chemistry_of_Cooking_(Rodriguez-Velazquez)/08%3A_Chocolate/8.03%3A_Couverture
The usual term for top-quality chocolate is couverture. Couverture chocolate is a very high-quality chocolate that contains extra cocoa butter. The higher percentage of cocoa butter, combined with proper tempering, gives the chocolate more sheen, a firmer "snap" when broken, and a creamy, mellow flavor. Dark, milk, and white chocolate can all be made as couvertures. The total percentage cited on many brands of chocolate is based on the combined amounts of cocoa butter and cocoa liquor. In order to be labelled as couverture under European Union regulations, the product must contain not less than 35% total dry cocoa solids, including not less than 31% cocoa butter and not less than 2.5% dry non-fat cocoa solids. Couverture is used by professionals for dipping, coating, moulding, and garnishing. What the percentages don't tell you is the proportion of cocoa butter to cocoa solids. You can, however, refer to the nutrition label or company information to find the amounts of each. All things being equal, the chocolate with the higher fat content will be the one with more cocoa butter, which contributes to both flavor and mouthfeel. This will also typically be the more expensive chocolate, because cocoa butter is more valuable than cocoa liquor. But keep in mind that just because two chocolates from different manufacturers have the same percentages, they are not necessarily equal. They could have dramatically differing amounts of cocoa butter and liquor, and dissimilar flavors, and substituting one for the other can have negative effects on your recipe. Determining the amounts of cocoa butter and cocoa liquor will allow you to make informed decisions when choosing chocolate.
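The EU thresholds quoted above amount to three simple comparisons; a minimal sketch (the function name is a hypothetical helper, and all figures are percent by weight as in the text):

```python
def meets_couverture_spec(total_dry_cocoa, cocoa_butter, nonfat_cocoa):
    """EU thresholds from the text: >= 35% total dry cocoa solids,
    >= 31% cocoa butter, >= 2.5% dry non-fat cocoa solids."""
    return (total_dry_cocoa >= 35.0
            and cocoa_butter >= 31.0
            and nonfat_cocoa >= 2.5)

print(meets_couverture_spec(58.0, 36.0, 22.0))   # True
print(meets_couverture_spec(40.0, 28.0, 12.0))   # False: too little cocoa butter
```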
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/04%3A_Molecules_to_Medicines/4.03%3A_Rush_Delivery
Finding new medicines and cost-effective ways to manufacture them is only half the battle. An enormous challenge for pharmacologists is figuring out how to get drugs to the right place, a task known as drug delivery. Ideally, a drug should enter the body, go directly to the diseased site while bypassing healthy tissue, do its job, and then disappear. Unfortunately, this rarely happens with the typical methods of delivering drugs: swallowing and injection. When swallowed, many medicines made of protein are never absorbed into the bloodstream because they are quickly chewed up by enzymes as they pass through the digestive system. If the drug does get to the blood from the intestines, it falls prey to liver enzymes. For doctors prescribing such drugs, this means that several doses of an oral drug are needed before enough makes it to the blood. Drug injections also cause problems, because they are expensive, difficult for patients to self-administer, and are unwieldy if the drug must be taken daily. Both methods of administration also result in fluctuating levels of the drug in the blood, which is inefficient and can be dangerous. What to do? Pharmacologists can work around the first-pass effect by delivering medicines via the skin, nose, and lungs. Each of these methods bypasses the intestinal tract and can increase the amount of drug getting to the desired site of action in the body. Slow, steady drug delivery directly to the bloodstream—without stopping at the liver first—is the primary benefit of skin patches, which makes this form of drug delivery particularly useful when a chemical must be administered over a long period. Hormones such as testosterone, progesterone, and estrogen are available as skin patches. These forms of medicines enter the blood via a meshwork of small arteries, veins, and capillaries in the skin. Researchers also have developed skin patches for a wide variety of other drugs. 
Some of these include Duragesic (a prescription-only pain medicine), Transderm Scop (a motion-sickness drug), and Transderm Nitro (a blood vessel-widening drug used to treat chest pain associated with heart disease). Despite their advantages, however, skin patches have a significant drawback: only very small drug molecules can get into the body through the skin. Inhaling drugs through the nose or mouth is another way to rapidly deliver drugs and bypass the liver. Inhalers have been a mainstay of asthma therapy for years, and doctors prescribe nasal steroid drugs for allergy and sinus problems. Researchers are investigating insulin powders that can be inhaled by people with diabetes who rely on insulin to control their blood sugar daily. This still-experimental technology stems from novel uses of chemistry and engineering to manufacture insulin particles of just the right size. Too large, and the insulin particles could lodge in the lungs; too small, and the particles will be exhaled. If clinical trials with inhaled insulin prove that it is safe and effective, then this therapy could make life much easier for people with diabetes. Scientists try hard to listen to the noisy, garbled "discussions" that take place inside and between cells. Less than a decade ago, scientists identified one very important cellular communication stream called MAP (mitogen-activated protein) kinase signaling. Today, molecular pharmacologists such as Melanie H. Cobb of the University of Texas Southwestern Medical Center at Dallas are studying how MAP kinase signaling pathways malfunction in unhealthy cells. Kinases are enzymes that add phosphate groups to proteins, assigning the proteins a code. In this reaction, an intermediate molecule called ATP (adenosine triphosphate) donates a phosphate group from itself, becoming ADP (adenosine diphosphate). Some of the interactions between proteins in these pathways involve adding and taking away tiny molecular labels called phosphate groups. 
Kinases are the enzymes that add phosphate groups to proteins, and this process is called phosphorylation. Marking proteins in this way assigns the proteins a code, instructing the cell to do something, such as divide or grow. The body employs many, many signaling pathways involving hundreds of different kinase enzymes. Some of the important functions performed by MAP kinase pathways include instructing immature cells how to "grow up" to be specialized cell types like muscle cells, helping cells in the pancreas respond to the hormone insulin, and even telling cells how to die. Since MAP kinase pathways are key to so many important cell processes, researchers consider them good targets for drugs. Clinical trials are under way to test various molecules that, in animal studies, can effectively lock up MAP kinase signaling when it's not wanted, for example, in cancer and in diseases involving an overactive immune system, such as arthritis. Researchers predict that if drugs to block MAP kinase signaling prove effective in people, they will likely be used in combination with other medicines that treat a variety of health conditions, since many diseases are probably caused by simultaneous errors in multiple signaling pathways. Proteins that snake through membranes help transport molecules into cells.
This is just the beginning of energy production. NADH and FADH2 can be converted to more ATP. Oxidative phosphorylation is a metabolic pathway that transfers energy from NADH to the synthesis of ATP in the mitochondria. Electrons stored in the form of the reduced coenzymes, NADH or FADH2, are passed through a chain of proteins and coenzymes to reduce O2 – the terminal electron acceptor – into H2O. The energy released by electrons flowing through this electron transport chain is used to transport protons to generate a pH gradient across the membrane. Glucose is metabolized to produce energy (ATP) for the cell with the release of CO2 and H2O as byproducts. Glycolysis is a series of enzyme-catalyzed reactions that break glucose into 2 equivalents of pyruvate. This process (summarized below) is also called the Embden-Meyerhof pathway. Assume all reactions take place within an enzyme. Glucose is first phosphorylated at the hydroxyl group on C6 by reaction with ATP. Glucose-6-phosphate is isomerized to fructose-6-phosphate in the next step. The glucose-fructose interconversion is a multistep process whose details are not yet fully understood. It begins with opening of the hemiacetal to an open-chain aldehyde. The open-chain aldehyde undergoes keto-enol tautomerization to the enediol, which is further tautomerized to a different keto form. Cyclization of the open-chain hydroxy ketone gives fructose (hemiacetal). Fructose-6-phosphate is then converted to fructose 1,6-bisphosphate, which is subsequently cleaved into two three-carbon compounds through a retro-aldol reaction. (In an aldol reaction driven toward starting materials – the retro-aldol direction – the starting materials are favored.) This mechanism actually proceeds through an imine: fructose 1,6-bisphosphate first reacts with the amino group of a lysine residue from an enzyme. The imine can then do a 'retro-Stork enamine' reaction (similar to the retro-aldol).
Stork enamine reaction (an aldol with the enamine replacing the enolate anion as the nucleophile). If the reaction is driven to starting materials (retro-Stork enamine), then the reaction will favor the enamine and aldol starting materials. The products of the retro-Stork enamine are the enamine of dihydroxyacetone phosphate and glyceraldehyde 3-phosphate (shown below). Glyceraldehyde 3-phosphate is oxidized and phosphorylated to 1,3-bisphosphoglycerate. Phosphoglycerate kinase catalyzes the transfer of a phosphoryl group from 1,3-bisphosphoglycerate to ADP, forming ATP and 3-phosphoglycerate. 3-Phosphoglycerate is converted to phosphoenolpyruvate (PEP) through dehydration and dephosphorylation. In the last step of the metabolic breakdown of sugars (glycolysis), an enol phosphate is converted to pyruvic acid (shown below). The pyruvic acid is then converted to acetyl CoA, which is the beginning of the TCA cycle. Citryl CoA is then hydrolyzed to citrate. Isocitrate is oxidized to oxalosuccinate with NAD+. α-Ketoglutarate is transformed to succinyl CoA in a multistep process analogous to the transformation of pyruvate to acetyl CoA that we saw in the first step. Succinyl CoA is hydrolyzed to succinate, and this step is coupled with the phosphorylation of guanosine diphosphate (GDP) to give guanosine triphosphate (GTP). Complex I is located in the inner mitochondrial membrane in eukaryotes. The electrons from NADH (produced in the TCA cycle) begin to be shuttled through small steps to capture the energy. This section will examine the mechanisms of electron transfer by the peripheral domain, proton transfer by the membrane domain, and how their coupling can drive proton transport. The net reaction of Complex I is the oxidation of NADH and the reduction of ubiquinone. Net reaction: \[\ce{NADH + H^+ + UQ \rightarrow NAD^+ + UQH2}\] Complex II (aka succinate dehydrogenase from the TCA cycle) oxidizes succinate (-O2CCH2CH2CO2-) to fumarate (-O2CCH=CHCO2-).
Complex II also has a cascade of electron transfers. When succinate is converted to fumarate, the electrons are passed through a new cascade to eventually reduce UQ (just like Complex I!) \[\ce{succinate \rightarrow fumarate + 2H+ + 2e-}\] \[\ce{UQ + 2H+ + 2e- \rightarrow UQH2}\] Complex III (sometimes called the cytochrome bc1 complex) has two main substrates: cytochrome c and UQH2. The structure of this complex was determined by Johann Deisenhofer (Nobel Prize for a photosynthetic reaction center – we will see this soon). The role of complex III is to transfer the electrons from UQH2 to cytochrome c. ___ UQH2 + 1 UQ + 2 H+ + ___ cyt c \(\ce{\rightarrow}\) ___ UQH2 + ___ UQ + 4 H+ + ___ cyt c • There are two H+ coming from the mitochondrial matrix but _____ H+ are transported into the inter-membrane space. Another complex whose goal is to move electrons and protons! This is the big step, since it is the main site for dioxygen utilization in aerobic organisms. The structure of complex IV is shown in the left figure and to the right in a diagram taken from the Kegg pathways (with permission). ___ cyt c + 1 O2 + 8 H+ \(\ce{\rightarrow}\) ___ H2O + 4 H+ + ___ cyt c Neglecting Complex II, the overall reaction of the mitochondrial chain, per 2 e- transferred, can be written as: \[\ce{NADH + H+ + 1/2 O2 + 10 H+ (in) \rightarrow NAD+ + H2O + 10 H+ (out)} \quad E^\circ = +1.135\ \mathrm{V}\] Each two e- (from 1 NADH molecule) passed through the electron transport chain results in the net transfer of 10 protons across the membrane. Protons will diffuse from an area of high proton concentration to an area of lower proton concentration. Peter Mitchell received the Nobel Prize in 1978 for his proposal that an electrochemical concentration gradient of protons across a membrane could be harnessed to make ATP. The proton gradient created by the electron transport chain provides enough energy to synthesize about 2.5 molecules of ATP through a process called chemiosmosis.
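The energy made available by the chain can be estimated from the overall potential with ΔG = −nFE. A minimal sketch, assuming the standard ~30.5 kJ/mol cost of ATP synthesis (a textbook value not stated in this section):

```python
# Sketch: free energy captured by the electron transport chain per two
# electrons (one NADH), from dG = -nFE using the overall E given above.
# The ~30.5 kJ/mol cost of making ATP is a standard textbook value, not
# stated in this section.
F = 96485                 # Faraday constant, C/mol
n = 2                     # electrons transferred per NADH
E = 1.135                 # overall potential of the chain, V

dG = -n * F * E           # J per mol NADH; negative = energy released
print(f"{dG / 1000:.0f} kJ/mol")                # about -219 kJ/mol

atp_cost = 30.5           # kJ/mol for ADP + Pi -> ATP (standard value)
print(f"max ATP: {-dG / 1000 / atp_cost:.1f}")  # ~7 in theory; ~2.5 realized
```

The gap between the theoretical maximum and the ~2.5 ATP actually made reflects proton leakage and the proton stoichiometry of the synthase.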
ATP synthase is an important enzyme that utilizes the proton gradient to drive the synthesis of ATP. The rotor is not locked in a fixed position in the center of the bilayer, and the rotor sites switch between the empty and the ion-bound states. When driving ATP synthesis, an ion arrives from the periplasm and binds at an empty rotor site. The positive stator charge (Arg) plays a fundamental role in the function of the F0 motor. During ATP synthesis, the ____________ gradient fuels the membrane-embedded F0 motor to rotate the central stalk. This rotation causes sequential binding changes at the peripheral F1 domain so that one catalytic site binds ________ and phosphate, the second makes tightly bound ATP, and the third step ____________. In anaerobically growing bacteria, when the respiratory enzymes are not active, the F1 motor can hydrolyze ATP. Dimroth, Operation of the F0 motor of the ATP synthase, 374-386.
In addition to nuclear fission, a second possible method for obtaining energy from nuclear reactions lies in the fusing together of two light nuclei to form a heavier nucleus. As we see when discussing nuclear binding energy, such a process results in nucleons which are more firmly bonded to each other, and hence lower in potential energy. This is particularly true if \({}_{\text{2}}^{\text{4}}\text{He}\) is formed, because this nucleus is very stable. Such a reaction occurs between the nuclei of the two heavy isotopes of hydrogen, deuterium and tritium: \[{}_{\text{1}}^{\text{2}}\text{D + }{}_{\text{1}}^{\text{3}}\text{T }\to \text{ }{}_{\text{2}}^{\text{4}}\text{He + }{}_{\text{0}}^{\text{1}}n \label{1} \] For this reaction, Δm = –0.018 88 g mol⁻¹, so that ΔE = –1700 GJ mol⁻¹. Although very large quantities of energy are released by a reaction like Equation \(\ref{1}\), such a reaction is very difficult to achieve in practice. This is because of the very high activation energy, about 30 GJ mol⁻¹, which must be overcome to bring the nuclei close enough to fuse together. This barrier is created by coulombic repulsion between the positively charged nuclei. The only place where scientists have succeeded in producing fusion reactions on a large scale is in a hydrogen bomb. Here, the necessary activation energy is achieved by exploding a fission bomb to heat the reactants to a temperature of about 10⁸ K. Attempts to carry out fusion in a more controlled way have met only limited success. At the very high temperatures required, all molecules dissociate and most atoms ionize. A new state of matter called a plasma is formed. It is neither solid, liquid, nor gas. Plasma behaves much like the universal solvent of the alchemists by converting any solid material that it contacts into vapor. Two techniques for producing a controlled fusion reaction are currently being explored. The first is to restrict the plasma by means of a strong magnetic field, rather than the walls of a container.
This has met some success, but has not yet been able to contain a plasma long enough for usable energy to be obtained. The second technique involves the sudden compression and heating of pellets of deuterium and tritium by means of a sharply focused laser beam. Again, only limited success has been obtained. Though these attempts at a controlled fusion reaction have so far been only partially successful, they are nevertheless worth pursuing. Because of the much readier availability of lighter isotopes necessary for fusion, as opposed to the much rarer heavier isotopes required for fission, controlled nuclear fusion would offer the human race an essentially limitless supply of energy. There would still be some environmental difficulties with the production of isotopes such as tritium, but these would be nowhere near the seriousness of the problem caused by the production of the witches' brew of radioactive isotopes in a fission reactor. It must be confessed, though, that at the present rate of progress, the prospect of limitless clean energy from fusion seems unlikely in the next decade or two.
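The 1700 GJ per mole quoted for the deuterium-tritium reaction can be checked directly from the mass defect with E = Δm c²; a quick sketch:

```python
# Sketch: checking the quoted D-T fusion energy release from the mass
# defect, E = (delta m) c^2, using the figures given for Equation (1).
c  = 2.998e8              # speed of light, m/s
dm = 0.01888e-3           # mass defect, kg per mole of reaction

E = dm * c**2             # J per mole
print(f"{E / 1e9:.0f} GJ/mol")    # about 1700 GJ/mol, as quoted
```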
It is especially important that you know the precise meanings of all the highlighted terms in the context of this topic. Will this acid react with that base? And if so, to what extent? These questions can be answered quantitatively by carrying out the detailed equilibrium calculations you will learn about in another lesson. However, modern acid-base chemistry offers a few simple principles that can enable you to make a qualitative decision at a glance. More importantly, the ideas which we develop in this section are guaranteed to give you a far better conceptual understanding of proton-based acid-base reactions in general. Will acid HA react with base B? We stated above that the outcome of any acid-base reaction depends on how well two different bases can compete in the tug-of-war for the proton \[A^– \leftarrow H^+\rightarrow B^– \label{9.4.1}\] Some insight into this can be had by thinking of the proton as having different potential energies when it is bound to different acceptors. We can draw a useful analogy with the electrons in an atom, which, you will recall, will always fall into the lowest-potential energy orbitals available, filling them from the bottom up. In a similar way, protons will "fall" into the lowest-energy empty spots (bases) they can find. Consider the scheme shown here, which depicts two hypothetical acid-base conjugate pairs. Take careful note of the labeling of this diagram: the acids HA and HB are proton sources and the conjugate bases A⁻ and B⁻ are proton sinks. This "source-sink" terminology is synonymous with the "donor-acceptor" language that Brønsted taught us, but it also carries an implication about the relative energies of the proton as it exists in the two molecules HA and HB. If, as is indicated here, the proton is at a higher "potential energy" when it is in the form of HA than in HB, the reaction HA + B⁻ → HB + A⁻ will be favored compared to the reverse process HB + A⁻ → HA + B⁻, which would require elevating the proton up to the A⁻ level.
We will refer to diagrams such as the one in Figure \(\PageIndex{1}\) as "proton-energy diagrams", which is not quite correct, but we do not want to get into thermodynamics at this point. (If you already know something about chemical thermodynamics, we are really referring to free energies.) It follows, then, that if we can arrange all the common acid-base conjugate pairs on this kind of a scale, we can predict the direction of any simple acid-base reaction without resorting to numbers. This will be illustrated further on, but in order to keep things simple, let's look at a few proton-energy diagrams that illustrate some of the acid-base chemistry that we discussed in the preceding section. The hydronium ion is the dividing line; a strong acid, you will recall, is one whose conjugate base A⁻ loses out to the "stronger" base H₂O in the competition for the proton: \[A^– \leftarrow H^+ \rightarrow H_2O \label{9.4.2}\] This is seen most clearly in the diagram here, which contrasts the strong acid HA with the weak acid HB. HB "dissociates" to only a tiny extent because it is energetically unfavorable to promote its proton up to the H₃O⁺-H₂O level (the process shown in the diagram). Because the reaction \[HA + H_2O \rightarrow A^–+ H_3O^+ \label{9.4.4}\] for strong acid HA is virtually complete, all strong acids appear to be equally strong in water (the leveling effect). From the proton-energy standpoint, a strong acid is one in which the energy of the proton is substantially greater when attached to the anion A⁻ than when it is attached to H₂O. Adding a strong acid HA to water will put it in contact with a huge proton sink that drains off the protons from any such acid, leaving the conjugate base A⁻ along with hydronium ion, H₃O⁺.
Conjugate bases of weak acids tend to accept protons from water, leaving a small excess of OH⁻ ions and thus an alkaline solution. As you can see in the diagram, the weak base ammonia accepts a proton from water: \[NH_3 + H_2O \rightarrow NH_4^+ + OH^– \label{9.4.6}\] The "weakness" of such a base is a consequence of the energetically unfavorable process in which a proton must be raised up from the low-lying H₂O-OH⁻ level. From the standpoint of the "proton sources" column on the left, you can think of this as similar to the situation for weak acids that we discussed above; it can be considered a special case in which the weak acid is H₂O. For a very long time, chemists had regarded methane, CH₄, as the weakest acid, making the CH₃⁻ ion (which is also the simplest carbanion) the strongest base. Methane still holds its position as the weakest acid, but in 2008, the ion LiO⁻ was found to be an even stronger base than CH₃⁻. Because both of these bases are observable only in the gas phase, these facts have little obvious import on aqueous-solution chemistry. Because water is amphiprotic, one H₂O molecule can donate a proton to another, as explained above. In this case the proton has to acquire considerable energy to make the jump from the H₂O-OH⁻ level to the H₃O⁺-H₂O level, so the reaction \[2 H_2O \rightarrow H_3O^++ OH^– \label{9.4.7}\] occurs only to a minute extent. Think of this as the special case of the "weakest" acid H₂O reacting with the "weakest" base H₂O. Finally, what is a strong base? Just as a strong acid lies above the H₃O⁺-H₂O level, so does a strong base lie below the H₂O-OH⁻ level. And for the same reason that H₃O⁺ is the strongest acid that can exist in water, OH⁻ is the strongest base that can exist in water. The example of the oxide ion O²⁻ is shown here.
Sodium oxide Na₂O is a white powder that dissolves in water to give oxide ions, which immediately decompose into hydroxide ions \[O^{2–} + H_2O \rightarrow 2 OH^– \label{9.4.8}\] This table combines common examples covering the entire range of acid-base strengths, from the strong to the very weak. The energy scale at the left gives you some idea of the relative proton-energy levels for each conjugate pair; notice that the zero is arbitrarily set to that of the H₃O⁺-H₂O pair. Of more importance is the pH scale on the right. The pH that corresponds to any conjugate pair is the pH at which equal concentrations of that pair are in their acid and base forms. For example, acetic acid CH₃COOH is "half ionized" at a pH of 4.7. If a strong acid such as HCl is added so as to reduce the pH, the proportion of acetate ion decreases, while if sodium hydroxide is added to force the pH higher, a larger fraction of the acetic acid will be "dissociated". This illustrates another aspect of pH: at its most fundamental level, pH is an inverse measure of the "proton intensity" in the solution. The lower the pH, the higher the proton intensity, and the greater will be the fraction of higher-energy proton levels populated, which translates to higher acid-to-conjugate base concentration ratios. It is easy to see why acids such as H₂SO₄ and bases such as the amide ion NH₂⁻ cannot exist in aqueous solution; the pH would have to be at the impossible level of –6 for the former and +23 for the latter! When you titrate an acid with a base, you want virtually every molecule of the acid to react with the base. In the case of a weak acid such as hypochlorous acid, the reaction would be \[HOCl + OH^– \rightarrow OCl^– + H_2O \label{9.4.9}\] Because the proton level in HOCl is considerably above that in H₂O, titration with NaOH solution will ensure that every last proton is eaten up by the hydroxide ion.
If, instead, you used ammonia NH₃ as a titrant, the closeness of the two proton levels would cause the reaction to be incomplete, yielding a less distinct equivalence point. And, of course, titration with a base that is weaker than hypochlorite ion (such as sodium bicarbonate) would be hopeless. As a practical matter, you can usually estimate that when the pH differs by more than about two units from the pKa of a monoprotic conjugate pair, the concentration of the non-favored species will be down by a factor of at least 100.
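The factor-of-ten-per-pH-unit behavior described above follows from the relation log([base]/[acid]) = pH − pKa. A small sketch (the pKa of 4.7 for acetic acid is from the discussion above; the other pH values are illustrative):

```python
# Sketch: how far a conjugate pair is pushed toward its acid or base form
# at a given pH, from log([base]/[acid]) = pH - pKa. The pKa of 4.7 for
# acetic acid is from the discussion above; the other pH values are
# illustrative choices.
def base_to_acid_ratio(pH, pKa):
    """Concentration ratio [A-]/[HA] for a monoprotic conjugate pair."""
    return 10 ** (pH - pKa)

print(base_to_acid_ratio(4.7, 4.7))   # 1.0 -> "half ionized"
print(base_to_acid_ratio(7.7, 4.7))   # ~1000: base form dominates
print(base_to_acid_ratio(1.7, 4.7))   # ~0.001: acid form dominates
```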
Make sure you thoroughly understand the following essential ideas that are presented below. It is especially important that you know the precise meanings of all the italicized terms in the context of this topic. In this section, we look in more detail at some aspects of the kinetic-molecular model and how it relates to our empirical knowledge of gases. For most students, this will be the first application of algebra to the development of a chemical model; this should be educational in itself, and may help bring that subject back to life for you! As before, your emphasis should be on understanding these models and the ideas behind them; there is no need to memorize any of the formulas. At temperatures above absolute zero, all molecules are in motion. In the case of a gas, this motion consists of straight-line jumps whose lengths are quite great compared to the dimensions of the molecule. Although we can never predict the velocity of a particular individual molecule, the fact that we are usually dealing with a huge number of them allows us to know what fraction of the molecules have kinetic energies (and hence velocities) that lie within any given range. The trajectory of an individual gas molecule consists of a series of straight-line paths interrupted by collisions. What happens when two molecules collide depends on their relative kinetic energies; in general, a faster or heavier molecule will impart some of its kinetic energy to a slower or lighter one. Two molecules having identical masses and moving in opposite directions at the same speed will momentarily remain motionless after their collision. If we could measure the instantaneous velocities of all the molecules in a sample of a gas at some fixed temperature, we would obtain a wide range of values. A few would be zero, and a few would be very high velocities, but the majority would fall into a more or less well defined range.
We might be tempted to define an average velocity for a collection of molecules, but here we would need to be careful: molecules moving in opposite directions have velocities of opposite signs. Because the molecules in a gas are in random thermal motion, there will be just about as many molecules moving in one direction as in the opposite direction, so the velocity vectors of opposite signs would all cancel and the average velocity would come out to zero. Since this answer is not very useful, we need to do our averaging in a slightly different way. The proper treatment is to average the squares of the velocities, and then take the square root of this value. The resulting quantity is known as the root-mean-square, or rms velocity \[ \nu_{rms} = \sqrt{\dfrac{\sum \nu^2}{n}}\] which we will denote simply by \(\bar{v}\). The formula relating the rms velocity to the temperature and molar mass is surprisingly simple, considering the great complexity of the events it represents: \[ \bar{v}= v_{rms} = \sqrt{\dfrac{3RT}{m}}\] in which \(m\) is the molar mass in kg mol⁻¹. (In the per-molecule form of this relation, the gas constant \(R\) is replaced by \(k = R \div 6.02 \times 10^{23}\), the "gas constant per molecule", known as the Boltzmann constant.) What is the average velocity of nitrogen molecules at 300 K? The molar mass of N₂ is 28.01 g. Substituting in the above equation and expressing \(R\) in energy units, we obtain \[v^{2}=\frac{3 \times 8.31 \mathrm{J} \mathrm{mol}^{-1} \mathrm{K}^{-1} \times 300 \mathrm{K}}{28.01 \times 10^{-3} \mathrm{kg} \mathrm{mol}^{-1}}=2.67 \times 10^{5} \mathrm{J} \mathrm{kg}^{-1} \nonumber\] Recalling the definition of the joule (1 J = 1 kg m² s⁻²) and taking the square root, \[\overline{v}=\sqrt{2.67 \times 10^{5} \mathrm{J} \mathrm{kg}^{-1} \times \frac{1 \mathrm{kg} \mathrm{m}^{2} \mathrm{s}^{-2}}{1 \mathrm{J}}}=517 \mathrm{m\,s}^{-1} \nonumber\] or \[517 \mathrm{m} \mathrm{s}^{-1} \times \frac{1 \mathrm{km}}{10^{3} \mathrm{m}} \times \frac{3600 \mathrm{s}}{1 \mathrm{h}}=1860 \mathrm{km} \mathrm{h}^{-1} \nonumber\] this is fast!
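The worked example above can be reproduced in a few lines (the 157√(T/m) shortcut used at the end is the quick-estimate formula given in the text):

```python
import math

# Sketch: the worked example above, v_rms = sqrt(3RT/M) for N2 at 300 K.
R = 8.314                 # gas constant, J mol^-1 K^-1
T = 300.0                 # temperature, K
M = 28.01e-3              # molar mass of N2, kg/mol

v_rms = math.sqrt(3 * R * T / M)
print(f"{v_rms:.0f} m/s")             # 517 m/s, as in the text
print(f"{v_rms * 3.6:.0f} km/h")      # ~1860 km/h

# the quick-estimate form, with the molar mass in grams
v_est = 157 * math.sqrt(T / 28.01)
print(f"{v_est:.0f} m/s (estimate)")  # close to the exact value
```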
The velocity of a rifle bullet is typically 300-500 m s⁻¹; convert to common units to see the comparison for yourself. A simpler formula for estimating average molecular velocities is \[v=157 \sqrt{\dfrac{T}{m}}\] in which \(v\) is in units of meters/sec, \(T\) is the absolute temperature and \(m\) the molar mass in grams. If we were to plot the number of molecules whose velocities fall within a series of narrow ranges, we would obtain a slightly asymmetric curve known as a velocity distribution. The peak of this curve would correspond to the most probable velocity. This velocity distribution curve is known as the Maxwell-Boltzmann distribution, but is frequently referred to only by Boltzmann's name. The distribution was first worked out around 1850 by the great Scottish physicist James Clerk Maxwell (1831-1879), who is better known for discovering the laws of electromagnetic radiation. Later, the Austrian physicist Ludwig Boltzmann (1844-1906) put the relation on a sounder theoretical basis and simplified the mathematics somewhat. Boltzmann pioneered the application of statistics to the physics and thermodynamics of matter, and was an ardent supporter of the atomic theory of matter at a time when it was still not accepted by many of his contemporaries. The derivation of the Boltzmann curve is a bit too complicated to go into here, but its physical basis is easy to understand. Consider a large population of molecules having some fixed amount of kinetic energy. As long as the temperature remains constant, this total energy will remain unchanged, but it can be distributed among the molecules in many different ways, and this distribution will change continually as the molecules collide with each other and with the walls of the container. It turns out, however, that kinetic energy is acquired and handed around only in discrete amounts which are known as quanta. Once the molecule has a given number of kinetic energy quanta, these can be apportioned amongst the three directions of motion in many different ways, each resulting in a distinct velocity state for the molecule.
The greater the number of quanta (that is, the greater the total kinetic energy of the molecule), the greater the number of possible velocity states. If we assume that all velocity states are equally probable, then simple statistics predicts that higher velocities will be more favored simply because there are so many more of them. Although the number of possible higher-energy states is greater, the lower-energy states are more likely to be occupied. This is because there is only so much kinetic energy available to the gas as a whole; every molecule that acquires kinetic energy in a collision leaves behind another molecule having less. This tends to even out the kinetic energies in a collection of molecules, and ensures that there are always some molecules whose instantaneous velocity is near zero. The net effect of these two opposing tendencies, one favoring high kinetic energies and the other favoring low ones, is the peaked curve seen above. Notice that because of the asymmetry of this curve, the average (rms) velocity is not the same as the most probable velocity, which is defined by the peak of the curve. At higher temperatures (or with lighter molecules) the latter constraint becomes less important, and the mean velocity increases. But with a wider velocity distribution, the number of molecules having any one velocity diminishes, so the curve tends to flatten out. Higher temperatures allow a larger fraction of molecules to acquire greater amounts of kinetic energy, causing the Boltzmann plots to spread out. Notice how the left ends of the plots are anchored at zero velocity (there will always be a few molecules that happen to be at rest.) As a consequence, the curves flatten out as the higher temperatures make additional higher-velocity states of motion more accessible. The area under each plot is the same for a constant number of molecules.
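The distinction between the rms and most probable velocities can be made concrete. For the Maxwell-Boltzmann distribution the peak lies at √(2RT/M) and the mean at √(8RT/πM); only the rms formula √(3RT/M) appears in this section, so the other two are standard results quoted here as assumptions:

```python
import math

# Sketch: the three characteristic velocities of the Maxwell-Boltzmann
# distribution for N2 at 300 K. Only the rms form appears in this section;
# the most-probable and mean formulas are standard results.
R, T, M = 8.314, 300.0, 28.01e-3

v_mp   = math.sqrt(2 * R * T / M)              # most probable (peak of curve)
v_mean = math.sqrt(8 * R * T / (math.pi * M))  # mean speed
v_rms  = math.sqrt(3 * R * T / M)              # root-mean-square

print(f"{v_mp:.0f} < {v_mean:.0f} < {v_rms:.0f} m/s")  # 422 < 476 < 517 m/s
```

The ordering v_mp < v_mean < v_rms is exactly the asymmetry described above: the curve's peak sits below its average.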
All molecules have the same average kinetic energy (mv²/2) at the same temperature, so the fraction of molecules with higher velocities will increase as m, and thus the molecular weight, decreases. The ability of a planet to retain an atmospheric gas depends on the average velocity (and thus on the temperature and mass) of the gas molecules and on the planet's mass, which determines its gravity and thus the escape velocity. In order to retain a gas for the age of the solar system, the average velocity of the gas molecules should not exceed about one-sixth of the escape velocity. The escape velocity from the Earth is 11.2 km/s, and 1/6 of this is about 2 km/s. Examination of the above plot reveals that hydrogen molecules can easily achieve this velocity, and this is the reason that hydrogen, the most abundant element in the universe, is almost absent from Earth's atmosphere. Although hydrogen is not a significant atmospheric component, water vapor is. A very small amount of this diffuses to the upper part of the atmosphere, where intense solar radiation breaks down the H₂O into H₂. Escape of this hydrogen from the upper atmosphere amounts to about 2.5 × 10 g/year. The ideal gas equation of state came about by combining the empirically determined ("ABC") laws of Avogadro, Boyle, and Charles, but one of the triumphs of the kinetic molecular theory was the derivation of this equation from simple mechanics in the late nineteenth century. This is a beautiful example of how the principles of elementary mechanics can be applied to a simple model to develop a useful description of the behavior of macroscopic matter. We begin by recalling that the pressure of a gas arises from the force exerted when molecules collide with the walls of the container. This force can be found from Newton's law \[f = ma = m\dfrac{dv}{dt} \label{2.1}\] in which \(v\) is the velocity component of the molecule in the direction perpendicular to the wall and \(m\) is its mass.
To evaluate the derivative in Equation \ref{2.1}, which is the velocity change per unit time, consider a single molecule of a gas contained in a cubic box of length \(l\). For simplicity, assume that the molecule is moving along the x-axis, which is perpendicular to a pair of walls, so that it is continually bouncing back and forth between the same pair of walls. When the molecule of mass \(m\) strikes the wall at velocity \(v_x\) (and thus with a momentum \(mv_x\)) it will rebound elastically and end up moving in the opposite direction with velocity \(-v_x\). The total change in velocity per collision is thus \(2v_x\) and the change in momentum is \(2mv_x\). After the collision the molecule must travel a distance \(l\) to the opposite wall, and then back across this same distance before colliding again with the wall in question. This determines the time between successive collisions with a given wall; the number of collisions per second will be \(v_x/2l\). The force \(F\) exerted on the wall is the rate of change of the momentum, given by the product of the momentum change per collision and the collision frequency: \[F = \dfrac{d(mv_x)}{dt} = (2mv_x) \times \left( \dfrac{v_x}{2l} \right) = \dfrac{m v_x^2}{l} \label{2-2}\] Pressure is force per unit area, so the pressure \(P\) exerted by the molecule on the wall of cross-section \(l^2\) becomes \[ P = \dfrac{mv_x^2}{l^3} = \dfrac{mv_x^2}{V} \label{2-3}\] in which \(V\) is the volume of the box. As noted near the beginning of this unit, any given molecule will make about the same number of moves in the positive and negative directions, so taking a simple average would yield zero. To avoid this embarrassment, we square the velocities before averaging them, and take the square root of the average. This result is known as the root-mean-square (rms) velocity. We have calculated the pressure due to a single molecule moving at a constant velocity in a direction perpendicular to a wall.
If we now introduce more molecules, we must interpret \(v^2\) as an average value which we will denote by \(\bar{v^2}\). Also, since the molecules are moving randomly in all directions, only one-third of their total velocity will be directed along any one Cartesian axis, so the total pressure exerted by \(N\) molecules becomes \[ P=\dfrac{N}{3}\dfrac{m \bar{\nu}^2}{V} \label{2.4}\] The above statement that "one-third of the total velocity (of all the molecules together)..." does not mean that 1/3 of the molecules themselves are moving in each of these three directions; each individual particle is free to travel in any possible direction between collisions. However, any random trajectory can be regarded as composed of three components that correspond to these three axes. The red arrow in the illustration depicts the path of a single molecule as it travels from the point of its last collision at the origin (lower left corner). The length of the arrow (which you may recognize as a vector) is proportional to its velocity. The three components of the molecule's velocity are indicated by the small green arrows. It should be clearly apparent that the trajectory is directed mainly along one of the axes. In the section that follows, Equation \ref{2-5} contains another factor that similarly divides the kinetic energy into components along the three axes. The temperature of a gas is a measure of the average translational kinetic energy of its molecules, so we begin by calculating the latter. Recalling that \(m\bar{v^2}/2\) is the average translational kinetic energy \(\epsilon\), we can rewrite Equation \ref{2.4} as \[PV = \dfrac{1}{3} N m \bar{v^2} = \dfrac{2}{3} N \epsilon \label{2-5}\] The 2/3 factor in the proportionality arises because the kinetic energy \(\frac{1}{2}m\bar{v^2}\) carries a factor of ½, which combines with the 1/3 from averaging over the three directions.
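The claim that only one-third of the mean-square velocity lies along any single Cartesian axis can be checked numerically by averaging over randomly directed velocities; a small sketch:

```python
import random

# Sketch: checking numerically that one-third of the mean-square velocity
# lies along any single Cartesian axis when directions are random.
# Gaussian components give an isotropic (direction-free) velocity model.
random.seed(1)                      # fixed seed so the result is repeatable
N = 100_000
vels = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
        for _ in range(N)]

mean_vx2 = sum(vx**2 for vx, vy, vz in vels) / N
mean_v2  = sum(vx**2 + vy**2 + vz**2 for vx, vy, vz in vels) / N
print(mean_vx2 / mean_v2)           # ~0.333
```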
The average translational kinetic energy is directly proportional to temperature: \[\epsilon = \dfrac{3}{2} kT \label{2.6}\] in which the proportionality constant \(k\) is known as the Boltzmann constant. Substituting this into Equation \ref{2-5} yields \[ PV = \left( \dfrac{2}{3}N \right) \left( \dfrac{3}{2}kT \right) =NkT \label{2.7}\] Notice that Equation \ref{2.7} looks very much like the ideal gas equation \[PV = nRT \nonumber \] but it is not quite the same; we have been using capital \(N\) to denote the number of molecules, whereas \(n\) stands for the number of moles. And of course, the proportionality factor is not the gas constant \(R\), but rather the Boltzmann constant, \(1.381 \times 10^{-23}\, J\, K^{-1}\). If we multiply \(k\) by Avogadro's number \(N_A\): \[(1.381 \times 10^{-23}\, J \,K^{-1}) (6.022 \times 10^{23}\, mol^{-1}) = 8.314 \,J \,K^{-1}\, mol^{-1}\] Hence, the Boltzmann constant \(k\) is just the gas constant per molecule. So for \(n\) moles of particles, Equation \ref{2.7} turns into our old friend \[ P V = n R T \label{2.8}\] The ideal gas equation of state came about by combining the empirically determined laws of Boyle, Charles, and Avogadro, but one of the triumphs of the kinetic molecular theory was the derivation of this equation from simple mechanics in the late nineteenth century. This is a beautiful example of how the principles of elementary mechanics can be applied to a simple model to develop a useful description of the behavior of macroscopic matter, and it will be worth your effort to follow and understand the individual steps of the derivation. (But don't bother to memorize it!) Since the product \(PV\) has the dimensions of energy, so does \(RT\), and the quantity \(\frac{3}{2}RT\) in fact represents the average translational kinetic energy per mole of molecular particles. 
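The claim that \(k\) is just the gas constant per molecule is easy to verify numerically; the block below uses the CODATA values of \(k\) and \(N_A\) and an illustrative mole of gas near STP:

```python
# k * N_A = R, and PV = NkT agrees with PV = nRT.
k = 1.380649e-23      # J/K, Boltzmann constant
N_A = 6.02214076e23   # 1/mol, Avogadro's number
R = k * N_A
print(R)              # ~8.314 J/(mol K)

T, V, n = 273.15, 22.4e-3, 1.0          # one mole near STP (illustrative)
P_molecular = (n * N_A) * k * T / V     # PV = NkT, counting molecules
P_molar = n * R * T / V                 # PV = nRT, counting moles
print(P_molecular, P_molar)             # the two forms agree
```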
The relationship between these two energy units can be obtained by recalling that 1 atm is \(1.01325\times 10^{5}\, N\, m^{-2}\), so that \[1\, \mathrm{L\, atm} = 1000\, \mathrm{cm}^{3}\left(\frac{1\, \mathrm{m}^{3}}{10^{6}\, \mathrm{cm}^{3}}\right) \times 1.01325 \times 10^{5}\, \mathrm{N\, m^{-2}}=101.325\, \mathrm{J}\] The gas constant \(R\) is one of the most important fundamental constants relating to the macroscopic behavior of matter. It is commonly expressed in both pressure-volume and in energy units: \(R\) = 0.082057 L atm mol⁻¹ K⁻¹ = 8.314 J mol⁻¹ K⁻¹. That is, \(R\) expresses the amount of energy per mole per kelvin. As noted above, the Boltzmann constant \(k\), which appears in many expressions relating to the statistical treatment of molecules, is just \(R \div (6.022 \times 10^{23}) = 1.3807 \times 10^{-23}\; J\, K^{-1}\), the "gas constant per molecule". Molecular velocities tend to be very high by our everyday standards (typically around 500 meters per second), but even in gases the molecules bump into each other so frequently that their paths are continually being deflected in a random manner, so that the net movement (diffusion) of a molecule from one location to another occurs rather slowly. How far can a molecule travel before colliding with another? The average distance a molecule moves between such collisions is called the mean free path (\(\lambda\)), which depends on the number of molecules per unit volume and on their size. In moving between collisions, a molecule of diameter \(\sigma\) traces out a path corresponding to the axis of an imaginary cylinder whose cross-section is \(\pi \sigma^2\). Eventually it will encounter another molecule (extreme right in the diagram below) that has intruded into this cylinder and that defines the terminus of its free motion. The volume of the cylinder swept out between collisions is \(\pi \sigma^2 \lambda\). At each collision the molecule is diverted to a new path and traces out a new exclusion cylinder. On average, each such cylinder contains just one other molecule, so if there are \(n\) molecules per unit volume, then \(n \pi \sigma^2 \lambda \approx 1\). 
Solving for \(\lambda\) and applying a correction factor \(\sqrt{2}\) to take into account the relative motion of the colliding molecules (the detailed argument for this is too complicated to go into here), we obtain \[\lambda = \dfrac{1}{\sqrt{2}\, \pi n \sigma^2} \label{3.1}\] Small molecules such as He, H₂, and CH₄ typically have collision diameters of around 300-500 pm. At STP the value of \(n\), the number of molecules per cubic meter, is \[\dfrac{6.022 \times 10^{23}\; mol^{-1}}{22.4 \times 10^{-3}\; m^3 \; mol^{-1}} = 2.69 \times 10^{25} \; m^{-3}\] Substitution into Equation \(\ref{3.1}\) yields a value of around \(10^{-7}\; m\) (100 nm) for the mean free path of most molecules under these conditions. Although this may seem like a very small distance, it typically amounts to a few hundred molecular diameters and, more importantly, about 30 times the average distance between molecules. This explains why so many gases conform very closely to the ideal gas law at ordinary temperatures and pressures. On the other hand, at each collision the molecule can be expected to change direction. Because these changes are random, the net change in location a molecule experiences during a period of one second is typically rather small. Thus in spite of the high molecular velocities, the speed of molecular diffusion in a gas is usually quite small.
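Equation 3.1 can be evaluated directly for STP conditions; the collision diameter below is an assumed round value of about 300 pm:

```python
# Mean free path at STP from lambda = 1 / (sqrt(2) * pi * n * sigma**2).
import math

N_A = 6.022e23
V_m = 22.4e-3                 # m^3/mol, molar volume at STP
n = N_A / V_m                 # molecules per m^3, ~2.69e25
sigma = 3.0e-10               # m, an assumed ~300 pm collision diameter
lam = 1.0 / (math.sqrt(2) * math.pi * n * sigma**2)
print(n, lam)                 # lam is on the order of 1e-7 m (~100 nm)
```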
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Polymers/Synthesis_of_Addition_Polymers
All the monomers from which addition polymers are made are alkenes or functionally substituted alkenes. The most common and thermodynamically favored chemical transformations of alkenes are addition reactions. Many of these addition reactions are known to proceed in a stepwise fashion by way of reactive intermediates, and this is the mechanism followed by most polymerizations. A general diagram illustrating this assembly of linear macromolecules, which supports the name chain growth polymers, is presented here. Since a pi-bond in the monomer is converted to a sigma-bond in the polymer, the polymerization reaction is usually exothermic by 8 to 20 kcal/mol. Indeed, cases of explosively uncontrolled polymerizations have been reported. It is useful to distinguish four polymerization procedures fitting this general description. • Radical Polymerization The initiator is a radical, and the propagating site of reactivity (*) is a carbon radical. • Cationic Polymerization The initiator is an acid, and the propagating site of reactivity (*) is a carbocation. • Anionic Polymerization The initiator is a nucleophile, and the propagating site of reactivity (*) is a carbanion. • Coordination Catalytic Polymerization The initiator is a transition metal complex, and the propagating site of reactivity (*) is a terminal catalytic complex. Virtually all of the monomers described above are subject to radical polymerization. Since this can be initiated by traces of oxygen or other minor impurities, pure samples of these compounds are often "stabilized" by small amounts of radical inhibitors to avoid unwanted reaction. When radical polymerization is desired, it must be started by using a radical initiator, such as a peroxide or certain azo compounds. The formulas of some common initiators, and equations showing the formation of radical species from these initiators are presented below. By using small amounts of initiators, a wide variety of monomers can be polymerized. 
One example of this radical polymerization is the conversion of styrene to polystyrene, shown in the following diagram. The first two equations illustrate the initiation process, and the last two equations are examples of chain propagation. Each monomer unit adds to the growing chain in a manner that generates the most stable radical. Since carbon radicals are stabilized by substituents of many kinds, the preference for head-to-tail regioselectivity in most addition polymerizations is understandable. Because radicals are tolerant of many functional groups and solvents (including water), radical polymerizations are widely used in the chemical industry. In principle, once started, a radical polymerization might be expected to continue unchecked, producing a few extremely long chain polymers. In practice, larger numbers of moderately sized chains are formed, indicating that chain-terminating reactions must be taking place. The most common termination processes are Radical Combination and Disproportionation. These reactions are illustrated by the following equations. The growing polymer chains are colored blue and red, and the hydrogen atom transferred in disproportionation is colored green. Note that in both types of termination two reactive radical sites are removed by simultaneous conversion to stable product(s). Since the concentration of radical species in a polymerization reaction is small relative to other reactants (e.g. monomers, solvents and terminated chains), the rate at which these radical-radical termination reactions occur is very small, and most growing chains achieve moderate length before termination. The relative importance of these terminations varies with the nature of the monomer undergoing polymerization. For acrylonitrile and styrene, combination is the major process. However, methyl methacrylate and vinyl acetate are terminated chiefly by disproportionation. 
Another reaction that diverts radical chain-growth polymerizations from producing linear macromolecules is called chain transfer. As the name implies, this reaction moves a carbon radical from one location to another by an intermolecular or intramolecular hydrogen atom transfer (colored green). These possibilities are demonstrated by the following equations. Chain transfer reactions are especially prevalent in the high-pressure radical polymerization of ethylene, which is the method used to make LDPE (low-density polyethylene). The 1º-radical at the end of a growing chain is converted to a more stable 2º-radical by hydrogen atom transfer. Further polymerization at the new radical site generates a side chain radical, and this may in turn lead to creation of other side chains by chain transfer reactions. As a result, the morphology of LDPE is an amorphous network of highly branched macromolecules. Polymerization of isobutylene (2-methylpropene) by traces of strong acids is an example of cationic polymerization. The polyisobutylene product is a soft rubbery solid, Tg = –70 ºC, which is used for inner tubes. This process is similar to radical polymerization, as demonstrated by the following equations. Chain growth ceases when the terminal carbocation combines with a nucleophile or loses a proton, giving a terminal alkene (as shown here). Monomers bearing cation-stabilizing groups, such as alkyl, phenyl, or vinyl, can be polymerized by cationic processes. These are normally initiated at low temperature in methylene chloride solution. Strong acids, such as HClO4, or Lewis acids containing traces of water (as shown above) serve as initiating reagents. At low temperatures, chain transfer reactions are rare in such polymerizations, so the resulting polymers are cleanly linear (unbranched). Treatment of a cold THF solution of styrene with 0.001 equivalents of n-butyllithium causes an immediate polymerization. 
This is an example of anionic polymerization, the course of which is described by the following equations. Chain growth may be terminated by water or carbon dioxide, and chain transfer seldom occurs. Only monomers having anion stabilizing substituents, such as phenyl, cyano or carbonyl are good substrates for this polymerization technique. Many of the resulting polymers are largely isotactic in configuration, and have high degrees of crystallinity. Species that have been used to initiate anionic polymerization include alkali metals, alkali amides, alkyl lithiums and various electron sources. A practical application of anionic polymerization occurs in the use of superglue. This material is methyl 2-cyanoacrylate, CH2=C(CN)CO2CH3. When exposed to water, amines or other nucleophiles, a rapid polymerization of this monomer takes place. An efficient and stereospecific catalytic polymerization procedure was developed by Karl Ziegler (Germany) and Giulio Natta (Italy) in the 1950's. Their findings permitted, for the first time, the synthesis of unbranched, high molecular weight polyethylene (HDPE), laboratory synthesis of natural rubber from isoprene, and configurational control of polymers from terminal alkenes like propene (e.g. pure isotactic and syndiotactic polymers). In the case of ethylene, rapid polymerization occurred at atmospheric pressure and moderate to low temperature, giving a stronger (more crystalline) product (HDPE) than that from radical polymerization (LDPE). For this important discovery these chemists received the 1963 Nobel Prize in chemistry. Ziegler-Natta catalysts are prepared by reacting certain transition metal halides with organometallic reagents such as alkyl aluminum, lithium and zinc reagents. The catalyst formed by reaction of triethylaluminum with titanium tetrachloride has been widely studied, but other metals (e.g. V & Zr) have also proven effective. The following diagram presents one mechanism for this useful reaction. 
Others have been suggested, with changes to accommodate the heterogeneity or homogeneity of the catalyst. Polymerization of propylene through action of the titanium catalyst gives an isotactic product, whereas a vanadium-based catalyst gives a syndiotactic product.
https://chem.libretexts.org/Bookshelves/General_Chemistry/General_Chemistry_Supplement_(Eames)/Chemistry_Calculations/Dimensional_Analysis
A dimension is any measurable extent, such as length, time, and mass. Units help describe the measurement according to certain standards. In the metric system, for example, a one-dimensional (1-D) length is measured in meters (m), a two-dimensional (2-D) area is measured in meters squared (m²), and a three-dimensional (3-D) volume is measured in meters cubed (m³). Other types of quantities (time, mass, temperature) are measured using different units because they have different dimensions. To analyze means to think about something, often focusing on one part at a time. Putting it all together, dimensional analysis means thinking about units piece by piece. Dimensional analysis can be used to correctly convert between different types of units, to catch mistakes in one's calculations, and to make many useful calculations in real life. Essentially, dimensional analysis means multiplying by one. You collect a set of "conversion factors" or ratios that equal one, and then multiply a quantity that you are interested in by those "ones." For example, if you want to know how many seconds it would take to get from New York to Philadelphia, you'd do it like this: First, using the express train, it takes 2.5 hours to get to Philadelphia from a station in New York. Then, we know that 1 hour = 60 minutes and 1 minute = 60 seconds, so (1 h / 60 min) = 1 and (1 min / 60 s) = 1. Now, all we have to do is multiply our starting number (2.5 h) by "one" twice, making sure that the units cancel correctly so that we have only seconds at the end. \[(2.5\; \cancel{h}) \left(\dfrac{60\; \cancel{min}}{1\; \cancel{h}}\right)\left(\dfrac{60\; s}{1 \; \cancel{min}}\right) = 9.0 \times 10^3\; s\] If each part is not put in the right place, the units will come out wrong. For example: \[\left(\dfrac{1}{2.5\;\cancel{h}}\right)\left(\dfrac{1 \,\cancel{h}}{60\,\cancel{min}}\right)\left(\dfrac{1 \, \cancel{min}}{60 \,s}\right) = 1.1 \times 10^{-4}\; s^{-1}\] In this case, we put the starting quantity on the bottom, so we got s⁻¹ when the units were canceled out. 
Here is an example of not being able to cancel out the units correctly: \[(2.5\; h)\left(\dfrac{1\; h}{60\; min}\right)\left(\dfrac{60\; s}{1\; min}\right) = 2.5\; s\cdot h^2\cdot min^{-2}\] The important part is that if you check the units to make sure that they come out right, you can be pretty sure you set the calculation up right! Here is an example of how dimensional analysis can help. A student was calculating initial velocity (v₀) from this equation: \[d = (v_0)t + \dfrac{at^2}{2} \nonumber\] But the student had derived the equation incorrectly, and used this equation instead: \[v_0 = \dfrac{d}{t} - \dfrac{at^2}{2} \nonumber \] So the student had the wrong answer, but didn't know that because he just put the numbers for d, t, and a into his calculator using the wrong equation. If he had checked the units, he would have seen that (d/t) has units of meters per second (m/s) while (at²)/2 has units of meters (m). Dimensional analysis is often useful when you want to estimate some quantity in the real world. For instance, maybe you want to know how much money you spend on coffee each month. If you spend $5 per cup and have 2 cups per day, and there are approximately 30 days in a month, then you can set up a calculation just like those above to calculate dollars per month spent on coffee. This works for many important, less obvious situations, for instance in business, to get an approximate idea of some quantity.
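Both worked estimates above reduce to straightforward arithmetic; here is a sketch of each:

```python
# Hours to seconds by "multiplying by one" twice.
hours = 2.5
seconds = hours * (60 / 1) * (60 / 1)   # (60 min / 1 h) * (60 s / 1 min)
print(seconds)                          # 9000.0 s, i.e. 9.0e3 s

# Monthly coffee estimate: ($/cup) * (cups/day) * (days/month) = $/month.
dollars_per_cup = 5
cups_per_day = 2
days_per_month = 30
dollars_per_month = dollars_per_cup * cups_per_day * days_per_month
print(dollars_per_month)                # 300
```

Note how the intermediate units (minutes, cups, days) cancel in each product, leaving only the units you want.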
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Local_Anesthetics
Unlike other drugs which act in the region of the synapse, local anesthetics are agents that reversibly block the generation and conduction of nerve impulses along a nerve fiber. They depress impulses from sensory nerves of the skin, surfaces of mucosa, and muscles to the central nervous system. These agents are widely used in surgery, dentistry, and ophthalmology to block transmission of impulses in peripheral nerve endings. Most local anesthetics can be represented by the following general formula. In both the official chemical name and the proprietary name, a local anesthetic drug can be recognized by the "-caine" ending. The ester linkage can also be an amide linkage. The most recent research indicates that the local anesthetic binds to a phospholipid in the nerve membrane and inhibits the ability of the phospholipid to bind Ca²⁺ ions. Practically all of the free-base forms of the drugs are liquids. For this reason most of these drugs are used as salts (chloride, sulfate, etc.) which are water-soluble, odorless, crystalline solids. As esters, these drugs are easily hydrolyzed, with consequent loss of activity. The amide form of the drug is more stable and resistant to hydrolysis. Two local anesthetics are shown below.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_(Zumdahl_and_Decoste)/16%3A_Liquids_and_Solids/16.E%3A_Exercises
1. For each of the following pairs of substances, specify the type of interparticle bonding in each, and indicate which one has the higher boiling point:
2. For each of the following types of solids, describe its structure and the nature of the forces holding it together, and give the formula of at least one example:
3. List the substance types in (2) in order of increasing melting point.
4. Which of the types of substances in (2) conduct electricity as solids? As liquids?
5. Of the following substances: NaCl, diamond, Fe, F₂, C₂H₅OH, which one
6. Define boiling point, critical temperature, critical pressure, and triple point.
7. Explain how each of the following affects the vapor pressure of a liquid:
8. What are the three types of intermolecular attractive forces? List them in order of increasing strength.
9. The normal (1 atm) melting and boiling points of O₂ are -218 °C and -183 °C, respectively. Its triple point is at -219 °C and 1.14 x 10 atm, and its critical point is at -119 °C and 49.8 atm.
10. The vapor pressure of solid iodine (I₂) at 30 °C is 0.466 mm Hg. How many milligrams of iodine will sublime into an evacuated 1.00-liter flask?
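As a hint for exercise 10, the calculation is a direct application of the ideal gas law; the molar mass of I₂ and the kelvin temperature are filled in here as standard values:

```python
# Exercise 10 sketch: moles of I2 vapor from PV = nRT, then mass in mg.
P_atm = 0.466 / 760      # mm Hg converted to atm
V_L = 1.00               # L
R = 0.082057             # L atm / (mol K)
T = 303.15               # K (30 deg C)
M_I2 = 253.8             # g/mol, molar mass of I2
n = P_atm * V_L / (R * T)
mg = n * M_I2 * 1000
print(mg)                # roughly 6 mg of iodine sublimes
```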
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/15%3A_Thermodynamics_of_Chemical_Equilibria/15.02%3A_Entropy_Rules
Entropy is one of the most fundamental concepts of physical science, with far-reaching consequences ranging from cosmology to chemistry. It is also widely misrepresented as a measure of "disorder", as we discuss below. The German physicist Rudolf Clausius originated the concept as "energy gone to waste" in the early 1850s, and it was refined through a number of more precise definitions over the next 15 years. Previously, we explained how the tendency of thermal energy to disperse as widely as possible is what drives all spontaneous processes, including, of course, chemical reactions. We now need to understand how the direction and extent of the spreading and sharing of energy can be related to measurable thermodynamic properties of substances; that is, of reactants and products. You will recall that when a quantity of heat \(q\) flows from a warmer body to a cooler one, permitting the available thermal energy to spread into and populate more microstates, the ratio \(q/T\) measures the extent of this energy spreading. It turns out that we can generalize this to other processes as well, but there is a difficulty with using \(q\) because it is not a state function; that is, its value is dependent on the pathway or manner in which a process is carried out. This means, of course, that the quotient \(q/T\) cannot be a state function either, so we are unable to use it to get differences between reactants and products as we do with the other state functions. The way around this is to restrict our consideration to a special class of pathways that are described as reversible. For example, the reversible expansion of a gas can be achieved by reducing the external pressure in a series of infinitesimal steps; reversing any step will restore the system and the surroundings to their previous state. Similarly, heat can be transferred reversibly between two bodies by changing the temperature difference between them in infinitesimal steps each of which can be undone by reversing the temperature difference. 
The most widely cited example of an irreversible change is the free expansion of a gas into a vacuum. Although the system can always be restored to its original state by recompressing the gas, this would require that the surroundings perform work on the gas. Since the gas does no work on the surroundings in a free expansion (the external pressure is zero, so \(w = 0\)), there will be a permanent change in the surroundings. Another example of irreversible change is the conversion of mechanical work into frictional heat; there is no way, by reversing the motion of a weight along a surface, that the heat released due to friction can be restored to the system. These diagrams show the same expansion and compression ±ΔV carried out in different numbers of steps ranging from a single step at the top to an "infinite" number of steps at the bottom. As the number of steps increases, the processes become less irreversible; that is, the difference between the work done in expansion and that required to re-compress the gas diminishes. In the limit of an "infinite" number of steps (bottom), these work terms are identical, and both the system and surroundings (the "world") are unchanged by the expansion-compression cycle. In all other cases the system (the gas) is restored to its initial state, but the surroundings are permanently altered. A reversible change is one carried out in such a way that, when undone, both the system and surroundings (that is, the world) remain unchanged. It should go without saying, of course, that any process that proceeds in infinitesimal steps would take infinitely long to occur, so thermodynamic reversibility is an idealization that is never achieved in real processes, except when the system is already at equilibrium, in which case no change will occur anyway! So why is the concept of a reversible process so important? 
The answer can be seen by recalling that the change in the internal energy that characterizes any process can be distributed in an infinity of ways between heat flow across the boundaries of the system and work done on or by the system, as expressed by the First Law, \(\Delta U = q + w\). Each combination of \(q\) and \(w\) represents a different pathway between the initial and final states. It can be shown that as a process such as the expansion of a gas is carried out in successively longer series of smaller steps, the value of \(w\) approaches a minimum, and that of \(q\) approaches a maximum that is characteristic of the particular process. Thus when a process is carried out reversibly, the \(q\)-term in the First Law expression has its greatest possible value, and the \(w\)-term is at its smallest. These special quantities \(q_{rev}\) and \(w_{rev}\) (pronounced "q-reversible" and "w-reversible") have unique values for any given process and are therefore state functions. For a process that reversibly exchanges a quantity of heat \(q_{rev}\) with the surroundings, the entropy change is defined as \[ \Delta S = \dfrac{q_{rev}}{T} \label{23.2.1}\] This is the basic way of evaluating \(\Delta S\) for constant-temperature processes such as phase changes, or the isothermal expansion of a gas. For processes in which the temperature is not constant, such as heating or cooling of a substance, the equation must be integrated over the required temperature range, as discussed below. This is a rather fine point that you should understand: although transfer of heat between the system and surroundings is impossible to achieve in a truly reversible manner, this idealized pathway is only crucial for the definition of \(\Delta S\); by virtue of its being a state function, the same value of \(\Delta S\) will apply when the system undergoes the same net change via any pathway. 
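Equation 23.2.1 can be applied directly to the isothermal expansion of an ideal gas, for which \(q_{rev} = nRT \ln(V_2/V_1)\); the amount, temperature, and volumes below are assumed illustrative values:

```python
# Delta S = q_rev / T for an isothermal reversible doubling of volume.
import math

n = 1.0             # mol (assumed)
R = 8.314           # J/(mol K)
T = 298.0           # K (assumed)
V1, V2 = 1.0, 2.0   # doubling the volume (only the ratio matters)
q_rev = n * R * T * math.log(V2 / V1)   # heat absorbed along the reversible path
dS = q_rev / T                          # the temperature cancels out
print(dS)           # ~5.76 J/K, independent of T
```

Because \(\Delta S\) is a state function, this same value applies even if the actual expansion is a single irreversible step into a vacuum.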
For example, the entropy change a gas undergoes when its volume is doubled at constant temperature will be the same regardless of whether the expansion is carried out in 1000 tiny steps (as reversible as patience is likely to allow) or by a single-step expansion into a vacuum (as irreversible a pathway as you can get!). Entropy is a measure of the degree of spreading and sharing of thermal energy within a system. This "spreading and sharing" can be spreading of the thermal energy into a larger volume of space or its sharing amongst previously inaccessible microstates of the system. The following table shows how this concept applies to a number of common processes. Entropy is an extensive quantity; that is, it is proportional to the quantity of matter in a system; thus 100 g of metallic copper has twice the entropy of 50 g at the same temperature. This makes sense because the larger piece of copper contains twice as many quantized energy levels able to contain the thermal energy. Entropy is still described, particularly in older textbooks, as a measure of disorder. In a narrow technical sense this is correct, since the spreading and sharing of thermal energy does have the effect of randomizing the disposition of thermal energy within a system. But to simply equate entropy with "disorder" without further qualification is extremely misleading, because it is far too easy to forget that entropy refers to the dispersal of thermal energy at the molecular level. Carrying these concepts over to macro systems may yield compelling analogies, but it is no longer science. It is far better to avoid the term "disorder" altogether in discussing entropy. The distribution of thermal energy in a system is characterized by the number of quantized microstates that are accessible (i.e., among which energy can be shared); the more of these there are, the greater the entropy of the system. 
This is the basis of an alternative (and more fundamental) definition of entropy \[\color{red} S = k \ln Ω \label{23.2.2}\] in which \(k\) is the Boltzmann constant (the gas constant per molecule, \(1.38 \times 10^{-23}\, J\, K^{-1}\)) and \(Ω\) (omega) is the number of microstates that correspond to a given macrostate of the system. The more such microstates, the greater is the probability of the system being in the corresponding macrostate. For any physically realizable macrostate, the quantity \(Ω\) is an unimaginably large number, typically around \(10^{10^{25}}\) for one mole. By comparison, the number of atoms that make up the earth is about \(10^{50}\). But even though it is beyond human comprehension to compare numbers that seem to verge on infinity, the thermal energy contained in actual physical systems manages to discover the largest of these quantities with no difficulty at all, quickly settling in to the most probable macrostate for a given set of conditions. The reason entropy depends on the logarithm of \(Ω\) is easy to understand. Suppose we have two systems (containers of gas, say) with \(S_1,\, Ω_1\) and \(S_2,\, Ω_2\). If we now redefine this as a single system (without actually mixing the two gases), then the entropy of the new system will be \[S = S_1 + S_2\] but the number of microstates will be the product \(Ω_1 Ω_2\), because for each state of system 1, system 2 can be in any of its \(Ω_2\) states. Because \[\ln(Ω_1Ω_2) = \ln Ω_1 + \ln Ω_2,\] the additivity of the entropy is preserved. If someone could make a movie showing the motions of individual atoms of a gas or of a chemical reaction system in its equilibrium state, there is no way you could determine, on watching it, whether the film is playing in the forward or reverse direction. Physicists describe this by saying that such systems possess time-reversal symmetry; neither classical nor quantum mechanics offers any clue to the direction of time. 
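The additivity argument, with \(Ω\) multiplying for combined systems while \(S\) adds, can be demonstrated with toy microstate counts (real values of \(Ω\) are astronomically larger):

```python
# ln(Omega1 * Omega2) = ln(Omega1) + ln(Omega2), so S = k ln(Omega) is additive.
import math

k = 1.380649e-23            # J/K, Boltzmann constant
Omega1, Omega2 = 1e6, 1e9   # toy microstate counts, assumed for illustration
S1 = k * math.log(Omega1)
S2 = k * math.log(Omega2)
S_combined = k * math.log(Omega1 * Omega2)
print(S_combined, S1 + S2)  # the two agree
```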
However, when a movie showing changes at the macroscopic level is being played backward, the weirdness is starkly apparent to anyone; if you see books flying off of a table top or tea being sucked back up into a tea bag (or a chemical reaction running in reverse), you will immediately know that something is wrong. At this level, time clearly has a direction, and it is often noted that because the entropy of the world as a whole always increases and never decreases, it is entropy that gives time its direction. It is for this reason that entropy is sometimes referred to as "time's arrow". But there is a problem here: conventional thermodynamics is able to define entropy change only for reversible processes which, as we know, take infinitely long to perform. So we are faced with the apparent paradox that thermodynamics, which deals only with differences between states and not the journeys between them, is unable to describe the very process of change by which we are aware of the flow of time. The direction of time is revealed to the chemist by the progress of a reaction toward its state of equilibrium; once equilibrium is reached, the net change that leads to it ceases, and from the standpoint of that particular system, the flow of time stops. If we extend the same idea to the much larger system of the world as a whole, this leads to the concept of the "heat death of the universe" that was mentioned briefly in the previous lesson. Energy values, as you know, are all relative, and must be defined on a scale that is completely arbitrary; there is no such thing as the absolute energy of a substance, so we can arbitrarily define the enthalpy or internal energy of an element in its most stable form at 298 K and 1 atm pressure as zero. The same is true of the entropy; since entropy is a measure of the "dilution" of thermal energy, it follows that the less thermal energy available to spread through a system (that is, the lower the temperature), the smaller will be its entropy. 
In other words, as the absolute temperature of a substance approaches zero, so does its entropy. This principle is the basis of the third law of thermodynamics, which states that the entropy of a perfectly-ordered solid at 0 K is zero. The absolute entropy of a substance at any temperature above 0 K must be determined by calculating the increments of heat \(q\) required to bring the substance from 0 K to the temperature of interest, and then summing the ratios \(q/T\). Two kinds of experimental measurements are needed: \[ S_{0 \rightarrow T} = \int _{0}^{T} \dfrac{C_p}{T}\, dT \] Because the heat capacity is itself slightly temperature dependent, the most precise determinations of absolute entropies require that the functional dependence of \(C_p\) on \(T\) be used in the above integral in place of a constant \(C_p\): \[ S_{0 \rightarrow T} = \int _{0}^{T} \dfrac{C_p(T)}{T}\, dT \] When this is not known, one can take a series of heat capacity measurements over narrow temperature increments \(ΔT\) and measure the area under each section of the curve in Figure \(\PageIndex{3}\). The area under each section of the plot represents the entropy change associated with heating the substance through an interval \(ΔT\). To this must be added the enthalpies of melting, vaporization, and of any solid-solid phase changes. Values of \(C_p\) for temperatures near zero are not measured directly, but can be estimated from quantum theory. The increments of \(q / T\) are then added to obtain the absolute entropy at temperature \(T\). As shown in Figure \(\PageIndex{4}\) above, the entropy of a substance increases with temperature, and it does so for two reasons: The standard entropy of a substance is its entropy at 1 atm pressure. The values found in tables are normally those for 298 K, and are expressed in units of J K⁻¹ mol⁻¹. The table below shows some typical values for gaseous substances. 
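The integral \(\int (C_p/T)\, dT\) can be approximated numerically over narrow increments \(ΔT\), exactly as the graphical method describes. A sketch with an assumed constant \(C_p\) (roughly that of liquid water), for which the exact answer is \(C_p \ln(T_2/T_1)\):

```python
# Stepwise sum of (Cp/T) dT over small temperature steps, versus the exact integral.
import math

Cp = 75.3               # J/(mol K), assumed constant heat capacity (~liquid water)
T1, T2 = 273.15, 373.15
steps = 10000
dT = (T2 - T1) / steps
S = sum(Cp / (T1 + (i + 0.5) * dT) * dT for i in range(steps))  # midpoint rule
exact = Cp * math.log(T2 / T1)
print(S, exact)         # the stepwise sum converges to Cp * ln(T2/T1)
```

In a real determination, \(C_p\) varies with \(T\) and the enthalpies of any phase changes must be added as separate \(ΔH/T\) terms.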
Note especially how the values given in this Table illustrate these important points: The entropies of the solid elements are strongly influenced by the manner in which the atoms are bound to one another. The contrast between diamond and graphite is particularly striking; graphite, which is built up of loosely-bound stacks of hexagonal sheets, appears to be more than twice as good at soaking up thermal energy as diamond, in which the carbon atoms are tightly locked into a three-dimensional lattice, thus affording them less opportunity to vibrate around their equilibrium positions. Looking at all the examples in the above table, you will note a general inverse correlation between the hardness of a solid and its entropy. Thus sodium, which can be cut with a knife, has almost twice the entropy of iron; the much greater entropy of lead reflects both its high atomic weight and the relative softness of this metal. These trends are consistent with the oft-expressed principle that the more "disordered" a substance, the greater its entropy. Gases, which serve as efficient vehicles for spreading thermal energy over a large volume of space, have much higher entropies than condensed phases. Similarly, liquids have higher entropies than solids owing to the multiplicity of ways in which the molecules can interact (that is, store energy). As a substance becomes more dispersed in space, the thermal energy it carries is also spread over a larger volume, leading to an increase in its entropy. Because entropy, like energy, is an extensive property, a dilute solution of a given substance may well possess a smaller entropy than the same volume of a more concentrated solution, but the entropy per mole of solute (the molar entropy) will of course always increase as the solution becomes more dilute. For gaseous substances, the volume and pressure are respectively direct and inverse measures of concentration.
For an ideal gas that expands at a constant temperature (meaning that it absorbs heat from the surroundings to compensate for the work it does during the expansion), the increase in entropy is given by \[ \Delta S = R \ln \dfrac{V_2}{V_1} \] Note: If the gas is allowed to cool during the expansion, the relation becomes more complicated and will best be discussed in a more advanced course. Because the pressure of a gas is inversely proportional to its volume, we can easily alter the above relation to express the entropy change associated with a change in the pressure of a perfect gas: \[ \Delta S = R \ln \dfrac{P_1}{P_2} \] Expressing the entropy change directly in concentrations \(c\), we have the similar relation \[ \Delta S = R \ln \dfrac{c_1}{c_2} \] Although these equations strictly apply only to perfect gases and cannot be used at all for liquids and solids, it turns out that in a dilute solution, the solute can often be treated as a gas dispersed in the volume of the solution, so the last equation can actually give a fairly accurate value for the entropy of dilution of a solution. We will see later that this has important consequences in determining the equilibrium concentrations in a homogeneous reaction mixture. Thermal energy is the portion of a molecule's energy that is proportional to its temperature, and thus relates to motion at the molecular scale. What kinds of molecular motions are possible? For monatomic molecules, there is only one: actual movement from one location to another, which we call translation. Since there are three directions in space, all molecules possess three modes of translational motion. For polyatomic molecules, two additional kinds of motions are possible. One of these is rotation; a linear molecule such as CO₂, in which the atoms are all laid out along the x-axis, can rotate about the y- and z-axes, while molecules having less symmetry can rotate about all three axes. Thus linear molecules possess two modes of rotational motion, while non-linear ones have three rotational modes. Finally, molecules consisting of two or more atoms can undergo internal vibrations.
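As a quick numerical check of the isothermal expansion relation \(\Delta S = R \ln(V_2/V_1)\) discussed above, here is a short illustrative sketch (not part of the original text); it works per mole of gas.

```python
# Molar entropy change for the isothermal expansion of an ideal gas,
# dS = R ln(V2/V1); the same form gives R ln(P1/P2) or R ln(c1/c2).
import math

R = 8.314  # gas constant, J K^-1 mol^-1

def dS_expansion(V1, V2):
    """Molar entropy change when an ideal gas expands isothermally
    from V1 to V2 (any consistent volume units)."""
    return R * math.log(V2 / V1)

# Doubling the volume (or halving the pressure, or diluting a solute
# two-fold) increases the molar entropy by R ln 2:
print(round(dS_expansion(1.0, 2.0), 2), "J/K/mol")  # ~5.76
```

Note that compression (\(V_2 < V_1\)) gives a negative \(\Delta S\), consistent with thermal energy being confined to fewer, more widely spaced states.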
For freely moving molecules in a gas, the number of vibrational modes or patterns depends on both the number of atoms and the shape of the molecule, and it increases rapidly as the molecule becomes more complicated. Notice the greatly different spacing of the three kinds of energy levels. This is extremely important because it determines the number of energy quanta that a molecule can accept, and, as the following illustration shows, the number of different ways this energy can be distributed amongst the molecules. The spacing of molecular energy states becomes closer as the mass and number of bonds in the molecule increases, so we can generally say that the more complex the molecule, the greater the density of its energy states. At the atomic and molecular level, all energy is quantized; each particle possesses discrete states of kinetic energy and is able to accept thermal energy only in packets whose values correspond to the energies of one or more of these states. Polyatomic molecules can store energy in rotational and vibrational motions, and all molecules (even monatomic ones) will possess translational kinetic energy (thermal energy) at all temperatures above absolute zero. The energy difference between adjacent translational states is so minute that translational kinetic energy can be regarded as continuous (non-quantized) for most practical purposes. The number of ways in which thermal energy can be distributed amongst the allowed states within a collection of molecules is easily calculated from simple statistics, but we will confine ourselves to an example here. Suppose that we have a system consisting of three molecules and three quanta of energy to share among them. We can give all the kinetic energy to any one molecule, leaving the others with none, we can give two units to one molecule and one unit to another, or we can share out the energy equally and give one unit to each molecule.
All told, there are ten possible ways of distributing three units of energy among three identical molecules as shown here: Each of these ten possibilities represents a distinct microstate that will describe the system at any instant in time. Those microstates that possess identical distributions of energy among the accessible quantum levels (and differ only in which particular molecules occupy the levels) are known as configurations. Because all microstates are equally probable, the probability of any one configuration is proportional to the number of microstates that can produce it. Thus in the system shown above, the 2 + 1 + 0 configuration will be observed 60% of the time, while the 1 + 1 + 1 configuration will occur only 10% of the time. As the number of molecules and the number of quanta increases, the number of accessible microstates grows explosively; if 1000 quanta of energy are shared by 1000 molecules, the number of available microstates will be around \(10^{600}\) — a number that greatly exceeds the number of atoms in the observable universe! The number of possible configurations (as defined above) also increases, but in such a way as to greatly reduce the probability of all but the most probable configurations. Thus for a sample of a gas large enough to be observable under normal conditions, only a single configuration (energy distribution amongst the quantum states) need be considered; even the second-most-probable configuration can be neglected. The bottom line: any collection of molecules large enough in numbers to have chemical significance will have its thermal energy distributed over an unimaginably large number of microstates. The number of microstates increases exponentially as more energy states become accessible owing to the addition of energy quanta (higher temperature), an increase in the number of molecules, or an increase in the volume of the system. Energy is conserved; if you lift a book off the table, and let it fall, the total amount of energy in the world remains unchanged.
All you have done is transferred it from the form in which it was stored within the glucose in your body to your muscles, and then to the book (that is, you did work on the book by moving it up against the earth's gravitational field). After the book has fallen, this same quantity of energy exists as thermal energy (heat) in the book and table top. What changed, however, is the availability of this energy. Once the energy has spread into the huge number of thermal microstates in the warmed objects, the probability of its spontaneously (that is, by chance) becoming un-dispersed is essentially zero. Thus although the energy is still "there", it is forever beyond utilization or recovery. The profundity of this conclusion was recognized around 1900, when it was first described as the "heat death" of the world. This refers to the fact that every spontaneous process (essentially every change that occurs) is accompanied by the "dilution" of energy. The obvious implication is that all of the molecular-level kinetic energy will eventually be spread out completely, and nothing more will ever happen. Everybody knows that a gas, if left to itself, will tend to expand and fill the volume within which it is confined completely and uniformly. What "drives" this expansion? At the simplest level it is clear that with more space available, random motions of the individual molecules will inevitably disperse them throughout the space. But as we mentioned above, the allowed energy states that molecules can occupy are spaced more closely in a larger volume than in a smaller one. The larger the volume available to the gas, the greater the number of microstates its thermal energy can occupy. Since all such states within the thermally accessible range of energies are equally probable, the expansion of the gas can be viewed as a consequence of the tendency of thermal energy to be spread and shared as widely as possible.
Once this has happened, the probability that this sharing of energy will reverse itself (that is, that the gas will spontaneously contract) is so minute as to be unthinkable. Imagine a gas initially confined to one half of a box (Figure \(\Page {7}\)). The barrier is then removed so that it can expand into the full volume of the container. We know that the entropy of the gas will increase as the thermal energy of its molecules spreads into the enlarged space. In terms of the spreading of thermal energy, Figure 23.2.X may be helpful. The tendency of a gas to expand is due to the more closely-spaced thermal energy states in the larger volume. Mixing and dilution really amount to the same thing, especially for ideal gases. Replace the pair of containers shown above with one containing two kinds of molecules in the separate sections (Figure \(\Page {9}\)). When we remove the barrier, the "red" and "blue" molecules will each expand into the space of the other. (Recall that "each gas is a vacuum to the other gas".) However, notice that although each gas underwent an expansion, the overall process amounts to what we call "mixing". What is true for gaseous molecules can, in principle, apply also to solute molecules dissolved in a solvent. But bear in mind that whereas the enthalpy associated with the expansion of a perfect gas is by definition zero, \(\Delta H\)'s of mixing of two liquids or of dissolving a solute in a solvent have finite values which may limit the miscibility of liquids or the solubility of a solute. But what's really dramatic is that when just one molecule of a second gas is introduced into the container (Figure \(\Page {8}\)), an unimaginably huge number of new configurations become possible, greatly increasing the number of microstates that are thermally accessible (as indicated by the pink shading above).
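The three-molecule, three-quantum example discussed earlier can be verified by brute-force enumeration. This is an illustrative sketch, not part of the original text; it counts the microstates and the fraction of time each configuration is observed.

```python
# Enumerate the ways of distributing identical energy quanta among
# distinguishable molecules, reproducing the three-molecule,
# three-quantum example from the text.
from itertools import product
from collections import Counter

def microstates(n_molecules, n_quanta):
    """All assignments (quanta per molecule) summing to n_quanta;
    each tuple is one microstate."""
    return [m for m in product(range(n_quanta + 1), repeat=n_molecules)
            if sum(m) == n_quanta]

states = microstates(3, 3)
print(len(states))  # 10 microstates in all

# Group microstates into configurations: same multiset of occupancies,
# differing only in which molecule holds which quanta.
configs = Counter(tuple(sorted(m, reverse=True)) for m in states)
for config, count in sorted(configs.items()):
    print(config, count / len(states))
# (1,1,1) occurs 1/10 of the time; (2,1,0) 6/10; (3,0,0) 3/10.
```

This confirms the probabilities quoted in the text: the most probable configuration accounts for 60% of the microstates, the equal-sharing one for only 10%.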
Just as gases spontaneously change their volumes from "smaller-to-larger", the flow of heat from a warmer body to a cooler one always operates in the direction "warmer-to-cooler" because this allows thermal energy to populate a larger number of energy microstates as new ones are made available by bringing the cooler body into contact with the warmer one; in effect, the thermal energy becomes more "diluted". When the bodies are brought into thermal contact, thermal energy flows from the higher occupied levels in the warmer object into the unoccupied levels of the cooler one until equal numbers are occupied in both bodies, bringing them to the same temperature. As you might expect, the increase in the amount of energy spreading and sharing is proportional to the amount of heat transferred \(q\), but there is one other factor involved, and that is the temperature at which the transfer occurs. When a quantity of heat \(q\) passes into a system at temperature \(T\), the degree of dilution of the thermal energy is given by \[\dfrac{q}{T}\] To understand why we have to divide by the temperature, consider the effect of very large and very small values of \(T\) in the denominator. If the body receiving the heat is initially at a very low temperature, relatively few thermal energy states are initially occupied, so the amount of energy spreading into vacant states can be very great. Conversely, if the temperature is initially large, more thermal energy is already spread around within it, and absorption of the additional energy will have a relatively small effect on the degree of thermal disorder within the body. When a chemical reaction takes place, two kinds of changes relating to thermal energy are involved: the ways in which thermal energy can be stored within the reactant and product molecules, and the amount of that energy that is thermally accessible (Figure \(\Page {11}\)). The ability of energy to spread into the product molecules is constrained by the availability of sufficient thermal energy to produce these molecules. This is where the temperature comes in.
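The \(q/T\) measure of energy dilution makes it easy to show why heat flows spontaneously from warm to cool: the cooler body gains more entropy than the warmer one loses. A minimal sketch, with the idealizing assumption that both temperatures stay effectively constant during the transfer:

```python
# Entropy bookkeeping for heat flow: heat q leaving a warm body at
# T_hot costs it q/T_hot of entropy, while the cool body at T_cold
# gains the larger amount q/T_cold, so the total always increases.

def total_entropy_change(q, T_hot, T_cold):
    """Net entropy change (J/K) for heat q (J) flowing hot -> cold,
    treating both temperatures as constant (large reservoirs)."""
    dS_hot = -q / T_hot    # energy "un-diluted" from the warm body
    dS_cold = q / T_cold   # energy diluted into more states
    return dS_hot + dS_cold

dS = total_entropy_change(1000.0, 400.0, 300.0)  # 1 kJ, 400 K -> 300 K
print(round(dS, 3), "J/K")  # positive, so the flow is spontaneous
```

Running the same transfer in reverse (cold to hot) simply flips the sign, giving a negative total entropy change, which is why that direction never happens on its own.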
At absolute zero the situation is very simple; no thermal energy is available to bring about dissociation, so the only component present will be dihydrogen. The result is exactly what the Le Chatelier principle predicts: the equilibrium state for an endothermic reaction is shifted to the right at higher temperatures. The following table generalizes these relations for the four sign-combinations of \(\Delta H\) and \(\Delta S\). (Note that use of the standard \(\Delta H°\) and \(\Delta S°\) values in the example reactions is not strictly correct here, and can yield misleading results when used generally.) This combustion reaction, like most such reactions, is spontaneous at all temperatures. The positive entropy change is due mainly to the greater mass of CO₂ molecules compared to those of O₂. The decrease in moles of gas in the product drives the entropy change negative, making the reaction spontaneous only at low temperatures. Thus higher \(T\), which speeds up the reaction, also reduces its extent. Dissociation reactions are typically endothermic with positive entropy change, and are therefore spontaneous only at high temperatures. This reaction is not spontaneous at any temperature, meaning that its reverse is always spontaneous. But because the reverse reaction is kinetically inhibited, NO can exist indefinitely at ordinary temperatures even though it is thermodynamically unstable. Everybody knows that the solid is the stable form of a substance at low temperatures, while the gaseous state prevails at high temperatures. Why should this be? The diagram in Figure \(\Page {12}\) shows that phase changes involve exchange of energy with the surroundings (whose energy content relative to the system is indicated, with much exaggeration, by the height of the yellow vertical bars in Figure \(\Page {13}\)). When solid and liquid are in equilibrium (middle section of the diagram below), there is sufficient thermal energy (indicated by pink shading) to populate the energy states of both phases. If heat is allowed to flow into the surroundings, it is withdrawn selectively from the more abundantly populated levels of the liquid phase, causing the quantity of this phase to decrease in favor of the solid.
The temperature remains constant as the heat of fusion is returned to the system in exact compensation for the heat lost to the surroundings. Finally, after the last trace of liquid has disappeared, the only states remaining are those of the solid. Any further withdrawal of heat results in a temperature drop as the states of the solid become depopulated. Vapor pressure lowering, boiling point elevation, freezing point depression and osmosis are well-known phenomena that occur when a non-volatile solute such as sugar or a salt is dissolved in a volatile solvent such as water. All these effects result from "dilution" of the solvent by the added solute, and because of this commonality they are referred to as colligative properties (Lat. co ligare, "connected together"). The key role of the solvent concentration is obscured by the greatly-simplified expressions used to calculate the magnitude of these effects, in which only the solute concentration appears. The details of how to carry out these calculations and the many important applications of colligative properties are covered elsewhere. Our purpose here is to offer a more complete explanation of why these phenomena occur. Basically, they all result from the effect of dilution of the solvent on its entropy, and thus from the increase in the density of energy states of the system in the solution compared to that in the pure liquid. Equilibrium between two phases (liquid-gas for boiling, and solid-liquid for freezing) occurs when the energy states in each phase can be populated at equal densities. The temperatures at which this occurs are depicted by the shading. Dilution of the solvent adds new energy states to the liquid, but does not affect the vapor phase. This raises the temperature required to make equal numbers of microstates accessible in the two phases. Dilution of the solvent adds new energy states to the liquid, but does not affect the solid phase. This reduces the temperature required to make equal numbers of states accessible in the two phases.
When a liquid is subjected to hydrostatic pressure— for example, by an inert, non-dissolving gas that occupies the vapor space above the surface— the vapor pressure of the liquid is raised (Figure \(\Page {16}\)). The pressure acts to compress the liquid very slightly, effectively narrowing the potential energy well in which the individual molecules reside and thus increasing their tendency to escape from the liquid phase. (Because liquids are not very compressible, the effect is quite small; a 100-atm applied pressure will raise the vapor pressure of water at 25°C by only about 2 torr.) In terms of the entropy, we can say that the applied pressure reduces the dimensions of the "box" within which the principal translational motions of the molecules are confined within the liquid, thus reducing the density of energy states in the liquid phase. Applying hydrostatic pressure to a liquid increases the spacing of its microstates, so that the number of energetically accessible states in the gas, although unchanged, is relatively greater— thus increasing the tendency of molecules to escape into the vapor phase. In terms of free energy, the higher pressure raises the free energy of the liquid, but does not affect that of the gas phase. This phenomenon can explain osmotic pressure. Osmotic pressure, students must be reminded, is not what drives osmosis, but is rather the hydrostatic pressure that must be applied to the more concentrated solution (more dilute solvent) in order to stop the osmotic flow of solvent into the solution. The effect of this pressure \(\Pi\) is to slightly increase the spacing of the solvent energy states on the high-pressure (dilute-solvent) side of the membrane to match that of the pure solvent, restoring osmotic equilibrium.
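The "about 2 torr" figure quoted above can be checked with the Poynting relation, \(\ln(P/P^o) = V_m \Delta P / RT\), a standard result not derived in this text. The molar volume and normal vapor pressure of water at 25°C below are handbook values supplied here as assumptions.

```python
# Checking that ~100 atm of applied hydrostatic pressure raises the
# vapor pressure of water at 25 C by only about 2 torr, using the
# Poynting relation ln(P/P0) = Vm * dP / (R T).
import math

R = 8.314          # gas constant, J K^-1 mol^-1
T = 298.15         # 25 C, in K
Vm = 18.07e-6      # molar volume of liquid water, m^3/mol (handbook value)
P0 = 23.76         # normal vapor pressure of water at 25 C, torr (handbook value)
dP = 100 * 101325  # 100 atm of applied pressure, in Pa

P = P0 * math.exp(Vm * dP / (R * T))
print(round(P - P0, 2), "torr increase")  # ~1.8 torr
```

The smallness of the exponent \(V_m \Delta P / RT \approx 0.07\) is a direct expression of how incompressible liquids are; for a gas, the same pressure change would be enormous by comparison.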
Osmosis is the process in which a liquid passes through a membrane whose pores permit the passage of solvent molecules but are too small for the larger solute molecules to pass through. Figure \(\Page {1}\) shows a simple osmotic cell. Both compartments contain water, but the one on the right also contains a solute whose molecules (represented by green circles) are too large to pass through the membrane. Many artificial and natural substances are capable of acting as semi-permeable membranes. The walls of most plant and animal cells fall into this category. If the cell is set up so that the liquid level is initially the same in both compartments, you will soon notice that the liquid rises in the right compartment and falls in the left one, indicating that water molecules from the left compartment are migrating through the semipermeable membrane and into the right compartment, where the solvent is more dilute. This migration of the solvent is known as osmotic flow, or simply osmosis. The escaping tendency of a substance from a phase increases with its concentration in the phase. What is the force that drives the molecules through the membrane? This is a misleading question, because there is no real "force" in the physical sense other than the thermal energies all molecules possess. Osmosis is a consequence of simple statistics: the randomly directed motions of a collection of molecules will cause more to leave a region of high concentration than return to it; the escaping tendency of a substance from a phase increases with its concentration in the phase. Suppose you drop a lump of sugar into a cup of tea, without stirring. Initially there will be a very high concentration of dissolved sugar at the bottom of the cup, and a very low concentration near the top. Since the molecules are in random motion, there will be more sugar molecules moving from the high concentration region to the low concentration region than in the opposite direction.
The motion of a substance from a region of high concentration to one of low concentration is known as diffusion. Diffusion is a consequence of a concentration gradient (which is a measure of the difference in escaping tendency of the substance in different regions of the solution). There is really no special force on the individual molecules; diffusion is purely a consequence of statistics. Osmotic flow is simply diffusion of a solvent through a membrane impermeable to solute molecules. Now take two solutions of differing solvent concentration, and separate them by a semipermeable membrane (Figure \(\Page {2}\)). Being semipermeable, the membrane is essentially invisible to the solvent molecules, so they diffuse from the high concentration region to the low concentration region just as before. This flow of solvent constitutes osmotic flow, or osmosis. Figure \(\Page {2}\) shows water molecules (blue) passing freely in both directions through the semipermeable membrane, while the larger solute molecules remain trapped in the left compartment, diluting the water and reducing its escaping tendency from this cell, compared to the water in the right side. This results in a net osmotic flow of water from the right side which continues until the increased hydrostatic pressure on the left side raises the escaping tendency of the diluted water to that of the pure water at 1 atm, at which point osmotic equilibrium is achieved. In the absence of the semipermeable membrane, diffusion would continue until the concentrations of all substances are uniform throughout the liquid phase. With the semipermeable membrane in place, and if one compartment contains the pure solvent, this can never happen; no matter how much liquid flows through the membrane, the solvent in the right side will always be more concentrated than that in the left side. Osmosis will continue indefinitely until we run out of solvent, or something else stops it.
One way to stop osmosis is to raise the hydrostatic pressure on the solution side of the membrane. This pressure squeezes the solvent molecules closer together, raising their escaping tendency from the phase. If we apply enough pressure (or let the pressure build up by osmotic flow of liquid into an enclosed region), the escaping tendency of solvent molecules from the solution will eventually rise to that of the molecules in the pure solvent, and osmotic flow will cease. The pressure required to achieve osmotic equilibrium is known as the osmotic pressure. Note that the osmotic pressure is the pressure required to stop osmosis, not to sustain it. Osmotic pressure is the pressure required to stop osmotic flow. It is common usage to say that a solution "has" an osmotic pressure of \(x\) atmospheres. It is important to understand that this means nothing more than that a pressure of this value must be applied to the solution to prevent flow of pure solvent into this solution through a semipermeable membrane separating the two liquids. The Dutch scientist Jacobus van 't Hoff (1852-1911) was one of the giants of physical chemistry. He discovered the equation below after a chance encounter with a botanist friend during a walk in a park in Amsterdam; the botanist had learned that the osmotic pressure increases by about 1/273 for each degree of temperature increase. van 't Hoff immediately grasped the analogy to the ideal gas law. The osmotic pressure \(\Pi\) of a solution containing \(n\) moles of solute particles in a solution of volume \(V\) is given by the van 't Hoff equation: \[\Pi = \dfrac{nRT}{V} \label{8.4.3}\] in which \(R\) is the gas constant (0.0821 L atm mol\(^{-1}\) K\(^{-1}\)) and \(T\) is the absolute temperature. In contrast to the need to employ solute molality to calculate the effects of a non-volatile solute on changes in the freezing and boiling points of a solution, we can use solute molarity to calculate osmotic pressures. Note that the fraction \(n/V\) corresponds to the molarity (\(M\)) of a solution of a non-dissociating solute, or to twice the molarity of a totally-dissociated solute such as \(NaCl\).
In this context, molarity refers to the summed total of the concentrations of all solute species. Hence, Equation \ref{8.4.3} can be expressed as \[\Pi =MRT \label{8.4.3B}\] The form \(\Pi V = nRT\) of the above equation should look familiar. Much effort was expended around the end of the 19th century to explain the similarity between this relation and the ideal gas law, but in fact, the van 't Hoff equation turns out to be only a very rough approximation of the real osmotic pressure law, which is considerably more complicated and was derived after van 't Hoff's formulation. As such, this equation gives valid results only for extremely dilute ("ideal") solutions. According to the van 't Hoff equation, an ideal solution containing 1 mole of dissolved particles per liter of solvent at 0°C will have an osmotic pressure of 22.4 atm. Sea water contains dissolved salts at a total ionic concentration of about 1.13 mol/L. What pressure must be applied to prevent osmotic flow of pure water into sea water through a membrane permeable only to water molecules? This is a simple application of Equation \ref{8.4.3B}. \[ \begin{align*} \Pi &= MRT \\[4pt] &= (1.13\; mol /L)(0.0821\; L \,atm \,mol^{–1}\; K^{–1})(298\; K) \\[4pt] &= 27.6\; atm \end{align*}\] Since all of the colligative properties of solutions depend on the concentration of the solvent, their measurement can serve as a convenient experimental tool for determining the concentration, and thus the molecular weight, of a solute. Osmotic pressure is especially useful in this regard, because a small amount of solute will produce a much larger change in this quantity than in the boiling point, freezing point, or vapor pressure; even a \(10^{-6}\) molar solution would have a measurable osmotic pressure. Molecular weight determinations are very frequently made on proteins or other high molecular weight polymers.
These substances, owing to their large molecular size, tend to be only sparingly soluble in most solvents, so measurement of osmotic pressure is often the only practical way of determining their molecular weights. The osmotic pressure of a benzene solution containing 5.0 g of polystyrene per liter was found to be 7.6 torr at 25°C. Estimate the average molecular weight of the polystyrene in this sample. First convert the osmotic pressure to atmospheres: \[ \begin{align*} \Pi &= \dfrac{7.6\, torr}{760\, torr\, atm^{–1}} \\[4pt] &= 0.0100 \,atm \end{align*} \] Using the \(\Pi V = nRT\) form of the van 't Hoff equation (Equation \ref{8.4.3}), the number of moles of polystyrene is \(n\) = (0.0100 atm)(1 L) ÷ (0.0821 L atm mol\(^{-1}\) K\(^{-1}\))(298 K) = 4.09 × 10\(^{-4}\) mol. Molar mass of the polystyrene: (5.0 g) ÷ (4.09 × 10\(^{-4}\) mol) = 1.2 × 10\(^{4}\) g/mol. The experimental arrangement is quite simple: pure solvent is introduced into one side of a cell that is separated into two parts by a semipermeable membrane. The polymer solution is placed in the other side, which is enclosed and connected to a manometer or some other kind of pressure gauge. As solvent molecules diffuse into the solution cell the pressure builds up; eventually this pressure matches the osmotic pressure of the solution and the system is in osmotic equilibrium. The osmotic pressure is read from the measuring device and substituted into the van 't Hoff equation to find the number of moles of solute.
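Both worked examples above reduce to one-line applications of the van 't Hoff equation. A minimal illustrative sketch (not part of the original text):

```python
# The two worked examples above via the van 't Hoff equation,
# Pi = n R T / V = M R T.

R = 0.08206  # gas constant, L atm K^-1 mol^-1

def osmotic_pressure(molarity, T):
    """Osmotic pressure (atm) of an ideal solution, Pi = M R T;
    molarity is the total concentration of all solute particles."""
    return molarity * R * T

# Sea water, total ion concentration 1.13 mol/L at 25 C (298 K):
print(round(osmotic_pressure(1.13, 298), 1), "atm")  # ~27.6 atm

def molar_mass_from_osmotic(pi_torr, volume_L, grams, T):
    """Solute molar mass (g/mol) from an osmometry measurement,
    using n = Pi V / (R T)."""
    pi_atm = pi_torr / 760.0
    n = pi_atm * volume_L / (R * T)  # moles of solute
    return grams / n

# Polystyrene: 5.0 g per liter of benzene, Pi = 7.6 torr at 25 C:
print(round(molar_mass_from_osmotic(7.6, 1.0, 5.0, 298)))  # about 1.2e4 g/mol
```

Note that for a dissociating solute such as NaCl, the molarity passed in must count every ion separately, as the sea-water example does.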
After going through combustion in a bomb calorimeter a sample gives off 5,435 cal. The calorimeter experiences an increase of 4.27°C in its temperature. Using this information, determine the heat capacity of the calorimeter in kJ/°C. Referring to the example given above about the heat of combustion, calculate the temperature change that would occur in the combustion of 1.732 g \(C_{12}H_{22}O_{11}\) in a bomb calorimeter that had a heat capacity of 3.87 kJ/°C. Given the following data, calculate the heat of combustion in kJ/mol of xylose, \(C_{5}H_{10}O_{5}\)(s), used in a bomb calorimetry experiment: mass of \(C_{5}H_{10}O_{5}\)(s) = 1.250 g, heat capacity of calorimeter = 4.728 kJ/°C, initial temperature of the calorimeter = 24.37°C, final temperature of calorimeter = 28.29°C. Determine the heat capacity of the bomb calorimeter if 1.714 g of naphthalene, \(C_{10}H_{8}\)(s), experiences an 8.44°C increase in temperature after going through combustion. The heat of combustion of naphthalene is -5156 kJ/mol \(C_{10}H_{8}\). What is the heat capacity of the bomb calorimeter if a 1.232 g sample of benzoic acid causes the temperature to increase by 5.14°C? The heat of combustion of benzoic acid is -26.42 kJ/g. Use the equation \(q_{calorimeter} = \text{heat capacity of calorimeter} \times \Delta{T}\) to calculate the heat capacity: 5435 cal = heat capacity of calorimeter × 4.27°C. Heat capacity of calorimeter = (5435 cal/4.27°C) × (4.184 J/1 cal) × (1 kJ/1000 J) = 5.33 kJ/°C. The temperature should increase, since bomb calorimetry releases heat in an exothermic combustion reaction.
Change in temperature = (1.732 g \(C_{12}H_{22}O_{11}\)) × (1 mol \(C_{12}H_{22}O_{11}\)/342.3 g \(C_{12}H_{22}O_{11}\)) × (6.61 × 10³ kJ/1 mol \(C_{12}H_{22}O_{11}\)) × (1°C/3.87 kJ) = 8.64°C. Heat released per gram = [(heat capacity × change in temperature)/mass] = [(4.728 kJ/°C) × (28.29°C – 24.37°C)]/1.250 g = 14.8 kJ/g xylose; \(q_{rxn}\) = (-14.8 kJ/g xylose) × (150.13 g xylose/1 mol xylose) = -2.22 × 10³ kJ/mol xylose. Heat capacity = [(1.714 g \(C_{10}H_{8}\)) × (1 mol \(C_{10}H_{8}\)/128.2 g \(C_{10}H_{8}\)) × (5.156 × 10³ kJ/1 mol \(C_{10}H_{8}\))]/8.44°C = 8.17 kJ/°C. Heat capacity = [(1.232 g benzoic acid) × (26.42 kJ/1 g benzoic acid)]/5.14°C = 6.33 kJ/°C.
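The arithmetic in these answers can be reproduced with a short script (an illustrative sketch, not part of the original exercises):

```python
# Reproducing several bomb-calorimetry answers above from
# q = C_cal * dT, with the calorimeter heat capacity C_cal in kJ/C.

CAL_TO_KJ = 4.184 / 1000  # 1 cal = 4.184 J

# 1. Heat capacity from 5435 cal released and a 4.27 C rise:
C_cal = 5435 * CAL_TO_KJ / 4.27
print(round(C_cal, 2), "kJ/C")  # ~5.3 kJ/C

# 2. Temperature rise for 1.732 g sucrose (342.3 g/mol, releasing
#    6.61e3 kJ/mol) in a 3.87 kJ/C calorimeter:
dT = (1.732 / 342.3) * 6.61e3 / 3.87
print(round(dT, 2), "C")  # ~8.64 C

# 3. Heat of combustion of xylose (150.13 g/mol): 1.250 g warms a
#    4.728 kJ/C calorimeter from 24.37 to 28.29 C:
q_per_gram = 4.728 * (28.29 - 24.37) / 1.250
print(round(-q_per_gram * 150.13), "kJ/mol")  # about -2.2e3 kJ/mol
```

Each calculation is just the definition \(q = C_{cal}\,\Delta T\) rearranged for whichever quantity the problem asks for.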
At sports events around the world, a small number of athletes fiercely compete on fields and in stadiums. They get tired, dirty, and sometimes hurt as they try to win the game. Surrounding them are thousands of spectators watching and cheering. Would the game be different without the spectators? Definitely! Spectators provide encouragement to the team and generate enthusiasm. Although the spectators are not playing the game, they are certainly a part of the process. We can write a molecular equation for the formation of silver chloride precipitate: \[\ce{NaCl} + \ce{AgNO_3} \rightarrow \ce{NaNO_3} + \ce{AgCl}\nonumber \] The corresponding ionic equation is: \[\ce{Na^+} \left( aq \right) + \ce{Cl^-} \left( aq \right) + \ce{Ag^+} \left( aq \right) + \ce{NO_3^-} \left( aq \right) \rightarrow \ce{Na^+} \left( aq \right) + \ce{NO_3^-} \left( aq \right) + \ce{AgCl} \left( s \right)\nonumber \] If you look carefully at the ionic equation, you will notice that the sodium ion and the nitrate ion appear unchanged on both sides of the equation. When the two solutions are mixed, neither the \(\ce{Na^+}\) nor the \(\ce{NO_3^-}\) ions participate in the reaction. They can be eliminated from the reaction. \[\cancel{\ce{Na^+} \left( aq \right)} + \ce{Cl^-} \left( aq \right) + \ce{Ag^+} \left( aq \right) + \cancel{\ce{NO_3^-} \left( aq \right)} \rightarrow \cancel{\ce{Na^+} \left( aq \right)} + \cancel{\ce{NO_3^-} \left( aq \right)} + \ce{AgCl} \left( s \right)\nonumber \] A spectator ion is an ion that does not take part in the chemical reaction and is found in solution both before and after the reaction. In the above reaction, the sodium ion and the nitrate ion are both spectator ions. The equation can now be written without the spectator ions: \[\ce{Ag^+} \left( aq \right) + \ce{Cl^-} \left( aq \right) \rightarrow \ce{AgCl} \left( s \right)\nonumber \] The net ionic equation is the chemical equation that shows only those elements, compounds, and ions that are directly involved in the chemical reaction.
Notice that in writing the net ionic equation, the positively-charged silver cation was written first on the reactant side, followed by the negatively-charged chloride anion. This is somewhat customary because that is the order in which the ions must be written in the silver chloride product. However, it is not absolutely necessary to order the reactants in this way. Net ionic equations must be balanced by both mass and charge. Balancing by mass means ensuring that there are equal numbers of atoms of each element on the product and reactant sides. Balancing by charge means making sure that the overall charge is the same on both sides of the equation. In the above equation, the overall charge is zero, or neutral, on both sides of the equation. As a general rule, if you balance the molecular equation properly, the net ionic equation will end up being balanced by both mass and charge. When aqueous solutions of copper(II) chloride and potassium phosphate are mixed, a precipitate of copper(II) phosphate is formed. Write a balanced net ionic equation for this reaction. Write and balance the molecular equation first, making sure that all formulas are correct. Then write the ionic equation, showing all aqueous substances as ions. Carry through any coefficients. Finally, eliminate spectator ions and write the net ionic equation. Molecular equation: \[3 \ce{CuCl_2} \left( aq \right) + 2 \ce{K_3PO_4} \left( aq \right) \rightarrow 6 \ce{KCl} \left( aq \right) + \ce{Cu_3(PO_4)_2} \left( s \right)\nonumber \] Ionic equation: \[3 \ce{Cu^{2+}} \left( aq \right) + 6 \ce{Cl^-} \left( aq \right) + 6 \ce{K^+} \left( aq \right) + 2 \ce{PO_4^{3-}} \left( aq \right) \rightarrow 6 \ce{K^+} \left( aq \right) + 6 \ce{Cl^-} \left( aq \right) + \ce{Cu_3(PO_4)_2} \left( s \right)\nonumber \] Notice that the balance of the equation is carried through when writing the dissociated ions. 
For example, there are six chloride ions on the reactant side because the coefficient of 3 is multiplied by the subscript of 2 on the copper(II) chloride formula. The spectator ions, \(\ce{K^+}\) and \(\ce{Cl^-}\), can be eliminated. Net ionic equation: \[3 \ce{Cu^{2+}} \left( aq \right) + 2 \ce{PO_4^{3-}} \left( aq \right) \rightarrow \ce{Cu_3(PO_4)_2} \left( s \right)\nonumber \] For a precipitation reaction, the net ionic equation always shows the two ions that come together to form the precipitate. The equation is balanced by mass and charge.
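The cancellation procedure above is mechanical, and can be sketched in a few lines of code. The following Python sketch (the `net_ionic` helper and the species strings are illustrative, not part of any chemistry library) treats each side of the ionic equation as a multiset of species and removes whatever appears unchanged on both sides:

```python
from collections import Counter

def net_ionic(reactants, products):
    """Cancel species appearing unchanged on both sides (the spectator ions).

    Each side is a list of species strings such as "Na+(aq)"; a species is
    cancelled only as many times as it appears on both sides.
    """
    r, p = Counter(reactants), Counter(products)
    spectators = r & p                       # multiset intersection
    return list((r - spectators).elements()), list((p - spectators).elements())

# Ionic equation for NaCl(aq) + AgNO3(aq) -> NaNO3(aq) + AgCl(s)
lhs, rhs = net_ionic(
    ["Na+(aq)", "Cl-(aq)", "Ag+(aq)", "NO3-(aq)"],
    ["Na+(aq)", "NO3-(aq)", "AgCl(s)"],
)
print(lhs, "->", rhs)    # ['Cl-(aq)', 'Ag+(aq)'] -> ['AgCl(s)']
```

Note that a helper like this only drops spectator ions; it does not balance the equation, which must already be balanced by mass and charge.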
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Lipids/Fatty_Acids/Prostaglandins
Prostaglandins were first discovered and isolated from human semen in the 1930s by Ulf von Euler of Sweden. Thinking they had come from the prostate gland, he named them prostaglandins. It has since been determined that they exist and are synthesized in virtually every cell of the body. Prostaglandins are like hormones in that they act as chemical messengers, but they do not move to other sites; they work right within the cells where they are synthesized. Prostaglandins are unsaturated carboxylic acids consisting of a 20-carbon skeleton that also contains a five-membered ring. They are biochemically synthesized from the fatty acid arachidonic acid. See the graphic on the left. The unique shape of arachidonic acid, caused by a series of cis double bonds, helps to put it into position to make the five-membered ring. See the prostaglandin in the next panel. There are a variety of structures with one, two, or three double bonds. On the five-membered ring there may also be double bonds, a ketone, or alcohol groups. A typical structure is in the left graphic. Prostaglandins produce a variety of physiological effects. When you see that prostaglandins induce inflammation, pain, and fever, what comes to mind but aspirin? Aspirin blocks an enzyme called cyclooxygenase (COX-1 and COX-2), which is involved with the ring closure and addition of oxygen to arachidonic acid, converting it to prostaglandins. The acetyl group on aspirin is hydrolyzed and then bonded to the alcohol group of a serine residue as an ester. This has the effect of blocking the channel in the enzyme, so arachidonic acid cannot enter the active site of the enzyme. By inhibiting or blocking this enzyme, the synthesis of prostaglandins is blocked, which in turn relieves some of the effects of pain and fever. 
Aspirin is also thought to inhibit the prostaglandin synthesis involved with unwanted blood clotting in coronary heart disease. At the same time, an injury sustained while taking aspirin may bleed more extensively.
https://chem.libretexts.org/Bookshelves/General_Chemistry/General_Chemistry_Supplement_(Eames)/Thermochemistry/Kinetic_and_Potential_Energy
You are probably familiar with these types of energy from a physics class. Energy is the capacity (ability, sort of) to do work. You have a sense of what work is from regular life: it's things that require effort. Energy and work have the same units. Kinetic energy is the energy that comes from motion. The equation for kinetic energy is \[KE=\frac{1}{2}mv^{2}\] where KE is kinetic energy, m is mass, and v is velocity. This definition should make sense: big things moving fast have the most energy, the most ability to shove other things or knock them over, etc. Potential energy is energy that comes from position and a force. For instance, gravitational potential energy is the energy that things have if they are high up. If they fall, their potential energy will turn into kinetic energy because they are accelerated by gravity. The equation for potential energy from gravity is \[PE=mgh\] where PE is potential energy, m is mass, g is the acceleration of gravity, and h is the height. This makes the units of energy very clear: mass x distance x acceleration, or force x distance, which comes to kg•m²/s². In chemistry, the force that leads to potential energy is almost always the Coulomb force, not gravity. In this case, the potential energy from 2 charges near each other is \[PE=\frac{kQq}{d}\] where q and Q are the 2 charges, d is the distance between them, and k is a constant, 8.99 x 10⁹ J•m/C². (Joules, J, are the SI unit of energy, and coulombs, C, are the SI unit of charge.) When the charges have the same sign, they repel and will accelerate away from each other if allowed to move; the potential energy has a positive sign. When the charges have the opposite sign, they attract each other and have negative potential energy. If they are allowed to get closer together, the potential energy will get more negative. If they are separated, d gets bigger and the potential energy approaches zero. Conservation of Energy You probably learned about conservation of energy already in a physics class. 
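The three formulas just given can be checked numerically. A quick Python sketch, with made-up masses, heights, and charges (the helper names are ours):

```python
# Illustrative numerical checks of KE = 1/2 m v^2, PE = m g h, and
# PE = k Q q / d. All inputs are made-up values in SI units.

def kinetic_energy(m, v):
    """Kinetic energy in J, with m in kg and v in m/s."""
    return 0.5 * m * v**2

def gravitational_pe(m, h, g=9.81):
    """Gravitational potential energy in J, with h in m."""
    return m * g * h

def coulomb_pe(Q, q, d, k=8.99e9):
    """Coulomb potential energy in J, with charges in C and d in m."""
    return k * Q * q / d

print(kinetic_energy(2.0, 3.0))       # 9.0 J
print(gravitational_pe(1.0, 10.0))    # about 98.1 J

# Two electrons 1 nm apart: like charges, so the PE is positive (repulsion)
e = -1.602e-19
print(coulomb_pe(e, e, 1e-9) > 0)     # True
```

Flipping the sign of one charge makes the Coulomb potential energy negative, matching the attraction case described above.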
For instance, if you have a pendulum as shown, at Position 1 the weight has some potential energy, but no kinetic energy. When you release the weight, the weight falls, moving through Position 2. At Position 2, some of the potential energy has been converted to kinetic energy. Finally, at Position 3, all the potential energy has been converted to kinetic energy. As it passes Position 3, the process is reversed, and kinetic energy is converted to potential energy. When the weight reaches Position 4, all the kinetic energy has been converted back to the same amount of potential energy it started with at Position 1. This is just one example of conservation of energy. It is a general observation that the amount of energy in the universe doesn't change, and the amount of energy in a particular system doesn't change unless there is a flow of energy in or out.
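The pendulum bookkeeping can be verified numerically: the potential energy at Position 1 equals the kinetic energy at Position 3. A short Python sketch with illustrative values:

```python
import math

# Pendulum energy bookkeeping: total mechanical energy (PE + KE) is the
# same at every position. The mass and drop height are made-up values.
m, g = 0.50, 9.81      # kg, m/s^2
h_top = 0.20           # height of Position 1 above Position 3, in m

pe_top = m * g * h_top                   # Position 1: all potential
v_bottom = math.sqrt(2 * g * h_top)      # from 1/2 m v^2 = m g h
ke_bottom = 0.5 * m * v_bottom**2        # Position 3: all kinetic

print(pe_top, ke_bottom)   # the two values agree: energy is conserved
```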
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Enzyme_Inhibition
Although activation of enzymes may be exploited therapeutically, most effects are produced by enzyme inhibition. Inhibition caused by drugs may be either reversible or irreversible. A reversible situation occurs when an equilibrium can be established between the enzyme and the inhibitory drug. Competitive inhibition occurs when the drug, as a "mimic" of the normal substrate, competes with the normal substrate for the active site on the enzyme. Concentration effects are important for competitive inhibition. In noncompetitive inhibition, the drug combines with an enzyme at a site other than the active site. The normal substrate cannot displace the drug from this site and cannot interact with the active site either, since the shape of the enzyme has been altered. Among the many types of drugs that act as enzyme inhibitors, the following may be included: antibiotics, acetylcholinesterase agents, certain antidepressants such as monoamine oxidase inhibitors, and some diuretics. Many drugs act as suppressors of gene function, including antibiotics, fungicides, antimalarials, and antivirals. Gene function may be suppressed at several steps of protein synthesis or by inhibition of nucleic acid biosynthesis. Many substances which inhibit nucleic acid biosynthesis are very toxic, since the drug is not very selective in its action between the parasite and host. The strategy of chemotherapy consists of exploiting the biochemical differences between the host and parasite cells. Metabolites are any substances used or produced by biochemical reactions. A drug which possesses a remarkably close chemical similarity (mimic) to the normal metabolite is called an antimetabolite. The antimetabolite enters a normal synthetic reaction by "fooling" an enzyme and producing a counterfeit metabolite. The counterfeit metabolite inhibits another enzyme or is an unusable fraudulent end product which cannot be utilized by the cell for growth or reproduction. 
Such antimetabolites have been used as antibacterial or anticancer agents.
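The concentration effects mentioned above for competitive versus noncompetitive inhibition can be made concrete with the standard Michaelis-Menten rate laws, which this section does not give explicitly; the following Python sketch uses hypothetical kinetic constants:

```python
def v_competitive(S, Vmax, Km, I, Ki):
    """Michaelis-Menten rate with a competitive inhibitor: the inhibitor
    raises the apparent Km, but a large enough [S] still approaches Vmax."""
    return Vmax * S / (Km * (1 + I / Ki) + S)

def v_noncompetitive(S, Vmax, Km, I, Ki):
    """Noncompetitive inhibitor: Vmax itself is lowered, so no amount of
    substrate can displace the drug and restore full activity."""
    return (Vmax / (1 + I / Ki)) * S / (Km + S)

Vmax, Km, I, Ki = 100.0, 1.0, 5.0, 1.0   # arbitrary illustrative constants

# Flooding the system with substrate rescues competitive inhibition...
print(v_competitive(1000.0, Vmax, Km, I, Ki))     # close to 100
# ...but not noncompetitive inhibition:
print(v_noncompetitive(1000.0, Vmax, Km, I, Ki))  # stuck near 100/6
```

Raising the substrate concentration restores nearly full activity against a competitive inhibitor but not against a noncompetitive one, matching the description above: the normal substrate cannot displace a drug bound away from the active site.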
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/16%3A_Appendix/16.14%3A_Random_Number_Table
The following table provides a list of random numbers in which the digits 0 through 9 appear with approximately equal frequency. Numbers are arranged in groups of five to make the table easier to view. This arrangement is arbitrary, and you can treat the table as a sequence of random individual digits (1, 2, 1, 3, 7, 4...going down the first column of digits on the left side of the table), as a sequence of three digit numbers (111, 212, 104, 367, 739... using the first three columns of digits on the left side of the table), or in any other similar manner. Let’s use the table to pick 10 random numbers between 1 and 50. To do so, we choose a random starting point, perhaps by dropping a pencil onto the table. For this exercise, we will assume that the starting point is the fifth row of the third column, or 12032 (highlighted below). Because the numbers must be between 1 and 50, we will use the last two digits, ignoring all two-digit numbers less than 01 or greater than 50. Proceeding down the third column, and moving to the top of the fourth column if necessary, gives the following 10 random numbers: 32, 01, 05, 16, 15, 38, 24, 10, 26, 14. These random numbers (1000 total digits) are a small subset of values from the publication (Rand Corporation, 2001) and used with permission. Information about the publication, and a link to a text file containing the million random digits is available at .
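The walk through the table can be mimicked in code. In this Python sketch the five-digit groups are invented stand-ins (only their final two digits mirror the picks worked out above), since the table itself is not reproduced here:

```python
def pick_numbers(groups, lo=1, hi=50, want=10):
    """Read the last two digits of each five-digit group, keeping values
    between lo and hi, until `want` numbers have been collected."""
    picks = []
    for g in groups:
        n = int(g[-2:])            # last two digits of the group
        if lo <= n <= hi:
            picks.append(n)
        if len(picks) == want:
            break
    return picks

# Invented groups whose final two digits reproduce the picks in the text;
# "55573" (73) and "00200" (00) fall outside 01-50 and are skipped.
groups = ["12032", "87401", "55605", "93916", "20815", "55573", "00200",
          "64738", "11924", "73510", "00226", "98314"]
print(pick_numbers(groups))    # [32, 1, 5, 16, 15, 38, 24, 10, 26, 14]
```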
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_-_The_Central_Science_(Brown_et_al.)/15%3A_Chemical_Equilibrium
We introduced the concept of equilibrium in an earlier chapter, where you learned that a liquid and a vapor are in equilibrium when the number of molecules evaporating from the surface of the liquid per unit time is the same as the number of molecules condensing from the vapor phase. Vapor pressure is an example of a physical equilibrium because only the physical form of the substance changes. Similarly, we discussed saturated solutions, another example of a physical equilibrium, in which the rate of dissolution of a solute is the same as the rate at which it crystallizes from solution. In this chapter, we describe the methods chemists use to quantitatively describe the composition of chemical systems at equilibrium, and we discuss how factors such as temperature and pressure influence the equilibrium composition. As you study these concepts, you will also learn how urban smog forms and how reaction conditions can be altered to produce H₂ rather than the combustion products CO₂ and H₂O from the methane in natural gas. You will discover how to control the composition of the gases emitted in automobile exhaust and how synthetic polymers such as the polyacrylonitrile used in sweaters and carpets are produced on an industrial scale.
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Metabolism/Catabolism/Glycolysis
Glycolysis is the process in which glucose is converted into pyruvate via ten enzymatic steps, three of which are highly regulated. There are two phases of Glycolysis. The end result of Glycolysis is two new pyruvate molecules, which can then be fed into the Citric Acid cycle (also known as the Krebs cycle) if oxygen is present, or can be reduced to lactate or ethanol in the absence of oxygen using a process known as fermentation. Glycolysis occurs within almost all living cells and is the primary source of Acetyl-CoA, which is the molecule responsible for the majority of energy output under aerobic conditions. The structures of Glycolysis intermediates can be found in the following diagram: The first phase of Glycolysis requires an input of energy in the form of ATP (adenosine triphosphate). The second phase of Glycolysis is the "pay off" phase, where 4 molecules of ATP are produced per molecule of glucose. Enzymes appear in red. Because glucose is split to yield two molecules of D-glyceraldehyde-3-phosphate, each step in the "pay off" phase occurs twice per molecule of glucose.  
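The two-phase accounting above can be tallied numerically. A small sketch (the 4-ATP payoff figure is from the text; the 2-ATP investment in the first phase is standard textbook glycolysis stoichiometry, added here as an assumption):

```python
# ATP bookkeeping per molecule of glucose. The 4-ATP payoff is stated in
# the text; the 2-ATP investment in the first phase is standard glycolysis
# stoichiometry (the hexokinase and phosphofructokinase steps).
atp_invested_phase1 = 2
atp_produced_phase2 = 4      # 2 per D-glyceraldehyde-3-phosphate, x2 halves
net_atp = atp_produced_phase2 - atp_invested_phase1
pyruvate_per_glucose = 2

print(net_atp, pyruvate_per_glucose)   # 2 2
```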
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Supplemental_Modules_(Analytical_Chemistry)/Electrochemistry/Nernst_Equation
The Nernst equation enables the determination of cell potential under non-standard conditions. It relates the measured cell potential to the reaction quotient and allows the accurate determination of equilibrium constants (including solubility constants). The Nernst equation is derived from the Gibbs free energy change. \[E^o = E^o_{reduction} - E^o_{oxidation} \label{1}\] \(\Delta{G}\) is also related to \(E\) under general conditions (standard or not) via \[\Delta{G} = -nFE \label{2}\] with \(n\) the number of moles of electrons transferred, \(F\) the Faraday constant, and \(E\) the cell potential. Under standard conditions, Equation \ref{2} is then \[\Delta{G}^{o} = -nFE^{o}. \label{3}\] Hence, when \(E^o\) is positive, the reaction is spontaneous and when \(E^o\) is negative, the reaction is non-spontaneous. From thermodynamics, the Gibbs energy change under non-standard conditions can be related to the Gibbs energy change under standard conditions via \[\Delta{G} = \Delta{G}^o + RT \ln Q \label{4}\] Substituting \(\Delta{G} = -nFE\) and \(\Delta{G}^{o} = -nFE^{o}\) into Equation \ref{4}, we have: \[-nFE = -nFE^o + RT \ln Q \label{5}\] Dividing both sides of the equation above by \(-nF\), we have \[E = E^o - \dfrac{RT}{nF} \ln Q \label{6}\] Equation \ref{6} can be rewritten in the form of \(\log_{10}\): \[E = E^o - \dfrac{2.303 RT}{nF} \log_{10} Q \label{Generalized Nernst Equation}\] At standard temperature T = 298 K, the \(\frac{2.303 RT}{F}\) term equals 0.0592 V and Equation \ref{Generalized Nernst Equation} can be rewritten: \[E = E^o - \dfrac{0.0592\, V}{n} \log_{10} Q \label{Nernst Equation @ 298 K}\] The equation above indicates that the electrical potential of a cell depends upon the reaction quotient \(Q\) of the reaction. As the redox reaction proceeds, reactants are consumed and their concentrations decrease. Conversely, product concentrations increase as products form. As this happens, the cell potential gradually decreases until the reaction reaches equilibrium, at which \(\Delta{G} = 0\). At equilibrium, the reaction quotient \(Q = K_{eq}\). 
Also, at equilibrium, \(\Delta{G} = 0\) and \(\Delta{G} = -nFE\), so \(E = 0\). Therefore, substituting \(Q = K_{eq}\) and \(E = 0\) into the Nernst equation, we have: \[0 = E^o - \dfrac{RT}{nF} \ln K_{eq} \label{7}\] At room temperature, Equation \ref{7} simplifies into (notice natural log was converted to log base 10): \[0 = E^o - \dfrac{0.0592\, V}{n} \log_{10} K_{eq} \label{8}\] This can be rearranged into: \[\log K_{eq} = \dfrac{nE^o}{0.0592\, V} \label{9}\] The equation above indicates that \(E^o\) determines the equilibrium constant \(K_{eq}\) of the reaction. This result fits Le Chatelier's principle, which states that when a system at equilibrium experiences a change, the system will minimize that change by shifting the equilibrium in the opposite direction. Consider the Zn-Cu redox reaction, for which \(E^{o}_{cell} = +1.10 \; V\): \[Zn_{(s)} + Cu^{2+}_{(aq)} \rightleftharpoons Zn^{2+}_{(aq)} + Cu_{(s)}.\] What is the equilibrium constant for this reversible reaction? Under standard conditions, \([Cu^{2+}] = [Zn^{2+}] = 1.0\, M\) and T = 298 K. As the reaction proceeds, \([Cu^{2+}]\) decreases as \([Zn^{2+}]\) increases. Let's say after one minute, \([Cu^{2+}] = 0.05\, M\) while \([Zn^{2+}] = 1.95\, M\). According to the Nernst equation, the cell potential after 1 minute is: \[E = E^o - \dfrac{0.0592 V}{n} \log Q\] \[E = 1.10V - \dfrac{0.0592 V}{2} \log\dfrac{1.95 \; M}{0.05 \; M}\] \[E = 1.05 \; V\] As you can see, the initial cell potential is \(E = 1.10\, V\); after 1 minute, the potential drops to 1.05 V. This is after 95% of the reactants have been consumed. As the reaction continues to progress, more \(Cu^{2+}\) will be consumed and more \(Zn^{2+}\) will be generated (at a 1:1 ratio). As a result, the cell potential continues to decrease, and when the cell potential drops to 0, the concentrations of reactants and products stop changing. This is when the reaction is at equilibrium. 
From Equation \ref{9}, \(K_{eq}\) can be calculated from \[\begin{align} \log K_{eq} & = \dfrac{2 \times 1.10\, V}{0.0592\,V}\\ & = 37.2 \end{align}\] \[K_{eq} = 10^{37.2}= 1.58 \times 10^{37}\] This makes sense from a thermodynamic standpoint, since the reaction strongly favors the products over the reactants, resulting in the large \(E^{o}_{cell}\) of 1.10 V. Hence, the cell is greatly out of equilibrium under standard conditions. Reactions that are only weakly out of equilibrium will have smaller \(E^{o}_{cell}\) values (neglecting a change in \(n\), of course).
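The worked Zn-Cu numbers lend themselves to a quick numerical check. A Python sketch of the Nernst equation at 298 K (the `nernst` helper name is ours):

```python
import math

# Re-checking the worked Zn-Cu example with the Nernst equation at 298 K.
def nernst(E0, n, Q):
    """E = E0 - (0.0592 V / n) * log10(Q)."""
    return E0 - (0.0592 / n) * math.log10(Q)

E0, n = 1.10, 2
# After one minute: [Zn2+] = 1.95 M, [Cu2+] = 0.05 M, so Q = 1.95/0.05
E = nernst(E0, n, 1.95 / 0.05)
print(round(E, 2))             # 1.05 V, as in the example

# At equilibrium E = 0, so log10(Keq) = n * E0 / 0.0592
log_Keq = n * E0 / 0.0592
print(round(log_Keq, 1))       # 37.2
print(10 ** log_Keq)           # on the order of 10^37
```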
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Organic_Chemistry_(Morsch_et_al.)/18%3A_Ethers_and_Epoxides_Thiols_and_Sulfides
We shall begin in a very traditional manner, with a discussion of the nomenclature of ethers. We will then describe how ethers may be prepared in the laboratory, and discuss the relative inertness of these compounds. A discussion of the chemistry of cyclic ethers follows, with particular emphasis on the preparation and reactions of epoxides (cyclic ethers containing a three-membered ring). We will then introduce crown ethers (compounds that consist of large rings containing several oxygen atoms) and the spectroscopic properties of ethers. The unit will close with a description of the chemistry of thiols and sulfides, the sulfur-containing analogues of alcohols and ethers.
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/16%3A_Appendix/16.12%3A_Formation_Constants
The following table provides \(K_i\) and \(\beta_i\) values for selected metal–ligand complexes, arranged by the ligand. All values are from Martell, A. E.; Smith, R. M. , Vols. 1–4. Plenum Press: New York, 1976. Unless otherwise stated, values are for 25 °C and zero ionic strength. Those values in brackets are considered less reliable. The ligands covered are:
Acetate, \(\ce{CH3COO-}\)
Ammonia, \(\ce{NH3}\)
Chloride, \(\ce{Cl-}\) (for Ag, \(\mu = 5.0 \text{ M}\))
Cyanide, \(\ce{CN-}\)
Ethylenediamine, \(\ce{H2NCH2CH2NH2}\)
EDTA
Fluoride, \(\ce{F-}\)
Hydroxide, \(\ce{OH-}\)
Iodide, \(\ce{I-}\)
Nitrilotriacetate
Oxalate, \(\ce{C2O4^{2-}}\)
1,10-phenanthroline
Thiosulfate, \(\ce{S2O3^{2-}}\)
Thiocyanate, \(\ce{SCN-}\)
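Stepwise constants \(K_i\) and overall constants \(\beta_i\) are related by \(\beta_n = K_1 K_2 \cdots K_n\), so their logarithms add. A quick Python sketch (the log K values below are illustrative, roughly those for the silver-ammonia system, and are not taken from the table):

```python
# Stepwise and overall formation constants are related by
# beta_n = K1 * K2 * ... * Kn, so the log values simply add.
# The numbers below are illustrative, roughly those for the
# silver-ammonia system (Ag+ + 2 NH3 <-> Ag(NH3)2+), not table values.
logK1, logK2 = 3.31, 3.91
log_beta2 = logK1 + logK2      # log beta_2 = log K1 + log K2
beta2 = 10 ** log_beta2

print(log_beta2)   # 7.22 (within floating-point rounding)
```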
https://chem.libretexts.org/Bookshelves/Environmental_Chemistry/Geochemistry_(Lower)/04%3A_The_Biosphere/4.02%3A_Biogeochemical_Evolution
Present evidence suggests that blue-green algae, and possibly other primitive microbial forms of life, were flourishing 3 billion years ago. This brackets the origin of life to within one billion years; prior to 4 billion years ago, surface temperatures were probably above the melting point of iron, and there was no atmosphere or hydrosphere. By about 3.8 billion years ago, or one billion years after the earth was formed, cooling had occurred to the point where rain was possible, and primitive warm, shallow oceans had formed. The atmosphere was anoxic and highly reducing, containing mainly CO₂, N₂, CO, H₂O, H₂S, traces of H₂, NH₃, CH₄, and less than 1% of the present amount of O₂, probably originating from the photolysis of water vapor. This oxygen would have been taken up quite rapidly by the many abundant oxidizable substances such as Fe(II), H₂S, and the like. The fossil record that preserves the structural elements of organisms in sedimentary deposits has for some time provided a reasonably clear picture of the evolution of life during the past 750 million years. In more recent years, this record has been considerably extended, as improved techniques have made it possible to study the impressions made by single-celled microorganisms embedded in rock formations. The main difficulty in studying fossil microorganisms extending back beyond a billion years is in establishing that the relatively simple structural forms one observes are truly biogenic. There are three major kinds of evidence for this. If all three of these lines of evidence are present in samples that can be shown to be contemporaneous with the sediments in which they are found, then the argument for life is incontrovertible. One of the most famous of these sites was discovered near Thunder Bay, Ontario in the early 1950s. The Gunflint Formation consists of an exposed layer of chert (largely silica) from which the overlying shale of the Canadian Shield had been removed. 
Microscopic examination of thin sections of this rock revealed a variety of microbial cell forms, including some resembling present freshwater blue-green algae. Also present in the Gunflint Deposits are the oldest known examples of metazoa, or organisms which display a clear differentiation into two or more types of cell. These deposits have been dated at 1.9-2.0 billion years. The evidence from very old paleomicrobiotic deposits is less clear. Western Australia has yielded fossil forms that are apparently 2.8 billion years old, and other deposits in the same region contain structures resembling living blue-green algae. Other forms, heavily modified by chemical infiltration, bear some resemblance to a present iron bacterium, and are found in sediments laid down 3.5 billion years ago, but evidence that these fossils are contemporaneous with the sediments in which they are found is not convincing. The oldest evidence of early life is the observed depletion of carbon-13 (¹³C) in 3.8-billion-year-old rocks found in southwestern Greenland. Under the conditions that prevailed at this time, most organic molecules would be thermodynamically stable, and there is every indication that a rich variety of complex molecules would be present. The most direct evidence of this comes from laboratory experiments that attempt to simulate the conditions of the primitive environment of this period, the first and most famous of these being the one carried out by Stanley Miller in 1953. Since that time, other experiments of a similar nature have demonstrated the production of a wide variety of compounds under prebiotic conditions, including nearly all of the monomeric components of the macromolecules present in living organisms. In addition, small macromolecules, including peptides and sugars, as well as structural entities such as lipid-based micelles, have been prepared in this way. 
The discovery in 1989 of a number of amino acids in the iridium-rich clay layer at the Cretaceous-Tertiary boundary suggests that bio-precursor molecules can be formed or deposited during a meteoric impact. Although this particular event occurred only 65 million years ago (and is presumed to be responsible for the extinction of the dinosaurs), the Earth has always been subject to meteoric impacts, and it is conceivable that these have played a role in the origin of life. The presence of clays, whose surfaces are both asymmetric and chemically active, could have favored the formation of species of a particular chirality; a number of experiments have shown that clay surfaces can selectively adsorb amino acids which then form small peptides. It has been suggested that the highly active and ordered surfaces of clays not only played a crucial role in the formation of life, but might have actually served as parts of the first primitive self-replicating life forms, which only later evolved into organic species. Since no laboratory experiment has yet succeeded in producing a self-replicating species that can be considered living, the mechanism by which this came about in nature must remain speculative. Infectious viruses have been made in the laboratory by simply mixing a variety of nucleotide precursors with a template nucleic acid and a replicase enzyme; the key to the creation of life is how to do the same thing without the template and the enzyme. Smaller polynucleotides may have formed adventitiously, possibly on the active surface of an inorganic solid. These could form complementary base-paired polymers, which might then serve as the templates for larger molecules. Non-enzymatic template-directed synthesis of nucleotides has been demonstrated in the laboratory, but the resulting polymers have linkages that are not present in natural nucleotides. 
It has been suggested that these linkages could have been selectively hydrolyzed by a long period of cycling between warm, cool, wet, and dry environmental conditions. The earth at that time was rotating more rapidly than it is now; cycles of hydration-dehydration and of heating-cooling would have been more frequent and more extreme. The first organisms would of necessity have been heterotrophs—that is, they derived their metabolic energy from organic compounds in the environment. Their capacity to synthesize molecules was probably very limited, and they would have had to absorb many key substances from their surroundings in order to maintain their metabolic activity. Among the most primitive organisms of this kind are the archaeons, which are believed to be predecessors of both bacteria and eucaryotes. DNA sequencing of one such organism, a methane-producer that lives in ocean-bottom sediments at 200 atm and 48-94°C, reveals that only about a third of the genes resemble those of bacteria or eucaryotes. It has been estimated that about 50 genes are required in order to define the minimal biochemical and structural machinery that a hypothetical simplest possible cell would have. The earliest organisms derived their metabolic energy from the organic substances present in their environment; once they began to reproduce, this nutrient source began to become depleted. Some species had probably by this time developed the ability to reduce carbon dioxide to methane; the hydrogen source could at first have been H₂ itself (at that time much more abundant in the atmosphere), and later, various organic metabolites from other species could have served. Before the food supply neared exhaustion, some of these organisms must have developed at least a rudimentary means of absorbing sunlight and using this energy to synthesize metabolites. 
The source of hydrogen for the reduction of CO₂ was at first small organic molecules; later photosynthetic organisms were able to break this dependence on organic nutrients and obtain the hydrogen from H₂S. These bacterial forms were likely the dominant form of life for several hundred million years. Eventually, due perhaps to the failing supply of H₂S, plants capable of mediating the photochemical extraction of hydrogen from water developed. This represented a large step in biochemical complexity; it takes 10 times as much energy to abstract hydrogen from water as from hydrogen sulfide, but the supply is virtually limitless. It appears that photosynthesis evolved in a kind of organism whose present-day descendents are known as cyanobacteria. The five “kingdoms” into which living organisms are classified are Monera, Protista (protozoans, algae), Fungi, Plantae, and Animalia. The genetic (and thus, evolutionary) relations between these and the subcategories within them are depicted below. Superimposed on this, however, is an even more fundamental division between the procaryotes and eucaryotes. The procaryotes are primitive organisms whose single cells contain no nucleus; the gene-bearing structure is a single long DNA chain that is folded irregularly throughout the cell. Procaryotic cells usually reproduce by budding or division; where sexual reproduction does occur, there is a net transfer of some genetic material from one cell to another, but there is never an equal contribution from both parents. In spite of their primitive nature, procaryotes constitute the majority of organisms in the biosphere. The division between bacteria and archaea within the procaryotic group is a fairly recent one. 
Archaea are now believed to be the most primitive of all organisms, and include the so-called extremophiles that occupy environmental niches in which life was at one time thought to be impossible; they have been found in sedimentary rocks, hot springs, and highly saline environments. All other organisms—seaweeds (algae), protozoa, molds, fungi, animals and plants—are composed of eucaryotic cells. These all have a membrane-bound nucleus, and with a few exceptions they all reproduce by mitosis, in which the chromosomes split longitudinally and move toward opposite poles. Other organelles unique to eucaryotes are mitochondria, ribosomes, and structural elements such as microtubules. Oxygen is poisonous to all forms of life in the absence of enzymes that can reduce the highly reactive byproducts of oxidation and oxidative metabolism (peroxides, superoxides, etc.). All organic compounds are thermodynamically unstable in the presence of oxygen; carbon-carbon double bonds in lipids are subject to rapid attack. Prebiotic chemical evolution leading to the development of biopolymers was possible only under the reducing, anoxic conditions of the primitive atmosphere. As the oxygen concentration began to rise, organisms in contact with the atmosphere had to develop protective mechanisms in order to survive. One indication of such adaptation is the discovery of fossil microbes whose cell walls are unusually thick. A more useful kind of adaptation was the synthesis of compounds that would detoxify oxygen by reacting rapidly either with O₂ itself or with peroxides and other active species derived from it. Isoprenoids (the precursors of steroids) and porphyrins are examples of two general classes of compounds that are found in nearly all organisms, and which may have originated in this way. Later, highly efficient oxygen mediating enzymes such as peroxidase and catalase developed. The widespread phenomenon of bioluminescence may be the result of a very early adaptation to oxygen. 
The compound luciferin is a highly efficient oxygen detoxifier, which also happens to be able to emit light under certain conditions. Bioluminescence probably developed as a by-product in early procaryotic organisms, but was gradually lost as more efficient detoxifying mechanisms became available. In spite of the deleterious effects of oxygen on cell biomolecules, O₂ is nevertheless an excellent electron sink, capable of releasing large quantities of energy through the oxidation of glucose. This energy can be efficiently captured through oxidative phosphorylation, the key process in respiration. A cell that utilizes oxygen must have a structural organization that isolates the oxygen-consuming respiratory centers from the other parts of the cell that would be poisoned by oxygen or its reaction products. Some procaryotic organisms have developed in this way; a number of cyanobacteria and other species are facultative anaerobes which can survive both in the presence and absence of oxygen. It is in the eucaryotic cell, however, that this organization is fully elaborated; here, respiration occurs in membrane-bound organelles called mitochondria. With only a few exceptions, all eucaryotic organisms are obligate aerobes; they can rarely survive and can never reproduce in the absence of oxygen. Mitotic cell division depends on the contractile properties of the protein actomyosin, which only forms when oxygen is present. The development of the eucaryotic cell about 1.4 billion years ago is regarded as the most significant event in the evolution of the earth and of the biosphere since the appearance of photosynthesis and the origin of life itself. How did it come about? The present belief, supported by an increasing amount of evidence, suggests that it began when one species of organism engulfed another. 
The ingested organism possessed biochemical machinery not present in the host, but which was retained in such a way that it conferred a selective evolutionary advantage on the host. Eventually the two organisms became able to reproduce as one, and so effectively became a single organism. This process is known as endosymbiosis. According to this view, mitochondria represent the remains of a primitive oxygen-tolerant organism that was incorporated into one that could produce the glucose fuel for the oxygen to burn. Chloroplasts were once free-living photosynthesizing procaryotes similar to present-day cyanobacteria. It is assumed that some of these began parasitising respiratory organisms, conferring upon them the ability to synthesize their carbohydrate food during daylight. The immense selective advantage of this arrangement is evident in the extent of the plant kingdom. It is interesting that an atmospheric oxygen concentration of about 1 percent, known as the Pasteur point, is both the maximum that obligate anaerobes can tolerate, and the minimum required for oxidative phosphorylation. Louis Pasteur discovered that some bacteria are anaerobic and unable to tolerate oxygen above 1% concentration. As was mentioned previously, the oxygen produced by the first photosynthetic organisms was taken up by ferrous iron in sediments and surface minerals. The widespread deposits known as banded iron formations consist of alternating layers of Fe(III)-containing oxides (hematite and magnetite) and iron-poor silica (chert) that were laid down between 1 and 2 billion years ago; the layering may reflect changing climatic or other environmental conditions that brought about a cycling of the organism population. During the buildup of oxygen, an equivalent amount of carbon had to be deposited in sediments in order to avoid the thermodynamically spontaneous back reaction which would consume the O2 through oxidation of the organic matter. 
Thus the present levels of atmospheric oxygen are due to a time lag in the geochemical cycling of photosynthetic products. As the oxygen concentration increased, evolution seems to have speeded up; this may reflect both the increased metabolic efficiency and the greater biochemical complexity of the eucaryotic cell. The oldest metazoan (multiple-celled) fossils are coelenterates that appeared about 700 million years ago. Modern representatives of this group such as marine worms and jellyfish can tolerate oxygen concentrations as low as 7%, thus placing a lower boundary on the atmospheric oxygen content of that era. The oldest fossil organisms believed to have possessed gills, which function only above 10% oxygen concentration, appeared somewhat later. Carbon dioxide decreased as oxygen increased, as indicated by the prevalence of dolomite over limestone in early marine sediments.
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Book3A_Medicines_by_Design/04%3A_Molecules_to_Medicines/4.06%3A_The_G_Switch
G proteins act like relay batons to pass messages from circulating hormones into cells. Imagine yourself sitting on a cell, looking outward to the bloodstream rushing by. Suddenly, a huge glob of something hurls toward you, slowing down just as it settles into a perfect dock on the surface of your cell perch. You don't realize it, but your own body sent this substance—a hormone called epinephrine—to protect you, telling you to get out of the way of a car that just about sideswiped yours while drifting out of its lane. Your body reacts, whipping up the familiar, spine-tingling, "fight-or-flight" response that gears you to respond quickly to potentially threatening situations such as this one. How does it all happen so fast? Getting into a cell is a challenge, a strictly guarded process kept in control by a protective gate called the plasma membrane. Figuring out how molecular triggers like epinephrine communicate important messages to the inner parts of cells earned two scientists the Nobel Prize in Physiology or Medicine in 1994. Getting a cellular message across the membrane is called signal transduction, and it occurs in three steps. First, a message (such as epinephrine) encounters the outside of a cell and makes contact with a molecule on the surface called a receptor. Next, a connecting transducer, or switch molecule, passes the message inward, sort of like a relay baton. Finally, in the third step, the signal gets amplified, prompting the cell to do something: move, produce new proteins, even send out more signals. One of the Nobel Prize winners, pharmacologist Alfred G. Gilman of the University of Texas Southwestern Medical Center at Dallas, uncovered the identity of the switch molecule, called a G protein. Gilman named the switch, which is actually a huge family of switch molecules, not after himself but after the type of cellular fuel it uses: an energy currency called GTP. As with any switch, G proteins must be turned on only when needed, then shut off. 
Some illnesses, including fatal diseases like cholera, occur when a G protein is errantly left on. In the case of cholera, the poisonous weaponry of the cholera bacterium "freezes" in place one particular type of G protein that controls water balance. The effect is constant fluid leakage, causing life-threatening diarrhea. In the few decades since Gilman and the other Nobel Prize winner, the late National Institutes of Health scientist Martin Rodbell, made their fundamental discovery about G protein switches, pharmacologists all over the world have focused on these signaling molecules. Research on G proteins and on all aspects of cell signaling has prospered, and as a result scientists now have an avalanche of data. In the fall of 2000, Gilman embarked on a groundbreaking effort to begin to untangle and reconstruct some of this information to guide the way toward creating a "virtual cell." Gilman leads the Alliance for Cellular Signaling, a large, interactive research network. The group has a big dream: to understand everything there is to know about signaling inside cells. According to Gilman, Alliance researchers focus lots of attention on G proteins and also on other signaling systems in selected cell types. Ultimately, the scientists hope to test drugs and learn about disease through computer modeling experiments with the virtual cell system.
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/07%3A_Solids_and_Liquids/7.08%3A_Cubic_Lattices_and_Close_Packing
Make sure you thoroughly understand the following essential ideas. Crystals are of course three-dimensional objects, but we will begin by exploring the properties of arrays in two-dimensional space. This will make it easier to develop some of the basic ideas without the added complication of getting you to visualize in 3-D — something that often requires a bit of practice. Suppose you have a dozen or so marbles. How can you arrange them in a single compact layer on a table top? Obviously, they must be in contact with each other in order to minimize the area they cover. It turns out that there are two efficient ways of achieving this: square packing and hexagonal packing. The essential difference here is that any marble within the interior of the square-packed array is in contact with four other marbles, while this number rises to six in the hexagonal-packed arrangement. It should also be apparent that the latter scheme covers a smaller area (contains less empty space) and is therefore a more efficient packing arrangement. If you are good at geometry, you can show that square packing covers 78 percent of the area, while hexagonal packing yields 91 percent coverage. If we go from the world of marbles to that of atoms, which kind of packing would the atoms of a given element prefer? If the atoms are identical and are bound together mainly by dispersion forces which are completely non-directional, they will favor a structure in which as many atoms can be in direct contact as possible. This will, of course, be the hexagonal arrangement. Directed chemical bonds between atoms have a major effect on the packing. The version of hexagonal packing shown at the right occurs in the form of carbon known as graphite, which forms 2-dimensional sheets. Each carbon atom within a sheet is bonded to three other carbon atoms. The result is just the basic hexagonal structure with some atoms missing. 
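The 78 and 91 percent coverage figures quoted above can be verified with a few lines of arithmetic. The sketch below (Python, written for this page; the variable names are our own) divides the area of one circle by the area of the smallest repeating cell in each arrangement:

```python
import math

r = 1.0  # circle radius; the ratios are independent of r

# Square packing: one circle per square cell of side 2r.
square = math.pi * r**2 / (2 * r)**2          # pi/4

# Hexagonal packing: one circle per rhombic cell of side 2r
# with a 60-degree angle; cell area = (2r)^2 * sin(60).
hexagonal = math.pi * r**2 / ((2 * r)**2 * math.sin(math.radians(60)))

print(f"square packing:    {square:.1%}")     # 78.5%
print(f"hexagonal packing: {hexagonal:.1%}")  # 90.7%
```

The hexagonal value, \( \pi / (2\sqrt{3}) \approx 0.907 \), rounds to the 91 percent figure quoted in the text.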
The coordination number of 3 reflects the sp2-hybridization of carbon in graphite, resulting in plane-trigonal bonding and thus the sheet structure. Adjacent sheets are bound by weak dispersion forces, allowing the sheets to slip over one another and giving rise to the lubricating and flaking properties of graphite. The underlying order of a crystalline solid can be represented by an array of regularly spaced points that indicate the locations of the crystal's basic structural units. This array is called a crystal lattice. Crystal lattices can be thought of as being built up from repeating units containing just a few atoms. These repeating units act much as a rubber stamp: press it on the paper, move ("translate") it by an amount equal to the lattice spacing, and stamp the paper again. The gray circles represent a square array of lattice points. The orange square is the simplest unit cell that can be used to define the 2-dimensional lattice. Building out the lattice by moving ("translating") the unit cell in a series of steps reproduces the entire array. Although real crystals do not actually grow in this manner, this process is conceptually important because it allows us to classify a lattice type in terms of the simple repeating unit that is used to "build" it. We call this shape the unit cell. Any number of primitive shapes can be used to define the unit cell of a given crystal lattice. The one that is actually used is largely a matter of convenience, and it may contain a lattice point in its center, as you see in two of the unit cells shown here. In general, the best unit cell is the simplest one that is capable of building out the lattice. Shown above are unit cells for the close-packed square and hexagonal lattices we discussed near the start of this lesson. Although we could use a hexagon for the second of these lattices, the rhombus is preferred because it is simpler. Notice that in both of these lattices, the corners of the unit cells are centered on a lattice point. 
This means that an atom or molecule located on this point in a real crystal lattice is shared with its neighboring cells. As is shown more clearly here for a two-dimensional square-packed lattice, a single unit cell can claim "ownership" of only one-quarter of each molecule, and thus "contains" 4 × ¼ = 1 molecule. The unit cell of the graphite form of carbon is also a rhombus, in keeping with the hexagonal symmetry of this arrangement. Notice that to generate this structure from the unit cell, we need to shift the cell in both the x- and y-directions in order to leave empty spaces at the correct spots. We could alternatively use regular hexagons as the unit cells, but the x and y shifts would still be required, so the simpler rhombus is usually preferred. As you will see in the next section, the empty spaces within these unit cells play an important role when we move from two- to three-dimensional lattices. In order to keep this lesson within reasonable bounds, we are limiting it mostly to crystals belonging to the so-called cubic system. In doing so, we can develop the major concepts that are useful for understanding more complicated structures (as if there are not enough complications in cubics alone!) But in addition, it happens that cubic crystals are very commonly encountered; most metallic elements have cubic structures, and so does ordinary salt, sodium chloride. We usually think of a cubic shape in terms of the equality of its edge lengths and the 90° angles between its sides, but there is another way of classifying shapes that chemists find very useful. This is to look at what symmetry operations (such as rotations around an axis) we can perform that leave the appearance unchanged. For example, you can rotate a cube 90° around an axis perpendicular to any pair of its six faces without making any apparent change to it. We say that the cube possesses three mutually perpendicular four-fold rotation axes, abbreviated C4 axes. 
But if you think about it, a cube can also be rotated around the axes that extend between opposite corners; in this case, it takes three 120° rotations to go through a complete circle, so these axes (four in number) are three-fold or C3 axes. Cubic crystals belong to one of the seven crystal systems whose lattice points can be extended indefinitely to fill three-dimensional space and which can be constructed by successive translations (movements) of a primitive unit cell in three dimensions. As we will see below, the cubic system, as well as some of the others, can have variants in which additional lattice points can be placed at the center of the unit cell or at the center of each face. The three Bravais lattices which form the cubic crystal system are shown here. Structural examples of all three are known, with body- and face-centered (BCC and FCC) being much more common; most metallic elements crystallize in one of these latter forms. But although the simple cubic structure is uncommon by itself, it turns out that many BCC and FCC structures composed of ions can be regarded as interpenetrating combinations of two simple cubic lattices, one made up of positive ions and the other of negative ions. Close-packed lattices allow the maximum amount of interaction between atoms. If these interactions are mainly attractive, then close-packing usually leads to more energetically stable structures. These lattice geometries are widely seen in metallic, atomic, and simple ionic crystals. As we pointed out above, hexagonal packing of a single layer is more efficient than square-packing, so this is where we begin. Imagine that we start with the single layer of green atoms shown below. We will call this the A layer. If we place a second layer of atoms (orange) on top of the A-layer, we would expect the atoms of the new layer to nestle in the hollows in the first layer. But if all the atoms are identical, only some of these void spaces will be accessible. 
In the diagram on the left, notice that there are two classes of void spaces between the A atoms; one set (colored blue) has a vertex pointing up, while the other set (not colored) has down-pointing vertices. Each void space constitutes a depression in which atoms of a second layer (the B-layer) can nest. The two sets of void spaces are completely equivalent, but only one of these sets can be occupied by a second layer of atoms whose size is similar to those in the bottom layer. In the illustration on the right above we have arbitrarily placed the B-layer atoms in the blue voids, but could just as well have selected the white ones. Now consider what happens when we lay down a third layer of atoms. These will fit into the void spaces within the B-layer. As before, there are two sets of these positions, but unlike the case described above, they are not equivalent. The atoms in the third layer are represented by open blue circles in order to avoid obscuring the layers underneath. In the illustration on the left, this third layer is placed on the B-layer at locations that are directly above the atoms of the A-layer, so our third layer is just another A-layer. If we add still more layers, the vertical sequence A-B-A-B-A-B-A... repeats indefinitely. In the diagram on the right above, the blue atoms have been placed above the white (unoccupied) void spaces in layer A. Because this third layer is displaced horizontally (in our view) from layer A, we will call it layer C. As we add more layers of atoms, the sequence of layers is A-B-C-A-B-C-A-B-C..., so we call it ABC packing. These two diagrams that show exploded views of the vertical stacking further illustrate the rather small fundamental difference between these two arrangements; but, as you will see below, they have widely divergent structural consequences. 
Note the opposite orientations of the A and C layers. The HCP stacking shown on the left just above takes us out of the cubic crystal system into the hexagonal system, so we will not say much more about it here except to point out that each atom has 12 nearest neighbors: six in its own layer, and three in each layer above and below it. Below we reproduce the FCC structure that was shown above. You will notice that the B-layer atoms form a hexagon, but this is a cubic structure. How can this be? The answer is that the stacking direction of the FCC structure is inclined with respect to the faces of the cube; in fact, it coincides with one of the three-fold axes that pass through opposite corners. It requires a bit of study to see the relationship, and we have provided two views to help you. The one on the left shows the cube in the normal isometric projection; the one on the right looks down upon the top of the cube at a slightly inclined angle. Both the CCP and HCP structures fill 74 percent of the available space when the atoms have the same size. You should see that the two shaded planes cutting along diagonals within the interior of the cube contain atoms of different colors, meaning that they belong to different layers of the CCP stack. Each plane contains three atoms from the B layer and three from the C layer, thus reducing the symmetry to C3, which a cubic lattice must have. The figure below shows the face-centered cubic unit cell of a cubic close-packed lattice. How many atoms are contained in a unit cell? Each corner atom is shared with eight adjacent unit cells and so a single unit cell can claim only 1/8 of each of the eight corner atoms. Similarly, each of the six atoms centered on a face is only half-owned by the cell. The grand total is then (8 × 1/8) + (6 × ½) = 4 atoms per unit cell. The atoms in each layer in these close-packing stacks sit in a depression in the layer below it. As we explained above, these void spaces are not completely filled. 
(It is geometrically impossible for more than two identical spheres to be in contact at a single point.) We will see later that these voids can sometimes accommodate additional (but generally smaller) atoms or ions. If we look down on top of two layers of close-packed spheres, we can pick out two classes of void spaces, which we call tetrahedral and octahedral holes. If we direct our attention to a region in the above diagram where a single atom is in contact with the three atoms in the layer directly below it, the void space is known as a tetrahedral hole. A similar space will be found between this single atom and the three atoms (not shown) that would lie on top of it in an extended lattice. Any interstitial atom that might occupy this site will interact with the four atoms surrounding it, so this is also called a tetrahedral site. Don't be misled by this name; the boundaries of the void space are spherical sections, not tetrahedra. The tetrahedron is just an imaginary construction whose four corners point to the centers of the four atoms that are in contact. Similarly, when two sets of three trigonally-oriented spheres are in close-packed contact, they will be oriented 60° apart and the centers of the spheres will define the six corners of an imaginary octahedron centered in the void space between the two layers, so we call these octahedral holes or octahedral sites. Octahedral sites are larger than tetrahedral sites. An octahedron has six corners and eight faces. We usually draw octahedra as a double square pyramid standing on one corner (left), but in order to visualize the octahedral shape in a close-packed lattice, it is better to think of the octahedron as lying on one of its faces (right). Each sphere in a close-packed lattice is associated with one octahedral site, and with twice as many tetrahedral sites. This can be seen in this diagram that shows the central atom in the B layer in alignment with the hollows in the C and A layers above and below. 
The face-centered cubic unit cell contains a single octahedral hole within itself, but octahedral holes shared with adjacent cells exist at the centers of each edge. Each of these twelve edge-located sites is shared with four adjacent cells, and thus contributes (12 × ¼) = 3 sites to the cell. Added to the single hole contained in the middle of the cell, this makes a total of 4 octahedral sites per unit cell. This is the same as the number we calculated above for the number of atoms in the cell. It can be shown from elementary trigonometry that an atom will fit exactly into an octahedral site if its radius is 0.414 times as great as that of the host atoms. The corresponding figure for the smaller tetrahedral holes is 0.225. Many pure metals and compounds form face-centered cubic (cubic close-packed) structures. The existence of tetrahedral and octahedral holes in these lattices presents an opportunity for "foreign" atoms to occupy some or all of these interstitial sites. In order to retain close-packing, the interstitial atoms must be small enough to fit into these holes without disrupting the host CCP lattice. When these atoms are too large, which is commonly the case in ionic compounds, the atoms in the interstitial sites will push the host atoms apart so that the face-centered cubic lattice is somewhat opened up and loses its close-packing character. Alkali halides that crystallize with the "rock-salt" structure exemplified by sodium chloride can be regarded either as a FCC structure of one kind of ion in which the octahedral holes are occupied by ions of opposite charge, or as two interpenetrating FCC lattices made up of the two kinds of ions. The two shaded octahedra illustrate the identical coordination of the two kinds of ions; each atom or ion of a given kind is surrounded by six of the opposite kind, resulting in a coordination expressed as (6:6). How many NaCl units are contained in the unit cell? 
If we ignore the atoms that were placed outside the cell in order to construct the octahedra, you should be able to count fourteen "orange" atoms and thirteen "blue" ones. But many of these are shared with adjacent unit cells. An atom at the corner of the cube is shared by eight adjacent cubes, and thus makes a 1/8 contribution to any one cell. Similarly, the center of an edge is common to four other cells, and an atom centered in a face is shared with two cells. Taking all this into consideration, you should be able to confirm the following tally showing that there are four AB units in a unit cell of this kind. If we take into consideration the actual sizes of the ions (Na+ = 116 pm, Cl- = 167 pm), it is apparent that neither ion will fit into the octahedral holes of a CCP lattice composed of the other ion, so the actual structure of NaCl is somewhat expanded beyond the close-packed model. The space-filling model on the right depicts a face-centered cubic unit cell of chloride ions (purple), with the sodium ions (green) occupying the octahedral sites. Since there are two tetrahedral sites for every atom in a close-packed lattice, we can have binary compounds of 1:1 or 1:2 stoichiometry depending on whether half or all of the tetrahedral holes are occupied. Zinc-blende is the mineralogical name for zinc sulfide, ZnS. An impure form known as sphalerite is the major ore from which zinc is obtained. This structure consists essentially of a FCC (CCP) lattice of sulfur atoms (orange) (equivalent to the lattice of chloride ions in NaCl) in which zinc ions (green) occupy half of the tetrahedral sites. As with any FCC lattice, there are four atoms of sulfur per unit cell, and the four zinc atoms are totally contained in the unit cell. Each atom in this structure has four nearest neighbors, and is thus tetrahedrally coordinated. It is interesting to note that if all the atoms are replaced with carbon, this would correspond to the diamond structure. 
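The "ownership" bookkeeping used in the tallies above generalizes to any cubic cell: a corner site contributes 1/8, an edge site 1/4, a face site 1/2, and an interior site 1. A minimal sketch of this arithmetic (the helper names are our own, not from the text):

```python
from fractions import Fraction

# Fraction of a site owned by one cubic unit cell, by position.
SHARE = {"corner": Fraction(1, 8), "edge": Fraction(1, 4),
         "face": Fraction(1, 2), "body": Fraction(1)}

def sites_per_cell(counts):
    """counts maps a position type to the number of such sites on the cell."""
    return sum(SHARE[pos] * n for pos, n in counts.items())

fcc = sites_per_cell({"corner": 8, "face": 6})   # FCC: 4 atoms per cell
cl = sites_per_cell({"corner": 8, "face": 6})    # Cl- in rock salt: 4
na = sites_per_cell({"edge": 12, "body": 1})     # Na+ in rock salt: 4
print(fcc, cl, na)                               # 4 4 4 -> four NaCl units
```

The same function reproduces both the FCC atom count, (8 × 1/8) + (6 × ½) = 4, and the four-NaCl-units-per-cell result.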
Fluorite, CaF2, having twice as many ions of fluoride as of calcium, makes use of all eight tetrahedral holes in the CCP lattice of calcium ions (orange) depicted here. To help you understand this structure, we have shown some of the octahedral sites in the next cell on the right; you can see that the calcium ion shown there is surrounded by eight fluoride ions, and this is of course the case for all of the calcium sites. Since each fluoride ion has four nearest-neighbor calcium ions, the coordination in this structure is described as (8:4). Although the radii of the two ions (F- = 117 pm, Ca2+ = 126 pm) do not allow true close packing, they are similar enough that one could just as well describe the structure as a FCC lattice of fluoride ions with calcium ions in the octahedral holes. In Section 4 we saw that the only cubic lattice that can allow close packing is the face-centered cubic structure. The simplest of the three cubic lattice types, the simple cubic lattice, lacks the hexagonally-arranged layers that are required for close packing. But as shown in this exploded view, the void space between the two square-packed layers of this cell constitutes an octahedral hole that can accommodate another atom, yielding a packing arrangement that in favorable cases can approximate true close-packing. Each second-layer B atom (blue) resides within the unit cell defined by the A layers above and below it. The A and B atoms can be of the same kind or they can be different. If they are the same, we have a body-centered cubic (BCC) lattice. If they are different, and especially if they are oppositely-charged ions (as in the CsCl structure), there are size restrictions: if the B atom is too large to fit into the interstitial space, or if it is so small that the A layers (which all carry the same electric charge) come into contact without sufficient A-B coulombic attractions, this structural arrangement may not be stable. CsCl is the common model for the BCC structure. 
As with so many other structures involving two different atoms or ions, we can regard the same basic structure in different ways. Thus if we look beyond a single unit cell, we see that CsCl can be represented as two interpenetrating simple cubic lattices in which each atom occupies an octahedral hole within the cubes of the other lattice.
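The interstitial radius ratios quoted earlier (0.414 for an octahedral hole, 0.225 for a tetrahedral one) follow directly from the geometry of touching spheres; the quick check below uses variable names of our own:

```python
import math

# Largest interstitial sphere (radius r) that fits among host spheres
# (radius R) without pushing them apart, expressed as r/R.
# Octahedral hole: the hosts touch along a cell edge, so the hole
# spans a face diagonal and r/R = sqrt(2) - 1.
octahedral = math.sqrt(2) - 1        # ~0.414

# Tetrahedral hole: the center-to-vertex distance of a regular
# tetrahedron of edge 2R is R*sqrt(3/2), so r/R = sqrt(3/2) - 1.
tetrahedral = math.sqrt(1.5) - 1     # ~0.225

print(f"octahedral:  {octahedral:.3f}")
print(f"tetrahedral: {tetrahedral:.3f}")
```

Both values match the figures given in the text to three decimal places.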
https://chem.libretexts.org/Bookshelves/Biological_Chemistry/Supplemental_Modules_(Biological_Chemistry)/Pharmaceuticals/Misc_Antibiotics
Antibiotics are specific chemical substances derived from or produced by living organisms that are capable of inhibiting the life processes of other organisms. The first antibiotics were isolated from microorganisms, but some are now obtained from higher plants and animals. Over 3,000 antibiotics have been identified but only a few dozen are used in medicine. Antibiotics are the most widely prescribed class of drugs, comprising 12% of the prescriptions in the United States. Macrolides are products of actinomycetes (soil bacteria) or semi-synthetic derivatives of them. Erythromycin is an orally effective antibiotic discovered in 1952 in the metabolic products of a strain of Streptomyces erythreus, originally obtained from a soil sample. Erythromycin and other macrolide antibiotics inhibit the protein synthesis of sensitive microorganisms by binding to the 23S rRNA molecule (in the 50S subunit) of the bacterial ribosome, blocking the exit of the growing peptide chain. (Humans do not have 50S ribosomal subunits; human ribosomes are composed of 40S and 60S subunits.) Certain resistant microorganisms with mutational changes in components of this subunit of the ribosome fail to bind the drug. The association between erythromycin and the ribosome is reversible and takes place only when the 50S subunit is free from tRNA molecules bearing nascent peptide chains. Gram-positive bacteria accumulate about 100 times more erythromycin than do gram-negative microorganisms. The non-ionized form of the drug penetrates cells considerably more readily, and this probably explains the increased antimicrobial activity observed at alkaline pH. Tetracyclines have the broadest spectrum of antimicrobial activity; examples include Aureomycin, Terramycin, and Panmycin. Four fused 6-membered rings, as shown in the figure below, form the basic structure from which the various tetracyclines are made. The various derivatives differ at one or more of four sites on the rigid, planar ring structure. 
The classical tetracyclines were derived from Streptomyces spp., but the newer derivatives are semisynthetic, as is generally true for newer members of other drug groups. Tetracyclines inhibit bacterial protein synthesis by blocking the attachment of the transfer RNA-amino acid to the ribosome. More precisely, they are inhibitors of the codon-anticodon interaction. Tetracyclines can also inhibit protein synthesis in the host, but are less likely to reach the concentration required because eukaryotic cells do not have a tetracycline uptake mechanism. Streptomycin is effective against gram-negative bacteria, although it is also used in the treatment of tuberculosis. Streptomycin binds to the 30S ribosome and changes its shape, inhibiting protein synthesis by causing a misreading of messenger RNA information. Chloromycetin is also a broad-spectrum antibiotic that possesses activity similar to the tetracyclines. At present, it is the only antibiotic prepared synthetically. It is reserved for treatment of serious infections because it is potentially highly toxic to bone marrow cells. It inhibits protein synthesis by attaching to the ribosome and interfering with the formation of peptide bonds between amino acids. It behaves as an antimetabolite for the essential amino acid phenylalanine at ribosomal binding sites.
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Polymers/Molecular_Weights_of_Polymers
Unlike simpler pure compounds, most polymers are not composed of identical molecules. The HDPE molecules, for example, are all long carbon chains, but the lengths may vary by thousands of monomer units. Because of this, polymer molecular weights are usually given as averages. Two experimentally determined values are common: \(M_n\), the number average molecular weight, is calculated from the mole fraction distribution of different sized molecules in a sample, and \(M_w\), the weight average molecular weight, is calculated from the weight fraction distribution of different sized molecules. These are defined below. Since larger molecules in a sample weigh more than smaller molecules, the weight average \(M_w\) is necessarily skewed to higher values, and is always greater than \(M_n\). As the weight dispersion of molecules in a sample narrows, \(M_w\) approaches \(M_n\), and in the unlikely case that all the polymer molecules have identical weights (a pure mono-disperse sample), the ratio \(M_w / M_n\) becomes unity.
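The definitions themselves did not survive the page conversion; in the standard notation, with \(N_i\) the number of molecules having molecular weight \(M_i\), they are:

\[ M_n = \frac{\sum_i N_i M_i}{\sum_i N_i} \qquad\qquad M_w = \frac{\sum_i N_i M_i^2}{\sum_i N_i M_i} \]

Because \(M_w\) weights each species by its mass rather than its mole fraction, \(M_w \ge M_n\) always holds, with equality only for a monodisperse sample.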
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Polymers/Writing_Formulas_for_Polymers
The repeating structural unit of most simple polymers not only reflects the monomer(s) from which the polymers are constructed, but also provides a concise means for drawing structures to represent these macromolecules. For polyethylene, arguably the simplest polymer, this is demonstrated by the following equation. Here ethylene (ethene) is the monomer, and the corresponding linear polymer is called high-density polyethylene (HDPE). HDPE is composed of macromolecules in which n ranges from 10,000 to 100,000 (molecular weight \(2 \times 10^5\) to \(3 \times 10^6\)). If Y and Z represent moles of monomer and polymer respectively, Z is approximately \(10^{-5}\) Y. This polymer is called polyethylene rather than polymethylene, \(\ce{(-CH_2-)_{n}}\), because ethylene is a stable compound (methylene is not), and it also serves as the synthetic precursor of the polymer. The two open bonds remaining at the ends of the long chain of carbons (colored magenta) are normally not specified, because the atoms or groups found there depend on the chemical process used for polymerization. The synthetic methods used to prepare this and other polymers will be described later in this chapter.
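As a quick check on the quoted molecular-weight range, multiplying the degree of polymerization n by the mass of one ethylene repeat unit (28.05 g/mol) reproduces it; the short sketch below is our own, not part of the text:

```python
M_ETHYLENE = 28.05  # g/mol, mass of one -CH2-CH2- repeat unit

for n in (10_000, 100_000):
    print(f"n = {n:,}: M = {n * M_ETHYLENE:.3g} g/mol")
# The results span roughly 2.8e5 to 2.8e6 g/mol, consistent with the
# quoted range of 2x10^5 to 3x10^6.
```

This also illustrates why end groups are ignored: at n of 10,000 or more, the two chain-terminating groups contribute a negligible fraction of the total mass.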
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/17%3A_Chemical_Kinetics_and_Dynamics/17.04%3A_Reaction_Mechanisms
We are now ready to open up the "black box" that lies between the reactants and products of a net chemical reaction. What we find inside may not be very pretty, but it is always interesting because it provides us with a blow-by-blow description of how chemical reactions take place. The mechanism of a chemical reaction is the sequence of actual events that take place as reactant molecules are converted into products. Each of these events constitutes an elementary step that can be represented as a coming-together of discrete particles ("collision") or as the breaking-up of a molecule ("dissociation") into simpler units. The molecular entity that emerges from each step may be a final product of the reaction, or it might be an intermediate: a species that is created in one elementary step and destroyed in a subsequent step, and therefore does not appear in the net reaction equation. For an example of a mechanism, consider the decomposition of nitrogen dioxide into nitric oxide and oxygen. The net balanced equation is \[\ce{2 NO2(g) → 2 NO(g) + O2(g)} \nonumber\] The mechanism of this reaction is believed to involve the following two elementary steps: \[ \begin{align*} \ce{2 NO2} &→ \ce{NO3 + NO} \\[4pt] \ce{NO3} &→ \ce{NO + O2} \end{align*}\] Note that the intermediate species \(\ce{NO3}\) has only a transient existence and does not appear in the net equation. A useful reaction mechanism must be consistent with the experimentally observed rate law. It is important to understand that the mechanism of a given net reaction may be different under different conditions. For example, the dissociation of hydrogen bromide \[\ce{2 HBr(g) → H2(g) + Br2(g)} \nonumber\] proceeds by different mechanisms (and follows different rate laws) when carried out in the dark (a thermal reaction) and in the light (a photochemical reaction).
Similarly, the presence of a catalyst can enable an alternative mechanism that greatly speeds up the rate of a reaction. A reaction mechanism must ultimately be understood as a "blow-by-blow" description of the molecular-level events whose sequence leads from reactants to products. These elementary steps (also called elementary reactions) are almost always very simple ones involving one, two, or [rarely] three chemical species, which are classified, respectively, as unimolecular, bimolecular, and termolecular. Elementary reactions differ from ordinary net chemical reactions in two important ways: their rate laws follow directly from their molecularity, and they cannot be broken down into simpler steps. Some net reactions do proceed in a single elementary step, at least under certain conditions. However, without careful experimentation, one can never be sure. The gas-phase formation of \(\ce{HI}\) from its elements was long thought to be a simple bimolecular combination of \(\ce{H2}\) and \(\ce{I2}\), but it was later found that under certain conditions, it follows a more complicated rate law. Mechanisms in which one elementary step is followed by another are very common. \[\ce{ A + B → \cancel{Q} } \tag{step 1}\] \[\ce{B + \cancel{Q} → C} \tag{step 2}\] \[\ce{A + 2B → C} \tag{net reaction}\] (As must always be the case, the net reaction is just the sum of its elementary steps.) In this example, the species \(Q\) is an intermediate, usually an unstable or highly reactive species. If both steps proceed at similar rates, rate law experiments on the net reaction would not reveal that two separate steps are involved here. The rate law for the reaction would be \[rate = k[A][B]^2 \nonumber\] (Bear in mind that intermediates such as \(Q\) cannot appear in the rate law of a net reaction.) When the rates are quite different, things can get interesting and lead to quite varied kinetics as well as some simplifying approximations. When the rate constants of a series of consecutive reactions are quite different, a number of relationships can come into play that greatly simplify our understanding of the observed reaction kinetics.
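The "net reaction is the sum of its elementary steps" rule is simple bookkeeping; intermediates must cancel. A minimal sketch, using the two-step example above (A + B → Q, then B + Q → C):

```python
# Check that elementary steps sum to the net reaction: products count +,
# reactants count -, and any intermediate should cancel to zero.
from collections import Counter

def add_step(totals, reactants, products):
    """Accumulate net species counts for one elementary step."""
    for s in reactants:
        totals[s] -= 1
    for s in products:
        totals[s] += 1

totals = Counter()
add_step(totals, ["A", "B"], ["Q"])   # step 1
add_step(totals, ["B", "Q"], ["C"])   # step 2

net = {s: n for s, n in totals.items() if n != 0}
print(net)  # {'A': -1, 'B': -2, 'C': 1} -- i.e. A + 2B -> C; Q has cancelled
```

The intermediate \(Q\) drops out of the sum, which is exactly why it never appears in the net equation.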
The rate-determining step is also known as the rate-limiting step. We can generally expect that one of the elementary reactions in a sequence of consecutive steps will have a rate constant that is smaller than the others. The effect is to slow the rates of all the reactions, very much in the way that a line of cars creeps slowly up a hill behind a slow truck. (The energy-profile figure described here, not reproduced, shows a three-step reaction involving two intermediate species and three activated complexes; although one step has the smallest individual activation energy, it is the energy of the highest activated complex relative to the reactants that determines the activation energy, and hence the rate-determining step, of the overall reaction.) Chemists often refer to elementary reactions whose forward rate constants have large magnitudes as "fast", and those with small forward rate constants as "slow". Always bear in mind, however, that as long as the steps proceed in single file (no short-cuts!), the overall reaction can proceed no faster than its slowest step. So even the "fastest" members of a consecutive series of reactions will proceed as slowly as the "slowest" one. In many multi-step processes, the forward and reverse rate constants for the formation of an intermediate \(Q\) are of similar magnitudes and sufficiently large to make the reaction in each direction quite rapid. Decomposition of the intermediate to product is a slower process: \[\ce{A <=>[k_1][k_{-1}] Q ->[k_2] B} \nonumber\] This is often described as a rapid equilibrium in which the concentration of \(Q\) can be related to the equilibrium constant \[K = \dfrac{k_1}{k_{-1}} \nonumber\] This is just the equilibrium-constant expression for the first step; invoking it is known as the rapid-equilibrium (pre-equilibrium) approximation. It should be understood, however, that true equilibrium is never achieved because \(Q\) is continually being consumed; that is, the rate of formation of \(Q\) always exceeds its rate of decomposition back to reactants. For this reason, the steady-state approximation described below is generally preferred to treat processes of this kind.
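The steady-state situation can be illustrated numerically. A minimal Euler-integration sketch of A → Q → B with assumed rate constants chosen so that \(k_2 \gg k_1\):

```python
# Euler integration of A -(k1)-> Q -(k2)-> B with k2 >> k1, showing that the
# intermediate Q stays at a small, nearly constant ("steady-state") level.
k1, k2 = 0.1, 10.0       # assumed rate constants, 1/s
A, Q, B = 1.0, 0.0, 0.0  # initial concentrations, M
dt = 1e-4                # time step, s

qs = []
for step in range(200_000):   # integrate out to t = 20 s
    dA = -k1 * A
    dQ = k1 * A - k2 * Q
    dB = k2 * Q
    A += dA * dt
    Q += dQ * dt
    B += dB * dt
    qs.append(Q)

# After a brief induction period, Q tracks (k1/k2)*A, which is small here.
print(max(qs))      # Q never exceeds about 0.01 M
print(A + Q + B)    # mass is conserved throughout
```

Because \([Q]\) never builds up, setting \(d[Q]/dt \approx 0\) (the steady-state approximation) introduces very little error while making the algebra tractable.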
Consider a mechanism consisting of two sequential reactions \[\ce{A ->[k_1] Q ->[k_2] B} \nonumber\] in which \(Q\) is an intermediate. The time-vs-concentration profiles of these three substances will depend on the relative magnitudes of \(k_1\) and \(k_2\), as shown in the following diagrams. Construction of these diagrams requires the solution of sets of simultaneous differential equations, which is [fortunately!] beyond the scope of this course. The steady-state approximation is usually not covered in introductory courses, although it is not particularly complicated mathematically. In the left-hand diagram, the rate-determining step is clearly the conversion of the rapidly-formed intermediate into the product, so there is no need to formulate a rate law that involves \(Q\). But on the right side, the formation of \(Q\) is rate-determining, and its conversion into \(B\) is so rapid that \([Q]\) never builds up to a substantial value. (Notice how the plots for \([A]\) and \([B]\) are almost mutually inverse.) The effect is to maintain the concentration of \(Q\) at an approximately constant value. This can greatly simplify the analysis of many reaction mechanisms, especially those that are mediated by enzymes in organisms. Since we are unable to look directly at the elementary steps hidden within the "black box" of the reaction mechanism, we are limited to proposing a sequence that would be consistent with the reaction order which we can observe. Chemical intuition can guide us in this, for example by guessing the magnitudes of some of the activation energies. In the end, however, the best we can do is to work out a mechanism that is consistent with the observed kinetics; we can never "prove" that what we come up with is the actual mechanism. Consider the following reaction: \[A + B → C\nonumber\] One possible mechanism might involve two intermediates \(Q\) and \(R\). The rate law corresponding to this mechanism would be that of the rate-determining step: \[\text{rate} = k_1[A][B].
\nonumber\] If the first step in a mechanism is rate-determining, it is easy to find the rate law for the overall reaction from the mechanism. If the second or a later step is rate-determining, determining the rate law is slightly more complicated and often requires one of the two approximations described above. An alternative mechanism for the reaction \[A + B → C \nonumber\] in which the rate-determining step involves one of the intermediates would display third-order kinetics. Since intermediates cannot appear in rate law expressions, we must express \([Q]\) in the rate-determining step in terms of the other reactants. To do this, we make use of the fact that Step 1 involves an equilibrium constant \(K_1\): \[K_1 = \dfrac{k_1}{k_{-1}} = \dfrac{[Q]}{[A][B]} \nonumber\] Solving this for \([Q]\), we obtain \[[Q] = K_1[A][B]. \nonumber \] We can now express the rate law for the rate-determining step as \[\begin{align*} \text{rate} &= k_2 K_1[A][B][A] \\[4pt] &= k[A]^2[B] \end{align*}\] in which the constants \(k_2\) and \(K_1\) have been combined into a single constant \(k\). Consider the following reaction: \[\ce{ F2 + 2 NO2 → 2 NO2F }\nonumber\] Application of the "chemical intuition" mentioned in the above box would lead us to suspect that any process that involves breaking of the strong F–F bond would likely be slow enough to be rate limiting, and that the resulting fluorine atoms would be very fast to react with another odd-electron species. If this mechanism is correct, then the rate law of the net reaction would be that of the rate-determining step: \[\text{rate} = k_1[F_2][NO_2] \nonumber\] Ozone is an unstable allotrope of oxygen that decomposes back into ordinary dioxygen according to the net reaction \[\ce{2 O3 → 3 O2} \nonumber\] A possible mechanism would be the simple one-step bimolecular collision suggested by the reaction equation, but this would lead to a second-order rate law which is not observed.
Instead, experiment reveals a more complicated rate law: \[\text{rate} = k[O_3]^2[O_2]^{–1} \nonumber\] What's this? It looks as if the product \(\ce{O2}\) actually inhibits the reaction in some way. The generally-accepted mechanism for this reaction is a rapid equilibrium followed by a slow bimolecular step: \[\ce{O3 <=> O2 + O} \quad \text{(fast equilibrium)} \nonumber\] \[\ce{O + O3 → 2 O2} \quad \text{(slow)} \nonumber\] Does this seem reasonable? To translate this mechanism into a rate law, we first write the equilibrium constant for Step 1 and solve it for the concentration of the intermediate: \[ K =\dfrac{[O_2][O]}{[O_3]} \nonumber\] \[[O] = \dfrac{k_1[O_3]}{k_{-1}[O_2]} \nonumber\] We substitute this value of \(\ce{[O]}\) into the rate expression \(k_2[\ce{O}][\ce{O3}]\) for Step 2, which yields the experimentally-obtained rate law \[ \text{rate} = k_2K \dfrac{[O_3]^2}{[O_2]} \nonumber\] Consider the gas-phase oxidation of nitric oxide: \[\ce{2 NO + O_2 → 2 NO_2} \nonumber\] This reaction, like most third-order reactions, is not termolecular but rather a combination of an equilibrium followed by a subsequent bimolecular step: \[\ce{2 NO <=> N2O2} \quad \text{(fast equilibrium)} \nonumber\] \[\ce{N2O2 + O2 → 2 NO2} \quad \text{(slow)} \nonumber\] Since the intermediate \(\ce{N2O2}\) may not appear in the rate equation, we need to express its concentration in terms of the reactant \(\ce{NO}\). As in the previous example, we do this through the equilibrium constant of Step 1: \[K = \dfrac{[N_2O_2]}{[NO]^2} \nonumber\] \[[N_2O_2] = K [NO]^2 \nonumber\] \[ \begin{align*} \text{rate} &= k_2 [N_2O_2][O_2] \\[4pt] &= k_2 K [NO]^2 [O_2] \end{align*} \] The unusual feature of this net reaction is that its rate diminishes as the temperature increases, suggesting that the activation energy is negative. Step 1 involves bond formation and is exothermic, so as the temperature rises, \(K\) decreases (Le Chatelier effect). At the same time, \(k_2\) increases, but not sufficiently to overcome the decrease in \(K\). So the apparently negative activation energy of the overall process is simply an artifact of the magnitudes of the opposing temperature coefficients of \(k_2\) and \(K\). Many important reaction mechanisms, particularly in the gas phase, involve intermediates having unpaired electrons, commonly known as free radicals.
Free radicals are often fairly stable thermodynamically and may be quite long-lived by themselves, but they are highly reactive, and hence kinetically labile. The dot ·, representing the unpaired electron, is not really a part of the formula, and is usually shown only when we want to emphasize the radical character of a species. The "atomic" forms of many elements that normally form diatomic molecules are free radicals; H·, O·, and Br· are common examples. The simplest and most stable (\(ΔG_f^o\) = +87 kJ/mol) molecular free radical, or "odd-electron molecule", is nitric oxide, NO·. The most important chemical property of a free radical is its ability to pass the odd electron along to another species with which it reacts. This process creates a new radical which becomes capable of initiating another reaction. Radicals can, of course, also react with each other, destroying both ("chain termination") while creating a new covalently bonded species. Much of the pioneering work in this field, of which the \(\ce{HBr}\) synthesis is a classic example, was done by the German chemist Max Bodenstein (1871-1942). The synthesis of hydrogen bromide from its elements illustrates the major features of a chain reaction. The rate laws for chain reactions tend to be very complex, and often have non-integral orders. The gas-phase oxidation of hydrogen has been extensively studied over a wide range of temperatures and pressures. \[\ce{H2(g) + 1/2 O2(g) → H2O(g)}\quad ΔH^o = –242\, kJ/mol \nonumber\] This reaction does not take place at all when the two gases are simply mixed at room temperature. At temperatures around 500-600°C it proceeds quite smoothly, but when heated above 700°C or ignited with a spark, the mixture explodes. As with all combustion reactions, the mechanism of this reaction is extremely complex (you do not want to see the rate law!) and varies somewhat with the conditions.
Some of the major radical-formation steps give birth to more radicals than they consume; when these chain-branching steps are active, each one effectively initiates a new chain process, causing the overall rate to increase exponentially and producing an explosion. An explosive reaction is a highly exothermic process that, once initiated, goes to completion very rapidly and cannot be stopped. The destructive force of an explosion arises from the rapid expansion of the gaseous products as they absorb the heat of the reaction. There are two basic kinds of chemical explosions: thermal explosions, in which the heat released by the reaction raises the temperature, and thus the rate, until the reaction runs away; and chain-branching explosions, in which the number of chain-carrying radicals multiplies exponentially. Whether or not a reaction proceeds explosively depends on the balance between formation and destruction of the chain-carrying species. This balance depends on the temperature and pressure, as illustrated here for the hydrogen-oxygen reaction. The lower explosion limit of gas mixtures varies with the size, shape, and composition of the enclosing container. Needless to say, experimental determination of explosion limits requires some care and creativity. Upper and lower explosion limits have been tabulated for several common fuel gases.
Just prior to the turn of the twentieth century, additional observations were made which contradicted parts of Dalton’s atomic theory. The French physicist Henri Becquerel (1852 to 1928) discovered by accident that compounds of uranium and thorium emitted rays which, like rays of sunlight, could darken photographic films. Seen in Figure \(\PageIndex{2}\) is the photographic plate Becquerel used, darkened by the rays emitted by uranium. Becquerel’s rays differed from light in that they could even pass through the black paper wrappings in which his films were stored. Although themselves invisible to the human eye, the rays could be detected easily because they produced visible light when they struck phosphors such as impure zinc sulfide. Such luminescence is similar to the glow of a psychedelic poster when invisible ultraviolet (black light) rays strike it. Further experimentation showed that if the rays were allowed to pass between the poles of a magnet, they could be separated into the three groups shown in Figure \(\PageIndex{2}\). Because little or nothing was known about these rays, they were labeled with the first three letters of the Greek alphabet. Upon passing through the magnetic field, the alpha rays (α rays) were deflected slightly in one direction, beta rays (β rays) were deflected to a much greater extent in the opposite direction, and gamma rays (γ rays) were not deflected at all (Figure \(\PageIndex{2}\)). Deflection by a magnet is a characteristic of electrically charged particles (as opposed to rays of light). From the direction and extent of deflection it was concluded that the β particles had a negative charge and were much less massive than the positively charged α particles. The γ rays did not behave as electrically charged particles would, and so the name ray was retained for them. Taken together the α particles, β particles, and γ rays were referred to as radiation, and the compounds which emitted them as radioactive. The three types of radiation differ greatly in penetrating power (Figure \(\PageIndex{3}\)).
While γ rays may penetrate several millimeters of lead, β particles may penetrate about 1 mm of aluminum, but α particles are stopped by thin paper or by a centimeter or two of air. The high penetrating power of γ rays does not in itself make them more dangerous, because rays that pass completely through matter deposit little of their energy in it. On the other hand, if an α source is a few inches away from the body, it is not harmful at all; but if an α emitter like radon is inhaled, the \(\alpha\) particles are very dangerous. Because they do not penetrate matter, their energy is absorbed in the alveoli of the lung, where it causes molecular damage, sometimes leading to lung cancer. Study of radioactive compounds by the French chemist Marie Curie (1867 to 1934) revealed the presence of several previously undiscovered elements (radium, polonium, actinium, and radon). These elements, and any compounds they formed, were intensely radioactive. When thorium and uranium compounds were purified to remove the newly discovered elements, the level of radioactivity decreased markedly. It increased again over a period of months or years, however. Even if the uranium or thorium compounds were carefully protected from contamination, it was possible to find small quantities of radium, polonium, actinium, or radon in them after such a time. To chemists, who had been trained to accept Dalton’s indestructible atoms, these results were intellectually distasteful. The inescapable conclusion was that some of the uranium or thorium atoms were spontaneously changing their structures and becoming atoms of the newly discovered elements. A change in atomic structure which produces a different element is called transmutation. Transmutation of uranium into the more radioactive elements could explain the increased emission of radiation by a carefully sealed sample of a uranium compound. During these experiments with radioactive compounds it was observed that minerals containing uranium or thorium always contained lead as well.
For example, uranium-238 transmutes into thorium-234 by emitting an α particle: \[\ce{^{238}_{92}U} \rightarrow \ce{^{234}_{90}Th} + \alpha \nonumber \] This lead apparently resulted from further transmutation of the highly radioactive elements radium, polonium, actinium, and radon. The lead found in uranium ores always had a significantly lower atomic weight than lead from most other sources (as low as 206.4 compared with 207.2, the accepted value). Lead associated with thorium always had an unusually high atomic weight. Nevertheless, all three forms of lead had the same chemical properties. Once mixed together, they could not be separated. Such results, as well as the reversed order of elements such as Ar and K in the periodic table, implied that atomic weight is not the fundamental determinant of chemical behavior.
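Nuclear equations like the α decay above must balance in both mass number and atomic number. A minimal bookkeeping sketch:

```python
# Check that a nuclear equation balances: mass numbers (A) and atomic
# numbers (Z) must each sum to the same total on both sides.
def balanced(parent, daughters):
    """Each species is a (mass_number, atomic_number) pair."""
    return (parent[0] == sum(a for a, z in daughters) and
            parent[1] == sum(z for a, z in daughters))

U238, Th234, alpha = (238, 92), (234, 90), (4, 2)  # alpha particle = He-4
print(balanced(U238, [Th234, alpha]))  # 238 = 234 + 4 and 92 = 90 + 2
```

The same check applies to any decay or transmutation written in this notation.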
Suppose you were measuring the diameter of the circle below and you needed to report its circumference. You used a ruler and found the diameter to be 31 mm. Then you do the calculation on your calculator and get 97.389372261284 mm. So what do you say the circumference of the circle was? You certainly don't know the circumference more precisely than you knew the diameter, which was between 30 and 32 mm. Going through the calculations with these values would tell you that the circumference was between 94.25 mm and 100.53 mm. This is where significant figures come in handy. Significant figures, often called sig figs, are the number of digits in a given value, or number. For instance, 18 has 2 sig figs, and 3.456 has 4 sig figs. However, both 10 and 1000 have only 1 sig fig. The reason is that the zeros have to be there to show what the number is, so they don't count as significant digits. What about 1001? It has 4 sig figs. We could have "rounded" it to 1000, showing that the last digit wasn't significant, but we didn't. This shows that the 1 on the right is significant, and so if the smallest digit (representing 1s) is significant, then the bigger ones (representing 10s and 100s) must be also. If the zeros are significant, then a period or decimal point is added to the end. For example, if given a problem in which 20. mL are used, then there are 2 sig figs in the number 20. You may forget to include the decimal point, particularly in your lab notebook when working in the lab. But you can assume that you used the standard measuring tools in the lab and use the significant figures based on the tools' accuracy. For example, a graduated cylinder might be accurate to 2 mL, so recording 20. mL would be like saying the measurement was between 18 and 22 mL. This means that recording the data with 2 sig figs would be correct. Generally, including an extra sig fig, especially in the middle of calculations, is reasonable.
When measuring a quantity, the significant figures describe how precise the measurement was by listing the digits in a measured value which are known with certainty. For example, suppose you measure the length of a box with a normal ruler with increments, or markings, for millimeters (mm). You can be sure that your measurement is no more than 1 mm different from the real length of the box if you measured carefully. So, for instance, you could report the length as 31 mm or 3.1 cm. You wouldn't round to 3 cm or 30 mm since you were able to measure the box more precisely than that. But since you were only able to accurately measure to the nearest millimeter, the certainty of your measurement is within a millimeter of the reading. So you would read your measurement of 3.1 cm as 31 mm ± 1 mm. Now suppose you wanted to know the length of the box much more precisely. To do that you will need a better tool. For instance, you could use a dial caliper to measure to the nearest 0.02 mm. Now you could report your length as, say, 31.14 mm, which means that you are certain your measurement was between 31.12 mm and 31.16 mm. If you measured with a ruler but wrote 31.1 mm, or 31.12 mm, people going through your numbers would probably think that you used a better tool than you actually did, so that would be almost dishonest. Going back to the original problem, one rule for using sig figs when doing multiplication with a measured number is to report the answer with the same number of sig figs as the number you started with: 2 sig figs, so 97 mm. The uncertainty is a little bigger than it was before, 97 ± 3 mm. But you shouldn't write 100 mm (1 sig fig) because you don't mean 0 - 200 mm, but you also don't mean 90 - 110 mm (a sig fig change!). If you wanted to say 100 mm with 2 sig figs, you would have to write it as 10. cm or use scientific notation and write it as 1.0 × 10² mm. Some numbers are counted or defined, meaning that they are not measured. These are exact numbers.
For instance, there are exactly 1000 grams (g) in 1 kilogram (kg), because that's the definition. Or if you use a volumetric pipette to add 1.00 mL of liquid twice, then the total amount added was 2 x 1.00 mL = 2.00 mL. You used the pipette exactly twice, so the 2 is exact, and you don't have to round to 1 sig fig (2 mL) for the total volume.
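The rounding rule in the circumference example can be automated. A minimal sketch (the helper `round_sig` is an illustration, not a standard-library function):

```python
# Round a computed value to a given number of significant figures, applied
# to the circumference example above (31 mm diameter, 2 sig figs).
import math

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(x)))
    return round(x, sig - 1 - exponent)

c = math.pi * 31              # 97.389... mm from the calculator
print(round_sig(c, 2))        # report 97 mm, matching the 2-sig-fig diameter
print(round_sig(0.004567, 2))  # leading zeros are not significant
```

Note that this keeps the correct number of digits but cannot by itself distinguish 100 (1 sig fig) from 1.0 × 10² (2 sig figs); that distinction lives in how the number is written, as discussed above.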
What is the difference between average rate, initial rate, and instantaneous rate? First, a general reaction rate must be defined before any variation of it can be distinguished. The reaction rate is defined as the measure of the change in concentration of the reactants or products per unit time. The rate of a chemical reaction is not constant; it changes continuously and can be influenced by temperature. The rate of a reaction can be defined in terms of the disappearance of any reactant or the appearance of any product. Thus, the average rate is the reaction rate averaged over a given period of time, the instantaneous rate is the reaction rate at a specific moment during the reaction, and the initial rate is the instantaneous rate at the very start of the reaction (when the product begins to form). The instantaneous rate of a reaction can be denoted as \[ \lim_{\Delta t \rightarrow 0} \dfrac{\Delta [\text{concentration}]}{\Delta t} \nonumber \] Ozone decomposes to oxygen according to the equation \(\ce{2O3}(g)⟶\ce{3O2}(g)\). Write the equation that relates the rate expressions for this reaction in terms of the disappearance of O3 and the formation of oxygen. For the general reaction \(aA + bB ⟶ cC + dD\), the rate of the reaction can be expressed in terms of the disappearance of a reactant or the appearance of a product over a certain time period as follows: \[- \dfrac{1}{a}\dfrac{\Delta [A]}{\Delta t} = - \dfrac{1}{b}\dfrac{\Delta [B]}{\Delta t} = \dfrac{1}{c}\dfrac{\Delta [C]}{\Delta t} = \dfrac{1}{d}\dfrac{\Delta [D]}{\Delta t}\] We want the rate of a reaction to be positive, but the change in the concentration of a reactant, A, will be negative because it is being used up as it is transformed into product. Therefore, when expressing the rate of the reaction in terms of the change in the concentration of A, it is important to add a negative sign in front to keep the overall rate positive. Lastly, the rate must be normalized according to the stoichiometry of the reaction.
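The normalization just described can be sketched numerically for the ozone reaction (the rate value is an assumed illustration, not data from the exercise):

```python
# Normalizing rates by stoichiometric coefficients for 2 O3 -> 3 O2.
rate_O3 = -1.0e-5   # M/s, assumed change in [O3] per unit time (negative)

reaction_rate = -rate_O3 / 2    # -(1/2) d[O3]/dt: positive, normalized
rate_O2 = 3 * reaction_rate     # d[O2]/dt, 1.5x as large as |d[O3]/dt|

print(reaction_rate)  # one well-defined rate for the whole reaction
print(rate_O2)        # oxygen appears 1.5 times as fast as ozone disappears
```

Dividing by each coefficient gives a single reaction rate that is the same no matter which species is monitored.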
In the decomposition of ozone to oxygen, two moles of ozone form three moles of oxygen gas. This means that the increase in oxygen gas will be 1.5 times as great as the decrease in ozone. Because the rate of the reaction should be able to describe both species, we divide each change in concentration by the species' stoichiometric coefficient in the balanced reaction equation. Therefore, the rate of the decomposition of ozone into oxygen gas can be described as follows: \[\text{Rate}=-\frac{\Delta[O_3]}{2\Delta t}=\frac{\Delta[O_2]}{3\Delta t}\] In the nuclear industry, chlorine trifluoride is used to prepare uranium hexafluoride, a volatile compound of uranium used in the separation of uranium isotopes. Chlorine trifluoride is prepared by the reaction \(\ce{Cl2}(g)+\ce{3F2}(g)⟶\ce{2ClF3}(g)\). Write the equation that relates the rate expressions for this reaction in terms of the disappearance of Cl2 and F2 and the formation of ClF3. In this problem we are asked to write the equation that relates the rate expressions in terms of the disappearance of the reactants and the formation of the product. A reaction rate gives insight into how the rate is affected as a function of the concentration of the substances in the equation. Rates can often be expressed on graphs of concentration vs. time in terms of the change (\({\Delta}\)) in concentration and time, and over a short enough time interval the instantaneous rate can be approximated. If we were to analyze the given reaction, the graph would show that Cl2 decreases, that F2 decreases three times as quickly, and that ClF3 increases at twice the rate at which Cl2 decreases. The reactants are being used up and converted to product, so they decrease while the product increases. For this problem, we can apply the general form of a rate expression, where the general reaction follows: \[aA+bB⟶cC+dD\nonumber \].
And the rate can then be written as \(rate=-\frac {1}{a}\frac{{\Delta}[A]}{{\Delta}t}\) \(=-\frac {1}{b}\frac{{\Delta}[B]}{{\Delta}t}\) \(=\frac {1}{c}\frac{{\Delta}[C]}{{\Delta}t}\) \(=\frac {1}{d}\frac{{\Delta}[D]}{{\Delta}t}.\) Here the negative signs are used to keep the convention of expressing rates as positive numbers. In this specific case we use the stoichiometry to get the specific rates of disappearance and formation (back to what was said in the first paragraph). So the problem just involves referring to the equation and its balanced coefficients. Based upon the equation we see that Cl2 is a reactant with a coefficient of 1, F2 has a coefficient of 3 and is also used up, and ClF3 is a product that increases two-fold with a coefficient of 2. So the rate here can be written as: \[rate=-\frac{{\Delta}[Cl_2]}{{\Delta}t}=-\frac {1}{3}\frac{{\Delta}[F_2]}{{\Delta}t}=\frac {1}{2}\frac{{\Delta}[ClF_3]}{{\Delta}t}\nonumber \] A study of the rate of dimerization of C4H6 gave the data shown in the table: \[\ce{2C4H6⟶C8H12}\nonumber \] 1.) The average rate of dimerization is the change in concentration of the reactant per unit time. In this case it would be: \(rate\ of\ dimerization=-\frac{\Delta [C_4H_6]}{\Delta t}\) Rate of dimerization between 0 s and 1600 s: \(rate\ of\ dimerization=-\frac{5.04×10^{-3}\,M-1.00×10^{-2}\,M}{1600\, s-0\, s}\) Rate of dimerization between 1600 s and 3200 s: \(rate\ of\ dimerization=-\frac{3.37×10^{-3}\,M-5.04×10^{-3}\,M}{3200\, s-1600\, s}\) 2.) The instantaneous rate of dimerization at 3200 s can be found by graphing time versus [C4H6]. Because you want to find the rate of dimerization at 3200 s, you need to find the slope between 1600 s and 3200 s and also between 3200 s and 4800 s.
For the slope between 1600 s and 3200 s use the points (1600 s, 5.04 × 10⁻³ M) and (3200 s, 3.37 × 10⁻³ M): \(\frac{3.37×10^{-3}\,M-5.04×10^{-3}\,M}{3200\, s-1600\, s}\) \(=\frac{-0.00167\, M}{1600\, s}\) \(=-1.04×10^{-6}\frac{M}{s}\) For the slope between 3200 s and 4800 s use the points (3200 s, 3.37 × 10⁻³ M) and (4800 s, 2.53 × 10⁻³ M): \(\frac{2.53×10^{-3}\,M-3.37×10^{-3}\,M}{4800\, s-3200\, s}\) \(=\frac{-8.4×10^{-4}\, M}{1600\, s}\) \(=-5.25×10^{-7}\frac{M}{s}\) Take the two slopes you just found and average them to get the instantaneous rate of dimerization: \(\frac{-1.04×10^{-6}\frac{M}{s}+(-5.25×10^{-7}\frac{M}{s})}{2}\) \(=\frac{-1.565×10^{-6}\frac{M}{s}}{2}\) \(=-7.83×10^{-7}\frac{M}{s}\) The instantaneous rate of dimerization is \(-7.83×10^{-7}\frac{M}{s}\), and the units of this rate are M/s. 3.) The average rate of formation of C8H12 at 1600 s and the instantaneous rate of formation at 3200 s can be found by using our answers from parts a and b. If you look back at the original equation, you can see that C4H6 and C8H12 are related in a two-to-one ratio: for every two moles of C4H6 used, one mole of C8H12 is produced. For this reaction, the average rate of dimerization and the average rate of formation can be linked through this equation: \(-\frac{1}{2}\frac{\Delta [C_4H_6]}{\Delta t}=\frac{\Delta [C_8H_{12}]}{\Delta t}\) Notice that the reactant side is negative because the reactants are being used up in the reaction. So, for the average rate of formation of C8H12 at 1600 s, use the rate of dimerization between 0 s and 1600 s found earlier and plug it into the equation: the average rate of formation of C8H12 at 1600 s is \(1.55×10^{-6}\frac{M}{s}\). The rate of formation is positive because products are being formed.
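The slope arithmetic above can be reproduced in a few lines, using the C4H6 concentrations quoted in the worked solution:

```python
# Average and instantaneous rates from the dimerization data (time in s -> M).
data = {0: 1.00e-2, 1600: 5.04e-3, 3200: 3.37e-3, 4800: 2.53e-3}

def slope(t1, t2):
    """Average slope of [C4H6] vs t over [t1, t2], in M/s (negative)."""
    return (data[t2] - data[t1]) / (t2 - t1)

# Instantaneous rate at 3200 s: average of the slopes on either side.
inst_3200 = (slope(1600, 3200) + slope(3200, 4800)) / 2

# Average rate of C8H12 formation over 0-1600 s: half the C4H6 loss rate.
avg_formation_1600 = -slope(0, 1600) / 2

print(inst_3200)           # close to -7.83e-7 M/s, as in the solution
print(avg_formation_1600)  # close to 1.55e-6 M/s
```

This is just the numerical form of the two-point slope estimates worked out by hand above.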
The instantaneous rate of formation of C8H12 can be linked to the instantaneous rate of dimerization by this equation: \(-\frac{1}{2}\frac{d[C_4H_6]}{dt}=\frac{d[C_8H_{12}]}{dt}\) So, for the instantaneous rate of formation of C8H12 at 3200 s, use the value of the instantaneous rate of dimerization at 3200 s found earlier and plug it into the equation: the instantaneous rate of formation of C8H12 at 3200 s is \(3.92×10^{-7}\frac{M}{s}\) (positive, since the product is being formed). A study of the rate of the reaction represented as \(2A⟶B\) gave the following data: Equations: \(\frac{-\bigtriangleup A}{\bigtriangleup t}\) and Rate \(=\frac{-\bigtriangleup A}{2\bigtriangleup t}=\frac{\bigtriangleup B}{\bigtriangleup t}\) Solve: 1.) The change in A from 0 s to 10 s is 0.625 − 1 = −0.375, so \(\frac{-\bigtriangleup A}{\bigtriangleup t}\) = 0.375/10 = 0.0375 M/s. Similarly, the change in A from 10 s to 20 s is 0.370 − 0.625 = −0.255, so \(\frac{-\bigtriangleup A}{\bigtriangleup t}\) = 0.255/(20 − 10) = 0.0255 M/s. 2.) We can estimate the rate law by graphing the points against the different integrated order equations to determine the right order. Zero order: \[\frac{d[A]}{dt}=-k\nonumber \] \[\int_{A_{\circ}}^{A}d[A]=-k\int_{0}^{t}dt\nonumber \] \[[A]=-kt+[A_{\circ}]\nonumber \] First order: \[\frac{d[A]}{dt}=-k[A]\nonumber \] \[\int_{A_{\circ}}^{A}\frac{d[A]}{[A]}=-kdt\nonumber \] \[\ln[A]=-kt+\ln[A_{\circ}]\nonumber \] Second order: \[\frac{d[A]}{dt}=-k[A]^{2}\nonumber \] \[\int_{A\circ}^{A}\frac{d[A]}{[A]^{2}}=-k\int_{0}^{t}dt\nonumber \] \[\frac{1}{[A]}=kt+\frac{1}{[A_{\circ}]}\nonumber \] Now that we have found the linear form of each order, we plot the points with an [A] y-axis, a ln[A] y-axis, and a 1/[A] y-axis. Whichever of the plots is most linear will give us a good idea of the order, and its slope will be the k value. Here we notice that the second-order plot is most linear, so we conclude the rate to be:
\[\frac{-d[A]}{2dt}=k[A]^{2}\nonumber \] At 15 seconds [A]=0.465 and from the slope of the graph we find k=0.116. So if we plug this data in and multiply both sides by 2, to get rid of the 2 in the denominator on the left side of the equation, we find that the rate of disappearance of A is 0.05 M/s, where the units are equivalent to mol L⁻¹ s⁻¹. 3.) Using the equation \(\frac{-\bigtriangleup A}{2\bigtriangleup time}=\frac{\bigtriangleup B}{\bigtriangleup time}\) we divide the rates in parts a and b in half to get 0.0188 M/s from 0 to 10 seconds and 0.025 M/s for the estimated instantaneous rate at 15 s. (a) average rate, 0 − 10 s = 0.0375 mol L⁻¹ s⁻¹; average rate, 12 − 18 s = 0.0225 mol L⁻¹ s⁻¹; (b) instantaneous rate, 15 s = 0.0500 mol L⁻¹ s⁻¹; (c) average rate for B formation = 0.0188 mol L⁻¹ s⁻¹; instantaneous rate for B formation = 0.0250 mol L⁻¹ s⁻¹

Consider the following reaction in aqueous solution: \[\ce{5Br-}(aq)+\ce{BrO3-}(aq)+\ce{6H+}(aq)⟶\ce{3Br2}(aq)+\ce{3H2O}(l)\nonumber \] If the rate of disappearance of Br⁻(aq) at a particular moment during the reaction is 3.5 × 10⁻⁴ M s⁻¹, what is the rate of appearance of Br₂(aq) at that moment? Define the rate of the reaction. Recall: For the general reaction aA + bB → cC + dD, \(rate =- \frac{\Delta[A]}{a\Delta{t}}=- \frac{\Delta[B]}{b\Delta{t}}= \frac{\Delta[C]}{c\Delta{t}}=\frac{\Delta[D]}{d\Delta{t}}\) So, for the reaction \(5Br^−(aq)+BrO^−_3(aq)+6H^+→3Br_2(aq)+3H_2O(l)\) the rate would be: \(rate =- \frac{\Delta[Br^-]}{5\Delta{t}}=- \frac{\Delta[BrO^-_3]}{\Delta{t}}= -\frac{\Delta[H^+]}{6\Delta{t}}=\frac{\Delta[Br_2]}{3\Delta{t}}=\frac{\Delta[H_2O]}{3\Delta{t}}\) Since we are given that the rate of disappearance of \(Br^-\)(aq) is \(3.5×10^{-4}\, Ms^{-1}\), and we want the rate of appearance of \(Br_2\)(aq), we set the two rates equal to each other.
\(rate =- \frac{\Delta[Br^-]}{5\Delta{t}}= \frac{\Delta[Br_2]}{3\Delta{t}}\) And, \(-\frac{\Delta[Br^-]}{\Delta{t}}= 3.5×10^{-4}\, Ms^{-1}\) So, \(3.5×10^{-4}\, Ms^{-1} = \frac{5}{3}\frac{\Delta[Br_2]}{\Delta{t}}\) Now solve the equation. \(\frac{(3.5×10^{-4})(3)}{5} = \frac{\Delta[Br_2]}{\Delta{t}}\) \(\frac{\Delta[Br_2]}{\Delta{t}} = 2.1 × 10^{-4}\, Ms^{-1}\)

Describe the effect of each of the following on the rate of the reaction of magnesium metal with a solution of hydrochloric acid: the molarity of the hydrochloric acid, the temperature of the solution, and the size of the pieces of magnesium. Molarity of Hydrochloric Acid

Go to the PhET Reactions & Rates interactive. Use the Single Collision tab to represent how the collision between monatomic oxygen (O) and carbon monoxide (CO) results in the breaking of one bond and the formation of another. Pull back on the red plunger to release the atom and observe the results. Then, click on “Reload Launcher” and change to “Angled shot” to see the difference. According to collision theory, several factors determine whether a reaction happens: how often the molecules or atoms collide, the molecules' or atoms' orientations, and whether there is sufficient energy for the reaction to happen. So, if the angle of the plunger is changed, the atom that is shot (a lone oxygen atom in this case) will hit the other molecule (CO in this case) at a different spot and at a different angle, changing the orientation of the collision; without the proper orientation, the collision will most likely not cause a reaction. Thanks to the simulation, we can see that this is true: depending on the angle selected, the atom may take a long time to collide with the molecule and, when a collision does occur, it may not result in the breaking of one bond and the forming of the other (no reaction happens).
In this particular case, the rate of the reaction will decrease because, by changing the angle, the molecules or atoms won't collide with the correct orientation, or won't collide with the correct orientation as often.

In the PhET Reactions & Rates interactive, use the “Many Collisions” tab to observe how multiple atoms and molecules interact under varying conditions. Select a molecule to pump into the chamber. Set the initial temperature and select the current amounts of each reactant. Select “Show bonds” under Options. How is the rate of the reaction affected by concentration and temperature? Based on collision theory, a reaction will only occur if the molecules collide with the proper orientation and with the sufficient energy required for the reaction to occur. The minimum energy with which the molecules must collide is called the activation energy (the energy of the transition state). Increasing the concentration of reactants increases the probability that reactants will collide in the correct orientation, since there are more reactants in the same volume of space. Therefore, increasing the concentration of reactants increases the rate of the reaction; decreasing the concentration of reactants decreases the rate of reaction, because the overall number of possible collisions decreases. Temperature is directly related to the kinetic energy of molecules, while the activation energy \(E_a\) is the minimum energy required for a reaction to occur and doesn't change for a given reaction. Increasing the temperature increases the kinetic energy of the reactants, meaning the reactants move faster and collide with each other more frequently. Therefore, increasing the temperature increases the rate of the reaction; decreasing the temperature decreases the rate of reaction, since the molecules will have less kinetic energy, move slower, and therefore collide with each other less frequently.
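The temperature argument above can be made semi-quantitative with the Boltzmann factor \(e^{-E_a/RT}\), the fraction of collisions carrying at least the activation energy. A minimal sketch follows; the 50 kJ/mol activation energy is a hypothetical value chosen only to illustrate the trend, not one reported by the simulation:

```python
import math

R = 8.314        # gas constant, J/(mol K)
Ea = 50_000      # hypothetical activation energy, J/mol (illustrative only)

def boltzmann_fraction(temp_kelvin):
    """Fraction of collisions with energy >= Ea at the given temperature."""
    return math.exp(-Ea / (R * temp_kelvin))

f_300 = boltzmann_fraction(300)
f_310 = boltzmann_fraction(310)

# Even a modest 10 K rise nearly doubles the fraction of sufficiently
# energetic collisions, which is why rates are so sensitive to temperature.
ratio = f_310 / f_300
```

This is only the energy part of collision theory; the full rate also carries the collision frequency and orientation factors discussed above.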
In the PhET Reactions & Rates interactive, on the Many Collisions tab, set up a simulation with 15 molecules of A and 10 molecules of BC. Select “Show Bonds” under Options. a. On the simulation, we select the default setting and the reaction A+BC. In the default setting, we see frequent collisions, a low initial temperature, and a total average energy lower than the energy of activation. Collision theory states that the rate of a reaction is directly proportional to the fraction of molecules with the required orientation, the fraction of collisions with the required energy, and the collision frequency. Although we see moving and frequently colliding reactants, the rate of the forward reaction is actually slow, because it takes a long time for the products, AB and C, to start appearing. This is mainly because the fraction of collisions with the required energy is low, since the average energy of the molecules is lower than the energy of activation. b. With a higher total average energy, the reaction proceeds at an even faster rate. Again, collision theory states that the rate of a reaction is directly proportional to the fraction of molecules with the required orientation, the fraction of collisions with the required energy, and the collision frequency. Because the molecules have a higher amount of energy, they have more kinetic energy. With increased kinetic energy, the molecules not only collide more often, but the fraction of collisions with sufficient energy also increases. However, the forward reaction and the backward reaction both proceed at a fast rate, so both happen almost simultaneously, and it takes a shorter time for both reactions to happen. With the two reactions taken together, a state of equilibrium is eventually reached; the process by which equilibrium is reached, however, is faster. Therefore, the amount of product of A+BC stays the same after a while.

How do the rate of a reaction and its rate constant differ?
The rate of a reaction, or reaction rate, is the change in the concentration of either the reactant or the product over a period of time. If the concentrations change, the rate also changes. Rate for A → B: \(rate=-\frac{\Delta[A]}{\Delta t}=\frac{\Delta[B]}{\Delta t}\) The rate constant (k) is a proportionality constant that relates the reaction rate to the reactant concentrations. If the concentrations change, the rate constant does not change.

For a reaction with the general equation: \(aA+bB→cC+dD \) the experimentally determined rate law usually has the following form: \(rate=k[A]^m[B]^n\)

Doubling the concentration of a reactant increases the rate of a reaction four times. With this knowledge, answer the following questions: (a) 2; (b) 1 Tripling the concentration of a reactant increases the rate of a reaction nine times. With this knowledge, answer the following questions: How much and in what direction will each of the following affect the rate of the reaction: \(\ce{CO}(g)+\ce{NO2}(g)⟶\ce{CO2}(g)+\ce{NO}(g)\) if the rate law for the reaction is \(\ce{rate}=k[\ce{NO2}]^2\)? (a) The process reduces the rate by a factor of 4. (b) Since CO does not appear in the rate law, the rate is not affected. How will each of the following affect the rate of the reaction: \(\ce{CO}(g)+\ce{NO2}(g)⟶\ce{CO2}(g)+\ce{NO}(g)\) if the rate law for the reaction is \(\ce{rate}=k[\ce{NO2}][\ce{CO}]\)?

Regular flights of supersonic aircraft in the stratosphere are of concern because such aircraft produce nitric oxide, NO, as a byproduct in the exhaust of their engines. Nitric oxide reacts with ozone, and it has been suggested that this could contribute to depletion of the ozone layer. The reaction \(\ce{NO + O3⟶NO2 + O2}\) is first order with respect to both NO and O₃ with a rate constant of 2.20 × 10⁷ L/mol/s. What is the instantaneous rate of disappearance of NO when [NO] = 3.3 × 10⁻⁶ M and [O₃] = 5.9 × 10⁻⁷ M? 4.3 × 10⁻⁵ mol/L/s

Radioactive phosphorus is used in the study of biochemical reaction mechanisms because phosphorus atoms are components of many biochemical molecules.
The location of the phosphorus (and the location of the molecule it is bound in) can be detected from the electrons (beta particles) it produces: \[\ce{^{32}_{15}P⟶^{32}_{16}S + e-}\nonumber \] Rate = 4.85 × 10⁻² \(\mathrm{day^{-1}\:[^{32}P]}\) What is the instantaneous rate of production of electrons in a sample with a phosphorus concentration of 0.0033 M?

The rate constant for the radioactive decay of ¹⁴C is 1.21 × 10⁻⁴ year⁻¹. The products of the decay are nitrogen atoms and electrons (beta particles): \[\ce{^{14}_{6}C⟶^{14}_{7}N + e-}\nonumber \] \[\ce{rate}=k[\ce{^{14}_{6}C}]\nonumber \] What is the instantaneous rate of production of N atoms in a sample with a carbon-14 content of 6.5 × 10⁻⁹ M? 7.9 × 10⁻¹³ mol/L/year What is the instantaneous rate of production of N atoms in a sample with a carbon-14 content of 1.5 × 10 M?

The decomposition of acetaldehyde is a second order reaction with a rate constant of 4.71 × 10 L/mol/s. What is the instantaneous rate of decomposition of acetaldehyde in a solution with a concentration of 5.55 × 10 M?

Alcohol is removed from the bloodstream by a series of metabolic reactions. The first reaction produces acetaldehyde; then other products are formed. The following data have been determined for the rate at which alcohol is removed from the blood of an average male, although individual rates can vary by 25–30%. Women metabolize alcohol a little more slowly than men: Determine the rate equation, the rate constant, and the overall order for this reaction. rate = k; k = 2.0 × 10⁻² mol/L/h (about 0.9 g/L/h for the average male); the reaction is zero order.

Under certain conditions the decomposition of ammonia on a metal surface gives the following data: Determine the rate equation, the rate constant, and the overall order for this reaction. Nitrosyl chloride, NOCl, decomposes to NO and Cl₂.
\[\ce{2NOCl}(g)⟶\ce{2NO}(g)+\ce{Cl2}(g)\nonumber \] Determine the rate equation, the rate constant, and the overall order for this reaction from the following data: Before we can figure out the rate constant, we must first determine the basic rate equation and rate order. The basic rate equation for this reaction, where n is the rate order of NOCl and k is the rate constant, is \[rate = k[NOCl]^n\nonumber \] since NOCl is the reactant in the reaction. In order to figure out the order of the reaction we must find the order of [NOCl], as it is the only reactant in the reaction. To do this we must examine how the rate of the reaction changes as the concentration of NOCl changes. As [NOCl] doubles in concentration from 0.10 M to 0.20 M, the rate goes from 8.0 × 10⁻¹⁰ mol/L/h to 3.2 × 10⁻⁹ mol/L/h. (3.2 × 10⁻⁹ (mol/L/h))/(8.0 × 10⁻¹⁰ (mol/L/h)) = 4, so we conclude that as [NOCl] doubles, the rate goes up by a factor of 4. Since 2² = 4, we can say that the order of [NOCl] is 2, so our updated rate law is \[rate = k[NOCl]^2\nonumber \] Now that we have the order, we can substitute the first experimental values from the given table to find the rate constant, k: (8.0 × 10⁻¹⁰ (mol/L/h)) = k(0.10 M)², so \[k= \dfrac{8.0 \times 10^{-10}}{ (0.10\, M)^2} = 8 \times 10^{-8} M^{-1} sec^{-1}\nonumber \] We were able to find the units of k using the rate order; when the rate order is 2, the units of k are M⁻¹ sec⁻¹. So the rate equation is rate = k[NOCl]², it is second order, and k = 8 × 10⁻⁸ M⁻¹ sec⁻¹. Overall rate law: \[rate = \underbrace{(8 \times 10^{-8})}_{\text{1/(M sec)}} [NOCl]^2\nonumber \] rate = k[NOCl]²; k = 8.0 × 10⁻⁸ L/mol/s; second order

From the following data, determine the rate equation, the rate constant, and the order with respect to A for the reaction \(A⟶2C\). A.
Using the experimental data, we can compare the effects of changing [A] on the rate of reaction by relating ratios of [A] to ratios of rates: \[ \frac{2.66 \times 10^{-2}}{1.33 \times 10^{-2}} = 2\nonumber \] and \[ \frac{1.52 \times 10^{-6}}{3.8 \times 10^{-7}} = 4\nonumber \] B. From this we know that doubling the concentration of A results in quadrupling the rate of reaction. The order of this reaction is 2. C. We can now write the rate equation since we know the order: \[rate=k[A]^2\nonumber \] D. By plugging one set of experimental data into our rate equation we can solve for the rate constant, k: \[3.8 \times 10^{-7} = k \times (1.33 \times 10^{-2})^{2}\nonumber \] \[k = \frac{3.8 \times 10^{-7}}{1.769 \times 10^{-4}}\nonumber \] \[k= 0.00215\, M^{-1}s^{-1}\nonumber \] \(k= 0.00215\, M^{-1}s^{-1}\); 2nd order

Nitrogen(II) oxide reacts with chlorine according to the equation: \[\ce{2NO}(g)+\ce{Cl2}(g)⟶\ce{2NOCl}(g)\nonumber \] The following initial rates of reaction have been observed for certain reactant concentrations: What is the rate equation that describes the rate’s dependence on the concentrations of NO and Cl₂? What is the rate constant? What are the orders with respect to each reactant? For the general equation \(aA + bB \rightarrow cC + dD\), the rate can be written as \(rate = k[A]^{m}[B]^{n}\), where k is the rate constant, and m and n are the reaction orders. For our equation \(2NO(g) + Cl_{2}(g) \rightarrow 2NOCl(g)\), the \(rate = k[NO]^{m}[Cl_{2}]^{n}\). Now, we need to find the reaction orders. Reaction orders can only be determined experimentally. We can compare two trials in which one of the reactants has the same concentration in both, and solve for the reaction order. \(\frac{rate_{1}}{rate_{2}}=\frac{[NO]_{1}^{m}[Cl_{2}]_{1}^{n}}{[NO]_{2}^{m}[Cl_{2}]_{2}^{n}}\) We can use the data in the table provided.
If we plug in the values for rows 1 and 2, we see that the values for the concentration of Cl₂ will cancel, leaving just the rates and the concentrations of NO. \(\frac{1.14}{4.56}=\frac{[0.5]^{m}}{[1.0]^{m}}\) We can now solve for m, and we find that m = 2. This means that the reaction order for [NO] is 2. Now we must find the value of n. To do so, we can use the same equation but with the values from rows 2 and 3. This time, the concentration of NO will cancel out. \(\frac{4.56}{9.12}=\frac{[0.5]^{n}}{[1.0]^{n}}\) When we solve for n, we find that n = 1. This means that the reaction order for [Cl₂] is 1. We are one step closer to finishing our rate equation. \(rate = k[NO]^{2}[Cl_{2}]\) Finally, we can solve for the rate constant. To do this, we can use one of the trials of the experiment, plug in the values for the rate and the concentrations of reactants, then solve for k. \(1.14\, mol/L/h = k[0.5\, mol/L]^{2}[0.5\, mol/L]\) \(k=9.12\, L^{2}mol^{-2}h^{-1}\) So, our final rate equation is: \(rate = (9.12\, L^{2} mol^{-2}h^{-1})[NO]^{2}[Cl_{2}]\) *A common mistake is forgetting units. Make sure to track your units throughout the process of determining your rate constant. Be careful, because the units will change relative to the reaction order. rate = k[NO]²[Cl₂]; k = 9.12 L² mol⁻² h⁻¹; second order in NO; first order in Cl₂

Hydrogen reacts with nitrogen monoxide to form dinitrogen monoxide (laughing gas) according to the equation: \[\ce{H2}(g)+\ce{2NO}(g)⟶\ce{N2O}(g)+\ce{H2O}(g)\nonumber \] Determine the rate equation, the rate constant, and the orders with respect to each reactant from the following data: The rate constant and the orders can be determined through the differential rate law.
The general form of the differential rate law is given below: for aA + bB + cC ⟶ products, \(rate = k[A]^{n}[B]^{m}[C]^{p}\), where [A], [B], and [C] are the concentrations of the reactants, k is the rate constant, and n, m, and p refer to the order of each reactant. To find the orders of each reactant, we see that when [NO] doubles but [H₂] doesn't change, the rate quadruples, meaning the reaction is second order in [NO] ([NO]²). When [H₂] doubles but [NO] doesn't change, the rate doubles, meaning the reaction is first order in [H₂]. So the rate law would look something like this: Rate = k[NO]²[H₂] We can use this rate law to determine the value of the rate constant. Plug in the data for reactant concentration and rate from one of the trials to solve for the rate constant k. In this case, we chose to use the data from trial 1 from the second column of the data table. 2.835×10⁻³ = k[0.3]²[0.35] k = 0.09 M⁻²s⁻¹

For the reaction \(A⟶B+C\), the following data were obtained at 30 °C: 1. The rate equation for an \(n\)th order reaction is given as \(\frac{dr}{dt}={k}{[A]^n}\), where \([A]\) is the concentration in M, and \(\frac{dr}{dt}\) is the rate in M/s. We can then use each set of data points, plug its values into the rate equation, and solve for \(n\). Note that you can use any of the data points as long as the concentration corresponds to its rate. Rate equation 1: \(4.17 \times {10}^{-4}={k}{[0.230]^n}\) Rate equation 2: \(9.99 \times {10}^{-4}={k}{[0.356]^n}\) We divide Rate equation 1 by Rate equation 2 in order to cancel out k, the rate constant. \({\frac{4.17 \times {10}^{-4}}{9.99 \times {10}^{-4}}} = {\frac{k[0.230]^n}{k[0.356]^n}} \) \({0.417}={0.646^n}\) Now the only unknown we have is \(n\). Using logarithm rules one can solve for it. \(ln{\: 0.417}={n \cdot ln{\: 0.646}}\) \(\frac{ln{\: 0.417}}{ln{\:0.646}}=n=2\) The rate equation is second order with respect to A and is written as \(\frac{dr}{dt}={k}{[A]^2}\). 2.
We can solve for \(k\) by plugging any data point into our rate equation \(\frac{dr}{dt}={k}{[A]^2}\). Using the first data point, for instance \( [A]=0.230 \:\frac{mol}{L}\) and \( \frac{dr}{dt} = 4.17 \times {10}^{-4} \:\frac{mol}{L \cdot s}\), we get the equation \(4.17 \times {10}^{-4} \:\frac{mol}{L \cdot s}={k}{[0.230 \:\frac{mol}{L}]^2}\) which solves for \(k=7.88 \times {10}^{-3} \frac{L}{mol \cdot s}\) Since we know this is a second order reaction, the appropriate units for \(k\) can also be written as \( \frac{1}{M \cdot s}\) (a) The rate equation is second order in A and is written as rate = k[A]². (b) k = 7.88 × 10⁻³ L mol⁻¹ s⁻¹

For the reaction \(Q⟶W+X\), the following data were obtained at 30 °C: What is the order of the reaction with respect to [Q], and what is the rate equation? Order: 2; k = 0.231 \(M^{-1}s^{-1}\)

The rate constant for the first-order decomposition at 45 °C of dinitrogen pentoxide, N₂O₅, dissolved in chloroform, CHCl₃, is 6.2 × 10⁻⁴ min⁻¹. \[\ce{2N2O5⟶4NO2 + O2}\nonumber \] What is the rate of the reaction when [N₂O₅] = 0.40 M? The first step is to write the rate law. We know the general formula for a first-order rate law: Rate = k[A]. We now plug [N₂O₅] in for [A] in our general rate law, along with the rate constant (k), which was given to us. Now our equation looks as follows: Rate = (6.2×10⁻⁴ min⁻¹)[N₂O₅] We now plug in our given molarity, [N₂O₅] = 0.4 M, and solve: Rate = (6.2×10⁻⁴ min⁻¹)(0.4 M) = 2.48×10⁻⁴ M/min. Using significant figures, round 2.48×10⁻⁴ M/min to 2.5 × 10⁻⁴ mol L⁻¹ min⁻¹. (a) 2.5 × 10⁻⁴ mol/L/min

The annual production of HNO₃ in 2013 was 60 million metric tons. Most of that was prepared by the following sequence of reactions, each run in a separate reaction vessel. The first reaction is run by burning ammonia in air over a platinum catalyst. This reaction is fast. The reaction in equation (c) is also fast.
The second reaction limits the rate at which nitric acid can be prepared from ammonia. If equation (b) is second order in NO and first order in O₂, what is the rate of formation of NO₂ when the oxygen concentration is 0.50 M and the nitric oxide concentration is 0.75 M? The rate constant for the reaction is 5.8 × 10⁻⁶ L²/mol²/s. To determine the rate law for an equation we need to look at its slow step. Since both equations (a) and (c) are fast, equation (b) can be considered the slow step of the reaction; the slow step is also the rate-determining step of the system. rate of production of \(NO_2 = k [A]^m [B]^n \) \(rate = k [NO]^2 [O_2]^1~M/s\) \(rate = (5.8\times10^{-6}) [0.75]^2 [0.5]^1 ~M/s\) \(rate = 1.6\times10^{-6}~M/s\)

The following data have been determined for the reaction: \[\ce{I- + OCl- ⟶ IO- + Cl-}\nonumber \] Determine the rate equation and the rate constant for this reaction. Using the reactants, we can form the rate law of the reaction: \[ r=k[OCl^-]^n[I^-]^m \] From there, we need to use the data to determine the order of both \([OCl^-]\) and \([I^-]\). In doing so, we need to compare \(r_1\) to \(r_2\) such that: \[ \frac {r_1}{r_2} = \frac {(0.10^m)(0.050^n)}{(0.20^m)(0.050^n)} = \frac {3.05 \times 10^{-4}}{6.20 \times 10^{-4}} \] \[ 0.5^m = 0.5 \] \[ m = 1 \] We can "cross out" the concentration of \([OCl^-]\) because it has the same concentration in both of the trials used. Now we know that the reaction is first order in m (\([I^-]\)). We cannot "cross out" \([I^-]\) to find \([OCl^-]\), because no two trials have the same concentration of \([I^-]\). In order to solve for n we will plug in 1 for m.
\[ \frac {r_1}{r_3} = \frac {(0.10^{1})(0.050^n)}{(0.30^{1})(0.010^n)} = \frac {3.05 \times 10^{-4}}{1.83 \times 10^{-4}} \] \[ \frac {1}{3} (5^{n}) = 1.6666667 \] \[ 5^{n} = 5 \] \[ n = 1 \] Since we know that the orders n and m are both equal to one, we can now substitute them into the rate law equation, along with the respective concentrations (from either the first, second, or third trial), and solve for the rate constant, k. \[ r=k[OCl^-]^n[I^-]^m \] \[ 3.05 \times 10^{-4}= k[0.05]^1[0.10]^1 \] \[ k = 6.1 \times 10^{-2} \frac {L}{mol \cdot s} \] Thus the overall rate law is: \[ r = (6.1 \times 10^{-2} \tfrac {L}{mol \cdot s})[OCl^-][I^-] \] The units for k depend on the overall order of the reaction. To find the overall order we add m and n together; doing so gives an overall order of 2. This is why the units for k are \( \frac {L}{mol \cdot s} \) rate = k[I⁻][OCl⁻]; k = 6.1 × 10⁻² L mol⁻¹ s⁻¹

In the reaction \[2NO + Cl_2 → 2NOCl\nonumber \] the reactants and products are gases at the temperature of the reaction. The following rate data were measured for three experiments: The rate equation can be determined by designing experiments that measure the concentration(s) of one or more reactants or products as a function of time. For the reaction \(A+B\rightarrow products\), for example, we need to determine k and the exponents m and n in the following equation: \[rate=k[A]^m[B]^n\nonumber \] To do this, the initial concentration of B can be kept constant while varying the initial concentration of A and calculating the initial reaction rate. This information yields the reaction order with respect to A. The same process can be done to find the reaction order with respect to B.
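The initial-rates ratio method just outlined can be sketched in a few lines of Python. The trial values below are the ones used for the NO/Cl₂ system in this problem (concentrations in atm, rates in atm/s); the function names are ours:

```python
import math

# Three initial-rate trials: ([NO], [Cl2], initial rate).
trials = [
    (0.50, 0.50, 5.1e-3),
    (1.00, 1.00, 4.0e-2),
    (0.50, 1.00, 1.0e-2),
]

def order(c_a, r_a, c_b, r_b):
    """Reaction order from two trials in which only one concentration changes."""
    return round(math.log(r_b / r_a) / math.log(c_b / c_a))

# [Cl2] is the same in trials 2 and 3, so the ratio isolates the NO order.
m = order(trials[2][0], trials[2][2], trials[1][0], trials[1][2])

# [NO] is the same in trials 1 and 3, isolating the Cl2 order.
n = order(trials[0][1], trials[0][2], trials[2][1], trials[2][2])

# With the orders known, any single trial gives the rate constant.
no, cl2, rate = trials[0]
k = rate / (no**m * cl2**n)
```

The `round` call absorbs the small departure from an exact integer that experimental rate data always introduce (here 1.0×10⁻²/5.1×10⁻³ is 1.96 rather than exactly 2).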
In this particular example, \[\frac{rate_2}{rate_3}=\frac{k[A_2]^m[B_2]^n}{k[A_3]^m[B_3]^n}\nonumber \] So, taking the values from the table, \[\frac{4.0*10^{-2}}{1.0*10^{-2}}=\frac{k[1.0]^m[1.0]^n}{k[0.5]^m[1.0]^n}\nonumber \] and by canceling like terms, you are left with \[\frac{4.0*10^{-2}}{1.0*10^{-2}}=\frac{[1.0]^m}{[0.5]^m}\nonumber \] Now, solve for m: \(4=2^m\Longrightarrow m=2\) Because m=2, the reaction is second order with respect to \(NO\). You can repeat the same process to find n. \[\frac{rate_3}{rate_1}=\frac{k[A_3]^m[B_3]^n}{k[A_1]^m[B_1]^n}\nonumber \] Taking the values from the table, \[\frac{1.0*10^{-2}}{5.1*10^{-3}}=\frac{k[0.5]^m[1.0]^n}{k[0.5]^m[0.5]^n}\nonumber \] and by canceling like terms, you are left with \[\frac{1.0*10^{-2}}{5.1*10^{-3}}=\frac{[1.0]^n}{[0.5]^n}\nonumber \] Now this time, solve for n: \(2=2^n\Longrightarrow n=1\) Because n=1, the reaction is first order with respect to \(Cl_2\). So the rate equation is \[rate=k[NO]^2[Cl_2]^1\nonumber \] To find the overall rate order, you simply add the orders together: second order plus first order makes the overall reaction third order. The rate constant is calculated by inserting the data from any row of the table into the experimentally determined rate law and solving for k. For a third order reaction, the units of k are \(\frac{1}{atm^2\cdot sec}\). Using Experiment 1, \[rate=k[NO]^2[Cl_2]^1\Longrightarrow 5.1*10^{-3} \frac{atm}{sec}=k[0.5\ atm]^2[0.5\ atm]^1\nonumber \] \[k=0.0408 \frac{1}{atm^2\cdot sec}\nonumber \] \(NO\) is second order. \(Cl_2\) is first order. The overall reaction order is three. b) \(k=0.0408\; atm^{-2}\cdot sec^{-1}\)

Describe how graphical methods can be used to determine the order of a reaction and its rate constant from a series of data that includes the concentration of A at varying times. To determine the order of a reaction when given the data series, one must graph the data as [A] versus time, as the natural log of [A] versus time, and as 1/[A] versus time. Whichever method yields a straight line will determine the order.
With respect to the graphing methods above, if a straight line is yielded by the first method the reaction is zero order; if by the second method, it is first order; and if by the third method, it is second order. When the order is known, the corresponding integrated rate law can be used with the points on the graph to determine the value of k. We can see that we need an initial value of [A] and a later value of [A], and both of these would be given by the data.

Use the data provided to graphically determine the order and rate constant of the following reaction: \(\ce{SO2Cl2 ⟶ SO2 + Cl2}\) Use the data to graphically determine the order and rate constant of the following reaction. slope = −2.0 × 10⁻⁵ In this graph, ln(concentration) vs. time is linear, indicating that the reaction is first order. k = −slope of line Plotting a graph of ln[SO₂Cl₂] versus t reveals a linear trend; therefore we know this is a first-order reaction: k = −slope = 2.20 × 10⁻⁵ s⁻¹

Use the data provided in a graphical method to determine the order and rate constant of the following reaction: \[2P⟶Q+W\nonumber \]

Pure ozone decomposes slowly to oxygen, \(\ce{2O3}(g)⟶\ce{3O2}(g)\). Use the data provided in a graphical method and determine the order and rate constant of the reaction. To identify how the concentration changes as a function of time requires solving the appropriate differential equation (i.e., the differential rate law). The zero-order rate law predicts a linear decay of concentration with time. The 1st-order rate law predicts an exponential decay of concentration with time. The 2nd-order rate law predicts a reciprocal decay of concentration with time. The [A] vs. time plot is not linear, so the reaction is not zero order. The ln[A] vs. time plot is not linear, so the reaction is not first order. The 1/[A] vs. time plot is nicely linear, so the reaction is second order.
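The three-plot screening just described can also be done numerically by comparing how linear each transform is (its R² against time). A minimal sketch follows; the data here are synthetic, generated from an exactly second-order decay with arbitrary illustrative values (k = 0.05 M⁻¹s⁻¹, [A]₀ = 1.0 M), so the 1/[A] transform should win:

```python
import math

times = [0, 10, 20, 30, 40, 50]                 # s
conc = [1 / (1 + 0.05 * t) for t in times]      # second-order integrated law

def r_squared(xs, ys):
    """Coefficient of determination for a least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

fits = {
    "zero":   r_squared(times, conc),                        # [A] vs t
    "first":  r_squared(times, [math.log(c) for c in conc]), # ln[A] vs t
    "second": r_squared(times, [1 / c for c in conc]),       # 1/[A] vs t
}
best = max(fits, key=fits.get)    # the most linear transform
```

With real data the winning R² will not be exactly 1, but the same comparison applies, and the slope of the winning line gives k (with a sign flip for the first-order case).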
For a second-order reaction, \( 1/[A] = kt + 1/[A_0] \). Thus, the value of k is the slope of the graph of \( \frac{1}{[\ce{O3}]}\) vs. time: k = 50.3 × 10⁶ L mol⁻¹ h⁻¹ The plot is nicely linear, so the reaction is second order. k = 50.1 L mol⁻¹ h⁻¹

From the given data, use a graphical method to determine the order and rate constant of the following reaction: \[2X⟶Y+Z\] In order to determine the order of the reaction, we need to plot the data using three different graphs. All three graphs will have time in seconds as the x-axis, but the y-axis is what will differ. One graph will plot concentration versus time, the second will plot the natural log of concentration versus time, and the third will plot 1/concentration versus time. Whichever graph results in a straight line tells us the order of the reaction: a line on the first graph means zero order, on the second graph first order, and on the third graph second order. Now let's plot the data to determine the order. We can clearly see that the third graph, which plots 1/M versus time, is a straight line while the other two are slightly curved. Therefore, we can determine that this reaction is second order. This also tells us the units of the rate constant, which should be M⁻¹ s⁻¹ for a second order reaction. To determine the rate constant, k, we simply need to figure out the slope of the third graph, since that is the plot corresponding to the order of this reaction. To find the slope of the line, we take two points, subtract the y values, and divide by the difference of the x values. This is how to do it: Use the points (5, 10.101) and (40, 80) on the 1/[X] versus time line. Now use these to get the slope, which is the rate constant: (80−10.101)/(40−5) = 1.997 = k So the rate constant for this second order reaction is 1.997 M⁻¹ s⁻¹.

What is the half-life for the first-order decay of phosphorus-32?
\(\ce{(^{32}_{15}P⟶^{32}_{16}S + e- )}\) The rate constant for the decay is 4.85 × 10⁻² day⁻¹. This is a first order reaction, so we can use our half-life equation below: \[t_{1/2}=\frac{0.693}{k}\nonumber \] The rate constant is given to us in units per day. All we have to do is plug it into the equation. \[t_{1/2}=\frac{0.693}{4.85\times10^{-2}}\nonumber \] \[=14.3\; days\nonumber \] 14.3 d

What is the half-life for the first-order decay of carbon-14? \(\ce{(^{14}_{6}C⟶^{14}_{7}N + e- )}\) The rate constant for the decay is 1.21 × 10⁻⁴ year⁻¹. To find the half-life, we need to use the first-order half-life equation, since radioactive decay is a first-order process. The half-life equation for first order is \[t_{1/2}=\frac{\ln 2}{k} \nonumber \] with k being the rate constant. The rate constant for carbon-14 was given as \(1.21 × 10^{-4} year^{−1}\). Plug it into the equation \[t_{1/2}=\frac{\ln 2}{1.21 × 10^{−4} year^{−1}}\nonumber \] and solve for \( t_{1/2}\). When you calculate it, the half-life for carbon-14 is 5.73 × 10³ years.

What is the half-life for the decomposition of NOCl when the concentration of NOCl is 0.15 M? The rate constant for this second-order reaction is 8.0 × 10⁻⁸ L/mol/s. The half-life of a reaction, t₁/₂, is the amount of time required for a reactant concentration to decrease by half compared to its initial concentration. When solving for the half-life of a reaction, we should first consider the order of the reaction to determine its rate law.
In this case, we are told that this reaction is second-order, so we know that the integrated rate law is given as: \[\dfrac{1}{[A]} = kt + \dfrac{1}{[A]_0}\nonumber \] Isolating for time, we find that: \[t_{1/2} = \dfrac{1}{k[A]_0}\nonumber \] Now it is just a matter of substituting the information we have been given to calculate \(t_{1/2}\), where the rate constant, \({k}\), is equal to 8.0 × 10⁻⁸ L/mol/s and the initial concentration, \({[A]_0}\), is equal to 0.15 M: \[t_{1/2} = \dfrac{1}{(8.0×10^{-8})(0.15)} = {8.33×10^7\ seconds}\nonumber \] 8.33 × 10⁷ s

What is the half-life for the decomposition of O₃ when the concentration of O₃ is 2.35 × 10⁻⁶ M? The rate constant for this second-order reaction is 50.4 L/mol/h. Since the reaction is second order, its half-life is \[t_{1/2}=\dfrac{1}{(50.4\,M^{-1}h^{-1})[2.35×10^{-6}M]}\nonumber \] So, the half-life is 8443 hours.

The reaction of compound A to give compounds C and D was found to be second-order in A. The rate constant for the reaction was determined to be 2.42 L/mol/s. If the initial concentration is 0.500 mol/L, what is the value of t₁/₂? As mentioned in the question, the reaction of compound A will result in the formation of compounds C and D. This reaction was found to be second-order in A. Therefore, we should use the second-order equation for half-life, which relates the rate constant and initial concentration to the half-life: \[t_{\frac{1}{2}}=\frac{1}{k[A]_{0}}\nonumber \] Since we were given k (the rate constant) and the initial concentration of A, we have everything needed to calculate the half-life of A. \[k=2.42\frac{L}{mol\cdot s}\nonumber \] \[[A]_{0}=0.500\frac{mol}{L}\nonumber \] When we plug in the given information, notice that the units cancel out to seconds. \[t_{\frac{1}{2}}=\frac{1}{2.42\frac{L}{mol\cdot s}\times[0.500\frac{mol}{L}]}=0.826\, s\nonumber \] 0.826 s

The half-life of a reaction of compound A to give two other compounds is 8.50 minutes when the initial concentration of A is 0.150 mol/L.
How long will it take for the concentration to drop to 0.0300 mol/L if the reaction is (a) first order with respect to A or (b) second order with respect to A? Organize the given variables: (half-life of A) \(t_{1/2}=8.50min\) (initial concentration of A) \([A]_{0}=0.150mol/L\) (target concentration of A) \([A]=0.0300mol/L\) Find the rate constant k, using the half-life formula for each respective order. After finding k, use the integrated rate law respective to each order and the initial and target concentrations of A to find the time it took for the concentration to drop. (half-life) \(t_{1/2}=\frac{ln(2)}{k}=\frac{0.693}{k}\) (rearranged for k) \(k=\frac{0.693}{t_{1/2}}\) (plug in t = 8.50 min) \(k=\frac{0.693}{8.50min}=0.0815min^{-1}\) (integrated rate law) \(ln[A]=-kt+ln[A]_{0}\) (rearranged for t) \(ln(\frac{[A]}{[A]_{0}})=-kt\) \(-ln(\frac{[A]}{[A]_{0}})=kt\) \(ln(\frac{[A]}{[A]_{0}})^{-1}=kt\) \(ln(\frac{[A]_{0}}{[A]})=kt\) \(t=\frac{ln(\frac{[A]_{0}}{[A]})}{k}\) (plug in variables) \(t=\frac{ln(\frac{0.150mol/L}{0.0300mol/L})}{0.0815min^{-1}}=\frac{ln(5.00)}{0.0815min^{-1}}=19.7min\) (half-life) \(t_{1/2}=\frac{1}{k[A]_{0}}\) (rearranged for k) \(k=\frac{1}{t_{1/2}[A]_{0}}\) (plug in variables) \(k=\frac{1}{(8.50min)(0.150mol/L)}=\frac{1}{1.275min\cdot mol/L}=0.784L/mol\cdot min\) (integrated rate law) \(\frac{1}{[A]}=kt+\frac{1}{[A]_{0}}\) (rearranged for t) \(\frac{1}{[A]}-\frac{1}{[A]_{0}}=kt\) \(t=\frac{1}{k}(\frac{1}{[A]}-\frac{1}{[A]_{0}})\) (plug in variables) \(t=\frac{1}{0.784L/mol\cdot min}(\frac{1}{0.0300mol/L}-\frac{1}{0.150mol/L})=\frac{1}{0.784L/mol\cdot min}(\frac{80}{3}L/mol)=34.0min\) a) 19.7 min b) 34.0 min Some bacteria are resistant to the antibiotic penicillin because they produce penicillinase, an enzyme with a molecular weight of \(3 × 10^4\) g/mol that converts penicillin into inactive molecules.
Although the kinetics of enzyme-catalyzed reactions can be complex, at low concentrations this reaction can be described by a rate equation that is first order in the catalyst (penicillinase) and that also involves the concentration of penicillin. From the following data: 1.0 L of a solution containing 0.15 µg (0.15 × 10⁻⁶ g) of penicillinase, determine the order of the reaction with respect to penicillin and the value of the rate constant. The first step is to solve for the order of the reaction. This can be done by setting up two expressions that equate the rate to the rate constant times the molar concentration of penicillin raised to the power of its order. Once we have both expressions set up, we can divide them to cancel out k (the rate constant) and use a logarithm to solve for the exponent, which is the order. It will look like this: rate (mol/L/min) = k[penicillin]ˣ (1.0 × 10⁻¹⁰) = k[2.0 × 10⁻⁶]ˣ (1.5 × 10⁻¹⁰) = k[3.0 × 10⁻⁶]ˣ (2/3) = (2/3)ˣ, so x = 1. A single ratio equation can also be set up to solve for the reaction order: \[\frac{rate_{1}}{rate_{2}}=\frac{k[Penicillin]_{1}^{x}}{k[Penicillin]_{2}^{x}}\nonumber \] We then solve for x in a similar fashion. \[\frac{1.0×10^{-10}}{1.5×10^{-10}}=\frac{[2.0×10^{-6}]^{x}}{[3.0×10^{-6}]^{x}}\nonumber \] Now that we have the order of the reaction, we can proceed to solve for the value of the rate constant. Substituting x = 1 into our first equation yields the expression: (1.0 × 10⁻¹⁰) = k[2.0 × 10⁻⁶] k = (1.0 × 10⁻¹⁰)/(2.0 × 10⁻⁶) k = 5 × 10⁻⁵ min⁻¹ We have a unit of min⁻¹ because we divided (mol/L/min) by molarity, which is in (mol/L), yielding a unit of min⁻¹. We were given two important pieces of information to finish the problem. It is stated that the enzyme has a molecular weight of 3 × 10⁴ g/mol, and that we have a one-liter solution that contains 0.15 × 10⁻⁶ g of penicillinase. Dividing the mass in grams by the molecular weight yields the moles of enzyme.
(0.15 × 10⁻⁶ g) / (3 × 10⁴ g/mol) = 5 × 10⁻¹² mol Now that we have the amount of moles, we can divide our rate constant by this value. (5 × 10⁻⁵ min⁻¹) / (5 × 10⁻¹² mol) = 1.0 × 10⁷ L mol⁻¹ min⁻¹ (using the 1.0-L solution volume) The reaction is first order with k = 1.0 × 10⁷ L mol⁻¹ min⁻¹. Both technetium-99 and thallium-201 are used to image heart muscle in patients with suspected heart problems. The half-lives are 6 h and 73 h, respectively. What percent of the radioactivity would remain for each of the isotopes after 2 days (48 h)? This problem asks for the percentage of radioactivity remaining for both isotopes after 48 hours, which we can find with the first-order equation \(\ln(N/N_0) = -kt\): the natural log of the fraction remaining equals the negative of the rate constant times time. To determine the rate constant, we compute 0.693 over the half-life given in the problem. For technetium-99 we can determine the rate constant: 0.693/6 h = 0.1155 h⁻¹. Now that we have the rate constant we can plug in: \(\ln(N/N_0)\) = −(0.1155 h⁻¹)(48 h), so \(\ln(N/N_0)\) = −5.544, and exponentiating gives \(N/N_0 = 3.9 × 10^{-3}\); multiplying by 100 gives 0.39% remaining. We can do the same process for thallium-201: 0.693/73 h = 0.009493 h⁻¹, and when we plug this into the first-order equation we get \(\ln(N/N_0)\) = −(0.009493 h⁻¹)(48 h), so \(\ln(N/N_0)\) = −0.4557; exponentiating gives \(N/N_0 = 0.634\), and multiplying by 100 gives 63.4% remaining. This makes sense: since the half-life is 73 hours and only 48 hours have passed, more than half of the sample still remains. Technetium-99: 0.39% Thallium-201: 63.4% There are two molecules with the formula C₃H₆. Propene, \(\ce{CH_3CH=CH_2}\), is the monomer of the polymer polypropylene, which is used for indoor-outdoor carpets.
Cyclopropane is used as an anesthetic: When heated to 499 °C, cyclopropane rearranges (isomerizes) and forms propene with a rate constant of 5.95 × 10⁻⁴ s⁻¹. What is the half-life of this reaction? What fraction of the cyclopropane remains after 0.75 h at 499 °C? Use the equation \[ t_{1/2} = \frac{ln2}{k}\nonumber \] since this is a first-order reaction. You can tell that this is first order from the units of the rate constant: a rate constant in s⁻¹ always indicates a first-order reaction. Plugging in gives a half-life of 1164.95 seconds. To convert this to hours, divide by 3600 seconds/hour, to get 0.324 hours. Use the integrated first-order rate law \[ln\frac{[A]}{[A]_0} = -kt\nonumber \] In this equation, \([A]_0\) represents the amount of compound present at time 0, while \([A]\) represents the amount of compound that is left after the reaction has occurred. Therefore, the fraction \[\frac{[A]}{[A]_0}\nonumber \] is equal to the fraction of cyclopropane that remains after a certain amount of time, in this case 0.75 hours, which must be converted to 2700 s to match the units of k. Substitute x for the fraction \[\frac{[A]}{[A]_0}\nonumber \] in the integrated rate law: \[ln(x) = -(5.95×10^{-4}\,s^{-1})(2700\,s)\nonumber \] \[x=e^{-(5.95×10^{-4})(2700)}\nonumber \] = 0.20 = 20%. So, the half-life is 0.324 hours, and 20% of the cyclopropane will remain, as 80% will have formed propene. 0.324 hours; 20% remains Fluorine-18 is a radioactive isotope that decays by positron emission to form oxygen-18 with a half-life of 109.7 min. (A positron is a particle with the mass of an electron and a single unit of positive charge; the nuclear equation is \(\ce{^{18}_9F ⟶ ^{18}_8O + ^0_{+1}e}\).) Physicians use ¹⁸F to study the brain by injecting a quantity of fluoro-substituted glucose into the blood of a patient. The glucose accumulates in the regions where the brain is active and needs nourishment.
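The cyclopropane numbers just worked can be verified numerically; the sketch below converts 0.75 h to seconds so the time matches the units of k:

```python
import math

k = 5.95e-4          # s^-1, cyclopropane -> propene at 499 °C
t = 0.75 * 3600      # 0.75 h expressed in seconds, matching k's units

half_life_h = math.log(2) / k / 3600   # first-order half-life, in hours
fraction_left = math.exp(-k * t)       # [A]/[A]0 after time t

print(f"half-life: {half_life_h:.3f} h")           # ~0.324 h
print(f"fraction remaining: {fraction_left:.2f}")  # ~0.20 (20%)
```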
a) The nuclear decay of an isotope of an element is represented by the first-order equation: ln(N/N₀) = −kt where t is time, N₀ is the initial amount of the substance, N is the amount of the substance after time t, and k is the rate constant. We can rearrange the equation and isolate k so that we can solve for the rate constant: k = [−ln(N/N₀)] / t We are given that fluorine-18 has a half-life of 109.7 minutes. Since we have the half-life, we can choose an arbitrary value for N₀ and use half of that value for N. In this case, we choose 100 for N₀ and 50 for N. Now we can plug those values into the equation and solve for k. k = [−ln(50/100)] / 109.7 k = 0.6931 / 109.7 = 0.006319 min⁻¹ b) For this problem, we are able to use the same equation from part a: ln(N/N₀) = −kt However, this time we are given the amount of time elapsed instead of the half-life, and we are asked to determine the percent of fluorine-18 radioactivity remaining after that time. In this problem, we must plug in values for N₀, k (determined in part a), and t. But first, since we are given the elapsed time in hours, we must convert it into minutes: 5.59 hours × (60 minutes / 1 hour) = 335.4 minutes This gives us the value for t. We also have values for k (0.006319 min⁻¹) and N₀ (again an arbitrary number, 100). Now we can plug values into the original equation, giving us: ln(N/100) = −(0.006319)(335.4) = −2.12 We solve this equation by exponentiating both sides: \(N/100 = e^{-2.12}\), and now we can just solve for N: \(N = e^{-2.12} × 100 = 12.0\) Since 100 was used as the initial amount and 12.0 was determined as the remaining amount, 12.0% is the percentage of fluorine-18 radioactivity remaining. c) This part of the question is much like the previous two parts, but this time we are given the initial amount of radioactivity and the final amount of radioactivity, and we are asked to determine how long it took for that amount of radioactivity to decay.
We are able to use the same equation: ln(N/N₀) = −kt However, now we are given N and N₀, and we have already determined k from before. We are told that 99.99% of the radioactivity has decayed, so we can use 100 and 0.01 for N₀ and N respectively. We plug these values into the equation, solve for t, and get ln(0.01/100) = −0.006319t −9.21 = −0.006319t t = 1458 minutes a) 0.006319 min⁻¹ b) 12.0% c) 1458 minutes Suppose that the half-life of steroids taken by an athlete is 42 days. Assuming that the steroids biodegrade by a first-order process, how long would it take for \(\dfrac{1}{64}\) of the initial dose to remain in the athlete’s body? 252 days For a first-order reaction: \(t_{1/2}\) = 0.693 / k, so k = 0.693 / 42 days = 0.0165 day⁻¹. For a first-order reaction: \([A] = [A]_0e^{-kt}\). "1/64 of the initial dose" means \([A] = \frac{1}{64}[A]_0\); therefore \(\frac{1}{64}[A]_0 = [A]_0e^{-kt}\), and solving for t gives t = 252 days. (Equivalently, 1/64 = (1/2)⁶, so six half-lives must elapse: 6 × 42 = 252 days.) Recently, the skeleton of King Richard III was found under a parking lot in England. If tissue samples from the skeleton contain about 93.79% of the carbon-14 expected in living tissue, what year did King Richard III die? The half-life for carbon-14 is 5730 years. In order to find out what year King Richard III died, set \(A/A_0\) (the fraction of carbon-14 still contained) equal to \(0.5^{t/t_{1/2}}\), or use the equation \(N(t) = N_0e^{-rt}\). Using the first equation: \(A/A_{0}\) = \(0.5^{t/t_{1/2}}\) plug in the given numbers \(0.9379 = 0.5^{t/5730}\) and solve for t. \(\ln 0.9379\) = \((t/5730)(\ln 0.5)\) (using the rule of logs) \(-0.0641\) = \((t/5730)(-0.693)\) \(-367.36\) = \(-0.693t\) \(t = 530.1\; years\) Using \(N(t) = N_{0}e^{-rt}\) this problem is solved by the following: \(1/2 = e^{-5730r}\) \(r = 0.000121\) Now that we know what r is, we can use this value in our original formula and solve for t, the amount of years that have passed. This time, we use 93.79, the percent of the carbon-14 remaining, as N(t) and 100 as the original, N₀.
\(93.79 = 100e^{-0.000121t}\) \(t ≈ 530\) years Another way of doing this is by using these two equations: λ = \(\dfrac{0.693}{t_{1/2}}\) and \(\ln\dfrac{n_{t}}{n_{0}}\) = −λt, where \(n_{t}\) = amount at time t (93.79) and \(n_{0}\) = initial amount (100). First solve for lambda, the decay constant, by plugging in the half-life. Then plug lambda and the other numbers into the second equation and solve for t, which should also come out to about 530 years. If we want to find out what year King Richard III died, we take the current year, 2017, and subtract 530 years. Doing this, we find that King Richard III died in the year 1487. King Richard III died in the year 1487 Nitroglycerine is an extremely sensitive explosive. In a series of carefully controlled experiments, samples of the explosive were heated to 160 °C and their first-order decomposition studied. Determine the average rate constants for each experiment using the following data: First we need to understand what the question is asking for: the average rate constant. The average rate constant is the variable "k" in kinetics, the proportionality constant in the equation that expresses the relationship between the rate of a chemical reaction and the concentrations of the reacting substances. Knowing that we need to find k in this first-order reaction, we can look for formulas that include k, the initial and final concentrations \([A]_o\) and \([A]_t\), and the time t. Since this is a first-order reaction, we can look to the first-order equations, and doing that we find one that includes the variables given in the question: \[\ln[A]_t=-kt+\ln[A]_o\nonumber \] For the first experiment, we have an initial concentration of 4.88 M and the percentage decomposed.
To find the final concentration, we must multiply the initial concentration by the fraction decomposed to know how much decomposed, and subtract that from the original to find out how much is left: 4.88 M × 0.52 = 2.54 M and 4.88 M − 2.54 M = 2.34 M Now we have the variables we need, and we plug them into the equation above: \(\ln[A]_t=-kt+\ln[A]_o\) \(\ln[2.34M]=-k(300s)+\ln[4.88M]\) k=\({-(\ln[2.34M]-\ln[4.88M])}\over 300\) \(k=2.45×10^{-3}\,s^{-1}\) Since it asks for the rate constant of each experiment, we now do the same procedure for each data set to find the rate constant: \(\ln[A]_t=-kt+\ln[A]_o\) \(\ln[1.66M]=-k(300s)+\ln[3.52M]\) k=\({-(\ln[1.66M]-\ln[3.52M])}\over 300\) \(k=2.51×10^{-3}\,s^{-1}\) \(\ln[A]_t=-kt+\ln[A]_o\) \(\ln[1.07M]=-k(300s)+\ln[2.29M]\) k=\({-(\ln[1.07M]-\ln[2.29M])}\over 300\) \(k=2.54×10^{-3}\,s^{-1}\) \(\ln[A]_t=-kt+\ln[A]_o\) \(\ln[0.834M]=-k(300s)+\ln[1.81M]\) k=\({-(\ln[0.834M]-\ln[1.81M])}\over 300\) \(k=2.58×10^{-3}\,s^{-1}\) \(\ln[A]_t=-kt+\ln[A]_o\) \(\ln[3.49M]=-k(180s)+\ln[5.33M]\) k=\({-(\ln[3.49M]-\ln[5.33M])}\over 180\) \(k=2.35×10^{-3}\,s^{-1}\) \(\ln[A]_t=-kt+\ln[A]_o\) \(\ln[2.60M]=-k(180s)+\ln[4.05M]\) k=\({-(\ln[2.60M]-\ln[4.05M])}\over 180\) \(k=2.46×10^{-3}\,s^{-1}\) \(\ln[A]_t=-kt+\ln[A]_o\) \(\ln[1.89M]=-k(180s)+\ln[2.95M]\) k=\({-(\ln[1.89M]-\ln[2.95M])}\over 180\) \(k=2.47×10^{-3}\,s^{-1}\) \(\ln[A]_t=-kt+\ln[A]_o\) \(\ln[1.11M]=-k(180s)+\ln[1.72M]\) k=\({-(\ln[1.11M]-\ln[1.72M])}\over 180\) \(k=2.43×10^{-3}\,s^{-1}\) For the past 10 years, the unsaturated hydrocarbon 1,3-butadiene \(\ce{(CH2=CH–CH=CH2)}\) has ranked 38th among the top 50 industrial chemicals. It is used primarily for the manufacture of synthetic rubber. An isomer exists also as cyclobutene: The isomerization of cyclobutene to butadiene is first-order and the rate constant has been measured as 2.0 × 10⁻⁴ s⁻¹ at 150 °C in a 0.53-L flask. Determine the partial pressure of cyclobutene and its concentration after 30.0 minutes if an isomerization reaction is carried out at 150 °C with an initial pressure of 55 torr.
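The eight nitroglycerine rate constants above all come from the same formula, \(k = \ln([A]_0/[A]_t)/t\), so they can be generated in one loop; the concentration/time pairs below are transcribed from the worked solution:

```python
import math

# ([A]0 in M, [A]t in M, t in s), transcribed from the worked solution
experiments = [
    (4.88, 2.34, 300), (3.52, 1.66, 300), (2.29, 1.07, 300), (1.81, 0.834, 300),
    (5.33, 3.49, 180), (4.05, 2.60, 180), (2.95, 1.89, 180), (1.72, 1.11, 180),
]

for a0, at, t in experiments:
    k = math.log(a0 / at) / t   # first-order: ln([A]0/[A]t) = k t
    print(f"k = {k:.2e} s^-1")  # each falls near 2.4e-3 to 2.6e-3 s^-1
```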
Since this is a first-order reaction, the integrated rate law is: \([A_{t}]=[A_{0}]e^{-kt}\) Use the integrated rate law to find the partial pressure at 30 minutes: Use \(A_0\) = 55 torr, t = 30 min, and k = \(2.0 × 10^{-4}\,s^{-1}\) to solve the integrated rate law equation: \([A_{30}]=(55\, torr)\cdot e^{-(2.0×10^{-4}\frac{1}{sec})(30\,min\cdot\frac{60\,sec}{1\, min})}\) Solve this equation to get: \([A_{30}]=(55\, torr)\cdot e^{-0.36}\) \([A_{30}]\) = 38.37 torr. Find the initial moles using the ideal gas law. The ideal gas law is given by \(PV = nRT → n = \frac{PV}{RT}\). Use V = 0.53 L, R = 0.08206 \(\frac{L\cdot atm}{mol\cdot K}\), T = 423.15 K, and P = 55 torr × \(\frac{1\, atm}{760\, torr}\) = 0.07237 atm. Solve the ideal gas equation using these values: \(n=\frac{(0.07237\,atm)(0.53L)}{(0.08206\frac{L\cdot atm}{mol\cdot K})(423.15K)} = 0.00110\) moles cyclobutene. Now find the initial concentration of cyclobutene \(A_0\) using the equation \([A_0] = \frac{n}{V}\): \(A_0 = \frac{n}{V} = \frac{0.00110\, moles}{0.53\, L} = 0.00208\, M\) Find the concentration of cyclobutene at 30 minutes by using the integrated rate law given above, using time t = 30 minutes, or 1800 seconds. \([A_{30}]=(0.00208M)e^{-0.36}= 0.00145M\) So at 30 minutes, the cyclobutene concentration is 0.00145 M, and the partial pressure is 38.37 torr. Partial Pressure: 38.37 torr. Concentration: 0.00145 M Chemical reactions occur when reactants collide. What are two factors that may prevent a collision from producing a chemical reaction? The two factors that may prevent a collision from producing a chemical reaction are: 1. In order for chemical reactions to occur, molecules require enough kinetic energy to overcome the minimum activation energy needed to break the old bonds and form new bonds with other molecules. At higher temperatures, more molecules possess the minimum amount of kinetic energy needed, which ensures the collisions will be energetic enough to lead to a reaction. 2.
Two molecules have to collide in the right orientation in order for the reaction to occur. Molecules must be oriented properly for a collision to lead to the activated state. When every collision between reactants leads to a reaction, what determines the rate at which the reaction occurs? There has to be contact between reactants for a reaction to occur. The more often the reactants collide, the more often reactions can occur. Factors that determine reaction rates include concentration of reactants, temperature, physical states of reactants, surface area, and the use of a catalyst. The reaction rate usually increases as the concentration of a reactant increases. Increasing the temperature increases the average kinetic energy of molecules, causing them to collide more frequently, which increases the reaction rate. When two reactants are in the same fluid phase, their particles collide more frequently, which increases the reaction rate. If the surface area of a reactant is increased, more particles are exposed to the other reactant, so more collisions occur and the rate of reaction increases. A catalyst participates in a chemical reaction and increases the reaction rate without being consumed. What is the activation energy of a reaction, and how is this energy related to the activated complex of the reaction? Activation energy is the energy barrier that must be overcome in order for a reaction to occur. To get the molecules into a state that allows them to break and form bonds, the molecules must be contorted (deformed, or bent) into an unstable state called the transition state. The transition state is a high-energy state, and some amount of energy – the activation energy – must be added in order for the molecules to reach it. Because the transition state is unstable, reactant molecules don’t stay there long, but quickly proceed to the next step of the chemical reaction. The activated complex is the arrangement of atoms at the transition state, the highest-energy point along the reaction pathway.
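The "fraction of collisions with the required energy" idea can be made concrete with the Arrhenius exponential \(e^{-E_a/RT}\); the activation energy below is an arbitrary illustrative value, not one taken from the text:

```python
import math

R = 8.314        # J/(mol*K), gas constant
Ea = 50_000      # J/mol, assumed here purely for illustration

# exp(-Ea/RT) approximates the fraction of collisions energetic
# enough to surmount the activation barrier at temperature T.
for T in (300, 310, 350):
    frac = math.exp(-Ea / (R * T))
    print(f"T = {T} K: fraction ~ {frac:.2e}")
```

For this choice of Ea the factor roughly doubles between 300 K and 310 K, which is the origin of the familiar rule of thumb that reaction rates double for a roughly 10 °C rise in temperature.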
Describe how graphical methods can be used to determine the activation energy of a reaction from a series of data that includes the rate of reaction at varying temperatures. How does an increase in temperature affect the rate of reaction? Explain this effect in terms of the collision theory of the reaction rate. Collision theory states that the rates of chemical reactions depend on the fraction of molecules with the correct orientation, the fraction of collisions with the required energy, and the collision frequency. Because the fraction of collisions with the required energy is a function of temperature, as temperature increases, that fraction also increases. The kinetic energy of reactants also increases with temperature, which means molecules will collide more often, increasing the collision frequency. With an increased fraction of sufficiently energetic collisions and an increased collision frequency, the rate of the chemical reaction increases. The Arrhenius equation shows that temperature and the rate constant are related: \[k=Ae^{-\frac{E_a}{RT}}\] where k is the rate constant, A is the frequency factor, R is 8.3145 J/(mol·K), Ea is the reaction-specific activation energy in J/mol, and T is temperature in K. We see from the equation that k is very sensitive to changes in the temperature. The rate of a certain reaction doubles for every 10 °C rise in temperature. By finding the difference in temperature, 45 °C − 25 °C, we get 20 °C. Since the rate of the reaction doubles for every 10 °C increase in temperature and the reaction experienced a 20 °C increase in temperature, the reaction rate doubled twice (2² = 4). As a result, the reaction proceeds 4 times faster. Following the same process as in part a, we get the difference in temperature to be 70 °C. Since the rate of the reaction doubles for every 10 °C increase in temperature and the system experienced a 70 °C change, the rate doubled seven times (2⁷ = 128), so the reaction proceeds 128 times faster.
(a) 4 times faster (b) 128 times faster In an experiment, a sample of NaClO₃ was 90% decomposed in 48 min. Approximately how long would this decomposition have taken if the sample had been heated 20 °C higher? First off, it is important to recognize that this is a decomposition reaction, which can be written as follows: \(\mathrm{2NaClO_3\to 2NaCl + 3O_2}\) Understanding this, it is important to recognize which equation would be most useful given the initial conditions presented by the question. Since we are dealing with time, percentage of material left, and temperature, the equation that relates rate constants at two temperatures is the two-point Arrhenius equation: \(\mathrm \ln(\frac{k_2}{k_1}) = \frac {E_a}{R}({\frac1{T_1}}-{\frac{1}{T_{2}}})\) However, this problem does not give us enough information, such as the activation energy or the initial temperature, to solve mathematically. Additionally, the problem tells us to approximate how long the decomposition would take, which means we are asked to answer this question conceptually based on our knowledge of reaction rates. As a general rule of thumb, for every 10 °C rise in temperature the rate of reaction doubles. Since the question tells us that there is a 20 °C rise in temperature, we can deduce that the reaction rate doubles twice, as per the general rule mentioned before. This means the reaction would be 4 times faster than at the initial temperature, so the decomposition would take about one-fourth as long: 48 min / 4 = 12 min. We can gut-check this answer by recalling that an increase in the average kinetic energy (temperature) increases the reaction rate and decreases the time the reaction takes. Thus, if we increase the temperature we should have a faster reaction. The rate constant at 325 °C for the decomposition reaction \(\ce{C4H8⟶2C2H4}\) is 6.1 × 10⁻⁸ s⁻¹, and the activation energy is 261 kJ per mole of C₄H₈.
Determine the frequency factor for the reaction. Using the Arrhenius equation allows us to find the frequency factor, A: \(k=Ae^{-E_a/RT}\) k, Ea, R, and T are all known values; k, Ea, and T are given in the problem as 6.1 × 10⁻⁸ s⁻¹, 261 kJ/mol, and 598 K, respectively. So, plugging them into the equation gives: \(6.1 × 10^{-8}\,s^{-1} = Ae^{-261{,}000/(8.314 × 598)}\) Evaluating the exponential gives \(e^{-52.5} = 1.59 × 10^{-23}\). Divide k, 6.1 × 10⁻⁸ s⁻¹, by 1.59 × 10⁻²³ to get A = 3.9 × 10¹⁵ s⁻¹. \(\mathrm{3.9×10^{15}\:s^{−1}}\) The rate constant for the decomposition of acetaldehyde (CH₃CHO) to methane (CH₄) and carbon monoxide (CO) in the gas phase is 1.1 × 10⁻² L/mol/s at 703 K and 4.95 L/mol/s at 865 K. Determine the activation energy for this decomposition. The equation relating the rate constants and activation energy of a reaction is the two-point Arrhenius equation: \[ln (\frac{k_2}{k_1}) = \frac{E_a}{R} (\frac{1}{T_1} - \frac{1}{T_2})\] In this problem, all the variables are given except for \(E_a\) (the activation energy): k₁ = 1.1 × 10⁻² L/mol/s, T₁ = 703 K; k₂ = 4.95 L/mol/s, T₂ = 865 K; R = 8.314 J/(mol·K) (ideal gas constant). Now plug all these values into the equation and solve for \(E_a\): \[ln (\frac{4.95\frac{L}{mol×s}}{1.1 × 10^{-2}\frac{L}{mol×s}}) = \frac{E_a}{8.314 × 10^{-3}\frac{kJ}{mol×K}} (\frac{1}{703} - \frac{1}{865})\] \(E_a\) ≈ 190 kJ/mol (2 sig figs) An elevated level of the enzyme alkaline phosphatase (ALP) in the serum is an indication of possible liver or bone disorder. The level of serum ALP is so low that it is very difficult to measure directly. However, ALP catalyzes a number of reactions, and its relative concentration can be determined by measuring the rate of one of these reactions under controlled conditions. One such reaction is the conversion of p-nitrophenyl phosphate (PNPP) to p-nitrophenoxide ion (PNP) and phosphate ion. Control of temperature during the test is very important; the rate of the reaction increases 1.47 times if the temperature changes from 30 °C to 37 °C. What is the activation energy for the ALP–catalyzed conversion of PNPP to PNP and phosphate?
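The acetaldehyde activation energy can be double-checked by solving the two-point Arrhenius equation for \(E_a\) directly; a minimal sketch using the constants from the problem:

```python
import math

k1, T1 = 1.1e-2, 703   # L/mol/s at 703 K
k2, T2 = 4.95, 865     # L/mol/s at 865 K
R = 8.314              # J/(mol*K)

# ln(k2/k1) = (Ea/R)(1/T1 - 1/T2)  =>  solve for Ea
Ea = math.log(k2 / k1) * R / (1 / T1 - 1 / T2)
print(f"Ea ~ {Ea / 1000:.0f} kJ/mol")   # ~190 kJ/mol to 2 sig figs
```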
43.0 kJ/mol In terms of collision theory, to which of the following is the rate of a chemical reaction proportional? Hydrogen iodide, HI, decomposes in the gas phase to produce hydrogen, H₂, and iodine, I₂. The value of the rate constant, k, for the reaction was measured at several different temperatures and the data are shown here: What is the value of the activation energy (in kJ/mol) for this reaction? 177 kJ/mol The element Co exists in two oxidation states, Co(II) and Co(III), and the ions form many complexes. The rate at which one of the complexes of Co(III) was reduced by Fe(II) in water was measured. Determine the activation energy of the reaction from the following data: The hydrolysis of the sugar sucrose to the sugars glucose and fructose, \[\ce{C12H22O11 + H2O ⟶ C6H12O6 + C6H12O6}\nonumber \] follows a first-order rate equation for the disappearance of sucrose: Rate = k[C₁₂H₂₂O₁₁] (The products of the reaction, glucose and fructose, have the same molecular formulas but differ in the arrangement of the atoms in their molecules.) Eₐ = 108 kJ; A = 2.0 × 10¹⁰ s⁻¹; k = 3.2 × 10⁻¹⁰ s⁻¹ (b) 1.81 × 10⁵ h or 7.6 × 10³ day. (c) Assuming that the reaction is irreversible simplifies the calculation because we do not have to account for any reactant that, having been converted to product, returns to the original state. Use the interactive simulation to model a system. On the “Single collision” tab of the simulation applet, enable the “Energy view” by clicking the “+” icon. Select the first \(A+BC⟶AB+C\) reaction (A is yellow, B is purple, and C is navy blue). Using the “straight shot” default option, try launching the atom with varying amounts of energy. What changes when the Total Energy line at launch is below the transition state of the Potential Energy line? Why? What happens when it is above the transition state? Why? Use the interactive simulation to model a system. On the “Single collision” tab of the simulation applet, enable the “Energy view” by clicking the “+” icon.
Select the first \(A+BC⟶AB+C\) reaction (A is yellow, B is purple, and C is navy blue). Using the “angled shot” option, try launching the atom with varying angles, but with more Total Energy than the transition state. What happens when the atom hits the molecule from different directions? Why? The atom has enough energy to react with BC; however, the different angles at which it bounces off of BC without reacting indicate that the orientation of the molecule is an important part of the reaction kinetics and determines whether a reaction will occur. Why are elementary reactions involving three or more reactants very uncommon? In general, can we predict the effect of doubling the concentration of A on the rate of the overall reaction \(A+B⟶C\)? Can we predict the effect if the reaction is known to be an elementary reaction? No. In general, for the overall reaction, we cannot predict the effect of changing the concentration without knowing the rate equation. Yes. If the reaction is an elementary reaction, then doubling the concentration of A doubles the rate. Phosgene, COCl₂, one of the poison gases used during World War I, is formed from chlorine and carbon monoxide. The mechanism is thought to proceed by: Define these terms: What is the rate equation for the elementary termolecular reaction \(A+2B⟶\ce{products}\)? For \(3A⟶\ce{products}\)? We are given that both of these reactions are elementary termolecular. The molecularity of a reaction refers to the number of reactant particles that react together with the proper energy and orientation. Termolecular reactions require three particles to collide simultaneously. As these are elementary steps with no additional reactants aside from those given in each reaction, there are no intermediate reactions. The rate law for an elementary reaction is determined by the stoichiometry of the reaction, without needing experimental data.
The basic rate form for the elementary step is what follows: \(rate= {k} \cdot [reactant \ 1]^{i} \cdot [reactant \ 2]^{ii} \cdot ... \) where i and ii are the stoichiometric coefficients of reactant 1 and reactant 2, respectively. For \(3A \rightarrow products\): \(rate = {k} \cdot {[A]}^3\) For \(A + 2B \rightarrow products\): \(rate = {k} \cdot {[A]} \cdot {[B]}^2\) Note that the orders of these reactions are both three. Rate = k[A][B]²; Rate = k[A]³ Given the following reactions and the corresponding rate laws, in which of the reactions might the elementary reaction and the overall reaction be the same? (a) \(\ce{Cl2 + CO ⟶ Cl2CO}\) rate = k[Cl₂]³ᐟ²[CO] (b) \(\ce{PCl3 + Cl2 ⟶ PCl5}\) rate = k[PCl₃][Cl₂] (c) \(\ce{2NO + H2 ⟶ N2 + H2O}\) rate = k[NO][H₂] (d) \(\ce{2NO + O2 ⟶ 2NO2}\) rate = k[NO]²[O₂] (e) \(\ce{NO + O3 ⟶ NO2 + O2}\) rate = k[NO][O₃] An elementary reaction is a chemical reaction in which the reactants directly form products in a single step; in other words, the rate law predicted from the overall equation's stoichiometry is the same as the experimentally found rate law. Of the 5 options, reactions (b), (d), and (e) are such reactions. Write the rate equation for each of the following elementary reactions: The rate law of a reaction can be found using a rate constant (which is found experimentally) and the initial concentrations of reactants. A general solution for the equation \(aA + bB \rightarrow cC + dD\) is \(rate = k[A]^{m}[B]^{n}\), where m and n are reaction orders. Normally, reaction orders must be found experimentally; for elementary reactions, however, they equal the stoichiometric coefficients, so all we have to do is plug the reactants into the rate formula. Further reading on elementary reactions can be found on LibreTexts. a.
O₃ ⟶ O₂ + O To write this reaction's rate equation, focus on the reactant(s), take the concentration of each raised to its coefficient, and multiply by a rate constant, "k". Rate = k[O₃] b. O₃ + Cl ⟶ O₂ + ClO Rate = k[O₃][Cl] c. ClO + O ⟶ Cl + O₂ Rate = k[ClO][O] d. O₃ + NO ⟶ NO₂ + O₂ Rate = k[O₃][NO] e. NO₂ + O ⟶ NO + O₂ Rate = k[NO₂][O] (a) Rate = k[O₃]; (b) Rate = k[O₃][Cl]; (c) Rate = k[ClO][O]; (d) Rate = k[O₃][NO]; (e) Rate = k[NO₂][O] Nitrogen(II) oxide, NO, reacts with hydrogen, H₂, according to the following equation: \[\ce{2NO + 2H2 ⟶ N2 + 2H2O}\nonumber \] What would the rate law be if the mechanism for this reaction were: \[\ce{2NO + H2 ⟶ N2 + H2O2\:(slow)}\nonumber \] \[\ce{H2O2 + H2 ⟶ 2H2O\:(fast)}\nonumber \] The rate law of the mechanism is determined by the slow step of the reaction. Since the slow step is an elementary step, the rate law can be drawn from the coefficients of the chemical equation. Therefore, the rate law is: rate = k[NO]²[H₂]. Since both NO and H₂ are reactants in the overall reaction (and therefore are not intermediates), no further steps are needed to determine the rate law. Consider the reaction CH₄ + Cl₂ → CH₃Cl + HCl (occurs under light). The mechanism is a chain reaction involving Cl atoms and CH₃ radicals. Which of the following steps does not terminate this chain reaction? Chain reactions involve reactions that create products necessary for more reactions to occur.
In this case, a reaction step will continue the chain reaction if a radical is generated. Radicals are highly reactive particles, so more reactions in the chain will take place as long as they are present. The chlorine atom is a free radical because it has an unpaired electron; for this reason it is very reactive and propagates a chain reaction. It does so by taking an electron from a stable molecule, making that molecule reactive; that molecule then goes on to react with other stable species, and in this manner a long series of "chain" reactions is initiated. A chlorine radical continues the chain by completing the following reaction: \({Cl \cdot}+{CH_4} \rightarrow {CH_3 \cdot}+{HCl} \) The \(CH_3 \cdot\) radical generated by this reaction can then react with other species, continuing to propagate the chain reaction. Option 1 is incorrect because the only species it produces is \({CH_3Cl}\), a product of the overall reaction that is unreactive. This terminates the chain reaction because it fails to produce any \(Cl\) or \(CH_3\) radicals that are necessary for further propagating the overall reaction. Option 2 is the correct answer because it produces a \(Cl\) radical, which can continue the chain by colliding with \(CH_4\) molecules. Option 3 is incorrect because it fails to produce a radical capable of continuing the chain. Option 4 is incorrect because it produces \(Cl_2\), a molecule that does not react unless additional light is supplied; this step therefore breaks the chain. Answer: Option 2: \({CH_3 \cdot}+{HCl} \rightarrow {CH_4}+{Cl \cdot}\) Experiments were conducted to study the rate of the reaction represented by this equation. \[\ce{2NO}(g)+\ce{2H2}(g)⟶\ce{N2}(g)+\ce{2H2O}(g)\nonumber \] Initial concentrations and rates of reaction are given here.
Consider the following questions: Step 1: \(\ce{NO + NO ⇌ N2O2}\) Step 2: \(\ce{N2O2 + H2 ⇌ H2O + N2O}\) Step 3: \(\ce{N2O + H2 ⇌ N2 + H2O}\) Based on the data presented, which of these is the rate determining step? Show that the mechanism is consistent with the observed rate law for the reaction and the overall stoichiometry of the reaction. S12.6.10 1. i) Find the order for [NO] using experiments 3 and 4, where [H₂] is constant. Notice that [NO] doubles from experiment 3 to 4 and the rate quadruples, so the order for [NO] is 2. ii) Find the order for [H₂] using experiments 1 and 2, where [NO] is constant. Notice that [H₂] doubles from experiment 1 to 2 and the rate doubles as well, so the order for [H₂] is 1. 2. Use the order found for each reactant as the exponent on the corresponding concentration: \(rate = k [NO]^2 [H_2]\) 3. Substitute the concentrations and the rate from one of the experiments into the rate law and solve for k. (Experiment 1 is used here, but any of them will work.) \(rate = k [NO]^2 [H_2]\) \(0.00018 = k (0.006)^2 (0.001)\) \(k = 5000\ M^{-2}s^{-1}\) 4. Plug the values for experiment 2 into the rate law equation and solve for the concentration of NO: \(0.00036=5000[NO]^2(0.001)\) \([NO]^2 = 7.2 \times 10^{-5}\) \([NO] = 0.0085\ M\) 5. Write the rate law for each step and see which matches the rate law found in question 2. The rate determining step (the slow step) is the one that sets the rate of the overall reaction. Because of this, only the concentrations appearing in that step influence the overall rate, contrary to what we might believe by looking only at the overall reaction. Step 1: \(NO + NO \rightleftharpoons N_2O_2\) \(rate =k_1[NO]^2\) This rate law is not the same as the one calculated in question 2, so this cannot be the rate determining step. Step 2: \(N_2O_2+H_2 \rightleftharpoons H_2O + N_2O\) \(rate = k_2[N_2O_2][H_2]\) Since \(N_2O_2\) is an intermediate, it must be replaced in the rate law equation.
Intermediates cannot appear in the rate law because they do not appear in the overall reaction. Here you can use the reverse of equation 1 (rate constant \(k_{-1}\)) and substitute the reactant side of equation 1 for the intermediate in the rate law equation. \[rate_1 = rate_{-1}\nonumber \] \[k_1[NO]^2 = k_{-1}[N_2O_2]\nonumber \] \[[N_2O_2] = \frac{k_1[NO]^2}{k_{-1}}\nonumber \] \(rate= \frac{k_2k_{1}[NO]^2[H_2]}{k_{-1}}\) Overall: \(rate={k[NO]^2[H_2]}\) This matches the observed rate law, so \(N_2O_2+H_2 \rightleftharpoons H_2O + N_2O\) (step 2) is the rate determining step. (a) NO: 2, \(\ce{H2}\): 1; (b) Rate = k[NO]²[H₂]; (c) k = 5.0 × 10³ mol⁻² L² min⁻¹; (d) 0.0050 mol/L; (e) Step II is the rate-determining step. The reaction of CO with Cl₂ gives phosgene (COCl₂), a nerve gas that was used in World War I. Use the mechanism shown here to complete the following exercises: 1. To write the overall reaction you have to identify the intermediates and leave them out. The easiest way to do this is to write out all the reactants and products of every step and cross out anything that appears on both sides. Here you cross out the 2 Cl(g) and the COCl(g); what is left is the overall reaction: Cl₂(g) + CO(g) → COCl₂(g). 2. For part two, list the intermediates that you crossed out: Cl and COCl are intermediates. 3. Each rate law is the rate constant times the concentrations of that step's reactants: reaction 1 (forward) rate = k₁[Cl₂]; (reverse) rate = k₋₁[Cl]²; reaction 2 rate = k₂[CO][Cl]; reaction 3 rate = k₃[COCl][Cl]. 4. The overall rate law is based on the slowest step (step 2), since it is the rate determining step, but the intermediate Cl appears in that rate law, so it must be replaced with an equivalent expression that contains no intermediates. To do this, use the step 1 equilibrium: since the forward and reverse rates are equal, set their rate laws equal to each other.
\(k_1[\ce{Cl2}] = k_{-1}[\ce{Cl}]^2\), so \([\ce{Cl}] = \left(\dfrac{k_1[\ce{Cl2}]}{k_{-1}}\right)^{1/2}\) and \(rate = k_2[\ce{CO}]\left(\dfrac{k_1[\ce{Cl2}]}{k_{-1}}\right)^{1/2}\). These are the steps for replacing an intermediate. Account for the increase in reaction rate brought about by a catalyst. Compare the functions of homogeneous and heterogeneous catalysts. Consider this scenario and answer the following questions: Chlorine atoms resulting from decomposition of chlorofluoromethanes, such as CCl₂F₂, catalyze the decomposition of ozone in the atmosphere. One simplified mechanism for the decomposition is: \[\ce{O3 \xrightarrow{sunlight} O2 + O}\\ \ce{O3 + Cl ⟶ O2 + ClO}\\ \ce{ClO + O ⟶ Cl + O2}\nonumber \] Is NO a catalyst for the decomposition? Explain your answer. For each of the following pairs of reaction diagrams, identify which of the pair is catalyzed: (a) (b) For each of the following reaction diagrams, estimate the activation energy (\(E_a\)) of the reaction: (a) (b)
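The pre-equilibrium substitution used for the phosgene mechanism can be checked numerically. This is a sketch, not part of the original text; the rate constants and concentrations below are invented purely for illustration:

```python
import math

# Hypothetical rate constants and concentrations, chosen only for illustration.
k1, k_1, k2 = 2.0e-3, 1.0e2, 5.0e1   # step 1 forward/reverse, step 2 (slow)
Cl2, CO = 0.10, 0.050                # mol/L

# Fast pre-equilibrium: k1[Cl2] = k-1[Cl]^2  ->  [Cl] = sqrt(k1[Cl2]/k-1)
Cl = math.sqrt(k1 * Cl2 / k_1)

# Slow step evaluated two ways: directly, and via the closed-form rate law
rate_numeric = k2 * CO * Cl
rate_closed = k2 * math.sqrt(k1 / k_1) * CO * math.sqrt(Cl2)
print(rate_numeric, rate_closed)  # the two forms agree
```

Because the closed form is just the slow-step rate with the intermediate eliminated, the two numbers must match for any choice of constants.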
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chem1_(Lower)/11%3A_Chemical_Equilibrium/11.06%3A_Phase_Distribution_Equilibria
It often happens that two immiscible liquid phases are in contact, one of which contains a solute. How will the solute tend to distribute itself between the two phases? One’s first thought might be that some of the solute will migrate from one phase into the other until it is distributed equally between the two phases, since this would correspond to the maximum dispersion (randomness) of the solute. This, however, does not take into account the differing solubilities the solute might have in the two liquids; if such a difference does exist, the solute will preferentially migrate into the phase in which it is more soluble. For a solute \(S\) distributed between two phases a and b, the distribution equilibrium is defined by the distribution law \[K_{a,b} = \dfrac{[S]_a}{[S]_b}\] in which \(K_{a,b}\) is the distribution ratio (partition coefficient) and \([S]_a\) and \([S]_b\) are the equilibrium concentrations of the solute in the two phases. The transport of substances between different phases is of immense importance in such diverse fields as pharmacology and environmental science. For example, if a drug is to pass from the aqueous phase within the stomach into the bloodstream, it must pass through the lipid (oil-like) phase of the epithelial cells that line the digestive tract. Similarly, a pollutant such as a pesticide residue that is more soluble in oil than in water will be preferentially taken up and retained by marine organisms, especially fish, whose bodies contain more oil-like substances; this is basically the mechanism whereby such residues as DDT undergo biomagnification as they become more concentrated at higher levels within the food chain. For this reason, environmental regulations now require that oil-water distribution ratios be established for any new chemical likely to find its way into natural waters. The standard “oil” phase that is almost universally used is 1-octanol, C₈H₁₇OH. In preparative chemistry it is frequently necessary to recover a desired product present in a reaction mixture by extracting it into another liquid in which it is more soluble than the unwanted substances.
On the laboratory scale this operation is carried out in a separatory funnel as shown below. If the distribution ratio is too low to achieve efficient separation in a single step, it can be repeated; there are automated devices that can carry out hundreds of successive extractions, each yielding a product of higher purity. In these applications our goal is to exploit the Le Chatelier principle by repeatedly upsetting the phase distribution equilibrium that would result if the two phases were to remain in permanent contact. The distribution ratio for iodine between water and carbon disulfide is 650. Calculate the concentration of I₂ remaining in the aqueous phase after 50.0 mL of 0.10 M I₂ in water is shaken with 10.0 mL of CS₂. The equilibrium constant is \[K_d = \dfrac{C_{CS_2}}{C_{H_2O}} = 650 \nonumber\] If \(m_1\) mmol of iodine remains in the water layer, then \[\dfrac{(5.00 – m_1)\ \text{mmol} / 10\ \text{mL}}{m_1\ \text{mmol} / 50\ \text{mL}} = 650 \nonumber\] Simplifying gives \[ \dfrac{0.50 – 0.1\,m_1}{0.02\, m_1} = 650 \nonumber\] and solving for \(m_1\) yields \(m_1 = 0.0382\) mmol. The concentration of solute in the water layer is (0.0382 mmol) / (50 mL) = \(7.6 \times 10^{-4}\ M\), showing that almost all of the iodine has moved into the CS₂ layer.
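The worked iodine example generalizes: solving \(K_d = \dfrac{(n_0 - m)/V_{org}}{m/V_{aq}}\) for \(m\) gives the amount left in the aqueous phase after one extraction. A short sketch (the function name and the repeated-extraction loop are my own, using the example's numbers):

```python
# Sketch of the worked example: 5.00 mmol I2 in 50 mL of water shaken with
# CS2 (Kd = 650). Solve Kd = ((n0 - m)/V_org) / (m/V_aq) for m, the mmol
# of I2 left in the water layer.

def aqueous_mmol_left(n0, V_aq, V_org, Kd):
    # Rearranging:  m = n0 / (1 + Kd * V_org / V_aq)
    return n0 / (1 + Kd * V_org / V_aq)

m = aqueous_mmol_left(5.00, 50.0, 10.0, 650)
print(round(m, 4))   # ~0.0382 mmol left in the water layer
print(m / 50.0)      # ~7.6e-4 M, matching the example

# Repeating the extraction with fresh 10 mL portions of CS2 (the Le Chatelier
# strategy described above) drives the aqueous residue down rapidly:
n = 5.00
for _ in range(3):
    n = aqueous_mmol_left(n, 50.0, 10.0, 650)
```

Each fresh portion multiplies the remaining amount by the same small factor, which is why a few successive extractions beat one large one.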
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Supplemental_Modules_(Organic_Chemistry)/Esters/Reactivity_of_Esters/Polyesters
This page looks at the formation, structure and uses of a common polyester sometimes known as Terylene if it is used as a fibre, or PET if it is used in, for example, plastic drinks bottles. A polyester is a polymer (a chain of repeating units) where the individual units are held together by ester linkages. The diagram shows a very small bit of the polymer chain and looks pretty complicated. But it is not very difficult to work out - and that's the best thing to do: work it out, not try to remember it. You will see how to do that in a moment. The usual name of this common polyester is poly(ethylene terephthalate). The everyday name depends on whether it is being used as a fibre or as a material for making things like bottles for soft drinks. When it is being used as a fibre to make clothes, it is often just called polyester. It may sometimes be known by a brand name like Terylene. When it is being used to make bottles, for example, it is usually called PET. In condensation polymerisation, when the monomers join together a small molecule gets lost. That's different from addition polymerisation, which produces polymers like poly(ethene) - in that case, nothing is lost when the monomers join together. A polyester is made by a reaction involving an acid with two -COOH groups, and an alcohol with two -OH groups. In the common polyester drawn above, the acid is benzene-1,4-dicarboxylic acid (terephthalic acid) and the alcohol is ethane-1,2-diol. Now imagine lining these up alternately and making esters with each acid group and each alcohol group, losing a molecule of water every time an ester linkage is made. That would produce the chain shown above (although this time written without separating out the carbon-oxygen double bond - write it whichever way you like). The reaction takes place in two main stages: a pre-polymerisation stage and the actual polymerisation. In the first stage, before polymerisation happens, you get a fairly simple ester formed between the acid and two molecules of ethane-1,2-diol.
In the polymerisation stage, this is heated to a temperature of about 260°C at a low pressure. A catalyst is needed - there are several possibilities, including antimony compounds like antimony(III) oxide. The polyester forms and half of the ethane-1,2-diol is regenerated. This is removed and recycled. Simple esters are easily hydrolysed by reaction with dilute acids or alkalis. Polyesters are attacked readily by alkalis, but much more slowly by dilute acids. Hydrolysis by water alone is so slow as to be completely unimportant. (You wouldn't expect your polyester fleece to fall to pieces if you went out in the rain!) If you spill dilute alkali on a fabric made from polyester, the ester linkages are broken. Ethane-1,2-diol is formed together with the salt of the carboxylic acid. Because you produce small molecules rather than the original polymer, the fibres are destroyed, and you end up with a hole! For example, reacting the polyester with sodium hydroxide solution gives ethane-1,2-diol and the sodium salt of benzene-1,4-dicarboxylic acid. Jim Clark
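The water bookkeeping in condensation polymerisation can be illustrated with the molar masses involved. The masses below are standard values; the calculation itself is my own sketch rather than part of the page:

```python
# Each ester linkage formed loses one H2O, so the PET repeat unit is one
# terephthalic acid plus one ethane-1,2-diol minus two waters.
M_acid = 166.13    # benzene-1,4-dicarboxylic acid, C8H6O4 (g/mol)
M_diol = 62.07     # ethane-1,2-diol, C2H6O2 (g/mol)
M_H2O = 18.02      # water lost per ester linkage (g/mol)

M_repeat = M_acid + M_diol - 2 * M_H2O   # ~192.2 g/mol per repeat unit
print(round(M_repeat, 1))
```

This is why the polymer chain weighs noticeably less than the sum of its monomers: roughly 36 g of water is lost per mole of repeat units.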
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chemistry_1e_(OpenSTAX)/12%3A_Kinetics/12.2%3A_Factors_Affecting_Reaction_Rates
The rates at which reactants are consumed and products are formed during chemical reactions vary greatly. We can identify five factors that affect the rates of chemical reactions: the chemical nature of the reacting substances, the state of subdivision (one large lump versus many small particles) of the reactants, the temperature of the reactants, the concentration of the reactants, and the presence of a catalyst. The rate of a reaction depends on the nature of the participating substances. Reactions that appear similar may have different rates under the same conditions, depending on the identity of the reactants. For example, when small pieces of the metals iron and sodium are exposed to air, the sodium reacts completely with air overnight, whereas the iron is barely affected. The active metals calcium and sodium both react with water to form hydrogen gas and a base. Yet calcium reacts at a moderate rate, whereas sodium reacts so rapidly that the reaction is almost explosive. Except for substances in the gaseous state or in solution, reactions occur at the boundary, or interface, between two phases. Hence, the rate of a reaction between two phases depends to a great extent on the surface contact between them. A finely divided solid has more surface area available for reaction than does one large piece of the same substance. Thus a liquid will react more rapidly with a finely divided solid than with a large piece of the same solid. For example, large pieces of iron react slowly with acids; finely divided iron reacts much more rapidly (Figure \(\PageIndex{1}\)). Large pieces of wood smolder, smaller pieces burn rapidly, and sawdust burns explosively. Chemical reactions typically occur faster at higher temperatures. Food can spoil quickly when left on the kitchen counter. However, the lower temperature inside of a refrigerator slows that process so that the same food remains fresh for days.
We use a burner or a hot plate in the laboratory to increase the speed of reactions that proceed slowly at ordinary temperatures. In many cases, an increase in temperature of only 10 °C will approximately double the rate of a reaction in a homogeneous system. The rates of many reactions depend on the concentrations of the reactants. Rates usually increase when the concentration of one or more of the reactants increases. For example, calcium carbonate (\(\mathrm{CaCO_3}\)) deteriorates as a result of its reaction with the pollutant sulfur dioxide. The rate of this reaction depends on the amount of sulfur dioxide in the air (Figure \(\PageIndex{2}\)). As an acidic oxide, sulfur dioxide combines with water vapor in the air to produce sulfurous acid in the following reaction: \[\ce{SO}_{2(g)}+\ce{H_2O}_{(g)}⟶\ce{H_2SO}_{3(aq)} \label{12.3.1} \] Calcium carbonate reacts with sulfurous acid as follows: \[\ce{CaCO}_{3(s)}+\ce{H_2SO}_{3(aq)}⟶\ce{CaSO}_{3(aq)}+\ce{CO}_{2(g)}+\ce{H_2O}_{(l)} \label{12.3.2} \] In a polluted atmosphere where the concentration of sulfur dioxide is high, calcium carbonate deteriorates more rapidly than in less polluted air. Similarly, phosphorus burns much more rapidly in an atmosphere of pure oxygen than in air, which is only about 20% oxygen. Hydrogen peroxide solutions foam when poured onto an open wound because substances in the exposed tissues act as catalysts, increasing the rate of hydrogen peroxide’s decomposition. However, in the absence of these catalysts (for example, in the bottle in the medicine cabinet) complete decomposition can take months. A catalyst is a substance that increases the rate of a chemical reaction by lowering the activation energy without itself being consumed by the reaction. Activation energy is the minimum amount of energy required for a chemical reaction to proceed in the forward direction.
A catalyst increases the reaction rate by providing an alternative pathway or mechanism for the reaction to follow (Figure \(\PageIndex{3}\)). Catalysis will be discussed in greater detail later in this chapter as it relates to mechanisms of reactions. Chemical reactions occur when molecules collide with each other and undergo a chemical transformation. Before physically performing a reaction in a laboratory, scientists can use molecular modeling simulations to predict how the parameters discussed earlier will influence the rate of a reaction. Use the PhET Reactions & Rates interactive to explore how temperature, concentration, and the nature of the reactants affect reaction rates. The rate of a chemical reaction is affected by several parameters. Reactions involving two phases proceed more rapidly when there is greater surface area contact. If temperature or reactant concentration is increased, the rate of a given reaction generally increases as well. A catalyst can increase the rate of a reaction by providing an alternative pathway that causes the activation energy of the reaction to decrease.
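The rule of thumb quoted in this section — that a 10 °C rise roughly doubles the rate of many homogeneous reactions — can be written as a simple scaling factor. A sketch (the function and its name are illustrative, not from the text):

```python
# Rough rule of thumb: rate scales by 2^(dT/10) for a temperature rise dT,
# assuming the "doubles every 10 °C" approximation holds over the range.

def rate_factor(delta_T_celsius, doubling_interval=10.0):
    """Approximate factor by which the rate changes for a dT rise."""
    return 2.0 ** (delta_T_celsius / doubling_interval)

print(rate_factor(10))   # 2.0  (one interval: rate doubles)
print(rate_factor(30))   # 8.0  (three intervals: 2^3)
```

The real temperature dependence is exponential in 1/T (the Arrhenius equation), so this factor is only a convenient approximation over modest ranges.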
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_(Zumdahl_and_Decoste)/17%3A_Solutions/17.7%3A_Osmotic_Pressure
Osmosis is the diffusion of a fluid through a semipermeable membrane. When a semipermeable membrane (animal bladders, skins of fruits and vegetables) separates a solution from a solvent, only solvent molecules are able to pass through the membrane; the solute cannot. The osmotic pressure of a solution is the pressure difference needed to stop the flow of solvent across a semipermeable membrane. The osmotic pressure of a solution is proportional to the molar concentration of the solute particles in solution: \[\Pi = i \dfrac{n}{V}RT = i M RT \label{eq1}\] where \(\Pi\) is the osmotic pressure, \(i\) is the van 't Hoff factor (the number of particles each formula unit produces in solution), \(n\) is the number of moles of solute, \(V\) is the volume of solution, \(M\) is the molarity, \(R\) is the ideal gas constant, and \(T\) is the absolute temperature. Calculate the molarity of a sugar solution in water at 300 K that has an osmotic pressure of 3.00 atm. Since it is sugar, we know it does not dissociate in water, so \(i\) is 1. Then we use Equation \ref{eq1} directly: \[M = \dfrac{\Pi}{iRT} = \dfrac{3.00\, atm}{(1)(0.0821\, atm \cdot L/mol \cdot K)(300\,K)} = 0.122\,M \nonumber\] Calculate the osmotic pressure of a 0.10 M \(\ce{Na3PO4}\) aqueous solution at 20°C. Since \(\ce{Na3PO4}\) ionizes into four particles (3 Na⁺ + \(\ce{PO4^3-}\)), \(i = 4\). We can then calculate the osmotic pressure via Equation \ref{eq1}: \[\Pi = iMRT = (4)(0.10\,M)(0.0821\, atm \cdot L/mol \cdot K)(293\,K) = 9.6\, atm \nonumber\] Hemoglobin is a large molecule that carries oxygen in human blood. A water solution that contains 0.263 g of hemoglobin (Hb) in 10.0 mL of solution has an osmotic pressure of 7.51 torr at \(25 ^oC\). What is the molar mass of the hemoglobin? \(6.51 \times 10^4 \; g/mol\)
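All three worked examples follow from \(\Pi = iMRT\). The short sketch below (helper names are my own) reproduces them:

```python
# Reproduce the worked examples with Pi = i*M*R*T, R in L·atm/(mol·K).
R = 0.0821

def osmotic_pressure(i, M, T):
    """Osmotic pressure (atm) from van 't Hoff factor, molarity, and T (K)."""
    return i * M * R * T

def molarity_from_pressure(Pi, T, i=1):
    """Molarity from osmotic pressure (atm) and temperature (K)."""
    return Pi / (i * R * T)

# Example 1: 3.00 atm sugar solution at 300 K (i = 1)
M_sugar = molarity_from_pressure(3.00, 300)        # ~0.122 M

# Example 2: 0.10 M Na3PO4 at 293 K dissociates into 4 ions (i = 4)
Pi_salt = osmotic_pressure(4, 0.10, 293)           # ~9.6 atm

# Example 3: hemoglobin, 0.263 g in 10.0 mL, 7.51 torr at 298 K
Pi_atm = 7.51 / 760                                # torr -> atm
M_hb = molarity_from_pressure(Pi_atm, 298)
moles = M_hb * 0.0100                              # 10.0 mL of solution
molar_mass = 0.263 / moles                         # ~6.5e4 g/mol
```

Note the unit conversions: torr to atm for the pressure, and mL to L for the solution volume.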
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/08%3A_Gravimetric_Methods/8.05%3A_Problems
1. Starting with the equilibrium constant expressions for , and for , , and , verify that is correct. 2. explains how the solubility of AgCl varies as a function of the equilibrium concentration of Cl⁻. Derive a similar equation that describes the solubility of AgCl as a function of the equilibrium concentration of Ag⁺. Graph the resulting solubility function and compare it to that shown in . 3. Construct a solubility diagram for Zn(OH)₂ that takes into account the following soluble zinc-hydroxide complexes: Zn(OH)₂(aq), \(\text{Zn(OH)}_3^-\), and \(\text{Zn(OH)}_4^{2-}\). What is the optimum pH for the quantitative precipitation of Zn(OH)₂? For your solubility diagram, plot log(S) on the y-axis and pH on the x-axis. See the appendix for relevant equilibrium constants. 4. Starting with , verify that is correct. 5. For each of the following precipitates, use a ladder diagram to identify the pH range where the precipitate has its lowest solubility. See the appendices for relevant equilibrium constants. (a) CaC₂O₄; (b) PbCrO₄; (c) BaSO₄; (d) SrCO₃; (e) ZnS 6. Mixing solutions of 1.5 M KNO₃ and 1.5 M HClO₄ produces a precipitate of KClO₄. If permanganate ions are present, an inclusion of KMnO₄ is possible. Shown below are descriptions of two experiments in which KClO₄ is precipitated in the presence of \(\text{MnO}_4^-\). Explain why the experiments lead to the different results shown in the figure below. Experiment 1. Place 1 mL of 1.5 M KNO₃ in a test tube, add 3 drops of 0.1 M KMnO₄, and swirl to mix. Add 1 mL of 1.5 M HClO₄ dropwise, agitating the solution between drops. Destroy the excess KMnO₄ by adding 0.1 M NaHSO₃ dropwise. The resulting precipitate of KClO₄ has an intense purple color. Experiment 2. Place 1 mL of 1.5 M HClO₄ in a test tube, add 3 drops of 0.1 M KMnO₄, and swirl to mix. Add 1 mL of 1.5 M KNO₃ dropwise, agitating the solution between drops. Destroy the excess KMnO₄ by adding 0.1 M NaHSO₃ dropwise. The resulting precipitate of KClO₄ has a pale purple color. 7.
Mixing solutions of Ba(SCN)₂ and MgSO₄ produces a precipitate of BaSO₄. Shown below are the descriptions and results for three experiments using different concentrations of Ba(SCN)₂ and MgSO₄. Explain why these experiments produce different results. Experiment 1. When equal volumes of 3.5 M Ba(SCN)₂ and 3.5 M MgSO₄ are mixed, a gelatinous precipitate forms immediately. Experiment 2. When equal volumes of 1.5 M Ba(SCN)₂ and 1.5 M MgSO₄ are mixed, a curdy precipitate forms immediately. Individual particles of BaSO₄ are seen as points under a magnification of \(1500 \times\) (a particle size less than 0.2 μm). Experiment 3. When equal volumes of 0.5 mM Ba(SCN)₂ and 0.5 mM MgSO₄ are mixed, the complete precipitation of BaSO₄ requires 2–3 h. Individual crystals of BaSO₄ obtain lengths of approximately 5 μm. 8. Aluminum is determined gravimetrically by precipitating Al(OH)₃ and isolating Al₂O₃. A sample that contains approximately 0.1 g of Al is dissolved in 200 mL of H₂O, and 5 g of NH₄Cl and a few drops of methyl red indicator are added (methyl red is red at pH levels below 4 and yellow at pH levels above 6). The solution is heated to boiling and 1:1 NH₃ is added dropwise until the indicator turns yellow, precipitating Al(OH)₃. The precipitate is held at the solution’s boiling point for several minutes before filtering and rinsing with a hot solution of 2% w/v NH₄NO₃. The precipitate is then ignited at 1000–1100 °C, forming Al₂O₃. (a) Cite at least two ways in which this procedure encourages the formation of larger particles of precipitate. (b) The ignition step is carried out carefully to ensure the quantitative conversion of Al(OH)₃ to Al₂O₃. What is the effect of an incomplete conversion on the %w/w Al? (c) What is the purpose of adding NH₄Cl and methyl red indicator? (d) An alternative procedure for aluminum involves isolating and weighing the precipitate as the 8-hydroxyquinolate, Al(C₉H₆NO)₃. Why might this be a more advantageous form of Al for a gravimetric analysis? Are there any disadvantages? 9.
Calcium is determined gravimetrically by precipitating CaC₂O₄•H₂O and isolating CaCO₃. After dissolving a sample in 10 mL of water and 15 mL of 6 M HCl, the resulting solution is heated to boiling and a warm solution of excess ammonium oxalate is added. The solution is maintained at 80 °C and 6 M NH₃ is added dropwise, with stirring, until the solution is faintly alkaline. The resulting precipitate and solution are removed from the heat and allowed to stand for at least one hour. After testing the solution for completeness of precipitation, the sample is filtered, rinsed with 0.1% w/v ammonium oxalate, and dried for one hour at 100–120 °C. The precipitate is transferred to a muffle furnace where it is converted to CaCO₃ by drying at 500 ± 25 °C until constant weight. (a) Why is the precipitate of CaC₂O₄•H₂O converted to CaCO₃? (b) In the final step, if the sample is heated at too high of a temperature some CaCO₃ is converted to CaO. What effect would this have on the reported %w/w Ca? (c) Why is the precipitant, (NH₄)₂C₂O₄, added to a hot, acidic solution instead of a cold, alkaline solution? 10. Iron is determined gravimetrically by precipitating it as Fe(OH)₃ and igniting to Fe₂O₃. After dissolving a sample in 50 mL of H₂O and 10 mL of 6 M HCl, any Fe²⁺ is converted to Fe³⁺ by oxidizing with 1–2 mL of concentrated HNO₃. The sample is heated to remove the oxides of nitrogen and the solution is diluted to 200 mL. After bringing the solution to a boil, Fe(OH)₃ is precipitated by slowly adding 1:1 NH₃ until an odor of NH₃ is detected. The solution is boiled for an additional minute and the precipitate is allowed to settle. The precipitate is then filtered and rinsed with several portions of hot 1% w/v NH₄NO₃ until no Cl⁻ is found in the wash water. Finally, the precipitate is ignited to constant weight at 500–550 °C and weighed as Fe₂O₃. (a) If ignition is not carried out under oxidizing conditions (plenty of O₂ present), the final product may contain Fe₃O₄.
What effect will this have on the reported %w/w Fe? (b) The precipitate is washed with a dilute solution of NH₄NO₃. Why is NH₄NO₃ added to the wash water? (c) Why does the procedure call for adding NH₃ until the odor of ammonia is detected? (d) Describe how you might test the filtrate for Cl⁻. 11. Sinha and Shome described a gravimetric method for molybdenum in which it is precipitated as MoO₂(C₁₃H₁₀NO₂)₂ using N-benzoyl-phenylhydroxylamine, C₁₃H₁₁NO₂, as the precipitant [Sinha, S. K.; Shome, S. C. , , 33–36]. The precipitate is weighed after igniting to MoO₃. As part of their study, the authors determined the optimum conditions for the analysis. Samples that contained 0.0770 g of Mo each were taken through the procedure while varying the temperature, the amount of precipitant added, and the pH of the solution. The solution volume was held constant at 300 mL for all experiments. A summary of their results is shown in the following table. Based on these results, discuss the optimum conditions for determining Mo by this method. Express your results for the precipitant as the minimum %w/v in excess, needed to ensure a quantitative precipitation. 12. A sample of an impure iron ore is approximately 55% w/w Fe. If the amount of Fe in the sample is determined gravimetrically by isolating it as Fe₂O₃, what mass of sample is needed to ensure that we isolate at least 1.0 g of Fe₂O₃? 13. The concentration of arsenic in an insecticide is determined gravimetrically by precipitating it as MgNH₄AsO₄ and isolating it as Mg₂As₂O₇. Determine the %w/w As₂O₃ in a 1.627-g sample of insecticide if it yields 106.5 mg of Mg₂As₂O₇. 14. After preparing a sample of alum, K₂SO₄•Al₂(SO₄)₃•24H₂O, an analyst determines its purity by dissolving a 1.2931-g sample and precipitating the aluminum as Al(OH)₃. After filtering, rinsing, and igniting, 0.1357 g of Al₂O₃ is obtained. What is the purity of the alum preparation? 15.
To determine the amount of iron in a dietary supplement, a random sample of 15 tablets with a total weight of 20.505 g is ground into a fine powder. A 3.116-g sample is dissolved and treated to precipitate the iron as Fe(OH)₃. The precipitate is collected, rinsed, and ignited to a constant weight as Fe₂O₃, yielding 0.355 g. Report the iron content of the dietary supplement as g FeSO₄•7H₂O per tablet. 16. A 1.4639-g sample of limestone is analyzed for Fe, Ca, and Mg. The iron is determined as Fe₂O₃, yielding 0.0357 g. Calcium is isolated as CaSO₄, yielding a precipitate of 1.4058 g, and Mg is isolated as 0.0672 g of Mg₂P₂O₇. Report the amount of Fe, Ca, and Mg in the limestone sample as %w/w Fe₂O₃, %w/w CaO, and %w/w MgO. 17. The number of ethoxy groups (CH₃CH₂O–) in an organic compound is determined by the following two reactions. \[\mathrm{R}\left(\mathrm{OCH}_{2} \mathrm{CH}_{3}\right)_{x}+x \mathrm{HI} \rightarrow \mathrm{R}(\mathrm{OH})_{x}+x \mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{I} \nonumber\] \[\mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{I}+\mathrm{Ag}^{+}+\mathrm{H}_{2} \mathrm{O} \rightarrow \operatorname{AgI}(s)+\mathrm{CH}_{3} \mathrm{CH}_{2} \mathrm{OH}\nonumber\] A 36.92-mg sample of an organic compound with an approximate molecular weight of 176 is treated in this fashion, yielding 0.1478 g of AgI. How many ethoxy groups are there in each molecule of the compound? 18. A 516.7-mg sample that contains a mixture of K₂SO₄ and (NH₄)₂SO₄ is dissolved in water and treated with BaCl₂, precipitating the \(\text{SO}_4^{2-}\) as BaSO₄. The resulting precipitate is isolated by filtration, rinsed free of impurities, and dried to a constant weight, yielding 863.5 mg of BaSO₄. What is the %w/w K₂SO₄ in the sample? 19. The amount of iron and manganese in an alloy is determined by precipitating the metals with 8-hydroxyquinoline, C₉H₇NO. After weighing the mixed precipitate, the precipitate is dissolved and the amount of 8-hydroxyquinoline determined by another method.
In a typical analysis a 127.3-mg sample of an alloy containing iron, manganese, and other metals is dissolved in acid and treated with appropriate masking agents to prevent an interference from other metals. The iron and manganese are precipitated and isolated as Fe(C₉H₆NO)₃ and Mn(C₉H₆NO)₂, yielding a total mass of 867.8 mg. The amount of 8-hydroxyquinolate in the mixed precipitate is determined to be 5.276 mmol. Calculate the %w/w Fe and %w/w Mn in the alloy. 20. A 0.8612-g sample of a mixture of NaBr, NaI, and NaNO₃ is analyzed by adding AgNO₃ and precipitating a 1.0186-g mixture of AgBr and AgI. The precipitate is then heated in a stream of Cl₂, which converts it to 0.7125 g of AgCl. Calculate the %w/w NaNO₃ in the sample. 21. The earliest determinations of elemental atomic weights were accomplished gravimetrically. To determine the atomic weight of manganese, a carefully purified sample of MnBr₂ weighing 7.16539 g is dissolved and the Br⁻ precipitated as AgBr, yielding 12.53112 g. What is the atomic weight for Mn if the atomic weights for Ag and Br are taken to be 107.868 and 79.904, respectively? 22. While working as a laboratory assistant you prepared 0.4 M solutions of AgNO₃, Pb(NO₃)₂, BaCl₂, KI and Na₂SO₄. Unfortunately, you became distracted and forgot to label the solutions before leaving the laboratory. Realizing your error, you label the solutions A–E and perform all possible binary mixtures of the five solutions, obtaining the results shown in the figure below (key: NP means no precipitate formed, W means a white precipitate formed, and Y means a yellow precipitate formed). Identify solutions A–E. 23. A solid sample has approximately equal amounts of two or more of the following soluble salts: AgNO₃, ZnCl₂, K₂CO₃, MgSO₄, Ba(C₂H₃O₂)₂, and NH₄NO₃. A sample of the solid, sufficient to give at least 0.04 moles of any single salt, is added to 100 mL of water, yielding a white precipitate and a clear solution. The precipitate is collected and rinsed with water.
When a portion of the precipitate is placed in dilute HNO₃ it completely dissolves, leaving a colorless solution. A second portion of the precipitate is placed in dilute HCl, yielding a solid and a clear solution; when its filtrate is treated with excess NH₃, a white precipitate forms. Identify the salts that must be present in the sample, the salts that must be absent, and the salts for which there is insufficient information to make this determination [Adapted from Sorum, C. H.; Lagowski, J. J. , Prentice-Hall: Englewood Cliffs, N. J., 5th Ed., 1977, p. 285]. 24. Two methods have been proposed for the analysis of pyrite, FeS₂, in impure samples of the ore. In the first method, the sulfur in FeS₂ is determined by oxidizing it to \(\text{SO}_4^{2-}\) and precipitating it as BaSO₄. In the second method, the iron in FeS₂ is determined by precipitating the iron as Fe(OH)₃ and isolating it as Fe₂O₃. Which of these methods provides the more sensitive determination for pyrite? What other factors should you consider in choosing between these methods? 25. A sample of impure pyrite that is approximately 90–95% w/w FeS₂ is analyzed by oxidizing the sulfur to \(\text{SO}_4^{2-}\) and precipitating it as BaSO₄. How many grams of the sample should you take to ensure that you obtain at least 1.0 g of BaSO₄? 26. A series of samples that contain any possible combination of KCl, NaCl, and NH₄Cl is to be analyzed by adding AgNO₃ and precipitating AgCl. What is the minimum volume of 5% w/v AgNO₃ necessary to precipitate completely the chloride in any 0.5-g sample? 27. If a precipitate of known stoichiometry does not form, a gravimetric analysis is still feasible if we can establish experimentally the mole ratio between the analyte and the precipitate. Consider, for example, the precipitation gravimetric analysis of Pb as PbCrO₄ [Grote, F. , , 395–398]. (a) For each gram of Pb, how many grams of PbCrO₄ will form, assuming the reaction is stoichiometric?
(b) In a study of this procedure, Grote found that 1.568 g of \(\ce{PbCrO4}\) formed for each gram of Pb. What is the apparent stoichiometry between Pb and \(\ce{PbCrO4}\)? (c) Does failing to account for the actual stoichiometry lead to a positive determinate error or a negative determinate error?

28. Determine the uncertainty for the gravimetric analysis described in . The expected accuracy for a gravimetric method is 0.1–0.2%. What additional sources of error might account for the difference between your estimated uncertainty and the expected accuracy?

29. A 38.63-mg sample of potassium ozonide, \(\ce{KO3}\), is heated to 70°C for 1 h, undergoing a weight loss of 7.10 mg. A 29.6-mg sample of impure \(\ce{KO3}\) experiences a 4.86-mg weight loss when treated under similar conditions. What is the %w/w \(\ce{KO3}\) in the sample?

30. The water content of an 875.4-mg sample of cheese is determined with a moisture analyzer. What is the %w/w \(\ce{H2O}\) in the cheese if the final mass was found to be 545.8 mg?

31. describes a procedure for determining Si in ores and alloys. In this analysis a weight loss of 0.21 g corresponds to 0.1 g of Si. Show that this relationship is correct.

32. The iron in an organometallic compound is determined by treating a 0.4873-g sample with \(\ce{HNO3}\) and heating to volatilize the organic material. After ignition, the residue of \(\ce{Fe2O3}\) weighs 0.2091 g. (a) What is the %w/w Fe in this compound? (b) The carbon and hydrogen in a second sample of the compound are determined by a combustion analysis. When a 0.5123-g sample is carried through the analysis, 1.2119 g of \(\ce{CO2}\) and 0.2482 g of \(\ce{H2O}\) are collected. What are the %w/w C and %w/w H in this compound and what is the compound’s empirical formula?

33. A polymer’s ash content is determined by placing a weighed sample in a Pt crucible previously brought to a constant weight. The polymer is melted using a Bunsen burner until the volatile vapor ignites and then allowed to burn until a non-combustible residue remains.
The residue then is brought to constant weight at 800°C in a muffle furnace. The following data were collected for two samples of a polymer resin. (a) For each polymer, determine the mean and the standard deviation for the %w/w ash. (b) Is there any evidence at \(\alpha = 0.05\) for a significant difference between the two polymers? See the appendices for statistical tables.

34. In the presence of water vapor the surface of zirconia, \(\ce{ZrO2}\), chemically adsorbs \(\ce{H2O}\), forming surface hydroxyls, ZrOH (additional water is physically adsorbed as \(\ce{H2O}\)). When heated above 200°C, the surface hydroxyls convert to \(\ce{H2O}\), releasing one molecule of water for every two surface hydroxyls. Below 200°C only physically adsorbed water is lost. Nawrocki, et al. used thermogravimetry to determine the density of surface hydroxyls on a sample of zirconia that was heated to 700°C and cooled in a desiccator containing humid \(\ce{N2}\) [Nawrocki, J.; Carr, P. W.; Annen, M. J.; Froelicher, S., 261–266]. Heating the sample from 200°C to 900°C released 0.006 g of \(\ce{H2O}\) for every gram of dehydroxylated \(\ce{ZrO2}\). Given that the zirconia had a surface area of 33 m²/g and that one molecule of \(\ce{H2O}\) forms two surface hydroxyls, calculate the density of surface hydroxyls in μmol/m².

35. The concentration of airborne particulates in an industrial workplace is determined by pulling the air for 20 min through a single-stage air sampler equipped with a glass-fiber filter at a rate of 75 m³/h. At the end of the sampling period, the filter’s mass is found to have increased by 345.2 mg. What is the concentration of particulates in the air sample in mg/m³ and mg/L?

36. The fat content of potato chips is determined indirectly by weighing a sample before and after extracting the fat with supercritical \(\ce{CO2}\). The following data were obtained for the analysis of potato chips [ISCO, Inc., Lincoln, NE]. (a) Determine the mean and standard deviation for the %w/w fat. (b) This sample of potato chips is known to have a fat content of 22.7% w/w.
Is there any evidence for a determinate error at \(\alpha = 0.05\)? See the appendices for statistical tables.

37. Delumyea and McCleary reported results for the %w/w organic material in sediment samples collected at different depths from a cove on the St. Johns River in Jacksonville, FL [Delumyea, R. D.; McCleary, D. L., 172–173]. After collecting a sediment core, they sectioned it into 2-cm increments and treated each increment using the following procedure. Using the following data, determine the %w/w organic matter as a function of the average depth for each increment. Prepare a plot showing how the %w/w organic matter varies with depth and comment on your results. [The treatment procedure and the increment-by-increment data table are not reproduced here.]

38. Yao, et al. described a method for the quantitative analysis of thiourea based on its reaction with \(\ce{I2}\) [Yao, S. F.; He, F. J.; Nie, L. H., 311–314]. \[\mathrm{CS}\left(\mathrm{NH}_{2}\right)_{2}+4 \mathrm{I}_{2}+6 \mathrm{H}_{2} \mathrm{O} \longrightarrow\left(\mathrm{NH}_{4}\right)_{2} \mathrm{SO}_{4}+8 \mathrm{HI}+\mathrm{CO}_{2} \nonumber\] The procedure calls for placing a 100-μL aqueous sample that contains thiourea in a 60-mL separatory funnel and adding 10 mL of a pH 7 buffer and 10 mL of 12 μM \(\ce{I2}\) in \(\ce{CCl4}\). The contents of the separatory funnel are shaken and the organic and aqueous layers allowed to separate. The organic layer, which contains the excess \(\ce{I2}\), is transferred to the surface of a piezoelectric crystal on which a thin layer of Au has been deposited. After allowing the \(\ce{I2}\) to adsorb to the Au, the \(\ce{CCl4}\) is removed and the crystal’s frequency shift, \(\Delta f\), measured. The following data are reported for a series of thiourea standards. [The calibration data table is not reproduced here.] (a) Characterize this method with respect to the scale of operation shown in Chapter 3. (b) Prepare a calibration curve and use a regression analysis to determine the relationship between the crystal’s frequency shift and the concentration of thiourea.
(c) If a sample that contains an unknown amount of thiourea gives a \(\Delta f\) of 176 Hz, what is the molar concentration of thiourea in the sample? (d) What is the 95% confidence interval for the concentration of thiourea in this sample assuming one replicate? See the appendices for statistical tables.
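The first problem above (the Fe/Mn alloy weighed as a mixed oxinate precipitate) reduces to two simultaneous equations: one for the total mass of the two precipitates and one for the total moles of 8-hydroxyquinolate. A minimal sketch in Python; the formula weights are computed here from standard atomic weights, and the elimination step and variable names are mine, not the textbook's:

```python
# Fe/Mn alloy sketch: the metals are weighed together as Fe(C9H6NO)3 and
# Mn(C9H6NO)2, so we solve two equations in the two unknown mole amounts:
#   mass:            MW_FE_OX * n_fe + MW_MN_OX * n_mn = 0.8678 g
#   moles of oxinate:       3 * n_fe +        2 * n_mn = 5.276e-3 mol

MW_OX = 9 * 12.011 + 6 * 1.008 + 14.007 + 15.999   # C9H6NO (oxinate ligand)
MW_FE_OX = 55.845 + 3 * MW_OX                      # Fe(C9H6NO)3
MW_MN_OX = 54.938 + 2 * MW_OX                      # Mn(C9H6NO)2

mass_total = 0.8678      # g of mixed precipitate
mol_ox = 5.276e-3        # mol of 8-hydroxyquinolate
mass_sample = 0.1273     # g of alloy

# Solve the 2x2 system by elimination: substitute
# n_mn = (mol_ox - 3*n_fe)/2 into the mass equation.
n_fe = (mass_total - (MW_MN_OX / 2) * mol_ox) / (MW_FE_OX - 1.5 * MW_MN_OX)
n_mn = (mol_ox - 3 * n_fe) / 2

pct_fe = 100 * n_fe * 55.845 / mass_sample
pct_mn = 100 * n_mn * 54.938 / mass_sample
print(round(pct_fe, 1), round(pct_mn, 1))  # roughly 62% Fe and 22% Mn
```

The same elimination works for any two-component mixed precipitate, as long as the two weighing forms have known, different stoichiometries.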
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Polymer_Chemistry_(Schaller)/04%3A_Polymer_Properties/4.10%3A_Chapter_Solutions
a) The sample with the shorter elution time has the higher molecular weight, and the sample with the narrower peak has the narrower dispersity. b) The sample with the shorter elution time has the higher molecular weight, and the sample with the wider peak has the broader dispersity. c) The sample with the longer elution time has the lower molecular weight, and the sample with the wider peak has the broader dispersity.

a) repeat unit: 109 × (\(\ce{C4H6O2}\)) = 109 × 86.09 g/mol = 9,383.81 g/mol; end groups: \(\ce{C9H11}\) + Br = 119.19 + 79.90 = 199.09 g/mol; total: 9,383.81 + 199.09 = 9,582.90 g/mol

b) repeat unit: 763 × (\(\ce{C8H8}\)) = 763 × 104.15 g/mol = 79,466.45 g/mol; end groups: \(\ce{C4H9}\) + H = 57.12 + 1.008 = 58.13 g/mol; total: 79,466.45 + 58.13 = 79,524.58 g/mol

c) repeat unit: 48 × (\(\ce{C4H6O2}\)) = 48 × 86.09 g/mol = 4,132.32 g/mol; end groups: \(\ce{C4H9}\) + \(\ce{C9H13S}\) = 57.12 + 153.26 = 210.38 g/mol; total: 4,132.32 + 210.38 = 4,342.70 g/mol

The ratio of the repeat unit integral per proton to the end group integral per proton gives the degree of polymerization. We could take the entire integration of the end group and divide it by the entire number of protons in that group, or select one position to represent the end group. Similarly, we can select one position to represent the repeat unit.

a) repeat unit integral per proton = 36.0 / 2H = 18; end group integral per proton = 0.32 / 2H = 0.16; degree of polymerization = 18/0.16 = 112

b) repeat unit integral per proton = 26.0 / 1H = 26; end group integral per proton = 1.32 / 9H = 0.15; degree of polymerization = 26/0.15 = 173

c) repeat unit integral per proton = 54.0 / 4H = 13.5; end group integral per proton = 0.49 / 1H = 0.49; degree of polymerization = 13.5/0.49 = 28

α is the slope, which is [rise]/[run]. That's approximately [4.0 − 2.4]/[6.6 − 4.4] = 1.6/2.2 = 0.73. The y-intercept corresponds to log K. The equation for a straight line is y = mx + b; in this case, y = 0.73x + b. If we choose a point on the line, such as (x,y) = (4.9, 2.0), we can substitute those values in for x and y to get b.
So 2.0 = 0.73(4.9) + b, or b = 2.0 − 3.58 = −1.58.

If the molecular weight is a million g/mol, then log(M) = 6. Interpolating, log([η]) = 4, or [η] = 10,000 mL/g. If the intrinsic viscosity is [η] = 800 mL/g, then log([η]) = 2.9. Interpolating, log(M) = 5.1, or M = 126,000 g/mol.

Ethylene glycol can form hydrogen bonds at either end of the molecule, forming a supramolecular assembly much like a polymer. As a result, it has much greater drag in solution and a higher viscosity.

Honey is a concentrated solution of simple sugars, which are small molecules. Molasses, although similar to honey in some ways, also contains starches, which are polymers. This polymeric content leads to shear-thinning behavior.

a) There is a glass transition at around −18 °C. b) There is a melting point at around 125 °C. c) There is a glass transition at around −4 °C. d) There is a glass transition at around 117 °C and a melting point at around 146 °C.

a) \(T_g\) is observed at around 78 °C, \(T_m\) at around 117 °C, and \(T_c\) at around 104 °C. b) \(T_g\) is observed at around 134 °C and \(T_m\) at around 167 °C, but \(T_c\) is not observed; the sample failed to crystallize, but remained an amorphous solid. c) \(T_m\) is observed at around 194 °C and \(T_c\) at around 187 °C. \(T_g\) is not observed, and probably occurs below 150 °C. d) \(T_g\) is observed at around 123 °C, but \(T_m\) is not observed. The experiment checked much higher than \(T_g\) (over a hundred degrees), so the material may be an amorphous solid.
a) d = (2 × 3.14)/0.40 = 16 Å; d = (2 × 3.14)/0.70 = 9.0 Å
b) d = (2 × 3.14)/0.25 = 25 Å; d = (2 × 3.14)/0.85 = 7.4 Å
c) d = (2 × 3.14)/0.25 = 25 Å; d = (2 × 3.14)/0.52 = 12 Å; d = (2 × 3.14)/0.66 = 10 Å

a) ultimate tensile strength = 800 Pa; strain at break = 55%
b) ultimate tensile strength = 750 Pa; strain at break = 215%
c) ultimate tensile strength = 220 Pa; strain at break = 120%

a) E = σ/ε = 180 Pa / 0.30 = 600 Pa
b) E = σ/ε = 450 Pa / 0.15 = 3,000 Pa
c) E = σ/ε = 50 Pa / 0.25 = 200 Pa
d) E = σ/ε = 75 Pa / 0.30 = 250 Pa

1 Pa = 1 Pa; 1 kPa = 1,000 Pa; 1 MPa = 1,000,000 Pa; 1 GPa = 1,000,000,000 Pa

a) glassy: storage modulus = 15 MPa, loss modulus = 70 kPa; rubbery: storage modulus = 7 MPa, loss modulus = 80 kPa
b) glassy: storage modulus = 600 kPa, loss modulus = 140 kPa; rubbery: storage modulus = 130 kPa, loss modulus = 150 kPa
c) glassy: storage modulus = 320 kPa, loss modulus = 80 kPa; rubbery: storage modulus = 70 kPa, loss modulus = 70 kPa

a) 89 °C b) 170 °C c) 124 °C
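The molar-mass answers earlier in these solutions all follow the same pattern: total mass = (degree of polymerization × repeat-unit mass) + end-group masses. A small sketch, mirroring the 109-repeat-unit example above; the helper function name is mine:

```python
# Molar mass of a chain from its degree of polymerization plus end groups,
# e.g. 109 repeat units of C4H6O2 (86.09 g/mol) capped by C9H11 (119.19)
# and Br (79.90 g/mol), as in part (a) of the worked answers.

def chain_mass(n_repeat, m_repeat, m_end_groups):
    """Total molar mass = n * (repeat unit mass) + sum of end-group masses."""
    return n_repeat * m_repeat + sum(m_end_groups)

m = chain_mass(109, 86.09, [119.19, 79.90])
print(round(m, 2))  # matches the 9,582.90 g/mol worked above
```

The same call with 763 styrene units (104.15 g/mol) and C4H9/H end groups reproduces the 79,524.58 g/mol answer.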
https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/06%3A_Gases/6.6%3A_Mixtures_of_Gases
In our use of the ideal gas law thus far, we have focused entirely on the properties of pure gases with only a single chemical species. But what happens when two or more gases are mixed? In this section, we describe how to determine the contribution of each gas present to the total pressure of the mixture.

The ideal gas law assumes that all gases behave identically and that their behavior is independent of attractive and repulsive forces. If volume and temperature are held constant, the ideal gas equation can be rearranged to show that the pressure of a sample of gas is directly proportional to the number of moles of gas present: \[P=n \bigg(\dfrac{RT}{V}\bigg) = n \times \rm const. \label{6.6.1}\] Nothing in the equation depends on the identity of the gas—only the amount. With this assumption, let’s suppose we have a mixture of two ideal gases that are present in equal amounts. What is the total pressure of the mixture? Because the pressure depends on only the total number of particles of gas present, the total pressure of the mixture will simply be twice the pressure of either component. More generally, the total pressure exerted by a mixture of gases at a given temperature and volume is the sum of the pressures exerted by each gas alone. Furthermore, if we know the volume, the temperature, and the number of moles of each gas in a mixture, then we can calculate the pressure exerted by each gas individually, which is its partial pressure, the pressure the gas would exert if it were the only one present (at the same temperature and volume). To summarize, the total pressure exerted by a mixture of gases is the sum of the partial pressures of its component gases. This law was first discovered by John Dalton, the father of the atomic theory of matter. It is now known as Dalton’s law of partial pressures. We can write it mathematically as \[P_{tot}=P_1+P_2+P_3+ \cdots +P_n \label{6.6.2}\] where \(P_{tot}\) is the total pressure and the other terms are the partial pressures of the individual gases (up to \(n\) component gases).
For a mixture of two ideal gases, \(A\) and \(B\), we can write an expression for the total pressure: \[P_{tot}=P_A+P_B=n_A\bigg(\dfrac{RT}{V}\bigg)+n_B\bigg(\dfrac{RT}{V}\bigg)=(n_A+n_B)\bigg(\dfrac{RT}{V}\bigg) \label{6.6.3}\] More generally, for a mixture of \(n\) component gases, the total pressure is given by \[P_{tot}=(n_1+n_2+n_3+ \cdots +n_n)\bigg(\dfrac{RT}{V}\bigg) \label{6.6.4}\] Equation 6.6.4 restates Equation 6.6.3 in a more general form and makes it explicitly clear that, at constant temperature and volume, the pressure exerted by a gas depends on only the total number of moles of gas present, whether the gas is a single chemical species or a mixture of dozens or even hundreds of gaseous species. For Equation 6.6.4 to be valid, the identity of the particles present cannot have an effect. Thus an ideal gas must be one whose properties are not affected by either the size of the particles or their intermolecular interactions because both will vary from one gas to another. The calculation of total and partial pressures for mixtures of gases is illustrated in Example \(\Page {1}\).

Deep-sea divers must use special gas mixtures in their tanks, rather than compressed air, to avoid serious problems, most notably a condition called “the bends.” At depths of about 350 ft, divers are subject to a pressure of approximately 10 atm. A typical gas cylinder used for such depths contains 51.2 g of \(O_2\) and 326.4 g of He and has a volume of 10.0 L. What is the partial pressure of each gas at 20.00°C, and what is the total pressure in the cylinder at this temperature?
Given: masses of components, total volume, and temperature. Asked for: partial pressures and total pressure.

The number of moles of \(He\) is \[n_{\rm He}=\rm\dfrac{326.4\;g}{4.003\;g/mol}=81.54\;mol\] The number of moles of \(O_2\) is \[n_{\rm O_2}=\rm \dfrac{51.2\;g}{32.00\;g/mol}=1.60\;mol\] We can now use the ideal gas law to calculate the partial pressure of each: \[P_{\rm He}=\dfrac{n_{\rm He}RT}{V}=\rm\dfrac{81.54\;mol\times0.08206\;\dfrac{atm\cdot L}{mol\cdot K}\times293.15\;K}{10.0\;L}=196.2\;atm\] \[P_{\rm O_2}=\dfrac{n_{\rm O_2}RT}{V}=\rm\dfrac{1.60\;mol\times0.08206\;\dfrac{atm\cdot L}{mol\cdot K}\times293.15\;K}{10.0\;L}=3.85\;atm\] The total pressure is the sum of the two partial pressures: \[P_{\rm tot}=P_{\rm He}+P_{\rm O_2}=\rm(196.2+3.85)\;atm=200.1\;atm\]

A cylinder of compressed natural gas has a volume of 20.0 L and contains 1813 g of methane and 336 g of ethane. Calculate the partial pressure of each gas at 22.0°C and the total pressure in the cylinder. Answer: \(P_{CH_4}=137 \; atm\); \(P_{C_2H_6}=13.4\; atm\); \(P_{tot}=151\; atm\)

The composition of a gas mixture can be described by the mole fractions of the gases present. The mole fraction (\(X\)) of any component of a mixture is the ratio of the number of moles of that component to the total number of moles of all the species present in the mixture (\(n_{tot}\)): \[x_A=\dfrac{\text{moles of A}}{\text{total moles}}= \dfrac{n_A}{n_{tot}} =\dfrac{n_A}{n_A+n_B+\cdots}\label{6.6.5}\] The mole fraction is a dimensionless quantity between 0 and 1. If \(x_A = 1.0\), then the sample is pure \(A\), not a mixture. If \(x_A = 0\), then no \(A\) is present in the mixture. The sum of the mole fractions of all the components present must equal 1. To see how mole fractions can help us understand the properties of gas mixtures, let’s evaluate the ratio of the pressure of a gas \(A\) to the total pressure of a gas mixture that contains \(A\).
We can use the ideal gas law to describe the pressures of both gas \(A\) and the mixture: \(P_A = n_ART/V\) and \(P_{tot} = n_{tot}RT/V\). The ratio of the two is thus \[\dfrac{P_A}{P_{tot}}=\dfrac{n_ART/V}{n_{tot}RT/V} = \dfrac{n_A}{n_{tot}}=x_A \label{6.6.6}\] Rearranging this equation gives \[P_A = x_AP_{tot} \label{6.6.7}\] That is, the partial pressure of any gas in a mixture is the total pressure multiplied by the mole fraction of that gas. This conclusion is a direct result of the ideal gas law, which assumes that all gas particles behave ideally. Consequently, the pressure of a gas in a mixture depends on only the percentage of particles in the mixture that are of that type, not their specific physical or chemical properties. By volume, Earth’s atmosphere is about 78% \(N_2\), 21% \(O_2\), and 0.9% \(Ar\), with trace amounts of gases such as \(CO_2\), \(H_2O\), and others. This means that 78% of the particles present in the atmosphere are \(N_2\); hence the mole fraction of \(N_2\) is 78%/100% = 0.78. Similarly, the mole fractions of \(O_2\) and \(Ar\) are 0.21 and 0.009, respectively. Using Equation 6.6.7, we therefore know that the partial pressure of \(N_2\) is 0.78 atm (assuming an atmospheric pressure of exactly 760 mmHg) and, similarly, the partial pressures of \(O_2\) and \(Ar\) are 0.21 and 0.009 atm, respectively.

We have just calculated the partial pressures of the major gases in the air we inhale. Experiments that measure the composition of the air we exhale yield different results, however. The following table gives the measured pressures of the major gases in both inhaled and exhaled air. Calculate the mole fractions of the gases in exhaled air.

Given: pressures of gases in inhaled and exhaled air. Asked for: mole fractions of gases in exhaled air.

Calculate the mole fraction of each gas using Equation 6.6.7. The mole fraction of any gas \(A\) is given by \[x_A=\dfrac{P_A}{P_{tot}}\] where \(P_A\) is the partial pressure of \(A\) and \(P_{tot}\) is the total pressure.
For example, the mole fraction of \(CO_2\) is given as: \[x_{\rm CO_2}=\rm\dfrac{48\;mmHg}{767\;mmHg}=0.063\] The following table gives the values of \(x_A\) for the gases in the exhaled air.

Venus is an inhospitable place, with a surface temperature of 560°C and a surface pressure of 90 atm. The atmosphere consists of about 96% \(CO_2\) and 3% \(N_2\), with trace amounts of other gases, including water, sulfur dioxide, and sulfuric acid. Calculate the partial pressures of \(CO_2\) and \(N_2\). \(P_{\rm CO_2}=\rm86\; atm\), \(P_{\rm N_2}=\rm2.7\;atm\)

Dalton’s Law of Partial Pressures: The pressure exerted by each gas in a gas mixture (its partial pressure) is independent of the pressure exerted by all other gases present. Consequently, the total pressure exerted by a mixture of gases is the sum of the partial pressures of the components (Dalton’s law of partial pressures). The amount of gas present in a mixture may be described by its partial pressure or its mole fraction. The mole fraction of any component of a mixture is the ratio of the number of moles of that substance to the total number of moles of all substances present. In a mixture of gases, the partial pressure of each gas is the product of the total pressure and the mole fraction of that gas.
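The diving-cylinder example above is easy to check numerically; this sketch recomputes the partial pressures with the ideal gas law and confirms that the mole fraction recovers each partial pressure (variable names are mine):

```python
# Dalton's law: P_i = n_i * R * T / V, and P_total is just the sum;
# the identity of each gas never enters the calculation.
R = 0.08206          # L atm / (mol K)
T = 293.15           # 20.00 degC in kelvin
V = 10.0             # L

n_he = 326.4 / 4.003   # mol He
n_o2 = 51.2 / 32.00    # mol O2

p_he = n_he * R * T / V
p_o2 = n_o2 * R * T / V
p_tot = p_he + p_o2

# Mole fractions recover the partial pressures: P_i = x_i * P_tot
x_he = n_he / (n_he + n_o2)
print(p_he, p_o2, p_tot)
```

The small differences from the worked values (196.2, 3.85, 200.1 atm) come only from intermediate rounding in the hand calculation.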
https://chem.libretexts.org/Bookshelves/General_Chemistry/Chemistry_1e_(OpenSTAX)/16%3A_Thermodynamics/16.E%3A_Thermodynamics_(Exercises)
What is a spontaneous reaction? A reaction that has a natural tendency to occur and takes place without the continual input of energy from an external source. What is a nonspontaneous reaction? Indicate whether the following processes are spontaneous or nonspontaneous. spontaneous; nonspontaneous; spontaneous; nonspontaneous; spontaneous; spontaneous

A helium-filled balloon spontaneously deflates overnight as He atoms diffuse through the wall of the balloon. Describe the redistribution of matter and/or energy that accompanies this process.

Many plastic materials are organic polymers that contain carbon and hydrogen. The oxidation of these plastics in air to form carbon dioxide and water is a spontaneous process; however, plastic materials tend to persist in the environment. Explain. Although the oxidation of plastics is spontaneous, the rate of oxidation is very slow. Plastics are therefore kinetically stable and do not decompose appreciably even over relatively long periods of time.

In the figure below all possible distributions and microstates are shown for four different particles shared between two boxes. Determine the entropy change, ΔS, if the particles are initially evenly distributed between the two boxes, but upon redistribution all end up in Box (b). In Figure all of the possible distributions and microstates are shown for four different particles shared between two boxes. Determine the entropy change, ΔS, for the system when it is converted from distribution to distribution (d). There are four initial microstates and four final microstates. \[ΔS=k\ln\dfrac{W_\ce{f}}{W_\ce{i}}=\mathrm{1.38×10^{−23}\:J/K×\ln\dfrac{4}{4}=0}\]

How does the process described in the previous item relate to the system shown in ? Consider a system similar to the one below, except that it contains six particles instead of four. What is the probability of having all the particles in only one of the two boxes in this case?
Compare this with the similar probability for the system of four particles that we have derived to be equal to \(\dfrac{1}{8}\). What does this comparison tell us about even larger systems? The probability for all the particles to be on one side is \(\dfrac{1}{32}\). This probability is noticeably lower than the \(\dfrac{1}{8}\) result for the four-particle system. The conclusion we can make is that the probability for all the particles to stay in only one part of the system will decrease rapidly as the number of particles increases, and, for instance, the probability for all molecules of gas to gather in only one side of a room at room temperature and pressure is negligible since the number of gas molecules in the room is very large.

Consider the system shown in Figure. What is the change in entropy for the process where the energy is initially associated only with particle A, but in the final state the energy is distributed between two different particles? Consider the system shown in . What is the change in entropy for the process where the energy is initially associated with particles A and B, and the energy is distributed between two particles in different boxes (one in A-B, the other in C-D)? There is only one initial state. For the final state, the energy can be contained in pairs A-C, A-D, B-C, or B-D. Thus, there are four final possible states. \[ΔS=k\ln\left(\dfrac{W_\ce{f}}{W_\ce{i}}\right)=\mathrm{1.38×10^{−23}\:J/K×\ln\left(\dfrac{4}{1}\right)=1.91×10^{−23}\:J/K}\]

Arrange the following sets of systems in order of increasing entropy. Assume one mole of each substance and the same temperature for each member of a set. At room temperature, the entropy of the halogens increases from \(\ce{I2}\) to \(\ce{Br2}\) to \(\ce{Cl2}\). Explain. The masses of these molecules would suggest the opposite trend in their entropies. The observed trend is a result of the more significant variation of entropy with physical state. At room temperature, \(\ce{I2}\) is a solid, \(\ce{Br2}\) is a liquid, and \(\ce{Cl2}\) is a gas.
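The microstate-counting answers above all use the Boltzmann relation ΔS = k ln(W_f/W_i). A quick sketch with the energy-distribution example (one initial microstate, four final ones); the function name is mine:

```python
import math

K_B = 1.38e-23  # Boltzmann constant in J/K, the value used in the answers above

def delta_s(w_initial, w_final):
    """Entropy change when the number of accessible microstates changes."""
    return K_B * math.log(w_final / w_initial)

# Four equivalent final states (pairs A-C, A-D, B-C, B-D) from one initial state:
print(delta_s(1, 4))   # about 1.91e-23 J/K, as in the worked answer

# Equal microstate counts before and after give no entropy change:
print(delta_s(4, 4))   # 0.0, matching the first worked answer
```

The six-particle probability question works the same way: with 2^6 = 64 equally likely microstates, the chance of all particles being in one of the two boxes is 2/64 = 1/32.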
Consider two processes: sublimation of \(\ce{I2}(s)\) and melting of \(\ce{I2}(s)\) (Note: the latter process can occur at the same temperature but somewhat higher pressure). \[\ce{I2}(s)⟶\ce{I2}(g)\] \[\ce{I2}(s)⟶\ce{I2}(l)\] Is ΔS positive or negative in these processes? In which of the processes will the magnitude of the entropy change be greater?

Indicate which substance in the given pairs has the higher entropy value. Explain your choices. C H OH( ) as it is a larger molecule (more complex and more massive), and so more microstates describing its motions are available at any given temperature. C H OH( ) as it is in the gaseous state. 2H(g), since entropy is an extensive property, and so two H atoms (or two moles of H atoms) possess twice as much entropy as one atom (or one mole of atoms).

Predict the sign of the entropy change for the following processes: Predict the sign of the entropy change for the following processes. Give a reason for your prediction. Negative. The relatively ordered solid precipitating decreases the number of mobile ions in solution. Negative. There is a net loss of three moles of gas from reactants to products. Positive. There is a net increase of seven moles of gas from reactants to products.

Write the balanced chemical equation for the combustion of methane, \(\ce{CH4}(g)\), to give carbon dioxide and water vapor. Explain why it is difficult to predict whether ΔS is positive or negative for this chemical reaction. Write the balanced chemical equation for the combustion of benzene, \(\ce{C6H6}(l)\), to give carbon dioxide and water vapor. Would you expect ΔS to be positive or negative in this process? \[\ce{C6H6}(l)+7.5\ce{O2}(g)⟶\ce{3H2O}(g)+\ce{6CO2}(g)\] There are 7.5 moles of gas initially, and 3 + 6 = 9 moles of gas in the end. Therefore, it is likely that the entropy increases as a result of this reaction, and ΔS is positive.

What is the difference between ΔS, ΔS°, and \(ΔS^\circ_{298}\) for a chemical change? Calculate \(ΔS^\circ_{298}\) for the following changes.
107 J/K; −86.4 J/K; 133.2 J/K; 118.8 J/K; −326.6 J/K; −171.9 J/K; (g) −7.2 J/K

Determine the entropy change for the combustion of liquid ethanol, \(\ce{C2H5OH}\), under standard state conditions to give gaseous carbon dioxide and liquid water. Determine the entropy change for the combustion of gaseous propane, \(\ce{C3H8}\), under standard state conditions to give gaseous carbon dioxide and water. 100.6 J/K

“Thermite” reactions have been used for welding metal parts such as railway rails and in metal refining. One such thermite reaction is: \[\ce{Fe2O3}(s)+\ce{2Al}(s)⟶\ce{Al2O3}(s)+\ce{2Fe}(s)\] Is the reaction spontaneous at room temperature under standard conditions? During the reaction, the surroundings absorb 851.8 kJ/mol of heat.

Using the relevant \(S^\circ_{298}\) values listed in , calculate \(ΔS^\circ_{298}\) for the following changes: −198.1 J/K; −348.9 J/K

From the following information, determine \(ΔS^\circ_{298}\) for the following:

By calculating ΔG at each temperature, determine if the melting of 1 mole of NaCl(s) is spontaneous at 500 °C and at 700 °C. \[S^\circ_{\ce{NaCl}(s)}=\mathrm{72.11\:\dfrac{J}{mol⋅K}}\hspace{40px} S^\circ_{\ce{NaCl}(l)}=\mathrm{95.06\:\dfrac{J}{mol⋅K}}\hspace{40px} ΔH^\circ_\ce{fusion}=\mathrm{27.95\: kJ/mol}\] What assumptions are made about the thermodynamic information (entropy and enthalpy values) used to solve this problem? As ΔG > 0 at each of these temperatures, melting is not spontaneous at either of them. The given values for entropy and enthalpy are for NaCl at 298 K. It is assumed that these do not change significantly at the higher temperatures used in the problem.

Use the standard entropy data in to determine the change in entropy for each of the reactions listed in . All are run under standard state conditions and 25 °C. 2.86 J/K; 24.8 J/K; −113.2 J/K; −24.7 J/K; 15.5 J/K; 290.0 J/K

What is the difference between ΔG, ΔG°, and \(ΔG^\circ_{298}\) for a chemical change?
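The standard entropy changes asked for above are all sums of tabulated \(S^\circ_{298}\) values, products minus reactants. A sketch for the propane combustion; the S° values below are typical appendix values in J/(mol·K), so your table (and hence the last decimal of the answer) may differ slightly:

```python
# dS_rxn = sum(n * S(products)) - sum(n * S(reactants))
# for C3H8(g) + 5 O2(g) -> 3 CO2(g) + 4 H2O(g)
S = {  # standard molar entropies at 298 K, J/(mol K) -- typical appendix values
    "C3H8(g)": 270.3,
    "O2(g)": 205.2,
    "CO2(g)": 213.8,
    "H2O(g)": 188.8,
}

def reaction_entropy(products, reactants):
    """products/reactants are lists of (coefficient, species) pairs."""
    total = lambda side: sum(n * S[sp] for n, sp in side)
    return total(products) - total(reactants)

dS = reaction_entropy([(3, "CO2(g)"), (4, "H2O(g)")],
                      [(1, "C3H8(g)"), (5, "O2(g)")])
print(round(dS, 1))  # close to the 100.6 J/K quoted above
```

Any of the other ΔS°₂₉₈ answers in this set can be reproduced the same way by swapping in the appropriate species and coefficients.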
A reaction has \(ΔH^\circ_{298}\) = 100 kJ/mol and \(ΔS^\circ_{298}=\textrm{250 J/mol⋅K}\). Is the reaction spontaneous at room temperature? If not, under what temperature conditions will it become spontaneous? The reaction is nonspontaneous at room temperature. Above 400 K, ΔG will become negative, and the reaction will become spontaneous.

Explain what happens as a reaction starts with ΔG < 0 (negative) and reaches the point where ΔG = 0.

Use the standard free energy of formation data in to determine the free energy change for each of the following reactions, which are run under standard state conditions and 25 °C. Identify each as either spontaneous or nonspontaneous at these conditions. 465.1 kJ nonspontaneous; −106.86 kJ spontaneous; −53.6 kJ spontaneous; −83.4 kJ spontaneous; −406.7 kJ spontaneous; −30.0 kJ spontaneous

Use the standard free energy data in to determine the free energy change for each of the following reactions, which are run under standard state conditions and 25 °C. Identify each as either spontaneous or nonspontaneous at these conditions. Given: \[\ce{P4}(s)+\ce{5O2}(g)⟶\ce{P4O10}(s) \hspace{20px} ΔG^\circ_{298}=\mathrm{−2697.0\: kJ/mol}\] \[\ce{2H2}(g)+\ce{O2}(g)⟶\ce{2H2O}(g) \hspace{20px} ΔG^\circ_{298}=\mathrm{−457.18\: kJ/mol}\] \[\ce{6H2O}(g)+\ce{P4O10}(g)⟶\ce{4H3PO4}(l) \hspace{20px} ΔG^\circ_{298}=\mathrm{−428.66\: kJ/mol}\] −1124.3 kJ/mol for the standard free energy change. The calculation agrees with the value in because free energy is a state function (just like the enthalpy and entropy), so its change depends only on the initial and final states, not the path between them.

Is the formation of ozone (\(\ce{O3}(g)\)) from oxygen (\(\ce{O2}(g)\)) spontaneous at room temperature under standard state conditions?

Consider the decomposition of red mercury(II) oxide under standard state conditions. \[\ce{2HgO}(s,\,\ce{red})⟶\ce{2Hg}(l)+\ce{O2}(g)\] The reaction is nonspontaneous; above 566 °C the process is spontaneous.
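The spontaneity question above (ΔH°₂₉₈ = 100 kJ/mol, ΔS°₂₉₈ = 250 J/(mol·K)) comes down to the sign of ΔG = ΔH° − TΔS°. A sketch of the crossover-temperature reasoning (variable names are mine):

```python
# dG = dH - T*dS: with both dH and dS positive, dG turns negative -- and the
# reaction becomes spontaneous -- above the crossover temperature T = dH/dS.
dH = 100e3   # J/mol
dS = 250.0   # J/(mol K)

def dG(T):
    """Free energy change at temperature T, assuming dH and dS are constant."""
    return dH - T * dS

T_crossover = dH / dS
print(T_crossover)        # 400.0 K
print(dG(298.15) > 0)     # True: nonspontaneous at room temperature
print(dG(500.0) < 0)      # True: spontaneous above 400 K
```

The NaCl melting problem earlier uses exactly the same relation, just with ΔH_fusion and ΔS_fusion evaluated at two specific temperatures.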
Among other things, an ideal fuel for the control thrusters of a space vehicle should decompose in a spontaneous exothermic reaction when exposed to the appropriate catalyst. Evaluate the following substances under standard state conditions as suitable candidates for fuels.

Calculate ΔG° for each of the following reactions from the equilibrium constant at the temperature given. 1.5 × 10 kJ; −21.9 kJ; −5.34 kJ; −0.383 kJ; 18 kJ; 71 kJ

Calculate ΔG° for each of the following reactions from the equilibrium constant at the temperature given.

Calculate the equilibrium constant at 25 °C for each of the following reactions from the value of ΔG° given. K = 41; K = 0.053; K = 6.9 × 10 ; K = 1.9; K = 0.04

Calculate the equilibrium constant at 25 °C for each of the following reactions from the value of ΔG° given.

Calculate the equilibrium constant at the temperature given. In each of the following, the value of ΔG° is not given at the temperature of the reaction. Therefore, we must calculate ΔG° from the values of ΔH° and ΔS° using the relation ΔG° = ΔH° − TΔS°.

Calculate the equilibrium constant at the temperature given.

Consider the following reaction at 298 K: \[\ce{N2O4}(g)⇌\ce{2NO2}(g) \hspace{20px} K_P=0.142\] What is the standard free energy change at this temperature? Describe what happens to the initial system, where the reactants and products are in standard states, as it approaches equilibrium. The standard free energy change is \(ΔG^\circ_{298}=−RT\ln K=\mathrm{4.84\: kJ/mol}\). When reactants and products are in their standard states (1 bar or 1 atm), Q = 1. As the reaction proceeds toward equilibrium, the reaction shifts left (the amount of products drops while the amount of reactants increases): Q < 1, and \(ΔG_{298}\) becomes less positive as it approaches zero. At equilibrium, Q = K, and ΔG = 0.

Determine the normal boiling point (in kelvin) of dichloroethane, \(\ce{C2H4Cl2}\).
Find the actual boiling point using the Internet or some other source, and calculate the percent error in the temperature. Explain the differences, if any, between the two values.

Under what conditions is \(\ce{N2O3}(g)⟶\ce{NO}(g)+\ce{NO2}(g)\) spontaneous? The reaction will be spontaneous at temperatures greater than 287 K.

At room temperature, the equilibrium constant (\(K_w\)) for the self-ionization of water is \(1.00 × 10^{−14}\). Using this information, calculate the standard free energy change for the aqueous reaction of hydrogen ion with hydroxide ion to produce water. (Hint: The reaction is the reverse of the self-ionization reaction.)

Hydrogen sulfide is a pollutant found in natural gas. Following its removal, it is converted to sulfur by the reaction \(\ce{2H2S}(g)+\ce{SO2}(g)⇌\dfrac{3}{8}\ce{S8}(s,\,\ce{rhombic})+\ce{2H2O}(l)\). What is the equilibrium constant for this reaction? Is the reaction endothermic or exothermic? K = 5.35 × 10 . The process is exothermic.

Consider the decomposition of \(\ce{CaCO3}(s)\) into \(\ce{CaO}(s)\) and \(\ce{CO2}(g)\). What is the equilibrium partial pressure of \(\ce{CO2}\) at room temperature?

In the laboratory, hydrogen chloride (\(\ce{HCl}(g)\)) and ammonia (\(\ce{NH3}(g)\)) often escape from bottles of their solutions and react to form ammonium chloride (\(\ce{NH4Cl}(s)\)), the white glaze often seen on glassware. Assuming that the number of moles of each gas that escapes into the room is the same, what is the maximum partial pressure of HCl and \(\ce{NH3}\) in the laboratory at room temperature? (Hint: The partial pressures will be equal and are at their maximum value when at equilibrium.) 1.0 × 10 atm. This is the maximum pressure of the gases under the stated conditions.

Benzene can be prepared from acetylene. \(\ce{3C2H2}(g)⇌\ce{C6H6}(g)\). Determine the equilibrium constant at 25 °C and at 850 °C. Is the reaction spontaneous at either of these temperatures? Why is all acetylene not found as benzene?

Carbon dioxide decomposes into CO and \(\ce{O2}\) at elevated temperatures.
What is the equilibrium partial pressure of oxygen in a sample at 1000 °C for which the initial pressure of CO₂ was 1.15 atm? \[x=\mathrm{1.29×10^{−5}\:atm}=P_{\ce{O2}}\] Carbon tetrachloride, an important industrial solvent, is prepared by the chlorination of methane at 850 K. \[\ce{CH4}(g)+\ce{4Cl2}(g)⟶\ce{CCl4}(g)+\ce{4HCl}(g)\] What is the equilibrium constant for the reaction at 850 K? Would the reaction vessel need to be heated or cooled to keep the temperature of the reaction constant? Acetic acid, CH₃CO₂H, can form a dimer, (CH₃CO₂H)₂, in the gas phase. \[\ce{2CH3CO2H}(g)⟶\ce{(CH3CO2H)2}(g)\] The dimer is held together by two hydrogen bonds with a total strength of 66.5 kJ per mole of dimer. At 25 °C, the equilibrium constant for the dimerization is 1.3 × 10³ (pressure in atm). What is ΔS° for the reaction? −0.16 kJ/K Nitric acid, HNO₃, can be prepared by the following sequence of reactions: \[\ce{4NH3}(g)+\ce{5O2}(g)⟶\ce{4NO}(g)+\ce{6H2O}(g)\] \[\ce{2NO}(g)+\ce{O2}(g)⟶\ce{2NO2}(g)\] \[\ce{3NO2}(g)+\ce{H2O}(l)⟶\ce{2HNO3}(l)+\ce{NO}(g)\] How much heat is evolved when 1 mol of NH₃(g) is converted to HNO₃(l)? Assume standard states at 25 °C. Determine ΔG° for the following reactions. (a) Antimony pentachloride decomposes at 448 °C. The reaction is: \[\ce{SbCl5}(g)⟶\ce{SbCl3}(g)+\ce{Cl2}(g)\] An equilibrium mixture in a 5.00 L flask at 448 °C contains 3.85 g of SbCl₅, 9.14 g of SbCl₃, and 2.84 g of Cl₂. Chlorine molecules dissociate according to this reaction: \[\ce{Cl2}(g)⟶\ce{2Cl}(g)\] 1.00% of Cl₂ molecules dissociate at 975 K and a pressure of 1.00 atm. Given that the \(ΔG^\circ_\ce{f}\) for Pb²⁺(aq) and Cl⁻(aq) is −24.3 kJ/mole and −131.2 kJ/mole respectively, determine the solubility product, Ksp, for PbCl₂(s). Determine the standard free energy change, \(ΔG^\circ_\ce{f}\), for the formation of S²⁻(aq) given that the \(ΔG^\circ_\ce{f}\) for Ag⁺(aq) and Ag₂S(s) are 77.1 kJ/mole and −39.5 kJ/mole respectively, and the solubility product for Ag₂S(s) is 8 × 10 .
90 kJ/mol Determine the standard enthalpy change, entropy change, and free energy change for the conversion of diamond to graphite. Discuss the spontaneity of the conversion with respect to the enthalpy and entropy changes. Explain why diamond spontaneously changing into graphite is not observed. The evaporation of one mole of water at 298 K has a standard free energy change of 8.58 kJ. \[\ce{H2O}(l)⇌\ce{H2O}(g) \hspace{20px} ΔG^\circ_{298}=\mathrm{8.58\: kJ}\] (a) Under standard thermodynamic conditions, the evaporation is nonspontaneous; K = 0.031; The evaporation of water is spontaneous when \(P_{\ce{H2O}}\) is less than K, that is, less than 0.031 atm. 0.031 atm represents air saturated with water vapor at 25 °C, or 100% humidity. In glycolysis, the reaction of glucose (Glu) to form glucose-6-phosphate (G6P) requires ATP to be present as described by the following equation: \[\mathrm{Glu + ATP ⟶ G6P + ADP} \hspace{20px} ΔG^\circ_{298}=\mathrm{−17\: kJ}\] In this process, ATP becomes ADP as summarized by the following equation: \[\mathrm{ATP⟶ADP} \hspace{20px} ΔG^\circ_{298}=\mathrm{−30\: kJ}\] Determine the standard free energy change for the following reaction, and explain why ATP is necessary to drive this process: \[\mathrm{Glu⟶G6P} \hspace{20px} ΔG^\circ_{298}=\:?\] One of the important reactions in the biochemical pathway glycolysis is the reaction of glucose-6-phosphate (G6P) to form fructose-6-phosphate (F6P): \[\mathrm{G6P⇌F6P} \hspace{20px} ΔG^\circ_{298}=\mathrm{1.7\: kJ}\] (a) Nonspontaneous as \(ΔG^\circ_{298}>0\); \(ΔG^\circ_{298}=−RT\ln K,\) \(ΔG = 1.7×10^3 + \left(8.314 × 335 × \ln\dfrac{28}{128}\right) = \mathrm{−2.5\: kJ}\). The forward reaction to produce F6P is spontaneous under these conditions. Without doing a numerical calculation, determine which of the following will reduce the free energy change for the reaction, that is, make it less positive or more negative, when the temperature is increased. Explain.
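The two glycolysis answers above (Hess-style coupling of the ATP reaction, then ΔG = ΔG° + RT ln Q under nonstandard conditions) can be verified in a few lines:

```python
import math

# Coupling: (Glu + ATP -> G6P + ADP) minus (ATP -> ADP) leaves Glu -> G6P
dG_coupled, dG_atp = -17.0, -30.0           # kJ, from the problem statement
dG_glu_to_g6p = dG_coupled - dG_atp         # +13 kJ: nonspontaneous alone,
print(dG_glu_to_g6p)                        # which is why ATP is needed

# G6P <=> F6P at 335 K with Q = [F6P]/[G6P] = 28/128
R = 8.314
dG = 1.7e3 + R * 335 * math.log(28 / 128)   # J, as in the answer above
print(round(dG / 1000, 1))  # about -2.5 kJ
```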
When ammonium chloride is added to water and stirred, it dissolves spontaneously and the resulting solution feels cold. Without doing any calculations, deduce the signs of ΔG, ΔH, and ΔS for this process, and justify your choices. ΔG is negative as the process is spontaneous. ΔH is positive: because the solution becomes cold, the dissolving must be endothermic. ΔS must be positive as this drives the process, and it is expected for the dissolution of any soluble ionic compound. An important source of copper is the copper ore chalcocite, a form of copper(I) sulfide. When heated, the Cu₂S decomposes to form copper and sulfur as described by the following equation: \[\ce{Cu2S}(s)⟶\ce{Cu}(s)+\ce{S}(s)\] What happens to \(ΔG^\circ_{298}\) (becomes more negative or more positive) for the following chemical reactions when the partial pressure of oxygen is increased?
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Organic_Chemistry_(Morsch_et_al.)/05%3A_Stereochemistry_at_Tetrahedral_Centers
After you have completed Chapter 5, you should be able to do the following. This chapter introduces the concept of chirality, and discusses the structure of compounds containing one or two chiral centers. A convenient method of representing the three-dimensional arrangement of the atoms in chiral compounds is explained; furthermore, throughout the chapter, considerable emphasis is placed on the use of molecular models to assist in the understanding of the phenomenon of chirality. The chapter continues with an examination of stereochemistry—the three-dimensional nature of molecules. The subject is introduced using the experimental observation that certain substances have the ability to rotate plane-polarized light. Finally, certain reactions of alkenes are re-examined in the light of the new material encountered in this chapter. Thumbnail: Two enantiomers of a generic amino acid that are chiral. (Public Domain; unknown author)
https://chem.libretexts.org/Bookshelves/Analytical_Chemistry/Analytical_Chemistry_2.1_(Harvey)/09%3A_Titrimetric_Methods/9.06%3A_Problems
Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants: 1. Calculate or sketch titration curves for the following acid–base titrations. (a) 25.0 mL of 0.100 M NaOH with 0.0500 M HCl (b) 50.0 mL of 0.0500 M HCOOH with 0.100 M NaOH (c) 50.0 mL of 0.100 M NH₃ with 0.100 M HCl (d) 50.0 mL of 0.0500 M ethylenediamine with 0.100 M HCl (e) 50.0 mL of 0.0400 M citric acid with 0.120 M NaOH (f) 50.0 mL of 0.0400 M H₃PO₄ with 0.120 M NaOH 2. Locate the equivalence point(s) for each titration curve in problem 1 and, where feasible, calculate the pH at the equivalence point. What is the stoichiometric relationship between the moles of acid and the moles of base for each of these equivalence points? 3. Suggest an appropriate visual indicator for each of the titrations in problem 1. 4. To sketch the titration curve for a weak acid we approximate the pH at 10% of the equivalence point volume as pKa − 1, and the pH at 90% of the equivalence point volume as pKa + 1. Show that these assumptions are reasonable. 5. Tartaric acid, H₂C₄H₄O₆, is a diprotic weak acid with a pKa1 of 3.0 and a pKa2 of 4.4. Suppose you have a sample of impure tartaric acid (purity > 80%), and that you plan to determine its purity by titrating with a solution of 0.1 M NaOH using an indicator to signal the end point. Describe how you will carry out the analysis, paying particular attention to how much sample to use, the desired pH range for the indicator, and how you will calculate the %w/w tartaric acid. Assume your buret has a maximum capacity of 50 mL. 6. The following data for the titration of a monoprotic weak acid with a strong base were collected using an automatic titrator. Prepare normal, first derivative, second derivative, and Gran plot titration curves for this data, and locate the equivalence point for each. 0.25 0.86 1.63 2.72 4.29 6.54 9.67 7.
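For problem 1(a), the points along a strong base–strong acid titration curve can be computed directly from the excess of whichever reagent dominates. A minimal sketch, assuming 25 °C (pKw = 14) and complete dissociation (the function name is illustrative, not from the text):

```python
import math

def ph_mix(Vb_mL, Cb, Va_mL, Ca):
    """pH after adding Va mL of strong acid (Ca) to Vb mL of strong base (Cb).

    Assumes 25 degC (pKw = 14) and complete dissociation.
    """
    excess = Vb_mL * Cb - Va_mL * Ca      # mmol of unneutralized OH-
    Vtot = Vb_mL + Va_mL                  # total volume in mL
    if abs(excess) < 1e-12:               # at the equivalence point
        return 7.0
    conc = abs(excess) / Vtot             # mol/L of excess OH- or H3O+
    p = -math.log10(conc)
    return 14 - p if excess > 0 else p

# Problem 1(a): 25.0 mL of 0.100 M NaOH titrated with 0.0500 M HCl
for Va in (0.0, 25.0, 50.0, 60.0):
    print(f"{Va:5.1f} mL HCl -> pH {ph_mix(25.0, 0.100, Va, 0.0500):.2f}")
```

The equivalence point falls at 50.0 mL of HCl, where the pH is 7.00 because both titrant and titrand are strong.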
Schwartz published the following simulated data for the titration of a \(1.02 \times 10^{-4}\) M solution of a monoprotic weak acid (pKa = 8.16) with \(1.004 \times 10^{-3}\) M NaOH [Schwartz, L. M. , , 879–883]. The simulation assumes that a 50-mL pipet is used to transfer a portion of the weak acid solution to the titration vessel. A calibration of the pipet shows that it delivers a volume of only 49.94 mL. Prepare normal, first derivative, second derivative, and Gran plot titration curves for this data, and determine the equivalence point for each. How do these equivalence points compare to the expected equivalence point? Comment on the utility of each titration curve for the analysis of very dilute solutions of very weak acids. 0.03 6.212 4.79 0.09 0.29 8.994 0.72 8. Calculate or sketch the titration curve for a 50.0 mL solution of a 0.100 M monoprotic weak acid (pKa = 8.0) with 0.1 M strong base in a nonaqueous solvent with Ks = \(10^{-20}\). You may assume that the change in solvent does not affect the weak acid’s pKa. Compare your titration curve to the titration curve when water is the solvent. 9. The titration of a mixture of -nitrophenol (pKa = 7.0) and -nitrophenol (pKa = 8.3) is followed spectrophotometrically. Neither acid absorbs at a wavelength of 545 nm, but their respective conjugate bases do absorb at this wavelength. The -nitrophenolate ion has a greater absorbance than an equimolar solution of the -nitrophenolate ion. Sketch the spectrophotometric titration curve for a 50.00-mL mixture consisting of 0.0500 M -nitrophenol and 0.0500 M -nitrophenol with 0.100 M NaOH. Compare your result to the expected potentiometric titration curves. 10. A quantitative analysis for aniline (C₆H₅NH₂, Kb = \(3.94 \times 10^{-10}\)) is carried out by an acid–base titration using glacial acetic acid as the solvent and HClO₄ as the titrant.
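A Gran plot's key property — before the equivalence point, Vb·10⁻ᵖᴴ is linear in Vb with an x-intercept at Veq — is easy to demonstrate on synthetic data. The pKa and Veq below are hypothetical, chosen only to illustrate the method:

```python
import math

# Synthetic weak-acid titration: pH = pKa + log10(Vb/(Veq - Vb)) before
# the equivalence point (hypothetical pKa and Veq, for illustration only)
pKa, Veq = 5.0, 40.0
Vb = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]
pH = [pKa + math.log10(v / (Veq - v)) for v in Vb]

# Gran function: y = Vb * 10^-pH is linear in Vb; x-intercept estimates Veq
y = [v * 10 ** (-p) for v, p in zip(Vb, pH)]
n = len(Vb)
mx, my = sum(Vb) / n, sum(y) / n
slope = sum((v - mx) * (yy - my) for v, yy in zip(Vb, y)) / \
        sum((v - mx) ** 2 for v in Vb)
intercept = my - slope * mx
print(round(-intercept / slope, 2))  # recovers Veq = 40.0 mL
```

Algebraically, Vb·10⁻ᵖᴴ = Ka(Veq − Vb) for this model, so the fitted slope is −Ka and the x-intercept is Veq; with real data the same fit is applied to the points before the equivalence point.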
A known volume of sample that contains 3–4 mmol of aniline is transferred to a 250-mL Erlenmeyer flask and diluted to approximately 75 mL with glacial acetic acid. Two drops of a methyl violet indicator are added, and the solution is titrated with previously standardized 0.1000 M HClO₄ (prepared in glacial acetic acid using anhydrous HClO₄) until the end point is reached. Results are reported as parts per million aniline. (a) Explain why this titration is conducted using glacial acetic acid as the solvent instead of using water. (b) One problem with using glacial acetic acid as solvent is its relatively high coefficient of thermal expansion of 0.11%/°C. For example, 100.00 mL of glacial acetic acid at 25 °C occupies 100.22 mL at 27 °C. What is the effect on the reported concentration of aniline if the standardization of HClO₄ is conducted at a temperature that is lower than that for the analysis of the unknown? (c) The procedure calls for a sample that contains 3–4 mmoles of aniline. Why is this requirement necessary? 11. Using a ladder diagram, explain why the presence of dissolved CO₂ leads to a determinate error for the standardization of NaOH if the end point’s pH is between 6–10, but no determinate error if the end point’s pH is less than 6. 12. A water sample’s acidity is determined by titrating to fixed end point pHs of 3.7 and 8.3, with the former providing a measure of the concentration of strong acid and the latter a measure of the combined concentrations of strong acid and weak acid. Sketch a titration curve for a mixture of 0.10 M HCl and 0.10 M H₂CO₃ with 0.20 M strong base, and use it to justify the choice of these end points. 13.
Ethylenediaminetetraacetic acid, H₄Y, is a weak acid with successive acid dissociation constants of 0.010, \(2.19 \times 10^{-3}\), \(6.92 \times 10^{-7}\), and \(5.75 \times 10^{-11}\). The figure below shows a titration curve for H₄Y with NaOH. What is the stoichiometric relationship between H₄Y and NaOH at the equivalence point marked with the red arrow? 14. A Gran plot method has been described for the quantitative analysis of a mixture that consists of a strong acid and a monoprotic weak acid [(a) Boiani, J. A. , , 724–726; (b) Castillo, C. A.; Jaramillo, A. , , 341]. A 50.00-mL mixture of HCl and CH₃COOH is transferred to an Erlenmeyer flask and titrated by using a digital pipet to add successive 1.00-mL aliquots of 0.09186 M NaOH. The progress of the titration is monitored by recording the pH after each addition of titrant. Using the two papers listed above as a reference, prepare a Gran plot for the following data and determine the concentrations of HCl and CH₃COOH. 1.00 1.83 24.00 4.45 47.00 12.14 2.00 1.86 4.53 48.00 12.17 3.00 1.89 4.61 49.00 12.20 4.00 1.92 4.69 50.00 12.23 5.00 1.95 4.76 51.00 12.26 6.00 1.99 4.84 52.00 12.28 7.00 2.03 4.93 53.00 12.30 8.00 2.10 5.02 54.00 12.32 9.00 2.18 5.13 55.00 12.34 10.00 5.23 56.00 12.36 5.37 57.00 12.38 5.52 58.00 12.39 5.75 59.00 12.40 6.14 60.00 12.42 10.30 15. Explain why it is not possible for a sample of water to simultaneously have OH⁻ and \(\text{HCO}_3^-\) as sources of alkalinity. 16. For each of the samples a–e, determine the sources of alkalinity (OH⁻, \(\text{HCO}_3^-\), \(\text{CO}_3^{2-}\)) and their respective concentrations in parts per million. In each case a 25.00-mL sample is titrated with 0.1198 M HCl to the bromocresol green and the phenolphthalein end points. 17. A sample may contain any of the following: HCl, NaOH, H₃PO₄, \(\text{H}_2\text{PO}_4^-\), \(\text{HPO}_4^{2-}\), or \(\text{PO}_4^{3-}\).
The composition of a sample is determined by titrating a 25.00-mL portion with 0.1198 M HCl or 0.1198 M NaOH to the phenolphthalein and to the methyl orange end points. For each of the following samples, determine which species are present and their respective molar concentrations. 18. The protein in a 1.2846-g sample of an oat cereal is determined by a Kjeldahl analysis. The sample is digested with H₂SO₄, the resulting solution made basic with NaOH, and the NH₃ distilled into 50.00 mL of 0.09552 M HCl. The excess HCl is back titrated using 37.84 mL of 0.05992 M NaOH. Given that the proteins in grains average 17.54% w/w N, report the %w/w protein in the sample. 19. The concentration of SO₂ in air is determined by bubbling a sample of air through a trap that contains H₂O₂. Oxidation of SO₂ by H₂O₂ results in the formation of H₂SO₄, which is then determined by titrating with NaOH. In a typical analysis, a sample of air is passed through the peroxide trap at a rate of 12.5 L/min for 60 min and required 10.08 mL of 0.0244 M NaOH to reach the phenolphthalein end point. Calculate the μL/L SO₂ in the sample of air. The density of SO₂ at the temperature of the air sample is 2.86 mg/mL. 20. The concentration of CO₂ in air is determined by an indirect acid–base titration. A sample of air is bubbled through a solution that contains an excess of Ba(OH)₂, precipitating BaCO₃. The excess Ba(OH)₂ is back titrated with HCl. In a typical analysis a 3.5-L sample of air is bubbled through 50.00 mL of 0.0200 M Ba(OH)₂. Back titrating with 0.0316 M HCl requires 38.58 mL to reach the end point. Determine the ppm CO₂ in the sample of air given that the density of CO₂ at the temperature of the sample is 1.98 g/L. 21.
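Problem 18's back-titration arithmetic can be checked with a short script. The only values not taken directly from the problem are the molar mass of nitrogen (14.007 g/mol) and the stated 17.54% w/w N protein factor:

```python
# Problem 18: Kjeldahl analysis with a back titration
mol_HCl  = 0.05000 * 0.09552          # mol HCl in the receiving flask
mol_NaOH = 0.03784 * 0.05992          # mol NaOH used in the back titration
mol_N    = mol_HCl - mol_NaOH         # mol NH3 distilled = mol N in sample
g_N      = mol_N * 14.007             # molar mass of N, g/mol
g_protein = g_N / 0.1754              # proteins in grains average 17.54% w/w N
pct_protein = 100 * g_protein / 1.2846
print(round(pct_protein, 1))
```

Note the direction of the logic: every mole of NaOH in the back titration means one mole of HCl that never saw NH₃.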
The purity of a synthetic preparation of methylethyl ketone, C₄H₈O, is determined by reacting it with hydroxylamine hydrochloride, liberating HCl (see reaction in ). In a typical analysis a 3.00-mL sample is diluted to 50.00 mL and treated with an excess of hydroxylamine hydrochloride. The liberated HCl is titrated with 0.9989 M NaOH, requiring 32.68 mL to reach the end point. Report the percent purity of the sample given that the density of methylethyl ketone is 0.805 g/mL. 22. Animal fats and vegetable oils are triesters formed from the reaction between glycerol (1,2,3-propanetriol) and three long-chain fatty acids. One of the methods used to characterize a fat or an oil is a determination of its saponification number. When treated with boiling aqueous KOH, an ester saponifies into the parent alcohol and fatty acids (as carboxylate ions). The saponification number is the number of milligrams of KOH required to saponify 1.000 gram of the fat or the oil. In a typical analysis a 2.085-g sample of butter is added to 25.00 mL of 0.5131 M KOH. After saponification is complete the excess KOH is back titrated with 10.26 mL of 0.5000 M HCl. What is the saponification number for this sample of butter? 23. A 250.0-mg sample of an organic weak acid is dissolved in an appropriate solvent and titrated with 0.0556 M NaOH, requiring 32.58 mL to reach the end point. Determine the compound’s equivalent weight. 24. The figure below shows a potentiometric titration curve for a 0.4300-g sample of a purified amino acid that was dissolved in 50.00 mL of water and titrated with 0.1036 M NaOH. Identify the amino acid from the possibilities listed in the table. 25. Using its titration curve, determine the acid dissociation constant for the weak acid in problem 9.6. 26. Where in the scale of operations do the microtitration techniques discussed in belong? 27.
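Problems 22 and 23 are one-line mole balances; a sketch, taking 56.11 g/mol as the molar mass of KOH (the only value not given in the problems):

```python
# Problem 22: saponification number of butter (mg KOH per g of fat)
mmol_KOH_total = 25.00 * 0.5131        # KOH added
mmol_HCl_back  = 10.26 * 0.5000        # excess KOH found by back titration
mmol_KOH_used  = mmol_KOH_total - mmol_HCl_back
sap_number = mmol_KOH_used * 56.11 / 2.085
print(round(sap_number))               # about 207

# Problem 23: equivalent weight = mg of acid per mmol of OH- consumed
ew = 250.0 / (0.0556 * 32.58)
print(round(ew, 1))                    # about 138 g/equivalent
```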
An acid–base titration can be used to determine an analyte’s equivalent weight, but it cannot be used to determine its formula weight. Explain why. 28. Commercial washing soda is approximately 30–40% w/w Na₂CO₃. One procedure for the quantitative analysis of washing soda contains the following instructions: Transfer an approximately 4-g sample of the washing soda to a 250-mL volumetric flask. Dissolve the sample in about 100 mL of H₂O and then dilute to the mark. Using a pipet, transfer a 25-mL aliquot of this solution to a 125-mL Erlenmeyer flask and add 25 mL of H₂O and 2 drops of bromocresol green indicator. Titrate the sample with 0.1 M HCl to the indicator’s end point. What modifications, if any, are necessary if you want to adapt this procedure to evaluate the purity of commercial Na₂CO₃ that is >98% pure? 29. A variety of systematic and random errors are possible when standardizing a solution of NaOH against the primary weak acid standard potassium hydrogen phthalate (KHP). Identify, with justification, whether the following are sources of systematic error or random error, or if they have no effect on the error. If the error is systematic, then indicate whether the experimentally determined molarity for NaOH is too high or too low. The standardization reaction is \[\text{C}_8\text{H}_5\text{O}_4^-(aq) + \text{OH}^-(aq) \rightarrow \text{C}_8\text{H}_4\text{O}_4^{2-}(aq) + \text{H}_2\text{O}(l) \nonumber\] (a) The balance used to weigh KHP is not properly calibrated and always reads 0.15 g too low. (b) The indicator for the titration changes color between a pH of 3–4. (c) An air bubble, which is lodged in the buret’s tip at the beginning of the analysis, dislodges during the titration. (d) Samples of KHP are weighed into separate Erlenmeyer flasks, but the balance is tared only for the first flask. (e) The KHP is not dried before it is used. (f) The NaOH is not dried before it is used.
(g) The procedure states that the sample of KHP should be dissolved in 25 mL of water, but it is accidentally dissolved in 35 mL of water. 30. The concentration of -phthalic acid in an organic solvent, such as -butanol, is determined by an acid–base titration using aqueous NaOH as the titrant. As the titrant is added, the -phthalic acid extracts into the aqueous solution where it reacts with the titrant. The titrant is added slowly to allow sufficient time for the extraction to take place. (a) What type of error do you expect if the titration is carried out too quickly? (b) Propose an alternative acid–base titrimetric method that allows for a more rapid determination of the concentration of -phthalic acid in -butanol. Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants: 31. Calculate or sketch titration curves for 50.0 mL of 0.100 M Mg²⁺ with 0.100 M EDTA at a pH of 7 and 10. Locate the equivalence point for each titration curve. 32. Calculate or sketch titration curves for 25.0 mL of 0.0500 M Cu²⁺ with 0.025 M EDTA at a pH of 10 and in the presence of 10 M and 10 M NH₃. Locate the equivalence point for each titration curve. 33. Sketch the spectrophotometric titration curve for the titration of a mixture of \(5.00 \times 10^{-3}\) M Bi³⁺ and \(5.00 \times 10^{-3}\) M Cu²⁺ with 0.0100 M EDTA. Assume that only the Cu²⁺–EDTA complex absorbs at the selected wavelength. 34. The EDTA titration of mixtures of Ca²⁺ and Mg²⁺ can be followed thermometrically because the formation of the Ca²⁺–EDTA complex is exothermic and the formation of the Mg²⁺–EDTA complex is endothermic. Sketch the thermometric titration curve for a mixture of \(5.00 \times 10^{-3}\) M Ca²⁺ and \(5.00 \times 10^{-3}\) M Mg²⁺ using 0.0100 M EDTA as the titrant. The heats of formation for CaY²⁻ and MgY²⁻ are, respectively, –23.9 kJ/mole and 23.0 kJ/mole. 35.
EDTA is one member of a class of aminocarboxylate ligands that form very stable 1:1 complexes with metal ions. The following table provides log Kf values for the complexes of six such ligands with Ca²⁺ and Mg²⁺. Which ligand is the best choice for a direct titration of Ca²⁺ in the presence of Mg²⁺? 36. The amount of calcium in physiological fluids is determined by a complexometric titration with EDTA. In one such analysis a 0.100-mL sample of a blood serum is made basic by adding 2 drops of NaOH and titrated with 0.00119 M EDTA, requiring 0.268 mL to reach the end point. Report the concentration of calcium in the sample as milligrams Ca per 100 mL. 37. After removing the membranes from an eggshell, the shell is dried and its mass recorded as 5.613 g. The eggshell is transferred to a 250-mL beaker and dissolved in 25 mL of 6 M HCl. After filtering, the solution that contains the dissolved eggshell is diluted to 250 mL in a volumetric flask. A 10.00-mL aliquot is placed in a 125-mL Erlenmeyer flask and buffered to a pH of 10. Titrating with 0.04988 M EDTA requires 44.11 mL to reach the end point. Determine the amount of calcium in the eggshell as %w/w CaCO₃. 38. The concentration of cyanide, CN⁻, in a copper electroplating bath is determined by a complexometric titration using Ag⁺ as the titrant, forming the soluble \(\text{Ag(CN)}_2^-\) complex. In a typical analysis a 5.00-mL sample from an electroplating bath is transferred to a 250-mL Erlenmeyer flask, and treated with 100 mL of H₂O, 5 mL of 20% w/v NaOH and 5 mL of 10% w/v KI. The sample is titrated with 0.1012 M AgNO₃, requiring 27.36 mL to reach the end point as signaled by the formation of a yellow precipitate of AgI. Report the concentration of cyanide as parts per million of NaCN. 39. Before the introduction of EDTA most complexation titrations used Ag⁺ or CN⁻ as the titrant.
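Problem 37 reduces to scaling the aliquot result up to the full 250-mL flask; a sketch, taking 100.09 g/mol for CaCO₃ (the only value not given in the problem):

```python
# Problem 37: %w/w CaCO3 in an eggshell; EDTA reacts 1:1 with Ca2+
mmol_Ca_aliquot = 44.11 * 0.04988           # EDTA used on the 10.00-mL aliquot
mmol_Ca_total = mmol_Ca_aliquot * 250.0 / 10.00   # scale up to the flask
g_CaCO3 = mmol_Ca_total * 100.09 / 1000     # molar mass of CaCO3
pct_caco3 = 100 * g_CaCO3 / 5.613
print(round(pct_caco3, 1))                  # about 98.1 %w/w
```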
The analysis for Cd²⁺, for example, was accomplished indirectly by adding an excess of KCN to form \(\text{Cd(CN)}_4^{2-}\), and back-titrating the excess CN⁻ with Ag⁺, forming \(\text{Ag(CN)}_2^-\). In one such analysis a 0.3000-g sample of an ore is dissolved and treated with 20.00 mL of 0.5000 M KCN. The excess CN⁻ requires 13.98 mL of 0.1518 M AgNO₃ to reach the end point. Determine the %w/w Cd in the ore. 40. Solutions that contain both Fe³⁺ and Al³⁺ are selectively analyzed for Fe³⁺ by buffering to a pH of 2 and titrating with EDTA. The pH of the solution is then raised to 5 and an excess of EDTA added, resulting in the formation of the Al³⁺–EDTA complex. The excess EDTA is back-titrated using a standard solution of Fe³⁺, providing an indirect analysis for Al³⁺. (a) At a pH of 2, verify that the formation of the Fe³⁺–EDTA complex is favorable, and that the formation of the Al³⁺–EDTA complex is not favorable. (b) A 50.00-mL aliquot of a sample that contains Fe³⁺ and Al³⁺ is transferred to a 250-mL Erlenmeyer flask and buffered to a pH of 2. A small amount of salicylic acid is added, forming the soluble red-colored Fe³⁺–salicylic acid complex. The solution is titrated with 0.05002 M EDTA, requiring 24.82 mL to reach the end point as signaled by the disappearance of the Fe³⁺–salicylic acid complex’s red color. The solution is buffered to a pH of 5 and 50.00 mL of 0.05002 M EDTA is added. After ensuring that the formation of the Al³⁺–EDTA complex is complete, the excess EDTA is back titrated with 0.04109 M Fe³⁺, requiring 17.84 mL to reach the end point as signaled by the reappearance of the red-colored Fe³⁺–salicylic acid complex. Report the molar concentrations of Fe³⁺ and Al³⁺ in the sample. Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants: 41.
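Problem 40(b) combines a direct 1:1 titration (iron) with a back titration (aluminum, by difference); a numeric sketch:

```python
# Problem 40(b): iron from the direct titration at pH 2
mmol_Fe = 24.82 * 0.05002                  # EDTA reacts 1:1 with the iron
M_Fe = mmol_Fe / 50.00                     # original 50.00-mL aliquot

# Aluminum by difference: EDTA added minus EDTA left over
mmol_EDTA_added  = 50.00 * 0.05002
mmol_EDTA_excess = 17.84 * 0.04109         # back titrant consumes EDTA 1:1
M_Al = (mmol_EDTA_added - mmol_EDTA_excess) / 50.00
print(round(M_Fe, 4), round(M_Al, 4))
```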
Prada and colleagues described an indirect method for determining sulfate in natural samples, such as seawater and industrial effluents [Prada, S.; Guekezian, M.; Suarez-Iha, M. E. V. , , 197–202]. The method consists of three steps: precipitating the sulfate as PbSO₄; dissolving the PbSO₄ in an ammoniacal solution of excess EDTA to form the soluble PbY²⁻ complex; and titrating the excess EDTA with a standard solution of Mg²⁺. The following reactions and equilibrium constants are known \[\text{PbSO}_4(s) \rightleftharpoons \text{Pb}^{2+}(aq) + \text{SO}_4^{2-}(aq) \quad K_\text{sp} = 1.6 \times 10^{-8} \nonumber\] \[\text{Pb}^{2+}(aq) + \text{Y}^{4-}(aq) \rightleftharpoons \text{PbY}^{2-}(aq) \quad K_\text{f} = 1.1 \times 10^{18} \nonumber\] \[\text{Mg}^{2+}(aq) + \text{Y}^{4-}(aq) \rightleftharpoons \text{MgY}^{2-}(aq) \quad K_\text{f} = 4.9 \times 10^{8} \nonumber\] \[\text{Zn}^{2+}(aq) + \text{Y}^{4-}(aq) \rightleftharpoons \text{ZnY}^{2-}(aq) \quad K_\text{f} = 3.2 \times 10^{16} \nonumber\] (a) Verify that a precipitate of PbSO₄ will dissolve in a solution of Y⁴⁻. (b) Sporek proposed a similar method using Zn²⁺ as a titrant and found that the accuracy frequently was poor [Sporek, K. F. , , 1030–1032]. One explanation is that Zn²⁺ might react with the PbY²⁻ complex, forming ZnY²⁻. Show that this might be a problem when using Zn²⁺ as a titrant, but that it is not a problem when using Mg²⁺ as a titrant. Would such a displacement of Pb²⁺ by Zn²⁺ lead to the reporting of too much or too little sulfate? (c) In a typical analysis, a 25.00-mL sample of an industrial effluent is carried through the procedure using 50.00 mL of 0.05000 M EDTA. Titrating the excess EDTA requires 12.42 mL of 0.1000 M Mg²⁺. Report the molar concentration of \(\text{SO}_4^{2-}\) in the sample of effluent. 42. provides values for the fraction of EDTA present as Y⁴⁻, \(\alpha_{\text{Y}^{4-}}\).
Values of \(\alpha_{\text{Y}^{4-}}\) are calculated using the equation \[\alpha_{\text{Y}^{4-}} = \frac{[\text{Y}^{4-}]}{C_\text{EDTA}} \nonumber\] where [Y⁴⁻] is the concentration of the fully deprotonated EDTA and \(C_\text{EDTA}\) is the total concentration of EDTA in all of its forms \[C_\text{EDTA} = [\text{H}_6\text{Y}^{2+}]+[\text{H}_5\text{Y}^{+}]+[\text{H}_4\text{Y}]+ [\text{H}_3\text{Y}^{-}] + [\text{H}_2\text{Y}^{2-}] + [\text{HY}^{3-}] + [\text{Y}^{4-}] \nonumber\] Use the stepwise acid dissociation reactions \[\text{H}_6\text{Y}^{2+} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{H}_5\text{Y}^{+}(aq) \quad K_\text{a1} \nonumber\] \[\text{H}_5\text{Y}^{+} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{H}_4\text{Y}(aq) \quad K_\text{a2} \nonumber\] \[\text{H}_4\text{Y} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{H}_3\text{Y}^{-}(aq) \quad K_\text{a3} \nonumber\] \[\text{H}_3\text{Y}^{-} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{H}_2\text{Y}^{2-}(aq) \quad K_\text{a4} \nonumber\] \[\text{H}_2\text{Y}^{2-} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{HY}^{3-}(aq) \quad K_\text{a5} \nonumber\] \[\text{HY}^{3-} (aq) + \text{H}_2\text{O}(l) \rightleftharpoons \text{H}_3\text{O}^+(aq) + \text{Y}^{4-}(aq) \quad K_\text{a6} \nonumber\] to show that \[\alpha_{\text{Y}^{4-}} = \frac{K_\text{a1}K_\text{a2}K_\text{a3}K_\text{a4}K_\text{a5}K_\text{a6}}{d} \nonumber\] where \[d = [\text{H}_3\text{O}^+]^6 + [\text{H}_3\text{O}^+]^5K_\text{a1} + [\text{H}_3\text{O}^+]^4K_\text{a1}K_\text{a2} + [\text{H}_3\text{O}^+]^3K_\text{a1}K_\text{a2}K_\text{a3} + [\text{H}_3\text{O}^+]^2K_\text{a1}K_\text{a2}K_\text{a3}K_\text{a4} + [\text{H}_3\text{O}^+]K_\text{a1}K_\text{a2}K_\text{a3}K_\text{a4}K_\text{a5} + K_\text{a1}K_\text{a2}K_\text{a3}K_\text{a4}K_\text{a5}K_\text{a6} \nonumber\] 43.
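The α expression above generalizes to any number of stepwise Ka values, which makes it easy to evaluate numerically. A sketch; the first two EDTA constants below (for H₆Y²⁺ and H₅Y⁺) are approximate literature values and are an assumption here, since only four constants are quoted earlier in this problem set:

```python
def alpha_full(pH, Kas):
    """Fraction of a polyprotic acid present in its fully deprotonated form.

    Term i of the denominator is [H3O+]^(n-i) * Ka1*...*Kai, for i = 0..n,
    exactly as in the expression for d above.
    """
    h = 10.0 ** (-pH)
    n = len(Kas)
    terms, prod = [], 1.0
    for i in range(n + 1):
        terms.append(h ** (n - i) * prod)
        if i < n:
            prod *= Kas[i]
    return terms[-1] / sum(terms)   # numerator is the full Ka product

# EDTA: the four Ka values quoted earlier, preceded by two assumed
# constants for the H6Y2+ and H5Y+ steps (approximate literature values)
Kas_EDTA = [1.0, 0.032, 0.010, 2.19e-3, 6.92e-7, 5.75e-11]
print(f"{alpha_full(10.0, Kas_EDTA):.3f}")
```

At pH 10 the fraction is only about a third, which is why conditional formation constants matter in EDTA titrations.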
Calculate or sketch titration curves for the following redox titration reactions at 25 °C. Assume the analyte initially is present at a concentration of 0.0100 M and that a 25.0-mL sample is taken for analysis. The titrant, which is the boldface species in each reaction, has a concentration of 0.0100 M. (a) V²⁺(aq) + \(\mathbf{Ce^{4+}}\)(aq) \(\rightarrow\) V³⁺(aq) + Ce³⁺(aq) (b) Sn²⁺(aq) + 2\(\mathbf{Ce^{4+}}\)(aq) \(\rightarrow\) Sn⁴⁺(aq) + 2Ce³⁺(aq) (c) 5Fe²⁺(aq) + \(\mathbf{MnO}_\mathbf{4}^\mathbf{-}\)(aq) + 8H⁺(aq) \(\rightarrow\) 5Fe³⁺(aq) + Mn²⁺(aq) + 4H₂O(l) at a pH of 1 44. What is the equivalence point for each titration in problem 43? 45. Suggest an appropriate indicator for each titration in problem 43. 46. The iron content of an ore is determined by a redox titration that uses K₂Cr₂O₇ as the titrant. A sample of the ore is dissolved in concentrated HCl using Sn²⁺ to speed its dissolution by reducing Fe³⁺ to Fe²⁺. After the sample is dissolved, Fe²⁺ and any excess Sn²⁺ are oxidized to Fe³⁺ and Sn⁴⁺ using \(\text{MnO}_4^-\). The iron is then carefully reduced to Fe²⁺ by adding a 2–3 drop excess of Sn²⁺. A solution of HgCl₂ is added and, if a white precipitate of Hg₂Cl₂ forms, the analysis is continued by titrating with K₂Cr₂O₇. The sample is discarded without completing the analysis if a precipitate of Hg₂Cl₂ does not form or if a gray precipitate (due to Hg) forms. (a) Explain why the sample is discarded if a white precipitate of Hg₂Cl₂ does not form or if a gray precipitate forms. (b) Is a determinate error introduced if the analyst forgets to add Sn²⁺ in the step where the iron ore is dissolved? (c) Is a determinate error introduced if the iron is not quantitatively oxidized back to Fe³⁺ by the \(\text{MnO}_4^-\)? 47. The amount of Cr³⁺ in an inorganic salt is determined by a redox titration. A portion of sample that contains approximately 0.25 g of Cr is accurately weighed and dissolved in . The Cr³⁺ is oxidized to \(\text{Cr}_2\text{O}_7^{2-}\) by adding , which serves as a catalyst, and , which serves as the oxidizing agent.
After the reaction is complete, the resulting solution is boiled for 20 minutes to destroy the excess \(\text{S}_2\text{O}_8^{2-}\), cooled to room temperature, and diluted to 250 mL in a volumetric flask. A is transferred to an Erlenmeyer flask, treated with , and acidified with , reducing the \(\text{Cr}_2\text{O}_7^{2-}\) to Cr³⁺. The excess Fe²⁺ is then determined by a back titration with a standard solution of K₂Cr₂O₇ using an appropriate indicator. The results are reported as %w/w Cr. (a) There are several places in the procedure where a reagent’s volume is specified (see ). Which of these measurements must be made using a volumetric pipet. (b) Excess peroxydisulfate, \(\text{S}_2\text{O}_8^{2-}\) is destroyed by boiling the solution. What is the effect on the reported %w/w Cr if some of the \(\text{S}_2\text{O}_8^{2-}\) is not destroyed during this step? (c) Solutions of Fe²⁺ undergo slow air oxidation to Fe³⁺. What is the effect on the reported %w/w Cr if the standard solution of Fe²⁺ is inadvertently allowed to be partially oxidized? 48. The exact concentration of H₂O₂ in a solution that is nominally 6% w/v H₂O₂ is determined by a redox titration using \(\text{MnO}_4^-\) as the titrant. A 25-mL aliquot of the sample is transferred to a 250-mL volumetric flask and diluted to volume with distilled water. A 25-mL aliquot of the diluted sample is added to an Erlenmeyer flask, diluted with 200 mL of distilled water, and acidified with 20 mL of 25% v/v H₂SO₄. The resulting solution is titrated with a standard solution of KMnO₄ until a faint pink color persists for 30 s. The results are reported as %w/v H₂O₂. (a) Many commercially available solutions of H₂O₂ contain an inorganic or an organic stabilizer to prevent the autodecomposition of the peroxide to H₂O and O₂. What effect does the presence of this stabilizer have on the reported %w/v H₂O₂ if it also reacts with \(\text{MnO}_4^-\)?
(b) Laboratory distilled water often contains traces of dissolved organic material that may react with \(\text{MnO}_4^-\). Describe a simple method to correct for this potential interference. (c) What modifications to the procedure, if any, are needed if the sample has a nominal concentration of 30% w/v \(\text{H}_2\text{O}_2\)? 49. The amount of iron in a meteorite is determined by a redox titration using \(\text{KMnO}_4\) as the titrant. A 0.4185-g sample is dissolved in acid and the liberated \(\text{Fe}^{3+}\) quantitatively reduced to \(\text{Fe}^{2+}\) using a Walden reductor. Titrating with 0.02500 M \(\text{KMnO}_4\) requires 41.27 mL to reach the end point. Determine the %w/w \(\text{Fe}_2\text{O}_3\) in the sample of meteorite. 50. Under basic conditions, \(\text{MnO}_4^-\) is used as a titrant for the analysis of \(\text{Mn}^{2+}\), with both the analyte and the titrant forming \(\text{MnO}_2\). In the analysis of a mineral sample for manganese, a 0.5165-g sample is dissolved and the manganese reduced to \(\text{Mn}^{2+}\). The solution is made basic and titrated with 0.03358 M \(\text{KMnO}_4\), requiring 34.88 mL to reach the end point. Calculate the %w/w Mn in the mineral sample. Some of the problems that follow require one or more equilibrium constants or standard state potentials. For your convenience, here are hyperlinks to the appendices containing these constants: 51. The amount of uranium in an ore is determined by an indirect redox titration. The analysis is accomplished by dissolving the ore in sulfuric acid and reducing \(\text{UO}_2^{2+}\) to \(\text{U}^{4+}\) with a Walden reductor. The solution is treated with an excess of \(\text{Fe}^{3+}\), forming \(\text{Fe}^{2+}\) and \(\text{UO}_2^{2+}\). The \(\text{Fe}^{2+}\) is titrated with a standard solution of \(\text{K}_2\text{Cr}_2\text{O}_7\). In a typical analysis a 0.315-g sample of ore is passed through the Walden reductor and treated with 50.00 mL of 0.0125 M \(\text{Fe}^{3+}\). Back titrating with 0.00987 M \(\text{K}_2\text{Cr}_2\text{O}_7\) requires 10.52 mL. What is the %w/w U in the sample? 52. The thickness of the chromium plate on an auto fender is determined by dissolving a 30.0-cm² section in acid and oxidizing \(\text{Cr}^{3+}\) to \(\text{Cr}_2\text{O}_7^{2-}\) with peroxydisulfate.
After removing excess peroxydisulfate by boiling, 500.0 mg of \(\text{Fe}(\text{NH}_4)_2(\text{SO}_4)_2 \cdot 6\text{H}_2\text{O}\) is added, reducing the \(\text{Cr}_2\text{O}_7^{2-}\) to \(\text{Cr}^{3+}\). The excess \(\text{Fe}^{2+}\) is back titrated, requiring 18.29 mL of 0.00389 M \(\text{K}_2\text{Cr}_2\text{O}_7\) to reach the end point. Determine the average thickness of the chromium plate given that the density of Cr is 7.20 g/cm³. 53. The concentration of CO in air is determined by passing a known volume of air through a tube that contains \(\text{I}_2\text{O}_5\), forming \(\text{CO}_2\) and \(\text{I}_2\). The \(\text{I}_2\) is removed from the tube by distilling it into a solution that contains an excess of KI, producing \(\text{I}_3^-\). The \(\text{I}_3^-\) is titrated with a standard solution of \(\text{Na}_2\text{S}_2\text{O}_3\). In a typical analysis a 4.79-L sample of air is sampled as described here, requiring 7.17 mL of 0.00329 M \(\text{Na}_2\text{S}_2\text{O}_3\) to reach the end point. If the air has a density of \(1.23 \times 10^{-3}\) g/mL, determine the parts per million CO in the air. 54. The level of dissolved oxygen in a water sample is determined by the Winkler method. In a typical analysis a 100.0-mL sample is made basic and treated with a solution of \(\text{MnSO}_4\), resulting in the formation of \(\text{MnO}_2\). An excess of KI is added and the solution is acidified, resulting in the formation of \(\text{Mn}^{2+}\) and \(\text{I}_3^-\). The liberated \(\text{I}_3^-\) is titrated with a solution of 0.00870 M \(\text{Na}_2\text{S}_2\text{O}_3\), requiring 8.90 mL to reach the starch indicator end point. Calculate the concentration of dissolved oxygen as parts per million \(\text{O}_2\). 55. Calculate or sketch the titration curve for the titration of 50.0 mL of 0.0250 M KI with 0.0500 M \(\text{AgNO}_3\). Prepare separate titration curves using pAg and pI on the y-axis. 56. Calculate or sketch the titration curve for the titration of a 25.0-mL mixture of 0.0500 M KI and 0.0500 M KSCN using 0.0500 M \(\text{AgNO}_3\) as the titrant. 57. The analysis for \(\text{Cl}^-\) using the Volhard method requires a back titration. A known amount of \(\text{AgNO}_3\) is added, precipitating AgCl. The unreacted \(\text{Ag}^+\) is determined by back titrating with KSCN.
There is a complication, however, because AgCl is more soluble than AgSCN. (a) Why do the relative solubilities of AgCl and AgSCN lead to a titration error? (b) Is the resulting titration error a positive or a negative determinate error? (c) How might you modify the procedure to eliminate this source of determinate error? (d) Is this source of determinate error of concern when using the Volhard method to determine \(\text{Br}^-\)? 58. Vončina and co-workers suggest that a precipitation titration can be monitored by measuring pH as a function of the volume of titrant if the titrant is a weak base [Vončina, D. B.; Dobčnik, D.; Gomišček, S., 147–153]. For example, when titrating \(\text{Pb}^{2+}\) with \(\text{K}_2\text{CrO}_4\) the solution that contains the analyte initially is acidified to a pH of 3.50 using \(\text{HNO}_3\). Before the equivalence point the concentration of \(\text{CrO}_4^{2-}\) is controlled by the solubility product of \(\text{PbCrO}_4\). After the equivalence point the concentration of \(\text{CrO}_4^{2-}\) is determined by the amount of excess titrant. Considering the reactions that control the concentration of \(\text{CrO}_4^{2-}\), sketch the expected titration curve of pH versus volume of titrant. 59. A 0.5131-g sample that contains KBr is dissolved in 50 mL of distilled water. Titrating with 0.04614 M \(\text{AgNO}_3\) requires 25.13 mL to reach the Mohr end point. A blank titration requires 0.65 mL to reach the same end point. Report the %w/w KBr in the sample. 60. A 0.1093-g sample of impure \(\text{Na}_2\text{CO}_3\) is analyzed by the Volhard method. After adding 50.00 mL of 0.06911 M \(\text{AgNO}_3\), the sample is back titrated with 0.05781 M KSCN, requiring 27.36 mL to reach the end point. Report the purity of the \(\text{Na}_2\text{CO}_3\) sample. 61. A 0.1036-g sample that contains only \(\text{BaCl}_2\) and NaCl is dissolved in 50 mL of distilled water. Titrating with 0.07916 M \(\text{AgNO}_3\) requires 19.46 mL to reach the Fajans end point. Report the %w/w \(\text{BaCl}_2\) in the sample.
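For the redox titration curves in problems 43 and 55–58, the calculation before, at, and after the equivalence point uses different equilibria. A minimal sketch for problem 43(a), the 1:1 titration of \(\text{V}^{2+}\) with \(\text{Ce}^{4+}\): the formal potentials below are assumed textbook values, not taken from this chapter, and the equivalence-point expression applies only to a symmetric one-electron pair.

```python
import math

# Sketch of the titration curve for problem 43(a): V2+ titrated with Ce4+.
# The formal potentials are assumed values (V3+/V2+ and Ce4+/Ce3+ in acid),
# not constants supplied by this problem set.
E_V = -0.255   # V, V3+/V2+ (assumed)
E_Ce = 1.72    # V, Ce4+/Ce3+ (assumed, 1 M HClO4)

def curve_potential(f):
    """Solution potential at fraction titrated f (f > 0)."""
    if f < 1:
        # Excess analyte: the V3+/V2+ couple fixes the potential,
        # with [V3+]/[V2+] = f/(1 - f).
        return E_V + 0.05916 * math.log10(f / (1 - f))
    if f == 1:
        # Equivalence point for a symmetric 1:1, one-electron titration.
        return (E_V + E_Ce) / 2
    # Excess titrant: the Ce4+/Ce3+ couple, with [Ce4+]/[Ce3+] = f - 1.
    return E_Ce + 0.05916 * math.log10(f - 1)

for f in (0.10, 0.50, 0.90, 1.00, 1.10, 2.00):
    print(f"f = {f:.2f}  E = {curve_potential(f):+.3f} V")
```

At f = 0.5 the potential equals the analyte couple's formal potential, and at f = 2 it equals the titrant couple's, which is a quick sanity check on any sketched curve.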
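Problems 49–54 all reduce to the same mole-ratio bookkeeping. A sketch using the numbers from problem 49, with the 5:1 Fe²⁺-to-\(\text{MnO}_4^-\) stoichiometry from problem 43(c); the molar masses are standard values I supply here, not given in the text, and this is one worked route rather than the book's official solution.

```python
# Sketch of the stoichiometric calculation behind problems 49-54, using
# problem 49's data. Molar masses are standard values (assumed, not from
# the text).
M_Fe = 55.845      # g/mol
M_Fe2O3 = 159.69   # g/mol

m_sample = 0.4185        # g of meteorite
V_KMnO4 = 41.27e-3       # L of titrant
C_KMnO4 = 0.02500        # mol/L

# 5 Fe2+ + MnO4- + 8 H+ -> 5 Fe3+ + Mn2+ + 4 H2O:
# each mole of MnO4- consumes five moles of Fe2+.
mol_MnO4 = C_KMnO4 * V_KMnO4
mol_Fe = 5 * mol_MnO4

pct_Fe = 100 * mol_Fe * M_Fe / m_sample
pct_Fe2O3 = 100 * (mol_Fe / 2) * M_Fe2O3 / m_sample   # 2 Fe per Fe2O3

print(f"{pct_Fe:.2f}% w/w Fe, {pct_Fe2O3:.2f}% w/w Fe2O3")
```

The same pattern handles problems 50–54 once the correct mole ratio between titrant and analyte is written down.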
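The back titrations in problems 51, 52, 57, and 60 all subtract the back-titrant's moles from the total reagent added. A sketch with problem 60's numbers, assuming the carbonate precipitates as \(\text{Ag}_2\text{CO}_3\) (two \(\text{Ag}^+\) per \(\text{Na}_2\text{CO}_3\)); the molar mass is a standard value, not from the text.

```python
# Sketch of a Volhard back-titration calculation using problem 60's data.
# Assumes Ag2CO3 stoichiometry: 2 mol Ag+ per mol Na2CO3.
M_Na2CO3 = 105.99   # g/mol (assumed standard value)

m_sample = 0.1093                  # g of impure Na2CO3
mol_Ag_total = 0.05000 * 0.06911   # mol AgNO3 added
mol_SCN = 0.02736 * 0.05781        # mol KSCN used in the back titration

# KSCN titrates only the unreacted Ag+; the difference reacted with carbonate.
mol_Ag_reacted = mol_Ag_total - mol_SCN
mol_Na2CO3 = mol_Ag_reacted / 2

purity = 100 * mol_Na2CO3 * M_Na2CO3 / m_sample
print(f"{purity:.2f}% w/w Na2CO3")
```

Problem 57's determinate error enters precisely here: if KSCN also strips Ag⁺ from the AgCl precipitate, `mol_SCN` is too large and the computed analyte amount too small.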