20.3: Gene and Protein Colinearity and Triplet Codons
Serious efforts to understand how proteins are encoded began after Watson and Crick used the experimental evidence of Maurice Wilkins and Rosalind Franklin (among others) to determine the structure of DNA. Most hypotheses about the genetic code assumed that DNA (i.e., genes) and polypeptides were colinear.
A. Colinearity
For genes and proteins, colinearity just means that the length of a DNA sequence in a gene is proportional to the length of the polypeptide encoded by the gene. The gene mapping experiments in E. coli already discussed certainly supported this hypothesis.
The concept of colinearity is illustrated below.
If the genetic code is colinear with the polypeptides it encodes, then a one-base codon obviously does not work, because such a code would only account for four amino acids. A two-base genetic code also doesn’t work, because it could only account for 16 (4²) of the twenty amino acids found in proteins. However, three-nucleotide codons could code for a maximum of 4³ = 64 amino acids, more than enough to encode the 20 amino acids. And of course, a 4-base code also works; it satisfies the expectation that genes and proteins are colinear, with the ‘advantage’ that there would be 256 possible codons to choose from (i.e., 4⁴ possibilities).
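This codon arithmetic is easy to check for yourself. Here is a minimal Python sketch (the names are ours, not from the original text) that computes the coding capacity of 1- to 4-base codons:

```python
# Coding capacity of an n-base codon: 4**n combinations of the
# four bases (A, C, G, U) must cover the 20 amino acids.
AMINO_ACIDS = 20

for n in range(1, 5):
    combos = 4 ** n
    verdict = "enough" if combos >= AMINO_ACIDS else "not enough"
    print(f"{n}-base codons: {combos:3d} combinations ({verdict} for 20 amino acids)")

# 1-base codons:   4 combinations (not enough for 20 amino acids)
# 2-base codons:  16 combinations (not enough for 20 amino acids)
# 3-base codons:  64 combinations (enough for 20 amino acids)
# 4-base codons: 256 combinations (enough for 20 amino acids)
```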
B. How is the Genetic Code 'Read' to Account for All of an Organism's Genes?
George Gamow (a Russian physicist working at George Washington University) was the first to propose triplet codons to encode the twenty amino acids, the simplest hypothesis to account for the colinearity of gene and protein. One concern raised was whether there is enough DNA in an organism’s genome to fit all the codons needed to make all of its proteins. Assuming genomes did not have a lot of extra DNA lying around, how might genetic information be compressed into short DNA sequences in a way that is consistent with the colinearity of gene and polypeptide? One idea assumed 20 meaningful 3-base codons (one for each amino acid) and 44 meaningless ones, and that the meaningful codons in a gene (i.e., an mRNA) would be read and translated in an overlapping manner.
A code where codons overlap by one base is shown below.
You can figure out how much more compressed a gene could get with codons that overlapped by two bases. However, as attractive as an overlapping codon hypothesis was in achieving genomic economies, it sank of its own weight almost as soon as it was floated! If you look carefully at the example above, you can see that each succeeding codon would have to start with a specific base. A look back at the table of 64 triplet codons quickly shows that only amino acids whose codons begin with C (at most 16 of the 64 triplets) could follow the first amino acid in the illustration. Yet, based on amino acid sequences accumulating in the literature, virtually any amino acid can follow any other in a polypeptide. Therefore, overlapping genetic codes are untenable; the genetic code must be non-overlapping!
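The objection to overlapping codes can be made concrete with a short computation. In an overlapping code, the next codon is forced to begin with the final base(s) of the current one, which caps how many codons (and therefore amino acids) can follow. A sketch, using a toy enumeration of all 64 triplets:

```python
from itertools import product

ALL_CODONS = ["".join(p) for p in product("ACGU", repeat=3)]  # all 64 triplets

def successors(codon, overlap):
    """Codons that may follow `codon` if consecutive codons share `overlap` bases."""
    prefix = codon[3 - overlap:]  # the next codon must begin with these bases
    return [c for c in ALL_CODONS if c.startswith(prefix)]

print(len(successors("AUG", overlap=0)))  # 64: a non-overlapping code is unconstrained
print(len(successors("AUG", overlap=1)))  # 16: the next codon must start with 'G'
print(len(successors("AUG", overlap=2)))  #  4: the next codon must start with 'UG'
```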
Sydney Brenner and Francis Crick performed elegant experiments that directly demonstrated the non-overlapping genetic code. They showed that bacteria with a single base deletion in the coding region of a gene failed to make the expected protein; likewise for bacteria with a two-base deletion in the gene. On the other hand, bacteria containing a mutant version of the gene in which three bases were deleted were able to make the protein, although the protein made was slightly less active than that of bacteria whose genes carried no deletions.
The next issue was whether there were only 20 meaningful codons and 44 meaningless ones. If only 20 triplets actually encoded amino acids, how would the translation machinery recognize the correct 20 codons to translate? What would prevent the translational machinery from ‘reading the wrong’ triplets, i.e., reading an mRNA out of phase? If, for example, the translation machinery began reading an mRNA from the second or third base of a codon, it would likely encounter a meaningless 3-base sequence in short order.
One speculation was that the code was punctuated. That is, perhaps there were the chemical equivalent of commas between the meaningful triplets. The commas would, of course, be additional nucleotides. In such a punctuated code, the translation machinery would recognize the ‘commas’ and would not translate any meaningless 3-base triplet, avoiding out-of-phase translation attempts. Of course, a code with nucleotide ‘commas’ would increase the amount of DNA needed to specify a polypeptide by a third!
Then Crick proposed the Commaless Genetic Code. He divided the 64 triplets into 20 meaningful codons that encoded the amino acids and 44 meaningless ones that did not, such that when the 20 meaningful codons were placed in any order, any triplet read in overlap would be among the 44 meaningless codons. In fact, he could arrange several different sets of 20 and 44 triplets with this property! Crick had cleverly shown how the triplets could be read in the correct sequence without nucleotide ‘commas’.
202 Speculations About a Triplet Code
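Crick's construction can also be stated as a simple test: a set of codons is comma-free if reading across the junction of any two codons from the set, out of frame, never yields another codon in the set. A minimal checker (a sketch of the idea, not Crick's actual procedure):

```python
def is_comma_free(codons):
    """True if no out-of-frame reading across any two adjacent codons
    from the set produces a codon that is also in the set."""
    s = set(codons)
    for a in s:
        for b in s:
            junction = a + b                       # six bases, e.g. 'AUGCCA'
            if junction[1:4] in s or junction[2:5] in s:
                return False
    return True

print(is_comma_free({"AAA"}))         # False: 'AAAAAA' read out of frame is 'AAA'
print(is_comma_free({"AUG", "CCA"}))  # True for this small set
```

The largest comma-free sets of triplets turn out to contain exactly 20 codons, the number of amino acids, which is what made the hypothesis so seductive.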
As we know now, the genetic code is indeed ‘commaless’… but not in the sense that Crick had envisioned. What’s more, thanks to the experiments described next, we know that ribosomes read the correct codons in the right order because they know exactly where to start!
C. Breaking the Genetic Code
When the genetic code was actually broken, it was found that 61 of the codons specify amino acids and, therefore, that the code is degenerate. Breaking the code began when Marshall Nirenberg and Heinrich J. Matthaei decoded the first triplet. They fractionated E. coli and identified which fractions had to be added back together in order to get polypeptide synthesis in a test tube (in vitro translation).
The cell fractionation is summarized below.
Check out the original work in the classic paper by Nirenberg MW and Matthaei JH [(1961) The dependence of cell-free protein synthesis in E. coli upon naturally occurring or synthetic polyribonucleotides. Proc. Natl. Acad. Sci. USA 47:1588-1602]. The various cell fractions isolated by this protocol were added back together along with amino acids (one of which was radioactive) and ATP as an energy source. After a short incubation, Nirenberg and his coworkers looked for the presence of high molecular weight radioactive proteins as evidence of cell-free protein synthesis.
They found that all four final sub-fractions (1-4 above) must be added together to make radioactive proteins in the test tube. One of the essential cell fractions consisted of RNA that had been gently extracted from ribosomes (fraction 2 in the illustration). Reasoning that this RNA might be mRNA, they substituted a synthetic poly(U) preparation for this fraction in their cell-free protein-synthesizing mix, expecting poly(U) to encode a polypeptide made of a single, repeated amino acid.
They set up 20 reaction tubes, each with a different radioactively labeled amino acid, and the system made only poly-phenylalanine. The experiment is illustrated below.
So, the triplet codon UUU means phenylalanine. Other polynucleotides were synthesized by G. Khorana, and in quick succession, poly(A) and poly(C) were shown to make poly-lysine and poly-proline in this experimental protocol. Thus AAA and CCC must encode lysine and proline, respectively. With a bit more difficulty and ingenuity, poly-di- and tri-nucleotides were also used in the cell-free system to decipher several additional codons.
203 Deciphering the First Codon
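The logic of these homopolymer experiments amounts to a codon-table lookup. A sketch with a deliberately tiny table containing only the codons deciphered above:

```python
# Tiny codon table: only the codons deciphered by these experiments.
CODON_TABLE = {"UUU": "Phe", "AAA": "Lys", "CCC": "Pro"}

def translate(mrna):
    """Read an mRNA in frame, three bases at a time, from its first base."""
    return [CODON_TABLE.get(mrna[i:i + 3], "???")
            for i in range(0, len(mrna) - 2, 3)]

print(translate("U" * 12))  # ['Phe', 'Phe', 'Phe', 'Phe'] -> poly-phenylalanine
print(translate("A" * 9))   # ['Lys', 'Lys', 'Lys']        -> poly-lysine
```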
M. W. Nirenberg, H. G. Khorana and R. W. Holley shared the 1968 Nobel Prize in Physiology or Medicine for their contributions to our understanding of protein synthesis. Deciphering the rest of the genetic code was based on Crick’s realization that, chemically, amino acids have no attraction for either DNA or RNA (or triplets thereof). Instead, he predicted the existence of an adaptor molecule that would contain nucleic acid and amino acid information on the same molecule. Today we recognize this molecule as tRNA, the genetic decoding device.
Nirenberg and Philip Leder designed the experiment that pretty much broke the rest of the genetic code. They did this by adding individual amino acids to separate test tubes containing tRNAs, in effect causing the synthesis of specific aminoacyl-tRNAs.
They then mixed their amino acid-bound tRNAs with isolated ribosomes and synthetic triplets. Since they had already shown that synthetic three-nucleotide fragments would bind to ribosomes, they hypothesized that triplet-bound ribosomes would in turn, bind appropriate amino acid-bound tRNAs. The experiment is shown below.
Various combinations of triplets, ribosomes, and aminoacyl-tRNAs were placed over a filter. Nirenberg and Leder knew that aminoacyl-tRNAs alone passed through the filter and that ribosomes did not. They predicted, then, that triplets would associate with the ribosomes, and further, that this complex would bind the tRNA carrying the amino acid encoded by the bound triplet. This 3-part complex would also be retained by the filter, allowing identification of the amino acid retained on the filter, and therefore of the triplet code-word that had enabled binding of the amino acid to the ribosome.
204 Deciphering all 64 Triplet Codons
After the code was largely deciphered, Robert Holley sequenced a yeast tRNA and, from regions of internal complementarity, predicted the folded structure of the tRNA. This first successful sequencing of a nucleic acid was possible because the tRNA was short and contained several modified bases that facilitated the sequencing chemistry. Holley found the amino acid alanine at one end of the tRNA, and roughly in the middle of the tRNA sequence he found an anticodon for an alanine codon. Holley predicted that this (and other) tRNAs would fold into a stem-loop, or cloverleaf, structure with a central anticodon loop. The illustration below shows this structure for a phenylalanine tRNA, along with subsequent computer-generated structures (below right) showing the now familiar “L”-shaped molecule with an amino acid attachment site at the 3’-end at the top of the molecule and the anticodon loop at the other, bottom ‘end’.
205 tRNA Structure and Base Modifications
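Because codon-anticodon recognition is antiparallel base pairing, an anticodon can be computed as the reverse complement of its codon. A sketch in RNA notation (wobble pairing and Holley's modified bases are ignored here):

```python
RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon(codon):
    """Anticodon, read 5'->3', as the reverse complement of the codon."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(codon))

print(anticodon("GCU"))  # 'AGC': an anticodon for an alanine codon
print(anticodon("UUU"))  # 'AAA': pairs with the phenylalanine codon UUU
```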
After a brief overview of translation, we’ll break translation down into its 3 steps and see how aminoacyl-tRNAs function in the initiation and elongation steps of translation, as well as the special role of an initiator tRNA.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/20%3A_The_Genetic_Code/20.03%3A_Gene_and_Protein_Colinearity_and_Triplet_Codons",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "20.3: Gene and Protein Colinearity and Triplet Codons",
"author": "Gerald Bergtrom"
} |
20.4: Translation
A. Overview of Translation (Synthesizing Proteins)
Like any polymerization in a cell, translation occurs in three steps: initiation brings a ribosome, mRNA and an initiator tRNA together to form an initiation complex. Elongation is the successive addition of amino acids to a growing polypeptide. Termination is signaled by sequences (one of the stop codons) in the mRNA and by protein termination factors that interrupt elongation and release a finished polypeptide. The events of translation occur at specific A, P and E sites on the ribosome (see drawing below).
B. Translation - First Steps
1. Making Aminoacyl-tRNAs
Translation is perhaps the most energy-intensive job a cell must do, beginning with the attachment of amino acids to their tRNAs. The basic aminoacylation reaction is the same for all amino acids. A specific aminoacyl-tRNA synthetase attaches (charges) each tRNA with an appropriate amino acid. Charging tRNAs requires ATP and proceeds in three steps (shown below).
In the first step, ATP and an appropriate amino acid bind to the aminoacyl-tRNA synthetase. ATP is hydrolyzed, releasing pyrophosphate (PPi) and leaving an enzyme-AMP-amino acid complex. Next, the amino acid is transferred to the enzyme, releasing the AMP. Finally, the tRNA binds to the enzyme, the amino acid is transferred to the tRNA, and the intact enzyme is regenerated and released. The charged tRNA is ready for use in translation.
Several studies had already established that polypeptides are synthesized from their amino (N-) terminal end to their carboxyl (C-) terminal end. When it became possible to determine the amino acid sequences of polypeptides, it turned out that around 40% of E. coli proteins had an N-terminal methionine, suggesting that all proteins began with a methionine. It also turned out that, even though there is only one codon for methionine, two different tRNAs for methionine could be isolated. One of the tRNAs was bound to a methionine modified by formylation, called formylmethionine-tRNAf (fmet-tRNAf for short). The other, methionine-tRNAmet (met-tRNAmet for short), was charged with an unmodified methionine.
Methionine and formylated methionine are shown below.
tRNAmet and tRNAf each have an anticodon to AUG, the only codon for methionine, but they have different base sequences encoded by different tRNA genes. tRNAmet is used to insert methionine in the middle of a polypeptide. tRNAf is the initiator tRNA, and is only used to start new polypeptides with formylmethionine. In prokaryotes, the methionine on met-tRNAf is formylated at its amino group to make fmet-tRNAf. The formylating enzyme that does this does not recognize the methionine on met-tRNAmet.
In E. coli, a formylase enzyme removes the formyl group from all N-terminal formyl methionines at some point after translation has begun. As we will note below, the methionine itself (and sometimes more N-terminal amino acids) is also removed from about 60% of E. coli polypeptides. Eukaryotes have inherited both the initiator tRNAf and the tRNAmet, using only met-tRNAf during initiation. However, the methionine on the eukaryotic initiator met-tRNAf is never formylated in the first place. What’s more, methionine is absent from virtually all mature eukaryotic polypeptides.
Early in evolution, the need for an initiator tRNA must have ensured a correct starting point for translation on an mRNA and therefore growth of a polypeptide from one end to the other, that is, from its N- to its C-terminus. At one time, formylation of the N-terminal methionine may have served to block accidental addition of amino acids or other modifications at the N-terminus of a polypeptide. Today, formylation seems to be a kind of molecular appendix in bacteria. Since then, evolution (in eukaryotes at least) has selected other features to replace actual formylation as the protector of the N-terminus of polypeptides.
2. Initiation
Now that we have charged tRNAs, we can look more closely at the three steps of translation. Understanding translation initiation began with a molecular dissection of the components of E. coli cells required for cell-free (in vitro) protein synthesis, including cell fractionation, protein purification, and reconstitution experiments. Initiation starts when the Shine-Dalgarno sequence forms hydrogen bonds with a complementary sequence in the 16S rRNA of the 30S ribosomal subunit. The Shine-Dalgarno sequence is a short nucleotide sequence in the 5’ untranslated region (5’-UTR) of the messenger RNA, just upstream of the initiator AUG codon. This step requires the participation of initiation factors IF1 and IF3. In this event, IF1 and IF3 as well as the mRNA are bound to the 30S ribosomal subunit (below).
Demonstration of the binding of an mRNA to a ribosomal subunit required isolation and separation of the 30S ribosomal subunit, an RNA fraction of the cell, and purification of initiation factor proteins from the bacterial cells. This was followed by reconstitution (adding the separated fractions back together) in the correct order to show that mRNA would only bind to the 30S subunit in the presence of the two specific initiation factor proteins.
206 Translation Initiation: mRNA Associates with 30S Ribosomal Subunit
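The Shine-Dalgarno logic lends itself to a simple motif search: look for a purine-rich sequence, complementary to the 3' end of the 16S rRNA, a short distance upstream of an AUG. A rough sketch; the AGGAGG consensus and the 4-10 base spacing window used here are simplifications:

```python
import re

SD_MOTIF = re.compile("AGGAGG")  # simplified Shine-Dalgarno consensus

def start_codon_candidates(mrna, min_gap=4, max_gap=10):
    """AUG codons preceded, within a short window, by a Shine-Dalgarno-like motif."""
    hits = []
    for m in re.finditer("AUG", mrna):
        window = mrna[max(0, m.start() - max_gap - 6):max(0, m.start() - min_gap)]
        if SD_MOTIF.search(window):
            hits.append(m.start())
    return hits

mrna = "ACUAGGAGGUUAACCAUGGCUAAAUAA"
print(start_codon_candidates(mrna))  # [15]: the AUG downstream of the SD motif
```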
Next, with the help of GTP and another initiation factor (IF2), the initiator fmet-tRNAf recognizes and binds to the AUG start codon found in all mRNAs. Some call the resulting structure (shown below) the Initiation Complex, which includes the 30S ribosomal subunit, IFs 1, 2 and 3, and the fmet-tRNAf.
207 Initiation Complex Formation
In the last step of initiation, the large ribosomal subunit binds to this complex. IFs 1, 2 and 3 dissociate from the ribosome, and the initiator fmet-tRNAf ends up in the P site of the ribosome.
Some prefer to call the structure formed at this point the initiation complex (below).
208 Adding the Large Ribosomal Subunit
Initiation can happen multiple times on a single mRNA, forming the polyribosome, or polysome described in Chapter 1. Each of the complexes formed above will engage in the elongation of a polypeptide described next.
3. Elongation
Elongation is a sequence of protein factor-mediated condensation reactions and ribosome movements along an mRNA. As you will see, polypeptide elongation requires a considerable input of free energy.
a) Elongation 1
The first step in elongation is the entry of the next aminoacyl-tRNA (aa2-tRNAaa2) into the ribosome, which requires the free energy of GTP hydrolysis. The energy is supplied by hydrolysis of the GTP bound to elongation factor 2 (EF2-GTP). The aa2-tRNAaa2 enters the ribosome through a codon-anticodon interaction at the A site, as shown below.
The GDP dissociates from EF2 as the anticodon of aa2-tRNAaa2 binds the codon in the A site. To keep elongation moving along, elongation factor EF3 rephosphorylates the GDP to GTP, which can then re-associate with free EF2.
b) Elongation 2
Peptidyl transferase , a ribozyme component of the ribosome itself, links the incoming amino acid to a growing chain in a condensation reaction.
In this reaction, the fmet is transferred from the initiator tRNAf in the P site to aa2-tRNAaa2 in the A site, forming a peptide linkage with aa2.
210 Elongation: A Ribozyme Catalyzes Peptide Linkage Formation
c) Elongation 3
Translocase catalyzes GTP hydrolysis as the ribosome moves (translocates) along the mRNA. After translocation, the next mRNA codon shows up in the A site of the ribosome, and the first tRNA (in this example, tRNAf) ends up in the E site of the ribosome.
The movement of the ribosome along the mRNA is illustrated below.
The tRNAf, no longer attached to an amino acid, will exit the E site as the next (3rd) aa-tRNA enters the empty A site, based on a specific codon-anticodon interaction (assisted by elongation factors and powered by GTP hydrolysis), beginning another cycle of elongation. Note that in each cycle of elongation, an ATP is consumed to attach an amino acid to its tRNA, and two GTPs are hydrolyzed in the cycle itself. In other words, at a cost of three NTPs per amino acid added, protein synthesis is the most expensive polymer synthesis reaction in cells!
211 Elongation: Translocase Moves Ribosomes along mRNA
212 Adding the Third Amino Acid
213 Big Translation Energy Costs
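The energy bookkeeping described above is easy to tabulate: one ATP to charge each tRNA plus two GTPs per elongation cycle. A sketch of that accounting (GTPs spent during initiation and termination are ignored, and the first residue is counted like any other):

```python
def translation_ntp_cost(n_residues):
    """NTP cost per the accounting above: 1 ATP to charge each tRNA,
    plus 2 GTPs per elongation cycle (aa-tRNA entry + translocation),
    i.e. 3 NTPs per amino acid added."""
    atp_for_charging = n_residues
    gtp_for_elongation = 2 * n_residues
    return atp_for_charging + gtp_for_elongation

print(translation_ntp_cost(300))  # 900 NTPs for a ~300-residue polypeptide
```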
As polypeptides elongate, they eventually emerge from a groove in the large ribosomal subunit. As noted, a formylase enzyme in the E. coli cytoplasm removes the formyl group from the exposed initiator fmet of all growing polypeptides. While about 40% of E. coli polypeptides still begin with methionine, specific proteases catalyze the hydrolytic removal of the amino-terminal methionine (and sometimes even more amino acids) from the other 60% of polypeptides. The removal of the formyl group and of one or more N-terminal amino acids from new polypeptides are examples of post-translational processing.
214 The Fates of fMet and Met: Cases of Post-Translational Processing
4. Termination
Translation of an mRNA by a ribosome ends when translocation exposes one of the three stop codons in the A site of the ribosome. Stop codons are normally situated some distance from the 3’ end of an mRNA. The region between a stop codon and the end of the mRNA is called the 3’ untranslated region of the messenger RNA (3’UTR).
Since there is no aminoacyl-tRNA with an anticodon to the stop codons (UAA, UAG or UGA), the ribosome actually stalls and the translation slow-down is just long enough for a protein termination factor to enter the A site. This interaction causes release of the new polypeptide and the disassembly of the ribosomal subunits from the mRNA. The process requires energy from yet another GTP hydrolysis. After dissociation, ribosomal subunits can be reassembled with an mRNA for another round of protein synthesis. Translation termination is illustrated below.
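Termination can be pictured as a loop that reads codons in frame until a stop codon is exposed, leaving the 3'UTR untranslated. A sketch with a toy codon table (real termination uses release factors, not a lookup):

```python
STOP_CODONS = {"UAA", "UAG", "UGA"}
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GCU": "Ala"}  # toy subset

def translate_until_stop(mrna, start=0):
    """Translate in frame from `start` until a stop codon is exposed;
    everything after the stop codon is the untranslated 3'UTR."""
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        codon = mrna[i:i + 3]
        if codon in STOP_CODONS:
            return peptide, mrna[i + 3:]  # finished peptide, 3'UTR
        peptide.append(CODON_TABLE.get(codon, "Xaa"))
    return peptide, ""                    # no in-frame stop found

print(translate_until_stop("AUGUUUGCUUAAGGCACG"))
# (['Met', 'Phe', 'Ala'], 'GGCACG'): 'GGCACG' is the 3'UTR
```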
We have seen some examples of post-translational processing (removal of formyl groups in E. coli, removal of the N-terminal methionine from most polypeptides, etc.). Most proteins, especially in eukaryotes, undergo one or more additional steps of post-translational processing before becoming biologically active. We will see examples in upcoming chapters.
Let’s conclude this chapter with a “we thought we knew everything” moment! A recent study reports that ribosomes can sometimes re-initiate translation in the 3’ UTR of an mRNA, using AUG codons downstream of the normal stop codon. There is evidence that the resulting short polypeptides may be functional!
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/20%3A_The_Genetic_Code/20.04%3A_Translation",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "20.4: Translation",
"author": "Gerald Bergtrom"
} |
21.1: Types of Mutations
Mutations (changes in a gene sequence) can result in mutant alleles that no longer produce the same level or type of active product as the wild-type allele. Any mutant allele can be classified into one of five types: (1) amorph, (2) hypomorph, (3) hypermorph, (4) neomorph, and (5) antimorph.
- Amorph alleles are complete loss-of-function. They make no active product – zero function. The absence of function can be due to a lack of transcription (a gene regulation mutation) or due to the production of a malfunctioning product (a protein coding mutation). These are also sometimes referred to as Null alleles.
- Hypomorph alleles are only a partial loss-of-function. They make an incompletely functioning product. This could occur via reduced transcription or via the production of a product that lacks complete activity. These alleles are sometimes referred to as Leaky mutations, because they provide some function, but not complete function.
Both amorphs and hypomorphs tend to be recessive to wild type because the wild type allele is usually able to supply sufficient product to produce a wild type phenotype (called haplo-sufficiency - see Chapter 6). If the wild type allele is not haplo-sufficient, then the mutant allele will be dominant to the wild type.
While the first two classes involve a loss-of-function, the next two involve a gain-of-function – quantity or quality. Gain-of-function alleles are almost always dominant to the wild type allele.
- Hypermorph alleles produce more of the same, active product. This can occur via increased transcription or by changing the product to make it more efficient/effective at its function.
- Neomorph alleles produce an active product with a new, different function, something that the wild type allele doesn’t do. It can be either new expression (new tissue or time) or a mutation in the product to create a new function (additional substrate or new binding site), not present in the wild type product.
- Antimorph alleles are relatively rare, and have an activity that is dominant and opposite to the wild-type function. These alleles usually have no normal function of their own, and they interfere with the function of the wild type allele. Thus, when an antimorph allele is heterozygous with wild type, the wild type allele’s function is reduced. While at the molecular level there are many ways this can happen, the simplest model to explain the antimorph effect is that the product acts as a dimer (or any multimer) and one mutant subunit poisons the whole complex. Antimorphs are also known as dominant negative mutations.
Identifying Muller’s Morphs - All mutations can be sorted into one of the five morphs based on how they behave when heterozygous with other alleles – deletion alleles (zero function), wild type alleles (normal function), and duplication alleles (double normal function).
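The five classes can be condensed into a schematic decision table keyed on the behavior of the allele's product, as sketched below (a toy summary; real classification uses the heterozygous tests just described):

```python
def classify_allele(activity, new_function=False, interferes=False):
    """Schematic Muller's-morph classifier: `activity` is product activity
    relative to wild type (1.0 = normal); flags mark new or interfering activity."""
    if interferes:
        return "antimorph (dominant negative)"
    if new_function:
        return "neomorph"
    if activity == 0:
        return "amorph (null)"
    if activity < 1:
        return "hypomorph (leaky)"
    if activity > 1:
        return "hypermorph"
    return "wild type"

print(classify_allele(0.0))                   # amorph (null)
print(classify_allele(0.3))                   # hypomorph (leaky)
print(classify_allele(1.0, interferes=True))  # antimorph (dominant negative)
```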
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/21%3A_Alleles_and_Gene_Function/21.01%3A_Types_of_Mutations",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "21.1: Types of Mutations",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
22.1: Classification of Cancers
Cancers can be classified based on the tissues in which they originate. Sarcomas are cancers that originate in mesoderm tissues, such as bone or muscle, and cancers arising in glandular tissues (e.g. breast, prostate) are classified as adenocarcinomas. Carcinomas originate in epithelial cells (both inside the body and on its surface) and are the most common types of cancer (~85%). Each of these classifications may be further subdivided. For example, squamous cell carcinoma (SCC), basal cell carcinoma (BCC), and melanoma are all types of skin cancers, originating respectively in the squamous cells, basal cells, or melanocytes of the skin.
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/22%3A_Cancer_Genetics/22.01%3A__Classification_of_Cancers",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "22.1: Classification of Cancers",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
22.2: Cancer Cell Biology
Cancer is a progressive disease that usually begins with increased frequency of cell division (Figure \(\PageIndex{2}\)). Under the microscope, this may be detectable as increased cellular and nuclear size, and an increased proportion of cells undergoing mitosis. As the disease progresses, cells typically lose their normal shape and tissue organization. Tissues with increased cell division and abnormal tissue organization exhibit dysplasia. Eventually a tumor develops, which can grow rapidly and expand into adjacent tissues.
As cellular damage accumulates and additional control mechanisms are lost, some cells may break free of the primary tumor, pass into the blood or lymph system, and be transported to another organ, where they develop into new tumors (Figure \(\PageIndex{3}\)). The early detection of tumors is important so that they can be treated or removed before the onset of metastasis, but note that not all tumors will lead to cancer. Tumors that do not metastasize are classified as benign, and are not usually considered life threatening. In contrast, malignant tumors become invasive, and ultimately result in cancer.
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/22%3A_Cancer_Genetics/22.02%3A_Cancer_Cell_Biology",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "22.2: Cancer Cell Biology",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
22.3: Hallmarks of Cancer
Researchers have identified molecular and cellular traits that characterize most cancers: six original hallmarks of cancer (Hanahan and Weinberg, 2000), later expanded to ten (Hanahan 2011), are summarized in Table \(\PageIndex{1}\). In this chapter, we will focus on the first two hallmarks, namely growth signal autonomy and insensitivity to anti-growth signals.
Table \(\PageIndex{1}\): Ten Hallmarks of Cancer (Hanahan and Weinberg, 2000; Hanahan 2011)

1. Growth signal autonomy: Cancer cells can divide without the external signals normally required to stimulate division.
2. Insensitivity to growth inhibitory signals: Cancer cells are unaffected by external signals that inhibit division of normal cells.
3. Evasion of apoptosis: When excessive DNA damage and other abnormalities are detected, apoptosis (a type of programmed cell death) is induced in normal cells, but not in cancer cells.
4. Reproductive potential not limited by telomeres: Each division of a normal cell reduces the length of its telomeres. Normal cells arrest further division once telomeres reach a certain length. Cancer cells avoid this arrest and/or maintain the length of their telomeres.
5. Sustained angiogenesis: Most cancers require the growth of new blood vessels into the tumor. Normal angiogenesis is regulated by both inhibitory and stimulatory signals not required in cancer cells.
6. Tissue invasion and metastasis: Normal cells generally do not migrate (except in embryo development). Cancer cells invade other tissues, including vital organs.
7. Deregulated metabolic pathways: Cancer cells use an abnormal metabolism to satisfy a high demand for energy and nutrients.
8. Evasion of the immune system: Cancer cells are able to evade the immune system.
9. Chromosomal instability: Severe chromosomal abnormalities are found in most cancers.
10. Inflammation: Local chronic inflammation is associated with many types of cancer.
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/22%3A_Cancer_Genetics/22.03%3A_Hallmarks_of_Cancer",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "22.3: Hallmarks of Cancer",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
22.4: Mutagens and Carcinogens
A carcinogen is any agent that directly increases the incidence of cancer. Most, but not all, carcinogens are mutagens. Carcinogens that do not directly damage DNA include substances that accelerate cell division, thereby leaving less opportunity for cells to repair induced mutations or errors in replication. Carcinogens that act as mutagens may be biological, physical, or chemical in nature, although the term is most often used in relation to chemical substances.
Human Papilloma Virus (HPV, Figure \(\PageIndex{4}\)) is an example of a biological carcinogen. Almost all cervical cancers begin with infection by HPV, which contains genes that disrupt the normal pattern of cell division within the host cell. Any gene that leads to an uncontrolled increase in cell division is called an oncogene. The HPV E6 and E7 genes are considered oncogenes because they inhibit the host cell’s natural tumor suppressing proteins (including p53, described below). The product of the E5 gene mimics the host’s own signals for cell division, and these and other viral gene products may contribute to dysplasia, which is detected during a Pap smear (Figure \(\PageIndex{5}\)). Detection of abnormal cell morphology in a Pap smear is not necessarily evidence of cancer. It must be emphasized again that cells have many regulatory mechanisms to limit division and growth, and for cancer to occur, each of these mechanisms must be disrupted. This is one reason why only a minority of individuals with HPV infections ultimately develop cancer. Although most HPV-related cancers are cervical, HPV infection can also lead to cancer in other tissues, in both women and men.
Figure \(\PageIndex{4}\): Electron micrograph of HPV. (Wikipedia-Unknown-PD)
Figure \(\PageIndex{5}\): Dysplastic (left) and normal (right) cells from a Pap smear. (Flickr-Ed Uthman-CC:AS)
Radiation is a well-known physical carcinogen because of its potential to induce DNA damage within the body. The most damaging type of radiation is ionizing, meaning waves or particles with sufficient energy to strip electrons from the molecules they encounter, including DNA or molecules that can subsequently react with DNA. Ionizing radiation, which includes x-rays, gamma rays, and some wavelengths of ultraviolet rays, is distinct from the non-ionizing radiation of microwave ovens, cell phones, and radios. As with other carcinogens, mutation of multiple, independent genes that normally regulate cell division is required before cancer develops.
Chemical carcinogens (Table \(\PageIndex{2}\)) can be either natural or synthetic compounds that, based on animal feeding trials or epidemiological (i.e. human population) studies, increase the incidence of cancer. The definition of a chemical as a carcinogen is problematic for several reasons. Some chemicals become carcinogenic only after they are metabolized into another compound in the body; not all species or individuals may metabolize chemicals in the same way. Also, the carcinogenic properties of a compound are usually dependent on its dose. It can be difficult to define a relevant dose for both lab animals and humans. Nevertheless, when a correlation between cancer incidence and chemical exposure is observed, it is usually possible to find ways to reduce exposure to that chemical.
Table \(\PageIndex{2}\): Some classes of chemical carcinogens (Pecorino 2008)

1. PAHs (polycyclic aromatic hydrocarbons): e.g. benzo[a]pyrene and several other components of the smoke of cigarettes, wood, and fossil fuels
2. Aromatic amines: e.g. formed in food when meat (including fish, poultry) is cooked at high temperature
3. Nitrosamines and nitrosamides: e.g. found in tobacco and in some smoked meat and fish
4. Azo dyes: e.g. various dyes and pigments used in textiles, leather, and paints
5. Carbamates: e.g. ethyl carbamate (urethane) found in some distilled beverages and fermented foods
6. Halogenated compounds: e.g. pentachlorophenol used in some wood preservatives and pesticides
7. Inorganic compounds: e.g. asbestos; may induce chronic inflammation and reactive oxygen species
8. Miscellaneous compounds: e.g. alkylating agents, phenolics
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/22%3A_Cancer_Genetics/22.04%3A_Mutagens_and_Carcinogens",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "22.4: Mutagens and Carcinogens",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
22.5: Oncogenes
The control of cell division involves many different genes. Some of these genes act as signaling molecules to activate normal progression through the cell cycle. One of the pre-requisites for cancer occurs when one or more of these activators of cell division become mutated.
The mutation may involve a change in the coding sequence of the protein, so that it is more active than normal, or a change in the regulation of its expression, so that it is produced at higher levels than normal, or persists in the cell longer than normal. Genes that are part of the normal regulation of cell division, but which after mutation contribute to cancer, are called proto-oncogenes. Once a proto-oncogene has been abnormally activated by mutation, it is called an oncogene. More than 100 genes have been defined as proto-oncogenes. These include genes at almost every step of the signaling pathways that normally induce cells to divide, including growth factors, receptors, signal transducers, and transcription factors.
ras is an example of a proto-oncogene. ras acts as a switch within signal transduction pathways, including the regulation of cell division. When a receptor protein receives a signal for cell division, the receptor activates ras, which in turn activates other signaling components, ultimately leading to activation of genes involved in cell division. Certain mutations of the ras sequence cause it to take a permanently active form, which can lead to constitutive activation of the cell cycle. This mutation is dominant, as are most oncogenes. An example of the role of ras in relaying a signal for cell division in the EGF pathway is shown in Figure \(\PageIndex{7}\).
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/22%3A_Cancer_Genetics/22.05%3A_Oncogenes",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "22.5: Oncogenes",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
22.6: Tumor Suppressor Genes
More than 30 genes are classified as tumor suppressors. The normal functions of these genes include repair of DNA, induction of programmed cell death (apoptosis), and prevention of abnormal cell division. In contrast to proto-oncogenes, in tumor suppressors it is loss-of-function mutations that contribute to the progression of cancer. This means that tumor suppressor mutations tend to be recessive, and thus both alleles must be mutated in order to allow abnormal growth to proceed. It is perhaps not surprising that mutations in tumor suppressor genes are more likely than oncogenes to be inherited. An example is the tumor suppressor gene BRCA1, which is involved in DNA repair. Inherited mutations in BRCA1 increase a woman’s lifetime risk of breast cancer by up to seven times, although these heritable mutations account for only about 10% of breast cancers. Thus, sporadic rather than inherited mutations are the most common sources of both oncogenes and disabled tumor suppressor genes.
An important tumor suppressor gene encodes a transcription factor named p53. Other proteins in the cell sense DNA damage or abnormalities in the cell cycle and activate p53 through several mechanisms, including phosphorylation (attachment of phosphate to specific sites on the protein) and transport into the nucleus. In its active form, p53 induces the transcription of genes with several different types of tumor suppressing functions, including DNA repair, cell cycle arrest, and apoptosis. Over 50% of human tumors contain mutations in p53. People who inherit only one functional copy of p53 have a greatly increased incidence of early onset cancer. However, as with the other cancer-related genes we have discussed, most mutations in p53 are sporadic rather than inherited. Mutation of p53, through formation of pyrimidine dimers in the gene following exposure to UV light, has been causally linked to squamous cell and basal cell carcinomas (but not melanomas, highlighting the variety and complexity of mechanisms that can cause cancer).
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/22%3A_Cancer_Genetics/22.06%3A_Tumor_Suppressor_Genes",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "22.6: Tumor Suppressor Genes",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
22.7: The “Poster Boy” of Genetic Research Leading to a Cancer Treatment – Gleevec™ (Imatinib)
Chronic myelogenous leukemia (CML)
Chronic myelogenous leukemia (CML) is a cancer of white blood cells in which mutant myeloid cells proliferate uncontrollably through three stages (chronic, accelerated, and blast crisis), eventually leading to death. Cytogenetics showed that the myeloid cells of CML patients usually also carry a consistent chromosome translocation (the mutant event) between the long arms of chromosomes 9 and 22, t(9;22)(q34;q11), also known as the Philadelphia chromosome (Ph+). This translocation involves breaks in two genes, c-abl and bcr, on chromosomes 9 and 22, respectively. Fusion at the translocation breaks results in a chimeric gene, called bcr-abl, that contains exons 1 and/or 2 from bcr (this varies from patient to patient) and exons 2-11 from abl, and it produces a chimeric protein (BCR-ABL, or p185bcr-abl) that is transcribed like bcr and contains abl enzyme sequences. This chimeric protein has a tyrosine kinase activity, from the abl gene sequences, that is unique to the CML mutant cell. The consistent, unregulated expression of this gene and its kinase product activates a variety of intracellular signaling pathways, promoting the uncontrolled proliferative and survival properties of CML cells (the cancer). Thus the BCR-ABL tyrosine kinase enzyme exists only in cancer cells (and not in healthy cells), and a drug that inhibits this activity could be used to target and prevent the uncontrolled growth of the cancerous CML cells.
Inhibiting the Bcr-Abl tyrosine kinase activity
Knowing that the kinase activity was the key to treatment, pharmaceutical companies screened chemical libraries for potential kinase-inhibitory compounds. Initial screens found only low-potency inhibitors, but the relationship between structure and activity suggested other compounds, which were optimized to inhibit the BCR-ABL tyrosine kinase activity. The lead compound was STI571, now called Gleevec™ or imatinib (Figure \(\PageIndex{9}\)). This drug was shown to inhibit the BCR-ABL tyrosine kinase activity and to inhibit CML cell proliferation in vitro and in vivo. Gleevec™ works as a targeted therapy: only cells with the BCR-ABL kinase activity (the cancer cells) are targeted and killed through the drug's action. In this regard, Gleevec™ was one of the first cancer therapies to show the potential of this type of targeted action. It depended upon the genetic identification of the cause and protein target, and is often cited as a paradigm for genetic research in cancer therapeutics.
Figure \(\PageIndex{9}\): Biochemical structure of Gleevec™ (imatinib). (Wikipedia-Fuse809-CC:AN)
Caution
This is a simplified presentation of the CML/cancer targeting by the drug Gleevec™; there are many more details than could be presented here. It is presented as a model of finding a drug for each type of cancer, rather than one single “magic bullet” that kills all cancers. Remember, there are always complexities in this type of research-to-treatment process, such as patient genetic and environmental variation that leads to differences in drug metabolism, uptake, and binding. Also, changes in drug dose, mutation of the bcr-abl gene, and other events can affect the effectiveness of the treatment and the relapse rate. Biological systems are extremely complex and difficult to modulate in the specific, targeted manner necessary to treat cancer ideally.
Remember, the drug Gleevec™ is not a cure, but only a treatment. It prevents the uncontrolled proliferation of the CML cells, but doesn’t kill them directly. The arrested cells will die eventually, but there is always a small pool of CML cells that will proliferate if the drug is discontinued. While sustained use of this expensive drug is beneficial to the pharmaceutical companies, it is certainly not the ideal situation for the patient.
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/22%3A_Cancer_Genetics/22.07%3A__The_Poster_Boy_of_Genetic_Research_Leading_to_a_Cancer_Treatment__Gleevec_(Imatinib)",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "22.7: The “Poster Boy” of Genetic Research Leading to a Cancer Treatment – Gleevec™ (Imatinib)",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
23.1: ‘Omics Technologies
The complete set of DNA within an organism is called its genome. Genomics is therefore the large-scale description, using techniques of molecular biology, of many genes or even whole genomes at once. This type of research is facilitated by technologies that increase throughput (i.e. rate of analysis) and decrease cost. The -omics suffix has been used to indicate high-throughput analysis of many types of molecules, including transcripts (transcriptomics), proteins (proteomics), and the products of enzymatic reactions, or metabolites (metabolomics; Figure \(\PageIndex{1}\)). Interpretation of the large data sets generated by -omics research depends on a combination of computational, biological, and statistical knowledge provided by experts in bioinformatics. Attempts to combine information from different types of -omics studies are sometimes called systems biology.
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/23%3A_Genomics/23.01%3A__Omics__Technologies",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "23.1: ‘Omics Technologies",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
23.2: DNA Sequencing
DNA sequencing determines the order of nucleotide bases within a given fragment of DNA. This information can be used to infer the RNA or protein sequence encoded by the gene, from which further inferences may be made about the gene’s function and its relationship to other genes and gene products. DNA sequence information is also useful in studying the regulation of gene expression. If DNA sequencing is applied to the study of many genes, or even a whole genome, it is considered an example of genomics.
Dideoxy sequencing
Recall that DNA polymerases incorporate nucleotides (dNTPs) into a growing strand of DNA, based on the sequence of a template strand. DNA polymerases add a new base only to the 3’-OH group of an existing strand of DNA; this is why primers are required in natural DNA synthesis and in techniques such as PCR. Most of the currently used DNA sequencing techniques rely on the random incorporation of modified nucleotides called terminators. Examples of terminators are the dideoxy nucleotides (ddNTPs), which lack a 3’-OH group and therefore cannot serve as an attachment site for the addition of new bases to a growing strand of DNA (Figure \(\PageIndex{1}\)). After a ddNTP is incorporated into a strand of DNA, no further elongation can occur. Terminators are labeled with one of four fluorescent dyes, each specific for one of the four nucleotide bases.
To sequence a DNA fragment, you need many copies of that fragment (Figure \(\PageIndex{2}\)). Unlike PCR, DNA sequencing does not amplify the target sequence and only one primer is used. This primer is hybridized to the denatured template DNA, and determines where on the template strand the sequencing reaction will begin. A mixture of dNTPs, fluorescently labeled terminators, and DNA polymerase is added to a tube containing the primer-template hybrid. The DNA polymerase will then synthesize a new strand of DNA until a fluorescently labeled nucleotide is incorporated, at which point extension is terminated. Because the reaction contains millions of template molecules, a corresponding number of shorter molecules is synthesized, each ending in a fluorescent label that corresponds to the last base incorporated.
The newly synthesized strands can be denatured from the template, and then separated electrophoretically based on their length (Figure \(\PageIndex{3}\)). Since each band differs in length by one nucleotide, and the identity of that nucleotide is known from its fluorescence, the DNA sequence can be read simply from the order of the colors in successive bands. In practice, the maximum length of sequence that can be read from a single sequencing reaction is about 700 bp.
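The read-out logic of dideoxy sequencing can be simulated: every position in the new strand yields a fragment ending in a labeled terminator, and sorting fragments by length recovers the sequence. A deliberately simplified sketch (a real reaction terminates stochastically; here every possible prefix is generated):

```python
def sanger_fragments(new_strand):
    """One terminated fragment per position: (length, labeled last base)."""
    return [(i + 1, new_strand[i]) for i in range(len(new_strand))]

def read_gel(fragments):
    """Electrophoresis orders fragments by length; reading each fragment's
    terminal label in that order recovers the sequence."""
    return "".join(base for length, base in sorted(fragments))

print(read_gel(sanger_fragments("GATTACA")))  # 'GATTACA'
```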
A particularly sensitive electrophoresis method used in the analysis of DNA sequencing reactions is called capillary electrophoresis (Figure \(\PageIndex{6}\)). In this method, a current pulls the sequencing products through a gel-like matrix that is encased in a fine tube of clear plastic. As in conventional electrophoresis, the smallest fragments move through the capillary the fastest. As they pass through a point near the end of the capillary, the fluorescent intensity of each dye is read. This produces a graph called a chromatogram. The sequence is determined by identifying the highest peak (i.e. the dye with the most intense fluorescent signal) at each position.
Next-generation sequencing
Advances in technology over the past two decades have increased the speed and quality of sequencing, while decreasing the cost. This has become especially true with the most recently developed methods, called next-generation sequencing. Not all of these new methods rely on terminators, but one that does is the method used in instruments sold by a company called Illumina. Illumina sequencers use a special variant of PCR called bridge PCR to make many thousands of copies of a short (45bp) template fragment. Each of these short template fragments is attached in a cluster at a small spot on a reaction surface. Millions of other clusters, each made from a different template fragment, are located at other positions on the reaction surface. DNA synthesis at each template strand then proceeds using reversible dye-labeled terminators. Synthesis is therefore terminated (temporarily) after the incorporation of each nucleotide. Thus, after the first nucleotide is incorporated in each strand, a camera records the color of fluorescence emitted from each cluster. The terminators are then modified, a second nucleotide is incorporated in each strand, and again the reaction surface is photographed. This cycle is repeated a total of 45 times. Because millions of 45 bp templates are sequenced in parallel in a single process, Illumina sequencing is very efficient compared to other sequencing techniques. However, the short length of the templates currently limits the application of this technology.
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/23%3A_Genomics/23.02%3A_DNA_Sequencing",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "23.2: DNA Sequencing",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
23.3: Whole Genome Sequencing
The need for assembly
Given that the length of a single, individual sequencing read is somewhere between 45bp and 700bp, we are faced with a problem in determining the sequence of longer fragments, such as the chromosomes of an entire human genome (3 × 10⁹ bp). Obviously, we need to break the genome into smaller fragments. There are two different strategies for doing this:
- clone-by-clone sequencing, which relies on the creation of a physical map first then sequencing, and
- whole genome shotgun sequencing, which sequences first and does not require a physical map.
Physical mapping
A physical map is a representation of a genome, comprised of cloned fragments of DNA. The map is therefore made from physical entities (pieces of DNA) rather than abstract concepts such as the linkage frequencies and genes that make up a genetic map (Figure \(\PageIndex{1}\)). It is usually possible to correlate genetic and physical maps, for example by identifying the clone that contains a particular molecular marker. The connection between physical and genetic maps allows the genes underlying particular mutations to be identified through a process called map-based cloning.
To create a physical map, large fragments of the genome are cloned into plasmid vectors, or into larger vectors called bacterial artificial chromosomes (BACs). BACs can contain approximately 100kb fragments. The set of BACs produced in a cloning reaction will be redundant, meaning that different clones will contain DNA from the same part of the genome. Because of this redundancy, it is useful to select the minimum set of clones that represents the entire genome, and to order these clones with respect to the sequence of the original chromosome. Note that this is all to be done without knowing the complete sequence of each BAC. Making a physical map may therefore rely on techniques related to Southern blotting: DNA from the ends of one BAC is used as a probe to find clones that contain the same sequence. These clones are then assumed to overlap each other. A set of overlapping clones is called a contig.
Clone-by-clone sequencing
Physical mapping of cloned sequences was once considered a pre-requisite for genome sequencing. The process would begin by breaking the genome into BAC-sized pieces, arranging these BACs into a map, then breaking each BAC up into a series of smaller clones, which were usually then also mapped. Eventually, a minimum set of smaller clones would be identified, each of which was small enough to be sequenced (Figure \(\PageIndex{8}\)). Because the order of clones relative to the complete chromosome was known prior to sequencing, the resulting sequence information could be easily assembled into one complete chromosome at the end of the project. Clone-by-clone sequencing therefore minimizes the number of sequencing reactions that must be performed, and makes sequence assembly straightforward and reliable. However, a drawback of this strategy is the tedious process of building a physical map prior to any sequencing.
Whole genome shotgun sequencing
This strategy breaks the genome into fragments that are small enough to be sequenced, then reassembles them simply by looking for overlaps in the sequence of each fragment. It avoids the laborious process of making a physical map (Figure \(\PageIndex{2}\)). However, it requires many more sequencing reactions than the clone-by-clone method, because, in the shotgun approach, there is no way to avoid sequencing redundant fragments. There is also a question of the feasibility of assembling complete chromosomes based simply on the sequence overlaps of many small fragments. This is particularly a problem when the size of the fragments is smaller than the length of a repetitive region of DNA. Nevertheless, this method has now been successfully demonstrated in the nearly complete sequencing of many large genomes (rice, human, and many others). It is the current standard methodology.
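The essential operation in shotgun assembly, finding sequence overlaps between fragments and merging them, can be sketched in a few lines. The toy reads below are invented, only exact-match overlaps are considered, and real assemblers must additionally cope with sequencing errors, repeats, and millions of reads.

```python
# Sketch: shotgun assembly as greedy merging of reads on exact sequence
# overlaps. The reads are invented and errors/repeats are ignored; real
# assemblers handle both, at the scale of millions of reads.

def overlap_len(a, b, min_len=3):
    """Length of the longest suffix of a that is also a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def assemble(reads):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        n, a, b = max((overlap_len(x, y), x, y)
                      for x in reads for y in reads if x != y)
        if n == 0:          # no overlaps left: reads stay separate contigs
            break
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[n:])   # join, keeping the overlap region once
    return reads

print(assemble(["ATTCGGA", "GGATTAC", "TTACGTC"]))
# ['ATTCGGATTACGTC'] -- one contig reconstructed from three reads
```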
However, shotgun assemblies are rarely able to complete entire genomes. The human genome, for example, relied on a combination of shotgun sequence and physical mapping to produce contiguous sequence for the length of each arm of each chromosome. Note that because of the highly repetitive nature of centromeric and telomeric DNA, sequencing projects rarely include these heterochromatic, gene-poor regions.
Genome analysis
An assembled genome is a string of millions of A's, C's, G's, and T's. Which of these represent nucleotides that encode proteins, and which represent other features of genes and their regulatory elements? The process of genome annotation relies on computers to define features such as start and stop codons, introns, exons, and splice sites. However, few of the predictions made by these programs are entirely accurate, and most must be verified experimentally for any gene of particular importance or interest.
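As one concrete example of what such programs look for, the sketch below scans a single reading frame of one strand for open reading frames (an ATG start codon followed by an in-frame stop). This is only a minimal illustration with an invented sequence; real gene-prediction pipelines examine all six reading frames and use statistical models to handle introns, exons, and splice sites.

```python
# Sketch: the simplest kind of feature an annotation program looks for --
# an open reading frame (ATG ... in-frame stop) on one strand, in one
# reading frame. Real pipelines scan all six frames and model splicing.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, frame=0):
    """Yield (start, end) coordinates of ATG...stop ORFs in one frame."""
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    start = None
    for i, codon in enumerate(codons):
        if codon == "ATG" and start is None:
            start = i                      # remember the first start codon
        elif codon in STOP_CODONS and start is not None:
            yield (frame + 3 * start, frame + 3 * (i + 1))  # include stop
            start = None

dna = "ATGGCTTGACCCATGAAATAA"              # invented example sequence
print(list(find_orfs(dna)))                # [(0, 9), (12, 21)]
```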
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/23%3A_Genomics/23.03%3A_Whole_Genome_Sequencing",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "23.3: Whole Genome Sequencing",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
24.1: Functional Genomics – Determining Function(s)
Having identified putative genes within a genome sequence, how do we determine their function? Techniques of functional genomics provide an experimental approach to this question. One widely used technique in functional genomics is called microarray analysis (Figure \(\PageIndex{1}\)). Microarrays can measure the abundance of mRNA for hundreds or thousands of genes at once. The abundance of mRNA of a particular gene is usually correlated with the activity of that gene. For example, genes that are involved in neuronal development likely produce more mRNA in brain tissue than in heart tissue. We can therefore learn about the relationship between particular genes and particular processes by comparing transcript abundance under different conditions. This can identify tissue-specific expression (e.g., the nerve/heart example above), as well as differences in temporal expression (development) or responses to external agents (e.g., disease, hormones, drugs).
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/24%3A_Functional_Genomics/24.01%3A_Functional_Genomics__Determining_Function(s)",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "24.1: Functional Genomics – Determining Function(s)",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
24.2: Genomic Approaches – The DNA Microarray
Traditionally, when cellular levels of a protein were known to change in response to a chemical effector, molecular studies focused on control of the transcription of its gene. These studies often revealed that the control of gene expression was at the level of transcription, turning a gene on or off through interactions of transcription factors with DNA. However, protein levels are also controlled post-transcriptionally, by regulating the rate of mRNA translation or degradation. Studies of transcriptional and post-transcriptional regulation mechanisms are seminal to our understanding of how the correct protein is made in the right amounts at the right time.
We may have suspected, but now know that control of gene expression and cellular responses can be more complex than increasing or decreasing the transcription of a single gene or translation of a single protein. Whole genome sequences and new techniques make possible the study of the expression of virtually all genes in a cell at the same time, a field of investigation called genomics . Genomic studies reveal networks of regulated genes that must be understood to more fully explain the developmental and physiological changes in an organism. When you can ‘see’ all of the RNAs being transcribed from active genes in a cell, you are looking at a cell’s transcriptome . By analogy to genomics, transcriptomics defines studies of ‘webs’ of interactive RNAs. Again, by analogy to genomics and transcriptomics, the broad study of active and inactive proteins in cells or tissues, how they are modified (processed) before use and how they interact is called proteomics . The technologies applied to proteomic studies include protein microarrays, immunochemical techniques and others uniquely suited to protein analysis (click Proteomics Techniques-Wikipedia for more information). Protein Microarrays are increasingly being used to identify protein-protein interactions, as well as the different states of proteins under different cellular conditions. Read even more about these exciting developments and their impact on basic and clinical research at Protein Microarrays from ncbi.
Finally, think about this: creating a proteomic library analogous to a genomic library would seem a daunting prospect. But efforts are underway. Check out A stab at mapping the Human Proteome for original research leading to the sampling of a tissue-specific human proteome, and click Strategies for Approaching the Proteome for more general information.

Let's look at some uses of DNA microarrays. This technology involves 'spotting' DNA (e.g., cloned DNA from a genomic or cDNA library, PCR products, oligonucleotides…) on a glass slide, or chip. In the language of microarray analysis, the slides are the probes. Spotting a chip is a robotic process. Because the DNA spots are microscopic, a cell-specific transcriptome (cDNA library) can fit on a single chip. A small genome microarray might also fit on a single chip, while larger genomes might need several slides.

A primary use of DNA microarrays is transcriptional profiling. A genomic microarray can probe a mixture of fluorescently tagged target cDNAs made from mRNAs, in order to identify many (if not all) of the genes expressed in the cells at a given moment (i.e., its transcriptome). cDNA microarray probes can also probe quantitative differences in gene expression in cells or tissues during normal differentiation or in response to chemical signals. They are also valuable for genotyping (i.e., characterizing the genes in an organism). Microarrays are so sensitive that they can even distinguish between two genes or regions of DNA that differ by a single nucleotide. Click Single Nucleotide Polymorphisms, or SNPs, to learn more. In a typical microarray image, each colored spot (red, yellow, green) is a different fluorescently tagged molecule hybridizing to target sequences on the microarray. In the fluorescence microscope, the spots fluoresce different colors in response to UV light.
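To make this concrete, here is a hypothetical sketch of how the two-channel spot intensities behind those colors can be compared. The gene names and intensity values are invented for illustration; real analyses would first apply background correction and normalization.

```python
# Sketch: calling expression differences from two-channel microarray
# spot intensities. All names and values below are invented; real data
# would be background-corrected and normalized first.
import math

# (red, green) = experimental vs. reference intensities, arbitrary units
spots = {
    "geneA": (8400.0, 1050.0),   # more mRNA in the experimental sample
    "geneB": (950.0, 1010.0),    # roughly unchanged
    "geneC": (130.0, 2100.0),    # repressed in the experimental sample
}

for gene, (red, green) in spots.items():
    ratio = math.log2(red / green)           # log2 fold-change
    call = "up" if ratio > 1 else "down" if ratio < -1 else "unchanged"
    print(f"{gene}: log2(R/G) = {ratio:+.2f} -> {call}")
```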
With quantitative microarray methods, the brightness (intensity) of the signal from each probe can be measured. In this way, we can compare the relative amounts of cDNA (and thus, different RNAs) in the transcriptome of different tissues or resulting from different tissue treatments. A table of different applications of microarrays (adapted from Wikipedia) is shown below.
| Application of Technology | Synopsis |
|---|---|
| Gene Expression Profiling | In a transcription (mRNA or gene expression) profiling experiment the expression levels of thousands of genes are simultaneously monitored to study the effects of certain treatments, diseases, and developmental stages on gene expression. |
| Comparative genomic hybridization | Assessing genome content in different cells or closely related organisms, where one organism’s genome is the probe for a target genome from a different species. |
| GeneID | Small microarrays to check IDs of organisms in food and feed for genetically modified organisms (GMOs), mycoplasmas in cell culture, or pathogens for disease detection. These detection protocols often combine PCR and microarray technology. |
| ChIP: chromatin immunoprecipitation | DNA sequences bound to a particular protein can be isolated by immunoprecipitating the protein. The fragments can be hybridized to a microarray (such as a tiling array) allowing the determination of protein binding site occupancy throughout the genome. |
| DamID | Analogously to ChIP, genomic regions bound by a protein of interest can be isolated and used to probe a microarray to determine binding site occupancy. Unlike ChIP, DamID does not require antibodies but makes use of adenine methylation near the protein's binding sites to selectively amplify those regions, introduced by expressing minute amounts of protein of interest fused to bacterial DNA adenine methyltransferase. |
| SNP detection | Identifying single nucleotide polymorphism among alleles within or between populations. Some microarray applications make use of SNP detection, including Genotyping, forensic analysis, measuring predisposition to disease, identifying drug-candidates, evaluating germline mutations in individuals or somatic mutations in cancers, assessing loss of heterozygosity, or genetic linkage analysis. |
| Alternative splicing detection | An exon junction array design uses probes specific to the expected or potential splice sites of predicted exons for a gene. It is of intermediate density, or coverage, to a typical gene expression array (with 1-3 probes per gene) and a genomic tiling array (with hundreds or thousands of probes per gene). It is used to assay the expression of alternative splice forms of a gene. Exon arrays have a different design, employing probes designed to detect each individual exon for known or predicted genes, and can be used for detecting different splicing isoforms. |
| Tiling array | Genome tiling arrays consist of overlapping probes designed to densely represent a genomic region of interest, sometimes as large as an entire human chromosome. The purpose is to empirically detect expression of transcripts or alternatively spliced forms which may not have been previously known or predicted. |
The Power of Microarrays. https://youtu.be/88rzbpclscM
If you like world records, check out the salamander with the largest genome, 10X bigger than our own: The HUGE Axolotl Genome. What do they do with all that DNA? And can our current technologies figure it out? For the original report, click here.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/24%3A_Functional_Genomics/24.02%3A_Genomic_Approaches-_The_DNA_Microarray",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "24.2: Genomic Approaches- The DNA Microarray",
"author": "Gerald Bergtrom"
} |
25.1: Linkage
As we learned in Chapter 6, Mendel reported that the pairs of loci he observed behaved independently of each other; for example, the segregation of seed color alleles was independent from the segregation of alleles for seed shape. This observation was the basis for his Second Law (Independent Assortment), and contributed greatly to our understanding of heredity. However, further research showed that Mendel's Second Law did not apply to every pair of genes that could be studied. In fact, we now know that alleles of loci that are located close together on the same chromosome tend to be inherited together. This phenomenon is called linkage, and is a major exception to Mendel's Second Law of Independent Assortment. Researchers use linkage to determine the location of genes along chromosomes in a process called genetic mapping. The concept of gene linkage is important to the natural processes of heredity and evolution.
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/25%3A_Genetic_Linkage/25.01%3A__Linkage",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "25.1: Linkage",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
25.2: Recombination
The term "recombination" is used in several different contexts in genetics. In reference to heredity, recombination is defined as any process that results in gametes with combinations of alleles that were not present in the gametes of a previous generation (see Figure \(\PageIndex{2}\)). Interchromosomal recombination occurs through independent assortment of alleles whose loci are on different chromosomes (Chapter 6). Intrachromosomal recombination occurs through crossovers between loci on the same chromosome (as described below). It is important to remember that in both of these cases, recombination is a process that occurs during meiosis (mitotic recombination may also occur in some species, but it is relatively rare). If meiosis results in recombination, the products are said to have a recombinant genotype. On the other hand, if no recombination occurs during meiosis, the products have their original combinations and are said to have a non-recombinant, or parental, genotype. Recombination is important because it contributes to the genetic variation that may be observed between individuals within a population and acted upon by selection to produce evolution.
As an example of interchromosomal recombination, consider loci on two different chromosomes as shown in Figure \(\PageIndex{2}\). We know that if these loci are on different chromosomes, there are no physical connections between them, so they are unlinked and will segregate independently as did Mendel's traits. The segregation depends on the relative orientation of each pair of chromosomes at metaphase. Since the orientation is random and independent of other chromosomes, each of the arrangements (and their meiotic products) is equally possible for two unlinked loci as shown in Figure \(\PageIndex{2}\). More precisely, there is a 50% probability for recombinant genotypes, and a 50% probability for parental genotypes within the gametes produced by a meiocyte with unlinked loci. Indeed, if we examined all of the gametes that could be produced by this individual (which are the products of multiple independent meioses), we would note that approximately 50% of the gametes would be recombinant, and 50% would be parental. Recombination frequency (RF) is simply the number of recombinant gametes, divided by the total number of gametes. A frequency of approximately 50% recombination is therefore a defining characteristic of unlinked loci. Thus the greatest recombinant frequency expected is ~50%.
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/25%3A_Genetic_Linkage/25.02%3A__Recombination",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "25.2: Recombination",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
25.3: Linkage Reduces Recombination Frequency
Having considered unlinked loci above, let us turn to the opposite situation, in which two loci are so close together on a chromosome that the parental combinations of alleles always segregate together (Figure \(\PageIndex{3}\)). Because the loci are so close, no crossover events occur between them during meiosis; the alleles at the two loci remain physically attached on the same chromatid and so always segregate together into the same gamete. In this case, no recombinants will be present following meiosis, and the recombination frequency will be 0%. This is complete (or absolute) linkage and is rare, as the loci must be so close together that crossovers are never detected between them.
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/25%3A_Genetic_Linkage/25.03%3A__Linkage_Reduces_Recombination_Frequency",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "25.3: Linkage Reduces Recombination Frequency",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
25.4: Crossovers Allow Recombination of Linked Loci
Thus far, we have only considered situations with either no linkage (50% recombination) or complete linkage (0% recombination). It is also possible to obtain recombination frequencies between 0% and 50%, which is a situation we call incomplete (or partial ) linkage . Incomplete linkage occurs when two loci are located on the same chromosome but the loci are far enough apart so that crossovers occur between them during some, but not all, meioses. Genes that are on the same chromosome are said to be syntenic regardless of whether they are completely or incompletely linked. All linked genes are syntenic, but not all syntenic genes are linked, as we will learn later.
Crossovers occur during prophase I of meiosis, when pairs of homologous chromosomes have aligned with each other in a process called synapsis. Crossing over begins with the breakage of DNA of a pair of non-sister chromatids. The breaks occur at corresponding positions on two non-sister chromatids, and the ends of the non-sister chromatids are then connected to each other, resulting in a reciprocal exchange of double-stranded DNA (Figure \(\PageIndex{4}\)). Generally, every pair of chromosomes has at least one crossover (and often more) during meiosis (Figure \(\PageIndex{5}\)).
Figure \(\PageIndex{5}\): A crossover between two linked loci can generate recombinant genotypes (AB, ab), from the chromatids involved in the crossover. Remember that multiple, independent meioses occur in each organism, so this particular pattern of recombination will not be observed among all the meioses from this individual. (Original-Deyholos-CC:AN)
Because the location of crossovers is essentially random along the chromosome, the greater the distance between two loci, the more likely a crossover will occur between them. Furthermore, loci that are on the same chromosome, but are sufficiently far apart from each other, will on average have multiple crossovers between them and they will behave as though they are completely unlinked. A recombination frequency of 50% is therefore the maximum recombination frequency that can be observed, and is indicative of loci that are either on separate chromosomes, or are located very far apart on the same chromosome.
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/25%3A_Genetic_Linkage/25.04%3A__Crossovers_Allow_Recombination_of_Linked_Loci",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "25.4: Crossovers Allow Recombination of Linked Loci",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
25.5: Inferring Recombination From Genetic Data
In the preceding examples, we had the advantage of knowing the approximate chromosomal positions of each allele involved, before we calculated the recombination frequencies. Knowing this information beforehand made it relatively easy to define the parental and recombinant genotypes, and to calculate recombination frequencies. However, in most experiments, we cannot directly examine the chromosomes, or even the gametes, so we must infer the arrangement of alleles from the phenotypes over two or more generations. Importantly, it is generally not sufficient to know the genotype of individuals in just one generation; for example, given an individual with the genotype AaBb, we do not know from the genotype alone whether the loci are located on the same chromosome, and if so, whether the arrangement of alleles on each chromosome is AB and ab or Ab and aB (Figure \(\PageIndex{6}\)). The top cell has the two dominant alleles together and the two recessive alleles together and is said to have the genes in the coupling (or cis) configuration. The alternative, shown in the cell below, is that the genes are in the repulsion (or trans) configuration.
Fortunately for geneticists, the arrangement of alleles can sometimes be inferred if the genotypes of a previous generation are known. For example, if the parents of AaBb had genotypes AABB and aabb respectively, then the parental gametes that fused to produce AaBb would have been genotype AB and genotype ab. Therefore, prior to meiosis in the dihybrid, the arrangement of alleles would likewise be AB and ab (Figure \(\PageIndex{7}\)). Conversely, if the parents of AaBb had genotypes aaBB and AAbb, then the arrangement of alleles on the chromosomes of the dihybrid would be aB and Ab. Thus, the genotype of the previous generation can determine which of an individual's gametes are considered recombinant, and which are considered parental.
Let us now consider a complete experiment in which our objective is to measure recombination frequency (Figure \(\PageIndex{8}\)). We need at least two alleles for each of two genes, and we must know which combinations of alleles were present in the parental gametes. The simplest way to do this is to start with pure-breeding lines that have contrasting alleles at two loci. For example, we could cross short-tailed, brown mice (aaBB) with long-tailed, white mice (AAbb). Based on the genotypes of the parents, we know that the parental gametes will be aB or Ab (but not ab or AB), and all of the progeny will be dihybrids, AaBb. We do not know at this point whether the two loci are on different pairs of homologous chromosomes, or whether they are on the same chromosome, and if so, how close together they are.
The recombination events that may be detected will occur during meiosis in the dihybrid individual. If the loci are completely or partially linked, then prior to meiosis, alleles aB will be located on one chromosome, and alleles Ab will be on the other chromosome (based on our knowledge of the genotypes of the gametes that produced the dihybrid). Thus, recombinant gametes produced by the dihybrid will have the genotypes ab or AB , and non-recombinant (i.e. parental) gametes will have the genotypes aB or Ab .
How do we determine the genotype of the gametes produced by the dihybrid individual? The most practical method is to use a testcross (Figure \(\PageIndex{8}\)), in other words, to mate AaBb to an individual that has only recessive alleles at both loci (aabb). This will give a different phenotype in the F2 generation for each of the four possible combinations of alleles in the gametes of the dihybrid. We can then infer unambiguously the genotype of the gametes produced by the dihybrid individual, and therefore calculate the recombination frequency between these two loci. For example, if only two phenotypic classes were observed in the F2 (i.e., short tails and brown fur (aaBb), and long tails and white fur (Aabb)), we would know that the only gametes produced following meiosis of the dihybrid individual were of the parental type: aB and Ab, and the recombination frequency would therefore be 0%. Alternatively, we may observe multiple classes of phenotypes in the F2, in ratios such as those shown in Table \(\PageIndex{1}\):
| tail phenotype | fur phenotype | number of progeny | gamete from dihybrid | genotype of F2 from test cross | (P)arental or (R)ecombinant |
|---|---|---|---|---|---|
| short | brown | 48 | aB | aaBb | P |
| long | white | 42 | Ab | Aabb | P |
| short | white | 13 | ab | aabb | R |
| long | brown | 17 | AB | AaBb | R |
Given the data in Table \(\PageIndex{1}\), the calculation of recombination frequency is straightforward:
\[\begin{align} \textrm{recombination frequency} &= \mathrm{\dfrac{number\: of\: recombinant\: gametes}{total\: number\: of\: gametes\: scored}}\\ \textrm{R.F.} &= \dfrac{13+17}{48+42+13+17}\\ &=25\% \end{align}\] | libretexts | 2025-03-17T22:27:39.357810 | 2021-01-03T20:13:18 | {
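The same calculation is easy to automate. The sketch below recomputes the recombination frequency directly from the counts in Table \(\PageIndex{1}\), given which gamete classes are parental.

```python
# Sketch: recombination frequency from the testcross counts in Table 1.
progeny = {"aB": 48, "Ab": 42, "ab": 13, "AB": 17}   # gamete class -> count
parental = {"aB", "Ab"}        # known from the pure-breeding grandparents

recombinant = sum(n for g, n in progeny.items() if g not in parental)
total = sum(progeny.values())
print(f"RF = {recombinant}/{total} = {recombinant / total:.0%}")
# RF = 30/120 = 25%
```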
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/25%3A_Genetic_Linkage/25.05%3A__Inferring_Recombination_From_Genetic_Data",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "25.5: Inferring Recombination From Genetic Data",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
25.6: Genetic Mapping
Because the frequency of recombination between two loci (up to 50%) is roughly proportional to the chromosomal distance between them, we can use recombination frequencies to produce genetic maps of all the loci along a chromosome and ultimately in the whole genome. The units of genetic distance are called map units (mu) or centimorgans (cM), named in honor of Thomas Hunt Morgan by his student Alfred Sturtevant, who developed the concept. Geneticists routinely convert recombination frequencies into cM: the recombination frequency in percent is approximately the same as the map distance in cM. For example, if two loci have a recombination frequency of 25%, they are said to be ~25 cM apart on a chromosome (Figure \(\PageIndex{9}\)). Note: this approximation works well for small distances (RF < 30%) but progressively fails at longer distances because the RF reaches a maximum at 50%. Some chromosomes are >100 cM long, but loci at their tips show an RF of only 50%. A method for mapping such long chromosomes is shown below.
Note that the map distance of two loci alone does not tell us anything about the orientation of these loci relative to other features, such as centromeres or telomeres, on the chromosome.
Map distances are always calculated for one pair of loci at a time. However, by combining the results of multiple pairwise calculations, a genetic map of many loci on a chromosome can be produced (Figure \(\PageIndex{10}\)). A genetic map shows the map distance, in cM, that separates any two loci, and the position of these loci relative to all other mapped loci. The genetic map distance is roughly proportional to the physical distance, i.e., the amount of DNA between two loci. For example, in Arabidopsis, 1.0 cM corresponds to approximately 150,000 bp and contains approximately 50 genes. The exact number of DNA bases in a cM depends on the organism, and on the particular position in the chromosome; some parts of chromosomes ("crossover hot spots") have higher rates of recombination than others, while other regions have reduced crossing over and often correspond to large regions of heterochromatin.
When a novel gene or locus is identified by mutation or polymorphism, its approximate position on a chromosome can be determined by crossing it with previously mapped genes, and then calculating the recombination frequency. If the novel gene and the previously mapped genes show complete or partial linkage, the recombination frequency will indicate the approximate position of the novel gene within the genetic map. This information is useful in isolating (i.e. cloning) the specific fragment of DNA that encodes the novel gene, through a process called map-based cloning .
Genetic maps are also useful to track genes/alleles in breeding crops and animals, in studying evolutionary relationships between species, and in determining the causes and individual susceptibility of some human diseases.
Genetic maps are useful for showing the order of loci along a chromosome, but the distances are only an approximation. The correlation between recombination frequency and actual chromosomal distance is more accurate for short distances (low RF values) than long distances. Observed recombination frequencies between two relatively distant markers tend to underestimate the actual number of crossovers that occurred. This is because as the distance between loci increases, so does the possibility of having a second (or more) crossovers occur between the loci. This is a problem for geneticists, because with respect to the loci being studied, these double-crossovers produce gametes with the same genotypes as if no recombination events had occurred (Figure \(\PageIndex{11}\)) – they have parental genotypes. Thus a double crossover will appear to be a parental type and not be counted as a recombinant, despite having two (or more) crossovers. Geneticists will sometimes use specific mathematical formulae to adjust large recombination frequencies to account for the possibility of multiple crossovers and thus get a better estimate of the actual distance between two loci.
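One such correction, offered here as an illustrative sketch rather than the only option, is Haldane's mapping function, which assumes crossovers occur independently along the chromosome and converts an observed recombination fraction into an estimated map distance:

```python
# Sketch: Haldane's mapping function, one common correction for multiple
# (undetected) crossovers. It assumes crossovers occur independently
# along the chromosome; other functions (e.g., Kosambi's) also model
# crossover interference.
import math

def haldane_cm(rf):
    """Convert an observed recombination fraction (0 <= rf < 0.5) to cM."""
    return -50.0 * math.log(1.0 - 2.0 * rf)

for rf in (0.01, 0.10, 0.25, 0.40):
    print(f"RF = {rf:.0%} -> {haldane_cm(rf):5.1f} cM")
# Small RFs convert almost 1:1 into cM; large RFs imply much longer
# map distances than the raw percentages suggest.
```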
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/25%3A_Genetic_Linkage/25.06%3A__Genetic_Mapping",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "25.6: Genetic Mapping",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
26.1: Mapping With Three-Point Crosses
A particularly efficient method of mapping three genes at once is the three-point cross , which allows the order and distance between three potentially linked genes to be determined in a single cross experiment (Figure \(\PageIndex{12}\)). This is particularly useful when mapping a new mutation with an unknown location to two previously mapped loci. The basic strategy is the same as for the dihybrid mapping experiment; pure breeding lines with contrasting genotypes are crossed to produce an individual heterozygous at three loci (a trihybrid), which is then testcrossed to determine the recombination frequency between each pair of genes.
One useful feature of the three-point cross is that the order of the loci relative to each other can usually be determined by simple visual inspection of the F2 segregation data. If the genes are linked, there will often be two phenotypic classes that are much rarer than any of the others. In these cases, the rare phenotypic classes are usually those that arose from two crossover events, in which the locus in the middle is flanked by a crossover on either side of it. Thus, among the two rarest recombinant phenotypic classes, the locus whose allele differs from the parental arrangement relative to the other two loci is likely the one in the middle. For example, based on the phenotypes of the pure-breeding parents in the cross shown in Figure \(\PageIndex{12}\), and on the two most frequent classes in Table \(\PageIndex{2}\), the parental gamete genotypes are AbC and aBc (remember that the order of the loci is unknown, and it is not necessarily the alphabetical order in which we write the genotypes). Because we can deduce from the outcome of the testcross (Table \(\PageIndex{2}\)) that the rarest genotypes were abC and ABc, we can conclude that locus A is most likely located between the other two loci, since a recombination event between both A and B and between A and C would be required to generate these gametes. Thus, the order of loci is BAC (which is equivalent to CAB).
| tail phenotype | fur phenotype | whisker phenotype | number of progeny | gamete from trihybrid | genotype of F2 from test cross | loci A,B | loci A,C | loci B,C |
|---|---|---|---|---|---|---|---|---|
| short | brown | long | 5 | aBC | aaBbCc | P | R | R |
| long | white | long | 38 | AbC | AabbCc | P | P | P |
| short | white | long | 1 | abC | aabbCc | R | R | P |
| long | brown | long | 16 | ABC | AaBbCc | R | P | R |
| short | brown | short | 42 | aBc | aaBbcc | P | P | P |
| long | white | short | 5 | Abc | Aabbcc | P | R | R |
| short | white | short | 12 | abc | aabbcc | R | P | R |
| long | brown | short | 1 | ABc | AaBbcc | R | R | P |
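The visual-inspection rule for finding the middle locus can also be expressed as a short program. In the sketch below, genotypes are written as three-letter strings in locus order A, B, C; a double-crossover class should differ from one of the parental classes at exactly one position, and that position is the middle locus.

```python
# Sketch: finding the middle locus from a three-point testcross.
# Genotypes are three-letter strings in locus order A, B, C. A double-
# crossover class matches one parental class at the two outer loci and
# differs only at the middle locus.

parentals  = ["AbC", "aBc"]   # the two most frequent classes (38 and 42)
double_xos = ["abC", "ABc"]   # the two rarest classes (1 and 1)

def middle_locus(parentals, double_xos, loci="ABC"):
    for d in double_xos:
        for p in parentals:
            diffs = [i for i in range(len(loci)) if d[i] != p[i]]
            if len(diffs) == 1:          # differs at exactly one locus
                return loci[diffs[0]]
    return None                          # no consistent middle locus found

print(middle_locus(parentals, double_xos))   # 'A' -> gene order B-A-C
```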
Recombination frequencies may be calculated for each pair of loci in the three-point cross as we did before for one pair of loci in our dihybrid cross (Figure 7.8).
\[\begin{alignat}{2} \textrm{loci A,B R.F.} = &\dfrac{1+16+12+1}{120} &&= 25\%\\ \textrm{loci A,C R.F.} = &\dfrac{1+5+1+5}{120} &&= 10\%\\ \textrm{loci B,C R.F.} = &\dfrac{5+16+12+5}{120} &&= 32\%\\ \textrm{(not corrected for double}\\ \textrm{crossovers)}\hspace{40px} \end{alignat}\]
However, note that in the three-point cross, the sum of the distances between A-B and A-C (25% + 10% = 35%) is greater than the distance calculated directly for B-C (32%) (Figure \(\PageIndex{13}\)). This is because of double crossovers between B and C, which went undetected when we considered only pairwise data for B and C. We can easily account for some of these double crossovers, and include them in calculating the map distance between B and C, as follows. We already deduced that the map order must be BAC (or CAB), based on the genotypes of the two rarest phenotypic classes in Table \(\PageIndex{2}\). However, these double recombinants, ABc and abC, were not counted as recombinants in our calculation of recombination frequency between loci B and C. If we include these double recombinant classes (multiplied by 2, since they each represent two recombination events), the calculation of recombination frequency between B and C is as follows, and the result is now more consistent with the sum of map distances between A-B and A-C.
\[\begin{align} \textrm{loci B,C R.F.} &= \dfrac{5+16+12+5+2(1)+2(1)}{120} = 35\%\\ \textrm{(corrected for double}&\\ \textrm{recombinants)}& \end{align}\]
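For completeness, here is a sketch that recomputes all three pairwise recombination frequencies from the table, and then applies the double-crossover correction to the B-C distance. The parental and double-crossover classes are taken from the counts above.

```python
# Sketch: pairwise recombination frequencies from the three-point
# testcross counts above, plus the double-crossover correction for B-C.
counts = {"aBC": 5, "AbC": 38, "abC": 1, "ABC": 16,
          "aBc": 42, "Abc": 5, "abc": 12, "ABc": 1}
parentals = {"AbC", "aBc"}       # the two most frequent classes
double_xos = {"abC", "ABc"}      # the two rarest classes
total = sum(counts.values())     # 120

def rf(i, j):
    """RF between the loci at string positions i and j (0=A, 1=B, 2=C)."""
    par = {(p[i], p[j]) for p in parentals}
    rec = sum(n for g, n in counts.items() if (g[i], g[j]) not in par)
    return rec / total

print(f"A-B: {rf(0, 1):.0%}  A-C: {rf(0, 2):.0%}  B-C: {rf(1, 2):.0%}")
# A-B: 25%  A-C: 10%  B-C: 32%

# Each double crossover adds two undetected B-C crossover events:
bc_corrected = rf(1, 2) + 2 * sum(counts[g] for g in double_xos) / total
print(f"B-C corrected: {bc_corrected:.0%}")   # 35%
```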
Thus, the three-point cross was useful for:
- determining the order of three loci relative to each other,
- calculating map distances between the loci, and
- detecting some of the double crossover events that would otherwise lead to an underestimation of map distance.
However, it is possible that other double-crossover events remain undetected, for example, double crossovers between loci A and B, or between loci A and C. Geneticists have developed a variety of mathematical procedures to try to correct for things like double crossovers during large-scale mapping experiments.
As more and more genes are mapped, a better genetic map can be constructed. Then, when a new gene is discovered, it can be mapped relative to other genes of known location to determine its location. All that is needed to map a gene is two alleles: a wild-type allele (e.g., A) and a mutant allele (e.g., a).
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/3.0/",
"url": "https://bio.libretexts.org/Courses/Ohio_State_University/Ohio_State_University_SP22%3A_Molecular_Genetics_4606_(Chamberlin)/26%3A_Genetic_Maps/26.01%3A__Mapping_With_Three-Point_Crosses",
"book_url": "https://commons.libretexts.org/book/bio-42776",
"title": "26.1: Mapping With Three-Point Crosses",
"author": "Todd Nickle and Isabelle Barrette-Ng"
} |
3.6: Summary
Understanding diversity, especially in the context of our country's history, is an important part of being an engaged citizen and can help us adapt to a changing world. Diversity goes hand in hand with the concepts of equity and inclusion, which increase the chances of equal opportunity and representation. Sometimes creating inclusive communities upsets the social order with which people are familiar. Change can be difficult, and people are passionate. These passions can disrupt communities and communication with uncivil behavior, or people can "fight fair" and use strategies that allow for the smooth exchange of ideas.
Everyone has a personal identity made up of various aspects and experiences—intersectionality. Some elements of identity place people in a diversity category. Some categories are expansive and well understood; others are new and may face scrutiny. Policies and laws have been put in place to protect underrepresented citizens from discrimination. These standards are constantly being challenged to make sure that they allow for the shifting demographics of the United States and shifting values of its citizens.
Cultural competency, which includes our ability to adapt to diversity, is a valuable skill in our communities and workplaces. The more culturally competent we are, the more we can help safeguard diversity and make equitable and inclusive connections on a global scale.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Folsom_Lake_College/BUS_330%3A_Managing_Diversity_in_the_Workplace_(Buch)/Unit_1%3A_Diversity_Theories_Legislation_and_Cultural_Competence/Chapter_3%3A_Understanding_Civility_and_Cultural_Competence/9.5%3A_Summary",
"book_url": "https://commons.libretexts.org/book/biz-84879",
"title": "3.6: Summary",
"author": ""
} |
3.7: Career Connection
Keisha went to a temp agency to sign up for part-time work. The person in charge there gave her several tests on office skills. She checked Keisha’s typing speed, her ability to handle phone calls, and her writing skills. Keisha also took a grammar test and a test about how to handle disputes in the office. The tester also had Keisha answer questions about whether it was OK to take home office supplies and other appropriate things to do and not to do.
The tester told Keisha that she scored very well on the evaluations, but she never called Keisha back for a job or even an interview. Keisha knows that she presented herself well, but wonders whether she was not called back because she wears her hair in dreadlocks, or because, as she has been told, her name sounds African American.
REFLECTION QUESTIONS
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Folsom_Lake_College/BUS_330%3A_Managing_Diversity_in_the_Workplace_(Buch)/Unit_1%3A_Diversity_Theories_Legislation_and_Cultural_Competence/Chapter_3%3A_Understanding_Civility_and_Cultural_Competence/9.6%3A_Career_Connection",
"book_url": "https://commons.libretexts.org/book/biz-84879",
"title": "3.7: Career Connection",
"author": ""
} |
3.8: Rethinking
Revisit the questions you answered at the beginning of the chapter, and consider one option you learned in this chapter that might make you rethink how you answered each one. Has this chapter prompted you to consider changing any of your feelings or practices?
Rank the following questions on a scale of 1–4, where 1 = "least like me" and 4 = "most like me."
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Folsom_Lake_College/BUS_330%3A_Managing_Diversity_in_the_Workplace_(Buch)/Unit_1%3A_Diversity_Theories_Legislation_and_Cultural_Competence/Chapter_3%3A_Understanding_Civility_and_Cultural_Competence/9.7%3A_Rethinking",
"book_url": "https://commons.libretexts.org/book/biz-84879",
"title": "3.8: Rethinking",
"author": ""
} |
3.9: Where Do You Go From Here?
This chapter touched on many elements of civility and diversity, and mentioned a wide array of groups, identities, and populations. But the chapter certainly did not explore every concept or reflect every group you may encounter. In a similar way, you can’t know everything about everyone, but you can build cultural competency and understanding to make people feel included and deepen your abilities and relationships.
Sometimes learning about one group or making one person feel comfortable can be as important as addressing a larger population. To that end, consider researching or discussing one of the following topics to increase your level of civility and understanding:
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Folsom_Lake_College/BUS_330%3A_Managing_Diversity_in_the_Workplace_(Buch)/Unit_1%3A_Diversity_Theories_Legislation_and_Cultural_Competence/Chapter_3%3A_Understanding_Civility_and_Cultural_Competence/9.8%3A_Where_do_you_go_from_here",
"book_url": "https://commons.libretexts.org/book/biz-84879",
"title": "3.9: Where Do You Go From Here?",
"author": ""
} |
1.5: Thinking About Conflict
When you hear the word "conflict," do you have a positive or negative reaction? Are you someone who thinks conflict should be avoided at all costs? While conflict may be uncomfortable and challenging, it doesn't have to be negative. Think about the social and political changes that came about from the conflict of the civil rights movement during the 1960s. There is no doubt that this conflict was painful and even deadly for some civil rights activists, but the conflict resulted in the elimination of many discriminatory practices and helped create a more egalitarian social system in the United States. Let's look at two distinct orientations to conflict, as well as options for how to respond to conflict in our interpersonal relationships.
Conflict as Destructive
When we shy away from conflict in our interpersonal relationships, we may do so because we conceptualize it as destructive to our relationships. As with many of our beliefs and attitudes, these assumptions are not always well grounded and can lead to destructive behaviors. Augsburger outlined four assumptions of viewing conflict as destructive.
- Conflict is a destructive disturbance of the peace.
- The social system should not be adjusted to meet the needs of members; rather, members should adapt to the established values.
- Confrontations are destructive and ineffective.
- Disputants should be punished.
When we view conflict this way, we believe that it is a threat to the established order of the relationship. Think about sports as an analogy of how we view conflict as destructive. In the U.S. we like sports that have winners and losers. Sports and games where a tie is an option often seem confusing to us. How can neither team win or lose? When we apply this to our relationships, it’s understandable why we would be resistant to engaging in conflict. I don’t want to lose, and I don’t want to see my relational partner lose. So, an option is to avoid conflict so that neither person has to face that result.
Conflict as Productive
In contrast to seeing conflict as destructive, it is also possible, and even healthy, to view conflict as a productive, natural outgrowth and component of human relationships. Augsburger described four assumptions of viewing conflict as productive.
- Conflict is a normal, useful process.
- All issues are subject to change through negotiation.
- Direct confrontation and conciliation are valued.
- Conflict is a necessary renegotiation of an implied contract—a redistribution of opportunity, release of tensions, and renewal of relationships.
From this perspective, conflict provides an opportunity for strengthening relationships, not harming them. Conflict is a chance for relational partners to find ways to meet the needs of one another, even when these needs conflict. Think back to our discussion of dialectical tensions. While you may not explicitly argue with your relational partners about these tensions, the fact that you are negotiating them points to your ability to use conflict in productive ways for the relationship as a whole, and the needs of the individuals in the relationship.
Types of Conflict
Understanding the different ways of valuing conflict is a first step toward engaging in productive conflict interactions. Likewise, knowing the various types of conflict that occur in interpersonal relationships also helps us to identify appropriate strategies for managing certain types of conflict. Cole states that there are five types of conflict in interpersonal relationships: Affective, Conflict of Interest, Value, Cognitive, and Goal.
- Affective conflict. Affective conflict arises when we have incompatible feelings with another person. For example, if a couple has been dating for a while, one of the partners may want to marry as a sign of love while the other decides they want to see other people. What do they do? The differences in feelings for one another are the source of affective conflict.
- Conflict of Interest. This type of conflict arises when people disagree about a plan of action or what to do in a given circumstance. For example, Julie, a Christian Scientist, does not believe in seeking medical intervention, but believes that prayer can cure illness. Jeff, a Catholic, does believe in seeking conventional medical attention as treatment for illness. What happens when Julie and Jeff decide to have children? Do they honor Jeff's beliefs and take the kids to the doctor when they are ill, or respect and practice Julie's religion? This is a conflict of interest.
- Value Conflict. A difference in ideologies or values between relational partners is called value conflict. In the example of Julie and Jeff, a conflict of interest about what to do concerning their children's medical needs results from differing religious values. Many people engage in conflict about religion and politics. Remember the old saying, "Never talk about religion and politics with your family."
- Cognitive Conflict. Cognitive conflict is the difference in thought process, interpretation of events, and perceptions. Marsha and Victoria, a long-term couple, are both invited to a party. Victoria declines because she has a big presentation at work the next morning and wants to be well rested. At the party, their mutual friends Michael and Lisa notice Marsha spending the entire evening with Karen. Lisa suspects Marsha may be flirting and cheating on Victoria, but Michael disagrees and says Marsha and Karen are just close friends catching up. Michael and Lisa are observing the same interaction but have a disagreement about what it means. This is an example of cognitive conflict.
- Goal Conflict. Goal conflict occurs when people disagree about a final outcome. Jesse and Maria are getting ready to buy their first house. Maria wants something that has long-term investment potential while Jesse wants a house to suit their needs for a few years, with plans to then move into a larger house. Maria has long-term goals for the house purchase and Jesse is thinking in more immediate terms. These two have different goals in regards to purchasing a home.
Strategies for Managing Conflict
When we ask our students what they want to do when they experience conflict, most of the time they say "resolve it." While this is understandable, it is also important to understand that conflict is ongoing in all relationships, so our approach to conflict should be to "manage it" instead of always trying to "resolve it."
One way to understand options for managing conflict is by knowing five major strategies for managing conflict in relationships. While most of us probably favor one strategy over another, we all have multiple options for managing conflict in our relationships. Having a variety of options available gives us flexibility in our interactions with others. Five strategies for managing interpersonal conflict include dominating, integrating, compromising, obliging, and avoiding (Rahim; Rahim & Magner; Thomas & Kilmann). One way to think about these strategies, and your decision to select one over another, is to think about whose needs will be met in the conflict situation. You can conceptualize this idea according to the degree of concern for the self and the degree of concern for others.
When people select the dominating strategy, or win-lose approach, they exhibit high concern for the self and low concern for the other person. The goal here is to win the conflict. This approach is often characterized by loud, forceful, and interrupting communication. Again, this is analogous to sports. Too often, we avoid conflict because we believe the only other alternative is to try to dominate the other person. In relationships where we care about others, it's no wonder this strategy can seem unappealing.

The obliging style shows a moderate degree of concern for self and others, and a high degree of concern for the relationship itself. In this approach, the individuals are less important than the relationship as a whole. Here, a person may minimize the differences or a specific issue in order to emphasize the commonalities. The comment, "The fact that we disagree about politics isn't a big deal since we share the same ethical and moral beliefs," exemplifies an obliging style.

The compromising style is evident when both parties are willing to give up something in order to gain something else. When environmental activist Julia Butterfly Hill agreed to end her two-year-long tree sit in Luna as a protest against the logging practices of Pacific Lumber Company (PALCO), and pay them $50,000 in exchange for their promise to protect Luna and not cut within a 20-foot buffer zone, she and PALCO reached a compromise. If one of the parties feels the compromise is unequal, they may be less likely to stick to it long term. When conflict is unavoidable, many times people will opt for compromise. One of the problems with compromise is that neither party fully gets their needs met. If you want Mexican food and your friend wants pizza, you might agree to compromise and go someplace that serves Mexican pizza. While this may seem like a good idea, you may have really been craving a burrito and your friend may have really been craving a pepperoni pizza. In this case, while the compromise brought together two food genres, neither person got their desire met.

When one avoids a conflict, they may suppress feelings of frustration or walk away from a situation. While this is often regarded as expressing a low concern for self and others because problems are not dealt with, the opposite may be true in some contexts. Take, for example, a heated argument between Ginny and Pat. Pat is about to make a hurtful remark out of frustration. Instead, she decides that she needs to avoid this argument right now until she and Ginny can come back and discuss things in a more calm fashion. In this case, temporarily avoiding the conflict can be beneficial. However, conflict avoidance over the long term generally has negative consequences for a relationship because neither person is willing to participate in the conflict management process.

Finally, integrating demonstrates a high level of concern for both self and others. Using this strategy, individuals agree to share information, feelings, and creativity to try to reach a mutually acceptable solution that meets both of their needs. In our food example above, one strategy would be for both people to get the food they want, then take it on a picnic in the park. This way, both people are getting their needs met fully, and in a way that extends beyond original notions of win-lose approaches for managing the conflict. The downside to this strategy is that it is very time consuming and requires high levels of trust.
"license": "Creative Commons - Attribution Share-Alike - https://creativecommons.org/licenses/by-sa/4.0/",
"url": "https://biz.libretexts.org/Courses/Prince_Georges_Community_College/BMT_2660%3A_Conflict_Management_(Perry_2021)/01%3A_Analyze_the_causes_and_types_of_conflict/1.05%3A_Thinking_About_Conflict",
"book_url": "https://commons.libretexts.org/book/biz-72169",
"title": "1.5: Thinking About Conflict",
"author": null
} |
2.5: Dealing with Conflict: Different Approaches
Every individual or group manages conflict differently. In the 1970s, consultants Kenneth W. Thomas and Ralph H. Kilmann developed a tool for analyzing the approaches to conflict resolution. This tool is called the Thomas-Kilmann Conflict Mode Instrument (TKI) (Kilmann Diagnostics, 2017).
Essential Learning Activity \(\PageIndex{1}\):
For information on the Thomas-Kilmann Conflict Mode Instrument, see the Kilmann Diagnostics website.
Thomas and Kilmann suggest that in a conflict situation, a person’s behaviour can be assessed on two factors:
- Commitment to goals or assertiveness —the extent to which an individual (or a group) attempts to satisfy his or her own concerns or goals.
- Commitment to relationships or cooperation —the extent to which an individual (or a group) attempts to satisfy the concerns of the other party, and the importance of the relationship with the other party.
Thomas and Kilmann use these factors to explain the five different approaches to dealing with conflict:
- avoiding
- competing
- accommodating
- compromising
- collaborating
There is an appropriate time to use each approach in dealing with conflict. While most people will use different methods in various circumstances, we all tend to have a more dominant approach that feels most comfortable. One approach is not necessarily better than another and all approaches can be learned and utilized. To most effectively deal with conflict, it is important to analyze the situation and determine which approach is most appropriate.
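To make the two-factor model concrete, here is a minimal sketch in Python that maps a pair of assertiveness and cooperativeness scores to one of the five modes. The 1–9 scale, the 5.0 cutoffs, and the mid-range band for compromising are illustrative assumptions for this sketch only, not values from the published TKI instrument.

```python
def conflict_mode(assertiveness: float, cooperativeness: float) -> str:
    """Map Thomas-Kilmann-style scores (assumed 1-9 scale) to a conflict mode.

    The mid-range band for 'compromising' and the 5.0 cutoffs are
    illustrative assumptions, not values from the published TKI.
    """
    # Moderate on both axes: compromising sits in the middle of the grid.
    if 3.5 <= assertiveness <= 6.5 and 3.5 <= cooperativeness <= 6.5:
        return "compromising"
    high_assert = assertiveness > 5.0
    high_coop = cooperativeness > 5.0
    if high_assert and high_coop:
        return "collaborating"  # high commitment to goals and to relationships
    if high_assert:
        return "competing"      # high commitment to goals, low to relationships
    if high_coop:
        return "accommodating"  # low commitment to goals, high to relationships
    return "avoiding"           # low commitment to both


# A few sample score pairs and the modes they fall into:
print(conflict_mode(8, 2))  # competing
print(conflict_mode(2, 8))  # accommodating
print(conflict_mode(5, 5))  # compromising
```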
Let’s take a closer look at each approach and when to use it.
Avoiding
An avoidance approach demonstrates a low commitment to both goals and relationships. This is the most common method of dealing with conflict, especially by people who view conflict negatively.
Table: Types of Avoidance | Results | Appropriate When
Application to Nursing—Avoidance
When might avoidance be an appropriate approach to conflict in a hospital or clinic setting?
In a hospital or clinical setting, there may be times when it is appropriate to avoid conflict. For example, on a particularly busy day in the emergency room, when a patient in life-threatening condition has just been received, the attending doctor may bark directions at the assisting nurses to get equipment. The nurses may feel offended by the doctor’s actions; however, it may be appropriate for the nurses to avoid the conflict at that moment given the emergency situation. The nurse, if he or she felt it was inappropriate behavior by the doctor, could then deal with the conflict after the patient has been stabilized.
When might avoidance be an inappropriate approach to conflict in a hospital or clinic setting?
Avoiding the conflict may be inappropriate if that same doctor continues to bark directions at the nursing staff in non-emergency situations, such as during debrief of a surgery, or when communicating non-emergency instructions. When the nurses and doctor have to continue a working relationship, avoiding the continuing conflict will no longer be appropriate.
Competing
A competing approach to conflict demonstrates a high commitment to goals and a low commitment to relationships. Individuals who use the competing approach pursue their own goals at the other party’s expense. People taking this approach will use whatever power is necessary to win. It may take the form of defending a position, interest, or value that you believe to be correct. Competing approaches are often supported by structures (courts, legislatures, sales quotas, etc.) and can be initiated by the actions of one party. Competition may be appropriate or inappropriate, as defined by the expectations of the relationship.
Table: Types of Competing | Results | Appropriate When
Application to Nursing — Competing
When might a competing approach to conflict be appropriate in a hospital or clinic setting?
A competing approach to conflict may be appropriate in a hospital or clinic setting if you recognize that another nurse has made an error in how much medication to administer to a patient. You recognize this mistake before the nurse enters the patient’s room, so you approach the nurse, take the medication out of his or her hands, and replace it with the correct dosage. The goal of patient safety outweighs the commitment to the relationship with that nurse in this case.
When might a competing approach to conflict be inappropriate in a hospital or clinic setting?
It would be inappropriate to continue to be competitive when you debrief with the nurse about the dangers of medication errors and the system of double checking dosage amounts. The goal at this point is to enhance the learning of that nurse as well as to build trust in your relationship as colleagues. A different approach is needed.
Accommodating
Accommodating demonstrates a low commitment to goals and a high commitment to relationships. This approach is the opposite of competing. It occurs when people ignore or override their own concerns to satisfy the concerns of the other party. An accommodating approach is used to establish reciprocal adaptations or adjustments. Reciprocity may be the hoped-for outcome, but when the other party does not reciprocate, conflict can result. Others may come to see habitual accommodators as simply being “the way they are” and assume they need nothing in return; accommodators typically will not ask for anything in return. Accommodators tend to become resentful when a reciprocal relationship is not established. Once resentment grows, people who rely on the accommodating approach often shift to a competing approach because they are tired of being “used.” This leads to confusion and conflict.
Table: Types of Accommodating | Results | Appropriate When
Application to Nursing—Accommodation
When might accommodation be an appropriate approach to conflict in a hospital or clinic setting?
It may be appropriate to use an accommodating approach when, for example, one of the nurses on your shift has a particularly difficult patient who is taking up a lot of time and effort. Seeing that the nurse is having difficulty, you take on some of her or his tasks. This increases your workload for a period of time, but it allows your colleague the time needed to deal with the difficult patient.
When might accommodation be an inappropriate approach to conflict in a hospital or clinic setting?
This approach may no longer be appropriate if that same nurse expects you to continue to cover his or her tasks after the situation with the difficult patient has been resolved.
Compromising
A compromising approach strikes a balance between a commitment to goals and a commitment to relationships. The objective of a compromising approach is a quick solution that will work for both parties. Usually it involves both parties giving up something and meeting in the middle. Compromising is often used in labour negotiations, as typically there are multiple issues to resolve in a short period of time.
Table: Types of Compromising | Results | Appropriate When
Application to Nursing—Compromise
When might compromise be an appropriate approach to conflict in a hospital or clinic setting?
You are currently on shift with another nurse who does the bare minimum and rarely likes to help colleagues out. It is two hours since lunch, and one of your hyperglycemic patients has not received their lunch tray. You approach your colleague and ask him or her to go look for the tray while you draw blood from one of their patients in return. The other nurse agrees, as he or she has been having difficulty with the patient who needs the blood draw.
When might a compromise be an inappropriate approach to conflict in a hospital or clinic setting?
It would be inappropriate to continue to ask the nurse to do tasks for you that are less appealing than the tasks you take on.
Collaborating
Collaborating is an approach that demonstrates a high commitment to goals and also a high commitment to relationships. This approach is used in an attempt to meet the concerns of all parties. Trust and a willingness to take risks are required for this approach to be effective.
Table: Types of Collaborating | Results | Appropriate When
Application to Nursing—Collaboration
When might collaboration be an appropriate approach to conflict in a hospital or clinic setting?
It may be appropriate to use collaboration in a hospital or clinic setting when discussing vacation coverage with team members at a team meeting. During a team meeting, time is available to discuss and focus on what is important for each member of the team.
When might collaboration be an inappropriate approach to conflict in a hospital or clinic setting?
Collaboration would be inappropriate in a discussion of a new policy that has been put in place if the team has little influence in making adjustments.
3.3: What are the theories of multiple intelligences and emotional intelligence?
By Meagan Keith
Learning Objectives
- recognize and define Gardner's ten intelligences
- distinguish traditional views of intelligence (e.g., IQ) from Multiple Intelligences and Emotional Intelligence
- identify which kind of learning is best for them (e.g., visual, kinesthetic, etc.)
What is intelligence?
The traditional view of intelligence has always been that people are born with a fixed amount of intelligence that does not change over a lifetime (Hampton, 2008). Under the traditional view, intelligence consists of two abilities—logic and language. Short-answer tests, such as the Stanford-Binet Intelligence Test and the Scholastic Aptitude Test, are common ways of measuring intelligence.
However, in the past twenty years or so, a more modern view of intelligence has begun to replace the traditional one. Extensive research has shown that it is, indeed, possible to have more than one intelligence and that one's level of intelligence can change over a lifetime. This theory of intelligence, called Multiple Intelligences, was created by Howard Gardner, Ph.D., a psychologist and professor of cognition and education at Harvard University.
According to Gardner, “Intelligence is the ability to respond successfully to new situations and the capacity to learn from one’s past experiences” (Hampton, 2008). Gardner believes that, “we all possess at least [seven] unique intelligences through which we are able to learn and teach new information” (Hampton, 2008). He believes that “we can all improve each of the intelligences, though some people will improve more readily in one intelligence area than the others” (Hampton, 2008).
Gardner does not believe in short-answer tests to measure intelligence because “short answer tests do not measure disciplinary mastery or deep understanding, rather they measure rote memorization skills and only one’s ability to do well on short-answer tests” (Hampton, 2008). Assessments that value the process over the final answer, such as the Performance Assessment in Math (PAM) and the Performance Assessment in Language (PAL), are more accurate measures of intelligence in Gardner’s theory than short-answer tests.
Introduction to Multiple Intelligences
In 1983 Howard Gardner proposed his theory of multiple intelligences in the book Frames of Mind: The Theory of Multiple Intelligences. In his book, Gardner proposes that there are seven possible intelligences—linguistic intelligence, logical-mathematical intelligence, musical intelligence, bodily-kinesthetic intelligence, visual-spatial intelligence, interpersonal intelligence, and intrapersonal intelligence. Gardner would go on to add three more intelligences to his list—naturalist intelligence, spiritual intelligence, and existential intelligence—in his later book Intelligence Reframed: Multiple Intelligences for the 21st Century (1999).
According to the Educational Researcher , to arrive at Gardner’s first seven intelligences Gardner and his colleagues examined literature on the “development of cognitive capacities in normal individuals, the breakdown of cognitive capacities under various kinds of organic pathology, and the existence of abilities in ‘special populations,’ such as prodigies, autistic individuals, idiots savants, and learning disabled children” (Gardner & Hatch, 1989).
Gardner and his colleagues also examined literature on “forms of intellect that exist in different species, forms of intellect valued in different cultures, the evolution of cognition across the millennia, as well as two forms of psychological evidence—the results of factor-analytic studies of human cognitive capacities and the outcome of studies of transfer and generalization” (Gardner & Hatch, 1989).
Intelligences that appeared repeatedly in Gardner’s research were added to a provisional list, whilst intelligences only appearing once or twice were discarded. Gardner claimed that, “as a species, human beings have evolved over the millennia to carry out at least these seven forms of thinking” on his provisional list (Gardner & Hatch, 1989).
Multiple Intelligences Defined
Linguistic intelligence is the ability to learn languages and use language to express what is on one’s mind and to understand people. Those who have high linguistic intelligence are well-developed in verbal skills and have sensitivity to sounds, meanings and rhythms of words (Hampton, 2008). These kinds of people enjoy reading various kinds of literature, playing word games, making up poetry and stories, and getting into involved discussions with other people (Hampton, 2008).
Examples of people with high linguistic ability include poets, writers, public speakers, TV and radio newscasters, and journalists.
Logical-Mathematical intelligence is the ability to detect patterns, reason deductively, and think logically. Those who are “math smart” have the capacity to analyze problems logically, carry out mathematical operations, and investigate scientifically (Smith, 2008). Those with high Logical-Mathematical intelligence are highly capable of thinking conceptually and abstractly (Hampton, 2008). This kind of intelligence is often associated with scientific and mathematical thinking (Hampton, 2008).
Careers that “math smart” people tend to be employed in include computer technicians and programmers, accountants, poll takers, medical professionals, and math teachers (Smith, 2008).
Musical intelligence is “the capacity to think in music, to be able to hear patterns, recognize them, and manipulate them” (Hampton, 2008). Those who are musically intelligent “learn through sounds, rhythms, tones, beats, [and] music produced by other people or present in the environment,” according to Gardner (Hampton, 2008). Musically intelligent people also have the ability to perform, compose, and appreciate music and music patterns (Smith, 2008).
Jobs in which musical intelligence is a desired aptitude include advertising, music studio directors and recorders, singers and songwriters, conductors, and music teachers (Hampton, 2008).
Bodily-Kinesthetic intelligence is defined as “having the potential of using one’s whole body or parts of the body to solve problems” (Smith, 2008). Those with high kinesthetic intelligence communicate well through body language and like to be taught through physical activity, hands-on learning, acting out, and role playing (Lane, n.d.). These kinds of people have a keen sense of body awareness and have the ability to use mental abilities to coordinate bodily movements (Smith, 2008).
Gymnasts, physical therapists, mechanics, athletes, builders, dancers, doctors, surgeons, nurses, and crafts persons tend to be highly kinesthetic.
Spatial intelligence “involves the potential to recognize and use patterns of wide space and more confined areas,” according to Gardner (Smith, 2008), as well as “the ability to manipulate and mentally rotate objects” (Thompson, 1999). Graphic artists, architects, and mapmakers tend to be highly spatially intelligent. These people are very aware of their environments.
Interpersonal intelligence is the capacity to understand the intentions, motivations, and desires of other people (Smith, 2008). These kinds of people are “people smart” and work well with others. Examples of people with high interpersonal intelligence include educators, salespeople, and religious and political leaders. Interpersonally intelligent people learn through personal interactions.
“[People with high interpersonal intelligence] probably have a lot of friends, show a great deal of empathy for other people, and exhibit a deep understanding of other people’s viewpoints,” according to MI Identified (Hampton, 2008).
“Intrapersonal intelligence is the capacity to understand oneself, to appreciate one’s feelings, fears and motivations,” according to Gardner. “It involves having an effective working model of ourselves, and being able to use such information to regulate our lives,” according to The Encyclopedia of Informal Education (Smith, 2008). People who possess high intrapersonal intelligence are “self smart.” These people know who they are, what they are capable of doing, how to react to things, what to avoid, and what they gravitate to (Hampton, 2008).
Psychologists, philosophers, social workers, and counselors are all examples of “self smart” careers.
Naturalist intelligence is defined as the ability to recognize and categorize plants, animals and other objects in nature (Hampton, 2008). Those with high naturalist intelligence include gardeners, biologists, birdwatchers, florists, horticulturists and more.
According to EdWeb , “People who are sensitive to changes in weather patterns or are adept at distinguishing nuances between large numbers of similar objects may be expressing naturalist intelligence abilities” (Carvin, n.d.). Naturalist intelligence is the intelligence that presumably helped our ancestors survive—“to decide what to eat and what to run from” (Holmes, 2002).
Existential intelligence is defined as the ability to be sensitive to, or to have the capacity for, conceptualizing or tackling deeper or larger questions about human existence: What is the meaning of life? Why are we born? Why do we die? (Wilson, 2005). Existential intelligence is often called the “wondering smart” or the metaphysical intelligence.
The clearest definition of existential intelligence given by Gardner is: “individuals who exhibit the proclivity to pose and ponder questions about life, death, and ultimate realities” (Wilson, 2005). However, Gardner has not fully committed himself to this ninth intelligence, despite his book Intelligence Reframed: Multiple Intelligences for the 21st Century, in which he first mentions its possible existence.
Spiritual intelligence, according to Dr. Cynthia Davis, a clinical and corporate psychologist and emotional intelligence business coach, “is the ultimate intelligence in which we address and solve problems of meaning and value, in which we can place our actions and our lives in a wider, richer, meaning-giving context, and the intelligence with which we can assess that one course of action or one life path is more meaningful than another” (Mindwise Pty Ltd, 2004).
“Spiritual intelligence is the intelligence that makes us whole, integral and transformative,” according to Danah Zohar, author of Spiritual Capital: Wealth We Can Live By (Spiritual Intelligence and Spiritual Health, 2008). Spiritual intelligence is not necessarily religious, nor is it dependent upon religion as a foundation (Mindwise Pty Ltd, 2004). Characteristics of spiritual intelligence include the capacity to face and use suffering, the capacity to face and transcend pain, the capacity to be flexible and actively and spontaneously adaptive, and high self-awareness (Mindwise Pty Ltd, 2004).
Note
GARDNER'S THEORY OF MULTIPLE INTELLIGENCES

| Intelligence | Nickname |
|---|---|
| Linguistic Intelligence | “Word Smart” |
| Logical-Mathematical Intelligence | “Number/Reasoning Smart” |
| Spatial Intelligence | “Picture Smart” |
| Bodily-Kinesthetic Intelligence | “Body Smart” |
| Musical Intelligence | “Music Smart” |
| Interpersonal Intelligence | “People Smart” |
| Intrapersonal Intelligence | “Self Smart” |
| Naturalist Intelligence | “Nature Smart” |
| Existential Intelligence | “Wondering Smart” |
| Spiritual Intelligence | “Spiritual Smart” |
Conclusion to Multiple Intelligences
Note
"The single most important contribution education can make to a child's development is to help him towards a field where his talents best suit him, where he will be satisfied and competent."
-Howard Gardner
Since the publication of Gardner’s Frames of Mind: The Theory of Multiple Intelligences, Gardner’s theory has been put into practice in schools all over the world. Gardner’s theory teaches that teachers should not teach the same material to the entire class, but rather should individualize instruction by identifying students’ strengths and weaknesses.
One way of identifying students’ strengths and weaknesses is to offer a multiple intelligence assessment. Multiple intelligence assessments typically ask students/test takers to rank statements from 1 to 5, indicating how well each statement describes them ("5" meaning the statement describes you exactly, and "1" meaning the statement does not describe you at all). Statements might look like the ones below from Dr. Terry Armstrong’s online assessment of strengths (Armstrong, n.d.):
- I pride myself on having a large vocabulary.
- Using numbers and numerical symbols is easy for me.
- Music is very important to me in my daily life.
- I always know where I am in relation to my home.
- I consider myself an athlete.
- I feel like people of all ages like me.
- I often look for weaknesses in myself that I see in others.
- The world of plants and animals is important to me.
Teachers can use assessments like Armstrong's to take an inventory of learner’s skills so that they can tailor their teaching methods to their learner’s strengths.
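As a rough illustration of how such an inventory can be tallied, the sketch below sums each 1–5 self-rating by intelligence and lists the strongest areas first. The five statements are taken from the excerpt above, but the mapping of each statement to an intelligence, and the scoring scheme itself, are illustrative assumptions rather than part of Dr. Armstrong's actual instrument (real inventories use many items per intelligence).

```python
# Each sample statement is tagged with the intelligence it appears to probe.
# The tags are illustrative guesses based on the definitions in this chapter,
# not Dr. Armstrong's actual scoring key.
STATEMENTS = {
    "I pride myself on having a large vocabulary.": "linguistic",
    "Using numbers and numerical symbols is easy for me.": "logical-mathematical",
    "Music is very important to me in my daily life.": "musical",
    "I always know where I am in relation to my home.": "spatial",
    "I consider myself an athlete.": "bodily-kinesthetic",
}


def score_inventory(ratings):
    """Sum 1-5 self-ratings by intelligence, strongest areas first."""
    totals = {}
    for statement, rating in ratings.items():
        area = STATEMENTS[statement]
        totals[area] = totals.get(area, 0) + rating
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)


# Example: one learner's self-ratings for the five statements above.
sample_ratings = dict(zip(STATEMENTS, [5, 2, 4, 3, 1]))
for area, total in score_inventory(sample_ratings):
    print(f"{area}: {total}")
```

A teacher could extend this by grouping several statements under each intelligence and comparing per-area averages rather than raw sums.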
Introduction to Emotional Intelligence
Emotion can be any number of things. It can be anger, sadness, fear, enjoyment, love, surprise, disgust, or shame (Goleman, 2005, p. 289). Daniel Goleman, author of Emotional Intelligence, suggests that emotion refers to a “feeling and its distinctive thoughts, psychological and biological states, and range of propensities to act” (Goleman, 2005, p. 289). But the most fascinating part about emotions is that they are universal. People from cultures around the world all recognize the same basic emotions, even peoples presumably untainted by exposure to cinema or television (Goleman, 2005, p. 290).
There are two basic definitions of emotional intelligence. One is the Mayer-Salovey definition and the other, the Goleman definition. There are numerous other definitions of emotional intelligence floating about, especially on the net. However, none are as academically or scientifically accepted as Goleman's and Mayer and Salovey's.
Emotional Intelligence Defined
Mayer-Salovey Definition
The first two people to suggest that emotional intelligence is a true form of intelligence were Jack Mayer and Peter Salovey. Mayer and Salovey are leading researchers in the field of emotional intelligence. They first published their findings in a seminal 1990 article, defining emotional intelligence as “the subset of social intelligence that involves the ability to monitor one’s own and other’s feelings and emotions,” as well as “the ability to discriminate among them and to use this information to guide one’s thinking and actions” (Hein, 2007). Mayer and Salovey further described emotional intelligence as “a set of skills hypothesized to contribute to the accurate appraisal and expression of emotion in oneself and in others, the effective regulation of emotion in self and others, and the use of feelings to motivate, plan, and achieve in one’s life” (Hein, 2007).
Along with their definition of emotional intelligence, Mayer and Salovey proposed that there were four branches of emotional intelligence. Here is a compiled list of details from Mayer and Salovey’s 1990 and 1997 articles on the four branches of emotional intelligence:
1. Perception Appraisal and Expression of Emotion
- Ability to identify emotions in faces, music, and stories (1990)
- Ability to identify emotion in one’s physical states, feelings, and thoughts (1997)
- Ability to identify emotions in other people, designs, artwork, etc. through language, sound, appearance, and behavior (1997)
- Ability to discriminate between accurate and inaccurate, or honest vs. dishonest expressions of feeling (1997)
2. Emotional Facilitation of Thinking
- Ability to relate emotions to other mental sensations such as taste and color (1990)
- Ability to use emotion in reasoning and problem solving (1990)
- Emotions prioritize thinking by directing attention to important information (1997)
- Emotions are sufficiently vivid and available that they can be generated as aids to judgement and memory concerning feelings (1997)
- Emotional states differentially encourage specific problem-solving approaches such as when happiness facilitates inductive reasoning and creativity (1997)
3. Understanding and Analyzing Emotions; Employing Emotional Knowledge
- Ability to solve emotional problems, such as knowing which emotions are similar or opposite, and what relations they convey (1990)
- Ability to label emotions and recognize relations among the words and the emotions themselves, such as the relation between liking and loving (1997)
- Ability to interpret the meanings that emotions convey regarding relationships, such as that sadness often accompanies a loss (1997)
- Ability to understand complex feelings: simultaneous feelings of love and hate or blends such as awe as a combination of fear and surprise (1997)
- Ability to recognize likely transitions among emotions, such as the transition from anger to satisfaction or from anger to shame (1997)
4. Reflective Regulation of Emotions to Promote Emotional and Intellectual Growth
- Ability to understand the implications of social acts on emotions and the regulation of emotion in self and others (1990)
- Ability to stay open to feelings, both those that are pleasant and those that are unpleasant (1997)
- Ability to reflectively engage or detach from an emotion depending upon its judged informativeness or utility (1997)
- Ability to reflectively monitor emotions in relation to oneself and others, such as recognizing how clear, typical, influential or reasonable they are (1997)
- Ability to manage emotion in oneself and others by moderating negative emotions and enhancing pleasant ones, without repressing or exaggerating information they may convey (1997)
Goleman Definition
Daniel Goleman, Ph.D., is another important figure in the field of emotional intelligence. Goleman is the successful author of New York Times bestsellers, Emotional Intelligence and Social Intelligence , as well as an internationally known psychologist. Goleman is currently working as a science journalist and frequently lectures to professional groups, business audiences, and on college campuses (Bio, 2009). Goleman is one of the foremost experts in emotional intelligence. In his book, Emotional Intelligence , Goleman defines emotional intelligence as, “a set of skills, including control of one’s impulses, self-motivation, empathy and social competence in interpersonal relationships” (Goleman, 2005).
Goleman, like Mayer and Salovey, divided emotional intelligence into key components: three that pertain to oneself and two that pertain to how one relates to others (Gergen, 1999). Goleman’s five key components of emotional intelligence are: emotional self-awareness, managing emotions, motivating oneself, recognizing emotions in others, and handling relationships. Goleman, for the most part, agrees with Mayer and Salovey. However, in recent years, Goleman has favored a four-component system over his original five components from 1995.
Five Key Components (Goleman, 2005, p. 43-44):
1. Knowing one's emotions
- Self-awareness—recognizing a feeling as it happens—is the keystone of emotional intelligence
- The ability to monitor feelings from moment to moment is crucial to psychological insight and self-understanding
- People who know their emotions have a surer sense of how they really feel about personal decisions from whom to marry to what job to take
2. Managing emotions
- Handling feelings so they are appropriate is an ability that builds on self-awareness
- People who are poor in this ability are constantly battling feelings of distress, while those who excel in it can bounce back far more quickly from life's setbacks and upsets
3. Motivating oneself
- Marshalling emotions in the service of a goal is essential for paying attention, for self-motivation and mastery, and for creativity
- People who have this skill tend to be more highly productive and effective in whatever they undertake
4. Recognizing emotions in others
- Empathy is the fundamental people skill
- People who are empathetic are more attuned to the subtle social signals that indicate what others need or want; this makes them better at callings such as caring professions, teaching, sales, and management
5. Handling relationships
- Skill in managing emotions in others
- These are the abilities that undergird popularity, leadership, and interpersonal effectiveness
- People who excel in these skills do well at anything that relies on interacting smoothly with others
Conclusion to Emotional Intelligence
In 1998, Goleman developed a set of guidelines for The Consortium for Research on Emotional Intelligence in Organizations that could be applied in the workplace and in schools. This set of guidelines is divided into four parts: preparation, training, transfer and maintenance, and evaluation. Each phase is equally as important as the last.
Some of the first guidelines pertain to assessment. Teachers should assess the class and individuals and inform them of their strengths and weaknesses. In delivering the assessment the teacher should try to be accurate and clear. They should also allow plenty of time for the student to digest and integrate the information (Cherniss, 1998). The teacher should provide feedback in a safe and supportive environment and avoid making excuses or downplaying the seriousness of the deficiencies (Cherniss, 1998).
Other guidelines include: maximizing learner choice, encouraging people to participate, linking learning goals to personal values, adjusting expectations, and gauging readiness (Cherniss, 1998). Teachers should foster a positive relationship between their students and themselves. They should make change self-directed; tailoring a learning program that meets individual needs and circumstances.
Teachers should also set clear goals and make the steps towards those goals manageable, and not too overly ambitious (Cherniss, 1998). Teachers should provide opportunities to practice the new behaviors they have learned. Then, teachers should provide periodic feedback on the learners’ progress (Cherniss, 1998).
Teachers should rely on experiential methods of learning, such as activities that engage all the senses and that are dramatic and powerful, to aid learners in developing social and emotional competencies (Cherniss, 1998). Eventually, learners will develop a greater self-awareness. They should be able to understand how their thoughts, feelings, and behavior affect themselves and others at this point (Cherniss, 1998).
Note
The Self Science Curriculum, from Self Science: The Subject Is Me by Karen F. Stone (Goleman, 2005, p. 305)

Main Components

Self-awareness:

observing yourself and recognizing your feelings; building a vocabulary for feelings; knowing the relationship between thoughts, feelings, and reactions

Personal Decision-making:

examining your actions and knowing their consequences; knowing whether thought or feeling is ruling a decision; applying these insights to issues such as sex and drugs

Managing Feelings:

monitoring "self-talk" to catch negative messages such as internal put-downs; realizing what is behind a feeling (e.g., the hurt that underlies anger); finding ways to handle fears and anxieties, anger and sadness

Handling Stress:

learning the value of exercise, guided imagery, and relaxation methods

Empathy:

understanding other people's feelings and concerns and taking their perspective; appreciating the differences in how people feel about things

Communications:

talking about feelings effectively; becoming a good listener and question-asker; distinguishing between what someone does or says and your own reactions or judgements about it; sending "I" messages instead of blame

Self-disclosure:

valuing openness and developing trust in a relationship; knowing when it is safe to risk talking about your private feelings

Insight:

identifying patterns in your emotional life and reactions; recognizing similar patterns in others

Self-acceptance:

feeling pride and seeing yourself in a positive light; recognizing your strengths and weaknesses; being able to laugh at yourself

Personal Responsibility:

taking responsibility; recognizing the consequences of your decisions and actions; accepting your feelings and moods; following through on commitments (e.g., studying)

Assertiveness:

stating your concerns and feelings without anger or passivity

Group dynamics:

cooperation; knowing when and how to lead, when to follow

Conflict resolution:

how to fight fair with other kids, with parents, with teachers; the win/win model for negotiating compromise
Exercise \(\PageIndex{1}\)
1. Who is the author of the theory of multiple intelligences?
(a) Daniel Goleman
(b) Howard Gardner
(c) Mayer and Salovey
(d) Reuven Bar-On
2. Mary loves reading, writing, and telling stories. Her favorite course in school is Language arts. What kind of learning would be best for Mary?
(a) Interpersonal
(b) Kinesthetic
(c) Linguistic
(d) Spatial
3. According to Mayer and Salovey, emotional facilitation of thinking is the ability to__________.
(a) Label emotions and recognize relations among the words and the emotions themselves, such as the relation between liking and loving
(b) Relate emotions to other mental sensations such as taste and color
(c) Use emotion in reasoning and problem solving
(d) Both B and C
4. Mr. Conway likes to incorporate lots of hands-on activities into his curriculum. He often asks his students to role-play in class projects. What type of learner is Mr. Conway?
(a) Interpersonal
(b) Intrapersonal
(c) Kinesthetic
(d) Spatial
5. What might be a traditional view of intelligence?
(a) Intelligence is fixed at birth
(b) Standardized tests such as the Stanford-Binet tests accurately measure intelligence
(c) There is only one way to measure intelligence
(d) All of the above
- Answer
-
1. B
2. C
3. D
4. C
5. D
References
Armstrong, T. (n.d.). Assessment: Find Your Strengths! Retrieved February 5, 2009, from Multiple Intelligences for Adult Literacy and Education: http://literacyworks.org/mi/assessment/findyourstrengths.html
Bio. (2009). Retrieved February 8, 2009, from DanielGoleman.info: http://www.danielgoleman.info/blog/biography/
Carvin, A. (n.d.). Naturalist Intelligence . Retrieved February 5, 2009, from EdWeb: Exploring Technology and School Reform: http://www.edwebproject.org/edref.mi.th8.html
Cherniss, C. G. (1998). Guidelines for Best Practice . Retrieved February 19, 2009, from Consortium for Research on Emotional Intelligence in Organizations: eiconsortium.org/reports/guidelines.html
Gardner, H., & Hatch, T. (1989). Multiple Intelligences Go to School: Educational Implications of the Theory of Multiple Intelligences. Educational Researcher, 18(8), 4–10. Retrieved February 3, 2009, from JSTOR database.
Gergen, D. (1999, February 8). Emotional Intelligence . Retrieved February 8, 2009, from Online NewsHour with Jim Lehrer: www.pbs.org/newshour/gergen/february99/gergen_2-8.html
Goleman, D. (2005). Emotional Intelligence (10th Anniversary ed.). New York: Bantam Books.
Hampton, R. (2008, September 30). Multiple Intelligences . Retrieved February 5, 2009, from lth3.k12.il.us/rhampton/mi/mi.html
Hein, S. (2007). Definition of Emotional Intelligence . Retrieved February 5, 2009, from Emotional Intelligences: http://eqi.org/eidefs.htm
Holmes, K. (2002, June 4). Naturalist Intelligence . Retrieved February 5, 2009, from Lesley University Library: http://www.lesley.edu/faculty/kholmes/presentations/naturalist.html
Lane, C. (n.d.). Multiple Intelligences . Retrieved February 5, 2009, from Distance Learning Technology Resource Guide: http://www.tecweb.org/styles/gardner.html
Mindwise Pty Ltd. (2004, April 13). Spiritual Intelligence. Retrieved February 5, 2009, from Mindwise: mindwise.com.au/spiritual_intelligence.shtml
Smith, M. K. (2008). Howard Gardner and Multiple Intelligences . Retrieved February 4, 2009, from The Encyclopedia of Informal Education: http://www.infed.org/thinkers/gardner.htm
Spiritual Intelligence and Spiritual Health. (2008, February 21). Retrieved February 5, 2009, from My Health for Life: www.myhealth.gov.my/myhealth/eng/kesihatan_mental_content.jsp?lang=mental&storyid=1203581305747&storymaster=1203581305747
Thompson, H. (1999). Visual-Spatial Intelligence . Retrieved February 4, 2009, from hmt.myweb.uga.edu/webwrite/visual-spatial.htm
Wilson, L. O. (2005). Newer Views of Learning: Exploring the Ninth Intelligence-Maybe. Retrieved February 5, 2009, from ED 703: www.uwsp.edu/education/lwilson/learning/ninthintelligence.htm
6.3: Emotional and Social Intelligence in Leadership
Overview
The position of either leader or follower does not hold power. Rather, it is how we respond when we are in these roles, based on our emotional intelligence, that gives power to each role. Emotional intelligence has been described as the “ability to monitor and discriminate among emotions and to use the data to guide thought and action” (Pangman & Pangman, 2010, p. 146). Goleman (1998), a researcher who has completed excellent work in the area of work performance, studied the importance of emotional intelligence in achieving personal excellence. He defines emotional intelligence in greater depth, stating that it is composed of “abilities such as being able to motivate oneself and persist in the face of frustrations; to control impulse and delay gratification; to regulate one’s moods and keep distress from swamping the ability to think; to empathise and to hope” (Goleman, 1995, p. 21). Goleman’s model of emotional intelligence contains five skills that comprise personal and social competencies (see Table \(\PageIndex{1}\) below). The three skills of self-awareness, self-regulation, and motivation relate to the individual’s personal competence. The remaining skills of empathy and social skills are classified as social competencies (Sadri, 2012, p. 537). Goleman stressed that all of these skills can be learned.
Table \(\PageIndex{1}\): Goleman's five emotional intelligence skills

| Competency | Skill Area | Description |
|---|---|---|
| Personal | Self-awareness | Knowing one’s self |
| Personal | Self-regulation | Managing one’s self |
| Personal | Motivation | Sentiments and passions that facilitate the attainment of goals |
| Social | Empathy | Understanding of others and compassion toward them |
| Social | Social skills | Expertise in inspiring others to be in agreement |
Developing Emotional and Social Intelligence
Students are at an ideal stage of their lives and careers to check their emotional intelligence. Completion of the emotional intelligence quiz at the link below may help you identify areas for growth.
Essential Learning Activity \(\PageIndex{1}\)
Visit Queendom.com to access an emotional intelligence assessment.
Now that you have identified an area for growth, you may ask, “How can I increase my emotional intelligence?” Your brain has been developing neural pathways in response to your environment since early childhood. Over time these pathways become hard-wired in your brain, allowing you to respond rapidly to circumstances in your environment. In fact, it is believed that emotional responses occur faster than cognitive responses, thus you seem to act before you think. Siegel’s (2012) research in the area of interpersonal neurobiology shows that there is a way to change your brain’s response to stressors. Increasing your “mindfulness” can provide you with an opportunity to “break the link between environmental stimuli and habitual responses” (Gerardi, 2015, p. 60) and to choose a different course of action. Daniel Siegel (2010) coined the term mindsight to refer to the phenomenon of becoming aware of emotional reactions and changing them in real time. Gerardi (2015) stressed that working on developing mindsight is hard but valuable work for those who wish to become successful leaders.
From the Field
It is important to step back, take a few deep breaths, and look at all aspects of the situation before reacting.
As a nurse, gaining emotional and social intelligence and using mindsight are all critical to becoming a successful leader in the field. You will encounter and be required to cope with many different types of people, both colleagues and patients. It is extremely important to be self-aware, reflect on your feelings, and think about how emotions can influence both actions and relationships (or social interactions). That is, you must learn to reflect on your clinical experiences and think of how you could have changed a situation by using self-awareness or mindsight. In the words of Pattakos, “Between stimulus and response, there is a space. In that space lies our freedom and our power to choose our response. In our response lies our growth and our happiness” (as cited in Gerardi, 2015, p. 60).
2.1: Introduction
The land of Sumer, in today’s southern Iraq, was home to some of the largest early cities in human history. In one of these ancient settlements, Ur, a beautiful wooden box was laid in a royal tomb in about 2550 BCE (Figure 3.1). It measures roughly nine by twenty inches (a little bigger than a laptop) and is inlaid with elaborate mosaic figures and borders composed of bits of red limestone, lapis lazuli, and marine shell. This kind of specialized craftsmanship was a hallmark of societies that no longer depended on hunting and gathering for food but rather produced crops capable of sustaining large populations. In turn, they gained enough time and prosperity for some members to focus on artisanal crafts.
The box indicates at least three important things about the civilization that produced it. First, a highly skilled artisan constructed the box and created the mosaics, indicating the presence of specialization of labor. Second, the mosaics show someone who is presumably the king at the center of the top row, directing the soldiers below. These power dynamics suggest new social hierarchies. Finally, the soldiers all appear smaller in the scene than the king, symbolically reflecting their subordinate position and telling us that social stratification had come into existence. All these developments took place gradually over time, bringing slow but enduring change to the lives of the people in Ur and those who lived nearby. Similar changes occurred in the world’s other ancient cities.
2.2: Early Civilizations
Learning Objectives
By the end of this section, you will be able to:
- Discuss the attributes of early civilizations
- Analyze the way human relationships changed with the development of urban areas
Early civilizations, most of which arose along large rivers, were marked by an agriculturally sustained population that remained settled in one area and could number in the tens of thousands. The stability of the population allowed for the development of a discernible culture, which consists of all the different ways a distinct group of people interact with one another and their environment and pass these ways down from generation to generation over time. This is not to say that earlier groups of people lacked social identities. But there were important differences between them and the early civilizations that followed.
The development of early civilizations occurred between 10,000 and 8,000 BCE in just a few specific areas of the world that historians have labeled the “cradles of civilization.” In these locations—today’s Mexico, Peru, China, India/Pakistan, Iraq, and Egypt—the introduction of farming allowed larger populations to settle in one place, and the ability to produce and distribute surpluses of food enabled some people to specialize in such tasks as manufacturing handicrafts, tending to the spiritual world, and governing. The peoples of these cultures experienced radical changes in their lifestyles as well as in the ways their communities interacted with each other and their environments.
Attributes of Early Civilizations
Even after the Neolithic Revolution , many people continued to lead a nomadic or seminomadic existence, hunting and gathering or herding domesticated animals. People produced or gathered only enough materials to meet the immediate food, shelter, and clothing needs of their family unit. Even in societies that adopted farming as a way of life, people grew only enough for their own survival. Moreover, the family unit was self-sufficient and relied on its own resources and abilities to meet its needs. No great differences in wealth existed between families, and each person provided necessary support for the group. Group leaders relied primarily on consensus for decision-making. Order and peace were maintained by negotiations between community elders such as warriors and religious leaders. Stability also became dependent on peaceful relationships with neighboring societies, often built on trade.
Early civilizations, by contrast, arose where large numbers of people lived in a relatively small, concentrated area and worked to produce a surplus of food and other materials, which they distributed through a system of exchange. For farming communities, this food surplus meant family size grew to six or seven children and caused the global human population to skyrocket. Population growth rooted in agricultural production led to larger cities, in which the food produced by farmers in outlying rural areas was distributed among the population of the urban center, where food was not produced. This system of specialization was a key feature of early civilizations and what distinguished them from previous societies. Individuals performed specific tasks such as farming, writing, or performing religious rituals. People came to rely on the exchange of goods and services to obtain necessary supplies. For example, artisans specializing in craft production relied on farmers to cultivate the food they needed to thrive. In turn, farmers depended upon artisans to produce tools and clothing for them. A weaver acquired wool from a shepherd and produced cloth that might then be given to a physician in exchange for medicine or a priest as payment for conducting a religious ritual.
The system of exchange, however, created hierarchies within society. Those who could accumulate more goods became wealthy, and they passed that wealth from one generation to the next. This wealth led, in turn, to the accumulation of political and religious power, while those who continued to labor in production remained lower on the social scale. This social stratification, another characteristic of early civilizations, means that families and individuals could vary greatly in their wealth and status. Those who share the same level of wealth and status make up a distinct class or stratum, and these strata or classes are ordered from highest to lowest based on their social standing.
The nature of government also changed as populations grew. In smaller groups, decisions about war and migration were made in concert because no individual or family was likely to survive without the others. Also, in small communities, order and peace were often enforced at the family level. If someone acted badly, the customs of the society were brought to bear on them to correct the offending behavior. For example, the San of South Africa held a ritual dance to contact their elders for advice on how to correct a difficult situation. The act of coming together was often enough for the community to heal. In larger civilizations, officials such as priests and kings possessed the authority to command the obedience of subjects, who relied on the powerful to protect them. In return for physical protection and the promise of prosperity, farmers and artisans provided food and goods and, eventually, paid taxes. This exchange served to reinforce both the developing social hierarchy and the specialization of labor.
As civilizations developed around the world in this way, they shared the features noted. Their existence did not mean the end of older ways of living, however. Nomadic and seminomadic peoples not only remained an integral part of the ancient world, they also provided crucial resources and a vehicle for the exchange of knowledge and culture. They were particularly important as a means of connecting one large city to another.
The First Urban Societies
Around 10,000 BCE, wheat was first domesticated in what is today northern Iraq, southeastern Turkey, and western Iran, and also in Syria and Israel. This region is commonly called the Fertile Crescent (because of its shape). It includes Mesopotamia (modern Iraq), southern Anatolia (modern Turkey), and the Levant (modern Syria, Lebanon, Israel, and Palestine) and has yielded the earliest evidence of agriculture (Figure 3.4). This same region saw the rise of the first urban areas in the Neolithic Age, often called Neolithic cities. Examples include Jericho (8300–6500 BCE) along the Jordan River in what is today the Palestinian Territories, and Çatalhöyük (7200–6000 BCE) in southeastern Turkey. Archaeologists have established that these early urban areas had populations as high as six thousand.
Link to Learning
Hunter-gatherer cultures also built large structures, such as the monumental architecture at Göbekli Tepe in southeast Turkey and at Poverty Point in Louisiana in the United States. Listen to this TEDx Talk lecture by the archaeologist who excavated at Göbekli Tepe to find out more about the site. You can learn more about the Poverty Point culture by exploring the Poverty Point website. Look especially at “History and Artifacts.”
Neolithic settlements depended upon the transition to agricultural production to sustain their populations. Such developments were also accompanied by increasing complexity in other areas of life, such as religion. An agricultural surplus enabled religious specialists to devote time to performing bull sacrifices at Çatalhöyük, for example, and freed artisans to hone their skills to create the frescoes that decorated the interior space where these sacrifices occurred. Some form of government must have organized the labor and materials necessary to construct the walls and tower at Jericho, which may have served as an observatory to mark the passage of the solar year. In both Jericho and Çatalhöyük, a shared belief system, or unity behind a leader, must have inspired the inhabitants to labor in the fields and distribute their agricultural surplus. At Jericho, the community may have been united by its veneration of ancestors, whose skulls were decorated and revered as idols. The people of Çatalhöyük may have offered their bull sacrifices to a mother-deity, possibly represented by small figurines of a woman that archaeologists have discovered there.
Beyond the Book
Interpreting Evidence from Neolithic Cities
Prehistoric peoples left no writings behind, and historians and archaeologists can only attempt to understand their beliefs and attitudes by studying the artifacts they produced. This is challenging because ancient societies had very different religious and social systems from our own. But even the most convincing interpretations may not persuade everyone. We may simply never know what certain artifacts meant to the people who created them.
Consider the famous tower of Jericho, built around 8000 BCE (Figure 3.5). Careful excavation has revealed that the tower likely took more than thirty years to build and had stairs for climbing to the top through the center. Some believe it was made for defensive purposes; others think it was a religious monument or even an observatory. Regardless of its use, it seems likely the city had some type of governing system that served to organize the labor. But that assumption too could be in error.
As another example, consider a decorated skull found in Neolithic Jericho (Figure 3.6). An ancient artisan made it by plastering over a human skull and placing pieces of shell in the eye sockets. Historians and archaeologists have speculated that the people of Jericho venerated such skulls, which may have been seen as relics of ancestors and objects of worship. But perhaps the skull meant something else entirely.
Evidence from the Neolithic city of Çatalhöyük demonstrates that its people venerated bulls. Archaeologists have discovered numerous bucrania (bull heads and horns) at the site (Figure 3.7). But what did these bull symbols mean? Popular interpretation suggests they symbolize the son and lover of an important mother-deity. Other explanations call them female symbols of life and rebirth. Still others propose different interpretations.
- What do scholars’ interpretations suggest about the way these artifacts are studied?
- Do their interpretations sound convincing to you? What others can you think of, given what you have read and seen?
The Neolithic cities of Jericho and Çatalhöyük were some of the earliest to emerge. But they are not the only such sites. As early as 7000 BCE, a Neolithic settlement appeared in modern Pakistan, at a site today known as Mehrgarh, whose inhabitants engaged in long-distance trade, grew barley, and raised goats and sheep. Comparable Neolithic settlements in China emerged around 8000 BCE along the Yellow and Yangtze Rivers, where people cultivated millet and rice. A few thousand years later in the Americas, Neolithic settlements sprang up in both Mesoamerica and the Andes Mountains region.
Not all the Neolithic settlements endured. Çatalhöyük, for example, was ultimately abandoned around 6000 BCE and never reoccupied. Jericho, on the other hand, was abandoned and resettled a few times and is still a functioning city today. What is important about these Neolithic settlements is what they can tell us about the long transition between the emergence of agriculture and the eventual rise of early civilizations thousands of years later in places like Mesopotamia, Egypt, and the Indus River valley.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://human.libretexts.org/Courses/Coastline_College/Hum_C100%3A_Introduction_to_the_Humanities_(Volmer)/02%3A_Ancient_Worlds/2.02%3A_Early_Civilizations",
"book_url": "https://commons.libretexts.org/book/human-206684",
"title": "2.2: Early Civilizations",
"author": "OpenStax"
} |
2.3: Ancient Mesopotamia
Learning Objectives
By the end of this section, you will be able to:
- Identify characteristics of civilization in Ancient Mesopotamia
- Discuss the political history of Mesopotamia from the early Sumerian city-states to the rise of Old Babylon
- Describe the economy, society, and religion of Ancient Mesopotamia
In the fourth millennium BCE, the world’s first great cities arose in southern Mesopotamia, or the land between the Tigris and Euphrates Rivers, then called Sumer. The ancient Sumerians were an inventive people responsible for a host of technological advances, most notably a sophisticated writing system. Even after the Sumerian language ceased to be spoken early in the second millennium BCE, Sumerian literary works survived throughout the whole of Mesopotamia and were often collected by later cities and stored in the first libraries.
The Rise and Eclipse of Sumer
The term Mesopotamia, or “the land between the rivers” in Greek, likely originated with the Greek historian Herodotus in the fifth century BCE and has become the common name for the place between the Tigris and Euphrates Rivers in what is now Iraq. The rivers flow north to south, from the Taurus Mountains of eastern Turkey to the Persian Gulf, depositing fertile soil along their banks. Melting snow and rain from the mountains carry this topsoil to the river valleys below. In antiquity, the river flow was erratic, and flooding was frequent but unpredictable. The need to control it and manage the life-giving water led to the building of cooperative irrigation projects.
Agricultural practices reached Mesopotamia by around 8000 BCE, if not earlier. However, for about two millennia afterward, populations remained quite small, typically living in small villages of between one hundred and two hundred people. Beginning around 5500 BCE, some groups had begun to establish settlements in southern Mesopotamia, a wetter and more forbidding environment. It was here that the Sumerian civilization emerged (Figure 3.8). By around 4500 BCE, some of the once-small farming villages had become growing urban centers, some with thousands of residents. During the course of the fourth millennium BCE (3000s BCE), urbanization exploded in the region. By the end of the millennium, there were at least 124 villages with about one hundred residents each, twenty towns with as many as two thousand residents, another twenty small urban centers of about five thousand residents, and one large city, Uruk, with a population that may have been as high as fifty thousand. This growth helped make Sumer the earliest civilization to develop in Mesopotamia.
The fourth millennium BCE in Sumer was also a period of technological innovation. One important invention made after 4000 BCE was the process for manufacturing bronze, an alloy of tin and copper, which marked the beginning of the Bronze Age in Mesopotamia. In this period, bronze replaced stone as the premier material for tools and weapons and remained so for nearly three thousand years. The ancient Sumerians also developed the plow, the wheel, and irrigation techniques that used small channels and canals with dikes for diverting river water into fields. All these developments allowed for population growth and the continued rise of cities by expanding agricultural production and the distribution of agricultural goods. In the area of science, the Sumerians developed a sophisticated mathematical system based on the numbers sixty, ten, and one.
One of the greatest inventions of this period was writing. The Sumerians developed cuneiform, a script characterized by wedge-shaped symbols that evolved into a phonetic system, that is, one based on sounds, in which each symbol stood for a syllable (Figure 3.9). They wrote their laws, religious tracts, and property transactions on clay tablets, which became very durable once baked, just like the clay bricks the Sumerians used to construct their buildings. The clay tablets held records of commercial exchanges, including contracts and receipts as well as taxes and payrolls. Cuneiform also allowed rulers to record their laws and priests to preserve their rituals and sacred stories. In these ways, it helped facilitate both economic growth and the formation of states.
Dueling Voices
The Invention of Writing in Sumer
Writing developed independently in several parts of the world, but the earliest known evidence of its birth has been found in Sumer, where cuneiform script emerged as a genuine writing system by around 3000 BCE, if not earlier. But questions remain about how and why ancient peoples began reproducing their spoken language in symbolic form.
Archaeologist Denise Schmandt-Besserat argued in the 1990s that small clay representations of numbers and objects, often called “tokens,” date from thousands of years before the development of cuneiform writing and were its precursor. These tokens, she believed, were part of an accounting system, and each type represented a different good: livestock, grains, and oils. Some were found within hollow baseball-sized clay balls now called “bullae,” which were marked with pictures of the tokens inside. Schmandt-Besserat believed the pictures portray the type of transaction in which the goods represented by the tokens were exchanged, and thus they were a crucial step toward writing. Over time, she suggested, the marked bullae gave way to flat clay tablets recording the transactions, and the first truly written records emerged (Figure 3.10).
Schmandt-Besserat’s linear interpretation is still one of the best-known explanations for the emergence of writing. But it is hardly the only one. One scholar who offers a different idea is the French Assyriologist Jean-Jacques Glassner. Glassner believes that rather than being an extension of accounting techniques, early writing was a purposeful attempt to render the Sumerian language in script. He equates the development of writing, which gives meaning to a symbol, to the process by which Mesopotamian priests interpreted omens for divining the future. Writing allowed people to place language, a creation of the gods, under human control. Glassner’s argument is complex and relies on ancient works of literature and various theoretical approaches, including that of postmodernist philosopher Jacques Derrida.
Many disagree with Glassner’s conclusions, and modern scholars concede that tokens likely played an important role, but probably not in the linear way Schmandt-Besserat proposed. Uncertainty about the origin of writing in Sumer still abounds, and the scholarly debate continues.
- Why do you think Schmandt-Besserat’s argument was once so appealing?
- If you lived in a society with no writing, what might prompt you to develop a way to represent your language in symbolic form?
Cuneiform was a very complex writing system, and literacy remained the monopoly of an elite group of highly trained writing specialists, the scribes. But the script was also highly flexible and could be used to symbolize a great number of sounds, allowing subsequent Mesopotamian cultures such as the Akkadians, Babylonians, and many more to adapt it to their own languages. Since historians deciphered cuneiform in the nineteenth century, they have read the thousands of clay tablets that survived over the centuries and learned much about the history, society, economy, and beliefs of the ancient Sumerians and other peoples of Mesopotamia.
The Sumerians were polytheists, people who revered many gods. Each city, however, had its own patron deity, one with whom it felt a special connection and whom it honored above the others. For example, the patron deity of Uruk was Inanna, the goddess of fertility; the city of Nippur revered the weather god Enlil; and Ur claimed the moon god Sin. Each city possessed an immense temple complex for its special deity, which included a site where the deity was worshipped and religious rituals were performed. This site, the ziggurat, was a stepped tower built of mud-brick with a flat top (Figure 3.11). At its summit stood a roofed structure that housed the sacred idol or image of the temple’s deity. The temple complex also included the homes of the priests, workshops for artisans who made goods for the temple, and storage facilities to meet the needs of the temple workers.
Sumerians were clearly eager to please their gods by placing them at the center of their society. These gods could be fickle, faithless, and easily stirred to anger. If displeased with the people, they might bring famine or conquest. Making sure the gods were praised and honored was thus a way of ensuring prosperity. Praising them, however, implied different things for different social tiers in Sumer. For common people, it meant living a virtuous life and giving to the poor. For priests and priestesses, it consisted of performing the various rituals at the temple complexes. And for rulers, honoring the gods meant ensuring that the temples were properly funded, maintained, and regularly beautified and enlarged if possible.
By the Early Dynastic Period (c. 2650 BCE–2400 BCE), powerful dynasties of kings called lugals had established themselves as rulers of the cities. In each city, the lugals rose to power primarily as warlords, since the Sumerian cities often waged war against each other for control of farmland and access to water as well as other natural resources. Lugals legitimized their authority through the control of the religious institutions of the city. For example, at Ur, the daughter of the reigning lugal always served as the high priestess of the moon god Sin, the chief deity at Ur.
The lugals at Ur during this period, the so-called First Dynasty of Ur, were especially wealthy, as reflected in the magnificent beehive-shaped tombs in which they were buried. In these tombs, precious goods such as jewelry and musical instruments were stored, along with the bodies of servants who were killed and placed in the tomb to accompany the rulers to the Land of the Dead. One of the more spectacular tombs belonged to a woman of Ur called Pu-Abi, who was buried wearing an elaborate headdress and might have been a queen (Figure 3.12). The most famous lugal in all Sumer in this early period was Gilgamesh of Uruk, whose legendary exploits were recounted later in fantastical form in the Epic of Gilgamesh.
Link to Learning
The Epic of Gilgamesh is one of the world’s earliest examples of epic literature. To understand this ancient tale, first written down in the form we know today around 2100 BCE, read the overview of the Epic of Gilgamesh provided by the Metropolitan Museum of Art, which has a notable collection of ancient Mesopotamian artifacts.
The Rise of the World’s First Empire
Around 2300 BCE, the era of the independent Sumerian city-state, a political entity consisting of a city and the surrounding territory it controls, came to an end. Sumer and indeed all of Mesopotamia was conquered by Sargon of Akkad, who created the first known empire, a collection of regional powers under the control of a single ruler. The word “Akkad” in his name was a reference to the Akkadians, a group that settled in central Mesopotamia, north of Sumer, around the ancient city of Kish. Over time, the Akkadians adopted Sumerian culture and adapted cuneiform to their own language, a language of the Semitic family that includes the Arabic and Hebrew spoken today. They also identified their own gods with the gods of the Sumerians and adopted Sumerian myths. For example, the Akkadians identified the fertility goddess Inanna with their own goddess Ishtar.
Sargon conquered not only Sumer but also what is today northern Iraq, Syria, and southwestern Iran. While the precise details of his origin and rise to power are not known, scholars believe the story Sargon told about himself, at least, has likely been accurately preserved in the Legend of Sargon, written two centuries after his death as a purported autobiography. It is a familiar story of a scrappy young hero born in humble circumstances and rising on his own merits to become a great leader. The Legend relates how, when Sargon was a baby, his unwed mother put him in a basket and cast it on the Euphrates River. A farmer found and raised him, and Ishtar loved Sargon and elevated him from a commoner to a great king and conqueror.
This interesting tale would have certainly been a powerful piece of propaganda justifying Sargon’s rule and endearing him to the common people, and some of it may even be true. But from what historians can tell, Sargon’s rise to power likely occurred during a period of turmoil as his kingdom of Kish, of which he had likely seized control, came under attack by another king named Lugalzagesi. Sargon’s eventual defeat of Lugalzagesi and conquest of all of Sumer proved to be the beginning of a larger conquest of Mesopotamia. The Akkadian Empire that Sargon created lasted for about a century and a half, officially coming to an end in the year 2193 BCE (Figure 3.13).
One of the rivals of the Akkadian Empire was the city-state of Ebla, located in northwestern Syria. At some point, its people had adapted Sumerian cuneiform to their own language, which, like Akkadian, belonged to the Semitic family of languages, and archaeologists have discovered thousands of cuneiform tablets at the site. These tablets reveal that Ebla especially worshipped the storm god Adad, who was honored with the title “Ba‘al” or lord. More than one thousand years later in the Iron Age, people in this region still worshipped Baal, who was the main rival of Yahweh for the affections of the ancient Israelites.
Other rivals of the Akkadians were the Elamites, who inhabited the region to the immediate southeast of Mesopotamia in southwest Iran and whose city of Susa arose around 4000 BCE. The art and architecture of the Elamites suggest a strong Sumerian influence. They developed their own writing system around 3000 BCE, though later in the third millennium BCE they adapted Sumerian cuneiform to their language. The Elamites also worshipped their own distinct deities, such as Insushinak, the Lord of the Dead. Both Elam and Ebla eventually suffered defeat at the hands of the Akkadians.
In the year 2193 BCE, however, the Akkadian Empire collapsed. The precise reason is not entirely clear. However, some ancient accounts point to the incursions of the nomadic Guti tribes, whose original homes were located in the Zagros Mountains of western Iran, to the northeast of Mesopotamia. These Guti were originally pastoralists, who lived off their herds of livestock and moved from place to place to find pasture for their animals. While the Guti tribes certainly did move into the Akkadian Empire toward its end, modern scholarship suggests that the empire was likely experiencing internal decline and famine before this. The Guti appear to have exploited this weakness rather than triggering it. Regardless, for around a century, the Guti ruled over Sumer and adopted its culture as their own. Around 2120 BCE, however, the Sumerians came together under the leadership of the cities of Uruk and Ur and expelled the Guti from their homeland.
Later Empires in Mesopotamia
While Sargon’s empire lasted only a few generations, his conquests dramatically transformed politics in Mesopotamia. The era of independent city-states waned, and over the next few centuries, a string of powerful Mesopotamian rulers were able to build their own empires, often using the administrative techniques developed by Sargon as a model. For example, beginning about 2112 BCE, all Sumer was again united under the Third Dynasty of Ur as the Guti were driven out. The rulers of this dynasty held the title of lugal of all Sumer and Akkad, and they were also honored as gods. They built temples in the Sumerian city of Nippur, which was sacred to the storm god Enlil, the ruler of the gods in the Sumerian pantheon. The most famous lugal of this dynasty was Ur-Nammu (c. 2150 BCE), renowned for his works of poetry as well as for the law code he published.
At its height, the Third Dynasty extended its control over both southern and northern Mesopotamia. But by the end of the third millennium, change was on the horizon. Foreign invaders from the north, east, and west put tremendous pressure on the empire, and its rulers increased their military preparedness and even constructed a 170-mile fortification wall to keep them out. While these strategies were somewhat effective, they appear to have only postponed the inevitable as Amorites, Elamites, and other groups eventually poured in and raided cities across the land. By about 2004 BCE, Sumer had crumbled, and even Ur was violently sacked by the invaders.
Link to Learning
The sack of Ur by the Elamites and others was the inspiration for a lament or song of mourning that became a classic of Sumerian literature. Read The Lament for Urim and pay attention to the way the writer attributes the destruction to the caprice of the gods; the actual invaders are merely tools. For descriptions of the destruction itself, focus on lines 161–229.
In the centuries after 2004 BCE, the migration of Amorites into Mesopotamia resulted in the gradual disappearance of Sumerian as a spoken language. People in the region came to speak Amorite, which belonged to the family of Semitic languages. Nonetheless, scribes continued to preserve and write works in Sumerian and Akkadian cuneiform. Sumerian and Akkadian became the languages of religious rituals, hymns, and prayers, as well as classic literary works such as the Epic of Gilgamesh. Consequently, the literary output of these earlier cultures was preserved and transmitted to the new settlers. When nomadic Amorite tribes settled in Mesopotamia, they eventually established new cities such as Mari, Asshur, and Babylon, and they adopted much of the culture they encountered. The ancient Sumerian cities of Larsa and Isin of this era also preserved these cultural traditions, even as they came under the rule of Amorite kings.
Hammurabi, the energetic ruler of Babylon during the first half of the eighteenth century BCE, defeated the kings of the rival cities of Mari and Larsa and created an empire that encompassed nearly all of Mesopotamia. To unify this new empire, Hammurabi initiated the construction of irrigation projects, built new temples at Nippur, and published his legal edicts throughout his realm. Hammurabi had these edicts inscribed on stone pillars erected in different places in the empire to inform his subjects about proper behavior and the laws of the land. Noted for its clarity, the Code of Hammurabi far outlived the king who created it. It also provides us with a fascinating window into how Mesopotamian society functioned at this time.
In Their Own Words
The Law in Old Babylon
Remarkable for its clarity, the Code of Hammurabi may have introduced concepts like the presumption of innocence and the use of evidence. It informed legal systems in Mesopotamia for many centuries after Hammurabi’s death (Figure 3.14).
The Code of Hammurabi promoted the principle that punishment should fit the crime, but penalties often depended on social class:
199. If [a man] put out the eye of a man’s slave, or break the bone of a man’s slave, he shall pay one-half of its value.
202. If any one strike the body of a man higher in rank than he, he shall receive sixty blows with an ox-whip in public.
Many edicts concern marriage, adultery, children, and marriage property.
129. If a man’s wife be surprised with another man, both shall be tied and thrown into the water, but the husband may pardon his wife and the king his slaves.
150. If a man give his wife a field, garden, and house and a deed therefor, if then after the death of her husband the sons raise no claim, then the mother may bequeath all to one of her sons whom she prefers, and need leave nothing to his brothers.
A good number of the code’s edicts concern the settling of commercial disputes:
9. If anyone lose an article, and find it in the possession of another [who says] “A merchant sold it to me, I paid for it before witnesses,” . . . The judge shall examine their testimony—both of the witnesses before whom the price was paid, and of the witnesses who identify the lost article on oath. The merchant is then proved to be a thief and shall be put to death. The owner of the lost article receives his property, and he who bought it receives the money he paid from the estate of the merchant.
48. If anyone owe a debt for a loan, and a storm prostrates the grain, or the harvest fail, or the grain does not grow for lack of water; in that year he need not give his creditor any grain, he washes his debt-tablet in water and pays no rent for this year.
—"Hammurabi’s Code of Laws,” c. 1780 BCE, translated by L.W. King
- What do these edicts suggest about the different social tiers in Babylonian society? How were they organized?
- Was marriage similar to or different from marriage today?
- Do the edicts for resolving economic disputes seem fair to you? Why or why not?
While Hammurabi’s empire lasted a century and a half, much of the territory he conquered began falling away from Babylon’s control shortly after he died. The empire continued to dwindle in size until 1595 BCE, when an army of Hittites from central Anatolia in the north (modern Turkey) sacked the city of Babylon. Shortly thereafter, Kassites from the Zagros Mountains of western Iran conquered Babylon and southern Mesopotamia and settled there, unlike the Hittites, who had returned to their Anatolian home. The Kassites established a dynasty that ruled over Babylon for nearly five hundred years, to the very end of the Bronze Age. Like the Guti and the Amorites before them, over time the Kassite rulers adopted the culture of their Mesopotamian subjects.
Society and Religion in Ancient Mesopotamia
Thanks to the preservation of cuneiform clay tablets and the discovery and translation of law codes and works of literature, historians have at their disposal a wealth of information about Mesopotamian society. The study of these documents and the archaeological excavations carried out in Mesopotamia have allowed them to reconstruct the region’s economy.
We know now that temples and royal palaces were not merely princely residences and places for religious rituals; they also functioned as economic redistribution centers. For example, agricultural goods were collected from farmers as taxes by civic and religious officials, who then stored them to provide payments to the artisans and merchants they employed. Palaces and temples thus needed to possess massive storage facilities. Scribes kept records in cuneiform of all the goods collected and distributed by these institutions. City gates served as areas where farmers, artisans, and merchants could congregate and exchange goods. Precious metals such as gold often served as a medium of exchange, but these goods had to be weighed and measured during commercial exchanges, since coinage and money as we understand it today did not emerge until the Iron Age, a millennium later.
Society in southern Mesopotamia was highly urban. About 70 to 80 percent of the population lived in cities, but not all worked as artisans, merchants, or in other traditionally urban occupations. Rather, agriculture and animal husbandry accounted for a majority of a city’s economic production. Much of the land was controlled by the temples, kings, or other powerful landowners and was worked by semi-free peasants who were tied to the land. The rest of the land included numerous small plots worked by the free peasants who made up about half the population. A much smaller portion of the population was made up of enslaved people, typically prisoners of war or persons who had committed crimes or gone into debt. A man could sell his own children into slavery to cover a debt.
Much of the hard labor performed in the fields was done by men and boys, while the wives, mothers, and daughters of merchants and artisans were sometimes fully engaged in running family businesses. Cuneiform tablets tell us that women oversaw the business affairs of their families, especially when husbands were merchants who often traveled far from home. For example, cuneiform tablets from circa 1900 BCE show that merchants from Ashur in northern Mesopotamia conducted trade with central Anatolia and wrote letters to their female family members back home. Women were also engaged in the production of textiles like wool and linen. They not only produced these textiles in workshops with their own hands, but some appear to have held managerial positions within the textile industry.
Free peasant farmers, artisans, and merchants were all commoners. This put them in a higher social position than the semi-free peasants and slaves but lower than the elite nobility, who made up a very small percentage of the population and whose ranks included priests, official scribes, and military leaders. This aristocratic elite often received land in payment for their services to the kings and collected rents in kind from their peasant tenants. Social distinctions were also reflected in the law. For example, Hammurabi’s law code called for the punishment for causing physical harm to another to equal the harm inflicted, a principle best summarized in the line “an eye for an eye and a tooth for a tooth.” However, the principle applied only to victims and perpetrators of the same social class. An aristocrat convicted of the murder of a fellow noble paid with their life, while an aristocrat who harmed or murdered a commoner might be required only to pay a fine.
Men and women were not equal under the Code of Hammurabi. A man was free to have multiple wives and divorce a wife at will, whereas a woman could divorce her husband only if she could prove he had been unkind to her without reason. However, a woman from a family of means could protect her position in a marriage if her family put up a dowry, which could be land or goods. Upon marriage, the husband obtained the dowry, but if he divorced or was unkind to his wife, he had to return it to her and her family.
Cuneiform tablets have also allowed historians to read stories about the gods and heroes of Mesopotamian cultures. Mesopotamians revered many different gods associated with forces of nature. These were anthropomorphic deities who not only had divine powers but also frequently acted on very human impulses like anger, fear, annoyance, and lust. Examples include Utu, the god of the sun (Figure 3.15); Inanna (known to the Akkadians as Ishtar), the goddess of fertility; and Enlil (whose equivalent in other Mesopotamian cultures was Marduk), the god of wind and rain. The ancient Mesopotamians held that the gods were visible in the sky as heavenly bodies like stars, the moon, the sun, and the planets. This belief led them to pay close attention to these bodies, and over time, they developed a sophisticated understanding of their movement. This knowledge allowed them to predict astronomical events like eclipses and informed their development of a twelve-month calendar.
People in Mesopotamia believed human beings were created to serve the gods (Figure 3.16). They were expected to supply the gods with food through the sacrifice of sheep and cattle in religious rituals, and to honor them with temples, religious songs or hymns, and expensive gifts. People sought divine support from their gods. But they also feared that their worship might be insufficient and anger the deity. When that happened, the gods could bring death and devastation through floods and pestilence. Stories of gods wreaking great destruction, sometimes for petty reasons, are common in Mesopotamian myths. For example, in one Sumerian myth, the storm god Enlil nearly destroyed the entire human race with a flood when the noise made by humans annoyed him and kept him from sleep.
The ancient Mesopotamians’ belief that the gods were fickle, destructive, and easily stirred to anger is one reason many historians believe they had a generally pessimistic worldview. From the literature they left behind, we can see that while they hoped for the best, they were often resigned to accept the worst. Given the environment in which Mesopotamian civilization emerged, this pessimism is somewhat understandable. River flooding was common and could often be unpredictable and destructive. Wars between city-states and the destruction that comes with conflict were also common. Life was difficult in this unforgiving world, and the profiles of the various gods of the Mesopotamians reflect this harsh reality.
Evidence of Mesopotamians’ pessimism is also present in their view of the afterlife. In their religion, after death all people spent eternity in a shadowy underworld sometimes called “the land of no return.” Descriptions of this place differ somewhat in the details, but the common understanding was that it was a gloomy and frightening place where the dead were consumed by sorrow, eating dust and clay and longing pitifully and futilely to return to the land of the living.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://human.libretexts.org/Courses/Coastline_College/Hum_C100%3A_Introduction_to_the_Humanities_(Volmer)/02%3A_Ancient_Worlds/2.03%3A_Ancient_Mesopotamia",
"book_url": "https://commons.libretexts.org/book/human-206684",
"title": "2.3: Ancient Mesopotamia",
"author": "OpenStax"
} |
2.4: Ancient Egypt
Learning Objectives
By the end of this section, you will be able to:
- Discuss the unification of Ancient Egypt and the development of a distinct culture there
- Analyze the accomplishments of the pharaohs under the Old Kingdom
- Describe the changes in government and society in Egypt during the Middle Kingdom
The rich agricultural valleys historians refer to as the “Fertile Crescent,” due to the shape of this region on the map, witnessed the development of an early civilization as long ago as the fourth millennium BCE. Adjacent to this region was another fertile river valley formed by the Nile in northeast Africa. Here arose another, quite distinctive civilization. Unlike the city-states of Sumer, which were not organized into an empire until the time of Sargon of Akkad, the peoples of the Nile River valley were brought together under a single ruler around 3150 BCE. Although brief intervals of disunity occurred, Egypt remained a united and powerful kingdom, the great superpower of the ancient Near East, until the end of the Bronze Age in about 1100 BCE.
The Origins of Ancient Egypt
Aside from the Nile, Egypt and the areas around it are today part of the expansive and very arid Sahara. But around 10,000 BCE, as the Neolithic Revolution was getting underway in parts of southwestern Asia, much of North Africa including Egypt was lush, wet, and dotted with lakes. The region was highly hospitable to the many Paleolithic peoples living there and surviving on its abundant resources.
However, beginning around 6000 BCE the grasslands and lakes began to give way to sand as the once green environment was transformed into the Sahara we recognize today. As the environment became more difficult for humans to survive in, they retreated to oases and rivers on the fringes. One of these areas was the Nile River valley, a long thin ribbon of fertility running through the deserts of eastern North Africa and made possible by the regular flooding of the Nile. The Nile is the longest river in Africa and is traditionally considered the longest in the world, though some measurements give that distinction to the Amazon. It originates deep in central Africa and flows thousands of miles north through Egypt before it spills into the Mediterranean Sea.
It was around this same time, about 7000 to 6000 BCE, that agricultural technology and knowledge about the domestication of wheat, barley, sheep, goats, and cattle were introduced into the Nile River valley, likely through contact with the Levant. The earliest evidence for the emergence of Egyptian culture dates from this era as well. Two related but different Neolithic cultures arose: one in the Nile delta, where the river runs into the Mediterranean, and the other upriver and to the south of this location. The people of these cultures lived in crude huts, survived on fishing and agriculture, developed distinctive pottery styles, and even practiced burial rituals. Over thousands of years, they developed into two separate kingdoms, Lower Egypt or the delta region, and Upper Egypt or the area upriver (Figure 3.17).
A major political and cultural shift occurred in about 3150 BCE when Upper and Lower Egypt were unified into a single powerful kingdom. Some evidence suggests this achievement belongs to a king named Narmer. Later records attribute it to a king called Menes, but many scholars now believe Menes and Narmer are one and the same (Figure 3.18).
Unification gave rise to what scholars refer to as the Early Dynastic Period (about 3150 to 2613 BCE), or the era of the earliest dynasties to rule a unified Egypt. The powerful kings of these dynasties established a bureaucratic system, possibly influenced by the palace/temple redistributive economic system in place in ancient Sumer. But unlike Mesopotamia, ancient Egypt in the Bronze Age was now a single state instead of a number of warring rivals. Also unlike Mesopotamia, which was subject to periodic invasion, Egypt was protected by its geography. On both east and west, the Nile River valley was surrounded by large deserts that were difficult to cross and that made the kingdom into a kind of island in a hot, dry sea. During this time, many of the best-known cultural characteristics of ancient Egypt emerged in their earliest forms. They include the institution of the pharaoh, distinctive religious practices, and the Egyptian writing system.
The Pharaoh
The king of the united Egypt, the pharaoh, governed a kingdom much larger than any contemporary realm. Historians estimate that the population of the Egyptian state, when first united in about 3150 BCE, numbered as many as two million people, whereas a typical Sumerian lugal ruled about thirty thousand subjects. The temple/palace system in Egypt therefore operated on a much vaster scale than anywhere in Mesopotamia.
The term pharaoh in ancient Egyptian is translated as “big house,” likely a reference to the size of the palaces along the Nile valley where the pharaoh resided and administered the lands. As in ancient Mesopotamia, the palace included large facilities for storing taxes in kind, as well as workshops for artisans who produced goods for the palace. Also, as in Mesopotamia, a large portion of the population were peasant farmers. They paid taxes in kind to support the artisans and others working in the pharaoh’s palaces and temples and living nearby, inside the city. The ruling elite included scribes, priests, and the pharaoh’s officials.
The pharaoh was not merely a political figure but also served as the high priest and was revered as a god. In the role of high priest, the pharaoh united the lands by performing religious rituals to honor the different gods worshipped up and down the Nile River valley. As a deity, the pharaoh was the human form or incarnation of Horus, the god of justice and truth. Egyptians believed the divine presence of the pharaoh as Horus maintained justice throughout the land, which, in turn, maintained peace and prosperity, as evidenced by the welcome annual flooding of the Nile.
Egyptian Religion
Like the people of Mesopotamia, Ancient Egyptians were polytheists and worshipped many deities who controlled the forces of nature. For example, Re was the god of the sun, and Isis was the earth goddess of fertility. Osiris was associated with the Nile. The annual flooding of the river, the central event of the Egyptian year, was explained through the myth of Osiris, who was murdered by his brother Seth, the god of the desert wind, but then resurrected by his devoted wife Isis. The Nile (Osiris) was at its lowest in the summer when the hot desert wind was blowing (Seth), but then it was “resurrected” when it flooded its banks and brought life-giving water to the earth (Isis). Horus (the pharaoh) was the child of Isis and Osiris. Since Osiris was a god who had died, he was also the lord of the underworld and the judge of the dead. Ancient Egyptians believed Osiris would reward people who had lived a righteous life with a blessed afterlife in the underworld, whereas he would punish wicked evildoers.
As these gods and myths indicate, the Nile played an important role in the development of Egyptian religion. Whereas the unpredictable flooding of the Tigris and Euphrates Rivers in southern Mesopotamia commonly brought destruction along with fresh alluvial deposits, the Nile’s summer flooding, predictable as clockwork, brought only welcome deposits of rich sediment. It provided Egyptians with a sense that the world was harmonious and organized around cycles. In later centuries, this notion developed into the concept of Ma’at (also personified as a goddess), which combined the ideas of order, truth, justice, and balance. In contrast to the apparently pessimistic people in Mesopotamia, Egyptians drew from their environment a feeling that their world was orderly, balanced, and geared toward a sense of cosmic justice. It was an Egyptian’s responsibility to live in harmony with this order.
In Their Own Words
Flooding, Stories, and Cosmology in Ancient Egypt and Sumer
Ancient Egypt (the first excerpt that follows) and Ancient Sumer (the second) both depended on life-giving rivers, but their reactions to periodic flooding were quite different. Note the way each discusses the flooding, those responsible, and the reasons for it.
Hymn to the flood. Hail flood!
emerging from the earth, arriving to bring Egypt to life,
hidden of form, the darkness in the day,
the one whose followers sing to him, as he waters the plants,
created by Re to make every herd live,
who satisfies the desert hills removed from the water,
for it is his due that descends from the sky
—he, the beloved of Geb, controller of Nepri,
the one who makes the crafts of Ptah verdant.
Lord of fish, who allows south marsh fowl,
without a bird falling from heat.
Maker of barley, grower of emmer grain,
creator of festivals of the temples.
When he delays, then noses are blocked,
everyone is orphaned,
and if the offerings of the gods are distributed,
then a million men perish among mankind. . . .
Verdant the spirit at your coming, O Flood.
Verdant the spirit at your coming.
Come to Egypt,
make its happiness.
Make the Two Riverbanks verdant, . . .
Men and herds are brought to life by your deliveries of the fields, . . .
Verdant the spirit at your coming, O Flood.
—Author unknown, Hymn to the Nile, 2000–1700 BCE
I will reveal to you, O Gilgamesh, the mysterious story,
And one of the mysteries of the gods I will tell you.
The city of Shurippak, a city which, as you know,
Is situated on the bank of the river Euphrates. The gods within it
Decided to bring about a flood, even the great gods,
As many as there were. . . .
I saw the approach of the storm,
And I was afraid to witness the storm;
I entered the ship and shut the door.
I entrusted the guidance of the ship to the boat-man,
Entrusted the great house, and the contents therein.
As soon as early dawn appeared,
There rose up from the horizon a black cloud,
Within which the weather god thundered,
And the king of the gods went before it. . . .
The storm brought on by the gods swept even up to the heavens,
And all light was turned into darkness. It flooded the land; it blew with violence;
And in one day it rose above the mountains.
Like an onslaught in battle it rushed in on the people.
Brother could not save brother.
The gods even were afraid of the storm;
They retreated and took refuge in the heaven of Anu.
There the gods crouched down like dogs, in heaven they sat cowering.
—Author unknown, Epic of Gilgamesh, translated by R. Campbell Thompson and William Muse Arnold and compiled by Laura Getty
- What do these excerpts reveal about each people’s view of their world and the supernatural?
- What do they suggest about each culture’s relationship to its river(s)?
Egyptian Writing
Egyptians developed their own unique writing system, known today by the Greek word hieroglyphics (meaning “sacred writings”), though the Egyptians called it medu-netjer (“the god’s words”). The roots of hieroglyphic writing can be traced to the time before the Early Dynastic Period when the first written symbols emerged. But by at least 3000 BCE, the use of these symbols had developed into a sophisticated script. It used a combination of alphabetic signs, syllabic signs, word signs, and pictures of objects. In this complicated system, then known only to highly trained professional scribes, written symbols represented both sounds and ideas (Figure 3.19). The Egyptians also developed a simplified version of this hieroglyphic script known as hieratic, which they often employed for more mundane purposes such as recordkeeping and issuing receipts in commercial transactions.
Egyptian scribes recorded their ideas in stone inscriptions on the walls of temples and painted them on the walls of tombs, but they also used the fibers from a reed plant growing along the banks of the Nile to produce papyrus, a writing material like paper that could be rolled into scrolls and stored as records. Some of these papyrus rolls have survived for thousands of years because of the way the dry heat preserved them, and they proved very useful for modern historians and archaeologists after hieroglyphics were deciphered in the nineteenth century. They preserved Egyptian myths and poetry, popular stories, and lists of pharaohs, along with records of the daily life of ancient Egyptians.
The Age of Pyramid Building
By the 2600s BCE, the power of the pharaohs and the sophistication of the state in Egypt were such that the building of large-scale stone architecture became possible. Nineteenth-century historians considered these developments significant enough to give the period a name of its own. Today we call it the Old Kingdom (2613–2181 BCE), and it is best known for the massive stone pyramids that continue to awe visitors to Egypt today, many thousands of years after they were built (Table 3.1).
| Dates | Period |
|---|---|
| 6000–3150 BCE | Pre-Dynastic Egypt |
| 3150–2613 BCE | Early Dynastic Egypt |
| 2613–2181 BCE | Old Kingdom Period |
| 2181–2040 BCE | First Intermediate Period |
| 2040–1782 BCE | Middle Kingdom Period |
| 1782–1570 BCE | Second Intermediate Period |
| 1570–1069 BCE | New Kingdom Period |
| 1069–525 BCE | Third Intermediate Period |
The pyramids were tombs for the pharaohs of Egypt, places where their bodies were stored and preserved after death. The preservation of the body was important and was directly related to Egyptian religious beliefs that a person was composed of a number of different elements. These included the Ka, Ba, Akh, and others. A person’s Ka was their spiritual double. After the physical body died, the Ka remained but had to stay in the tomb with the body and be nourished with offerings. The Ba was also a type of spiritual essence, but it separated from the body after death, going out in the world during the day and returning to the body each night. The duty of the Akh, yet another type of spirit, was to travel to the underworld and the afterlife. The belief in concepts like the Ka and Ba was what made the practice of mummification and the creation of tombs important in Egyptian religion. Both elements needed the physical body to survive.
Before the pyramids, tombs and other architectural features were built of mud-brick and called mastabas. But during the Early Dynastic reign of the pharaoh Djoser, just before the start of the Old Kingdom, a brilliant architect named Imhotep decided to build a marvelous stone tomb for his king. Originally, it was intended to be merely a stone mastaba. However, Imhotep went beyond this plan and constructed additional smaller stone mastabas, one on top of the other. The result was a multitiered step pyramid (Figure 3.20). Surrounding it, Imhotep built a large complex that included temples.
The step pyramid of Djoser was revolutionary, but the more familiar smooth-sided style appeared a few decades later in the reign of Snefru, when three pyramids were constructed. The most impressive has become known as the Red Pyramid, because of the reddish limestone revealed after the original white limestone surface fell away over the centuries. It had smooth sides and rose to a height of 344 feet over the surrounding landscape. Still an impressive sight, it pales in comparison to the famed Great Pyramid built by Snefru’s son Khufu at Giza near Cairo (Figure 3.21). The Great Pyramid at Giza was 756 feet long on each side and originally 481 feet high. Its base covers four city blocks and contains 2.3 million stone blocks, each weighing about 2.5 tons. Even more than the Pyramid of Djoser, the Great Pyramid is a testament to the organization and power of the Egyptian state.
Later pharaohs of the Old Kingdom built two additional but slightly smaller pyramids at the same location. All align with the position of the Dog Star Sirius in the summer months, when the Nile floods each year. Each was also linked to a temple along the Nile dedicated to the relevant pharaoh.
Egyptian rulers invested heavily in time and resources to construct these tombs. In the mid-fifth century BCE, the ancient Greek historian Herodotus recorded that the pyramid of Khufu took 100,000 workers twenty years to construct. Herodotus lived two thousand years after this pyramid was built, however, so we might easily dismiss his report as exaggeration. Modern archaeologists suspect that a much smaller but still substantial workforce of around twenty thousand was likely employed. Excavations at the site reveal that these workers lived in cities built nearby that housed them as well as many others dedicated to feeding and caring for them. The workers were not enslaved, as is commonly assumed. Indeed, they likely enjoyed a higher standard of living than many other Egyptians at the time.
As the pyramid and temple complexes became larger and more numerous during the Old Kingdom, so too did the number of priests and administrators in charge of managing them. This required that ever-increasing amounts of wealth be redirected toward these individuals from the central state. Over time, the management of the large Egyptian state also required more support from the regional governors or nomarchs and administrators of other types, which meant the pharaohs had to delegate more authority to them. By around 2200 BCE, priests and regional governors possessed a degree of wealth and power that rivaled and sometimes surpassed that of the pharaohs themselves. For all these reasons and more, centralized power in Old Kingdom Egypt weakened greatly during this time, and scholars since the nineteenth century have referred to it as the First Intermediate Period.
Scholars once claimed that this was a time of chaos and darkness. As evidence, they noted the decline in the building of large-scale monuments like the giant pyramids as well as a drop in the quality of artwork and historical records during these decades. Modern research, however, has demonstrated that this is a gross simplification. Power wasn’t necessarily lost so much as redistributed from central to regional control. From the perspective of the reigning noble families, this may have seemed like chaos and disorder. But it was not necessarily the dark age older generations of historians believed it to be.
A Second Age of Egyptian Greatness
The First Intermediate Period came to an end around 2040 BCE as a series of powerful rulers, beginning with Mentuhotep II, was able to reestablish centralized control in Egypt. This led to the rise of what we now call the Middle Kingdom Period, which lasted nearly 260 years.
In the year 1991 BCE, Amenemhat, a former vizier (adviser) to the line of kings who established the Middle Kingdom, assumed control and founded a line of pharaohs who ruled Egypt for two centuries. Under the leadership of these pharaohs, Egypt acquired its first standing army, restarted the large-scale building projects known in earlier times, made contacts with surrounding peoples and kingdoms in the Levant and in Kush (modern Sudan), and generally held itself together with a strong centralized power structure.
Link to Learning
New Kingdom pharaohs circulated a work of literature that foretold the rise of Amenemhat, who would bring an end to disorder and restore Egypt to prosperity. This ancient work, called the Prophecy of Neferty, is available in an English translation from University College London.
During the Middle Kingdom Period, pharaohs introduced the cult of the deity Amon-Re at Thebes. Amon-Re was a combination of the sun-god Re, the creator god worshipped in the north of Egypt, and Amon, a sky god revered in the south. He was portrayed as the king of the gods and the father of each reigning pharaoh. The pharaohs of the Middle Kingdom no longer constructed massive pyramids for their tombs. Instead, they focused on erecting massive temples to Amon-Re and his wife, the mother-goddess Mut, at Thebes (Figure 3.22). The ruins of these temples are located at Karnak in southern Egypt. Amon-Re’s temples featured immense halls whose roofs were supported by many columns or colonnades, as well as courtyards and ceremonial gates. They housed the sacred images of the deities, which on festival days were brought out in ritual processions.
Middle Kingdom Egypt reached its height in the 1870s and 1860s BCE during the reign of Senusret III, a powerful warrior pharaoh and capable administrator of the centralized state. He greatly expanded Egypt’s territorial control, leading armies up the Nile into Kush and into the Levant. These efforts not only strengthened Egypt’s ability to protect itself from invasion but also greatly increased the flow of trade from these regions. Kush was known for its rich gold deposits and capable warriors, and Senusret III’s several campaigns there brought Egypt access not only to the gold but also to mercenaries from Kush.
Senusret also dramatically increased the degree of centralized power held by the pharaoh, reducing the authority and even the number of the nomarchs. Overall, Egypt now grew wealthier, safer, more centralized, and more powerful than it had ever been. As a result, his reign was also a time of cultural flourishing when Egyptian art, architecture, and literature grew in refinement and sophistication (Figure 3.23).
The deaths of Senusret III and his son Amenemhat III led indirectly to a rare but not unprecedented transfer of royal power to an Egyptian woman. Sobekneferu, the daughter of Amenemhat III and possibly the wife or sister (or both) of Amenemhat IV, was the first woman to rule Egypt since before the Old Kingdom. She reigned for only a few years, and little is known of her accomplishments. But scholars have determined that she was the first pharaoh to associate herself with the Egyptian crocodile god Sobek. She may even have commissioned the construction of the city of Crocodilopolis to honor this important god. Because she died without having had children, she was the last in the long series of pharaohs in the line of Amenemhat I.
Even before the reign of Sobekneferu, Egypt was already experiencing some degree of decline. Over the next century, the pharaohs and their centralized control became steadily weaker. Increasing numbers of Semitic-speaking peoples from the Levant flowed into Egypt, possibly the result of increased trade between Egypt and the Levant at first. But by the late 1700s BCE, these Semitic-speaking groups had grown so numerous in the Nile delta region and centralized control of Egypt had grown so weak that some of their chieftains began to assert control in a few areas. The Egyptians called these Semitic-speaking chieftains Heqau-khasut (rulers of foreign lands). Today they are more commonly called Hyksos, a Greek corruption of this Egyptian name.
By the time the Hyksos were asserting their control over parts of the Nile delta, Egypt was already well into what historians of the nineteenth century dubbed the Second Intermediate Period. Like the First Intermediate Period, the second was a time of reduced centralized control. Not only did the Egyptian nobles, ruling from their capital in Thebes, lose control of the delta, they also lost territory upriver to an increasingly powerful kingdom of Kush in the south. This meant that the territory once controlled by the powerful centralized state bureaucracy was effectively split into three regions: one ruled by the Hyksos in the north, one by the Kushites in the south, and one by the remnants of the Egyptian nobility in the center.
Despite the fragmentation, for most of this period the three regions of Egypt appear to have maintained peaceful relationships. That changed, however, beginning in the 1550s BCE when a string of Theban Egyptian rulers was able to go on the offensive against the Hyksos. After the Hyksos were defeated and the Nile delta recaptured, the emboldened Egyptians turned their attention south to Kush, eventually extending their control over these regions as well. These efforts ushered in a new period of Egyptian greatness called the New Kingdom, the high-water mark of Egyptian power and cultural influence in the ancient world.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://human.libretexts.org/Courses/Coastline_College/Hum_C100%3A_Introduction_to_the_Humanities_(Volmer)/02%3A_Ancient_Worlds/2.04%3A_Ancient_Egypt",
"book_url": "https://commons.libretexts.org/book/human-206684",
"title": "2.4: Ancient Egypt",
"author": "OpenStax"
} |
2.5: North Africa’s Mediterranean and Trans-Saharan Connections
Learning Objectives
By the end of this section, you will be able to:
- Describe the interactions between North Africa, the Levant, and Europe
- Analyze the trade routes from North Africa to the Mediterranean, the Sahara, and the Levant
The Mediterranean coast of North Africa has been a crossroads of civilizations for millennia. Beginning in the first millennium BCE, it was occupied successively by a string of invaders, including the Phoenicians, Greeks, Romans, Vandals, and Arabs, and it has been the site of countless internal migrations, as in the case of the Mauri and Massylii peoples. One result of these interactions was a long-term process of cultural commingling, reabsorption, and acculturation that has left a rich tapestry of human societies in its wake.
North Africa and Egypt
The Phoenicians were responsible for the earliest known trade network that unified the Mediterranean world. A Semitic-speaking and seafaring people originally from the Levant, or eastern Mediterranean coast, the Phoenicians emerged initially from the areas around Tyre in what is present-day Lebanon (or Canaan in the Bible, which refers to the Phoenicians as the Canaanites). Around the end of the tenth century BCE, the Phoenicians began to found a series of trading posts and colonies along the Mediterranean coast, a loop of interconnected settlements that eventually stretched from Byblos in the east to Nimes and Gadir (Cadiz) in the west and Libdah (Leptis Magna) in the south (Figure 9.19). In 814 BCE, they established what would become their greatest settlement, Carthage. Located on the North African coast in modern-day Tunisia, Carthage was in an ideal position to dominate the trade activities of the western Mediterranean.
The government of Carthage was originally a monarchy, but by the turn of the fourth century BCE, it had given way to a republic. The Carthaginian republic was singled out for praise by the Greek philosopher Aristotle, who considered it the perfect balance between monarchy, aristocracy, and democracy. By this time, Carthage had been the dominant military power of the western Mediterranean for almost a century. By 300 BCE, the city controlled dozens of the trading towns that dotted the North African coast. Thus, by the start of the third century BCE, Carthaginian power and influence could be felt along a thousand-mile stretch of the Mediterranean.
At the time of Carthage’s founding, the population of the Maghreb—the western half of North Africa, including most of present-day Morocco, Algeria, Tunisia, and Libya—spoke indigenous languages and had adapted their lives to the landscape. Those who lived on the coastal plains were mostly settled farmers, while those who lived in the Atlas Mountain range were seminomadic pastoralists, and nomadic peoples lived in the Sahara. To the Carthaginians, Greeks, and Romans, these North African natives were collectively known as Berbers 1, a pejorative term equating to “barbarian.” This simplification belies the political, social, and cultural complexities of what was actually a wide range of different African ethnic groups and societies, including the Mauri, Massylii, Musulamii, Masaesylii, Garamantes, and Gaetuli. These groups had built impressive societies of their own. For example, the Garamantes developed a major urban society in the Libyan Fezzan, and the Masaesyli established the kingdom of Numidia. Over centuries of interaction, cooperation, and tension, a rich tapestry of customs, values, and traditions developed among these groups—who typically refer to themselves as Amazigh, or Imazighen, rather than Berber—to produce societies and cultural practices well suited to the Maghreb region of North Africa.
A city like Carthage, with its republican form of government and leaders who exercised political authority, levied taxes, enforced laws, and employed armed forces for defense, was foreign to the indigenous systems of the area. Early in its history, Carthage adopted a pragmatic relationship with its neighbors; it paid tribute to them to forestall attacks and facilitate good relations. But that changed after 480 BCE, when Carthage stopped paying tribute and moved to subjugate the region’s peoples. In time, not only were the coastal towns dominated by a Punic-speaking elite of Carthaginian origin, but the peoples of the inland towns also began to adapt to and adopt Carthaginian culture. They spoke Punic, the language of the Phoenicians, and on occasion also Greek, particularly if they held positions of power or engaged in trade with Carthage. Moreover, the urban elite in these towns began to emulate Carthage, and eventually they founded their own states inland, such as Mauretania, which was established by the Mauri and Massylii peoples of the Atlas Mountains.
Carthaginian control over North Africa did not go unchallenged. The most significant threat to its dominance emerged with the rise of the Roman Republic in the third century BCE. To that point, the Romans had been preoccupied with consolidating their control over the Italian peninsula south of the River Rubicon, a process finally brought to an end with the defeat of Magna Graecia (as southern Italy was then called) in the Pyrrhic War in 275 BCE. Rome’s encounter with Carthage led to three long and exhausting conflicts known as the Punic Wars. The first began in 264 BCE, and the third ended in 146 BCE.
The most famous of these is the Second Punic or Hannibalic War, during which the Carthaginian general Hannibal Barca (Figure 9.20) invaded Italy with tens of thousands of soldiers and dozens of war elephants. For over a decade, Hannibal terrorized the Romans, destroying their armies at Trebia (218 BCE), Lake Trasimene (217 BCE), and, in one of the most lethal battle days in history, the village of Cannae in 216 BCE. The Carthaginians were unable to inflict a decisive defeat on the Romans, however, and the deadlock remained unbroken until the Roman general Publius Cornelius Scipio took the war to Carthage, forcing Hannibal to return from Italy to North Africa to defend it.
Carthage was finally defeated at the Battle of Zama in 202 BCE, a victory that earned Scipio the honorary name Africanus (Figure 9.21). Five decades later, urged on by the conservative senator Marcus Portius Cato (known to history as Cato the Censor), Rome returned to finish the job. The Third Punic War (149–146 BCE) ended with the destruction of Carthage. Tunis, some twenty miles inland, became the capital of Rome’s new African province.
The absorption of North Africa into the Roman Empire greatly affected the Indigenous African peoples of the region. Carthage had needed very little from the peoples who lived beyond its hinterland. What food the population required was supplied by Carthaginian estates located on the outskirts of the city. Rome, on the other hand, needed a great deal from the Maghrebi interior, including grain and olive oil to feed the capital’s growing urban population. Decades of Roman development of the inland territory resulted in farms that, by the first decades of the Common Era, were generating hundreds of thousands of gallons of olive oil and millions of tons of wheat per year—all destined to feed the residents of Rome. This bounty earned North Africa the nickname “the breadbasket of Rome.”
The intensification of agricultural production in the Maghreb led to the institution of individual land ownership and huge seasonal migrations of nomads and their animals to the coastal plains for work. To help control the flow of people during these periods and to protect crops from migrating cattle, the Romans established limes (lee-meis), or lines of fortified frontier posts that marked out the territorial limits of Roman occupation. Agricultural production also led to new growth for the old Phoenician coastal cities, which became commercial centers for shipping produce and livestock to Rome.
Eastern North Africa was also the site of great change in antiquity. In 332 BCE, Alexander the Great, king of Macedon in Greece, conquered Egypt, and before leaving to continue his advance into western Asia, he founded the great city of Alexandria on the Nile River. Following Alexander’s death in 323 BCE, his generals warred with each other over control of the empire and eventually divided the vast territory among themselves. One officer, Ptolemy, took Egypt and founded a dynasty that ruled it for the next three centuries. Of necessity, the Ptolemies styled themselves as pharaohs to demonstrate continuity from pharaonic times through Alexander to themselves (Figure 9.22). They adapted the Egyptian style partially because they were awed by the history and grandeur of Egypt and also because the Ptolemies wanted the people to see them as legitimate rulers of Egypt, not foreigners. Like all the Hellenistic (or Greek-like) monarchs of the three-hundred-year period following the death of Alexander, they encouraged Greeks from around the Mediterranean to settle in one of the three Greek city-states established by the Macedonian conquerors, including Alexandria. The Ptolemies also enticed Jewish people from Palestine to settle in northern Egypt, making Alexandria one of the most cosmopolitan cities in all antiquity.
The focal point of Greek culture in Egypt was the Museon or Museum of Alexandria. This “Home of the Muses” was much more than a place to see artifacts from the past; it was also the world’s largest library, housing some 700,000 scrolls representing all the knowledge of the known world. It had laboratories for the study of human anatomy and astronomy and was the home of dozens of intellectuals, who studied everything from geography and physics to literature and geometry. In many ways, it displaced the Academy at Athens as the ancient world’s center of learning. It was at the Museon, for example, that the canonical versions of Homer’s Iliad were identified, and that Eratosthenes developed his geographic understanding of the earth and estimated its circumference as between 24,500 and 29,000 miles (today we know it is 24,900 miles).
Day-to-day governance of Ptolemaic Egypt was in the hands of Egyptian officials and of Greek officials who brought Egyptian translators. To convincingly style themselves as pharaohs, the Ptolemies turned to religion. One of Ptolemy’s first acts as ruler of Egypt was to seize the body of Alexander the Great as it was being transported from Babylon home to Macedonia. Ptolemy had an elaborate tomb built for Alexander and made it a focal point of the capital at Alexandria. He then declared Alexander a god, a move fully in keeping with the status bestowed upon him in life by Egypt’s priests of Amun at Siwah in 332 BCE. In addition, Ptolemy had a temple dedicated to the new god Serapis built in the capital city. Serapis was an extraordinary deity demonstrating how astute Egypt’s Greek rulers were. A fusion of the Egyptian deities Osiris and Apis and the Greek deities Zeus and Helios, Serapis allowed the very different subjects of Ptolemaic Egypt to find common ground in worship. To further cement their position as Egypt’s legitimate rulers, the Ptolemies carried out the religious duties of the pharaoh, including dedicating new temples to Egyptian gods, visiting shrines throughout the country, and declaring themselves the inheritors of Alexander’s godlike mantle.
The last of the Ptolemies was Cleopatra VII (Figure 9.23). A brilliant politician with a strong character, Cleopatra spoke upward of a dozen languages and was the only Greek ruler of Egypt fluent in Egyptian. Politically ambitious, she was determined to preserve what autonomy she could in the face of Rome’s growing dominance of the Mediterranean. To this end, she had an affair with the Roman general and dictator Julius Caesar and bore him a child named Caesarion. When Caesar fell afoul of the Roman Senate (whose members suspected him of wanting to be king) and was assassinated in 44 BCE, Cleopatra shifted her affection to the inheritor of Caesar’s armies, Marc Antony. This strategy was ill-fated, however, because Marc Antony was increasingly embroiled in a conflict with Octavian, Caesar’s adopted son, which soon erupted in an all-out civil war between the two.
The climax of the war came at the Battle of Actium in 31 BCE, during which the naval forces of Octavian and Marcus Agrippa defeated those of Marc Antony and Cleopatra. Octavian pursued the vanquished pair and invaded Egypt in 30 BCE; soon after, Marc Antony and Cleopatra died by suicide. Cleopatra’s death marked the end of the Ptolemaic dynasty and the beginning of Roman rule in Egypt.
The Past Meets the Present
Ancient Perspectives on Cleopatra
Can we ever know history with certainty? Only the smallest fraction of anything ever written in antiquity survives today, and much of that was set down long after the events and people it describes, possibly by writers hostile to their subject matter. Figures such as Cleopatra have been the source of endless ancient propaganda and character assassinations. Read the following excerpts by ancient writers describing Cleopatra, and consider the information they provide.
Why Cleopatra, who heaped insults on our army, a woman worn out by her own attendants, who demanded the walls of Rome and the Senate bound to her rule, as a reward from her obscene husband? . . . Truly that whore, queen of incestuous Canopus, a fiery brand burned by the blood of Philip, dared to oppose our Jupiter with yapping Anubis, and forced Tiber to suffer the threats of Nile, banished the Roman trumpet with the rattle of the sistrum, chased the Liburnian prow with a poled barge, spread her foul mosquito nets over the Tarpeian Rock, and gave judgements among Marius’ weapons and statues.
—Propertius, Poems III
It would have been wrong, before today, to broach the Caecuban wines from out the ancient bins, while a maddened queen was still plotting the Capitol’s and the empire’s ruin, with her crowd of deeply-corrupted creatures sick with turpitude, she, violent with hope of all kinds, and intoxicated by Fortune’s favor. But it calmed her frenzy that scarcely a single ship escaped the flames, and Caesar reduced the distracted thoughts, bred by Mareotic wine, to true fear, pursuing her close as she fled from Rome, out to capture that deadly monster, bind her, as the sparrow-hawk follows the gentle dove or the swift hunter chases the hare, over the snowy plains of Thessaly.
—Horace, Cleopatra
For she was a woman of surpassing beauty, and at that time, when she was in the prime of her youth, she was most striking; she also possessed a most charming voice and a knowledge of how to make herself agreeable to every one. Being brilliant to look upon and to listen to, with the power to subjugate every one, even a love-sated man already past his prime, she thought that it would be in keeping with her rôle to meet Caesar, and she reposed in her beauty all her claims to the throne. She asked therefore for admission to his presence, and on obtaining permission adorned and beautified herself so as to appear before him in the most majestic and at the same time pity-inspiring guise. When she had perfected her schemes she entered the city (for she had been living outside of it), and by night without Ptolemy's knowledge went into the palace.
—Cassius Dio, Roman History XLII
For her beauty, as we are told, was in itself not altogether incomparable, nor such as to strike those who saw her; but converse with her had an irresistible charm, and her presence, combined with the persuasiveness of her discourse and the character which was somehow diffused about her behaviour towards others, had something stimulating about it. There was sweetness also in the tones of her voice; and her tongue, like an instrument of many strings, she could readily turn to whatever language she pleased.
—Plutarch, Life of Antony XXVII
- Who was Cleopatra? What was her character like? What might have motivated these widely varying descriptions of her?
- How might these ancient accounts have gotten Cleopatra wrong? How much do you think we are likely to “know” about ancient people?
Link to Learning
The tradition of interpreting and depicting Cleopatra is presented in this article.
Rome’s conquest of Egypt added yet another layer of complexity to Egyptian society. While Latin-speaking governors and administrators continued to run the affairs of state from Alexandria, they did so in Greek, which remained the language of government. Rome invested heavily in developing Egypt’s largest cities and creating inviting cosmopolitan spaces eventually inhabited by Greeks, Jewish people, Romans, and assimilated Egyptians. Still, under the Romans, the majority of Egyptian subjects lived in rural areas. In more than two thousand villages scattered throughout the Nile delta and along the Nile valley, people labored to produce the tons of grain that supplied the imperial capital with bread. These people were also hardest hit by Roman taxation, a circumstance that inspired periodic revolts against the empire.
Roman imperial administration over North Africa remained constant until the fourth and fifth centuries CE, when the weakened Western Roman Empire confronted a new series of challenges, including widespread barbarian invasions. One invading group was the Vandals. A Germanic people originating in present-day Poland, the Vandals migrated westward in the second century CE and settled in the region of Silesia. By the third century, they had been contained in the Roman province of Pannonia (a sizable territory that included parts of modern-day Hungary, Austria, Croatia, Serbia, Bosnia and Herzegovina, and Slovenia), but they pushed west in the face of the advance of the Huns, nomadic steppe people from central Asia.
By the fifth century, Vandals had migrated to Gaul and the Iberian Peninsula. Around 430, under their leader Genseric, they were invited by Bonifacius, Rome’s governor in North Africa, to help him establish himself as a ruler independent of Rome. For the next several years, the Vandals fought Rome’s imperial forces on behalf of Bonifacius, who died at the Battle of Rimini in Italy in 432. Rome finally agreed to a peace treaty that granted the Vandals control of Mauretania and the western half of Numidia. Unsatisfied, Genseric then pursued a plan to extend his control over Roman North Africa by breaking the treaty and invading Carthage, which he seized in 439.
The Vandals remained in control of the Maghreb region of Roman North Africa until the early sixth century, when Byzantine forces under the general Belisarius reconquered the territory and forced the Vandal king Gelimer to surrender in 534. Less than a century later, a new power from the east threatened the Byzantine position in North Africa. Beginning in the 640s, the armies of Islam advanced, conquering Byzantine Egypt in 642. Using Egypt as a forward position, they then launched successive invasions across the region until the final Byzantine strongholds of North Africa, including Carthage, fell. By 709, the whole of North Africa had been conquered.
North African and Trans-Saharan Trade
Trans-Saharan trade—the movement of goods between oases and larger settlements in North and West Africa—has existed in one form or another since at least the ninth century BCE. Over time, this system grew from the relatively localized trade in agricultural products and iron goods centered on the Phoenician city of Carthage. It became a continent-wide system of exchange that moved commodities such as copper, salt, ivory, enslaved people, textiles, and gold between what is now Senegal in West Africa and Egypt in the east, reaching as far south as Niger and as far east as Somalia in the Horn of Africa (Figure 9.24). At its height, the trans-Saharan exchange of goods influenced commerce and finance across the whole of North Africa, as well as the economies of Europe and the Near East. This system of trade was made possible by the nomadic peoples of North and West Africa.
In the ninth century BCE, African farmers supplied the Phoenician towns of North Africa with food. In exchange, the Phoenicians introduced these peoples to innovative technologies such as ironworking. Over centuries of interaction, the two groups intermarried and became an integral part of North African society. From around the seventh century BCE, Phoenician merchants relied on the herders of the Atlas Mountains (in present-day Morocco) and of the northern Sahara to their south.
Indigenous peoples of North Africa had long maintained contact across the Sahara, but it could be tenuous due to the inherent risks of desert travel, including attacks on trading caravans and slave raids by the Garamantes desert people of Libya. Helping facilitate contact in the desert extremes were small settlements of seminomadic peoples at a fragile line of oases, forging a point-to-point trading system. Thus, early trade in the Sahara was a matter not of transporting goods across the vast desert expanse but rather of passing them from oasis to oasis. A principal commodity exchanged during this early stage of trade was salt, which was carried to the south and acted as a sort of currency. Salt was highly prized in the agricultural communities south of the Sahara where the mineral is scarce. This is because humans require salt to maintain healthy bodily functions and must regularly consume salt to replace its loss through sweat and urination. The Saharan traders knew where the salt was located, accessed it for themselves, and traded the substance for goods they could not otherwise obtain. Only gradually were highly valuable trading goods introduced, such as gold and copper, which were then brought across the desert from tropical West Africa to the far reaches of the North African coast.
During the period of Carthaginian dominance in Tunisia, goods were carried by pack animals such as mules, horses, and donkeys between the Phoenician imperial capital and the independent African kingdoms in the mountainous and coastal regions to its west. These kingdoms, known to the Romans as Mauretania and Numidia, had extended their control over much of North Africa by the second century BCE as Carthage declined and Rome ascended. For a time, the Romans and the North African kingdoms enjoyed a relatively peaceful and prosperous alliance, but gradual Roman interference in the domestic political affairs of the Numidian state caused their relations to sour, and eventually Rome conquered both Numidia and Mauretania.
In typical fashion, the Romans established large estates as well as towns in the newly conquered territories. Their administration outside these enclaves reached only so far, however. Beyond them, the region remained under the dominance of the people native to the area, in both language and culture. But it was in the strategic interests of Rome to secure the southernmost frontiers of these new provinces. Doing so effectively required not only establishing a border but also patrolling it. This was impossible with horses, so the Romans used the dromedary camel (one-hump camel) (Figure 9.25). Biologically equipped to survive desert extremes, the camel was the ideal means to help secure Rome’s new southernmost frontier.
The introduction of the dromedary camel, originally from Arabia, into North Africa revolutionized the trans-Saharan trade, but its adoption across the region was slow. The first camels in North Africa may have reached Egypt as early as the ninth century BCE, but it was not until the third and fourth centuries CE that camel use spread to the African nomad groups of the northern Sahara, likely helped along by Roman use of the animal. By the fifth century, the camel had become a major form of transportation in the region. The camel had many advantages over other pack animals. It could maintain a steady pace over much longer distances than oxen, and it could carry upward of three hundred pounds of goods an average of fifteen to eighteen miles a day. Further, the camel’s capacity to store fat and water enabled it to travel up to ten days without stopping for fresh water, more than twice the time and distance of almost every other pack animal. Added to this was the camel’s unique splayed foot, which allowed it to walk easily in the soft, sandy conditions of the Saharan environment.
The camel enabled desert nomads to reach more distant oases than ever before and so open entirely new routes across the desert. Although desert travel remained precarious and filled with risk, it certainly became more reliable. For the first time, it was possible for desert travelers to consider dispatching large-scale and regular long-distance trading caravans across the Sahara. Despite this, desert transport remained largely in the hands of the nomadic peoples of the region, principally the Sanhaja in the west and the Tuareg of the central and southern Sahara. Although trans-Saharan trade was growing at this time, it was not yet full-time work, so these groups remained largely nomadic pastoralists, harvesting date palms and grazing their flocks and herds at oases.
In many cases, they tended goats and sheep, but they often also had camels and cattle and occasionally horses. These animals all had to graze, and when they were unable to do so at oases because of either distance or weather, the nomads were forced to find other grazing land. This was particularly the case during the hottest and driest seasons, when the nomads migrated their flocks and herds to the better grazing areas of the Maghreb in the north or the Sahel in the south. Inevitably, this brought them into contact with the more settled agricultural peoples of these areas, and often into conflict as they competed for precious resources in a hostile environment. Beyond these settlements, the nomadic pastoralists dominated the Sahara. Yet there were other peoples in the desert, including small groups such as the Haratin who also called the oases home. They harvested dates and dug salt to exchange for food but were often kept in a subordinate position by the nomads, who controlled the oases.
As the camel transformed desert transport, the products of sub-Saharan Africa became more readily available to the Mediterranean world. Trade in West African gold expanded, demand increased for such goods as ivory and ostrich feathers, and large animals were hunted to extinction in North Africa. As cross-desert traffic grew, several new settlements developed to aid the movement of goods north and south of the Sahara, including Sijilmasa, Ghat, Gao, Awdaghust, and Kano. At sites such as these, goods were exchanged, and camel caravans were unloaded and replenished to continue their journey across the desert. While the desert traffic in goods remained in the hands of nomads, the actual demand for and exchange of goods was largely the work of peoples of settled societies to the north and south of the Sahara.
Footnotes
- 1 There is a growing awareness about the use of the term Berber to describe indigenous North Africans, many of whom self-identify as Amazigh, or Imazighen (plural). With this understanding, although we have introduced the term Berber as the most commonly used name in English, we have generally preferred to use the term Amazigh in this text.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://human.libretexts.org/Courses/Coastline_College/Hum_C100%3A_Introduction_to_the_Humanities_(Volmer)/02%3A_Ancient_Worlds/2.05%3A_North_Africas_Mediterranean_and_Trans-Saharan_Connections",
"book_url": "https://commons.libretexts.org/book/human-206684",
"title": "2.5: North Africa’s Mediterranean and Trans-Saharan Connections",
"author": "OpenStax"
} |
2.6: The Persian Empire
Learning Objectives
By the end of this section, you will be able to:
- Discuss the history of Persia through the reign of Darius I
- Describe the origin and tenets of Zoroastrianism
- Identify the achievements and innovations of the Persian Empire
In conquering Mesopotamia, Syria, Canaan, and Egypt, the Neo-Assyrians created the largest empire the Near East had ever seen. Their dominance did not last, however, because Babylonia and Media destroyed the empire and carved up the spoils. But this proved to be a transitional period that set the stage for an empire that dwarfed even that of the Neo-Assyrians. Emerging from the area to the east of Mesopotamia, the founders of the Persian Empire proved to be both excellent warriors and efficient imperial organizers. Their kings commanded power, wealth, and authority over an area stretching from the Indus River to the Nile. Governors stationed in the many conquered regions served as extensions of the king’s authority, and trade flowed along a large network of roads under Persian military protection. For two centuries, Persia was the undisputed superpower of the ancient world.
The Rise of Persia
The origins of the Persians are murky and stretch back to the arrival of nomadic Indo-European speakers in the Near East, possibly as early as 2000 BCE. Those who reached Persia (modern Iran) are often described as Indo-Iranians or Indo-Aryans. They were generally pastoralists, relying on animal husbandry, living a mostly migratory life, and using the horse-drawn chariot. The extent to which they displaced or blended with existing groups in the region is not clear. From written records of the rise of the Neo-Assyrian Empire during the ninth century BCE, we know the Assyrians conducted military campaigns against and exacted tribute from an Indo-Iranian group called the Persians. The Persians lived in the southern reaches of the Zagros Mountains and along the Persian Gulf, in general proximity to the Medes with whom they shared many cultural traits.
Much of what we know about the early Persians comes from the work of the ancient Greek historian Herodotus, who was born about 484 BCE (Figure 4.28). According to Herodotus, Persia was made a vassal of Media in the seventh century BCE but freed itself in the sixth century BCE under the leadership of Cyrus II, also called Cyrus the Great. Inscriptions from the period suggest that Cyrus was likely a member of the Persian royal family, the Achaemenid dynasty. Once in power, he reorganized the Persian state and its military to mirror those of the Median Empire. This step included creating divisions for cavalry, archers, and infantry and setting up special training for the cavalry. Then, in 550 BCE, just a few years into his reign, Cyrus sent his military to challenge the Medes, whereupon the Median troops revolted, handed over their king, and accepted Cyrus’s rule. He then proceeded to integrate the Median elite and officials into his own government. The Median domains had become the Persian Empire.
Between 550 and 539 BCE, Cyrus sent his armies east and west to expand his recently acquired realm. In 539 BCE, he turned his attention to the Neo-Babylonian Empire, defeating its armies and marching into Babylon. His Persian Empire now incorporated the territories controlled by Babylonia, including Mesopotamia, Syria, Phoenicia, and Judah, and had become the largest empire to have existed in the Near East to that point. Organizing and administering this massive domain required the use of governors, whom Cyrus generally selected from local areas, a prudent move in a world where rebellions were common.
Link to Learning
Cyrus the Great left a record of his conquest of Babylon inscribed on a clay spool about eight inches long. This artifact, now called the Cyrus Cylinder, was created in 539 BCE and promptly buried in the foundation of the city wall, to remain there until its discovery in 1879. Explore this link to the British Museum website to see a high-resolution image of the Cyrus Cylinder and a translation of its inscriptions.
Cyrus died in battle in 530 BCE, leaving the throne and empire to his son Cambyses II. The first task for Cambyses was to continue preparing for the invasion of Egypt his father had planned. A large fleet was built in the Mediterranean and a massive land force assembled for crossing the Sinai. The invasion began in 525 BCE. The defending Egyptians were soon overwhelmed, and the pharaoh retreated up the Nile but was captured. Having now added Egypt to his already large empire, Cambyses took on the role of pharaoh, adopting the proper titles and caring for the Egyptian religious institutions. This practice of respecting local traditions was a common feature of Persian expansion and helped to win support in newly acquired areas.
Under Cambyses II, the Persian Empire stretched from the edges of India to the shores of the Aral Sea, the Aegean coast of Anatolia, and the Nile River and included everything in between. Then, just as Cambyses was reaching the height of his power, a Persian revolt broke out in 522 BCE in support of his brother Bardiya. On his way to put down the rebellion, Cambyses II died, leaving the future of the empire uncertain but allowing for the rise of possibly Persia’s most famous and powerful leader, Darius I.
Darius I and the Reorganization of the Empire
The events surrounding the rebellion of Cambyses II’s brother Bardiya are unclear because a handful of different accounts survive. According to Herodotus, Cambyses ordered one of his trusted advisers to secretly murder Bardiya. Since no one knew Bardiya was dead, an impostor pretending to be him launched a rebellion against Cambyses, though after several months the false Bardiya was killed in a palace coup at the hands of Darius, an army officer who claimed descent from the royal house. Afterward, since neither Cambyses nor Bardiya had sons, Darius made himself king. Other accounts differ in some ways, and some scholars have speculated that Darius invented the story about a false Bardiya in order to legitimize his own coup against the real Bardiya and take the throne.
We may never know exactly what happened, but Darius was indeed able to grasp control of the Persian Empire in 522 BCE. However, it took more than a year for him to put down the ensuing rebellions, some possibly instigated by those who refused to recognize the legitimacy of his claim to the throne. Once these had been quelled, Darius commissioned an enormous relief inscription to be made on the cliff face of Mount Behistun. It shows a dominating figure of himself facing a number of bound former rebels, accompanied by lengthy descriptions of the rebellion and, in three different languages, Darius’s version of the events that led to his rise to power (Figure 4.29). To further strengthen his claim on the throne, Darius integrated himself deeply into the royal line through a number of marriages: to the daughters of Cyrus II, the widow of Cambyses II, and two of Cambyses’s sisters.
Darius now set about reorganizing the empire, carving it into twenty different governing districts called satrapies (Figure 4.30). Each satrapy was administered by a royal governor called a satrap, usually a trusted Persian or Median noble. Satraps answered directly to the king, had their own courts, wielded great power, and possessed vast lands within the satrapy. They often ruled from the large cities of the regions and were responsible for ensuring that their satrapy remained pacified and submitted its allotted taxes, though there were also local rulers within the region who managed affairs related to specific ethnic or religious groups. The only area not made into a satrapy was the Persian heartland, which was governed directly by the king.
Darius I and later kings had a number of tools at their disposal to keep the powerful satraps in line. For example, they frequently sent royal officials, known as the “eyes and ears of the great king,” to arrive unannounced and conduct audits, compiling detailed reports about how the satrapies were being governed that were sent directly to the king for review. If the reports were negative, the satraps could expect removal or even execution at the hands of the region’s military garrison. These garrisons were used by the satraps to enforce the laws and maintain order, but they ultimately answered to the king and could discipline the satraps when necessary.
Communication between the satraps and the king was carried out through letters dictated to scribes and transmitted along royal roads. These roads constituted an impressive communication system that linked the many key cities of the empire with the Persian heartland and its cities, like Susa, Persepolis, and Pasargadae. While it was not new—the Neo-Assyrian Empire had its own network of roads that the Persians adopted and improved—it was a valuable tool for administering the large and complicated empire. Along the many royal roads of the empire were inns, resting places, and waystations with stables for horses. Safety was ensured by the troops stationed along the way, especially at key and vulnerable points. To move letters along the roads, a member of the army of mounted royal messengers would travel the roughly twenty miles to the first station, change horses, and continue to the next station. In this way, communication could move roughly two hundred miles in a single day.
The Past Meets the Present
Persia and the U.S. Postal Service
The Persian Empire required a sophisticated communications network to move messages across its vast territory, so it relied on speedy couriers who traveled roads first developed by the Assyrians and then improved. The ancient Greek historian Herodotus commented on Persian communications in his famous Histories :
There is nothing that travels faster, and yet is mortal, than these couriers; the Persians invented this system, which works as follows. It is said that there are as many horses and men posted at intervals as there are days required for the entire journey, so that one horse and one man are assigned to each day. And neither snow nor rain nor heat nor dark of night keeps them from completing their appointed course as swiftly as possible. The first courier passes on the instructions to the second, the second to the third, and from there they are transmitted from one to another all the way through, just as the torchbearing relay is celebrated by the Hellenes in honor of Hephaistos. The Persians call this horse-posting system the angareion .
—Herodotus, Histories
Herodotus was not the only ancient author to describe the Persian courier system. The biblical Old Testament Book of Esther notes that not just horses were used:
And he wrote in the king Ahasuerus’ name, and sealed it with the king's ring, and sent letters by posts on horseback, and riders on mules, camels, and young dromedaries.
—Esther 8:10 (KJV)
Even today, many still marvel at the efficiency of the Persian courier system. When the chief architect for the Eighth Avenue post office in New York City came across Herodotus’s description, he thought it perfect for a large inscription on the new building (Figure 4.31). His paraphrase of Herodotus is still visible there. Popularly thought of as the U.S. Postal Service’s unofficial motto, it reads as follows: “Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds.”
- What purposes might the Persian courier system have served? How might the empire have functioned in the absence or breakdown of such a system?
- Why might the chief architect for the Eighth Avenue post office in New York City have selected Herodotus’s description?
Building projects were another important expression of Darius’s power and authority. During his reign, he undertook the construction of elaborate palaces at Susa, Persepolis, and Pasargadae (Figure 4.32). These were constructed and decorated by skilled workers from many different locations and reflected artistic influences from around the empire, among them fluted columns designed by Greek stonemasons, Assyrian reliefs carved by Mesopotamians, and a variety of other features of Egyptian, Lydian, Babylonian, Elamite, and Median origin. The many workers—men, women, and children—who built these palaces migrated to the construction sites and often lived in nearby villages or encampments.
Link to Learning
Explore a reconstruction of the palace complex of Persepolis as it may have appeared to a visitor in ancient Persia via the Getty Museum’s Persepolis Reimagined interactive exhibit.
Major infrastructure projects were also a feature of Darius’s reign. For example, he ordered the construction of a long canal that would have allowed ships to pass from the Red Sea into Egypt’s Nile River and thus to the Mediterranean. It is unclear whether he actually completed it. It seems unlikely, though Herodotus insists he did. Whatever the case, that Darius attempted this massive undertaking is a testament to the power and resources the kings of Persia had at their fingertips. Other infrastructure projects included the expansion and rebuilding of the many roads that crisscrossed the empire, as well as the construction of a number of qanats (Figure 4.33). These were long, underground tunnels used for carrying fresh water over many kilometers, usually for irrigation, and represented a major improvement over earlier technologies. They likely had been used before the Achaemenids, but their construction expanded with the rise of Persian power.
Persian Culture and Daily Life
The social order of the Persian Empire included a number of hierarchically organized groups. At the bottom were the enslaved. While the Persians did not have a long history of using slavery before becoming a major power, it was common in the regions they conquered. Over time, the Persian nobility adapted to the practice and used enslaved people to work their land.
Next in the hierarchy were the free peasants, who generally worked the land and lived in the villages of the empire. On the next level were the various kinds of artisans, and higher still were the educated classes of scribes, imperial recordkeepers, and important merchants. And higher than all of these was the ruling order, including priests, nobles, and warriors (Figure 4.34).
The Persian king occupied a place far above and removed from these groups. As the earthly representative of the god Ahura Mazda, he expected complete submission from everyone in the empire. Those in his presence had to prostrate themselves on the ground to acknowledge his superiority over them. Servants who came near had to cover their mouths so as not to breathe on him. His power was absolute, though he was restrained by custom and the advice of leading nobles. One of the most important of these nobles was the “Commander of a Thousand,” who managed the large court, served as a gatekeeper to any audience with the king, and oversaw the king’s personal protection service. As was the case in the Assyrian Empire, kings were not necessarily eldest sons. Rather, the current king could select his heir and frequently chose a younger son for any number of reasons.
The Persian king and his court seem not to have remained in one centralized capital. Rather, they moved periodically between the cities and regions of Babylon, Susa, Rhagae, Parthia, Ecbatana, Persepolis, and possibly others. One motive was a desire to avoid extreme weather during certain seasons, but there were also political considerations. For example, by moving across the countryside, the king made himself visible not only to important individuals in the cities but also to the many peasants in the villages that dotted the landscape. Thus, he allowed them opportunities to present him with petitions or seek his guidance.
Moving the court in this way was no easy feat, however. It required the efforts of thousands of people including officials, soldiers, religious leaders, wives, other women, and servants of all types, and the transport of horses, chariots, religious objects, treasure, and military equipment. In many ways, it was as though the state itself migrated with the seasons. The arrival of this migrating state in any major location was met with elaborate public ceremonies of greeting and welcome. Some contemporary descriptions detail how flowers and incense were laid along the city roads where the king moved. His dramatic entry was followed by the proper sacrifices to the local gods and an opportunity for the people to bring gifts to the king, such as exotic animals, jewels, precious metals, food, and wine. It was considered a great honor to present the king with a gift, and the gift-giving ceremonies served to strengthen the king’s relationship with his subjects.
The vast army of Persia had its own ceremonies and customs. Herodotus records that it was made up of a great number of subject peoples from around the empire, all with their own colorful uniforms. Military training began at a very young age and included lessons in archery, horseback combat, and hand-to-hand combat. The most talented of the infantrymen in the Persian army might hope to rise to the ranks of the Immortals, an elite, heavy-infantry combat force that served both in war and in the king’s personal guard. The larger army was made up of various units of infantry, archers, and cavalry. The largest unit was the corps, made up of ten thousand men. Each corps had a commanding officer who answered to the supreme commander. In battle, the archers would rapidly fire their arrows into the enemy as the cavalry and infantry advanced in their respective formations. Occasionally, when rebellions were put down or new territories added, the Persians deported the conquered populations elsewhere within the empire.
Because most records from the Persian Empire focus on kings, wars, the military, and high-level officials and bureaucrats, we know little about commoners. But we know that most ordinary Persians had diets of bread or mash made of barley, supplemented by figs, dates, plums, apples, almonds, and other fruits and nuts. Much more rarely, meals might also include goat, mutton, or poultry. Besides the military, the empire supported a host of other necessary occupations, such as sentinels, messengers, various types of attendants, architects, merchants, and numerous types of lower professions. The many agricultural workers grew traditional crops of the Near East, like wheat and barley, in addition to rice (brought from India) and alfalfa (for horse feed). Merchants in the Persian Empire benefited greatly from the stability created by the government and the extensive network of crisscrossing roads that connected the far-flung regions. Although long-distance trade was prohibitively expensive for most things except luxury goods, trade across short distances was apparently common.
The religion of the Persians was a tradition we describe today as Zoroastrianism. Its name comes from Zoroaster, the Greek pronunciation of the name of its founder, Zarathustra. Scholars today believe that Zoroaster likely lived at some point between 1400 and 900 BCE and was almost certainly a Persian priest, prophet, or both. His followers likely practiced a polytheistic religion similar in many ways to the Vedic traditions held by Indo-European speakers who migrated into India. Among Zoroastrians’ many gods were both powerful heavenly deities and more terrestrial nature gods. Ceremonies included various rituals similar to those of other polytheistic religions, such as the sacrifice of animals on outdoor altars.
Zoroaster appears to have emphasized the perpetual conflict between the forces of justice and those of wickedness. Over time, he developed supernatural personifications of these forces: Ahura Mazda was the lord of wisdom and the force of good (Figure 4.35), and Angra Mainu was the destructive spirit and the force of evil. Each was supported by lesser supernatural beings. On the side of Ahura Mazda were the ahuras who worked to bring good to the world, and on the side of Angra Mainu were the daevas who served the interests of evil.
The Persian followers of Zoroastrianism believed Ahura Mazda had created the world as an entirely good place. However, Angra Mainu was dedicated to destroying this perfection with evil, so the two forces fought for the supremacy of good or evil on Earth. The world the Persians saw around them was the product of their pitched battle. However, the fight would not last forever. At some appointed time in the future, Ahura Mazda would overcome the forces of Angra Mainu, and the followers of evil would face judgment and punishment for their crimes. It was up to humans to decide for themselves what path to follow. At the final judgment, the dead would be resurrected and made to walk through a river of fire. Those consumed by the fire were unworthy and would be condemned to torment in hell, while those who survived would live forever in a paradise with no evil.
While Zoroaster’s beliefs were not readily accepted by his own people, he found protection and a following among others, and in the centuries after his death, his ideas spread and changed. For example, the Medes incorporated their own priestly class into the Zoroastrian traditions. The Achaemenids borrowed artistic traditions from the Mesopotamians to depict Ahura Mazda in the same way they styled their important gods. Later, Judeans within the Persian Empire, who were from the Canaanite kingdom of Judah and followers of Judaism, incorporated many Zoroastrian ideas into their own religious traditions. These ideas went on to influence the religions of Christianity and Islam.
The Past Meets the Present
Zoroastrianism, Judaism, and Christianity
Zoroastrianism, Judaism, and Christianity may have emerged in the ancient world, but they are all still practiced today. And while in modern times these religions appear quite different, they share important similarities.
Consider these modern similarities between Zoroastrianism and Christianity. Both accept the idea of a powerful god as the source of all good, the existence of evil and deceptive forces that plague the world, a final judgment that occurs when the forces of evil have been vanquished forever, and a pleasant afterlife for those who follow the path of righteousness. These similarities are not the product of random accident. Rather, the connections between Zoroastrianism and Christianity date to developments within Judaism in the centuries before the birth of Christ.
It was likely that while the Judeans were subjects of the Persian Empire, they became acquainted with some of the ideas of Zoroastrianism, and these ideas influenced the way they understood their own monotheistic religion. The notion that a force of evil was responsible for the many problems in the world may have been a comforting thought for those who wanted to believe that God was both all-powerful and thoroughly benevolent. The concept of a final judgment was also appealing to Judeans, who held that they were not only God’s chosen people but also persecuted by the forces of evil. While these ideas begin to appear in Judean writings only in the centuries after the fall of Persia, the seeds had likely been planted much earlier through a growing familiarity with the tenets of Zoroastrianism. By the second century BCE, many followers of Judaism had come to accept the idea of a final judgment. It was this form of Judaism that ultimately influenced the fundamental tenets of Christianity.
- What do the connections between Zoroastrianism, Judaism, and Christianity suggest about the way religions borrow from each other? Can you think of other examples?
- How might modern Christianity be different had Judaism not been influenced by Zoroastrianism?
While the religion of the Persians was Zoroastrianism, the empire included peoples of many different religions, including Armenians, Nubians, Libyans, Phoenicians, Egyptians, Babylonians, Ionian Greeks, Bactrians, Judeans, and many others. Indeed, it was the Persian king Cyrus II who permitted the Judeans exiled in Babylon to return to Judah and rebuild their temple. The empire expected loyalty and the payment of tribute, but its kings were not interested in transforming their diverse peoples into Persians. Instead, they developed an imperial system that supported the maintenance of a multiethnic, multilingual, and multireligious empire.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://human.libretexts.org/Courses/Coastline_College/Hum_C100%3A_Introduction_to_the_Humanities_(Volmer)/02%3A_Ancient_Worlds/2.06%3A_The_Persian_Empire",
"book_url": "https://commons.libretexts.org/book/human-206684",
"title": "2.6: The Persian Empire",
"author": "OpenStax"
} |
2.7: The Hebrews
Learning Objectives
By the end of this section, you will be able to:
- Discuss the history of the Hebrews in the context of the development of the Near East
- Explain how the Hebrew faith differed from others in the same region and time period
The Hebrews, a Semitic-speaking Canaanite people known for their monotheistic religion, Judaism, have preserved a history of their people that claims very ancient origins and includes descriptions of early leaders, kings, religious traditions, prophets, and numerous divine interventions. That history, often called the Tanakh or Hebrew Bible in the Jewish tradition and the Old Testament in the Christian tradition, has survived for many centuries and influenced the emergence of the two other major monotheistic faiths, Christianity and Islam. While fundamentalist Christians and Orthodox Jews hold that the Bible is both divinely inspired and inerrant, historians must scrutinize the text and the rich history it records. This study and the careful work of archaeologists in the Near East have revealed a number of problems with accepting as infallible the story as recorded in the Hebrew Bible, but research has also opened our eyes to a history that is perhaps even more interesting than the account traditionally preserved.
The History of the Hebrews
The history of the Hebrews recorded in the Bible starts with the beginning of time and the creation of the first man, Adam. However, it is with the life of the patriarch Abraham that we begin to see the emergence of the Hebrews as a distinct group. Abraham, we are told, descended from Noah a thousand years before, and Noah himself descended from Adam a thousand years before that. Relying on the ages and generations referenced in the Hebrew Bible, we can deduce that Abraham was born around 2150 BCE in the Mesopotamian city of Ur. At the age of seventy-five, he left this city and traveled to the land of Canaan in the eastern Mediterranean. There Abraham and his wife Sarah had their first son together, Isaac. Isaac then had a son, Jacob, and Jacob in turn had twelve sons. From these twelve sons descend the traditional Twelve Tribes of Israel (Figure 4.36).
While this chronology explains how the Hebrews found themselves in Canaan, there is little to support it. There are no archaeological sites we can reference, and the only evidence we have for Abraham, his trip from Mesopotamia, and his children, grandchildren, and great-grandchildren comes from the Hebrew Bible. This has led some to suspect that the stories of Abraham and his family may have been developed much later than the Bible suggests. And in fact, historians have traced the story of Abraham to sources written down between the tenth and sixth centuries BCE. It is possible that Abraham was a historical person and part of an ancient migration recounted for centuries in oral form, but without additional records or archaeological discoveries that attest to his existence, we cannot know for sure.
The Hebrew Bible notes that Joseph, one of Abraham’s twelve great-grandsons, ended up in Egypt. Later, around 1800 BCE based on the biblical chronology, Joseph’s family joined him, and his descendants lived there for several generations. During this long time in Egypt, the Bible explains that the descendants of Joseph experienced increasingly poor treatment, including being enslaved by the (unnamed) Egyptian pharaoh and put to work on building projects in the Nile delta (Figure 4.37). Later, the pharaoh decided to kill all the male Hebrew children, but one was saved from the slaughter by being hidden in a basket to float down the Nile. He was discovered by the pharaoh’s daughter, who named him Moses and raised him among the Egyptian royalty as her own.
The Bible continues the story by explaining that the adult Moses discovered who he actually was and demanded that the pharaoh release the Hebrews and allow them to return to Canaan. After experiencing a number of divine punishments issued by the Hebrew god, the pharaoh reluctantly agreed. The Hebrews’ flight from Egypt included a protracted trek across the Sinai desert and into Canaan, during which they agreed to worship only the single god Yahweh and obey his laws. This period of their history is often called the Exodus, because it records their mass migration out of Egypt and eventually to Canaan. Once in Canaan, Moses’s general Joshua led several military campaigns against the inhabitants, which allowed the Hebrews to settle the land.
The details in the biblical account of the Hebrews’ life in Egypt and their exodus from that kingdom have led some scholars to associate these stories with the period of Hyksos rule. It was then, during the Second Intermediate Period, that the Canaanites flooded into the Nile delta and took control, and it may be that the story of Joseph and his family entering Egypt preserves a memory of that process. The exact time of the exodus from Egypt has been difficult for historians to determine for a number of reasons, not least of which is the fact that the Bible does not name the Egyptian pharaohs of the Exodus period.
Yet some features of the biblical account indicate there was in fact some type of exodus. For example, Moses’s name is Egyptian and not Hebrew, suggesting he came from Egypt. The Bible also names the two midwives who traveled with the group, leading some scholars to conclude there was some oral tradition about a very small group that may have crossed the Sinai into Canaan, though not the very large group described in the Bible. As for the story of the conquests of Joshua, the archaeological record simply does not support this. Even at the site of Jericho, extensive archaeological work has been unable to prove that the city was destroyed when and in the way the Bible describes. This absence of strong evidence has led most scholars to conclude that there likely was no conquest, and that there was already a population of Hebrews in Canaan who were later joined by a smaller group from Egypt.
In Their Own Words
What Is in a Name?
Without archaeological or other evidence, historians have had to rely on the Hebrew Bible for clues about the Exodus. One possible hint comes from the Bible’s book of Exodus, which describes the birth of Moses, his mother’s effort to save him from slaughter, and his discovery and adoption by the pharaoh’s daughter (Figure 4.38).
And there went a man of the house of Levi, and took to wife a daughter of Levi. And the woman conceived, and bare a son: and when she saw him that he was a goodly child, she hid him three months. And when she could no longer hide him, she took for him an ark of bulrushes, and daubed it with slime and with pitch, and put the child therein; and she laid it in the flags by the river's brink. And his sister stood afar off, to witness what would be done to him. And the daughter of Pharaoh came down to wash herself at the river; and her maidens walked along by the river's side; and when she saw the ark among the flags, she sent her maid to fetch it. And when she had opened it, she saw the child: and, behold, the babe wept. And she had compassion on him, and said, This is one of the Hebrews' children. Then said his [Moses’s] sister to Pharaoh's daughter, Shall I go and call to thee a nurse of the Hebrew women, that she may nurse the child for thee? And Pharaoh's daughter said to her, Go. And the maid went and called the child's mother. And Pharaoh's daughter said unto her, Take this child away, and nurse it for me, and I will give thee thy wages. And the woman took the child, and nursed it. And the child grew, and she brought him unto Pharaoh's daughter, and he became her son. And she called his name Moses: and she said, Because I drew him out of the water.
—Exodus 2:1-10 (KJV)
As this story explains, the pharaoh’s daughter named Moses to reflect the fact that she “drew him out of the water.” Some scholars believe this phrase is a reference to the Hebrew word mashah, meaning to “draw out,” which sounds similar to the Hebrew pronunciation of Moses, Mosheh. That explanation would have made sense to Hebrew readers of the Bible, but it does not make sense that an Egyptian princess would speak Hebrew. While this problem makes it difficult to take the story seriously as evidence, it does raise an interesting question.
Is the biblical account actually an attempt to explain a Hebrew man’s name that was not Hebrew but Egyptian? In Egyptian, Moses means “child of.” It would have been part of a larger name such as Thutmose, which means “child of [the god] Thoth.” The fact that Hebrew tradition tried to explain his Egyptian name suggests to some that Moses may have been a real person with Egyptian heritage. That, in turn, suggests there is some validity to the Exodus story itself.
- Does the scholarly interpretation of the name Moses as Egyptian in origin seem credible to you? Why or why not?
- What does this story reveal about family relationships in the period?
The biblical book of Judges describes how the Hebrews moved into the hills of Canaan and lived as members of twelve tribes. In the book of Samuel, we hear how they faced oppression from the Philistines, one of the many Sea Peoples groups. To better defend themselves against the Philistines, the Hebrews organized themselves into a kingdom they called Israel. Their first leader, Saul, became king around 1030 BCE but failed to rule properly. The second king, David, not only ruled effectively but also was able to drive back the Philistines.
The Hebrews, properly referred to as Israelites in this period because of their formation of the Kingdom of Israel, now entered a golden age in their history. David suppressed the surrounding kingdoms, made Jerusalem his capital, and established a shrine there to the Israelite god Yahweh. This more organized kingdom was then left to David’s son Solomon, who furthered the organization of Israel, made alliances with surrounding kingdoms, and embarked on numerous construction projects, the most important of which was a large temple to Yahweh in Jerusalem.
Historians call the period of these three kings—Saul, David, and Solomon—the united monarchy period. Archaeological work and extrabiblical sources support many biblical claims about the era. For example, there was a threat to the Hebrews from the Philistines, who were likely one of the many groups of migrants moving, often violently, around the eastern Mediterranean during the period of the Late Bronze Age Collapse. We have Egyptian and other records of these migrants, some specifically mentioning the Philistines by name. It seems likely that the founding of Israel was a response to this threat.
As for the existence of Saul and David, things are less clear. The Bible provides several conflicting accounts of how these two men became king. For example, Saul is made king when he is found hiding among some baggage, but also after leading troops in a dramatic rescue. Similar confusion surrounds David, though it seems clear he became an enemy of Saul at some point and was able to make himself king. Despite these contradictions, there is one piece of archaeological evidence for the existence of King David. The Tel Dan stele, discovered in the Golan Heights in the 1990s, makes reference to the “house of David,” meaning the kingdom of David (Figure 4.39). However, no similar archaeological evidence has been unearthed for David’s son Solomon. Indeed, evidence of Solomon’s most famous achievement, the building of the first temple in Jerusalem, has yet to be discovered. Yet we have strong archaeological evidence for some of his other public works projects, such as the three-thousand-year-old gates discovered at Gezer, Hazor, and Megiddo.
After the death of Solomon, the period of the united monarchy came to an end, and Israel split into two kingdoms, Israel in the north and Judah in the south. This inaugurated the period of the divided monarchy (Figure 4.40). Jerusalem remained the capital of Judah, while Samaria was the capital of Israel. The northern kingdom was the larger and wealthier of the two and exerted influence over and sometimes warred with Judah. The biblical account often puts the kings of the northern kingdom in a negative light, noting that they abused their subjects and incorporated elements of foreign religious traditions in their worship of Yahweh.
With the rise of the Neo-Assyrian Empire and its expansion into Canaan, Israel and Judah entered a new era under foreign domination within the Assyrian-controlled Near East. Anti-Assyrian sentiment in both kingdoms and the Neo-Assyrians’ desire to control the eastern Mediterranean eventually led to multiple Assyrian attacks on Israel. The most devastating occurred in 722 BCE, when thousands of Israelites were deported to other parts of the empire, as was the Assyrians’ custom.
Prophets in Judah interpreted the destruction of Israel as punishment for its having veered from the covenant with Yahweh. They called for religious reforms in Judah in order to avoid a similar fate. While Judah was incorporated into the Neo-Assyrian Empire, it avoided the destruction experienced by Israel. However, the defeat of Assyria by the Neo-Babylonians brought new challenges to Judah. Resistance to Babylon led to punishments and forced deportations in 597 BCE, and finally to the destruction of Jerusalem and its temple in 586 BCE.
The many Judeans deported to Babylon after the fall of Jerusalem were settled in Mesopotamia and expected to help repopulate areas that had been devastated by wars. Many assimilated into Babylonian culture and became largely indistinguishable from other Mesopotamians. Some, however, retained their Judean culture and religious beliefs. For these Judeans, the Babylonian exile, as it was called, was a time of cultural and religious revival. They edited various earlier Hebrew writings and combined them into a larger work, thus giving shape to the core of the Hebrew Bible. Finally, with the rise of the Persian Empire and its conquest of Babylonia, the Persian king Cyrus the Great permitted the unassimilated Judeans to return to Judah. They went in two major waves over the next few decades and began a process of reconstruction that eventually included the rebuilding of Yahweh’s temple at Jerusalem.
The Culture of the Hebrews
The most salient feature of Hebrew culture during this period was its then-unusual monotheism. The Bible suggests this tradition began with Abraham, who was said to have entered into a covenant with Yahweh as far back as 2100 BCE. With the emergence of Moses in the Bible, Hebrew monotheism really began to take shape. As the Bible explains, during the exodus from Egypt, Moses was given the laws directly from Yahweh, including the command that only Yahweh be worshipped. This account suggests that pure monotheism was commonly practiced by the Hebrews from that time forward. Yet closer inspection of the biblical stories reveals a much more complicated and gradual process toward monotheism.
For example, the first of the commandments given to Moses by Yahweh demands that the Hebrews “have no other gods before me.” This language implies that there are in fact other gods, but those gods are not to be worshipped. In other places in the Bible, God is referred to as plural or occasionally as part of an assembly of gods. This textual evidence likely preserves small elements of the earlier Canaanite polytheistic religious traditions. These include the veneration of El, the head of the pantheon and often associated with Yahweh, and of Yahweh’s consort Asherah, the storm god Baal, the fertility goddess Astarte, and many others. Archaeologists’ discoveries of temples and figurines representing these gods attest to the fact that they were worshipped in some form well into the eighth century BCE.
Many portions of the Bible describe how the Hebrews frequently fell away from Yahweh and back into their polytheistic traditions. This backsliding is usually condemned in the Bible and occasionally results in efforts by biblical heroes to restore Moses’s covenant with God. King Hezekiah of Judah (727–697 BCE), for example, conducted a cleansing campaign against unauthorized worship around his kingdom. He removed local shrines, destroyed sacred monuments, and smashed cult objects. His son, King Manasseh, however, restored some of these cultic practices and shrines. Setting aside the bias of the Bible’s writers, Manasseh may have been attempting to rescue long-standing religious traditions that had been under assault by his reform-minded father. However, as early as the mid-seventh century BCE, the religious reformers who promoted the centralized worship of Yahweh and obedience to the laws of Moses had clearly gained the upper hand. Their interpretation of Hebrew history and religion was then on the rise.
The backsliding theme of the Hebrew Bible was partly a way for its writers to account for the vestiges of Canaanite religious practices that did not fit neatly with their view of the Hebrews as having been monotheistic from the time of Moses. The abandonment of Yahweh accounted for the disasters that befell the Hebrews in Israel and Judah, especially the destruction of the temple and the forced deportation to Babylon. The biblical writers, and the prophets they record, attest that Neo-Assyria and Neo-Babylonia were merely tools used by Yahweh to compel the Hebrews to follow the correct path or face punishment. This version of Israelite history was kindled and strengthened during the Babylonian exile, when the core portion of the Hebrew Bible was being edited and assembled.
By the time the Judeans were allowed to return to Jerusalem and rebuild their temple, the basic framework of what we understand today as Judaism had emerged and been largely accepted. The Jews (or people from Judah) were expected to worship only Yahweh, live moral lives consistent with his dictates, and closely follow the laws of Moses. For example, they were prohibited from murdering, stealing, and committing adultery. They were barred from consuming specific foods such as pork, shellfish, insects, and meat that had been mixed with dairy. Food had to be properly prepared, which included ritual slaughter for animals. Jewish people were also prohibited from working on the seventh day of the week and were compelled to treat wives with respect and give to charity, among many other acts. And of course there were important rules about the worship of Yahweh, including loving him, fearing him, emulating him, and not profaning his name.
Since the Hebrews could trace their origins back to agricultural clans, a number of the laws of Moses dealt with agricultural issues, like prohibitions against eating ripe grains from the harvest before they are made into an offering. The festival of Sukkot, meaning “huts,” was a harvest festival when Jewish people were expected to erect huts, possibly as a way to remember the time when they were primarily agriculturalists. However, as the Hebrews grew in number and began living in cities and adopting urban occupations, these agricultural traditions were relegated primarily to symbolic religious practice. In cities, Jewish people found economic opportunities as craftspeople, traders, and merchants. As Jerusalem grew in the centuries after the Babylonian exile, their religion became ever more adapted to urban life.
At the center of urban life in Jerusalem was the temple, completed around 515 BCE (Figure 4.41). It included courtyards as well as an enclosed sanctuary with altars and a special location kept in total darkness, referred to as the Holy of Holies, where Yahweh was present. In the temple, the priests organized various religious festivals and performed elaborate rituals, including special sacrifices of animals supplied by worshippers seeking the favor of Yahweh.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://human.libretexts.org/Courses/Coastline_College/Hum_C100%3A_Introduction_to_the_Humanities_(Volmer)/02%3A_Ancient_Worlds/2.07%3A_The_Hebrews",
"book_url": "https://commons.libretexts.org/book/human-206684",
"title": "2.7: The Hebrews",
"author": "OpenStax"
} |
2.8: Ancient China
Learning Objectives
By the end of this section, you will be able to:
- Discuss the early dynasties of ancient China
- Analyze the impact of the Warring States Period on ancient Chinese politics and culture
- Explain the connections between ancient Chinese philosophy and its political and social context
Ancient China was not the first area in Asia to practice agriculture and develop cities. But it was home to some of the world’s earliest political dynasties, and it produced written scripts, influential schools of thought and religion, and innovations in architecture and metallurgy, such as the manufacture of bronze and iron agricultural implements, weapons, chariots, and jewelry. A climate of constant regional warfare between small Chinese states imparted to kings and philosophers alike a sense of urgency to build institutions and systems that would bring stability to their realms. Against this background, China’s first empire, the Qin, presided over the creation of some of the ancient world’s greatest historical treasures, including the Terracotta Army and an early form of the Great Wall.
Prehistoric China
Recent studies of Paleolithic and Neolithic China suggest it was home to several distinct cultural complexes that developed independently of one another and exhibited notable regional variations in agriculture, social organization, language, and religion.
Human beings set foot on the Chinese subcontinent more than a million years ago. Evidence indicates the presence there of an archaic member of the human lineage known as Homo erectus, a term meaning “upright man.” One example is the well-known Peking Man, a subspecies of Homo erectus identified by fossil remains found in northern China in 1929. The species Homo sapiens (meaning “wise man” and including all modern humans) appeared later, around 100,000 years ago. These communities of hunter-gatherers followed the mammoth, elk, and moose on which they subsisted into northern China. Later they learned to fish along China’s many rivers and long coastlines and supplemented their food stores by foraging from a rich variety of plants, including many grasses, beans, yams, and roots.
Archaeological evidence from this stage of China’s prehistory, the Paleolithic period from roughly 100,000 to 10,000 BCE, confirms that these groups developed symbolic language, which enabled them to evolve ideas about abstractions like kinship and an afterlife and thus produce the foundations for a shared culture and society. Their tools, such as those used for grinding plants, were simple and fashioned primarily of stone, but also of bone and wood. Early humans arrived in China from Africa and western Asia in waves separated by hundreds of years, but they were far from uniform. Thus, they eventually produced early societies that spoke a variety of languages, differed in their spiritual beliefs, and developed the capacity for agriculture independently of one another.
China’s diverse geography, climate, and terrain reinforced regional variations in these early cultures as well (Figure 5.4). The country today stretches for roughly a thousand miles from north to south and east to west, occupying a temperate zone dominated by two major river systems, the Yellow and the Yangtze. Mountains, deserts, grasslands, high plateaus, jungles, and a variety of climates exist, such as the frozen environs surrounding the city of Harbin in the north and the subtropical climate around Hong Kong in the south. Most of the early cultures and later dynasties that produced Chinese civilization lay in a much smaller area, within a series of provinces along the Yellow and Yangtze Rivers, ringed by the outer areas of Manchuria, Mongolia, Xinjiang, and Tibet. Today these provinces make up the most densely populated areas of the People’s Republic of China, inhabited almost entirely by the majority ethnic group in China, Han Chinese. The outlying areas have been the traditional homelands of a great many religious and ethnic minorities, such as Mongolians, Tibetans, Uyghurs, and Manchu, who did not become incorporated into the first dynasties of ancient China. Early inhabitants of China found that each region offered advantages and challenges to meeting the necessities of daily life: food, shelter, and security.
More than twenty sites that produced unique Neolithic cultures have been found in China. The earliest such culture was the Nanzhuangtou (8500 to 7700 BCE) in Hebei, a province in the northeast, and the last known was the Yueshi culture (1900 to 1500 BCE) found in Shandong, an eastern coastal province. All were capable of farming, domesticating animals, and manufacturing textiles and ceramics.
China’s Neolithic cultures are notable for their independent growth and regional diversity, and for the differences between those in the north and those in the south. For example, in the southeastern part of the country, near Shanghai, a site dated to around 8000 BCE was home to people who cultivated rice, used boats, constructed standing homes, and made pottery with geometric designs. Evidence suggests their language was more closely related to those of the peoples living in Southeast Asia today, so calling them “Chinese” is open to debate. To the north, the colder climate forced early communities in today’s Hebei province to rely on another grain, millet, for their primary foodstuff. These farmers used stone tools such as sickles and made simple jars to store their grain. Wooden spears and hoes were more common in the south than stone tools, and while both north and south domesticated dogs and pigs, in the north grazing animals such as sheep were tamed, while in the south farmers harnessed the power of water buffalo.
There were distinctive Neolithic cultures in the east and west of China. From about 4100 to 2600 BCE, the Dawenkou culture arose near Shandong in the east, characterized by the manufacture of exquisite works of pottery and the use of turquoise, ivory, and jade. The burial practices of the Dawenkou became more elaborate over time, eventually leading to the use of wooden coffins and the creation of ledges of earth to surround the graves. Later eastern cultures lavished treasures on the deceased, burying them with necklaces and beads, showing an increasing sophistication in the decorative arts.
To the west lay the Yangshao culture, dating to 5000 BCE, whose people farmed millet and dug homes in the earth to protect themselves from a cool climate. In Yangshao, burying the dead was a simpler process, but artists decorated pottery with painted designs and intricate geometric patterns. To the east there are few examples of painted bowls, jars, or cups. Instead, eastern cultures devoted their creative efforts to the slow, painstaking process of shaping jade. The Hongshan culture in Liaoning province and the Liangzhu complex in Jiangsu fashioned beautiful jade talismans, ornaments, and treasures for spiritual ceremonies. The great distance between these two cultures—with Hongshan far in the northeast near today’s border with North Korea and the Liangzhu located around the Yangtze River delta in the southeast—shows the breadth of jade’s influence along China’s eastern seaboard. In the west, jade remained a much rarer object.
Later networks of exchange connected these regional cultures, which increasingly borrowed from each other, accelerating change, innovation, and collision. From roughly 3000 to 2000 BCE, China’s Neolithic cultures created and shared new implements for cooking and artistic styles such as geometric patterns on ceramics. With contact, however, came growing conflict as well, suggested in the archaeological record by the emergence of metalworking and cities defended by walls of rammed earth. The need to coordinate defense and construct such ramparts likely required a political evolution within these cultures, giving rise to an elite military class led by chiefs. Thereafter, military elites were shrouded in spiritual rituals revolving around human sacrifice, possibly of captives of war, who were entombed beneath buildings in sites found in northern China. Increasing exchange between Neolithic cultures and the prominence of war may also have led to greater social differentiation. Burial sites for elites show evidence of increasingly elaborate ceremonies to please the gods or ancestors and to honor the deceased and denote their status.
Women were often buried with the same quantity of items and laid in the same position as their male counterparts. Archaeological remains such as graves, figurines, tools, and other materials suggest that many Neolithic Chinese communities were matrilineal societies, in which lines of kinship were traced through the mother’s family. While weaving textiles became an important occupation for many women, the division of labor was far less rigid in this period. Carvings depicting goddesses, symbols of fertility, and women’s genitalia are prevalent in many of the cultures and seem to suggest women were on a par with men in the Neolithic era, especially when compared with later periods in Chinese history.
Early Dynastic China
The Yellow River had an enormous impact on the development of Chinese civilization. It stretches for more than 3,395 miles, beginning in the mountains of western China and emptying into the Bohai Sea from Shandong province. (Only the Yangtze River to the south is longer.) Critical to the development of farming and human settlement along the Yellow River was the soil, which is loess—a sediment that is highly fertile, but easily moved by winds roaming the plain and driven along as silt by the power of the river. This portability of the soil and the human-built dikes along the river have caused it to constantly evolve and change over the centuries, leaving the surrounding areas prone to regular flooding and subjecting farmers to recurring cycles of bountiful harvests and natural disasters. Rainfall around the Yellow River is limited to around twenty inches annually, meaning that the river’s floods have usually been paired with periodic droughts.
Near the Yellow River, the site of Erlitou in Henan province reveals a culture defined by the building of palaces, the creation of bronze vessels for rituals, and the practice of forms of ancestor worship. Sites such as these have led to debate about whether they prove the existence of the Xia dynasty, a fabled kingdom said to have been founded by one of China’s mythological heroes, the Great Yu. No site has yet been found with documents written by the Xia. Instead, all references to it come from records written many centuries after the possible mythical kingdom ceased to exist.
The first Chinese dynasty for which we have solid evidence is the Shang. It created a complex, socially stratified Bronze Age civilization whose signature achievement was the creation of a written script. The Shang were long thought to be a mythological dynasty like the Xia until scholars in the late nineteenth century discovered old turtle shells inscribed with Chinese characters in a medicine shop. Eventually, these shells and other “oracle bones,” once used in the art of divination, were found to be written records from China’s first dynasty (Figure 5.5).
Shang kings exerted their authority through rituals of ancestor worship drawn from the Erlitou culture and adapted to the art of bone divination. First carving written characters onto shells and animal bones and then applying heat to crack and shatter them, they posed questions to spirits and divined from the bones the spirits’ predictions regarding impending harvests, military campaigns, or the arrival of an heir. From there, the Shang developed a logographic script whose characters visually represented words and ideas, combining symbols to make new concepts and sounds as needed. These characters served in a number of tasks such as keeping records, making calendars and organizing time, and preserving knowledge and communicating it from generation to generation.
The earliest forms of Chinese writing were likely forged on fragile materials such as bamboo or even silk and have not survived. But the Shang’s passing on to future dynasties a logographic script, rather than a phonographic alphabet, meant that for centuries literacy was the preserve of elites. Reading required memorizing hundreds and eventually thousands of symbols and their meanings, rather than learning the sounds of a far smaller number of letters as is the case with an alphabet. Chinese ideas, values, and spiritual beliefs stored in this logographic script long outlived the Shang, becoming a key element of continuity from one dynasty to the next.
Through their invention of writing, the Shang were also able to command enormous resources for two centuries. They developed the organizational capacity to mine metal ores and transport them to foundries to make bronze cups, goblets, and cauldrons that grew to weigh hundreds of pounds. Shang artisans began weaving silk into cloth, and the city walls around an early capital in Zhengzhou were erected by ten thousand workers moving earth into bulwarks that stood thirty feet high and sixty feet wide.
But the Shang became China’s first dynasty largely because of their military prowess, expanding their power through conquest, unlike the earlier and more trade-oriented cultures. Through warfare and the construction of a network of walled towns, the Shang built one of the world’s first large territorial states controlled by a noble warrior class. This area included territory in Henan, Anhui, Shandong, Hebei, and Shanxi provinces. The Shang used bronze spears, bows, and later horse-drawn chariots to make raids against neighboring cultures, distributing the prizes to vassals and making enemies into allies for a share of the plunder. The prizes included captives of war, enslaved by the Shang warrior elite or sacrificed. An aristocratic and militaristic culture, the Shang also organized royal hunts for game such as deer, bear, and even tigers and elephants to hone their skills.
Link to Learning
Visit this website and read a detailed summary of the importance to ancient Chinese cultures of ritual killing to learn more about and see visual examples of the Shang’s ritualistic vessels, art of divination, and burial customs.
The oracle bones suggest that religion and ritual were the backbone of Shang society. The kings were not just military leaders but high priests who worshipped their ancestors and the supreme deity known as Di. Shang queens and princesses were also active in politics and warfare, with a few notable women such as the general Fu Hao leading large armies onto the battlefield. Aristocratic women also regularly served as priests in the royal ancestral cult. Like many other ancient societies, the Shang dynasty exhibited a theocratic dimension, with the kings claiming the exclusive right to act as intermediaries between their subjects and the spirit world.
To stage this royal role, the Shang built palaces, temples, and altars for worship in their capital cities, served by artisans making a host of goods. They developed enormous tombs tunneled beneath the earth for royals and nobility, signifying their capacity to organize labor and resources on a vast scale. Fu Hao’s tomb, for example, was small by comparison to many others for Shang royals, but it was dug twenty-five feet deep into the earth and was large enough to hold sixteen human sacrifices and hundreds of bronze weapons, mirrors, bells, and other items fashioned from bone, jade, ivory, and stone. A comparison of early Shang tombs in Zhengzhou with those of a later period discovered in Anyang suggests that human sacrifices became ever more spiritually significant, and also more extreme. Later kings were found buried not with a few victims but with hundreds of servants and prisoners of war, as well as animals such as dogs and horses. By spilling human blood, Shang royalty hoped to appease Di and their ancestors to ward off problems such as famine. But the scale of these rituals ballooned, with one record indicating that King Wu Ding went so far as to sacrifice more than nine thousand victims in one ritual bloodletting.
Under the sway of the Shang, the disparate Neolithic cultures of northern China grew more uniform, while even groups beyond the Shang’s control in the Yangtze River valley and the west were influenced by their artistic styles and motifs. Yet over the course of their reign, the Shang’s reliance on constant warfare and a religion centered on human sacrifices bred discontent and may have fueled the perception of their kings as corrupt and sadistic. It might even have precipitated revolt against the Shang rulers and the culture’s eventual demise.
The Zhou Dynasty
The Zhou dynasty, which supplanted the Shang dynasty in 1046 BCE, borrowed extensively from its predecessors. But the Zhou people were originally independent of the Shang, with their homeland lying in today’s Shaanxi province in north-central China, in a large fertile basin surrounded by mountains just beyond the core Shang territory that lay to the east. Once settled there, Zhou nobility became vassals of the Shang kings, equipped to defend them and campaign against their hated rivals the Qiang, a proto-Tibetan tribe.
The Zhou combined the practices of farming learned from the Shang with livestock raising learned from nomadic groups living outside the Chinese core. From the Shang, the Zhou also acquired the arts of bronze-making and divination before later developing their own ritual vessels and spiritual practices. Armed by the Shang with chariots, bows, and bronze armor, the Zhou eventually overthrew the Shang kings and founded a new dynastic ruling house. Inheriting the Shang logographic script, the Zhou dynasty became the first to transmit texts such as the Book of Documents, records of dozens of speeches and announcements attributed to historical leaders, from the ancient world directly to future generations.
But for all that the Zhou inherited from the Shang, their dynasty also introduced influential changes to ancient China. Likely in order to distance themselves from the Shang, the Zhou allowed the scale of human sacrifices in burials to decline and phased out the use of divination with oracle bones. Above the deity Di, they introduced the concept of a higher power referred to as heaven, and they situated themselves as mediators by performing rituals designed to show that the cosmos legitimated their right to rule (Table 5.1).
| Chinese Dynasty | Approximate Duration |
|---|---|
| Shang dynasty | 1600–1050 BCE |
| Zhou (pronounced “Jeo”) dynasty | 1046–256 BCE |
| Qin (pronounced “chin”) dynasty | 221–206 BCE |
| Han dynasty | 206 BCE–220 CE |
| Six Dynasties Period | 220–589 CE |
| Sui (pronounced “sway”) dynasty | 581–618 CE |
| Tang dynasty | 618–906 CE |
| Five Dynasties Period | 907–960 CE |
| Song dynasty | 960–1279 |
| Yuan dynasty | 1279–1368 |
| Ming dynasty | 1368–1644 |
| Qing dynasty | 1644–1912 |
More than just spiritual changes, these policy shifts helped the Zhou spread a political ideology that fostered a shared cultural identity that was formative to Chinese civilization. According to the Zhou, the Shang rulers over time had grown despotic, ruining the lives of their subjects and squandering the bountiful resources of China. Around 1046 BCE, the Zhou, having grown tired of their abuses, rose up against the Shang and, led by King Wu, defeated them in battle.
The Zhou victory and Shang defeat were recorded in various Chinese classical texts as proof that the heavens had revoked the Shang’s right to rule and conferred it upon the new Zhou dynasty. This “Mandate of Heaven” shaped Chinese ideology and understanding of dynastic cycles for centuries to come (Figure 5.6). It justified the overthrow of bad governments and corrupt or inept rulers and reinforced a common conviction that in order to govern, a ruling house must demonstrate morality and order to retain heaven’s favor. The concept also pressured dynastic rulers to deserve the mandate by exhibiting moral leadership and proving their legitimacy through support for agriculture, the arts, and the welfare of the common people. Thereafter, natural disasters such as flood or famine and social upheaval in the form of rebellions or poverty were read as signs that a dynasty was in peril of having its mandate to rule rescinded.
The Mandate of Heaven also ensured continuity between dynasties because it became an element of a core ideology passed from one ruling house to the next, even as non-Chinese groups such as Mongols and Jurchen later invoked it as conquerors. Thus, the mandate created a basis for increasing political unity of the Chinese under a supreme sovereign, while also promoting dissent and latent revolution against unpopular rulers. From this ideology sprang new terms for subjects, identified as denizens of Zhongguo (China), a name formed from the terms for central and state, or as Huaxia (Chinese) in the Zhou dynasty, to express their membership in a shared culture defined by farming, writing, and metalworking and inherited from mythical figures and common ancestors.
To consolidate their political control, the early Zhou rulers led military campaigns to extend their territory east over the Yellow River and relied on a complex system of decentralized rule. Leniency was shown to the Shang, with a son of the dynasty left to rule his own city and preside over rituals to honor his ancestors. Other Shang nobles were uprooted and moved to new cities to keep them under the watch of the Zhou, whose relatives and trusted advisors governed walled garrisons and cities on the frontier to guard against rising threats. In other areas, the Zhou cooperated with largely autonomous leaders, granting aristocratic titles in return for tribute and military service from local chiefs and nobility. To cement these ties, the Zhou brokered marriages between the royal line and the families of local lords, who within their own domains performed the same spiritual and administrative functions as the ruling family. Like the Zhou, local lords were served by ministers, scribes, court attendants, and warriors, and they enjoyed the fruits of the efforts of ordinary laborers and farmers who lived on their estates.
The Zhou proved more durable than the Shang but, especially in later centuries, their power was diffused among many smaller, competing kingdoms only nominally under their control. The Zhou’s decentralized feudal system, in which land and power were granted by the king to local leaders in return for special privileges, gradually weakened as those regional lords ignored the commands of kings, instead amassing armies and searching for alliances and technological advantages over their neighbors.
As a result, scholars typically divide the Zhou dynasty into several periods. The Western Zhou (c. 1046–771 BCE) refers to the first half of the dynasty’s rule, from its founding to the sack of its capital in Haojing by nomadic armies in 771 BCE. Afterwards the Zhou reestablished their capital in the east, in Luoyang, inaugurating the period called the Eastern Zhou (771–256 BCE). The Eastern Zhou dynasty itself is often divided into two halves—the Spring and Autumn (771–476 BCE) and the Warring States (475–256 BCE) periods. The first half of the Eastern Zhou dynasty derives its name from the Spring and Autumn Annals. That chronicle, from about the fifth century BCE, documents the gradual erosion of the Zhou kings’ power as outlying territories such as Chu, Qin, and Yan became increasingly autonomous. Not surprisingly, then, the Warring States era was characterized by open warfare between these regional powers to enlarge their territories, absorb neighboring kingdoms, and replace the Zhou as the new sovereigns of ancient China.
Bridging the two eras of the Spring and Autumn and the Warring States was a period defined by a flourishing of literature and philosophy known as the Hundred Schools of Thought (770–221 BCE). Inspired by political turmoil and rivalries between various Chinese states, those who wished to retain power were drawn to the study of the military arts, diplomacy, and political intrigue. Those who lamented the lack of order and waning loyalty to authority and tradition turned to the study of morality and ethics. In a political climate of competition and reform, new schools of thought informed a swelling class of capable administrators and military strategists contesting for the patronage of rulers. Philosophers such as Mozi and Sunzi, author of The Art of War, created their own rival traditions and contributed to courtly debates on morality, war, government, technology, and law.
In this marketplace of ideas, Chinese civilization as a whole rapidly grew more sophisticated. At the same time, rulers sought to expand their revenues, increase the size of their populations, implement new techniques for farming such as draining marshes, and create new forms of currency such as bolts of silk. This era also fostered dynamic new forms of art as the Zhou court became home to musicians skilled with chimes, drums, lutes, flutes, and bells. States such as Chu and Zheng became famous for their artists and styles of dance, while popular hymns were later translated into poems and recorded in the Book of Songs. These intellectual traditions and cultural forms, though varied, served as the foundational core for Chinese politics, education, and art in the ancient world.
Foremost among the new schools of thought was Confucianism, a philosophical system that shaped morality, governance, and social relations in China before spreading to Korea, Vietnam, and Japan in later centuries. Its founding philosopher, known as Kong Fuzi, or Confucius, was probably born about 551 BCE and lived in relative obscurity as a teacher in the small state of Lu. Later his descendants and disciples made his teachings on the family, society, and politics known in ancient China via The Analects, a collection of brief statements attributed to him and recorded long after his death. Later scholars influenced by Confucius, such as Mengzi, went on to win renown for their teachings, attracting throngs of new students while gaining influential positions as advisers in the service of rulers.
A central tenet of Confucianism is the importance of exemplifying virtuous leadership by living a moral life, studiously observing rituals, and being tirelessly devoted to the duties owed to the leader’s subjects. Confucian texts such as the Book of Documents promoted habits like literacy, critical thinking, the search for universal truth, humility, respect for ancestors and elders, and the valuing of merit over aristocratic privilege. Confucius also considered family relationships to be central to an orderly society. Specifically, he delineated five cordial relationships—between king and subjects, father and son, husband and wife, elder brother and younger siblings, and friends. Each relationship consisted of an authority figure who required obedience and honor from the other person or persons, except for friends who were to honor one another. In return, the person in authority was supposed to embody ren , an attitude of generosity and empathy for those beneath him. So long as everyone behaved as they should, good order would flourish.
Later Confucian teachers such as Xun Kuang (also known as Xunzi), witnessing the violence of the Warring States period, argued that humanity’s base impulses necessitated rigorous self-cultivation and discipline. Among devout Confucians, such ideas spawned a constant search for internal self-improvement and concern for the well-being of others and society as a whole. During this period, Zhou kings presided over rites to honor royal ancestors, but they also made greater use of written works to magnify their prestige and power. Yijing, or The Book of Changes, presented a new system of divination later included as a seminal text in the Confucian canon.
In Their Own Words
The Analects of Confucius
Over many decades following Confucius’s death, his students and followers collected his words of wisdom in The Analects. The Analects consists of twenty short books, each of which includes a series of short quotations on a particular theme. Confucius’s main concern was to teach people how to become junzi, compassionate and moral beings more concerned with doing what was right than with satisfying their own desires. The junzi understood their duties to others and fulfilled all the ancient ritual obligations. Confucius believed junzi could be created through education, and that society would be harmonious and peaceful if the government was guided by junzi. The following are some excerpts from Book 2.
CHAP. I. The Master [Confucius] said, “He who exercises government by means of his virtue may be compared to the north polar star, which keeps its place and all the stars turn towards it.”
CHAP. II. The Master said, “In the Book of Poetry are three hundred pieces, but the design of them all may be embraced in one sentence—Having no depraved thoughts.”
CHAP. III. 1. The Master said, “If the people be led by laws, and uniformity sought to be given them by punishments, they will try to avoid the punishment, but have no sense of shame. 2. “If they be led by virtue, and uniformity sought to be given them by the rules of propriety, they will have the sense of shame, and moreover will become good.”
CHAP. IV. 1. The Master said, “At fifteen, I had my mind bent on learning. 2. “At thirty, I stood firm. 3. “At forty, I had no doubts. 4. “At fifty, I knew the decrees of Heaven. 5. “At sixty, my ear was an obedient organ for the reception of truth. 6. “At seventy, I could follow what my heart desired, without transgressing what was right.”
—Confucius, The Analects, translated by James Legge
- Why would Confucius think it important to be able to feel shame?
- How would the values expressed here help make a person a better leader?
- What connection, if any, can you see between the teachings of Confucius and the Zhou concept of the Mandate of Heaven?
Link to Learning
You can read the full text of The Analects at the Project Gutenberg website.
A mystical indigenous religion that venerated nature, Daoism borrowed from various ideological systems, such as the dualism of yin-yang with its emphasis on the complementary poles of light and dark cosmological forces. Daoism’s thousands of texts, temples, and priests did not flower until the later Han dynasty, but during the Zhou era, this school emerged as a major influence thanks to teachers like Laozi and Zhuang Zhou (commonly known as Zhuangzi) and the circulation of the books attributed to them, the Tao Te Ching and the Zhuangzi. From them, Daoists learned a litany of poems, sayings, parables, and folktales teaching that dao (or “the way”) was an underlying influence that shaped and infused all humans, the natural world, and the cosmos. Daoists encouraged dwelling on the beauty of the natural world, exploring mystic rituals, and contemplating the comparative insignificance of the individual against the vastness of time and space. Perhaps the most important political concept introduced by Daoists was the idea of wuwei (or “nonaction”), implying to those in power that the best form of governance was a minimalist approach that avoided interfering in the lives of their subjects.
Counter to the Daoist tradition and Confucianism ran the school of thought known as Legalism, the focal point of which was the accumulation of power. Legalists argued that governments drew power from a written legal code backed by an expansive system of rewards and punishments to ensure enforcement and order. A few of its exponents, like the thinker Han Feizi, studied Confucianism first, but came to see its proponents and teachings as too idealistic and naïve. Legalists downplayed the need for morality and asserted that the bedrock of a good government was a “rich country and a strong army.”
While Confucianism, Daoism, and Legalism remained distinct, they borrowed liberally from each other and incorporated values, themes, and terminology to round out their own philosophies. All were open, eclectic systems reacting to historical circumstances and conditions. Moreover, each of these schools of thought and even the more minor traditions formed a common frame of reference within which Chinese rulers, philosophers, scribes, and even hermits expressed their own views. Confucianism and Legalism encouraged the study of texts over mystic rites, or society and its history over the supernatural and the afterlife, while other thinkers continued to ponder the yin and yang and work out principles applicable to astronomy, medicine, and the calendar. The world of spirits, ancestor worship, and folktales was no less prevalent than before. Still, it was the emergence of these new systems and their contributions that make this era an “axial age,” a critical stage in the evolution of not just Chinese civilization but the world.
The Warring States Era and Qin Unification
Over the course of the long Eastern Zhou era (771–256 BCE), the means and methods of warfare changed, with dramatic consequences for ancient China. Initially war was regulated by chivalrous codes of conduct, complete with rituals of divination conducted before and after battle. Battles were fought according to a set of established rules by armies of a few thousand soldiers fighting for small Chinese states. The seasons and the rhythms of agricultural life limited the scope of campaigns. Victorious armies followed the precedent set by the early Zhou conquerors, sparing aristocratic leaders in order to maintain lines of kinship and preserve an heir who would perform rites of ancestor worship.
With the advent of the Warring States era (475–256 BCE), these rules were cast aside, and values such as honor and mercy went out of fashion. New military technologies provided the catalyst for these changes. The invention of the crossbow made the advantages once owned by cavalry and chariots nearly obsolete. The result was ballooning conscript armies of hundreds of thousands, making military service nearly universal for men. Protected by leather armor and iron helmets, soldiers skilled in the art of mounted archery trickled into Chinese states from the steppes. Discipline, drilling, logistics, organization, and strategy became paramount to success. Treatises on deceptive military maneuvers and the art of siege craft proliferated among the various states of the Zhou.
Not all the changes wrought by war in the late Zhou period were unwelcome. For example, common farmers gained the right to include their family names on registration rolls and pressure sovereigns for improvements to their lands such as new irrigation channels. Iron technology was developed for weapons, but was also used for new agricultural tools. Together, increasing agricultural productivity and advancements in iron technology were part of a late Zhou surge in economic growth. Mobilization for war stimulated a cross-regional trade in furs, copper, salt, and horses. And with that long-distance trade came increased coinage. The destruction of states through war also created social volatility, reducing the status of formerly great aristocratic families while giving rise to new forms of gentry and a more powerful merchant class. The only way back up the social ladder was through merit, and many lower-level aristocrats proved themselves as eager bureaucrats in the service of new sovereigns.
One of the many warring states in this period, the state of Qin, capitalized on these economic and social changes by adopting Legalist reforms to justify an agenda of power and expansionism. The turning point came with the arrival of Lord Shang, a migrant born in a rival territory in approximately 390 BCE who soon took the position of prime minister; thereafter, Legalism came to dominate the thinking of Qin’s elite. Before this, the Qin state had been a marginal area within the lands of the Zhou, a frontier state on the western border charged with defending the borderlands and raising horses. The Qin state leveraged this location by trading with peoples from central Asia. At the same time, their vulnerability on the periphery kept them in a state of constant alert and readiness for war, creating a more militaristic culture and an experienced army that proved invaluable when set against their Chinese neighbors in the east.
To offset their initial disadvantages, the Qin leaders wisely embraced immigrant talent such as Lord Shang and solicited help from advisors, militarists, and diplomats from rival domains. They adopted new techniques of governance, appointing officials and delegates to centralize rule rather than relying on hereditary nobles. Theirs became a society with new opportunities for social advancement based on talent and merit. Under Shang’s advisement, the Qin scorned tradition and introduced new legal codes, unified weights and measures, and applied a system of incentives for able administrators that helped create an army and bureaucracy based more on merit than on birth. Over time, these changes produced an obedient populace, full coffers, and higher agricultural productivity.
The Qin state’s rising strength soon overwhelmed its rivals, propelling to victory its king Ying Zheng, who anointed himself China’s first emperor and was known as Qin Shi Huang, or Shihuangdi, literally “first emperor” (Figure 5.7). The Qin war machine defeated the states of Han, Wei, Zhao, Chu, Yan, and Qi in less than a decade. Under Shihuangdi’s rule, the tenets of Legalism fostered unity as the emperor standardized the writing system, coins, and the law throughout northern China. Defeated aristocratic families were forced to uproot themselves and move to the new capital near Xi’an. To consolidate political control and reverse the fragmentation of the Zhou era, officials appointed by the emperor were dispatched to govern on his behalf, which cast aside the older feudal system of governance. Officials who performed poorly were removed and severely punished. Those who did well wrote regular detailed reports closely read by the emperor himself.
Qin militarism also turned outward, enlarging the bounds of Chinese territory as far as the Ordos Desert in the northwest. In the south, Shihuangdi’s armies ranged into modern-day Vietnam, laying a Chinese claim to the people and territory in this area for the first time in history. These expansions and the need for defense generated new infrastructure, such as fortified towns and thousands of miles of new roads to transport the Qin’s armies to the borders. Northern nomadic and tribal civilizations known to the Chinese as the Hu (or Donghu) and Yuezhi were seen as formidable threats. To guard against these “barbarians,” hundreds of thousands of laborers, convicts, and farmers were sent to connect a series of defensive structures of rammed earth built earlier by states in northern China. Once completed, the Qin’s Great Wall illustrated how fortifying the north and guarding against the steppes became the focal point of statecraft in ancient China. Successive empires in China followed a similar wall-building pattern. The walls commonly referred to as the Great Wall of China today are in fact Ming dynasty walls built between the fourteenth and seventeenth centuries CE.
Shihuangdi was also ruthless in defending himself from criticism at home. Informed by his chancellor in 213 BCE that literate Chinese were using commentary on classical texts and literary works to critique his rule, the emperor ordered the destruction of thousands of texts, hoping to leave in print only technical treatises on topics such as agriculture or medicine. An oft-cited story of Shihuangdi’s brutality credits him with calling for the execution of hundreds of Confucian and Daoist intellectuals by burying them alive. Recent scholars have scrutinized these tales, questioning how much about his reign was distorted and exaggerated by the scholars of his successors, the Han dynasty, to strengthen their own legitimacy. In studying the ancient past, we must likewise always question the veracity of historical sources and not just reproduce a history “written by the winners.”
Another monumental feat of Shihuangdi’s reign was the creation of the Terracotta Army, thousands of life-sized clay soldiers fully armed with bronze weaponry and horses. From the time he was a young boy, the emperor had survived a series of assassination attempts, leaving him paranoid and yearning for immortality. Trusted servants were sent in search of paradise and magical elixirs, while hundreds of thousands of others were charged with the years-long process of constructing an enormous secret tomb to protect him in the afterlife. Almost immediately upon ascending the throne in 221 BCE, Shihuangdi began planning for this imperial tomb to be filled with clay replicas of his imperial palace, army, and servants. The massive underground pits, which cover an area of approximately thirty-eight square miles, were discovered with their innumerable contents near Xi’an in the 1970s (Figure 5.8). Labor for projects such as the Great Wall and the Terracotta Army came from commoners as a form of tax or as a requirement under the Qin’s law codes. Penalties for violating the criminal code were severe—forced labor, banishment, slavery, or death.
Link to Learning
Shihuangdi’s mausoleum has been designated a UNESCO World Heritage site. Use the tabs at the UNESCO website to view pictures and to access the videos of the Terracotta Army to learn more.
The Qin Empire quickly collapsed in the wake of the emperor’s death in 210 BCE. Conspiracy within the royal court by one of the emperor’s sons led to the deaths of his rightful heir, a loyal general, and a talented chancellor. Beyond the court, the Legalist philosophy and practices that had helped the Qin accrue strength now made them brittle. Imperial power exercised in the form of direct rule and harsh laws inspired revolts by generals and great families calling for a restoration of the aristocratic feudal society of the Zhou.
The armies of the Qin's second emperor failed against Liu Bang, a commoner who rose to become Emperor Gaozu of the newly formed Han dynasty. The Han's early emperors distanced themselves from Shihuangdi's legacy by reducing taxes and burdens on the common people. But the Qin's imperial blueprint—uniform laws, consistent weights and measurements, a centralized bureaucracy, and an early focus on expansionism to ward off "barbarians" in the north—provided the scaffolding for the Han's greatness.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://human.libretexts.org/Courses/Coastline_College/Hum_C100%3A_Introduction_to_the_Humanities_(Volmer)/02%3A_Ancient_Worlds/2.08%3A_Ancient_China",
"book_url": "https://commons.libretexts.org/book/human-206684",
"title": "2.8: Ancient China",
"author": "OpenStax"
} |
2.9: Korea, Japan, and Southeast Asia
Learning Objectives
By the end of this section, you will be able to:
- Discuss how geography and climate change influenced the early history of Korea, Japan, and Southeast Asia
- Describe the cultural exchanges between ancient Korea and Japan
- Compare daily life in ancient Korea, Japan, and Southeast Asia
Korea, Japan, and Southeast Asia were notable in the ancient world as homes to cultures uniquely engaged with the wider world. Via trade, religion, and diplomacy, Korea and Japan borrowed and adapted from Chinese civilization, but even more importantly from each other. Ties between Southeast Asia and India likewise proved formative in the eras in which many cultures evolved from small cities and agrarian villages into trade-post empires with monumental architecture. Conversely, geography, climate, and the early cultural forms produced by the first migrants meant that each area also produced its own indigenous systems. For example, Buddhist missionaries traveled from the Indian subcontinent across the Silk Roads and pilgrims trekked to temples to study and eventually bring home tools to convert their native cultures, but in each destination the faith was transformed into hundreds of new sects and interpretations of the path to enlightenment.
Ancient Korea
The earliest humans to reach the Korean peninsula did so around thirty thousand years ago. The land is very hilly, and mountains in the north form a barrier with Manchuria. Important rivers include the Daedong, the Han, and the Yalu. Winters are cold and snowy in the north, while summer months in the south feature blistering heat and torrential rains. Archaeologists have found evidence of bronze weapons dating back to 1300 BCE, but no clear proof that Korea at that time produced a Bronze Age civilization. The earliest written records of the first Koreans come from China, where the Book of Documents recounts the creation of a fief known as Joseon, located in northern Korea and awarded to a Chinese noble referred to as Gija. Later records in Chinese documents from 200 BCE to 313 CE provide descriptions of various small states in areas of Korea and Manchuria.
Seen from the vantage point of Chinese authors, our picture of ancient Korea begins to take shape during China's Han dynasty, when the peninsula was home to a number of small tribes, cultures, and communities living near the borders of the Chinese empire. From these Chinese records it is also clear that the earliest Koreans were in constant contact and exchange with not just the Chinese but also Inner Asian Steppe peoples like the Xiongnu as well as settlers in Japan. Thus, ethnicity in ancient Korea was quite fluid and prone to change. Groups borrowed liberally from each other's cultures, traded, and were absorbed and transformed by conquest.
The transformation of Korea into a unified culture and civilization is a story with many stops and starts. Historians often begin with the narrative of the Han dynasty historian Sima Qian, which tells of the dynasty's efforts to suppress the Xiongnu by invading northern Korea and establishing four garrisons there, from the Liao River to near today's Seoul. The presence of Chinese generals, troops, and settlers spurred exchange with societies on the Korean peninsula, which borrowed from Chinese culture coins, seals, artwork, and techniques for building roads and mounded tombs. Adopted by tribal chieftains and aristocratic warrior families, Chinese culture provided a wealth of material needed to engineer the first Korean states, which came to control large areas of the northern half of the peninsula.
When unable to trade, Korean tribal societies mimicked the Xiongnu and raided the Chinese settlements, drawing strength and forging their own war bands. Among the early Korean polities noted in Chinese records in the north were Joseon, Goguryeo, and Buyeo, a frequent ally of the Han. With the collapse of the Han, each of these Korean societies lost a valuable partner and source of weapons, technology, and wealth. In the fourth and fifth centuries CE, all three struggled to defend themselves against rising powers in the north, such as the Xianbei.
In roughly the same time frame, in the central and southern parts of the Korean Peninsula, and therefore beyond the reach of Chinese administration, three groups known collectively as the Three Han established their territories. Chinese sources from the time refer to people in the southwest as the Mahan, those in the southeast as Jinhan, and between them a group known as the Byeonhan. These societies were ruled by aristocratic families that chose a chief and controlled the lives of lower-ranking commoners, servants, and enslaved people.
The Three Han were far less formidable military powers than their counterparts to the north, in part because they lacked horses and fought primarily on foot. On the other hand, they showed considerable cultural fluidity and knowledge of their neighbors. In Jinhan, many tattooed their bodies in decorative patterns like those found on the bodies of the Wa living in Japan, not surprising given that groups moved relatively freely between Japan and Korea in this age. Residents of Korea traveled to Japan, sometimes as traders and fishers and other times as migrants and permanent settlers. In Mahan, clothing and hairstyles mimicked a style used by the Xianbei on the Inner Asian Steppe, even as their lifestyles revolved around farming rather than a nomadic life on horseback. Indeed, it appears that early Korean societies were quite selective in their borrowing.
Practices such as the levirate, in which a young male marries his elder brother's widow, were used widely by Inner Asian Steppe peoples and adopted by a number of early Korean ruling families. But the decision whether to emulate the Chinese, Japanese, or other neighbors presented a range of options and cultural choices for chieftains and elite families to build on in this age.
The decline of Chinese power in the fourth century unleashed a wave of refugees that proved pivotal in speeding up the process of state-building in Korea, opening an age known as the Three Kingdoms (313–668 CE). The three kingdoms in question were Goguryeo, Baekje, and Silla (Figure 5.11). Chinese immigrants resettling within the bounds of Korea provided a source of knowledge about political practices that strengthened the rule of elites, transforming them into kings. These kings commanded large armies, drawing legitimacy from their military prowess and creating mounded tombs that required vast resources and labor. From the struggles against groups such as the Khitan and Xianbei in the north emerged the kingdom of Goguryeo. In the late fourth century, under the leadership of King Gwanggaeto, this kingdom drove southward in a series of expansionist wars against its main rival, the Korean kingdom of Baekje and its allies from Japan, the Wa. In doing so, Goguryeo managed to make the third of the Three Kingdoms, Silla, a vassal by the early fifth century.
Beyond Goguryeo’s militarism, its expansion was marked by two other critical developments. The first was its skillful use of diplomacy and regional politics to manage alliances and threats, playing off groups within and around the Korean peninsula to secure its power. The second was the adoption of a written script from China, evidenced by 414 CE in a stone slab inscribed to note the accomplishments of King Gwanggaeto upon his death. Goguryeo’s elites also learned from the Chinese the art of adorning their large, mounded tombs with colorful murals depicting the lives of royals surrounded by dancers, servants, and enslaved people. Images of large battles, wrestling matches, and mythical creatures such as the phoenix in other mural scenes suggest the emergence of a rich courtly life.
During the Three Kingdoms era, the Chinese writing system spread throughout Korea, allowing those excluded from the ranks of aristocratic families a chance to seek appointment as scribes. The literacy necessary to study Confucian texts or Buddhist sutras, teachings of the Buddha, was a rare and very valuable skill. Knowledge of Chinese culture was another means to a life within the Korean courts, especially when writing poetry became a favorite pastime of Korean royalty and composing an eloquent verse was a critical sign of nobility and refinement. Many kingdoms sent royals, aristocrats, traders, scholars, and monks to China as apprentices to acquire skills and expertise they could bring home. These groups had an indelible impact on early Korean culture and society, particularly as Buddhism developed and grew into distinct sects and traditions. A few Korean Buddhist monks even traveled as far as India and central Asia, while others worked as teachers in the Three Kingdoms, inspiring new forms of painting, sculpture, and jewelry, and later the famed Buddhist monument, the eighth-century Seokguram Grotto (Figure 5.12).
These changes to the Korean political, social, and spiritual landscape also powered Goguryeo's rival state Baekje. The kingdom of Baekje emerged from its home near the Liao River in Manchuria to conquer and absorb the Mahan territory in 369. To consolidate their control over the southwestern area of the peninsula, Baekje's rulers created a Chinese-style bureaucracy with a chief minister and carried on a successful maritime trade with China and Japan. Demand for Chinese culture, weapons, and Buddhism gave Baekje influence and prestige in Japan. However, a military defeat by Goguryeo and Silla in 475 kept Baekje hemmed in below the Han River. Afterward, Baekje turned its energy to upsetting the balance of power on the Korean peninsula by entreating Silla to rise against Goguryeo, its protector and overlord.
Ultimately, however, it was Silla that emerged from these plots to unify a larger share of today’s Korea than any kingdom that preceded it. The smallest kingdom at the beginning of the era, located in the southeastern corner of the peninsula, Silla was ruled by powerful families that, like their neighbors, eventually copied models from China to wield power. Silla’s rulers created Chinese-style ministries and codes of law and supported the practice of Buddhism to enhance their prestige and legitimacy. Maritime trade later proved a channel for Silla to form an alliance with a reunified China under the Sui and then the Tang dynasties. Conflict between Goguryeo and the Sui began in the 590s and lasted for decades. Later, the Tang supplanted the Sui and renewed Chinese ambitions to dominate the Korean peninsula.
By the 640s, the skillful diplomacy of Queen Seondeok of Silla (Figure 5.13) had leveraged the hostility between Goguryeo and the Chinese into the means for a Silla alliance with the Tang. Her kingdom's ships proved invaluable in ferrying Chinese armies onto the peninsula to wage their wars of conquest. Together, Silla and the Tang first subjugated Baekje and then eliminated Goguryeo in the north. Then, while the Tang set up bureaucracies to administer Korea, Seondeok's successors in Silla conspired with the defeated forces of their rivals to evict the invaders. Together, the remnants of the Baekje and Goguryeo armies under the sway of Silla expelled Chinese forces in 676, ushering in a new era of unified rule over much of the peninsula that lasted from 668 to 892 CE. For a time, Silla severed its relations with the Tang, forgoing a critical resource that had powered its survival for centuries. But by the eighth century, new threats to both the Tang and Silla had emerged in the north. As a result, Silla once again sent tribute to the Chinese in return for protection and trade.
Part of Queen Seondeok's legacy was a period of unprecedented female rule, during which new art forms emerged that in later centuries became distinctive Korean traditions. In ancient Korea, women wielded power as royal princesses, and affluent women often served as advisors and regents. But Queen Seondeok's reign paved the way for the future queens of Silla, Jindeok and Jinseong, to inherit the throne. Over time, Korean artisans learned from China how to make celadon ceramic, known for its lustrous green glaze, creating exquisite vases, jugs, bowls, and even pillows with Buddhist motifs such as cranes and clouds. In later centuries these works helped support a robust trade network running from China through Korea to Japan.
Beyond the Book
The Tombs of Goguryeo
Chinese rulers were not the only ones to build tombs that provide us with clues about what they valued in life. The rulers of the Korean kingdom of Goguryeo also constructed tombs decorated with murals depicting everyday scenes, presumably representing the lives they had lived and the lives they hoped to have after death. Shown here (Figure 5.14) are murals from the tomb of a Goguryeo man who was buried in the fourth century CE, not long after the end of China's Han dynasty. As you study the images, consider what they tell us about life among the Goguryeo elite at this time.
- What do these murals tell you about the lives of Goguryeo’s elite? What did they value and what were their concerns?
- Do you see any Chinese influences in these depictions? If so, where are they?
Ancient Japan
As in Korea, geography shaped much of Japan's early development and history. Four main islands make up Japan. The northernmost is Hokkaido. Then comes Honshu, which is home to Tokyo and the largest present-day population. Continuing south, the next island is Shikoku, and finally Kyushu, which is closest to the Asian mainland. Japan today also includes the islands of Okinawa and thousands of others strewn across the Pacific. But much of the story of ancient Japan concerns only the main isles, the inland Sea of Japan, the country's countless mountains, and a few fertile plains fed by monsoon rains that sustain agriculture.
Critical to the formation of the main islands and their geographical features is the belt known as the Ring of Fire (Figure 5.15), mapped by a horseshoe-shaped line drawn around the rim of the Pacific Ocean to mark a zone of frequent earthquake and volcanic activity that has generated countless tsunamis in Japan's distant past and in the modern day.
The story of prehistoric Japan is typically divided into two halves: the era of Jōmon hunters, ranging from 14,500 to 300 BCE, and that of the Yayoi agriculturalists who emerged after them to dominate the centuries from 300 BCE to 300 CE. These groups had no concept of themselves as "Japanese" in a sense recognizable to us today. But they left their imprint on the main isles via migration, settlement, and the development of practices for making the fertile plains, mountainous forests, and innumerable ocean bays and rivers their homes. Without written records, our knowledge of the Jōmon and Yayoi is almost wholly based on the archaeological evidence and contemporary theories about the pathways humanity followed out of Africa and across the world. Early hominids likely made their way to the Japanese archipelago when it was still connected to the Asian continent, perhaps over 100,000 years ago. Hunting giant mammals such as elephants, wolves, and enormous deer, many of these early hunters moved to an area near today's Sea of Japan. Much later, changing climates led ocean levels to slowly rise and cover the stretches of land that connected Japan to the Asian mainland.
Linguists theorize that ultimately three early waves of foragers and hunters made their way overland or across the Tsushima Strait that separates the four main isles from the Asian mainland. These groups were descended from Ural-Altaic, Chinese, and Austro-Asiatic peoples. The climate shift that severed Japan from the Asian continent around twelve thousand years ago transformed the newly formed isles and the game these people once hunted. Grasslands for bison disappeared, for example, and wolves became smaller.
But the early inhabitants adapted, inventing new technologies to enhance their chances for survival. In the caves of Kyushu, archaeologists have found evidence of one of the world's earliest technological breakthroughs, the development of pottery. Known for its elaborate handles and distinctive cord patterns around rims, this pottery allowed Japan's inhabitants to become more sedentary and less dependent on finding wild game and foraging for edible plants. First used to store vegetables and boil water from the sea to make salt, the pottery called Jōmon was likely later used for rituals to promote unity and cooperation (Figure 5.16). The increasingly sophisticated culture of the people, also called Jōmon, was characterized by settlements with shared spaces for burials, food storage, and elaborate ceremonies. Among the important cultural symbols were earthenware figurines known as dogu.
Skeletal remains of the Jōmon people suggest that despite their diverse diet of fruits, nuts, and seafood such as clams and fish, they lived with the constant threat of starvation and malnutrition. Another shift in the region's climate produced a deadly drop in temperatures that led to a decline in the populations of game such as deer and boar. To survive, many Jōmon moved to the coastal areas to supplement their food supply by fishing. Others likely began experimenting with early forms of agriculture; evidence suggests the cultivation of yams and lily bulbs in the years from 3000 to 2400 BCE. Further evidence of early agriculture comes from traces of rice found in jars dated to the later years of the Jōmon era.
Agriculture and the Yayoi
The next phase of Japan's prehistory is marked by the leap into agriculture and is known by the name of a separate people and culture, the Yayoi. Beginning in 300 BCE, a new wave of migrants descended from groups in northern Asia began arriving on the southern island of Kyushu. They brought with them knowledge about cultivating barley, buckwheat, and later rice, and gradually they overwhelmed and replaced the Jōmon. Later, the Yayoi built impressive storehouses for grain and domesticated horses and dogs. Other archaeological sites show that, as Yayoi culture spread, the people developed the capacity to engineer the landscape for farming by creating irrigation canals, wells, and pits. Agriculture brought stability and growth, and the Yayoi population is estimated to have ballooned to more than half a million people by the first centuries of the common era.
The Yayoi period marked a turning point in Japan's prehistory. From this point forward, Korea, China, and Japan were in more consistent contact than in the centuries before. This was especially the case because in the Bronze Age, with copper in short supply on the islands of Japan, the Yayoi were forced to import much of the material. The Yayoi period also marked the beginning of a written record of Japan.
Han conquests and the construction of garrisons on the Korean peninsula began a period of trade in bronze mirrors, iron weapons, and agricultural practices transmitted via Korea to Japan. The Han dynasty (206 BCE–220 CE) and later the kingdom of Cao Wei (220–265 CE) also sent occasional envoys to Japan (which the Han called the Wa kingdom). These envoys left behind the first written records of the lives and cultures of the Yayoi. Their observations show a slow but gradual transformation of society and politics. Despite increasing food surpluses and material abundance, the Yayoi people at first remained largely communal, sharing wooden tools and public spaces. Over time, the appeal of certain areas and sites for agriculture led to competition and increasing warfare and, by extension, the emergence of states to provide for defense.
Chinese records also note the distinctive style of dual governance—in which power was shared between male and female rulers—that developed in early Japanese states at the end of the Yayoi era. Among the notable rulers was Queen Himiko, who ruled in the early third century and, in return for paying tribute to the Chinese emperor, was recognized as an ally and given a golden seal. Ruling alongside Himiko was her younger brother, who handled the administration of her realm. The Cao Wei's records show that Himiko was at war with a neighboring king and staked the legitimacy of her rule on her spiritual powers, expressed in elaborate burials, the practice of divination, and other sacred rituals. Through such specialization, ancient Japanese women exercised political power and influence, possibly built upon the legacy of the Jōmon, whose dogu figurines depicted women as deities ensuring fertility and safety.
Chinese envoys noted that Japan was also home to an increasingly stratified society that included aristocratic families, merchants, skilled divers and fishers, farmers, and other commoners. With warfare constant, palaces looked more like garrisons, but granaries and markets were full and lively. And while these early Japanese lacked a written script, the Yayoi did develop a rich art form literally written on their bodies, as the practice of tattooing patterns to denote rank, status, and family was widespread in this era.
The Dawn of the Yamato Age
Records of Queen Himiko's era also suggest a growing concentration of political power and control over territories held by a loose confederation of states and powerful families. This period, known as the Yamato era, was marked by the construction of tombs for deceased royals like Himiko, who were buried with an impressive array of treasures and human sacrifices to accompany them in the afterlife.
It may be that the onset of the Yamato era was produced by a changing climate and constant turmoil in Japan, as a result of which many of the islands' inhabitants despaired and abandoned deities who seemed negligent in their duty to protect them. Instead, new gods associated with an imported technology—mirrors from China and Korea—arose to take their place. Powerful rulers and a new military class forged from warfare associated themselves with these new gods, or more importantly, with the female god of the Sun, Amaterasu, who soon became the ancestral figurehead of the imperial household. Yamato kings further accrued power by brokering alliances, managing trade, giving symbolic gifts, and presiding over ceremonies designed to forge a common culture across Japan.
As co-rulers of a kingdom or heads of households, women continued to wield political clout by using expertise in sorcery via items such as mirrors and often expressing their triumphs in gold jewelry and earrings. Many spiritual practices were imported from earlier Chinese dynasties such as the Shang dynasty. For example, during the burial of Queen Himiko, Chinese envoys recorded that the Japanese employed the art of divining the future with heated bones by reading their cracks to foretell the outcome of harvests and wars. Other burial practices such as water purification were more indigenous to Japan and left an imprint on later religions such as Shintoism. Later, new foreign religions such as Buddhism were used by women such as Empress Suiko, who ruled in the early seventh century CE and sought to preserve women's role in politics through practices such as piety, rigorous study of sutras, and the construction of shrines and temples.
Regardless, it was the construction of large keyhole-style tombs and control over the burial rituals that brought the Yamato rulers power over the area stretching from western Honshu to northern Kyushu. Employing laborers and skilled artisans such as blacksmiths, the Yamato tomb makers showed their wealth and organizational capacity, skills they used later to create capital cities with large markets and highways to the countryside and the coastal ports. To centralize power, kings soon began issuing law codes, such as Prince Shotoku's Seventeen Article Constitution in 604. The emphasis on law as the basis for rule, the creation of a bureaucracy to help rulers govern, and Confucian values embedded in the document show the Yamato's reliance on Chinese culture as a source of ideas and inspiration. Borrowing from the Chinese model for imperial statecraft, the Yamato strengthened their rule with mythology and bejeweled regalia, elevating kings to godlike status and eventually transforming them into emperors. Later legal codes such as the Kiyomihara Codes in 689 organized monasteries, created a judiciary, and managed relations between the king's advisors and vassals. These set the stage for the evolution of Japan's culture and political system in the later Nara and Heian periods.
Link to Learning
Prince Shotoku’s “Seventeen Article Constitution” was an effort to reform the Japanese state along the lines of the Chinese imperial model. Read the translated law codes at Columbia University’s Asia for Educators site.
To the north, beyond the lands of the Yamato, was another group descended from the early Jōmon foragers who had lived on and resisted the sweep of Yayoi settled agriculture. Later, they continued to forage and hunt and practice their own spiritual beliefs, rejecting the Yamato cultural sphere and its borrowings from China, such as Buddhism, Confucianism, and the idea of large states governed by kings and emperors. While the later Japanese imperial courts in Nara and Heian deepened ties with China's Tang dynasty, these northern people existed in another orbit defined by contact with smaller northern Asian cultures, such as the Satsumon people of Hokkaido. Relations with the descendants of the Yamato to the south soured over these centuries, as Nara and Heian came to call the northerners Emishi, or "barbarians." The Emishi survived from the seventh to the eleventh century despite repeated attempts by emperors and militarists from the south to subjugate them. They resisted via war, moved to remote areas, and used other forms of evasion, but the culture of Japan's south moved inexorably north, and the smaller remnants of the Emishi were subdued by the end of the ninth century.
Southeast Asia
The term "Southeast Asia" describes a large area in subtropical Asia that can also include thousands of islands in the Pacific. Today it often refers to Brunei, Burma, Thailand, Laos, Cambodia, Vietnam, Malaysia, Singapore, and Indonesia. For much of human history, travel across this area was far easier by boat along the shore and between islands than overland. Lands were more sparsely populated than in India, China, Japan, and Korea, and most communities were isolated from their closest neighbors by forests and mountains. Early on, however, they became able to engage with other peoples through the sea lanes.
In general, communities in Southeast Asia settled first along coastlines near rivers, lakes, and the oceans and seas. But archaeological sites in Thailand, Burma, and Laos prove that many chose to make upland regions their homes as well. As in India, early agriculture was driven by the rhythms of the monsoon season. Farmers developed rainwater tanks to manage their supply of water and learned how to grow rice in paddies. A reliance on slash-and-burn agriculture meant that many Southeast Asians had to migrate after the soil had been exhausted, making the population fluid as people moved from one area to the next. The social structure, too, was less stratified than in India and China. Only with the later arrival of new religions such as Buddhism and Hinduism did priestly and kingly classes start to form and play a central role in religion and politics.
Despite its great territorial expanse and varying climates and topography, the region does have broad commonalities that make it useful to see Southeast Asia as one geographic and cultural zone (Figure 5.17). For example, its location between India and China led to the growth of royal courts that borrowed from foreign traditions to develop rituals and diplomatic relations, assert control over ordinary farmers and fishers, and create trading-post empires. The region’s geography and climate also made sailing a universally efficient craft. For centuries, merchants and adventurers traveling from the Indian landmass along the coastlines of Asia have exploited monsoon winds from June to November that easily push boats all the way to the Malaysian peninsula. Return voyages were made possible by a second set of monsoon winds blowing in the opposite direction from December to May. The holdover period between the two monsoon seasons proved ample time for merchants and missionaries to transplant customs, religion, and art from India to new environs in Southeast Asia. At the same time, the arrival of boats and merchants traveling from Vietnam and Malaysia to China and back as early as 300 BCE meant residents of Southeast Asia enjoyed a rich marketplace of ideas, goods, and cultures at a very early stage in world history.
While the influx of foreign ideas was critical to the development of societies across Southeast Asia, each local community made selective adaptations and preserved its indigenous customs. For example, the importance of the individual family was a point of commonality for many societies in Southeast Asia, in contrast to the weight given extended families and clans in India and China. Most peasant communities in Southeast Asia also afforded women higher status than their counterparts in China were allowed under the stricter Confucian value system.
Archaeological remains of the region's prehistory show that inhabitants of northeast Thailand used bronze and mastered agriculture as early as 3000 BCE. Evidence found at Non Nok Tha shows that they grew rice and cast bronze in factories using molds, later producing iron objects. The spread of rice cultivation produced densely populated centers along the region's smaller fertile plains. Expanding populations were often forced into hilly regions, which they made suitable for farming by creating terraces. Migration chains and artifacts such as simple tools suggest that by the time the inhabitants of India began making contact with Southeast Asia, the islands and coastline settlements there were dominated by peoples related to Malays, who had made their way from southern China. Expert sailors working with finished stone tools and navigating by the stars, these peoples developed long, narrow boats that navigated Southeast Asia's waters with speed and grace. Moreover, they left behind cultures with maritime traditions that echo today among Malaysians, Indonesians, and the people of Singapore.
The archaeological record of Southeast Asia’s prehistory is less clear than that of many other areas of the world, however, and its study has been hampered by many circumstances, including the political volatility of countries such as Vietnam and Cambodia after 1945. As a result, historians often look to the region’s villages and families for insights into its remote past. For example, cultivating rice in terraced rice paddies requires skill and cooperation among many families, likely making this task the basis for village leadership and unity. Elders with experience in selecting breeds, transplanting young plants, and negotiating water resources likely used their authority to foster consensus around values and politics oriented toward giving deference to seniority. It is also possible that growing rice is particularly suited to cultures with animist religions, which venerate deities and spirits thought to inhabit nature. Rites and festivals to honor grains and timber and to appease forces that control wind and rain are still important to local cultures in Southeast Asia today, even as many people also participate in universal religions such as Buddhism .
Occupations offer another important point of continuity. Fishing, farming, and craftwork in fabrics are depicted in carvings found in caves, temples, and mountainsides and remain the primary labor activities of rural peoples today. For example, in Brunei many people still live much of their life on the water—at work as fishers and divers as well as at play when racing boats and swimming. Houses and many other buildings are still situated in the hills or on stilts to protect them from flooding, and many people share a diet of fish, simple grains, and coconut, just as their ancestors did.
In early Southeast Asia, trade and the arrival of outside religion were critical to the development of larger states and powerful kingdoms. Even in the interior, Buddhist artwork and texts flowed in steadily from 300 to 600 CE. The mouths of great rivers linked the interiors and the coasts, and capitals and small principalities that developed there taxed the trade on goods traveling to and from the wider world. During these centuries, Southeast Asians also traveled to India to trade and learn Sanskrit. When Indian elites and literate Buddhists arrived, they came to be known as purohita, advisers to Southeast Asia's powerful chiefs and nobility. Other immigrants became teachers and founded temples across the region's landscape, critical hubs that promoted travel, learning, and commerce.
As they did in India, Buddhism and Hinduism coexisted with local religions in much of Southeast Asia. In areas such as the kingdom of Srivijaya, which ruled over the island of Sumatra and the southern Malay Peninsula from the seventh to the twelfth centuries CE, Indian merchants and missionaries were welcomed, while the people retained their own religious traditions rooted in the worship of spirits that inhabited trees, rocks, water, and various physical features of the land. Proclaiming themselves "Lord of the Mountains," Srivijaya's rulers patronized Buddhism to foster trade relations across the Malaccan Straits and Indian Ocean.
Other communities, such as nearby Borobudur, which controlled central Java in Indonesia, were more firmly devoted to Buddhism. There, Buddhism inspired countless converts and the later Shailendra Kings (775–860 CE) to erect the world's largest Buddhist monument, a structure rising more than one hundred feet above the ground and adorned with magnificent artwork (Figure 5.18). Buddhists from all over Southeast Asia made pilgrimages to Borobudur, leaving behind thousands of clay tablets and pots as offerings. Wreckage from a nearby ship dated to the ninth century shows that the people of Borobudur were engaged in commerce that connected them to Islamic and Arabic cultures in the Middle East. Like many heads of Southeast Asian states, Borobudur rulers staked their political legitimacy on setting a pious example for their subjects and thrived economically by opening their ports to the wider world. Thus India's centrality to much of Southeast Asia in the ancient world was founded on trade, religion, and art. India was a repository of desired goods and a source of inspiration for religion and state-building, but also a bridge to the wider Eurasian world.
While much of Southeast Asia faced west toward India as the center of trade, culture, and religion, the area near today's Vietnam fell within the orbit of China's cultural sphere emanating from the east. The natural geography of Vietnam creates three distinct zones that shaped the evolution of the country from ancient times to the present: one area in the north surrounding the Red River delta; below that, in the south, another densely populated center on the Mekong River delta; and lastly, a long narrow land bridge along the coast squeezing between mountains to join the other two areas together. Humans practicing wet-field rice agriculture developed settlements in the northern zone sometime around 2500 BCE, and a millennium later, there is evidence of bronze-making by the region's inhabitants. But the most notable contribution to world history from this area in northern Vietnam came from the Dong Son culture (c. 600 BCE–200 CE), defined by its remarkable bronze drums decorated with cords and images of animals such as frogs. Dong Son drums have been found at sites all over Southeast Asia.
Whether the Dong Son culture and its drums originated in Vietnam or inside China near Yunnan province is the subject of debate. Evidence suggests that southeast China below the Yangtze River was once home to peoples who were more strongly linked, culturally and linguistically, to Southeast Asia than to the dynasties in the north such as the Shang and Zhou. During the Zhou dynasty, many non-Chinese societies and kingdoms inhabiting provinces such as Fujian and Yunnan were known as the Yue, the Mandarin version of "Viet." These areas and groups remained independent of Chinese control for centuries. Chinese records of the Yue demonstrate their sophistication and diversity. They were known for practicing wet-field rice cultivation, adorning their bodies with tattoos, and traveling widely by boat along the seas and the Red River that linked China to Vietnam.
These Chinese records further indicate that many early Vietnamese groups spoke a multitude of languages and were divided into as many as one hundred small polities, kingdoms, tribal clans, and autonomous villages. Unlike in northern China, there appears to have been no successful drive to centralize power under a unified dynasty in Vietnam's prehistory. In later centuries, many Vietnamese accepted the mythological lore of a mighty king known as Van-Lang, who in the seventh century BCE united the various tribes of the Yue and established a dynastic line of Hung kings. This origin tale eventually evolved to include a divine origin for the Vietnamese people, telling of a union between a dragon lord and a female mountain deity that produced the Hung royalty.
At best, Vietnam's prehistoric record can only validate the idea that chiefdoms grew increasingly large around 258 BCE. By then, the rulers of a new kingdom known as Au Lac had constructed an impressive capital arranged in the shape of a widening spiral near today's Hanoi. Later, around 179 BCE, Au Lac was conquered by another kingdom, Nam Viet, an offshoot of China's Qin dynasty. The area was later retaken by the Han dynasty, which attempted to establish permanent control by dividing its territory spanning southern China and northern Vietnam into nine administrative units. In 40 CE, Han control of the Red River delta ran afoul of two rebellious daughters of a Vietnamese general, known as Trung Trac and Trung Nhi. The uprising launched by these women rallied native resistance from southern China to central Vietnam (Figure 5.19). Briefly victorious, the Trung sisters' rebellion was eventually crushed. Their legacy was indelible, however, and stories of their exploits riding elephants into battle became a source of Vietnamese nationalist pride and rejection of encroachment by outsiders such as the Chinese and, much later, the French.
The end of the Trung sisters' uprising began a period of more direct Chinese governance, with the aim of assimilating the region and its inhabitants. Over time, however, the families of the Han generals and officials who were sent as administrators took on many local habits and customs, blurring the boundaries between Chinese and Vietnamese culture in the ancient world. By that time, the area around the Red River delta had become critical to the Han's maritime trade in Southeast Asia. Thus, even after the dynasty collapsed, China's political dominance of northern Vietnam lasted into the next few centuries. Sporadic uprisings continued, occasionally resulting in independence for rulers in northern Vietnam. But the Sui and Tang relaunched campaigns to reabsorb the Red River delta. Thus, northern Vietnam remained on the border of the Chinese imperial frontier for centuries.
Farther south, an area known in the ancient world as Champa was settled by a wave of people arriving from the sea around 500 BCE. Distinct from the Dong Son culture, these people engaged in trade across the waterways of Asia, from India to the Philippines. Chinese records of a civilization in this central region of Vietnam describe a unique people who reserved a higher status for women than for men, and who used an Indian script written on leaves from trees. Indeed, all the surviving inscriptions on artifacts found within this region that date from before the ninth century are written in Sanskrit.
Another import from India to central Vietnam was the idea of a society led by a priestly Brahman class and deities such as Shiva, identified with the Champa kings. Still, indigenous spirits and ancestors were worshipped as well, coexisting alongside the Indian imports whose foreignness faded slowly over the centuries. Lacking a large agricultural region to supply a powerful state, Champa may have been a region with many centers, loosely knit by trading networks exchanging rice, salt, horns, and sandalwood.
Even farther to the south, people known as the Khmers had made the Mekong River delta their home by the early centuries of the common era. Chinese texts referring to this region named it Funan, and its history shows many similarities to that of Champa, its neighbor to the north. Funan too was engaged in wide trade. Archaeological remains show items that made their way to Vietnam from India, the Middle East, and Rome. Funan's inhabitants and rulers imported features of Indian culture such as Sanskrit to help create royal courts, but little writing survived until the development of the powerful Khmer empire in the ninth century.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://human.libretexts.org/Courses/Coastline_College/Hum_C100%3A_Introduction_to_the_Humanities_(Volmer)/02%3A_Ancient_Worlds/2.09%3A_Korea_Japan_and_Southeast_Asia",
"book_url": "https://commons.libretexts.org/book/human-206684",
"title": "2.9: Korea, Japan, and Southeast Asia",
"author": "OpenStax"
} |
2.10: Vedic India to the Fall of the Maurya Empire
Learning Objectives
By the end of this section, you will be able to:
- Explain the caste system and the way it functioned in Indian society
- Identify the main elements of Buddhism
- Describe India's faith traditions: Brahmanism, Buddhism, and Hinduism
Few areas of the world are as important to our understanding of the emergence of human civilizations as India. Occupying an enormous subcontinent in South Asia, India has three distinct geographic zones: a northern area defined by the Himalayas that forms a natural barrier to the rest of the Asian mainland, the densely populated river valleys of the Indus and Ganges Rivers that lie to the south and northwest of that area, and lastly the tropical south, cut off from those valleys by many mountains and thick forests (Figure 5.20).
Early humans traveled into Asia in waves around sixty thousand to eighty thousand years ago, moving from Africa to the Arabian Peninsula into India and beyond, on routes that hugged the coast. Some of the earliest evidence of this migration was found at Jwalapuram, India. Here, hundreds of stone tools dating to 74,000 BCE were discovered that resemble those of roughly the same age found in Africa, Laos, and Australia. But the roots of India's ancient civilizations lie in the north, amid the archaeological remains of two ancient cities, Harappa and Mohenjo-Daro.
Harappa and Mohenjo-Daro
Unlike ancient cultures in Mesopotamia (3500–3000 BCE), Egypt (3500–3000 BCE), and China (2200–2000 BCE), the Indus valley civilization shows little evidence of political power concentrated in the hands of hereditary monarchs. Yet its culture and technology spread, in an area running from parts of present-day Afghanistan into Pakistan and western India. There, early human communities capable of agriculture flourished near the fertile plains around the Indus River and other waters fed annually by the region’s monsoons.
Farmers harvested domesticated crops of peas, dates, and cotton, harnessing the power of draft animals such as the water buffalo. The archaeological record shows few traces of any kind of elaborate monumental architecture, burial mounds, or domination by warriors and kings. Instead, a common culture grew that was defined by urban planning, complete with advanced drainage systems, orderly streets, and distinctive bricks made in ovens. Equipped with those tools, the Indus River valley produced two of the ancient world’s most technologically advanced cities, Harappa and Mohenjo-Daro. Within them, residents developed a highly urban society and rich spiritual life, with altars featuring fire and incense, practices such as ceremonial bathing, and a symbolic vocabulary using elephants and bulls as revered animals. Dedicated artisans made jewelry and fabrics. All these aspects of the Indus valley culture left an imprint on later Indian civilizations.
How did a civilization with a high degree of labor specialization and the coordination necessary for irrigated agriculture and large urban centers manage such complexity without a powerful centralized state? There is no consensus answer, though the Indus valley civilization may have developed as a series of small republic-like states, dominated by religious specialists such as priests presiding over an intensely hierarchical class system. It does seem likely, however, that the environmental toll the civilization inflicted upon the surrounding areas led to its decline. Over time, irrigation replaced fertile soil with soil having greater quantities of salt, lowering crop yields. The use of wood as a fuel source, such as for making the oven-fired bricks, led to rapid deforestation and even greater soil erosion. It appears that most communities in and around Harappa and Mohenjo-Daro abandoned the sites around 1700 BCE, when they became unable to feed and supply themselves. Before their decline, however, the two cities housed perhaps as many as forty thousand residents each, most of whom lived in comparatively luxurious homes of more than one story that featured indoor plumbing and were laid out in an orderly pattern along grid-like streets. Public buildings such as bathhouses were quite large, as were the protective city walls and citadels.
The development of a written script, found on clay seals and pottery at the sites, likely made such feats possible. The written language of the Indus valley civilization featured more than four hundred symbols that functioned as pictures of ideas, words, and numbers. While many of the symbols have yet to be deciphered, one of the primary functions of writing appears to have been commerce because many finished goods were stamped with written seals. Writing used as a means of communication and recordkeeping probably also helped the Indus valley civilization profit from long-distance trade with Mesopotamia and Egypt.
Merchants from Sumer traveled to the Indus River valley to establish trade in luxury items such as lapis lazuli. In return, it appears that traders and merchants from cities such as Harappa took up residence in cities in Mesopotamia to facilitate exchange. In this way, Mesopotamia exerted a recognizable influence on India's art and culture. Scholars have identified aspects of Greek naturalist art in sculptures found in Harappa, combined with local preferences for representing human bodies in motion rather than adopting the Greek emphasis on anatomical correctness. Art from these early cities helped usher in artistic styles and motifs that created a continuous tradition ingrained within Indian culture. Stone seals with fantastic beasts and anthropomorphic deities were later associated with Indian traditions such as yoga and Hindu deities.
Significant archaeological evidence suggests that urban women in the Indus valley were influential figures who functioned as specialists in rituals. More figurines were found depicting female than male deities, and women were typically buried with female relatives—their mothers and grandmothers—and not with their husbands. This is not to suggest that all women were equals. The prevalence of contrasting hairstyles and clothing on many surviving figurines indicates that women were differentiated by a great number of class and ethnic markers.
Among the more intriguing clues to the way women fared in the Indus valley is a tiny artifact from Mohenjo-Daro called Dancing Girl, a bronze and copper figurine about 4.5 inches tall and dating from around 2500 BCE (Figure 5.21). Created by a bronze-casting technique known as the "lost wax" method, the nude figure appears in a confident and relaxed pose, with her hair gathered in a bun. She may have represented a royal woman, a sacred priestess of a temple, or perhaps a lower-born tribal girl. That scholars can draw such a wide array of plausible conclusions speaks to the fact that the Indus valley likely had a very fluid class structure and a highly complex society.
The Aryans and Brahmanism
The Aryans entered the Indian subcontinent as conquerors beginning in 1800 BCE. With them came a new religion, the Vedic religion, named for its hymns, the Vedas. Vedas were sung in rituals to celebrate a pantheon of gods representing various aspects of nature and human life and were a useful way of teaching, given that the Aryans were illiterate. Gods such as Varuna ruled the sky, while Indra was the god of war. The Aryans offered ritualistic sacrifices to their gods and built enormous altars of fire, imposing a hierarchy on the people they conquered that emphasized strict observance of the law. The Vedas, along with poems and prayers, were first transmitted orally from one generation to the next; later they were recorded in the written language of Sanskrit. Over time, the Indian peoples added new dimensions to the Vedic religion, changing the nature of Aryan society as well. New gods such as Soma, associated with magical elixirs, storehouses for grain, and the moon, grew in importance as the practice of ritual became ever more meaningful.
A later series of treatises known as the Upanishads, written by a priestly class called Brahmans, developed new expressions of the Vedic religion, gradually transforming it into what many scholars refer to as Brahmanism. These new expressions include samsara and karma. Samsara was a view of humanity and the universe in which the soul left the body after death to be reborn. Karma represented the idea that all human actions, moral and immoral, were counted and weighed, ultimately governing whether a person was reborn higher on the spiritual ladder in the next life, perhaps as a king or priest, or—if ruined by immoral acts—as a lower life form, perhaps a detested reptile, to try again. The ultimate goal of a person's earthly life was to achieve union with Brahman, the ultimate and universal reality. Even gods needed to perform good acts such as penance or meditation to transcend to a higher plane of existence. Belief in reincarnation supported the idea that a person's status in the present life came about not by chance, but rather as a consequence of past lives. Thus the authority of elites such as the Brahmans was sanctified as reflecting the divine will of the cosmos.
In this way, the Vedic religion of the Aryans produced the varna, a strictly hierarchical society based on inherited status. At the highest level were the Brahmans, who exerted authority by virtue of their knowledge of the sacrificial rituals and their role as guardians of the poems, hymns, and later texts that carried on the Vedic traditions. Below them were aristocratic warriors, Kshatriya, members of noble families who fought in small but effective armies to protect their kingdoms and carry conquest into new areas. Members of the third class living in the upper half of society were merchants and Aryan commoners, Vaishya, who along with the other two enjoyed privileges based on the idea that in their late childhood they underwent a rebirth.
The fourth major group were the Shudras, non-Aryan servants and peasants who were denied the opportunity to read or listen to Vedic hymns and accounted for more than half the population. At the bottom of society were the Dalits, a class of "untouchables," who were likely the descendants of the populations that lived in parts of south India before the arrival of the Aryans. They were effectively outside and hierarchically below the four-tiered caste system. Prohibitions against marrying Indians from another caste were just one element of a constellation of provisions designed to keep everyone locked in their inherited class from birth to death. Taxes on the lower classes ensured that wealth remained at the top. In truth, the Indian caste system was quite complex. Distinctions between groups within each caste mattered a great deal as well, creating sub-castes that came with separate privileges, obligations, and social circles that fixed where people lived and who they could marry.
The caste system reflected Hindu religious beliefs by ensuring that people performed their proper role in this life based on their actions in the past. The laws preventing upward mobility and protecting the privilege of elites were seen as guaranteeing order. They kept the low-born from escaping the divine plan and the cosmic justice that allowed for a slow, steady advancement up the spiritual scale over a series of lives. The ultimate goal was release from the wheel of life, the never-ending transmigration of the soul, into ultimate peace.
Buddhism
Indian culture, religion, and art were forever transformed with the life of Buddha Sakyamuni around 563 BCE. The son of a royal family living near India's eastern border with Nepal and sometimes known as Siddhartha or Gautama, Sakyamuni abandoned a life of luxury in his family's palace after experiencing an awakening, upon which he embarked on a spiritual journey that lasted the rest of his life. He came to be called the Buddha, meaning "enlightened," because his teachings offered an alternative to the then-dominant Brahmanist values.
Buddhism explores the depths of human suffering, desire, envy, decadence, and death, offering adherents a way out of an eternal cycle of misery if they adopt the Four Noble Truths leading to the Eightfold Path. The Four Noble Truths acknowledge that pain and disappointment are an unavoidable part of life and that by focusing on spiritual matters via the Eightfold Path, pain and suffering can be overcome. By adopting Buddha's teachings about how to think, speak, and act with respect for all life, and many other practices, followers eventually arrive at an enlightened salvation called nirvana. Nirvana is a state of ultimate peace found in the extinction of all desire and transcendence of the person's very being. Without nirvana, upon death the soul is reincarnated into a new life that will again run the gamut of suffering, misery, and the search for enlightenment.
The teachings of Buddha and his followers issued a direct challenge to the status quo in ancient India. In his own time, Buddha relished criticizing the Brahmans, questioning their authority and their dependence on ritualism. Succeeding generations of teachers, missionaries, and lay Buddhists used his teachings to assail the Brahmanist-based caste system. Female Buddhists were attracted by ideas promoting the opportunity for women to achieve enlightenment on an equal basis with men.
Before Buddhism, Brahmanist teachings had supported a system of gender that in the first centuries of the common era pronounced women’s genitalia foul, leading women to be excluded from public rituals and worship. Buddhism protected women from being seen as spiritually unclean, promising them an elevated status and greater participation in the community’s spiritual life. The same was true for members of the lower castes, whatever their inherited status. Both women and members of the lower castes were drawn to Buddhism by the greater independence and freedom they found in it. But women adopting Buddhism often found the religion just as patriarchal: Buddhist monasteries were segregated into spheres for male monks and female nuns, and women were given lower positions and fewer privileges.
Buddhism never supplanted Brahmanism as the dominant religion in India. In later centuries, Buddhist thought and institutions were influenced by Brahmanism, incorporating deities such as Shiva and concepts such as karma. Boundaries between the two religions became blurred, a development that helped followers of Brahmanism and Buddhists find a means for coexistence and even cooperation. Buddhism arose in a historical context dominated by a Brahmanist society, and many Buddhist teachings and practices such as meditation reflect the influence of Brahmanism. Likewise, Brahmanism was greatly influenced by Buddhism and its popularity with certain classes in India. As a result, over several centuries between around 400 BCE and 200 CE, Brahmanism evolved into more of a devotional religion, allowing individual practitioners to communicate directly with the gods, not just through the Brahman priests. Worship became more personalized and private, centered on prayer and songs within the home. In this way, Brahmanism emerged as Hinduism, which retained the caste system and belief in the Vedas while also offering a prescription for common followers seeking to live a moral and fulfilling life. What emerged as the central text of Hinduism was called the Bhagavad Gita. Finished around 300 CE, it taught that commoners, not just Brahmans, could lead exemplary moral lives by abandoning bodily desires and seeking inner peace.
Both Buddhism and Hinduism were and have remained diverse, branching into hundreds of schools of thought and sects that were each quite adaptable to local contexts. As it became institutionalized, however, Buddhism lost some of its early character as a means for liberation of the lowly of India. Instead it attracted the patronage of elites, who elevated it into Asia’s most influential source of inspiration for monumental architecture and high art. Buddhism made inroads across all of Asia, coming to be adopted by millions in China, Korea, Thailand, Japan, and many other communities in Southeast Asia.
Dueling Voices
Hinduism and Buddhism in Ancient India
The first excerpt, concerning the Hindu tradition, is from the Bhagavad Gita, titled “Perform Action, Free from Attachment.” The second, “Basic Teachings of the Buddha,” includes a version of Buddhism’s teachings on the Four Noble Truths and the Eightfold Path. Notice how each spiritual system conceived of immortality, the proper way to demonstrate right conduct and living, and the purpose of life.
8. Perform thou action that is (religiously) required;
For action is better than inaction.
And even the maintenance of the body for thee
Can not succeed without action.
9. Except action for the purpose of worship,
This world is bound by actions;
Action for that purpose, son of Kunti,
Perform thou, free from attachment (to its fruits)
10. Therefore unattached ever
Perform action that must be done;
For performing action without attachment
Man attains the highest. . . .
21. Whatsoever the noblest does,
Just that in every case other folk (do);
What he makes his standard,
That the world follows.
35. Better one’s own duty, (tho) imperfect,
Than another’s duty well performed;
Better death in (doing) one’s own duty;
Another’s duty brings danger. — Bhagavad Gita, translated by Franklin Edgerton
What, now, is the Noble Truth of Suffering? Birth is suffering; Decay is suffering; Death is suffering; Sorrow, Lamentation, Pain, Grief, and Despair, are suffering; not to get what one desires, is suffering. . . .
What, now, is the Noble Truth of the Origin of Suffering? It is that craving which gives rise to fresh rebirth, and, bound up with pleasure and lust, now here, now there, finds ever fresh delight.
What, now, is the Noble Truth of the Extinction of Suffering? It is the complete fading away and extinction of this desire, its forsaking and giving up, the liberation and detachment from it. . . .
It is the Noble Eightfold Path, the way that leads to the extinction of suffering, namely: 1. Right Understanding, 2. Right Mindedness, which together are Wisdom. 3. Right Speech, 4. Right Action, 5. Right Living, which together are Morality. 6. Right Effort, 7. Right Attentiveness, 8. Right Concentration, which together are Concentration. This is the Middle Path which the Perfect One has found out, which makes one both to see and to know, which leads to peace, to discernment, to enlightenment, to Nirvana. . . .
— Buddha, the Word, edited by Nyanatiloka
- Based on these excerpts, what does it mean for one to lead a moral life in each of these distinct traditions?
- How is the Eightfold Path in the Buddhist excerpt similar to or different from the call for action in the Hindu excerpt?
The Mauryan Empire
The initial spur to Buddhism’s migration across Asia occurred with the rise of the Mauryan Empire (326–185 BCE). This entity grew out of the smaller Indian kingdom of Magadha once its ruler, Chandragupta Maurya, managed to unify much of north India from a capital near the city of Patna and pass it on to his descendants, founding the Maurya dynasty. A Greek historian named Megasthenes visited the seat of Chandragupta’s power around the end of the fourth century BCE, marveling at its palaces replete with grottoes, bathing pools, and gardens filled with jasmine, hibiscus, and lotus.
Ruling over a population nearing fifty million, Chandragupta’s successors conquered all but the southern tip of the subcontinent in a series of military campaigns. The Mauryan Empire maintained a large and well-run army, administered by a war office with branches for a navy and for raising horses and elephants for cavalry warfare. A civilian bureaucracy ran the ministries overseeing industries such as weaving, mining, and shipbuilding as well as organizing irrigation, road construction, and tax collection. The Mauryan rulers lived in constant fear of assassination and intrigue against their rule, however, which forced them to rely on an elaborate network of spies to monitor officials throughout the empire.
The high point of Mauryan greatness came with the accession of Emperor Ashoka in approximately 268 BCE, opening a period of monumental architecture that left its mark on the ancient world. Ashoka’s personal grandeur came from the story of his transformation from a ruthless warrior general to a devout man of peace with a universal mission (Figure 5.22). As the head of the Mauryan army laying siege to the kingdom of Kalinga, he won a great battle that caused an estimated 100,000 deaths. The carnage brought an awakening that led Ashoka to Buddhism and to reforms intended to promote harmony and compassionate rule throughout India. To that end, he supported missionary efforts to spread Buddhism to Burma and Sri Lanka. His new law code gave protections to the vulnerable—the ill and diseased, the poor and powerless, and travelers making their way across the empire. His ministers put their sovereign’s will into action by building hospitals, digging wells, setting up rest-houses along India’s roads, and sending out traveling magistrates to resolve disputes and bring justice to remote areas.
Ashoka also had a lasting influence on the world of art. He decreed that his sayings and teachings on morality be inscribed on stone pillars erected throughout India (Figure 5.23). The Pillars of Ashoka demonstrate the Indian empire’s character as a spiritual and political system. Through Buddhism, patronage of the arts, and monumental architecture, the Mauryans wished to display morality and benevolence to their subjects and to exercise less direct rule. Leaders such as Ashoka hoped the people’s loyalty and duty in turn would be motivated by admiration of these achievements, if not by the money and other gifts given to reward the virtuous and charm supporters. The pillars also reveal the flexibility of the Mauryan system of rule. Those closest to the capital were inscribed with detailed summaries of the Mauryan codes for behavior and an orderly society. Farther away, in newly won territories, the pillars promoted very simple teachings, a mark of the ruler’s intent to allow room for local autonomy and customs to prevail as long as his subjects met certain universal norms and tax obligations.
At the end of Ashoka’s reign, the Mauryans left a legacy that future generations of Indian rulers would try to emulate in governing a diverse society. When the Mauryan Empire finally collapsed in 185 BCE, India entered another period of fragmentation and rule by small competing states and autonomous cities and villages. By the early centuries of the common era, it was a multitude of smaller regional kingdoms that shared a common culture linked by Hinduism, Buddhism, a canon of Sanskrit texts, and the caste system.
The Gupta Dynasty
From the fourth to the seventh centuries, an empire founded by the Gupta dynasty (320–600 CE) ruled over northern India. As the name he took reveals, Chandragupta, the founder, emulated the Mauryans and their famous founder, Chandragupta Maurya. He hired scribes working in Sanskrit to promote learning and the arts, and during this age Sanskrit became the basis for a classical literature that influenced generations of Indians and the world. Texts such as the Mahabharata and Ramayana glorified ideas about duty, valor, and performing a proper role in society (Figure 5.24). The first was a collection of thrilling poems featuring feuding rulers and powerful families, the other an epic tale of a warrior prince’s journey to recover his honor.
In the Ramayana, Rama, an avatar of the Hindu deity Vishnu, triumphs over the demon Ravana on the island of Sri Lanka and rescues his wife Sita before going on to found a perfect Indian society from his capital of Ayodhya. His noble virtues and ideal society became models for Hindu rulers and aristocrats to aspire to, while his exploits were retold for centuries in countless paintings, sculptures, carnivals, plays, and shadow theatres.
The Sanskrit classics Mahabharata and Ramayana soon spread far and wide in Southeast Asia, where they became part of the cultural fabric for a multitude of non-Indians as well. Other intellectuals of the Gupta era proved themselves in the field of mathematics by using decimals and a mark denoting the concept of zero for precise measurements and recordkeeping. Among the more notable was the astronomer Brahmagupta, who in the seventh century CE formulated rules for arithmetic with zero and with negative numbers.
Link to Learning
You can read a brief synopsis of the Ramayana and a description of the epic’s major characters at the British Library website.
An animated English-language version of the epic is also available.
In politics the Guptas were innovators as well. Rulers granted tracts of land as gifts to powerful families, Brahmans, and temple complexes, guaranteeing these followers a share of the harvest in return for their loyalty and consolidating their own control. In turn, the Brahmans elevated the Gupta rulers to new heights in rituals honoring Vishnu and Shiva. Yet as these deities became more important, worship among the commoners turned more personal and private; singing as a form of prayer and ritualism inside the home became essential to daily life. Many Indians began to believe in the sanctity of bhakti, a direct personal relationship between a follower and the deity. This idea bypassed the role of Brahmans as intermediaries, displeasing the Brahmans but gaining popularity in southern India, where poems written in the Tamil language became foundational to the new practice of personalized worship among Hindus.
The Gupta dynasty marked a flourishing of art and religion and the heyday of Buddhism in India. The painted caves at Ajanta, with their beautiful sculptures, illustrate the sophistication of the artists patronized by the dynasty. While Hinduism remained the official religion of the state and the Guptas, Buddhist universities such as Nalanda were among the first of their kind in the ancient world and attracted throngs of students and pilgrims from China. India’s educated classes ranked among the most learned of the ancient world, and at times they turned their attention from math and morality to explore the depths of passion, love, and eroticism. During this period, the Kama Sutra, a treatise on courtship and sexuality, became a seminal piece of Indian literature, inspiring and titillating generations worldwide ever since.
The opulence and stability provided by the Guptas dissipated under the threat of invaders from the north known as the Huns. While northern India fractured into smaller states after this point, southern India’s ties and trade with Southeast Asia deepened and matured. By the eleventh century, the region’s profitable exports of goods such as ivory, pepper, spices, Roman coins, and even animals like the peacock had led to the formation of notable southern kingdoms, such as the Tamil Chola dynasty. But the most influential exports from India to the rest of Asia—Hinduism, Buddhism, and the art and learning each inspired—long outlived these states.
2.11: Early Cultures and Civilizations in the Americas
Learning Objectives
By the end of this section, you will be able to:
- Describe how civilizations in the Americas adapted to their environments
- Discuss the contributions of the Olmec civilization to culture and religion in Mesoamerica
- Identify the key components of early cultures in North and South America
At the start of the third millennium BCE, after thousands of years of hunter-gatherer existence, the peoples living in the Americas began to form complex agricultural societies. Over the next few thousand years, these early settled communities gave way to large and architecturally impressive settlements from the Andean region to the Eastern Woodlands of North America. These developments produced regional similarities in art, architecture, religion, and pottery design.
Complex Civilizations in Mesoamerica
By the year 1200 BCE, farming had become well established across southern Mexico, especially in the gulf lowland areas where there was sufficient water for irrigation. The many societies there were not exclusively agricultural; they continued to rely on hunting and gathering to supplement their diets. One of them, the Olmec culture, emerged around this time as Mesoamerica’s first complex civilization with its own monumental architecture.
Olmec Culture
The start of the Olmec civilization, at a site known as San Lorenzo in the modern Mexican state of Veracruz, stretches back to about 1350 BCE and the construction of a large earthen platform rising some 164 feet above the flat landscape. Upon this platform, the Olmec built ceremonial and other structures, water reservoirs, a system of drains, numerous stone works of art, and a number of massive sculpted stone heads. One of the structures has become known as “the red palace” because of the red ocher pigment on the floor and walls. It was likely a residence for the elite and included large stone columns and aqueducts. The massive stone heads and other sculptures, some weighing as much as fifty tons, were carved from volcanic basalt that came from as far as ninety miles away and was likely brought by raft for part of the way and on rollers over land.
Because little of the San Lorenzo site remains, we can only speculate about the organization of the Olmec civilization, but it is clear that their civilization shaped those that followed. For example, the great earthen platform and monumental sculptures shaped like step pyramids attest to a highly sophisticated culture, with a clearly defined elite that could control large labor forces. Relying on pottery fragments and population density estimates, scholars have concluded that most workers were probably free laborers working to accomplish larger goals. They likely lived well beyond the elevated center reserved for the elite, in villages surrounded by gardens and other agricultural zones where the Olmec grew maize, avocados, palm nuts, squash, tomatoes, beans, tropical fruits, and cacao for chocolate.
The stone heads themselves are remarkable (Figure 8.12). Seventeen have been found across all the Olmec sites; some stand eleven feet tall. All are generally similar in form and style, depicting men’s faces with large lips and noses with flared nostrils, but they were likely intended to be realistic portraits of rulers of the sites where they were discovered. Upon their heads are helmets of various styles, some with coverings for the ears. Given the effort required to transport the stone and carve the heads, these works were likely intended to emphasize the power of the rulers, both to the Olmec people and to outsiders.
Evidence of possible vandalism on some of the heads has led some scholars to suspect an invasion occurred in the tenth century BCE, with desecration of the images as a result. Others, however, believe this is evidence of reworking that was never completed. We may never know for sure, but we do know that during the tenth century BCE, San Lorenzo declined in importance. At the same time, another Olmec site rose in significance, some fifty miles to the northeast at La Venta.
La Venta was built around 1200 BCE on a high ridge above the Palma River less than ten miles from the Gulf of Mexico. By 900 BCE, it had become the dominant Olmec city in the region. At its height, La Venta covered almost five hundred acres and may have supported as many as eighteen thousand people. Its central monuments included several large earthen mounds, plazas, a possible sports arena, several tombs, and numerous stone heads and other sculptures. The complexity of this urban complex reflects a major development in Mesoamerican urban and architectural design. It was likely built as a sacred site, with its temples and other complexes organized on a north–south axis believed to enhance the rulers’ authority by connecting them to supernatural environments. This style of urban design was later adopted by other Mesoamerican civilizations like the Maya.
Olmec art depicts numerous deities, such as a dragon god, a bird god, a fish god, and many fertility deities like a maize god and water gods. The Olmec also clearly recognized many types of supernatural mixed beings, like a feathered serpent and the were-jaguar, a cross between a jaguar and a human. These artistic images imply that the Olmec had a sophisticated pantheon of gods who controlled the universe and expected certain rituals to be performed, perhaps by Olmec leaders themselves, who may have functioned as shamans empowered to communicate with the spirit world. The rituals were performed in the temples and plazas of the sacred cities like La Venta and San Lorenzo, as well as in sacred natural sites like caves and mountaintops.
Other rituals were connected to a type of ball game played in a special court with balls made from the abundant natural rubber of the region. Sports contests often existed to bring communities together, to allow men to show prowess and strength in times of peace, and to entertain. It is also likely that in times of heightened spiritual need, such contests could take on greater meaning and might have been choreographed to play out supernatural narratives and perhaps connect people to the gods. Like some later civilizations, the Olmec also saw bloodletting as a link to the spirit world. Blood sports may have been used to create pathways to understanding the will of their gods.
Link to Learning
The ritual ball game of the Olmec became a cultural feature of Mesoamerica over the centuries, and various forms of it were played by the Maya, the Aztec, and many others. Read more about the history of the Mesoamerican ball game and see pictures of related artifacts from different Mesoamerican cultures at the Metropolitan Museum of Art website.
The Olmec were clearly in contact with other groups around southern Mexico and Central America. There is evidence of a robust trade in pottery and valued materials like obsidian, magnetite, and shells, likely carried out by merchants traveling across the larger region. Over time, this trade exposed other Mesoamerican cultures to Olmec ideas about religion, art, architecture, and governance. Some scholars thus conclude that Olmec civilization was a “mother culture” for later large and sophisticated Mesoamerican states. Cultural similarities exist among these, such as ritual ball games, deities, and calendar systems. Olmec-style artifacts have also been found at sites as far away as what are now western Mexico and El Salvador. Like much related to the Olmec, however, the extent of their influence is a question we may never answer with certainty. By the time this civilization disappeared around 400 BCE, a number of other Mesoamerican cultures were emerging.
The Zapotec civilization appeared in the valleys of Oaxaca in southern Mexico beginning around 500 BCE, with the construction of the regional capital known today as Monte Albán (Figure 8.13). Set on a flattened mountaintop overlooking the larger region, Monte Albán likely had a population of about five thousand by around 400 BCE and as many as twenty-five thousand by around 700 CE. As it grew over the centuries, so too did its stone temples and other complexes. The city exerted influence on the hundreds of much smaller communities scattered across the Oaxaca Valley. The region was highly suitable to maize cultivation, thus allowing for larger populations and monumental architecture. From the defensive walls created around their settlements, it seems the Zapotec lived in a world where warfare was especially common. Monte Albán itself was likely selected for defensive reasons.
The structures built at Monte Albán after 300 CE reflect the influence of another major Mesoamerican civilization about thirty miles northeast of Mexico City. The massive city of Teotihuacán dominated trade in obsidian, salt, cotton, cacao, and marine shells across southern Mexico and greatly influenced cultures like that of the Zapotec. The origins of the Teotihuacán settlement date to about 400 BCE, but major building at the site did not begin until centuries later. By 300 CE, the growing city had a population of about 100,000, making it one of the largest cities in the world at the time (Figure 8.14). It exercised enormous cultural and military influence across large portions of Mesoamerica until it declined in the sixth and seventh centuries CE.
The Teotihuacanos built numerous stone temples and other structures organized around a north–south passageway known as the Avenue of the Dead. The largest temples are known as the Pyramid of the Sun and the Pyramid of the Moon. Both are multitiered stone structures, 197 and 141 feet tall, respectively. The site also includes a large royal residence known as the Citadel, which includes the elaborate Temple of Quetzalcoatl, the feathered serpent. Elite military leaders and others lived in large apartment compounds decorated with colorful artwork depicting priests, gods, or warriors. The remaining population was spread across the roughly ten thousand square miles that surrounded the city and produced trade goods as well as agricultural products.
The size of Teotihuacán denotes its wealth and regional influence at its height. This wealth came from trading in crafts, agricultural products, obsidian tools, cloth, ceramics, and artwork. The many preserved frescos and murals show the city’s rulers dressed in elaborate clothing, including iridescent quetzal bird feathers from as far away as Guatemala, testifying to Teotihuacán’s long reach. To influence areas so far away, the city wielded power through its control of trade and use of military force and diplomacy. Sculptures at Monte Albán show Teotihuacano diplomats meeting with the Zapotec elite, reflecting mostly peaceful contact between the two civilizations. Evidence from Maya sites also demonstrates that the Teotihuacanos commonly intervened in Maya affairs deep in Central America, sometimes militarily. They may even have orchestrated a coup in the powerful Maya city of Tikal in 378 CE.
Maya Culture
While Maya civilization was clearly influenced by the Teotihuacanos beginning in the fourth century CE, evidence of urban development and rapid population growth in the Maya heartland of Central America dates to before 600 BCE. Village life may go back much further, but in any case, by 600 BCE, the lowlands of Central America were full of small villages, each showing evidence of sophisticated pottery, architecture, irrigation techniques, and religious traditions. By 250 BCE, a handful of powerful Maya city-states had emerged. The major cities of this early era included Tikal, Calakmul, El Mirador, and a few others.
El Mirador was a dominant city before 150 CE, with a population of about 100,000 at its height. But Tikal and Calakmul were equally impressive. All had numerous large pyramid-like structures creating an impressive skyline across the spaces cleared of jungle. Most of the major cities were built next to large, shallow lakes, since access to water for drinking and irrigation was important in the lowlands, where rainfall was often insufficient. The tropical soil in the area is also relatively poor, and the Maya developed a style of slash-and-burn agriculture to raise maize, squash, beans, and cacao for the growing urban populations in these cities.
The Maya were certainly influenced by Olmec civilization, though likely not directly. Some Maya art, for instance, includes Olmec-derived features like the were-jaguar. The Maya also played a ritual ball game based on the earlier Olmec variety. Another possible Olmec influence was the Maya calendar. This consisted of two different parts—the 260-day Sacred Round calendar and the 365-day Vague Year calendar—that functioned together to create a 52-year cycle for measuring time and tying the dates for ceremonies to important mythological events performed by the gods.
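The 52-year cycle follows from simple arithmetic: a given pairing of Sacred Round and Vague Year dates recurs only after a whole number of both counts has elapsed, that is, after the least common multiple of 260 and 365 days. Here is a minimal Python sketch of that calculation; the two calendar lengths come from the passage above, everything else is illustrative, and math.lcm requires Python 3.9 or later.

```python
import math

SACRED_ROUND = 260  # days in the Sacred Round calendar (from the text)
VAGUE_YEAR = 365    # days in the Vague Year calendar (from the text)

# A pairing of dates from the two calendars repeats only after the
# least common multiple of the two cycle lengths.
cycle_days = math.lcm(SACRED_ROUND, VAGUE_YEAR)

print(cycle_days)                # 18980 days
print(cycle_days // VAGUE_YEAR)  # 52 Vague Years: the 52-year cycle
```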
The Past Meets the Present
Did the Maya Predict the End of the World?
The premise of a 2009 science fiction movie was that the Maya calendar predicted the end of the world would occur in the year 2012. While the film (called 2012) was a commercial success, the idea that the Maya predicted when the world would end has been largely discredited.
The Maya had a sophisticated calendar system evolved from earlier Mesoamerican versions, possibly the Olmec. Because it used two different calendar rounds working together, it revealed important ritual days and cycles over long periods of time (Figure 8.15). For example, one full cycle covered a space of fifty-two solar years, often called a bundle. But to explore longer chunks of time, the Maya relied on what scholars call the Long Count Calendar. This had cycles that included the winal (20 days), the tun (360 days), the k’atun (7,200 days), and the bak’tun (144,000 days). The Great Cycle occurred every thirteen bak’tun, or about every 5,125 years. And this is where the idea of the significance of 2012 comes from.
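A quick worked example makes the scale of these units concrete: thirteen bak’tun of 144,000 days each come to 1,872,000 days, or roughly 5,125 solar years. The Python sketch below converts a Long Count date such as 13.0.0.0.0 into a total day count; the unit lengths are taken from the passage, while the one-day k’in unit and the helper function itself are illustrative additions.

```python
# Long Count unit lengths, in days (values from the passage; the one-day
# k'in is the standard base unit, added here for completeness).
KIN, WINAL, TUN, KATUN, BAKTUN = 1, 20, 360, 7_200, 144_000

def long_count_to_days(baktun, katun, tun, winal, kin):
    """Convert a Long Count date such as 13.0.0.0.0 to a total day count."""
    return (baktun * BAKTUN + katun * KATUN + tun * TUN
            + winal * WINAL + kin * KIN)

great_cycle = long_count_to_days(13, 0, 0, 0, 0)
print(great_cycle)             # 1872000 days
print(great_cycle / 365.2425)  # ~5125 years (modern mean solar year)
```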
According to scholars’ calculations, the Maya Great Cycle would have come to an end in 2012 CE. But did that really mean the Maya thought this was the end of the world? Most historians and archaeologists say the answer is a resounding “no.” Rather, that year would simply have started a new cycle, though the Maya would have seen great importance in the event and celebrated it with major festivities. It appears that only Hollywood and some imaginative modern writers have read an Earth-ending catastrophe into this date.
- What does the cyclical nature of the Maya calendar system suggest about their rituals and cosmology?
- Why do you think the concept of an apocalypse occurring in 2012 was so attractive to modern people?
The era of Maya greatness begins with the Classic period, starting around 250 CE and lasting until about 900. During this time, urbanization in the Maya world expanded greatly, with approximately forty different city-states emerging in different areas. Some of the most powerful were older sites like Tikal and Calakmul, along with newer locations like Palenque, Copan, Yaxchilan, and Piedras Negras (Figure 8.16). Each had its own rulers, referred to as “divine lords.” These powerful chieftains exercised their authority over the city-state through their control over religious rituals and ceremonies, the construction of temples, and especially wars they waged with other Maya city-states. Such wars were commonly waged to weaken rivals and keep neighbors in line, and they may even have served important ritual purposes. They also allowed the victors to exact tribute from subdued enemies in the form of animal products, salt, textiles, artwork, and agricultural goods like cacao and maize. Tribute could be paid through labor as well, when defeated enemies supplied workers for the victorious city-state. Only rarely did rulers seek to control conquered city-states, however. These generally remained independent, though they all shared many cultural attributes.
At the heart of Maya religious practices was the veneration of family ancestors, who were considered bridges between heaven and earth. Homes had shrines for performing ritual bloodletting and prayers directed to the ancestors, and deceased family members were typically interred beneath the floor. Indeed, the large stone temples themselves were in some ways grander versions of these family shrines, usually with large tombs within them, and deceased kings were effectively ancestors for the entire city-state. Ritual practices were tied to the complicated Maya calendar, and gods could act in certain ways depending on the time of year and the location of certain heavenly bodies. Shamans and priests guided rituals like bloodletting, which allowed for communication with the ancestors by releasing a sacred essence in the blood called chu’ulel. The same principle applied to the human sacrifice of war captives and especially captured rival leaders.
While we can only speculate about how the Olmec played their ritual ball game, we know more about the Maya and later versions (Figure 8.17). The intention was to reenact aspects of Maya mythology, and the game held a significant place in religious practice. Two teams of four wore ritual protective padding and passed the ball to each other without using hands or feet on long I-shaped courts flanked by sloping walls. The object appears to have been to move the ball through a stone ring without letting it hit the ground. As the use of padding indicates, the game could be quite dangerous; the ball was solid rubber and could weigh more than seven pounds. But the true danger came at the end, when losing team leaders or sometimes the entire losing team could expect to be sacrificed to fulfill the game’s ritual purpose.
One of the reasons we know so much about the Maya is that, unlike some other Mesoamerican civilizations, they created a writing system that scholars have been able to decode and read (Figure 8.18). This system was phonetically based, with complex characters, and was far more developed than any other writing system discovered in Mesoamerica. It allowed the Maya to record their own history in stone monuments, including invaluable political histories, descriptions of rituals, propagandistic records of battles, and genealogies.
Classical Maya civilization entered a period of decline in the ninth century CE and then deteriorated rapidly. Over a period of about a century, alliances broke down, conflicts became more common, the production of luxury goods slowed to a stop, and cities went from thriving urban centers to depopulated shells. The reason for this collapse has been a topic of debate among historians and archaeologists for many years, and much remains uncertain. Among the proposed causes are epidemic diseases, invasions, natural disasters, internal revolutions, and environmental degradation. Several of these may have been influential; it is unlikely there was a single cause.
For example, studies over the last few decades have pointed to the environmental problems created by demographic growth. This growth led to large-scale deforestation, which in turn produced soil erosion. Large populations that required high agricultural yields made Maya civilization more vulnerable to variations in climate or a string of bad harvests caused by crop disease. Such problems would have put enormous pressure on elites and commoners alike and contributed to disorder, war, and perhaps internal revolts. However it happened, by 900 CE the Classic period of Maya civilization had come to an end. But this was not the end of the Maya. In the Yucatán Peninsula, well north of the old centers of power, Maya civilization would experience a rebirth that extended into the sixteenth century and the arrival of the Spanish.
Early Cultures and Civilizations in South America
South of Mesoamerica and north of the Andes lies a dense tropical jungle that long prevented any regular communication or cultural transmission between the two areas. As a result, the early cultures and civilizations in South America developed in different ways and responded to different environmental factors. Neolithic settlements like Norte Chico in today’s Peru had already emerged by 3000 BCE. In the centuries that followed, others proliferated in the Northern Highlands as well. These include sites known today as Huaricoto, Galgada, and Kotosh, which were likely religious centers for offering sacrifices. There was also Sechin Alto, built along the desert coast after 2000 BCE. Then, around 1400 BCE, groups in the Southern Highlands area around Lake Titicaca (on the border between Peru and Bolivia) began growing in size after adopting agricultural practices. The construction of a large sunken court in this area around 1000 BCE indicates they had their own sophisticated ceremonial rituals.
Around 900 BCE, the Andes region experienced a transformation when a single society, often called the Chavín culture, expanded across the entire area, opening what archaeologists call the Early Horizon, or Formative, period. The Chavín culture is known for its distinctive pottery style, which spread throughout the entire region and depicted numerous people, deities, and animals in a flowing and balanced manner (Figure 8.19).
Link to Learning
Read or listen to a short expert description of the Chavín bottle with caiman presented by the Metropolitan Museum of Art, which holds this item in its collection.
In addition, you can explore a number of other artifacts from the period at the Met website.
The name Chavín comes from Chavín de Huántar, possibly the culture’s most important religious center. This site is more than ten thousand feet high in the Andes Mountains, to the east of the older Norte Chico settlements. Its dominant architectural feature was its large temple complex, which faced the rising sun and included a maze of tunnels snaking through it. Deep within the tunnels was a large sculpture of possibly this culture’s chief deity, called El Lanzón (“great lance”) because of its long lance-like shape. The image of El Lanzón mixes both human and animal features, with flared wide nostrils, bared teeth, long fangs on either side of the mouth, and claws protruding from fingertips and toes. The temple was also decorated with many other sculptures of animals, human heads, and deities bearing the features of both, all probably intended to awe residents and visitors alike.
The inhabitants of Chavín de Huántar numbered about twenty-five hundred by 200 BCE, when the site slipped into decline. The site’s importance lay in its role as a religious or ceremonial site, not as a population center. But by around 400 BCE, the Chavín religion and culture had spread far and wide across the Andes region. Whether these influences were transmitted by trade or warfare is unclear. Eventually, however, they replaced other architectural and artistic styles and burial practices. Innovations in textile production and metalworking in gold, silver, and copper also proliferated around the region. Craftspeople in towns and villages produced textiles and metal objects, and traders moved them from place to place along improved routes and with the aid of llamas as pack animals (Figure 8.20).
Beginning around 200 BCE, the influence of Chavín cultural styles and religious symbols began to wane. This came at a time of increased regional warfare among many groups, evidenced by the increasing use of defensive features like walls around settlements. The broader Chavín-influenced region then fragmented into a number of regional cultures that grew into full-fledged civilizations like the Moche, Nazca, and Tiwanaku (Figure 8.21).
The Moche civilization emerged in northern Peru and built major settlements with large pyramid-style architecture at Sipán, Moche, and Cerro Blanco. Its people were agriculturalists with a keen knowledge of irrigation technology, which they used to grow squash, beans, maize, and peppers. They were also a highly militaristic society; their art depicts warriors in hand-to-hand combat, scenes of torture, and other forms of physical violence (Figure 8.22). The Moche formed a politically organized state with a sophisticated administration system. Their cities and burial practices reflect a hierarchical organization, with powerful divine kings and families of nobles ruling from atop large pyramids. Below these two tiers was a large class of bureaucrats who helped manage the state. Near the bottom of the social order were the large numbers of workers, agricultural and otherwise, who lived in the many agricultural villages controlled by the elite.
Far to the south of the Moche, along the dry coast of southern Peru, were the Nazca, whose culture also emerged around 200 BCE. While the terrain there is parched, with rainfall virtually unknown in some areas, the rivers that carry water from the mountains provided the Nazca with sufficient water for irrigation. Unlike the Moche in their large cities, the Nazca people lived mostly in small villages. However, they maintained important ceremonial sites like Cahuachi, where villagers made pilgrimages and witnessed elaborate fertility and other rituals.
Politically, the Nazca may have adopted a type of confederation made up of a number of important families. Apart from many human-altered hills, called huacas, they also left behind hundreds of geoglyphs, large artistic representations imprinted in the dry desert ground. These are sometimes referred to as the Nazca Lines, and they can be either geometric patterns or images of animals like birds, fish, lizards, and cats (Figure 8.23). Some are as large as twelve hundred feet long and were created by clearing stones away from the desert floor to reveal the different-colored ground beneath.
Link to Learning
The Nazca Lines in Peru have baffled scholars for many years. Watch this video about the Nazca Lines to learn more about how some are trying to understand these giant geoglyphs today.
Whereas the Nazca lived in the arid coastal desert, the Tiwanaku civilization thrived high in the mountains near Lake Titicaca. Like the Moche and Nazca societies, this culture emerged in the wake of the collapse of Chavín culture around 200 BCE. Beginning around 100 CE, it entered a period of sustained building at its key city of Tiwanaku. There, residents built two large stone structures topped by additional buildings and carved stone artwork. A signature feature of the structures at Tiwanaku is the many “trophy heads” that poke out from among the stone blocks (Figure 8.24). Noting the different facial features on each head, some scholars have concluded that they represent important ancestors of the Tiwanaku elite or possibly the gods of various conquered groups.
At its height, the city supported perhaps as many as forty thousand people and oversaw at least four smaller cities in the surrounding area. It may even have been the center of a type of imperial system, with colonies on both the Pacific coast and the eastern side of the Andes. To support Tiwanaku and the other related cities, the people irrigated massive fields with a network of canals to grow potatoes. They also raised domesticated llamas and used them as pack animals for long-distance trade.
Tiwanaku survived until about 1000 CE and may have declined as the water level in Lake Titicaca rose to flood its farmland. The other civilizations of this period—the Moche and the Nazca—had disappeared long before, between 500 and 600 CE, for reasons that likely included environmental transformations. Other Andean civilizations emerged in their wake, including the Wari of the highlands of southeastern Peru and the Chimor of coastal Peru. These later groups built upon the earlier cultures’ innovations in agriculture, art, manufacturing, and trade. While Wari declined around 800 CE, Chimor survived into the fifteenth century. It was only in the 1400s that Chimor was conquered by a new and expanding imperial system, the Inca.
North America in the Formative Period
The earliest complex societies in North America began to emerge in the Ohio River valley around 1000 BCE, at the start of the Formative period, when mound-building cultures with large populations in the Eastern Woodlands became more common.
Mound-Building Cultures in the Eastern Woodlands
The mound-building culture of the Ohio River valley area is often referred to as the Adena, after a mound excavated in 1901 in Ross County, Ohio. This and the hundreds of others discovered in the area were burial sites. They started small, with the burial of one or two important people, but grew over time as more were buried and more earth was used to cover them. Some of the mounds had a large circular ditch surrounding them and logs lining the interior. Evidence of postholes indicates that structures once stood there as well, suggesting the locations may have been meeting or ceremonial spots. The bodies of the dead themselves were often decorated with red ocher and other pigments. Grave objects included jewelry, weapons, stone tools, marine shells, and pipes for smoking kinnikinnick (a mixture of leaves and bark) and perhaps tobacco (Figure 8.25).
Communities of mound builders in the valley remained small at first, sometimes erecting no more than a couple of structures. The mounds themselves were also relatively small when compared with those of later cultures like the Hopewell tradition, a civilization that emerged around 200 BCE and eventually spread across the Eastern Woodlands through a common network of trade routes. The tradition is named for a large earthwork complex occupying 130 acres in today’s Ohio, one of the most impressive of the many built in the Woodlands during this period. The site encloses thirty-eight different mounds within a large earthen D-shaped rectangle. The largest are three conjoined mounds; before centuries of erosion occurred, together they measured about five hundred feet wide and thirty feet high. Large platforms once supported wooden structures and were likely used for ritual purposes.
Another Hopewell site located near Newark, Ohio, is equally impressive, with earthen enclosures, mounds, and an observation circle all organized to align with the movement of the moon and likely used to predict lunar eclipses and other seasonal events. Building such mounds with the available technology would have been a labor-intensive task and indicates the culture responsible was highly organized.
The mound complexes were used for ceremonial purposes and do not appear to have been the site of urban settlements. Instead, most people of the Hopewell culture lived in small dispersed communities consisting of only a few extended families. They employed both hunter-gatherer strategies and the cultivation of domesticated plants like sunflowers and bottle gourds. Neighboring groups likely came together to participate in hunting, gathering, and religious events at their ceremonial sites. Religious traditions included the veneration of ancestors, such as those buried in the mounds.
Different communities from the wider area buried their dead leaders in the same mounds, likely as a way to establish symbolic connections across kin groups. Evidence from sites like the one at Newark suggests that ceremonies for burial and veneration were probably connected to seasonal changes and important astronomical observations. The items deposited in the mounds included a number of artistic depictions of animals like beavers, bears, dogs, cats, and even supernatural mixtures of these. These likely had symbolic importance for the individual kin groups and were connected to both their religious practices and specific ancestral ceremonies.
Politically, the settlements of the Hopewell tradition were decentralized and mostly egalitarian. The leadership structure of individual kin groups may have revolved around shamans or shamanistic practices, but there were no powerful rulers. There were, however, some divisions of labor based on specialization, including healers, clan leaders, and those who possessed certain spiritual qualities necessary for interpreting astronomical signs, preparing burials, and preserving important religious traditions. Ceremonial objects made of copper, bone, stone, and wood and shaped into bird claws and totem animals aided shamanistic figures in their duties and were often buried with them. Items within the mounds also provide evidence of extensive long-distance trading. Those discovered in the Ohio River valley include copper from Lake Superior, quartz from Arkansas, mica from the Appalachian region, marine shells from the Gulf coast, and obsidian from as far away as the Rocky Mountains. Trade in these objects was carried out by individuals moving along rivers or the networks of village paths.
Beyond the Book
Turtle Island
The earthen mounds of the Eastern Woodlands region had a number of symbolic meanings and purposes. They served as burial sites, provided connections to ancestors, and were settings for religious rituals. But what do ancient stories suggest about these mounds? Because the Native Americans who built them did not leave behind written records, their legends are one tool modern scholars can use to understand their symbolic importance.
Consider one of the ancient origin stories common to many Indigenous groups of the Eastern Woodlands. Preserved orally in numerous versions, it tells of the construction of the world by the accumulation of earth upon the shell of a large turtle, which grew over time and supported life. Some versions of the story begin with a great flood, after which animals work diligently to bring up earth from below the water to place on the turtle’s back. Other versions refer to a woman with supernatural powers who falls or travels from the heavens and creates the world on a turtle’s back (Figure 8.26). Across all the versions, the symbolic importance of the turtle, representing life, is paramount.
While we cannot know for sure, the Woodlands mounds may have been connected to this ancient origin story. They certainly would have provided safety from river flooding in low-lying areas. During such times, the connection between the mound and the turtle floating in the water would have been difficult to miss.
- What purpose do you think origin stories like these served for the ancient people of the Eastern Woodlands?
- Do you think using preserved origin stories is a good way to understand ancient peoples and customs? Why or why not?
The Hopewell tradition settlements began to decline in the fourth century CE, evidenced by a waning of mound building and trade. The precise reason is not clear, but larger kin group alliances may have broken down as a result of underlying religious issues. Beginning around 600 CE, groups in the Midwest built a number of so-called effigy mounds. These are earthen mounds formed in the image of animals like wolves, bears, snakes, and birds. Like many earlier mounds, the effigy mounds were also burial sites, but they usually contained only a few individuals. In comparison to the earlier Hopewell mounds, they were generally constructed with less labor and in a shorter amount of time, possibly by just a few dozen people working for a few days.
Early Cultures of the American Southwest
Far to the west of the mound-building cultures, a very different cultural tradition formed in the arid landscape of the Southwest. Here, people began experimenting with maize varieties as early as the third millennium BCE. By that time, some groups in the region had begun planting maize in small plots along riverbanks and using it to supplement their hunter-gatherer existence. Exactly how maize reached the American Southwest from southern Mexico is not clear, but there must have been some sporadic contact between cultivators in the south and hunter-gatherer adopters farther north. However, for many centuries after maize was introduced into the Southwest, its cultivation remained limited to one small part of a lifestyle firmly rooted in hunting and gathering. It is possible that the arid conditions of the region necessitated greater mobility and thus made the advantages of maize cultivation less obvious.
Some of the earliest evidence of maize cultivation in the area dates from about 2250 BCE and comes from what is now northwestern New Mexico. By around 1200 BCE, groups in the Las Capas area, by the Santa Cruz River near modern Tucson, Arizona, had developed a sophisticated irrigation system for cultivating maize. The people at Las Capas built a network of canals that directed water from the river into their fields. Around this agricultural base, they constructed oval-shaped homes and pits for roasting the maize they grew. Over time, the homes became more elaborate and were organized in rings around courtyards. But even here the cultivation of maize remained only a small part of a largely hunter-gatherer lifestyle, which included gathering goosefoot and piñons as well as hunting rabbits, bison, and deer.
By around 500 BCE, the cultivation of beans was adding to the growing diversity of foods consumed in the Southwest. This change helped to encourage more dependence on maize since, nutritionally speaking, these two foods are complementary—beans are a source of lysine, a necessary amino acid that maize lacks. Growing beans with maize also increases the nitrogen in the soil and preserves its fertility for longer periods. However, even after the introduction of beans, settled and solidly agricultural communities in the Southwest did not begin to emerge until around 200 CE. Once they did, the region entered a transformational period that resulted in the development of the Anasazi or Ancestral Pueblo societies.
3.1: Early Mediterranean Peoples
Learning Objectives
By the end of this section, you will be able to:
- Identify the regional peoples of the Mediterranean before 500 BCE
- Discuss the technological achievements of the early Mediterranean peoples
- Describe the interconnectedness of the early Mediterranean peoples
During the Bronze Age (c. 3300–1200 BCE), trade connected the peoples and cultures of Greece and the Aegean islands such as Crete. By the third millennium BCE, the inhabitants of these lands were already producing wine and olive oil, products in high demand in ancient Egypt and the Near East. The Aegean Minoan and Mycenaean civilizations of the Late Bronze Age (c. 1600–1100 BCE) thus shared in the economic prosperity and cultural interaction that linked the eastern Mediterranean with the ancient cultures of western Asia.
The eventual collapse of the Late Bronze Age world coincided with the development of new technology that allowed people to devise iron tools and weapons. During the new Iron Age, the Phoenicians not only preserved Bronze Age cultural traditions but also developed a revolutionary new communication tool, the alphabet, which vastly expanded literacy. They established trading posts across the Mediterranean as far as Spain, often in search of new sources of iron ore and other metals such as tin. The arrival of Phoenician and Greek traders in the western Mediterranean brought these newcomers into contact with the Etruscans in the Italian peninsula. Thus, the period from the Bronze Age through the Iron Age witnessed the development of numerous cultures across the Mediterranean.
The Late Bronze Age World
Egypt was the dominant economic and military power of the Late Bronze Age, for the most part a time of economic prosperity and political stability. Other powerful kingdoms included Minoan Crete, Mycenaean Greece, the Hittites of Asia Minor (modern Turkey), the Mitanni and Assyrians in northern Mesopotamia, and the Kassites and Elamites in southern Mesopotamia and western Iran (Figure 6.4). While each maintained its own unique culture, their interactions created a shared Late Bronze Age culture.
For instance, they all used a redistributive economic system in which agricultural goods were collected from local farmers as taxes, stored in the palace or temple, and redistributed to urban artisans, merchants, and officials who could not grow food. They all possessed military forces of elite warriors trained to fight from horse-drawn chariots. They interacted using a common set of diplomatic practices: Official correspondence was often written in Akkadian cuneiform, military alliances were sealed by arranged marriages between the royal families of allied states, and vassal states paid tribute to dominant states to avoid military assault.
These civilizations also exchanged prized goods, such as wine and oil from Greece, cedar logs from the Levant (modern Israel, Jordan, Lebanon, Palestine, and Syria), and copper from the island of Cyprus. Great cultural achievements resulted from their interaction. For example, in the small maritime kingdom of Ugarit (now Syria), scribes modified Egyptian hieroglyphics to suit their local Semitic Canaanite language, creating an ancestor of our alphabet. They used this script to record traditional epic poetry featuring myths of their main deity, the storm god Baal.
Minoan Crete and Mycenaean Greece
By 2000 BCE, a unique culture had developed on the Aegean island of Crete, reaching the height of its power at the beginning of the Late Bronze Age around 1600 BCE. The later Classical Greeks told the myth of King Minos of Crete, who built a giant maze known as the Labyrinth and imprisoned there a half-man, half-bull called the Minotaur (the “Bull of Minos”). To avenge his own son’s death, Minos forced young men and women from Athens in Greece to enter the Labyrinth and be eaten by the monster. Historians see in the myth a distant memory of the earlier civilization on Crete and use the term Minoan, derived from Minos, to describe it.
The Minoans built spacious palaces on Crete, the largest at Knossos. Since these were usually unfortified, historians believe Crete was generally peaceful and united under a single government with Knossos as the capital. The Minoans also established settlements and trading posts on other Aegean islands such as Thera and along the Anatolian coast. Their palaces were huge complexes that served as economic and administrative centers. To keep records for these centers, the Minoans developed their own script, written on clay tablets and known to scholars as Linear A. It has not yet been deciphered.
A common weapon and symbol in these palaces was the labrys, or double ax, from which the word "labyrinth" arose. In the courtyards, young men and women participated in bullfights that may be the basis for the myth of the Minotaur. Frescoes on the palace walls depict these fights as well as sea creatures and scenes from nature (Figure 6.5). The Minoan religion revered bulls and a goddess associated with snakes, nature, and fertility. The abundance of figurines of this snake-wielding female deity and other artistic depictions of women may mean that at least some women enjoyed high social status in Minoan society. Religious rituals were practiced in small shrines as well as on mountaintops and in caves and sacred forests.
Link to Learning
For a thorough examination of the art and archaeology of the Aegean Bronze Age, visit Dartmouth College's Aegean Prehistoric Archaeology website.
Sometime around 1500 BCE, the palaces on Crete were destroyed. Knossos was rebuilt, and scribes there began employing a new script scholars call Linear B, apparently derived from Linear A and found to be an early form of Greek. Linear B clay tablets discovered on the Greek mainland led historians to conclude that Greeks from the mainland conquered Crete and rebuilt Knossos.
The Bronze Age culture that produced Linear B is called Mycenaean since the largest Bronze Age city in Greece was at Mycenae. Bronze Age Greeks appear to have migrated from the Balkans into mainland Greece around 2000 BCE and adopted Minoan civilization around the beginning of the Late Bronze Age, in 1600 BCE. Unlike the Minoans, the Mycenaean Greeks were divided into a number of separate kingdoms. Immense palace complexes like those at Knossos have been found at Mycenae, Tiryns, Thebes, Pylos, and Sparta, sometimes surrounded by monumental fortifications. These locations correspond to the powerful kingdoms described in the later Greek epic poem the Iliad, attributed to the poet Homer. This poem tells the story of the Trojan War, in which the Greek kingdoms, led by King Agamemnon of Mycenae, waged war against the city of Troy. Archaeologists have also uncovered the Bronze Age city of Troy in western Turkey, which suggests the Iliad was loosely based on oral traditions that preserved the memory of these ancient Bronze Age kingdoms. The Linear B tablets indicate that the ruler of these palaces was known as the Wanax or "lord," the same word used to describe the heroic kings of the Iliad.
The Collapse of the Bronze Age World
The last century of the Late Bronze Age, after 1200 BCE, was a period of wars and invasions that witnessed the collapse of many powerful states. The palaces of Mycenaean Greece were destroyed, perhaps following revolts by the lower class, earthquakes, and climate change. In the centuries that followed, the population declined drastically, writing and literacy disappeared, and Greece entered a "Dark Age."
Later ancient Greek historians reported that Greek-speaking tribes known as the Dorians migrated from northwest Greece to the south after the Trojan War. The instability in Greece and the Aegean resulted in much migration by people in search of new homes. For instance, ancient Egyptian inscriptions tell us that the "Sea Peoples" destroyed the Hittite Empire and numerous kingdoms in the Levant to the north of Egypt. One particular group known as the Philistines (Peleset), who attacked Egypt, eventually settled just north of Egypt along the coast of the southern Levant. But there were many others, including the Akawasha, Lukka, Shardana, Tursha, and more who washed across the eastern Mediterranean during the Late Bronze Age Collapse (Figure 6.6).
Other groups were also on the move. Libyans, who inhabited the North African coastal region west of Egypt, invaded the northern Nile River valley and settled there. The attacks of the Sea Peoples and Libyans contributed to the later collapse of Egypt's central governments after 1100 BCE, ending the New Kingdom period. Phrygians, who inhabited the Balkans in southeast Europe, migrated into Asia Minor (Turkey). The Aramaeans, nomadic tribes who spoke a Semitic language and inhabited the Arabian Desert, migrated into Syria and Mesopotamia.
These wars and invasions coincided with an important technological innovation, the birth of sophisticated iron-making technology. For thousands of years, bronze had been the metal of choice in the ancient world. But the disruptions caused by the Late Bronze Age Collapse made it difficult for metal workers to access tin, a crucial ingredient in bronze. Without a sufficient supply of tin, artisans experimented for centuries with iron ore. In the process, they developed the techniques of steeling (adding carbon to the iron to make it stronger), quenching (rapidly cooling hot iron with water), and tempering (heat treating) to produce a metal far superior in strength to bronze. By around 900 BCE, the Iron Age had begun in the eastern Mediterranean.
Phoenicians, Greeks, and Etruscans
The Phoenicians were descended from the Bronze Age Canaanites and lived in cities like Sidon and Tyre (in today's Lebanon), each ruled by a king. They were great sailors, explorers, and traders who established trading posts in Cyprus, North Africa, Sicily, Sardinia, and Spain. They sailed along the west coast of Africa and to the British Isles in search of new markets and goods such as tin (Figure 6.7).
Around 1100 BCE, the Phoenicians also invented the world's first known alphabet, using symbols that represented consonant sounds. Strung together, these consonants created words whose vowel sounds readers inferred from the order of the consonants and the context. Because the Phoenician alphabet simplified the earlier script of the Canaanites, more people could now become literate, not just a small, specialized group of scribes. The Phoenicians' commercial success was undoubtedly partly a result of their better, more efficient record-keeping system that a larger population could learn and employ. Other cultures like the Aramaean peoples and the Israelites quickly adapted the new script to their own languages. By the eighth century BCE, the Greeks had also adopted and later adapted the Phoenician alphabet to write their language.
Beginning with the Assyrian Empire's expansion in the eighth century BCE, the Phoenician kingdoms became subjects of the successive Iron Age empires of western Asia: the Assyrians, the Chaldeans (Neo-Babylonian), and the Persians. The Phoenicians continued to flourish, however. The Assyrians valued Phoenician artists, and finely crafted Phoenician wares such as jewelry and furniture became popular among the ruling elites. The Persians relied largely on Phoenician sailors and ships to serve as their naval forces, especially in their campaigns to conquer Greece in the early fifth century BCE. When Phoenician city-states such as Sidon and Tyre became subject to foreign rule, many Phoenicians immigrated to the city of Carthage (in modern-day Tunisia), founded by Phoenician merchants around 700 BCE as a stopping place on the long but profitable voyage to Spain. Given this influx of immigrants, Carthage grew large and wealthy, and by the fifth century BCE it was the dominant power in the western Mediterranean.
The Phoenicians were not the only people establishing colonial outposts around the wider Mediterranean world. Beginning in the eighth century, Greeks began founding colonies in North Africa, in coastal Spain and France, on the shores of the Black Sea, and on the Italian peninsula. Many of these colonies were built in resource-rich areas and commonly produced grain, tin, or timber for export back to Greece. Others served more mercantile interests, trading with major and minor powers across the Mediterranean. It was through these colonial ventures that Greeks and Phoenicians came into contact with the Etruscans of the northern Italian peninsula.
The Etruscans were organized into independent city-states such as Veii and Vulci, much like the Greeks were, and each city was ruled by its own king and council of elders. In their art and architecture, the Etruscans followed Greek models (Figure 6.8). They modified the alphabet the Greeks had acquired from the Phoenicians to write their language, which scholars have not yet fully deciphered. By 600 BCE, they had expanded beyond their base in modern Tuscany and colonized Rome, which became an Etruscan city. They also founded new colonies in northern and southern Italy. The Etruscan states remained the dominant power in the Italian peninsula until 474 BCE. In that year, at the Battle of Cumae off the coast of southern Italy, the naval forces of the Greek city-state of Syracuse won a decisive victory over the Etruscan fleet and emerged as the chief power in the region, along with Carthage.
Since ancient Rome began as an Etruscan city-state, the Etruscans strongly influenced the development of Roman culture. For example, Roman priests divined the will of the gods by examining a sacrificed animal's entrails, a custom adopted from the Etruscans. The Etruscans honored their dead with elaborate tombs, and the Romans did the same, maintaining that the spirits of their ancestors watched over them. Gladiatorial contests in Rome had origins among the Etruscans, who at funerals forced prisoners of war to fight to the death as human sacrifices to their dead. The fasces, a bundle of rods and an ax that symbolized the authority of Roman magistrates, originally denoted the authority of Etruscan kings. Finally, the Roman alphabet, still used in western and central Europe today, was based on Etruscan modifications to the Greek alphabet.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://human.libretexts.org/Courses/Coastline_College/Hum_C100%3A_Introduction_to_the_Humanities_(Volmer)/03%3A_The_Ancient_Greek_World_and_Roman_Empire/3.01%3A_Early_Mediterranean_Peoples",
"book_url": "https://commons.libretexts.org/book/human-206684",
"title": "3.1: Early Mediterranean Peoples",
"author": "OpenStax"
} |
3.2: Ancient Greece
Learning Objectives
By the end of this section, you will be able to:
- Identify the historical factors that shaped the development of the Greek city-state
- Describe the evolution of the political, economic, and social systems of Athens and Sparta
- Discuss the alliances and hostilities among the Greek city-states during the Classical period
- Identify the major accomplishments of Ancient Greek philosophy, literature, and art
In the centuries following the collapse of the Bronze Age Mycenaean kingdoms around 1100 BCE, a dynamic new culture evolved in Iron Age Greece and the Aegean region. During this period, the Greek city-states developed innovative consensual governments. Free adult males participated in their own governance and voted to create laws and impose taxes. This system of government contrasted with the earlier monarchies of the ancient Near East, in which rulers claimed to govern their subjects through the will of the gods.
The degree of political participation in the Greek city-states varied from monarchy and oligarchy, or government by a small group of wealthy elites, to democracy, literally "rule by the people," a broader-based participation that eventually included both rich and poor adult males. These systems influenced Ancient Roman and European political thought through the centuries. The Greek Classical period (500–323 BCE) witnessed constant warfare among rival city-states, yet it was marked by the creation of enduring works of literature and art that inspired centuries of European artists and writers. Greek philosophers also subjected the human condition and the natural world to rational analysis, rejecting traditional beliefs and sacred myths.
Archaic Greece
The Greek Dark Ages (1100–800 BCE) followed the collapse of the Mycenaean civilization and began to recede around 800 BCE. From this point and for the next few centuries, Greece experienced a revival in which a unique and vibrant culture emerged and evolved into what we recognize today as Classical Greek civilization. This era, from 800 to 500 BCE, is called Archaic Greece after arche, Greek for "beginning."
The Greek renaissance was marked by rapid population growth and the organization of valleys and islands into independent city-states, each known as a polis (Greek for city-state). Towns arose around a hill fortress or acropolis to which inhabitants could flee in times of danger. Each polis had its own government and religious cults, and each built monumental temples for the gods, such as the temple of Hera, wife of Zeus and protector of marriage and the home, at the city-state of Argos. Though politically disunited, the Greeks, who began to refer to themselves as Hellenes after the mythical king Hellen, did share a common language and religion. The most famous of their sacred sites were Delphi, near Mount Parnassus in central Greece and seat of the oracle of Apollo, the god of prophecy, and Olympia in southern Greece, sacred to Zeus, who ruled the pantheon of gods at Mount Olympus (Figure 6.9). Beginning in 776 BCE, according to Aristotle, Greeks traveled to Olympia every four years to compete in athletic contests in Zeus's honor, the origin of the Olympic Games.
The Past Meets the Present
The Olympic Games
Postponed a year because of the COVID-19 pandemic, the 2021 Games of the XXXII Olympiad in Japan included more than three hundred events in thirty-three sports, including new entries like skateboarding, rock climbing, and surfing. Modern games have been held since 1896, when the new International Olympic Committee started the tradition, but as the name suggests, the inspiration came from Ancient Greece.
Athletic events in Ancient Greece were important displays of strength and endurance. There were contests at the sanctuaries at Delphi and Nemea (near Argos), but none was as renowned as the Olympic Games, held at the sanctuary in Olympia that was dedicated to Zeus. Contestants came from all over the Greek world, including Sicily and southern Italy.
Unlike the skateboarding and surfing of modern games, the ancient games focused on skills necessary for war: running, jumping, throwing, and wrestling. Over time, sports that included horses, like chariot racing, were also incorporated. Such events were referenced in Homer's Iliad, when the hero Achilles held athletic contests to honor his fallen comrade Patroclus and awarded prizes or athla (from which the word "athlete" is derived). The centerpiece of the ancient games was the two-hundred-yard sprint, or stadion, from which comes the modern word "stadium" (Figure 6.10).
Unlike the modern games, where attendees pay great sums to watch athletes compete, admission to the ancient games was free—for men. Women were forbidden from watching and, if they dared to attend, could pay with their lives. Competitors were likely locals with proven abilities, though over time professional athletes came to dominate the sport. They could earn a good living from prizes and other rewards gained through their talent and celebrity, and their statues adorned the sanctuary at Olympia. The poet Pindar in the early fifth century BCE was renowned for composing songs to honor them when they returned home as victors. The Olympic Games continued to be celebrated until 393 CE, when they were halted during the reign of the Christian Roman emperor Theodosius.
- Why might the organizers of the modern Olympic Games have named their contest after the ancient Greek version?
- How are the ancient games similar to the modern Olympic Games? How are they different?
The start of the Archaic period also witnessed the reemergence of specialization in Greek society. Greek artists became more sophisticated and skilled in their work. They often copied artistic styles from Egypt and Phoenicia, where Greek merchants were engaging in long-distance trade. At the site of Al-Mina, along the Mediterranean coast in Syria where historians believe the Phoenician alphabet was first transmitted to the Greeks, Greek and Phoenician merchants exchanged goods. Far to the west, on the island of Ischia off the west coast of Italy, Greeks were competing with Phoenician merchants for trade with local peoples, whose iron ore was in strong demand. Thanks to their contact and trade with the Phoenicians, Greeks adapted the Phoenician alphabet to their own language, making an important innovation by adding vowels (a, e, i, o, u). The eighth century BCE thus witnessed the return of literacy and the end of the Aegean world's relative isolation after the interlude of the Greek Dark Ages.
The eighth century BCE was also the period in which the epic poems the Iliad and the Odyssey were composed, traditionally attributed to the blind poet Homer. While historians debate whether Homer was a historical or a legendary figure, they agree the epics originated in the songs of oral poets in the Greek Dark Ages. In the eighth century BCE, using the Greek alphabet, scribes wrote these stories down for the first time.
As the population expanded during the Archaic period, a shortage of farmland brought dramatic changes. Many Greeks in search of land to farm left their homes and founded colonies along the shores of the Black Sea and the northern Aegean, in North Africa at Cyrene in Libya, and in southern Gaul (modern France) at Massalia (Marseille). The largest number were on the island of Sicily and in southern Italy, the region the Greeks referred to as Magna Graecia or “Greater Greece.” When Greeks established a colony, it became an independent polis with its own laws. The free adult males of the community divided the colony’s land into equal lots. Thus, a new idea developed in the colonies that citizenship in a community was associated with equality and participation in the governing of the state.
In the society of Archaic Greece, the elite landowners, or aristoi, traditionally controlled the government and the priesthoods in the city-states. But thanks to the new ideas from the colonies, the common people, or kakoi, began demanding land and a voice in the governing of the polis. They were able to gain leverage in these negotiations because city-states needed troops in their wars for control of farmland. The nobility relied on the wealthier commoners, who could afford to equip themselves with iron weapons and armor. In some city-states, the aristoi and the kakoi were not able to resolve their differences peaceably. In such cases, a man who had strong popular support in the city would seize power and rule over the city. The Greeks referred to such populist leaders as tyrants.
In the sixth century BCE, the difficulties caused by the land shortage were relieved by the spread of coinage. Adopting a practice of the kings of Lydia in western Asia Minor (Turkey), Athens stamped silver pieces with the image of an owl, a symbol of wisdom often associated with the goddess Athena (Figure 6.11). Instead of weighing precious metals to use as currency or arguing over the value of bartered goods to trade, merchants could use coins as a simple medium of exchange. The agora, or place of assembly in each city-state, thus became a marketplace to buy and sell goods. In the sixth century BCE, this rise of a market economy stimulated economic growth as farmers, artisans, and merchants discovered stronger incentives to produce and procure more goods for profit. For example, farmers learned how to produce more food with the land they already possessed rather than always seeking more land. The economic growth of this period is reflected in the many new temples the Greek city-states constructed then.
Sparta and Athens
In the Archaic period, Athens and Sparta emerged as two of the most important of the many Greek city-states. Not only did their governments and cultures dominate the Greek world in the subsequent Classical period; they also fired the imaginations of Western cultures for centuries to come. Athens was the birthplace of democracy, whereas Sparta was an oligarchy headed by two kings.
The Rise and Organization of Sparta
Sparta in the eighth century BCE was a collection of five villages in Laconia, a mountain valley in the Peloponnese in southern Greece. Due to the shortage of farmland, the citizens (adult males) of these villages, the Spartiates, all served in the military and waged war on neighboring towns, forcing them to pay tribute. The Spartiates also appropriated farmland for themselves and enslaved the inhabitants of these lands, most famously the Messenians, who became known as the helots. Just as Greek colonists at this time divided land among themselves into equal lots, the Spartiates likewise divided the conquered land equally and assigned to each landowner a certain number of helot families to work it. Helots, unlike enslaved people in other parts of Greece, could not be bought or sold but remained on the land as forced laborers from generation to generation. In the seventh century BCE, Sparta conquered the land of Messene to its west and divided its farmland equally among the Spartiates.
By the late sixth century BCE, the wealth from the rich agricultural land that Sparta then controlled had made it the most powerful state in the Peloponnese. Sparta also organized the city-states of this region and parts beyond into a system of alliances that historians refer to as the Peloponnesian League. Its members still had self-government and paid no tribute to Sparta, but all were expected to have the same friends and enemies as Sparta, which maintained its dominance in the league. Sparta also used its army to overthrow tyrants in the Peloponnesian city-states and restore political power to the aristoi.
The Spartans were proud of their unique system of government, or constitution, which was a set of laws and traditional political practices rather than a single document. It was said to have been created by a great lawgiver named Lycurgus around 800 BCE, but modern historians view its development as an evolutionary process during the Archaic period rather than the work of a single person.
Sparta had two hereditary kings drawn from rival royal families. Their powers were very limited, though both sat as permanent members of the Council of Elders and were priests in the state religion. On occasion, the Spartan kings also led armies into battle. The Assembly of Spartiates passed all laws and approved all treaties with the advice of the Council of Elders. This Assembly also elected five judges every year who administered the affairs of state, as well as the members of the Council of Elders.
The unique element of Spartan culture was the agoge , its educational system. At the age of seven, boys were separated from their families and raised by the state. To teach them to live by their wits and courage, they were fed very little so they had to learn how to steal food to survive. At the age of twelve they began an even more severe regimen. They were not allowed clothes except a cloak in the wintertime, and they bathed just once a year (Figure 6.12). They also underwent ritual beatings intended to make them physically strong and hardened warriors. At the age of eighteen, young men began two years of intense military training. At the age of twenty, a young Spartan man’s education was complete.
Women of the Spartiate class, before marrying in their mid-teens, also practiced a strict physical regimen, since they were expected to be as strong as their male relatives and husbands and even participate in defending the homeland. Spartan women enjoyed a reputation for independence, since they managed the farms while men were constantly training for or at war and often ran their family estates alone due to the early deaths of their soldier husbands. The state organized unmarried women into teams known as chorai (from which the term chorus is derived) that danced and sang at religious festivals.
When a Spartiate man reached the age of thirty, he could marry, vote in the Assembly, and serve as a judge. Each Spartiate remained in the army reserve until the age of sixty, when he could finally retire from military service and become eligible for election to the Council of Elders. Spartan citizens were proud to devote their time to the service of the state in the military and government; they did not have to work the land or learn a trade since this work was done for them by commoners and helot subjects.
The Rise and Organization of Athens
Athens, like Sparta, developed its own system of government in the Archaic period. Uniquely large among Greek city-states, Athens had long enclosed all the land of Attica, which included several mountain valleys. It eventually developed into a militarily powerful democratic state in which all adult male citizens could participate in government, though "citizenship" was a restricted concept, and because only males could participate, it was by nature a limited democracy.
The roots of Athenian democracy are long and deep, however, and its democratic institutions evolved over centuries before reaching their fullest expression in the fifth century BCE. It was likely the growing prosperity of Athenians in the eighth century that had set Athens on this path. As more families became prosperous, they demanded greater say in the functioning of the city-state. By the seventh century BCE, Athens had an assembly allowing citizens (free adult males) to gather and discuss the affairs of the state. However, as the rising prosperity of Athenians stalled and economic hardship loomed by the end of the century, the durability of the fledgling democracy seemed in doubt. Attempts to solve the economic problems by adjusting the legal code, most notably by the legislator Draco (from whose name we get the modern term “draconian”), had little effect, though codifying the law in written form brought more clarity to the legal system.
With the once-thriving middle class slipping into bankruptcy and sometimes slavery, civil war seemed inevitable. Disaster was avoided only with the appointment of Solon in 594 BCE to restore order. Solon came from a wealthy elite family, but he made it known that he would draft laws to benefit all Athenians, rich and poor. A poet, he used his songs to convey his ideas for these new laws (Figure 6.13).
One of Solon’s first measures was to declare that all debts Athenians owed one another were forgiven. Solon also made it law that no Athenian could be sold into slavery for failure to repay a loan. These decrees did much to provide relief to farmers struggling with debt who could now return to work the land. Under Solon’s new laws, each of Athens’s four traditional tribes chose one hundred of its members by lot, including commoners, to sit in the new Council of Four Hundred and run the government. There were still magistrates, but now Solon created the jury courts. All Athenians could appeal the ruling of a magistrate in court and have their cases heard by a jury of fellow citizens. Solon also set up a hierarchal system in which citizens were eligible for positions in government based on wealth instead of hereditary privilege. Wealth was measured by the amount of grain and olive oil a citizen’s land could produce. Only the wealthiest could serve as a magistrate, sit on the Council, and attend the Assembly and jury courts. Citizens with less wealth could participate in all these activities but could not serve as magistrates. The poorest could only attend the Assembly and the jury courts.
Solon’s reforms were not enough to end civil unrest, however. By 545 BCE, a relative of his named Pisistratus had seized power by force with his own private army and ruled as a tyrant with broad popular support. Pisistratus was reportedly a benevolent despot and very popular. He kept Solon’s reforms largely in place, and Athenians became accustomed to serving in Solon’s Council and in jury courts. They were actively engaged in self-government, thus setting the stage for the establishment of democracy. Pisistratus also encouraged the celebration of religious festivals and cults that united the people of Attica through a common religion. To further help the farmers Solon brought back, Pisistratus redistributed land so they could once again make a living.
After Pisistratus’s death, his sons tried to carry on as tyrants, but they lacked their father’s popularity. Around 509 BCE, an Athenian aristocrat named Cleisthenes persuaded the Spartans to intervene in Athens and overthrow these tyrants. The Spartans, however, set up a government of elites in Athens that did not include Cleisthenes. Consequently, he appealed to the common people living in the villages, or demes , to reject this pro-Spartan regime and establish a “ demo cracy.” His appeal was successful, and Cleisthenes implemented reforms to Solon’s system of government. He replaced the Council of Four Hundred with one of five hundred and reorganized the Athenians into ten new tribes, including in each one villages from different parts of Attica . Every year, each tribe chose fifty members by lot to sit in the new Council. This reform served to unite the Athenians, since each tribe consisted of people from different parts of Attica who now had to work together politically. Each tribe’s delegation of fifty also served as presidents for part of the year and ran the day-to-day operation of the government.
By the end of the Archaic period, Athens had developed a functioning direct democracy, which differs from modern republics in which citizens vote for representatives who sit in the legislature. All citizens could sit in the Athenian Assembly, which was required to meet at least ten times a year. All laws had to be approved by the Assembly. Only the Assembly could declare war and approve treaties. Athens had a citizen body of thirty to forty thousand adult males in the Classical period, but only six thousand needed to convene for meetings of the Assembly. Citizens could also be chosen by lot to sit in the Council. Since they were permitted to serve for just two one-year terms over a lifetime, many Athenians had the opportunity to participate in the executive branch of government. All citizens also served on juries, which not only determined the guilt or innocence of the accused but also interpreted the way the law was applied. Women, enslaved people, and foreign residents could not participate. However, women of the citizen class were prominent in the public religious life of the city, serving as priestesses and in ceremonial roles in religious festivals.
Classical Greece
The Greek Classical period (500–323 BCE) was an era of great cultural achievement in which enduring art, literature, and schools of philosophy were created. It began with the Greek city-states uniting temporarily to face an invasion by the mighty Persian Empire, but it ended with them locked in recurring conflicts and ultimately losing their independence, first to Persia and later to Macedon.
The Persian Wars
The Persian Wars (492–449 BCE) were a struggle between the Greek city-states and the expanding Persian Empire. In the mid-sixth century BCE, during the reign of Cyrus the Great, Persian armies subdued the Greek city-states of Ionia, located across the Aegean from Greece in western Asia Minor (Turkey) (Figure 6.14). To govern the cities, the Persians installed tyrants recruited from the local Greek population. The resident Greeks were unhappy with the tyrants' rule, and in 499 BCE they rose in the Ionian Rebellion, joined by Athens and the Greek cities on the island of Cyprus. But by 494 BCE Persian forces had crushed the rebellions in both Ionia and Cyprus. For intervening in Persian affairs, the Persian king Darius decided that Athens must be punished.
In 490 BCE, Darius assembled a large fleet and army to cross the Aegean from Asia Minor, planning to subdue Athens and install one of Pisistratus's sons as tyrant there. These Persian forces landed at Marathon on the east coast of Attica. They vastly outnumbered the Athenians but were drafted subjects with little motivation to fight and die. The Athenian soldiers, in contrast, were highly motivated to defend their democracy. The Persians could not withstand the Athenians' spirited charge in the Battle of Marathon and were forced back onto their ships. Leaving the battle, the Persians then sailed around Attica to Athens. The soldiers at Marathon raced by land across the peninsula to guard the city. Seeing the city defended, the Persians returned to Asia Minor in defeat.
In 480 BCE, Xerxes, the son and successor of Darius, launched his own invasion of Greece intended to avenge this defeat and subdue all the Greek city-states. He assembled an even larger fleet as well as an army that would invade by land from the north. At this time of crisis, most of the Greek city-states decided to unite as allies and formed what is commonly called the Hellenic League. Sparta commanded the armies and Athens the fleet. A small band of the larger land forces, mostly Spartans, decided to make a stand at Thermopylae, a narrow pass between the mountains and the sea in northeastern Greece. Their goal was not to defeat the invading Persian army, which vastly outnumbered them, but to delay it so the rest of the forces could organize a defense. For days the small Spartan force, led by their king Leonidas, successfully drove back a vastly superior Persian army, until a Greek traitor informed the Persians of another mountain pass that enabled them to circle around and surround the Spartans. The Spartan force fought to the death, inspiring the Greeks to continue the fight and hold the Hellenic League together.
After the Battle of Thermopylae, the Persian forces advanced against Athens. The Athenians abandoned their city and withdrew to the nearby island of Salamis, where they put their faith in their fleet to protect them. At the naval Battle of Salamis, the allied Greek fleet led by Athens destroyed the Persian ships. Xerxes then decided to withdraw much of his force from Greece, since he no longer had a fleet to keep it supplied.
In 479 BCE, the reduced Persian force retreated from Athens to the plains of Boeotia, just north of Attica. The Greek allied forces under the command of Sparta advanced into Boeotia and met the Persian army at the Battle of Plataea. The Persian forces, mostly unwilling draftees, were no match for the Spartan troops, and the battle ended in the death or capture of most of the Persian army.
The Athenian Empire and the Peloponnesian War
After the Persian Wars, the Athenians took the lead in continuing the fight against Persia and liberating all Greek city-states. In 477 BCE, they organized an alliance of Greek city-states known today as the Delian League, headquartered on the Aegean island of Delos. Members could provide ships and troops for the league or simply pay Athens to equip the fleet, which most chose to do. Over the next several decades, allied forces of the Delian League liberated the Greek city-states of Ionia from Persian rule and supported rebellions against Persia in Cyprus and Egypt. Around 449 BCE, Athens and Persia reached a peace settlement in which the Persians recognized the independence of Ionia and the Athenians agreed to stop aiding rebels in the Persian Empire.
Over the course of this war, the money from the Delian League enriched many lower-class Athenians, who found employment as rowers in the fleet. Athens even began paying jurors in jury courts and people who attended meetings of the Assembly. Over time it became clear to the other Greeks that the Delian League was no longer an alliance but an empire in which the subject city-states paid a steady flow of tribute. In 465 BCE, the city-state of Thasos withdrew from the league but was compelled by Athenian forces to rejoin. Around 437 BCE, the Athenians began using tribute to rebuild the temples on the Acropolis that the Persians had destroyed. Including the Parthenon, dedicated to Athena Parthenos, these were some of the most beautiful temples ever built and the pride of Athens, but to the subject city-states they came to symbolize Athenians' despotism and arrogance (Figure 6.15).
The wealth and power of Athens greatly concerned the Spartans, who saw themselves as the greatest and noblest of the Greeks. The rivalry between the two city-states eventually led them into open conflict. In 433 BCE, the Athenians assisted the city-state of Corcyra in its war against Corinth. Corinth was a member of the Peloponnesian League and requested that Sparta, the leader of this league, take action against Athenian aggression. Thus, in 431 BCE, the Peloponnesian War began with the invasion of Attica by Sparta and its allies (Figure 6.16).
The political leader Pericles persuaded his fellow Athenians to withdraw from the countryside of Attica and move within the walls of Athens, reasoning that the navy would provide them food and supplies and the wall would keep them safe until Sparta tired of war and sought peace. Pericles's assessment proved correct. In 421 BCE, after ten years of war, the Spartans and Athenians agreed to the Peace of Nicias, which kept the Athenian empire intact. The cost of the war for Athens was high, however. Due to the crowding of people within its walls, a plague had erupted in the city in 430 BCE and killed many, including Pericles.
Link to Learning
We know of the 430 BCE plague in Athens from the writings of Thucydides, the ancient chronicler and historian of the Peloponnesian War. But what was the mysterious illness? And how did it affect Athenian society and politics? Take a look at this article about the plague in Athens from National Geographic for some modern answers.
Several years later, arguing that the empire could thrive only by expanding, an ambitious young Athenian politician named Alcibiades (a kinsman of Pericles) inspired a massive invasion of Sicily targeting Syracuse, the island's largest city-state. Just as the campaign began in 415 BCE, Alcibiades's political enemies in Athens accused him of impiety and treason, and he fled to Sparta to avoid a trial. Without his leadership, the expedition against Syracuse foundered, and in 413 BCE the entire Athenian force was destroyed. In exile, Alcibiades convinced the Spartans to invade Attica again, now that Athens had been weakened by the disaster in Syracuse. In the years that followed, the Spartans realized they needed a large fleet to defeat Athens, and they secured funds for it from Persia on the condition that Sparta restore the Greek cities in Ionia to Persian rule. In 405 BCE, the new Spartan fleet destroyed the Athenian navy at the Battle of Aegospotami in the Hellespont. The Athenians, under siege, could not secure food or supplies without ships, and in 404 BCE the city surrendered to Sparta. The Peloponnesian War ended with the fall of the city and the collapse of the Athenian empire.
The conclusion of the Peloponnesian War initially left Sparta dominant in Greece. Immediately following the war, Sparta established oligarchies of local aristocrats in the city-states that had been democracies under the Delian League, and in Athens it installed the Thirty Tyrants, a brief oligarchic regime. With regard to Persia, Sparta reneged on its promise to restore the Greek city-states in Ionia to Persian control. Persia responded by funding Greek resistance to Sparta, which eventually compelled Sparta to accept Persia's terms in exchange for Persian support. This meant turning over the Ionian city-states as it had previously promised.
Now with Persian backing, the Spartans continued to interfere in the affairs of other Greek city-states. This angered city-states like Thebes and Athens. In 371 BCE, the Thebans defeated the Spartans at the Battle of Leuctra in Boeotia. The next year they invaded the Peloponnese and liberated Messene from Spartan rule, depriving the Spartans of most of their helot labor there. Without the helots, the Spartans could not support their military system as before, and their Peloponnesian League collapsed. Alarmed by the sudden growth of Thebes's power, Athens and Sparta again joined forces and, in 362 BCE, fought the Thebans at the Battle of Mantinea. The battle was inconclusive, but Thebes's dominance soon faded. By 350 BCE, the Greek city-states were exhausted economically and politically after decades of constant warfare.
The Classical “Golden Age”
Many historians view the Greek Classical period and the cultural achievements in Athens in particular as a "Golden Age" of art, literature, and philosophy. Some scholars argue that this period saw the birth of science and philosophy because for the first time people critically examined the natural world and subjected religious beliefs to reason. (Other modern historians argue that this position discounts the accomplishments in medicine and mathematics of ancient Egypt and Mesopotamia.) For example, around 480 BCE, Empedocles speculated that the universe was not created by gods but instead was the result of the four material "elements"—air, water, fire, earth—being subjected to the forces of attraction and repulsion. Another philosopher and scientist of the era, Democritus, maintained that the universe consisted of tiny particles he called "atoms" that came together randomly in a vortex to form the universe.
Philosophers questioned not only the traditional views of the gods but also traditional values. Some of this questioning came from the sophists ("wise ones") of Athens, those with a reputation for learning, wisdom, and skillful deployment of rhetoric. Sophists emerged as an important presence in the democratic world of Athens beginning in the mid-fifth century BCE. They claimed to be able to teach anyone rhetoric, or the art of persuasion, for a fee, as a means to achieve success as a lawyer or a politician. While many ambitious men sought the services of sophists, others worried that speakers thus trained could lead the people to act against their own self-interest.
Many thought Socrates was one of the sophists. A stonecutter by trade, Socrates publicly questioned sophists and politicians about good and evil, right and wrong. He wanted to base values on reason instead of on unchallenged traditional beliefs. His questioning often embarrassed powerful people in Athens and made enemies, while his disciples included the politician Alcibiades and even some who had opposed Athenian democracy. In 399 BCE, an Athenian jury court found Socrates guilty of impiety and corrupting the youth, and he was sentenced to death (Figure 6.17).
Socrates left behind no writings of his own, but some of his disciples wrote about him. One of these was Plato, who wrote dialogues from 399 BCE to his death in 347 BCE that featured Socrates in conversation with others. Through these dialogues, Plato constructed a philosophical system that included the study of nature (physics), of the human mind (psychology and epistemology, the theory of knowledge), and ethics. He maintained that the material world we perceive is an illusion, a mere shadow of the real world of ideas and forms that underlie the universe. According to Plato, the true philosopher uses reason to comprehend these ideas and forms.
Plato established a school at the Academy, which was a gymnasium or public park near Athens where people went to relax and exercise. One of his most famous pupils was Aristotle, who came to disagree with his teacher and believed that ideas and forms could not exist independently of the material universe. In 334 BCE, Aristotle founded his own school at a different gymnasium in Athens, the Lyceum, where his students focused on the reasoned study of the natural world. Modern historians view Plato and Aristotle as the founders of Western (European) philosophy because of the powerful influence of their ideas through the centuries.
Athens in the Golden Age was also the birthplace of theater. Playwrights of the fifth century BCE such as Sophocles and Euripides composed tragedies that featured music and dance, like operas and musicals today (Figure 6.18). The plots were based on traditional myths about gods and heroes, but through their characters the playwrights pondered philosophical questions of the day that have remained influential over time. In Sophocles's Antigone, for example, Antigone, the daughter of Oedipus, must decide whether to obey the laws or follow her religious beliefs.
Link to Learning
For an example of Greek theater, watch this modern performance of Lysistrata by the comic poet Aristophanes. In this comedy, first performed in 411 BCE, the women of Greece plot to end the Peloponnesian War. In the Greek original, the actors would have worn masks and sung their parts, as in a modern opera.
The study of history also evolved during the Golden Age. Herodotus and Thucydides are considered the first true historians because they examined the past to rationally explain the causes and effects of human actions. Herodotus wrote a sweeping history of wide geographic scope, called Histories ("inquiries"), to explore the deep origins of the tension between the Persian and Greek worlds. In History of the Peloponnesian War, Thucydides employed objectivity to explain the politics, events, and brutality of the conflict in a way that is similar in some respects to the approach of modern historians.
Finally, this period saw masterpieces of sculpture, vase painting, and architecture. Classical Age Greek artists broke free of the heavily stylized and two-dimensional art of Egypt and the Levant, which had inspired Greek geometric forms, and produced their own uniquely realistic styles that aimed to capture in art the ideal human form. Centuries later, and especially during the European Renaissance, artists modeled their own works on these classical models.
Beyond the Book
Ancient Greek Sculpture and Painting
In the Archaic period, the Greeks had more contact with the cultures of Phoenicia and Egypt, and artists modeled their work on examples from these regions. For instance, ancient Egyptian artists followed strict conventions in their heavily stylized works, such as arms held close to the sides of the body and a parallel stance for the feet. Greek artists adopted these conventions in their statues of naked youths, or kouroi, which were often dedicated in religious sanctuaries (Figure 6.19).
During the Classical period, Greek sculptors still produced statues of naked youths for religious sanctuaries, but in more lifelike poses that resembled the way the human body appears naturally (Figure 6.20).
Greek painting is most often preserved on vases. In the Archaic period, artists frequently decorated vases with motifs such as patterning, borrowed from Phoenician and Egyptian art (Figure 6.21).
By the Classical era, especially in Athens, vase painters were relying less on patterning and instead depicting realistic scenes from myths and daily life (Figure 6.22).
In the Classical period, Greek artists thus came into their own and no longer borrowed heavily from the art of Egypt and Phoenicia.
- What do the many artistic influences on Greece suggest about its connections with other parts of the ancient world?
- Why might Greek art have relied heavily on mythical symbols and depictions? What does this indicate about Greek culture?
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://human.libretexts.org/Courses/Coastline_College/Hum_C100%3A_Introduction_to_the_Humanities_(Volmer)/03%3A_The_Ancient_Greek_World_and_Roman_Empire/3.02%3A_Ancient_Greece",
"book_url": "https://commons.libretexts.org/book/human-206684",
"title": "3.2: Ancient Greece",
"author": "OpenStax"
} |
3.3: The Hellenistic Era
Learning Objectives
By the end of this section, you will be able to:
- Explain the events that led to the rise of Alexander the Great
- Analyze Alexander the Great’s successes as a military and political leader
- Discuss the role that Alexander the Great’s conquests played in spreading Greek culture
The Classical period in Greece ended when Greece lost its freedom to the Kingdom of Macedon and Macedon's king Alexander the Great conquered the Persian Empire. The period that followed Alexander's death is known as the Hellenistic period (323–31 BCE). Alexander's empire was divided among his top generals, including Seleucus, Ptolemy, and Antigonus. During this time, Greeks, also called Hellenes, ruled over and interacted with the populations of the former Persian Empire. The resulting mixture of cultures was neither Greek nor non-Greek but "Greek-like," or Hellenistic, a term that refers to the flourishing and expansion of Greek language and culture throughout the Mediterranean and Near East during this period.
The Kingdom of Macedon
The ancient Kingdom of Macedon straddled today's Greece and North Macedonia. The Macedonians did not speak Greek but had adopted Greek culture in the Archaic period, and their royal family claimed to be descended from the mythical Greek hero Heracles.
King Philip II of Macedon, who reigned from 359 to 336 BCE, transformed the kingdom into a great power. He recruited common farmers and developed them into a formidable infantry, with trained aristocrats as cavalry. His tactical skills and diplomacy allowed Philip to secure control of new territory in Thrace (modern-day northern Greece and Bulgaria), which provided access to precious metals and thus the economic resources to expand his military power.
In 338 BCE, Athens and Thebes finally put decades of conflict aside to ally against the rising power of Macedon. At the Battle of Chaeronea (338 BCE), the Macedonians crushed this allied army. Philip sought to unite the Greek city-states under his leadership after this victory, and he organized them toward the goal of waging war against the Persian Empire. However, in 336 BCE, Philip was killed by an assassin with a personal grudge.
Philip II was succeeded by his twenty-year-old son Alexander III, later known as Alexander the Great, who immediately faced an invasion by Thracian tribes from the north and a rebellion in Greece led by Thebes and Athens. Within a year, the young king had crushed these opponents and announced he was carrying out his father's plan to wage war against Persia. Darius III, the Persian king, amassed armies to face him, but they were mainly draftees from the subject peoples of the Persian Empire. At the battles of Issus (333 BCE) and Gaugamela (331 BCE), these forces collapsed against the Macedonians, commanded by Alexander himself.
At first, Alexander envisioned his campaign as a war of vengeance against Persia. Although he was Macedonian, he saw himself as a Hellene and often compared himself to the hero Achilles of the Iliad , from whom he claimed to be descended through his mother. In 330 BCE, Alexander’s forces sacked and later burned Persepolis, the jewel of the Persian Empire. After the assassination of Darius III by disgruntled Persian nobles that same year, however, Alexander claimed the Persian throne and introduced Persian customs to his court, such as having his subjects prostrate themselves before him. To consolidate his control of the Persian Empire, in 330–326 BCE he advanced his army deep into central Asia and to the Indus River valley (modern Pakistan) (Figure 6.23). In 326 BCE, his exhausted troops mutinied and refused to advance to the Ganges River in central India as Alexander desired. He led his army back to Babylon in Mesopotamia, where he died in 323 BCE at the age of thirty-three, probably due to the cumulative impact of injuries experienced during the campaign.
Dueling Voices
Why Did Alexander Burn Persepolis?
When Alexander reached Persepolis after the Battle of Gaugamela, he saw what was possibly the most beautiful city in the entire Persian Empire. Over the centuries Darius, Xerxes, and others had adorned it with colorful palaces, public buildings, and artwork. Within a few months of his arrival, however, Alexander had reduced the once-stunning imperial city to ashes and ruins. Why?
Historians have pondered this question for thousands of years. Though there are several accounts, the earliest was penned centuries after the actual events. The most common explanation cites a long night of drunken revels and a Greek woman named Thaïs (Figure 6.24). This account is by the first-century BCE Greek historian Diodorus Siculus:
Alexander held games to celebrate his victories; he offered magnificent sacrifices to the gods and entertained his friends lavishly. One day when the Companions [fellow cavalry soldiers] were feasting, and intoxication was growing as the drinking went on, a violent madness took hold of these drunken men. One of the women present [Thaïs] declared that it would be Alexander’s greatest achievement in Asia to join in their procession and set fire to the royal palace, allowing women’s hands to destroy in an instant what had been the pride of the Persians.
— Diodorus of Sicily, Library of World History
Later Roman historians such as Quintus Curtius Rufus and Plutarch provide similar accounts, saying the fire was the result of an out-of-control party and lit at Thaïs's insistence. But at least one ancient writer disagrees. Relying on sources from Ptolemy and other contemporaries of Alexander, the historian Arrian of Nicomedia makes no mention of Thaïs or a night of heavy drinking. In Anabasis, he says the destruction of the city was intentional, the product of calculated revenge "for their invasion of Greece...for the destruction of Athens, the burning of the temples, and all the other crimes they had committed against the Greeks."
What really happened at Persepolis? Was Thaïs the instigator or merely the scapegoat? Thousands of years later we may be able only to speculate about the cause of this catastrophic event.
- Given what you’ve read, who do you think was responsible for the burning of Persepolis? Why?
- If Thaïs wasn’t responsible, why do you think some ancient historians were convinced of her culpability?
Though his Bactrian wife Roxane was pregnant when he died, Alexander had made no arrangements for a successor. Members of his court and his military commanders thus fought among themselves for control of the empire in what historians refer to as the Wars of the Successors. One of the more colorful contestants was Pyrrhus, who was not Macedonian but was the king of Epirus and Alexander's cousin. Pyrrhus temporarily seized the throne of Macedon and attempted to carve out an empire for himself in Sicily and southern Italy. He never lost a battle, but he lost so many troops in a campaign defending Magna Graecia in southern Italy from Rome that he was never able to capitalize on his success. (Today the term pyrrhic victory refers to a win so costly that it is in effect a loss.) In 272 BCE, Pyrrhus died after being struck by a roof tile thrown at him by an elderly woman during a street battle in the city of Argos. His death marked the end of the wars among Alexander's generals.
By the middle of the third century BCE, certain generals and their descendants were ruling as kings over different portions of Alexander’s empire (Figure 6.25). Antigonus and his descendants, the Antigonids, ruled Macedon and much of Greece. Some city-states in Greece organized federal leagues to maintain their independence from Macedon. The Achaean League was in the Peloponnese and the Aetolian League in central Greece. Another Macedonian general, Ptolemy, was king of Egypt. To win the support of the Egyptian people, Ptolemy and his successors assumed the title of pharaoh and built temples to Egyptian gods. Yet another Macedonian general, Seleucus, and his descendants, the Seleucids, ruled as kings over much of the former Persian Empire, from Asia Minor in the west to central Asia in the east. They adopted many practices of the Persian Empire, including honoring local gods, as revealed by cuneiform records of the offerings they made.
The Seleucid Kingdom was an enormous and complicated region, stretching from the Aegean Sea to today’s Afghanistan, with a population of some thirty million people of various ethnic and linguistic groups. Keeping control over the vast kingdom proved difficult, and some of the far eastern portions like Bactria and Parthia began to break away around 250 BCE. Both became separate Hellenistic kingdoms, ruled initially by former Greek governors of the areas. Around 200 BCE, the Bactrian kingdom invaded and conquered the Indus River valley. The most famous of the Bactrian kings of India was Menander I, whose kingdom stretched from the Indus River valley to the upper Ganges in central India. Menander converted to Buddhism and became a holy man, known in India as Milinda. The Greek colonists who settled in Bactria and India introduced their art into the region, which influenced Indian sculpture, painting, and architecture. By the end of the second century BCE, however, the Bactrian kingdom had collapsed due to constant civil wars between rival claimants to the throne. We know of many of these rival kings only through the coins they issued (Figure 6.26).
In these Hellenistic kingdoms, where peace treaties and alliances could be secured through arranged marriages, elite women might achieve political power unimaginable in Classical Greece. In Egypt, for example, Ptolemy II married his sister Arsinoe, as was the custom for pharaohs, and installed her as co-ruler. Dynastic queens also often ruled when the designated heir was just a child. In 253 BCE, the Seleucid king Antiochus II ended his war for control of Syria with a treaty by which he married Berenice, the daughter of his opponent Ptolemy II. However, Antiochus’s former wife Laodice murdered Berenice and her children upon Antiochus’s death in 246 BCE to secure the succession for her own young son Seleucus II. Ptolemy III subsequently declared war to avenge the death of his sister and her children.
In 194 BCE, Antiochus III ended yet another war for control of Syria by giving his daughter Cleopatra I in marriage to Ptolemy V. Upon Ptolemy’s death in 180 BCE, Cleopatra ruled because their sons and daughter were still children. The most famous of the powerful Hellenistic queens was this Cleopatra’s descendant, Cleopatra VII, who reigned from 51 to 30 BCE. The last of the Ptolemies, Cleopatra VII reigned as co-ruler with her brothers Ptolemy XIII and Ptolemy XIV, as well as with Ptolemy XV, also called Caesarion, who was her son with the Roman general Julius Caesar.
Hellenistic Culture
A characteristic cultural feature of the Hellenistic period was the blending of Greek and other cultures of the former Persian Empire. The Seleucid and Ptolemaic dynasties both employed Greeks and Macedonians as soldiers and bureaucrats in their empires. Alexander the Great and subsequent Hellenistic kings founded Greek cities in the former Persian Empire for Greek and Macedonian colonists, often naming them in honor of themselves or their queens. These cities included the institutions of the Greek cities of their homeland—temples to Greek gods, theaters, agoras (marketplaces), and gymnasia—so the colonists could feel at home in their new environment. At the site of Ai Khanum in modern Afghanistan, archaeologists have uncovered the impressive remains of one such Hellenistic city with a gymnasium.
Alexandria in Egypt, founded by Alexander himself in 331 BCE, was the capital of the Ptolemaic kingdom and the largest Hellenistic city, with a population that reached one million. There the Ptolemies founded the Museon, or “home of the Muses,” from which the term “museum” derives. They modeled this on Aristotle’s Lyceum, as a center for scientific research and literary studies. These same kings also patronized the Alexandrian Library, where they assembled the largest collection of books in the ancient world. Antioch, in today’s southeastern Turkey, was the largest city of the Seleucid kingdom, with a population of half a million. In cities such as Alexandria and Antioch, the Greek-speaking population became integrated with the indigenous population.
Most Greek cities in this period were no longer independent since they were usually under the control of one of the Hellenistic kingdoms. The city-states of the Achaean and Aetolian Leagues in Greece were the exception, fiercely maintaining their independence against the Antigonid rulers of Macedon. Having lost the right of self-government, many Greeks in cities under the rule of kings no longer focused on politics and diplomacy but turned to the search for personal happiness. New religions emerged that promised earthly contentment and eternal life and combined Greek and non-Greek elements. For example, the worship of the Egyptian goddess Isis became common in many Hellenistic cities.
Mithras was a Persian sun god worshiped by the Medes, but in the second century BCE, Greeks in Hellenistic cities came to believe Mithras would lead them, too, to eternal life. His followers built special chapels decorated with symbols whose meaning is still disputed. The emphasis on secret religious rituals, or mysteries, about which followers were sworn to silence, lends the worship of Isis and Mithras in this period the name mystery religions (Figure 6.27).
Another religion practiced in Hellenistic cities was Judaism, whose followers included migrant Jewish people and new converts. By the second century BCE, the Hebrew Bible had been translated into Greek under the Ptolemies, since ancient Judea was within their control for much of the Hellenistic period and many Jewish people had immigrated to Alexandria.
Some Greeks preferred new philosophies to religion as a means to achieve happiness. Hellenistic philosophy emphasized the search for internal peace and contentment. Stoicism, for example, maintained that the universe was governed by divine reason (Logos), which determined the fate of all people. Happiness therefore resulted from learning how to cope with life and accepting fate while avoiding extreme negative emotions such as fear and anger. Epicureans, however, maintained that the key to happiness was to avoid physical and mental pain by pursuing pleasure. The founders of these two philosophical schools, Zeno and Epicurus respectively, both lived in the early third century BCE and taught in Athens, which continued to be a center of learning in this period. The Stoics were so named because Zeno instructed his students in the stoa poikile, or “painted porch” in the Athenian agora. The mystery religions and philosophies of the Hellenistic era continued to flourish as these cities became incorporated into the expanding Roman Empire.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://human.libretexts.org/Courses/Coastline_College/Hum_C100%3A_Introduction_to_the_Humanities_(Volmer)/03%3A_The_Ancient_Greek_World_and_Roman_Empire/3.03%3A_The_Hellenistic_Era",
"book_url": "https://commons.libretexts.org/book/human-206684",
"title": "3.3: The Hellenistic Era",
"author": "OpenStax"
} |
3.5: The Roman Republic
Learning Objectives
By the end of this section, you will be able to:
- Identify the key institutions of the Roman Republic
- Discuss class differences and conflict in the Roman Republic
- Analyze the challenges that strained democratic institutions in the Roman Republic, including the Punic Wars
Many elements of early Roman culture and society resulted from Greek influence on the Italian peninsula. Later, when the Roman state expanded and built an empire, its people transmitted their culture—heavily indebted to Ancient Greece—to the Celtic and Germanic tribes of central and western Europe. They also transmitted their language, which is why French, Portuguese, Italian, and Spanish are known as “Romance” languages: They are descended from the Latin language spoken by the Romans. The classical civilizations of Ancient Greece and Rome were therefore the foundation for what became known as Western civilization.
The Foundation and Function of the Roman Republic
During the Archaic period, Greeks established colonies on Sicily and in southern Italy that went on to influence the culture of Italy. By around 500 BCE, the inhabitants of central Italy, who spoke Latin, had adopted much of Greek culture as their own, including the idea that citizens should have a voice in the governance of the state. For example, the people of the small city-state of Rome referred to their state as res publica, meaning “public thing” (to distinguish it from the res privata, or “private thing,” that had characterized oligarchical and monarchical rule under the Etruscans). Res publica—from which the word “republic” derives—signified that government happens in the open, for everyone to see. Early Romans also adopted Greek gods and myths as well as other elements of Greek culture.
The Romans passed down many traditions about the early history of their republic, recorded by historians such as Livy in the first century BCE. These stories often reflected the values that the Romans revered. According to Roman tradition, the city was founded in 753 BCE by the twin brothers Romulus and Remus, sons of Mars, the god of war (Figure 6.28). It was said that Romulus killed his brother when Remus mocked his construction of a wall around the new city and jumped over it. This story brought into focus for Romans their respect for boundaries and private property.
Romulus assembled a group of criminals and debtors to inhabit his city, and, to secure wives for them, he invited the neighboring Sabines to attend a festival with their unmarried daughters and sisters. The Romans seized the women, and when the Sabines returned with an army to recover them, the women, now Roman wives, said they had been treated with respect and wished to remain. The Sabines and the Romans then joined together in a single city-state. This story showed that a person did not have to be born a Roman to receive the rights of citizenship. It also reflected women’s social status in Rome, which was higher than their status in other ancient cultures. They couldn’t vote or hold public office, but they could own property and freely participate in public events such as banquets.
These stories also include details of Roman ideas about government. For example, they note that in its early centuries, Rome was a monarchy, with the first king being Romulus. After the passing of the fourth king, the throne was assumed by Lucius Tarquinius Priscus, an Etruscan. The next two kings were also Etruscan. The last of these, Tarquin the Proud, was the final king of Rome, whose son raped a young Roman woman named Lucretia. This act triggered a rebellion against the monarchy, which ultimately ousted the Etruscan king. In 509 BCE, the victorious Romans declared their government to be a republic and vowed never to be subject to tyranny again. This story emphasized the Roman respect for the rule of law. No one, no matter how powerful, was above it.
In Their Own Words
Lucretia’s Sacrifice for Rome
Like many stories about Rome’s early history, the story of the rape of Lucretia emphasizes Roman values, in this case, virtue. Revered as a model Roman woman, Lucretia embodied sexual purity and loyalty to her husband at the expense of her safety, her autonomy, and even her life. According to the story, Sextus Tarquinius, the son of the king, is staying at the home of Collatinus and Lucretia. During the night, Tarquinius enters Lucretia’s chambers with his sword in hand. He threatens her with successive acts of violence and disgrace before raping her. While recounting the events, Lucretia asks her family to pledge that they will avenge her, and then she dies by suicide. Scholars debate the reason for her suicide, with some indicating it was related to shame, others viewing it as Lucretia asserting control, and still others seeing it as an allegory for the death of the Roman monarchy.
The historian Livy’s account of Lucretia’s suicide, written in the first century BCE, shows the story’s enduring value in Roman culture. It begins as Lucretia’s husband and father run to her aid after hearing she has been raped by Sextus Tarquinius, son of the king.
Lucretia they found sitting sadly in her chamber. The entrance of her friends brought the tears to her eyes, and to her husband’s question, “Is all well?” she replied, “Far from it; for what can be well with a woman when she has lost her honor? The print of a strange man, Collatinus [her husband], is in your bed. Yet my body only has been violated; my heart is guiltless, as death shall be my witness. But pledge your right hands and your words that the adulterer shall not go unpunished. Sextus Tarquinius is he that last night returned hostility for hospitality, and armed with force brought ruin on me, and on himself no less—if you are men—when he worked his pleasure with me.” They give their pledges, every man in turn. They seek to comfort her, sick at heart as she is, by diverting the blame from her who was forced to the doer of the wrong. They tell her it is the mind that sins, not the body; and that where purpose has been wanting there is no guilt. “It is for you to determine,” she answers, “what is due to him; for my own part, though I acquit myself of the sin, I do not absolve myself from punishment; not in time to come shall ever unchaste woman live through the example of Lucretia.” Taking a knife that she had concealed beneath her dress, she plunged it into her heart, and sinking forward upon the wound, died as she fell. The wail for the dead was raised by her husband and her father.
— Livy, Ab Urbe Condita (The History of Rome)
- Why does Lucretia choose death?
- What does her choice say about Roman values concerning the conduct of women, chastity, and reputation?
Archaeological evidence seems to indicate at least some historical basis for these accounts of Rome's founding. In 1988, a wall was discovered around the Palatine Hill where Romulus reportedly built his fortification. Archaeologists also found Greek pottery from this period at the same location, suggesting trade took place. The city of Rome is located along the Tiber River at the point where the river ceases to be navigable by seagoing vessels. Greek merchants would have sailed up the Tiber from the Mediterranean Sea and traded with the native peoples there. Greek merchants and colonists arriving in Italy at this time influenced the Iron Age culture in northern and central Italy, which then evolved through Greek influence into the Latin and Etruscan cultures. Around 600 BCE, the Etruscans colonized Rome, which became an Etruscan city-state. The story of the Tarquin dynasty reflects this Etruscan period of Roman history. Modern historians maintain that the story of the expulsion of the Tarquins is loosely based on historical events, which saw the Roman city-state free itself from Etruscan domination and establish an independent republic around 500 BCE.
In the early republic, Rome was ruled by elected magistrates instead of kings, and by a Council of Elders or Senate. Roman society was divided into two classes or orders, patricians and plebeians. The patricians were the aristocratic elite, who alone could hold public office and sit in the Senate. From the beginning of the republic through the third century BCE, the plebeians, or common people, worked to achieve equality before the law in Roman society. The political conflict between these two classes is known as the Struggle of the Orders.
Rome was located on a coastal plain known as Latium. East of it were the foothills of the Apennine Mountains, inhabited by warlike tribes that made periodic raids. When Rome was under threat, the plebeians could gain leverage with the patricians by refusing to fight until their demands were met. In 450 BCE, the plebeians went on strike for the first time. They feared that patrician judges were interpreting Rome’s unwritten laws to take advantage of ignorant plebeians, so they demanded the laws be written down. The patricians agreed. In the Twelve Tables, published in the Forum, Rome’s laws were written for the first time and were then accessible to all citizens.
Link to Learning
Read excerpts from Rome’s Twelve Tables of law from Fordham University’s Ancient History Sourcebook. What do these laws tell us about Roman society in 450 BCE, when they were first written down?
After 450 BCE, the plebeians met in a Plebeian Assembly that annually elected ten officials known as tribunes. These tribunes attended meetings of Rome’s assemblies, the Senate, and the law courts. If they saw any public body or official taking action that would bring harm to plebeians, they could say “Veto” or “I forbid” and stop that action. This power to veto gave plebeians a way to protect themselves and put a check on the power of patrician officials.
In the fourth and third centuries BCE, plebeians won more concessions by again seceding from the patrician state. After 367 BCE, one of the two consuls, the highest officials in the republic, had to be a plebeian. After 287 BCE, the Plebeian Assembly could pass laws for the republic that were introduced to it by the tribunes, and these laws applied to all Roman citizens. By the third century BCE, the Struggle of the Orders had effectively concluded, since it was now possible for plebeians to pass laws, serve as elected officials, and sit in the Senate, equals of the patricians under Roman law. The Struggle of the Orders did not bring equality to everyone in Rome, however. Rather, it gave well-off plebeians access to positions of power.
Romans were a very conservative people who greatly venerated the mos maiorum or “way of the ancestors.” Their political system was a combination of written laws and political traditions and customs that had evolved since the birth of the Republic. By the third century BCE, this system was being administered by a combination of public assemblies, elected officials, and the Senate.
The Roman Republic had three main public assemblies—the Plebeian Assembly, the Tribal Assembly, and the Centuriate Assembly—that elected various officials every year. Only plebeians could attend the Plebeian Assembly, organized into thirty-five regional tribes with a single vote each. It was this assembly that annually elected the ten tribunes, who possessed veto power and could present laws to the assembly for approval. The Tribal Assembly was likewise divided into thirty-five tribes based on place of residence, with each tribe casting one vote, but both plebeians and patricians could attend. Every year, the Tribal Assembly elected the Quaestors, treasurers in charge of public money.
Only the Centuriate Assembly could declare war, though the Senate remained in control of foreign policy. Both plebeians and patricians could attend this assembly, which was organized into blocs. The number of votes assigned to each bloc was based on the number of centuries—meaning a group of one hundred men in a military unit—that bloc could afford to equip with weapons and armor. Wealthier citizens had more votes because they could pay more to support the military. This assembly also elected military commanders, judges, and the censor, whose main task was to conduct the census to assess the wealth of Rome’s citizens.
All elected officials joined the Roman Senate as members for life after their term in office. By far the most powerful institution in the Roman state, the Senate decided how public money was to be spent and advised elected officials on their course of action. Elected officials rarely ignored the Senate’s advice since many of them would be senators themselves after leaving office.
The patron-client system was another important element in the Roman political system. A patron was usually a wealthy citizen who provided legal and financial assistance to his clients, who were normally less affluent citizens. In return, clients in the Roman assemblies voted as directed by their patrons. Patrons could inherit clients, and those with many wielded great influence in Rome.
The Expansion of the Roman Republic
The early Romans did not plan on building an immense empire. They were surrounded by hostile city-states and tribes, and in the process of defeating them they made new enemies even as they expanded their network of allies. Thus they were constantly sending armies farther afield to crush these threats until Rome emerged in the second century BCE as the most powerful state in all the lands bordering the Mediterranean Sea.
The Roman Senate developed certain policies in conducting wars that proved quite successful (Figure 6.29). One was to divide and conquer. The Romans always tried to defeat one enemy at a time and avoid waging war against a coalition. Thus they often attempted to turn their enemies against each other. Another tactic was to negotiate from strength. Even after suffering enormous defeats in battle, Rome would continue a war until it won a major engagement and reached a position from which to negotiate for peace with momentum on its side. Yet another successful strategy was to establish colonies in recently conquered lands to serve as the first line of defense if a region revolted against Rome. Well-constructed roads were also built to link Rome to these colonies, so armies could arrive quickly in a region that rebelled. Thanks to these networks across Italy, the language and culture of Rome eventually spread throughout its empire as well. Romans also transformed former enemies into loyal allies who could enjoy self-government as long as they honored Rome’s other alliances and provided troops in times of war. Some even received Roman citizenship.
The Roman Conquest of the Mediterranean
After conquering most of the Italian peninsula, Rome came to challenge the other major power in the region, Carthage. A series of wars ensued, called the Punic Wars, in which Rome and Carthage vied for dominance. During the First Punic War (264–241 BCE), Rome and Carthage battled for control of the island of Sicily. Although Carthage had the largest fleet at the time, the Romans won by dropping a hooked plank on the deck of an opposing ship and using it as a causeway to cross over, transforming a sea battle in which they were at a disadvantage into a land battle where they could dominate. After the destruction of its fleet, Carthage sued for peace, and the war ended with Rome annexing Sicily.
Carthage desired revenge. In the Second Punic War (218–201 BCE), the Carthaginian general Hannibal marched his army, along with dozens of war elephants, from Hispania (modern-day Portugal and Spain), across southern Gaul, and then over the Alps into Italy. Hannibal hoped Rome’s allies would abandon it and leave the city at his mercy. Most of Rome’s Italian allies remained loyal, however, even after Hannibal repeatedly defeated Roman armies, and after his decisive victory at the Battle of Cannae. As Hannibal’s army was rampaging through Italy, Rome sent an army across the Mediterranean to Africa to attack Carthage, which summoned Hannibal back to defend his homeland (Figure 6.30).
At the Battle of Zama in 202 BCE, the Roman army defeated Hannibal, and the Roman commander Scipio earned the nickname “Africanus” (Figure 6.31). Carthage sued for peace and was stripped of all its overseas territory. Rome thus acquired Carthage’s lands in Hispania.
During the war, King Philip V of Macedon, concerned by the growth of Rome just across the Adriatic Sea from his own kingdom, made an alliance with Carthage. After Rome’s victory against Carthage, Rome declared war against this new enemy. Philip’s Macedonian troops won numerous victories over Roman armies, but in 197 BCE at the Battle of Cynoscephalae in northern Greece, Philip suffered a defeat and lacked the resources to continue. Consequently, he agreed to become an ally of Rome. Rome also liberated all regions in Greece formerly under Macedonian control.
Philip’s defeat emboldened the king of the Seleucid Empire, Antiochus III, to advance his army into Greece, hoping to obtain the territory Philip had vacated. Rome feared that Antiochus’s occupation of Greece posed a threat to Italy, just as Philip had. In 190 BCE, Roman armies smashed the forces of Antiochus III at the Battle of Magnesia in western Asia Minor. Antiochus then agreed to withdraw from Asia Minor.
Rome discovered in the second century BCE that there was no end to the threats from hostile powers. Perseus, the son of Philip V, renounced the alliance with Rome. When he made alliances with Balkan tribes that threatened to invade Italy, Roman armies invaded Macedon and defeated his army at the Battle of Pydna in 168 BCE. Rome then dissolved the monarchy in Macedon, which soon afterward became a Roman province, and Perseus died of starvation as a prisoner in Rome. When the Achaean League in the Peloponnese in Greece challenged Roman control of Greece and Macedon, Rome declared war and sacked Corinth, the League’s largest city, in 146 BCE. In that same year, Roman armies also destroyed the city of Carthage in the Third Punic War, fearing the city’s revival as an economic and military power. After 146 BCE, no power remained in the Mediterranean that could challenge Rome (Figure 6.32).
A Republic of Troubles
Rome’s constant wars and conquests in the third and second centuries BCE created a host of social, economic, and political problems for the republic. The Roman people grew dissatisfied with the leadership of the Senate and the aristocratic elite, and they increasingly looked to strong military leaders to address the problems.
A number of factors contributed to these problems and transformations. From the foundation of the republic, most Roman citizens had owned and operated small family farms. Indeed, to serve as Roman soldiers, men had to own property. However, the Punic Wars had strained this traditional system. Roman soldiers were often away from home for long periods of time, leaving the women and children to maintain their holdings. When they ultimately did return, many found their property in another’s hands. Others decided to sell their neglected farms and move their families to the expanding city of Rome, where they joined the growing ranks of the landless working class known as the proletariat. By the first century BCE, the population of the city of Rome may have exceeded one million.
The growth of the proletariat disrupted the Roman political system and invited large-scale corruption. The traditional patron-client system collapsed, since landless Romans didn’t need the assistance of patrons to settle property disputes. Politicians therefore had to win the support of the urban masses with free food and entertainment, such as gladiatorial combats, and promises to create jobs through public works projects. Some even organized the poor into violent gangs to frighten their political rivals. These conditions resulted in widespread dissatisfaction with the government of the republic.
To meet the growing demand for grain, wine, and olive oil to feed the urban population, large landowners bought land from poor Roman farmers and leased public land from the Roman state to create large plantations. These were very profitable because landowners could cheaply purchase enslaved people, who were plentiful. For example, after the defeat of Perseus of Macedon in 168 BCE, the Romans enslaved 150,000 people from Epirus as punishment since this kingdom had been allied with Perseus in the war. Pirates from Cilicia (in southeast Turkey) and from the Greek island of Crete also kidnapped people throughout the eastern Mediterranean and sold them to Roman traders. The island of Delos in the Aegean Sea became a massive human market in the second century BCE, where reportedly ten thousand people were bought and sold every day.
Terrible working conditions resulted in massive revolts by the enslaved, beginning in the second half of the second century BCE. The most famous was led by Spartacus, an enslaved man and gladiator from Thrace (modern Bulgaria). In 73 BCE, Spartacus and other enslaved gladiators rose against their owners and were quickly joined by hundreds of thousands of others (Figure 6.33). Spartacus’s forces defeated two Roman armies before being crushed in 71 BCE. The Romans crucified thousands of the rebels along Italy’s major roads to send a warning to enslaved people across Italy.
In addition to the proletariat and enslaved people, new classes of wealthy Romans were also unhappy with the leadership of the traditional elite. The most profitable enterprise for these new Roman entrepreneurs was acting as bankers and public contractors, or publicans. The republic relied on publicans to construct public works such as aqueducts and theaters, as well as to operate government-owned mines and collect taxes. Roman governors often looked the other way when publicans squeezed additional tax revenues from the populations of the provinces.
This tumultuous and complicated environment led to the rise of two of the Late Republic’s most intriguing political figures, Tiberius and Gaius Gracchus. The Gracchi, as they are collectively known, were plebeian brothers whose families had been members of the elite for generations (Scipio Africanus was their grandfather). Tiberius, the elder brother, was concerned to see the large plantations being worked by enslaved foreigners rather than Roman farmers. He feared Rome’s military was in danger since Rome relied on its land-owning farmers to equip themselves and serve in the army. In 133 BCE, as a tribune, he proposed a law to distribute public land to landless Romans. This measure struck a blow at the senatorial class, many of whom had accumulated huge swaths of land formerly owned by independent farmers who had gone to war. The assembly voted to approve the proposal, but many senators were horrified not only because they stood to lose land but also because, to win the vote, Tiberius had violated the traditions of the Republic. The Republic was ruled by the upper classes, and in courting popular opinion, the brothers had challenged elite control over high political institutions. Convinced that Tiberius had amassed too much popular support and was violating the traditions of Rome, the Senate declared a state of emergency, and a group of senators beat Tiberius to death.
Ten years later Tiberius’s brother Gaius, an astute politician as well, was also elected tribune. He won over poor Roman farmers with his proposal to establish new colonies to give them land. He also provided free grain for the poor and called for new public works projects to create jobs for the working class and lucrative contracts for wealthy publicans. His measures passed the Plebian Assembly. Gaius was also elected tribune for two years straight, in violation of Roman political tradition. The final straw for the Senate was Gaius’s proposal to establish a new court system that could try senators for corruption. In 121 BCE, the senators took action to subdue Gaius. He attempted to use force himself to resist the Senate, but in the end his supporters were massacred and he died, either by his own hand or at the hands of senators who had opposed his rise to power.
The Rise of Client Armies
After the assassination of Gaius Gracchus, Rome’s political class was divided into two warring factions. The populares were politicians who, like Gaius, sought the political support of discontented groups in Roman society, whereas the optimates were the champions of the old order and the traditional leadership of the elite in the Roman Senate. In 112 BCE, Rome went to war against Jugurtha, the king of Numidia (modern Algeria/Tunisia) in North Africa, after he slaughtered Romans there who had supported his brother as king. Roman armies suffered defeat after defeat, and due to the decline in numbers of Roman farmers, Rome was having difficulty filling the ranks.
Gaius Marius was a plebeian and commoner who rose up the ranks of the Roman army and emerged as the leader of the populares. In 107 BCE, he ran for consul by denouncing the traditional Roman elites as weak and ineffective generals and promising to quickly end the war with Jugurtha. Such rhetoric was wildly popular with the common people who supported him. Once in power, Marius reformed the entrance requirements for the army to open it to the proletariat, extending its members the opportunity for the spoils of war and even land in exchange for their service. These reforms led to the emergence of professional client armies, or armies composed of men more loyal to their commander than to the state.
By 105 BCE, Jugurtha was captured and then paraded through the Roman streets in chains. That same year, Rome faced new threats from the north in the form of Germanic tribes crossing the Rhine River and seeking to invade Italy. The Romans elected Marius consul for five consecutive terms (105–101 BCE) to lead his professional army against these enemies. After his victories, however, his enemies in the Senate wanted to embarrass him politically, so they prevented his proposal to give veterans land from becoming law. Marius was intimidated by these events and retired from politics.
In 90 BCE, Rome was again in turmoil when its Italian allies revolted after years of providing troops without having any voice in governing. During this “Social” War (90–88 BCE), the Romans under the leadership of Sulla, an optimate, defeated the rebels. Shortly thereafter, in 88 BCE, Rome’s provinces in Greece and Asia Minor also revolted, after years of heavy taxes and corrupt governors. The rebels massacred thousands of Roman citizens and rallied around Mithridates, the Hellenistic king of Pontus in north Asia Minor. Optimates in the Senate appointed Sulla to lead an army against Mithridates. Like Marius, Sulla had promised his recruits land in return for their service. Populares in the Plebeian Assembly, however, assigned command of the army to Marius, who had come out of retirement.
Sulla, then outside Rome with his client army, convinced his soldiers to choose personal loyalty to their general and his promise of land over their allegiance to Rome, and they marched on the city. Sulla’s army hunted down and murdered many populares, and after establishing his own faction in charge of Rome, Sulla marched against Mithridates (Figure 6.34).
In 87 BCE, Marius, who had been in hiding, rallied his old veterans and marched on Rome, marking the second time in two years that Roman soldiers had chosen personal loyalty to their general over obedience to Rome’s laws. Marius’s men now hunted down and murdered optimates. After winning his seventh term as consul in 87 BCE, Marius died in office from natural causes. Having forced Mithridates out of Greece and restored Roman rule there, Sulla led his army back to Rome in 83 BCE to overthrow the populares who were still in charge. While in Rome, he compelled the Senate to appoint him dictator. The office of dictator was an ancient republican office used only during emergencies, since it granted absolute authority for a limited time to handle the crisis. When Sulla assumed the office, it hadn’t been used since the Second Punic War.
During Sulla’s time as dictator, he ordered the execution of his political enemies and reformed the laws. In 79 BCE, he relinquished the office and retired from public life, convinced he had saved the republic and preserved the power of the traditional elite in the Senate. Instead, however, within half a century the Roman Republic was dead.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://human.libretexts.org/Courses/Coastline_College/Hum_C100%3A_Introduction_to_the_Humanities_(Volmer)/03%3A_The_Ancient_Greek_World_and_Roman_Empire/3.05%3A_The_Roman_Republic",
"book_url": "https://commons.libretexts.org/book/human-206684",
"title": "3.5: The Roman Republic",
"author": "OpenStax"
} |
3.6: The Age of Augustus
Learning Objectives
By the end of this section, you will be able to:
- Identify the key events of the First and Second Triumvirate
- Analyze the personal charisma and leadership styles of Julius Caesar and Augustus
- Explain the fall of the Roman Republic and the rise of the first emperor
The social troubles that rocked Rome following the Punic Wars led to populists like the Gracchi and military leaders like Sulla, who marched on Rome in his attempt to restore order. Such events made it clear to many that Rome’s republican institutions were no longer able to adapt to the transformed landscape produced by decades of territorial expansion. These problems also presaged the political transformations Rome was to suffer through in the following decades. Between 60 BCE and 31 BCE, a string of powerful military leaders took the stage and bent the Republic to their will. In their struggle for power, Rome descended further into civil war and disorder. By 27 BCE, only one leader remained. Under his powerful hand, the Republic became a mere façade for the emergent Roman Empire.
The First Triumvirate
Sulla was unable to crush the populares completely since some discontented groups still opposed the Senate leadership. After his retirement, new military and political leaders sought power with the support of these groups. Three men in particular eventually gained enormous power. One was Pompey Magnus, who became a popular general, and thousands of landless Romans joined his client army on the promise of land. In 67 BCE, Roman armies under Pompey’s command suppressed pirates in the eastern Mediterranean who had threatened Rome’s imported grain supplies. Pompey next conclusively defeated Mithridates of Pontus, who had again gone on the attack against Rome. By 63 BCE, Pompey had subdued Asia Minor, annexed Syria, destroyed the Seleucid kingdom, and occupied Jerusalem.
Another politician and military commander of this era was Crassus. He had served under Sulla, achieved popularity in Rome by fighting against Spartacus, and used the support of disaffected wealthy Romans such as publicans to amass a huge fortune. The third influential figure was Julius Caesar, who owed his early popularity to the fact that Marius was his uncle. When Sulla took control, Caesar lost much of his influence, but by 69 BCE he was making a political comeback and winning the support of populares in Rome.
The optimates in the Senate distrusted all these men and cooperated to block their influence in Roman politics. In response, in 60 BCE the three decided to join forces to advance their interests through a political alliance known to history as the First Triumvirate (“rule by three men”). Together its members had the wealth and influence to run the Roman Republic, but they were all very ambitious and each greatly distrusted the others. After serving as consul in 59 BCE, Julius Caesar took command of the Roman army in Gaul (modern France). Over the next ten years, his armies conquered all Gaul and launched attacks against German tribes across the Rhine, and on the island of Britain across the English Channel. The Roman people were awed by Caesar’s military success, and Pompey and Crassus grew jealous of his popularity. In 54 BCE, Crassus invaded the Parthian Kingdom in central Asia, hoping for similar military and political triumphs. The invasion was a disaster, however, and Crassus was captured by the Parthians and executed.
The Roman Empire had now grown large, thanks to Pompey’s and Caesar’s conquests (Figure 6.35). After Crassus’s death, Pompey decided to break with Caesar and support his old enemies the optimates. In 49 BCE, the optimates and Pompey controlled the Senate and demanded that Caesar disband his army in Gaul and return to Rome to stand trial on various charges. Instead, Caesar convinced his client army to march on Rome. In January of that year he famously led his troops across the Rubicon River, the traditional boundary between Italy and Gaul. Since Caesar knew this move would trigger war, as it was illegal to bring a private army into Rome proper, the phrase “crossing the Rubicon” continues to mean “passing the point of no return.” In 48 BCE, Caesar defeated Pompey at the Battle of Pharsalus in northern Greece. Shortly after this, Pompey fled to Egypt, where he was murdered by the Egyptian pharaoh Ptolemy XIII, who hoped to win Caesar’s favor.
To prosecute the war against Pompey, Caesar had himself appointed dictator in 48 BCE. Despite the tradition that dictatorship was to be temporary, Caesar’s position was indefinite. In 46 BCE, he was appointed dictator for a term of ten years, and in 44 BCE his dictatorship was made permanent, or for life. These appointments and other efforts to accumulate power unnerved many Romans, who had a deep and abiding distrust of autocratic rulers that stretched all the way back to the period of Etruscan rule. Caesar had hoped to win over his former enemies by inviting them to serve again in the Senate and appointing them to positions in his government. However, these former optimates viewed him as a tyrant, and in 44 BCE two of them, Brutus and Cassius, led a conspiracy that resulted in his assassination.
Link to Learning
In Shakespeare’s play Julius Caesar, written in about 1599, Marc Antony gives one of the most famous speeches in English literature, based in part on the work of ancient historians like Plutarch. In this short clip of that speech from the 1970 film adaptation of the play, Charlton Heston plays the part of Marc Antony.
From Republic to Principate
Octavian was only eighteen when Caesar was killed, but as Caesar’s adopted son and heir he enjoyed the loyalty and political support of Caesar’s military veterans. In 43 BCE, Octavian joined forces with two seasoned generals and politicians, Marc Antony and Lepidus, who both had been loyal supporters of Caesar. Marc Antony had been particularly close to him, as evidenced by the fact that Caesar left his legions under Antony’s command in his will. Together these three shared the power of dictator in Rome in a political arrangement known as the Second Triumvirate. Unlike the First Triumvirate, which was effectively a conspiracy, the Second Triumvirate was formally recognized by the Senate. In 42 BCE, the army of the Second Triumvirate, under the command of Antony, defeated the forces of Julius Caesar’s assassins Brutus and Cassius at the Battle of Philippi in northern Greece. The Second Triumvirate also ordered the execution of thousands of their political opponents.
After crushing the remnants of the optimates, the three men divided the Roman Empire between them: Octavian took Italy, Hispania, and Gaul; Lepidus, Africa; and Antony, Macedon, Greece, and Asia Minor. Soon they quarreled, however, and civil war erupted once again. Having greater support from Caesar’s troops than his two opponents, in 36 BCE Octavian forced Lepidus into retirement. Antony countered by forming an alliance with Cleopatra VII, the Macedonian queen of Egypt, whom he married. Cleopatra was at that time co-ruler with Ptolemy XV, her son by Julius Caesar. With her financial support, Antony raised an army and fleet. In 31 BCE, in the naval Battle of Actium off the coast of northern Greece, Octavian defeated the forces of Antony and Cleopatra. When he afterwards invaded Egypt, the pair died by suicide (Figure 6.36), and Octavian installed himself as the new Egyptian pharaoh after executing Ptolemy XV. Octavian used the wealth of his kingdom in Egypt to finance his restructuring of the Roman state.
One of Octavian’s primary tasks after 31 BCE was to consolidate his position in order to preserve the peace and stability he had created. To avoid the fate of his adopted father, he successfully maintained a façade that the Roman Republic was alive and well, assuming titles and powers traditionally associated with it. After stacking the Senate with his supporters, in 27 BCE Octavian officially stepped down as dictator and “restored” the Republic.
The Senate immediately appointed him proconsul or governor of all Roman frontier provinces, which made him effectively the commander of the entire Roman army. The Senate also recognized him as the Princeps Senatus, or “leader of the Senate,” meaning the senator who enjoyed the most prestige and authority due to his service to the Republic. (The name of this political order, the principate, derives from this title.) Finally, the Senate voted to honor Octavian with the title of Augustus or “revered one,” used to describe gods and great heroes of the past. As these honors and titles suggest, Octavian, traditionally referred to as Augustus after 27 BCE, had assumed enormous power. Despite his claim that he had restored the Republic, he had in fact inaugurated the Empire, with himself as emperor possessing almost godlike authority (Figure 6.37).
After 27 BCE, Augustus held elected office as one of the two consuls, so he could sit in the Senate, oversee the law courts, and introduce legislation to the Centuriate Assembly, but the senators disliked this arrangement because it closed the opportunity for one of them to hold this prestigious office instead. In 23 BCE, therefore, the Senate gave Augustus several powers of a tribune. He could now veto any action taken by government officials, the Senate, and the assemblies, and he could introduce laws to the Plebeian Assembly. He could wield political and military power based on the traditional constitution of the Republic.
As emperor, Augustus successfully tackled problems that had plagued Rome for at least a century. He reduced the standing army from 600,000 to 200,000 and provided land for thousands of discharged veterans in recently conquered areas such as Gaul and Hispania. He also created new taxes specifically to fund land and cash bonuses for future veterans. To encourage native peoples in the provinces to adopt Roman culture, he granted them citizenship after twenty-five years of service in the army. Indigenous cities built in the Roman style and adopting its political system were designated municipia, which gave all elected officials Roman citizenship. Through these “Romanization” policies, Augustus advanced Roman culture across the empire.
Augustus also finally brought order and prosperity to the city of Rome. He began a vast building program that provided jobs for poor Romans in the city and reportedly boasted that he had transformed Rome from a city of brick to a city of marble. To win over the masses, he also provided free grain (courtesy of his control of fertile Egypt) and free entertainment (gladiator combats and chariot races), making Rome famous for its bounty of “bread and circuses.” He also established a permanent police force in the city, the Praetorian Guard, which he recruited from the Roman army. He even created a fire department.
Augustus provided wealthy Romans outside the ranks of the Senate with new opportunities for advancement via key positions he reserved for them, such as prefect (commander/governor) of the Praetorian Guard and prefect of Egypt. These officials could join the Senate and become members of the senatorial elite. Augustus thus created an effective new bureaucracy to govern the Roman Empire. Emperors who followed him continued these practices.
Augustus was keenly aware that the peace and prosperity he had created was largely built upon his image and power, and he feared what might happen when he died. As a result, the last few decades of his life were spent arranging for a political successor. This was a complicated matter since there was neither an official position of emperor nor a republican tradition of hereditary rule. Augustus had no son of his own, and his attempts to groom others to take control were repeatedly frustrated when his proposed successors died before him. Before his own death in 14 CE, Augustus arranged for his stepson Tiberius to receive from the Senate the power of a proconsul and a tribune. While not his first choice, Tiberius was an accomplished military leader with senatorial support.
Despite the smooth transition to Tiberius in 14 CE, problems with imperial inheritance remained. There were always risks that a hereditary ruler might prove incompetent. Tiberius himself became dangerously paranoid late in his reign. And he was succeeded by his grandnephew and adopted son Gaius, known as Caligula, who after a severe illness became insane. The prefect of the Praetorian Guard assassinated Caligula in 41 CE, and the guard replaced him with his uncle Claudius (41–54 CE). The Roman Senate agreed to this step only out of fear of the army. Claudius was an effective emperor, however, and under his reign the province of Britain (modern England and Wales) was added to the empire.
The government of Claudius’s successor, his grandnephew Nero (54–68 CE), was excellent as long as Nero’s mother Agrippina was the power behind the throne. After ordering her murder, however, Nero proved a vicious despot who used the Praetorian Guard to intimidate and execute his critics in the Senate. By the end of his reign, Roman armies in Gaul and Hispania were mutinying. The Senate declared him an enemy of state, and he died by suicide. During the year after his death, 68–69 CE, four different generals assumed power, thus earning it the name “Year of the Four Emperors.”
Of the four, Vespasian (69–79 CE) survived the civil war and adopted the name Caesar and the title Augustus, even though he was not related to the family of Augustus or their descendants (the Julio-Claudian dynasty). On Nero’s death, he had been in command of Roman armies suppressing the revolt of Judea (Roman armies eventually crushed this revolt and sacked Jerusalem in 70 CE). In his administration, Vespasian followed the precedents established by Augustus. For example, he ordered the construction of the Colosseum as a venue for the gladiator shows he provided as entertainment for the Roman masses, and he arranged for his two sons, Titus (79–81 CE) and Domitian (81–96 CE), to succeed him as emperor. Domitian, like Nero, was an insecure ruler and highly suspicious of the Senate; he employed the Praetorian Guard to arrest and execute his critics in that body. In 96 CE, his wife Domitia worked with members of the Senate to arrange for his assassination. Thus the flaws of the principate continued to haunt the Roman state long after its founder was gone.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://human.libretexts.org/Courses/Coastline_College/Hum_C100%3A_Introduction_to_the_Humanities_(Volmer)/03%3A_The_Ancient_Greek_World_and_Roman_Empire/3.06%3A_The_Age_of_Augustus",
"book_url": "https://commons.libretexts.org/book/human-206684",
"title": "3.6: The Age of Augustus",
"author": "OpenStax"
} |
2.1: The Nature of Management
What is expected of a manager?
If organizations are to be successful in meeting these challenges, management must lead the way. With effective management, contemporary companies can accomplish a great deal toward becoming more competitive in the global environment. On the other hand, ineffective management dooms the organization to mediocrity and sometimes outright failure. Because of this, we turn now to a look at the nature of management. However, we want to point out that even though our focus is on managers, what we discuss is also relevant to the actions of nonmanagers. On the basis of this examination, we should be ready to begin our analysis of what managers can learn from the behavioral sciences to improve their effectiveness in a competitive environment.
What is Management?
Many years ago, Mary Parker Follett defined management as “the art of getting things done through people.” A manager coordinates and oversees the work of others to accomplish ends he could not attain alone. Today this definition has been broadened. Management is generally defined as the process of planning, organizing, directing, and controlling the activities of employees in combination with other resources to accomplish organizational objectives. In a broad sense, then, the task of management is to facilitate the organization’s effectiveness and long-term goal attainment by coordinating and efficiently utilizing available resources. Based on this definition, it is clear that the topics of effectively managing individuals, groups, or organizational systems is relevant to anyone who must work with others to accomplish organizational objectives.
Management exists in virtually all goal-seeking organizations, whether they are public or private, large or small, profit-making or not-for-profit, socialist or capitalist. For many, the mark of an excellent company or organization is the quality of its managers.
Managerial Responsibilities
An important question often raised about managers is: What responsibilities do managers have in organizations? According to our definition, managers are involved in planning, organizing, directing, and controlling. Managers have described responsibilities that can be aggregated into nine major types of activities. These include:
- Long-range planning. Managers occupying executive positions are frequently involved in strategic planning and development.
- Controlling. Managers evaluate and take corrective action concerning the allocation and use of human, financial, and material resources.
- Environmental scanning. Managers must continually watch for changes in the business environment and monitor business indicators such as returns on equity or investment, economic indicators, business cycles, and so forth.
- Supervision. Managers continually oversee the work of their subordinates.
- Coordinating. Managers often must coordinate the work of others both inside the work unit and out.
- Customer relations and marketing. Certain managers are involved in direct contact with customers and potential customers.
- Community relations. Contact must be maintained and nurtured with representatives from various constituencies outside the company, including state and federal agencies, local civic groups, and suppliers.
- Internal consulting. Some managers make use of their technical expertise to solve internal problems, acting as inside consultants for organizational change and development.
- Monitoring products and services. Managers get involved in planning, scheduling, and monitoring the design, development, production, and delivery of the organization’s products and services.
As we shall see, not every manager engages in all of these activities. Rather, different managers serve different roles and carry different responsibilities, depending upon where they are in the organizational hierarchy. We will begin by looking at several of the variations in managerial work.
Variations in Managerial Work
Although each manager may have a diverse set of responsibilities, including those mentioned above, the amount of time spent on each activity and the importance of that activity will vary considerably. The two most salient determinants of a manager’s responsibilities are (1) the manager’s level in the organizational hierarchy and (2) the type of department or function for which he is responsible. Let us briefly consider each of these.
Management by Level. We can distinguish three general levels of management: executives, middle management , and first-line management (see Exhibit 1.6 ). Executive managers are at the top of the hierarchy and are responsible for the entire organization, especially its strategic direction. Middle managers, who are at the middle of the hierarchy, are responsible for major departments and may supervise other lower-level managers. Finally, first-line managers supervise rank-and-file employees and carry out day-to-day activities within departments.
Figure \(\PageIndex{1}\) shows differences in managerial activities by hierarchical level. Senior executives will devote more of their time to conceptual issues, while first-line managers will concentrate their efforts on technical issues. For example, top managers rate high on such activities as long-range planning , monitoring business indicators, coordinating, and internal consulting. Lower-level managers, by contrast, rate high on supervising because their responsibility is to accomplish tasks through rank-and-file employees. Middle managers rate near the middle for all activities. We can distinguish three types of managerial skills: 8
- Technical skills . Managers must have the ability to use the tools, procedures, and techniques of their special areas. An accountant must have expertise in accounting principles, whereas a production manager must know operations management. These skills are the mechanics of the job.
- Human relations skills . Human relations skills involve the ability to work with people and understand employee motivation and group processes. These skills allow the manager to become involved with and lead his or her group.
- Conceptual skills . These skills represent a manager’s ability to organize and analyze information in order to improve organizational performance. They include the ability to see the organization as a whole and to understand how various parts fit together to work as an integrated unit. These skills are required to coordinate the departments and divisions successfully so that the entire organization can pull together.
As shown in Figure \(\PageIndex{2}\), different levels of these skills are required at different stages of the managerial hierarchy. That is, success in executive positions requires far more conceptual skill and less use of technical skills in most (but not all) situations, whereas first-line managers generally require more technical skills and fewer conceptual skills. Note, however, that human or people skills remain important for success at all three levels in the hierarchy.
Management by Department or Function. In addition to level in the hierarchy, managerial responsibilities also differ with respect to the type of department or function. There are differences found for quality assurance, manufacturing, marketing, accounting and finance, and human resource management departments. For instance, manufacturing department managers will concentrate their efforts on products and services, controlling, and supervising. Marketing managers, in comparison, focus less on planning, coordinating, and consulting but more on customer relations and external contact. Managers in both accounting and human resource management departments rate high on long-range planning, but will spend less time on the organization’s products and service offerings. Managers in accounting and finance are also concerned with controlling and with monitoring performance indicators, while human resource managers provide consulting expertise, coordination, and external contacts. The emphasis on and intensity of managerial activities varies considerably by the department the manager is assigned to.
At a personal level, knowing that the mix of conceptual, human, and technical skills changes over time and that different functional areas require different levels of specific management activities can serve at least two important functions. First, if you choose to become a manager, knowing that the mix of skills changes over time can help you avoid a common pitfall: young employees often want to think and act like a CEO before they have mastered being a first-line supervisor. Second, knowing the different mix of management activities by functional area can facilitate your selection of an area or areas that best match your skills and interests.
In many firms, managers are rotated through departments as they move up in the hierarchy. In this way they obtain a well-rounded perspective on the responsibilities of the various departments. In their day-to-day tasks they must emphasize the right activities for their departments and their managerial levels. Knowing what types of activity to emphasize is the core of the manager’s job. In any event, we shall return to this issue when we address the nature of individual differences in the next chapter.
The Twenty-First Century Manager
We discussed above many of the changes and challenges facing organizations in the twenty-first century. Because of changes such as these, the managers and executives of tomorrow will have to change their approaches to their jobs if they are to succeed in meeting the new challenges. In fact, their profiles may even look somewhat different than they often do today. Consider the five skills that Fast Company predicts successful future managers will need, in contrast to the senior manager of the year 2000: the ability to think of new solutions, being comfortable with chaos, an understanding of technology, high emotional intelligence, and the ability to work with people and technology together.
For the past several decades, executive profiles have typically looked like this: He started out in finance with an undergraduate degree in accounting. He methodically worked his way up through the company from the controller’s office in a division, to running that division, to the top job. His military background shows. He is used to giving orders—and to having them obeyed. As head of the philanthropic efforts, he is a big man in his community. However, the first time he traveled overseas on business was as chief executive. Computers, which became ubiquitous during his career, make him nervous. 9
Her [or his] undergraduate degree might be in French literature, but she also has a joint MBA/engineering degree. She started in research and was quickly picked out as a potential CEO. She is able to think creatively and thrives in a chaotic environment. She zigzagged from research to marketing to finance. She is comfortable with technology and people, with a high degree of emotional intelligence. She proved valuable in Brazil by turning around a failing joint venture. She speaks multiple languages and is on a first-name basis with commerce ministers in half a dozen countries. Unlike her predecessor’s predecessor, she isn’t a drill sergeant. She is first among equals in a five-person office of the chief executive.
Clearly, the future holds considerable excitement and promise for future managers and executives who are properly prepared to meet the challenges. How do we prepare them? One study suggested that the manager of the future must be able to fill at least the following four roles: 10
- Global strategist . Executives of the future must understand world markets and think internationally. They must have a capacity to identify unique business opportunities and then move quickly to exploit them.
- Master of technology . Executives and managers of the future must be able to get the most out of emerging technologies, whether these technologies are in manufacturing, communications, marketing, or other areas.
- Leadership that embraces vulnerability . The successful executive of the future will understand how to cut through red tape to get a job done, how to build bridges with key people from highly divergent backgrounds and points of view, and how to make coalitions and joint ventures work.
- Follow-from-the-front motivator . Finally, the executive of tomorrow must understand group dynamics and how to counsel, coach, and command work teams and individuals so they perform at their best. Future organizations will place greater emphasis on teams and coordinated efforts, requiring managers to understand participative management techniques.
- Great communicator . To this list of four, we would add that managers of the future must be great communicators. They must be able to communicate effectively with an increasingly diverse set of employees as well as customers, suppliers, and community and government leaders.
Whether these predictions are completely accurate is difficult to know. Suffice it to say that most futurists agree that the organizational world of the twenty-first century will likely resemble, to some extent, the portrait described here. The task for future managers, then, is to attempt to develop these requisite skills to the extent possible so they will be ready for the challenges of the next decade.
8 R. Katz, “Skills of an Effective Administrator,” Harvard Business Review, September-October 1974, pp. 34–56.
9 J. Lindzon, “Five Skills That You’ll Need to Lead the Company of the Future,” Fast Company, May 18, 2017, https://www.fastcompany.com/40420957...-of-the-future ; A. Bennett, “Going Global: The Chief Executives in the Year 2000 Are Likely to Have Had Much Foreign Experience,” Wall Street Journal, February 27, 1989, p. A–4.
10 Jacob Morgan, “5 Qualities of the Modern Manager,” Forbes, July 23, 2013, https://www.forbes.com/sites/jacobmo.../#644a2b6a3a0b .
Exhibit 1.6 (Attribution: Copyright Rice University, OpenStax, under CC BY-NC-SA 4.0 license)
Exhibit 1.7 (Attribution: Copyright Rice University, OpenStax, under CC BY-NC-SA 4.0 license)
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Prince_Georges_Community_College/BMT_1550%3A_Supervision_(Perry_2021)/02%3A_Explain_the_role_of_the_modern_supervisor_in_relation_to_upper_management_unions_and_governmental_regulations./2.01%3A_The_Nature_of_Management",
"book_url": "https://commons.libretexts.org/book/biz-74553",
"title": "2.1: The Nature of Management",
"author": ""
} |
2.4: Authority—Establishing Organizational Relationships
What tools do companies use to establish relationships within their organizations?
Once companies choose a method of departmentalization, they must then establish the relationships within that structure. In other words, the company must decide how many layers of management it needs and who will report to whom. The company must also decide how much control to invest in each of its managers and where in the organization decisions will be made and implemented.
Managerial Hierarchy
Managerial hierarchy (also called the management pyramid ) is defined by the levels of management within an organization. Generally, the management structure has three levels: top, middle, and supervisory management. In a managerial hierarchy, each organizational unit is controlled and supervised by a manager in a higher unit. The person with the most formal authority is at the top of the hierarchy. The higher a manager, the more power he or she has. Thus, the amount of power decreases as you move down the management pyramid. At the same time, the number of employees increases as you move down the hierarchy.
Not all companies today are using this traditional configuration. One company that has eliminated hierarchy altogether is The Morning Star Company, the largest tomato processor in the world. Based in Woodland, California, the company employs 600 permanent “colleagues” and an additional 4,000 workers during harvest season. Founder and sole owner Chris Rufer started the company and based its vision on the philosophy of self-management, in which professionals initiate communication and coordination of their activities with colleagues, customers, suppliers, and others, and take personal responsibility for helping the company achieve its corporate goals. 12
An organization with a well-defined hierarchy has a clear chain of command , which is the line of authority that extends from one level of the organization to the next, from top to bottom, and makes clear who reports to whom. The chain of command is shown in the organization chart and can be traced from the CEO all the way down to the employees producing goods and services. Under the unity of command principle, everyone reports to and gets instructions from only one boss. Unity of command guarantees that everyone will have a direct supervisor and will not be taking orders from a number of different supervisors. Unity of command and chain of command give everyone in the organization clear directions and help coordinate people doing different jobs.
Matrix organizations automatically violate the unity of command principle because employees report to more than one boss, if only for the duration of a project. For example, Unilever, the consumer-products company that makes Dove soap, Ben & Jerry’s ice cream, and Hellmann’s mayonnaise, used to have a matrix structure with one CEO for North America and another for Europe. But employees in divisions that operated in both locations were unsure about which CEO’s decisions took precedence. Today, the company uses a product departmentalization structure. 13 Companies like Unilever tend to abandon matrix structures because of problems associated with unclear or duplicate reporting relationships, in other words, with a lack of unity of command.
Individuals who are part of the chain of command have authority over other persons in the organization. Authority is legitimate power, granted by the organization and acknowledged by employees, that allows an individual to request action and expect compliance. Exercising authority means making decisions and seeing that they are carried out. Most managers delegate , or assign, some degree of authority and responsibility to others below them in the chain of command. The delegation of authority makes the employees accountable to their supervisor. Accountability means responsibility for outcomes. Typically, authority and responsibility move downward through the organization as managers assign activities to, and share decision-making with, their subordinates. Accountability moves upward in the organization as managers in each successively higher level are held accountable for the actions of their subordinates.
Span of Control
Each firm must decide how many managers are needed at each level of the management hierarchy to effectively supervise the work performed within organizational units. A manager’s span of control (sometimes called span of management ) is the number of employees the manager directly supervises. It can be as narrow as two or three employees or as wide as 50 or more. In general, the larger the span of control, the more efficient the organization. As Table \(\PageIndex{1}\) shows, however, both narrow and wide spans of control have benefits and drawbacks.
Table \(\PageIndex{1}\): Narrow and Wide Spans of Control

| | Advantages | Disadvantages |
|---|---|---|
| Narrow span of control | High degree of control; the manager is familiar with each subordinate; close supervision provides immediate feedback | More levels of management, which raises costs and slows decision-making; top management is more isolated; subordinates have less autonomy |
| Wide span of control | Fewer levels of management, which increases efficiency and reduces costs; greater subordinate autonomy allows faster decisions and can raise job satisfaction | Less direct control; managers may be spread too thin to provide needed leadership and support; possible lack of coordination |
If hundreds of employees perform the same job, one supervisor may be able to manage a very large number of employees. Such might be the case at a clothing plant, where hundreds of sewing machine operators work from identical patterns. But if employees perform complex and dissimilar tasks, a manager can effectively supervise only a much smaller number. For instance, a supervisor in the research and development area of a pharmaceutical company might oversee just a few research chemists due to the highly complex nature of their jobs.
CONCEPT CHECK
- How does the chain of command clarify reporting relationships?
- What is the role of a staff position in a line-and-staff organization?
- What factors determine the optimal span of control?
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Prince_Georges_Community_College/BMT_1550%3A_Supervision_(Perry_2021)/02%3A_Explain_the_role_of_the_modern_supervisor_in_relation_to_upper_management_unions_and_governmental_regulations./2.04%3A_AuthorityEstablishing_Organizational_Relationships",
"book_url": "https://commons.libretexts.org/book/biz-74553",
"title": "2.4: Authority—Establishing Organizational Relationships",
"author": ""
} |
2.5: Building Organizational Structures
What are the traditional forms of organizational structure?
The key functions that managers perform include planning, organizing, leading, and controlling. This module focuses specifically on the organizing function. Organizing involves coordinating and allocating a firm’s resources so that the firm can carry out its plans and achieve its goals. This organizing, or structuring, process is accomplished by:
- Determining work activities and dividing up tasks (division of labor)
- Grouping jobs and employees (departmentalization)
- Assigning authority and responsibilities (delegation)
The result of the organizing process is a formal structure within an organization. An organization is the order and design of relationships within a company or firm. It consists of two or more people working together with a common objective and clarity of purpose. Formal organizations also have well-defined lines of authority, channels for information flow, and means of control. Human, material, financial, and information resources are deliberately connected to form the business organization. Some connections are long-lasting, such as the links among people in the finance or marketing department. Others can be changed at almost any time—for example, when a committee is formed to study a problem.
Every organization has some kind of underlying structure. Typically, organizations base their frameworks on traditional, contemporary, or team-based approaches. Traditional structures are more rigid and group employees by function, products, processes, customers, or regions. Contemporary and team-based structures are more flexible and assemble employees to respond quickly to dynamic business environments. Regardless of the structural framework a company chooses to implement, all managers must first consider what kind of work needs to be done within the firm.
Division of Labor
The process of dividing work into separate jobs and assigning tasks to workers is called division of labor . In a fast-food restaurant, for example, some employees take or fill orders, others prepare food, a few clean and maintain equipment, and at least one supervises all the others. In an auto assembly plant, some workers install rearview mirrors, while others mount bumpers on bumper brackets. The degree to which the tasks are subdivided into smaller jobs is called specialization . Employees who work at highly specialized jobs, such as assembly-line workers, perform a limited number and variety of tasks. Employees who become specialists at one task, or a small number of tasks, develop greater skill in doing that particular job. This can lead to greater efficiency and consistency in production and other work activities. However, a high degree of specialization can also result in employees who are disinterested or bored due to the lack of variety and challenge.
Traditional Structures
After a company divides the work it needs to do into specific jobs, managers then group the jobs together so that similar or associated tasks and activities can be coordinated. This grouping of people, tasks, and resources into organizational units is called departmentalization . It facilitates the planning, leading, and control processes.
An organization chart is a visual representation of the structured relationships among tasks and the people given the authority to do those tasks. In the organization chart in Figure \(\PageIndex{2}\), each figure represents a job, and each job includes several tasks. The sales manager, for instance, must hire salespeople, establish sales territories, motivate and train the salespeople, and control sales operations. The chart also indicates the general type of work done in each position. As Figure \(\PageIndex{3}\) shows, five basic types of departmentalization are commonly used in organizations:
- Functional departmentalization , which is based on the primary functions performed within an organizational unit (marketing, finance, production, sales, and so on). Ethan Allen Interiors, a vertically integrated home furnishings manufacturer, continues its successful departmentalization by function, including retail, manufacturing and sourcing, product design, logistics, and operations, which includes tight financial controls. 1
- Product departmentalization , which is based on the goods or services produced or sold by the organizational unit (such as outpatient/emergency services, pediatrics, cardiology, and orthopedics). For example, ITT is a diversified leading manufacturer of highly engineered components and customized technology solutions for the transportation, industrial, and oil and gas markets. The company is organized into four product divisions: Industrial Process (pumps, valves, and wastewater treatment equipment), Control Technologies (motion control and vibration isolation products), Motion Technologies (shock absorbers, brake pads, and friction materials), and Interconnect Solutions (connectors for a variety of markets). 2
- Process departmentalization , which is based on the production process used by the organizational unit (such as lumber cutting and treatment, furniture finishing, and shipping). For example, the organization of Gazprom Neft, a Russian oil company, reflects the activities the company needs to perform to extract oil from the ground and turn it into a final product: exploration and research, production (drilling), refining, and marketing and distribution. 3 Pixar, the animated-movie company now part of Disney, is divided into three parallel yet interactive process-based groups: technology development, which delivers computer-graphics tools; creative development, which creates stories and characters and animates them; and production, which coordinates the film-making process. 4
- Customer departmentalization , which is based on the primary type of customer served by the organizational unit (such as wholesale or retail purchasers). The PNC Financial Services Group offers a wide range of services for all of its customers and is structured by the type of consumer it serves: retail banking for consumers; the asset management group, with specific focus on individuals as well as corporations, unions, municipalities, and others; and corporate and institutional banking for middle-market companies nationwide. 5
ETHICS IN PRACTICE

Panera’s Menu Comes Clean

Making a strategic change to a company’s overall philosophy and the way it does business affects every part of the organizational structure. And when that change pertains to sustainability and “clean food,” Panera Bread Company took on the challenge more than a decade ago and now has a menu free of man-made preservatives, sweeteners, colors, and flavors.
In 2015, Ron Shaich, company founder and CEO, announced Panera’s “no-no” list of nearly 100 ingredients, which he vowed would be eliminated or never used again in menu items. Two years later, the company announced that its menu was “100 percent clean,” but the process was not an easy one.
Panera used thousands of labor hours to review the 450 ingredients used in menu items, eventually reformulating more than 120 of them to eliminate artificial ingredients. Once the team identified the ingredients that were not “clean,” they worked with the company’s 300 vendors—and in some instances, a vendor’s supplier—to reformulate an ingredient to make it preservative-free. For example, the recipe for the company’s popular broccoli cheddar soup had to be revised 60 times to remove artificial ingredients without losing the soup’s taste and texture. According to Shaich, the trial-and-error approach was about finding the right balance of milk, cream, and emulsifiers, like Dijon mustard, to replace sodium phosphate (a no-no item) while keeping the soup’s texture creamy. Panera also created a new cheddar cheese to use in the soup and used a Dijon mustard that contained unpreserved vinegar as a substitute for the banned sodium phosphate.
Sara Burnett, Panera’s director of wellness and food policy, believes that the company’s responsibility goes beyond just serving its customers. She believes that Panera can make a difference by using its voice and purchasing power to have a positive impact on the overall food system. In addition, the company’s Herculean effort to remove artificial ingredients from its menu items also helped it take a close look at its supply chain and other processes that Panera could simplify by using better ingredients.
Panera is not yet satisfied with its commitment to clean food. The food chain recently announced its goal of sourcing 100 percent cage-free eggs for all of its U.S. Panera bakery-cafés by 2020.
Critical Thinking Questions
- How does Panera’s approach to clean eating provide the company with a competitive advantage?
- What kind of impact does this commitment to preservative-free food have on the company’s organizational structure?
- Does “clean food” put additional pressure on Panera and its vendors? Explain your reasoning.
Sources: “Our Food Policy,” www.panerabread.com, accessed July 24, 2017; Emily Payne, “Panera Bread’s Sara Burnett on Shifting Demand for a Better Food System,” Food Tank, http://foodtank.com , accessed July 18, 2017; Julie Jargon, “What Panera Had to Change to Make Its Menu ‘Clean,’” The Wall Street Journal, https://www.wsj.com , February 20, 2017; John Kell, “Panera Says Its Food Menu Is Now 100% ‘Clean Eating,’” Fortune, http://fortune.com , January 13, 2017; Lani Furbank, “Seven Questions with Sara Burnett, Director of Wellness and Food Policy at Panera Bread,” Food Tank, https://foodtank.com , April 12, 2016.
- Geographic departmentalization , which is based on the geographic segmentation of organizational units (such as U.S. and Canadian marketing, European marketing, and Latin American marketing).
People are assigned to a particular organizational unit because they perform similar or related tasks, or because they are jointly responsible for a product, client, or market. Decisions about how to departmentalize affect the way management assigns authority, distributes resources, rewards performance, and sets up lines of communication. Many large organizations use several types of departmentalization.

For example, Procter & Gamble (P&G), the multibillion-dollar consumer-products company, integrates four different types of departmentalization, which the company refers to as “four pillars.” First, the Global Business Units (GBU) divide the company according to products (baby, feminine, and family care; beauty; fabric and home care; and health and grooming). Then, P&G uses a geographical approach, creating business units to market its products around the world. There are Selling and Market Operations (SMO) groups for North America; Latin America; Europe; Asia Pacific; Greater China; and India, the Middle East, and Africa. P&G’s third pillar is the Global Business Services division (GBS), which also uses geographic departmentalization. GBS provides technology processes and standard data tools to enable the GBUs and SMOs to better understand the business and to serve consumers and customers better. It supports P&G business units in areas such as accounting and financial reporting, information technology, purchases, payroll and benefits administration, and facilities management. Finally, the divisions of the Corporate Functions pillar provide a safety net to all the other pillars. These divisions are comprised of functional specialties such as customer business development; external relations; human resources; legal, marketing, consumer, and market knowledge; research and development; and workplace services. 6
Line-and-Staff Organization
The line organization is designed with direct, clear lines of authority and communication flowing from the top managers downward. Managers have direct control over all activities, including administrative duties. An organization chart for this type of structure would show that all positions in the firm are directly connected via an imaginary line extending from the highest position in the organization to the lowest (where production of goods and services takes place). This structure, with its simple design and broad managerial control, is often well-suited to small, entrepreneurial firms.
As an organization grows and becomes more complex, the line organization can be enhanced by adding staff positions to the design. Staff positions provide specialized advisory and support services to line managers in the line-and-staff organization , shown in Figure \(\PageIndex{4}\). In daily operations, individuals in line positions are directly involved in the processes used to create goods and services. Individuals in staff positions provide the administrative and support services that line employees need to achieve the firm’s goals. Line positions in organizations are typically in areas such as production, marketing, and finance. Staff positions are found in areas such as legal counseling, managerial consulting, public relations, and human resource management.
CONCEPT CHECK
- How does specialization lead to greater efficiency and consistency in production?
- What are the five types of departmentalization?
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Prince_Georges_Community_College/BMT_1550%3A_Supervision_(Perry_2021)/02%3A_Explain_the_role_of_the_modern_supervisor_in_relation_to_upper_management_unions_and_governmental_regulations./2.05%3A_Building_Organizational_Structures",
"book_url": "https://commons.libretexts.org/book/biz-74553",
"title": "2.5: Building Organizational Structures",
"author": ""
} |
2.7: Current Issues - Internal Affairs and Discipline
- Alison S. Burke, David Carter, Brian Fedorek, Tiffany Morey, Lore Rutz-Burri, & Shanell Sanchez
- Southern Oregon University via OpenOregon
Internal affairs (IA) exists to hold officers accountable for their actions. Whenever there is an issue, either brought forth by another officer, a supervisor or a member of the general public, the IA division of the police department is responsible for conducting a thorough investigation into the incident. Members of the IA division work directly under the Chief or Sheriff.
In the 1960s, the overwhelming number of riots revealed the problem of corruption and misconduct in policing. One of the most significant issues centered on citizen complaints against officers and the lack of proper investigation into those complaints. Most officers at the time were found exonerated (not guilty) when a complaint ensued, and this did not sit well with the public. [1]
Supervisors in Policing Example
As a young girl, I never had dreams of one day being a supervisor in the police world. In fact, I didn’t even want to be a cop! However, life would direct me towards policing, and after years of testing, I found myself hired as a police officer in Las Vegas. The life of an officer is full of wonder and excitement, but it is also full of stress, and a lot of pressure! After I completed the police academy, field training, and probation, I soon learned that all supervisors (sergeants and lieutenants) were not created equal. I received my first police officer annual evaluation and found that I ONLY met standards in the areas evaluated. How could that be, I thought? I had never worked harder! I always stayed late, I wrote amazing reports, I volunteered and helped out my community, I engaged in constant training, I did everything I knew AND was trained to do. Yet, I still only met standards. Now I wasn’t delusional. I knew that I was a new police officer and had many things to learn, but why was my sergeant failing to mentor or recognize me for my above-average efforts in many areas? I was even told by a female sergeant that she had to work harder than any other police officer because she was a female, so I should have to do the same. Where was the mentoring? Where was the training offered by supervision? I soon learned it did not exist, and the only way to create it was to test for promotion myself and enter the world of supervision as a sergeant. Don’t get me wrong, throughout my tenure as a police officer I did encounter some amazing supervisors, but they were rare and an exception to the rule. I tested for promotion, and I was promoted to sergeant. My goal was to change the way officers were supervised at my department. I worked hard to create a sergeant training program that ensured future supervisors received the knowledge and power of how to mentor and train their employees. Three years later I tested for and was promoted to lieutenant. I took advantage of my new position in administration to mentor many young officers and help them to succeed in their careers.
Discipline
Police departments are paramilitary organizations: semi-militarized forces whose organizational structure, tactics, training, subculture, and (often) function are similar to those of a professional military, but which are formally not part of a government’s armed forces. Therefore, the handling of discipline is serious business. If an officer is accused of a minor infraction, such as the use of profanity, the officer’s immediate supervisor will generally handle the policy infraction, note what occurred in the officer’s file, and counsel the officer as follows:

- Inform the police officer why the conduct was wrong.
- Inform the police officer how to stop engaging in the conduct.
- Inform the police officer when the conduct must stop.
- Schedule a follow-up meeting, after a set amount of time has elapsed, to review and ensure the conduct is no longer occurring.

Depending on the conduct, the supervisor may require the officer to attend training to assist the officer.
Another response was to create external civilian review boards to hold police accountable for their actions by reviewing all use-of-force incidents. The onset of the 21st century and new technology brought new tools to policing. One such tool was a program called IA Pro, which followed individual officers throughout their entire careers. Previously, a scheming grass- or meat-eater officer could bid on a new shift each year, gaining a new supervisor who would be oblivious to past infractions. IA Pro ensured that any and all infractions by an officer were recorded and followed through upon by the applicable supervisor. If an officer used profanity, the program would require the officer to attend training. If the officer used profanity a second time within the prescribed time limits, the officer would be placed on a timed employee development program and could face discipline up to termination. IA Pro was not a panacea, but it significantly lowered the number of officers allowed to continue to operate as grass or meat eaters.
If an officer is accused of a more serious infraction, such as excessive use of force or lying, the officer will immediately be placed on administrative leave, and the Internal Affairs Division of the department will investigate the incident. The Internal Affairs Division will assign one of four findings: (1) sustained complaint, (2) not-sustained complaint, (3) exonerated complaint, or (4) unfounded complaint. Once one of these complaint dispositions is assigned, it is forwarded to the command staff (the Chief or Sheriff, along with the Assistant Chief/Sheriff, Deputy Chief/Sheriff, and Captains) for review and discipline. Discipline can range from time off up to termination.
When an Officer Does Something Illegal Example
I was a lieutenant over two sergeants and dozens of officers when I received the dreaded phone call. One of my officers was being placed on administrative leave by Internal Affairs due to a horrendous allegation. The officer had been pulling over female drivers for ‘so-called’ traffic violations and offering them an ‘out’ if they performed some sort of sexual activity. My heart sank; how could this have happened, and on my watch? After weeks of investigation, I learned that the officer had been engaging in this illegal activity for months. It took several brave women contacting our Internal Affairs Division and telling their stories to stop it. I racked my brain as to what I could have done to prevent the officer’s actions. Did I miss the signs? Should I have been sterner? What could I have done? Even years later it tears at my soul. What those women had to endure. How scared they must have been. It must have been their worst nightmare come true. I have played many scenarios in my head as to what I could have done or should have done to stop this officer’s actions. And I finally learned that some people are just ethically and morally corrupt. No matter how hard we, in supervision, try to identify them through the L.E.T. Process or keep tabs on them when they engage in such acts, sometimes they slip through the cracks and are allowed to spread their evilness. This is what happened with this officer. The officer was smart enough to engage in this activity while alone on patrol, knowing that he would have to stop if another officer or supervisor assisted on the traffic stop. His actions were scary and should send a message to every police department and every supervisor that they must always be on the lookout for those officers who are corrupt and will use their power to engage in illegal and horrendous crimes. This was a hard lesson for me to learn, but an eye-opening one that would forever change the way I supervised those officers in my command.
[1] Goldstein, H. (1977). Policing a Free Society. Cambridge, MA: Ballinger.
3.1: Managerial Communication
Learning Objectives
After reading this chapter, you should be able to answer these questions:
- Understand and describe the communication process.
- Know the types of communications that occur in organizations.
- Understand how power, status, purpose, and interpersonal skills affect communications in organizations.
- Describe how corporate reputations are defined by how an organization communicates to all of its stakeholders.
- Know why talking, listening, reading, and writing are vital to managing effectively.
EXPLORING MANAGERIAL CAREERS
John Legere, T-Mobile
The chief executive officer is often the face of the company. He or she is often the North Star of the company, providing guidance and direction for the entire organization. With other stakeholders, such as shareholders, suppliers, regulatory agencies, and customers, CEOs often take more reserved and structured approaches. One CEO who definitely stands out is John Legere, the CEO of T-Mobile. The unconventional CEO of the self-proclaimed “un-carrier” hosts a Sunday morning podcast called “Slow Cooker Sunday” on Facebook Live, and where most CEOs appear in television interviews in standard business attire, Legere appears with shoulder-length hair, dressed in a magenta T-shirt, black jacket, and pink sneakers. Whereas most CEOs use well-scripted language to address business issues and competitors, Legere refers to T-Mobile’s largest competitors, AT&T and Verizon, as “dumb and dumber.”
In the mobile phone market, T-Mobile is the number-three player competing with giants AT&T and Verizon and recently came to an agreement to merge with Sprint. Of all the consolidation sweeping through the media and telecommunications arena, T-Mobile and Sprint are the most direct of competitors. Their merger would reduce the number of national wireless carriers from four to three, a move the Federal Communications Commission has firmly opposed in the past. Then again, the wireless market looks a bit different now, as does the administration in power.
John Legere and other CEOs such as Mark Cuban, Elon Musk, and Richard Branson have a more public profile than executives at other companies that keep a lower profile and are more guarded in their public comments, often restricting their public statements to quarterly investor and analyst meetings. It is likely that the personality and communication style that the executives reveal in public is also the way that they relate to their employees. The outgoing personality of someone such as John Legere will motivate some employees, but he might be seen as too much of a cheerleader by other employees.
Sometimes the unscripted comments and colorful language that Legere uses can cause issues with employees and the public. For instance, some T-Mobile employees in their call center admonished Legere for comments at a press event where he said Verizon and AT&T were “raping” customers for every penny they have. Legere’s comments caused lengthy discussions in online forums such as Reddit about his choice of words. Legere is known for speaking his mind in public and often uses profanity, but many thought this comment crossed the line. While frank, open communication is often appreciated and leads to a clarity of message, senders of communication, be it in a public forum, an internal memo, or even a text message, should always think through the consequences of their words.
sources
Tara Lachapelle, “T-Mobile’s Argument for Sprint Deal is as Loud as CEO John Legere’s Style,” The Seattle Times , July 9, 2018, www.seattletimes.com/busines...legeres-style/;
Janko Roettgers, “T-Mobile CEO John Legere Pokes Fun at Verizon’s Go90 Closure,” Variety , June 29, 2018, https://variety.com/2018/digital/new...90-1202862397/ ;
Rachel Lerman, “T-Mobile’s Loud, Outspoken John Legere is Not Your Typical CEO,” The Chicago Tribune , April 30, 2018, https://www.chicagotribune.com/busin...430-story.html ;
Steve Kovach, “T-Mobile Employees Speak Out and Call CEO’s Recent Rape Comments ‘Violent’ and ‘Traumatizing’,” Business Insider , June 27, 2014, https://www.businessinsider.com/t-mo...comment-2014-6 ;
Brian X. Chen, “One on One: John Legere, the Hip New Chief of T-Mobile USA,” New York Times , January 9, 2013, https://bits.blogs.nytimes.com/2013/...-t-mobile-usa/ .
We will distinguish between communication between two individuals, communication among several individuals (groups), and communication outside the organization. We will show that managers spend a majority of their time in communication with others. We will examine the reasons for communication and discuss the basic model of interpersonal communication, the types of interpersonal communication, and major influences on the communication process. We will also discuss how organizational reputation is defined by communication with stakeholders.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Prince_Georges_Community_College/BMT_1550%3A_Supervision_(Perry_2021)/03%3A_Evaluate_the_various_communication_techniques_(phone_fax_e-mail_mail_face_to_face_etc.)_usedin_business_organizations_and_when_where_and_why_they_are_appropriate./3.01%3A_Managerial_Communication",
"book_url": "https://commons.libretexts.org/book/biz-74553",
"title": "3.1: Managerial Communication",
"author": "OpenStax"
} |
3.1.1: The Process of Managerial Communication
Learning Objectives
- Understand and describe the communication process.
Interpersonal communication is an important part of being an effective manager:
- It influences the opinions, attitude, motivation, and behaviors of others.
- It expresses our feelings, emotions, and intentions to others.
- It is the vehicle for providing, receiving, and exchanging information regarding events or issues that concern us.
- It reinforces the formal structure of the organization by such means as making use of formal channels of communication.
Interpersonal communication allows employees at all levels of an organization to interact with others, to secure desired results, to request or extend assistance, and to make use of and reinforce the formal design of the organization. These purposes serve not only the individuals involved, but the larger goal of improving the quality of organizational effectiveness.
The model that we present here is an oversimplification of what really happens in communication, but it provides a useful framework for discussing the topic. Figure 16.1.1 illustrates a simple communication episode where a communicator encodes a message and a receiver decodes the message. 1
Encoding and Decoding
Two important aspects of this model are encoding and decoding. Encoding is the process by which individuals initiating the communication translate their ideas into a systematic set of symbols (language), either written or spoken. Encoding is influenced by the sender’s previous experiences with the topic or issue, her emotional state at the time of the message, the importance of the message, and the people involved. Decoding is the process by which the recipient of the message interprets it. The receiver attaches meaning to the message and tries to uncover its underlying intent. Decoding is also influenced by the receiver’s previous experiences and frame of reference at the time of receiving the message.
Feedback
Several types of feedback can occur after a message is sent from the communicator to the receiver. Feedback can be viewed as the last step in completing a communication episode and may take several forms, such as a verbal response, a nod of the head, a response asking for more information, or no response at all. As with the initial message, the response also involves encoding, medium, and decoding.
There are three basic types of feedback that occur in communication. 2 These are informational, corrective, and reinforcing. In informational feedback, the receiver provides nonevaluative information to the communicator. An example is the level of inventory at the end of the month. In corrective feedback, the receiver responds by challenging the original message. The receiver might respond that it is not her responsibility to monitor inventory. In reinforcing feedback, the receiver communicates that she has clearly received the message and its intentions. For instance, the grade that you receive on a term paper (either positive or negative) is reinforcing feedback on your term paper (your original communication).
Noise
There is, however, a variety of ways that the intended message can get distorted. Factors that distort message clarity are noise . Noise can occur at any point along the model shown in Figure 16.1.1, including the decoding process. For example, a manager might be under pressure and issue a directive, “I want this job completed today, and I don’t care what it costs,” when the manager does care what it costs.
concept check
- Describe the communication process.
- Why is feedback a critical part of the communication process?
- What are some things that managers can do to reduce noise in communication?
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Prince_Georges_Community_College/BMT_1550%3A_Supervision_(Perry_2021)/03%3A_Evaluate_the_various_communication_techniques_(phone_fax_e-mail_mail_face_to_face_etc.)_usedin_business_organizations_and_when_where_and_why_they_are_appropriate./3.01%3A_Managerial_Communication/3.1.01%3A_The_Process_of_Managerial_Communication",
"book_url": "https://commons.libretexts.org/book/biz-74553",
"title": "3.1.1: The Process of Managerial Communication",
"author": "OpenStax"
} |
3.1.2: Types of Communications in Organizations
Learning Objectives
- Know the types of communications that occur in organizations.
In the communication model described above, three types of communication can be used by either the communicator in the initial transmission phase or the receiver in the feedback phase. These three types are discussed next.
Oral Communication
This consists of all messages or exchanges of information that are spoken, and it’s the most prevalent type of communication.
Written Communication
This includes e-mail, texts, letters, reports, manuals, and annotations on sticky notes. Although managers prefer oral communication for its efficiency and immediacy, the increase in electronic communication is undeniable. As well, some managers prefer written communication for important messages, such as a change in a company policy, where precision of language and documentation of the message are important.
Managerial Leadership
Dealing with Information Overload
One of the challenges in many organizations is dealing with a deluge of emails, texts, voicemails, and other communication. Organizations have become flatter, outsourced many functions, and layered technology to speed communication with integrated communication programs such as Slack, which allows users to manage all their communication and access shared resources in one place. This can lead to information overload, and crucial messages may be drowned out by the volume in your inbox.
Add the practice of “reply to all,” which many coworkers use and which multiplies the volume of communication, and you may get five or six versions of an initial e-mail and need to understand all of the responses as well as the initial communication before responding or deciding that the issue is resolved and no response is needed. Here are suggestions for dealing with e-mail overload upward, horizontally, and downward within your organization and externally to stakeholders and customers.
One way to reduce the volume and the time you spend on e-mail is to turn off the spigot of incoming messages. There are obvious practices that help, such as unsubscribing to e-newsletters or turning off notifications from social media accounts such as Facebook and Twitter. Also, consider whether your colleagues or direct reports are copying you on too many emails as an FYI. If yes, explain that you only need to be updated at certain times or when a final decision is made.
You will also want to set up a system that will organize your inbox into “folders” that will allow you to manage the flow of messages into groups that will allow you to address them appropriately. Your system might look something like this:
- Inbox: Treat this as a holding pen. E-mails shouldn’t stay here any longer than it takes for you to file them into another folder. The exception is when you respond immediately and are waiting for an immediate response.
- Today: This is for items that need a response today.
- This week: This is for messages that require a response before the end of the week.
- This month/quarter: This is for everything that needs a longer-term response. Depending on your role, you may need a monthly or quarterly folder.
- FYI: This is for any items that are for information only and that you may want to refer back to in the future.
This system prioritizes e-mails based on timescales rather than the e-mails’ senders, enabling you to better schedule work and set deadlines.
Another thing to consider is your outgoing e-mail. If your outgoing messages are not specific, too long, unclear, or are copied too widely, your colleagues are likely to follow the same practice when communicating with you. Keep your communication clear and to the point, and managing your outbox will help make your inbound e-mails manageable.
critical thinking questions
- How are you managing your e-mails now? Are you mixing personal and school and work-related e-mails in the same account?
- How would you communicate to a colleague who is sending too many FYI e-mails, sending too many unclear e-mails, or copying too many people on her messages?
sources
Amy Gallo, Stop Email Overload, Harvard Business Review , February 21, 2012, https://hbr.org/2012/02/stop-email-overload-1 ;
Barry Chingel, “How to beat email Overload in 2018”, CIPHER , January 16, 2018, https://www.ciphr.com/advice/email-overload/ ;
Monica Seely, “At the Mercy of Your Inbox? How to Cope With Email Overload”, The Guardian , November 6, 2017, https://www.theguardian.com/small-bu...email-overload .
Nonverbal Communication
There is also the transmission of information without speaking or writing. Some examples of this are things such as traffic lights and sirens as well as things such as office size and placement, which connote something or someone of importance. As well, things such as body language and facial expression can convey either conscious or unconscious messages to others.
Major Influences on Interpersonal Communication
Regardless of the type of communication involved, the nature, direction, and quality of interpersonal communication processes can be influenced by several factors. 3
Social Influences
Communication is a social process, as it takes at least two people to have a communication episode. There is a variety of social influences that can affect the accuracy of the intended message. For example, status barriers between employees at different levels of the organization can influence things such as addressing a colleague at the director level as “Ms. Jones” versus a coworker at the same level as “Mike.” Prevailing norms and roles can dictate who speaks to whom and how someone responds. Figure 16.2.2 illustrates a variety of communications that reflect social influences in the workplace.
Perception
In addition, the communication process is heavily influenced by perceptual processes. The extent to which an employee accurately receives job instructions from a manager may be influenced by her perception of the manager, especially if the job instructions conflict with her interest in the job or if they are controversial. If an employee has stereotyped the manager as incompetent, chances are that little that the manager says will be taken seriously. If the boss is well regarded or seen as influential in the company, everything that she says may be interpreted as important.
Interaction Involvement
Communication effectiveness can be influenced by the extent to which one or both parties are involved in the conversation. This attentiveness is called interaction attentiveness or interaction involvement. 4 If the intended receiver of the message is preoccupied with other issues, the effectiveness of the message may be diminished. Interaction involvement consists of three interrelated dimensions: responsiveness, perceptiveness, and attentiveness.
Organizational Design
The communication process can also be influenced by the design of the organization. It has often been argued that decentralizing an organization leads to a more participative structure and, in turn, to improved communication. When messages must travel through multiple levels of an organization, the possibility of distortion increases; more face-to-face communication diminishes it.
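One way to see why layers matter is a back-of-the-envelope model. Assuming, purely for illustration, that each handoff between levels preserves the message intact with some independent probability p, expected fidelity decays geometrically with the number of levels:

```python
# Illustrative model only: each level a message passes through preserves it
# intact with independent probability p; fidelity then decays as p ** levels.
def expected_fidelity(p: float, levels: int) -> float:
    return p ** levels

for levels in (1, 3, 6):
    print(levels, "levels:", round(expected_fidelity(0.9, levels), 3))
# 1 levels: 0.9
# 3 levels: 0.729
# 6 levels: 0.531
```

Even with 90% fidelity per handoff, a message crossing six levels arrives intact only about half the time, which is the intuition behind flatter structures and more direct communication.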
concept check
- What are the three major types of communication?
- How can you manage the inflow of electronic communication?
- What are the major influences on organizational communication, and how can organizational design affect communication?
3.1.3: Factors Affecting Communications and the Roles of Managers
Learning Objectives
- Understand how power, status, purpose, and interpersonal skills affect communications in organizations.
The Roles Managers Play
In Mintzberg’s seminal study of managers and their jobs, he found the majority of them clustered around three core management roles. 5
Interpersonal Roles
Managers are required to interact with a substantial number of people during a workweek. They host receptions; take clients and customers to dinner; meet with business prospects and partners; conduct hiring and performance interviews; and form alliances, friendships, and personal relationships with many others. Numerous studies have shown that such relationships are the richest source of information for managers because of their immediate and personal nature. 6
Three of a manager’s roles arise directly from formal authority and involve basic interpersonal relationships. First is the figurehead role. As the head of an organizational unit, every manager must perform some ceremonial duties. In Mintzberg’s study, chief executives spent 12% of their contact time on ceremonial duties; 17% of their incoming mail dealt with acknowledgments and requests related to their status. One example is a company president who requested free merchandise for a handicapped schoolchild. 7
Managers are also responsible for the work of the people in their unit, and their actions in this regard are directly related to their role as a leader. The influence of managers is most clearly seen, according to Mintzberg, in the leader role. Formal authority vests them with great potential power. Leadership determines, in large part, how much power they will realize.
Does the leader’s role matter? Ask the employees of Chrysler Corporation (now Fiat Chrysler). When Sergio Marchionne, who passed away in 2018, took over the company in the wake of the financial crisis, the once-great auto manufacturer was in bankruptcy, teetering on the verge of extinction. He formed new relationships with the United Auto Workers, reorganized the senior management of the company, and—perhaps, most importantly—convinced the U.S. federal government to guarantee a series of bank loans that would make the company solvent again. The loan guarantees, the union response, and the reaction of the marketplace, especially for the Jeep brand, were due in large measure to Marchionne’s leadership style and personal charisma. More recent examples include the return of Starbucks founder Howard Schultz to reenergize and steer his company and Amazon CEO Jeff Bezos and his ability to innovate during a downturn in the economy. 8
Popular management literature has had little to say about the liaison role until recently. This role, in which managers establish and maintain contacts outside the vertical chain of command, becomes especially important in view of the finding of virtually every study of managerial work that managers spend as much time with peers and other people outside of their units as they do with their own subordinates. Surprisingly, they spend little time with their own superiors. In Rosemary Stewart’s (1967) study, 160 British middle and top managers spent 47% of their time with peers, 41% of their time with people inside their unit, and only 12% of their time with superiors. Guest’s (1956) study of U.S. manufacturing supervisors revealed similar findings.
Informational Roles
Managers are required to gather, collate, analyze, store, and disseminate many kinds of information. In doing so, they become information resource centers, often storing huge amounts of information in their own heads, moving quickly from the role of gatherer to the role of disseminator in minutes. Although many business organizations install large, expensive management information systems to perform many of those functions, nothing can match the speed and intuitive power of a well-trained manager’s brain for information processing. Not surprisingly, most managers prefer it that way.
As monitors, managers are constantly scanning the environment for information, talking with liaison contacts and subordinates, and receiving unsolicited information, much of it because of their network of personal contacts. A good portion of this information arrives in verbal form, often as gossip, hearsay, and speculation. 9
In the disseminator role, managers pass privileged information directly to subordinates, who might otherwise have no access to it. Managers must decide not only who should receive such information, but how much of it, how often, and in what form. Increasingly, managers are being asked to decide whether subordinates, peers, customers, business partners, and others should have direct access to information 24 hours a day without having to contact the manager directly. 10
In the spokesperson role, managers send information to people outside of their organizations: an executive makes a speech to lobby for an organizational cause, or a supervisor suggests a product modification to a supplier. Increasingly, managers are also being asked to deal with representatives of the news media, providing both factual and opinion-based responses that will be printed or broadcast to vast unseen audiences, often directly or with little editing. The risks in such circumstances are enormous, but so too are the potential rewards in terms of brand recognition, public image, and organizational visibility. 11
Decisional Roles
Ultimately, managers are charged with the responsibility of making decisions on behalf of both the organization and the stakeholders with an interest in it. Such decisions are often made under circumstances of high ambiguity and with inadequate information. Often, the other two managerial roles—interpersonal and informational—will assist a manager in making difficult decisions in which outcomes are not clear and interests are often conflicting.
In the role of entrepreneur, managers seek to improve their businesses, adapt to changing market conditions, and react to opportunities as they present themselves. Managers who take a longer-term view of their responsibilities are among the first to realize that they will need to reinvent themselves, their product and service lines, their marketing strategies, and their ways of doing business as older methods become obsolete and competitors gain advantage.
While the entrepreneur role describes managers who initiate change, the disturbance or crisis handler role depicts managers who must involuntarily react to conditions. Crises can arise because bad managers let circumstances deteriorate or spin out of control, but just as often good managers find themselves in the midst of a crisis that they could not have anticipated but must react to just the same. 12
The third decisional role of resource allocator involves managers making decisions about who gets what, how much, when, and why. Resources, including funding, equipment, human labor, office or production space, and even the boss’s time, are all limited, and demand inevitably outstrips supply. Managers must make sensible decisions about such matters while still retaining, motivating, and developing the best of their employees.
The final decisional role is that of negotiator. Managers spend considerable amounts of time in negotiations: over budget allocations, labor and collective bargaining agreements, and other formal dispute resolutions. During a week, managers will often make dozens of decisions that are the result of brief but important negotiations between and among employees, customers and clients, suppliers, and others with whom managers must deal. 13
concept check
- What are the major roles that managers play in communicating with employees?
- Why are negotiations often a part of managers’ communications?
3.1.4: Managerial Communication and Corporate Reputation
Learning Objectives
- Describe how corporate reputations are defined by how an organization communicates to its stakeholders.
Management communication is a central discipline in the study of communication and corporate reputation. An understanding of language and its inherent powers, combined with the skill to speak, write, listen, and form interpersonal relationships, will determine whether companies succeed or fail and whether they are rewarded or penalized for their reputations.
At the midpoint of the twentieth century, Peter Drucker wrote, “Managers have to learn to know language, to understand what words are and what they mean. Perhaps most important, they have to acquire respect for language as [our] most precious gift and heritage. The manager must understand the meaning of the old definition of rhetoric as ‘the art which draws men’s hearts to the love of true knowledge.’” 14
Later, Eccles and Nohria reframed Drucker’s view to offer a perspective of management that few others have seen: “To see management in its proper light, managers need first to take language seriously.” 15 In particular, they argue, a coherent view of management must focus on three issues: the use of rhetoric to achieve a manager’s goals, the shaping of a managerial identity, and taking action to achieve the goals of the organizations that employ us. Above all, they say, “the essence of what management is all about [is] the effective use of language to get things done.” 16 One of the things managers get done is the creation, management, and monitoring of corporate reputation.
The job of becoming a competent, effective manager thus becomes one of understanding language and action. It also involves finding ways to shape how others see and think of you in your role as a manager. Many noted researchers have examined the important relationship between communication and action within large and complex organizations and conclude that the two are inseparable. Without the right words, used in the right way, it is unlikely that the right reputations develop. “Words do matter,” write Eccles and Nohria. “They matter very much. Without words, we have no way of expressing strategic concepts, structural forms, or designs for performance measurement systems.” Language, they conclude, “is too important to managers to be taken for granted or, even worse, abused.” 17
So, if language is a manager’s key to corporate reputation management, the next question is obvious: How good are managers at using language? Managers’ ability to act—to hire a talented workforce, to change an organization’s reputation, to launch a new product line—depends entirely on how effectively they use management communication, both as a speaker and as a listener. Managers’ effectiveness as a speaker and writer will determine how well they are able to manage the firm’s reputation. And their effectiveness as listeners will determine how well they understand and respond to others and can change the organization in response to their feedback.
We will now examine the role management communication plays in corporate reputation formation, management, and change, as well as the position occupied by rhetoric in the life of business organizations. This chapter focuses on the skills, abilities, and competencies involved in using language, attempting to influence others, and responding to the requirements of peers, superiors, stakeholders, and the organization in which managers and employees work.
Management communication is about the movement of information and the skills that facilitate it—speaking, writing, listening, and processes of critical thinking. It’s also about understanding who your organization is (identity), who others think your organization is (reputation), and the contributions individuals can make to the success of their business considering their organization’s existing reputation. It is also about confidence—the knowledge that one can speak and write well, listen with great skill as others speak, and both seek out and provide the feedback essential to creating, managing, or changing their organization’s reputation.
At the heart of this chapter, though, is the notion that communication, in many ways, is the work of managers. We will now examine the roles of writing and speaking in the role of management, as well as other specific applications and challenges managers face as they play their role in the creation, maintenance, and change of corporate reputation.
concept check
- How are corporate reputations affected by the communication of managers and public statements?
- Why is corporate reputation important?
3.1.5: The Major Channels of Management Communication Are Talking, Listening, Reading, and Writing
Learning Objective
- Know why talking, listening, reading, and writing are vital to managing effectively.
The major channels of managerial communication displayed in Figure 16.5.1 are talking, listening, reading, and writing. Among these, talking is the predominant method of communicating, but as e-mail and texting grow, so do reading and writing. Managers across industries, according to Deirdre Borden, spend about 75% of their time in verbal interaction. Those daily interactions include the following.
One-on-One Conversations
Increasingly, managers find that information is passed orally, often face-to-face in offices, hallways, conference rooms, cafeterias, restrooms, athletic facilities, parking lots, and literally dozens of other venues. An enormous amount of information is exchanged, validated, confirmed, and passed back and forth under highly informal circumstances.
Telephone Conversations
Managers spend an astounding amount of time on the telephone these days. Curiously, the amount of time per telephone call is decreasing, but the number of calls per day is increasing. With the nearly universal availability of cellular and satellite telephone service, very few people are out of reach of the office for very long. The decision to switch off a cellular telephone, in fact, is now considered a decision in favor of work-life balance.
Video Teleconferencing
Bridging time zones as well as cultures, videoconferencing facilities make direct conversations with employees, colleagues, customers, and business partners across the nation or around the world a simple matter. Carrier Corporation, the air-conditioning manufacturer, is now typical of firms using desktop videoconferencing to conduct everything from staff meetings to technical training. Engineers at Carrier’s Farmington, Connecticut, headquarters can hook up with service managers in branch offices thousands of miles away to explain new product developments, demonstrate repair techniques, and update field staff on matters that would, just recently, have required extensive travel or expensive, broadcast-quality television programming. Their exchanges are informal, conversational, and not much different than they would be if the people were in the same room. 18
Presentations to Small Groups
Managers frequently find themselves making presentations, formal and informal, to groups of three to eight people for many different reasons: they pass along information given to them by executives, they review the status of projects in process, and they explain changes in everything from working schedules to organizational goals. Such presentations are sometimes supported by overhead transparencies or printed outlines, but they are oral in nature and retain much of the conversational character of one-to-one conversations.
Public Speaking to Larger Audiences
Most managers are unable to escape the periodic requirement to speak to larger audiences of several dozen or, perhaps, several hundred people. Such presentations are usually more formal in structure and are often supported by PowerPoint or Prezi software that can deliver data from text files, graphics, photos, and even motion clips from streaming video. Despite the more formal atmosphere and sophisticated audio-visual support systems, such presentations still involve one manager talking to others, framing, shaping, and passing information to an audience.
A series of scientific studies, beginning with Rankin, Nichols and Stevens, and Wolvin and Coakley, confirms that most managers spend the largest portion of their day talking and listening. 19 Werner’s thesis, in fact, found that North American adults spend more than 78% of their communication time either talking or listening to others who are talking.
According to Werner and others who study the communication habits of postmodern business organizations, managers are involved in more than just speeches and presentations from the dais or teleconference podium. They spend their days in meetings, on the telephone, conducting interviews, giving tours, supervising informal visits to their facilities, and at a wide variety of social events. 20
Each of these activities may look to some managers like an obligation imposed by the job. Shrewd managers see them as opportunities to hear what others are thinking, to gather information informally from the grapevine, to listen in on office gossip, to pass along viewpoints that haven’t yet made their way to the more formal channels of communication, or to catch up with a colleague or friend in a more relaxed setting. No matter what the intention of each manager who engages in these activities, the information they produce and the insight that follows from them can be put to work the same day to achieve organizational and personal objectives. “To understand why effective managers behave as they do,” writes Kotter, “it is essential first to recognize two fundamental challenges and dilemmas found in most of their jobs.” Managers must first figure out what to do, despite an enormous amount of potentially relevant information (along with much that is not), and then they must get things done “through a large and diverse group of people despite having little direct control over most of them.” 21
The Role of Writing
Writing plays an important role in the life of any organization. In some organizations, it becomes more important than in others. At Procter & Gamble, for example, brand managers cannot raise a work-related issue in a team meeting unless the ideas are first circulated in writing. For P&G managers, this approach means explaining their ideas in explicit detail in a standard one-to-three-page memo, complete with background, financial discussion, implementation details, and justification for the ideas proposed.
Other organizations are more oral in their traditions—3M Canada is a “spoken” organization—but the fact remains: the most important projects, decisions, and ideas end up in writing. Writing also provides analysis, justification, documentation, and analytic discipline, particularly as managers approach important decisions that will affect the profitability and strategic direction of the company.
Writing is a career sifter. If managers demonstrate their inability to put ideas on paper in a clear, unambiguous fashion, they’re not likely to last. Stories of bad writers who’ve been shown the door early in their careers are legion. Managers’ principal objective, at least during the first few years of their career, is to keep their name out of such stories. Remember: those who are most likely to notice the quality and skill in managers’ written documents are the very people most likely to matter to managers’ future.
Managers do most of their own writing and editing. The days when managers could lean back and thoughtfully dictate a letter or memo to a skilled secretarial assistant are mostly gone. Some senior executives know how efficient dictation can be, especially with a top-notch administrative assistant taking shorthand, but how many managers have that advantage today? Very few, mostly because buying a computer and printer is substantially cheaper than hiring another employee. Managers at all levels of most organizations draft, review, edit, and dispatch their own correspondence, reports, and proposals.
Documents take on lives of their own. Once a document leaves the manager’s desk, it isn’t theirs anymore. When managers sign a letter and put it in the mail, it’s no longer their letter—it’s the property of the person or organization it was sent to. As a result, the recipient is free to do as she sees fit with the writing, including using it against the sender. If the ideas are ill-considered or not well expressed, others in the organization who are not especially sympathetic to the manager’s views may head for the copy machine with the manager’s work in hand. The advice for managers is simple: do not mail the first draft, and do not ever sign your name to a document you are not proud of.
Communication Is Invention
Without question, communication is a process of invention. Managers literally create meaning through communication. A company, for example, is not in default until a team of auditors sits down to examine the books and review the matter. Only after extended discussion do the accountants conclude that the company is, in fact, in default. It is their discussion that creates the outcome. Until that point, default was simply one of many possibilities.
The fact is, managers create meaning through communication. It is largely through discussion and verbal exchange—often heated and passionate—that managers decide who they wish to be: market leaders, takeover artists, innovators, or defenders of the economy. It is only through communication that meaning is created for shareholders, employees, customers, and others. Those long, detailed, and intense discussions determine how much the company will declare in dividends this year, whether the company is willing to risk a strike or labor action, and how soon to roll out the new product line customers are asking for. Additionally, managers usually figure things out by talking about them as much as they talk about the things they have already figured out. Talk serves as a wonderful palliative: justifying, dissecting, reassuring, and analyzing the events that confront managers each day.
Information Is Socially Constructed
If we are to understand just how important human discourse is in the life of a business, several points seem especially important.
Information is created, shared, and interpreted by people. Meaning is a truly human phenomenon. An issue is only important if people think it is. Facts are facts only if we can agree upon their definition. Perceptions and assumptions are as important as truth itself in a discussion about what a manager should do next. 22

Information never speaks for itself. It is not uncommon for a manager to rise to address a group of her colleagues and say, “The numbers speak for themselves.” Frankly, the numbers never speak for themselves. They almost always require some sort of interpretation, some sort of explanation or context. Do not assume that others see the facts in the same way managers do, and never assume that what is seen is the truth. Others may see the same set of facts or evidence but may not reach the same conclusions. Few things in life are self-explanatory.
Context always drives meaning. The backdrop to a message is always of paramount importance to the listener, viewer, or reader in reaching a reasonable, rational conclusion about what she sees and hears. What’s in the news these days as we take up this subject? What moment in history do we occupy? What related or relevant information is under consideration as this new message arrives? We cannot possibly derive meaning from one message without considering everything else that surrounds it.
A messenger always accompanies a message. It is difficult to separate a message from its messenger. We often want to react more to the source of the information than we do to the information itself. That’s natural and entirely normal. People speak for a reason, and we often judge their reasons for speaking before analyzing what they have to say. Keep in mind that, in every organization, message recipients will judge the value, power, purpose, intent, and outcomes of the messages they receive by the source of those messages as much as by the content and intent of the messages themselves. If the messages managers send are to have the impact hoped for, they must come from a source the receiver knows, respects, and understands.
Managers’ Greatest Challenge
Every manager knows communication is vital, but every manager also seems to “know” that she is great at it. Managers’ greatest challenge is to admit to flaws in their skill set and work tirelessly to improve them. First, managers must admit to the flaws.
Larkin and Larkin write, “Deep down, managers believe they are communicating effectively. In ten years of management consulting, we have never had a manager say to us that he or she was a poor communicator. They admit to the occasional screw-up, but overall, everyone, without exception, believes he or she is basically a good communicator.” 23
Managers’ Task as Professionals
As a professional manager, the first task is to recognize and understand one’s strengths and weaknesses as a communicator. Until these communication tasks at which one is most and least skilled are identified, there will be little opportunity for improvement and advancement.
Foremost among managers’ goals should be to improve existing skills. Improve one’s ability to do what is done best. Be alert to opportunities, however, to develop new skills. Managers should add to their inventory of abilities to keep themselves employable and promotable.
Two other suggestions come to mind for improving managers’ professional standing. First, acquire a knowledge base that will work for the years ahead. That means speaking with and listening to other professionals in their company, industry, and community. They should be alert to trends that could affect their company’s products and services, as well as their own future.
It also means reading. Managers should read at least one national newspaper each day, including the Wall Street Journal, the New York Times, or the Financial Times, as well as a local newspaper. Their reading should include weekly news magazines, such as U.S. News & World Report, Bloomberg Businessweek, and the Economist. Subscribe to monthly magazines such as Fast Company and Fortune. And they should read at least one new hardcover title a month. A dozen books each year is the bare minimum on which one should depend for new ideas, insights, and managerial guidance.
Managers’ final challenge is to develop the confidence needed to succeed as a manager, particularly under conditions of uncertainty, change, and challenge.
ETHICS IN PRACTICE
Disney and H-1B Visas
On January 30, 2015, The Walt Disney Company laid off 250 of its IT workers. In a letter to the laid-off workers, Disney outlined the conditions for receipt of a “stay bonus,” which would entitle each worker to a lump-sum payment of 10% of her annual salary.
Of course, there was a catch. Only those workers who trained their replacements over a 90-day period would receive the bonus. One American worker in his 40s who agreed to Disney’s severance terms explained how it worked in action:
“The first 30 days was all capturing what I did. The next 30 days, they worked side by side with me, and the last 30 days, they took over my job completely. I had to make sure they were doing my job correctly.”
To outside observers, this added insult to injury. It was bad enough to replace U.S. workers with cheaper, foreign labor. But to ask, let alone strong-arm, the laid-off workers into training their replacements seemed a bit much.
However unfortunate, layoffs are commonplace. But this was different. From the timing to the apparent neglect of employee pride, the sequence of events struck a nerve. For many, the issue was simple, and Disney’s actions seemed wrong at a visceral level. As criticism mounted, it became clear that this story would develop legs. Disney had a problem.
For David Powers and Leo Perrero, each a 10-year information technology (IT) veteran at Disney, the invitation came from a vice president of the company. It had to be good news, the men thought. After all, they were not far removed from strong performance reviews—perhaps they would be awarded performance bonuses. Well, not exactly. Leo Perrero, one of the summoned workers, explains what happened next.
“I’m in the room with about two-dozen people, and very shortly thereafter an executive delivers the news that all of our jobs are ending in 90 days, and that we have 90 days to train our replacements or we won’t get a bonus that we’ve been offered.”
Powers explained the deflating effect of the news: “When a guillotine falls down on you, in that moment you're dead . . . and I was dead.”
These layoffs and the hiring of foreign workers under the H-1B program lay at the center of this issue. Initially introduced by the Immigration and Nationality Act of 1965, the visa program went through subsequent modifications that produced the current iteration of the H-1B program in 1990. Importantly, at that time, the United States faced a shortage of skilled workers necessary to fill highly technical jobs. Enter the H-1B visa program as the solution. This program permits U.S. employers to temporarily employ foreign workers in highly specialized occupations. “Specialty occupations” are defined as those in the fields of architecture, engineering, mathematics, science, medicine, and others that require technical and skilled expertise.
Congress limited the number of H-1B visas issued to 85,000 per year. That total is divided into two subcategories: “65,000 new H-1B visas issued for overseas workers in professional or specialty occupation positions, and an additional 20,000 visas available for those with an advanced degree from a U.S. academic institution.” Further, foreign workers are not able to apply for an H-1B visa. Instead, a U.S. employer must petition on their behalf no earlier than six months before the starting date of employment.
For an employer to apply for an H-1B visa on a foreign worker’s behalf, the worker must meet certain requirements, such as an employee-employer relationship with the petitioning U.S. employer and a position in a specialty occupation related to the employee’s field of study. The employee must also meet one of the following criteria: a bachelor’s degree or the foreign equivalent of a bachelor’s degree, a degree that is standard for the position, or previous qualified experience within the specialty occupation.
If approved, the initial term of the visa is three years, which may be extended an additional three years. While residing in the United States on an H-1B visa, a worker may apply to become a permanent resident and receive a green card, which would entitle the worker to remain indefinitely.
U.S. employers are required to file a Labor Condition Application (LCA) on behalf of each foreign worker they seek to employ. That application must be approved by the U.S. Department of Labor. The LCA requires the employer to assure that the foreign worker will be paid a wage and be provided working conditions and benefits that meet or exceed the local prevailing market and to assure that the foreign worker will not displace a U.S. worker in the employer’s workforce.
Given these representations, U.S. employers have increasingly been criticized for abuse of the H-1B program. Most significantly, there is rising sentiment that U.S. employers are displacing domestic workers in favor of cheaper foreign labor. Research indicates that a U.S. worker’s salary for these specialty occupations often exceeds $100,000, while that of a foreign worker is roughly $62,000 for the very same job. The latter figure is telling, since $60,000 is the threshold below which a salary would trigger a penalty.
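As a rough worked check of the figures cited above (the salaries are the approximate numbers reported in this case, not authoritative data), the per-worker saving and the penalty threshold can be compared directly; note that the reported H-1B salary clears the $60,000 threshold by only about $2,000:

```python
# Rough arithmetic with the approximate figures cited in this case; illustrative only.
us_salary = 100_000         # reported typical U.S. salary for these occupations
h1b_salary = 62_000         # reported salary for the same job under H-1B
penalty_threshold = 60_000  # salaries below this would trigger a penalty

print("saving per worker:", us_salary - h1b_salary)               # 38000
print("margin above threshold:", h1b_salary - penalty_threshold)  # 2000
```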
Disney faced huge backlash and negative press because of the layoffs and hiring of foreign workers. Because of this, Disney had communication challenges, both internally and externally.
Disney executives framed the layoffs as part of a larger plan of reorganization intended to enable its IT division to focus on driving innovation. Walt Disney World spokesperson Jacquee Wahler gave the following explanation:
“We have restructured our global technology organization to significantly increase our cast member focus on future innovation and new capabilities , and are continuing to work with leading technical firms to maintain our existing systems as needed.” (Italics added for emphasis.)
That statement is consistent with a leaked memo drafted by Disney Parks and Resort CIO Tilak Mandadi, which he sent to select employees on November 10, 2014 (not including those who would be laid off), to explain the rationale for the impending layoffs. The memo read, in part, as follows:
“To enable a majority of our team to shift focus to new capabilities , we have executed five new managed services agreements to support testing services and application maintenance. Last week, we began working with both our internal subject matter experts and the suppliers to start transition planning for these agreements. We expect knowledge transfer to start later this month and last through January. Those Cast Members who are involved will be contacted in the next several weeks.”
Responding to the critical New York Times article, Disney represented that when all was said and done, the company had in fact produced a net jobs increase. According to Disney spokesperson Kim Prunty:
“Disney has created almost 30,000 new jobs in the U.S. over the past decade, and the recent changes to our parks’ IT team resulted in a larger organization with 70 additional in-house positions in the U.S. External support firms are responsible for complying with all applicable employment laws for their employees.”
New jobs were promised due to the restructuring, Disney officials said, and employees targeted for termination were pushed to apply for those positions. According to a confidential Disney source, of the approximately 250 laid-off employees, 120 found new jobs within Disney, 40 took early retirement, and 90 were unable to secure new jobs with Disney.
On June 11, 2015, Senator Richard Durbin of Illinois and Senator Jeffrey Sessions of Alabama released a statement regarding a bipartisan letter issued to the attorney general, the Department of Homeland Security, and the Department of Labor.
“A number of U.S. employers, including some large, well-known, publicly-traded corporations, have laid off thousands of American workers and replaced them with H-1B visa holders . . . . To add insult to injury, many of the replaced American employees report that they have been forced to train the foreign workers who are taking their jobs. That’s just plain wrong and we’ll continue to press the Administration to help solve this problem.”
In response to request for comment on the communications issues raised by the Disney layoffs and aftermath, New York Times columnist Julia Preston shared the following exclusive analysis:
“I would say Disney’s handling of those lay-offs is a case study in how not to do things. But in the end it’s not about the communications, it’s about the company. Those layoffs showed a company that was not living up to its core vaunted family values and no amount of shouting by their communications folks could change the facts of what happened.”
questions for discussion
- Is it ethical for U.S. companies to lay off workers and hire foreign workers under the H-1B program? Should foreign countries restrict the hiring of foreign workers that meet their workforce requirements?
- Discuss the internal and external communications that Disney employed in this situation. The examples here are of the formal written communications. What should Disney have been communicating verbally to their employees and externally?
sources
Preston, Julia, “Pink Slips at Disney. But First, Training Foreign Replacements,” The New York Times, June 3, 2015, www.nytimes.com/2015/06/04/us...lacements.html;
Vargas, Rebecca, “EXCLUSIVE: Former Employees Speak Out About Disney’s Outsourcing of High-Tech Jobs,” WWSB ABC 7, October 28, 2015, www.mysuncoast.com/news/local...5081380c1.html;
Boyle, Mathew, “Ahead of GOP Debate, Two Ex-Disney Workers Displaced by H1B Foreigners Speak Out for First Time,” Breitbart.com, October 28, 2015, http://www.breitbart.com/big-governm...for-first-time;
Pedicini, Sandra, “Tech Workers File Lawsuits Against Disney Over H-1B Visas,” Orlando Sentinel, January 25, 2016, accessed February 6, 2016, http://www.orlandosentinel.com/busin...125-story.html;
U.S. Citizenship and Immigration Services, “Understanding H-1B Requirements,” accessed February 6, 2016, https://www.uscis.gov/eir/visa-guide...b-requirements;
May, Caroline, “Sessions, Durbin: Department of Labor Has Launched Investigation Into H-1B Abuses,” Breitbart.com, June 11, 2015, http://www.breitbart.com/big-governm...o-h-1b-abuses/;
Email from Julia Preston, National Immigration Correspondent, The New York Times, to Bryan Shannon, co-author of this case study, dated February 10, 2016.
concept check
- What are the four components of communication discussed in this section?
- Why is it important to understand your limitations in communicating to others and in larger groups?
- Why should managers always strive to improve their skills?
3.1.6: Summary
key terms
- communicator
- The individual, group, or organization that needs or wants to share information with another individual, group, or organization.
- decoding
- Interpreting, understanding, and making sense of a message.
- encoding
- Translating a message into symbols or language that a receiver can understand.
- noise
- Anything that interferes with the communication process.
- receiver
- The individual, group, or organization for which information is intended.
- interaction attentiveness/ interaction involvement
- A measure of how the receiver of a message is paying close attention and is alert or observant.
- figurehead role
- A necessary role for a manager who wants to inspire people within the organization to feel connected to each other and to the institution, to support the policies and decisions made on behalf of the organization, and to work harder for the good of the institution.
The Process of Managerial Communication
- Understand and describe the communication process.
The basic model of interpersonal communication consists of an encoded message, a decoded message, feedback, and noise. Noise refers to the distortions that inhibit message clarity.
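The model can be made concrete with a small sketch. Everything here, from the word-level encoding to the random symbol loss standing in for noise, is hypothetical scaffolding chosen to name the model's parts, not a real messaging API:

```python
import random

def encode(idea: str) -> list[str]:
    """Communicator translates the idea into symbols (here, words)."""
    return idea.split()

def add_noise(symbols: list[str], drop_rate: float = 0.2) -> list[str]:
    """The channel: some symbols are lost or distorted in transmission."""
    return [s for s in symbols if random.random() > drop_rate]

def decode(symbols: list[str]) -> str:
    """Receiver reconstructs meaning from whatever arrived."""
    return " ".join(symbols)

idea = "ship the revised budget by friday"
received = decode(add_noise(encode(idea)))
# Feedback closes the loop: it is how the communicator learns about distortion.
feedback = "understood" if received == idea else f"please repeat; I heard: {received!r}"
print(received)
print(feedback)
```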
Types of Communications in Organizations
- Know the types of communications that occur in organizations.
Interpersonal communication can be oral, written, or nonverbal. Body language refers to conveying messages to others through such techniques as facial expressions, posture, and eye movements.
Factors Affecting Communications and the Roles of Managers
- Understand how power, status, purpose, and interpersonal skills affect communications in organizations.
Interpersonal communication is influenced by social situations, perception, interaction involvement, and organizational design. Organizational communication can travel upward, downward, or horizontally. Each direction of information flow has specific challenges.
Managerial Communication and Corporate Reputation
- Describe how corporate reputations are defined by how an organization communicates to all of its stakeholders.
It is important for managers to understand what their organization stands for (identity), what others think their organization is (reputation), and the contributions individuals can make to the success of the business considering their organization’s existing reputation. It is also about confidence—the knowledge that one can speak and write well, listen with great skill as others speak, and both seek out and provide the feedback essential to creating, managing, or changing their organization’s reputation.
The Major Channels of Management Communication Are Talking, Listening, Reading, and Writing
- Describe the roles that managers perform in organizations.
There are special communication roles that can be identified. Managers may serve as gatekeepers, liaisons, or opinion leaders. They can also assume some combination of these roles. It is important to recognize that communication processes involve people in different functions and that all functions need to operate effectively to achieve organizational objectives.
chapter review questions
- Describe the communication process.
- Why is feedback a critical part of the communication process?
- What are some things that managers can do to reduce noise in communication?
- Compare and contrast the three primary forms of interpersonal communication.
- Describe the various individual communication roles in organizations.
- How can managers better manage their effectiveness by managing e-mail communication?
- Which communication roles are most important in facilitating managerial effectiveness?
- Identify barriers to effective communication.
- How can barriers to effective communication be overcome by managers?
management skills application exercises
- The e-mails below are not written as clearly or concisely as they could be. In addition, they may have problems in organization or tone or mechanical errors. Rewrite them so they are appropriate for the audience and their purpose. Correct grammatical and mechanical errors. Finally, add a subject line to each.
E-Mail 1
To: Employees of The Enormously Successful Corporation
From: CEO of The Enormously Successful Corporation
Subject:
Stop bringing bottled soft drinks, juices and plastic straws to work. Its an environment problem that increases our waste and the quality of our water is great. People don’t realize how much wasted energy goes into shipping all that stuff around, and plastic bottles, aluminum cans and straws are ruining our oceans and filling land fills. Have you seen the floating island of waste in the Pacific Ocean? Some of this stuff comes from other countries like Canada Dry I think is from canada and we are taking there water and Canadians will be thirsty. Fancy drinks isn’t as good as the water we have and tastes better anyway.
E-Mail 2
To: All Employees
From: Management
Subject:
Our Committee to Improve Inter-Office Communication has decided that there needs to be an update and revision of our policy on emailing messages to and from those who work with us as employees of this company. The following are the results of the committee’s decisions, and constitute recommendations for the improvement of every aspect of email communication.
- Too much wordiness means people have to read the same thing over and over repeatedly, time after time. Eliminating unnecessary words, emails can be made to be shorter and more to the point, making them concise and taking less time to read.
- You are only allowed to send and receive messages between 8:30 AM east coat time and 4:30 PM east coast time. You are also not allowed to read e-mails outside of these times. We know that for those of you on the west coast or traveling internationally it will reduce the time that you are allowed to attend to e-mail, but we need this to get it under control.
- You are only allowed to have up to 3 recipients on each e-mail. If more people need to be informed it is up to the people to inform them.
- Write a self-evaluation that focuses specifically on your class participation in this course. Making comments during class allows you to improve your ability to speak extemporaneously, which is exactly what you will have to do in all kinds of business situations (e.g., meetings, asking questions at presentations, one-on-one conversations). Thus, write a short memo (two or three paragraphs) in which you describe the frequency with which you make comments in class, the nature of those comments, and what is easy and difficult for you when it comes to speaking up in class.
If you have made few (or no) comments during class, this is a time for us to come up with a plan to help you overcome your shyness. Our experience is that as soon as a person talks in front of a group once or twice, it becomes much easier—so we need to come up with a way to help you break the ice.
Finally, please comment on what you see as the strengths and weaknesses of your discussions and presentations in this class.
- Refer to the photo in Figure 16.2.1. Comment on the body language exhibited by each person at the meeting and how engaged they are in the communication.
- In the movie The Martian, astronaut Mark Watney (played by Matt Damon) is stranded on Mars with limited ability to communicate with mission control. Watney holds up questions to a camera that can transmit photographs of his questions, and mission control responds by pointing its own camera at a “yes” or “no” card. Eventually, they are able to exchange “text” messages but no voice exchanges. Also, there is a significant time delay between the sending and receipt of the messages. Which parts of the communication process would Watney and mission control have to address to ensure that messages are encoded and decoded accurately and that noise is minimized?
managerial decision exercises
- Ginni Rometty is the CEO of IBM. Shortly after taking on the role of CEO and being frustrated by the progress and sales performance, Rometty released a five-minute video to all 400,000-plus IBM employees criticizing the company for losing deals to competitors and lashing out at the sales organization for poor sales in the preceding quarter. Six months later, Rometty sent another critical message, this time via e-mail. How effective will the video and e-mail be in communicating with employees? How should she follow up on these messages?
- Social media, such as Facebook, is now widespread. Place yourself as a manager that has just received a “friend” request from one of your direct reports. Do you accept, reject, or ignore the request? Why, and what additional communication would you have regarding this with the employee?
- During a cross-functional meeting, one of the attendees who reports to a manager who is also at the meeting accuses one of your reports of not being fit for the position she is in. You disagree and feel that your report is a good fit for her role. How do you handle this?
Critical Thinking Case
Facebook, Inc.
Facebook has been in the news over criticism of its privacy policies, its sharing of customer information with Fusion GPS, and the attempts to use its platform to influence the 2016 election. In March 2014, Facebook released a study entitled “Experimental evidence of massive-scale emotional contagion through social networks.” It was published in the Proceedings of the National Academy of Sciences (PNAS), a prestigious, peer-reviewed scientific journal. The paper explains how social media can readily transfer emotional states from person to person through Facebook’s News Feed platform. Facebook conducted an experiment on members to see how people would respond to changes in the percentage of both positive and negative posts they saw. The results suggest that emotional contagion does occur online and that users’ positive expressions can generate positive reactions, while, in turn, negative expressions can generate negative reactions.
Facebook has two separate value propositions aimed at two different markets with entirely different goals.
Originally, Facebook’s main market was its end users—people looking to connect with family and friends. At first, it was aimed only at college students at a handful of elite schools. The site is now open to anyone with an Internet connection. Users can share status updates and photographs with friends and family. And all of this comes at no cost to the users.
Facebook’s other major market is advertisers, who buy information about Facebook’s users. The company regularly gathers data about page views and browsing behavior of users in order to display targeted advertisements to users for the benefit of its advertising partners.
The value proposition of the Facebook News Feed experiment was to determine whether emotional manipulation would be possible through the use of social networks. This clearly could be of great value to one of Facebook’s target audiences—its advertisers.
The results suggest that the emotions of friends on social networks influence our own emotions, thereby demonstrating emotional contagion via social networks. Emotional contagion is the tendency to feel and express emotions similar to and influenced by those of others. Originally, it was studied by psychologists as the transference of emotions between two people.
According to Sandra Collins, a social psychologist and University of Notre Dame professor of management, it is clearly unethical to conduct psychological experiments without the informed consent of the test subjects. While tests do not always measure what the people conducting the tests claim, the subjects need to at least know that they are, indeed, part of a test. The subjects of this test on Facebook were not explicitly informed that they were participating in an emotional contagion experiment. Facebook did not obtain informed consent as it is generally defined by researchers, nor did it allow participants to opt-out.
When information about the experiment was released, the media response was overwhelmingly critical. Tech blogs, newspapers, and media reports reacted quickly.
Josh Constine of TechCrunch wrote:
“ . . . there is some material danger to experiments that depress people. Some people who are at risk of depression were almost surely part of Facebook’s study group that were shown a more depressing feed, which could be considered dangerous. Facebook will endure a whole new level of backlash if any of those participants were found to have committed suicide or had other depression-related outcomes after the study.”
The New York Times quoted Brian Blau, a technology analyst with the research firm Gartner, “Facebook didn’t do anything illegal, but they didn’t do right by their customers. Doing psychological testing on people crosses the line.” Facebook should have informed its users, he said. “They keep on pushing the boundaries, and this is one of the reasons people are upset.”
While some of the researchers have since expressed some regret about the experiment, Facebook as a company was unapologetic about the experiment. The company maintained that it received consent from its users through its terms of service. A Facebook spokesperson defended the research, saying, “We do research to improve our services and make the content people see on Facebook as relevant and engaging as possible. . . . We carefully consider what research we do and have a strong internal review process.”
With the more recent events, Facebook is changing the privacy settings but still collects an enormous amount of information about its users and can use that information to manipulate what users see. Additionally, these items are not listed on Facebook’s main terms of service page. Users must click on a link inside a different set of terms to arrive at the data policy page, making these terms onerous to find. This positioning raises questions about how Facebook will employ its users’ behaviors in the future.
critical thinking questions
- How should Facebook respond to the 2014 research situation? How could an earlier response have helped the company avoid the 2018 controversies and keep the trust of its users?
- Should the company promise to never again conduct a survey of this sort? Should it go even further and explicitly ban research intended to manipulate the responses of its users?
- How can Facebook balance the concerns of its users with the necessity of generating revenue through advertising?
- What processes or structures should Facebook establish to make sure it does not encounter these issues again?
- Respond in writing to the issues presented in this case by preparing two documents: a communication strategy memo and a professional business letter to advertisers.
sources
Kramer, Adam; Guillory, Jamie; and Hancock, Jeffrey. “Experimental evidence of massive-scale emotional contagion through social networks,” PNAS (Proceedings of the National Academy of Sciences of the United States of America), March 25, 2014. www.pnas.org/content/111/24/8788.full;
Laja, Peep. “Useful Value Proposition Examples (and How to Create a Good One),” ConversionXL, 2015. conversionxl.com/value-propos...how-to-create/;
Yadav, Sid. “Facebook - The Complete Biography,” Mashable, Aug. 25, 2006. mashable.com/2006/08/25/faceb.../#orb9TmeYHiqK;
Felix, Samantha. “This Is How Facebook Is Tracking Your Internet Activity,” Business Insider, Sept. 9, 2012. http://www.businessinsider.com/this-...ctivity-2012-9
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Prince_Georges_Community_College/BMT_1550%3A_Supervision_(Perry_2021)/03%3A_Evaluate_the_various_communication_techniques_(phone_fax_e-mail_mail_face_to_face_etc.)_usedin_business_organizations_and_when_where_and_why_they_are_appropriate./3.01%3A_Managerial_Communication/3.1.06%3A_Summary",
"book_url": "https://commons.libretexts.org/book/biz-74553",
"title": "3.1.6: Summary",
"author": "OpenStax"
} |
3.2: The Roles Managers Play
Learning Objectives
- What are the roles that managers play in organizations?
In his seminal study of managers and their jobs, Mintzberg found that managerial work clustered around three core management roles.
Interpersonal Roles
Managers are required to interact with a substantial number of people in the course of a workweek. They host receptions; take clients and customers to dinner; meet with business prospects and partners; conduct hiring and performance interviews; and form alliances, friendships, and personal relationships with many others. Numerous studies have shown that such relationships are the richest source of information for managers because of their immediate and personal nature.14
Three of a manager’s roles arise directly from formal authority and involve basic interpersonal relationships. First is the figurehead role. As the head of an organizational unit, every manager must perform some ceremonial duties. In Mintzberg’s study, chief executives spent 12% of their contact time on ceremonial duties; 17% of their incoming mail dealt with acknowledgments and requests related to their status. One example is a company president who requested free merchandise for a handicapped schoolchild.15
Managers are also responsible for the work of the people in their unit, and their actions in this regard are directly related to their role as a leader. The influence of managers is most clearly seen, according to Mintzberg, in the leader role. Formal authority vests them with great potential power. Leadership determines, in large part, how much power they will realize.16
Does the leader’s role matter? Ask the employees of Chrysler Corporation (now DaimlerChrysler). When Lee Iacocca took over the company in the 1980s, the once-great auto manufacturer was near bankruptcy, teetering on the verge of extinction. He formed new relationships with the United Auto Workers, reorganized the senior management of the company, and—perhaps most importantly—convinced the U.S. federal government to guarantee a series of bank loans that would make the company solvent again. The loan guarantees, the union response, and the reaction of the marketplace were due in large measure to Iacocca’s leadership style and personal charisma. More recent examples include the return of Starbucks founder Howard Schultz to reenergize and steer his company, and Amazon CEO Jeff Bezos and his ability to innovate during a downturn in the economy.17
Popular management literature has had little to say about the liaison role until recently. This role, in which managers establish and maintain contacts outside the vertical chain of command, becomes especially important in view of the finding of virtually every study of managerial work that managers spend as much time with peers and other people outside of their units as they do with their own subordinates. Surprisingly, they spend little time with their own superiors. In Rosemary Stewart’s study, 160 British middle and top managers spent 47% of their time with peers, 41% of their time with people inside their unit, and only 12% of their time with superiors. Guest’s (1956) study of U.S. manufacturing supervisors revealed similar findings.18
Informational Roles
Managers are required to gather, collate, analyze, store, and disseminate many kinds of information. In doing so, they become information resource centers, often storing huge amounts of information in their own heads, moving quickly from the role of gatherer to the role of disseminator in minutes. Although many business organizations install large, expensive management information systems to perform many of those functions, nothing can match the speed and intuitive power of a well-trained manager’s brain for information processing. Not surprisingly, most managers prefer it that way.
As monitors, managers are constantly scanning the environment for information, talking with liaison contacts and subordinates, and receiving unsolicited information, much of it as a result of their network of personal contacts. A good portion of this information arrives in verbal form, often as gossip, hearsay, and speculation.
In the disseminator role, managers pass privileged information directly to subordinates, who might otherwise have no access to it. Managers must not only decide who should receive such information, but how much of it, how often, and in what form. Increasingly, managers are being asked to decide whether subordinates, peers, customers, business partners, and others should have direct access to information 24 hours a day without having to contact the manager directly.
In the spokesperson role, managers send information to people outside of their organizations: an executive makes a speech to lobby for an organizational cause, or a supervisor suggests a product modification to a supplier. Increasingly, managers are also being asked to deal with representatives of the news media, providing both factual and opinion-based responses that will be printed or broadcast to vast unseen audiences, often directly or with little editing. The risks in such circumstances are enormous, but so too are the potential rewards in terms of brand recognition, public image, and organizational visibility.
Decisional Roles
Ultimately, managers are charged with the responsibility of making decisions on behalf of both the organization and the stakeholders with an interest in it. Such decisions are often made under circumstances of high ambiguity and with inadequate information. Often, the other two managerial roles—interpersonal and informational—will assist a manager in making difficult decisions in which outcomes are not clear and interests are often conflicting.
In the role of entrepreneur, managers seek to improve their businesses, adapt to changing market conditions, and react to opportunities as they present themselves. Managers who take a longer-term view of their responsibilities are among the first to realize that they will need to reinvent themselves, their product and service lines, their marketing strategies, and their ways of doing business as older methods become obsolete and competitors gain advantage.
While the entrepreneur role describes managers who initiate change, the disturbance or crisis handler role depicts managers who must involuntarily react to conditions. Crises can arise because bad managers let circumstances deteriorate or spin out of control, but just as often good managers find themselves in the midst of a crisis that they could not have anticipated but must react to just the same.
The third decisional role of resource allocator involves managers making decisions about who gets what, how much, when, and why. Resources, including funding, equipment, human labor, office or production space, and even the boss’s time, are all limited, and demand inevitably outstrips supply. Managers must make sensible decisions about such matters while still retaining, motivating, and developing the best of their employees.
The final decisional role is that of negotiator. Managers spend considerable amounts of time in negotiations: over budget allocations, labor and collective bargaining agreements, and other formal dispute resolutions. In the course of a week, managers will often make dozens of decisions that are the result of brief but important negotiations between and among employees, customers and clients, suppliers, and others with whom managers must deal.19 A visual interpretation of the roles managers play is illustrated in Figure \(\PageIndex{3}\).
References:
14. Mintzberg, H. (1990). “The Manager’s Job: Folklore and Fact.” Harvard Business Review, March–April 1990, pp. 166–167.
15. Mintzberg, H. (1990). “The Manager’s Job: Folklore and Fact.” Harvard Business Review, March–April 1990, p. 167.
16. Mintzberg, H. (1990). “The Manager’s Job: Folklore and Fact.” Harvard Business Review, March–April 1990, p. 168.
17. McGregor, J. (2008). “Bezos: How Frugality Drives Innovation,” BusinessWeek, April 28, 2008, pp. 64–66.
18. Stewart, R. (1967). Managers and Their Jobs. London: Macmillan.
19. Mintzberg, H. (1990). “The Manager’s Job: Folklore and Fact.” Harvard Business Review, March–April 1990.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Prince_Georges_Community_College/BMT_1550%3A_Supervision_(Perry_2021)/03%3A_Evaluate_the_various_communication_techniques_(phone_fax_e-mail_mail_face_to_face_etc.)_usedin_business_organizations_and_when_where_and_why_they_are_appropriate./3.02%3A_The_Roles_Managers_Play",
"book_url": "https://commons.libretexts.org/book/biz-74553",
"title": "3.2: The Roles Managers Play",
"author": "OpenStax"
} |
3.5: Factors Affecting Communications and the Roles of Managers
Learning Objectives
- Understand how power, status, purpose, and interpersonal skills affect communications in organizations.
The Roles Managers Play
In his seminal study of managers and their jobs, Mintzberg found that managerial work clustered around three core management roles. 5
Interpersonal Roles
Managers are required to interact with a substantial number of people during a workweek. They host receptions; take clients and customers to dinner; meet with business prospects and partners; conduct hiring and performance interviews; and form alliances, friendships, and personal relationships with many others. Numerous studies have shown that such relationships are the richest source of information for managers because of their immediate and personal nature. 6
Three of a manager’s roles arise directly from formal authority and involve basic interpersonal relationships. First is the figurehead role. As the head of an organizational unit, every manager must perform some ceremonial duties. In Mintzberg’s study, chief executives spent 12% of their contact time on ceremonial duties; 17% of their incoming mail dealt with acknowledgments and requests related to their status. One example is a company president who requested free merchandise for a handicapped schoolchild. 7
Managers are also responsible for the work of the people in their unit, and their actions in this regard are directly related to their role as a leader. The influence of managers is most clearly seen, according to Mintzberg, in the leader role. Formal authority vests them with great potential power. Leadership determines, in large part, how much power they will realize.
Does the leader’s role matter? Ask the employees of Chrysler Corporation (now Fiat Chrysler). When Sergio Marchionne, who passed away in 2018, took over the company in the wake of the financial crisis, the once-great auto manufacturer was in bankruptcy, teetering on the verge of extinction. He formed new relationships with the United Auto Workers, reorganized the senior management of the company, and—perhaps most importantly—convinced the U.S. federal government to guarantee a series of bank loans that would make the company solvent again. The loan guarantees, the union response, and the reaction of the marketplace, especially for the Jeep brand, were due in large measure to Marchionne’s leadership style and personal charisma. More recent examples include the return of Starbucks founder Howard Schultz to reenergize and steer his company and Amazon CEO Jeff Bezos and his ability to innovate during a downturn in the economy. 8
Popular management literature has had little to say about the liaison role until recently. This role, in which managers establish and maintain contacts outside the vertical chain of command, becomes especially important in view of the finding of virtually every study of managerial work that managers spend as much time with peers and other people outside of their units as they do with their own subordinates. Surprisingly, they spend little time with their own superiors. In Rosemary Stewart’s (1967) study, 160 British middle and top managers spent 47% of their time with peers, 41% of their time with people inside their unit, and only 12% of their time with superiors. Guest’s (1956) study of U.S. manufacturing supervisors revealed similar findings.
Informational Roles
Managers are required to gather, collate, analyze, store, and disseminate many kinds of information. In doing so, they become information resource centers, often storing huge amounts of information in their own heads, moving quickly from the role of gatherer to the role of disseminator in minutes. Although many business organizations install large, expensive management information systems to perform many of those functions, nothing can match the speed and intuitive power of a well-trained manager’s brain for information processing. Not surprisingly, most managers prefer it that way.
As monitors, managers are constantly scanning the environment for information, talking with liaison contacts and subordinates, and receiving unsolicited information, much of it because of their network of personal contacts. A good portion of this information arrives in verbal form, often as gossip, hearsay, and speculation. 9
In the disseminator role, managers pass privileged information directly to subordinates, who might otherwise have no access to it. Managers must decide not only who should receive such information, but how much of it, how often, and in what form. Increasingly, managers are being asked to decide whether subordinates, peers, customers, business partners, and others should have direct access to information 24 hours a day without having to contact the manager directly. 10
In the spokesperson role, managers send information to people outside of their organizations: an executive makes a speech to lobby for an organizational cause, or a supervisor suggests a product modification to a supplier. Increasingly, managers are also being asked to deal with representatives of the news media, providing both factual and opinion-based responses that will be printed or broadcast to vast unseen audiences, often directly or with little editing. The risks in such circumstances are enormous, but so too are the potential rewards in terms of brand recognition, public image, and organizational visibility. 11
Decisional Roles
Ultimately, managers are charged with the responsibility of making decisions on behalf of both the organization and the stakeholders with an interest in it. Such decisions are often made under circumstances of high ambiguity and with inadequate information. Often, the other two managerial roles—interpersonal and informational—will assist a manager in making difficult decisions in which outcomes are not clear and interests are often conflicting.
In the role of entrepreneur, managers seek to improve their businesses, adapt to changing market conditions, and react to opportunities as they present themselves. Managers who take a longer-term view of their responsibilities are among the first to realize that they will need to reinvent themselves, their product and service lines, their marketing strategies, and their ways of doing business as older methods become obsolete and competitors gain advantage.
While the entrepreneur role describes managers who initiate change, the disturbance or crisis handler role depicts managers who must involuntarily react to conditions. Crises can arise because bad managers let circumstances deteriorate or spin out of control, but just as often good managers find themselves in the midst of a crisis that they could not have anticipated but must react to just the same. 12
The third decisional role of resource allocator involves managers making decisions about who gets what, how much, when, and why. Resources, including funding, equipment, human labor, office or production space, and even the boss’s time, are all limited, and demand inevitably outstrips supply. Managers must make sensible decisions about such matters while still retaining, motivating, and developing the best of their employees.
The final decisional role is that of negotiator. Managers spend considerable amounts of time in negotiations: over budget allocations, labor and collective bargaining agreements, and other formal dispute resolutions. During a week, managers will often make dozens of decisions that are the result of brief but important negotiations between and among employees, customers and clients, suppliers, and others with whom managers must deal. 13
concept check
- What are the major roles that managers play in communicating with employees?
- Why are negotiations often brought into communications by managers?
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Prince_Georges_Community_College/BMT_1550%3A_Supervision_(Perry_2021)/03%3A_Evaluate_the_various_communication_techniques_(phone_fax_e-mail_mail_face_to_face_etc.)_usedin_business_organizations_and_when_where_and_why_they_are_appropriate./3.05%3A_Factors_Affecting_Communications_and_the_Roles_of_Managers",
"book_url": "https://commons.libretexts.org/book/biz-74553",
"title": "3.5: Factors Affecting Communications and the Roles of Managers",
"author": "OpenStax"
} |
4.2: Problem Solving and Decision Making in Groups
Learning Objectives
- Discuss the common components and characteristics of problems.
- Explain the five steps of the group problem-solving process.
- Describe the brainstorming and discussion that should take place before the group makes a decision.
- Compare and contrast the different decision-making techniques.
- Discuss the various influences on decision making.
Although the steps of problem solving and decision making that we will discuss next may seem obvious, we often forget to use them or choose not to. Instead, we start working on a problem and later realize we are lost and have to backtrack. I’m sure we’ve all reached a point in a project or task and had the “OK, now what?” moment. In this section, we will discuss the group problem-solving process, methods of decision making, and influences on these processes.
Group Problem Solving
The problem-solving process involves thoughts, discussions, actions, and decisions that occur from the first consideration of a problematic situation to the goal. The problems that groups face are varied, but some common problems include budgeting funds, raising funds, planning events, addressing customer or citizen complaints, creating or adapting products or services to fit needs, supporting members, putting together a presentation, and raising awareness about issues or causes.
Problems of all sorts have three common components (Adams & Galanes, 2009):
- An undesirable situation. When conditions are desirable, there isn’t a problem.
- A desired situation. Even though it may only be a vague idea, there is a drive to better the undesirable situation. The vague idea may develop into a more precise goal that can be achieved, although solutions are not yet generated.
- Obstacles between the undesirable and desirable situations. These are things that stand in the way between the current situation and the group’s goal of addressing it. This component of a problem requires the most work, and it is the part where decision making occurs. Some examples of obstacles include limited funding, resources, personnel, time, or information. Obstacles can also take the form of people who are working against the group, including people resistant to change or people who disagree.
Discussion of these three elements of a problem helps the group tailor its problem-solving process, as each problem will vary. While these three general elements are present in each problem, the group should also address specific characteristics of the problem. Five common and important characteristics to consider are task difficulty, number of possible solutions, group member interest in problem, group member familiarity with problem, and the need for solution acceptance (Adams & Galanes, 2009).
- Task difficulty. Difficult tasks are also typically more complex. Groups should be prepared to spend time researching and discussing a difficult and complex task in order to develop a shared foundational knowledge. This typically requires individual work outside of the group and frequent group meetings to share information. This is common in group presentations.
- Number of possible solutions. There are usually multiple ways to solve a problem or complete a task, but some problems have more potential solutions than others. Figuring out how to prepare a beach house for an approaching hurricane is fairly complex and difficult, but there are still a limited number of things to do—for example, taping and boarding up windows; turning off water, electricity, and gas; trimming trees; and securing loose outside objects. Other problems may be more creatively based. For example, putting together a relevant and interesting group presentation entails working out practical specifics as well as exploring creative options.
- Group member interest in problem. When group members are interested in the problem, they will be more engaged with the problem-solving process and invested in finding a quality solution. Groups with high interest in and knowledge about the problem may want more freedom to develop and implement solutions, while groups with low interest may prefer a leader who provides structure and direction.
- Group familiarity with problem. Some groups encounter a problem regularly, while other problems are more unique or unexpected. A family who has lived in hurricane alley for decades probably has a better idea of how to prepare its house for a hurricane than does a family that just recently moved from the Midwest. Many groups that rely on funding have to revisit a budget every year, and in recent years, groups have had to get more creative with budgets as funding has been cut in nearly every sector. When group members aren’t familiar with a problem, they will need to do background research on what similar groups have done and may also need to bring in outside experts. For a group presentation for your communication class, your instructor can definitely serve as an "outside expert."
- Need for solution acceptance. Here, groups must consider how many people the decision will affect and how much “buy-in” from others the group needs in order for their solution to be successfully implemented. Some small groups have many stakeholders on whom the success of a solution depends. Other groups are answerable only to themselves. When a small group is planning on building a new park in a crowded neighborhood or implementing a new policy in a large business, it can be very difficult to develop solutions that will be accepted by all. In such cases, groups will want to poll those who will be affected by the solution and may want to do a pilot implementation to see how people react. Imposing an excellent solution that doesn’t have buy-in from stakeholders can still lead to failure.
Group Problem-Solving Process
There are several variations of similar problem-solving models based on US American scholar John Dewey’s reflective thinking process (Bormann & Bormann, 1988). As you read through the steps in the process, think about how you can apply what we learned regarding the general and specific elements of problems. Some of the following steps are straightforward, and they are things we would logically do when faced with a problem. However, taking a deliberate and systematic approach to problem solving has been shown to benefit group functioning and performance. A deliberate approach is especially beneficial for groups that do not have an established history of working together and will only be able to meet occasionally. Although a group should attend to each step of the process, group leaders or other group members who facilitate problem solving should be cautious not to dogmatically follow each element of the process or force a group along. Such a lack of flexibility could limit group member input and negatively affect the group’s cohesion and climate.
Step 1: Define the Problem
Define the problem by considering the three elements shared by every problem: the current undesirable situation, the goal or more desirable situation, and obstacles in the way (Adams & Galanes, 2009). At this stage, group members share what they know about the current situation, without proposing solutions or evaluating the information. Here are some good questions to ask during this stage: What is the current difficulty? How did we come to know that the difficulty exists? Who/what is involved? Why is it meaningful/urgent/important? What have the effects been so far? What, if any, elements of the difficulty require clarification? At the end of this stage, the group should be able to compose a single sentence that summarizes the problem called a problem statement. Avoid wording in the problem statement or question that hints at potential solutions. A small group formed to investigate ethical violations of college officials could use the following problem statement: “Our college does not currently have a mechanism for students to report suspected ethical violations by college officials.”
Step 2: Analyze the Problem
During this step a group should analyze the problem and the group’s relationship to the problem. Whereas the first step involved exploring the “what” related to the problem, this step focuses on the “why.” At this stage, group members can discuss the potential causes of the difficulty. Group members may also want to begin setting out an agenda or timeline for the group’s problem-solving process, looking forward to the other steps. To fully analyze the problem, the group can discuss the five common problem variables discussed before. Here are two examples of questions that the group formed to address ethics violations might ask: Why doesn’t our college have an ethics reporting mechanism? Do colleges of similar size have such a mechanism? Once the problem has been analyzed, the group can pose a problem question that will guide the group as it generates possible solutions. “How can students report suspected ethical violations of college officials and how will such reports be processed and addressed?” As you can see, the problem question is more complex than the problem statement, since the group has moved on to more in-depth discussion of the problem during step 2.
Step 3: Generate Possible Solutions
During this step, group members generate possible solutions to the problem. Again, solutions should not be evaluated at this point, only proposed and clarified. The question should be what could we do to address this problem, not what should we do to address it. It is perfectly OK for a group member to question another person’s idea by asking something like “What do you mean?” or “Could you explain your idea more?” Discussions at this stage may reveal a need to return to previous steps to better define or more fully analyze a problem. Since many problems are multifaceted, it is necessary for group members to generate solutions for each part of the problem separately, making sure to have multiple solutions for each part. Stopping the solution-generating process prematurely can lead to groupthink. For the problem question previously posed, the group would need to generate solutions for all three parts of the problem included in the question. Possible solutions for the first part of the problem (How can students report ethical violations?) may include “online reporting system, e-mail, in-person, anonymously, on-the-record,” and so on. Possible solutions for the second part of the problem (How will reports be processed?) may include “daily by a newly appointed ethics officer, weekly by a nonpartisan nongovernment employee,” and so on. Possible solutions for the third part of the problem (How will reports be addressed?) may include “by a newly appointed ethics committee, by the accused’s dean, by the college president,” and so on.
Step 4: Evaluate Solutions
During this step, solutions can be critically evaluated based on their credibility, completeness, and worth. Once the potential solutions have been narrowed based on more obvious differences in relevance and/or merit, the group should analyze each solution based on its potential effects—especially negative effects. Groups that are required to report the rationale for their decision or whose decisions may be subject to public scrutiny would be wise to make a set list of criteria for evaluating each solution. Additionally, solutions can be evaluated based on how well they fit with the group’s charge and the abilities of the group. To do this, group members may ask, “Does this solution live up to the original purpose or mission of the group?” and “Can the solution actually be implemented with our current time/resource/people restraints?” and “How will this solution be supported, funded, enforced, and assessed?” Secondary tensions and substantive conflict, two concepts discussed earlier, emerge during this step of problem solving, and group members will need to employ effective critical thinking and listening skills.
Decision making is part of the larger process of problem solving and it plays a prominent role in this step. While there are several fairly similar models for problem solving, there are many varied decision-making techniques that groups can use. For example, to narrow the list of proposed solutions, group members may decide by majority vote, by weighing the pros and cons, or by discussing them until a consensus is reached. There are also more complex decision-making models like the “six hats method,” which we will discuss later. Once the final decision is reached, the group leader or facilitator should confirm that the group is in agreement. It may be beneficial to let the group break for a while or even to delay the final decision until a later meeting to allow people time to evaluate it outside of the group context.
Step 5: Implement and Assess the Solution
Implementing the solution requires some advanced planning, and it should not be rushed unless the group is operating under strict time restraints or delay may lead to some kind of harm. Although some solutions can be implemented immediately, others may take days, months, or years. As was noted earlier, it may be beneficial for groups to poll those who will be affected by the solution as to their opinion of it or even to do a pilot test to observe the effectiveness of the solution and how people react to it. Before implementation, groups should also determine how and when they would assess the effectiveness of the solution by asking, “How will we know if the solution is working or not?” Since solution assessment will vary based on whether or not the group is disbanded, groups should also consider the following questions: If the group disbands after implementation, who will be responsible for assessing the solution? If the solution fails, will the same group reconvene or will a new group be formed?
Certain elements of the solution may need to be delegated out to various people inside and outside the group. Group members may also be assigned to implement a particular part of the solution based on their role in the decision making or because it connects to their area of expertise. Likewise, group members may be tasked with publicizing the solution or “selling” it to a particular group of stakeholders. Last, the group should consider its future. In some cases, the group will get to decide if it will stay together and continue working on other tasks or if it will disband. In other cases, outside forces determine the group’s fate.
“Getting Competent”
Problem Solving and Group Presentations
Giving a group presentation requires that individual group members and the group as a whole solve many problems and make many decisions. Although having more people involved in a presentation increases logistical difficulties and has the potential to create more conflict, a well-prepared and well-delivered group presentation can be more engaging and effective than a typical presentation. The main problems facing a group giving a presentation are (1) dividing responsibilities, (2) coordinating schedules and time management, and (3) working out the logistics of the presentation delivery.
In terms of dividing responsibilities, assigning individual work at the first meeting and then trying to fit it all together before the presentation (which is what many college students do when faced with a group project) is not the recommended method. Integrating content and visual aids created by several different people into a seamless final product takes time and effort, and the person “stuck” with this job at the end usually ends up developing some resentment toward his or her group members. While it’s OK for group members to do work independently outside of group meetings, spend time working together to help set up some standards for content and formatting expectations that will help make later integration of work easier. Taking the time to complete one part of the presentation together can help set those standards for later individual work. Discuss the roles that various group members will play openly so there isn’t role confusion. There could be one point person for keeping track of the group’s progress and schedule, one point person for communication, one point person for content integration, one point person for visual aids, and so on. Each person shouldn’t do all that work on his or her own but help focus the group’s attention on his or her specific area during group meetings (Stanton, 2009).
Scheduling group meetings is one of the most challenging problems groups face, given people’s busy lives. From the beginning, it should be clearly communicated that the group needs to spend considerable time in face-to-face meetings, and group members should know that they may have to make an occasional sacrifice to attend. Especially important is the commitment to scheduling time to rehearse the presentation. Consider creating a contract of group guidelines that includes expectations for meeting attendance to increase group members’ commitment.
Group presentations require members to navigate many logistics of their presentation. While it may be easier for a group to assign each member to create a five-minute segment and then transition from one person to the next, this is definitely not the most engaging method. Creating a master presentation and then assigning individual speakers creates a more fluid and dynamic presentation and allows everyone to become familiar with the content, which can help if a person doesn’t show up to present. Once the content of the presentation is complete, figure out introductions, transitions, visual aids, and the use of time and space (Stanton, 2012). In terms of introductions, figure out if one person will introduce all the speakers at the beginning, if speakers will introduce themselves at the beginning, or if introductions will occur as the presentation progresses. In terms of transitions, make sure each person has included in his or her speaking notes when presentation duties switch from one person to the next. Visual aids have the potential to cause hiccups in a group presentation if they aren’t fluidly integrated. Practicing with visual aids and having one person control them may help prevent this. Know how long your presentation is and know how you’re going to use the space. Presenters should know how long the whole presentation should be and how long each of their segments should be so that everyone can share the responsibility of keeping time. Also consider the size and layout of the presentation space. You don’t want presenters huddled in a corner until it’s their turn to speak or trapped behind furniture when their turn comes around.
- What do you think are the major challenges facing members of a group tasked with developing and presenting a group presentation? What have been some of the problems you have faced in previous group presentations and how do you think they could have been avoided?
Decision Making in Groups
We all engage in personal decision making daily, and we all know that some decisions are more difficult or significant than others. When we make decisions in groups, we face some challenges that we do not face in our personal decision making, but we also stand to benefit from some advantages of group decision making (Napier & Gershenfeld, 2004). Group decision making can appear fair and democratic but really only be a gesture that covers up the fact that certain group members or the group leader have already decided. Group decision making also takes more time than individual decisions and can be burdensome if some group members do not do their assigned work, divert the group with self-centered or unproductive role behaviors, or miss meetings. Conversely, though, group decisions are often more informed, since all group members develop a shared understanding of a problem through discussion and debate. The shared understanding may also be more complex and deep than what an individual would develop, because the group members are exposed to a variety of viewpoints that can broaden their own perspectives. Group decisions also benefit from synergy, one of the key advantages of group communication that we discussed earlier. Most groups do not use a specific method of decision making, perhaps thinking that they’ll work things out as they go. This can lead to unequal participation, social loafing, premature decisions, prolonged discussion, and a host of other negative consequences. So in this section we will learn some practices that will prepare us for good decision making and some specific techniques we can use to help us reach a final decision.
Brainstorming before Decision Making
Before groups can make a decision, they need to generate possible solutions to their problem. The most commonly used method is brainstorming, although most people don’t follow the recommended steps of brainstorming. As you’ll recall, brainstorming refers to the quick generation of ideas free of evaluation. The originator of the term brainstorming said the following four rules must be followed for the technique to be effective (Osborn, 1959):
- Evaluation of ideas is forbidden.
- Wild and crazy ideas are encouraged.
- Quantity of ideas, not quality, is the goal.
- New combinations of ideas presented are encouraged.
To make brainstorming more of a decision-making method rather than an idea-generating method, group communication scholars have suggested additional steps that precede and follow brainstorming (Cragan & Wright, 1991).
- Do a warm-up brainstorming session. Some people are more apprehensive about publicly communicating their ideas than others are, and a warm-up session can help ease apprehension and prime group members for task-related idea generation. The warm-up can be initiated by anyone in the group and should only go on for a few minutes. To get things started, a person could ask, “If our group formed a band, what would we be called?” or “What other purposes could a mailbox serve?” In the previous examples, the first warm-up gets the group’s more abstract creative juices flowing, while the second focuses more on practical and concrete ideas.
- Do the actual brainstorming session. This session shouldn’t last more than thirty minutes and should follow the four rules of brainstorming mentioned previously. To ensure that the fourth rule is realized, the facilitator could encourage people to piggyback off each other’s ideas.
- Eliminate duplicate ideas. After the brainstorming session is over, group members can eliminate (without evaluating) ideas that are the same or very similar; a minimal sketch of this step follows the list.
- Clarify, organize, and evaluate ideas. Before evaluation, see if any ideas need clarification. Then try to theme or group ideas together in some orderly fashion. Since “wild and crazy” ideas are encouraged, some suggestions may need clarification. If it becomes clear that there isn’t really a foundation to an idea and that it is too vague or abstract and can’t be clarified, it may be eliminated. As a caution though, it may be wise to not throw out off-the-wall ideas that are hard to categorize and to instead put them in a miscellaneous or “wild and crazy” category.
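To make the duplicate-elimination step concrete, here is a minimal sketch in Python. It is only an illustration, not part of the brainstorming literature: the normalization rules (lowercasing, stripping punctuation and extra whitespace) are assumptions, and a real group would still judge “very similar” ideas by discussion rather than by string matching.

```python
# Minimal sketch: eliminate duplicate brainstormed ideas without evaluating them.
# The normalization rules below are illustrative assumptions; groups would still
# discuss borderline cases.
import string

_PUNCT = str.maketrans("", "", string.punctuation)

def deduplicate_ideas(ideas):
    seen = set()
    unique = []
    for idea in ideas:
        # Normalize so trivially different phrasings collapse together.
        key = " ".join(idea.lower().translate(_PUNCT).split())
        if key not in seen:
            seen.add(key)
            unique.append(idea)  # keep the first wording as-is
    return unique

ideas = ["Online reporting system", "online reporting system!", "E-mail", "In-person"]
print(deduplicate_ideas(ideas))  # ['Online reporting system', 'E-mail', 'In-person']
```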
Discussion before Decision Making
The nominal group technique guides decision making through a four-step process that includes idea generation and evaluation and seeks to elicit equal contributions from all group members (Delbecq & Van de Ven, 1971). This method is useful because the procedure involves all group members systematically, which fixes the problem of uneven participation during discussions. Since everyone contributes to the discussion, this method can also help reduce instances of social loafing. To use the nominal group technique, do the following:
- Silently and individually list ideas.
- Create a master list of ideas.
- Clarify ideas as needed.
- Take a secret vote to rank group members’ acceptance of ideas.
During the first step, have group members work quietly, in the same space, to write down every idea they have to address the task or problem they face. This shouldn’t take more than twenty minutes. Whoever is facilitating the discussion should remind group members to use brainstorming techniques, which means they shouldn’t evaluate ideas as they are generated. Ask group members to remain silent once they’ve finished their list so they do not distract others.
During the second step, the facilitator goes around the group in a consistent order asking each person to share one idea at a time. As the idea is shared, the facilitator or recorder records it on a master list that everyone can see. Keep track of how many times each idea comes up, as that could be an idea that warrants more discussion. Continue this process until all the ideas have been shared. As a note to facilitators, some group members may begin to edit their list or self-censor when asked to provide one of their ideas. To limit a person’s apprehension with sharing his or her ideas and to ensure that each idea is shared, the leader can ask group members to exchange lists with someone else so they can share ideas from the list they receive without fear of being personally judged.
During step three, the facilitator should note that group members can now ask for clarification on ideas on the master list. Do not let this discussion stray into evaluation of ideas. To help avoid an unnecessarily long discussion, it may be useful to go from one person to the next to ask which ideas need clarifying and then go to the originator(s) of the idea in question for clarification.
During the fourth step, members use a voting ballot to rank the acceptability of the ideas on the master list. If the list is long, you may ask group members to rank only their top five or so choices. The facilitator then takes up the secret ballots and reviews them in a random order, noting the rankings of each idea. Ideally, the highest ranked ideas can then be discussed. The nominal group technique does not carry a group all the way through to the point of decision; rather, it sets the group up for a roundtable discussion or use of some other method to evaluate the merits of the top ideas.
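The tallying in this fourth step lends itself to a simple calculation. The sketch below is a hypothetical illustration, assuming a Borda-style scoring rule in which an idea earns more points the higher it is ranked on a ballot; the nominal group technique itself does not prescribe a particular aggregation formula.

```python
# Minimal sketch of aggregating secret ranked ballots (nominal group technique,
# step 4). Scoring rule is a Borda-style assumption: on a ballot ranking k ideas,
# the 1st choice earns k points, the 2nd k-1, and so on.
from collections import defaultdict

def aggregate_ballots(ballots):
    scores = defaultdict(int)
    for ballot in ballots:               # each ballot is a list, best idea first
        k = len(ballot)
        for position, idea in enumerate(ballot):
            scores[idea] += k - position
    # Highest-scoring ideas first; these seed the follow-up discussion.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

ballots = [
    ["online system", "e-mail", "in-person"],
    ["e-mail", "online system", "in-person"],
    ["online system", "in-person", "e-mail"],
]
for idea, score in aggregate_ballots(ballots):
    print(f"{idea}: {score}")   # online system: 8, e-mail: 6, in-person: 4
```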
Specific Decision-Making Techniques
Some decision-making techniques involve determining a course of action based on the level of agreement among the group members. These methods include majority, expert, authority, and consensus rule. Table \(\PageIndex{1}\), “Pros and Cons of Agreement-Based Decision-Making Techniques,” reviews the pros and cons of each of these methods.
Majority rule is a commonly used decision-making technique in which a majority (one-half plus one) must agree before a decision is made. A show-of-hands vote, a paper ballot, or an electronic voting system can determine the majority choice. Many decision-making bodies, including the US House of Representatives, Senate, and Supreme Court, use majority rule to make decisions, which shows that it is often associated with democratic decision making, since each person gets one vote and each vote counts equally. Of course, other individuals and mediated messages can influence a person’s vote, but since the voting power is spread out over all group members, it is not easy for one person or party to take control of the decision-making process. In some cases—for example, to override a presidential veto or to amend the constitution—a supermajority of two-thirds may be required to make a decision.
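The majority and supermajority thresholds are simple arithmetic, and a small sketch can make them explicit. This is only an illustration; how abstentions are treated is an assumption (here they count toward the total).

```python
# Minimal sketch: check majority (strictly more than half) and two-thirds
# supermajority thresholds for a yes/no vote. Thresholds come from the text;
# counting abstentions toward the total is an illustrative assumption.
def vote_passes(yes, no, abstain=0, supermajority=False):
    total = yes + no + abstain
    if supermajority:
        return yes * 3 >= total * 2      # at least two-thirds of all votes
    return yes * 2 > total               # strictly more than one-half

print(vote_passes(6, 4))                      # True: 6 of 10 is a majority
print(vote_passes(6, 4, supermajority=True))  # False: 6 of 10 is below two-thirds
print(vote_passes(7, 3, supermajority=True))  # True: 7 of 10 meets two-thirds
```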
Minority rule is a decision-making technique in which a designated authority or expert has final say over a decision and may or may not consider the input of other group members. When a designated expert makes a decision by minority rule, there may be buy-in from others in the group, especially if the members of the group didn’t have relevant knowledge or expertise. When a designated authority makes decisions, buy-in will vary based on group members’ level of respect for the authority. For example, decisions made by an elected authority may be more accepted by those who elected him or her than by those who didn’t. As with majority rule, this technique can be time saving. Unlike majority rule, one person or party can have control over the decision-making process. This type of decision making is more similar to that used by monarchs and dictators. An obvious negative consequence of this method is that the needs or wants of one person can override the needs and wants of the majority. A minority deciding for the majority has led to negative consequences throughout history. The white Afrikaner minority that ruled South Africa for decades instituted apartheid, which was a system of racial segregation that disenfranchised and oppressed the majority population. The quality of the decision and its fairness really depends on the designated expert or authority.
Consensus rule is a decision-making technique in which all members of the group must agree on the same decision. On rare occasions, a decision may be ideal for all group members, which can lead to unanimous agreement without further debate and discussion. Although this can be positive, be cautious that this isn’t a sign of groupthink. More typically, consensus is reached only after lengthy discussion. On the plus side, consensus often leads to high-quality decisions due to the time and effort it takes to get everyone in agreement. Group members are also more likely to be committed to the decision because of their investment in reaching it. On the negative side, the ultimate decision is often one that all group members can live with but not one that’s ideal for all members. Additionally, the process of arriving at consensus also includes conflict, as people debate ideas and negotiate the interpersonal tensions that may result.
Table \(\PageIndex{1}\): Pros and Cons of Agreement-Based Decision-Making Techniques
| Decision-Making Technique | Pros | Cons |
|---|---|---|
| Majority rule | Quick and efficient; each vote counts equally, so it is seen as democratic; voting power is spread across all members, so no one person or party can easily take control | A close vote can leave a large minority unsatisfied, which may weaken commitment to the decision |
| Minority rule by expert | Saves time; can yield a quality decision when group members lack relevant knowledge or expertise | Decision quality and fairness depend entirely on the designated expert; input of other members may be ignored |
| Minority rule by authority | Saves time; useful when a decision must be made quickly | One person or party controls the process; buy-in varies with members’ respect for the authority; the needs or wants of one person can override those of the majority |
| Consensus rule | Often produces high-quality decisions because of the time and effort invested; members are more committed to a decision they helped reach | Time consuming; involves conflict and negotiation of interpersonal tensions; the final decision may be one members can merely live with rather than anyone’s ideal |
“Getting Critical”
Six Hats Method of Decision Making
Edward de Bono developed the Six Hats method of thinking in 1985, and it has since become a regular feature in decision-making training in business and professional contexts (de Bono, 1985). The method’s popularity lies in its ability to help people get out of habitual ways of thinking and to allow group members to play different roles and see a problem or decision from multiple points of view. The basic idea is that each of the six hats represents a different way of thinking, and when we figuratively switch hats, we switch the way we think. The hats and their style of thinking are as follows:
- White hat. Objective—focuses on seeking information such as data and facts and then processes that information in a neutral way.
- Red hat. Emotional—uses intuition, gut reactions, and feelings to judge information and suggestions.
- Black hat. Negative—focuses on potential risks, points out possibilities for failure, and evaluates information cautiously and defensively.
- Yellow hat. Positive—is optimistic about suggestions and future outcomes, gives constructive and positive feedback, points out benefits and advantages.
- Green hat. Creative—tries to generate new ideas and solutions, thinks “outside the box.”
- Blue hat. Philosophical—uses metacommunication to organize and reflect on the thinking and communication taking place in the group, facilitates who wears what hat and when group members change hats.
Specific sequences or combinations of hats can be used to encourage strategic thinking. For example, the group leader may start off wearing the Blue Hat and suggest that the group start their decision-making process with some “White Hat thinking” in order to process through facts and other available information. During this stage, the group could also process through what other groups have done when faced with a similar problem. Then the leader could begin an evaluation sequence starting with two minutes of “Yellow Hat thinking” to identify potential positive outcomes, then “Black Hat thinking” to allow group members to express reservations about ideas and point out potential problems, then “Red Hat thinking” to get people’s gut reactions to the previous discussion, then “Green Hat thinking” to identify other possible solutions that are more tailored to the group’s situation or completely new approaches. At the end of a sequence, the Blue Hat would want to summarize what was said and begin a new sequence. To successfully use this method, the person wearing the Blue Hat should be familiar with different sequences and plan some of the thinking patterns ahead of time based on the problem and the group members. Each round of thinking should be limited to a certain time frame (two to five minutes) to keep the discussion moving.
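A facilitator who plans thinking patterns ahead of time, as recommended above, can lay a sequence out as simple data. The sketch below encodes the example sequence from this paragraph; the per-round minutes are illustrative assumptions within the suggested two-to-five-minute window.

```python
# Minimal sketch: a facilitator's pre-planned Six Hats agenda. The sequence
# mirrors the example in the text (Blue -> White -> Yellow -> Black -> Red ->
# Green -> Blue); the per-round minutes are illustrative assumptions.
SEQUENCE = [
    ("Blue",   2, "Frame the task and assign the starting hat"),
    ("White",  5, "Lay out facts, data, and what similar groups have done"),
    ("Yellow", 2, "Identify potential positive outcomes"),
    ("Black",  3, "Voice reservations and potential problems"),
    ("Red",    2, "Share gut reactions to the discussion so far"),
    ("Green",  4, "Generate tailored or completely new solutions"),
    ("Blue",   2, "Summarize and decide whether to start a new sequence"),
]

def print_agenda(sequence):
    elapsed = 0
    for hat, minutes, focus in sequence:
        print(f"{elapsed:>3} min  {hat:<6} ({minutes} min): {focus}")
        elapsed += minutes

print_agenda(SEQUENCE)
```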
- This decision-making method has been praised because it allows group members to “switch gears” in their thinking and allows for role playing, which lets people express ideas more freely. How can this help enhance critical thinking? Which combination of hats do you think would be best for a critical thinking sequence?
- What combinations of hats might be useful if the leader wanted to break the larger group up into pairs and why? For example, what kind of thinking would result from putting Yellow and Red together, Black and White together, or Red and White together, and so on?
- Based on your preferred ways of thinking and your personality, which hat would be the best fit for you? Which would be the most challenging? Why?
Influences on Decision Making
Many factors influence the decision-making process. For example, how might a group’s independence or access to resources affect the decisions they make? What potential advantages and disadvantages come with decisions made by groups that are more or less similar in terms of personality and cultural identities? In this section, we will explore how situational, personality, and cultural influences affect decision making in groups.
Situational Influences on Decision Making
A group’s situational context affects decision making. One key situational element is the degree of freedom that the group has to make its own decisions, secure its own resources, and initiate its own actions. Some groups have to go through multiple approval processes before they can do anything, while others are self-directed, self-governing, and self-sustaining. Another situational influence is uncertainty. In general, groups deal with more uncertainty in decision making than do individuals because of the increased number of variables that comes with adding more people to a situation. Individual group members can’t know what other group members are thinking, whether or not they are doing their work, and how committed they are to the group. So the size of a group is a powerful situational influence, as it adds to uncertainty and complicates communication.
Access to information also influences a group. First, the nature of the group’s task or problem affects its ability to get information. Group members can more easily make decisions about a problem when other groups have similarly experienced it. Even if the problem is complex and serious, the group can learn from other situations and apply what it learns. Second, the group must have access to flows of information. Access to archives, electronic databases, and individuals with relevant experience is necessary to obtain any relevant information about similar problems or to do research on a new or unique problem. In this regard, group members’ formal and informal network connections also become important situational influences.
The origin and urgency of a problem are also situational factors that influence decision making. In terms of origin, problems usually occur in one of four ways:
- Something goes wrong. Group members must decide how to fix or stop something. Example—a group member consistently is not following through with what s/he is expected to do.
- Expectations change or increase. Group members must innovate more efficient or effective ways of doing something. Example—an English learner does not understand what the rest of the group members are talking about. The group's discussion needs to take that into consideration.
- Something goes wrong and expectations change or increase. Group members must fix/stop and become more efficient/effective. Example—a group member's laptop crashes and s/he is not able to do research at home temporarily.
- The problem existed from the beginning. Group members must go back to the origins of the situation and walk through and analyze the steps again to decide what can be done differently. Example—the group's topic for its presentation is not relevant and/or interesting.
In each of the cases, the need for a decision may be more or less urgent depending on how badly something is going wrong, how high the expectations have been raised, or the degree to which people are fed up with a broken system. Decisions must be made in situations ranging from crisis level to mundane.
Personality Influences on Decision Making
A long-studied typology of value orientations that affect decision making consists of the following types of decision maker: the economic, the aesthetic, the theoretical, the social, the political, and the religious (Spranger, 1928).
- The economic decision maker makes decisions based on what is practical and useful.
- The aesthetic decision maker makes decisions based on form and harmony, desiring a solution that is elegant and in sync with the surroundings.
- The theoretical decision maker wants to discover the truth through rationality.
- The social decision maker emphasizes the personal impact of a decision and sympathizes with those who may be affected by it.
- The political decision maker is interested in power and influence and views people and/or property as divided into groups that have different value.
- The religious decision maker seeks to identify with a larger purpose, works to unify others under that goal, and commits to a viewpoint, often denying one side and being dedicated to the other.
In the United States, the economic, political, and theoretical orientations tend to be the most prevalent in decision making, which likely corresponds to the individualistic cultural orientation with its emphasis on competition and efficiency. But situational context, as we discussed before, can also influence our decision making.
The personalities of group members, especially leaders and other active members, affect the climate of the group. Group member personalities can be categorized based on where they fall on a continuum anchored by the following descriptors: dominant/submissive, friendly/unfriendly, and instrumental/emotional (Cragan & Wright, 1999). The more group members there are in any extreme of these categories, the more likely that the group climate will also shift to resemble those characteristics.
- Dominant versus submissive. Group members that are more dominant act more independently and directly, initiate conversations, take up more space, make more direct eye contact, seek leadership positions, and take control over decision-making processes. More submissive members are reserved, contribute to the group only when asked to, avoid eye contact, and leave their personal needs and thoughts unvoiced or give in to the suggestions of others.
- Friendly versus unfriendly. Group members on the friendly side of the continuum find a balance between talking and listening, don’t try to win at the expense of other group members, are flexible but not weak, and value democratic decision making. Unfriendly group members are disagreeable, indifferent, withdrawn, and selfish, which leads them to either not invest in decision making or direct it in their own interest rather than in the interest of the group.
- Instrumental versus emotional. Instrumental group members are emotionally neutral, objective, analytical, task-oriented, and committed followers, which leads them to work hard and contribute to the group’s decision making as long as it is orderly and follows agreed-on rules. Emotional group members are creative, playful, independent, unpredictable, and expressive, which can lead them to make rash decisions, resist group norms or decision-making structures, and switch often from relational to task focus.
Cultural Context and Decision Making
Just like neighborhoods, schools, and countries, small groups vary in terms of their degree of similarity and difference. Demographic changes in the United States and increases in technology that can bring different people together make it more likely that we will be interacting in more and more heterogeneous groups (Allen, 2011). Some small groups are more homogenous, meaning the members are more similar, and some are more heterogeneous, meaning the members are more different. Diversity and difference within groups have advantages and disadvantages. In terms of advantages, research finds that, in general, groups that are culturally heterogeneous have better overall performance than more homogenous groups (Haslett & Ruebush, 1999). Additionally, when group members have time to get to know each other and competently communicate across their differences, the advantages of diversity include better decision making due to different perspectives (Thomas, 1999). Unfortunately, groups often operate under time constraints and other pressures that make the possibility for intercultural dialogue and understanding difficult. The main disadvantage of heterogeneous groups is the possibility for conflict, but given that all groups experience conflict, this isn’t solely due to the presence of diversity. We will now look more specifically at how some of the cultural value orientations we’ve learned about already in this book can play out in groups with international diversity and how domestic diversity in terms of demographics can also influence group decision making.
International Diversity in Group Interactions
Cultural value orientations such as individualism/collectivism, power distance, and high-/low-context communication styles all manifest on a continuum of communication behaviors and can influence group decision making. Group members from individualistic cultures are more likely to value task-oriented, efficient, and direct communication. This could manifest in behaviors such as dividing up tasks into individual projects before collaboration begins and then openly debating ideas during discussion and decision making. Additionally, people from cultures that value individualism are more likely to openly express dissent from a decision, essentially expressing their disagreement with the group. Group members from collectivistic cultures are more likely to value relationships over the task at hand. Because of this, they also tend to value conformity and face-saving (often indirect) communication. This could manifest in behaviors such as establishing norms that include periods of socializing to build relationships before task-oriented communication like negotiations begin or norms that limit public disagreement in favor of more indirect communication that doesn’t challenge the face of other group members or the group’s leader. In a group composed of people from a collectivistic culture, each member would likely play harmonizing roles, looking for signs of conflict and resolving them before they become public.
Power distance can also affect group interactions. Some cultures rank higher on power-distance scales, meaning they value hierarchy, make decisions based on status, and believe that people have a set place in society that is fairly unchangeable. Group members from high-power-distance cultures would likely appreciate a strong designated leader who exhibits a more directive leadership style and prefer groups in which members have clear and assigned roles. In a group that is homogenous in terms of having a high-power-distance orientation, members with higher status would be able to openly provide information, and those with lower status may not provide information unless a higher status member explicitly seeks it from them. Low-power-distance cultures do not place as much value and meaning on status and believe that all group members can participate in decision making. Group members from low-power-distance cultures would likely freely speak their mind during a group meeting and prefer a participative leadership style.
How much meaning is conveyed through the context surrounding verbal communication can also affect group communication. Some cultures have a high-context communication style in which much of the meaning in an interaction is conveyed through context such as nonverbal cues and silence. Group members from high-context cultures may avoid saying something directly, assuming that other group members will understand the intended meaning even if the message is indirect. So if someone disagrees with a proposed course of action, he or she may say, “Let’s discuss this next time” and mean, “I don’t think we should do this.” Such indirect communication is also a face-saving strategy that is common in collectivistic cultures. Other cultures have a low-context communication style that places more importance on the meaning conveyed through words than through context or nonverbal cues. Group members from low-context cultures often say what they mean and mean what they say. For example, if someone doesn’t like an idea, they might say, “I think we should consider more options. This one doesn’t seem like the best we can do.”
In any of these cases, an individual from one culture operating in a group with people of a different cultural orientation could adapt to the expectations of the host culture, especially if that person possesses a high degree of intercultural communication competence (ICC). Additionally, people with high ICC can also adapt to a group member with a different cultural orientation than the host culture. Even though these cultural orientations connect to values that affect our communication in fairly consistent ways, individuals may exhibit different communication behaviors depending on their own individual communication style and the situation.
Domestic Diversity and Group Communication
While it is becoming more likely that we will interact in small groups with international diversity, we are guaranteed to interact in groups that are diverse in terms of the cultural identities found within a single country or the subcultures found within a larger cultural group.
Gender stereotypes sometimes influence the roles that people play within a group. For example, the stereotype that women are more nurturing than men may lead group members (both male and female) to expect that women will play the role of supporters or harmonizers within the group. Because women have primarily performed secretarial work since the early 1900s, it may also be expected that women will play the role of recorder. In both of these cases, stereotypical notions of gender place women in roles that are typically not as valued in group communication. The opposite is true for men. In terms of leadership, despite notable exceptions, research shows that men fill an overwhelmingly disproportionate number of leadership positions. We are socialized to see certain behaviors by men as indicative of leadership abilities, even though they may not be. For example, men are often perceived to contribute more to a group because they tend to speak first when asked a question or to fill a silence and are perceived to talk more about task-related matters than relationally oriented matters. Both of these tendencies create a perception that men are more engaged with the task. Men are also socialized to be more competitive and self-congratulatory, meaning that their communication may be seen as dedicated and their behaviors seen as powerful, and that when their work isn’t noticed they will be more likely to make it known to the group rather than take silent credit. Even though we know that the relational elements of a group are crucial for success, even in high-performance teams, that work is not as valued in our society as the task-related work.
Despite the fact that some communication patterns and behaviors related to our typical (and stereotypical) gender socialization affect how we interact in and form perceptions of others in groups, the differences in group communication that used to be attributed to gender in early group communication research seem to be diminishing. This is likely due to the changing organizational cultures from which much group work emerges, which have now had more than sixty years to adjust to women in the workplace. It is also due to a more nuanced understanding of gender-based research, which doesn’t take a stereotypical view from the beginning as many of the early male researchers did. Now, instead of biological sex being assumed as a factor that creates inherent communication differences, group communication scholars see that men and women both exhibit a range of behaviors that are more or less feminine or masculine. It is these gendered behaviors, and not a person’s gender, that seem to have more of an influence on perceptions of group communication. Interestingly, group interactions are still masculinist in that male and female group members prefer a more masculine communication style for task leaders and that both males and females in this role are more likely to adapt to a more masculine communication style. Conversely, men who take on social-emotional leadership behaviors adopt a more feminine communication style. In short, it seems that although masculine communication traits are more often associated with high status positions in groups, both men and women adapt to this expectation and are evaluated similarly (Haslett & Ruebush, 1999).
Other demographic categories are also influential in group communication and decision making. In general, group members have an easier time communicating when they are more similar than different in terms of race and age. This ease of communication can make group work more efficient, but the homogeneity may sacrifice some creativity. As we learned earlier, groups that are diverse (e.g., they have members of different races and generations) benefit from the diversity of perspectives in terms of the quality of decision making and creativity of output.
In terms of age, for the first time since industrialization began, it is common to have three generations of people (and sometimes four) working side by side in an organizational setting. Although four generations often worked together in early factories, they were segregated based on their age group, and a hierarchy existed with older workers at the top and younger workers at the bottom. Today, however, generations interact regularly, and it is not uncommon for an older person to have a leader or supervisor who is younger than him or her (Allen, 2011). The current generations in the US workplace and consequently in work-based groups include the following:
- The Silent Generation. Born between 1925 and 1942, currently in their midsixties to mideighties, this is the smallest generation in the workforce right now, as many have retired or left for other reasons. This generation includes people who were born during the Great Depression or the early part of World War II, many of whom later fought in the Korean War (Clarke, 1970).
- The Baby Boomers. Born between 1946 and 1964, currently in their late forties to midsixties, this is the largest generation in the workforce right now. Baby boomers are the most populous generation born in US history, and they are working longer than previous generations, which means they will remain the predominant force in organizations for ten to twenty more years.
- Generation X. Born between 1965 and 1981, currently in their early thirties to midforties, this generation was the first to see technology like cell phones and the Internet make its way into classrooms and our daily lives. Compared to previous generations, “Gen-Xers” are more diverse in terms of race, religious beliefs, and sexual orientation and also have a greater appreciation for and understanding of diversity.
- Generation Y. Born between 1982 and 2000, “Millennials,” as they are also called, are currently in their late teens up to about thirty years old. This generation is not as likely to remember a time without technology such as computers and cell phones. They are just starting to enter into the workforce and have been greatly affected by the economic crisis of the late 2000s, experiencing significantly high unemployment rates.
The benefits and challenges that come with diversity of group members are important to consider. Since we will all work in diverse groups, we should be prepared to address potential challenges in order to reap the benefits. Diverse groups may be wise to coordinate social interactions outside of group time in order to find common ground that can help facilitate interaction and increase group cohesion. We should be sensitive but not let sensitivity create fear of “doing something wrong” that then prevents us from having meaningful interactions. Reviewing Chapter 8 “Culture and Communication” will give you useful knowledge to help you navigate both international and domestic diversity and increase your communication competence in small groups and elsewhere.
Key Takeaways
- Every problem has common components: an undesirable situation, a desired situation, and obstacles between the undesirable and desirable situations. Every problem also has a set of characteristics that vary among problems, including task difficulty, number of possible solutions, group member interest in the problem, group familiarity with the problem, and the need for solution acceptance.
- The group problem-solving process has five steps:
- Define the problem by creating a problem statement that summarizes it.
- Analyze the problem and create a problem question that can guide solution generation.
- Generate possible solutions. Possible solutions should be offered and listed without stopping to evaluate each one.
- Evaluate the solutions based on their credibility, completeness, and worth. Groups should also assess the potential effects of the narrowed list of solutions.
- Implement and assess the solution. Aside from enacting the solution, groups should determine how they will know the solution is working or not.
- Before a group makes a decision, it should brainstorm possible solutions. Group communication scholars suggest that groups (1) do a warm-up brainstorming session; (2) do an actual brainstorming session in which ideas are not evaluated, wild ideas are encouraged, quantity not quality of ideas is the goal, and new combinations of ideas are encouraged; (3) eliminate duplicate ideas; and (4) clarify, organize, and evaluate ideas. In order to guide the idea-generation process and invite equal participation from group members, the group may also elect to use the nominal group technique.
- Common decision-making techniques include majority rule, minority rule, and consensus rule. With majority rule, only a majority, usually one-half plus one, must agree before a decision is made. With minority rule, a designated authority or expert has final say over a decision, and the input of group members may or may not be invited or considered. With consensus rule, all members of the group must agree on the same decision. (A short vote-count sketch of these rules follows this list.)
- Several factors influence the decision-making process:
- Situational factors include the degree of freedom a group has to make its own decisions, the level of uncertainty facing the group and its task, the size of the group, the group’s access to information, and the origin and urgency of the problem.
- Personality influences on decision making include a person’s value orientation (economic, aesthetic, theoretical, social, political, or religious) and personality traits (dominant/submissive, friendly/unfriendly, and instrumental/emotional).
- Cultural influences on decision making include the heterogeneity or homogeneity of the group makeup; cultural values and characteristics such as individualism/collectivism, power distance, and high-/low-context communication styles; and gender and age differences.
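The vote-count arithmetic behind majority and consensus rule is simple enough to express in a few lines of code. The sketch below is an illustration only, not part of the original chapter; the function name and the yes/no vote model are assumptions, and minority rule appears only to note that it is not decided by counting votes.

```python
# A minimal sketch of the decision-rule thresholds described above.
# Hypothetical helper; a simple yes/no vote model is assumed.

def decision_passes(yes_votes: int, group_size: int, rule: str = "majority") -> bool:
    """Return True if a proposal passes under the named decision rule."""
    if rule == "majority":   # "one-half plus one" of the group must agree
        return yes_votes >= group_size // 2 + 1
    if rule == "consensus":  # all members must agree on the same decision
        return yes_votes == group_size
    # Minority rule is decided by a designated authority or expert,
    # so there is no vote threshold to check.
    raise ValueError(f"rule not decided by vote count: {rule}")

# A 7-member group: majority requires 7 // 2 + 1 = 4 yes votes
print(decision_passes(4, 7))               # True
print(decision_passes(6, 7, "consensus"))  # False -- one member dissents
```

Note that with an even-sized group the same formula still yields “one-half plus one” (a group of 6 needs 4 yes votes), which avoids decisions by tie.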
Exercise
- Group communication researchers have found that heterogeneous groups (composed of diverse members) have advantages over homogenous (more similar) groups. Discuss a group situation you have been in where diversity enhanced your and/or the group’s experience.
References
Adams, K., and Gloria G. Galanes, Communicating in Groups: Applications and Skills, 7th ed. (Boston, MA: McGraw-Hill, 2009), 220–21.
Allen, B. J., Difference Matters: Communicating Social Identity, 2nd ed. (Long Grove, IL: Waveland, 2011), 5.
Bormann, E. G., and Nancy C. Bormann, Effective Small Group Communication, 4th ed. (Santa Rosa, CA: Burgess, 1988), 112–13.
Clarke, G., “The Silent Generation Revisited,” Time, June 29, 1970, 46.
Cragan, J. F., and David W. Wright, Communication in Small Group Discussions: An Integrated Approach, 3rd ed. (St. Paul, MN: West Publishing, 1991), 77–78.
de Bono, E., Six Thinking Hats (Boston, MA: Little, Brown, 1985).
Delbecq, A. L., and Andrew H. Van de Ven, “A Group Process Model for Problem Identification and Program Planning,” The Journal of Applied Behavioral Science 7, no. 4 (1971): 466–92.
Haslett, B. B., and Jenn Ruebush, “What Differences Do Individual Differences in Groups Make?: The Effects of Individuals, Culture, and Group Composition,” in The Handbook of Group Communication Theory and Research, ed. Lawrence R. Frey (Thousand Oaks, CA: Sage, 1999), 133.
Napier, R. W., and Matti K. Gershenfeld, Groups: Theory and Experience, 7th ed. (Boston, MA: Houghton Mifflin, 2004), 292.
Osborn, A. F., Applied Imagination (New York: Charles Scribner’s Sons, 1959).
Spranger, E., Types of Men (New York: Steckert, 1928).
Stanton, C., “How to Deliver Group Presentations: The Unified Team Approach,” Six Minutes Speaking and Presentation Skills, November 3, 2009, accessed August 28, 2012, http://sixminutes.dlugan.com/group-presentations-unified-team-approach.
Thomas, D. C., “Cultural Diversity and Work Group Effectiveness: An Experimental Study,” Journal of Cross-Cultural Psychology 30, no. 2 (1999): 242–63.
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Prince_Georges_Community_College/BMT_1550%3A_Supervision_(Perry_2021)/04%3A_Identify_the_major_decision_making_techniques_and_their_formats./4.02%3A_Problem_Solving_and_Decision_Making_in_Groups",
"book_url": "https://commons.libretexts.org/book/biz-74553",
"title": "4.2: Problem Solving and Decision Making in Groups",
"author": "Anonymous"
} |
4.4: Human Resource Management

- 4.4.1: Chapter Introduction
- 4.4.2: An Introduction to Human Resource Management
- 4.4.3: Human Resource Management and Compliance
- 4.4.4: Performance Management
- 4.4.5: Influencing Employee Performance and Motivation
- 4.4.6: Building an Organization for the Future
- 4.4.7: Talent Development and Succession Planning
- 4.4.8: Glossary
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Prince_Georges_Community_College/BMT_1550%3A_Supervision_(Perry_2021)/04%3A_Identify_the_major_decision_making_techniques_and_their_formats./4.04%3A_Human_Resource_Management",
"book_url": "https://commons.libretexts.org/book/biz-74553",
"title": "4.4: Human Resource Management",
"author": "OpenStax"
} |
4.4.2: An Introduction to Human Resource Management
What has been the evolution of human resource management (HRM) over the years, and what is the current value it provides to an organization?
Human resource management over the years has served many purposes within an organization. From its earliest inception as a primarily compliance-type function, it has further expanded and evolved into its current state as a key driver of human capital development. In the book HR From the Outside In (Ulrich, Younger, Brockbank, and Ulrich, 2012), the authors describe the evolution of HR work in “waves.” Wave 1 focused on the administrative work of HR personnel, such as the terms and conditions of work, delivery of HR services, and regulatory compliance. This administrative side still exists in HR today, but it is often accomplished differently via technology and outsourcing solutions. The quality of HR services and HR’s credibility came from the ability to run administrative processes and solve administrative issues effectively. Wave 2 focused on the design of innovative HR practice areas such as compensation, learning, communication, and sourcing. The HR professionals in these practice areas began to interact and share with each other to build a consistent approach to human resource management. The HR credibility in Wave 2 came from the delivery of best-practice HR solutions.
Wave 3 HR, over the last 15–20 years or so, has focused on the integration of HR strategy with the overall business strategy. Human resources appropriately began to look at the business strategy to determine what HR priorities to work on and how to best use resources. HR began to be a true partner to the business, and the credibility of HR was dependent upon HR having a seat at the table when the business was having strategic discussions. In Wave 4, HR continues to be a partner to the business, but has also become a competitive practice for responding to external business conditions. HR looks outside their organizations to customers, investors, and communities to define success—in the form of customer share, investor confidence, and community reputation. HR’s credibility is thus defined in terms of its ability to support and drive these external metrics. Although each “wave” of HR’s evolution is important and must be managed effectively, it is the “outside in” perspective that allows the human resource management function to shine via the external reputation and successes of the organization.
catching the entrepreneurial spirit
Human Resources Outsourcing—Entrepreneurial Ventures
Human resources is a key function within any company, but not all companies are able to afford or justify full-time HR staff. Over the last decade, HR outsourcing has become a good business decision for many small companies whose current staff doesn’t have the bandwidth or expertise to take on the risks of employee relations issues, benefits and payroll, or HR compliance responsibilities. This has led many HR practitioners to try out their entrepreneurial skills in the areas of HR outsourcing and “fractional HR.”
Human resources outsourcing is very commonly used by smaller companies (and often large companies too) to cover such tasks as benefits and payroll management. This is an area that has been outsourced to third parties for many years. More recent is the trend to have “fractional HR” resources to help with the daily/weekly/monthly HR compliance, employee relations, and talent management issues that companies need to address. Fractional HR is a growing industry, and it has become the service offering of many entrepreneurial HR ventures. Fractional HR is essentially as it sounds—it is the offering of HR services to a company on a part-time or intermittent basis when the company may not be able to justify the cost of a full-time HR resource. An HR professional can be available onsite for a specified number of hours or days weekly or monthly, depending on the company’s needs and budget. The HR professional handles everything from HR compliance issues and training to employee issues support. Also, for companies that are keen on development of employees, the HR resource can drive the talent management processes—such as performance management, succession planning, training, and development—for companies who require more than just basic HR compliance services.
How does a business leader decide whether HR outsourcing is needed? There are generally two factors that drive a leader to consider fractional HR or HR outsourcing—time and risk. If a leader is spending too much time on HR issues and employee relations, he may decide that it is a smart tradeoff to outsource these tasks to a professional. In addition, the risk inherent in some HR issues can be very great, so the threat of having a lawsuit or feeling that the company is exposed can lead the company to seek help from a fractional HR professional.
HR entrepreneurs have taken full advantage of this important trend, which many say will likely continue as small companies grow and large companies decide to off-load HR work to third parties. Some HR companies offer fractional HR as part of their stated HR services, in addition to payroll and benefits support, compensation, and other HR programmatic support. Having a fractional HR resource in place will often illuminate the need for other HR services and program builds, which are generally supported by those same companies. Whether you are an individual HR practitioner or have a small company of HR practitioners and consultants, fractional HR and HR outsourcing can be a very viable and financially rewarding business model. It can also be very personally rewarding, as the HR professional enables smaller companies to grow and thrive, knowing that its HR compliance and processes are covered.
Discussion Questions
- At what point should a company consider bringing on a full-time HR resource instead of using a fractional HR resource? What questions should the company ask itself?
Human resource management provides value to an organization, to a large extent, via its management of the overall employee life cycle that employees follow—from hiring and onboarding, to performance management and talent development, all the way through to transitions such as job change and promotion, to retirement and exit. Human capital is a key competitive advantage to companies, and those who utilize their human resource partners effectively to drive their human capital strategy will reap the benefits.
Human resource management includes the leadership and facilitation of the following key life cycle process areas:
- Human resources compliance
- Employee selection, hiring, and onboarding
- Performance management
- Compensation rewards and benefits
- Talent development and succession planning
Human resources is responsible for driving the strategy and policies in these areas to be in accordance with and in support of the overall business strategy. Each of these areas provides a key benefit to the organization and impacts the organization’s value proposition to its employees.
concept check
- In what way do you usually interact with human resources?
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Prince_Georges_Community_College/BMT_1550%3A_Supervision_(Perry_2021)/04%3A_Identify_the_major_decision_making_techniques_and_their_formats./4.04%3A_Human_Resource_Management/4.4.02%3A_An_Introduction_to_Human_Resource_Management",
"book_url": "https://commons.libretexts.org/book/biz-74553",
"title": "4.4.2: An Introduction to Human Resource Management",
"author": ""
} |
4.4.3: Human Resource Management and Compliance
How does the human resources compliance role of HR add value to an organization?
Human resources compliance is an area that traces back to the very origin of the human resources function—to administrative and regulatory functions. Compliance continues to be a very important area that HR manages, and there are numerous regulations and laws that govern the employment relationship. HR professionals must be able to understand and navigate these laws to help their organizations remain compliant and avoid having to pay fines or penalties. The additional threat of reputational harm to the organization is another reason that HR needs to be aware and alert to any potential gaps in compliance.
Some of the most common examples of laws and regulations that govern the employer-employee relationship include the following (SHRM.org):
- Age Discrimination in Employment Act (ADEA)
- Americans with Disabilities Act (ADA)
- Fair Labor Standards Act (FLSA)
- Family and Medical Leave Act (FMLA)
- National Labor Relations Act (NLRA)
- Worker Adjustment and Retraining Notification Act (WARN)
The Age Discrimination in Employment Act (ADEA) of 1967 protects individuals who are 40 years of age or older from employment discrimination based on age. These protections apply to both employees and job applicants. It also makes it unlawful to discriminate based on age with respect to any terms of employment, such as hiring, firing, promotion, layoff, compensation, benefits, job assignments, and training.
The Americans with Disabilities Act (ADA) of 1990 prohibits private employers, state and local governments, employment agencies, and labor unions from discriminating against qualified individuals with disabilities. The ADA defines an individual with a disability as a person who: 1) has a mental or physical impairment that substantially limits one or more major life activities, 2) has a record of such impairment, or 3) is regarded as having such impairment. An employer is required to make a reasonable accommodation to the known disability of a qualified applicant or employee if it would not impose an “undue hardship” on the operation of the employer’s business.
The Fair Labor Standards Act (FLSA) of 1938 establishes the minimum wage, overtime pay, recordkeeping, and youth employment standards affecting full-time and part-time workers in the private sector and in federal, state, and local governments. Special rules apply to state and local government employment involving fire protection and law enforcement activities, volunteer services, and compensatory time off instead of cash overtime pay.
The Family and Medical Leave Act (FMLA) of 1993 entitles eligible employees to take up to 12 weeks of unpaid, job-protected leave in a 12-month period for specified family and medical reasons. FMLA applies to all public agencies, including state, local, and federal employers, local education agencies (schools), and private-sector employers who employed 50 or more employees in 20 or more workweeks in the current or preceding calendar year, including joint employers and successors of covered employers.
The National Labor Relations Act (NLRA) of 1935 extends rights to many private-sector employees, including the right to organize and bargain with their employer collectively. Employees covered by the act are protected from certain types of employer and union misconduct and have the right to attempt to form a union where none exists.
The Worker Adjustment and Retraining Notification Act (WARN) of 1988 generally covers employers with 100 or more employees, not counting those who have worked less than six months in the last 12 months and those who work an average of less than 20 hours a week. Regular federal, state, and local government entities that provide public services are not covered. WARN protects workers, their families, and communities by requiring employers to provide notification 60 calendar days in advance of plant closings and mass layoffs.
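Because WARN coverage turns on a specific headcount rule, a short sketch can make the logic concrete. The following is a minimal illustration of the statutory summary above, assuming a hypothetical Employee record; it is not legal advice, and the thresholds shown are only the general rule described in the text.

```python
# A minimal sketch of the WARN headcount test summarized above.
# The Employee record and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Employee:
    months_worked_last_12: float  # months worked in the last 12 months
    avg_hours_per_week: float     # average weekly hours

def counts_toward_warn(e: Employee) -> bool:
    # WARN does not count workers with less than six months in the last
    # 12 months or those averaging less than 20 hours a week
    return e.months_worked_last_12 >= 6 and e.avg_hours_per_week >= 20

def warn_generally_covers(employees: list[Employee]) -> bool:
    # WARN generally covers employers with 100 or more counted employees
    return sum(counts_toward_warn(e) for e in employees) >= 100

# Example: 120 workers on payroll, but 30 average under 20 hours/week
staff = [Employee(12, 40)] * 90 + [Employee(12, 15)] * 30
print(warn_generally_covers(staff))  # False -- only 90 employees count
```

If the employer is covered, the 60-calendar-day advance notice requirement for plant closings and mass layoffs applies as described above.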
These are just a few of the key regulatory federal statutes, regulations, and guidance that human resources professionals need to understand to confirm organizational compliance. For additional information on HR compliance resources, the Society of Human Resource Management (SHRM) at SHRM.org maintains a plethora of resources for the HR professional and the businesses that they support.
To ensure the successful management and oversight of the many compliance rules and regulations, the human resources team must utilize best practices to inform and hold employees accountable to HR compliance practices. Some of these best practices include education and training, documentation, and audits. Each of these is described in greater detail below and will help HR achieve its important goal of maintaining HR compliance for the organization.
Education and training in the areas of compliance and labor law are critical to ensure that all applicable laws and regulations are being followed. These laws can change from year to year, so the HR professionals in the organization need to ensure that they are engaged in ongoing education and training. It is not just the HR professional who needs to receive training. In many organizations, managers receive training on key rules and regulations (such as FMLA or ADA, to name a few) so that they have a foundation of knowledge when dealing with employee situations and potential risk areas. Human resources and management need to partner to ensure alignment on compliance issues—especially when there is a risk that an employee situation treads into compliance regulation territory. See Table \(\PageIndex{1}\) for a partial list of federal labor laws by number of employees, as displayed on the Society for Human Resource Management website.
Table \(\PageIndex{1}\): Federal Labor Laws by Number of Employees.
| Federal Labor Laws by Number of Employees |
|---|
| American Taxpayer Relief Act of 2012 |
| Consumer Credit Protection Act of 1968 |
| Employee Polygraph Protection Act of 1988 |
| Employee Retirement Income Security Act of 1974 (ERISA) |
| Equal Pay Act of 1963 |
| Fair and Accurate Credit Transaction Act of 2003 (FACT) |
| Fair Credit Reporting Act of 1969 |
| Fair Labor Standards Act of 1938 |
| Federal Insurance Contributions Act of 1935 (Social Security) (FICA) |
| Health Insurance Portability and Accountability Act of 1996 (HIPAA) (if a company offers benefits) |
| Immigration Reform and Control Act of 1986 |
| These federal laws cover all employees of all organizations. Several other factors may apply in determining employer coverage, such as whether the employer is public or private, whether the employer offers health insurance, and whether the employer uses a third party to conduct background checks. Source: SHRM website, https://www.shrm.org/ , accessed October 20, 2018. |
Table \(\PageIndex{1}\). (Attribution: Copyright Rice University, OpenStax, under CC-BY 4.0 license)
Documentation of the rules and regulations—in the form of an employee handbook—can be one of the most important resources that HR can provide to the organization to mitigate compliance risk. The handbook should be updated regularly and should detail the organization’s policies and procedures and how business is to be conducted. Legal counsel should review any such documentation before it is distributed to ensure that it is up-to-date and appropriate for the audience.
Scheduling HR compliance audits should be part of the company’s overall strategy to avoid legal risk. Noncompliance can cause enormous financial and reputational risk to a company, so it is important to have audits that test the organization’s controls and preparedness. When the human resources function takes the lead in implementing audits and other best practices, they create real value for the organization.
concept check
- What does an employee handbook provide to an organization?
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Prince_Georges_Community_College/BMT_1550%3A_Supervision_(Perry_2021)/04%3A_Identify_the_major_decision_making_techniques_and_their_formats./4.04%3A_Human_Resource_Management/4.4.03%3A_Human_Resource_Management_and_Compliance",
"book_url": "https://commons.libretexts.org/book/biz-74553",
"title": "4.4.3: Human Resource Management and Compliance",
"author": ""
} |
4.4.4: Performance Management
How do performance management practices impact company performance?
Performance management practices and processes are among the most important that human resources manages, yet they are also among the most contentious processes in an organization. Many people view performance management as a human resources role and believe that it runs on some parallel path with the business. On the contrary, for the process to be successful, human resources should not be the only function responsible for driving performance. For the (typically) annual performance management process, human resources and line management should partner on the implementation and ongoing communication of the process. Although HR is responsible for creating and facilitating the performance management processes, it is the organizational managers that need to strongly support the process and communicate the linkage of performance management to overall organizational goals and performance. In my experience, it was helpful when business leadership emphasized that performance management isn’t a human resources process—it is a mission-critical business process. If a business manager can’t track and drive performance at the individual level, then the overall organization won’t know how it’s tracking on overall organizational goals.

Performance Management

Before discussing the state of performance management in the workplace today, it is important to understand the origin of performance management.
Performance management can be traced back to the U.S. military’s “merit rating” system, which was created during World War I to identify poor performers for discharge or transfer (“The Performance Management Revolution,” Harvard Business Review, October 2016). After World War II, about 60% of all U.S. companies were using a performance appraisal process. (By the 1960s nearly 90% of all U.S. companies were using them.) Although the rules around job seniority determined pay increases and promotions for the unionized worker population, strong performance management scores meant good advancement prospects for managers. In the beginning, the notion of using this type of system to improve performance was more of an afterthought, and not the main purpose. By the 1960s or so, when we started to see a shortage of managerial talent, companies began to use performance systems to develop employees into supervisors, and managers into executives.
In 1981, when Jack Welch became CEO of General Electric, he championed the forced-ranking system—another military creation. He did this to deal with the long-standing concern that supervisors failed to label real differences in performance (HBR, The Performance Management Revolution). GE utilized this performance management system to shed the people at the bottom. They equated performance with people’s inherent capabilities and ignored their potential to grow. People were categorized as “A” players (to be rewarded), “B” players (to be accommodated), and “C” players (to be dismissed). In the GE system, development was reserved for the “A” players—and those with high potential were chosen to advance to senior positions. Since the days of GE’s forced ranking, many companies have implemented a similar forced-ranking system, but many have backed away from the practice. After Jack Welch retired, GE backed away from the practice as well. Companies, GE included, saw that it negatively fostered internal competition and undermined collaboration and teamwork and thus decided to drop forced ranking from their performance management processes.
Most people agree, in theory, that performance management is important. What people may not agree on is how performance management should be implemented. As the dissatisfaction with performance management processes began to increase, some companies began to change the way they thought about performance. In 2001, an “Agile Manifesto” was developed by software developers and “emphasized principles of collaboration, self-organization, self-direction, and regular reflection on how to work more effectively, with the aim of prototyping more quickly and responding in real-time to customer feedback and changes in requirements.” (Performance Management Revolution, HBR). The impact on performance management was clear, and companies started to think about performance management processes that were less cumbersome, incorporated frequent feedback, and delivered performance impacts.
In a recent public survey by Deloitte Services, 58% of executives believed that their current performance management approach drives neither employee engagement nor high performance. They need something more nimble, real-time, and individualized—and focused on fueling performance in the future rather than assessing it in the past (“Reinventing Performance Management,” Harvard Business Review, Buckingham and Goodall, 2015). In light of this study, Deloitte became one of the companies that has recently sought to redesign their performance processes. As part of their “radical redesign,” they seek to see performance at the individual level, and thus they ask team leaders about their own future actions and decisions with respect to each individual. They ask leaders what they’d do with their team members, not what they think of them (“Reinventing Performance Management,” HBR). The four questions that Deloitte asks of its managers are as follows:
- Given what I know of this person’s performance, and if it were my money, I would award this person the highest possible compensation increase and bonus.
- Given what I know of this person’s performance, I would always want him or her on my team.
- This person is at risk for low performance.
- This person is ready for promotion today.
Although there has been some discussion over the last several years about some companies wanting to drop performance appraisals completely, most of the research seems to support that the total absence of performance management doesn’t help either. A recent global survey by CEB Global of more than 9,000 managers and employees found that respondents think not having performance evaluations is worse than having them (“Let’s Not Kill Performance Evaluations Yet,” HBR, Nov 2016, Goler, Gale, Grant). Their findings indicate that even though every organization has people who are unhappy with their bonuses or disappointed that they weren’t promoted, employees are more willing to accept an undesirable outcome when the process is fair. The key question really becomes: how can HR help the business create a process to fairly evaluate performance and enhance employee development while not burdening the business with undue bureaucracy and non-value-added activities?
managing change
Global versus Local HR
Multinational companies are always challenged to determine the balance between global and local needs when creating a human resource management strategy. Some large companies weigh heavily on the side of centralization, with very few local deviations from the global strategy. Others may allow more localization of processes and decision-making if there are very specific local cultural needs that must be addressed. In either case, companies are well-served by maintaining global standards while also allowing for local market adaptation in the human resources areas where it makes the most sense.
According to the MIT Sloan Management Review article “Six Principles of Effective Global Talent Management” (Winter 2012), most multinational companies introduce global performance standards, competency profiles, and performance management tools and processes. These are the human resources areas that are most closely linked to the overall strategies and goals, and thus remain at the global level. Those HR processes that are not perceived as being as closely linked to the strategy and that may need to have local market inputs include processes such as training and compensation. Hiring practices may also need to be locally adapted, due to country-specific labor laws and challenges. One caveat, however, is that a company may limit itself in terms of its global talent management if it has too many country-specific adaptations to hiring, assessment, and development processes for top talent. It is important that the company takes a global approach to talent management so that cross-learning opportunities and cross-cultural development opportunities can take place.
One of the most important aspects of global talent management is that a company can break down silos and pollinate the business with talented employees from around the globe. Some companies even have global leadership programs that bring together high-potential leaders from across the organization to build camaraderie, share knowledge, and engage in learning. Others have created rotational programs for leaders to be able to experience new roles in other cultures in order to build their personal resumes and cultural intelligence. Human resources can have an enormous impact on the company’s ability to harness the power of a global talent pool when they create a global network for talent while also balancing this with the requirements of the local market.
Discussion Questions:
- Why might compensation programs and hiring practices need to have local adaptation? What would be the risks if these were not adapted to local markets?
As organizations evaluate their options for a performance management system, human resources and business leadership need to consider several challenges that will need to be addressed—no matter what the system (“The Performance Management Revolution,” Cappelli and Tavis, HBR, pp. 9–11).
The first is the challenge of aligning individual and company goals. Traditionally, the model has been to “cascade” goals down through the organization, and employees are supposed to create goals that reflect and support the direction set at the top. The notion of SMART goals (Specific, Measurable, Achievable, Relevant, Time-bound) has made the rounds over the years, but goal setting can still be challenging if business goals are complex or if employee goals seem more relatable to specific project work than to the overall top-line goals. The business and the individual need to be able to respond to goal shifts, which occur very often in response to the rapid rate of change and changing customer needs. This is an ongoing issue that human resources and business leadership will need to reconcile.
The next key challenge to think about when designing a performance management process is rewarding performance. Reward structures are discussed later in this chapter, but reward systems must be rooted in performance management systems. Currently, the companies that are redesigning their performance processes are trying to figure out how their new practices will impact their pay-for-performance models. Companies don’t appear to be abandoning the concept of rewarding employees based on and driven by their performance, so the linkage between the two will need to be redefined as the systems are changed.
The identification of poor performers is a challenge that has existed since the earliest days of performance management, and even the most formal performance management process doesn’t seem to be particularly good at weeding out poor performers. Much of this is due to the managers who evaluate employees being reluctant to address the poor performers that they’re seeing. Also, the annual performance management process tends to make some managers feel that poor performance should be overlooked during the year and only addressed (often ineffectively) during a once-a-year review. Whatever new performance management models an organization adopts, they will have to ensure that poor performance is dealt with in real time and is communicated, documented, and managed closely.
Avoiding legal troubles is another ongoing challenge for organizations and is another reason for real-time communication and documentation of performance issues. Human resources supports managers as they deal with employee relations issues, and the thought of not having a formal, numerical ratings system is unfathomable for some people who worry about defending themselves against litigation. However, because even formal performance processes can be subjective and may reveal ratings bias, neither the traditional formal process nor some of the radical new approaches can guarantee that legal troubles will never develop. From my experience, the best strategy for effective and fair performance management is real-time communication and documentation of issues. The employee is told about his or her performance issues (in as close to real time as possible), and the manager has documented the performance issues and conversations objectively and has engaged human resources with any larger or more complex issues.
“Managing the feedback firehose” and keeping conversations, documentation, and feedback in a place where they can be tracked and utilized is an ongoing challenge. The typical annual performance process is not conducive to capturing ongoing feedback and conversations. There have been some new technologies introduced (such as apps) that can be used to capture ongoing conversations between managers and employees. General Electric uses an app called PD@GE (PD = performance development) that allows managers to pull up notes and materials from prior conversations with employees. IBM has a similar app that allows peer-to-peer feedback. Although there are clearly some technology solutions that can be used to help communicate and collect feedback, human resources will need to continue to communicate and reinforce rules around objectivity and appropriate use of the tools.
Performance management processes—traditional and inventive new approaches alike—will face the same challenges over time. Human resource management professionals need to be aware of these challenges and design a performance management system that addresses them in the format and within the context of their culture.
concept check
- What are some of the key challenges of any performance management process?
"license": "Creative Commons - Attribution - https://creativecommons.org/licenses/by/4.0/",
"url": "https://biz.libretexts.org/Courses/Prince_Georges_Community_College/BMT_1550%3A_Supervision_(Perry_2021)/04%3A_Identify_the_major_decision_making_techniques_and_their_formats./4.04%3A_Human_Resource_Management/4.4.04%3A_Performance_Management",
"book_url": "https://commons.libretexts.org/book/biz-74553",
"title": "4.4.4: Performance Management",
"author": ""
} |
4.4.5: Influencing Employee Performance and Motivation
How do companies use rewards strategies to influence employee performance and motivation?
Both performance management and rewards systems are key levers that can be used to motivate and drive individual and group performance, which in turn leads to overall organizational performance, productivity, and growth. Performance and rewards systems are also “cultural” in that they provide a glimpse into the way a company manages the performance (or nonperformance) of its employees, and to what extent they are willing to differentiate and reward for that performance. There has been a great deal of discussion over the years to identify best practices in the ways we differentiate and reward employees, which will also drive employee performance and motivation.
Before we can talk about best practices and findings in rewards and motivation systems, we must first define the terms. Rewards systems are the framework that an organization (generally via human resources) creates and manages to ensure that employee performance is reciprocated with some sort of reward (e.g., monetary or other extrinsic) that will drive and motivate the employee to continue to perform for the organization. Rewards programs consist primarily of compensation programs and policies, but can also include employee benefits and other extrinsic rewards that fulfill employee needs.
Within human resource management, the primary focus of a rewards program in an organization is to successfully implement a compensation system. Most organizations strive to implement a pay-for-performance compensation program that offers competitive pay in the marketplace and allows differentiation of compensation based on employee performance. Pay for performance begins with an adopted philosophy: the organization seeks to reward its best-performing employees in order to enhance business performance and take care of those who can have the greatest impact.
In his 2011 SHRM article “Study: Pay for Performance Pays Off,” Stephen Miller reports that companies’ top four drivers for moving to a pay-for-performance strategy are to:
- Recognize and reward high performers (46.9%)
- Increase the likelihood of achieving corporate goals (32.5%)
- Improve productivity (7.8%)
- Move away from an entitlement culture (7.8%)
The study also showed that the drivers differed depending on whether the company was high performing or lower performing. Almost half of high-performing organizations indicated that recognizing and rewarding top performers was the main driver of their pay-for-performance strategy, making it number one on the list of primary drivers. Lower-performing organizations did not appear to be as sure about the drivers behind their strategy. The number one driver among this group was achieving corporate goals. It appears that those top-performing organizations that implement a pay-for-performance strategy truly believe in the idea of differentiating among different levels of performance.
According to the 2015 World at Work “Compensation Programs and Practices Report,” pay for performance continues to thrive, with more than 7 in 10 companies (72%) saying that they directly tie pay increases to job performance, and two-thirds (67%) indicating that increases for top performers are at least 1.5 times the increase for average performers. In addition, the results of the survey seem to indicate that employees’ understanding of the organization’s compensation philosophy improves when there is higher differentiation in increases between average and top performers. The greater differentiation of increases is more visible and drives home the point that the company is serious about pay for performance.
A pay-for-performance program may have many components, and the human resources organization has the challenge of designing, analyzing, communicating, and managing the different components to ensure that the philosophy and the practices themselves are being carried out appropriately and legally. Human resource management’s role in establishing pay for performance is that HR must engage business leadership to establish the following elements of the framework:
- Define the organization’s pay philosophy. Leadership needs to agree that they will promote a culture that rewards employees for strong performance.
- Review the financial impacts of creating pay-for-performance changes. How much differentiation of performance will we have? What is the cost of doing this?
- Identify any gaps that exist in the current processes. If any of the current human resources and compensation policies conflict with pay for performance, they should be reviewed and changed. Examples may lie in the performance management process, the merit increase process, and the short-term and long-term bonus processes. If the performance management process has gaps, these should be corrected before pay for performance is implemented; otherwise this will generate more distrust in the system. The salary structure should also be benchmarked with market data to ensure that the organization is compensating according to where it wishes to be in the marketplace.
- Update compensation processes with new pay-for-performance elements. This includes the design of a merit matrix that ties employee annual pay increases to performance (a small illustrative sketch follows this list). Other areas of focus should be the design of a short-term bonus matrix and a long-term bonus pay-for-performance strategy. In other words, how does performance drive the bonus payouts? What is the differential (or multiplier) for each level?
- Communicate and train managers and employees on the pay-for-performance philosophy and process changes. Explain the changes in the context of the overall culture of the organization. This is a long-term investment in talent and performance.
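To make the merit-matrix idea concrete, here is a minimal sketch in Python. The ratings, percentages, and the 1.5× top-performer multiplier are illustrative assumptions (the multiplier echoes the World at Work finding above), not values from any particular company; real matrices are set by compensation teams using budget and market data.

```python
# Hypothetical merit matrix: maps a performance rating to an annual
# merit-increase percentage. All values are illustrative assumptions.
MERIT_MATRIX = {
    "below target": 0.00,   # no merit increase
    "at target":    0.03,   # 3% increase for average performers
    "above target": 0.045,  # 1.5x the average increase for top performers
}

def merit_increase(current_salary: float, rating: str) -> float:
    """Return the annual merit increase in dollars for a given rating."""
    return current_salary * MERIT_MATRIX[rating]

# Example: differentiation on a $60,000 salary.
for rating in MERIT_MATRIX:
    print(f"{rating}: ${merit_increase(60_000, rating):,.2f}")
```

A matrix like this makes the differentiation visible: an above-target employee receives one and a half times the increase of an at-target peer, which is exactly the kind of spread the survey above describes.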
Human resource management professionals play a key role in the rewards processes, and employee compensation is only one piece (although a key piece!) of the “total rewards” pie. World at Work defines total rewards as a “dynamic relationship between employers and employees.” World at Work also defines a total rewards strategy as the six elements of total rewards that “collectively define an organization’s strategy to attract, motivate, retain and engage employees.” These six elements include:
- Compensation—Pay provided by an employer to its employees for services rendered (i.e., time, effort, and skill). This includes both fixed and variable pay tied to performance levels.
- Benefits—Programs an employer uses to supplement the cash compensation employees receive. These health, income protection, savings, and retirement programs provide security for employees and their families.
- Work-life effectiveness—A specific set of organizational practices, policies, and programs, plus a philosophy that actively supports efforts to help employees achieve success at both work and home.
- Recognition—Formal or informal programs that acknowledge or give special attention to employee actions, efforts, behavior, or performance and support business strategy by reinforcing behaviors (e.g., extraordinary accomplishments) that contribute to organizational success.
- Performance management—The alignment of organizational, team, and individual efforts toward the achievement of business goals and organizational success. Performance management includes establishing expectations, skill demonstration, assessment, feedback, and continuous improvement.
- Talent development—Provides the opportunity and tools for employees to advance their skills and competencies in both their short- and long-term careers.
Human resource management is responsible for defining and driving the various elements of an organization’s total rewards strategy and ensuring that it is engaging enough to attract and retain good employees. It is easy to see that there are many different types of rewards that can motivate individuals for many different reasons. In the August 2008 HBR article “Employee Motivation: A Powerful New Model” (Nohria, Groysberg, Lee), the authors describe four different drives that underlie motivation. They assert that these are hardwired into our brains and directly affect our emotions and behaviors. These include the drives to acquire, bond, comprehend, and defend. Each of these drives, the primary levers found in an organization to address it, and the actions that should be taken to support those levers are described in the paragraphs below.
Table \(\PageIndex{1}\): Hiring Top-Level Executives (Attribution: Copyright Rice University, OpenStax, under CC-BY 4.0 license; this table accompanies the executive-recruiting discussion in the next section). Adapted from “The Definitive Guide to Recruiting in Good Times and Bad,” from the article “Hiring Top Executives: A Comprehensive End-to-End Process,” Harvard Business Review, May 2009.

| Steps in the Process | Poor Practices | Best Practices | Challenges |
|---|---|---|---|
| Anticipate. | Hiring only when you have an opening; poor succession plan; not anticipating future needs | Conduct ongoing analysis of future needs; always evaluate the pool of potential talent. | Linking the talent plan to the strategic plan; incorporating HR into the strategic planning process |
| Specify the job. | Relying on generic job specifications | Continually defining the specific demands of the job; specifying specific skills and experience requirements | Dialogue between HR and top management |
| Develop a pool. | Limiting the pool; looking only for external or only for internal candidates | Develop a large pool; include all inside and outside potential candidates. | Breaking organizational silos |
| Assess the candidates. | Picking the first OK choice; relying only on your “gut” | Use a small pool of your best interviewers; conduct robust background checks. | Training senior managers on interviewing techniques |
| Hire the choice. | Assuming money is the only issue; discussing only the positives of the job | Show active support of the candidates’ interests; realistically describe the job; ensure that offered compensation is fair to other employees. | Getting commitment of top managers; ensuring compensation equity |
| Integrate the new hire. | Assuming that the new hire is a “plug and play” | Use a “top performer” as a mentor; check in often early in the process, even if no problems seem imminent. | Rewarding mentors |
| Review the process. | Hanging on to bad hires | Remove bad hires early on; review the recruiting practices; reward your best interviewers. | Institutionalizing audit and review practices; admitting mistakes and moving on |
The drive to acquire describes the notion that we are all driven to acquire scarce goods that bolster our sense of well-being. This drive also seems to be relative (we compare ourselves to others in what we have) and insatiable (we always want more). Within an organization, the primary lever to address this drive is the reward system, and the actions are to differentiate levels of performance, link performance to rewards, and pay competitively.
The drive to bond describes the idea that humans extend connections beyond just individuals, to organizations, associations, and nations. In organizations, this drive is fulfilled when employees feel proud to be a part of the company and enjoy being a member of their team. Within an organization, the primary lever to address this drive is culture, and the actions are to foster mutual reliance and friendships, to value collaboration and teamwork, and to encourage best practice sharing.
The drive to comprehend is the concept of all of us wanting to make sense of the world around us and producing different theories and accounts to explain things. People are motivated by the idea of figuring out challenges and making a contribution. In organizations, the primary lever to address this drive is job design, and the actions are to design jobs that have distinct and important roles in the organization, as well as jobs that are meaningful and foster a sense of contribution.
The drive to defend is our instinct to defend ourselves, our families, and our friends, and it describes our defensiveness against external threats. This drive also tells us a lot about our level of resistance to change, and why some employees have especially guarded or emotional reactions. In organizations, the primary levers that address this drive are performance management and resource-allocation processes, and the actions are to increase process transparency and fairness, and to build trust by being just in granting rewards, assignments, and other recognition.
Within human resource management, the area of compensation and reward systems is exceedingly complicated. In organizations, we think primarily of compensation rewards, which are very important drivers and motivators for most people. We need to also remember the other aspects of the total rewards strategy, as well as the drives and levers we can utilize to motivate employees.
concept check
- What is the first step in defining an organization’s pay-for-performance strategy?
4.4.6: Building an Organization for the Future
What is talent acquisition, and how can it create a competitive advantage for a company?
We’ve discussed some of the key focus areas that human resource management professionals need to address to ensure that employees are performing their roles well and are being fairly rewarded for their contributions. We haven’t yet addressed how we think about where these employees come from: Whom do we hire? What skills do we need now and in the future? Where will we even look for these employees? What are some best practices? Talent acquisition is the area within human resource management that defines the strategy for selection, recruiting, and hiring processes, and helps the organization fight the “war for talent” during good times and bad.
Hiring strong talent is a key source of competitive advantage for a company, yet so many companies do it poorly. Often, the recruiting and hiring processes happen reactively—someone leaves the organization and then people scramble to fill the gap. Very few companies take a longer-term, proactive approach and work to create a strategic plan for talent acquisition. In the article “The Definitive Guide to Recruiting in Good Times and Bad” (Fernandez-Araoz, Groysberg, Nohria, HBR, 2009), the authors advocate for a rigorous and strategic recruiting process that includes the following critical actions:
- Anticipate your future leadership needs based on your strategic business plan.
- Identify the specific competencies required in each position you need to fill.
- Develop a sufficiently large candidate pool.
In organizations today, there are often pieces of the talent acquisition process that are outsourced to external recruiters, as opposed to being managed internally by human resources employees. While outsourcing specific searches is not an issue, there must be internal HR/talent acquisition employees responsible for creating the overall strategic plan for the recruiting function. Contract recruiters may then take responsibility for a piece of the overall process by leveraging the strategy and competencies that the HR team puts forth.
Recruiting and hiring of high-level leadership candidates has special risks and rewards associated with it. A key leadership position that is vacant or about to become vacant poses a risk to the organization if it is left open for too long. These high-level positions are often harder to fill, with fewer candidates being available and the selection of the right talent being so critical to the organization’s future. The reward, however, is that with due diligence and clear goals and competencies/skills defined for the position, the HR/talent acquisition professional can create a competitive advantage through the recruitment of key high-level talent.
The following best practices illustrate the key steps for effective recruiting of key leadership hires. Both human resources and business leadership should partner to discuss and define each of the elements to ensure alignment and support of the recruiting plan and process (Definitive Guide to Recruiting, HBR, 2009).
Anticipate your needs. Every two to three years there should be a review of high-level leadership requirements based on the strategic plan. Some of the questions to answer here are:
- How many people will we need, and in what positions, in the next few years?
- What will the organizational structure look like?
- What must our leadership pipeline contain today to ensure that we find and develop tomorrow’s leaders?
Specify the job. For each leadership position identified, specify competencies needed in each role. For example:
- Job-based: What capabilities will the job require?
- Team-based: Will the applicant need to manage political dynamics?
- Firm-based: What resources (support, talent, technology) will the organization need to provide the person who fills this role?
Develop the pool. Cast a wide net for candidates by asking suppliers, customers, board members, professional service providers, and trusted insiders for suggestions. It helps to start this process even before you have a role that you’re hiring for. During internal succession planning and talent discussions, it helps to start making a list of internal and external contacts and potential candidates before the need arises.
Assess the candidates. Have the hiring manager, the second-level manager, and the top HR manager conduct a “behavioral event interview” with each candidate. Candidates will describe experiences they’ve had that are like situations they’ll face in the organization. Gain an understanding of how the candidate acted and the reasoning behind their actions. Make sure to contact a broad range of references and ask about the results the candidate achieved.
Close the deal. Once you have chosen the final candidate, you can increase the chance that the job offer will be accepted by:
- Sharing passion about the company and role, and showing genuine interest in the candidate
- Acknowledging the opportunities and challenges of the role, differentiating the opportunities at your organization from those of your competitor
- Striking a creative balance between salary, bonuses, and other long-term incentives
Integrate the newcomer. It is important to integrate new hires into the company’s culture:
- During the first few months, have the managers and the HR team check in with each new hire.
- Assign a mentor (star employee) to provide ongoing support to each new hire.
- Check in with the new hire to ensure that they are getting enough support, and inquire about what other support might be needed. Ensure that new hires are adequately building new relationships throughout the organization.
Refer to Table \(\PageIndex{1}\): Hiring Top-Level Executives (shown in the previous section), adapted from “The Definitive Guide to Recruiting in Good Times and Bad,” from the article “Hiring Top Executives: A Comprehensive End-to-End Process,” Harvard Business Review, May 2009.
By following these best practices, human resources and business leadership can ensure that the new hire is integrating well and has the best possible start in the new role. Talent acquisition is a key element of any human resource management program, and the right process can mean the difference between a poor hire and a distinct competitive advantage gained through top talent.
concept check
- How can we ensure a more successful integration of the new hire?
4.4.7: Talent Development and Succession Planning
What are the benefits of talent development and succession planning?
Talent development and succession planning are, in my opinion, two of the most critical human resource management processes within an organization. You can work tirelessly to recruit and hire the right people, and you can spend a lot of time defining and redesigning your performance and rewards programs, but if you can’t make decisions that effectively assess and develop the key talent that you have, then everything else feels like a wasted effort. Talent development describes all processes and programs that an organization utilizes to assess and develop talent. Succession planning is the process for reviewing key roles and determining the readiness levels of potential internal (and external!) candidates to fill these roles. It is an important process that is a key link between talent development and talent acquisition/recruiting.
The human resources function facilitates talent development activities and processes, but it is also heavily reliant on business inputs and support. Each of the talent development processes that will be discussed requires heavy involvement and feedback from the business. Like performance management, talent development is a process that HR owns and facilitates, but it is a true business process that has a fundamental impact on an organization’s performance. Talent is a competitive advantage, and in the age of the “war for talent,” an organization needs to have a plan for developing its key talent.
One of the key tools that is used in talent development is the talent review. This process generally follows an organization’s performance management process (which is primarily focused on current employee performance) and is more focused on employee development and potential for the future. Talent reviews often employ the use of a 9-box template, which plots employee performance versus employee potential and provides the reviewer with nine distinct options, or boxes, to categorize where the employee is.
Table \(\PageIndex{1}\): Performance and Potential Grid (rows show performance over time; columns show the potential rating).

| Performance over time | Lowest potential rating | Middle potential rating | Highest potential rating |
|---|---|---|---|
| Highest | John Smith; Melanie Roper; Keegan Flanagan | Chieh Zhang; Edgar Orrelana | Rory Collins; Aimee Terranova |
| Medium | Joseph Campbell; Alina Dramon; Alex Joiner; Lauren Gress | Christina Martin; Thomas Weimeister | Richard Collins |
| Lowest | Marty Hilton | | |
The performance axis ratings are low/medium/high and based on the employee’s recent performance management rating. Low = below target, medium = at target, and high = above target. Like the performance rating, this reflects performance against objectives and the skills and competencies required in the employee’s current role and function. Performance can change over time (for example, with a promotion or job change). Performance is overall a more objective rating than potential, which leaves the rater to make some assumptions about the future.
Potential is defined as an employee’s ability to demonstrate the behaviors necessary to be successful at the next highest level within the company. Competencies and behaviors are a good indicator of an employee’s potential. Higher-potential employees, no matter what the level, often display the following competencies: business acumen, strategic thinking, leadership skills, people skills, learning agility, and technology skills. Other indicators of potential may include:
- Top performance in current job
- Success in other positions held (within or outside of the company)
- Education/certifications
- Significant accomplishments/events
- Willingness and desire to advance
Managing Change
Tech in Human Resources
There has been a boom in HR technology and innovation over the last several years, and it is making some of the traditional HR systems from the last decade seem enormously outdated. Some of the trends that are driving this HR tech innovation include mobile technology, social media, data analytics, and learning management. Human resources professionals need to be aware of some of the key technology innovations that have emerged as a result of these trends, because there’s no sign that they will be going away any time soon.
Josh Bersin of Bersin by Deloitte, Deloitte Consulting LLP, wrote about some of these HR technology innovations in his SHRM.org article “9 HR Tech Trends for 2017” (Jan. 2017). One of these technology innovations is the “performance management revolution” and the new focus on managing performance by team and not just by hierarchy. Performance management technologies have become more agile and real time, with built-in pulse surveys and easy goal tracking. Now, instead of the formal, once-a-year process that brings everything to a halt, these performance management technologies allow ongoing, real-time, and dynamic input and tracking of performance data.
Another HR tech trend Bersin names is the “rise of people analytics.” Data analytics has become such a huge field, and HR’s adoption of it is no exception. Some disruptive technologies in this area are predictive: they allow analysis of job change data and the prediction of successful versus unsuccessful outcomes. Predictive analytics technologies can also analyze patterns of e-mails and communications for good time-management practices, or to predict where a security leak is likely to occur. One other incredible analytics application consists of a badge that monitors employees’ voices and predicts when an employee is experiencing stress. That is either really cool or really eerie, if you ask me.
The “maturation of the learning market” is a fascinating trend to me, as an HR professional who grew up in the days of multiple in-class trainings and week-long leadership programs. Learning processes have changed greatly with the advent of some of these innovative HR technologies. Although many larger companies have legacy learning management systems (like Cornerstone, Saba, and SuccessFactors), there are many new and competitive options that focus on scaling video learning to the entire organization. The shift has gone from learning management to learning—with the ability to not only register and track courses online, but to take courses online. Many companies are realizing that these YouTube-like learning applications are a great complement to their existing learning systems, and it is predicted that the demand will continue to grow.
Other trends of note include technologies that manage the contingent workforce, manage wellness, and automate HR processes via artificial intelligence. It is amazing to think about so many interesting and innovative technologies that are being designed for Human Resources. The investment in human capital is one of the most critical investments that a company makes, and it is refreshing to see that this level of innovation is being created to manage, engage, and develop this investment.
Discussion Questions:
- Why do you think learning systems evolved in this way? Is there still a place for group classroom training? What types of learning might require classroom training, and what is better suited for online and YouTube-style learning?
In the talent review, the potential axis equates to potential for advancement within the organization: low = not ready to advance, medium = close to ready, and high = ready to advance. Potential does not equate to the value of an individual within the organization, nor does it state the quality of the individual. There are likely many strong performers (top contributors) in every company who prefer to stay in their current role for years and be specialists of their own processes. A specialist or expert may not want to manage people, and thus would be rated as low on potential due to the lack of interest in advancement. Advancement may also mean a relocation or lifestyle change that an employee is not willing to make at that time, so the employee would be rated low on potential for that reason. Potential can certainly change over time, given people’s individual situations and life circumstances. Potential tends to be the more subjective rating axis, as it involves some assumptions about what a team member could be capable of based on the limited information that is available now.
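As a concrete illustration of how the two axes combine, here is a minimal Python sketch that places employees on the 3×3 grid. The rating scales mirror the low/medium/high axes described above; the employee labels and the data layout are hypothetical, not part of any standard HR system.

```python
# Minimal 9-box placement sketch. Ratings mirror the low/medium/high
# performance and potential axes described in the text; everything else
# (labels, data layout) is hypothetical.
LEVELS = ("low", "medium", "high")

def nine_box(performance: str, potential: str) -> tuple[int, int]:
    """Locate an employee on the 3x3 grid as (performance, potential) indices."""
    return (LEVELS.index(performance), LEVELS.index(potential))

# A high performer with low potential is often a valued specialist who
# prefers to stay in the current role -- low potential is not low value.
employees = {
    "specialist": ("high", "low"),
    "rising leader": ("high", "high"),
    "new hire": ("medium", "medium"),
}
for name, (perf, pot) in employees.items():
    print(name, nine_box(perf, pot))
```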
A human resources team member should absolutely facilitate the talent review process and provide leaders with clear session objectives and specific instructions in order to maintain the integrity and confidentiality of this important talent process. The book One Page Talent Management (Effron and Ort, HBS Press, 2010) describes the talent review meeting as a talent review calibration process that “ensures objective performance and potential evaluations, clear development plans, and an understanding of what high potential means in your company. A calibration meeting brings together a manager and her team members to discuss their talent. Each team member presents the performance and potential (PxP) grid that he prepared on direct reports and briefly describes how each person is rated. Other team members contribute their opinions based on their firsthand interactions with that person. The discussion concludes after they have discussed each person, agreed on their final placement, and identified key development steps for them.”
After everyone being discussed has been placed in one of the boxes on the 9-box template, the leadership team should discuss key development actions for each employee. (If there isn’t time to discuss development activities for each employee, the group should start with the high-potential employees.) After the talent review calibration process is complete, human resources should keep a master list of the documented outcomes, as well as the development activities that were suggested for everyone. HR should follow up with each of the leaders to help with the planning and execution of the development activities as needed. The key outputs of the talent review process include:
- Identification of the “high-potential” employees in the organization
- Definition of development actions/action plans for each employee
- Insight into talent gaps and issues
- Input into the succession planning process
Succession planning generally follows shortly after (if not right after) a talent review because human resources and organizational leadership now have fresh information on the performance and potential of employees in the organization. Succession planning is a key process used to identify the depth of talent on the “bench” and the readiness of that talent to move into new roles. The process can be used to identify gaps or a lack of bench strength at any level of the organization, but it is usually reserved for leadership roles and other key roles in the organization. In succession planning, human resources will generally sit down with the group leader to discuss succession planning for his or her group and create a defined list of leadership and other critical roles that will be reviewed for potential successors.
Once the roles for succession planning analysis have been defined, both HR and the business leader will define the following elements for each role:
- Name of incumbent
- Attrition risk of incumbent
- Names of short-term successor candidates (ready in <1 year)
- Names of mid-term successor candidates (ready in 1–3 years)
- Names of long-term successor candidates (ready in 3+ years)
- Optional—9-box rating next to each successor candidate’s name
The names of longer-term successor candidates are not as critical, but it is always helpful to understand the depth of the bench. With the information recently collected during the talent review process, HR and management will have a lot of quality information on the internal successor candidates. It is important to include external successor candidates in the succession planning analysis as well. If there are no candidates that are identified as short-, mid-, or long-term successor candidates for a role, then the word “EXTERNAL” should automatically be placed next to that role. Even if there are internal candidates named, any external successor candidates should still be captured in the analysis as appropriate.
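To show how these elements fit together, here is a minimal Python sketch of a succession-plan record for a single role. The field names and example values are hypothetical; the only rule taken from the text is that a role with no named internal successors at any horizon is flagged “EXTERNAL.”

```python
# Minimal succession-planning record for one key role. Field names and
# sample values are hypothetical illustrations of the elements in the text.
def succession_entry(role, incumbent, attrition_risk,
                     short_term=(), mid_term=(), long_term=()):
    """Build a record; flag the role EXTERNAL when no internal successor
    is named at any horizon (short, mid, or long term)."""
    entry = {
        "role": role,
        "incumbent": incumbent,
        "attrition_risk": attrition_risk,  # e.g., "low", "medium", "high"
        "short_term": list(short_term),    # ready in <1 year
        "mid_term": list(mid_term),        # ready in 1-3 years
        "long_term": list(long_term),      # ready in 3+ years
    }
    if not (entry["short_term"] or entry["mid_term"] or entry["long_term"]):
        entry["flag"] = "EXTERNAL"         # no internal bench for this role
    return entry

# A role with no internal bench is automatically flagged for external search.
print(succession_entry("VP, Finance", "J. Doe", "high"))
```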
Talent reviews and succession planning processes both generate excellent discussions and very insightful information on the state of talent in the organization. Human resources facilitates both processes, in very close partnership with the business, and ultimately keeps the output information from the sessions, i.e., the final succession plan, the final 9-box, and the follow-up development actions and activities as defined in the talent review session. With this information, human resources possesses a level of knowledge that will allow it to drive talent development and coach managers on the follow-up actions that they need to set in motion. Some examples of follow-up development activities that may be appropriate based on the outputs of the succession and 9-box events include training, stretch assignments, individual assessments, and individual development plans. Training and training plans identify the learning events that an individual would benefit from, either in a classroom or online format. Stretch assignments may be an appropriate development action for an employee who is being tested for or who wants to take on additional responsibility. Individual assessments, such as a 360 assessment for managers, are a good developmental tool, providing feedback from managers, peers, direct reports, customers, or others who interact with the employee regularly. Finally, an individual development plan is an important document that employees should use to map out their personal development goals and actions, and to track their own status and progress toward those goals.
Talent development is a collection of organization-wide processes that help to evaluate talent strengths and gaps within the organization. Although many of the processes are carried out in a group setting, the output of talent development needs to be very individualized via a collection of development tools and strategies to enhance performance. Human resources is a key resource and partner for these tools and strategies, and thus plays a critical role in the future of talent for the organization.
Conclusion
Human resource management is a complex and often difficult field because of the nature of the key area of focus—people. In working with people, we begin to understand both the expressed and the hidden drives—intentions and emotions that add complexity and additional context to the processes and tasks that we set forth. We also begin to understand that an organization is a group of individuals, and that human resources plays a critical role in ensuring that there are philosophies, structures, and processes in place to guide, teach, and motivate individual employees to perform at their best possible levels.
concept check
- What roles should an organization discuss as part of the succession planning process?
7.2: Supervision: Centralization Versus Decentralization
How can the degree of centralization/decentralization be altered to make an organization more successful?
The optimal span of control is determined by the following five factors (a small illustrative sketch follows the list):
- Nature of the task . The more complex the task, the narrower the span of control.
- Location of the workers . The more locations, the narrower the span of control.
- Ability of the manager to delegate responsibility . The greater the ability to delegate, the wider the span of control.
- Amount of interaction and feedback between the workers and the manager . The more feedback and interaction required, the narrower the span of control.
- Level of skill and motivation of the workers . The higher the skill level and motivation, the wider the span of control.
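The five factors above each push the span wider or narrower. The toy Python sketch below encodes that directionality as a scoring heuristic; the weights, the base span, and the 1–5 scales are invented for illustration. Real spans of control are a matter of managerial judgment, not a formula.

```python
# Purely illustrative span-of-control heuristic. Each argument is a 1-5
# score; the base span and equal weighting are invented assumptions.
def suggested_span(task_complexity, locations, delegation_ability,
                   interaction_needed, worker_skill):
    """Apply the directionality of the five factors: complexity, locations,
    and required interaction narrow the span; delegation and skill widen it."""
    base = 6
    span = (base
            - task_complexity      # more complex task -> narrower span
            - locations            # more worker locations -> narrower span
            + delegation_ability   # greater ability to delegate -> wider span
            - interaction_needed   # more feedback/interaction -> narrower span
            + worker_skill)        # higher skill and motivation -> wider span
    return max(1, span)

# Example: a simple, single-site task with skilled workers supports a
# relatively wide span of direct reports.
print(suggested_span(2, 1, 4, 2, 5))  # -> 10
```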
The final component in building an effective organizational structure is deciding at what level in the organization decisions should be made. Centralization is the degree to which formal authority is concentrated in one area or level of the organization. In a highly centralized structure, top management makes most of the key decisions in the organization, with very little input from lower-level employees. Centralization lets top managers develop a broad view of operations and exercise tight financial controls. It can also help to reduce costs by eliminating redundancy in the organization. But centralization may also mean that lower-level personnel don’t get a chance to develop their decision-making and leadership skills and that the organization is less able to respond quickly to customer demands.
Decentralization is the process of pushing decision-making authority down the organizational hierarchy, giving lower-level personnel more responsibility and power to make and implement decisions. Benefits of decentralization can include quicker decision-making, increased levels of innovation and creativity, greater organizational flexibility, faster development of lower-level managers, and increased levels of job satisfaction and employee commitment. But decentralization can also be risky. If lower-level personnel don’t have the necessary skills and training to perform effectively, they may make costly mistakes. Additionally, decentralization may increase the likelihood of inefficient lines of communication, competing objectives, and duplication of effort.
Several factors must be considered when deciding how much decision-making authority to delegate throughout the organization. These factors include the size of the organization, the speed of change in its environment, managers’ willingness to give up authority, employees’ willingness to accept more authority, and the organization’s geographic dispersion.
Decentralization is usually desirable when the following conditions are met:
- The organization is very large, like ExxonMobil, Ford, or General Electric.
- The firm is in a dynamic environment where quick, local decisions must be made, as in many high-tech industries.
- Managers are willing to share power with their subordinates.
- Employees are willing and able to take more responsibility.
- The company is spread out geographically, such as Nordstrom, Caterpillar, or Ford.
As organizations grow and change, they continually reevaluate their structure to determine whether it is helping the company to achieve its goals.
CONCEPT CHECK
- What are the characteristics of a centralized organization?
- What are the benefits of a decentralized organization?
- What factors should be considered when choosing the degree of centralization?
7.4: Supervisory Roles in the Control Function
Key Terms
- Decisional role
-
One of the three major roles that a manager assumes in the organization; it centers on making choices, such as allocating resources, handling disturbances, and negotiating.
- Executive Managers
-
Generally, a team of individuals at the highest level of management of an organization.
- First-line Managers
-
The level of management directly managing nonmanagerial employees.
- Informational Role
-
One of the three major roles that a manager assumes in the organization; it centers on monitoring, disseminating, and sharing information, including serving as a spokesperson.
- Interpersonal Role
-
One of the three major roles that a manager assumes in the organization; it centers on working with people, for example as figurehead, leader, and liaison.
- Middle Management
-
The managers in an organization at a level just below that of senior executives.
Summary of Learning Outcomes
1.2 What Do Managers Do?
1. What do managers do to help organizations achieve top performance?
Managers perform a variety of functions in organizations, but amongst the most important is communicating with direct reports to help their organizations achieve and exceed goals.
1.3 The Roles Managers Play
2. What do managers do to help organizations achieve top performance?
Managers perform a variety of roles in organizations, but amongst the most important is communicating with direct reports to help their organizations achieve and exceed goals. Managers perform three major types of roles within organizations: interpersonal roles, informational roles, and decisional roles. The extent of each of these roles depends on the manager’s position within the organizational hierarchy.
1.4 Major Characteristics of the Manager's Job
3. What are the characteristics that effective managers display?
Management is the process of planning, organizing, directing, and controlling the activities of employees in combination with other resources to accomplish organizational goals. Managerial responsibilities include long-range planning, controlling, environmental scanning, supervision, coordination, customer relations, community relations, internal consulting, and monitoring of products and services. These responsibilities differ by level in the organizational hierarchy and by department or function. The twenty-first-century manager will differ from most current managers in four ways. In essence, he or she will be a global strategist, a master of technology, a good politician, and a premier leader-motivator.
Chapter Review Questions
- What are the characteristics and traits that you possess that are common to all successful managers?
- Why should management be considered an occupation rather than a profession?
- How do managers learn how to perform the job?
- Explain the manager’s job according to Henry Mintzberg.
- What responsibilities do managers have towards people within the organization? How do they express these responsibilities?
- How do managers perform their job according to John Kotter?
- How do managers make rational decisions?
- How does the nature of management change according to one’s level and function in the organization?
- Discuss the role of management in the larger societal context. What do you think the managers of the future will be like?
- Identify what you think are the critical issues facing contemporary management. Explain.
Management Skills Application Exercises
- During this and your other courses, there will likely be products of your individual and team-based assignments that can illustrate specific competencies, such as the ability to prepare a spreadsheet application or write programming code, or videos that demonstrate your communication abilities. It is a good practice to catalog and save these artifacts in a portfolio that will be useful in demonstrating your skills in future job interviews.
- Time management is an important skill that will impact your future as a manager. You can categorize the time that you spend as either required or discretionary. You can assess your time management skills by keeping track of your time using a schedule calendar and breaking down the time devoted to each activity over a week. After a week of logging the activity, note whether each activity was required or discretionary and whether the time was used productively or unproductively, using a 10-point scale in which 10 is very productive and 1 is completely unproductive. Now write up a plan on how to manage your time by coming up with a list of what to start doing and stop doing and what you can do to manage your discretionary time more productively. (A small worked sketch of this analysis follows.)
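For readers who prefer a worked example, here is a minimal Python sketch of the weekly time-log analysis described in exercise 2. The activities, hours, categories, and productivity scores are made-up sample data.

```python
# Hypothetical one-week time log: (activity, hours, category, productivity 1-10).
week_log = [
    ("class lectures", 15, "required",      8),
    ("assignments",    10, "required",      7),
    ("social media",    9, "discretionary", 2),
    ("exercise",        4, "discretionary", 9),
]

# Summarize hours and hours-weighted productivity for each category,
# highlighting where discretionary time is being used unproductively.
for category in ("required", "discretionary"):
    entries = [e for e in week_log if e[2] == category]
    hours = sum(e[1] for e in entries)
    weighted = sum(e[1] * e[3] for e in entries) / hours
    print(f"{category}: {hours} h, weighted productivity {weighted:.1f}/10")
```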
Managerial Decision Exercises
- You are a manager at a local convenience store that has been the victim of graffiti. Identify the roles you will undertake with both internal employees and others.
-
Here are three job titles. Rank them according to how much of the job’s time would be devoted to conceptual, human, and technical skills.
- Vice president of finance at a Fortune 100 company
- Coding for a video game producer
- General manager at a local McDonald’s franchise
Critical Thinking Case
New Management Challenges for the New Age
Today’s news is littered with scandals, new allegations of sexual assault, and tragedy. Since 2017 and the #MeToo movement, which stemmed from the Harvey Weinstein scandal, more and more public figures have been put into the spotlight to defend themselves against allegations from women around the globe.
Not only publicly, but also privately in companies around the world, there have been firings and investigations into misconduct by co-workers, managers, and CEOs. It is a relevant topic that is getting long-overdue publicity and encouraging more men and women to discuss openly, rather than hide, the events and injustices of the past. Other events showcase the tumultuous and on-edge society we are living in, such as the attack in Charlottesville, VA, which left 1 dead and 19 injured when a person drove a car through a crowd of protestors during a white nationalist gathering.
With events like these occurring on a daily basis, it is important for companies to take a stand against racial hatred and harassment of any kind, and to have firm policies for when such events occur. Take Netflix, for example, which in July of 2018 fired its chief communications officer for saying the “N-word” in full form. The slur was used during an internal meeting, not directed at anyone specific; the officer claimed it was made as an emphatic point about offensive words in comedy programming. The “Netflix way,” the culture built around radical candor and transparency, was put to the test during this occurrence.
The offender, Jonathan Friedland, attempted to apologize for his misdeed, hoping it would fade away and his apology would be accepted. It didn’t work that way; instead, the anger among coworkers was palpable, and after a few months of inaction it led to Friedland’s firing.
Netflixers are given a high level of freedom and responsibility within their “Netflix way” culture. Blunt feedback is encouraged, and trust and discretion are the ultimate gatekeepers, as employees have access to sensitive information and are ultimately trusted for how they expense items and take vacation time.
In the insanely fast-paced streaming services industry, it is hard to keep this culture at a premium, but doing so is imperative for the success of the company overall. “As you scale a company to become bigger and bigger, how do you scale that kind of culture?” said Colin Estep, a former senior engineer who left voluntarily in 2016. “I don’t know that we ever had a good answer.”
In order to keep up, the company is sometimes seen as harsh in its tactics to keep the best of the best. “I think we’re transparent to a fault in our culture and that can come across as cutthroat,” said Walta Nemariam, an employee in talent acquisition at Netflix, in the video.
Netflix has stayed true to its cultural values despite the pressures and sometimes negative connotations associated with this “cutthroat” environment. Its ability to remain agile while displaying no tolerance for societal injustices puts it at the forefront of new-age companies. It is a difficult pace to stay in line with, but it seems that Netflix is keeping in stride and remaining true to who it is, for now.
Questions:
- How has the current cultural environment of our country shaped the way that companies look at their own corporate cultural standards?
- What are the potential downfalls and positive influences of the “Netflix way”?
- How does Netflix’s internal culture negatively or positively affect its ability to stay competitive and deliver cutting-edge content?
Sources: B. Stelter, “The Weinstein Effect: Harvey Weinstein scandal sparks movements in Hollywood and beyond,” CNN Business, October 20, 2017, money.cnn.com/2017/10/20/med...rveyweinstein/; www.washingtonpost.com/; L. Hertzler, “Talking #MeToo, one year after bombshell Weinstein allegations,” Penn Today, October 30, 2018, penntoday.upenn.edu/news/tal...one-year-later; S. Ramachandran and J. Flint, “At Netflix, Radical Transparency and Blunt Firings Unsettle the Ranks,” Wall Street Journal, October 25, 2018, www.wsj.com/articles/at-netf...nks-1540497174