In the preceding sections, we described science as knowledge acquired through a scientific method. So what exactly is the “scientific method”? Scientific method refers to a standardized set of techniques for building scientific knowledge, such as how to make valid observations, how to interpret results, and how to generalize those results. The scientific method allows researchers to independently and impartially test preexisting theories and prior findings, and subject them to open debate, modifications, or enhancements. The scientific method must satisfy four key characteristics:

• Logical: Scientific inferences must be based on logical principles of reasoning.
• Confirmable: Inferences derived must match observed evidence.
• Repeatable: Other scientists should be able to independently replicate or repeat a scientific study and obtain similar, if not identical, results.
• Scrutinizable: The procedures used and the inferences derived must withstand critical scrutiny (peer review) by other scientists.

Any branch of inquiry that does not allow the scientific method to test its basic laws or theories cannot be called “science.” For instance, theology (the study of religion) is not science because theological ideas (such as the presence of God) cannot be tested by independent observers using a logical, confirmable, repeatable, and scrutinizable method. Similarly, the arts, music, literature, humanities, and law are also not considered science, even though they are creative and worthwhile endeavors in their own right. The scientific method, as applied to the social sciences, includes a variety of research approaches, tools, and techniques for collecting and analyzing qualitative or quantitative data. These methods include laboratory experiments, field surveys, case research, ethnographic research, action research, and so forth. Much of this book is devoted to learning about these different methods.
However, recognize that the scientific method operates primarily at the empirical level of research, i.e., how to make observations and analyze these observations. Very little of this method is directly pertinent to the theoretical level, which is really the more challenging part of scientific research.

1.05: Types of Scientific Research

Depending on the purpose of research, scientific research projects can be grouped into three types: exploratory, descriptive, and explanatory. Exploratory research is often conducted in new areas of inquiry, where the goals of the research are: (1) to scope out the magnitude or extent of a particular phenomenon, problem, or behavior, (2) to generate some initial ideas (or “hunches”) about that phenomenon, or (3) to test the feasibility of undertaking a more extensive study regarding that phenomenon. For instance, if the citizens of a country are generally dissatisfied with governmental policies during an economic recession, exploratory research may be directed at measuring the extent of citizens’ dissatisfaction, understanding how such dissatisfaction is manifested (such as in the frequency of public protests), and identifying the presumed causes of such dissatisfaction (such as ineffective government policies in dealing with inflation, interest rates, unemployment, or higher taxes). Such research may include examination of publicly reported figures, such as estimates of economic indicators like gross domestic product (GDP), unemployment, and the consumer price index, as archived by third-party sources, obtained through interviews of experts, eminent economists, or key government officials, and/or derived from studying historical examples of dealing with similar problems. This research may not lead to a very accurate understanding of the target problem, but may be worthwhile in scoping out the nature and extent of the problem, and may serve as a useful precursor to more in-depth research.
Descriptive research is directed at making careful observations and detailed documentation of a phenomenon of interest. These observations must be based on the scientific method (i.e., must be replicable, precise, etc.), and are therefore more reliable than casual observations by untrained people. Examples of descriptive research are the tabulation of demographic statistics by the United States Census Bureau or of employment statistics by the Bureau of Labor Statistics, which use the same or similar instruments for estimating employment by sector or population growth by ethnicity over multiple employment surveys or censuses. If any changes are made to the measuring instruments, estimates are provided with and without the changed instrumentation to allow readers to make a fair before-and-after comparison regarding population or employment trends. Other descriptive research may include chronicling ethnographic reports of gang activities among adolescent youth in urban populations, the persistence or evolution of religious, cultural, or ethnic practices in select communities, and the role of technologies such as Twitter and instant messaging in the spread of democracy movements in Middle Eastern countries.

Explanatory research seeks explanations of observed phenomena, problems, or behaviors. While descriptive research examines the what, where, and when of a phenomenon, explanatory research seeks answers to why and how types of questions. It attempts to “connect the dots” in research, by identifying causal factors and outcomes of the target phenomenon. Examples include understanding the reasons behind adolescent crime or gang violence, with the goal of prescribing strategies to overcome such societal ailments. Most academic or doctoral research belongs to the explanatory category, though some amount of exploratory and/or descriptive research may also be needed during the initial phases of academic research.
Seeking explanations for observed events requires strong theoretical and interpretation skills, along with intuition, insights, and personal experience. Those who can do it well are also the most prized scientists in their disciplines.
Before closing this chapter, it may be interesting to go back in history and see how science has evolved over time and to identify the key scientific minds in this evolution. Although instances of scientific progress have been documented over many centuries, the terms “science,” “scientists,” and the “scientific method” were coined only in the 19th century. Prior to this time, science was viewed as a part of philosophy and coexisted with other branches of philosophy such as logic, metaphysics, ethics, and aesthetics, although the boundaries between some of these branches were blurred. In the earliest days of human inquiry, knowledge was usually recognized in terms of theological precepts based on faith. This was challenged by Greek philosophers such as Socrates, Plato, and Aristotle during the fifth through fourth centuries BC, who suggested that the fundamental nature of being and the world can be understood more accurately through a process of systematic logical reasoning called rationalism. In particular, Aristotle’s classic work Metaphysics (literally meaning “beyond physical [existence]”) separated theology (the study of gods) from ontology (the study of being and existence) and universal science (the study of first principles, upon which logic is based). Rationalism (not to be confused with “rationality”) views reason as the source of knowledge or justification, and suggests that the criterion of truth is not sensory but rather intellectual and deductive, often derived from a set of first principles or axioms (such as Aristotle’s “law of non-contradiction”). The next major shift in scientific thought occurred during the 16th century, when English philosopher Francis Bacon (1561–1626) suggested that knowledge can only be derived from observations in the real world. Based on this premise, Bacon emphasized knowledge acquisition as an empirical activity (rather than as a reasoning activity), and developed empiricism as an influential branch of philosophy.
Bacon’s works led to the popularization of inductive methods of scientific inquiry and to the development of the “scientific method” (originally called the “Baconian method”), consisting of systematic observation, measurement, and experimentation, and may have even sowed the seeds of atheism, or the rejection of theological precepts as “unobservable.” Empiricism continued to clash with rationalism over the following centuries, as philosophers sought the most effective way of gaining valid knowledge. French philosopher René Descartes sided with the rationalists, while British philosophers John Locke and David Hume sided with the empiricists. Other scientists, such as Galileo Galilei and Sir Isaac Newton, attempted to fuse the two ideas into natural philosophy (the philosophy of nature), to focus specifically on understanding nature and the physical universe, which is considered to be the precursor of the natural sciences. Galileo (1564–1642) was perhaps the first to state that the laws of nature are mathematical, and contributed to the field of astronomy through an innovative combination of experimentation and mathematics. In the 18th century, German philosopher Immanuel Kant sought to resolve the dispute between empiricism and rationalism in his book Critique of Pure Reason, by arguing that experience is purely subjective, and that processing it using pure reason without first delving into its subjective nature will lead to theoretical illusions. Kant’s ideas led to the development of German idealism, which inspired the later development of interpretive techniques such as phenomenology, hermeneutics, and critical social theory. At about the same time, French philosopher Auguste Comte (1798–1857), founder of the discipline of sociology, attempted to blend rationalism and empiricism in a new doctrine called positivism. He suggested that theory and observations have a circular dependence on each other.
While theories may be created via reasoning, they are only authentic if they can be verified through observations. The emphasis on verification started the separation of modern science from philosophy and metaphysics, and the further development of the “scientific method” as the primary means of validating scientific claims. Comte’s ideas were expanded by Émile Durkheim in his development of sociological positivism (positivism as a foundation for social research) and by Ludwig Wittgenstein in logical positivism. In the early 20th century, strong accounts of positivism were rejected by interpretive sociologists (antipositivists) belonging to the German idealism school of thought. Positivism was typically equated with quantitative research methods such as experiments and surveys, often with no explicit philosophical commitments, while antipositivism employed qualitative methods such as unstructured interviews and participant observation. Even practitioners of positivism, such as American sociologist Paul Lazarsfeld, who pioneered large-scale survey research and statistical techniques for analyzing survey data, acknowledged potential problems of observer bias and structural limitations in positivist inquiry. In response, antipositivists emphasized that social actions must be studied through interpretive means, based upon an understanding of the meaning and purpose that individuals attach to their personal actions, which inspired Georg Simmel’s work on symbolic interactionism, Max Weber’s work on ideal types, and Edmund Husserl’s work on phenomenology. In the mid-to-late 20th century, both positivist and antipositivist schools of thought were subjected to criticisms and modifications. British philosopher Sir Karl Popper suggested that human knowledge is based not on unchallengeable, rock-solid foundations, but rather on a set of tentative conjectures that can never be proven conclusively, only disproven.
Empirical evidence is the basis for disproving these conjectures or “theories.” This metatheoretical stance, called postpositivism (or postempiricism), amends positivism by suggesting that it is impossible to verify the truth, although it is possible to reject false beliefs, though it retains the positivist notion of an objective truth and its emphasis on the scientific method. Likewise, antipositivists have also been criticized for trying only to understand society rather than critiquing and changing society for the better. The roots of this thought lie in Das Kapital, written by German philosopher Karl Marx with the collaboration of Friedrich Engels, which critiqued capitalist societies as socially inequitable and inefficient, and recommended resolving this inequity through class conflict and proletarian revolutions. Marxism inspired social revolutions in countries such as Germany, Italy, Russia, and China, but generally failed to accomplish the social equality that it aspired to. Critical research (also called critical theory), propounded by Max Horkheimer and Jürgen Habermas in the 20th century, retains similar ideas of critiquing and resolving social inequality, and adds that people can and should consciously act to change their social and economic circumstances, although their ability to do so is constrained by various forms of social, cultural, and political domination. Critical research attempts to uncover and critique the restrictive and alienating conditions of the status quo by analyzing the oppositions, conflicts, and contradictions in contemporary society, and seeks to eliminate the causes of alienation and domination (i.e., to emancipate the oppressed class). More on these different research philosophies and approaches will be covered in future chapters of this book.
Conducting good research requires first retraining your brain to think like a researcher. This requires visualizing the abstract from actual observations, mentally “connecting the dots” to identify hidden concepts and patterns, and synthesizing those patterns into generalizable laws and theories that apply to other contexts beyond the domain of the initial observations. Research involves constantly moving back and forth from an empirical plane, where observations are conducted, to a theoretical plane, where these observations are abstracted into generalizable laws and theories. This is a skill that takes many years to develop, is not something that is taught in graduate or doctoral programs or acquired in industry training, and is by far the biggest deficit amongst Ph.D. students. Some of the mental abstractions needed to think like a researcher include unit of analysis, constructs, hypotheses, operationalization, theories, models, induction, deduction, and so forth, which we will examine in this chapter.

02: Thinking Like a Researcher

One of the first decisions in any social science research is the unit of analysis of a scientific study. The unit of analysis refers to the person, collective, or object that is the target of the investigation. Typical units of analysis include individuals, groups, organizations, countries, technologies, objects, and such. For instance, if we are interested in studying people’s shopping behavior, their learning outcomes, or their attitudes toward new technologies, then the unit of analysis is the individual. If we want to study characteristics of street gangs or teamwork in organizations, then the unit of analysis is the group. If the goal of research is to understand how firms can improve profitability or make good executive decisions, then the unit of analysis is the firm.
In this case, even though decisions are made by individuals in these firms, these individuals are presumed to represent their firm’s decision rather than their personal decisions. If research is directed at understanding differences in national cultures, then the unit of analysis becomes a country. Even inanimate objects can serve as units of analysis. For instance, if a researcher is interested in understanding how to make web pages more attractive to their users, then the unit of analysis is a web page (and not users). If we wish to study how knowledge transfer occurs between two firms, then our unit of analysis becomes the dyad (the pair of firms sending and receiving knowledge). Understanding the unit of analysis can sometimes be fairly complex. For instance, if we wish to study why certain neighborhoods have high crime rates, then our unit of analysis becomes the neighborhood, and not the crimes or the criminals committing such crimes. This is because the object of our inquiry is the neighborhood and not the criminals. However, if we wish to compare different types of crimes in different neighborhoods, such as homicide, robbery, assault, and so forth, our unit of analysis becomes the crime. If we wish to study why criminals engage in illegal activities, then the unit of analysis becomes the individual (i.e., the criminal). Likewise, if we want to study why some innovations are more successful than others, then our unit of analysis is the innovation. However, if we wish to study how some organizations innovate more consistently than others, then the unit of analysis is the organization. Hence, two related research questions within the same research study may have two entirely different units of analysis. Understanding the unit of analysis is important because it shapes what type of data you should collect for your study and whom you collect it from.
If your unit of analysis is a web page, you should be collecting data about web pages from actual web pages, and not surveying people about how they use web pages. If your unit of analysis is the organization, then you should be measuring organizational-level variables such as organizational size, revenues, hierarchy, or absorptive capacity. This data may come from a variety of sources such as financial records or surveys of chief executive officers (CEOs), who are presumed to be representing their organization (rather than themselves). Some variables, such as CEO pay, may seem like individual-level variables, but they can also be organizational-level variables, because each organization has only one CEO, and hence only one CEO pay, at any given time. Sometimes, it is possible to collect data from a lower level of analysis and aggregate that data to a higher level of analysis. For instance, in order to study teamwork in organizations, you can survey individual team members in different organizational teams, and average their individual scores to create a composite team-level score for team-level variables like cohesion and conflict. We will examine the notion of “variables” in greater depth in the next section.
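This kind of aggregation is straightforward to carry out. The sketch below, using hypothetical (made-up) team names and survey responses, averages individual members’ cohesion ratings into a composite team-level score:

```python
# Hypothetical 1-7 cohesion ratings from individual members of two teams
# (team names and numbers are illustrative, not real survey data).
member_cohesion = {
    "team_a": [6, 7, 5, 6],
    "team_b": [3, 4, 2, 3],
}

# Aggregate to the team level by averaging each team's member scores.
team_cohesion = {
    team: sum(scores) / len(scores)
    for team, scores in member_cohesion.items()
}

print(team_cohesion)  # {'team_a': 6.0, 'team_b': 3.0}
```

Note that once aggregated, the composite score is a team-level variable: analyses using it should treat the team, not the individual member, as the unit of analysis.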
We discussed in Chapter 1 that although research can be exploratory, descriptive, or explanatory, most scientific research tends to be of the explanatory type, in that it searches for potential explanations of observed natural or social phenomena. Explanations require the development of concepts, or generalizable properties or characteristics associated with objects, events, or people. While objects such as a person, a firm, or a car are not concepts, their specific characteristics or behaviors, such as a person’s attitude toward immigrants, a firm’s capacity for innovation, and a car’s weight, can be viewed as concepts. Knowingly or unknowingly, we use different kinds of concepts in our everyday conversations. Some of these concepts have been developed over time through our shared language. Sometimes, we borrow concepts from other disciplines or languages to explain a phenomenon of interest. For instance, the idea of gravitation borrowed from physics can be used in business to describe why people tend to “gravitate” to their preferred shopping destinations. Likewise, the concept of distance can be used to explain the degree of social separation between two otherwise collocated individuals. Sometimes, we create our own concepts to describe a unique characteristic not described in prior research. For instance, technostress is a new concept referring to the mental stress one may face when asked to learn a new technology. Concepts may also have progressive levels of abstraction. Some concepts, such as a person’s weight, are precise and objective, while other concepts, such as a person’s personality, may be more abstract and difficult to visualize. A construct is an abstract concept that is specifically chosen (or “created”) to explain a given phenomenon.
A construct may be a simple concept, such as a person’s weight, or a combination of a set of related concepts, such as a person’s communication skill, which may consist of several underlying concepts such as the person’s vocabulary, syntax, and spelling. The former instance (weight) is a unidimensional construct, while the latter (communication skill) is a multi-dimensional construct (i.e., it consists of multiple underlying concepts). The distinction between constructs and concepts is clearer in multi-dimensional constructs, where the higher-order abstraction is called a construct and the lower-order abstractions are called concepts. However, this distinction tends to blur in the case of unidimensional constructs. Constructs used for scientific research must have precise and clear definitions that others can use to understand exactly what they mean and what they do not mean. For instance, a seemingly simple construct such as income may refer to monthly or annual income, before-tax or after-tax income, and personal or family income, and is therefore neither precise nor clear. There are two types of definitions: dictionary definitions and operational definitions. In the more familiar dictionary definition, a construct is often defined in terms of a synonym. For instance, attitude may be defined as a disposition, a feeling, or an affect, and affect in turn is defined as an attitude. Such circular definitions are not particularly useful in scientific research for elaborating the meaning and content of a construct. Scientific research requires operational definitions that define constructs in terms of how they will be empirically measured. For instance, the operational definition of a construct such as temperature must specify whether we plan to measure temperature on the Celsius, Fahrenheit, or Kelvin scale.
A construct such as income should be defined in terms of whether we are interested in monthly or annual income, before-tax or after-tax income, and personal or family income. One can imagine that constructs such as learning, personality, and intelligence can be quite hard to define operationally. A term frequently associated with, and sometimes used interchangeably with, a construct is a variable. Etymologically speaking, a variable is a quantity that can vary (e.g., from low to high, negative to positive, etc.), in contrast to constants that do not vary (i.e., remain constant). However, in scientific research, a variable is a measurable representation of an abstract construct. As abstract entities, constructs are not directly measurable, and hence we look for proxy measures called variables. For instance, a person’s intelligence is often measured as his or her IQ (intelligence quotient) score, which is an index generated from an analytical and pattern-matching test administered to people. In this case, intelligence is a construct, and IQ score is a variable that measures the intelligence construct. Whether IQ scores truly measure one’s intelligence is anyone’s guess (though many believe that they do), and depending on how well they measure it, the IQ score may be a good or a poor measure of the intelligence construct. As shown in Figure 2.1, scientific research proceeds along two planes: a theoretical plane and an empirical plane. Constructs are conceptualized at the theoretical (abstract) plane, while variables are operationalized and measured at the empirical (observational) plane. Thinking like a researcher implies the ability to move back and forth between these two planes. Depending on their intended use, variables may be classified as independent, dependent, moderating, mediating, or control variables.
Variables that explain other variables are called independent variables, those that are explained by other variables are dependent variables, those that are explained by independent variables while also explaining dependent variables are mediating variables (or intermediate variables), and those that influence the relationship between independent and dependent variables are called moderating variables. As an example, if we state that higher intelligence causes improved learning among students, then intelligence is an independent variable and learning is a dependent variable. There may be other extraneous variables that are not pertinent to explaining a given dependent variable, but may have some impact on it. These variables must be controlled for in a scientific study, and are therefore called control variables. To understand the differences between these variable types, consider the example shown in Figure 2.2. If we believe that intelligence influences (or explains) students’ academic achievement, then a measure of intelligence such as an IQ score is an independent variable, while a measure of academic success such as grade point average is a dependent variable. If we believe that the effect of intelligence on academic achievement also depends on the effort invested by the student in the learning process (i.e., between two equally intelligent students, the student who puts in more effort achieves higher academic achievement than one who puts in less effort), then effort becomes a moderating variable. Incidentally, one may also view effort as an independent variable and intelligence as a moderating variable. If academic achievement is viewed as an intermediate step to higher earning potential, then earning potential becomes the dependent variable for the independent variable academic achievement, and academic achievement becomes the mediating variable in the relationship between intelligence and earning potential.
Hence, variables are defined as independent, dependent, moderating, or mediating based on their nature of association with each other. The overall network of relationships between a set of related constructs is called a nomological network (see Figure 2.2). Thinking like a researcher requires not only being able to abstract constructs from observations, but also being able to mentally visualize a nomological network linking these abstract constructs.
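One common way that quantitative researchers represent a moderating variable is as an interaction (product) term in a regression model. The sketch below is only an illustration of that idea under assumed conditions: the data are simulated (made-up), the linear model form is assumed, and the variable names are hypothetical. It recovers a positive interaction between intelligence and effort:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Simulated (hypothetical) data: IQ and effort scores, centered at zero.
iq = rng.normal(0.0, 15.0, n)
effort = rng.normal(0.0, 10.0, n)

# Moderation: the effect of IQ on achievement grows with effort, which
# appears as a nonzero coefficient on the iq * effort product term.
achievement = (3.0 + 0.02 * iq + 0.03 * effort
               + 0.002 * iq * effort
               + rng.normal(0.0, 0.5, n))

# Ordinary least squares with an interaction (moderator) term.
X = np.column_stack([np.ones(n), iq, effort, iq * effort])
coef, *_ = np.linalg.lstsq(X, achievement, rcond=None)
intercept, b_iq, b_effort, b_interaction = coef

print(f"interaction coefficient ~ {b_interaction:.4f}")  # near 0.002
```

A statistically significant interaction coefficient would be evidence that effort moderates the intelligence-achievement relationship; a mediating variable, by contrast, would be modeled as a separate equation in which it is first the dependent and then an independent variable.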
Figure 2.2 shows how theoretical constructs such as intelligence, effort, academic achievement, and earning potential are related to each other in a nomological network. Each of these relationships is called a proposition. In seeking explanations of a given phenomenon or behavior, it is not adequate just to identify the key concepts and constructs underlying the target phenomenon or behavior. We must also identify and state the patterns of relationships between these constructs. Such patterns of relationships are called propositions. A proposition is a tentative and conjectural relationship between constructs that is stated in a declarative form. An example of a proposition is: “An increase in student intelligence causes an increase in their academic achievement.” This declarative statement does not have to be true, but must be empirically testable using data, so that we can judge whether it is true or false. Propositions are generally derived based on logic (deduction) or empirical observations (induction). Because propositions are associations between abstract constructs, they cannot be tested directly. Instead, they are tested indirectly by examining the relationship between corresponding measures (variables) of those constructs. The empirical formulation of propositions, stated as relationships between variables, is called hypotheses (see Figure 2.1). Since IQ scores and grade point average are operational measures of intelligence and academic achievement respectively, the above proposition can be specified in the form of the hypothesis: “An increase in students’ IQ score causes an increase in their grade point average.” Propositions are specified in the theoretical plane, while hypotheses are specified in the empirical plane. Hence, hypotheses are empirically testable using observed data, and may be rejected if not supported by empirical observations. Of course, the goal of hypothesis testing is to infer whether the corresponding proposition is valid.
Hypotheses can be strong or weak. “Students’ IQ scores are related to their academic achievement” is an example of a weak hypothesis, since it indicates neither the directionality of the hypothesis (i.e., whether the relationship is positive or negative), nor its causality (i.e., whether intelligence causes academic achievement or academic achievement causes intelligence). A stronger hypothesis is “students’ IQ scores are positively related to their academic achievement”, which indicates the directionality but not the causality. A still better hypothesis is “students’ IQ scores have positive effects on their academic achievement”, which specifies both the directionality and the causality (i.e., intelligence causes academic achievement, and not the reverse). The signs in Figure 2.2 indicate the directionality of the respective hypotheses. Also note that scientific hypotheses should clearly specify independent and dependent variables. In the hypothesis, “students’ IQ scores have positive effects on their academic achievement,” it is clear that intelligence is the independent variable (the “cause”) and academic achievement is the dependent variable (the “effect”). Further, it is also clear that this hypothesis can be evaluated as either true (if higher intelligence leads to higher academic achievement) or false (if higher intelligence has no effect on or leads to lower academic achievement). Later on in this book, we will examine how to empirically test such cause-effect relationships. Statements such as “students are generally intelligent” or “all students can achieve academic success” are not scientific hypotheses because they do not specify independent and dependent variables, nor do they specify a directional relationship that can be evaluated as true or false.
A theory is a set of systematically interrelated constructs and propositions intended to explain and predict a phenomenon or behavior of interest, within certain boundary conditions and assumptions. Essentially, a theory is a systematic collection of related theoretical propositions. While propositions generally connect two or three constructs, theories represent a system of multiple constructs and propositions. Hence, theories can be substantially more complex and abstract, and of a larger scope, than propositions or hypotheses. I must note here that people not familiar with scientific research often view a theory as a speculation or the opposite of fact. For instance, people often say that teachers need to be less theoretical and more practical or factual in their classroom teaching. However, practice and fact are not opposites of theory; in a scientific sense, they are essential components needed to test the validity of a theory. A good scientific theory should be well supported by observed facts and should also have practical value, while a poorly defined theory tends to be lacking in these dimensions. Famous organizational researcher Kurt Lewin once said, “Theory without practice is sterile; practice without theory is blind.” Hence, both theory and facts (or practice) are essential for scientific research. Theories provide explanations of social or natural phenomena. As emphasized in Chapter 1, these explanations may be good or poor. Hence, there may be good or poor theories. Chapter 3 describes some criteria that can be used to evaluate how good a theory really is. Nevertheless, it is important for researchers to understand that theory is not “truth,” there is nothing sacrosanct about any theory, and theories should not be accepted just because they were proposed by someone. In the course of scientific progress, poorer theories are eventually replaced by better theories with higher explanatory power.
The essential challenge for researchers is to build better and more comprehensive theories that can explain a target phenomenon better than prior theories. A term often used in conjunction with theory is a model. A model is a representation of all or part of a system that is constructed to study that system (e.g., how the system works or what triggers the system). While a theory tries to explain a phenomenon, a model tries to represent a phenomenon. Models are often used by decision makers to make important decisions based on a given set of inputs. For instance, marketing managers may use models to decide how much money to spend on advertising for different product lines based on parameters such as the prior year's advertising expenses, sales, market growth, and competing products. Likewise, weather forecasters can use models to predict future weather patterns based on parameters such as wind speed, wind direction, temperature, and humidity. While these models are useful, they may not necessarily explain advertising expenditure or weather patterns. Models may be of different kinds, such as mathematical models, network models, and path models. Models can also be descriptive, predictive, or normative. Descriptive models are frequently used for representing complex systems and for visualizing the variables and relationships in such systems. An advertising expenditure model may be a descriptive model. Predictive models (e.g., a regression model) allow forecasting of future events. Weather forecasting models are predictive models. Normative models are used to guide our activities along commonly accepted norms or practices. Models may also be static, if they represent the state of a system at one point in time, or dynamic, if they represent a system's evolution over time. The process of theory or model development may involve inductive and deductive reasoning. 
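The idea of a predictive model can be made concrete with a tiny sketch: a simple least-squares regression that forecasts sales from advertising spend. This is only an illustration of the general technique, not a model from the text; all the figures below are invented for the example.

```python
# A minimal sketch of a predictive model: simple least-squares regression
# predicting sales from advertising spend. All numbers are hypothetical,
# invented purely for illustration.
from statistics import mean

ad_spend = [10, 20, 30, 40, 50]   # advertising expenses (in thousands)
sales = [25, 44, 58, 81, 96]      # observed sales (in thousands)

# Ordinary least-squares slope and intercept
x_bar, y_bar = mean(ad_spend), mean(sales)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(ad_spend, sales))
         / sum((x - x_bar) ** 2 for x in ad_spend))
intercept = y_bar - slope * x_bar

def predict(spend):
    """Forecast sales for a given advertising spend."""
    return intercept + slope * spend

print(round(predict(60), 1))  # forecast for a new, unobserved spend level
```

Note that, exactly as the text says, this model predicts without explaining: it captures a correlation between spend and sales but says nothing about why the relationship holds.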
Recall from Chapter 1 that deduction is the process of drawing conclusions about a phenomenon or behavior based on theoretical or logical reasons and an initial set of premises. As an example, if a certain bank enforces a strict code of ethics for its employees (Premise 1) and Jamie is an employee at that bank (Premise 2), then Jamie can be trusted to follow ethical practices (Conclusion). In deduction, the conclusions must be true if the initial premises and reasons are correct. In contrast, induction is the process of drawing conclusions based on facts or observed evidence. For instance, if a firm spent a lot of money on a promotional campaign (Observation 1), but sales did not increase (Observation 2), then possibly the promotional campaign was poorly executed (Conclusion). However, there may be rival explanations for poor sales, such as an economic recession, the emergence of a competing product or brand, or perhaps a supply chain problem. Inductive conclusions are therefore only hypotheses, and may be disproven. Deductive conclusions generally tend to be stronger than inductive conclusions, but a deductive conclusion based on an incorrect premise is also incorrect. As shown in Figure 2.3, inductive and deductive reasoning go hand in hand in theory and model building. Induction occurs when we observe a fact and ask, "Why is this happening?" In answering this question, we advance one or more tentative explanations (hypotheses). We then use deduction to narrow down the tentative explanations to the most plausible explanation based on logic and reasonable premises (based on our understanding of the phenomenon under study). Researchers must be able to move back and forth between inductive and deductive reasoning if they are to posit extensions or modifications to a given model or theory, or build better ones, which is the essence of scientific research.
In Chapter 1, we saw that scientific research is the process of acquiring scientific knowledge using the scientific method. But how is such research conducted? This chapter delves into the process of scientific research, and the assumptions and outcomes of the research process.

03: The Research Process

Our design and conduct of research is shaped by the mental models or frames of reference that we use to organize our reasoning and observations. These mental models or frames (belief systems) are called paradigms. The word "paradigm" was popularized by Thomas Kuhn (1962) in his book The Structure of Scientific Revolutions, where he examined the history of the natural sciences to identify patterns of activities that shape the progress of science. Similar ideas are applicable to the social sciences as well, where a social reality can be viewed by different people in different ways, which may constrain their thinking and reasoning about the observed phenomenon. For instance, conservatives and liberals tend to have very different perceptions of the role of government in people's lives, and hence, have different opinions on how to solve social problems. Conservatives may believe that lowering taxes is the best way to stimulate a stagnant economy because it increases people's disposable income and spending, which in turn expands business output and employment. In contrast, liberals may believe that governments should invest more directly in job creation programs such as public works and infrastructure projects, which will increase employment and people's ability to consume and drive the economy. Likewise, Western societies place greater emphasis on individual rights, such as one's right to privacy, right of free speech, and right to bear arms. In contrast, Asian societies tend to balance the rights of individuals against the rights of families, organizations, and the government, and therefore tend to be more communal and less individualistic in their policies. 
Such differences in perspective often lead Westerners to criticize Asian governments for being autocratic, while Asians criticize Western societies for being greedy, having high crime rates, and creating a "cult of the individual." Our personal paradigms are like "colored glasses" that govern how we view the world and how we structure our thoughts about what we see in the world. Paradigms are often hard to recognize, because they are implicit, assumed, and taken for granted. However, recognizing these paradigms is key to making sense of and reconciling differences in people's perceptions of the same social phenomenon. For instance, why do liberals believe that the best way to improve secondary education is to hire more teachers, while conservatives believe that privatizing education (using such means as school vouchers) is more effective in achieving the same goal? Because conservatives place more faith in competitive markets (i.e., in free competition between schools competing for education dollars), while liberals believe more in labor (i.e., in having more teachers and schools). Likewise, in social science research, if one were to understand why a certain technology was successfully implemented in one organization but failed miserably in another, a researcher looking at the world through a "rational lens" will look for rational explanations of the problem, such as inadequate technology or a poor fit between the technology and the task context where it is being utilized, while another researcher looking at the same problem through a "social lens" may seek out social deficiencies such as inadequate user training or lack of management support, and those seeing it through a "political lens" will look for instances of organizational politics that may subvert the technology implementation process. Hence, subconscious paradigms often constrain the concepts that researchers attempt to measure, their observations, and their subsequent interpretations of a phenomenon. 
However, given the complex nature of social phenomena, it is possible that all of the above paradigms are partially correct, and that a fuller understanding of the problem may require an understanding and application of multiple paradigms. Two popular paradigms today among social science researchers are positivism and post-positivism. Positivism, based on the works of French philosopher Auguste Comte (1798-1857), was the dominant scientific paradigm until the mid-20th century. It holds that science or knowledge creation should be restricted to what can be observed and measured. Positivism tends to rely exclusively on theories that can be directly tested. Though positivism was originally an attempt to separate scientific inquiry from religion (where the precepts could not be objectively observed), positivism led to empiricism, or a blind faith in observed data and a rejection of any attempt to extend or reason beyond observable facts. Since human thoughts and emotions could not be directly measured, they were not considered legitimate topics for scientific research. Frustrations with the strictly empirical nature of positivist philosophy led to the development of post-positivism (or postmodernism) during the mid-to-late 20th century. Post-positivism argues that one can make reasonable inferences about a phenomenon by combining empirical observations with logical reasoning. Post-positivists view science as not certain but probabilistic (i.e., based on many contingencies), and often seek to explore these contingencies to understand social reality better. The post-positivist camp has further fragmented into subjectivists, who view the world as a construction of our subjective minds rather than as an objective reality, and critical realists, who believe that there is an external reality that is independent of a person's thinking, but that we can never know such reality with any degree of certainty. 
Burrell and Morgan (1979), in their seminal book Sociological Paradigms and Organizational Analysis, suggested that the way social science researchers view and study social phenomena is shaped by two fundamental sets of philosophical assumptions: ontology and epistemology. Ontology refers to our assumptions about how we see the world, e.g., does the world consist mostly of social order or constant change? Epistemology refers to our assumptions about the best way to study the world, e.g., should we use an objective or subjective approach to study social reality? Using these two sets of assumptions, we can categorize social science research as belonging to one of four categories (see Figure 3.1). If researchers view the world as consisting mostly of social order (ontology) and hence seek to study patterns of ordered events or behaviors, and believe that the best way to study such a world is using an objective approach (epistemology) that is independent of the person conducting the observation or interpretation, such as by using standardized data collection tools like surveys, then they are adopting a paradigm of functionalism. However, if they believe that the best way to study social order is through the subjective interpretation of the participants involved, such as by interviewing different participants and reconciling differences among their responses using their own subjective perspectives, then they are employing an interpretivism paradigm. If researchers believe that the world consists of radical change and seek to understand or enact change using an objectivist approach, then they are employing a radical structuralism paradigm. If they wish to understand social change using the subjective perspectives of the participants involved, then they are following a radical humanism paradigm. To date, the majority of social science research has emulated the natural sciences and followed the functionalist paradigm. 
Functionalists believe that social order or patterns can be understood in terms of their functional components, and therefore attempt to break down a problem into small components and study one or more components in detail using objectivist techniques such as surveys and experimental research. However, with the emergence of post-positivist thinking, a small but growing number of social science researchers are attempting to understand social order using subjectivist techniques such as interviews and ethnographic studies. Radical humanism and radical structuralism continue to represent a negligible proportion of social science research, because scientists are primarily concerned with understanding generalizable patterns of behavior, events, or phenomena, rather than idiosyncratic or changing events. Nevertheless, if you wish to study social change, such as why democratic movements are increasingly emerging in Middle Eastern countries, or why this movement was successful in Tunisia, took a longer path to success in Libya, and is still not successful in Syria, then perhaps radical humanism is the right approach for such a study. Social and organizational phenomena generally consist of elements of both order and change. For instance, organizational success depends on formalized business processes, work procedures, and job responsibilities, while being simultaneously constrained by a constantly changing mix of competitors, competing products, suppliers, and customer base in the business environment. Hence, a holistic and more complete understanding of social phenomena, such as why some organizations are more successful than others, requires an appreciation and application of a multi-paradigmatic approach to research.
So how do our mental paradigms shape social science research? At its core, all scientific research is an iterative process of observation, rationalization, and validation. In the observation phase, we observe a natural or social phenomenon, event, or behavior that interests us. In the rationalization phase, we try to make sense of the observed phenomenon, event, or behavior by logically connecting the different pieces of the puzzle that we observe, which in some cases may lead to the construction of a theory. Finally, in the validation phase, we test our theories using a scientific method through a process of data collection and analysis, and in doing so, possibly modify or extend our initial theory. However, research designs vary based on whether the researcher starts at observation and attempts to rationalize the observations (inductive research), or whether the researcher starts at an ex ante rationalization or a theory and attempts to validate the theory (deductive research). Hence, the observation-rationalization-validation cycle is very similar to the induction-deduction cycle of research discussed in Chapter 1. Most traditional research tends to be deductive and functionalist in nature. Figure 3.2 provides a schematic view of such a research project. This figure depicts a series of activities to be performed in functionalist research, categorized into three phases: exploration, research design, and research execution. Note that this generalized design is not a roadmap or flowchart for all research. It applies only to functionalist research, and it can and should be modified to fit the needs of a specific project. The first phase of research is exploration. This phase includes exploring and selecting research questions for further investigation, examining the published literature in the area of inquiry to understand the current state of knowledge in that area, and identifying theories that may help answer the research questions of interest. 
The first step in the exploration phase is identifying one or more research questions dealing with a specific behavior, event, or phenomenon of interest. Research questions are specific questions about a behavior, event, or phenomenon of interest that you wish to seek answers for in your research. Examples include what factors motivate consumers to purchase goods and services online without knowing the vendors of these goods or services, how we can make high school students more creative, and why some people commit terrorist acts. Research questions can delve into issues of what, why, how, when, and so forth. More interesting research questions are those that appeal to a broader population (e.g., "how can firms innovate" is a more interesting research question than "how can Chinese firms innovate in the service sector"), address real and complex problems (in contrast to hypothetical or "toy" problems), and where the answers are not obvious. Narrowly focused research questions (often with a binary yes/no answer) tend to be less useful, less interesting, and less suited to capturing the subtle nuances of social phenomena. Uninteresting research questions generally lead to uninteresting and unpublishable research findings. The next step is to conduct a literature review of the domain of interest. The purpose of a literature review is three-fold: (1) to survey the current state of knowledge in the area of inquiry, (2) to identify key authors, articles, theories, and findings in that area, and (3) to identify gaps in knowledge in that research area. Literature reviews are commonly done today using computerized keyword searches in online databases. Keywords can be combined using "and" and "or" operations to narrow down or expand the search results. 
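The narrowing and broadening effect of "and" and "or" keyword operations can be sketched in a few lines of code. The article titles below are invented purely for illustration; real databases implement this logic internally.

```python
# A toy sketch of "and" vs. "or" keyword searches over article records.
# The titles below are hypothetical, invented for illustration only.
articles = [
    "Consumer trust in online shopping",
    "Creativity training in high schools",
    "Online vendor reputation and purchase intention",
    "Terrorism and social identity",
]

def matches_all(title, keywords):
    """'and' search: every keyword must appear in the title."""
    return all(k.lower() in title.lower() for k in keywords)

def matches_any(title, keywords):
    """'or' search: at least one keyword must appear in the title."""
    return any(k.lower() in title.lower() for k in keywords)

# "online" AND "trust" narrows the results; "creativity" OR "terrorism" expands them
narrow = [t for t in articles if matches_all(t, ["online", "trust"])]
broad = [t for t in articles if matches_any(t, ["creativity", "terrorism"])]
print(len(narrow), len(broad))  # the "and" query returns fewer records than the "or" query
```

The same principle applies when combining keywords in library databases: conjunctions shrink the result set, disjunctions grow it.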
Once a shortlist of relevant articles is generated from the keyword search, the researcher must then manually browse through each article, or at least its abstract, to determine the suitability of that article for a detailed review. Literature reviews should be reasonably complete, and not restricted to a few journals, a few years, or a specific methodology. Reviewed articles may be summarized in the form of tables, and can be further structured using organizing frameworks such as a concept matrix. A well-conducted literature review should indicate whether the initial research questions have already been addressed in the literature (which would obviate the need to study them again), whether there are newer or more interesting research questions available, and whether the original research questions should be modified or changed in light of the findings of the literature review. The review can also provide some intuitions or potential answers to the questions of interest and/or help identify theories that have previously been used to address similar questions. Since functionalist (deductive) research involves theory-testing, the third step is to identify one or more theories that can help address the desired research questions. While the literature review may uncover a wide range of concepts or constructs potentially related to the phenomenon of interest, a theory will help identify which of these constructs is logically relevant to the target phenomenon and how. Forgoing theories may result in measuring a wide range of less relevant, marginally relevant, or irrelevant constructs, while also reducing the chances of obtaining results that are meaningful rather than due to pure chance. In functionalist research, theories can be used as the logical basis for postulating hypotheses for empirical testing. Obviously, not all theories are well-suited for studying all social phenomena. 
Theories must be carefully selected based on their fit with the target problem and the extent to which their assumptions are consistent with those of the target problem. We will examine theories and the process of theorizing in detail in the next chapter. The next phase in the research process is research design. This process is concerned with creating a blueprint of the activities to undertake in order to satisfactorily answer the research questions identified in the exploration phase. This includes selecting a research method, operationalizing constructs of interest, and devising an appropriate sampling strategy. Operationalization is the process of designing precise measures for abstract theoretical constructs. This is a major problem in social science research, given that many of the constructs, such as prejudice, alienation, and liberalism, are hard to define, let alone measure accurately. Operationalization starts with specifying an "operational definition" (or "conceptualization") of the constructs of interest. Next, the researcher can search the literature to see if there are existing prevalidated measures matching their operational definition that can be used directly or modified to measure their constructs of interest. If such measures are not available, or if existing measures are poor or reflect a different conceptualization than that intended by the researcher, new instruments may have to be designed for measuring those constructs. This means specifying exactly how the desired construct will be measured (e.g., how many items, what items, and so forth). This can easily be a long and laborious process, with multiple rounds of pretests and modifications before the newly designed instrument can be accepted as "scientifically valid." We will discuss operationalization of constructs in a future chapter on measurement. 
Simultaneously with operationalization, the researcher must also decide what research method they wish to employ for collecting data to address their research questions of interest. Such methods may include quantitative methods such as experiments or survey research, qualitative methods such as case research or action research, or possibly a combination of both. If an experiment is desired, then what is the experimental design? If a survey, do you plan a mail survey, telephone survey, web survey, or a combination? For complex, uncertain, and multifaceted social phenomena, multi-method approaches may be more suitable, as they help leverage the unique strengths of each research method and generate insights that may not be obtained using a single method. Researchers must also carefully choose the target population from which they wish to collect data, and a sampling strategy to select a sample from that population. For instance, should they survey individuals, firms, or workgroups within firms? What types of individuals or firms do they wish to target? Sampling strategy is closely related to the unit of analysis in a research problem. While selecting a sample, reasonable care should be taken to avoid a biased sample (e.g., a sample based on convenience) that may generate biased observations. Sampling is covered in depth in a later chapter. At this stage, it is often a good idea to write a research proposal detailing all of the decisions made in the preceding stages of the research process and the rationale behind each decision. This multi-part proposal should address what research questions you wish to study and why, the prior state of knowledge in this area, the theories you wish to employ along with the hypotheses to be tested, how you will measure constructs, what research method will be employed and why, and the desired sampling strategy. Funding agencies typically require such a proposal in order to select the best proposals for funding. 
Even if funding is not sought for a research project, a proposal may serve as a useful vehicle for seeking feedback from other researchers and identifying potential problems with the research project (e.g., whether some important constructs were missing from the study) before starting data collection. This initial feedback is invaluable because it is often too late to correct critical problems after data is collected in a research study. Having decided who to study (subjects), what to measure (concepts), and how to collect data (research method), the researcher is now ready to proceed to the research execution phase. This includes pilot testing the measurement instruments, data collection, and data analysis. Pilot testing is an often overlooked but extremely important part of the research process. It helps detect potential problems in your research design and/or instrumentation (e.g., whether the questions asked are intelligible to the targeted sample), and ensures that the measurement instruments used in the study are reliable and valid measures of the constructs of interest. The pilot sample is usually a small subset of the target population. After successful pilot testing, the researcher may then proceed with data collection using the sampled population. The data collected may be quantitative or qualitative, depending on the research method employed. Following data collection, the data is analyzed and interpreted for the purpose of drawing conclusions regarding the research questions of interest. Depending on the type of data collected (quantitative or qualitative), data analysis may be quantitative (e.g., employing statistical techniques such as regression or structural equation modeling) or qualitative (e.g., coding or content analysis). The final phase of research involves preparing the final research report documenting the entire research process and its findings in the form of a research paper, dissertation, or monograph. 
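One common way to check reliability during the pilot-testing step described above is Cronbach's alpha for a multi-item scale: alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores). The sketch below assumes a hypothetical four-item Likert scale and invented pilot responses; the text itself does not prescribe this particular statistic.

```python
# A sketch of Cronbach's alpha as a pilot-test reliability check.
# The responses are hypothetical, invented for illustration only.
from statistics import variance

# Each row: one respondent's answers to a 4-item scale (1-5 Likert)
responses = [
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 4, 5],
    [3, 3, 3, 2],
    [4, 5, 4, 4],
]

k = len(responses[0])                          # number of items in the scale
items = list(zip(*responses))                  # column-wise item scores
item_var_sum = sum(variance(item) for item in items)
total_var = variance([sum(r) for r in responses])
alpha = (k / (k - 1)) * (1 - item_var_sum / total_var)
print(round(alpha, 2))  # values above roughly 0.7 are conventionally taken as acceptable
```

A low alpha at the pilot stage signals that the items do not hang together as a measure of one construct, and that the instrument should be revised before full data collection.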
This report should outline in detail all the choices made during the research process (e.g., theory used, constructs selected, measures used, research methods, sampling, etc.) and why, as well as the outcomes of each phase of the research process. The research process must be described in sufficient detail to allow other researchers to replicate your study, test the findings, or assess whether the inferences derived are scientifically acceptable. Of course, having a ready research proposal will greatly simplify and quicken the process of writing the finished report. Note that research is of no value unless the research process and outcomes are documented for future generations; such documentation is essential for the incremental progress of science.

3.03: Common Mistakes in Research

The research process is fraught with problems and pitfalls, and novice researchers often find, after investing substantial amounts of time and effort into a research project, that their research questions were not sufficiently answered, or that the findings were not interesting enough, or that the research was not of "acceptable" scientific quality. Such problems typically result in research papers being rejected by journals. Some of the more frequent mistakes are described below. Insufficiently motivated research questions. Oftentimes, we choose our "pet" problems that are interesting to us but not to the scientific community at large, i.e., they do not generate new knowledge or insight about the phenomenon being investigated. Because the research process involves a significant investment of time and effort on the researcher's part, the researcher must be certain (and be able to convince others) that the research questions they seek to answer in fact deal with real problems (and not hypothetical problems) that affect a substantial portion of a population and have not been adequately addressed in prior research. Pursuing research fads. 
Another common mistake is pursuing "popular" topics with a limited shelf life. A typical example is studying technologies or practices that are popular today. Because research takes several years to complete and publish, it is possible that popular interest in these fads may die down by the time the research is completed and submitted for publication. A better strategy may be to study "timeless" topics that have persisted through the years. Unresearchable problems. Some research problems may not be answered adequately based on observed evidence alone, or using currently accepted methods and procedures. Such problems are best avoided. However, some unresearchable, ambiguously defined problems may be modified or fine-tuned into well-defined and useful researchable problems. Favored research methods. Many researchers have a tendency to recast a research problem so that it is amenable to their favorite research method (e.g., survey research). This is an unfortunate trend. Research methods should be chosen to best fit a research problem, and not the other way around. Blind data mining. Some researchers have a tendency to collect data first (using instruments that are already available), and then figure out what to do with it. Note that data collection is only one step in a long and elaborate process of planning, designing, and executing research. In fact, a series of other activities is needed in a research process prior to data collection. If researchers jump into data collection without such elaborate planning, the data collected will likely be irrelevant, imperfect, or useless, and their data collection efforts may be entirely wasted. An abundance of data cannot make up for deficits in research planning and design, and particularly, for the lack of interesting research questions.
As we know from previous chapters, science is knowledge represented as a collection of "theories" derived using the scientific method. In this chapter, we will examine what a theory is, why we need theories in research, what the building blocks of a theory are, how to evaluate theories, and how to apply theories in research, and we will also present illustrative examples of five theories frequently used in social science research.

04: Theories in Scientific Research

Theories are explanations of a natural or social behavior, event, or phenomenon. More formally, a scientific theory is a system of constructs (concepts) and propositions (relationships between those constructs) that collectively presents a logical, systematic, and coherent explanation of a phenomenon of interest within some assumptions and boundary conditions (Bacharach 1989). Theories should explain why things happen, rather than just describe or predict them. Note that it is possible to predict events or behaviors using a set of predictors, without necessarily explaining why such events are taking place. For instance, market analysts predict fluctuations in the stock market based on market announcements, earnings reports of major companies, and new data from the Federal Reserve and other agencies, based on previously observed correlations. Prediction requires only correlations. In contrast, explanations require causation, or an understanding of cause-effect relationships. Establishing causation requires three conditions: (1) correlation between two constructs, (2) temporal precedence (the cause must precede the effect in time), and (3) rejection of alternative hypotheses (through testing). Scientific theories are different from theological, philosophical, or other explanations in that scientific theories can be empirically tested using scientific methods. Explanations can be idiographic or nomothetic. Idiographic explanations are those that explain a single situation or event in idiosyncratic detail. 
For example, you did poorly on an exam because: (1) you forgot that you had an exam on that day, (2) you arrived late to the exam due to a traffic jam, (3) you panicked midway through the exam, (4) you had to work late the previous evening and could not study for the exam, or even (5) your dog ate your textbook. The explanations may be detailed, accurate, and valid, but they may not apply to other similar situations, even those involving the same person, and are hence not generalizable. In contrast, nomothetic explanations seek to explain a class of situations or events rather than a specific situation or event. For example, students who do poorly in exams do so because they did not spend adequate time preparing for exams, or because they suffer from nervousness, attention-deficit, or some other medical disorder. Because nomothetic explanations are designed to be generalizable across situations, events, or people, they tend to be less precise, less complete, and less detailed. However, they explain economically, using only a few explanatory variables. Because theories are also intended to serve as generalized explanations for patterns of events, behaviors, or phenomena, theoretical explanations are generally nomothetic in nature. While understanding theories, it is also important to understand what theory is not. Theory is not data, facts, typologies, taxonomies, or empirical findings. A collection of facts is not a theory, just as a pile of stones is not a house. Likewise, a collection of constructs (e.g., a typology of constructs) is not a theory, because theories must go well beyond constructs to include propositions, explanations, and boundary conditions. Data, facts, and findings operate at the empirical or observational level, while theories operate at a conceptual level and are based on logic rather than observations. There are many benefits to using theories in research. 
First, theories provide the underlying logic for the occurrence of natural or social phenomena by explaining what the key drivers and key outcomes of the target phenomenon are and why, and what underlying processes are responsible for driving that phenomenon. Second, they aid in sense-making by helping us synthesize prior empirical findings within a theoretical framework and reconcile contradictory findings by discovering contingent factors influencing the relationship between two constructs in different studies. Third, theories provide guidance for future research by helping identify constructs and relationships that are worthy of further study. Fourth, theories can contribute to cumulative knowledge building by bridging gaps between other theories and by causing existing theories to be reevaluated in a new light. However, theories also have their own share of limitations. As simplified explanations of reality, theories may not always provide adequate explanations of the phenomenon of interest based on a limited set of constructs and relationships. Theories are designed to be simple and parsimonious explanations, while reality may be significantly more complex. Furthermore, theories may impose blinders or limit researchers’ “range of vision,” causing them to miss out on important concepts that are not defined by the theory.
textbooks/socialsci/Social_Work_and_Human_Services/Social_Science_Research_-_Principles_Methods_and_Practices_(Bhattacherjee)/04%3A_Theories_in_Scientific_Research/4.01%3A_Theories.txt
David Whetten (1989) suggests that there are four building blocks of a theory: constructs, propositions, logic, and boundary conditions/assumptions. Constructs capture the “what” of theories (i.e., what concepts are important for explaining a phenomenon), propositions capture the “how” (i.e., how these concepts are related to each other), logic represents the “why” (i.e., why these concepts are related), and boundary conditions/assumptions capture the “who, when, and where” (i.e., under what circumstances these concepts and relationships will work). Though constructs and propositions were previously discussed in Chapter 2, we describe them again here for the sake of completeness. Constructs are abstract concepts specified at a high level of abstraction that are chosen specifically to explain the phenomenon of interest. Recall from Chapter 2 that constructs may be unidimensional (i.e., embody a single concept), such as weight or age, or multi-dimensional (i.e., embody multiple underlying concepts), such as personality or culture. While some constructs, such as age, education, and firm size, are easy to understand, others, such as creativity, prejudice, and organizational agility, may be more complex and abstruse, and still others, such as trust, attitude, and learning, may represent temporal tendencies rather than steady states. Nevertheless, all constructs must have a clear and unambiguous operational definition that specifies exactly how the construct will be measured and at what level of analysis (individual, group, organizational, etc.). Measurable representations of abstract constructs are called variables. For instance, intelligence quotient (IQ score) is a variable that is purported to measure an abstract construct called intelligence. As noted earlier, scientific research proceeds along two planes: a theoretical plane and an empirical plane.
Constructs are conceptualized at the theoretical plane, while variables are operationalized and measured at the empirical (observational) plane. Furthermore, variables may be independent, dependent, mediating, or moderating, as discussed in Chapter 2. The distinction between constructs (conceptualized at the theoretical level) and variables (measured at the empirical level) is shown in Figure 4.1. Propositions are associations postulated between constructs based on deductive logic. Propositions are stated in declarative form and should ideally indicate a cause-effect relationship (e.g., if X occurs, then Y will follow). Note that propositions may be conjectural but must be testable, and should be rejected if they are not supported by empirical observations. However, like constructs, propositions are stated at the theoretical level, and they can only be tested by examining the corresponding relationship between measurable variables of those constructs. The empirical formulations of propositions, stated as relationships between variables, are called hypotheses. The distinction between propositions (formulated at the theoretical level) and hypotheses (tested at the empirical level) is depicted in Figure 4.1. The third building block of a theory is the logic that provides the basis for justifying the propositions as postulated. Logic acts like a “glue” that connects the theoretical constructs and provides meaning and relevance to the relationships between these constructs. Logic also represents the “explanation” that lies at the core of a theory. Without logic, propositions would be ad hoc, arbitrary, and meaningless, and could not be tied into a cohesive “system of propositions” that is the heart of any theory. Finally, all theories are constrained by assumptions about values, time, and space, and by boundary conditions that govern where the theory can be applied and where it cannot.
For example, many economic theories assume that human beings are rational (or boundedly rational) and employ utility maximization based on cost and benefit expectations as a way of understanding human behavior. In contrast, political science theories assume that people are more political than rational, and try to position themselves in their professional or personal environment in a way that maximizes their power and control over others. Given the nature of their underlying assumptions, economic and political theories are not directly comparable, and researchers should not use economic theories if their objective is to understand the power structure or its evolution in an organization. Likewise, theories may have implicit cultural assumptions (e.g., whether they apply to individualistic or collectivist cultures), temporal assumptions (e.g., whether they apply to early stages or later stages of human behavior), and spatial assumptions (e.g., whether they apply to certain localities but not to others). If a theory is to be properly used or tested, all of its implicit assumptions that form the boundaries of that theory must be properly understood. Unfortunately, theorists rarely state their implicit assumptions clearly, which leads to frequent misapplications of theories to problem situations in research.
Theories are simplified and often partial explanations of complex social reality. As such, there can be good explanations or poor explanations, and consequently, there can be good theories or poor theories. How can we evaluate the “goodness” of a given theory? Different criteria have been proposed by different researchers, the more important of which are listed below:

• Logical consistency: Are the theoretical constructs, propositions, boundary conditions, and assumptions logically consistent with each other? If some of these “building blocks” of a theory are inconsistent with each other (e.g., a theory assumes rationality, but some constructs represent non-rational concepts), then the theory is a poor theory.

• Explanatory power: How much does a given theory explain (or predict) reality? Good theories obviously explain the target phenomenon better than rival theories, as often measured by the variance explained (R-square) value in regression equations.

• Falsifiability: British philosopher Karl Popper stated in the 1940s that for theories to be valid, they must be falsifiable. Falsifiability ensures that a theory is potentially disprovable if empirical data do not match its theoretical propositions, which allows for its empirical testing by researchers. In other words, theories cannot be theories unless they are empirically testable. Tautological statements, such as “a day with high temperatures is a hot day,” are not empirically testable, because a hot day is defined (and measured) as a day with high temperatures; hence, such statements cannot be viewed as theoretical propositions. Falsifiability requires the presence of rival explanations, constructs that are adequately measurable, and so forth. However, note that saying that a theory is falsifiable is not the same as saying that a theory should be falsified. If a theory is indeed falsified based on empirical evidence, then it was probably a poor theory to begin with!
• Parsimony: Parsimony examines how much of a phenomenon is explained with how few variables. The concept is attributed to the 14th century English logician Father William of Ockham (and hence called “Ockham’s razor” or “Occam’s razor”), which states that among competing explanations that sufficiently explain the observed evidence, the simplest theory (i.e., one that uses the smallest number of variables or makes the fewest assumptions) is the best. The explanation of a complex social phenomenon can always be improved by adding more and more constructs. However, such an approach defeats the purpose of having a theory, which is intended to be a “simplified” and generalizable explanation of reality. Parsimony relates to the degrees of freedom in a given theory. Parsimonious theories have higher degrees of freedom, which allow them to be more easily generalized to other contexts, settings, and populations.

4.04: Approaches to Theorizing

How do researchers build theories? Steinfield and Fulk (1990)2 recommend four approaches. The first approach is to build theories inductively based on observed patterns of events or behaviors. This approach is often called “grounded theory building,” because the theory is grounded in empirical observations. The technique is heavily dependent on the observational and interpretive abilities of the researcher, and the resulting theory may be subjective and non-confirmable. Furthermore, observing certain patterns of events will not necessarily make a theory, unless the researcher is able to provide consistent explanations for the observed patterns. We will discuss the grounded theory approach in a later chapter on qualitative research. The second approach to theory building is to conduct a bottom-up conceptual analysis to identify different sets of predictors relevant to the phenomenon of interest using a predefined framework.
One such framework may be a simple input-process-output framework, where the researcher may look for different categories of inputs, such as individual, organizational, and/or technological factors potentially related to the phenomenon of interest (the output), and describe the underlying processes that link these factors to the target phenomenon. This is also an inductive approach that relies heavily on the inductive abilities of the researcher, and interpretation may be biased by the researcher’s prior knowledge of the phenomenon being studied. The third approach to theorizing is to extend or modify existing theories to explain a new context, such as by extending theories of individual learning to explain organizational learning. While making such an extension, certain concepts, propositions, and/or boundary conditions of the old theory may be retained and others modified to fit the new context. This deductive approach leverages the rich inventory of social science theories developed by prior theoreticians, and is an efficient way of building new theories by building on existing ones. The fourth approach is to apply existing theories in entirely new contexts by drawing upon the structural similarities between the two contexts. This approach relies on reasoning by analogy, and is probably the most creative way of theorizing using a deductive approach. For instance, Markus (1987)3 used analogical similarities between a nuclear explosion and the uncontrolled growth of networks or network-based businesses to propose a critical mass theory of network growth. Just as a nuclear explosion requires a critical mass of radioactive material to sustain itself, Markus suggested that a network requires a critical mass of users to sustain its growth; without such critical mass, users may leave the network, causing its eventual demise.
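Markus's critical mass argument can be illustrated with a toy simulation. This is a hypothetical sketch: the growth and attrition parameters below are invented for illustration and are not drawn from Markus's work.

```python
# Toy model of critical mass in network growth (illustrative only).
# Each period, new users join in proportion to current network size,
# while a fixed number of users leave. The critical mass is the size
# at which joining exactly offsets leaving: leave_count / join_rate.

def simulate_network(initial_users, join_rate=0.2, leave_count=50, periods=20):
    """Return the network size after the given number of periods."""
    users = float(initial_users)
    for _ in range(periods):
        users = max(0.0, users + join_rate * users - leave_count)
    return users

# Critical mass here is 50 / 0.2 = 250 users.
below = simulate_network(200)  # shrinks toward collapse
above = simulate_network(300)  # grows in a self-sustaining way
```

Starting below the 250-user threshold, attrition outpaces growth and the network collapses; starting above it, growth compounds, mirroring the theory's core claim.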
In this section, we present brief overviews of a few illustrative theories from different social science disciplines. These theories explain different types of social behaviors, using a set of constructs, propositions, boundary conditions, assumptions, and underlying logic. Note that the following represents just a simplified introduction to these theories; readers are advised to consult the original sources for more details and insights on each theory. Agency Theory. Agency theory (also called principal-agent theory), a classic theory in the organizational economics literature, was originally proposed by Ross (1973)4 to explain two-party relationships (such as those between an employer and its employees, between organizational executives and shareholders, and between buyers and sellers) whose goals are not congruent with each other. The goal of agency theory is to specify optimal contracts and the conditions under which such contracts may help minimize the effect of goal incongruence. The core assumptions of this theory are that human beings are self-interested, boundedly rational, and risk-averse, and the theory can be applied at the individual or organizational level. The two parties in this theory are the principal and the agent; the principal employs the agent to perform certain tasks on its behalf. While the principal’s goal is quick and effective completion of the assigned task, the agent’s goal may be working at its own pace, avoiding risks, and seeking self-interest (such as personal pay) over corporate interests. Hence the goal incongruence. Compounding the problem may be information asymmetry, caused by the principal’s inability to adequately observe the agent’s behavior or accurately evaluate the agent’s skill sets.
Such asymmetry may lead to agency problems where the agent may not put forth the effort needed to get the task done (the moral hazard problem) or may misrepresent its expertise or skills to get the job but not perform as expected (the adverse selection problem). Typical contracts that are behavior-based, such as a monthly salary, cannot overcome these problems. Hence, agency theory recommends using outcome-based contracts, such as a commission or a fee payable upon task completion, or mixed contracts that combine behavior-based and outcome-based incentives. Employee stock option plans are an example of an outcome-based contract, while a monthly salary is a behavior-based contract. Agency theory also recommends tools that principals may employ to improve the efficacy of behavior-based contracts, such as investing in monitoring mechanisms (such as hiring supervisors) to counter the information asymmetry caused by moral hazard, designing renewable contracts contingent on the agent’s performance (performance assessment makes the contract partially outcome-based), or improving the structure of the assigned task to make it more programmable and therefore more observable. Theory of Planned Behavior. Postulated by Ajzen (1991)5, the theory of planned behavior (TPB) is a generalized theory of human behavior in the social psychology literature that can be used to study a wide range of individual behaviors. It presumes that individual behavior represents conscious reasoned choice, and is shaped by cognitive thinking and social pressures. The theory postulates that behaviors are based on one’s intention regarding that behavior, which in turn is a function of the person’s attitude toward the behavior, subjective norm regarding that behavior, and perception of control over that behavior (see Figure 4.2).
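The attitude component of this function is typically assessed as an expectancy-value summation, as elaborated in the next paragraph. A minimal sketch, with purely hypothetical belief and desirability numbers:

```python
# Hypothetical sketch of TPB's expectancy-value view of attitude:
# attitude = sum of belief strengths about each consequence of a
# behavior, weighted by how desirable each consequence is.

def attitude_score(beliefs, evaluations):
    """Sum of belief_i * evaluation_i across consequences."""
    return sum(b * e for b, e in zip(beliefs, evaluations))

# Beliefs (0..1) that regular exercise leads to each consequence,
# paired with evaluations (-3..+3) of each consequence's desirability.
beliefs = [0.9, 0.7, 0.4]        # fitness, weight loss, sore muscles
evaluations = [3, 2, -1]
overall = attitude_score(beliefs, evaluations)  # positive overall attitude
```

Here the undesirable consequence (sore muscles) subtracts from, but does not outweigh, the desirable ones, yielding a positive overall attitude toward exercising.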
Attitude is defined as the individual's overall positive or negative feelings about performing the behavior in question, which may be assessed as a summation of one's beliefs regarding the different consequences of that behavior, weighted by the desirability of those consequences. Subjective norm refers to one’s perception of whether people important to that person expect the person to perform the intended behavior, and is represented as a weighted combination of the expected norms of different referent groups such as friends, colleagues, or supervisors at work. Behavioral control is one's perception of internal or external controls constraining the behavior in question. Internal control may include the person’s ability to perform the intended behavior (self-efficacy), while external control refers to the availability of external resources needed to perform that behavior (facilitating conditions). TPB also suggests that people may sometimes intend to perform a given behavior but lack the resources needed to do so, and therefore posits that behavioral control can have a direct effect on behavior, in addition to the indirect effect mediated by intention. TPB is an extension of an earlier theory called the theory of reasoned action, which included attitude and subjective norm as key drivers of intention, but not behavioral control. The latter construct was added by Ajzen in TPB to account for circumstances in which people may have incomplete control over their own behaviors (such as not having high-speed Internet access for web surfing). Innovation Diffusion Theory. Innovation diffusion theory (IDT) is a seminal theory in the communications literature that explains how innovations are adopted within a population of potential adopters. The concept was first studied by French sociologist Gabriel Tarde, but the theory was developed by Everett Rogers in 1962 based on observations of 508 diffusion studies.
The four key elements in this theory are: innovation, communication channels, time, and social system. Innovations may include new technologies, new practices, or new ideas, and adopters may be individuals or organizations. At the macro (population) level, IDT views innovation diffusion as a process of communication in which people in a social system learn about a new innovation and its potential benefits through communication channels (such as mass media or prior adopters) and are persuaded to adopt it. Diffusion is a temporal process; it starts off slowly among a few early adopters, then picks up speed as the innovation is adopted by the mainstream population, and finally slows down as the adopter population reaches saturation. The cumulative adoption pattern is therefore an S-shaped curve, as shown in Figure 4.3, and the adopter distribution represents a normal distribution. Not all adopters are identical: adopters can be classified into innovators, early adopters, early majority, late majority, and laggards based on the time of their adoption. The rate of diffusion also depends on characteristics of the social system, such as the presence of opinion leaders (experts whose opinions are valued by others) and change agents (people who influence others’ behaviors). At the micro (adopter) level, Rogers (1995)6 suggests that innovation adoption is a process consisting of five stages: (1) knowledge: when adopters first learn about an innovation from mass-media or interpersonal channels, (2) persuasion: when they are persuaded by prior adopters to try the innovation, (3) decision: their decision to accept or reject the innovation, (4) implementation: their initial utilization of the innovation, and (5) confirmation: their decision to continue using it to its fullest potential (see Figure 4.4).
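The S-shaped cumulative adoption pattern described above can be sketched with a logistic function. Note that this is a common mathematical stand-in for the empirical S-curves Rogers reported, not an equation from IDT itself, and the parameters below are invented for illustration.

```python
import math

# Logistic stand-in for the S-shaped cumulative adoption curve
# (hypothetical parameters: 1,000 potential adopters, with adoption
# spreading fastest around period 10).
def cumulative_adopters(t, population=1000, growth=1.0, midpoint=10):
    """Number of adopters in the population by time t."""
    return population / (1 + math.exp(-growth * (t - midpoint)))

early = cumulative_adopters(2)    # slow start among innovators
middle = cumulative_adopters(10)  # steepest growth in the mainstream
late = cumulative_adopters(18)    # flattening out near saturation
```

The three evaluation points trace the slow-fast-slow shape: almost no adopters early, half the population at the midpoint, and near-saturation late in the process.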
Five innovation characteristics are presumed to shape adopters’ innovation adoption decisions: (1) relative advantage: the expected benefits of an innovation relative to prior innovations, (2) compatibility: the extent to which the innovation fits with the adopter’s work habits, beliefs, and values, (3) complexity: the extent to which the innovation is difficult to learn and use, (4) trialability: the extent to which the innovation can be tested on a trial basis, and (5) observability: the extent to which the results of using the innovation can be clearly observed. The last two characteristics have since been dropped from many innovation studies. Complexity is negatively correlated with innovation adoption, while the other four factors are positively correlated. Innovation adoption also depends on personal factors such as the adopter’s risk-taking propensity, education level, cosmopolitanism, and communication influence. Early adopters are venturesome, well educated, and rely more on mass media for information about the innovation, while later adopters rely more on interpersonal sources (such as friends and family) as their primary source of information. IDT has been criticized for having a “pro-innovation bias,” that is, for presuming that all innovations are beneficial and will eventually be diffused across the entire population, and for not allowing inefficient innovations, such as fads or fashions, to die off quickly without being adopted by the entire population or being replaced by better innovations. Elaboration Likelihood Model. Developed by Petty and Cacioppo (1986), the elaboration likelihood model (ELM) is a dual-process theory of attitude formation or change in the psychology literature. It posits that attitudes can change via two routes of persuasion: a central route, in which people are persuaded by the quality and merits of the arguments presented, and a peripheral route, in which they rely on cues such as the attractiveness, credibility, or expertise of the message source. Whether people will be influenced by the central or peripheral route depends upon their ability and motivation to elaborate the central merits of an argument. This ability and motivation to elaborate is called elaboration likelihood.
People in a state of high elaboration likelihood (high ability and high motivation) are more likely to thoughtfully process the information presented and are therefore more influenced by argument quality, while those in a low elaboration likelihood state are more influenced by peripheral cues. Elaboration likelihood is a situational characteristic and not a personal trait. For instance, a doctor may employ the central route for diagnosing and treating a medical ailment (by virtue of his or her expertise in the subject), but may rely on peripheral cues from an auto mechanic to understand the problems with his or her car. As such, the theory has widespread implications for how to enact attitude change toward new products or ideas, and even social change. General Deterrence Theory. Two utilitarian philosophers of the eighteenth century, Cesare Beccaria and Jeremy Bentham, formulated General Deterrence Theory (GDT) as both an explanation of crime and a method for reducing it. GDT examines why certain individuals engage in deviant, anti-social, or criminal behaviors. The theory holds that people are fundamentally rational (in both conforming and deviant behaviors), and that they freely choose deviant behaviors based on a rational cost-benefit calculation. Because people naturally choose utility-maximizing behaviors, deviant choices that engender personal gain or pleasure can be controlled by increasing the costs of such behaviors in the form of punishments (countermeasures), as well as by increasing the probability of apprehension. Swiftness, severity, and certainty of punishment are the key constructs in GDT.
While classical positivist research in criminology seeks generalized causes of criminal behaviors, such as poverty, lack of education, and psychological conditions, and recommends strategies to rehabilitate criminals, such as providing them job training and medical treatment, GDT focuses on the criminal decision-making process and the situational factors that influence that process. Hence, a criminal’s personal situation (such as his personal values, his affluence, and his need for money) and the environmental context (such as how well protected the target is, how efficient the local police are, and how likely criminals are to be apprehended) play key roles in this decision-making process. The focus of GDT is not on how to rehabilitate criminals and avert future criminal behaviors, but on how to make criminal activities less attractive and therefore prevent crimes. To that end, several strategies are presumed to be effective in preventing crimes: “target hardening,” such as installing deadbolts and building self-defense skills; legal deterrents, such as eliminating parole for certain crimes, “three strikes” laws (mandatory incarceration for a third offense, even if the offenses are minor and would not otherwise warrant imprisonment), and the death penalty; increasing the chances of apprehension, using means such as neighborhood watch programs, special task forces on drugs or gang-related crimes, and increased police patrols; and educational programs, such as highly visible notices that “Trespassers will be prosecuted.” This theory has interesting implications not only for traditional crimes, but also for contemporary white-collar crimes such as insider trading, software piracy, and illegal sharing of music.
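GDT's cost-benefit logic can be sketched as a simple expected-utility comparison. The linear form and all numbers below are hypothetical illustrations, not a formula from the theory itself.

```python
# Rational-choice sketch of deterrence: a would-be offender weighs the
# gain from a crime against the probability of apprehension times the
# severity (cost) of punishment. All values are hypothetical.
def expected_net_gain(gain, p_apprehension, punishment_cost):
    return gain - p_apprehension * punishment_cost

# Raising the certainty of apprehension (e.g., more police patrols) can
# flip the calculation from attractive to unattractive:
before = expected_net_gain(gain=100, p_apprehension=0.1, punishment_cost=500)
after = expected_net_gain(gain=100, p_apprehension=0.4, punishment_cost=500)
```

With the same gain and punishment severity, quadrupling the perceived probability of being caught turns a positive expected payoff into a negative one, which is the theory's rationale for increasing certainty of punishment.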
Research design is a comprehensive plan for data collection in an empirical research project. It is a “blueprint” for empirical research aimed at answering specific research questions or testing specific hypotheses, and must specify at least three processes: (1) the data collection process, (2) the instrument development process, and (3) the sampling process. The instrument development and sampling processes are described in the next two chapters, while the data collection process (which is often loosely called “research design”) is introduced in this chapter and described in further detail in Chapters 9-12. Broadly speaking, data collection methods can be grouped into two categories: positivist and interpretive. Positivist methods, such as laboratory experiments and survey research, are aimed at theory (or hypothesis) testing, while interpretive methods, such as action research and ethnography, are aimed at theory building. Positivist methods employ a deductive approach to research, starting with a theory and testing theoretical postulates using empirical data. In contrast, interpretive methods employ an inductive approach that starts with data and tries to derive a theory about the phenomenon of interest from the observed data. Oftentimes, these methods are incorrectly equated with quantitative and qualitative research. Quantitative and qualitative methods refer to the type of data being collected (quantitative data involve numeric scores, metrics, and so on, while qualitative data include interviews, observations, and so forth) and analyzed (i.e., using quantitative techniques such as regression or qualitative techniques such as coding). Positivist research uses predominantly quantitative data, but can also use qualitative data. Interpretive research relies heavily on qualitative data, but can sometimes benefit from including quantitative data as well.
Sometimes, the joint use of qualitative and quantitative data may help generate unique insights into a complex social phenomenon that are not available from either type of data alone, and hence, mixed-mode designs that combine qualitative and quantitative data are often highly desirable.

05: Research Design

The quality of research designs can be defined in terms of four key design attributes: internal validity, external validity, construct validity, and statistical conclusion validity. Internal validity, also called causality, examines whether the observed change in a dependent variable is indeed caused by a corresponding change in a hypothesized independent variable, and not by variables extraneous to the research context. Causality requires three conditions: (1) covariation of cause and effect (i.e., if the cause happens, then the effect also happens; and if the cause does not happen, the effect does not happen), (2) temporal precedence (the cause must precede the effect in time), and (3) no plausible alternative explanation (or spurious correlation). Certain research designs, such as laboratory experiments, are strong in internal validity by virtue of their ability to manipulate the independent variable (cause) via a treatment and observe the effect (dependent variable) of that treatment after a certain point in time, while controlling for the effects of extraneous variables. Other designs, such as field surveys, are poor in internal validity because of their inability to manipulate the independent variable (cause), and because cause and effect are measured at the same point in time, which defeats temporal precedence and makes it equally likely that the expected effect might have influenced the expected cause rather than the reverse.
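The danger of spurious correlation, the third causality condition above, can be illustrated with simulated data. The scenario and numbers below are invented: a hidden extraneous variable drives two outcomes that do not cause each other, producing exactly the kind of covariation that internal validity checks must rule out.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# A hidden confounder (say, summer heat) drives both outcomes.
confounder = [random.gauss(0, 1) for _ in range(1000)]
outcome_a = [c + random.gauss(0, 0.3) for c in confounder]  # ice cream sales
outcome_b = [c + random.gauss(0, 0.3) for c in confounder]  # sunburn cases

r = pearson_r(outcome_a, outcome_b)  # strong, yet neither causes the other
```

The two outcomes correlate strongly even though neither causes the other, which is why covariation alone never establishes internal validity.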
Although higher in internal validity compared to other methods, laboratory experiments are by no means immune to threats to internal validity, and are susceptible to history, testing, instrumentation, regression, and other threats that are discussed later in the chapter on experimental designs. Nonetheless, different research designs vary considerably in their respective levels of internal validity. External validity or generalizability refers to whether the observed associations can be generalized from the sample to the population (population validity), or to other people, organizations, contexts, or times (ecological validity). For instance, can results drawn from a sample of financial firms in the United States be generalized to the population of financial firms (population validity) or to other firms within the United States (ecological validity)? Survey research, where data is sourced from a wide variety of individuals, firms, or other units of analysis, tends to have broader generalizability than laboratory experiments, where artificially contrived treatments and strong control over extraneous variables render the findings less generalizable to real-life settings in which treatments and extraneous variables cannot be controlled. The variation in internal and external validity for a wide range of research designs is shown in Figure 5.1. Some researchers claim that there is a tradeoff between internal and external validity: higher external validity can come only at the cost of internal validity, and vice versa. But this is not always the case. Research designs such as field experiments, longitudinal field surveys, and multiple case studies have higher degrees of both internal and external validity. Personally, I prefer research designs that have reasonable degrees of both internal and external validity, i.e., those that fall within the cone of validity shown in Figure 5.1.
But this should not suggest that designs outside this cone are any less useful or valuable. Researchers’ choice of designs is ultimately a matter of their personal preference and competence, and the levels of internal and external validity they desire. Construct validity examines how well a given measurement scale measures the theoretical construct that it is expected to measure. Many constructs used in social science research, such as empathy, resistance to change, and organizational learning, are difficult to define, much less measure. For instance, construct validity must assure that a measure of empathy is indeed measuring empathy and not compassion, which may be difficult since these constructs are somewhat similar in meaning. Construct validity is assessed in positivist research based on correlational or factor analysis of pilot test data, as described in the next chapter. Statistical conclusion validity examines the extent to which conclusions derived using a statistical procedure are valid. For example, it examines whether the right statistical method was used for hypothesis testing, whether the variables used meet the assumptions of that statistical test (such as sample size or distributional requirements), and so forth. Because interpretive research designs do not employ statistical tests, statistical conclusion validity is not applicable to such analyses. The different kinds of validity and where they exist at the theoretical/empirical levels are illustrated in Figure 5.2.

5.02: Improving Internal and External Validity

The best research designs are those that can assure high levels of internal and external validity. Such designs would guard against spurious correlations, inspire greater faith in hypothesis testing, and ensure that the results drawn from a small sample are generalizable to the population at large.
Controls are required to assure the internal validity (causality) of research designs, and can be accomplished in five ways: (1) manipulation, (2) elimination, (3) inclusion, (4) statistical control, and (5) randomization. In manipulation, the researcher manipulates the independent variables in one or more levels (called “treatments”), and compares the effects of the treatments against a control group where subjects do not receive the treatment. Treatments may include a new drug or different dosage of drug (for treating a medical condition), a new teaching style (for students), and so forth. This type of control is achieved in experimental or quasi-experimental designs but not in non-experimental designs such as surveys. Note that if subjects cannot distinguish adequately between different levels of treatment manipulations, their responses across treatments may not be different, and manipulation would fail. The elimination technique relies on eliminating extraneous variables by holding them constant across treatments, such as by restricting the study to a single gender or a single socioeconomic status. In the inclusion technique, the role of extraneous variables is considered by including them in the research design and separately estimating their effects on the dependent variable, such as via factorial designs where one factor is gender (male versus female). This technique allows for greater generalizability but also requires substantially larger samples. In statistical control, extraneous variables are measured and used as covariates during the statistical testing process. Finally, the randomization technique is aimed at canceling out the effects of extraneous variables through a process of random sampling, if it can be assured that these effects are of a random (non-systematic) nature.
Two types of randomization are: (1) random selection, where a sample is selected randomly from a population, and (2) random assignment, where subjects selected in a non-random manner are randomly assigned to treatment groups. Randomization also assures external validity, allowing inferences drawn from the sample to be generalized to the population from which the sample is drawn. Note that random assignment is mandatory when random selection is not possible because of resource or access constraints. However, generalizability across populations is harder to ascertain since populations may differ on multiple dimensions and you can only control for a few of those dimensions.
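The two randomization techniques can be illustrated with a short Python sketch. This is a hedged, minimal example: the population of 20 subject IDs, the sample size of 10, and the even treatment/control split are all arbitrary choices made for illustration.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical sampling frame of 20 subject IDs (illustrative only).
population = [f"subject_{i:02d}" for i in range(20)]

# (1) Random selection: draw a simple random sample from the population.
sample = random.sample(population, k=10)

# (2) Random assignment: shuffle the selected subjects, then split them
# evenly into treatment and control groups.
random.shuffle(sample)
treatment, control = sample[:5], sample[5:]

print("treatment:", treatment)
print("control:", control)
```

In a true experiment both steps would ideally be used; when random selection is infeasible, random assignment alone still supports causal comparison between the groups.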
As noted earlier, research designs can be classified into two categories – positivist and interpretive – depending on their goal in scientific research. Positivist designs are meant for theory testing, while interpretive designs are meant for theory building. Positivist designs seek generalized patterns based on an objective view of reality, while interpretive designs seek subjective interpretations of social phenomena from the perspectives of the subjects involved. Some popular examples of positivist designs include laboratory experiments, field experiments, field surveys, secondary data analysis, and case research, while examples of interpretive designs include case research, phenomenology, and ethnography. Note that case research can be used for theory building or theory testing, though not at the same time. Not all techniques are suited for all kinds of scientific research. Some techniques such as focus groups are best suited for exploratory research, others such as ethnography are best for descriptive research, and still others such as laboratory experiments are ideal for explanatory research. Following are brief descriptions of some of these designs. Additional details are provided in Chapters 9-12. Experimental studies are those that are intended to test cause-effect relationships (hypotheses) in a tightly controlled setting by separating the cause from the effect in time, administering the cause to one group of subjects (the “treatment group”) but not to another group (the “control group”), and observing how the mean effects vary between subjects in these two groups. For instance, if we design a laboratory experiment to test the efficacy of a new drug in treating a certain ailment, we can get a random sample of people afflicted with that ailment, randomly assign them to one of two groups (treatment and control groups), administer the drug to subjects in the treatment group, but give only a placebo (e.g., a sugar pill with no medicinal value) to subjects in the control group.
More complex designs may include multiple treatment groups, such as low versus high dosage of the drug, or multiple treatments, such as combining drug administration with dietary interventions. In a true experimental design, subjects must be randomly assigned to each group. If random assignment is not followed, then the design becomes quasi-experimental. Experiments can be conducted in an artificial or laboratory setting such as at a university (laboratory experiments) or in field settings such as in an organization where the phenomenon of interest is actually occurring (field experiments). Laboratory experiments allow the researcher to isolate the variables of interest and control for extraneous variables, which may not be possible in field experiments. Hence, inferences drawn from laboratory experiments tend to be stronger in internal validity, while those from field experiments tend to be stronger in external validity. Experimental data is analyzed using quantitative statistical techniques. The primary strength of the experimental design is its strong internal validity due to its ability to isolate, control, and intensively examine a small number of variables, while its primary weakness is limited external generalizability, since real life is often more complex (i.e., involves more extraneous variables) than contrived lab settings. Furthermore, if the researcher does not identify relevant extraneous variables ex ante and control for such variables, this lack of controls may hurt internal validity and may lead to spurious correlations. Field surveys are non-experimental designs that do not control for or manipulate independent variables or treatments, but measure these variables and test their effects using statistical methods. Field surveys capture snapshots of practices, beliefs, or situations from a random sample of subjects in field settings through a survey questionnaire or, less frequently, through a structured interview.
In cross-sectional field surveys, independent and dependent variables are measured at the same point in time (e.g., using a single questionnaire), while in longitudinal field surveys, dependent variables are measured at a later point in time than the independent variables. The strengths of field surveys are their external validity (since data is collected in field settings), their ability to capture and control for a large number of variables, and their ability to study a problem from multiple perspectives or using multiple theories. However, because of their non-temporal nature, internal validity (cause-effect relationships) is difficult to infer, and surveys may be subject to respondent biases (e.g., subjects may provide a “socially desirable” response rather than their true response), which further hurts internal validity. Secondary data analysis is an analysis of data that has previously been collected and tabulated by other sources. Such data may include data from government agencies such as employment statistics from the U.S. Bureau of Labor Statistics or development statistics by country from the United Nations Development Program, data collected by other researchers (often used in meta-analytic studies), or publicly available third-party data, such as financial data from stock markets or real-time auction data from eBay. This is in contrast to most other research designs where collecting primary data for research is part of the researcher’s job. Secondary data analysis may be an effective means of research where primary data collection is too costly or infeasible, and secondary data is available at a level of analysis suitable for answering the researcher’s questions.
The limitations of this design are that the data might not have been collected in a systematic or scientific manner and hence may be unsuitable for scientific research; that, because the data was collected for a presumably different purpose, it may not adequately address the research questions of interest to the researcher; and that internal validity is problematic if the temporal precedence between cause and effect is unclear. Case research is an in-depth investigation of a problem in one or more real-life settings (case sites) over an extended period of time. Data may be collected using a combination of interviews, personal observations, and internal or external documents. Case studies can be positivist in nature (for hypotheses testing) or interpretive (for theory building). The strength of this research method is its ability to discover a wide variety of social, cultural, and political factors potentially related to the phenomenon of interest that may not be known in advance. Analysis tends to be qualitative in nature, but heavily contextualized and nuanced. However, interpretation of findings may depend on the observational and integrative ability of the researcher, lack of control may make it difficult to establish causality, and findings from a single case site may not be readily generalized to other case sites. Generalizability can be improved by replicating and comparing the analysis in other case sites in a multiple case design. Focus group research is a type of research that involves bringing in a small group of subjects (typically 6 to 10 people) at one location, and having them discuss a phenomenon of interest for a period of 1.5 to 2 hours. The discussion is moderated and led by a trained facilitator, who sets the agenda and poses an initial set of questions for participants, makes sure that ideas and experiences of all participants are represented, and attempts to build a holistic understanding of the problem situation based on participants’ comments and experiences.
Internal validity cannot be established due to lack of controls and the findings may not be generalized to other settings because of small sample size. Hence, focus groups are not generally used for explanatory or descriptive research, but are more suited for exploratory research. Action research assumes that complex social phenomena are best understood by introducing interventions or “actions” into those phenomena and observing the effects of those actions. In this method, the researcher is usually a consultant or an organizational member embedded within a social context such as an organization, who initiates an action such as new organizational procedures or new technologies, in response to a real problem such as declining profitability or operational bottlenecks. The researcher’s choice of actions must be based on theory, which should explain why and how such actions may cause the desired change. The researcher then observes the results of that action, modifying it as necessary, while simultaneously learning from the action and generating theoretical insights about the target problem and interventions. The initial theory is validated by the extent to which the chosen action successfully solves the target problem. Simultaneous problem solving and insight generation is the central feature that distinguishes action research from all other research methods, and hence, action research is an excellent method for bridging research and practice. This method is also suited for studying unique social problems that cannot be replicated outside that context, but it is also subject to researcher bias and subjectivity, and the generalizability of findings is often restricted to the context where the study was conducted. Ethnography is an interpretive research design inspired by anthropology that emphasizes that research phenomenon must be studied within the context of its culture. 
The researcher is deeply immersed in a certain culture over an extended period of time (8 months to 2 years), and during that period, engages, observes, and records the daily life of the studied culture, and theorizes about the evolution and behaviors in that culture. Data is collected primarily via observational techniques, formal and informal interactions with participants in that culture, and personal field notes, while data analysis involves “sense-making”. The researcher must narrate her experience in great detail so that readers may experience that same culture without necessarily being there. The advantages of this approach are its sensitivity to the context, the rich and nuanced understanding it generates, and minimal respondent bias. However, this is also an extremely time- and resource-intensive approach, and findings are specific to a given culture and less generalizable to other cultures.

5.04: Selecting Research Designs

Given the above multitude of research designs, which design should researchers choose for their research? Generally speaking, researchers tend to select those research designs that they are most comfortable with and feel most competent to handle, but ideally, the choice should depend on the nature of the research phenomenon being studied. In the preliminary phases of research, when the research problem is unclear and the researcher wants to scope out the nature and extent of a certain research problem, a focus group (for an individual unit of analysis) or a case study (for an organizational unit of analysis) is an ideal strategy for exploratory research. As one delves further into the research domain, but finds that there are no good theories to explain the phenomenon of interest and wants to build a theory to fill the gap in that area, interpretive designs such as case research or ethnography may be useful designs.
If competing theories exist and the researcher wishes to test these different theories or integrate them into a larger theory, positivist designs such as experimental design, survey research, or secondary data analysis are more appropriate. Regardless of the specific research design chosen, the researcher should strive to collect quantitative and qualitative data using a combination of techniques such as questionnaires, interviews, observations, documents, or secondary data. For instance, even in a highly structured survey questionnaire intended to collect quantitative data, the researcher may leave some room for a few open-ended questions to collect qualitative data that may generate unexpected insights not otherwise available from structured quantitative data alone. Likewise, while case research employs mostly face-to-face interviews to collect qualitative data, the potential and value of collecting quantitative data should not be ignored. As an example, in a study of organizational decision-making processes, the case interviewer can record numeric quantities such as how many months it took to make certain organizational decisions, how many people were involved in that decision process, and how many decision alternatives were considered, which can provide valuable insights not otherwise available from interviewees’ narrative responses. Irrespective of the specific research design employed, the goal of the researcher should be to collect as much and as diverse data as possible that can help generate the best possible insights about the phenomenon of interest.
Theoretical propositions consist of relationships between abstract constructs. Testing theories (i.e., theoretical propositions) requires measuring these constructs accurately, correctly, and in a scientific manner, before the strength of their relationships can be tested. Measurement refers to careful, deliberate observations of the real world and is the essence of empirical research. While some constructs in social science research, such as a person’s age, weight, or a firm’s size, may be easy to measure, other constructs, such as creativity, prejudice, or alienation, may be considerably harder to measure. In this chapter, we will examine the related processes of conceptualization and operationalization for creating measures of such constructs.

06: Measurement of Constructs

Conceptualization is the mental process by which fuzzy and imprecise constructs (concepts) and their constituent components are defined in concrete and precise terms. For instance, we often use the word “prejudice” and the word conjures a certain image in our mind; however, we may struggle if we were asked to define exactly what the term meant. If someone says bad things about other racial groups, is that racial prejudice? If women earn less than men for the same job, is that gender prejudice? If churchgoers believe that non-believers will burn in hell, is that religious prejudice? Are there different kinds of prejudice, and if so, what are they? Are there different levels of prejudice, such as high or low? Answering all of these questions is the key to measuring the prejudice construct correctly. The process of understanding what is included and what is excluded in the concept of prejudice is the conceptualization process. The conceptualization process is all the more important because of the imprecision, vagueness, and ambiguity of many social science constructs. For instance, is “compassion” the same thing as “empathy” or “sentimentality”?
If you have a proposition stating that “compassion is positively related to empathy”, you cannot test that proposition unless you can conceptually separate empathy from compassion and then empirically measure these two very similar constructs correctly. If deeply religious people believe that some members of their society, such as nonbelievers, gays, and abortion doctors, will burn in hell for their sins, and forcefully try to change the “sinners’” behaviors to prevent them from going to hell, are they acting in a prejudicial manner or a compassionate manner? Our definition of such constructs is not based on any objective criterion, but rather on a shared (“inter-subjective”) agreement between our mental images (conceptions) of these constructs. While defining constructs such as prejudice or compassion, we must understand that sometimes these constructs are not real and do not exist independently, but are simply imaginary creations in our mind. For instance, there may be certain tribes in the world who lack prejudice and who cannot even imagine what this concept entails. But in real life, we tend to treat this concept as real. The process of regarding mental constructs as real is called reification, which is central to defining constructs and identifying measurable variables for measuring them. One important decision in conceptualizing constructs is specifying whether they are unidimensional or multidimensional. Unidimensional constructs are those that are expected to have a single underlying dimension. These constructs can be measured using a single measure or test. Examples include simple constructs such as a person’s weight, wind speed, and probably even complex constructs like self-esteem (if we conceptualize self-esteem as consisting of a single dimension, which, of course, may be an unrealistic assumption). Multidimensional constructs consist of two or more underlying dimensions.
For instance, if we conceptualize a person’s academic aptitude as consisting of two dimensions – mathematical and verbal ability – then academic aptitude is a multidimensional construct. Each of the underlying dimensions in this case must be measured separately, say, using different tests for mathematical and verbal ability, and the two scores can be combined, possibly in a weighted manner, to create an overall value for the academic aptitude construct.

6.02: Operationalization

Once a theoretical construct is defined, exactly how do we measure it? Operationalization refers to the process of developing indicators or items for measuring these constructs. For instance, if an unobservable theoretical construct such as socioeconomic status is defined as the level of family income, it can be operationalized using an indicator that asks respondents the question: what is your annual family income? Given the high level of subjectivity and imprecision inherent in social science constructs, we tend to measure most of those constructs (except a few demographic constructs such as age, gender, education, and income) using multiple indicators. This process allows us to examine the closeness amongst these indicators as an assessment of their accuracy (reliability). Indicators operate at the empirical level, in contrast to constructs, which are conceptualized at the theoretical level. The combination of indicators at the empirical level representing a given construct is called a variable. As noted in a previous chapter, variables may be independent, dependent, mediating, or moderating, depending on how they are employed in a research study. Also, each indicator may have several attributes (or levels), and each attribute represents a value. For instance, a “gender” variable may have two attributes: male or female.
Likewise, a customer satisfaction scale may be constructed to represent five attributes: “strongly dissatisfied”, “somewhat dissatisfied”, “neutral”, “somewhat satisfied” and “strongly satisfied”. Values of attributes may be quantitative (numeric) or qualitative (nonnumeric). Quantitative data can be analyzed using quantitative data analysis techniques, such as regression or structural equation modeling, while qualitative data require qualitative data analysis techniques, such as coding. Note that many variables in social science research are qualitative, even when represented in a quantitative manner. For instance, we can create a customer satisfaction indicator with five attributes: strongly dissatisfied, somewhat dissatisfied, neutral, somewhat satisfied, and strongly satisfied, and assign numbers 1 through 5 respectively for these five attributes, so that we can use sophisticated statistical tools for quantitative data analysis. However, note that the numbers are only labels associated with respondents’ personal evaluation of their own satisfaction, and the underlying variable (satisfaction) is still qualitative even though we represented it in a quantitative manner. Indicators may be reflective or formative. A reflective indicator is a measure that “reflects” an underlying construct. For example, if religiosity is defined as a construct that measures how religious a person is, then attending religious services may be a reflective indicator of religiosity. A formative indicator is a measure that “forms” or contributes to an underlying construct. Such indicators may represent different dimensions of the construct of interest. For instance, if religiosity is defined as comprising a belief dimension, a devotional dimension, and a ritual dimension, then indicators chosen to measure each of these different dimensions will be considered formative indicators.
Unidimensional constructs are measured using reflective indicators (even though multiple reflective indicators may be used for measuring abstruse constructs such as self-esteem), while multidimensional constructs are measured as a formative combination of the multiple dimensions, even though each of the underlying dimensions may be measured using one or more reflective indicators.
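The scoring logic for a multidimensional construct, as described above for academic aptitude, can be sketched in Python. The test scores and the 0.6/0.4 dimension weights below are purely hypothetical; in practice, the weights might come from theory or from a measurement model.

```python
# Hypothetical test scores for one student (all values illustrative).
math_items = [80, 85, 78]      # reflective indicators of mathematical ability
verbal_items = [70, 72, 74]    # reflective indicators of verbal ability

# Each unidimensional dimension is scored from its own reflective
# indicators, here by simple averaging.
math_score = sum(math_items) / len(math_items)        # 81.0
verbal_score = sum(verbal_items) / len(verbal_items)  # 72.0

# The multidimensional construct is a weighted (formative) combination of
# its dimensions; the 0.6/0.4 weights are invented for illustration.
aptitude = 0.6 * math_score + 0.4 * verbal_score
print(aptitude)  # approximately 77.4
```

The key point mirrored in the code: reflective indicators are averaged within a dimension, while the dimensions themselves combine formatively into the overall construct score.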
The first decision to be made in operationalizing a construct is to decide on the intended level of measurement. Levels of measurement, also called rating scales, refer to the values that an indicator can take (but say nothing about the indicator itself). For example, male and female (or M and F, or 1 and 2) are two levels of the indicator “gender.” In his seminal article titled “On the theory of scales of measurement”, published in Science in 1946, psychologist Stanley Smith Stevens defined four generic types of rating scales for scientific measurements: nominal, ordinal, interval, and ratio scales. The statistical properties of these scales are shown in Table 6.1.

Scale    | Central Tendency                           | Statistics                                    | Transformations
Nominal  | Mode                                       | Chi-square                                    | One-to-one (equality)
Ordinal  | Median                                     | Percentile, non-parametric statistics         | Monotonic increasing (order)
Interval | Arithmetic mean, range, standard deviation | Correlation, regression, analysis of variance | Positive linear (affine)
Ratio    | Geometric mean, harmonic mean              | Coefficient of variation                      | Positive similarities (multiplicative, logarithmic)
Note: All higher-order scales can use any of the statistics for lower-order scales.
Table 6.1. Statistical properties of rating scales

Nominal scales, also called categorical scales, measure categorical data. These scales are used for variables or indicators that have mutually exclusive attributes. Examples include gender (two values: male or female), industry type (manufacturing, financial, agriculture, etc.), and religious affiliation (Christian, Muslim, Jew, etc.). Even if we assign unique numbers to each value, for instance 1 for male and 2 for female, the numbers don’t really mean anything (i.e., 1 is not less than or half of 2) and could just as easily have been represented non-numerically, such as M for male and F for female. Nominal scales merely offer names or labels for different attribute values.
The appropriate measure of central tendency of a nominal scale is the mode, and neither the mean nor the median can be defined. Permissible statistics are chi-square and frequency distribution, and only a one-to-one (equality) transformation is allowed (e.g., 1=Male, 2=Female). Ordinal scales are those that measure rank-ordered data, such as the ranking of students in a class as first, second, third, and so forth, based on their grade point average or test scores. However, the actual or relative values of attributes or differences in attribute values cannot be assessed. For instance, the ranking of students in class says nothing about the actual GPA or test scores of the students, or how well they performed relative to one another. A classic example in the natural sciences is the Mohs scale of mineral hardness, which characterizes the hardness of various minerals by their ability to scratch other minerals. For instance, diamonds can scratch all other naturally occurring minerals on earth, and hence diamond is the “hardest” mineral. However, the scale does not indicate the actual hardness of these minerals or even provide a relative assessment of their hardness. Ordinal scales can also use attribute labels (anchors) such as “bad”, “medium”, and “good”, or “strongly dissatisfied”, “somewhat dissatisfied”, “neutral”, “somewhat satisfied”, and “strongly satisfied”. In the latter case, we can say that respondents who are “somewhat satisfied” are less satisfied than those who are “strongly satisfied”, but we cannot quantify their satisfaction levels. The central tendency measure of an ordinal scale can be its median or mode, and means are uninterpretable. Hence, statistical analyses may involve percentiles and non-parametric analysis, but more sophisticated techniques such as correlation, regression, and analysis of variance are not appropriate. Monotonically increasing transformations (which retain the ranking) are allowed.
Interval scales are those where the values measured are not only rank-ordered, but are also equidistant from adjacent attributes. An example is the temperature scale (in Fahrenheit or Celsius), where the difference between 30 and 40 degrees Fahrenheit is the same as that between 80 and 90 degrees Fahrenheit. Likewise, if you have a scale that asks for respondents’ annual income using the following attributes (ranges): \$0 to 10,000, \$10,000 to 20,000, \$20,000 to 30,000, and so forth, this is also an interval scale, because the mid-points of each range (i.e., \$5,000, \$15,000, \$25,000, etc.) are equidistant from each other. The intelligence quotient (IQ) scale is also an interval scale, because the scale is designed such that the difference between IQ scores 100 and 110 is supposed to be the same as between 110 and 120 (although we do not really know whether that is truly the case). Interval scales allow us to examine “how much more” one attribute is when compared to another, which is not possible with nominal or ordinal scales. Allowed central tendency measures include the mean, median, or mode, as are measures of dispersion, such as range and standard deviation. Permissible statistical analyses include all of those allowed for nominal and ordinal scales, plus correlation, regression, analysis of variance, and so on. Allowed scale transformations are positive linear. Note that the satisfaction scale discussed earlier is not strictly an interval scale, because we cannot say whether the difference between “strongly satisfied” and “somewhat satisfied” is the same as that between “neutral” and “somewhat satisfied” or between “somewhat dissatisfied” and “strongly dissatisfied”. However, social science researchers often “pretend” (incorrectly) that these differences are equal so that we can use statistical techniques for analyzing ordinal scaled data.
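The permissible central tendency statistics summarized in Table 6.1 can be demonstrated with Python's standard statistics module. All data below are invented for illustration; note that computing a mean of ordinal codes would run without error, but per the discussion above it would not be interpretable.

```python
import statistics

# Illustrative data for each of the four scale types in Table 6.1.
nominal = ["Christian", "Muslim", "Jew", "Christian", "Christian"]
ordinal = [1, 2, 2, 3, 5]        # class ranks: order matters, distances do not
interval = [30, 40, 80, 90]      # temperatures in degrees Fahrenheit
ratio = [5, 10, 5000, 10000]     # firm sizes (employee counts; true zero)

print(statistics.mode(nominal))           # mode is the only valid choice
print(statistics.median(ordinal))         # median/mode valid; mean is not
print(statistics.mean(interval))          # arithmetic mean is now meaningful
print(statistics.geometric_mean(ratio))   # geometric mean needs a ratio scale
print(ratio[1] / ratio[0])                # ratios are interpretable: 2.0
```

Each higher-order scale also permits the statistics of the lower-order scales, which is why the ratio-scale data supports every operation shown.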
Ratio scales are those that have all the qualities of nominal, ordinal, and interval scales, and in addition, also have a “true zero” point (where the value zero implies a lack or nonavailability of the underlying construct). Most measurements in the natural sciences and engineering, such as mass, incline of a plane, and electric charge, employ ratio scales, as do some social science variables such as age, tenure in an organization, and firm size (measured as employee count or gross revenues). For example, a firm of size zero means that it has no employees or revenues. The Kelvin temperature scale is also a ratio scale, in contrast to the Fahrenheit or Celsius scales, because the zero point on this scale (equaling -273.15 degrees Celsius) is not an arbitrary value but represents a state where the particles of matter at this temperature have zero kinetic energy. These scales are called “ratio” scales because the ratios of two points on these measures are meaningful and interpretable. For example, a firm of size 10 employees is double that of a firm of size 5, and the same can be said for a firm of 10,000 employees relative to a different firm of 5,000 employees. All measures of central tendency, including geometric and harmonic means, are allowed for ratio scales, as are ratio measures, such as studentized range or coefficient of variation. All statistical methods are allowed. Sophisticated transformations such as positive similarity (e.g., multiplicative or logarithmic) are also allowed. Based on the four generic types of scales discussed above, we can create specific rating scales for social science research. Common rating scales include binary, Likert, semantic differential, and Guttman scales. Other less common scales are not discussed here. Binary scales. Binary scales are nominal scales consisting of binary items that assume one of two possible values, such as yes or no, true or false, and so on.
For example, a typical binary scale for the “political activism” construct may consist of the six binary items shown in Table 6.2. Each item in this scale is a binary item, and the total number of “yes” responses indicated by a respondent (a value from 0 to 6) can be used as an overall measure of that person’s political activism. To understand how these items were derived, refer to the “Scaling” section later on in this chapter. Binary scales can also employ other values, such as male or female for gender, full-time or part-time for employment status, and so forth. If an employment status item is modified to allow for more than two possible values (e.g., unemployed, full-time, part-time, and retired), it is no longer binary, but still remains a nominal scaled item.

Have you ever written a letter to a public official? (Yes / No)
Have you ever signed a political petition? (Yes / No)
Have you ever donated money to a political cause? (Yes / No)
Have you ever donated money to a candidate running for public office? (Yes / No)
Have you ever written a political letter to the editor of a newspaper or magazine? (Yes / No)
Have you ever persuaded someone to change his/her voting plans? (Yes / No)
Table 6.2. A six-item binary scale for measuring political activism

Likert scale. Designed by Rensis Likert, this is a very popular rating scale for measuring ordinal data in social science research. This scale includes Likert items that are simply-worded statements to which respondents can indicate their extent of agreement or disagreement on a five- or seven-point scale ranging from “strongly disagree” to “strongly agree”. A typical example of a six-item Likert scale for the “employment self-esteem” construct is shown in Table 6.3. Likert scales are summated scales, that is, the overall scale score may be a summation of the attribute values of each item as selected by a respondent.
(Response anchors: 1 = Strongly Disagree; 2 = Somewhat Disagree; 3 = Neutral; 4 = Somewhat Agree; 5 = Strongly Agree)

I feel good about my job                                         1  2  3  4  5
I get along well with others at work                             1  2  3  4  5
I’m proud of my relationship with my supervisor at work          1  2  3  4  5
I can tell that other people at work are glad to have me there   1  2  3  4  5
I can tell that my coworkers respect me                          1  2  3  4  5
I feel like I make a useful contribution at work                 1  2  3  4  5

Table 6.3. A six-item Likert scale for measuring employment self-esteem

Likert items allow for more granularity (more finely tuned responses) than binary items, including whether respondents are neutral toward the statement. Three or nine values (often called “anchors”) may also be used, but it is important to use an odd number of values to allow for a “neutral” (or “neither agree nor disagree”) anchor. Some studies have used a “forced choice” approach to force respondents to agree or disagree with the Likert statement by dropping the neutral mid-point and using an even number of values, but this is not a good strategy because some people may indeed be neutral toward a given statement, and the forced choice approach does not give them the opportunity to record their neutral stance. A key characteristic of a Likert scale is that even though the statements vary across items or indicators, the anchors (“strongly disagree” to “strongly agree”) remain the same. Likert scales are ordinal scales because the anchors are not necessarily equidistant, even though we sometimes treat them like interval scales.

How would you rate your opinions on national health insurance?
(Anchors, left to right: Very much / Somewhat / Neither / Somewhat / Very much)

Good          ___  ___  ___  ___  ___   Bad
Useful        ___  ___  ___  ___  ___   Useless
Caring        ___  ___  ___  ___  ___   Uncaring
Interesting   ___  ___  ___  ___  ___   Boring

Table 6.4. A semantic differential scale for measuring attitude toward national health insurance

Semantic differential scale.
This is a composite (multi-item) scale in which respondents are asked to indicate their opinions or feelings toward a single statement using different pairs of adjectives framed as polar opposites. For instance, the construct “attitude toward national health insurance” can be measured using the four items shown in Table 6.4. As in the Likert scale, the overall scale score may be a summation of individual item scores. Notice that in Likert scales, the statement changes but the anchors remain the same across items. In semantic differential scales, by contrast, the statement remains constant, while the anchors (adjective pairs) change across items. The semantic differential is believed to be an excellent technique for measuring people’s attitudes or feelings toward objects, events, or behaviors.

Guttman scale. Designed by Louis Guttman, this composite scale uses a series of items arranged in increasing order of intensity of the construct of interest, from least intense to most intense. As an example, the construct “attitude toward immigrants” can be measured using the five items shown in Table 6.5. Each item in the Guttman scale has a weight (not shown in Table 6.5) which varies with the intensity of that item, and the weighted combination of responses is used as an aggregate measure of an observation.

How would you rate your opinions on the following statements about immigrants?

Do you mind immigrants being citizens of your country?            Yes / No
Do you mind immigrants living in your own neighborhood?           Yes / No
Would you mind living next door to an immigrant?                  Yes / No
Would you mind having an immigrant as your close friend?          Yes / No
Would you mind if someone in your family married an immigrant?    Yes / No

Table 6.5. A five-item Guttman scale for measuring attitude toward immigrants
The previous section discussed how to measure respondents’ responses to predesigned items or indicators belonging to an underlying construct. But how do we create the indicators themselves? The process of creating the indicators is called scaling. More formally, scaling is a branch of measurement that involves the construction of measures by associating qualitative judgments about unobservable constructs with quantitative, measurable metric units. Stevens (1946) said, “Scaling is the assignment of objects to numbers according to a rule.” This process of measuring abstract concepts in concrete terms remains one of the most difficult tasks in empirical social science research. The outcome of a scaling process is a scale, which is an empirical structure for measuring items or indicators of a given construct. Understand that “scales”, as discussed in this section, are a little different from the “rating scales” discussed in the previous section. A rating scale is used to capture a respondent’s reaction to a given item: for instance, a nominal scaled item captures a yes/no reaction, and an interval scaled item captures a value between “strongly disagree” and “strongly agree.” Attaching a rating scale to a statement or instrument is not scaling. Rather, scaling is the formal process of developing scale items, before rating scales can be attached to those items. Scales can be unidimensional or multidimensional, based on whether the underlying construct is unidimensional (e.g., weight, wind speed, firm size) or multidimensional (e.g., academic aptitude, intelligence). A unidimensional scale measures a construct along a single dimension, ranging from high to low. Note that some of these scales may include multiple items, but all of these items attempt to measure the same underlying dimension. This is particularly the case with many social science constructs such as self-esteem, which are assumed to have a single dimension ranging from low to high.
Multidimensional scales, on the other hand, employ different items or tests to measure each dimension of the construct separately, and then combine the scores on each dimension to create an overall measure of the multidimensional construct. For instance, academic aptitude can be measured using two separate tests of students’ mathematical and verbal ability, and then combining these scores to create an overall measure of academic aptitude. Since most scales employed in social science research are unidimensional, we will next examine three approaches for creating unidimensional scales. Unidimensional scaling methods were developed during the first half of the twentieth century and were named after their creators. The three most popular unidimensional scaling methods are: (1) Thurstone’s equal-appearing scaling, (2) Likert’s summative scaling, and (3) Guttman’s cumulative scaling. The three approaches are similar in many respects, with the key differences being the rating of the scale items by judges and the statistical methods used to select the final items. Each of these methods is discussed next. Thurstone’s equal-appearing scaling method. Louis Thurstone, one of the earliest and most famous scaling theorists, published a method of equal-appearing intervals in 1925. This method starts with a clear conceptual definition of the construct of interest. Based on this definition, potential scale items are generated to measure this construct. These items are generated by experts who know something about the construct being measured. The initial pool of candidate items (ideally 80 to 100 items) should be worded in a similar manner, for instance, by framing them as statements to which respondents may agree or disagree (and not as questions or other forms). Next, a panel of judges is recruited to select specific items from this candidate pool to represent the construct of interest.
Judges may include academics trained in the process of instrument construction or a random sample of respondents of interest (i.e., people who are familiar with the phenomenon). The selection process is done by having each judge independently rate each item on a scale from 1 to 11 based on how closely, in their opinion, that item reflects the intended construct (1 represents extremely unfavorable and 11 represents extremely favorable). For each item, compute the median and inter-quartile range (the difference between the 75th and the 25th percentile – a measure of dispersion), which are plotted on a histogram, as shown in Figure 6.1. The final scale items are selected as statements that are at equal intervals across a range of medians. This can be done by grouping items with a common median, and then selecting the item with the smallest inter-quartile range within each median group. However, instead of relying entirely on statistical analysis for item selection, a better strategy may be to examine the candidate items at each level and select the statements that are the clearest and make the most sense. The median value of each scale item represents the weight to be used for aggregating the items into a composite scale score representing the construct of interest. We now have a scale which looks like a ruler, with one item or statement at each of the 11 points on the ruler (and weighted as such). Because items appear equally throughout the entire 11-point range of the scale, this technique is called an equal-appearing scale. Thurstone also created two additional methods of building unidimensional scales – the method of successive intervals and the method of paired comparisons – which are both very similar to the method of equal-appearing intervals, except for how judges are asked to rate the data.
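The median-and-IQR selection step of the equal-appearing method can be sketched as follows. The judge ratings are hypothetical, and only a handful of items are shown rather than the 80 to 100 candidates the method calls for; within each median group, the item with the smallest inter-quartile range is retained.

```python
from statistics import median, quantiles

# Hypothetical judges' 1-to-11 ratings for four candidate items.
ratings = {
    "item_a": [2, 3, 3, 4, 3, 2],
    "item_b": [6, 5, 7, 6, 6, 5],
    "item_c": [10, 9, 11, 10, 9, 10],
    "item_d": [6, 2, 11, 6, 9, 3],  # same median as item_b, far more dispersion
}

def summarize(values):
    """Return (median, inter-quartile range) for one item's ratings."""
    q1, _, q3 = quantiles(values, n=4)  # 25th, 50th, 75th percentiles
    return median(values), q3 - q1

# Within each median group, keep the item with the smallest IQR.
selected = {}
for name, values in ratings.items():
    med, iqr = summarize(values)
    if med not in selected or iqr < selected[med][1]:
        selected[med] = (name, iqr)
```

With these invented ratings, item_d is discarded in favor of item_b (same median of 6, larger spread), leaving one well-agreed-upon item per median level.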
For instance, the method of paired comparisons requires each judge to make a judgment between each pair of statements (rather than rate each statement independently on a 1 to 11 scale) – hence the name paired comparison method. With a large number of statements, this approach can be enormously time consuming and unwieldy compared to the method of equal-appearing intervals. Likert’s summative scaling method. The Likert method, a unidimensional scaling method developed by Murphy and Likert (1938), is quite possibly the most popular of the three scaling approaches described in this chapter. As with Thurstone’s method, the Likert method also starts with a clear definition of the construct of interest, and uses a set of experts to generate about 80 to 100 potential scale items. These items are then rated by judges on a 1 to 5 (or 1 to 7) rating scale as follows: 1 for strongly disagree with the concept, 2 for somewhat disagree with the concept, 3 for undecided, 4 for somewhat agree with the concept, and 5 for strongly agree with the concept. Following this rating, specific items can be selected for the final scale in one of several ways: (1) by computing bivariate correlations between each judge’s rating of an item and the total score (created by summing all individual items for each judge), and discarding items with low (e.g., less than 0.60) item-to-total correlations, or (2) by averaging the ratings for each item for the top quartile and the bottom quartile of judges, performing a t-test for the difference in means, and selecting items with high t-values (i.e., those that discriminate best between the top and bottom quartile responses). In the end, the researcher’s judgment may be used to obtain a relatively small (say 10 to 15) set of items that have high item-to-total correlations and high discrimination (i.e., high t-values).
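Selection strategy (1) — item-to-total correlations with a 0.60 cutoff — can be sketched as below. The judge ratings are hypothetical, and a hand-rolled Pearson correlation is used to keep the sketch self-contained.

```python
def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical judge ratings: rows are judges, columns are candidate items.
ratings = [
    [5, 4, 2],
    [4, 5, 3],
    [2, 1, 4],
    [1, 2, 1],
]
totals = [sum(row) for row in ratings]  # total score per judge

# Keep items whose item-to-total correlation is at least 0.60.
item_total = [pearson([row[j] for row in ratings], totals)
              for j in range(len(ratings[0]))]
kept = [j for j, r in enumerate(item_total) if r >= 0.60]
```

With these invented ratings, the first two items track the total closely and survive the cutoff, while the third does not.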
The Likert method assumes equal weights for all items, and hence, a respondent’s responses to each item can be summed to create a composite score for that respondent. Hence, this method is called a summated scale. Note that any item with reversed meaning from the original direction of the construct must be reverse coded (i.e., 1 becomes a 5, 2 becomes a 4, and so forth) before summating. Guttman’s cumulative scaling method. Designed by Guttman (1950), the cumulative scaling method is based on Emory Bogardus’ social distance technique, which assumes that people’s willingness to participate in social relations with other people varies in degrees of intensity, and measures that intensity using a list of items arranged from “least intense” to “most intense”. The idea is that people who agree with one item on this list also agree with all previous items. In practice, we seldom find a set of items that matches this cumulative pattern perfectly. A scalogram analysis is used to examine how closely a set of items corresponds to the idea of cumulativeness. Like the previous scaling methods, the Guttman method also starts with a clear definition of the construct of interest, and then uses experts to develop a large set of candidate items. A group of judges then rates each candidate item as “yes” if they view the item as being favorable to the construct and “no” if they see the item as unfavorable. Next, a matrix or table is created showing the judges’ responses to all candidate items. This matrix is sorted in decreasing order, from judges with more “yes” responses at the top to those with fewer “yes” responses at the bottom. For judges with the same number of “yes” responses, the items can be sorted from left to right based on the most to the least number of agreements. The resulting matrix will resemble Table 6.6. Notice that the scale is now almost cumulative when read from left to right (across the items). However, there may be a few exceptions, as shown in Table 6.6, and hence the scale is not entirely cumulative.
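The matrix-sorting step just described can be sketched in a few lines of Python. The yes(1)/no(0) responses below are hypothetical; judges with more “yes” responses are sorted to the top, and items agreed with more often are sorted to the left.

```python
# Hypothetical yes(1)/no(0) responses: keys are judge IDs, values are per-item answers.
items = ["item12", "item5", "item3", "item22", "item8", "item7"]
matrix = {
    11: [1, 0, 0, 1, 0, 0],
    29: [1, 1, 1, 1, 1, 1],
    5:  [1, 1, 0, 0, 0, 0],
    15: [1, 1, 1, 1, 0, 0],
}

# Column order: items agreed with most often come first (left).
col_order = sorted(range(len(items)),
                   key=lambda j: -sum(row[j] for row in matrix.values()))
sorted_items = [items[j] for j in col_order]

# Row order: judges with the most "yes" responses come first (top).
row_order = sorted(matrix, key=lambda r: -sum(matrix[r]))
```

After sorting, any “yes” that appears to the right of a “no” in a row is an exception to perfect cumulativeness, which is what a scalogram analysis then quantifies.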
To determine the set of items that best approximates the cumulativeness property, a data analysis technique called scalogram analysis can be used (or this can be done visually if the number of items is small). The statistical technique also estimates a score for each item that can be used to compute a respondent’s overall score on the entire set of items.

Respondent   Item 12   Item 5   Item 3   Item 22   Item 8   Item 7
    29          Y         Y        Y        Y         Y        Y
     7          Y         Y        Y        -         Y*       -
    15          Y         Y        Y        Y         -        -
     3          Y         Y        Y        Y         -        -
    32          Y         Y        Y        -         -        -
     4          Y         Y        -        Y*        -        -
     5          Y         Y        -        -         -        -
    23          Y         Y        -        -         -        -
    11          Y         -        -        Y*        -        -

Table 6.6. Sorted rating matrix for a Guttman scale. Entries marked Y* are exceptions that prevent this matrix from being perfectly cumulative.
An index is a composite score derived from aggregating measures of multiple constructs (called components) using a set of rules and formulas. An index differs from a scale in that a scale aggregates measures of different dimensions, or the same dimension, of a single construct, whereas an index aggregates measures of multiple, distinct constructs. A well-known example of an index is the consumer price index (CPI), which is computed every month by the Bureau of Labor Statistics of the U.S. Department of Labor. The CPI is a measure of how much consumers have to pay for goods and services in general, and is divided into eight major categories (food and beverages, housing, apparel, transportation, healthcare, recreation, education and communication, and “other goods and services”), which are further subdivided into more than 200 smaller items. Each month, government employees call all over the country to get the current prices of more than 80,000 items. Using a complicated weighting scheme that takes into account the location and probability of purchase of each item, analysts then combine these prices into an overall index score using a series of formulas and rules. Another example of an index is socio-economic status (SES), also called the Duncan socioeconomic index (SEI). This index is a combination of three constructs: income, education, and occupation. Income is measured in dollars, education in years or degrees achieved, and occupation is classified into categories or levels by status. These very different measures are combined to create an overall SES index score, using a weighted combination of “occupational education” (percentage of people in that occupation who had one or more years of college education) and “occupational income” (percentage of people in that occupation who earned more than a specific annual income). However, SES index measurement has generated a lot of controversy and disagreement among researchers. The process of creating an index is similar to that of a scale.
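Before turning to the construction steps, the aggregation rule itself can be sketched: an index score is typically a weighted combination of its component measures. The component names, weights, and values below are hypothetical illustrations in the spirit of the SES example, not the actual Duncan SEI formula.

```python
# A weighted-index sketch: combine component measures into one score.
# Weights and values are hypothetical, not the actual Duncan SEI formula.
def index_score(components, weights):
    """Weighted sum of component measures (both dicts keyed by component name)."""
    return sum(weights[name] * value for name, value in components.items())

ses = index_score(
    {"occupational_education": 0.62, "occupational_income": 0.41},
    {"occupational_education": 0.55, "occupational_income": 0.45},
)  # 0.55 * 0.62 + 0.45 * 0.41
```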
First, conceptualize (define) the index and its constituent components. Though this appears simple, there may be a lot of disagreement among judges on which components (constructs) should be included in or excluded from an index. For instance, in the SES index, isn’t income correlated with education and occupation, and if so, should we include one component only or all three components? Reviewing the literature, using theories, and/or interviewing experts or key stakeholders may help resolve this issue. Second, operationalize and measure each component. For instance, how will you categorize occupations, particularly since some occupations may have changed with time (e.g., there were no Web developers before the Internet)? Third, create a rule or formula for calculating the index score. Again, this process may involve a lot of subjectivity. Lastly, validate the index score using existing or new data. Though indexes and scales both yield a single numerical score or value representing a construct of interest, they are different in many ways. First, indexes often comprise components that are very different from each other (e.g., income, education, and occupation in the SES index) and are measured in different ways. Scales, however, typically involve a set of similar items that use the same rating scale (such as a five-point Likert scale). Second, indexes often combine objectively measurable values such as prices or income, while scales are designed to assess subjective or judgmental constructs such as attitude, prejudice, or self-esteem. Some argue that the sophistication of the scaling methodology makes scales different from indexes, while others suggest that indexing methodology can be equally sophisticated. Nevertheless, indexes and scales are both essential tools in social science research.

6.06: Typologies

Scales and indexes generate ordinal measures of unidimensional constructs.
However, researchers sometimes wish to summarize measures of two or more constructs to create a set of categories or types called a typology. Unlike scales or indexes, typologies are multidimensional but include only nominal variables. For instance, one can create a political typology of newspapers based on their orientation toward domestic and foreign policy, as expressed in their editorial columns, as shown in Figure 6.2. This typology can be used to categorize newspapers into one of four “ideal types” (A through D), identify the distribution of newspapers across these ideal types, and perhaps even create a classificatory model for classifying newspapers into one of these four ideal types depending on other attributes.

6.07: Summary

In closing, scale (or index) construction in social science research is a complex process involving several key decisions. Some of these decisions are: • Should you use a scale, index, or typology? • How do you plan to analyze the data? • What is your desired level of measurement (nominal, ordinal, interval, or ratio) or rating scale? • How many scale attributes should you use (e.g., 1 to 10; 1 to 7; −3 to +3)? • Should you use an odd or even number of attributes (i.e., do you wish to have a neutral or mid-point value)? • How do you wish to label the scale attributes (especially for semantic differential scales)? • Finally, what procedure would you use to generate the scale items (e.g., Thurstone, Likert, or Guttman method) or index components? This chapter examined the process and outcomes of scale development. The next chapter will examine how to evaluate the reliability and validity of the scales developed using the above approaches.
The previous chapter examined some of the difficulties with measuring constructs in social science research. For instance, how do we know whether we are measuring “compassion” and not “empathy”, since both constructs are somewhat similar in meaning? Or is compassion the same thing as empathy? What makes measurement more complex is that sometimes these constructs are imaginary concepts (i.e., they don’t exist in reality) and multidimensional (in which case, we have the added problem of identifying their constituent dimensions). Hence, it is not adequate just to measure social science constructs using any scale that we prefer. We must also test these scales to ensure that: (1) the scales indeed measure the unobservable construct that we wanted to measure (i.e., the scales are “valid”), and (2) they measure the intended construct consistently and precisely (i.e., the scales are “reliable”). Reliability and validity, jointly called the “psychometric properties” of measurement scales, are the yardsticks against which the adequacy and accuracy of our measurement procedures are evaluated in scientific research. A measure can be reliable but not valid if it is measuring something very consistently but is consistently measuring the wrong construct. Likewise, a measure can be valid but not reliable if it is measuring the right construct, but not doing so in a consistent manner. Using the analogy of a shooting target, as shown in Figure 7.1, a multiple-item measure of a construct that is both reliable and valid consists of shots clustered within a narrow range near the center of the target. A measure that is valid but not reliable will consist of shots centered on the target but scattered around it rather than clustered within a narrow range. Finally, a measure that is reliable but not valid will consist of shots clustered within a narrow range but away from the center of the target.
Hence, reliability and validity are both needed to assure adequate measurement of the constructs of interest.

07: Scale Reliability and Validity

Reliability is the degree to which the measure of a construct is consistent or dependable. In other words, if we use this scale to measure the same construct multiple times, do we get pretty much the same result every time, assuming the underlying phenomenon is not changing? An example of an unreliable measurement is people guessing your weight. Quite likely, people will guess differently, the different measures will be inconsistent, and therefore, the “guessing” technique of measurement is unreliable. A more reliable measurement may be to use a weight scale, where you are likely to get the same value every time you step on the scale, unless your weight has actually changed between measurements. Note that reliability implies consistency but not accuracy. In the previous example of the weight scale, if the weight scale is calibrated incorrectly (say, to shave off ten pounds from your true weight, just to make you feel better!), it will not measure your true weight and is therefore not a valid measure. Nevertheless, the miscalibrated weight scale will still give you the same weight every time (which is ten pounds less than your true weight), and hence the scale is reliable. What are the sources of unreliable observations in social science measurements? One of the primary sources is the observer’s (or researcher’s) subjectivity. If employee morale in a firm is measured by watching whether the employees smile at each other, whether they make jokes, and so forth, then different observers may infer different measures of morale if they are watching the employees on a very busy day (when they have no time to joke or chat) or a light day (when they are more jovial or chatty). Two observers may also infer different levels of morale on the same day, depending on what they view as a joke and what is not.
“Observation” is a qualitative measurement technique. Sometimes, reliability may be improved by using quantitative measures, for instance, by counting the number of grievances filed over one month as a measure of (the inverse of) morale. Of course, grievances may or may not be a valid measure of morale, but such a count is less subject to human subjectivity, and therefore more reliable. A second source of unreliable observation is asking imprecise or ambiguous questions. For instance, if you ask people what their salary is, different respondents may interpret this question differently as monthly salary, annual salary, or hourly wage, and hence, the resulting observations will likely be highly divergent and unreliable. A third source of unreliability is asking questions about issues that respondents are not very familiar with or do not care about, such as asking an American college graduate whether he/she is satisfied with Canada’s relationship with Slovenia, or asking a Chief Executive Officer to rate the effectiveness of his company’s technology strategy – something that he has likely delegated to a technology executive. So how can you create reliable measures? If your measurement involves soliciting information from others, as is the case with much of social science research, then you can start by replacing data collection techniques that depend more on researcher subjectivity (such as observations) with those that are less dependent on subjectivity (such as questionnaires), by asking only those questions that respondents may know the answers to or about issues that they care about, by avoiding ambiguous items in your measures (e.g., by clearly stating whether you are looking for annual salary), and by simplifying the wording in your indicators so that they are not misinterpreted by some respondents (e.g., by avoiding difficult words whose meanings they may not know).
These strategies can improve the reliability of our measures, even though they will not necessarily make the measurements completely reliable. Measurement instruments must still be tested for reliability. There are many ways of estimating reliability, which are discussed next. Inter-rater reliability. Inter-rater reliability, also called inter-observer reliability, is a measure of consistency between two or more independent raters (observers) of the same construct. Usually, this is assessed in a pilot study, and it can be done in two ways, depending on the level of measurement of the construct. If the measure is categorical, a set of all categories is defined, raters check off the category in which each observation falls, and the percentage of agreement between the raters is an estimate of inter-rater reliability. For instance, if two raters rate 100 observations into one of three possible categories, and their ratings match for 75% of the observations, then inter-rater reliability is 0.75. If the measure is interval or ratio scaled (e.g., classroom activity is being measured once every 5 minutes by two raters on a 1 to 7 response scale), then a simple correlation between measures from the two raters can serve as an estimate of inter-rater reliability. Test-retest reliability. Test-retest reliability is a measure of consistency between two measurements (tests) of the same construct administered to the same sample at two different points in time. If the observations have not changed substantially between the two tests, then the measure is reliable. The correlation in observations between the two tests is an estimate of test-retest reliability. Note here that the time interval between the two tests is critical. Generally, the longer the time gap, the greater the chance that the two observations may change during this time (due to random error), and the lower the test-retest reliability. Split-half reliability.
Split-half reliability is a measure of consistency between two halves of a construct measure. For instance, if you have a ten-item measure of a given construct, randomly split those ten items into two sets of five (unequal halves are allowed if the total number of items is odd), and administer the entire instrument to a sample of respondents. Then, calculate the total score for each half for each respondent; the correlation between the total scores of the two halves is a measure of split-half reliability. The longer the instrument, the more likely it is that the two halves of the measure will be similar (since random errors are minimized as more items are added), and hence, this technique tends to systematically overestimate the reliability of longer instruments. Internal consistency reliability. Internal consistency reliability is a measure of consistency between different items of the same construct. If a multiple-item construct measure is administered to respondents, the extent to which respondents rate those items in a similar manner is a reflection of internal consistency. This reliability can be estimated in terms of average inter-item correlation, average item-to-total correlation, or, more commonly, Cronbach’s alpha. As an example, if you have a scale with six items, you will have fifteen different item pairings, and fifteen correlations between these six items. Average inter-item correlation is the average of these fifteen correlations. To calculate average item-to-total correlation, you have to first create a “total” item by adding the values of all six items, compute the correlations between this total item and each of the six individual items, and finally, average the six correlations. Neither of the two above measures takes into account the number of items in the measure (six items in this example).
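The reliability estimates described so far can be sketched in plain Python. All data below are hypothetical, a hand-rolled Pearson correlation keeps the sketch self-contained, and the ten items are split into fixed halves for illustration (in practice the split would be random).

```python
def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Inter-rater reliability for a categorical measure: percent agreement.
rater1 = ["A", "B", "B", "C", "A", "A", "C", "B"]
rater2 = ["A", "B", "C", "C", "A", "B", "C", "B"]
inter_rater = sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)  # 6/8

# Test-retest reliability: correlate the same respondents' scores at two times.
time1 = [20, 17, 12, 9, 15]
time2 = [19, 18, 11, 10, 14]
test_retest = pearson(time1, time2)

# Split-half reliability: correlate totals from two halves of a ten-item measure.
data = [  # rows = respondents, columns = items (hypothetical 1-5 responses)
    [5, 4, 5, 4, 5, 4, 5, 4, 5, 4],
    [4, 4, 3, 4, 4, 3, 4, 4, 3, 4],
    [2, 1, 2, 2, 1, 2, 2, 1, 2, 1],
    [3, 3, 3, 2, 3, 3, 2, 3, 3, 3],
]
totals1 = [sum(row[:5]) for row in data]  # first five items
totals2 = [sum(row[5:]) for row in data]  # last five items
split_half = pearson(totals1, totals2)
```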
Cronbach’s alpha, a reliability measure designed by Lee Cronbach in 1951, factors scale size into the reliability estimate. It is calculated using the following formula:

    α = [K / (K − 1)] · (1 − Σᵢ σ²ᵢ / σ²_X)

where K is the number of items in the measure, σ²_X is the variance (square of the standard deviation) of the observed total scores, and σ²ᵢ is the observed variance for item i. The standardized Cronbach’s alpha can be computed using a simpler formula:

    α_standardized = (K · r̄) / [1 + (K − 1) · r̄]

where K is the number of items and r̄ is the average inter-item correlation, i.e., the mean of the K(K − 1)/2 coefficients in the upper triangular (or lower triangular) correlation matrix.
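The unstandardized alpha formula above can be implemented directly; this is a minimal sketch with hypothetical data (three items, four respondents). Population variances are used throughout, which is fine because any consistent denominator cancels in the variance ratio.

```python
# Direct implementation of the (unstandardized) Cronbach's alpha formula.
def cronbach_alpha(items):
    """items: one list per item, each holding all respondents' scores."""
    k = len(items)
    n = len(items[0])
    totals = [sum(col[i] for col in items) for i in range(n)]

    def variance(values):
        # Population variance; the denominator cancels in the ratio below.
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / len(values)

    return k / (k - 1) * (1 - sum(variance(col) for col in items)
                          / variance(totals))

# Hypothetical data: three items rated by four respondents.
alpha = cronbach_alpha([[4, 3, 3, 3], [5, 4, 4, 3], [4, 4, 3, 3]])
```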
Validity, often called construct validity, refers to the extent to which a measure adequately represents the underlying construct that it is supposed to measure. For instance, is a measure of compassion really measuring compassion, and not a different construct such as empathy? Validity can be assessed using theoretical or empirical approaches, and should ideally be measured using both approaches. Theoretical assessment of validity focuses on how well the idea of a theoretical construct is translated into or represented in an operational measure. This type of validity is called translational validity (or representational validity), and consists of two subtypes: face and content validity. Translational validity is typically assessed using a panel of expert judges, who rate each item (indicator) on how well it fits the conceptual definition of the construct, and a qualitative technique called Q-sort. Empirical assessment of validity examines how well a given measure relates to one or more external criteria, based on empirical observations. This type of validity is called criterion-related validity, which includes four subtypes: convergent, discriminant, concurrent, and predictive validity. While translational validity examines whether a measure is a good reflection of its underlying construct, criterion-related validity examines whether a given measure behaves the way it should, given the theory of that construct. This assessment is based on quantitative analysis of observed data using statistical techniques such as correlational analysis, factor analysis, and so forth. The distinction between theoretical and empirical assessment of validity is illustrated in Figure 7.2. However, both approaches are needed to adequately ensure the validity of measures in social science research.
Note that the different types of validity discussed here refer to the validity of the measurement procedures, which is distinct from the validity of hypothesis testing procedures, such as internal validity (causality), external validity (generalizability), or statistical conclusion validity. The latter types of validity are discussed in a later chapter. Face validity. Face validity refers to whether an indicator seems to be a reasonable measure of its underlying construct "on its face". For instance, the frequency of one's attendance at religious services seems to make sense as an indication of a person's religiosity without a lot of explanation. Hence this indicator has face validity. However, if we were to suggest the number of books checked out of an office library as a measure of employee morale, then such a measure would probably lack face validity because it does not seem to make much sense. Interestingly, some of the popular measures used in organizational research appear to lack face validity. For instance, absorptive capacity of an organization (how much new knowledge it can assimilate to improve organizational processes) has often been measured as research and development intensity (i.e., R&D expenses divided by gross revenues)! If your research includes constructs that are highly abstract or constructs that are hard to conceptually separate from each other (e.g., compassion and empathy), it may be worthwhile to consider using a panel of experts to evaluate the face validity of your construct measures. Content validity. Content validity is an assessment of how well a set of scale items matches the relevant content domain of the construct that it is trying to measure. 
For instance, if you want to measure the construct "satisfaction with restaurant service," and you define the content domain of restaurant service as including the quality of food, courtesy of wait staff, duration of wait, and the overall ambience of the restaurant (i.e., whether it is noisy, smoky, etc.), then for adequate content validity, this construct should be measured using indicators that examine the extent to which a restaurant patron is satisfied with the quality of food, courtesy of wait staff, the length of wait, and the restaurant's ambience. Of course, this approach requires a detailed description of the entire content domain of a construct, which may be difficult for complex constructs such as self-esteem or intelligence. Hence, it may not always be possible to adequately assess content validity. As with face validity, an expert panel of judges may be employed to examine the content validity of constructs. Convergent validity refers to the closeness with which a measure relates to (or converges on) the construct that it is purported to measure, and discriminant validity refers to the degree to which a measure does not measure (or discriminates from) other constructs that it is not supposed to measure. Usually, convergent validity and discriminant validity are assessed jointly for a set of related constructs. For instance, if you expect that an organization's knowledge is related to its performance, how can you assure that your measure of organizational knowledge is indeed measuring organizational knowledge (for convergent validity) and not organizational performance (for discriminant validity)? Convergent validity can be established by comparing the observed values of one indicator of one construct with those of other indicators of the same construct and demonstrating similarity (or high correlation) between the values of these indicators. 
Discriminant validity is established by demonstrating that indicators of one construct are dissimilar from (i.e., have low correlation with) indicators of other constructs. In the above example, if we have a three-item measure of organizational knowledge and three more items for organizational performance, based on observed sample data, we can compute bivariate correlations between each pair of knowledge and performance items. If this correlation matrix shows high correlations within the items of the organizational knowledge and organizational performance constructs, but low correlations between items of these constructs, then we have simultaneously demonstrated convergent and discriminant validity (see Table 7.1).

Table 7.1. Bivariate correlational analysis for convergent and discriminant validity

An alternative and more common statistical method used to demonstrate convergent and discriminant validity is exploratory factor analysis. This is a data reduction technique which aggregates a given set of items into a smaller set of factors based on the bivariate correlation structure discussed above, using a statistical technique called principal components analysis. These factors should ideally correspond to the underlying theoretical constructs that we are trying to measure. The general norm for factor extraction is that each extracted factor should have an eigenvalue greater than 1.0. The extracted factors can then be rotated using orthogonal or oblique rotation techniques, depending on whether the underlying constructs are expected to be relatively uncorrelated or correlated, to generate factor weights that can be used to aggregate the individual items of each construct into a composite measure. 
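The bivariate correlation check described above can be sketched in a few lines of Python. This is an illustrative simulation, not taken from the text: the construct names, sample size, and noise levels are all hypothetical, and the data are generated so that items of the same construct correlate highly while items of different constructs do not.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(42)
n = 500
# Latent "true scores" for two distinct constructs (hypothetical)
knowledge = [random.gauss(0, 1) for _ in range(n)]
performance = [random.gauss(0, 1) for _ in range(n)]
# Three noisy indicators per construct: item = latent score + measurement noise
k_items = [[t + random.gauss(0, 0.5) for t in knowledge] for _ in range(3)]
p_items = [[t + random.gauss(0, 0.5) for t in performance] for _ in range(3)]

within = pearson(k_items[0], k_items[1])  # same construct: should be high
cross = pearson(k_items[0], p_items[0])   # different constructs: should be low
print(f"within-construct r = {within:.2f}, cross-construct r = {cross:.2f}")
```

In a real study, this pattern would be examined across all item pairs of the correlation matrix, as in Table 7.1, rather than for a single pair.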
For adequate convergent validity, it is expected that items belonging to a common construct should exhibit factor loadings of 0.60 or higher on a single factor (called same-factor loadings), while for discriminant validity, these items should have factor loadings of 0.30 or less on all other factors (cross-factor loadings), as shown in the rotated factor matrix example in Table 7.2. A more sophisticated technique for evaluating convergent and discriminant validity is the multi-trait multi-method (MTMM) approach. This technique requires measuring each construct (trait) using two or more different methods (e.g., survey and personal observation, or perhaps surveys of two different respondent groups, such as teachers and parents, for evaluating academic quality). This is an onerous and relatively less popular approach, and is therefore not discussed here. Criterion-related validity can also be assessed based on whether a given measure relates well with a current or future criterion, which are respectively called concurrent and predictive validity. Predictive validity is the degree to which a measure successfully predicts a future outcome that it is theoretically expected to predict. For instance, can standardized test scores (e.g., Scholastic Aptitude Test scores) correctly predict academic success in college (e.g., as measured by college grade point average)? Assessing such validity requires the creation of a "nomological network" showing how constructs are theoretically related to each other. Concurrent validity examines how well one measure relates to another concrete criterion that is presumed to occur simultaneously. For instance, do students' scores in a calculus class correlate well with their scores in a linear algebra class? These scores should be related concurrently because they are both tests of mathematics. Unlike convergent and discriminant validity, concurrent and predictive validity are frequently ignored in empirical social science research.

Table 7.2. Exploratory factor analysis for convergent and discriminant validity
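The 0.60/0.30 screening norms stated above can be applied mechanically once a rotated factor matrix is available. The sketch below uses a made-up loading matrix (the item names, loadings, and factor assignments are hypothetical, not from Table 7.2) to show the rule: keep an item only if its same-factor loading is at least 0.60 and every cross-factor loading is at most 0.30.

```python
# Hypothetical rotated factor loadings (rows: items, columns: factors 0 and 1)
loadings = {
    "know1": [0.78, 0.12],
    "know2": [0.71, 0.25],
    "know3": [0.45, 0.38],  # weak same-factor loading, high cross-loading
    "perf1": [0.09, 0.82],
    "perf2": [0.21, 0.66],
}
# Which factor each item is theoretically expected to load on
assigned_factor = {"know1": 0, "know2": 0, "know3": 0, "perf1": 1, "perf2": 1}

def passes_norms(item):
    """Apply the same-factor >= 0.60 and cross-factor <= 0.30 screening rule."""
    own = assigned_factor[item]
    same = loadings[item][own]
    cross = max(v for i, v in enumerate(loadings[item]) if i != own)
    return same >= 0.60 and cross <= 0.30

retained = [item for item in loadings if passes_norms(item)]
print(retained)  # "know3" fails both norms and is dropped
```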
7.03: Theory of Measurement
Now that we know the different kinds of reliability and validity, let us try to synthesize our understanding of reliability and validity in a mathematical manner using classical test theory, also called true score theory. This is a psychometric theory that examines how measurement works, what it measures, and what it does not measure. This theory postulates that every observation has a true score T that could be observed accurately if there were no errors in measurement. However, the presence of measurement errors E results in a deviation of the observed score X from the true score as follows:

X = T + E

Across a set of observed scores, the variance of observed and true scores can be related using a similar equation:

var(X) = var(T) + var(E)

The goal of psychometric analysis is to estimate and minimize, if possible, the error variance var(E), so that the observed score X is a good measure of the true score T. Measurement errors can be of two types: random error and systematic error. Random error is error that can be attributed to a set of unknown and uncontrollable external factors that randomly influence some observations but not others. As an example, during the time of measurement, some respondents may be in a nicer mood than others, which may influence how they respond to the measurement items. For instance, respondents in a nicer mood may respond more positively to constructs like self-esteem, satisfaction, and happiness than those who are in a poor mood. However, it is not possible to anticipate which subject is in what type of mood or to control for the effect of mood in research studies. Likewise, at an organizational level, if we are measuring firm performance, regulatory or environmental changes may affect the performance of some firms in an observed sample but not others. Hence, random error is considered to be "noise" in measurement and generally ignored. 
Systematic error is an error introduced by factors that affect all observations of a construct across an entire sample in a systematic manner. In our previous example of firm performance, since the recent financial crisis impacted the performance of financial firms disproportionately more than other types of firms, such as manufacturing or service firms, if our sample consisted only of financial firms, we may expect a systematic reduction in the performance of all firms in our sample due to the financial crisis. Unlike random error, which may be positive, negative, or zero across observations in a sample, systematic error tends to be consistently positive or negative across the entire sample. Hence, systematic error is sometimes considered to be "bias" in measurement and should be corrected. Since an observed score may include both random and systematic errors, our true score equation can be modified as:

X = T + Er + Es

where Er and Es represent the random and systematic errors, respectively. What do random and systematic errors imply for measurement procedures? By increasing variability in observations, random error reduces the reliability of measurement. In contrast, by shifting the central tendency measure, systematic error reduces the validity of measurement. Validity concerns are far more serious problems in measurement than reliability concerns, because an invalid measure is probably measuring a different construct than what we intended, and hence validity problems cast serious doubts on findings derived from statistical analysis. Note that reliability is a ratio or fraction that captures how close the true score is relative to the observed score. Hence, reliability can be expressed as:

reliability = var(T) / var(X)

If var(T) = var(X), then the true score has the same variability as the observed score, and the reliability is 1.0.
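The true score equation and the reliability ratio can be illustrated numerically. The sketch below is a hypothetical simulation (the score distributions are made up): true scores T receive random error E, and the reliability of the resulting observed scores X is computed as var(T)/var(X).

```python
import random
from statistics import pvariance

random.seed(7)
n = 10_000
T = [random.gauss(50, 10) for _ in range(n)]  # hypothetical true scores
E = [random.gauss(0, 5) for _ in range(n)]    # random measurement error
X = [t + e for t, e in zip(T, E)]             # observed scores: X = T + E

# With var(T) = 100 and var(E) = 25, reliability should be near 100/125 = 0.8
reliability = pvariance(T) / pvariance(X)
print(f"reliability = {reliability:.2f}")
```

Note that this ratio reaches 1.0 only when the error variance is zero, i.e., when var(T) = var(X).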
7.04: An Integrated Approach to Measurement Validation
A complete and adequate assessment of validity must include both theoretical and empirical approaches. As shown in Figure 7.4, this is an elaborate multi-step process that must take into account the different types of scale reliability and validity. The integrated approach starts in the theoretical realm. The first step is conceptualizing the constructs of interest. This includes defining each construct and identifying its constituent domains and/or dimensions. Next, we select (or create) items or indicators for each construct based on our conceptualization of these constructs, as described in the scaling procedure in Chapter 5. A literature review may also be helpful in indicator selection. Each item is reworded in a uniform manner using simple and easy-to-understand text. Following this step, a panel of expert judges (academics experienced in research methods and/or a representative set of target respondents) can be employed to examine each indicator and conduct a Q-sort analysis. In this analysis, each judge is given a list of all constructs with their conceptual definitions and a stack of index cards listing each indicator for each of the construct measures (one indicator per index card). Judges are then asked to independently read each index card, examine the clarity, readability, and semantic meaning of that item, and sort it with the construct where it seems to make the most sense, based on the construct definitions provided. Inter-rater reliability is assessed to examine the extent to which judges agreed in their classifications. Ambiguous items that were consistently misclassified by many judges may be reexamined, reworded, or dropped. The best items (say 10-15) for each construct are selected for further analysis. Each of the selected items is reexamined by judges for face validity and content validity. If an adequate set of items is not achieved at this stage, new items may have to be created based on the conceptual definition of the intended construct. 
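The inter-rater reliability check in the Q-sort step can be sketched as follows. The example is hypothetical (the text does not prescribe a specific agreement statistic): two judges sort ten index cards into constructs A and B, and we compute raw percent agreement plus Cohen's kappa, a common statistic that corrects raw agreement for chance.

```python
from collections import Counter

# Hypothetical Q-sort results: each judge assigns ten cards to construct A or B
judge1 = ["A", "A", "B", "A", "B", "B", "A", "B", "A", "B"]
judge2 = ["A", "A", "B", "B", "B", "B", "A", "B", "A", "A"]

n = len(judge1)
agreed = sum(a == b for a, b in zip(judge1, judge2))
percent_agreement = agreed / n
print(f"{agreed}/{n} cards sorted identically ({percent_agreement:.0%})")

# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)
c1, c2 = Counter(judge1), Counter(judge2)
p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(judge1) | set(judge2))
kappa = (percent_agreement - p_e) / (1 - p_e)
print(f"Cohen's kappa = {kappa:.2f}")
```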
Two or three rounds of Q-sort may be needed to arrive at reasonable agreement between judges on a set of items that best represents the constructs of interest. Next, the validation procedure moves to the empirical realm. A research instrument is created comprising all of the refined construct items, and is administered to a pilot test group of representative respondents from the target population. Data collected are tabulated and subjected to correlational analysis or exploratory factor analysis using a software program such as SAS or SPSS for assessment of convergent and discriminant validity. Items that do not meet the expected norms of factor loading (same-factor loadings higher than 0.60, and cross-factor loadings less than 0.30) should be dropped at this stage. The remaining scales are evaluated for reliability using a measure of internal consistency such as Cronbach's alpha. Scale dimensionality may also be verified at this stage, depending on whether the targeted constructs were conceptualized as being unidimensional or multi-dimensional. Next, evaluate the predictive ability of each construct within a theoretically specified nomological network of constructs using regression analysis or structural equation modeling. If the construct measures satisfy most or all of the requirements of reliability and validity described in this chapter, we can be assured that our operationalized measures are reasonably adequate and accurate.

The integrated approach to measurement validation discussed here is quite demanding of researcher time and effort. Nonetheless, this elaborate multi-stage process is needed to ensure that the measurement scales used in our research meet the expected norms of scientific research. Because inferences drawn using flawed or compromised scales are meaningless, scale validation and measurement remains one of the most important and involved phases of empirical research.
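The internal consistency evaluation mentioned above can be sketched directly from the standard formula for Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of scale totals). The responses below are made up for illustration (five respondents, three Likert items).

```python
from statistics import pvariance

# Hypothetical pilot data: rows are respondents, columns are scale items
responses = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [1, 2, 2],
    [3, 4, 3],
]
k = len(responses[0])                        # number of items
items = list(zip(*responses))                # one tuple of scores per item
totals = [sum(row) for row in responses]     # scale total per respondent

# Cronbach's alpha: high values indicate the items vary together
alpha = (k / (k - 1)) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))
print(f"Cronbach's alpha = {alpha:.2f}")
```

A common (though debated) rule of thumb is that alpha of 0.70 or higher indicates acceptable internal consistency for an exploratory study.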
08: Sampling

Sampling is the statistical process of selecting a subset (called a "sample") of a population of interest for purposes of making observations and statistical inferences about that population. Social science research is generally about inferring patterns of behaviors within specific populations. We cannot study entire populations because of feasibility and cost constraints, and hence, we must select a representative sample from the population of interest for observation and analysis. It is extremely important to choose a sample that is truly representative of the population so that the inferences derived from the sample can be generalized back to the population of interest. Improper and biased sampling is the primary reason for the often divergent and erroneous inferences reported in opinion polls and exit polls conducted by different polling groups such as CNN/Gallup Poll, ABC, and CBS prior to every U.S. Presidential election.

The sampling process comprises several stages. The first stage is defining the target population. A population can be defined as all people or items (units of analysis) with the characteristics that one wishes to study. The unit of analysis may be a person, group, organization, country, object, or any other entity that you wish to draw scientific inferences about. Sometimes the population is obvious. For example, if a manufacturer wants to determine whether finished goods manufactured at a production line meet certain quality requirements or must be scrapped and reworked, then the population consists of the entire set of finished goods manufactured at that production facility. At other times, the target population may be a little harder to understand. If you wish to identify the primary drivers of academic learning among high school students, then what is your target population: high school students, their teachers, school principals, or parents? 
The right answer in this case is high school students, because you are interested in their performance, not the performance of their teachers, parents, or schools. Likewise, if you wish to analyze the behavior of roulette wheels to identify biased wheels, your population of interest is not different observations from a single roulette wheel, but different roulette wheels (i.e., their behavior over an infinite set of wheels). The second step in the sampling process is to choose a sampling frame. This is an accessible section of the target population (usually a list with contact information) from which a sample can be drawn. If your target population is professional employees at work, because you cannot access all professional employees around the world, a more realistic sampling frame would be employee lists of one or two local companies that are willing to participate in your study. If your target population is organizations, then the Fortune 500 list of firms or the Standard & Poor's (S&P) list of firms registered with the New York Stock Exchange may be acceptable sampling frames. Note that sampling frames may not be entirely representative of the population at large, and if so, inferences derived from such a sample may not be generalizable to the population. For instance, if your target population is organizational employees at large (e.g., you wish to study employee self-esteem in this population) and your sampling frame is employees at automotive companies in the American Midwest, findings from such groups may not even be generalizable to the American workforce at large, let alone the global workplace. This is because the American auto industry has been under severe competitive pressures for the last 50 years and has seen numerous episodes of reorganization and downsizing, possibly resulting in low employee morale and self-esteem. 
Furthermore, the majority of the American workforce is employed in service industries or in small businesses, and not in the automotive industry. Hence, a sample of American auto industry employees is not particularly representative of the American workforce. Likewise, the Fortune 500 list includes the 500 largest American enterprises, which is not representative of all American firms, most of which are medium- and small-sized firms rather than large firms, and is therefore a biased sampling frame. In contrast, the S&P list will allow you to select large, medium, and/or small companies, depending on whether you use the S&P large-cap, mid-cap, or small-cap lists, but includes only publicly traded firms (and not private firms), and is hence still biased. Also note that the population from which a sample is drawn may not necessarily be the same as the population about which we actually want information. For example, if a researcher wants to know the success rate of a new "quit smoking" program, then the target population is the universe of smokers who had access to this program, which may be an unknown population. Hence, the researcher may sample patients arriving at a local medical facility for smoking cessation treatment, some of whom may not have had exposure to this particular "quit smoking" program, in which case the sampling frame does not correspond to the population of interest. The last step in sampling is choosing a sample from the sampling frame using a well-defined sampling technique. Sampling techniques can be grouped into two broad categories: probability (random) sampling and non-probability sampling. Probability sampling is ideal if generalizability of results is important for your study, but there may be unique circumstances where non-probability sampling can also be justified. These techniques are discussed in the next two sections.
8.02: Probability Sampling
Probability sampling is a technique in which every unit in the population has a chance (non-zero probability) of being selected into the sample, and this chance can be accurately determined. Sample statistics thus produced, such as the sample mean or standard deviation, are unbiased estimates of population parameters, as long as the sampled units are weighted according to their probability of selection. All probability sampling techniques have two attributes in common: (1) every unit in the population has a known non-zero probability of being sampled, and (2) the sampling procedure involves random selection at some point. The different types of probability sampling techniques include: Simple random sampling. In this technique, all possible subsets of a population (more accurately, of a sampling frame) are given an equal probability of being selected. The probability of selecting any particular set of n units out of a total of N units in a sampling frame is 1/C(N, n), where C(N, n) is the number of possible combinations of n units chosen from N. Hence, sample statistics are unbiased estimates of population parameters, without any weighting. Simple random sampling involves randomly selecting respondents from a sampling frame, but with large sampling frames, usually a table of random numbers or a computerized random number generator is used. For instance, if you wish to select 200 firms to survey from a list of 1000 firms, and this list is entered into a spreadsheet like Excel, you can use Excel's RAND() function to generate random numbers for each of the 1000 firms on that list. Next, you sort the list in increasing order of their corresponding random numbers, and select the first 200 firms on that sorted list. This is the simplest of all probability sampling techniques; the simplicity is also the strength of this technique. Because the sampling frame is not subdivided or partitioned, the sample is unbiased and the inferences are the most generalizable amongst all probability sampling techniques. Systematic sampling. 
In this technique, the sampling frame is ordered according to some criterion and elements are selected at regular intervals through that ordered list. Systematic sampling involves a random start and then proceeds with the selection of every kth element from that point onwards, where k = N/n is the ratio of the sampling frame size N to the desired sample size n, formally called the sampling ratio. It is important that the starting point is not automatically the first element on the list, but is instead randomly chosen from within the first k elements on the list. In our previous example of selecting 200 firms from a list of 1000 firms, you can sort the 1000 firms in increasing (or decreasing) order of their size (i.e., employee count or annual revenues), randomly select one of the first five firms on the sorted list, and then select every fifth firm on the list. This process will ensure that there is no overrepresentation of large or small firms in your sample, but rather that firms of all sizes are generally uniformly represented, just as they are in your sampling frame. In other words, the sample is representative of the population, at least on the basis of the sorting criterion. Stratified sampling. In stratified sampling, the sampling frame is divided into homogeneous and non-overlapping subgroups (called "strata"), and a simple random sample is drawn within each subgroup. In the previous example of selecting 200 firms from a list of 1000 firms, you can start by categorizing the firms based on their size as large (more than 500 employees), medium (between 50 and 500 employees), and small (less than 50 employees). You can then randomly select 67 firms from each subgroup to make up your sample of 200 firms. 
However, since there are many more small firms in a sampling frame than large firms, having an equal number of small, medium, and large firms will make the sample less representative of the population (i.e., biased in favor of large firms that are fewer in number in the target population). This is called non-proportional stratified sampling because the proportion of the sample within each subgroup does not reflect the proportions in the sampling frame (or the population of interest), and the smaller subgroup (large-sized firms) is oversampled. An alternative technique would be to select subgroup samples in proportion to their size in the population. For instance, if there are 100 large firms, 300 mid-sized firms, and 600 small firms, you can sample 20 firms from the "large" group, 60 from the "medium" group, and 120 from the "small" group. In this case, the proportional distribution of firms in the population is retained in the sample, and hence this technique is called proportional stratified sampling. Note that the non-proportional approach is particularly effective in representing small subgroups, such as large-sized firms, and is not necessarily less representative of the population compared to the proportional approach, as long as the findings of the non-proportional approach are weighted in accordance with a subgroup's proportion in the overall population. Cluster sampling. If you have a population dispersed over a wide geographic region, it may not be feasible to conduct simple random sampling of the entire population. In such cases, it may be reasonable to divide the population into "clusters" (usually along geographic boundaries), randomly sample a few clusters, and measure all units within each selected cluster. 
For instance, if you wish to sample city governments in the state of New York, rather than travel all over the state to interview key city officials (as you may have to do with a simple random sample), you can cluster these governments based on their counties, randomly select a set of three counties, and then interview officials from every city government in those counties. However, depending on between-cluster differences, the variability of sample estimates in a cluster sample will generally be higher than that of a simple random sample, and hence the results are less generalizable to the population than those obtained from simple random samples. Matched-pairs sampling. Sometimes, researchers may want to compare two subgroups within one population based on a specific criterion. For instance, why are some firms consistently more profitable than other firms? To conduct such a study, you would have to categorize a sampling frame of firms into "high-profitability" firms and "low-profitability" firms based on gross margins, earnings per share, or some other measure of profitability. You would then select a simple random sample of firms in one subgroup, and match each firm in this group with a firm in the second subgroup, based on its size, industry segment, and/or other matching criteria. Now, you have two matched samples of high-profitability and low-profitability firms that you can study in greater detail. Such a matched-pairs sampling technique is often an ideal way of understanding bipolar differences between subgroups within a given population. Multi-stage sampling. The probability sampling techniques described previously are all examples of single-stage sampling techniques. Depending on your sampling needs, you may combine these single-stage techniques to conduct multi-stage sampling. For instance, you can stratify a list of businesses based on firm size, and then conduct systematic sampling within each stratum. 
This is a two-stage combination of stratified and systematic sampling. Likewise, you can start with a cluster sample of school districts in the state of New York, and within each cluster, select a simple random sample of schools; within each school, select a simple random sample of grade levels; and within each grade level, select a simple random sample of students for study. In this case, you have a four-stage sampling process consisting of cluster and simple random sampling.
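The single-stage techniques above can be sketched in Python. This is an illustrative simulation, not from the text: the frame of 1000 firms, their employee counts, and the size cutoffs (50 and 500 employees, as in the stratification example) are hypothetical. It shows simple random sampling via the sort-by-random-number trick, systematic sampling with a random start and k = N/n = 5, and proportional stratified sampling.

```python
import random

random.seed(1)
# Hypothetical sampling frame of 1000 firms with random employee counts
frame = [{"id": i, "employees": random.randint(5, 2000)} for i in range(1000)]

# Simple random sampling: assign a random number to each firm, sort, take 200
srs = sorted(frame, key=lambda f: random.random())[:200]

# Systematic sampling: order by size, random start within first k, every kth firm
k = len(frame) // 200                       # sampling ratio k = N/n = 5
ordered = sorted(frame, key=lambda f: f["employees"])
start = random.randrange(k)                 # random start among the first k firms
systematic = ordered[start::k]

# Proportional stratified sampling: sample each stratum in proportion to its size
strata = {
    "small": [f for f in frame if f["employees"] < 50],
    "medium": [f for f in frame if 50 <= f["employees"] <= 500],
    "large": [f for f in frame if f["employees"] > 500],
}
stratified = []
for name, members in strata.items():
    quota = round(200 * len(members) / len(frame))
    stratified.extend(random.sample(members, quota))

print(len(srs), len(systematic), len(stratified))
```

Note that the stratified sample size may differ from 200 by one firm or so because per-stratum quotas are rounded.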
8.03: Non-Probability Sampling
Nonprobability sampling is a sampling technique in which some units of the population have zero chance of selection or where the probability of selection cannot be accurately determined. Typically, units are selected based on certain non-random criteria, such as quota or convenience. Because selection is non-random, nonprobability sampling does not allow the estimation of sampling errors, and may be subject to sampling bias. Therefore, information from such a sample cannot be generalized back to the population. Types of nonprobability sampling techniques include: Convenience sampling. Also called accidental or opportunity sampling, this is a technique in which a sample is drawn from that part of the population that is close to hand, readily available, or convenient. For instance, if you stand outside a shopping center and hand out questionnaire surveys to people or interview them as they walk in, the sample of respondents you will obtain will be a convenience sample. This is a non-probability sample because you are systematically excluding all people who shop at other shopping centers. The opinions that you would get from your chosen sample may reflect the unique characteristics of this shopping center, such as the nature of its stores (e.g., high-end stores will attract a more affluent demographic), the demographic profile of its patrons, or its location (e.g., a shopping center close to a university will attract primarily university students with unique purchase habits), and therefore may not be representative of the opinions of the shopper population at large. Hence, the scientific generalizability of such observations will be very limited. Other examples of convenience sampling are sampling students registered in a certain class or sampling patients arriving at a certain medical clinic. This type of sampling is most useful for pilot testing, where the goal is instrument testing or measurement validation rather than obtaining generalizable inferences. 
Quota sampling. In this technique, the population is segmented into mutually exclusive subgroups (just as in stratified sampling), and then a non-random set of observations is chosen from each subgroup to meet a predefined quota. In proportional quota sampling, the proportion of respondents in each subgroup should match that of the population. For instance, if the American population consists of 70% Caucasians, 15% Hispanic-Americans, and 13% African-Americans, and you wish to understand their voting preferences in a sample of 98 people, you can stand outside a shopping center and ask people their voting preferences. But you will have to stop asking Hispanic-looking people when you have 15 responses from that subgroup (or African-Americans when you have 13 responses) even as you continue sampling other ethnic groups, so that the ethnic composition of your sample matches that of the general American population. Non-proportional quota sampling is less restrictive in that you don't have to achieve a proportional representation, but perhaps meet a minimum size in each subgroup. In this case, you may decide to have 50 respondents from each of the three ethnic subgroups (Caucasians, Hispanic-Americans, and African-Americans), and stop when your quota for each subgroup is reached. Neither type of quota sampling will be representative of the American population, since depending on whether your study was conducted in a shopping center in New York or Kansas, your results may be entirely different. The non-proportional technique is even less representative of the population but may be useful in that it allows capturing the opinions of small and underrepresented groups through oversampling. Expert sampling. This is a technique where respondents are chosen in a non-random manner based on their expertise on the phenomenon being studied. 
For instance, in order to understand the impacts of a new governmental policy such as the Sarbanes-Oxley Act, you can sample a group of corporate accountants who are familiar with this act. The advantage of this approach is that since experts tend to be more familiar with the subject matter than non-experts, opinions from a sample of experts are more credible than those from a sample that includes both experts and non-experts, although the findings are still not generalizable to the overall population at large. Snowball sampling. In snowball sampling, you start by identifying a few respondents that match the criteria for inclusion in your study, and then ask them to recommend others they know who also meet your selection criteria. For instance, if you wish to survey computer network administrators and you know of only one or two such people, you can start with them and ask them to recommend others who also do network administration. Although this method hardly leads to representative samples, it may sometimes be the only way to reach hard-to-reach populations or when no sampling frame is available.
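The referral process in snowball sampling can be sketched as a breadth-first traversal over a contact network. A minimal sketch in Python, where the names and referral links are purely illustrative (in real research the network is discovered during data collection, not known in advance):

```python
from collections import deque

# Hypothetical referral network: each known administrator names others.
referrals = {
    "alice": ["bob", "carol"],
    "bob": ["dana"],
    "carol": ["dana", "evan"],
    "dana": [],
    "evan": ["alice"],
}

def snowball_sample(seeds, max_size):
    """Grow a sample by following referrals until max_size is reached."""
    sample, queue = [], deque(seeds)
    seen = set(seeds)
    while queue and len(sample) < max_size:
        person = queue.popleft()
        sample.append(person)
        for referred in referrals.get(person, []):
            if referred not in seen:      # avoid re-recruiting the same person
                seen.add(referred)
                queue.append(referred)
    return sample

print(snowball_sample(["alice"], max_size=4))
```

The sketch also makes the method’s weakness visible: well-connected individuals are reached early and often, which is one reason snowball samples are rarely representative.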
In the preceding sections, we introduced terms such as population parameter, sample statistic, and sampling bias. In this section, we will try to understand what these terms mean and how they are related to each other. When you measure a certain observation from a given unit, such as a person’s response to a Likert-scaled item, that observation is called a response (see Figure 8.2). In other words, a response is a measurement value provided by a sampled unit. Each respondent will give you different responses to different items in an instrument. Responses from different respondents to the same item or observation can be graphed into a frequency distribution based on their frequency of occurrences. For a large number of responses in a sample, this frequency distribution tends to resemble a bell-shaped curve called a normal distribution, which can be used to estimate overall characteristics of the entire sample, such as sample mean (average of all observations in a sample) or standard deviation (variability or spread of observations in a sample). These sample estimates are called sample statistics (a “statistic” is a value that is estimated from observed data). Populations also have means and standard deviations that could be obtained if we could sample the entire population. However, since the entire population can never be sampled, population characteristics are always unknown, and are called population parameters (and not “statistic” because they are not statistically estimated from data). Sample statistics may differ from population parameters if the sample is not perfectly representative of the population; the difference between the two is called sampling error. Theoretically, if we could gradually increase the sample size so that the sample approaches closer and closer to the population, then sampling error will decrease and a sample statistic will increasingly approximate the corresponding population parameter. 
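This shrinking of sampling error with sample size can be demonstrated with a small simulation. The sketch below builds a synthetic “population” of responses to a 5-point item (the data are made up for illustration), and because the population mean is known here, the sampling error of each sample mean can be computed directly:

```python
import random
import statistics

random.seed(42)
# A hypothetical finite population of 100,000 responses to a 5-point item.
population = [random.randint(1, 5) for _ in range(100_000)]
mu = statistics.mean(population)  # population parameter (normally unknown)

errors = []
for n in (10, 100, 1000, 10000):
    sample = random.sample(population, n)
    x_bar = statistics.mean(sample)    # sample statistic
    errors.append(abs(x_bar - mu))     # sampling error for this sample
    print(f"n={n:>5}  sample mean={x_bar:.3f}  sampling error={errors[-1]:.3f}")
```

In real research the population parameter is unknown, so the sampling error cannot be computed this way; that is precisely why the concepts of sampling distributions and standard errors are needed.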
If a sample is truly representative of the population, then the estimated sample statistics should be identical to corresponding theoretical population parameters. How do we know if the sample statistics are at least reasonably close to the population parameters? Here, we need to understand the concept of a sampling distribution. Imagine that you took three different random samples from a given population, as shown in Figure 8.3, and for each sample, you derived sample statistics such as sample mean and standard deviation. If each random sample was truly representative of the population, then your three sample means from the three random samples will be identical (and equal to the population parameter), and the variability in sample means will be zero. But this is extremely unlikely, given that each random sample will likely constitute a different subset of the population, and hence, their means may be slightly different from each other. However, you can take these three sample means and plot a frequency histogram of sample means. If the number of such samples increases from three to 10 to 100, the frequency histogram becomes a sampling distribution. Hence, a sampling distribution is a frequency distribution of a sample statistic (like sample mean) from a set of samples, while the commonly referenced frequency distribution is the distribution of a response (observation) from a single sample. Just like a frequency distribution, the sampling distribution will also tend to have more sample statistics clustered around the mean (which presumably is an estimate of a population parameter), with fewer values scattered farther away from the mean. With an infinitely large number of samples, this distribution will approach a normal distribution. The variability or spread of a sample statistic in a sampling distribution (i.e., the standard deviation of a sampling statistic) is called its standard error. 
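A sampling distribution can be approximated empirically by drawing many samples and recording each sample’s mean. A minimal sketch with synthetic data (the population values are invented for illustration):

```python
import random
import statistics

random.seed(0)
# Synthetic population of 50,000 continuous responses.
population = [random.gauss(50, 10) for _ in range(50_000)]

n, num_samples = 100, 2000
# Draw many samples and record each sample's mean: a sampling distribution.
sample_means = [
    statistics.mean(random.sample(population, n)) for _ in range(num_samples)
]

standard_error = statistics.stdev(sample_means)   # spread of the statistic
theoretical_se = statistics.stdev(population) / n ** 0.5

print(f"empirical standard error:   {standard_error:.3f}")
print(f"theoretical sigma/sqrt(n):  {theoretical_se:.3f}")
```

The spread of the 2,000 sample means closely tracks the theoretical value σ/√n; this spread is the standard error of the sample mean.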
In contrast, the term standard deviation is reserved for variability of an observed response from a single sample. The mean value of a sample statistic in a sampling distribution is presumed to be an estimate of the unknown population parameter. Based on the spread of this sampling distribution (i.e., based on standard error), it is also possible to estimate confidence intervals for the predicted population parameter. A confidence interval is the estimated probability that a population parameter lies within a specific interval of sample statistic values. All normal distributions tend to follow a 68-95-99 percent rule (see Figure 8.4), which says that over 68% of the cases in the distribution lie within one standard deviation of the mean value (µ ± 1σ), over 95% of the cases in the distribution lie within two standard deviations of the mean (µ ± 2σ), and over 99% of the cases in the distribution lie within three standard deviations of the mean value (µ ± 3σ). Since a sampling distribution with an infinite number of samples will approach a normal distribution, the same 68-95-99 rule applies, and it can be said that: • (Sample statistic ± one standard error) represents a 68% confidence interval for the population parameter. • (Sample statistic ± two standard errors) represents a 95% confidence interval for the population parameter. • (Sample statistic ± three standard errors) represents a 99% confidence interval for the population parameter. A sample is “biased” (i.e., not representative of the population) if its sampling distribution cannot be estimated or if the sampling distribution violates the 68-95-99 percent rule. As an aside, note that in most regression analysis where we examine the significance of regression coefficients with p<0.05, we are attempting to see if the sampling statistic (regression coefficient) predicts the corresponding population parameter (true effect size) with a 95% confidence interval. 
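The 68-95-99 rule translates directly into confidence-interval arithmetic. A sketch using a hypothetical sample of 7-point Likert-item responses (the data are made up, and the ±2 multiplier is the rule-of-thumb from the rule above; small-sample work would normally use a t-distribution critical value instead):

```python
import statistics

# Hypothetical sample of 25 responses to a 7-point Likert item.
sample = [5, 6, 4, 7, 5, 5, 6, 3, 4, 5, 6, 7, 5, 4, 5,
          6, 5, 4, 6, 5, 3, 5, 6, 5, 4]

n = len(sample)
x_bar = statistics.mean(sample)             # sample statistic
se = statistics.stdev(sample) / n ** 0.5    # estimated standard error

# Sample statistic +/- two standard errors: a ~95% confidence interval.
ci_95 = (x_bar - 2 * se, x_bar + 2 * se)
print(f"mean = {x_bar:.2f}, 95% CI = ({ci_95[0]:.2f}, {ci_95[1]:.2f})")
```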
Interestingly, the “six sigma” standard attempts to identify manufacturing defects outside the 99% confidence interval or six standard deviations (standard deviation is represented using the Greek letter sigma), representing significance testing at p<0.01.
Survey research is a research method involving the use of standardized questionnaires or interviews to collect data about people and their preferences, thoughts, and behaviors in a systematic manner. Although census surveys were conducted as early as Ancient Egypt, the survey as a formal research method was pioneered in the 1930s-40s by sociologist Paul Lazarsfeld to examine the effects of radio on political opinion formation in the United States. This method has since become a very popular method for quantitative research in the social sciences. • 9.0: Prelude to Survey Research • 9.1: Questionnaire Surveys A questionnaire is a research instrument consisting of a set of questions (items) intended to capture responses from respondents in a standardized manner. Questions may be unstructured or structured. Unstructured questions ask respondents to provide a response in their own words, while structured questions ask respondents to select an answer from a given set of choices. Subjects’ responses to individual questions (items) on a structured questionnaire may be aggregated for statistical analysis. • 9.2: Interview Survey • 9.3: Biases in Survey Research 09: Survey Research The survey method can be used for descriptive, exploratory, or explanatory research. This method is best suited for studies that have individual people as the unit of analysis. Although other units of analysis, such as groups, organizations or dyads (pairs of organizations, such as buyers and sellers), are also studied using surveys, such studies often use a specific person from each unit as a “key informant” or a “proxy” for that unit, and such surveys may be subject to respondent bias if the informant chosen does not have adequate knowledge or has a biased opinion about the phenomenon of interest. 
For instance, Chief Executive Officers may not adequately know employees’ perceptions of teamwork in their own companies, and may therefore be the wrong informant for studies of team dynamics or employee self-esteem. Survey research has several inherent strengths compared to other research methods. First, surveys are an excellent vehicle for measuring a wide variety of unobservable data, such as people’s preferences (e.g., political orientation), traits (e.g., self-esteem), attitudes (e.g., toward immigrants), beliefs (e.g., about a new law), behaviors (e.g., smoking or drinking behavior), or factual information (e.g., income). Second, survey research is also ideally suited for remotely collecting data about a population that is too large to observe directly. A large area, such as an entire country, can be covered using mail-in, electronic mail, or telephone surveys using meticulous sampling to ensure that the population is adequately represented in a small sample. Third, due to their unobtrusive nature and the ability to respond at one’s convenience, questionnaire surveys are preferred by some respondents. Fourth, interviews may be the only way of reaching certain population groups such as the homeless or illegal immigrants for which there is no sampling frame available. Fifth, large sample surveys may allow detection of small effects even while analyzing multiple variables, and depending on the survey design, may also allow comparative analysis of population subgroups (i.e., within-group and between-group analysis). Sixth, survey research is more economical in terms of researcher time, effort, and cost than most other methods such as experimental research and case research. At the same time, survey research also has some unique disadvantages. It is subject to a large number of biases such as non-response bias, sampling bias, social desirability bias, and recall bias, as discussed in the last section of this chapter. 
Depending on how the data is collected, survey research can be divided into two broad categories: questionnaire surveys (which may be mail-in, group-administered, or online surveys), and interview surveys (which may be personal, telephone, or focus group interviews). Questionnaires are instruments that are completed in writing by respondents, while interviews are completed by the interviewer based on verbal responses provided by respondents. As discussed below, each type has its own strengths and weaknesses, in terms of their costs, coverage of the target population, and researcher’s flexibility in asking questions.
Invented by Sir Francis Galton, a questionnaire is a research instrument consisting of a set of questions (items) intended to capture responses from respondents in a standardized manner. Questions may be unstructured or structured. Unstructured questions ask respondents to provide a response in their own words, while structured questions ask respondents to select an answer from a given set of choices. Subjects’ responses to individual questions (items) on a structured questionnaire may be aggregated into a composite scale or index for statistical analysis. Questions should be designed such that respondents are able to read, understand, and respond to them in a meaningful way, and hence the survey method may not be appropriate or practical for certain demographic groups such as children or the illiterate. Most questionnaire surveys tend to be self-administered mail surveys, where the same questionnaire is mailed to a large number of people, and willing respondents can complete the survey at their convenience and return it in postage-prepaid envelopes. Mail surveys are advantageous in that they are unobtrusive, and they are inexpensive to administer, since bulk postage is cheap in most countries. However, response rates from mail surveys tend to be quite low, since most people tend to ignore survey requests. There may also be long delays (several months) in respondents’ completing and returning the survey (or they may simply lose it). Hence, the researcher must continuously monitor responses as they are being returned, track non-respondents, and send them repeated reminders (two or three reminders at intervals of one to 1.5 months is ideal). Questionnaire surveys are also not well-suited for issues that require clarification on the part of the respondent or those that require detailed written responses. 
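The aggregation of structured-item responses into a composite scale, mentioned above, can be sketched in a few lines. This is a minimal illustration assuming a 5-point Likert scale with invented item names; the reverse-coding step for negatively worded items is a common practice added here for completeness rather than something prescribed by the text:

```python
LIKERT_MAX = 5
reverse_coded = {"item3"}  # e.g., a hypothetical negatively worded item

def composite_score(responses):
    """Average a respondent's item scores after reverse-coding."""
    scores = []
    for item, value in responses.items():
        if item in reverse_coded:
            value = LIKERT_MAX + 1 - value   # 1<->5, 2<->4, 3 stays 3
        scores.append(value)
    return sum(scores) / len(scores)

respondent = {"item1": 4, "item2": 5, "item3": 2, "item4": 4}
print(composite_score(respondent))  # averages the four items
```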
Longitudinal designs can be used to survey the same set of respondents at different times, but response rates tend to fall precipitously from one survey to the next. A second type of survey is the group-administered questionnaire. A sample of respondents is brought together at a common place and time, and each respondent is asked to complete the survey questionnaire while in that room. Respondents enter their responses independently without interacting with each other. This format is convenient for the researcher, and a high response rate is assured. If respondents do not understand any specific question, they can ask for clarification. In many organizations, it is relatively easy to assemble a group of employees in a conference room or lunch room, especially if the survey is approved by corporate executives. A more recent type of questionnaire survey is the online or web survey. These surveys are administered over the Internet using interactive forms. Respondents may receive an electronic mail request for participation in the survey with a link to an online website where the survey may be completed. Alternatively, the survey may be embedded into an e-mail, and can be completed and returned via e-mail. These surveys are very inexpensive to administer, results are instantly recorded in an online database, and the survey can be easily modified if needed. However, if the survey website is not password-protected or designed to prevent multiple submissions, the responses can be easily compromised. Furthermore, sampling bias may be a significant issue since the survey cannot reach people who do not have computer or Internet access, such as many of the poor, senior, and minority groups, and the respondent sample is skewed toward a younger demographic who are online much of the time and have the time and ability to complete such surveys. 
Computing the response rate may be problematic if the survey link is posted on listservs or bulletin boards instead of being e-mailed directly to targeted respondents. For these reasons, many researchers prefer dual-media surveys (e.g., mail survey and online survey), allowing respondents to select their preferred method of response. Constructing a survey questionnaire is an art. Numerous decisions must be made about the content of questions, their wording, format, and sequencing, all of which can have important consequences for the survey responses. Response formats. Survey questions may be structured or unstructured. Responses to structured questions are captured using one of the following response formats: • Dichotomous response, where respondents are asked to select one of two possible choices, such as true/false, yes/no, or agree/disagree. An example of such a question is: Do you think that the death penalty is justified under some circumstances (circle one): yes / no. • Nominal response, where respondents are presented with more than two unordered options, such as: What is your industry of employment: manufacturing / consumer services / retail / education / healthcare / tourism & hospitality / other. • Ordinal response, where respondents have more than two ordered options, such as: what is your highest level of education: high school / college degree / graduate studies. • Interval-level response, where respondents are presented with a 5-point or 7-point Likert scale, semantic differential scale, or Guttman scale. Each of these scale types was discussed in a previous chapter. • Continuous response, where respondents enter a continuous (ratio-scaled) value with a meaningful zero point, such as their age or tenure in a firm. These responses generally tend to be of the fill-in-the-blanks type. Question content and wording. Responses obtained in survey research are very sensitive to the types of questions asked. 
Poorly framed or ambiguous questions will likely result in meaningless responses with very little value. Dillman (1978) recommends several rules for creating good survey questions. Every single question in a survey should be carefully scrutinized for the following issues: • Is the question clear and understandable: Survey questions should be stated in very simple language, preferably in active voice, and without complicated words or jargon that may not be understood by a typical respondent. All questions in the questionnaire should be worded in a similar manner to make it easy for respondents to read and understand them. The only exception is if your survey is targeted at a specialized group of respondents, such as doctors, lawyers and researchers, who use such jargon in their everyday environment. • Is the question worded in a negative manner: Negatively worded questions, such as should your local government not raise taxes, tend to confuse many respondents and lead to inaccurate responses. Such questions should be avoided, and in all cases, avoid double-negatives. • Is the question ambiguous: Survey questions should not use words or expressions that may be interpreted differently by different respondents (e.g., words like “any” or “just”). For instance, if you ask a respondent, what is your annual income, it is unclear whether you are referring to salary/wages, or also dividend, rental, and other income; and whether you are referring to personal income, family income (including spouse’s wages), or personal and business income. Different interpretations by different respondents will lead to incomparable responses that cannot be interpreted correctly. • Does the question have biased or value-laden words: Bias refers to any property of a question that encourages subjects to answer in a certain way. 
Kenneth Rasinski (1989) examined several studies on people’s attitude toward government spending, and observed that respondents tend to indicate stronger support for “assistance to the poor” and less for “welfare”, even though both terms had the same meaning. In this study, more support was also observed for “halting rising crime rate” (and less for “law enforcement”), “solving problems of big cities” (and less for “assistance to big cities”), and “dealing with drug addiction” (and less for “drug rehabilitation”). Biased language or tone tends to skew observed responses. It is often difficult to anticipate such biased wording in advance, but to the greatest extent possible, survey questions should be carefully scrutinized to avoid biased language. • Is the question double-barreled: Double-barreled questions are those that can have multiple answers. For example, are you satisfied with the hardware and software provided for your work? In this example, how should a respondent answer if he/she is satisfied with the hardware but not with the software, or vice versa? It is always advisable to separate double-barreled questions into separate questions: (1) are you satisfied with the hardware provided for your work, and (2) are you satisfied with the software provided for your work. Another example: does your family favor public television? Some people may favor public TV for themselves, but favor certain cable TV programs such as Sesame Street for their children. • Is the question too general: Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you asked someone how they liked a certain book and provided a response scale ranging from “not at all” to “extremely well”, and that person selected “extremely well”, what does he/she mean? Instead, ask more specific behavioral questions, such as will you recommend this book to others, or do you plan to read other books by the same author? 
Likewise, instead of asking how big is your firm (which may be interpreted differently by respondents), ask how many people work for your firm and/or what are the annual revenues of your firm, which are both measures of firm size. • Is the question too detailed: Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household, or is just the number of children in the household acceptable? However, if unsure, it is better to err on the side of detail than generality. • Is the question presumptuous: If you ask, what do you see as the benefits of a tax cut, you are presuming that the respondent sees the tax cut as beneficial. But many people may not view tax cuts as being beneficial, because tax cuts generally lead to lesser funding for public schools, larger class sizes, and fewer public services such as police, ambulance, and fire service. Avoid questions with built-in presumptions. • Is the question imaginary: A popular question in many television game shows is “if you won a million dollars on this show, how will you plan to spend it?” Most respondents have never been faced with such an amount of money and have never thought about it (most don’t even know that after taxes, they will get only about $640,000 or so in the United States, and in many cases, that amount is spread over a 20-year period, so that its net present value is even less), and so their answers tend to be quite random, such as take a tour around the world, buy a restaurant or bar, spend on education, save for retirement, help parents or children, or have a lavish wedding. Imaginary questions have imaginary answers, which cannot be used for making scientific inferences. • Do respondents have the information needed to correctly answer the question: Oftentimes, we assume that subjects have the necessary information to answer a question, when in reality, they do not. 
Even if responses are obtained in such cases, they tend to be inaccurate, given the respondents’ lack of knowledge about the question being asked. For instance, we should not ask the CEO of a company about day-to-day operational details that they may not be aware of, ask teachers how much their students are learning, or ask high-schoolers “Do you think the US Government acted appropriately in the Bay of Pigs crisis?” Question sequencing. In general, questions should flow logically from one to the next. To achieve the best response rates, questions should flow from the least sensitive to the most sensitive, from the factual and behavioral to the attitudinal, and from the more general to the more specific. Some general rules for question sequencing: • Start with easy non-threatening questions that can be easily recalled. Good options are demographics (age, gender, education level) for individual-level surveys and firmographics (employee count, annual revenues, industry) for firm-level surveys. • Never start with an open-ended question. • If following a historical sequence of events, follow a chronological order from earliest to latest. • Ask about one topic at a time. When switching topics, use a transition, such as “The next section examines your opinions about …” • Use filter or contingency questions as needed, such as: “If you answered “yes” to question 5, please proceed to Section 2. If you answered “no”, go to Section 3.” Other golden rules. Do unto your respondents what you would have them do unto you. Be attentive and appreciative of respondents’ time, attention, trust, and confidentiality of personal information. Always practice the following strategies for all survey research: • People’s time is valuable. Be respectful of their time. Keep your survey as short as possible and limit it to what is absolutely necessary. Respondents do not like spending more than 10-15 minutes on any survey, no matter how important it is. 
Longer surveys tend to dramatically lower response rates. • Always assure respondents about the confidentiality of their responses, and how you will use their data (e.g., for academic research) and how the results will be reported (usually, in the aggregate). • For organizational surveys, assure respondents that you will send them a copy of the final results, and make sure that you follow up with your promise. • Thank your respondents for their participation in your study. • Finally, always pretest your questionnaire, at least using a convenience sample, before administering it to respondents in a field setting. Such pretesting may uncover ambiguity, lack of clarity, or biases in question wording, which should be eliminated before administering to the intended sample.
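The filter or contingency questions mentioned in the sequencing rules above amount to simple branching logic, which matters mainly when programming an online or computer-assisted survey. A minimal sketch, with illustrative question and section labels:

```python
# Hypothetical skip logic: respondents answering "yes" to question 5
# proceed to Section 2; everyone else skips ahead to Section 3.
def route_after_q5(answer: str) -> str:
    """Return the next survey section based on the answer to question 5."""
    return "Section 2" if answer.strip().lower() == "yes" else "Section 3"

print(route_after_q5("Yes"))
print(route_after_q5("no"))
```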
Interviews are a more personalized form of data collection than questionnaires, and are conducted by trained interviewers using the same research protocol as questionnaire surveys (i.e., a standardized set of questions). However, unlike a questionnaire, the interview script may contain special instructions for the interviewer that are not seen by respondents, and may include space for the interviewer to record personal observations and comments. In addition, unlike mail surveys, the interviewer has the opportunity to clarify any issues raised by the respondent or ask probing or follow-up questions. However, interviews are time-consuming and resource-intensive. Special interviewing skills are needed on the part of the interviewer. The interviewer is also considered to be part of the measurement instrument, and must proactively strive not to artificially bias the observed responses. The most typical form of interview is the personal or face-to-face interview, where the interviewer works directly with the respondent to ask questions and record their responses. Personal interviews may be conducted at the respondent’s home or office location. This approach may even be favored by some respondents, while others may feel uncomfortable allowing a stranger into their homes. However, skilled interviewers can persuade respondents to cooperate, dramatically improving response rates. A variation of the personal interview is the group interview, also called a focus group. In this technique, a small group of respondents (usually 6-10) is interviewed together in a common location. The interviewer is essentially a facilitator whose job is to lead the discussion, and ensure that every person has an opportunity to respond. Focus groups allow deeper examination of complex issues than other forms of survey research, because when people hear others talk, it often triggers responses or ideas that they did not think about before. 
However, focus group discussion may be dominated by a few assertive personalities, and some individuals may be reluctant to voice their opinions in front of their peers or superiors, especially when dealing with a sensitive issue such as employee underperformance or office politics. Because of their small sample size, focus groups are usually used for exploratory research rather than descriptive or explanatory research. A third type of interview survey is the telephone interview. In this technique, interviewers contact potential respondents over the phone, typically based on a random selection of people from a telephone directory, to ask a standard set of survey questions. A more recent and technologically advanced approach is computer-assisted telephone interviewing (CATI), increasingly used by academic, government, and commercial survey researchers, where the interviewer is a telephone operator who is guided through the interview process by a computer program displaying instructions and questions to be asked on a computer screen. The system also selects respondents randomly using a random digit dialing technique, and records responses using voice capture technology. Once respondents are on the phone, higher response rates can be obtained. This technique is not ideal for rural areas where telephone density is low, and also cannot be used for communicating non-audio information such as graphics or product demonstrations. Role of interviewer. The interviewer has a complex and multi-faceted role in the interview process, which includes the following tasks: • Prepare for the interview: Since the interviewer is at the forefront of the data collection effort, the quality of data collected depends heavily on how well the interviewer is trained to do the job. The interviewer must be trained in the interview process and the survey method, and also be familiar with the purpose of the study, how responses will be stored and used, and sources of interviewer bias. 
He/she should also rehearse and time the interview prior to the formal study. • Locate and enlist the cooperation of respondents: Particularly in personal, in-home surveys, the interviewer must locate specific addresses and work around respondents’ schedules, sometimes at undesirable times such as during weekends. They should also be like a salesperson, selling the idea of participating in the study. • Motivate respondents: Respondents often feed off the motivation of the interviewer. If the interviewer is disinterested or inattentive, respondents won’t be motivated to provide useful or informative responses either. The interviewer must demonstrate enthusiasm about the study, communicate the importance of the research to respondents, and be attentive to respondents’ needs throughout the interview. • Clarify any confusion or concerns: Interviewers must be able to think on their feet and address unanticipated concerns or objections raised by respondents to the respondents’ satisfaction. Additionally, they should ask probing questions as necessary even if such questions are not in the script. • Observe quality of response: The interviewer is in the best position to judge the quality of information collected, and may supplement responses obtained using personal observations of gestures or body language as appropriate.
Learning Objectives At the end of the module, you will be able to: • explain concepts central to the application of sociology and sociological practice. • summarize how sociological perspectives develop. • describe the influence of history and biography on thinking and behavior. • employ a sociological imagination and apply the scientific method in problem-solving. Sociology is the study of human social life. Essentially, a career in sociology centers on work pertaining to people or providing a service to society. The knowledge and skills developed while earning a sociology degree lead to employment advocating, guiding, and helping people. In jobs where you work with or serve society, you will encounter aspects of sociology including research methods, socialization, culture, race and ethnicity, gender, sex and sexuality, stratification and inequality, deviance, and other areas of human social life. Sociological practice is the use and application of sociological principles and approaches to serve and work with people. In other words, using sociological knowledge and skills to serve others is sociological practice. 01: Careers in Sociology There are a variety of ways people use and practice sociology. Basic, public, and applied sociology are the most common forms of sociological practice. Each form integrates research on human social life to understand and improve society. Some people in sociology use discipline concepts and theories to produce knowledge and research in the field. This form of sociological practice is basic sociology. Academics including teachers, scholars, and researchers use basic sociology to study society, test hypotheses, and construct theories. Theories explain how things work and are fundamental in understanding and solving social issues (Steele and Price 2008). To address social issues, we must understand their structure, influences, and processes. 
Sociological theories give a better understanding of how society works to guide solutions and improve circumstances. Basic sociology helps develop understanding about human social life, including the influence of groups and organizations on people, to improve society (Henslin 2011). A basic sociologist will analyze society based on a theoretical foundation and publish findings for practitioners to identify and construct the best and most effective practices in addressing and solving social issues. Public sociology uses empirical methods and theoretical insights to evaluate and analyze social policy (Henslin 2011). Formal norms such as laws, regulations, court orders, and executive decisions enacted by government are social policies. A public sociologist studies society and social policies to engage in issues of public and political concern for social change (Burawoy 2014). These practitioners use sociological research and theories to contribute to and influence policy, activism, and social movements. This image "Architecture Blur Close up Clouds" by Jacob Morch is licensed under CC BY 4.0 Applied sociology uses information about society and social forces or actions to solve social issues. The goal of applied sociology is to use theories, concepts, and methods to solve real-world problems (Steele and Price 2008). This form of sociological practice is the application of sociology to improve society, not rebuild it or create social reform as with public sociology. Applied sociologists use sociology to address a specific social issue for a specific group of people. This form of practice applies sociological principles and methods to enhance human social life by analyzing, evaluating, and suggesting interventions or solutions grounded in theory. AN HISTORICAL PERSPECTIVE OF SOCIOLOGICAL PRACTICE As an applied sociologist, W.E.B. DuBois used social research and findings to liberate and empower people of color. 
Research the publications and work of DuBois then explain how his findings and efforts influenced sociological practice today. Clinical sociology, an arm of applied sociology, emphasizes the implementation of client-centered or direct service solutions. These practitioners work to solve client-centered problems by using social research to diagnose and measure interventions for change (Steele and Price 2008). These practitioners integrate sociological principles and methods to address social conditions and issues of individuals, groups, and organizations. Clinical sociologists use interventions or solutions supported by empirical evidence and grounded in theory to help improve the lives of others (Henslin 2011). This form of practice uses sociological components to serve and meet the needs of people and groups. 1. Watch the video entitled What is Applied Sociology by Dr. Stephen F. Steele: https://youtu.be/qEG5TV9za_g. 2. After viewing the film, explain the different forms of sociological practice. 3. Describe how sociology might be used or incorporated in the workplace. 4. Provide three examples of jobs or careers that incorporate sociological practice.
textbooks/socialsci/Sociology/A_Career_in_Sociology_(Kennedy)/01%3A_Careers_in_Sociology/1.01%3A_Basic_Public_and_Applied_Sociology.txt
Sociological practitioners are public or applied sociologists who apply theories, research, and methods to bring about social change (Bruhn and Rebach 2007). As a practitioner, you will be involved in planning and implementing problem-solving interventions to improve the lives of others by examining social situations and understanding how they are organized. Practitioners use their training, skills, and knowledge to provide clients (e.g., individuals, groups, or organizations) information or data about the social condition or problem and areas for improvement. Clients then use the data, with or without direct involvement of the practitioner, to plan and develop policies or programs for change. Practitioners may also play a role in monitoring and evaluating the effectiveness of policies and programs to guide their development and progress towards identified goals. A sociological practitioner is an active, ongoing agent of intervention and change (Bruhn and Rebach 2007). Sociological practitioners are one kind of interventionist among other professionals (e.g., social workers, therapists, physicians, probation officers, etc.) working on social problems. One of the most important competencies required in sociological practice, as with other interventionists, is the working relationship between practitioner and client (e.g., individuals, groups, or organizations). The overall outcomes of intervention and change rely on the trust, confidence, cooperation, and motivation of those working to improve or address the problem. Intervention and change are a process requiring collective action and collaboration among practitioners and clients to gain results. 1.03: The Scientific Method in Practice The field of sociology developed in the 1800s. Auguste Comte defined sociology as “the study of society.” His approach, which he termed positivism, centered on social reform with the aim of improving society. Comte’s work developed from his observations of the social world. 
His research founded the field of sociology through the application of the scientific method to collect empirical data on society. In essence, sociology became the scientific study of social patterns (Griffiths, Keirns, Strayer, Cody-Rydzewski, Scaramuzzo, Sadler, Vyain, Byer, and Jones 2015). Since its inception, the scientific method has been viewed as the way to answer questions about human social life. However, at the turn of the 20th century, some sociologists began to question the application of the scientific method to social research. Instead, social researchers began to incorporate an interpretive approach to the field of sociology termed antipositivism. This interpretive framework implies that numeric and statistical data gathered using the scientific method do not provide a deep understanding of the intent behind the thinking and behavioral patterns of people. As a result, sociologists today often examine statistical data and interpret or decode personal narratives in social research to identify patterns and draw conclusions about human social life. Sociologists use social research to create theories and identify solutions or interventions for change. The research process is a method for gathering facts. The purpose of social research is to investigate and provide insight into how human societies function (Griffiths et al. 2015). Social research includes the scientific method and empirical evidence resulting in an interpretive perspective based on theoretical foundation. Theories are perspectives or viewpoints. Without empirical evidence or facts, theories are simply ideas or things believed to be true but not proven. Figure 1. Visual Representation of the Interpretive Perspective. Attribution: Copyright Vera Kennedy, West Hills College Lemoore, under CC BY-NC-SA 4.0 license The scientific method provides parameters for social research. 
The scientific method involves careful data collection, theory development, and hypothesis formulation and testing (Bruhn and Rebach 2007). By using the scientific method, sociologists ensure the validity and reliability of research findings and results. Validity ensures the research study is measuring what it is intended to measure. Reliability means that if someone else replicates the same research design and plan, they will get findings or results consistent with the original study. The scientific method establishes the margins and boundaries for objective and accurate research (Griffiths et al. 2015). Using a scientific research design or plan is a recipe for other researchers to test and substantiate someone’s work and findings. Table 1. A Comparison of the Scientific Method in Basic and Applied Sociology. Attribution: Copyright Vera Kennedy, West Hills College Lemoore, under CC BY-NC-SA 4.0 license In basic sociology, the scientific method serves as a guide in research design and includes these steps:
1. Identify a research topic or issue to study
2. Develop a research question to examine or explore
3. Create a hypothesis or make a prediction about the anticipated findings and results
4. Complete a literature review of other research on the question or topic of study
5. Design a research method and approach for collecting data or information
6. Gather and collect data
7. Analyze and interpret data
8. Report findings and results
In applied sociology, the scientific method serves as a guide in research design and in the identification of solutions or interventions and includes these steps:
1. Identify a social problem to address
2. Formulate a research question
3. Describe the level of analysis and theoretical approach
4. Research interventions, programs, etc.
5. Develop a hypothesis
6. Identify an intervention
7. Implement the intervention
8. Evaluate and analyze results
Client-centered services require sociological practitioners to find and build on facts to construct and understand an issue or condition of an individual, group, or organization (Bruhn and Rebach 2007). In applied sociology, the scientific method incorporates steps for gathering facts and insights about social patterns and interventions. The primary difference between the scientific method approach in basic and applied sociology is the topics and conditions practitioners research and study. Findings of scientific investigation in applied sociology help inform an understanding of a condition or issue and the selection of strategies, solutions, or interventions for change (Bruhn and Rebach 2007). The goal in using the scientific method in applied sociology is to identify solutions or interventions to implement, derived from theoretical foundation.
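As a compact illustration, the eight-step basic-sociology research design described above can be encoded as an ordered checklist, emphasizing that each step is completed in sequence. This is a hypothetical Python sketch: the step wording comes from the text, while the `next_step` helper is invented for illustration.

```python
# Hypothetical sketch: the eight-step basic-sociology research design
# from the text, represented as an ordered sequence of steps.
BASIC_SOCIOLOGY_STEPS = [
    "Identify a research topic or issue to study",
    "Develop a research question to examine or explore",
    "Create a hypothesis about the anticipated findings",
    "Complete a literature review of other research on the topic",
    "Design a research method for collecting data",
    "Gather and collect data",
    "Analyze and interpret data",
    "Report findings and results",
]

def next_step(completed):
    """Return the next step to perform, or None once all steps are done."""
    if len(completed) < len(BASIC_SOCIOLOGY_STEPS):
        return BASIC_SOCIOLOGY_STEPS[len(completed)]
    return None

print(next_step([]))  # → Identify a research topic or issue to study
```

The applied-sociology variant would swap in its own step list (problem identification through intervention evaluation); the sequential structure is the same.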
textbooks/socialsci/Sociology/A_Career_in_Sociology_(Kennedy)/01%3A_Careers_in_Sociology/1.02%3A_What_is_a_Sociological_Practitioner.txt
Theories are perspectives or viewpoints. Without facts, theories are simply ideas or things believed to be true though not proven. The research process is a method for gathering facts. The purpose of social research is to investigate and provide insight into how human societies function (Griffiths et al. 2015). Social research includes empirical evidence and the scientific method resulting in an interpretive perspective based on theoretical foundation. There are several research methods sociologists use to collect data or gather information about people. Each method has its strengths and weaknesses specific to the type of data collected and its usefulness. Every method collects certain types of information (quantitative or qualitative) on particular sample sizes (the number of people under study). Quantitative data is numeric or statistical information. Quantitative data reflects social patterns of behavior with numbers and figures. Qualitative data is descriptive evidence. Qualitative data interprets personal accounts, narratives, and stories to depict social patterns. There are eight commonly used data collection methods to gather quantitative and/or qualitative data about people. This image "Close-up of Computer Keyboard" by Pixabay is licensed under CC BY 4.0 A survey or questionnaire is a series of questions. Before developing and disseminating a survey, sociologists must determine the target group or population of study. Once a population is selected, the researcher must determine the sample, or the individuals from the target group that will be examined. The best method to get a representative sample is to obtain a random sample, which gives everyone in the target group an equal chance of being selected for the study. When a researcher wants to target subsets of a group, they can generate a stratified random sample. Survey questions must be developed in neutral language. 
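The difference between a simple random sample and a stratified random sample described above can be sketched in code. This is a hypothetical Python illustration: the population, the `gender` attribute, the sample sizes, and the `stratified_sample` helper are all invented for the example, not drawn from the text.

```python
import random

# Hypothetical target group of 12 people with an illustrative attribute.
population = [
    {"id": i, "gender": "female" if i % 2 == 0 else "male"}
    for i in range(12)
]

random.seed(42)  # fixed seed so the sketch is repeatable

# Simple random sample: every member of the target group has an
# equal chance of being selected.
simple_sample = random.sample(population, k=4)

def stratified_sample(pop, key, k_per_stratum):
    """Split the population into subsets (strata) by `key`,
    then draw a random sample from each subset."""
    strata = {}
    for person in pop:
        strata.setdefault(person[key], []).append(person)
    chosen = []
    for members in strata.values():
        chosen.extend(random.sample(members, k=k_per_stratum))
    return chosen

# Stratified random sample: two people drawn from each gender subset.
strat_sample = stratified_sample(population, key="gender", k_per_stratum=2)
print(len(simple_sample), len(strat_sample))  # → 4 4
```

The stratified draw guarantees each subset is represented in the sample, which a simple random draw does not.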
Questions must allow respondents or participants to express their own opinions and must be worded to avoid biased answers. When designing a research study, researchers must decide whether to use closed-ended or open-ended questions. Closed-ended questions allow respondents or participants to answer questions from a list of possible answers. Open-ended questions allow respondents or participants to answer questions in their own words. Surveys are a good method for collecting quantitative data from large populations or groups. Administering a survey requires little to no direct contact with study subjects, meaning researchers spend less face-to-face time gathering data from people in comparison to other types of data collection methods. This method is limited to numeric or statistical analysis with narrow insight into the meaning or reasons behind responses or answers given by the participants.
Table 2. Effective Focus of the Most Common Data Collection Methods. Attribution: Copyright Vera Kennedy, West Hills College Lemoore, under CC BY-NC-SA 4.0 license
Research Method | Data Collection Focus
Survey | Quantitative
Participant Observation | Qualitative
Interview | Qualitative
Ethnography | Quantitative/Qualitative
Case Study | Qualitative
Secondary and Document Analysis | Quantitative/Qualitative
Unobtrusive Measures | Quantitative
Participant observation is the act of observing participants in the research setting. Here the researcher studies the group as a member of the group. Being a participant allows the researcher to observe and gain intimate insights into the ensemble and its members to develop a deep understanding of those involved. The primary challenge of this method is to avoid researcher bias from personal interactions and involvement as a member of the group. An interview is a conversation with study participants. The research interviewer develops a series of questions to ask study subjects. 
Interviews gather people’s thoughts, opinions, feelings, and biographies to help understand personal experiences and social patterns. Interviewers must develop rapport with participants to create a safe environment for sharing personal information and stories. This qualitative method is time consuming and widely used for collecting data from small groups or individuals. Ethnography involves both participant observation and interview research methods. This technique allows the researcher to collect in-depth information about the observations made through formal (structured interview) and informal interactions (participant observation). The researcher is able to receive information about the intentions, motivations, or thoughts of the study participants. This approach reduces researcher bias and ensures focused analysis of social patterns verifiable by personal accounts of others. Case studies involve a researcher focusing on a single event, situation, or individual to understand the dynamics of relationships (Henslin 2011). This in-depth qualitative method requires one-on-one longitudinal time with study subjects. The focus centers on understanding the personal biographies and accounts of individuals. Researchers must develop rapport and trust with participants over time to invoke open and honest truth telling about personal accounts and experiences. Secondary and document analysis is a research method used to analyze data or information collected by another person or party. Secondary analysis may include a review of documents or written sources including books, newspapers, records, etc. (Henslin 2011). The limitations of this method center on the data collection approach and credibility of the source. Researchers cannot alter the data collection method to ensure validity and reliability as well as make changes to the type and focus of information gathered. 
Unobtrusive measures are the act of “observing the behavior of people who do not know they are being studied” (Henslin 2011:26). Study subjects are unaware they are being examined. Researchers must take caution in using this approach. When appropriately employed, unobtrusive studies provide useful quantitative data for specific sites, environments, or contexts studied. However, information gathered using this method is not reliable for developing generalizations about social patterns. Additionally, researchers must protect identifying information during data collection to avoid violations of privacy, confidentiality, and anonymity among study subjects. UNOBTRUSIVE VS. OBTRUSIVE OBSERVATIONS A useful research approach for sociological practitioners to apply in the workplace is observational research. This approach allows a practitioner to observe behavior in a natural setting. 1. Conduct an investigation on unobtrusive and obtrusive observations. Describe each research approach and explain how data is observed and collected. 2. What is the difference between obtrusive and unobtrusive observations? 3. Explain the positive aspects of applying an observational research approach to examine human social life. 4. Explain the negative aspects of applying an observational research approach to examine human social life. 5. Describe how a public sociologist and an applied sociologist would use observational research in her or his job to assess and solve social problems. Experiments determine cause and effect relationships (Henslin 2011). In an experiment, there is an experimental group and a control group. The experimental group receives a variable, factor, or change while the control group receives no adjustments in order to compare the impact of variances or alterations between groups. A variable that causes a change is the independent variable, whereas the variable that depends on another variable to change is the dependent variable. 
Experiments help understand the relationship between independent and dependent variables. There are several limitations of experiments as a research method. Conducting an experiment is expensive (e.g., space, materials, participant incentives, etc.) and time consuming. Replicating experiments and experimental conditions is challenging. Similar to unobtrusive measures, generalizations based on results are not possible. That is, inferences from the study are not applicable to other individuals or groups beyond those in the experiment. Findings are relevant to those in the study only. In addition, researchers have little to no control over extraneous variables that may influence participant bias or results leading to artificial conclusions. USING RESEARCH TO ASSESS AND SOLVE PROBLEMS Think about a social problem you want to impact. Align each research method with the type or variety of data categories and information you can gather by applying the technique to help you assess the problem and develop possible solutions.
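The experimental/control comparison described above can be sketched with a minimal example: the independent variable is whether a group received the intervention, and the dependent variable is an outcome score. This is a hypothetical Python sketch with invented scores, not data from the text.

```python
# Hypothetical outcome scores (the dependent variable) for two groups.
experimental = [72, 75, 78, 80, 74]  # received the intervention (independent variable)
control      = [70, 69, 72, 71, 68]  # received no adjustment

def mean(scores):
    """Arithmetic mean of a list of scores."""
    return sum(scores) / len(scores)

# A simple effect estimate: the difference in group means, i.e. how much
# the dependent variable moved when the independent variable was applied.
effect = mean(experimental) - mean(control)
print(round(effect, 1))  # → 5.8
```

A real analysis would also test whether such a difference could arise by chance (for example, with a t-test), and, as the text notes, the result would apply only to the participants studied.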
textbooks/socialsci/Sociology/A_Career_in_Sociology_(Kennedy)/01%3A_Careers_in_Sociology/1.04%3A_Data_Collection_and_Analysis.txt
Every person analyzes and evaluates the world from a subjective perspective or viewpoint. Subjective concerns rely on judgments rather than external facts. Personal feelings and opinions from a person’s history and biography drive subjective concerns. The time period we live in (history) and our personal life experiences (biography) influence our perspectives and understanding about others and the social world. Our history and biography guide our perceptions of reality, reinforcing our personal bias and subjectivity. Figure 2. The Influence of History and Biography on Perspective. Attribution: Copyright Vera Kennedy, West Hills College Lemoore, under CC BY-NC-SA 4.0 license Relying on subjective viewpoints and perspectives leads to the diffusion of misinformation (inaccurate), disinformation (false or fake), and fallacies (unsound reasoning) that can be detrimental to our physical and socio-cultural environment and negatively impact our perceptions, considerations, and acceptance of others. It takes awareness and deliberate practice to reduce personal bias in our interactions, interpretations, and understanding of others and the social world. We must seek out facts and develop knowledge to enhance our objective eye. By using valid, reliable, proven facts, data, and information, we establish credibility and make sound judgments and better decisions for the world and those we work with and serve. 1.07: The Sociological Imagination The sociological imagination is a practice sociologists employ to help recognize and step outside one’s personal history and biography to examine a situation, issue, person, or society through an objective eye (Carl 2013). According to C. Wright Mills ([1959] 2000), the sociological imagination requires individuals to “think themselves away.” Mills suggests the sociological imagination allows us to examine people and the world from a “new” eye to understand the personal and social influences on people’s life choices and outcomes. 
This practice helps remove personal bias and preconceived notions and opinions to improve acceptance and consideration of others and their needs. Sociologists must remove the blinders of self-interest and ideology to look at others and the world as they are and not as we perceive them. 1. Consider your career goal and the professional environment you will work in. Describe a situation where you will need to use a formal observational approach to collect data or information to share with your clients or agency where you work. 2. How might using an informal unobtrusive observational approach be helpful in the workplace as a sociological practitioner? What type of knowledge or information can you discover by observing others? 3. Explain how either of these research approaches might improve the development of your sociological imagination or objective eye. Specifically, how might observational data help develop your sociological imagination about 1) a workplace situation, 2) co-workers, 3) a social issue, and 4) a client? 1.08: References Bruhn, John G. and Howard M. Rebach. 2007. Sociological Practice: Intervention and Social Change. 2nd ed. New York, NY: Springer. Burawoy, Michael. 2014. “Precarious Engagements: Combat in the Realm of Public Sociology.” Current Sociology Monograph 62(2):135-139. Carl, John D. 2013. Think Social Problems. 2nd ed. Boston, MA: Pearson Education, Inc. Griffiths, Heather, Nathan Keirns, Eric Strayer, Susan Cody-Rydzewski, Gail Scaramuzzo, Tommy Sadler, Sally Vyain, Jeff Byer, and Faye Jones. 2015. Introduction to Sociology 2e. Houston, TX: OpenStax College. Henslin, James M. 2011. Essentials of Sociology: A Down-to-Earth Approach. 11th ed. Upper Saddle River, NJ: Pearson. Mills, C. Wright. [1959] 2000. The Sociological Imagination. New York: Oxford University Press. Steele, Stephen F. and Jammie Price. 2008. Applied Sociology: Terms, Topics, Tools, and Tasks. 2nd ed. Belmont, CA: Thomson Wadsworth. 
Key Terms and Concepts Antipositivism Applied sociology Basic sociology Biography Case studies Clinical sociology Closed-ended questions Dependent variable Ethnography Experiments History Independent variable Interpretive framework Interpretive perspective Interview Objective conditions Open-ended questions Participant observation Positivism Public sociology Qualitative data Quantitative data Random sample Reliability Research methods Research process Scientific method Secondary and document analysis Social policies Social research Sociological imagination Sociological practice Sociological practitioner Stratified random sample Survey Theories Unobtrusive measures Validity
textbooks/socialsci/Sociology/A_Career_in_Sociology_(Kennedy)/01%3A_Careers_in_Sociology/1.06%3A_History_and_Biography.txt
Learning Objectives At the end of the module, you will be able to: • identify ways sociology is applied in the real world. • describe the macro, meso, and micro levels of analysis. • understand the similarities and differences between the theoretical paradigms in sociology. • use the sociological paradigms to identify and determine the appropriateness of a problem-solving approach and intervention. Developing a sociological imagination helps us understand how our history and biography influence our individual thinking and behavior. By understanding how our personal perspective or viewpoint develops, we are also able to understand how history and biography influence the perspective or viewpoint of others. Recognizing the effect of history and biography on people aids us in grasping the effect personal struggles and issues have on a person’s thinking and behavior. • 2.1: Levels of Analysis Sociological practitioners work on improving conditions for individuals and society as a whole. Basic, public, and applied sociologists gather research on personal issues to develop a framework for understanding public or social problems and possible solutions for improving human social life on a large scale. Public and applied sociologists specifically use scientific research to solve and improve social plights or conditions. • 2.2: Theoretical Approaches Paradigms are theoretical frameworks explaining society (Griffiths et al. 2015). These frameworks are perspectives, a way of observing and examining people and the world through different lenses. As a sociological practitioner, you must learn to use and apply sociological theories to understand and evaluate people and their social situations or conditions from an objective viewpoint to identify appropriate interventions. • 2.3: Using a Sociological Perspective Organizations hire sociological practitioners to design new programs or evaluate existing ones. 
As a practitioner, you will need to use a sociological imagination to analyze a program’s social condition or situation, its parts, and possible solutions. You will be responsible for examining the condition from the perspective of others including organizational administrators, staff, clients, and the community affected to assist you in designing or evaluating the program. • 2.4: References, Key Terms and Concepts Thumbnail: This image "Person Holding Magnifying Glass" by Mauricio Mascaro is licensed under CC BY 4.0 02: Theoretical Approaches in Practice C. Wright Mills (1959) described the connection between personal struggles and public issues. He explained that by understanding the personal struggles or conditions people confront, we foster awareness about how widespread the struggles are among people in society and the impact they have on everyone as a social problem. For example, by learning about the individual troubles and challenges one is facing with opioid addiction, sociologists get a good understanding about how these dilemmas might influence or manifest in other opioid addicts and gain insight into possible ways of combating the condition for everyone with a similar problem. Through scientific research, sociological practitioners not only gather and learn information to help individuals, they also use the data to infer or evaluate a problem on a larger scale to help society address the issue and those affected by the problem (i.e., individuals, family, friends, organizations, and communities). Sociological practitioners work on improving conditions for individuals and society as a whole. Basic, public, and applied sociologists gather research on personal issues to develop a framework for understanding public or social problems and possible solutions for improving human social life on a large scale. Public and applied sociologists specifically use scientific research to solve and improve social plights or conditions. 
Some practitioners choose to work with individuals to solve their personal issues or challenges using scientifically proven methods within the social context. These practitioners are clinical sociologists. There are three continuums or levels of social analysis in the field of sociology. A sociological practitioner works and solves problems within or across these continuums. Regardless of the level of analysis (macro, meso, or micro), a practitioner must learn about and understand all three continuums to find the best approach or solution to addressing the personal or public issue they are working to solve. Personal troubles influence and have consequences for individuals, families, friends, organizations, and communities. Sociological analysis of personal and public issues requires comprehension about how people interact and live together (i.e., the social arrangement). Analyzing the macro, meso, and micro continuum gives us information about the social arrangement from three different levels. The macro continuum or macro level analysis examines large social units including global and national systems, policies, and processes as well as large corporate structures, programs, and organizations. Macro level analysis includes exploration of broad scale social institutions including political and legal systems and processes, military systems and orders, economies, social welfare systems and processes, religions, educational systems and programs, and communication media (Bruhn and Rebach 2007). Macro analysis also evaluates social adaptation and change such as the evolving roles of women in the workplace, politics, and leadership. In sociological practice, we must be aware of the systems, policies, processes, institutions, and organizations connected to personal and public issues or problems. 
Considering the issue of opioid addiction, a practitioner will need to assess the macro level arrangements involved in creating (drug manufacturers), supporting (drug cartels), and combating (criminal justice system) the problem. Midlevel or meso level analysis examines networks, communities, organizations, and groups. The meso continuum ranges from government agencies, corporations, and universities to small secondary groups including departments, units, or clubs (Bruhn and Rebach 2007). This level of analysis evaluates internal and external effectiveness, change, adaptation, and intergroup relations of a network, community, or organization. Working on the opioid addiction epidemic, a sociological practitioner must investigate the meso level arrangements supporting and fighting addiction such as the Drug Enforcement Agency, Coast Guard, Purdue Pharma (leading manufacturer of the narcotic painkiller OxyContin), local law enforcement agencies, local mental health professionals, community addiction programs, family support groups, etc. The practitioner must understand how these groups influence each other, work together, and impede each other’s goals or mission around opioid addiction.
Table 3: Continuums of Social Analysis. Attribution: Copyright Vera Kennedy, West Hills College Lemoore, under CC BY-NC-SA 4.0 license
Level of Analysis | Social Arrangement | Examination
Macro | Large social units | Systems; structures; policies; processes; institutions; organizations
Meso | Midlevel social units | Networks; communities; organizations; groups
Micro | Small social units | Interactions; socialization; relationships and roles; thinking and motivation
As you begin your professional career, you will need to learn about and understand the industry and organization you work in and the clientele you serve.
1. Research organizations in your desired field or matching your career interest.
2. Choose one organization from those you researched to learn more about for this application.
3. Using a macro level of analysis, find out the systems, policies, processes, and institutions influencing or affecting the operations of the organization.
4. Applying a meso level of analysis, investigate which networks, communities, external organizations, and external and internal groups are shaping the organization.
5. Exercising the micro level of analysis, explore the different roles of people involved in the organization, the socialization process of individuals within the organization to learn and establish organizational norms and acceptance, relationships between internal and external people involved with the organization, and hierarchy and dominance structures of individuals within and receiving services from the organization.
The micro level examines small social units of which the individual is the social focus as a member of a specific social system (Bruhn and Rebach 2007). To understand the individual, micro level analysis serves to identify interactions among individuals and relationships among group members. This level focuses on understanding the roles of individuals in groups, relationships between group members, hierarchy and dominance structures of individuals within groups, and the socialization process of individuals to learn and establish group norms and acceptance. Micro analysis also studies the motivation, self-esteem, and socio-emotional intelligence of individuals and small groups (Rosenberg and Turner 1990; Hochschild 1979). To understand opioid addiction at a micro level, sociological practitioners will examine the first opioid experience of addicts (who, what, where, when, and why), the personal and social group influences supporting addiction, and social groups or group members helping to combat or reduce addiction.
Paradigms are theoretical frameworks explaining society (Griffiths et al. 2015). These frameworks are perspectives, a way of observing and examining people and the world through different lenses. As a sociological practitioner, you must learn to use and apply sociological theories to understand and evaluate people and their social situations or conditions from an objective viewpoint to identify appropriate interventions. Sociologists use theories to study and understand people. “The theoretical paradigms provide different lenses into the social constructions of life and the relationships of people” (Kennedy, Norwood, and Jendian 2017:22). In using the sociological eye, each theoretical paradigm helps remove bias in assessing people and social issues at all levels of analysis (macro, meso, and micro). There are three major paradigms in the field of sociology: functionalism, conflict theory, and symbolic interactionism. Functionalism and conflict theory examine society on macro and meso levels. Symbolic interactionism investigates micro level interactions in society. There are also three modern or emerging paradigms in sociology: feminism, exchange theory, and environmental theory. Feminism and environmental theory analyze macro and meso levels. Exchange theory focuses on micro level analysis. Functionalism is a macrosociological perspective examining the purpose or contributions of interrelated parts within the social structure. Functionalists examine how parts of society contribute to the whole. Everything in society has a purpose or function. Even a negative contribution serves a function by helping society define what it considers undesirable. For example, driving under the influence of alcohol or drugs inspired society to define the behavior as undesirable and to develop laws and consequences for people committing such an act. A manifest function in society results in expected outcomes (e.g., using a pencil to develop written communication).
A latent function, in contrast, has unexpected results (e.g., using a pencil to stab someone). When a function creates unexpected results that cause social hardships or negative consequences, the result is defined as a latent dysfunction. Conflict Theory is a macrosociological perspective exploring the competition among social groups over resources in society. Groups compete for status, power, control, money, territory, and other resources for economic or social gain. Conflict Theory explores the struggle between those in power and those without power within a given social context. The cultural war over immigration in the United States with competing groups representing open versus closed border ideologies is an example. Symbolic Interactionism is a microsociological perspective observing the influence of interactions on thinking and behavior. Interactionists consider how people interpret meaning and symbols to understand and navigate the social world. Individuals create social reality through verbal and non-verbal interactions. These interactions form thoughts and behaviors in response to others, influencing motivation and decision-making. Hearing or reading a word in a language one understands results in a mental image and comprehension of the information shared or communicated (e.g., the English word “bread” is most commonly visualized as a slice or loaf and considered a food item). There are three modern approaches to sociological theory (Carl 2013). Feminism, a macrosociological perspective, studies the experiences of women and minorities in the social world including the outcomes of inequality and oppression for these groups. One major focus of the feminist theoretical approach is to understand how age, ethnicity, race, sexuality, and social class intersect with gender to determine outcomes for people (Carl 2013). Exchange Theory examines decision-making of individuals in society.
This microsociological perspective focuses on understanding how people weigh costs against benefits, accentuating their motivation and self-interest in making decisions. Environmental Theory explores how people adjust to ecological, both environmental and social, changes over time (Carl 2013). The focal point of this macrosociological perspective is to figure out how people adapt or evolve over time in response to ecological space or context. Applying Theories Functionalists examine how people work together to create society as a whole. From this perspective, societies need systems, policies, processes, and institutions to exist (Griffiths et al. 2015). For example, policies or laws function to support the social structure of society, and values and norms guide people in their thoughts and actions. Consider how education is an important concept in the United States because it is valued. Educational institutions including the policies and norms surrounding registration, attendance, grades, graduation, and materials (e.g., classrooms, textbooks, libraries) all support the emphasis placed on the value of education in the United States. By observing people using functionalism, we study how members of a society work together by investigating how social systems, policies, processes, and institutions meet the needs of social networks, communities, organizations, and groups. Conflict theorists understand the social structure as inherently unequal resulting from the differences in power based on age, class, education, gender, income, race, sexuality, and other social factors. For a conflict theorist, society reinforces the “privilege” of certain groups and their status in social categories (Griffiths et al. 2015). Inequalities exist in every social system. Therefore, social norms benefit people with status and power at the expense of others.
For example, although cultural diversity is valued in the United States, some people and states prohibit interracial marriages, same-sex marriages, and polygamy (Griffiths et al. 2015). By applying conflict theory, we investigate the dynamics of power among and between social systems, policies, processes, institutions, networks, communities, organizations, and groups. Symbolic interactionists study the thoughts and actions of individuals through the expression of social interactions between them. These theorists conceptualize human interactions as a continuous process derived from the interpretation and meaning of the physical and social environment. “Every object and action has a symbolic meaning, and language serves as a means for people to represent and communicate their interpretations of these meanings to others” (Griffiths et al. 2015:72). Interactionists evaluate how people depend on the interpretation of meaning and how individuals interact when exchanging comprehension and meaning. For instance, derogatory terms such as the “N” word might be acceptable among people of the same cultural group but viewed as offensive and antagonistic when used by someone outside of the group. When sociological practitioners apply symbolic interactionism, they identify the implications of words and symbols, including tone, body language, and labels, that influence thinking and behavior. Table 4.
Theoretical Perspectives in Sociological Practice. Attribution: Copyright Vera Kennedy, West Hills College Lemoore, under CC BY-NC-SA 4.0 license

• Functionalism (macro and meso): Examine how members of society work together
• Conflict theory (macro and meso): Investigate the social dynamics of power and inequality
• Symbolic interactionism (micro): Identify the implication of words and symbols on thinking and behavior
• Feminism (macro and meso): Distinguish the circumstances and effects of oppression on women and minority groups
• Exchange theory (micro): Evaluate the influence of social forces on thinking, behavior, and decisions
• Environmental theory (macro and meso): Discover the social and environmental impact on change or adaptation

Feminism explores the lives and experiences of women and minorities. For example, a woman in Lebanon does not have the right to dissolve a marriage without her husband’s consent even in cases of spousal abuse (Human Rights Watch 2015). Feminism explicitly examines oppressive structures and inequities within systems, policies, institutions, and groups in relation to age, gender, race, social class, sexuality, or other social categories. The application of feminism in sociological practice notes the circumstances and effects of oppression resulting from social systems, policies, processes, and institutions on networks, communities, organizations, or social groups. Exchange theorists observe how society and social interactions influence decision-making. Social values and beliefs often influence people’s attitudes, judgments, or actions. Sociological practitioners apply exchange theory to evaluate people’s decisions to see the social forces motivating or driving people’s thinking, behavior, and choices. Environmental theorists assess how people, as part of the social and physical environment, adapt and change over time.
If you contemplate any rule of law, you can see how society has altered because of shifts in social ideas or ecological fluctuations. Consider the anti-tobacco laws in the United States making it illegal to smoke in public spaces as an example of social shifts towards health and wellness, or water meters to control and regulate residential water usage and waste as a response to drought and prolonged water shortages in the United States. Application of environmental theory uncovers the social and environmental influences of change or areas encountering change in social systems, policies, processes, institutions, networks, communities, organizations, and groups. HARNESSING UNDERSTANDING ABOUT SOCIAL CONDITIONS 1. Review the organizational information you researched and assessed in the Levels of Analysis exercise. 2. Analyze the organization using each of the theoretical paradigms: Functionalism, Conflict Theory, Interactionism, Feminism, Exchange Theory, and Environmental Theory. 3. Now, analyze the clientele or population served by the organization using each of the paradigms: Functionalism, Conflict Theory, Interactionism, Feminism, Exchange Theory, and Environmental Theory.
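For readers who organize analyses in software, the paradigm summary in Table 4 can be captured as a simple lookup structure. This is a purely illustrative sketch — the paradigm names and levels mirror the table in the text, but the structure and helper function are invented for the example and are not part of the textbook's method:

```python
# Illustrative sketch: Table 4 ("Theoretical Perspectives in Sociological
# Practice") as a lookup table, so a checklist of paradigms can be generated
# for any level of analysis. Paradigm names and levels mirror the table.
PARADIGMS = {
    "Functionalism": {"macro", "meso"},
    "Conflict theory": {"macro", "meso"},
    "Symbolic interactionism": {"micro"},
    "Feminism": {"macro", "meso"},
    "Exchange theory": {"micro"},
    "Environmental theory": {"macro", "meso"},
}

def paradigms_for_level(level: str) -> list[str]:
    """Return the paradigms that operate at the given level of analysis."""
    return sorted(name for name, levels in PARADIGMS.items() if level in levels)

print(paradigms_for_level("micro"))
# ['Exchange theory', 'Symbolic interactionism']
```

A practitioner working at the micro level, for instance, would be prompted to apply symbolic interactionism and exchange theory, matching the table's assignments.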
Organizations hire sociological practitioners to design new programs or evaluate existing ones. As a practitioner, you will need to use a sociological imagination to analyze a program’s social condition or situation, its parts, and possible solutions. You will be responsible for examining the condition from the perspective of others including organizational administrators, staff, clients, and community members affected to assist you in designing or evaluating the program. To accomplish this task, you must be able to complete an objective evaluation or needs assessment and communicate your findings to those involved. To apply a sociological perspective, you will need to examine the social arrangement of the condition or situation from a macro, meso, and micro level of analysis. To begin, identify the structure and make-up of the organization and systems involved in or around the issue that the program is working to address. Next, assess the social groups involved in the organization or system. Understand what input and impact these groups have on persons both involved within and served by the program. Finally, investigate the relationships and interactions among the organization, groups, and individuals involved in the program and the condition it is addressing. To construct a fishbone diagram, complete the following steps: 1. Identify the social condition or situation. 2. Brainstorm potential variables (categories or factors) influencing the condition including policies, resources, culture, etc. 3. Start a fishbone diagram with an arrow leading to the condition and the variables branched off. 4. Write the categories or factors of the variables identified in item 2. Include the categories or factors above and below the arrow pointing to the condition. Be sure to connect each variable to specific names (for example, a policy variable might branch to the specific rules or laws involved). 1. Interpret the results and identify the primary issues influencing the current condition or situation.
Consider how the theoretical paradigms can help you make sense of what is happening. 2. Write a summary of your findings by presenting the condition or situation, then explain the factors influencing the problem. 3. Plan the steps for implementing a solution or resolving the problem. A useful way to understand a program or organization’s needs is to chart the social condition or situation using a fishbone diagram. This tool constructs a visual representation of the social impact on a condition and identifies the primary issues affecting the problem. Results assist in developing a written summary of the factors influencing the condition and are used to plan the next steps in solving or resolving any issues. The mayor is working with city council leaders to improve waste management services to local residents in the area. The city is considering a private-public partnership to provide waste removal and recycling services to the community. Use your sociological imagination to identify the pros and cons of the city embarking on this partnership with a private, for-profit company. The partnership will require an annual contract with a fee for services. How will this type of partnership impact local residents? Consider the impact five (5) years from now. 1. Use a fishbone diagram to structure the issue. 2. Apply the six (6) theoretical paradigms in sociology to identify variables that may influence the issue. 3. Write a summary of your analysis. 2.04: References, Key Terms and Concepts Bruhn, John G. and Howard M. Rebach. 2007. Sociological Practice: Intervention and Social Change. 2nd ed. New York, NY: Springer. Carl, John D. 2013. Think Social Problems. 2nd ed. Boston, MA: Pearson Education, Inc. Griffiths, Heather, Nathan Keirns, Eric Strayer, Susan Cody-Rydzewski, Gail Scaramuzzo, Tommy Sadler, Sally Vyain, Jeff Byer, and Faye Jones. 2015. Introduction to Sociology 2e. Houston, TX: OpenStax College. Hochschild, Arlie Russell. 1979.
“Emotion Work, Feeling Rules, and Social Structure.” American Journal of Sociology 85:551-575. Human Rights Watch. 2015. “Unequal and Unprotected: Women’s Rights under Lebanese Personal Status Laws.” Retrieved January 9, 2018 (https://www.hrw.org/report/2015/01/1...al-status-laws). Kennedy, Vera, Romney Norwood, and Matthew Jendian. 2017. Critical Thinking about Social Problems. Dubuque, IA: Kendall Hunt Publishing Company. Mills, C. Wright. 2000 [1959]. The Sociological Imagination. New York: Oxford University Press. Rosenberg, Morris and Ralph H. Turner. 1990. Social Psychology: Sociological Perspectives. New Brunswick, NJ: Transaction.

Key Terms and Concepts
• Conflict Theory
• Functionalism
• Environmental Theory
• Exchange Theory
• Feminism
• Fishbone diagram
• Latent function
• Levels of social analysis
• Macro level
• Manifest function
• Meso level
• Micro level
• Paradigms
• Personal struggles
• Social arrangement
• Social problem
• Symbolic interactionism
• Theoretical paradigms
Learning Objectives At the end of the module, you will be able to: 1. explain the sociological process of intervention. 2. describe the six approaches to solving social issues. 3. recognize and apply macro, meso, and micro levels of intervention. 4. explain the sociological practitioner's role and approach in solving social problems. 5. identify issues requiring a multilevel, multifactor problem solving approach. Social issues impair social functioning and negatively impact the lives of individuals, groups, and organizations (Bruhn and Rebach 2007). People affected by a particular social issue may face a variety of obstacles and challenges associated with the problem including labeling, stigma, discrimination, and isolation. Sociological practitioners work to address the problem by changing the social setting, arrangement, norms, and behaviors surrounding the issue and the people involved. A sociological practitioner may serve as the facilitator of this social change, a broker by acting on the behalf of others for change, or a clinician by providing direct services or help to change the situation of individuals and families. • 3.1: Interventions and Problem Solving There are six approaches most commonly used by sociological and other professional practitioners, communities, and clients to address social problems and create change. To resolve or improve situations, different problems require different approaches based on the client needs and social resources available to them. Each sociological approach incorporates a different level of analysis to assess the problem with a specific focal area of intervention. • 3.2: Problem Solving Approaches and Interventions There are six problem solving approaches and interventions most commonly used among practitioners. Each approach examines a different aspect of a social problem. The nature of the problem and people involved determines the most appropriate intervention to apply.
• 3.3: Commonalities of Approaches There are four common themes among problem solving approaches. All approaches focus on creating change. Interventionists work towards changing behavior or the social arrangements of clients. The goal is to improve the social functioning of individuals, groups, communities, and organizations. • 3.4: References, Key Terms and Concepts Thumbnail: This image "Man Leaning on Table" by Jopwell is licensed under CC BY 4.0 03: Sociological Interventions When social change requires different levels of analysis, sociological approaches must identify and explore multiple solutions across continuums. Not all approaches result in an expeditious solution.
Sociological approaches and interventions take planning and time to implement and can take years to achieve lasting change or improve people’s lives. Process of Intervention Regardless of approach, sociologists follow an incremental process of intervention to remedy a social problem. Each sociological approach includes a process of intervention with assessment, planning, implementation, and evaluation phases. There are no timelines of completion defined within each phase. Rather, the sociological practitioner, clients, and other impacted individuals or groups set deadlines and completion parameters based on context and need. The first phase examines the social problem and the needs of those it impairs. This is an investigative stage to gather information and understand the situation to define the problem (Bruhn and Rebach 2007). A sociological practitioner must first identify the presenting problem and client(s). The presenting problem refers to the client’s perspective of the problem as they see it in their own words (Bruhn and Rebach 2007). The assessment is a discovery phase of the history and evolution of the problem within the geographic region to find out who is seeking help and why. The assessment also helps determine the role or involvement of the sociological practitioner in the intervention. An assessment is a case study guided by the nature of the problem and clients (Bruhn and Rebach 2007). Data collection may include interviews, focus groups, surveys, and secondary analysis (e.g., analytic data, educational records, criminal records, medical files, etc.). Findings and results are presented and discussed with clients and other involved parties to formulate solutions and objectives of intervention. The next stage in the process is to plan the steps for achieving intervention objectives.
The plan is a formal (written) agreement among interventionists (including the sociological practitioner) and client(s) outlining the objectives and the roles and responsibilities of each person involved. The plan will include observable, measurable objectives with: (1) a subject and verb stating the condition to achieve, (2) the amount or percentage of reduction or improvement in the condition, and (3) a timeframe or deadline for completion (Bruhn and Rebach 2007). Both process and outcome objectives must be delineated in the plan. Process objectives focus on program operations or services, and outcome objectives concentrate on the results of the intervention against baseline data (i.e., data collected prior to intervention). Interventionists and clients work together to develop a plan so everyone has an equal voice and understanding of their duties, obligations, and work to complete in the implementation phase. Consider a social problem you would like to address in your community. Conduct secondary analysis of the issue to identify the presenting problem, clientele, and existing community services. Explore nonprofit and public agencies in your community working on the problem you chose to help you gather information. After completing your analysis, draft four observable, measurable objectives of intervention for the problem and population you wish to address. Two objectives must focus on process and two on outcomes. All objectives must include a subject and verb stating the condition to achieve, the amount or percentage of reduction or improvement in the condition, and a timeframe or deadline for completion. The third phase in the process centers on implementation. In this stage, the plan commences according to the steps outlined in the formal agreement. Implementation puts the plan into action by following the proposed sequence and schedule. This phase engages strategies in order to accomplish objectives.
For example, solving chronic poverty in your community might require employing several strategies such as improving K-12 education, increasing higher education enrollments and job skills training, providing access to health care, and developing employment opportunities. During the implementation phase, interventionists and collaborators will initiate and work on each strategy for change. The final phase in the process of intervention is evaluation. Sociologists use evaluation to find out if a program, service, or intervention works (Steele and Price 2008). There are two types of evaluation. A process or formative evaluation gathers information to help improve or change a program, service, or intervention. Did everything occur and work according to plan? Sociological practitioners work with clients to determine program strengths, weaknesses, and areas of improvement to strengthen or adapt the program (Steele and Price 2008). An outcome or summative evaluation measures the impact of the program, service, or intervention on clients or participants. Were benchmarks achieved or changes made? Practitioners measure changes in clients over the duration of their participation from start to completion. The impact evaluation determines if change occurred, any unintended outcomes, and the long-term effects. Evaluation is an ongoing task tracking program progress from beginning to end (Bruhn and Rebach 2007). Interventionists and practitioners must monitor the program continuously to ensure the service or intervention is advancing toward change, and adjustments or alternatives are deployed to increase effectiveness in a timely manner. The goal of evaluation is to know why a program, service, or intervention succeeded or failed to reform or adapt present and future support and solutions. Evaluation is a mechanism of continual improvement by regularly providing information and identifying unintended consequences. 
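Because outcome objectives are measured against baseline data, a summative evaluation often reduces to comparing a pre-intervention measure with a post-intervention measure against the objective's stated target. The sketch below is a minimal, hypothetical illustration of that comparison — the objective fields (a condition, a targeted percentage of reduction, a baseline, and an observed value) follow the plan format described earlier, but the helper names and numbers are invented for the example:

```python
# Hypothetical sketch: checking an outcome objective against baseline data.
# An outcome objective states a condition, a targeted percentage of
# improvement, and is evaluated against a baseline measure (data collected
# prior to the intervention) and an observed post-intervention measure.

def percent_change(baseline: float, observed: float) -> float:
    """Percentage change from the baseline to the observed measure."""
    return (observed - baseline) / baseline * 100.0

def reduction_objective_met(baseline: float, observed: float, target_pct: float) -> bool:
    """A reduction objective is met when the measure fell by at least target_pct."""
    return percent_change(baseline, observed) <= -target_pct

# Invented example: "reduce opioid overdose calls by 20% within one year"
baseline_calls = 250   # calls in the year before the intervention (hypothetical)
observed_calls = 190   # calls in the year after the intervention (hypothetical)

print(round(percent_change(baseline_calls, observed_calls), 1))            # -24.0
print(reduction_objective_met(baseline_calls, observed_calls, 20.0))       # True
```

A formative evaluation would look behind such numbers at how the program operated; the summative comparison above only answers whether the benchmark itself was achieved.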
Evaluation requires both quantitative and qualitative data, using a variety of data collection methods and tools to gather information (e.g., tests, questionnaires, archival data, etc.). Data collection tools vary from program to program; sometimes tools exist to conduct an evaluation, and other times practitioners must develop them (Viola and McMahon 2010). Practitioners lead in the development of data collection protocols, tools, and instruments for review by participants (e.g., clients and community members) before they are ready for use. As a contributing member of an evaluation team, sociological practitioners must be aware of role conflict, which it is imperative to avoid in a participatory evaluation model. In other words, practitioners must be aware of their role within the evaluative context or situation as to whether one is serving as a researcher, practitioner, or interventionist (i.e., clinical sociologist). It is difficult to implement the scientific method (process and procedures) in the field within the standards of academic research when serving as a practitioner (Bruhn and Rebach 2007). Sociological practitioners or interventionists do not always have control over the evaluation research, study environment, or time to complete an evaluative study as prescribed by the scientific method. The Workforce Internship Networking (WIN) Center at West Hills College Lemoore in California connects and supports students and alumni by providing employment, occupational readiness, and job placement information and resources to advance personal career goals. The WIN Center provides a space for employers and students to connect. At the WIN Center, students and alumni receive skills training, employment and internship application assistance, and support in creating a professional profile. 1. Describe why it might be important to evaluate the WIN Center. 2.
Considering the importance of evaluating college campus programs, how often would you recommend evaluating the WIN Center’s programs and services? What should the evaluation examine? 3. What role could program monitoring play in the overall evaluation of the WIN Center? 4. If you were responsible for overseeing program monitoring and the evaluation of the WIN Center, what data would you collect to assess its impact? In addition, evaluations may cause tension between practitioners (interventionists) and evaluation associates. Interventionists are responsible for providing data and keeping records while implementing program activities. Conflicting demands for an interventionist’s time and energy during the program implementation process may lead to a delay in gathering and sharing data with evaluators. Evaluation is not always equally valued, and some interventionists may consider evaluation unimportant or a threat to their work or process, resulting in uncooperative behavior or disinterest.
There are six problem solving approaches and interventions most commonly used among practitioners. Each approach examines a different aspect of a social problem. The nature of the problem and people involved determines the most appropriate intervention to apply. A social systems approach examines the social structure surrounding the problem or issue. This approach requires macro, meso, and micro levels of analysis to help understand the structure of the problem and the arrangement of individuals and social groups involved. Analysis requires comprehension of the entire issue and its associated parts, as well as which components and protocols of the structure are independent of or dependent on each other. Application of this approach requires a grasp of the complete problem including the hierarchy, order, patterns, and boundaries of individuals and social groups including their interactions, relationships, and processes as a body or structure surrounding the issue (Bruhn and Rebach 2007). The interventions deployed using a social systems approach focus on establishing and maintaining stability for all parties even while change is occurring. Social system interventions require change agents or leaders such as sociological practitioners to help control and guide inputs (what is put in or taken into the problem) and outputs (what is produced, delivered, or supplied resulting from change) used in problem solving (Bruhn and Rebach 2007). This approach requires the involvement of everyone in the social structure to design or re-design the system and processes around the issue. The human ecology approach examines the “web of life” or the ecosystem of a social problem or issue. This approach is often visually represented by a spider web to demonstrate how lives are interlinked and interdependent.
A human ecology approach focuses on macro and meso levels of analysis to develop knowledge about the social bonds, personal needs, and environmental conditions that impede or support life challenges and opportunities for individuals. Practitioners evaluate and analyze where individuals and groups fit in the social structure or ecosystem and their roles. The purpose of this approach is to identify cognitive and emotional boundaries people experience living in social systems to help confront and remove the obstacles they face. Interventions applied in a human ecological approach target changes in families, institutions, and small communities. The goal is to confront the stressors and strain created by social situations and settings. Interventions from a human ecology approach help people determine acceptable behaviors within different social environments (Bruhn and Rebach 2007). Practitioners work with social groups to remove collaborative challenges between groups in a social ecosystem and the individuals working and living within them. Change is concentrated on developing a new system and process to support and remove obstacles for individuals affected by a social problem. 1. Describe the social systems approach and explain the types of social problems or issues for which it is the most valid method to use. 2. Describe the human ecology approach and explain the types of social problems or issues for which it is the most valid method to use. 3. Which approach is the most appropriate for assessing and addressing the social conditions listed below? Use supporting evidence to justify your analysis. 1. A county mental health court 2. Gender neutral bathrooms on a college campus 3. Anti-bullying campaign in local K-12 schools A life cycle approach examines the developmental stages and experiences of individuals facing issues or various life crises. Meso and micro levels of analysis are required with this method.
Data gathered assists practitioners in understanding the adaptation of individuals or groups to change, challenges, and demands at each developmental stage of life (Bruhn and Rebach 2007). Analysis incorporates evaluation of interpersonal connections between a person and the environment, life transitions, and patterns. This approach is applicable when working with individuals, groups, and organizations, all of which have and go through a life cycle and stages of development. Interventions using this approach target changes in the social norms and expectations of individuals or groups facing difficulties. Practitioners help identify the context and issues creating anxiety among individuals or groups and facilitate coping strategies to address their issues. This approach builds on positive personal and social resources and networks to mend, retrain, or enable development and growth. The clinical approach evaluates disease, illness, and distress. Both meso and micro levels of analysis are required for this method. Practitioners assess biological, personal, and environmental connections by surveying the patient or client’s background and current and recent conditions (Bruhn and Rebach 2007). A Patient Evaluation Grid (PEG) is the most commonly used tool for data collection. This approach requires in-depth interactions with the patient or client to identify themes associated with their condition and the structure of the social system related to their illness and support. When applying this approach in medical practice, the evaluation and analysis leads to a diagnosis. 1. Describe the life cycle approach and explain the types of social problems or issues for which it is the most valid method to use. 2. Describe the clinical approach and explain the types of social problems or issues for which it is the most valid method to use. 3. Which approach is the most appropriate for assessing and addressing the social issues listed below? Use supporting evidence to justify your analysis.
1. Policing strategies to reduce crime and improve community relationships
2. Reductions in self-injury or cutting among teens
3. A community college social work education degree program

Intervention in a clinical approach concentrates on removal of symptoms or the condition, or on changes in the individual, to solve the problem. The overarching goal of this method is to prevent the problem from recurring and to keep the solution from interfering with the individual's functioning. Problem management must minimally disrupt the social system of the patient or client.

A social norms approach focuses on peer influences to provide individuals with accurate information and role models to induce change (Bruhn and Rebach 2007). This approach observes macro, meso, and micro levels of analysis. Intervention centers on providing correct perceptions about thinking and behavior to induce change in one's thoughts and actions. This technique is a proactive prevention model aimed at keeping a problem from happening or arising. There are three levels of intervention when applying a social norms approach (Bruhn and Rebach 2007). Practitioners use interventions independently or together for a comprehensive solution. At the universal level of intervention, all members of a population receive the intervention without identifying which individuals are at risk. A selective level of intervention directs assistance or services to an entire group of at-risk individuals. When specific individuals have moved beyond risk and already show signs of the problem, they receive an indicated level of intervention. A comprehensive intervention requires an integration of all three levels.

Practitioners assist communities in problem solving by applying a community based approach. All three levels of analysis (macro, meso, and micro) are required for this method.
The aim of this approach is to plan, develop, and implement community based interventions whereby local institutions and residents participate in problem solving and work towards preventing future issues. Practitioners work with communities on three outcomes: individual empowerment, connecting people, and improving social interactions and cooperation (Bruhn and Rebach 2007). Concentrating on these outcomes builds on community assets while tailoring solutions to local political, economic, and social conditions. By building bridges among individuals and groups in the community, practitioners facilitate connections between services, programs, and policies while attacking the problem from multiple vantage points. A community based approach helps ensure problem analysis, evaluation, and interventions are culturally and geographically appropriate for local residents, groups, and organizations. To operate effectively, this intervention requires practitioners to help facilitate face-to-face interactions among community members and develop a communication pattern for solving community problems. To build an appropriate intervention, practitioners must develop knowledge and understanding about the purpose, structure, and process of each group, organization, and collaboration within the community (Bruhn and Rebach 2007). Upon implementation, a community based approach empowers local residents and organizations to observe and monitor their own progress and solutions directly.

1. Describe the social norms approach and explain the types of social problems or issues for which this approach is the most valid method.
2. Describe the community based approach and explain the types of social problems or issues for which this approach is the most valid method.
3. Which approach is the most appropriate for assessing and addressing the social problems listed below? Use supporting evidence to justify your analysis.

1. Human trafficking prevention program
2.
Reductions in electronic cigarette, vaping, and new tobacco product usage

3.03: Commonalities of Approaches

There are four common themes among problem solving approaches. All approaches focus on creating change. Interventionists work towards changing behavior or the social arrangements of clients. The goal is to improve social functioning of individuals, groups, communities, and organizations.

Table 5. Problem Solving Interventions. Attribution: Copyright Vera Kennedy, West Hills College Lemoore, under CC BY-NC-SA 4.0 license

Problem Solving Approach | Intervention Focus
Social Systems | Create stability of the social arrangement or social structure
Human Ecology | Identify social location of individuals (place or position in society) to tackle and remove obstacles
Lifecycle | Build resources and networks to mend, retrain, or enable development and growth
Clinical | Remove symptoms or condition and help support change
Social Norms | Proactive prevention through modeling behavior
Community Based | Train and empower local residents and organizations to solve their problems

Each approach begins with a judicious problem assessment. Identifying and investigating the presenting problem is critical to understanding and framing the needs of clients. The assessment stage allows interventionists to formulate a theory and construct an operational definition of what is to be changed (Bruhn and Rebach 2007). This stage initiates the intervention planning process. Many social problems have common causes and solutions, so a multidimensional approach to problem solving alleviates many symptoms but can also cause new ones to surface. According to Lindblom and Cohen (1979), solving one social problem often creates new problems or solves others.
Because social problems are multidimensional, touched by multiple factors and social arrangements, all approaches incorporate multi-factor and multilevel problem solving interventions examining micro (individual) and meso or macro (collective) needs of clients. This means all approaches involve more than one level of intervention. Lastly, all approaches include client follow-up and an evaluation of programs and services. These activities serve as a feedback mechanism for determining successes, failures, and areas for improvement (Bruhn and Rebach 2007). Evaluation data is used for developing programs and providing accountability among practitioners and clients.

3.04: References, Key Terms and Concepts

Bruhn, John G. and Howard M. Rebach. 2007. Sociological Practice: Intervention and Social Change. 2nd ed. New York, NY: Springer.
Lindblom, Charles E. and David K. Cohen. 1979. Usable Knowledge: Social Science and Social Problem Solving. New Haven, CT: Yale University Press.
Steele, Stephen F. and Jammie Price. 2008. Applied Sociology: Terms, Topics, Tools, and Tasks. 2nd ed. Belmont, CA: Thomson Wadsworth.
Viola, Judah J. and Susan Dvorak McMahon. 2010. Consulting and Evaluation with Nonprofit and Community-based Organizations. Sudbury, MA: Jones and Bartlett Publishers.

Key Terms and Concepts

Assessment, Baseline data, Clinical approach, Community based approach, Comprehensive intervention, Continual improvement, Evaluation, Formative evaluation, Human ecology approach, Implementation, Indicated level of intervention, Life cycle approach, Outcome objectives, Plan, Process objectives, Process of intervention, Role-conflict, Selective level of intervention, Social norms approach, Social systems approach, Summative evaluation, Universal level of intervention
Learning Objectives

At the end of the module, you will be able to:

• understand the personal characteristics for a successful career in sociology.
• recognize the professional skills and competencies required in sociological practice.
• define professional code of ethics.
• evaluate and apply ethical standards in sociological practice.

Careers for sociologists are diverse and sometimes nontraditional (Viola and McMahon 2010). Sociologists are trained to help people and organizations assess and improve their social condition or situation. As agents of change, sociologists must develop an understanding of investigational methods, measurements, and social relationships to help clients and communities narrow the gap between their current condition and what they need (Bellman 1990). Overall, a sociologist's work focuses on helping people by adding value and expertise in research, data collection, and analysis about social issues and potential solutions.

• 4.1: Skills and Competencies in Sociological Practice
Sociologists are trained to help people and organizations assess and improve their social condition or situation. As agents of change, sociologists must develop an understanding of investigational methods, measurements, and social relationships to help clients and communities narrow the gap between their current condition and what they need. Sociologists focus on helping people by adding value and expertise in research, data collection, and analysis about social issues and potential solutions.

• 4.2: Code of Ethics
Ethics are a set of concepts and principles that guide social behavior. To avoid cultural and religious bias or judgment, society uses shared ethical ideologies as guides in reasoning ethical issues. Ethical life emerges from the human capacity and cognitive ability to comprehend the effect of helping or harming others (Paul and Elder 2005).
• 4.3: References, Key Terms and Concepts

Thumbnail: This image "Boy and Girl Sitting on Bench Toy" by June Intharoek is licensed under CC BY 4.0

04: Working with Diverse Groups

Sociologists play a direct role in people's lives by providing information and support to individuals and organizations for change. Sociological practitioners help others create programs and services, build capacity and infrastructure, adapt to social and organizational fluctuations, and develop or increase resources (Viola and McMahon 2010). The social and collaborative nature of being a practitioner requires people skills, including trust and integrity. As a result, sociologists must develop personal characteristics such as patience, flexibility, and tolerance to build trust with clients and community members they serve. Being a sociologist requires knowledge in building and maintaining relationships with people. Sociological practitioners must develop the ability to establish rapport and trust with others for effective and efficient collaboration (Viola and McMahon 2010). The most critical interpersonal communication skills needed for a successful career in sociology include listening, facilitation, and conflict resolution.
These skills allow practitioners to communicate across cultures and share data or information using technical (formal) terminology and conversational (informal) language to work with and understand diverse individuals and groups. Communication only happens when practitioners, clients, and community members engage in uncovering and understanding the meaning behind the words. Active listening requires listeners to give feedback, confirm understanding by asking questions, and make clarifying statements rather than focusing on what they want to say (Freedom Learning Group 2019). Without listening, there is no understanding and no foundation for building trust and rapport. By listening, a sociological practitioner can assess people's needs and translate their questions and desires into concrete tasks to support and help them.

Much of the work facilitated by sociological practitioners focuses on partnerships with individuals and groups to cultivate solutions for change. Practitioners must demonstrate a level of leadership and facilitation skills to create an open, safe, and collaborative work environment for all participants. Creating an effective setting and atmosphere for change requires practitioners to be aware of the roles and intrinsic motivation of the people involved. This level of awareness helps manage group dynamics, cohesiveness, and direction for optimal results. Secondly, effective facilitation requires sensitivity to the established norms people bring to the collaborative process and the social pressures to conform when working in groups (Black, Bright, Gardner, Hartmann, Lambert, Leduc, Leopold, O'Rourke, Pierce, Steers, Terjesen, and Weiss 2019). An effective sociological practitioner will identify and show consideration for established norms while helping people recognize shared standards and customs among the collaborative group to identify common goals for change.
Additionally, supportive practitioners aid individuals in dealing with internal group pressures, allowing everyone to retain their unique characteristics or traits while accepting the collaborative group's standards or procedures. Lastly, by harnessing group cohesiveness, practitioners show collaborators how to help each other and work together as a team (Black et al. 2019). Practitioners emphasize the benefits of working together towards common goals for change. Group cohesiveness blends complementary strengths and promotes a sense of ownership among each group member.

There are five stages of team or group development (Tuckman and Jensen 1977). The forming stage, or first phase of development, begins with the introduction of team members. This is commonly known as the "polite" stage, in which team participants are friendly, demonstrate enthusiasm, focus on similarities, and look for leadership and direction among the membership (Black et al. 2019). The second or storming stage initiates when team members begin testing the group process. This is the "win-lose" stage, where individuals clash for control over the group and choose sides, creating a negative atmosphere with frustration around the goals, tasks, and progress of the group (Black et al. 2019). The storming process may be long and painful for the team, but the third or norming stage will eventually form and take shape. In the norming stage, team members demonstrate group cohesion; openly exchange and communicate ideas; have common goals, ground rules, and boundaries; and share responsibility and control (Tuckman 1965). Once there is established value and respect for one another, the team is able to build momentum and achieve results. In the fourth or performing stage, the team is confident, self-directed, and expresses renewed enthusiasm (Black et al. 2019). In this stage, the team is a problem-solving instrument (Tuckman 1965).
As a project, program, or initiative ends, team members complete their work and the group begins to dissolve. This adjourning stage is the fifth and final stage of team development. In this stage, team members seek closure and recognition for their work and contributions (Tuckman and Jensen 1977).

Leading people and facilitating groups is challenging. It takes time and experience to understand and figure out the most appropriate methods and approaches for supporting people through change and problem solving. One of the most challenging aspects of working with teams and collaborative partnerships is figuring out how to balance the demands and expectations of individuals, the team or partners, and external stakeholders and constituencies (Black et al. 2019). Developing a checklist to identify people relevant to a particular effort or cause helps practitioners manage and facilitate the group process as well as ensure optimal performance for leading change. Some checklist questions to consider include:

• Whose participation and support do we need to identify the issue or condition and solve the problem?
• Who needs my support? What do they need from me or the team or collaborators?
• Who can keep me and our team or effort from being successful?
• What is my ongoing strategy to motivate, engage, and influence change?

The answers to these questions are important in guiding and building the relationships we need to develop for social change. The primary role of sociological practitioners is to build and manage relationships with people who will support the team and their work (Black et al. 2019). This is the politics of sociological practice, meaning practitioners must develop interpersonal skills to bridge people and shape strong working relationships. Acting as a facilitator, a sociological practitioner must demonstrate leadership skills.
The practitioner often serves as an orchestrator, a person who arranges and helps set the tone for a group to push on and accomplish its goals (Black et al. 2019). The process of leadership is different from being a leader or head of a group. Leadership is a working relationship with group members directed at achieving the needs of the team in problem solving and change. The act of leadership is an exchange relationship among group members and the practitioner to influence each other and the context or condition the collaborators are addressing.

Several characteristics endow people with leadership potential (Kirkpatrick and Locke 1991; Kirkpatrick and Locke 2000; Locke et al. 1991). The common traits of effective leadership include:

• Drive, or a strong desire to achieve, accompanied by ambition, energy, tenacity, and initiative
• Motivation to lead others
• Commitment to truth, honesty, and integrity
• Self-confidence, or assurance in one's self, ideas, and ability
• Cognitive ability, or analytical ability to think conceptually and strategically
• Industry knowledge, or understanding of the community and social conditions or needs
• Miscellaneous traits such as charisma, creativity, flexibility, and self-monitoring, or altering one's behavior in response to social cues as necessary

Sociological practice emphasizes a transformational leadership style, where the focus is on inspiring others to action and helping people understand they can influence outcomes. Transformational leadership centers on engaging and energizing others through procedural justice, whereby people affected by a condition or issue play an equitable role in confronting or solving the problem (Pillai, Schriesheim, and Williams 1999). This form of leadership motivates individuals to transcend their own centric thinking or self-interest for the benefit of the group, community, and society (Manz and Sims 1987).
Through this process, collaborators focus on higher-order needs such as self-esteem and self-actualization, and get a voice in influencing decisions and outcomes that affect and are important to them.

Interpersonal conflict involves situations when a person or group blocks the expectations, ideas, or goals of another person or group. Conflict develops when people or groups desire different outcomes, hold differing opinions, offend one another, or simply do not get along (Black et al. 2019). People tend to assume conflict is bad and must be eradicated. However, a moderate amount of conflict can be helpful in some cases. For example, conflict can lead people to discover new ideas and new ways of identifying solutions to social problems or conditions and is often the very mechanism to inspire innovation and change. It can also facilitate motivation among clients, communities, and organizations to excel and push themselves in order to meet outcomes and objectives (Black et al. 2019). According to Coser (1956), conflict is likely to have stabilizing and unifying functions for a relationship in its pursuit of resolution. People and social systems readjust their structures to eliminate dissatisfaction and re-establish unity.

The appropriate conflict resolution approach depends on the situation and the goals of the people involved. According to Thomas (1977), each faction or party involved in the conflict must decide the extent to which it is interested in satisfying its own concerns, categorized as assertiveness, and satisfying its opponent's concerns, known as cooperativeness (Black et al. 2019). Assertiveness can range on a continuum from assertive to unassertive, and cooperativeness can range on a continuum from uncooperative to cooperative. Once the people involved in the conflict have determined their level of assertiveness and cooperativeness, a resolution strategy emerges.
In the conflict resolution process, competing individuals or groups determine the extent to which a satisfactory resolution or outcome might be achieved. If someone does not feel satisfied or feels only partially satisfied with a resolution, discontent can lead to future conflict. An unresolved conflict can easily set the stage for a second confrontational episode (Black et al. 2019).

Sociological practitioners can use several techniques to help prevent or reduce conflict. Actions directed at conflict prevention are often easier to implement than those directed at reducing conflict (Black et al. 2019). Common conflict prevention strategies include emphasizing collaborative goals, constructing structured tasks, facilitating intergroup communications, and avoiding win-lose situations. Focusing on collaborative goals and objectives prevents goal conflict (Black et al. 2019). Emphasis on primary goals helps clients and community members see the big picture and work together. This approach separates people from the problem by maintaining focus on shared interests (Fisher and Ury 1981). The overarching goal is to work together to address the structure of the broader social concern or issue.

Table 6. Five Modes of Resolving Conflict. Source: Adapted from Thomas, Kenneth W. 1977. "Toward Multidimensional Values in Teaching: The Example of Conflict Behaviors." Academy of Management Review 2:487. Attribution: Copyright Rice University, OpenStax, under CC BY-NC-SA 4.0 license

Conflict-Handling Modes and Appropriate Situations

Competing (Assertive-Uncooperative):
1. When quick, decisive action is vital—e.g., emergencies
2. On important issues where unpopular actions need implementing—e.g., cost cutting, enforcing unpopular rules, discipline
3. On issues vital to company welfare when you know you're right
4. Against people who take advantage of noncompetitive behavior

Collaborating (Assertive-Cooperative):
1.
When trying to find an integrative solution when both sets of concerns are too important to be compromised
2. When your objective is to learn
3. When merging insights from people with different perspectives
4. When gaining commitment by incorporating concerns into a consensus
5. When working through feelings that have interfered with a relationship

Compromising:
1. When goals are important but not worth the effort or potential disruption of more assertive modes
2. When opponents with equal power are committed to mutually exclusive goals
3. When attempting to achieve temporary settlements to complex issues
4. When arriving at expedient solutions under time pressure
5. As a backup when collaboration or competition is unsuccessful

Avoiding (Unassertive-Uncooperative):
1. When an issue is trivial, or when more important issues are pressing
2. When you perceive no chance of satisfying your concerns
3. When potential disruption outweighs the benefits of resolution
4. When letting people cool down and regain perspective
5. When gathering information supersedes immediate decision
6. When others can resolve the conflict more effectively
7. When issues seem tangential or symptomatic of other issues

Accommodating (Unassertive-Cooperative):
1. When you find you are wrong—to allow a better position to be heard, to learn, and to show your reasonableness
2. When issues are more important to others than yourself—to satisfy others and maintain cooperation
3. When building social credits for later issues
4. When minimizing loss when you are outmatched and losing
5. When harmony and stability are especially important
6. When allowing subordinates to develop by learning from mistakes

When collaborative partners clearly define, understand, and accept tasks and activities aimed at shared goals, conflict is less likely to occur (Black et al. 2019). Conflict is most likely to occur when there is uncertainty and ambiguity in the roles and tasks of clients and community members.
Dialogue and information sharing among collaborative partners is imperative and helps prevent conflict. Understanding others' thinking is helpful in collaborative problem solving. Through dialogue, people are better able to develop empathy, avoid speculation or misinterpreting intentions, and escape blaming others for situations and problems, which leads to defensive behavior and counterattacks (Fisher and Ury 1981). Sharing information about the state, progress, and setbacks of the work helps eliminate conflict or suspicions about problems or issues when they arise. As clients and community partners become familiar with each other, trust and teamwork develop. Giving people time to interact and get to know each other helps foster and build effective working relationships (Fisher and Ury 1981). It is important for team members to think of themselves as partners in a side-by-side effort to be effective in their work and accomplish shared goals. Avoiding win-lose situations among collaborative partners also weakens the potential for conflict (Black et al. 2019). Rewards and solutions must focus on shared benefits resulting in win-win scenarios.

Conflict can have a negative impact on teams or collaborative work groups and individuals in achieving their goals and solving social issues. Sociological practitioners cannot always avoid or protect people from conflict when working collaboratively. However, there are actions practitioners can take to reduce or solve dysfunctional conflict. When conflict arises, sociological practitioners may employ two general approaches, targeting changes in attitudes and/or behaviors. Changes in attitudes result in fundamental changes in how groups get along, whereas changes in behavior reduce open conflict but not internal perceptions, maintaining separation between groups (Black et al. 2019). There are several ways to help reduce conflict between groups and individuals that address attitudinal and/or behavioral changes.
The nine conflict reduction techniques in Table 7 operate on a continuum, ranging from approaches that concentrate on changing behaviors at the top of the scale to tactics that focus on changing attitudes at the bottom of the scale.

Table 7. Conflict Reduction Techniques. Source: Adapted from Black, J. Stewart, David S. Bright, Donald G. Gardner, Eva Hartmann, Jason Lambert, Laura M. Leduc, Joy Leopold, James S. O'Rourke, Jon L. Pierce, Richard M. Steers, Siri Terjesen, and Joseph Weiss. 2019. Organizational Behavior. Houston, TX: OpenStax College. Attribution: Copyright Rice University, OpenStax, under CC BY-NC-SA 4.0 license

Technique | Description | Target of Change
Physical separation | Separate conflicting groups when collaboration or interaction is not needed for completing tasks and activities | Behavior
Use rules | Introduce specific rules, regulations, and procedures that impose particular processes, approaches, and methods for working together | Behavior
Limit intergroup interactions | Limit interactions to issues involving common goals | Behavior
Use diplomats | Identify individuals who will be responsible for maintaining boundaries between groups or individuals through diplomacy | Behavior
Confrontation and negotiation | Bring conflicting parties together to discuss areas of disagreement and identify win-win solutions for all | Attitude and behavior
Third-party consultation | Bring in outside practitioners or consultants to speak more directly to the issues from a neutral or outsider vantage point to help facilitate a resolution | Attitude and behavior
Rotation of members | Rotate individuals from one group to another to help understand the frame of reference, values, and attitudes of others | Attitude and behavior
Identify interdependent tasks and common goals | Establish goals that require groups and individuals to work together | Attitude and behavior
Use of intergroup training | Long-term, ongoing training aimed at helping groups develop methods for working together | Attitude and behavior
LEADING A TEAM

1. Describe a time you held a leadership position among your family, friends, at school, or with co-workers.
2. In reflecting back on the skills and competencies presented in the module, did you use any of them in the leadership position you described? Explain.
3. Which skills and competencies from the module do you think are the most important in leading others?

Other Abilities and Proficiencies

Working as a sociological practitioner, you must develop knowledge and historical information about the social problem you wish to address, the nature or origin of the condition, and possible methods for solving the issue (Lau and Chan 2019). Understanding a social problem aids in formulating the structure and approach to tackling it. This also ensures change agents are targeting and developing an accurate plan to address the "real" problem or issue.

Solving problems requires critical and creative thinking. Critical thinking is the process of reflection and questioning aimed at confronting assumptions, examining context, and investigating alternatives (Brookfield 1986; Tice 2000). This process emphasizes assessment of one's thoughts and interpretation of those thoughts for validity, authenticity, and accuracy of understanding and reasoning. Creative thinking supports critical thinking as a process of developing new and useful possibilities from one's thoughts (Lau and Chan 2019). Creative thinking aids in discovering new or alternative ideas and options. The inventive qualities of creative thinking aid sociological practitioners in constructing potential solutions to social problems.

In addition, when working toward social change, practitioners and collaborators must employ strategic thinking. This cognitive activity improves decision-making, strengthens the ability to cope with change, and instills a mentality of continuous improvement. Strategic thinking is a critical thinking process people use to analyze, evaluate, and problem solve.
The application of this process challenges conventional thought by emphasizing foresight, or predicting human responses and outcomes. Strategic thinking requires aptitude in self-development (e.g., learning new skills and overcoming bad habits), organizational strategies that are productive and responsive to challenges and innovations, and tactical thinking to deal with confrontation and competition, maximize impact, and protect oneself (Lau and Chan 2019). The purpose of strategic thinking in addressing social problems is to establish a systems perspective focusing on client needs and the barriers preventing success or change.

SOCIOLOGICAL PRACTITIONER QUALITIES

Below is a list of general traits and skills needed in the field of sociology. Review the items and complete a self-assessment for further preparation as a sociological practitioner. These items will give you a sense of where your strengths are and what you have to offer clients, communities, and employers.

_____ I am comfortable speaking with and in front of people.
_____ I am authentic and self-confident when working with others.
_____ I have effective written and oral communication skills.
_____ I am not afraid to say "no" or to disappoint someone.
_____ I am self-motivated and self-disciplined to complete my work on time.
_____ I accept criticism and am willing to learn from my mistakes.
_____ I am aware of my personal and professional weaknesses and strengths.
_____ I have strong organizational skills (i.e., time management, recordkeeping, meeting deadlines, etc.).
_____ I am flexible and tolerant when working with others.
_____ I am able to reflect and synthesize what others share with me.
_____ I am able to work collaboratively or on a team.
_____ I am able to manage conflict without getting defensive.
_____ I am able to use a variety of data collection, analysis, and reporting methods.
_____ I am able to use software (word processing, spreadsheets, presentations, and statistical analysis packages such as SPSS or NVivo) for creating, recording, and presenting data and reports. _____ I can apply a variety of theoretical models and approaches to solving social problems. EXPLORING SOCIOLOGICAL PRACTITIONER METHODS AND SKILLS 1. Review the Public Sociology Toolkit (https://publicsociologytoolkit.com/public-sociology-toolkit/). 2. Explore each of the 18 methods and skills sociological practitioners use to investigate social issues and work to create social change. 3. Think about a social problem that is important to you and describe how a team of practitioners, clients, and community stakeholders might apply each of the 18 methods in the toolkit to improve the social condition you identified.
textbooks/socialsci/Sociology/A_Career_in_Sociology_(Kennedy)/04%3A_Working_with_Diverse_Groups/4.01%3A_Skills_and_Competencies_in_Sociological_Practice.txt
Ethics are a set of concepts and principles that guide social behavior. To avoid cultural and religious bias or judgment, society uses shared ethical ideologies as guides in reasoning through ethical issues. Ethical life emerges from the human capacity and cognitive ability to comprehend the effect of helping or harming others (Paul and Elder 2005). People enforce ethics through communication and social interactions. Through socialization and cultural teachings, society nurtures ethical behaviors and social expectations using positive and negative reinforcement. Ethical decisions require critical evaluation and analysis of thinking, motivation, and consequences. Individuals must become proficient at reflection to assess and make sound ethical decisions. “Human nature has a strong tendency toward egotism, prejudice, self-justification, and self-deception” (Paul and Elder 2005:2). People can never eliminate egocentric tendencies, but they can combat them as they evolve into ethical persons. Achieving ethical reasoning requires doing what is right regardless of selfish desire, along with the deliberate practice and development of fair-mindedness, honesty, integrity, self-knowledge, and concern for others. Applying Ethical Principles in Sociological Practice The code of ethics establishes the social norms of acceptable and unacceptable conduct and behavior (Bruhn and Rebach 2007). All professional companies and organizations have rules and policies on ethical conduct and behaviors encouraged internally within the organization and externally when working with outside constituents or clientele. In the workplace, the code of ethics includes procedures for filing, investigating, and resolving complaints that violate the ethical principles and standards outlined by the organization (Bruhn and Rebach 2007). 
Professional associations composed of members within a specific profession like sociology also have ethical codes of conduct establishing the expectations of professionals working within the field or discipline. The American Sociological Association (ASA) has published guidelines outlining the principles and standards that sociologists must adhere to in professional activities. The six guiding principles are: 1) professional competence, 2) integrity, 3) professional and scientific responsibility, 4) respect for people’s rights, dignity, and diversity, 5) social responsibility, and 6) human rights (ASA 2019). The Association also dictates the ethical standards for professional and scientific conduct that center on and clarify the rules and policies surrounding the established guiding principles. Any violation of the code may lead to the imposition of sanctions, including termination of membership (ASA 2019). Practitioners must be aware of the ethical standards established by their respective professional associations, including other professional groups (i.e., therapists, counselors, etc.), to remain certified and affiliated with the organizations and networks. The Association for Applied and Clinical Sociology also has a code of ethics. The central value in sociological practice is “do no harm.” This means practitioners are responsible for protecting clients, including community collaborators, by obtaining informed consent to participate, protecting privacy and anonymity, preventing physical and emotional harm, ensuring truth and honesty, and providing information and feedback as needed (Bruhn and Rebach 2007). It is imperative for practitioners to be clear about the ethical soundness of their decisions when working with clients, the community, and collaborators to uphold the values and standards of the profession. Respect and communication are essential in building moral relationships and maintaining ethical standards in sociological practice. 
Consider the parameters of the ethical value “do no harm.” 1. Explain the methodology and approach for obtaining informed consent from people in sociological practice. 2. Describe the ways to protect the privacy and anonymity of clients, community members, and collaborators as a practitioner. 3. Discuss how to prevent physical and emotional harm when addressing the social conditions people face or confront. Explain the boundaries and code of conduct for maintaining professional relationships with clients, community members, and collaborators. 4. Illustrate how to maintain honesty and truth in your professional work and relationships. 5. Describe the appropriate attitude and approach when providing information and feedback to clients, community members, and collaborators. 4.03: References, Key Terms and Concepts American Sociological Association. 2019. “Code of Ethics.” Washington, DC: American Sociological Association. Retrieved November 6, 2019 (https://www.asanet.org/code-ethics). Bellman, Geoffrey M. 1990. The Consultant’s Calling: Bringing Who You Are to What You Do. San Francisco, CA: Jossey-Bass. Black, J. Stewart, David S. Bright, Donald G. Gardner, Eva Hartmann, Jason Lambert, Laura M. Leduc, Joy Leopold, James S. O’Rourke, Jon L. Pierce, Richard M. Steers, Siri Terjesen, and Joseph Weiss. 2019. Organizational Behavior. Houston, TX: OpenStax College. Brookfield, Stephen D. 1991. Developing Critical Thinkers: Challenging Adults to Explore Alternative Ways of Thinking and Acting. San Francisco, CA: Jossey-Bass. Bruhn, John G. and Howard M. Rebach. 2007. Sociological Practice: Intervention and Social Change. 2nd ed. New York, NY: Springer. Coser, Lewis A. 1956. The Functions of Social Conflict. Glencoe, IL: Free Press. Fisher, Roger and William Ury. 1981. Getting to Yes: Negotiating Agreement Without Giving In. New York: Penguin Group. Freedom Learning Group. 2019. “Methods of Communication.” Portland, OR: Lumen Learning. 
Retrieved August 30, 2019 (https://courses.lumenlearning.com/wm...communication/). Kirkpatrick, Shelley A. and Edwin A. Locke. 1991. “Leadership: Do Traits Matter?” The Executive 5(2):48-60. Kirkpatrick, Shelley A. and Edwin A. Locke. 2000. “The Best Managers: What It Takes.” Business Week 158. Lau, Joe and Jonathan Chan. 2019. Critical Thinking Web. Hong Kong: The University of Hong Kong. Locke, Edwin A., S. Kirkpatrick, J. K. Wheeler, J. Schneider, K. Niles, H. Goldstein, K. Welsh, and D. O. Chad. 1991. The Essence of Leadership: The Four Keys to Leading Successfully. New York: Lexington Books. Manz, Charles C. and Henry P. Sims, Jr. 1987. “Leading Workers to Lead Themselves: The External Leadership of Self-Managed Work Teams.” Administrative Science Quarterly 32:106–129. Paul, Richard and Linda Elder. 2005. The Miniature Guide to Understanding the Foundations of Ethical Reasoning. Tomales, CA: The Foundation for Critical Thinking. Pillai, Rajnandini, Chester A. Schriesheim, and Eric S. Williams. 1999. “Fairness Perceptions and Trust as Mediators for Transformational and Transactional Leadership: A Two-Sample Study.” Journal of Management 25:897-933. Thomas, Kenneth W. 1977. “Toward Multidimensional Values in Teaching: The Example of Conflict Behaviors.” Academy of Management Review 2:487. Tice, Elizabeth T. 2000. “What is Critical Thinking?” Journal of Excellence in Higher Education. Phoenix, AZ: University of Phoenix. Tuckman, Bruce W. 1965. “Developmental Sequence in Small Groups.” Psychological Bulletin 63(6):384-399. Tuckman, Bruce W. and Mary Ann C. Jensen. 1977. “Stages of Small-Group Development Revisited.” Group & Organization Studies 2(4):419–427. Viola, Judah J. and Susan Dvorak McMahon. 2010. Consulting and Evaluation with Nonprofit and Community-based Organizations. Sudbury, MA: Jones and Bartlett Publishers. 
Key Terms and Concepts Active listening Adjourning stage Assertiveness Code of ethics Conflict prevention strategies Conflict reduction techniques Conflict resolution Cooperativeness Creative thinking Critical thinking “Do no harm” Ethics Facilitation skills Five modes of resolving conflict Forming stage Interpersonal conflict Leadership Norming stage Performing stage Professional associations Storming stage Strategic thinking Team or group development Transformational leadership
Learning Objectives At the end of the module, you will be able to: • articulate the academic and professional pathway required for a career in sociology. • design a plan or roadmap for developing knowledge, gaining work experience, and establishing a network for job readiness and growth. Sociology is a broad academic field that focuses on uncovering the sources and solutions of social problems (Bruhn and Rebach 2007). In choosing a sociology major, you must have an industry or social condition emphasis in mind when selecting course work. Many students do not adequately plan or choose appropriate courses to help them prepare for the area or type of work they want to do as a sociologist after graduation. Sociology is a flexible degree similar to Liberal Studies or Business in that you can tailor your course work to match your job market interests. • 5.1: Academic and Professional Preparation Many sociology degree programs prepare students for work as data or policy analysts, researchers, and support staff for private, public, and non-profit agencies (Soriano 2019). The problem with the degree’s generalized focus is that students often choose the quickest pathway to degree completion without considering the skills and competencies they will need to be a contender or compete in the job market. • 5.2: Job Hunting Search for jobs in a variety of fields related to your interests and skill level (Steele and Price 2008). This means job hunting will require time and attention to detail to find employment opportunities and job titles that meet your knowledge, abilities, interests, and social conditions you wish to address in your career. • 5.3: Networking and Building Relationships Networking and building relationships is part of everyday work for a sociologist (Viola and McMahon 2010). With the focus on helping people, you are responsible for nurturing professional contacts aimed at solving social problems and treating people you serve with integrity and respect. 
Your role as a practitioner will require you to form linkages, make connections, expand resources, and bring people together to employ interventions and change (Viola and McMahon 2010). • 5.4: References, Key Terms and Concepts Thumbnail: This image "Photography of People Graduating" by Emily Ranquist is licensed under CC BY 4.0 05: Preparing for a Career in Sociology Many sociology degree programs prepare students for work as data or policy analysts, researchers, and support staff for private, public, and non-profit agencies (Soriano 2019). The problem with the degree’s generalized focus is that students often choose the quickest pathway to degree completion without considering the skills and competencies they will need to be a contender or compete in the job market. From the outset, a sociology major must determine what type of work or career they want. A career as a basic sociologist will require a doctoral-level degree, which will consist of four years completing undergraduate or bachelor’s level classes, two years fulfilling graduate or master’s level courses plus a thesis, and two to four additional years to complete doctoral course work and a dissertation. Basic sociology is for someone who wants to focus on publishing research and/or working as a university professor. To be a public sociologist, you may or may not need to earn a doctoral degree depending on the area and level of social policy in which you wish to work. If you want to write policy or make direct changes to law or the judicial system, you will need a doctoral or a law degree. If you want a job in advocacy focused on policy changes, your job title or work will depend on the level of degree you obtain. An undergraduate or bachelor’s degree will more than likely align with entry-level or program/administrative support type jobs (e.g., intake specialist, case manager, behavior technician, victim advocate, analyst, etc.). 
Academic preparation for work in applied or clinical sociology is similar to public sociology. An applied sociologist with a doctoral or master’s degree may find work in academia, program evaluation, or research as an administrator/director, faculty, or consultant depending on interest and area of expertise. As with public sociology, an undergraduate or bachelor’s degree will prepare you for entry-level or program/administrative support jobs. In addition to the type of sociological work you wish to do, you must consider what social condition(s) you want to work on as a change agent. Preparing yourself academically for work in public health is very different than preparing yourself for a job in criminal justice, education counseling, social work, etc. The elective courses you choose in college are paramount for preparing you for the industry and type of work you want to do as a sociologist. Make sure you examine the sociology major requirements and course options you have when deciding which courses to take. Remember, you want to be qualified for the jobs you want. For example, do not take medical sociology as an elective over a course in deviance and control if you are preparing for a career in probation or criminal justice. Medical sociology will help you develop comprehensive knowledge as a sociologist and understand the needs of some clients when working in criminal justice, but the information and knowledge you gain will not be as applicable to a career in probation as a course in deviance and control. Medical sociology is a good match along with deviance and control if you plan on working in a psychiatric facility or psychiatric ward of a prison. Think about your career goal and choose the college and courses that will prepare you for your dream job in sociology. Lastly, consider that all jobs in sociology will require knowledge and skills in report writing, public speaking, research methods, and data analysis. 
In order to share information about a social condition or issue, sociologists must disseminate factual and empirical data in written form and through social interactions, including face-to-face meetings and presentations. Part of communicating effectively requires the ability to transfer technical information about a social problem to diverse groups who have different levels of education and experience (Viola and McMahon 2010). The same is true for sharing solutions and approaches for social change. In preparing for a career as a sociologist, it is imperative to develop writing, public speaking, and research abilities. Mastering these skills will establish your credibility and make you competitive in the job market. There are a variety of ways people use and practice sociology. Basic, public, and applied sociology are the most common forms of sociological practice. Each form integrates research on human social life to understand and improve society. Let us explore the type of work you might be interested in pursuing as a sociologist. 1. What is your dream job or career interest? 2. With which form of sociological practice does your interest align (basic, public, or applied sociology)? 3. What type of degree or how much education will you need to qualify for your dream job? 4. Research bachelor’s level college courses and their descriptions in sociology and identify the best courses and electives you will need to take to prepare you for your dream job. Explain how these courses will help you develop the skills and competencies you need for a successful career in sociology.
Since sociology is a broad field with diverse areas of interests, clientele, and conditions, you will not find a blanket job announcement seeking a sociological practitioner or stating, “Sociologist Wanted.” Instead, you will need to read and search for work by examining the job description and qualifications in vacancy announcements. Search for jobs in a variety of fields related to your interests and skill level (Steele and Price 2008). This means job hunting will require time and attention to detail to find employment opportunities and job titles that match your knowledge, abilities, interests, and the social conditions you wish to address in your career. As a sociologist, you will find a variety of employment opportunities in private, public, and non-profit organizations. The type of job you qualify for will depend on the level of college degree you obtain as well as the knowledge, skills, and competencies you possess. Try to cast a wide net when looking for work. The right job for you may not have the title you expect. For example, let us imagine you are looking for work to improve educational programs and services for foster youth. Many private, public, and non-profit organizations work with and serve foster youth, so you will need to research which ones are located in the community you want to work in, and then you will need to search for vacancies in the organizations you find. You may be surprised to find jobs that match your interest with titles such as program coordinator, program monitor, or analyst. Many times organizations use generic titles because the role and responsibilities of a position are wide-ranging and comprehensive, like one working to improve educational programs and services for foster youth. Do not be discouraged because the employment opportunities you find do not come with a fancy job title. As a sociologist, your work and the contributions you make to improving people’s lives are what matter most. 
Marketing Yourself In preparation for work, one of your first tasks will be to develop a resumé. A resumé is a written document of your education, credentials, work experience, and accomplishments. Identify your credentials, including degrees and certificates earned. Also, list your work experience, including unpaid or volunteer work. The key to creating an effective resumé is the ability to articulate your accomplishments as skills and abilities. It is more important to state your role and describe your talents and proficiencies while volunteering at a local food pantry than to state that you served food. In other words, explain what skills and competencies you used to serve food. For example, “While working at the community food pantry, I was responsible for 1) loading and unloading food to maintain inventory, 2) sorting and packaging food items to ensure safe handling and appropriate nutritional values per serving, 3) checking expiration dates for safe consumption, 4) discarding expired food items in accordance with health department regulations and protocols, 5) helping customers complete liability forms and answering customer questions, and 6) maintaining a clean and sanitary service area.” When seeking employment as a sociological practitioner, you will need to showcase your knowledge and skills in a professional resumé. Only in academia will you find a job announcement for a “Sociologist.” Employment opportunities suited to your sociological training are typically classified under titles such as eligibility worker, case manager, job developer, grant writer, program monitor, project coordinator, etc. In your professional resumé, you will need to demonstrate how your knowledge and skills fit the position advertised. For this application, develop a professional resumé using a template or format highlighting your qualifications. Resumé templates and formatting ideas are available in Microsoft Word or online by conducting an Internet search. 
In your resumé, include your education, work experience (including unpaid and volunteer work), skills, and references. In your skills section, do not forget to include the abilities, talents, and competencies you developed in your college courses (i.e., unobtrusive observations, survey development, survey administration, interviewing techniques, table and graph development using Google and Microsoft software, technical writing, presentations, etc.). When looking for work in sociological practice, emphasize your aptitude in research methods, statistics, and knowledge of diverse groups. If you have work or volunteer experience related to these competencies, delineate them in the section where you describe the job. If your only experience with research methods, statistics, and diverse groups is in the classroom, you might consider creating a section on your resumé for specialized skills to inform potential employers of your talents even for those developed as part of your academic training or other technical preparation. Anyone preparing for a professional career should develop a portfolio for potential employers displaying their work. A portfolio highlights your accomplishments, skills, and potential. The portfolio is a visual example of your academic and professional work that may include reports, papers, projects, artwork, presentations, or other samples as appropriate for the job you seek. A professional portfolio with specific examples demonstrating your sociological skills and abilities will give you a competitive edge and show employers the talents you will bring to the organization and its clientele. 1. Gather samples of your best academic work (e.g., papers, projects, presentations, etc.) to include in a portfolio. 2. Add any certificates or awards and degrees you have received. Include photographs highlighting your work or contributions you have made to the community. 3. 
Write a one-page biographical sketch with your career goals and interests to include as the opening page of your portfolio.
In sociological practice, you will need to develop and maintain people skills. This means you must learn to listen, communicate, and relate to others in a professional environment. People skills are essential for networking and building relationships, which is the foundation for improving the lives and social conditions of people. Networking and building relationships is part of everyday work for a sociologist (Viola and McMahon 2010). With the focus on helping people, you are responsible for nurturing professional contacts aimed at solving social problems and treating people you serve with integrity and respect. Your role as a practitioner will require you to form linkages, make connections, expand resources, and bring people together to employ interventions and change (Viola and McMahon 2010). The purpose behind networking and building relationships is to exchange information, obtain advice, and make referrals. There are several ways to build a professional network and relationships. When you are starting a career as a sociologist, consider asking family members, friends, former and current professors or other university connections, and employers to mentor you or provide you with meaningful contacts in the community, such as organizational leaders. Contacts and networks may also develop by participating in unpaid community work or attending professional conferences. Your network may be the key to connecting you with the job and career pathway you seek (Steele and Price 2008). The number and types of connections you make with people will influence the opportunities and access to the work you want to pursue. Build your contacts and develop your reputation with care to establish credibility so others will want to help you and open professional doors and opportunities for you. Maintain contact and regularly follow up with your network regardless of need, so relationships stay intact for those instances requiring assistance and support. 1. 
Make a list of people in your personal and professional network (e.g., family, friends, current and former professors, employers, etc.). 2. From your list, pinpoint the people who could help you establish professional connections leading to your career interest or dream job. Discuss mentorship or apprenticeship opportunities you might be able to develop. Describe the type or kind of connections they could help you establish in the community. 3. Identify the types of contacts or resources you are missing or need to develop to build your professional network and relationships. Whom might you contact in your current network that could lead you to new or missing connections? 5.04: References, Key Terms and Concepts Bruhn, John G. and Howard M. Rebach. 2007. Sociological Practice: Intervention and Social Change. 2nd ed. New York, NY: Springer. Soriano, Deborah Ziff. 2019. “What Can You Do With A Sociology Degree?” Washington, DC: U.S. News & World Report. Retrieved November 6, 2019 (www.usnews.com/education/bes...ciology-degree). Steele, Stephen F. and Jammie Price. 2008. Applied Sociology: Terms, Topics, Tools, and Tasks. 2nd ed. Belmont, CA: Thomson Wadsworth. Viola, Judah J. and Susan Dvorak McMahon. 2010. Consulting and Evaluation with Nonprofit and Community-based Organizations. Sudbury, MA: Jones and Bartlett Publishers. Key Terms and Concepts • Applied sociologist • Basic sociologist • Job hunting • Networking • People skills • Portfolio • Public sociologist • Resumé • Sociology • Sociology degree programs
Learning Objectives At the end of the module, students will be able to: 1. explain the relationship between culture and the social world 2. understand the role and impact of culture on society 3. describe concepts central to cultural sociology 4. summarize and apply the theoretical perspectives on the study of culture Culture is an expression of our lives. It molds our identity and connection to the social world. Whether it is our values, beliefs, norms, language, or everyday artifacts, each element of culture reflects who we are and influences our position in society. If you think about how we live, communicate, think, and act, these parts of our existence develop from the values, beliefs, and norms we learn from others, the language and symbols we understand, and the artifacts or materials we use. Culture is embedded in everyday life and is the lens through which others view and understand us. • 1.1: Link Between Culture and Society At the end of the module, students will be able to explain the relationship between culture and the social world. This chapter helps in understanding the role and impact of culture on society and describes the concepts central to cultural sociology. Overall, it summarizes and applies the theoretical perspectives on the study of culture. • 1.2: Defining Culture Culture is universal. Every society has culture. Culture touches every aspect of who and what we are and becomes a lens of how we see and evaluate the world around us. Culture molds human nature and people learn to express nature in cultural ways. The sociological perspective acknowledges that all people are cultured. • 1.3: Cultural Sociology Cultural sociology examines the social meanings and expressions associated with culture. Cultural sociologists study representations of culture including elitist definitions and understanding such as art, literature, and classical music, but also investigate the broad range of culture in everyday social life. 
• 1.4: Theoretical Perspectives on Culture The social structure plays an integral role in the social location (i.e., place or position) people occupy in society. Your social location is a result of cultural values and norms from the time period and place in which you live. Culture affects personal and social development, including the way people think or behave. Cultural characteristics pertaining to age, gender, race, education, income, and other social factors influence the location people occupy at any given time. • 1.S: Culture and Meaning (Summary) Thumbnail: ǃKung woman and child sharing a meal. Image used with permission (CC-SA-BY 4.0; Staehler). 01: Culture and Meaning Culture is both expressive and social. Neither culture nor society exists in the real world; rather, it is the thoughts and behaviors of people that construct a society, its culture, and meanings (Griswold 2013). People build the world we live in, including the cultural attributes we choose to obtain, exhibit, and follow. Societies communicate and teach culture as part of the human experience. Historically, culture referred to characteristics and qualities of the fine arts, performing arts, and literature, connecting culture to social status. This perspective emphasized a subculture shared by the social elite or upper class and has been historically characterized as civilized culture. This perspective within the humanities studied the “ideal type” or “high culture” of affluent social groups, depicting who was “cultured,” or rather who was wealthy and educated in society, lending itself to a ranking of cultures in its study. In the 19th century, anthropologist Edward B. Tylor (1871) introduced culture as a complex social structure encompassing “... knowledge, belief, art, morals, law, customs, and other capabilities and habits acquired by man as a member of society.” This definition focused on culture as a social attribute of humanity. 
Social scientists adopted this perspective, expanding the study of culture beyond the ethnocentric elitism of “high culture.” With emphasis on human social life as a reflection of culture, social scientists sought to understand not only how culture reflects society but also how society reflects culture. These new insights inspired social scientists to examine the practices of people, lending itself to a sociological perspective on culture.
textbooks/socialsci/Sociology/Cultural_Sociology_and_Social_Problems/Beyond_Race_-_Cultural_Influences_on_Human_Social_Life_(Kennedy)/01%3A_Culture_and_Meaning/1.01%3A_Link_Between_Culture_and_Society.txt
Culture is universal. Every society has culture. Culture touches every “aspect of who and what we are” and becomes a lens of how we see and evaluate the world around us (Henslin 2011:36). Culture molds human nature and people learn to express nature in cultural ways. The sociological perspective acknowledges that all people are cultured. Each generation transmits culture to the next, providing us a roadmap and instruction on how to live our lives. Cultural transmission occurs through the learning and expression of traditions and customs. Learning your own group’s culture is enculturation. Adults are agents of enculturation responsible for passing on culture to each generation. Through learning, people develop individual cultural characteristics that are part of a social pattern and integrated set of traits expressing a group’s core values (Kottak and Kozaitis 2003). Thus, cultures are integrated and patterned systems serving a variety of social functions within groups. Enculturation gives members of a group a process to think symbolically, use language and tools, share common experiences and knowledge, and learn by observation, experience, as well as unconsciously from each other (Kottak and Kozaitis 2003). The commonalities we share through culture establish familiarity and comfort among members of our own group. Non-Material vs. Material Culture Culture is either non-material or material. Non-material culture includes psychological and spiritual elements influencing the way individuals think and act. Material culture refers to physical artifacts people use and consume. Immaterial aspects of culture reflect social values, beliefs, norms, expressive symbols, and practices. Though these cultural elements are intangible, they often take on a physical form in our minds. Non-material culture becomes real in our perceptions, and we begin to view these elements as objects, as in the belief in God or another deity. 
Though we cannot physically see, hear, or touch a God, belief makes one real and imaginable to us. Values or ideals define what is desirable in life and guide our preferences and choices. Changes in core values may seem threatening to some individuals or societies as “a threat to a way of life” (Henslin 2011:53). A strong attachment to core values can also blind individuals to reality or objectivity, reinforcing fallacies and stereotypes. Throughout history, there have always been differences between what people value (their ideal or public culture) and how they actually live their lives (their real or personal culture). Beliefs sometimes mirror values. One’s belief system may align with or determine one’s values, influencing thoughts and actions. Beliefs are not always spiritual or supernatural. For example, the belief in love or feelings of affection are internal emotions or physical reactions that exhibit physiological changes in human chemistry. Some beliefs are representations of metaphysical or abstract thinking which transcend the laws of nature, such as faith or superstitions.

CULTURAL INVENTORY

1. What is your personal cultural inventory? Describe your values and beliefs, the social norms to which you conform, the expressive symbols (including language) you understand and use regularly, your daily practices, and the artifacts you use frequently and those you treasure.
2. How did you learn culture? Explain the socializing agents responsible for teaching you the traditions, customs, and rituals you live by and follow.
3. What impact does culture have on your identity? Discuss how your culture influences your self-image, views, and role in society.
4. How does culture influence your thinking and behavior towards others? Explain how your culture impacts the image or understanding you have about others, including assumptions, stereotypes, and prejudices.

Norms or rules develop out of a group’s values and beliefs.
When people defy the rules, they receive a social reaction in the form of a sanction. Sanctions are a form of social control (Griffiths, Keirns, Strayer, Cody-Rydzewski, Scaramuzzo, Sadler, Vyain, Byer, and Jones 2015). When people follow the rules, they receive a positive sanction or reward, and when they break the rules, they receive a negative sanction or punishment that may include social isolation. Symbols help people understand the world (Griffiths et al. 2015). Symbols include gestures, signs, signals, objects, and words. Language is the symbolic system people use to communicate both verbally and in writing (Griffiths et al. 2015). Language constantly evolves and provides the basis for sharing cultural experiences and ideas. The Sapir-Whorf Hypothesis suggests people experience the world through symbolic language that derives from culture itself (Griffiths et al. 2015). If you see, hear, or think of a word, it creates a mental image in your head, helping you understand and interpret meaning. If you are not familiar with a word or its language, you are unable to comprehend its meaning, creating a cultural gap or boundary between you and the cultural world around you. Language makes symbolic thought possible. Practices, or the behaviors we carry out, develop from or in response to our thoughts. We fulfill rituals, traditions, or customs based on our values, beliefs, norms, and expressive symbols. Culture dictates and influences how people live their lives. Cultural practices become habitual through frequent repetition (Henslin 2011). Habitualization leads to institutionalization by consensus of a social group, resulting in cultural patterns and systems becoming logical and viewed as the norm. Material culture is inherently unnatural, such as buildings, machines, electronic devices, clothing, hairstyles, etc. (Henslin 2011). Dialogue about culture often ignores its close tie to material realities in society.
The cultural explanations we receive from family, friends, school, work, and media justify the cultural realities and utilities of the artifacts we use and consume. Human behavior is purposeful, and the material culture in our lives derives from the interests of the socializing agents in our environment.
1.2: Defining Culture
There is division among sociologists who study culture. Those who study the sociology of culture limit the categorization of cultural topics and objects, restricting the view of culture to a social product or consequence. The theoretical works of Emile Durkheim, Karl Marx, and Max Weber, along with the field of anthropology, shaped the sociology of culture. Durkheim found culture and society are interrelated. He explained that social structures or institutions serve a function in society. As a collective group, society’s culture, including its social, political, and economic values, is essentially part of and reflected in all structures or institutions (Durkheim 1965). Marx believed social power influences culture. He suggested cultural products depend on economics, and people who have power are able to produce and distribute culture (Marx 1977). Weber, in alignment with the traditional humanities viewpoint, emphasized the ability of culture to influence human behavior. His perspective argued some cultures and cultural works are ideal types that could be lost if they were not preserved or archived (Weber 1946). Until the late twentieth century, anthropologists emphasized the importance of art and culture to educate, instill morality, and critique society to inspire change (Best 2007). Initial thoughts on culture focused on how culture makes a person. These works accentuated the idea that certain cultural elements (i.e., elite or high culture) make a person cultured. In contrast, the study of cultural sociology suggests social phenomena are inherently cultural (Alexander 2003). Cultural sociology investigates culture as an explanation of social phenomena. During the cultural turn movement of the 1970s, cultural sociology emerged as a field of study among anthropologists and social scientists evaluating the role of culture in society.
Academics expanded their research to the social processes through which people communicate meaning, understand the world, construct identity, and express values and beliefs (Best 2007). This new approach incorporates analyzing culture using data from interviews, discussions, and observations of people to understand the social and historical structures and ideological forces that produce and confine culture. Cultural sociology examines the social meanings and expressions associated with culture. Cultural sociologists study representations of culture including elitist definitions and understandings such as art, literature, and classical music, but also investigate the broad range of culture in everyday social life (Back, Bennett, Edles, Gibson, Inglis, Jacobs, and Woodward 2012). Noting the significance of culture in human social life, sociologists empirically study culture, the impact of culture on social order, the link between culture and society, and the persistence and durability of culture over time (Griswold 2013). Cultural sociology incorporates an interdisciplinary approach, drawing on different disciplines because of the broad scope and social influences culture has on people. Culture is inseparable from the acts and influence of cultural practices embedded within social categories (i.e., gender, ethnicity, and social class) and social institutions (i.e., family, school, and work) that construct identities and lifestyle practices of individuals (Giddens 1991; Chaney 1996). In the effort to understand the relationship between culture and society, sociologists study cultural practices, institutions, and systems including the forms of power exhibited among social groups related to age, body and mind, ethnicity, gender, geography, race, religion and belief systems, sex, sexuality, and social class.

CULTURAL IDENTITY IN ART

1. What forms of identity and symbolism did you see within the music video This is America by Childish Gambino (https://youtu.be/VYOjWnS4cMY)?
2.
Now, watch the breakdown video that explains the hidden symbolism and forms of cultural identity in the music video This is America by Childish Gambino (https://youtu.be/9_LIP7qguYw).
3. Compare your list and note what things you were able to identify and what things you missed.

"Cultural Identity in Art" by Kristen Kennedy is licensed under CC BY 4.0

Ethnographers and Native Anthropologists

In the study of cultural sociology, many practitioners examine both quantitative and qualitative data to develop an understanding of cultural experiences. Quantitative or numeric data provides a framework for understanding observable patterns or trends, while qualitative or categorical data presents the reasoning behind the thoughts and actions associated with those patterns or trends. The collection of qualitative data incorporates scientific methodological approaches including participant observation (observing people as a member of the group), interviews (face-to-face meetings), focus groups (group discussions), or images (pictures or video). Each method focuses on collecting specific types of information to develop a deep understanding of a particular culture and the experiences associated with being a member of that culture. Ethnographers study people and cultures using qualitative methods. Ethnography or ethnographic research is the firsthand, field-based study of a particular culture, typically involving at least one year living with people and learning their customs and practices (Kottak and Kozaitis 2012). In the field, ethnographers are participant observers, taking part in the group or society under study. Participant observers face challenges in remaining objective and unbiased and in ensuring their participation does not lead or influence members of the group in a specific direction (Kennedy, Norwood, and Jendian 2017). This research approach expects ethnographers to eliminate, as much as humanly possible, the risk of contaminating data with interference or biased interpretations.
Some researchers choose to study their own culture. These practitioners refer to themselves as native anthropologists. Many native anthropologists have experience studying other cultures prior to researching their own (Kottak and Kozaitis 2012). The practice of learning how to study other cultures gives practitioners the skills and knowledge they need to study their own culture more objectively. In addition, by studying other cultures and then their own, native anthropologists are able to compare and analyze similarities and differences in cultural perceptions and practices.
1.3: Cultural Sociology
The social structure plays an integral role in the social location (i.e., place or position) people occupy in society. Your social location is a result of the cultural values and norms of the time period and place in which you live. Culture affects personal and social development, including the way people think and behave. Cultural characteristics pertaining to age, gender, race, education, income, and other social factors influence the location people occupy at any given time. Furthermore, social location influences how people perceive and understand the world in which we live. People have a difficult time being objective in all contexts because of their social location within cultural controls and standards derived from values and norms. Objective conditions exist without bias because they are measurable and quantifiable (Carl 2013). Subjective concerns rely on judgments rather than external facts. Personal feelings and opinions from a person’s social location drive subjective concerns. The sociological imagination is a tool to help people step outside their subjective or personal biography and look at objective facts and the historical background of a situation, issue, society, or person (Carl 2013).

PERCEPTIONS OF REALITY

The time period in which we live (history) and our personal life experiences (biography) influence our perspectives and understanding of others and the world. Our history and biography guide our perceptions of reality, reinforcing our personal bias and subjectivity. Relying on subjective viewpoints and perspectives leads to the diffusion of misinformation and fake news that can be detrimental to our physical and socio-cultural environment and negatively impact our interactions with others. We must seek out facts and develop knowledge to enhance our objective eye. By using valid, reliable, proven facts, data, and information, we establish credibility and make better decisions for the world and ourselves.

1.
Consider a socio-cultural issue you are passionate about and want to change or improve.
2. What is your position on the issue? What ideological or value-laden reasons or beliefs support your position? What facts or empirical data support your position?
3. What portion of your viewpoint or perspective on the issue relies on personal values, opinions, or beliefs in comparison to facts?
4. Why is it important to identify and use empirical data or facts in our lives rather than relying on ideological reasoning and false or fake information?

According to C. Wright Mills (1959), the sociological imagination requires individuals to “think themselves away” in examining personal and social influences on people’s life choices and outcomes. Large-scale or macrosociological influences help create understanding about the effect of the social structure and history on people’s lives, whereas small-scale or microsociological influences focus on interpreting personal viewpoints from an individual’s biography. Using only a microsociological perspective leads to an unclear understanding of the world built on biased perceptions and assumptions about people, social groups, and society (Carl 2013). Sociologists use theories to study people. “The theoretical paradigms provide different lenses into the social constructions of life and the relationships of people” (Kennedy, Norwood, and Jendian 2017:22). The theoretical paradigms in sociology help us examine and understand cultural reflections, including the social structure and social value culture creates and sustains to fulfill human needs as mediated by society itself. Each paradigm provides an objective framework of analysis and evaluation for understanding the social structure, including the construction of cultural values and norms and their influence on thinking and behavior.

The Theoretical Paradigms

Macrosociology studies large-scale social arrangements or constructs in the social world.
The macro perspective examines how groups, organizations, networks, processes, and systems influence the thoughts and actions of individuals and groups (Kennedy et al. 2017). Functionalism, Conflict Theory, Feminism, and Environmental Theory are macrosociological perspectives. Microsociology studies the social interactions of individuals and groups. The micro perspective observes how thinking and behavior influence the social world, including groups, organizations, networks, processes, and systems (Kennedy et al. 2017). Symbolic Interactionism and Exchange Theory are microsociological perspectives. Functionalism is a macrosociological perspective examining the purpose or contributions of interrelated parts within the social structure. Functionalists examine how parts of society contribute to the whole. Everything in society has a purpose or function. Even a negative contribution helps society discern its function. For example, driving under the influence of alcohol or drugs inspired society to define the behavior as undesirable and to develop laws and consequences for people committing such an act. A manifest function in society results in expected outcomes (i.e., using a pencil to develop written communication), whereas a latent function has an unexpected result (i.e., using a pencil to stab someone). When a function creates unexpected results that cause hardships, problems, or negative consequences, the result is a latent dysfunction. Conflict Theory is a macrosociological perspective exploring the fight among social groups over resources in society. Groups compete for status, power, control, money, territory, and other resources for economic or other social gain. Conflict Theory explores the struggle between those in power and those without it. Culture wars are common in society, whether controversy over a deity and way of life or ownership and rights over Holy Land.
Symbolic Interactionism is a microsociological perspective observing the influence of interactions on thinking and behavior. Interactionists consider how people interpret meaning and symbols to understand and navigate the social world. Individuals create social reality through verbal and non-verbal interactions. These interactions form thoughts and behaviors in response to others, influencing motivation and decision-making. Hearing or reading a word in a language one understands develops a mental image and comprehension of the information shared or communicated (i.e., the English word “bread” is most commonly visualized as a slice or loaf and considered a food item). There are three modern approaches to sociological theory (Carl 2013). Feminism, a macrosociological perspective, studies the experiences of women and minorities in the social world, including the outcomes of inequality and oppression for these groups. One major focus of the feminist theoretical approach is to understand how age, ethnicity, race, sexuality, and social class interact with gender to determine outcomes for people (Carl 2013). Exchange Theory examines the decision-making of individuals in society. This microsociological perspective focuses on understanding how people weigh costs against benefits, accentuating their self-interest, to make decisions. Environmental Theory explores how people adjust to ecological (environmental and social) changes over time (Carl 2013). The focal point of this macrosociological perspective is to figure out how people adapt or evolve over time while sharing the same ecological space.

Applying Theories

Functionalists view how people work together to create society as a whole. From this perspective, societies need culture to exist (Griffiths, Keirns, Strayer, Cody-Rydzewski, Scaramuzzo, Sadler, Vyain, Byer, and Jones 2015).
For example, cultural norms or rules function to support the social structure of society, and cultural values guide people in their thoughts and actions. Consider how education is an important concept in the United States because it is valued. The culture of education, including the norms surrounding registration, attendance, grades, and graduation, and its material culture (i.e., classrooms, textbooks, libraries), all support the emphasis placed on the value of education in the United States. Just as members of a society work together to fulfill the needs of society, culture exists to meet the basic needs of its members. Conflict theorists understand the social structure as inherently unequal, resulting from differences in power based on age, class, education, gender, income, race, sexuality, and other social factors. For a conflict theorist, culture reinforces the “privilege” of certain groups and their status in social categories (Griffiths et al. 2015). Inequalities exist in every cultural system. Therefore, cultural norms benefit people with status and power at the expense of others. For example, although cultural diversity is valued in the United States, some people and states have prohibited interracial marriages, same-sex marriages, and polygamy (Griffiths et al. 2015). Symbolic interactionists see culture as created and maintained by the interactions and interpretations of each other’s actions. These theorists conceptualize human interactions as a continuous process of deriving meaning from the physical and social environment. “Every object and action has a symbolic meaning, and language serves as a means for people to represent and communicate their interpretations of these meanings to others” (Griffiths et al. 2015:72). Interactionists evaluate how culture depends on the interpretation of meaning and how individuals interact when exchanging comprehension and meaning.
For instance, derogatory terms such as the “N” word might be acceptable among people of the same cultural group but viewed as offensive and antagonistic when used by someone outside the group. Feminists explore the cultural experiences of women and minorities. For example, women in Lebanon do not have the right to dissolve a marriage without their husband’s consent, even in cases of spousal abuse (Human Rights Watch 2015). Feminism explicitly examines structures of oppression within cultural systems and the inequity some groups confront in relation to their age, gender, race, social class, sexuality, or other social category. Exchange theorists observe how culture influences decision-making. Cultural values and beliefs often influence people’s choices about premarital sex and cohabitation before marriage. If you evaluate your decisions on a daily basis, you might see elements of culture behind the motivation driving your choices. Environmental theorists assess how culture, as part of the social and physical environment, adapts and changes over time. If you contemplate any rule of law, you can see how culture has altered because of shifts in social ideas or ecological fluctuations. Consider the anti-tobacco laws in the United States making it illegal to smoke in public areas as an example of social shifts towards health and wellness, or water meters to control and regulate residential water usage and waste as a response to drought and prolonged water shortages in the United States.

THEORETICAL APPLICATION

Popular culture reflects prominent values, beliefs, norms, symbolic expressions, and practices while reinforcing American ideologies and myths. Develop a written response exploring the depiction of contemporary American culture in an episode of a contemporary television drama (i.e., NCIS, Game of Thrones, Agents of S.H.I.E.L.D., Breaking Bad, etc.).

1.
Describe American cultural ideologies or principles portrayed in the show (i.e., unity, diversity, patriotism, etc.).
2. Explain which myths or untruths evident in the show express fundamental cultural values or norms.
3. Discuss how the show mirrors social and cultural trends.
4. Analyze the culture portrayed in the television show using each of the theoretical paradigms: Functionalism, Conflict Theory, Interactionism, Feminism, Exchange Theory, and Environmental Theory.

1.S: Culture and Meaning (Summary)

Key Terms and Concepts: Beliefs, Conflict Theory, Cultural Sociology, Culture, Enculturation, Environmental Theory, Ethnographers, Exchange Theory, Feminism, Functionalism, Macrosociology, Material Culture, Microsociology, Native Anthropologists, Non-Material Culture, Norms, Practices, Qualitative Data, Quantitative Data, Sapir-Whorf Hypothesis, Social Location, Sociological Imagination, Symbolic Interactionism, Symbols, Theoretical Paradigms, Values
1.4: Theoretical Perspectives on Culture
Learning Objectives

At the end of the module, students will be able to:

1. illustrate how culture is constructed and received
2. describe the influence of context on cultural creation and acceptance
3. explain the significance of collective culture on group solidarity and cohesion
4. discuss and assess the impact of cultural change on the social structure

How does culture affect your thinking and behavior? How are you able to communicate the influence of culture on your life to others? How do you justify your culture as true, real, or tangible? Because culture is a socially meaningful expression that can be articulated and shared, it often takes a physical form in our minds. A spiritual or philosophical expression that is not physical in nature becomes tangible in our minds and is equivalent to an “object” (Griswold 2013). The cultural expression is so real that people perceive it as something achievable or concrete (even if only in psychological form). When we are speaking about non-material culture, the mental picture is the object, and the meaning associated with the object is the expression.

• 2.1: Social Production of Culture
At the end of the module, students will be able to illustrate how culture is constructed and received. This module describes the influence of context on cultural creation and acceptance, explains the significance of collective culture for group solidarity and cohesion, and discusses and assesses the impact of cultural change on the social structure.
• 2.2: Collective Culture
Among humans, there are universal cultural patterns or elements across groups and societies. Cultural universals are common to all humans throughout the globe. Some cultural universals include cooking, dancing, ethics, greetings, personal names, and taboos, to name a few.
• 2.3: Group and Organizational Culture
The term group refers to any collection of at least two people who interact frequently and share identity traits aligned with the group (Griffiths et al. 2015). Groups play different roles in our lives. An organization refers to a group of people with a collective goal or purpose linked to bureaucratic tendencies including a hierarchy of authority, clear division of labor, explicit rules, and impersonality. Organizations function within existing cultures and produce their own.
• 2.4: Levels of Culture
There are three recognized levels of culture in society. Each level of culture signifies particular cultural traits and patterns within groups.
• 2.S: Culture as a Social Construct (Summary)

02: Culture as a Social Construct

When people discuss love, they imagine it in their minds and feel it in their hearts even though no one can truly touch love in a physical form. We associate love with a variety of mental and physical interactions, but love itself is not tangible or concrete. Material culture, in contrast, is associated with physical artifacts and projects a clear understanding of its nature because it is visible, audible, and touchable. We buy and give gifts to express our love. The material artifact we give to someone is a tangible expression of love. In this example, the expression of non-material culture is evident in material culture (love = gift) and material culture represents non-material culture (gift = love), making both forms cultural “objects.” Cultural objects become representations of many things and can have many meanings based on the history and biography of an individual, group, or society. Think about the mantra, “Follow your dreams.” The expression is often used in the United States when discussing educational and career motivation and planning. For many U.S. citizens, this statement creates an open space for academic or professional choices and opportunities.
However, the “object” is limited to the culture of the individual. In other words, your “dream” is limited to the cultural environment and social location you occupy. For example, if you are in a family where men and women fill different roles in work and family, then your educational and career choices or pathways are limited to the options within the context of your culture (i.e., values, beliefs, and norms). Afghan culture does not value or permit the education of girls. In Afghanistan, one third of girls marry before 18, and once married they are compelled to drop out of school (Human Rights Watch 2017). The educational and career choices of Afghan girls are limited by the culture of their country and the social location of their gender. This means that to “follow your dreams” in Afghanistan is confined to what a dream as an object can represent based on the gender of the person. How does culture become an “object,” that is, solidified, socially accepted, and followed? According to Griswold (2013), people create, articulate, and communicate culture. However, this does not mean every cultural idea or creation is accepted by society. Though people create culture, other people must receive or accept that culture for it to become tangible, real, or recognized as an object, including artifacts. The creation of cultural ideas and concepts must have an audience to receive them and articulate their meaning in order for culture to be established and accepted. The context of the social world, including time, place, conditions, and social forces, influences whether an audience accepts or rejects a cultural object. Consider the many social media applications available to us today. With so many social media outlets and options available, which are the most recognized and used? Which social media apps have become part of our everyday lives, and which do we expect people to use and be familiar with as a norm?
When Jack Dorsey, Noah Glass, Biz Stone, and Evan Williams created Twitter, they introduced a cultural idea to society. As word spread about the application and people began to use it, communication about its relevance and usefulness grew. As the network of users grew, more and more people were intrigued to discover the application and make it part of their lives, leading to Twitter becoming a cultural object. Not only did Twitter need to demonstrate relevance to reach potential users, but it also had to be timely and applicable in context, or to the needs of modern society. Since the development of the Internet, many people and organizations have developed a variety of social media applications, but only a few apps have transcended time to become part of our culture because they were able to develop an audience, or significant number of cultural receivers, to legitimize them. Other than Twitter, what social media applications have become part of our culture? Research and describe the demographics of the audience or receivers for each application identified, and discuss the context or environment that made the app relevant for its time and users.

DISSECTING CULTURAL CONSTRUCTION: Cyberbullying

Consider the social issue of cyberbullying.

1. Describe the social context or environment that has led to the development and growth of this issue.
2. What cultural elements do we associate with cyberbullying? What are the values, beliefs, norms, symbolic expressions, and artifacts or materials used by perpetrators to create a culture of cyberbullying?
3. How do victims, observers, and the public receive this culture? What meanings do people associate with the expressions used by perpetrators that make the issue "real"?
4. Reflecting on your responses to Questions 1-3, explain how social context, cultural creation, and cultural acceptance work to make the issue of cyberbullying a cultural object.
2.1: Social Production of Culture
Among humans, there are universal cultural patterns or elements across groups and societies. Cultural universals are common to all humans throughout the globe. Some cultural universals include cooking, dancing, ethics, greetings, personal names, and taboos, to name a few. Can you identify at least five other cultural universals shared by all humans? In thinking about cultural universals, you may have noted the variations or differences in the practice of these cultural patterns or elements. Even though humans share several cultural universals, the practice of culture expresses itself in a variety of ways across different social groups and institutions. When different groups identify shared culture, we often are speaking from generalizations, or general characteristics and principles shared by humans. The description of cultural universals speaks to the generalization of culture, as in the practice of marriage. Different social groups share the institution of marriage, but the process, ceremony, and legal commitments differ depending on the culture of the group or society. Cultural generalities help us understand the similarities and connections all humans have in the way we understand and live, even though we may have particular ways of applying them. Some cultural characteristics are unique to a single place, culture, group, or society. These particularities may develop or adapt from social and physical responses to time, geography, ecological changes, and group member traits and composition, including power structures or other phenomena.

Cultural and Social Bonds

By living together in society, people “learn specific ways of looking at life” (Henslin 2011:104). Through daily interactions, people construct reality. The construction of reality provides a forum for interpreting experiences in life expressed through culture. Emile Durkheim ([1893] 1933) believed social bonds hold people together.
When people live in small, integrated communities that share common values and beliefs, they develop a shared or collective consciousness. Durkheim referred to this type of social integration as mechanical solidarity, meaning members of the community are all working parts of the group, or work in unity, creating a sense of togetherness and forming a collective identity. In this example, members of the community think and act alike because they have a shared culture and shared experiences from living in remote, close-knit areas. As society evolves and communities grow, people become more specialized in the work they do. This specialization leads individuals to work independently in order to contribute to a segment or part of a larger society (Henslin 2011). Durkheim referred to this type of social unity as organic solidarity, meaning each member of the community has a specific task or place in the group through which they contribute to the overall function of a community that is spatially and culturally diverse. In this example, community members do not necessarily think or act alike but participate by fulfilling their roles or tasks as part of the larger group. If members fulfill their parts, then everyone is contributing and exchanging labor or production for the community to function as a whole. Both mechanical and organic solidarity explain how people cooperate to create and sustain social bonds relative to group size and membership. Each form of solidarity develops its own culture to hold society together and keep it functioning. However, when society transitions from mechanical to organic solidarity, there is chaos or normlessness. Durkheim referred to this transition as social anomie, meaning “without law,” resulting from the lack of a firm collective consciousness. As people transition from social dependence (mechanical solidarity) to interdependence (organic solidarity), they become isolated and alienated from one another until a redeveloped set of shared norms arises.
We see examples of this transition when there are changes in social institutions such as governments, industry, and religion. Transitions to democracy across the continent of Africa have shown countries contending with poverty, illiteracy, militarization, underdevelopment, and monopolization of power, all forms of anomie, as they move from social dependence to interdependence (The National Academic Press 1992). People develop an understanding of their culture, specifically their role and place in society, through social interactions. Charles Horton Cooley ([1902] 1964) suggested people develop self and identity through interpersonal interactions such as the perceptions, expectations, and judgments of others. Cooley referred to this practice as the looking glass self. We imagine how others observe us, and we develop ourselves in response to their observations. The concept develops over three phases of interaction. First, we imagine another’s response to our behavior or appearance; then, we envision their judgment; and lastly, we have an emotional response to their judgment that influences our self-image or identity (Griswold 2013). Interpersonal interactions play a significant role in helping us create social bonds and understand our place in society.
textbooks/socialsci/Sociology/Cultural_Sociology_and_Social_Problems/Beyond_Race_-_Cultural_Influences_on_Human_Social_Life_(Kennedy)/02%3A_Culture_as_a_Social_Construct/2.02%3A_Collective_Culture.txt
The term group refers to any collection of at least two people who interact frequently and share identity traits aligned with the group (Griffiths et al. 2015). Groups play different roles in our lives. Primary groups are usually small groups characterized by face-to-face interaction, intimacy, and a strong sense of commitment. Primary groups remain “inside” us throughout our lifetime (Henslin 2011). Secondary groups are large and impersonal groups that form around a common interest. Different types of groups influence our interactions, identity, and social status. George Herbert Mead (1934) referred to the individuals affecting a person’s life as significant others, and he conceptualized “generalized others” as the organized and generalized attitude of a social group. An in-group is a group toward which one feels particular loyalty and respect. The traits of in-groups are seen as virtues, whereas the traits of out-groups are seen as vices (Henslin 2011). An out-group is a group toward which one feels antagonism and contempt. Consider fans at a sporting event: people cheering on or supporting the same team develop in-group admiration and acceptance while viewing fans of the opposing team as members of their out-group. Figure \(1\): Man Person People Stadium. Image used with permission (CC0 1.0; Pixabay). Reference groups are also influential groups in someone’s life. A reference group provides a standard for judging one’s own attitudes or behaviors within a social setting or context (Henslin 2011). People use reference groups as a method for self-evaluation and social location. People commonly use reference groups in the workplace by watching and emulating the interactions and practices of others so they fit in and garner acceptance by the group. Group dynamics focus on how groups influence individuals and how individuals affect groups.
The social dynamics between individuals play a significant role in forming group solidarity. Social unity reinforces a collective identity and shared thinking among group members, thereby constructing a common culture (Griswold 2013). Commonalities of group membership are important for mobilizing individual members. When people attempt to create social change or establish a social movement, group solidarity helps facilitate the motivation of individuals and the framing of their actions. The sense of belonging and trust among the group makes it easier for members to align, recognize the problem, accept a possible solution, and take actions that are congruent and complementary to the collective identity of the group (Griswold 2013). People accept the group’s approach based on solidarity and cohesiveness, which overall amplifies personal mobilization and commitment to the group and its goals.

COLLECTIVE IDENTITY AND SOCIAL MOVEMENTS

Research TED Talks videos on social movements and social change such as the following:

• How to Start a Movement by Derek Sivers (www.ted.com/talks/derek_sive...art_a_movement)
• Online Social Change by Zeynep Tufekci (www.ted.com/talks/zeynep_tuf...ze_hard_to_win)

1. What lessons can you learn about collective identity from the stories presented?
2. How does group culture make it possible to construct a social movement? Explain how microsociological acts (social interactions) lead to macrosociological changes (systems, organizations, and processes) in society.
3. What impact does intrinsic or internal motivation and framing of the issue have on organizing a social movement?

An organization refers to a group of people with a collective goal or purpose linked to bureaucratic tendencies, including a hierarchy of authority, clear division of labor, explicit rules, and impersonality (Giddens, Duneier, Applebaum, and Carr 2013). Organizations function within existing cultures and produce their own.
Formal organizations fall into three categories: normative, coercive, and utilitarian (Etzioni 1975). People join normative or voluntary organizations based on shared interests (e.g., a club or cause). Coercive organizations are groups that people are coerced or forced to join (e.g., an addiction rehabilitation program or jail). People join utilitarian organizations to obtain a specific material reward (e.g., a private school or college). When we work or live in organizations, there are multiple levels of interaction that affect social unity and operations. On an individual level, people must learn and assimilate into the culture of the organization. All organizations face the problem of motivating their members to work together to achieve common goals (Griswold 2013). Generally, small group subcultures develop within organizations, with their own meanings and practices, to help facilitate and safeguard members within the organizational structure. Group members will exercise force (peer pressure and incentives), actively socialize (guide feelings and actions with normative controls), and model behavior (exemplary actors and stories) to build cohesiveness (Griswold 2013). Small groups play an integral role in managing individual members to maintain the function of the organization. Think about the school or college you attend. There are many subcultures within any educational setting, and each group establishes the norms and behaviors members must follow for social acceptance. Can you identify at least two subcultures on your school campus and speculate how members of each group pressure one another to fit in? Figure \(2\): Army Authority Drill Instructor Group. (CC0 1.0; Pixabay). On a group level, symbolic power matters in recruiting members and sustaining the culture of a group within the larger social culture (Hallett 2003). Symbolic power is the power of constructing reality to guide people in understanding their place in the organizational hierarchy (Bourdieu 1991).
This power occurs in everyday interactions through unconscious cultural and social domination. The dominant group of an organization influences the prevailing culture and its role in communications, forcing all groups or subcultures to define themselves by their distance from the dominant culture (Bourdieu 1991). Symbolic power serves as an instrument of domination in the organization by creating the ideological systems of its goals, purpose, and operations. Symbolic power not only governs the culture of the organization but also manages solidarity and division between groups. We see examples of symbolic power in the military. Each branch of the military has a hierarchy of authority in which generals serve as the dominant group and are responsible for the prevailing culture. Each rank socializes members according to their position within the organizational hierarchy, and members fulfill their roles to achieve collective goals and maintain functions.

CULTURAL SOLIDARITY

Describe the culture of an organization where you have worked, volunteered, or attended school.

1. What are the stories and symbols that everyone who works, volunteers, or attends there knows?
2. What subculture groups exist within the organization, and what forms of conflict take place between units or classifications?
3. How do the heads of the organization use symbolic power to motivate people?

There are external factors that influence organizational culture. The context and atmosphere of a nation shape an organization. When an organization’s culture aligns with national ideology, it can receive special attention or privileges in the way of financial incentives or policy changes (Griswold 2013). In contrast, organizations opposing national culture may face suppression or marginalization, or be denied government and economic support. Organizations must also operate across a multiplicity of cultures (Griswold 2013).
Cultural differences between organizations may affect their operations and the achievement of their goals. To be successful, organizations must be able to operate in a variety of contexts and cultures. Griswold (2013) suggested one way to work across cultural contexts is to maintain an overarching organizational mission but be willing to adapt on insignificant or minor issues. Financial and banking institutions use this approach. Depending on the region, banks offer different cultural incentives for opening an account or obtaining a loan. In California, homeowners may obtain low-interest loans for ecological improvements, including the installation of solar panels, weatherproof windows, or drought-resistant landscaping. In the state of Michigan, affluent homeowners may acquire a low-interest property improvement loan, and very low-income homeowners may receive grants for repairing, improving, or modernizing their homes to remove health and safety hazards. Working across organizational cultures also requires some dimension of trust. Organizational leaders must model forms and symbols of trust between organizations, groups, and individuals (Mizrachi, Drori, and Anspach 2007). This means authority figures must draw on the organization’s internal and external diversity of cultures to show its ability to adapt and work in a variety of cultural and political settings and climates. Organizations often focus on internal allegiance, forgetting that shared meaning across the marketplace, sector, or industry is what moves understanding of the overall system and each organization’s place in it (Griswold 2013). A lack of cultural coordination and understanding undermines many organizations and has significant consequences for accomplishing their goals and sustaining themselves.

ORGANIZATIONAL CULTURE

Consider the culture of an organization where you have worked, volunteered, or attended school.
Describe a time when you witnessed someone receive a nonverbal, negative sanction (e.g., a look of disgust, a shake of the head, or some other nonverbal sign of disapproval).

1. What organizational norm was being broken (i.e., what was the act that led the person to give a nonverbal negative sanction)?
2. Was the norm broken considered a structural or cultural violation?
3. What was the reaction of the norm violator to the negative sanction?
4. Was the norm being enforced a result of peer pressure, external forces, mimicking, or modeling?
textbooks/socialsci/Sociology/Cultural_Sociology_and_Social_Problems/Beyond_Race_-_Cultural_Influences_on_Human_Social_Life_(Kennedy)/02%3A_Culture_as_a_Social_Construct/2.03%3A_Group_and_Organizational_Culture.txt
There are three recognized levels of culture in society (Kottak and Kozaitis 2012). Each level of culture signifies particular cultural traits and patterns within groups. International culture is one level, referring to culture that transcends national boundaries. These cultural traits and patterns spread through migration, colonization, and the expansion of multinational organizations (Kottak and Kozaitis 2012). Some illustrations are evident in the adoption and use of technology and social media across continents. For example, computers and mobile devices allow people to live and operate across national boundaries, enabling them to create and sustain an international culture around a common interest or purpose (e.g., the Olympics, the United Nations, etc.). In contrast, cultural traits and patterns shared within a country constitute national culture. National culture is most easily recognizable in the form of symbols such as flags, logos, and colors, as well as sounds, including national anthems and musical styles. Think about American culture: which values, beliefs, norms, and symbols are common only among people living in the United States? How about those living in China or Brazil? Subcultures, another level of culture, are subgroups of people within the same country (e.g., doctors, lawyers, teachers, athletes, etc.). Subcultures have shared experiences and common cultural distinctions, but they blend into the larger society or cultural system. Subcultures have their own sets of symbols, meanings, and behavioral norms, which develop through interaction with one another. Subcultures develop their own self-culture or idioculture that has significant meaning to members of the group and creates social boundaries for membership and social acceptance (Griswold 2013). Think about social cliques, whether they be categorized as jocks, nerds, hipsters, punks, or stoners. Each group has a particular subculture, from the artifacts they wear to the values and beliefs they exhibit.
All groups form a subculture, resulting in group cohesion and shared consciousness among their members.

SPORT AS A SUBCULTURE

Research the sport of quadriplegic rugby. Examine the rules of the game, search for information or testimonials about any of the athletes, and watch videos of game highlights and athlete stories or interviews available online.

1. Describe the subculture of the athletes (i.e., values, beliefs, symbols including meanings and expressions, behavioral norms, and artifacts relevant to the game).
2. Discuss the socialization process of athletes into the sport.
3. Explain how social context, cultural creation, and cultural acceptance work to create the idioculture of quadriplegic rugby.

Doing Culture

All people are cultured. Social scientists argue all people have a culture represented in values, beliefs, norms, expressive symbols, practices, and artifacts. This viewpoint transcends the humanities perspective that suggests one must project refined tastes and manners and have a good education, as exhibited by the elite class, to have culture. The perspective of social scientists reinforces the view that cultures are integrated and patterned systems, not simply desired characteristics that distinguish the ruling class. Cultural patterns are sets of integrated traits transmitted by communication or social interaction (Kottak and Kozaitis 2012). Consider the cultural patterns associated with housing. Each cultural group or society maintains a housing system comprised of particular cultural traits, including a kitchen, sofa, bed, toilet, etc. Each cultural trait, or individual cultural item, is part of the home and of the accepted cultural pattern for housing. Not only do people share cultural traits, but they may also share personality traits. These traits are actions, attitudes, and behaviors (e.g., honesty, loyalty, courage, etc.). Shared personality traits develop through social interactions from core values within groups and societies (Kottak and Kozaitis 2012).
Core values are emphasized formally (legally or officially recognized) and informally (unofficially) to develop shared meaning and social expectations. The use of positive (reward) and negative (punishment) sanctions helps in controlling desired and undesired personality traits. For example, if we want to instill courage, we might highlight people and moments depicting bravery with verbal praise or awards. To discourage cowardice, we portray a deserter or runaway as depicting weakness and facing social isolation. Doing culture is not always an expression of ideal culture. People’s practices and behaviors do not always abide by or fit into the ideal ethos we intend or expect. The Christmas holiday is one example where ideal culture does not match the real culture people live and convey. Christmas traditionally represents an annual celebration of the birth of Jesus Christ; however, many individuals and families do not worship Christ or attend church on Christmas day but instead exchange gifts and eat meals together. The ideal or public definition of Christmas does not match the real or individual practices people express on the holiday. Throughout history, there have always been differences between what people value (ideal culture) and how they actually live their lives (real culture).

Cultural Change

People adapt both biologically and culturally. Cultural change or evolution is influenced directly (e.g., intentionally), indirectly (e.g., inadvertently), or by force. These changes are a response to fluctuations in the physical or social environment (Kottak and Kozaitis 2012). Social movements often start in response to shifting circumstances, such as an event or issue, in an effort to evoke cultural change. People will voluntarily join in collective action to either preserve or alter a cultural base or foundation. The fight over control of a cultural base has been the central conflict in many civil and human rights movements.
On a deeper level, many of these movements are about cultural rights and control over what will be the prevailing or dominant culture. Changes in cultural traits are either adaptive (better suited for the environment) or maladaptive (inadequate or inappropriate for the environment). During times of natural disaster, people must make cultural changes to daily norms and practices, such as donating time and money to help relief efforts (adaptive) while also rebuilding homes and businesses. However, not all relief efforts direct money, energy, or time into long-term contributions such as modifying physical infrastructure, including roads, bridges, and dams, or helping people relocate away from high-disaster areas (maladaptive). People adjust and learn to cope with cultural changes, whether adaptive or maladaptive, in an effort to soothe psychological or emotional needs. Though technology continues to drive changes in society, culture does not always change at the same pace. There is a lag in how rapidly cultural changes occur. Generally, material culture changes before non-material culture. Contact between groups diffuses cultural change, and people are usually open to adopting or trying new artifacts or material possessions before modifying their values, beliefs, norms, expressive symbols (i.e., verbal and non-verbal language), or practices. Influencing fashion trends is easier than altering people’s religious beliefs. Through travel and technological communication, people are sharing cultural elements worldwide. With the ability to travel and communicate across continents, the exchange of culture is linked across time and space. Modern society operates on a global scale (known as globalization), and people are now interlinked and mutually dependent. Acculturation, or the merging of cultures, is growing. Groups are adopting the cultural traits and social patterns of other groups, leading to the blending of cultures.
Cultural leveling is the process where cultures are becoming similar to one another because of globalization.

2.S: Culture as a Social Construct (Summary)

Key Terms and Concepts

• Acculturation
• Adaptive
• Anomie
• Coercive Organizations
• Collective Consciousness
• Cultural Change
• Cultural Generalities
• Cultural Lag
• Cultural Leveling
• Cultural Objects
• Cultural Patterns
• Cultural Traits
• Cultural Universals
• Group
• Group Dynamics
• Ideal Culture
• In-Group
• International Culture
• Looking Glass Self
• Maladaptive
• Mechanical Solidarity
• National Culture
• Normative Organizations
• Organic Solidarity
• Organizational Culture
• Organizations
• Out-Group
• Primary Group
• Reference Group
• Sanctions
• Secondary Group
• Shared Culture
• Subcultures
• Symbolic Power
• Utilitarian Organizations
textbooks/socialsci/Sociology/Cultural_Sociology_and_Social_Problems/Beyond_Race_-_Cultural_Influences_on_Human_Social_Life_(Kennedy)/02%3A_Culture_as_a_Social_Construct/2.04%3A_Levels_of_Culture.txt
Learning Objectives

At the end of the module, students will be able to:

1. explain the implications of culture on social status and stratification
2. summarize the mechanisms used by dominant groups to develop and sustain cultural power
3. understand cultural hegemony
4. describe the consequences of social conflicts over cultural power
5. identify and evaluate cultural prejudice and discrimination

All humans are composed of the same biological structure and matter. The unique distinctions among us stem from our culture (Kottak and Kozaitis 2012). The differences in our values, beliefs, norms, expressive language, practices, and artifacts are what set us apart from each other. Being culturally unique projects an exclusivity that draws attention to our variations and differences. People find cultural fit or acceptance among those who share the same unique cultural characteristics. Consequently, people may find or experience intolerance or rejection from those with different cultural traits.

• 3.1: Cultural Hierarchies. This module summarizes the mechanisms used by dominant groups to develop and sustain cultural power and helps to further understand cultural hegemony. Overall, it explains the implications of culture on social status and stratification, describes the consequences of social conflicts over cultural power, and shows how to identify and evaluate cultural prejudice and discrimination.
• 3.2: Social and Cultural Capital. Social and cultural relationships have productive benefits in society. Research defines social capital as a form of economic (e.g., money and property) and cultural (e.g., norms, fellowship, trust) assets central to a social network (Putnam 2000). The social networks people create and maintain with each other enable society to function.
Sociologists find cultural capital, or the social assets of a person (including intellect, education, speech patterns, mannerisms, and dress), promotes social mobility.

• 3.3: Cultural Hegemony. The very nature of cultural creation and production requires an audience to receive a cultural idea or product. Without people willing to receive culture, it cannot be sustained or become an object.
• 3.4: Prejudice and Discrimination. Think about a time when you came across someone who did not fit the cultural “norm” either expressively or behaviorally. Were you compelled to understand the differences between you and the other person, or were you eager to dismiss, confront, or ignore the other person? Prejudice is an attitude of thoughts and feelings directed at someone based on prejudging or making negative assumptions. Discrimination is an action of unfair treatment against someone based on characteristics such as age, etc.
• 3.S: Cultural Power (Summary)

Thumbnail: Counter service in a McDonald's restaurant in Dukhan, Qatar. (CC BY-SA 3.0 Unported; Vincent van Zeijst).

03: Cultural Power

Cultural distinctions make groups unique, but they also provide a social structure for creating and ranking cultures based on similarities or differences. A cultural group’s size and strength influence its power over a region, area, or other groups. Cultural power lends itself to social power that influences people’s lives by controlling the prevailing norms or rules and making individuals adhere to the dominant culture voluntarily or involuntarily. Culture is not a direct reflection of the social world (Griswold 2013). Humans mediate culture to define meaning and interpret the social world around them. As a result, dominant groups are able to manipulate, reproduce, and influence culture among the masses. Common culture found in society is actually the selective transmission of elite-dominated values (Parenti 2006).
This practice, known as cultural hegemony, suggests culture is not autonomous; it is conditioned, dictated, regulated, and controlled by dominant groups. The major forces shaping culture lie in the power of elite-dominated interests, who make limited and marginal adjustments to make it appear that culture is changing in alignment with evolving social values (Parenti 2006). The culturally dominant group often sets the standard for living and governs the distribution of resources.
textbooks/socialsci/Sociology/Cultural_Sociology_and_Social_Problems/Beyond_Race_-_Cultural_Influences_on_Human_Social_Life_(Kennedy)/03%3A_Cultural_Power/3.01%3A_Cultural_Hierarchies.txt
Social and cultural relationships have productive benefits in society. Research defines social capital as a form of economic (e.g., money and property) and cultural (e.g., norms, fellowship, trust) assets central to a social network (Putnam 2000). The social networks people create and maintain with each other enable society to function. However, the work of Pierre Bourdieu (1972) found social capital produces and reproduces inequality, as seen in examining how people gain powerful positions through direct and indirect social connections. Social capital, or a social network, can help or hinder someone personally and socially. For example, strong and supportive social connections can facilitate job opportunities and promotions that are beneficial to the individual and the social network. Weak and unsupportive social ties can jeopardize employment or advancement, which is harmful to the individual and the social group as well. People make cultural objects meaningful (Griswold 2013). Interactions and reasoning develop cultural perspectives and understanding. The “social mind” of groups processes incoming signals, influencing culture within the social structure, including the social attributes and status of members in a society (Zerubavel 1999). Language and symbols express a person’s position in society and the expectations associated with their status. For example, the clothes people wear or the car they drive represent style, fashion, and wealth. Owning designer clothing or a high-performance sports car depicts a person’s access to financial resources and worth. The use of formal language and titles also represents social status, with salutations including your majesty, your highness, president, director, chief executive officer, and doctor. People may occupy multiple statuses in a society. At birth, people are ascribed social status in alignment with their physical and mental features, gender, and race.
In some cases, societies differentiate status according to physical or mental disability, as well as whether a child is female or male, or a racial minority. According to Dr. Jody Heymann, Dean of the World Policy Analysis Center at the UCLA Fielding School of Public Health, "Persons with disabilities are one of the last groups whose equal rights have been recognized" around the world (Brink 2016). A report by the World Policy Analysis Center (2016) shows only 28% of 193 countries participating in the global survey guarantee a right to quality education for people with disabilities, and only 18% guarantee a right to work. In some societies, people may earn or achieve status from their talents, efforts, or accomplishments (Griffiths et al. 2015). Obtaining higher education or being an artistic prodigy often corresponds to high status. For example, a college degree awarded by an “Ivy League” university carries higher social status than a degree from a public state college, just as talented artists, musicians, and athletes receive honors, privileges, and celebrity status. Additionally, the social and political hierarchy of a society or region designates social status. Consider the social labels within class, race, ethnicity, gender, education, profession, age, and family. Labels defining a person’s characteristics mark their position within the larger group. People in a majority or dominant group have higher status (e.g., rich, white, male, physician, etc.) than those of the minority or subordinate group (e.g., poor, black, female, housekeeper, etc.). Overall, the location of a person on the social strata influences their social power and participation (Griswold 2013). Individuals with inferior power face limitations on social and physical resources, including a lack of authority, influence over others, formidable networks, capital, and money. Social status serves as a method for building and maintaining boundaries among and between people and groups.
Status dictates social inclusion or exclusion, resulting in cultural stratification or hierarchy whereby a person’s position in society regulates their cultural participation with others. Cultural attributes within social networks build community, group loyalty, and personal and social identity. People sometimes engage in status shifting to garner acceptance or avoid attention. DuBois (1903) described the act of people looking through the eyes of others to measure social place or position as double consciousness. His research explored the history and cultural experiences of American slavery and the plight of black folk in translating thinking and behavior between racial contexts. DuBois’ research helped sociologists understand how and why people display one identity in certain settings and another in different ones. People must negotiate a social situation to decide how to project their social identity and assign a label that fits (Kottak and Kozaitis 2012). Status shifting is evident when people move from informal to formal contexts. Our cultural identity and practices are very different at home than at school, work, or church. Each setting demands different aspects of who we are and our place in the social setting.

THE SIGNIFICANCE OF CULTURAL CAPITAL

This short video (https://youtu.be/5DBEYiBkgp8) summarizes Pierre Bourdieu's (1930-2002) theory of cultural capital, or “the cultural knowledge that serves as currency that helps us navigate culture and alters our experiences and the opportunities available to us.” The video discusses three different forms of cultural capital: the embodied state, the objectified state, and the institutionalized state, with examples of each type that students can apply to their own lives. At the end of the video, discussion questions are included to assist students in applying the concept of cultural capital to what is happening in the world today.
Prepare a written response addressing the four discussion questions presented in the video to share with the class. Submitted By: Sociology Live!, Cindy Hager. "Cultural Capital" by The Sociological Cinema is licensed under CC BY 4.0 Sociologists find cultural capital, or the social assets of a person (including intellect, education, speech pattern, mannerisms, and dress), promotes social mobility (Harper-Scott and Samson 2009). People who accumulate and display the cultural knowledge of a society or group may earn social acceptance, status, and power. Bourdieu (1991) explained that the accumulation and transmission of culture is a social investment from socializing agents including family, peers, and community. People learn culture and cultural characteristics and traits from one another; however, social status affects whether people share, spread, or communicate cultural knowledge to each other. A person's social status in a group or society influences their ability to access and develop cultural capital. Cultural capital provides people access to cultural connections such as institutions, individuals, materials, and economic resources (Kennedy 2012). Status guides people in choosing to whom and when culture or cultural capital is transferable. Bourdieu (1991) believed cultural inheritance and personal biography contribute to individual success more than intelligence or talent. With status comes access to social and cultural capital that generates access to privileges and power among and between groups. Individuals with cultural capital deficits face social inequalities (Reay 2004). If someone does not have the cultural knowledge and skills to maneuver the social world she or he occupies, then she or he will not find acceptance within a group or society or gain access to support and resources. COLLEGE SUCCESS AND CULTURAL CAPITAL Cultural capital assesses the influence of culture (i.e., language, values, norms, and access to material resources) on success and achievement. 
You can measure your cultural capital by examining the cultural traits and patterns of your life. The following questions examine student values and beliefs, parental and family support, residency status, language, childhood experiences focusing on access to cultural resources (e.g., books) and neighborhood vitality (e.g., employment opportunities), educational and professional influences, and barriers affecting college success (Kennedy 2012).

1. What are the most important values or beliefs influencing your life?
2. What kind of support have you received from your parents or family regarding school and your education?
3. How many generations has your family lived in the United States?
4. What do you consider your primary language? Did you have any difficulty learning to read or write the English language?
5. Did your family have more than fifty books in the house when you were growing up? What type of reading materials were in your house when you were growing up?
6. Did your family ever go to art galleries, museums, or plays when you were a child? What types of activities did your family do with their time other than work and school?
7. How would you describe the neighborhood where you grew up?
8. What illegal activities, if any, were present in the neighborhood where you grew up?
9. What employment opportunities were available to your parents or family in the neighborhood where you grew up?
10. Do you have immediate family members who are doctors, lawyers, or other professionals? What types of jobs have your family members had throughout their lives?
11. Why did you decide to go to college? What has influenced you to continue or complete your college education?
12. Did anyone ever discourage or prevent you from pursuing academics or a professional career?
13. Do you consider school easy or difficult for you?
14. What has been the biggest obstacle for you in obtaining a college education?
15. What has been the greatest opportunity for you in obtaining a college education?
16. How did you learn to navigate educational environments? Who taught you the "ins" and "outs" of college or school?
textbooks/socialsci/Sociology/Cultural_Sociology_and_Social_Problems/Beyond_Race_-_Cultural_Influences_on_Human_Social_Life_(Kennedy)/03%3A_Cultural_Power/3.02%3A_Social_and_Cultural_Capital.txt
The very nature of cultural creation and production requires an audience to receive a cultural idea or product. Without people willing to receive culture, it cannot be sustainable or become an object (Griswold 2013). Power and influence play an integral part in cultural creation and marketing. The ruling class has the ability to establish cultural norms and manipulate society while turning a profit. Culture is a commodity, and those in a position of power to create, produce, and distribute culture gain further social and economic power. Culture producing organizations such as multinational corporations and media industries are in the business of producing mass culture products for profit. These organizations have the power to influence people throughout the world. Paul Hirsch (1972) referred to this enterprise as the culture industry system or the "market." In the culture industry system, multinational corporations and media industries (i.e., cultural creators) produce an excess supply of cultural objects to draw in public attention, with the goal of flooding the market to ensure receipt and acceptance of at least one cultural idea or artifact by the people for monetary gain. The culture industry system produces mass culture products to generate a culture of consumption (Grazian 2010). The production of mass culture thrives on the notion that culture influences people. In line with the humanities' perspective on culture, multinational corporations and media industries believe they have the ability to control and manipulate culture by creating objects or products that people want and desire. This viewpoint suggests cultural receivers, or the people, are weak, apathetic, and consume culture for recognition and social status (Griswold 2013). 
If you consider the cultural object of buying and owning a home, the concept of owning a home represents attaining the "American dream." Even though not all Americans are able to buy and own a home, the culture industry system has embedded home ownership as a requisite to success and achievement in America. In contrast, popular culture implies people influence culture. This perspective indicates people are active makers in the creation and acceptance of cultural objects (Griswold 2013). Take into account one of the most popular musical genres today, rap music. The creative use of language and rhetorical styles and strategies of rap music gained local popularity in New York during the 1970s and entered mainstream acceptance in the mid-1980s to early '90s (Caramanica 2005). The early development of rap music by the masses led to the genre becoming a cultural object. IS BROWN THE NEW GREEN? Latinos are the largest and fastest growing ethnic group in the United States. The culture industry system is seeking ways to profit from this group. As multinational corporations and media industries produce cultural objects or products geared toward this population, their cultural identity is transformed into a new subculture blending American and Latino values, beliefs, norms, and practices. Phillip Rodriguez is a documentary filmmaker on Latino culture, history, and identity. He and many other race and diversity experts are exploring the influence of consumption on American Latino culture.

1. Research the products and advertisements targeting Latinos in the United States. Describe the cultural objects and messaging encouraging a culture of consumption among this group.
2. What type of values, beliefs, norms, and practices are reinforced in the cultural objects or products created by the culture industry system?
3. How might the purchase or consumption of the cultural objects or products you researched influence the self-image, identity, and social status of Latinos?
4. What new subculture arises from the blending of American and Latino culture? Describe the impact of uniting or combining these cultures on Latinos and Americans.

Today, rap music, like other forms of music, is being created and produced by major music labels and related media industries. The culture industry system uses media gatekeepers to regulate information, including culture (Grazian 2010). Even with the ability of the people to create popular culture, multinational corporations and media industries maintain the power to spread awareness and control access and messaging. This power to influence the masses also gives the hegemonic ruling class known as the culture industry system the ability to reinforce stereotypes, close minds, and promote fear to encourage acceptance or rejection of certain cultural ideas and artifacts.
Cultural intolerance may arise when individuals or groups confront new or differing values, beliefs, norms, expressive symbols, practices, or artifacts. Think about a time when you came across someone who did not fit the cultural "norm" either expressively or behaviorally. How did the person's presence make you feel? What type of thoughts ran through your head? Were you compelled to understand the differences between you and the other person, or were you eager to dismiss, confront, or ignore the other person? Living in a culturally diverse society requires us to tackle our anxiety of the unknown or unfamiliar. The discomfort or cognitive dissonance we feel when we are around others who live and think differently than ourselves makes us alter our thoughts and behaviors towards acceptance or rejection of the "different" person in order to restore cognitive balance (Festinger 1957). When people undergo culture shock, or surprise from experiencing a new culture, their minds undergo dissonance. Similar to a fight or flight response, we choose either to learn and understand cultural differences or to mock and run away from them. People have a tendency to judge and evaluate each other on a daily basis. Assessing other people and our surroundings is necessary for interpreting and interacting in the social world. Problems arise when we judge others using our own cultural standards. We call the practice of judging others through our own cultural lens ethnocentrism. This practice is a cultural universal. People everywhere think their culture is true, moral, proper, and right (Kottak and Kozaitis 2012). By its very definition, ethnocentrism creates division and conflict between social groups; mediating differences is challenging when everyone believes they are culturally superior and their culture should be the standard for living. In contrast, cultural relativism holds that judging a culture by the standards of another is objectionable. 
It seems reasonable to evaluate a person's values, beliefs, and practices by their own cultural standards rather than judge them against the criteria of another (Kottak and Kozaitis 2012). Learning to receive cultural differences from a place of empathy and understanding serves as a foundation for living together despite variances. Like many aspects of human civilization, culture is not absolute but relative, suggesting values, beliefs, and practices are only standards of living as long as people accept and live by them (Boas 1887). Developing knowledge about cultures and cultural groups different from our own allows us to view and evaluate others from their cultural lens. Sometimes people act on ethnocentric thinking and feel justified in disregarding cultural relativism. Overcoming negative attitudes about people who are culturally different from us is challenging when we believe our culture and thinking are justified. Consider the social issue of infanticide, or the killing of unwanted children after birth. The historical practice occurred in times of famine or hardship when resources were too scarce to keep non-productive members of the group alive. Many people find infanticide a human rights violation regardless of a person's cultural traditions and beliefs and think the practice should stop. People often feel justified condemning the practice of infanticide and the people who believe in and practice the tradition. Stereotypes are oversimplified ideas about groups of people (Griffiths et al. 2015). Prejudice is an attitude of thoughts and feelings directed at someone from prejudging or making negative assumptions. Negative attitudes about another's culture are a form of prejudice or bias. Prejudice is a learned behavior. Prejudicial attitudes can lead to discriminatory acts and behaviors. Discrimination is an action of unfair treatment against someone based on characteristics such as age, gender, race, religion, etc. 
PRIVILEGE AND LIFE CHANCES Research YouTube user-created videos on privilege and life chances. Complete the Test Your Life Chances exercise and type a written response addressing the following questions:

1. What life barriers or issues are you able to identify about yourself after completing the exercise?
2. What life advantages or opportunities are you able to distinguish about yourself after completing the exercise?
3. Were there any statements you found more difficult or easier to answer? Explain.
4. Were there any life challenges or obstacles that you have faced missing from the exercise? If so, explain.
5. Were there any life privileges you have experienced missing from the exercise? If so, explain.
6. Did you ever answer untruthfully on any of the statements? If you are comfortable sharing, explain which one(s). Why did you not answer truthfully?
7. How do life's barriers and opportunities influence people's lives? What connections do you see between upward mobility and life chances in regards to disability, racial-ethnic identity, gender identity, language, sexuality, and social class?

Thinking that the practice of infanticide should stop and that those who practice it are malevolent is prejudicial. Trying to stop the practice with force is discriminatory. There are times, in the case of human rights issues like this, where the fine line between criticizing with action (ethnocentrism) and understanding with empathy (cultural relativism) is clear. However, knowing the appropriate context in which to judge or be open-minded is not always evident. Do we allow men to treat women as subordinates if their religion or faith justifies it? Do we allow people to eat sea turtles or live octopus if it is a delicacy? Do we stop children who do not receive vaccinations from attending school? All of these issues stem from cultural differences, and the appropriate response is not always easy to identify. 
When social groups have or are in power, they have the ability to discriminate on a large scale. A dominant group or ruling class imparts its culture in society by passing laws and, informally, by using the culture industry system or "market" to spread it. Access to these methods allows hegemonic groups to institutionalize discrimination. This results in unjust and unequal treatment of people by society and its institutions. Those who culturally align with the ruling class fare better than those who are different. VISUAL ETHNOGRAPHY PART 1 Visual ethnography is a qualitative research method that uses photographic images to capture socio-cultural representations. The experience of producing and discussing visual images or texts develops ethnographic knowledge and provides sociological insight into how people live. In your home or the place you live, take one photo of each of the following:

• The street you live on
• Your home
• Front door of your home
• Your family
• The living room
• The ceiling
• Your sofa or seating
• Lamps or lighting
• The stove
• The kitchen sink
• Your cutlery drawer
• Pantry or where you store food
• The toilet
• The shower or bathing area
• Your toothbrush
• Your bedroom
• Your wardrobe
• Your shoes
• Children's toys (if applicable)
• Children's playground (if applicable)
• Your pets
• Your car or method of transportation

PART 2

1. Watch the video by Anna Rosling Ronnlund entitled See How the Rest of the World Lives, Organized by Income: https://goo.gl/uJc6Vd
2. Next, visit the website Dollar Street: https://goo.gl/Rb8WUJ
3. Once you have accessed the Dollar Street website, take the Quick Tour for a tutorial on how to use the site. If the Quick Tour does not appear when you click the site link, click the menu in the right-hand top corner and select Quick Guide, which will open the Quick Tour window.
4. After completing the Quick Tour, access your visual ethnography photos and compare your photographs with other people throughout the world.
5. For your analysis, in complete sentences explain the differences and similarities based on income and country. Specifically, describe the poorest conditions for each item as well as the richest conditions and what cultural similarities and/or differences exist in comparison to your items.

3.S: Cultural Power (Summary) Key Terms and Concepts: Achieved Status, Ascribed Status, Cognitive Dissonance, Cultural Capital, Cultural Creation, Cultural Fit, Cultural Hegemony, Cultural Power, Cultural Receivers, Cultural Relativism, Cultural Stratification, Culture Industry System, Culture of Consumption, Culture Producing Organizations, Discrimination, Dominant Group, Ethnocentrism, Hegemonic Ruling Class, Popular Culture, Prejudice, Social Capital, Social Labels, Social Status, Socializing Agents, Status Shifting, Stereotypes, Subordinate Group
Learning Objectives At the end of the module, students will be able to: 1. explain the influence of culture on social and self-identity 2. discuss how personal, cultural, and universal identities shape perceptions 3. illustrate the relationship between self and social labels on status 4. assess the impact of technological advances and innovation on identity Trying to figure out who you are, what you value and believe, and why you think the way you do is a lifelong process. In the first chapter of Thinking Well, Stewart E. Kelly suggests, "we all have lenses through which we view reality, and we need to know what our individual lens is composed of and how it influences our perception of reality." Take a moment to reflect and hypothetically paint a picture of yourself with words. Try to capture the core of your being by describing who you are. Once you have formulated a description of yourself, evaluate what you wrote. Does your description focus on your personal characteristics or the cultural characteristics you learned from other people in your life (i.e., family, friends, congregation, teachers, community, etc.)? • 4.1: Identity Formation At the end of the module, students will be able to explain the influence of culture on social and self-identity. They will also be able to discuss how personal, cultural, and universal identities shape perceptions. This module illustrates the relationship between self and social labels on status. Overall, it assesses the impact of technological advances and innovation on identity. • 4.2: Identity Labels and Categories Identity shapes our perceptions and the way we categorize people. Our individual and collective views influence our thinking. Regardless of personal, cultural, or universal identity, people naturally focus on traits, values, behaviors, and practices they identify with and have a tendency to dismiss those they do not. 
• 4.3: Geographic Region People identify with the geographic location they live in as a part of who they are and what they believe. Places have subcultures specific to their geographic location, environmental surroundings, and population. • 4.4: Race and Ethnicity Race is an arbitrary label that has become part of society's culture; there is no justifiable evidence that differences in physical appearance substantiate the idea that there are multiple human species. Ethnicity refers to the cultural characteristics related to ancestry and heritage. Ethnicity describes shared culture such as group practices, values, and beliefs. • 4.5: Social Class Social class serves as a marker or indication of resources. These markers are noticeable in the behaviors, customs, and norms of each stratified group. A person's socio-economic status influences her or his personal and social identity. In society, we rank individuals on their wealth, power, and prestige. • 4.S: Cultural Identity (Summary) Thumbnail: Major Alan G. Rogers holding hands with his partner on the left at a same-sex wedding ceremony on June 28, 2006 (Public Domain; Stagedoorjohnny) 04: Cultural Identity Cultural identity, like culture itself, is a social construct. The values, beliefs, norms, expressive symbols, practices, and artifacts we hold develop from the social relationships we experience throughout our lives. Not only does cultural identity make us aware of who we are, but it also defines what we stand for in comparison to others. Cultural identity is relational between individuals, groups, and society, meaning that through culture people are able to form social connections or refrain from them. It is real to each of us with real social consequences. As defined in Module 1, we learn culture through the process of enculturation. Socializing agents including family, peers, school, work, and the media transmit traditions, customs, language, tools, and common experiences and knowledge. 
The passage of culture from one generation to the next ensures sustainability of that culture by instilling specific traits and characteristics of a group or society that become part of each group member’s identity.
Identity shapes our perceptions and the way we categorize people. Our individual and collective views influence our thinking. Regardless of personal, cultural, or universal identity, people naturally focus on traits, values, behaviors, and practices they identify with and have a tendency to dismiss those they do not. Age Cohorts Our numeric ranking of age is associated with particular cultural traits. Even the social categories we assign to age express cultural characteristics of that age group or cohort. Age signifies one's cultural identity and social status (Kottak and Kozaitis 2012). Many of the most common labels we use in society signify age categories and attributes. For example, the terms "newborns" and "infants" generally refer to children from birth to age four, whereas "school-age children" signifies youngsters old enough to attend primary school. Each age range has social and cultural expectations placed upon it by others (Kottak and Kozaitis 2012). We have limited social expectations of newborns, but we expect infants to develop some language skills and behaviors like "potty training" or the practice of controlling bowel movements. Even though cultural expectations by age vary across other social categories (e.g., gender, geography, ethnicity, etc.), there are universal stages and understandings of intellectual, personal, and social development associated with each age range or cohort. Throughout a person's life course, they will experience and transition across different cultural phases and stages. Life course is the period from one's birth to death (Griffiths et al. 2015). Each stage in the life course aligns with age-appropriate values, beliefs, norms, expressive language, practices, and artifacts. Like other social categories, age can be a basis of social ranking (Kottak and Kozaitis 2012). Society finds it perfectly acceptable for a baby or infant to wear a diaper but considers it a taboo or fetish for a 30-year-old adult. 
However, diaper wearing becomes socially acceptable again as people age into the senior years of life, when biological functions become harder to control. This is also an illustration of how people experience more than one age-based status during their lifetime. Aging is a human universal (Kottak and Kozaitis 2012). Maneuvering life's course is sometimes challenging. Cultural socialization occurs throughout the life course. Learning the cultural traits and characteristics needed at certain stages of life is important for developing self-identity and group acceptance. People engage in anticipatory socialization to prepare for future life roles or expectations (Griffiths et al. 2015). By engaging in social interactions with other people, we learn the cultural traits, characteristics, and expectations in preparation for the next phase or stage of life. Thinking back to "potty training" infants, parents and caregivers teach young children to control bowel movements so they are able to urinate and defecate in socially appropriate settings (i.e., restroom or outhouse) and times. Generations have a collective identity or shared experiences based on the time period in which the group lived. Consider the popular culture of the 1980s compared to today. In the 1980s, people used a landline or fixed-line phone rather than a cellular phone to communicate and went to a movie theater to see a film rather than downloading a video to a mobile device. Therefore, someone who spent his or her youth and most of adulthood without or with limited technology may not deem it necessary to have or operate it in daily life. In contrast, someone born in the 1990s or later will only know life with technology and find it a necessary part of human existence. Each generation develops a perspective and cultural identity from the time and events surrounding their life. Generations experience life differently as a result of cultural and social shifts over time. 
The difference in life experience alters perspectives towards values, beliefs, norms, expressive symbols, practices, and artifacts. Political and social events often mark an era and influence generations. The ideology of white supremacy reinforced by events of Nazi Germany and World War II during the 1930s and 1940s instilled racist beliefs in society. Many adults living at this time believed the essays of Arthur Gobineau (1853-1855) regarding the existence of biological differences between racial groups (Biddis 1970). It was not until the 1960s and 1970s that philosophers and critical theorists studied the underlying structures in cultural products and used analytical concepts from linguistics, anthropology, psychology, and sociology to interpret race, discovering no biological or phenotypical variances between human groups and finding that race is a social construct (Black and Solomos 2000). Scientists found cultural likeness did not equate to biological likeness. Nonetheless, many adults living in the 1930s and 1940s held racial beliefs of white supremacy throughout their lives because of the ideologies spread and shared during their lifetime. In contrast, modern science verifies that the DNA of all people living today is 99.9% alike, and a new generation of people are learning that there is only one human race despite the physical variations in size, shape, skin tone, and eye color (Smithsonian 2018). Because there are diverse cultural expectations based on age, there can be conflict between age cohorts and generations. Age stratification theorists suggest that members of society are classified and have social status associated with their age (Riley, Johnson, and Foner 1972). Conflict often develops from age-associated cultural differences influencing the social and economic power of age groups. For example, the economic power of working adults conflicts with the political and voting power of the retired or elderly. 
Age and generational conflicts are also highly influenced by government or state-sponsored milestones. In the United States, there are several age-related markers including the legal age of driving (16 years old), use of tobacco products (21 years old), consumption of alcohol (21 years old), and age of retirement (65-70 years old). Regardless of knowledge, skill, or condition, people must abide by formal rules and the expectations assigned to each age group within the law. Because age serves as a basis of social control and is reinforced by the state, different age groups have varying access to political and economic power and resources (Griffiths et al. 2015). For example, the United States is the only industrialized nation that does not respect the abilities of the elderly, assigning a marker of 65-70 years old as the indicator for someone to become a dependent of the state and an economically unproductive member of society. Sex and Gender Each of us is born with physical characteristics that represent and socially assign our sex and gender. Sex refers to our biological differences and gender to the cultural traits assigned to females and males (Kottak and Kozaitis 2012). Our physical make-up distinguishes our sex as either female or male, shaping the gender socialization process we will experience throughout life associated with becoming a woman or man. Gender identity is an individual's self-concept of being female or male and their association with feminine and masculine qualities. People teach gender traits based on sex or biological composition (Kottak and Kozaitis 2012). Our sex signifies the gender roles (i.e., psychological, social, and cultural) we will learn and experience as a member of society. Children learn gender roles and acts of sexism in society through socialization (Griffiths et al. 2015). Girls learn feminine qualities and characteristics and boys masculine ones, forming gender identity. 
Children become aware of gender roles between the ages of two and three, and by four to five years old they are fulfilling gender roles based on their sex (Griffiths et al. 2015). Nonetheless, gender-based characteristics do not always match one's self or cultural identity as people grow and develop. GENDER LABELS 1. Why do people need and use gender labels? 2. Why do people create gender roles or expectations? 3. Do gender labels and roles impose limitations on individuals or the social world? Explain. Gender stratification focuses on the unequal access females have to socially valued resources, power, prestige, and personal freedom as compared to men based on differing positions within the socio-cultural hierarchy (Light, Keller, and Calhoun 1997). Traditionally, society treats women as second-class citizens. The design of dominant gender ideologies and inequality maintains the prevailing social structure, presenting male privilege as part of the natural order (Parenti 2006). Theorists suggest society is a male-dominated patriarchy where men think of themselves as inherently superior to women, resulting in an unequal distribution of rewards between men and women (Henslin 2011). Media portrays women and men in stereotypical ways that reflect and sustain socially endorsed views of gender (Wood 1994). Media affects the perception of social norms including gender. People think and act according to the stereotypes associated with their gender broadcast by media (Goodall 2016). Media stereotypes reinforce gender inequality of girls and women. According to Wood (1994), the underrepresentation of women in media implies that men are the cultural standard and women are unimportant or invisible. Stereotypes of men in media display them as independent, driven, skillful, and heroic, aligning them with higher-level positions and power in society. 
Figure \(3\): Man in Brown Long Sleeved Button Up Shirt Standing While Using Gray Laptop Computer on Brown Wooden Table Beside Woman in Gray Long Sleeved Shirt Sitting. (Public Domain; rawpixel.com). In countries throughout the world, including the United States, women face discrimination in wages, occupational training, and job promotion (Parenti 2006). As a result, society tracks girls and women into career pathways that align with gender roles and match gender-linked aspirations such as teaching, nursing, and human services (Henslin 2011). Society views men's work as having higher value than that of women. Even when women have the same job as men, they make 77 cents for every dollar men make (Griffiths et al. 2015). Inequality in career pathways, job placement, and promotion or advancement results in an income gap between genders, affecting the buying power and economic vitality of women in comparison to men. The United Nations found prejudice and violence against women are firmly rooted in cultures around the world (Parenti 2006). Gender inequality has allowed men to harness and abuse their social power. The leading cause of injury among women of reproductive age is domestic violence, and rape is an everyday occurrence seen as a male prerogative throughout many parts of the world (Parenti 2006). Depictions in the media emphasize male dominant roles and normalize violence against women (Wood 1994). Culture plays an integral role in establishing and maintaining male dominance in society, ascribing men the power and privilege that reinforces the subordination and oppression of women. Cross-cultural research shows gender stratification decreases when women and men make equal contributions to human subsistence or survival (Sanday 1974). Since the industrial revolution, attitudes about gender and work have been evolving with the need for women and men to contribute to the labor force and economy. 
Gendered work, attitudes, and beliefs have transformed in response to American economic needs (Margolis 1984, 2000). Today’s society is encouraging gender flexibility resulting from cultural shifts among women seeking college degrees, prioritizing careers, and delaying marriage and childbirth. SEX-ROLE INVENTORY TRAITS Your task is to find the ten words on the sex-role inventory trait list below that are most often culturally associated with each of the following labels and categories: femininity, masculinity, wealth, poverty, President, teacher, mother, father, minister, or athlete. Write down the label or category and ten terms to compare your lists with other students. 1. self-reliant 2. yielding 3. helpful 4. defends own beliefs 5. cheerful 6. moody 7. independent 8. shy 9. conscientious 10. athletic 11. affectionate 12. theatrical 13. assertive 14. flatterable 15. happy 16. strong personality 17. loyal 18. unpredictable 19. forceful 20. feminine 21. reliable 22. analytical 23. sympathetic 24. jealous 25. leadership ability 26. sensitive to others' needs 27. truthful 28. willing to take risks 29. understanding 30. secretive 31. makes decisions easily 32. compassionate 33. sincere 34. self-sufficient 35. eager to soothe hurt feelings 36. conceited 37. dominant 38. soft-spoken 39. likable 40. masculine 41. warm 42. solemn 43. willing to take a stand 44. tender 45. friendly 46. aggressive 47. gullible 48. inefficient 49. act as leader 50. childlike 51. adaptable 52. individualistic 53. does not use harsh language 54. unsystematic 55. competitive 56. loves children 57. tactful 58. ambitious 59. gentle 60. conventional Compare your results with other students in the class and answer the following questions: 1. What are the trait similarities and commonalities between femininity, masculinity, wealth, poverty, President, teacher, mother, father, minister, and athlete? 2. How are masculinity and femininity used as measures of conditions and vocations? 
Sexuality and Sexual Orientation Sexuality is a person’s inborn capacity for sexual feelings (Griffiths et al. 2015). Normative standards about sexuality differ throughout the world. Cultural codes prescribe sexual behaviors as legal, normal, deviant, or pathological (Kottak and Kozaitis 2012). In the United States, people have restrictive attitudes about premarital sex, extramarital sex, and homosexuality compared to other industrialized nations (Griffiths et al. 2015). The debate on sex education in U.S. schools focuses on abstinence and contraceptive curricula. In addition, people in the U.S. have restrictive attitudes about women and sex, believing men have more urges and therefore that it is more acceptable for men than for women to have multiple sexual partners, setting a double standard. Sexual orientation is a biological expression of sexual desire or attraction (Kottak and Kozaitis 2012). Culture sets the parameters for sexual norms and habits. Enculturation dictates and controls social acceptance of sexual expression and activity. Eroticism, like all human activities and preferences, is learned and malleable (Kottak and Kozaitis 2012). Sexual orientation labels categorize personal views and representations of sexual desire and activities. Most people ascribe and conform to the sexual labels constructed and assigned by society (i.e., heterosexual or desire for the opposite sex, homosexual or attraction to the same sex, bisexual or appeal to both sexes, and asexual or lack of sexual attraction and indifference). The projection of one’s sexual personality is often through gender identity. Most people align their sexual disposition with what is socially or publicly appropriate (Kottak and Kozaitis 2012). Because sexual desire or attraction is inborn, people within the socio-sexual dominant group (i.e., heterosexual) often believe their sexual preference is “normal.” However, no one sexual fit or type, including heterosexuality, is inherently “normal.” 
History has documented diversity in sexual preference and behavior since the dawn of human existence (Kottak and Kozaitis 2012). There is diversity and variance in people’s libido and psychosocial relationship needs. Additionally, sexual activity or fantasy does not always align with sexual orientation (Kottak and Kozaitis 2012). Sexual pleasure from the use of sexual toys, homoerotic images, or kinky fetishes does not necessarily correspond to a specific orientation or sexual label, nor does it mean someone’s desire will alter or convert to another type because of the activity. Regardless, society uses sexual identity as an indicator of status, dismissing the fact that sexuality is a learned behavior, flexible, and contextual (Kottak and Kozaitis 2012). People feel and display sexual variety, erotic impulses, and sensual expressions throughout their lives. Individuals develop sexual understanding around middle childhood and adolescence (APA 2008). There is no conclusive genetic, biological, developmental, social, or cultural evidence that explains homosexual behavior; the difference lies in society’s discriminatory response to homosexuality. Alfred Kinsey was the first to identify sexuality as a continuum rather than a dichotomy of gay or straight (Griffiths et al. 2015). His research showed people do not necessarily fall into the sexual categories, behaviors, and orientations constructed by society. Eve Kosofsky Sedgwick (1990) expanded on Kinsey’s research to find women are more likely to express homosocial relationships such as hugging, handholding, and physical closeness, whereas men often face negative sanctions for displaying homosocial behavior. Society ascribes meaning to sexual activities (Kottak and Kozaitis 2012). Variance reflects the cultural norms and sociopolitical conditions of a time and place. Since the 1970s, organized efforts by LGBTQ (Lesbian, Gay, Bisexual, Transgender, and Questioning) activists have helped establish a gay culture and civil rights (Herdt 1992). 
Gay culture provides social acceptance for persons rejected, marginalized, and punished by others because of sexual orientation and expression. Queer theorists are reclaiming the derogatory label to help broaden the understanding of sexuality as flexible and fluid (Griffiths et al. 2015). Sexual culture is shaped not only by sexual desire and activity but also by dominant affinity groups, linked by common interests or purposes, that seek to restrict and control sexual behavior.
The place people live or occupy renders a lifestyle and cultural identity. People identify with the geographic location they live in as a part of who they are and what they believe (Kottak and Kozaitis 2012). Places have subcultures specific to their geographic location, environmental surroundings, and population. As one of the largest cities in the United States, New York City is home to 21 million people who together speak over 200 languages (U.S. News and World Report 2017). The city itself is fast-paced, and its large population supports the need for around-the-clock services as the “city that never sleeps.” With so many people living in the metropolis, it is a diverse melting pot of racial, ethnic, and socio-economic backgrounds, though each neighborhood is its own enclave with its own identity. This large, heterogeneous population contributes to the impersonal, sometimes characterized as “dismissive and arrogant,” attitudes of its residents. By the very nature and size of the city, people are able to maintain anonymity but cannot develop or sustain intimacy with the entire community or its residents. With millions of diverse people living, working, and playing in 304 square miles, it is understandable why tourists or newcomers feel that residents are in a rush, rude, and unfriendly. On the opposite side of the nation, in the Central Valley of California, many residents live in rural communities. The Central Valley is home to 6.5 million people across 18,000 square miles (American Museum of Natural History 2018). Though there is the large metropolitan hub of Fresno, surrounding communities identify themselves as small agricultural towns with a country lifestyle. Here residents seek face-to-face interactions, and communities operate as kin or families. Like other social categories or labels, people use location to denote status or lifestyle. Consider people in the U.S. who “live in Beverly Hills” or “work on Wall Street.” These locations imply socio-economic status and privilege. 
Values of a dominant regional culture marginalize those who do not possess the cultural characteristics of that geographic location (Kottak and Kozaitis 2012). People who do not culturally fit in a place face social stigma and rejection. People move to explore new areas, experience new cultures, or change status. Changing where we live means changing our social and cultural surroundings, including family, friends, and acquaintances. The most desirable spaces are distributed inequitably (Kottak and Kozaitis 2012). Wealth and privilege provide access to desirable locations and living conditions. The poor, immigrants, and ethnic minorities are most likely to be concentrated in poor communities with less than optimal living standards (Kottak and Kozaitis 2012). Impoverished groups are the most likely to be exposed to environmental hazards and dangerous living conditions. The disproportionate impact of ecological hazards on people of color has led to the development of the environmental justice movement to abolish environmental racism and harm (Energy Justice Network 2018). Geographic places also convey or signify stereotypes. People living in or coming from an area inherit the region’s stereotypes whether they are accurate or not. Think about the previous U.S. examples of “living in Beverly Hills” or “working on Wall Street.” Stereotypes associated with these labels imply wealth and status. However, approximately 10% of people living in Beverly Hills live below the poverty line, and most people employed on Wall Street do not work for financial institutions; instead, they are police officers, sanitation workers, street vendors, and public employees, to name a few (Data USA 2018). YOUR REGIONAL CULTURE The place someone lives influences his or her value system and life. Describe the geographic location where you live and the culture of your community. What values and beliefs do the social norms and practices of your neighborhood instill or project among residents? 
What type of artifacts or possessions (i.e., truck, luxury car, recreational vehicle, fenced yard, swimming pool, etc.) do people living in your community seek out, dismiss, or condone? Do you conform to the cultural standards where you live or deviate from them? Explain how the place you live influences your perceptions, choices, and life. Body and Mind Like other human characteristics, society constructs meaning and defines normality for physical and mental ability and appearance (Kottak and Kozaitis 2012). Behavior categorized as “normal” is the standard for distinguishing appropriate thinking and behavior from an illness or disorder. An example of this construct is the criteria for determining mental illness, which involves examining a person’s functionality against accepted norms, roles, statuses, and behavior appropriate for social situations and settings (Cockerham 2014). The difficulty in defining mental disorders, like that of defining other illnesses or deformities, lies in the ever-changing perspectives of society. For example, “homosexuality was considered a mental disorder by American psychiatrists until the early 1970s” (Cockerham, 2014:3). Other terms and classifications have either been eliminated or evolved over time, including Melancholia (now Depression), Amentia (which once referred to what was then called mental retardation and is no longer used), and Neurosis (which is now classified into subtypes). Primitive society believed mental illness derived from supernatural phenomena (Cockerham 2014). Because mental disorders were not always observable, people thought supernatural powers were the cause of illness. These preliterate cultures assumed people became sick because they had lost their soul, been invaded by an evil spirit, violated a taboo, or fallen victim to witchcraft (Cockerham 2014). Witch doctors or shamans used folk medicine and religious beliefs to produce cures. 
Many of these healers were older in age, had high intellect, and were sometimes sexual deviants, orphans, disabled, or mentally ill themselves (Cockerham 2014). Nonetheless, healers helped reduce anxiety and reinforce faith in social norms and customs. Both physical and mental health conditions become part of a person’s identity. Medical professionals, as was the case with witch doctors and shamans, play a role in labeling illness or defect, internalizing a person’s condition as part of one’s identity (Kottak and Kozaitis 2012). As a result, the culture-free, scientific objectivity of medicine has come into question. For centuries in western society, science sought to validate religious ideologies and texts, including the natural inferiority of women and the mental and moral deficiencies of people of color and the poor (Parenti 2006). Many scientific opinions about the body and mind of minority groups have been disproven and found to be embellished beliefs posing as objective findings. Medicine and psychiatry, like other aspects of social life, have entrenched interests and do not always come from a place of bias-free science. People adopt behaviors to minimize the impact of their illness or ailment on others. A sick person assumes a sick role when ill: the individual is not held responsible for their poor health or disorder, is entitled to release from normal responsibilities, and must take steps to regain his or her health under the care of a physician or medical expert (Parsons 1951). Because society views illness as a dysfunction or abnormality, people who are ill or have a condition learn the sick role, or social expectations, to demonstrate their willingness to cooperate with society though they are unable to perform or maintain standard responsibilities (i.e., attend school, work, participate in physical activity, etc.). Social attributes around an ideal body and mind center on youthfulness and wellness without deformity or defect. 
Though a person’s physical and mental health ultimately affects them intrinsically, society influences the social or extrinsic experience related to one’s body and mind attributes. People face social stigma when they suffer from an illness or condition. Erving Goffman (1963) defined stigma as an unwanted characteristic that is devalued by society. Society labels health conditions or defects (e.g., cancer, diabetes, mental illness, disability, etc.) as abnormal and undesirable, creating a negative social environment for people with physical or mental differences. Individuals with health issues or disparities face suspicion, hostility, or discrimination (Giddens, Duneier, Appelbaum, and Carr 2013). Social stigma accentuates one’s illness or disorder, marginalizing and alienating persons with physical or cognitive limitations. During the Middle Ages, the mentally ill were categorized as fools and village idiots. Some people were tolerated for amusement, others lived with family, and some were placed on ships to be left at distant places (Cockerham 2014). People often blame the victim, suggesting one’s illness or disability resulted from personal choice or behavior and that it is their responsibility to resolve, cope, and adapt. Blaming the victim ignores the reality that an illness or defect is not always preventable, that people cannot always afford health care or the medications to prevent or alleviate conditions, and that care or treatment is not always available. Social stigma often results in individuals avoiding treatment for fear of social labeling, rejection, and isolation. One in four persons worldwide will suffer a mental disorder in their lifetime (Cockerham 2014). In a recent study of California residents, data showed approximately 77% of the population with mental health needs received no or inadequate treatment (Tran and Ponce 2017). 
Children, older adults, men, Latinos, Asians, people with low education, the uninsured, and limited-English speakers were most likely to have an unmet need for treatment. Respondents in the study reported that the cost of treatment and social stigma were the contributing factors to not receiving treatment. Untreated mental disorders have high economic and social costs, including alcoholism, drug abuse, divorce, domestic violence, suicide, and unemployment (Cockerham 2014). The lack of treatment has devastating effects on those in need, their families, and society. Society promotes health and wellness as the norm and ideal life experience. Media upholds these ideals by portraying the body as a commodity and the value of being young, fit, and strong (Kottak and Kozaitis 2012). This fitness-minded culture ascribes greater social status to individuals who are healthy and well than to those afflicted with illness or body and mind differences. In today’s society, there is low tolerance for unproductive citizens, characterized by the inability to work and contribute to the economy. Darwinian (1859) ideology embedded in modern-day principles promotes a culture of strength, endurance, and self-reliance under the guise of survival of the fittest. This culture reinforces the modern-day values of productivity that associate being healthy and well with the ability to compete, conquer, and be successful in work and life. There are body and mind differences associated with age, gender, and race. Ideal, actual, and normal body characteristics vary from culture to culture and even within one culture over time (Kottak and Kozaitis 2012). Nonetheless, cultures throughout the world are obsessed with youth and beauty. We see examples of this in media and fashion, where actors and models fit regional stereotypes of the young and beautiful. 
In the United States, most Hollywood movies portray heroines and heroes who are fit without ailment or defect, under the age of 30, and reinforce beauty labels of hyper-femininity (i.e., thin, busty, sexy, cooperative, etc.) and hyper-masculinity (i.e., built, strong, aggressive, tough, etc.). The fashion industry also emphasizes these body ideals by depicting unrealistic standards of beauty against which people compare themselves, presented as nonetheless achievable by buying the clothes and products models sell. Body and mind depictions in the media and fashion create appearance stereotypes that imply status and class. If one has the resources to purchase high-end brands or expensive apparel, she or he is able to project status through wealth. If one is attractive, she or he is able to project status through beauty. Research shows stereotypes influence the way people speak to each other. People respond warmly and in a friendly manner to attractive people and coldly, reservedly, and humorlessly to unattractive people (Snyder 1993). Additionally, attractive people earn 10-15% more than ordinary or unattractive people (Judge et al. 2009; Hamermesh 2011). We must also note that those able to achieve beauty through plastic surgery or exercise, and who have no health conditions or deformities, are also more likely to be socially accepted and obtain status. People with disabilities have worked to dispel misconceptions and promote nondiscrimination and fair representation (Kottak and Kozaitis 2012). Individuals with body and mind illnesses and differences form support groups and establish membership or affinity based on their condition to organize politically. By acknowledging differences and demanding civil rights, people with illnesses and disabilities are able to receive equal treatment and protection under the law, eliminating the stigma and discriminatory labels society has long placed on them. 
Political organization for social change has given people with body and mind differences the ability to redefine culture and insist on the social inclusion and participation of all people regardless of physical or mental differences, challenges, or limitations. An illustration of civil rights changes occurred in the 20th century with a paradigm shift and the growth of professionals, paraprofessionals, and laypeople in mental health (Cockerham 2014). Treatment shifted to focus on psychoanalysis and psychoactive drugs rather than institutionalization. With this new approach, hospital discharges increased and hospitalization stays decreased (Cockerham 2014). The most recent revolution in mental health treatment was the development of the community mental health model. The model emphasizes local community support as a method of treatment where relationships are the focus of care. This therapeutic approach uses mental health workers who live in the community to fill the service gaps between the patient and professionals, stressing a social rather than medical model (Cockerham 2014). The community mental health model extends civil rights by putting consent to treatment and the approach to services in the hands of patients.
There are two myths or ideas about race. The first suggests people inherit physical characteristics distinguishing race. The second is the idea that one race is superior to others or that one “pure” race exists. In actuality, scientific research mapping the human genome found that humans are homogenous (Henslin 2011). Race is truly an arbitrary label that has become part of society’s culture; there is no justifiable evidence that differences in physical appearance substantiate the idea that there are a variety of human species. Traditionally, racial terms classify and stratify people by appearance and inherently assign racial groups as inferior or superior in society (Kottak and Kozaitis 2012). Scientific data finds only one human species, making up only one human race. Evidence shows physical differences in human appearance, including skin color, are a result of human migration patterns and adaptations to the environment (Jablonski 2012). Nonetheless, people use physical characteristics to identify, relate, and interact with one another. Ethnicity refers to the cultural characteristics related to ancestry and heritage. Ethnicity describes shared culture such as group practices, values, and beliefs (Griffiths et al. 2015). People who identify with an ethnic group share common cultural characteristics (i.e., nationality, history, language, religion, etc.). Ethnic groups select rituals, customs, ceremonies, and other traditions to help preserve shared heritage (Kottak and Kozaitis 2012). Lifestyle requirements and other identity characteristics such as geography and region influence how we adapt our ethnic behaviors to fit the context or setting in which we live. Culture is also key in determining how human bodies grow and develop; food preferences and diet, along with cultural traditions, promote certain activities and abilities, including physical well-being and sport (Kottak and Kozaitis 2012). 
Someone of Mexican descent living in Central California who is a college professor will project different ethnic behaviors than someone of the same ethnic culture who is a housekeeper in Las Vegas, Nevada. Differences in profession, social class, and region will influence each person’s lifestyle, physical composition, and health, though both may identify and affiliate themselves as Mexican. Not all people see themselves as belonging to an ethnic group or view ethnic heritage as important to their identity. People who do not identify with an ethnic identity either have no distinct cultural background, because their ancestors came from a variety of cultural groups and their descendants maintained a blended culture rather than a specific one, or they lack awareness of their ethnic heritage (Kottak and Kozaitis 2012). It may be difficult for some people to feel a sense of solidarity or association with any specific ethnic group because they do not know where their cultural practices originated and how their cultural behaviors adapted over time. What is your ethnicity? Is your ethnic heritage very important, somewhat important, or not important in defining who you are? Why? Race and ethnic identity, like other cultural characteristics, influence social status or position in society. Minority groups are people who receive unequal treatment and discrimination based on social categories such as age, gender, sexuality, race and ethnicity, religious beliefs, or socio-economic class. Minority groups are not necessarily numerical minorities (Griffiths et al. 2015). For example, a large group of people may be a minority group because they lack social power. The physical and cultural traits of minority groups “are held in low esteem by the dominant or majority group which treats them unfairly” (Henslin 2011:217). The dominant group has higher power and status in society and receives greater privileges. 
As a result, the dominant group uses its position to discriminate against those who are different. The dominant group in the United States is represented by white, middle-class, Protestant people of northern European descent (Doane 2005). Minority groups can garner power by expanding political boundaries or through expanded migration, though neither of these efforts occurs with ease and both require societal support from minority and dominant group members. The loss of power among dominant groups threatens not only their authority over other groups but also the privileges and way of life established by the majority. There are seven patterns of intergroup relations between dominant and minority groups, influencing not only the racial and ethnic identity of people but also the opportunities and barriers each will experience through social interactions. Maladaptive contacts and exchanges include genocide, population transfer, internal colonialism, and segregation. Genocide attempts to destroy a group of people because of their race or ethnicity. “Labeling the targeted group as inferior or even less than fully human facilitates genocide” (Henslin 2011:225). Population transfer moves or expels a minority group through direct or indirect transfer. Indirect transfer forces people to leave by making living conditions unbearable, whereas direct transfer literally expels minorities by force. Another form of rejection by the dominant group is a type of colonialism. Internal colonialism refers to a country’s dominant group exploiting the minority group for economic advantage. Internal colonialism generally accompanies segregation (Henslin 2011). In segregation, minority groups live physically separate from the dominant group by law. Three adaptive intergroup relations include assimilation, multiculturalism, and pluralism. The pattern of assimilation is the process by which a minority group assumes the attitudes and language of the dominant or mainstream culture. 
An individual or group gives up its identity by taking on the characteristics of the dominant culture (Griffiths et al. 2015). When minorities assimilate by force to dominant ideologies and practices, they can no longer practice their own religion, speak their own language, or follow their own customs. In permissible assimilation, minority groups adopt the dominant culture in their own way and at their own speed (Henslin 2011). Multiculturalism is the most accepting intergroup relationship between dominant and minority groups. Multiculturalism, or pluralism, encourages variation and diversity. Multiculturalism promotes affirmation and practice of ethnic traditions while socializing individuals into the dominant culture (Kottak and Kozaitis 2012). This model works well in diverse societies composed of a variety of cultural groups and a political system supporting freedom of expression. Pluralism is a mixture of cultures where each retains its own identity (Griffiths et al. 2015). Under pluralism, groups exist separately and equally while working together, such as through economic interdependence, where each group fills a different societal niche and then exchanges activities or services for the sustainability and survival of all. Both the multicultural and pluralism models stress interactions and contributions to society by all ethnic groups. REDUCING ETHNIC CONFLICT Research three online sources on methods and approaches to reducing ethnic conflict such as the following: Video \(1\): The Path to Ending Ethnic Conflicts by Stefan Wolff (https://youtu.be/UfM7t_oqNDw) 1. What is your reaction to the suggestions or ideas for ending ethnic conflicts presented in the sources you identified? 2. Why do the type of leadership, approaches to diplomacy, and collective or organizational design matter in reducing ethnic conflicts? 3. What is the most important idea from the sources you identified as it relates to peacekeeping and multiculturalism? 
Race reflects a social stigma or marker of superiority (Kottak and Kozaitis 2012). When discrimination centers on race, it is racism. There are two types of racial discrimination: individual and institutional. Individual discrimination is “unfair treatment directed against someone” (Henslin 2011:218), whereas institutional discrimination is negative systematic treatment of individuals by society through education, government, the economy, health care, etc. According to Perry (2000), when people focus on racial-ethnic differences, they engage in the process of identity formation through structural and institutional norms. As a result, racial-ethnic identity conforms to the normative perceptions people have of race and ethnicity, reinforcing the structural order without challenging the socio-cultural arrangement of society. Maintaining racial-ethnic norms reinforces differences and creates tension and disputes between racial-ethnic groups, sustaining the status quo and reasserting the dominant group’s position and hierarchy in society. Upon the establishment of the United States, white legislators and leaders limited the roles of racial minorities and made them subordinate to those of white Europeans (Konradi and Schmidt 2004). This structure systematically created governmental and social disadvantages for minority groups and people of color. Today, toxic waste dumps continue to be disproportionately located in areas with nonwhite populations (Kottak and Kozaitis 2012). It has taken over 200 years to ensure civil rights and equal treatment of all people in the United States; however, discriminatory practices continue because of policies, precedents, and practices historically embedded in U.S. institutions and because individuals act on racial stereotypes. Think about the differences people have in employment qualifications, compensation, obtaining home loans, or getting into college. 
What racial and ethnic stereotypes persist about different racial and ethnic groups in these areas of life? Whites in the United States infrequently experience racial discrimination, leaving them unaware of the importance of race in their own and others’ thinking in comparison to people of color or ethnic minorities (Konradi and Schmidt 2004). Many argue racial discrimination is outdated and are uncomfortable with the blame, guilt, and accountability of individual acts and institutional discrimination. People think that paying no attention to race is an act of color blindness that will achieve racial equality and eliminate racist atmospheres (Konradi and Schmidt 2004). They do not realize that the experience of not “seeing” race is itself racial privilege. Research shows the distribution of resources and opportunities is not equal among racial and ethnic categories: White groups do better than other groups, and Blacks are predominantly among the underclass (Konradi and Schmidt 2004). Regardless of social perception, in reality there are institutional and cultural differences in government, education, criminal justice, and media, and racial-ethnic minorities receive subordinate roles and treatment in society. Religion and Belief Systems The concept of a higher power or spiritual truth is a cultural universal. Like ethnicity, religion is a basis of identity and solidarity (Kottak and Kozaitis 2012). People’s beliefs and faith support their values, norms, and practices. Individual faith influences one’s extrinsic motivation and behaviors, including treatment of others. Religion is malleable and adaptive, for it changes and adapts within cultural and social contexts. Human groups have diverse beliefs and different functions for their faith and religion. Historically, religion has driven both social union and division (Kottak and Kozaitis 2012). When religious groups unite, they can be a strong mobilizing force; however, when they divide, they can work to destroy each other. 
Religion may be formal or informal (Kottak and Kozaitis 2012). Someone who is a member of an organized religious group, attends religious services, and practices rituals is a participant in formal or institutional religion. Someone participating in informal religion, by contrast, may or may not be a member of an organized religious group and experiences a communal spirit, solidarity, and togetherness through shared experience. Informal religion may occur when we participate as a member of a team or during a group excursion such as camp. Religion is a vehicle for guiding values, beliefs, norms, and practices. People learn religion through socialization. The meaning and structure of religion control lives through sanctions, or rewards and punishments. Religion prescribes a code of ethics to guide behavior (Kottak and Kozaitis 2012). One who abides by religious teachings receives rewards, such as an afterlife, and one who contradicts its instruction accepts punishment, including damnation. People engage in religion and religious practices because they think it works (Kottak and Kozaitis 2012). The connection between religious faith and emotion sustains belief, playing a strong role in personal and social identity. What formal or informal religious experiences have you encountered during your life? How do your faith and spirituality conform to or deviate from those of your family of origin and friends?
A person's socio-economic status influences her or his personal and social identity. In society, we rank individuals on their wealth, power, and prestige (Weber [1968] 1978). Wealth, or net worth, is the total value of a person's assets minus their debts; income, by contrast, is the flow of resources a person receives from work and investments. Power is the ability to influence others directly or indirectly, and prestige is the esteem or respect associated with social status (Carl 2013). This social stratification system, or ranking, creates inequality in society and determines one's social position in areas such as income, education, and occupation.

Multiple factors influence social standing; however, people often assume hard work and effort lead to high status and wealth. Socialization reinforces the ideology that social stratification is a result of personal effort or merit (Carl 2013). The concept of meritocracy is a social ideal or value, but no society exists where social rank is determined purely by merit. Inheritance alone shows social standing is not always individually earned: some people need to put little to no effort into inheriting social status and wealth. Additionally, societies operating under a caste system, where birth determines lifelong status, undermine meritocracy. Caste systems function on the premise that someone born into a low-status group remains low status regardless of their accomplishments, and those born into high-status groups stay high status (Henslin 2011). The caste system reinforces ascribed rather than achieved status to ensure the sustainment of multiple roles and occupations in society. In modern societies, there is evidence of merit-based standing in academics and job performance, but other factors such as age, disability, gender, race, and region influence life's opportunities and challenges for obtaining social standing.
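The stock-versus-flow distinction above, wealth as assets minus debts and income as a separate resource stream, can be sketched in a few lines of Python. The dollar figures here are hypothetical illustrations, not data from the text.

```python
# Minimal sketch of the net worth (wealth) calculation described above.
# Wealth is a stock: total assets minus total debts.
# Income is a flow, tracked separately from wealth.

def net_worth(assets: float, debts: float) -> float:
    """Return wealth (net worth): total assets minus total debts."""
    return assets - debts

# Hypothetical example figures (not from the text):
annual_income = 52_000   # wages plus investment returns -- a flow, not wealth
assets = 180_000         # home equity, savings, vehicle
debts = 95_000           # mortgage balance, loans

print(net_worth(assets, debts))  # → 85000
```

Note that two people with identical incomes can have very different net worths, which is why sociologists treat wealth and income as distinct dimensions of socio-economic status.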
A major flaw of meritocracy is how society measures social contributions. Janitorial and custodial work is as necessary in society, to reduce illness and manage waste, as surgery is to keep people healthy and alive, but surgeons receive greater rewards than janitors do for their contributions. Marx and Engels (1967) suggested there is a social class division between the capitalists, who control the means of production, and the workers. In 1985, Erik Wright interjected that people can occupy contradictory class positions throughout their lifetime. People who have occupied various class positions (e.g., bookkeeper to manager to chief operating officer) relate to the experiences of others in those positions and, as a result, may feel internal conflict in handling situations between positions or in favoring one over another. Late in the twentieth century, Joseph Kahl and Dennis Gilbert (1992) updated the theoretical perspective of Max Weber by developing a six-tier model portraying the United States class structure: underclass, working poor, working class, lower middle class, upper middle class, and capitalists. The social class model depicts the distribution of property, prestige, and power in society based on income and education. Each class lifestyle requires a certain level of wealth in order to acquire the material necessities and comforts of life (Henslin 2011). The correlation between the standard of living and quality of life, or life chances (i.e., opportunities and barriers), influences one's ability to afford food, shelter, clothing, healthcare, other basic needs, and luxury items. A person's standard of living, including income, employment, class, and housing, affects their cultural identity.

Figure \(1\): Man Praying on Sidewalk with Food in Front. (CC BY 4.0; Sergio Omassi).

Social class serves as a marker or indication of resources. These markers are noticeable in the behaviors, customs, and norms of each stratified group (Carl 2013).
People living in impoverished communities have different cultural norms and practices compared to those with middle incomes or families of wealth. For example, the urban poor often sleep on cardboard boxes on the ground or on sidewalks and feed themselves by begging, scavenging, and raiding garbage (Kottak and Kozaitis 2012). Middle-income and wealthy families tend to sleep in housing structures and nourish themselves with food from supermarkets or restaurants. Language and fashion also vary among these classes because of educational attainment, employment, and income. People will use language like "white trash" or "welfare mom" to marginalize people in the lower class and use distinguished labels such as "noble" and "elite" to identify the upper class. People sometimes engage in conspicuous consumption, the purchase and use of certain products (e.g., a luxury car or jewelry) to make a social statement about their status (Henslin 2011). Nonetheless, the experience of poor people is very different from that of others in the upper and middle classes, and the lives of people within each social class may vary based on their position within other social categories, including age, disability, gender, race, region, and religion.

Similar to people, nations are also stratified. The most extreme social class differences are between the wealthiest in industrialized countries and the poorest in the least developed nations (Kottak and Kozaitis 2012). The most industrialized or modern countries have the greatest property and wealth. Most industrialized nations are leaders in technology and progress, allowing them to dominate, control, and access global resources. Industrializing nations have much lower incomes and standards of living than the most industrialized nations (Henslin 2011). The least industrialized nations are not modern, and people living in these nations tend to be impoverished and live on farms and in villages.
HIDDEN RULES OF CLASS

Could you survive in poverty, middle class, or wealth? In her book A Framework for Understanding Poverty (2005), Dr. Ruby K. Payne presents lists of survival skills needed by different societal classes. Test your skills by answering the following questions. Could you survive in . . . (mark all that apply)

POVERTY: I know how to . . .
1. ____ find the best rummage sales.
2. ____ locate grocery stores' garbage bins that have thrown-away food.
3. ____ bail someone out of jail.
4. ____ get a gun, even if I have a police record.
5. ____ keep my clothes from being stolen at the laundromat.
6. ____ sniff out problems in a used car.
7. ____ live without a checking account.
8. ____ manage without electricity and a phone.
9. ____ entertain friends with just my personality and stories.
10. ____ get by when I don't have money to pay the bills.
11. ____ move in half a day.
12. ____ get and use food stamps.
13. ____ find free medical clinics.
14. ____ get around without a car.
15. ____ use a knife as scissors.

MIDDLE CLASS: I know how to . . .
1. ____ get my children into Little League, piano lessons, and soccer.
2. ____ set a table properly.
3. ____ find stores that sell the clothing brands my family wears.
4. ____ use a credit card, checking and/or savings account.
5. ____ evaluate insurance: life, disability, 20/80 medical, homeowners, and personal-property.
6. ____ talk to my children about going to college.
7. ____ get the best interest rate on my car loan.
8. ____ help my children with homework and don't hesitate to make a call if I need more information.

WEALTH: check if you . . .
1. ____ can read a menu in French, English, and another language.
2. ____ have favorite restaurants in different countries around the world.
3. ____ know how to hire a professional decorator to help decorate your home during the holidays.
4. ____ can name your preferred financial advisor, lawyer, designer, hairdresser, or domestic-employment service.
5. ____ have at least two homes that are staffed and maintained.
6. ____ know how to ensure confidentiality and loyalty with domestic staff.
7. ____ use two or three "screens" that keep people whom you don't wish to see away from you.
8. ____ fly in your own plane, the company plane, or the Concorde.
9. ____ know how to enroll your children in the preferred private schools.
10. ____ are on the boards of at least two charities.
11. ____ know the hidden rules of the Junior League.
12. ____ know how to read a corporate balance sheet and analyze your own financial statements.
13. ____ support or buy the work of a particular artist.

IDENTITY TODAY

All forms of media and technology teach culture, including values, norms, language, and behaviors, by providing information about activities and events of social significance (Griffiths et al. 2015). Media and technology socialize us to think and act within socio-culturally appropriate norms and accepted practices. Watching and listening to people act and behave through media and technology shows that this social institution, like family, peers, school, and work, teaches social norms, values, and beliefs. Technological innovations and advancements have influenced social interactions and communication patterns in the twenty-first century, creating new social constructions of reality. These changes, particularly in information technology, have led to further segmentation of society based on user-participant affinity groups (Kottak and Kozaitis 2012). The internet and web-based applications link people together around common interests, transcending local, state, and national boundaries. People who share interests, ideas, values, beliefs, and practices are able to connect to one another through web-based and virtual worlds. These shared interests create solidarity among user-participants while disengaging them from others with differing or opposing interests.
Cybersocial interactions have reinforced affinity groups, creating attitudes and behaviors that strongly encourage tribalism, or loyalty to the social group and indifference to others. Even though there are many media, news, and information outlets available online, they are homogeneous, telling the same stories using the same sources and delivering the same message (McManus 1995). Regardless of the news or information outlet one accesses, the coverage of events is predominantly the same, with differences centering on commentary, perspective, and analysis. Shoemaker and Vos (2009) found this practice allows outlets to serve as gatekeepers by shaping stories and messages into mass media-appropriate forms and reducing them to a manageable amount for the audience. Fragmentation of stories and messages occurs along ideological lines rather than in the actual coverage of accounts, reports, or news. People no longer form and take on identity solely from face-to-face interactions; they also construct themselves from online communication and cybersocial interactions. Approximately 73 percent of adults engage in some sort of online social networking, extending their cultural identity to virtual space and time (Pew Research Center 2011). Technological innovations and advancements have even led some people to construct a new online identity different from the one they present in face-to-face contexts. Both identities and realities are real to the people who construct and create them, as they are the cultural creators of their personas. Technology, like other resources in society, creates inequality among social groups (Griffiths et al. 2015). People with greater access to resources have the ability to purchase and use online services and applications. Privileged access to technological innovations and advancements depends on one's age, family, education, ethnicity, gender, profession, race, and social class (Kottak and Kozaitis 2012).
Signs of technological stratification are visible in the increasing knowledge gap for those with less access to information technology. People with exposure to technology gain further proficiency that makes them more marketable and employable in modern society (Griffiths et al. 2015). The widening knowledge gap results from the lack of technological infrastructure among races, classes, and geographic areas, creating a digital divide between those who have internet access and those who do not.

NATIVE ANTHROPOLOGIST

Native anthropologists study their own culture. For this project, you will explore your own culture by answering the questions below. Your response to each question must be a minimum of one paragraph consisting of 3-5 sentences, typed, and in ASA format (i.e., paragraphs indented and double-spaced). You must include parenthetical citations if you ask or interview someone in your family or kin group to help you understand and answer any one of the questions. Here is a helpful link with information on citing interviews in ASA format: libguides.tru.ca/c.php?g=194012&p=1277266.

PART 1

1. In examining your background and heritage, what traditions or rituals do you practice regularly? To what extent are traditional cultural group beliefs still held by individuals within the community? To what extent and in what areas has your ethnic or traditional culture changed in comparison to your ancestors?
2. What major stereotypes do you have about other cultural groups based on age, gender, sex, sexuality, race, ethnicity, region, and social class?
3. Reflecting on your cultural background, how do you define family?
4. What is the hierarchy of authority in your family?
5. What do you think are the functions and obligations of the family as a large social unit to individual family members? To school? To work? To social events?
6. What do you think are the rights and responsibilities of each family member?
For example, do children have an obligation to work and help the family?
7. In your culture, what stage of life is most valued?
8. What behaviors are appropriate or unacceptable for children of various ages? How might these conflict with behaviors taught or encouraged in school, at work, or by other social groups?
9. How does your cultural group compute age? What commemoration is recognized or celebrated, if any (i.e., birthdays, anniversaries, etc.)?

PART 2

1. Considering your cultural heritage, what roles within a group are available to whom, and how are they acquired?
2. Are there class or status differences in the expectations of roles within your culture?
3. Do particular roles have positive or malevolent characteristics?
4. Is language competence a requirement or qualification for family or cultural group membership?
5. How do people greet each other?
6. How is deference or respect shown?
7. How are insults expressed?
8. Who may disagree with whom in the cultural group? Under what circumstances? Are mitigating forms used?
9. Which cultural traditions or rituals are written, and how widespread is cultural knowledge found in written forms?
10. What roles, attitudes, or personality traits are associated with particular ways of speaking among the cultural group?
11. What is the appropriate decorum or manners among your cultural group?
12. What counts as discipline in terms of your culture, and what doesn't? What is its importance and value?
13. Who is responsible, and how is blame ascribed, if a child misbehaves?
14. Do means of social control vary with recognized stages in the life cycle, membership in various social categories (i.e., gender, region, class, etc.), or according to setting or offense?
15. What is the role of language in social control? What is the significance of using the first vs. the second language?

PART 3

1. What is considered sacred (religious) and what secular (non-religious)?
2. What religious roles and authority are recognized in the community?
3. What should an outsider not know, or not acknowledge knowing, about your religion or culture?
4. Are there any external signs of participation in religious rituals (e.g., ashes, dress, marking)?
5. Are dietary restrictions to be observed, including fasting on particular occasions?
6. Are there any prescribed religious procedures or forms of participation if there is a death in the family?
7. What taboos are associated with death and the dead?
8. Who or what is believed to cause illness or death (e.g., biological vs. supernatural or other causes)?
9. Who or what is responsible for treating or curing illness?
10. Reflecting on your culture, what foods are typical or favorites? What are taboo?
11. What rules are observed during meals regarding age and sex roles within the family, the order of serving, seating, utensils used, and appropriate verbal formulas (e.g., how, and if, one may request, refuse, or thank)?
12. What social obligations are there with regard to food giving, preparation, reciprocity, and honoring people?
13. What relation does food have to health? What medicinal uses are made of food, or categories of food?
14. What are the taboos or prescriptions associated with the handling, offering, or discarding of food?
15. What clothing is common or typical among your cultural group? What is worn for special occasions?
16. What significance does dress have for group identity?
17. How does dress differ for age, sex, and social class? What restrictions are imposed for modesty (e.g., can girls wear pants, wear shorts, or shower in the gym)?
18. What is the concept of beauty or attractiveness in the culture? What characteristics are most valued?
19. What constitutes a compliment of beauty or attractiveness in your culture (e.g., in traditional Latin American culture, telling a woman she is getting fat is a compliment)?
20. Does the color of dress have symbolic significance (e.g., black or white for mourning, celebrations, etc.)?

PART 4

1. In your culture, what individuals and events in history are a source of pride for the group?
2. How is knowledge of the group's history preserved? How and in what ways is it passed on to new generations (e.g., writings, aphorisms or opinions, proverbs or sayings)?
3. Do any ceremonies or festive activities re-enact historical events?
4. Among your cultural group, what holidays and celebrations are observed? What is their purpose? What cultural values do they intend to inculcate?
5. What aspects of socialization/enculturation do the holidays and celebrations observed further?
6. In your culture, what is the purpose of education?
7. What methods for teaching and learning are used at home (e.g., modeling and imitation, didactic stories and proverbs, direct verbal instruction)?
8. What is the role of language in learning and teaching?
9. How many years is it considered 'normal' for children to go to school?
10. Are there different expectations with respect to different groups (e.g., boys vs. girls)? In different subjects?
11. Considering your culture, what kinds of work are prestigious and why?
12. Why is work valued (e.g., financial gain, group welfare, individual satisfaction, promotion of group cohesiveness, fulfillment or creation of obligations, position in the community, etc.)?

PART 5

1. How and to what extent may approval or disapproval be expressed in your culture?
2. What defines the concept of success among your cultural group?
3. To what extent is it possible or proper for an individual to express personal vs. group goals?
4. What beliefs are held regarding luck and fate?
5. What significance does adherence to traditional culture have for individual success or achievement?
6. What perceptions exist about the effect that acquiring the dominant group's culture has on success or achievement?
7. Do parents expect and desire assimilation of children to the dominant culture as a result of education and the acquisition of language?
8. Are the attitudes of the cultural community the same as or different from those of cultural leaders?
9. Among your cultural group, what beliefs or values are associated with concepts of time? How important are punctuality, speed, patience, etc.?
10. Are particular behavioral prescriptions or taboos associated with the seasons?
11. Is there a seasonal organization of work or other activities?
12. How do individuals organize themselves spatially in groups during cultural events, activities, or gatherings (e.g., in rows, circles, around tables, on the floor, in the middle of the room, etc.)?
13. What is the spatial organization of the home in your culture (e.g., particular activities in various areas of the home, areas allotted to or open to children)?
14. What geo-spatial concepts, understandings, and beliefs (e.g., cardinal directions, heaven, hell, sun, moon, stars, natural phenomena, etc.) exist among the cultural group or are known to individuals?
15. Are particular behavioral prescriptions or taboos associated with geo-spatial concepts, understandings, and beliefs? What sanctions are there against individuals violating restrictions or prescriptions?
16. Which animals are valued in your culture, and for what reasons?
17. Which animals are considered appropriate as pets, and which are inappropriate? Why?
18. Are particular behavioral prescriptions or taboos associated with particular animals?
19. Are any animals of religious significance? Of historical importance?
20. What forms of art and music are most highly valued?
21. What art media and musical instruments are traditionally used?
22. Are there any behavioral prescriptions or taboos related to art and music (e.g., both sexes sing, play a particular instrument, paint or photograph nude images, etc.)?
4.S: Cultural Identity (Summary)

Key Terms and Concepts

Age Stratification, Anticipatory Socialization, Assimilation, Collective Identity, Color Blind Racism, Cultural Codes, Cultural Identity, Cybersocial Interactions, Dominant Regional Culture, Ethnicity, Fitness-Minded Culture, Formal Religion, Gatekeepers, Gay Culture, Gender, Gender Identity, Gender Inequality, Gender Socialization, Gender Stratification, Genocide, Individual Discrimination, Informal Religion, Institutional Discrimination, Internal Colonialism, Life Course, Meritocracy, Minority Groups, Multiculturalism, Net Worth, Patterns of Intergroup Relations, Pluralism, Population Transfer, Power, Queer Theory, Race, Racial Privilege, Racism, Religion, Segregation, Sex, Sexual Identity, Sexual Orientation, Sexuality, Sick Role, Social Stigma, Social Stratification, Socio-Economic Status, Wealth
Learning Objectives

At the end of the module, students will be able to:
1. understand the influence of globalization on culture and cultural identity
2. differentiate between the social patterns of cultural homogenization and cultural heterogenization
3. explain the role of technological advancements in cultural creation and transmission
4. summarize the process for creating cultural awareness and building cultural intelligence
5. demonstrate methods and approaches for working with others in a culturally diverse society

Everyday production of culture centers on local and global influences (Giddens 1991). With the advancements in technology and communications, people are experiencing greater social forces in the construction of their cultural reality and identity. The boundaries of locality have expanded to global and virtual contexts that create complexities in understanding the creation, socialization, adaptation, and sustainability of culture.

• 5.1: Globalization and Identity At the end of the module, students will be able to understand the influence of globalization on culture and cultural identity. They will also be able to differentiate between the social patterns of cultural homogenization and cultural heterogenization. This chapter also explains the role of technological advancements in cultural creation and transmission. Overall, it summarizes the process for creating cultural awareness and building cultural intelligence.
• 5.2: Culture Today With the world in flux from globalization and technological advances, people are developing multiple identities apparent in their local and global linkages.
• 5.3: Building Cultural Intelligence Building cultural intelligence requires active awareness of self, others, and context. Cultural background greatly influences perception and understanding, and how we identify ourselves reflects on how we communicate and get along with others.
• 5.S: The Multicultural World (Summary)

Thumbnail: Women Falling in Line Holding Each Other. (CC BY 4.0; mentatdgt).

05: The Multicultural World

Globalization is typically associated with the creation of a world-spanning free market and the global reach of capitalist systems resulting from technological advances (Back, Bennett, Edles, Gibson, Inglis, Jacobs, and Woodward 2012). However, globalization has the unintended consequence of connecting every person in the world to each other. In this era, everyone's life is connected to everyone else's life in obvious and hidden ways (Albrow 1996). A food production shortage in the United States affects the overall economic and physical well-being and livelihoods of people throughout the world in an obvious way. Our hidden connections stem from the individuals who grow, produce, and transport the food people eat. It is easier for people to recognize the big-picture or macrosociological influences we have on each other, but sometimes harder to recognize the role individuals have in each other's lives across the globe. Globalization also influences our cultural identity and affinity groups. Technology allows us to eliminate communication boundaries and interact with each other on a global scale. Globalization lends itself to cultural homogenization, that is, the world becoming culturally similar (Back et al. 2012). However, the cultural similarities we now share center on capitalist enterprises including fashion and fast food. Globalization has resulted in the worldwide spread of capitalism (Back et al. 2012). Transnational corporations, or companies with locations throughout the world such as McDonald's, Coca-Cola, and Nike, dominate the global market with goods and services, spreading and embedding their cultural artifacts on a global scale.
These corporations increase the influence of global practices on people's lives, sometimes with economic and social consequences, including closing factories in one country and moving to another where costs and regulations are lower. Along with people throughout the world becoming culturally similar, sociologists also recognize patterns of cultural heterogenization, where aspects of our lives are becoming more complex and differentiated as a result of globalization. Our social relationships and interactions have become unconstrained by geography (Back et al. 2012). People are no longer restricted to spatial locales and are able to interact beyond time and space with those sharing a common culture, language, or religion (Giddens 1990; Kottak and Kozaitis 2012). People can travel across the globe within hours, but can also connect with others by phone or the Internet within seconds. These advancements in technology and communications alter what people perceive as close and far away (Back et al. 2012). Our social and cultural arrangements in an era of globalization are adapting and changing the way we think and act.

Today people are able to live and form ties across national borders. Advances in transportation and communications give people the opportunity to affiliate with multiple countries as transnationals. At different times of their lives or different times of the year, people may live in two or more countries. We are moving beyond local, state, and national identities to broader identities developing from our global interactions, forming transnational communities. A key cultural development has been the construction of globality, or thinking of the whole earth as one place (Beck 2000). Social events like Earth Day and the World Cup of soccer are examples of globality. People associate and connect with others with whom they identify. Today people frame their thinking about who they are within global lenses of reference (Back et al. 2012).
Even in our global and virtual interactions, people align themselves with the affinity groups where they think they belong and will find acceptance. Think about your global and virtual friend and peer groups. How did you meet or connect? Why do you continue to interact? What value do you have in each other's lives even though you do not physically interact?

OUR ONLINE FUTURE

Research three online sources on how online interactions and social media influence human social life, such as the following: The Future of the Web May Not be Social by Mitch Joel (https://www.youtube.com/watch?v=xh0obyhZPM8)

1. What is the relationship between inputting information online and privacy?
2. Do you think the web reinforces narcissism? How does narcissistic behavior influence our connections to the social world and other people?
3. Even if we choose NOT to participate or be part of the online universe, how does the behavior of other people online affect what the world knows about us?
4. Should everything we do online be open and available to the public? Who should be able to view your browsing patterns, profile, photos, etc.?
5. What rights do you think people should have in controlling their privacy online?

5.02: Culture Today

With the world in flux from globalization and technological advances, people are developing multiple identities apparent in their local and global linkages. Cultural identity is becoming increasingly contextual in the postmodern world, where people transform and adapt depending on time and place (Kottak and Kozaitis 2012). Social and cultural changes now occur in response to single events or issues. The instant response and connection to others beyond time and place immediately impacts our lives, and we have the technology to react quickly with our thoughts and actions. Approximately two-thirds of American adults are online connecting with others, working, studying, or learning (Griswold 2013).
The increasing use of the Internet makes virtual worlds and cybersocial interactions powerful in constructing new social realities. Having a networked society allows anyone to be a cultural creator and develop an audience by sharing their thoughts, ideas, and work online. Amateurs are now cultural creators and have the ability to control the dissemination of their creations (Griswold 2013). Individuals now have the freedom to restrict or share cultural meanings and systems. Postmodern culture and the new borderless world fragment traditional social connections into new cultural elements beyond place, time, and diversity, without norms. People can now live within global electronic cultural communities and reject cultural meta-narratives (Griswold 2013). Postmodern culture also blurs history by rearranging and juxtaposing unconnected signs to produce new meanings. We find references to actual events in fictional culture and fictional events in non-fictional culture (Barker and Jane 2016). Many U.S. television dramas refer to 9/11 in episodes focusing on terrorists or terrorist activities. Additionally, U.S. social activities and fundraising events will highlight historical figures or icons. The blurring of non-fiction and fiction creates a new narrative or historical reality that people begin to associate with and recognize as actual or fact.

CULTURAL TRANSFORMATION

1. How have globalization and technology changed culture and cultural tastes?
2. How have people harnessed these changes into cultural objects or real culture?
3. How do you envision the growth or transformation of receivers or the audience as participants in cultural production?
4. What cultural objects are threatened in the age of postmodern culture?
In a culturally diverse society, it is becoming increasingly important to be able to interact effectively with others. Our ability to communicate and interact with each other plays an integral role in the successful development of our relationships for personal and social prosperity. Building cultural intelligence requires active awareness of self, others, and context (Bucher 2008). Self-awareness requires an understanding of our cultural identity including intrinsic or extrinsic bias we have about others and social categories of people. Cultural background greatly influences perception and understanding, and how we identify ourselves reflects on how we communicate and get along with others. It is easier to adjust and change our interactions if we are able to recognize our own uniqueness, broaden our perceptions, and respect others (Bucher 2008). We must be aware of our cultural identity including any multiple or changing identities we take on in different contexts as well as those we keep hidden or hide to avoid marginalization or recognition. Active awareness of others requires us to use new cultural lenses. We must learn to recognize and appreciate commonalities in our culture, not just differences. This practice develops understanding of each other’s divergent needs, values, behaviors, interactions, and approach to teamwork (Bucher 2008). Understanding others involves evaluating assumptions and cultural truths. Cultural lenses filter perceptions of others and condition us to view the world and others in one way, blinding us to what we have to offer or how we complement each other (Bucher 2008). Active awareness of others broadens one’s sociological imagination to see the world and others through a different lens and understand diverse perspectives, which ultimately helps us interact and work together effectively. 
Today’s workplace requires us to have a global consciousness that encompasses awareness, understanding, and skills to work with people of diverse cultures (Bucher 2008). Working with diverse groups involves learning about other cultures to manage complex and uncertain social situations and contexts. What may be culturally appropriate or specific in one setting may not apply in another. This means we must develop a cultural understanding of not only differences and similarities, but those of cultural significance as well to identify which interactions fit certain situations or settings. CULTURAL INTELLIGENCE RESOURCES 1. How do you develop collaboration among people with different backgrounds and experiences? 2. What role does power play in our ability to collaborate with others and develop deep levels of understanding? 3. How might power structures be created when one group tries to provide aid to another? 4. Research the Cultural Intelligence Center (https://culturalq.com/) and online videos on the topic of building cultural intelligence such as Cultural Intelligence: A New Way of Thinking by Jeff Thomas (https://www.youtube.com/watch?v=K3S7...ature=youtu.be). Describe what information and free services are available online to help people improve their knowledge and communication skills with people of different cultural backgrounds and experiences. 5. Provide examples of how you will apply the following skills to develop global consciousness: • Minimize culture shock • Recognize ethnocentrism • Practice cultural relativism • Develop multiple consciousness • Step outside your comfort zone As we come into contact with diverse people, one of our greatest challenges will be managing cross-cultural conflict. When people have opposing cultural values, beliefs, norms, or practices, they tend to create a mindset of division or the “us vs. them” perspective. 
This act of loyalty to one side or another displays tribalism and creates an ethnocentric and scapegoating environment where people judge and blame each other for any issues or problems. Everyone attaches some importance to what one values and believes. As a result, people from different cultures might attach greater or lesser importance to family and work. If people are arguing over the roles and commitment of women and men in the family and workplace, their personal values and beliefs are likely to influence their willingness to compromise or listen to one another. Learning to manage conflict among people from different cultural backgrounds increases our ability to build trust, respect all parties, deal with people’s behaviors, and assess success (Bucher 2008). How we deal with conflict influences productive or destructive results for others and ourselves. Self-assessment is key to managing cross-cultural conflicts. Having everyone involved in the conflict assess herself or himself first and recognize their cultural realities (i.e., history and biography) will help individuals see where they may clash or conflict with others. If someone comes from the perspective that men should lead, their interactions with others will display women in low regard or in positions subordinate to men. Recognizing our cultural reality will help us identify how we might be stereotyping and treating others and give us cause to adapt and avoid conflict with those with differing realities. Some form of cultural bias is evident in everyone (Bucher 2008). Whether you have preferences based on gender, sexuality, disability, region, social class, or all social categories, they affect your thoughts and interactions with others. Many people believe women are nurturers and responsible for child rearing, so they do not believe men should get custody of the children when a family gets a divorce. Bias serves as the foundation for stereotyping and prejudice (Bucher 2008). 
Many of the ideas we have about others are ingrained, and we have to unlearn what we know to reduce or manage bias. Removing biased perspectives requires resocialization through an ongoing conscious effort in recognizing our bias and then making a diligent effort to learn about others to separate fiction from fact. Dealing with bias demands personal growth, and the biggest obstacles are our fears and complacency to change. Additionally, power structures and stratification emerge in cross-cultural conflicts. The dynamics of power impact each of us (Bucher 2008). Our assumptions and interactions with each other are a result of our position and power in a particular context or setting. The social roles and categories we each fall into affect how and when we respond to each other. A Hispanic, female, college professor has the position and authority to speak and control conflict of people in her classroom but may have to show deference and humility when conflict arises at the Catholic Church she attends. The professor’s position in society is contextual: in some situations, she has the privileges of power, but in others, she may be marginalized or disregarded. Power affects how others view, relate, and interact with us (Bucher 2008). Power comes with the ability to change, and when you have power, you are able to invoke change. For example, the racial majority in the United States holds more economic, political, and social power than other groups in the nation. The dominant group’s power in the United States allows the group to define social and cultural norms as well as condemn or contest opposing views and perspectives. This group has consistently argued the reality of “reverse racism” even though racism is the practice of the dominant race benefiting from the oppression of others. 
Because the dominant group has felt prejudice and discrimination by others, they want to control the narrative and use their power to create a reality that further benefits their race by labeling thoughts and actions against the group as “reverse racism.” However, when you are powerless, you may not have or be given the opportunity to participate or have a voice. Think about when you are communicating with someone who has more power than you. What do your tone, word choice, and body language project? Now imagine you are the person in a position of power because of your age, gender, race, or other social category: what privilege does your position give you? Power implies authority, respect, significance, and value, so those of us who do not have a social position of power in a time of conflict may feel and receive treatment that reinforces our lack of authority, disrespect, insignificance, and devaluation. Therefore, power reinforces the social exclusion of some, inflaming cross-cultural conflict (Ryle 2008). We must assess our cultural and social power as well as those of others we interact with to develop an inclusive environment that builds on respect and understanding to deal with conflicts more effectively. Communication is essential when confronted with cross-cultural conflict (Bucher 2008). Conflicts escalate from our inability to express our cultural realities or interact appropriately in diverse settings. In order to relate to each other with empathy and understanding, we must learn to employ positive words, phrases, and body language. Rather than using negative words that take sides (e.g., “Tell your side of the problem” or “How did that affect you?”), use positive words that describe an experience or feeling. Use open-ended questions that focus on the situation or concern (e.g., “Could you explain to be sure everyone understands?” or “Explain how this is important and what needs to be different”) in your communications with others (Ryle 2008). 
In addition, our body language expresses our emotions and feelings to others. People are able to recognize sadness, fear, and disgust through the expressions and movements we make. It is important to project expressions, postures, and positions that are open and inviting even when we feel different or uncomfortable around others. Remember, words and body language have meaning and set the tone or atmosphere in our interactions with others. The words and physical expressions we choose either inflame or de-escalate cross-cultural conflicts. The act of reframing or rephrasing communications is also helpful in managing conflicts between diverse people. Reframing requires active listening skills and patience to translate negative and value-laden statements into neutral statements that focus on the actual issue or concern. This form of transformative mediation integrates neutral language that focuses on changing the message delivery, syntax or wording, meaning, and context or situation to resolve destructive conflict. For example, reframe “That’s a stupid idea” to “I hear you would like to consider all possible options.” Similarly, reframe a direct verbal attack, “She lied! Why do you want to be friends with her?” to “I’m hearing that confidentiality and trust are important to you.” There are four steps to reframing: 1) actively listen to the statement; 2) identify the feelings, message, and interests in communications; 3) remove toxic language; and 4) re-state the issue or concern (Ryle 2008). These tips for resolving conflict help people hear the underlying interests and cultural realities. ETHNOGRAPHY PART 1 1. Interview another student in class. Record the student’s responses to the following: CULTURAL EXPRESSIONS • What are typical foods served in the culture? • Are there any typical styles of dress? • What do people do for recreation? 
• How is space used (e.g., How close should two people who are social acquaintances stand next to one another when they are having a conversation?) • How is public space used? For example, do people tend to “hang out” on the street, or are they in public because they are going from one place to the next? STANDARD BEHAVIORS • How do people greet one another? • Describe how a significant holiday is celebrated. • How would a visitor be welcomed into a family member’s home? • What are the norms around weddings? Births? Deaths? SPECIFIC BELIEFS • How important is hierarchy or social status? • How are gender roles perceived? • How do people view obligations toward one another? • What personal activities are seen as public? What activities are seen as private? • What are the cultural attitudes toward aging and the elderly? ENTRENCHED IDEOLOGIES • How important is the individual in the culture? How important is the group? • How is time understood and measured? (e.g., How late can you be to class, work, a family event, or an appointment before you are considered rude?) • Is change considered positive or negative? • What are the criteria for individual success? • What is the relationship between humans and nature? (e.g., Do humans dominate nature? Does nature dominate humans? Do the two live in harmony?) • What is considered humorous or funny? • How do individuals “know” things? (e.g., Are people encouraged to question things? Are they encouraged to master accepted wisdom?) • Are people encouraged to be more action-oriented (i.e., doers) or to be contemplative (i.e., thinkers)? • What is the role of luck in people’s lives? • How is divine power or spirituality viewed? PART 2 1. Exchange the photos each of you took in the exercise. 2. Next, visit the website Dollar Street (https://goo.gl/Rb8WUJ) 3. Compare the visual ethnography photos with other people throughout the world. 4. In complete sentences, explain the differences and similarities based on income and country. 
Specifically, describe what the poorest conditions are for each item as well as the richest conditions and what similarities and/or differences exist in comparison to the student photos. PART 3 Write a paper summarizing the ethnographic data you collected and examined. Your paper must include a description and analysis of the following: • Thesis statement and introductory paragraph (3-5 sentences) about the student you studied and learned about for this project and the methods used to gather data. • A summary of the ethnography interview containing a minimum of five paragraphs (3-5 sentences each) with first level headings entitled cultural expressions, standard behaviors, specific beliefs, and entrenched ideologies. • A comparison of visual ethnography photos with other people throughout the world using the Dollar Street website (https://www.gapminder.org/dollar-street/matrix). Write a minimum of 10 paragraphs (3-5 sentences each) discussing the poorest and richest conditions of the archived photos on the website, and explain the similarities and/or differences to the 22 photos shared by your study subject. • Concluding paragraph (3-5 sentences) telling what you learned by completing an ethnography project and the significance to understanding cultural sociology. Type and double-space project papers with paragraphs composed of three to five sentences and first level headers (left justified, all caps) as appropriate. Do not write your paper in one block paragraph. Include parenthetical and complete reference citations in ASA format as appropriate. 
5.S: The Multicultural World (Summary) Key Terms and Concepts
• Affinity Groups
• Cross-Cultural Conflict
• Cultural Bias
• Cultural Creator
• Cultural Heterogenization
• Cultural Homogenization
• Cultural Intelligence
• Cultural Lenses
• Cultural Realities
• Dynamics of Power
• Global Consciousness
• Global Electronic Cultural Communities
• Globality
• Globalization
• Multiple Identities
• Postmodern Culture
• Reframing
• Resocialization
• Transnational Communities
• Transnational Corporations
• Transnational Migration
• 1.1: What Is Encryption? Encryption is the process of scrambling a message so that it can only be unscrambled by the intended parties. The method by which you scramble the original message, or plaintext, is called the cipher protocol. In almost all cases, the cipher is not intended to be kept secret. The scrambled, unreadable, encrypted message is called the ciphertext and can be safely shared. Most ciphers require an additional piece of information called a cryptographic key to encrypt and decrypt messages. • 1.2: Modern Cryptography Modern cryptography is not something you do by hand. Computers do it for you, and the details of the algorithms they employ are beyond the scope of this book. However, there are certain principles that will help you better understand and evaluate modern digital security tools. • 1.3: Exchanging Keys for Encryption Eavesdropping communications through the internet can be done at many points: the Wi-Fi hotspot you’re directly connected to, your internet service provider, the server hosting the web pages you visit, national gateways, and the vast array of routers and switches in between. Without encryption, all these communications would be readable by an eavesdropper, be that a stalker, a hacker, or a government agency. • 1.4: Cryptographic Hash A hash function is any (computer) function that transforms data of arbitrary size (e.g., a name, a document, a computer program) into data of fixed size (e.g., a three-digit number, a sixteen-bit number). The output of a hash function is called the digest, fingerprint, hash value, or hash (of the input message). • 1.5: The Man in the Middle In the chapter “Exchanging Keys for Encryption,” you learned how two people can agree on a cryptographic key, even if they have not met. 
While this is a robust method, it suffers from the limitation that on the internet, it is difficult to be sure that you are communicating with the person or entity you are trying to communicate with, be that a friend you are instant messaging or emailing or the server that you are trying to load a web page from. • 1.6: Passwords • 1.7: Public-Key Cryptography • 1.8: Authenticity through Cryptographic Signing Public-key cryptographic systems can often be used to provide authenticity. In PGP, this is allowed by the complementary nature of the public and private keys. In the beginning, two cryptographic keys are created, and either can be used as the public key; the choice as to which is the public key is really just an arbitrary assignment. That is, either key can be used for encryption as long as the other one is used for decryption (and the one used for decryption is kept private to provide security). • 1.9: Metadata Metadata is all the information about the data but not the data itself. • 1.10: Anonymous Routing Thumbnail: The action of a Caesar cipher is to replace each plaintext letter with a different one a fixed number of places down the alphabet. The cipher illustrated here uses a left shift of three, so that (for example) each occurrence of E in the plaintext becomes B in the ciphertext. (Public Domain; Matt_Crypto via Wikipedia) 01: An Introduction to Cryptography What You’ll Learn 1. The basic elements of encryption: the plaintext, the ciphertext, the cipher (or encryption protocol), and the cryptographic key 2. How some classic encryption methods work 3. Ways that encryption can be broken 4. An unbreakable cipher Let’s start with the basics—think “pen and paper encryption”—before moving on to more complex encryption methods made possible by computers. Encryption is the process of scrambling a message so that it can only be unscrambled (and read) by the intended parties. 
The method by which you scramble the original message, or plaintext, is called the cipher or encryption protocol. In almost all cases, the cipher is not intended to be kept secret. The scrambled, unreadable, encrypted message is called the ciphertext and can be safely shared. Most ciphers require an additional piece of information called a cryptographic key to encrypt and decrypt (scramble and unscramble) messages. A Simple Cipher: The Caesar Cipher Consider the first and perhaps simplest cipher: the Caesar cipher. Here, each letter in the message is shifted by an agreed-upon number of letters in the alphabet. For example, suppose you wanted to encrypt the plaintext IF VOTING CHANGED ANYTHING IT WOULD BE ILLEGAL by shifting each letter in the message forward by three places in the alphabet, so that A becomes D, B becomes E, and so on, with Z wrapping around to the start of the alphabet to become C. The plaintext gets encrypted to the following ciphertext: LI YRWLQJ FKDQJHG DQBWKLQJ LW ZRXOG EH LOOHJDO To decrypt this message, the recipient would do the reverse, shifting each letter in the message backward three places in the alphabet, so Z becomes W and A wraps around through the end of the alphabet to become X. For the recipient to be able to decrypt the message (quickly), they would have to know the key to the cipher. For the Caesar cipher, this is the number of places that each letter is shifted in the alphabet; in this example, it is the number 3. A Caesar cipher key can also be represented by a letter of the alphabet corresponding to the result of the translation from A. For example, a shift of 3 would be the key D, a shift of 23 would be the key X, and a shift of zero (the identity shift) would be the key A. Let’s review the terms. In this example, to apply the cipher (or encryption protocol), one must simply follow these instructions: “To encrypt, shift each letter in the plaintext message forward in the alphabet by n letters. 
To decrypt, shift each letter in the message ciphertext backward in the alphabet by n letters.” The key is the amount of the shift, n. Of course, the Caesar cipher is not a strong cipher, and you certainly shouldn’t trust it to keep your plans secret. All an adversary would need to do to break (or crack) your secret code (ciphertext) is to try every possible backward shift through the alphabet. There are not many possibilities, so this wouldn’t take long: since the key A makes the ciphertext equal the plaintext, there are only twenty-five possible keys. Such an attack is called a brute-force attack, in which an adversary attempts to decipher an encrypted message by trying every possible key. This attack is feasible in the case of the Caesar cipher because there are very few possible keys. A Slightly More Complicated Cipher: The Vigenère Cipher The Vigenère cipher is a set of Caesar ciphers, each with its own key. Typically the key is given as a word, and the position of each of the word’s letters in the alphabet indicates how the letter A is shifted, as in a Caesar cipher. This is easiest to see with an example. Suppose you wish to encrypt the plaintext RESPECT EXISTENCE OR EXPECT RESISTANCE with the key ACT. Then: • Encrypt every third letter starting with the first letter of the plaintext (R, P, T …) with a Caesar cipher that maps A to A (a shift of zero, or a Caesar cipher with the key A or 0). • Encrypt every third letter starting with the second letter of the plaintext (E, E, E …) with a Caesar cipher that maps A to C (a Caesar cipher using the key C or 2). • Encrypt every third letter starting with the third letter of the plaintext (S, C, X) with a Caesar cipher that maps A to T (a Caesar cipher using the key T or 19). 
Applying these three Caesar ciphers results in the ciphertext: RGLPGVT GQIUMEPVE QK EZIEEM RGLIUMAPVE To break this cipher, suppose your adversary knows the length of your key: they would try to decrypt the ciphertext with all possible three-letter words (or, in general, all possible sequences of letters of that length). In this example, that would require at most 25 × 26 × 26 = 16,900 attempts, which is more than could be easily attempted by hand but is trivially done by a computer. If your adversary doesn’t know the length of your key, then they would have to try many more possible keys (as many as 25 + 25 × 26 + 25 × 26 × 26 + …) to apply this brute-force method to break the encryption. Notice that the longer your key is, the more difficult brute-force methods are—and the harder an adversary must work to break the encryption. In Context: The Unbreakable Onetime Pad A Vigenère cipher—whose key is a sequence of randomly selected letters and is at least as long as the message plaintext—makes possible a cipher known as the onetime pad. Historically, the key itself would be written on a pad of paper and distributed among communicating parties. To encrypt, a Vigenère cipher is applied to the plaintext, using each letter in the onetime pad only once before proceeding to the next letter. Decryption relies on possession of this onetime pad and the starting position in the key. It is impossible to break this cipher without the key—that is, it is impossible to guess the key and crack the ciphertext, even with unlimited time and resources. This is because a ciphertext of a given length could correspond to any plaintext of the same length. For example, without knowledge of the random key, the onetime pad-encrypted ciphertext SOU DUCYFUK RXL HQKPJ could (with equal probability) correspond to either the plaintext ALL ANIMALS ARE EQUAL or FEW ANIMALS ARE HAPPY. Without the key, there is no way to know what the intended (plaintext) message is! 
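The ciphers described above are easy to sketch in code. The following Python is an illustrative sketch (not part of the book): it implements the Vigenère cipher, treats the Caesar cipher as the special case of a one-letter key, reproduces the two examples from this chapter, and runs the brute-force attack on the Caesar ciphertext.

```python
# Illustrative sketch (not from the book): Caesar and Vigenere ciphers.
# A onetime pad is a Vigenere cipher whose random key is as long as the message.

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def vigenere(text, key, decrypt=False):
    """Shift each letter of text by the corresponding key letter.

    As in the chapter's examples, spaces pass through unencrypted and
    the key advances only on letters.
    """
    shifts = [ALPHABET.index(k) for k in key]
    out, i = [], 0
    for ch in text:
        if ch not in ALPHABET:
            out.append(ch)  # leave spaces (and punctuation) alone
            continue
        shift = shifts[i % len(shifts)]
        if decrypt:
            shift = -shift
        out.append(ALPHABET[(ALPHABET.index(ch) + shift) % 26])
        i += 1
    return "".join(out)

def caesar(text, shift, decrypt=False):
    # A Caesar cipher is just a Vigenere cipher with a one-letter key.
    return vigenere(text, ALPHABET[shift], decrypt)

print(caesar("IF VOTING CHANGED ANYTHING IT WOULD BE ILLEGAL", 3))
# -> LI YRWLQJ FKDQJHG DQBWKLQJ LW ZRXOG EH LOOHJDO
print(vigenere("RESPECT EXISTENCE OR EXPECT RESISTANCE", "ACT"))
# -> RGLPGVT GQIUMEPVE QK EZIEEM RGLIUMAPVE

# Brute-force attack on the Caesar ciphertext: try all 25 nontrivial
# shifts and look for a readable result.
for s in range(1, 26):
    candidate = caesar("LI YRWLQJ FKDQJHG DQBWKLQJ LW ZRXOG EH LOOHJDO",
                       s, decrypt=True)
    if candidate.startswith("IF "):
        print(s, candidate)
```

Note that a real onetime pad would use a freshly generated random key at least as long as the message; reusing any portion of the key destroys the security guarantee described above.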
Omitting spaces between words or encrypting the spaces between words (using a twenty-seven-letter alphabet ABCDEFGHIJKLMNOPQRSTUVWXYZ_, where _ is a space) would make it far more difficult to guess even the set of possible plaintext messages. Of course, the onetime pad has the practical problem of how to exchange the key (the onetime pad itself), which is as long as the message, or as long as the total length of all possible future messages. Despite that, it has been used historically, with groups sharing a onetime pad in person and then sending messages over insecure channels. In the late 1980s, the African National Congress (ANC), at the time fighting apartheid in South Africa, used onetime pads to encrypt messages between foreign supporters and in-country operatives. The onetime pads (the keys) were physically transported by a trusted air steward who worked the Amsterdam-to-Johannesburg route. Incidentally, the ANC also computerized the encryption and decryption, making it possible to translate encrypted messages into tonal sequences transmitted over a phone connection and recorded to—or received from—an answering machine, allowing for asynchronous communication. External Resources
What You’ll Learn 1. What key length means for security 2. What open-source software is and why it is important to security Modern cryptography is not something you do by hand. Computers do it for you, and the details of the algorithms they employ are beyond the scope of this book. However, there are certain principles that will help you better understand and evaluate modern digital security tools. Security through Requiring Brute-Force Attacks Modern cryptographic protocols are designed to force an adversary (not in possession of the cryptographic key) to spend (close to) as much time as it would take to try every possible key to break the code. Recall that trying every possible key is known as a brute-force attack. The parameters of a given protocol are chosen so that this amount of time is impractical. Usually, the most important parameter is the length of the key. Just as with the classic Vigenère cipher, longer keys mean that more possible keys must be explored in order to guess the correct key. As time goes by and computer processing becomes faster and more powerful, often longer keys are required to guarantee that a brute-force attack would be infeasible. For this reason, many cryptographic protocols will mention the key size in terms of the number of bits it takes to represent the key. Computers represent information, including cryptographic keys, in binary—using just 0s and 1s. Just like the numbers 0 through 9 represent the digits of a decimal number, the numbers 0 and 1 represent the bits of a binary number. How many three-digit decimal numbers are there? 10 × 10 × 10 = 10³ = 1000—that is, the numbers 0 through 999. Likewise, there are 2 × 2 × 2 × 2 = 2⁴ = 16 four-bit binary numbers. As an example, the AES cryptographic protocol may be referred to as AES-128 or AES-256 when using the protocol with 128-bit or 256-bit encryption keys, respectively. In AES-128, there are 2¹²⁸ = 340282366920938463463374607431768211456 possible keys. 
In AES-256, there are 2²⁵⁶ = 115792089237316195423570985008687907853269984665640564039457584007913129639936 possible keys. Trying every possible key—or even a small fraction of all possible keys—for AES-256 is computationally infeasible, even given the computational power of nation-states such as the United States. Security Is Not Guaranteed through Obscurity Since as early as the nineteenth century, mathematicians have held as a standard that cryptographic schemes should be secure even if the method being used is not secret. This is based on the following principle: If security requires keeping the method secret, then one risks all messages that have ever been encrypted or ever will be encrypted with that method being revealed if the method is ever uncovered. On the other hand, if your method only requires keeping the key secret, then one only risks those messages that have been encrypted with that particular key being revealed if the key is compromised. Security Is Provided by Transparency In fact, the more transparency around a cryptographic method, the more you can trust the security of the method. To understand this, consider how an encryption program (or any computer program, in fact) is created. It starts with an algorithm describing how to perform the encryption. A programmer turns this algorithm into source code. A computer compiles this source code into the program or app that runs on your computer or phone. A good computer programmer should be able to translate from an algorithm (1) to source code (2) and back. A security professional would be able to evaluate the security of a cryptographic protocol based on the algorithm but should also evaluate the source code to be certain of its faithful implementation (that there are no mistakes or bugs, whether intentional or not). However, as a user, you would only have access to the compiled program (3). Unfortunately, given only the compiled code, it is impossible for anyone to re-create the source code. 
So unless the source code is available, no one can be certain that the security claims of an app are true. On the other hand, having just the compiled program is enough for a hacker to try to break the security of the app. Many software projects make their source code available to the public: such software is called open-source software and includes many well-known projects, security and otherwise, such as Signal, Firefox, and Linux. The alternative is closed-source software, which is popular among projects that aim to monetize their product through sales of proprietary software, such as Safari, Internet Explorer, Windows, and Mac OS. While it is possible to evaluate the security of closed-source software (e.g., through private audits), it is much more difficult to maintain this on an ongoing basis. Open-source projects are open to scrutiny by anyone, giving every opportunity for security (or other) problems to be discovered. Security Is Provided by Protecting Your Encryption Key Since the encryption method is typically public in modern cryptographic protocols, the way that one achieves security is through protecting one’s encryption key. What this looks like in practice depends on where the key resides. In the case of Signal, a secure instant messaging app, the encryption key is a file on your phone, so protecting the key means protecting your phone. In the case of a password manager that syncs your passwords to the cloud, the key that encrypts the file storing all your passwords is derived from or protected by the password that you use to log into your password manager. Security Is Provided by Distrusting the Infrastructure End-to-end encryption involves scrambling a message so that it can only be read by the endpoints of a conversation. But here’s where the confusion comes in: What are the endpoints? Are they just you and your friends? Or is the server an endpoint too? It depends on the application. 
As an example, https (which secures communications between you and the servers hosting the web pages you visit) encrypts traffic so that only you and the server can decrypt the content of the web pages. Signal encrypts messages so that only you and the friend you are messaging can read them. In both cases, only the people or entities that need to know the information are able to decrypt the information. This is the heart of end-to-end encryption. Here is an illustration of why end-to-end encryption is so important in private messaging. This is covered in greater technical detail in the chapter “The Man in the Middle.” In the following figure, Assata (left) is trying to get a message (Ursula K. Le Guin’s The Dispossessed) to Bobby (right) over the internet: But the ghost of mean old J. Edgar Hoover haunts the infrastructure. This man in the middle is able to intercept, read, and change any unprotected message sent between our two friends. Like so: (Edgar could also just read and send the message along unaltered.) To make matters worse, saying that an app uses “encryption” (without being specific about who holds the keys) doesn’t guarantee that messages remain private and authentic. For example, if a server between the two comrades is managing the encryption keys, anyone with access to the server could read and modify all messages between them. However, if Assata and Bobby are encrypting their message (with the blue key), then Edgar won’t be able to read the message and wouldn’t be able to replace the message with one that can be decrypted with the blue key: How do you know whether an application uses end-to-end encryption? The best indication is that there is some way to verify encryption keys—Signal makes this easy with safety numbers.
We will describe this in more detail in the chapter “Authenticity through Cryptographic Signing.” Another way to reduce exposure to a malicious interloper is through peer-to-peer messaging, where it is said that there is “no server” in between managing your messages or contacts. Even this can be a bit misleading, however: there is a tremendous amount of internet infrastructure in between you and your friends; it’s just invisible to most users and apps. As described above, this infrastructure is precisely what the State exploits to conduct undetectable, suspicionless mass surveillance. In Context: The Enigma Machine Possibly the first modern encryption techniques were used during World War II. Predating modern computers, the protocols were supported by sophisticated mechanical devices. Most notable among these is the Enigma machine used by Nazi Germany. The Enigma was an electromechanical device that allowed an operator to set a particular key, type in the plaintext, and get the ciphertext as output. With the same key, typing in the ciphertext would output the original plaintext. The key is the order of the rotors and the initial positions of the rotors (pictured above). Standard operation required using a new key every day. The keys were listed by day in handbooks distributed to operators of Enigma machines—these are essentially one-time pads of keys. Incidentally, these were printed with water-soluble ink, allowing quick destruction of the key book when at risk of falling into enemy hands. Much effort went into breaking Enigma-encrypted messages. Several machines were captured during World War II, but even in possession of the machine, decrypting messages was nearly impossible (as with truly modern ciphers whose methods are public). Alan Turing, one of the founders of computer science as a discipline, worked at the secretive Bletchley Park, the central site for British code breakers during World War II.
Turing designed the bombe, a type of computer built specially for deciphering Enigma messages. The bombe was not enough. (In fact, decrypting Enigma messages without a key is incredibly challenging even with modern computation capabilities; at least one famous Enigma message intercepted during the war remains encrypted to this day.) However, the bombe in combination with the fact that most early morning messages contained weather reports or the phrase Keine besonderen Ereignisse (“nothing to report”) did allow the Allies to break Enigma-enciphered messages regularly. Turing’s work during the war is estimated to have shortened the war by more than two years. However, his work remained unacknowledged throughout his life, since work at Bletchley Park was classified, and in fact, he was criticized for not contributing to the war effort. More tragically, as a gay man, he was persecuted by his own government to the point of being charged with a crime in 1952. Found guilty of homosexual acts, he was given the choice of chemical castration or imprisonment. Choosing the former, he only lived another two years, reportedly ending his own life by cyanide poisoning. External Resources
What You’ll Learn 1. How messages can be encrypted without sharing an encryption key in advance 2. The primary method of key exchange used online today Eavesdropping on communications through the internet can be done at many points: the Wi-Fi hotspot you’re directly connected to, your internet service provider, the server hosting the web pages you visit, national gateways, and the vast array of routers and switches in between. Without encryption, all these communications would be readable by an eavesdropper, be that a stalker, a hacker, or a government agency. But in order to encrypt your communications, you need to agree on a key with the party you are communicating with. If you are visiting a website, how do you safely exchange a key with the server that hosts the website? We need a method for two parties (e.g., two people, a person and a server, or two servers) to efficiently agree on a key without meeting and while only being able to communicate over insecure channels, such as the internet. A Physical Example: Exchanging a Message without Exchanging a Key First, consider a physical example, illustrated below. Suppose Assata wants to send Bobby a package. She puts it in a strong box with a large clasp that can take multiple locks (1). She puts a lock on the box, but Bobby doesn’t have a key to the lock. Assata mails the box to Bobby, who cannot open it (and neither can anyone else while the box is in transit). Bobby puts his own lock on the box (2), a lock that Assata doesn’t have the key to, and mails the box back to Assata. When Assata receives the box, she removes her lock and sends the box back to Bobby (3). Now Bobby can open the box because it is only secured with his lock (4). The box cannot be opened in transit—an eavesdropper would have to break Assata’s lock, Bobby’s lock, or both. This illustrates that it is possible to send something securely without meeting first to exchange (agree on) a key.
However, we aren’t about to start physically mailing lockboxes in order to exchange encryption keys. What we need is a mathematical version of this that we can use for digital communications. A Mathematical Example: Exchanging a Message without Exchanging a Key Let’s see how we would do this without physical boxes and locks. Suppose you have an encryption protocol where you can encrypt any text (as we always expect), that you can apply multiple times for layers of encryption (as we also always expect), and that you can encrypt and decrypt the layers in any order you wish and end up with the same result. A mathematical operation satisfying this last property is said to be commutative. (All the encryption protocols we describe in the chapter “What Is Encryption?” are commutative.) Let’s see this with an example, using the Vigenère cipher. Assata encrypts the message AT ONE TIME IN THE WORLD THERE WERE WOODS THAT NO ONE OWNED with a Vigenère cipher and key ALDO to get the ciphertext AE RBE ELAE TQ HHP ZCRWG HHPUS WPUS WZRRS EKOT YR CNP RKNPG and sends the result to Bobby. Bobby doesn’t have the key! But Bobby encrypts this ciphertext with a Vigenère cipher and key LEOPOLD to get the doubly encrypted text LI FQS POLI HF VSS KGFLU SKAYG LDFV HDFGG PNZX MG QYS COBEU and sends the result back to Assata. Assata “decrypts” the message from Bobby with her key (ALDO) to get (the still encrypted message) LX CCS ELXI WC HSH HSFAR EKPVS LSCH HSCSG EKLX BD CYH ZABTR and sends the result to Bobby. Finally, Bobby decrypts this with his key (LEOPOLD) and gets the message that Assata wanted to send Bobby in the first place: AT ONE TIME IN THE WORLD THERE WERE WOODS THAT NO ONE OWNED Note that, in this example, Assata did not share her key (ALDO) with anyone, and Bobby did not share his key (LEOPOLD) with anyone either. 
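This three-pass exchange can be sketched in a few lines of Python. This is a toy illustration of commutative layering, not a secure protocol; the function name and structure are ours, but the keys and texts are exactly those of the example above.

```python
# A minimal Vigenère cipher: only letters are shifted, the key advances
# one position per letter, and spaces are passed through unchanged.
def vigenere(text, key, decrypt=False):
    out, i = [], 0
    for ch in text:
        if ch.isalpha():
            shift = ord(key[i % len(key)]) - ord("A")
            if decrypt:
                shift = -shift
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
            i += 1
        else:
            out.append(ch)
    return "".join(out)

msg = "AT ONE TIME IN THE WORLD THERE WERE WOODS THAT NO ONE OWNED"

c1 = vigenere(msg, "ALDO")                     # Assata adds her layer
c2 = vigenere(c1, "LEOPOLD")                   # Bobby adds his layer
c3 = vigenere(c2, "ALDO", decrypt=True)        # Assata removes her layer
plain = vigenere(c3, "LEOPOLD", decrypt=True)  # Bobby removes his layer

print(c1)     # AE RBE ELAE TQ HHP ZCRWG HHPUS WPUS WZRRS EKOT YR CNP RKNPG
print(plain)  # AT ONE TIME IN THE WORLD THERE WERE WOODS THAT NO ONE OWNED
```

Because each layer is just a per-letter shift, the order in which layers are added and removed does not matter, which is exactly the commutativity the exchange relies on.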
Because the Vigenère cipher is commutative, it did not matter that the message was encrypted with Assata’s key, then encrypted with Bobby’s key, then decrypted with Assata’s key, and finally decrypted with Bobby’s key. All that matters is that the message was encrypted and decrypted once with each key. Any eavesdropper would only see one of the three intermediate ciphertexts. A Physical Example: Agreeing on a Secret over an Insecure Channel In modern cryptographic systems, rather than sending the entire message back and forth with different layers in this way, one has an initial exchange, much like in the above examples, to settle on a key to use for the intended communication. You could imagine that Assata, rather than sending the message AT ONE TIME IN THE WORLD . . . , sent an encryption key to use for a longer communication. We will describe the mathematical basis for key exchange as it is used by almost all modern communication, called the Diffie-Hellman key exchange. First, let’s see how this is done with paints instead of mathematics (illustrated below). We will assume that if you mix two colors of paint together, you can’t unmix them; specifically, even if you know what one of the two colors was, you can’t figure out what color was mixed with it to get the resulting mixed color. Assata and Bobby start by agreeing on one paint color (in this example, yellow) and an amount, say 10 mL (1). They can do this over an insecure communication channel and should assume that an eavesdropper will know what the color and amount are too. Then Assata picks a color (in this case, rusty orange) and keeps it secret (2). She mixes 10 mL of yellow with 10 mL of rusty orange to get a coralish color (3). She sends this to Bobby over the insecure channel, understanding that an eavesdropper will see it. Bobby does the same thing, with his own secret color (4). 
Now to the paint sample received from Bobby (5): Assata mixes in 10 mL of her secret color (6), resulting in a dark purple (7). Bobby does the same thing. Assata’s dark purple is obtained from a mix of 10 mL each of yellow, her secret color, and Bobby’s secret color. Bobby’s resulting paint mix is obtained from a mix of 10 mL each of yellow, his secret color, and Assata’s secret color. So Bobby also ends up with the same dark purple (8)! Can the eavesdropper create the dark purple? The eavesdropper sees yellow (1), the mix of yellow and Assata’s secret color (3), and the mix of yellow and Bobby’s secret color (5). But to create the dark purple, the eavesdropper would have to unmix in order to obtain Assata’s or Bobby’s secret colors, which they can’t do. Diffie-Hellman Key Exchange Let’s revisit this process mathematically. We do so with a commutative mathematical operation that is hard or impossible to reverse. A mathematical operation or function that is hard to reverse is called a one-way function. Let’s represent our mathematical operation with the symbol ☆—that is, a ☆ b = c for some numbers a, b, and c. Commutative means that a ☆ b = b ☆ a. That ☆ is one way means that if you know b and c, you cannot easily figure out what a is. In practice, one should only be able to figure out what a is by a brute-force (or close to brute-force) attack: by trying every possibility for a. You may think of ☆ as the multiplication sign (which is commutative but is not one way). (For those mathematically inclined, ☆ can be modular exponentiation for real implementations of Diffie-Hellman.) Illustrated below, Assata and Bobby agree on a number p, which is public (1). Assata chooses a secret number a (2), computes p ☆ a (3), and sends the result to Bobby. Since ☆ is one way, an eavesdropper will know p and p ☆ a but will not be able to (easily) determine a. Bobby chooses a secret number b (4), computes p ☆ b, and sends the result to Assata (5).
An eavesdropper knows p ☆ b but not b. Assata computes (p ☆ b) ☆ a (7), using the message from Bobby (5) and her own secret number (6). Bobby computes (p ☆ a) ☆ b (8), using the message from Assata (3) and his own secret number (4). Since ☆ is commutative, (p ☆ b) ☆ a = (p ☆ a) ☆ b, and so Assata and Bobby now have computed a common number. Since the eavesdropper only knows p ☆ a, p ☆ b, and p, and since ☆ is one way, the eavesdropper has no efficient means of computing Assata and Bobby’s shared common number: it is secret to Assata and Bobby. Assata and Bobby can use this shared number as their cryptographic key. Using Diffie-Hellman Key Exchange Diffie-Hellman key exchange is used all over the place as a means of agreeing on a cryptographic key. It is used as the basis for most forms of encrypted communications that you will encounter. Most notably, it forms the basis of key exchange when you connect to a website via https. When you visit a website, the URL will either start with http:// or https://. In the former case, none of your communications with the server of the website are encrypted. In the latter, communications are encrypted, and the key used to encrypt those communications is generated using Diffie-Hellman key exchange. In Context: When Good Things Go Bad Remember that the first thing that Assata and Bobby do is agree on a number p that forms the basis of their key exchange. This number is public, but we assumed that our mathematical operation ☆ was one way, so it was OK for p to be public. However, someone with a lot of computational resources (such as a wealthy nation-state) can invert the operation (for functions such as modular exponentiation used for ☆ in the real world) using two phases. The first phase takes a very long time and must be done for a specific value of p. The second phase can be done very quickly (in real time) for the same value of p, assuming that the first phase has been completed.
This means that not everyone should be using the same value of p; rather, different values of p should be used and changed often. However, in 2015, researchers showed that 18 percent of the top one million https domains used the same value of p. Two other communication protocols that depend on Diffie-Hellman key exchange are SSH (secure shell) and VPN (virtual private network). The same researchers showed that 26 percent of SSH servers and 66 percent of VPN servers used the same value of p in their Diffie-Hellman key exchange. This means that a powerful adversary would have little trouble breaking the encryption. While the Diffie-Hellman protocol is strong and reliable, this highlights that those who implement the protocols need to do so with care to ensure that they are in fact secure. External Resources • Adrian, David, Karthikeyan Bhargavan, Zakir Durumeric, Pierrick Gaudry, Matthew Green, J. Alex Halderman, Nadia Heninger, et al. “Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice.” In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 5–17. Denver: ACM, 2015.
What You’ll Learn 1. What a hash function does 2. What a cryptographic hash function does and how it is distinct from an ordinary hash function 3. Some examples of the use of cryptographic hash functions A hash function is any (computer) function that transforms data of arbitrary size (e.g., a name, a document, a computer program) into data of fixed size (e.g., a three-digit number, a sixteen-bit number). The output of a hash function is called the digest, fingerprint, hash value, or hash (of the input message). A cryptographic hash function has the following properties that make it useful for cryptographic applications: 1. The same message always results in the same output hash. 2. It is infeasible to generate the input message from its output hash value except by brute force (trying all possible input messages). 3. It is infeasible to find two different input messages that result in the same output hash value. 4. A small change to the input message changes the output hash value so extensively that the new hash value appears uncorrelated with the old hash value. The first two of these properties are similar to most encryption protocols. If you encrypt the same message on two different occasions, you would expect the same result, assuming you are using the same encryption key. Given only the ciphertext, it should be infeasible to generate the plaintext (without the decryption key). However, encryption allows you to go backward, from ciphertext to plaintext, using the decryption key. Hash functions are inherently one way: there is no key to go backward. That the result is sometimes called a digest or a fingerprint is a useful analogy: while the output of a cryptographic hash function does not encode all the information of the input message (in the way that a ciphertext does), it encodes enough information that you can use it to identify the input (relying on properties 1 and 3) and that this is very difficult to fake (property 2). 
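To make properties 1 and 4 concrete, here is a short sketch using SHA-256, one widely used cryptographic hash function available in Python’s standard library. The helper name is ours; the messages are arbitrary.

```python
# SHA-256 maps an input of any size to a fixed 256-bit (64 hex character) digest.
import hashlib

def digest(message):
    return hashlib.sha256(message.encode()).hexdigest()

h1 = digest("AT ONE TIME IN THE WORLD")
h2 = digest("AT ONE TIME IN THE WORLD")   # the same message again
h3 = digest("AT ONE TIME IN THE WORLd")   # one character changed

print(h1 == h2)  # True: property 1 -- same message, same hash
print(h1 == h3)  # False: property 4 -- a tiny change scrambles the digest
print(len(h1))   # 64 hex characters (256 bits), whatever the input size
```

Properties 2 and 3 cannot be demonstrated by running code; they are claims about what no feasible computation can do, supported by years of public cryptanalysis.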
We will see applications of cryptographic hash functions in the chapters “The Man in the Middle,” “Passwords,” and “Public-Key Cryptography,” but let’s look at a simple use here, known as a commitment scheme. Using Cryptographic Hash Functions to Prove How Smart You Are Assata and Bobby are both trying to solve a difficult math problem. Assata gets the answer (S) first and wants to prove to Bobby that she has the answer before he solves it, without leaking the solution to him. So Assata takes a cryptographic hash of the solution S, hash(S), and gives Bobby hash(S). Since the hash is cryptographic, Bobby can’t learn S from hash(S) (property 2). When Bobby eventually solves the problem, finding S for himself, he can compute hash(S) and check that the result is the same as what Assata gave him. By properties 1 and 3, Bobby knows that Assata’s input to the hash function must be the same as his input to the hash function, thus proving that Assata solved the problem first. (Property 4 was not used here, but without this property, if Assata got a solution that was close to correct but not quite, the two outputs might be very similar, and a cursory comparison may not uncover that they are different.) What Do Hash Functions Look Like? There are many different cryptographic hash functions in use today, but describing them in detail is beyond the scope of this book. However, to give you a sense of what they might look like, we give an example that satisfies some, but not all, of the properties that cryptographic hash functions have. The example hash function is called chunked XOR. Exclusive or (XOR) is a function that, when given a pair of inputs, outputs true (or 1) if the inputs are different and false otherwise. So, for example, apple XOR banana = 1, apple XOR apple = 0, 0 XOR 1 = 1, 1 XOR 1 = 0. We can take a chain of XORs on binary numbers (0s and 1s) and get a meaningful answer: 1 XOR 1 XOR 0 = 0, 1 XOR 1 XOR 0 XOR 1 = 1.
For a sequence of binary numbers, XOR returns 1 if there are an odd number of 1s in the chain and 0 otherwise. Chunked XOR operates on a binary input. (If your input is not binary, you could represent it in binary first, the same way a computer would.) We group the input into chunks equal to the size of the output of the hash function—for example, groups of eight bits. We line the chunks up vertically and then XOR the contents of each column, as illustrated below:
input: 00111011 11101101 00101000 00101011 01011000 11001110
chunked:
00111011
11101101
00101000
00101011
01011000
11001110
XOR’d columns: 01000011 (output)
This is a hash function in that no matter what the length of the input, the output will always have the same length (eight in this example). You should be able to see that chunked XOR satisfies the first property of cryptographic hash functions. However, it fails on the remaining properties. It is easy to create an input message (but not necessarily your desired input message) with a given output hash—for example, you could take the desired hash value itself as the message, or concatenate 11111111 11111111 onto it, since those two chunks cancel under XOR and leave the output unchanged. For the same reason, you could create multiple messages having the same output hash. Finally, changing a single bit of the input message will change only a single bit of the output hash. In Context: Cryptographic Hashes Violate Your Fourth Amendment Rights In 2008, a US district judge ruled that if the US government wants to cryptographically hash your private data, then they need a warrant first. The case in question had a special agent of the Pennsylvania Office of the Attorney General copy the hard drive of a suspect’s computer. The special agent computed a cryptographic hash of the copy (so that it could be later compared to the original to prove that they did not tamper with it, relying on properties 1 and 3).
The agent then used a forensic tool that computed a cryptographic hash of each individual file (including deleted but not yet overwritten files) on the copied hard drive and compared these hashes to hashes of files in a database of contraband files. The agent found three matches between hashes of files on the hard drive and hashes of contraband files. By properties 1 and 3, this means that the hard drive contained at least three illegal files. The judge on the case determined this practice (of hashing the files and comparing them to known hashes) to constitute a search of the hard drive, violating the accused’s Fourth Amendment rights to protection from illegal searches and seizures. As a result, the evidence could not be used at trial. We should disclose the particulars of the case, which involves the possession of child pornography. While we would never defend the right of possession (or the creation or distribution) of child pornography, it is important to imagine how a power (in this case, that of determining the existence of particular files on a computer) could be used in ways that you would not want it to be used. Music that a friend shared with you? Images of oil spills? Images of #blacklivesmatter protests? Earth First! Journal articles? External Resources
What You’ll Learn 1. What an impersonation attack is 2. What a man-in-the-middle attack is 3. The difference between a passive and active man-in-the-middle attack 4. How to uncover man-in-the-middle attacks occurring during key exchange using fingerprinting In the chapter “Exchanging Keys for Encryption,” you learned how two people can agree on a cryptographic key, even if they have not met. While this is a robust method, it suffers from the limitation that on the internet, it is difficult to be sure that you are communicating with the person or entity you are trying to communicate with, be that a friend you are instant messaging or emailing or the server that you are trying to load a web page from. We will first show how an eavesdropper can intercept your communications with our lockbox example from the chapter “Exchanging Keys for Encryption” and then show how this plays out in a Diffie-Hellman key exchange. These interceptions of communications are called attacks. A Physical Man-in-the-Middle Attack Recall that Assata was able to send Bobby a secure package by sending a lockbox back and forth three times: once with her lock on it, once with Bobby’s lock on it, and finally with her lock removed and only Bobby’s lock on it. However, how does she know it is actually Bobby who receives the package? And how does she know that it is Bobby’s lock on the box when it is sent back to her? Illustrated below, suppose Edgar intercepts the lockbox from Assata to Bobby with Assata’s lock on it (1). Edgar could send the lockbox back to Assata with his own lock on it (2). Unless Assata is able to tell the difference between a lock from Edgar and a lock from Bobby, Assata would assume that the lock is Bobby’s lock, remove her lock, and send the package on to Bobby (3). If Edgar intercepts the package again, he can now open the box and examine the contents of the package, since it only has his lock on it (4). 
For Edgar to do this, he must intercept all the packages being sent from Assata to Bobby. This attack on Assata’s communication with Bobby is called an impersonation attack: Edgar is impersonating Bobby. (This is not generally considered a man-in-the-middle attack.) In the situation as described, Bobby never received a package at all. Edgar could go further though (illustrated below). Edgar could, after opening the lockbox from Assata (4), choose to send it along to Bobby, using a mirror image of the same three-exchange method, so that Bobby thinks he is receiving a locked box from Assata (5–8). If Edgar passes along Assata’s original message (just inspecting the message for himself), then we call this a passive man-in-the-middle attack. If Edgar substitutes the package with a completely different package, we call it an active man-in-the-middle attack. In either case, Edgar would need to intercept all packages between Bobby and Assata, as the packages will be addressed to Bobby or Assata, not Edgar. These types of attacks are called man-in-the-middle attacks because Edgar is the man in the middle of Assata and Bobby’s communication. (In the case of J. Edgar Hoover, quite literally “the man.”) A Man-in-the-Middle Attack against Diffie-Hellman Key Exchange Let’s see how this plays out in the Diffie-Hellman key exchange, using the notation we introduced in the chapter “Exchanging Keys for Encryption.” Recall that in order for Assata and Bobby to generate a key, they first agree on a number p. Assata picks a number a, computes p ☆ a, and sends the result to Bobby. Bobby picks a number b, computes p ☆ b, and sends the result to Assata. Assata and Bobby (and no one else) can now compute p ☆ a ☆ b, which they use as their cryptographic key for their encrypted communication. Suppose, though, that Edgar is able to intercept Assata’s and Bobby’s communications.
Then Edgar can do one Diffie-Hellman key exchange with Assata and another Diffie-Hellman key exchange with Bobby, illustrated below. Assata will think that she is doing a Diffie-Hellman key exchange with Bobby, when really she is exchanging keys with Edgar, resulting in the beigeish key p ☆ a ☆ e. Bobby will think that he is doing a Diffie-Hellman key exchange with Assata, when really he is exchanging keys with Edgar, resulting in the blueish key p ☆ b ☆ e. In the end, Assata and Edgar have a shared key (left), and Edgar and Bobby have a shared key (right). But Assata and Bobby think that they have a shared key with each other. When Assata and Bobby start using what they think is their shared key, Edgar will have to keep up the ruse in order to not be discovered. You see, Assata will encrypt a message with the key she has. If this message makes it to Bobby, Bobby won’t be able to decrypt the message because he doesn’t have the same key! What Edgar needs to do is intercept the encrypted message and decrypt it with the key he shares with Assata. Edgar now has two choices. Edgar could simply read the message, encrypt it with the key he shares with Bobby, and then send it to Bobby. This would be a passive man-in-the-middle attack: Edgar is reading the messages between Assata and Bobby that Assata and Bobby think no one else can read. Edgar’s other option is to change the message from Assata, encrypt it with the key he shares with Bobby, and then send it to Bobby. This would be an active man-in-the-middle attack. In either case, Edgar must continually intercept communications between Assata and Bobby, because otherwise, one of them will receive a message encrypted with a key they don’t have, which would alert them to the man in the middle.
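This attack can be sketched in Python with toy-sized numbers, using modular exponentiation for ☆ (as suggested in the chapter “Exchanging Keys for Encryption”). The numbers and names here are our own illustration; real implementations use enormous primes.

```python
# Toy model of a man-in-the-middle attack on Diffie-Hellman.
# Here x ☆ s is modular exponentiation pow(x, s, N), which is
# commutative in the exponents: (p ☆ a) ☆ b == (p ☆ b) ☆ a.
# The tiny numbers below are for illustration only.

N = 23  # public modulus (a prime); fine for everyone to know
p = 5   # the public number Assata and Bobby agree on

def star(x, secret):
    return pow(x, secret, N)

a, e, b = 6, 9, 15  # Assata's, Edgar's, and Bobby's secret numbers

# Edgar intercepts p ☆ a and p ☆ b and replies with p ☆ e instead.
key_assata = star(star(p, e), a)  # Assata computes (p ☆ e) ☆ a
key_bobby = star(star(p, e), b)   # Bobby computes (p ☆ e) ☆ b
key_edgar_with_assata = star(star(p, a), e)
key_edgar_with_bobby = star(star(p, b), e)

print(key_assata == key_edgar_with_assata)  # True: Edgar can decrypt Assata's messages
print(key_bobby == key_edgar_with_bobby)    # True: Edgar can decrypt Bobby's messages
print(key_assata == key_bobby)              # False: comparing keys would expose Edgar
```

The final comparison is the crux: Assata and Bobby end up with different keys, so if they can reliably compare keys, the ruse collapses.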
Spotting a Man-in-the-Middle Attack with Cryptographic Hashes: Fingerprinting If Assata and Bobby, after a Diffie-Hellman key exchange, can reliably compare their keys and see that they are the same, they can be assured that any eavesdropper has not mounted a man-in-the-middle attack and can only see their encrypted communications. As a reminder, this is because the parts of the Diffie-Hellman key exchange that an eavesdropper sees do not allow them to create the hidden parts of the keys that Assata and Bobby each select (a and b in the above figure). Indeed, the most basic way to spot a man-in-the-middle attack is for Assata and Bobby to compare their keys. You may notice some problems with this plan: If Assata and Bobby try to compare their keys, can’t Edgar manipulate the communication to make it seem like their keys are indeed the same? Of course! Then Assata and Bobby should compare their keys over a different communication channel. For example, if they were originally communicating over the internet, then they should compare keys over the phone. The assumption is that it would be much harder for Edgar to intercept Assata’s and Bobby’s communications over all the different communication channels that could be used to compare keys. Ideally, Assata and Bobby would meet in person to compare keys. Either way, this is called an out-of-band comparison: the band refers to the communication channel, and keys should be compared outside of the band of communication that the keys are exchanged through. But wait, if Assata and Bobby have another means of communicating, then why don’t they just exchange keys the old-fashioned way, without any fancy math to worry about? Well, cryptographic keys, for modern cryptographic methods, are very long—hundreds or thousands of characters long. It can be cumbersome to do a manual exchange of keys.
If a designer of a method for secure communications wanted to automate the exchange of keys over a different communication channel, that designer would have to specify that second communication channel, making the whole secure communications system cumbersome to use. (Imagine having to make a phone call in order to visit a website.) Edgar would also then know what channel the keys are being exchanged on and could play the man in the middle on that channel. But then isn’t it cumbersome to compare keys if they are so long? Absolutely. So instead of comparing the entire key, Assata and Bobby compare the cryptographic hashes of their keys, as we described in the chapter “Cryptographic Hash.” Remember the following properties of the cryptographic hash: (1) It makes the input (in this case the key) much shorter (say, a few dozen characters). (2) It is next to impossible to find two inputs (in this case two keys) that have the same output hash, so Edgar certainly can’t manage to do a Diffie-Hellman key exchange with Assata and Bobby so that the hashes end up the same. (3) It cannot be reversed, so if someone intercepted the hash, they could not re-create the input (in this case, the key). You may recall that a cryptographic hash is sometimes called a fingerprint, and so we call the process of comparing the cryptographic hash of keys fingerprinting. Various communication apps may use different terminology for this, including safety numbers, verification, and authentication. In-Band Fingerprinting There are two clever, though not commonly used, methods for comparing keys in band; both are variations on the out-of-band fingerprinting we described above. The first relies on the use of a weak password. If Assata and Bobby both know something, like the name of Assata’s first pet or the street Bobby grew up on (say “Goldman”), that their presumed adversary doesn’t know, Assata and Bobby can use this as a weak password.
Assata combines her key with the weak password (“Goldman”) and computes the cryptographic hash of the result. Bobby does the same thing with his key. Assata and Bobby then compare the result in band (i.e., over the communication channel in which they are already communicating). Because of the properties of the cryptographic hash, Assata and Bobby will only have the same result if they have the same key and the same password: If Edgar is playing the man in the middle, then Edgar shares a key with Assata and a different key with Bobby. Edgar would have to risk passing along Assata’s hash or would have to guess the weak password to be able to compute a result that is the same as what Assata computes and the same as what Bobby computes (as in the figure below). The password does not need to be strong because only a small number of incorrect guesses (e.g., “panther”) would be tolerated between Assata and Bobby before they assume the man is in the middle: a brute-force attack by Edgar to guess the password is not feasible. The second method is used for key comparison in voice and video calls. Here, Assata hashes her key into two human-readable words (instead of a string of numbers and characters as we have seen). Bobby does the same thing. If Assata and Bobby have the same key (i.e., there is no man in the middle), then they will have the same set of words. Assata reads the first word, and Bobby reads the second word, so each can compare the result of the hash. If Edgar is in the middle, Assata and Bobby would have different pairs of words. Edgar would have to synthesize Assata’s and Bobby’s voices (and possibly videos) to speak the words that Edgar shares with Assata and Bobby in order for Edgar’s ruse to continue. The Ability to Fingerprint Is Protective, Even If You Don’t Do It If a method of secure communication does not provide the ability to compare (fingerprint) keys, then there is little benefit to using end-to-end encryption. 
Man-in-the-middle attacks can be automated in our global surveillance system, so if a man-in-the-middle attack cannot be spotted (by fingerprinting), then it might as well be carried out by default. However, if fingerprinting is made possible, then the man risks being uncovered, particularly if the attacks are automated and widely carried out. Not everyone needs to go through the process of fingerprinting: as long as some users do, the widespread deployment of men in the middle is deterred. Of course, for users at risk of targeted surveillance, fingerprinting is essential to the security of their communications. What to Do When You Can’t Fingerprint In many modes of communication, fingerprinting isn’t feasible. One example is in accessing a website via https. In using https, your browser and the website’s server will generate a cryptographic key via a Diffie-Hellman key exchange. However, it isn’t practical for users to contact the servers of websites via alternate communication channels to fingerprint keys before accessing the content of web pages. Of course, you don’t know the voice of the operator of the web server or share any common knowledge to use in-band comparison methods either! In this case, alternate methods of validating keys, using public-key cryptography and certificate authorities, are used. We will describe public-key cryptography in the chapter “Public-Key Cryptography.” In Context: The Great Firewall of China Many people know that the internet is heavily censored in China by the Great Firewall of China. Starting in mid-January 2013, parts of GitHub, a site primarily used to host computer programming code but that can also be used to share more general information, were blocked in China. By January 21, 2013, the entire domain was blocked. However, given GitHub’s central role in computer development and business and the importance of this sector to the Chinese economy, the public backlash led to GitHub being unblocked by January 23, 2013.
On January 25, a petition on WhiteHouse.gov was started asking for those involved in building the Great Firewall of China to be denied entry into the United States. The petition linked to a GitHub page, created the same day, listing Chinese individuals accused of contributing to China’s censorship infrastructure. The next day, reports appeared in social media of a man-in-the-middle attack against users accessing GitHub, showing that the equivalent of the fingerprint check for accessing a website via https was failing. The Chinese government had learned that they could not block GitHub outright, and since GitHub supports https, the Great Firewall could not block access to particular pages within GitHub (e.g., based on keyword matches), since https encrypts that information from an eavesdropper. The next option was a man-in-the-middle attack. Any users who ignored warning signs of the attacks would be at risk of their government knowing what pages they were accessing or possibly editing. The Chinese government is the presumptive deployer of widespread man-in-the-middle attacks between users in China and other major internet services, such as Outlook, Apple’s iCloud, and Google. China isn’t alone in launching man-in-the-middle attacks. Similar attacks have been caught in Syria and Iran too. External Resources
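The in-band weak-password comparison from this chapter amounts to each party hashing their key together with the shared secret and comparing the (truncated) result. A minimal Python sketch; the key material and password below are hypothetical placeholders, not anything a real app would use verbatim:

```python
import hashlib

def inband_fingerprint(key: bytes, weak_password: str) -> str:
    """Hash the session key combined with the shared weak password,
    truncated to a short, human-comparable fingerprint."""
    return hashlib.sha256(key + weak_password.encode()).hexdigest()[:16]

# No man in the middle: Assata and Bobby hold the same key and password.
shared_key = b"example-session-key"          # hypothetical key material
print(inband_fingerprint(shared_key, "Goldman") ==
      inband_fingerprint(shared_key, "Goldman"))   # True: fingerprints match

# Man in the middle: Edgar shares a *different* key with each party, and a
# wrong password guess ("panther") doesn't help him either.
edgars_key = b"edgars-other-session-key"
print(inband_fingerprint(shared_key, "Goldman") ==
      inband_fingerprint(edgars_key, "panther"))   # False: mismatch exposes Edgar
```

Because the hash cannot be reversed, passing the fingerprint in band reveals neither the key nor the weak password, and Edgar gets only a handful of guesses before the mismatch gives him away.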
What You’ll Learn 1. When “password protected” means something is encrypted and when it does not 2. How passwords are defeated 3. What password practices you can use to minimize the risk of your passwords being compromised 4. How encryption keys can be generated from passwords When “Password Protected” Does Not Mean Encrypted All passwords are used to control access. Account passwords are used to grant access to an online account, for example. Rarely, though, is the information in that account encrypted with a key that you control, and it is likely that your information is not encrypted at all. That is, the information is usually readable by the provider (e.g., Google, Dropbox). Other passwords are used to unlock an encrypted file or document, and we will refer to these as encryption passwords. The use of an account password is like telling a bouncer your name and having that name be matched on a list of approved guests, whereas the use of an encryption password is more like using a key to unlock a safe. In the first case, it is up to the bouncer (a metaphor for your online account provider) to give you access. In the latter case, the safe represents the ciphertext, and the contents of the safe are the plaintext—gaining access to the plaintext is impossible (or at least impractical) without the key or password. (In fact, encryption keys are in some cases generated from the password, as we describe below.) That said, even though your information is not encrypted with an account password, you should still minimize the number of people who could have access to your information. But to understand why we recommend the password practices we do, it helps to understand how passwords can be compromised. Password Cracking Passwords can be compromised or cracked one at a time or as a whole batch, as in all the passwords for all the accounts on a given system. Passwords are hot commodities. Since people frequently use the same password for many accounts and “popular” (a.k.a.
terrible) passwords are used by many people (123456, password, qwerty, admin, welcome, to name a few real examples), discovering passwords used for one service can result in an account compromise for a different service and possibly for a different person. Let’s consider the ways in which passwords are compromised. An adversary could try to crack one password via the same means that you enter your password—for example, via a website. This is relatively easy for the website operator to help protect against—for example, by locking an account after a few incorrect password entries or forcing delays after entering the password to slow down repeated guesses. Another way the account provider can help is by allowing for two-factor authentication. This is where, in addition to entering a password to access an account, you must also enter an authentication code that is delivered to you via text, via an app on your smartphone, or to a physical authentication key (such as a YubiKey). To compromise your account, an adversary would need your password as well as your device that receives the authentication code. An adversary could also physically access the device (your phone or computer) on which you enter your password. More likely—and as is regularly reported in the news—the server on which your password is stored is compromised or hacked. In this case, it won’t just be your username and password that are compromised; everyone who has an account on that system will be at risk. Although an adversary who has gained access to a database of passwords on the server will likely have access to your account information too, as we alluded to above, the point of the hack might be to gain access not to the hacked service but to another service entirely. A responsible web service provider won’t store your password in plaintext on their server but will store a cryptographic hash of your password.
To uncover a password (or all passwords), an adversary computes the cryptographic hash of a guessed password and compares this to the database of stolen password hashes. In practice, password-cracking tools (e.g., John the Ripper) use three techniques: 1. Dictionary attacks: trying dictionary words, common variations of dictionary words (e.g., pa55w0rd, fr33d0m), and previously cracked passwords 2. Brute force: trying all possible combinations of letters, numbers, and symbols (for practical reasons, this method only works for relatively short passwords) 3. Precomputed hashes: comparing against a table of cryptographic hashes of possible passwords that are computed ahead of time A user can foil the first two techniques by using good password practices (described below). A service provider can make password cracking less practical by using a cryptographic hash function that is slow to compute or uses a lot of memory. This wouldn’t be noticeable for a single password (such as would be done when you log in) but would slow the computation of the hashes during cracking. A service provider can further foil the use of precomputed hashes if they add a long random sequence of characters (salt) to your password when you log in. This salt can be stored in plaintext with your username, so an adversary would have this information too but would not have had the salt when the precomputed table of hashes was prepared. Additionally, if two users have the same password, because their salt will be different, the cryptographic hash of their passwords with the corresponding salts will be different. This forces an attacker to uncover each password individually. For all of this, you are trusting the online service provider to responsibly store and protect your user information, including your hashed password, if they have even hashed it. The rest is up to you.
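The salted-hash storage scheme just described, and the dictionary attack against it, can be sketched in a few lines of Python. SHA-256 stands in here for the deliberately slow hash a real provider would use (e.g., bcrypt, scrypt, or Argon2), and the passwords are illustrative:

```python
import hashlib, os

def store_password(password: str):
    """What a responsible server stores: a random per-user salt and the
    hash of salt + password (never the password itself)."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def check_password(attempt: str, salt: bytes, digest: str) -> bool:
    """Login check: re-hash the attempt with the stored salt and compare."""
    return hashlib.sha256(salt + attempt.encode()).hexdigest() == digest

salt, digest = store_password("fr33d0m")   # a weak, dictionary-attackable choice

# Dictionary attack against a stolen (salt, digest) pair: the salt is known
# to the attacker, but it defeats any table of hashes computed in advance.
guesses = ["123456", "password", "qwerty", "fr33d0m"]
cracked = [g for g in guesses if check_password(g, salt, digest)]
print(cracked)   # → ['fr33d0m']: weak passwords fall to dictionary attacks
```

Note that the salt does nothing to save a weak password from a live dictionary attack; it only forces the attacker to hash each guess per user, which is why a slow hash function matters.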
Best Practices for Passwords To guard against the methods deployed in password cracking, your passwords should be sufficiently long (to prevent brute-force attacks), be uncommon (to prevent dictionary attacks), and not be reused (so if one of your accounts gets compromised, all your accounts aren’t compromised). To accomplish this, use a password manager to generate and store all the passwords that you don’t need to manually type in. The password manager should be able to generate strong random passwords for you such as bdY,Fsc_7&*Q+cFP. This is great for a password that you never have to type in—that is, a password that the password manager will input for you. For passwords that you will necessarily need to type in (e.g., a password you enter on your phone, the password you protect your password manager with, the password you use to encrypt or unlock your computer), use a diceware password, a.k.a. a passphrase, a.k.a. a random sequence of words such as remake.catfight.dwelled.lantern.unmasking.postnasal. You can generate this password manually using dice and a word list. Many password managers will also generate such passwords, although you probably won’t need many of these. Note that the two above examples of passwords are randomly generated. This is important because even if you think your password is awesome and strong, if you came up with it with your brain, then someone else’s brain probably also came up with it, and so it is susceptible to dictionary attacks. Generating Encryption Keys from Passwords In some cases, passwords are used to unlock an encrypted file or device. An encryption key in this case is in fact generated from a password or passphrase using a key derivation function, which is, essentially, a cryptographic hash function. The input to the cryptographic hash function is your password, and the output is the cryptographic key. Why would this work? Let’s revisit the properties of cryptographic hash functions: 1.
Regardless of the length of the input, the output is always the same size. So no matter how short (and weak!) your password is, you will get a cryptographic key of the right size. (But a short and weak password is susceptible to the password-cracking methods we discussed above.) 2. The same input always results in the same output. So your password will always generate the corresponding cryptographic key you need. 3. It is infeasible to generate the input from the output. So if someone manages to get your key, at least they won’t be able to re-create your password. 4. It is infeasible to find two different inputs that result in the same output. So someone trying to crack your password would be unlikely to even find some other password that results in the same cryptographic key as yours. 5. A small change to the input changes the output so extensively that the new hash value appears uncorrelated with the old hash value. Well, this property isn’t as useful for cryptographic key generation… In Context: When Precautions Are Not Enough In 2016, longtime activist DeRay Mckesson had his Twitter account and two email addresses compromised in a targeted attack despite having two-factor authentication set up. His adversary was able to get control of his phone by calling Verizon and requesting a new SIM card and knew enough about Mckesson to convince Verizon to do so. Once the adversary had access to Mckesson’s phone number, they were able to receive password-reset codes to change his passwords and gain access to his accounts. It is a reminder that no security measure will be perfect, and for those who are subject to targeted attacks (in this case, Mckesson was targeted for his support of Black Lives Matter), extra vigilance is necessary.
Here, the fact that access to Mckesson’s phone number could be used to force a password reset reduced account protections from two-factor to one factor: only phone access separated an adversary from Mckesson’s account rather than a password plus phone access. External Resources
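The key derivation described in this chapter is available in Python’s standard library as PBKDF2, a salted, deliberately slow construction built on a cryptographic hash. A sketch illustrating the hash-function properties listed above (the salt and passphrases are illustrative):

```python
import hashlib

def derive_key(password: str, salt: bytes) -> bytes:
    """Derive a 32-byte (256-bit) encryption key from a password.
    The high iteration count makes each guess expensive for a cracker."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = b"stored-alongside-the-ciphertext"   # the salt need not be secret
key1 = derive_key("remake.catfight.dwelled.lantern", salt)
key2 = derive_key("remake.catfight.dwelled.lantern", salt)
key3 = derive_key("remake.catfight.dwelled.lantean", salt)  # one letter off

print(len(key1))        # 32: always the right key size, whatever the input
print(key1 == key2)     # True: same password, same key, every time
print(key1 == key3)     # False: a small change gives an unrelated key
```

Property 1 gives a correctly sized key from any password, property 2 makes decryption repeatable, and property 3 means a stolen key does not reveal the password; the iteration count is what slows down the cracking attacks described earlier in the chapter.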
What You’ll Learn 1. The difference between symmetric- and asymmetric-key cryptography 2. How public-key cryptography works Encryption protocols can be classified into two major types. In symmetric-key cryptography, the key used to decrypt a message is the same as (or easy to transform from) the key used to encrypt the message. This is the case for the basic ciphers (Caesar, Vigenère, and the one-time pad) that we described in the chapter “What Is Encryption?” (There are, of course, modern symmetric-key ciphers that are used—for example, to encrypt the data on your phone or computer.) As we saw, these protocols are challenging to use for communication because you need to first find some way to privately share the key with your communication partner. Diffie-Hellman key exchange gave a method for two people to generate a shared key (that can be used in a symmetric-key encryption protocol) while only communicating over an insecure channel (such as the internet). Asymmetric-key cryptography or public-key cryptography solves the key sharing problem in a different way. Rather than have one key that is used to both encrypt and decrypt, public-key cryptography uses two keys: one key to encrypt (called the public key) and one key to decrypt (called the private key). This pair of keys has the following properties: 1. It is infeasible to generate the private key from the public key: the keys must be generated together. 2. A message that is encrypted by the public key can only be (feasibly) decrypted with the corresponding private key. Suppose Bobby wants to send Assata an encrypted message. Assata creates a private-key/public-key pair and sends Bobby her public key (over an insecure channel). Bobby uses the public key to create the ciphertext and sends the ciphertext to Assata. The ciphertext can only be decrypted using Assata’s private key.
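The chapter doesn’t commit to a particular public-key algorithm, but RSA is a classic one, and a toy instance shows the two key properties in action. Real keys use primes hundreds of digits long; every number below is toy-sized and purely illustrative:

```python
# Toy RSA (illustrative only; real keys use primes hundreds of digits long).
p, q = 61, 53
n = p * q                      # 3233: part of both keys
phi = (p - 1) * (q - 1)        # 3120: used only during key generation
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (2753): infeasible to find
                               # from (n, e) alone without knowing p and q

public_key = (n, e)            # Assata publishes this
private_key = (n, d)           # Assata keeps this secret and secure

def encrypt(m: int, key) -> int:
    n, e = key
    return pow(m, e, n)        # anyone with the public key can do this

def decrypt(c: int, key) -> int:
    n, d = key
    return pow(c, d, n)        # only the private-key holder can do this

m = 65                         # a message, encoded as a number smaller than n
c = encrypt(m, public_key)     # Bobby encrypts with Assata's public key
print(decrypt(c, private_key)) # → 65: only the private key recovers it
```

The security rests on the fact that recovering d from (n, e) requires factoring n into p and q, which is infeasible at real key sizes (requires Python 3.8+ for the modular inverse via `pow(e, -1, phi)`).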
Even though anyone may have Assata’s public key, the only thing that can be done with the public key is encrypt messages that can only be decrypted using Assata’s private key. Security is therefore achieved by keeping the private key private: secret and secure. In this model, anyone can, in fact, publish their public key. For example, Assata could publish her public key online so that anyone wishing to send Assata an encrypted message could encrypt that message with her public key first. Likewise, Bobby could create his own pair of public and private keys and publish his public key online so that others could send him encrypted messages that only Bobby could decrypt with his (securely stored) private key. Revisiting Diffie-Hellman Key Exchange: Public-Key or Symmetric-Key Cryptography? Let’s revisit Diffie-Hellman key exchange through the lens of symmetric and public-key cryptography. Recall that Assata and Bobby agree (publicly/insecurely) on a number p. Assata picks a (secret) number a and computes p ☆ a to send (publicly/insecurely) to Bobby. One could thus view a as Assata’s private key, p ☆ a as Assata’s public key, and this scheme as part of a public-key protocol. But Bobby picks his own secret number b and combines it with Assata’s public key to get p ☆ a ☆ b. Likewise, Assata combines Bobby’s “public key” p ☆ b with her own private key to get p ☆ b ☆ a. Since p ☆ a ☆ b = p ☆ b ☆ a, Assata and Bobby have a common key to use for encryption, and they use the same key for decryption. In this way, this is part of a symmetric-key protocol. For these reasons, the Diffie-Hellman key exchange lies somewhere between public-key and symmetric-key cryptography. Combining Public-Key and Symmetric-Key Cryptography Public-key encryption is usually more computationally expensive than symmetric-key encryption. To achieve the same security guarantees (e.g., against brute force and other attacks), public keys need to be much longer than symmetric keys. 
Also, performing the encryption itself takes longer using public keys than symmetric keys. There is also the problem that the longer you use a key for encryption, the more ciphertext examples there are to try to use to break the encryption (other than brute force)—that is, keys tend to age poorly. For these reasons, public keys are generally used to encrypt a symmetric key for a given (communication) session. Suppose Bobby wishes to send Assata an encrypted message. Bobby generates a symmetric encryption key and encrypts the message with the symmetric key using a symmetric cipher. He then encrypts the symmetric key using Assata’s public key. He sends the encrypted message and the encrypted key to Assata. Assata decrypts the encrypted key using her private key and then uses the result to decrypt the encrypted message. Since the public key is only used to encrypt keys (which are typically random-looking strings), the public key does not age, because methods of breaking the encryption that rely on human-language phrases would fail. An added benefit is that if one message is successfully decrypted, that does not help in breaking the encryption of a different message, since each message is encrypted with a different key. In Context: Antinuclear Activism and Pretty Good Privacy A particularly robust implementation of public-key cryptography is PGP, an acronym for the understatement Pretty Good Privacy. (An interoperable, free, and open-source version of PGP is GPG, or GNU Privacy Guard.) PGP encryption is most commonly used for encrypting email communications, with several plug-ins and email clients supporting it. There are a number of (synchronized) online directories of PGP keys, each associated with an email address, that allow Bobby to look up Assata’s PGP key in order to send her an encrypted email.
Phil Zimmermann, a longtime antinuclear activist, created PGP in 1991 so similarly inclined people might securely use bulletin-board services (BBSes, the Reddit of the 1980s) and securely store messages. He developed PGP as an open-source project, and no license was required for its noncommercial use. Zimmermann initially posted it to a newsgroup that specialized in grassroots political organizations, mainly in the peace movement; from there, PGP made its way to a newsgroup used to distribute source code and quickly found its way outside the United States. Users and supporters included dissidents in totalitarian countries, civil libertarians, and cypherpunks. However, at the time, cryptosystems using keys larger than forty bits were considered munitions under US export regulations. PGP was initially designed to support 128-bit keys. In February 1993, Zimmermann became the formal target of a criminal investigation by the US government for “munitions export without a license.” Zimmermann challenged this by publishing the entire source code of PGP in a book, which was distributed and sold widely. Anybody wishing to build their own copy of PGP could cut off the covers, separate the pages, and scan them using an OCR program, creating a set of source code text files. While the export of munitions (guns, bombs, planes, and software) was (and remains) restricted, the export of books is protected by the First Amendment. After several years, the investigation of Zimmermann was closed without filing criminal charges against him or anyone else. US export regulations regarding cryptography remain in force but were liberalized substantially throughout the late 1990s. PGP encryption no longer meets the definition of a nonexportable weapon. What to Learn Next • Authenticity through Cryptographic Signing External Resources
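The hybrid pattern described in this chapter (a fresh symmetric key per message, with only that key encrypted under the recipient’s public key) can be sketched in Python. Both primitives below are toy stand-ins: a textbook-sized RSA modulus and a hash-based XOR stream in place of a real symmetric cipher:

```python
import hashlib, os

# Toy building blocks (stand-ins for real RSA and a real symmetric cipher).
n, e, d = 3233, 17, 2753       # toy RSA modulus, public and private exponents

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a keystream derived from the key.
    Applying it twice with the same key recovers the original data."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Bobby: generate a fresh symmetric key for this message, encrypt the
# message with it, and encrypt the key itself with Assata's public key.
session_key = os.urandom(1)            # toy-sized so it fits below n
ciphertext = xor_cipher(session_key, b"Meet at the library at noon")
encrypted_key = pow(session_key[0], e, n)

# Assata: recover the session key with her private key, then the message.
recovered_key = bytes([pow(encrypted_key, d, n)])
print(xor_cipher(recovered_key, ciphertext))   # → b'Meet at the library at noon'
```

Only the short session key pays the cost of public-key encryption; the message body, however long, uses the fast symmetric cipher, and a fresh key per message means one broken message doesn’t help break the next.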
What You’ll Learn 1. How to achieve the digital equivalent of a signature 2. How cryptographic signatures can be used to provide authenticity 3. What electronic authenticity means 4. How cryptographic signatures can be used to propagate trust Public-key cryptographic systems can often be used to provide authenticity. In PGP, this is enabled by the complementary nature of the public and private keys. When a key pair is generated, two cryptographic keys are created, and either can be used as the public key; the choice as to which is the public key is really just an arbitrary assignment. That is, either key can be used for encryption as long as the other one is used for decryption (and the one used for decryption is kept private to provide security). Once you have assigned one cryptographic key as the public key and the other cryptographic key as the private key, you could still choose to encrypt a message with your private key. However, then anyone with your public key could decrypt the message. If you make your public key, well, public, then anyone could decrypt your message, and so this would defeat the purpose of using encryption to achieve message privacy. However, this should illustrate to you that the only person who could have encrypted a message that can be decrypted with your public key is you, the person with your private key. Encrypting a message with your private key provides the digital equivalent of a signature and is called cryptographic signing. In fact, cryptographic signing provides two properties of authenticity: 1. Attribution. You wrote the message (and not someone else). 2. Integrity. The message is received as it was written—that is, it has not been altered. The second property comes from the fact that a tamperer would have to alter the ciphertext such that decrypting the ciphertext with your public key generates the tamperer’s desired altered plaintext. But this is completely infeasible.
These properties are only meaningful if you are the only one who controls your private key, since anyone who gains control of your private key could cryptographically sign their own altered text. Cryptographically Signing Cryptographic Hashes In practice, rather than encrypting the entire message, one would encrypt a cryptographic hash (a.k.a. digest or fingerprint) of the message for the purpose of cryptographic signing. This is done for efficiency reasons. Let’s consider the protocol for Assata signing a message and Bobby verifying the signature. Assata takes a cryptographic hash of her message and encrypts the result with her private key, creating a signature, which she can attach to the message. Bobby takes the signature and decrypts it using Assata’s public key, recovering the hash that Assata generated. He then takes his own cryptographic hash of the message and compares the result to the decrypted hash he received from Assata. Recall that cryptographic hash functions are infeasible to counterfeit. So if the two hashes that Bobby generates (one directly from Assata’s message and one from Assata’s signature) are the same, then we know two things: 1. Only Assata could have generated the signature. Only Assata could encrypt something that can be decrypted with her public key, since she is the only one with her private key. 2. The message has not been altered since Assata wrote it. If someone altered the message, then the hash of the message would differ from the hash contained in the signature. Any counterfeiter would therefore have to forge a new signature but can’t generate one without Assata’s private key. That is, we obtain authenticity cryptographically. Note that Edgar, in a man-in-the-middle attack, could simply remove the signature. So for cryptographic signing to be effective, you need to agree to use cryptographic signing all the time.
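The sign-and-verify protocol above can be sketched with a toy RSA key pair. Reducing the SHA-256 hash modulo the tiny modulus is a shortcut forced by the toy key size; real signature schemes pad the full hash instead. All numbers here are illustrative:

```python
import hashlib

n, e, d = 3233, 17, 2753       # toy RSA key pair (real keys are far larger)

def sign(message: bytes) -> int:
    """Assata: hash the message, then encrypt the hash with her private key."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Bobby: decrypt the signature with Assata's public key and compare it
    to his own hash of the message."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

msg = b"The march starts at noon."
sig = sign(msg)
print(verify(msg, sig))   # True: attribution and integrity both check out

# An altered message hashes differently, so (with overwhelming probability
# at real key sizes) verification against the old signature fails.
print(verify(b"The march starts at one.", sig))
```

Attribution holds because only the private-key holder could have produced a signature that decrypts correctly; integrity holds because any change to the message changes its hash.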
Modern end-to-end encrypted messaging apps generally have signing built in by default, though this is often invisible to the average user. Applications of Cryptographic Signing As in our example above, cryptographic signing can provide authenticity to messages in a similar way that traditional handwritten signatures and wax seals did. However, you can cryptographically sign more than just messages (such as emails). Verifying Software Perhaps the most explicit and common use of cryptographic signatures is for verifying software, even if you aren’t aware of it. Software such as apps will only do what their developers want and say they will do if they haven’t been tampered with on the way from the developer to your computer or phone. Responsible developers will sign their products in the same way as we described signing for messages. A program or app is really just a computer file (or set of files), which is just a sequence of characters or a type of message. If a developer has signed their software using public-key cryptography, a careful user can check the signature by getting the developer’s public key and performing the validation illustrated above. (The developer should provide their public key via a channel different from the one you downloaded the software from. This would allow an out-of-band comparison as described in the chapter “The Man in the Middle”—the public key, which you use for validation, is out of band from the message or software download.) Managing Fingerprint Validation and the Web of Trust To trust Assata’s public key, Bobby should really verify her public key by checking the fingerprint of the key, as we have described. Otherwise, Edgar, the interloper, could furnish Bobby with a public key that he holds the corresponding private key for. But if Bobby has verified that Assata’s public key is genuinely hers, Bobby can cryptographically sign Assata’s public key (with his own private key). 
This allows Bobby to keep track of the public keys that he has verified and allows him to share Assata’s key with other people as follows. Suppose Cleaver wants to send Assata an encrypted message and wants to be sure that Edgar is not going to play the man in the middle. But Cleaver does not have a secondary channel through which to verify Assata’s public key. However, Cleaver has received and verified Bobby’s public key. So Bobby can send Assata’s public key to Cleaver with his signature. If Cleaver trusts Bobby and has verified his public key, then Cleaver can verify his signature on Assata’s public key and trust that Assata’s public key is genuine. This is the basis of the web of trust. Rather than directly verifying fingerprints of keys, you can do so indirectly, as long as there is a path of trust between you and your desired correspondent. In Context: Warrant Canaries Warrant canaries or canary statements inform users that the provider has not, by the published date, been subject to legal or other processes that might put users at risk, such as data breaches, releasing encryption keys, or providing back doors into the system. If the statement is not updated according to a published schedule, users can infer that there has been a problem that may put their past or future data at risk. Riseup.net, for example, maintains a quarterly canary statement that is cryptographically signed so that you can ensure its authenticity—that is, that the people at riseup.net wrote the statement. They include a link to a news article dated on the day of release in their statement to give evidence that the statement was not published before the day of release. The use of warrant canaries began as a way to circumvent gag orders in the US that accompany some legal processes in which the US government can force someone to withhold speech. 
On the other hand, the US government can rarely force someone to say something (particularly that isn’t true) and so could not compel a provider to keep up a canary statement that falsely claims nothing has happened. The term originates from the use of canaries in coal mines to detect poisonous gases: if the canary dies, get to clean air quickly! What to Learn Next • Metadata External Resources
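The web of trust described in this chapter reduces to a reachability question: is there a path of verified signatures from a key you checked yourself to the key you want to trust? A sketch, with signatures modeled abstractly as (signer, key owner) pairs; Huey is a hypothetical fourth participant added for illustration:

```python
# Web-of-trust sketch: a key is trusted if a chain of signatures leads back
# to a key you verified yourself. Signatures are modeled abstractly here.
signed = {
    ("Bobby", "Assata"),    # Bobby verified and signed Assata's key
    ("Cleaver", "Bobby"),   # Cleaver verified Bobby's key in person
    ("Assata", "Huey"),     # Assata verified and signed Huey's key
}

def trusts(me: str, target: str) -> bool:
    """Is there a path of signatures from a key I verified to the target?"""
    frontier, seen = {me}, set()
    while frontier:
        signer = frontier.pop()
        seen.add(signer)
        for s, owner in signed:
            if s == signer and owner not in seen:
                frontier.add(owner)
    return target in seen

print(trusts("Cleaver", "Assata"))  # True: Cleaver -> Bobby -> Assata
print(trusts("Cleaver", "Edgar"))   # False: no signature path leads to Edgar
```

In a real web of trust each edge is a cryptographic signature that must itself be verified, and users can set policies such as requiring multiple independent paths before trusting a key.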
What You’ll Learn 1. What metadata is 2. What metadata can reveal 3. Why metadata is difficult to protect What Is Metadata? Metadata is all the information about the data but not the data itself and is best illustrated with a few examples. 1. For a phone call, the metadata will include the phone numbers involved, the start time of the call, and the length of the call. For cell phone calls, the metadata will likely include the location of your phone (the GPS coordinates), the cell tower that you are connected to, and even the type of phone you are using. Metadata of phone calls would not include the audio transmission itself—this would be the “data.” The historical use of recording phone-call metadata is for the purposes of billing. 2. Most modern digital photographs include information about the time and place the photo was taken, the type of camera used, and its settings. In this case, the photo itself is the data. Many websites, such as Facebook, Twitter, and Instagram, remove this metadata for your privacy when you upload a photo or video. Others do not, such as Google, Flickr, and YouTube. 3. Almost all modern color printers, at the request of the US government to printer manufacturers over fears of their use in money counterfeiting, print a forensic code on each page that may be visible or not. In this case, the printed sheet (less the forensic code) would be the data, and the information encoded by the forensic code would be the metadata. The forensic code, which may or may not be visible to the human eye, has been known to include the day and time the sheet was printed and the serial number of the printer used. The first disclosure by Edward Snowden revealed that the NSA was collecting all the metadata of calls made by Verizon customers, forcing a conversation about metadata into the public consciousness. A debate on what privacy was being invaded by this practice ensued. 
Earlier that year, the Associated Press had protested the Justice Department’s collection, by subpoena, of its phone records, saying, “These records potentially reveal communications with confidential sources across all of the newsgathering activities undertaken by the AP during a two-month period, provide a road map to AP’s newsgathering operations, and disclose information about AP’s activities and operations that the government has no conceivable right to know.” A court opinion noted that an observer with access to GPS data gathered through such metadata collection “can deduce whether he is a weekly churchgoer, a heavy drinker, a regular at the gym, an unfaithful husband, an outpatient receiving medical treatment, an associate of particular individuals or political groups.” In an internal document, the NSA has referred to metadata as one of the agency’s “most useful tools.”

Metadata and the Internet

When you visit a website, information is sent between your computer and the server of the website through the internet. At a basic level, a message is sent from your computer to the server requesting the contents of the website, and then the contents of the website are sent from the server to your computer. The information being sent over the internet is often referred to as traffic, and any message being sent will actually be broken up into many shorter messages, or packets. Each packet has three main parts:

1. The header includes the internet address of the sender and the receiver (e.g., your computer and the website’s server) and a description of the type of data that is being sent (e.g., HTML).
2. The data is the content of the message (e.g., the content of the web page or part of the web page).
3. The trailer indicates the end of the packet and provides proof that the packet has not been corrupted in transit (using a hash function).

The metadata is composed of the header and the trailer.
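The three-part packet structure can be sketched as follows. This is a simplified model, not a real packet format such as IP or Ethernet; the trailer’s integrity check is shown as a hash of the data, matching the description above:

```python
import hashlib

def make_packet(src: str, dst: str, content_type: str, data: bytes) -> dict:
    """Build a simplified packet: header + data + trailer."""
    header = {"src": src, "dst": dst, "type": content_type}
    # The trailer carries a hash of the data so the receiver can
    # detect corruption in transit.
    trailer = {"checksum": hashlib.sha256(data).hexdigest()}
    return {"header": header, "data": data, "trailer": trailer}

def verify_packet(packet: dict) -> bool:
    """Recompute the hash; a mismatch means the data was corrupted."""
    expected = hashlib.sha256(packet["data"]).hexdigest()
    return packet["trailer"]["checksum"] == expected

pkt = make_packet("203.0.113.7", "198.51.100.2", "HTML", b"<html>...</html>")
assert verify_packet(pkt)
# Corrupting one byte of the data breaks verification:
pkt["data"] = b"<html>!..</html>"
assert not verify_packet(pkt)
# The header and trailer together are the metadata; only `data` is content.
```

Note that even a perfectly protected `data` field leaves the header readable: the packet cannot be delivered otherwise.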
The header is difficult to protect or conceal because it indicates where a packet should be sent. Just like sending a letter, an address is needed for delivery. Your internet address, or IP address, is related to your physical location; in fact, your physical location can often be determined from your IP address. This description applies to any information that is sent over the internet—email, video streaming, VOIP calls, and instant messages included.

In Context: Protecting a Whistleblower

In May 2017, Reality Winner disclosed NSA documents reporting on Russian interference in the 2016 US presidential election. Her arrest, days before the story was published, prompted much speculation about how she was so quickly identified as the whistleblower, with many people blaming the Intercept for its handling of the story. Reality Winner had anonymously mailed a color printout of the documents to the Intercept. In standard journalistic fashion, the Intercept sent a photograph of the documents to the NSA for verification. The same photograph was redacted and made public in their reporting. Shortly after the publication of the story, several people pointed out that a printer forensic code was visible in the photo and determined the day and time the document was printed and the serial number of the printer. While it is possible that the FBI could have identified Reality Winner from this information (to best protect its source, the Intercept should have redacted the forensic code from the photo), it is probably more likely she was outed by logs of file accesses on her work computer.
What You’ll Learn

1. Who has access to what on the internet
2. Technologies that allow for anonymous communications online
3. What anonymity is and the pitfalls of anonymity

In order to communicate online, packets of information need to be addressed to your computer, whether that information is from an instant-message conversation, an email, or browsing the web. In this section, we mostly focus on web browsing, although the same ideas apply in most settings. Your computer’s address, or IP address, is how internet communications reach your computer, in the same way as a mailing address allows an envelope or package to reach your mailbox. For that reason, your computer’s current IP address (which changes depending on where you are connecting to the internet) is related to your physical location. How refined that physical location is depends on how much information the internet service provider (ISP) reveals and to whom they are willing to reveal it. The ISP knows which cable, phone line, or cell tower you are receiving internet traffic through but may reveal only zip-code-level information to the many IP geolocation websites, or it may reveal the location of a specific house. Your IP address is just one piece of metadata that is necessary in order to get information to your computer. When browsing the web, though, a lot of other metadata, while not strictly necessary, is transmitted to “maximize your browsing experience.” This information includes details such as what browser plug-ins you use, your time zone, and your screen size, and it can be used as a unique identifier across the different IP addresses from which you connect to the internet. Who has access to all this metadata that can be used to identify you? Without encryption, such as using https, any eavesdropper would have access to this metadata as well as the content of your communications.
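How a handful of innocuous-looking attributes can combine into a near-unique identifier can be sketched like this. The attribute names and values are hypothetical; real fingerprinting scripts collect many more:

```python
import hashlib

def browser_fingerprint(attributes: dict) -> str:
    """Hash a browser's reported attributes into one identifier.

    None of these values is secret or identifying on its own, but the
    combination is often unique to a single browser installation.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

assata = {
    "plugins": "pdf-viewer,media-player",
    "timezone": "America/New_York",
    "screen": "1440x900",
    "user_agent": "Mozilla/5.0 (example)",
}
print(browser_fingerprint(assata))  # same browser, same identifier
```

Because the identifier depends only on what the browser reports about itself, it stays the same whether you connect from home, work, or a coffee shop, tying together sessions that have different IP addresses.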
Encryption will protect some metadata from your ISP and eavesdroppers (such as which browser you are using) but not your IP address and not the web domains you are visiting. And the servers of the websites you are visiting will have access to your metadata as well as any content. But since metadata is used to get information to you, is there any way to protect this metadata, and who could you protect it from? We describe two ways to anonymize your web browsing.

Trusting a Middle Man: Virtual Private Networks

Virtual private network (VPN) technology began as a means to extend a local network (such as a university’s or company’s network) to remote locations (such as off-campus housing and home offices) so that no matter where you were, you could access the same resources as you would on the local network (such as library and software subscriptions). While connected to a VPN, a web page host will see the IP address of the local network the VPN is extending as your address, rather than the IP address of your home. For this reason, VPN use has become popular for anonymizing your location. A VPN operates as a (hopefully benign) middle man (illustrated below). Rather than sending all her web requests directly, Assata sends all her web requests to her VPN, the VPN fetches her request from the internet for her, and then the VPN sends the results back to Assata. The specifics of how this is done vary between different VPN services, but generally, the communications between you and the VPN are encrypted. The protective quality of a VPN relies on many other people also connecting to that VPN. An eavesdropper looking at communications to and from the VPN will be able to identify the individuals connecting to the VPN and the web requests the VPN is fetching but ideally will be unable to match those web requests with the corresponding users because there are too many simultaneous requests in and out of the VPN.
Of course, the VPN provider knows all your internet behavior, and with their cooperation, an adversary would too: you are trusting your VPN provider with that information. However, your ISP (without using a VPN) has access to the same information: you are putting the same trust in your VPN provider as you must in your ISP. The difference is that your ISP does not conceal your IP address from destination servers on the internet, while a VPN does. Some increased privacy risk, however, comes with using the same VPN across many connection locations (e.g., home, work, coffee shop), giving a single entity (that VPN) a more complete view of your internet use than is available to the ISP at each location.

Not Trusting the Middle Man: The Onion Router

The Onion Router, or Tor, is a means of accessing the internet anonymously while sidestepping trust issues; it gets its name from its use of layers of encryption (like the layers of an onion). Rather than using one middle man with whom you trust all your information, you use (at least) three intermediaries, chosen at random from a selection of thousands of volunteer servers (illustrated below). Traffic through this path of intermediaries is encrypted so that the first (entry) node only knows that you are accessing the internet via Tor, the second (relay) node only knows that someone is accessing something on the internet via Tor (but not who specifically or what specifically), and the last (exit) node only knows that a certain web page (for example) is being requested by a Tor user (but not which Tor user). This is accomplished through Diffie-Hellman key exchanges, first with the entry node, then with the relay node, and finally with the exit node, as follows (and illustrated below).

(1) Assata establishes a cryptographic key that she shares with the entry node (which we will call the entry key, in red). This establishes an encrypted communication channel between Assata and the entry node.
(2) Assata uses this encrypted channel to communicate with the relay node via the entry node. The traffic between the entry and relay nodes is not encrypted, but Assata uses the channel via the entry node to establish a cryptographic key that she shares with the relay node (the relay key, in blue). All that the relay node knows is that it is setting up a shared key with some Tor user, but not the identity of that Tor user.

(3) This process is repeated to establish an encryption key that Assata shares with the exit node (the exit key, in green).

(4) This creates a sequence of keys (red, blue, green) that allow for encryption between Assata and the entry, relay, and exit nodes, respectively.

For Assata to send a request to disruptj20.org, she encrypts the request, addressed to disruptj20.org, with the green key and addresses this to the exit node; she wraps this in a message addressed to the relay node and encrypts it with the blue key; she wraps this in a message addressed to the entry node and encrypts it with the red key. The message is then sent to the entry node. The first layer of encryption is removed by the entry node (with the red key that the entry node shares with Assata), revealing a message addressed to the relay node. The second layer of encryption is removed by the relay node (with the blue key that the relay node shares with Assata), revealing a message addressed to the exit node. The third layer of encryption is removed by the exit node (with the green key that the exit node shares with Assata), revealing a message addressed to disruptj20.org, which the exit node forwards along. This is illustrated below (1). For disruptj20.org to send information back to Assata, the web server sends the information back to the exit node. The exit node encrypts it with the green key and sends it to the relay node. The relay node encrypts it with the blue key and sends it to the entry node. The entry node encrypts it with the red key and sends it to Assata.
Assata can remove all three layers of encryption because she has all the necessary keys. This is illustrated below (2). In order to re-create your path through the Tor network, and therefore your web request, your adversary would need to control all three nodes that you select as your entry, relay, and exit nodes. Even an adversary who controls 80 percent of the Tor network would have only about a 50 percent chance of controlling all three nodes that you select (0.8 × 0.8 × 0.8 ≈ 0.51). Since there are thousands of Tor nodes (that anyone can volunteer to operate), this is unlikely. An alternate attack that an adversary could take would be a confirmation attack. In this scenario, the adversary is trying to prove that you have visited a particular web service. If they can access your web traffic (e.g., through your ISP) and the web traffic of the target web service (through legal or extralegal means), then your adversary may be able to match up your Tor traffic with the web service’s incoming Tor connections based on their timing. This type of correlation was used in the case against Jeremy Hammond, who was convicted for hacking activities conducted as part of the activist collective Anonymous. Other attacks have been made on Tor too, but the Tor project is very responsive in improving its technology and security. We discuss obstacles to anonymous browsing below; the pitfalls a user may run into, as well as best practices for accessing the web anonymously, are covered in the chapter “Protecting Your Identity.”

Use and Prevention of Anonymous Browsing Technologies

Many people in countries where censorship of the internet is common, such as China and Iran, use VPNs and Tor to access the uncensored web. On the other hand, evidence of VPN traffic can be gleaned from the metadata of internet communications, and governments can use this to block all such communications, as has been done in China and Syria in their censorship efforts.
Other countries, such as Iran, are known for blocking access to specific VPN providers that are not sanctioned by the government. Tor as a whole can be blocked from use (e.g., by a government), since Tor nodes are publicly listed. This is done by simply blocking all traffic addressed to the Tor nodes. This can be overcome by the use of bridges, a set of Tor nodes that are not publicly listed, which you use in lieu of a publicly listed entry node. To get access to a small set of bridge nodes, you need to email the Tor project from an account with one of a restricted set of email providers (e.g., Google, Riseup, or Yahoo) to request one. Tor can also be blocked by packet inspection—that is, by looking at the metadata of the communications (as with VPN traffic). The Tor project makes this process challenging by using methods of obfuscating Tor internet traffic so that it doesn’t look like Tor traffic. VPNs and Tor are also used to gain access to particular sites that might not be available in your jurisdiction because of a choice made by the web host. This is common for many media platforms, such as Hulu and Netflix. To this end, companies will often block access to content from known VPN service providers or from Tor exit nodes.

In Context: Disruptj20

On January 20, 2017, mass protests erupted around the inauguration of the forty-fifth president of the United States. Much of the organizing for those events was coordinated on the website disruptj20.org. In August 2017, it came to light that the US Department of Justice had issued a warrant to the disruptj20.org web host DreamHost requesting, among other items, “all HTTP request and error logs,” which would include the IP addresses of all individuals, purported to be 1.3 million people, who visited the website, along with which subpages they visited, how often they did so, and any text a visitor may have typed into the web page. Of course, anonymous browsing technologies would have protected the IP addresses of visitors to that site.
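The layered encryption at the heart of Tor, described earlier in this section, can be sketched in a few lines. This is a minimal sketch: the XOR keystream stands in for a real cipher (Tor uses AES), the key values are placeholders, and per-hop addressing is reduced to comments:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream built from repeated hashing -- for illustration only."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, message: bytes) -> bytes:
    """XOR with the keystream; applying it a second time decrypts."""
    return bytes(a ^ b for a, b in zip(message, keystream(key, len(message))))

# The keys Assata established hop by hop, as described above.
entry_key, relay_key, exit_key = b"red", b"blue", b"green"

request = b"GET https://disruptj20.org/"
# Assata wraps the request in three layers: innermost for the exit node,
# outermost for the entry node.
onion = xor_crypt(entry_key, xor_crypt(relay_key, xor_crypt(exit_key, request)))

# Each node peels exactly one layer; only the exit node sees the request,
# and only the entry node knows the message came from Assata.
at_relay = xor_crypt(entry_key, onion)    # entry node peels the red layer
at_exit = xor_crypt(relay_key, at_relay)  # relay node peels the blue layer
plaintext = xor_crypt(exit_key, at_exit)  # exit node peels the green layer
assert plaintext == request

# Why compromising a circuit is hard: an adversary running 80 percent of all
# nodes controls a given three-node path only 0.8 ** 3 = 51.2% of the time.
```

The return path works the same way in reverse: each node adds its layer, and Assata, holding all three keys, removes them all.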
What to Learn Next

• Protecting Your Identity
02: Digital Suppression of Social Movements (in the US)

What You’ll Learn

1. How the US suppresses social movements
2. What COINTELPRO (Counter Intelligence Program) was and the mechanisms employed to suppress social movements of that era

The US has a long history of interference, including on its own soil in the form of suppressing the efforts of social movements and, in particular, liberatory and leftist social movements. Labor organizing, independence, civil rights, and environmental movements have all been subject to opposition by the US government, often at the behest of or in cooperation with large corporations. In trying to grapple with the risks associated with a social movement not attending to digital security, it is helpful to look at how the State has interfered with social movements in the past. This history can be overwhelming, and it can be tempting to dismiss it as something that happened in the past but is not happening now.
Going through this history can also lead to defeatism, especially in light of the additional digitally enhanced tools that the State can employ against an adversary (perceived or actual). However, if we are not to condemn ourselves to repeat the mistakes of the past, we need to attend to enough history to learn appropriate lessons so that our movements may be successful moving forward. In order to do so without turning this into a history textbook, we draw on the scholarship of Jules Boykoff, who categorized the ways in which the US interfered with social movements in the twentieth century. Boykoff enumerated twelve modes of suppression, which we compress to seven in this presentation. Understanding these historical modes will allow us to predict how digital surveillance could support or enhance those modes, as we will discuss in the chapter “Digital Threats to Social Movements.” But more importantly, we will be able to see how encryption and attending to digital hygiene can protect social movements against (some of) these oppositional forces, as we will cover in part 3 of this book.

Modes of Suppression

These are seven ways in which the US has suppressed and continues to suppress social movements, each with a few examples of its use. Unfortunately, these examples are far from exhaustive.

1. Direct Violence

Beatings, bombings, shootings, and other forms of violence are carried out by the State or other institutions or nodes of power against dissident citizens. This may be the result of the policing of large groups (such as when the Ohio Army National Guard fired at students during an antiwar protest at Kent State University, killing four people and injuring nine others) or targeted assassinations (such as the FBI-organized night-raid shooting of Black Panther Party leader Fred Hampton). To risk an understatement, these actions discourage participation in social movements for fear of life and limb.

2. The Legal System

The legal system allows for harassment arrests, public prosecutions and hearings, and extraordinary laws that are used to interfere with individuals in biased fashion. The State arrests activists for minor charges that are often false and sometimes based on obscure statutes that have remained on the books, buried and dormant but nevertheless providing vessels for selective legal persecution. Public prosecutions and hearings can land dissidents in jail or consume their resources in legal proceedings that sidetrack their activism and demobilize their movements. Current supporters and potential allies are discouraged from putting forth dissident views, as prosecutions and hearings publicized in the mass media reverberate outward into the public sphere. In another form of legal suppression, the State promulgates and enforces exceptional laws and rules to tie up activists in the criminal justice labyrinth. This is the legal system being used to squelch dissent. Controversial stop-and-frisk programs allow police officers to briefly detain and at times search people without probable cause. Free-speech zones greatly limit the time, place, and manner of protests. Those arrested at First Amendment–protected protests on the inauguration of Donald Trump faced public prosecutions that were unlikely to ever reach a conviction. And certain crimes, such as arson or the destruction of property, are elevated to terrorism when they are accompanied by a political motive, allowing the State to greatly increase the punishment doled out. Other laws are specifically tailored to prevent activism, such as “ag-gag laws,” which criminalize the filming of agricultural operations (done to expose the abuse of animals).

3. Employment Deprivation

One’s political beliefs or activities can result in threats to, or actual loss of, employment. Some dissidents are not hired in the first place because of their political beliefs.
This is typically carried out by employers, though the State can have a powerful direct or indirect influence. Recently, we have seen university professors forced out of their jobs or stripped of job offers, as with Steven Salaita, whose offer of employment as a professor of American Indian studies was withdrawn following university donors’ objections to a series of tweets critical of Israel and Zionism. For several years (until the requirement was struck down by a federal court), government contractors in Texas were required to sign a pledge not to participate in the pro-Palestinian Boycott, Divestment, and Sanction movement or else have their contracts canceled; this resulted in the firing of an American citizen of Palestinian descent who worked as a school speech pathologist and refused to sign the statement.

4. Conspicuous Surveillance

Conspicuous surveillance aims primarily not to collect information (which is best done surreptitiously) but to intimidate. The intended result is a chilling effect, in which individuals guard their speech and action out of fear of reprisal. It may drive away those engaged in activism or make it difficult to recruit new activists. Although the chilling effect has been deemed unconstitutional, it is difficult to prove harm in court (as required), so it is a safe means of suppression (from the perspective of the surveiller). The FBI has a long history of “knock and talks,” simply visiting the houses of dissidents and activists (and those of their families and employers) to “have a chat” in order to let people know that they are being watched.

5. Covert Surveillance

Surveillance might be concentrated or focused, as with the use of spies, targeted wiretaps, and subpoenas or warrants for data; the use of infiltrators (covert agents who become members of the target group); or the use of informants (existing group members who are paid or threatened in order to extract information).
Surveillance might also be diffuse, as with the accumulation, storage, and analysis of individual and group information obtained through internet monitoring, mail openings, and other mass-surveillance techniques. The FBI informant program is sizable, counting over fifteen thousand informants in 2008. In the wake of 9/11, the FBI and large law enforcement agencies such as the NYPD turned their intelligence programs against Muslim American communities. This included close FBI surveillance of lawyers, professors, and the executive director of the largest Muslim civil rights organization in the US (the Council on American-Islamic Relations). The NYPD singled out mosques and Muslim student associations, organizations, and businesses through the use of informants, infiltrators, and surveillance. The NYPD’s supposed rationale was to identify potential “terrorists” by looking for “radicalization indicators,” including First Amendment–protected activities such as “wearing traditional Islamic clothing [and] growing a beard” and “becoming involved in social activism.”

6. Deception

Snitch jacketing is when a person (often an infiltrator) intentionally generates suspicion that an authentic activist is a State informant or otherwise maliciously present in a social movement group. Infiltrators or informants who are in place to encourage violence or illegal activities or tactics (rather than simply report on activities) are known as agent provocateurs; they do so in order to legally entrap or discredit the group. False propaganda is the use of fabricated documents designed to create schisms or undermine solidarity between activist organizations. These controversial, offending, and sometimes vicious documents are meant to foment dissension within and between groups. FBI infiltrators have acted as agent provocateurs by leading activists down a path to illegal activity that they would not have otherwise followed.
Mohamed Mohamud was an Oregon State University student who was contacted by an undercover FBI agent who, over a period of five months, suggested and provided the means for bombing the lighting of the Portland Christmas tree on November 26, 2010. The bomb was a fake, but Mohamud was sentenced to thirty years of imprisonment. Eric McDavid spent nearly nine years in prison for conspiring to damage corporate and government property after a paid FBI informant acted as an agent provocateur, encouraging McDavid’s group to engage in property destruction and providing them with bomb-making information, money to buy the raw materials needed, transportation, and a cabin to work in. McDavid’s conviction was overturned due to the FBI’s failure to disclose potentially exculpatory evidence to the defense.

7. Mass Media Influence

There are two major types of mass media manipulation: (1) story implantation, whereby the State makes use of friendly press contacts who publish government-generated articles verbatim or with minor adjustments, and (2) strong-arming, whereby the State intimidates journalists or editors into withholding unwanted information from publication. In addition, mass media deprecation portrays dissidents as ridiculous, bizarre, dangerous, or otherwise out of step with mainstream society. This is often due not so much to conspiracy as to dutiful adherence to journalistic norms and values. Mass media underestimation occurs when activists and the State come up with discrepant estimates of crowd sizes for protests, marches, and other activities, with the mass media tending to accept the State’s lower numbers. The mass media may also falsely balance dissidents with counterdemonstrators. Many dissident efforts never make it onto the mass media’s agenda or are buried in the minor sections of the newspaper. Not only the State but also powerful media organizations or individual owners are able to carry out this type of suppression.
Following the invasion of Iraq after 9/11, antiwar sentiment was consistently downplayed through underreporting. As just one example, the September 2006 antiwar protests that saw more than two hundred thousand people take to the streets across the US were reported by the Oregonian in this way: A one-hundred-thousand-strong antiwar protest in Washington, DC, was reported on page 10, along with an article on a Portland protest. The article estimated one hundred people at the Portland protest, even though aerial evidence pointed to over three thousand. A counterdemonstration to the Washington, DC, antiwar protest was covered on page 2 with a larger photo and longer text, even though only four hundred people attended.

Information Technology Interference

This resource would be lacking if we didn’t talk about censorship and other interference with information technology. It is an additional mode of suppression, with particular relevance to the Information Age, that dovetails with deception and mass media influence: access to the internet or related infrastructure is blocked or otherwise denied to social movements—for example, by cutting off internet or mobile network access during a protest, censoring certain sites or types of internet traffic, or shutting down a social movement group’s website. Boykoff does not include this in his catalog of suppression, since its use within the US by the US government is not widespread, largely due to the country’s constitutional protections. However, its use is widespread around the globe. Governments have been known to cut off internet access at the country level (such as the week-long total shutdown of the internet in Iran as a means to suppress protests) or limit access to certain sources (such as the Great Firewall of China blocking Google, Facebook, Twitter, and Wikipedia).
US companies also participate in this by complying with foreign censorship: at the behest of China, Zoom (a web conferencing service) shut down the accounts of three activists who had planned online events to commemorate the Tiananmen Square massacre.

In Context: COINTELPRO and the COINTELPRO Era

From the 1950s through the 1970s, the FBI conducted a set of secret, domestic counterintelligence activities, which became known as COINTELPRO, under the leadership of FBI director J. Edgar Hoover. Originating with US government anticommunist programs during the Red Scare, COINTELPRO aimed to “disrupt, by any means necessary,” the organizing and activist efforts of the Black Power, Puerto Rican independence, civil rights, and other movements. With respect to civil rights and Black Power movements (including the activities of Martin Luther King Jr.), COINTELPRO was ordered to “expose, disrupt, misdirect, discredit, or otherwise neutralize the activities of black-nationalist, hate-type organizations and groupings, their leadership, spokesmen, membership and supporters to counter their propensity for violence and civil disorder.” COINTELPRO was exposed through the theft of boxes full of sensitive FBI paperwork in a 1971 burglary by the Citizens’ Commission to Investigate the FBI, whose members only went public in the wake of Ed Snowden’s disclosures; remaining COINTELPRO documents came to light through Freedom of Information Act (FOIA) requests. The commission’s leak led to the shutdown of COINTELPRO and to the formation of the US Senate’s Church Committee in 1975, which castigated the FBI for the “domestic intelligence activities [that] have invaded individual privacy and violated the rights of lawful assembly and political expression.”
The Church Committee prefaced its admonishment this way: “We have seen segments of our Government, in their attitudes and action, adopt tactics unworthy of a democracy, and occasionally reminiscent of totalitarian regimes. We have seen a consistent pattern in which programs initiated with limited goals, such as preventing criminal violence or identifying foreign spies, were expanded to what witnesses characterized as ‘vacuum cleaners,’ sweeping in information about lawful activities of American citizens. The tendency of intelligence activities to expand beyond their initial scope is a theme which runs through every aspect of our investigative findings. Intelligence collection programs naturally generate ever-increasing demands for new data. And once intelligence has been collected, there are strong pressures to use it against the target.”

All the following modes of suppression were used by the FBI or its partners as part of COINTELPRO or against COINTELPRO targets.

1. Direct violence

The murder of Fred Hampton mentioned above was a joint operation of the FBI and the Chicago Police Department. Fred Hampton was the chairman of the Black Panther Party, a revolutionary socialist political organization of the late 1960s through 1970s that aimed to protect Black Americans and provide social programs (such as free breakfast and health clinics). The Black Panther Party (BPP) was labeled a “Black nationalist hate group” by the FBI for inclusion as a COINTELPRO target. Hampton’s assassination was supported by other modes of suppression, including the following:

• Covert surveillance. A paid FBI infiltrator provided intelligence that made the raid leading to Hampton’s murder possible.
• Deception. The same infiltrator created an atmosphere of distrust and suspicion within the BPP, in part by snitch jacketing other members of the BPP.
• Mass media influence.
Following Hampton’s assassination, BPP members were depicted as “folk devils,” with media representations becoming increasingly distorted.

2. The legal system

Communist and Black Panther Party member and COINTELPRO target Angela Davis was charged with “aggravated kidnapping and first-degree murder” in the death of a judge in California who was kidnapped and killed during a melee with police, even though Davis was not on the scene. California held that the guns used by the kidnappers were owned by Davis and considered “all persons concerned in the commission of a crime, whether they directly commit the act constituting the offense . . . , principals in any crime so committed.” Davis could not be found at the time and was listed by J. Edgar Hoover on the FBI’s “Ten Most Wanted Fugitives” list. Months later, Davis was apprehended and spent sixteen months in prison awaiting a trial in which she was found not guilty.

3. Employment deprivation

Prior to Davis’s battle with the legal system, Davis was fired from her job as a philosophy professor at UCLA in her first year of employment in 1969 because of her Communist Party membership, having been deemed unsuitable to teach in the California system. The firing was at the request of then California governor Ronald Reagan, who pointed to a 1949 law outlawing the hiring of Communists in the University of California. This reflects the lingering Red Scare, or McCarthyism, that ran through the 1940s and 1950s. The FBI supported the demonization of communism through a predecessor program to COINTELPRO: COMINFIL (Communist Infiltration) probed and tracked the activities of labor, social justice, and racial equality movements.

4. Conspicuous surveillance

Among the first round of FBI documents to come to light about COINTELPRO was a memo titled “New Left Notes.” The New Left refers to a broad political movement of the 1960s and 1970s, groups of which campaigned on social issues such as civil, political, women’s, gay, and abortion rights.
In discussing how to deal with “New Left problems,” the FBI Philadelphia field office memo says, “There was a pretty general consensus that more interviews with these subjects and hangers-on are in order for plenty of reasons, chief of which are it will enhance the paranoia endemic in these circles and will further serve to get the point across [that] there is an FBI Agent behind every mailbox. In addition, some will be overcome by the overwhelming personalities of the contacting agent and volunteer to tell all—perhaps on a continuing basis.”

5. Covert surveillance

The Church Committee report enumerated covert surveillance that “was not only vastly excessive in breadth . . . but also often conducted by illegal or improper means.” Notably, both the CIA and FBI had “mail-opening programs” that indiscriminately opened and photocopied letters mailed in the US on a vast scale: nearly a quarter of a million by the CIA between 1953 and 1973 and another 130,000 by the FBI from 1940 to 1966. Further, both the CIA and the FBI lied about the continuation of these programs to President Nixon.

6. Deception

The FBI often sent fake letters or flyers in order to drive wedges between otherwise aligned groups. One example is a cartoon drawn by FBI operatives as a forgery of movement participants, intended to incite violence between the Black nationalist groups Organization Us (coestablished by Maulana Karenga) and the Black Panther Party (with prominent members Huey Newton, David Hilliard, Bobby Seale, John Huggins, and Bunchy Carter). The cartoon depicts BPP members being knocked off by Karenga. The FBI later claimed credit for the deaths of two BPP members shot by Organization Us gunmen.

7. Mass media influence

Manipulation of the mass media was an explicit tenet of the FBI’s COINTELPRO against the New Left.
According to the Church Committee, “Much of the Bureau’s propaganda efforts involved giving information to ‘friendly’ media sources who could be relied upon not to reveal the Bureau’s interests. The Crime Records Division of the Bureau was responsible for public relations, including all headquarters contacts with the media. In the course of its work (most of which had nothing to do with COINTELPRO), the Division assembled a list of ‘friendly’ news media sources—those who wrote pro-Bureau stories. Field offices also had ‘confidential sources’ (unpaid Bureau informants) in the media, and were able to ensure their cooperation.”
What You’ll Learn

1. What threat modeling is
2. Who is engaged in surveillance and the strategies they use
3. Examples of tactics and programs used in surveillance

Social movements challenging powerful individuals, organizations, and social structures face a broad range of surveillance risks. Specific threats vary widely in technical sophistication, likelihood, and potential for harm. Threat modeling is a process whereby an organization or individual considers their range of adversaries, estimates the likelihood of their various data and devices falling victim to attack, and finally considers the damage done if attacks were to succeed. (And then they work to protect the data that is most at risk and that would be the most damaging to lose or have accessed.) We think about surveillance in the following order to inform how to protect oneself:

• Who is your adversary? Is it a neighborhood Nazi who is taking revenge on you for your Black Lives Matter lawn sign? Is it an oil corporation that is fighting your antipipeline activism? Is it the US government trying to prevent you from whistleblowing? By understanding who your adversary is, you can surmise their resources and capabilities.
• Is your adversary going after you in particular, are they trying to discover who you are, or are they collecting a lot of information in the hopes of sweeping up yours? What surveillance strategy are they likely to employ? This will help you understand what type of data may be at risk, and where.
• What particular surveillance tactics will your adversary employ to get that desired data? This will help you understand how to protect that data.

We examine surveillance risks starting with the adversary because it is strategic to do so. No one can achieve perfect digital security, but one can be smart about where to spend one’s effort in protecting oneself against surveillance.
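The ordered questions above can be sketched as a simple threat-modeling worksheet. The adversaries, tactics, and scores below are hypothetical illustrations (they are not from this chapter), and ranking by likelihood times harm is just one rough way to decide where to spend protective effort first.

```python
# A minimal threat-modeling worksheet: adversary -> strategy -> tactic ->
# data at risk, scored by likelihood and harm. All entries are hypothetical.
from dataclasses import dataclass


@dataclass
class Threat:
    adversary: str      # who is after you?
    strategy: str       # mass, targeted, or collect-it-all
    tactic: str         # how they would get the data
    data_at_risk: str   # what they would obtain
    likelihood: int     # 1 (unlikely) .. 5 (near certain)
    harm: int           # 1 (minor) .. 5 (severe)

    def priority(self) -> int:
        # Protect first the data most at risk AND most damaging to lose.
        return self.likelihood * self.harm


threats = [
    Threat("local law enforcement", "targeted", "subpoena to cloud provider",
           "unencrypted email content", likelihood=4, harm=4),
    Threat("individual harasser", "targeted", "phishing for a password",
           "social media account", likelihood=3, harm=3),
    Threat("nation-state", "mass", "in-transit interception",
           "unencrypted web traffic", likelihood=2, harm=5),
]

# Highest-priority threats first: where to focus protective effort.
for t in sorted(threats, key=Threat.priority, reverse=True):
    print(f"{t.priority():2d}  {t.adversary}: {t.tactic} -> {t.data_at_risk}")
```

The scoring is deliberately crude; the value of the exercise is in forcing the adversary-first ordering the chapter describes, not in the arithmetic.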
In an actual threat-modeling discussion within an organization or social movement, who potential adversaries are is often more readily apparent than how such adversaries would carry out attacks. Who the adversary is then informs the range of techniques available to that adversary (depending on their available resources and legal authorities) and, in turn, what protective behaviors and technologies the organization can employ.

Surveillance Adversaries

We generally think of adversaries in terms of what resources they have available to them. For the purposes of this book, we will limit ourselves to three adversary categories:

• Nation-states have access to the most resources in a way that may make it seem like their surveillance capabilities are limitless. Even so, they are unlikely to be able to break strong encryption. Here, we think of the National Security Agency (NSA) as the entity with access to the most sophisticated surveillance capabilities. The disclosures by Edward Snowden in 2013 give the most comprehensive window into nation-state-level capabilities and are searchable in the Snowden Surveillance Archive.
• Large corporations and local law enforcement are often heavily resourced and share information with each other but don’t necessarily have access to the capabilities of nation-states. However, the use of technology to aid in surveillance is widespread among law enforcement agencies in the US, as illustrated by the Electronic Frontier Foundation’s Atlas of Surveillance.
• Individuals have the least resources but might know you personally and so be able to more effectively use social engineering to obtain your data.

Note that techniques available to lower-resourced adversaries are also available to higher-resourced adversaries. As an example, corporations and law enforcement employ informants and infiltrators, who may be individuals who know you personally.
Also, while more sophisticated surveillance capabilities are not usually available to lower-resourced adversaries, this is not always the case: police departments in large cities may have access to nation-state-level resources (e.g., through data sharing that is facilitated by fusion centers), or a particularly skilled neighborhood Nazi may possess advanced hacking skills that enable some corporate-level attacks. So while these categories are not sharply defined, they can act as a starting point for understanding the risks and focusing on your adversaries’ most likely strategies and your most likely weaknesses.

Surveillance Strategies

There are two broad strategies of surveillance: mass surveillance and targeted surveillance. Mass surveillance collects information about whole populations. This can be done with the purpose of trying to better understand that population. For example, the collection and analysis of health-related data can help identify and monitor emerging outbreaks of illnesses. Mass surveillance may also be used as a strategy to identify individuals of interest within the surveilled population. For example, video feeds from security cameras can be used to identify those who engaged in property damage. Or mass surveillance may garner information about a particular individual. For example, information collected from the mass deployment of license-plate cameras can be used to track the movements of a particular individual. Targeted surveillance collects information about only an individual or a small group of individuals. For example, wiretapping intercepts the communications of a particular individual. Targeted surveillance allows for the existence of prior suspicion and can (conceivably) be controlled—for example, law enforcement obtaining a warrant based on probable cause before intercepting someone’s mail. Historically, there was a clearer divide between targeted and mass surveillance.
However, in the digital age, many tactics of targeted surveillance can be deployed on a mass scale, as we will discuss further. In addition to this classic division of surveillance strategies, we draw attention to a bigger strategy that is unique to the digital age. Collect-it-all may simply be viewed as mass surveillance on steroids but goes far beyond what may have historically been viewed as mass surveillance. Where mass surveillance may encompass things like security cameras, the monitoring of bank transactions, and the scanning of emails, collect-it-all aims to vacuum up any information that is digitized. Collect-it-all goes further: for any information that isn’t online or available (e.g., video from truly closed-circuit security cameras), it digitizes it and then collects it. Collect-it-all is infamously attributed to General Keith Alexander, former director of the NSA, whose mass-surveillance strategies were born in post-9/11 Iraq and were described as follows: “Rather than look for a single needle in the haystack, his approach was, ‘Let’s collect the whole haystack. Collect it all, tag it, store it… And whatever it is you want, you go searching for it.’” This is likely the inspiration for many of the NSA programs that were uncovered by Edward Snowden that we highlight below. Different adversaries tend to deploy different surveillance strategies, or rather, lower-resourced adversaries tend to be limited in their strategies.

Surveillance Tactics

To go over all the surveillance tactics that are available to adversaries at all levels would fill an encyclopedia. Here we illustrate a few examples of surveillance programs and tactics that support the surveillance strategies above. We illustrate these programs (below) according to the minimum level of sophistication required to use the tactic and the number of people whose information would be collected via these means.
Mass Interception and Collection of Data We start with what most people probably think of when they think of mass surveillance: the interception and possible recording of vast amounts of communications. Many mass interception programs were uncovered as part of Edward Snowden’s disclosures in 2013. STORMBREW, FAIRVIEW, and BLARNEY are three such programs through which the NSA collects data in transit by partnering with telecommunications companies and getting access to data passing through submarine data cables. This allows for the collection of any unencrypted content and all associated metadata while it is in transit from origin to destination. However, these programs cannot see content that is typically encrypted in transit, such as email or files in cloud storage. The PRISM program is a partnership of the NSA with various internet companies (such as Google, Microsoft, and Facebook, as illustrated below) to allow NSA access to data held on company servers. That is, if the information was encrypted in transit and so not collectible via STORMBREW, FAIRVIEW, and BLARNEY, then the NSA can get it via PRISM—unless the information is encrypted on the company servers with a key that the user controls. Aggregating and Analyzing Data Once you have a whole lot of surveillance data, what do you do with it? Surely the man will not be able to find my tiny little needle in that massive haystack. This is where data mining, from basic search to (creepy) predictive machine-learning models, comes in to make vast amounts of mass-surveillance data (including from disparate sources) useful to powerful adversaries. The most basic functionality is the ability to search—that is, given a large amount of data, retrieving the data of interest, such as that related to a particular person. XKEYSCORE acts as a Google-type search for the NSA’s mass-surveillance data stores. 
While the functionality is basic, the sheer amount of information it has access to (including that from the NSA programs highlighted above) places XKEYSCORE (and any related program) as accessible to only the most powerful adversary. On the other hand, Dataminr searches publicly available data (such as social media posts) to uncover and provide details of emerging crises (such as COVID-19 updates and George Floyd protests) to their customers, which include newsrooms, police departments, and governments, through both automated (software) and manual (human analysis) means. Dataminr and other social media–monitoring platforms, of which there are dozens if not hundreds, have come under fire for their surveillance of First Amendment–protected speech, most notably of the Movement for Black Lives. In several instances, Twitter and Facebook cut off social media–monitoring companies’ easy access to their data after public outcry over misuse. Going further, Palantir is one of many policing platforms that supposedly predict where policing is needed, be that a street intersection, a neighborhood, or an individual. In reality, these platforms do little but reinforce racist norms. Predictive policing platforms use current police data as the starting point and tend to send police to locations that police have been in the past. However, communities of color and impoverished neighborhoods are notably overpoliced, so predictive models will simply send police to these areas again, whether or not that is where crimes are being committed. 
Going further still, EMBERS (Early Model Based Event Recognition Using Surrogates) has been used since 2012 to predict “civil unrest events such as protests, strikes, and ‘occupy’ events” in “multiple regions of the world” by “detect[ing] ongoing organizational activity and generat[ing] warnings accordingly.” The warnings are entirely automatic and can predict “the when of the protest as well as where of the protest (down to a city level granularity)” with 9.76 days of lead time on average. It relies entirely on publicly available data, such as social media posts, news stories, food prices, and currency exchange rates.

Targeted Collection of Data

Another surveillance tactic that comes to mind is the wiretap. However, the modern equivalent is a lot easier to enable than the physical wire installed on a communication cable, from which the wiretap gets its name. One modern version is the cell site simulator (CSS), which is a miniature cell tower (small enough to be mounted on a van). To cell phones in the vicinity, this tower provides the best signal strength, and so they will connect to it. At the most basic level, a CSS will uncover the identities of the phones in the area. (Imagine its use at the location of a protest.) Different CSSes have different capabilities:

• Some CSSes simply pass on the communications to and from the broader cell network while gaining access to metadata.
• In some cases, CSSes are able to downgrade service—for example, from 3G to GSM—removing in-transit encryption of cell communications with the service provider and giving access to message content.
• In other cases, CSSes can block cell communications by having phones connect to them without passing information on to the greater cell phone network.

CSSes are fairly commonly held by law enforcement agencies (as the Atlas of Surveillance mentioned above illustrates).
Surveillance equipment, including CSSes and high-resolution video, can also be mounted on surveillance drones or unmanned aerial vehicles (UAVs), which can greatly increase the scope of surveillance from a few city blocks to a whole city. This is one example where tactics of targeted surveillance are expanded toward a mass level. Persistent Surveillance Systems has pitched the use of UAVs to many US police departments. Persistent Surveillance’s UAV uses ultrahigh-resolution cameras that cover over thirty-two square miles in order to be able to track the movements of individual cars and people, saving a history so that movements can be tracked backward in time. Of course, often it isn’t necessary to surreptitiously collect information. Sometimes you can just ask politely for it. In the US, subpoenas and warrants are used to request information from corporate providers. While warrants require probable cause (in the legal sense), subpoenas do not. As it publishes in its transparency report, Google receives around forty thousand data requests every year, about a third of which are by subpoena. Google returns data for roughly 80 percent of requests, and each request impacts, on average, roughly two user accounts (i.e., each individual request is highly targeted). Of note is that the contents of emails are available by subpoena. While subpoenas and warrants are basic in nature, they are usually available only to governmental adversaries.

Attacking Devices

The above tactics attempt to collect data while it is in transit or when it is held in the cloud. A final place to collect your data is right from your own device (phone or computer). This might happen if your device is confiscated by the police during a detention or search. We will discuss this more in the chapter “Protecting Your Devices,” but we highlight some tactics for extracting device-held data here.
Cellebrite is an Israeli company that specializes in selling tools for extracting data from phones and other devices, such as their Universal Forensic Extraction Device (UFED), which is small enough to carry in a briefcase and can extract data quickly from almost any phone. However, this requires physical control of your device. NSO Group (another Israeli company) sells the ability to remotely install spyware called Pegasus on some iPhones and Androids that will extract text messages, call metadata, and passwords, among other data. The NSA has a family of malware (malicious software) denoted QUANTUM that can either gather data or block data from reaching the target device. But the NSA is able to install this malicious software on a mass scale with the use of their TURBINE system, which is able to disguise NSA servers as, for example, Facebook servers and use this as a means of injecting malware onto the target’s device. While Pegasus and QUANTUM can be deployed widely, it can be politically dangerous to do so, as these programs are generally met with public outcry. The more widely an invasive surveillance technology is deployed, the more likely it is to be discovered, as was the case with Pegasus.

Personalized Harassment

While outside the realm of typical surveillance, personalized harassment should be kept in mind when considering digital security risks. Doxxing, phishing, and password sniffing are techniques available to the lower-resourced adversary but shouldn’t be ignored for that reason. You may wish to revisit the story of Black Lives Matter activist DeRay Mckesson from the chapter “Passwords,” who had his Twitter account compromised despite employing two-factor authentication. All his adversary needed was access to some personal information, which may have been discoverable from public sources or through personal knowledge.
Doxxing is the process of publishing (e.g., on a discussion site) a target’s personal information that might lead to the harm or embarrassment of the targeted individual. While this is very easy to do, it is also very difficult to protect yourself: once information about you is available online, it is challenging or impossible to remove it. Phishing describes methods of obtaining personal information, such as a password, through spoofed emails and websites. While phishing can be deployed on a mass scale, the most successful type of phishing (spear phishing) targets individuals by using already known information to improve success rates. Password sniffing can be as low tech as looking over your shoulder to see you type in a password or can involve installing a keystroke logger to record you typing in your password, but this requires the ability to install a keystroke logger on your device, for which there are methods of varying degrees of sophistication. Traditional password sniffing captures a password as it passes through the network, which can be possible if the traffic is not encrypted, and again requires varying degrees of sophistication but certainly could be deployed by a skilled individual. In Context: Standing Rock In 2016, opponents of the construction of the Dakota Access Pipeline (DAPL) set up a protest encampment at the confluence of the Missouri and Cannonball Rivers, under which the proposed oil pipeline was set to be built. The pipeline threatened the quality of the drinking water in the area, which included many Native American communities, including the Standing Rock Indian Reservation. Eventually the protest encampment would grow to thousands of people and was in place for ten months. Energy Transfer Partners, the company building DAPL, employs a private security force, which, a few months into the protest encampment, unleashed attack dogs on the protesters. 
In addition, Energy Transfer Partners very quickly hired TigerSwan to aid in suppressing the protest movement. TigerSwan is a private mercenary company that got its start in Afghanistan as a US government contractor during the war on terror. As such, TigerSwan employs military-style counterterrorism tactics and referred to the Native American protesters and others who supported them as insurgents, comparing them (explicitly) to the jihadist fighters against which TigerSwan got its start. TigerSwan’s surveillance included social media monitoring, aerial video recording, radio eavesdropping, and the use of infiltrators and informants. Eventually local, regional, and federal law enforcement would be called in, with TigerSwan providing situation reports to state and local law enforcement and remaining in regular communication with the FBI, the US Department of Homeland Security, the US Justice Department, the US Marshals Service, and the Bureau of Indian Affairs. While many (but not all) of the tactics employed by TigerSwan would be illegal for government law enforcement to adopt, the State is able to skirt this restriction by receiving updates from private companies. This is common practice in many areas of law enforcement, with police departments buying privately held data that would violate the Fourth Amendment if collected directly by the State. The public-private partnership between state law enforcement agencies, Energy Transfer Partners, and TigerSwan was instrumental in bringing an end to the protest encampment, with the State eventually violently removing protesters through the use of tear gas, concussion grenades, and water cannons (in below-freezing weather), resulting in approximately three hundred injured protesters (including one woman who nearly lost an arm).
While the encampment ended and the pipeline eventually was built, continued opposition led to a court ruling that the pipeline must be shut down and emptied of oil in order to complete a new environmental impact review. It is important to remember that even though mass surveillance collects information about almost everyone, the harm it causes is differential. Certain groups are surveilled more heavily, or surveillance information about certain groups is used disproportionately. Examples of groups in the US that are disproportionately harmed by State and corporate surveillance are Muslim Americans, Black and African Americans, Native Americans, and social movement participants, as we discussed in the chapter “Mechanisms of Social Movement Suppression.”
• 3.1: Defending against Surveillance and Suppression You may hear that there is no such thing as perfect digital security, and we agree. The surveillance capabilities of a well-resourced adversary are nearly limitless, and those that we described in “Digital Threats to Social Movements” barely scratch the surface. However, not all risks are equal, not all surveillance tools are equally likely to be used, and there is a lot that an individual and a group can do to reduce the threats due to surveillance. • 3.2: Security Culture Social movements aware of the history of informant-driven suppression by State and private adversaries have developed what is termed security culture. This term refers to information-sharing agreements and other group practices intended to minimize the negative impacts of infiltration, surveillance, and other suppressive threats to the group, its work, its membership, and broader social movements; that is, security here means something much broader than digital security. • 3.3: Protecting Your Devices The amount of data you keep on your phone and laptop is staggering. A lot of this data you will also share with your cloud storage providers, but that is the focus of the chapter “Protecting Your Remote Data.” Here we focus on protecting the data that you keep with you on your laptops and cell phones from a remote or physical attack. • 3.4: Protecting Your Communications The best way to protect your online communications is through encryption. But not all encryption is equally protective. We will focus on the concepts that distinguish between the degrees of protection. • 3.5: Protecting Your Remote Data The cloud is ubiquitous. Since the early 2000s, data is increasingly stored not exclusively (or at all) on your own device but on the servers of the companies that manage your device or operating system or whose services you subscribe to. If that data is not encrypted with a key that you control, that data is at risk for compromise. 
• 3.6: Protecting Your Identity In this chapter, we will focus on skills for using Tor rather than a VPN, but these lessons apply to using a VPN. One needs to additionally remember, though, that when using a VPN, the VPN provider knows who you are and the metadata of your internet communications (and the content, if it isn’t encrypted). While we will focus on using Tor via the Tor Browser, know that there are other applications that route internet requests through the Tor network.

03: Defending Social Movements (in the US)

What You’ll Learn

1. What threat modeling is
2. Strategies for reducing threats to your digital security

You may hear that there is no such thing as perfect digital security, and we agree. The surveillance capabilities of a well-resourced adversary are nearly limitless, and those that we described in “Digital Threats to Social Movements” barely scratch the surface. However, not all risks are equal, not all surveillance tools are equally likely to be used, and there is a lot that an individual and a group can do to reduce the threats due to surveillance. We can model a digital security threat in terms of the following relationship: $\text { threat } \propto \frac{\text { (surveillance capabilities) } \times \text { (suppression risk) }}{\text { effort required to obtain data }} \nonumber$ In this model, surveillance capabilities refers to your opponent’s level of resources, as discussed in the chapter “Digital Threats to Social Movements.” Suppression risk refers to the ways in which your opponent may try to undermine you, as discussed in the chapter “Mechanisms of Social Movement Suppression.” It is important to keep in mind that surveillance supports suppression both indirectly and directly.
Many of the examples we gave in the chapter “Mechanisms of Social Movement Suppression” were indeed supported by surveillance: • The direct violence meted out on Black Panther Party leader Fred Hampton through a targeted assassination was supported by detailed knowledge of his schedule and apartment layout. • The US Department of Justice issued threats of sanction through the legal system against those individuals organizing the protests of Donald Trump’s inauguration and requested to obtain all website traffic information of an organizing web page (described at the end of the chapter “Anonymous Routing”). • Steven Salaita’s employment deprivation was a result of the monitoring of his Twitter activity. • The deception used by the FBI against Mohamed Mohamud began with the monitoring of Mohamud’s email. Reducing the Threat We can reduce digital security threats by decreasing surveillance capabilities or suppression risk or by increasing the effort required to obtain one’s data. Reducing Surveillance Capabilities Most activists have little immediate control over surveillance capabilities. However, there are a number of laudable efforts to regulate surveillance with some success, such as the banning of face recognition and CSS in certain jurisdictions. But unless your social movement work is aimed at trying to ban or limit surveillance, going down this route would take you away from your goals. Reducing Suppression Risk Likewise, activists have little control over suppression risk. You could minimize the risk of suppression by reducing the threat to your opponent, but then you would be succumbing to the chilling effect. Increasing the Effort Required to Obtain Your Data That leaves us with increasing the effort required to obtain your data, which is the focus of the remainder of this book. 
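As a rough numeric reading of the relationship above, the sketch below plugs hypothetical scores into threat ∝ (surveillance capabilities × suppression risk) / effort. The 1-to-10 scale and the scenario values are invented for illustration; the point is only that the effort term is the one lever an activist directly controls.

```python
# Toy reading of: threat ∝ (surveillance capabilities × suppression risk) / effort.
# All scores are hypothetical, on an arbitrary 1-10 scale; only comparisons
# between scores are meaningful, not the absolute numbers.

def threat_score(capabilities: float, suppression_risk: float, effort: float) -> float:
    """Relative threat score; higher means more exposed."""
    return (capabilities * suppression_risk) / effort

# Capabilities and suppression risk are largely outside an activist's control:
capabilities, risk = 8.0, 6.0

# ...but the effort an adversary must expend to obtain your data can be
# raised, e.g., by encrypting data in transit and at rest:
baseline = threat_score(capabilities, risk, effort=2.0)  # little protection
hardened = threat_score(capabilities, risk, effort=8.0)  # strong protection

print(f"baseline threat: {baseline}")  # 24.0
print(f"hardened threat: {hardened}")  # 6.0
```

Quadrupling the effort term cuts the score to a quarter, which mirrors the chapter's argument: since you cannot much reduce your opponent's capabilities or their incentive to suppress you, increasing the cost of obtaining your data is where protective work pays off.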
While protecting all data is important (the more your opponent knows about you, the better they can undermine you), we encourage putting any additional effort in protecting your data toward the most protective strategies. To guide that effort, you should keep in mind the surveillance capabilities of your opponents and their likely modes of suppressing your efforts. To this end, focus on protecting data that 1. could most likely be used to suppress your efforts and 2. is most vulnerable to surveillance. Understanding point 1 requires a deep understanding of the efforts and opponents of a given social movement. To consider point 2, we need to understand where your data is (described below) and how to protect it (which will be discussed in the remaining chapters of this book).

Where Is Your Data?

We take different protective strategies depending on where data is vulnerable. Your information becomes data when it is put on a device (e.g., a cell phone or laptop) and then may be transmitted through the internet via service providers. We distinguish here between websites where you may be browsing and cloud providers where your data may be held (from Google to Facebook). In the remaining chapters, we discuss how to protect your data wherever it is. In the chapter “Security Culture,” we discuss how to decide whether your information becomes data (when you have control over it) and whether to store your data in the cloud—that is, whether to allow your data to be transmitted beyond your devices at all. In the chapter “Protecting Your Devices,” we discuss how to protect data that is held on devices that you have control over (e.g., your laptop and cell phone). In the chapter “Protecting Your Communications,” we discuss how to protect your data while it transmits from you to your destination, be that a website, cloud provider, or another person. In the chapter “Protecting Your Remote Data,” we discuss how to protect data that is held in the cloud if you have made the decision to store it there.
We then discuss how to protect your identity—that is, how to be anonymous or pseudonymous online and break through censorship—in the chapter “Protecting Your Identity.” Finally, we discuss how to select digital security tools in the conclusion and give the principles we use for our recommendations. In Context: Edward Snowden In the years leading up to 2013, Edward Snowden collected data that he had access to in his role as a systems administrator at his workplaces (mostly NSA subcontractors). Snowden’s leaks of troves of classified material illustrated just how advanced and broadly deployed the surveillance tactics of many of the world’s most powerful governments were. However, in making these disclosures, Snowden was up against a powerful adversary: the National Security Agency itself. Snowden was unlikely to achieve long-term anonymity—his goal was to keep his behaviors (collecting information) and goal (whistleblowing) unknown for long enough to leak the information to journalists, who would responsibly report on it, and hopefully long enough to get to a safe haven, where he could live in freedom. It took months for Snowden to set up an encrypted communications channel with Glenn Greenwald (a journalist known for fearless, deep reporting), this being in the days before “plug-and-play” end-to-end encrypted messaging apps. But once the reporting on Snowden’s disclosures started, he knew his identity would be discovered, so he unmasked himself. Snowden didn’t end up where he had hoped (Latin America). His US passport was canceled during his flight from Hong Kong (where he disclosed his leaks to Glenn Greenwald) to Russia, preventing him from further air travel. Snowden was able to claim asylum in Russia. 
However, Snowden was very successful in his whistleblowing: the reporting continued for years afterward and prompted numerous changes to our communications. Encryption is more commonly available now, so much so that many people don’t even know when their conversations are end-to-end encrypted.
What You’ll Learn 1. What social movement security culture is 2. Why security culture is essential to digital security Social movements aware of the history of informant-driven suppression by State and private adversaries have developed what is termed security culture. This term refers to information-sharing agreements and other group practices intended to minimize the negative impacts of infiltration, surveillance, and other suppressive threats to the group, its work, its membership, and broader social movements; that is, security here means something much broader than digital security. The term culture indicates an aspiration for security principles and practices to become reflexive and intuitive. The ideal security culture helps a group to safely and easily communicate and bring in new members (if desired) while avoiding excessive paranoia or cumbersome procedures and policies. Although perspectives and practices on security culture vary widely, some important widespread principles you should adhere to are the following: 1. Share information on a need-to-know basis. 2. If you are organizing with others, get to know your group members as well as possible. 3. Avoid gossip and rumors. Security Culture Meets Digital Security Let’s explore some of these elements in detail and how they relate to digital security. Need to Know: Minimize Information Sharing and Digitizing The first principle of keeping secrets is to minimize the number of people who must be trusted to keep them. Of course there is a spectrum of information sensitivity, from public announcements to open meetings, from in-development press releases to specific places and times of direct actions. Deciding what information needs to be protected and being careful to protect it is only part of the picture; people also need to accept that they won’t have access to sensitive information unless they need it to do their work. 
From a digital security perspective, this also means deciding what information becomes digitized. (Do you really need a Google Doc listing all the people who plan on attending a protest? Do you really need to post identifiable photos of people who showed up? Do you need those posts to be public and geolocated?) Limiting the amount and extent of information sharing dovetails with good digital security practices because no platform or means of communication can be considered perfectly secure. Before taking specific digital security measures (such as using possibly complicated end-to-end encrypted technology), consider what information needs to be stored, shared, or even exist in a digital format—perhaps (absent a global pandemic) we should be meeting and discussing our ideas in person as much as possible. Keep in mind that any digital information is extremely easy to copy, and so even strong encryption can only protect information to the extent that every human with access to it can be trusted. Not even a perfectly designed secure app or digital platform can stop information from being compromised by an infiltrator or defector within a group. Get to Know: Vetting and Trust Building Get to know the people that you work with so that you can trust them with whatever risks you decide to take together. But when you decide to digitize information, you are potentially welcoming more “people” (well, corporations and the State) into your organizing circle. If your group uses, for example, Gmail for communications among group members, then Google also has all those emails, and those emails can easily be subpoenaed by the State. So you should be prepared to trust any entity that has access to your unencrypted data, whether that entity is a human with whom you interact, your internet service provider, your cloud storage provider, or your email provider. 
Don’t Gossip or Spread Rumors Social movements in the past have been crushed by gossip and rumors, with the State using our human weakness for gossip and rumors to its advantage, as we discussed in the chapter “Mechanisms of Social Movement Suppression”: the use of snitch jacketing, agent provocateurs, and false propaganda as tactics of deception depends on social movement participants believing the source and repeating information. For digital security, we can aim to authenticate the source of information. This is particularly important online, where one can more easily pretend to be someone one isn’t, either through low-tech means (such as fake accounts or stealing an account) or high-tech means (such as redirecting network traffic). We will discuss authenticating digital sources in the chapter “Protecting Your Communications” and the conclusion, “Selecting Digital Security Tools.” But a very basic consideration is one’s use of social media, where gossip and rumors abound and where the details of our personal lives make infiltration unnecessary for an adversary to learn what our weaknesses might be. Social media platforms should only be used to publicly distribute information, and conversations there should never be considered private. These protective actions have the potential to protect you from social media monitoring, subpoenas and search warrants, and doxxing. In Context: Saint Paul Principles Leading up to the 2008 Republican National Convention in Saint Paul, Minnesota, different social movements came together around opposition to the Republican Party’s support of the war in Iraq. The coalition of protest groups adopted the following principles ahead of the convention in order to make space for different groups’ views and strategies and to keep the risks one group faces from affecting another group: 1. Our solidarity will be based on respect for a diversity of tactics and the plans of other groups. 2. 
The actions and tactics used will be organized to maintain a separation of time or space. 3. Any debates or criticisms will stay internal to the movement, avoiding any public or media denunciations of fellow activists and events. 4. We oppose any state repression of dissent, including surveillance, infiltration, disruption, and violence. We agree not to assist law enforcement actions against activists and others. These rules have become known as the “Saint Paul Principles” and have been adopted by many coalitions of groups in the years since. The principles elevate notions of security culture from an intragroup level to an intergroup level. They are designed to help different groups come together when they share an ultimate aim but may disagree on how to get there, and to increase the chance that the overarching movement will succeed in its agreed-upon aim.
What You’ll Learn 1. Common ways in which phones and computers are compromised 2. Strategies for protecting phones and computers The amount of data you keep on your phone and laptop is staggering: contacts, emails, photos, documents, calendars, tax returns, banking details, and, in the case of a smartphone, often a detailed history of your location for as long as you’ve had that phone. A lot of this data you will also share with your cloud storage providers (e.g., Apple, Google, Dropbox), but that is the focus of the chapter “Protecting Your Remote Data.” Here we focus on protecting the data that you keep with you on your laptops and cell phones from a remote or physical attack. Physical Attacks By a physical attack, we mean that your adversary would first gain physical access to your device through loss, theft, or confiscation. You may lose your phone at an inopportune moment that places it in the hands of an adversary rather than a Good Samaritan, or your adversary may steal your phone. More likely, your phone may be confiscated while crossing a border or during an arrest, either planned or unplanned. Those who were swept up in the mass arrests during the protests of the presidential inauguration on January 20, 2017 (J20), had their phones confiscated and subjected to search by a tool from the Israeli company Cellebrite, which extracts all information on a device (phone or computer) and from all remote accounts that device has access to (e.g., Google, Facebook, Dropbox). In the article “How to Protect Yourself from the Snitch in Your Pocket,” one J20 defendant described the eight thousand pages of data that a Cellebrite tool extracted from his confiscated cell phone; he received this information from his lawyer as they prepared for his defense: • A list of all my contacts, including phone numbers and emails that contacted me that were not stored in my phone, with a count of how many times I called, messaged, or emailed them or was called, messaged, or emailed by them. 
• The number of emails I received, sent, and drafted to specific email addresses and how many shared calendar events I had with those email addresses. The number of incoming/outgoing/missed calls from each number and if they were my contacts, and how long total calls were between me and a number. Whether they were in my contacts, and if so what nickname I call them in my phone. • The number of SMS texts received/sent/drafted to a number. The content of all texts, even if they were deleted, including drafts. • Whatsapp contacts, their “usernames” (i.e. the phone number attached to their account), and how many chats/calls took place between me and them. • All apps, when they were installed/deleted/last used/purchased, and what permissions they had. • Audio files that were stored in Google Drive, as well as any podcasts, voice memos, and ringtones. Timestamps for their creation/deletion/modification/last access. • All calendar events, attendees invited, location tags, etc. • Traditional call log info you might expect. • Date and time of all cell towers my phone had ever connected to and their location, conveniently linked to Google Maps. A world map marking all cell towers accessed by my phone. • Chats from Signal, WhatsApp, SMS, Google Hangouts, TextSecure, GroupMe, and Google Docs; a list of all participants in those chats; text body content; whether it was read or unread, with a timestamp for sent and read; if it was starred; if it was deleted; all attachments. These chats were also from years ago, before I even had a smartphone. • All information for my contacts, including whether the contact was deleted or not. • Web browser cookies. • Any document ever opened on my phone, including text documents, attachments, Google docs, and those created by apps. • Emails and email drafts, including all sending information, entire text content, and up to 16 attachments. • Images/photos/videos along with their created/accessed timestamp and any metadata. 
• Ninety-six random tweets from one of my Twitter accounts, some from as far back as 2013. • A list of all wifi networks that my phone ever connected to, their passwords, hardware identifiers, and when I connected to them. • The last five times my phone was turned on, including twice two months after I lost access to it. • Web history and web and Play Store search history. • A list of every word ever typed into my phone and how many times that word was typed, including email addresses as words, and words I added to the dictionary so they wouldn’t continue to be autocorrected to something else. • What they call my “timeline”: every action (texts, calls, emails, web history, app usage including maps searches, connections to wifi networks or new cell towers, etc.) with timestamp to be easily sorted. What Can I Do? A detective testifying at a J20 trial noted that among the phones that had encryption enabled, he was only able to access basic device information and not the contents of the phone storage. iPhones and Android devices that are running up-to-date operating systems have encryption enabled by default, while Apple and Microsoft computers need to have this enabled. Encrypting your device is not, however, a panacea. The encryption that protects your device is only as strong as the password protecting it. Device-encryption passwords regrettably suffer from a convenience-security trade-off. A passphrase (as described in the chapter “Passwords”) may need to be composed of six or more words to withstand a physical attack, but such a passphrase is cumbersome to type in frequently. There are a few options, all with trade-offs. For phones or laptops, you can modify your settings to change how often you need to enter your password, passphrase, or unlock code. (Note that encryption is only in effect when a screen lock is enabled.) Or you could modify the strength (length) of your password, passphrase, or unlock code depending on your situation. 
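A passphrase of the kind recommended above can be generated by sampling words uniformly at random from a wordlist using a cryptographically secure random number generator. A minimal sketch follows; the tiny wordlist is purely illustrative (real diceware-style lists contain thousands of words), and the function name is our own:

```python
import secrets

# Illustrative-only wordlist; a real passphrase wordlist (such as a
# diceware list) has thousands of entries, giving far more entropy.
WORDLIST = [
    "orbit", "velvet", "glacier", "pumpkin", "lantern", "quartz",
    "meadow", "falcon", "cobalt", "harbor", "tundra", "willow",
]

def make_passphrase(n_words=6):
    """Join n_words chosen with a cryptographically secure RNG."""
    return " ".join(secrets.choice(WORDLIST) for _ in range(n_words))

print(make_passphrase())
```

With a standard 7,776-word diceware list, six words give roughly 77 bits of entropy (6 × log2 7776 ≈ 77.5), which is why six or more words is a common recommendation for passphrases that must withstand a physical attack.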
However, these strategies rely on knowing when your situation requires higher levels of security and consistently strengthening your security when needed. For phones and some laptops, one can often choose between a typed passphrase and biometric input (such as a fingerprint). A fingerprint is more convenient than a typed password. For the purposes of encryption, your fingerprint will be paired with a passphrase (which should be as strong as you can manage). However, if your device is confiscated by law enforcement, your fingerprint may be compelled from you. So when the risk of device confiscation is high, you should consider removing the ability to unlock your device biometrically. There are additional protections against physical interference that one might consider. A privacy screen can prevent an onlooker from seeing the passwords (and anything else) you type. Faraday bags can prevent your phone from transmitting or receiving information; among other things, this can prevent your phone from recording location information. The ability to remotely wipe your phone is provided by major cell phone manufacturers, and while it may relinquish control over your device to the same corporations that may share your information with your adversaries (as we discuss in the chapter “Protecting Your Remote Data”), it may be a useful tool in certain situations. Remote Attacks By a remote attack, we mean that an adversary would access the data on your phone or laptop through an internet or data connection. There are companies that design and sell the ability to infect your device (usually a smartphone) with malware that allows their customer (your adversary, be it a corporate or state agent) to gain remote access to some or all of your information. For example, Citizen Lab uncovered wide and varied use of the spyware Pegasus, created and sold by another Israeli company, NSO Group. 
Coupled with some social engineering to convince the target to click on a link, the spyware grants the ability to turn on and record from the phone’s camera and microphone, record calls and text messages (even those providing end-to-end encryption), log GPS locations, and send all this information back to the target’s adversary. Citizen Lab reported that Pegasus attempts were made against Ahmed Mansoor, a human rights defender based in the United Arab Emirates, and twenty-two individuals in Mexico ranging from politicians campaigning against government corruption to scientists advocating for a state tax on sugary drinks. What Can I Do? Remote attacks, such as those sold by NSO Group, rely on flaws in computer software known as zero-days. Such flaws are unknown to the software provider (e.g., Apple or Microsoft). Until they are known (which happens on “day zero”), there is no chance that the software provider could have fixed or patched the vulnerability, and so there is no chance that a victim could protect themself. Computer security is often a cat-and-mouse game. The products of malware and spyware creators (such as NSO Group) are only good so long as the targets (or more accurately, companies like Apple, Google, and Microsoft) don’t know about the malware being deployed. As soon as they do, they fix their products so that the malware is no longer effective. But these product fixes only work if the target (you) updates their device. So the lesson here is to install all security updates as soon as they are available. Unfortunately, smartphones do not receive security updates indefinitely, with particular devices (e.g., Nokia 5.3) only being supported by an operating system (e.g., Android) for a few years. You can check if an Apple or Android phone is receiving security updates through the settings. 
Many malware products require phishing for installation on the target’s device: convincing the target to click on a link or open a file (in either an email or a text message). So the second thing you can do is be wary of what you click on. Do you know the sender? Are you expecting something from the sender? Does anything seem, well, fishy? In fact, it was vigilance that led Ahmed Mansoor to avoid spyware infection: he sent the phishing text along to Citizen Lab, which led to their reporting on the abuse of spyware from NSO Group. Finally, be wary of the apps you install and what permissions you grant them. Does a flashlight app need access to your contacts and camera? Do you really need to install that game created by an unknown software creator? Every app you install is a potential vector for malware, so this is a good opportunity to practice minimalism. In Context: Compromising Protesters’ Phones In September 2020, it came to light that the Department of Homeland Security had been “extracting information from protestors’ phones” during the extended protests in Portland, Oregon, in the summer of 2020. Purportedly using a novel cell phone cloning method, the government was able to intercept communications to the phones of protestors. While this is disturbing and likely illegal, the details of the attack remain classified. However, we can try to infer likely methods of attack and likely protective practices. If the cloning method requires a physical attack, the compromised phones are most likely those that were confiscated in prior arrests during the summer. However, this limits the surveillance potential to the phones of arrestees alone, and it allows arrestees either to stop trusting their phones or to factory reset them to remove possible malware. On the other hand, if the cloning method can be done remotely, this greatly expands the number of phones that might be compromised, and there is no signaling event. 
In either case, the use of in-transit and end-to-end encryption would still protect phone communications (but perhaps not metadata), as you can learn about in the chapter “Protecting Your Communications.”
What You’ll Learn 1. The difference between in-transit and end-to-end encryption 2. Who has access to your information when not using encryption 3. Who has access to your information when using in-transit encryption 4. Who has access to your information when using end-to-end encryption The best way to protect your online communications is through encryption. But not all encryption is equally protective. We will focus on the concepts that distinguish between the degrees of protection. Encrypted or Not The most basic version of encrypted communications is in-transit encryption, where your information is encrypted between your computer and a server. In the context of browsing the web, this is the best you can do to protect the content (but not the metadata) of your communications from an adversary. Most web browsers indicate whether your browsing is encrypted by the URL, as illustrated below. HTTP, not encrypted HTTPS, encrypted In the top example, the information is transmitted unencrypted. The full URL in this case is http://whenisgood.net, where http indicates accessing a web page without encryption. This browser (Firefox) emphasizes this point with a struck-through lock. In the bottom example, the information is encrypted: https indicates accessing a web page with encryption, and the s stands for “secure.” The keys used for this encryption are exchanged between your computer and the whenisgood servers using the Diffie-Hellman key exchange, as described in the chapter “Exchanging Keys for Encryption.” Using http, every entity on the path between you and the website (pictured below) can access the content of your web browsing (such as the pictures being loaded and any information you might type into a web form). Further, anyone snooping on the communications between the entities on these paths (such as between Comcast’s network and the internet backbone) may also have access to your browsing content. 
We qualify this with may because the communications between two entities on this path may be encrypted. For example, the communications between a cell phone and a cell tower are encrypted in most cases. Who has access to your browsing data By contrast, when you use https, only you and the website (technically, the servers that are hosting the website) have access to the content of your browsing. We specify content here because certain metadata would still be known by entities on the path between you and the website, such as the basic URL of the website, the amount of time you spend browsing the website, and the amount of information you are downloading from the website. In-Transit Encryption When we communicate with another person by email, instant messaging, or video chat, those communications are (most often) routed through the communication provider (e.g., Google servers for email or Microsoft servers for Skype calls), as pictured below. Nowadays, those communications are usually encrypted but most often only encrypted between you and the communication provider. That is, while the entities and eavesdroppers along the path between Assata and the communication provider’s servers (center) and between the communication provider’s servers and Bobby do not have access to the content of your communications (but can glean metadata), the communication provider does have access to the content of your communications. We call this in-transit encryption because the content is encrypted while it is in transit between Assata and the communication provider and between the communication provider and Bobby. The in-transit encryption keys are generated separately for each part of the path between Assata and Bobby, pictured below. The provider (center) performs a Diffie-Hellman key exchange with Assata, generating a shared key, and performs a separate Diffie-Hellman key exchange with Bobby, generating a different shared key. 
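The separate in-transit key exchanges just described (and the direct exchange used for end-to-end encryption, discussed below) can be sketched with toy-sized Diffie-Hellman parameters. This is a minimal sketch with a deliberately small modulus; real systems use parameters of 2048 bits or more (or elliptic curves), and the party names follow the chapter's Assata/Bobby/provider example:

```python
import secrets

# Toy Diffie-Hellman parameters; real deployments use much larger ones.
p = 0xFFFFFFFB  # a small prime modulus (2**32 - 5), illustrative only
g = 5           # public generator

def dh_keypair():
    """Return (private, public) where public = g^private mod p."""
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)

assata_priv, assata_pub = dh_keypair()
bobby_priv, bobby_pub = dh_keypair()

# In-transit encryption: the provider runs a SEPARATE exchange with
# each party, so the provider holds both shared keys.
prov_priv1, prov_pub1 = dh_keypair()  # provider's key for the Assata leg
prov_priv2, prov_pub2 = dh_keypair()  # provider's key for the Bobby leg

key_assata_provider = pow(prov_pub1, assata_priv, p)
key_provider_assata = pow(assata_pub, prov_priv1, p)
assert key_assata_provider == key_provider_assata  # provider knows this key

key_bobby_provider = pow(prov_pub2, bobby_priv, p)
key_provider_bobby = pow(bobby_pub, prov_priv2, p)
assert key_bobby_provider == key_provider_bobby    # ...and this one too

# End-to-end encryption: Assata and Bobby exchange public values
# directly, deriving a key the provider never learns.
e2ee_key_assata = pow(bobby_pub, assata_priv, p)
e2ee_key_bobby = pow(assata_pub, bobby_priv, p)
assert e2ee_key_assata == e2ee_key_bobby
```

The asserts pass because (g^a)^b and (g^b)^a are equal mod p; the security difference lies only in who participates in each exchange, not in the math.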
When Assata sends a message to Bobby through the provider, the message is first encrypted with the key Assata shares with the provider, and then the message is transmitted to the provider. The provider decrypts the message with the key that Assata and the provider share. Then the provider re-encrypts the message with the key that the provider shares with Bobby before transmitting the encrypted message to Bobby. Bobby can then decrypt the message. Therefore, the message only exists in a decrypted state on Assata’s and Bobby’s devices and the provider’s servers; the message is encrypted when it is in transit between these entities. In-transit encryption End-to-End Encryption While in-transit encryption protects your communications from many potential adversaries (such as your internet service provider, the Wi-Fi hotspot, or a snoop along the communication channels), the communication provider still has access to all that information. Even if the provider is not a direct adversary, they may share that information with an adversary (such as through a subpoena or warrant). End-to-end encryption (E2EE) will protect your communications from even the communication provider. For E2EE (pictured below), Assata and Bobby exchange keys (using a Diffie-Hellman key exchange or a similar procedure). While their communications are routed through the communication provider, so long as the provider isn’t mounting a man-in-the-middle attack, the communications through the provider are encrypted with a key that only Bobby and Assata have access to. That is, the message only exists in a decrypted state on Assata’s and Bobby’s devices. Assata’s and Bobby’s devices are the endpoints of the communication (hence end-to-end encryption). End-to-end encryption Authentication While E2EE is the gold standard, there are further considerations. 
As mentioned above, end-to-end encryption is only established if the communication provider (or another third party) does not mount a man-in-the-middle attack starting at the time of key exchange. However, as we covered in the chapter “The Man in the Middle,” if Assata and Bobby verify their keys through independent channels, they can determine whether or not a man-in-the-middle attack has occurred and so whether their communications are truly end-to-end encrypted. While many apps or services claiming E2EE provide the ability to verify keys, many do not, leaving little basis for trusting their claims of E2EE. Further, of those E2EE apps that do provide the ability to verify keys, most operate on a Trust on First Use (TOFU) basis. That is, communications may begin without verifying keys first. However, while actually performing key verification is the only way to guarantee E2EE, the mere ability to verify keys is protective against automated man-in-the-middle attacks, as even a small fraction of users verifying keys would catch widespread man-in-the-middle attacks. And, of course, E2EE only protects the communication between devices—it does not protect the data that is on the device. E2EE apps should be combined with strong passwords to protect the account or device. In Context: Multiparty Video Chatting There are many apps and services for video chatting between two or more people, with varying degrees of security. Here are three illustrative examples: • Wire provides the gold standard of E2EE. Each user has an account that can be accessed from multiple devices (e.g., laptop and smartphone). There is a public key for each device that is used to establish an encryption key for a session (e.g., a video call), and the fingerprints of these keys can be compared to verify true E2EE. Wire allows E2EE video calls for groups of up to twelve users. 
• Zoom allows video calls for much larger groups and does provide E2EE in the sense that a video stream is encrypted and decrypted by the users with the same key. However, this key is established and distributed by the Zoom servers. Since Zoom has access to the encryption key, this cannot be considered true E2EE. Further, there is no mechanism for users to verify the encryption keys. As of summer 2020, Zoom had a proposal for establishing keys for true E2EE but had not yet implemented it. • Jitsi Meet also provides large-group video conferencing, but only using in-transit encryption. However, Jitsi Meet is available to be hosted on any server (including your own, if you are so inclined). There is an instance of Jitsi Meet hosted by May First, a nonprofit that provides technical solutions to social movements and is a trusted third party to many groups. Even though May First has access to these communications, some would prefer to trust May First over a profit-driven solution such as Zoom. External Resources • Blum, Josh, Simon Booth, Oded Gal, Maxwell Krohn, Julia Len, Karan Lyons, Antonio Marcedone, et al. “E2E Encryption for Zoom Meetings.” Zoom Video Communications, December 15, 2020.
What You’ll Learn 1. Who has access to your data in the cloud 2. What of your data is in the cloud The cloud is ubiquitous. Since the early 2000s, data has increasingly been stored not exclusively (or at all) on your own device but on the servers of the companies that manage your device or operating system or whose services you subscribe to. If that data is not encrypted with a key that you control, it is at risk of compromise. Accessing your remote or cloud data or storage is similar to accessing a web page, as pictured below. In most models of accessing cloud storage, the information is protected by in-transit encryption, which would protect your data from potential adversaries along the path from your device to your cloud storage provider’s servers (pictured below). However, as discussed in the chapter “Digital Threats to Social Movements,” data that is stored remotely (and is not encrypted) is accessible by government adversaries by subpoena or warrant or may simply be shared with third parties. Unfortunately, even if we avoid the most explicit forms of remote data storage (such as what is offered by Dropbox or Google Drive), many of our devices encourage the remote backup of all our data (such as Apple devices to iCloud), in some cases making it very difficult to avoid (as for Android devices to a Google account). This includes a potential wealth of information: your addresses, calendar, location history, browsing information—potentially anything you do with your computer. In Context: Trusted or Encrypted Cloud Storage There are many choices for cloud storage. In the following list, we describe a few options that illustrate the breadth of what is available, from not encrypted and not trusted, to not encrypted but trusted, to encrypted. • Google will happily store all your information (email, files, contact information, device backups) for free. 
Of course, they extract value from this by using your data, but they couldn’t do so if that data were encrypted with a key that only you control (and so it isn’t). As we saw in the chapter “Digital Threats to Social Movements,” Google returns data in response to roughly 80 percent of subpoena requests.

• The software ownCloud provides Box- or Dropbox-style cloud storage but, like Google’s products, only uses in-transit encryption. (An enterprise version of ownCloud does provide some end-to-end encrypted file storage and sharing.) However, ownCloud, like the video-conferencing app Jitsi Meet, can be hosted on any server (including your own). Also like Jitsi Meet, there is an instance of ownCloud hosted by May First, a service provider that is trusted by many. Even though May First has access to your stored data, some would prefer to trust May First over Google.

• CryptPad is a collaborative editing platform that offers an end-to-end encrypted alternative to Google Docs. Documents are accessed by a link that includes the key for decrypting the document, but that key appears after a # in the URL—for example, https://cryptpad.fr/pad/#/2/pad/edit..._i-6r_cTj9fPL+. The part of the URL after the # is known as a fragment identifier; it is not transmitted to the server but is only used within the browser—in this case, to decrypt a given pad. Since the encryption key is part of the URL, one must take care in sharing such a link (i.e., only share it over an encrypted channel, such as Signal).

• Keybase has a number of features, including an end-to-end encrypted storage system akin to ownCloud or Dropbox. Unlike CryptPad, Keybase offers stand-alone apps (rather than operating in a browser) and handles the management of keys.
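The claim that the fragment identifier stays inside the browser can be checked with a standard URL parser: the request sent to a server is built from the scheme, host, path, and query, never the fragment. The sketch below uses only Python’s standard library; the hostname and “key” in the URL are made-up placeholders standing in for a CryptPad-style link, not a real pad.

```python
from urllib.parse import urlsplit

# A CryptPad-style link: the decryption key lives after the '#'.
# (Hostname and key below are illustrative placeholders, not a real pad.)
url = "https://cryptpad.example/pad/#/2/pad/edit/FAKE-KEY-FOR-ILLUSTRATION"

parts = urlsplit(url)

# What a browser sends to the server: host plus path (and query, if any).
print("sent to server:", parts.hostname, parts.path)

# What never leaves the browser: the fragment identifier (here, the key).
print("kept locally: ", parts.fragment)
```

Because the fragment never appears in the HTTP request, the server hosting the pad cannot see the decryption key; anyone you send the full link to, however, can.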
textbooks/socialsci/Sociology/Cultural_Sociology_and_Social_Problems/Defend_Dissent%3A_Digital_Suppression_and_Cryptographic_Defense_of_Social_Movements_(Borradaile)/03%3A_Defending_Social_Movements_(in_the_US)/3.05%3A_Protecting_Your_Re.txt
What You’ll Learn

1. The difference between being anonymous and pseudonymous
2. Three distinct ways to use Tor
3. Some things you should never try to do using Tor

In the chapter “Anonymous Routing,” we compared and contrasted virtual private networks (VPNs) and Tor as two methods for disguising one’s metadata online. This can help one achieve anonymity or pseudonymity, though this is difficult to do over the long term. In this chapter, we will focus on skills for using Tor, but these lessons also apply to using a VPN. One needs to additionally remember, though, that when using a VPN, the VPN provider knows who you are and the metadata of your internet communications (and the content, if it isn’t encrypted). While we will focus on using Tor via the Tor Browser, know that there are other applications (such as secure-messaging applications or whole operating systems) that route internet requests through the Tor network.

Anonymity versus Pseudonymity

Before we describe different ways to use Tor, let us consider the difference between anonymity and pseudonymity. These terms are used in different ways in different contexts, and we restrict our use here to online communications and behavior. Anonymity refers to being without a name or, more generally, without any identifier that could link to you. If you visit a website anonymously today and visit the same website anonymously tomorrow, the website should not even be able to tell that it was the same person both times. All the website should know is that “someone visited me anonymously yesterday” and “someone visited me anonymously today.” Pseudonymity refers to using a false name, with few or no people knowing your true identity. For example, Samuel Clemens published under the pseudonym Mark Twain, but of course his publisher and others knew who the true author was. Edward Snowden used the pseudonym Cincinnatus in contacting journalist Glenn Greenwald.
Greenwald did not know who was contacting him as Cincinnatus, and because Snowden was using Tor to contact Greenwald, neither did anyone else. However, Snowden’s repeated use of the alias Cincinnatus allowed Greenwald to connect the different communications he (and fellow journalist Laura Poitras) received from Snowden. We will refer to pseudonymity as allowing one to link different and otherwise anonymous sessions of communication under one persona.

Ways to Use Tor

Tor can be used to hide information about your identity (such as your physical location) and achieve anonymity and pseudonymity. For the novice user, Tor is accessed using the Tor Browser or Tor-compatible apps. For a more advanced user, non-web-browser communications can be routed through Tor by using an operating system (such as Tails or Whonix) that routes all your internet traffic through Tor.

Hiding Your Physical Location

Using the Tor Browser as you do any other browser, including accessing email or social media accounts that are linked to you, will conceal your physical location from those accounts. Each time you open a new Tor Browser tab or window, and after a certain delay, Tor will route your web requests via a new location. However, many email and social media platforms will flag your account activity as suspicious if it is being accessed from different locations, as it will appear to be when accessed through Tor. So while it is possible to use Tor all the time and for everything, it may not be practical.
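For the advanced case of routing a non-browser application through Tor, the usual mechanism is the SOCKS proxy that a locally running Tor daemon exposes (by default on port 9050; the Tor Browser’s bundled Tor listens on 9150). The sketch below builds such a proxy configuration for Python’s requests library; the function name is ours, the ports are Tor’s documented defaults, and the socks5h scheme (rather than socks5) matters because it makes the proxy resolve hostnames too, so DNS queries do not leak outside the Tor network.

```python
# Proxy settings that route an application's traffic through a local Tor daemon.
# "socks5h" (rather than "socks5") asks the proxy to resolve hostnames,
# so DNS lookups travel through Tor instead of your local resolver.
TOR_SOCKS_PORT = 9050  # default for a system Tor daemon; Tor Browser uses 9150

def tor_proxies(port: int = TOR_SOCKS_PORT) -> dict:
    """Return a proxy mapping suitable for requests.get(..., proxies=...)."""
    proxy = f"socks5h://127.0.0.1:{port}"
    return {"http": proxy, "https": proxy}

# Usage (requires a running Tor daemon and the requests[socks] extra):
#   import requests
#   requests.get("https://check.torproject.org", proxies=tor_proxies())
print(tor_proxies())
```

Note that this only redirects the application’s network traffic; as the rest of this chapter stresses, what you do over that connection (logins, identifying information) can still break your anonymity.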
If you are able to navigate these difficulties, you will still need to be smart to consistently hide your physical location by avoiding the following behaviors:

• Entering identifying information into a website (such as your address)
• Downloading a document that might access some part of the document via the web (like a photo) and opening it outside the Tor Browser (Word documents and pdfs can do this); if you need to access such a document, disconnect from the internet before opening it

Achieving Anonymity

Using Tor can help you achieve anonymity. However, you will need to restrict your behavior to ensure you don’t leak information that could break your anonymity. To that end, in order to maintain anonymity, you need to avoid the following behaviors during your anonymous session (in addition to those for hiding your physical location):

• Logging into accounts (e.g., social media, email, banking)
• Visiting your own website repeatedly

Achieving and Maintaining Pseudonymity

If you create a pseudonym that is unrelated to your true identity in order to, for example, post press releases or participate in forums, Tor can help ensure that your pseudonym stays unrelated to your true identity. However, to keep your real and pseudonymous identities separate, you need to avoid the following behaviors:

• Accessing different pseudonymous (or your pseudonymous and real) identities in the same session, as this can link these identities
• Accessing a pseudonymous account even once outside of Tor
• Using two-factor authentication with a phone (as your phone, even if it is a “burner,” can reveal your physical location)
• Posting media with revealing metadata (such as location)

Note that the longer you attempt to maintain a pseudonymous identity, the more opportunity you give yourself to make a mistake. In addition to the mistakes above, your writing style can be used to identify you using stylometry.
The more examples of your writing style that are available (under both your real and pseudonymous identities), the easier it is to identify you.

Tor Warnings

There are some additional things to be aware of when accessing the internet via Tor. As with any protective technology, nothing is perfect. If an adversary (Edgar) is able to watch Assata’s connection to the Tor network as well as her connection leaving the Tor network (to Bobby’s website), Edgar will be able to determine that Assata is visiting Bobby’s website. This is called an end-to-end timing attack or correlation attack. If you attempt to use applications not designed for Tor over the Tor network, they may leak identifying information, such as your screen resolution or a unique set of settings you may have. Finally, when using Tor, keep in mind that it only provides anonymity—for privacy, you need to be accessing web pages using end-to-end encryption via https (though unfortunately not all websites support this).

In Context: Getting the Real Tor Browser

The tools you use to protect yourself online are only useful if they are the real thing. In 2019, it was discovered that a false version of the Tor Browser was being promoted by people interested in (and successful at) stealing Bitcoin. They made the malicious “Tor Browser” available through look-alike domains like tor-browser[.]org and torproect[.]org (instead of the authentic domain, torproject.org). To protect yourself from such mistakes as downloading from the wrong site, or from a man-in-the-middle attack that supplies you with a malicious app, apps such as the Tor Browser make it possible for you to check the signature of a download, as we discussed in the chapter “Public-Key Cryptography.”

External Resources
textbooks/socialsci/Sociology/Cultural_Sociology_and_Social_Problems/Defend_Dissent%3A_Digital_Suppression_and_Cryptographic_Defense_of_Social_Movements_(Borradaile)/03%3A_Defending_Social_Movements_(in_the_US)/3.06%3A_Protecting_Your_Id.txt
Starting points and emerging issues

“Mass surveillance is fundamental threat to human rights says European report” (Harding, 2015).

“We are moving into an era when ‘smart’ machines will have more and more influence on our lives (but) the moral economy of machines is not subject to oversight in the way that human bureaucracies are” (Penny, 2017).

Headlines such as those above demonstrate as well as any that the IT revolution brings with it a series of challenges that societies are ill prepared to face. While surprisingly large numbers of people unthinkingly renounce such of their privacy as remains for trifles, the idealistic hopes of early pioneers and freedom-loving ‘netizens’ remain largely unfulfilled. Benign notions such as ‘cyber democracy’ and the ‘information superhighway’ have all but disappeared, replaced by a growing sense of uncertainty, disillusion and fear of unknown consequences. For many the digital realm has become an elusive and obscure ‘nowhere place’ whose shadowy operations lie beyond the boundaries of human perception. A few vast corporations, and those with privileged access to their services, appear to have almost unlimited influence both for good and for ill. To capture attention and encourage wide immediate usage, it’s the presumed utility of emerging technologies that’s highlighted rather than the radical ambiguity that attends their longer-term use. The implications of this ambiguity need to be more thoroughly understood if positive measures to reduce or eliminate its negative consequences are to be undertaken. Those driving the IT revolution claim new benefits and highlight examples of successful implementation – email, tablets, health innovations and so on. Yet, despite such obvious successes, many IT practices are powerfully disposed in favour of the interests of agencies, corporations, innovators and entrepreneurs, with little evidence that these actors are motivated by positive values that promote the public interest.
So concerns that the overall effect of the IT revolution could herald the onset of a humanly oppressive technological dystopia remain remarkably durable – if not always spelled out in detail (Harari, 2015). Consequently no amount of saturation marketing will cancel out the ‘dark’ side of the IT revolution or allow it to be wished out of existence. The collective subconscious has access to truths, archetypes, dimensions of reality, denied to, and by, high-tech gurus (Slaughter, 2012, 2015a). It knows, for example, that intangible entities can reach out and destroy centrifuges in a distant country, disrupt civil infrastructure, undermine organised life across the globe. It knows that private bank accounts can be drained before their owners realise what has happened. It also knows that women are attacked and sometimes killed by former partners who’ve tracked their movements and their conversations using smart phones and social media. And that leaves aside a host of phishing attempts, scams, identity thefts and other online abuses (Glenny, 2011; Williams, 2015). This enquiry first seeks to account for the underlying polarity outlined above between the promoters of high-tech ‘solutions’ and those who view the onset of the IT revolution from a more critical perspective. Since the literature is huge and growing, it draws on an indicative sample of that literature, including informed (or ‘quality’) journalism produced over the last decade or so. It begins by outlining key assumptions (including that technology is ‘not merely stuff’ and that ‘new technologies are ambiguous’). It provides a critical review of several key works and identifies some emerging themes. It then provides a critique of three case studies: the Internet of Things (IoT), autonomous vehicles (AVs) and Silicon Valley itself. It draws on Integral futures methods to provide a brief account of some internal aspects of the Internet giants.
It finally concludes that a variety of actions, decisions and policies are needed to reduce high-tech ambiguity and expand social equity. Such ‘conclusions’ should be regarded as starting points for further enquiry. Turning the IT revolution toward more productive and egalitarian ends will require dedicated social efforts that are sustained over the longer term.

Key assumptions

1 – Technology: not merely ‘stuff’

A key insight that emerges from STS (Science, Technology and Society) perspectives is that we should not think, speak or refer to ‘technology’ as if it were merely an array of physical (or digital) objects. While it is the material existence of technologies that presents itself to our most obvious and external senses, linear and external views reify what ‘technology’ actually is – a consequence of the interaction of long-term social, cultural and economic processes. Hence, many of the most significant characteristics of any particular technology are effectively invisible – both to the naked eye and the credulous mind. These characteristics are not visible in the ‘things’ (or software) that are displayed before us but hidden in the patterns inherent in the causative relationships that brought them into being and maintain them over time. Saying anything of value about ‘the IT revolution’ or ‘the Internet’ therefore requires considering particular items, or suites of technology, in relation to their wider contexts. That’s where the fun begins, because as soon as you look ‘beneath the surface’ of social reality you find powerfully contested dynamics just about everywhere.

2 – New technologies are ambiguous yet warnings and costs are ignored

An underlying fact that’s often overlooked is that new technologies are, on the whole, seldom actively sought by anyone representing an existing public interest.
Rather, ‘demand’ is manufactured and propagated by powerful organisations through pervasive and relentless marketing across all available media, backed by sheer financial and economic power. One is reminded of the aphorism credited to Donella Meadows that you don’t have to spend millions of dollars advertising something unless its worth is in doubt. Few stand back to question the fact that the corporations assume that they know what’s best for everyone. Yet technical developments have always created ‘winners’ and ‘losers.’ So new technologies are often fundamentally ambiguous in the early stages, or until sufficient time has passed for social experience to accumulate. While they are often introduced with showy fanfares enumerating supposed benefits, there are always hidden dangers and costs. For example, the ubiquitous rise of GPS devices has led to a marked decline in people’s own ability to navigate. Again, commonly used phone numbers once memorised are now merely a click away, and the memory fades. Most parents understand how technology alters things as basic as child rearing as they struggle to mediate between their children and the increasingly enticing attractions of ‘screen time.’ Then there are the ‘lonely hearts’ looking for love on the Internet and ending up seriously out of pocket, or worse. The following section provides a small but indicative sample of work on aspects of the IT revolution produced over the last decade or so. While superseded in some respects by later works (considered below), these studies indicate the beginnings of an evolving response to careless high-tech innovation. As such they provide a ‘way in’ to this vast domain and a foundation from which more influential accounts would grow.
Earlier views of the IT revolution

Big data, small vision

Mayer-Schonberger and Cukier’s book Big Data (Mayer-Schonberger & Cukier, 2013) is sub-titled ‘A revolution that will transform how we live, work and think.’ Ironically, the associated threats appeared to escape them entirely. The bulk of the book was devoted to arguing how ‘big data’ provides new insights into many otherwise elusive phenomena and in so doing creates new sources of value. The authors left some key assumptions unexamined (for example, that the emergence of IT can be equated with the ‘end of theory’) and concentrated exclusively on positive uses of big data. These include the ability to predict the emergence of epidemics and the prevention of aircraft breakdowns through real-time engine monitoring. But what they consistently failed to do was to separate what they considered to be ‘good for business’ from what may or may not be good for everyone else. Hence the underlying theme, perhaps, can be summarised as ‘jump aboard or be left behind.’ While limited acknowledgements were made of how long-standing occupations and professions had been undermined by technological changes, the wider costs were overlooked. A brief section outlined strategies to minimise technology-related risks, but no attention was given to evaluating the culture and worldview from which these technological changes originate. Nor was there any attempt to consider or evaluate their future implications. Rather, these powerful background factors were taken as given and hence remained invisible throughout. As such the book demonstrated a familiar preoccupation with how ‘technology’ will help us to ‘create the future,’ along with a strong sense of blinkered optimism.

Reform and renewal

Taylor’s The People’s Platform (Taylor, 2014) felt like a breath of fresh air in a difficult and often demanding IT debate – one that is often obscured by the overwhelming self-interest of some of the most powerful entities in the world.
With the subtitle ‘Taking back power and culture in the digital age,’ the reader recognises at the outset that this will not be another banal enumeration of the purported ‘wonders of IT.’ For the author, the mantra of ‘open markets’ is far from an unalloyed ‘good’ because ‘the more open people’s lives are, the more easily they can be tracked and exploited by private interests’ (Taylor, 2014, p. 23). At the outset she clearly acknowledges the way conventional discourse about IT is framed. It ‘tends to make technology too central, granting agency to tools while sidestepping’ larger social structures (Taylor, 2014, p. 6). She adds that ‘technology alone cannot deliver cultural transformation’; rather, we must address the underlying social and economic forces (Taylor, 2014, pp. 9-10). The issues could not be put more plainly than that. The language and intent here also echo those of the STS discourse mentioned above. Grounded approaches that explore the IT revolution’s social and ecological implications certainly lie outside the realm of everyday knowledge, but they are essential for ‘clearing the fog’ and making sense of what is happening around us. Later she points out how, far from promoting competition, high-tech monopolies prosper online, sanctioning a new kind of ‘vertical integration’ and power over people (Taylor, 2014). A major challenge in her view is that the more user-friendly digital devices are, the more we are connected to machines that ‘keep tabs on our activities’ (Taylor, 2014, p. 32). One of the most striking conclusions is that the future currently being fashioned, far from being innovative and ‘new,’ is in fact deeply conservative, even regressive. That is, it ‘perpetuates and expands upon the defects of the earlier system instead of forging a new path’ (Taylor, 2014, p. 34). An analogy for this conclusion can be found in modern-day advertising.
During earlier times advertising was little more than a kind of visual adjunct to shopping that simply drew attention to what was for sale. A century or so later it has become a vastly inflated, turbo-charged public nuisance. It not only embodies crass and indefensible conceptions of human life (‘shop ‘til you drop’) but also imposes incalculable costs on individuals, societies, cultures and the environment, in part by misdirecting them wholesale and undermining useful, i.e. less self-focused, values. It becomes increasingly vital to contest the power of what Taylor (2014, p. 78) calls ‘the overlords of monopoly journalism’ and the ways that they’ve become ‘disconnected from the communities they were supposed to serve.’ As suggested above, new technologies don’t emerge in a cultural vacuum, free of a host of wider influences. It follows that, ‘if we want to see the fruits of technological innovation widely shared, it will require conscious effort and political struggle’ (Taylor, 2014, p. 54). What is also refreshing here is that the author is under no illusion that the main beneficiaries of IT innovations have indeed been US corporations. Given the worldview these share, it’s obvious that limits need to be applied to their activities and their growth. During previous years a great deal was written and said about the rise and rise of online ‘social networks.’ But, at that time, few examined the ways that they quietly ‘shuffle hierarchies’ and produce ‘new mechanisms of exclusion’ (Taylor, 2014, p. 108). Such media, it turned out, are by no means immune to what has been called the ‘iron law of oligarchy’: social networking has ‘a surprising degree of inequality built into its very architecture’ (Taylor, 2014, p. 121). Again, ‘the topology of our cultural landscape has long been twisted by an ever-shrinking number of corporations’ (Taylor, 2014, p. 129). She adds that ‘powerful hierarchies have come to define the medium’ (Taylor, 2014).
Moreover, ‘online spaces are… designed to serve Silicon Valley venture capitalists… and advertisers’ (Taylor, 2014, p. 139). The smoothness and ease of use of the technology belies an appalling ‘structural greed’ such that ‘the cultural commons have become little more than a radically discounted shopping mall’ (Taylor, 2014, p. 166). Some of the solutions – or at least necessities for creating positive change – that emerge from Taylor’s (2014) well-founded critique include the following:

• The need for new social protocols that include ‘ethical guidelines for engagement and exchange, restrictions on privatising and freeloading, fair compensation and the fostering of an ethos of stewardship.’
• An explicit recognition of the need to acknowledge the people and resources of all kinds upon which IT systems rest. These include rare minerals, mines, data centres, toxic waste, low-paid factory workers and the growing mountains of e-waste that turn up in poor countries.
• A serious attempt to define just how IT systems could be re-designed to better serve the public and also ensure that they are sustainable.
• A strategy to withdraw from the current practice of commodifying and monetising the attention of IT users and expropriating their personal information for profit. That is, ending ‘a new form of discrimination’ where companies use data without your permission, ‘dictating what you are exposed to and on what terms’ (Taylor, 2014, p. 191).
• Defining and enacting new national policies to rein in the worst excesses of the IT industry and, at the same time, protect people and the cultural spaces where creativity, art and innovation occur for non-instrumental purposes.
• Reducing the colossal amount of resources expended on advertising (over US$700 billion a year in the US alone), which has virtually no social value and which most people despise.

As a way of bringing these ideas together, Taylor (2014, p.
215) proposes a ‘manifesto for a sustainable culture,’ one in which ‘new and old media are not separate provinces but part of a hybrid cultural ecosystem that includes the traditional and digital composites of the two.’ In her view such a culture will possibly include the following features:

• It will balance a preoccupation with ‘nowness’ with encouragements to think long term. As such it will include building archives ‘to allow people to explore their cultural heritage for years to come.’
• It will ‘harness new communications tools’ to shift the conversation from ‘free’ culture to ‘fair’ culture.
• It will re-draw the boundaries for subsidies that currently go to the powerful and make them more widely available for genuinely useful civic purposes.
• Current Internet oligarchs will give way to new civic organisations such as a ‘digital public library.’ The former would, at the same time, be required to pay their fair share of tax.
• Service providers and popular IT platforms will be regulated as public utilities. As part of this, new ‘firewalls’ would be created to separate those entities that create information from those that transport it. In other words, the ‘vertical integration’ of the oligarchs would be reduced and eliminated over time.
• Similarly, meaningful government oversight of digital media will be re-established.
• New investment in non-commercial enterprises will be evaluated and encouraged.
• Overall, art, culture and commerce will be freed from being monetised, commodified and relentlessly exploited.

These are clearly the kinds of suggestions that could in some places generate familiar accusations of ‘Socialism’ and the like. Yet without taking such proposals seriously it is difficult to imagine how the present trajectory of global civilisation toward catastrophe can be turned around.

The dark side

Thus far we’ve considered sources dealing with some of the social and commercial uses or misuses of advanced IT.
But there’s an even darker and yet more challenging side to this story – the military and criminal uses of IT. The questions they pose are of the utmost significance to humanity and its possible futures, but too few appear willing or able to grapple with the issues, let alone provide satisfying answers. Given the secrecy and obscurity that characterise the area, reliable sources are few and far between. An exception is Misha Glenny’s 2009 book McMafia (Glenny, 2009), which provides a detailed overview of organised crime around the world. The book illustrates how the advent of the Internet was a boon for criminals, since it made their activities easier and the work of governments and other civil authorities harder. That’s because the Internet provides an ever-growing number of ways to hide, launder money and pursue a vast range of criminal activities that are difficult to detect or deter. Glenny spent the next two years researching and writing a book on cybercrime called Dark Market (Glenny, 2011). Here he concentrates on the emergence of individuals and groups who were all too ready to capitalise on new opportunities to steal from unsuspecting organisations and individuals. For example, he describes how the emergence of ‘carding’ allowed hackers to discover and access personal information and use it to withdraw funds from unsuspecting banks. This rapidly morphed into the development and online sale of card-skimming devices, the duplication of credit cards and so on. An online presence called CarderPlanet facilitated this underground trade for some time by operating out of the ‘Dark Net’ of hidden sites that require special software for access. Nowadays its successors facilitate a vast network of illegal transactions that appear to cover the entire gamut of criminal activity around the world.
Glenny follows some of the individuals who developed and pursued this parasitic underground trade, finding that many of them came from Ukraine, Russia and other parts of the former Soviet Union. But, of course, it did not stop there. As all Internet users know to their cost, the rise of spam quickly began to infest email communications. Vast quantities could now be generated at minimal cost. Moreover, very few hits were required to create substantial profits. The Nigerian 419 up-front payment or money transfer scam was one of many that began to part the naïve and vulnerable from their hard-earned cash. This, unfortunately, is a game that continues to grow and for which there are no simple or easy solutions. The rise of ‘phishing’ and the exploitation of human weaknesses continue to degrade the web and take it ever further away from the idealism expressed by many of its early promoters. Certain well-meaning groups (sometimes referred to as ‘white hat hackers’) trawl the Internet continuously to detect ISPs (Internet Service Providers) that support such illegal activities. But, as Glenny (2011, p. 151) notes, it is an unequal struggle, since ‘there are tens of thousands of active cyber criminals out in the ether, and only a tiny fraction of them are likely to get caught.’ Nasty as these criminal operations undoubtedly are, they are still relatively minor when compared to the growing use of the Internet for industrial espionage and sustained cyber aggression. Often cited in this context is the case of the Stuxnet virus, which was specifically designed to destroy uranium-enrichment centrifuges in Iran. The virus is widely thought to have been a collaborative project carried out by the USA and Israel. The immediate end of disrupting the enrichment process for a period of time was apparently achieved. But informed observers point out that this dangerous piece of military software also had many other uses and thus potentially unlimited targets.
Here the two-edged-sword aspect of new technology is clearly revealed. What was originally touted as a ‘solution’ to a particular ‘problem’ becomes a vastly magnified ‘problem’ (if that is the appropriate word) in its own right, with consequences that are, to a considerable degree, unknowable. The very same dynamic re-occurred in Syria in early 2017, when drones were used to attack the ‘liberating’ forces. Glenny’s book was written out of a concern that ‘in humanity’s relentless drive for convenience and economic growth, we have developed a dangerous level of dependency on networked systems in a very short space of time’ (Glenny, 2011, p. 1). Yet none of these technological corollaries appear to have deterred the corporates and Internet oligarchs from pressing onward and promoting new digital capabilities – including what is now being called the ‘Internet of Things,’ explored in more depth later in this book. At the end of his book Glenny refrains from suggesting solutions because he does not see many emerging. He notes, for example, that the resources being poured into ‘cyber security’ are, by and large, being invested in technology. Here is another reflection of the structural bias that is common across a wide span of innovations. By contrast, ‘there is virtually no investment in trying to ascertain who is hacking and why.’ He adds that ‘nobody differentiates between the hackers from Wikileaks, from the American or Chinese military, from criminal syndicates and from the simply curious’ (p. 268). It’s important, in his view, to develop a more detailed and sophisticated understanding of the hackers themselves. A thumbnail sketch suggests that most of them are male, bright (often in possession of advanced degrees), socially withdrawn, and have had problems with family, especially parents.
These attributes resonate with those attributed by Joel Bakan and others to certain corporations themselves, suggesting that the behaviour of some could legitimately be described as psychopathic (Bakan, 2003). Glenny’s work provides a valuable source of knowledge and understanding about the widespread criminality of our times and also the extent to which it is supported and facilitated by IT in general and the Internet in particular. To dig deeper, we turn to one work that delves further into the notorious world of IT.

Interrogating net delusions

The works considered so far have each tackled aspects of the IT revolution in fairly straightforward ways. They amount to what could be regarded as a ‘first wave’ of critique in that they deal with fairly obvious topics and employ quite straightforward thinking and analysis. Few, however, have related IT and its many extensions to other frameworks of knowledge and meaning-making in any depth. Nor have they accessed narratives that bring into focus the wider and deeper threats to our over-extended civilisation (Ehrlich & Ehrlich, 2013). Evgeny Morozov brought a qualitatively distinctive voice to the conversation, one that qualifies, perhaps, as an early ‘second wave’ contribution. His two books The Net Delusion (Morozov, 2011) and To Save Everything Click Here (Morozov, 2013) set new critical standards, broke new ground and brought into play an impressive range of cultural and linguistic resources. This brief overview concentrates on the second of these. What immediately set Morozov apart is that, unlike other observers who focused on more tangible and realist aspects of IT, his approach sought to ‘interrogate the intellectual foundations of the cyber-theorists.’ Thus, according to a Guardian review, he found that ‘often, they have cherry-picked ideas from the scholarly literature that are at best highly controversial in their own fields’ (Poole, 2013). 
Morozov was critical not only of the means employed by the Internet oligarchs and Silicon Valley but also of their ends. The premise of To Save Everything uses:

Two linked “small ideas” to critique the belief that the internet will help to improve everything. These two ideas are “internet centrism” and “solutionism”. The former idea is self-evident – advocates of the internet tend to assume that features of the internet can be mapped into other areas, and that its exceptional qualities will transform any area of life that comes to be mediated by it. The latter idea, drawn from science and technology studies and urban planning, argues that focusing on solutions limits our ability to think critically about the nature of the problems they are supposed to solve – or even whether they are ‘problems’ at all! To a hammer, everything looks like a nail, and to a social network entrepreneur, both politics and obesity look like problems that can be solved through behaviour change instigated through social networks (Powell, 2013)

The method employed is ‘radical questioning’ and the author demonstrated a formidable grasp of doing it methodically and authoritatively. His arguments cannot be covered in detail as they need to be read and reflected on in the original. But it is useful to summarise some of the language and conceptualisations employed as these can be viewed as powerfully enabling resources in their own right. The main themes of Morozov’s work address a number of long neglected topics including:
• Questioning the means and the ends (or purposes) of Silicon Valley’s quest.
• Rejecting what he calls ‘Internet centrism’ along with the ‘modern day Taylorism’ that it promotes.
• Opposing the rise of pervasive ‘information reductionism’ in many areas of life, culture, economic activity and so on.
• Questioning the fact that many apparently innovative procedures that are being promoted provide pseudo ‘solutions’ to problems that may not exist. 
• Questioning the tendency of IT to reduce the viability of many socially grounded functions and activities – for example, rendering entire professions and types of work (both repetitive and creative) redundant.
• Asserting the value of some of the human and social capacities that are undermined by IT. These include ambivalence, the capacity to make mistakes, the need for deliberative spaces and so on.

Morozov supported Taylor in reminding us that the dynamic that shaped and is continuing to drive the Internet’s rapid growth and over-reach derives from the never-ending search for profits rather than any concern for human rights. In this view rights are everywhere being extinguished. The underlying dynamic is revealed in many different ways. It shows up in the supposed ‘neutrality’ of algorithms that, while ubiquitous, are hidden and inaccessible so far as most people and organisations are concerned. It also shows up in the vastly expanding realm of ‘apps’ that have hidden costs in terms of privacy, dependency and the promotion of questionable notions such as that of the ‘quantifiable self.’ (That is, a ‘self’ that can be tracked, measured, located, directed and ‘enhanced’ in real time.) Also involved here is a ‘quantification fetish’ – the idea that more data is always better, always ‘objective’. What this amounts to is a vast and pervasive collective pressure on how people understand their world and how they operate within it. Already there is a costly ‘narrowing of vision’ and a decline in the ‘narrative imagination.’ Morozov (2013, p. 282) quoted Clay Johnson’s observation that ‘much as a poor diet gives us a variety of diseases, poor information diets give us new forms of ignorance’. Having done so he also critiqued this view for portraying citizens as being too passive and hence unable to ‘dabble in complex matters of media reform and government policy’ (Morozov, 2013, p. 284). 
Instead Morozov preferred Lippmann’s formulation of ‘multiple publics.’ These are seen as being ‘fluid, dynamic, and potentially fragile entities that don’t just discover issues of concern out ‘in nature’ but negotiate how such issues are to be defined and articulated; issues create publics as much as publics create issues’ (2013, p. 287). Morozov’s work confirmed what some have critiqued for some time – namely that the apparent ‘success’ of Silicon Valley, its entrepreneurs and, of course, the Internet oligarchs, arose out of a flawed and increasingly risky foundation. That ‘success’ for example depends on:
• Profoundly inadequate understandings of human identity and life;
• Thin and unhelpful notions of how private and public realms arise, exist and remain viable;
• Equally thin and unhelpful views of core concepts such as ‘communication’ and ‘progress’;
• An overwhelming tendency to elevate ‘technology’ to a far higher ontological status than it deserves or can support.

One of the ‘strands’ of this multi-themed critique is the tendency of Internet promoters to forget that the kind of ‘theory-free’ approaches to knowledge and action that they’d consciously or unconsciously adopted had a protracted and chequered history. It reflected the tendency, powerfully inscribed in American culture, of setting theory and reflection aside in favour of action and innovation. This is certainly one of the most credible drivers of the ‘GFC’ (Global Financial Crisis) meltdown. The fact is that those driving the ‘Internet explosion’ are ‘venerating a God of their own creation and live in denial’ of that fact (Morozov, 2013, p. 357). Morozov’s analysis supported some of the suggestions put forward by observers such as Taylor and Glenny, but also went beyond them. He sought a broad-based oppositional movement that called into question both the methods and the purposes of Silicon Valley. Part of this movement involves the conscious design and use of ‘transformational’ products. 
These are products that, instead of hiding and obscuring relationships, dependencies, costs and the like, reveal them as a condition of use. An example would be an electronic device that provides tangible feedback about the sources, types and costs of the energy being used. Some of these examples are reminiscent of Tony Fry’s attempts to counter what he calls ‘de-futuring’ by re-directing the evolution of the design professions (Fry, 2009). Such ‘post-Internet’ initiatives encourage people to ‘trace how these technologies are produced, what voices and ideologies are silenced in their production and dissemination, and how the marketing literature surrounding these technologies taps into the zeitgeist to make them look inevitable’ (Morozov, 2013, p. 356). A further characteristic of Morozov’s (2013, p. 357) approach is that ‘it deflates the shallow and historically illiterate accounts that dominate so much of our technology debate and opens them to much more varied, rich and historically important experiences’. Finally, Morozov (2013, p. 358) was at pains to remind us that ‘technology is not the enemy,’ rather, ‘our enemy is the romantic and revolutionary problem solver who lives within’. This neatly turned the discussion back onto broader questions regarding the constitution of human needs, wants etc. This ‘take-away’ message is strikingly similar to that set out in The Biggest Wake-Up Call in History (Slaughter, 2010).

Critique and transformation

The sections above considered works that focus primarily on IT, the Internet and associated matters. Rushkoff’s approach differs in that his focus is not primarily on IT per se but on the ways that society and business have unthinkingly extended industrial practices well beyond their use-by date, supercharged unsustainable growth and missed the most positive opportunities that arise from digitisation (Rushkoff, 2016). 
In his view industrial innovations operated over time to disconnect people from the value chains that their labour helped create. Today’s monopoly platforms, supported by centralised currencies, have taken this process to extremes. Hence, ‘the digital landscape so effectively monopolises economic activity that most people have nothing left to be extracted’ (Rushkoff, 2016). Consequently ‘social media companies grow at the expense of their users’ (Rushkoff, 2016, pp. 33-4). The process is also counterproductive because it leads to an unsustainable endgame, namely ‘an economy based entirely on marketing and advertising’ (Rushkoff, 2016, p. 36). Rushkoff reminds us that Daniel Bell’s earlier work on the ‘information society’ went well beyond purely technical issues. Among the latter’s suggestions was that ‘technical progress’ should be balanced by what he called ‘up-graded political institutions’ (Rushkoff, 2016, p. 53). Clearly that did not occur, but many of Rushkoff’s recommendations for dealing with 21st century problems do serve to refocus attention on institutional change and transformation. Moreover these are to be guided, in part, by what he calls a ‘recovery of values’ (a topic that is explored further below). The modus operandi of platform monopolies like Uber and Amazon is seen as detrimental since neither accepts any obligation to uphold the public good. In fact both rose to prominence by destroying and replacing pre-existing industries (taxi firms and publishing). A way forward, in his terminology, is to ‘re-code’ or reinvent the corporation – which is obviously easier said than done. 
The author does, however, make a strong case for creating what he calls ‘steady-state enterprises’ through engaging strategies such as:
• Get over growth (focus on sustainable equilibrium);
• Take a hybrid approach (commercial and more ‘distributed’);
• Change shareholder mentality (addressing social and sustainability concerns);
• Shift to a new operating system (revise and re-design the corporation).

For Rushkoff the central flaw of ‘runaway capitalism’ is the notion that ‘more profit equals more prosperity’ whereas in his view ‘non-profits’ (such as Mozilla) may be better adapted for a digital future. The important thing is to ‘re-write the rules of the growth game itself’ (Rushkoff, 2016, pp. 121-3). Much of the rest of the book deals with the nature of money. He is particularly critical of the dominance of centralised currencies – which he regards as ‘the core mechanism of the growth trap’ – and insists that ‘we can program money differently’ (Rushkoff, 2016, pp. 132-8). One of his most original suggestions is that money should be optimised not for growth but for ‘velocity.’ He makes a strong case for using existing, and designing new, ways to ‘slow’ money down so that it can circulate more productively. Local exchange trading systems (LETS) are one way to do this and, despite its ‘brittleness,’ emerging blockchain technology may be another. Rushkoff (2016, p. 153) then brings a key suggestion to the table when he writes that ‘reprogramming money requires less digital technology than digital thinking and purpose’. This is a crucial point that supports a central claim of this book, namely that the power of technology needs to be matched by the wider, broader, deeper powers of understanding and insight that are available but sadly lacking in the culture of Silicon Valley (Slaughter, 2015a). For example, in this context we need to consider what kinds of money (plural) are needed. 
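The contrast between money optimised for accumulation and money optimised for circulation can be made concrete with a toy simulation. This sketch is my own illustration, not drawn from Rushkoff's book: a fixed money supply changes hands once per round, and in the "extractive" regime a platform siphons a share of every transaction into a hoard that never re-enters circulation.

```python
# Toy sketch (illustrative only, not Rushkoff's model): measuring the
# "velocity" of a fixed money supply under extractive vs. circulating regimes.

def simulate(rounds, supply, extraction_rate):
    """Return (velocity, hoard) after `rounds` of circulation.

    Velocity here means total transaction volume divided by the money
    supply, i.e. how many times the average unit of money changed hands."""
    active = float(supply)   # money still circulating among traders
    hoard = 0.0              # money withdrawn by the platform
    volume = 0.0             # cumulative value of all transactions
    for _ in range(rounds):
        volume += active                 # every active unit is spent once
        siphoned = active * extraction_rate
        hoard += siphoned                # extracted value stops circulating
        active -= siphoned
    return volume / supply, hoard

v_extractive, hoard = simulate(rounds=12, supply=1000.0, extraction_rate=0.10)
v_local, _ = simulate(rounds=12, supply=1000.0, extraction_rate=0.0)
# v_local is exactly 12.0; v_extractive is roughly 7.18, with about 72% of
# the supply sitting idle in the platform's hoard by the end.
```

Crude as it is, the sketch shows why a currency designed for circulation rather than accumulation supports more economic activity from the same money supply, which is the intuition behind optimising money for 'velocity' rather than growth.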
Local currencies make sense in some places, virtual bartering systems (‘free money’) in others and co-operative currencies in still others. Equally the existing heavy trend toward monopoly platforms designed for growth and for humanly extractive business methods can be replaced by what he calls ‘platform cooperatives.’ Models of the latter are said to already exist in Ecuador and in Spain’s well-known Mondragon Collective. At least two broad considerations appear to support Rushkoff’s proposals. One is the sheer dysfunctionality of an economic system built on growth, extraction and exploitation, a system that works for a shrinking minority. The other is the growing influence of positive values that depart from this increasingly risky and over-extended model and that suggest viable ways forward. Readers will likely have their own list of candidates but those mentioned here include: women’s equality, integrative medicine, worker ownership and local currencies. Finally, he suggests that a ‘genuinely digital, distributist business’ would:
• amplify value creation from everywhere;
• obsolesce centralised monopolies;
• retrieve the values of the medieval marketplace (inexpensive exchange between peers); and, in the long run perhaps
• seek some sort of collective or spiritual awareness (Rushkoff, 2016, pp. 237-8).

In summary, what Rushkoff hopes to see is a wide range of social, organisational and related innovations that are informed by digital understanding but strongly oriented toward more productive human and social purposes.

Summary

Mayer-Schonberger and Cukier’s Big Data (Mayer-Schonberger & Cukier, 2013) demonstrated some of the pitfalls of taking an overly one-sided view of something as powerful as big data. Used carefully, with restraint and effective oversight, it certainly has a variety of helpful uses. Used carelessly and in covert, dishonest ways, it readily becomes a tool of domination and control. 
Taylor’s The People’s Platform (Taylor, 2014) offered a fresh way of looking at IT in general and a comprehensive list of ‘desirable actions,’ many of which could be readily undertaken with political and social will and enabled with appropriate organisational support. Glenny’s tour of the ‘dark side’ (Glenny, 2011) shed light on a widely felt but often ignored or denied reality: the human, organisational and technical means through which the integrity of the early Internet was compromised. It drew attention to the fact that technical arrangements draw their life, significance, meaning and both positive and negative capabilities from human traits and cultural values. It therefore again demonstrated that these wider, deeper factors – rather than servers and ISPs – powerfully affect the underlying foundations and operational structure of the Internet. Morozov’s To Save Everything Click Here (Morozov, 2013) arguably set new critical standards and helped to create a more robust and capable discourse for dealing in depth with many of the issues raised here. He articulated a strong case for intelligent opposition to ‘solutionism’ and what might be called ‘Internet-centricity.’ As such his work provided an in-depth appreciation of the IT revolution and the need for ways of influencing it for the wider good. Finally, Rushkoff (2016) joined other contributors in demonstrating how redundant values and skewed power relations create adverse outcomes when expressed through digital technologies such as monopoly platforms, related social media and mis-named ‘sharing economies’. He also showed how, in their own terms, they lead to arid, self-defeating social and economic consequences. But, importantly, he also sees many positive opportunities. He demonstrates that other options can be envisaged, some of which already exist in one form or another. 
Alternatives emerge from adopting constructive values, ‘re-coding’ organisations, developing new kinds of money and evolving new or renewed social and organisational forms. His work also serves to confirm the two assumptions that underpin this work. He demonstrates the practical utility of perspectives that look beyond technologies as such to embrace richer worlds of significance and meaning. Despite the power and wealth of dominant IT-based Silicon Valley mega-corporations, they may not be as durable as they seem. Despite their current success most will at some point have to confront the fact that they are founded on a worldview and a set of values derived from the most problematic and short-sighted form of economic organisation that has ever existed (Ramos, 2011; Ehrlich & Ehrlich, 2013; Klein, 2014). To retain legitimation such organisations deny or obscure the fact that present forms of neoliberal techno-capitalism are poorly adapted to human needs and the reality of planetary limits (Slaughter, 2015b). Certain core operating assumptions dictate the way the system operates and powerfully shape and condition many of its products and services. These include the ‘freeing’ of markets from effective oversight and government regulation, the pursuit of ‘growth’ as an unquestioned goal, viewing the natural world instrumentally as merely a set of resources for human transformation and use, and a demeaning view of human beings as consumers or pawns. One result has been the concentration of wealth into the hands of ever fewer individuals and groups (Piketty, 2015). This is a state of affairs that cannot continue indefinitely.

Conclusion

If human societies wish to protect the wellsprings of life, culture and meaning they will need to limit the wealth, power and reach of the Internet oligarchs. Collective courage and resolve will be required to re-frame ‘the Internet’ and free the ubiquitous algorithm from their grasp. 
Ways in which it can be re-designed for more respectful and constructive uses are already beginning to appear (Hodson, 2016). This is quite obviously not a case of rejecting ‘technology’ wholesale but, as several authors considered above have suggested, of locating it within a broader frame of understanding and value. The latter will include ‘the market’ but not be dominated by its current reductive and out-dated economic framework. An indicative example of this could be the Tesla corporation, which has, in some ways, started to disrupt the comfortable world of the internet oligarchs by beating them at their own game. While it participates in mainstream projects such as the ‘self-driving car’ and ‘brain computer interfaces’ it is also investing in distributed power storage solutions that are already proving attractive around the world because they help solve a real and urgent problem. This shows that size and wealth do not necessarily preclude the development and production of truly useful innovations. It’s worth emphasising, however, that values do indeed sit at the core of everything. One of the most constructive options is therefore to understand and acknowledge how different values manifest, where they ‘fit’, so to speak, and how they are expressed in different environments. Hence the second chapter suggests that greater insight into values precedes effective action (Wilber, 2017; Slaughter, 2012). It brings to mind a worldview in which technologies have been subordinated to consciously chosen values. That is, the culture of the Kesh richly evoked by Ursula Le Guin in Always Coming Home (Le Guin, 1986). Here the uses of high technology are certainly acknowledged but also known to be dangerous. The solution adopted by the Kesh is that advanced technologies are treated with care. They are partitioned off into specific locations where they can be used as needed but where their influence is kept in check. 
Rather than pursue technical power wherever its owners and inherent tendencies may lead, the Kesh chose to bring ritual and meaning into the heart of their culture. We would do well to remember this example and to draw inspiration from it. Although embodied in fiction, it carries a vital message to our own time and culture.
The Internet of Things

Most people will have heard of various forthcoming ‘next big things’ such as ‘augmented reality,’ ‘self-driving cars’ and the ‘Internet of Things’ (IoT). Yet the chances are that they won’t have heard about them from personal or local sources since claims about their alleged benefits don’t originate there. Rather, such claims are the product of campaigns that originate elsewhere – that is, from a handful of the world’s most powerful organisations and their associates. As things stand, entire populations are regularly subjected to powerful marketing operations intended solely to prepare them for so-called miraculous new services that no one has ever wanted or needed. As Morozov and others have suggested, ‘the Internet’ is a domain where numerous ‘solutions’ are offered for problems that currently do not exist – a phenomenon he calls ‘solutionism’ (Morozov, 2013). Hence it is difficult to find credible evidence of any real ‘demand’ for an IoT. Rather, it is all about power and accumulation on a vast scale. Powerful organisations insist that these latest innovations are inevitable. They claim that ‘the genie is already out of the bottle’ without offering any plausible account of what this ‘genie’ actually is or what kind of ‘bottle’ it may have escaped from. Subtlety and depth of meaning are uncommon in such claims. Superficial, overly positive views about high-tech innovation, however, not only reflect the pretentious assumptions behind them, they also speak volumes about the overriding self-serving priorities of the organisations involved. Yet there should be no doubt that the innovation ‘push’ model is certainly disruptive and frequently dangerous. The reasons are straightforward – it constantly injects random elements into complex social systems that are then forced to adapt, often at considerable cost to people, professions and organisations at large. 
Reflecting on the 2016 US election one observer commented that:

We have fetishised “disruption”. Governments have stood by and watched it take down all industries in its path – the market must do what the market must do. Only now, the wave is breaking on its shore. Because what the last week of this presidential campaign has shown us is that technology has disrupted, is disrupting, is threatening to upturn the democratic process itself – the best, most stable, most equitable form of governance the world has yet come up with (Cadwalladr, 2016).

Despite this malaise an IoT per se should not necessarily be considered a category mistake. Well-designed devices installed in robust networks with exacting technical and safety standards would have a variety of uses. A host of specialised applications can be readily envisaged in education, surgery, disaster management and so on. The elderly, disabled and sick could gain greater autonomy and enhanced capability to run their own lives. Potentially positive uses like these may well be unlimited. But the dangers and costs of the IoT as envisaged by the power hungry appear to outweigh these benefits. Standing behind the seductive merchandising are questions such as: who is promoting the IoT? Who stands to gain and who will lose? Can we be sure that it will protect privacy and enhance human wellbeing or will it further erode both? Answering the ‘who’ question is straightforward. The main drivers and beneficiaries of this particular ‘radically transformative innovation’ are the corporate tech giants from Silicon Valley, their like-minded associates and high-tech manufacturers ever on the lookout for new markets. They share an expansionist worldview that continues to go virtually unchallenged. In fact, following the 2016 US presidential election the Neo-Conservative ascendancy was reinvigorated. 
Central to its ideology is an assumption that equates ‘progress’ with single-minded technical innovation and development. Such a view, however, works against shared interests as it arguably rests on category errors and inadequate views of culture, human identity and human autonomy. Such limitations and costs were perhaps best expressed by Lewis Mumford, who declared that: ‘I have taken life itself to be the primary phenomenon, and creativity, rather than the “conquest of nature,” as the ultimate criterion of man’s biological and cultural success’ (Mumford, 1971). He would, of course, be unemployable in Silicon Valley. The Neo-Conservative revival occurred not because Trump supported Neo-Conservatism directly. His antagonism towards it is well known. Nor is it because Silicon Valley has entirely abandoned its leaning toward Libertarian values. In the former case, it is rather that a rich minority has thrived under Trump, one that remains deeply immersed in the ‘Neo-Con’ world from which it continues to derive significant financial and other benefits. In the latter case, Silicon Valley exhibits a profound disconnect from Democratic politics and the growing social costs of its own activities. The Neo-Cons therefore remain free to go about their business in the absence of any serious constraints.

Disruptions and consequences

In some ways the high-tech sector resembles a wayward child that challenges authority and ignores boundaries. So it is unsurprising that, as existing product categories become saturated, it seeks to invent new ones. But what’s good for Internet oligarchs and giant corporations may not be good for everyone else. Long before the IT revolution, informed observers such as C.S. Lewis, Ivan Illich, E.F. Schumacher and many others understood that the ‘conquest of nature’ has a nasty habit of rebounding on people by compromising their humanity and riding roughshod over their rights. 
The entire high-tech sector has expanded rapidly over recent decades and, as a result, many of the organisations involved have become financially wealthy. But if they are not rich in humanity, perceptiveness, or the ability to sustain people or cultures, then this becomes an empty and regressive form of wealth. The high-tech sector has exhibited a dangerous and apparently unquenchable obsession with ‘inventing the future backwards.’ That’s to say, it pours millions into speculative technical operations with little thought as to whether the outputs are necessary or helpful. There’s an abiding preoccupation with beating the immediate competition (including other high-tech behemoths) regardless of other considerations. Many will remember how the ‘information superhighway’ evoked images of openness, safety, productivity and social benefits spread far and wide. A range of new tools certainly came into wide use. Information on virtually any topic became almost instantly available. Useful knowledge is another matter entirely and wisdom may be the scarcest resource of all. None of the above can be blamed on the Internet pioneers who built early versions of these systems and devices. Many appear to have believed that what they were doing was useful and constructive (Taplin, 2017). Unfortunately, once the new tools were released into wide use the aims, ambitions, values and so on of the pioneers counted for little. New, poorly understood, world-shaping forces came into play. Yet the power apparently granted to the latter does not, in fact, reside with innovators and disruptors. In a more considered view it resides in the domain of ‘the social’ from which countervailing power (for example in the form of sanctions or legitimation) may eventually arise.

The entrepreneurial marketplace and a new arms race

In the meantime, left to the vagaries of ‘the market,’ further waves of high-tech innovation will continue to generate highly polarised consequences. 
It doesn’t really matter what the high-tech gurus and the Internet oligarchs like to claim at any particular time in terms of the efficacy and usefulness of new products and services. Nor does it matter how glossy the marketing, how many times stimulating or provocative TED talks are viewed on YouTube or how enticing the promises appear. The very last entities to entrust with the future of humanity and its world are those who make ‘innovation’ their ultimate value and selling their core profession. High-tech promises based on pragmatic, utilitarian and commercial values overlook or omit so much that’s vital to people and societies that they have little or no chance of creating or sustaining open and egalitarian societies. (The ideology of ‘value-free technology’ is discussed below.) Proponents of the IoT, however, seek to convince the public that it will be widely useful. Homes can be equipped to respond to every need, whim and requirement. Owners won’t need to be physically present since they can communicate remotely with their home server. What could possibly go wrong? The honest answer is: just about everything. Perhaps the greatest weakness and enduring flaw in the IoT is this: connecting devices together is one thing, but securing them is quite another. As one well-qualified observer put it, ‘IoT devices are coming in with security flaws which were out-of-date ten years ago’ (Palmer, 2016). Naughton (2016) acknowledges that ‘there’s a lot to be said for a properly networked world.’ He adds, ‘what we’ve got at the moment, however, is something very different – the disjointed incrementalism of an entrepreneurial marketplace.’ He continues:

There are thousands of insecure IoT products already out there. If our networked future is built on such dodgy foundations, current levels of chronic online insecurity will come to look like a golden age. 
The looming Dystopia can be avoided, but only by concerted action by governments, major companies and technical standards bodies (Naughton, 2016).

Even now private e-mail cannot be considered secure. One slip, one accidental click on a nasty link, can initiate a cascade of unwelcome consequences. There’s no reason to believe that anyone’s wired-up electronic cocoon will be any different. Consider this: a creepy Russian website was allowing users to watch more than 73,000 live streams from unsecured baby monitors (Mendelsohn, 2016). In the absence of careful and effective system-wide redesign what remains of our privacy may well disappear. First world societies are on the cusp of being caught up in the classic unwinnable dialectic of an offensive/defensive arms race. Currently, few understand this with sufficient clarity. It’s therefore likely that many will continue to sign up for this new, interconnected fantasy world with little or no idea of the dangers involved or the precautions required. Some will ask why they were not warned. The fact is that such warnings have been plentiful but have fallen upon deaf ears.

Internet ideology

No discussion of the Internet and its pervasive effects is complete without reference to a persistent – some would say extreme – view that technology is ‘value free.’ Technology is said to be ‘neutral’; what matters is how it is applied. This represents a distinct philosophical position supporting a specific worldview that eludes many, especially in the U.S. where such issues tend to remain occluded. So it’s not surprising that the limitations, not to say defects, of such a view are, on the whole, seen more clearly beyond the U.S. and far removed from Silicon Valley (Beck, 1999). For those who have absorbed the pre-conscious assumptions of U.S. culture the ‘IT revolution’ and its products are more likely to be described in glowingly positive terms (tinged, of course, with varying degrees of national self-interest). 
Yet such views are far from universal. Wherever healthy forms of scepticism thrive it’s obvious that information processing – once restricted to the world of machines – has already colonised the interior spaces of everyday life to an unwise extent (see Zuboff, 2015, below). Allowing it to penetrate ever further into human life is clearly fraught with adverse consequences. Greenfield (2017) has considered how these processes operate at three scales: the human body, the home and public spaces. To take just one example, in his view the rise of ‘digital assistants’ … ‘fosters an approach to the world that is literally thoughtless, leaving users disinclined to sit out any prolonged frustration of desire, and ever less critical about the processes that result in gratification’ (Greenfield, 2017). They operate surreptitiously in the background according to the logic of ‘preemptive capture.’ The services they offer are designed to provide the companies concerned with ‘disproportionate benefits’ through the unregulated acquisition (theft) of personal data. Lying behind such operational factors, however, is ‘a clear philosophical position, even a worldview … that the world is in principle perfectly knowable, its contents enumerable and their relations capable of being meaningfully encoded in a technical system, without bias or distortion.’ When applied to cities Greenfield regards this as: Effectively an argument that there is one and only one correct solution to each identified need; that this solution can be arrived at algorithmically, via the operations of a technical system furnished with the proper inputs; and that this solution is something that can be encoded in public policy, without distortion (Greenfield, 2017). Hence ‘every aspect of this argument is questionable.’ Similarly, the view that ‘anything at all is perfectly knowable’ he regards as perverse since so many aspects of individual and collective life cannot be reduced to digital data. 
Differences of value, identity, purpose, meaning, interest and interpretation – the very attributes that make human life so rich and varied – are overlooked or eliminated. It follows that: The bold claim of ‘perfect’ knowledge appears incompatible with the messy reality of all known information-processing systems, the human individuals and institutions that make use of them and, more broadly, with the world as we experience it. In fact, it is astonishing that any experienced engineer would ever be so unwary as to claim perfection on behalf of any computational system, no matter how powerful (Greenfield, 2017). In summary, claims for ‘perfect confidence’ in the social applications of digital systems are ‘incommensurate with everything we know about how technical systems work.’ In other words the dominant ideology behind the rapid expansion of the IoT and related systems is clearly unfit for many of the purposes to which it is currently being applied. Or, to put this differently, ‘hard’ empiricism involves systemic reductionism that works directly against the wider human and social interests outlined above.

Fiction informs foresight

It’s no secret that high-tech nightmares exploring the dark side of ‘progress’ have been a staple of science fiction (SF) for well over a century. Far from being idly ‘negative’ they can be viewed as useful reminders not to proceed too far or too fast with these powerful, seductively networked technologies. H.G. Wells attempted an early expression of this concern in his 1895 novel The Time Machine in the contrasts he drew between the effete and vulnerable Eloi and the brutal Morlocks (Wells, 1895). Then in 1909 E.M. Forster made an even more deliberate attempt to identify the likely effects of becoming over-dependent on technology in his novella The Machine Stops (Forster, 1909). More than a century later it still carries a forceful message that is both credible and explicit. Then, in the early 1970s, J.G.
Ballard began his decades-long explorations of ennui and decay in the ruins of high-tech environments – the abandoned high-rise, the empty swimming pool and so on. One of the most evocative is a short story in his 1973 collection Vermilion Sands called ‘The thousand dreams of stellavista’ (Ballard, 1973). It portrays a house constructed to exquisitely mirror the needs of its inhabitants in real time. Unfortunately it turns out that a previous occupant was insane. Over time the house begins to exhibit similar symptoms – which places later owners in peril of their lives. This is obviously not merely a metaphor. Daniel Suarez’ Daemon picks up the familiar theme of runaway technology and gives it a powerful new twist. He draws on a wealth of information technology (IT) know-how to explore how a dormant entity – or daemon – is activated, becoming a self-replicating, highly destructive virtual menace (Suarez, 2010). Finally Dave Eggers’ prescient 2013 novel The Circle brings the story up to date in a highly relevant and insightful critique of the digital utopianism that arguably characterises the current thinking and practice of IT corporations (Eggers, 2013). It’s a salutary tale in which human ideals become subordinated to an ever more dominating technical infrastructure. This is, of course, only a small sample of a vast literature exploring almost every aspect of technological dystopias. Futurists and foresight practitioners often recognise such sources as essential background. But they also earn their living by scanning the environment for more specific and empirically based ‘signals of change.’ The art and science of ‘environmental scanning’ is, however, arguably more advanced in theory than it is in broad, commonly accepted, practice. In terms of social governance in a digital era, this is a serious oversight. Consequently the relative absence of high quality foresight places entire societies at significantly greater risk than they need to be.
Here, for example, are a couple of ‘scanning hits’ on surveillance and the IoT. “The Internet of Things (IoT) has particular security and privacy problems…it affect[s] the physical world, sometimes controlling critical infrastructure, and sometimes gathering very private information about individuals” (Seitz, 2015). And again, the IoT “network is responsible for collecting, processing, and analyzing all the information that passes through the network to make decisions, in other words, millions of devices permanently connected to the Internet act and interact intelligently with each other to feed and benefit thousands of applications that are also connected to the network” (Alvarez, 2021). It will be a two-way street. Internet of Things transactions linked to the same identifier are traceable and ultimately make people traceable too; hence their privacy is threatened. According to Ball (2016), this consumer surveillance is an act of corporate power, attempting to align individual preferences with corporate goals. This can be seen in the increasingly widespread practice of customer surveillance in stores (and other points of sale) when people unwittingly accept offers such as ‘free Wi-Fi.’ In so doing they agree to ‘terms of use’ that they neither read nor understand. This is clearly analogous to where entire societies now stand in relation to the IoT – the actual ‘terms of use’ remain out of sight and unavailable to all but the most persistent and technically adept.

A plausible trajectory

During these dangerous and uncertain times much is at stake – not least of which is how to manage a world severely out of balance. More competent, imaginative and far-sighted leadership would help, as would a growing society-wide resistance to the values and, indeed, many of the products, of the high-tech giants. Strategies of this kind would contribute toward a thorough re-appraisal of various pathways toward viable futures (Floyd & Slaughter, 2012).
Those who are fortunate enough to be living in still-affluent areas are being taken on a ride intended to distract them, to still their growing fears for the future, through the many diversions provided by new generations of technological devices. But the above suggests that it’s time to push back and seek answers to questions such as the following:

• Does it make sense to accept the current, deeply flawed, vision of the IoT that promises so much but ticks so few essential boxes, especially in relation to privacy and security?
• Are whole populations really willing to passively submit to a technical and economic order that grows more dangerous and Dystopian with each passing year?
• To what extent should time, resources and attention be focused on the kinds of long-term solutions that preserve human and social options? (Slaughter, 2015).

If things continue to proceed along the present trajectory the system is likely to misbehave, to be hacked or militarised, to fail just when it needs to work faultlessly. In this eventuality domestic users may start backing out and rediscovering the virtues of earlier analogue solutions. Although simpler and less flexible, the latter could gain new appeal since they lack the ability to exact hidden costs and turn people’s lives upside down in unpredictable ways. Some might well opt wholesale for a simpler life (Kingsnorth, 2017). Early adopters of the IoT are, however, not restricted to householders. They include businesses, government agencies and public utilities. It is often forgotten that the latter are structurally predisposed toward greater socio-political complexity – which also contributes to the growth imperative. Thus, according to Tainter, large-scale organisations are unlikely to pursue deliberate simplification strategies while at the same time becoming increasingly vulnerable to collapse (Tainter, 1988).
Given the overall lack of effective social foresight, as well as the parlous state of government oversight in general, present modes of implementation may proceed unabated for some time. Security breaches on an unprecedented scale would then take place, disruptions to essential services would occur and privacy for many would all but vanish. The costs would be painful but they would also constitute a series of ‘social learning experiences’ par excellence. At that point serious efforts to raise standards and secure the IoT would become unavoidable.
Farewell to driving?

The advent of ‘driverless cars’ has been regularly announced for some time. The term refers to one type of ‘autonomous vehicle’ (AV) being tested on the streets of various cities. Others are operating in closed environments such as mines and industrial sites. Airports have used ‘autonomous trains’ for some time, safely moving thousands of passengers from one terminal to another. Road testing of city-to-city AV fleets is not far behind. Such vehicles are another in a series of ‘disruptive technologies’ whose benefits are said to outweigh the possible costs. It’s claimed that the current system of independent vehicles driven by fallible humans is so expensive, dangerous and out-dated that it needs to be replaced. At first glance, it’s not hard to see why. An AV-based system could be more efficient, less wasteful and safer. The outlook appears sufficiently compelling that the longer-term goal of creating fully automated systems is being widely debated and planned for. Several levels of autonomy are envisaged. At level one, single functions are carried out by the vehicle in restricted circumstances. At level two, the vehicle can operate multiple functions with the driver actively monitoring. At level three, the vehicle can cover all driving functions but refer back to the driver if / when needed. At this level, however, the ‘hands off’ issue becomes a safety concern. So at level four, complete vehicle autonomy within system-wide limits becomes the preferred goal. At level five, in-vehicle systems replace all driving functions in any circumstances, indicating true autonomy (King, 2017). Such apparently positive conclusions appear to be supported by World Health Organisation (WHO) statistics that recorded a staggering 1.25 million road deaths in 2015 alone (WHO, 2015).
Or, as one writer put it: ‘the only difference between a human driver and a machine driver is the speed and accuracy of perception and reaction, and the machine wins that one easily’ (Walsh, 2016). This is one of several arguments. Others include the following. If AVs were to become standard then chaotic and crowded road transport systems might well be rationalised. Traffic jams could become a thing of the past. Car ownership per se would decline since fewer vehicles would be needed. Roads could be smaller and less intrusive. The space in cities presently devoted to parking would be reduced, making these same areas available for other uses. Then again, since the new AVs would run on electric power there’d be an increase in energy efficiency with corresponding reductions in exhaust fumes and pollution. (That noted, the makeup of energy systems – coal, oil, gas, nuclear vs. renewables – used to power electric vehicles would obviously have a significant impact on the overall energy profile.) From a popular viewpoint, cities could return to being ‘clean and green.’ On the other hand all these assumed benefits turn out to be highly contestable. For example, it’s doubtful if such a multi-dimensional transition could occur as quickly as proponents suggest. Then there’s the huge question of costs – not only to manufacture smaller, lighter batteries but also to drive down the cost of the sophisticated electronics such vehicles require. Equally, the question of complexity has barely figured in current narratives. But it will take heroic levels of reliability to keep such vehicles operating safely. There’s also another side to this story.

Unemployment and the myth of perpetual Internet reliability

The most obvious and immediate drawback is the rapid decline in employment for large numbers of people who currently earn a living through driving.
In the UK, for example, there are close to 300,000 Heavy Goods Vehicle (HGV) drivers alone, most of whose jobs would disappear (Ashley, 2017). And this is without counting bus and taxi drivers. Yet little is heard from policy makers or AV promoters about these deteriorating prospects. John Harris describes the issue like this: There are 3.5 million truck drivers in the US, as well as 233,000 cab drivers (an official estimate, which seems low), 330,000 Uber drivers and 660,000 bus drivers. In the UK, at the last count, there were 297,600 taxi or private-hire-vehicle driver licenses in England alone, and 600,000 people are registered drivers of heavy goods vehicles. The traditional logic of the job market has made sitting behind a wheel a fallback option – if all else fails, you can always drive a cab. But no more… (Harris, 2016a). The beginnings of a solution are likely to involve income redistribution on a wide scale. Proposals for a social innovation – a universal basic income (UBI) – to reduce the strain on what Paul Mason calls ‘the precariat’ crop up occasionally but are a long way from being implemented (Mason, 2016). The political will is minimal, the economics challenging and the issues complex. Yet it’s fair to say that little could be further from the minds of those who favour the introduction of AVs. While the distribution of wealth grows ever more unequal, measures to moderate such extremes are few and far between. These are matters of real public concern. Yet industry innovators, and those who speak for them, remain preoccupied with technical issues. So they don’t view the structural decline in employment and a corresponding rise in public unrest as any concern of theirs. They are focused on capturing as large a slice as possible of emerging markets. So questions like ‘should we do this?’ give way to ‘can we do this, how fast and where?’ Framing issues in such ways certainly simplifies things.
Yet the single-minded pursuit of ‘innovation’ on the one hand, while ignoring wider consequences on the other, de-legitimises any pretence to objectivity or detachment. Acknowledging and understanding these links therefore becomes a vital public concern. More people would then appreciate the extent to which corporate and social interests have been poorly aligned for many years (Higgs, 2014; Klein, 2007; Klein, 2014; Bakan, 2004). It was suggested above that privileging technological innovation above all else looks increasingly like a dangerous mistake. On the other hand, costs and disruptions can be moderated or prevented if they are detected and publicised in good time. This is obviously one of the key functions of high quality foresight work in the public interest. If and when the political will is found, more equitable solutions can emerge. There is, however, no ready-made solution to what may be the Achilles heel of all AV systems – their dependence on perpetual Internet integrity. At the very time when key players are preparing for ubiquitous cyber warfare, the faultless continuity of IT-related systems remains a convenient myth. In this view, complexity becomes a social trap and reliable security a delusion. Yet, as things stand, the pragmatic worldview and raw instrumental power of the main players suggest that they will push ahead regardless. They’re uninterested in permission, regulation or negotiating any diversion from the humanly tragic and debased futures they are creating (Harari, 2015).

Systems rationality, artificial intelligence, privacy

Since most governments lack even the rudimentary means to evaluate the emerging tides of new technology – let alone make informed decisions about their social implications – the question of who will take responsibility for large-scale breakdowns, power-outages and disruptions, whether caused by actual accidents or by malign cyber-attacks, remains open.
What is clear is that to the extent that AV systems are progressively installed, the torrent of data that they’ll require and generate will become too vast and complex for humans to manage. New levels of automation capable of processing vast amounts of ‘big data’ in real time will be needed. Human control over these systems will therefore diminish. Humanity will have taken another step toward the era of ‘systems rationality’ where notions like ‘autonomy’ and ‘choice’ become meaningless. One option that could be explored as an alternative to a full-on ‘big data’ scenario reflects the difference between artificial intelligence (AI) and Intelligence Amplification (IA). In the former case the goal is to replace human intelligence with machine equivalents, whereas in the latter it is to augment human capabilities. Driver-assisted vehicles are not merely less threatening and problematic; they already exist in significant numbers. So it may be possible to explore a similar process of augmenting human capability and, in so doing, bypass some of the hurdles mentioned here. Yet this is by no means a foregone conclusion. Within a ‘growth at all costs’ corporate worldview optimal solutions appear less appealing than grand visions in which limits have little or no place. Currently we’re a long way from figuring out how society as a whole can begin to deal with the unending flow of data. Effective AV systems would necessarily be designed to eliminate as much uncertainty, ambiguity and choice as possible. They would record the full details of each and every trip, making it possible for anyone with access to know exactly where and when people have been. Unlike with today’s smart phones, whose ‘tracking services’ can still be switched off, no such option would be available. Some criminal activities (such as car theft) might decline but at the cost of ratcheting up the level of surveillance to unprecedented levels. One observer sees it this way.
He writes: Shrouded in secrecy, swallowed up by complexity and scale, the world is hurtling toward a new transnational electro-dystopia … Localisation doesn’t matter that much. The Chinese Internet model and the American giant server farms are proof of the dangerous fact that digital automation is inherently coupled with the efficiencies of integrated centralisation and control (Keane, 2015, p. 33).

AVs are safer for whom?

The issue of safety is one of the key drivers behind the emergence of AV technology. Yet the conversation thus far has taken place within an affluent ‘first world ghetto.’ It’s here that the finance is available and the greatest rewards are expected. Yet the closer one looks the more the whole process appears to have more to do with greed than with need. So it’s worth asking a different question – where are these promised new levels of safety most needed? The answer is – in the very places where they are least likely to occur. The WHO (2015) statistics on road deaths make this clear. The following sample is for deaths per 100,000 people in 2013.

Table 1: Road deaths per 100,000 people in 2013

Central African Republic – 32.4
Democratic Republic of Congo – 33.2
Germany – 4.3
Iran – 32.1
Libya – 73.4
Netherlands – 3.8
Norway – 3.8
Rwanda – 32.1
Singapore – 3.6
Sweden – 2.8
Thailand – 36.2
United Kingdom – 2.9

If, in this already one-sided technical view, part of the ‘value proposition’ is that ‘human life is valuable therefore we should reduce the road toll’ then it’s clear that countries with the greatest need for technical assistance are the least likely to get it. The unfortunate truth is that there’s little or no profit to be made from poor and destitute nations. Hence the argument about ‘making driving safer’ clearly rests on ‘first world’ privilege. It depends on (a) excluding the poorest nations and (b) therefore ramping up even further the already unsustainable gulf that exists between the rich and the poor.
So far as the corporates are concerned poor people can continue dying in their thousands so long as they gain access to the most profitable markets. Obscured by the growing chorus of approval for AVs in the rich West, this sad reality has been widely overlooked. Yet its antecedents are well understood. They were described a decade ago, for example, in Klein’s detailed account of what she called ‘the rise of disaster capitalism’ (Klein, 2007).

Summary

This section has argued that any thoroughgoing implementation of AV technology would bring with it very significant costs. These include:

• mass unemployment and few serious attempts to deal with it;
• the further erosion of privacy;
• an impossible commitment to the myth of perpetual Internet integrity;
• the assimilation of people, societies and cultures into a world dominated by machines and governed by the abstract demands of ‘systems rationality’; and,
• a further increase in the unsustainable gulf between rich and poor.

Rationales in favour of the rapid implementation of AVs are therefore not as persuasive as they may first appear. It follows that the rush to implementation needs to be slowed down and perhaps halted – at least for a while. This view is partly about values, including prudence and compassion. It strongly supports the view expressed in the previous section that new technologies should be seen and understood in their wider contexts. They are not merely ‘stuff’; each has human, social, cultural and geopolitical consequences, and positive, negative and ambiguous outcomes. The arguments and justifications put forward in favour of AVs thus far appear to depict issues in the simplest and most positive ways, obscuring alternatives and understating the wider costs. High-tech companies have become surprisingly casual about embarking not merely on one or two but a whole series of frankly outrageous projects that, at base, serve to re-shape the world in their own image.
But there’s sufficient evidence to take a stand against careless innovation with ramifying social consequences. It’s now clear that a high-tech world fashioned by and for the corporate sector becomes progressively less fit for people (Klein, 2014; Higgs, 2014; Harari, 2015). There are many other alternatives awaiting our collective attention (Alexander & McLeod, 2014; Rees, 2014; Floyd & Slaughter, 2012).
What drives Silicon Valley?

It was suggested above that well-grounded critique opens up new areas of insight that can inspire viable responses and inform policy-making (Slaughter, 2017). This chapter suggests that further insights can be gained from a better understanding of the human and cultural interiors of organisations and individuals. After all, it is from the interior dynamics of values and worldview commitments that real-world structures, innovations and consequences emerge into the light of day. Developmental psychology has opened up many ways of achieving greater clarity regarding interior structures and processes, and integral methods have proved particularly useful here. In brief, they embody a fusion of the work of many different people that helps us to understand more of what is occurring ‘beneath the surface’ of contested issues (Slaughter, 2010). They shed new light on some of the interior sources or ‘drivers’ that operate in Silicon Valley. An indicative example can be found in Mark Zuckerberg’s admonition to the staff of Facebook to ‘move fast and break things’, as it reveals much about both the man and his company. Jonathan Taplin draws on this statement to show how such imperatives arose within the specific conditions of American society and culture. Three influences can be mentioned here – Schumpeter’s notion of ‘creative destruction’, the normalisation of aggressive entrepreneurial practices and, last but by no means least, the pervasive influence of Ayn Rand’s radically individualistic right wing ideology (Taplin, 2017; Freedland, 2017). These are among the historical and social forces that created Facebook, Google and Amazon, among others, and helped them become what Rushkoff calls vast ‘monopoly platforms’ (Rushkoff, 2016). These organisations currently have as much, if not more, wealth and power than many national governments.
John Harris puts it like this: The orthodoxies of government and politics are so marginal to the way advanced economies work that if politicians fail to keep up, they simply get pushed aside…The amazing interactions many of them facilitate between people are now direct – with no role for any intermediate organisations, whether traditional retailers or the regulatory state. The result is a kind of anarchy, overseen by unaccountable monarchs: we engage with each other via eBay, Facebook and the rest, while the turbo-philanthropy of Mark Zuckerberg and Bill Gates superficially fills the moral vacuum that would once have pointed to oversight and regulation by the state (Harris, 2016b). Mason comments on what must be obvious to many: as monopolies they ‘should be broken up.’ He adds, ‘if Facebook were a bank, it could not exist; nor Google if it were a supermarket’ (Mason, 2017). In this view an underlying reason why that has not occurred is ‘the structure of hedge-fund-driven modern capitalism (which incentivises the creation of monopolies), together with political cronyism’ (Mason, 2017). Back in 2016 Facebook reportedly earned a cool US$8.8 billion and counted close to two billion people, or about half of the world’s Internet users, as its customers (Cadwalladr, 2017). Yet such gains also impose equally huge losses on publishers, newspapers, authors and a wide range of associated professions. Over time its customers become used to the dumbed-down alternatives that pour forth from countless unverified sources. Vital questions about where Facebook’s power ends, where its limits lie and to whom it is accountable have eluded successive U.S. governments that, at minimum, have failed to apply their own anti-trust rules and regulations. Inscrutable algorithms, deep penetration into the texture of so many human lives and vast wealth appear to make Facebook almost invulnerable to top-down intervention. There are, however, other possibilities.
While much attention has been paid to the wealth and apparent instrumental power of these organisations, rather less attention has been paid to investigating them from within, so to speak. Yet doing so reveals new ways of understanding them and perhaps reducing their dominance. Two previous examples of this kind of work are informative. One is Urry’s Societies Beyond Oil: Oil Dregs and Social Futures (Urry, 2013); another is Oreskes and Conway’s Merchants of Doubt (Oreskes & Conway, 2011). Urry deployed his considerable talent in ‘depth sociology’ to understand how ‘carbon interests’ became so powerful and was able to characterise the kinds of futures to which their continued dominance leads. Oreskes and Conway took on the cultural power of the exceptionally well-financed U.S. ‘climate denialist’ clique. They revealed in detail exactly where it started, the techniques and assets it employed and how careers were destroyed en route to establishing denialism as a continuing disruptive force in US political life. The point is this: when credible efforts are undertaken by well-qualified people to return some of these hidden interior phenomena back into the limelight there’s no turning back. The hand of autocratic power, money and influence is revealed. Motives, purposes and outcomes are identified and called into question. Importantly, in the present context, the knowledge so gained cannot be erased. This is, in other words, a fair and legitimate way for societies to recover from multiple failures of governance and to regain from the oligarchs what was never theirs in the first place – an assumed social licence to operate as they wish.

Integral perspectives and the Silicon Valley worldview

Integral methods can be used in many ways. Theorists and practitioners can plunge into them in such depth that their investigations become abstracted and lose touch with reality.
Here, as in previous work, they are employed lightly to reveal insights that can be taken up and used by virtually anyone. They use three sets of criteria: the four quadrants (windows on reality), four levels of worldview complexity and six value levels (Table 2). In earlier work some key reasons for applying Integral thinking were summarised thus: While most people and the vast majority of civil and commercial organisations around the world certainly appear to have benefitted in the short term from the vast expansion of on-line options and capabilities, a much darker picture is emerging. It concerns not only the extraordinary cultural and economic power being wielded but also the nature of the underlying worldview and values – which are the main foci here – and where these appear to lead (Slaughter 2015, p. 243).

Table 2: Summary of quadrants, worldviews and values by Slaughter (2012)

1. The four quadrants (or ‘windows’ on reality)
a. The upper left quadrant (the interior ‘world’ of human identity and self-reference);
b. The lower left quadrant (the interior ‘world’ of cultural identity and knowledge);
c. The upper right quadrant (the exterior ‘world’ of individual existence and behavior);
d. The lower right quadrant (the exterior world and physical universe).

2. Four levels of worldview complexity
a. Pre-conventional (survival and self-protection);
b. Conventional (socialised, passive, adherence to status quo);
c. Post-conventional (reflexive, open to complexity and change);
d. Integral (holistic, systemic, values all contributions, works across boundaries, disciplines and cultures).

3. Six value levels
a. Red (egocentric and exploitative);
b. Amber (absolutist and authoritarian);
c. Orange (multiplistic and strategic);
d. Green (relativistic and consensual);
e. Teal (systemic and integral);
f. Turquoise (holistic and ecological).
What became clear over time was that the Internet had morphed into something like an extreme version of Bentham’s Panopticon where individuals were routinely subjected to extreme surveillance. Today that merely looks like a first step as entire industries are now feeding off data traces routinely expropriated and on-sold for exploitation by the advertising industry (Zuboff, 2015). There’s little sense among the main players of any compassion, empathy or care for the higher goals or aspirations of humanity. “The dominant paradigm is one of covert exploitation, erosion of individual agency and autonomy, and a sheer lack of transparency and accountability, reminiscent of authoritarian dynamics rather than of a digital well-being with equal and active participation of informed citizens” (Christodoulou et al., 2021). What emerges overall is a picture of societies and cultures becoming hollowed out by extraordinary monopoly power and, at the same time, becoming increasingly polarised and angry. Many formerly proud professions are in decline, unemployment is rising and criminality penetrates even the most private spaces. A look at three key figures from Silicon Valley – Mark Zuckerberg, Ray Kurzweil and Google’s chief economist Hal Varian – helps make sense of this perverse reality. In the first case an interview published in Time magazine clearly revealed elements of Zuckerberg’s interior life. It showed, for example, that he is dismissive of external opinion and equates critique with ‘turning the clock back’. He denies that pervasive advertising is in any way ‘out of alignment’ with his customers and is ‘concerned with nuance and subtle shades of meaning only to the extent to which they are useful to him’ (Grossman, 2014). Within such a pragmatic and instrumental frame terms like ‘values’, ‘human nature’ and ‘society’ have little or no meaning. This is significant when the broad impacts of Facebook are considered.
Similar issues arose in relation to Kurzweil, a Director of Engineering at Google and well known for his views on the coming ‘singularity.’ This is supposedly a time when humanity merges with its technology and achieves a kind of disembodied immortality. There are fringe admirers, of course, who eagerly anticipate such ‘post-human’ futures. Yet a review of various accounts of this work strongly suggests that this perspective can be characterised as ‘high technology and hubris’ in about equal parts. Reductionism and category errors abound, for example, in Kurzweil’s ‘theory of mind’ where the vast complexity of the latter is reduced to mere ‘pattern recognition’ (Pensky, 2015). Another concern is the ‘constant conflation of biological evolution’ with ‘technical evolution.’ For Kurzweil ‘biological evolution, cultural development, and the advancement of computing technology are all part of the same immutable force.’ In this view, ‘the advance of technology is as inevitable as biological evolution’ (Pensky, 2015). When technology and biology are ‘plotted on the same graph’ we know that those who view the world this way are living in their own version of what has been called ‘flatland.’ Within that diminished frame what is manifestly missing is any appreciation of the power and influence of the interior worlds of individuals and cultures. Also significant is that from a structural interior standpoint the worldviews and values of these key figures are so similar. In terms of the categories outlined in Table 2 both appear to be driven by ‘red’ to ‘orange’ values and draw on conventional to inverted (incomplete or, more controversially, ‘unhealthy’) forms of post-conventional worldviews.

Zuboff’s critique of the ‘big other’

Shoshana Zuboff’s magisterial treatment of Google’s pursuit of ‘surveillance capitalism’ should be read in the original as it provides a paradigmatic example of an in-depth countervailing view (Zuboff, 2015).
Her article ‘Big Other’ takes the form of an extended critical response to, and evaluation of, material produced by Google’s chief economist Hal Varian. Zuboff supports the view taken above that: ‘big data is not a technology or an inevitable technology effect. It is not an autonomous process… It originates in the social, and it is there that we must find and know it’ (Zuboff, 2016, p. 75). This is a crucial point. She continues: ‘Big data’ is above all the foundational component in a deeply intentional and highly consequential new logic of accumulation that I call surveillance capitalism. This new form of information capitalism aims to predict and modify human behaviour as a means to produce revenue and market control (Zuboff, 2016, p. 75). Later in the piece she contrasts Varian’s technocratic vision with the more nuanced, humanistic view offered by Hannah Arendt. She comments that: In contrast to (Hannah) Arendt, Varian’s vision of a computer mediated world strikes me as an arid wasteland – not a community of equals bound through laws in the inevitable and ultimately fruitful human struggle with uncertainty. In this futurescape, the human community has already failed. It is a place adapted to the normalisation of chaos and terror where the vestiges of trust have long since withered and died. Human replenishment … gives way to the blankness of perpetual compliance (Zuboff, 2016, p. 81). Zuboff’s calm, clear and forensic examination of Google and its operations leads her to conclusions that are valuable in the present context as they help to inspire subsequent actions. For example: Google’s tools are not the objects of value exchange. They do not establish productive consumer-producer reciprocities. Instead they are ‘hooks’ that lure users into extractive operations and turn ordinary life into a 21st Century Faustian pact. This social dependency is at the heart of the surveillance project.
Powerful felt needs for an effective life vie against the inclination to resist the surveillance project. This conflict provides a kind of psychic numbing that inures people to the realities of being tracked, parsed, mined and modified – or disposes them to rationalise the situation in resigned cynicism. This … is a choice that 21st Century people should not have to make (Zuboff, 2016, pp. 83-4). In summary she concludes that: New possibilities of subjugation are produced as this innovative institutional logic thrives on unexpected and illegible mechanisms of extraction and control that exile persons from their own behaviour (Zuboff, 2016, p. 85). Limitations of space preclude further discussion here. Next steps, however, could include applying this kind of exploration to other subjects and creating projects dedicated to revealing the inner worlds of the oligarchs and their leaders in much greater detail. The next section is devoted to this wider analysis.

Silicon Valley – building or undermining the future?

With such examples in mind it is legitimate to ask whether Silicon Valley in general and the ‘big three’ in particular are building the future or, in fact, undermining it. From an Integral viewpoint any attempt to ‘build the future’ from structurally deficient and reductive right hand quadrant (empirical) views of reality is at the very least unwise and almost certainly a recipe for disaster. What can be missed by critics, however, is that the existential risks that have been created by thoughtless innovation and the scaling up of these enterprises to the global level are as dangerous for the U.S. as they are for anywhere else.
In summary these examples suggest a broad default or collective profile of the sector, namely that it:

• Arises from ego- and socio-centric outlooks that serve to privilege ‘me, us and now’;
• Proceeds from a conventional level of complexity (with forays into post-conventional when it comes to, e.g., financial innovation and marketing);
• Expresses a range of values from ‘red’ to ‘orange,’ neither of which provides an adequate basis from which to resolve the issues identified here;
• Largely addresses the lower right (exterior collective) domain of reality, with an occasional focus on the lower left (for social influence) and upper right (for persuasion and control).

Seen in this light the term silicon ‘giants’ appears misplaced since they currently operate more like ethical ‘midgets.’ It follows that if societies are to resolve some of the concerns expressed here then they will want to focus on ways to bring individuals and organisations at every level up and out of these diminished states of being. This is a core concern of humanistic and developmental psychology in general. Within the domain of integral methodology Chris Fuhs proposes a model for assessing the nature and potential of translative change (change within a given level) in contrast to transformative change (movement from one level to another). This work is partly motivated by a need to avoid earlier ‘growth to goodness’ assumptions that are now understood to be overstated (Fuhs, 2013). This is categorically not a question of promoting ever newer and more exciting technologies. Rather, it is a question of finding ways to bring into play more comprehensive worldviews and more sustaining values.

Grounds of hope

The crucial thing to note is that the current techno-capitalist worldview is by its very nature unstable and yet highly resistant to any kind of oversight or limitation.
The Internet oligarchs have continued to flourish over the very years in which it became clear that humanity requires a genuine shift of state, a new dynamic (a transition to sustainability) and a completely different direction (a post-growth outlook). The evidence is finally in that high-tech civilisation, despite its real achievements, is on a no-win collision course with the planet (Das, 2015; Higgs, 2014). It no longer makes sense to deny that the direction we should be collectively pursuing is one that moves decisively away from passive consumerism, the diminished rationality of ‘the market’ and endless growth. This is not to say that genuinely innovative, useful and worthwhile uses of IT have not emerged over this period. Rather, the ‘IT revolution’ has been undermined and misdirected by an ideology that ignores the human and cultural interiors. Instead of leading to a ‘better world’ it inscribes the collective slide toward civilisational breakdown and eventual collapse (Floyd & Slaughter, 2014). In a more open and egalitarian world new technologies would not be set loose to blindly impact upon complex social systems through one default fait accompli after another. Rather, they would be subjected to rigorous questioning and testing long before they were widely applied. Indeed, this was a core purpose of the Office of Technology Assessment (OTA) that, in its brief lifetime, was established to advise the U.S. Congress on exactly these matters (Blair, 2013). During the Reagan / Thatcher era the all-powerful ‘private sector’ in the US comprehensively abolished such initiatives, with predictable results. This is only one of a whole series of failures of governance, especially within the U.S.
One could imagine, for example, what might have occurred if, instead of repealing the Glass-Steagall Act (thereby abolishing the separation of high street banking from high-risk speculative gaming), Bill Clinton and the US government had put in place the means to probe the implications of high-risk speculative credit-default swaps and the like. The Global Financial Crisis (GFC) would have been less serious or possibly averted altogether. But no such attempt was made. Warnings were ignored, taxpayers of the developed world ended up footing the bill and Wall Street continued much as before. While various attempts to institutionalise technology assessment have occurred, it still remains uncommon (Schlove, 2010). Until very recently the European Union (EU) has been effectively alone in taking steps to ensure that ‘some things are not for sale’. It has taken small, but promising, steps to regulate corporations, compel them to pay more tax and create new rules allowing users to take charge of how their personal data is used, if at all (Drozdiak, 2017). It has even fined Google €2.4 billion for promoting its own shopping recommendations above those of other companies. This is a beginning. But a great deal of dedicated work will be required before sufficient countervailing power can be assembled on behalf of civil societies to design and implement IT systems that are secure and benefit everyone. Fortunately there are multiple ways forward in shaping this IT revolution that are being pursued by people and organisations of intelligence and good will. In fact the seeds of many solutions to global dilemmas are already emerging. For example, one of many places to begin is Solnit’s work on the role of hope in a threatened world (Solnit, 2016). A different approach from Canada is Rees’ ‘Agenda for sustainable growth and relocalising the economy’ (Rees, 2014). Raworth’s work on a broader and more inclusive model for economics looks promising (Raworth, 2017).
As does Fry’s impressive work on what he calls ‘design futuring’ (Fry, 2009). Then, specifically relevant to the issues raised here, are suggestions by Hodson, Taylor and other actors in this virtual space on how, in practical terms, oversight and control can be returned from the Internet giants to individuals, societies and, more broadly, governance in the public interest (Hodson, 2016; Taylor 2014). Having outlined aspects of ‘the problem’ the following sections begin the process of focusing on possible solutions.
Compulsive innovation

‘You can’t stop progress.’

One of the themes that emerges from this enquiry is the need and opportunity for large-scale, democratically mediated social design and a commitment to long-term social innovation in the public interest. At first sight it may appear difficult to see how the motivation for such efforts could arise, or from whom. But these are early days and motivation can emerge from a variety of sources. To begin with, in a context of radically ambiguous technical innovation, with its accompanying upheavals and disruptions, the widely held view that ‘you can’t stop progress’ clearly lacks credibility when used fatalistically, and should be set aside. Modifying this slightly to ‘you can’t stop technical innovation’ is a small step forward but doesn’t take us very far. Of far greater value is a more nuanced understanding of what terms such as ‘progress’ and ‘innovation’ actually mean, what values they spring from, whose interests are represented (or extinguished) and what longer term impacts and consequences may plausibly occur. Such issues are hardly part of common conversation but if society is to regain any say in its own prospects, these issues need to be brought into the open and debated much more widely. Similarly, the social, political, technical and environmental consequences of neo-liberal formulas of economic growth along with ever increasing inequality are no longer in doubt across the globe. People are becoming ever more concerned about these issues and, moreover, the Earth system itself is responding to multiple human impacts with glacially slow, but unstoppable, momentum. The faulty notions of ‘progress’ in this context clearly need to be unpacked as they are fraught with ambiguity and increasingly divorced from genuine human interests (Metcalf, 2017). Australia’s Gold Coast illustrates this dilemma rather well.
The mode of development on display is a living testament to a worldview characterised by profit-seeking, denial and short-termism. These are not characteristics that bode well for the future (Slaughter, 2016). ‘Progress’ is often seen as synonymous with technical innovation but such notions do not withstand close scrutiny. Similarly, a continuing free-for-all dialectic of innovation and counter-innovation quickly becomes irrational in our currently divided world. In what may be an inexact but tellingly perverse reversal of Moore’s Law the stakes grow ever more extreme with each new level of technical capability. Yet business leaders and decision makers seem largely unaware of this. We can see this in the current breakout of IT company investments in powerful real-world applications such as automation and advanced robotics that look set to destroy most, if not all, semi-skilled jobs (Murphy, 2017). We see it in the irrationality of emerging autonomous weapon systems (Begley & Wroe, 2017). We also see it on the mid-term horizon in the systemic threats that plausibly arise from quantum computing (Majot & Yampolskiy, 2015). A more immediate example is the rise of GPS spoofing. The early development of GPS was undoubtedly useful as it introduced precise, reliable navigation into countless transportation applications. Now certain features of its design are being quite deliberately exploited to subvert it. According to reports anomalous results were first spotted by Pokémon Go players near sensitive sites in Moscow and then began appearing elsewhere. For example, alarms began to sound when the master of a ship in the Black Sea discovered that his reported position was over 30 kilometres away from where it was supposed to be. Russia is thought to be one entity experimenting with the technology.
But of equal or perhaps greater concern is that spoofing software can now be downloaded from the Internet and employed by anyone with the knowledge and will to do so (Hambling, 2017). A similar dialectic is apparent in countless other examples, sometimes even in advance. Actively scanning the environment for signals of change does, in theory, provide time to respond. Separate scanning hits may interact to reveal previously hidden possibilities. For example, public media announce that trials of driverless ‘autonomous vehicles’ (AVs) will occur along a public motorway. A UK Minister of Transport announces that AVs will be in use by 2021 (Topham, 2017). Such developments are now becoming technically feasible. Yet around the same time a radical group publishes details about how, with a little imagination, vehicle-derailing devices can be easily and cheaply constructed and set in place, leaving those responsible to disappear without trace (Thiessen, 2017). Little imagination is required to suggest that both high- and low-tech devices will be developed to intervene and disrupt the smooth operation of AV technology wherever it is deployed. Once again, we are reminded that new technologies are never ‘value free’; they always come with hidden weaknesses and costs, winners and ‘losers’. Those who put their faith in complex systems will eventually need to recognise that the latter are not infallible. Those with different values and what one might call ‘oppositional’ social interests will continue to take advantage of any weaknesses or blind spots (Bartlett, 2017). It follows that the ‘hidden’, non-material side of any technology is at least as significant as its physical form. It therefore requires much closer attention.

Artificial intelligence

Bill Gates and Stephen Hawking are among many who have warned of the dangers of artificial intelligence (AI) and the very real possibility that it may represent an existential threat to humanity.
Fresh impetus to this debate was provided when Mark Zuckerberg and Elon Musk clashed over this issue. While Musk echoed previously expressed concerns, Zuckerberg would have none of it. For him such talk was ‘negative’ and ‘irresponsible.’ He’s dead against any ‘call for a slowdown in progress’ with AI (Frier, 2017). So it fell to James Cameron, director of Terminator 2 and other movie blockbusters, to inject some reality into the proceedings by reminding everyone of the mammoth in the room. Namely that it is ‘market forces (that) have put us into runaway climate change, and the sixth mass extinction.’ He then added that ‘we don’t seem to have any great skill at not experimenting on ourselves in completely unprecedented ways’ (Maddox, 2017). What is significant here is that it falls to a movie director to draw attention to the links between the products of an advanced techno-economic system and the growing likelihood of irrational outcomes. Such concerns are fundamental to the maintenance of public safety and wellbeing. Yet careful consideration of the social implications of technical change by public authorities has declined even as the need for it has increased. The race to create artificial intelligence is being pursued in many places. Yet few of the key players appear willing to pull back and rigorously assess the risks or seek guidance from wider constituencies. Whether East or West, to passively ‘follow the technology wherever it leads’ is technological determinism writ large. It’s clearly an inadequate basis upon which to make decisions, let alone to gamble with the future of humanity. We cannot assume that advanced AI will take over the world and either destroy humanity or render it redundant. Such outcomes are certainly possible but there are genuine differences of opinion on these very questions (Caughill, 2017; Brooks, 2017).
Of more immediate concern is that various agencies have been looking to AI for military and security ‘solutions’ for some years. Roboticised figures have been common in the entertainment industry for several decades. But wider appreciation of the risks involved in their use in real-world situations has been minimal thus far. Now, however, robot soldiers are being designed and tested. In 2017, for example, a group called the Campaign to Stop Killer Robots met at the United Nations in New York. Part of the program included a film illustrating the potential of ‘assassin drones’ to sweep into a crowded area, identify targets using facial recognition, apply lethal force and vanish. Concerned scientists were attempting to ‘highlight the dangers of developing autonomous weapons that can find, track and fire on targets without human supervision’ (Sample, 2017). This may sound like science fiction (SF) but a leading AI scientist offered at least two reasons for believing that such devices are closer than one might think. In his view: The technology illustrated in the film is simply an integration of existing capabilities. It is not science fiction. In fact, it is easier to achieve than self-driving cars, which require far higher standards of performance. (Also) because AI-powered machines are relatively cheap to manufacture, critics fear that autonomous weapons could be mass produced and fall into the hands of rogue nations or terrorists who could use them to suppress populations and wreak havoc (Sample, 2017). This is merely one branch of a rapidly evolving area of research and innovation but the prospects are clearly terrifying. Another key question raised was: who or what locus of authority provided the green light to arms manufacturers, the disruptors of Silicon Valley, or indeed anyone else to carry out these unprecedented experiments? Reinventing the world in a high-tech era – whether by innovation or disruption or both – is a non-trivial matter.
To routinely and relentlessly create new dangers and hazards cannot do other than threaten the viability of humanity and social life. Yet somehow these entities continue to operate openly and with confidence, while lacking anything remotely like a social licence. Some consider that the development of AI could be the test case that decides the matter once and for all. Here is Taplin again on how what he regards as the benign legacy of Engelbart – an Internet pioneer – was turned toward darker ends. He writes that the latter ‘saw the computer as primarily a tool to augment – not replace – human capability’. Yet ‘in our current era, by contrast, much of the financing flowing out of Silicon Valley is aimed at building machines that can replace humans’ (Taplin, 2017, p. 55). At this point, the ghost of Habermas might well be heard whispering something along the lines of ‘whatever happened to our communicative and emancipatory interests?’ To what extent does their absence from dominant technical discourses mean they are also missing from the products and outcomes they produce?

The panopticon returns

The original panopticon as envisaged by Jeremy Bentham in the 18th century was a design for a prison in which all the inmates could be continuously monitored without their knowledge. Since they could never know whether they were being observed or not they were constrained to act as if they were at all times. Hence they became adept at controlling their own behaviour (Wikipedia, 2017). During recent years newer versions have arisen that bring this oppressive model to mind. One is in China; the other much more widely distributed.
Chinese intentions to use IT for social control are revealed by Kai Strittmatter (2020), who states: “China’s new drive for repression is being underpinned by unprecedented advances in technology”, including:

• facial and voice recognition;
• GPS tracking;
• supercomputer databases;
• intercepted cell phone conversations;
• the monitoring of app use; and
• millions of high-resolution security cameras.

“This digital totalitarianism has been made possible not only with the help of Chinese private tech companies, but the complicity of Western governments and corporations eager to gain access to China’s huge market” (Strittmatter, 2020). This may not seem like a particularly significant departure from what’s already occurring elsewhere. What is different is that China already has totalitarian tendencies since it is ruled by an inflexible party machine that shows no interest in human rights or related democratic norms. While the US has itself long been hamstrung by deadlocked and ineffectual governments it does have a constitution that protects certain core rights (such as free speech). Despite systematic predation (through copyright theft and monopoly power) by Internet oligarchs, the US also retains elements of a free press and it certainly has an independent judiciary. Furthermore, the European Union (EU) has already taken the first steps towards establishing a more credible regime of regulation. In so doing it has shown that it is willing and able to take on the Internet oligarchs and force them to change their behaviour. So in the West there are real prospects of reining in at least some of the excesses. But China is a very different story. According to reports its oppressive ‘grid system’ of systematic surveillance has been operating in Beijing since 2007. Aspects of this oppressive new system were summarised as long ago as 2013 in a Human Rights Watch report.
For example: The new grid system divides the neighbourhoods and communities into smaller units, each with a team of at least five administrative and security staff. In some Chinese cities the new grid units are as small as five or ten households, each with a “grid captain” and a delegated system of collective responsibility … Grid management is specifically intended to facilitate information-gathering by enabling disparate sources into a single, accessible and digitized system for use by officials. … In Tibet the Party Secretary told officials that ‘we must implement the urban grid management system. The key elements are focusing on … really implementing grid management in all cities and towns, putting a dragnet into place to maintain stability. … By 2012 the pilot system was in ‘full swing’ (as it had stored) nearly 10,000 basic data’ (and collected) hundreds of pieces of information about conditions of the people (Human Rights Watch, 2013). By 2015 this vast modern panopticon was ready to be rolled out to enable the full-on mass surveillance of China’s 1.5 billion citizens. According to the Metamorphosis Foundation (2020): Any society that looks to stratify people based on how they look, based on their health, based on their data and things about them, is an incredibly authoritarian and sinister society. The societies throughout history that have tried to separate and stratify people based on data about them are (those) that we want to stay as far away as possible from…Collaboration of all stakeholders and demand for public debate are key to preventing situations in which the power to decide is taken from citizens and lies only in the hands of private companies or police forces… Since then further details of this oppressive and inescapable surveillance system in China have emerged. 
For example, a Wired article by Rachel Botsman revealed that two Chinese data giants – China Rapid Finance and Sesame Credit – had been commissioned by the government to create the required infrastructure using copious amounts of big data. Free access to this vast resource means that people can be monitored, rated and evaluated in depth throughout their normal lives. It turns out that ‘individuals on Sesame Credit are measured by a score ranging between 350 and 950 points.’ While the algorithms remain secret the five factors employed are not – credit history, fulfilment capacity (or ability to abide by contractual arrangements), personal characteristics, behaviour and preferences and, finally, interpersonal relationships. Those with high scores get consumer choices, easy credit and the chance to travel; those with low scores become the new underclass with few meaningful choices at all. These are described as ‘private platforms acting essentially as spy agencies for the government.’ The author then adds that ‘the government is attempting to make obedience feel like gaming. It is a method of social control dressed up in some points-reward system. It’s gamified obedience’ (Botsman, 2017). What’s particularly curious here is the inevitability of non-trivial perverse outcomes, foremost among which are the immense cultural and human costs. Masha Gessen’s mesmerising and sometimes painful account of life in post-revolutionary Russia clearly demonstrates how hard it is to imagine that a cowed and passive population could retain sufficient awareness or creativity to contribute much of value to any culture, however instrumentally powerful it may appear (Gessen, 2017). In Botsman’s view ‘where these systems really descend into nightmarish territory is that the trust algorithms used are unfairly reductive. They don’t take into account context.’ Yet without a keen sense of context meaning becomes free-floating and elusive.
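The mechanics of such a score can be made concrete with a deliberately simple sketch. The real Sesame Credit algorithms, as noted above, remain secret; the only grounded details below are the published 350–950 score range and the five factor names. The weights, the linear formula and the function itself are invented purely for illustration of how a reductive, context-free ‘trust algorithm’ might work.

```python
# Toy illustration of a five-factor "trust score" of the kind described
# by Botsman (2017). The factor names and the 350-950 range are reported;
# everything else (weights, formula) is hypothetical, invented here.

FACTORS = [
    "credit_history",
    "fulfilment_capacity",
    "personal_characteristics",
    "behaviour_and_preferences",
    "interpersonal_relationships",
]

# Hypothetical weights summing to 1.0 -- the real weighting is unknown.
WEIGHTS = dict(zip(FACTORS, [0.35, 0.25, 0.15, 0.15, 0.10]))

SCORE_MIN, SCORE_MAX = 350, 950  # the published score range

def trust_score(ratings: dict) -> int:
    """Map per-factor ratings in [0, 1] onto the 350-950 range."""
    weighted = sum(WEIGHTS[f] * ratings[f] for f in FACTORS)
    return round(SCORE_MIN + weighted * (SCORE_MAX - SCORE_MIN))

# A citizen rated perfectly on every factor gets the maximum score;
# one rated at zero on every factor gets the minimum.
print(trust_score({f: 1.0 for f in FACTORS}))  # 950
print(trust_score({f: 0.0 for f in FACTORS}))  # 350
```

Even this crude sketch makes Botsman’s point visible: every dimension of a life is flattened into one number, and nothing in the formula can represent context.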
Finally there’s the inevitable emergence of ‘reputation black markets selling under-the-counter ways to boost trustworthiness’ (Botsman, 2017). Overall, this may turn out to be the world’s prime contemporary example of a ‘deformed future’ in the making. A second and equally subversive example over the last few years is the growing use of voice-activated ‘digital assistants’. Skilfully packaged as mere ‘assistants’ and ‘helpers’ they are ‘on’ all the time and thus set to respond to every request and whim. Some are equipped with female voices that are intended to exert a distinctly seductive effect, as shown in Spike Jonze’s 2013 film Her. What is less obvious (at least to the user) is that with each and every use individuals reveal ever more information about their not-so-private lives. Before long comprehensive profiles are assembled, preferences noted and rich fields of data produced. As things stand, the operators of these systems own this treasure trove of information and suggest new products and services in the light of those already consumed. Sales go up but consumers become ever more tightly bound to their own induced impulses and proclivities. Thus, instead of having open-ended learning experiences, of responding to challenges, of deepening their knowledge and understanding of their own authentic needs and human qualities, those who succumb can end up having ‘feelings’ for, and an ersatz fictional bond to, a remote impersonal network that exists only to exploit them. A further consequence of becoming over-reliant on such ‘immersive’ technologies is that the real-world skills and capacities of human beings start to atrophy. Memory, time-keeping and spatial awareness are among the capabilities that wind down over time, leaving people ever more dependent and at risk (Aitkin, 2016). People are seduced into becoming a core component of the ‘product’ being sold. As the human interiors shrink and fall away, identity itself becomes elusive and problematic.
In summary, leaving the high-tech disruptors in any field to their own devices, so to speak, simply means that the human enterprise is subjected to random shocks and abuses that end up placing it in ever-greater peril. For Naomi Klein this is part of a deliberate playbook designed to provide a minority with greater dominance and power (Klein, 2007). But it’s also the result of a certain kind of blindness that comes from over-valuing the technical and under-valuing the human and the social. If there’s a consistent theme here it’s that power in the wrong hands creates more problems than it solves. So high-tech innovation needs to be separated from simple notions of ‘progress.’ It is fundamentally a question of values and power – instrumental, cultural and symbolic. If humanity wants to avoid dystopian outcomes, human societies will need to find new ways to retain their power and control and part only with what they judge necessary to governance structures that meet their real needs. In other words, it’s time to disrupt the disruptors. They’ve had their moment in the sun and the clouds are gathering. It’s time for them to stand aside so that a different world can emerge.
Blind spots as opportunities

The empirical-analytic methods employed to create powerful technologies and to understand and track macro phenomena both emerge from the 'exterior collective' quadrant of Integral enquiry. Yet taken in isolation they cannot grasp the nature of related human and cultural realities, since these spring from very different sources and invoke different kinds of knowledge. Integral perspectives seek greater balance by adding an 'interior' dimension to both individual and collective phenomena (Figure 1). The general lack of such distinctions helps to explain (and indeed to resolve) some of the confusion and conflict that occurs when, for example, new waves of high-tech innovation (exterior collective) impact on human life worlds (interior individual) and pre-existing ways of life (interior collective). As noted, people, social systems and cultures are all deeply affected. Jobs are destroyed, professions disappear and machines are primed to take over operations that were previously understood to exist solely within the domain of human action. Yet the study of history and the foundations of personhood, society and culture are only marginally accessible to empirical enquiry and are therefore routinely dismissed. Which is not to say that they cannot be studied and understood by those with the requisite skills, insight and methods (Esbjorn-Hargens, 2012).

Constitutive human interests

German philosopher Jurgen Habermas produced a series of works that made significant demands on readers yet also produced insights of continuing value. Of direct relevance here is his account of 'constitutive human interests.' Unlike much of his work, the essence of these interests is easily grasped and usefully illuminates a number of vital social processes that tend to be overlooked in high-tech environments. Table 3 provides an outline of Habermas' theory. 
In this account, the technical interest relates to 'work' and the empirical/analytic sciences that are centrally concerned with production and control (i.e. the application of technical rules to instrumental problems). The practical interest is about human interaction. Here the concern is not with control, nor with technical processes, but with communication and understanding, both of which are grounded in language and culture. The point is to clarify the conditions for clear and unobstructed communication between participating subjects. These are seen as interpretive tasks requiring appropriate skills. The third and perhaps 'highest' interest is the emancipatory interest. This relates to questions of power and the universal drive for emancipation and freedom of action (Habermas, 1971).

Table 3: Habermas' constitutive human interests (1971)
Interest | Life Dimension | Form of Knowledge | Criteria | Type of Problem
Emancipatory Interest | Power | Critical | Emancipation and liberation | Normative: critique of domination, repression and distorted communication
Practical Interest | Interaction | Interpretive | Achievement of communication and understanding | Interpretive understanding and practical choices
Technical Interest | Work | Empirical/analytical | Economy, efficiency and effectiveness | Technical and instrumental

At no point does Habermas denigrate the technical interest per se, since civilisation depends upon the maintenance of effective and efficient technical processes. Rather, what he is set against is the over-extension of the technical into areas that he considers illegitimate – as, for example, when decisions about new technologies are made on the basis of 'can it be done?' rather than 'should it be done?' One is a pragmatic issue concerned with technique; the other is value-laden and grounded in ethical considerations. This distinction has been widely overlooked in the present context. 
Concerning the practical interest, there are many non-technical factors (such as power, ideology, marketing and direct exploitation) that impede or prevent true communication taking place between individuals and groups. The issue then becomes that of defining the conditions under which communication can be optimised. This again is not a technical question but one that relates to the richer and more complex world of human intersubjectivity. Finally, the emancipatory interest is engaged in the critique of domination, repression, mystification and institutional inertia. It tries to define the conditions within which people can create an authentic existence for themselves. Unfortunately, however, questions of limits, of the character and requirements of 'the social' and the whole question of underlying human interests – actual human needs and qualities – mean little or nothing to techno-enthusiasts and Internet entrepreneurs. As we've seen, their speech patterns, metaphors and discourses were, and remain, focused on the single-minded pursuit of power, exploitation, expansion and the accumulation of immense financial rewards. These features go a long way towards explaining why the Internet and many associated technologies became debased and also why they parted company from authentic human and social needs. The rise of homo economicus and the rapid expansion of humanly arid technical systems could not but produce a generalised dystopian sense that human affairs were spinning out of control. During the second decade of the 21st century, traditional research, scholarship and the scientific method itself were also being undermined by the diminished rationality of technical innovation coupled with denialism on an astonishing scale. Moreover, the tendency of traditional disciplines toward subject compartmentalism made it difficult to address the growing complexities of macro-change. 
Many people began to experience a sense of the coming-apart of earlier structures and assumptions, often expressed as multiple failures. For example:
• A near-universal failure to resolve major environmental issues.
• Unwillingness on the part of global elites to rein in growth or reduce over-consumption.
• Unresolved questions about the Global Financial Crisis (GFC) and its aftermath.
• The related failures of globalisation and 'trickle down' economics to create a fairer and more equitable distribution of wealth.
• Growing instability and upheaval in the Middle East consequent upon the Iraq war and the abortive 'Arab spring'.
• Multiple failures of the US government to regulate or reform Wall Street, apply its own anti-trust regulations to the Internet oligarchs, develop appropriate policies on high-tech innovation and respond effectively to global warming.
• New waves of high-tech innovation that were and are undermined by corporate power, mass surveillance and a newly enfranchised criminal underclass (Glenny, 2011; Zuboff, 2015).
The environment created by these interrelated and ever-shifting phenomena was and remains complex and challenging, to say the least. Governance virtually everywhere has become more difficult. So it is regrettable, but not entirely surprising, that high-tech innovators have had little of value to say about the world they have been attempting to create. So long as their own innovations made it to market, these 'straws in the wind' were held to be of little significance. Meanwhile, a variety of non-empirical and broad-based approaches to understanding were quietly developing in the background. Since they are too numerous to receive adequate attention here they might well form the basis of a separate work. Yet the task of grasping some of the interior aspects of social change in the post-WW2 era was taken up by interdisciplinary scholars such as Lewis Mumford, Hannah Arendt, Ulrich Beck, Zygmunt Bauman and Jurgen Habermas, among many others. 
More recent perspectives shedding further light on these matters include accounts of hypernormality (Hooton, 2016), anticipation theory (Poli, 2010), the 'de-growth' movement (Cattaneo, 2012; Videira, 2014), postnormal studies (Sardar, 2015), new economic paradigms (Raworth, 2017) and the wider use of Integral methods (Egmond & de Vries, 2011). Overall, the selective blindness of the high-tech sector is less an indication of strength and power than of 'thin' and, in the long run, unproductive views of reality. The entire sector – and those who seek to reinvigorate it – would do well to re-direct their attention toward blind spots such as those outlined here. Properly understood, they provide creative springboards, stimuli for new thinking and new opportunities such as the following.
• Grasping the reality of global limits and the vast number of opportunities for values development, creativity, design and adaptation that they imply.
• Re-valuing aspects of 'the social' such as empathy, care, respect and in-depth communication between equals.
• Consciously seeking to understand and enable fundamental human interests, without which it is doubtful that advanced and vibrant human societies can endure.
In short, careful and genuine investments in richer worlds of meaning and significance foreshadow completely different outlooks and a radically renewed palette of options.
Interior drivers, scales of implementation

Virtually everyone outside the Silicon Valley bubble who has paused to consider the complex tangle of issues thrown up by the IT revolution in general and the Internet in particular tends, at some point, to reach a key conclusion – that the key issues before us are not primarily technical. Technology provides the physical substrate and software an artificial 'nervous system' that reaches ever more deeply into human lives. But merely following technical capabilities as far as they can be driven appears to consign humanity to a fast train to Dystopia and perhaps the end of human civilisation itself. Yuval Harari unintentionally provided a rehearsal, or test case, for that thesis in his book Homo Deus (Harari, 2015). Here the main driver of change was considered to be the ingenuity of large groups of people, and their most significant achievements, indeed, were said to be those associated with high technology. Yet by relentlessly following this technologically determinist path, what the author refers to as 'unaugmented' humans are expected to fall by the wayside and become the 'road kill' of history. It is a severe and uncompromising conclusion but unavoidable given the starting assumptions. If, on the other hand, the uses of high-tech are shaped and conditioned by progressive social drivers – such as life-affirming values and expanded worldviews – the outcomes would certainly be very different. So by playing a reductionist game with the very forces that moderate raw technical power – language, values, worldviews and similar culturally derived sources of meaning and capability – Harari actually demonstrates how vitally necessary they really are (Slaughter, 2017). Nor is this the only source that confirms this vital insight. 
As mentioned above, the idea that repressing or turning away from human qualities and social phenomena is exceptionally damaging receives powerful support from Masha Gessen in her book The Future is History (Gessen, 2017). There are clearly many aspects to this story and a growing number of informed observers of this rapidly changing scene. Greenfield, for example, is by no means alone in viewing the IT revolution as a full-on invasion. So he is alert to the implications of what he calls 'the colonisation of everyday life by information processing.' As with other critical approaches, he is interested not merely in raw outcomes but also in the motives of promoters, the ideas behind the hardware and the social interests involved. Working at a more fine-grained level and acknowledging such interests helps to re-frame core assumptions within corporate and business environments. In 2015 John Naughton reported on work by Doc Searls on what he calls the 'intention economy.' Of direct relevance to the issue of there being human interests beyond the purely technical is the following view: that 'many market problems … can only be solved from the customer side: by making the customer a fully-empowered actor in the market place, rather than one whose power in many cases is dependent upon exclusive relationships with vendors, by coerced agreement provided entirely by those vendors' (Naughton, 2015). From considering the IoT at three scales of implementation, Greenfield wants to probe more deeply into what they mean through actual case studies (Greenfield, 2017). As we have seen repeatedly, the marketing of high-tech devices commonly asserts assumed benefits to users but obscures underlying corporate benefits. So at the individual human scale, biometric devices such as the Fitbit and the Apple Watch monitor a variety of health and fitness indices. Yet these personal data are valued, analysed and used as inputs to advertising and sales. 
Insurance companies have vested interests in these skewed transactions, such as offering reductions in premiums in exchange for such personal data. Truck and public service drivers are especially vulnerable to the imposition of more heavy-handed versions. Then, unless this trend is halted, the intensive collection of personal data may be required of all drivers and other persons responsible for vehicles and related machinery. The logical end of this insidious process is akin to the imposition of total surveillance. That these observations are not 'merely' theoretical or personal but extend to other scales is confirmed by the emergence of 'Google Urbanism,' an ambitious plan by Google's parent company Alphabet to reconfigure cities in its own image. Its pilot project on the Toronto waterfront sought to 'reimagine urban life in five dimensions – housing, energy, mobility, social services and shared public spaces.' However, what caused most concern was a proposed 'data-harvesting, wi-fi beaming digital layer' to provide a 'single unified source of information about what is going on.' This was intended to gather 'an astonishing level of detail' such that 'each passing footstep and bicycle tire could be accounted for and managed.' Issues of privacy and the blurring of public and private interests were set aside, confirming the suspicion that 'the role of technology in urban life is obvious: It is a money-maker' (Bliss, 2018). Fortunately, opposition to this project grew to the point where it was eventually cancelled. For Morozov, ever on the alert for new forms of Internet solutionism, heavy-handed developments of this kind signal 'the end of politics.' He comments that: Even neoliberal luminaries like Friedrich Hayek allowed for some non-market forms of social organisation in the urban domain. They saw planning as a necessity imposed by the physical limitations of urban spaces: there was no other cheap way of operating infrastructure, building streets, avoiding congestion. 
For Alphabet, these constraints are no more: continuous data flows can replace government rules with market signals. (Morozov, 2017c) Seen in this light, the emergence of high-end 'smart cities' represents a further incursion of technical expertise into the lifeworlds of people, the ethos of cultures and the character of the settlements where much of humanity lives. More recently Sadowski has suggested that such environments may best be referred to as 'captured cities' (Sadowski, 2020). Such conclusions clearly challenge the legitimacy of this entire process. Greenfield's own recommendations include the following.
• The use of algorithms to guide the distribution of public resources should be regarded as a political act.
• Claims of perfect competence in relation to 'smart city' rhetoric should be rejected.
• Any approach to the whole IT domain should include a healthy dose of skepticism.
• Commercial attempts to gather ever more data about people should be resisted (Greenfield, 2017).

Taming the ubiquitous algorithm

Standing at the core of a vast number of IT processes is the ubiquitous algorithm. Its relative obscurity and foundation in mathematics mean that for many people it remains a mystery. But this need not continue. Cathy O'Neil was originally employed as a 'quant' in the heart of the New York financial district prior to the Global Financial Crisis (GFC). She saw first-hand how the algorithms that exploit 'big data' can be used productively or as instruments of power and exploitation. In her view most people are unaware of how these new capabilities have proliferated. Consequently, the reliance of bureaucratic systems on them is seldom appreciated. 
In the US she notes that 'getting into college, getting a job, being assessed as a worker, getting a credit card or insurance, voting, and even policing are in many cases done algorithmically.' She adds that: The technology introduced into these systematic decisions is largely opaque, even to their creators, and has so far largely escaped meaningful regulation, even when it fails. That makes the question of which of these algorithms are working on our behalf even more important and urgent (O'Neil, 2016). She uses a 'four-layer hierarchy' in relation to what she calls 'bad algorithms.' At the first level are those with 'unintentional problems that reflect cultural biases'. Next are those that 'go bad through neglect.' Third are those that she regards as 'nasty but legal' and finally 'intentionally nefarious and sometimes outright illegal algorithms.' In relation to the latter she adds that: There are hundreds of private companies…that offer mass surveillance tools. They are marketed as a way of locating terrorists or criminals, but can be used to target and root out citizen activists. And because they collect massive amounts of data, predictive algorithms and scoring systems are used to filter out the signal from the noise (O'Neil, 2016). The scam run by Volkswagen to conceal the results of emissions tests is, in her view, perhaps the most well-known example; but the sale of surveillance systems to repressive regimes looms larger as a serious future threat. In her 2016 book Weapons of Math Destruction she looks into numerous contexts only to find the same dynamic at work. In one case a school district attempted to identify the weakest teachers and designed a set of tests of 'teacher effectiveness' using algorithms. Many of the criteria, however, such as how well students were learning year to year, could not be measured directly. The use of unverifiable proxies resulted in wildly varying results – but teachers were sacked anyway. 
From this and other cases O'Neil concluded that many algorithms are poorly designed, and that proxies used in place of real data invisibly distort the results. Another oft-experienced trap occurs where hidden feedback loops render data increasingly meaningless the more often they are run within a system. What is also significant about this account is that the underlying issues are less about mathematics, statistics or data than they are about transparency (or its lack), power and control. Currently in the US, for example, the well-off can usually afford human representation whereas the poor are left with poorly performing data and a bureaucracy they can neither influence nor communicate with. In summary, used well, algorithms can be tools that usefully extract value from big data. Used poorly, they can certainly ramp up the efficiency of operations but at the cost of unreliable or unjust results and increasing inequality. O'Neil (2016) suggests a number of solutions, none of which are short term or particularly easy to implement without wider social support. 'First and foremost', she suggests, 'we need to start keeping track.' For example, 'each criminal algorithm we discover should be seen as a test case. Do the rule-breakers get into trouble? How much? Are the rules enforced, and what is the penalty?' She continues: We can soon expect a fully-fledged army of algorithms that skirt laws, that are sophisticated and silent, and that seek to get around rules and regulations. They will learn from how others were caught and do it better the next time. It will get progressively more difficult to catch them cheating. Our tactics have to get better over time too (O'Neil, 2016). Finally she suggests that: We need to demand more access and ongoing monitoring, especially once we catch them in illegal acts. 
For that matter, entire industries, such as algorithms for insurance and hiring, should be subject to these monitors, not just individual culprits. It's time to gird ourselves for a fight. It will eventually be a technological arms race, but it starts, now, as a political fight. We need to demand evidence that algorithms with the potential to harm us be shown to be acting fairly, legally, and consistently. When we find problems, we need to enforce our laws with sufficiently hefty fines that companies don't find it profitable to cheat in the first place. This is the time to start demanding that the machines work for us, and not the other way around (O'Neil, 2016). O'Neil's program for re-purposing algorithms is certainly ambitious but, given the plethora of unresolved issues in this area, it seems entirely appropriate. In her book she also calls for a 'model builder's pledge' (similar to the Hippocratic Oath taken by medical practitioners), a full-scale regulatory system, algorithmic audits and greater investments in research. In this light she speaks approvingly of Princeton's Web Transparency and Accountability Project and of European approaches (noted below) that are starting to dictate a new raft of terms and conditions that the Internet giants will have to recognise. Ultimately, she returns to the same ground that others have indicated in arguing that such choices are fundamentally moral, hence also ethical and social.

Defensive measures, key questions

Many options are available to those who are willing to invest the time and effort in responding to these issues and concerns. In mid-2017, for example, Australian reporter Rose Donahue interviewed Helen Nissenbaum in New York about the 'obfuscation movement.' This was described as a 'David and Goliath' strategy that relied on the fact that David had more freedom to act than his opponent (Donahue, 2017). 
Donahue noted that Nissenbaum had developed tools specifically designed to disrupt Google's tracking and ad delivery systems. One, called 'TrackMeNot', allows users to browse undisturbed under the cover of randomly generated searches. Another, dubbed 'AdNauseam', is a tool that collects data from every site visited by the user and stores them in a vault. This vastly overstates the user's activity and therefore serves Google false information. While such tools may at present appeal only to a minority, there are undoubtedly many more to come. A high-tech defensive war against the overreach of Internet oligarchs is increasingly likely. Many of these tools will become easier to use, and personal agency will be enhanced as more people avail themselves of them. In summary, the present Internet has evolved – or 'de-evolved' – into its current condition over an extended period. It will therefore not easily be prised from the grasp of giant corporations. Repurposing the Internet will take time. It will take concerted social and political action as well as extensive technical backup. Charles Arthur credits online rights activist Aral Balkan with the following insight: 'If you see technology as an extension of the self, then what is at stake is the integrity of our selves'. He continues: 'Without that – without individual sovereignty – we're looking at a new slavery' (Arthur, 2017). So key issues include the following.
• What kind of society do we want to live in?
• What visions of human life, society and culture do we believe in?
• What kinds of futures arise from our collective decisions?
These are exactly the kinds of questions that have driven futures / foresight thinking and practice for several decades. As the wider implications of the IT revolution cause more and more people to focus upon them, so new players will need to become more involved in the search for solutions. 
Governments, city authorities and civic administrators at all levels will need to be open to new forms of social engagement. They, in turn, will also need greater support from an informed public.
textbooks/socialsci/Sociology/Cultural_Sociology_and_Social_Problems/Deleting_Dystopia%3A_Re-Asserting_Human_Priorities_in_the_Age_of_Surveillance_Capitalism_(Slaughter)/03%3A_Framing_solutions/3.03%3A_Transcending_reductionism_re-purpos.txt
Social democracy

Many of the decisions and practices of the high-tech innovators and oligarchs have gained support from prevailing assumptions about the market, the consumer, a minimal role for government and so on. Yet despite its broad influence, the durability of neoliberalism as a guiding ideology should not be overstated. An in-depth review of neoliberalism by Metcalf refers to a 2016 paper published by the International Monetary Fund (IMF) that explicitly connects the ideology with some of its most significant consequences. These include 'pushing deregulation on economies around the world … forcing open national markets to trade and capital, and for demanding that governments shrink themselves via austerity or privatisation' (Metcalf, 2017). While such insights may appear unremarkable in themselves, they represent a startling admission by the IMF, whose policies have long supported such practices. The author also suggests that the ideology should not be seen merely as a 'standard right wing wish list' but rather as 'a way of reordering social reality, and of re-thinking our status as individuals' (Metcalf, 2017). Viewed in this light, the main premise – that 'competition is the only legitimate organising principle for human activity' – seems unlikely to remain viable over the longer term since it rules out and overwhelms vital human capacities. These include care, compassion, philanthropy and the like, which all healthy societies need in order to function at all. The decline of neoliberal values and assumptions would also mean that previously unthinkable options would emerge, as would new strategies to reform the system. A 'new normal' would have its chance to become established. In the UK a then resurgent Labour Party raised the possibility that just such a development could occur through the rise of social democracy. Rundle (2017) summarised what he considered to be some of the wider implications. 
In this distinctively optimistic view, local, national and global societies could be run as a tripartite process of state, market and community institutions, with a “democratically enabling” state enforcing limits to the private sector and mandating social-economic spaces into which community / open / free / collective activities could expand, with democratic socialised ownership, whole or part, of key economic sectors. Such a shift would have major implications for all sectors of the economy – including IT systems and the Internet. As other essential social resources (including water, energy and finance) transitioned to shared ownership and control, Google and other large companies could be regarded as having self-socialised (Rundle, 2017). As such, there's no good reason why they could not be subjected to the very same institutional arrangements. Such raw suggestions have a long way to go before they can be rendered into widespread practice. Yet they make a good deal of sense in the current context. Google / Alphabet, for example, may thus far have avoided the rigours of US anti-trust regulations but this may turn out to be a temporary 'victory' as other governments step in to take actions based on alternative assumptions and views (see below). Rundle's (2017) piece also demonstrates yet again why so many observers and critics of the IT revolution argue that the central issues are not primarily technical but social and political. Society as a whole needs to take part in multi-faceted conversations of this kind.

New infrastructure

There's no shortage of ideas and proposals regarding 'what needs to be done' to re-design and re-direct the Internet and, by extension, high-tech innovation in general. Helen Margetts of the Oxford Internet Institute is no exception. In order to deal with aspects of Internet aggression she suggests that: Any successful attempt to prevent extremist, abusive and hateful behaviour online must be multifaceted, thoughtful and collaborative. 
It will involve ethical and legal frameworks to guide as well as mandate good behaviour; working with tech companies rather than making enemies of them; smarter policing of activities that are already illegal; and crowdsourcing safety, so that people and social enterprises play a role (Margetts, 2017). Cathy O'Neil (2016) puts a strong case for the establishment of a new infrastructure to deal with the uses and misuses of algorithms. She seeks to create reliable records of how these tools are used and by whom. She also knows that to do so will not be easy, as powerful organisations normally resist being called to account. Taplin (2017), however, goes even further in proposing what he calls a 'digital renaissance.' This has various features that include:
• a shorter working week and the establishment of a universal basic income (UBI);
• measures to get the technical and creative communities working together;
• revisions of the 'safe harbour' provisions in the DMCA act;
• the Library of Congress issuing new guidelines as to the 'fair use' of creative and copyrighted material;
• revisions to, and wider application of, anti-trust regulations (to break up monopolies); and,
• a proliferation of co-operatives, non-profit companies, and what he calls 'zero-marginal-cost distribution systems'.
In this respect Taplin (2017) echoes suggestions by Rushkoff (2016), who is interested in exploring a range of social and economic inventions in the context of re-thinking what money is and is for. 
Finally Morozov (2017a), whose work has contributed substantially to this enquiry, suggests that a single data utility would be best placed to make the best use of material from divergent sources. In the light of current experience with commercial entities it would need to be non-commercial and publicly owned, much as Rundle (2017) has suggested. Given that progressive governments could set up such utilities quite easily, the next step would be to ensure that 'whoever wants to build new services on top of that data would need to do so in a competitive, heavily regulated environment while paying a corresponding share of their profits for using it' (Morozov, 2017a). Morozov (2017a) adds that 'such a prospect would scare big technology firms much more than the prospect of a fine'.

Effective regulation

When the European Union (EU) handed Facebook a $120 million fine in May 2017 and Google a heavy $2.4 billion one in June, both for market abuses, many wondered what the next step would be. By mid-2017 the answer came in the form of another acronym – the GDPR (or General Data Protection Regulation). Long-time observer of the IT scene John Naughton emphasised that the GDPR was not a directive but a regulation, so it would become law in all EU countries at the same time. Some of the implications follow:
• The purpose of the new regulation is to strengthen and rationalise data protection for all individuals within the EU. It also covers the export of personal data outside the bloc. Its aims are to give control back to EU residents over their personal data and to simplify the regulatory environment for international business by unifying regulation.
• The GDPR extends EU data-protection law to all foreign companies that process the data of EU residents. So even if a company has no premises or presence within the EU, if it processes EU data it will be bound by the regulation. 
And the penalties for non-compliance or infringement are eye-watering, even by Internet standards: fines of up to €20m and/or 4% of global turnover.
• More significantly, the GDPR extends the concept of “personal data” to bring it into line with the online world… The regulation gives important new rights to citizens over the use of their personal information… Valid consent has to be explicitly obtained for any data collected and for the uses to which it will be put.
• Citizens will now have the right to request the deletion of personal information related to them (Naughton, 2017a).
This was obviously what pundits call a ‘game changer’, as it fundamentally changed the rules for how these organisations collect, use and manage private data. Naughton (2017a) called it an ‘existential threat’ to those currently operating beyond the reach of existing data regulation laws. It certainly helped to resolve a situation in which people’s private lives everywhere are regarded as ‘fair game’ by entities whose sole interests lie in sales, profit and power. And it went a long way toward resolving some, but by no means all, of the concerns expressed so clearly by Zuboff (2015) and others. At the same time, public service sectors such as education and health need to adjust their own procedures, which will involve considerable costs.
The technical is political – the return of anti-trust
As the oligarchs have steadily penetrated ever more areas of human and economic life they have become so powerful that they resist regulation by elected bodies and are frequently said to be way ‘ahead’ in terms of their products and services. But this is a mistake. If we accept that the technical is political it is harder to confuse technical mastery with other forms of expertise. As ever, Morozov (2017b) nails the core of this confusion by reference to underlying social interests.
He poses the following question:
How could one possibly expect a bunch of rent-extracting enterprises with business models that are reminiscent of feudalism to resuscitate global capitalism and to establish a new New Deal that would constrain the greed of capitalists, many of whom also happen to be the investors behind these firms? (Morozov, 2017b).
During mid-to-late 2017 it was clear that, while the Internet giants were not about to collapse, social and political forces on both sides of the Atlantic were beginning to line up in broadly the same direction. Signs were emerging that what might be called their ‘golden age’ could be coming to a close. In September, for example, the Guardian editorialised that ‘Amazon’s dominance of the eBook market may not have raised prices, but it left the sector anaemic and competition floundering’. Another commentator was quoted as saying of the oligarchs that he did not think ‘any credible economist who isn’t an Ayn Rand lunatic would accept that these are not monopolies’. More people than ever are becoming aware of the fact that something is very wrong with this picture. During the same month Ben Smith, a well-regarded BuzzFeed writer, was among the first of many to confirm what he called a ‘palpable, perhaps permanent, turn against the tech industry’ (Smith, 2017). He added that ‘the new corporate leviathans that used to be seen as bright new avatars of American innovation are increasingly portrayed as sinister new centres of unaccountable power’ (Smith, 2017). In his view this constituted ‘a transformation likely to have major consequences for the industry and for American politics’ (Smith, 2017). He also reported on how politicians of widely differing views were urging that ‘big tech’ be considered less as private companies than as ‘public utilities’ (Smith, 2017).
After years of denying the value or relevance of treating the high-tech giants in the same way that Bell Telephone and Microsoft had been treated in earlier years (i.e. broken up into smaller units), anti-trust legislation was finally back on the agenda. Similarly, in Washington, senators Elizabeth Warren and Claire McCaskill both became involved in making anti-trust regulations part of the Democratic agenda for the next four years. Overall, the gap between ideas and effective action was perceptibly closing.
Humanising and democratising the IT revolution
Bell (1997, p.218) summarises some of the features of these stages in the following way.
Stage 6: Universal principles of justice, the equality of human rights and respect for individual human dignity are deemed to transcend the law itself. In this view it is rational to believe that ‘doing the right thing’ is based on an understanding that universal moral principles are valid. Personal decisions to uphold such principles affirm their continued salience over time.
Stage 5: A contractual perspective requires impartial support for agreed core values, including that of trust in fulfilling contractual obligations. To this end it is helpful to recognise fundamental rights, such as the right to life and liberty, while not necessarily being constrained by fashion or transient opinion. Defensible ethical behaviour involves freely accepting such obligations and actively seeking the greatest benefit for the common good.
Stage 4: Embodies a focus on larger and more dominant social institutions and the wider society as a whole. Group welfare is a primary concern, and it is in this context that obligations need to be fulfilled.
Stage 3: The need to be, and be seen as, a good person. Sustained loyalty is related primarily to particular groups and organisations. Individuals are keenly alert to the expectations of others in most situations. They are self-critical within these limited domains.
Stage 2: A bi-directional stance in which individuals pursue their own agendas while also remaining open to, and accepting of, those of others. Behaviour is, however, socially sanctioned since it depends upon approval and reinforcement from others.
Stage 1: The locus of decision-making is largely external and, as such, lies beyond the individual. Motivation is therefore focused on routine, convergent behaviour and the avoidance of sanctions. ‘Doing the right thing’ is identified with successfully following pre-existing rules and procedures.
It is for the reader to consider how well or badly the values and human qualities suggested here may apply to specific individuals and organisations that have colonised the Internet for their own limited purposes. But at the very least Bell and Kohlberg provide us with clear and reliable criteria that can legitimately be used as an evaluative scale. So in terms of moral development thus defined, some organisations and their executive leaders may find themselves hard pressed to provide adequate answers, which has huge social implications. When the question of re-negotiating social contracts is raised – and it will be, repeatedly – then interlocutors can legitimately seek evidence for the fulfilment of these criteria at the highest levels. Possibly the most useful guidance and overall summary is provided by Bell himself when he suggests that ‘People live best who live for others as well as for themselves’ (Bell, 1997, p.275). Finally, Figure 3 relates some of the key suggestions made throughout this series back to an Integral perspective. A straightforward four-quadrant analysis illustrates how various right-hand quadrant phenomena (including technology, infrastructure and exterior actions) can usefully be related back to various left-hand quadrant equivalents (values, worldviews, stages of development etc. as expressed through a variety of cultural norms and conditions). It follows that one way of promoting more humanised and democratic uses of any technology is simply to open up to these left-hand quadrant realities and take them fully into account. The story thus far has shown how the early Internet was shaped and conditioned by specific human and cultural forces within the U.S. After a fairly benign, government-funded start, a handful of entrepreneurs took over and, with little or no thought for wider consequences, actively fashioned the conditions for their own success. Tax laws were revised.
Anti-trust regulations that had earlier been applied to Microsoft and the Bell Telephone Company were set aside. Strategies were undertaken through which private monopoly platforms would grow unhindered into the world-spanning behemoths of today. The rise of neoliberalism turbo-charged this process. Following Hayek, it viewed the government as an impediment to ‘progress’ and the market as an unquestioned good. These tendencies, along with Rand’s nihilistic view of human existence, all helped to bring about the present constellation of rootless and invasive entities. In an alternative world, competent, far-sighted governance would have set the conditions for such enterprises and modified them progressively over time. Human rights (including the right to dignity, privacy and freedom from oppression) would have been respected and consciously built into the foundations of the Internet. Corporations would have learned to respect users and therefore to ask before expropriating creative work and private data wholesale for commercial gain. Tax laws that mediated fairly between corporate and social needs would have helped to ensure a steady flow of income for social expenditures. When entities grew too large they would have been broken up or otherwise compelled to adapt. Currently, however, we do not live in that world. Yet, as can be seen from some of the many examples outlined above, there are a host of reasons to support informed optimism, hope and the framing of real solutions. Furthermore, it is helpful to remember that some aspects of our situation are not entirely new. When Martin Luther hammered a copy of his 95 theses onto the Wittenberg church door some five centuries ago, he set himself against the oligarch of the day – the all-powerful Catholic Church. He questioned the legitimacy of that vast institution and, at the same time, began a process that both destroyed its business model and made way for alternatives.
Today the underlying dynamic is suggestively similar, but there are also clear differences. Luther’s stripped-down version of Christianity was a radical change, but it still provided people with a sturdy moral framework to guide their thinking and behaviour. Such foundational certainties are more elusive in our own time. On the other hand, this very fact arguably provides a rationale for recovering, re-valuing and applying some of the universal human values outlined above. The latter are perhaps among the most viable sources of strength and continuity available during times of transformation and change. The legitimacy of the Internet oligarchs is now in doubt from many quarters and for a variety of reasons, so limits and conditions are likely to be progressively imposed. Similarly, the business model that daily abuses countless human beings is unlikely to survive without major changes being wrought by newly enfranchised, democratically constituted cooperatives and civil authorities. While government actions may be slow and, at times, uncertain, this study suggests that a host of responses, innovations and alternatives is under active development. It is inconceivable that these will not change the nature of digital engagement over time. So it is indeed possible to look ahead with qualified optimism and to anticipate a new and different renaissance: one that sets aside technological adventurism and wild, unconstrained innovation in favour of positive human values and cultural traditions that balance human dignity and rights on the one hand with the enhanced stewardship of natural systems on the other.
The IT revolution reassessed
Technology…is not intrinsically bad. Much of it … is brilliant and beneficial – at least to humans. But invention often originates in short-term or siloed thinking. And even more frequently, its application fails because of political and economic decisions taken with little heed for non-humans and future generations. … The old idea of conquering nature has never really gone away. Instead of changing ourselves, we adapt the environment … The United States, though, pays little heed to its pre-industrial history. The country’s identity is deeply enmeshed with technology, which is treated as the great enabler of progress and freedom (Watts, 2021).
A successful society is a progress machine. It takes in the raw material of innovations and produces broad human advancement. America’s machine is broken. The same could be said of others around the world. And now many of the people who broke the progress machine are trying to sell us their services as repairmen (Giridharadas, 2019).
This book began with a literature review and the identification of emerging issues and case studies. The latter included the Internet of Things (IoT) and the prospect of ‘driverless cars.’ Related evidence from these and other sources suggested that the broad, rapid and largely unreflected-upon adoption of Silicon Valley’s high-tech offerings, while impressive in many respects, evolved from surprisingly narrow and inherently problematic foundations. A wide variety of human and social concerns have emerged that cast serious doubt on the viability of this trajectory and outlook. Among them are:
• Questionable values (unbounded profit, growth of monopoly power, size and over-reach in multiple domains).
• The calculated use of strategies intended to conceal how high tech and the growth of corporate power compromise and degrade many aspects of public and private life.
• Inadequate conceptions of human identity and purpose that contradict standards of safety, respect and dignity as defined, for example, in the UN Declaration of Human Rights.
• Equally thin and instrumental views of socially vital concepts such as ‘friends’, ‘communication’ and ‘progress.’
• One-dimensional views of high tech that bestow upon it an assumed and unquestioned ontological status that can neither be justified nor sustained.
• Failure to question self-serving practices that permit high-tech innovations to be released into social and economic contexts without due regard for unintended effects, drawbacks and long-term implications.
• How foresight and provident care have been overtaken by the naked power of speculative investments in ill-considered innovation, marketing and the resurgence of monopoly practices on a global scale (Slaughter, 2018b).
Chapter three considered some features of ‘compulsive innovation,’ took a brief look at artificial intelligence (AI) and also drew attention to the apparently unstoppable rise of surveillance systems around the world. Its main emphasis, however, was to begin the task of ‘framing solutions.’ It was proposed that certain ‘blind spots’ that afflict Silicon Valley, its investors and supporters could be reconceptualised as opportunities to reframe and re-direct the entire enterprise. A four-quadrant model from Integral enquiry re-focused attention away from the over-hyped exteriors of IT systems to highlight dynamic but widely overlooked interior phenomena such as worldviews and values. Habermas’ insistence on the primacy of what he calls ‘constitutive human interests’ also served to anchor the discussion in these vital domains. The chapter reviewed a variety of strategies for better understanding and intervening in systems that undermine humanity’s autonomy and well-being.
They included:
• Transcending reductionism and re-purposing the Internet;
• Productive innovation; and,
• Humanising and democratising the IT revolution (Slaughter, 2018c).
It is universally accepted, however, that the IT revolution is anything but static. It is therefore unsurprising that a multi-faceted ‘pushback’ against the continued expansion and power of the Internet oligarchs has continued to grow and develop. In an Atlantic essay during mid-2019, Madrigal outlines 15 entities that he refers to as ‘an ecosystem of tech opponents’ (Madrigal, 2019). This chapter draws on some of these newly emerging insights to extend the scope of the critique and provide further support for possible solutions. It begins with a view of the ‘fractured present’ and continues with four contrasting accounts by individuals who have, in quite specific ways, acted as ‘witnesses’ to this unprecedented upheaval. The upcoming chapters also employ a metaphor from The Matrix film trilogy to consider how the real-world matrix of high-tech entities and systems can be better understood, or ‘decoded.’ Overall, it suggests that the clarity of insight now emerging from such sources may begin to resolve the digital dilemmas we collectively face. It helps to establish the grounds for hope and effective action. Finally, we should not be under any illusion that we are dealing with a stable situation or outlook. The over-reach of high-tech innovation and its thoughtless implementation has multiple costs and brings with it quite new dimensions of hazard and risk. In other words, we are treading unstable ground that is ripe for change. But what kind of change, and whose interests will prevail?
The fractured present
Many features of human history are known to work against integration and the smooth functioning of society. They include poverty, revolution, war, disease, and the exhaustion of physical resources and imagination (Tainter, 1988).
During recent centuries, and especially since the Industrial Revolution, new forms of human organisation and technology progressively extended this list, giving rise to new versions of old problems as well as entirely new ones. During the early 21st century, a particularly perverse combination of IT capability and capitalist values created powerful waves of change and dis-integration that now permeate our own fractured present. While it suited the institutional beneficiaries of the IT revolution (Silicon Valley behemoths, associated start-ups, investors, certain government agencies) to evoke the myth of progress and portray this ‘revolution’ as a broadly liberating force, that view has steadily lost credibility. A particular series of events occurring within a very specific historical context, sometimes known as the ‘Neoliberal ascendancy,’ unfortunately arrived at precisely the wrong moment. As global dilemmas became increasingly evident, the view that ‘markets’ should prevail over ‘governance’ was used to repeatedly delay or destroy many of the very adaptive responses upon which more far-sighted policies could have been based. US governments in particular failed to fully comprehend or restrain the aggressive, monopolistic strategies that arose in their midst. Consequently, no-one in positions of power and authority succeeded in subjecting these developments to sufficiently thorough-going assessment, technological or otherwise. In retrospect, few people paused to consider the future repercussions of these developments. Some may argue that this apparent blindness should be attributed to inherent human limitations, including plain, old-fashioned naivety. Yet the fact remains that the Internet oligarchs intentionally obscured the growing costs of their activities behind a wall of self-serving propaganda, marketing glitz, distraction and the outright deception of the general public.
The costs include undermining human agency, weakening democracy, destroying livelihoods, fracturing social systems and creating new sources of conflict and violence. The following vignettes evoke the ‘lived quality’ of situations replete with disturbing human consequences (Fazzini, 2019).
• A mother discovers that her 12-year-old son has become addicted to the hard porn he first encountered via a friend’s phone in a school playground.
• A student who’d sent intimate images of herself to her boyfriend finds herself being ogled and trolled months later by school acquaintances as well as strangers on the internet.
• New parents who’d installed a video monitor on their child’s crib find out later that the feed was intercepted by thieves who used it to compromise their home network.
• A young man is hauled before a court for furiously striking his pregnant partner because she challenged his addiction to multi-player online gaming.
• The owners of an organisation with an online presence switch their computers on one morning only to find that they’ve become victims of ‘ransomware’ and have been ‘locked out’ of all their data. To have any chance of retrieving it they are required to pay a sum of money in Bitcoin to a remote and unknown entity. Help is available but there’s no guarantee the data will ever be recovered.
• A mature, affluent woman falls for a good-looking former soldier on the internet who has run into hard times. As their relationship develops, he asks for financial help. After several such transactions the victim discovers that she has been sending money to a 20-something scammer in Nigeria.
• The would-be purchasers of a new property discover that the deposit paid into their lawyers’ authorised account was diverted elsewhere by scammers and could not be recovered. The bank denies all responsibility.
These and countless similar examples have occurred, and are occurring, almost everywhere.
Table 4 provides an indicative overview under three broad headings.
Table 4: Human, Social and Geopolitical Costs of the IT Revolution
Human costs
• The loss of privacy on a vast scale.
• Loss of control over private data and the uses to which it is put.
• A steady decline in respect and tolerance for ‘others’ and other ways of being.
• A growing tendency to stereotype, blame, exploit and attack from a distance.
• Misuse of passwords to threaten, steal and control; the rise of identity theft.
• The rise of hacking, phishing, cyber-bullying and scams of every possible kind.
• The rise of online predatory behaviour, including the sexual abuse of children.
• Diminution of the right to be free of such abuse, and of the right to sanctuary.
• Evisceration of the inner lives of countless individuals, especially in developing nations.
• Propagation of false solutions and solutions to problems that do not exist (solutionism).
• Propagation of vacuous ‘entertainment’ that degrades human life and experience.
• The rise of equally vacuous ‘influencers’ who are richly rewarded for showcasing trash.
• The active promotion of outrage as a means of creating ‘user engagement.’
• Careless and repeated abrogation of the 1948 UN Declaration of Human Rights.
• Denial of the right to an open and ‘surveillance-free’ life now and in the future.
Social costs
• Repeated assaults on the value of truth and the integrity of scientific knowledge.
• The consequent weakening of social integration and clear-sighted decision making.
• Radical questioning / undermining of precedence and authority in almost every domain.
• The compromising of core human institutions such as government, health and education.
• The decay of social capital, traditions and ways of life built up over generations.
• The deliberate or careless resourcing of ‘bad actors’ at every level and in every country.
• The broadcasting of demeaning ideas, memes, narratives and images of every kind.
• The curation, replication and use of anti-social ‘performances’ (including sexual assault and mass killings) that in turn promote further violence and destructive responses.
• The deliberate use of dopamine reward responses to create and sustain addiction for commercial gain.
• The deliberate and systematic appropriation of creative work – including that of artists, writers, musicians and journalists – without adequate (or any) payment.
• The associated ‘starvation’ of traditional news media through direct theft of material and loss of funding through declining advertising income.
• The attempt to replace government services funded by formal taxation with commercial, for-profit services levied by private companies in their own interests (for example, aged care, health care, education and related social services).
• The re-orientation of intra-nation security services from the protection of native populations to the wholesale invasion of their privacy and autonomy.
• The corresponding inability of governments to protect themselves or their citizens from random external cyberattacks.
Geopolitical costs
• A continuing shift from the Internet as a positive enabler of legitimate civil functions to a multi-dimensional liability, i.e. an expanding series of hard-to-fix vulnerabilities.
• The willingness of nation states to develop increasingly powerful surveillance capabilities and high-risk interventions in the IT systems of other countries for purposes of intimidation and control.
• The resulting ‘dismal dialectic’ by which competing nation states seek temporary advantage over others by pursuing ever more dangerous and threatening internet- and satellite-enabled offensive capabilities.
• The growing likelihood of autonomous ‘soldiers,’ ‘smart’ drones and the like, bringing the prospect of cyber warfare ever closer.
• The asymmetric benefits that accrue to ‘bad actors’ at every level.
For example, Internet-enabled crime such as money laundering, financial scams and illegal transfers to and from rogue administrations: the costs of committing such crimes tend to be very low, while the costs of pursuing wrong-doing carried out via Internet means, in terms of time, money and expertise, are prohibitively high.
• Multiple vulnerabilities arising from the lack of coordination and cooperation in the digital arena between the three largest centres of power and control: China, Russia and the USA.
• The global emergency, however, recognises no political boundaries whatsoever. Although IT systems have achieved global reach, few if any effective human / political organisations have emerged that are capable of providing integration and coordination on a similar scale.
• Effective global governance appears to be a remote possibility at present.
These examples demonstrate how profoundly the IT revolution – as implemented by Silicon Valley and its clients – has helped to fashion the dangerous and unstable world that we now inhabit. It is a world that blunders into new dilemmas while failing to resolve those it already has. What many have overlooked, for example, is that to maintain what are now considered ‘normal’ operations, the high-tech world can no longer function without recourse to vast numbers of very complex devices operating silently in the background. The entire system is, in principle, vulnerable and needs to be constantly protected from entropic malfunction and deliberate online aggression (Galloway, 2020). Assurances regarding these endless liabilities have never been fulfilled. It is unlikely that they ever will be (Gent, 2020). To summarise, Western civilisation has embarked on a process of high-tech development with certain well-known benefits and other less well-known costs for which there are apparently very few easy or ready-made solutions.
It is therefore worthwhile to enquire whether the IT revolution itself may constitute a new and dangerous progress trap (Lewis and Maslin, 2018). So instead of passively accepting the technology onslaught, it needs to be subjected to sustained critical enquiry. Exactly how does this historical condition affect life, culture, tradition and meaning? How, under these chaotic circumstances, can solutions be crafted that hold out real hope of recovering the collective future? In order to de-code the matrix we first need to understand how it developed and why.
Understanding the matrix
RED PILL, BLUE PILL?
In the first Matrix movie the lead character, Neo, is offered a choice between red and blue pills (Warner Bros, 1999). One will wipe his memory and return him to the world of conventional surfaces with which he is familiar. The other will open his eyes so that he can not only see The Matrix for what it is but penetrate into, and perhaps even influence, it. He opts for the latter and, as the mundane world slumbers, begins his ‘deep dive’ into reality. The trilogy narrative may not be entirely coherent, but it certainly tapped some deep and perhaps obscured aspects of human psychology. In so doing it arguably triggered half-conscious questions or fears about ‘what is really going on’ with succeeding waves of technology over which we appear to have little or no control. The key word here is ‘appear’ since what is at stake are not immutable natural forces or God-like injunctions handed down from above. Rather, the high-tech world has been created by individuals in real times and places, making critical decisions at the behest of people with vested interests and imperatives. In the ‘blue pill’ version of ‘the real’ the global monopoly platforms created by Google, Facebook and others are believed to exist to help us access information, explore human knowledge and connect with others around the world.
We are led to believe that the power of modern technology is at everyone’s fingertips to do with as they will. In exchange for what are described as ‘free’ services, personal data from everyday lives and activities is scanned, recorded, used and sold. This information helps ever-attentive suppliers to better know and anticipate human needs. By drawing on as much information as possible, dedicated Google users are, it is said, enabled to navigate their way more efficiently through an ever more complex world. For reasons best known to themselves, some appear happy to install various ‘digital assistants’ that record their daily conversations. Some choose to unburden themselves of familiar low-grade tasks such as remembering train times, navigating a city or knowing what groceries to buy when, which encourages them to use these services in real time. Dedicated ‘always-on’ monitoring devices that connect the young to their parents and friends, and the elderly to medical support, seem to have wide appeal. Yet prying on everyone, even in their most private moments, are hidden armies of ‘data aggregators’ that sift and sort and organise the flood of information about what people do, where, how and even why they do it. It can be claimed that such technologies protect individuals from external harm and perhaps protect society from certain kinds of criminal activity. Overall, it is presumed that the ‘blue pill’ provides a pretty fair bargain. Such passive and generalised assumptions that these technologies, and the systems they are embedded in, are benign and useful have been widely accepted. We know this because the monopoly platforms (and their investors) have grown so immensely rich and powerful on the proceeds (Bagshaw, 2019). A ‘business-as-usual’ view simply assumes that these arrangements are broadly acceptable – albeit requiring routine upgrades and related changes from time to time (improved ‘personalisation’, longer battery life, sleeker handsets etc.).
In the absence of countervailing perspectives and clear evidence, alternative views of high-tech modernity can be difficult or impossible to articulate. This is especially the case in less affluent nations where Facebook, for example, and its subsidiary WhatsApp are used by large numbers of people who confuse these invasive and heavily monetised apps with the Internet per se. Given the strong tendency of social media to exacerbate dissent, extremism and even direct violence, the consequences can be tragic. This has been seen in mass shootings, some of which have been streamed in real time. But a similar dynamic has occurred in other situations where social dissent has risen to such extremes that community violence and ‘ethnic cleansing’ have resulted. Two examples are the descent of the ‘Arab Spring’ into chaos and the expulsion of the Rohingya from their homes and villages in Myanmar to a precarious existence in nearby Bangladesh. Nor, given recent events, is the US immune from such consequences. Clearly a ‘red pill’ account requires real effort over time and a certain tolerance for discomfort and uncertainty. It raises disturbing questions that not everyone may be ready or able to pursue. It acknowledges the reality of what some regard as a true existential crisis, with ‘forks in the road’ and pathways to radically different future outcomes. This view also suggests that the continuation and further development of surveillance capitalism leads directly to the kind of over-determined dystopian oppression already emerging in China (Needham, 2019). It therefore seeks to clarify just how the juggernaut works, to identify and name hidden factors, and to expose the intangible forces that are working behind the scenes to shape our reality, and ourselves, in a variety of perverse ways.
Yet before it can be tamed or directed toward different ends, society needs to understand in some depth how we arrived at the point where societies are confronted by deformed versions of high tech and a fundamentally compromised Internet. Such an account clearly goes beyond the critique of technical arrangements to questions of purpose, history and context.

MISCONCEPTIONS, MERCHANDISING AND ADDICTION

The view explored here is that the IT revolution owes at least as much to human and cultural factors as it does to purely technical ones. For example, the barely qualified optimism with which it has been associated arguably owes more to marketing and merchandising – America’s great unsought ‘gifts’ to the world – than it does to the services and distractions of any device whatsoever. The close association claimed to exist between technical innovation on the one hand and human progress on the other tells only part of the story and therefore remains problematic. Such generic ‘optimism’ is, perhaps, little more than a handy distraction used to conceal the predations of corporate power in this singularly heartless industry. As digital devices continue to penetrate nearly every aspect of human life, the forces driving them need close attention. They are shaped and enabled every bit as much by unconscious presuppositions and cultural myths as they are by computer chips, hard drives and servers. Such underlying intangibles – values, cultures and worldviews – powerfully determine what forms technologies take and the uses to which they are put.
John Naughton, a seasoned observer of the shifting IT landscape, has identified what he refers to as ‘two fundamental misconceptions.’ The first is ‘implicit determinism’, which he describes as: ‘The doctrine that technology drives history and society’s role is to adapt to it as best it can… that capitalism progresses by “creative destruction” – a “process of industrial mutation that continuously revolutionises the economic structure from within”’ (Naughton, 2020). In this view, the second critical flaw in the worldview of Silicon Valley is ‘its indifference to the requirements of democracy’: ‘The survival of liberal democracy requires a functioning public sphere in which information circulates freely… Whatever public sphere we once had is now distorted and polluted by… Google, YouTube, Facebook and Twitter, services in which almost everything that people see, read or hear is curated by algorithms designed solely to increase the profitability of their owners’ (Naughton, 2020). The ‘determinism’ and ‘indifference’ that Naughton refers to are two of many unacknowledged features that characterise this particular high-tech culture and degrade so many of its offerings. Another is the addiction to digital devices and the services they provide. Their appeal was ‘designed in’ with enormous care and strenuously promoted using every available marketing tool and technique. The language of advertising is, quite obviously, a projection of corporate interests and, as such, has no place for what might be called ‘autonomous needs.’ Its intrinsic conceptions of human beings and human life are irredeemably reductive. The fact that advertising has become the central pillar of the Internet is not something to be passively accepted; it requires an explanation. During the post-war years, routine sales were regarded as too slow and uncertain, meaning that profits were always going to suffer. The modern advertising industry was a response to this highly ‘unsatisfactory’ situation.
The whole point was to boost ‘demand.’ The strategy was so successful that over subsequent years ‘consumer demand’ became a ‘meta-product’ of this particular worldview (growthism) that expressed specific values (materialism, envy, consumerism etc.). Buying and selling in this high-pressure mode made a kind of sense in the heady years of post-war America. The big mistake was to allow it to become so embedded, so much a part of the ‘American way of life’, that it became normalised thereafter (Packard, 1962). Clearly times have changed, and those early imperatives make less sense than ever. Yet the present wave of IT-related selling continues to draw heavily on the very same manipulative tradition. One clear difference with this new flood of products and services, however, is that entirely novel features appeared that seemed to by-pass rational thought and ethical evaluation. Compelling new devices and the apparently ‘free’ services that they enabled seemed to meet people’s authentic needs for organisation, communication, agency and so on. At the time they were mistaken for gifts. More recently, however, the nature, extent and costs of addiction to digital devices, especially for children and young people, have become impossible to ignore (Krien, 2020). Yet even now responses to such concerns remain slow, uncertain and largely cosmetic (Exposure Labs, 2020). Heavily curated projections of IT as a neutral or positive enabler have clearly succeeded up to a point. But as more people experience the social, cultural and economic ramifications, the legitimacy of digital manipulation will likely attract ever greater scrutiny. Societies permeated by powerfully networked digital devices not only operate along unconventional lines, they also overturn earlier ways of life (Klein, 2020).
The era of large-scale, targeted and pervasive merchandising may not be over, but it does face new challenges that emerge from lived experience and the deep, irrepressible need for human autonomy. As people seek to understand their reality, their world, in greater depth, they will be more willing to look beyond the photo app, the chat group and those innocent-looking Facebook pages where powerful AIs stare coldly back into their souls. They will want to know why this unauthorised invasion happened and how it can be prevented from recurring. They will need a clearer understanding of the nuances of innovation and demand more honest explanations from those who shaped this revolution without regard to the consequences.

MONETISING DATA, INVENTING ‘BEHAVIOURAL SURPLUS’

Google was incorporated in the USA in 1998, soon after the Mosaic web browser had opened up the Internet to the public. Data collected at that early stage was seen merely as raw research material, for which authorisation was neither sought nor granted. Indexing the World Wide Web (WWW) provided reams of data, which were analysed and fed back into the system for users’ own benefit. It allowed users, for example, to fine-tune their own searches. This arrangement recognised what had long been a standard feature of commercial practice – the inherent reciprocity between a company and its customers. But since Google did not have a distinctive product of its own, the company was considered insufficiently profitable (itself a social judgement based on particular values and priorities). Subsequent discoveries, such as ‘data mining’, constituted a ‘tipping point’ that changed everything. Rich patterns of human behaviour were progressively revealed, but the research interest no longer applied; it was overtaken by commercial imperatives. These covert profit-making operations were regarded as highly secret and were shielded from public view.
A further critical shift occurred when it was realised that the avalanche of new data could be manipulated and monetised. The vast potential was eagerly welcomed by Google’s equity investors who, as Google announced at a 1999 press conference, had contributed some US$25 million to the company. These investors, with their value focus on money, expansion and profit, brought strong pressures to bear with the sole aim of boosting the company’s financial returns, in which they now held a powerful interest. None of these activities apparently broke any laws or regulations as they existed at the time, and so they were not considered illegal. The best that can be said is that they were, perhaps, ‘non-legal’ in that they took place in secret and within a regulatory vacuum. Very few understood at the time that this constituted a critical point of transition from one form of commercial activity to another. But it was consistent with Google’s priorities, which had never centred on improving people’s lives or contributing to society in any meaningful way. A couple of years later, one of Google’s founders, Larry Page, spoke about further options that lay beyond mere searching operations. This was made explicit when he declared that ‘People will generate huge amounts of data… Everything you’ve heard or seen or experienced will become searchable… Your whole life will be searchable’ (Zuboff, 2019, p. 98). As Zuboff (2019, pp. 68-69) notes, ‘Google’s users were not customers – there is no economic exchange, no price and no profit. Users are not products but sources of raw-material supply.’ She adds that: Google turned its growing cache of behavioural data, computer power and expertise to the single task of matching ads with queries… It would cross into virgin territory.
Search results were… put to use in service of targeting ads to individual users… Some data would continue to be applied to service improvement, but growing stores of collateral signals would be repurposed to improve profitability both for Google and its advertisers. These behavioural data, available for use beyond service improvement, constituted a surplus, and it was on the strength of this behavioural surplus that the young company would find its way to the “sustained and exponential profits” that would be necessary for survival (Zuboff, 2019, pp. 74-5). To achieve this ambition the company simply ignored social, moral and legal issues in favour of technological opportunism and unilateral power. These were and are all human decisions, human inventions, not ‘an inherent result of digital technology nor an expression of information capitalism.’ This was ‘intentionally constructed at a moment in history (that represented) a sweeping new logic that enshrined surveillance and the unilateral expropriation of behavior as the basis for a new market form. (It) resulted in a huge increase in profits in less than four years’ (Zuboff, 2019, pp. 85-7). Greed and opportunism were, however, not the only factors involved. The dominant neoliberal ideology succeeded in reducing the scope and power of government regulation and promoting a structural shift toward market-led practices. Anti-trust strategies that had previously been used to constrain monopolies were also set aside, leaving companies to expand seemingly without limit. As discussed below, Zuboff and Snowden both refer to the aftermath of the 9/11 disaster, when the CIA and other government agencies formed a powerful but hidden alliance with Google. These agencies made the fatal choice to draw as fully and deeply as possible on the very surveillance techniques pioneered commercially by Google.
These two highly secretive entities then found ways to conceal their surveillance operations not merely from the public but also from Congress. The immediate result was a decisive shift away from ‘privacy’ toward a new and dangerous type of ‘security’ (Snowden, 2019; Greenwald, 2015). Earlier aspirations for an ‘open Internet’, and the long-standing value assumption that human rights were paramount, were abandoned. The scope of these changes was admitted in 2013 by former CIA Director Michael Hayden when he acknowledged that ‘the CIA could be fairly charged with militarising the World Wide Web’ (Zuboff, 2019, p. 114). These developments arguably set the stage for the dangerous and unstable geopolitical situation we now face. Google became progressively stronger. Its targeted advertising methodology was patented in 2003 and the company went public in 2004. Profits rose precipitously and it soon became one of the world’s richest companies. In its rush for dominance and profit it pursued a series of unsanctioned, non-legal projects such as Google Earth (2001), an eventually unsuccessful attempt to ‘digitise the world’s books’ (2004) (Guion, 2012) and Street View (2007). While all have their uses, the company’s supreme over-confidence and disregard for common values repeatedly demonstrated its complete lack of interest in seeking or gaining legitimate approval. What it did obtain within the US was ‘regulatory capture’ of government policy. The question that will not go away, however, is whether any private company should be allowed to have this power, and whether that power is better invested in public utilities charged with pursuing social well-being rather than private profit. Such distinctions matter a great deal and have implications beyond IT.
In 2012, for example, Google paid its dues to its ideological friends by bestowing generous grants upon conservative anti-government groups that opposed regulation and taxes and actively supported climate change denial (Zuboff, 2019, p. 126). Hence the regressive aspects of Google’s business model and sense of entitlement clearly extend far beyond the surveillance economy per se. Having opened up vast new and undefended territories of ‘behavioural surplus’, Google provided a model that was emulated by many others, beginning with Facebook (Taplin, 2017). Today Google’s penetration into nearly every aspect of social and economic life is more extensive, and more powerful, than that of any nation state. Yet the legitimacy of these operations remains as problematic as ever. In order to understand and confront the Matrix, cultural factors, powerful individuals and obscure decisions all need to be taken into account.
Witnesses to the revolution

The application of Hayek’s Big Idea to every aspect of our lives negates what is most distinctive about us. It assigns what is most human about human beings – our minds and our volition – to algorithms and markets, leaving us to mimic, zombie-like, the shrunken idealisations of economic models… As a result – the space where we offer up reasons and contest the reasons of others – ceases to be a space for deliberation, and becomes a market in clicks, likes and retweets. The internet is… magnified by algorithm; a pseudo-public space that echoes the voice already inside our head. (Metcalf, 2017)

You only have to spend billions marketing something if its worth is in doubt. (Meadows, 2001)

The steady emergence of publications and new sources of insight into the substantive character of the IT revolution arguably constitutes a counter-trend in its own right, since understanding precedes action. Although it is beyond the scope of any single paper to survey these in detail, four sources warrant particular attention. They are Permanent Record (Snowden, 2019), The Psychology of Silicon Valley (Cook, 2020), The Age of Surveillance Capitalism (Zuboff, 2019) and How to Destroy Surveillance Capitalism (Doctorow, 2020). Snowden’s (2019) focus is primarily on his experience as a trusted member of the US security apparatus. He explains how, in the normal course of his work, he was confronted by critical changes in the way his government reacted to geopolitical shifts and events. He was shocked to discover how the surveillance options enabled by newly emerging technologies were turned upon the American people. Cook began her career as co-founder of a non-profit organisation focusing on the effects of technology. This, in turn, led her to consider how high tech affects society more generally. From here it was a short step to exploring the psychological dimensions of Silicon Valley, the single most influential incubator of these changes.
Her conclusions add compelling detail to the overall picture. Zuboff (2019) was a university business professor with long-standing interests in how new technology affected workers and organisations. This earlier focus provided a sound basis for her detailed investigation into how the Oligarchs were created. Of greatest significance, perhaps, was her in-depth exposure of the stealth methods embedded in their business models that allowed them to avoid detection and regulation for so long. From here she provided a rich account of how they undermined democracy and social norms in the pursuit of larger profits. Doctorow (2020), on the other hand, is a radical thinker with strong and well-established links within the IT subculture. His work embraces fictional and non-fictional approaches to IT-related issues. Thus he has a distinctive ‘insider’s’ view both of the tech itself and of the critiques advanced against it. As such he provides his own critique of Zuboff’s contention that the main culprit here is ‘rogue capitalism.’ For Doctorow (2020) the main issues concern the resurgence of monopolies and the need for far more comprehensive digital rights. Taken together, the authors of these works qualify as ‘witnesses to the revolution.’ As such, they serve as a corrective to the prevailing view that this revolution is primarily about technology and the growing array of high-tech digital devices. Readers of earlier works will also be aware that Integral approaches distinguish between inner and outer realities as well as individual and collective ones. Hence much of our interest here is in how this revolution has affected, and is continuing to affect, the inner lives of people, organisations and cultures.

Snowden’s dilemma

In contrast to other, more in-depth treatments, Snowden’s account is straightforward, almost banal. After being injured during army training, his proficiency in IT enabled him to begin working in the security sector.
He worked his way up through various government agencies and eventually earned the envied ‘most trusted’ status. With an unquestioned belief in the goals and purposes of this work, he became adept at handling highly classified material. Until 9/11, that is, after which everything changed. He discovered incontrovertible evidence that, contrary to accepted practice and in direct contravention of the US constitution, the US government had started spying on its own people. Back in 2004/2005 he had been aware of an unclassified report that outlined some superficial details of the President’s Surveillance Program (PSP). This allowed for ‘warrantless wiretapping’ of citizens’ communications and was supposed to wind down within a couple of years. Several years later, however, the classified version, intended only for a very highly restricted group, turned up on his desk. It described a secret program known as STELLARWIND and revealed how ‘the agency’s mission had been transformed from using technology to defend the country to using it to control it.’ This had been achieved by ‘redefining citizens’ private Internet communications as potential signals intelligence.’ He realised that ‘the activities it outlined were so deeply criminal that no government would ever allow it to be released unredacted.’ The National Security Agency (NSA) argued that ‘the speed and volume of contemporary communication had outpaced, and outgrown, American law … and that a truly global world required a truly global intelligence agency.’ This, in turn, and according to ‘NSA logic’, led to ‘the necessity of the bulk collection of internet communications’ (Snowden, 2019, p. 177). In summary, the way that STELLARWIND was being used meant that instead of working to defend the US and its citizens, the NSA had started to treat their private communications as standard ‘intel’ ripe for unlimited collection and analysis.
What Snowden had unwittingly discovered was what he called a ‘culture of impunity’ that had somehow circumvented the legislative branch, the judiciary, civil society representatives and even the US executive branch. Notions of ‘privacy’ that, as noted earlier, had supposedly been enshrined in the post-war UN Declaration of Human Rights had been trashed without any real public justification, debate or explanation. These were political decisions taken under the protective cover of ‘security’ – but that was not all. There was something about the technology itself that opened it to such egregious misuse. Snowden realised that while regulatory regimes were specific to each country, technology crossed borders with impunity and remained largely intact. This meant that the spread of personal data was, in principle, unlimited. Moreover, its unconstrained proliferation extended throughout and beyond individual lives. It also struck him forcefully that no previous generation had ever had to face such a profound symbolic assault on its privacy and continued well-being. Since we were the first, it was essential that we faced up to what was happening and dealt with it. Such conclusions are decidedly ‘non-trivial.’ They indicate global changes of state that cannot but affect humanity in powerful but little-understood ways. Among these is that the overreach of high tech and unconstrained power appears to lead, in Snowden’s words, to ‘a vision of an appalling future.’ He is therefore justified in asking: is this indeed what we are willing to impose on present and future generations? In this view humanity appears to have reached what might be called ‘a historical pivot’ of unknown dimensions. While Snowden has been portrayed as a ‘whistle-blower’ or even a ‘traitor’, it’s clear that he is not merely speaking for himself or pursuing personal interests. He seeks to act on behalf of humanity and, indeed, of future generations.
As such, the values being expressed here are clearly world-centric in scope and the worldview post-conventional. His decision to leave the US for what could well become a lonely and isolated life in exile became a moral imperative. Robert Mann’s (2014) account of the Snowden story is exemplary. It not only accurately captures other personal aspects but also shows how decisions taken after the 9/11 attack at the very highest levels of the US government contradicted the constitution and normalised criminal uses of the Internet. This, in turn, established a series of precedents that made it that much easier for other nations to follow suit. It was, at heart, a fatal abnegation of world leadership with immense long-term costs into the future. Two points stand out here. First, his view from the inner recesses of the US security apparatus raises deeply concerning questions about just what values are operating there. Second, if those values and their associated motivations serve to undermine rather than protect civilised life, the capacity of US governance to deal firmly and decisively with the many dilemmas raised by its own agents of high-tech innovation can also be compromised. It follows that the identity, values and culture of Silicon Valley (SV) are central and need to be taken fully into account. The myths and stories it tells, the narratives it projects upon the wider world, have real consequences, some of them contradictory and severe. A psychological profile of the Valley helps to provide a more nuanced understanding of how we arrived at this particular point in history. Equally, such a profile, if credible, might well provide useful insights into just what changes in its culture and worldview may be required.

Psychology of Silicon Valley

Katy Cook’s decision to explore the psychology of Silicon Valley began with questions that have occurred to many others. How, for example, was it that so many people were becoming addicted to successive waves of high-tech devices?
What might be the cumulative effects on health, wellbeing and relationships? Where is all this unregulated innovation taking us? Her initial involvement was with a non-profit organisation that considered the effects of technology and ran awareness campaigns on possible responses. The perspective she later developed is useful here because, in contrast to more common everyday external views of the IT revolution, she focuses on internal aspects that normally remain implicit, out of sight, and thus seldom considered. Viewed from a psychological perspective, however, the Valley and all it represents looks decidedly darker and more problematic than the upbeat public persona it presents to the world. This view highlights, for example, the fact that there are major differences between what this world-shaping entity would like others to believe and what it actually is. Cook’s view is essentially that Silicon Valley has been ‘corrupted’ because it prioritises the wrong (i.e. socially damaging) things. These include making profit and growth the ultimate values, making owners and shareholders the ultimate beneficiaries, and using outright lies and manipulative evasions as core strategies. At heart, she believes, the Valley fails to understand itself. This may seem an obvious point, but it has real implications. It means, for example, that in spite of its wealth and power (or perhaps because of them) it lacks the qualities that psychologists have long associated with ‘emotional intelligence.’ These are serious charges, so it’s worth summarising the evidence. Under ‘identity’ she notes that the Valley sees itself as an ‘ideas culture.’ Whereas in earlier times this was linked with counter-cultural aspirations for a more open and democratic future, established businesses and their investors have remained doggedly focused on the same old ‘extractive’ culture. Big ideas are said to thrive in Silicon Valley, but they are narrowly applied in the search for technical solutions.
This makes greater sense when key traits of programmers and computer specialists are revealed. A considerable body of evidence shows that they are skilled at puzzle-solving but that they neither like, nor are much interested in, people. Moreover, the industry actively selects for ‘anti-social, mathematically inclined males’ (Cook, 2020, p. 24). The author is not alone in suggesting that the ‘high-fliers’ of Silicon Valley should be considered, in some crucial respects, ‘under-educated.’ This initially startling conclusion is supported by evidence that their educational backgrounds are strongly associated with science, maths and engineering but lacking when it comes to the human sciences. With this in mind we need look no further to explain what Cook (2020) regards as ‘a staggering amount of unconscious bias.’ In summary, she identifies three key issues:

• Tech tends to be an uncommonly homogenous culture, marked by a lack of diversity and an unwillingness to embrace pluralism.
• It is rife with discrimination, including sexism, ageism, and racism, as well as harassment.
• There is a disturbing level of immaturity that permeates many corporations, often emanating from the highest levels (Cook, 2020, p. 39).

For these and related reasons the author concludes that, industry-wide, there is evidence of a ‘working environment that is fundamentally broken and unhealthy.’ It’s entirely consistent with this view that the myths and stories promulgated by Silicon Valley have been carefully curated at huge expense by marketing experts with the sole purpose of exerting desired effects on affluent, but distinctly naïve, populations. A litany of manufactured ‘sound bites’, familiar to many, reveals attempts to portray Silicon Valley’s major companies in a more positive light.
They include ‘Bring the world closer together’ and ‘Give everyone a voice’ (Facebook); ‘Organise the world’s information’ (Google); ‘Broadcast yourself’ (YouTube); ‘Make tools that advance humankind’ (Apple); ‘Work hard. Have fun. Make history’ (Amazon) and so on (Cook, 2020). Thus, while they may claim to reflect ‘lofty aspirations’ and ‘benevolent ideals’, they are just as likely to be ‘false and toxic aphorisms designed to mask the true intentions of the companies who craft them.’ Such slogans are intended to distract attention from the underlying aims of the industry, which are to ‘bring in the largest amount (sic) of users, for the longest period possible, at the most frequent rate.’ Hence, overall, SV ‘has managed to paint a self-serving picture of itself that fails to reflect the reality of its priorities and intentions’ (Cook, 2020, p. 70). The key point to note is the divergence between what Silicon Valley says and what it actually does. ‘Capital’, she notes, ‘doesn’t want to change the world. (It just) wants to make more capital’ (Cook, 2020). And this really is the heart of the issue. Many of the claims that emerge from Silicon Valley seek to promote ‘desirables’ such as engagement, connection, friendship and the like. But behind such pronouncements there is a barely concealed moral vacuum. There is no reality at all in shared ‘background myths’ such as ‘tech knows best’ or the idea that these companies can in any way be considered ‘trustworthy custodians.’ The motivations and values underlying what they actually do clearly point in a quite different direction. Cook (2020) points to the tension between ‘socially liberal values and techno-capitalist incentives’, noting that the latter remain focused on the kinds of limited short-term profit-oriented values mentioned above. But what she calls the ‘transgression’ of Silicon Valley is not so much a result of ‘for-profit’ and ‘corporate priorities’ as a ‘gross misrepresentation of its motives’ (Cook, 2020).
Sufficient time has now passed for some of the consequences to become clear. She adds: ‘SV has spent years and billions of dollars persuading the public to worship an industry that claims to have its best interests at heart. (However) the tech industry is driven by the same market forces as any other market-driven industry… Placing greater importance on making money than on taking care of people’s needs results in a society with deeply unhealthy values, in which people come second to financial objectives. A society built on such values loses a great deal of its capacity for humanity. We have allowed the tech industry, through a lack of regulation and the proliferation of unhealthy behavioural norms, to become the bastion of an economic order that has abandoned morality in favour of dividends for an elite few.’ Furthermore, ‘research has found evidence of an inverse relationship between elevated social power and the capacity for empathy and compassion’ (Cook, 2020). The divergence between what Silicon Valley claims to have delivered and what it has actually achieved is undoubtedly one of the chief underlying causes of the deep social divisions, disunity and perpetual conflict that have sadly become among the distinguishing features of American society. Having failed to rein in the Oligarchs and related financial and corporate interests, the US appears to have suffered a ‘collective breakdown of order, truth, and the psychological orientation they provide.’ The profit- and ad-driven business model that Silicon Valley adopted thrived on the back of social trends that have progressively undermined the coherence and status of truth, respect and fact-based debate. Those trends include radical individualism, market fundamentalism, polarisation, volatile dissent and a callous indifference to the well-being of others. Hence, ‘digital disinformation’ now constitutes a serious global risk, not only to the US but also to the whole world.
Clearly, the spread of such disruptions and distortions across entire populations does not end at the level of damaged individual lives. The deliberate and forceful ramping up of ‘engagement’ by any means deemed necessary ensured that the overall costs continued to mount, such that a full accounting is unlikely ever to be rendered. While the potential for good certainly existed at the outset, the combination of naivety, greed and lack of oversight and regulation allowed a toxic ecology of dangerous technology-enabled innovations not merely to emerge but also to be normalised. Collectively these drove the overall costs of the IT revolution into quite new territory. It was no longer simply a medium for individuals and powerful groups; it swelled with ‘bad actors’ of every kind, from petty criminals to nation states. What has since emerged even exceeds what the ‘dark market’ could achieve (Glenny, 2011). Both the disastrous 2016 US election and Brexit demonstrated that entire societies are no longer protected from digital manipulation. This helps to explain why, during 2019-2020, the world found itself backing uncertainly into a state of geopolitical instability and the ever-growing threat of global cyber war (Zappone, 2020).

Finding our bearings, challenging legitimacy

At close to 700 pages, The Age of Surveillance Capitalism is not, by any means, a ‘quick read.’ The language makes few concessions, and the barely concealed passion behind some sections is perhaps not entirely consistent with standard academic conventions. Yet the effort to come to grips with this revelatory and courageous work could hardly be more worthwhile. In effect the author re-frames key aspects of the last few decades, the time when IT took on new forms and literally invaded human awareness and ways of life before anyone grasped the significance of what was happening.
Now that the details of this invasion have been documented in compelling detail, a fundamental reorientation (both to the high-tech systems and, more importantly, to those in whose interests the present deceptions are maintained) can be envisaged. Which is no small achievement. At the macro level revised understandings of the recent past allow for a re-consideration of the present from which may emerge distinctively different futures than earlier, more anodyne, default views had perhaps allowed. For example, Peter Schwartz’s over-optimistic vision in The Long Boom (2000) is one of many that saw the coming IT revolution in overwhelmingly positive terms. One question answered early on is: who was responsible for this invasion? There’s a distinct cast of characters, prominent among which are the owners and investors of Google, Facebook and similar companies. Behind these organisations, however, are many others including neo-liberal ideologists, venture capitalists, several US presidents and powerful agencies closely associated with the US government. Yet even that’s too simple. As is clear from Snowden’s account, Bin Laden, the prime mover of the 9/11 attack, also had an influence, since it was this event that led US security agencies to pivot away from earlier concerns about ‘privacy’ in favour of a particularly invasive form of ‘security’. It’s a bit like the ‘rabbit hole’ featured in the Matrix film trilogy: the further down you go, the more you find. Zuboff, however, is far from getting lost. She locates dates, events, players and consequences in a highly disciplined and comprehensible way. Her almost forensic methods open up the possibility of knowing what has happened, understanding it and gaining clarity about what responses may be needed. Part of Zuboff’s contribution is terminology. She provides a language and a framework that serve to reveal much of what’s been hidden and to resource the projects and actions that are clearly needed. 
It’s necessary to note, however, that no language is objective, and early attempts to create one based on quite new phenomena are bound to require critique and modification over time. Language is, of course, anything but static. A couple of examples will suffice to demonstrate the relevance of these interventions. One is the notion of the ‘two texts’, while a second is about learning to distinguish between ‘the puppet’ and ‘the puppet master.’ In the former case she makes a strong distinction between what she calls the ‘forward text’ and the ‘shadow text.’ The forward text refers to that part of the on-line world that users of, say, Google and Facebook can see, use and generally be aware of. This embraces the whole gamut of design features intended to keep people in the system where their actions and responses can be constantly harvested and sold to others (data processors, advertising companies, political parties and the like). The simplest way to think of this ‘text’ is to view it as the ‘bait’ that keeps people returning for repeated dopamine hits. The ‘shadow text’ refers to the vast hidden world owned by, controlled by, and singularly benefitting from what Zuboff (2019) calls the ‘extraction imperative’. This is a secretive world that, even at this late stage, has experienced minimal regulatory oversight, especially in the US, the country of origin. Similarly, in the second case, a so-called ‘smart phone’ can be regarded as ‘the puppet’ that appears to operate according to its proximate owner’s bidding, whereas the remote owners of hidden intelligences (a vast network of dedicated AI applications) are the invisible and currently unaccountable masters. Knowing how to use the former as a tool and enabler is one thing. Coming to grips with the hidden imperatives of the puppet masters is quite another. The separation between the two is corrosive, sustained and entirely deliberate. 
Knowing this can provide part of the motivation to respond by acting in defence of human autonomy itself. The author carefully explores how this system became established and how it morphed from being something useful that initially supported people’s authentic needs (for connection, communication, identity, location etc.) into an all-out assault on each person’s interior life. The shift from serving customers with high quality search functions to ruthlessly exploiting their personal details is described in detail. Even now, following the Cambridge Analytica and similar scandals, few have yet grasped just how far this process of yielding their interiority to what Zuboff (2019) calls ‘Big Other’ has gone. For example, she documents how it inflicts particularly savage consequences on young people at the very time when their identities, sense of self etc. are already unstable as they proceed through the upheavals of adolescence. She has strong words for what is involved (Zuboff, 2019). For example: Young life now unfolds in the spaces of private capital, owned and operated by surveillance capitalists, mediated by their ‘economic orientation’ and operationalised in practices designed to maximise surveillance revenues… (Consequently) …Adolescents and emerging young adults run naked through these digitally mediated social territories in search of proof of life… (Zuboff, 2019, pp. 456 & 463). Immersion in social media is known to be associated with a range of symptoms such as anxiety and depression, but this particular rabbit hole goes deeper. Viewed through the evidence presented here, a combination of ‘rogue capitalism’ with the far-reaching capabilities of digital technology is bearing down on matters of primary and non-negotiable interest to all human beings. That is, the capacity of everyone to know, value and, indeed, to maintain their inner selves. It’s here that Zuboff (2019) introduces a pivotal concept – the primacy of what she calls ‘the latency of the self’. 
She writes: What we are witnessing is a bet-the-farm commitment to the socialisation and the normalisation of instrumental power for the sake of surveillance revenues… In this process the inwardness that is the source of autonomous action and moral judgement suffers and suffocates (Zuboff, 2019, p. 468). Thus, far from being the fulfilment of humanity’s aspirations and dreams, what she calls surveillance capitalism leads to ‘the blankness of perpetual compliance’ (Zuboff, 2019). Attentive readers may well ask ‘have we not seen this before?’ We have, not only in the great dystopian fictions of our time but also in recent history. History shows that when entire populations are deprived of their inner lives, their deepest sense of self, they become depressed, diminished and even disposable. Zuboff gives credit to some of the early responses, many by the European Union and some member states. Yet there’s a long way to go before the myths promulgated by the Internet oligarchs are recognised by entire populations (and the politicians who represent them) and seen for what they are: a sustained assault by secretive but radically indifferent private entities on the very foundations of their humanity. Perils of monopoly Zuboff’s opus has obviously contributed much to the process of ‘de-mythologising’ the IT revolution and revealing the practices of some of its key players. It is both an analytic triumph and, to some extent, a personal crusade. It is to be expected that other observers will exhibit different and contrasting responses. Cory Doctorow’s account is informed by a more close-up, participant view of what the IT revolution is and does. His detailed view of how the new media actually work in practice suggests that the ‘surveillance’ side of the story, while dangerous and objectionable, may not be quite as trouble-free and all-powerful as it may first appear. 
In his understanding it is also, to some extent, a kind of double-edged sword with its own distinct weaknesses. So, rather than take on the Internet Oligarchs in a kind of ‘frontal assault’ he considers some of the traps and issues that make them appear less monolithic and somewhat less threatening. Specifically, he suggests that the primary focus needs to shift from surveillance per se to the raft of problems he associates with monopolies. For example: Zuboff calls surveillance capitalism a ‘rogue capitalism’ whose data-hoarding and machine-learning techniques rob us of our free will. But influence campaigns that seek to displace existing, correct beliefs with false ones have an effect that is small and temporary while monopolistic dominance over informational systems has massive, enduring effects. Controlling the results to the world’s search queries means controlling access both to arguments and their rebuttals and, thus, control over much of the world’s beliefs. If our concern is how corporations are foreclosing on our ability to make up our own minds and determine our own futures, the impact of dominance far exceeds the impact of manipulation and should be central to our analysis and any remedies we seek (Doctorow, 2020). Or again: Data has a complex relationship with domination. Being able to spy on your customers can alert you to their preferences for your rivals and allow you to head off your rivals at the pass. More importantly, if you can dominate the information space while also gathering data, then you make other deceptive tactics stronger because it’s harder to break out of the web of deceit you’re spinning. Domination — that is, ultimately becoming a monopoly — and not the data itself is the supercharger that makes every tactic worth pursuing because monopolistic domination deprives your target of an escape route (Doctorow, 2020, p.10). 
From this point of view the very real dangers and dysfunctions that Facebook, for example, imposes on users have a simple solution: break the company up into smaller elements and divest it of those it has monopolistically acquired. Of great interest in the present context, however, is that while Facebook’s surveillance regime is ‘without parallel in the Western world’ and constitutes a ‘very efficient tool for locating people with hard-to-find traits,’ it cannot allow normal discussions to run unmolested. This is because the latter cannot deliver sufficient ads (or hits on ads) in the high-intensity mode demanded by the business model. The company therefore chose to boost what it calls ‘engagement’ by injecting streams of inflammatory material in order to create ‘artificial outrage.’ The fact that these can be dangerous and costly in the real world amply demonstrates the perversity of the model and completely undermines any pretence that Facebook might contribute to social well-being. Thus, the writer is less concerned about the data capture per se than he is about the way the growth of monopolies forces people to consume the kind of material that makes them miserable! In this account the ‘big four’ (Facebook, Google, Amazon and Apple) all rely on such positions in order to dominate their respective market segments. In summary: • Google’s dominance isn’t a matter of pure merit – it’s derived from leveraged tactics that would have been illegal under ‘classical’ (pre-Reagan) anti-trust regulations. • Similarly, Amazon’s self-serving editorial choices determine what people buy on that platform. Consumers’ rights are overwhelmed because the company’s wealth and power enable it to simply buy up any significant rivals or would-be competitors. • On the other hand, Apple is the only retailer permitted to sell software on its own platforms. It alone controls what products are allowed into its ‘walled garden’ (the app store). 
It monitors its customers and uses its dominance to exploit other software companies as ‘free-market researchers’ (Doctorow, 2020, p. 16). The fact that these monopolistic conditions have remained for well over a decade with little or no regulation once again reveals the inability of successive US governments to understand or respond to what has been happening in their midst. As Doctorow (2020) notes, ‘only the most extreme ideologues think that markets can self-regulate without state oversight.’ He suggests three reasons why companies persist with such data-gathering practices: 1. They’re locked into a ‘limbic system arms race’ with our capacity to reinforce the attentional defence systems with which we resist the new persuasion techniques. They’re also locked in an arms race with their competitors to find new ways to target people for sales pitches. 2. They believe the surveillance capitalism story. Data is cheap to aggregate and store, and both proponents and opponents of surveillance capitalism have assured managers and product designers that if you collect enough data, you will be able to perform sorcerous acts of mind control, thus supercharging your sales. 3. The penalties for leaking data are negligible (Doctorow, 2020, p. 17). This is where things can appear confusing because, as Snowden’s account suggested, state surveillance that had earlier been focused outward on the wider world was re-purposed to focus on the American people. In the process public / private distinctions became blurred. Similarly, big tech regularly ‘rotates its key employees in and out of government service’, meaning one or two years at Google could easily be followed by a similar time at the Department of Defense (DoD) or the White House. This ‘circulation of talent’ leads to what’s known as ‘regulatory capture.’ It indicates a diffuse but powerful sense of mutual understanding which emerges between organisations that previously had clear and distinct boundaries and quite different purposes. 
One of the consequences of such capture is that liability for questionable security practices can be shifted on to the customers of big tech and thence to the wider society. The question ‘who is responsible?’ then becomes more difficult to answer. Doctorow (2020, pp. 21-22) asserts that ‘big tech is able to practice surveillance not just because it is tech but because it is big,’ and also that it ‘lies all the time, including in their sales literature’. It got this way not because it was tech but because the industry arose ‘at the very moment that anti-trust was being dismantled’ (Doctorow, 2020). The role that Robert Bork played in this process has been told by Taplin and others (Taplin, 2017). In essence, it meant that some 40 years ago, when anti-trust doctrine was being re-framed, Bork ensured that it focused less on limiting corporate size and power than on attempting to restrain the costs of products to consumers. This judgement, and the legislative loophole in section 230 of the Communications Decency Act of 1996 (which ensured that media companies were protected from the consequences of any material that might appear on their sites), along with the lack of effective Congressional oversight, are essentially what allowed these companies to grow beyond any reasonable limit. The key clause in the legislation reads ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider’ (Hartcher, 2020). The fact, as Cook noted, that ‘capital wants to make more capital’ supplied the motive and the rationale. And as Zuckerberg once pronounced, this also enabled them to ‘move fast and break things.’ Doctorow (2020) differs most clearly from other commentators in his refusal to see surveillance capitalism as anything other than plain, old-fashioned capitalism. 
Thus, in his view, it does not need to be ‘cured.’ Rather, what needs to be beefed up and applied more widely is ‘trust-busting’ and bans on monopolistic mergers. For him, big tech is not as powerful as it would like others to believe and, although it has largely escaped thus far, it cannot actually overturn the rules to protect itself from the resurgence and renewal of anti-trust measures. For him the issue is: are we up to it? It’s clear that the ‘we’ he has in mind is considerably wider than that of government agencies and the technically adept. For Doctorow (2020) the ‘fake news’ generated by monopolistic systems that have shredded what was earlier regarded as shared reality is not merely an irritant but ‘an epistemological crisis.’ A widespread breakdown of shared meanings, and the radical uncertainty it creates, suggests the ‘terrifying prospect’ of a widespread loss of control and capability. Yet one of the distinctive points of this account is that at the heart of any technologically advanced society is a need for integration. This, according to Doctorow (2020), is what he calls ‘the hard problem’ of our species. If we can’t coordinate different activities across multiple domains, such a civilisation cannot but fail. While for Zuboff (2019) the high-tech path to the future is what she calls a ‘bet-the-farm’ commitment or choice, here it is portrayed as the only real option, framed through two different strategies. Ultimately, Doctorow (2020, p. 33) believes, ‘we can try to fix Big Tech by making it responsible for bad acts by its users, or we can try to fix the internet by cutting Big Tech down to size. But we can’t do both’. In this view and outlook the preferred option is for a broad-based coalition spanning government and civil society to break up the monopolies, reform big tech and drive ‘up and out’ of the present dilemma.
Resistance and renewal The most interesting puzzle in our times is that we so willingly sleepwalk through the process of reconstituting the conditions of human existence (Winner, 1986). Re-constituting the present This book has considered various aspects of the real-world matrix in order to know it, to deepen our understanding of what it is and what it means. The previous chapters provided substantive rationales and various proposals for taking informed action. This chapter discusses some of the innovations and responses now under active consideration. Table 5 provides a summary of various propositions and proposals that have surfaced in this space. The specifics of each will evolve and more detailed treatments will no doubt follow. Yet even this limited sample provides clear evidence of an increasingly credible shared agenda. If worked out, developed, valued and resourced, it can become a valuable source of actions and strategies that lead away from a high-tech Dystopia toward more desirable futures.
Table 5: Suggested actions
Cook (2020): Remember that ‘tech cannot fix itself.’ Understand what went wrong inside Silicon Valley (SV). Understand its psychological deficiencies and the full implications of the values it has chosen to follow. Monitor its (lack of) emotional intelligence and its structural biases. Promote healthier psychological norms and revise its ethical foundations.
Snowden (2019): Question the widespread use of illegal surveillance. Challenge its legitimacy and that of those employing it. Enact new laws to prevent it re-occurring. To avoid a nightmare future, individuals need to take back ownership of their own data.
Doctorow (2020): Recognise ‘fake news’ as an existential threat to social integration and the well-being of society as a whole. Rather than be distracted by arguments about surveillance per se, re-focus on the raft of issues that arise from the unrestrained re-growth of monopolies. Reduce or eliminate these using anti-trust and related regulations. Ensure that everyone’s digital rights are respected.
Morozov (2018): Introduce legislation to force companies to pay for the data they extract. Improve citizens’ rights to access data obtained from public sources (such as CCTV). Combine data protection with a proactive social and political agenda. Use the ‘data debate’ to re-think other utilities and services (such as welfare, unions and bureaucracy).
Howard (2020): Establish the principle that ‘public life belongs to the public.’ Require companies to routinely contribute such data to archives, libraries and similar public institutions. Explore new opportunities for civic, as opposed to commercial, engagement.
Cadwalladr (2020): Regulate in relation to four main categories. 1. Safety: no product should be sold / shipped until it demonstrates safety and is free from obvious bias. 2. Privacy: treat all private data as a human right, not an asset. 3. Honesty: remove the oligopolistic power now exercised by companies such as Facebook and Google, especially as they affect ad networks. 4. Competition: strengthen and enact the relevant anti-trust laws that encourage entrepreneurism and innovation.
Tarnoff & Weigel (2019): Don’t see IT issues as separate. Human beings have co-evolved with technologies over time. The focus should therefore be on ‘humanity/technology co-evolution.’ Society is not served well by having technologies imposed (or sold) from above. Society as a whole should be involved in deciding how to live with technology. IT companies should follow specific rules that retain democracy as a guiding principle.
Lavelle (2018): Invert the operating principles of Facebook, Google etc. Users to opt in rather than search for escape routes. They should be provided with access to clearly documented and user-friendly tools for managing their data. Calibrated fines are needed to deal with knowing misuse of data. Users have the option of retaining all their data for a fee.
Sample / Berners-Lee (2019): In 2019 Tim Berners-Lee, an early internet pioneer, drafted a ‘contract for the web.’ It sought to protect human privacy, provide access to individuals’ data and establish a right not to have the latter processed. It argued for community consultation prior to products being launched, and for the web to be safe and remain open for all users. Berners-Lee has also created Solid, a more person-centred data system.
Deibert (2020): New laws to restrain how tech companies gather, process and handle personal information. Companies required to open up algorithms etc. to external scrutiny and public interest auditing. Legal protection of workers’ rights in the ‘gig’ economy. Repeal section 230 of the 1996 Communications Decency Act. Apply ‘Retreat,’ ‘Reform’ and ‘Reset’ procedures grounded in strong underlying principles.
Eggers (2018): Update the Universal Declaration of Human Rights and add two new amendments. 1. Assert that all surveillance is inherently abhorrent and should be undertaken only by law enforcement with judicial oversight. 2. Resist placing everything online. Ensure that human beings can continue to live real analogue lives offline as much as possible.
Zuboff’s (2019) magisterial critique led her to articulate two fundamental needs of supreme and vital importance to all human beings. They are the need to recover the future tense and the need for sanctuary. Both are clearly of great significance to Futurists and foresight practitioners. In relation to the former she frames her decision to spend seven years working on this book as an act of will that constitutes part of her own personal claim to the future. 
She states: Will is the organ with which we summon our futures into existence…The freedom of the will is the bone structure that carries the moral flesh of every promise…These are necessary for the possibility of civilisation as a ‘moral milieu’…(They are) the basis of contracts…collective decisions to make our vision real (Zuboff, 2019, pp. 331-333). The notion of ‘civilisation as a moral milieu’ is a powerful and compelling one. By contrast, the conditions and agreements demanded by Google, for example, require centuries of human legal practice to be set aside in favour of what she calls ‘Uncontracts’ (Zuboff, 2019). These are forced ‘agreements’ created by the ‘positivist calculations of automated machine processes.’ In place of human qualities such as dialogue, problem solving and empathy, the ‘Uncontract’ leads back to ‘the blankness of perpetual compliance’ referred to above (Zuboff, 2019, pp. 334-336). The ‘right to sanctuary’ is also of primary significance (Zuboff, 2019). It is among the most ancient of human rights and thus of vital and enduring value. But it is far from impregnable when ‘physical places, including our homes are increasingly saturated with informational violations as our lives are rendered as behaviour and expropriated as surplus’ (Zuboff, 2019). Moreover, the power of Big Other ‘outruns society and law in a self-authorised destruction of (this right) as it overwhelms considerations of justice with its tactical mastery of shock and awe’ (Zuboff, 2019). What is required, therefore, are ‘new forms of countervailing authority and power’ (Zuboff, 2019). In place of a swelling ‘social void’ this depth critique envisages both ‘direct challenges’ to the power of Surveillance Capitalism and a commitment to ‘new forms of creative action’ (Zuboff, 2019, pp. 479-486). Zuboff (2019) also advances a number of broad suggestions about what, in her view, needs to be done to rein in Surveillance Capitalism (SC). 
In summary they include: • Naming and establishing our bearings, re-awakening our astonishment and sharing a sense of righteous dignity. • Giving voice to our collective outrage and refusal of the diminished futures on offer. • Becoming alert to the historical contingency of SC by calling attention to ordinary values and expectations that existed before it began its campaign of psychic numbing. • Establishing new centres of countervailing civic power equipped with laws that reject the fundamental legitimacy of SC’s declarations and interrupt its most basic operations (Zuboff, 2019, pp. 395-421). A new regulatory regime equipped with adequate laws will clearly take time and effort to achieve. Of the three key suggestions that Zuboff makes, at least two are based on historical precedents: First, interrupt and outlaw surveillance capitalism’s data supplies and revenue flows. This means, at the front end, outlawing the secret theft of private experience. At the back end, we can disrupt revenues by outlawing markets that trade in human futures, knowing that their imperatives are fundamentally anti-democratic… Second, research over the past decade suggests that when ‘users’ are informed of surveillance capitalism’s backstage operations, they want protection, and they want alternatives. We need laws and regulation designed to advantage companies that want to break with surveillance capitalism… Third, lawmakers will need to support new forms of collective action, just as nearly a century ago workers won legal protection for their rights to organise, to bargain collectively and to strike. Lawmakers need citizen support, and citizens need the leadership of their elected officials (Zuboff, 2019b). Katy Cook’s exploration of the psychology of Silicon Valley identified similar points of clarity and reached similar conclusions. She confirmed that we are facing an ‘unprecedented transition’ (Cook, 2020). 
Related to this is a strong belief that ‘tech cannot fix itself.’ For her, ‘the notion that more tech is the answer to bad tech is psychologically curious, irrational and self-serving; yet it happens constantly, not only within the tech industry, but within society’ (Cook, 2020). She adds that ‘our increased reliance on technical solutions is rooted in a cultural narrative that purports the boundless power of technology’ (Cook, 2020, p. 233). Clearly the embedded symbolic power of such cultural narratives also needs to be accounted for and moderated. What might be called the ‘dual nature’ of technology also helps clarify why the values, beliefs and practices that drive its use in these forms won’t be corrected by its promoters and developers. A staff writer for The Atlantic who attended a 2020 Las Vegas consumer electronics show concluded that all available ‘solutions’ on offer involved the use of yet more technology. Given that most existing forms have known faults and costs, she emerged with a strong sense that this high-tech industry was less concerned with solving real problems than with ‘capitalising on the anxieties of the affluent.’ As such it clearly fits a wider pattern (Mull, 2020). To be at all useful, initiatives must originate elsewhere. Hence Cook’s (2020) insistence on: • Understanding what went wrong in the first place. • Understanding the psychology and values driving the industry, in the belief that the world can be a better place; and • Working to ensure the industry moves forward with better values and healthier psychological norms, which, in turn, requires a re-visioning of the tech industry’s ethical foundations. Snowden’s (2019) account originated within the privileged spaces of the intelligence community. He saw how, under the pressure of the 9/11 attack and a renewed sense of threat, the character of that ‘intelligence’ gained new and problematic features (Snowden, 2019). 
This is where events in Silicon Valley connect back directly to themes, narratives, values and priorities in the wider culture of the US. It is a nation that has a long track record of sponsoring ideologies, trends and, indeed, technologies without paying a great deal of attention to the likely consequences. Snowden (2019) is far from alone in wanting us to ‘reclaim our data’ and, in so doing, take active steps to avoid the kind of diminished future that his own experiences have led him to fear. As noted, Doctorow (2020) has a closer, more fine-grained view of the structures, processes and products of the IT revolution, and he sees ‘fake news’ as a particularly serious existential crisis. His main concern is to bring back anti-trust regulation in order to reduce or eliminate the extremes of monopoly power. Turning the tide? Steps are slowly being taken that seek to challenge and limit the power of the Internet Oligarchs. They’re driven by actors in several countries working on behalf of governments and civil society. For example, during 2019 the French data watchdog fined Google €50m ‘for failing to provide users with transparent and understandable information on its data use policies’ (Hern, 2019). The European Union (EU) has flexed its regulatory muscles on several occasions in relation to privacy, taxation and monopolistic behaviour, especially via the General Data Protection Regulation (Wikipedia, 2020). The UK has begun the process of establishing critical infrastructure to enforce a new raft of regulations. It includes a new Competition and Markets Authority (CMA) that contains a dedicated Digital Markets Unit (DMU) with the power to levy serious fines upon companies that fail to abide by the new rules. Even the USA, which has been so slow to react, has shown signs of following suit. For example, in October 2020 the US justice department sued Google for maintaining an illegal monopoly in the online search market. 
In December the US Federal Trade Commission sued Facebook for breaking anti-trust laws, threatening to break the company up into smaller units (Canon, 2020). Only time will tell if Congress will have the courage to repeal the infamous Section 230 of the Communications Decency Act of 1996 mentioned above. In the absence of strong and coordinated regulatory efforts, however, attempts by individual nations to enforce a comprehensive international tax regime upon the oligarchs have proved ineffectual thus far. During 2020 the Australian government took several small but significant steps. It confronted Google and Facebook and forced them to compensate news organisations for the loss of their advertising income and the illegal use of their material (Spears, 2020). Concerns were also expressed about how children and young people in particular are exposed to both the opportunities and the very real dangers of the on-line world. Cyber bullying is of particular concern (Ham, 2020). Very young children are particularly vulnerable since they have no defence against the digital incursions that have occurred through children’s TV programs, games, YouTube and so on. During late 2020 a report surfaced about the fact that ‘always on’ digital assistants in the home were attracting the attention of the very young, who were unconsciously providing family information to remote listeners (Tapper, 2020). In response the Australian government announced that it would create an ‘online harms bill’ to augment other measures such as its existing ‘e-safety’ site. The very real threat of direct exploitation of children and young people for criminal purposes also led to increased support for the Australian Federal Police (AFP). This was part of an even larger grant of AUD$1.66 billion for a cyber-security package provided to the AFP to help the nation defend itself from the growing threat of cybercrime and cyberwar (Galloway, 2020). Tangible results did not take long to appear. 
In mid-2021 the AFP, in collaboration with the FBI, revealed an undercover sting operation known as ‘Ironside’ that severely disrupted prominent drug cartels, uncovered large amounts of illegal drugs and money and led to multiple arrests both in Australia and overseas. Instead of being frustrated by the co-option of encryption technology by criminals, law enforcement had turned it to positive use by clandestinely making the AnOm app available to them. Messaging between criminal networks previously considered ‘secure’ proved to be anything but. The operation not only led to many arrests but also demonstrated that law enforcement would, henceforth, be there in the background using the very latest tech themselves. It was a watershed moment. While what Peter Hartcher calls the ‘cat and mouse game’ will certainly continue, criminal organisations everywhere were placed on notice that they were no longer as safe as they’d assumed (Hartcher, 2021). Taken at face value such practical responses on the part of various Western governments may appear to support the notion that the ‘tide’ may indeed be turning. Yet 2020 was not merely another year. The Covid-19 pandemic was a classic ‘wild card’ familiar to futurists and foresight practitioners. As is well known, it impacted humanity with all the force of an unstoppable biological hurricane. Under the pressure of necessity large numbers of people were driven online. Almost everyone learned how to use Zoom, but few grasped how increased dependence on an already dysfunctional system would place them at greater long-term risk. In the midst of a torrent of unwelcome change it’s all too easy to lose one’s bearings. All of which evokes a playbook and a text that is decidedly less optimistic. As Klein (2017) explains in her analysis of ‘disaster capitalism,’ it is during just such times of shock and disruption, while public attention is diverted, that powerful entities quietly but actively pursue their own specific interests.
As Covid-19 proceeded, physical money almost disappeared, replaced by digital alternatives such as card and ‘contactless’ payments. Few were disposed to consider the longer-term costs of a cash-starved society, but they are considerable, especially for informal uses and the poor (Kale, 2020). They include greater anxiety for, and increasing exploitation of, unbanked people; fewer options for women fleeing abusive relationships; and reduced funding for charities that previously relied on physical money for their cash flow. Overall, the wider public becomes more fully locked into a private banking system from which they have no escape and decreasing autonomy (Kale, 2020). Many organisations dispensed with offices, requiring decision-makers and other employees to work from home and meet ‘virtually.’ Once again, the products and services offered by the Internet giants took centre stage and few involuntary ‘customers’ had time or opportunity to think beyond the moment. Journalist Anna Krien (2020), however, took a close look at the online ‘distance learning’ arrangements adopted by many schools during the pandemic. She found disturbing connections with companies like Apple and Microsoft, whose dedicated delivery platforms and content were widely taken up by schools and parents alike. During school visits she expressed her growing concerns, but to little avail. Since these companies had been courting them quietly for years, it was easy for schools to slip all-too-readily into using commercially designed packages rather than those created by educators according to educational criteria (Krien, 2020). Ronald Deibert (2020) and the Citizen Lab at the University of Toronto have considered these and similar questions. In their view too much attention has been focused on micro-issues, such as the uses and misuses of particular apps.
Meanwhile, ‘an entire landscape has been shifting beneath our feet.’ Specifically, and in relation to the pandemic, they suggest that: This explosion of pandemic-era applications will invariably amplify the defects of the mobile marketing and location tracking industry – a sector made up mostly of bottom-feeder companies whose business model relies on collecting billions of user-generated data points, later sold and repackaged to advertisers, law enforcement, the military, customs and border agencies, and private security services (not to mention bounty hunters and other dubious characters). A shocking number of entrepreneurs and policy makers are nonetheless turning to this cesspool of parasitic firms – poorly regulated and highly prone to abuses – as a proposed pandemic solution… The entire ecosystem presents a bonanza for petty criminals, ransomware opportunists, spyware firms and highly sophisticated nation-state spies alike (Deibert, 2020). Moreover, such concerns are unlikely to recede once the pandemic is over. Indeed: Some argue that this COVID-19-era innovation cycle will pass once there is a vaccine. But the more we embrace and habituate to these new applications, the deeper their tentacles reach into our everyday lives and the harder it will be to walk it all back. The “new normal” that will emerge after COVID-19 is not a one-off, bespoke contact-tracing app. Rather, it is a world that normalizes remote surveillance tools such as Proctorio, where private homes are transformed into ubiquitously monitored workplaces and where shady biometric start-ups and data analytics companies feed off the footloose biosurveillance economy (Deibert, 2020). This raises the very real question as to just how societies already weakened by the virus and its multi-faceted aftermath will be able to gather the will, imagination, resources and organisational capacity to somehow ‘disembed’ themselves from these very same devices and systems.
As mentioned in a previous chapter, there is one country where a very different dynamic has been underway for some time. For reasons best known to itself, the Chinese government has already exceeded the predations and incursions of Western Internet Oligarchs into civil society and is proceeding with the construction of its very own high-tech digital dystopia. The retreat of American leadership over recent decades and the impacts of the pandemic have allowed it to proceed with its strangely arid and inhuman desire for complete state manipulation and control of its population. A valuable study by Khalil on Digital Authoritarianism examines how China viewed the pandemic as a ‘proof of concept’ opportunity to show that its technology ‘with Chinese characteristics’ works and that surveillance on this scale and in an emergency is feasible and effective. She continues: ‘With the CCP’s digital authoritarianism flourishing at home, Chinese-engineered surveillance and tracking systems are now being exported around the globe in line with China’s Superpower Strategy. China is attempting to set new norms in digital rights, privacy, and data collection, simultaneously suppressing dissent at home and promoting the CCP’s geostrategic goals.’ Khalil considers this dangerous for other countries since it may well ‘result in a growing acceptance of mass surveillance, habituation to restrictions on liberties, and fewer checks on the collective use of personal data by the state, even after the public health crisis subsides’ (Khalil, 2020). An obvious lesson to be drawn from this particularly dangerous precedent is the greatly increased need for democratic nations to work together and be ‘vigilant in setting standards and preserving citizens’ rights and liberties.’ If anything, it adds urgency and salience for the free nations of the world to get their own houses in order and, in so doing, present a common front. What will this take?
As discussed earlier, it’s useful to consider responses at several levels of aggregation, each of which may be appropriate to different tasks and actors. Effective coordination between different levels and types of response would certainly increase the chances that more effective options for de-coding and re-constituting the matrix will emerge. At the individual level, for example, we’ve already seen how, over the past two decades, powerful insights have constantly emerged from the efforts, the sense of agency and commitment, of particular individuals. Among the many others that could be included, Tim Berners-Lee’s Contract for the Web, Pasquale’s New Laws of Robotics and author Dave Eggers’ bid to re-imagine the UN Declaration of Human Rights are worthy of mention (Sample, 2019; Funnell, 2020; Eggers, 2018). At the next level, progressive community organisations play a strongly facilitative role. While some, such as the Oxford Internet Institute and the University of Toronto’s Citizen Lab, are located overseas, Australia also happens to be well-resourced in this area. For example, the Australia Institute hosts the Centre for Responsible Technology, which published The Public Square Project (Guiao & Lewis, 2021). The report usefully identifies a number of vital themes and strategies for creating and extending public digital infrastructure. Similarly, a related organisation known as Digital Rights Watch also speaks for civil society by, for example, seeking a ban on facial recognition systems and the ‘microtargeting’ of individuals for political or commercial gain. Both organisations have active campaigns underway in relation to such matters and are easily located online. Finally, we’ve noted that government agencies have not been idle. We have recent, highly relevant proof that Australian citizens and organisations have the active support of powerful digital defence capabilities at the national level to moderate digital crime and cyber-aggression.
Nor has the Australian Human Rights Commission been idle, as its final and substantial report to the government, Human Rights and Technology, clearly demonstrates (Santow, 2021). In summary, while such contributions may be far from the public mind at any particular time, they are each vital players in the fight to ‘delete dystopia.’ Other, perhaps less obvious, factors may also serve to focus and undergird these efforts. For example, one of the most serious charges to be laid against the internet oligarchs, their supporters, investors and other interested parties is that in pursuit of unlimited self-interest they have worked to sustain an environment characterised by stress, conflict and confusion when what the times call for are clarity, integrity and far-sighted care. Yet at present, few seem to be explicitly aware that none of these over-confident, over-powerful entities possess anything remotely like a social licence for the intensive extractive and merchandising procedures they’ve undertaken, or for the many unauthorised uses to which this stolen ‘behavioural surplus’ has been put – to say nothing of those who divert high-tech equipment and expertise to support openly criminal enterprises. A case in point is the way that Mexican drug cartels are reported to have purchased high-tech spyware from their country’s own police force (Schillis-Gallego & Lakhani, 2020). In principle, therefore, democratic agencies have every right to strip them of their illegitimately acquired dominance and power. There is certainly a huge task of institutional innovation and ‘back-filling’ to accomplish first. Ironically enough, some parts of the necessary institutional infrastructure do not need to be re-created from scratch.
It may be recalled that back in 1972 an Office of Technology Assessment (OTA) was established to advise the US Congress on the ‘complex scientific and technical issues of the late 20th Century.’ It produced studies on a wide range of topics including ‘acid rain, health care, climate change and polygraphs.’ It was highly successful and widely emulated, yet it was abolished in 1995 by the newly elected Republican-led Congress, which claimed it was ‘unnecessary’ (Wikipedia, 2015). The point is that, prior to the emergence of the IT revolution and the development of surveillance capitalism, prevailing political elites in the US chose to eliminate this core institutional capability, leaving the nation (and world) ever more vulnerable to the unanticipated costs of high-tech innovation (and, as we now know to our cost, entirely foreseeable events such as global pandemics). Almost three decades on, Institutions of Foresight (IoFs) remain uncommon. Very few nations have a high-quality foresight capability installed at the national level to advise governments on issues such as those discussed here. But this could change fast if what has been learned from previous iterations were to be taken up and consistently applied. In the absence of high-quality scanning, foresight and technology assessment, societies remain profoundly vulnerable to a wide variety of future hazards. These obviously include further high-impact technological innovations and their accompanying disruptions. This is particularly the case with poorer and less developed nations such as the Pacific Islands which, at the time of writing, were about to be connected to the internet by high-speed undersea cable. Needless to say, scant preparation for the ensuing social and cultural impacts had been carried out (Higginbothom, 2020).
This particular example is a reminder that there are still few or no effective, non-commercial ‘filters,’ ‘barriers’ or ‘testing / proving grounds’ through which new technologies and applications are required to pass prior to implementation. The steady rise of Artificial Intelligence (AI) is among the most serious issues of concern, especially when combined with new generations of high-tech weapons (Chan, 2019). Google’s Deep Mind project generates headlines each time it makes new discoveries but, as the property of a vast private company, it raises far more questions than it answers. For example, a 2020 Guardian editorial noted that ‘Only 25% of AI papers publish their code. DeepMind, say experts, regularly does not.’ Lanier goes as far as to suggest that AI should be seen less as a technology than as an ideology: ‘The core of the ideology is that a suite of technologies, designed by a small technical elite, can and should become autonomous from and eventually replace, rather than complement, not just individual humans but much of humanity’ (Lanier, 2020). Similar issues also proliferate in the open market as consumer electronics become more complex and powerful. Apple has, for example, been working to develop its ‘consumer smart glasses’ without reference to any substantive external foresight evaluation. These devices are intended to be worn like regular glasses but include a visible layer of digital information known as augmented reality (AR). While this may sound useful, it raises profound questions indeed, not merely about data access, privacy, regulation and so on, but about the kind of ‘cyborg’ society that would result. If, as suggested here, current IT frameworks and installations are frequently pernicious and defective, we need ways of enquiring at the social level whether such devices have any legitimate place at all in our lives, let alone those of our children. AR glasses would not be free-standing.
They would become one of countless other devices engaged in what’s being called ‘world scraping’: the constant recording and uploading of information on more or less everything. It was referred to by one IT developer as ‘a big tech dream – and a privacy activist’s nightmare.’ He added that: Smart glasses turn people into walking CCTV cameras, and the data a company could gather from that is mindboggling. Every time someone went to a supermarket, their smart glasses would be recording pricing data, stock levels and browsing habits; every time they opened a newspaper, their glasses would know which stories they read, which adverts they looked at and which pictures they lingered on (Hern, 2020). In this context, the need for more appropriate values, enhanced worldviews and a new sense of reality and purpose is paramount. New institutions and institutional settings are required to provide the means by which societies can refresh their view of the past, present and possible futures. The hard questions are indeed right there in plain sight. How, for example, can a society ‘find its bearings’ without putting in place learning contexts in which the broad issues of history, the constitution of the present and the span of possible future options can be freely examined and discussed? How can any social entity make considered choices about its present commitments and aspirations for the future without access to high-quality, dedicated foresight capabilities and services? How can anyone gain a critical purchase on existing and new technologies without the embodied social capacity to do so? It takes years of effort and application to produce highly trained people who qualify as pathfinders and guides to the chaos ahead. None of these things can happen until societies wake up to the existential predicament that humanity has created for itself. But there are distinct signs of hope.
The ‘pushback’ against the Internet as a medium of extraction, exploitation and abuse has already progressed from a few lonely voices to a growing chorus of dissent. If the means can be rapidly put in place to invest in state-backed, cooperatively owned and operated social media, the Oligarchs can be retired from history. They will become redundant as the character and functions of IT shift from one cultural universe (invasion, dispossession and exploitation) to another (respectful fulfilment of authentic needs).
Conclusion The sleeping giant is one name for the public; when it wakes up, when we wake up, we are no longer only the public: we are civil society, the superpower whose nonviolent means are sometimes, for a shining moment, more powerful than violence, more powerful than regimes and armies. We write history with our feet and with our presence and our collective voice and vision (Solnit, 2016). Tech companies have had a monopoly on utopian thinking for their own benefit, defining them as large-scale top-down projects requiring submission to capital’s desires. But this means that resisting them can also become a large-scale project, a utopian project that touches us all and includes us all (Sadowski, 2021). This book has argued that current trends are far from inevitable. Intimations of dystopia are best viewed as warnings that inspire us in many different ways to take decisive action. ‘Deleting Dystopia’ is not about working to eliminate a powerful idea but, rather, getting behind those human and social forces that collectively move us away from its realisation as a radically diminished condition of human life. ‘Understanding the matrix’ is a vital and necessary step in that direction. But ‘technology’ per se is not the only, or main concern. Interior human characteristics such as ignorance, greed, self-regard and what E.O. Wilson once called our ‘paleolithic obstinacy’ have affected the trajectory of human development every bit as much as any conceivable array of devices and tools. The compromised condition of the Internet suggests that they are still doing so today. Navigating around a global, high-tech dystopia confronts humanity with one of its most difficult and challenging tasks. It also suggests a new or renewed emphasis on the most positive and inclusive human qualities such as foresight, self-knowledge, empathy and perceptiveness. 
Which is why developmental psychology, integral theory and related fields constitute vital assets at this time (Gidley, 2017; Slaughter, 2012). Throughout this period, countless warnings have been voiced about the failure of humanity to come to terms with the implications of its growing impacts on the global system. In September 2020, in the middle of a global pandemic, the United Nations (UN) published a ‘state of nature’ report which revealed that the world had failed to meet any of the targets set a decade earlier to stem the tide of destruction (Greenfield, 2020). The UN’s head of biodiversity was quoted as saying that ‘Earth’s living systems as a whole are being compromised… The more humanity exploits nature in unsustainable ways and undermines its contributions to people, the more we undermine our own wellbeing, security and prosperity’ (Greenfield, 2020). We know that coral reefs are disappearing while glaciers and ice sheets are melting at alarming rates. The declines in wildlife populations have been precipitous over the last half-century even as humanity’s population has more than doubled to almost 8 billion people. We know that in relation to the remaining ‘carbon budget’ (the amount of CO2 that can be put into the atmosphere) humanity has no more than a decade to avoid the chaos of irreversible global heating (IPCC, 2021). The environment is, of course, ‘only’ one source of long-term systemic risk but our health and well-being depend entirely upon it. The sober fact is that many human / cultural / technical combinations are unsuccessful and disaster-prone (Oreskes & Conway, 2016; Diamond, 2005). This is certainly the case within our fractured present, when extreme degrees of self-regard took flight within a socially sanctioned economic system designed to maximise private profit at all costs.
This, in turn, occurred within an over-confident, expansionist worldview that encouraged the world’s richest nations to believe that they had the right to promulgate a limitless economy of acquisition and greed. In order to sustain the illusion, dominant players gave themselves permission to view the world as little more than a vast array of resources offering endless extractive opportunities and infinite wealth, a process that continues to this day regardless of global heating and other well-known hazards (Neate, 2019). Few realised at the time that the design template perfected in 1950s America contained no limiting principle and had tended toward ‘overshoot and collapse’ from the very outset (Slaughter, 2010). Yet it was within these very specific human and technological circumstances that the IT revolution took root. The ruthlessness of raw capitalist imperatives, along with the radically limited value set of the oligarchs, encouraged them to grow rich by invading unprotected human space. The defects and dangers associated with these particular human and cultural combinations were and are well known and obvious, but the voices of those who understood them, and sought alternatives, were overwhelmed. Pathways to other and more viable human futures were deliberately cast aside. The result is a world in which extractive hyper-cultures are failing, having reached the early stages of their own entropic breakdown (Wallace-Wells, 2019). The key to moving forward is a paradigmatic shift, or several, from passively accepting the views of reality tenaciously promulgated by Silicon Valley and its agents toward a different reality altogether. Views based on broader, more embracing worldviews and life-affirming values provide far more productive starting points. It’s time to replace the self-centred and defective values of the Internet oligarchs with others that respect our common humanity and the fragility of the world upon which we depend.
Together these provide a more appropriate and durable basis for civilised life. The proposition that knits together so much of what needs to be done is that the IT revolution has been wild, unauthorised, secretive and subversive of our humanity and our world. The practical shift away from what is already a ‘failed future’ has two parts. The first is to comprehensively deny continued, ‘rubber stamp’ social validation to the Internet oligarchs. It was never theirs to begin with. This means creating and enforcing new or renewed rules and regulations upon a recalcitrant and self-serving sector. We have seen that some governments have already started on this path. The second pathway, which again already has its champions and start-ups-in-waiting, is to transfer or duplicate the most socially useful parts of their operations from closed private infrastructures to a range of civil equivalents, each equipped with suitable codes of practice operating exclusively in the public interest. It is indeed an opportunity to ‘reset and rethink the entire technological ecosystem from the ground up’ (Deibert, 2020). None of this, however, is a quick fix. It will take time and there will be setbacks along the road. The goal, however, is clear: an international IT system that is benign, effective, respectful and safe for each and every legitimate need or purpose. The future before us continues to look threatening not because of any built-in necessity but because societies, and those in positions of power and authority, have still not woken up to the full costs of raw, unrestrained capitalism and the very real threats that now confront humanity. Does it make sense to stand by and passively watch the world’s most powerful organisations carelessly generate new waves of technological disruption regardless of the consequences? If so, we can say farewell to what remains of our environment, our autonomy, our privacy and humanity. If not, then we need to act together without further delay.
Our goal for students is not only to be familiar with Foundations concepts, but also to become scholars of Foundations of Education. The additional readings selected throughout the course modules reflect this goal, as we have included authors who are considered experts in the field to supplement the course content. In this module, you will be exposed to Sonia Nieto, Diane Ravitch, and Ronald Takaki. Learning Objectives Upon completing this module, students will be able to: • Define and discuss the idea of multicultural education and the different philosophical approaches to accomplishing this. • Understand the historical roots of multicultural education. • Explain the idea of culture and be able to provide examples of how culture is influential. (Key terms: dominant culture, ethnocentrism, cultural capital, compensatory programs, acculturation) • Recognize the various acculturation outcomes for immigrants. 01: Multiculturalism Why Multicultural Education? What is multicultural education? It is likely a term you have heard before, and perhaps something that you have never spent much time thinking about. Multicultural education is the idea that the United States is made up of many different kinds of people, and the public education people receive should be reflective and inclusive of all the different backgrounds that make up our country. Additionally, multicultural education should help all students feel that they have a place in our schools and society, regardless of their race, social class, gender, sexual identity, disability, language and geographic background, or religious background. In order to help you understand this importance, we have organized these modules into groups based on these differences. By understanding the experiences and societal impacts of each of these dimensions of diversity, you should be more prepared to teach or interact with people from all backgrounds going forward.
As our schools and society in the United States continue to become more diverse, multicultural education is critical to fostering empathy and understanding toward one another. While many people can agree that this is an important concept, implementation of multicultural education can look very different in practice. In today’s educational policy landscape, multicultural education is often viewed as being separate from general education, something that can be used occasionally to enrich or complement the general academic program. For example, many schools use national events like Black History Month or Martin Luther King Jr. Day as an opportunity to learn about the contributions of African-Americans, while others organize events to celebrate multiculturalism. Diversity Weeks or school assemblies designed to promote racial and ethnic diversity can be observed in districts across America. While these efforts are no doubt designed and implemented with benevolent intentions, many scholars in the field of multicultural education have suggested that current educational policies and practices address only the surface level of multiculturalism by highlighting differences in food, dress, music, dance, and language, without addressing the underlying issues of educational values, worldview, and knowledge construction (Banks, 2004; Gollnick & Chinn, 2013; Nieto & Bode, 2012). As such, the conceptualization of multiculturalism shifts from a product to a process. Rather than offer simple educational products–like prescribed, closed-ended lesson plans–these modules view multiculturalism as a long-term investment that shifts and shapes educational experiences at all levels of policy and practice. The aim of these modules is to expand the understanding of multiculturalism to create a more inclusive and more holistic approach to teaching and learning.
While many discussions of multiculturalism center around issues of race and ethnicity, we posit that class and socioeconomic status, gender, sexual orientation, language, immigration, geography, and religion also play crucial roles in the development of equal and equitable educational policies and practices. Therefore, after a discussion of the sociopolitical and sociocultural contexts of education and the overarching approaches to multicultural education, this module will investigate each of the individual identifiers that contribute to a more complete view of multiculturalism. History of Multiculturalism Multiculturalism, by definition, contains–and is characterized by–the diverse histories, ideologies, and social movements that combined to create the body of educational theories and practices that exist today. Given the history of discrimination based on race, ethnicity, gender, and language in the United States, the American education system offered unequal educational experiences to students for centuries. Prior to the Feminist and Civil Rights Movements of the 1960s, dominant social groups–for the most part white, wealthy males–held the social, intellectual, political, and economic power to construct the knowledge, ideologies, and cultural norms that became institutionalized in American society and therefore implemented in educational settings. A wide body of scholarly research documented the systematic construction of educational curricula that validated and reinforced the dominance of European and Western values, while simultaneously degrading and devaluing the contributions of communities of color (Banks, 1993; Fine, 1987; Hines, 1964). Theoretical and empirical research confirmed that the imposition of a singular construction of knowledge based on the political, cultural, and economic ideologies of the dominant group was detrimental to the education of students whose backgrounds did not align with the dominant group (Banks, 2004).
These findings, which were documented in formal research as well as in the informal experiences of countless individuals, contributed to the formation of a more unified conception of multicultural education. It is important, however, to situate modern understandings of multiculturalism within their historical contexts. In an effort to reflect the diverse history of multiculturalism, Fullinwider (2003) identified several “tributaries” that converged to create multicultural education. Intergroup education, the Civil Rights Movement of the 1960s, ethnic studies programs, and feminist and gender equality movements offered some of the most influential contributions to the contemporary conception of multiculturalism. Each of these traditions challenged dominant patterns of knowledge construction in American society, and thereby influenced teaching and learning in schools across the nation. While some education historians challenge the idea that the intergroup education movement influenced the development of early multiculturalism (Boyle-Baise, 1999), others see it as a precursor to the establishment of the ethnic studies movement that was integral to its recognition as a legitimate academic field (Banks, 2004). The intergroup education movement was a product of the larger political, social, and economic context of the era. Throughout the 1940s, the effects and consequences of the United States’ involvement in World War II radically changed the way of life for many Americans. Economically, the increased availability of wartime jobs in the North and West enticed large numbers of African Americans, Mexican Americans, rural whites, and women to migrate into urban centers to fill vacant jobs.
Politically, the wartime nationalism sparked–to a degree–a more inclusive national political narrative that promoted tolerance of African Americans in order to achieve the common goal of defeating Germany and Japan, though the war also sparked increased racism against Asian Americans, particularly Japanese Americans, who were subject to harassment and violence, in addition to being forced to live in internment camps. The social consequences of the war, however, were more complex. With increasing diversity in many urban centers, conflict based on race, ethnicity, and gender became a common experience. In the years following the war, black and Hispanic soldiers were legally and institutionally barred from receiving their GI and other veteran benefits, which was a stark reminder of the deeply entrenched racism in American society. The unrest caused politicians and policy-makers to turn to education for solutions to social issues. In response to the social, political, and economic consequences of World War II, the intergroup education movement aimed to reduce racial and ethnic tension by promoting an educational ideology of tolerance. Intergroup education grew out of progressive education and was headed by prominent educational researchers such as Hilda Taba, Howard Wilson, and Lloyd Cook (Banks, 2004). In order to achieve its central goal of reducing racial tensions and promoting intergroup tolerance and understanding, the intergroup education movement advocated for the establishment of intergroup relations centers, active involvement in social tolerance movements, and the creation of more inclusive educational objectives, curriculum, and pedagogy throughout educational experiences, from kindergartens through universities. These programs were implemented into practice sporadically and non-uniformly, which led to mixed results in their effectiveness in achieving their stated goals.
However, the intergroup education movement produced a number of influential research studies and reports that offered empirical evidence of educational inequalities based on race, ethnicity, gender, and religion. These studies, including Kenneth and Mamie Clark’s doll study, confirmed and helped to support landmark cases that directly preceded the Civil Rights Movement. While the intergroup education movement was viewed as a departure from previous educational traditions because of its inclusiveness, it was rooted primarily in an ideology that promoted tolerance and human relations, without a specific focus on the individual histories of different minority groups or the overarching institutionalized discrimination in American society, which became a central focus of the Civil Rights Movement, revisionist history, and ethnic studies programs. It is this distinction that has led scholars to view the intergroup education movement as an educational ideology separate from multiculturalism (Boyle-Baise, 1999). The scholarly literature identified the Civil Rights Movement as one of the major factors that contributed to modern multicultural education (Banks, 2004; Banks, 1993; Gay, 1983; Valverde, 1977). Clearly, the overarching goals and objectives of multicultural education reflect the struggle for freedom and equality embodied in the Civil Rights Movement. The Brown v. Board of Education ruling in 1954 marked the beginning of court-ordered educational integration in the United States. However, the oft-quoted “all deliberate speed” language in the court’s decision limited the ability for federal oversight to ensure that states complied with the decision. Despite the ruling, the integration of schools continued to be a hard-fought battle waged by civil rights activists, parent groups, and even students themselves.
During this time, the focus was so heavily on the integration of schools and the physical safety of students that there was little room for inquiry into curriculum content and pedagogical practices. However, as the Civil Rights Movement advanced, educational researchers and activists began to question the educational policies and practices of the time and began to develop the underlying foundations of multicultural education. After the passage of the Civil Rights Act of 1964 and the Voting Rights Act of 1965, the character of the Civil Rights Movement began to shift towards cultural pride, self-determination, and political activism (Gay, 1983). The growth of cultural consciousness among black activist groups sparked intellectual inquiry into the histories, traditions, and worldviews of cultures that had previously been excluded from the curriculum in American education, from elementary school through university. Armed with a critical consciousness, academics and practitioners conducted numerous analyses and reviews of curriculum contents and textbooks. Not only did these studies find that the contributions of minority groups and women were systematically left out of the curriculum, they also identified that the vast majority of textbooks contained “ethnic distortions, stereotypes, omissions, and misinformation” (Gay, 1983, p.561). The misinformation that existed in historiographies and curriculum content served as an impetus for scholars to revisit historical narratives with a specific focus on the contributions and experiences of non-dominant groups. These counter-narratives–sometimes called revisionist histories–challenged the intellectual status quo and offered a contrasting approach to the construction of knowledge.
As the field of counter-narratives and revisionist history gained ground in academia, students and professors at universities and colleges across the nation began to demand specific academic programs that centered around the experiences of minority groups in America. In the shadow of Martin Luther King Jr’s assassination in 1968, the Civil Rights Movement became increasingly fractured as various activist groups trended in different directions, though many shared similar goals and aims. Educationally, the combination of a resurgence of cultural pride and the counter-narratives of revisionist history created a sense of isolation and alienation from mainstream American culture and inspired a separatist perspective on curriculum and instruction. With the support of faculty, minority student activist groups on college and university campuses petitioned for specialized programs that addressed racial and ethnic issues. In response to the pressure from students, colleges and universities established Black Studies programs and courses throughout the late 1960s and early 1970s. In 1968, San Francisco State University became the first university to offer a Black Studies major. The establishment of Black Studies programs helped open the door for other groups who had been subjected to institutionalized discrimination to organize and lobby for programs and courses specific to their experiences. By 1973, approximately 600 new ethnic studies programs had been established at colleges and universities around the United States (http://munews.missouri.edu/news-releases). As distinct ethnic studies programs became increasingly common in educational settings, scholars and researchers began to identify commonalities between the philosophies, ideologies, and experiences addressed across the separate programs. These common ideas became a focal point for the establishment of multiethnic perspectives, which are considered to be the antecedent to multiculturalism.
The shift from ethnic studies to multiethnic studies was guided by the work of many scholars who are now considered to be the founders of multiculturalism, including James Banks, Christine Bennett, Geneva Gay, Donna Gollnick, and Carl Grant. While the central goals of achieving equal and equitable educational experiences for all students through critical thinking, social justice, and community activism did not change during this period, some worried that the conceptual frameworks and theoretical perspectives would become muddled and less clear due to the wide variety of experiences of the various minority groups included under the multiethnic umbrella (Grant, 1978). Despite these warnings, the boundaries of multiethnic education quickly expanded to become multiculturalism with the addition of gender and disability issues. Not surprisingly, the counter-narratives of ethnic studies were mirrored by a movement in gender studies that contributed to the creation of feminist movements and a resurgence of scholarship that focused on women’s issues and larger discussions about the importance of gender in society. The inclusion of gender studies in multiculturalism allowed for the development of new frameworks for analysis. For example, concepts of intersectionality and the interlocking experiences based on race, class, gender, and other identifiers–which are common in modern multiculturalism–grew out of scholarship and research in Women’s Studies and Ethnic Studies. These developments allowed for deeper investigations into the systems of discrimination and advantage in American society.
However, the inclusion of gender and disability in multiculturalism was not welcomed by all. Some scholars continued to challenge the inclusion of gender, disability, and age because experiences based on those identifiers did not constitute a “pervasive worldview” and therefore did not conform with the commonly accepted definition of culture in the field of multiculturalism (Boyle-Baise, 1999). Regardless, modern conceptions of multiculturalism often include a consideration of gender, sexual orientation, disability, religion, and age (Gollnick & Chinn, 2013; Nieto & Bode, 2012). Current perspectives on multicultural education continue to reflect the initial goals of improving educational equity and equality, reducing discrimination, and promoting active involvement in social justice and democratic society. The history of multiculturalism includes diverse influences and contributions that truly embody the ‘multi’ of multiculturalism.
Although educational policies and practices are sometimes viewed as if they existed in a vacuum, separate from the larger social, political, and cultural contexts, one of the central tenets of multiculturalism asserts that educational decision-making is heavily influenced by each of these contexts. In particular, many scholars of multicultural education point to the importance of the sociopolitical context of education in the modern era as educational policies and practices become increasingly politicized. Given the political nature of educational decision making, the educational policies and practices implemented at national, state, and local levels reflect the values, traditions, and worldviews of the individuals and groups responsible for their design and implementation, which inherently makes education a non-neutral process, though it is often assumed to be neutral. Understanding the sociopolitical context of education allows for a critical analysis of educational policies and practices in an effort to reduce educational inequalities, improve the achievement of all students, and prepare students to participate in democratic society. In the field of multicultural education–and across the social sciences–the sociopolitical context refers to the laws, regulations, mandates, policies, practices, traditions, values, and beliefs that exist at the intersection of social life and political life. For example, freedom of religion is one of the fundamental principles of life in American society, and therefore there are laws in place that protect every individual’s right to worship as they choose. In this instance, the social practices (ideologies, beliefs, traditions) and political process (laws, regulations, policies) reflect each other and combine to create a sociopolitical context that is, in principle, welcoming to all religious practices. There are similar connections between the social and the political in the field of education.
Given that one of the main purposes of schooling is to prepare students to become productive members of society, classroom practices must reflect–to some extent–the characteristics of the larger social and political community. For example, in the United States, many schools use student governments to expose students to the principles of democratic society. By organizing debates, holding elections, and giving student representatives a voice in educational decision making, schools hope to impart upon students the importance of engaging in the political process. The policies and practices that support the operation of student government directly reflect the larger sociopolitical context of the United States. Internationally, the use of student government often reflects the political system used in that country, if a student government organization exists at all. However, sociopolitical contexts influence educational experiences in subtler ways as well. Throughout the history of American education, school policies and practices have reflected the ideological perspectives and worldviews of the underlying sociopolitical context. As stated above, schools in democratic societies often have democratic student government organizations that reflect the political organization of the larger society, while similar organizations cannot be found in schools in countries that do not practice democracy. Similarly, if a society shares a widespread belief that some groups (based on race, class, language, or any other identifier) are inherently more intelligent than another, educational policies and practices will reflect that belief. For example, as the United States expanded westward into Native American lands during the late 19th and early 20th centuries, many Americans shared the widespread belief that Native Americans were inherently less intelligent and less civilized than white Americans.
This belief system served as a justification for the “Manifest Destiny” ideology that encouraged further westward expansion. Not surprisingly, the larger sociopolitical context of the time influenced educational policies and practices. In large numbers, young Native Americans were torn from their families and forced into boarding schools where they were stripped of their traditions and customs before being involuntarily assimilated into “American culture”. These Native American boarding schools outlawed indigenous languages and religions. They required students to adopt western names, wear western clothes, and learn western customs. While from a contemporary perspective these schools were clearly inhumane, racist, and discriminatory, they illustrate how powerful the sociopolitical climate of the era can be in the implementation of educational policies and practices. Educational policies today continue to reflect the larger social and political ideologies, worldviews, and belief systems of American society, and although instances of blatant discrimination based on race, ethnicity, class, gender, sexual orientation, language, or any other identifier have been dramatically reduced in recent decades, a critical investigation into contemporary schooling reveals that individuals and groups are systematically advantaged and disadvantaged based on their identities and backgrounds, which will be explored in more depth in subsequent sections of this (book/class). The role of social institutions in educational experiences is another key consideration in developing an understanding of the sociopolitical contexts of education. The term social institutions refers to the established, standardized patterns of rule-governed behavior within a community, group, or other social system. Generally, the term social institutions includes a consideration of the socially accepted patterns of behavior set by the family, schools, religion, and economic and political systems.
Each social institution contributes to the efficiency and sustained functionality of the larger society by ensuring that individuals behave in a manner that is consistent with the larger structure, which allows them to contribute to the society. Traffic regulations offer an example of how social institutions work together to create and ensure safety and efficiency in society. In order to reduce chaos, danger, and inefficiency along roadways in the United States, political institutions have created laws and regulations that govern behavior along public roads. Drivers found in violation of these regulations face punishment or fines that are determined by the judicial system. Furthermore, families and schools–and to some extent religious organizations–are responsible for teaching young people the rules and regulations that govern transportation in their society. The streamlined and regulated transportation system produced by the aforementioned social institutions allows economic institutions to function more efficiently. Functionalist Theory is a term used to refer to the perspective that institutions fill functional prerequisites in society and are necessary for social efficiency, as seen in the previous example. However, Conflict Theory refers to the idea that social institutions work to reinforce inequalities and uphold dominant group power. Using the same transportation example, a conflict theorist might argue that the regulations that require licensing fees before being able to legally operate a vehicle disproportionately impact poor people, which would limit their ability to move freely and thereby make it more difficult for them to hold and maintain a job that would allow them to move into a higher socioeconomic class. Another argument from the conflict theorist perspective might challenge institutionalized policies that require drivers to present proof of citizenship or immigration papers before being allowed to legally operate a vehicle.
These policies systematically deny the right of freedom of movement to immigrants who entered the United States illegally, thereby limiting their civil rights as well as their ability to contribute to the American economy. Both the Functionalist Theory and Conflict Theory perspectives can contribute to a nuanced understanding of contemporary educational policies and practices by providing contrasting viewpoints on the same issue. Throughout these modules these perspectives will inform the discussion of educational institutions and how they influence–and are influenced by–other social institutions. Much like educational policies and practices, the rules and regulations set by social institutions do not exist within a vacuum, nor are they neutral in regard to the way they impact individuals and groups. Institutional discrimination refers to “the adverse treatment of and impact on members of minority groups due to the explicit and implicit rules that regulate behavior (including rules set by firms, schools, government, markets, and society). Institutional discrimination occurs when the rules, practices, or ‘non-conscious understanding of appropriate conduct’ systematically advantage or disadvantage members of particular groups” (Bayer, 2011). Historical examples of institutional discrimination abound in American history. In the field of education, perhaps the most well-known example of institutionalized discrimination is the existence of segregated schools prior to the Brown v. Board of Education decision in 1954. During this era, students of color were institutionally and systematically prevented from attending white schools, and instead were forced to attend schools that lacked sufficient financial, material, and human resources.
Institutional discrimination in contemporary society, however, is often subtler given that there are a plethora of laws that explicitly prevent discrimination based on race, ethnicity, gender, sexual orientation, or any other identifier. Regardless of those laws, social institutions and institutionalized discrimination continue to disadvantage non-dominant groups, thereby advantaging members of the dominant group. Using housing as an example: homeowners’ associations are local organizations that regulate the rules and behaviors within a particular housing community. If a homeowners’ association decides that only nuclear families can live within its community and creates a bylaw that stipulates such, the practice of allowing nuclear families and denying non-nuclear families becomes codified as an institutionalized policy. While the policy does not directly state that it intends to be discriminatory, it would disproportionately affect families from cultures that traditionally have households that include aunts, uncles, cousins, grandparents, and other extended family members, a practice that is common in many Asian, African, and South American communities. Although hypothetical, this example illustrates the subtle ways in which institutional discrimination surfaces in contemporary society. A more concrete example of institutionalized discrimination can be drawn from the housing market in New Orleans as homes were being rebuilt in the aftermath of Hurricane Katrina. While the Lower Ninth Ward–a mostly black neighborhood–was among the most damaged neighborhoods in New Orleans, just down river the St. Bernard Parish neighborhood–which was mostly white–was also heavily damaged. By 2009, most of St. Bernard Parish had been rebuilt, while the Lower Ninth Ward remained unfit for living. As families began moving back into the neighborhood, elected officials in St.
Bernard Parish passed a piece of legislation that required property owners to rent only to ‘blood relatives’. In effect, the policy barred potential black residents from moving into the area and served to maintain the racial makeup of the neighborhood prior to Katrina. After several months of implementation, the policy was legally challenged and was found to be in violation of the Fair Housing Act in Louisiana courts. In 2014, the Parish agreed to pay approximately $1.8 million in settlements to families negatively affected by the policy. This example illustrates how institutionalized discrimination surfaces in contemporary society. Throughout the modules, instances of institutional discrimination in schools, as well as in American society as a whole, will be critically analyzed in order to develop an understanding of how educators can work to reduce inequality and promote academic achievement for all students. A basic understanding of social institutions and institutional discrimination helps inform this course’s approach to key educational issues in the field of multicultural education. As the student body in American schools becomes increasingly diverse, it becomes increasingly important for future teachers to know and understand how students’ identities might impact their educational experiences as well as their experiences in larger social and political settings. While there are many issues facing education today, Nieto and Bode (2012) identified four key terms that are central to understanding the sociopolitical context surrounding multicultural education. These terms include: equal and equitable education, the ‘achievement gap’, deficit theories, and social justice. The terms equal and equitable are often used synonymously, though they have vastly different meanings.
While most educators would agree that providing an equal education to all students is an important part of their mission, it is sometimes more important to focus on creating equitable educational experiences. At its core, an equal education means providing exactly the same resources and opportunities for all students, regardless of their background. An equal education, however, does not ensure that all students will achieve equally. Take English Language Learners (ELLs) as an example. A group of ELL students sitting in the same classroom as native English speakers, listening to the same lecture, reading the same books, and taking the same assessments could be considered an equal education given that all students are receiving equal access to all of the educational experiences and materials. The outcome of this ostensibly equal education, however, would not be equitable. The ELL students would not be able to comprehend the lecture, books, or assessments and would therefore not be given the real possibility of achieving at an equal level, which is the aim of an equitable education. Equity refers to the educational process that “provides students with what they need to achieve equality” (Nieto & Bode, 2012, p.9). In the case of the ELL example, an equitable education would provide additional resources– perhaps including ESL specialists, bilingual activities and materials, and/or programs that foster native language literacy– to the ELL students to ensure that they are welcomed into the classroom community and are given the opportunity to learn and succeed equally. Working towards educational equality by providing equitable educational experiences is one of the central tenets of multicultural education and will be a recurring topic throughout these modules. A second key term that is crucial in understanding multicultural education is the ‘achievement gap’. 
A large body of research has documented that students from racially and linguistically marginalized groups as well as students from low-income families generally achieve less than other students in educational settings. Large-scale studies of standardized assessments revealed that white students outperformed black, Hispanic, and Native American students in reading, writing, and mathematics by at least 26 points on a scale from 0 to 500 (Nieto and Bode, 2012; National Center for Educational Statistics, 2009). Though usage of the term has changed over time, it often focuses on the role that students themselves play in their underachievement, which has drawn criticism from advocates of multicultural education because it places too much responsibility on the individual rather than considering the larger sociopolitical and sociocultural contexts surrounding education. While gaps in educational performance no doubt exist, Nieto and Bode (2012) suggest that using terms such as “resource gap”, “opportunity gap”, or “expectations gap” may be more accurate in describing the realities faced by marginalized students who often attend schools with limited resources, limited opportunities for educational advancement or employment in their communities, and face lowered expectations from their teachers and school personnel (p.13). Throughout this (book/course), issues related to the ‘achievement gap’ and educational inequalities based on race, class, gender, and other identifiers will be viewed within the larger social, cultural, economic, and political contexts in order to create a more holistic and systematic understanding of student experiences, rather than focusing purely on the individual. Historically in educational research, deficit theories have been used to explain how and why the achievement gap exists, but since the 1970s, scholars of multicultural education have been working to dismantle the lasting influence of deficit theory perspectives in contemporary education.
The term ‘deficit theories’ refers to the assumption that some students perform worse than others in educational settings due to genetic, cultural, linguistic, or experiential differences that prevent them from learning. The roots of deficit theories can be found in 19th century pseudo-scientific studies that purported to show ‘scientific evidence’ that classified the intelligence and behavior characteristics of various racial groups. The vast majority of these studies were conducted by white men, who unsurprisingly, found white men to be the most intelligent group of human beings, with other groups falling in behind in ways that mirrored the accepted social standings of the era (Gould, 1981). Though many have been disproved, deficit theories continue to surface in educational research and discourse. Reports suggesting that academic underachievement is a product of cultural deprivation or a dysfunctional relationship with school harken back to deficit theory perspectives. Much like the ‘achievement gap’, deficit theories place the burden of academic underachievement on students and their families, rather than considering how the social and institutional contexts might impact student learning. Deficit theories also create a culture of despondency among educators and administrators since they support the idea that students’ ability to achieve is predetermined by factors outside of the teacher’s control. Multicultural education aims to disrupt the prevalence of deficit theory perspectives by encouraging a more nuanced analysis of student achievement that considers the structural and cultural contexts surrounding American schooling. The fourth and final term that is central to understanding the sociopolitical context of multicultural education is social justice.
Throughout these modules, the term social justice will be employed to describe efforts to reduce educational inequalities, promote academic achievement, and engage students in their local, state, and national communities. Social justice is multifaceted in that it embodies the ideologies, philosophies, approaches, and actions that work towards improving the quality of life for all individuals and communities. Not only does social justice aim to improve access to material and human resources for students in underserved communities, it also exposes inequalities by challenging and confronting misconceptions and stereotypes through the use of critical thinking and activism. Finally, in order for social justice initiatives to be successful, they must “draw on the talents and strengths that students bring to their education” (Nieto and Bode, 2012, p.12). This allows students to see their experiences represented in curriculum content, which can empower and inspire students–not only to excel academically–but also to engage in activities that strengthen and build the community around them. These key components of social justice permeate the field of multicultural education. In order to develop a holistic understanding of educational experiences, these modules will interpret and analyze educational policies and practices through a lens that considers the sociopolitical contexts of education. By recognizing the influence that social and political ideologies have on educational decision making, multicultural approaches to education aim to reduce educational inequalities, improve the achievement of all students, and prepare students to participate in democratic society.
Culture and Society
One of the main goals of multicultural education is to help bridge understanding between the dominant culture and different groups of people who may have been marginalized by that culture. Therefore, it is important to understand exactly what is meant by the term “dominant culture”. For most sociologists, culture refers to a roadmap for living within a society. Culture includes many components, such as language, customs, traditions, values, food, music, dress, gender roles, importance of religion, and so on. As culture encompasses so many aspects of diversity, it is one of the key components for understanding and discussing the experiences of all types of groups that will come in the following modules. Culture imposes order and meaning on our experiences, and it allows us to predict how others will behave in certain situations. For example, if you are in a classroom and a student raises their hand, we know this means they have a question. But, culture includes so many things – the way people talk, dress, interact, eat, live, and so on. Within each culture are individuals, who are unique expressions of many cultures and subcultures. There are two major responses to culture. One is enculturation, or, the process of acquiring the characteristics of a culture and knowing how to navigate behaviors, customs, etc. This often happens simply through the process of growing up within a given culture, but is certainly something that can continue should the culture around you change. For example, if you have ever studied abroad or visited another country for an extended amount of time, you will likely have encountered another culture where you needed to adapt and learn how to navigate social behaviors within that culture. Even in English-speaking countries there can be differences; while those of us in the United States often ask for the “bathroom”, Canadians refer to it as the “washroom”.
The second major response is socialization, which refers to the process of learning the social norms of a culture. This can include what it means to be a daughter, husband, student, etc., and the societal expectations within those roles.

Dominant culture refers to the aspects of culture that prevail in a society and are treated as the norm. If you think back to our discussion a few paragraphs ago, we mentioned that culture helps to guide language, customs, values, food, and so on. Given that, how would you describe the dominant culture in the United States? White? English-speaking? Middle class? Christian? These are just a few terms that are often used to describe the dominant U.S. culture. While you may disagree or find you do not fit into those categories, a key feature of dominant culture is that it is maintained through our institutions. These can be our political and economic institutions (we will go into more detail about these in Module 3 on Class and Socioeconomic Status), churches, schools, and media. When you examine the leaders in most of these areas, you find they would meet the criteria listed above.

When people begin to believe that their culture is best and that any others are strange, inferior, or wrong, it is referred to as ethnocentrism. At its root, ethnocentrism is the belief that your culture is correct and superior to all others; no other culture is seen as an equally viable option. The opposite of ethnocentrism is cultural relativism: the attempt to understand other cultures on their own terms rather than judging them by the standards of your own. For example, if you religiously identify as Methodist and attend services and participate regularly, you may be better able to understand how the religious beliefs of Jews or Muslims shape their daily living, customs, and values.

Culture and School

So, what does culture have to do with education? 
There are two main ways that culture interacts with our education system. First, culture influences what and how we learn; second, greater experience with a dominant culture often equals greater success within that culture.

To elaborate on how culture influences what and how we learn, we can look to history for some strong examples. One of the most blatant is the debate between geocentric and heliocentric theory. Prior to the work of Galileo, most scientists, and certainly the influential Catholic Church, fully believed the Earth was at the center of the solar system. However, mounting scientific evidence showed the sun was actually at the center. Were the church and the broader culture quick to change their opinion based on scientific evidence? Not exactly. Galileo was investigated by the Roman Inquisition beginning in 1615 and, following his trial in 1633, spent the rest of his life under house arrest. It was not until 1992 that the Catholic Church formally apologized for its handling of Galileo. While this may be an extreme example, we continue to see culture influencing other aspects of learning today. Topics such as climate change, evolution, and sex education continue to be influenced in school settings by politicians and dominant U.S. culture.

The second way culture is important to education is that the more experience a person has with the dominant culture, the more likely they are to be successful within that culture. Sociologists often discuss these experiences as cultural capital: symbolic credit a person acquires by having more experiences with the dominant culture. It is important to realize here, however, that all students come to school with some capital; it just may not be the capital schools expect them to have. Research tells us that the most valuable cultural capital falls into two tiers. 
Tier one activities include things like reading at least three hours per week, owning a home computer, attending preschool, and having exposure to the performing arts (playing an instrument, singing in a chorus, etc.). Tier two experiences, which research has shown matter but have a smaller impact, include having high family educational expectations, rules limiting television and screen time, participating in sports teams or clubs, completing arts and crafts activities, and exposure to many different types of music. Other examples of capital students may have, but that schools may not value in curriculum and assessment, include knowing how to navigate public transit, cultivating and growing a garden, knowing how to birth a calf or other animal, and knowing how to load and shoot a shotgun.

Families are often erroneously blamed for not providing their children with the cultural capital needed to succeed in schools. These children are often labeled as having a cultural deficit or experiencing cultural deprivation (a somewhat insensitive and biased term). The issue these terms attempt to describe, however, is a real one: often the knowledge and experiences schools expect of students do not line up with students’ actual knowledge and experiences. Essentially, there is a gap between what our schools expect students to know and have experienced and what students actually know and have experienced. Compensatory programs are the programs, funding, and other assistance that school systems and communities have put in place to address these gaps. Field trips and community schools are just two examples of such programs. 
The following list includes several different programs you may see in schools and communities:

Examples of Compensatory Programs
• Title I of the Elementary & Secondary Education Act (ESEA)
• Programs and support services for the disabled
• Head Start
• Family literacy programs
• Language instruction
• Extended day instruction
• Transportation services
• Computer instruction
Another interesting point to consider is how individuals and families respond when they are confronted with a new culture. Acculturation is the term sociologists use to describe the process of adopting or taking on the culture of a new group. Most often, this involves immigrants adopting the dominant culture as their own. This can include speaking the new language, adopting a new set of core values, changing dress and foods, and so forth. The immigrant family or individual usually decides the degree to which acculturation will take place. There are multiple models that address acculturation outcomes, but only two will be highlighted here.

One approach to understanding acculturation is the model proposed by Portes & Rumbaut (2001). They identified the following acculturation patterns:
• Consonant acculturation – Parents and children learn the language and culture of the community in which they live at approximately the same time.
• Dissonant acculturation – Children learn the new language and the new culture, while parents retain the native language and culture, leading to conflict and decreased parental authority.
• Selective acculturation – Children learn the dominant culture and language but retain significant elements of the native culture.

However, these outcomes can certainly be considered too limiting, namely because they only address acculturation in family settings. Not all immigrants who come to the United States come as families, and some of your students may even be studying here alone or through exchange programs. Therefore, the Berry (1980) model is more widely used in research and practice to think about the different ways immigrants adapt to a new country and culture. Berry identified four acculturation outcomes: rejection/encapsulation, deculturation/marginalization, assimilation, and integration/biculturalism.

Rejection/encapsulation refers to an individual’s decision to withdraw from the norms of the larger society; a cultural identity from the home country is retained, but alongside a negative relationship to the dominant society. 
For example, a Chinese immigrant who moves into a Chinese neighborhood, continues speaking only Chinese, and interacts only with other immigrants in the immediate vicinity could be viewed as assuming the rejection/encapsulation outcome of acculturation.

Deculturation/marginalization is marked by individual confusion and anxiety about personal cultural identity and one’s relationship to the larger society. This is the most negative outcome possible: cultural identity is not retained, and there is no positive relationship with the dominant society.

Assimilation is similar to the old melting-pot idea that new immigrants should give up their personal cultural identities in favor of the larger, more dominant societal norms. Immigrants who changed their names upon arriving in America, such as changing the German-sounding “Von Meincke” to the more Anglo “Miller”, would be acting within the assimilation outcome of acculturation. Individual cultural identity is lost, but a positive relationship to the dominant society is established.

Integration/biculturalism is the most positive outcome; this type of acculturation results in the retention of cultural identity and a positive relationship to the dominant society. Using this model, integration/biculturalism is the best acculturation outcome for immigrants’ psychological wellbeing because of the balance struck between the culture of the home country and that of the new one.

Keep in mind that each of these outcomes exists along a spectrum; individuals may fall closer to one side or the other within these possibilities. Assimilation is a strong example of this: the forced assimilation of Native American students in government-run schools represents some of the worst examples in United States history, and its outcome would certainly be closer to the marginalization side. However, other immigrant groups came to the United States and willingly assimilated, such as by changing their names, in order to be perceived as “more American”. 
Thus, while the Berry model offers a useful guide for considering the experience of adapting to a new culture, remember that individuals can and do exist in a variety of places within the model. The This I Believe essays in the readings section of this module provide strong examples of two different acculturation experiences. We encourage you to read both of these and consider where each would fall according to this model.
Now that you, hopefully, understand more about the background and key ideas of multicultural education, it is worth investigating how scholars in the field would design and implement multicultural programming in schools. Sonia Nieto’s (2012) piece, Defining Multicultural Education for School Reform, highlights many of the key tenets she thinks should be included in any multicultural program. Additionally, in the two other readings associated with this module, educational philosophers Diane Ravitch and Ronald Takaki put forth competing philosophies to guide the implementation of multicultural education. Takaki, an advocate of particularism, supports the idea that a common culture is both undesirable and unattainable and maintains the position that students learn best from teachers and curriculum that reflect their ethnic backgrounds. Ravitch, on the other hand, advocates for pluralism: the view that the United States does have a rich, common culture made up of various subcultures. As you read, be sure to note the major ideas of each position, as well as the criticisms of each.

One of the easiest ways to think about these different positions is to imagine a circle that represents the culture of the United States. For a particularist, there would be many pieces making up the circle, but they would never touch, as a common culture is unattainable because of all the diverse backgrounds. For a pluralist, however, the circle would be complete, all pieces touching, but perhaps each piece a different color to represent all of the different backgrounds that come together to make up the common culture of the United States. 
From a practical standpoint, which approach do you think is easier for schools to implement? Ravitch clearly outlines her criticisms of the particularist approach without clearly articulating some of the shortcomings of pluralism. Perhaps the greatest criticism of the pluralist approach is that it tends to default toward European-American perspectives and history. While we might expect the circle described above to have pieces of equal size, in reality it often ends up skewed. As you continue working through this course and its modules, consider the focus of each of these perspectives and how each would apply to the various dimensions of diversity.

1.06: Activities and External Resources

Activities

Discussion Prompt: This I Believe Essays and Acculturation Models

In the lectures for this module, we discussed two different acculturation models. As a reminder, acculturation is the way a person, typically an immigrant, responds to a new culture. Your readings for this module included two This I Believe essays by immigrants. Based on their stories, which acculturation outcome do you think each of these people would fit into under Berry’s model? How about the Portes and Rumbaut model? Be sure to support your ideas with specific references from the essays.

------------------------------------------------

Discussion Prompt: Pluralism and Particularism

This week, you were assigned two different articles, ‘Multiculturalism: Battleground or Meeting Ground?’ and ‘Multiculturalism: E Pluribus Plures’. Each advocates for a different educational philosophy of multicultural education, either particularism or pluralism. Create a post in which you discuss the major differences between the two philosophies. What are some of the shortcomings of each?

------------------------------------------------

Written Response: Multiculturalism Reflection Paper Topics

• Choose a cultural norm to break. Write about what norm you broke, why you chose to break it, and others’ reactions. 
How does your experience relate to our discussion of cultural norms? Make sure you include information about how people who unintentionally break norms might feel, based on dominant culture.
• Describe ethnocentrism in your own words. What are 2–3 examples of how you are ethnocentric? What are some strategies you can use to control this in the classroom?

------------------------------------------------

External Readings & Resources

Ravitch, D. (1990). Multiculturalism: E pluribus plures. American Scholar, 59(3), 337-354.

Takaki, R. (1993). Multiculturalism: Battleground or meeting ground? Annals of the American Academy of Political and Social Science, 109.

‘Defining Multicultural Education for School Reform’ – Chapter 2 in Affirming Diversity: The Sociopolitical Context of Multicultural Education (6th edition)

As we begin EDUC 2120, it is important to define exactly what we mean by multicultural education. Sonia Nieto gives us a precise definition of multicultural education to work from for the semester in this piece, as she reframes the idea of multicultural education and provides suggestions on what it should look like in educational settings. 
2.01: Media

External Videos:
• This American Life, Episode 512: House Rules
• Saturday Night Live, ‘White Like Me’ sketch

2.02: Activities

Discussion Prompt: Race Preference Test

Go to Harvard’s Project Implicit website. (https://implicit.harvard.edu/implicit/) Take the Race or Skin-Tone IAT. Next, read this article from the Washington Post about the assessment and patterns in scores across the United States. What were your IAT results? Were you surprised? How did this make you feel? Do you think it is an accurate assessment of your racial preferences? Provide some examples from your background that you think either support or negate your score. How do you score in comparison to others in Georgia? Overall, what are your impressions of this assessment? *Note that preferences are just that, preferences, and not a summation of personal prejudice.

Discussion Prompt: White Privilege Response

For this module, you should have read both Beverly Tatum’s article, ‘Defining Racism’, and Peggy McIntosh’s ‘White Privilege’. You should also have listened to the This American Life episode, “512: House Rules”, and completed the Privilege Scavenger Hunt. Post the top three things you think you have learned from your exploration of privilege. Which aspects of privilege stood out the most to you from the different manifestations you have discovered? What can you do to help reverse some of these inequalities moving forward?

Written Response: Reflection Paper Topics

Go to Harvard’s Project Implicit website. (https://implicit.harvard.edu/implicit/) Take the Race or Skin-Tone IAT. Next, read this article from the Washington Post about the assessment and patterns in scores across the United States. What were your IAT results? Were you surprised? How did this make you feel? Do you think it is an accurate assessment of your racial preferences? Provide some examples from your background that you think either support or negate your score. How do you score in comparison to others in Georgia? 
Overall, what are your impressions of this assessment? *Note that preferences are just that, preferences, and not a summation of personal prejudice.

2.03: External Readings and Resources

‘Defining Racism’ – Chapter 1 in Why Are All the Black Kids Sitting Together in the Cafeteria?

Beverly Tatum uses this chapter to redefine racism in the context of institutionalized racism and privilege. While some find her position extreme, her writing is important because it calls attention to many of the institutionalized policies and procedures that benefit whites in the dominant culture.

Tatum, B. D. (1997). Why are all the black kids sitting together in the cafeteria? New York, NY: Basic Books.

McIntosh, P. (1990). White privilege: Unpacking the invisible knapsack. Independent School, 49(2), 31.

One of the key ideas of this module and the course is privilege. In this piece, Peggy McIntosh lists the privileges she receives that many other white people in the United States may not see or notice. Part of the idea of privilege is that those who have it are blind to it, and this piece helps to illuminate examples of white privilege.
External Media:
• Public Broadcasting Service (PBS) Independent Lens film, ‘Park Avenue: Money, Power, and the American Dream’
• ABC News 20/20, ‘A Hidden America: Children of the Mountains’

3.02: Activities

Discussion Prompt: Poverty Case Study Response

Review the poverty case study included in the Activities section of this module. Post some information here about the outcomes of your work on that activity. Could you get the budget to balance? What things were not included in the budget? What changes do you recommend the family make in their living situation? Next, look at your current community on the New York Times Interactive Poverty Map. What did you find about how different poverty levels are distributed in your community? What current percentage of poverty exists where you live?

Discussion Prompt: Social Class Privilege

Calculate a personal score for yourself using the guide listed below. Then, post your score and your reflections about some of the different items included. How did it make you feel? Which items were most difficult for you to answer? How accurate do you think these scores are as a measure of social class privilege? 
• If your parents went to college (+ 1 point)
• If there were more than 50 books in your house when you grew up (+ 1 point)
• If you ever had to skip a meal or were hungry because there was not enough money to buy food when you were growing up (- 1 point)
• If you were brought to art galleries, plays, or museums by your parents or guardians (+ 1 point)
• If one of your parents was unemployed or laid off, not by choice (- 1 point)
• If prior to age 18, you took a vacation out of the country (+ 1 point)
• If one of your parents did not complete high school (- 1 point)
• If you or your family owns your own house (+ 1 point)
• If you were ever offered a job because of your association with a friend or family member (+ 1 point)
• If you have ever inherited money or property (+ 1 point)
• If public transportation was a requirement and not a choice (- 1 point)
• If your parents purchased a car for you (+ 1 point)
• If your parents are divorced (- 1 point)
• If you received a scholarship for college (+ 1 point)
• If you or your family have or own a summer home or second house (+ 1 point)
• If you have worked in a fast food restaurant (- 1 point)
• If you have a trust fund or own stocks and bonds (+ 1 point)
• If you shared a bedroom as a child (- 1 point)
• If you have ever shopped with food stamps (- 1 point)
• If you attended a private school (+ 1 point)
• If your social class was ever the target of a joke (- 1 point)

Written Response: Reflection Paper Topics

Topic One

Think about the community in which you grew up and the class of your family and other members of the community. How would you describe the class of your family? What was the class of the majority of students in your high school? How do you think class influenced your educational aspirations and those of your high school peers? How does examining your community on the New York Times Interactive Poverty Map inform your own impressions? 
What did you find about how different poverty levels are distributed in your community? What current percentage of poverty exists where you live?

Topic Two

Think about the different ways the poor pay more mentioned in the article, ‘The High Cost of Poverty’. Explore these ideas on your own by visiting or participating in an activity more common to those in poverty. This can include shopping in a local community grocery store, taking public transit around town for a day, etc. Tell me what you did, what it cost compared to what is normal for you, and describe your overall experiences. How does this change the way you think about those in poverty?

Topic Three

Find an article relating to poverty and ways to eliminate it from The New York Times, The Economist, CNN, or National Public Radio (NPR). Provide a link to the article you chose. How does your article compare to the ideas in what we have read? Do you agree or disagree with the information presented in your article? Do you think the strategies presented in your article would actually work to help address the problem of poverty?

Poverty Case Study

A married couple with two children, ages one and three, currently live in a two-bedroom house, which includes a stove, refrigerator, washer, and dryer. Marcus, the father, works 40 hours per week in a local factory and earns \$9.00 an hour. Amanda, the mother, works part-time evenings and weekends, about 12 hours a week, and earns \$7.90 an hour. After taxes, they have an income of \$1,635.00 a month. Their combined wages make them eligible to receive \$137.00 in food stamps per month. Marcus has health insurance for the family through his job but must pay \$24.00 per paycheck for this benefit. He is not required to take the health insurance; however, if it is available through his employer and he chooses not to take it, the family is not eligible to receive Medical Assistance. 
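Before working on the budget questions that follow, it can help to sanity-check the case study's wage figures with a little arithmetic. The short Python sketch below is our own illustration, not part of the original activity; it assumes an average month of 52/12 (about 4.33) weeks, while the \$1,635.00 take-home figure is taken directly from the case study.

```python
# Rough check of the poverty case study's wage figures.
# Assumption (ours, not the case study's): an average month has 52/12 weeks.
WEEKS_PER_MONTH = 52 / 12

marcus_weekly = 40 * 9.00   # Marcus: 40 hours/week at $9.00/hour
amanda_weekly = 12 * 7.90   # Amanda: 12 hours/week at $7.90/hour

gross_monthly = (marcus_weekly + amanda_weekly) * WEEKS_PER_MONTH
take_home = 1635.00         # after-tax income stated in the case study
food_stamps = 137.00        # monthly food stamp benefit

print(f"Estimated gross monthly wages: ${gross_monthly:,.2f}")      # ≈ $1,970.80
print(f"Take-home pay plus food stamps: ${take_home + food_stamps:,.2f}")  # $1,772.00
```

Recomputing these totals after a hypothetical change, such as more hours or a higher wage, is one quick way to test the recommendations you make in your response.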
The family has one 2004 Honda Civic, on which they are making payments of \$178.00 a month, with one year of payments left. Car insurance costs \$147.00 a month; by law, they are required to carry it. The car is their only transportation option for work, since the area where they live does not have public transportation. Marcus and Amanda have been spending about \$285.00 a month more than they earn. They have borrowed money from family and friends to pay their last month’s rent, and are now two months behind on their car loan. Low-income housing is unavailable because of Marcus’ salary. They cannot move in with anyone else.

What changes do you recommend to their monthly budget? Are there any other changes you think this family should make, such as jobs or lifestyle? What is not included in their budget that would be most difficult to live without? Here is a copy of their current monthly budget. Space is provided for items you think are currently missing or should be added.

Family Budget (Current Spending; a Revised Spending column is left for you to fill in)
• Rent: \$525.00
• Natural Gas: \$85.00
• Electricity: \$65.00
• Water/Trash: \$19.00
• Cell Phones: \$50.00
• Cable/Internet: \$78.00
• Groceries*: \$350.00
• Personal Hygiene Products: \$75.00
• Gas/Car Maintenance: \$80.00
• Diapers/Wipes: \$95.00
• Car Loan: \$178.00
• Car Insurance: \$147.00
• Student Loan: \$68.00
• Medical Costs: \$45.00
• Cigarettes: \$60.00
• Total: \$1,920.00
*Grocery spending is in addition to food stamps.

Brainstorm a list of activities the family could do together for entertainment, for free or for less than \$10.
Family Activities for Free:
Family Activities under \$10.00:

3.03: External Readings and Resources

‘Concerted Cultivation and the Accomplishment of Natural Growth’ – Chapter 1 in Unequal Childhoods: Class, Race, and Family Life

Annette Lareau’s work has centered on the way that parenting and family background influence children. 
In this chapter, she outlines two different approaches to raising children and further discusses the implications of these approaches for students.

Lareau, A. (2011). Unequal childhoods: Class, race, and family life. Berkeley: University of California Press.

‘At the Edge of Poverty’ – Introduction in The Working Poor: Invisible in America

Often the national narrative surrounding poverty in the United States is the idea that “work works”. However, David Shipler takes issue with that premise in this piece, in which he highlights many of the challenges facing people in poverty today. Shipler further discusses the balance of personal responsibility and social responsibility as means to eliminate poverty in the future.

Shipler, D. K. (2004). The working poor: Invisible in America. New York: A. Knopf.

Brown, D. (2009, May 18). The high cost of poverty: Why the poor pay more. The Washington Post.

This article also speaks to many of the challenges facing the working poor in the United States today. In particular, it focuses on ways that the poor pay more, largely with their time, in dominant society.
External Media:

4.02: Activities

Discussion Prompt: Gender Expectations

Much like race is a social construction, the ideas we have about gender are also greatly influenced by our society and culture. Take a few minutes to brainstorm some of the expectations society has for each gender, and then post the top five items for each gender. What are the implications of these expectations? How do you think these expectations make someone feel who does not identify with gender norms?

Discussion Prompt: Current Gender Issues

1. Locate a news article from a reputable source (The New York Times, The Washington Post, CNN, NPR, etc.) published in the last 12 months relating to one of the following topics:
• Motherhood penalties faced by women in the workplace
• Occurrence of women in high-ranking positions in the US (politics, business, etc.)
• Gender bias in teaching evaluations in higher education
• Maternity leave/coverage in the US versus other nations
• Sexual assault occurrence for women in the US
2. Post a reply with a link to your article and a one-paragraph summary of it. Include a second paragraph of your own personal thoughts regarding the information presented in the article.

Written Response: Reflection Paper Topic

According to Alfred Kinsey, gender identity and expression fall along a continuum of traits and behaviors that range from very feminine to very masculine. What are your thoughts about your own and others’ gender identity? Where do you fall along a continuum of gender identity? What people have had an influence on the development of your gender identity? How do you and your friends identify people who act like the opposite sex? Which sex suffers the most from behaving like the opposite sex? Why?

Discussion Prompt: Sexual Identity Case Study

Review the sexual identity case study. What are some of the pros and cons of starting this kind of organization in a high school? How do you feel about groups relating to gender and sexual orientation meeting in public schools? 
How do you think other students and faculty members would respond to such a club? What is your final answer to your student?

Sexual Identity Case Study

You are a second-year teacher at a large suburban high school. One of your best students, Gina, with whom you have a strong relationship, approaches you and confesses she is a lesbian. She also asks for your help in starting a Gay-Straight Alliance at your high school, with you becoming the faculty sponsor of the organization. You are only slightly familiar with this student organization, but you do know there was a large controversy about starting such a group at another high school in a neighboring county. Many of the religious parents strongly objected to the club, and the school board ended up deciding that only academic clubs/organizations would be allowed in that district rather than allow meetings of the Gay-Straight Alliance. In addition, you know the way most students who are “out” are treated at your high school, and you worry that starting such a group would only make bullying and harassment worse for the students who decided to participate. You want to support Gina, but are not sure how to proceed.

What are some of the other pros and cons of founding such an organization in a public high school? Do you think public school is an appropriate forum for student groups relating to sexuality? How do you think other students and faculty members would respond to such a club? What is your final answer to Gina?

*As additional information for you, here is the Gay-Straight Alliance’s mission from its website: "Gay-Straight Alliance Network is a youth leadership organization that connects school-based Gay-Straight Alliances (GSAs) to each other and community resources through peer support, leadership development, and training. 
GSA Network supports young people in starting, strengthening, and sustaining GSAs and builds the capacity of GSAs to: (1) create safe environments in schools for students to support each other and learn about homophobia and other oppressions, (2) educate the school community about homophobia, gender identity, and sexual orientation issues, and (3) fight discrimination, harassment, and violence in schools."

4.03: External Readings and Resources

Leonhardt, D. (2010, August 3). A labor market punishing to mothers. The New York Times.

One of the main concepts from this module is the way gender interacts with society and dominant culture in the United States. This article highlights some of the challenges that are unique to women as they work toward job equality.

Miller, C. (2014, September 6). A motherhood penalty vs. a fatherhood bonus. The New York Times.

This article offers additional perspectives on how having a family impacts the careers of men and women in the United States.

Schulman, M. (2013, January 9). Generation LGBTQIA. The New York Times.

In this piece, the lives of college students who identify as LGBTQIA are explored in the context of broader dominant culture in the United States.