content
stringlengths 71
484k
| url
stringlengths 13
5.97k
|
---|---|
Regressions are mathematical tools to study the behavior of variables between them. These are tools used to predict behaviors by analyzing the correlations of variables between them. They therefore allow:
- Determine the function that binds variables
- Information on the intensity of the link between the variables
Regression studies can be used for other purposes. For example, a correlation graph tells us that the rate of delay in a meeting influence the quality of these meetings. Thus, to set up an indicator, we prefer to use the delay rate, which is easier to measure, than the quality of the meetings.
Regressions may only be used when the explanatory data (or information) is quantitative (continuous or discrete) and according to the following table:
|
|
Variable Explained
|
|
Continuous or discrete Quantitative
|
|
Qualitative (attribute)
|
|
Number of quantitative explanatory variables (continuous or discrete)
qualitative variables can also be taken as long as they are transformed into dichotomous or class variables.
Example: Male and female is translated into 0 and 1
|
|
A variable
|
|
simple or monotonous regression
|
|
Several variables
Historical
It was Francis galtn (1822-1911-mathematician cousin of Charles Darwin) who introduced the expression of regression. Working on the transmission of hereditary traits, he noticed that although there is a tendency for high-waisted parents to have high-waisted children and vice versa, the average size of children tended to approximate the size Average population. In other words, the size of children born to unusually large or small parents approximated the average population size1.
GALTON’s universal regression law was confirmed by his friend Karl Pearson, who collected more than a thousand sizes from family groups2. He discovered that the average size of the sons of a large group of fathers was lower than that of their fathers and that the average of the sons of a small group was greater than the size of their fathers, ” regressing “So the small and large wires towards the medium size.
1. Collect Data
First step, collect the data. For a regression study, the explanatory variable (or variables) are necessarily quantitative variables. In other words, a regression study can not be done with data of type yes/no or white/blue… In this type of case, it is necessary to use specific hypothesis tests, or if possible, to transform the data into a quantitative variable.
Let’s take the example of a quality control linked to traces on a label. No trace is desired and control is currently a good/not good control. A hypothesis test could be put in place, but it can also be interesting to make a regression. In the latter case, the appearance of the trace will be translated by a measure of the size of it. When there is none, the value is 0, and when it appears, one measures the surface of it and thus becomes a quantitative variable.
On the other hand, as systematically in a statistical study, the data collection Must be done in accordance with the basic rules. In particular, if necessary, remove outliers.
It is also necessary to ensure that the same number of data is collected for each of the two variables.
Finally, it is necessary to validate that the values are independent. For this we have either the logic… or the Durbin Watson test.
2. Identify the type of regression
According to the diagram presented in the introduction, the type of regression to be set up according to the data type is selected.
3. Characterizing the relationship
In this step, the data is graphically represented to characterise the relationship and choose a model. Whether it is a single or multiple regression, these graphs always represent the value to be explained in relation to only one of the other values. We then develop as many graphs as there are explanatory values.
As residues are the difference between our prediction model and our data, the “best” regression will be achieved when the square sum of the tailings is as small as possible.
We’re going to find three types of relationships:
|Type of connection||Description||Chart|
|Linear link||The simplest case, the two variables have a correlation that can be ascending or descending.|
|Monotonic link||More complex case, the connection is not linear but is either strictly positive or strictly negative.|
|Non-monotonic link||We find a "break" in the link but we can mathematically represent it.|
4. Quantifying the intensity of the correlation
The intensity of the correlation is quantified. For this, there are three different coefficients that we describe below and which are to be used according to the table below.
The coefficient of Bravais-Pearson-R
The coefficient of Bravais-Pearson measures the co-variation of the two variables. It calculates the ratio between how much variation the two measures have in common divided by the amount of variation they might have at most. It is expressed according to the following formula (Pearson function in Excel):
If the value of R squared (r2) is raised, it is then called the coefficient of determination. This gives us the amount of variance in common between the two samples. Expressed as a percentage, at most this will be close to 100%, at most, our regression model explains our data.
Autocorrelation
In the particular case where we have a chronological series of the same data, we can calculate via the coefficient of Bravais Pearson What is called in this case the autocorrelation.
This allows us to know whether in time our data follows the same trend or not.
The coefficient of Spearman-ρ
Basically, the Coefficient of Spearman is a special case of the Pearson coefficient. It is based on the calculation of the difference of the ranks. So it’s a non-parametric test.
Kendall-τ Tau
The Kendall’s Tau is also a non-parametric test. It is based on the difference in the ranks of the variables.
Interpretation
In all 3 cases, the values of the coefficients oscillate between-1 and 1 and are interpreted according to the following diagram:
5. Validate the significance of the study
It is necessary to validate whether the results obtained have a meaning or not. The details of the tests being put in the various articles relating to linear regressions, multiple…, we only put below the list: | https://wikilean.com/en/regression-studies/ |
Introduction: Soft tissue changes and especially smile is one of the most important parameters in diagnosis and treatment planning in orthodontics. The main aim of this study was evaluating the correlation of the smile line with verticaldental parameters of occlusion.
Materials And Methods: In this cross sectional study, 46 patients (23 females and 23 males) aged between 18 to 25 years old were selected. The subjects were asked to pose a smile and several variables were measured and recorded which were mainly related to smile line. A lateral cephalometric radiography was ordered for each patient and anatomic landmarks were determined. And then correlation between 6 vertical dental parameters and some smile variables were analyzed. In order to determine association between quantitative variables, correlation coefficient of Pearson was used. P<0.05 was considered as significant.
Results: There was significant correlation between palatal-occlusal plane with quantitative variables such as tooth-lower lip position and inter labial distance during smile. No significant correlation was seen with other smile variables. There was significant correlation between upper 6 to palatal plane with smile width but no correlation was found with other smile variables. Upper 6 to Frankfort plane had significant correlation with clinical crown and smile width but not with other smile variables.
Conclusion: According to the result of this study, dental vertical variables affect the vertical component of smile, which means vertical development in the dentition can lead to the distance between some vertical variables of posed smile. | https://www.ijorth.com/article_247760.html |
The aim of the study was to examine the relationship between peer influence, stress and academic performance among adolescents. The study was conducted at Kajjansi Progressive School located in Wakiso district. A correlation study design was used and 100 school going adolescents participated in this study. Results revealed that Pearson 's Correlation Coefficient (r) was used to determine the level of significance of the hypotheses. Results indicated that there was a significant relationship between stress and academic performance. There was also a significant relationship between peer influence and academic performance. However, results revealed that there was no significant relationship between stress and peer influence among adolescents in the study. Thus, the study recommends that school administrators and school counsellors should work together to equip students with skills to enable them cope with stress. | http://dissertations.mak.ac.ug/handle/20.500.12281/9396 |
What is Correlation?
Correlation simply means to be related or to be connected. In statistics, correlation refers to an association between different sets of variables. In other words, correlation refers to a mutual relationship between different variables.
When we try to identify the statistical relationship between different variables, we must do a correlation analysis. There are different ways in which correlation can be studied statistically.
How do we Measure Linear Correlation
Correlation is measured using a coefficient of correlation. In the case of linear correlation, the measure is the Karl Pearson’s coefficient of correlation that measures how much do the variables move together.
Assumptions in Linear Correlation
To measure linear correlation between variables, we must make a few assumptions. The most important ones are normality, homoscedascity, and linearity.
Measuring Rank Correlation
When the variables do not contain “normal” data, that is, when the data is categorical or ordinal, we cannot use Karl Pearson’s coefficient of correlation. Instead, we use Spearman’s rank correlation to measure the correlation for such data.
Correlation does not indicate a Causation
Simply because two sets of variables are correlated, it does not indicate that one of them is causing the other. Correlation merely indicate a relationship, and not necessarily a causal relationship. We should be rather careful when we draw conclusions out of an observed strong correlation alone. | https://helpfulstats.com/about-correlation/ |
In a piece of writing, using cause and effect helps the reader follow a cohesive thread through the content. It also assists the writer in the writing process in organizing and structuring the material into a logical shape. Cause and effect are important in literature because they help keep plotlines plausible. Without them, stories would not be able to progress naturally due to the need to include many different events that occur throughout the narrative.
In general, cause and effect help readers understand how specific events led up to others which led to a final result. This understanding allows them to connect information from separate parts of the story and move forward with the story rather than being stuck at a point where something inexplicable happens. For example, if I were to tell you the cause of everything that has happened in this story so far is love, then that would explain why these things have been happening: my roommate Amy moved out, my other roommate Matt decided to start dating someone, and my friend Sean dropped me off at the train station when I went to visit my mom for Mother's Day. The effect of all of this is that I ended up getting a job as a data analyst at a financial company in New York City.
There are several types of cause and effect relationships that can be used in writing.
What Is the Purpose of Teaching Cause and Effect? Without them, writers would need to explain everything, which would make stories boring.
Teaching cause and effect is important because it allows students to understand how different events are connected. This skill is useful in many areas of life including science, history, and politics. For example, if you were studying plants and wanted to know why some grow better than others, you could look at their genetic makeup or the environment in which they are grown. You could also look at their roots- those who grew up in rich soil have more opportunity to explore their surroundings and thus develop stronger stems and larger leaves than those that did not get as much love. Even after they reach maturity, plants continue to change due to environmental factors such as heat or cold so learning about cause and effect helps scientists select the best strains for different conditions.
In history, understanding cause and effect helps students understand how different events are related. For example, if you were studying the rise of Hitler in Germany you might want to look at various causes such as poverty, unemployment, or nationalism. You could also examine effects such as World War II or the Holocaust. Knowing what caused what would help you understand how events unfolded over time.
Cause and effect are key textual aspects that assist readers follow a writer's path of thought, whether the material is fiction or nonfiction. In their own reading and writing, they may already demonstrate a solid implicit comprehension of the topics. The more experience they have with literature and science, the better they will be able to comprehend such concepts as they read about them for the first time.
Effect words indicate to readers what will happen next in a story or article. Use of appropriate effect words creates anticipation in readers, which makes them want to continue reading. Cause and effect diagrams are used by writers to explain the relationship between events or circumstances. These diagrams can help readers understand complex ideas by showing the connection between different parts of an article or book.
Effect words fall into five categories: temporal, causal, logical, rhetorical, and psychological.
Temporal effect words describe something that happens at a specific time. They tell readers when certain events will take place. Some examples include "then," "next," and "after."
Causal effect words show how one event causes another. They explain why something happened after another thing. For example, "because," "since," and "due to" are commonly used causal effect words.
Logical effect words show how two things are related because one follows from or is implied by the other.
A popular technique to organize information in a text is through cause and effect. Cause and effect paragraphs are organized in such a way that they explain why something happened or the impacts of something. Cause-and-effect text structures are commonly employed in expository and persuasive writing. They help readers understand complex topics by explaining the relationship between different ideas or events.
In general, the cause and effect paragraph explains what caused something to happen (or not happen) and then goes on to talk about the effects of this cause on its result. For example, if you were to write a cause and effect paragraph explaining why it is good to eat your vegetables, you could say that eating vegetables helps you get some of the nutrients you need for healthy skin and hair, and also reduces your risk of getting some cancers. This would be an accurate explanation because nutritionists know that eating vegetables can help reduce your risk of getting certain kinds of cancer.
Some examples of cause and effect paragraphs that could be used in essays or reports include:
The effects of eating vegetables. Eating vegetables helps you get some of the nutrients you need for healthy skin and hair, and also reduces your risk of getting some cancers.
Effects of drinking water. Drinking water prevents dehydration, which can make you feel tired and weak. It also helps flush out your body's system of toxins.
Effects of exercising.
Cause and effect is a rhetorical style that examines which occurrences result in which outcomes. A cause and effect essay is organized around the purpose of uncovering and describing events that result in certain outcomes. Authors use facts and statistics to support their arguments.
In his book The Elements of Style, William Strunk Jr. and E. B. White described cause and effect writing as follows: "Use the past tense to describe causes that are no longer present (he was injured playing football) or causes that never were (fire burns up trees), and use the present tense for effects that are happening now (he is crying because someone else got promoted over him). Don't mix these two types of actions."
For example, if you were writing about why it is that dogs eat grass, you would need to describe what happens when humans feed grass to dogs. You could say, "When people give dogs grass in their food, they are causing them to lose weight" or "That's how we know why dogs eat grass—because people do it all the time and it makes them sick." Both statements are examples of cause and effect writing.
The word "cause" has many different definitions depending on the context in which it is used. Here are some common ones: reason, explanation, motive, source, stimulus, condition necessary for something to happen. | https://authorscast.com/how-can-cause-and-effect-help-you |
Managing resilience requires understanding how historical system dynamics have shaped the current system.
Social-ecological systems are dynamic and the changes they undergo are sometimes slow and predictable and other times fast and unforeseen. Having a broad overview of system change through time can reveal system drivers, the effects of interventions, past disturbances and responses.
Key Messages
• Social-ecological systems undergo change over time. Those changes can be slow and predictable, or they may be fast and unforeseen. These changes can result from external sources of variation interacting with internal vulnerabilities.
• Environmental crises can signal or accompany the loss of ecological resilience. They can also serve as windows of opportunity for change.
• Historical profiles can reveal how human interventions and management actions can lead to the loss of resilience.
• Historical assessments indicate how understanding, values, perceptions and priorities of the system have changed over time. These factors can also lead to regime shifts- in the ecological, social and/or economic components.
Resilience Assessment
Create an historical profile of the focal system: The development of an historical profile or timeline helps to reveal the longer-term dynamics of the system. It can help reveal the main social or ecological drivers in the system, and how change has occurred (such as episodic change through perturbations, or slow linear changes). It can also help identify the types of disturbances or shocks that have occurred, and the social and ecological responses to those shocks.
One method to creating an historical profile is to use three long pieces of paper (or a blackboard with three rows), labeling one row the focal scale, one the coarser scale, and one the finer scale. Establish the length of the history that you wish to describe (100 years, 1000 years, etc.) and appropriate unit of resolution (such as 5 or 10 years). Sketch a line on each sheet of paper that represents this time period, with appropriate subdivisions for the resolution. Mark events that are of significance to your system (e.g., social, ecological, and economic events) and put them on the appropriate scale. You can either mark on the paper directly or place post-it notes (which are easier to move around and change). At this stage it is more important to identify big events and or events that changed the management of the system.
Draw connections between related events. For instance, was a shift in agricultural production at the focal scale caused by an earlier economic shock at a larger scale? If so, indicate the reason for the connection.
For each of the events you identified above, determine if the event caused a dramatic change in the characteristics of the system. How would you characterize the system before the transition? How would you characterize the system after the transition? Give each era a name (try to identify 3-6 eras).
For each era summarize the event that led to a change in era (the ‘triggering event’), and list the attributes you believe made the system vulnerable to change. This can be done by following the format provided in the table below (add rows as necessary).
Look for any patterns in the picture you have created. How often do ‘triggering events’ come from the coarser scale(s)? How often from the finer scale(s)? How often from the economic domain? The social? The ecological? In other words, what are the critical domains in your system, and is there a pattern of cross-scale interactions?
Keep a record of the timeline you have created. Devise a plan for how you will record, archive, and disseminate the results of this assessment, and make a record of any action items. | http://wiki.resalliance.org/index.php/1.3_Linking_the_Past_to_Present_%E2%80%93_Historical_Timeline |
… And by “WOW!” I mean: Wow, I need to use these with my students. OR …Wow, I need to share these with my colleagues. OR Wow, I am inspired to develop my own digital history project. Of course a synthesis of all 3 is the sweet spot. That was the course of action leading to the development of my US History in a Global Context project.
What is digital history? Indeed, defining your terms is usually a great place to start. I have found these explanations to be useful and bring moments of clarity which ultimately furthers the conversation and utility of these types of projects.
I have had the pleasure of working on multiple digital history projects. So, let’s look a bit further and see what formats digital history projects can take. In short, when we discuss digital history, we can be referencing a number of types and purposes. The common aspects being that they are accessible to the public and organized around a theme(s). This list comes (in part) from the Organization of American Historians.
Archive: a site that provides a body of primary sources. Could also include collections of documents or databases of materials.
Essay, Exhibit, Digital Narrative: something created or written specifically for the Web or with digital methods, that serves as a secondary source for interpreting the past by offering a historical narrative or argument. This category can also include maps, network visualizations, or other ways of representing historical data.
Teaching Resource: a site that provides online assignments, syllabi, other resources specifically geared toward using the Web, or digital apps for teaching, including educational history content for children or adults, pedagogical training tools, and outreach to the education community.
Podcasts: video and audio podcasts that engage audiences on historical topics and themes.
Games: challenging interactive activities that educate through competition or role playing, finding evidence defined by rules and linked to a specific outcome. Games can be online, peer-to- peer, or mobile.
Wonderful! With classrooms having access to computers and moving to 1:1 formats, quality digital resources is in demand. The good news is that they are out there. But these are only good if they get used. To that end, I have curated a collection of digital history projects that are designed for high school and higher education history and social studies classes. These selections offer a variety of implementation pathways allowing immediate use with students (either in full or in part). Additionally, these would be relevant for history/social science methods classes.
Here is one more general resource, a short video, to help frame and advance your understanding before you dive into the digital history resources.
What project did I miss? What do you think of these? Let me know and contact the project designers so they know who is using the resource they created. Enjoy!
1. The 68.77.89 Project: Arts, Culture, and Social Change: Created by The National Czech & Slovak Museum & Library, this resource was just launched in early 2018! Students will be challenged to apply the lessons from the experiences of Czechs and Slovaks to better understand issues of democracy today and their responsibility for preserving democracy for the future. 68.77.89 is designed for students in grades 9-12. It provides a set of 12 learning activities in 4 modules that meet Common Core, Advanced Placement, and International Baccalaureate standards. The activities can be used as a set designed to be used together, or in single modules as free-standing lessons. Images of the 4 modules is below.
2. The Trans-Atlantic Slave Trade Database: This is a remarkable tool which synthesizes data with visualization formats very effectively. The database “has information on almost 36,000 slaving voyages that forcibly embarked over 10 million Africans for transport to the Americas between the sixteenth and nineteenth centuries. In order to present the trans-Atlantic slave trade database to a broader audience, particularly a grade 6-12 audience, a dedicated team of teachers and curriculum developers from around the United States developed lesson plans that explore the database. Utilizing the various resources of the website, these lessons plans allow students to engage the history and legacy of the Atlantic slave trade in diverse and meaningful ways. Here is one example of a search I did.
4. Our Shared Past in the Mediterranean: This is an intriguing world history curriculum. Given the unique geography of the transitions currently underway in the Middle East (several geographically contiguous North African states) and the likelihood that interactions between Europe, northern Africa, Turkey, and the Arab world will constitute a vitally important sub-region of globalization going forward, new cross-Mediterranean tendrils of economic and civil society connectivity will be necessary to help anchor these transitions. An outline of the modules can be viewed here.
5. Rethinking the Region: North Africa and the Middle East: Another contribution to the field of world history, this project “analyzed the common categories used to describe and teach the Modern Middle East and North Africa in existing World History textbooks. Based on this research, we offer robust alternatives for Grade 9-12 social studies teachers and multicultural educators that integrate new scholarship and curricula on the region. To this end, we examined the ways in which the region is framed and described historically, and analyzed categories like the ‘rise and spread of Islam,’ the Crusades, and the Ottoman Empire. Narratives surrounding these events and regions tend to depict discrete and isolated civilizations at odds with one another. To remedy this oversimplification, our work illuminates the manners in which peoples and societies interacted with each other in collaborative and fluid ways at different political and historical junctures.
6. Histography: “Histography” is interactive timeline that spans across 14 billion years of history, from the Big Bang to 2015. The site draws historical events from Wikipedia and self-updates daily with new recorded events. The interface allows for users to view between decades to millions of years. The viewer can choose to watch a variety of events which have happened in a particular period or to target a specific event in time. For example you can look at the past century within the categories of war and inventions. Histography was created as a final project in Bezalel Academy of Arts and Design. Guided by Ronel Mor. Below is a screenschot of the platform.
7. American Yawp: “In an increasingly digital world in which pedagogical trends are de-emphasizing rote learning and professors are increasingly turning toward active-learning exercises, scholars are fleeing traditional textbooks… The American Yawp offers a free and online, collaboratively built, open American history textbook designed for college-level history courses. Unchecked by profit motives or business models, and free from for-profit educational organizations, The American Yawp is by scholars, for scholars. All contributors—experienced college-level instructors—volunteer their expertise to help democratize the American past for twenty-first century classrooms.” This is being used in high schools as well. Also, you can offer insights and edits for the editors to consider.
8. Mapping American Social Movements in the 20th Century: “This project produces and displays free interactive maps showing the historical geography of dozens of social movements that have influenced American life and politics since the start of the 20th century, including radical movements, civil rights movements, labor movements, women’s movements, and more. Until now historians and social scientists have mostly studied social movements in isolation and often with little attention to geography. This project allows us to see where social movements were active and where not, helping us better understand patterns of influence and endurance. It exposes new dimensions of American political geography, showing how locales that in one era fostered certain kinds of social movements often changed political colors over time.” The screenshot below shows a sample of an interactive map. Fantastic!
9. Eagle Eye Citizen: Made by the invaluable team at the Roy Rosenzweig Center for History and New Media, Eagle Eye engages middle and high school students in solving and creating interactive challenges about Congress, American history, civics, and government with Library of Congress primary sources. This helps develop students’ civic understanding and historical thinking skills. It is highly interactive and invites students and teachers to use existing challenges and develop their own.
10. Mapping the 4th of July: Mapping the Fourth of July is a crowdsourced digital archive of primary sources that reveal how Americans celebrated July 4 during the Civil War era. These sources reveal how a wide range of Americans — northern and southern, white and black, male and female, Democrat and Republican, immigrant and native born — all used the Fourth to articulate their deepest beliefs about American identity during the great crisis of the Civil War… Whether you teach at the college or high school level, your students will jump at the chance to learn about how a previous generation of Americans celebrated the Fourth. (Yes, there were fireworks!) These are engaging documents that open up big themes: North-South differences; the causes and consequences of the Civil War; African American experiences of emancipation. On our website you’ll find standards-based assignment guidelines that make it easy to integrate it into your courses.
11. Back Story: Incredible podcast focusing on American history topics in a range of contexts. The hosts are fun, informed, and engaging. BackStory is a weekly podcast that uses current events in America to take a deep dive into our past. Hosted by noted U.S. historians, each episode provides listeners with different perspectives on a particular theme or subject – giving you all sides to the story and then some. Also, a resources icon indicates that the episode has educator resources available. Use BackStory in your classroom! Just go to the episode archives and filter by episodes with resources.
This resource feels like the “godfather” of digital history projects. “Since its establishment in August 1991, the Cold War International History Project (CWIHP) has amassed a tremendous collection of archival documents on the Cold War era from the once secret archives of former communist countries. CWIHP has become internationally recognized as the world’s preeminent resource on the Cold War.” The help organize and search the trove of documents, you search using a map, timeline (going back to 1866… great extended context) and contains over 30 featured collections (sample below).
This entry was posted in Global Education, History and Social Studies Education, Instructional Practices, Online Education, Uncategorized, Web 2.0. Bookmark the permalink. | https://cperrier.edublogs.org/2018/04/15/12-digital-history-projects-that-will-make-you-say-wow/ |
Our good friends at Star House of Boulder, CO have put together a course that agrees with our values and mission, and which we recommend to you. This is a three-part series on research on the astrology of plagues.
Astrology of Plagues:
Pandemics in the Light of Star Wisdom
Online Course
3 Sessions, 1 hour & 40 minutes each
Thursdays, beginning October 1st
6:00 – 8:00 pm (Mountain time USA)
With Brian Gray, Robert Schiappacasse, and David Tresemer
Live, with Q&A available each meeting
The 2020 pandemic gripping the world shows signs of continuing to harass humanity for some time. A study of pandemics through history reveals interesting patterns that we will share. Spiritual science helps us to find causes and responses, assisted by an understanding of events in the life of Christ. Seen historically or mytho-poetically, exploring these patterns and responses can strengthen us to deal with our challenging times.
Session 1: Patterns in Pandemics—seen through the lens of star wisdom. We will examine the present pandemic and others through history to show
patterns in celestial events in relation to these phenomena.
Session 2: Background to Pandemics—as seen through Spiritual Science. The spiritual research of Rudolf Steiner points to unexpected and surprising
causes and effects of pandemics, with which each of us must grapple.
Session 3: Assistance from Christology—whether you appreciate Christ events as history or as mytho-poetic artistic activity, understanding the
Christ Impulse can help us find our place and roles in the present drama.
This course goes beyond personal biography to world events, yet comes back to how an individual can respond in one’s life.
PRESENTERS:
Brian Gray, teacher at Rudolf Steiner College for thirty-eight years, author of star wisdom articles (Journal for Star Wisdom) and presenter via internet videos (WiseCosmos.org)
Robert Schiappacasse, for many years involved with the Waldorf school movement, author in star wisdom (Journal for Star Wisdom) and co-author ofStar Wisdom & Rudolf Steiner
David Tresemer, Ph.D., founder of StarHouse, author in star wisdom (Journal for Star Wisdom and other books), co-author of Star Wisdom & Rudolf Steiner
StarFire Research: The three presenters have been in a collegial group with Robert Powell, a co-founder of the Sophia Foundation, since the 1980s, where each shared their star research in annual meetings for many years. As Robert has moved out of the U.S., these are the three remaining in StarFire in the USA. Several studied directly with Willi Sucher. The annual Journal for Star Wisdom (now published under various volumes of Star Wisdom titles) was founded from this group. | https://sophiafoundation.org/courses/astrology-of-plagues-pandemics-in-the-light-of-star-wisdom/ |
Review From User :
Knowledge of the world is the first step to global citizenship!
Why teach history Why not add more hours of maths or technology instead There is so much students need to learn in order to be professionally successful later, and besides, they can check historical facts on the internet. Who ever needed to know anything about Mughal India to be an asset in the workplace Who needs it for recreation
For most educated people, historical knowledge is part of a cultural package they subscribe to, but which they do not value in the same way they value for example management, financial or technological skills. It is something they learn passively in their spare time, going to museums, reading the odd historical article, maybe even a book on a specific era of special interest. But why have students study it in school Wouldn't more English, math or science be better Or another language
History as such, in its global stretch from the Olduvai Gorge over early agricultural societies in the Mesopotamian river valleys to huge empires and later formation of modern nations, ideologies, conflicts, and inventions, is a neglected stepchild in most contexts, not least in school.
Considered something that can be scrapped without loss, it is frequently reduced to a mere stereotypical look at the most famous monsters and saints, heroes and villains. Instead of connecting historical and geographical knowledge to generate deeper understanding of development processes, history lessons are quite often reduced to watching themed documentaries, writing about random topics without context, showing "old-fashioned" pictures and reading historical adventure fiction. The argument is that general knowledge can be "looked up".
My experience, however, is that if we do not teach the basics anymore, students will not know what to look for. The inherent danger of that lack of context is vulnerability to naive acceptance of "alternative" facts. If we ask students to discuss and interpret something they do not know anything about, they are lost in an ocean of information they can't relate to. More often than not, the "debate" turns into mere speculation or fantasy argumentation based on personal history and parental beliefs.
The result is a fragmentary understanding of the causes and effects of developments, and difficulties to see history as a sequence in time, and a simultaneous global process. I guess all history teachers have their moments when they realise how little knowledge and understanding can be taken for granted without thorough engagement and dialogue with the students.
I remember correcting tests once in a staff meeting (secretly) and giving myself away by laughing out in loud frustration when reading the answer to the question:
"In what way did communication improve in Ancient Rome"
One student, who obviously had not listened to the lessons or studied at home, thought he could improvise an answer and wrote:
"They invented aeroplanes and that was good for transportation."
On a more serious note, history helps us explain to students what happens in the world today, and why it is worrying. It helps students see patterns, characteristics of successful or failing societies, and the impact of strong personalities on the course of history. Comparing a person to Hitler has become a general insult, a way to express utter disgust for the methods used to reach power, but who can still explain properly what were the social and economic causes for his rise, and how he used fear and propaganda to achieve his goals
To be able to establish an idea of timelines, cause-and-effect-chains, biographical information, and connections between different events, countries, and topics, we need overviews of global history.
We need reference books that offer structured information, and that refer back to different earlier sections, thus putting them into a wider context, adding quotes, primary sources, major events, and relationships between diverse historical questions over time and geographical borders. We need history books that look at ideas and people, in all parts of the world, and from different perspectives. And we need explanations in straightforward language, with as little regional bias as possible.
This book offers just that: an introduction to world history in clear layout. It is not a book for scholars, and there is plenty of detail left out, but it is a highly necessary book nonetheless. It is a first step to "real" history. A first step to learn about the events that demagogues like to quote out of context, a first step to an overview of humankind's mistakes that should not be repeated. A first step to learn about fights against oppression, for civil rights, and for democracies. And a first step to learn about dictatorship and propaganda.
It is an excellent book for students to consult, in order to have a knowledge basis to start from when they are faced with politicians who work with power play, denunciation, spreading of fear and demagoguery.
We study history to learn from the past, for a better future. This is an excellent way to start, with an appealing modern design. | https://akibooks.com/the-history-book-big-ideas-simply-explained-7/ |
Are isolated tornadoes, also known as microbursts, thunderstorms with a tornado-like appearance but without an accompanying funnel cloud? These storms can form quickly and with little warning. They can cause damage to public and private property, as well as loss of life.
Introduction: What are isolated tornadoes?
Isolated tornadoes are a type of tornado that form without any other tornadoes nearby. These tornadoes can occur in areas that typically don’t have a lot of tornado activity. They can also form when thunderstorm cells interact.
Causes of isolated tornadoes:
Isolated tornadoes are those that do not form in a “tornado alley”, which is a region of the United States where most tornadoes occur. There are many factors that can cause an isolated tornado, including changes in the wind speed or direction, moisture levels, and temperature differences.
Types of isolated tornadoes:
What are the different types of tornadoes?
There are three main types of tornadoes: supercell, weak tornado, and derecho.
Supercell tornadoes are the most common type of tornado and occur when a rotating thunderstorm becomes strong enough to produce an EF-5 or EF-4 tornado. These tornadoes can travel up to 350 miles per hour and cause extensive damage. Weak Tornado Tornadoes are typically smaller than supercells and can travel up to 75 miles per hour. Derechos are the least common type of tornado but can cause the most damage because they form in large groups.
Trends in occurrence and severity of isolated tornadoes:
1. Tornadoes are typically classified by their type, EF-0, EF-1, EF-2, and so on. These classifications are based on the wind speeds achieved during an event and the damage that can be caused.
2. The most common tornado type is the EF-0. This category includes tornadoes with wind speeds of 74 to 105 mph and limited damage. EF-0 tornadoes occur in about 25 percent of all events and cause between $50,000 and $500,000 in damages.
3. The next most common type is the EF-1 tornado with a wind speed of 111 to 130 mph and significant damage capabilities. EF-1 tornadoes occur in about 30 percent of all events and cause between $5 million and $25 million in damages.
4.
Conclusion
1. Tornadoes can be categorized according to their size, strength, and path.
2. While most tornadoes occur in populated areas, a small number are classified as isolated tornadoes. These storms are typically smaller and weaker than other varieties and tend to travel on paths that are more rural or mountainous.
3. Isolated tornadoes account for only about 2% of all tornado occurrences, but they cause a greater percentage of fatalities and injuries due to their propensity for striking smaller towns and villages without warning.
4. The best way to avoid being caught in an isolated tornado is by following weather forecasts and warnings closely, staying alert for unusual weather conditions, and never taking risks when it comes to safety.
What is an isolated tornado?
An isolated tornado is a tornado that does not touch or cross any other tornadoes.
What are the causes of an isolated tornado?
There are many possible causes of an isolated tornado, but the most common ones are a change in wind speed, a change in air pressure, and a rotation in the atmosphere.
What are the effects of an isolated tornado?
Tornadoes can cause extensive damage to structures, and fatalities are possible if people are in the path of a tornado. | https://www.topeasytips.com/2022/05/what-are-isolated-tornadoes.html |
Despite media contextualizations of mass shootings as being "on the rise," the odds of becoming a victim of such an event are quite low.
This book provides readers and researchers with a critical examination of mass shootings as told by the media, offering research-based, factual answers to oft-asked questions and investigating common myths about these tragic events.
When a mass shooting happens, the news media is flooded with headlines and breaking information about the shooters, victims, and acts themselves. What is notably absent in the news reporting are any concrete details that serve to inform news consumers how prevalent these mass shootings really are (or are not, when considering crime statistics as a whole), what legitimate causes for concern are, and how likely an individual is to be involved in such an incident. Instead, these events often are used as catalysts for conversations about larger issues such as gun control and mental health care reform.
What critical points are we missing when the media focuses on only what "people want to hear"? This book explores the media attention to mass shootings and helps readers understand the problem of mass shootings and public gun violence from its inception to its existence in contemporary society. It discusses how the issue is defined, its history, and its prevalence in both the United States and other countries, and provides an exploration of the responses to these events and strategies for the prevention of future violence.
The book focuses on the myths purported about these unfortunate events, their victims, and their perpetrators through typical U.S. media coverage as well as evidence-based facts to contradict such narratives. The book's authors pay primary attention to contemporary shootings in the United States but also discuss early events dating back to the 1700s and those occurring internationally. The accessible writing enables readers of varying grade levels, including laypersons, to gain a more in-depth—and accurate—understanding of the context of mass shootings in the United States. As a result, readers will be better able to contribute to meaningful discussions related to mass shooting events and the resulting responses and policies.
Jaclyn Schildkraut, PhD, is associate professor of criminal justice at the State University of New York (SUNY) at Oswego. Her research interests include school shootings, homicide trends, mediatization effects, moral panics, and crime theories. She has published in Homicide Studies, American Journal of Criminal Justice, Fast Capitalism, and Criminal Justice Studies as well as in other journals and several edited volumes.
H. Jaymi Elsass is a lecturer and doctoral candidate in the School of Criminal Justice at Texas State University. She received her Master of Science in criminal justice from Texas State University in 2010, and she holds a Bachelor of Arts in sociology from the University of Texas that she received in 2008. Her primary research interests include episodic violent crime, moral panics, fear of crime, and juvenile delinquency. She has published in Criminology, Criminal Justice, Law & Society Review, Crime, Law and Social Change, and an edited volume. | https://www.abc-clio.com/ABC-CLIOCorporate/product.aspx?pc=A4693C |
Hindsight tracks the passage of time since something happened, and helps you answer questions like "How long has it been since?" or "How often does it happen?"
It unburdens you from remembering dates and gives you new insight into the past.
Be more mindful of your activities, get things done on time, and discover patterns and trends.
Features:
- track events and the time of each occurrence
- quick swipe to record a new occurrence
- group events by category
- histogram reveals patterns of past occurrences
- alerts based on time elapsed
- stats and detailed history
- optional notes
- Cloud sync with other devices
- export to CSV
- Apple Watch app
- Today widget
Follow @apphindsight to get updates and send feedback. | https://wishlist.apki.io/discounts/hindsight-time-tracker-c64a5adb |
What’s the best way to solve a business problem? Breaking it down into bite-sized chunks, or taking the 40,000-foot view of the situation? The former is what we might consider to be the traditional problem-solving approach – separating a problem into smaller components and analyzing the parts individually. However, more often than not – especially in the context of large organizations – this approach is linear and reductionist. Why? Because it ignores what are often crucial relationships between the problem being analyzed and its wider environment. This is linear thinking – A leads to B and results in C. But in truth, business problems rarely exist in such a neat, tidy and linear vacuum. The elements within the surrounding environment are connected, and through these connections they create a system. As such, in order to truly solve business problems – as opposed to simply treating individual symptoms – we need to take the helicopter view and begin thinking in systems. Systems thinking takes into consideration the surrounding system as a whole when dealing with a problem. By recognizing the dependencies within the system, systems thinking is able to effectively solve complex problems with many interrelated components.
(Image source: thesystemsthinker.com)
Problems never exist in isolation. They are surrounded by other problems – which themselves are surrounded by more problems still. The trouble – or, if you will, the problem – is that most of us have been taught from a very young age to take the linear approach to problem-solving. At school, we conduct science experiments that follow a linear path from problem to solution – aim, method, and outcome. We are disciplined and socialized to respond to reward (do all your homework and you’ll get good grades (and a treat from Mommy)) and punishment (don’t do it and you’ll fail your class (and be grounded for a week)). By the time we graduate and enter the working world, we have been effectively programmed to think in ordered, linear ways – so no wonder linear thinking is so dominant.
But business environments are not linear. They are chaotic, dynamic – nonlinear. And when problems emerge in nonlinear environments, they require nonlinear thinking to solve them. Systems thinking is a way of viewing the business environment as a complete system – a system that inherently relies upon a series of interconnected and interdependent parts. It seeks to oppose the linear and reductionist view – i.e. that a system or an organization can be understood by its individual and isolated parts – and replace it with the view that everything forms part of a larger whole, and that all parts are intrinsically connected and dependent upon one another. The reality is that A doesn’t always cause B which results in C. Sometimes C can cause A, while a combination of A and C can result in B – but it’s only by examining the system as a whole that we are able to see these complex relationships and thereby solve the complex business problems that result from them.
We’ve covered what systems thinking is and what it can be used for in our previous post ‘An Introduction to Systems Thinking’ – so please refer there for a slightly longer preamble on the basic concept of systems thinking. In this post, we want to take a deeper dive into systems theory as well as the wider vocabulary and tools of systems thinking. Let’s begin with the iceberg.
Systems Theory – The Systems Thinking Iceberg
Why is there such a tendency to view business problems as isolated events? Well, the short answer is because that’s the simplest way in which to view them. When a business problem emerges – say, when a defective product comes off the assembly line of a manufacturing company – leaders see the problem and move to fix it. The problem is an event, and our linear brains seek to find the cause, repair it, often assign blame, and then move on.
But what happens if that problem persists – if the event reoccurs numerous times? Now we might start to see a trend emerging – for example, a higher number of product defects during changes in shifts. This is what systems thinkers call a pattern – and when viewing patterns, we start to understand that events are rarely isolated and independent, but a consequence of something larger.
This “something larger” is a systemic structure – and systemic structures are responsible for generating the events and patterns we observe. If defective products are more common during shift changes, then the true business problem isn’t in fact the event or pattern of events itself, but something much more fundamental. Perhaps there’s a problem in the way shift changes are timed, or there’s no overlap between incoming and outgoing work crews, or there’s no communication system in place to facilitate the smooth hand-off of workloads between employees. The problem, we see, lies in the systemic structure – and the event, by contrast, is in fact just a symptom of the underlying problem.
(Image source: thesystemsthinker.com)
Together, events, patterns, and systemic structures form the systems thinking iceberg. But when we view business problems merely as isolated events, we’re only really looking at the tip of the iceberg. As Systems Thinker’s Daniel Kim points out: “A key thing to notice about the three different levels of perspective is that we live in an event-oriented world, and our language is rooted at the level of events. Indeed, we usually notice events much more easily than we notice patterns and systemic structures even though it is systems that are actually driving the events we do see. This tendency to only see events is consistent with our evolutionary history, which was geared toward responding to anything that posed an immediate danger to our well-being. […] It’s redesigning things at the systemic level that offers us far more leverage to shape our future than simply reacting to events does.”
Thinking in Loops
It is systems that generate patterns and events – this is fundamental to systems theory. But what does this really tell us about how systems behave, and what systems thinking tools are there to help us deepen our understanding about these behaviors?
Let’s begin with causal loop diagrams – also known as feedback loops. Taking a linear perspective, managers and leaders may view a series of events that continuously flow in one direction (A causes B causes C, etc.). Kim gives the example of sales going down (event A). The business takes action by launching a promotions campaign (event B), which leads to an increase in orders (event C), sales rising (event D), and a subsequent rise in backlogs (event E). But then sales start to fall again (event F), so the business responds with another promotional campaign (event G), and so on.
With linear thinking, even though events A and F – and B and G, and so on – are repeating events, they are viewed as separate and unrelated.
(Image source: thesystemsthinker.com)
However, from a feedback loop perspective, the systems thinker views each event not as discreet, but as connected to all other events in the system. The systems thinker, Kim says, constantly asks him/herself the question: “How do the consequences of my actions feed back to affect the system?” Linear thinking only allows us to draw connections between isolated cause-and-effect pairs – A and B, B and C, C and D, etc. – whereas with the feedback loop view we can see the interrelationships and interdependencies among all events. As Kim puts it: “The main problem with the linear view is that although it may be a technically accurate way of describing what happened when, it provides very little insight into how things happened and why. The primary purpose of the feedback view, on the other hand, is to gain a better understanding of all the forces that are producing the behaviors we are experiencing.”
Reinforcing Loops and Balancing Loops
There are two main types of feedback loops – reinforcing loops and balancing loops. Reinforcing loops are those where elements within a system reinforce or amplify more of the same. For example, a good product, leading to high sales, leading to more customers, leading to more word of mouth recommendations, leading to even more sales, even more satisfied customers, even more word of mouth, etc.
(Image source: threesigma.com)
Of course, reinforcing loops are just as likely to be negative as they are to be positive – a bad product will lead to poor sales, fewer customers, less word of mouth, even lower sales, even fewer customer, even less word of mouth, etc., etc., etc.
While reinforcing loops essentially destabilize systems – by compounding change in one direction – balancing loops, on the other hand, have the opposite effect. Balancing loops are the great stabilizers, resisting change in one direction by producing change in the opposite direction. They tend to materialize in organizations where control is needed. Take inventory control, for instance. The organization can’t afford to have too much capital tied up in large stockpiles of inventory – and so the goal is to have just enough product in the warehouse to fulfil existing orders. But, as the reinforcing loop takes hold and sales increase, production needs to be increased in kind until actual inventory comes as close as possible to the required inventory. At this point, corrective action needs to be taken to return balance to the system – and so production is once again slowed until more inventory is required. (Note that in the diagram below, arrows marked with an “s” indicate that as a variable changes, the next variable changes in the same direction – when marked with an “o”, the next variable changes in the opposite direction.)
(Image source: thesystemsthinker.com)
Managing balancing loops can be tricky – which is why systems thinking causal loop diagrams, though they appear simple, are vital for shedding some much-needed light on a situation. By understanding the structure of a balancing process and all the factors that affect it, it becomes possible to design appropriate strategies for effective action.
(Image source: thesystemsthinker.com)
Delays – Four Flavors
Another important though challenging element of systems thinking is found when we consider the inevitable delays that occur across every link within a complex system.
There are four types of delay that systems theory seeks to understand and account for – physical delays, transactional delays, informational delays, and delays in perception.
Physical delays are those that represent the amount of time it takes to move actual things from one place to another – such as products moving between warehouse and retailer, or converting raw materials into saleable products. Transactional delays represent the time it takes to complete a transaction – such as when negotiating a contract or simply making a sale over the phone. Informational delays relate to the time it takes to communicate information – about decisions that have been made or actions that have been taken. Finally, perceptional delays are slightly abstract, but represent the delays in perception about the changes that have actually been made – for example, if product improvements have been made, there will tend to be a delay before customers or even employees perceive of the changes, and thereby adjust their overall perceptions of the brand.
Delays – be they physical, transactional, informational, or perceptional – must be factored in when creating balancing loops. This is fundamental to systems thinking. Consider a typical production setting, for example. When creating a balancing loop to keep a favorable balance between production and inventory, if there is ever a backlog, so long as there are no significant delays within the system, increasing production would lead to higher shipments and thereby reduce the backlog. However, this scenario is more akin to a reinforcing loop as all variables are changing the next variable in the same direction.
What’s more likely to be encountered is some sort of delay. Let’s say the factory is already working at maximum capacity. As such, as the backlog emerges, there will be a delay before production capacity can be increased to deal with it – and the backlog will continue to grow in the meantime, causing yet more delays.
(Image source: thesystemsthinker.com)
But of course, with systems thinking, we can never make the mistake of thinking that any one loop – be it balancing or reinforcing – exists in isolation. Continuing with the same example, the delay in loop B1 above will likely have an impact on the wider business, as the delay in facilitating more production will affect customer order fulfillment. This, in turn, may lead to a decrease in service quality, which may affect future business. During the delay in which the manufacturer attempts to increase production to clear the backlog, customers may take their business elsewhere. Now, though production has increased, fewer orders are coming in – a new delay to contend with (see B2 below), which must be factored into the balancing loop, or, in this case, the “coupled loop”. As production is able to more quickly work through backlogs, the factory can ship products faster than its competitors, leading to a revival of new orders. However, what comes next, of course, is another surge in backlogs – and so the cycle repeats again.
(Image source: thesystemsthinker.com)
Identifying all possible types of delays is essential to creating meaningful balancing loops – and importantly for systems thinking, will help you determine the delays that may be affecting or likely to affect other processes. In systems theory, this is crucial for gaining insight and understanding into your system’s – and subsystems’ – behavior.
Final Thoughts
Systems thinking is an incredibly broad and detailed discipline – and an incredibly valuable one at that. Understanding the organization in terms of a system with many interrelated and interdependent parts allows leaders to expand the range of choices available for solving business problems, and indeed enables them to start unearthing the systemic causes of those problems rather than erroneously viewing each one as an isolated and independent event.
But the power of systems thinking goes beyond problem-solving. Using reinforcement loops and balancing loops with delays not only helps organizations surface issues that may be affecting productivity and profitability, but also design new systems that generate the kinds of events and patterns that they want. However, what leaders must understand when it comes to systems thinking is that there are no perfect solutions. Why? For the simple reason that, when dealing with systems, all choices made have an impact on other parts of the system. Nonetheless, using systems thinking tools, leaders can anticipate each impact, and thereby either minimize its severity or leverage it to the organization’s advantage. In this way, what systems thinking does is allow the business to make better informed choices.
Systems thinking is indeed the antidote to the “quick fix” mindset. It is a means of seeing the complexity of an organization and recognizing that the majority of the time resorting to quick fixes is no way to succeed. As the business world becomes ever more tightly interwoven globally – and technology makes ever more connections possible – systems thinking is emerging as one of the key management competencies of the modern age. Leaders and managers working today need to start learning how to leave linear thinking behind, and instead adopt the big-picture approach to tackling business problems. | https://itchronicles.com/technology/systems-thinking-the-vocabulary-tools-and-theory/ |
The Franco-Prussian War: Its Impact on France and Germany, 1870-1914
View/
Open
MurrayFrancoPrussianHistory2016.pdf (540.6Kb)
Author
Murray, Emily
Date
2016-04-11
Type
Thesis
Metadata
Show full item record
URI
http://hdl.handle.net/11005/3682
Subject
History Department, University of the South
;
History Department, University of the South, Senior Honors Theses 2016
;
Collective identity in history
;
Franco-Prussian War
;
France and Germany
;
Revanchism
Abstract
Historian Niall Ferguson introduced his seminal work on the twentieth century by posing the question “Megalomaniacs may order men to invade Russia, but why do the men obey?” He then sought to answer this question over the course of the text. Unfortunately, his analysis focused on too late a period. In reality, the cultural and political conditions that fostered unparalleled levels of bloodshed in the twentieth century began before 1900. The 1870 Franco-Prussian War and the years that surrounded it were the more pertinent catalyst. This event initiated the environment and experiences that catapulted Europe into the previously unimaginable events of the twentieth century. Individuals obey orders, despite the dictates of reason or personal well-being, because personal experiences unite them into a group of unconscious or emotionally motivated actors. The Franco-Prussian War is an example of how places, events, and sentiments can create a unique sense of collective identity that drives seemingly irrational behavior. It happened in both France and Germany. These identities would become the cultural and political foundations that changed the world in the tumultuous twentieth century. The political and cultural development of Europe is complex and highly interconnected, making helpful insights into specific events difficult. It is hard to distinguish where one era of history begins or ends. It is a challenge to separate the inherently complicated systems of national and ethnic identities defined by blood, borders, and collective experience. Despite these difficulties, historians have often sought to gain insight into how and why European nations and identities developed as they did. Any answers gained can offer unique insight into how nation-states, cultural loyalties, and historical conflicts alter international stability. It may seem as though the political and military conflicts of late nineteenth and early twentieth century Europe have been examined with a fine-tooth comb; however, modern trends in evaluating this time period have obscured important antecedents. In recent years, the study of World War One and World War Two has been viewed as a single twentieth century conflict defined by causes dating to the turn of the century. A genuine understanding of twentieth century events cannot be obtained though if the era is isolated from the actions and events which preceded it. This perspective limits genuine comprehension. It misses how far earlier military events, cultural ideologies, and expressions of nationalism drove the instigators of both world wars. In particular, the acute animosity between France and Germany originated in the modern era with the Franco-Prussian War of 1870. Each nation created ideals of national superiority that conflicted with the other’s, dwelt on a cultural desire for retaliation known as Revanchism, and established patterns of nationalistic expansionism through unilateral military action. All of these habits defined the culture of both countries well before 1914 and motivated their belligerence in the twentieth century. | https://dspace.sewanee.edu/handle/11005/3682 |
In this talk, I will re-examine the distinction between semantic memory and episodic memory in light of recent empirical evidence, especially coming from human cognitive neuroscience over the last ten years. In light of this evidence, it is increasingly difficult to see what the distinction amounts to—which is of special interest to research on animal episodic memory, since comparative researchers have been accused of doing violence to a clear distinction borrowed from human psychology. First, I diagnose the central problem: semantic and episodic memories are so difficult to distinguish because they can be about the very same events. Second, I consider and reject some standard ways of drawing the distinction: conscious phenomenology (e.g. the feeling of autonoiesis), egocentricity, revisability, and voluntary recall. The criterion I recommend is etiological and contextual: the difference between a semantic memory and an episodic memory of the same event is determined by the historical origins of the memories and their default relations in the subject’s memory network. Consciousness may be said to play a crucial role in determining these default relations, but only if we focus on what it does rather than how it feels, on the way that conscious attention by default binds the semantic components of episodes in a holistic, temporally-ordered sequence. I will then discuss some empirical implications of this view: namely, a shift in emphasis away from behavioral measures applied to retrieval alone, to paradigms that assess a memory’s entire trajectory from storage to retrieval, focusing on the default processing characteristics of closely intertwined memory systems.
The Spoon Lecture: Relativity and Anchors in Time (Nicky Clayton and Clive Wilkins)
Einstein supposedly said~ Time only exists to prevent everything from happening at once. Although physical time proceeds forever forwards, mental time can travel backwards as well, indeed in every direction. Mental time travel allows us to re-visit our memories and imagine future scenarios. We make use of this process to define multiple realities; ones that define our sense of self in space and time. Our cognitive mechanisms for making sense of the world around us are aided and abetted by the patterns and ideas we use in our thinking, the way we choose to see the world around us and, importantly, the objects with which we choose to associate. We explore these ideas in the Spoon lecture.
Episodic thinking and theory of mind: a connection reconsidered (Christoph Hoerl)
One of the arguments sometimes put forward in support of the idea that episodic thinking (or 'mental time travel') is a uniquely human achievement is that episodic thinking requires theory of mind abilities, and that the latter are only found amongst humans. In this talk, I want to take a fresh look at where exactly the connection between episodic thinking and theory of mind might lie. I first criticize the dominant way in which this connection has been construed, which - perhaps influenced by other aspects of theory of mind research - has sought to connect episodic thinking primarily with a grasp of the idea of representation, or the idea of informational access. I then argue for a novel, alternative, way of connecting episodic thinking and theory of mind, which focuses on the category of an experience, and on the role grasp of that category might be seen to play in episodic thinking.
Causality and time: a fundamental connection (Ivo Jacobs)
Understanding something usually entails having a notion of its causal structure. Although in metaphysics causes and effects are typically regarded as occurring simultaneously, temporal priority (causes precede effects) is a major attribute for causal perception in humans and non-human animals. Planning for the future thus involves knowing that a current event affects the likelihood of another event occurring later in time. People perceive causation more strongly if they can intervene on events rather than just observe them, which might make planning easier when involving one’s own actions rather than uncontrolled events. Choosing among candidate causes is more difficult when the temporal gap between cause and effect is large. The inherent connection between certain stimuli and events (biological belongingness) in combination with memory allows agents to learn about causal relationships quicker and more effectively. A distinction can be made between accounts on how causes make a general difference on the probability of their effects occurring, and notions that involve a deeper understanding of the physical mechanisms involved in each cause-effect relationship. The latter presumably plays a less significant role in everyday prospective cognition. The study of cognition concerning time therefore seems to benefit from causal analysis.
Prospective cognition in corvids in a non-caching context: ravens can plan for a token exchange (Can Kabadayi)
Research on cognitive foresight and future-oriented cognition in animals has gained considerable attention in the recent years. Although much of the evidence comes from the studies on great apes, crow birds (corvids) represent another group on which the ability of foresight has been documented. Separated from the great apes approximately 300 million years ago, corvids represent a group which the ability of foresight evolved independently and thus the investigation of this ability in this group can shed light into the universal building blocks of complex cognition in general and cognitive foresight in particular. One criticism to the documented planning skills in corvids questioned the flexible nature of the corvid foresight as those studies utilized caching context and it was argued that innate predispositions might have played a role rather than the flexible foresight as corvids are habitual food-cachers. Thus there is a need to broaden the range of the foresight studies to investigate the flexible nature of this ability in corvids. Current study addressed this idea and we used a token-exchange task to investigate if ravens (Corvus corax) can prepare for an exchange event with a human experimenter that takes place in the future. Using the same paradigm, we also tested if ravens can exert self-control in the context of letting go of a smaller and immediate reward for the sake of a larger reward that will be available in the future. The results indicate that ravens are capable of a future token exchange with a human experimenter and they exert self-control in order to gain a larger reward in the future instead of getting an immediate but smaller reward. These results reveal that the foresight ability in corvids is not limited to the caching context and it emphasizes the presence of flexible future-oriented cognitive skills in the corvid family.
With the future in mind: towards a comprehensive understanding of the evolution of future-oriented cognition (Gema Martin-Ordas)
The human mind often wanders forward in time to imagine what the future might be like (future-oriented cognition) and backwards in time to remember personal events (episodic memory). Despite recent evidence, whether other animals— besides humans—can mentally travel in time is still object of an arduous debate. In this talk, I will critically review the theoretical and empirical assumptions behind this debate and question that the capacity for future-oriented cognition is uniquely human. I will conclude by arguing that the current view on the comparative research is based on general theoretical constructs that are problematic as well as on assumptions that have yet to be proved across different human populations. I call for a broader theoretical and empirical approach in which not only cross-species studies but also cross-cultural studies are needed if we are to understand future-oriented cognition.
Temporal Concepts and Three Ways of Thinking About the Future (Teresa McCormack)
In this talk I will distinguish between three types of future-directed cognition that can be considered to differ in terms of their sophistication: anticipation, planning, and episodic future thinking. I will discuss how each of these should be defined, how they might relate to other mental abilities such as metarepresentation, and the type of demands each of them places on temporal thought. Relevant data from developmental psychology regarding each of these types of future thinking will be described. I will argue that planning is more cognitively demanding than anticipation in part because it requires event-independent thought about time. Because of this, the ability to plan is closely linked to a grasp of the idea that there are multiple possible ways that events can unfold. I will consider whether or not such a grasp might be related to metarepresentational abilities. I will then talk about the sense in which episodic future thinking might be thought to require temporal perspective-taking abilities, by considering the concept of time that is involved in episodic future thinking. I will argue that there are important differences between spatial and temporal perspective-taking that impact on whether we should consider temporal perspective-taking to involve full-blown metarepresentational understanding.
Putting flexible animal prospection into context: escaping the theoretical box (Mathias Osvath)
Research into non-human prospection has long been mired in ideological-like convictions over the supposed uniqueness of human cognition. Closer inspection reveals just how much cognition in general – down to its simplest forms – is geared toward predicting the future in a bid to maintain homeostasis and fend off entropy. Over the course of life's existence on Earth, evolution has, through a series of arms races, gotten increasingly good at achieving this. Prospection reaches its current pinnacle partly based on a system for episodic cognition that – as research increasingly is showing –is not principally limited to human beings. Nevertheless, and despite some notable recent defections, many researchers remain convinced of the merits of the Bischof-Köhler hypothesis, with its claim that no species other than human beings is able to anticipate future needs or otherwise live in anything other than the immediately present moment. What might at first appear to be empirical disputes, turn out to reveal largely unquestioned theoretical divides. Without due care, one risks setting out conditions for “true” future orientation that are not even relevant for describing human cognition. In sorting out some of the theoretical and terminological muddle that frames contemporary debate, this talk makes a plea for moving beyond past dogmas, and instead putting animal prospection research into the context of evolution and contemporary cognitive science.
The Moral Significance of Animal Time and Memory (Valerie Soon)
Are all persons humans? On some theories of personhood, memory and self-identity are key criteria, and these are thought to be uniquely human capacities. Beings with these features can be made better or worse off over time, and as such, they deserve special forms of ethical regard. Recent studies in ethology have shown that some animals may possess certain forms of memory relevant to self-identity, specifically, conscious recollection of past experiences. But because we cannot communicate with animals, it is difficult to verify that they consciously recollect their past. Consequently, skeptics shy away from making ethical assertions about animal personhood based on these studies. I provide an alternative analysis of these studies and argue that the capacity forsubjective timekeeping is a sufficient, though not necessary, condition for some degree of personhood. Animals with this capacity can be minimally considered near-persons. | https://www.neuro.philosophie.uni-muenchen.de/events/blockcourses/abstracts/index.html |
Patience requires a leader to carefully evaluate tension points. How they are managed by others may reveal problem solving patterns that can help anticipate the unexpected and get closer to understanding the root causes of problems. As a leader, you must be extremely open-minded and patient under pressure in order to see it as an opportunity previously unseen. patience can be tested in many ways. Having an open mind can help to understand unique ways that others may approached problems.
The importance of control in a project is to stay ahead of failures that may happen within the project scope. Having mechanisms to control the project and ensure the project stays on track is of the utmost importance. The three different types of controls defined in this paper, give project managers a set of tools and mechanisms to follow when managing projects. Whether your project includes the automation of cybernetic processes, the ability to test each part like go/no-go or using the post-control mechanisms to further your knowledge for future projects, each of these methods give project teams a way to control important factors within their projects. Moreover, it is commonly acknowledged that monitoring and controlling is the best way to keep project failures within the identified specifics of the project scope.
World Book, 2013. Web. 28 Oct. 2013. Janda, Kenneth. "Checks and balances."
It is important to distinguish forms of reasoning in science in order to distinguish between science and pseudo-science. This essay will explore the concept of the scientific method and how it utilises inductive reasoning, followed by an exploration of Karl Popper’s argument that when scientists explore their ideas through inductive reasoning, they make it impossible for science to hold any more credibility than pseudo-science. This will then be followed a dismantling of Popper’s argument and deductive reasoning proposal on the basis that inductive reasoning is justified, falsifiable, and allows for scientific progress. First, before exploring the possible reasons to dismiss inductive reasoning, it is worth understanding completely how it is applied and justified for application. A helpful argument in understanding induction itself is Russell’s, in which he gives the example that if we hear thunder, it is reasonable to conclude that preceding that thunder came lightning based on our experiences of these occurrences in nature.
Therefore in conclusion “That which is accepted as knowledge today is sometimes discarded tomorrow” is an accurate statement as shown from the areas of knowledge history and natural sciences. However, emphasis on the word ‘sometimes’ must be given as history shows us that it will only be discarded if wrong or can be manipulated for personal gain. While the natural science show us knowledge must tested thoroughly before being discarded as all knowledge is useful.
Simplicity: a theory should be simple in it’s explanation and should bring order to phenomena which would not exist or be taken into consideration without the theory; in other words, it should be the basis of the phenomena. 5. Fruitful” a theory should stimulate new research findings, meaning it should disclose new phenomena or discover relationships between phenomena. While Kuhn’s 5 characteristics do help give direction to the process of determining which paradigm is to take over the old, it also comes with many potential problems. One major problem is that scientists may still reach different conclusions by using the same criterion because of different interpretations of the criterion.
The data collection will be the key to understanding the problem and will uncover reoccurring themes that can be addressed immediately. When the leadership of companies fails to look beyond what is to what could be, the company as a whole suffers and is not living up to its full potential. At Treadway, its leadership will have to look beyond where they currently are to see what a great company they can be by acknowledging its problems and taking steps to correct them.
This paper will focus on the results of research from experts who have analyzed the influence that resistance to change, potential sources of stress, and the consequences of change and stress have on organizations. As part of the results of each study, the authors’ conclude that there is an apparent need for additional research to be performed and the provided recommended approaches suggested in managing change and stress may not address all issues. The first of these topics explored will focus on individual resistance to change in organizations. Individual Resistance to Change in Organizations Individuals go through a reaction process when they are personally confronted with major organizational change (Kyle, 1993; Jacobs, 1995; Bovey & Hede, 2001). Within this process there are four phases that it consists of: initial denial, resistance, gradual exploration, and eventual commitment (Scott & Jaffe, 1988; Bovey & Hede, 2001).
For instance, if a clients get aggressive, anxious or any sort of negative behavior , I would notice the early warning signs that the clients is showing to me. • The main stuff I have learned is our own triggers, attitudes and the barriers. To prevent and manage crisis situation, being “hero “is not the way to solve it rather than being hero, we must be supportive to the clients. In the future, it is useful things we must need to remember and put that on the real life as well as it gives us learning opportunities after the crisis has been
Presently this adequate adaptation of danger is introduced to the partners in the wake of rolling out the essential improvements and changes. The Risk administration groups dependably perform intermittent testing to ensure t... ... middle of paper ... ...hoice is alleviation, attempting to reduce the threat presentation, leaving affirmation when in doubt for remaining perils that can 't be tended to by some other framework. In our company, we are using MYSQL database structure and it would be negligible troublesome. Regularly monotonous to change over the data if the data is secured in other database systems. MS SQL, terra-data. | https://www.123helpme.com/essay/Theories-pt-3-376250 |
hu | eng
date: 08th – 25th of November 2018.
opening: 7pm 08th of November 2018.
curator: Eliza Grisztel
In her works, artist Ács Kinga-Noémi reaches out to the heterogenic forms of the present in the most unusual ways, with the intention of reflecting on the main questions of feminine existence. These objects and installations reveal severe social issues in a playful way, and cause a sense of frustration through a purposeful choice of materials, in a gallery filled to the brim with their presence.
Reacting to the constantly changing effects of the present and even utilizing stereotypically effeminate colours, she presents the two inexhaustible topics of being a woman and the connection to their surroundings and their self-image. The patterns of action we develop – either formed consciously or instinctively as a reaction to society or the internet – cause our faith and relationship with ourselves to crumble. This leads to a neverending search for approval by others.
These pieces reflect on the impulses generated by the world wide web. The title is insightful, since affectio is the latin word for stimulus. If we google the usual questions and expressions like “beauty”, “perfect body”, “how do I know if he loves me” or “how should I act around my crush?”, we get an infinite amount of answers from all kinds of sources, although they often are far-removed from reality. It’s like the telephone game: we whispered words to each other, and the further they got, the more jumbled up and meaningless they became.
The trends manipulating society shift day by day thanks to the lighting-fast flow of data, thus society becomes a disfigured reflection of the internet. These trends set our relationships with others and ourselves by showing us ever-changing idols we aspire to become if we want to be sewn into the fabric of society. The mirror-like material used repeatedly represents our disjointed existence. The ceaseless need for approval causes a frustration that leads to a false body image, a lack of self-love and the constant questioning of what it means to be a woman. | https://telepgaleria.com/acs-kinga-noemi-affectio/?lang=eng |
Improving seismic data quality and utilising the full potential of microseismic monitoring systems.
Situations can arise at mines that require special, focused projects. IMS offers a variety of ad hoc seismological projects and analyses designed to improve seismic data quality and to get the most out of a microseismic monitoring system. These can be related to calibration and verification of seismological system settings (e.g. velocity and attenuation calibration, and site performance), calibration of mine-specific hazard analysis tools (e.g. calibration of Short Term Activity Tracker, ground motion prediction equations, and ground motion hazard for future mining scenarios) or forensic analysis of specific seismic events (e.g. calibration of Short Term Activity Tracker, ground motion prediction equations, and ground motion hazard for future mining scenarios) or advanced forensic analysis of specific seismic events (e.g. instant large-event analysis, and advanced large event analysis).
SEISMIC SYSTEM AUDIT
Verification of the performance of seismic sites is important for maintaining a fully operational microseismic system that is able to extract accurate information about seismic events. Recorded seismic data is used to evaluate performance in the following categories:
Background Seismic Noise
The pre-trigger waveform recordings of all sites are used to calculate the root-mean-square and spectral characteristics of background noise on a per-component basis. This helps to identify potentially problematic sensors, or explain variations in system sensitivity.
Site Response
The spectral response of sites is evaluated by comparing the recorded and expected responses. This helps to identify damaged sensors or those experiencing strong resonances. Appropriate cut-offs can then be set so that these resonances do not corrupt estimates of seismic source parameters.
Orientation and Polarity
Using the property of P-waves that they are polarised in the direction of their propagation, we verify the orientation settings of sensors, which is required for moment tensor inversion and location with direction algorithms.
Acceptance / Rejection Ratio
This is a summary of how often sites are used in processing. This is helpful in identifying faulty or misconfigured sites, which would consistently be excluded from processing.
SEISMOLOGICAL SYSTEM CALIBRATION
Assessment and improvement of quality of seismic data, including noise rejection, blast discrimination, velocity calibration and calibration of seismic quality factors. System calibration ensures the highest possible quality data. It includes the following aspects:
Velocity Calibration
Calibration blasts with known location are used to invert for the most appropriate per-site P- and S-wave velocities. When required, more complex 3D velocity models can be calibrated.
Blast Discrimination
Production and development blasts conducted in mines often appear similar to normal seismic events, leading to misclassification and polluting the seismic dataset. A special blast-discriminator algorithm, using waveform and source parameter information, can be calibrated to routinely identify these misclassified blasts. The result is a cleaner dataset. The algorithm is described in the following paper.
Noise Rejection
The sensitive sensors used in microseismic systems may record undesired signals and noise, such as electrical noise, gravitational and mechanical impacts in ore passes, crushers and other machinery. A number of noise-rejection algorithms can be calibrated and applied to the dataset to remove these unwanted signals.
Calibration of Seismic Quality Factors
Seismic quality factors (Q-values) define how quickly seismic waves lose energy due to inelastic attenuation or scattering as they travel through the rock mass. Calibration of these seismic Q-values is important for reliable estimates of seismic source parameters, particularly radiated seismic energy.
SHORT-TERM ACTIVITY TRACKER
Calibration of Short – Term Activity Tracker (STAT)
STAT is a special tool in Ticker3D that monitors the current activity rate, and quantifies the probability that activity is higher than a reference rate. The utility of STAT is based on the principle that if the rate of seismic activity increases, so does the probability that one of these events may be larger and damaging. Once calibrated, STAT can be monitored in real time, and automatically notifies if the activity rate increases.
Please refer to the following paper.
INTERMEDIATE AND LONG – TERM HAZARD
Assessment of Intermediate- and Long – Term Seismic Hazard
Intermediate- and long-term hazard assessments quantify the probability that a potentially damaging seismic event will occur in a given volume within a given interval of time in the future (on the order of months to years). The method of assessment is presented in Sections 3 and 4 of MSRB, and includes the following steps:
- the quality and consistency of seismic data is checked, and the largest events are manually reprocessed
- a seismogenic volume is selected
- the expected value and upper limit of the next record-breaking event is evaluated
- the probabilities of occurrence of events is calculated
The assessments should be re-evaluated routinely (annually or bi-annually) or when the largest event record is broken.
GMPE CALIBRATION
Development of Ground Motion Prediction Equation (GMPE)
Ground motion prediction equations (GMPEs) are specially calibrated equations that relate the ground motion to the size of an event (in terms of seismic potency or radiated seismic energy) and the distance from the event. The estimation of ground motion can be done in terms of peak ground velocity/peak particle velocity (PGV/PPV) or cumulative absolute displacement (CAD). GMPEs are used to perform ground-motion hazard assessments, identify areas that potentially experienced damage during a large event, and estimate inelastic deformation associated with seismic events.
GROUND MOTION HAZARD FOR FUTURE MINING
Assessment of Seismic and Ground Motion Hazard for Future Mining Scenarios
The assessment is based on the modelling of seismicity expected for the planned mining steps. The Salamon-Linkov method is used for this. The modelling needs to be calibrated for historical mining steps using the observed seismicity.
Combining observed and expected seismicity allows for the estimation of future seismic hazard and ground motion hazard. This requires the estimate of the next record-breaking event and calibrated ground motion prediction equation. The results can be presented in terms of likelihoods of events according to the risk assessment matrix adopted at the mine. The details of the method can be found here.
This method is most useful when comparing the seismic or ground motion hazard for different mining scenarios (e.g. ranking different stoping sequences).
INSTANT LARGE EVENT ANALYSIS
Rapid Large Event Analysis
Within hours of a large or damaging event, analysis by an experienced seismologist will confirm location, location uncertainty, source parameters and results of moment tensor inversion.
ADVANCED LARGE EVENT ANALYSIS
Advanced Analysis of Large or Damaging Seismic Event
Advanced analysis of large or damaging events can provide insight into the mechanics of their sources and help to explain the damage. The analysis can include: | http://www.imseismology.org/ad-hoc-seismology/ |
Jan 01, 2016· Lacey S J. Vibration Monitoring of the Internal Centreless Grinding process Part 2: Experimental Results, Proceedings of the Institution of Mechanical Engineers, Vol 24, 1990. Lacey S J. An Overview of Bearing Vibration Analysis, Schaeffler (UK) Technical Publication. Lacey S J.
Jul 15, 2020· Vibration is most commonly measured using a ceramic piezoelectric sensor or accelerometer. An accelerometer is a sensor that measures the dynamic acceleration of a physical device as a voltage. Accelerometers are full-contact transducers typically mounted directly on high-frequency elements, such as rolling-element bearings, gearboxes, or spinning blades.
Vibration measurements are thus usually taken at the bearings of machines, with accelerometers mounted at or near the bearings. Since conclusions regarding machine condition - and hence whether or not money and human safety are risked - depend on the accuracy of measurements, we must be very careful how measurements are taken.
Vibration monitoring devices use accelerometers to measure changes in amplitude, frequency, and intensity of forces that damage rotating equipment. Studying vibration measurements allows teams to discover imbalance, looseness, misalignment, or bearing wear in equipment prior to failure.
Metrix Instrument Co. is the leading vibration monitoring solution provider. We provide machinery condition monitoring solutions to the world's leading manufacturers and users of cooling towers, gas turbines, reciprocating compressors, and other rotating and reciprocating machinery. Our vibration monitoring products include digital proximity systems, probes, sensors and transmitters, signal ...
The following is a guide to SKF's experience in sensor use in the most common industrial sectors that employ vibration monitoring. For each industry, the top four features required of a quality vibration sensor are stated and explained. Industrial sensor choices are graded as follows:
The Structural vibration analysis service includes both Operational deflection shape (ODS) analysis and modal analysis. The machine's movements are measured under existing operating conditions, allowing us to see how various points on a machine move (amplitude and phase) and to define the machine's movement patterns.
Basics of Vibration Analysis & Vibration Monitoring. The 10 Most Important Vibration Analysis Tips You Need to know ... during its normal operation as a consequence of friction and centrifugal forces of both the rotating parts and the bearings. As a result, Vibration can be measured, recorded, trended, and in most cases even heard. ...
Vibration monitoring. We offer advanced vibration monitoring techniques for early detection of a wide variety of mechanical fault conditions such as unbalance, misalignment, resonance, looseness, and faulty gears or bearings on all types of rotating machinery. Machine vibration monitoring is the most widespread method to determine the health of ...
Using the same vibration monitoring technology, you can detect and prevent expensive machine damage in real-time. The ifm product range includes vibration transmitters, vibration sensors, accelerometers, and evaluation electronics. Vibration transmitters and sensors detect damaged bearings…
on the bearings can also be utilized to detect problems with other components of the rotating system. For this reason, the analysis of bearing vibrations is of great importance for failure detection and monitoring of the machine health con-dition . Different techniques have been used to monitor rotat-ing machines.
In such cases, a piezoelectric accelerometer or seismic velocimeter is used to measure the absolute bearing vibration severity. Shaft and bearing vibration monitoring is specified in the ISO-7919 and ISO-10816 norms respectively and is applicable to any rotating machines such as hydro turbines, gas turbines, steam turbines, pumps, fans, cooling ...
May 22, 2018· Vibration will detect machine issues like unbalance, misalignment, looseness, etc. early enough that they can be corrected to prevent bearing failure. Once the bearing starts to fail, ultrasound will find those faults first – an early warning that can be monitored. Bearing faults will show up in the vibration spectrum as the fault progresses.
The AMS Wireless Vibration Monitor delivers full vibration data over a self-organizing wireless mesh network. It provides rich information about machinery health for both operations and maintenance personnel. Overall vibration, PeakVue measurements and temperature readings can be easily integrated into any control system or plant historian, while diagnostic data can be displayed by AMS Device ...
A vibration monitoring system is a set of tools used to measure one or more parameters to identify changes along machinery life.Monitoring these parameters help identifying early faults like imbalance, bearing faults, looseness among others.
Jun 24, 2019· In this article, we are going to talk about the different types of vibration, the different types of vibration sensors, and how to choose a vibration detection device based on parameters. For optimum performance of the machines, it is necessary to continuously monitor the parameters like speed, temperature, pressure, and vibration. | https://www.vmfzomergem.be/crusher/31201/bearing_vibration_monitor.html |
Germany’s seismicity is mainly characterised by weak and often imperceptible seismic events. However, there are also regions whose recent and historical seismicity leads to moderate (M ≥ 5) to strong (M ≥ 6) earthquakes. The consequences are particularly significant for Germany as an industrialised country with a high degree of urbanisation, a dense network of infrastructures and highly industrialised exposed regions with capital-intensive and sensitive advanced technologies. In addition, the increased safety awareness of the population, authorities, organizations and industry has significantly increased the need for user-specific, real-time information after an earthquake event.
In the ROBUST project, a user-oriented earthquake early warning and response system is being developed based on the combination of interconnected decentralized sensor systems for earthquake early warning and local monitoring systems of structures with connection to digital building models (BIM). The system enables earthquake detection, the triggering of fast automatic shutdown procedures and other immediate measures, rapid damage prediction and target group-specific real-time information transfer based on a KATWARN warning system extended to a distributed, decentralized architecture.
The application of the system is carried out prototypically in the Lower Rhine Bay by integrating intelligent sensors into the existing network of the Geological Service – NRW (GD-NRW). Local monitoring systems will be installed for a bridge structure and an industrial plant and coupled with their digital building models. The functionality of the overall system will be tested and validated by simulating representative earthquake scenarios for the Lower Rhine Bay. The system is developed in close cooperation with the industrial partners in order to optimally consider the specific user requirements for the system.
Thematic focus:
- Development of intelligent sensor systems for integration into existing seismic networks, which can perform decentralised user-specific data analysis and alarming in addition to real-time data acquisition.
- Coupling of intelligent seismic sensors with local sensor systems for building monitoring and their integration into digital building models (BIM).
- To develop new methods for fast and detailed damage prediction using suitable damage indicators for reliable evaluation of changes in condition after a damage event.
- To link the developments through a distributed, decentralized communication infrastructure. | https://www.cwe.rwth-aachen.de/en/projects/earthquake-engineering-projects/robust-2/ |
Scientists at the Hawaiian Volcano Observatory (HVO) monitor, analyze, and report on earthquakes that occur throughout the Hawaiian Islands. HVO is unique among USGS volcano observatories in that it is responsible for earthquake monitoring as it relates to both volcanic and seismic hazards. In this dual role, HVO operates a "Tier 1" regional seismic network as part of the USGS Advanced National Seismic System (ANSS).
Seismic network is operated in partnership.
The seismic monitoring network in the State of Hawaii includes various types of ground-shaking sensors at about 100 sites. These are operated by different partners as one statewide virtual network known as the Hawaii Integrated Seismic Network (HISN). Partners include the NOAA Pacific Tsunami Warning Center (PTWC), USGS National Strong-Motion Project (NSMP), Incorporated Research Institutions for Seismology (IRIS), and Infrasound Laboratory University of Hawai‘i (ISLA).
HVO maintains about 60 stations on the Island of Hawai‘i to detect and locate small-magnitude earthquakes beneath the summits, rift zones, and flanks of the most active volcanoes. Since earthquakes can also happen throughout the island chain, sensors operated by HVO's partners help to record and measure potentially damaging earthquakes and warn of tsunamis. For a detailed history of seismic monitoring at HVO since its founding in 1912, see this report.
Various types of seismic instruments record waves of sound and motion.
Different types of sensors called seismometers are needed to record the full ground motions of both small- and large-magnitude earthquakes. HVO uses four main types of seismic instruments as part of its monitoring network.
Short period – HVO's network has historically been dominated by short period instruments. Most sensitive to frequencies of around 1 second, this type of seismometer is good at recording high-frequency signals from local earthquakes. It is especially useful for recording P-wave arrival times and first motions in a cost effective way. In recent years, short period instruments have begun to be phased out by more capable broadband sensors.
Broadband – Broadband seismometers come in many varieties but generally are responsible to seismic signals ranging from about 0.01 – 120 seconds or higher. This allows for recording a broad range of signals at a variety of periods, allowing for more in-depth study of seismic sources and other phenomena. This type of seismometer is significantly more expensive than other options and is more sensitive to local site conditions.
Strong motion – Large earthquakes can shake the ground with accelerations that exceed the force of gravity. This can cause velocity-based seismometers, such as short period and broadband, to go off-scale and "clip." A strong motion accelerometer is a type of seismometer that will stay on scale no matter how strong the shaking. Most broadband instruments are paired with strong motion accelerometers to ensure good recording of signals no matter how strong the shaking. Strong motion instruments are used widely by the engineering community to measure how shaking affects manmade structures. They are also the primary data source behind the USGS ShakeMap products.
Infrasound – Earthquakes, eruptions, explosions, and other phenomena emit sounds into the air as well as into the solid earth. An infrasound sensor is a special type of microphone that measures sound waves in the air. This offers a unique way to monitor explosive volcanic activity and has great promise to aid in rapid eruption detection.
HVO Seismologists analyze Hawai‘i earthquakes.
Seismic data arriving at HVO is processed and analyzed using different tools to track dramatic and subtle changes that occur in seismicity, especially within and beneath the volcanoes. In 2009, HVO became the first regional seismic network in the U.S. to adopt the ANSS Quake Management System (AQMS). AQMS is a database-driven extension to the Earthworm seismic system for both automated real-time and manual post-processing of seismic data. It has helped HVO improve its capability to detect small changes in volcanic processes and better characterize large, damaging earthquakes.
Hypocenter and Magnitude – The mainstay of any seismic network is determining the magnitude and location of individual earthquakes. HVO uses the Hypoinverse package for its routine earthquake processing, both automatic and manual. The types of magnitude computed by HVO include Md (duration) and Ml (amplitude, or local). This information can be displayed on the map of monitoring instruments. Read more in our volcano watch article, How big is that earthquake? Why magnitudes sometimes change.
RSAM – Real-time Seismic Amplitude Measurement (RSAM) uses the average amplitude of signals measured by seismometers in a given area. The RSAM value climbs higher during periods of persistent and stronger ground shaking that occurs during earthquake swarms, tremor, sustained gas emissions, spatter, and lava fountains. Learn more about RSAM at the Alaska Volcano Observatory Web site.
Swarm detection – An earthquake swarm is a sequence of events closely clustered in time and space without a clear main shock. When swarms occur at volcanoes, they can represent subsurface magma movement and be an indicator of an impending or ongoing eruption. HVO alarms on possible swarm activity when the event rate significantly increases and surpasses a preset threshold value in a given geographic area.
Tremor detection – Tremor occurs when fluids, such as magma or gas, move within a conduit, causing it to resonate. Thus, characterizing tremor is important for understanding and monitoring magmatic pathways. Using an automated envelope cross-correlation method, HVO can detect and estimate the location of tremor sources. Learn more in Wech and Thelen 2015.
HVO reports seismic activity after review.
After HVO detects seismic signals with its monitoring network and analyzes those signals with computer tools, it disseminates the information. The AQMS system automatically locates earthquakes and posts them to the internet in real-time. Within hours to days, HVO seismic analysts review, re-compute, and update earthquake locations and magnitudes. This information becomes part of the ANSS Comprehensive Earthquake Catalog (ComCat) for ease of searching by anyone.
Larger earthquakes above magnitude 4.0 are usually widely felt and may be damaging. These trigger a rapid response by HVO duty seismologists and others, who manually review the earthquake and issue a news release within two hours. Additionally, the National Earthquake Information Center (NEIC) will compute a number of enhanced earthquake information products such as Did You Feel It, ShakeMap, PAGER, and moment tensors that are used by USGS and its partners for hazard analysis and decision making.
You can sign up to receive customized, automatic earthquake alerts via email or text message with the USGS Earthquake Notification System (ENS). This is a free service that will send you notifications when earthquakes happen in your area. | https://volcanoes.usgs.gov/observatories/hvo/hvo_monitoring_earthquakes.html |
Congress included $5 million in an omnibus funding package for the US Geologic Survey (USGS) Natural Hazards program to begin implementing an earthquake early warning system on the west coast. The omnibus package passed Saturday with support from President Barack Obama. UO worked in coalition with the University of Washington, University of California--Berkeley, and Caltech to support funding for the project which will create research opportunities and enhance public safety by increasing the number of seismic sensors distribute throughout the west coast.
On December 1, Governor John Kitzhaber proposed a 2015-17 budget that includes investments in earthquake monitoring. Under the Governor's recommendation, the state will invest in an array of 15 seismometers that are already installed across the state. The sensor array, which is designed to detect seismic action, is owned by the National Science Foundation (NSF) with a scheduled move to Alaska in 2015; the Governor's budget proposes that the state purchase the array as other jurisdiction have.
The implementation of a federal earthquake early warning system combined with the proposed purchase of the NSF will enhance earthquake research at the university. The purchase will also strengthen Oregon's contributions to the USGS Pacific Northwest Seismic Network (PNSN). Through agreements with UO, UW and other institutions PNSN operates seismic stations, acquires seismic data from other organizations, processes the data into information products, disseminates them, and makes the data available to the public. PNSN earthquake data, including data from the array to be purchased under the Governor's proposed budget, can be viewed by the public at http://pnsn.org/. | https://gcr.uoregon.edu/newsletter/article/governors-proposed-budget-and-federal-omnibus-funding-package-include-funds-earth |
In this study, we focus on the accurate and early prediction of Localized Heavy Rain (LHR) using multiple sensors. Traditional sensors, such as rain gauges and radar, cannot detect LHR until cumulonimbus clouds cover the sensors. In contrast, Surface Meteorological Monitoring Networks (SMMNs) can accurately measure rainfall in the vicinity of the sensors, thereby detecting LHR earlier than traditional sensors. By evenly placing the sensors around a large city, a SMMN should be useful in predicting LHR. However, since most sensors are placed in a different installation environment, their raw sensor data may significantly differ depending on their surrounding environment (i.e., altitude and sky view factor). Therefore, we propose a calibration scheme for a SMMN that utilizes many sensors in various installation environments and implement a novel LHR prediction system that produces accurate and early LHR predictions. Our system proved to accurately predict LHR 30 minutes earlier than traditional schemes. | https://keio.pure.elsevier.com/en/publications/accurate-and-early-detection-of-localized-heavy-rain-by-integrati |
Expertise in the following.
* Energy
* Civil Engineering
* Education
* Seismic Monitoring
* Ocean Bottom Systems
* Multidisciplinary
* Earthquake “Early Warning” Systems
More Info
Structural Health Behavior Monitoring
The process of implementing a damage detection and characterization strategy for engineering structures is referred to as Structural Health Monitoring (SHM). Here damage is defined as changes to the material and/or geometric properties of a structural system, including changes to the boundary conditions and system connectivity, which adversely affect the system's performance. The SHM process involves the observation of a system over time using periodically sampled dynamic response measurements from an array of sensors, the extraction of damage-sensitive features from these measurements, and the statistical analysis of these features to determine the current state of system health. For long term SHM, the output of this process is periodically updated information regarding the ability of the structure to perform its intended function in light of the inevitable aging and degradation resulting from operational environments. After extreme events, such as earthquakes or blast loading, SHM is used for rapid condition screening and aims to provide, in near real time, reliable information regarding the integrity of the structure.
More Info
Naval Seismic
The system uses the latest weak motion sensor technology in a robust ocean bottom installation. The three-footed sensor package shown is designed to sink into a sandy sea floor, affording excellent coupling to ground movements with little interference from water motion. Digitizer and communications electronics is housed in a separate unit which floats above the sensor package.
More Info
Emergency Shut-Down System
A single internal accelerometer or a combination of up to three (3) high- speed external digital accelerometers to measure the acceleration of the “G” force shockwave against time. The data is processed on the on-board processor to distinguish the difference between typical industrial movement (trucks going by, locomotives, ground work drilling etc) and actual shock waves, or “G” forces.
When identified, the system shuts down supply of hazardous materials bringing the manufacturing to a complete stop.
More Info
Strong Motion Networks
New instrumentation has been sought to monitor this emerging situation and included finding suitable locations within built up areas to accept and install earthquake monitoring systems. The NetQuakes seismographs specification requires access the Internet via a wireless router connected to an existing Broadband Internet connection. The seismograph then transmits data to the USGS only after earthquakes above the magnitude of around 3, but will not consume any significant bandwidth and should require only minimal maintenance.
While enhancing the Strong Motion Network coverage in this seismically high-risk area, the measurements improve also the ability to make rapid post-earthquake assessments of expected damage and contribute to the continuing development of engineering standards for construction projects. They may well also shape future requirements within other urban areas over a longer time period.
More Info
Earthquake Early Warning
Earthquake Early Warning and Rapid Response SystemSeismic
Pre-shake (P Wave) warning and preventive actions
Output: earthquake alert signal
As a rule, only few stations are required
Hi-end seismic instrumentation
Not always possible to implement (benefit zone)
High level of risk and responsibility
Sounds trendy from the marketing point of view
Monitoring and Rapid Response System that, in case of an earthquake, measures the local accelerations, generated a detailed shakemap, compares the accelerations with the design limits of the facilities and generate alarms accordingly. The supplied instrumentation consists of field stations with borehole accelerometers and intelligent seismic recorders with associated peripheral equipment designed to work under the harsh environmental conditions. In addition a system central cabinet is supplied featuring hardware and specialised software to facilitate full configuration, operation and interfacing within the SEIC's local and remote systems.
The outputs of the EWS comprise of real time data streams from remote stations, processing of these streams and generating an earthquake alert of a destructive seismic earthquake, distributable to several institutions enabling vital information to be supplied to relevant officials and agencies.
The outputs of the RRS consists of processing of onsite seismic data continuously, seismic event triggered SMS messages from remote stations summarizing seismic event parameters, evaluation of incoming event parameters and processing these data to obtain damage estimation and event severity distribution across the metropolitan area, distribution of these results via real-time communication to relevant officials and agencies.
More Info
Smart Sensors
Direct Measurements Inc (DMI) develops and produces wire free sensors and portable devices used to measure and predict the structural health of operating systems in the fields of aerospace, civil infrastructure, and energy systems.
The DMI product line is the Dual Purpose Sensor (DPS) for strain and fatigue crack measurement on structural components. The DPS system consists of a gage that is bonded directly to the structure, a small sensor head that is mounted over the gage, and a control hub that contains hardware for power, data collection & processing, and communications.
More Info
Back to top
P.O. Box 53193 Tel Aviv 61531 Israel Tel. +972-3-6484196, Fax. +972-3-6487483
E-mail: | http://www.eltrap.co.il/seismic-structural-health-monitoring |
Combining data from a new generation of satellites with a sophisticated algorithm, a new monitoring system developed by researchers at the University of Bath with NASA could be used by governments or developers to act as a warning system ensuring large-scale infrastructure projects are safe.
The team of experts led by NASA’s JPL and engineers from Bath verified the technique by reviewing 15 years of satellite imagery of the Morandi Bridge in Genoa, Italy, a section of which collapsed in August 2018, killing 43 people. The review, published in the journal Remote Sensing, showed that the bridge did show signs of warping in the months before the tragedy.
Dr Giorgia Giardina, Lecturer in the University’s Department of Architecture and Civil Engineering, said: “The state of the bridge has been reported on before, but using the satellite information we can see for the first time the deformation that preceded the collapse.
“We have proved that it is possible to use this tool, specifically the combination of different data from satellites, with a mathematical model, to detect the early signs of collapse or deformation.”
While current structural monitoring techniques can detect signs of movement in a bridge or building, they focus only on specific points where sensors are placed. The new technique can be used for near-real time monitoring of an entire structure.
Jet Propulsion Laboratory Lead author Dr Pietro Milillo said: "The technique marks an improvement over traditional methods because it allows scientists to gauge changes in ground deformation across a single infrastructure with unprecedented frequency and accuracy.
"This is about developing a new technique that can assist in the characterisation of the health of bridges and other infrastructure. We couldn't have forecasted this particular collapse because standard assessment techniques available at the time couldn't detect what we can see now. But going forward, this technique, combined with techniques already in use, has the potential to do a lot of good."
This is made possible by advances in satellite technology, specifically on the combined use of the Italian Space Agency’s (ASI) COSMO-SkyMed constellation and the European Space Agency's (ESA's) Sentinel-1a and 1b satellites, which allows for more accurate data to be gathered. Precise Synthetic Aperture Radar (SAR) data, when gathered from multiple satellites pointed at different angles, can be used to build a 3D picture of a building, bridge or city street.
Dr Giardina added: “Previously the satellites we tried to use for this research could create radar imagery accurate to within about a centimetre. Now we can use data that is accurate to within a millimetre – and possibly even better, if the conditions are right. The difference is like switching to an Ultra-HD TV – we now have the level of detail needed to monitor structures effectively.
“There is clearly the potential for this to be applied continuously on large structures. The tools for this are cheap compared to traditional monitoring and can be more extensive. Normally you need to install sensors at specific points within a building, but this method can monitor many points at one time.”
The technique can also be used to monitor movement of structures when underground excavations, such as tunnel boring, are taking place.
“We monitored the displacement of buildings in London above the Crossrail route,” said Dr Giardina. “During underground projects there is often a lot of data captured at the ground level, while fewer measurements of structures are available. Our technique could provide an extra layer of information and confirm whether everything is going to plan.”
Dr Giardina has already been approached by infrastructure organisations in the UK with a view to setting up monitoring of roads and rail networks. | https://www.bath.ac.uk/announcements/new-high-definition-satellite-radar-can-detect-bridges-at-risk-of-collapse-from-space/ |
Significantly, under this section the main object will be to analyze how a certain economic phenomenon by a business is likely to impact on the targeted group and consequently ascertain the end result of such undertakings. Thus, microeconomics gives us an insight of what is likely to be anticipated when certain changes are instigated with regard to the operations of a company and this is important because it can further aid in making proper decisions and initiating sustainable operating strategies.
For this section, the microeconomic issue identified is the cutting of prices of products or equally giving of offers by business. The table below briefly illustrates on the tangible issues that will be further discussed.
- PRICE CUTS
Undisputedly, the pricing of a certain product eventually determines on whether it is likely to guarantee higher sales or losses. Ordinarily, customers are sensitive when it comes to the price of s product that they want to purchase because if it is not a pocket friendly price, then a majority will opt to buy an alternative product that will equally serve the same purpose but at a reasonable price (Dogan, et al 2013).
Similarly, when operating under a certain field competitors are bound to be present. This means that for one to outsmart them they have to come up with effective strategies to lure more customers to buying their products as compared to the rival’s and subsequently gain a large market share ( Pauwels & D’aveni, 2016). Hence, pricing seemingly plays a vital role under such circumstances.
Important to note is that before a business embarks on an initiative of providing price cuts on its products, there are certain essential factors that must be considered. This is so because not all price cuts may work for the advantage of the company. In fact, it is assumed that most price cuts tend to lead to low profit margin for the concerned business and this may hurt the overall operations of the business.
Among the things to be considered includes the long term implications of price cuts. For instance, one a price cut has been made and new customers have joined the bandwagon of purchasing it, increasing such a price thereafter may lead to loss of these customers as such a business must put in place other plans such as improving the quality of the product so as to demand a higher price because without such modification the initial price cut may end up hurting the business.
So as to answer the critical question of why do various businesses offer price cuts, the subsequent section of this paper will dwell on analyzing the various tools identified in discussing the economic issue.
- Competition
Foremost, competition is one of the key features of any market. However, stiff competition may force a business out of the market as only the dominant participants get to have the larger market share. To mitigate such an event occurring, businesses are inclined to offer price cuts to their products so as to retain a fair share of the consumers in the market.
By giving such price cuts, it means that such a company can compete fairly in the area of operation. Accordingly, one can argue that consumer would often resort to buying products at reasonable prices, hence if one of the competitors is offering the same product at a higher price it is highly likely that they will lose buyers to the company that gives relatively cheaper pricing. In such a situation, to promote a fair competitive market, prices will thus be relatively proportionate as a result leading to a fair share of each participant in terms of customers and the market place.
- Sales
Significantly, when a product does not sale it may eventually cause the business to succumb to losses. Thus, the concept of sales can be boosted in a twofold channel. First, for new products that have been introduced to a market it is imperative that price cuts are given so as to entice customers into buying the products.
On the other hand, when there are low buy outs of products, then a company may opt to initiate price cuts all in a bid to try and revamp the product. Generally, price cuts that aim to boost the sale of a commodity have to address a certain deficiency. In this way, having reduced prices serves as an effective tool in enhancing the purchase power of consumers towards a specified product.
- Brand Promotion
Particularly, for new products that are unknown to consumers, it is vital that price cuts are provided. This is so because, often consumers may refrain from interacting with new products in the market based on aspects such as having a preference of the already existing ones. Such circumstances may impair the emergence of new businesses in that market. Thus, when price cuts are offered as incentives for customers, it id then highly likely that new consumers will indulge in buying the given product based on its reduced pricing.
- Market dominance
Naturally, for businesses that operate in the same field of operation the market share that one has over the other largely matters. The market share determines the profit that a company expects to acquire from its sales. Hence, companies are motivated to initiate strategies that would put them at an advantage position over their rivals. One of the ways of doing this is by providing price cuts on the products of the business. Price cuts as aforementioned in the discussed sections are an allure for new customers.
When one business obtains new customers that belonged to a rival company it subsequently means that the former company acquires a large market share. However, such an undertakings has its downside in that it forms a platform for emergence of a monopolistic market structure whereby there is only one dominant player. When this happens, consumers are put in the liberty of that dominant player in the market because such a business has all the power and keys of controlling how that particular market will operate.
- Economic recession
Notably, the economic state of a country determines how consumers of products will purchase and spend on products. In the case where the economy is booming and businesses are not financially constrained, consumers are highly likely to purchase products without much limitations or considerations such as on pricing. In this scenario, offering price cuts whereas fellow competitors are not may harm the business because consumers may not give too much concern about their spending.
On the other hand, when there is an economic slump, in that businesses are not doing as well as they would normally do this thus calls for effective measures to retain and attract customers so as to continue operating.
Under an economic recession situation, consumers would preferably want to spend less. To match with such changed dynamics, then one would argue that price cuts on the products of a business are the most viable solution to follow.
- Market failure
Considering market failure is a concept that occurs as a result of inefficient allocation of certain resources within the market of operation, then such a situation is consequently likely to affect the operations of the company (Fabella, 2015). For instance, a monopolistic market structure may be deemed as a market failure ingredient based on the fact that new businesses will find it hard to compete in a market that is largely dominated by one player.
Nonetheless, in such a situation a company may opt to provide price cuts on its product so as to try and mitigate the market failure effects which if not diminished will certainly curtail the operations of the other businesses.
- Government failure
Significantly, the government is duty bound to make sure that businesses operate in a fair and friendly environment. To do this, certain limitations must be imposed and constraining barriers broken down. For instance, take a situation whereby the government fails to monitor the operations of businesses through relative agencies, in such a situation certain business may drain consumers by instigation undertakings that would solely serve their own interest. One of such an undertaking may be over-pricing on the produced products.
However, such an undertaking may not suit all the businesses within the market as such prompting the need to lower prices of similar goods so as to counter the other business competitors.
SECTION SUMMARY
Nonetheless, there may exist factors that may affect this equilibrium price such that a business may be forced to make adjustments. This is of essence because without such alterations, a business is likely to operate under losses. The aspect of price cuts maybe one of the ways that business may use to reach certain equilibrium.
By giving price cuts it fundamentally indicates that a company aims at first increasing its sales and similarly obtains new customers. Importantly, aspects such as the profit margin that the business aims at must be considered before making such a move. In doing this, prior research is essential because without having knowledge of such information then a business may orchestrate its failure.
CONCLUSION
Foremost, markets are placed that are guided by certain distinctive features that must be observed and preserved so as to allow business to operate efficiently. For instance, without embracing the concept of fair competition between rival businesses, then one may triumph over the other leading to unfair labor practices.
Significantly, the importance of government intervention in market practices cannot be ignored. The government plays a key role in regulation of various aspects of the market so as to facilitate proper co-existence between the firms themselves and the consumers that they serve. Without such an intervention, evidently every business would seek to protect their own interests putting aside all other basic requirements such as offering quality products.
When it comes to the various macroeconomic issues that may affect the operations of markets, first it is important to note that such issues may have a direct effect on the activities of consumers and as a result end up curtailing the operations of the business in the end. Microeconomic issues should be looked at from a wider scope. Their particular effects should be analyzed in depth so that the right techniques are initiated to mitigate on their possible hazards.
Significantly, these issues should never be ignored before they may have adverse effects on the operations of the company as such creating the need to find way to move around them and benefit the business.
Finally, without fair market practices, not only does firms suffer but consumers too share in the same suffering. This calls for proper market practices that protect both the interests of the businesses and consumers so that none is inclined to spear-head their own interests on the expense of the other. Where unfair practices may emerge, it is imperative that even the firms themselves take personal measures and approaches to meditate on the negative consequences.
Reference
Boyd, T. (2015, Nov 28). Woolies crisis to go for years.The Australian Financial Review
Retrieved from https://search.proquest.com/docview/1736670877?accountid=45049
Dogan, Z., Deran, A., & Koksal, A. G. (2013). Factors influencing the selection of methods and
determination of transfer pricing in multinational companies: A case study of United Kingdom.International Journal of Economics and Financial Issues, 3(3), 734-n/a. Retrieved from https://search.proquest.com/docview/1392996149?accountid=45049
Fabella, R. V., & Fabella, V. M. (2015). Re-thinking market failure in the light of the imperfect
state. St. Louis: Federal Reserve Bank of St Louis. Retrieved from https://search.proquest.com/docview/1698893264?accountid=45049
Pauwels, K., & D’aveni, R. (2016). The formation, evolution and replacement of price-quality
relationships.Academy of Marketing Science Journal, 44(1), 46-65. http://dx.doi.org/10.1007/s11747-014-0408-3
Shazad, M. M., & Miniard, P. W., 2013. Reassessing retailer’s usage of partially comparative
Pricing. The Journal of product and brand Management, 22 (2), 172-179. http://dx.doi/10.1108/10610421311321077
Spillan, J. E., & Ling, H. G. (2015). Woolworths: An adizes corporate lifecycle perspective. | https://www.bestessaywriters.com/price-cuts/ |
Outlook for overall women’s apparel industry. Since the downturn that began in the early 2000s significantly impacted the women’s apparel industry, the increasing rate of overall sales had presented a slight decrease from 2004. Although it came up from 3. 5% to 5. 7% in 2004, it began to come down from that year as well with a relatively stable dropping rate. However, it led to a significant fluctuation among units that sold in different price ranges. Let’s take a look at Chart-1 below which is transformed from Exhibit-2 in the case to the form of increasing rate and proportion respectively. According to (a), units sold whose price is more than $200, and in the range of $100 and $200 increase well below the average increasing rate. While the units sold whose price ranged from $50-$100 and under $50 increased higher than the average increasing rate. What’s more, the growth rate of units sold under $50 had jumped to 11. 50% in 2007 by 4. 28% which is in great contrast with that of only 1. 5% whose price ranged from $100-$200.
Chart-1: (a)
Price Point
2006 (growth rate)
2007 (growth rate)
difference
$200+
0. 6508%
2. 8736%
2. 2228%
$100-$200
-0. 0457%
1. 5075%
1. 5532%
$50-$100
7. 0548%
9. 0851%
2. 0303%
Under $50
7. 2188%
11. 5010%
4. 2822%
Total
5. 1272%
8. 1631%
3. 0359%
According to (b), the units sold proportions presented little difference from 2005 to 2006. However, in 2007, we found that 2 points less of units sold pricing up from $100 and 2 points more of that were priced under $50 compared with the previous year.
Chart-1: (b)
Price Point
2005 (proportion)
2006 (proportion)
2007 (proportion)
$200+
10. 7601%
10. 3020%
9. 7982%
$100-$200
17. 0388%
16. 2004%
15. 2036%
$50-$100
34. 0776%
34. 7025%
34. 9983%
Under $50
38. 0456%
38. 8025%
40. 0000%
Total
100%
100%
100%
Although the statistics suggest a slight increase in the total demand in the industry of women’s apparel, the shift in the structure of units sold in different price ranges indicates a downturn in the industry which led to a downward consumer purchasing behavior.
Indeed the economic sluggish hit the segment targeted on upscale-class apparel pricing higher than $100, however, it provides a great opportunity for manufactures who engaged in the segments targeted toward “budget” and “moderate” classifications. As a result, consumers undergoing an economic downturn would become more and more price-sensitive, especially for those who were always purchasing “moderate” or “budget” apparel even before the economic sluggish began. Competition Since the industry was moderately concentrated, it should belong to a monopolistic competition market, under which sellers could differentiate their offers to buyers. Products can be varied in quality, features, or style, or the accompanying services can be varied. Sellers try to develop differentiated offers for different customer segments and, in addition to price, freely use branding, advertising, and personal selling to set their offers apart. Retailing Competition for the Apparel Market In current years, department stores have been squeezed between more focused and flexible specialty stores on the one hand, and more efficient, lower-priced discounters on the other.
It results in a markedly falling in market share which can be found in Exhibit-5. In contrast, specialty stores with narrow product lines and deep assortment, and supercenters who are actually giant specialty stores with broader product lines presented an enlightening future trend. The fierce competition, as a result, gave rise either to merger and consolidations between retailers in order to gain bargaining power with suppliers, or to contracting directly with manufacturers to produce private label products. Manufacturers also expanded their roles by integrating forward into retailing so as to reduce expenditures and to take better control over their own business. Harrington Collection Harrington Collection targets itself at high-class fashion enthusiasts and divides the “upper-class” market into 4 further specific segments represented by 4 brands which focus on people with different income status, ages, self-concepts, etc. If we refer back to Chart-1: (b), we find that the market share of total apparel industry pricing higher than $100 decreased in 2007.
It would hit the performance of the Harrington Collection since all of its products are priced up from $150. In addition, we also find a rapid growth was taking place in the low-end market in 2007. As a result, senior executives of the company are considering to introduce active-wear into manufacturing and stretch its product line downward to grab the opportunity in the low-end market as well as to make up for their profit loss in the high-end market. However, a brand’s price and image are often closely linked and a change in price can adversely affect how consumers view the company. When a cheaper product is introduced into the market, their loyal consumers would think that the quality has been reduced. Especially for a luxury-oriented company like Harrington Collection whose customers are extremely loyal to the brand and looking for a status that the company’s brand stands for, lowering prices would threaten the company’s position in the minds of its loyal customers. (disadvantage) On the other hand, when it comes to new customers who never purchased Harrington’s apparel before, lowering prices might attract consumers in “moderate” or even “budget” segments to buy its products.
Moreover, if prices are similar to that of competitors who target only in “moderate” or “budget” segments, consumers are more likely to purchase Harrington’s apparel, because the brand would make them look wealthy. (advantage). However, it is not the case. Consumers belonging to “moderate” or “budget” segments are extremely price-sensitive, especially under a downturn economic situation. When the company slightly raises the price to meet a higher quality of service requirement, lots of consumers coming from these two segments would turn to its competitors. Even though the company makes a profit due to the current fad, it would hurt the company’s profitability in the long run. (disadvantage) As a result, the management of the Harrington Collection should trade-off both advantages and disadvantages when making the decision of lowering its price to develop an active-wear product line. Activewear According to the case, both the facts that the number of active-wear units sold was expected to double by 2009 and the extremely high turnover rate suggests a promising future of active-wear classification.
So the point is to which segment active-wear classification should target and how this new product line should be priced. First of all, let’s refer back to our analysis right above. Although targeting consumers who pinch pennies and lowering prices to attract this kind of “new” purchasers might make a profit in short term, the great price sensitivity and disloyalty hidden behind this group would result in a greater loss in a long run. Moreover, the inexpensive brand image would drive a lot of loyal customers away to its competitors. However, “10% of customers purchasing apparel in the $100-$200 price range would buy an active-wear set if one with superior styling, fabric, and fit was available. ” “There is a subset of Harrington customers who were loyal to the brands throughout their careers but no longer desire the tailored, professional look. They are now interested in something fresh and comfortable that fits with their active lifestyles. ” “The aging baby boomer population wants clothing that does not make them feel old. All of these facts suggest that it seems safer and more conservative to remain in the existing classification and excavate the need for loyal customers. However, due to the high growth in the low-end market, we don’t want to give up the opportunity to take a bite on that tempting market. So why don’t we price the product slightly higher than the “moderate” active-wear product, and increase the quality as well as add exceptional features to attract both old customers and new consumers who are not that sensitive to price?
For instance, if customers purchasing active-wear from Liz Claiborne related themselves with sexy and glamorous images, Harrington could redesign the active-wear product adding features to attract customers who would like to relate themselves with elegant or sophisticated images which are more consistent with the company’s value. It can also increase the quality of fabrics, manual work, and services. People would love to pay a little higher price in exchange for much higher quality, unique style, and better services. In this way, not only it would attract new customers who are less price-sensitive with affordable prices and retain them by reliable quality compared with that of poorly made “moderate” products, but also it would bring a fresh experience to its old customers and then increase the times of their purchasing behavior without eating up the sales of company’s other brands or hurting a luxury image. Secondly, if Harrington faces a host of smaller competitors charging high prices relative to the value they deliver, it might charge lower prices to drive weaker competitors out of the market. However, Liz Claiborne, one of Harrington’s major competitors, was also one of the leaders in the “better” active-wear category at a relatively low price. As a result, the company may decide to differentiate itself with value-added products at higher prices. In conclusion, Harrington should price the new product line of active-wear slightly higher as well as increase its quality and services in order to support the price and keep consistency with its luxury image.
Furthermore, Harrington should differentiate its style and features to avoid direct competition with other leading manufacturers. Brand Targeting and Positioning Myer thought active-wear would be a perfect addition to the Vigor division for two reasons – Vigor styles were less traditional than the other Harrington divisions, and the Vigor division emphasized comfort and fashion although it’s a career-oriented design. However, these two reasons cannot sufficiently support whether active-wear would well fit into the Vigor brand. Even though attributes and benefits brought to customers could be varied among different products, the images, beliefs, and values created for customers must be consistent with each of the products under the same brand. For instance, active-wear and existing Vigor’s products don’t have the same features, as active-wear is more sporty and casual while the other is more work/professional oriented. The point is neither the same attributes they have in common nor the comfortable benefits they will bring to customers. The point is the same image customers would like to relate themselves with and values the company intends to create.
Since Vigor has already successfully created an image of “Trend Setter”, the new product line must also create values of “breaking rules”, “looking exceptional”, “pursuing new lifestyle” for customers to meet the requirement under the brand of Vigor. As a result, it’s not a bad idea to branch out Vigor to support active-wear manufacture. Advantages and Disadvantages Moreover, to extend a current brand name to a new category will give the new product instant recognition and faster acceptance. It also saves the high advertising costs usually required to build a new brand name. And it could also use the brand’s existing support and functions to run the new business, which would decrease part of overhead expenditures. At the same time, branching out Vigor involves some risks. If a brand extension fails, it may harm consumer attitudes toward the existing products carrying the same brand name. Potential Retail Trade Since Company-owned stores accounted for about 20% of the manufacturing group sales, and the remaining sales were split 40:60 between specialty stores and department stores. It can be inferred that the sales proportion of these three outlets is 20: 32: 48 in the manufacturing group.
According to the sales information provided in Exhibit-6, the sales and corresponding proportions among different retailing terminals can be concluded as below:
Chart-2: (a)
2005 (sales in millions)
2006 (sales in millions)
2007 (sales in millions)
Own Store
945. 2
921. 4
913. 6
Specialty Store
177. 92
173. 44
172. 16
Department Store
266. 88
260. 16
258. 24
Total
1390
1355
1344
Chart-2: (b)
2005 (sales in pro)
2006 (sales in pro)
2007 (sales in pro)
Own Store
68. 00%|
68. 00%|
67. 98%
Specialty Store
12. 80%
12. 80%
12. 81%
Department Store
19. 20%
19. 20%
19. 21%
Total
100. 00%
100. 00%
100. 00%
Can see from Chart-2 above, sales from Company-Owned Store account for most of the company sales. By integrating itself forward into retailing or the entire value chain, the company could be able to reduce the time required of distribution, to take control of promotion and retail prices directly, and to provide more personal selling services to customers by professionally trained salespeople so as to better meet customer needs. On the other hand, multiple channels offer many advantages as well to companies facing large and complex markets. With each non-company owned store, the company expands its sales and market coverage and gains opportunities to tailor its products and services to the specific needs of diverse customer segments. But such multichannel systems are harder to control, and they generate conflict as more retailers compete for customers and sales.
The current channels of different brands are exhibited in Chart-3:
Chart-3:
Harrington Ltd.
Sopra
Christina Cole
Vigor
Price Range
$500-$1000 (DSN)
$400-$800 (Brd)
$300-$700 (Brd)
$150-$250 (Btr)
40% (50 stores) of the Company Owned Stores sell Vigor exclusively. One of the reasons might be that Vigor stands for a less traditional image and a new life-style relative to the others. As a result, environment, decorations, and even salesperson of stores selling Vigor would be specifically designed to meet the specific expectation of the customer positioned in that segment.
Since we’ve decided to branch out Vigor to integrate active-wear line into it due to the similar value they represented, we should display active-wear together with existing Vigor brand with separated segments in order to deepen the assortment of Company Owned Stores and give customer psychological suggestions about the value it intends to present. Secondly, since the specialty stores carry a narrow product line with a deep assortment, active-wear provides a great opportunity to enrich the classifications of the store. Thirdly, upscale-department outlets might also find active-wear an appropriate supplement for their product line. Since consumers with relatively high income would like to give the stylish, active, and more casual clothes a shot in the case that the clothes have to be made in good quality. Harrington’s active-wear has differentiated itself with exceptional quality, features, and services, which implies nothing related to the cheap prices. As a result, the upscale-department would love to support this product line regardless of its low price range. What’s more, Harrington could develop its channel into Superstores which indeed is a giant specialty store, since it might attract a larger group of people with various income statuses and self-positioning.
The reaction of Competitors If active-wear with Vigor’s logo performs brilliantly once it is introduced, it would attract lots of small competitors into the market. Since it’s a monopolistic competition market which allows a wide range of price and competitors could differentiate their products with various qualities, features, values, and services, small companies who are not able to find themselves competitive in quality and creativity would be more likely to cut their price down to attract customers coming from a more price-sensitive and less loyal segment compared with the segment targeted by Harrington. Although the price would be slightly higher than that of those small competitors, Harrington has gained strong customer relationships and its newly brought-in product targets both to loyal customers and to new customers who are less price-sensitive and who’d like to pay more attention to quality and features. As a result, Harrington would successfully avoid competition from its competitors.
Demand and Profitability Analysis Start-Up Costs:
Start-up Costs Pants Plant
$1,200,000
Start-up Costs Hoodie and Tee-shirt Plant
$2,500,000
Equipment Pants Plant
$2,000,000
Equipment Hoodie and Tee- shirt Plant
$2,500,000
Launch-PR, Advertising
$2,000,000
Fixtures for Company Stores
$50,000*50
Total Start-up Costs
$10,200,000+$50,000*50
Annual Depreciated Start-up Costs
$2,540,000 (total start-up cost/5)
Direct Variable Costs:
Hoodie
Tee-shirt
Pants
Sew and Press
$3. 25*x
$2. 00*y
$2. 85*z
Cut
$1. 15*x
$0. 40*y
$0. 70*z
Other Variable Labor
$3. 20*x
$2. 40*y
$3. 05*z
Fabric
$9. 10*x
$2. 20*y
$7. 50*z
If the market share of a 7% “better” active-wear segment is accurately estimated, the breakeven point will be definitely met and the company will even earn a 15. 80% profit margin. If the company decides to raise the price in order to earn more on one unit in the expense of losing part of its sales volume, consumers’ price elasticity would be extremely important for a company to see whether the amount of money it makes more on one unit would cover the loss of volume decrease. Sometimes the price-demand curve slopes upward when it comes to prestige goods, but it is another case.
Why Work with Us
Top Quality and Well-Researched Papers
-
Professional and Experienced Academic Writers
-
Free Unlimited Revisions
-
Prompt Delivery and 100% Money-Back-Guarantee
-
Original & Confidential
-
24/7 Customer Support
-
Our Services
No need to work on your paper at night. Sleep tight, we will cover your back. We offer all kinds of writing services. | https://www.studyproessay.com/harrington-collection/ |
If Congress fails to pass a farm bill by the end of the year, it will set off a cascade of events that could send milk prices in Maine as high as $6 a gallon, dairy industry and economic experts said.
The prices of butter, cheese and ice cream also could increase significantly, they said.
However, they said that price increases would happen gradually, and only if the farm bill issue remains unresolved for an extended period.
There is a great deal of uncertainty about exactly how long it would take for dairy prices to increase if Congress fails to pass a farm bill or extend the current one, said Brian Gould, a professor in the Department of Agricultural and Applied Economics at the University of Wisconsin in Madison.
“We’ve never been in this situation before, so I don’t know what the dynamics are going to be,” Gould said.
The expected price increases would result from the dairy industry version of a fiscal cliff, he said.
The Agricultural Adjustment Act of 1938, which is still in effect today, states that unless a farm bill is enacted to supersede it, the federal government must agree to purchase certain agricultural products from their producers at set prices that are significantly higher than their current market value. The law’s effects are intentionally dire to ensure that Congress always passes a farm bill.
The higher prices for each agricultural product are based on a benchmark known as “parity” that is roughly equivalent to the market value of that product from 1909 to 1914, a time period in which farms prospered and prices were historically high.
The 1938 law remains on the books as a deterrent to letting the current farm bill expire without passing a new one. The current farm bill was passed in 2008, and expired in 2012, but was extended another year by Congress when a proposed 2012 update to the bill failed.
Each farm product covered by the law has its own parity price, Gould said. In the case of milk, the parity price is $49.60 per hundredweight, which is roughly 12 gallons, he said.
The 1938 law requires the federal government to purchase milk from farmers at no less than 75 percent of the parity price, which would be $37.20 per hundredweight, he said.
That’s nearly twice the current wholesale price of $21.30 per hundredweight of raw milk, Gould said.
It’s possible that the commercial market would have to match that higher government price in order to keep store shelves stocked with dairy products, he said.
However, that price only accounts for about half of what consumers pay at the grocery store for milk, he said. The rest of the cost, such as pasteurization, bottling, distribution and retail sales, would not change unless retailers exploited the crisis as an opportunity to boost their profits.
“It could go up dramatically, but not as much as at the farm,” Gould said.
The most likely scenario is that milk prices would top out at about 50 percent higher than they are today, said Gould, who recently completed a study of the likely economic impact of failing to pass a farm bill.
On Friday, milk prices in Portland ranged from about $3.50 to $4.50 per gallon at major grocery chains. Based on the average price of $4 per gallon, a 50 percent price increase would bring that average to $6 a gallon.
Shaw’s grocery store shoppers Barbara Cobb and Nancy McKeil said that price would be prohibitively expensive for them.
“I guess we would have to switch to powdered milk,” said Cobb, who lives in Portland.
“Or buy our own cow,” said McKeil, also of Portland.
Gould estimated that butter prices also could increase by as much as 50 percent, while cheese prices would increase by about 30 percent and ice cream would go up about 15 percent.
The more highly processed a dairy product is, the less its price would be affected by what the farmer is paid for the milk, he said.
According to a May 2013 study by the U.S. Department of Agriculture, the amount of dairy products consumed by the average American each day is equivalent to about 1.5 cups of milk, including 0.6 cups of liquid milk.
The total U.S. dairy consumption for a year is roughly equivalent to 550 cups, or 34.4 gallons, of milk per capita.
Given that high demand, reverting to the 1938 law might sound like a bonanza for dairy farmers, but Warren Knight of Smiling Hill Farm in Westbrook said he has no desire to see the farm bill fail.
“There are a lot of things in there that farmers count on,” he said.
Those things include government-backed insurance against failed crops, Knight said. Without that insurance, farmers might not be able to obtain short-term loans to buy seed or feed. That could put some farms out of business, he said.
“It can have tremendous implications for farmers,” Knight said.
It’s possible that dairy farmers would opt not to sell their products to the government at the higher price, Gould said, especially if they believed a solution to the farm bill crisis was close at hand.
Selling to the government would require a great deal of paperwork and some retooling of processing and packaging plants to meet specific federal requirements, he said.
“Believe me, the dairy industry does not want to sell to the feds,” Gould said.
For that reason, retail prices likely would begin to climb only if the impasse in Congress dragged on for weeks or months, he said.
The House and the Senate have each passed separate farm bills, and a conference committee is now trying to negotiate a compromise before the January deadline. Democrats and Republicans disagree on two key sticking points: How much to cut from the Supplemental Nutrition Assistance Program, often referred to as food stamps, and from a program that provides certain direct subsidies to farmers. Both programs have historically been included in the farm bill.
It is uncertain whether the committee will work out a deal by the end of next week, when the House hopes to adjourn for the holidays.
Maine Sens. Angus King, an independent, and Susan Collins, a Republican, both voted for the Senate version of the farm bill.
Maine Reps. Chellie Pingree and Mike Michaud, both Democrats, joined other members of their party in voting against the House version, in large part because of opposition to House Republicans’ decision to separate food stamps from the farm bill. House Republicans eventually passed a separate food stamps bill that would cut $40 billion over 10 years, compared to a proposed $4 billion cut in the Senate bill.
American Dairy Products Association board member Gary Cartwright said he believes Congress ultimately will pass a farm bill, but he is uncertain how much pain the average consumer will have to endure before that happens.
Cartwright, a professor of food, bioprocessing and nutrition at North Carolina State University in Raleigh, N.C., estimates that the average price of a gallon of milk would increase by about $1.60 if Congress fails to act, slightly less than Gould’s estimate.
Still, Cartwright said that increase would be sufficient to make U.S. consumers angry.
“If your milk goes up $1.60 across the board, people are not going to be happy about it,” he said.
Staff Writer Kevin Miller contributed to this story.
J. Craig Anderson can be contacted at 207-791-6390 or: | https://www.centralmaine.com/2013/12/07/milk_price_could_rise_to__6_per_gallon_if_farm_bill_fails_/ |
4.1.1 When the total cost of a purchase is $150, 000 or more, the North Carolina General Statutes require that the purchase be bid through the State Purchase and Contract Division. Solicitation of bids and quotations on orders for less than $150,000 and certain specified items and services has been delegated to the University. Within that delegation, the following thresholds apply:
4.1.2 Written competition must be solicited by the Purchasing Department for purchases over $5, 000. Even in the case of an approved sole source purchase, a written quote or bid will be obtained.
4.1.3 Departments may not divide direct purchases into smaller orders of $5, 000 or less to avoid seeking competition, nor into multiple orders under $150, 000 to avoid the necessity of sending the requirement to Purchase and Contract Division.
4.1.4 The North Carolina Administrative Code defines small purchases as those purchases of commodities or services for which the expenditure is $5, 000 or less and that are not covered by a term contract. The code also delegates to the University the authority to establish procedures for making small purchase transactions. See PUR Policy 24, Small Purchase Policy.
4.1.5 In accordance with the Chancellor's University Small Purchase Policy, competitive bidding is not required for those purchases of equipment, supplies, materials, and services which meet the definition of small purchases as defined in the Administrative Code. However, it is the responsibility and obligation of all University employees to seek the best possible value when making purchases with University funds. Accordingly, where appreciable value can be gained without undue sacrifice of convenience and/or necessity, the Purchasing Department may seek competition for certain purchases under $5, 000. Note: This small purchase policy in no way changes the requirement to obtain valid documentation (purchase order, check request, etc) through the Purchasing Department or the Controller's Office as applicable prior to committing any University funds for the procurement of commodities and services.
4.2 Official Bids
4.2.1 Solicitation of competitive quotations is the responsibility of the Purchasing Department. Any quotations requested by departments cannot be considered official and are to be used only as information for the department. Any information obtained by the department which might be useful to the Purchasing Office in processing the order should be included with the procurement request.
4.3 Specifications
4.3.1 Accurate preparation of specifications for non-contract items and equipment is a critical part of the purchasing cycle. For items and equipment which must be formally bid ($150,000 or more) by North Carolina Purchasing and Contract Division, it is important that adequate specifications be provided by the requesting department for forwarding to Purchase and Contract Division. The duty to establish and enforce specifications is assigned to the Purchase and Contract Division and to the Standardization Committee by statute. These agencies work with the University in developing specifications for the particular requirement. Cooperation and communication between the requesting department, the University Purchasing Department, and the North Carolina Purchasing and Contract Division is imperative for development and assurance of satisfactory specifications.
4.4 Limited Competition
4.4.1 The Purchasing Office, following University policy, solicits quotations even though competition is nonexistent or limited on certain items being purchased. The action is necessary in order to officially determine the price, terms, and conditions of the vendor whose product is being sought.
4.5 Results of Bids
4.5.1 Information submitted in a successful bid is used by the Purchasing Office as a basis for preparing the purchase order. The University purchase order form contains instructions to vendors that prohibit a vendor from accepting an order for shipment unless the vendor meets the printed conditions obtained through the quotation.
4.5.2 If competition is received from the formal bid solicitation amounting to $150,000 or more, the requesting department will be consulted before the order is placed. Justification will be required from the user if the low bid is not selected .
4.6 Time Allowance Needed
4.6.1 When issuing procurement requests, departments should allow sufficient lead time to solicit bids, evaluate the bids, place the order, and effect delivery. Twelve to fourteen days are generally required to solicit quotations and place a purchase order. Formal bids solicited by the State Purchase and Contract Division in Raleigh require approximately four to six weeks. An additional two weeks will be required to evaluate the bids and place the order. | https://policy.appstate.edu/Solicitation_of_Bids_and_Quotations |
Consumer Perceptions of Private Label Brands in China and UK
Disclaimer: This dissertation has been submitted by a student. This is not an example of the work written by our professional dissertation writers. You can view samples of our professional work here.
Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of UK Essays.
CONSUMER PERCEPTIONS OF PRIVATE LABEL BRANDS IN CHINA COMPARED WITH THE UK
Summary
In China there are fewer studies of private label brands (PLB's) that take up less market share than generic brands and national brands. However, there is a successful development of PLB's in the UK. Therefore, this thesis aims to explore the difference of consumer perceptions on PLB's between China and the UK with national brands as a standard.
The literature review will review theories like brand equity/image, PLB's and double jeopardy; PLB's development compared with national brands in China and the UK; the influencing factors of consumer purchase behaviour and previous researches of consumers' perceptions about PLB's in China and the UK. The main objective of this part is to ascertain the difference of consumer's perceptions between PLB's and national brands in the UK.
Primary research will take the form of a non probability convenience sampling method to randomly select 200 members of the public from two shopping malls of Xidan and Wangfujing and several large-scale supermarkets in Beijing. Questionnaires will be used for data collection, and data is analysed by Snap statistical programme.
The finding shows that there is a significant difference of PLB's perception between China and the UK. Before the evaluation of brand image, the awareness of PLB's in China is understood and just less than half of respondents know the own-label biscuits. With reference to the literature reviewed on consumer perception in the UK, it can get the result that Chinese PLB's are perceived lower than British PLB's on the all attributes except “cheap” and “good value”.
Chapter 1: Introduction
1.1 The topic of research
The aim of this thesis is to better investigate how private label brands (PLB's) are perceived by consumers in China, and compare it with the UK's.
1.2 Principle research question
To understand how Chinese consumers' perceptions of private label brands differ from the UK's.
1.3 Overall research objective
The primary purpose for this research is to discover the main difference of consumers' assessment of private label brands between China and the UK. This thesis will explore if there are significant differences between Chinese consumers' evaluation on PLB's and the UK's, and analyse the relevant factors that cause the distinctness of consumers' evaluation roundly on the basis of prior research in this subject scope.
1.4 Individual research objectives
In order to achieve the principle objective of this study, it will intend to fulfil the following objectives:
- To identify the actual development of PLB's in China and the UK
- To establish the influence factors of consumer purchase behaviour in China and the UK
- To determine consumers' perception of PLB's and national brands in the UK
- To determine consumers' perception of Chinese PLB's and national brands
- To ascertain the different brand perceptions of PLB's in China and the UK
1.5 Report Structure
This thesis contains eight chapters. Chapter2 to 4 are based on a literature review about theories and relevant knowledge of marketing background. Chapter2 outlines the theories about branding, and then chapter3 introduces the private label brands and their development in the UK and China. Chapter4 refers to consumer perception of PLB's review between two countries based on analysing the determining factors of purchase. The research method is explained in chapter5 with some specific designing scheme. The results of surveying Chinese biscuit category is presented, interpreted and analysed in chapter6, and discussed relating with the UK's market of literature review in chapter7 before conclusion and recommendations are given in chapter8.
Chapter 2: Branding
Chapter 2 introduces the definition of branding with its importance in the retail market, and moves to realize “brand image” and “brand equity” as well as the shift between them. In addition, the Double Jeopardy (DJ) Effect is identified finally. The aim of this thesis is to evaluate Chinese consumers' perceptions of private label brands (PLB's) in comparison to UK's. Therefore, it is necessary to understand the background knowledge about “brand” first.
2.1 The conception of branding
A brand was defined as “a name, term, sign, symbol, or design, or a combination of them, intended to identify the goods or services of one seller or group of sellers and to differentiate them from those of competitors” (p.404, Kotler, 2000).
Aaker (1996) indicated that brand was used for suppliers to reflect the consumers' purchase information and make communicate with customers easier, so that it is helpful to build a long-term relationship of belief between buyers and sellers.
Wileman and Jary (1997) had realized that retail branding was playing an important role in the modern retail market gradually. Managers and executives also perceived that retail branding could be used to increase benefits as a strong vehicle in the competitive retail industry (Carpenter, et al. 2005). The reason is that the relationship between a product and consumers is personified by the brand name (organization's name) on the product itself, like Microsoft and Nescafe (de Chernatony and McDonald, 2003).
The difference between a brand and a commodity is shown below in figure1, which describes the process of decline from brand to commodity. Following the disappearance of brand characteristic, a reduction in the differentiation of price and product/image is demanded to achieve the likeness of product offerings in the particular category. Thus the “added values” is the main difference between a brand and a commodity. The result proved the strong power of added values in the blind (brand cancelled) and open (brand revealed) test of Coke and Pepsi preference (de Chernatony and McDonald, 2003).
2.2 Brand image
Brand image is explained as the integrated effect of brand associations (Biel, 1992). Also, Faircloth et al. (2001) cited Engel et al. (1993) as claiming that brand image refers to consumers' perceptions of brand tangible and intangible association. Keller (1993) stated that brand image, a part of brand knowledge, belongs to the perceptions about a brand that is reflected by the brand's attribute, benefit, and attitude association in the memory of consumers. Besides, consumer's brand image is derived from the accumulative effects of marketing mix actions of companies (Roth, 1994).
Wulf el al. (2005) has argued that image is one prerequisite for the presence of brand equity. Brand image in the consumers' memory network that is decisive to make decision, provides preferred brand reminding and evaluation (Holden, 1992), and so it can contributes the positive effect on brand equity (Yoo et al. 2000).
Furthermore, Winchester and Fletcher (2000) argued that measuring brand image was one of the most important research projects undertaken by a company, because it could help firms to understand their products' perceptions in consumers' memory.
For example, retailers have the cheaper brand image than the manufacturers in most of consumers' memory. Also, they suggest that consumers consider retailer brands as “me too” products compared with manufacture brands (IGD, 2003). Thus it demonstrates that retailers are trying hard to build up a strong image for their own brands to shoppers. The enhancement of brand image will be beneficial to drive the sales, brand equity and increase the gross margin of private label products (Quelch and Harding, 1996). Therefore, brand image is an important determinant of consumers' perception about private label brands.
2.3 Brand equity
Brand equity, like the concept of brand, has been identified as having multiple meanings. For instances, people have debated the concept of brand equity both in the accounting and marketing literature for several years (Wool, 2000). The original concept of brand equity is the added value that a brand name offers to the fundamental product (Quelch and Harding, 1996; Wulf el al 2005). Wood (2000) also cited Feldwick (1996) as claiming a classification of different meanings of brand equity as:
“- the total value of a brand as a separable asset - when it is sold, or included on a balance sheet;
- a measure of the strength of consumers' attachment to a brand;
- a description of the associations and beliefs the consumer has about the brand.” (p. 662, Wood, 2000)
According to the statement of Wood (2000), brand equity rests on financial accounting no longer, but extends to the measure of brand strength (brand loyalty) and the description of brand image.
Additionally, Aaker (1996) identified the major asset categories of brand equity to include brand name awareness, brand loyalty, perceived quality and brand associations (brand image). It reflected the value supplied by a product or service to a firm and/or customers in the various different ways. If the name and symbol of the brand change, the assets or liabilities will be affected and even lost due to the link between both sides. To brand awareness, the strength of a brand's presence is mentioned in the consumers' minds, ranging from recognition to recall to “top of mind” to domination. Recognition is more important than others due to the perception obtained from the past exposure. Also, recall can be a deciding factor of the purchasing of products.
Otherwise, Chou (2002) also insisted on two categories of definition of brand equity—the customer-based and financial brand equity. The customer-based one is defined as the different effect of brand knowledge on consumers' response to the purchase of brand (Keller, 1993; Lassar, et al. 1995), and the financial one refers to the intangible asset of the value of brand name to the firm (Chou, 2002).
Through understanding the multiple concepts of brand equity, it can deduce that brand equity has attracted more attention in the marketing literature over the last decade, because it reflects if a brand would be repurchased by consumers. As Shapiro (1982) has demonstrated, certain brand equity offered genial value even though the appearance of products is uncertain. Nowadays Broniarczyk and Gershoff (2003) still emphasise the importance of brand equity; also, as one of the most valuable assets, it should be maximised to manage brands for the company (Keller and Lehmann, 2003). High brand equity can increase the opportunity on consumer choice of a common sales promotion (Simonson et al., 1994) and reduce the negative debates of consumers for a price increase (Campbell, 1999), because consumers lean to buy the brand more than the real product. Consequently, brand equity is also a factor to evaluate consumers' perception of own brands.
2.4 The Double Jeopardy Effect
In recent years, more authors (Sharp et al. 2002; Ehrenberg and Goodhardt, 2002) have been interested in understanding, developing and reinforcing the concept of Double Jeopardy (DJ), which represents a natural constraint on customer loyalty, which cannot be increased by marketing inputs much or for long unless a significant benefit increases the brand's penetration (Ehrenberg and Goodhardt, 2002). The DJ effect is that “small share brands have fewer customers, but these customers buy the brand less often than the larger brands get bought by their customers” (p. 17, Sharp et al. 2002). A conceptual model of the DJ effect is showed in the figure2, which illustrates if a small firm would have higher turnover of their customer base if they lost the same number of customers as a large firm.
The DJ effect is fit for the discussion of national brands and PLB's. Bigger brand will be known by more customers, and have more opportunities to be purchased and receive more responses than smaller brand. It will be an essential theory to support the last result of investigation about the comparison between own labels and national labels.
Chapter 3: Private Label Brands
This chapter attempts to understand private-label brands, their development in UK and China and the reason for focusing on them. National brands will be also mentioned as the scale for the validity of comparison between Chinese and UK's own labels.
3.1 The definition of private label brands
“Retailer brands are designed to provide consumers with an alternative to manufacturer brands, to build customer loyalty to a retailer or improve margins.” (p.11, IGD, 2003)
They are particular to a definite retailer, and may have a same or different name of the retailer but exclude other retailers' name (IGD, 2003). The terms “own label” and “own brand” are always used together; also private label, retail brands or distributor brands are in common used (Fernie and Pierrel, 1996).
Own brands can help retailers reduce the direct impact of price competition, since retailers carry their own brands instead of national brands that are sold in the most of the stores (Carpenter, et al. 2005). Furthermore, according to IGD (2003) study, doing own brands can provide competitively priced products, increased profitability and loyalty due to the special store, establishes store image, drives innovation and targets specific consumer groups.
3.2 The development of PLB's in UK
Based on more previous researches about UK's PLB's, it will specify them as the base to compare with Chinese growing PLB's.
3.2.1 The history of development
The generation of own brands in the UK can date back to the end of the nineteenth century (Key Note Market Review, 2001). Until mid-1960s, manufacturers perceived that the development of store brands could be a direct threat for them (Ogbonna and Wilkinson, 1998). After that, own brands rose to penetrate into grocery markets gradually (Fernie and Pierrel, 1996), because supermarket had to implement a new strategy under the tough economic crisis (Ogbonna and Wilkinson, 1998).
The growth of own labels in the UK had been rapid during 1980s, and slowing down in the 1990s (Laaksonen, 1994). After 1980, the UK's retailing had a big metamorphosis to change their own-label products from previous low-price/low-quality/poor-packaging to current high quality, competitive price and good packaging (Burt and Davis, 1999; Key Note Market Review, 2001). Especially from 1990, more retailers began to provide own brand lines in stores and penetrated towards the grocery field (Veloutsou et al., 2004), and even innovated in product categories to be consistent with branded-products, such as the expansion from grocery to clothing (Quelch and Harding, 1996). Fernie and Pierrel (1996) illuminated that Marks & Spencer, Sainsbury's, Tesco and Safeway had developed their own brands, which competed successfully with other brands in the UK. Otherwise, there were more private labels on the shelves of supermarkets than ever before (Quelch and Harding, 1996). The main reasons for growth of own-label products include lower pricing (60%-85% of branded products), improved quality and higher profits for retailers (Ashley, 1998).
Through the review of historical evolvement of PLB's, the current bloom of PLB's development in UK that is built on the basis of constant change can be seen. Also, it can be a good explanation for the condition, in which customers choose more own brands of supermarket rather than manufactures' brands.
3.2.2 Current development
At present, private label brands have taken up a significant share of nearly 29% in the UK food market. It is expected to increase further in 2009. Especially since 2008, own label has been gaining popularity following accelerating economic downturn. As consumers have begun to feel the pitch, so they have bought own-label products instead of branded products to save money. Thus the competition between own-labels and brands is reinforced. There is the highest own-label consumption in the FMCG sectors, like milk and frozen vegetables, or some products without emotional appeal. However, manufacturer brands still account for the majority of sales in the most of grocery categories (Mintel, 2009). Table1 shows the share of brands and own-labels in the following different categories.
From this table, it can see that own label is the most dominant in the category of ready meals; and it has the least share in the crisp category. Also, more share own labels account for, more increasing opportunities they have.
In addition, UK's supermarkets recognise that consumers have a wide range of product needs, so they segment the market by providing the brands that cater for the best, healthy, valuable, kids' and organic requirements as table2 shows.
3.2.3 The feature of development
The development of PLB's, a competitive strategy adopted by retailers, is necessary for them within the current retail market of high competition in the UK (Carpenter et al. 2005).
Own brands are developing fast and winning a better share of the food market with definite advantage in the supermarket product ranges, because retailers can offer their private label products with high-quality and low-price (Wulf et al., 2005). Also, own-brand products exceeding 40% of market share have expanded their presence across markets—from low-priced, value-for-money items to the premium and lifestyle arenas so as to cater for consumers concern about healthy eating (Drewer, P. 2006). Therefore, it can fetch up the limitation of national brands that segment the market less, and target desired consumers more narrowly. For instance, figure3 shows Sainsbury's Be Good to Yourself range of lower fat which is one kind of “healthier” own-label ranges; and Asda's value (Smartprice), healthy (Good for You) and Premium (Extra Special).
Furthermore, the feature difference of own brands has been gradually shortened from national brands in terms of aspects such as packaging, size, and label (Choi and Coughlan, 2006). In figure4, Sainsbury instant coffee products are taken as an example of private labels with reduced feature differentiation as national labels inside FMCG sector.
Some of the UK's retailers such as Sainsbury's and Tesco have set up own brands focusing on quality and taste due to more consumers' regards on flavour and aroma. Production methods have become diversified, and manufacturers have been found around the world to get various products with exotic flavours. For instance, the recipes of multiple ready-meal foods are derived from characteristic foods of different countries, like Waitrose chicken chow mein, which is developed from Chinese stir-fried noodles. Thus it is common for retailers to compete by developing premium own brands (Fenn, 2007), yet the majority of retailers changed their attention from premium ranges to the promotion of value ranges in 2008 (Mintel, 2009).
3.2.4 Marketing support
The increase of own-label products is supported by the gradually concentrated nature of the retail market. Retailers control own brand marketing, which has obtained higher promotional support than national brands since there are better space and location for private labels on supermarket shelves (Cataluna et al. 2006). Retailers have got bargaining power in the market and more confidence to invest in their own brands, which bring higher profits than generic brands (Fenn, 2007). In addition, own-label food and drink has been supported strongly in the competitive market, although the main retailers began to promote the potential of saving money on PLB's purchase in 2008 and early 2009. For example, own-label brands are promoted principally in the main retailers like Morrisons. Marks & Spencer also spent a third of its total budget on M&S brands in 2008 (Mintel, 2009). Table3 shows the market support on foods in the form of media advertising expenditure.
Generally, an increasing trend is shown for the retailers' spending on foods in this table. However, depending on the retailers' spending share, it can ascertain that branded manufacturers are still the biggest spenders on advertising for food and drink. They use the “Reassurance” and “tradition” as the key themes of promotion to fight against PLB's (Mintel, 2009).
3.2.5 The biscuit category
Own-label biscuits take up a fifth of the UK market, where it has remained the share stably over previous 5 years. The biscuits category has increased substantially since 2002, although there is an unhealthy high sugar content in the most biscuits. The benefits can be obtained from defying all advice of nutrition, because consumers regard biscuits as a reward for their efforts on healthy eating most of the time. Moreover, the development of biscuit market is likely to be influenced by three key factors: requirement for healthy foods, indulgent products and convenient products. The indulgent demand can benefits the branded biscuits, as consumers believe premium-branded products more than PLB's (Kidd, ed. 2007). Figure5 shows the UK biscuits market shares in 2007.
From this figure, it can see own labels account for more share than any one manufacturer brand, but it is less than the total share of main large manufacturers.
In short, UK's PLB's market has been described and compared with manufacturers' brands specifically, so that it can be as the firm foundation for the later comparison with Chinese PLB's market.
3.3 The development of PLB's in China
Private label in China is still in an emergent stage, where many retailers had increased the place of own label development in 2004, but most do not have their own brands until they have greater scale in the market. According to IGD's estimation, own brand only takes up 2% of sales at Wal-mart and less than 6% at Carrefour, which is the strongest retailer in China. Although the foreign retailers have a long history to sell private label brands, this is a big challenge for them to sell in China, where own brand is a new concept for the Chinese consumers. They just believe the value and quality of local branded products. Thus retailers need to prove their own products are not only cheaper, but also provide better value to consumers (IGD, 2005b). Auchan, Carrefour and Wal-mart will be chosen as the example of private label development, because they have wider range of own label products than others.
“Pouce”, “Auchan” and “First Price” ranges were introduced by Auchan in 2003. And they were developed across both food and non-food categories by the end of 2004. In Carrefour, own labels can be found in most categories, especially strong in non-food. Its private label brands include “Great Value”, “Equate” and “Kid's Connection”. Wal-mart is developing their own brands including “Simply Basic”, “Equate” and “Great Value” in China, where the quantity is more limited than other developed international markets (IGD, 2005b). But actually, most of supermarkets usually just focus on the value with low price, and use the name of supermarket as their own brands' name to attract consumers' attention, such as “Ito-yokado”, “Dia%” and “Tesco”.
3.4 Why the focus on PLB's
Veloutsou et al (2004) indicated that all grocery retailers have been entangled by private brands in Great Britain in the last decade. Also, the growth of private labels is one of the most obvious successes to the retail stores (Drewer, P. 2006); own brands have been seen as the strategic weapon to provide retailers with more powers and opportunities to distinguish themselves from national brands and build store image (Juhl et al. 2006). Nevertheless, there is a completely different situation in China, where the study of PLB's is less than the UK's and is strongly encouraged (Song, 2007). PLB's is undeveloped with low sales account in China, even if some foreign retailers (e.g. Carrefour, Wal-mart) have launched their own brands (IGD, 2005b). Consequently, there is a need to expose why PLB's have little market in China, and understand the shortage of Chinese PLB's development through comparing consumers' different perceptions between China and the UK.
Chapter 4: Consumers' Perception of PLB's in China and UK
This chapter will evaluate private label brands and national brands based on a cognizance of factors determining purchase. A generality of different viewpoints about consumers' perceptions on brands will be discussed and some factors influencing the PLB's purchase will be presented.
4.1 Determinant of Purchase behaviour
Consumer's purchase can be influenced by environment, personal preference and psychological factors. Customers who live in diverse regions have their own experience about private-label products (Veloutsou et al 2004). Individual consumers often choose certain brands that they know to be guaranteed due to their habits, instead of spending more time to re-evaluate the brands with different attributes when purchasing (Ehrenberg, 2004). Furthermore, consumers' preferences are different following the change of age (IGD, 2005a). For example, young people high on the new things more than old people. From the psychological aspect, “the right customer mindset can be crucial to realizing brand equity benefits and value” (p29, Keller and Lehmann, 2003).
During the decision making process, purchase can be influenced directly by several factors. Veloutsou et al (2004) cited Omar, Burt and Sparks (1995) as claiming that many consumers always consider their products' characteristics, quality and perceived value instead of the prices of products when consumers make purchasing decisions. However, the price cannot be excluded from factors of decisions, because most of consumers go shopping after they have a budget in mind (Hogan, 1996). Additionally, a generalized private-label attitude is discovered to influence purchase behaviour; factors include: “consumer price consciousness, price-quality perception, deal proneness, shopping attitudes, impulsiveness, brand loyalty, familiarity with store brands, reliance on extrinsic cues, tolerance for ambiguity, perceptions of store brand value, and perceived differences between store brands and national brands” (p347, Collins-Dodd and Lindley, 2003).
4.2 Consumer perception in China
Due to the limitation of consumption per capita, the market was driven by price instead of brand loyalty in China (IGD, 2005b). According to China Management Newspaper (2008) reported, it is a fact that consumers who realise the supermarket own labels account for rather low percentage of total population. Moreover, “low price” and “high quality” are the main motivations to drive consumers' purchase. Thus national brands with better quality can attract more consumers, although they have higher price than own labels. This results from the increase of Chinese consumers' purchase power and the improvement of living level in recent years. Moreover, PLB's and national brands were considered as less difference on price (Chen, 2009). Thus it can be deduced that “low price” strategy of own brands in China would be successful due to less brand loyalty. However, following the improved standard of living, people would increase their demand from low price to high quality, which could be a challenge for the PLB's.
4.3 Consumer perception in UK
4.3.1 Comparison of PLB's with national brands
Following the quality improvement of PLB's, Richardson (1997) found that store brands could be compared with national brands from the aspect of quality and consumers preferred to buy store brands where they usually shopped. As Quelch and Harding (1996) discovered that this was similar for consumers to perceive and judge the manufacturer and retailer brands in the orange juice private-label test, because consumers had a low involvement activity on grocery shopping.
Nevertheless, “If all retailers stock manufacturers brands, they can only differentiate on price or sales promotions; with own labels/brands, they can offer further differentiation in the market place.” (p49, Fernie and Pierrel, 1996)
They supported that own labels/brands could bring retailers more differentiation in the market place than manufacturers' brands that just differed on price or sales promotions from each other. However, there is a different understanding based on consumers' mind. Dick et al. (1996) considered that private labels were less famous than national brands, which have a distinct identification with a particular manufacturer. Richardson (1997) also supported the unification of store brands without the speciality of national brands.
In the research of Harris (2007), he also demonstrated a significant difference of brand image evaluation for national brands and store brands. He established that PLB's have the advantage of “cheap” and “good value” to compare with national brands, while national brands were used more with higher quality/superiority based on attributes than store brands. However, after breaking down PLB's into three relative positions (premium, standard and value), he discovered that premium private labels were overpriced more without better value for money than national labels; customers buy more value private labels than national labels due to their cheapness. This implicates that consumers prefer the high quality of national brands and the good value of value PLB's at the same time. Therefore, he identified the characteristic of “worth more” regarded by consumers mostly. The brand association strengths are summarized in the following figure7 from his study.
Furthermore, his researching results (see Appendix3) will be used as the reference of British consumer perception of PLB's to compare with China's later.
In addition, according to Mintel research (2009), it has been a long-term trend for more consumers thinking that own labels are better than national brands.
4.3.2 Evaluation of PLB's
To the quality/value thinking, other authors have had same ideas. Quelch and Harding (1996) predicted that consumers would choose PLB's readily rather than the higher-priced name brand, if there were more quality PLB's in the market. Richardson (1997) cited Richardson et al. (1994) as claiming that store brand market share could be increased by successfully communicating a quality rather than a low price strategy. Moreover, according to the IGD's research, PLB's have become one of the important factors for shoppers to choose the supermarket they shop in. The satisfaction of quality with lower price has attracted more consumers. The proportion of main reasons is “45% lower price, 45% better value than branded equivalent, 26% the same as branded, 24% a good reputation for own brand (IGD, 2003). Furthermore, consumers are not confused about the increasing number of own-label brands, but the segmentation is beneficial for them to choose products that are fit for themselves. The clear differentiation among brands is also the key for retailers to success (Mintel, 2006).
Chapter 5: Methodology
On the basis of the relevant literature review about the market analysis of PLB's and national brands, especially the UK market, this has been analysed for the final discussion to compare with Chinese PLB's. This chapter will look for the most appropriate approach to implement the needed research and achieve the objective of this thesis.
5.1 The objective of this study
Perceptual variables related to consumers' perception are investigated in this study. It needs to finish the following objectives:
- The difference of consumer perceptions between PLB's and national brands in the UK (achieved in literature review)
- The difference of consumer perceptions between PLB's and national brands in China
- The difference of consumer perceptions of PLB's in China and the UK
Based on the understanding of the UK's markets and perceived PLB's compared with national brands by consumers in the literature review, the Chinese situation will be understood by actual survey. Finally, it will compare the Chinese PLB's perception with the UK's according to the standard of national brands.
5.2 The research strategy and method
A quantitative research strategy will be used in this study, which takes a deductive approach deriving from the deduction of theories and objectives. It explores the data that refers to the development of theories and the achievement of aims founded on the literature (Saunders, et al. 2009), where the UK's market and the comparison of consumer perceptions between British PLB's and national brands have been also analysed concretely through large quantity of literature search. They are obtained by internet and various presses like books and magazines in order to develop the actual deep research. Then Chinese consumer perceptions of PLB's will be surveyed by questionnaire, and compared with national brands, which can be as a standard for the PLB's comparison of the two countries. In the result, conclusions can be gotten about the “differences and similarities in consumers' perceptions of own labels in China and the UK”.
5.3 The measurement of brand image attributes
There are three common measures of consumer perceptions about diverse brands of packaged goods: free choice, scaling and ranking, which are used to position the brands in their relevant sites aiming at each attribute dimension (Branard and Ehrenberg, 1990). Also, Dreisener and Romaniuk (2006) had demonstrated that all three approaches could provide equal results at both brand and separate level. In this study, the free choice approach is put forward to collect consumer brand beliefs for the supermarket own brand names of the top foreign supermarkets in China, also PLB's and national brands of biscuits they sell. The reason of choosing the foreign supermarkets is that they lead the Chinese own-label development with more varieties of private labels than Chinese local retailers.
5.4 The free choice sample
This study adopted a non-probability sampling method, because there was uncertain population of consumers in China, and everyone was unattached individual with different ideas. Furthermore, convenience sampling was undertaken due to limited time and resources. Remenyi et al (1998, p. 193) describe convenience sampling as “comprising those individuals or organizations that are most readily available to participate in the study”.
The sampling frame is all Chinese people in Beijing, which is chosen because Beijing as the capital of China is a representative city in China, where it can meet various people coming from different places of China. To consumers in Beijing, its sample size is 200, which will be selected across two shopping malls of Xidan and Wangfujing and several large-scale supermarkets in Beijing like Wal-mart and Carrefour. The sample size is an important part of research design when deciding upon sampling frame. The increasing sample size can decrease sampling error however a large sample cannot ensure the precision completely, which needs to consider the tolerance of sampling error (Bryman and Bell, 2007). The sample size of 200 is decided depending on the restriction of time, cost and personal capability.
5.5 The questionnaire
A personally administered (face to face) questionnaire will be developed with questions based on consumers' perceptions on PLB's. It is appropriate for data collection as establishing harmonization and inspiring respondent, clarifying doubts in time, less expense needed when dealing out in groups of respondents and ensuring almost 100% response rate (Sekaran, 2003).
Each of 200 respondents was asked if know the PLB's. Respondents who do not know the kind of brands were asked about the personal information, because they cannot give the exact data to support the brand comparison; and respondents knowing PLB's were given a table, which presented different attributes and main brand names of biscuits. In them, the national brands were mixed up with PLB's (see the questionnaire in appendix). Its objective is to understand consumer's perceives about own-label and branded biscuits fairly. Moreover, respondents were able to choose as many or as few brands as they believed the attribute is real, including none at all. To the answer of brand image measurement questions, Dreisener and Romaniuk (2006) gave the description as Pick any, with the following dialogue about how to phrase the questions:
“We would like to know how you regard different brands. We will give you a series of statements and for each we would like to know which brands you associate with it. You can select as many or as few brands for each statement as you like. It does not matter whether you have purchased this brand before or not; it is your opinion we are after”. (p. 686)
To obtain different response levels to understand the dimensions of PLB's and national brand positioning, many typical attributes were selected including “Good appearance, Good packaging and Clear information” (representing characteristics of appearance); “Good value, cheap and overpriced” (representing price based assessments); “superior quality” (representing the level of quality). Thus it can see whether the brand represents its essential characteristic that is more outstandingly than other category members. For example, “Good value” should be reflected in the biscuit products of “Great value” brand as it is expected. In general, it is predicated that there would be fewer feedback of attributes on PLB's than manufacturers' brands, because most of people did not know PLB's that accounted for tiny market share in China. Furthermore, PLB's would be assessed highly on “cheap” of price attribute, whereas national brands would be supported strongly on “superior quality” based quality attribute.
Otherwise, the questions about purchase frequency were also contained for identifying heavy, moderate and light consumers. Also, before the brand image measurement, the rough assessment of beliefs about own-label and branded features are asked with scale format. This is designed for those who do not really know PLB's in biscuit category but know other own-label products. Thus they still have potential consciousness on PLB's generally.
However, there are several limitations of this questionnaire design. The list of brand names is likely to limit the choice of respondents, who believe other unlisted brands from local supermarkets or other manufacturers. In addition, pick-any method does not represent the degree of association when indicating the attributes for various brands (for example, agreement that both of Carrefour and Wal-mart are perceived as cheap, but no definitude that which is perceived as cheaper between them can be deduced from the results).
5.6 The Pre-test
After finishing the design of questionnaire, it was piloted on 5 Chinese students (to test the academic approach and logicality) and 5 relatives with a non-academic background (to test the study from a consumer's view). Several corrections were then made to adjust the order of questions and remove any obscurity and design shortcoming. Data entry of the pilot survey into Snap statistical programme also ensured the validity of the questionnaire design on the aspects of answer coding and data input at the analysis stage.
5.7 Biscuits used for assessment
A study in the biscuit market was implemented as it is the primary product in the leisure foods, which have increased its requirement with the improvement of standard of living in China (China Grocery Industry Web, 2007). Thus there are more varieties of national brands and PLB's rather than other categories in China. In UK, biscuit market also accounts for definite share with high penetration. There are common biscuit brands like Oreo and Ritz in the two countries. Consequently, it is facilitative to obtain a mass of responses for this Chinese research and benefits to compare with UK's.
5.8. Data Analysis and explanation
Microsoft Word is used to design and format the questionnaire due to the complex table layout of brand image measurement (see appendix 4), whereas all questions designed were coded for the analysis process easily in the Snap statistical programme. Undertaking the data analysis could achieve the key objective that is to establish the differences or similarities in consumer perceptions of PLB's and national brands by assessing the diverse brand attributes.
5.9. Limitations of the study
Due to the constraints of time and budget, a convenience sampling method is used for this research, which may lack of the true representation of whole population. Also, the restriction of individual research abilities and skills could affect the reliability, validity and comprehensiveness of the study. As only one FMCG category was researched due to limited time and budget. This was not general to focus on diverse PLB's categories. Furthermore, to individual consumers, their evaluations of PLB's could associate with their own abilities, also be influence by other factors like the surroundings, so that they did not give the real intention.
Chapter 6: Results and Analysis
This chapter will present the key results of survey by questionnaire. Some data can be broken down to get the relevant information and satisfy the final objective of the research. Also, it will explain and analyse these findings, which mainly include the awareness of PLB's and the evaluation of brand image. A copy of questionnaire and some further relevant results with this area are presented in appendix.
6.1 Respondents
The sample is made up of 200 respondents. In them, there are 46% of male and 54% females. 61.5% of respondents accept that they know the own label products, whereas only 38% of respondents have ever purchased own-label foods. The low proportion of respondents with the experience of PLB's usage may influence the evaluation of brand image attributes. All detailed figures with relevant comments can be seen as follows, further information about respondent profiles is summarised in appendix 1.
6.2 The awareness of PLB's
As previous statement suggests, only 61.5% of respondents are aware of PLB's. The result is presented in figure8, which shows the difference of perceived own brands according to age.
From this figure, it can be seen a remarkable diversity about awareness of own brands among different age groups. In them, young group (18-30) has higher proportion of awareness than others, while older group (61 or over) is the crowd with the lowest proportion of perceiving the PLB's. Moreover, generally the proportion of unawareness of own labels is going up following the increase of age.
Based on the description about consumers' characteristics according to different ages in Chapter4, it just validates the description that young people are better at observation and try than older people. Youth have more curiosity on new things, while seniors conform to tradition more. Thus it leads to the result shown in the column chart that young people know the own brands rather than old people.
Furthermore, perceived own brands are related with the different occupations. As figure9 indicates the change of perceived own brands according to the occupation.
As can be seen from the bar chart, about 94% students express that they know the PLB's; next, 87% civil servants are also aware. However, more retired and unemployed people are not aware of own labels. Otherwise, there is no evidence of perceived PLB's in other occupations.
Youth are the main group as students and civil servants. Students who stand on the leading edge of the world study advanced knowledge, so they should know more than other people. Civil servants, as the “white collar” in the society, have the stable income, which can support their curiousness on something new. Most of retired and unemployed people are composed of middle aged and senior people, who know own brands less. Thus it leads to the above result.
In addition, there is no significant relationship between perceived own brands and gender.
However, focusing on the biscuit category, there are fewer people who know the own-label biscuits as long as they have been to the supermarket. The next table shows, the number of respondents who have shopped the supermarkets recently (last three purchases), recent consumers knowing the own-label biscuits, those knowing but never bought them and those who have ever bought them.
As from the above table, it can infer that most consumers do not often purchase own-label biscuits. As above data presents, generally more than half of consumers in recent three-time purchases know the private label biscuits except in Carrefour, Metro and Lotus; and less than half of recent consumers never bought the own brands of supermarkets they shopped before. This means that most consumers knowing own brands have bought the biscuits previously. However, only few consumers purchase them in recent three-time shopping, so they can remember the products more clearly and give the specific evaluations. Thus this may be a limitation of the accuracy of this study: the majority of respondents just give the assessment depending on their potential impressions on private label biscuits.
6.3 The brand image evaluation
As previous description, a rough assessment of PLB's and national brands is necessary before assessing the brand attributes specifically in biscuit category, because fewer people exactly know the own labels of biscuits among 61.5% of respondents who know the PLB's. (In this part, these respondents who know the PLB's will be seen as a whole, namely, 61.5% equals 100%). Figure10 presents the percentage of respondents who know the following biscuit brands.
According to figure 10, national biscuit brands (Oreo, Ritz, Danone and Nestle) are widely well-known by nearly all respondents who know own brands. Nevertheless, each own brand of biscuit is perceived by few respondents, who averagely take up only 17% of respondents knowing PLB's. In them, the own-label biscuits of Wal-mart are understood by more than other own brands.
Consequently, the total assessment of own brands compared with national brands are needed. Next, several main brand attributes are analysed to compare own brands with manufacturers brands generally. Then it will focus on the biscuit category.
6.3.1 Rough Comparison
From this figure, significant price diversity can be seen. Private labels focus on scale1 to 3 mainly, and national brands have a much smoother distribution from 6 to 10, where scale 8 achieves the peak response. Therefore, it can be concluded that PLB's still are perceived to have lower price than national brands according to the consumers' attitudes.
As shown in figure12, the choice of PLB's and national brands takes on a nearly symmetrical form. This illustrates that PLB's taste worse than branded foods mainly. The peaks still appear in scale3 and 8 like price evaluation. However, the difference is that separately about 10% of respondents do not identify their ideas and think there is no significant diversity between their tastes.
As can be seen from the figure13, the choice of brand variety of own labels and national labels is compared. It presents that over 40% of respondents on the scale8 accept more choices of national brands, which can be obviously compared with less choice of PLB's, according to about 78.8% of responses on the scale 2, 3 and 4.
It can be seen that PLB's still focus on scale 2 to 4, just leaning towards the average. National brands have the highest estimation of about 25% on average. However, most respondents believe national brands have better quality than PLB's.
According to figure15, there is an obvious estimation on average value of 36.6% on national brands and 23.6% on PLB's. The high proportion of average evaluation can be beneficial for PLB's to increase their value and achieve equal status with national brands, although about 64% of respondents still think they do not have good value like national brands.
In the above figure, it presents the difference of sale promotion intensity between own labels and manufacturer brands. Most respondents believe that own labels have weaker sales promotion than branded products. Only 13% and 18% of respondents on own labels and national brands respectively believe that there is no significant difference on sales promotion between PLB's and national brands.
As can be seen from figure17, there is still a significant difference of product exhibition position, as more branded products can take up most of the shelves, so that the minority of own labels can be found with difficultly.
In sum, it can be concluded that there is a significant difference of consumer perceptions between PLB's and national brands by the rough assessments on several main brand attributes.
6.3.2 Biscuit category
Based on the general evaluation of own labels and national brands, the general position of PLB's compared with national brands is understood in the market, like the lower price, worse taste and poor quality. Therefore, the biscuit category will be chosen for the further analysis to validate the accuracy of previous survey about the macroscopical brand image. In this study, 11 brands of all main national and private label biscuits are contained in the survey of biscuit category, 7 PLB's and 4 manufacture brands. It will analyse them according to different attributes of PLB's compared with national brands. Also, recent users are separated out from the overall consumers in respect that there is less awareness and purchase rate of own labels in China. Table5 displays the results of the evaluation of several brand attributes for PLB's compared with national brands.
To the aspects of price attributes, it can be seen that PLB's are cheaper with better value than national brands based on purchasers' assessments. The number of response can validate the previous general assessments of price and value. Also, own labels have 2.25 times the response rate as much as national labels on “cheap”, while “good value” can be 43% likely to be associated with own labels by purchasers. Oppositely, the response of “overpriced” and “superior quality” on national brands is much higher than own labels based on quality attributes. In them, consumers may not associate with “overpriced” for PLB's totally; and “superior quality” can be received on an 82% lower response for PLB's compared with national brands. This can be also a reason to explain the result of equal response on PLB's and national brands about “Good value” (good performance / price) to the overall consumers. As consumers believe PLB's do not have better quality but cheaper than national brands, while national brands have better quality but more expensive. Thus it is suggested to improve the own-label quality so as to enhance its value.
In the evaluation about function attributes, own brands have 29% lower response on “morning eat” and 80% lower response on “eat as snack” than national brands. This illuminates that PLB's may adapt to be eaten as breakfast more than snack. This can result from so many sweet national label biscuits like “Oreo” and “Danone”, which are not fit for eating as breakfast.
To other brand attributes, own brands also present a far lower level of response than national brands. The huge gap of response number between them can be demonstrated that bigger brands (with more knowers and purchasers) can obtain more responses of brand image than smaller brands from this table (Romaniuk and Sharp, 2000). There is a biggest gap between them on “clear information”. Most own labels have simple packaging with less information about the origin of products, and possibly only including product name, shelf-life and net content. Sometimes, shelf-life is likely to be found difficultly by consumers due to the printing problems. Therefore, it leads to the above response. It can be ascertained that all of them are the shortage of PLB's and are necessary to be improved.
Accordingly, through analysing the brand attributes of own-label biscuits and branded biscuits, it can be concluded that the significant difference between PLB's and national brands is existent and perceived by Chinese consumers. The benefits of own labels are represented on the price only, and they need to be improved a lot on other attributes to catch up with the national brands.
Chapter 7: Discussion
On the basis of the comparison of British consumer perceptions between own labels and branded products in the literature review, and the two perceived by Chinese consumers in the primary survey, their relevance will be discussed in this chapter. Consumer perceptions of PLB's in China are able to be compared with UK's by national brands.
7.1 The awareness of PLB's in China and the UK
In China, according to the perceived biscuit brands in figure10, almost all respondents know the national brands, but only 17% of respondents know the PLB's. In the UK, most consumers should know the own brands because only 6% of consumers never buy private-label biscuits in the UK, according to Mintel (2009) researched. Also, a strong competition exists between own labels and national labels in the UK. These demonstrate that PLB's have definite strength to compete with manufacturers brands. So there is no significant difference of awareness about PLB's and national brands in the UK. Consequently, perceived own labels in China are lower rather than the UK. Consumers perceive a significant difference of PLB's in China and the UK. Otherwise, it is necessary to find the reason of low perception through comparing the Chinese own brands with the UK's specifically.
7.2 The comparison of PLB's image
The aim of this part is to find the limitation of PLB's in China through comparing them with British own labels on several main brand attributes, such as price, quality and taste. It will compare Chinese PLB's with the UK's according to Harris (2006)'s research about British PLB's perception of tea bag brands (see Appendix3), which is a specific and up-to-date research that focuses on consumer perception of PLB's in the UK.
As can be seen from this table, own labels have definite advantage on “cheap” and “good value” in both of countries, where Chinese PLB's have more probability to associate with “cheap” attribute than UK's PLB's on average, even anyone of them based on same national brands. As “good value” is 43% likely to be associated with own labels by Chinese consumers; meanwhile, there is less gap of PLB's compared with national brands in the UK. To the price, “cheap” attribute received on a 2.25 times response for Chinese own brands than national brands. The “cheap” characteristic of own labels in China is similar with British value PLB's, which received 98% higher response than national brands. Through the data to compare Chinese and British PLB's with national brands respectively, it seems that Chinese ones are perceived with lower price and better value rather than UK's.
But actually, based on previous research Chinese own labels aim at the development of “great value”, which has become some product brands. It is related with better performance and lower price. Thus through this survey, it can be concluded that “good value” of own-label biscuits in China can be only achieved by lower price, but lack of good performance, which can be also seen from the latter comparison about quality, taste and packaging of biscuits with the UK's own labels. In them, quality can be compared alone due to its importance.
To the quality attributes, own labels have lower/inferior quality than national labels in the UK. However, they can obtain higher probability on “superior quality” response than Chinese PLB's based on comparing with national brands. As value PLB's are assessed less than national brands but higher than standard and premium PLB's, because British consumers regard more on “worth more” about the value of products (performance/price). Furthermore, premium PLB's received higher response on “overpriced” than national brands. Nevertheless, there is no such premium PLB's assessed as “overpriced” in China. Thus it can be inferred that British PLB's have better quality than Chinese PLB's.
To the relationship of perceived quality and value, it was shown previously in figure7 about the brand association strengths that consumers prefer high quality of national brands and high value of value PLB's. This is also suitable to Chinese consumers, where the own labels perceived as “cheap” and “good value” can be seen as value PLB's and compared with national brands with superior quality.
However, own brands are still necessary to be improved on the performance to keep the good value, like the inferior taste and packaging as follows.
According to this table, PLB's in the UK have limited difference with national brands. However, there is an obvious diversity on taste and packaging between Chinese PLB's and national brands. As “taste good” can be received on a 54% lower response for Chinese own labels than national labels, and “good packaging” has lower response than “taste good”. Consequently, PLB's in the UK can be perceived with better taste and packaging than Chinese ones.
In short, there is a significant difference of consumer perception on PLB's between China and the UK. Except the “cheap” and “good value” attributes on own labels in China can receive a higher evaluation than the UK's, other attributes are received on a lower response than UK's own brands.
Related with literature review, the reason of bringing on the results can be that consumers understand and buy the PLB's less, which account for less market share. Moreover, consumers can believe the own labels are worse than national labels potentially due to its lower price and less penetration in the market. Therefore, the Private-label image is worthy to be regarded more by the retailers. Consumer would be willing to know them, when the brand image is increased through improving the poor attributes of own brands in China.
In addition, the research only focuses on the specific category of biscuits, which cannot replace the whole market. Therefore, the survey results can be only as a reference for the all own labels in China. It is still suggested to improve the performance of Chinese PLB's such as quality, taste and packaging, so as to increase the value of products; meanwhile, the new brand positions of PLB's in China can be developed for consumers with more different demands to penetrate the food market deeper, such as premium, standard and value PLB's in the UK.
Chapter 8: Conclusions and Recommendations
This chapter concludes all problems identified and all findings obtained by the research in this thesis. Also, it provides recommendations for the future study.
8.1 Conclusion
During the last decade, private label brands involved all grocery retailers in the UK. The development of own labels in the UK is so successful that most retailers have used them as the strategic weapon to increase their reputation and build the store image through distinguishing with other brands. However, the situation in China is totally different from the UK's. There is less awareness of PLB's, which is in the condition of take-off with very low market share, although own labels have been seen in some larger retailers. Therefore, it is necessary to know the reason that private labels are developed difficultly in China through comparing the difference of consumer perception on PLB's between China and the UK based on national brands.
The biscuit category is chosen as the research objective because biscuit is the top one in the leisure foods in China. This research has investigated the consumer perception about brand image of PLB's compared with national brands in China by the questionnaire. And consumer perception of PLB's compared with national brands in the UK is known based on the literature review, because previously there are more studies on the comparison between own brands and manufacturer's brands in the UK. Finally, the comparison of PLB's between China and the UK is achieved by the norm of national brands.
According to the research data, it shows that there are just over half of respondents knowing the own labels and less than half of respondents having bought the own-label biscuits in China. Also, the awareness of PLB's has the relationship with the change of age and occupation. Moreover, through the total survey of brand attribute evaluation, it can be known that PLB's are perceived lower than national brands on any attributes generally.
Further data obtained by studying the biscuit category to validate the above result is that own labels in China are cheaper and have better value than national brands, and others have same results with general assessments. Thus the deviation of “good value” is discussed through more researches about quality, taste and packaging to compare with British PLB's based on the national brands. Lastly, it can be confirmed that the value is just emerged due to the low price, but there is no better performance of biscuits surveyed due to poor quality, taste and packaging.
8.2 Limitations and Recommendations
This research has indicated that there is a significant difference in consumer perceptions of PLB's between China and the UK in the biscuit category, and higher assessments on “cheap” and “good value” than the UK's, but there is a lower probability on other attributes such as quality, taste and packaging. Nevertheless, there are some limits within this research. To the methodology, convenience sampling method is used; questionnaires limited the individual ability to explore more views about brands and brand attributes. Also, the few purchasers of own-label biscuits can affect the accuracy of results. Furthermore, it just focuses on the biscuit category, which cannot replace the whole FMCG market. In addition, the UK's market is not surveyed with primary data collection due to the limited time and budget. Its source of secondary data may lack of the reliability and validity, especially make reference of Harris (2006)'s researching results about consumer perception of PLB's in the UK.
Further research can be recommended in future. As it has found out that there is a shortage of PLB's on their quality, taste and packaging. Thus it is necessary to know how to improve them better so as to adapt to the Chinese consumers' demands. Otherwise, it needs to extend the PLB's brands and segment them into different levels aiming at consumers with special needs like value, standard and premium PLB's in the UK.
References
Aaker, D.A. 1996. Building strong brands. 1st ed. New York: The Free Press
Ashley, S.R. 1998. How to Effectively Compete Against Private-Label Brands. Journal of Advertising Research. January/February.
Baltas, G. (1997), “Determinants of store brand choice: a behavioural analysis”, Journal of Product & Brand Management, Vol. 6 No. 5, pp. 315-24.
Barnard, N. R. and Ehrenberg, A. S. C. 1990. Robust Measures of Consumer Brand Beliefs. Journal of Marketing Research. 27 (1990), pp. 477-484.
Biel, A.L. 1992. How Brand Image Drives Brand Equity. Journal of
Cite This Dissertation
To export a reference to this article please select a referencing stye below: | https://www.ukessays.com/dissertation/examples/business/private-label-brands.php |
At any given time, a myriad of forces are at work influencing our consumer perceptions, affecting the attitudes and preferences that ultimately determine our purchase outcomes and future consumption patterns. Factors that affect our purchase decisions vary widely from one industry to the next, because firms differ in product offerings, regulatory considerations, and competitive scenarios.
The consumer decision-making process
The process a consumer goes through during the decision-making process is remarkably complex. Consumer scientists have identified five distinct steps involved in purchase behavior (see Resources). The first stage is one of need recognition, where a potential buyer recognizes an imbalance between actual and preferred states. This recognition can result from both internal and external stimuli, where a consumer might notice the gap on his or her own or through some external force such as marketing, advertising, or simply viewing others’ consumption habits and desiring to have the same.
After the initial product need is realized, that consumer embarks on an information search. Depending on the consumer’s level of interest in the product and personal attitude toward risk, he or she will spend either a short or long amount of time gathering and assembling the information needed to make an informed purchase decision. Next, deliberation occurs, where an identification and evaluation of alternatives is performed. During this third stage, a consumer analyzes product attributes, determines thresholds, and ranks product attributes by personal importance. It is in this third stage that price considerations and availability of alternatives are stressed. Finally a purchase occurs, then postpurchase and the process of cognitive dissonance, when a consumer mulls over his or her actions and wonders if he or she made a good decision, whether he or she got the right product as well as good value, and mentally adjusts attitudes to bring this into balance.
Purchase drivers: Channels
A lot of research has been devoted to understanding the drivers of purchase in a retail environment. It is important to note that these drivers also differ based on channel of sale. In the case of purchases made in retail stores (see Resources), purchases involve face-to face interaction with service personnel. In-store aspects like store display and presentation, ambiance, customer treatment, store layout, discounting, and promotions all enhance or detract from the non-virtual shopping experience, whereas e-commerce site experience factors include site design, site performance and reliability, security, and customer service. Regardless of channel, pricing is a key factor related to consumer choice.
Some studies have found that consumers’ price sensitivity tends to be less evident when shopping online and through mobile channels. One study (see Resources) found the primary impact on consumer choice to be channel (store, catalog, Internet) and price of the product, noting a distinct segment of customers preferring to transact through the Internet channel.
Retail represents one of the most complex industries in terms of the number of products and channels involved. A supermarket may offer tens of thousands of products, with roughly 50 percent of them perishable. A home improvement store also offers an infinite and ever-changing supply of options. Consider limited-product companies such as electric power and insurance companies. These firms traditionally offer only a limited number of products. Utilities may offer a selection of billing, bundling, metering, self-management of usage, rate structure, and prepaid options; however, consumer choice is essentially limited to a single product: electricity, water, or natural gas. Although insurance companies offer a comparatively larger selection of products, including auto, health, life, property, renters, pet, and financial products, that list is still finite compared to the complex array of product categories and alternatives that retail companies offer prospective customers.
Identifying the determinants of demand for firms with a limited number of products is therefore also relatively less complex, whereas retail represents an industry in which channel and consumer data collection is extremely rich and frequent. The retail industry is in a constant state of flux, with increasing competition and experiencing a seismic technological shift toward mobile as a channel that provides real-time alternative product pricing, feature-comparison capabilities, and one-step purchasing. Retailers that employ predictive analytics to better understand their customer and prospect behavior at a micro-segment level will be better situated to make faster data-driven decisions and have a deeper grasp on customer demand and preferences.
Evaluating product performance
Given all this complexity across a multichannel series of product categories, brand selection, packaging, inventory, pricing, discounting, and display options, how do retailers effectively evaluate performance at the product category level? At large retailers, this is done through a retail and supply chain management approach called category management, where the ranges of products are assembled into broad-level groups categorized by similarities. Each group is then run as a separate business unit. Each category manager is responsible for the direction and performance of his or her product category business unit, and each develops his or her own profitability targets and business strategy (see Resources for more information about category management).
Traditional metrics around sales per selling space or product category dominate the industry, and these key performance indicators (KPIs) are tracked in real time in enterprise reporting and inventory management systems and viewed by managers through reporting dashboards. Another important evaluation approach originating from the marketing discipline is to evaluate the customer lifetime value (LTV) of the customer base for the product category and understand how that audience interacts with the brand from a cross-product category perspective.
Let’s assume, for our purposes, that traditional aggregate KPIs like sales per square foot and sales per employee metrics are already tracked and stored in the database to support additional advanced analytics modeling initiatives. Consider what information about consumers at the individual level might be useful in terms of understanding the overall potential of an existing customer base for a product category, and consider how this information could be used to increase sales and manage inventory. The variable list at the consumer and household level available for modeling is virtually infinite—and overwhelming—from a data perspective. Recent technological strides have addressed traditional Structured Query Language (SQL) databases and commercially available statistical tools and now new NoSQL and cloud-based systems are emerging as common components of the go-to scalable data warehouse architecture revamps. Databases such as Netezza, Cassandra, and Pentaho combine with systems like Apache Hadoop and MapReduce, which allow access to auto-classify functionality and filter by sample the petabytes of consumer data in an interactive way not possible with legacy systems. The proliferation of social, mobile, web, video, and picture data associated with consumers’ interaction with the brand represents some of the most vital and untapped data in the retail industry. This information provides savvy retailers with an opportunity to gain a competitive advantage as they find new ways to derive individual and product-level insights by combining new data and approaches with industry-standard transaction, bar code-level, and survey-based data.
Retail performance is ultimately determined by sales numbers, and pricing strategy is an integral component of the health of a business unit and the enterprise overall. Gathering information about the consumer in terms of individual and household preferences, volume, interval of usage, bundling of purchases, complementary and alternative brands and products, and price sensitivity at the product category level allows insight from a bottom-up perspective. This insight will support micro-segmentation of customers for profile development that drives pricing and product design strategy and provides data to support bottom-up level sales forecasts and inventory supply forecasts. One of the most significant consumer-level variables as an input to both the segmentation and the sales forecasting models is price sensitivity. The remainder of this article focuses on the measurement and usage of this key consumer price-sensitivity input into the model.
Price sensitivity
Price sensitivity is the marketing term for the product- and consumer-level metrics that economists refer to as price elasticity of demand. Basic economics (see Resources) teaches us that all consumers are not created equal. One of the first concepts introduced is consumer preference, where consumers must choose bundles of goods, and the allocation subject to their budget constraint is represented by indifference curves. The rate by which they substitute some of one good to gain more of the second good is called the marginal rate of substitution.
Price elasticity of demand
Another concept introduced early on is price elasticity of demand (see Resources), a sensitivity measure representing the impact of a price change on quantity demanded. If quantity and price are represented by Q and P, price elasticity of demand is represented by the following expression:
Ed = (∆Q / Q) ч (∆P / P)
This indicates the percentage change in quantity that results from a one-percent increase in the price of that good. This price elasticity is affected by three main factors:
- First is the availability of substitutes and, generally speaking, the more alternatives consumers have to a product that has become more expensive, the more likely they are to shift to a similar product; in this case, demand is elastic.
- A second consideration is the time frame consumers have to adjust to the price change. Over time, more information on other alternatives results in demand becoming more elastic over time.
- Finally, there’s a question of how much of a household budget is allocated for this good: The larger the budget share, the more elastic the demand.
Two-stage estimation model
What is proposed here is a two-stage estimation model. In the first series of predictive models, price sensitivity measures at the individual and household level are determined. In the second estimation step, these price sensitivity inputs generated in the first stage of modeling become inputs into the second predictive product-demand estimation model.
The approach presented here involves an initial estimation of individual levels of price sensitivity at the product category level as a base. It is recommended that moving average and lag variables that capture sensitivity changes also be derived from a time series of purchase data and other indicators of developing or diminishing price sensitivity that might be stemming from the onset or easement of budgetary constraints (for example, job loss, student off to college, divorce, illness, other negative macroeconomic indicators affecting consumer confidence on the negative side; or salary increase, child finished with college, marriage, and other positive economic indicators). Other variables to facilitate a cross-price elasticity of demand representation (how price sensitivity for a product changes in relation to a price change in either a complementary or substitute product) should be collected.
Measuring price sensitivity
Price sensitivity, like customer loyalty, can be measured in a variety of ways and from a great deal of different consumer data. One measure used at the outside of new product launch pricing strategy is called the van Westendorp Price Sensitivity Analysis (PSA; see Resources), an approach that uses consumer survey feedback data to determine a range of acceptable and optimal pricing. This data is useful in determining whether a price-skimming (see Resources) strategy is feasible.
Survey-level data that is available at the household level may be added to the model. Keep in mind this will be a sample representing only a small percentage of households; if this data is added to the sensitivity estimation model, this data issue can be managed in two ways. First, a look-alike model can be contracted to score the customer database with expected responses. The second approach is to overlay the survey data on top of either a micro-segmentation or propensity model and assume that all others within a segment or a probability threshold have similar responses. The Resources section offers a link providing an example where Taco Bell was able to identify price sensitivity through surveys.
Social media provides a rich and untapped source of semantic data that can be valuable in terms of price-sensitivity prediction. It is necessary to collect customer social media IDs to do this modeling at a household level, and increasingly more firms are collecting Twitter and Facebook IDs that can be merged with customer transaction and survey data. If this linkage variable is not available, specific brand and product category price sensitivity can be derived from anonymous consumer comments and aggregated for other sensitivity analysis validation purposes. Briefly, consumers who use keywords such as expensive, cheap, a steal, coupon, and so forth tend to be price conscious, especially when comment share is at a higher-than-average rate for the brand in relation to quality, packaging, service, and other brand category comments.
Conclusion
After a company has identified household levels of price sensitivity at the category level, this data can be used and updated to improve strategy and processes throughout the enterprise. One such specific example is pricing strategy and specific households to target for new product offerings that will have a higher propensity to pay a premium during the early-introduction phase of the product launch. In a case where promotions or price decreases are planned, predictive models can predict the new demand scenarios, and these projections directly relate to inventory control to meet increased or decreased expected demand. Additionally, the price-sensitivity variables can be useful in terms of messaging at a segment level.
Currently, integration continues to happen in terms of pulling together all of the customers’ touch points and transactions; incorporating these with how brands are interacting with customers across social channels like Twitter and Facebook and how consumers are influencing one another’s purchase decisions. The retail industry promises to become only more complex and full of rich new data sources to drive competitive advantage to those brands that put predictive analytics into practice to leverage all of it. | https://developer.ibm.com/articles/ba-price-sensitivity/ |
Consumer surplus is the additional benefit to consumers that they derive when the price they pay is less than the maximum they are prepared to pay.
Consumer surplus is an important concept as it provides a method to evaluate the impact of changes in market conditions, and in terms of the impact of government policy.
A demand curve reflects the expected marginal benefit (or utility) derived by consumers when they purchase a given quantity. In consuming quantity ‘Q1’ at price ‘P1’ the consumer is prepared to pay more than ‘P1’ for units between zero and ‘Q1’.
Consumer surplus is measured as the area from the price line up to the demand curve.
If the price is P1, then the whole area is the value of the consumer surplus. If price rises, there will be a negative income effect and substitution effect, resulting in reduced demand.
This means that, assuming a fixed budget, less can be purchased (the income effect), and assuming the price of substitutes remains constant, consumers will switch to the alternative (the substitution effect). The result is that consumer surplus falls.
Hence, the higher the price, the smaller the area for consumer surplus, and conversely, the lower the price the larger the area of consumer surplus. | https://www.learn-economics.co.uk/Consumer-surplus.html |
The convergence of Middle East studies and Cold War studies in recent years has brought the region’s strategic importance to bear upon a conflict conventionally conceived as a duel between capitalist fantasies and communist ideologies. Yet this scholarship has not thus far taken account of the role of the visual arts in the struggle, nor how that struggle bore upon the visual arts in the Middle East, despite the fact that both Cold War studies and American and European art history have documented the ways in which art, and particularly certain styles of painting, namely American abstract expressionism, was a site of ideological investment.
Scholarship on the visual arts in the Middle East has acknowledged the function of art in forging political alliances, which resulted in traveling exhibitions, artists’ residencies, cultural exchanges, and the establishment of university art departments, cultural centers, and publications that have been central to the region’s art scenes. However the current paradigms in this scholarship rely upon analytic conventions, themselves a product of the Cold War, that in overemphasizing national style and autonomy, fail to adequately situate the arts in the more general political context set by the Cold War, and thus fail to deal with the complexity of the artistic encounters that took place in the name of ‘cultural diplomacy’ as well as the often unintended and novel aesthetic shifts that resulted. This panel reframes the relation between art and politics in the 1950s and 1960s by considering that relation in a broader international context.
Panelists:
Sarah Rogers (Darat al Funun) American University of Beirut and the Formation of the Modern Lebanese Artist
Saleem al-Bahloly (UC-Berkeley) The Politics of the Modern Artwork in Cold War Iraq
Jessica Gerschultz (University of Kansas) Mutable Form and Materiality: “Interweaving” Art and Politics in the New Tapestry of Safia Farhat, Magdalena Abakanowicz, Maria Laskiewicz, and Jagoda Buic
Monday November 19, 2012
Arab Spring, Artistic Awakening? Art, Resistance and Revolution
Organized by Jennifer Pruitt (Smith College) and Dina A Ramadan (Bard College)
Since the early weeks of the “Arab Spring,” critics and commentators have been eager to assert that in something of an awakening, artists from the region are finally being allowed the freedom to express themselves after decades of repression. Exhibitions and symposia soon followed, primarily concerned with the unique and specific role played by artists in the groundswell of grassroots activism, as well as how artists are directly tackling political upheaval in their work.
This panel would like to engage in a more nuanced examination of the relationship between art and politics, one that recognizes the limitations of prescribing a role for artistic expression based on anachronistic understandings of contemporary revolutions. Given the evolving nature of the “revolutions” we have witnessed over the last year, what is the changing place for artistic production and how do we move beyond the temptation to assign artists the responsibility of representing the revolution.
Papers on this panel will propose possible paradigms through which to understand the complicated relationship between art and revolution from a range of disciplines. Two of the papers will consider artistic production in Egypt since the revolution, the first addressing the role of the artist, particularly the artist as martyr and the relationship that develops to our understanding of the work, while the second examines the explosion of graffiti art across the walls of Egypt. Continuing the interest in art in public spaces, the third paper will look at the Libyan context, and specifically the representations of Muammar al-Gaddafi, the so-called “King of Kings of Africa,” in which the opposition sought to degrade Gaddafi through the use of a variety of “BlackFace” visual stereotypes. The final paper uses the Syrian documentary film collective, Abounaddara, to problematize the characterization of art during the Syrian uprising as a ‘new’ genre and the uprising as an event with predetermined meaning.
Chair/Discussant: Elliott Colla (Georgetown University)
Panelists: | http://amcainternational.org/two-amca-sponsored-panels-at-the-2012-annual-meeting-middle-east-studies-association-2/ |
There are a number of publications online about botanical art in the past - if you know where to look.
From Botany to Bouquets: Flowers in Northern Art (1999 / 88 pages) is the catalogue of the exhibition organised by the National Gallery of Art in Washington and held between 31 January — 3 1 May 1999
You can view and/or download a digital version for free as a pdf file - View PDF (28.94MB) on the NGA website.
From Botany to Bouquets examines the origins of flower painting with a selection of botanical treatises, manuscripts, and watercolors by 16th- and 17th-century printmakers and draftsmen.
The catalogue was written by Arthur K. Wheelock Jr. (the recently retired Curator of Northern Baroque Paintings at the National Gallery of Art in Washington) and is essentially about still life art involving flower painting,. However it starts from botanical art and how this developed over time within the context of various historical developments.
It also includes some great examples of paintings of flowers. I particularly enjoyed
The artists who created the flower still lifes in this exhibition could convey the delicacy of blossoms, the organic rhythms of stem and leaf, and the varied colors and textures of each and every plant. They could capture the fragile beauty of flowers and the sense of hope and joy they represent. Their bouquets come alive with flowers that seem so real we almost believe their aroma—and not the artist's brush—has drawn the dragonflies and bees to their petals.
Your comment will be posted after it is approved.
Leave a Reply.
|
|
Author
Katherine Tyrrell writes about botanical art and artists and has followers all over the world. You can also find her at linktr.ee
BAA Visitors so far....
since April 2015
Subscribe to BAA News
Blog posts are emailed to you when you SUBSCRIBE to "Botanical Art and Artists - News" by Email
Your email subscription to this blog is ONLY activated IF you verify the link you will receive. You can unsubscribe at any time
It will NOT be used for anything else and will NEVER be given to anybody else.
EVERY DAY FOLLOW News about botanical art + links to new BAA blog posts on Botanical Art and Artists on Facebook
Copyright
© Katherine Tyrrell 2015-22
Unauthorised use or duplication of ANY material on this blog without written permission is strictly prohibited. Please also respect the copyright of all artists featured here.
What's your news?
This blog highlights news - in brief - about botanical art exhibitions around the world.
Use the Contact form to tell me about an exhibition and provide a summary of relevant information. If listing your event I will ask you for relevant images.
The Best Botanical Art Instruction Books
Tap the pic to check out my recommendations
The Best Books about Botanical Art History
Tap the pic to check out my recommendations
Workshops, Classes & Courses
Find out about botanical art workshops, classes courses offered by various organisations and artists in:
Read other Botanical Art Blogs
READ Blogs about botanical art and/or by botanical artists & illustrators
Categories
All
BAA News Archives
January 2023
Archive (MaM)
This page Botanical Art & Artists on my main blog has an archive of blog posts about past exhibitions of the Society of Botanical Art and Artists
|
|
NEWS
News Blog about artists, awards, exhibitions etc.
|
|
EXHIBITIONS
- Calls for Entries
- Exhibitions around the world
- Online Exhibitions
- RHS Exhibitions
- Hunt Exhibitions
ORGANISATIONS
- Botanical Art Societies - national / regional / local
- Florilegium & Groups
- Botanical Art Groups on Facebook
|
|
EARN
- Tips and Techniques
- Best Botanical Art Instruction Books
- Directory of Teachers
- Directory of Courses
- Online Botanical Art Courses
- Diplomas and Certificates
- Talks, Lectures and Tours
ART MATERIALS (Paper / Vellum)
BOTANY FOR ARTISTS
- Scientific Botanical Illustration
- Best Botany Books for Artists
- Plant Names & Botanical Latin
BOTANIC GARDENS & Herbaria
|
|
FEEDBACK
Please send me .
- news to share
- info. about exhibitions
- any suggestions for what you'd like to see on this website
ADVERTISE
Contact me if you'd like to promote workshops and courses on this site.
AFFILIATION
This website is free to you but not for me! (See Affiliate Income below)
|
|
Cookies, Personal Data & Privacy tells you how this site relates to and impacts on you and your privacy - and your choices.
Product & company names may be trademarks of their respective owners
|
|
About Affiliate Income: This website has been created to share information not to make a profit. I am an Amazon Associate and earn from qualifying purchases (e.g. books from Amazon) which helps offset costs associated with maintaining this very large website. | https://www.botanicalartandartists.com/news/from-botany-to-bouquets |
OCT Contemporary Art Terminal (OCAT) Shenzhen was founded in 2005 and is based in OCT Loft, Overseas Chinese Town of Shenzhen. OCAT Shenzhen is the headquarters of the group of OCAT museums. As the longest-standing member of the OCAT museum group, OCAT Shenzhen is a pioneering contemporary art institution established in Shenzhen.
The programmes of OCAT focus both on in-depth survey, research, publication and exhibition of individual artists, as well as research-based thematic exhibitions. In addition to OCAT Exhibitions, OCAT Performs and OCAT Screens are annual programmes showcasing performing practices and theoretical discussions in art, dance and theatre, as well as screenings and lectures on documentary, video art and films.
Accompanying the exhibitions, performances and screenings that OCAT organizes, as well as OCAT Residency, which artists, curators, art critics and scholars are invited to reside in OCAT, the programme OCAT Library initiates lectures, conversations and other forms of discursive activities in the library in OCAT Shenzhen and publishes part of its content in book forms, providing documents and reading materials for a wider public and researchers.
In OCAT Shenzhen, publishing is both prompted by its exhibitions but can also function as an independent form of artistic and conceptual articulation and experimentation. The conception, editing and design of its publications could be employed as a form of artistic practice in parallel to the exhibition. OCAT Shenzhen has produced many outstanding publications, including the ones for every Sculpture Biennale over the past 16 years. | http://www.shenzhenparty.com/places/art-craft/oct-contemporary-art-terminal-ocat |
- Freedom Restrained?
- Birth of Kells
- Cuchullain/Morrigan
- Hugh O'Neill and the Fianna
- Song of American sculptures
- Divine Love
- Life of St. Elizabeth
- Constructing the Past
- Ode to Aphrodite and Hala Sultan
- Balancing on Three Points
- Grianan of Aileach
- Carriage na Gailge
- Marian Doors St. Michael's Cathedral Basilica
"My experience creating works of public art has involved finding a balance between and seen the evolving of a synergy between personal vision and community involvement - to a point where they exist in harmony. My involvement in and understanding of Art in community life has evolved to the point where community involvement in some form represents an integral part of my public art practice. All of my works of public art are site, community, or culturally specific. Separating from their location would strip them of meaning.
Although the creation of non-commissioned solo works remains my principal occupation, it has been my experience that a shared artistic vision and experience is an enhanced one.
My professional art practice has often meant an engagement with the local community. Many of my major sculptural commissions in Belfast, for instance, involved the community in a very meaningful way. Youth, pensioner, ex-prisoner, cultural, government, artistic, women, minority, and political groups were often consulted and interacted with as a necessary part of my artistic process. The Artist was integral to, supportive of, and was supported by the community. As a result I was able to create with complete artistic freedom; all my works were accepted and owned by the community.
Through my experience in youth work in Canada and Ireland, I became convinced of the positive influence of Art in the encouragement of youth. I have organized in both countries exhibitions of work by emerging artists. This involvement with emerging artists has formalized into a practice by which I have taken on artists from the secondary, undergraduate and graduate level as apprentices in some of my major international public commissions.
In my own emerging years as a working artist, my artistic practice often took place within the context of larger artists collective, and I still belong to various art collectives and societies, although at this time I work in isolation".
Yours truly, | https://www.farhadsculpture.com/PublicWorks/default.htm |
Five years in the making, it’s finally open to the public in last December.“Ink Art: Past as Present in Contemporary China” attempts to be a defining show on the subject, charting out new identities for Chinese contemporary art by featuring artworks embodying the “ink art aesthetic,” yet it is the “Ink Art” aesthetic wherein lies the problem of this show. “Ink Art” though full of iconic work, fails by looking at contemporary work merely through the prism of “tradition,” thus largely obfuscating the contemporary meanings and influences which anchor the work.
As Maxwell K. Hearn, the head of the Asian Art Department at the museum and curator of this show, elucidates in the exhibition catalogue, “Ink Art examines the creative output of a selection of Chinese artists from the 1980s to the present who have fundamentally altered inherited Chinese tradition while maintaining an underlying identification with the expressive language of the culture’s past.” Even though the primacy of the “ink art” tradition in China has increasingly been challenged by the idiom of Western art as well as that of new media ever since the early twentieth century, some Chinese artists chose to retain traditional Chinese painting medium or a “brush and ink” aesthetic in their practices.
Viewing the works of Chinese contemporary art included in this exhibition as “part of the continuum of China’s traditional culture,” the curator has embedded and contextualized the entire “Ink Art” show in a traditional setting. As viewers enter into a gallery near the Great Hall, they are greeted by two sets of large-scale triptychs, “30 Letters to Qiu Jiawa”(2009) by Qiu Zhijie and “Crying landscape”(2002) by Yang Jiechang, installed next to a Dunhuang mural and some ancient Buddhist sculptures from the museum’s permanent collection of traditional Chinese art. Having adopted the styles of traditional scroll ink painting and “blue-and-green” landscape painting, Qiu and Yang’s works seem to merge right into the larger context, camouflaged among the permanent pieces on display that were created more than 1,000 years ago. Qiu’s work depicting the iconic Nanjing Yangtze River Bridge has a contemporary edge to it, however; festooned with hovering symbols of spiral shapes, ladders and an infant figure, it serves as a visual narrative of the suicides facilitated by this glorious national symbol and the collective memory associated with it. Yang Jiechang takes similar aim at architectural icons with overlapping images of mysterious explosions in front of the Houses of Parliament, an oil refinery with smoke rising into the blood-red sky, a missile aimed over the Three Gorges Dam, the Pentagon under terrorist attack and a surreal Las Vegas skyline occupied by famous New York landmarks—all these images testifying to the idea that established power structures could be in imminent danger, and overthrown overnight. The subversive commentary embodied in the two contemporary works serves as a dystopian force, pulling the viewers immediately from the “ancient heaven” back the reality.
En route to the main displays, three other works are embedded in a gallery which primarily features ancient artifacts and figurines from China. A set of six prints displayed in a glass case attached to the wall resembles the maps directly torn out of antique woodblock-printed books; however, the ubiquitous misnomers, cartoon-like markings and geographic misrepresentations reveal that they are the work of contemporary artist Hong Hao and his alternative interpretations of our contemporary world and it’sglobal trends. In these maps, the international influences, military power, prevalent value systems and associated stereotypes of different geographic regions are visualized and communicated to the audience with irony and playfulness. Overlooking Hong Hao’s maps are two signature Ai Weiwei works—a mosaic-like map of China constructed from the wood salvaged from a destroyed Qing dynasty temple and a Han-dynasty jar painted over with a Coca-Cola logo. Placed in the center of a permanent collection gallery, these three works are meant to show how Chinese contemporary artists confront or comment on China’s self-image or national identity. However, given their respective media and the context of the stated curatorial aims of “Ink Art,” their relevance to the organization of the entire show is questionable.
The overarching idea of interpreting Chinese contemporary art through the lens of traditional Chinese aesthetics is also evident in the exhibition’s display and overall structure. Exhibited in the galleries that were originally designed to recreate an authentic, traditional Chinese setting—notably Astor Court—the famed Ming Dynasty-style courtyard—and the Douglas Dillon Galleries—almost all the contemporary works are placed in wood-and-glass vitrines made specifically to show scroll paintings and calligraphy works. Thematically, this exhibition is organized into four main sections—“The Written Word,” “New Landscapes,” “Abstraction” and “Beyond the Brush.” These titles immediately associate with quintessential art forms from China’s past. In the wall text for each section, the curatorial statement always starts with, and invariably weighs towards, an explanation of the aesthetic traditions and ethos central to the traditional art form. Taking the “New Landscapes” section as an example, the statement begins with, “Over the past one thousand years, landscape imagery in China has evolved beyond formal and aesthetic considerations into a complex symbolic program used to convey values and moral standards. In the eleventh century, court-sponsored mountainscapes with a central peak towering over a natural hierarchy of hills, trees, and waterways might be read as a metaphor for the emperor presiding over his well-ordered state…” These statements often end by noting that many Chinese contemporary artists nowadays have drawn inspirations from these “past models,” seemingly justifying the exhibition’s thematic categorization based on the contemporary works’ connection to their precedents.
But the question remains: is this new curatorial perspective used in the Met’s Ink Art show really valid? As we ponder the approach of using “past models” of Chinese art as a vantage point from which to interpret the recent development of Chinese contemporary art, several problems arise.
First of all, the entire exhibition is conceived and curated based on an underlying assumption made by the curator—that the same set of criteria that were developed to appreciate ancient and traditional Chinese art could be readily adopted to similarly judge contemporary Chinese art. This assumption is a fundamental fallacy of this show. If we carefully consider all the social upheavals and artistic transitions that China has gone through in the past several decades, it is clear that the development of Chinese artistic traditions cannot be viewed as a linear narrative. Since the categorization of contemporary works under each theme is primarily determined by their relevance and resemblance to different traditional art forms in material, style, technique and visual trait, the exhibition only reveals the most superficial connections between the traditional and contemporary. The real complexities behind the recent development of Chinese contemporary art, such as important social, political, economic and cultural changes, remain veiled. By highlighting and framing contemporary Chinese art in this manner, this exhibit draws viewers to focus more on techniques and less on the real message behind the individual works.
In addition, since it is specifically dedicated to the examination of “ink art aesthetics” in Chinese contemporary art, “Ink Art” almost provides a panoramic view on how traditional formats and mediums are continuously adopted or re-appropriated in Chinese contemporary art to further a variety of artistic agenda.
Credit must be given to the Met, as the exhibition’s wide selection of works allows many unfamiliar Chinese names exposure on an institutional level in the West. Nonetheless, this raises a side issue as well—some of the works exhibited have little relevance to the themes they serve and thus the exhibition sometimes wanders off topic. The section “Beyond the Brush,” features works which, in the words of the curator during a press preview speech, he “could not resist acquiring, but have nothing to do with ink.” To be more specific, these works do not use ink itself as a medium but embody the ink aesthetic, as their forms are inspired by traditional artworks that derive from Chinese literati pastimes or patronage. As general and vague as the statement sounds, the exhibition’s expansion of focus in the last section raises an issue of cohesiveness. For example: “Ruyi” (2006), a ceramic mutation of the most iconic talisman in traditional Chinese culture made by Ai Weiwei, is on view here. This fungus-shaped scepter being transformed into a lump of human organs by the artist looks both odd and disturbing. By doing this, Ai dramatically subverts the traditional talisman that symbolizes power and longevity into a vulnerable, monster-like creature. Coming from a lesser-known series made by Ai, “Ruyi” itself is an interesting selection. However, since the work’s traditional prototype itself is so remotely related to the “ink art aesthetic,” its inclusion in the exhibition seems farfetched. For the same reason, the inclusion of Hong Hao’s map, Ai Weiwei’s Coca-Cola jar and mosaic map is doubtful as well. In the “New Landscapes” section, an image of isolated concrete jungle rendered from a real-estate architectural model by Xing Danwei, “Urban Fiction No. 13”(2005), is on display next to a ghostly photograph of Shanghai by Shi Guorui. As these works bear no noticeable elements of Chinese ink tradition, their inclusion in the show should be questioned.
That said, the exhibition does feature many historically important classic pieces including Zhang Huan’s “Family Tree” (2001), the photographs of Song Dong’s most famous performance “Printing on Water” (1996), Sun Xun’s video “Some Actions Which Haven’t Been Defined Yet in the Revolution” (2011), Cai Guoqiang’s 1993 performance “Project to Extend the Great Wall of China by 10,000 Meters,” and Xu Bing’s installation “Book from the Sky”—the zen-like atmosphere created by Xu Bing’s installation leaving a deep impression on many viewers. Along with these iconic works, one of the highlights in the show is Huang Yongping’s “Long Scroll” (2001). Adopting the format of a traditional paper scroll, Huang gently depicts several of his installation projects created between 1985 and 2001 and rendered in simple palette of orange and blue watercolor. These casually illustrated images could be viewed as informal archival documentation or even a “mini-retrospective” of Huang’s past projects. Here, the feeling of the grandiose installations is replaced by a sense of intimacy between the images and the viewers.
Right before the beginning of 2014, two well-respected art institutions in the U.S.—the Rubell Family Foundation and the Metropolitan Museum of Art—both opened their first landmark exhibitions featuring Chinese contemporary art. While the “28 Chinese” exhibition at the Rubell Collection clearly places more emphasis on the young generation of Chinese contemporary artists and their individual practices, “Ink Art” has taken on the ambition to examine new collective identities for Chinese contemporary art, viewing this art from the standpoint of its own cultural heritage instead of the Western-avant-garde artistic language the artists have adopted. While both exhibitions have attracted much attention within the art world both locally and globally, their respective differing ways of framing and interpreting Chinese contemporary art brings up the important point of cultural representation of Chinese contemporary art.
“As the debates of recent years have shown, ‘identity’ is not an ‘essence’ than can be translated into a particular set of conceptual or visual traits. It is, rather, a negotiated construct that results from the multiple positions of the subject vis-à-vis the social, cultural, and political conditions which contain it,”[i] as Mari Carmen Ramirez, the renowned Latin American art curator, argues in her essay “Brokering Identities.” Unequivocally, the Met’s strong collection of traditional Chinese art, the curator’s particular expertise in traditional Asian art, good intentions and academic ambitions are all evident in the curatorial decisions, making this show a strong showcase of iconic pieces. However, the idea of using the “ink art aesthetic” to characterize and examine the pieces being shown is problematic. Overly emphasizing the visual traits shared between traditional and contemporary Chinese art – such as materials, techniques and artistic forms – “Ink Art” has unintentionally fallen into the Orientalist trap.
[i] Mari C. Ramirez, “Brokering Identities: Art Curators and the Politics of Cultural Representation, ” in Thinking about Exhibitions, ed. Reesa Greenberg et al. (New York: Routledge, 1996), 23. | http://www.randian-online.com/np_review/ink-art-a-mix-of-contemporary-and-traditional-ink-art-ink-arts-direction-falls-flat/ |
NıCOLETTı is a nomadic artistic project spanning curatorial practice, artist representation and art consultancy. Supporting and promoting both emerging and established contemporary artists – from the late 20th century to the present – NıCOLETTı organises series of exhibitions addressing current artistic, cultural and political problematics. The project is committed to fostering dialogues between artists and practices engaging in and contributing to international critical discourse.
Departing from the model of permanent space, NıCOLETTı is conceived as an itinerant project travelling across borders to facilitate and stimulate exchanges between artistic scenes and cultures. Considering the flexibility of space as an integral part of its curatorial practice, the project embraces mobility in order to find optimal venues for each of its exhibition.
On its online platform, articles, interviews and videos are presented to introduce the artists featuring our exhibitions. In addition to the information directly associated with our programme, the news section includes information about inspiration, influences and interests in the fields of art, culture and society. | http://nicoletticontemporary.com/informations/ |
Return to Artists of Utah site
"Giving everyone their fifteen bytes of fame".
GO TO
PAGE 1
PAGE 2
PAGE 3
PAGE 4
PAGE 5
September 2002
Page 6
"Covered in Sawdust" . . . continued from page 3
KK:
I was interested to read an article about your work in the Salt Lake Tribune (11/2/98), in which Joan O’Brien commented, “Long before David Delthony moved to southern Utah, he was sculpting furniture reminiscent of the red-rock country.” Were you influenced by this landscape?
DD:
As Joan aptly indicated, I had developed my visual language long before seeing the awe-inspiring formations in southern Utah, but obviously my Sculptured Furniture was originally influenced by a similar perception of nature and biomorphic forms.
I have also been inspired by sculptors like Moore, Brancusi and Arp as well as by other woodworking artists like Wendell Castle and Sam Maloof.
KK:
You refer to your art as “Sculptured Furniture”. Could you describe your work for us?
DD:
As a furniture artist, I sculpt with the material wood, investigating interior space and defining exterior boundaries. My work focuses on the dialog between functional and aesthetic values as I try to incorporate and balance these in each object. It is important for me to utilize my knowledge of the material wood and of ergonomics to create organic forms which engage the user through their function and my own personal visual language. As I work within the syntax of fine furniture, I endeavor to infuse my work with an artistic sensuality, embracing visual and tactile senses and encouraging the human contact which is essential to my vision as an artist.
KK:
The “art world” often ignores or attempts to discredit what could be termed functional art or artistic furniture. What are your feelings about this?
DD:
It seems that in our contemporary society, when an object takes on any type of functional characteristic, it loses recognition as an artistic expression.
If we look back to prehistoric cultures, especially African, we can see that the origins of furniture were closely related to rituals, mysticism, and artistic expression. The first chair or stool in early cultures was created by artists and craftsman for the tribal leader to elevate him and symbolically represent his position in the community. Following this example of the “chair”, the correlation between function and artistic creation is even obvious in more modern monarchial societies where a throne evolved out of the original symbolism of the stool. In my own work, I have even done a series of “King’s Chairs” which relate to this theme. Unfortunately for many people though, the idea of “functionality” seems to be a negative characteristic in the art world. From my own experience though, adding a functional at- tribute without sacrificing artistic values can be an elevating and dynamic feature as well as an extremely challenging task.
KK:
I agree with you entirely. To me one of the strengths of your organic and sensuous forms is that they are successful as artistic statements and as functional entities. Neither aspect is sacrificed or relegated to a secondary position. What do you see as the responsibilities of the artist as opposed to the craftsman?
DD:
Most important would be the unique sensibility of the artist and the successful expression of this sensibility in his personal style or visual language. A furniture maker who did not develop this unique sensibility and made purely utilitarian furniture would not fulfill this requirement. Whether the expression of the artists sensibility is “successful” or not is extremely subjective, but it should not to be measured by commercial or financial acclaim. Decisive is that the finished work somehow reach out, touch and evoke a response in the observer.
KK:
There are pieces of fine workmanship, made by excellent craftsman who do have their own style. Are there other features which can also be used when examining the questions of art, art furniture, and craft?
DD:
I think there is another characteristic which separates the artist from the craftsman and that is the artist’s emphasis on the creative process. For the artist, the creative idea is actually the driving force and takes precedence over function, craftsmanship or material. He may be bound by a utilitarian purpose or the use of a specific material, but more decisive for the artist is that he express his idea through a creative process. Functional values and mastery of the craft, although often central in furniture art, serve as tools for the realization of the artistic expression and should not become the end product. Unfortunately in certain disciplines, these very tools have become an impediment to being accepted in the world of art.
KK
: As someone who has been closely associated with galleries and exhibition venues, I realize how few opportunities there are to display functional art as opposed to what some would term “pure” art. Do you think it possible that we develop an expanded consciousness for the concept of art and that functional art and art furniture be included in this definition?
DD:
Yes, there are definite developments in this direction, especially within the furniture arts. An increasing number of educational institutions are combining high quality instruction in the crafts within their art curriculum. There are also a number of important societies, organizations and publications (for example, the American Craft Council and the Furniture Society) which promote the functional arts and where appropriate, their inclusion in the art world. The recognition of artists like Dale Chihuly, Betty Woodman, Peter Voulkos and Wendell Castle, who had their origins in functional art, contributes to the public awareness of functional art, but still, when seen as a whole, there is a lot of educational work yet to be done.
After his upcoming exhibit at the Utah Arts Council’s Rio Gallery, David's work can be seen in southern Utah in Torrey Home & Garden and Brigitte’s can be found at Gallery 24 as well as in their own Escalante studio.
This interview was conducted by Karen Kesler, designer, painter, object-maker and partner in the recently opened Gallery 24 in Torrey, Utah. All artwork is copyright David Delthony and may not be reproduced without written permission.
David Delthony can be contacted at (435) 826- 4631 or [email protected]
Gallery Stroll Preview-- Salt Lake City
by Mariah Mann
The Salt Lake Arts Center, located at 20 South West Temple is currently hosting two exhibits. PERSPECTIVES OF CONFLICT feature's work by local artists. The featured artist, Kazuo Matsubayashi, is a professor of Architecture at the University of Utah who has added to the scenery of Utah with works including ASTEROID LANDED SOFTLY, which currently resides at the Gallivan center. Supporting artists include but are not limited to Fletcher Booth, John Erickson, Suzanne Fleming, and Gary Pickering. All types of media are presented in this show.
Artist Victor Kastelic displays his lifetime achievements in a show titled CLOUD BURST. Kastelic spent 12 years in Italy drawing with pencil and oils. For this show he included three hundred small drawings and a few large murals painted on the art center walls. Both shows will hang until October 5th.
Art Access Gallery, 339 West Pierpont explores the subject of Spirituality. The show, entitled SPIRITUALITY AND COMPTEMPORY ART, features artists Phil Richardson, Trevor James Bazil, ViviAnn Rose, James Charles and Gregory Parascenzo. This diverse group of artists present their personal exploration and interpretation of spirituality through sculpture, paintings and photography. This show challenges and reinforces spirituality. Gregory Parascenzo explores the ritual side of spirituality with his paintings using the relationship of the artist and the act of creating. James Charles uses his life experiences being educated in Catholic schools and in a naval hospital to paint his figurative forms. To VivAnn Rose "Nature is her church and Art is her religion. Her photography's presents the spiritualness inherent in nature. Five artist, five different styles one large idea.
Letter From the Editor -- Artists of Utah
Now More Than Ever
WARNING:
This "letter" was conceived late at night, the time of dreams, delusions, and false senses of grandeur. It was not revised in the rational light of day and so may not be suitable for all audiences.
Click here to read.
Layne Meacham
continued from page 3
Despite the varied influences in his art, Meacham seems a uniquely Utah painter, proving once-again that the rich variety of the Utah landscape inspires and infiltrates more than just the landscape painters. His surfaces reminds one of relief images done by Satellite of the San Rafael. The cracked layers of his impasto in works such as “Blue Mound People” (9’x14’) immediately call to mind a dried out desert flood plain in mid-August heat. And the artist’s scratchings and indentations in these once wet-now dry surfaces are like the dinosaur tracks that are scattered across Utah’s skin – a reminder that this unique beauty was once altogether different in its make-up.
Though many of the works in the exhibit remain purely abstract explorations of the pleasures of creating, referential symbols do appear. “Navajo House Blessing” remains in the same colorfield and process-created works as the others but hints at a yellow sun, hills, trees and hogans. A sense of landscape, of directions, pervades many of the works. “Romanian Gyspy Trail” speaks of migrations and movements. “Land of Sirens” a huge mural piece, has a warm mystical feel to it, the magical quality of the Mediterranean. A soft magenta floats like a siren’s call over a pale sea of green -- like drowning in a late summer romance.
Meacham has done his forefathers proud. He has successfully shown us that the tactile, sensual quality of making art is primary and that color and form and texture are sufficient to move the soul.
Layne Meacham "You Like What You Know" will be showing until September 15th. The Artspace Forum Gallery is located on the corner of 5th West and 2nd South in SLC -- across from Gateway.
Additions to the Site -- Artists of Utah
Collector's Corner -- Name That Piece
Art Historians of Utah Unite!!! Demand has created a new section in Artists of Utah's Collector's Corner. Recently introduced, "Name That Piece" provides a venue for Collector's to learn more about the pieces they own. Collector's are invited to submit their questions about the authorship, date, or other information about their pieces, hoping to get input from the community at large. Take a look and see if you can help out or put something out there with your question. Is the piece you inherited from your grandmother an early LeConte Stewart? Find out. Visit the
Collector's Corner. | http://artistsofutah.org/newsletter/sep02/page6 |
UPDATE – London Art Fair organisers Immediate Live have postponed the London Art Fair from the planned January 2022 dates to April 20th-24th, 2022.
Their statement says: “In light of the continued uncertainty and disruption caused by the recent Omicron variant the decision has been made to postpone London Art Fair 2022 from 19th-23rd January to 20th-24th April at the Business Design Centre. Whilst we could have continued with the event as planned within government guidelines, we are keen to deliver the best possible Fair for our galleries, sponsors, partners and visitors. We have worked with our exhibiting galleries from the UK and internationally to make this decision, prioritising the wellbeing of all of our attendees. We are dedicated to delivering the same exceptional content that we have been planning for the last 12 months and look forward to you experiencing the line-up of talks, workshops and curated displays that we have scheduled”.
Details of the new preview dates will be shared in due course.
The Fair connects both accomplished and aspiring collectors with the best galleries from across the world, providing them with a unique opportunity to discover outstanding modern and contemporary art, from prints and editions to major works by internationally renowned artists from the 20thcentury to today.
In addition to over 100 exhibiting galleries, the Fair returns with a number of highly regarded curated spaces:
- Platform: Each year the Fair’s curated section Platform, features invited galleries presenting well-known, overlooked and emerging artists whose work aligns to a single distinct theme. Curated by Candida Stevens, for 2022 Platform will explore the theme of Music and its part in contemporary visual art. Visual art and music share common cultural influences, including societal, political and technological. With contemporary craft and contemporary art increasingly occupying a shared space in both exhibitions and collections, Platform will look at the range of music inspired by visual art being made today.
- Photo 50: Photo50, the Fair’s annual exhibition of contemporary photography, will be curated by art historian and curator Rodrigo Orrantia. No Place is an Island presents a selection of works by British and UK-based artists, interested in questioning the idea of an island and the associated concepts of isolation and isolationism. Echoing John Donne’s celebrated No Man is an Island, this exhibition examines what it means to be an island in the contemporary moment. As we slowly emerge from the Covid-19 pandemic to a post-Brexit Britain, the topical issues of our day (climate emergency, mass migration, travel and movement restrictions) confront us with the reality of an interconnected world.
- Art Projects: Art Projects returns for its 18th edition at the Fair to showcase the freshest contemporary art from across the globe. Pryle Behrman, art critic and member of the Art Projects Selection Committee, will once again return for 2022 to introduce this section, and as curator of The Screening Room, which is hosted as part of Art Projects as an accompanying programme of collaborative video and new media initiatives.
See the website for ticketing arrangements for the rescheduled dates. | https://www.artsandcollections.com/the-return-of-the-london-art-fair/ |
ROB pART II: The Robertson Park Artists' Studio exhibition, 2002.
The group of French painters, later to be labelled the Impressionists, started the artist led alternative art world movement in 1874. They banded together to extend their café society wine fed rhetoric and 'plein air' painting habits into the concept of artist run exhibitions. The marginalised, overlooked, confrontational or just out of step artist groupings which followed in the paths of the French open air painters changed rapidly from a trickle of mateship artists, who worked and exhibited together, to what became the mainstream of modernism.
Artists' studios doubling as occasional exhibiting spaces and also teaching spaces were a common feature of European, and even early Australian, modernism. Julian Ashton in Sydney and George Bell in Melbourne were exemplars of professional artists whose commitment to their particular way of seeing the world and of picturing it drew enthusiastic acolyte followings. But nowadays such Master class concepts are quite foreign to the contemporary artist and art student who want to develop their own identity and particular artistic style as soon as possible. This is especially so now that the contemporary art school is often housed, however awkwardly, within the confines of the modern university with all its multi-layered management structures and dare to be different market driven incentives.
One result of the huge social and economic changes within the arts has been to create a proliferation of artist driven initiatives. Sometimes artists, particularly 'emerging artists', group together from the same sense of need for a supportive philosophical environment as did the Impressionists, Dadaists, Expressionists or Heidelberg painters.
Sometimes the development of group activities is primarily the result of economic necessities, of escalating rents, reduced state funding, the increase of eligible applicants and the GST.
There also are many examples of group initiatives that result from the need to use expensive new technologies, as well as disaffection with existing gallery options. In Sydney, one aptly named initiative is a web based exhibiting gallery 'Briefcase', which subverts the tyranny of having to pay studio or gallery rent; another well named concept was a brief lived 2001 'Squatspace'. No prizes for guessing how it occurred.
In addition to needing a place for emerging artists or submerging artists to work or exhibit there also is the drive, felt by many community minded artists, to find and reach out to new audiences.
Perth has not been isolated from these various artistic manoeuvres, some short lived, some with state support, others endowed, as are many artists, with more enthusiasm and talent than business acumen. At a rough count I can think of around seven or eight independent artist shared spaces currently operating in the Perth area.
The Robertson Park Artists' studio in Halvorsen Hall, plumb in the middle of the South lawn of Robertson Park, is a spacious, well lit artists' studio space leased from the town of Vincent by the five artists currently showcasing their work there.
Although in this excellent setting the group is a relatively recent addition to the Western Australian art scene, it actually developed out of an earlier studio group, The Wellman Street group, which was established by a collective of then recent graduate artists in 1992. When that studio lost it's space in March 2000, the Hyde Park Precinct group and the supportive Mayor of Vincent assisted the group in finding a new space, in the ex-City of Perth Band practice space in Halvorsen Hall. The first studio exhibition was held two years ago and was a remarkable success.
The Studio artists for 2002 are Umberto Alfaro, Paul Carstairs, Frances Dennis, Graham Hay and Anne-Maree Pelusey, all of whom share the experience of working in the studios and running special community art classes.
The nature of their individual art practice is by no means a programmatic exercise demonstrating some shared ideological stance. In many respects their shared vision is for a convivial commitment to art making, but the artistic routes which they follow are many and varied. When meeting with the group I asked if there was any sharing of artistic intentions, for I had discerned a religious sensibility in the works of both Umberto Alfaro's Latin American sourced forms and Paul Carstairs carefully constructed ready-mades. Not so they said. Well, what about the relationship between Frances Dennis's extraordinary manikins and Graham Hay's biologically innovative paper clay forms? Their comfortable working relationship was stressed, but no aesthetic inbreeding admitted. Anne-Maree Pelusey's intensely coloured landscapes remained untouched by any hint of artistic border crossing or inter studio borrowing.
And yet, after a 40-year career teaching in schools of art, I know just how pervasive some influences can be in a shared work situation. What really happens is that an energy flow moves around the various members of a group and helps to keep up the level of shared commitment to creative activity.
This energy is often at its most highly charged just before an exhibition. The work, which I saw in the studio, some weeks before the exhibition opened will change and grow, and the artistic ideas of the members of the group will undoubtedly respond to the responses of the others.
The old idea of the ivory tower of art is a myth. Truly, no artistic person is an island.
The influences crowd in: from the art world outside the studio doors, from the street, from dead artists known only through the journals, and even from the inadvertently lying colour reproductions in art books and magazines. Out of this million and more, 'to whom it may concern' messages, creative individuals forge their own artistic identities. These identities are constantly changing.
Umberto Alfaro came from Chile, and this background is referred to in the themes of his papier-mâché sculptures. He doesn't try to escape the Spanish cultural influence; he works with it. He tells me, through his work, about the gold and the blood, the grandeur and the cruelty of the Spanish conquest of South America and why his sculpture of the Conquistadors looks the way it does. As he said, the difference from colonisation by other European nations is tremendous, and a new Latino race was created by conquest.
Frances Dennis has created her own personal race of strange creatures. They are almost cartoon grotesques and remind me of Daumier's small three- dimensional caricature figures of legal identities. They live in clusters of tightly interrelating humanoids and present a tragi-comic view of the world.
Anne-Maree Pelusey's smaller paintings are a reminder that the landscape rarely leaves Australian art for long. The paint brushstrokes have as intensely bright a life as the flashing wing of a speeding home seeking Lorikeet, but they remain earthbound and somehow geologically grounded in a sense of timeless history.
Graham Hay is the most international artistic wanderer of the group. His work has been seen in several major paper and clay exhibitions in Europe, UK, Australia and New Zealand. He has been unremitting in his pursuit of technical control of his chosen three -dimensional mediums. In his recent work he has achieved a cohesiveness of craft and concept, whilst not completely abandoning his earlier ideas. One recurrent theme seems to be an analysis of institutions, and with his work at the Robertson Park Studio collective he is again involved with an institution, albeit an artist run initiative.
I hope that there will be many more opportunities to watch the development of the Robertson Park Artists' studio, as the signs are all good.
To read about the studio click here. | https://www.robparkart.info/weston2002.html |
Art: You shall never see all of me, ladies and gentlemen You will always only see that part towards which you have turned. I am a mirror. Look at me–you see yourself.
The woman artist: The longer I look at you, the less I recognize myself. I can’t say that you are my mirror.
To come to the point straight away: I’m in favour of museums of women’s art. I would like to see not only one, but many. I would like to see such museums as places where exhibitions and research take place, where the lost works of women artists are traced, catalogued, evaluated and shown, where contemporary art by women is bought and exhibited. I would like to see them as educational institutions where the conditions for the creation of art are studied, and the concept of life is discussed which is reflected in this art.
I consider women’s museums not just a temporary measure which should and can increase the opportunities for women artists, I also consider them a socio-political and above all an artistic necessity. I’m concerned not only about greater justice–in regard to the distribution of funds, for instance, 90% of which end up in the pockets of men–I’m concerned about art.
I would like at first to talk about the relationship between art and social norms and then take up the question of how we are to tackle the androgyny which is asked of art. Demands for androgyny in the production of art and ‘objective’ of criteria of quality, are the main arguments raised against the separation or ghettoization of womens art in special museums or exhibitions.
But art is like the society in which it is produced. It reflects the norms which are valid in that society regardless of whether these norms appear to have been imposed from the outside or whether they have been arrived at democratically. Art in itself is nothing. Art has no absolute quality which only reveals itself in more or less ‘pure’ form. Rather art is both judgment and claim. Artists are not prophets, they do not ‘sense’ something in advance, they create, they produce. As a result art is not only a reaction to social phenomena, power structures or changes, it is also a participant in these changes, in fact art itself causes changes.
Since art is indivisible from society at large it appears in manifold and contradictory forms. It reflects the aspirations, hopes and identities of the time in which it comes into being. It is not manufactured out of a ‘purely artistic-objective’ interest, nor is it bought, admired, shown and preserved as a result of such interest.
Thus once art has been commissioned, acquired and exhibited, (that is once it has ‘arrived’), it appears to illustrate what is objective.
In art and by the means of art, all that a society accepts as normal and generally binding is affirmed and declared valid.
This applies even to art which understands itself as oppositional, once it receives official blessing as ‘valid work’. On the other hand, works which do not conform to what is considered valid are denied the attribute of art. They are considered either trivial or barbaric, i.e. ‘bad’, depending on whether their content is judged harmless or a threat to the system. If harmless, the opponents are ignored. If dangerous they are fought against and their works are prohibited. While the cultural policy of dictatorships are rigidly restrictive, in democracies non-conformist art is ignored or denied public exposure through economic or market-specific pressures.
We known that women are underrepresented in all decision-making positions, that they are a majority treated like a minority. This absence is hardly registered, is seldom perceived as deprivation. If it enters consciousness at all, it is at best regretted half-heartedly, more often accepted as a law of nature. But perceptive individuals, mostly women (men continue to exhibit astonishing mental lethargy in this area) know that this is the result of manipulation practiced for thousands of years.
Small wonder then that in its portrayal of human beings art shows women only as objects, as something ‘outside’ and different.
Women working as artists find themselves in a situation which is in total contradiction to what is generally expected of them. Excluded everywhere else from making decisions for themselves they are expected to act autonomously and rely on their own judgement. Through art they must avail themselves to a notional artistic “freedom” If they are unable to fully use this “freedom”, they are criticised for being weak, despite the fact that radical subjectivity, has long since become the programmatic principle of art. But female subjectivity and experience of self cannot be reflected in works of art, not where it counts.
At best, femaleness is considered as something opposite to maleness, whereby the latter remains the standard of what is deemed human.
Exhibitions, museums and art schools convey and confirm the perception of the marginality and insignificance of women. As a result both sexes are offered different reference points. This will continue as long as it is not acknowledged that women live in a different culture from men and that they experience this culture in a different way. Women find themselves in an alien culture, in a culture that has been determined by men. Men find themselves in their own culture. For this reason cultural institutions should present to the world at large the female experience of culture, that is the experience of alienation and exclusion, as much as they should present the work of women artists.
The work of women artists will be considered irrelevant as long as society does not grant women the right to set standards, that is as long as women are defined as being different, as being outside the norm. If women are considered sexually, biologically and emotionally different, then the art made by women cannot gain access to the level where the generally valid is shown, where the established Philosophy of life is communicated and made palatable. In other words as long as something does not count because it is made by a women, her art does not count either, at least not as art. The more women are able to achieve authority and recognition, however, the more they will take up decision-making positions in society and influence established standards, the less they can be defined as being different, as being ‘outside’. Although museums of women’s art might seem to maintain that ‘outside’-ness or otherness of women in relationship to the “asexual” art world, I feel that women’s museums can offer women a chance to take more control of the display and reception of their work.
Ever since the appraisal of art has been preserved in the form of art history, the attribute of masculinity has been the highest compliment. Noble or heroic manliness, virile strokes of the brush, manly enthusiasm–all the most highly rated qualities are identified with masculinity. To this day the creative talent itself is considered a specifically male talent. Women artists who disregard these assertions are considered either incompetent or unfeminine.
At the turn of the century men complained about the effeminization of art. A work of art that was not considered successful or worthy of critique was said to have feminine qualities: weakness, indecisiveness, sentimentality and histrionics. I have never heard of an art work being criticised for being too masculine!
Women artists, it seems, must prove androgenous, however. What women are supposed to prove when they claim that their art is androgynous is not gender symmetry reflected in the work but what is generally called the ‘masculine part’. Many of the women who demand androgyny or even sexual anonymity in art distance themselves from women’s art, women’s exhibitions and feminine aesthetics. They see the assertion of themselves as women rather than sexually anonymous artists restrictive, encouraging the ghettoization of women artists who show together as women.
Women’s exhibitions, women’s museums, women’s art–in combination with the word women the terms exhbition, museum and art are robbed fo their reputation.
The reason why the negative connotation of women’s art is disturbing is because it unmasks and disproves the illusion of neutrality in regard to gender, an allusion which has been sucessfully maintained by the male view of the world. Art can be produced either by men or by women, there is no such thing as a sexually neutral human being. Thus the hint of the disreputable and scandalous associated with women’s art, women’s museums etc. is the best proof for the necessity of these projects.
One of the questions which could be taken up in a women’s museum with a research facility is to what extent innovation in art is possible for women as long as innovation means nothing than a new version of the male self-image. The difficulty is that among women themselves there is no consensus whether there should be women’s museums and women’s exhibitions in the first place. In the introduction to the catalogue of the exhibition “Women Artists 1577-1977”, Ann Sutherland Harris, the organizer of the show, expressed the hope that projects of this kind would become superfluous in the future, a wish that is hard to understand in retrospect. Would an organizer of an exhibit of artists of, say, the ‘Blue Rider’ group hope that such shows would become superfluous in later years? Would he/she not want the opposite, namely more and larger exhibitions of these works?
The Holladay Collection, a collection of works by women artists since the thirties, has recently been opened to the public at the National Museum of Women in the Arts in Washington. The reactions to this museum have been typical for exhibitions of women artists in general. The evaluation of the works in this private collection turned into an evaluation of women’s art as such. But a collection of this kind can no more claim to represent, say, contemporary art in general, which in the case of the latter would never be made. It is assumed, however, that women’s museums can erase century-old prejudices and slights, and that comparisons with the great, traditional collections and museums can seriously be made.
As art by women has appeared so far only when it is shown separately, critics feel entitled to draw conclusions about the artistic abilities of women in general. A womens museum also leads to the assumption that its mere existence is reason enough to delegate acquisition and presentation of women’s art to it. Since it can be seen there, it need not be shown anywhere else. Thus the women’s museum is seen as a place of separation–a ghetto. The founding of a museum for “Brucke” painters, on the other hand, of course does not imply that these artists are not collected or shown elsewhere. But it is precisely this double standard which applies to every women’s exhibition. It is supposed to express the desire for separation. But separate exhibitions only reflect the fact that art by women has always been treated differently, that it has always been ghettoized. Women and their art have always been made invisible by male-oriented cultural thinking. Women’s exhibitions are not indications of the female desire to be separate but a reaction to male-dominated restrictive policies.
Within the environment of a women’s museum women artists could work without being subjected to the pressure of having to prove their femininity or deny it (or both) at the same time. Historians could research feminism without having constantly to justify it. And the museum would show not only one or two works by women, as is usually the case, but exclusively works by women. These works could be seen for what they are without being scrutinized for what is feminine or unfeminine about them. In such institutions students and teachers could concentrate on the questions and problems important to them without having to feel like outsiders. They could build up their strength because competent women would not be isolated in a male-dominated environment but in an atmosphere which promotes the development and communication of abilities and self-determination. | https://www.artofspirit.org/progress-or-ghetto/ |
The Broad is a contemporary art museum founded by philanthropists Eli and Edythe Broad on Grand Avenue in downtown Los Angeles. Designed by Diller Scofidio + Renfro in collaboration with Gensler, the museum offers free general admission. The Broad is home to more than 2,000 works of art in the Broad collection, which is among the most prominent holdings of postwar and contemporary art worldwide, and presents an active program of rotating temporary exhibitions and innovative audience engagement. The 120,000-square-foot building features two floors of gallery space and is the headquarters of The Broad Art Foundation’s worldwide lending library, which has actively loaned collection works to museums around the world since 1984. Since opening in September 2015, The Broad has welcomed more than 1.7 million visitors.
(Via: The Broad)
Closed: Mondays
Admission
Free (Donations Accepted)
Architect Arata Isozaki designed MOCA Grand Avenue in 1986 with classical architecture and Los Angeles popular culture in mind. Today this location hosts the museum's main galleries, Lemonade café, the flagship location of the MOCA Store, and staff offices. The museum's exhibits consist primarily of American and European contemporary art created after 1940. Since the museum's inception, MOCA's programming has been defined by its multi-disciplinary approach to contemporary art. (Via: MOCA)
Closed: Tuesdays
Admission
General Admisison | $15
Students | $8
Seniors 65+ | $10
Children Under 12 | Free
Members | Free
A former police car warehouse in L.A.'s Little Tokyo Historic District, renovated by the noted California architect Frank Gehry, The Geffen Contemporary at MOCA (formerly The Temporary Contemporary) opened in 1983. This location offers 40,000 square feet of exhibition space. (Via: Geffen)
Closed: Tuesdays
Admission
General Admisison | $15
Students | $8
Seniors 65+ | $10
Children Under 12 | Free
Members | Free
The J. Paul Getty Museum seeks to inspire curiosity about, and enjoyment and understanding of, the visual arts by collecting, conserving, exhibiting and interpreting works of art of outstanding quality and historical importance. To fulfill this mission, the Museum continues to build its collections through purchase and gifts, and develops programs of exhibitions, publications, scholarly research, public education, and the performing arts that engage our diverse local and international audiences. All of these activities are enhanced by the uniquely evocative architectural and garden settings provided by the Museum's two renowned venues: the Getty Villa and the Getty Center. The J. Paul Getty Museum at the Getty Center in Los Angeles houses European paintings, drawings, sculpture, illuminated manuscripts, decorative arts, and photography from its beginnings to the present, gathered internationally.
(Via: The Getty)
Closed: Mondays
Admission
Free (Donations Accepted)
The Getty Villa is one of two locations of the J. Paul Getty Museum. Located at the easterly end of the Malibu coast in the Pacific Palisades neighborhood of Los Angeles, California, USA, the Getty Villa is an educational center and museum dedicated to the study of the arts and cultures of ancient Greece, Rome, and Etruria. The collection has 44,000 Greek, Roman, and Etruscan antiquities dating from 6,500 BC to 400 AD, including the Lansdowne Heraclesand the Victorious Youth. The UCLA/Getty Master’s Program in Archaeological and Ethnographic Conservation is housed on this campus. The collection is documented and presented through the online GettyGuide as well as through audio tours. (Via: The Getty)
Closed: Tuesdays
Admission
Free (Donations Accepted)
The Hammer Museum champions the art and artists who challenge us to see the world in a new light, to experience the unexpected, to ignite our imaginations, and inspire change. The Hammer understands that art not only has the power to transport us through aesthetic experience but can also provide significant insight into some of the most pressing cultural, political, and social questions of our time. We share the unique and invaluable perspectives that artists have on the world around us. A vibrant intellectual and creative nexus, the Hammer is fueled by dynamic exhibitions and programs—including lectures, symposia, film series, readings, and musical performances—that spark meaningful encounters with art and ideas. (Via: The Hammer)
Closed: Mondays
Admission
Free (Donations Accepted)
The mission of the Pasadena Museum of California Art is to present the breadth of California art and design through exhibitions that explore the cultural dynamics and influences that are unique to California. The Pasadena Museum of California Art was founded to honor the rich and eclectic artistic and cultural history of California. With a mission that encompasses both historical and contemporary art, the Museum celebrates in equal measure the plein-air painters inspired by the region’s mountains and deserts—as well as today’s artists who use the PMCA’s building as canvas. As a non-collecting institution, the Museum develops temporary exhibitions with independent curators, which allows for flexible and dynamic programming.
(Via: PMCA)
Closed: Mondays & Tuesdays
Admission
Adults | $7
Seniors (62+) | $5
Students | $5
Children (12 & Under) | Free
The Museum of Latin American Art expands knowledge and appreciation of modern and contemporary Latin American art through its Collection, ground-breaking Exhibitions, stimulating Educational Programs, and engaging Cultural Events. | https://www.theartdistricts.com/los-angeles-museums |
This article's lead section may be too short to adequately summarize the key points. (September 2016)
Feminist art criticism emerged in the 1970s from the wider feminist movement as the critical examination of both visual representations of women in art and art produced by women. It continues to be a major field of art criticism.
Linda Nochlin's 1971 groundbreaking essay, "Why Have There Been No Great Women Artists?", analyzes the embedded privilege in the predominantly white, male, Western art world and argued that women's outsider status allowed them a unique viewpoint to not only critique women's position in art, but to additionally examine the discipline's underlying assumptions about gender and ability. Nochlin's essay develops the argument that both formal and social education restricted artistic development to men, preventing women (with rare exception) from honing their talents and gaining entry into the art world. In the 1970s, feminist art criticism continued this critique of the institutionalized sexism of art history, art museums, and galleries, as well as questioning which genres of art were deemed museum-worthy. This position is articulated by artist Judy Chicago: "...it is crucial to understand that one of the ways in which the importance of male experience is conveyed is through the art objects that are exhibited and preserved in our museums. Whereas men experience presence in our art institutions, women experience primarily absence, except in images that do not necessarily reflect women's own sense of themselves." In 1996 Catherine de Zegher curated the groundbreaking show of women artists Inside the Visible, that travelled from the ICA Boston to the Whitechapel in London, using the theoretical paradigmatic shift by the artist, philosopher and psychoanalyst Bracha L. Ettinger: the matrixial gaze, space and sphere. Bracha L. Ettinger wrote the introductory theoretical framework, art historian Griselda Pollock contextualised Ettinger's theory and de C. Zegher's curatorial project, in what became since then a cornerstone in feminist art history. In 2000, C. de Zegher organised a conference to look at Linda Nochlin's challenging question thirty years after. Highly significant female curators of the time, like Griselda Pollock, Lisa Tickner, Molly Nesbit, Ann Wagner, Emily Apter, Carol Armstrong and others presented the feminist art criticism in whose origin and revolution they took active part. Following this, Griselda Pollock published her Virtual Feminist Museum book (2007).
Nochlin challenges the myth of the Great Artist as 'Genius' as an inherently problematic construct. 'Genius' “is thought of as an atemporal and mysterious power somehow embedded in the person of the Great Artist.” This ‘god-like’ conception of the artist's role is due to "the entire romantic, elitist, individual-glorifying, and monograph-producing substructure upon which the profession of art history is based." She develops this further by arguing that "if women had the golden nugget of artistic genius, it would reveal itself. But it has never revealed itself. Q.E.D. Women do not have the golden nugget of artistic genius." Nochlin deconstructs the myth of the 'Genius' by highlighting the unjustness in which the Western art world inherently privileges certain predominantly white male artists. In Western art, ‘Genius’ is a title that is generally reserved for artists such as, van Gogh, Picasso, Raphael, and Pollock—all white men. As recently demonstrated by Alessandro Giardino, when the concept of artistic genius started collapsing, women and marginal groups emerged at the forefront of artistic creation. Griselda Pollock, following closely the psychoanalytical discoveries of French theorists Julia Kristeva, Luce Irigaray and mainly Bracha L. Ettinger consistently brought the feminist psychoanalytic perspective into the field of art history.
Similar to Nochlins’ assertions on women's position in the art world, art historian Carol Duncan in the 1989 article, “The MoMA Hot Mamas”, examines the idea that institutions like the MoMA are masculinized. In MoMA's collection, there is a disproportionate amount of sexualized female bodies by male artists on display compared to a low percentage of actual women artists included. According to data accumulated by the Guerrilla Girls, “less than 3% of the artists in the Modern Art section of New York’s Metropolitan Museum of Art are women, but 83% of the nudes are female”, even though “51% of visual artists today are women.” Duncan claims that, in regards to women artists:
In the MoMA and other museums, their numbers are kept well below the point where they might effectively dilute its masculinity. The female presence is necessary only in the form of imagery. Of course, men, too, are occasionally represented. Unlike women, who are seen primarily as sexually accessible bodies, men are portrayed as physically and mentally active beings who creatively shape their world and ponder its meanings.
This article narrows its focus on one institution to use as an example to draw from and expand on. Ultimately to illustrate the ways in which institutions are complicit in patriarchal and racist ideologies.
Women of color in the art world were often not addressed in earlier feminist art criticism. An intersectional analysis that includes not only gender but also race and other marginalized identities is essential.
Audre Lorde’s 1984 essay “The Master’s Tools Will Never Dismantle The Master’s House,” briefly addresses a vital dilemma that artists who are women of color are often overlooked or tokenized in the visual arts. She argues that "in academic feminist circles, the answer to these questions is often, ‘We did not know who to ask.’ But that is the same evasion of responsibility, the same cop-out, that keeps Black women's art out of women's exhibitions, Black women's work out of most feminist publications except for the occasional ‘Special Third World Women's Issue,’ and Black women's texts off your reading lists.” Lorde’s statement brings up how important it is to consider intersectionality in these feminist art discourses, as race is just as integral to any discussion on gender.
Furthermore, bell hooks expands on the discourse of black representation in the visual arts to include other factors. In her 1995 book, Art on My Mind, hooks positions her writings on the visual politics of both race and class in the art world. She states that the reason art is rendered meaningless in the lives of most black people is not solely due to the lack of representation, but also because of an entrenched colonization of the mind and imagination and how it is intertwined with the process of identification.: 4 Thus she stresses for a “shift [in] conventional ways of thinking about the function of art. There must be a revolution in the way we see, the way we look,": 4 emphasizing how visual art has the potential to be an empowering force within the black community. Especially if one can break free from "imperialist white-supremacist notions of the way art should look and function in society.": 5
Feminist art criticism is a smaller subgroup in the larger realm of feminist theory, because feminist theory seeks to explore the themes of discrimination, sexual objectification, oppression, patriarchy, and stereotyping, feminist art criticism attempts similar exploration.
This exploration can be accomplished through a variety of means. Structuralist theories, deconstructionist thought, psychoanalysis, queer analysis, and semiotic interpretations can be used to further comprehend gender symbolism and representation in artistic works. The social structures regarding gender that influence a piece can be understood through interpretations based on stylistic influences and biographical interpretations.
Laura Mulvey's 1975 essay, "Visual Pleasure and Narrative Cinema" focuses on the gaze of the spectator from a Freudian perspective. Freud's concept of scopophilia relates to the objectification of women in art works. The gaze of the viewer is, in essence, a sexually charged instinct. Because of the gender inequity that exists in the art sphere, the artist's portrayal of a subject is generally a man's portrayal of women. Other Freudian symbolism can be used to comprehend pieces of art from a feminist perspective—whether gender specific symbols are uncovered through psychoanalytic theory (such as phallic or yonic symbols) or specific symbols are used to represent women in a given piece.
Are the women depicted in an artistic work realistic portrayals of women? Writer Toril Moi explained in her 1985 essay "'Images of Women' Criticism" that "reflectionism posits that the artist's selective creation should be measured against 'real life,' thus assuming that the only constraint on the artist's work is his or her perception of the 'real world.'"
The 1970s also saw the emergence of feminist art journals, including The Feminist Art Journal in 1972 and Heresies in 1977. The journal n.paradoxa has been dedicated to an international perspective on feminist art since 1996.
Important publications on feminist art criticism include:
In 1989, the Guerilla Girls' poster protest of the Metropolitan Museum of Art's gender imbalance brought this feminist critique out of the academy and into the public sphere.
In 2007, the exhibit "WACK! Art and the Feminist Revolution" presented works of 120 international artists and artists’ groups at the Museum of Contemporary Art, Los Angeles. It was the first show of its kind that employed a comprehensive view of the intersection between feminism and art from the late 1960s to the early 1980s. WACK! “argues that feminism was perhaps the most influential of any postwar art movement-on an international level-in its impact on subsequent generations of artists.”
Rosemary Betteron's 2003 essay, “Feminist Viewing: Viewing Feminism”, insists that older feminist art criticism must adapt to newer models, as our culture has shifted significantly since the late twentieth century. Betterton points out:
Feminist art criticism is no longer the marginalized discourse that it once was; indeed it had produced some brilliant and engaging writing over the last decade and in many ways has become a key site of academic production. But, as feminist writers and teachers, we need to address ways of thinking through new forms of social engagement between feminism and the visual, and of understanding the different ways in which visual culture is currently inhabited by our students.
According to Betterton, the models used to critique a Pre-Raphaelite painting are not likely to be applicable in the twenty-first century. She also expresses that we should explore ‘difference’ in position and knowledge, since in our contemporary visual culture we are more used to engaging with "multi-layered text and image complexes" (video, digital media, and the Internet). Our ways of viewing have changed considerably since the 1970s. | https://demo.azizisearch.com/starter/google/wikipedia/page/Feminist_art_criticism |
Untitled provides exhibition and production infrastructure and develops new programs with the needs of both Calgarians and Calgary-based emerging artists in mind. The continuing education of emerging artists is an important aspect of our central mandate of supporting emerging contemporary art in Calgary.
Check out the following resources in order to support competitiveness and professionalism when applying for artist opportunities.
UAS endeavors to exhibit the work of pre-emerging, emerging, and occasionally student artists. All regular exhibitions are juried, and are therefore competitive. Often those who are in the early stages of their artistic career need assistance in developing a professional application package. There are many resources on the internet that provide advice in creating an exhibition proposal. Unfortunately, many of these resources do not hold up to the professional requirements needed to be competitive in a jury process. With this in mind UAS has gathered some notes and internet resources it feels to be helpful and appropriate to fulfilling our submission process.
Follow submission guidelines carefully and provide materials within the minimum and maximum requirements requested. Do not submit extra materials. The jury will not review them. Submitting less than the requested materials will be detrimental to the application. Applicants need to show they can follow instructions.
Provide materials in the formats requested. Artist-run centres do not have the ability to read any and all file formats. If your document/image cannot be opened it will negatively impact your application.
Respect the jury. Remember jury members have to read many, many proposals so don't make it difficult or frustrating to read yours. Formatting in your application should be easy to read. Use 12 point Arial font. Well defined margins, headings, and paragraph formatting also helps the readability of the text. Your name, page numbers, and any other applicable information should be displayed in the footer of ALL pages of the application in order to ensure proper order of materials. Each item submitted must be labelled at the top of the section (i.e. "Artist Statement", "Project Proposal", etc.) to ensure the jury has the correct context in which to read your work.
Be professional. The application must be written, prepared and presented in a professional manner. Applying for an exhibition is similar to applying for employment. Use professional language and package your application in an organized way. Also, your professional application should not contain unrelated personal information. UAS shows the work of emerging artists who wish to pursue a career in the arts. Professionalism is expected.
Ensure you are applying to the right venue. UAS is an artist-run centre with a mandate to exhibit contemporary work by pre-emerging, emerging and sometimes student artists from the local Calgary community. Review the web site, including documentation from past exhibitions to see if UAS is an appropriate venue. UAS is NOT a commercial venue.
Keep the writing focused. Ensure your writing is organized, and ideas are developed. Writing that lacks direction, clarity or formatting will easily be overlooked among the other applications. Have someone else read your writing prior to submitting. Do not state or imply that the jury should select which works should be part of the exhibition. It is expected that this has been determined by the individual(s) submitting, and is outlined in the materials. The jury judges the application as a whole.
Make sure your submitted images are professional. Images should be properly taken on a neutral background, with sufficient lighting, and visible edges if it is a non-detail shot. Images that are out of focus, crooked, or are not lit properly or white-balanced (images that have a yellow or blue tinge) will appear to the jury to be an unprofessional documentation of the work. The images provided should help clarify the written submission. If the proposed work is not complete provide images of past work that is related to the proposed exhibition or project, and also provide an outline/sketch/rendering or something to help the jury visualize what you are proposing. Be clear about what images apply to the proposed exhibition, and which images are documentation of past work. Only submit the requested number of images as per the submission guidelines. The jury will not look at more than the maximum number of images requested. Images should easily fit within maximum email size limits as long as the images are no larger than the requested image size in order to email your application.
A CV is an artist resume and should contain information about your professional activities. Be sure to format your CV so that it's easy to read and all the information is easily distinguishable. Use the headings and notes indicated at the following link.
An artist statement is a statement that expresses ideas, processes, influences, etc. of the artists' current general studio/artistic practice. It is not a biography or description of work specific to one work or project.
The Exhibition Proposal does not reiterate what was said in the Artist Statement. The Exhibition Proposal is a statement of the ideas and works being proposed for a specific exhibition or art project. Discuss how your ideas outlined in the Artist Statement is fulfilled in the proposed exhibition, and what logistical needs the exhibition will require.
Submit the largest file possible for the application guidelines. All work should be properly documented on a neutral background, well-lit, edges visible and without distracting interference. Documenting your work is about showing your art as clearly and descriptively as possible, it is NOT about your creative photography and Photoshop skills. | http://www.uascalgary.org/resources.html |
Organiser:
Contemporary Art Centre, Vilnius
in cooperation with National Museum of M. K. Čiurlionis, Kaunas
Curators: Linara Dovydaityte, Kestutis Kuizinas
Artists:
Aleksas Andriuskevicius, Akvile Anglickaite, Robertas Antinis, ARTcar, Naglis Baltusnikas, Egle Budvytyte, Arturas Bumsteinas, Gintaras Cesonis, Darius Ciuta, Gintaras Didziapetris, Konstantinas Gaitanzi, Ugnius Gelguda, G-Lab, Kestutis Grigaliunas, Kristina Inciuraite, Involved, Linas Jablonskis, Donatas Jankauskas, Karolis Jankus, Vytenis Jankunas, Evaldas Jansas, Agne Jonkute, Jurga Juodyte ir Saulius Leonavicius, Patricija Jurksaityte, Mindaugas Kavaliauskas, Ignas Krunglevicius, Zilvinas Landzbergas, Bernadeta Levule, Dainius Liskevicius, Ceslovas Lukenskas, Gintaras Makarevicius, Aura Maknyte, Ligita Marcinkeviciute, Martynas Martisius, Ieva Mediodia, Andrew Miksys, Deimantas Narkevicius, Audrius Novickas, Saulius Paliukas, PB8, Bartosas Polonski, Private ideology (Audrius Bucas, Gintaras Kuginis, Valdas Ozarinskas), Auris Radzevicius, Arturas Raila, Egle Rakauskaite ir Romualdas Rakauskas, Laura Stasiulyte, SMC TV, Loreta Svaikauskiene, Gediminas ir Nomeda Urbonai, Aiste Valiute, Auguste Varkalis, Arvydas Zalpys, Darius Ziura.
‘101.3 KM: competition and cooperation’ is the latest in a 10-year-long series of exhibitions reviewing Lithuanian contemporary art presented by the Contemporary Art Centre. The exhibitions in the series include ‘Lithuanian art `97: galleries present’; ‘Lithuanian art 1989-1999: ten years’; ‘Lithuanian art `01: Self-esteem’ and track the Lithuanian art scene, presenting new works, new trends, new artists, as well as introducing topical cultural debates to the audience. The previous exhibition ‘2 Show: young artists from Latvia and Lithuania’ (2003) expanded its geographic horizon to include Latvia so that developments in Lithuania could be contextualised against regional tendencies and emerging artists could be compared with their Latvian peers. Each of the exhibitions has generated large-audiences and extensive coverage in the popular and critical press.
The title of the 2006 exhibition ‘101.3 KM: competition and cooperation’ expresses both the distance (door-to-door) between the collaborating institutions in Vilnius and Kaunas and the exhibition‘s principal theme. The exhibition will be presented simultaneously in Vilnius and Kaunas and at three venues; the CAC in Vilnius, Kaunas Picture Gallery and the Meno Parkas gallery — as well as in public spaces in both cities. More than 50 artists from both cities will take part in the show. A number of works have been commissioned that address the different cultural context of the cities, questioning their image and stereotyping. Following the principle of ‘competition’ the Kaunas artists’ projects will be shown in Vilnius and vice versa as the two cities possess different and distinctive artistic traditions. Bringing together artists of different generations working with different media ‘101.3 KM’ seeks to produce unexpected relations, conceptual contradictions, and provocations, as well as searching for new forms of competition and cooperation.
To a great extent the exhibition reflects a history of artistic mobility between the cities and a history of competition between artists, art academies, universities, exhibiting institutions, and artists’ groups. In some ways, the exhibition tracks artists migration from one city to another, and the corollary — an exchange of ideas and capital, information flow and its interference, social interaction, and the natural criticisms of ‘the other‘. This comparison may challenge orthodox opinions about how and where Lithuanian artistic tendencies are formed.
Gallery
Aleksas Andriuškevičius
Aleksas Andriuškevičius
Video
Exhibitions
Events
Education & Excursions
Reading Room
Publications
Screening Hall
Fluxus Cabinet
About the CAC
Calendar
2019 ruduo
Skelbiamas atviras šaukimas jauniesiems menininkams pretenduoti į „JCDecaux premiją 2019“
2019.03.15
Triple book launch: Valentinas Klimašauskas, Laura Kaminskaitė, Anastasia Sosunova
Guided tours with curators
Sharon Lockhart's film programme
2019.02.14-17
Landscape to be Experienced and to be Read: Time, Ecology, Politics. James Benning's Film Programme
2019.01.25
Investigative Aesthetics: Forensic Architecture
Archive
Address:
Vokiečių St. 2, LT–01130, Vilnius
Opening Hours:
Exhibitions are open Tuesday to Sunday
12pm–8pm
Please find the exhibition schedule on the main page. CAC exhibitions are open on some of the bank holidays: February 16, March 11, June 24, July 6 and the first Sunday of June.
Exhibition Tickets:
Full price – 4 Eur
Discount – 2 Eur
Admission is free every Wednesday
Students of Vilnius Academy of Arts are welcome to visit free of charge at any time
CAC Reading Room:
CAC Reading Room is open Monday to Friday 12pm–7pm
If you wish to organise a meeting at the CAC Reading Room at another time or book the space for an event please call (+370 5) 2608960 or email [email protected]
George Maciunas Fluxus Cabinet:
CAC's only permanent display opens for visitors alongside the temporary exhibitions
CAC Sculpture Garden:
The CAC Sculpture Garden is open daily from 12pm to 7pm
Admission is free
CAC Café:
Open 11.30am–10pm and until 12am on weekends
Exhibition Tours:
We offer exhibition tours every Wednesday at 5pm, free of charge
If you wish to book a tour for another time please email [email protected]
If it's not on a Wednesday, you’ll need to buy CAC's regular exhibition tickets
Disabled Access:
CAC's exhibition spaces are accessible by wheelchair
Call for Information: | http://www.cac.lt/en/exhibitions/past/06/1315 |
This thesis explores the artist-led initiatives associated with the Triangle Network in the United States, United Kingdom, Southern Africa and South Asia, between 1982 and 2015. It considers how artists have set up artist-led initiatives as a response to different circumstances in these regions, and how the idea of a network has influence this development. It is specifically concerned with how artist-led initiatives have contributed to shifts in art-world infrastructures and the writing of contemporary art histories. In particular it shows how the idea for international artist-led workshops spread from an intitial workshop in Upstate New York to an international network of partner-workshops across the world. It demonstrates how these workshops often led to: the creation of new spaces for artists to work and host visiting artists for residency programs; new exhibitions and publications; and a cosmopolitan sense of international artistic exchange. It also shows how these artistic exchanges have been concerned with better regional and South- South connectivity. It examines why and how such artist-led initiatives have been initiated and run, and what impact belonging to a network has had on the artists and artworks. The purpose of this thesis is to argue that an understanding of these types of organisations is a useful contribution to international contemporary art history, as they often represent moments of transformation and emergence. Using the notions of assemblages and networks as analytical tools, the thesis explores the possibilities these approaches have to art historical writing. Such an approach allows for analysis of heterogeneous actancts including artworks, materials, artists, institutions, books, spaces, websites, and funding streams. The intention of this thesis is to contribute an approach to writing about contemporary art history from the perspective of ‘grass-roots globalization’ (Appardurai, 1999) that can counter readings of global contemporary art based only on hegemonic institutions. | https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.691080 |
Ebifananyi 4. ‘Go Forward, Ham Mukasa’s take on Ugandan history is part of a series of publications and exhibitions that together form the artistic part of Andrea Stultiens’ PhD research.
There is no word for photograph in Luganda, the largest minority language in Uganda. ‘Ekifananyi’ (the singular of ‘Ebifananyi’) literally translates into ‘likeness’ and is used for any type of two-dimensional imagery. This research problematizes the apparent transparency of how meaning is generated in photographic images by developing stories based on photo collections in Uganda and the dialogues around them. (see www.HIPUganda.org and www.facebook.com/HIPUganda). The collected material is transformed into a series of small photo books and exhibitions that take place both in Uganda and in the Netherlands. Next to this, the working process is documented and reflected upon in a text that has the ambition to be a history in and of photographs (from and about Uganda).
‘Go Forward’ is an English translation of ‘Simuda Nyuma’, the Luganda title of a book triptych written by Ham Mukasa. The starting point for the exhibition is a collection of photographs from the Ham Mukasa family archive. The first presentation, which took place at Academy Minerva, included contributions by a range of Ugandan artists, and Ugandan and Dutch art students, who were invited by Stultiens to interpret specific moments and phenomena in history that were described by Ham Mukasa from their own vantage point.
Ham Mukasa (ca.1870-1956) was an important chief. An early literate and Christian convert, he wrote about the history he was part of. Stultiens found a list of described images that should have accompanied his writings on the reigns of three kings of the Buganda Kingdom. However, as far as Stultiens could see, these illustrations had not yet been made. The list became an open invitation to think about the history described.
Go Forward presents work by Achola Rosario, Lwanga Emmanuel, Eria Nsubuga, Nate Omiel, Papa Shabani, Violet Nantume, Fred Mutebi, Sanaa Gateja and students from Uganda Christian University and Academy Minerva. | http://priccapractice.nl/project/ebifananyi-4/ |
Linux is an open source operating system popular among programmers. It is a versatile system that can be used to develop software, web applications, and mobile applications. With Linux, developers have access to a wide range of powerful tools and technologies that can help them create high-quality applications quickly and easily. In this article, we’ll explore why programmers choose to use Linux, the benefits it offers, and some of the most popular programming languages used on the platform. We’ll also look at the challenges of developing with Linux and how to overcome them. So, if you’re looking to become a Linux programmer, read on to find out why you should make the switch.
Linux, like Mac OS X or Windows, is an operating system. In the past, it was primarily used as a server operating system, and it was not recommended as a desktop option. This proprietary operating system is used in every ten countries in the world (out of 1000) by not requiring third-party drivers for Windows or Macs. Users can select from a variety of modules available during the installation process to meet their specific hardware requirements. Following the selection of a specific distribution, you can port Linux to non-Intel architectures such as MIPS, Alpha AXP, SPARC, PowerPC, and Motorola 68K. There are possibilities that Linux’s shell scripting can be used to perform simple and consistent special operations. You can always find someone who shares your passion for Linux on the Linux community. Some businesses, such as Novell and Red Hat, offer paid support that is useful for sharing information and tips about Linux OS (or any associated applications).
24-cross-7 Support is an excellent way to increase customer loyalty because community members assist users in finding someone who has done something similar to what they are attempting to do. In the current environment, 96.3 percent of all servers run Linux, according to a recent study. The Linux operating system currently runs 90 percent of cloud infrastructure. As a result of this characteristic, Linux OS has demonstrated an uptime of 99.9 percent. As a result, Linux becomes more stable and reliable, as well as manageable while lowering costs.
Almost 10,000 people are involved in the software industry. According to the most recent report on who writes Linux, there have been nearly 10,000 Linux programmers since 2005, and 1,100 have been added in the last year alone.
Linux has many advantages over other operating systems, including faster script and code processing.
Linux applications are typically robust in terms of security and reliability, as well as capable of running quickly. Because they don’t have enough time to fix the operating system, many professionals like Linux because it’s simple and invisible.
Do Programmers Use Linux?
In fact, Linux OS is widely preferred by programmers and developers because it allows them to work more quickly and effectively. It also enables them to be creative and customize their solutions. Linux has an extremely appealing feature: it is free and open-source.
Programmers and developers tend to prefer Linux OS because it allows them to work faster and more effectively. Linux’s main advantage is that it is free to use and open-source. Linux is customized and free of charge, allowing businesses to use it for servers, appliances, smartphones, and other applications. It is thought that Linux is more secure than Windows. Because it is an open source project, there are many developers working on it, and anyone with a computer can contribute code. Hackers will almost certainly not be able to hack a Linux distribution until they discover a vulnerability. As a developer in Windows 10, you can test apps, change settings, and navigate to advanced features.
Linux is the most popular operating system among scientists and hackers alike. Because of its open source platform, portability, command line interface, and compatibility with popular hacking tools, it is an excellent tool for people who work in these fields. Furthermore, its flexibility and low entry price make it an excellent choice for those who require a reliable operating system. Linux is the best option for hackers due to its security, openness, and control. They have the ability to customize their environment just as they want, and the system is open source, allowing them to change and extend the capabilities of the system to meet their specific needs. Furthermore, Linux is extremely portable, allowing hackers to easily transfer data between computers. For scientists, Linux has many advantages. For users who require a reliable operating system, having an open source platform, compatibility with popular software packages, and low entry-level costs make it an appealing option. Furthermore, thanks to its flexibility and control, those working in the scientific field can customize their environment to their specifications in the manner they require. Linux is an excellent tool for hackers and researchers in general. Because of its open source platform, portability, command line interface, and compatibility with popular hacking tools, it is an extremely powerful tool for those in these fields. The Linux operating system is unequivocally the most reliable and powerful platform for hackers and scientists who require a dependable and powerful operating system.
Which Programming Is Used In Linux?
The Linux kernel‘s programming language is C, not C, C, C#, or C++. To begin, you must master the C programming language. You should also have a thorough understanding of the theory behind operating systems, particularly when it comes to Linux.
It’s a different matter than using a development system, for example, when you use a command-line compiler, such as gcc. Borland C, for example, can be executed using MS-DOS. There are no clear distinctions between how compile and link programs are compiled or linked. To be effective, particularly for Unix and C programmers, you must be familiar with how to use them. Gcc, in most cases, assumes that you intend to compile your specified source files before linking them together for your executable. Several methods can be used to modify or circumvent these defaults. Use gcc instead of the -c switch to compile a source file into an object file rather than linking it.
Unlock The Power Of Linux: A Versatile, Open-source Operating System For Developers
The Linux operating system is used by programmers and developers from all over the world. With its broad range of features, the operating system is well-known for its adaptability. This platform not only provides a diverse set of programming languages, but it also enables developers to freely download software. The Linux operating system is made up of two programming languages: C and C. The Linux kernel is based on the GNU license and was developed in C, where a large portion of the operating system’s components, including many components of the GUI, were written. With the Linux operating system, you can create and run a wide range of applications that you can use for a variety of purposes. Linux is ideal for those who want to create powerful software applications due to its extensive library of free software and the fact that it supports a variety of programming languages.
Do Most Devs Use Linux?
Do most developers use Linux? The answer to this question depends on the type of development work a person is doing. For web developers and software engineers, Linux is often the preferred operating system since it is open source, secure, and widely supported. However, for mobile app developers, the majority of development is done on Windows and Mac OS. So, depending on the type of development work a person does, the answer to this question can vary.
It is becoming increasingly clear that the majority of developers prefer Linux over Windows. The main appeal of Linux to many users is its flexibility. Because Linux is open source, it has a wider range of applications than Windows. A broader range of software is also available, thanks to the use of a variety of hardware. Many users never bother to disable or uninstall the dozens of built-in features available with Windows. The Windows user interface is far more complex and demanding than that of most Linux distributions. Despite the fact that Linux apps are simpler to use, Windows apps are more resource-intensive.
Linux is used on desktop computers, but it is also used on embedded devices such as routers and set-top boxes. Using Linux, you can discover a wide range of free and open-source applications for a wide range of purposes. Because Windows is a one-of-a-kind operating system built by a single company, it is more difficult to identify and fix security flaws. In Linux, security is built into the system’s core as part of the concept of security by design.
More and more software developers are demanding a greater level of software development expertise, and they appear to favor the Windows operating system on a global scale. Microsoft has been the most popular development environment for quite some time, with its market share growing. Despite this, the fact that Apple’s macOS has 44 percent of the market share, trailing only Linux’s 47 percent, demonstrates that there is a strong preference for developing in a more open-source environment. Given the increased demand for Linux professionals, this increase in diversity in the software development industry is essential. Employers are now willing to go to great lengths in order to find the best candidates, and having the necessary Linux skills and cloud computing capabilities is becoming increasingly important. To stay competitive, software developers must expand their skills and become more versatile in their roles. There is no sign that the demand for Linux talent will slack off anytime soon, and developers should seize the opportunities that exist.
Linux Gaining Traction Among Software Developers
In professional software development, there is a growing trend of using Linux; nearly 40 percent of developers now use it. The reason for this is that Linux has numerous advantages over other operating systems, such as Windows. Linux is more secure and reliable than Windows, and it does not require any security software. Furthermore, because it is open-source, developers can continuously add new features to the system and improve it.
When it comes to which environment to choose between Linux and macOS, Windows remains the most popular choice for software developers around the world. In addition, Linux has surpassed Mac OS X as the second most popular choice among developers with 47%. In terms of market share, macOS is followed by Microsoft’s Windows and Apple’s Mac OS.
Because Linux is considered to be more secure, developers tend to use it over Windows. Because it is open-source, numerous developers are working on it, and anyone who wants to contribute code can do so. As a result, any vulnerabilities discovered in the system will almost certainly be discovered and resolved before they can be exploited by hackers. Furthermore, while Linux does not require an antivirus, Windows does.
In conclusion, Linux is gaining popularity as a more secure, stable, and reliable alternative to Windows among software developers. As the number of developers using Linux grows, it is expected that this will only increase.
How Many Developers Use Linux?
Linux is an incredibly popular operating system among developers, with an estimated 90% of developers using it in some capacity. Not only is it open source and free to use, but it also offers a wide range of features and customization options that make it an ideal choice for developers. It’s fast, secure, and stable, which makes it a great choice for developing applications and programs, as well as for running servers. Additionally, many developers prefer Linux because it is highly versatile, allowing them to develop applications for a wide range of hardware and software platforms.
Powerful And Flexible: Linux Becomes Go-to Choice For Developers
With 39.89% of professional developers using Linux in 2022, Linux will remain the dominant platform for developers in the years to come. Linux is better suited for use by business users because it has so many features and is extremely stable. There are numerous developers who work to improve the system, add new features, and fix bugs, and it is not only stable and reliable, but it is also supported by numerous developers. This ensures that Linux is up to date and can be used to create any type of application. Because Linux is an open-source operating system, it is simple for developers to add their own features. The operating system’s versatility is its greatest selling point, and it is an excellent choice for developers looking for a powerful, yet adaptable operating system.
How Many Programmers Use Linux
Linux is an open-source operating system that has become increasingly popular among programmers in recent years. Many software developers have embraced Linux for its flexibility, stability, and reliability. It is also free to use and has a wide range of software and applications available. According to the 2019 Stack Overflow survey, approximately 76.7% of professional software developers use Linux, making it the most popular operating system among programmers. For these reasons, many software developers are turning to Linux and using it as their preferred development platform.
Exploring The Pros And Cons Of Programming With Linux & Macos
A wide range of operating systems are available for use in the development of software. According to data from 2018 to 2021, Microsoft was the most popular operating system for software development, with 62 percent of the market. Unix/Linux took the second spot with 50% market share, while macOS took the fourth place with 42% share. Other operating systems account for a mere 1% of the market. Applications, interfaces, programs, and software are all made possible by Linux. It is used in desktop applications, real-time programs, and embedded systems. There are numerous free online tutorials for using, imitating, and developing Linux code, as well as a plethora of free web pages where you can learn more about it. When it comes to programming, there is no one-way street between Linux and macOS. Linux has more control over its users than other operating systems because it has more control over its configuration, customization, and privacy. Macs, on the other hand, provide solid hardware and greater software options. In this case, the individual developer has complete control over their programming journey and is free to choose the best course of action.
Do Web Developers Use Linux
Web development is a type of online work. Most of the programming language you’ll need in order to create a website will be available on Linux. Chrome and VS Code (or WebStorm, my preference) are both available, as are node and npm command-line utilities. Using nvm to manage Node versions is a simple way to do so.
The majority of web developers should start with Linux-powered laptops or desktops to build their websites. Many tools are available for Linux-based web development. If you are using a number of programming languages, including Java, C, C++, Python, or Ruby, you will need to use a database management system, such as Apache or nginx. The first step toward learning how to code for a web developer is to learn how to code. A shared hosting account costs around five dollars per month. When you’re ready to reap the benefits of Linux, it’s a good time to give it a try. The Windows Subsystem for Linux provides the same user interface as the Mac OS X operating system.
You can have thousands of websites on your computer’s side with shared hosting. As a result, large websites will have issues, particularly when it comes to performance. To switch from shared hosting to VPS, virtual private server, or even dedicated server, make sure you’ve already upgraded. VPS provide dedicated resources, which means they do not share resources with other computers. The software that runs on a virtual private server or dedicated server is under your control. To transfer files from your local environment to your remote server, you’ll need to learn how to use Secure Copy. As we go through the process, we will be able to determine which configuration files are required.
Once you’ve learned how to use Linux, you’ll become a powerful user. Many commercial software products can be replaced with incredible software that you have access to. Because Linux is an open source software, it is easier to analyze the code that runs on the kernel. It is extremely secure to use Linux because its way of handling user roles, groups, files, and folder permissions is very sophisticated. If you don’t have an older laptop or desktop, ask your friends and family if they will sell it. You have the option of offering to pay for it. It’s best to use an older laptop for this task.
It is critical to remove the hard drive and perform a secure wipe. It is critical to understand that not every laptop will be able to support multiple virtual machines and your host OS. In terms of power, having a Linux-powered laptop is a game-changer, but it is not worth the investment. The following is a partial list; there are some that you can checkout but not others. More information on Linux distributions can be found on Google.
Why Is Linux Better For Programming
Linux is regarded as having a higher level of security than Windows. There is no need to protect your computer with an antivirus program. Because it is open-source, several developers are working on it, and anyone can contribute code. It is highly unlikely that hackers will find a vulnerability in Linux before they target it.
Linux is a type of Unix descendant, so it is similar to other Unix-based systems. Out of the hundreds of Linux distributions available, nearly 500 are currently in development. Because it is open-source, many developers are working on it, and anyone with a computer can contribute to the code. To learn how to program, a Linux distribution is the best option. Almost all programming languages, including Python, Ruby, C, and C, can be run on it. Linux distribution systems (DDOSs) are currently in the development phase, and they are freely available. There is a large community of Linux enthusiasts, and you can seek help in any of the forums.
Linux is an excellent platform for programming because it is a powerful and versatile operating system. With Linux’s versatility and efficiency, developers are increasingly turning to the operating system to create powerful applications in 2021. Linux has also been adopted by system administrators, network engineers, and other IT professionals, giving it an important skill to use in the workplace. When IT professionals want to gain an advantage in their fields, Linux is an excellent choice because of its user-friendly design and powerful features. Linux is an excellent choice for developers and IT professionals looking for a simple operating system that is logical, efficient, and easy to read source code. It is clear that Linux will continue to be a popular platform for developers and IT professionals in 2021 and beyond due to its numerous features.
Should I Use Linux
Using Linux is a great way to get the most out of your computer without spending a lot of money. With so many distributions available, you can choose the one that meets your needs best. Unlike other operating systems, Linux is open source which means that you can customize it to fit your exact needs. Linux is also highly secure, reliable, and stable. It’s a great choice for anyone who needs a robust operating system that can be tailored to their exact needs.
In addition to paying for both the operating system and its licensing, Linux is free to use. Free downloads and installations are available on the website. It’s possible to tinker with the operating system. There is no need to use Microsoft Office with LibreOffice. You can install Google Chrome, Mozilla Firefox, Opera, and other web browsers from the list provided below. Graphics editing software such as Gimp, Inkscape, and Krita, as well as audio editing software such as Ardour, are available, as are video editing software such as OpenShot and blender. There are numerous free and open source music players available to manage your music collection. Using VLC or MPV players, you can play any video format you want. Many of these games can only be played on Windows, thanks to Steam’s experimental new feature.
Is Linux Actually Better Than Windows?
While Linux provides excellent speed and security, Windows provides a simple interface that makes it simple to use for even the most inexperienced users. Businesses and gamers use Windows for a variety of purposes, whereas Linux is used as a servers and operating system for security purposes by many corporate organizations.
Is It Worth Trying Linux?
Linux is not a bad operating system to learn, and it can be one of the most rewarding if you gain a good understanding of how it works.
Linux Os
Linux is an open-source operating system that is widely used in the world of technology. It is free, secure, and versatile, making it an attractive choice for a variety of uses. It is a reliable and efficient operating system that is used in both personal and professional computing environments. It has a wide range of applications, from web servers to embedded systems and more. Linux is also highly customizable, with a wide range of distributions available to choose from. It is a powerful and secure operating system with a large and active community of users.
Linux is a popular open-source operating system that is similar to Unix and designed as a community project. It can be run on nearly any major computer platform, including x86, ARM, and SPARC. Linux distributions, also known as Linux operating systems, are available in hundreds of languages. Linux distributions can be customized to meet specific requirements, either by altering them to meet those requirements or by selecting from a variety of them. The Linux operating system is built on a modular design, which allows for a wide range of configurations. Many automakers have joined the Automotive Grade Linux project, which is a project supported by the Linux Foundation. Many distributions, including Knoppix Linux, have been used to recover damaged hard drives and perform other technical support tasks.
While the software remains free, commercial distribution sources frequently charge for services. The Linux kernel, which is the primary operating system component, is shared by all Linux distributions. The user experience can be vastly different depending on the system’s use. A command line or a desktop environment are two common Linux use cases that have very different user experiences. When he was in graduate school at the University of Helsinki in Finland, he developed Linux as a replacement for the Minix operating system. Despite its sluggish performance on the desktop, Linux is still a viable alternative to proprietary operating systems such as Windows and macOS. Manufacturers who want to create embedded operating systems will appreciate the wide range of customization options available to them with Linux OS. However, it is a disadvantage for businesses that require a desktop operating system that can be accessed by a wide range of users.
The Linux Vs Windows Debate: Exploring The Benefits Of Ubuntu
When it comes to operating systems, there is no simple answer: Linux or Windows. We make adjustments based on how our users want us to work. When it comes to the fundamental features of an operating system, such as thread scheduling, memory management, i/o, file system management, and core tools, Linux is generally better than Windows. Linux systems are very versatile, with the ability to serve as small or large of an individual’s needs. As a result, they can be used both at home and at work by enterprise-level companies, as well as smart and mobile devices. Furthermore, Linux provides them with a competitive advantage because it is open source. The Linux distribution Ubuntu runs the Debian OS family. This Linux distribution is appropriate for a variety of applications, including cloud computing, servers, desktops, and Internet of Things devices. A key distinction between Linux and Ubuntu is that Linux is an operating system family, whereas Ubuntu is simply a distribution of Linux. To make an informed decision, the user must first determine his or her preferences and needs. | https://sampleboardonline.com/article/benefits-challenges-and-popular-languages-systran-box |
Killing applications, not killer applications
One of the problems with commercial software is that it still largely sets the benchmarks for what all software developers want to produce. Hence, apart from a few mavericks most developers are following the established GUI metaphor, which encourages (though it is not by any means originally responsible for) the tendency to think in terms of "applications". This suits companies wishing to sell software, because applications can be arbitrarily large and complex, and have arbitrarily many features added; they are never complete. Even the modern versions of the classic UNIX command-line "tool chest" programs, which are often cited as examples of "software tools" designed to perform tasks in combination rather than as individual applications, suffer from the application mentality: grep and sed, for example, include their own subtly different regular expression engines.
What should we be doing instead? Simply, making programs work better together. Some of the problems are not even with the "application" mentality, but because programs are often hard to combine flexibly. For example, grep needs extra code and options in order to use Perl-compatible regular expressions, and either a wrapper script or yet more code in order automatically to decompress compressed files. "Libraries" usually require code to bind to them, and in the second case although you can achieve something close to the desired result by "preloading" a compression library that overrides the standard file-reading routines, the approach lacks fine control (e.g. what happens when you really do want to grep through a compressed file) and opens potential security holes (by overriding system libraries).
There are undoubtedly technical problems to be solved: programs have a wealth of ways to interact (pipes, sockets, temporary files, forking, dynamic linking, scripting) and they need fewer and more flexible ways, but this brief rant does not attempt to address this problem.
Instead, I want to return to the problem of the "killer application". The UNIX tool-chest may not be ideal, but it achieves much more with much less (code, programmer time, computing resources and hence user expenditure on hardware and software, and arguably user frustration) than bloated, bug-ridden (because large, complex and hard-to-test) killer apps. Much can already be achieved without any new technical solutions.
I suddenly realise that I'm at a loss to give examples, as most of my work in this area has been in the less dramatic and more subtle region of programs that are already lightweight, whether it is simplifying and removing bloat from a console editor, working on grep (sadly by adding code) to simplify its use and (even more remote from users) its distribution as part of complete systems, or working on a sound converter to reduce its internal complexity and allow it to support new sound formats without writing code (by making its various readers, writers and filters into standalone programs, rather than, as at present, reimplementing the pipe system inside SoX). Most of my simplification over the last ten years has consisted not of programming, but simply of choosing programs more amenable to toolkit, compositional use: rather than Windows Explorer or its free software equivalents, I use the bash shell, which for all its shortcomings (and they are legion) remains one of the most comfortable readily-available ways to interact with a computer; for editing I use Emacs, arguably a classic piece of bloatware, but also an environment for building editing applications that has attracted thousands of programmers to add routines for performing particular tasks, so that it works remarkably closely with a wide range of other software. I use only six applications daily: editor, shell, email client, web browser and two messaging programs, and that is five too many (one application is the same as zero), but I haven't yet managed to combine them further. It is the desire to do so that started to make me think about this sort of reductive programming.
Well, that was a ramble. Most of the value was probably in the headline. I suspect there are some tangled ideas that need untangling. Comments, as usual, welcome. | https://rrt.sc3d.org/Computer/Killing%20applications,%20not%20killer%20applications.md |
This white paper provides strategies for maintaining a test system for longevity.
Becoming a technology laggard in the 21st century is far from inconceivable. As human dependence on gadgets grows, the demand for developing new technology intensifies. Cell phones are prime examples of products that are continually evolving. The typical cell phone begins production, goes into distribution, and becomes obsolete all in less than three years. Such rapid technological advances mandate that test systems built to examine products with fast life cycles be expandable to sustain product growth over the long haul. These systems must not only be modular and flexible to support copious tests that may vary between product models, but they also must be scalable to accommodate a larger number of test points. Most importantly they must be cost-effective and easy to use so as to reduce development time.
For a single test system to have all of the above listed features would involve the use of scalable software coupled with modular and expandable hardware. The goal of this document is to list software, hardware, and maintenance/support considerations that can prepare engineers to design quality test system architectures with extensive life spans.
The software component of a test system is used to deploy the hardware and display processed data to the end user. The ideal software package should minimize the effort required when developing and expanding the test program and, thus, streamline the productivity of software engineers. This section lists the factors to consider when choosing software tools for designing long-term test system architectures.
Test systems built for longevity usually require modifications during their life spans. To make the process of incorporating these changes into the test program less time-consuming, the software must be scalable. Examples include easy-to-use application programming interfaces (APIs) that minimize the need to learn hardware caveats and the abundance of example code that serves as a starting point for any application. It must also minimize the effort required to perform new analysis on data acquired from the hardware by providing a host of precoded analysis functions that perform complex mathematical processing.
Test systems often need to be expanded either to accommodate more test points that may be required for testing newer models of a device under test (DUT) or to perform new tests on an existing DUT. Thus, software used to write test applications must not only be able to support hardware expansion but it must also simplify the process of modifying existing test code by maximizing code reuse. Some software tools do this by providing interactive configuration utilities that convert user input into data that is then used by the test code. By taking advantage of such utilities, users can expand the test program by making changes to the configuration file rather than to the actual code, thus reusing existing code. Another way that software can maximize code reuse when system hardware changes occur is to support drivers that use a standard communication protocol to program multiple instruments with the same API. Interchangeable Virtual Instruments, or IVI, are a prime example of such drivers, as shown in figure 1. IVI drivers are sophisticated instrument drivers that feature increased performance and flexibility for intricate test applications that require interchangeability, state caching, or simulation of instruments. These drivers minimize the need for hardware engineers to learn new protocols or hardware commands that may vary among instruments from different vendors.
Figure 1. An IVI driver architecture maximizes code reuse and works with multiple hardware platforms.
To achieve interchangeability, the IVI Foundation has defined specifications for the following eight instrument classes: digital multimeter (DMM), oscilloscope, arbitrary waveform/function generator, DC power supply, switch, power meter, spectrum analyzer, and RF signal generator. For the purpose of illustrating the advantages of IVI, consider a DMM/switch system that is being expanded. Because of higher throughput demands, an additional DMM might be needed. However, because of cost constraints, a cheaper model of the instrument is chosen from a different vendor. If the test program for the system were written using IVI drivers, the code to deploy the first DMM could be reused to deploy the second one as well. For more information on IVI, visit the IVI page.
Lastly, systems built to sustain long-term technological advances must be flexible and use application development environments (ADEs) that can withstand structural changes by working with multiple hardware and software platforms. If the ADE does not support these multiple platforms, programmers need to use different ADEs for different projects and spend time porting existing intellectual property from one platform to the other. Consider an older application built to work on Windows 98 that needs to be transported over to a newer PC that runs Windows XP. If the ADE used to develop the code did not support Windows XP, the test program would have to be completely reconstructed. Similarly, imagine that your application needs change and you must add a new measurement device. If your development software does not support this new hardware, then you may have to substantially change the application. Both scenarios would exponentially increase development time and significantly affect the efficiency of the test engineer. Therefore, a software package well-suited for designing test system architectures for longevity must support multiple vendor hardware platforms and numerous OSs. Moreover, it must be built on an adaptable framework that simplifies the process of adding support for new hardware and OSs when needed. Software with these features helps minimize the time needed to modify existing test code when new hardware or tests are introduced in the system, thus increasing application longevity.
Consider a DUT that is a product model of a gadget that is constantly evolving – for example, the Apple iPod. Older versions of this device such as the iPod mini (now obsolete) were audio-only devices. Newer iPods can play sound as well as video. An evaluation of past trends and the wide adoption of the iPod show that it is possible that future versions of the device will incorporate additional features. Testing complex and evolving devices such as the iPod requires a flexible hardware platform that is constantly growing. This section lists the factors to consider when choosing hardware platform for designing long-term test system architectures.
With technology evolving, test needs are bound to become more complex. Therefore, it is crucial that the hardware platform chosen to build the test system is continually growing as well. Such a platform eliminates the need to completely redesign a test system architecture when its maximum capacity is reached. Open industry platforms, such as PXI, VXI, and GPIB, are usually good examples of this type of hardware. This type of platform reap the benefits of multivendor adoption, which include constant growth and innovation. Increased channel-count, voltage, current, speed, and accuracy specifications are just a few areas in which vendors that manufacture open-standard hardware compete with each other for industry recognition. This competition fosters healthy platform growth and a constant supply of new products to meet the needs of advanced test systems.
Open standards that are modular further reduce system costs by maximizing component reuse. Their flexibility eliminates complexities when system expansion is needed. Because test systems built for longevity are often expanded to incorporate more I/O points during their life spans either to test new DUTs or to add testing features to existing DUTs, a flexible hardware foundation is crucial. Hardware chosen in test systems that need to last several years must have the ability to perform a vast number of tests. To do so, it must have a large portfolio of products that can perform tests on analog, digital, and RF signals at various levels with accuracy and speed. With the iPod example, future versions of the device might need a test system with expanded channel count and the ability to perform video, audio, and maybe even RF (in the case newer iPods that play radio stations). Open-standard platforms, because of their vast product offering and continual growth, are more likely to fulfill these requirements.
Engineers should preferably choose hardware and software that have longer life spans than the system itself. If this is not possible, they must ensure that these products are manufactured by vendors who have a strong track record for providing drop-in replacements and support for obsolete products. Without vendor support, technical issues and unforeseeable problems in the system can be hard to troubleshoot. This can sometimes cause unwanted delays and, in more extreme circumstances, require the construction of an entirely new system.
National Instruments is committed to providing products and services that engineers can use to build test systems for the long term. The recommended solution is a five layer system architecture that was introduced in the executive summary of this guide. This section will highlight the main products and services that ensure support over the long-term. From a software perspective, the NI LabVIEW platform forms the core of all NI software. As a graphical programming language, NI LabVIEW is based on three basic steps that can be used to construct any test program – acquire, analyze, and present.
Figure 2. There are three steps to programming in LabVIEW.
For acquisition, LabVIEW has APIs, as shown in figure 2, for programming various NI hardware devices such as DMMs, RF instruments, digitizers, programmable power supplies, and multifunction data acquisition devices. Because these APIs are the result of cohesive efforts by NI hardware and software R&D engineers, they follow the same programming sequence. This consistency makes programming various instruments in LabVIEW menial and eliminates the need for programmers to learn the nuances of each instrument. Most importantly, it minimizes development time. To simplify the acquisition process even more, LabVIEW uses configuration wizards and follows the dataflow paradigm, making it simple and intuitive for first-time users to write test code.
In addition to acquisition, the LabVIEW platform offers hundreds of built-in analysis functions that cover different areas and methods for extracting information from acquired data. Programmers can use these functions as is or modify, customize, and extend them to suit a particular need. These functions are categorized in the following seven groups: measurement, signal processing, mathematics, image processing, control, simulation, and application areas. View the LabVIEW for Measurement and Analysis tutorial for more information on analysis tools in LabVIEW.
LabVIEW also has a vast number of abstraction features such as charts, graphs, knobs, and the ability to export data to data management software such as Microsoft Excel or NI DIAdem, which help present analyzed data to the user in a clear fashion. Data presentation also can be done between locations by making use of the LabVIEW ability to publish data to the Web or log information to a database. Cohesively, these features make LabVIEW scalable and ideal for building long-term test system architectures.
Figure 3: LabVIEW is an intuitive graphical programming language that provides tools such as charts, graphs, and buttons to abstract information for the user.
LabVIEW also comes with built-in features and add-on modules and utilities to help reuse existing code. These features include interactive configuration wizards such as Measurement & Automation Explorer and NI Switch Executive, which convert user input into data that is fed into LabVIEW test code, as shown in figure 4. Using such graphical configuration wizards, programmers can add hardware to programs by making changes to the configuration file rather than to the test program. This maximizes code reuse and increases development efficiency. In addition to configuration tools, the LabVIEW platform offers users a host of IVI class drivers to help maximize code reuse in the test program. For more information on IVI, visit the IVI page.
Figure 4. Engineers can use NI Switch Executive to configure and deploy a 9,216-crosspoint switch matrix within minutes.
The last requirement that software packages need to meet is platform independence. Complex test systems sometimes require precision instruments. When an NI solution for such instruments does not exist, programmers can make use of more than 5,000 FREE instrument drivers to program instruments from numerous vendors including Agilent, Fluke, and Keithley. Learn more on the Instrument Driver Network page. In this way, programmers can use the LabVIEW ADE to program instruments from multiple vendors, making it independent of the hardware platform. LabVIEW also supports multiple OSs and can be deployed on Windows, Mac OS, Linux®, real-time OSs such as PharLap, and even FPGA. The flexibility of LabVIEW is one of its biggest assets. Using traditional text-based languages programmed in Windows, on an FPGA, and on an embedded processor would require engineers to learn three completely separate languages (for example, C, VHDL, and TI Code Composer Studio™). Using LabVIEW, engineers can deploy applications on all three targets in the same environment, maximizing efficiency and minimizing time spent learning new tools.
For a test system to function, its software and hardware need to work in a congruent fashion by complementing each other. It is just as important to select the right hardware platform as it is to pick a powerful software tool when designing a test system for longevity. Using a flexible open hardware platform, such as PXI, minimizes the need for test system replacement as product life cycles end and technological advances are made. This increases test system longevity.
PXI is an open-standard platform used by various test and measurement industry leaders who are all members of the PXI Systems Alliance. Currently, the more than 70 companies worldwide that are PXI Systems Alliance members share a common commitment to providing an open platform equipped for a variety of applications, from machine control to automated test. Growth of PXI modules has been rapid since the adoption of the PXI standard in 1998, and today more than 1,200 PXI products are available. In addition to a vast product offering, PXI is also expected to continually grow for the foreseeable future, as shown in figure 5. These PXI characteristics make it suitable for use in long-term test systems.
Figure 5. The PXI platform has enjoyed exponential growth over the past eight years and is forecasted to continue growing.
Its modular nature also makes PXI suitable for building test systems for longevity, as shown in figure 6. Expanding a PXI system to incorporate more I/O points or new testing capabilities is as easy as adding one of the more than 1,200 available modules to an empty slot in a PXI chassis, which is the outer frame of the system. Because all instruments are powered by the chassis, components such as the chassis fan and the power supply are reused to the maximum.
Figure 6. The modular PXI platform makes system expansion easy.
In addition to being an open standard and modular in nature, PXI has a wide span that addresses the needs of various applications from vision testing to high voltage and current testing (high-power applications) to even RF test. PXI modules can perform these tests with accuracies of up to 7½ digits (26 bits) and rates up to 6.6 GS/s. PXI instruments are also suited for mixed-signal tests that involve both analog and digital signals. Some modules also come with built-in signal conditioning for the measurement of sensors such as thermocouples, RTDs, load cells, and strain gages. The PXI platform also has FPGA modules and real-time capability, making it suitable for test applications that need determinism. To learn more about its capabilities and offerings, visit the PXI page.
Other modular platforms made by National Instruments include NI CompactDAQ, a low-cost alternative to PXI, and NI CompactRIO, an FPGA-based platform designed for high-speed control and deterministic test applications, as shown in figure 7. By designing modular system hardware that maximizes code reuse while catering to a vast number of applications, National Instruments provides test engineers with a selection of hardware platforms appropriate for designing long-term test systems.
Figure 7. Other modular platforms manufactured by NI include NI CompactDAQ and CompactRIO.
National Instruments has a commitment to providing an array of quality hardware and software products that make designing test system architectures for longevity as elementary as possible, as shown in figure 8. NI makes every effort to support these products by providing calibration services; technical support via phone, e-mail, and Web; and numerous repair and warranty options.
NI also makes every effort to support obsolete products by providing technical assistance via the Web in the form of knowledgebases, tutorials and example code, and discussion forums and by offering drop-in replacements for hardware products that go “end of life.” Using these services, NI hardware and software can be maintained and sustained over the long term.
Figure 8. As NI releases new products, it continues to support older ones.
You should consider software and hardware platforms that are designed to support long-term maintenance and upgrades. While hardware and software products can help build an efficient test system for longevity, it is the availability of product support from the manufacturer that determines whether it will be maintainable. In the universe of technology, what is advanced today may be archaic tomorrow – there is no avoiding that fact. With proper planning, it is possible for the test system to avoid a similar fate. | http://www.ni.com/de-de/innovations/white-papers/06/designing-and-maintaining-a-test-system-for-longevity.html |
Provides Windows programmers with details of and deep insights into the inner system functions of Microsoft Windows Essential for Win95 and other advanced Windows programmers Ideal for software developers who are moving applications from Windows 3.x t o Windows 95 Includes disk of example programs, source code, documentation, and utilities
In this book and disk set, Barry Kauler explains the exacting details of Windows programming at the system level. He dissects the fundamentals of hardware man agreement and explores the history and advanced architectural details of Windows, the PC processor family, and systems programming in Real and Protected modes. For everything from BIOS, direct hardware access, and virtual machines to real-time events and options for managing program transitions, Kauler gives the how-to information and example code advanced software developers need for the full range of Windows systems-level programming for Windows 3.1 to Windows 95. For programmers new to Windows, this book demystifies assembly language programming for Microsoft Windows. Kauler thoroughly examines the basic concepts of Windows, and reveals systems programming tips and tricks. He explains the architectures of the microprocessor hardware, and how these features affect programming; introduces object-oriented programming from a nuts-and-bolts perspective; demonstrates how to write complete object-oriented assembly language programs in as little as nine lines; shows how to interface C++ and assembly code; takes readers "inside" Windows to learn the architectural details that Microsoft never publicly documented; explains how to move between Real and Protected modes; illustrates the art of thinking from 16 bits to 32 bits and back again; and provides detailed, hard-to-find reference information. Plus, Kauler's companion disk is a treasure trove of example programs, useful source code, further documentation, and powerful utilities.
About the Author
Barry Kauler is a professor in the Department of Computer and Communication Engi neering at Edith Cowan University in West Australia. He is the author of several books, including PC Architecture & Assembly Language and Flow Design for Embedded Systems, and a contributor to Dr. Dobb's Journal.
This book is exactly what you need to increase proficiency on Microsoft Word 2002.
The book...
Your hands-on, step-by-step guide to building applications with Microsoft SQL Server 2012
Teach yourself the programming fundamentals of SQL Server 2012—one step at a time. Ideal for beginning SQL Server database administrators and developers, this tutorial provides clear guidance and practical, learn-by-doing...
Welcome to OpenGL SuperBible! The first time I ever heard of OpenGL was at the 1992 Win32 Developers Conference in San Francisco. Windows NT 3.1 was in early beta (or late alpha) and many vendors were present, pledging their future support for this exciting new platform. Among them was a company called Silicon Graphics, Inc. (SGI)....
This book follows a cookbook style approach that puts orthogonal and non-redundant recipes in your hands. Rather than rehashing the user manual, the explanations expose the underlying logic behind Matplotlib.
If you are an engineer or scientist who wants to create great visualizations with Python, rather than yet another... | https://www.pdfchm.net/book/windows-assembly-language-systems-programming-16-and-32-bit-low-level-programming-for-the-pc-and-windows/9780879304744/ |
To expose, or not to expose, hardware heterogeneity to runtimes!
The emphasis on energy efficient computing is steering hardware towards greater heterogeneity. Software must take advantage of emerging heterogeneous hardware to optimize for performance and efficiency. A question that arises is what is the right software layer to abstract the complexity of heterogeneous hardware?
Historically, the operating system (OS) is the first choice to abstract new hardware features. This benefits programmers, virtual machine developers, and language implementers, who do not need to worry about hardware details. On the other hand, the upper layers of the software stack, especially the language runtimes contain rich semantic information about user applications, unavailable to the OS. This information can be useful in better managing hardware. The drawback is that it requires changes to the runtime which makes hardware vendors depend on runtime developers. This paper discusses two case studies that show exposing hardware details to the Java runtime improves key evaluation metrics for popular Java applications. We further discuss implications for implementation complexity, programming model, and the necessary hardware and OS support.
Shoaib Akram is a Ph.D. candidate at Ghent University in Belgium. He has an M.S. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign and a B.S. in Electrical Engineering from the University of Engineering and Technology in Pakistan. His research focuses on the intersection of programming languages, system software, and computer architecture. His current research investigates software approaches to ease the adoption of emerging memory technologies. His recent research also explores the potential of language runtimes in abstracting the complexity of heterogeneous hardware. | https://2019.programming-conference.org/details/MoreVMs-2019/2/To-expose-or-not-to-expose-hardware-heterogeneity-to-runtimes- |
Computer systems are increasingly bottlenecked by data movement costs and are no longer able to keep up with the “big data” explosion. This trend is compounded by the fact that data in memory is stored without any consideration of data content, usage patterns, or how data are grouped into higher-level objects. Many algorithms also exhibit poor temporal locality (because they only touch data items once) and poor spatial locality (because they exhibit irregular access patterns across vast data sizes), making caches inefficient.
Solving these challenges and enabling the next generation of data-intensive applications requires computing to be embedded in and around the data, creating intelligent memory and storage (IMS) architectures that do as much of the computing as possible as close to the bits as possible. The closer computation is to the memory cells, the closer it comes to the raw bandwidth available in the storage arrays. However, such IMS capabilities will require reconstructing the entire system stack, with new architectural and operating-system abstractions, new memory semantics, and new techniques for compiling and optimization, and dynamic yet efficient system software. At the same time, we want to achieve high programmer productivity and code portability across diverse, heterogeneous architectures. We must protect the user from this hardware and software ferment by providing a highly intuitive programming model that allows programmers to focus on what they want to do, instead of how, while nevertheless operating as close to the raw performance of the data arrays as possible.
The CRISP grand challenge is to significantly lower the effort barrier for every day programmers to achieve highly portable, “bare-metal,” and understandable performance across a wide range of heterogeneous, IMS architectures. This will democratize high-performance, heterogeneous, data-intensive computing, to enhance productivity of the IT workforce and enable an improved software ecosystem that opens new markets for computer systems. | https://crisp.engineering.virginia.edu/about |
Software, aside other categories as discussed in the previous post, can be generally divided into two: system software and application software. System software is designed to operate the hardware of the computer. System software keeps everything working and is responsible for making the computer usable. It provides a platform for running application software and basic functions of the computer. All application programs work with the system software to accomplish their tasks.
One important purpose of system software is to protect the applications programmer from the complexity and specific details of a particular computer being used, especially memory and other hardware features. System software is usually made up of three kinds of programs or components.
These are:
- Operating System
- Device Drivers
- Utility Programs (Utilities)
Application software on the other hand, allows a user to accomplish some tasks by enabling the computer to perform special activities aimed at solving a specific problem.
Whilst application software is discussed broadly in the next post, this post focuses on the components of system software and the types of its major component, Operating System.
READ ALSO : Types Of Computer Software
What Is A Computer Operating System?
An operating system (OS) is a software that manages the computer’s resources, runs and coordinates other programs, and provides common services for the user and application software. The operating system acts as an intermediary between application programs and the computer hardware for functions such as input & output and memory allocation.
Apart from Personal Computers (PC), Operating systems are also found on any device that contains a computer. Such devices include cellular phones, video game consoles, supercomputers and web servers.
Examples of popular operating systems for PCs include:
- MS-DOS (Microsoft Disk Operating System) for IBM computers
- Microsoft Windows (Windows 10, Windows 8, Windows 7, Windows Vista, Windows XP, Windows 2000, Windows ME, Windows 98, Windows 95)
- Linux (Ubuntu, Fedora, Red Hat, Mandriva, SuSE, Debian)
- Mac OS (Macintosh Operating System) for Apple computer systems
- UNIX, and many more.
READ ALSO : Types Of Computer Software – Types Of Application Software (Productivity & Business)
Types Of Operating Systems
Operating systems have over the years been developed and separated accordingly into the following categories based on their functions:
- Single- and Multi-tasking Operating System
- Single- and Multi-user Operating System
- Network Operating System
- Distributed Operating System
Single- and Multi-tasking OS
While a single-tasking OS can only run one program at a time, a multi-tasking OS allows more than one program to be running concurrently. This is possible due to time-sharing, where the available processor time is divided between multiple processes. Each of these processes are interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system.
Single- and Multi-user OS
A single-user OS involves a standalone computer system where the operating system allows only a single user to process and handle one operation at a time. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run one after the other. Examples of single-user OS are Windows 95, MS-DOS, Windows NT Workstation, and Windows 2000 professional.
A multi-user OS allows multiple users on different computers or terminals to access a single system with one OS on it. These programs are often quite complicated and must be able to properly manage the necessary tasks required by the different users connected to it. A multi-user OS applies the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users. The system permits multiple users to interact with the system at the same time. Examples are Mac OS, All Linux based OS, UNIX, IBM AS400, and Windows 10.
Network Operating System
A network involves the setting up of network servers for connecting many computers to the network for the purpose of communicating and sharing resources. A network operating system (NOS) is designed primarily to support workstations and personal computers connected on a local area network (LAN). Examples include Artisoft’s LANtastic, Banyan VINES, Novell’s NetWare, Microsoft Windows (NT, 2000), OpenVMS, Linux, UNIX, and others.
Distributed Operating System
A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked to communicate with each other gave rise to distributed computing. A distributed OS is an extension of the network operating system that supports higher levels of communication and integration of the machines on the network. Distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they form a distributed system. Examples are IRIX, DYNIX, AIX, Solaris, Mac OS, and others.
Functions of Operating Systems
Typical functions of operating systems include:
- Performing various boot procedures (cold and warm booting procedures)
- Disk and storage management
- Virtual memory/storage functions
- Controlling application programs
- Providing inter connectivity functions
- File management functions (copy, cut, paste, delete, open)
- Provides automatic troubleshooting facilities
- Provides security
Computer Device Drivers
In computing, a device driver, commonly known as driver, is a small program that operates or controls a particular type of hardware device that is attached to the computer, such as a keyboard or a mouse. There is a plethora of computer hardware, for example the number of disk drive models available even from one manufacturer can be very huge.
A driver provides a software interface to hardware devices, to enable an OS and other programs to access hardware functions without needing to know precise details of the hardware being used. A device driver simplifies programming as it acts as a translator between a hardware device and the applications that use it. All peripheral devices attached to the computer have their device drivers installed through the OS. Some devices such as printers, scanners, plotters, and others, have exclusive software coming with the devices.
Although the system you buy has many drivers pre-installed, those programs are provided by the device manufacturer. This is especially important if you are experiencing problems with a device or want to connect an unrecognized device. You will probably have to download a driver from the device manufacturer.
Utility Programs (Utilities)
Utility programs, also known as utilities, service programs, or system tools, are designed to help analyze, configure, optimize and maintain the computer. They are programs that help users to identify hardware problems, locate lost files, and back up data. Common utilities include disk defragmenter, disk compressor, disk cleanup, virus scanners (antivirus), system restore, and development tools such as compilers and debuggers.
Utilities can be contrasted with application software which allows users to do things like creating text documents, playing games, listening to music or surfing the web. Rather than providing these kinds of user-oriented or output-oriented functionality, utility programs usually focus on how the computer infrastructure operates. Most utilities are highly specialized and designed to perform only a single task or a small range of tasks. However, there are also some utility suites that combine several features in one piece of software. Most major operating systems come with several pre-installed utilities, however, utilities are part of the system software but not part of the OS per se.
If you liked this article and find it helpful, please feel free to share and don’t hesitate to ask any question or share your thoughts or experiences in the comments section below. | https://ziglinkit.com/types-of-computer-software-system-software-and-types-of-operating-systems/ |
Blog, 2016 Blogs
Throughout the software development life cycle, your product would go through struggles until you see it deployed in production. These challenges have been there for years and this still remain in many conventional and startup companies.
The top five challenges are:
1. Prolonged Time delays as per the fixed deployment schedules
2. Dependency on operations teams for code deployment
3. Default Hardware/software configurations required per project
4. Making CI/CD work together as one piece from end to end.
5. Huge timeline from concept to production
Online businesses are feeling that the deployment processes are more likely challenging than the past. Although there are plenty of tools and technologies available, the deployment processes are still cumbersome. The cloud computing and mobile services are making huge growth in the market, and this is one of the most important reasons why the private sector companies are considering it heavy to handle the data and the deployment processes. Not more than half a decade ago, programmers were not bothered about their code being able to work on the web or mobile applications or a mix of both. So this indeed counts on top of the existing challenges one should face upon deployments.
1. Huge timeline from concept to production
Companies take ample amount of time to bring in a concept and transforming that into a reality.It takes months or even a year. If companies are following conventional deployments, then the time delay is indeed expected
2. Prolonged Time delays as per the fixed deployment schedules
Many organizations are still following the conventional approach of scheduling deployments one after another. The deployments requests created by the projects get approved within days/weeks of schedule. This is critical to the business and brings in huge delay to make the project ship ready state.
3. Dependency on operation teams for code deployment
Operation teams are highly dependent if software projects are still waiting and depending for operations guy to get deployed. If you are one among such companies, which means you are still playing black and white movies in your theater.
3. Default Hardware/software configurations required per project
If software projects required a specific set of default hardware/software configuration, mostly IT teams would work on the requests and go through the approval cycles for hardware & brief amount of delays in preparing the software configurations.
5. Making CI/CD work together as one piece from end to end.
Although some companies think they are faster in their deployment, they are not indeed. Even if their development teams take their CI/CD solution into practice, they still need huge technical expertise to run the show. This again brings in technical expertise into picture with a big dependency on technical team to take care of the deployments even if it’s called ‘automated way’.
What RLCatalyst Offers:
By Adopting Agile & DevOps, any concept can be seen as Minimal Viable Product in action in a shorter span of weeks. Yes you read it right! DevOps needlessly waits for no one if your product is ready to be deployed. DevOps does not take huge effort to adopt or execute. With the right expertise, it can be achieved to the fullest in the shorter span of time.
Imagine if all the above issues got solutions and if they are powered and packed in a single magic chest that too developed by DevOps Ninjas? This power packed product can shed light into your deployment challenges which can make a huge impact to your businesses with its one-click deployment
The solution is RLCatalyst, Yes! RLCatalyst can fade away all the above issues. | https://relevancelab.com/2016/03/24/ |
We examine the risks of not modernizing and offer steps IT leaders can take to gain executive buy-in.
In conjunction with CIO magazine, Stout surveyed healthcare industry IT leaders to learn more about the numerous challenges they face in their industry. One of our key findings was that the most important digital initiative for the current year is modernizing the IT infrastructure (Figure 1). The initiative is a challenging one given the complexity of a Healthcare Information System (HIS) (Figure 2), competing priorities, and the cost associated with such a large-scale digital transformation project.
Figure 1: Data-driven digital initiatives for healthcare companies for the current year
In terms of background, the HIS collects, manages, and uses large amounts of data from various sources with the objective of delivering a single, integrated solution for each functional department in the healthcare domain. For example:
Figure 2: High-level architecture of a Hospital Information System (HIS)
- The administration system data is collected at the data warehouse.
- The ERP system contains information including management data, financial information, (FI) and supply chain management (SCM) data.
- The data created in the administration system and in the Enterprise Resource Planning (ERP) software is analyzed by various decision-making systems.
- The Picture Archiving and Communication System (PACS) is used to store and transmit medical images, such as radiology images.
Risks With Not Modernizing a HIS
Several problems can arise when existing legacy IT components and services are not modernized:
- Problematic system integration: The integration of all systems can lead to inconsistencies because different systems use different programming languages, architectural standards, or communication protocols. Often, this highly heterogeneous integrated system arises because of M&A activity, replacement of individual IT components (e.g., the EHR system) without properly upgrading all the impacted IT systems, or having outdated IT services (whether homegrown or not) without proper documentation or developer support.
- High maintenance costs: There are associated vendor maintenance, labor, and operational costs for keeping legacy systems running properly. In addition, in many instances, IT teams have to spend up to 60% of their time to manage outdated legacy systems . (Logicalis Global CIO Survey 2018-2019, 2019)
- Cybersecurity risks: Old legacy hardware may not be compatible with newer versions of the operating systems (e.g., Windows 10), where important security patches and upgrades have taken place. In many cases, the operating systems used by either HIS IT components or medical devices (e.g., radiology systems, ultrasounds, etc.) are not supported by the software vendor anymore. As a result, there is a gap in security, providing hackers the opportunity to gain access to sensitive patient data or shut down essential hospital services.
- Regulatory compliance penalties: Regulations regarding patient data are constantly evolving and becoming more rigorous to meet compliance. Data breaches or ransomware attacks can translate into millions of dollars in penalties.
- Hardware failure financial risks: Old hardware is prone to failure. When failure takes place, it is challenging to take the outdated IT systems offline for maintenance or replacement because essential operations (such as patient care) of the healthcare organization will be affected.
- Loss of future revenue: Legacy systems are not flexible enough to support new services and customer demand. Innovation and competitiveness are not easily accomplished in an outdated IT environment, and this affects the profitability of the company.
- Lack of efficiency: Trying to automate manual, repetitive tasks, and services in outdated IT systems is extremely challenging. Consequently, IT staff is required to manually complete tasks that can take a significant time.
Not only will these risks be mitigated by updating the HIS, but additional benefits include improved analytical tools that can result in new insights that drive business decisions around quality of care and patient outmigration.
Modernization Approaches
There is a range of modernization methods that can be applied to the IT legacy systems depending on timeline, effort, and budgetary constraints. Below are some common approaches that can be used:
- Reuse of legacy components/APIs: This is a method borrowed from object-oriented architectures, where existing data and functions are wrapped (“encapsulated”) by modern code and are available via modern application programming interfaces (APIs).
- Migrate to cloud: The cloud offers flexibility and scalability in terms of resources that are leveraged to match user demand. It also results in lower operating costs.
- Rearchitect existing IT services: After migrating to the cloud, the existing software applications can be rearchitected by applying modern principles of advanced service-oriented architectures, such as microservices, where different functionalities are isolated from each other and making software upgrades more flexible and agile.
- Rewrite software: This method is usually the most time-consuming. Old and unmaintainable legacy business logic will be rewritten, and the code will be customized for current IT needs.
Figure 3: Modernization IT approaches
Executive Buy-In
There are eight guiding principles that CIOs can follow to gain executive buy-in and support for an IT budget that will be allocated to IT modernization initiatives:
- Identify business problems caused by outdated IT systems
- Create dashboards that report all relevant metrics to clearly quantify the identified business challenges (e.g., revenue loss, risk of non-compliance, high costs, etc.)
- Educate executives by using dashboards to translate IT issues to business impact
- Create an executive IT strategy committee that decides prioritization of open problems and budget
- Gather requirements and assess if IT issues can be handled internally or not
- If applicable, engage with third-party vendors and identify (for example, via RFP) the best fit
- Have chosen vendors build robust prototypes to understand impact of the new solutions
- Quantify business improvements based on prototyping phase and prioritize which IT initiatives should be developed and deployed
Information systems in the healthcare industry still rely on legacy hardware, software, and overall architecture. These outdated systems provide various challenges for healthcare CIOs due to their high operating and maintenance costs, security and compliance risks, and the lack of organizational agility needed to provide additional revenue. According to our recent survey, healthcare IT leaders recognize the need for technology modernization and digital transformation, but budgetary constraints may impede the implementation of all necessary changes.
However, with the help of a thorough assessment of all existing IT systems, the major inefficiencies and pain points can be captured. After identifying and prioritizing the main areas where modernization needs to take place, different methodologies (whether cloud migration or rewriting software) can be tested via several prototypes. These prototypes will be able to quantify the amount of efficiency, scalability, and cost-effectiveness that are introduced in the new and transformed IT components and services. The existing IT budget can now be allocated accurately to the areas to be modernized that will generate the highest ROI. | https://www.stout.com/de-DE/insights/article/it-modernization-healthcare-industry |
If it were not for system software, all programming would be done in machine code, and applications programs would directly use hardware resources such as input-output devices and physical memory. In such an environment, much of a programmers time would be spent on the relatively clerical problems of program preparation and translation, and on the interesting but unproductive job of reinventing effective ways to use the hardware. System software exists to relieve programmers of these jobs, freeing their time for more productive activities. As such, system software can be viewed as establishing a programming environment which makes more productive use of the programmer's time than that provided by the hardware alone.
The term programming environment is sometimes reserved for environments containing language specific editors and source level debugging facilities; here, the term will be used in its broader sense to refer to all of the hardware and software in the environment used by the programmer. All programming can therefore be properly described as takin place in a programming environment.
Programming environments may vary considerably in complexity. An example of a simple environment might consist of a text editor for program preparation, an assembler for translating programs to machine language, and a simple operating system consisting of input-output drivers and a file system. Although card input and non-interactive operation characterized most early computer systems, such simple environments were supported on early experimental time-sharing systems by 1963.
Although such simple programming environments are a great improvement over the bare hardware, tremendous improvements are possible. The first improvement which comes to mind is the use of a high level language instead of an assembly language, but this implies other changes. Most high level languages require more complicated run-time support than just input-output drivers and a file system. For example, most require an extensive library of predefined procedures and functions, many require some kind of automatic storage management, and some require support for concurrent execution of threads, tasks or processes within the program.
Many applications require additional features, such as window managers or elaborate file access methods. When multiple applications coexist, perhaps written by different programmers, there is frequently a need to share files, windows or memory segments between applications. This is typical of today's electronic mail, database, and spreadsheet applicatons, and the programming environments that support such applications can be extremely complex, particularly if they attempt to protect users from malicious or accidental damage caused by program developers or other users.
A programming environment may include a number of additional features which simplify the programmer's job. For example, library management facilities to allow programmers to extend the set of predefined procedures and functions with their own routines. Source level debugging facilities, when available, allow run-time errors to be interpreted in terms of the source program instead of the machine language actually run by the hardware. As a final example, the text editor may be language specific, with commands which operate in terms of the syntax of the language being used, and mechanisms which allow syntax errors to be detected without leaving the editor to compile the program.
In all programming environments, from the most rudimentary to the most advanced, it is possible to identify two distinct components, the program preparation component and the program execution component. On a bare machine, the program preparation component consists of the switches or push buttons by which programs and data may be entered into the memory of the machine; more advanced systems supplement this with text editors, compilers, assemblers, object library managers, linkers, and loaders. On a bare machine, the program execution component consists of the hardware of the machine, the central processors, any peripheral processors, and the various memory resources; more advanced systems supplement this with operating system services, libraries of predefined procedures, functions and objects, and interpreters of various kinds.
Within the program execution component of a programming environment, it is possible to distinguish between those facilities needed to support a single user process, and those which are introduced when resources are shared between processes. Among the facilities which may be used to support a single process environment are command language interpreters, input-output, file systems, storage allocation, and virtual memory. In a multiple process environment, processor allocation, interprocess communication, and resource protection may be needed. Figure 1.1 lists and classifies these components.
Editors Compilers Assemblers Program Preparation Linkers Loaders ======================================================== Command Languages Sequential Input/Output Random Access Input/Output File Systems Used by a Single Process Window Managers Storage Allocation Virtual Memory ------------------------------ Program Execution Support Process Scheduling Interprocess Communication Resource Sharing Used by Multiple Processes Protection MechanismsFigure 1.1. Components of a programming environment.
This text is divided into three basic parts based on the distinctions illustrated in Figure 1.1. The distinction between preparation and execution is the basis of the division between the first and second parts, while the distinction between single process and multiple process systems is the basis of the division between the second and third parts.
Historically, system software has been viewed in a number of different ways since the invention of computers. The original computers were so expensive that their use for such clerical jobs as language translation was viewed as a dangerous waste of scarce resources. Early system developers seem to have consistently underestimated the difficulty of producing working programs, but it did not take long for them to realize that letting the computer spend a few minutes on the clerical job of assembling a user program was less expensive than having the programmer hand assemble it and then spend hours of computer time debugging it. As a result, by 1960, assembly language was widely accepted, the new high level language, FORTRAN, was attracting a growing user community, and there was widespread interest in the development of new languages such as Algol, COBOL, and LISP.
Early operating systems were viewed primarily as tools for efficiently allocating the scarce and expensive resources of large central computers among numerous competing users. Since compilers and other program preparation tools frequently consumed a large fraction of an early machine's resources, it was common to integrate these into the operating system. With the emergence of large scale general purpose operating systems in the mid 1960's, however, the resource management tools available became powerful enough that they could efficiently treat the resource demands of program preparation the same as any other application.
The separation of program preparation from program execution came to pervade the computer market by the early 1970's, when it became common for computer users to obtain editors, compilers, and operating systems from different vendors. By the mid 1970's, however, programming language research and operating system development had begun to converge. New operating systems began to incorporate programming language concepts such as data types, and new languages began to incorporate traditional operating system features such as concurrent processes. Thus, although a programming language must have a textual representation, and although an operating system must manage physical resources, both have, as their fundamental purpose, the support of user programs, and both must solve a number of the same problems.
The minicomputer and microcomputer revolutions of the mid 1960's and the mid 1970's involved, to a large extent, a repetition of the earlier history of mainframe based work. Thus, early programming environments for these new hardware generations were very primitive; these were followed by integrated systems supporting a single simple language (typically some variant of BASIC on each generation of minicomputer and microcomputer), followed by general purpose operating systems for which many language implementations and editors are available, from many different sources.
The world of system software has varied from the wildly competitive to domination by large monopolistic vendors and pervasive standards. In the 1950's and early 1960's, there was no clear leader and there were a huge number of wildly divergent experiments. In the late 1960's, however, IBM's mainframe family, the System 360, running IBM's operating system, OS/360, emerged as a monopolistic force that persists to the present in the corporate data processing world (the IBM 390 Enterprise Server is the current flagship of this line, running the VM operating system).
The influence of IBM's near monopoly of the mainframe marketplace cannot be underestimated, but it was not total, and in the emerging world of minicomputers, there was wild competition in the late 1960's and early 1970's. The Digital Equipment Corporation PDP-11 was dominant in the 1970's, but never threatened to monopolize the market, and there were a variety of different operating systems for the 11. In the 1980's, however, variations on the Unix operating system originally developed at Bell Labs began to emerge as a standard development environment, running on a wide variety of computers ranging from minicomputers to supercomputers, and featuring the new programming language C and its descendant C++.
The microcomputer marketplace that emerged in the mid 1970's was quite diverse, but for a decade, most microcomputer operating systems were rudimentary, at best. Early versions of Mac OS and Microsoft Windows presented sophisticated user interfaces, but on versions prior to about 1995 these user interfaces were built on remarkably crude underpinnings.
The marketplace of the late 1990's, like the marketplace of the late 1960's, came to be dominated by a monopoly, this time in the form of Microsoft Windows. The chief rivals are MacOS and Linux, but there is yet another monopolistic force hidden behind all three operating systems, the pervasive influence of Unix and C. MacOS X is fully Unix compatable. Windows NT offers full compatability, and so, of course, does Linux. Much of the serious development work under all three systems is done in C++, and new languages such as Java seem to be simple variants on the theme of C++. It is interesting to ask, when we will we have a new creastive period when genuinely new programming environments will be developed the way they were on the mainframes of the early 1960's or the minicomputers of the mid 1970's?
The goal of this text is to provide the reader with a general framework for understanding all of the components of the programming environment. These include all of the components listed in Figure 1.1. A secondary goal of this text is to illustrate the design alternatives which must be faced by the developer of such system software. The discussion of these design alternatives precludes an in-depth examination of more than one or two alternatives for solving any one problem, but it should provide a sound foundation for the reader to move on to advanced study of any components of the programming environment.
For an interesting discussion of an early interactive program development environment, see
J. McCarthy, et al. A Time-Sharing Debugging System for a Small Computer. Proceedings of the 1963 Summer Joint Computer Conference, AFIPS Conference Proceedings 23. Pages 51 to 57.
One of the first fully developed program editors, fully distinct from a plain text editor or word processor, is described in
T. Teitelbaum and T. Reps. The Cornell Program Synthesizer: A Syntax-Directed Programming Environment. Communications of the ACM 24, 9 (September 1981) 563-573. | http://homepage.divms.uiowa.edu/~jones/syssoft/notes/01intro.html |
Operating systems have an important role today in supporting the development of information technology. This is because almost all applications developed currently run on top of the operating system. The operating system is a program that controls all the functions that exist on a computer. The operating system becomes the basis of application development for users. In general, all operating systems have the following four functions.
- Control of access to various hardware devices connected to the computer. (Hardware management)
- File and folder management
- Provision of user interface as a bridge between users and computer hardware (Management of user interaction)
- User application management
Hardware Control
Access to various hardware connected to the computer is provided by the operating system through an application known as a driver. Each driver is made to control one hardware device.
This driver application installation is carried out by the operating system itself during installation or when the hardware is connected to the computer. The installation mechanism automatically when the device is connected is known as the Plug and Play (PnP).
File and Folder Management
This is made possible by the operating system because during the installation of the operating system there is a format for the hard disk. Through this process the hard disk space will be arranged in such a way that it has certain blocks for storing files. This process is similar to placing shelves in an empty room for later books. A file is a collection of blocks that are interrelated and have a name. Folder is a container that can contain files or other sub-folders. Each file associated with a computer program is placed in a separate folder to facilitate file searching.
Interaction Management
Users can use the computer through an application (installed) on the computer. Each application provides an interface to receive possible interactions from the user. There are two types of interfaces that can be used to interact with users, namely:
Command Line Interface (CLI)
User interaction with the system is done by typing a series of command sentences to be done by the computer.
Figure 1. Appearance of Applications with CLI on Linux Ubuntu
Figure 2. Appearance of Applications with CLI on Windows
Graphical User Interface (GUI)
Here the user interaction is done through a set of menus and icons that can be chosen by the user to give various commands to the computer.
Figure 3. Display the Windows Operating System GUI
Figure 4. Display the GUI on Linux Ubuntu
Application Management
Each application is run by the operating system by finding the location of the program file and moving its contents into memory to then send each command in the file to be run by a computer. The user application here is an application used by the user to accomplish a specific purpose. Management functions in this user application can include:
- Install, the process of placing program files on a computer system including the configuration of the program.
- Uninstall, the process for removing program files and configurations from a computer.
- Update / Upgrade, the process of updating files from installed programs.
- Multi-user - two or more users can work together to share use of applications and resources such as printers at the same time.
- Multi-tasking - the operating system can run more than one user application.
- Multi-processing - the operating system can use more than one CPU (Central Processing Unit).
- Multi-threading - each program can be broken down into threads which can then be run separately (parallel) by the operating system. This capability also includes part of multitasking in the application.
32-bit and 64-bit Operating Systems
There are two differences between 32-bit and 64-bit operating systems.
- 32-bit operating systems can only accept a maximum of 4 GB of RAM, while 64-bit operating systems can use more than 128 GB of RAM.
- Memory management of 64-bit systems is also better, so it can run applications faster.
Judging from its use the operating system can be divided into two major groups, namely:
- Desktop operating systems, which are widely used in offices, Small Office / Home Office (SOHO), with a small number of users.
- The network operating system, Network Operating System (NOS), is designed to be able to serve users in large numbers for various purposes and is widely used in large-scale companies.
DESKTOP OPERATION SYSTEM
Desktop operating systems have the following characteristics:
- - Supports use by one user
- - Share files and folders in a small network with minimal security
Microsoft Windows
It is a proprietary desktop operating system developed by Microsoft company with its founder Bill Gates. The first version of this operating system is Windows
1.01 was released in 1985. Windows 8.1 is the latest product from Microsoft's operating system, which was released in October 2013.
Apple Mac Os
Apple Mac Os is the same as Microsoft Windows, which is a proprietary operating system developed by Apple. This system is designed as a user-friendly operating system. The latest version of this system is a development of the UNIX operating system.
UNIX / Linux
UNIX, introduced in the late 1960s, is one of the oldest operating systems. The program code for this operating system is opened so that it can be adopted by various companies. From this UNIX now many new operating systems are born which are its derivatives. Linux is also a UNIX-derived operating system that both opens its program code to the public. Linux was first developed by Linus Torvalds and later versions
0.0.1 was released in 1991. Debian is a Linux distribution developed by the Debian community company. Debian 7 Wheezy, is the latest version of this Linux operating system. In addition to Debian there are many other Linux distributions such as Fedora, Ubuntu, OpenSuSE, and Slackware. Android as a mobile operating system also includes derivatives of the Linux operating system.
NETWORK OPERATION SYSTEM
The network operating system has the following characteristics:
- Supports use by more than one user
- Running applications that can be used by more than one user
- Stable (robust), which is unlikely to have an error in the program. Robustness is a term to indicate the ability of a computer system to handle problems that occur during use by the user.
- Has a higher level of data security than desktop operating systems.
- UNIX / Linux, this is the operating system most widely used as a server at this time, examples of network operating systems with linux include Red Hat, Caldera, SuSE, Debian, Fedora, Ubuntu and Slackware.
- Novell Netware, in the 1980s, was the first operating system that met all the requirements for building a local computer network.
- Microsoft Windows, still from the same company, Microsoft also released Windows Server as its network operating system, starting from the initial version of Windows Server 2000, to the last Windows Server 2012.
Open Source Operating System (Open) Open Operating System is an operating system whose program code is open to the public so that it can be developed by others. Operating systems that are open include UNIX, Linux and its derivatives. Linux itself has many variants, such as Debian, Slackware, Redhat and SuSE. This variant is better known as a distro.
At the beginning of operating system development there were only a few. However, now there are very many in circulation. The following graph displays the development of the UNIX operating system and its derivatives from year to year.
Figure 5. History of the development of the UNIX operating system and its derivatives
From this history it can be seen that the two popular operating systems today, namely Linux and Mac Os are derivatives of the UNIX operating system. Until now the UNIX operating system continues to grow spawn new generations.
Summary
The existence of the operating system plays a very important role in the development of information technology. This is because almost all applications are currently running and require an operating system. There are many types of operating systems such as open operating systems (open source) and closed (proprietary).
Because there are no restrictions in the use of open operating systems can be developed and modified by many people or organizations. Variety of operating systems also currently very much, including those popular today are Windows, Mac Os, and Linux. | https://www.misnia.com/2020/03/complete-definition-operating-system.html |
1) Introduction:
A Software Requirements Specification (SRS) is a description of a software system to be developed, laying out functional and non-functional requirements. Software requirements specification establishes the basis for an agreement between customers and contractors or suppliers.
The software requirements specification document enlists enough and necessary requirements that are required for the project development. To derive the requirements we need to have clear and thorough understanding of the products to be developed or being developed. This is achieved and refined with detailed and continuous communications with the project team and customer till the completion of the software.
1.1) Purpose:
The goal of this windows application is to get customer’s information regarding their contact number and display all the information. This application can facilitate the work of those who can handle huge amount of data, as manual track of records is a tedious task, this may reduce lot of unnecessary work.
1.2) Scope:
The work that needs to be accomplished to deliver the customer for better results with specified features and functions in the project. This windows application is limited to specific organizations. There is no need of internet connection. This is an intranet scope for organizations.
1.3) Definitions, Acronyms, Abbreviations:
This is an Integrated Application for Caller-Identification System (Caller-ID System). It is the service where the CLI (Caller Line Identity) is sent to the admin. CNAM (Caller Name) is the character text field used in caller name service. Caller-ID System typically consists of the caller’s contact number with contact information. The information made available to the called party may be displayed on a separately attached device.
It is a windows based application which is hardware integrated system. It is build by using windows technology ASP.NET with C#.NET and Microsoft SQL Server 2008R2.The basic aim of this application is to display customer’s information based on their contact number. It can facilitate the work of those who need to handle huge amount of phone calls by serving customer data for the right person, to the right interface in the appropriate time.
If there is an unknown number for admin then admin asks all details to customer and save it in database. If existing number is appeared then all details of customer will be displayed on the hardware device (Caller-ID device). It is important to handle as many requests as possible on the first contact, and handle them as efficiently as possible. This helps minimize costs while keeping customer satisfied. To help admin to handle requests quickly.
It is common to have a window form with customer’s information, It’s called as CPN (Calling Party Number). This form holds the customer’s screen while entering the contact number. This window is usually called as screen population (screen POP) because that screen is populated with customer’s information.
An acronym, abbreviations of this application are as follows:
• Caller-ID: Caller-Identification.
• CPN: Caller Party Number.
• CLI: Calling Line Identity.
• CNAM: Caller Name.
• Screen-POP: Screen Population.
• Communication Module: The module that allows communication between clients (Customers) and server (admin).
• Developers: The team responsible for developing software system.
1.4) References:
csharp.net-tutorials.com – C#.Net
w3schools.com, tutorialspoint.com – for SQL server
1.5) Overview:
This document, the Software Requirements Specification (SRS), identifies the software requirements in the form of a task and system object model. The model presented within this document is an implicit statement of the requirements. It exhibits the boundaries and capabilities of the system to be built.
2) Overall Description:
The basic aim of this application is to display customer’s information based on their contact number. It is important to handle as many requests as possible on the first contact, and handle them as efficiently as possible.
User Characteristics:
• Customer:
Customer as an actor demands for particular product along with specification of all the requirements. Customer can:
Create a new product request.
Specify all requirements
Optionally attach design specifications
View all past request and its details.
• Admin:
The main job of an administrator is to manage the tasks performed by the user. Admin performs the following tasks:
Creating whole systems.
Configure all the parameters for displaying information.
Configure role of the user be it the employee or the customer.
Constraints:
• Hardware Limitations:
Interface problem in system found at the stage of integration which leads to hardware limitations problem. Problems of uncoordinated system components also create a barrier thus resulting in issues related to hardware.
• Regulatory Policies:
The software works on the basic fundamentals of 3-tier architecture. Hence necessary software is needed to be installed on the computer. The system is relatively heavy and therefore requires huge bandwidth.
• Reliability Requirements:
User is ensured that the system would be reliable at the time if Power Failure. A reliability program plan also improves and evaluates availability of the system by strategy of focusing on increasing maintainability and testability. It checks whether the system is highly reliable or not and generates updates information in correct order.
Assumptions and Dependencies:
The schedules, estimates and costs are based on many assumptions. If any assumptions are incorrect then specific organization reserves the right to re-estimate both the schedule and the cost for this project.
3) Specific Requirements:
This section of the Software Requirements Specification should contain all the details of the developer need to create. This is the largest and most important part of the SRS.
• Specific requirements should be organized in a logical and readable fashion.
• Each requirement should be stated such that its achievement can be objectively verified by a prescribed method.
• Sources of a requirement should be identified where that is useful in understanding the requirement.
• This should be classify as follows:
3.1) Functionality:
Admin response for feedback.
All requirements are very well known, clear and fixed.
Whole technology should be understood.
Code simplicity is one the best feature to understand the coding phase.
This Caller-ID on PC is used for specific organizations as well as for Home Purpose too.
This is also one of the Home Purpose features as Bright House Network Gadget.
3.2) Usability:
This subsection specifies the required learning, operate and training time for a normal user and a power user.
The whole system takes an hour to understand all requirements, specifications and functionality for a normal user.
It will take minimum 6 months for learning this application.
Integration with hardware device to software takes much time for learner (normal user).
Developers need to be up-skilling and expected to bring great code to an organization.
Developers must follow the specific standards, which the technology is used.
3.3) Reliability:
Requirements for reliability of the system should be specified as below:
Availability: A reliability program plan also improves and evaluates availability of the system by strategy of focusing on increasing maintainability and testability.
MTBF (Mean Time Between Failures): MTBF is a reliability term used to provide amount of failures per million hours for a system or product. This windows based application can take minimum 6 to 7 months for developing the whole application for normal user.
Accuracy: This term is used to describe the system that estimate, measure or predict. The true value of this windows application is customer’s contact number and measured value is customer’s name and all details. Hence the developer must maintain accuracy between these two values.
3.3.1) Reliability Requirements: There are other requirements for reliability of the system. Some are as below:
Maintainability: If any requirement occurs, then it should be easily incorporated in the system.
Portability: The application should be portable on any windows based system.
3.4) Performance:
Performance characteristics of the system are described as below:
Response time for transaction: This term represents the time for a database transaction. This application will take one to two seconds (average) for database transaction.
Capacity: The system can accommodate around the 40-50 customers’ capacity in this application.
Resource Utilization: This application contain resources like,
Personal Computer configured with Windows-7, Pentium dual core processor.
RAM is around 256MB to 1GB.
Communication is done through Ethernet cable or RS232 cable between hardware device and software.
3.4.1) Performance Requirements:
Cost Sensitivity: Under all circumstances, the maximum cost payable as submitted by the user will be the maximum cost charged to the user.
3.5) Supportability:
This section includes following supportability requirements:
Naming Conventions: All code will be written as specified by the Pascal Naming Convention (for Class name and Methods name), Camel Casing Convention (Method arguments and local variables) and Hungarian Naming Convention (Data type identification).
Coding Standards: All codes will be written as required by the .NET Coding Standards.
3.5.1) Supportability Requirements:
Cost-Effective-To-Maintain: It may covers divers level of documentation, such as system documentation as well as test documentation.
3.6) Design Constraints:
This section represents design decisions that have been mandated and must be adhered to. There are many aspects of any design project that must be considered to determine the feasibility of the system.
3.6.1) Design Constraints Requirements:
Software Language: All coding will be done in standard of Microsoft .NET platform.
Scheduling Constraints: Exhaustive searches of the entire set of combinations of jobs will not be done. Heuristics will be developed for this scheduling problem.
3.7) On-line User Documentation and Help System Requirements:
Online help and the guide to install and maintain the system must be sufficient to educate the users on how the use the system without any problems. All documentations will be made in accordance with requirements pertaining to open source software.
3.8) Purchased Components:
In this Caller-ID system, extra Caller-ID device is purchased and used as displaying the information of customer regarding their contact number.
3.9) Interfaces:
This section defines the interfaces that must be supported by the application. It should contain following interfaces:
User Interface:
The term “User Interface” is used in the context of Personal Computer Systems and electronic devices. The system would be developed with user friendly Graphical User Interface (GUI) making it is to understand at the initial phase.
Hardware Interface: This system can be implemented on system platform as:
Microsoft Windows XP/7/8
Dual Core Processor
RAM- 2GB and above
Hard Disk-100GB and above
Software Interface:
Platform: Microsoft .NET
Technology: ASP.NET
Language: C# (C Sharp)
For Development: Microsoft Visual Studio-2010
Back End: Microsoft SQL Server-2008
Communication Interface: Communication is very important between customers and organizations. Physical connection is required as the LAN cabling.
Diagrams
.NET Technology is following Object-Oriented Approach.
Object-Oriented UML Diagrams are as below:
1. Class Diagram
2. System Activity Diagram:
3. Use Case Diagram:
I. Use Case for Admin:
II. Use Case for Whole The System:
4. Sequence Diagram:
5. Collaboration Diagram: | https://www.essaysauce.com/computer-science-essays/integrated-application-for-caller-identification-system-software-requirements-specification/ |
A detailed treatment of the key architectural features of Alpha AXP systems.
This outstanding new book describes the internals and data structures of the OpenVMS AXP operating system Version 1.5 in vivid detail. Perhaps the most comprehensive and up-to-date description available for a commercial operating system, it is an irreplaceable reference for operating system development engineers, operating system troubleshooting experts, systems programmers, consultants and customer support specialists.
Some of the text and much of the book's structure are derived from its highly successful predecessor, VAX/VMS Internals and Data Structures: Version 5.2. The new work is divided into nine parts: Introduction; Control Mechanisms; Synchronization; Scheduling and Time Support; Memory Management; Input/Output; Life of a Process; Life of the System; and Miscellaneous Topics. Each of the 39 chapters is akin to a case study on the topic it covers, based on the depth and breadth of treatment. Although the descriptions are of the OpenVMS operating system for the Alpha AXP family of processors, the concepts are equally applicable to the internals of any modern-day multiprocessing operating system running on a RISC computer.
Price after June 1, 1994 is $150.00.
An in-depth overview of VAXcluster technology - from an "applied theory" perspective.
Discusses the Systems Communications architecture that defines how components of a VAXcluster configuration exchange information with each other.
Covers interconnects, ports, and port drivers used to implement this communications architecture.
Deals with Digital Storage Architecture, and with controllers, storage devices, class drivers, servers, and protocols that implement this storage architecture.
Explains storage options commonly used in a VAXcluster configuration.
Explains process and system level VAXcluster synchronization mechanisms.
Covers cluster wide process services, downline loading VMS systems, centralized system management, disaster tolerance, and monitoring VAXcluster activity.
A comprehensive introduction to the extensive capabilities offered by CDD/Repository-Version 5.0 - a data dictionary facility for the Open VMS Operating System.
There are few repositories in the marketplace today that offer the functionality and capabilities of CDD/Repository, and it is fast becoing the repository-of-choice in software development departments in many companies, software tool companies, and consultant firms. This comprehensive guide focuses on Version 5.0 of CDD/Repository - an extremely sophisticated and powerful repository based on an object-oriented approach. This is an active distributed repository system that provides the functionality necessary for users to organize, manage, control and integrate tools and applications across their companies. The repository simplifies application development by providing information management and environment management features.
The guide to the new Alpha system.
This is the authoritative reference on the new 64-bit RISC Alpha architecture of Digital Equipment Corporation. Written by the designers of the internal Digital specifications, this book contains complete descriptions of the common architecture required for all implementations and the interfaces required to support the OSF/1 and OpenVMS operating systems.
A guide to the latest version of ALL-IN-1 - for new and experienced users.
A complete guide to ALL-IN-1 installation and customization that goes beyond available product documentation.
Based on the author's extensive experience developing ALL-IN-1 subsystems and customizing their applications to specific customer sites, this handbook offers a far deeper treatment than the product documentation itself. Addressed to the needs of system managers, application programmers, and anyone who might work with ALL-IN-1 on a technical level, the guide features real-world practical examples, figures, and tables to illustrate the many possibilities available to meet the customized requirements of individual installations and users.
An invaluable guide for programmers and system managers to the best ways to move swiftly through the ALL-IN-1 code jungle.
It gives system managers an overview of code-level integration, and diagnostic help for product installations - including coverage on relinking the ALL-IN-1 image.
A must for anyone performing database procurement for evaluation/functions.
Rdb/VMS is a relational database system that was developed by Digital for the VAX family using the VMS operating system. It is one of a number of information management products that work together in an environment, enabling users to easily share information throughout an organization. Based on Version 4.0 of the Rdb/VMS system, this comprehensive introduction covers components, data definition and manipulation, storage structures, table access, transaction management, security, database integrity and restructuring, tuning and optimization, distribution, interoperability, the data dictionary, transaction processing, tools, application programming, and more.
The initial reference material on operating system support for Alpha.
These cutting-edge volumes are the first publications to explain OpenVMS operating system support for Digital's new 64-bit Alpha AXP architecture. Written for computer professionals who need detailed knowledge of Alpha platforms as early as it can be made available, they contain the most authoritative and timely information on the OpenVMS AXP operating system currently available. These PRELIMINARY EDITION volumes were written concurrently with the development of the operating system and are up-to-date as of the time of publication.
Invaluable for any professional working with Digital equipment.
Compiled by a committee from Digital - engineers, writers, and specialists - the latest edition of this indispensable guide includes nearly 4,500 new entries, twice as many pages as the first edition, and two new appendixes with guidelines to help you write readable technical text. It's an all-in-one glossary of technical terms, abbreviations, expansions, and acronyms for Digital products and functions - and for computing generally.
The only book on UNIX for experienced VMS users making the transition between the two systems.
This unique book is for any computer professional making the transition from VMS to UNIX. Each concept is illustrated with one or more examples comparing the way a task is performed in VMS and in UNIX. You move in a logical sequence, progressing from fundamental concepts to advanced programming and networking. Based on the Berkeley 4.2 version of UNIX, the text includes more than 150 interactive examples as well as appendixes providing command summaries and useful cross-reference tables, as well as a glossary.
Takes up where VAX/VMS documentation leaves off.
The popular guide to VMS utilities and applications.
No matter what components you're using, the new edition of this popular resource will help you learn to perform simple and useful tasks on the VAX/VMS system - with confidence! The book offers a hands-on introduction to the EDT and EVE screen editor programs, the DECspell spelling checker, WPS-PLUS, phone and mail utilities, VAX Notes, the DATATRIEVE database management program, the DECalc electronic spreadsheet, the BITNET network, and more. Included are a wealth of lively examples, exercises, and illustrations - plus ``quick reference'' charts summarizing commands and operations at the end of each chapter. Nine useful appendixes include additional detailed material. This practical, easy-to-use guide is indispensable for anyone who wants to become a self-sufficient VAX user - whether your needs are administrative, instructional, or academic.
For new users - the comprehensive guide to VMS.
A fresh presentation of VAX/VMS operating system concepts.
Discover a practical new approach to understanding VAX/VMS operating system concepts. Combining discussions of operating system theory with examples of its application in key VAX/VMS operating system facilities, this book provides a thoughtful introduction for application programmers, system managers, and students. In addition, you'll learn how VAX/VMS system services can tap the power of the operating system to perform critical tasks on behalf of applications. Each chapter begins with a discussion of the theoretical aspects of a key operating system concept - including generally recognized solutions and algorithms - followed by an explanation of how the concept is implemented, plus an example that shows the uses and implications of the approach.
How to write high-quality professional applications on VAX/VMS.
Takes a unique systems approach.
Master the relationships between today's advanced operating systems and applications and the hardware that supports them. Taking a unique systems approach, this completely revised and updated second edition of a classic shows you how. Using the VAX as a detailed example, the first half of the book offers a complete course in assembly language programming. The second describes higher-level systems issues in computer architecture. Highlights include the VAX assembler and debugger, other modern architectures such as RISCs, multiprocessing and parallel computing, microprogramming, caches and translation buffers, and an appendix on the Berkeley UNIX assembler.
Updated edition of the authoritative resource.
The second edition of the definitive guide to the design of Digital's most popular computer family, the VAX ARCHITECTURE REFERENCE MANUAL, further enhances its status as the bible of the machine's structure.... It contains the most comprehensive description of the VAX architecture to be found anywhere.... [Readers] will find this organized, well-indexed manual indispensable.
Every knowledgeable VAX programmer and architect will want to acquire this resource that Digital's own engineers depend on. From the Micro-VAX II to the VAX 9000, it spans the complete range of hardware and software issues and includes important new material covering the VAX shared-memory model and new vector processing extensions.
Most authoritative, complete description of the VAX/VMS operating system.
Experts who have been directly involved in engineering, teaching, troubleshooting, and supporting the VMS operating system since its creation bring you this totally revised edition of this landmark reference. Comprehensive and convenient, the book focuses on the kernel of the VAX/VMS Version 5.2 operating system: process management; memory management; the I/O subsystem; the mechanisms that transfer control to, from, and among these; and the system services that support and complement them. Written for professionals using VMS who wish to understand the components, mechanisms, and data structures, the book reflects every change to the VAX/VMS operating system through Version 5.2. An all-new detailed technical index and hundreds of data structure diagrams make the contents more accessible and easy to use.
Add this to your VAX/VMS library.
For software specialists, system programmers, applications designers, and other computer professionals, here is a welcome in-depth study of the VMS file system, Version 5.2. You'll find it helpful in understanding the data structures, algorithms, interfaces to, and basic synchronization mechanisms of the VMS file system - that part of the operating system responsible for storing and managing files and information in memory and in secondary storage. The book is also fascinating as a case study of the VMS implementation of a file system. Topics include the Files-11 On-Disk Structure, volume structure processing (including the handling of both on-disk and in-memory data structures), system-wide caching algorithms, clusterwide coordination techniques, and the I/O subsystem - including the extended $QIO processor (the Files-11 XQP).
A complete description of Digital's Network Applications Support (NAS) architecture.
Designed for anyone interested in learning about the NAS architecture - including application developers, technical consultants, Independent Software Vendors (ISVs), Value-Added Resellers (VARs), and Digital's Integrated Business Units (IBUs) - NAS ARCHITECTURE REFERENCE MANUAL provides information on the NAS services and the key public interfaces supported by each service.
A practical guide for integrating your PCs Microsoft WINDOWS graphical environment into enterprise-wide networks.
WORKING WITH TEAMLINKS is a practical guide to Digital's office system for the Microsoft Windows graphical user environment. Its thorough coverage will help experienced and inexperienced users, programmers, and system implementers realize the benefits while avoiding the pitfalls of using PCs in an integrated multivendor office system. The book shows how the TeamLinks File Cabinet works, how TeamLinks mail flows, how to streamline business processes with the TeamRoute document-routing system, and how to integrate applications in a TeamLinks environment. It discusses the problems of implementing a PC-based office system and of managing the process of migration from ALL-IN-1 IOS, Digital's minicomputer-based office system. An appendix documents TeamLinks internal codes and presents other interesting information. A companion diskette contains many sample programs that can be used as a base for your own solutions.
An introduction to the architecture AND the software and middleware of NAS.
NAS (Network Applications Support) is both a comprehensive architecture and set of software products designed to provide a framework that makes it possible for applications developers to enhance those characteristics of computing applications that promote interoperability, application distributability, and application portability among applications that run on Digital's computing platforms as well as those from other vendors, such as IBM, Hewlett-Packard, SunMicrosystems, Apple Computer, and others. For managers, executives, and information systems staff, NAS describes both types of NAS products - 1) those that comprise a set of development toolkits that provide services directly to computing applications; both Digital applications and user-written applications. This important new class of software, called middleware, operates as an intermediary between application programs and the underlying hardware/software platform, and 2) those that build on the NAS middleware to provide services directly to the end users of computing services.
An introduction and tutorial - as well as a comprehensive reference - on ALL of C-Kermit's capabilities.
For new and seasoned BITNET users alike.
The first book to cover BITNET exclusively, this volume addresses the needs of those who have never used a national computer network, as well as those who are familiar with accessing BITNET from the VMS operating system. Comprehensive in coverage, it details the many aspects of using BITNET - from electronic mail to searching remote databased to carrying on RELAY conversations with other users around the world. Appendixes provide specific programs and listings of the more popular mailing lists, digests, and electronic magazines.
A book/disk package for gathering and sending electronic information around the world - with your PC and MS-DOS Kermit.
THE definitive tutorial/reference on the detailed specification of the Kermit file transfer protocol.
The first book-length introduction to one of the fastest growing LAN standards on the market today.
Based upon the primer that received a 1991 Award for Excellence from the Society of Technical Communications (STC), FDDI: AN INTRODUCTION TO FIBER DISBRIBUTED DATA INTERFACE is the first book devoted to this new high-speed, high-bandwidth standard. A concise and thorough technical introduction to the subject, it covers all aspects of the FDDI standard - from its protocol specifications to its implementation and management in real-world, large-enterprise, local area networks. The book is written and designed for rapid comprehension by computer systems managers, telecommunications managers, and communications professionals who make decisions regarding networks for their organizations. The FDDI technology and applications are extensively illustrated, but without mention of Digital's FDDI products. An extensive glossary defines key networking, LAN and FDDI terms.
Your map through the network jungle.
Here's how to track down virtually every network available to academics and researchers. This new book, with its detailed compilation of host-level information, provides everything you need to locate resources, send mail to colleagues and friends worldwide, and answer questions about how to access major national and international networks. Extensively cross-referenced information on ARPANET/MILNET, BITNET, CSNET, ESnet, NSFNET, SPAN, THEnet, USENET, and loads of others is all provided. Included are detailed lists of hosts, site contacts, administrative domains, and organizations. Plus, a tutorial chapter with handy reference tables reveals electronic mail `secrets' that make it easier to take advantage of networking.
From the authorities on information processing.
- Marshall Rose, PSI, Inc., and author, THE OPEN BOOK: A PRACTICAL PERSPECTIVE ON OSI Broaden your understanding of how large networks are designed with this introduction to the concepts surrounding Digital Network Architecture. This useful book blends an OSI tutorial with a complete look at how OSI technology is used in a Digital computer network. You'll gain useful insights into OSI and the process of OSI standardization as well as implementation - all presented in a straightforward, easy-to-follow style.
A look at how ten American colleges and universities bridged the gap between computing, administrative, and library organizations.
Detailed case studies from ten American colleges and universities will prepare you to make better plans and decisions for an electronic library, integrated information management system, or unified information resource. You'll find models and guidelines covering reference services, latest philosophies and strategies, management and organization issues, delivery mechanisms, and more.
A hands-on encyclopedic directory of specific networks and conferencing systems that encompass millions of users across every continent.
This complete, central source lets you tap into the worldwide information-sharing network of engineers, scientists, and researchers - no matter where you live or work. It's a hands-on encyclopedic directory of specific networks and conferencing systems that encompass millions of users across every continent! Following a useful overview, you can look up specific systems, their interconnections and uses, history, funding, standards and services - all organized country-by-country for your convenience. Clearly drawn maps and hundreds of references help you easily connect to the global matrix.
A hands-on account of the design, implementation, and performance of Project Athena.
Project Athena at MIT has emerged as one of the most important models to date for next-generation distributed computing in an academic environment, and this book is the first to describe this landmark project. Pioneered by MIT in partnership with Digital and IBM, Project Athena is distinguished by its magnitude and its reliance on workstations, each providing the same user interface regardless of architecture, and each sharing the same programs and data through a high-speed electronic network. Based on thousands of pages of reports and the author's own experience, this important book lets you in on the design, implementation, and performance of Project Athena - now a production system of networked workstations that is replacing time-sharing (which MIT also pioneered) as the preferred model of computing at MIT. The book is organized in four parts, covering management, pedagogy, technology, and administration. Appendixes describe deployment of Project Athena systems at five other schools, provide guidelines for installation, and recommend end-user policies.
Long considered the definitive work among networking references, this book covers the spectrum of data communications technology. It presents the latest information on layered protocols, fast and error- correcting modems, smart multiplexers, digital transmission (T1), packet-switching, and ISDN. You'll also find an expanded discussion of local area networks, an introduction to the popular Kermit file transfer protocol, information on the EIA-530 specification, and updated material on communication LSI circuits. McNamara provides you with a practical approach to understanding communications systems, starting from a simple asynchronous interface called a UART and proceeding through more intricate problems of data communications system design. You'll learn about communication lines and interface standards (EIA-232 to EIA-530), modems and modem control, error detection, communication protocols, digital transmission systems, integrated services, and local area networks. If you're designing a system, purchasing hardware or software, or simply expanding your knowledge, this book is indispensable!
A nontechnical explanation of how local area networks work, what they do, and how you can benefit from them.
This concise book provides an objective introduction to local area networks - how they work, what they do, and how you can benefit from them. It outlines the pro's and con's of the most common configurations so you can evaluate them in light of your own needs. You'll also learn about network software, with special emphasis on the ISO layered model of communications protocols.
A convenient one-volume source for the most pertinent information on Xlib, X Toolkit Intrinsics, and the OSF/Motif programming libraries.
and all of the Motif Widgets.
New routines on Xlib to better provide support for internationalization and localization.
For new and experienced VMS users alike.
USING DECWINDOWS MOTIF FOR VMS is designed to help new VMS DECwindows users explore and apply DECwindows techniques and features, and to provide experienced DECwindows users with practical information about the Motif interface, ways to customize environments, and advanced user topics. VMS DECwindows Motif is based on MIT's specification for the X Window System, Version 11, Release 4 and OSF/Motif 1.1.1.
A technical reference covering every aspect of the sample server developed by the MIT X Consortium.
Using examples, guidelines, and tutorials, as well as material on theory and practice, this comprehensive reference provides essential information to knowledgeable X users who want to learn about the basic interactions between client and server - including developers who want to port, extend, tune, or test a server.
Part I, ARCHITECTURE, discusses the structure of the sample server, including the server's major data structures and their interactions, the flow of control within the server, how the server handles input devices and events, how the server maintains the window tree, and how it keeps graphics context up to date with drawables.
Part II, PORTING AND TUNING, explains in detail the process of porting the server to new display hardware or a new operating system, and discusses strategies for tuning an existing server.
Part III, WRITING EXTENSIONS, describes the server extension mechanism, its interface, and the design and implementation of new extensions.
Conforms to Release 5 - four books in one!
Written by the experts who originally designed and created the X Window System, this Third Edition is a major revision of the definitive reference describing each standard specification from the MIT X Consortium. It conforms to the latest release: X Version 11, Release 5. With this latest edition in hand, software developers and others can now take advantage of the significant new functionality in Release 5. And, in addition to fully integrating the important new features of this latest release, the original text has been significantly revised for clarity and easier access. Instructive diagrams, a detailed glossary, and a comprehensive subject-oriented index further enhance the book's overall value.
Part I, Xlib - C Language X Interface , describes the lowest level C language X programming interface to the X Window System. Essential for all X programmers, even those using higher level toolkits such as X Toolkit Intrinsics.
Part II, X Window System Protocol, details the precise specification of the X protocol semantics.
Part III, Inter-Client Communications Conventions Manual, discusses the conventions that govern X inter-client communication.
Part IV, X Logical Font Description, describes the conventions for font names and font properties in the X Window System.
A quick and thorough introduction to Motif programming.
Here is a straightforward, easy-to-understand introduction to Motif application development, covering both basic and advanced features of graphical user interfaces as implemented under Motif. Even though you may have little or no experience with X or other window programming environments, this useful guide will ease you into Motif programming smoothly and quickly. Using simple examples and explanations, it shows you how to design and build graphical applications with Motif in a reasonable amount of time. By the end of the book, you'll be familar with all of the Motif widgets as well as the process of application design in Motif, the basic capabilities of the X and Xt layers, and the X drawing model.
Written by the two leading designers of the X-Tool kit, this is the authorative guide to Version 11, Release 4 of the X Window System Toolkit. It's organized in two useful parts - a programmer's guide and a specification. You'll learn how to use the various features as well as how those features operate.
The Programmer's Guide, complete with over 100 pages of programming examples, illustrates how to use the X Toolkit to write applications and widgets. Each chapter contains both an application writer's section and a widget writer's section.
The Specification describes the capabilities more succinctly and precisely. It offers sufficient detail for a programmer to create a new implementation.
The defacto standard, a must-have for all Lisp programmers.
Plus other subjects not part of the ANSI standards but of interest to professional programmers.
Throughout, you'll find fresh examples, additional clarifications, warnings, and tips - all presented with the author's customary vigor and wit.
Find out what actually works (and what doesn't) in the development of commercial expert system applications - based on the author's decade of experience in building expert systems in all major areas of application for American, European, and Japanese organizations. Focusing on actual tested procedures, this how-to book offers practical methods for initiating, designing, building, managing, and demonstrating successful expert systems. You'll discover engineering programming techniques, useful skills for demonstrating expert systems, practical costing and metrics, guidelines for using knowledge representation techniques, and solutions to common difficulties in design and implementation.
For the professional Lisp programmer, this important book is an introduction to programming in the Common Lisp Object System. This object-oriented system - an extension of Common Lisp - incorporates the concepts of classes, generic functions, methods, multiple inheritance, and method combination. Complete with code examples, the book has been adapted directly from the X3J13 Document 88-002R, submitted to the X3J13 Committee in June 1988.
How to use information technology to gain the competitive edge.
Cut time and expense without sacrificing quality.
Focusing on work environments in which knowledge workers use electronic networks and networking techniques to access, communicate, and share information, this book develops strategic and practical approaches that distributed organizations can use to succeed and compete. You'll explore how such groups can decrease the time required to create quality products and services - even when their work teams are geographically and culturally dispersed, or ``Working Together Apart.'' With a foreword by organizational expert Edgar H. Schein of MIT's Sloan School, this book will be indispensable for managers and planners seeking innovative ways to build the capabilities necessary for prosperity in this complex and rapidly changing world.
Digital Equipment Corporation ``Book of the Year: this year's honors go to Charles Savage for Fifth Generation Management. Savage joins a select few ... who exquisitely explain tomorrow's bizarre organizational arrangements, where hierarchies are leveled and the imagery of networks and spider webs butts out yesterday's pyramids.'' -Tom Peters, Co-author of IN Search of Excellence Explore the shift from steep top-down management to human networking, as companies with second-generation management attempt to absorb fourth- and fifth-generation technologies, like computer-integrated manufacturing (CIM) and computer-integrated enterprises (CIE). This provocative yet readable book looks into the assumptions, values, and ways of relating in the workplace that must change as companies move beyond the ``technology wall'' into the future.
Benefit from the practices of the best logistics managers.
One of the nation's top authorities on logistics management presents a focused interpretation of research findings to help managers improve logistical competency within their organizations. Zeroing in on the best practices of successful logistic managers - and well supported by statistical evidence - this handbook provides a sequential model as well as extensive coverage of Elecronic Data Interchange in the logistics process. You'll find out why logistics must play an increasingly critical role in overall corporate strategy in the coming years, and why its managers must learn to better manage change. Special emphasis is placed on the development of strategic alliances to increase corporate speed and quality. Throughout the eight chapters, an action-oriented case dialogue facilitates interest and ease of reading.
Selecting and using software to tackle critical manufacturing problems.
Here's practical advice for managers on how to take advantage of an MRP II software system to plan and monitor your manufacturing, marketing, finance, and engineering resources. Each chapter focuses on a specific problem - such as material shortages, high inventory, or poor quality - and explains possible causes plus the solutions provided by MRP II packages. Find out about the advantages and drawbacks of MRP II (as well as other systems), how to evaluate features and select the right system, and how to use it to solve pressing business problems.
With a Foreword by Ken Olsen.
The official guide to the writing of technical information for users of Digital hardware and software products.
This useful reference addresses the key tasks that are integral to realtime software development in manufacturing plants: managing the design of the system, setting up and coordinating a development organization, and implementing tools for successful completion and management. Whether you're a new or experienced project manager, you'll discover useful advice and easy-to-follow procedures that help you cut time and costs - and stay competitive. You'll find out how to use concurrent methodologies to create realtime systems in half the time it usually takes - resulting in systems that are more flexible, more predictable, and far less costly than systems developed sequentially.
Direct from Digital's Information Design and Consulting Group.
Bridges the transition from one operating system to another.
Learn the in's and out's of four widely used operating systems - VMS, UNIX, OS/2, and MS-DOS - with this all-inclusive guide. Written by an experienced computer systems support specialist, it examines scheduling, synchronization techniques, file management, memory management, and more - bridging the transition from one operating system to another. The book helps you design applications that run on one or more of these systems or port existing applications to a new system. You'll benefit from the many examples - provided as building blocks to more elaborate implementations - as well as roadmaps that review chapter contents, summaries, and reference lists for quick look-up of information. A useful collection of algorithmic techniques and coding samples make this a comprehensive handbook for any programmer or software designer who wants to perform common functions among various operating systems in a multivendor environment.
Technical writing is more like investigative reporting than like scientific writing, requiring practitioners to gather information rapidly, identify audiences, and use creativity and problem-solving skills. Find out from an experienced technical documentation expert how to gather, dissect, and understand technical information- and how to organize and present it for the reader. Complete with illustrations, glossary, and useful appendixes including resources as well as international documentation standards, the book covers planning and process, research techniques, use of graphics, audience analysis, the role of standards, and careers.
Benefit from Digital's secrets for overseas success.
For hi-tech companies operating in today's global economy, here is an indispensable guide that can help you reduce the expense and time demanded for translating your user information. This unique book introduces you to Digital's success-proven methods for creating and packaging written, visual, and verbal user information that can be easily understood outside its country of origin and readily translated into other languages. Find out how to adapt the writing and organizational practices of an industry leader to speed your own process along.
What Is International User Information?
A valuable model for your own company.
Here is the first published description of the processes and practices, tools, and methods this industry giant uses to develop its software products. This ``shirt-sleeves'' guide is packed with diagrams and tables that illustrate each step in the complex software development process. You'll learn all about Digital's standard ``phase review process,'' the role of teams and their leaders, how computer-assisted software engineering tools (CASE) work, and how to control a project while improving productivity and product quality.
A working guide that helps you look at your system from the end-user's viewpoint.
This practical guide and accompanying audiotape explain over 90 guidelines and considerations for designing, installing, and implementing systems that focus on human needs. You'll discover unique approaches to human-computer interactions, backed up by pertinent research and realistic examples to illustrate the points. The one-hour audiotape features in-depth interviews with the authors, who emphasize the human approach and offer you dynamic new ideas for building successful systems.
This book will help programmers who want to take advantage of MUMPS and its unique features as a database management tool in business and academic environments. You'll move logically through from an introduction to programming, to a presentation of MUMPS as a programming language which is both interpretive and a shared database manager. Elements of the language, such as data types, syntax, command, and program creation, are covered in depth. With numerous practical examples and exercises including answers.
A complete overview of the Power PC processor - with information on the instruction set and examples of how to program it.
The first new PCs from the joint IBM/Apple venture are about to appear. Using the new Power PC RISC microprocessor designed by IBM, Motorola, and Apple, these new PCs will enable Apple Mac software to run on IBM PCs and vice versa. This book is a practical guide to the new microprocessor. It will be a valuable reference to engineers programming the new IBM and Apple PCs and to students and hobbyists with an interest in what is going on inside the computer.
Covers advanced window types and window management.
Anyone who is serious about Windows programming and requires a book that contains practical examples and applications will want this book. The WINDOWS ADVANCED PROGRAMMING AND DESIGN GUIDE provides plenty to stimulate the mind. Its an advanced look at Windows (both architectural and API levels) that provides many of those long sought after answers. Topics covered include advanced dialog techniques and the dialog manager, genuine custom control building, advanced dynamic link library theory and practice, super/sub-classing and hooks, and much, much more.
Practical emphasis and examples - using C++, Turbo Pascal and Visual Basic to demonstrate OOP principles.
Unlike most books on the subject, OBJECT ORIENTED PROGRAMMING UNDER WINDOWS uses practical examples to show how OOP techniques can be applied in the real world, with particular emphasis on programming under Windows. This timely book will appeal to anyone intending to write programs and experienced programmers who are planning to move from structured methods to OOP methods.
At a practical level for experienced programmers - with an emphasis on actual problems and solutions.
This book for the experienced software programmer presents a rigorous methodology, based on Jackson Structured Design (JSD), and clearly demonstrates how it can be applied to a wide variety of data processing problems. With a clear emphasis on the invaluable distinction between the software life cycle, STRUCTURED PROGRAM DESIGN will suit programmers striving to become competent software designers.
This book provides a practical introduction that will enable the user to become proficient in actually using Windows NT. It starts from basic principles for the beginner and leads effortlessly into areas of interest to the power user and system administrator. Almost every aspect of Windows NT is explained in practical, concise text. End users will find it useful in learning how to work with Windows and accessories. Power users will learn the ins and outs of Control panel, OLE, and workgroups. Administrators can come to grips with User Manager and Event Viewer.
The number of PC viruses is now in the range of 2500. How does the average PC user or major company prevent massive data loss from virus attack? This comprehensive guide to PC viruses provides the answers and gives solid advice on preventative measures. It sets out to explain how PC viruses work and how to guard against a viral attack. And for the user unlucky enough to come across an infected PC, there is ample coverage of the steps to take in combating such a dilemma. Leading anti-virus software is exhaustively evaluated.
Practical information for understanding the basic technology of PC networks.
With networks frequently seen as a panacea for most office automation problems, it is essential to understand the basic technology used within such systems and its limitations and advantages. EFFECTIVE PC NETWORKING describes this technology used within most PC networks and gives a lot of practical advice on how to avoid the problems and pitfalls.
Provides numerous tips to boost productivity - valuable to DOS users as well.
The majority of books written on WordPerfect focus on the program rather than on your work; on the tool rather than the job. GETTING THE JOB DONE WITH WORDPERFECT FOR WINDOWS is for the user whose job is to produce office documents in a business or other professional environment and is already familiar with the basis of word processing. It explains program functions within the context of recognizable tasks and jobs, such as the creation of memos and letters. It is based on Windows 3.1 and WordPerfect for Windows but DOS users of WordPerfect will also find this book a valuable source.
Practical help for learning the basics of - and becoming proficient in - UNIX.
This book teaches the student and new business user of UNIX the concepts and use of commands by fully worked practical examples. By working through these examples, the reader will gain the understanding and confidence needed to become a proficient UNIX user. No previous UNIX experience or any computer system is required to make full use of this book.
In-depth information on the programming languages of HyperCard and Plus.
This book provides in-depth information on HyperTalk and PPL, the programming languages of HyperCard and Plus. In addition to the possibility of creating applications that can be used without modification with Windows 3 on IBM compatibles and on the Macintosh, the approach followed makes it possible to explore the basics of object-oriented programming in graphical environments and is well suited to act as an introduction to the graphical environment for programmers skilled in languages such as Pascal or C.
For quality practitioners in all types of organizations - addresses each requirement of each clause.
Written with the busy manager in mind, the ISO 9000 QUALITY SYSTEMS HANDBOOK is a manual for those managing, designing, implementing, auditing, or assessing quality systems that aim to meet the Standard. In this book the mysteries of how to become and remain accredited to the quality system standard are solved. The approach taken is appropriate for quality practitioners of all types in any organization. Unlike other guides, this book addresses each individual requirement of each clause. One by one the purpose of the requirements is explained, and guidance in meeting them is offered.
Covers a wide range of knowledge processing technologies - case studies illustrate practical commercial application.
This practical book provides an explanation of AI concepts and techniques. It shows students how they can solve business problems with AI technologies, and analyze the strategic impact of AI technology on organizations.
Arms * Campus Strategies for Libraries and Electronic Information.
Brunner * VAX Architecture Reference Manual, 2nd ed.
Digital * Digital Dictionary, 2nd ed.
Gianone * Using MS-DOS Kermit, 2nd ed.
Levy & Eckhouse * Computer Prog. and Architecture, 2nd ed.
McNamara * Technical Aspects of Data Communication, 3rd ed.
Rost * X and Motif Quick Reference Guide, 2nd ed.
Sawey & Stokes * Beginner's Guide to VAX/VMS Utilities and Applications, 2nd ed.
Scheifler et al. * X Window System, 3rd ed.
Siewiorek & Swarz * Reliable Computer Systems, 2nd ed.
Steele * Common Lisp, 2nd ed.
Developing International User Information * Jones et al.
Logistical Excellence * Bowersox et al.
Production Software That Works * Behuniak et al.
X Window System, 3rd ed. * Scheifler et al. | http://docs.warhead.org.uk/cltl/digital_press/dp_catalog.html |
The introduction of Windows XP in 2001 marked the first mainstream (not just for business users) version of Windows to incorporate the Windows NT kernel. In addition to better plug-and-play support and other improvements, XP sported a revamped user interface with true-color icons and lots of shiny, beveled effects. Not wanting to look out of style, and smelling another sell-up opportunity, the Office group rushed out Microsoft Office XP (aka Office 10), which was nothing more than a slightly tweaked version of Office 2000 with some UI updates.
Hardware had evolved a bit in the two years since the Windows 2000 launch. For starters, Intel had all but abandoned its ill-fated partnership with Rambus. New Intel designs featured the more widely supported DDR-SDRAM, while CPU frequencies were edging above 2GHz. Intel also upped the L2 cache size of the Pentium 4 core from 256KB to 512KB (the Northwood redesign) in an attempt to fill the chip's stall-prone 20-stage integer pipeline. Default RAM configurations were now routinely in the 256MB range, while disk drives sported ATA-100 interfaces.
Windows XP, especially in the pre-SP2 timeframe, wasn't all that more resource intensive than Windows 2000. It wasn't until later, as Microsoft piled on the security fixes and users started running anti-virus and anti-spyware tools by default, that XP began to put on significant weight. Also, the relatively modest nature of the changes from Office 2000 to Office XP translated into only a minimal increase in system requirements. For example, overall working set size for the entire Office XP suite during OfficeBench testing under VMware was only 10MB, just 1MB higher than Office 2000, while CPU utilization actually fell 1 percent across the three applications (Word, Excel, and PowerPoint). This did not, however, translate into equivalent performance. As I noted before, Office XP on Windows XP took 17 percent longer than Office 2000 on Windows 2000 to complete the same OfficeBench test script. View the overall test results. View more detailed test results at xpnet.com.
I was fortunate enough to be able to dig up a representative system of that era: A 2GHz Pentium 4 system with 256MB of RAM and integrated Intel Extreme graphics. Running the combination of Windows XP (SP1) and Office XP on bare iron allowed me to evaluate additional metrics, including the overall stress level being placed on the CPU.
By sampling the Processor Queue Length (I ran the DMS Clarity Tracker Agent in parallel with Clarity Studio and OfficeBench), I was able to determine that this legacy box was only moderately stressed by the workload. With an average Queue Length of three ready threads, the CPU was busy but still not buried under the computing load. In other words, given the workload at hand, the hardware seemed capable of executing it while remaining responsive to the end-user (a trend I saw more of as testing progressed).
The Industrial Revolution: Windows XP/Office 2003
Office 2003 arrived during a time of real upheaval at Microsoft. The company's next major Windows release, code-named Longhorn, was behind schedule and the development team was sidetracked by a string of security breaches in the Windows XP code base. The resulting fix, Windows XP Service Pack 2, was more of a relaunch than a mere update. Whole sections of the OS core were either replaced or rewritten, and new technologies – such as Windows Defender and a revamped firewall – added layers of code to a rapidly bloating platform.
Into this mess walked Office 2003, which, among other things, tried to bridge the gap between Windows and the Web through support for XML and the ability to store documents as HTML files. Unlike Office XP, Office 2003 was not a minor upgrade but a major overhaul of the suite. And the result was, not surprisingly, more bloating of the Windows/Office footprint. Office suite memory consumption went up modestly to 13MB during OfficeBench testing, while CPU utilization remained constant versus previous builds, despite the fact that the suite was spinning an extra four execution threads (the overall thread count was up by 15).
Where the bloat took its toll, however, was in raw application throughput. Completion times under VMware increased another 8 percent vs. Office XP, putting the Windows XP (SP2) and Office 2003 combination a full 25 percent off the pace of the original Windows 2000/Office 2000 numbers from three years earlier. In other words, with all else being equal – hardware, environment, configuration – Microsoft's desktop computing stack was losing in excess of 8 percent throughput per year due to increased code path complexity and other delays. View the overall test results. View more detailed test results at xpnet.com.
Of course, all else was not equal. Windows XP (SP2) and Office 2003 were born into a world of 3GHz CPUs, 1GB of RAM, SATA disks, and symmetrical multithreading (that is, Intel Hyper-Threading). This added hardware muscle served to offset the growing complexity of Windows and Office, allowing a newer system to achieve OfficeBench times slightly better (about 5 percent) than a legacy Pentium 4 system, despite the latter having a less demanding code path (TGMLC in action once again).
Welcome to the 21st century: Windows Vista/Office 2007
Given the extended delay of Windows Vista and its accompanying Office release, Microsoft Office 2007, I was understandably concerned about the level of bloat that might have slipped into the code base. After all, Microsoft was promising the world with Vista, and early betas of Office showed a radically updated interface (the Office "ribbon") as well as a new, open file format and other nods to the anti-establishment types. Little did I know that Microsoft would eventually trump even my worst predictions. Not only is Vista and Office 2007 the most bloated desktop software stack ever to emerge from Redmond, its system requirements are so out of proportion with recent hardware trends that only the latest and greatest from Intel or AMD can support its epically porcine girth.
Let's start with the memory footprint. The average combined working set for Word, Excel, and PowerPoint 2007 when running the OfficeBench test script is 109MB. By contrast, Office 2000 consumed a paltry 9MB, which translates into a 12-fold increase in memory consumption (170 percent per year since 2000!). To be fair, previous builds of Office benefited from a peculiar behavior common to all pre-Office 12 versions: When minimized to the task bar, each Office application would release much of its noncritical working set memory. This resulted in a much smaller memory footprint, as measured by the Windows performance counters (which are employed by the DMS Clarity Tracker Agent used in my tests).
Microsoft has discontinued this practice with Office 2007, resulting in much higher average working set results. However, even factoring in this behavioral change, the working set for Office 2007 is truly massive. Combined with an average boot-time image of more than 500MB for even the minimal Windows Vista code base, it seems clear that any system configuration that specifies less than 1GB of RAM is a nonstarter with this version. And none of the above explains the significantly higher CPU demands of Office 2007, which are nearly double that of Office 2003 (peak utilization of 73 percent versus 39 percent). Likewise, the number of execution threads spawned by Office 2007 (32) is up, as is the total thread count for the entire software stack, which is nearly double the previous version (615 versus 370). View the overall test results. View more detailed test results at xpnet.com. Compare memory consumption.
Clearly, this latest generation of the Windows/Office desktop stack was designed with the next generation of hardware in mind. And in keeping with the TGMLC pattern, today's latest and greatest hardware is indeed up to the challenge. Dual cores, combined with 4MB or more of L2 cache, have helped to sop up the nearly doubled thread count, while 2GB standard RAM configurations are mitigating the nearly 1GB memory footprint of Vista and Office 2007.
The net result is that, surprise, Vista and Office 2007 on today’s state-of-the-art hardware delivers throughput that's still only 22 percent slower than Windows XP and Office 2003 on the previous generation of state-of-the-art hardware. In other words, the hardware gets faster, the code base gets fatter, and the user experience, as measured in terms of application response times and overall execution throughput, remains relatively intact. The Great Moore's Law Compensator is vindicated.
Give and take
The conventional wisdom regarding PC evolution, that Microsoft devours every Intel advance, continues to hold true right up through the current generation of Windows Vista and Office 2007. What's shocking, however, is the way that the IT community as a whole has grown to accept the status quo. There is a sense of inevitability attached to the concept of the Wintel duopoly, a feeling that the upgrade treadmill has become a part of the industry's DNA. Forces that challenge the status quo – such as Linux, Google, and Apple – are seen as working against the very fabric of the computing landscape.
But as Microsoft is learning, you can only push your customers so far before they push back. In the case of Windows Vista, the combination of heavy hardware requirements and few tangible benefits to IT has resulted in a mass rejection of Microsoft’s latest and greatest. Companies are finally saying enough is enough and stepping off the treadmill, at least for a while. Microsoft’s challenge will be to woo these customers back, and they can start by taking a hard look at their OS and application development practices. Instead of targeting the next generation of hardware, Microsoft engineers should try making sure that their new features and functions work well on the hardware of today, thus guaranteeing that they won’t overshoot their target and disrupt the fragile TGMLC balance. Maybe then customers will start looking forward to upgrading for a change. | https://www.infoworld.com/article/2650502/fat--fatter--fattest--microsoft-s-kings-of-bloat.html?page=2 |
Why do civilizations collapse? For some civilizational collapses in the past there is one striking event that leads to a quick end, like environmental damage for the people on the Easter Islands. For other civilizations, several causes interact with each other, as it happened for the Maya, where environmental damage, climate change and hostile neighbors sealed their fate (Diamond, 2011). But the most mysterious case I came across is the simultaneous collapse of almost all major empires in the Mediterreanen during the late Bronze Age. In only around 50 years between 1200 and 1150 BCE the Mycenaean Kingdoms and the Hittite Empire dissolved, Egypt lost its control over the Levante and broke down shortly after. Only Elam and the Assyrian Empire survived somewhat intact. To put this into perspective, there weren’t many states around in this time of human history. Depending on how you count it this means that most existing states on Earth in this time either collapsed or declined. There is less data about China and India for the late Bronze Age, but there are some indications that they also struggled at a similar time, which would make this a global crisis that involved every state that existed at that time. However, as the data from India and China is very scarce I will focus on the Mediterranean, describe my current understanding of the Bronze Age collapse and what we might be able to learn from it today.
So, what happened? This is still debated much and there are many conflicting theories, but I try to tell here the story that seems most likely to me (I am not a historian, so maybe take this with a grain of salt). I’ve seen many papers that traced the collapse back to only one or a few causes, but it made more sense to me that basically all proposed causes are responsible to some degree and I try to highlight their connections in the text and Figure 2.
We know from the archaeological record that most of the cities in the region were destroyed by force (Figure 1). The people who destroyed those cities are usually simply called “Sea People”. They have this vague name because they do not consist of a single culture. Depending on the location and year their composition varied considerably (Burlingame, 2011). Also, they do not seem to have any overarching command structure and attacks happen more or less randomly. From the movement of their attacks we can see that the general direction the sea people moved was from north to south and it seems that along the way they either destroyed the local population or integrated them into their move southwards (Carlin, 2019).
Other research links these movements to a drought that led to widespread famines (Weiss, 1982), which was likely caused by a cooling of the mediterranean sea (Drake, 2012). This decreased evaporation, which led to reduced precipitation, resulted in a drought and famine. The sea itself cooled because of a large volcanic eruption of Mount Hekla in Iceland (Yurko, 1999), that likely produced a volcanic autumn that reduced temperature globally and clouded the sky (this is also the link to the problems of the civilizations in India and China).
Figure 1: Map of the Sea People invasions in the Aegean Sea and Eastern Mediterranean at the end of the Late Bronze Age (blue arrows). Some of the major cities impacted by the raids are denoted with historical dates. Inland invasions are represented by purple arrows. (Source: Kaniewski et al., 2011)
But this climatic approach does not explain everything. The civilizations in this part of earth already survived similar events in the past. For example the destruction of the Minoan civilization on Crete (which is in the middle of the eastern Mediterranean) was caused by another major volcanic eruption (Marinatos, 1939). However, all other civilizations survived mostly unharmed. This indicates that also the societal structure comes into play. There are indications that the civilizations of that time were highly centralized, with the main cities and palaces collecting all of the output of the surrounding farms and workshops (Nakassis et al., 2011). The resources collected in this way were then used to trade and to satisfy key supporters of the ruling class. However, this system relies on the ability of the central government to collect the output of everyone else. This ability was disrupted by the changing climate. The cities in the Mediterranean usually had a higher need for food and other resources to maintain themselves than their immediate surroundings were able to deliver. Thus, they relied on the smaller cities to regularly provide them with food. With the changing climate, the smaller cities needed the food they produced for themselves and stopped their deliveries to the main cities (Knitter et al., 2019). Without those deliveries the central authority had two problems at once. Suddenly there is a shortage of daily everyday goods the citizens needed but, they also missed out on valuable trade goods. Those trade goods were quite important, as the states of that times were connected in a vast trade network reaching from Egypt to Scandinavia (Varberg et al., 2020). This trade network was essential, as all states of that time needed tin and copper to produce bronze. While copper is relatively abundant and can be mined in many places, large tin deposits are much rarer. Major deposits can be found in present day Spain, Southwest England and Afghanistan (Berger et al., 2019). Therefore, without trade you lose the access to tin, which means you cannot smelt bronze anymore. As bronze was especially important for the production of weapons and armor, a loss of it means a reduced ability to defend yourself against invaders. Another important function of the trade network was to keep in connection with the other empires. There are several indications that the empires helped each other out with food and military aid during emergencies (Burlingame, 2011). What also made defense much harder was a series of earthquakes that hit the Mediterranean at that time. They likely destroyed much of the fortifications and palaces in the region, which left the cities defenseless and in chaos (Nur and Cline, 2000). All those factors were further enhanced by the impact of diseases. There are several indications that a kind of pandemic was rampant during this time as well. However, it is unclear which diseases it was. Likely candidates include the bubonic plague and smallpox (Norrie, 2016). This destabilized society and decreased the available resources, which led to migration and finally to pressure on other cities. The final contributing factor I found is civil war. There are several cities where only the palace is burned down, while the rest of the city remained unharmed, which is an indication of a violent uprising (Carlin, 2019).
All those factors draw a very complicated picture of the Bronze Age collapse (Figure 2). There is not a single defining cause for the collapse. Every factor on its own would likely not have been enough to topple several Bronze Age empires. However, their interaction led to a quick end of the Bronze Age, which had been relatively stable for centuries or even millennia.
Figure 2: Interaction of the different factors that led to the late Bronze Age collapse (own figure).
The reason I am trying to tell this story is that it reminds me of the current situation of the world. There are many factors contributing to a complex system of which we don't always have a deep understanding or overview of. It is really hard to pinpoint the date when it is too late for a system before it collapses. When was the point the fate of the people of the Bronze Age was sealed? When Mount Hekla erupted? When the centralized structure collapsed? When tin became unavailable? The causes that seem most likely to me is the eruption of Mount Hekla and the series of earthquakes.Those two events started a kind of chain reaction by interacting with the way the society of that time was organised. A volcanic eruption that hit a different kind of society would not have been so fatal, as was shown by the earlier eruption at Crete. However, it is almost impossible to pinpoint the main cause of or even determine if there is something like this. This highlights how difficult it is to really find the exact tipping point of a system.
There is also some research that shows that humans have difficulties finding tipping points and that we might even be lacking the right techniques to do this right (Dudney and Suding, 2020). Therefore, I think we should be more careful when it comes to tipping points in our modern world. Right now this seems most urgent for climate change. We have a lot of tipping points in our climate system like thawing permafrost or a quickly dying rainforest (Lenton et al., 2019). And even though we are getting to understand those better, there are still many unknowns. We can learn from the Bronze Age collapse that a civilization can shine its brightest shortly before its destruction (Carlin, 2019). And in many respects the civilization of that time was not that different to the one we have today, like the vast trade networks, the interdependence of states and the need to acquire certain scarce resources. However, we also have more resources and more knowledge than the people in the Bronze Age. Also, we are much more connected and better equipped to help each other out. Still, we should not be too self assured, as the people of the Bronze Age might also have thought that they had it all figured out, right before disaster struck.
Imagine a person that was born 1230 BC in Ugarit, a prosperous city of the vast Hittite Empire. Everything seems to be going well. The cisterns and granaries are full, goods and people from all over the known world regularly arrive at the city and a surprisingly large portion of the population is able to write. Now fast forward 80 years to 1150 BC. Ugarit is a blackened ruin, the Hittite Empire simply does not exist anymore, everyone she knew is dead from disease, war and famine and the sun has not shown through the grey clouds for years.
I hope we can avoid such a fate for our current civilization.
Thanks to Jan Wittenbecher, Peter Ruschhaupt, Lutz Breuer, Matteo Trevisan and Luise Wolf for proofreading this text and providing valuable comments!
If you want to learn more about the topic you can find all the resources I used below. Also, there is a somewhat similar post from earlier this year [EA · GW].
References
Berger, D., Soles, J. S., Giumlia-Mair, A. R., Brügmann, G., Galili, E., Lockhoff, N. and Pernicka, E.: Isotope systematics and chemical composition of tin ingots from Mochlos (Crete) and other Late Bronze Age sites in the eastern Mediterranean Sea: An ultimate key to tin provenance?, edited by A. Zerboni, PLoS ONE, 14(6), e0218326, doi:10.1371/journal.pone.0218326, 2019.
Burlingame, K.: Forces of Destruction: The Collapse of the Mediterranean Bronze Age, Bachelor Thesis, Pennsylvania State University., 2011.
Carlin, D.: Chapter 3: The End of the World as They Knew It, in The end is always near: apocalyptic moments, from the Bronze Age collapse to nuclear near misses, HarperCollins Publishers, New York, NY., 2019.
Diamond, J. M.: in Collapse: how societies choose to fail or survive, Penguin Books, New York, NY., 2011.
Drake, B. L.: The influence of climatic change on the Late Bronze Age Collapse and the Greek Dark Ages, Journal of Archaeological Science, 39(6), 1862–1870, doi:10.1016/j.jas.2012.01.029, 2012.
Dudney, J. and Suding, K. N.: The elusive search for tipping points, Nat Ecol Evol, doi:10.1038/s41559-020-1273-8, 2020.
Kaniewski, D., Van Campo, E., Van Lerberghe, K., Boiy, T., Vansteenhuyse, K., Jans, G., Nys, K., Weiss, H., Morhange, C., Otto, T. and Bretschneider, J.: The Sea Peoples, from Cuneiform Tablets to Carbon Dating, edited by K. Hardy, PLoS ONE, 6(6), e20232, doi:10.1371/journal.pone.0020232, 2011.
Knitter, D., Günther, G., Hamer, W. B., Keßler, T., Seguin, J., Unkel, I., Weiberg, E., Duttmann, R. and Nakoinz, O.: Land use patterns and climate change—a modeled scenario of the Late Bronze Age in Southern Greece, Environ. Res. Lett., 14(12), 125003, doi:10.1088/1748-9326/ab5126, 2019.
Lenton, T. M., Rockström, J., Gaffney, O., Rahmstorf, S., Richardson, K., Steffen, W. and Schellnhuber, H. J.: Climate tipping points — too risky to bet against, Nature, 575(7784), 592–595, doi:10.1038/d41586-019-03595-0, 2019.
Marinatos, S.: The Volcanic Destruction of Minoan Crete, Antiquity, 13(52), 425–439, doi:10.1017/S0003598X00028088, 1939.
Nakassis, D., Parkinson, W. A. and Galaty, M. L.: Redistribution in Aegean Palatial Societies Redistributive Economies from a Theoretical and Cross-Cultural Perspective, American Journal of Archaeology, 115(2), 177–184, doi:10.3764/aja.115.2.177, 2011.
Norrie, P.: How Disease Affected the End of the Bronze Age, in A History of Disease in Ancient Times: More Lethal than War, edited by P. Norrie, pp. 61–101, Springer International Publishing, Cham., 2016.
Nur, A. and Cline, E. H.: Poseidon’s Horses: Plate Tectonics and Earthquake Storms in the Late Bronze Age Aegean and Eastern Mediterranean, Journal of Archaeological Science, 27(1), 43–63, doi:10.1006/jasc.1999.0431, 2000.
Varberg, J., Kaul, F. and Gratuze, B.: Bronze Age Glass and Amber Evidence of Bronze Age long distance exchange, Adoranten, (2019), 5–29, 2020.
Weiss, H.: The decline of Late Bronze Age civilization as a possible response to climatic change, Climatic Change, 4(2), 173–198, 1982.
Yurko, F. J.: End of the Late Bronze Age and other crisis periods: a volcanic cause?, in Gold of praise: studies on ancient Egypt in honor of Edward F. Wente, edited by E. F. Wente, E. Teeter, and J. A. Larson, Oriental Institute of the University of Chicago, Chicago, Ill., 1999.
7 comments
Comments sorted by top scores.
comment by Max_Daniel · 2020-10-28T17:38:12.837Z · EA(p) · GW(p)
The Late Bronze Age collapse is an interesting case I'd love to see more work on. Thanks a lot for posting this.
I once spent 1h looking into this as part of a literature review training exercise. Like you, I got the impression that there likely was a complex set of interacting causes rather than a single one. I also got the sense, perhaps even more so than you, that the scope, coherence, dating, and causes are somewhat controversial and uncertain. In particular, I got the impression that it's not clear whether the eruption of the Hekla volcano played a causal role since some (but not all) papers estimate it occurred after the collapse.
I'll paste my notes below, but obviously take them with a huge grain of salt given that I spent only 1h looking into this and had no prior familiarity with the topic.
Late Bronze Age collapse, also known as 3.2 ka event
Wikipedia:
- Eastern Mediterranean
- Quick: 50 years, 1200-1150 BCE
- Causes: “Several factors probably played a part, including climatic changes (such as those caused by volcanic eruptions), invasions by groups such as the Sea Peoples, the effects of the spread of iron-based metallurgy, developments in military weapons and tactics, and a variety of failures of political, social and economic systems.”
In a recent paper, Knapp & Manning (2016) conclude the collapse had several causes and more research is needed to fully understand them:
“There is no final solution: the human-induced Late Bronze Age ‘collapse’ presents multiple material, social, and cultural realities that demand continuing, and collaborative, archaeological, historical, and scientific attention and interpretation.”
“Among them all, we should not expect to find any agreed-upon, overarching explanation that could account for all the changes within and beyond the eastern Mediterranean, some of which occurred at different times over nearly a century and a half, from the mid to late 13th throughout the 12th centuries B.C.E. The ambiguity of all the relevant but highly complex evidence—material, textual, climatic, chronological—and the very different contexts and environments in which events and human actions occurred, make it difficult to sort out what was cause and what was result. Furthermore, we must expect a complicated and multifaceted rather than simple explanatory framework. Even if, for example, the evidence shows that there is (in part) a relevant significant climate trigger, it remains the case that the immediate causes of the destructions are primarily human, and so a range of linking processes must be articulated to form any satisfactory account.”
While I’ll mostly focus on causes, note that also the scope of the collapse and associated societal transformation is at least somewhat controversial. E.g. Small:
“Current opinions on the upheaval in Late Bronze Age Greece state that the change from the Late Bronze Age to the Geometric period 300 years later involved a transformation from a society based upon complex chiefdoms or early states to one based upon less complex forms of social and political structure, often akin to bigman societies. I will argue that such a transformation was improbable and that archaeologists have misinterpreted the accurate nature of this change because their current models of Late Bronze Age culture have missed its real internal structure. Although Greece did witness a population decline and a shift at this time, as well as a loss of some palatial centers, the underlying structure of power lay in small-scale lineages and continued to remain there for at least 400 years.”
By contrast, Dickinson:
“In the first flush of the enthusiasm aroused by the decipherment of the Linear B script as Greek, Wace, wishing to see continuity of development from Mycenaean Greeks to Classical Greeks, attempted to minimize the cultural changes involved in the transition from the period of the Mycenaean palaces to later times (1956, xxxiii-xxxiv). However, it has become abundantly clear from detailed analysis of the Linear B material and the steadily accumulating archaeological evidence that this view cannot be accepted in the form in which he proposed it. There was certainly continuity in many features of material culture, as in the Greek language itself, but the Aegean world of the period following the Collapse was very different from that of the period when Mycenaean civilization was at its height, here termed the Third Palace Period. Further, the differences represent not simply a change but also a significant deterioration in material culture, which was the prelude to the even more limited culture of the early stages of the Iron Age.” (emphases mine)
Causes that have been discussed in the literature
- Environmental
- [This PNAS paper argues against climate-based causes, but at first glance seems to be about a slightly later collapse in Northwest Europe.]
- Hekla volcano eruption, maybe also other volcano eruptions
- But dating controversial: “dates for the Hekla 3 eruption range from 1021 BCE (±130) to 1135 BCE (±130) and 929 BCE (±34).” (Wikipedia)
- Buckland et al. (1997) appear to argue against volcano-hypotheses, except for a few specific cities
- Drought
- Bernard Knapp; Sturt w. Manning (2016). "Crisis in Context: The End of the Late Bronze Age in the Eastern Mediterranean". American Journal of Archaeology. 120: 99
- Kaniewski et al. (2015) – review of drought-based theories
- Matthews (2015)
- Langguth et al. (2014)
- Weiss, Harvey (June 1982). "The decline of Late Bronze Age civilization as a possible response to climatic change". Climatic Change. 4 (2): 173–198
- Middleton, Guy D. (September 2012). "Nothing Lasts Forever: Environmental Discourses on the Collapse of Past Societies". Journal of Archaeological Research. 20 (3): 257–307
- Earthquakes
- Epidemics (mentioned by Knapp & Manning)
- Outside invasion
- By unidentified ‘Sea Peoples’
- For Greece: by ‘Dorians’
- “Despite nearly 200 years of investigation, the historicity of a mass migration of Dorians into Greece has never been established, and the origin of the Dorians remains unknown.” (Wikipedia)
- Cline, Eric H. (2014). "1177 B.C.: The Year Civilization Collapsed". Princeton University Press.
- Cline (2014) is dismissed by Knapp & Manning (2016)
- By broader ‘great migrations’ of peoples from Northern and Central Europe into the East Mediterranean
- Dickinson against invasion theories: “General loss of faith in ‘invasion theories’ as explanations of cultural change, doubts about the value of the Greek legends as sources for Bronze Age history, and closer dating of the sequence of archaeological phases have undermined the credibility of this reconstruction, and other explanations for the collapse have been proposed.”
- Technology (as a cause for why the chariot-based armies of the Late Bronze Age civilizations became non-competitive)
- Ironworking
- Palmer, Leonard R (1962). Mycenaeans and Minoans: Aegean Prehistory in the Light of the Linear B Tablets. New York, Alfred A. Knopf
- Changes in warfare: large infantry armies with new (bronze) weapons
- Drews, R. (1993). The End of the Bronze Age: Changes in Warfare and the Catastrophe ca. 1200 B.C. (Princeton)
- Ironworking
- Internal problems
- “political struggles within the dominant polities” (mentioned by Knapp & Manning)
- “inequalities between centers and peripheries” (mentioned by Knapp & Manning)
- Synthesis: general systems collapse a la Tainter
Types of evidence
- “material, textual, climatic, chronological” (Knapp & Manning 2016)
- Textual evidence very scarce (Robbins)
- Archeological evidence inconclusive, can be interpreted in different ways (Robbins)
↑ comment by FJehn · 2020-10-29T12:06:25.793Z · EA(p) · GW(p)
Thank you for your notes. Really quite interesting. I was not aware that the dating of the Hekla eruption was so disputed. The reason I focussed on it was that droughts seemed to me like they played a crucial role. The research by Drake et al. argued (relying on isotope data) that this drought was caused by a cooling of the sea, which in turn needs an explanation. And the most likely explanation seemed to be a volcanic eruption.
But I agree that it is overall very hard to understand the timing of all those events. Especially as it played out differently in different parts of the region. In some regions maybe the pandemic struck first, while it was migration or drought in others. I had hoped to highlight this complex web in my second figure.Replies from: Max_Daniel
↑ comment by Max_Daniel · 2020-10-29T13:24:52.259Z · EA(p) · GW(p)
Thanks! Interesting to hear what kind of evidence we have that points toward droughts and volcanic eruptions.
Note that overall I'm very uncertain how much to discount the Hekla eruption as a key cause based on the uncertain dating. This is literally just based on one sentence in a Wikipedia article, and I didn't consult any of the references. It certainly seems conceivable to me that we could have sufficiently many and strong other sources of evidence that point to a volcanic eruption that we overall should have very high credence that the eruption of Hekla or another volcano was a crucial cause.Replies from: Ramiro
↑ comment by Ramiro · 2020-10-30T14:35:24.019Z · EA(p) · GW(p)
Guys, great post and discussion. I was taking a look at the discussion about Hekla's role... even if the eruption succeeded the breakdown of those civilizations by half a century, it'd likely have an effect concerning their prospects for recovery.
comment by meerpirat · 2020-10-28T14:07:16.143Z · EA(p) · GW(p)
Wow, vulcano erupting, a famine, an earthquake, a pandemic, civil wars and rioting sea people, that's quite a task. Really interesting read, thanks for writing it! And the graph ended up really nicely.
But this climatic approach does not explain everything. The civilizations in this part of earth already survived similar events in the past. For example the destruction of the Minoan civilization on Crete (which is in the middle of the eastern Mediterranean) was caused by another major volcanic eruption (Marinatos, 1939). However, all other civilizations survived mostly unharmed. This indicates that also the societal structure comes into play.
This argument didn't seem super watertight to me. There seems to be a lot of randomness involved, and causal factors at play that are unrelated to societal structure, no? For example maybe the other eruption was a little bit weaker, or the year before yielded enough food to store? Or maybe the wind was stronger in that year or something? Would be interesting to hear why the mono- and/or some of the duo-causal historians disagree with societal structure mattering.
However, we also have more resources and more knowledge than the people in the Bronze Age.
I wondered how much this is an understatement. I have no idea of how people thought back then, only the vague idea that the people that spend the most time trying to make sense of things like this were religious leaders and highly confused about bascially everything?
Lastly, your warnings of tipping points and the problems around the breakdown of trade reminded me of these arguments from Tyler Cowen, warning that the current trade war between China and the US and the strains from the current pandemic could lead to a sudden breakdown of international trade, too.Replies from: FJehn
↑ comment by FJehn · 2020-10-28T14:22:53.279Z · EA(p) · GW(p)
Thank you. Yeah when I wrote this down I was a bit shocked myself on how many bad things can happen at the same time.
You're are right that the argument about the comparison with the other eruption is a bit flaky. The problem is that this is so long ago and most written sources were destroyed. So, we have to rely on climatic reconstructions and those are hard. Therefore, I found accounts that both eruptions were of similar strength, but also some which argued that one of them was stronger than the other. However, the earlier eruption happened smack in the middle of the Bronze Age empires, while the one during the collapse happened in Iceland. So, I would also be very interested in the opinion of someone about this who spend a career on it.
To your second argument: I agree that we have vastly more ressources and knowledge now. The problem is that it seems to me that our power to destroy ourselves increased as well and the society seems much more unlikely to recover when a really bad disaster would strike. So, my feeling is that stabilizing and destabilizing factors increased in a similar magnitude.
Thank you for the article from Cowen. I see this danger as well. Such topics always remind of this article. It is mainly a rant about programmers, but it also touches on the problem that much of our infrastructure will be very difficult to restart once its stopped, because so much of it are just improvised stopgap solutions.
comment by FJehn · 2020-10-28T12:57:17.939Z · EA(p) · GW(p)
I could not really fit this neatly in the text, but the destruction of Ugarit was the scene for a grim, yet fascinating bit of history that I do not want to withhold from you. During some archeological excavations clay tablets were found with the following text:
“My father, behold, the enemy’s ships came (here); my cities(?) were burned, and they did evil things in my country. Does not my father know that all my troops and chariots(?) are in the Land of Hatti, and all my ships are in the Land of Lukka?… Thus, the country is abandoned to itself. May my father know it: the seven ships of the enemy that came here inflicted much damage upon us.“
This was a desperate call for help, but we were only able to dig up those clay tablets, because the clay was baked by the city burning down around them and the tablets were buried beneath the rubble of the destroyed city. I think this is a stark reminder of what can happen when civilization collapses. | https://eaforum.issarice.com/posts/Zbvsfe9dDPKoy52QA/the-end-of-the-bronze-age-as-an-example-of-a-sudden-collapse |
How does climate change impact other sustainability issues?
We all know that climate change is a critical issue. But what about the other sustainability issues we are facing? Water scarcity and biodiversity loss, for instance, are two major problems also linked to food production. They are all immensely important – but there is a central reason why climate change is commonly identified as the risk of highest concern. These sustainability issues are interconnected and climate change has a strong effect on the others. More explicitly, the worse climate change gets, the worse these other issues will get as a result.
Let’s draw our square: Environmental sustainability comprises different impact categories including the examples mentioned above. These issues are under the same umbrella, in the same system, and they influence each other. Environmental sustainability overall is also under an umbrella of its own, in the same system as economic, societal, and geopolitical sustainability – and these are in turn interconnected. This is the playing field of the risks our society is out to reduce.
The pin in this square, however, is climate change. Take a look at the following graph from the Global Economic Forum visualizing the weight and interconnectivity of global risks before we dive deeper into some of them.
1. Water scarcity
Did you know that 17 countries, which are home to a quarter of the world’s population, face ‘extremely high’ water stress? This means a shortage of surface and groundwater, which has implications for food security, migration, and financial instability. According to the UN, water was a major factor of conflict in 45 countries in 2017. Water scarcity is clearly a problem with huge societal implications.
Why is climate change a major risk factor for water scarcity?
- According to the IPCC, as greenhouse gas concentration in the atmosphere increases, so does the fraction of the global population experiencing water scarcity. In existing dry regions, droughts will be an even more frequent occurrence.
- Climate change also causes rising sea levels. This leads to seawater intrusion of many natural freshwater resources along the coasts, making them unfit to consume.
- Finally, glaciers will shrink. This is a problem because glaciers act as freshwater reservoirs: They store water during cold or wet years and release it during warm years. As glaciers shrink, so do their meltwater yields during droughts and heatwaves.
2. Biodiversity loss
Biodiversity loss is for many a huge reason for concern, and rightfully so. The current rate of extinction is tens to hundreds of times higher than the average over the past 10 million years. The implications are serious and range from the collapse of food and health systems to the disruption of entire supply chains.
Nevertheless, the importance of biodiversity loss exceeds intellectual arguments. It breaks our collective heart when parts of nature that many of us ascribe an intrinsic value – birds, insects, mammals, and plants – are lost. Up until now, agriculture and fishing practices have been the main drivers of biodiversity loss. However, as climate change progresses, it is projected to become the most important driver.
Newsletter to-go?
Our special today is our Newsletter, including snackable tips, hearty climate knowledge, and digestible industry news delivered to your inbox
How are different impact categories approached in an LCA?
If you want to address the problem of water scarcity, then you want to make sure you use water with caution. For instance, avoid cultivating crops with huge irrigation needs in water-scarce areas. But you also want to address global warming, since global warming will reduce the water supply in already water-scarce areas. Water use and Climate impact are two typical impact categories in LCAs. Which of these impact categories is most relevant for the problem of water scarcity can be hard to decide, but it also depends on value judgments, such as the importance of long-term versus short-term damage.
The same goes for biodiversity loss. A primary driver has been agriculture intruding on the natural habitat of wildlife, but it is also related to other typical impact categories in LCAs such as eutrophication, acidification, the spread of toxic substances, and finally climate change. Sustainability issues are often related to more than just one LCA impact category.
It can be challenging to decide which of the impact categories to focus on. It is equally difficult when two or more different environmental impact categories are weighed against each other. How bad is the extinction of ten species compared to insufficient water supply for agriculture in a season? This is the type of question food product owners need to answer if they wish to include many impact categories in a life cycle assessment and aggregate them into one single score. The answer depends on value judgments rather than science. | https://carboncloud.com/2022/02/10/climate-change-and-other-sustainability-issues/ |
In his 2005 book Collapse: How Societies Choose to Fail or Succeed, author Jared Diamond examines various factors that led to instances of societal collapse in the past, and argues that our modern society faces many of these same challenges but on a larger scale. Today, let alone the collapse of societies, there is even a risk to the survival of our species.
Diamond was one of the first to propose that climate change and environmental degradation could lead civilizations to collapse. According to him, our current society is unsustainable and unless we make profound changes in behavior. History he showed us is full of examples of civilizational collapse because of limited resources and exploding populations.
Africa is the focus of world population growth this century. The African population is expected to increase from about 1.3 billion in 2020 to 4.5 billion by 2100, the biggest change in human history in just a few generations. If economic development and industrialization continue to be based on fossil fuel, it would probably mean the end of the planet.
Climate change and the environmental consequences will have increasing impact on the continent in the next few years. As we saw with the recent locust invasion in East Africa, Africa will see a number of environmental challenges, ranging from desertification to natural disasters, and new pandemics similar to Ebola could arrive very soon. This, coupled with a population explosion humankind, could make Africa the first “failed continent” in human history.
Any country would struggle to provide subsidized shelter, education, jobs, healthcare, and pensions for such a fast-growing population. Nor will it be possible to manage the brutal urbanization that will inevitably follow the provision of the required infrastructure, transportation, and telecommunications.
Looking to emulate “First World” societies, the youth in Africa will want improved living standards, and if they cannot get them at home, they will go in search of it, making the current migration surge to Western countries look like a picnic.
Political and Economic Challenges
Fast population growth also has implications for democracy. Many African dictators have been in power for decades. Neither party systems nor civil society organizations seem to be able to take the lead in a democratic transition. Compounding the problems of inefficient institutions, endemic corruption, and a lack of capacity and know-how are weak states, climate change, and resource scarcity. It’s a recipe for collapse. And COVID-19 has become a threat multiplier.
Besides the health crisis, the biggest challenges of the COVID-19 pandemic for Africa will be the economic and political ones. A report by the African Union warns that Africa could lose about 20 million jobs in 2020 due to the pandemic. This report was, however, done at the beginning of the spread of the disease in Africa, when there were relatively few cases. Another study, “Tackling COVID-19 in Africa” by McKinsey & Company—also compiled at the beginning of the pandemic on the continent—predicted that Africa’s economies could experience a loss of between US$90 billion and US$200 billion in 2020. But if the pandemic were to continue into 2021, as is starting to appear likely, things will get much worse.
For post-pandemic recovery, there would therefore be a strong need to increase the welfare state in all African countries, with Keynesian policies of government support. But this risks a mounting debt crisis for many African states. Africa already has some of the poorest and most indebted countries in the world, including Eritrea with a debt-to-GDP ratio of 127 percent and Mozambique with a ratio of 124 percent.
Competition among the major world powers has led to China in particular seeking to gain influence on the African continent by using debt-trap diplomacy. It extends large loans for infrastructure projects through its Belt and Road Initiative, but uses these investments to demand greater influence and access to commodities.
At social and political level, much unrest and instability are anticipated as the economic crisis unfolds this year and even more so next year. Furthermore, political heavy-handedness and anti-democratic enforcement measures will risk provoking more popular unrest. Since refugees, migrants, and displaced people across Africa are particularly vulnerable to COVID-19 transmission, governments should help to control the refugee camps and avoid border closures that could put vulnerable people at greater risk. Exacerbating the situation is the fact that the health infrastructure in Africa is inadequate to deal with such crises.
An Opportunity for Change
Yet not all is lost. The future always brings challenges and threats, but also possibilities and opportunities.
Africa could still do a lot with good leadership and cooperation. And the post-COVID-19 era could provide the opportunity for change. The most important step for Africa in the near future is to move rapidly toward an integrated market by implementing the African Free Trade Zone, and at the same time to have the support of Europe.
The Mediterranean could again become the bridge between Europe and Africa, with the possibility to make societies on either side flourish again. Instead of being the cemetery for migrants trying to cross its waters, the Mediterranean could become the connector between civilizations and histories, markets and people, for a future of prosperity and peace on both shores.
To make Africa the region of opportunities, both the Europe Union and the African Union will have to invest in the stability of the continent and in the human security of its people.
The United Nations has defined human security as “freedom from fear, from want, and from indignity,” but human security in Africa is at the lowest level in the world.
To invest in human security in Africa means first of all to address the root causes of instability and to carry out a real “peace-building” process with investments at the social, political, and economic levels of society.
Addressing the root causes of instability would involve combating endemic corruption at institutional level, empowering civil society organizations, supporting democratization, and working with international businesses to stop the pillaging of African resources. It also requires speaking out about human rights violations, tackling the security-development nexus, fighting armed groups benefitting from economic underdevelopment, supporting local economic development, and ending gender inequality and violence against women.
A Marshall Plan For Africa
Europe and the African continent will have to make important choices over the next few decades after the pandemic-induced economic crisis, which will be much worse than the economic downturn that started in 1929 leading to the "Great Depression."
This will be the decisive century for the survival of the world, and Africa and Europe will take center stage. The European Union could consider something similar to the United States’ Marshall Plan, a program to provide aid to a devastated Europe after World War II. My own country, Italy, was the third largest recipient of Marshall Plan aid. Decades after independence, African countries are still recovering from the effects of colonialism and the dictatorships that followed it, which Europe often supported. A similar plan should be developed for these countries.
The European Union will have to choose between pivoting to Africa or looking inward while struggling with domestic economic stagnation, and possibly losing the opportunity to become the cooperative leader that the world needs in this century. And Africa will have to decide whether it will look to the future or keep blaming the past.
These are tough choices, but there is no easy solution for ensuring the future of humankind: we need visionary leadership and courageous actions, or face the collapse of societies.
Maurizio Geri is an analyst on peace, security, defense, and strategic foresight. He is based in Brussels, Belgium. | https://newafricadaily.com/index.php/new-marshall-plan-africa-post-pandemic |
Covid-19 is the most significant economic, social, and public health issue of the present time, posing unprecedented challenges to economic policymakers. While the pandemic has caused havoc in the world economy, with major losses of human life, it is also reshaping our view towards the environment. The virus has thrown a spanner in the works of the global economy and financial system, but it’s having a positive impact on the environment. There is clear water in the Venice canals, blue skies over megacities, and wild animals roaming freely in locked-down towns. But do we really need a virus like this to rectify our actions, or can there be peaceful co-existence between humans and the environment, in which economic activity does not weigh on sustainability? This is probably the most crucial question and existential challenge to the human species and to the way socio-economic activities have been carried out, and we won’t be able to go anywhere without global cooperation.
We have taken important steps towards multilateral cooperation to tackle climate change. Efforts have been carried out under the United Nations Framework Convention on Climate Change (UNFCCC) since the early 1990s. The first convention of the Conference of the Parties (COPs), the supreme decision-making body of the UNFCCC, took place in Berlin in 1995. However, despite the severity of climate change, it took about 20 years for us to reach a global consensus just on emissions cuts, in the form of the Paris Agreement signed at COP 21 in 2015. It was more of a symbolic milestone in the efforts to tackle climate change. About 196 sovereign nations agreed to reduce greenhouse gas emissions, particularly CO2, so that global warming can be contained to 1.5-2° C above pre-industrial levels. To achieve this target, countries were left to decide their nationally determined contributions (NDCs) and actions to deliver them.
Over 60 countries (including the UK, Norway, Germany, Uruguay and France, among others) have committed to legally binding net-zero emission targets by 2050. But reducing emissions is equally vital for nations that have not yet committed to an explicit target, mainly China and the US, both big economies and large emitters. While emission reduction targets are appreciable and can act as broader guidelines, the most crucial question is how to achieve these targets without too much of an economic cost?
There is a clear indication of two things: the severity of our challenges and the lack of global consensus. In the last few years, we have seen trade wars, Brexit, US withdrawal from the Paris Agreement, and the Australian bushfires that killed hundreds of millions of wild animals and livestock. The US withdrawal from the Paris Agreement had no logical or political basis. The trade war between the US and China has cost the global economy hundreds of billions of dollars, and the Covid-19 blame game and WHO fund cuts in the midst of our worst health crisis in a century is not helpful either. COP 26, which was scheduled for November 2020 in Glasgow, has been postponed.
Considering their severity and scope, Covid-19 and climate change are global problems that can only be solved through global efforts and cooperation. Immense social and economic misery caused by the outbreak of Covid-19 and the longstanding issue of climate change will only be exacerbated if there’s no global cooperation. This is an important fork in humanity’s journey and requires major structural changes in socioeconomic activities to reduce the impact on the environment.
The low oil price regime that has prevailed for the last few years, particularly the recent collapse of oil prices, suggests that without global intervention clean energy will be a dream. With the liberal growth agenda and the America First slogan focusing solely on economic growth, unsustainable exploitation of domestic natural resources will weigh on global climate. No country can be the sole winner in this race. It’s either win-win or lose-lose for all of us. Education can play a crucial role in tackling climate change in the long-term. The scientific approach to the pandemic has saved millions of lives from Covid-19 and kept governments from adopting the controversial herd immunity approach. Reducing planetary emissions will also require global action involving public policy, technology and behavioural change. These are the crucial factors which can facilitate efforts to tackle climate challenges. Perhaps, this is the time for our species to change its course.
♣♣♣
Notes:
- This blog post expresses the views of its author(s), not the position of LSE Business Review or the London School of Economics.
- Featured image via Pikrepo, Royalty free
- When you leave a comment, you’re agreeing to our Comment Policy
Muhammad Shahbaz is a professor in energy economics at the Beijing Institute of Technology and Senior Research Fellow at the University of Cambridge. His research interests are applied economics, energy and environmental economics, development economics and financial economics.
Muhammad Ali Nasir is a senior lecturer in the department of economics, analytics and international business at Leeds Business School, Leeds Beckett University. | https://blogs.lse.ac.uk/businessreview/2020/05/27/climate-change-covid-19-and-our-existential-challenge/ |
Jared Diamond’s recent book Collapse makes a convincing case that multiple civilizations have collapsed because they exploited and mismanaged resources, then when outside forces turned against the culture or the climate changed, their entire way of life disintegrated. Examples include the Maya, Anasazi, Easter Island, and Viking settlements, all of which predate capitalism. They generally had a ruling class that kept most everything for themselves, something which exacerbated the collapse when it finally came because they weren’t paying attention to the environmental degradation they’d caused until it was too late.
For example, Easter Island collapsed in part because they cut down all the trees to use for themselves and finally had no more large trees with which to make seagoing canoes. Many of the trees were used to move the huge stone monuments that the different tribes built to compete with each other and to glorify their ruling class.
These examples were localized. Now the environmental damage is planet-wide and the a short-sighted ruling class exploiting resources for personal gain is certainly a factor here.
So go read Climate and Capitalism, a new blog, They say “EcoSocialism or Barbarism: There is no third way.” Really folks, if we want to stop global warming, new economic systems are needed. | https://polizeros.com/2007/02/09/making-the-links-between-global-warming-and-capitalism/ |
Brace yourself. You may not be able to tell yet, but according to global experts and the U.S. intelligence community, the earth is already shifting under you. Whether you know it or not, you’re on a new planet, a resource-shock world of a sort humanity has never before experienced.
Two nightmare scenarios — a global scarcity of vital resources and the onset of extreme climate change — are already beginning to converge and in the coming decades are likely to produce a tidal wave of unrest, rebellion, competition, and conflict. Just what this tsunami of disaster will look like may, as yet, be hard to discern, but experts warn of “water wars” over contested river systems, global food riots sparked by soaring prices for life’s basics, mass migrations of climate refugees (with resulting anti-migrant violence), and the breakdown of social order or the collapse of states. At first, such mayhem is likely to arise largely in Africa, Central Asia, and other areas of the underdeveloped South, but in time all regions of the planet will be affected.
To appreciate the power of this encroaching catastrophe, it’s necessary to examine each of the forces that are combining to produce this future cataclysm.
Resource Shortages and Resource Wars
Start with one simple given: the prospect of future scarcities of vital natural resources, including energy, water, land, food, and critical minerals. This in itself would guarantee social unrest, geopolitical friction, and war.
It is important to note that absolute scarcity doesn’t have to be on the horizon in any given resource category for this scenario to kick in. A lack of adequate supplies to meet the needs of a growing, ever more urbanized and industrialized global population is enough. Given the wave of extinctions that scientists are recording, some resources — particular species of fish, animals, and trees, for example — will become less abundant in the decades to come, and may even disappear altogether. But key materials for modern civilization like oil, uranium, and copper will simply prove harder and more costly to acquire, leading to supply bottlenecks and periodic shortages.
Oil — the single most important commodity in the international economy — provides an apt example. Although global oil supplies may actually grow in the coming decades, many experts doubt that they can be expanded sufficiently to meet the needs of a rising global middle class that is, for instance, expected to buy millions of new cars in the near future. In its 2011 World Energy Outlook, the International Energy Agency claimed that an anticipated global oil demand of 104 million barrels per day in 2035 will be satisfied. This, the report suggested, would be thanks in large part to additional supplies of “unconventional oil” (Canadian tar sands, shale oil, and so on), as well as 55 million barrels of new oil from fields “yet to be found” and “yet to be developed.”
However, many analysts scoff at this optimistic assessment, arguing that rising production costs (for energy that will be ever more difficult and costly to extract), environmental opposition, warfare, corruption, and other impediments will make it extremely difficult to achieve increases of this magnitude. In other words, even if production manages for a time to top the 2010 level of 87 million barrels per day, the goal of 104 million barrels will never be reached and the world’s major consumers will face virtual, if not absolute, scarcity.
Water provides another potent example. On an annual basis, the supply of drinking water provided by natural precipitation remains more or less constant: about 40,000 cubic kilometers. But much of this precipitation lands on Greenland, Antarctica, Siberia, and inner Amazonia where there are very few people, so the supply available to major concentrations of humanity is often surprisingly limited. In many regions with high population levels, water supplies are already relatively sparse. This is especially true of North Africa, Central Asia, and the Middle East, where the demand for water continues to grow as a result of rising populations, urbanization, and the emergence of new water-intensive industries. The result, even when the supply remains constant, is an environment of increasing scarcity.
Wherever you look, the picture is roughly the same: supplies of critical resources may be rising or falling, but rarely do they appear to be outpacing demand, producing a sense of widespread and systemic scarcity. However generated, a perception of scarcity — or imminent scarcity — regularly leads to anxiety, resentment, hostility, and contentiousness. This pattern is very well understood, and has been evident throughout human history.
In his book Constant Battles, for example, Steven LeBlanc, director of collections for Harvard’s Peabody Museum of Archaeology and Ethnology, notes that many ancient civilizations experienced higher levels of warfare when faced with resource shortages brought about by population growth, crop failures, or persistent drought. Jared Diamond, author of the bestseller Collapse, has detected a similar pattern in Mayan civilization and the Anasazi culture of New Mexico’s Chaco Canyon. More recently, concern over adequate food for the home population was a significant factor in Japan’s invasion of Manchuria in 1931 and Germany’s invasions of Poland in 1939 and the Soviet Union in 1941, according to Lizzie Collingham, author of The Taste of War.
Although the global supply of most basic commodities has grown enormously since the end of World War II, analysts see the persistence of resource-related conflict in areas where materials remain scarce or there is anxiety about the future reliability of supplies. Many experts believe, for example, that the fighting in Darfur and other war-ravaged areas of North Africa has been driven, at least in part, by competition among desert tribes for access to scarce water supplies, exacerbated in some cases by rising population levels.
“In Darfur,” says a 2009 report from the U.N. Environment Programme on the role of natural resources in the conflict, “recurrent drought, increasing demographic pressures, and political marginalization are among the forces that have pushed the region into a spiral of lawlessness and violence that has led to 300,000 deaths and the displacement of more than two million people since 2003.”
Anxiety over future supplies is often also a factor in conflicts that break out over access to oil or control of contested undersea reserves of oil and natural gas. In 1979, for instance, when the Islamic revolution in Iran overthrew the Shah and the Soviets invaded Afghanistan, Washington began to fear that someday it might be denied access to Persian Gulf oil. At that point, President Jimmy Carter promptly announced what came to be called the Carter Doctrine. In his 1980 State of the Union Address, Carter affirmed that any move to impede the flow of oil from the Gulf would be viewed as a threat to America’s “vital interests” and would be repelled by “any means necessary, including military force.”
In 1990, this principle was invoked by President George H.W. Bush to justify intervention in the first Persian Gulf War, just as his son would use it, in part, to justify the 2003 invasion of Iraq. Today, it remains the basis for U.S. plans to employ force to stop the Iranians from closing the Strait of Hormuz, the strategic waterway connecting the Persian Gulf to the Indian Ocean through which about 35% of the world’s seaborne oil commerce passes.
Recently, a set of resource conflicts have been rising toward the boiling point between China and its neighbors in Southeast Asia when it comes to control of offshore oil and gas reserves in the South China Sea. Although the resulting naval clashes have yet to result in a loss of life, a strong possibility of military escalation exists. A similar situation has also arisen in the East China Sea, where China and Japan are jousting for control over similarly valuable undersea reserves. Meanwhile, in the South Atlantic Ocean, Argentina and Britain are once again squabbling over the Falkland Islands (called Las Malvinas by the Argentinians) because oil has been discovered in surrounding waters.
By all accounts, resource-driven potential conflicts like these will only multiply in the years ahead as demand rises, supplies dwindle, and more of what remains will be found in disputed areas. In a 2012 study titled Resources Futures, the respected British think-tank Chatham House expressed particular concern about possible resource wars over water, especially in areas like the Nile and Jordan River basins where several groups or countries must share the same river for the majority of their water supplies and few possess the wherewithal to develop alternatives. “Against this backdrop of tight supplies and competition, issues related to water rights, prices, and pollution are becoming contentious,” the report noted. “In areas with limited capacity to govern shared resources, balance competing demands, and mobilize new investments, tensions over water may erupt into more open confrontations.”
Heading for a Resource-Shock World
Tensions like these would be destined to grow by themselves because in so many areas supplies of key resources will not be able to keep up with demand. As it happens, though, they are not “by themselves.” On this planet, a second major force has entered the equation in a significant way. With the growing reality of climate change, everything becomes a lot more terrifying.
Normally, when we consider the impact of climate change, we think primarily about the environment — the melting Arctic ice cap or Greenland ice shield, rising global sea levels, intensifying storms, expanding deserts, and endangered or disappearing species like the polar bear. But a growing number of experts are coming to realize that the most potent effects of climate change will be experienced by humans directly through the impairment or wholesale destruction of habitats upon which we rely for food production, industrial activities, or simply to live. Essentially, climate change will wreak its havoc on us by constraining our access to the basics of life: vital resources that include food, water, land, and energy. This will be devastating to human life, even as it significantly increases the danger of resource conflicts of all sorts erupting.
We already know enough about the future effects of climate change to predict the following with reasonable confidence:
* Rising sea levels will in the next half-century erase many coastal areas, destroying large cities, critical infrastructure (including roads, railroads, ports, airports, pipelines, refineries, and power plants), and prime agricultural land.
* Diminished rainfall and prolonged droughts will turn once-verdant croplands into dust bowls, reducing food output and turning millions into “climate refugees.”
* More severe storms and intense heat waves will kill crops, trigger forest fires, cause floods, and destroy critical infrastructure.
No one can predict how much food, land, water, and energy will be lost as a result of this onslaught (and other climate-change effects that are harder to predict or even possibly imagine), but the cumulative effect will undoubtedly be staggering. In Resources Futures, Chatham House offers a particularly dire warning when it comes to the threat of diminished precipitation to rain-fed agriculture. “By 2020,” the report says, “yields from rain-fed agriculture could be reduced by up to 50%” in some areas. The highest rates of loss are expected to be in Africa, where reliance on rain-fed farming is greatest, but agriculture in China, India, Pakistan, and Central Asia is also likely to be severely affected.
Heat waves, droughts, and other effects of climate change will also reduce the flow of many vital rivers, diminishing water supplies for irrigation, hydro-electricity power facilities, and nuclear reactors (which need massive amounts of water for cooling purposes). The melting of glaciers, especially in the Andes in Latin America and the Himalayas in South Asia, will also rob communities and cities of crucial water supplies. An expected increase in the frequency of hurricanes and typhoons will pose a growing threat to offshore oil rigs, coastal refineries, transmission lines, and other components of the global energy system.
The melting of the Arctic ice cap will open that region to oil and gas exploration, but an increase in iceberg activity will make all efforts to exploit that region’s energy supplies perilous and exceedingly costly. Longer growing seasons in the north, especially Siberia and Canada’s northern provinces, might compensate to some degree for the desiccation of croplands in more southerly latitudes. However, moving the global agricultural system (and the world’s farmers) northward from abandoned farmlands in the United States, Mexico, Brazil, India, China, Argentina, and Australia would be a daunting prospect.
It is safe to assume that climate change, especially when combined with growing supply shortages, will result in a significant reduction in the planet’s vital resources, augmenting the kinds of pressures that have historically led to conflict, even under better circumstances. In this way, according to the Chatham House report, climate change is best understood as a “threat multiplier… a key factor exacerbating existing resource vulnerability” in states already prone to such disorders.
Like other experts on the subject, Chatham House’s analysts claim, for example, that climate change will reduce crop output in many areas, sending global food prices soaring and triggering unrest among those already pushed to the limit under existing conditions. “Increased frequency and severity of extreme weather events, such as droughts, heat waves, and floods, will also result in much larger and frequent local harvest shocks around the world… These shocks will affect global food prices whenever key centers of agricultural production area are hit — further amplifying global food price volatility.” This, in turn, will increase the likelihood of civil unrest.
When, for instance, a brutal heat wave decimated Russia’s wheat crop during the summer of 2010, the global price of wheat (and so of that staple of life, bread) began an inexorable upward climb, reaching particularly high levels in North Africa and the Middle East. With local governments unwilling or unable to help desperate populations, anger over impossible-to-afford food merged with resentment toward autocratic regimes to trigger the massive popular outburst we know as the Arab Spring.
Many such explosions are likely in the future, Chatham House suggests, if current trends continue as climate change and resource scarcity meld into a single reality in our world. A single provocative question from that group should haunt us all: “Are we on the cusp of a new world order dominated by struggles over access to affordable resources?”
For the U.S. intelligence community, which appears to have been influenced by the report, the response was blunt. In March, for the first time, Director of National Intelligence James R. Clapper listed “competition and scarcity involving natural resources” as a national security threat on a par with global terrorism, cyberwar, and nuclear proliferation.
“Many countries important to the United States are vulnerable to natural resource shocks that degrade economic development, frustrate attempts to democratize, raise the risk of regime-threatening instability, and aggravate regional tensions,” he wrote in his prepared statement for the Senate Select Committee on Intelligence. “Extreme weather events (floods, droughts, heat waves) will increasingly disrupt food and energy markets, exacerbating state weakness, forcing human migrations, and triggering riots, civil disobedience, and vandalism.”
There was a new phrase embedded in his comments: “resource shocks.” It catches something of the world we’re barreling toward, and the language is striking for an intelligence community that, like the government it serves, has largely played down or ignored the dangers of climate change. For the first time, senior government analysts may be coming to appreciate what energy experts, resource analysts, and scientists have long been warning about: the unbridled consumption of the world’s natural resources, combined with the advent of extreme climate change, could produce a global explosion of human chaos and conflict. We are now heading directly into a resource-shock world.
Michael Klare is a professor of peace and world security studies at Hampshire College, a TomDispatch regular and the author, most recently, of The Race for What’s Left, just published in paperback by Picador. A documentary movie based on his book Blood and Oil can be previewed and ordered at www.bloodandoilmovie.com. You can follow Klare on Facebook by clicking here. | http://americanempireproject.com/blog/entering-a-resource-shock-world/ |
People are naturally reticent to change their society’s whole system of economic activity. However much care is taken in attempting an orderly transition, the potential for disruption is real. Additionally, nearly all who have influence and privilege will resist a change of system. The case for a change of system must be real and compelling. People must realize (as many already do) that there is no choice other than to act, that it is a matter of survival — and, in the end, the new system will bring a far better quality of life to all.
Here is the circumstance that now compels action: Humanity faces several major crises, each of which has capacity to bring dead-endings to our civilizational trajectory. Each alone may bring collapse; together their danger to humanity is magnified.
Resource Depletion. Although world population continues to grow, resource use grows even faster. Crucial resources are now being depleted at an unsustainable rate. The production of some vital, non-renewable resources has already reached peak and is in decline — most important among them is oil. Even the production of some vital renewable resources has peaked, such as timber, fish, and grain. Estimates of the overshoot of the carrying capacity of the earth are now at about 60 percent — that is, 1.6 earths now required to sustain humanity at our present level of resource use.
Climate Change. The correlation between CO2 levels and global temperatures is well established. Earth’s average annual temperature has steadily risen, as has the frequency of extreme weather events. There is growing awareness of the potential for positive-feedback mechanisms setting in — such as methane release in the tundra — that have potential to fuel acceleration of temperature rises. There is a real possibility the global climate system is already reaching tipping-points, which could bring a sudden shift in the stable climate that earth has enjoyed for the past twelve millennia. One variation of such a tipping-point scenario involves increased Arctic ice melt blocking and shutting down of the North Atlantic Current, responsible for maintaining Northern Europe’s mild climate.
Environmental Destruction. The earth’s biosphere is being killed at a rate so rapid as to be characterized as life’s “sixth extinction” event. Humanity’s impact on the planet has been of such a scale as to leave marked impact on earth’s geological record, compelling scientists to recognize the emergence of a new geological epoch: the Anthropocene. As the destruction proceeds — consuming soils, aquifers, surface waters, coral reefs, forests, flood plain buffers, ecosystem biodiversity — the economic potentials of humanity are diminishing accordingly, and the health and vitality of humans and other living beings are increasingly compromised. It is not unreasonable to project ecosystem collapse in fragile environments. Indeed, this appears to be occurring already in growing numbers of dead zones at river mouths and in dying coral reef communities across the tropics.
Economic Collapse. There are two fundamental causes of economic depressions: (1) over-concentration of wealth among the rich, which reduces the purchasing capacity of the common people, and (2) stagnancy in the movement of money in the productive economy, as investors withhold credit, or investments get concentrated into non-productive speculation. Both of these conditions exist, to an extreme, in the current global economy. Concentration of wealth has soared to reckless levels. While the number of billionaires continues to climb, the real wages of the middle and lower classes have remained stagnant in most countries, if not declined. Despite state interventions to stimulate credit markets, there is little investment flowing into industry and commerce. Meanwhile investment in speculative financial markets has resumed in force, despite the partial bursting of the speculative bubble in 2008. This is occurring amidst a debt bubble of a size almost beyond imagining.
These crises — and others could be identified — are not independent from each other, but should be regarded as acute symptoms of a larger global metacrisis that is rooted in fundamental defects at the heart of the present social order. As such, they are interrelated, with capacity to interact in complex and mutually reinforcing ways.
So, let us take for example, the situation in the mid 2000s when there was a global tightness of oil supply (due in part to peaking oil production). This brought on a surge in gas prices, which created a cost of living surge, that exacerbated the home mortgage crisis, which accelerated the credit meltdown, which, then, slammed the breaks on the economy. Slowed economic activity, in turn, caused a plunge in oil demand and a drop in gas prices, which weakened political pressure for developing alternative fuel sources, and thereby undermining aggressive efforts to reduce carbon emissions — the greenhouse gas most responsible for climate change that is putting stress on agricultural production.
As a result of the global metacrisis, it is no longer advisable for local economies to continue linking their fate to an unstable, crisis prone global economy. As the interactive and mutually exacerbating effects of the symptoms of the global metacrisis intensify, and as their effects come together in unanticipated perfect storm situations, tragedy and hardship will become more commonplace. Even where there is not severe hardship, societies will yet face ongoing dwindling of developmental potentials.
Acknowledgement of the global metacrisis compels us to accept that the sinking ship must be abandoned and fundamental change embraced.
Change at such a fundamental level will not be easy. Many are invested in economic globalism and will resist letting it go. But the necessity of change is upon us. We either resist it, at great cost to humanity’s wellbeing, or accept the challenges and embrace the new possibilities that are arising.
Adopting a new development modality — one that is decentralized and sustainable — will require an extraordinary degree of vision, engagement, unity and leadership. But most of all, it will require a viable new economic paradigm to guide development. | https://www.proutinstitute.org/global-metacrisis/ |
How did climate change influence the rise and collapse of the ancient Maya?
Before the arrival of Christopher Columbus in the New World, the ancient Maya thrived and relatively rapidly disintegrated as a major political force. While the Maya, as a people, persisted long after the collapse of the Classical Maya civilizations, their cities were much reduced or abandoned by the time Columbus arrived. Did climate change have an important role in this? This is a question researchers have long tried to answer and recent answers might provide some new insights.
Rise of the Maya
The societies that were the precursors to the Maya experienced greater social complexity during the period between 2000 BC and 250 AD. Towns and soon cities such as Nakbe, Kaminaljuyu, and El-Mirador in Guatemala grew to large sizes.Agriculture focusing on maize, beans, and squash developed that helped lead to more long-term sedentary villages that also thrived through increased trade. Pottery and ceramic objects developed along with different forms of stone works, in particular jade and obsidian works. Communities began to form kingdoms and worship focused on the jaguar in different regions. Sacred kingship soon likely arose. The Olmecs in southern Mexico likely formed the first true complex society that would later influence the Classical Maya civilization as well as the Aztecs (Figure 1). The Olmecs spread throughout central and southern Mexico, while also spreading their influence south to Central America.
During this time, evidence from lake sediments indicate oscillating change in the El Niño/Southern Oscillation (ENSO) winds. This had an effect of providing either greater or less rain. During the period around 1500-600 BC, conditions may have been favourable for increased rain that allowed the Olmecs to thrive and expand, while at around 600 BC and later there is evidence of more drier cycles. In effect, farming may have become more conducive during the early pre-Classical phases of the Maya when cultural expansion is evident, while it decline in the later phases.
The Classical Maya
The Classical Maya period lasted from about 250-900 AD, a period that led to the development of large-scale urban areas and monumental architecture. This was a period of city-states and competing polities rather than a single, long-lived and dominant entity. Scholars have compared it to the period of city-states in ancient Greece or Medieval Italy. Some of the largest centers likely had populations of over 100,000 people, occupying areas around Honduras, Guatemala, and southern Mexico. The great Mayan cities, such as Tikal, were politically involved and often influenced by Teotihuacan, the great central Mexican city to the north that likely was the largest city in pre-Columbian Americas, with perhaps nearly 150,000 people. One of the other great cities at this time was Calakmul, which formed as a rival to Tikal. Chichen Itza (Figure 2) to the north in the modern Yucatán, and Copan, to the south in Honduras, also competed with these cities and likely formed alliances, including with Teotihuacan while sometimes coming under influence of the great powers. The great Maya pyramids were built at this time, which represented temples to the gods and places of sacrifice. Writing was developed, including monumental inscriptions and calendars used to mark events and important cycles. As writing was now used on monumental inscriptions, these provided also dates in which buildings can be attributed to. By around 900 AD, the number of new building inscriptions declines steadily. Soon after this, some of the great cities were either abandoned or were much reduced in population. This has led some scholars to call this sudden change as the "Classic Maya Collapse." Initially ideas centred around warfare or even disease; however, some scholars noted the rapid changes evident in societies and abandonment suggested something different and more drastic. Out of the possibilities that could lead to collapse, it began to emerge that climate might be a major factor in the decline of the Mayas.
Collapse of the Maya
There is evidence of increased drought during or around 900 AD, with scholars often attributing the "collapse" period as being between 800-1000 AD. The Yucatecan lake sediment cores show very severe droughts that not only meant less rainfall but the thin soils that Maya agriculture were dependent on were particularly vulnerable to sudden change as they were also relatively less fertile. Tree ring data and climate modeling have also been conducted, helping to show that there is multiple lines of evidence that likely drought occurred. In fact, in the northern hemisphere at around 800-1000 AD, increasing cold temperature are evident. Those colder temperatures would have had the effect of creating more drier conditions to the south in Mexico and Central America.
More recently, more precise information on isotopic changes in sediments has allowed a more direct quantification on how much rainfall had to change to lead to the collapse of the Maya. Recent work has shown that between 41% and 54% (with intervals of up to 70%) of rainfall reduction in the Mayan regions likely occurred. In other words, reduction in rainfall was drastic and there was an accompanied decline in humidity, which likely led to more rapid drying with rainfall that fell.
In fact, relative to today, the region the Maya occupied was very different. It had been assumed the Maya created cities within jungles. However, the regions the Maya occupied were often drier, seasonally wet places that had cyclical rains that the Maya likely increasingly became dependent on. Only later after the cities were abandoned they became jungles. That pattern of climate began to change between 800-1000 AD, which likely disrupted the agricultural system the Maya depended on. The agricultural system, composed of canals, terracing, raised fields, and other systems began to not be sustainable relative to the new, emergent climate. Wider environmental decline may have accompanied climatic change that affected the success of agriculture, such as a decline in the complex agricultural system created.
Summary
Although the so-called "Classic Maya Collapse" has long fascinated scholars, the truth is the Maya never really disappeared. In fact, Maya cultures continue to this day. However, after the Classic period, cultures prior to the arrival of Columbus were much reduced. The northern lowlands and highlands began to take more importance in later Maya societies. Mayan cities continued and the last city did not fall until 1697, when the Spanish conquered the last holdout independent Mayan city. Nevertheless, changes after the Classical Maya period indicate that Mayan society did change drastically and it is evident that the environment and climate in Central American and southern Mexico was likely very different than today. Changes that occurred meant that a system that had been created to be adapted to the climate was no longer suitable, leading to a change that meant Mayan societies became smaller-scale and adapted to very different climate and environmental conditions.
References
- ↑ For more on the pre-Classical Maya civilizations of Mexico and Central America, see: Estrada Belli, F. (2011). The first Maya civilization: ritual and power before the classic period. London ; New York: Routledge.
- ↑ For more on the role of El Niño/Southern Oscillation on the rise and collapse of the early pre-Classic Maya societies, see: Brooke, J. L. (2014). Climate Change and the Course of Global History. West Nyack: Cambridge University Press, pg. 310.
- ↑ For more on the Classical Maya, see: Houston, S. D., & Inomata, T. (2009). The classic Maya. New York: Cambridge University Press.
- ↑ For more on the role of climate in this collapse, see: Hodell, D. A., Curtis, J. H., & Brenner, M. (1995). Possible role of climate in the collapse of Classic Maya civilization. Nature, 375(6530), 391–394.
- ↑ For more on the change in rainfall and how much this can be quantified to be, see: Evans, N. P., Bauska, T. K., Gázquez-Sánchez, F., Brenner, M., Curtis, J. H., & Hodell, D. A. (2018). Quantification of drought during the collapse of the classic Maya civilization. Science, 361(6401), 498–501. | https://dailyhistory.org/index.php?title=How_did_climate_change_influence_the_rise_and_collapse_of_the_ancient_Maya%3F&curid=2632&oldid=13008 |
An examination of two documented periods of climate change in the greater Middle East, between approximately 4,500 and 3,000 years ago, reveals local evidence of resilience and even of a flourishing ancient society despite the changes in climate seen in the larger region.
A new study – led by archaeologists from Cornell and from the University of Toronto, working at Tell Tayinat in southeastern Turkey – demonstrates that human responses to climate change are variable and must be examined using extensive and precise data gathered at the local level.
“The absolute dating of these periods has been a subject of considerable debate for many years, and this study contributes a significant new dataset that helps address many of the questions,” said Sturt Manning, the Goldwin Smith Professor of Classical Archaeology in the College of Arts and Sciences, and lead author of the study, which published Oct. 29 in PLoS ONE.
The report highlights how challenge and collapse in some areas were matched by resilience and opportunities elsewhere. The findings are welcome contributions to discussions about human responses to climate change that broaden an otherwise sparse chronological framework for the northern part of the region known historically as the Levant, which stretches the length of the eastern edge of the Mediterranean Sea.
“The study shows the end of the Early Bronze Age occupation at Tayinat was a long and drawn out affair that, while it appears to coincide with the onset of a megadrought 4,200 years ago, was actually the culmination of processes that began much earlier,” said Tim Harrison, professor and chair of the Department of Near & Middle Eastern Civilizations at the University of Toronto. Harrison directs the Tayinat Archaeological Project.
“The archaeological evidence does not point towards significant local effects of the climate episode,” he said, “as there is no evidence of drought stress in crops. Instead, these changes were more likely the result of local political and spatial reconfiguration.”
The mid- to late Early Bronze Age (3000-2000 B.C.) and the Late Bronze Age (1600-1200 B.C.) in the ancient Middle East are pivotal periods of early inter-connectedness among settlements across the region, with the development of some of the earliest cities and state-level societies. But these systems were not always sustainable, and both periods ended in collapse of civilizations and settlements, the reasons for which are highly debated.
The absence of detailed timelines for societal activity throughout the region leaves a significant gap in understanding the associations between climate change and social responses. While the disintegration of political or economic systems are indeed components of a societal response, collapse is rarely total.
Using radiocarbon dating and analysis of archaeological samples recovered from Tell Tayinat, a location occupied following two particularly notable climate change episodes – one occurring 4,200 years ago, the other 1,000 years later – Manning and Brita Lorentzen, a researcher at the Cornell Tree-Ring Laboratory, working with the University of Toronto team, established a firm chronological timeframe for Tayinat in these two pivotal periods in the history of the ancient Middle East.
“The detailed chronological resolution achieved in this study,” Manning said, “allows for a more substantive interpretation of the archaeological evidence in terms of local and regional responses to proposed climate change, shedding light on how humans respond to environmental stress and variability.”
The researchers say the chronological framework for the Early Iron Age demonstrates the thriving resettlement of Tayinat following the latter climate change event, during a reconstructed period of heightened aridity.
“The settlement of Tayinat may have been undertaken to maximize access to arable land, and crop evidence reveals the continued cultivation of numerous water-demanding crops, revealing a response that counters the picture of a drought-stricken region,” Harrison said. “The Iron Age at Tayinat represents a significant degree of societal resilience during a period of climatic stress.”
The research was supported by the Social Sciences and Humanities Research Council of Canada, and by the University of Toronto. | https://classics.cornell.edu/news/archaeologists-ancient-people-turkey-adapted-climate-change |
There are enough challenges that organizations and societies face today in the wake of environmental, economic and socio-political crises that have jolted the world during last few decades. In this context, the book under review, Economics of Enough: How to Run the Economy as if Future Matters, may be designated as one of the treatise in response to a call for reforms in the market-driven capitalism following not only financial but environmental, social and political predicaments that tend to mar our collective future.
The author of the book, Diane Coyle, is PhD in Economics from Harvard University and an economic consultant specializing in new technologies and globalization. Her book emphasizes upon that future needs to be taken into consideration while making decisions for the organizations in societies today. The main idea revolves around that "how can we ensure that we leave enough economic, environmental and societal legacy for future generations? The book also deals with the problems of unsustainability in the developed world and also sheds light on the issues of sustainability in the developing world.
The book is divided into three parts comprising 9 chapters. The chapters cover an array of challenges and obstacles facing the governments from the goals of policies to the reforms in social institutions that require change in order. The chapters discuss the challenges posed by the four dimensions of unsustainability: depletion of natural resources; vast borrowing from posterity; increase in inequality; the collapse of the trust. Also, Coyle has placed the concepts of fairness and enhancing social capital as vital for economics of enough in order to save future.
In order to deal with the challenge of sustainability, Coyle believes social welfare is the appropriate goal of government policy. However, social welfare encompasses material wealth such as physical security, rule of law, quality of the environment and the civility of everyday life. She dismisses happiness as too narrow a concept to assess social welfare. In her opinion, neither wealth nor happiness is enough for a healthy society hence the debate between happiness versus growth is meaningless.
Focusing on the challenge of climate change that many scientists have argued is the result of rampant use of natural resources, the author points out to the dilemma of achieving economic growth without affecting the climate adversely. She recommends that the way out of the dilemma is to time duration when taking decisions to consume natural resources. What is required is the focus on the measurement of wealth, both natural and financial. Such a change in focus will ensure a positive impact of the decisions in the long-term.
Elaborating on another dimension of unsustainability, the author has warned that the enormous public debt acquired, especially in the wake of financial crises and also to sustain the pension and health care system of the elderly in the West have put the financial freedom of the coming generation at stake. Not only Western governments have been borrowing from their own citizens but also from foreigners in poor countries. Hence the prospects that the future generations will enjoy are bleak.
After climate change and debt burden, the book covers fairness as an aspect of unsustainability which is related with the income inequality pervasive within countries as well as globally. Inequality goes against the human instinct of fairness that has been proven by psychological experiments, evolutionary psychology and primatology. A society that does not cater to its members' sense of fairness is unsustainable as it erodes the social capital which is vital for economic growth. Coyle describes social capital as trust that is essential and fundamental for the long-term sustainability of any society.
With such daunting challenges facing the global economy Coyle has identified three major areas where reforms are required: statistics, markets and public institutions. The current set of statistics that measure national growth must be reformed. She recommends measurement of comprehensive wealth, including social and human capital that will help governments formulate policy around a long-term horizon. Furthermore, she disapproves the idea that markets and state are opposite to each other. She highlights the symbiotic relationship that exists between state and markets. Markets operate under rules and laws set by the government therefore the challenge is to ensure that markets embody the values that matter to societies in which they operate.
Finally, lamenting the fact that public services are being run by self-serving cohorts of influential people with no regard to the interest and welfare of general public, Coyle exhorts the need to build new institutions and economic rules that have a long-term focus during policy making. Bringing reforms of such a large scale across wide range of social institutions requires a paradigm shift in the way economies are run. Coyle has put her hope on the ingenuity of human beings to adapt to change. She calls for changes in attitude and behavior from both individuals and businesses for bringing social change. Individuals must save in order to invest in future and businesses must invest for the long term. The fact that future matters and individual choices affect people must be acknowledged. Indeed how to bring about sea change in our behavior and attitude that makes patience and mutuality as the most cherished values of our society will be the greatest challenge for policy makers.
Finally, for achieving “economics of enough” Coyle recommends, “We need to internalize a sense of responsibility to other, including those not yet born, in order to restore the moral fiber that is needed for market capitalism to deliver social well-being.” And the success in this goal lies in education. The youngsters must be equipped with a sense of tolerance, empathy, co-existence, harmony, mutuality and safeguarding sustainability. | http://jisr.szabist.edu.pk/JISR-MSSE/Publication/11/2/3/BookReview |
Key Points The recent collapse of 3 regional US banks is a timely reminder of the impact of tighter monetary policy on liquidity. These banks carried idiosyncratic risks and leveraged into either speculative venture capital or high-risk lending. At this stage, there...
Australian Private Debt Market Review – Opportunities and Risks for Investors
Summary Foresight Analytics’ Australian Private Debt whitepaper provides investors a practical framework for evaluating the strategies available to Australian investors and offers insights on position sizing and allocation layering. Titled ‘Australian Private Debt...
Cross Asset – March 2022
Key Points: In our latest Cross-asset review, we note the materially positive moves in Commodities, AUD and to some extend the Australian equity market; reflecting war-related supply shocks in the commodity market. On the other hand, Gold price and VIX (US and...
Strategies for Adopting Gender-Lens Investing & Insights on Yielding Return and Impact
Summary This article examines Gender-Lens Investing as a growing investment framework that can influence social impact and achieve gender equality through financial mechanisms. It highlights some of the strategies fund managers and investors can adopt within private...
Australian Equity Factor Performance Report – April 2021
In Foresight’s monthly Australian Equity Performance review, we look at the performance of key market factors: the various structural drivers of Australian share market. Foresight factor premiums show the returns attached to...
Cross Asset Review April
In this month’s Cross Asset Review, we assess the past performance of various asset classes and draw implications for multi asset investors. In April, risk on theme continued with real assets rebounding strongly. The top performing assets were the Australian Micro and...
Long Short Manager Diligence
Source: iStock This article explore how the GameStop shorting squeeze frenzy is forcing many long-short managers to re-think about their strategy. While this herding behaviour from ‘main street’ investors is a real risk for long-short funds, it remains to be seen...
Truth or Tale: Does Climate Change Reduce Economic Growth?
Source: iStock This blog piece reiterates how studies around climate change and how they do not negatively affect a nation's GDP are largely inconsistent, particularly when considering the variations among developed and developing nations economies and climate...
Climate Change Action: How do we invest in it and what do we need to keep in mind?
Source: iStock This blog piece discusses the urgency required for climate change action, as well as how investors can contribute to climate impact through mechanisms of selective capital allocation and shareholder engagement. It explains the frameworks that should be...
Elevated dispersion and volatility due to rising geopolitical and policy risks make a strong case for active management in Emerging Markets
Key insights Global emerging markets have provided very strong returns for long term investors. However, investors have to be prepared to accept higher volatility over shorter periods of 3 years and less. Significant performance dispersions across regions, countries,...
AUD weakness underpinned by soft fundamentals, the case for hedging foreign currency exposure remains weak
Key Insights AUD index against major trading partners shows clear negative trend for currency since its peak in 2012. Over the past 12 months, AUD has weakened against most major currencies, largest weakness was noted against the USD. AUD weakness is also evident over...
Longer term outlook for India remains highly positive, an expected Modi win bodes well for secular economic and capital market outlook
Key Insights The USD$2.6 trillion Indian economy is expected to grow by 7% in 2019, placing it amongst the fastest growing economies in the world. India’s economic growth is underpinned by structural drivers of private consumption, fixed asset investments and public...
Foresight ASX Small Cap Report
Foresight Emerging Market Report
Foresight Australian Mid Cap Report
Active Managers Deliver: Evidence from the Australian Micro-Cap Sector
Active Managers Deliver: Evidence from the Australian Micro-Cap Sector Foresight’s latest analysis of the Australian equity strategies reveals some interesting insights. While the big-caps have done very well over the past 10 years, Micro and Small caps have delivered…
FAANGs and Fundamentals – Part 1
A lot has been written about FAANGs in recent times. These fast-growth companies have outperformed their industry peers and the broader market in recent years as their market capitalisations have soared. They’ve led mega-caps to outperform and masked much of the…
How to reduce fund selling, hiring and firing mistakes by using factor insights?
By Jay Kumar, September 2017. In a crowded market of active management, investors need to focus on unique drivers of alpha and assess their sustainability through a complete investment cycle. The recent proliferation of passive ETFs, fundamental-indexing and…
Realised factor premiums much stronger in Emerging markets
Global Developed markets experienced notable reversals in factor premiums since December 2016. So far in 2017, Value style factors recorded negative premiums while Growth and Momentum style factors delivered positive premiums. The YTD factor premium experience is…
Do active managers in Emerging Markets add Value? | https://www.foresight-analytics.com/cat-decisionrq/ |
Last month, Breakthrough Australia published a paper by David Spratt and Ian Dunlop that claims that “climate change now represents a near- to mid-term existential threat to human civilisation”.1 Climate scientist Michael Mann was quick to put down the paper as “overblown rhetoric”. He was quoted as saying that “I respect the authors and appreciate that their intentions are good, but as I have written before,2 overblown rhetoric, exaggeration, and unsupportable doomist framing can be counteractive to climate action.”3
The quote by Mann raises a number of questions. Is the report by Spratt and Dunlop “overblown rhetoric” indeed? Does Mann actually have the expertise to judge whether it is? Are Spratt and Dunlop really the “doomists” Mann seems to think they are? And are “doomists” really as dangerous as he thinks they are?
Answers to the last two question, of course, depends on what “doomists” are supposed to be. Mann has written about “doomists” before in the Washington Post.4 In that article he explicitly mentioned Guy McPherson, who believes that humans will go extinct before 2030, but also suggests that David Wallace-Wells’s “The Uninhabitable Earth” is an example of “doomism”. “Doomists”, according to Mann, overstate the risks involved in climate change and there is “a danger in overstatement that presents the problem as unsolvable and future outcomes as inevitable”. And perhaps most importantly, “doomists” predict near- or mid-term human extinction due to climate change.
It isn’t entirely clear which of these criteria – overstated risks, inevitable outcomes, human extinction – are necessary conditions and which are sufficient conditions to qualify as a “doomist”, but in a response on Facebook (to me) Mann wrote that the term “‘doomism’ is appropriate when the claim is being made – as it made in this report – that we face extinction as a species”, suggesting that the third criterion is particularly important. “This report” in Mann’s Facebook reply referred to the aforementioned paper by Spratt and Dunlop, by the way, so Mann’s response implies that he believes that they claim “that we face extinction as a species” and, therefore, that they are “doomists”.
That judgment doesn’t appear to be entirely justified, however. It can certainly be argued that Spratt and Dunlop are guilty of overstating the risks. Whether they actually are, I don’t know for sure, but if Michael Mann believes they are, I’ll take his word for it. Michael Mann, after all, is a climate scientist, and that is his area of expertise.5
Spratt and Dunlop’s report does not satisfy the other two conditions for the “doomist” label, however. They do not predict human extinction, and they do not present their “doomish” predictions as inevitable. They are guilty of “overblown rhetoric” in some sense, however, because their use of the term “existential risk to human civilisation” is very misleading. That term suggests human extinction, but that is not exactly what they mean. The term is defined on page 6 of their paper:
An existential risk to civilisation is one posing permanent large negative consequences to humanity which may never be undone, either annihilating intelligent life or permanently and drastically curtailing its potential.
In other words, “existential risk to human civilization” doesn’t necessarily mean human extinction, but can also be a “permanent and drastic curtailing” of humanity’s “potential”. Arguably, if 30% or so of the planet becomes effectively uninhabitable due to heat, drought, and/or rising oceans – and those are predictions of possible outcomes of climate change that Mann and nearly every other climate scientist supports – then that would result in a “permanent and drastic curtailing of humanity’s potential”. And a global societal collapse with little chance to rebuild civilization to anything resembling current levels due to a radically changed physical environment certainly also qualifies as a “permanent and drastic curtailing of humanity’s potential”. Importantly, if you read what Spratt and Dunlop are warning for in their paper, it is something like the latter: global societal collapse in a world plagued by natural disasters, not human extinction. Furthermore, they do not claim that that outcome is inevitable. In the contrary, the very aim of their report is to suggest policies to avoid this outcome.
The latter point is especially important because the reason Mann thinks that “doomists” are as harmful as climate change deniers (or “denialists”) is that both “doomism” and denialism lead us “down the same path of inaction”. Denialism leads to inaction because it implies that action to prevent climate change is unnecessary; “doomism” leads to inaction because it implies that preventive action is useless. In other words, “doomism” is a kind of apocalyptic fatalism, and Mann certainly has a point that such fatalism is as dangerous as denialism.6 However, Mann’s argument also implies that fatalism (i.e. inevitability) is a necessary condition in his definition of “doomist”. And since Spratt and Dunlop’s report does not satisfy this condition, they are – by Mann’s own implicit definition – not “doomists”. Calling them such was “overblown rhetoric” on Mann’s part.
But there is a more serious issue here. By labeling warnings for the possibility of disastrous social effects of climate change (like those by Spratt and Dunlop) “doomism”, Mann is effectively saying “it’s not that bad, you can go back to sleep”. If what these warnings aim to avoid is within the realm of possibilities for our future, then Mann’s crusade to discredit the scientists, journalists, and others who issue such warnings is just another form of denialism, albeit a more subtle one: rather than downright denying climate change, he merely denies the seriousness of some of its possible effects.7
This, of course, leads us to the first two questions that I asked above: Is the report by Spratt and Dunlop “overblown rhetoric” indeed? And does Mann actually have the expertise to judge whether it is? The second of these questions may seem less important than the first, but Mann is an influential climate scientist, so if he would (unintentionally) abuse his authority as a climate scientist to discredit work that is really outside his area of expertise, then that matters.
A simplistic view on the matter is this: Michael Mann is a climate scientist; Spratt and Dunlop’s paper is about climate change; so Mann has the expert authority to judge their work. The reason that this is simplistic is that while Mann is an expert in (certain parts of) climate science indeed, Spratt and Dunlop’s paper is actually not in climate science. Rather Spratt and Dunlop discuss social effects of climate change, focusing on national security. Climate change is an external variable in their study – it is a given context, rather than something they aim to predict or explain. The study of the social effects of climate change is social science, with a dose of humanities (history, especially) mixed in. Particularly, Spratt and Dunlop’s paper is concerned with issues of national security (and it is thus entirely appropriate that the foreword to their study is written by a retired admiral). I don’t know whether Spratt and Dunlop are experts in national security and related branches of the social sciences and humanities, but I see no reason to believe that Mann is (and certainly nothing in his publication record). In fact, Mann is way out of his league. If and when Spratt and Dunlop make predictions about the global climate, then he has the authority to judge those predictions, but if he abuses his authority as a famous and influential climate scientist to discredit a study on national security – which, again, is not his area of expertise – then that is a problem.
This issue points at a broader problem in debates and research on climate science, however. Studying climate change is not just the business of climate scientists, but of a whole collection of other sciences as well. Predicting climate change itself should be left to climate scientists, but predicting the social effects of climate change is an entirely different matter. A climate scientist is not trained to assess the national security effects of prolonged drought, for example, or of the psychological and sociological effects of heat. If you want answers to those questions, you need to ask experts in national security, psychologists, and so forth.
Because many effects of climate change interact, the study of climate change is inherently multi- or interdisciplinary, however. Unfortunately, science is not. Multi- and interdisciplinary research is fashionable among science funders, of course, but this always concerns research that involves just two or three closely related (sub-) branches of science, while in case of climate change, almost every branch of science should be involved. But science has become increasingly specialized, and the kind of multidisciplinary expertise needed to successfully integrate many widely different scientific viewpoints and theories is exceedingly rare.
The most obvious way to solve this problem is compartmentalization: cut up a study in parts and assign those parts to experts in different fields. The problem is that this can only work if there are also people involved who can judge whether the parts still fit together. Perhaps, one of the best examples of this kind of compartmentalization going completely wrong are the Shared Socioeconomic Pathways (SSPs).
The SSPs are five scenarios for the near future that attempt to integrate various aspects of climate science with economics, food security, national security, international cooperation, and so forth.8 The research that produced these scenarios was split up by fields, but probably without considering that these fields might have fundamentally different and even incompatible approaches. The parts (or “compartments”) of the study that involved climate science and most of the parts that involved social science were based on models and theories that were themselves the result of extensive empirical research, calibration, and testing. But the economic part – which is considered part of the foundation of the scenarios – was not. The economic model may have looked very similar to the other models to a casual observer, but there is a fundamental difference, as revealed by the following quote:
There is no unified theoretical model of economic growth . . . Macroeconometric models have been popular, but lack a theoretical foundation. They are subject to the Lucas Critique . . . , which states that models that are purely based on historical patterns cannot be used for policy advice, as they lack an explanation of the underlying structural parameters (. . .).9
What this means in normal language (rather than “econ-speak”) is that the economic model is not just not based on empirical reality, but explicitly rejects any empirical or historical basis. Rather, it is a mathematical model based on unrealistic assumptions and without any clear relation to (or relevance for) the real world.10 This kind of model has no predictive power whatsoever. In the contrary, the predictions of standard economic models routinely deviate from economic reality,11 but the “Lucas Critique” implies that this is irrelevant, because the modeling approach and the assumptions at the base of the model are exempt from revision or critique. Or to put this in more plain language: the economic model at the basis of the SSPs is an unempirical, unrealistic fiction. As such, it is incompatible with the underlying assumptions and approach of the rest of the SSP framework. Unfortunately, it appears that no one involved in creating the SSPs understood enough about economics to realize that.
What this example shows is that the sciences of climate change (which include climate science, but also much more) need genuine multidisciplinary perspectives, but unfortunately, modern science cannot really provide those. To some extent, journalists fill that gap. David Wallace-Wells and Bill McKibben have recently published books that present useful overviews of the climate crisis from a variety of perspectives, for example.12 Unfortunately these efforts are not always appreciated by climate scientists (and others), but their critique reminds me a bit of the famous Indian parable of the elephant and the blind men.
In that parable, a group of blind men hear of a strange animal called an “elephant”. Since they are blind, none of them knows what an elephant is like, and so they decide to go and find out. One of them puts his hands on the elephant’s trunk, and announces that an elephant is like a snake. Another touches a leg, and declares that an elephant is like a tree. Yet another finds its tail, and thinks than an elephant is like a rope. And so forth.
Sometimes, some scientists seem to act a bit like these blind men: they only see their little corner of the beast (the area they are specialized in) and lose sight of the whole. And then, when someone who actually can see looks at the elephant from a few meters away and describes the animal to them, they don’t believe the description, and declare the person who can see the whole to be mad.
Unfortunately, some of those who claim that they can see actually are mad. The reason for that is that it is much harder to see the whole of climate change and its implications than it is to see an elephant. The latter only requires your eyes; the former requires at least some understanding of a long list of scientific disciplines. However, that some who claim they can see are really mad, doesn’t mean that all attempts to get a better view of the whole picture of climate change are madness.
I’m getting slightly sidetracked here, and there is still one important question left: Is the report by Spratt and Dunlop “overblown rhetoric” indeed?
This question puts me in a somewhat uncomfortable spot because I argued above that Michael Mann isn’t really qualified to answer it. So, if I’d try to answer this question here, it would seem that I am claiming that I am qualified, on the other hand, and I’m not sure I am. I have a degree in geography, but have drifted to philosophy since. I have always refused to specialize, and consequently, there doesn’t appear to be any thematic consistency in my list of publications. None of that makes me particularly qualified, so I guess that I’m about as qualified as some of the journalists I mentioned above, which means that you should probably take my judgments with a grain of salt. Or even better, you can just check what I write here and decide for yourself whether my conclusions hold up.
The paper by Spratt and Dunlop consists of (roughly) two parts. The first explains the background and approach of the paper, and can be summarized as follows:
1) Science can be overly cautious.
2) Some effects of climate change that have a low probability but a very large negative effect may actually be realized.
3) We need to investigate “plausible worst cases” to prepare just in case such a low probability effect becomes reality.
The second part discusses such a “plausible worst case” scenario and its implications:
a) Action to curb dangerous climate change is too little, too late. (This is the basis of the scenario, and not a prediction.)
b) This leads to the passing of some tipping points by 2050.
c) “A number of ecosystems collapse, including coral reef systems, the Amazon rainforest and in the Arctic. Some poorer nations and regions . . . become unviable. Deadly heat conditions . . . contributing to more than a billion people being displaced from the tropical zone. Water availability decreases sharply in the most affected regions . . . affecting about two billion people worldwide. Agriculture becomes nonviable in the dry subtropics. Most regions in the world see a significant drop in food production and increasing numbers of extreme weather events, . . .” (pp. 8-9).
d) “Even for 2°C of warming, more than a billion people may need to be relocated and In high-end scenarios, the scale of destruction is beyond our capacity to model, with a high likelihood of human civilisation coming to an end.” (p. 9)
e) This has serious implications for national security. States and their security forces will be overwhelmed by the scale of the problems (numbers of refugees, disasters, epidemics, famine, and so forth).
f) To prevent this, we must switch to zero-emission industrial systems very soon, which requires a “society-wide emergency mobilisation of labour and resources” (p. 10).
There is one sense in which the paper by Spratt and Dunlop is “overblown rhetoric” indeed, and that is in its terminology and tone. The term “existential risk” suggests human extinction, while their conclusions don’t suggest anything like that. What they do suggest as the outcome of the “plausible worst case” scenario they discuss – see (c) to (e) above – is best described as global societal collapse. Such collapse would be the end of civilization as we know it indeed, but would not be the end of mankind. The overall tone of much of the paper is also rather apocalyptic – (d) above is a good example – and appears intended mainly to draw attention from the mainstream press (at which they have been quite successful, it appears). So, yeah …, there certainly is some overblown rhetoric here.
But let’s look beyond that at the actual content of the report. With regards to the general approach of the report, I see little reason for serious criticism. (1) is true, but should not be overstated. (2) is also true, of course, but these risks should not be overstated either. (3) is a normative statement, but appears to be widely held in national-security related fields. At least, in as far as I can see, it is common practice with regards to national security to develop plausible worst case scenarios to consider the possibility and necessity of preparation for what they predict as possible futures. And from a moral or social-philosophical point of view, that practice can be easily defended. I’m inclined to say that developing and assessing such scenarios is part of what a government and/or those in its service (particularly those tasked with national security) should do. That’s just responsible government.
The scenario sketched in (a) to (d) is supposed to be a “plausible worst case” scenario. (a) is, unfortunately, very plausible. Given what has been published about tipping points in the past years, (b) is also plausible, but that some tipping points will be crossed in a “too little, too late” scenario doesn’t necessarily imply that the most disastrous tipping points will be crossed.
(c), however, is probably too extreme. What Spratt and Dunlop write about coral reefs, the Amazon rain forest, and the Arctic is relatively uncontroversial, but that deadly heat conditions will lead to the displacement of a billion people seems an excessively high estimate. Parts of India, Pakistan, the Middle East, and Central America will indeed become uninhabitable due to heat, but those areas don’t currently have one billion inhabitants together (although the real number might be in roughly the same order) and not all of those people will flee. That two billion people will be affected by drought is not very controversial, on the other hand – I have seen several studies suggesting that – but that doesn’t imply that it will be equally severe for all of those people.
(d) doesn’t follow from (c), but a more charitable reading of their argument is that (d) leads to the national security problems mentioned under (e) above, and that those those ultimately (could) lead to societal collapse as mentioned under (d).
It seems to me that there are two serious weaknesses in Spratt and Dunlop’s argument. Firstly, (c) seems a bit extreme. Because of this, together with the paper’s terminology and tone the qualification “overblown rhetoric” is indeed not entirely inappropriate. Because (d) and (e) depend on (c), a more realistic scenario may break the argument (that is, (d) and (e) would no longer follow). Secondly, the report concludes that (f) we need to shift to zero emissions soon, but this conclusion appears out of thin air, other possible policies and adaptations are not considered, and it is not discussed whether this solution would be enough either.
So this raises two questions: How likely is global societal collapse by 2050, as Spratt and Dunlop predict as a possible (but not necessary or inevitable) future? And is switching to a zero emissions economy sufficient to avoid that?
I have attempted to answer both of these questions before. The short answer to the second question is “No, that is not enough”. For a longer answer, see The Lesser Dystopia. I don’t want to give a short answer to the first question, however. My longer answer can be found in On the Fragility of Civilization, but I’ll summarize and explain the main points of that answer in the following.
The basis of my answer to the question about the likelihood of global societal collapse (or “the end of civilization”) is a fairly simple model that can be graphically summarized as follows:
The top circle in this diagram represents natural disasters (top half: hurricane/typhoon; bottom half: drought). A natural disaster has three (here relevant) kinds of effects, which is shown in the diagram by the three arrows leading to the three circles on the second level. On the left: a natural disaster causes economic damage and thus decreases (red arrow) the size of the national economy. In the middle: a natural disaster causes (and thus increases) the number of evacuees or refugees. On the right: a natural disaster causes (and thus increases) mortality (especially in the case of drought- or flood-induced famines!), disease, PTSD, anxiety, depression, and so forth. The three circles in the middle are all related to civil unrest. Economic decline, increasing number of refugees/evacuees, and the various effects combined in the black and red circle on the right all increase dissatisfaction, which can turn into unrest and in extreme cases in rioting and even civil war or other kinds of armed conflict. (The arrow from the economy to unrest is red because it is economic decline which causes an increase in unrest. In other words, it’s an inverse relation like the other red arrow.) Finally, the size and health of the economy determines the ability of a society to cope with disaster, represented with the circle in the lower left, by providing food and shelter to refugees/evacuees, rebuilding devastated areas, and so forth.
With every natural disaster, the size of the three circles on the second row changes: the economy shrinks a bit, while the other two circles grow. If economic growth is larger than the damage caused by disasters, this is not a serious problem. The ability to cope with disaster will be large, refugees/evacuees are helped, disaster damage is repaired, and there is no or little increase in civil dissatisfaction or unrest.
But numbers of disasters are increasing and so is their size, and slowly the damage starts compounding. Damage repair starts falling behind or devastated areas are even abandoned; refugees can no longer all be housed and fed; and civil unrest starts to rise. The final stage of this process is a complete breakdown of social structure: civil war or societal collapse. This process – from a healthy economy with few evacuees and little dissatisfaction, to ever increasing problems – is graphically represented in this animation:
If the frequency and severity of natural disasters – droughts, floods, storms, and so forth – continues to increase, then it is no question whether this will happen, but only how fast. There are limits to the amount of natural disaster damage a society can cope with, and if the frequency and severity of natural disasters keeps increasing due to climate change, then it is inevitable that those limits will eventually be crossed. At that point, a society gradually slips into chaos. How long this takes depends very much on the starting situation. (Syria is already in a state of chaos; Germany can take a lot of hits before it even gets close.)
Furthermore, most countries have neighbors, and that matters in two different ways. Firstly, the state of a country’s economy will partially depend on the state of its neighbors’ economies because of trade. And secondly, a country may (willingly or not) receive refugees from its neighbors. Because of this, a collapsing country might drag its neighbors down with it. Trade may be especially important because there probably is some kind of tipping point in a world-wide application of this model: when the global trade network breaks down, collapse will almost certainly speed up.
In On the Fragility of Civilization I described a computer model based on the above. That computer model has 43 variables and 64 regions (as well as a bunch of model parameters), which means that every model year requires 43×64=2752 calculations. (Actually, it’s a few more.) That may sound like a lot, but the model is really ridiculously simple – way too simple to produce reliable predictions. At model settings that seemed rather conservative to me, but not excessively implausible, the model suggested global societal collapse in 20 to 30 years (or perhaps a bit more) from now.13 It would be nice, however, if some more capable simulation builders would try to create a more detailed and more realistic model of the compounding social and security effects of climate change and natural disasters.
On the basis of my model, Spratt and Dunlop’s prediction of possible (!) global societal collapse by 2050 does not seem unlikely. There is, however, a fundamental difference between their prediction and mine. If you’d look back to my summary of their argument above, you’ll find that their prediction of global societal collapse is a “plausible worst case” scenario and that it depends on significantly worsening effects of climate change. My model, on the other hand, does not depend on controversial, worst-case assumptions about what might happen to the climate, but merely on a gradual increase of the severity and frequency of natural disasters. In fact, the 20 to 30 years to collapse prediction assumes an increase in frequency and severity of natural disasters that is far slower than what we have experienced in the past decades. In other words, my model suggests that even with very conservative assumptions about the effects of climate change, global societal collapse by 2050 is quite likely. (And thus, this part of Spratt and Dunlop’s argument doesn’t necessarily have to depend on contentious claims about unexpectedly sever effects of climate change.)
I must repeat here, however, that my model is too simple for reliable predictions and that my aim in building it wasn’t to come up with an end date for civilization, but to better understand the social impacts of climate change. The most important things I learned from the model is that economic decline and refugees are likely to play key roles, but this is hardly surprising. What is, perhaps, more surprising is that while Spratt and Dunlop are also very much aware of the scope of the refugee problem, it plays no role in their conclusions.
Even in a best case scenario, the world-wide refugee population is likely to increase by 100s of millions by the middle of the current century. As national security experts have pointed out repeatedly, those are not numbers that can be handled by means of walls, fences, and armed guard posts. Those will eventually be overrun, and when that happens, security forces (and the societies they aim to protect) will be overwhelmed by a sudden unmanageable refugee influx hastening societal collapse.14 This is probably the most important reason why just switching to a zero-emissions economy is insufficient: without responsible refugee management (i.e. large-scale, international resettlement programs) societal collapse will spread like an oil-stain throughout the world.
Let’s return once more to the questions I asked in the beginning of this article. Is the report by Spratt and Dunlop “overblown rhetoric” indeed? To some extent it is, but that is a matter of style more than of substance. Their prediction that civilization as we know it might come to an end by 2050 is not without merit.15 Does Mann actually have the expertise to judge whether it is? No. He’s a climate scientist, not a national security expert. Are Spratt and Dunlop really the “doomists” Mann seems to think they are? No. By Mann’s own (implicit!) definition they are not, because they do not claim that global societal collapse (i.e. “the end of civilization”) is inevitable. In the contrary, they advocate policies to avoid it. And are “doomists” really as dangerous as Mann thinks they are?
Perhaps, that last question is the most important one, but to answer it we need to make an important distinction that Michael Mann refuses to make, namely that between apocalyptic fatalists (like Guy McPherson) and people who merely predict that catastrophe might occur but is not yet inevitable. I’m not sure what to call that second group, but “doomist” certainly sounds very inappropriate. Apocalyptic fatalism is harmful indeed for exactly the reason Mann mentions: they promote inaction (because fatalism makes an attempt at avoidance futile). But the second group is not harmful for the same reason. In the contrary, by pointing out the seriousness of the situation we are in they may even stir more people to action. (In contrast, and as mentioned above, Mann’s response to the paper by Spratt and Dunlop really sounds like he is saying that climate change is really not that bad and thus that we don’t have to worry. I don’t think he intends to give that impression, but that’s the effect he has. Without realizing it, his response to Spratt and Dunlop’s paper may be much more harmful than that paper itself.)
Unfortunately, Mann isn’t the only one who refuses to make a difference between apocalyptic fatalists and the second group, the press also frequently fails to make that distinction. This is, of course, the consequence of the press’s hunger for clickbait: a spectacular headline produces more advertisement revenue than a responsible analysis. Perhaps, the worst thing about the paper by Spratt and Dunlop is that they chose to feed that hunger for clickbait.
Back to the question: How dangerous are these “doomists” who really aren’t doomists but who are just warning for the catastrophe we are heading for if we don’t quickly change our ways? It’s difficult to be sure about the answer to that question without doing extensive psychological research about how people respond to different kind of messages and what motivates them to take action. However, I think it is pretty clear that the past decades of reporting on the effects of climate change have been largely ineffective. It’s time to try something else. It is time to tell people what really is at stake.
If you found this article and/or other articles in this blog useful or valuable, please consider making a small financial contribution to support this blog 𝐹=𝑚𝑎 and its author. You can find 𝐹=𝑚𝑎’s Patreon page here.
Notes
- David Spratt & Ian Dunlop (2019). Existential Climate-Related Security Risk: a Scenario Approach (Breakthrough).
- In the Washington Post.
- Source: New Scientist. Michael Mann shared this article on his Facebook page, which suggests that the quote is accurate. He also added the following “NOTE to Guy McPherson followers & doomers: Trolls get blocked here, whether they’re deniers or doomists.”
- Michael Mann (2017). “Doomsday scenarios are as harmful as climate change denial”, The Washington Post.
- Actually, this is a lie. I don’t take anyone’s word for anything just like that. I believe someone if she has a good argument and/or strong evidence, not just because she happens to be an authority in her field. Mann tends to have pretty solid evidence for his climate-change-related claims, however.
- See also: Fictionalism – or: Vaihinger, Scheffler, and Kübler-Ross at the End of the World.
- I wonder whether Mann is also going to tell the activists in the Extinction Rebellion movement that the term “extinction” in their name is inappropriately “doomist” and that they really shouldn’t worry that much.
- For an overview see the section titled “Shared Socioeconomic Pathways (SSPs)” of Stages of the Anthropocene.
- Rob Dellink, Jean Chateau, Elisa Lanzi, & Bertrand Magné (2017). “Long-term economic growth projections in the Shared Socioeconomic Pathways”, Global Environmental Change 42: 200-214, p. 202.
- See also: Economics as Malignant Make Believe, and (especially): Steve Keen (2011), Debunking Economics, Revised and Expanded edition (London: Zed Books).
- Such models were unable to predict the 2008 Great Recession, for example, because according to such models economic crises are impossible.
- David Wallace-Wells (2019), The Uninhabitable Earth: A Story of the Future (Allen Lane). Bill McKibben (2019), Falter: Has the Human Game Begun to Play Itself Out? (Wildfire).
- And obviously, at less conservative settings collapse comes faster, although due to inertia it cannot come much faster.
- But not causing it by itself. Again, it is compounding effects of natural disasters and secondary, human disasters that ultimately will bring down societies.
- Actually, even if what I write in The Lesser Dystopia is only half right, we have to completely overhaul civilization as we know it to avoid global societal collapse, so current civilization will come to an end either way. | http://www.lajosbrons.net/blog/doomists/ |
I create immersive experiences to teach, to train, to learn, to assist, to capture skills, to improve performance, and to change our minds. Over my career, I've developed for many virtual reality technologies, everything from CAVE systems to modern off-the-shelf VR headsets, and now looking toward the surprising innovations the near future holds..
My active research topics include VR therapies for neuropathic pain, the psychology of embodiment, VR-assisted physical therapy, AR display of real-time medical imaging, and kinesthetic/attention tracking feedback methods.
Contact
email:
[email protected]
Linkedin: | http://coreyshum.net/ |
Augmented, virtual, and mixed reality (AR, VR, and MR), collectively referred to as extended reality (XR), has the potential to transform most aspects of our lives, including the way we teach, conduct science, practice medicine, entertain ourselves, train professionals, interact socially, and more. Indeed, XR is envisioned to be the next interface for most of computing.
While current XR systems exist today, they have a long way to go to provide a tetherless experience approaching perceptual abilities of humans. There is a gap of several orders of magnitude between what is needed and achievable in the dimensions of performance, power, and usability. Overcoming this gap will require cross-layer optimization and hardware-software-algorithm co-design, targeting improved end-to-end user experience related metrics. This needs a concerted effort from academia and industry covering all layers of the system stack (e.g., hardware, compilers, operating systems), all algorithmic components comprising XR (e.g., computer vision, machine learning, graphics, optics, haptics, and more), and end-user applications software and middleware (e.g., game engines).
The current XR ecosystem is mostly proprietary. For system researchers, there have been no open-source benchmarks or open end-to-end systems to drive or benchmark hardware, compiler, or operating system research. For algorithm researchers, there are no open systems to benchmark the impact of new algorithms in the context of end-to-end user experience metrics. Similarly, developers/implementers of individual hardware and software components (e.g., a new SLAM hardware accelerator or a new software implementation of foveated rendering) do not have open access to benchmark their individual components in the context of end-to-end impact.
We aim to bring together academic and industry members to develop a community reference open-source testbed and benchmarking methodology for XR systems research and development that can mitigate the above challenges and propel advancements across the field.
Goals for the Testbed
Our goals for the testbed include the following:
- Standalone components that comprise representative AR/VR/MR workflows (e.g., odometry, eye tracking, hand tracking, scene reconstruction, reprojection, etc.). These components may be hardware or software implementations, written in various languages, and optimized for various current and future hardware (e.g., CPUs, DSPs, GPUs, accelerators).
- End-to-end XR system with a runtime to integrate the above components. The XR device runtime should provide:
- compliance with the OpenXR standard,
- real-time scheduling and resource management,
- offload and onload functionality for other edge devices and servers and the cloud to enable multi-tenant and multi-party experiences,
- plug-n-play functionality for replacing different implementations of a given component (in hardware or software),
- flexibility to instantiate different workflows consisting of subsets of components representing a variety of use cases in AR, VR, or MR.
- Edge and cloud server frameworks for multi-tenant and multi-party experiences and content serving.
- End-user applications, including middleware to represent a variety of single- and multi-user AR, VR, and MR use cases such as games, virtual tours, education, etc.
- Data sets and test stimuli to drive the testbed and telemetry, representing realistic use cases.
- Telemetry to provide extensive benchmarking and profiling information, ranging from detailed hardware measurements (e.g., performance and power related statistics) to end-to-end user experience metrics (e.g., motion to photon latency, image quality metrics, etc.).
Our goals are ambitious and require contributions from the wider XR, hardware, and software systems community. The consortium aims to bring together these communities, seeking and curating contributions towards the above goals for a common open-source community testbed. | https://illixr.org/about/goals |
This symposium will focus on various aspects of international research, applications and trends of robotic Innovations for the benefit of humanity, advanced human-robot systems and applied technologies, e.g. in the fields of robotics, telerobotics, simulator platforms and environment and mobile work machines as well as virtual reality/augmented reality and 3D modeling and simulation. The symposium will consist of keynote presentations on the state-of-the-art technology, workshops, and topical panels. All papers will have peer review and will be published with IMEKO and IEEE Guidelines.
------------------
SYMPOSIUM TOPICS WILL INCLUDE, BUT ARE NOT LIMITED TO, THE FOLLOWING-
- Robot Design Innovations
- Sensors/Smart Sensors their Integration / Fusion
- Advanced Controls and Actuators
- New Power Supplies
- Methods of Artificial Intelligence in Robotics
- Humanoid, Climbing/Walking, Service, and Autonomous Robots
- Anthropomorphic Robots/Mobile Robots
- Tele-existence / Telepresence
- Augmented Reality / Mixed Reality / Virtual Reality
- Communication with Realistic Sensations
- Intelligent CAD and IMS
- Visual and Auditory Displays
- Tactile and Force Displays
- Tools and Techniques for Modeling VR Systems
- Real-Time Computer Simulation
- Software Architectures for VR
- VR Interaction and Navigation Techniques, Distributed VR Systems and Motion Tracking
- VR Input and Output Devices
- Innovative Applications of VR
- Human Factors in VR
- Evaluation of VR Techniques and Systems
- Internet and VRML Application of VR in all areas
- Interactive Art and Entertainment
- Artificial Life
- Education and Entertainment Robots
- Medical and Healthcare Robots
- Micro and Nano Robots
- Innovative Robotics Applications
-------------------------------------
For more information, contact [email protected]
For registration, go here
For paper submission, go here
For hotel/transportation, go here
For TC-17/ISMCR History, go here
ISMCR Symposium Committee
General Chair - Dr Zafar Taqvi, University of Houston-Clear Lake, USA
General Vice Chairs
- Prof. Yvan Baudoin, Belgium
- Prof. Masahiko Inami, Japan
Symposium Advisors
- Prof Susumu Tachi, Japan
- Norman Chaffee, CLCTS/USA
- Reese Terry, IEEE Fellow
Symposium Program Chair
- Dr Thomas L. Harman, UHCL, Conference Program Committee
Finance Chair
- Dr Ishaq Unwala, UHCL/USA
Publication Chair
- Dr Trung Pham, University of Talca, Chile
Internet/Social Media Liaison
- Sarah Taqvi, University of Houston Journalism Department
Registration Chair
- Dr. Irfan Khan, Texas A&M University
Logistics Coordinator
- Michelle Patrick, CEO, ADAK Digital Inc. | http://ismcr.org/ |
When we think of people in the immersive tech industry, who comes to mind? It’s easy to look at the people endorsing the product, the people companies use to help market the product, but often times, we don’t think about the individuals who are innovating the immersive landscape, those working first-hand on the technology that makes this possible. It is important to spotlight these people, to bring attention to them, their hard work and accomplishments. To celebrate International Women’s Day, let’s look at four female figures who have set new standards for the industry through new technological developments as well as finding new and creative ways to implement VR in other settings.
Liv Erickson
To start with, there is Liv Erickson, who is currently a Senior Manager at Mozilla. She has extensive experience in software development for XR, 3D and metaverse technologies that contribute to the evolution of graphics and cloud computing. On top of that, she also has experience in management, strategic planning and collaborating with various teams.
Throughout her 12-year career, she has also worked for Microsoft and Amazon to develop and manage developments of various projects. In that timeframe, she has created several open-source VR and AR applications, a site that visualises Excel charts in 3D, co-developed the Digital Afterlife Project which addresses the challenges involving user data after the user themselves pass away and much more. She is currently working on Mozilla Hubs, a 3D collaboration platform inside a customisable space which works on desktop, mobile, VR and even on a browser.
Check out her other accomplishments through her LinkedIn.
Nicole Lazzaro
Nicole Lazzaro founded XEODesign, Inc., a firm that consults with clients on how to increase engagement with play. They identify ways to improve engagement in gaming and develop an understanding of players and what motivates them in order to improve the experience. She has almost 30 years of experience in designing the player experience and virtual reality is no exception.
She has developed an XR experience called Follow the White Rabbit, a mind-bending VR puzzle adventure immersing the user into a magical spectacle. She also developed the first iPhone game to utilise the accelerometer that measures the acceleration of motion, which would pave the way for games like Doodle Jump. Other efforts include creating the 4 Keys to Fun, a model that insights on the important elements required to make a game fun and interesting and consulting with the Obama White House and the US State Department on how to use games to improve the state of our world.
Check out her other accomplishments through her LinkedIn.
Rosie Summers
Rosie Summers is a 3D animator at XR Games. Despite her career only being active in the last 5 years, she has made strides as an animator for VR projects. These include helping to bring worlds to life with titles such as Angry Birds Movie 2 VR: Under Pressure and Zombieland VR and most notably, carving out her own niche as a Virtual Reality Artist.
What comes with creating art in VR is the performance. She essentially draws an image while in the digital world through the use of motion controls, allowing her dynamic hand movements to paint the picture. As she paints, the development of the painting is shown to the audience on a screen, so they can see how it is being made in real-time. Because of this, her paintings translate into a live spectacle. These live paintings have been showcased for clients such as Facebook, National Football Museum, the BBC and Riot Games. She has attended a variety of festivals and workshops throughout the UK to spread the power of this medium to the wider audience.
Check out her own website to get an improved look at her skills and portfolio.
Yuka Kojima
Yuka Kojima is the CEO and co-founder of FOVE, who made the first eye tracking VR headset. This means that the technology is able to capture the user’s eye movement in an accurate way, with one site even stating it as ‘Iron Man meets Oculus’. It will undoubtedly pioneer the technology and create the potential for even more immersive experiences.
In 2017, she made it into the top 100 list of most powerful women according to Forbes Japan and even made the front cover of the magazine. Before her success, she had set up a Kickstarter Campaign back in 2014 for the headset. She also previously worked at Sony Computer Entertainment Japan before leaving to branch out into VR.
Check out the FOVE website to discover more about her company.
Conclusion
Mersus Technologies have been producing VR experiences for the past 5 years, and are currently further developing our Avatar Academy platform. Several aspects of Avatar Academy require serious research and development of new systems, and our female staff members are integral to this.
Women are represented at every level of the Mersus team. We have female immersive developers, creative developers, concept designers, middle and senior management.
Mersus Technologies strives to create an environment in which all members of our community should expect to be able to thrive, be respected, and have a real opportunity to participate in and contribute to the company’s activities so that they can achieve their fullest potential. We are always seeking to increase the diversity of our team.
We understand that embracing diversity makes our workforce more innovative, resilient, and high-performing. We truly believe that teams that are as diverse as possible make better apps, and women play a crucial role in what is a traditionally male-dominated technology sector.
On International Women’s Day 2022, we look forward to a future where women are better represented across all technological fields. We in Mersus are doing our part.
Through Avatar Academy the women on our team play an integral role in setting new standards for the Immersive industry. The woman to keep an eye on for 2022 in the Immersive space are Palak, Fiona Moran, Polly Wong, and Brenda Mannion. | https://avataracademy.io/tag/figures/ |
The symposium will focus on various aspects of international research, applications and trends of robotic Innovations for the benefit of humanity, advanced human-robot systems and applied technologies, e.g. in the fields of robotics, telerobotics, simulator platforms and environment and mobile work machines as well as virtual reality/augmented reality and 3D modeling and simulation. The symposium will consist of keynote presentations on the state-of art technology, and topical panels. All papers will have peer review and will be published with IMEKO and IEEE Guidelines.
Symposium topics will include but not limited to the following-
- Robot Design Innovations
- Sensors/Smart Sensors their Integration / Fusion
- Advanced Controls and Actuators
- New Power Supplies
- Methods of Artificial Intelligence in Robotics
- Humanoid , Climbing/Walking, Service, and Autonomous Robots
- Anthropomorphic Robots/Mobile Robots
- Telexistence / Telepresence
- Augmented Reality / Mixed Reality / Virtual Reality
- Communication with Realistic Sensations
- Intelligent CAD and IMS
- Visual and Auditory Displays
- Tactile and Force Displays
- Tools and Techniques for Modeling VR Systems
- Real Time Computer Simulation
- Software Architectures for VR
- VR Interaction and Navigation Techniques, Distributed VR Systems and Motion Tracking
- VR Input and Output Devices
- Innovative Applications of VR
- Human Factors in VR
- Evaluation of VR Techniques and Systems
- Internet and VRML Application of VR in all areas
- Interactive Art and Entertainment
- Artificial Life
- Education and Entertainment Robots
- Medical and Healthcare Robots
- Micro and Nano Robots
- Innovative Robotics Applications
DEADLINES:
NOTiFICATION of paper acceptance: July 15th, 2019
SUBMISSION of camera ready paper: August 15th, 2019
- Highlight of the symposium will include two Keynotes, two luncheon special presentations, 45 technical papers, visit to local robotics lab and a special dinner treat at Space Center Houston.
- Proceeding of all accepted papers will be available in memory media. IEEE Xplore is being contacted for their publication. Selected papers with modifications and additional review will be eligible for ACTA IMEKO, the quality publication of IMEKO.
- All authors and attendees will be required to register for the event. Papers of only those who register will be published.
- Symposium will start with reception on Thursday, September 19th evening and end with Lab visit Saturday September 21st
THIS IS AN ADVANCE INFORMATION TO THE AUTHORS BEFORE OUR WEBSITE IS READY WHICH WILL HAVE ALL INFORMATION ON REGISTRATION, PROGRAM SCHEDULE, HOTEL INFORMATION, VISA REQUIREMENT (AND HELP) ETC. | https://www.bemeko.be/news/913-ismcr2019.html |
Detecting hand gestures can provide a useful non-contact interaction tool with machines and systems and it has been employed for a wide range of applications. Recently, smart glasses and Virtual Reality (VR) headsets become viable solutions for various training applications ranging from surgical training in medicine to operator training for heavy equipment. A major challenge in these systems is to interact with the training platform since user’s view is blocked. In this paper, we present hand gesture detection using deep learning as a means of interaction with the VR system. Real world images are streamed by a camera mounted on the VR headset. User’s hand gestures are detected and blended into the virtual images providing more immersive and interactive user experience. | https://link.springer.com/chapter/10.1007%2F978-981-13-6447-1_63 |
United Talent Agency is one of the entertainment industry's premier talent and literary agencies, representing many of the world's most widely-known figures in every current and emerging area of entertainment, including motion pictures, television, books, music, digital media and live entertainment.
Article | March 16, 2020
Early research on the latest devices shows that the VR technologies of today have made significant progress on the fundamental comfort and usability challenges that frustrated previous attempts to commercialize the technology. Virtual reality has inspired such fervent interest and debate partly because the fundamental building blocks of the technology have existed since the late 60s. The challenge of investors, developers, and engineers has been to transform that fundamental technology into a positive user experience and a commercial success.
Article | June 8, 2021
Last year the luxury fashion sector saw a lot of online chatter around men’s fashion, and some of the biggest conversation triggers were the same social movement – breaking gender stereotypes and fighting against toxic masculinity.
Many brands and celebrities joined this cause last year and spread their influence through fashion and pop culture.
Article | March 18, 2020
It's been nearly four years since HTC and Facebook's Oculus launched this generation of virtual reality (VR) with the Vive and Rift headsets, respectively. They were supposed to be revolutionary products that would open up a new world of innovation that would disrupt nearly every aspect of our lives. Instead of being in every home, as hoped, VR is still a novelty with only a few million headsets in the market. HTC and Oculus are actually trailing Sony's (NYSE:SNE) PSVR, which recently surpassed 5 million headsets sold since launch. And the number of headsets being used on a regular basis is well short of that number. So why hasn't VR taken off, and can HTC and Oculus fix what's ailing the industry?
Article | February 20, 2020
Sony’s PlayStation team and Facebook, including Oculus VR, have withdrawn from this year’s Game Developers Conference, citing health concerns surrounding the novel coronavirus, COVID-19.“Out of concern for the health and safety of our employees, our dev partners, and the GDC community, Facebook will not be attending this year’s Game Developer Conference due to the evolving public health risks related to COVID-19,” a Facebook spokesperson said in a statement emailed to Polygon. “We still plan to share the exciting announcements we had planned for the show through videos, online Q&As, and more, and will plan to host GDC partner meetings remotely in the coming weeks.”
Conference
Keep me plugged in with the best
Join thousands of your peers and receive our weekly newsletter with the latest news, industry events, customer insights, and market intelligence.
Welcome back!
Put your news, events, company, and promotional content in front of thousands of your peers and potential customers. | https://entertainment.report/articles/e3-vr-showcase-roundup-everything-announced-for-oculus-quest-playstation-vr-valve-index-and-more |
6th Dimension VR was established out of a passion for Virtual Reality. The company's goal is to provide unforgettable, immersive VR experiences and games for all ages—whether it's action, adventure, fantasy, 3D painting, racing or space simulator, or virtual travel with Google Earth. To achieve this goal, 6th Dimension VR uses high-tech equipment such as HTC Vive headsets with Lighthouses and controllers, as well as Oculus Rift. Apart from virtual reality, the venue also offers a wide selection of retro or modern console games, including Nintendo Switch or the 1990s Super Nintendo Entertainment System. | https://www.groupon.com/deals/6th-dimension-vr?deal_option=8be539a0-5c15-48f4-b9b6-58415ea74220 |
This week, Felix & Paul Studios are set to return to Sundance with the world premiere of their new immersive Eminem documentary “Marshall From Detroit.” The 21 minute long 360-degree film, which has been directed by Caleb Slain, explores Detroit through the eyes of the rapper, who takes viewers on a nighttime car ride through his home town.
Felix & Paul Studios have long been on the forefront of immersive video, and often pushed the boundaries of VR storytelling by adapting different forms of cinematic storytelling to the medium that go beyond traditional 360-degree video capture. In the past, this included time-lapse recordings, stop motion video and more. Now, the studio can add 360-degree shots from a moving vehicle to that list.
“Marshall From Detroit” is premiering at Sundance as part of the New Frontier Program this Saturday. It will be available on Facebook’s Oculus Go and Rift VR headsets next month. | https://variety.com/2019/digital/news/eminem-marshall-from-detroit-trailer-1203113703/ |
In light of societal and environmental challenges, companies are showing more commitment and must adapt their CSR policy. It is a lever for innovation, an opportunity to adapt to tomorrow's world and a lever of sustainable performance. It is in this framework that we intervened with a telecom operator who sought to conduct a transformation project aimed at juggling the work of its teams in the context of these challenges.
Launch of CSR approach
For the launch of its CSR approach, an international telecom operator has asked Sofrecom to assist them. Within this framework, a team of change management consultants and CSR experts was mobilized to meet our client's expectations.
The objective of the project is to define and establish governance and a roadmap for the CSR transformation project. This involves:
- Defining and implementing CSR governance to ensure compliance with the guidelines, its approach and to report on the results obtained
- Defining environmental objectives: environment and digital equality
- Harmonizing communication between the different teams
- Organizing and monitoring Green KPIs to measure the performance of the CSR approach and evaluate its progress. Implementing tools to improve its management.
- Defining eco-design processes, an essential lever for reducing CO2 emissions, limiting the impact on natural resources and building a responsible digital environment.
- Assisting in the preparation for the Afnor ISO 14001 audit.
This certification aims to improve environmental performance and contributes to the sustainable development goals. It proves the establishment of a culture of listening to interested parties and taking into account local environmental issues (pollution, etc.) and global environmental issues (biodiversity, climate change, preservation of natural resources, etc.).
Deploying a responsible business model and adopting a priority action plan for the development of CSR makes it possible to mobilize all of the company's players. It is in this sense that our client wished to structure a common approach within its various entities, to deploy its action plan while capitalizing on the various initiatives in place. Its ambition is to federate and unite its teams around a common ambition: responsible commitment, one of the pillars of its strategy for 2025.
To assist in this CSR transformation, our consultants proceeded in stages:
- Listening to the various CSR correspondents in the entities
- Conducting an environmental analysis in the different departments
- Mapping of the processes impacted by the eco-design approach of products and services
- Identifying key indicators for monitoring the effectiveness of the action plan
A transverse mobilization around objectives and governance, clarified and shared by all
All of the requests and impact analysis carried out have made it possible to identify the various initiatives in place and the best practices to be generalized, to adapt the processes for launching products and services to introduce the eco-design method and then to support the teams in the change management necessary for the success of the project.
The various initiatives implemented have enabled the alignment of CSR objectives across all departments, and specific indicators for monitoring ambitions have been set up and CSR correspondents appointed in the various departments to ensure that the action plan is monitored over time.
To increase efficiency and eliminate sources of error, these indicators have been industrialized. The results are now shared in a monthly review between the various stakeholders. Thanks to the introduction of dialogues and working groups, exchanges have been facilitated and harmonized.
All of these elements allowed the operator to prepare for the Afnor ISO 140011 certification audit which was successfully concluded. | https://www.sofrecom.com/en/news-insights/csr-transformation-plan.html |
COMPANIES and boards that continue to view sustainability as a buzzword or worse, greenwash it with a ‘check-in-the-box’ approach, are doing a grave disservice to the company and its shareholders in protecting the business longevity and future viability.
There are many definitions of sustainability. However, understanding the journey an organisation may undertake to arrive at sustainability provides the necessary context to the expected mindset.
Against an increasingly volatile, uncertain, complex and ambiguous (VUCA) backdrop, intensified by regulatory and investor demands, leading companies have identified alternative business strategies and have transitioned their businesses. This is done in tandem with making the leap towards greater accountability and transparency in their overall business operations and ethics.
Many have progressed from corporate governance (CG), corporate responsibility (CR), corporate social responsibility (CSR) to Sustainability, and beyond that, to contribute to the wider societal goal of sustainable development, otherwise known as the Sustainable Development Goals (SDGs) or the Global Goals.
However, too many have not grasped the underlying intent of the sustainability-mindset, which, in essence, is a proactive approach to holistic risk management to ensure business success and longevity.
As a result, many continue to focus on superficial reporting measures and short-term financial performance, without genuine integration of the principles of sustainability in the business. No doubt, the process can be challenging and may sometimes require rethinking current decisions, which impacts future considerations of the business, community and planet.
A realistic and meaningful long-term strategy
The impact of sustainability-related risks to business operations and the bottom line is evident from news headlines on corporate corruption to environmental pollution, to name a few. Numerous reports published over the last few years present a recurring theme – the reports reinforce the strong correlation between Sustainability and corporate value.
According to the Business and Sustainable Development Commission, alignment with the Sustainable Development Goals (SDGs) will require a step-change in both public and private investments. It also calls for business and world leaders to ‘strike out in new directions’ and ‘embrace more sustainable and inclusive economic models’.
By doing so, businesses stand to open-up some USD 12 trillion of market opportunities in the four (4) economic systems and create 380 million new jobs by 2030 . These are food and agriculture, cities, energy and materials, health and well-being – all of which, represent about 60% of the real economy today. Very clearly, the world needs to act urgently, and this is not a new thinking.
As early as 2011, 94% of respondents from leading companies say that they have integrated Sustainability into strategic planning. Fast forward to 2019, it is interesting to note that, globally, more CEOs have turned their focus inward, to centre around strengthening their companies from within, as they adapt to the changing dynamics of world politics and economics, especially the newly erected barriers between markets – both trade and labour.
According to PwC’s 22nd Global Annual CEO Survey, CEOs are ‘less bothered by the broad existential threats’, such as terrorism and climate change that rose in rankings last year and are more ‘extremely concerned’ about the ease of doing business in the markets where they operate.
The revenue and expansion opportunities CEOs identify are also more internally oriented and closer to home. 35% cited over-regulation and policy uncertainty as the top threats in 2019, followed by availability of key skills at 34%, trade conflicts at 31%, cyber threats, geopolitical uncertainty and protectionism at 30%, populism and speed of technological change making up 28%, with exchange rate volatility at 26%.
These are all risks and threats that would immediately be identified and become the emphasis of an organisation, if the impact-based principles of the Sustainability approach had been the key strategy.
It is proven time and time again, that companies and boards that take a long-term view of the business and perform well in sustainability issues tend to outperform their peers on a variety of financial indicators, such as share price, cost of capital, etc.
The principles and the best practices of sustainability will be key drivers for long-term value. This is evidenced in the large regional and global brands such as Unilever, Nestle, Shell, Hewlett Packard, and on a more local level, IOI Corp, Petronas Chemicals Group, Sime Darby Plantations and Sime Darby Berhad, Kuala Lumpur Kepong, British American Tabacco .
There needs to be a mindset shift about sustainability from an inconvenient cost to a business opportunity and long-term view, that leads to a competitive advantage.
Sustainability: A board-level concern
Sustainability encompasses Environmental, Economic and Social (EES) governance performance of a company with the intent to determine the impact caused by its everyday activities. The performance report brings together areas that have been viewed separate and disparate, and hence, not discussed in the same forum, resulting in siloed mentality or strategies. The external benefit is that stakeholders will be able to understand and connect the financial and non-financial initiatives to determine an organisation’s real value.
In a highly dynamic and globalised business environment, companies are becoming more susceptible to Sustainability-related risks. Proactively addressing these risks are becoming central to corporate competitiveness and long-term goals.
In the past, sustainability was centred primarily on environmental or social issues, with no clear correlation to the businesses, its financial and non-financial operations. This has since developed into a broader and holistic view, a vital component in creating long-term value, especially in driving growth, return on capital and managing risks.
These risks are apparent, as they have an impact on corporate reputation, competitiveness and profitability. What used to be considered non-financial risks, EES concerns are now defined as material financial risks.
When these issues become material to a company’s performance, the responsibility lies on boards to act.
The expectation on boards – the global sustainability agenda
Investor activism is on the rise. Technology and the internet have paved the way for investors to gain greater access to information, in compressed time frames, forcing companies to either be proactive with their strategies or extremely nimble in their response.
Perpetual connectivity and a pervasive network have allowed for a more informed and educated audience.
Investors are increasingly raising material issues and sustainability strategies in their conversations not only with management, but also with corporate boards. They are intensifying the pressure on boards to provide transparency through full disclosure of the company’s sustainability risks.
Indeed, the spotlight is falling on boards to demonstrate clear understanding and oversight of real business risks and opportunities that affect their corporate value while regulators closely monitor corporate governance, culture and conduct.
The challenge lies in its uncertainty, unpredictability and external forces when dealing with EES concerns.
With so many challenges that are beyond the control of the boards and company, it will require an aligned and united board to signal the cohesive organisational shift to sustainability.
Therefore, it is indeed time for corporate boards and directors to take Sustainability seriously and to drive it from the top. Making a clear commitment top down and effecting a well-defined plan that is kept focused by clear goals and outcomes, will be key.
This article is taken from Astro Awani.
Photo by Kai Gradert on Unsplash. | https://pulse.icdm.com.my/article/the-case-for-sustainability-as-a-strategy/ |
[ACCESS RESTRICTED TO THE UNIVERSITY OF MISSOURI--COLUMBIA AT REQUEST OF AUTHOR.] Corporate social responsibility (CSR) is evolving into an essential component of brand strategy. The way consumers interact with brands is changing, prompting modern consumers to have higher expectations for the companies they give their business to. This study will explore how organizations decide upon one or more corporate social responsibility initiative(s) to pursue. Corporate social responsibility can be defined as any action a company takes that furthers some social good beyond the interests of the firm. This research will use satisficing theory, which allows management to weigh decisions using a cost/benefit structure and the Symbiotic Sustainability Model to evaluate how corporate/nonprofit relationships create value for each respective organization. Examining CSR within this context allows for a deeper, more nuanced understanding of how firms can gain social capital while improving societal affairs. The ethics inherent in CSR decisions will also be examined. Analyzing the decision-making processes and ethical reasoning behind CSR initiatives will provide new insight on the field of CSR. This study will build upon extant literature and use semi-structured interviews with company executives to reveal a clearer picture of why firms choose to engage in one CSR strategy over another.
Degree
M.A. | https://mospace.umsystem.edu/xmlui/handle/10355/70085 |
The key success factor for PTTEP to be a sustainable company is its capability to build and manage relationships with stakeholders and how the Company mitigates impact from operations to community and environment where it operates. Accountable to enhance a best practice of issue and stakeholder management, PTTEP has therefore developed an Issue and Stakeholder Management System (ISMS); a tool to visualize social risks of both domestic and international operations/ projects with the same standard. It is a proactive system with the guideline on how to analyze, plan, evaluate and monitor the social Impact. Emphasizing on social risk analysis, ISMS demonstrates on how to mitigate social risks by controlling impact from operation to community probability, creating or amending existing community development programs as appropriated, and conducting a proper manner of stakeholder engagement.
The ISMS framework covers 4 key processes:
- Define: Clearly define framework in relation to internal and external context which may impact the objective
- Analyze: Identify, evaluate and prioritize social risks and stakeholders in order to prepare an issue and stakeholder management plan
- Execute: Implement issue and stakeholder management plan and monitor progress
- Evaluate: Evaluate the effectiveness and communicate results to stakeholders
ISMS is applied by PTTEP and all subsidiaries throughout the lifecycle of every project which includes exploration, production and abandonment. Since 2015 PTTEP has deployed ISMS to all operations worldwide to facilitate the process of making key decision with regard to social risk and stakeholder issues. To ensure the effective deployment, External Relations staff of each operating asset shall conduct ISMS workshop every year to revisit social risk analysis and submit a report to Communications and Public Affairs Division. Furthermore on a monthly basis, External Relations Officers at each location shall monitor the progress and result of engagement activity and report to Corporate.
A good example of ISMS capability to analyze and identify risk involves the case of a local group of fishermen who PTTEP was able to identify as opinion leaders in the area and furthermore identify their concerns which might impact the company. Based on the mitigation plan developed under ISMS, PTTEP approached and worked with the group of fishermen to implement a CSR project. Today, this group of fishermen has become partners in implementing the “Crab Hatchery Learning Center”, one of the most successful CSR projects which create economic value for fishermen and at the same time contributes to environmental conservation at the operations area of Petroleum Supply Base in Songkhla.
To evaluate the result, The company conducts the Stakeholder Commitment Survey every three years to assess the level of commitment in various PTTEP project areas. The survey results are used to improve the effectiveness of social projects. The goal of effective community relations management is to raise primary stakeholder commitment level to “Support” which is the highest level of engagement. In 2016 the Company has conducted the Stakeholder Commitment Survey in three areas of exploration and production projects in Thailand. The areas surveyed were the Petroleum Development Support Base in Songkhla; S1 Project in Kamphaengphet, Sukhothai, and Phitsanulok; and PTTEP1 Project in Suphanburi and Nakornpathom.
In addition, it is intended that ISMS shall incorporate social risk consideration into business decision process. Through ISMS, the identified of social impacts development of appropriate mitigation plans are important factors in managing PTTEP footprint as well as community development plan to offset residual impacts, which enables PTTEP to make that the best interest for stakeholders and the company as a whole. | http://www.pttep.com/en/Sustainabledevelopment/Widersociety/Issueandstakeholdermanagement.aspx |
Established in the year 1965, “Group Premier” (GP) is the pioneer in its engineering and manufacturing in the fields of Irrigation & Entertainment.
“Premier irrigation Adritec Private Limited” is the first and pioneering manufacturer of the modern water saving Irrigation systems in India and it is now an ISO 9001 & ISO 14001 Company.
Our philosophy of Corporate Social Responsibility (CSR) is based on the ‘Triple Bottom line’ approach which consists of “People, Planet and Profit". This measure helps us to be more conscious of our social and moral responsibilities. We understand our responsibilities towards the environment, employees and the society at large. For us these responsibilities are not just about statutory compliance but also about creating long-term social value. Regular and responsive engagement with farmer communities, employees, regulators, business partners, dealers &suppliers is central to our approach to CSR.
Our vision is being recognized by society as a responsible corporate and we endeavor to grow our business whilst reducing the environmental impact of our operations and increasing our positive social impact.
The essence of CSR comprises of philanthropic, corporate, ethical, environmental and legal as well as economic responsibility and we have taken various initiatives and adopted best practices towards persevering the nature, protecting the environment, contributing to the economic development and ensuring improvement in the quality of life of the people in the society at large.
The Company pledges its support to the initiatives of the Ministry of Corporate Affairs and in furtherance of the same and looks to adopt the CSR activities in harmony with the requirements of the Companies Act, 2013, Rules & Schedules thereof, subject to such amendments as may be made from time to time.
A CSR Committee comprising of three or more Directors will design, formulate & monitor the CSR activities. The Company Secretary shall act as the Secretary to the Committee. The Committee will be entrusted to discharge the following functions:-
Pursuant to section 135 (5) of the Companies Act, 2013, the Board of the Company shall ensure that the Company spends, in every financial year, at least two percent of the average net profits of the Company made during the three immediately preceding financial years, in pursuance of its CSR Policy.
CSR expenditure will include all expenditure, direct and indirect, incurred by the Company on CSR Programs undertaken in accordance with the approved CSR Plan. It shall include contribution to corpus for projects or programs relating to the CSR activities approved by the Board on the recommendation of its CSR Committee.
The CSR policy shall be implemented and reviewed from time to time by the Corporate Social Responsibility Committee. The Committee shall ensure that the Company is incurring the specified amount of CSR expenditure every financial year for the approved purposes.
The Company may on its own or through a registered trust or a registered society or a Company established by the Company or its holding or subsidiary or associate company under section 8 of theCompanies act, 2013, may undertake its CSR activities.
This CSR Policy is approved by the Board of Directors of the Company held on 14.12.2018 and will be in force till such time it is modified or amended by the Board of Directors on recommendations of the CSR Committee. | https://pial.in/csr_policy.php |
CSR Impact Assessment
SoulAce conducts CSR Impact Assessment as a 3rd party for development and CSR projects/initiatives of corporates to understand and communicate to the relevant stakeholders the tangible and intangible changes in the lives of the communities where the projects were implemented. This helps to understand the overall outcome and impact of the project from the point of view of the beneficiaries.
SoulAce has conducted more than 60 different CSR Impact Assessment studies till date.
CSR Impact Assessment study is also done for prospective projects to be taken up by the companies to understand the potential changes a project might bring in the lives of the communities in a region. The Impact Assessment is thus focused to layout an action plan to mitigate damages if any to the community and environment in the region.
Social Impact Assessment includes the process of analyzing, monitoring, evaluating and managing social outcomes, both positive and negative, of planned and unplanned interventions by corporate, government or organizations.
The benefitsof conducting Impact Assessment are;
- Helps build on local know how and utilizes participatory activities to analyze the concerns of interested and affected parties.
- Involves all the stakeholders in the assessment, analyze the alternatives, and monitor the planned intervention.
- Help understand the change in the lives of the beneficiaries of the project.
- Help the funder in understanding the efficiency & effectiveness of the project and deploy the funding accordingly.
- Help in understanding the benefits on the lives of indirect beneficiaries of the projects undertaken.
- Help understand the Sustainability aspects of the projects being implemented.
- Help build in a feedback system for any course correction needed.
- Help in planning of future projects and also to exit projects which are not doing well.
SoulAce with its team of experts in various specialized fields conduct Impact Assessment studies to cover various issues touched by the CSR initiative of an organization. The report helps the company in planning for future projects where they can determine which project to exit and which project to scale up. Periodic assessments conducted for long term projects help the company in keeping track of the progress of the initiative and thus helps in better program management. | https://soulace.in/impact-assessment.php |
Definition CSR is a new terminology, but its hardly a new concept. As in third century BC, Kautilya’s has dealt with the management of people and power commerce and taxations, standardization of weights and measures, and more peace.
3
Definition Raksha (protection): risk management
Vriddhi (growth): stakeholder value enhancement Palana (maintenance): compliance with the law Yokakshema (well-being): corporate social responsibility
4
CSR Corporate Social Responsibility is defined as “achieving commercial success in ways that honor ethical values, respect people, communities & the natural environment.” We can also say that CSR means addressing the legal, ethical, commercial & other expectations society has for business, & making decisions that fairly balance the claims of all key stakeholders. In its simplest terms it is a moral check on: “what you do, how you do it”
5
Credibility Reputation and Brand Enhancement .
Accountability and Transparency. Risk Management . Retention of staff It attracts green and ethical investment It attracts ethically conscious customers Reduction in costs through re-cycling It differentiates the firm from its competitor and can be a source of competitive advantage Increased profitability in the long run
6
Issues Health Environment Agriculture Microfinance Sanitation HIV
Childcare Development Education
7
Issues Rehabilitation & Resettlement Slum Improvement Disaster
Livelihood Women Enpowerment
8
Challenges How to measure CSR . Lack of Resources Accountability.
Social Compliance Cultural Diversity & Understanding of Local Law. Lack of Role of Media No Guidelines No Knowledge Lack of consent in implementing the policies.
9
CSR Index A recent survey of 20,000 people in 20 countries offers some fascinating insights into the way consumers, and societies at large, perceive the social and environmental responsibilities of business. Corporate Social Responsibility Monitor 2001: Global Public Opinion on the Changing Role of Companies identifies those aspects of corporate practice that matter most to the general public. It also reveals some intriguing differences in priorities between different regions of the world. The survey was undertaken by Environics International, and involved interviews with around 1,000 people in each of 20 countries including the USA, Canada, Mexico, Britain, France, Germany, Japan, India, Russia and Nigeria. The key findings are as follows. 1. Significant numbers of investors take a company's social performance into consideration when making investment decisions In the USA, where 61% of people own shares, more than a quarter said they had bought or sold shares on the basis of a company's social performance. A similar picture emerged in Canada, Japan, Britain and Italy . Percentage of share-owners who have bought or sold shares because of a company's social performance, by country
10
Conclusion and Suggestions
Conduct workshops to engage with staff and suppliers to explore areas of risk . Develop interactive intranet sites that show case examples of good practice, or build in opportunities for promotion of good practice at staff meetings . Review company policy and procedures to ensure values are consistent – procurement, recruitment, training, appraisals and exit interviews . Consult and involve staff more in the running of a business . Provide feedback questionnaires for employees, customers and suppliers – to show the organization is living its values .
11
Conclusion and Suggestions
Employee Practices. Encourage Higher standards at Workplace. Encourage Community Activities.
12
CSR - SCORE In a report titled, 'Ethical Asia', by global research firm CLSA, Reliance Industries, Australian airline Cathay Pacific and Japanese conglomerate Mitsubishi UFG have been named as the companies maintaining the highest CSR standards in the region and have been termed the region's 'corporate good guys'.
13
Conclusion In conclusion, let me share a quotation from Aristotle: “We are what we repeatedly do. Excellence, therefore, is not an act, but a habit.” If ethical leadership, too, were to become not just an act, but a habit, leaders could truly make the world a better place to live in.
Similar presentations
© 2021 SlidePlayer.com Inc.
All rights reserved. | https://slideplayer.com/slide/4550186/ |
What is the problem?
The current regulatory landscape for AI in the EU is fragmented, and concerns have been raised regarding cooperation, coordination and consistent application of EU law.
Who should act?
The European Commission, European Parliament and the European Council should address this in the development of the proposed legislative framework for AI (expected 2021).
The Recommendation
Establish an independent European Union Agency for AI.
The Agency should:
- Make Recommendations addressed to the European Parliament, the European Council, or the Commission for legislative amendments
- Identify potential red lines or restrictions for AI development, deploymentand use that violates human rights and/or has significant negative societal impacts
- Develop and promulgate general guidanceon legal concepts and regulatory issues of AI
- Setbenchmarks for enforcement
- Support and advise EU-level institutions, bodies and agencies and national competent authorities in Member Statesto fulfil their ethical and human rights obligations and to protect the rule of law
- Maintain an AI risk alert system
- Assist incoordinating the mandates and actions of the national competent authorities of Member States
- Develop harmonised andobjective criteria for risk assessment and/or conformity assessment
- Monitor and/or coordinate the evaluation of the operation of conformity assessment and/or certification schemes
- Cooperate, liaise, exchange information, promote public dialogue, best practices and training activities
- Ensure complementarity and synergy between its activities and other Community programmes and initiatives,
- Promote the adoption of regulatory sandboxes
- Promote the European Union’s AI approach through international cooperation
Key Considerations
In creating and/or implementing the European Union Agency for AI, key considerations are:
- Making the Agency operational as soon as possible, even if on a provisional or pilot basis.
- Strength of the underpinning legislative framework (establishing the Agency and its mandate, and setting clear boundaries and scope)
- Ability to complement and support (not duplicate) work of existing regulatory bodies
- Genuine independence and impartiality (e.g., guaranteed funding)
- Ability to adapt to reflect technological developments, changing societal needs and expectations
- A structure that incorporates the right competencies and expertise, including multi-stakeholder representation from diverse backgrounds.
SHERPA Contribution
SHERPA Terms of Reference for a European Agency for AI developed using research and expert consultations, including interviews, a focus group, and feedback from the SHERPA Stakeholder Board. | https://www.project-sherpa.eu/european-agency-for-ai/ |
Our primary fiduciary obligation to our clients is to maximize the long-term returns of their investments. It is our view that material environmental and social (sustainability) issues can both create risk as well as generate long-term value in our portfolios. This philosophy provides the foundation for our value-based approach to Asset Stewardship.
We use our voice and our vote through engagement, proxy voting, and thought leadership in order to communicate with issuers and educate market participants about our perspective on important sustainability topics. Our Asset Stewardship program prioritization process allows us to proactively identify companies for engagement and voting in order to mitigate sustainability risks in our portfolio.
Through engagement, we address a broad range of topics that align with our thematic priorities and build long-term relationships with issuers. Engagements are often multi- year exercises. We share our views of key topics and also seek to understand the disclosure and practices of issuers. We leverage our long-term relationship with companies to effect change. Voting on sustainability issues is mainly driven through shareholder proposals. However, we may take voting action against directors even in the absence of shareholder proposals for unaddressed concerns pertaining to sustainability matters.
In this document we provide additional transparency into our approach to engagement and voting on sustainability- related matters.
We expect companies to disclose information regarding their approach to identifying material sustainability-related risks and the management policies and practices in place to address such issues. We support efforts by companies to demonstrate the ways in which sustainability is incorporated into operations, business activities, and most importantly, long-term business strategy.
We have developed proprietary in-house sustainability screens to help identify companies for proactive engagement. These screens leverage our proprietary. R-Factor score to identify sector and industry outliers for engagement and voting on sustainability issues.
As part of our annual stewardship planning process we identify thematic sustainability priorities that will be addressed during most engagement meetings. We develop our priorities based upon several factors, including client feedback, emerging sustainability trends, developing macroeconomic conditions, and evolving regulations. These engagements not only inform our voting decisions but also allow us to monitor improvement over time and to contribute to our evolving perspectives on priority areas. Insights from these engagements are shared with clients through our publicly available Annual Stewardship Report.
Historically, shareholder proposals addressing sustainability-related topics have been most common in the U.S. and Japanese markets. However, we have observed such proposals being filed in additional markets, including Australia, the UK, and continental Europe.
State Street Global Advisors votes For (support for proposal) if the issue is material and the company has poor disclosure and/or practices relative to our expectations.
State Street Global Advisors votes Abstain (some reservations) if the issue is material and the company’s disclosure and/or practices could be improved relative to our expectations.
State Street Global Advisors votes Against (no support for proposal) if the issue is non-material and/or the company’s disclosure and/or practices meet our expectations. | https://www.ssga.com/global/en/our-insights/viewpoints/2019-global-proxy-voting-and-engagement-environmental.html |
We’ve worked consistently to integrate corporate responsibility and sustainability across every aspect of our business.
Corporate responsibility and sustainability are integral to the way we do business. Our commitment is embedded in our Company’s mission and values. CSR is a key part of our business plans and the way we assess our people’s performance.
Our key focus areas
As environmental and social challenges grow in number and urgency, we must focus our efforts. We prioritise the issues that are most critical to our business and our stakeholders. These include:
• Water stewardship
• Energy & climate protection
• Packaging & recycling
• Consumer health
• Developing our employees
These focus areas directly support our business imperatives. As we broaden our beverage portfolio, for example, our consumer health initiatives help us respond to growing interest in health and wellness. Similarly, our drive for cost efficiency is supported by environmental programmes to reduce our use of energy, water and packaging.
To ensure we also contribute effectively to broader sustainability initiatives, we consult with our stakeholders often working in partnership with them.
Managing CSR
We manage our CSR performance as rigorously as any other part of our business. A cross-functional CSR team reports to the General Manager who is accountable for performance. The team also has a dotted reporting line to the Group CSR Council who oversee our company’s global performance and report to the CSR Committee of the Board of Directors.
We adopt leading standards, set targets and monitor our progress regularly. In addition, we report our performance in our annual Group CSR report. CSR is a core competence we expect of our company’s managers, and is included in performance assessments.
A culture of sustainability
This is only part of bringing CSR to life in our operations. We also focus on building the right culture and capabilities so that this is simply part of the way we do business.
. | https://ng.coca-colahellenic.com/en/sustainability/sustainability-approach-and-impact/sustainability-approach/ |
The Directors acknowledge the importance of the principles set out in the QCA Code. Details follow summarising how the Group will comply with the QCA Code.
Corporate Governance
Gelion is focused on creating commercial solutions for the successful transition to a sustainable economy through the storage of renewable energy. By designing and delivering innovative battery technology, the Company will help facilitate that transition, and seek to return value for our customers and investors.
The Board is committed to adopting best practice, where possible and appropriate, in its reporting of Environmental, Social and Governance (“ESG”) issues.
Gelion takes its responsibility as a company, employer and clean-technology manufacturer seriously. The Directors intend to establish dedicated programmes aimed at delivering key initiatives across a range of core areas as well as continuing to achieve the Company’s ESG objectives. Its policies will be reviewed annually by the ESG Committee to ensure accurate reporting and measuring of relevant indicators.
In addition, the Company will adopt the QCA Code, a set of corporate governance principles, and has qualified for the London Stock Exchange Green Economy Mark, which identifies London-listed companies and funds that generate at least 50 per cent of total annual revenues from products and services that contribute to the global green economy.
The Directors recognise the value and importance of high standards of corporate governance. The Directors intend to adhere to the QCA Corporate Governance Code which sets out a standard of minimum best practice for small and mid-sized quoted companies, particularly AIM companies. The Directors acknowledge the importance of the principles set out in the QCA Code. Details follow summarising how the Group will comply with the QCA Code.
Gelion plc is subject to the UK City Code on Takeovers and Mergers.
Establish a strategy and business model which promotes long-term value for shareholders.
The strategy of Gelion is defined on pages 20 and 21 of the Annual Report.
The Directors are responsible for implementing the strategy and believe that the Group’s business model and growth strategy help to promote long-term value for shareholders. The strategy is further detailed into specific plans that enable day-to-day management of the business.
The principal risks facing the Group are set out on page 42 of the Annual Report. The Directors have continued and will continue to take appropriate steps to identify risks and undertake a mitigation strategy to manage these risks following the Company’s Admission on AIM, including and have implemented a risk management framework.
Seek to understand and meet Shareholder needs and expectations.
An active dialogue is maintained with shareholders. Shareholders are kept up to date via announcements made through a Regulatory Information Service on matters of a material substance and/or a regulatory nature. Updates are provided to the market from time to time, including any financial information, and any expected material deviations to market expectations would be announced through a Regulatory Information Service.
The Company will hold its first’s AGM in December 2022 which will provide is an opportunity for Shareholders to meet with the Non-Executive Chairman and other members of the Board. The meeting is open to all Shareholders, giving them the opportunity to ask questions and raise issues during the formal business or, more informally, following the meeting. The results of the AGM will be announced through a Regulatory Information Service.
The Board is keen to ensure that the voting decisions of shareholders are reviewed and monitored and the Company is committed to engaging with shareholders who do not vote in favour of resolutions at AGMs.
All contact details for investor relations are included on the Group’s website.
Including wider stakeholder* community and social responsibilities and their implications for long term success.
The Group takes its corporate social responsibilities very seriously and is focused on maintaining effective working relationships across a wide range of stakeholders including shareholders, staff, partners, suppliers and customers part of its business strategy. The Directors maintain an ongoing and collaborative dialogue with such stakeholders and take all feedback into consideration as part of the decision-making process and day-to-day running of the business.
The Company has implemented a formal Environmental, Social, Regulatory and Governance Responsibility (ESG) policy and strategy and has established an ESG Committee. The ESG Committee will monitor the implementation of ESG practises to ensure the Group conducts its business with a view of long-term sustainability for its customers, employees, communities, the environment as well as its shareholders.
* Please refer to Section 172 Stakeholder Engagement on page 38 of the Annual Report.
Embed effective risk management, considering both opportunities and threats, throughout the organisation.
The Directors have established an Audit and Risk Committee which takes appropriate steps to identify risks and undertake a mitigation strategy to manage these risks. A review of these risks is carried out by the Audit and Risk Committee on a regular basis or at least on an annual basis, the results of which are included on pages 59 to 62 of the Annual Report.
While it has established the Audit and Risk Committee, the Board has overall responsibility for the determination of the Group’s risk management objectives and policies.
Maintain the Board as a well-functioning, balanced team led by the chair.
The Board is comprised of the following persons:
- four Non-Executive Directors including the Non-Executive Chairman; and
- two Executive Directors.
The biographies of the Directors are set out on page 53 of the Annual Report. The Non-Executive Directors, Michael Davie and Joycelyn Morton, are considered to be independent and were selected with the objective of bringing experience and independent judgement to the Board.
The Board is also supported by the Audit and Risk Committee, the Remuneration Committee and the ESG Committee.
The Board meets regularly and processes are in place to ensure that each Director is, at all times, provided with such information as is necessary to enable each Director to discharge their respective duties.
The Group is satisfied that the current Board is sufficiently resourced to discharge its governance obligations on behalf of all stakeholders.
Under the articles of association, the Board has the authority to approve any conflicts or potential conflicts of interest that are declared by individual directors; conditions may be attached to such approvals and directors will generally not be entitled to participate in discussions or vote on matters in which they have or may have a conflict of interest.
Ensure that between them the Directors have the necessary up-to-date experience, skills and capabilities.
The Board continually evaluates the skills that are required of its members and whether they are adequately provide for.
The Directors believe that the Board has the appropriate balance of diverse skills and experience in order to deliver on its core objectives. Experiences are varied and contribute to maintaining a balanced board that has the appropriate level and range of skill to drive the Group forward. Where required the directors will take further advice from professional advisors such as lawyers, accountants and tax specialists.
The Board is not dominated by one individual and all Directors have the ability to challenge proposals put forward to the meeting, democratically. The Directors continue to receive briefings from the Company’s Nominated Adviser in respect of continued compliance with, inter alia, the AIM Rules and the Company’s solicitors in respect of continued compliance with, inter alia, UK Market Abuse Regulation.
Evaluate all elements of Board performance based on clear and relevant objectives, seeking continuous improvement.
The Directors continue to consider the effectiveness of the Board, Audit and Risk Committee, Remuneration Committee and ESG Committee, and the individual performance of each Director
The Chairman will review and appraise the performance of the Directors on an annual basis with the first evaluation in December 2022, to determine the effectiveness and performance of each member with regards to their specific roles as well as their role as a Board member in general. The appraisal system will seek to identify areas of concern and make recommendations for any training or development to enable the Board member to meet their objectives.
All continuing directors stand for re-appointment every three years. All directors undergo a performance evaluation before being proposed for re-appointment to ensure that their performance is and continues to be effective, that where appropriate they maintain their independence and that they are demonstrating continued commitment to the role.
The Board will monitor the Non-Executive Directors’ independence to ensure that a suitable balance of independent Non-Executive and Executive Directors remains in place.
It is beneficial for membership of the Board to be periodically refreshed. Succession planning is a vital task for boards. No member of the Board should become indispensable.
Promote a corporate culture that is based on sound ethical values and behaviours.
The Group has a responsibility towards its staff and other stakeholders. The Board promotes a culture of integrity, honesty, trust and respect and all employees of the Group are expected to operate in an ethical manner in all of their internal and external dealings.
The staff handbook and policies promote this culture and include such matters as whistleblowing, social media, anti-bribery and corruption, communication and general conduct of employees. The Board takes responsibility for the promotion of ethical values and behaviours throughout the Group, and for ensuring that such values and behaviours guide the objectives and strategy of the Group.
The performance and reward system should endorse the desired ethical behaviours across all levels of the Company.
The corporate culture should be recognisable throughout the disclosures in the annual report, website and any other statements issued by the Company.
The culture is set by the Board and is regularly considered and discussed at Board meetings.
Maintain governance structures and processes that are fit for purpose and support good decision making by the Board.
Steve Mahon, the Non-Executive Chairman, leads the Board and is responsible for its governance structures, performance and effectiveness. The Board retains ultimate accountability for good governance and is responsible for monitoring the activities of the executive team. The Non-Executive Directors are responsible for bringing independent and objective judgement to Board decisions. The Executive Directors are responsible for the operation of the business and delivering the strategic goals agreed by the Board.
The Board is supported by the Audit and Risk committee, Remuneration Committee and ESG Committee, further details of which are set out in the Annual Report. There are certain material matters which are reserved for consideration by the full Board. Each of the committees has access to information and external advice, as necessary, to enable the committee to fulfil its duties.
The Board reviews the Group’s governance framework on an annual basis to ensure it remains effective and appropriate for the business going forward.
Communicate how the company is governed by maintaining a dialogue with shareholders and other relevant stakeholders.
Responses to the principles of the QCA Code and the information that is contained in the Company’s Annual Report and Accounts provide details to all stakeholders on how the Company is governed. The Board is of the view that the Annual Report and Accounts as well as its half year report are key communication channels through which progress in meeting the Group’s objectives and updating its strategic targets can be given to shareholders.
Additionally, the Board will use the Company’s AGMs (first AGM expected in December 2022) as a mechanism to engage directly with Shareholders, to give information and receive feedback about the Group and its progress. The AGM which provides shareholders with an opportunity to meet the Board of directors and to ask questions relating to the business. All votes made at any AGM or GM are published and the Board will publish commentary on any vote where 20% or more of the independent shareholders have voted against any resolution.
Private investor roadshows and presentations at investor conferences are also taking place to interact with Shareholders.
The Company’s website is updated on a regular basis with information regarding the Group’s activities and performance, including financial information.
All contact details for investor relations are included on the Group’s website.
Page last updated 09 Nov 2022.
About Us
Laser-focused on solving tomorrow’s power and climate challenges, today.
Investors
Gelion plc is a publicly traded company on the London Stock Exchange AIM markets. Read about investing in our future.
Careers
We’re always interested to hear from passionate and motivated people, whose values align with ours. | https://gelion.com/investors/aim-rule-26/corporate-governance/ |
This preview shows half of the first page. to view the full 1 pages of the document.
Chp. 6 Terms
Basking in Self-reflected Glory: tendency to enhance one’s image by publicly
announcing one’s association with those who are successful.
Collectivism: putting group goals ahead of personal goals and defining one’s identity in
terms of the groups to which one belongs.
Downward Social Comparison: the defensive tendency to compare oneself with
someone whose troubles are more serious than one’s own.
Explanatory Style: tendency to use similar causal attributions for a wide variety of
events in one’s life.
External Attributions: ascribing the causes of behaviour to situational demands and
environmental constraints rather than personal.
Impression Management: usually conscious efforts to influence the way others think of
one.
Individualism: putting personal goals ahead of group goals and defining one’s identity in
terms of personal attributes rather than group memberships.
Ingratiation: efforts to make oneself likable to others.
Internal Attributions: ascribging the causes of behaviour to personal distributions,
traits, abilities and feelings rather than external events.
Possible Selves: one’s conception about the kind of person one might become in the
future.
Public Self: an image presented to others in social interactions.
Reference Group: a set of people who are used as a gauge in making social
comparisons.
Self-attributions: inferences that people draw about the causes of their own behaviour.
Self-concept: collection of beliefs about one’s basic nature, unique qualities and typical
behaviour.
Self-defeating Behaviours: seemingly intentional acts that thwart a persons self-interest.
Self-discrepancy: the mismatching of self-perceptions.
Self-enhancement: the tendency to maintain positive views about oneself.
Self-esteem: one’s overall assessment of one’s worth as a person; the evaluative
component of the self concept.
Self-handicapping: tendency to sabotage one’s performance to provide an excuse for
possible failure.
Self-monitoring: degree to which people attend to and control the impressions they make
on others.
Self-regulation: directing and controlling one’s behaviours.
Self-serving Bias: tendency to attribute one’s successes to personal factors and one’s
failures to situational factors.
Social Comparison Theory: idea that people need to compare themselves with others in
order to gain insight into their own behaviour. | https://oneclass.com/textbook-notes/ca/western/psyc/psy-2035ab/25411-chp-6-terms.en.html |
Self-serving bias is a cognitive process distorted by our need to maintain self-esteem or our tendency to perceive ourselves in an over-favorable manner. The bias refers to our ability to associate all positive events or outcomes to our own actions, and attribute negative outcomes to outside sources. Often, individuals with an “overactive’ self-serving bias focus on their strengths but overlook their weaknesses and reject the validity of negative feedback.
To a certain extent, self-serving bias is good for maintaining positive self-esteem, as it allows you to take credit for your successes. But what about your failures? You need to take credit for those as well. Imagine a world where all your accomplishments were associated with your positive abilities and you had zero accountability for your failures. Doesn’t that sound nice? To me, a world where you suffer no consequences or bear no responsibility for anything other than yourself and your successes sounds like utter destruction and depression waiting to happen.
An ‘overactive’ self-serving bias holds you back from change and progression. It just feeds your ego. How can improvement happen if you think you’re doing everything right and everyone else is doing it wrong? Self-serving bias can get you stuck in unfulfilling and under serving repetitive actions that fail not only to benefit others, but yourself as well. Self-serving bias does have a place in our lives, but don’t let it play a bigger role than it should.
The fact that you’re reading about it or talking about is the first step in learning how to limit self-serving bias. Here’s good example of self-serving bias:
A job applicant believes he’s been hired because of his achievements, qualifications, and excellent interview. When asked about a previous opening he didn’t receive an offer for, he says the interviewer didn’t like him.
If one was to continually blame interviewers for not being hired, it would prolong their struggle to find employment compared to someone who realizes they may not be as qualified as someone else and then strives to get those qualifications. Knowing when your self-serving bias kicks in is a good start.
As cliche as it may sound, it is the truth. How can you know where you have to improve if you don’t understand where your flaws are? Failure is your opportunity to learn and adapt. Bill Gates once said that “the best way to succeed is to double down on your failures.” We all make mistakes, so don’t be ashamed of them. There is a lot of respect to be given to the person who continually tries to better himself, even while he makes mistakes. It’s hard to admit that you’re the one to blame, but it’s critical to overcoming self-serving bias.
Usually, your success is not only yours. No matter what you do in life or what you accomplish, there is going to be a team behind you. It might not be apparent to you, but it’s a fact. Associate yourself with the people who helped you become the person you are today. You’ll feel better about yourself, and they’ll be thankful. There’s no better feeling than being able to help someone learn and grow. The ability to help others build recognition and knowledge will overweigh the perceived benefit of taking all the credit for yourself.
We all have self-serving bias, but don’t let it get out of control and hold you back from change and progression. No one likes the egoistic individual in the room who thinks their accomplishments and needs outweigh everyone else's. Get your ego in check by understanding self-serving bias, taking accountability for your failures, and giving credit to others.
Comments will be approved before showing up.
Call it intuition, instinct, or a gut feeling: if we followed it, we just might be a lot happier. Intuition is a skill we are all born with, but one we submerge in the business of modern living. | https://mindfulmindset.life/blogs/mindfulness/3-ways-to-rid-self-serving-bias |
My own long-ago interest in self-serving bias was triggered by noticing a result buried in a College Board survey of 829,000 high school seniors. In rating themselves on their “ability to get along with others,” 0 percent viewed themselves below average. But a full 85 percent saw themselves as better than average: 60 percent in the top 10 percent, and 25 percent as in the top 1 percent.
As Shelley Taylor wrote in Positive Illusions, “The [self-]portraits that we actually believe, when we are given freedom to voice them, are dramatically more positive than reality can sustain.” Dave Barry recognized the phenomenon: “The one thing that unites all human beings, regardless of age, gender, religion, economic status, or ethnic background is that deep down inside, we all believe that we are above average drivers.”
Self-serving bias also takes a second form—our tendency to accept more responsibility for our successes than our failures, for our victories than our defeats, and for our good deeds than our bad. In experiments, people readily attribute their presumed successes to their ability and effort, their failures to bad luck or an impossible task. A Scrabble win reflects our verbal dexterity. A loss? Our bad luck in drawing a Q but no U.
Perceiving ourselves, our actions, and our groups favorably does much good. It protects us against depression, buffers stress, and feeds our hopes. Yet psychological science joins literature and religion in reminding us of the perils of pride. Hubris often goes before a fall. Self-serving perceptions and self-justifying explanations breed marital conflict, bargaining impasses, racism, sexism, nationalism, and war.
Being mindful of self-serving bias needn’t lead to false modesty—for example, smart people thinking they are dim-witted. But it can encourage a humility that recognizes our own virtues and abilities while equally acknowledging those of our neighbors. True humility leaves us free to embrace our special talents and similarly to celebrate those of others.
(For David Myers’ other weekly essays on psychological science and everyday life visit TalkPsych.com)
You must be a registered user to add a comment. If you've already registered, sign in. Otherwise, register and sign in. | https://community.macmillanlearning.com/t5/talk-psych-blog/how-do-i-love-me-let-me-count-the-ways/ba-p/6584 |
Self-serving biases in the attribution of causality: Fact or fiction?
- Psychology
- 1975
A review of the evidence for and against the proposition that self-serving biases affect attributions of causality indicated that there is little empirical support for the proposition in its most… Expand
Social anxiety, self-presentation, and the self-serving bias in causal attribution.
- Psychology, Medicine
- Journal of personality and social psychology
- 1980
Two experiments were conducted to provide evidence concerning the contribution of self-presentation concerns to the self-serving bias in causal attribution and its occasional, but systematic, reversal and it was found that both high- and low-social-anxiety participants portrayed the causes of their behavior in a more modest fashion when they responded via the "bogus pipeline". Expand
Self-Threat Magnifies the Self-Serving Bias: A Meta-Analytic Integration
- Psychology
- 1999
Experiments testing the self-serving bias (SSB; taking credit for personal success but blaming external factors for personal failure) have used a multitude of moderators (i.e., role, task importance,… Expand
Self-Esteem and Self-Serving Biases in Reactions to Positive and Negative Events: An Integrative Review
- Psychology
- 1993
The self-serving bias refers to the tendency of people to interpret and explain outcomes in ways that have favorable implications for the self. The term bias often implies distorted or inaccurate… Expand
Attribution bias: On the inconclusiveness of the cognition-motivation debate
- Psychology
- 1982
Abstract Social psychologists have given considerable theoretical and research attention to whether motivational variables bias the attributions people make for behavior. Some theorists maintain that… Expand
Constraints on Excuse Making: The Deterring Effects of Shyness and Anticipated Retest
- Psychology
- 1995
Although prior research has documented a pervasive egocentric bias in the self-perceptions, self-ascriptions, and behaviors of most people, shy individuals seem not to share this bias. This study… Expand
Attributing causes for one's own performance: The effects of sex, norms, and outcome
- Psychology
- 1977
Abstract Two experiments were conducted to determine the effect of sex of subject, stated sex linkage of task, and task outcome on causal attributions of an actor's performance. Results from both… Expand
Relocating Motivational Effects: A Synthesis of Cognitive and Motivational Effects on Attributions for Success and Failure
- Psychology
- 1986
The long-standing debate over motivational biases as explanations for asymmetrical (i.e., sell-serving) attribution patterns for success and failure is examined in the present paper. Following the… Expand
Depression, self-esteem, and the absence of self-protective attributional biases. | https://www.semanticscholar.org/paper/Exploring-Causes-of-the-Self%E2%80%90serving-Bias-Shepperd-Malone/46a3d7c77826faaef6e1faa8889ccb64632e273e |
How is social psychology different than other areas of psychology?
We are not concerned with the characteristics of the person -we are concerned with how the characteristics of the situation shape behavior (how different situations bring out different things)
The Power of the Situation components - social influence
Culture - your values, how to behave, difference between cultures
Social Roles - your friends influence on you
Relationships
Group Norms
What causes behavior? - the person or the situation?
Power of the situation! when situations overpower personal factors (bullying, Miligram experiments)
Person factors vs. situation factors
BOTH are important. Interactionism - both influence each other
Person factors - personality, genes, goals, beliefs
Situatio - culture, social roles, relationship, group norms
Internalization vs. niche picking/construal
Internalization: Social activation, when situation factors affect the person factors.
Niche picking/construal: person can influence the situations they experience (choose environments, their outlook can affect the way you perceive a situation)
The Power of the Situation
situations can influence personal behavior
often overlooked, underestimated, unappreciated
The Principle of COnstrual
Interpretation of the situation is subjective - if situations are ambiguous individuals may have different interpretations
To understand social influence we must also understand the characteristics of the person that shape construal processes
Confirmation bias and hindsight bias
Confirmation bias
: seeing what we expect to see and overlooking the rest
Hindsight bias
: "I knew it all along"
These biases show that a lot of our intuitions are inaccurate - must use scientific method!
How can we draw causal inference?
experimentation - experimenter manipulates something
independent vs. dependent variable
independent
: what is manipulated, causally important, the subject has no control over it
dependent
: the effect variables, controlled by the subject
tradeoffs between correlation and experimentation
experiment
: get causal inferences,
correlation
: may be impossible to manipulate the variable, greater realism, can create ethical problems,
field vs. laboratory
lab
: control, artificial
field
: less control, more realistic
internal validity
are changes in dependent variable caused by changes in the independent variable --> more likely in laboratory, controlled
external validity -
do the results hold in other situations, for other people, at other times?
can the results be generalized? -
kinda hard in lab research
Pitfalls in research
faulty introspetion, reactivity, ethical dilemmas,
faulty introspection
people often are mistaken or unaware of the true reasons for their behavior - can't just ask them why they do things - makes research much harder
reactivity
tendency for people to behave differently because they are being observed by a researcher
demand effect, social desirability bias
demand effect and social desirability bias
Part of reactivity - problem in research
demand
: if people think they know what the research is about, they tend to go along with it
Social desirabilty
: people want to present themselves in a positive way
ethical dilemmas and how they try to deal with it
harm (they use informed consent), deception (debrief afterward if deception used), invasion of privacy (they look at groups, not individuals)
What is social about the self?
How we define ourselves is important in shaping our social behavior.
And the interactions with the social world also shape how we define ourselves.
Sense of self cannot exist without the social world
Sense of self
varies across species - humans have the most developed
varies across situations - mirror = more self-aware
varies across persons - some people are very self aware some just go with teh flow
How do we define ourselves?
introspection, self-perception, social comparisons, reflected appraisals --> its a social thing, we need others there too to define ourselves!!
introspection
look into ourselves - we report a theory about ourselves - can often be wrong tho
self-perception
we can't introspect, we dont have access to inner workings of the brain --> inner self can only be inferred from observable data (the same way we form impressions of others)
can sense of self be manipulated?
yes -
situational manipulations can alter self-perceptions
changes in self-perception impact behavior
if we introspected we wouldn't be able to manipulate the sense of self - it woudl be stable
reflected appraisals
what people tell us we are like
2 types of reactions to failure experiences
helpless - just give up, assume performance is because of lack of ability (Entity Theory - you are a fixed entity that doesn't change, can't change abilities)
Mastery Oriented - think failure is due to insufficient effort (incremental theory - you can change, develop the skill)
can a positive view be healthy?
yes - beneficial to think you have control over a situation - buffers the sting of failure, prvides motivation to improve
Definition of self-esteem
attitude/evaluation of yourself (affectively charged - emotion behind it, can exist at different levels at unconscious and conscious level)
explicit vs. implicit self esteem
explicit
: the kind you tell about
implicit
: the most immediate emotional reactions, can be different than explicit, measured indirectly
Where does self-esteem come from?
external feedback - peers, performance outcomes, social comparison, social acceptance, effect of parental feedback
how to protect your self-esteem
self-enhancement mechanisms (downward social comparisons, self-serving cognitions - the way we interpret our situations in which we flatter ourselves)
protects more our explicit self-esteem than implicit
works better with people who already have a high self-esteem
self-serving cognitions
successes attributed to internal qualities, failures attributed to external things
how is high self-esteem bad?
High SE kids are more likely to be aggressive especially in frustrating situations
defensive high self-esteem
high explicit SE, low implicit SE
take negative feedback really hard, prone to destructive self-enhancement
violence, unstable!
so is it better to have low SE or high SE
moderately positive SE = the best especiall if backed up by real competence (stable, not defensive)
Does aspects of self change across cultures?
Individualistic - emphasize independence, personal self, uniqueness
Collectivist - emphasize interdependent, social self, value not standing out, value how they fit in
do people in different cultures have different levels of self-esteem?
Explicit - people in USA tend to test higher but thats b/c the questionnaires focus on personal, individual self esteem not your collective self-esteem, and modesty norms
Implicit - no difference
the general model of person perception
appearance/behavior --> trait inferences --> global impression
what makes someone attractive?
averageness effect - average faces more preferable
symmetry effect - facial symmetry = marker for health/genetic fitness
other facial qualities that affect how people are perceived/experience life
Baby face effect - larger lips/smaller nose, larger forehead, smaller chin, larger eyes
competence -
power - competence, dominance, maturity
warmth - trustworthiness, likeability
baby-face effect
have richer social life, more people initiated contact with them
are seen as more trustworthy, but not experts (can influence people differently then)
occupation outcomes also depend on domain - if job for leadership at disadvantage but if job for warmth are at advantage
advantageous in crimes of mallice (how can a baby be malicious!)
can you get anythign from teh face that would be accurate about personality?
strongest accuracy was in domain of extroversion, no correlation for a lot of other personality traits
theory of somatypes
different body types = different personalities
endomorph - heavy
mesomorph - athletic
ectomorph - skinny
you canot predict personality based on body type but people have sterotypes based on it
what do your clothes reveal?
strong impact for first impressions - seems like a valid signal because you choose your clothes but the accuracy is questionable
thin slices of behavior
minimal samples of behavior (less than a minute)
accuracy was above chance
for intelligence, racial prejudice, sexual orientation
thick slices of behavior
meaningful unit of behavior - richer behavioral evidence
correspondent inference theory
correspondent inference theory
do someone's behaviors correspond to their inner qualities?
look for situational contrainsts on the behavior (free choice, intention, norms, role requirements)
look for noncommon effects (consequences of choosing that behavior that would not have emerged if didn't do the behavior) to deduce the reasons for the behavior
no evidence for this!!
problems for correspondent inference theory
we don't think that elaborately/rationally!
Castro essay thing
perceptions of intelligence in questioner, contestants, and audiences (perceive teh questioner to be really smart, questioner knows they aren't)
correspondence bias (fundamental attribution error)
problem for the correspondent inference theory
we assume that someones behavior reflects their true personality regardless if there are other reasons for it, situations, contraints
assume that behavior corresponds with inner traits
2-stage model of trait inference - better than correspondent inference theory
2 stages to forming impressions
automatic - inferences spontaneously - effortless, efficient, correlate behaviors with inner personality
corrective deliberation - adjustments made for situations, effortful, mental resources required
proximal vs. distal factors
proximal - factors that exist in the here and now (power of the situation, dispositions, role of construal)
distal - are more removed in time from a given context (evolution/culture)
channel factors
situational circumstances that appear unimportant but can have consequence for behavior (either helping it, blocking it, etc.
Role of Construal
interpretation and inference about the stimuli/situations we confront
schemas
generalized knowledge about the physical/social world and how to behave in particular situations
Automatic vs. Controlled Processing
automatic - unconscious, implicit attitudes and beliefs, skill acquisition, production of beliefs w/o our awareness to cognitive processes that generated them
controlled - conscious, results in explicit attitudes/beliefs of which we are aware
observational research
looking at phenomenon in reasonably systematic way
self-selection
what correlational research suffers from, the participant, not the researcher gets to select his level on each variable (not manipulated)
Reliability vs. Measurement Validity
reliability
: the degree to which teh particular way we measure a given variable is likely to yield consistent results
measurement validity
: the correlation between some measure and some outcome that the measure is supposed to predict
OCEAN
traits (openness, conscientiousness, extroverted, agreeableness, neuroic)
have been shaped by evolution, biologically based too, diversification also
situationism
our social self shifts according to the situation (OCEAN is usually constant however)
distinctiveness hypothesis
we identify what makes us unique in each context and we highlight that in our self-definition
How we define ourselves, construal processes
social comparison, social narratives, self-assessment
self-reference effect
tendency to elaborate on and recall info that is integrated into our self-knowledge
self-shemas
knowledge based summaries of our feelings/actions and how we understand others views about the self
we are attuned to info that maps to our self-schema
self-image bias
tendency to judge other people's personalities according to their similarity/dissimilarity to our own personality
self-discrepancy theory
there are three selves (actual, ideal, and ought)
ideal - primed by promotion focus - what we want to be
ought - primed by prevention focus - what we ought to be
self-knowledge
self-reference effect, self-schemas, motivates behavior, affects how we judge others, illusions/biases of the self
trait self-esteem vs. state self-esteem
trait
: teh enduring level of regard people have for their abilities/characteristics over time
state
: dynamic, changing evaluations, momentary feelings about the self
contingencies of self-worth
self-esteem is contingent on successes/failures in domains in which their self-worth is based
sociometer hypothesis
self-esteem is an internal, subjective marker of the extent to which we are included/rearded favorably by others
self-evaluation maintenance model
we are motivated to see ourselves in a favorable light (by reflection and social comparison)
we associate with others that are successful (not in our domain tho)
self-verification theory
we strive for stable, accurate beliefs abotu the self -- they give coherence
attend to/recall info that is consistent with our views
we create self-confirmary social environments through our behavior
self-monitorig
moitor our behavior so it fits the demands of the situation
self-handicapping
tendency to engage in self-defeating behaviors in order to hae an explanation for poor performance
the 2 types of communication
on record - statements intended to be taken literally (honest, direct)
off-record - when we need to deliver a message that threatens our public self/friends (indirect, ambiguous, hinting, like flirting or teasing)
attribution theory
set of theoretical accounts of how people assign causes to the events around them and the effects that people's causal assessments have (how people understand the causes of events)
how we understand others - if an outcome a product of something internal to the person or a reflection of the circumstances?
explanatory style
a person's habitual way of explaining events, typically assessed with internal/external, stable/unstable, global/specific
pessimistic = global, stable, internal
covariation principle
we should attribute behavior to potential causes that co-occur with teh behavior
consensus (what do most people do), distinctiveness (what an individual does in different situations), consistency (what an individual does in a certain situation at different times)
can determine if situational or dispositional from this
discounting vs. augmentation principle
discounting
: we should assign reduced weight to a particular cause of behaior if there are other plausible causes
augmentation
: we should assign greater weight to a particular cause of behavior if there are other causes present that normally would produce the opposite outcome
counterfactual thoughts
thoughts of what might have, could have, or should have happened if only something had been done differently
emotional amplification
the pain/joy we get from an events tends to be proportional to how easy it is to imagine the event not happening
self-serving bias
attribute failure to external things, attribute succes to oneself (a motivational bias)
just world hypothesis
people get what they deserve in life and deserve what they get
causes of fundamental attribution effect
just world hypothesis (dispositional inferences can be comforting)
elements that catch our attention are more likely to be seen as causes (usually people not backgrounds stand out) (perceptive salience)
only after characterizing the person do we weigh in situational input
actor/observer difference
actors
: focus on situation, situational attributions
observer
: focus on teh person, who they are dealing with
pluralistic ignorance
misperception of a group norm that results form oberving people who are acting at variance with their private beliefs because of a concern for the social consequences (like when no one asks questions in class)
sharpening and leveling
sharpening
: when telling a story, emphasze important/interesting elements
leveling
: leaving out less important details when telling a story
asymmetry between positive and negative information
we are more attentive to negative info than positive b/c teh negative info has more implications for our well-being
order effects (the different ones too)
the order in which items are presented can have a powerful impact on judgment
primacy and recency effects
framing effect
the influence on judgment resulting from teh way info is presented --> order effects are a type of framing effect
confirmation bias
tendency to test a proposition by searching for evidence that woudl support it --> false beliefs b/c we can find evidence for anything
when we want a given proposition to be true so we seek out confirming evidence and discount contradictory evidence
bottom up vs. top down processing
bottom up - data driven, basis of stimuli
top down - theory driven, interprets new info in light of preexisting knowledge/expectations
schema
organized body of prior knowledge, like a knowledge structure
knowledge structure
coherent configurations in which related informatino is stored
how to schemas influence our judgment?
attention, influence and construal (info in the brain can be primed), memory (advantage in recall if the info fits an expectation)
schemas affect on memory
affect on recall, on encoding, on retrieval,
more of an effect on encoding than retrieval
how is new info mapped onto preexisting schemas?
feature matching
expectations
recent activation
feature matching
Whether a schema will be activated in interpreting new info depends on
the degree of similarity or fit between critical features of the schema and the
incoming stimulus
the "two minds" for certain problems
intuition - quick, automatic, simultaneously
reason - slower, more controlled, serially
Heuristics
intuitive mental short cuts that allow us to make a variety of judgments quickly and efficiently
availability heuristic
we judge the frequency/probability of some event by the readiness with which similar events come to mind
it is the ease of generating examples, not the number of examples that guides people's judgements
fluency
teh feeling of ease associated with processing information
if presented disfluently to you, you take a more, slow, careful approach
representative heuristic
we try to categorize something by judging how similar it is to our conception of the typical member of the category
similarity = likelihood
but sometimes blinds us from base-rate info
base rate info
info about the relative frequency of event or of members of different categories in the population (could be ignored with strong stense of representativeness)
planning fallacy
tendency to be unrealistically optimistic abotu how long a project will take
b/c of tendency to approach the problem form the inside (no outside perspective, such as how often do i get things done on time usually)
join effect of representative and availability heuristics =
illusory correlation
illusory correlation
joint effect of representative and availability heuristics
the belief that two variables are correlated when they are not
Naive Realism
the naive belief that we have a realistic objective view of the world
in actuality because of our past experience/knowledge we are very subjective and biased
these biases are unknown to us
automaticity
efficient, unintentional, uncontrollable, and without awareness
this is the way that impression formation typically occurs
it happens so fast that we fail to realize that we are making the conclusions that we are
where do automatic reactions come from?
a storehouse of past experiences and knowledge - they get triggered when you encounter people/places
they get triggered automatically
how are automatic associations created in teh first place?
from our own experiences, what we have learned
from culture
from our own experiences
innately born - like aversion to incest
when do our mental associations influence us? | https://freezingblue.com/flashcards/39477/preview/social-psychology-test-1 |
What is Attribution Theory?
Imagine that while driving to work one day you notice that the driver behind you seems very aggressive: She is following your car very closely, honks her horn if you delay even a few seconds when the red light turns green, and finally swerves around to pass you. How will you make sense of, or attribute, this behavior?
Attribution theory has been proposed to explain how individuals judge people differently depending on what meaning we attribute to a given behavior.
Attribution theory emphasize people’s core social motive to understand each other and to have some control. That is, people need to have some sense of prediction about other people’s actions (understanding) and about their own impact on those actions (control).
Specifically, attribution theory suggests that, when we observe an individual’s behavior, we attempt to determine whether it was internally or externally caused.
- Internally caused behavior is believed to be under the control of the individual.
- Externally caused behavior results from outside causes; that is, the person is seen as having been forced into the behavior by the situation.
For example, if an employee arrived late for work today, would we think it was internally caused (e.g. as a result of sleeping late) or externally caused (e.g. by a traffic jam)?
That determination depends on three factors. We’ll spend the remainder of this entry delving deeper into each, but for now, here they are in order.
- Distinctiveness,
- Consensus, and
- Consistency.
Attribution theory is an approach used to explain how we judge people differently, based on what meaning we attribute to a given behavior.
1. Distinctiveness
Distinctiveness refers to whether an individual displays a behavior in many situations or whether it is particular to one situation.
What we want to know is whether this behavior is unusual. If it is, the observer is likely to give the behavior an external attribution. If this action is not unique, it will probably be judged as internal.
Consequently, if the employee who arrived late to work today is also the person that colleagues see as lazy, we are likely to judge the behavior (resuming work late) as internally caused.
2. Consensus
If everyone who is faced with a similar situation responds in the same way, we can say the behavior shows consensus.
Our tardy employee’s behavior would meet this criterion if all employees who took the same route to work today were also late.
If consensus is high, you would be expected to give an external attribution to the employee’s tardiness, whereas if other employees who took the same route made it to work on time, you would conclude the reason to be internal.
3. Consistency
Finally, a manager looks for consistency in an employee’s actions.
Does the individual engage in the behaviors regularly and consistently?
Does the employee respond the same way over time?
Coming in 10 minutes late for work is not perceived in the same way, if for one employee, it represents an unusual case (she hasn’t been late for several months), but for another it is part of a routine pattern (he is late for two or three times a week).
The more consistent the behavior, the more the observer is inclined to attribute it to internal causes.
The Figure below summarises the key elements in attribution theory. It tells us, for instance, that if an employee, Michael, generally performs at about the same level on other related tasks as he does on his current task (low distinctiveness), if other employees frequently perform differently—better or worse—than Michael does on this current task (low consensus) and if Michael’s performance on this current task is consistent over time (high consistency), his manager or anyone else who is judging Michael’s work is likely to hold him primarily responsible for his task performance (internal attribution).
Distorted Attributions
Interestingly, findings drawn from attribution theory show that errors or biases can distort attributions. For instance, substantial evidence supports the hypothesis that, when we make judgments about the behavior of other people, we have a tendency to underestimate the influence of external factors and overestimate the influence of internal or personal factors.
This fundamental attribution error can explain why a sales manager may be prone to attribute the poor performance of her sales agents to laziness rather than to the innovative product line introduced by a competitor.
Individuals also tend to attribute their own successes to internal factors such as ability or effort while putting the blame for failure on external factors such as luck.
This self-serving bias suggests that feedback provided to employees in performance reviews will be predictably distorted by them, whether it is positive or negative.
Perceptual shortcuts can also distort attributions. All of us, managers included, use a number of shortcuts to judge others. Perceiving and interpreting people’s behavior is a lot of work, so we use shortcuts to make the task more manageable.
Perceptual shortcuts can be valuable as they let us make accurate perceptions quickly and provide valid data for making predictions. However, they aren’t perfect. They can and do get us into trouble.
See a summary description of the perceptual shortcuts below.
|SHORTCUT||WHAT IT IS||DISTORTION|
|Selectivity||People assimilate certain bits and pieces of what they observe depending on their interests, background, experience and attitudes||‘Speed reading’ others may result in an inaccurate picture of them|
|Assumed similarity||People assume that others are like them||May fail to take into account individual differences, resulting in incorrect similarities.|
|Stereotyping||People judge others on the basis of their perception of a group to which the others belong||May result in distorted judgments because many stereotypes have no factual foundation|
|Halo effect||People form an impression of others on the basis of a single trait||Fails to take into account the total picture of what an individual has done|
Individuals can’t assimilate all they observe, so they’re selective in their perception. They absorb bits and pieces. These bits and pieces are not chosen randomly; rather, they’re selectively chosen depending on the interests, background, experience and attitudes of the observer.
Selective perception allows us to ‘speed read’ others but not without the risk of drawing an inaccurate picture.
It’s easy to judge others if we assume that they are similar to us. In assumed similarity, or the ‘like me’ effect, the observer’s perception of others is influenced more by the observer’s own characteristics than by those of the person observed.
For example, if you want challenges and responsibility in your job, you’ll assume that others want the same. People who assume that others are like them can, of course, be right, but not always.
When we judge someone on the basis of our perception of a group they are part of, we are using the shortcut called stereotyping. For instance, ‘Married people are more stable employees than single people’ or ‘Older employees are absent more often from work’ are examples of stereotyping.
To the degree that a stereotype is based on fact, it may produce accurate judgments. However, many stereotypes aren’t factual and distort our judgment.
When we form a general impression about a person on the basis of a single characteristic, such as intelligence, sociability or appearance, we’re being influenced by the halo effect.
This effect frequently occurs when students evaluate their classroom instructor. Students may isolate a single trait such as enthusiasm and allow their entire evaluation to be slanted by the perception of this one trait. An instructor may be quiet, assured, knowledgeable and highly qualified, but if his classroom teaching style lacks enthusiasm, he might be rated lower on a number of other characteristics.
These shortcuts can be particularly critical with diverse workforces. | https://ifioque.com/interpersonal-skills/attribution-theory |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.