The Pollock Gallery of the Division of Art at SMU's Meadows School of the Arts will showcase the exhibition Triple Carbs Society (The Built-In Kitchen of M. Duchamp) by the artist Marco Bruzzone. Bruzzone has created a conceptual space composed of a kitchen resembling a common American kitchen and a long communal picnic table (a replica of D. Judd's picnic table). Bruzzone offers the spectator a moment of reflection on the domestic space and its function as a "readymade" (a term coined by the French artist Marcel Duchamp around 1913 to indicate an everyday object that has been selected and designated as art). The Built-In Kitchen of M. Duchamp was conceived as an environment in which people can reenact Marcel Duchamp's supposedly typical meal after he moved to New York: a simple plate of pasta with butter and cheese. The Triple Carbs Society invites visitors to experience this meal. The space will be used every day at lunchtime to cook spaghetti with butter and cheese for the public: visitors will not look at the art, but will be part of it. Every day a spaghetto will be thrown against a wall, sticking to it and creating an abstract line in the gallery space; this performative action tests whether the noodle is cooked. The graphic of the picnic table conceived by Bruzzone recreates the forms of the individual strands of spaghetti thrown on the walls, and to produce this decor, titled "Noodle-Doodle", the artist chose the digital printing technology of Abet Laminati.
https://abetlaminati.com/fr/abet-laminati-at-the-pollock-gallery-dallas/
We enjoyed watching it, though it moved us to tears. Overall, Awakenings is an entertaining yet poignant film that is eloquently laced with poetry and a beautiful score by Randy Newman. This is easily the most powerful tearjerker that I've ever seen, thanks in large part to the brilliant performances by Williams and De Niro. But to be fair, the whole cast was excellent, and they were helped by a script that was nearly perfect. For me, the scenes with Miller proved to be the most emotional, but really, the whole film was heartwarming or heartbreaking on some level.

The movie ties in more brutality and violence to appeal to a modern audience that demands intense appeal to the senses. The play uses the simplicity of setting elements such as the balcony and common acting techniques to communicate Shakespeare's original message. Given the time period of the text, Shakespeare's use of these strategies is as modern as the unique techniques used in the movie. The movie and the play attract their audiences based on what appeals to them. Most importantly, both deliver the message to the audience that "For never was a story of more woe than this of Juliet and her Romeo."

You could say that a lot of his personality shines through in his acting, making it easy to argue that Louise Fletcher, who plays a very authoritarian role, might have had to face a greater challenge acting out her character. Therefore, perhaps, her acting should be considered better than Nicholson's. Regardless of this, both actors play their roles very well, getting the viewer immersed in the story.

I would say that Stardust is a romance and adventure story that is made in a way that captures the audience's attention. Most love stories can bore and seem cliché, but the way that Stardust was structured made it stand out from the norm. The story of the film Stardust adheres to "The Hero's Journey" and can still be considered a post-modern work as well. One thing that supports Stardust actually being a "Hero's Journey" is that it uses all of the steps throughout the film. Knowing "The Hero's Journey", it was easy to point out the key elements while watching Stardust.

The Evolution of the Director Baz Luhrmann. Andrew Venter. Topic two: "Luhrmann's films are not so much adaptations as re-imaginings." Baz Luhrmann is a very distinctive director who is both loved and hated for his bold cinematic techniques. These techniques allow Luhrmann to recreate famous titles such as Romeo and Juliet in a way that very few people could have ever imagined. From Luhrmann's first film, Strictly Ballroom, these techniques were very prevalent, and instead of outgrowing these brash techniques he evolved and developed them, resulting in the creation of very successful films. In this essay I will be discussing how Luhrmann has evolved these cinematic techniques, beginning in Strictly Ballroom, continuing in Romeo and Juliet, and finally in The Great Gatsby.

Luhrmann's casting and presentation of the characters are captivating and special in some areas, but break down in others. Leonardo DiCaprio seems to fit Jay Gatsby's role really well, showing off his lavish and party-driven lifestyle and symbolizing his American Dream perfectly. The same can be said about Daisy Buchanan: her love for Gatsby, and sooner or later being with him, is her American Dream, which was portrayed nicely as well in the crossover from the book to the silver screen. These portrayals break down, however, with Tobey Maguire playing Nick Carraway.

Adversity in "The Intouchables": "My true disability is not having to be in a wheelchair. It's having to be without her." (The Intouchables). Lines like that are just a piece of the great undertaking directors Olivier Nakache and Eric Toledano took on when they decided to be part of The Intouchables. Adapted from real-life events, this French biography was applauded for succeeding in painting a touching and resonant picture of the events that led to the birth of a strong relationship between the two protagonists, Driss and Philippe. Winning multiple awards, this movie has achieved the status of being one of the greatest French movies ever made.

Nicholas Day. VACV 142. 4-30-2016. Professor Tumminello. "The Paper Fights Back." This paper will compare the movies The Empire Strikes Back and Fight Club. These films were both huge successes when released. Although these movies are completely different from each other, they have many similarities as well. Each film has its own ideals regarding gender socialization, political views, and social structure. In addition, each film does an amazing job with the special effects, camera angles, and mise en scène.

Heavily inspired by and reliant on Shakespeare's Hamlet, this film is a very creative look at what happens around the key plot of the tragedy. This allows the film to reiterate key themes, add twists on the story in minor ways, and spark new interest in the original play. This imitative work was solely inspired by the original Hamlet, and further demonstrates how widely studied and admired this tragedy is. As one of the most popular stories of all time, it is no wonder that it continues to spark imitation even to this day.

"Only in dreams can men be truly free," says Robin Williams, a famous actor. From time immemorial, humans have never ceased to pursue freedom, yet many impossibilities exist. However, this still cannot stop their aspiration to freedom, and so movies came into the world, for to a very large extent movies satisfy people's fantasies. Especially now that special-effects techniques have grown ever more mature, humans can create any visual effect they want; even in the past, when the techniques had not yet matured, people used simple theatrical properties and cut the film to create special effects. Hugo, a movie that brings people back to the old days, contains a large number of elements that demonstrate people's …

Much Ado About Nothing: Plain and Simple. William Shakespeare's Much Ado About Nothing is a quick-witted tale that follows the romantic relationships of two cousins through the comedic highs and tragic lows. It has been portrayed countless times, including Kenneth Branagh's film adaptation in 1993. The film featured Denzel Washington, Keanu Reeves, and even Branagh himself, and was set in Messina, Italy, just like the original play. Overall, the casting choices and design choices the production team made worked out successfully, quickly turning many Shakespeare nay-sayers into believers. The story of Much Ado About Nothing is a slightly complex but entertaining one.

While watching Casablanca I was really just quite amazed with the whole concept and idea of film. Each of the characters did such an amazing job of bringing the film to life. The story line seemed really relatable to the present day even though the setting was during the World War II period. With the whole movie being focused on a relationship that was so strong between two people, I figure that lots of people would be able to relate to a movie like this. Relationships being as complicated as they are, I feel like people continue to love this film based on how well it shows two people in a troubled relationship.
https://www.ipl.org/essay/Compare-And-Contrast-Anagnorisis-And-Peripetia-PCUGMYS35U
Have you ever wondered what impact emotions have on the decisions we make? Whenever you feel pressured or uncertain, do you make sound decisions? Does your emotion or mood impact your behavior? Have you ever had a situation where your emotions got the better of you and a relationship suffered as a result? In this course, learn how to develop your Emotional Intelligence (EQ). EQ is the ability to understand, use, and manage your own emotions in positive ways to manage conflict, relieve stress, communicate effectively, empathize with others, and overcome challenges. Emotional Intelligence helps you build stronger relationships, succeed at school and work, and achieve your career and personal goals. It can also help you to connect with your feelings, turn intention into action, and make informed decisions about what matters most to you. In this course, learn how to recognize your own emotions and how they affect your thoughts and behavior; develop the skills to manage your emotions in healthy ways; understand the emotions, needs, and concerns of other people; and manage conflict, develop and maintain good relationships, communicate clearly, and inspire and influence others. Are you ready to work on skills that will help you experience greater peace and happiness, develop stronger relationships, and achieve greater success at school and work? Join us on this journey of Emotional Intelligence.
https://peaceacademy.peacepeddlers.org/courses/emotional-intelligence-2/
A religion is a system of beliefs held by a group of people. These beliefs represent their worldview and what they expect from their actions. Every religion is distinctive, and its collection of beliefs and practices varies substantially. Some religious systems link belief in a supernatural being to a path of spirituality, while others focus largely on earthly matters. Whatever the religion, the study of religion is an important element of human society.

The term "religion" is a broad, descriptive one. While the concept of religion is widely understood to include the practice of a set of values or techniques, a more specific description might more easily identify a religion as a group or movement. In 1871, Edward Burnett Tylor defined religion as the belief in spiritual beings, regardless of place. Although Tylor's definition is very broad, it recognizes the presence of such beliefs in all cultures.

A typical definition of religion encompasses numerous practices. Rituals, sermons, and the veneration of deities are all part of a religion. Other practices may include festivals, initiations, funerary services, and marital rituals. Still other activities associated with religion may include meditation, art, and public service. Men are more likely to be religious than women. Additionally, individuals may be religious in more than one way. There are many types of religion across different cultures, and it is often difficult to define what a religion really is.

Religion is a complex phenomenon, and its forms and varieties differ greatly. There are many ways in which religious beliefs are expressed. Some are based on the belief in a single god: those who hold to monotheism believe that there is only one God. Other faiths recognize many gods or goddesses. In both cases, there is an intermediary between the gods and the members of the tribe: a shaman can perform rites and rituals to help the sick or ease their pain.

Most religions share the same basic characteristics: a common concept of salvation, a priesthood, sacred objects, and a code of ethical behavior. While many of them differ, they share some typical attributes; for instance, they all have a defining myth and sacred places. The most important thing to keep in mind is that these religions are not monolithic. While they may have similarities, they do not share the same core beliefs or ideas.

While there are many forms of religion, the best known is the Catholic Church. In a Christian church, the priest is a figure of worship. In a Catholic parish, the priest is a consecrated member of the clergy. A shaman is a person who believes in a religious deity. In a non-Christian context, the priest is a figure who believes in a divine being. In a secular culture, there are many types of religious belief.

In the last century, the study of religion has mainly focused on the relationship between humans and the sacred and divine things they revere. The five largest religious groups represent about 5.8 billion people and their followers. Each of them has its own beliefs and practices. Some of these beliefs are more rational than others, while others are more rooted in tradition.

The study of religion is an intricate process, but it can be undertaken by anyone. The most important part of a religion is its belief in a god. Among the various kinds of religious belief, Christianity is the most common. It is believed that God can help us in our lives, making us happy or sad. For instance, a shaman may help a woman with her child through magic; the role of the shaman in a religious culture is essential to the health of the tribe. There are many kinds of religion. Nonetheless, there are numerous common traits among all of them. For example, religions all share a common idea of salvation. Additionally, they normally involve sacred places and objects, rituals, and codes of ethical behavior. They also include a priesthood to lead their followers. Historically, some religions have centered on a single divine being, while others have had several gods. In either case, their faith is a belief in a deity.
https://earth-ad.com/2021/12/18/5-little-methods-to-achieve-the-very-best-results-in-religion/
With a one-story house, each bedroom would ideally be on each side of the house. … For two-or-more-story houses, you don't want to have your bedrooms on top of each other. Instead, have one or two bedrooms in the front of the house, and one or two bedrooms in the back of the house.

What is the best layout for a house?
How to Pick the Best House Layout for Your Family:
- Make Sure You Have Enough Bedrooms. How many family members do you expect to live at the home? …
- Don't Overlook the Bathroom Situation. …
- Select an Open Floor Plan. …
- Have a Convenient Laundry Room. …
- Have Adequate Storage Space. …
- Designate a Playroom. …
- Consider Your Outdoor Space.

How do I find the perfect floor plan?
Tips for Choosing the Perfect Floor Plan:
- Size. Your first consideration when you are selecting a house plan should be size. …
- Design Style. Every homeowner has their own design style, and choosing a floor plan that meets those needs is essential. …
- Trust Your Instincts. …
- Consider Cost of Materials and Furniture. …
- Be Mindful of Your Budget. …
- Consider Modifications.

How do you design a house layout?
How to Draw a Floor Plan:
- Choose an area. Determine the area to be drawn. …
- Take measurements. If the building exists, measure the walls, doors, and pertinent furniture so that the floor plan will be accurate. …
- Draw walls. …
- Add architectural features. …
- Add furniture.

Is it cheaper to build a one-story home or two?
Per square foot, a one-story house is more costly to build than a two-story home. There is a larger footprint, meaning more foundation building and more roofing materials. And because the plumbing and heating/AC systems need to extend the length of the house, you'll need bigger (and costlier) systems. One-story homes also offer less privacy.

How do I find the layout of my house?
How to Get Floor Plans of an Existing House:
- Talk to the contractor that built your house, if possible. …
- Locate the archives of the municipality or county where your house is located. …
- Locate the fire insurance maps for the community. …
- Visit your local building inspector's office. …
- Browse through historical plan books.

Can I design my own house online for free?
You can design your home online with our Interactive Planner. … With our interactive floor plan designer, you can do just that! We feature interactive floor plans for 57 of our floor plans, giving you hundreds of possible configurations for building your own custom home. It's fun, it's easy, and it's completely free.

How do you build a house from scratch?
The 10 Steps to Build a New Home Are:
- Prepare Construction Site and Pour Foundation.
- Complete Rough Framing.
- Complete Rough Plumbing, Electrical, and HVAC.
- Install Insulation.
- Complete Drywall and Interior Fixtures; Start Exterior Finishes.
- Finish Interior Trim; Install Exterior Walkways and Driveway.

How do you design a living room layout?
These living room layout ideas will make the job of arranging furniture and decorating your home easy and enjoyable.
- Measure the living room from wall to wall, making note of the length and width of the room. …
- Decide on a focal point. …
- Arrange tables, storage cabinets, and ottomans. …
- Assign floor and table lamps.

What makes a good house design?
Choosing your house design is one of the most exciting phases of building a new home. Essentially, the key attributes of a great home include liveability, functionality, convenience, comfort, and style. The layout and the way the space functions are key to a comfortable home.

What is the best free home design app?
House design app: 10 best home design apps
- Home design app. …
- Live Home 3D Pro. House design and interior design app. iOS, macOS and Windows – $30.99 (Live Home 3D Interior Design Free) …
- Planner 5D. House design and interior design app. iOS, Mac, Android, Windows and online – Free. …
- RoomScan Pro.
- SketchUp. House design software. iOS, macOS and Windows –

What is the most expensive part of building a house?
Interior finishes: $68,000. Besides the sales price, the interior is usually the most expensive step in building a house.

What type of house is the cheapest to build?
Prefabricated.

What is cheaper: building up or out?
Building up is always the least expensive option for increasing your home's square footage because it requires less material and labor. … On the other hand, if you build out, you'll have to add footers, concrete, fill rock, a roof system, and more excavation cost.
https://birchlerarroyo.com/housing-planning/what-is-the-perfect-house-layout.html
Sloppy Joes Stuffed Peppers

SERVES: 6 | COOK TIME: 40 Min

We ditched the buns to come up with another way to enjoy one of our favorite childhood dishes: sloppy joes. Our Sloppy Joes Stuffed Peppers are super loaded with more than just your typical seasonings. They've got corn, beans, peppers, onions, and more. Yummy!

What You'll Need
- 6 red bell peppers
- cooking spray
- 2 tablespoons olive oil
- 1 pound ground turkey
- 1 (8-ounce) package frozen chopped peppers and onions, thawed
- 3/4 cup frozen corn, thawed
- 1 (8-ounce) can tomato sauce
- 1 (6-ounce) can tomato paste
- 1 (15-1/2-ounce) can black beans, drained
- 2 teaspoons garlic salt
- 1 envelope Sloppy Joe seasoning

What to Do
- Preheat oven to 375 degrees F. Wash and cut the stem end off each pepper, then core, removing all seeds. Lightly grease a 9- x 13-inch baking dish with cooking spray.
- In a large nonstick skillet, add remaining ingredients and cook 10 minutes, or until turkey is no longer pink and vegetables are tender.
- Evenly fill each pepper with turkey mixture and place in baking dish. Bake for about 30 to 35 minutes, or until peppers are tender.

Notes
If you love stuffed peppers any way you can get 'em, then you've got to try our recipes for Cheesy Chicken Salad Stuffed Peppers and Meaty Stuffed Peppers too!

Nutritional Information (Servings Per Recipe: 6; Amount Per Serving / % Daily Value*)
- Calories 326 (Calories from Fat 99)
- Total Fat 11g: 17%
- Saturated Fat 2.2g: 11%
- Trans Fat 0.1g: 0%
- Protein 22g: 45%
- Cholesterol 52mg: 17%
- Sodium 1,338mg: 56%
- Total Carbohydrates 35g: 12%
- Dietary Fiber 7.7g: 31%
- Sugars 15g: 0%
https://www4.mrfood.com/Turkey-Misc-Poultry/Sloppy-Joes-Stuffed-Peppers
When Is the Right Time for You to Start Taking Social Security?

A recent working paper from the National Bureau of Economic Research took a close look at when people should start drawing Social Security benefits, as well as whether the decisions people make match up with what theory says they ought to do. What researchers found was a disconnect: It turns out that while many people should consider waiting to take benefits, few people make their decisions based on the most important factors.

The Take-Wait Trade-Off
Right now, you can start taking Social Security as soon as you turn 62. But there's a catch: If you take benefits at your earliest opportunity, you'll take a haircut on your monthly payment, getting 25% less than you would if you waited until your normal retirement age of 66. Taking benefits early also has an impact on how much your spouse will receive if your spouse gets payments based on your work record. On the other hand, if you don't even need the money at age 66, you can put off starting to get Social Security until your 70th birthday. For each year beyond age 66 that you wait, your monthly benefits will get an 8% increase. So there's clearly a trade-off. Wait longer and you'll eventually get bigger monthly payments, but you'll collect fewer months of benefits over the course of your lifetime. Start earlier and you'll get money faster, but in the long run, you could be worse off.

Key Factors to Consider
According to the study, current interest rates are the key to when most people should take Social Security. When interest rates are low, the reward from getting higher monthly payments later is a lot greater than the value of getting income now, making it pay to wait. When interest rates are high, it makes more sense to get money sooner rather than later, even at the expense of getting smaller monthly payments. Right now, rates are low enough to make waiting the smart choice for most people. In addition, family considerations, i.e., whether you're married or not, play a role. At current rates, unmarried retirees should delay taking Social Security if possible, although the study found that men shouldn't wait until age 70 because of their shorter life expectancies. Married retirees, on the other hand, have a balancing act, as one spouse may get payments based on the other spouse's work history. In general, for one-income families, interest rates have to be very high in order for it to make sense to take benefits sooner rather than later. In two-earner families, the person with the higher income should usually wait, while the lower-income spouse should take spousal benefits at normal retirement age.

This Is Not a Drill
In the real world, though, people can't look at Social Security as a theoretical exercise. The study found that retirees with higher levels of education tended to wait to receive payments, but that was often because they had more savings set aside and thus didn't need benefits as much. In addition, your work status makes a big difference. More than 75% of early retirees took their benefits right at age 62. On the other hand, it usually doesn't make sense for people who are still working to take benefits before age 66, as your Social Security benefits get reduced by $1 for every $2 you make above $14,640 per year until the year you reach your normal retirement age.

What You Should Do
As with any study, this research gives broad guidelines, rather than specific advice.
In particular, with the research based on actuarial statistics, it doesn't take into account any personal health issues you may have. In general, the longer you expect to live, the more it makes sense to wait. As complicated as this question is, the key takeaway is to remember that taking Social Security isn't something you should decide on a whim. If you consider carefully all the trade-offs involved, you can make a decision that could mean thousands of extra dollars for you and your family.
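To make the trade-off concrete, here is a minimal sketch of the claiming-age arithmetic described above: a 25% reduction at 62, an 8% per-year credit for waiting past 66 (up to 70), and the $1-per-$2 earnings test above $14,640 for early claimers who keep working. The function names are illustrative, and the linear interpolation of the reduction between ages 62 and 66 is a simplifying assumption, not the Social Security Administration's exact formula.

```python
# Illustrative sketch of the claiming-age trade-off described in the article.
# Simplifying assumptions (not SSA's exact rules): the 25% reduction is
# interpolated linearly between ages 62 and 66, and the earnings test
# withholds $1 of benefits for every $2 earned above $14,640.

FULL_RETIREMENT_AGE = 66

def monthly_benefit(full_benefit: float, claim_age: int) -> float:
    """Approximate monthly benefit for a given claiming age (62-70)."""
    if claim_age < 62 or claim_age > 70:
        raise ValueError("Social Security can be claimed between ages 62 and 70")
    if claim_age < FULL_RETIREMENT_AGE:
        # Up to a 25% haircut at 62, shrinking to 0% at 66 (linear approximation).
        reduction = 0.25 * (FULL_RETIREMENT_AGE - claim_age) / 4
        return full_benefit * (1 - reduction)
    # 8% delayed-retirement credit for each year past 66, up to age 70.
    return full_benefit * (1 + 0.08 * (claim_age - FULL_RETIREMENT_AGE))

def earnings_test_withholding(annual_earnings: float, threshold: float = 14640.0) -> float:
    """Benefits withheld per year for early claimers who are still working."""
    return max(0.0, (annual_earnings - threshold) / 2)

if __name__ == "__main__":
    full = 1000.0  # hypothetical monthly benefit at full retirement age
    for age in (62, 66, 70):
        print(f"claim at {age}: ${monthly_benefit(full, age):,.2f}/month")
    print(f"withheld on $30,000 earnings: ${earnings_test_withholding(30000):,.2f}/year")
```

With a hypothetical $1,000 benefit at 66, this prints $750 at 62 and $1,320 at 70, which is exactly the 25% haircut and the four years of 8% credits the article describes.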
https://www.aol.com/2012/03/21/when-is-the-right-time-for-you-to-start-taking-social-security/
The proposal is that string theory would resolve two major problems of classical black holes, namely the singularity of infinite space-time curvature and the missing multiplicity of states needed to account for black hole entropy, by replacing the usual black hole interior with complicated (or fuzzy) spacetimes reacting to the presence of strings.

The event horizon
Samir Mathur of Ohio State University, with postdoctoral researcher Oleg Lunin, calculated that what could be considered an effective event horizon of a fuzzball agrees with the current theory of black holes, but in one way it is different. The event horizon of a black hole is very precise and strict, while in a fuzzball the event horizon is very much like a mist; it is fuzzy, hence the name "fuzzball".

The essence of the black hole
Black holes have grabbed attention as the massive killers of the universe that destroy anything in their path; even light cannot escape their pull. However, fuzzballs have redefined this idea. As described earlier, a fuzzball doesn't have a prominent singularity at its center, and so the destruction of information that is the essence of a black hole no longer occurs in a fuzzball. Instead, the strings that make up the fuzzball carry the information in their vibrations, and these data can be given out by the escape of Hawking radiation.

The information paradox
Black holes create a problem: they cause a contradiction widely known as the black hole information paradox, which means that they don't obey the laws of quantum physics. In the classic model, a black hole can be made of anything but will always end up in the same state, violating the quantum mechanical law of reversibility. However, fuzzballs may solve this problem, because information that enters a fuzzball is given out through the vibrations of the strings that make it up.

External links
* [http://www.spacetoday.org/DeepSpace/Stars/BlackHoles/BlackHoleFuzzball.html Are Black Holes Fuzzballs?]
* [http://researchnews.osu.edu/archive/fuzzball.htm Information paradox solved? If so, Black Holes are "Fuzzballs"]
* [http://www.physics.ohio-state.edu/~mathur/faq.pdf Frequently Asked Questions about Fuzzballs]
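For reference, the "missing multiplicity of states" mentioned above is the microscopic state count needed to reproduce the Bekenstein-Hawking entropy, which is fixed by the horizon area. This standard formula is added here for orientation; it is not part of the original article:

```latex
% Bekenstein-Hawking entropy, with A the horizon area and
% \ell_p = \sqrt{G\hbar/c^3} the Planck length.
S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4 \ell_p^2}
```

Any microscopic model of black holes, fuzzballs included, must count roughly exp(S_BH) distinct states to account for this entropy.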
https://enacademic.com/dic.nsf/enwiki/696061/Fuzzball_%28string_theory%29
The present invention relates to a system and method for controlling an electronic device, and more particularly, to a system and method for efficiently controlling an electronic device in consideration of a state of the electronic device and external information.

With the development of electronic communication technology, remote control of various electronic devices, such as lighting controls, thermostats, and various electronic appliances such as televisions (TVs), computers, electric fans, refrigerators, ovens, etc., is possible through a network. More specifically, the electronic devices connect with a managing device in different wired or wireless schemes. The managing device also connects with a terminal in different wired or wireless schemes. When a user of the terminal selects a function, i.e., an execution command, to be performed by an electronic device, e.g., powering on, the execution command is transferred from the terminal to the managing device, and then to the electronic device, so that the electronic device operates according to the execution command. As described above, the user may operate various electronic devices using the terminal.

US patent application number 2008/0282182 discloses a network household electric control system which includes an external server on the internet. US patent application number 2006/0068759 A1 discloses an authentication method for authenticating an operation direction for a remote controlled device transmitted by a remote control terminal.

However, many newer electronic devices provide a large number of operation modes and scheduling functions, which cannot be efficiently used through remote control because the operation modes of the electronic devices are too complex. For example, a newer washing machine has a number of functions such as bubble wash, bubble echo, air deodorization, air sterilization, small load, rapid speed, etc. However, the conventional art does not consider the amount of clothes that require washing or external situations, such as current energy costs. Consequently, even if it were possible to control all of the various functions of the washing machine, the washing machine would still be inefficiently used. Accordingly, because using electronic devices is often complicated and inconvenient, a user cannot efficiently use all of the functions provided by the electronic device via remote control.

The present invention has been made in view of the above problems, has been designed to address at least some of them, and provides at least the advantages described below. Accordingly, an aspect of the present invention is to provide a system and method for controlling an electronic device, wherein the electronic device is efficiently used in consideration of a current state of the electronic device and external information.

In accordance with an aspect of the present invention, a method for controlling an electronic device by a managing device is provided.
The method includes receiving, by the managing device, a search word for operating the electronic device; acquiring operation information based on the received search word; generating an operation schedule of the electronic device for the received search word based on the acquired operation information; and controlling the electronic device based on the operation schedule.

In accordance with a further aspect of the present invention, a terminal is provided, which includes an input for receiving a search word for operating an electronic device; a radio frequency communication unit for transferring the received search word to a managing device, and receiving, from the managing device, operation information corresponding to the search word and an operation schedule of the electronic device which is generated for the search word based on the operation information; and a terminal controller for controlling the electronic device based on the operation schedule.

In accordance with another aspect of the present invention, a system is provided, which includes an electronic device for providing a search word; a server for providing operation information for operating the electronic device based on the search word; and a managing device for acquiring the operation information corresponding to the search word, generating an operation schedule of the electronic device for the search word based on the operation information, and controlling the electronic device based on the operation schedule.

According to an embodiment of the present invention, a system and method are provided that increase the convenience of using an electronic device and reduce time and cost by analyzing a state of the electronic device and external information (e.g., weather, schedule, electricity rate, etc.) to select optional functions and an operation time, such that the electronic device operates by itself. Additionally, the present invention permits the user to efficiently use an electronic device in consideration of a state of the electronic device and external information.

FIG. 1 illustrates a system for controlling an electronic device according to an embodiment of the present invention;
FIG. 2 is a block diagram illustrating a managing device according to an embodiment of the present invention;
FIG. 3 is a block diagram illustrating a device management controller according to an embodiment of the present invention;
FIG. 4 is a block diagram illustrating a terminal according to an embodiment of the present invention;
FIG. 5 is a block diagram illustrating a server for providing operation information according to an embodiment of the present invention;
FIG. 6 is a signal flow diagram illustrating a method for controlling an electronic device according to an embodiment of the present invention;
FIG. 7 illustrates an example of a user interface for device management setting, presented on a terminal display according to an embodiment of the present invention;
FIG. 8 illustrates an example of a user interface for a selected electronic device according to an embodiment of the present invention;
FIG. 9 illustrates an example of a user interface for selecting a function of an electronic device according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating functions of a washing machine according to an embodiment of the present invention;
FIG. 11 illustrates an example of selected optimal conditions, presented on a terminal display according to an embodiment of the present invention;
FIG. 12 is a flowchart illustrating a method for controlling a washing machine according to an embodiment of the present invention;
FIG. 13 is a diagram illustrating functions of an air conditioner based on environmental conditions according to an embodiment of the present invention;
FIG. 14 is a diagram illustrating functions of a refrigerator according to environmental conditions according to an embodiment of the present invention;
FIG. 15 is a diagram illustrating functions of a TV according to environmental conditions according to an embodiment of the present invention; and
FIG. 16 is a flowchart illustrating a method for controlling an electronic device according to an embodiment of the present invention.

The above and other aspects, features, and advantages of the present invention will be more apparent from the following detailed description in conjunction with the accompanying drawings. Various embodiments of the present invention are described in detail below with reference to the accompanying drawings. Throughout the drawings, the same reference numbers are used to refer to the same or like parts. Further, detailed descriptions of well-known functions and structures incorporated herein may be omitted to avoid obscuring the subject matter of the present invention.

In accordance with an embodiment of the present invention, a current state of an electronic device and external information relating to the electronic device are analyzed to receive operation information having an optimal function and an operation time, and to execute the electronic device based on the received operation information. Herein, a system for constructing and integrally managing electronic devices by a network is referred to as a "home network system". The home network system networks home electronics devices (e.g., TVs, washing machines, microwave ovens, gas ranges, audio players, air conditioners, boilers, etc.), lighting controls, gas valves, door locks, etc., to connect to a controller such as a home gateway or a home server, and uses a portable terminal as a remote control.

Digital Living Network Alliance (DLNA), formerly known as Digital Home Working Group (DHWG), is a standardization body that is attempting to establish a compatible platform based on already-established industry standards and to realize convergence across the industries. The DLNA promotes the introduction of a guideline among the industries based on the Universal Plug and Play (UPnP) protocols, which have been widely used in the manufacture of home appliances, personal computers, wireless devices, etc. UPnP is technology with which information appliance devices connect to a network to communicate with each other without complicated settings, wherein one device automatically searches for a service of another device. The current guidelines introduced by DLNA have provided a design principle capable of sharing content between different brands and products through a wired/wireless home network between electronic appliances, PCs, and wireless devices. Accordingly, products designed according to the guidelines are capable of sharing media content such as music, photographs, video, etc., through the home network.
When sharing content between home devices in the home network environment based on DLNA, a home network data sharing system controls a service in consideration of the characteristics of the devices and the communication environment, and is operatively associated with various services over a communication network to provide a service of excellent quality. Examples of a communication module to be included in the electronic device include a 3rd Generation (3G) communication module, a 4th Generation (4G) communication module, a Wi-Fi communication module, and a Zigbee® communication module. For example, an electronic device may be a smart TV, a smart phone, a smart appliance (refrigerator or washer), etc.

FIG. 1 illustrates a system for controlling an electronic device according to an embodiment of the present invention. Referring to FIG. 1, the system includes a managing device 100, a terminal 200, a server 300, and electronic devices 400.

The managing device 100 controls the electronic devices 400. For example, the managing device 100 stores identification information, a driving pattern, and a software pattern for the electronic devices 400 connected to the managing device 100 to control execution of the electronic devices 400. The electronic devices 400 include household electronic appliances, office devices used in an office, medical devices in a hospital, and industry devices used in a factory. For example, the electronic devices include TVs, refrigerators, washing machines, computers, electric fans, air conditioners, Digital Versatile Disc (DVD) players, audio players, external speakers, game machines, boilers, lighting controls, microwave ovens, gas ranges, Digital Signage (DS), Large Format Displays (LFDs), digital cameras, vacuums, security devices, etc.

In FIG. 1, the terminal 200, e.g., a smart phone or a tablet PC, receives user input through a Graphic User Interface (GUI) to remotely control at least one of the electronic devices 400. For example, the terminal 200 transmits an execution command with respect to a certain electronic device to the managing device 100, which transfers the execution command to a designated electronic device, such that the electronic device executes a function corresponding to the execution command. A wireless interface supporting communication of the terminal 200 with the electronic devices 400 or the managing device 100 may include an interface of a near distance communication protocol such as Radio Frequency Identification (RFID), BLUETOOTH®, Near Field Communication (NFC)®, Infrared Data Association (IrDA)®, and Zigbee®.

The server 300 transmits a control command for the electronic devices 400 to the managing device 100 according to a request of the managing device 100 or in a push scheme. For example, the server 300 is connected to the managing device 100 in various wireless communication schemes such as Wi-Fi, Wibro, 3G, and 2G. The managing device 100 controls the electronic devices 400 according to the corresponding control commands received from the server 300. The control command includes operation information for operating a corresponding electronic device according to a search word input through an input unit of the terminal 200 or an input unit of the corresponding electronic device. Alternatively, the operation information may be provided first from the server 300. Further, the managing device 100 receives the operation information from the server 300, and stores and manages the received operation information.
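As a rough illustration of this search-word-driven flow (terminal forwards a search word, the managing device asks the server for operation information, builds a schedule, and controls the target device), here is a minimal sketch. All class names, method names, and the in-memory "server" lookup are hypothetical; the patent describes the roles, not this code.

```python
# Minimal sketch of the search-word-driven control flow described above.
# All names here are hypothetical illustrations, not the patent's API.

from dataclasses import dataclass

@dataclass
class OperationInfo:
    device_type: str      # e.g., "washing_machine"
    function: str         # e.g., "blanket_cycle"
    duration_min: int     # how long the device should run

class Server:
    """Stands in for server 300: maps search words to operation information."""
    _catalog = {
        "cotton quilt": OperationInfo("washing_machine", "blanket_cycle", 90),
        "bacon": OperationInfo("microwave_oven", "crisp", 4),
    }
    def lookup(self, search_word: str) -> OperationInfo:
        return self._catalog[search_word]

class ElectronicDevice:
    """Stands in for one of the electronic devices 400."""
    def __init__(self, device_type: str):
        self.device_type = device_type
    def execute(self, function: str, duration_min: int) -> None:
        print(f"{self.device_type}: running '{function}' for {duration_min} min")

class ManagingDevice:
    """Stands in for managing device 100."""
    def __init__(self, server: Server, devices: dict[str, ElectronicDevice]):
        self.server = server
        self.devices = devices
    def handle_search_word(self, search_word: str) -> None:
        info = self.server.lookup(search_word)              # acquire operation info
        schedule = (info.function, info.duration_min)       # generate a (trivial) schedule
        self.devices[info.device_type].execute(*schedule)   # control the device

# Usage: the terminal (200) would forward the user's search word here.
manager = ManagingDevice(Server(), {"washing_machine": ElectronicDevice("washing_machine")})
manager.handle_search_word("cotton quilt")
```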
The search word may indicate contents related to the electronic device. For example, when the electronic device is a washing machine and the contents include a cotton quilt that may be washed by the washing machine, the search word is the cotton quilt. Further, when the electronic device is a microwave oven, the search word may be a type of food. When any of the electronic devices 400 includes an input unit (not shown) and the search word is input through the input unit of the electronic device, device information (device type, device performance, etc.) of the electronic device may be automatically displayed or input. When the search word is input to the terminal 200 or the managing device 100, an electronic device is selected from an electronic device list indicated on the terminal 200 or the managing device, and the selection may be input together with information with respect to the selected electronic device.

The operation information may indicate operation information by situations, optimized for an electronic device according to a search situation (environmental elements such as weather or temperature) or a search object (an object whose characteristics change according to an operation of an electronic device) corresponding to a search word. The operation information by situations may indicate operation information capable of operating a corresponding electronic device according to a search word input by the user. The operation information by situations may be primarily provided from the server 300, and may be stored and managed in the managing device 100.

FIG. 2 is a block diagram illustrating a managing device 100 according to an embodiment of the present invention. Referring to FIG. 2, the managing device 100 includes a communication unit 110, an input 120, an audio processor 130, a device management display 140, a device management memory 160, and a device management controller 170.

The communication unit 110 forms a wired/wireless communication channel (hereinafter referred to as a "data communication channel") for transceiving data such as a control signal or a data package under control of the device management controller 170. For example, the communication unit 110 forms a wired/wireless communication channel for communicating with the terminal 200, the server 300, and the electronic devices 400. Accordingly, the communication unit 110 transmits data for management between the structural elements of the communication system under control of the device management controller 170 through the data communication channel. For example, the communication unit 110 receives a device management control command from the server 300, transmits the device management control command to a certain electronic device, receives the device management control command from the terminal 200 or the electronic device, or transmits the device management control command from the terminal 200 to the certain electronic device.

The input 120 may include various input devices for receiving a search word, i.e., numerical or character information, and setting various functions.
For example, the input 120 includes a plurality of input keys and function keys, i.e., either physical keys or keys displayed on a touch panel. A search word for operation information of an electronic device may be input through the input 120. The operation information may include information for operating a corresponding electronic device according to the search word input by the user. The input search word is transferred to the device management controller 170 and may be used to request operation information for the electronic device according to the search word from the server 300, through the communication unit 110. Accordingly, the device management controller 170 generates the device management control command for controlling the electronic devices 400 according to operation information of the electronic devices 400 received from the server 300.

The audio processor 130 includes a speaker SPK for playing audio data according to various execution modes or a function selection of the managing device 100, and a microphone MIC for receiving a voice signal of a user for setting an execution mode or executing a function. The audio processor 130 outputs a signal or an effect sound indicating control of the electronic devices 400. Further, when the electronic devices 400 are executed according to the operation information, the audio processor 130 may output a warning sound. The warning sound or the effect sound may be omitted according to user settings.

The device management display 140 displays information input by the user and information provided to the user, such as various menus of the managing device 100. That is, the device management display 140 displays a screen indicating a search word input by the user and operation information of an electronic device according to the search word. Further, a screen indicating operation information of the electronic device received from the server 300 may be displayed.

The device management memory 160 stores at least one application program for a function operation of the managing device 100, data created by the user, a message transceived in a communication system, and data according to execution of an application program. The device management memory 160 may also store an Operating System (OS) for booting the managing device 100 and for operating the foregoing structural elements, and an application program for controlling the electronic devices 400. Additionally, the device management memory 160 stores a program for controlling the electronic devices 400 and user information for using the electronic devices 400 according to operation information of the electronic devices 400. A control signal for operation information of the electronic devices 400 may be acquired through a data communication channel formed between the server 300 and the managing device 100 under control of the device management controller 170. The managing device 100 accumulates and stores the operation information in the device management memory 160. When an electronic device requests the operation information from the managing device 100, i.e., when a search word is input, the managing device 100 searches for the operation information in the device management memory 160 to determine whether to provide the operation information to the electronic devices 400.
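This accumulate-then-search behavior is essentially a cache-aside pattern: serve operation information from local memory when present, otherwise fetch it from the server and store it. The following sketch is a hypothetical illustration of that idea; none of these names come from the patent.

```python
# Hypothetical sketch of the managing device's cache-aside behavior:
# serve operation information from device management memory when present,
# otherwise fetch it from the server and store it for next time.

class ManagingDeviceCache:
    def __init__(self, server_lookup):
        self._memory = {}                # stands in for device management memory 160
        self._server_lookup = server_lookup

    def get_operation_info(self, search_word: str):
        if search_word in self._memory:
            return self._memory[search_word]     # already accumulated locally
        info = self._server_lookup(search_word)  # ask server 300
        self._memory[search_word] = info         # store for future requests
        return info

cache = ManagingDeviceCache(lambda word: {"function": f"preset for {word}"})
print(cache.get_operation_info("cotton quilt"))  # fetched from the "server"
print(cache.get_operation_info("cotton quilt"))  # served from memory
```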
FIG. 3 is a block diagram illustrating a device management controller according to an embodiment of the present invention. Referring to FIG. 3, the device management controller 170 includes an operation information acquiring unit 171, an operation information analyzing unit 173, and a function executing unit 175.

The operation information acquiring unit 171 acquires a search word through the communication unit 110. When the operation information acquiring unit 171 acquires a search word that does not correspond to an electronic device, it may indicate a wrong search word through the audio processor 130 or the device management display 140.

The operation information and the search word may be defined as described below. For example, operation information by situations for a washing machine may indicate the type and amount of washing object(s), the external weather state, or electricity-use efficiency (night-time electricity). Accordingly, a search word to be input to the input 120 may include the type and amount of washing objects, e.g., 5 cotton blankets. The device management display 140 displays the type and amount of washing objects, the weather (collected weather information), and the electricity-use efficiency (use of night-time electricity).

For example, operation information by situations for a microwave oven may include a recipe according to a type and an amount of food. Accordingly, a search word to be input to the input unit may include a type and amount of food to be cooked, e.g., 5 pieces of bacon. Detailed operation information by situations with respect to the microwave oven may also include an operation time and operation power consumption, whether or not the food should be turned over, or a cook level, e.g., rare, medium rare, medium, well-done, etc.

For example, operation information by situations for an air conditioner may include operation information according to weather, time, and season. Accordingly, a search word to be input may include temperature information. Detailed operation information by situations with respect to the air conditioner may include that day's weather, the current time, the current external temperature, or function properties, e.g., the presence of an ion function, a nano silver function, or an air cleaning function.

For example, operation information by situations for an electric blanket may include an external temperature, time (day, night), light amount (e.g., reduce the driving temperature when there is a large amount of light, and increase the driving temperature when there is a small amount of light), operation information according to the age or health state of a user, etc.

For example, operation information by situations for a vacuum cleaner may include a noise control function according to day and night, e.g., power control that drives the vacuum cleaner at a low power level at night and at a high power level during the day, steam cleaning, weather, etc.

Using the examples provided above, the operation information acquiring unit 171 transmits the acquired operation information to the operation information analyzing unit 173. The operation information analyzing unit 173 analyzes the operation information acquired by the operation information acquiring unit 171. The operation information may include ambient environment information or electronic device information, e.g., information indicating functions of the electronic device. For example, when the electronic device is a microwave oven, the function of the electronic device indicates a temperature, an operation time, and an operation power for cooking a fish.
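The "operation information by situations" above is essentially a per-device lookup keyed by search word and situation. A minimal sketch of such a structure follows; the field names and example values are hypothetical, loosely mirroring the washing machine and microwave examples.

```python
# Hypothetical sketch of "operation information by situations": a nested
# mapping from device type -> search word -> situational parameters.
# Field names and values are illustrative, not from the patent.

operation_info_by_situations = {
    "washing_machine": {
        "cotton blanket": {
            "clear_weather": {"spin_cycle_min": 5, "wash_time_min": 80},
            "cloudy_weather": {"spin_cycle_min": 12, "wash_time_min": 80},
        },
    },
    "microwave_oven": {
        "bacon": {
            "default": {"power_w": 800, "time_sec": 240, "turn_over": True},
        },
    },
}

def lookup(device: str, search_word: str, situation: str = "default") -> dict:
    """Return situational parameters for a device and search word."""
    by_situation = operation_info_by_situations[device][search_word]
    # Fall back to the first available situation if the requested one is absent.
    return by_situation.get(situation) or next(iter(by_situation.values()))

print(lookup("washing_machine", "cotton blanket", "cloudy_weather"))
```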
The operation information analyzing unit 173 generates data for operating the electronic device with reference to the external environment and the state of the electronic device. Accordingly, the operation information analyzing unit 173 receives data of the external environment from the server 300 through the communication unit 110. For example, the data of the external environment may indicate weather conditions such as temperature, humidity, strength of wind, time period, etc. The operation information analyzing unit 173 analyzes the external environment and the functions of the electronic device based on an input search word, and generates optimal operation information of the electronic device corresponding to the search word.

For example, when the search word is a cotton blanket and the electronic device is a washing machine, the operation information analyzing unit 173 may shorten the spin cycle during clear, warm weather or lengthen the spin cycle during cloudy weather, because the drying time of the cotton blanket will change according to the external environment, i.e., the weather. Further, when shortening the spin cycle, less electricity is used, thereby reducing the operating cost. Further, a function for increasing the washing time or controlling the amount of a washing object may be selected according to the weight of the cotton blanket. The operation information analyzing unit 173 selects an optimal function of the electronic device based on the external environment and the input search word, and generates data for an operation of the electronic device through the foregoing procedure. The generated operation information of the electronic device may be transmitted to the function executing unit 175 or be stored in the device management memory 160.

When a plurality of search words is acquired by the operation information acquiring unit 171, the operation information analyzing unit 173 compares the external environment and the functions of the electronic device based on the respective search words to select optimal functions of the electronic device. For example, overlapping functions among the optimal functions selected for the respective search words may be selected. When there is a great difference between the overlapping functions, the difference may be displayed through the audio processor 130 or the device management display 140. For example, when the search words for the microwave oven are popcorn and milk, the amount of heat and the time needed to heat the popcorn are very different from the amounts needed to heat the milk. Accordingly, in order to prevent one of the foods from being improperly heated, and possibly ruined, the difference between the overlapping functions is displayed through the audio processor 130 or the device management display 140.

When the search words acquired by the operation information acquiring unit 171 are a search word of an upper concept and a search word of a lower concept belonging to the upper concept, the operation information analyzing unit 173 may select one of the search word of the upper concept and the search word of the lower concept. For example, when the functions of an electronic device selected by the search word of the upper concept include all the functions of the electronic device to be driven by the search word of the lower concept, the operation information analyzing unit 173 may select the search word of the upper concept.
When the functions of the selected electronic devices differ from each other because the search words acquired by the operation information acquiring unit 171 differ from each other, the operation information analyzing unit 173 may select all of the functions of the electronic devices selected by the search words.

The operation information analyzing unit 173 combines the electric charges by time period of an electronic device, the schedule of the user, and the weather to analyze operation information of the electronic device. For example, the operation information analyzing unit 173 selects a time period having the cheapest electric charges by time period for the electricity used by the electronic devices, reviews the schedule of the user to select a time period with an empty schedule, and selects a time period having suitable temperature and humidity in consideration of the external weather. Accordingly, the operation information analyzing unit 173 may integrally review a time period having cheap electric charges, a time period having an empty schedule, and the external weather to select the most suitable time period, as sketched below. The operation information analyzing unit 173 then optimizes a function of an electronic device to provide efficient use and convenience. Further, the operation information analyzing unit 173 may reduce the use time and cost of the electronic device through the operation information of the electronic device.

The function executing unit 175 generates control commands for the electronic devices 400 based on the analyzed operation information from the operation information analyzing unit 173, and transmits the generated control commands to the electronic devices 400 to control them. Further, the function executing unit 175 may transmit a control command to the terminal 200 through the communication unit 110, and the terminal 200 may then transmit the control command to the electronic devices 400. The function executing unit 175 acquires operation information data stored in the device management memory 160 to control the electronic devices 400.

In accordance with an embodiment of the present invention, functions of an electronic device may be provided which are categorized according to a current state and an external environment of the electronic device. For example, when the electronic device is a washing machine, the washing time or the drying time of the electronic device may be controlled. Alternatively, the function executing unit 175 controls the electronic devices 400 based on the operation information which the operation information analyzing unit 173 selects by comparing, for each of a plurality of acquired pieces of operation information, the external environment and the functions of the electronic device in order to select the optimal functions of the electronic device.
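To make the integrated time-period selection concrete, here is a minimal sketch that scores candidate hours by electricity tariff, calendar availability, and weather suitability. The weighting, data shapes, and function names are all hypothetical illustrations of the idea, not the patent's algorithm.

```python
# Hypothetical sketch of the integrated time-period selection: pick the hour
# that is cheap, free on the user's calendar, and has suitable weather.
# All data shapes, thresholds, and names are illustrative.

def pick_best_hour(tariff_by_hour, busy_hours, weather_by_hour,
                   temp_range=(10, 28), max_humidity=70):
    """Return the hour with the lowest tariff among hours that are free on
    the user's schedule and have suitable temperature and humidity."""
    candidates = []
    for hour, price in tariff_by_hour.items():
        if hour in busy_hours:
            continue  # the user's schedule takes priority
        temp, humidity = weather_by_hour[hour]
        if not (temp_range[0] <= temp <= temp_range[1]) or humidity > max_humidity:
            continue  # unsuitable weather (e.g., for drying laundry)
        candidates.append((price, hour))
    if not candidates:
        raise ValueError("no suitable time period found")
    return min(candidates)[1]  # cheapest suitable hour

# Example: night-time electricity is cheapest, but 23:00 is on the calendar.
tariff = {9: 0.20, 14: 0.25, 23: 0.08, 3: 0.07}
busy = {23}
weather = {9: (18, 50), 14: (26, 60), 23: (12, 65), 3: (11, 68)}
print(pick_best_hour(tariff, busy, weather))  # -> 3
```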
Further, the terminal 200 remotely controls at least one of the managing device 100 and the electronic devices 400 to perform a control function of an electronic device. The terminal 200 may be substituted for the managing device 100, and may provide a control function directly to the electronic device. The RF communication unit 210 forms a communication channel with the managing device 100 and receives information provided from the managing device 100, e.g., electronic device control setting screen interface information, an interface screen for selecting an electronic device, interface information for user input, or interface information for selecting a usable electronic device according to user information. The foregoing information may be transferred to the terminal controller 260. When the managing device 100 controls the electronic devices 400, the RF communication unit 210 forms communication channels between the terminal 200 and the electronic devices 400, respectively. The RF communication unit 210 performs the same function as the communication unit 110 of the managing device 100 to collect device information of the electronic devices 400. The collected device information may be transferred to the terminal controller 260, and the terminal controller 260 may be substituted for the managing device 100 according to an operation design of the terminal 200 and generate a control signal for the electronic devices 400. Accordingly, the RF communication unit 210 receives control information of an electronic device from the server 300 and provides the received control information to the terminal controller 260. A control command may be input to the input unit 220 to control the electronic devices 400. The input control command is transferred to the terminal controller 260 to control the electronic devices 400. For example, the input unit 220 includes various input devices for receiving input search words and numerical or character information, and for setting various functions. For example, the input unit 220 includes a plurality of physical input keys and function keys, or a touch panel displaying input keys and function keys. The input unit 220 of the present invention may generate an input signal corresponding to a search word for operation information of the electronic devices 400. The terminal display 240 outputs various screens for operating the terminal 200. The terminal display 240 may be configured as a touch screen in the same manner as the device management display unit 140 of the managing device 100 and executes a function for inputting information to the managing device 100. When the RF communication unit 210 receives information from the managing device 100, the terminal display 240 outputs the received information. For example, the terminal display 240 displays an electronic device setting interface, or an electronic device setting interface according to an input search word. Alternatively, the foregoing interfaces may be generated and provided by the terminal 200 itself, without receiving information from the managing device 100. To do this, the terminal memory 250 stores a device managing program 251, which drives an electronic device control function of the present invention using the terminal 200. The device managing program 251 may remotely control a plurality of electronic devices 400 by accessing a web server in consideration of the mobility of the terminal 200.
In this case, the managing device 100 may be provided between the electronic devices 400 and the terminal 200, function as a gateway, receive a control signal for the electronic devices 400 of the terminal 200 from the web, and transmit the received control signal to the electronic devices 400. The terminal memory 250 may store an OS and various application programs for operating the terminal 200. The terminal controller 260 controls signal flow and the transceiving of information according to the various functions of the terminal 200 associated with a control function of the electronic devices 400. For example, when the terminal 200 outputs information provided from the managing device 100 to the terminal display 240 and transmits an input signal of a user to the managing device 100, the terminal controller 260 receives at least one of the foregoing interfaces and information through the RF communication unit 210 and outputs it to the terminal display 240. When the user selects a certain item or inputs a preset value into a displayed interface, the terminal controller 260 transfers the corresponding information to the managing device 100. The terminal controller 260 outputs information generated while the managing device 100 controls the electronic devices 400 through the terminal 200, and transfers a user request to the managing device 100. When the terminal 200 is substituted for the managing device 100, i.e., directly controls the electronic devices 400, the terminal controller 260 performs the function of the device management controller 170, such that the terminal 200 may independently perform the foregoing interface arrangement and information output. When the terminal 200 is spaced apart from a zone of the electronic devices 400 by greater than a predetermined distance, in consideration of the mobility of the terminal 200, the terminal controller 260 may form a communication channel for controlling the electronic devices 400 through network access and support signal transceiving. As described above, the terminal 200 outputs information associated with operation control of the electronic devices 400 and transfers a corresponding input signal such that the user sets and operates an electronic device control function and confirms a corresponding result. In addition, the terminal 200 supports various functions associated with an electronic device control function, e.g., an information providing function according to operation information of the electronic devices 400, or information provision such as a control limit function of the electronic devices 400 according to a search word. FIG. 5 is a block diagram illustrating a server for providing operation information according to an embodiment of the present invention. Referring to FIG. 5, the server 300 includes a communication unit 310, a controller 320, and a memory 330. The communication unit 310 communicates with at least one of the managing device 100, the electronic devices 400, and the terminal 200. For example, the communication unit 310 communicates with the managing device 100 to transfer a command requested from the managing device to the controller 320. The controller 320 analyzes a command received through the communication unit 310, and analyzes the electronic devices 400 to generate operation information suited to the command or to select preset operation information.
The controller 320 stores the operation information in the memory 330 or transmits the operation information to the managing device 100, the terminal 200, or the electronic devices 400 through the communication unit 310. The controller 320 acquires information of the electronic devices 400 or external environment data through the Internet. The controller 320 may integrally analyze the acquired information of the electronic devices 400, the external environment data, and the received command, namely, a search word, to generate a command capable of operating the electronic devices 400 or to select a previously generated command. The previous command may be a command set by a user using the electronic devices 400 or a command received as operation information of the electronic devices 400 through the Internet. The memory 330 stores a command received through the communication unit 310. Identification information of the electronic devices 400, information about an external environment, or operation information suited to the electronic device may be stored in the memory 330. Further, the operation information generated by the controller 320 may be stored in the memory 330. FIG. 6 is a signal flow diagram illustrating a method for controlling an electronic device according to an embodiment of the present invention. Referring to FIG. 6, the managing device 100 requests identification (ID) information of the electronic devices 400 from the electronic devices in step S501. Although not illustrated in FIG. 6, the managing device 100 may also request the ID of the electronic devices 400 from the terminal 200. For example, the requested ID information may include device names, functions, and unique numbers of the electronic devices 400. In step S502, the ID information of the electronic devices 400 is transmitted to the managing device 100. The identification information of the electronic devices 400 may be transmitted to the managing device 100 through the terminal 200, or directly transmitted from the electronic devices 400 to the server 300. In step S503, a search word input to the terminal 200 is transmitted to the managing device 100. For example, the search word indicates objects operated by the electronic devices 400. Alternatively, the search word may be transmitted to the managing device 100 through the electronic devices 400, or may be directly input through an input of the managing device 100. In step S504, the search word and information of the electronic devices 400 associated with the search word are transmitted to the server 300 to request operation information of the electronic devices 400 associated with the search word. In step S505, the server 300 analyzes the search word, the information of the electronic devices 400 associated with the search word, and external environment data to generate operation information of the electronic devices 400 associated with the search word or to select preset operation information. The server 300 may acquire the preset operation information through a web site. The preset operation information may be data uploaded to a web server by users of the electronic devices 400. Further, the preset operation information may be data obtained by optimizing functions of the electronic devices 400 according to the search word and the external environment and uploaded to the web server. The server 300 may download the information of the electronic devices 400 through the Internet, i.e., not from the electronic devices 400.
The server 300 may acquire the external environment data from the managing device 100 or through the Internet. The managing device 100 may acquire the external environment data through the Internet. The external environment data may include at least one of electric charges by time periods of the electronic devices 400, temperature and humidity as the external weather, and home schedules of the user. The server 300 or the managing device 100 may integrally review the search word and the external environment data in order to generate the operation information of the electronic devices 400. Alternatively, the operation information of the electronic devices 400 may be directly transmitted from the server 300 to the managing device 100, the terminal 200, or the electronic devices 400. In step S506, the managing device 100 may transmit the operation information of the electronic devices 400 received from the server 300 to the terminal 200. In step S507, the terminal 200 transmits a signal for controlling the electronic devices 400, based on the received operation information, to the associated electronic devices 400 to control them. Alternatively, the managing device 100 may directly transmit a signal for controlling the electronic devices 400, based on the operation information received from the server 300, to the associated electronic devices to directly control the associated electronic devices 400. Next, screen interfaces output on at least one display of the managing device 100 and the terminal 200 will be described in detail with reference to the accompanying drawings. Hereinafter, the screen interface is described based on the terminal display unit 240 of the terminal 200. FIG. 7 illustrates an example of a user interface for device management setting displayed on a terminal according to an embodiment of the present invention. Referring to FIG. 7, a user may generate an input signal for controlling the electronic devices 400 using the terminal 200. Accordingly, the terminal 200 provides a menu item for setting device management. In FIG. 7, the terminal display 240 outputs a selected electronic device 702 and a holding electronic device list item 705. To do this, the terminal 200 collects information of the electronic devices 400 or receives device information of the electronic devices 400 from the managing device 100. When receiving the device information of the electronic devices 400, the terminal 200 allocates an index to the corresponding devices, for example, icons or widgets, and outputs the allocated index at a predetermined location of the terminal display 240, namely, the holding device list item 705. Icons 706 corresponding to the electronic devices 400 are displayed under the holding device list item 705. Each of the icons 706 may be selected by a user to activate or deactivate the corresponding electronic device. The device addition icon 707 is selected to add an electronic device. A method of adding the electronic device is achieved by inputting a name of the electronic device using a user interface or inputting a model name of the electronic device to be added. An electronic device 701 indicates an electronic device selected from the holding electronic device list item 705. For example, in FIG. 7, the washing machine 704 indicates that the washing machine icon 708 has been selected from the holding device list item 705.
When the washing machine is selected as the electronic device 701, the washing machine icon 708 is displayed with dotted lines on the holding device list item 705. When the washing machine 704 is selected, only the icons of usable electronic devices among the electronic devices 706 displayed on the holding device list 705 may be activated. To add another electronic device, an addition icon 703 may be selected to input information of the electronic device. Otherwise, to add the other electronic device, one of the icons 706 may be dragged to the selected electronic device 702. The electronic devices 706 may be changed according to the taste or living environment of a user or as needed, or may be controlled according to the types of the electronic devices 400 operated by the user. Moreover, the electronic device registered according to user information may be controlled again according to user settings. FIG. 8 illustrates an example of a user interface for a selected electronic device according to an embodiment of the present invention. Referring to FIG. 8, when the electronic device icon 704 is selected, a screen interface for inputting a search word to be used in the selected electronic device is displayed. In FIG. 8, a first washing object 801 indicates a registered washing object, and a second washing object 802 indicates a washing object to be newly added. For example, when a user using the washing machine washes socks and underwear, the first washing object 801 may be socks and the second washing object 802 may be underwear. To add another washing object, interface screen 803 is used. Only usable functions of electronic devices may be selected based on the input information. FIG. 9 illustrates an example of a user interface for selecting a function of an electronic device according to an embodiment of the present invention. Referring to FIG. 9, after the type and the number of the washing objects are selected, a screen interface for inputting information by functions of the electronic device is displayed for the washing objects. The information by functions of the electronic device includes information according to the functions and the external environment of the electronic device. Specifically, the information by functions of the electronic device includes electric charges by time periods 901, a weight of the washing objects 902, weather 903, and a home schedule 904. The electric charges by time periods 901 indicate a time period for using the washing machine with the cheapest electrical costs. The weight of the washing objects 902 indicates a condition in which washing starts when the weight in the washing machine is equal to or greater than a predetermined value. For example, washing may start only when the weight of the washing objects in the washing machine is in the range of 7 kg to 10 kg. The weather 903 indicates the external conditions, i.e., temperature and humidity, which affect drying of the washing objects; the temperature and the humidity are important conditions because the washing objects need to dry after washing. The home schedule 904 includes a schedule of the user using the washing machine. The home schedule can be considered when analyzing operation information of the washing machine to avoid a time period when the user has difficulty managing the washing machine, or a time period when the user is sleeping. Additional functions may be added by selecting an empty, dotted-line icon 905.
FIG. 10 is a diagram illustrating functions of a washing machine according to an embodiment of the present invention. Specifically, FIG. 10 illustrates an example of generating operation information of the washing machine using a function of the washing machine and external environment data. Referring to FIG. 10, an appropriate weight of a washing object for washing by the washing machine is in the range of 7 kg to 10 kg. Reference numeral 1001 indicates a time period when the washing machine is operated. There are no special limitations on the time period 1001; however, if a user of the washing machine optionally selects the time period, it may be a restricting condition. The electric charges 1002 by time periods indicate a fee that differs depending on the time period when the washing machine is used. A washing time for a washing object may be selected with reference to the fee by time periods. The washing machine may continuously check a weight 1003 of the washing object to determine that washing is possible when the weight 1003 exceeds 7 kg. The weight 1003 of the washing object may be changed according to the convenience of the user or a function of the washing machine. The weather 1004 includes an external temperature and humidity. Because the washing object needs to be dried after washing, the external temperature and humidity may be important. When there is a dry function inside the washing machine, the conditions of the weather 1004 need not be considered. A temperature of 29°C and a humidity of 30% may be selected as conditions in which the weather is sufficient to dry the washing object. A home schedule 1005 includes a schedule of a user of the washing machine in order to select an optimum time for running the washer when the user is home. An operation function 1006 includes a function of the washing machine selected with reference to at least one of the electric charges 1002 by time periods or a weight of a washing object. Because the washing object dries well when the weather 1004 is high-temperature/dry, the operation function 1006 indicates that the dehydration time may be reduced by 30% under this condition. Accordingly, in high-temperature/dry weather, the dehydration time among the functions of the washing machine is reduced by 30%, so that the washing time and washing cost may be reduced. An electronic device operation state 1007 indicates a time period satisfying all the conditions. Waiting indicates a state in which the weight of the washing object is insufficient, and reservation may indicate a state in which one or two of the conditions are not satisfied. Washing execution 1008 indicates a time period sufficient to start washing by the washing machine because all conditions are satisfied. The priority order of the conditions is the weight 1003 of the washing object first, the home schedule 1005 second, the weather 1004 third, and the electric charges 1002 by time periods fourth. The conditions may be divided into essential conditions and selection conditions, where an essential condition is a condition that must be satisfied and a selection condition is a condition that may be satisfied. For example, the essential conditions may include the weight of the washing object and the home schedule 1005, and the selection conditions may include the weather 1004 and the electric charges 1002 by time periods. If not all conditions can be satisfied, a time period satisfying only the essential conditions may be selected, and a time period satisfying the essential conditions and having the greatest satisfaction among the selection conditions may be selected, as in the sketch below.
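A minimal sketch of this two-tier selection (Python; the data layout and scoring are illustrative assumptions, and the 29°C/30% thresholds and 7-10 kg range merely echo the example above):

    def pick_wash_slot(slots, load_kg):
        # Essential condition 1: enough laundry (7-10 kg); otherwise the 'waiting' state.
        if not 7 <= load_kg <= 10:
            return None
        # Essential condition 2: an empty user schedule; otherwise the 'reservation' state.
        candidates = [s for s in slots if s["schedule_free"]]
        if not candidates:
            return None
        # Selection conditions, in priority order: drying weather, then cheap electricity.
        def score(slot):
            weather_ok = slot["temp_c"] >= 29 and slot["humidity_pct"] <= 30
            return (weather_ok, -slot["rate_per_kwh"])
        return max(candidates, key=score)

    slots = [
        {"hour": 10, "schedule_free": True, "temp_c": 29, "humidity_pct": 30, "rate_per_kwh": 0.12},
        {"hour": 14, "schedule_free": True, "temp_c": 22, "humidity_pct": 60, "rate_per_kwh": 0.08},
    ]
    print(pick_wash_slot(slots, load_kg=8)["hour"])  # -> 10: weather outranks price

Note how the tuple returned by score encodes the stated priority: the weather condition is compared first, and the electricity rate only breaks ties.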
The function of the electronic device may be acquired through an electronic device, the server 300, or the managing device 100. Further, the external environment data may be acquired through the server 300 or the managing device 100. The function of the electronic device and the external environment data are transmitted to the managing device 100 or the server 300 such that the terminal 200 may receive operation information of the electronic device. Furthermore, the terminal 200 may analyze the function of the electronic device and the external environment data to generate the operation information of the electronic device. Accordingly, a washing machine may analyze internal/external situations through the operation information to select an optimal function and time to be operated. The operation information of the electronic device generated by the terminal 200 may alternatively be generated by the managing device 100, the electronic devices 400, or the server 300. FIG. 11 is a diagram illustrating selected optimal conditions according to an embodiment of the present invention. Referring to FIG. 11, selected optimal conditions 1101 represent the optimal functions of the washing machine. When a change in the optimal conditions 1101 is required, the optimal conditions may be selected and corrected by function. If a button 1102 is selected after the optimal conditions 1101 are completed, the optimal conditions 1101 are transmitted to the managing device 100 or the electronic device 400 such that the managing device 100 or the electronic device may execute a control command. FIG. 12 is a flowchart illustrating a method for controlling a washing machine according to an embodiment of the present invention. Although the terminal 200 is omitted in FIG. 12 for the purpose of simplifying the description, a control signal may be transmitted through the terminal 200. Referring to FIG. 12, in step S1201, a washing machine 401 transmits a current state of the washing machine 401 to the managing device 100. The current state of the washing machine may include a washing function according to a washing object, whether washing is possible, or an amount of washing objects. In step S1202, the managing device 100 analyzes the state of the washing machine and a washing record received from the washing machine 401 to determine whether washing is needed. In step S1203, the managing device 100 requests external information from the server 300. The external information may include a schedule of the user, a weather forecast, or the power fee by time periods. In steps S1204-S1206, the server 300 acquires the external information in response to the request received from the managing device 100. Specifically, the server 300 acquires information of the weather forecast from a site of a meteorological observatory or a site supplying weather by date in step S1204, and acquires data of the power fee by time periods from a power company in step S1205. Further, the server 300 may compute the power fee by time periods based on an accumulated power fee. The server 300 acquires a schedule which the user inputs through the managing device 100 or the terminal 200 in step S1206, and acquires and generates data for controlling a function of the washing machine, i.e., control data, based on the acquired external information in step S1207.
In step S1208, the server 300 transmits the control data to the managing device 100. In step S1209, the managing device 100 determines a washing date, a time, and option functions of the washing machine 401 based on the received control data. In step S1210, the managing device 100 generates a washing schedule of the washing machine based on the determined functions and stores the washing schedule in a memory. In step S1211, the managing device 100 generates washing command data according to the washing schedule and transmits the generated washing command data to the washing machine 401. In step S1212, the washing machine 401 performs washing according to the received washing command data. When the washing is completed, the washing machine 401 transmits a washing completion message to the managing device 100 in step S1213. In step S1214, the managing device 100 transmits the received completion message to the user. FIG. 13 is a diagram illustrating functions of an air conditioner according to environmental conditions according to an embodiment of the present invention. FIG. 13 has substantially the same operation principle as that of FIG. 10, but includes characteristics and functions that are applicable to a different electronic device, i.e., an air conditioner instead of a washing machine. Referring to FIG. 13, the external environment conditions for the air conditioner include electric charges by time periods 1301, weather 1302, and a home schedule 1303; an operation function 1304 indicates the cooling and humidity removal of the air conditioner. The cooling and humidity-removal functions of the operation function 1304 may be controlled in consideration of the temperature and humidity in the weather 1302. For example, when the temperature and the humidity are low, the cooling and the humidity removal may be reduced. Conversely, when the temperature and the humidity are high, the cooling and the humidity removal may be increased. The operation function 1304 may be restricted in consideration of the electric charges by time periods. For example, when the electric charges by time periods 1301 are relatively high, cooling may be reduced even though the temperature is high. Further, when the electric charges by time periods 1301 are relatively high, the humidity removal may be reduced even though the humidity is high. The home schedule 1303 is also analyzed. When the user is located in the space in which the air conditioner is installed, the air conditioner is operated. When the user is not located in the space, the operation function 1304 may be switched from an outing state to a standby power blocking state. When the user returns to the space according to the home schedule 1303 in the outing state, the operation function 1304 of the air conditioner may be increased gradually in small increments. During a peak time period (e.g., 1 P.M.) when electricity use is highest, to reduce the electric charges by time periods 1301, the operating rate of the cooling may be increased to 100% at 12 P.M., before the peak time period, and then reduced to 50% during the peak time period. The electronic device operation state 1305 may include an operation state obtained by integrally analyzing the electric charges by time periods 1301, the weather 1302, and the home schedule 1303. For example, the electronic device operation state 1305 refers to a condition of the weather 1302, to the electric charges by time periods 1301, or to the home schedule 1303; these rules can be combined as in the sketch below.
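A minimal sketch of these air conditioner rules (Python; the duty levels and temperature/humidity thresholds are illustrative assumptions, while the 12 P.M. pre-cooling before the 1 P.M. peak follows the example above):

    def ac_duty(temp_c, humidity_pct, rate_is_high, user_present, hour):
        # Duty is the fraction of maximum cooling/dehumidifying output.
        if not user_present:
            return 0.0                   # outing -> standby power blocking state
        if hour == 12:
            return 1.0                   # pre-cool fully just before the 1 P.M. peak
        duty = 0.5
        if temp_c >= 28 or humidity_pct >= 60:
            duty = 0.8                   # hot or humid -> increase cooling/dehumidifying
        elif temp_c <= 24 and humidity_pct <= 40:
            duty = 0.3                   # mild -> reduce
        if rate_is_high:
            duty = min(duty, 0.5)        # cap output while electricity is expensive
        return duty

    print(ac_duty(31, 70, rate_is_high=True, user_present=True, hour=13))  # -> 0.5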
Accordingly, the function of the air conditioner may be operated according to the external environment conditions. Control data of the operation function 1304 may be received from the managing device 100 or the server 300. A cooling ratio and a humidity-removal ratio of the operation function 1304 may indicate a used ratio relative to a maximum output value. Through FIG. 13, the function of the air conditioner may be efficiently used according to the external environment, and accordingly the cost may be reduced. FIG. 14 is a diagram illustrating functions of a refrigerator according to environmental conditions according to an embodiment of the present invention. FIG. 14 has substantially the same operation principle as that of FIG. 13, but includes characteristics and functions that are applicable to a different electronic device, i.e., a refrigerator instead of an air conditioner. Specifically, FIG. 14 illustrates an operation function of the refrigerator in situations integrally considering an external environment condition and a function of the refrigerator. Referring to FIG. 14, the external environment conditions include electric charges by time periods 1401, an internal temperature 1402, weather 1403, and a home schedule 1404, with general cooling, concentration cooling, and a human mode being an operation function 1405 of the refrigerator. Because the internal temperature 1402 is affected by the weather 1403, the operation function 1405 of the refrigerator may be controlled accordingly. For example, because the internal temperature 1402 may increase if the external temperature is high, the cooling function of the operation function 1405 may be increased. Because the internal temperature 1402 may decrease if the external temperature is low, the cooling function may be reduced. The operation function 1405 may be restricted in consideration of the electric charges by time periods 1401. For example, when the electric charges 1401 by time periods are relatively high, the cooling may be reduced even though the temperature is high. The home schedule 1404 is also analyzed. Because the refrigerator door will not be opened when the user is not home or is sleeping, the operation function 1405 may be switched to a human mode, which reduces the cooling function of the refrigerator while maintaining the internal temperature 1402 in an optimal temperature range. For example, an optimal temperature of the refrigerator may be set to a range of 1-3°C. During a peak time period (e.g., between 1 and 4 P.M.) when electricity use is highest, to reduce the electric charges 1401 by time periods, the operating rate of the cooling may be increased to 100% at 12 P.M., before the peak time period, to perform concentration cooling, and the cooling may be switched to general cooling during the peak time period to reduce the cost. When the peak time period ends, the general cooling may be switched back to concentration cooling to control the internal temperature 1402 of the refrigerator. Cooling in the operation function 1405 may be divided into general cooling, concentration cooling, and a human mode according to the fraction of the maximum output value: the refrigerator may be operated at 50% of maximum output in general cooling, at 100% of maximum output in concentration cooling, and at 10% of maximum output in the human mode, as in the sketch below.
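A minimal sketch of the refrigerator mode selection (Python; the 12 P.M. pre-cooling hour, the 1-4 P.M. peak, and the 1-3°C optimum follow the example above, while the exact decision order is an illustrative assumption):

    def fridge_mode(hour, user_home, internal_temp_c):
        # Returns 'general' (50% output), 'concentration' (100%), or 'human' (10%).
        if internal_temp_c > 3:
            return "concentration"       # beyond the 1-3 degC optimum: cool hard
        if not user_home:
            return "human"               # door stays closed -> minimal cooling suffices
        if hour == 12:
            return "concentration"       # pre-cool before the 1-4 P.M. peak
        if 13 <= hour < 16:
            return "general"             # ride out the expensive peak at 50% output
        return "concentration" if internal_temp_c > 2 else "general"  # restore after the peak

    print(fridge_mode(hour=14, user_home=True, internal_temp_c=2.5))  # -> general (peak period)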
The electronic device operation state 1406 may include a state in which the function of the refrigerator is operated, obtained by integrally analyzing the electric charges by time periods 1401, the internal temperature 1402, the weather 1403, and the home schedule 1404. When the operation function 1405 is in the human mode, the electronic device operation state 1406 may be switched to a stop state in the middle of the human mode. The internal temperature 1402 is analyzed, and when the analyzed internal temperature 1402 is beyond the optimal temperature range, the stop state may be switched to the operation state. For example, the electronic device operation state 1406 refers to a condition of the internal temperature, to the home schedule 1404, to the electric charges by time periods 1401, and to the weather 1403. Accordingly, the function of the refrigerator may be operated according to the external environment conditions. Control data of the operation function 1405 may be received from the managing device 100 or the server 300. Through FIG. 14, the function of the refrigerator may be efficiently used according to the external environment, and accordingly the cost of operating the refrigerator may be reduced. FIG. 15 is a diagram illustrating functions of a TV according to environmental conditions according to an embodiment of the present invention. FIG. 15 has substantially the same operation principle as that of FIG. 14, but includes characteristics and functions that are applicable to a different electronic device, i.e., a TV instead of a refrigerator. Referring to FIG. 15, the external environment conditions include electric charges by time periods 1501, weather 1502, and a home schedule 1503, with a general mode, a power saving mode, and a standby power blocking state being an operation function 1504 of the TV. The general mode indicates a condition that enables a user to view the TV without controlling the function of the TV. The power saving mode indicates a condition that partially controls the function of the TV to reduce the electric charges. The standby power blocking state indicates a state that blocks power to the TV, for when the user is not viewing the TV. The screen brightness of the TV among the operation function 1504 may be controlled according to the weather 1502. For example, the screen brightness of the TV may be controlled to be low during bright weather 1502, and high during dark weather 1502. The operation function 1504 may be restricted in consideration of the electric charges 1501 by time periods. For example, during a time period when the electric charges 1501 by time periods are relatively high, the screen brightness of the TV is controlled by switching the mode from the general mode to the power saving mode. The home schedule 1503 is also analyzed. When the user of the TV is not home or is sleeping, the operation function 1504 may be switched to the standby power blocking state. During a peak time period (e.g., 1 P.M.) when electricity use is highest, the screen brightness of the TV may be controlled to reduce the electric charges 1501 by time periods. The electronic device operation state 1505 may include a state in which the function of the TV is operated, obtained by integrally analyzing the electric charges 1501 by time periods, the weather 1502, and the home schedule 1503. The electronic device operation state 1505 refers to a condition of the home schedule, the electric charges 1501 by time periods, and the weather 1502, combined as in the sketch below.
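A minimal sketch combining the three TV rules (Python; the brightness fractions are illustrative assumptions, and the dim-in-bright-weather rule mirrors the text above):

    def tv_state(user_watching, rate_is_high, weather_is_bright):
        # Returns (mode, screen brightness as a fraction of maximum).
        if not user_watching:
            return "standby_power_blocking", 0.0     # user away or sleeping
        brightness = 0.4 if weather_is_bright else 0.8  # low in bright weather, high in dark
        if rate_is_high:
            return "power_saving", min(brightness, 0.5)  # trim brightness during peak rates
        return "general", brightness

    print(tv_state(True, rate_is_high=True, weather_is_bright=False))  # -> ('power_saving', 0.5)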
Accordingly, the function of the TV may be operated according to the external environment conditions. Control data of the operation function 1504 may be received from the managing device 100 or the server 300. Through FIG. 15, the function of the TV may be efficiently used according to the external environment, and accordingly the cost may be reduced. FIG. 16 is a flowchart illustrating a method for controlling an electronic device according to an embodiment of the present invention. Referring to FIG. 16, in step S1601, the managing device 100 receives information of an electronic device from the terminal 200 or receives the information of the electronic device from the electronic devices 400. The managing device 100 may also receive a search word used in the electronic device from the terminal 200, or receive the search word from the electronic devices 400. The managing device 100 generates operation information of the electronic device based on the information of the electronic device and the search word. In this case, the managing device 100 may receive and refer to external environment data. In step S1602, the managing device 100 sends the information of the electronic device and the search word to the server 300. When a plurality of search words are received in step S1601, the managing device 100 may determine a priority with respect to the search words. In step S1603, the server 300 analyzes the information of the electronic device and the search word. In this case, the server 300 may receive and refer to the external environment data. In step S1603, the server 300 generates operation information of the electronic device by integrally analyzing the information of the electronic device, the search word, and the external environment data. Additionally, the server 300 may integrally analyze the information of the electronic device, the search word, and the external environment data to acquire the operation information from a web server. In step S1604, the server 300 transmits the operation information of the electronic device to at least one of the managing device 100, the terminal 200, and the electronic devices 400. Specifically, the managing device 100 generates the operation information of the electronic device and may transmit the operation information of the electronic device to the terminal 200 to control the electronic device. Further, the managing device 100 may directly transmit the operation information of the electronic device to an electronic device suited to the operation information to control the electronic device. In step S1605, the managing device 100 transmits the operation information of the electronic device to the terminal 200 to control the electronic device using the terminal 200. Additionally, the managing device 100 may transmit the operation information of the electronic device to an electronic device 400 suited to the operation information to control the electronic device. The above-described embodiments of the present invention provide a system and method for providing convenient use of an electronic device to a user and reducing time and cost by analyzing a state of the electronic device and external information (e.g., weather, schedule, electricity rate, etc.) to select optimal functions and an operation time, such that the electronic device operates by itself. Additionally, the above-described embodiments of the present invention permit the user to efficiently use an electronic device in consideration of a state of the electronic device and external information.
Furthermore, the above-described embodiments of the present invention provide convenient use of the electronic device and reduce the use time and cost of the electronic device through this efficient use.
Purpose
The debate on evidence-based management (EBMgt) has reached an impasse. The persistence of meaningful critiques highlights challenges embedded in the current frameworks. The field needs to consider new conceptual paths that appreciate these critiques, but move beyond them. The paper aims to discuss this issue.
Design/methodology/approach
This paper unpacks the concept of finding the "best available evidence," which remains a central notion across definitions of EBMgt. For each element, it considers relevant theory and offers recommendations, concluding with a discussion of "bestness" as interpreted across three key dynamics: rank, fit, and variety.
Findings
The paper reinforces that EBMgt is a social technology, and draws on cybernetic theory to argue that the "best" evidence is produced not by rank or fit, but by variety. Through variety, EBMgt more readily captures the contextual, political, and relational aspects embedded in management decision making.
Research limitations/implications
While systematic reviews and empirical barriers remain important, more rigorous research evidence and larger catalogues of contingency factors are themselves insufficient to solve underlying sociopolitical concerns. Likewise, current critiques could benefit from theoretical bridges that not only reinforce learning and sensemaking in real organizations, but also build on the spirit of the project and progress made towards better managerial decision making.
Originality/value
The distinctive contribution of this paper is to offer a new lens on EBMgt drawing from cybernetic theory and science and technology studies. By proposing the theoretical frame of variety, it offers potential to resolve the impasse between those for and against EBMgt.
Citation
Martelli, P.F. and Hayirli, T.C. (2018), "Three perspectives on evidence-based management: rank, fit, variety", Management Decision, Vol. 56 No. 10, pp. 2085-2100. https://doi.org/10.1108/MD-09-2017-0920
https://www.emerald.com/insight/content/doi/10.1108/MD-09-2017-0920/full/html
ECLAC foresees a 2022 growth rate around one third of that of 2021. ECLAC recommended consolidating the income tax on individuals and corporations, extending the scope of taxes on wealth and property, and adding new taxes on the digital economy. Growth in Latin America and the Caribbean will slow to 2.1% in 2022, after a 6.2% average increase last year, the Economic Commission for Latin America and the Caribbean (ECLAC) forecast Wednesday in a study. The new projections, whereby combined progress this year will average roughly one third of the figure recorded in 2021, come amid structural problems of low investment and productivity, poverty and inequality, which require growth to be a central element of public policies, the report said. ECLAC's annual report, Preliminary Balance of the Economies of Latin America and the Caribbean 2021, released Wednesday in Mexico City, also highlighted significant asymmetries between developed, emerging and developing countries in the ability to implement fiscal, social, monetary, health and vaccination policies for a sustainable recovery from the crisis unleashed by the COVID-19 pandemic. "The expected slowdown in the region in 2022, together with the structural problems of low investment and productivity, poverty and inequality, require that strengthening growth be a central element of policies, while addressing inflationary pressures and macro-financial risks," ECLAC's executive secretary Alicia Bárcena said during the virtual launch of the document. According to the report, the region faces a very complex 2022: persistence of and uncertainty about the evolution of the pandemic; a sharp slowdown in growth; low investment and productivity and a slow recovery of employment; persistence of the social effects caused by the crisis; less fiscal space; increasing inflationary pressures; and financial imbalances. The expected average growth of 2.1% thus reflects high heterogeneity between countries and subregions: the Caribbean will grow 6.1% (excluding Guyana), Central America will grow 4.5%, while South America will grow 1.4%. Meanwhile, in 2021 the region showed higher-than-expected growth, averaging 6.2%, thanks to a poor overall performance in 2020 which provided a favorable basis for comparison. According to the Preliminary Balance 2021, estimates show that advanced economies would grow 4.2% in 2022 and would be the only ones to return this year to the growth path forecast before the pandemic. Emerging economies, for their part, would grow 5.1% in 2022 but would only return to a prepandemic growth level by 2025. In 2021, 11 countries in Latin America and the Caribbean managed to recover their pre-crisis GDP levels. In 2022 another three would be added, bringing the total to 14 of the 33 countries that make up the region, the report said. ECLAC also underlined the importance of combining monetary and fiscal policies to prioritize growth stimuli while containing inflation. This requires coordinated macroeconomic policies and the use of all available instruments to adequately prioritize the challenges of growth with monetary-financial stability. Regarding employment, 30% of the jobs lost in 2020 had not yet been recovered by the end of 2021, while inequality between men and women was accentuated, the study said.
For 2022, ECLAC projects an unemployment rate of 11.5% for women (slightly lower than the 11.8% recorded in 2021, but still much higher than the 9.5% seen before the pandemic in 2019), while unemployment for men would stand at 8.0% this year, almost the same as in 2021 (8.1%), but still well above the 6.8% recorded in 2019. The report also addresses the rise in prices of products and services. In 2021, inflationary pressures were registered in most countries, driven by increases in food and energy prices (inflation reached 7.1% on average in November, excluding Argentina, Haiti, Suriname and Venezuela), and the problem is expected to persist into 2022, with central banks already anticipating that their inflation targets will in all likelihood not be met. The United Nations agency also underscored that it was crucial to increase collection levels and improve tax structures to give fiscal sustainability to a growing trajectory of spending demands, while tax evasion, which has reached US$325 billion (6.1% of the region's GDP), needed to be tackled. ECLAC recommended consolidating the income tax on individuals and corporations, extending the scope of taxes on wealth and property, and adding new taxes on the digital economy. "It is necessary to expand and redistribute liquidity from developed to developing countries; strengthen development banks; reform the architecture of international debt; provide countries with a set of innovative instruments aimed at increasing debt repayment capacity and avoiding excessive indebtedness; and integrate liquidity and debt reduction measures into a resilience strategy aimed at building a better future," the document concluded.
This paper challenges the popular argument that sport is an effective channel for upward mobility, especially for ethnic minorities. My study of retired professional soccer players in Israel establishes the following findings: First, members of the subordinate ethnic group are disadvantaged in attainment of status not only in schools and labor markets but also in and via sport. Second, a professional career in sport does not intervene between background variables and later occupational attainment. Third, ethnicity and educational level are the most significant determinants of postretirement occupational attainment; higher education and higher ethnic status improve opportunities for later occupational success. On the basis of these findings it is suggested that the same rules of inequality that push individuals to seek alternative routes of mobility, such as professional sport, continue to operate in and beyond sport. Paper presented at the American Sociological Association meeting, San Antonio, Texas, August 1984. Work on this paper was made possible by a University of Haifa research grant. The author wishes to acknowledge Nicholas Babchuk, James C. Creech, Harry Crockett, Bernard Lazerwitz, Hugh P. Whitt, and his students in the sport seminar for help and advice.
https://journals.humankinetics.com/abstract/journals/ssj/1/4/article-p358.xml
1. Field of the Invention
The present invention relates generally to a combined cycle power plant, and more specifically to a power plant configuration for efficient combined heat and power production in a small size.
2. Description of the Related Art Including Information Disclosed Under 37 CFR 1.97 and 1.98
A power plant is used to produce electricity for use in the surrounding area or for transmission to faraway areas where demand is high and local production is low. Electricity-producing power plants include nuclear plants, coal-burning plants, and natural gas-burning plants. Coal-burning power plants are not desirable because of the pollutants discharged in their exhaust. Natural gas-burning power plants are favorable because they are cleaner than coal-burning plants. The design of an electricity-producing power plant is directed to producing electrical power as efficiently as possible. Thus, the most highly efficient power plants tend to be very large power plants that are a permanent fixture in an area. Because of their very large size, these power plants can produce enough electrical energy to be distributed over very large areas. The idea of using waste heat for increased steam generation in industry has been around for many years. The progressive increase in fuel costs, the need to capture heat from various industrial processes, and increasingly stringent environmental regulations have created the need to use waste heat to its fullest potential. In the power industry, the waste heat from one power system such as a gas turbine engine can serve as the heat source for a steam turbine cycle. Such a system is referred to as a combined cycle and can reach an overall electrical cycle efficiency of nearly 60%. A combined cycle power plant integrates two or more thermodynamic power cycles to more fully and efficiently convert input energy to work or power. With the advancements in reliability and availability of gas turbine engines, the term combined cycle power plant usually refers to a system that includes a gas turbine engine, a heat recovery steam generator (HRSG) and a steam turbine. Thermodynamically, this implies the joining of a high temperature Brayton gas turbine engine cycle with a moderate and low temperature Rankine cycle, where the waste heat from the Brayton cycle exhaust is used as the heat input to the Rankine cycle. Where the heat recovery steam generator supplies at least part of the steam for a process, the application can be referred to as cogeneration. A simple combined cycle power plant includes a single gas turbine engine with an electric generator, a heat recovery steam generator (HRSG), a single steam turbine and electric generator, and a condenser and auxiliary systems. FIG. 1 shows a prior art combined cycle power plant with a gas turbine engine, a HRSG and a steam turbine. The FIG. 1 power plant includes a gas turbine engine with a compressor 12, a combustor 13 and a turbine 14 that drives a first electric generator 11, where the turbine exhaust is delivered to a HRSG 15 that includes a stack 22 for discharge of the exhaust, and a steam turbine 16 that drives a second electric generator 17. The HRSG includes a condenser 18, a condensate pump 19, a de-aerator 20 and a boiler feed water pump 21. The modern 250 MW natural gas fired combined cycle power plant is the most economic option for new electric power production.
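As a rough consistency check on the "nearly 60%" figure (the split between the two cycles below is an illustrative assumption, not data from this document): if the Brayton topping cycle converts a fraction η_B of the fuel energy and the Rankine bottoming cycle recovers a fraction η_R of the remaining exhaust heat, the combined efficiency is

    η_cc = η_B + (1 - η_B) · η_R

For example, η_B = 0.38 and η_R = 0.33 give η_cc = 0.38 + 0.62 × 0.33 ≈ 0.58, consistent with the roughly 58% plant efficiency cited below.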
The power plant efficiency is around 58% for electricity at the power plant, the natural gas fired plant produces less than 50% of the CO2 of a comparable coal fired power plant, and the power plant is very reliable, with power produced at any time and with 50,000-plus hours of component life. However, some major disadvantages exist in this type of power plant. Of the fuel energy input, 32% is wasted heating rivers or the atmosphere through the steam condenser cooling. The condenser cooling heat from the power plant cannot be reused because the plant is located far from any potential users of the heat. For example, the heat could be used to heat a building, but this is not feasible because the heat would cool to atmospheric temperature over the long distance carried from source to end user. Also, more than 7% of the electricity produced is wasted in transmission line losses. Because these plants are so large, they produce a large amount of electricity, which requires a large electrical grid and long power lines to transmit the power to relatively faraway locations. Thus, electrical energy is lost through the long transmission lines.
Second graders take pride in buddy reading with our first grade neighbors. They eagerly shared with grade 1 this month's edition of National Geographic's Explorer magazine. The cover story happens to be about parrots, which grade 2 recalled is a point of interest within the first grade rainforest unit. It's a wonderful opportunity to practice nonfiction literacy skills and connect over a subject area that both grades enjoy. In our Informational Writing unit, fifth graders selected inspiring change-makers to research, report on, and embody. Their historical impact essays were not simply biographies. Instead, our writers explored the societal conditions of their figure's life, and how those experiences influenced them to act and make a change. The students considered the long-lasting legacies these inspirational people have left on our world. To cap off this unit, the fifth graders created a "wax museum" filled with these influential people from history and today. Our writers dressed up as their selected figures and "came to life" as student and parent tour groups visited their exhibits. This simulation truly served to bring their historical impact essays to life; it was a really special and empowering morning! The Memory Project is a charitable nonprofit organization that invites art teachers, art students, and solo artists to help cultivate global kindness by creating portraits for children around the world who have faced substantial challenges, such as violence, war, extreme poverty, neglect, and loss of parents. Participants create these portraits to help children feel valued and important, to know that many people care about their well-being, and to provide a special childhood memory for the future. Since 2004, more than 130,000 portraits have been created for children in 47 countries. This year, Unquowa 8th graders painted portraits of Rohingya children. The Rohingya ethnic minority has been called the "most unwanted" group of people on Earth. Nearly a million fled genocide in Myanmar last year and are currently sheltering in a huge refugee settlement in Bangladesh. Most of these families have little more than a few cooking pans and a handful of clothes. For these children, who have rarely seen photos of themselves, the portraits are gifts they could never have previously imagined. Our students, along with almost 4,000 other artists worldwide, are helping to show these children they are not "unwanted" in our shared humanity. The video you are about to watch shows the joyful reactions of the children as they received their portraits.
https://unquowa.org/around-campus/
BACKGROUND
This disclosure is directed to compositions for components for the interior or exterior of trains, and in particular to weatherable compositions. The harmonized fire standard for rail applications, namely EN-45545, imposes stringent requirements on the heat release, smoke density, toxicity, and flame spread properties allowed for materials used in rail applications in the European Union. As set forth in the requirements of EN-45545, "Hazard Levels" (HL1 to HL3) have been designated, reflecting the degree of probability of personal injury as the result of a fire. The levels are based on dwell time and are related to operation and design categories. HL1 is the lowest hazard level and is typically applicable to vehicles that run under relatively safe conditions (easy evacuation of the vehicle). HL3 is the highest hazard level and represents the most dangerous operation/design categories (difficult and/or time-consuming evacuation of the vehicle, e.g., in underground rail cars). EN-45545 classifies products into 26 requirement sets (R1-R26); R4 includes lighting products. For each product type, different test requirements for the hazard levels are defined. The testing methods and the critical heat flux at extinguishment (CFE, spread of flame), flame spread, flaming droplets, and toxicity limits for the various hazard levels in the European Railway standard EN-45545 (2013) are shown in Table 1 for R4 applications.

Table 1. European Railways Standard EN 45545 for R4 applications
Hazard Level | Critical heat flux at extinguishment (CFE), 50 kW/m², ISO 5658-2 | Flame spread, ISO 11925-2 | Flaming droplets, ISO 11925-2 | Conventional index of toxicity (CIT), 50 kW/m², at 4 and 8 minutes, ISO 5659-2
HL1 | 13 | 150 | 0 | 1.2
HL2 | 13 | 150 | 0 | 0.9
HL3 | 13 | 150 | 0 | 0.75

Polycarbonates are useful in the manufacture of articles and components for a wide range of applications, from automotive parts to electronic appliances. Because of their broad use, particularly in railway lighting applications, it is desirable to provide polycarbonate compositions with properties that meet or exceed the requirements set forth under EN-45545. However, it is particularly challenging to manufacture articles that meet these standards and that have a high transmission ratio, chemical resistance, and weatherability. Accordingly, there remains a need for polycarbonate compositions that have a combination of low spread of flame (CFE), low toxicity, and low flame spread properties. It would be a further advantage if the polycarbonate compositions had a high transmission ratio, chemical resistance, and good weatherability.
SUMMARY
The above-described and other deficiencies of the art are met by a polycarbonate composition comprising: 1-65 wt% of a homopolycarbonate; 35-99 wt% of a poly(carbonate-co-arylate ester) comprising 60-90 mol% of bisphenol A carbonate units, 10-30 mol% of isophthalic acid-terephthalic acid-resorcinol ester units, and 1-20 mol% of resorcinol carbonate units; an organophosphorus flame retardant in an amount effective to provide up to 1.5 wt% phosphorus; and optionally, up to 5 wt% of an additive composition, wherein each amount is based on the total weight of the polycarbonate composition, which sums to 100 wt%. In another aspect, a method of manufacture comprises combining the above-described components to form a polycarbonate composition. In yet another aspect, an article comprises the above-described polycarbonate composition.
In still another aspect, a method of manufacture of an article comprises molding, extruding, or shaping the above-described polycarbonate composition into an article. The above-described and other features are exemplified by the following detailed description, examples, and claims.

DETAILED DESCRIPTION

The inventors hereof have discovered polycarbonate compositions useful for rail lighting applications that have an improved critical heat flux at extinguishment (CFE), low toxicity, and low flame spread, while also having improved optical properties (e.g., a high transmission ratio) and weatherability, such that the yellowness shift of molded articles of the compositions is minimized or eliminated when exposed to UV light over time. It is exceptionally challenging to manufacture railway lighting products that meet stringent CFE, flame spread, and toxicity standards in addition to other material requirements while also providing the desired optical properties and weatherability. Advantageously, the inventors have discovered that compositions including a poly(carbonate-co-arylate ester), a homopolycarbonate, and an organophosphorous flame retardant provide the desired CFE, flame spread, and toxicity characteristics while also providing good optical properties and weatherability. This was a surprising and unexpected discovery because developing materials that robustly meet the EN-45545 requirements for R4 while providing the desired optical and weatherability properties has proven challenging in the past when using flame retardants that are not organophosphorous flame retardants. For example, conventional compositions that incorporate a flame retardant salt such as Rimar salt may provide the desired flame spread property, but at the expense of the optical and weatherability properties. However, the inventors discovered that the incorporation of an organophosphorous flame retardant resulted in robust CFE, flame spread, and toxicity values, thus meeting the R4 requirements while allowing good retention of optical properties and weatherability.

In a particularly advantageous feature, the polycarbonate compositions may have a critical heat flux at extinguishment of greater than 13 kW/m², preferably greater than 20 kW/m², as measured in accordance with ISO 5658-2 on a 3 to 4 mm thick plaque at 50 kW/m²; and a conventional index of toxicity value of less than 0.75 in accordance with ISO 5659-2 on a 3 to 4 mm thick plaque at 50 kW/m², while having good retention of optical properties and weatherability.

The polycarbonate compositions include a homopolycarbonate, a poly(carbonate-co-arylate ester), and an organophosphorous flame retardant. The individual components of the polycarbonate compositions are described in detail below.

"Polycarbonate" as used herein means a polymer having repeating structural carbonate units of formula (1) in which at least 60 percent of the total number of R1 groups contain aromatic moieties and the balance thereof are aliphatic, alicyclic, or aromatic. In an aspect, each R1 is a C6-30 aromatic group, that is, contains at least one aromatic moiety. R1 may be derived from an aromatic dihydroxy compound of the formula HO-R1-OH, in particular of formula (2): HO-A1-Y1-A2-OH (2), wherein each of A1 and A2 is a monocyclic divalent aromatic group and Y1 is a single bond or a bridging group having one or more atoms that separate A1 from A2. In an aspect, one atom separates A1 from A2.
Preferably, each R1 may be derived from a bisphenol of formula (3), wherein Ra and Rb are each independently a halogen, C1-12 alkoxy, or C1-12 alkyl, and p and q are each independently integers of 0 to 4. It will be understood that when p or q is less than 4, the valence of each carbon of the ring is filled by hydrogen. Also in formula (3), Xa is a bridging group connecting the two hydroxy-substituted aromatic groups, where the bridging group and the hydroxy substituent of each C6 arylene group are disposed ortho, meta, or para (preferably para) to each other on the C6 arylene group. In an aspect, the bridging group Xa is a single bond, -O-, -S-, -S(O)-, -S(O)2-, -C(O)-, or a C1-60 organic group. The C1-60 organic bridging group may be cyclic or acyclic, aromatic or non-aromatic, and may further comprise heteroatoms such as halogens, oxygen, nitrogen, sulfur, silicon, or phosphorous. The C1-60 organic group may be disposed such that the C6 arylene groups connected thereto are each connected to a common alkylidene carbon or to different carbons of the C1-60 organic bridging group. In an aspect, p and q are each 1, and Ra and Rb are each a C1-3 alkyl group, preferably methyl, disposed meta to the hydroxy group on each arylene group.

Other useful dihydroxy compounds of the formula HO-R1-OH include aromatic dihydroxy compounds of formula (6), wherein each Rh is independently a halogen atom, a C1-10 hydrocarbyl group such as a C1-10 alkyl, a halogen-substituted C1-10 alkyl, a C6-10 aryl, or a halogen-substituted C6-10 aryl, and n is 0 to 4. The halogen is usually bromine. Some illustrative examples of specific dihydroxy compounds include the following: 4,4'-dihydroxybiphenyl, 1,6-dihydroxynaphthalene, 2,6-dihydroxynaphthalene, bis(4-hydroxyphenyl)methane, bis(4-hydroxyphenyl)diphenylmethane, bis(4-hydroxyphenyl)-1-naphthylmethane, 1,2-bis(4-hydroxyphenyl)ethane, 1,1-bis(4-hydroxyphenyl)-1-phenylethane, 2-(4-hydroxyphenyl)-2-(3-hydroxyphenyl)propane, bis(4-hydroxyphenyl)phenylmethane, 2,2-bis(4-hydroxy-3-bromophenyl)propane, 1,1-bis(hydroxyphenyl)cyclopentane, 1,1-bis(4-hydroxyphenyl)cyclohexane, 1,1-bis(4-hydroxyphenyl)isobutene, 1,1-bis(4-hydroxyphenyl)cyclododecane, trans-2,3-bis(4-hydroxyphenyl)-2-butene, 2,2-bis(4-hydroxyphenyl)adamantane, alpha,alpha'-bis(4-hydroxyphenyl)toluene, bis(4-hydroxyphenyl)acetonitrile, 2,2-bis(3-methyl-4-hydroxyphenyl)propane, 2,2-bis(3-ethyl-4-hydroxyphenyl)propane, 2,2-bis(3-n-propyl-4-hydroxyphenyl)propane, 2,2-bis(3-isopropyl-4-hydroxyphenyl)propane, 2,2-bis(3-sec-butyl-4-hydroxyphenyl)propane, 2,2-bis(3-t-butyl-4-hydroxyphenyl)propane, 2,2-bis(3-cyclohexyl-4-hydroxyphenyl)propane, 2,2-bis(3-allyl-4-hydroxyphenyl)propane, 2,2-bis(3-methoxy-4-hydroxyphenyl)propane, 2,2-bis(4-hydroxyphenyl)hexafluoropropane, 1,1-dichloro-2,2-bis(4-hydroxyphenyl)ethylene, 1,1-dibromo-2,2-bis(4-hydroxyphenyl)ethylene, 1,1-dichloro-2,2-bis(5-phenoxy-4-hydroxyphenyl)ethylene, 4,4'-dihydroxybenzophenone, 3,3-bis(4-hydroxyphenyl)-2-butanone, 1,6-bis(4-hydroxyphenyl)-1,6-hexanedione, ethylene glycol bis(4-hydroxyphenyl)ether, bis(4-hydroxyphenyl)ether, bis(4-hydroxyphenyl)sulfide, bis(4-hydroxyphenyl)sulfoxide, bis(4-hydroxyphenyl)sulfone, 9,9-bis(4-hydroxyphenyl)fluorene, 2,7-dihydroxypyrene, 6,6'-dihydroxy-3,3,3',3'-tetramethylspiro(bis)indane ("spirobiindane bisphenol"), 3,3-bis(4-hydroxyphenyl)phthalimide, 2,6-dihydroxydibenzo-p-dioxin, 2,6-dihydroxythianthrene, 2,7-dihydroxyphenoxathin, 2,7-dihydroxy-9,10-dimethylphenazine, 3,6-dihydroxydibenzofuran, 3,6-dihydroxydibenzothiophene, and 2,7-dihydroxycarbazole,
resorcinol, substituted resorcinol compounds such as 5-methyl resorcinol, 5-ethyl resorcinol, 5-propyl resorcinol, 5-butyl resorcinol, 5-t-butyl resorcinol, 5-phenyl resorcinol, 5-cumyl resorcinol, 2,4,5,6-tetrafluoro resorcinol, 2,4,5,6-tetrabromo resorcinol, or the like; catechol; hydroquinone; substituted hydroquinones such as 2-methyl hydroquinone, 2-ethyl hydroquinone, 2-propyl hydroquinone, 2-butyl hydroquinone, 2-t-butyl hydroquinone, 2-phenyl hydroquinone, 2-cumyl hydroquinone, 2,3,5,6-tetramethyl hydroquinone, 2,3,5,6-tetra-t-butyl hydroquinone, 2,3,5,6-tetrafluoro hydroquinone, 2,3,5,6-tetrabromo hydroquinone, or the like, or a combination thereof.

The polycarbonates may have an intrinsic viscosity, as determined in chloroform at 25°C, of 0.3 to 1.5 deciliters per gram (dl/g), preferably 0.45 to 1.0 dl/g. The polycarbonates may have a weight average molecular weight (Mw) of 10,000 to 200,000 g/mol, preferably 20,000 to 100,000 g/mol, as measured by gel permeation chromatography (GPC) using a crosslinked styrene-divinylbenzene column, using polystyrene standards and calculated for polycarbonate. GPC samples are prepared at a concentration of 1 mg per ml and are eluted at a flow rate of 1.5 ml per minute.

The polycarbonate compositions may include a homopolycarbonate (wherein each R1 in the polymer is the same). In an aspect, the homopolycarbonate in the polycarbonate composition is derived from a bisphenol of formula (2), preferably bisphenol A, in which each of A1 and A2 is p-phenylene and Y1 is isopropylidene in formula (2). In some aspects, the polycarbonate is a bisphenol A homopolycarbonate. The bisphenol A homopolycarbonate may have a melt flow rate of 3-50 g per 10 min at 300°C and a 1.2 kg load, and a Mw of 17,000-40,000 g/mol, preferably 20,000-30,000 g/mol, more preferably 21,000-23,000 g/mol, each measured as described above. In some aspects, the polycarbonate comprises a linear bisphenol A homopolycarbonate. In some aspects, the polycarbonate comprises a linear bisphenol A polycarbonate homopolymer having a weight average molecular weight of 26,000 to 40,000 grams per mole, preferably 27,000 to 35,000 grams per mole, as determined by gel permeation chromatography using polystyrene standards and calculated for polycarbonate; or a linear bisphenol A polycarbonate homopolymer having a weight average molecular weight of 15,000 to 25,000 grams per mole, preferably 17,000 to 25,000 grams per mole, as determined by gel permeation chromatography using polystyrene standards and calculated for polycarbonate; or a combination thereof. The homopolycarbonate may be present, for example, at 1-65 wt%, 20-65 wt%, 30-65 wt%, or 40-65 wt%, each based on the total weight of the polycarbonate composition.

"Polycarbonates" include homopolycarbonates (wherein each R1 in the polymer is the same), copolymers comprising different R1 moieties in the carbonate ("copolycarbonates"), and copolymers comprising carbonate units and other types of polymer units, such as ester units or siloxane units. The polycarbonate compositions include an aromatic poly(ester-carbonate).
Such polycarbonates further contain, in addition to recurring carbonate units of formula (1), repeating ester units of formula (3), wherein J is a divalent group derived from an aromatic dihydroxy compound (including a reactive derivative thereof), such as a bisphenol of formula (2), e.g., bisphenol A; and T is a divalent group derived from an aromatic dicarboxylic acid (including a reactive derivative thereof), preferably isophthalic or terephthalic acid, wherein the weight ratio of isophthalic acid to terephthalic acid is 91:9 to 2:98. Copolyesters containing a combination of different T or J groups may be used. The polyester units may be branched or linear.

In an aspect, J is derived from a bisphenol of formula (2), e.g., bisphenol A. In another aspect, J is derived from an aromatic dihydroxy compound, e.g., resorcinol. A portion of the groups J, for example up to 20 mole percent (mol%), may be a C2-30 alkylene group having a straight chain, branched chain, or cyclic (including polycyclic) structure, for example ethylene, n-propylene, i-propylene, 1,4-butylene, 1,4-cyclohexylene, or 1,4-methylenecyclohexane. Preferably, all J groups are aromatic.

Aromatic dicarboxylic acids that may be used to prepare the polyester units include isophthalic or terephthalic acid, 1,2-di(p-carboxyphenyl)ethane, 4,4'-dicarboxydiphenyl ether, 4,4'-bisbenzoic acid, or a combination thereof. Acids containing fused rings may also be present, such as 1,4-, 1,5-, or 2,6-naphthalenedicarboxylic acids. Specific dicarboxylic acids include terephthalic acid, isophthalic acid, naphthalene dicarboxylic acid, or a combination thereof. A specific dicarboxylic acid comprises a combination of isophthalic acid and terephthalic acid wherein the weight ratio of isophthalic acid to terephthalic acid is 91:9 to 2:98. A portion of the groups T, for example up to 20 mol%, may be aliphatic, for example derived from 1,4-cyclohexane dicarboxylic acid. Preferably all T groups are aromatic.

The molar ratio of ester units to carbonate units in the polycarbonates may vary broadly, for example 1:99 to 99:1, preferably 10:90 to 90:10, more preferably 25:75 to 75:25, or 2:98 to 15:85, depending on the desired properties of the final composition.

Specific poly(ester-carbonate)s that may be included in the polycarbonate compositions are those comprising bisphenol A carbonate units and isophthalate/terephthalate-bisphenol A ester units, i.e., a poly(bisphenol A carbonate)-co-(bisphenol A phthalate ester) of formula (4a), wherein x and y represent the weight percent of bisphenol A carbonate units and isophthalate/terephthalate-bisphenol A ester units, respectively. Generally, the units are present as blocks. In an aspect, the weight ratio of carbonate units x to ester units y in the polycarbonates is 1:99 to 50:50, or 5:95 to 25:75, or 10:90 to 45:55. Copolymers of formula (4a) comprising 35-45 wt% of carbonate units and 55-65 wt% of ester units, wherein the ester units have a molar ratio of isophthalate to terephthalate of 45:55 to 55:45, are often referred to as poly(carbonate-ester)s. Copolymers comprising 15-25 wt% of carbonate units and 75-85 wt% of ester units, wherein the ester units have a molar ratio of isophthalate to terephthalate of 98:2 to 88:12, are often referred to as poly(phthalate-carbonate)s and may optionally be present in addition to the polycarbonates of the polycarbonate compositions.
In another aspect, the high heat poly(ester-carbonate) is a poly(carbonate-co-monoarylate ester) of formula (4b) that includes aromatic carbonate units (1) and repeating monoarylate ester units, wherein R1 is as defined in formula (1), and each Rh is independently a halogen atom, a C1-10 hydrocarbyl such as a C1-10 alkyl group, a halogen-substituted C1-10 alkyl group, a C6-10 aryl group, or a halogen-substituted C6-10 aryl group, and n is 0-4. Preferably, each Rh is independently a C1-4 alkyl, and n is 0-3, 0-1, or 0. The mole ratio of carbonate units x to ester units z may be from 99:1 to 1:99, or from 98:2 to 2:98, or from 90:10 to 10:90. In an aspect the mole ratio of x:z is from 50:50 to 99:1, or from 1:99 to 50:50.

In an aspect, the high heat poly(ester-carbonate) comprises aromatic ester units and monoarylate ester units derived from the reaction of a combination of isophthalic and terephthalic diacids (or a reactive derivative thereof) with resorcinol (or a reactive derivative thereof) to provide isophthalate/terephthalate-resorcinol ("ITR") ester units. The ITR ester units may be present in the high heat poly(ester-carbonate) in an amount greater than or equal to 95 mol%, preferably greater than or equal to 99 mol%, and still more preferably greater than or equal to 99.5 mol%, based on the total moles of ester units in the polycarbonate. A preferred high heat poly(ester-carbonate) comprises bisphenol A carbonate units and ITR ester units derived from terephthalic acid, isophthalic acid, and resorcinol, i.e., a poly(bisphenol A carbonate-co-isophthalate/terephthalate-resorcinol ester) of formula (4c), wherein the mole ratio of x:z is from 98:2 to 2:98, or from 90:10 to 10:90. In an aspect the mole ratio of x:z is from 50:50 to 99:1, or from 1:99 to 50:50. The ITR ester units may be present in the poly(bisphenol A carbonate-co-isophthalate-terephthalate-resorcinol ester) in an amount greater than or equal to 95 mol%, preferably greater than or equal to 99 mol%, and still more preferably greater than or equal to 99.5 mol%, based on the total moles of ester units in the copolymer.

Other carbonate units, other ester units, or a combination thereof may be present, in a total amount of 1 to 20 mol%, based on the total moles of units in the copolymers, for example monoaryl carbonate units of formula (5) and bisphenol ester units of formula (3a), wherein, in the foregoing formulae, Rh is each independently a C1-10 hydrocarbon group, n is 0-4, Ra and Rb are each independently a C1-12 alkyl, p and q are each independently integers of 0-4, and Xa is a single bond, -O-, -S-, -S(O)-, -S(O)2-, -C(O)-, or a C1-13 alkylidene of formula -C(Rc)(Rd)- wherein Rc and Rd are each independently hydrogen or C1-12 alkyl, or a group of the formula -C(=Re)- wherein Re is a divalent C1-12 hydrocarbon group. The bisphenol ester units may be bisphenol A phthalate ester units of formula (3b).

In an aspect, the poly(bisphenol A carbonate-co-isophthalate/terephthalate-resorcinol ester) (4c) comprises 1-90 mol% of bisphenol A carbonate units, 10-99 mol% of isophthalic acid-terephthalic acid-resorcinol ester units, and optionally 1-60 mol% of resorcinol carbonate units, isophthalic acid-terephthalic acid-bisphenol A phthalate ester units, or a combination thereof.
In another aspect, the poly(bisphenol A carbonate-co-isophthalate/terephthalate-resorcinol ester) (4c) comprises 10-20 mol% of bisphenol A carbonate units, 20-98 mol% of isophthalic acid-terephthalic acid-resorcinol ester units, and optionally 1-60 mol% of resorcinol carbonate units, isophthalic acid-terephthalic acid-bisphenol A phthalate ester units, or a combination thereof.

The polycarbonates of the polycarbonate compositions may also include poly(ester-carbonate-siloxane)s comprising bisphenol A carbonate units, isophthalate-terephthalate-bisphenol A ester units, and siloxane units, for example blocks containing 5 to 200 dimethylsiloxane units.

The high heat poly(ester-carbonate)s may have an Mw of 2,000-100,000 g/mol, preferably 3,000-75,000 g/mol, more preferably 4,000-50,000 g/mol, more preferably 5,000-35,000 g/mol, and still more preferably 17,000-30,000 g/mol. Molecular weight determinations are performed by GPC using a crosslinked styrene-divinylbenzene column, at a sample concentration of 1 milligram per milliliter, using polystyrene standards and calculated for polycarbonate. Samples are eluted at a flow rate of 1.0 ml/min with methylene chloride as the eluent.

The poly(carbonate-co-arylate ester)s may be present, for example, at 35-99 wt%, 35-80 wt%, 35-70 wt%, or 35-60 wt%, each based on the total weight of the polycarbonate composition.

Polycarbonates may be manufactured by processes such as interfacial polymerization and melt polymerization, which are known and are described, for example, in WO 2013/175448 A1 and WO 2014/072923 A1. An end-capping agent (also referred to as a chain stopper agent or chain terminating agent) may be included during polymerization to provide end groups; the end-capping agent (and thus the end groups) is selected based on the desired properties of the polycarbonate, and combinations of different end groups may be used. Exemplary end-capping agents are described below.

Branched polycarbonate blocks may be prepared by adding a branching agent during polymerization, for example trimellitic acid, trimellitic anhydride, trimellitic trichloride, tris-p-hydroxyphenylethane, isatin-bis-phenol, tris-phenol TC (1,3,5-tris((p-hydroxyphenyl)isopropyl)benzene), tris-phenol PA (4-(4-(1,1-bis(p-hydroxyphenyl)ethyl)-alpha,alpha-dimethylbenzyl)phenol), 4-chloroformyl phthalic anhydride, trimesic acid, and benzophenone tetracarboxylic acid. The branching agents may be added at a level of 0.05 to 2.0 wt%. Combinations comprising linear polycarbonates and branched polycarbonates may be used.
Exemplary end-capping agents include monocyclic phenols such as phenol, p-cyanophenol, and C1-22 alkyl-substituted phenols such as p-cumyl-phenol, resorcinol monobenzoate, and p- and tertiary-butyl phenol; monoethers of diphenols, such as p-methoxyphenol; alkyl-substituted phenols with branched chain alkyl substituents having 8 to 9 carbon atoms; 4-substituted-2-hydroxybenzophenones and their derivatives; aryl salicylates; monoesters of diphenols such as resorcinol monobenzoate; 2-(2-hydroxyaryl)-benzotriazoles and their derivatives; 2-(2-hydroxyaryl)-1,3,5-triazines and their derivatives; mono-carboxylic acid chlorides such as benzoyl chloride, C1-22 alkyl-substituted benzoyl chloride, toluoyl chloride, bromobenzoyl chloride, cinnamoyl chloride, and 4-nadimidobenzoyl chloride; polycyclic mono-carboxylic acid chlorides such as trimellitic anhydride chloride and naphthoyl chloride; functionalized chlorides of aliphatic monocarboxylic acids, such as acryloyl chloride and methacryloyl chloride; and mono-chloroformates such as phenyl chloroformate, alkyl-substituted phenyl chloroformates, p-cumyl phenyl chloroformate, and toluene chloroformate. Combinations of different end groups may be used.

The polycarbonate compositions include an organophosphorous flame retardant. In the aromatic organophosphorous compounds that have at least one organic aromatic group, the aromatic group may be a substituted or unsubstituted C3-30 group containing one or more of a monocyclic or polycyclic aromatic moiety (which may optionally contain up to three heteroatoms (N, O, P, S, or Si)) and optionally further containing one or more nonaromatic moieties, for example alkyl, alkenyl, alkynyl, or cycloalkyl. The aromatic moiety of the aromatic group may be directly bonded to the phosphorous-containing group, or bonded via another moiety, for example an alkylene group. In an aspect the aromatic group is the same as an aromatic group of the polycarbonate backbone, such as a bisphenol group (e.g., bisphenol A), a monoarylene group (e.g., a 1,3-phenylene or a 1,4-phenylene), or a combination comprising at least one of the foregoing.

The phosphorous-containing group may be a phosphate (P(=O)(OR)3), phosphite (P(OR)3), phosphonate (RP(=O)(OR)2), phosphinate (R2P(=O)(OR)), phosphine oxide (R3P(=O)), or phosphine (R3P), wherein each R in the foregoing phosphorous-containing groups may be the same or different, provided that at least one R is an aromatic group. A combination of different phosphorous-containing groups may be used. The aromatic group may be directly or indirectly bonded to the phosphorous, or to an oxygen of the phosphorous-containing group (i.e., an ester).

In an aspect the aromatic organophosphorous compound is a monomeric phosphate. Representative monomeric aromatic phosphates are of the formula (GO)3P=O, wherein each G is independently an alkyl, cycloalkyl, aryl, alkylarylene, or arylalkylene group having up to 30 carbon atoms, provided that at least one G is an aromatic group. Two of the G groups may be joined together to provide a cyclic group. In some aspects G corresponds to a monomer used to form the polycarbonate, e.g., resorcinol.
Exemplary phosphates include phenyl bis(dodecyl) phosphate, phenyl bis(neopentyl) phosphate, phenyl bis(3,5,5'-trimethylhexyl) phosphate, ethyl diphenyl phosphate, 2-ethylhexyl di(p-tolyl) phosphate, bis(2-ethylhexyl) p-tolyl phosphate, tritolyl phosphate, bis(2-ethylhexyl) phenyl phosphate, tri(nonylphenyl) phosphate, bis(dodecyl) p-tolyl phosphate, dibutyl phenyl phosphate, 2-chloroethyl diphenyl phosphate, p-tolyl bis(2,5,5'-trimethylhexyl) phosphate, 2-ethylhexyl diphenyl phosphate, and the like. A specific aromatic phosphate is one in which each G is aromatic, for example, triphenyl phosphate, tricresyl phosphate, isopropylated triphenyl phosphate, and the like.

Di- or polyfunctional aromatic organophosphorous compounds are also useful, for example compounds of the formulas wherein each G1 is independently a C1-30 hydrocarbyl; each G2 is independently a C1-30 hydrocarbyl or hydrocarbyloxy; Xa is as defined in formula (3) or formula (4); each X is independently a bromine or chlorine; m is 0 to 4, and n is 1 to 30. In a specific aspect, Xa is a single bond, methylene, isopropylidene, or 3,3,5-trimethylcyclohexylidene.

Specific aromatic organophosphorous compounds are inclusive of acid esters of formula (9), wherein each R16 is independently a C1-8 alkyl, C5-6 cycloalkyl, C6-20 aryl, or C7-12 arylalkylene, each optionally substituted by C1-12 alkyl, specifically by C1-4 alkyl, and X is a mono- or poly-nuclear aromatic C6-30 moiety or a linear or branched C2-30 aliphatic radical, which may be OH-substituted and may contain up to 8 ether bonds, provided that at least one R16 or X is an aromatic group; each n is independently 0 or 1; and q is from 0.5 to 30. In some aspects each R16 is independently a C1-4 alkyl, naphthyl, phenyl(C1-4)alkylene, or an aryl group optionally substituted by C1-4 alkyl; each X is a mono- or poly-nuclear aromatic C6-30 moiety; each n is 1; and q is from 0.5 to 30. In some aspects each R16 is aromatic, e.g., phenyl; each X is a mono- or poly-nuclear aromatic C6-30 moiety, including a moiety derived from formula (2); n is one; and q is from 0.8 to 15. In other aspects, each R16 is phenyl; X is cresyl, xylenyl, propylphenyl, or butylphenyl, one of the following divalent groups, or a combination comprising one or more of the foregoing; n is 1; and q is from 1 to 5, or from 1 to 2. In some aspects at least one R16 or X corresponds to a monomer used to form the polycarbonate, e.g., bisphenol A, resorcinol, or the like. Aromatic organophosphorous compounds of this type include the bis(diphenyl) phosphate of hydroquinone, resorcinol bis(diphenyl phosphate) (RDP), and bisphenol A bis(diphenyl) phosphate (BPADP), and their oligomeric and polymeric counterparts.

The organophosphorous flame retardant containing a phosphorous-nitrogen bond may be a phosphazene, phosphonitrilic chloride, phosphorous ester amide, phosphoric acid amide, phosphonic acid amide, phosphinic acid amide, or tris(aziridinyl) phosphine oxide. These flame-retardant additives are commercially available. In an aspect, the organophosphorous flame retardant containing a phosphorous-nitrogen bond is a phosphazene or cyclic phosphazene of the formulas wherein w1 is 3 to 10,000; w2 is 3 to 25, or 3 to 7; and each Rw is independently a C1-12 alkyl, alkenyl, alkoxy, aryl, aryloxy, or polyoxyalkylene group. In the foregoing groups at least one hydrogen atom of these groups may be substituted with a group having an N, S, O, or F atom, or an amino group.
For example, each Rw may be a substituted or unsubstituted phenoxy, an amino, or a polyoxyalkylene group. Any given Rw may further be a crosslink to another phosphazene group. Exemplary crosslinks include bisphenol groups, for example bisphenol A groups. Examples include phenoxy cyclotriphosphazene, octaphenoxy cyclotetraphosphazene, decaphenoxy cyclopentaphosphazene, and the like. In an aspect, the phosphazene is a phenoxyphosphazene. Commercially available phenoxyphosphazenes having the aforementioned structures are LY202, manufactured and distributed by Lanyin Chemical Co., Ltd.; FP-110, manufactured and distributed by Fushimi Pharmaceutical Co., Ltd.; and SPB-100, manufactured and distributed by Otsuka Chemical Co., Ltd.

The organophosphorous flame retardant may be present in an amount providing up to 1.5 wt%, up to 1.2 wt%, up to 1.0 wt%, up to 0.8 wt%, up to 0.6 wt%, or up to 0.4 wt% of phosphorous, each based on the total weight of the composition.

The polycarbonate compositions may include flame retardants in addition to the organophosphorous flame retardant. Inorganic flame retardants may also be used, for example salts of C2-16 alkyl sulfonates such as potassium perfluorobutane sulfonate (Rimar salt), potassium perfluorooctane sulfonate, and tetraethylammonium perfluorohexane sulfonate; salts of aromatic sulfonates such as sodium benzene sulfonate and sodium toluene sulfonate (NATS); salts of aromatic sulfone sulfonates such as potassium diphenylsulfone sulfonate (KSS); and salts formed by reacting, for example, an alkali metal or alkaline earth metal (e.g., lithium, sodium, potassium, magnesium, calcium, and barium salts) and an inorganic acid complex salt, for example an oxo-anion (e.g., alkali metal and alkaline-earth metal salts of carbonic acid, such as Na2CO3, K2CO3, MgCO3, CaCO3, and BaCO3) or a fluoro-anion complex (such as Li3AlF6, BaSiF6, KBF4, K3AlF6, KAlF4, K2SiF6, or Na3AlF6), or the like. Rimar salt, KSS, and NATS, alone or in combination with other flame retardants, are particularly useful. The perfluoroalkyl sulfonate salt may be present in an amount of 0.30 to 1.00 wt%, preferably 0.40 to 0.80 wt%, more preferably 0.45 to 0.70 wt%, based on the total weight of the composition. The aromatic sulfonate salt may be present in the final polycarbonate composition in an amount of 0.01 to 0.1 wt%, preferably 0.02 to 0.06 wt%, and more preferably 0.03 to 0.05 wt%. Exemplary amounts of the aromatic sulfone sulfonate salt are 0.01 to 0.6 wt%, preferably 0.1 to 0.4 wt%, and more preferably 0.25 to 0.35 wt%, based on the total weight of the polycarbonate composition.

Halogenated materials may also be used as flame retardants in addition to the organophosphorous flame retardant, for example bisphenols of which the following are representative: 2,2-bis-(3,5-dichlorophenyl)-propane; bis-(2-chlorophenyl)-methane; bis(2,6-dibromophenyl)-methane; 1,1-bis-(4-iodophenyl)-ethane; 1,2-bis-(2,6-dichlorophenyl)-ethane; 1,1-bis-(2-chloro-4-iodophenyl)ethane; 1,1-bis-(2-chloro-4-methylphenyl)-ethane; 1,1-bis-(3,5-dichlorophenyl)-ethane; 2,2-bis-(3-phenyl-4-bromophenyl)-ethane; 2,6-bis-(4,6-dichloronaphthyl)-propane; 2,2-bis-(3,5-dichloro-4-hydroxyphenyl)-propane; and 2,2-bis-(3-bromo-4-hydroxyphenyl)-propane.
Other halogenated materials include 1,3-dichlorobenzene, 1,4-dibromobenzene, 1,3-dichloro-4-hydroxybenzene, and biphenyls such as 2,2'-dichlorobiphenyl, polybrominated 1,4-diphenoxybenzene, 2,4'-dibromobiphenyl, and 2,4'-dichlorobiphenyl, as well as decabromo diphenyl oxide, and oligomeric and polymeric halogenated aromatic compounds, such as a copolycarbonate of bisphenol A and tetrabromobisphenol A and a carbonate precursor, e.g., phosgene. Metal synergists, e.g., antimony oxide, may also be used with the flame retardant. When present, halogen-containing flame retardants are present in amounts of 1 to 25 parts by weight, more preferably 2 to 20 parts by weight, based on 100 parts by weight of the total composition, excluding any filler.

Anti-drip agents may also be used in the composition, for example a fibril-forming or non-fibril-forming fluoropolymer such as polytetrafluoroethylene (PTFE). The anti-drip agent may be encapsulated by a rigid copolymer, for example styrene-acrylonitrile copolymer (SAN). PTFE encapsulated in SAN is known as TSAN. An exemplary TSAN comprises 50 wt% PTFE and 50 wt% SAN, based on the total weight of the encapsulated fluoropolymer. The SAN may comprise, for example, 75 wt% styrene and 25 wt% acrylonitrile, based on the total weight of the copolymer. Anti-drip agents may be used in amounts of 0.1 to 10 parts by weight, based on 100 parts by weight of the total composition, excluding any filler.

In addition to the homopolycarbonate, poly(carbonate-co-arylate ester), and organophosphorous flame retardant, the polycarbonate composition can include various additives ordinarily incorporated into polymer compositions of this type, with the proviso that the additive(s) are selected so as to not significantly adversely affect the desired properties of the polycarbonate composition, in particular a critical heat flux at extinguishment of greater than 13 kW/m², preferably greater than 20 kW/m², as measured in accordance with ISO 5658-2 on a 3 to 4 mm thick plaque at 50 kW/m²; and a conventional index of toxicity value of less than 0.75 in accordance with ISO 5659-2 on a 3 to 4 mm thick plaque at 50 kW/m². Such additives can be mixed at a suitable time during the mixing of the components for forming the composition. Additives include impact modifiers, fillers, reinforcing agents, antioxidants, heat stabilizers, light stabilizers, ultraviolet (UV) light stabilizers, plasticizers, lubricants, mold release agents, antistatic agents, colorants such as titanium dioxide, carbon black, and organic dyes, surface effect additives, radiation stabilizers, flame retardants, and anti-drip agents. A combination of additives can be used, for example a combination of an antioxidant, a mold release agent, and an ultraviolet light stabilizer. In general, the additives are used in the amounts generally known to be effective. For example, the total amount of the additives can be 0.01 to 5 wt%, based on the total weight of the polycarbonate composition.

In an aspect, the polycarbonate composition can optionally comprise an antimicrobial agent. Any antimicrobial agent generally known can be used, either individually or in combination of two or more. Exemplary antimicrobial agents include, but are not limited to, metal-containing agents, such as an Ag-, Cu-, Al-, Sb-, As-, Ba-, Bi-, B-, Au-, Pb-, Hg-, Ni-, Th-, Sn-, or Zn-containing agent. In an aspect, the agent can be a silver-containing agent.
A suitable silver-containing agent can contain a silver ion, colloidal silver, a silver salt, a silver complex, a silver protein, a silver nanoparticle, a silver-functionalized clay, a zeolite containing silver ions, or any combination thereof. Silver salts or silver complexes can include silver acetate, silver benzoate, silver carbonate, silver iodate, silver iodide, silver lactate, silver laurate, silver nitrate, silver oxide, silver palmitate, silver sulfadiazine, silver sulfate, silver chloride, or any combination thereof. When present, the antimicrobial agent can be included in an amount of 0.001 to 10 weight percent, based on the total weight of the polycarbonate composition. In an aspect, the composition can contain the silver-containing agent(s) in amounts such that the silver content of the composition is 0.01 wt% to 5 wt%.

The polycarbonate compositions may include 1-65 wt% of a bisphenol A homopolycarbonate; 35-99 wt% of a poly(carbonate-co-arylate ester) comprising 60-90 mol% of bisphenol A carbonate units, 10-30 mol% of isophthalic acid-terephthalic acid-resorcinol ester units, and 1-20 mol% of resorcinol carbonate units; an organophosphorus flame retardant comprising a phosphazene, wherein the organophosphorus flame retardant is present in an amount effective to provide up to 1.5 wt% phosphorous; and, optionally, up to 5 wt% of an additive composition, wherein each amount is based on the total weight of the polycarbonate composition, which sums to 100 wt%.

The polycarbonate compositions may be manufactured by various methods. For example, the powdered polycarbonates and other optional components are first blended, optionally with fillers, in a HENSCHEL high speed mixer. Other low shear processes, including but not limited to hand mixing, may also accomplish this blending. The blend is then fed into the throat of a twin-screw extruder via a hopper. Alternatively, at least one of the components may be incorporated into the composition by feeding it directly into the extruder at the throat or downstream through a side-stuffer. Additives may also be compounded into a masterbatch with a desired polymer and fed into the extruder. The extruder is generally operated at a temperature higher than that necessary to cause the composition to flow. The extrudate is immediately quenched in a water bath and pelletized. The pellets so prepared may be one-fourth inch long or less, as desired. Such pellets may be used for subsequent molding, shaping, or forming.

Shaped, formed, or molded articles comprising the polycarbonate compositions are also provided. The polycarbonate compositions may be molded into useful shaped articles by a variety of methods, such as injection molding, extrusion, rotational molding, blow molding, and thermoforming. In an aspect, the article is an extruded article, a molded article, a pultruded article, a thermoformed article, a foamed article, a layer of a multi-layer article, a substrate for a coated article, or a substrate for a metallized article. Transportation components, in particular interior train components, that are molded or extruded from the polycarbonate compositions are also provided. Molding may be by a variety of means such as injection molding, rotational molding, blow molding, and the like. In an aspect, the molding is by injection molding.
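As an illustration of the relationship between flame retardant loading and the "amount effective to provide up to 1.5 wt% phosphorous" language used above, the following minimal Python sketch converts between the two quantities. The function names are illustrative only; the 13.5 wt% phosphorus content of the phenoxycyclophosphazene is taken from the component table of the Examples below.

def phosphorus_wt_pct(fr_loading_wt_pct, fr_p_fraction_wt_pct):
    """Phosphorus contributed to the total composition (wt%) by an
    organophosphorus flame retardant at a given loading (wt%)."""
    return fr_loading_wt_pct * fr_p_fraction_wt_pct / 100.0

def fr_loading_for_target_p(target_p_wt_pct, fr_p_fraction_wt_pct):
    """Flame retardant loading (wt%) needed to deliver a target
    phosphorus content (wt%) in the composition."""
    return target_p_wt_pct * 100.0 / fr_p_fraction_wt_pct

# Example 2 below uses 2 wt% of a phenoxycyclophosphazene containing
# 13.5 wt% phosphorus (P-FR in Table 1 of the Examples):
print(phosphorus_wt_pct(2.0, 13.5))        # 0.27 wt% P, matching Table 5
# Loading of the same flame retardant that would reach the 1.5 wt% ceiling:
print(fr_loading_for_target_p(1.5, 13.5))  # about 11.1 wt%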
Illustrative claddings include, for example, interior vertical surfaces, such as side walls, front walls, end-walls, partitions, room dividers, flaps, boxes, hoods, and louvres; interior doors and linings for internal and external doors; window insulations; kitchen interior surfaces; interior horizontal surfaces, such as ceiling paneling, flaps, boxes, hoods, and louvres; luggage storage areas, such as overhead and vertical luggage racks, luggage containers, and compartments; driver's desk applications, such as paneling and surfaces of the driver's desk; interior surfaces of gangways, such as interior sides of gangway membranes (bellows) and interior linings; window frames (including sealants and gaskets); (folding) tables with a downward-facing surface; interior and exterior surfaces of air ducts; and devices for passenger information (such as information display screens), and the like.

Data in the Examples shows that the compositions herein may meet the requirements for HL2 for R4 applications. While the compositions described herein are designed preferably for use in railway interiors, it is to be understood that the compositions are also useful in other interior components that are required to meet the test standards for HL2 for R4 applications. Interior bus components are particularly mentioned. Current discussions directed to increasing bus safety include proposals to apply the HL2 standards to interior bus components. The contemplated articles accordingly include interior bus components, including lighting components as described above and comprising the preferred compositions described herein, and particularly below, that meet the tests specified in the HL2 standards described above.

In a particularly advantageous feature, the compositions described herein may meet other stringent standards for railway applications. For example, for interior applications used in the United States railway market, materials need to meet NFPA 130 (2010 edition). This standard imposes requirements on the rate of smoke generation and on surface flammability. The generation of smoke is measured via the ASTM E662-12 smoke density test, and the requirements are a preferred smoke density after 1.5 min (Ds1.5) of 100 or less and a preferred smoke density after 4 min (Ds4) of 200 or less, in either flaming or non-flaming mode. Surface flammability is measured via the ASTM E162-12a flame spread test, and the requirement is a maximum flame spread index (Is) of 35 or less, with no flaming, running, or dripping allowed. Is is calculated by multiplying the flame spread factor (Fs) by the heat evolution factor (Q) determined during the test. Certain of the more preferred compositions described herein, and particularly below, may also meet these standards.

This disclosure is further illustrated by the following examples, which are nonlimiting.

EXAMPLES

Examples 1-2

The following components are used in the examples. Unless specifically indicated otherwise, the amount of each component is in wt%, based on the total weight of the composition.
Table 1. Components

Component | Description | Source
PC-1 | Linear bisphenol A polycarbonate homopolymer, Mw = 27,000-33,000 g/mol as per GPC using polystyrene standards and calculated for polycarbonate; produced via interfacial process, phenol endcapped | SABIC
PC-2 | Linear bisphenol A polycarbonate homopolymer, Mw = 21,000-23,000 g/mol as per GPC using polystyrene standards and calculated for polycarbonate; produced via interfacial process, p-cumyl phenol endcapped | SABIC
ITR-PC | Isophthalic acid-terephthalic acid-resorcinol (ITR)-BPA copolyester-carbonate, 19 mol% ITR, p-cumylphenol end-capped, produced via interfacial polymerization, Mw of 29,000-31,000 g/mol as determined by GPC using polycarbonate standards | SABIC
EPOXY | Cycloaliphatic, diepoxy-functional organic compound, commercially available as ERL 4221E | DOW
STAB | Tris(di-t-butylphenyl)phosphite, available as IRGAFOS 168 | BASF
PEPQ | Tetrakis(2,4-di-tert-butylphenyl) [1,1'-biphenyl]-4,4'-diylbis(phosphonite), available as IRGAFOS P-EPQ | CLARIANT
UVA | 2-(2-Hydroxy-5-tert-octylphenyl)benzotriazole, CAS Reg. No. 3147-75-9, available as Cyasorb UV5411 | CYTEC
PETS | Pentaerythritol tetrastearate (>90% esterified), available as LOXIOL EP 8578 |
P-FR | Phenoxycyclophosphazene (13.5 wt% phosphorous); obtained as RABITLE FP-110 | FUSHIMI
RIMAR | Potassium perfluorobutane sulfonate |

The materials shown in Table 1 were used. The testing samples were prepared as described below, and the following test methods were used. Typical compounding procedures are as follows: all raw materials were compounded on a 25 mm Werner & Pfleiderer ZSK co-rotating twin-screw extruder with a vacuum-vented standard mixing screw operated at a screw speed of 300 rpm. The strand was cooled through a water bath prior to pelletizing. The pellets were dried for 3-4 hours at 90-110°C in a forced air-circulating oven prior to injection molding. The temperature profile and a typical extrusion profile are listed in Table 2.

Table 2. Typical extrusion profile (25 mm ZSK, die with 2 holes)

Parameter | Unit | Value
Feed temperature | °C | 40
Zone 1 temp. | °C | 180-200
Zone 2-8 temp. | °C | 250-270
Die temperature | °C | 250-270
Screw speed | rpm | 300
Throughput | kg/h | 15-25
Vacuum 1 | bar | ~0.7

An Engel 45, 75, 90 molding machine was used to mold the test parts for standard physical property testing (for parameters, see Table 3).

Table 3. Typical injection molding profile

Parameter | Unit | Value
Pre-drying time | h | 3-4
Pre-drying temp. | °C | 90-110
Hopper temp. | °C | 40
Zone 1 temp. | °C | 250-280
Zone 2 temp. | °C | 265-295
Zone 3 temp. | °C | 270-300
Nozzle temp. | °C | 265-295
Mold temperature | °C | 75-90
Screw speed | rpm | 25
Back pressure | bar | 7
Injection time | s | 1.9
Approx. cycle time | s | 45

Sample preparation and testing methods are described in Table 4.

Table 4. Testing methods

Property | Standard | Conditions | Specimen type
Critical heat flux at extinguishment (CFE) | ISO 5658-2 | 50 kW/m² | 3 or 4 mm thick plaque
Conventional index of toxicity (CIT) | ISO 5659-2 | 50 kW/m², at 4 minutes and 8 minutes | 3 or 4 mm thick plaque
Flame spread | ISO 11925-2 | maximum 150 mm in 60 seconds | 3 or 4 mm thick plaque
Flaming droplets | ISO 11925-2 | number of flaming droplets within 60 seconds | 3 or 4 mm thick plaque
Transmission | ASTM D1003-00 | | 2.54 mm thick plaque
Haze | ASTM D1003-00 | | 2.54 mm thick plaque

Table 5 shows the compositions and properties for Comparative Example 1 and Example 2.

Table 5. Compositions and properties

Component/property | Unit | 1* | 2
PC-1 | wt% | 9.38 | 9.38
PC-2 | wt% | 44.92 | 43.00
ITR-PC | wt% | 45 | 45
EPOXY | wt% | 0.06 | 0.06
AO-1 | wt% | 0.3 | 0.3
AO-2 | wt% | 0.06 | 0.06
UVA | wt% | 0.2 | 0.2
RIMAR | wt% | 0.08 | -
P-FR | wt% | - | 2
Total | wt% | 100 | 100
%P | wt% | 0 | 0.27
CFE, run 1, 3 mm | kW/m² | 15.2 | 49.1
CFE, run 2, 3 mm | kW/m² | 37.7 | 50.0
CFE, run 3, 3 mm | kW/m² | 13.8 | 50.9
CFE, avg., 3 mm | kW/m² | 27.9 | 50
Flame spread, 3 mm | | Pass | Pass
Flaming droplets, 3 mm | number | 0 | 0
Toxicity (CIT), 4 min, 3 mm | | 0.05 | 0.18
Toxicity (CIT), 8 min, 3 mm | | 0.1 | 0.27
CFE, run 1, 4 mm | kW/m² | 18.5 | 50.9
CFE, run 2, 4 mm | kW/m² | 13.0 | 49.8
CFE, run 3, 4 mm | kW/m² | 16.3 | 50.9
CFE, avg., 4 mm | kW/m² | 15.9 | 50.5
Flame spread, 4 mm | | Pass | Pass
Flaming droplets, 4 mm | number | 0 | 0
Toxicity (CIT), 4 min, 4 mm | | 0.11 | 0.22
Toxicity (CIT), 8 min, 4 mm | | 0.23 | 0.44
Transmission, 2.54 mm | % | 92.5 | 92.1
Haze, 2.54 mm | % | 0.5 | 0.4
*Comparative example.

The composition of Example 2, which includes the homopolycarbonates (PC-1 and PC-2), the poly(carbonate-co-monoarylate ester) (ITR-PC), and a phosphazene flame retardant, provides a desirable combination of properties: good optical properties (a transmission of greater than 90% and a haze of less than 1%), a critical heat flux at extinguishment (CFE) of greater than 13 kW/m², more preferably greater than 20 kW/m², with no individual measurement lower than 13 kW/m², a pass rating on the flame spread test (ISO 11925-2), no flaming droplets, and a flame effluent toxicity of less than 0.75 at 4 minutes and 8 minutes. Comparative Example 1, which is a similar composition wherein Rimar salt is present instead of the phosphazene flame retardant, provides a less robust CFE (ISO 5658-2) result, with an individual measurement failing to meet the threshold of greater than 13 kW/m² at a thickness of 4 mm. Specifically, the CFE test was performed three times on each of the samples of Comparative Example 1 and Example 2; for Comparative Example 1, one of the tests resulted in a score of 13.0 kW/m², whereas the desired score for the spread of flame test is greater than 13 kW/m². A "pass" rating in the flame spread test (ISO 11925-2) means that either the sample did not ignite when exposed to the flame or, if it did ignite, the flame did not spread by 150 mm within 60 seconds.

This disclosure further encompasses the following aspects.

Aspect 1. A polycarbonate composition comprising: 1-65 wt% of a homopolycarbonate; 35-99 wt% of a poly(carbonate-co-arylate ester) comprising 60-90 mol% of bisphenol A carbonate units, 10-30 mol% of isophthalic acid-terephthalic acid-resorcinol ester units, and 1-20 mol% of resorcinol carbonate units; an organophosphorus flame retardant in an amount effective to provide up to 1.5 wt% phosphorous; and, optionally, up to 5 wt% of an additive composition, wherein each amount is based on the total weight of the polycarbonate composition, which sums to 100 wt%.

Aspect 2. The polycarbonate composition of Aspect 1, wherein a molded sample of the composition has: a critical heat flux at extinguishment of greater than 13 kW/m², preferably greater than 20 kW/m², as measured in accordance with ISO 5658-2 on a 3 to 4 mm thick plaque at 50 kW/m²; and a conventional index of toxicity value of less than 0.75 in accordance with ISO 5659-2 on a 3 to 4 mm thick plaque at 50 kW/m².

Aspect 3. The polycarbonate composition of any one of the preceding aspects, wherein the homopolycarbonate is a bisphenol A homopolycarbonate.
Aspect 4. The polycarbonate composition of any one of the preceding aspects, wherein the poly(carbonate-co-arylate ester) comprises units of formula (4b), wherein: R1 is a C6-30 aromatic group having at least one aromatic moiety; each Rh is independently a halogen atom, a C1-10 hydrocarbyl such as a C1-10 alkyl group, a halogen-substituted C1-10 alkyl group, a C6-10 aryl group, or a halogen-substituted C6-10 aryl group, and n is 0 to 4, preferably each Rh is a C1-4 alkyl and n is 0 to 3, 0 to 1, or 0; and a mole ratio of carbonate units x to ester units z is from 99:1 to 1:99, or from 98:2 to 2:98, or from 90:10 to 10:90.

Aspect 5. The polycarbonate composition of any one of the preceding aspects, wherein the poly(carbonate-co-arylate ester) comprises a poly(bisphenol A carbonate-co-isophthalate-terephthalate-resorcinol ester) of formula (4c), wherein the mole ratio of x:z is from 98:2 to 2:98, or from 90:10 to 10:90.

Aspect 6. The polycarbonate composition of any one of the preceding aspects, wherein the monoaryl carbonate units have the structure of formula (5) and the aromatic ester units have the structure of formula (3a), wherein Rh is each independently a C1-10 hydrocarbon group, n is 0-4, Ra and Rb are each independently a C1-12 alkyl, p and q are each independently integers of 0-4, and Xa is a single bond, -O-, -S-, -S(O)-, -S(O)2-, -C(O)-, or a C1-13 alkylidene of formula -C(Rc)(Rd)- wherein Rc and Rd are each independently hydrogen or C1-12 alkyl, or a group of the formula -C(=Re)- wherein Re is a divalent C1-12 hydrocarbon group.

Aspect 7. The polycarbonate composition of any one of the preceding aspects, wherein the additive composition is present and comprises an impact modifier, a filler, a reinforcing agent, an antioxidant, a heat stabilizer, a light stabilizer, an ultraviolet light stabilizer, a plasticizer, a lubricant, a mold release agent, an antistatic agent, a colorant, an organic dye, a surface effect additive, a radiation stabilizer, a flame retardant, an anti-drip agent, an antimicrobial agent, or a combination thereof.

Aspect 8. The polycarbonate composition of any one of the preceding aspects, wherein the homopolycarbonate comprises a first linear bisphenol A homopolycarbonate having a weight average molecular weight of 26,000-40,000 grams per mole, a second linear bisphenol A homopolycarbonate having a weight average molecular weight of 15,000-25,000 grams per mole, or a combination thereof, each as measured via gel permeation chromatography using polystyrene standards and calculated for polycarbonate.

Aspect 9. The polycarbonate composition of any one of the preceding aspects, wherein the organophosphorous flame retardant comprises a monomeric or oligomeric phosphate (P(=O)(OR)3), phosphite (P(OR)3), phosphonate (RP(=O)(OR)2), phosphinate (R2P(=O)(OR)), phosphine oxide (R3P(=O)), or phosphine (R3P), wherein each R may be the same or different, provided that at least one R is an aromatic group; a monomeric or oligomeric compound having at least one phosphorous-nitrogen bond; or a combination thereof.
Aspect 10. The polycarbonate composition of any one of the preceding aspects, wherein the organophosphorous flame retardant comprises a compound of the foregoing formulas, or a combination thereof, wherein each occurrence of G1 is independently a C1-30 hydrocarbyl; each occurrence of G2 is independently a C1-30 hydrocarbyl or hydrocarbyloxy; each X is independently a bromine or chlorine; m is 0 to 4, and n is 1 to 30; and wherein R16, R17, R18, and R19 are each independently C1-8 alkyl, C5-6 cycloalkyl, C6-20 aryl, or C7-12 arylalkylene, each optionally substituted by C1-12 alkyl, preferably by C1-4 alkyl, and X is a mono- or poly-nuclear aromatic C6-30 moiety or a linear or branched C2-30 aliphatic radical, each optionally OH-substituted and optionally comprising up to 8 ether bonds, provided that at least one of R16, R17, R18, R19, and X is an aromatic group; or a combination thereof.

Aspect 11. The polycarbonate composition of any one of the preceding aspects, wherein the organophosphorous flame retardant comprises: a phosphazene, phosphonitrilic chloride, phosphorous ester amide, phosphoric acid amide, phosphonic acid amide, phosphinic acid amide, or tris(aziridinyl) phosphine oxide; or a phosphazene or cyclic phosphazene of the foregoing formulas, wherein w1 is 3 to 10,000; w2 is 3 to 25, or 3 to 7; and each Rw is independently a C1-12 alkyl, alkenyl, alkoxy, aryl, aryloxy, or polyoxyalkylene group, optionally wherein at least one hydrogen atom is replaced with an N, S, O, or F atom, or an amino group.

Aspect 12. The polycarbonate composition of any one of the preceding aspects, comprising: 1-65 wt% of a bisphenol A homopolycarbonate; 35-99 wt% of a poly(carbonate-co-arylate ester) comprising 60-90 mol% of bisphenol A carbonate units, 10-30 mol% of isophthalic acid-terephthalic acid-resorcinol ester units, and 1-20 mol% of resorcinol carbonate units; an organophosphorus flame retardant comprising a phosphazene, wherein the organophosphorus flame retardant is present in an amount effective to provide up to 1.5 wt% phosphorous; and, optionally, up to 5 wt% of an additive composition, wherein each amount is based on the total weight of the polycarbonate composition, which sums to 100 wt%.

Aspect 13. An article comprising the polycarbonate composition of any one of the preceding aspects, preferably wherein the article is an interior or exterior railway component.

Aspect 14. The article of Aspect 13, wherein the railway component comprises an interior vertical surface, such as a side wall, a front wall, an end-wall, a partition, a room divider, a flap, a box, a hood, or a louvre; an interior door; a lining for internal and external doors; a window insulation; a kitchen interior surface; an interior horizontal surface; a luggage storage area; a luggage container; a luggage compartment; a driver's desk application; an interior surface of a gangway; a window frame; a table with a downward-facing surface; an interior or an exterior surface of an air duct; a device for passenger information; an exterior cladding; an exterior side skirt; exterior paneling; a roof panel; a roof cladding; a portal cladding; an apron flap; or a lighting component, preferably a lighting component.

Aspect 15. A method for forming the article according to Aspect 14, comprising molding, casting, or extruding the composition to provide the article.

The compositions, methods, and articles may alternatively comprise, consist of, or consist essentially of, any appropriate materials, steps, or components herein disclosed.
The compositions, methods, and articles may additionally, or alternatively, be formulated so as to be devoid, or substantially free, of any materials (or species), steps, or components that are otherwise not necessary to the achievement of the function or objectives of the compositions, methods, and articles.

All ranges disclosed herein are inclusive of the endpoints, and the endpoints are independently combinable with each other (e.g., the range "up to 25 wt%, or, more specifically, 5 wt% to 20 wt%" is inclusive of the endpoints and all intermediate values of the range "5 wt% to 25 wt%," etc.). "Combinations" is inclusive of blends, mixtures, alloys, reaction products, and the like. The terms "first," "second," and the like do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The terms "a" and "an" and "the" do not denote a limitation of quantity and are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. "Or" means "and/or" unless clearly stated otherwise. Reference throughout the specification to "some aspects", "an aspect", and so forth, means that a particular element described in connection with the aspect is included in at least one aspect described herein, and may or may not be present in other aspects. In addition, it is to be understood that the described elements may be combined in any suitable manner in the various aspects. A "combination thereof" is open and includes any combination comprising at least one of the listed components or properties, optionally together with a like or equivalent component or property not listed.

Unless specified to the contrary herein, all test standards are the most recent standard in effect as of the filing date of this application, or, if priority is claimed, the filing date of the earliest priority application in which the test standard appears. Unless defined otherwise, technical and scientific terms used herein have the same meaning as is commonly understood by one of skill in the art to which this application belongs. All cited patents, patent applications, and other references are incorporated herein by reference in their entirety. However, if a term in the present application contradicts or conflicts with a term in the incorporated reference, the term from the present application takes precedence over the conflicting term from the incorporated reference.

Compounds are described using standard nomenclature. For example, any position not substituted by any indicated group is understood to have its valency filled by a bond as indicated, or a hydrogen atom. A dash ("-") that is not between two letters or symbols is used to indicate a point of attachment for a substituent. For example, -CHO is attached through the carbon of the carbonyl group.

The term "alkyl" means a branched or straight chain, saturated aliphatic hydrocarbon group, e.g., methyl, ethyl, n-propyl, i-propyl, n-butyl, s-butyl, t-butyl, n-pentyl, s-pentyl, and n- and s-hexyl. "Alkenyl" means a straight or branched chain, monovalent hydrocarbon group having at least one carbon-carbon double bond (e.g., ethenyl (-HC=CH2)). "Alkoxy" means an alkyl group that is linked via an oxygen (i.e., alkyl-O-), for example methoxy, ethoxy, and sec-butyloxy groups.
"Alkylene" means a straight or branched chain, saturated, divalent aliphatic hydrocarbon group (e.g., methylene (-CH-) or, propylene (-(CH)-)). "Cycloalkylene" means a divalent cyclic alkylene group, -CH, wherein x is the number of hydrogens replaced by cyclization(s). "Cycloalkenyl" means a monovalent group having one or more rings and one or more carbon-carbon double bonds in the ring, wherein all ring members are carbon (e.g., cyclopentyl and cyclohexyl). "Aryl" means an aromatic hydrocarbon group containing the specified number of carbon atoms, such as phenyl, tropone, indanyl, or naphthyl. "Arylene" means a divalent aryl group. "Alkylarylene" means an arylene group substituted with an alkyl group. "Arylalkylene" means an alkylene group substituted with an aryl group (e.g., benzyl). The prefix "halo" means a group or compound including one more of a fluoro, chloro, bromo, or iodo substituent. A combination of different halo groups (e.g., bromo and fluoro), or only chloro groups may be present. The prefix "hetero" means that the compound or group includes at least one ring member that is a heteroatom (e.g., 1, 2, or 3 heteroatom(s)), wherein the heteroatom(s) is each independently N, O, S, Si, or P. "Substituted" means that the compound or group is substituted with at least one (e.g., 1, 2, 3, or 4) substituents that may each independently be a C alkoxy, a C haloalkoxy, a nitro (-NO), a cyano (-CN), a C alkyl sulfonyl (-S(=O)-alkyl), a C aryl sulfonyl (-S(=O)-aryl)a thiol (-SH), a thiocyano (-SCN), a tosyl (CHC6HSO-), a C cycloalkyl, a C alkenyl, a C cycloalkenyl, a C aryl, a C arylalkylene, a C heterocycloalkyl, and a C heteroaryl instead of hydrogen, provided that the substituted atom's normal valence is not exceeded. The number of carbon atoms indicated in a group is exclusive of any substituents. For example -CHCHCN is a C alkyl group substituted with a nitrile. While particular aspects have been described, alternatives, modifications, variations, improvements, and substantial equivalents that are or may be presently unforeseen may arise to applicants or others skilled in the art. Accordingly, the appended claims as filed and as they may be amended are intended to embrace all such alternatives, modifications variations, improvements, and substantial equivalents.
Western Power Distribution (WPD) has today launched the next wave of its digitalisation programme with the release of its new Real-Time Power Flow Data Access, providing customers and stakeholders with access to live data on energy production and use across the Midlands, South Wales and the South West. The new data access tool will allow customers, academics and innovators to view live information on electricity demand, import and generation across the WPD network. The solution will also display historic data, supporting in-depth comparisons and research by third parties. Users will be able to dive into the data, accessing generation split by key types such as solar and wind, providing a deep understanding of the make-up of energy on the network in real time. WPD's ambition is that the real-time data resource should provide three key benefits to network stakeholders:
- Better connections and planning – Customers, local councils and low carbon energy developers will have greater information to make informed decisions, helping them to plan where to most efficiently connect new EV charging stations, solar or wind farm generation, or where capacity is available for a new housing development.
- Supporting cutting-edge research – The real-time data will also assist academics and researchers to better understand the operation of the UK electricity network. In combination with other datasets, WPD's real-time data will help researchers monitor the UK energy network's response to live events, such as storms and surges in demand, enabling them to more accurately forecast the UK's future energy and infrastructure needs.
- Democratising innovation – Echoing the success of TfL's open data policy in supporting innovative services like Citymapper, WPD aims, by opening up its network data, to foster a new generation of smart, low carbon energy innovators. These data-led innovators could develop technologies that automatically help customers to minimise their fuel bills or plan a new grid connection. By allowing greater access to its network data, WPD is empowering a wider group of energy innovators to develop the next big idea.
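To illustrate the kind of third-party use WPD envisages, the sketch below polls a live-data feed and totals generation by fuel type. The endpoint URL, field names, and response shape are hypothetical placeholders for illustration, not WPD's actual API:

    import requests

    # Hypothetical endpoint and schema; WPD's real data portal may differ.
    FEED_URL = "https://example.com/wpd/real-time-power-flow.json"

    def generation_mix():
        """Fetch one live snapshot and total generation (MW) by fuel type."""
        snapshot = requests.get(FEED_URL, timeout=10).json()
        mix = {}
        for reading in snapshot.get("generation", []):
            fuel = reading.get("fuel_type", "unknown")  # e.g. "solar", "wind"
            mix[fuel] = mix.get(fuel, 0.0) + float(reading.get("output_mw", 0.0))
        return mix

    if __name__ == "__main__":
        for fuel, mw in sorted(generation_mix().items()):
            print(f"{fuel}: {mw:.1f} MW")

A researcher could run such a poll on a schedule and join the snapshots with weather or demand datasets for the kind of live-event analysis described above.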
https://electricalcontractingnews.com/news/western-power-distribution-ramps-up-programme-with-power-flow-data-access/
Accelerate your knowledge and understanding of the power of Blockchain and be part of the leading insurance Blockchain ecosystem network. B3i is developing an insurance ecosystem network to trade risks. You can participate at an early stage by becoming a member of our community. You will get exclusive access to information about our initiative and be invited to evaluate and provide feedback on the latest Blockchain-based insurance applications. Get access to discussions with leading industry experts through exclusive B3i conferences, our member network and webinars. B3i Services AG is a startup formed to explore the potential of using Distributed Ledger Technologies within the re/insurance industry for the benefit of all stakeholders in the value chain.
https://beamediagroup.com/projects/brochure-community-membership-b3i/
Peacock mantis shrimp clubs are much like Thor’s hammer, Mjölner: incredibly strong and incredibly tough, accelerating as fast as a .22 caliber bullet. Scientists have been researching what exactly makes this colorful shrimp so different, seeing as it is powerful enough to crack open the exoskeleton of a crab, or break a human finger, despite being only four inches long, according to Discovery News. According to Scientific American, there are three sections to the shrimp’s club which, when put together, form an incredibly strong, crack-resistant weapon. The impact region of the club is made of hydroxyapatite, or HA, which is also found in vertebrate bones and teeth. Behind the HA is an array of rods of chitosan, arranged much like a ream of paper where every page lies in a different direction, to protect against fractures. The sides of the claw are much less stiff, acting more like shock absorbers. Using these design techniques, Discovery News reports that Kisailus and his colleagues are working on materials engineered like the mantis shrimp's club, which could be used in aerospace engineering, automobiles, military body armor and even sports helmets.
https://www.inquisitr.com/250197/mantis-shrimp-the-thor-of-crustaceans/
Abstract: Luxury products represent substantial worldwide sales; major markets for luxury products are no longer limited to Western countries, but have also expanded to Eastern “young generation” markets (Zhan and He, 2012). With a rapidly growing economy and globalisation, Chinese young consumers have become an important target for producers of luxury products. According to Wiedmann, Hennings, and Siebels (2009), consumption motivations are derived from values that are connected to cultural background. Nowadays, because Chinese consumers are more engaged with foreign societies, Western culture also exerts a significant influence on their preferences, motivation and behaviour (Zhan and He, 2012). Although there is extant research comparing cross-cultural influences on the luxury consumption motivation of Western and Eastern consumers, the literature on luxury purchase motivation is limited to single countries, and scholars have ignored the influence of a foreign culture and acculturation on consumers (Beverland, 2004). This cultural study investigates (a) the cultural orientation of Chinese young consumers living in the UK; (b) their current luxury consumption motivation; (c) the relationship between cultural orientation and luxury consumption motivation; and (d) whether acculturation moderates that relationship. An online questionnaire was used to collect data. This study chose two groups of Chinese young consumers living in London: group one comprises consumers who have lived in London for less than one year, and group two consumers who have been in London for more than five years. In total, 488 valid responses were collected via an official Facebook group named London Chinese Community. Structural equation modelling (SEM) was adopted in this study; the findings provide deeper insight into the influence of acculturation on the luxury consumption motivations of Chinese young consumers and have significant implications for both theory and practice.
https://bura.brunel.ac.uk/handle/2438/15586
“Peru is a beggar seated on a golden bench.” This famous phrase by the Italian scientist Antonio Raimondi is especially relevant to describing the socio-economic reality of the Peruvian mining industry. While the industry is perceived to be thriving in terms of contribution to GDP and hence output, there is a different story to be told in terms of development for the neighbouring mining communities which are responsible for natural resource extraction. Most surprisingly, it is not a new story, but one that has persisted over the centuries. Probably unbeknownst to the reader, Peru is the world's second largest exporter of copper, silver and zinc and the fourth largest exporter of gold. The industry accounted for 62% of total exports in 2017, constituted 10% of GDP and 5% of employment generation in the same year. Even though mining projects by 2021 are valued at almost $70 billion, it mostly benefits large multinational companies and local giants. On the other hand, the miners, responsible for the manual labour, will probably not reap the most gains. Today, 50% of the population in these mining communities lives under the poverty line, 15% remain illiterate and, overall, the communities have a disheartening human development score of less than 0.5, while the country averages 0.75 out of 1. History seems to repeat itself, as these regions continue to be the poorest in the country, 500 years after the institutionalization of the Spanish mita, used to colonize the native population. Miners, similar to other poor communities in Peru, face a significant obstacle to development: they remain at a historical disadvantage in obtaining their rights against a persistent pattern of extractive institutions and exploitation. In the present, miners stand powerless against giant corporations which do not have it in their agenda to contribute to the progress of these communities but rather to generate enormous profits. In addition, informal mining and conflict-ridden communities neighbouring major mining projects are two crucial phenomena to consider in terms of limited progress for Peru. Although the Ministry of Energy and Mines (MINEM) aims to formalize almost 10,000 miners this year, informal miners constitute at least 750,000 of the mining community, while those directly formally employed lag behind at 190,000. This means that at least 75% of all miners in Peru are waiting for their rights to be recognized by the state. If the efforts by MINEM are successful, formalized miners will have rights to extract resources over a certain area through concessions, which most miners are unable to do at the moment. Informality, alongside environmental concerns and poor living conditions, has sparked conflicts in mining communities and deterred potential investment. It is estimated that $12 billion in investment could go forward if conflicts in the south of the country were resolved. Mining conflicts are most prevalent in this region given that this is where most mining activity takes place. These conflicts are mostly caused by environmental concerns, dissatisfaction of the miners with the role of the State and direct confrontations with major enterprises. While these are mostly protests, some have escalated to pose a risk to human life, most notably in Tia Maria and Conga.
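As a quick back-of-the-envelope check on the informality figures quoted above (illustrative Python, using only the numbers in this article):

    # Share of Peruvian miners working informally, per the figures above.
    informal = 750_000  # informal miners (at least)
    formal = 190_000    # directly, formally employed miners
    share = informal / (informal + formal)
    print(f"informal share: {share:.0%}")  # ~80%, consistent with "at least 75%"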
While the government is making significant strides to try to overcome conflict and informality, there remains the question of whether the reforms will achieve meaningful progress and institutional change. In order to address this issue, the government is currently working on the establishment of three main types of funds: a macroeconomic stabiliser, the “Fund of Fiscal Stabilization“; eight regional social funds; and the most recently created “Fund of Social Advancement”, set to begin operations in 2019. While the latter two intend to contribute to the provision of basic public goods and services in mining communities – including water, sanitation, infrastructure and education – the former is a sovereign wealth fund intended as a risk stabiliser if a catastrophe were to hit the economy. The Fund of Social Advancement seems the most promising, given that it intends to merge the efforts of the regional funds into a centralized mechanism, for which $15.1 million has already been earmarked. Nevertheless, a significant weakness of these funds is that, while they seem specialized in the provision of essential public goods, they do not tackle the long-term challenge of breaking the barriers of path dependency. Essentially, a potential solution could be to centralize efforts into a macro-encompassing fund which guarantees security and the development of future generations. For instance, Peru could mimic the Norwegian sovereign wealth fund model by guaranteeing that a significant share of the profits from natural resource extraction is devoted to the sustainable development of the mining industry and the communities involved. Bottom line: Even though the mining industry in Peru is perceived as thriving by dollar counts, there is a different story to be told in terms of the development of the mining communities neighbouring major projects.
https://one-handed-economist.com/?p=666
Patterns of Circulating Corticosterone in a Population of Rattlesnakes Afflicted with Snake Fungal Disease: Stress Hormones as a Potential Mediator of Seasonal Cycles in Disease Severity and Outcomes. Snake fungal disease (SFD) is an emerging threat to snake populations in the United States. Fungal pathogens are often associated with a physiological stress response mediated by the hypothalamo-pituitary-adrenal (HPA) axis, and afflicted individuals may incur steep coping costs. The severity of SFD can vary seasonally; however, little is known regarding (1) how SFD infection relates to HPA activity and (2) how seasonal shifts in environment, life history, or HPA activity may interact to drive seasonal patterns of infection severity and outcomes. To test the hypothesis that SFD is associated with increased HPA activity and to identify potential environmental or physiological drivers of seasonal infection, we monitored baseline corticosterone, SFD infection severity, foraging success, body condition, and reproductive status in a field-active population of pigmy rattlesnakes. Both plasma corticosterone and the severity of clinical signs of SFD peaked in the winter. Corticosterone levels were also elevated in the fall before the seasonal rise in SFD severity. Severely symptomatic snakes were in low body condition and had elevated corticosterone levels compared to moderately infected and uninfected snakes. The monthly mean severity of SFD in the population was negatively related to population-wide estimates of body condition and temperature measured in the preceding month and positively correlated with corticosterone levels measured in the preceding month. Symptomatic females were less likely to enter reproductive bouts compared to asymptomatic females. We propose the hypothesis that the seasonal interplay among environment, host energetics, and HPA activity initiates trade-offs in the fall that drive the increase in SFD prevalence, symptom severity, and decline in condition observed in the population through winter.
Pure silicon’s crystal structure is three-dimensional. Silicon (and germanium) belongs to column IVa of the Periodic Table, which is the carbon family of elements. The main property of these elements is that every atom has four electrons to share with nearby atoms in creating bonds. For a simple description, the bond between two atoms of silicon is one in which each atom offers an electron for sharing with the other atom, so the two shared electrons are shared between the two atoms equally. This sort of sharing is called a covalent bond, which is a very stable bond that tightly holds the two atoms together, and a lot of energy is consequently required to break this bond. This forms the silicon crystal, but not the semiconductor. In the silicon crystal, all the outer electrons of every silicon atom are used for creating covalent bonds with other atoms, so no electrons are available to travel from one position to another as an electrical current. Therefore, a pure silicon crystal is considered a really good insulator. A pure silicon crystal is called an intrinsic crystal. To make the silicon crystal conduct electricity, the electrons must be allowed to move from one position to another inside the crystal, in spite of the covalent bonds between atoms. One method of doing this is by introducing an impurity into the crystal structure, such as arsenic or phosphorus. These elements belong to group Va of the Periodic Table and possess five outer electrons for sharing with other atoms. In this method, four of the five electrons bond with nearby silicon atoms as before, but the fifth electron is left without a bond. With just a small applied electrical voltage, this electron can easily be moved. Since the resulting crystal has extra current-carrying electrons, each with a negative charge, it is called N type silicon. Other elements, like gallium, have only three electrons that can be shared with nearby atoms. The three electrons create covalent bonds with nearby silicon atoms, but the anticipated fourth bond cannot be created, thus leaving a hole in the crystal’s structure. This way, holes seem to move as a positive charge through the crystal. So, this type of semiconductor material is called P type silicon.
Semiconductor Composition
Semiconductors like silicon (Si) are composed of separate atoms bonded together in an even, periodic structure to create an arrangement in which every atom is encircled by 8 electrons. Each individual atom is made up of a nucleus consisting of a core of positively charged particles (protons) and particles having no charge (neutrons), surrounded by the electrons. The number of protons and electrons is the same, so that the atom is electrically neutral overall. The electrons which surround every atom in the semiconductor are part of a covalent bond. A covalent bond consists of two atoms sharing two electrons; every atom creates 4 covalent bonds with the 4 surrounding atoms. Hence, 8 electrons are shared between every atom and its 4 surrounding atoms. Understanding the arrangement of these atoms is important to understanding the properties of the different semiconductor materials. Earlier, semiconductors were manufactured from the element germanium, but silicon is now preferred in modern applications.
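To put rough numbers on the N-type doping described above, the short calculation below applies the standard room-temperature mass-action relation for silicon (n·p ≈ ni²), assuming full donor ionization; the doping density is just an example value, not taken from the text:

    # Carrier concentrations in N-type silicon at ~300 K (approximate).
    # Assumes full ionization of donors; all densities in cm^-3.
    n_i = 1.0e10  # intrinsic carrier density of silicon near room temperature
    N_D = 1.0e16  # example phosphorus (donor) doping density

    n = N_D           # electrons are supplied almost entirely by donors
    p = n_i**2 / n    # holes follow from the mass-action law n*p = n_i^2

    print(f"electrons n = {n:.1e} cm^-3")
    print(f"holes     p = {p:.1e} cm^-3")  # ~1e4 cm^-3: vastly outnumbered

Even this modest impurity level (roughly one donor per five million silicon atoms, taking silicon's atomic density as about 5e22 cm^-3) raises the electron population six orders of magnitude above the intrinsic crystal, which is why doping turns a near-insulator into a useful conductor.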
https://sinovoltaics.com/learning-center/basics/semiconductor-structure/
Within a relatively brief time, semiconductor materials have made possible the creation of a wide range of optical and electronic devices which have played a major role in the shaping of our world. The impact of semiconductor devices has been felt from the battlefield to the playground and from the kitchen to the cosmos. In the earliest stages, semiconductor technology was limited by the use of single crystalline materials. These materials were, of necessity, highly pure and possessed a morphology with extremely regular and long-range periodicity. The dual and interdependent constraints of periodicity and stoichiometry restricted the compositional range, and hence physical properties of crystalline semiconductor materials. As a result single crystalline devices were expensive, difficult to fabricate and limited in their properties. While conventional wisdom at the time dictated that semiconductor behavior could only be manifested in highly ordered materials, it was recognized by S. R. Ovshinsky that the requirements of periodicity could be overcome and that semiconductor behavior is manifested by various disordered materials. In this regard, see "Reversible Electrical Switching Phenomena and Disordered Structures" by Stanford R. Ovshinsky; Physical Review Letters, vol. 21, No. 20, Nov. 11, 1968, 1450 (C) and "Simple Band Model for Amorphous Semiconducting Alloys" by Morrel H. Cohen, H. Fritzsche and S. R. Ovshinsky; Physical Review Letters, vol. 22, No. 20, May 19, 1969, 1065 (C). Disordered materials are characterized by a lack of long-range periodicity. In disordered semiconductors the constraints of periodicity and stoichiometry are removed and as a result, it is now possible to place atoms in three dimensional configurations previously prohibited by the lattice constants of crystalline materials. Thus, a whole new spectrum of semiconductor materials having novel physical, chemical and electrical properties has been made available. By choice of appropriate material compositions, the properties of disordered semiconductors may be custom tailored over a wide range of values. Disordered semiconductors may be deposited by thin film techniques over relatively large areas and at low cost, and as a result many types of new semiconductor devices have become commercially feasible. A first group of disordered semiconductors are generally equivalent to their crystalline counterparts while a second group manifest physical properties that cannot be achieved with crystalline materials. As a result of the foregoing, disordered semiconductor materials have come to be widely accepted and a great number of devices manufactured therefrom are in significant commercial use. For example, large area photovoltaic devices are routinely manufactured from amorphous silicon and germanium-based alloys. Such materials and devices are disclosed, for example, in U.S. Pat. Nos. 4,226,898 and 4,217,374 of Ovshinsky et al. Disordered alloy materials have also been used to fabricate photodetector arrays for use in document scanners, drivers for LCD displays, cameras and the like. In this regard see U.S. Pat. No. 4,788,594 of Ovshinsky et al. Disordered semiconductor materials have also been used in devices for the high volume storage of optical and electronic data. Amorphous materials are presently utilized in a manner to take advantage of the great variety of interactions between constituent atoms or molecules in contrast to the restricted number and kinds of interactions imposed by a crystalline lattice. 
In the present invention, the advantages of crystalline and amorphous properties can be combined for those devices and applications in which periodicity is essential to the physics. Periodicity can be placed in an amorphous matrix through the utilization of the present invention. The material can include spatially repeating compositional units, atoms, groups of atoms or layers without the overall bulk inhibition of crystalline periodicity. Also, individual atoms or groups of atoms in various configurations can be provided, which can be combined with other atoms or groups of atoms and be dispersed throughout the material. As stated, the individual atoms or groups of atoms in these materials need not be in a regular pattern, but can have a varying spatial pattern, such as being graded or nonsequential throughout the material. By the proper choice of atoms or groups of atoms, their orbitals and isolated configurations, anisotropic effects not permitted in any prior type of material can be produced. These procedures provide varying geometrical environments for the same atom or a variety of atoms, so that these atoms can bond with other surrounding atoms in different coordination configurations as well as unusual nonbonding relationships resulting in entirely new chemical environments. The procedures provide means for arranging different chemical environments which can be distributed and located throughout the material in the spatial pattern desired. For example, one part or portion of a material can have entirely different local environments from other portions. The varying electronic states resulting from the various spatial patterns which are formed and the various chemical environments which can be designed can be reflected in many parameters as a type of density of states or change of states in the energy gap of a semiconductor, except that this density of states can be spatially arranged. In essence, the material of the invention is a compositionally modulated material utilizing the very concepts of irregularity, inhomogeneity, "disorder" or localized order, which have been avoided in the prior art, to achieve benefits which have not been exhibited in prior materials. The local environments need not be repeated throughout the material in a periodic manner as in the compositionally modulated materials of the prior art. Further, because of the above-described effects of the specific types of disorder and their arrangement in a spatial pattern, the materials as described by this invention cannot be thought of as truly amorphous materials as typically produced by the prior art, since the material is more than a random placement of atoms. The placement of atoms and orbitals of a specific type that can either interact with their local environment or with one another depending upon their spacing throughout an amorphous material and an amorphous matrix can be achieved. The composite material appears to be homogeneous, but the positions of the orbitals of the atoms can have relationships designed to emphasize a particular parameter, such as spin compensation or decompensation. The materials thus formed give a new meaning to disorder based on not only nearest neighbor relationships, but "disorder" among functional groups, which can be layers or groups, on a distance scale which can be as small as a single atomic diameter. Hence, a totally new class of "synthetic nonequilibrium multi-disordered" materials has been made available.
It has been found that properties of semiconductor materials in the disordered state will depend upon their morphology and local chemical order and can be affected by various methods of preparation. For example, non-equilibrium manufacturing techniques can provide a local order and/or morphology different from that achieved with equilibrium techniques; and as a result, can change the physical properties of the material. In most instances, an amorphous semiconductor will have a lower electrical conductivity than the corresponding crystalline material and in many instances, the band gap energy, optical absorption coefficient and electronic activation energy of corresponding amorphous and crystalline materials will differ. For example, it has been found that amorphous silicon materials typically have a band gap of approximately 1.6-1.8 eV while crystalline silicon has a band gap of 1.1 eV. It is also important to note that amorphous silicon materials have a direct band gap while the corresponding crystalline material has an indirect band gap and as a result, the optical absorption of amorphous silicon is significantly higher than that of crystalline silicon at or near the band edge. It should also be noted that the dark electrical conductivity of undoped amorphous silicon is several orders of magnitude lower than that of crystalline silicon. It can thus be seen that the various physical properties of silicon strongly depend upon its morphology and local order. Similar relationships are found in a large number of other semiconductor materials. The principle of the present invention resides in the ability to control the local order of a semi-conductor material from that corresponding to a completely amorphous phase through various other local organizations including intermediate order to a state where the local order is so repetitively periodic that the material is in the single crystalline state. The most important and interesting area of the present invention is the ability conferred thereby to control the local order of a semi-conductor material to produce a material which has valuable properties different from either the amorphous or the crystalline states. The various properties of amorphous and crystalline silicon confer different advantages in various devices. The high mobility of carriers in crystalline silicon is important in high speed semiconductor circuits while the high level of optical absorption of amorphous silicon is ideal for photovoltaic devices since complete light absorption may be accomplished by relatively thin layers of material, making for a lightweight, low cost device. In some instances, one property of a given morphology and local order of semiconductor may be ideal for a particular purpose whereas the value of another property of that same material may not be so well suited. For example, the aforenoted high optical absorption of amorphous silicon is ideal for a photovoltaic device; however, the fairly wide band gap of amorphous silicon does not permit it to address the longer wavelength portions of the solar spectrum. 
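The band-gap figures just quoted translate directly into optical absorption edges via the standard relation λ ≈ hc/E_g ≈ 1240 nm·eV / E_g; a quick, illustrative calculation using the values from the text:

    # Absorption-edge wavelength from band gap: lambda (nm) ~= 1240 / E_g (eV).
    for name, e_gap_ev in [("crystalline Si", 1.1),
                           ("amorphous Si (low)", 1.6),
                           ("amorphous Si (high)", 1.8)]:
        print(f"{name}: E_g = {e_gap_ev} eV -> edge ~ {1240 / e_gap_ev:.0f} nm")

    # Crystalline Si absorbs out to ~1130 nm, while amorphous Si cuts off
    # near 690-780 nm; consistent with the point that amorphous silicon
    # misses the longer-wavelength part of the solar spectrum.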
The use of narrower band gap crystalline material in photovoltaic devices increases the portion of the usable light spectrum, and the high conductivity, high mobility and long minority carrier diffusion length in crystalline silicon decreases the series resistance of the photovoltaic device, thereby increasing its overall efficiency; but the trade-off is that crystalline cells are relatively thick because of their low absorption, and hence they are fragile, bulky and expensive. Previously, Ovshinsky et al. produced materials which included clusters of atoms, typically between 12 and 50 angstroms in diameter. See U.S. Pat. No. 5,103,284, issued Apr. 7, 1992 and entitled "SEMICONDUCTOR WITH ORDERED CLUSTERS." The clusters or grains had a degree of order which is different from both the crystalline and amorphous forms of the material. The small size and ordering of the clusters allowed them to adjust their band structure to thereby relax K vector selection rules. Ovshinsky et al. had found that various physical properties of semiconductor materials are decoupled from morphology and local order when those materials are comprised of ordered clusters. This selection rule relaxation occurred because the materials included a volume fraction of the intermediate order materials which was high enough to form percolation pathways within the material. The onset of the critical threshold value for the substantial change in the physical properties of materials in the ordered cluster state depends upon the size, shape and orientation of the particular clusters. However, it is relatively constant for different types of materials. There exist 1-D, 2-D and 3-D models which predict the volume fraction of clusters necessary to reach the threshold value, and these models are dependent on the shape of the ordered clusters. For example, in a 1-D model (which may be analogized to the flow of charge carriers through a thin wire), the volume fraction of clusters in the matrix must be 100% to reach the threshold value. In the 2-D model (which may be viewed as substantially conically shaped clusters extending through the thickness of the matrix), the volume fraction must be about 45% to reach the threshold value, and finally, in the 3-D model (which may be viewed as substantially spherical clusters in a sea of matrix material), the volume fraction need only be about 16-19% to reach the threshold value. Therefore, the materials disclosed and claimed in U.S. Pat. No. 5,103,284 have at least 16-19 volume percent of intermediate range order material for spherical clusters, at least 45 volume percent for conically shaped clusters and 100 volume percent for filamentary clusters. The instant inventors have now found that materials including any volume percent of the intermediate range order material (i.e., the ordered clusters) will have properties which (while not necessarily decoupled) differ from materials with no intermediate range order material. These materials are particularly useful in the form of thin films used in devices such as: photovoltaic devices, thin-film diodes, thin-film transistors, photoreceptors, etc.
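The geometry-dependent thresholds attributed above to U.S. Pat. No. 5,103,284 can be collected into a small lookup table; the Python below is only an illustrative encoding of the stated 1-D/2-D/3-D values, not anything taken from the patent itself:

    # Volume fraction of intermediate-range-order (clustered) material needed
    # to form percolation pathways, by cluster geometry, as stated in the text.
    THRESHOLDS = {
        "filamentary (1-D)": 1.00,  # thin-wire model: 100%
        "conical (2-D)":     0.45,  # cones spanning the film: ~45%
        "spherical (3-D)":   0.16,  # spheres in a matrix: ~16-19% (low end)
    }

    def percolates(volume_fraction, geometry):
        """True if the cluster volume fraction reaches the stated threshold."""
        return volume_fraction >= THRESHOLDS[geometry]

    print(percolates(0.20, "spherical (3-D)"))  # True: above ~16%
    print(percolates(0.20, "conical (2-D)"))    # False: below 45%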
"Synthesis, Assembly and Applications of Plasmonic Nanoparticles" "Controlling Nanoparticles with Atomic Precision: The Case of Gold" "Chemistry of Carbon Nanotubes and Graphenes" "Photo-Induced Phase Transfer of Inorganic Nanocrystals: A General Approach" "Controlled Synthesis of Silica Capsules: Taming the Reactivity of SiCl4 Using Flow and Chemistry" "Single Molecule Surface Chemistry of DNA" "New Structures for Energy Storage Devices and Fuel Cells" "Binding Magnetic Relaxation Nanosensors as Tools to Interrogate Molecular Interactions" "Protein Crystallization: Can Spherical Cows Help?" "Semiconductor Nanostructured Materials for Next Generation Photovoltaics" "Biophysical Studies of Virus Particles and their Maturation: Insights into Elegantly Programmed Nano Machines" "Using Nanoparticles to Interface with Biology" "Charge Transfer Interactions in Quantum Dot-Dopamine Conjugates and Light Scattering Characterization of Metallic Nanoparticles." "Bio-functionalized Gold Nanoparticles for Somatic Gene Therapy" "Ion Content of Multilayer Polyelectrolyte Membrane Skins by Derivatization with Photoleavable Side Groups" & "Ligand design and photochemistry to improve nanoparticle biocompatibility" "From nanocrystals to nanorods and nanowires: quantized semiconductors with possible applications in optical and electrical devices" "Microwave Enhanced Gasification of Carbon" "Aqueous synthesis and characterization of gold nanoclusters with tunable emission" "Multidentate polymeric ligands for long-term bioimaging using highly stable and functionalized quantum dots" Ghoussoub: "Free-standing nanoblankets of PEMU'S" Ashley: "Microwave Selective Heating and Materials Synthesis" "Design and Synthesis of Spin Crossover Complexes for Multifunctional Materials" "Synthesis and characterization of superparamagnetic nano-PEC" "Rational Design of Highly Incompatible Block Polymers for Sub-10 nm Lithographic Patterning" "New Materials and Devices for Chemical Sensing" "Adventures in Physical Organic Chemistry: Chemical Reactivity to Organic Semiconductors" " Interfaces in Biological/Biomimetic Nanocomposites and Rechargeable Batteries" "The Frontier of Actinide Magnetism" "Reactive Organozincates: New Methods and Novel Compounds" "Plasmon-Mediated Surface Chemistry for Solar Photocatalysis" "Understanding the protein-nanoparticles interactions using spectroscopy" "Thermolelectric materials produced from Mg/Al flux reactions" "Samll quantum dots and Super-resolution Microscopy" "Amphiphilic polymers with mixed coordination as a platform for surface-functionalizing a variety of inorganic nanocrystals" "New Computational Methods and Their Application to Renewable Energy, Semiconductors, and the Porous Materials Genome" "Graphene Molecules: Synthesis, Electronic Structure and Applications" "Photon Upconversion via Dual Sensitized Self-Assembled Bilayers on Metal Oxide Surfaces" "Improved Li-ion Batteries from Understanding and Control of Li+ Transport" "Novel Approaches for Thermoelectric and Strongly Correlated Magnetic Materials" "Nanospace within Metal-Organic Frameworks: Plenty of Room for Imagination" "Synthesis of High Refractive Index Lens Material through Thio-ene Based Polymerization" "Nickel Nanoparticles: Synthesis, EMI shielding polycomposites and energy coupling" "Mechanistic Studies of Electrochemical Interfaces for Rational Design of Energy Materials" Chemey:"Low-Temperature Explorations of f-Element Wasteforms"/ Hardy: "TBA" "Predictive materials modeling with a hierarchy of theory and 
computations" "Expanding the Toolbox for Photoredox Catalysis" "Understanding Protein Corona Formation on Inorganic Nanocrystals" "Surface Characterization of Quantum Dots and Au Nanoparticles using NMR spectroscopy" "Du:A multicoordinating polymer ligand for metal and metal-oxide nanoparticles/ Jin:Peptide-modified nanoparticles for probing MMP-14 proteolytic activity " "Designing quantum dot probes for fluorescence bioimaging" "Standing, Lying, and Sitting: Reenvisioning Amphiphilicity for Nanostructured Synthetic Materials" Betash Shakeri �TBA�/Jin Zheng "TBA" "Molecular and Hybrid approaches to energy and electron transfer for solar energy conversion" "Tuning Magnetism and Superconductivity in Layered Metal Chalcogenides through Intercalation Chemistry" "Diffusion of polyelectrolytes in their complexes and multilayers" "Learning To Predict Crystal Structures" "Photoredox-Mediated Metal-Free Ring-Opening Metathesis Polymerization" "Parahydrogen Enhanced Nuclear Magnetic Resonance Spectroscopy: A Powerful Operando Technique for the Study of Heterogeneous Hydrogenation Catalysis and More" "Self-Assembly of Precision Nanomaterials for Energy Applications" "Democratizing large-scale data and machine learning in materials research" "Controlling nanomaterials and interfaces for biological applications" "A Journey into the World of Peptidomimetic Polymers: Twist and Turns and Discovery" "Highly Instrumented Microwave Reactors for Heterogeneous Gas-Solid Catalysis" "Nanocrystal Growth on the Inside and Outside: Theory of Cation Exchange and of Nanoplatelet Formation" "Understanding the role of interfaces in nanomaterial processes" "The molecular pathways of mineral nucleation" "From imaging excitons at the nanoscale to emerging device applications" "Infrared Spectroscopy: from ultrafast dynamics to molecular imaging" "Designing Functional Materials with Electronically Excited Nanoscale Metals" "Designing Hybrid Nanoparticles for Therapy and Diagnosis" "Hidden crystallographic gems to link materials’ properties: stannides, germanides, and antimonides" "Effects of increasing dosage of zirconium oxometal cluster in a cross-linked thiol-ene polymer network" "Developments on New Electrolyte Systems to Limit Lithium Polysulfide Dissolution in Li/S Electrochemical Cells" Wang:"Improving the Stability of CsPbX3 Perovskite Quantum Dots via Ligand Design"/Ramakrishna:"Molecular dynamics of a Multiferroic material [(CH3)2NH2]Mn(HCOO)3 near magnetic and ferroelectric ordering using 1H and 55Mn NMR" "Tuning Electrical and Thermal Properties in Nanostructured Materials" "Surface chemistry of colloidal nanocrystals, established concepts and new directions" "Understanding the Phase Transformation of Prussian Blue Analogues to Nanocarbides to Bimetallic Nanoparticles" "Influence of Self Assembled Multilayers on Recombination Dynamics, Porosity, and Diffusion Rates at Dye-Semiconductor Interfaces" "Adhesive Non-Stick Polymer / The Mechanical Properties of PSS/PDADMA Polyelectrolyte Complex" "Synthesis and Characterizations of Fast Li Ion Conductors" "Semiconductor Nanocrystals as Triplet Sensitizers in Self-Assembled Bilayers for Photon Upconversion" "Nanostructured Surfaces Containing Opposite Charges from Self-Assembled Block Copolymers" "Intermetallic carbides and borides grown from Pr/Ni flux" Matrix Effects on Solid-Solid Phase Transformations in Magnetic, Photomagnetic and Multiferroic Hybrid Perovskites and Prussian Blue Analogues" "In Situ MRI Studies of All-solid-state Rechargeable Batteries" 
"Solid-state NMR Studies of Organic Electrodes as a more Sustainable Option for Rechargeable Batteries" "Molecular Photon Upconversion in Self-Assembled Mutilayers on Metal Oxide Surface" "Synthesis, Crystal-Chemistry, and Reactivity of Bimetallic Fluoroacetates: From Serendipity to a New Family of Organic–Inorganic Hybrids" "Microwave Studies of the Haber-Bosch Process / " "Loose Ions on a Disordered Landscape: An Enabling Paradigm for New All Solid State Batteries" "Shape Control of Block Copolymer Particles" "The Formation of a Nanoparticle: Exploring Microwave Methods for Phosphors and Templated Synthesis of Carbides and Metals"
https://www.chem.fsu.edu/seminars-previous.php?program=6&sort_by=seminar_speaker_lname
Center Point, Texas Substance Use Statistics And Treatment Options
Drug misuse, especially involving heroin and meth, has been a problem in Kerr County during the last few years. There are some rehab facilities in this county, as well as plenty of other addiction treatment options throughout Texas.
Substance Use Statistics In Center Point And Kerr County, Texas
- Kerr County had 24 drug overdose deaths from 2016 through 2018.
- From January to June 2016, roughly half of all county court cases were related to drugs.
- There were 132 drug-related cases in the 216th Judicial District during this time, including four involving marijuana; the rest mainly involved methamphetamine or heroin.
- From January to June 2016, the 198th Judicial District had 49 drug-related cases, most involving methamphetamine or heroin.
- Sixteen percent of adults living in Kerr County admitted to binge drinking in 2017.
- Between 2014 and 2018, Kerr County had nine alcohol-impaired driving fatalities out of 47 driving fatalities overall.
- In 2017, the written opioid prescription rate in Kerr County was 77.2 per 100 residents.
- As of 2020, Kerr County has 24 providers licensed to offer buprenorphine for opioid use disorder.
- The county also has four addiction treatment centers, including two facilities that offer medication-assisted treatment options.
Addiction Treatment Options In Center Point, Texas
Finding the best rehab program for an alcohol or drug addiction means looking into ones that offer individualized treatment. These personalized plans include services aimed at meeting your particular needs. You can find programs with this kind of treatment plan at The Treehouse, which is under 300 miles from Center Point.
Drug And Alcohol Detox Programs
Symptoms ranging from mild to severe are part of the withdrawal process, so it's important to have reliable care at this time. Medically supervised detox programs provide prompt treatment when needed, as well as support. You should be able to start a rehab program right after finishing an outpatient or inpatient detox program.
Inpatient Addiction Treatment
Inpatient rehab programs provide the support and treatment level needed for more severe addictions. When you're in this kind of program, you'll live in a residential facility with supervision around the clock. Professionals, including social workers, counselors, and doctors, are on hand to offer prompt treatment. You can also count on getting peer support during inpatient rehab. These programs include behavioral therapy, life skills training, and other therapeutic approaches.
Outpatient Drug Rehab
Outpatient rehab programs provide the right amount of care for those who have a moderate addiction instead of a severe one. With this kind of program, you'll travel to an outpatient clinic for therapy sessions a few times a week or more. Individual or group therapies are offered with this type of treatment.
Medication-Assisted Treatment
Medication-assisted treatment (MAT) programs can help you make a full recovery from an addiction to alcohol or opioids. MAT programs include the use of a medication, such as methadone, that can help ease cravings and other withdrawal symptoms. Therapy sessions are another part of MAT programs.
Aftercare Planning And Services
Aftercare planning and services are an important part of a long-term recovery from addiction.
These services offer continuing care when you're done with a rehab program. Some of the services that might be available with an aftercare program include recovery coaching, sober-living housing, and peer support services.
How To Pay For Alcohol Or Drug Rehab
Using private health insurance is generally the way people pay for rehab programs. If you're planning to use your insurance plan, make sure you check with the rehab center and your insurance company to see which types of services are included with your coverage.
Virtual Care Is Available With Vertava Health
Although rehab is usually done in person, this isn't always possible. When you're unable to go to an addiction treatment facility, you can still receive support and care with virtual services. Vertava Health provides online outpatient services and more to address substance misuse, mental health disorders, and other behavioral health problems.
The Treehouse
Hiking, survival skills training, zip lines, and other outdoor activities are part of our rehab programs at The Treehouse. Located around 290 miles from Center Point, our campus features cognitive behavioral therapy, expressive therapies, and more. Recreational activities are also available on site, including horseback riding and a fishing pond. For more information, please connect with us today.
https://www.rehabcenter.net/rehab-centers/texas-rehab-centers/center-point/
Hearing preservation in medial vestibular schwannomas. Vestibular schwannomas (VSs) with no or little extension into the internal auditory canal have been addressed as a clinical subentity carrying a poor prognosis regarding hearing preservation, which is attributed to the initially asymptomatic intracisternal growth pattern. The goal in this study was to assess hearing preservation in patients who underwent surgery for medial VSs. A consecutive series of 31 cases in 30 patients with medial VSs (mean size 31 mm) who underwent surgery between 1997 and 2005 via a suboccipitolateral route was evaluated with respect to pre- and postoperative cochlear nerve function, extent of tumor removal, and radiological findings. Intraoperative monitoring of brainstem auditory evoked potentials was performed in all patients with hearing. Patients were reevaluated at a mean of 30 months following surgery. Preoperative hearing function revealed American Academy of Otolaryngology-Head and Neck Surgery Foundation Classes A and B in 7 patients each, Class C in 4, and D in 9. Four patients presented with deafness. Hearing preservation was achieved in 10 patients (Classes A-C in 2 patients each, and Class D in 4 patients). Tumor removal was complete in all patients with hearing preservation, except for 2 patients with neurofibromatosis. In 4 patients a planned subtotal excision was performed due to the individual's age or underlying disease. In 1 patient a recurrent tumor was completely removed 3 years after the initial procedure. The cochlear nerve in medial VSs requires special attention due to the atypical intracisternal growth pattern. Even in large tumors, hearing could be preserved in 37% of cases (10 of the 27 patients with preoperative hearing), since the cochlear nerve in medial schwannomas may not exhibit the adherence to the tumor capsule seen in tumors of comparable size involving the internal auditory canal.
In this chapter, our instructors have put together lessons to help you explore leading theories in the field of cognitive development. You'll learn about the theories of Piaget and Lev Vygotsky related to the cognitive development of children, and you'll also discover helpful techniques to support the cognitive development of children in the classroom. By completing the lessons in this chapter, you'll also have examined:
- Assimilation and accommodation
- Scaffolding and the zone of proximal development
- Learning and information processing
- Social constructivism
- The mediated learning experience
- Strategies to enhance cognitive development
- Cognitive development from birth to six years old
- Object permanence and sensorimotor periods of infancy
- The cognitive development of children and teens
These lessons have been developed to help you easily comprehend and retain the information. You'll find engaging video lessons that help to illustrate the material being covered. You can quickly navigate through main topics in the video by utilizing the Timeline. As a helpful resource, video transcripts are available for note-taking or text-based learning. To gauge your understanding of the topics, take a self-assessment quiz after each lesson. When you finish the chapter, take a cumulative exam to see how well you've absorbed the information.
1. Using Cognitive Development Psychology in the Classroom
Do you ever feel bombarded with the amount of new information in a class? How do you process new information in order to create usable knowledge? These are the types of questions cognitive psychologists and teachers seek to answer. This lesson will explore and apply the major assumptions of cognitive development and psychology.
2. Piaget's Theory of Cognitive Development
Jean Piaget's theory of cognitive development focuses on how learners interact with their environment to develop complex reasoning and knowledge. This lesson will focus on the six basic assumptions of that theory, including the key terms: assimilation, accommodation and equilibration.
3. Lev Vygotsky's Theory of Cognitive Development
The role of culture and social interactions is imperative to cognitive development, according to psychologist Lev Vygotsky. This lesson will discuss how social interactions play a role in the cognitive development of children, provide an overview of Vygotsky's cultural-historical theory and describe the stages of speech and language development.
4. Assimilation & Accommodation in Psychology: Definition & Examples
How do assimilation and accommodation help a child adapt to his environment? You'll explore how established and changing patterns of information drive a child's intellectual growth as he learns about cats and dogs.
5. Zone of Proximal Development and Scaffolding in the Classroom
Psychologist Lev Vygotsky developed a theory of cognitive development which focused on the role of culture in the development of higher mental functions. Several concepts arose from that theory that are important to classroom learning. This lesson will focus on two concepts: zone of proximal development and scaffolding.
6. Cognitive Perspective of Learning & Information Processing
When you see or hear something in your environment, how does your brain recognize what you are seeing or hearing? This lesson introduces the cognitive perspective in psychology, including the difference between sensation and perception.
We'll also discuss the famous Gestalt principles of perception, things your brain does automatically every day that you may not have known had names.
7. The Role of Play in Cognitive Development
When we think of childhood, we often think of playing. But did you know that playing is vital to a child's healthy development? In this lesson, we will learn about the various types of play that aid a child's cognitive growth.
8. Social Constructivism and the Mediated Learning Experience
A well-accepted fact among educational psychologists is the idea that knowledge is not absorbed but rather constructed through a person's experiences with his or her environment. This knowledge may be constructed individually or collaboratively. This lesson will briefly explain the processes behind knowledge construction and provide information on how socially constructed knowledge can advance the cognitive development of learners.
9. Tools to Advance Cognitive Development
The word 'tool' has a connotation of something that aids us. Normally, we think of tools as something manipulated with our hands to help us build. In this lesson, we will learn about tools designed to promote cognitive development.
10. The Sensitive Periods of Development: Birth to Age 6
The phrase 'sensitive periods in human development' may sound like it refers to moody teenagers, but it actually refers to periods of time when a child easily absorbs information in a specific way. The most important sensitive periods occur between birth and age six. Learn more in this lesson.
11. Cognitive Development in Infants: Object Permanence & Sensorimotor Periods
There is no time like infancy - it is the first exposure people have to the world. According to Jean Piaget, there are six stages infants go through as they develop cognitively and learn to act in their environment.
12. Cognitive Development in Children and Adolescents
If you were to observe children growing into their teenage years, you would notice their thinking and understanding develops over time. In this lesson, you will learn about the specific stages of mental growth in children and adolescents as outlined by Jean Piaget.
https://study.com/academy/topic/cognitive-development-in-early-childhood.html
One in five young people have fatty liver disease (steatosis), with one in 40 having already developed liver scarring (fibrosis), research published today [15 January] has found. The study, published in The Lancet Gastroenterology & Hepatology, is the first to attempt to determine the prevalence of fatty liver disease and fibrosis in young healthy adults in the UK. Fatty liver disease is a condition in which fats build up in the cells of the liver. It is broadly split into non-alcoholic fatty liver disease (NAFLD), which is usually seen in people who are overweight or obese, and alcohol-related fatty liver disease, which is associated with harmful levels of drinking. If left untreated, both can lead to fibrosis (scarring of the liver) and, in severe cases, eventually cirrhosis of the liver, which is irreversible. Worldwide, NAFLD affects approximately a quarter of adults in developed countries. The research, conducted by Dr Kushala Abeysekera and researchers from the University of Bristol, looked at data collected from 4,021 participants of the Children of the 90s study, also known as the Avon Longitudinal Study of Parents and Children (ALSPAC). Participants from the Bristol-based health study, who had previously been assessed for NAFLD as teenagers using ultrasound, were invited for assessment using transient elastography with FibroScan as part of the Focus@24 clinic. Researchers first looked at those participants who did not report harmful alcohol consumption and found that one in five had non-alcoholic fatty liver disease. On widening the data to include all participants, they again found that over 20 per cent displayed evidence of fatty liver and one in 40 had already developed fibrosis, with those participants who had both fatty liver and harmful alcohol use at greatest risk of liver scarring. As a comparison, at 17 years of age, 2.5 per cent of participants had moderate to severe levels of fatty liver, whilst at the age of 24 this number had increased to 13 per cent. Dr Abeysekera, Honorary Lecturer in the Bristol Medical School: Population Health Sciences, explained: “Children of the 90s data has highlighted the potential importance of liver health amongst young adults. This age group remains a blind spot for clinicians, as they are typically considered a “healthy” age group that is rarely studied. If the obesity epidemic and culture of alcohol abuse aren’t tackled nationally, we may see increasing numbers of patients presenting with end-stage liver disease, and at earlier ages. “It is important to note that whilst we identified that 20 per cent of the cohort had fatty liver, only a small percentage of these individuals will go on to develop cirrhosis (irreversible liver scarring), and the vast majority of participants should be fine if they manage their diet and exercise appropriately. The next steps will be to take a closer look at how environmental and genetic factors may lead to individuals developing non-alcoholic fatty liver disease earlier in life.”
https://www.bristol.ac.uk/alspac/news/2020/fatty-liver-disease.html
We often use the term direct variation to describe a form of dependence of one variable on another. An equation that makes a line and crosses the origin is a form of direct variation, where the magnitude of y increases or decreases directly as the magnitude of x increases or decreases. Direct variation and inverse variation are used often in science when modeling activity, such as speed or velocity.
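Written out, with k as the constant of variation (the worked numbers below are an added illustration):

    \[
    y = kx \quad \text{(direct variation)}
    \qquad
    y = \frac{k}{x} \quad \text{(inverse variation)}
    \]

For instance, if y varies directly with x and y = 6 when x = 2, then k = 6/2 = 3, so y = 3x: a line through the origin with slope 3.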
First, there was a bug where the first Tag_Values tag would get ignored in combat (the UniSpace and SystCost tags do not count towards the 'first tag ignored' rule). So, using ECM I as an example, it has the tags 'SystCost = 30, UniSpace = 15, DefTgtRg *= 1.12'. The first two are UniSpace and SystCost, so those are dropped. As far as the game is concerned, it only has one tag, 'DefTgtRg *= 1.12'. However, as was stated at the beginning of this paragraph, because of the bug, the first tag value is ignored. That means that ECM I does absolutely nothing in combat. In fact, none of the Sensor/ECM/ECCM/Cloak ship items do anything in combat. Second, because of the way the formula for ship detection works, DefTgtRg (that is, decreasing the chance of being spotted by the enemy TF) was limited so that it could never provide more than a 1.33 benefit. In other words, if you used, say, ECM V, it should provide you with 'DefTgtRg *= 2.2'. However, the formula would reduce that to a benefit of only 1.33, or the equivalent of ECM III. Third, the system was set up so that OffTgtRg and DefTgtRg had multiple functions. The TgtRg part of each of them stands for 'target range'. But not only did they affect targeting range, they also affected spotting range. Originally, there was another variable called SptRg (with OffSptRg and DefSptRg). Instead of keeping the two concepts separate, they were both merged into TgtRg. This made it difficult to design a system that worked for spotting but didn't become overpowered for targeting (or vice versa). Finally, there was the problem with OffTgtRg. In the previous paragraph it was stated that TgtRg affected targeting range. That isn't precisely true. While DefTgtRg functioned correctly, OffTgtRg did not, leading to severe problems with targeting. The solution to all of these problems was multifaceted. TgtRg and SptRg were separated again. This means there are now four values to use instead of two:
OffTgtRg - Increases chance to hit.
DefTgtRg - Decreases chance of getting hit.
OffSptRg - Increases chance of spotting enemy TFs.
DefSptRg - Decreases chance of your TF being spotted.
There are two things to keep in mind about these tag values. First, for spotting range, the best value from the offensive (or the TF trying to spot) TF is compared against the worst value from the defensive (or the TF being spotted) TF. This means that only one ship needs to have a good OffSptRg (unless that one ship gets destroyed), but all ships need to have a good DefSptRg for cloaking to function properly. Second, targeting range, unlike spotting range, works on a per-ship basis, not a per-TF basis (that is, the individual ship firing has its tag value used, as does the individual ship getting fired at). Targeting range also appears to affect only beam weapons (this is unproven, but very likely), so ships that aren't likely to engage in beam combat don't need offensive targeting, although they would still benefit from defensive targeting (as it will affect fighters and missiles that target your ship). OffTgtRg was fixed so that it worked properly alongside DefTgtRg. The problem with ignoring the first tag value was also fixed. For this to have a proper effect on games, however, changes must be made to TechTables.txt. Some modders have made these changes for you; if they have not, you should use the TechTables.txt that is included with this patch (or, if you are comfortable modding, you can replace the required lines yourself).
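To illustrate the spotting rule described above, here is a minimal sketch in Python. This is not the game's actual code; the data layout, the function name, and the idea of returning a simple ratio are illustrative assumptions of ours, and only the best-versus-worst comparison itself comes from the notes above.

```python
# Illustrative sketch of the TF spotting comparison -- not actual game code.
# Ships are dicts carrying their combined OffSptRg/DefSptRg multipliers.

def spotting_advantage(spotting_tf, hiding_tf):
    """The BEST OffSptRg in the spotting task force is compared against
    the WORST DefSptRg in the task force being spotted."""
    best_off = max(ship["OffSptRg"] for ship in spotting_tf)
    worst_def = min(ship["DefSptRg"] for ship in hiding_tf)
    return best_off / worst_def  # > 1 favors the spotter (hypothetical metric)

# One good sensor ship is enough for the spotting TF...
spotters = [{"OffSptRg": 2.2, "DefSptRg": 1.0}, {"OffSptRg": 1.0, "DefSptRg": 1.0}]
# ...but a single ship with poor DefSptRg compromises the whole cloaked TF.
hiders = [{"OffSptRg": 1.0, "DefSptRg": 2.0}, {"OffSptRg": 1.0, "DefSptRg": 1.0}]
print(spotting_advantage(spotters, hiders))  # 2.2 / 1.0 = 2.2
```

This also makes the design consequence plain: adding one cloaked ship does not hide a task force, because the minimum, not the average, of DefSptRg is what counts.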
http://patcher.moo3.at/visibility.html
Norms are culturally established rules prescribing appropriate social behavior. Norms are relatively specific and precise and elaborate the detailed behavioural requirements that flow from more general and overarching social values. A norm fixes the boundaries of behavior. A norm gives an expectation of how other people act in a given situation. In order for a norm to be stable, people's actions must reconstitute the expectation without change. A set of such correct stable norm expectations is known as a Nash equilibrium. The norm in western society is that one should respect the dead, and it is a norm that one should dress in dark colours for a funeral. Norms and normlessness affect a wide variety of human behavior. Conduct norms can be classified as general conduct norms and specific conduct norms.

Do Ethical Principles Explain Moral Norm? A Test for Consent to Organ Donation - Blondeau, Danielle; Godin, Gaston; Gagné, Camille; Martineau, Isabelle. Moral norm is a strong predictor of intention with respect to certain behaviors. The results indicated that the interrelations among the ethics variables were significant. However, the results also indicated that moral norm was influenced only by beneficence. Conducting other studies in different cultural contexts and verifying other behaviors would shed light on whether beneficence still influences moral norm.

Charismatic Code, Social Norms, and the Emergence of Cooperation on the File-Swapping Networks - Lior Strahilevitz. Abstract: This paper addresses the question of why individual members of peer-to-peer file-swapping networks such as Napster, Gnutella, and Kazaa consciously choose to share their unlicensed copies of copyrighted content with anonymous strangers despite the absence of economic incentives for doing so. According to rational choice theory and much of social psychology, in the absence of face-to-face contact or other communication, strangers will be unlikely to contribute to a public good if such cooperation is somewhat costly.

Linking social norms to efficient conservation investment in payments for ecosystem services - Xiaodong Chen, Frank Lupi, Guangming He and Jianguo Liu. Abstract: An increasing amount of investment has been devoted to protecting and restoring ecosystem services worldwide. Little is known about the effects of social norms at the neighborhood level. As a first attempt to quantify the effects of social norms, we studied the effects of a series of possible factors on people's intentions of maintaining forest on their Grain-to-Green Program land plots if the program ends. We found that, in addition to conservation payment amounts and program duration, social norms at the neighborhood level had significant impacts on program re-enrollment, suggesting that social norms can be used to leverage participation to enhance the sustainability of conservation benefits from PES programs.

Search for legal norms - an impossibility result - Musharraf Rasool Cyan. Abstract: Legal norms or laws emerge in society through a process of aggregation of individual choices. Prior to the establishment of such norms individuals live on their own, all the time attempting to fend for themselves. This is the stage where the search for legal norms begins, to lend predictability and stability to behavior and conglomerate individuals into a society. I attempt to address the issue of aggregation of individual assertions into legal norms in a social choice framework. General acceptability by individuals is an attribute of legal norms.
The second condition that a legal norm must meet is efficacy. A legal norm can only be classed as such if it is enforceable. Norms which are enforced through force or other coercive means achieve the status of legal norms.

Do local tobacco regulations influence perceived smoking norms? Evidence from adult and youth surveys in Massachusetts - William L. Hamilton, Lois Biener and Robert T. Brennan. Smoking behavior has been shown to be influenced by individuals' perceptions of social norms about smoking. This study examined whether local regulations regarding clean indoor air and youth access to tobacco are associated with residents' subsequent perceptions of smoking norms. Multilevel models tested the association between perceived norms and the presence of strong local regulations in four policy domains. Results showed that youths perceived community norms to be significantly more antismoking if they lived in a town that had strong regulations in at least three of the four domains. Implementing and publicizing local regulations may help shape perceptions of community smoking norms.

Are social norms associated with smoking in French university students? A survey report on smoking correlates - Lionel Riou França, Bertrand Dautzenberg, Bruno Falissard and Michel Reynaud. Abstract: Background: Knowledge of the correlates of smoking is a first step to successful prevention interventions. The social norms theory hypothesises that students' smoking behaviour is linked to their perception of norms for use of tobacco. This study was designed to test the theory that smoking is associated with perceived norms, controlling for other correlates of smoking. University-based prevention campaigns should take multiple substance use into account and focus on the norms most likely to have an impact on student smoking.
http://www.sociologyindex.com/norms.htm
In 2007, atheist writer Adam Lee of Patheos’ Daylight Atheism wrote a post responding to and attempting to discredit a column from the Washington Post’s Michael Gerson in which Gerson argues that morality is ultimately untenable in the absence of God. In his reply, Lee commits a number of the blunders common to traditional atheistic moral arguments, fallacies that have been widely rebutted and thus will not be addressed here. In one of the arguments near the end of his post, however, Lee does raise an interesting point. Speaking to Gerson, he writes: You asked what reason an atheist can give to be moral, so allow me to offer an answer. You correctly pointed out that neither our instincts nor our self-interest can completely suffice, but there is another possibility you’ve overlooked. Call it what you will—empathy, compassion, conscience, lovingkindness—but the deepest and truest expression of that state is the one that wishes everyone else to share in it. A happiness that is predicated on the unhappiness of others—a mentality of “I win, you lose”—is a mean and petty form of happiness, one hardly worthy of the name at all. On the contrary, the highest, purest and most lasting form of happiness is the one which we can only bring about in ourselves by cultivating it in others. The recognition of this truth gives us a fulcrum upon which we can build a consistent, objective theory of human morality. Acts that contribute to the sum total of human happiness in this way are right, while those that have the opposite effect are wrong. A wealth of moral guidelines can be derived from this basic, rational principle. The utilitarian argument here presented for atheistic morality is a common (and insufficient) one, but Lee’s wording uniquely highlights one of its major flaws. Because he labels the sociological phenomenon he addresses as a “truth,” his argument raises a pivotal question: how does he know that “happiness that is predicated on the unhappiness of others . . . is a mean and petty form of happiness”? Presumably, he makes this claim because his personal experience validates it, but thanks to the unavoidable principle of restricted access in human thought, neither he nor anyone else can definitively prove that this is the case for human beings in general. To assert such a claim with confidence, one must appeal to the knowledge of some omniscient psychologist—truly, to some revelation. Indeed, the central crisis of naturalism is not a spiritual or moral crisis; it is an epistemological one. Undeniably, the existence of God is a difficult fact to prove incontrovertibly, but by even approaching the topic in a rational manner, the theist and the atheist alike make a leap of faith perhaps greater than the theist’s belief in an invisible God: they assume that the inscrutable mind, and especially the chemical complex that is the human brain, can be trusted to follow a trail of rational arguments to truth in a metaphysical quandary. Even the theist must be slightly speculative in concluding that the rational mind can be trusted based solely on his belief in the existence of a rational God, but neither of these basic beliefs is remotely so flimsy as the atheist’s insistence that the trustworthy rational brain evolved through sheer chance. By his own logical dogma, the atheist ought to distrust logic because of the extreme improbability of its accuracy—which, ironically, he cannot do without justifying his suspicion with logic.
In the end, then, Lee’s mediocre argument for morality without God is potentially tenable only if God—or, if he finds God too extreme a term, some immaterial, omnipotent, and omniscient being that upholds reason—does exist. Otherwise, the reason on which he bases his moral framework (and presumably his atheism as well) is highly unreasonable.
https://theinterminablesocratic.com/2018/05/18/morality-atheism-reason/
NEW YORK (AP) — You'll soon see four new names on the periodic table of the elements, including three that honor Moscow, Japan and Tennessee. The names are among four recommended Wednesday by an international scientific group. The fourth is named for a Russian scientist. The International Union of Pure and Applied Chemistry, which rules on chemical element names, presented its proposal for public review. The names had been submitted by the element discoverers. The four elements, known now by their numbers, completed the seventh row of the periodic table when the chemistry organization verified their discoveries last December. Tennessee is the second U.S. state to be recognized with an element; California was the first. Element names can come from places, mythology, names of scientists or traits of the element. Other examples: americium, einsteinium and titanium. Joining more familiar element names such as hydrogen, carbon and lead are: — moscovium (mah-SKOH'-vee-um), symbol Mc, for element 115, and tennessine (TEH'-neh-seen), symbol Ts, for element 117. The discovery team is from the Joint Institute for Nuclear Research in Dubna, Russia, the Oak Ridge National Laboratory and Vanderbilt University in Tennessee, and the Lawrence Livermore National Laboratory in California. — nihonium (nee-HOH'-nee-um), symbol Nh, for element 113. The element was discovered in Japan, and Nihon is one way to say the country's name in Japanese. It's the first element to be discovered in an Asian country. — oganesson (OH'-gah-NEH'-sun), symbol Og, for element 118. The name honors Russian physicist Yuri Oganessian. The public comment period will end Nov. 8.
https://www.tnledger.com/editorial/Article.aspx?id=89775
BATTLE FOR WORLD / TASS | January 29, 2019: Bob Lazar was the first to talk about the super-heavy element 115, from his claimed work at Area 51 involving advanced technologies. Decades after element 115 was first theorized and, according to previous news reports, later synthesized, Russian scientists will hold the first experiments at the Super-Heavy Element Factory in Dubna near Moscow in April 2019 to synthesize elements 114 and 115 of Mendeleev’s Periodic Table, Yuri Oganessian, Scientific Director of the Flerov Laboratory at the Joint Institute for Nuclear Research, told TASS on Tuesday (January 29). The world’s first Super-Heavy Element Factory (the DC-280 accelerator) will help scientists synthesize and study new elements of Mendeleev’s Periodic Table of Chemical Elements. Work has already been launched at the world’s major nuclear physics centers to synthesize elements 119, 120 and 121. The Joint Institute for Nuclear Research expects the Super-Heavy Element Factory to help this research along and to allow it to be the first to obtain these chemical elements. “What interest does the 119th element evoke? The 118th is the last element in its row and the 119th is the first in a new row. So, will there be a big difference between them, a big leap? This has to be found out,” the scientist said.
http://www.battleforworld.com/2019/01/29/russian-scientists-to-synthesize-elements/
This seminar with Tergar Instructor Cortland Dahl is open to anyone with an interest in Buddhism, including novice meditators and experienced practitioners. Interdependence: Meditation and the Ecology of the Heart Explore mindfulness meditation and compassion practices to experience the truth of interconnectedness, so that we may transform our lives and learn to act from a wise and compassionate heart. Meditation and the Science of Human Flourishing – Weekend Workshop Can we cultivate well-being in the same way that we can train our bodies to be healthier and more resilient? If so, how might we use the practice of meditation to experience equanimity, to open our hearts fully to others, and to cultivate insight and wisdom? Weekend workshop with Tergar Instructor Cortland Dahl.
https://tergar.org/events/2019-03-31/
Bourj Hammoud – Every Tuesday morning at the Frans van der Lugt Centre in Lebanon, women gather with Chef Dalal, a Syrian refugee from Aleppo famous for her delicious cooking. They have cooking sessions together, but sharing recipes is not the only thing they bond over. “We love to gather here, it reminds us of Syria,” Fatima* says. Gathering and spending time together helps them forget the war, the distance, and the nostalgia. “We met new people, we learned new recipes, and sometimes we put our input in famous dishes we used to cook in Aleppo,” Noor* says. “We also fight, more like disagree, over the recipes because in every village we cook it in a different way. At the end, we laugh and enjoy the food, just like family!” The JRS center helps refugees come together, make friends, and feel welcomed in the host country. “We felt so lonely when we first arrived in Lebanon,” Rania* says. They found shelter at the center. It is filled with people who’ve experienced the same circumstances and have the same culture. Now they are learning to cope with and understand Lebanese society. “The Mahashi (a famous dish in Syria) takes time, so we take breaks, drink coffee, and then continue before the men return home hungry. We do the same here, but without worrying about having the food ready for our husbands!” says Fatima. Chef Dalal has helped refugee women relive their traditions and culture. “From time to time we play music, dance, and sing together classics and oldies despite our horrible voices,” they joke. “We really don’t feel like we are strangers anymore. Food and music truly bring people together, and here at JRS it brought us home.” See more photos from the cooking session here. *All names were changed for the privacy and safety of those involved.
https://www.jrsusa.org/story/lebanon-cooking-a-throwback-home/
Islam: Sunni Sect
The religion of Islam has several sects or branches, of which the largest denomination is the Sunni (Sunnah) interpretation. Sunni Islam is based on the belief that the Prophet Muhammad died without appointing a successor to lead the Muslim community (ummah). According to Sunni Muslims, after Muhammad's death, the confusion that ensued from not having a person to head the community led to the election of Abu Bakr, the Prophet's close friend and father-in-law, as the first Caliph. This contrasts with the Shi'a Muslim belief that Muhammad himself appointed Ali ibn Abi Talib as his successor, the first Caliph and the first Muslim imam. The sectarian split that occurred in Islam between Sunni and Shi'a Muslims is based upon this early question of leadership. Thirty years after Muhammad's death, the various factions of the Islamic faith were embroiled in a civil war known as the Fitna. Many of Muhammad's relatives and companions were involved in the power struggle, and the conflict finally subsided when Mu'awiyya, the governor of Syria, took control of the Caliphate. This marked the rise of the Umayyad dynasty, which ruled Islam until 750. Three sects of Islam emerged at the conclusion of the Fitna: Sunni and Shi'a Islam, and the Khawarij sect, which is generally rejected by Islamic scholars as illegitimate and is today only practiced in Yemen and Oman. Islamic sects that have materialized since the 7th century Fitna, such as the Nation of Islam, are not regarded as legitimate by Sunni Muslims. Traditional Islamic law, or Shari'a, is interpreted in four different ways in Sunni Islam. The schools of law, or madhab, developed in the first four centuries of Islam. The four schools of law are the Hanafi, Maliki, Shafi'i, and Hanbali traditions, each based on the beliefs of their founders. Some Sunni Muslims say that one should choose a madhab and then follow all of its rulings. Other Sunnis say that it is acceptable to mix madhabs, to accept one madhab's ruling regarding one issue, and accept another madhab's ruling regarding a different issue. Sunnis also view the hadith, or Islamic oral law, differently than Shi'a Muslims do. Hadith are found in several collections, and Sunnis view some of these collections as more holy and authentic than others, especially the Bukhari collection of hadith. Even though the main split in Islamic practice is between Sunni and Shi'a Muslims, there are several rifts within the Sunni community. There are some liberal and more secular movements in Sunni Islam that say that Shari'a should be interpreted on an individual basis, and that reject any fatwa or religious edict by religious Muslim authority figures. There are also several fundamentalist movements in Sunni Islam, which reject and sometimes even persecute liberal Muslims for attempting to compromise traditional Muslim values. The Muslim Brotherhood and Jamaat-e-Islami organizations are fundamentalist Islamic groups that have given rise to offshoot groups like Hamas that wish to destroy secular Islam and Western society through terrorism and bring back to the world a period of religious Muslim rule. Some estimates say that Muslims constitute 20 percent of the world's population. Although the exact demographics of the branches of Islam are disputed, most scholars believe that Sunni Muslims comprise 87-90 percent of the world's 1.5 billion Muslims.
https://www.jewishvirtuallibrary.org/sunni-islam
Values are beliefs about what is fundamentally important. They affect your decision-making and your behaviors, whether you are conscious of them or not. Your real values are reflected by your behavior, and if your espoused values are not consistent with your behavior, you will lose credibility and trust. The same is true for teams. When a team identifies and commits to living shared values, there is a deeper level of trust, better problem-solving and increased collaboration. Team values are more than just a collection of the values of individual team members. Team values are reflected by the general pattern of behaviors of team members. They might not be explicitly stated, but it is possible to observe the general norms of behavior to tell what the values are. Are people respectful toward each other? Do they push boundaries or are they conventional? Do they avoid conflict or is conflict surfaced and addressed?

Team Values and Purpose
To be most effective, team values should be consistent with the personal values of the team members and also the purpose of the team. For example, if you are an accounting department and see your team purpose as collecting and organizing financial information, partnering or collaborating with others won’t be as important as being accurate and dependable. On the other hand, if you see your team purpose as “providing information and advice to guide leaders in wise financial decision making,” then partnering and collaborating with business leaders will be essential for your team’s success.

Team Values and Company Values
It’s important to consider how your team values support the purpose or mission of your company. For example, if your company operates a cruise line, safety and entertainment are likely to be core values. The accounting department will need to consider how these values translate to their own department. Safety might translate to fiscal responsibility. Some values like entertainment might not translate to a core team value, which is fine. However, even if it is not a driving value for your team, it must still be respected or conflict will arise. If your company hasn’t articulated values, don’t wait. Consider your sphere of influence, and within that sphere, work with your team to identify your team’s values. As your team consistently lives its shared values, those who are impacted will notice, and interest and energy will spread. At the very least, you will have strengthened your own team. And you might be pleasantly surprised to discover that others will begin to change as well, because change does not have to begin at the top of an organization.

7 Guidelines to Create Shared Values
1. Don’t assume that any values are simply “understood.” If you think something is already understood, it needs to be named as an important core value. If some form of integrity or ethical behavior is not identified as a core value, you will eventually find yourself in a downward spiral.
2. Involve your team in identifying the values. You can’t impose values on others. When you surface the values that your team cares deeply about, they will commit to living them.
3. Don’t make a laundry list. Focus on the shared values that are the key drivers to fulfill your team’s purpose. There are usually only three to five core team values. You don’t need to include each individual’s personal values, as long as there are no values conflicts.
4. Translate the values into observable behaviors. Providing behavioral examples helps your team understand what the values look like when they are being lived.
5. As a leader, model the values consistently. People watch what you do more closely than they listen to what you say.
6. Integrate your stated values into your daily processes and practices. Refer to your values when it’s time to make important decisions. Talk frequently about how they are reflected in your daily work. They will not be effective if they are seen as something extra or “soft.”
7. Don’t ignore a values breach. If a core value has been violated, address it immediately. No one is exempt. Too often the bad behavior of “high performers” is ignored, which in the long run undermines your entire team.
https://seapointcenter.com/create-shared-values-that-guide-your-team/
SoftBank has come under renewed scrutiny about its investment strategy but this time it’s about one of the Japanese tech conglomerate’s lesser-known bets. Last year, the company made a 900 million euro ($1 billion) investment in Wirecard, as part of a broader tie-up between the two on digital payments. But that deal has raised eyebrows now due to a deepening accounting crisis at the German payments processor.
https://capital.report/news/softbanks-dollar1-billion-wirecard-bet-under-scrutiny-as-troubled-payments-processor-fights-for-survival/7530
Intersection of Relations
For example, let \(R\) and \(S\) be the relations “is a friend of” and “is a work colleague of” defined on a set of people \(A\) (assuming \(A = B\)). Their intersection \(R \cap S\) will be the relation “is a friend and work colleague of“. If the relations \(R\) and \(S\) are defined by matrices \({M_R} = \left[ {{a_{ij}}} \right]\) and \({M_S} = \left[ {{b_{ij}}} \right],\) the matrix of their intersection \(R \cap S\) is given by \({M_{R \cap S}} = {M_R} * {M_S} = \left[ {{a_{ij}} \wedge {b_{ij}}} \right],\) where \(*\) denotes the element-wise (Hadamard) product, so an entry of \(M_{R \cap S}\) is \(1\) only if the corresponding entries of both matrices are \(1.\)

Union of Relations
For example, the union of the relations “is less than” and “is equal to” on the set of integers will be the relation “is less than or equal to“. If the relations \(R\) and \(S\) are defined by matrices \({M_R} = \left[ {{a_{ij}}} \right]\) and \({M_S} = \left[ {{b_{ij}}} \right],\) the union of the relations \(R \cup S\) is given by the matrix \({M_{R \cup S}} = {M_R} + {M_S} = \left[ {{a_{ij}} \lor {b_{ij}}} \right],\) where \(+\) denotes the element-wise logical sum (an entry is \(1\) if it is \(1\) in either matrix). Sometimes the converse relation is also called the inverse relation and denoted by \(R^{-1}.\)

Empty, Universal and Identity Relations
A relation \(R\) between sets \(A\) and \(B\) is called an empty relation if \(\require{AMSsymbols}{R = \varnothing.}\) The universal relation between sets \(A\) and \(B,\) denoted by \(U,\) is the Cartesian product of the sets: \(U = A \times B.\) A relation \(R\) defined on a set \(A\) is called the identity relation (denoted by \(I\)) if \(I = \left\{ {\left( {a,a} \right) \mid \forall a \in A} \right\}.\)

Properties of Combined Relations
When we apply the algebra operations considered above we get a combined relation. The original relations may have certain properties such as reflexivity, symmetry, or transitivity. The question is whether these properties persist in the combined relation. The table below shows which binary properties hold in each of the basic operations. To find the intersection \(R \cap S,\) we multiply the corresponding elements of the matrices \(M_R\) and \(M_S\). This operation is called the Hadamard product and it is different from regular matrix multiplication. So, we have

Example 4. Let \(B = \left\{ {a,b,c,d} \right\}.\) The relation \(S\) on set \(B\) is defined by the digraph. Find the combined relation \(\overline {S \cap {S^T}},\) where \({S^T}\) denotes the converse relation of \(S.\)
Solution. The converse relation \(S^T\) is represented by the digraph with reversed edge directions. Find the intersection of \(S\) and \(S^T:\) The complementary relation \(\overline {S \cap {S^T}} \) has the form

Example 5. Prove that the symmetric difference of two reflexive relations is irreflexive.
Solution. Let \(R\) and \(S\) be relations defined on a set \(A.\) Since \(R\) and \(S\) are reflexive we know that for all \(a \in A,\) \(\left( {a,a} \right) \in R\) and \(\left( {a,a} \right) \in S.\) The difference of the relations \(R \backslash S\) consists of the elements that belong to \(R\) but do not belong to \(S\). Hence, \(R \backslash S\) does not contain the diagonal elements \(\left( {a,a} \right),\) i.e. it is irreflexive. Similarly, we conclude that the difference of relations \(S \backslash R\) is also irreflexive. By definition, the symmetric difference of \(R\) and \(S\) is given by \(R \,\triangle\, S = \left( {R\backslash S} \right) \cup \left( {S\backslash R} \right).\) So we need to prove that the union of two irreflexive relations is irreflexive. Suppose that this statement is false. If the union of two relations is not irreflexive, its matrix must have at least one \(1\) on the main diagonal. This is only possible if either the matrix of \(R \backslash S\) or the matrix of \(S \backslash R\) (or both) has a \(1\) on the main diagonal.
However, this contradicts the fact that both differences of relations are irreflexive. Thus the proof is complete. We conclude that the symmetric difference of two reflexive relations is irreflexive.

Example 6. Prove that the union of two antisymmetric relations need not be antisymmetric.
Solution. We can prove this by means of a counterexample. Consider the set \(A = \left\{ {0,1} \right\}\) and two antisymmetric relations on it:
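The matrix operations used throughout these examples are simple enough to check with a few lines of code. The following sketch uses plain Python lists of 0/1 entries; the function names are our own, not from any library:

```python
# Minimal sketch of the relation-matrix algebra described above,
# using plain lists of 0/1 entries.

def intersection(MR, MS):
    # Hadamard (element-wise) product: 1 only where both relations hold.
    return [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(MR, MS)]

def union(MR, MS):
    # Element-wise logical sum (OR).
    return [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(MR, MS)]

def converse(M):
    # The converse relation corresponds to the matrix transpose.
    return [list(row) for row in zip(*M)]

def complement(M):
    return [[1 - a for a in row] for row in M]

# Example: R = {(0,0), (0,1)} and S = {(0,1), (1,0)} on A = {0, 1}
MR = [[1, 1], [0, 0]]
MS = [[0, 1], [1, 0]]
print(intersection(MR, MS))  # [[0, 1], [0, 0]]  ->  R ∩ S = {(0,1)}
print(union(MR, MS))         # [[1, 1], [1, 0]]
```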
Since the dawn of the digital computing age in the mid-20th century, computers have been used as virtual laboratories for the study of atmospheric phenomena. The first simulations of thunderstorms captured only their gross features, yet required the most advanced computing hardware of the time. The following decades saw exponential growth in computational power that was, and continues to be, exploited by scientists seeking to answer fundamental questions about the internal workings of thunderstorms, the most devastating of which cause substantial loss of life and property throughout the world every year. By the mid-1970s, the most powerful computers available to scientists contained, for the first time, enough memory and computing power to represent the atmosphere containing a thunderstorm in three dimensions. Prior to this time, thunderstorms were represented primarily in two dimensions, which implicitly assumed an infinitely long cloud in the missing dimension. These earliest state-of-the-art, fully three-dimensional simulations revealed fundamental properties of thunderstorms, such as the structure of updrafts and downdrafts and the evolution of precipitation, while still only roughly approximating the flow of an actual storm due to computing limitations. In the decades that followed these pioneering three-dimensional thunderstorm simulations, new modeling approaches were developed that included more accurate ways of representing winds, temperature, pressure, friction, and the complex microphysical processes involving solid, liquid, and gaseous forms of water within the storm. Further, these models also were able to be run at a resolution higher than that of previous studies due to the steady growth of available computational resources described by Moore’s law, which observed that computing power doubled roughly every two years. The resolution of thunderstorm models was able to be increased to the point where features on the order of a couple hundred meters could be resolved, allowing small but intense features such as downbursts and tornadoes to be simulated within the parent thunderstorm. As model resolution increased further, so did the amount of data produced by the models, which presented a significant challenge to scientists trying to compare their simulated thunderstorms to observed thunderstorms. Visualization and analysis software was developed and refined in tandem with improved modeling and computing hardware, allowing the simulated data to be brought to life and allowing direct comparison to observed storms. In 2019, the highest-resolution simulations of violent thunderstorms are able to capture processes such as tornado formation and evolution, which are found to include the aggregation of many small, weak vortices with diameters of dozens of meters, features which simply cannot be simulated at lower resolution. Keywords: numerical weather prediction, thunderstorm modeling, high performance computing, tornadoes, severe weather, big data

Why Are Thunderstorms Studied?
Each year, thunderstorms bring rain, hail, lightning, damaging winds, and flooding to regions spanning much of the globe. The strongest thunderstorms typically occur when the atmosphere is unstable, containing large amounts of near-surface moisture and abundant environmental wind shear. A forcing mechanism that causes upward motion in such an environment can trigger the formation of a tall cumuliform cloud that achieves self-sustaining vertical motion, resulting in a thunderstorm.
The vertical motion within these storms is forced primarily by the buoyancy that results from the release of latent heat due to phase changes of water (condensation, freezing, and deposition). Clouds will rise freely upward so long as the cloudy air is lighter (more buoyant) than the air surrounding the cloud; for the most unstable atmospheres, this can occur over depths well exceeding 10 km, from near the Earth’s surface to the lower stratosphere. The type and strength of thunderstorm that occurs at any given time and place has been found to be a strong function of the pre-storm atmospheric conditions, with the most dangerous storms forming when the atmosphere exhibits deep layers of instability and wind shear. Thunderstorm types include short-lived “ordinary” thunderstorms, longer-lived multicellular storms, quasi-linear convective systems including squall lines and bow echoes, and supercell thunderstorms, the long-lived rotating storms that produce the most devastating tornadoes. While thunderstorms are primarily known for the damage they cause due to lightning, winds, hail, and tornadoes, collections of thunderstorms known as mesoscale convective complexes can produce heavy rainfall over many thousands of square kilometers, bringing rain during dry summer months in the continental mid-latitudes and helping to recharge reservoirs and aquifers.

Collecting useful observational measurements of thunderstorm properties is a significant challenge. Thunderstorms can contain large amounts of wind shear, large hail, and frequent lightning. In-situ observations within thunderstorms conducted with research aircraft can offer only a tiny sampling of storm properties along the aircraft’s path. Surface measurements during thunderstorms are often quite incomplete due to a lack of sufficient instrumentation beneath the storms as well as the difficulty and expense of deploying networks of portable observational research platforms to study them. Numerical models offer a powerful approach toward understanding thunderstorms by simulating their behavior over space and time, whereby model variables describing the spatiotemporal properties of thunderstorms can be visualized and analyzed with computers. Some of the first applications of the earliest computers were attempts at forecasting the weather. As computers became more powerful throughout the 1960s and 1970s, scientists began to use them as virtual laboratories for the study of atmospheric phenomena including thunderstorms. This article focuses on the use of numerical models to simulate the strongest thunderstorms (primarily supercell thunderstorms, which produce the strongest tornadoes), with an emphasis on research that required (or requires) the most powerful available computing resources. Our understanding of thunderstorms has been profoundly influenced by the use of computers to simulate their behavior. Prior to the 1960s, theory and observations served as the dominant approaches toward understanding the atmosphere. During the 1960s, some scientists interested in better understanding thunderstorms began to utilize computers to conduct short, crude simulations of simple convection such as buoyant thermals. Over time, as computers became faster and the amount of computer memory increased, scientists utilized these newer machines to increase the sophistication of their models in order to simulate thunderstorms in three dimensions. The first three-dimensional (3D) simulations of thunderstorms were conducted in the mid-1970s.
These simulations, while very crude when compared to simulations in the late 2010s, ushered in the modern era of high-resolution thunderstorm modeling, in which entire thunderstorms can be simulated at high enough resolution to resolve tornadoes and downbursts and the processes involved in their formation, maintenance, and decay. As supercomputers continue to increase in speed, memory, and storage capacity, researchers must adapt their models or create new models to harness this power. These new models will produce simulations that contain an increasing amount of physical realism when compared to observed storms, and these simulations, when analyzed, will answer questions that would otherwise be impossible to answer utilizing only observations and mathematical theory.

Storm Modeling and the Computing Revolution
The history of modern thunderstorm research begins in 1947 with the Thunderstorm Project, the first large-scale field project focused specifically on understanding the behavior of thunderstorms in the United States (Braham, 1996; Byers & Braham, 1949). This federally sponsored program was initiated just after the end of the Second World War and was designed with the primary goal of better understanding thunderstorms and the specific hazards they posed to military and commercial aircraft, several of which had crashed while encountering thunderstorms. Specially modified aircraft were flown through thunderstorms at different altitudes where winds, pressure, temperature, and hydrometeors (rain, snow, graupel, and hail) were measured. In the 1950s, radar, which had served primarily as a way to detect enemy aircraft during the war, was being used to detect hydrometeors in thunderstorm clouds. Radar continues to serve as the most effective remote-sensing technology for the detection of winds and hydrometeors in thunderstorms. The National Science Foundation (NSF) was founded by the United States government in 1950, with the NSF-sponsored National Center for Atmospheric Research (NCAR) established in Boulder, Colorado, in 1959. By the early 1950s, computers were starting to be used with the goal of improving weather forecasting, but only at the synoptic scale, and with decidedly mixed results (Persson, 2005a,b,c; Shuman, 1989). At this time the possibilities of numerical weather prediction were only just beginning to be realized, and the filtered equation sets and numerical techniques that would apply to thunderstorm-scale simulations had not yet been developed. The following decades would see an exponential growth in computer processing power, memory and memory bandwidth, and storage, and these computing advancements were mirrored by an increase in model resolution and sophistication to exploit this power. The advancement of our understanding of thunderstorms since the 1950s has gone hand in hand with the advancement of improved observational data sources as well as the steady increase of computational power that could be applied toward modeling weather phenomena.

Numerical Model Properties
In order to understand the profound impact computing technology has had on the nature of numerical simulations of thunderstorms, it is useful to first discuss the basic properties of an atmospheric cloud model used by scientists to study them. A cloud model is a computer application that simulates, using known laws and approximations, the time-dependent behavior of a cloud and the atmosphere containing and surrounding the cloud.
Properties forecast in numerical models of thunderstorms typically include the wind, temperature, pressure, turbulent kinetic energy, and bulk properties of water vapor, cloud water, cloud ice, graupel and/or hail, and snow. The most realistic cloud models are 3D, allowing for motion in all known spatial dimensions (East/West, North/South, up/down) as occurs in real clouds. A 3D cloud model will typically forecast atmospheric properties utilizing a mesh, or grid, that describes the Cartesian locations of all of the grid volumes (typically rectangular prisms) that comprise the full volume being forecast by the 3D model. These combined grid volumes together make up the model domain, the full volume that contains the model’s “known universe.” For a Cartesian model on an orthogonal mesh, the lengths of the edges of the grid volumes are referred to as the grid spacing, with the three components represented as ∆x, ∆y, and ∆z. One of the consequences of the finite difference numerical modeling approach is that the grid spacing greatly determines the fidelity, and the computational cost, of the simulation (see the sketch below). The first 3D thunderstorm models utilized horizontal grid spacings on the order of 2–3 km with vertical grid spacings on the order of 1 km, and contained domains on the order of 20–40 km in the horizontal and around 10 km in the vertical. This would be considered a very low-resolution simulation in modern times; there are too few grid volumes spanning the thunderstorm to create a clear picture, much like the first digital cameras with only tens of thousands of pixels could not create a high-fidelity image comparable to a multi-megapixel camera. When discussing cloud model simulations, resolution (and hence grid spacing) is critical because throughout each grid volume in a cloud model, wind, temperature, pressure, and microphysical variables involving cloud, rain, snow, graupel, and hail are constant; no further subdivision is possible without increasing the model resolution and altering the model mesh. Hence, resolution strongly determines the fidelity, or physical realism, of the simulation, so long as the model contains physical parameterizations that are appropriate for that resolution. Models that simulate thunderstorms require parameterizations in order to include precipitation and incorporate the effects of friction, subgrid-scale kinetic energy, and radiation. Parameterizations are model algorithms that handle physical processes that are either too complex or too poorly understood to model using solvable fundamental equations, such as the interaction of countless numbers of hydrometeors that occurs in thunderstorms. Further, because thunderstorm models cover a limited domain, boundary conditions must be specified on all six faces of the model domain’s volume. The surface boundary condition in particular has a dramatic effect on the evolution of simulated thunderstorms, and the most appropriate choice of surface boundary condition remains an open question in 2019. The first cloud models were not 3D, however; clouds were either represented as “blobs” of air that can only rise and fall and contain no horizontal variation (one-dimensional [1D] model), or contained symmetry about a vertical axis (axisymmetric two-dimensional [2D] model) or about a vertically oriented plane (slab-symmetric 2D model). These first models were developed under the constraints of the hardware available at the time; fully 3D models could not have been developed and tested until the computing technology matured.
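As referenced above, here is a back-of-the-envelope sketch of how grid spacing drives computational cost. The domain sizes, the variable count, and the 4-byte storage assumption are illustrative choices of ours, not figures from any particular study:

```python
# Back-of-the-envelope grid cost estimate for a Cartesian cloud model domain.
# All numbers are illustrative; real models carry many more 3D arrays.

def grid_cost(Lx_km, Ly_km, Lz_km, dx_m, dy_m, dz_m, nvars=12):
    nx = int(Lx_km * 1000 / dx_m)
    ny = int(Ly_km * 1000 / dy_m)
    nz = int(Lz_km * 1000 / dz_m)
    nvol = nx * ny * nz                # grid volumes in the domain
    mem_gb = nvol * nvars * 4 / 1e9    # 4-byte floats per forecast variable
    return nvol, mem_gb

# A 1970s-style domain: 40 km x 40 km x 10 km at 2 km / 1 km spacing
print(grid_cost(40, 40, 10, 2000, 2000, 1000))   # ~4,000 volumes
# Halving the grid spacing in all three dimensions multiplies the number
# of volumes by 8 (and the stable time step shrinks as well).
print(grid_cost(40, 40, 10, 1000, 1000, 500))    # ~32,000 volumes
```

The cubic scaling is why each generation of higher-resolution simulations has had to wait for a new generation of hardware.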
Cloud model simulations containing fewer than three dimensions are obviously unphysical, but an enormous amount of valuable and insightful research was conducted under these constraints in the early days of cloud modeling. Further, some atmospheric phenomena do lend themselves well to a 2D approach. For instance, a downburst formed from a thunderstorm cloud in a quiescent environment may be approximated to be axisymmetric; a squall line, which does not vary appreciably along its length, can be represented fairly well in a 2D slab-symmetric model. However, in both of these cases, symmetry is enforced and truly physically realistic flow is almost always impossible to achieve. In the early 21st century, a research-class 3D cloud model forecasts the winds, temperature, pressure, humidity, turbulent kinetic energy, and the mixing ratios of liquid and solid cloud water, rain, snow, graupel, and/or hail over its domain. Models with more advanced microphysics parameterizations will forecast additional variables related to the microphysics (such as number concentration). The cloud and rain field output of such a model, when visualized, will bear a very strong resemblance to a high-quality observed thunderstorm. Before discussing the state of the science of 3D high-resolution thunderstorm modeling in the late 2010s, research involving the modeling of thunderstorms will be approached historically, beginning with the first simulations in the 1960s.

The Early Years of Storm Modeling
Before computers could be programmed to forecast the weather, the set of equations to be solved first needed to be chosen, as well as the numerical methods to approximate the solution. Charney (1955) acknowledged that, despite encouraging results found in recent attempts to solve the quasi-geostrophic equations for atmospheric flow, solutions to the primitive equations were not yet possible due to numerical instability. Smagorinsky (1958) made one of the first attempts to create a framework for the stable numerical integration of the so-called primitive equations that describe the physics of the atmosphere. Lilly (1962) developed a 2D model to simulate buoyant convection in a dry atmosphere and achieved the twin goals of numerical stability and qualitatively correct numerical results in some of his experiments; however, many shortcomings associated with stability, accuracy, and the lack of computing capability to improve results were identified. Using a set of equations different from that of Lilly (1962), Ogura (1962) reported similar encouraging results with a dry axisymmetric 2D model, and he later introduced moisture into his model and simulated the formation of a cloud in a conditionally unstable atmosphere (Ogura, 1963). Orville (1964) applied Ogura’s equation set toward the simulation of upslope mountain flow by modifying the bottom boundary to include a “mountain” with a 45 degree slope, and later included water vapor to measure the effects of latent heating on the solution (Orville, 1965). Many of these early simulations suffered from computational instabilities that caused the simulation to degrade after short integrations. This issue was remedied by Arakawa (1966), who presented a set of equations and numerical techniques that overcame some of these instabilities and showed that long-term integrations of simulated 2D convection could be conducted without failing. This work was instrumental in paving the way toward the fully stable 3D simulations that were to arrive in the 1970s.
A major challenge to the field of cloud modeling is the handling of microphysical processes involving clouds and hydrometeors (rain, snow, and graupel/hail). A real thunderstorm contains a myriad of particles of different shapes, sizes, fall speeds, and phases, and these particles interact with each other and their surrounding environment in countless ways. A finite difference cloud model cannot hope to capture these processes, as each grid volume, rather than containing interacting particles, represents each microphysical property as a single bulk variable (e.g., mixing ratio and number concentration) that is constant throughout the volume. Despite this and other simplifications, the parameterization of microphysics tends to be one of the most computationally expensive parts of a cloud model owing to the many complex source and sink terms that transfer water substance from one microphysical variable to another. The first models containing both cloud and rain were 1D, only simulating along the vertical axis (e.g., Srivastava, 1967). In one of the most influential studies in the field of cloud modeling, Kessler (1969) developed a system of equations describing the formation of clouds and rain, and rain’s fallout, that could be incorporated into a 3D cloud model. Kessler’s work was the first to carefully address conservation issues regarding all forms of water (in this case, water vapor, cloud water, and rain). Despite the lack of ice and the inevitable presence of unphysically supercooled liquid water, Kessler’s microphysics have been programmed into many cloud models of varying sophistication throughout the years. Kessler’s and other similar routines are referred to in the literature as “warm rain” microphysics, as the equations describing cloud and rain variables only properly describe precipitating convection that does not involve the ice phase, hence “warm.” In thunderstorm simulations, where temperatures colder than −40 °C are commonly found aloft, simulations utilizing such “warm rain” microphysics routines will contain liquid cloud and rain that is unphysically cold! Pioneering work by Rutledge and Hobbs (1983) and Lin, Farley, and Orville (1983) included the addition of ice substance (cloud ice, snow, and graupel) to numerical model equation sets, which resulted in much more realistic storm structure and evolution when compared to observations. The use of Kessler microphysics in cloud modeling studies is rare in the 2010s, having been supplanted by microphysical routines that incorporate cloud ice, snow, graupel, and hail in addition to liquid cloud and rain.

The 3D Revolution
By the end of the 1960s, many of the major stability issues that had plagued early numerical attempts had been worked out, and the first rain-producing clouds were being simulated in 2D. Computing technology, though still quite primitive, was steadily advancing to the point where 3D simulations were possible over limited domains (Deardorff, 1970). It must be emphasized that the cloud modeling work up to this point had been done in a 2D framework, either axisymmetric or slab symmetric. In both cases, these simplifications arose due to the lack of available computing power. From 1964 to 1969, the CDC-6600, designed by Seymour Cray, was the fastest computer in the world. Approximately 50 were made, one of which was delivered to NCAR in December 1965.
The machine, which was one of the first solid-state general computing systems, was programmed by feeding punch cards (containing Fortran 66 code) into a card reader, with output sent to a plotter or to magnetic tape. One needed to be physically present to use the machine, and it was not until the mid-1970s, following the development of a dedicated network connection as part of the ARPANET, that remote access to supercomputing facilities became feasible. The ARPANET was a precursor to the internet that connected research facilities in the United States with a 50 kilobit per second dedicated connection, allowing remote access to computing facilities as well as the ability to display model output on a researcher’s local terminal. The CDC-6600 contained up to 982 kilobytes of memory and could execute up to 3 megaFLOPS (millions of floating point operations per second). Its successor, the CDC-7600, ran about five times faster than the 6600 and had four times as much memory. These two machines served as the primary workhorses for cloud modeling spanning the mid-1960s to mid-1970s. In the early 1970s, improvements and refinements to 2D models continued, setting the stage for the upcoming 3D revolution. More sophistication in the microphysical processes involving rain formation and subsequent falling, breaking, and evaporating was included in the 2D model of Takeda (1971). Run with 1 km horizontal and 500 m vertical grid spacing, his simulations in sheared environments indicated that, for a range of atmospheric stability and shear profiles, a self-sustaining cloud could be simulated. Wilhelmson and Ogura (1972) found that small changes to the equation set of Ogura and Phillips (1962) removed the need to use a computationally expensive iterative solution for calculating saturation vapor pressure, while providing nearly identical model results. Steiner (1973) developed one of the first 3D models of shallow convection, examining the behavior of buoyant thermals in sheared environments. His simulations did not contain any water substance, however, and the conclusion that vertical motions in 3D are stifled by environmental shear would be modified significantly once the effects of condensation were included in 3D convective models. Schlesinger (1973a,b) developed a 2D anelastic model containing liquid water microphysics and conducted experiments exploring the role of environmental moisture and shear on storm morphology, as well as the sensitivity of simulations to various microphysical parameters, finding that long-lived storms could be maintained in highly sheared environments so long as low-level environmental moisture was sufficient. Wilhelmson (1974) presented one of the first 3D thunderstorm simulations containing liquid water microphysics. His simulation was run utilizing isotropic grid spacing of 600 m on a 3D mesh with dimensions of 64 x 33 x 25, containing a total of roughly 53,000 grid volumes and utilizing mirror symmetry about the x axis. The limited domain size (38 km horizontal extent) and the use of doubly periodic boundary conditions precluded the ability to model the full life cycle of a realistic storm, however. In comparison to 2D simulations in identical environments, Wilhelmson found that the 3D thunderstorm developed faster and lasted longer than its 2D counterpart before decaying, emphasizing the need for fully 3D modeling to properly capture deep moist convection.
Schlesinger (1975) converted his 2D model to 3D and included directional shear in his first “toy” 3D simulations (so-called due to the modest 11 x 11 x 8 grid) run with horizontal grid spacing of 3.2 km and vertical spacing of 700 m. Schlesinger’s 3D model contained open lateral boundary conditions, an improvement over doubly periodic boundary conditions, allowing for flow in and out of the model’s lateral boundaries. These early simulations were among the first to explore the morphology of storms in directionally sheared environments. Subsequent 3D simulations by Klemp and Wilhelmson (1978b) and Schlesinger (1980) revealed the process of thunderstorm splitting into right- and left-moving cells, and found that the right-moving cell was favored in environments with strong low-level shear typical of environments associated with severe weather in the United States. Klemp and Wilhelmson (1978a) presented supercell simulation results from their newly developed 3D cloud model, which was a significant improvement over previously reported efforts. The so-called “Klemp–Wilhelmson model” would be used in many subsequent studies throughout the 1970s and 1980s. The 3D model incorporated Kessler-type microphysics, open lateral boundary conditions, and a turbulence closure scheme to better handle subgrid kinetic energy. Their use of a prognostic equation for pressure was seen as advantageous over solving a diagnostic elliptic equation that implicitly assumes infinitely fast sound waves; however, this required the use of an extremely small time step in order to achieve computational stability. A novel “time splitting” technique addressed this by separating out the terms responsible for sound waves and integrating only those terms with the small time step needed to guarantee stability, an approach that was also used in the newly developed Colorado State RAMS model (Pielke et al., 1992). This model was used in early cloud modeling studies that explored orographic clouds and mesoscale convective systems, and it was one of the first models to include the ice phase in simulations of thunderstorms (Cotton & Tripoli, 1978; Cotton, Tripoli, Rauber, & Mulvihill, 1986; Tripoli & Cotton, 1989).

Cray and the Dawn of the Supercomputing Era
In 1972, Seymour Cray, the architect of the CDC-6600 and 7600 that had served as the primary workhorses for cloud modeling research, established Cray Research, Inc., in Chippewa Falls, Wisconsin. The first Cray-1 supercomputer was sold in 1976, and Cray quickly established itself as the leading supercomputer manufacturer in the world, a title it would hold until the 1990s. Cray computers exhibited significantly faster “number crunching” capabilities and larger amounts of memory and disk storage than previous machines. These machines contained vector processors that could apply a mathematical calculation to several values in an array concurrently, an early form of parallel processing that served to speed up the earliest cloud models. The first Cray supercomputer was delivered to NCAR in 1976 and would be in service for 12 years. Atmospheric scientists lucky enough to obtain access could connect to it remotely rather than being required to be physically present, as had been the case in the early 1970s.
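Returning briefly to the time-splitting technique described above: the idea can be sketched in a few lines. The toy below integrates a fast pressure-velocity oscillation (standing in for sound waves) with small forward-backward sub-steps inside each large step, while a slow tendency (standing in for advection and buoyancy) is evaluated only once per large step. This is a schematic under our own simplifications, not the actual Klemp-Wilhelmson implementation:

```python
# Toy sketch of acoustic "time splitting": fast (sound-wave) terms are
# sub-stepped with a small, stable time step while slow tendencies are held
# fixed. Schematic only; the constants and the forward-backward update are
# illustrative choices, not the Klemp-Wilhelmson (1978) code.

C = 340.0              # acoustic coupling (plays the role of the sound speed)
SLOW_TENDENCY = 0.01   # stands in for slow advective/buoyant forcing on u

def large_step(u, p, dt_large, n_small):
    dt = dt_large / n_small            # small acoustic time step
    for _ in range(n_small):
        # Forward-backward update: u first, then p using the new u,
        # which keeps the fast oscillation stable for C * dt < 2.
        u += (-C * p + SLOW_TENDENCY) * dt
        p += (C * u) * dt
    return u, p

u, p = 0.0, 1e-3
for _ in range(100):                   # 100 large (convective) steps
    u, p = large_step(u, p, dt_large=0.001, n_small=10)
print(u, p)
```

The payoff is that the expensive slow physics (advection, microphysics) is computed once per large step, while only the cheap acoustic terms pay the small-time-step penalty.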
Despite the promising advances in computing technology exemplified by Cray's success, supercomputers in the early 1980s were almost entirely out of reach for the university researchers who desperately needed them, whether because of cost or a complete lack of local facilities. In mid-1983, about 75 supercomputers existed across the world, the majority of which were Cray-1s (Smarr et al., 1983). While NCAR provided access to researchers, only three universities in the United States had machines on site, and it was this clear deficiency in basic computational research capability that led scientists at the University of Illinois to propose to the National Science Foundation the creation of a national center for scientific computing. The National Center for Supercomputing Applications (NCSA) was established in 1986, the same year its Cray-1 went online. NCSA provided not just supercomputing access but additional software and services to assist scientists from diverse fields in utilizing the hardware (and continues to do so in 2019). Remote access allowed scientists to edit, develop, and compile cloud model code and submit jobs to the Cray through a queuing system. Plotted output residing on the computer could also be displayed graphically on the researcher's office computer over the network. With the proliferation of the Internet (many research universities were connected in the 1980s, years before it was generally available), it became standard practice to access supercomputing facilities, such as those at NCAR or NCSA, through a remote network connection with software such as NCSA Telnet. Advances in operating system technology, the development of more sophisticated Fortran compilers, and the greater availability of supercomputing resources advanced many scientific fields during the 1990s.

Supercells and Supercomputers

Many of the fundamental properties of supercell thunderstorms were identified during the 1980s, largely as the result of 3D simulations of supercells. Several 3D supercell modeling studies published at this time greatly advanced our understanding of storm structure, dynamics, and morphology (e.g., Droegemeier & Wilhelmson, 1985; Klemp & Rotunno, 1983; Klemp, Wilhelmson, & Ray, 1981; Rotunno & Klemp, 1982, 1985; Schlesinger, 1980; Weisman & Klemp, 1982, 1984; Wilhelmson & Klemp, 1978, 1981). Throughout the 1980s, 3D simulations were conducted on grids with horizontal grid spacings on the order of 1 km and vertical spacings typically about 500 m. A typical simulation would span a domain on the order of 40 km by 40 km in the horizontal, extending to 10 or more kilometers in the vertical. These domains resulted in grids on the order of 50 x 50 x 30 (75,000) grid points. (The highest-resolution thunderstorm simulations conducted in the late 2010s contain tens to hundreds of billions of grid volumes, five to six orders of magnitude more than these early simulations.) It was widely recognized at the time that this resolution was inadequate to capture important smaller-scale flows known to be present in supercells (such as tornadoes) and that liquid-only microphysics was unphysical, but computers with enough memory and processing power to do better simply did not yet exist.
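Some illustrative arithmetic shows why (the field count and 4-byte word size here are assumptions of this sketch, not figures from the studies cited): storing a dozen 3D fields on such a 75,000-point grid requires roughly

$$ 75{,}000 \times 12 \times 4\ \mathrm{bytes} \approx 3.6\ \mathrm{MB}, $$

already a large fraction of the roughly 8 MB of main memory on a Cray-1, and doubling the resolution in all three dimensions (an eightfold increase in grid volumes) would exceed it.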
Despite these limitations, carefully constructed numerical experiments of thunderstorms throughout the 1980s and 1990s revealed a tremendous amount about thunderstorm structure and morphology, such as the nature of supercell splitting, the origin of low-level supercell mesocyclone rotation, and the nature of thunderstorm cold pools. A technique for creating a vertically "stretched" grid was provided in Wilhelmson and Chen (1982), in which the vertical grid spacing ∆z is a function of height. Most subsequent studies of deep moist convection would use a stretched vertical grid, typically with the highest resolution (smallest ∆z) at or near ground level and spacing increasing monotonically with height, in order to "focus" resolution near the ground (a minimal illustrative sketch of such a grid appears at the end of this passage). This approach, however, can result in "pancake" grids with large aspect ratios (∆x/∆z), and highly anisotropic meshes are known to affect the parameterization of turbulent kinetic energy, which can result in spurious energy piling up at high wave numbers and detrimentally affect the simulation (Nishizawa, Yashiro, Sato, Miyamoto, & Tomita, 2015).

One of the most significant examples of the power and promise of computing technology during this time was the video "A Study of the Evolution of a Numerically Modeled Severe Storm," produced by a team of scientists at NCSA in 1989 (Wilhelmson et al., 1989, 1990). This six-minute video, nominated for an Academy Award, explored the structure of a 3D simulated supercell using the most advanced commercial software and hardware available at the time. The video featured animations of the cloud and rain rendered in transparent isosurfaces and colored slices, as well as colored Lagrangian tracers. That it took four scientific animators one person-year to develop the video following the execution of the model emphasizes the challenge of producing high-quality, scientifically meaningful animations of 3D thunderstorm simulations. In 2019, this video remains one of the most insightful, high-quality visualizations of a full thunderstorm, despite the many visualization software options that later became available to scientists.

The desire to study the process of tornadogenesis within a simulated supercell was a motivator for the development of nested grid capability in cloud models (Klemp & Rotunno, 1983). By nesting a higher-resolution grid within a coarser grid, one could, for instance, explore the development of a tornado without running the entire simulation at the resolution of the fine grid, something not computationally feasible. Wicker and Wilhelmson (1995) simulated a tornadic supercell with two meshes, the finest with 120 m horizontal spacing, and Grasso and Cotton (1995) did the same using three meshes, the finest with 111 m horizontal spacing. In both studies, the higher-resolution mesh was introduced prior to tornado formation in order to reduce the amount of computer time required; even so, the tornadoes in both simulations were marginally resolved. These simulations revealed the importance of baroclinically generated horizontal vorticity in strengthening a supercell's low-level mesocyclone, as well as the potential role of downdrafts as an initiation mechanism for tornado formation. They also demonstrated the utility of nested grids for incorporating different scales of motion, although spurious error growth along nest boundaries was a common problem (Wicker & Wilhelmson, 1995).
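The following is the sketch referenced above: a minimal construction of a stretched vertical grid, assuming a constant geometric stretching ratio, one simple choice among many. Wilhelmson and Chen (1982) used their own functional form for ∆z(k); the values below are purely illustrative.

    ! Minimal sketch: a vertically stretched grid with constant ratio r.
    ! dz0, r, and nz are illustrative values, not from any cited study.
    program stretch_grid
      implicit none
      integer, parameter :: nz = 40
      real, parameter :: dz0 = 100.0   ! finest spacing, at the ground (m)
      real, parameter :: r = 1.06      ! hypothetical stretching ratio
      real :: dz(nz), zh(0:nz)
      integer :: k
      zh(0) = 0.0
      do k = 1, nz
         dz(k) = dz0 * r**(k - 1)      ! spacing grows monotonically with height
         zh(k) = zh(k - 1) + dz(k)     ! heights of the grid interfaces
      end do
      print *, 'dz at bottom (m):', dz(1)
      print *, 'dz at top (m):   ', dz(nz)
      print *, 'model top (m):   ', zh(nz)
    end program stretch_grid

With these illustrative values the spacing grows from 100 m at the surface to roughly 1 km aloft, concentrating grid levels near the ground exactly as described above, while the aspect ratio ∆x/∆z varies with height on such a mesh.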
Finley, Cotton, and Pielke (2001) initialized a RAMS simulation with synoptic data and incorporated six nested meshes, with the innermost 100 m horizontal grid focused on a supercell that produced two tornadoes. This simulation was notable in that it did not utilize the "warm bubble" mechanism to trigger a storm in a horizontally homogeneous environment, as was typically done in idealized supercell simulations. It is worth noting that while similarities were found among these three numerical studies, no clear picture had emerged regarding supercell tornado genesis and maintenance, partly due to inadequate resolution; in fact, it is common to describe vortices in these and other similar simulations as "tornado-like vortices" (TLVs) because they are marginally resolved. However, these simulations resolved important features in the tornadic region of the supercell and provided encouragement that a much clearer picture would emerge as models improved and simulations were run at higher resolution, enabled by the steady increase in computing power made possible by supercomputers.

Severe weather outbreaks involving supercells present a significant forecasting challenge to operational meteorologists who, throughout the late 20th and early 21st centuries, have increasingly relied upon numerical weather prediction models for forecast guidance. The Center for Analysis and Prediction of Storms (CAPS) was formed at the University of Oklahoma in 1988 to address the formidable challenge of operational severe weather forecasting, with the Advanced Regional Prediction System (ARPS) model serving as its centerpiece (Johnson, Bauer, Riccardi, Droegemeier, & Xue, 1994; Xue, Wang, Gao, Brewster, & Droegemeier, 2003). ARPS was able to forecast, five hours in advance, the location and intensity of supercells observed in the U.S. Plains on May 24, 1996, demonstrating the ability of a weather model to predict severe mesoscale phenomena in an operational setting (Sathye, Xue, Bassett, & Droegemeier, 1997).

The Use of Ice Microphysics in 3D Thunderstorm Simulations

An important advancement during the early 1980s was the development of equations describing the time-dependent microphysical characteristics of ice hydrometeors, and of 2D thunderstorm models utilizing so-called ice microphysics (e.g., Lin et al., 1983). Kessler's (1969) scheme and its variants necessarily allowed supercooled cloud water and rain to exist at unphysically cold temperatures, with rain being the only hydrometeor allowed to fall to the ground as precipitation. These simplifications resulted in distorted cloud anvils lacking the expected structure, which arises partly from the comparatively slow fall speed of snow relative to rain, and typically produced thunderstorm outflow that was too cold compared to observations. It was well known at the time that supercells contained an abundance of cloud ice, snow, graupel, and hail that needed to be accounted for in simulations. Although ice microphysics had been developed on paper in the early 1980s, incorporating it into 3D cloud models would have resulted in code that could not run on the hardware of the time, due to the significant increase in both computation and memory usage it would have required.
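To give a sense of what these bulk schemes compute, the core of a Kessler-type warm-rain scheme is a small set of simple rate expressions; autoconversion of cloud water (mixing ratio $q_c$) to rain ($q_r$), for example, is commonly written as a threshold process,

$$ \left(\frac{\partial q_r}{\partial t}\right)_{\mathrm{auto}} = k_1 \max(0,\; q_c - q_{c0}), $$

where the rate coefficient $k_1$ and the threshold $q_{c0}$ vary by implementation. Ice schemes add analogous source and sink terms, and a prognostic 3D array, for each new species, which is precisely where the additional computation and memory arise.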
Ice microphysics routines required new prognostic equations to be solved, with additional 3D floating-point arrays allocated for snow, ice, and graupel mixing ratios. Because of the notoriously expensive calculations inherent to microphysics, a simulation that fit in the memory of the computers of the time would have taken a prohibitively long time to integrate. Early 3D simulations of thunderstorms containing ice microphysics (e.g., Knupp, 1989; Straka & Anderson, 1993) utilized grid spacings on the order of 500 m and were used to study the formation of downdrafts in thunderstorms, in which processes such as the melting of hail were responsible for a significant amount of negative buoyancy that forced strong downdrafts known as downbursts. In the following decades, 3D simulations of thunderstorms would be run with a variety of cloud microphysics options, with single-moment ice microphysics progressively replacing liquid-water-only microphysics, and multi-moment ice microphysics routines generally supplanting single-moment routines. It has been widely recognized that, until the adoption of multi-moment ice microphysics routines (with prognostic equations for variables beyond mixing ratios, such as hydrometeor number concentrations), simulated thunderstorm cold pools were biased cold compared to observations. The implications of this cold bias are significant, as a colder cold pool will propagate faster and interact with environmental wind shear differently than a warmer one (Rotunno, Klemp, & Weisman, 1988; Weisman & Rotunno, 2004). The use of more physically correct microphysics parameterizations should aid future discovery regarding supercell tornado behavior. Baroclinically generated horizontal vorticity forms due to horizontal gradients of density that arise from differential cooling throughout and adjacent to the storm's cold pool, cooling driven by the evaporation of rain and the melting and sublimation of hail. This vorticity is known to be a primary source of vertical rotation in the bottom kilometer or two of supercell updrafts, and it may also serve as a primary source of vorticity within tornadoes as cold pool air is abruptly tilted upward into the lowest region of the tornado (Orf, Wilhelmson, Lee, Finley, & Houston, 2017).

High-Resolution Cloud Modeling in the Age of Distributed-Memory Supercomputing

A Word About “High Resolution” as a Descriptor

Within the field of atmospheric modeling, the term "high resolution" is used to indicate a certain level of fidelity "beyond the norm." It is generally true that ever higher resolution is desirable in thunderstorm research; strong local gradients of wind, temperature, and pressure are typical within real thunderstorms, requiring high resolution to capture the nature of these abrupt changes. Further, features that occur within thunderstorms, such as downbursts, tornadoes, near-ground inflow, and outflow layers, are much smaller than the storm in which they occur, requiring resolution well beyond that which captures only the storm's basic features. In 2019 it is possible to conduct, on consumer desktop hardware, 3D simulations of thunderstorms that would have been impossible on any computer of the 20th century.
A consumer motherboard for a home or office computer can contain a processor with up to 20 cores and 128 GB of memory, with each core running at a clock speed of around 3 GHz, providing around 1 TFLOP of performance. A single graphics processing unit (GPU) can easily add several additional TFLOPs of computing to a desktop computer, providing performance on par with the world's fastest supercomputers at the turn of the century. One consequence of this breathtaking increase in computing power is that "high resolution" as a descriptor has become meaningless: a "high-resolution" simulation of a thunderstorm in 1985 may have been run with 500 m grid spacing, while the same descriptor would properly apply to a simulation in 2019 with 50 m grid spacing, a thousandfold increase in the number of grid volumes (and hence computer memory) required. This has resulted in the literature being peppered with descriptions of "high," "very-high," "ultra-high," "super-high," and even "giga-" resolution simulations to distinguish them from previous, coarser simulations. The ambiguity of these descriptors can only be removed by clearly specifying the grid spacing and the phenomenon being simulated, and it is hoped that, due to the eventual exhaustion of adjectives indicating yet higher resolution, modelers will instead indicate grid spacing and let the reader decide the nature of the resolution!

The Transition to the Era of Supercomputing Centers

During the 1990s, Cray began to share the stage with new computing architectures whose performance eclipsed that of its signature single-vector-processor design, and the turn of the century marked rapid developments in supercomputing technology and access. It became necessary to add additional independent processors to supercomputers rather than optimizing or speeding up a single processor. Multiprocessor machines would introduce new challenges for programmers, as most of these new architectures demanded significant modification of existing code (or the development of new code) to exploit all of the processors efficiently.

One difficult but inexpensive computing option available to modelers in the 1990s was made possible by the development of transputer technology. Transputers were an inexpensive type of microprocessor, manufactured in the 1980s by the U.K. corporation Inmos, designed to serve as individual computing nodes (standalone computing units connected by a high-speed network) that could be assembled into a massively parallel computer capable of conducting many calculations concurrently. Jacob and Anderson (1992) report on the creation and use of the Wisconsin Model Engine (WME), a 192-node, massively parallel supercomputer that was used by John Anderson's group at the University of Wisconsin to conduct high-resolution simulations of the ocean and atmosphere, including microbursts, the damaging downdrafts that form within thunderstorms (Anderson, Orf, & Straka, 1990; Orf, Anderson, & Straka, 1996). Among the most exciting computer architectures at this time were the Connection Machine CM-2 and CM-5 supercomputers (built by Thinking Machines Corporation) that were housed at NCSA in the mid-1990s. These were among the first commercially available massively parallel supercomputers, containing up to 1,024 processors, each with its own memory space, similar to the WME.
These machines were among the first to require message passing in order to communicate data between the individual nodes. To simplify matters for programmers, Thinking Machines released its own Fortran compiler with extensions that, unlike the WME's programming environment, "hid" the underlying message passing, allowing programmers to code simple expressions involving 3D arrays that executed what would otherwise have been complex parallel operations. (This approach is similar to the single-expression whole-array operations added in the Fortran 90 standard.) Taking advantage of this new architecture, and using a modified version of the Klemp–Wilhelmson model, Lee and Wilhelmson (1997a) presented simulations of a line of weak tornadoes that formed from convection growing along a wind shear boundary. This work and companion studies (Lee & Wilhelmson, 1997b, 2000) explored the nature of non-supercell tornadoes, which tend to be shorter-lived and weaker than those formed in supercells, typically forming beneath rapidly growing cumulonimbi as the result of intense vortex stretching. While the CM-5 was the last machine built by Thinking Machines, it heralded the future of individual shared-memory multiprocessor computers as well as distributed-memory supercomputers made up of identical nodes. Distributed-memory supercomputing remains the norm in 2019, with frequent sharing of data between the individual nodes required during model integration in order to solve the equations that define the future state of the cloud.

The emergence of distributed-memory supercomputing required the development of algorithms and libraries to manage the efficient passing of data between nodes. The Message Passing Interface (MPI) standard was developed to address this issue, and soon MPI libraries were available to programmers, who could use MPI routines in their cloud models to handle data exchange between compute nodes (a minimal sketch of such an exchange appears at the end of this passage). With MPI, a cloud model could be written to take advantage of as many processors as were available; however, the frequent exchange of data between compute nodes required to update the model state can result in a situation where adding more compute nodes actually slows the model down due to communication bottlenecks. It requires significant forethought and effort to construct a distributed-memory cloud model that scales well to tens of thousands or more nodes. This challenge has contributed to the development of parallelized, open-source community cloud models such as ARPS (Xue, Wang, Gao, Brewster, & Droegemeier, 2003), WRF (Skamarock et al., 2008), and CM1 (Bryan & Fritsch, 2002), which greatly benefit the cloud modeling community by providing modifiable "virtual laboratories" for the study of thunderstorms and other weather phenomena.

Recognizing the lack of shared computing resources available to researchers in the United States, the National Science Foundation solicited the development of a system of supercomputers at sites across the United States, connected by a fast network, to be used by the U.S. research community. The TeraGrid, which became operational in 2004, dramatically increased access to supercomputing facilities for U.S. scientists and established the model that is still used in 2019; it was succeeded by the eXtreme Science and Engineering Discovery Environment (XSEDE) project, which provides access to modern supercomputing facilities.
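The following is the sketch referenced above: a minimal example of the halo-exchange pattern by which a distributed-memory model swaps boundary data with its neighbors every time step. It assumes a periodic 1D decomposition with one-point halos; the names and decomposition are illustrative, not code from any of the models cited here.

    ! Minimal sketch of a halo exchange with MPI, assuming a periodic
    ! 1D decomposition and one-point halos. Names are illustrative.
    program halo_exchange
      use mpi
      implicit none
      integer, parameter :: nx = 64            ! interior points per rank
      real :: s(0:nx+1)                        ! field with one-point halos
      integer :: rank, nprocs, left, right, ierr
      integer :: status(MPI_STATUS_SIZE)
      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
      left  = mod(rank - 1 + nprocs, nprocs)   ! periodic neighbors
      right = mod(rank + 1, nprocs)
      s = real(rank)                           ! fill with this rank's id
      ! Pass the rightmost interior point right; receive the left halo.
      call MPI_Sendrecv(s(nx), 1, MPI_REAL, right, 0, &
                        s(0),  1, MPI_REAL, left,  0, &
                        MPI_COMM_WORLD, status, ierr)
      ! Pass the leftmost interior point left; receive the right halo.
      call MPI_Sendrecv(s(1),    1, MPI_REAL, left,  1, &
                        s(nx+1), 1, MPI_REAL, right, 1, &
                        MPI_COMM_WORLD, status, ierr)
      if (rank == 0) print *, 'halos on rank 0:', s(0), s(nx+1)
      call MPI_Finalize(ierr)
    end program halo_exchange

In a real cloud model the same pattern is applied in two or three dimensions to every exchanged field, which is why communication can eventually dominate as node counts grow.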
XSEDE and other similar projects have made supercomputing much more accessible to researchers across the globe, although access to large portions of the most powerful computers, required for the highest-resolution thunderstorm simulations, remains extremely competitive. Since the 2010s, the National Science Foundation has funded initiatives such as the Blue Waters project (Bode et al., 2013; Kramer, Butler, Bauer, Chadalavada, & Mendes, 2014), providing access to a petascale machine to scientists who can convince a panel that their proposed work demands the most powerful hardware available and that they have the resources to use such a machine efficiently.

Bryan, Wyngaard, and Fritsch (2003) conducted a series of numerical experiments with the CM1 cloud model (written by NCAR scientist George Bryan and made publicly available) indicating that grid spacings of about 100 m or less were required for a cloud model to "perform as designed" in the simulation of deep moist convection; the inertial subrange was not properly resolved in coarser simulations, and solutions did not converge. Fiori, Parodi, and Siccardi (2010) similarly found that supercell simulations require grid spacing of about 200 m as a minimum resolution to properly converge, and that convergence occurs only when a sufficiently sophisticated turbulence closure method is used. Despite these and other studies indicating the importance of adequate resolution for studying thunderstorms, in the late 2010s simulations of idealized supercells run at grid spacings of 250 m or greater are regularly reported in the literature. Much valuable work continues to be done at these coarser resolutions, as is evident from the huge body of work dating back to the first 3D simulations of thunderstorms; however, re-conducting many of these simulations at higher resolution often results in different interpretations of the simulation. There can be significant complications to running very large numerical simulations, including creating a model that scales efficiently to large node counts, learning how to take advantage of the latest hardware topologies, and dealing with an exponentially increasing data load from model output. However, these types of challenges have always been surmountable. It is clear that many thunderstorm simulations run with much finer grids, with spacings on the order of 10 m (or finer: tornadoes and tornado-like vortices with diameters as small as 10 m have been observed in thunderstorms, requiring grid spacing on the order of 2 m to adequately resolve them), will be required before scientists can hope to understand the nature of tornado genesis, maintenance, and decay well enough to create numerical forecasts of tornado paths from thunderstorms that have yet to form or are in the early stages of formation.

The Path to Fully-Resolved Tornadoes in Simulated Supercells

The exponential increase in computing power available to researchers resulted in breakthrough simulations in the early 21st century in which simulated supercells produced strong tornadoes (Noda & Niino, 2005, 2010; Xue, 2004). These studies utilized the ARPS model with the environmental conditions of the oft-simulated May 20, 1977, Del City supercell. They improved upon Wicker and Wilhelmson (1995) and Grasso and Cotton (1995) by eliminating nested grids and running the entire simulation at much higher resolution (grid spacings on the order of 60 m).
The ARPS simulations utilized Kessler microphysics, however, and it would be a decade before simulations at comparable resolution were conducted with more physically appropriate ice microphysics and in domains large enough to avoid problems associated with interaction between the storm and the model's lateral boundaries over the storm's full lifespan.

An issue that remains a topic of debate is the use of "free-slip" boundary conditions in simulations of thunderstorms. The free-slip surface boundary condition eliminates the effects of surface friction at the model's bottom boundary, which in all simulations discussed thus far is perfectly flat, an obviously idealized approximation to the real Earth's surface. It is well understood that friction with the Earth's surface plays a key role in the life cycle of extratropical cyclones, as well as in shaping the structure of the atmospheric boundary layer. In a cloud model, friction is typically introduced through a drag coefficient that reduces the horizontal winds at the model's lowest grid level (commonly via a bulk surface stress proportional to C_d|V_h|V_h, where V_h is the horizontal wind at that level). Its inclusion in cloud models results in more physically realistic thunderstorm outflow, such as at the leading edges of gust fronts (Mitchell & Hovermale, 1977). However, the anomalously strong, local winds in the tornadic region of a supercell thunderstorm do not necessarily lend themselves well to this approach, which implicitly assumes a logarithmic wind profile beneath the lowest scalar level of models that use an Arakawa C mesh, such as CM1. In cases where insufficient vertical grid spacing results in the effects of surface friction being felt over too deep a layer, the cold pool structure of simulated supercells is modified, and this can have a cascading effect on storm morphology in the tornadic region. Further, surface friction has been identified as being of key importance in the formation of simulated tornadoes in ARPS simulations of supercells and mesoscale convective systems (Roberts, Xue, Schenkman, & Dawson, 2016; Schenkman, Xue, & Dawson, 2016; Schenkman, Xue, & Hu, 2014; Schenkman, Xue, & Shapiro, 2012), and recent high-resolution simulations of downbursts and downburst-producing thunderstorms require surface drag in order to obtain realistic wind profiles near the ground (Orf, Kantor, & Savory, 2012). Friction is also found, somewhat counterintuitively, to be necessary for the strongest near-surface winds found in real tornadoes, a result supported by theory and numerical simulation (Davies-Jones, 2015). As thunderstorm simulations continue to be run on finer and finer grids, the effects of surface friction parameterization should be reduced, as sub-meter vertical grid spacings will allow the physically proper no-slip surface condition to be used effectively. Currently, there is no clear consensus on what surface boundary condition is most appropriate for contemporary idealized supercell simulations (Markowski, 2016, 2018).

Orf, Wilhelmson, Lee, Finley, and Houston (2017) presented results from a 30 m grid spacing, free-slip supercell simulation run in a domain spanning 120 km x 120 km x 20 km and containing 1.8 billion grid points. The simulated supercell produced a long-track, EF5-strength tornado in the environment that produced the similar May 24, 2011, El Reno, OK tornadic supercell.
This work was noteworthy in that data over a large domain were saved at very high temporal frequency (as often as every model second) and visualized and animated in insightful ways (Orf, 2019; Orf et al., 2014; Orf, Wilhelmson, & Wicker, 2016). Further, they used an isotropic (∆x = ∆y = ∆z) mesh over the bottom 10 km of a 60 km x 60 km region centered on the supercell, with vertical stretching from 10 to 20 km AGL and horizontal stretching over 60 km along the periphery of the domain. A feature they dubbed the Streamwise Vorticity Current (SVC) was identified within the near-surface cold pool of the storm's forward flank, corresponding to a significant drop in pressure below 2 km as well as a lowering toward the ground of extremely strong updraft winds. Unpublished simulations in which surface friction was turned on at the beginning of the simulation produced significantly different results, with no long-track EF5 tornado, in contrast to Schenkman et al. (2012) and Roberts et al. (2016), who found that friction was required to produce their strongest tornadoes. Using a different environment but otherwise identical parameters to Orf et al. (2017), Finley (2018) found a similar relationship between the SVC and a strong, long-lived tornado in a simulation using the April 27, 2011, southeastern United States outbreak environment as the model's base state. Figure 1 contains a still image from a 15 m isotropic simulation of the May 24, 2011, violently tornadic supercell described in Orf et al. (2017), providing an example of state-of-the-art thunderstorm modeling and visualization in the late 2010s. In this image, features such as subtornadic vortices, a wall cloud and turbulent tail cloud, and rain shafts in the storm's rear flank are resolved in fine detail, as is a violent multiple-vortex tornado pendant from a barrel-like cloud, bearing a strong resemblance to a real storm.

Mashiko and Niino (2017) conducted a tornado-resolving simulation at 10 m horizontal grid spacing utilizing the Japan Meteorological Agency Nonhydrostatic Model (JMANHM; Saito et al., 2006), incorporating the effects of surface friction and utilizing a vertical mesh whose spacing stretched from 10 to 50 m below the model top of 10.8 km. Their simulation contained a multiple-vortex tornado exhibiting vertical winds as high as 50 m s−1 at 10 m above ground level. Utilizing CM1 with parameters similar to those of Orf et al. (2017), including free-slip boundary conditions (specifically "zero surface strain" in both simulations; an alternative free-slip formulation, "zero surface strain gradient," sets the vertical derivative of surface strain to zero and did not produce a long-track EF5 tornado in unpublished simulations otherwise identical to those of Orf et al., 2017), Yao et al. (2018) simulated a supercell with 25 m horizontal grid spacing and 20 m vertical grid spacing below 1 km, using atmospheric conditions adjacent to an observed supercell that produced an EF4 tornado in Jiangsu Province, China, on June 23, 2016. As has been the case since the 1960s, rapidly changing computer technology continues to provide new opportunities for simulating thunderstorms at higher resolution and with increasingly sophisticated physics parameterizations for hydrometeors, friction, radiation, and turbulent kinetic energy.
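In symbols (notation mine, following the wording of the parenthetical above), with $\tau_{13}$ denoting the relevant surface stress/strain component, the two free-slip formulations can be written

$$ \text{zero surface strain: } \tau_{13}\big|_{z=0} = 0, \qquad \text{zero surface strain gradient: } \left.\frac{\partial \tau_{13}}{\partial z}\right|_{z=0} = 0. $$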
The recent advent of graphics processing units (GPUs) as computational platforms in supercomputers is having a significant impact across all fields of computational science, including atmospheric science (e.g., Leutwyler, Fuhrer, Lapillonne, Lüthi, & Schär, 2016; Schalkwijk, Griffith, Post, & Jonker, 2012; Vanderbauwhede & Takemi, 2013). New programming models such as OpenACC and CUDA have been developed to help scientists efficiently exploit these massively parallel devices. Summit, the world's fastest supercomputer in late 2018, obtains 95% of its computing power from GPUs. Programmers must modify their code to efficiently utilize GPUs on hybrid computing platforms, and this can be a time-consuming process, as large sections of code often must be refactored to run efficiently.

The “Big Data” Problem

Doubling the resolution of a 3D model results in 2³ = 8 times more grid volumes and requires a halving of the model's time step in order to maintain computational stability. The roughly 16 times more calculations required to run such simulations (see the expression below) underscores the need for a steady progression in supercomputer capability over the coming decades in order to continue to advance knowledge in the atmospheric (and other) sciences. Moving data between memory and disk (input/output, or I/O) continues to be significantly slower than moving data within computer memory, and I/O can become a severe bottleneck in increasingly high-resolution simulations on modern supercomputers. While I/O bandwidth and storage capacity have both increased exponentially since the 1970s, the time it takes to complete a simulation in which data is saved frequently to disk can be overwhelmingly dominated by I/O. This time can be greatly reduced by writing code that utilizes some form of parallel I/O, either by writing multiple files concurrently on the fast shared file systems commonly found on supercomputers or by using routines in which multiple processors write files concurrently and efficiently. I/O continues to be a significant bottleneck for researchers requiring the frequent saving of the large amounts of data often necessary to fully analyze a high-resolution simulation with rapidly evolving flows. Approaches for handling this issue range from the in situ approach (Bauer et al., 2016), where visualization and analysis are done "live" while the simulation is running on the supercomputer, to the post hoc approach of saving large quantities of compressed data to disk for later analysis and visualization. Lossy floating-point compression can dramatically reduce (by one or two orders of magnitude) the amount of data written to disk at the expense of some floating-point precision, and the accuracy needed in saved data can be specified up front with algorithms such as ZFP (Lindstrom, 2014; Orf, 2017). It is likely that future supercomputers will involve topologies quite different from those found in 2019, in part because I/O, communication, and processing do not all scale at the same rate as simulations increase in size, and frequent large-scale saving of data to disk may no longer be feasible using current techniques and hardware topologies.
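Stated compactly (assuming, as is standard, that the stable time step scales linearly with the grid spacing through the Courant condition), halving the grid spacing multiplies the work per simulated second by

$$ \frac{\mathrm{cost}(\Delta x / 2)}{\mathrm{cost}(\Delta x)} = 2^3 \times 2 = 16, \qquad \text{i.e.,}\quad \mathrm{cost} \propto \Delta x^{-4}, $$

while memory grows by the factor 2³ alone.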
Because modern distributed-memory supercomputing architectures allow simulations to span thousands of nodes, models with naive approaches to I/O can spend the vast majority of their time writing data to disk rather than doing calculations, a situation that indicates poor scaling and one that is generally not tolerated on supercomputers. Data from large, complicated simulations can take years to study, making a purely in situ approach infeasible. Self-describing scientific data formats such as netCDF and HDF are useful for storing model output, and parallel methods for writing these file formats have been developed. Scientists wishing to minimize the I/O overhead in high-resolution thunderstorm simulations have several options. If only a small portion of the domain is being studied, data from only that region and its immediate surroundings may be saved, reducing the amount written to disk. Researchers may choose to save only model prognostic variables, calculating diagnostic quantities later as needed. Data compression can be applied to 3D floating-point arrays before they are written to disk, with decompression occurring on the fly when the data is later read back. Data compression may be lossless (the original data is perfectly reconstructed) or lossy (some accuracy is sacrificed in order to reduce the number of bytes written to disk). Lossless compression algorithms typically cannot achieve very large compression ratios due to the restriction of bit-perfect reconstruction. As an example of the effectiveness of lossy ZFP compression, domain-wide compression ratios of over 200:1 for microphysical variables have been achieved while preserving enough accuracy for analysis and visualization in simulations containing billions of grid zones (Orf, 2017). Specifying the required accuracy up front allows the algorithm to compress only as much as needed to achieve that accuracy, resulting in compression ratios greatly exceeding those of lossless methods (Lindstrom, 2014).

The Road Ahead

The rapid advancement of supercomputing technology and numerical model development has resulted in the ability to conduct simulations of thunderstorms that bear a striking resemblance to observations. Despite this encouraging trend, much work remains in order to answer fundamental questions about the formation and evolution of damaging thunderstorm phenomena such as tornadoes, downbursts, and flash flooding. While idealized simulations of thunderstorms provide critical information on these features, the ability to predict their behavior accurately remains one of the grandest forecasting challenges. The road ahead will undoubtedly involve the use of ensembles of simulations to obtain better probabilistic forecasts of severe weather events and to provide useful information on the internal variability of thunderstorm simulations in an idealized framework. As resolution increases further, the inclusion of more realistic surface features, including accurate topography, orography, and the natural and built environment, will be necessary. In order to properly simulate the damage that supercell-induced tornadoes cause to the built environment, structure and debris models will need to be coupled to thunderstorm simulations. Machine learning will be used to improve upon existing parameterizations of cloud microphysics, and computationally intensive bin microphysics models will also become more common.
In the future, operational numerical weather prediction models that forecast on the continental scale will be run at resolutions of hundreds of meters and will contain hundreds of vertical levels, and these models will regularly produce the full spectrum of observed thunderstorm types, including tornado-producing supercells. Ensembles of operational model simulations run at sub-kilometer resolutions could provide probabilistic forecasts of thunderstorms and of the potential for these storms to form tornadoes. In order for future high-resolution numerical weather prediction simulations involving deep moist convection to be of operational use, the models will need to be initialized with an accurate, very high-resolution set of initial conditions, and it is likely that these conditions will be supplied primarily by weather satellites and other forms of remote sensing technology that can quickly scan wide volumes of the atmosphere. As always, advances in our understanding of thunderstorms will come from advances in the three main pillars of meteorological research: observations, theory, and numerical modeling.

Acknowledgments

Figure 1 was produced using the open source VisIt software on the Blue Waters supercomputer. The author's current research is supported by the Cooperative Institute for Meteorological Satellite Studies at the University of Wisconsin–Madison and the National Science Foundation. One anonymous reviewer provided useful feedback that helped improve the scope and organization of the article. The author would especially like to thank Dr. Robert Schlesinger for taking time to discuss his personal experience in conducting some of the first 3D numerical simulations of thunderstorms and the immense challenges faced by modelers at the dawn of the modern computing age.

Further Reading

Cotton, W. R., Bryan, G., & van den Heever, S. C. (2010). Storm and cloud dynamics. Cambridge, MA: Academic Press.
Doswell, C. A., III. (2001). Severe convective storms (Meteorological Monographs). Boston, MA: American Meteorological Society.
Markowski, P., & Richardson, Y. (2011). Mesoscale meteorology in midlatitudes. Hoboken, NJ: John Wiley & Sons.

References

Anderson, J. R., Orf, L. G., & Straka, J. M. (1990). A 3-D model system for simulating thunderstorm microburst outflows. Meteorology and Atmospheric Physics, 49(1–4), 125–131.
Arakawa, A. (1966). Computational design for long-term numerical integration of the equations of fluid motion: Two-dimensional incompressible flow. Part I. Journal of Computational Physics, 1(1), 119–143.
Bauer, A. C., Abbasi, H., Ahrens, J., Childs, H., Geveci, B., Klasky, S., . . . Bethel, E. W. (2016). In situ methods, infrastructures, and applications on high performance computing platforms. Computer Graphics Forum, 35(3), 577–597.
Bode, B., Butler, M., Dunning, T., Hoefler, T., Kramer, W., Gropp, W., & Hwu, W.-M. (2013). The Blue Waters super-system for super-science. In J. S. Vetter (Ed.), Contemporary high performance computing: From petascale toward exascale (pp. 339–366). London, U.K.: Chapman and Hall.
Braham, R. R. (1996). The Thunderstorm Project: 18th Conference on Severe Local Storms luncheon speech. Bulletin of the American Meteorological Society, 77(8), 1835–1846.
Bryan, G. H., & Fritsch, J. M. (2002). A benchmark simulation for moist nonhydrostatic numerical models. Monthly Weather Review, 130(12), 2917–2928.
Bryan, G. H., Wyngaard, J. C., & Fritsch, J. M. (2003). Resolution requirements for the simulation of deep moist convection. Monthly Weather Review, 131(10), 2394–2416.
Byers, H. R., & Braham, R. R., Jr. (1949). The thunderstorm [Technical report]. Washington, D.C.: United States Department of Commerce.
Charney, J. (1955). The use of the primitive equations of motion in numerical prediction. Tellus, 7(1), 22–26.
Cotton, W. R., & Tripoli, G. J. (1978). Cumulus convection in shear flow—Three-dimensional numerical experiments. Journal of Atmospheric Sciences, 35(8), 1503–1521.
Cotton, W. R., Tripoli, G. J., Rauber, R. M., & Mulvihill, E. A. (1986). Numerical simulation of the effects of varying ice crystal nucleation rates and aggregation processes on orographic snowfall. Journal of Applied Meteorology and Climatology, 25(11), 1658–1680.
Davies-Jones, R. (2015). A review of supercell and tornado dynamics. Atmospheric Research, 158–159, 274–291.
Deardorff, J. W. (1970). A numerical study of three-dimensional turbulent channel flow at large Reynolds numbers. Journal of Fluid Mechanics, 41(2), 453–480.
Droegemeier, K. K., & Wilhelmson, R. B. (1985). Three-dimensional numerical modeling of convection produced by interacting thunderstorm outflows. Part I: Control simulation and low-level moisture variations. Journal of Atmospheric Sciences, 42(22), 2381–2403.
Finley, C. A. (2018). High-resolution simulation of a violent tornado in the 27 April 2011 outbreak environment. Paper presented at the 29th Conference on Severe Local Storms, Stowe, VT: American Meteorological Society.
Finley, C. A., Cotton, W. R., & Pielke, R. A. (2001). Numerical simulation of tornadogenesis in a high-precipitation supercell. Part I: Storm evolution and transition into a bow echo. Journal of Atmospheric Sciences, 58(13), 1597–1629.
Fiori, E., Parodi, A., & Siccardi, F. (2010). Turbulence closure parameterization and grid spacing effects in simulated supercell storms. Journal of Atmospheric Sciences, 67(12), 3870–3890.
Grasso, L. D., & Cotton, W. R. (1995). Numerical simulation of a tornado vortex. Journal of Atmospheric Sciences, 52(8), 1192–1203.
Jacob, R., & Anderson, J. (1992). Do-it-yourself massively parallel supercomputer does useful physics. Computers in Physics, 6(3), 244–251.
Johnson, K. W., Bauer, J., Riccardi, G. A., Droegemeier, K. K., & Xue, M. (1994). Distributed processing of a regional prediction model. Monthly Weather Review, 122(11), 2558–2572.
Kessler, E. (1969). On the distribution and continuity of water substance in atmospheric circulations. In E. Kessler (Ed.), Meteorological monographs (Vol. 10, pp. 1–84). Boston, MA: American Meteorological Society.
Klemp, J. B., & Rotunno, R. (1983). A study of the tornadic region within a supercell thunderstorm. Journal of Atmospheric Sciences, 40(2), 359–377.
Klemp, J. B., & Wilhelmson, R. B. (1978a). The simulation of three-dimensional convective storm dynamics. Journal of Atmospheric Sciences, 35(6), 1070–1096.
Klemp, J. B., & Wilhelmson, R. B. (1978b). Simulations of right- and left-moving storms produced through storm splitting. Journal of Atmospheric Sciences, 35(6), 1097–1110.
Klemp, J. B., Wilhelmson, R. B., & Ray, P. S. (1981). Observed and numerically simulated structure of a mature supercell thunderstorm. Journal of Atmospheric Sciences, 38(8), 1558–1580.
Knupp, K. R. (1989). Numerical simulation of low-level downdraft initiation within precipitating cumulonimbi: Some preliminary results. Monthly Weather Review, 117(7), 1517–1529.
Kramer, W., Butler, M., Bauer, G., Chadalavada, K., & Mendes, C. (2014). High-performance parallel I/O (pp. 17–31). London, U.K.: Chapman and Hall/CRC.
Lee, B. D., & Wilhelmson, R. B. (1997a). The numerical simulation of nonsupercell tornadogenesis. Part II: Evolution of a family of tornadoes along a weak outflow boundary. Journal of Atmospheric Sciences, 54(19), 2387–2415.
Lee, B. D., & Wilhelmson, R. B. (1997b). The numerical simulation of non-supercell tornadogenesis. Part I: Initiation and evolution of pretornadic misocyclone circulations along a dry outflow boundary. Journal of Atmospheric Sciences, 54(1), 32–60.
Lee, B. D., & Wilhelmson, R. B. (2000). The numerical simulation of nonsupercell tornadogenesis. Part III: Parameter tests investigating the role of CAPE, vortex sheet strength, and boundary layer vertical shear. Journal of Atmospheric Sciences, 57(14), 2246–2261.
Leutwyler, D., Fuhrer, O., Lapillonne, X., Lüthi, D., & Schär, C. (2016). Towards European-scale convection-resolving climate simulations with GPUs: A study with COSMO 4.19. Geoscientific Model Development, 9(9), 3393–3412.
Lilly, D. K. (1962). On the numerical simulation of buoyant convection. Tellus, 14(2).
Lin, Y.-L., Farley, R. D., & Orville, H. D. (1983). Bulk parameterization of the snow field in a cloud model. Journal of Applied Meteorology and Climatology, 22(6), 1065–1092.
Lindstrom, P. (2014). Fixed-rate compressed floating-point arrays. IEEE Transactions on Visualization and Computer Graphics, 20(12), 2674–2683.
Markowski, P. (2018). A review of the various treatments of the surface momentum flux in severe storms simulations: Assumptions, deficiencies, and alternatives. Paper presented at the 29th Conference on Severe Local Storms, Stowe, VT: American Meteorological Society.
Markowski, P. M. (2016). An idealized numerical simulation investigation of the effects of surface drag on the development of near-surface vertical vorticity in supercell thunderstorms. Journal of Atmospheric Sciences, 73(11), 4349–4385.
Mashiko, W., & Niino, H. (2017). Super high-resolution simulation of the 6 May 2012 Tsukuba supercell tornado: Near-surface structure and its evolution. SOLA, 13, 135–139.
Mitchell, K. E., & Hovermale, J. B. (1977). A numerical investigation of the severe thunderstorm gust front. Monthly Weather Review, 105(5), 657–675.
Nishizawa, S., Yashiro, H., Sato, Y., Miyamoto, Y., & Tomita, H. (2015). Influence of grid aspect ratio on planetary boundary layer turbulence in large-eddy simulations. Geoscientific Model Development, 8(10), 3393–3419.
Noda, A. T., & Niino, H. (2005). Genesis and structure of a major tornado in a numerically-simulated supercell storm: Importance of vertical vorticity in a gust front. SOLA, 1, 5–8.
Noda, A. T., & Niino, H. (2010). A numerical investigation of a supercell tornado: Genesis and vorticity budget. Journal of the Meteorological Society of Japan, Series II, 88(2), 135–159.
Ogura, Y. (1962). Convection of isolated masses of a buoyant fluid: A numerical calculation. Journal of Atmospheric Sciences, 19(6), 492–502.
Ogura, Y. (1963). The evolution of a moist convective element in a shallow, conditionally unstable atmosphere: A numerical calculation. Journal of Atmospheric Sciences, 20(5), 407–424.
Ogura, Y., & Phillips, N. A. (1962). Scale analysis of deep and shallow convection in the atmosphere. Journal of Atmospheric Sciences, 19(2), 173–179.
Orf, L. (2017). The use of ZFP lossy compression in tornado-resolving thunderstorm simulations. AGU 2017 Annual Meeting.
Orf, L. (2019). Leigh Orf—Simulation and visualization of thunderstorms, tornadoes, and downbursts.
Orf, L., Kantor, E., & Savory, E. (2012). Simulation of a downburst-producing thunderstorm using a very high-resolution three-dimensional cloud model. Journal of Wind Engineering and Industrial Aerodynamics, 104–106, 547–557.
Orf, L., Wilhelmson, R., Lee, B., & Finley, C. (2014). Genesis and maintenance of a long-track EF5 tornado embedded within a simulated supercell. Paper presented at the 24th Conference on Severe Local Storms, Madison, WI.
Orf, L., Wilhelmson, R., Lee, B., Finley, C., & Houston, A. (2017). Evolution of a long-track violent tornado within a simulated supercell. Bulletin of the American Meteorological Society, 98(1), 45–68.
Orf, L., Wilhelmson, R., & Wicker, L. (2016). Visualization of a simulated long-track EF5 tornado embedded within a supercell thunderstorm. Parallel Computing, 55, 28–34.
Orf, L. G., Anderson, J. R., & Straka, J. M. (1996). A three-dimensional numerical analysis of colliding microburst outflow dynamics. Journal of Atmospheric Sciences, 53(17), 2490–2511.
Orville, H. D. (1964). On mountain upslope winds. Journal of Atmospheric Sciences, 21(6), 622–633.
Orville, H. D. (1965). A numerical study of the initiation of cumulus clouds over mountainous terrain. Journal of Atmospheric Sciences, 22(6), 684–699.
Persson, A. (2005a). Early operational numerical weather prediction outside the USA: An historical introduction. Part I: Internationalism and engineering NWP in Sweden, 1952–69. Meteorological Applications, 12(2), 135–159.
Persson, A. (2005b). Early operational numerical weather prediction outside the USA: An historical introduction. Part II: Twenty countries around the world. Meteorological Applications, 12(3), 269–289.
Persson, A. (2005c). Early operational numerical weather prediction outside the USA: An historical introduction. Part III: Endurance and mathematics–British NWP, 1948–1965. Meteorological Applications, 12(4), 381–413.
Pielke, R. A., Cotton, W. R., Walko, R. L., Tremback, C. J., Lyons, W. A., Grasso, L. D., . . . Copeland, J. H. (1992). A comprehensive meteorological modeling system—RAMS. Meteorology and Atmospheric Physics, 49(1), 69–91.
Roberts, B., Xue, M., Schenkman, A. D., & Dawson, D. T. (2016). The role of surface drag in tornadogenesis within an idealized supercell simulation. Journal of Atmospheric Sciences, 73(9), 3371–3395.
Rotunno, R., & Klemp, J. (1985). On the rotation and propagation of simulated supercell thunderstorms. Journal of Atmospheric Sciences, 42(3), 271–292.
Rotunno, R., & Klemp, J. B. (1982). The influence of the shear-induced pressure gradient on thunderstorm motion. Monthly Weather Review, 110(2), 136–151.
Rotunno, R., Klemp, J. B., & Weisman, M. L. (1988). A theory for strong, long-lived squall lines. Journal of Atmospheric Sciences, 45(3), 463–485.
Rutledge, S. A., & Hobbs, P. (1983). The mesoscale and microscale structure and organization of clouds and precipitation in midlatitude cyclones. VIII: A model for the “seeder–feeder” process in warm-frontal rainbands. Journal of Atmospheric Sciences, 40(5), 1185–1206.
Saito, K., Fujita, T., Yamada, Y., Ishida, J.-I., Kumagai, Y., Aranami, K., . . . Yamazaki, Y. (2006). The operational JMA nonhydrostatic mesoscale model. Monthly Weather Review, 134(4), 1266–1298.
Sathye, A., Xue, M., Bassett, G., & Droegemeier, K. (1997). Parallel weather modeling with the Advanced Regional Prediction System. Parallel Computing, 23(14), 2243–2256.
Schalkwijk, J., Griffith, E. J., Post, F. H., & Jonker, H. J. J. (2012). High-performance simulations of turbulent clouds on a desktop PC: Exploiting the GPU. Bulletin of the American Meteorological Society, 93(3), 307–314.
Schenkman, A. D., Xue, M., & Dawson, D. T., II. (2016). The cause of internal outflow surges in a high-resolution simulation of the 8 May 2003 Oklahoma City tornadic supercell. Journal of Atmospheric Sciences, 73(1), 353–370.
Schenkman, A. D., Xue, M., & Hu, M. (2014). Tornadogenesis in a high-resolution simulation of the 8 May 2003 Oklahoma City supercell. Journal of Atmospheric Sciences, 71(1), 130–154.
Schenkman, A. D., Xue, M., & Shapiro, A. (2012). Tornadogenesis in a simulated mesovortex within a mesoscale convective system. Journal of Atmospheric Sciences, 69(11), 3372–3390.
Schlesinger, R. E. (1973a). A numerical model of deep moist convection: Part I. Comparative experiments for variable ambient moisture and wind shear. Journal of Atmospheric Sciences, 30(5), 835–856.
Schlesinger, R. E. (1973b). A numerical model of deep moist convection: Part II. A prototype experiment and variations upon it. Journal of Atmospheric Sciences, 30(7), 1374–1391.
Schlesinger, R. E. (1975). A three-dimensional numerical model of an isolated deep convective cloud: Preliminary results. Journal of Atmospheric Sciences, 32(5), 934–957.
Schlesinger, R. E. (1980). A three-dimensional numerical model of an isolated thunderstorm. Part II: Dynamics of updraft splitting and mesovortex couplet evolution. Journal of Atmospheric Sciences, 37(2), 395–420.
Shuman, F. G. (1989). History of numerical weather prediction at the National Meteorological Center. Weather and Forecasting, 4(3), 286–296.
Skamarock, W., Klemp, J., Dudhia, J., Gill, D., Barker, D., Wang, W., . . . Duda, M. (2008). A description of the Advanced Research WRF Version 3 [Technical report].
Smagorinsky, J. (1958). On the numerical integration of the primitive equations of motion for baroclinic flow in a closed region. Monthly Weather Review, 86(12), 457–466.
Smarr, L., Kogut, J., Kuck, D., Wilhelmson, R., Wolynes, P., Hess, K., . . . McMeeking, R. (1983). A center for scientific and engineering supercomputing [Proposal].
Srivastava, R. C. (1967). A study of the effect of precipitation on cumulus dynamics. Journal of Atmospheric Sciences, 24(1), 36–45.
Steiner, J. T. (1973). A three-dimensional model of cumulus cloud development. Journal of Atmospheric Sciences, 30(3), 414–435.
Straka, J. M., & Anderson, J. R. (1993). Numerical simulations of microburst-producing storms: Some results from storms observed during COHMEX. Journal of Atmospheric Sciences, 50(10), 1329–1348.
Takeda, T. (1971). Numerical simulation of a precipitating convective cloud: The formation of a “long-lasting” cloud. Journal of Atmospheric Sciences, 28(3), 350–376.
Tripoli, G. J., & Cotton, W. R. (1989). Numerical study of an observed orogenic mesoscale convective system. Part 1: Simulated genesis and comparison with observations. Monthly Weather Review, 117(2), 273–304.
Vanderbauwhede, W., & Takemi, T. (2013). An investigation into the feasibility and benefits of GPU/multicore acceleration of the Weather Research and Forecasting model. In 2013 International Conference on High Performance Computing & Simulation (HPCS) (pp. 482–489).
Weisman, M. L., & Klemp, J. B. (1982). The dependence of numerically simulated convective storms on vertical wind shear and buoyancy. Monthly Weather Review, 110(6), 504–520.
Weisman, M. L., & Klemp, J. B. (1984). The structure and classification of numerically simulated convective storms in directionally varying wind shears. Monthly Weather Review, 112(12), 2479–2498.
Weisman, M. L., & Rotunno, R. (2004). “A theory for strong long-lived squall lines” revisited. Journal of Atmospheric Sciences, 61(4), 361–382.
Wicker, L. J., & Wilhelmson, R. B. (1995). Simulation and analysis of tornado development and decay within a three-dimensional supercell thunderstorm. Journal of Atmospheric Sciences, 52(15), 2675–2703.
Wilhelmson, R. (1974). The life cycle of a thunderstorm in three dimensions. Journal of Atmospheric Sciences, 31(6), 1629–1651.
Wilhelmson, R., Brooks, H., Jewett, B., Shaw, C., Wicker, L., & coauthors. (1989). Study of a numerically modeled severe storm.
Wilhelmson, R., & Ogura, Y. (1972). The pressure perturbation and the numerical modeling of a cloud. Journal of Atmospheric Sciences, 29(7), 1295–1307.
Wilhelmson, R. B., & Chen, C.-S. (1982). A simulation of the development of successive cells along a cold outflow boundary. Journal of Atmospheric Sciences, 39(7), 1466–1483.
Wilhelmson, R. B., Jewett, B. F., Shaw, C., Wicker, L. J., Arrott, M., Bushell, C. B., . . . Yost, J. B. (1990). A study of the evolution of a numerically modeled severe storm. International Journal of Supercomputing Applications, 4(2), 20–36.
Wilhelmson, R. B., & Klemp, J. B. (1978). A numerical study of storm splitting that leads to long-lived storms. Journal of Atmospheric Sciences, 35(10), 1974–1986.
Wilhelmson, R. B., & Klemp, J. B. (1981). A three-dimensional numerical simulation of splitting severe storms on 3 April 1964. Journal of Atmospheric Sciences, 38(8), 1581–1600.
Xue, M. (2004). Tornadogenesis within a simulated supercell storm. Paper presented at the 22nd Conference on Severe Local Storms, Hyannis, MA: American Meteorological Society.
Xue, M., Wang, D., Gao, J., Brewster, K., & Droegemeier, K. K. (2003). The Advanced Regional Prediction System (ARPS), storm-scale numerical weather prediction and data assimilation. Meteorology and Atmospheric Physics, 82(1), 139–170.
Yao, D., Xue, H., Yin, J., Sun, J., Liang, X., & Guo, J. (2018). Investigation into the formation, structure, and evolution of an EF4 tornado in East China using a high-resolution numerical simulation. Journal of Meteorological Research, 32(2), 157–171.
https://oxfordre.com/climatescience/view/10.1093/acrefore/9780190228620.001.0001/acrefore-9780190228620-e-667
Introduction to World History challenges students to think analytically about the major historical processes that shaped and continue to shape cultures and civilizations. The course is based on a series of case studies that focus on shifting power relations between and within civilizations. Three major themes connect the several topics discussed throughout the semester: issues of authority and inequality within civilizations; encounters and conflicts between civilizations; and patterns of continuity and change across space and time. The course demonstrates how historians explain what has happened in the past and in various civilizations and cultures; presents the kinds of evidence that historians use to reconstruct the past; and examines the interpretations historians make based on this evidence. The semester begins with a consideration of culture and power in classical civilizations, and then moves on to address: Byzantine Christianity and Islam; the Spanish and the Aztecs; the emergence of a transatlantic world and the growth of European dominance in the eighteenth and nineteenth centuries; and, finally, the forces of anti-colonialism and new forms of nationalism, as well as ethnic conflicts in the mid to late twentieth century. NOTE: You should always draw on evidence from several readings and from lectures. You do not have to CITE works, but can refer simply to the author's name or title of the book/article. 1. A major consequence of the expansion of European power and culture throughout the world after 1500 was the creation of new and complex identities. This process began with the "Columbian exchange" between the New and Old Worlds, continued as Europeans advanced their commercial interests and, later, political power in Africa and India, and has extended to the present day. Along the way, the formation of new identities encompassed individuals such as Equiano and Gandhi, sections of society such as the first Christians among the Ibo and indigenous soldiers in Queen Victoria's employ, and whole cultures such as Latin American societies. Write an essay in which you consider how and why European imperialism and colonialism shaped new identities. If you need a nudge, think of language, ideas/ideology/religion/beliefs, skin color, occupations, nationality... 2. In this course, we have looked at a number of cases of cultural contact/conflict in world history. Choose three of these cases and write an essay in which you consider the nature of cultural interaction. Was it peaceful or violent? What 'tools' (be they technological or biological) shaped cultural contact? What role did race, religion, cultural practices, and economic interest play in determining the course of cultural interaction? What were some of the unintended consequences of cultural contact? Make sure you offer specific examples as well as a general interpretation of each case you address. 3. Both the Roman Empire of late antiquity and the Islamic empires of the medieval era created international markets and trade. Yet it is only after the conquest of the New World by European states that we speak of the emergence of a world economy. What were the features of the world economy that emerged between 1500 and 1900, and how did they inaugurate a new era in world history? Consider various aspects of economic relations within and between different regions, including trade patterns, the variety and sorts of goods produced, and the types of labor employed and how laborers were treated. 4. 
The first reading for this course was Robbins' "Meaning of Progress." Using examples from at least three of the five case studies we've covered over the term, make an argument for or against interpreting world history as a record of human progress. Make sure you consider evidence from Parts I (China) or II (Mediterranean) as well as the post-midterm parts of the course. The midterm examination will be given on Monday, 26 February in lecture. The midterm will last fifty (50) minutes and will cover all materials presented in class, in the readings, and in discussion sections through 6 October. I will select one of the following study questions. You will NOT be allowed to use notes, books, or any other materials during the examination. I will provide blue books. Please bring two (2) pens with you to the exam. Hints: make sure you consider material from all the readings (i.e., don't forget Robbins). Also, can the images and maps in Gallery Parts I and II provide you with evidence? 1. Imagine that Mencius could have read about the religious beliefs and practices of the ancient Romans (before the spread of Christianity), early Christians (until the 6th c. or so), and early Islam (until about 800 or so). What would he have found sympathetic and what off-putting in each of these religious communities, and why? 2. Between the first century CE and 800 CE, the world religions of Judaism, Christianity, and Islam vied for people's souls in the Mediterranean world. What role did geography and the overlapping 'zones of influence' of the three religions play in the evolution of their doctrines, social message, social/ethnic bases, and organization? The following maps might be helpful for this question (all found on the website, Gallery Part II): 29, 30, 8, 4, 6, 34. 3. In the historical cases we've discussed, each state has had an intimate connection - sometimes friendly, sometimes hostile - to particular religious beliefs and practices and/or to particular philosophies. Write an essay in which you discuss the interaction between the state and religion/philosophy in two of the following cases: a. ancient China b. ancient Rome c. Byzantine Empire d. Islamic empires. 4. Write an essay about the evolution of social stratification from hunting/gathering societies through the emergence of imperial societies in China and the Mediterranean. How did social stratification change as societies became more complex? Did it become more or less pronounced? How did it overlap with gender distinctions? Readings in this course included books, articles, and documents. The following books are required. They are available for purchase at the CMU University Book Store. You may, if you prefer, purchase them elsewhere or online. If you choose to purchase the books online, you should provide the ISBN (International Standard Book Number) so that you are sure to receive the same edition as the rest of the class. The ISBN is given in square brackets after the title and publication information. 
- Chinua Achebe, Things Fall Apart (New York: Anchor Books, 1994; first published 1958) [0-385-47454-7] - Olaudah Equiano, Equiano's Travels: The Interesting Narrative of the Life of Olaudah Equiano or Gustavus Vassa the African, abridged and edited by Paul Edwards (Oxford and New York: Heinemann, 1996; first published 1789) [0-435-90600-3] - Gianni Sofri, Gandhi and India, translated by Janet Sethre Paxia (New York: Interlink Books, 1999) [1-56656-239-2] - Tzvetan Todorov, The Conquest of America: The Question of the Other, translated by Richard Howard (Norman: University of Oklahoma Press, 1998; first published 1982) [0-8061-3137-3] - Arthur Waley, Three Ways of Thought in Ancient China (Stanford: Stanford University Press, 1956) [0-8047-1169-0] Three of these books are to be read in their entirety (Achebe, Equiano, and Sofri). We will read selections from the other two. Page numbers are specified on the syllabus. All other readings are on the Website. Readings are to be completed before the class for which they are scheduled. For example, for Wednesday, 25 October, you are to read two articles, one by Alfred Crosby and the other by Sidney Mintz. You should have read these articles before coming to lecture on that day. Lectures will not simply repeat what is in the readings, but will use the readings as a springboard into deeper discussions of the material. Thus, your comprehension of the lectures will be severely compromised if you have not read the assignments in advance. Completing the reading assignment before the scheduled class is especially important for the discussion sections. Discussion sections will often concentrate on one particular reading. For example, on 1 September, we will discuss "The Meaning of Progress" in detail. Other discussions, however, will focus on the entire week's work, including materials presented in lectures. Sometimes this requires a good bit of advance planning. The secret here is simple: keep up with the readings and you will do well in the class; don't, and you won't. Before reading the selections from Todorov, Conquest of America, you will find it helpful (I hope!) to read the short "Orientation to Todorov" provided on this Webpage. Lectures for sections A-J will meet on Mondays and Wednesdays from 1:30-2:20 p.m. in Doherty Hall 2210; for sections L-U, on Mondays and Wednesdays from 12:30-1:20 p.m. in Doherty Hall 2210. You are expected to attend lectures regularly. Material presented in lectures will complement and expand on the readings and not merely repeat them. I will assume that everyone in the class is familiar with the assigned readings and will lecture accordingly. You should endeavor to take close and careful notes during lectures. For help on note-taking, see the Webpage section on "Help." Please arrive at lectures on time. Discussion sections meet at various times on Fridays. Discussion sections last fifty minutes. Attendance is mandatory. No unexcused absences are allowed; your grade will suffer for each unexcused absence. Excused absences will be given for illness or serious family emergencies. Your excuse must be presented in writing to your Teaching Assistant or to me personally. Sections will be devoted to intense consideration of materials presented in class and, in particular, the readings. You should come to class prepared to participate actively. Please bring with you either the book to be discussed or your notes on the readings (or copies of the readings, if you choose to print them out). 
20% of your grade will depend on your performance in discussion sections. Mere attendance is not enough; you must talk. Teaching Assistants may, at their discretion, give quizzes in their sections that will not be announced in advance (with the exception of the Map Quiz). A Map Quiz will be administered during the second discussion section. We will provide you with a blank map and ask you to identify several geographical and geopolitical features. You should prepare for the quiz by studying the maps presented on the syllabus. There are two kinds of written assignments in this course: examinations and papers. Examinations: There are two examinations. Both examinations require you to write essays. The first is a midterm examination to be given in lecture. The midterm will last fifty (50) minutes and will cover all materials presented in class, in the readings, and in discussion sections. The second is the final examination to be held during finals week at the end of the semester. The exact date will be announced in class. The final will last two (2) hours. The final is comprehensive and will cover the entire semester. One question will ask you to draw on materials from the entire semester; the other question will focus more closely on the second half of the semester. Study questions for the midterm will be distributed. The examination questions will be drawn directly from the study questions. I will choose the question. You will not be allowed to consult books, readings, or notes during the examination. All you need to bring with you are two pens; I will provide the examination bluebooks. Make-up examinations will be given only in the case of illness or a serious family emergency. Once again, these excuses must be presented in writing. You must inform your TA or the professor as soon as possible if you will be missing, or have missed, an exam. Papers: You are expected to write two papers. Late papers will be accepted but severely penalized: one letter grade for each day late. Paper extensions will only be given for illness or severe family emergencies and not for other reasons. You must inform your TA or the professor as soon as you possibly can if you will miss, or have already missed, the due date for a paper. Paper topics are described below. Papers will be graded on your ability to formulate a cogent and coherent argument, to marshal evidence to support your argument, and to present your materials in an organized, fluid manner. Writing style will also be taken into consideration because there is NO difference between what you say and how you say it. Papers are to be free of grammatical, spelling, and typing errors. Points will be taken off for these mistakes. Papers are to be typed on white paper. Staple the pages together in the upper left-hand corner and please do not use binders or folders. Please number pages. All papers must be typed. You can print on one or both sides as you choose. Papers are to be double-spaced. Please use a 12pt font, either Times Roman or Courier. References: For this class, you may use an abbreviated footnote/reference style. Simply put the reference in parentheses where it is required. E.g., to make a reference to page 23 of Todorov's Conquest of America, use (Todorov, 23). TAs will be happy to discuss your papers with you before they are due. They will also be willing to read rough drafts, IF such drafts are presented in a timely manner, at least several days before the paper is due. 
In reading drafts, we will try to help you with arguments, ideas, and writing style, but we will not write the paper for you. No re-writes will be accepted. For more details on writing your paper, see the section in "Help". Plagiarism is a serious academic offense. Plagiarism means to take the ideas, writing, or arguments of others and pass them off as your own. Papers written for this class do not require you to use any materials other than those assigned in class. If you quote directly from a book, you must enclose that material in double quotation marks and indicate the source. In this class, it is sufficient to do so informally, that is, by placing a quick reference in parentheses, e.g. (Todorov, 231). Plagiarism is discussed in The Word, the CMU undergraduate handbook, on pp. 48-49. Please read this passage carefully! I will penalize all cases of plagiarism severely. The most commonly applied penalty will be failure in the course. For more details on writing your paper, see the section in "Help" on Plagiarism. Grades: Final grades will be calculated on the following approximate percentages. PAPER TOPICS Paper Topic 1: due Feb. 16. Please consider the readings on ancient China and ancient Rome -- Waley, Three Ways of Thought in Ancient China, and the selections for Friday, Feb. 9 (Porphyry; Symmachus; Theodosian Code and Codes; selections on Hypatia). Write a four-page essay that discusses what these readings reveal about the different responses of the Chinese and the people of the Mediterranean world to the social and political unrest of their times. Make sure you focus on specific issues such as tradition, custom, law, political authority, community, learning, belief, tolerance, and morality. Paper Topic 2: due April 19. Please write a four-page essay on Chinua Achebe's Things Fall Apart in which you describe and explain the similarities and differences between the Christianization of Ibo culture in the 19th century and EITHER the rise of Christianity in ancient Rome OR the "encounter" between European Christians and Aztec culture in the 16th century. Your paper should consider differences in the beliefs of the polytheistic religions and Christianity, the reasons that some people converted rapidly and others resisted conversion, the methods used by the Christians to win and/or enforce conversion, and the methods of resistance used by the defenders of non-Christian beliefs and practices.
https://www.andrew.cmu.edu/course/79-104/Assignments.html
Software developing organizations strive to achieve flexibility to maintain a competitive advantage. There is no common understanding of what characterizes flexibility for a software organization beyond the scope of the software product. Without a common understanding, it is difficult to evaluate the degrees of flexibility of software development approaches. The aim of this literature review is to collect attributes that characterize flexibility. The collected attributes are consolidated into a flexibility framework with three main attributes: properties of change, flexibility perspectives, and flexibility enablers. The resulting flexibility framework is then used to evaluate Agile and Lean practices. The evaluation shows that Agile and Lean practices address many flexibility attributes. However, some attributes are not addressed, such as infrastructure flexibility and strategic flexibility. On the basis of our evaluation, the classifications of flexibility attributes that we present in this paper could be used to aid software organization flexibility evaluation. Background: Software development organizations frequently face changes that require them to be flexible. The principles and practices of Agile software development are often associated with improving software organizations' flexibility. However, introducing Agile practices has its benefits and limitations. To amplify benefits and alleviate challenges, Agile adoption guidelines have been proposed to provide strategies for introducing Agile practices. One instance of such guidelines is known as Agile Maturity Models (AMMs). AMMs typically suggest that Agile practices are introduced in certain orders. However, AMMs provide contradictory strategies. Thus, it is not known whether one strategy for introducing Agile practices is better than others. Objective: The objective of this thesis is to gather and examine the evidence on the different strategies of introducing Agile practices, particularly on the order of introduction as suggested in the AMMs. The thesis investigates whether one order for introducing Agile practices is better than others. Method: A combination of empirical studies was used in this thesis. The data collection was done through a survey and semi-structured interviews. This involved analyzing the introduction of Agile practices over time, i.e., the start and/or end of Agile practices. A qualitative method, qualitative coding, was used to analyze data obtained from the interviews. Different quantitative methods, such as inferential statistics and social network analysis, were also used. Literature studies were also conducted to provide background and support for the empirical studies. Results: The examination of the evidence indicates that there is not one strategy for introducing Agile practices that would yield better results than others. The lack of conclusive evidence could be caused by the lack of consideration given to reporting the context of empirical studies, particularly the baseline situation, i.e., the situation prior to Agile introduction. A checklist is proposed to capture baseline contextual information, focusing on internal organizational aspects of a software organization: the constellation of team members' skills and experience, management principles, existing practices, and system characteristics of the software under development. The checklist was validated by seven experts in academia. The experts who participated in the validation perceived the checklist to be useful and relevant to research. 
Conclusion: The studies presented in this thesis can be a useful input for researchers who are conducting empirical studies in Agile software development. The checklist proposed in this thesis could be used to help researchers improve their research design when evaluating the extent of improvements from introducing Agile practices. If researchers use the checklist, consistency across empirical studies can be improved. Consistency in reporting empirical studies is desired for comparing and aggregating evidence. In turn, this will help practitioners to make a fair assessment of whether research results are relevant to their contexts and to what extent the results are helpful for them.
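To illustrate the mechanics of such a framework-based evaluation, here is a minimal sketch in Python. The attribute names and the practice-to-attribute mapping below are hypothetical stand-ins, not the thesis's actual coding scheme; the point is only how one might record which flexibility attributes a set of practices addresses and then report the gaps (e.g., infrastructure and strategic flexibility).

```python
# Minimal sketch: evaluating practices against a flexibility framework.
# All attribute names and mappings are hypothetical, loosely echoing the
# framework's three main attribute groups described above.

FRAMEWORK = {
    "properties_of_change": {"anticipation", "frequency", "magnitude"},
    "flexibility_perspectives": {"process", "product", "infrastructure", "strategic"},
    "flexibility_enablers": {"modularity", "cross_training", "short_feedback_loops"},
}

# Hypothetical mapping of Agile/Lean practices to the attributes they address.
PRACTICE_COVERAGE = {
    "daily_standup": {"frequency", "process", "short_feedback_loops"},
    "continuous_integration": {"modularity", "process", "short_feedback_loops"},
    "kanban_wip_limits": {"magnitude", "process"},
}

def unaddressed_attributes(practices: dict) -> list:
    """Return framework attributes not covered by any of the given practices."""
    covered = set().union(*practices.values())
    all_attributes = set().union(*FRAMEWORK.values())
    return sorted(all_attributes - covered)

if __name__ == "__main__":
    # With this toy mapping, the gaps include "infrastructure" and
    # "strategic", mirroring the kind of result summarized above.
    print("Unaddressed attributes:", unaddressed_attributes(PRACTICE_COVERAGE))
```

A real evaluation would of course derive the coverage mapping from the literature rather than assert it, but the same gap-finding step applies.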
http://bth.diva-portal.org/smash/record.jsf?pid=diva2:1191753
Closing the Pay Gap for Ontario Workers This year on International Women's Day, Iceland made it illegal to pay women differently than men. Employers there must prove they offer equal pay regardless of gender, ethnicity, sexuality, or nationality. While other programs like this exist, Iceland has made its decision legally binding. In Canada, the wage gap persists, even when education, occupation, experience, and hours of work are consistent. It takes Ontario women 15 1/2 months to earn what Ontario men do in 12; since 15 1/2 months is roughly 30 percent longer than 12, that's about a 30 percent pay gap! The gap is even wider for Indigenous women, racialized and immigrant women, and women with disabilities. But it isn't just the pay gap. According to the Canadian Centre for Policy Alternatives, the increase in precarious, low-wage work in Ontario is part of a seismic shift in the type of work being created and the way work is organized. "People used to work cleaning offices alongside their colleagues who worked in those offices. They are now working for cleaning contractors who compete to be the lowest cost (lowest wage) bidder." Our experiences in the workforce are connected to legislation. The Employment Standards Act sets the minimum floor for pay and working conditions, while the Ontario Labour Relations Act sets rules for the relationships between employers and unions. What is so powerful about the legislation in Iceland is that employers can be prosecuted for the unequal treatment of workers. In May, labour and community groups in Ontario received the final report of the Changing Workplaces Review. The report proposes a number of recommendations, including extending employment standards to more workers, improving workplace protections, and investing in more rigorous labour law enforcement. However, the review fails to address some fundamental reforms sought by labour and community groups. The recommendations fail to reintroduce card-based certification or other models that make it easier for workers to organize collectively. They fail to address paid sick days, adequate vacation, and just cause protection for all Ontario workers. To raise the bar for all workers, ETFO has joined the Ontario Federation of Labour in calling for changes to the Labour Relations Act and the Employment Standards Act to address issues including equal pay, benefits, and working conditions. Many organizations have banded together calling on the government to raise the wage floor for everyone in Ontario. As the $15 and Fairness Campaign says, "We need a $15 minimum wage, 7 paid sick days, equal pay for part-time, temporary and contract workers, better rights for temp agency workers, advanced schedules, the right to unionize and respect at work." As teachers, we see how poverty and precarious work affect our students. We know these changes are essential to our students, their parents, and our communities. While the review's mandate excluded consideration of a raise of the minimum wage, it remains an urgent issue for the government to address and one that disproportionately affects women in the workforce. As educators, we are privileged to have the backing of a strong union. Our numbers ensure we are heard when we speak about our issues and those of the broader community. Whether it is to address workplace violence, student testing or the funding formula, we are united in our action. 
Creating better working conditions for all Ontarians, closing the pay gap, and working to ensure that everyone has access to decent working conditions and fair compensation is our responsibility as members of our communities. Building and strengthening relationships with parents and families, school councils, and community organizations and agencies creates important alliances. These alliances help us better advocate for public education and the equity and social justice issues that are our priorities as an organization. Together we must continue to organize in our communities, building mutually supportive relationships, and raising the bar for workers in Ontario. When faced with a regressive government that consistently demonstrates its backwards logic, we must double down on our commitments to one another, to our collective well-being and to equity and social justice.
Risperidone is a medication administered in various mental and nervous disorders. It is administered orally in the form of tablets and is available with prescription only. It belongs to the antipsychotic category of drugs and has many side effects. This article describes what Risperidone is, in which conditions it is prescribed and in which it is contraindicated, what its uses and side effects are, and how severe the side effects can be. Reviews from people who have used Risperidone have been included. As an alternative, herbal remedies from Planet Ayurveda are suggested for mental and nervous disorders. Introduction Risperidone, also known by the name 'Risperdal', is an atypical antipsychotic drug used for various nervous and mental disorders and their psychotic symptoms. It is most commonly used for the treatment of bipolar disorder, schizophrenia, psychosis associated with schizophrenia, and even irritability and mood swings. It is prescribed with the purpose of restoring normal brain function. It is usually used as an oral medication, but sometimes it can also be injected into muscles. The oral or liquid form of the medication is advised to be taken in a specific amount with 100 ml of water, juice, skimmed or low-fat milk, or coffee. A fresh solution has to be prepared every time. The exact amount of the medication depends on the person's medical condition and symptoms, age, any other ongoing medications, and response to treatment. Usually a low dose is advised at first and it is gradually increased. What are its Precautions and Side Effects? Antipsychotic medications are basically tranquilizers with a strong sedative effect. The prescription of antipsychotic drugs alone doesn't ensure relief. People on such medications need constant supervision because they can cause harm to themselves as well as to people around them. There are many precautions that need to be followed while using such drugs. This medication can trigger allergic reactions and serious problems if administered to people with a medical history of diabetes, high cholesterol, heart disease, liver disease, kidney disease, epilepsy, respiratory disorders, sleep abnormalities, dementia, blood count abnormalities, or Parkinson's disease. Risperidone can affect the normal heart rhythm by affecting how long the ventricles take to relax and get back to work again between two consecutive heartbeats. It can make the heart rate abnormally fast or irregular, which is itself a serious condition. The risk of this complication also increases with lower-than-normal concentrations of certain dietary minerals in the body. Risperidone reduces or even turns off sweating and causes the body to heat up and retain heat. An overheated body caused by Risperidone produces symptoms like persistent fever, headache, mood changes, dizziness and diarrhea, and needs immediate medical care. If Risperidone is taken during pregnancy, the baby can have stiffness and shakiness in its muscles, continuous crying, drowsiness and breathing difficulties, inability to feed, etc. If taken by lactating mothers, the medication also passes on to the infant through breast milk. Some people who have used this medication have been kind enough to post reviews about their experiences, which can be summarized below: - Even a low dose makes you dizzy all the time and unable to stand up. - Extreme nausea is felt. - The person is unable to carry on daily activities. - It causes some people to hear sounds that aren't actually there. 
- It becomes less effective over time and the dose needs to be increased. - It causes migraines in some people. - Severe acne, hair loss and soreness of the face have also been reported. - It causes dehydration. - Irritation in the eyes, the eyelids and the eyebrows has also been experienced. - Excess weight gain in some people. - Hunger pangs and lack of motivation. - Suicidal thoughts in some people. - It causes severe hormonal disturbance in both men and women. - Women missed their periods and started lactating even when not pregnant. - Loss of personality and confidence in many people. - Loss of emotions and empathy. - It is reported to be extremely addictive. - Some people start bed wetting. - Many people have advised others to never take this medication. Are there any alternatives? Yes, thankfully there are alternative treatments in Ayurveda. Ayurvedic preparations tend to have zero or minimal side effects. Also, these medications never cause imbalances of hormones or chemicals in the body. Herbal Remedies by Planet Ayurveda for Nervous and Mental Disorders Planet Ayurveda manufactures pure herbal formulations that are used as medications and tonics worldwide. These formulations are carefully sourced from authentic texts on Ayurveda written in ancient India thousands of years ago. The formulations are chemical-free and additive-free and thus safe for consumption by people of all ages and individual conditions. The products are suitable for people with all dietary preferences and restrictions since they are completely plant-based, halal and completely organic. A combination has been created by experts at Planet Ayurveda for the Ayurvedic management of nervous and mental disorders. Product List - Brahmi Capsules - Ashwagandha Capsules - Neuroplan Syrup - Unmad Gaj Kesari Ras Tablets Product Description 1. Brahmi Capsules Pure herbal capsules consisting of Water Hyssop (Bacopa monnieri) are excellent remedies for the brain and its disorders. They also prevent and relieve side effects caused by other medications. Dosage: 2 capsules twice daily with plain water after meals. 2. Ashwagandha Capsules These pure herbal capsules made of pure Ashwagandha (Withania somnifera) are excellent for relieving nervous breakdown and abnormalities. Ashwagandha is one of the most powerful multi-beneficial herbs. Dosage: 2 capsules twice daily with plain water after meals. 3. Neuroplan Syrup This is a poly-herbal formulation consisting of Brahmi (Bacopa monnieri), Mandukaparni (Centella asiatica), etc., especially created for restoring nerve and brain health. It is also capable of balancing hormonal imbalances. Dosage: 2 tsp daily after meals. 4. Unmad Gaj Kesari Ras Tablets This is an herbal formulation of heavy metals carefully prepared with herb juices. It includes purified extracts of mercury, sulphur, sulphides of arsenic, etc., and herbal decoctions. It is then made into pills or tablets of 250 mg. Dosage: 1 tablet twice a day, to be chewed with ghee after meals. Contact my assistant for costing, ordering and delivery information at [email protected] or call +91-172-5214030, or check the website: www.PlanetAyurveda.com CONCLUSION In modern medicine, all illnesses are treated based on symptoms. That is why modern medications are extremely powerful: they not only reduce the symptoms of one illness but also suppress other, positive functions of the body. 
In the case of mental illnesses, modern medicine can prove extremely harmful because the list of side effects and complications is often longer than the list of uses and benefits. It is recommended that, in the case of mental illnesses, only naturally formulated medications be used, since they are less powerful and do not cause harm. They are just tonics that we are using as medicines. Herbal remedies are always better than other medications because we do not want side effects at the cost of mental health.
https://www.naturalayurvedictreatment.com/risperidone-and-its-ayurvedic-alternative/
By Verene Samuel. Interior Design. Published Tuesday, March 13th, 2018 - 08:12:06 AM. The latest home design and color trends are in. Some of them might even surprise you! Let this be your year. Here's our roundup of what you'll be seeing in 2018. If you have a room that doesn't allow for much natural light, if any, try using light fixtures with softer light bulbs to help. Or, use mirrors strategically to help reflect the natural light around the room. Another trend that's carrying on from 2018 is metallic accents. Bronze, gold and silver will be hot this year. You can expect to see them more often in home décor items and accessories. However, now metallic prints are adorning walls as backsplashes and furniture as well. Keep them in mind as finishes for appliances or fixtures. Another budget-friendly idea to improve the décor of your room is to change your lighting. Bright, harsh lamps and bulbs can cause feelings of sadness and negativity. Whalen suggests more natural light when possible.
http://something-fishy.net/design-interior-archways/
Taking stock of the political and economic questions surrounding the Red Sea The increasing importance of the Red Sea as a nexus of geopolitical competition poses security problems for countries in the region. A lack of consensus on the rules of competition among international and regional powers has only intensified the frenzied race of both state and non-state actors over natural resources and spheres of influence there, while the balance of power in the region continues to fluctuate. Stakeholders have had to reassess their outlook and strategies in the light of various factors. First, there is an intensive security and military presence to safeguard freedom of navigation, secure the movement of trade and commerce and protect Bab Al-Mandab, the strategic southern entryway to the Red Sea. This presence is particularly heavy in the ports of Eritrea, Djibouti and Somalia, portals to the vital maritime routes and logistical support bases for international commercial activities. The Red Sea is the most important maritime corridor for commercial traffic between Europe and Asia as well as for the movement of oil from the Gulf to the Mediterranean. Secondly, competition and tensions between world powers, notably the US, China and Russia, have been mounting. In addition to rivalry over security arrangements for ships, ports and cargo, they vie for influence over certain governments' decisions in ways that affect the flow of trade and access to local markets. China and the US, in particular, are competing to build communications systems for armed forces and security agencies, and to develop database and surveillance networks in Africa. As Beijing lobbies to secure ownership shares in African port facilities, Washington is trying to rebuild security cooperation with Khartoum, having knocked Sudan off its list of state sponsors of terrorism. AFRICOM has been leading Washington's drive to renew the security partnership with Sudan as well as the Democratic Republic of the Congo (DRC). Russia has also been working to improve relations with Khartoum and other countries in the region, taking advantage of its presence in Port Sudan. The growing influx of capital and investments, especially into infrastructure development on the African side of the Red Sea and food production in West Africa, has also contributed to closer security links between the Gulf and the Horn of Africa. The mutual impacts of Middle East conflicts and tensions are clearly felt in the Red Sea region. Turkish, Iranian, Qatari, Emirati and Ethiopian agendas jostling alongside international powers and private security firms, not to mention the pursuits of the Red Sea countries themselves, have imposed their rhythm on the process of re-engineering the balance of power. The resulting friction has driven up the heat of competition over ports as well as military, commercial and cultural influence. In many ways, the war in Yemen, too, reflects changes in outlooks on the region, on the levels of cooperation and coordination, and on patterns of intervention and conflict, all of which have important implications for the warring parties, southern Yemen, navigation off its coasts and the future of the country as a whole. In terms of directions of change, some strategic analysts believe developments are setting the stage for more intense international scrambling, rivalry and conflict as the patterns of interplay in the Red Sea, Gulf of Aden and Indian Ocean grow in strategic importance regionally and internationally. 
The many foreign military bases and concentrations on the western bank of the Red Sea have added new physical realities that make the region more vulnerable to instability. Current threats and challenges suggest that potential disputes and conflicts would be complex and multilevelled, while the current weave of regional and international interplay throws into relief the detrimental role that a number of influential regional and international stakeholders are playing in shaping the map of threats and challenges. This map also tells us that attempts to apply temporary palliatives to regional hotspots have not contributed positively to resolving protracted conflicts. Some Red Sea countries, especially in the Horn of Africa, are seething with internal and mutual tensions rooted in or fuelled by ethnic conflicts. For example, the Somali state exhibits numerous features of extreme fragility exploited by various foreign parties. The federal government is still unable to restore stability and to confront the ongoing threat of the terrorist Al-Shabaab Al-Mujahideen movement (commonly referred to as HSM or Shabaab), which continues to control parts of the capital Mogadishu, portions of southern Somalia and some border areas near Kenya and Ethiopia. Fraught relations between the federal and state governments, which stem from disputes over the current electoral process, the division of wealth and oil revenues, and the conduct of foreign relations on the part of some state governments, have also undermined the stability and performance of the state. Border conflicts include the one between Sudan and Ethiopia and its repercussions, the temporarily deferred border conflict between Ethiopia and Eritrea (which have reached an agreement to work together on other priorities in the Horn of Africa), the dispute between Sudan and South Sudan over Abyei, and the dispute between Kenya and Somalia over their maritime boundary. A prime example of domestic conflicts with long-term regional ramifications is the civil war in Ethiopia between the federal government in Addis Ababa and the government of the Tigray Region, fought against the backdrop of Prime Minister Abiy Ahmed's bid to alter the power equation between the federal and regional governments as part of his larger project to cast himself as the strongman of the Horn of Africa and to present Ethiopia as a major economic power in the continent. Meanwhile, the activities of jihadist terrorist groups in Libya, Somalia and the Sahel and Sahara region demonstrate how gravely these groups threaten the stability of the relevant states and hamper their prospects for construction and development. As a result of the foregoing factors, the roles of certain non-Red Sea actors have gained ascendancy over those of the Red Sea states, resulting in greater unrest and conflict rather than opportunities for cooperation. The idea of a Red Sea and Gulf of Aden Council, which originated with an Egyptian initiative in 2017, gained momentum in a series of meetings between the foreign ministers of eight states in Riyadh since 2018, culminating in the official establishment of the Council of Arab and African Coastal States of the Red Sea and Gulf of Aden in January 2020. 
As its charter states, the council aims to build a regional system for collective action to promote development and security in the Red Sea region and to facilitate remedies to various common concerns such as interstate commerce, infrastructural development, increased flow of capital, environmental protection and peaceful conflict resolution. As strategically significant as the council is, it still faces huge challenges in terms of its ability to transform its aims into concrete policies, coordinate its members' positions and strengthen integrated collective interests. However, there are a number of pillars that the council can build on to augment its role. Effective collective management of the ports crisis could contribute to crystallising policies that look beyond immediate profits to the development of a sustainable economic ecosystem in the Red Sea. The littoral states possess a number of ports – Suez, Jeddah, Port Sudan, Mocha, Hodeida, Aqaba, Mitsiwa and Djibouti – which, if effectively integrated, can enhance the system, reordering various regional and international calculations and considerations that have worked to reduce liquidity. Another pillar is maximising the collective economic benefit of the council's members. One factor that enhances the feasibility of such a drive is the increasing desire of global communications firms to build lines extending beneath the Red Sea and directly linking Africa and the Middle East to Europe. The new fibre-optic cable systems offer the advantage of greater usage diversity than existing maritime cable routes. More importantly in our context, such projects as 2Africa, serving Africa and the Middle East, and Blue-Raman, linking Europe and India, will require close coordination and cooperation among the Red Sea and Gulf of Aden littoral states, especially Djibouti (a main communications cable hub), Egypt, Saudi Arabia and Jordan. The port networks, communications projects and other such developmental interests require extensive strategic discussions among the members of the council aimed at maximising consensus among them in the face of the competing agendas of outside players towards this region and its countries. As the new Red Sea and Gulf of Aden forum gains in efficacy, it will have more say in the management of conflicts, interests and the power balance in this region and gain in ability to counter the detrimental repercussions of outside competition there. At the same time, it will become more instrumental in conflict reduction and dispute settlement within the region as it strengthens collective development efforts and opportunities for joint security arrangements. *The writer is senior analyst at Al-Ahram Centre for Political and Strategic Studies.
https://english.ahram.org.eg/NewsContent/4/0/409083/Opinion/A-sea-of-competition.aspx
Remote work wanted? Analyzing online job postings during COVID-19 In early 2020, just as the COVID-19 pandemic started to affect the health of millions around the world, epidemiologists and public health experts pointed to social distancing as the key measure to control the spread of the virus. From an economic perspective, it became immediately clear that the ability to work from home would determine workers' outcomes. Indeed, our estimates using Current Population Survey basic monthly samples from 2020 suggest that employment in "teleworkable" occupations (those that don't require working outdoors, using protective equipment, moving around, or operating machinery) contracted by 7.3 percent between February and April 2020, compared to an 18.6 percent contraction for "non-teleworkable" occupations. Since then, demand for non-teleworkable occupations surged, and the employment gap between teleworkable and non-teleworkable occupations narrowed to only a 1.7 percentage point difference. Arguably, social distancing should have also meant greater resilience in hiring efforts for teleworkable occupations during the pandemic. However, this was not the case: by May 2020, postings in teleworkable occupations had dropped by 40 percent, while those for non-teleworkable occupations had dropped by only 25 percent. And while postings for non-teleworkable occupations had recovered to 13 percent below pre-pandemic levels by December 2020, teleworkable postings remained 35 percent behind their February benchmark (Figure 1). In a new working paper, we use Burning Glass Technologies (BGT) data on online job postings to disentangle this counterintuitive pattern. We find that employers' hesitation to hire workers for whom on-site experience is crucial to their productivity could be one of the factors behind the slow recovery of postings for teleworkable occupations. Figure 1. Job postings, employment, and "remotability" of work during 2020. One plausible explanation for teleworkable occupations' lagging job postings is that, as social distancing disproportionately affected non-teleworkable occupations, the bulk of the hiring efforts focused on that group as the economy reopened. This hypothesis, however, does not explain why non-teleworkable job postings shrank by 30 percent over the first two months of the pandemic while teleworkable postings did so by 35.8 percent, nor does it explain the persistent lag for teleworkable postings toward the end of 2020. Moreover, our analyses show that even after accounting for the effect of employment losses on this trend (the drop in postings), teleworkable occupations show a slower postings recovery than the rest. Another factor behind the reduced hiring efforts toward teleworkable occupations could be their relatively low presence in "essential" or "critical" industries (e.g., health care, logistics, or food manufacturing), whose demand stayed strong during most of 2020. Indeed, this explains why demand for front-line workers—those in roles that are both non-teleworkable and play a large role in essential industries—surged. However, hiring efforts towards teleworkable occupations were even lower among those often required by essential industries, like administrative assistants and accounting clerks, both occupations with 40 percent fewer job postings in December 2020 than in February 2020. 
Interestingly, we also find that the robust hiring demand for "front-line" workers did not translate into stronger employment in these occupations during the pandemic. So far, the best explanation for companies' lackluster demand for teleworkable occupations is that, though employers made a special effort to retain employees performing tasks that rely on experience and can be performed from a distance, they disproportionately halted new hiring for such roles. For example, employment levels for general and operations managers at the end of 2020 were just 10 percent below February 2020, while their postings lagged by almost 28 percent. Evidently, acquiring much of the experience for—in this case—managing an organization is challenging without initial in-person interactions. Something's got to give. If companies continue to extend their work-from-home policies, HR managers will need new ways to help workers acquire valuable experience. More generally, as the fraction of workers who will continue working remotely once the economy fully reopens remains to be seen, our results highlight that the relatively stable employment of teleworkable occupations during the pandemic did not imply better employability for those eager to enter these occupations. In a socially distancing labor market, workers searching for remote jobs may face more limited hiring demand than expected.
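To make the arithmetic behind these comparisons concrete, here is a minimal illustrative sketch in Python. The monthly index values below are stylized to echo the percentages quoted above; they are not the underlying CPS or BGT data, and the column and group names are invented for the example.

```python
import pandas as pd

# Stylized monthly indices (February 2020 = 100), chosen to echo the
# percent changes quoted in the text; the real analysis uses CPS
# employment counts and Burning Glass postings data.
df = pd.DataFrame({
    "month":      ["2020-02", "2020-05", "2020-12"] * 2,
    "group":      ["teleworkable"] * 3 + ["non_teleworkable"] * 3,
    "postings":   [100, 60, 65, 100, 75, 87],
    "employment": [100, 93, 98, 100, 82, 96],
})

# Baseline (February 2020) values per group.
baseline = df[df["month"] == "2020-02"].set_index("group")

def pct_change_from_feb(row, col):
    """Percent change of `col` relative to the row's group-specific Feb 2020 value."""
    base = baseline.loc[row["group"], col]
    return 100.0 * (row[col] - base) / base

for col in ("postings", "employment"):
    df[col + "_pct_change"] = df.apply(pct_change_from_feb, axis=1, col=col)

# E.g., teleworkable postings come out at -40% in May 2020 and -35% in
# December 2020, matching the pattern described in the text.
print(df[df["month"] != "2020-02"])
```

The key design point is simply that each month is compared against its own group's pre-pandemic baseline, which is what allows the teleworkable and non-teleworkable trajectories to be compared on equal footing.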
This paper will examine the representation of childbirth, midwifery and women's health in the context of a number of narrative topoi that have been, and continue to be, of interest to Women Writers in History. In the last few years of the COST Action, attention has been drawn to aspects of women's writing for the study of which our online tool needs to be (and will be) further developed. These are the so-called narrative topoi, which can be particularly gender-sensitive and apparently charged with specific "messages". This paper will explore the possibility of studying them, examining the ways in which we might handle and interrogate "messages", the communication of female authors that might be considered "feminine" or even "feminist", and the way in which these were received by female readers. The representation of childbirth and midwifery serves as a useful starting point: in women's novels from the eighteenth and nineteenth centuries, they are rather frequent and intriguing elements, while traces of concern and interest can be found in the diaries and correspondence of many of the authors. Furthermore, it is becoming increasingly apparent that this research field is connected in a number of nuanced and interesting ways to a variety of other fields, from women's health (sexuality, reproduction, the body), to their relationships with men (marriage, motherhood), and female work (employment and other activities). This paper will examine a selection of these intersections, drawing on information in the database that has been collected on the works and lives of a number of female authors across Europe, paying particular attention to: the way in which women writers view marriage, either embracing it, or rejecting it out of fear of childbirth, or in favour of an independent life; how needlework is employed as a form of liberation and independence of mind, or a tool that allows them to process and reflect on impending marriage and motherhood, or, conversely, a symbol of their oppression; and the way that female writers experience, observe and portray their bodies and the bodies of other women — as objects of desire and symbols of (in)fertility. The paper will further outline the categories that need to be created and how the database structure can be adapted in view of the research to be done, as well as the fields that are being populated and how these might be further expanded. The broad field that this research topic represents also makes it a useful point of collaboration with other projects, those in the fields of literary studies, gender studies and history, as well as beyond, in the social sciences, health, medicine and obstetrics, allowing us to demonstrate the relevance of studying these problems on the basis of literary or fictional texts. One of these collaborations, with the Childbirth Cultures, Concerns and Consequences COST Action, will be outlined in this paper, explaining how collaboration with academics who are traditionally considered to be "outside" our field might offer an alternative lens through which we can observe and extend our research, allowing our methodologies to be tested and stretched, and tools like the database to be adapted and fully utilised for a variety of different research needs, making it a truly dynamic and versatile device.
http://www.womenwriters.nl/index.php?title=The_Specificity_of_Female_Messages&printable=yes
Here's my question for you. Have you ever looked at a friend or colleague and marvelled at a specific behavioural skill that they've mastered? And just for a moment, wondered what it must be like to be able to do something so well and with such ease? What difference could such a skill or ability make to your career, family, relationships, well-being, happiness, success, business, finances? Well, if even for a moment you've thought something like this, then please stick around, because I've got a secret to tell you. Here it is... lean in close... so I can whisper... we don't want everyone to know just how simple it is... you can learn any skill from anybody and, with a little effort, consistently replicate their success! You just need a secret key to be able to unlock exactly how they do what they do, and then the ability to codify the 'difference that makes the difference' so that you can try on your version of it to get similar spectacular results. So what is this secret key? We call it modelling. And this course has been designed to help you unlock and master the skills and principles that enable people just like you to emulate the success of the most successful people in their lives. How does this work? Well, you see, people who do things really well have developed systems, processes and strategies that allow them to be really successful in a given area. It doesn't just happen; they have developed their strategies over time through study and application. It often takes years of trial and error for them to become really good. But now that they've done the hard work and found a system that works consistently, if you have the ability, desire and access, you can learn directly from them at lightning speed. So why doesn't everyone do this? That's simple too: most people who do something really well have no idea how they do it. It's not a conscious process; it's an unconscious competence for them, so just asking them doesn't work. That's where modelling comes in. Modelling skills help you tease out and codify how an exemplar does what they do - even down to the level of beliefs and values. This enables you to share your results with them, which they often find fascinating - part of the payback for allowing you to model them - and then you get to try it on for yourself in order to share in their success. It sounds easy, and it is. But of course, you also need to be willing to be proactive and have the drive to take control of your own learning and development. This isn't some get-rich-quick scheme or an empty promise that only works for a small percentage of people. 100% of the people we train leave with the ability to model the exceptional skills that could lead to a transformation in whatever area of their life they choose. But just having the skills doesn't necessarily mean you'll use them. We strongly suggest you don't take this programme unless you have enough drive, curiosity and proactivity to take your development into your own hands and reshape the way you learn. If you think this is of interest to you, then welcome to NLP Modelling Skills with John Seymour. But first let me tell you a little more about John Seymour, your trainer for this course. He is perhaps best known for his best-selling book Introducing NLP, which has now sold over one million copies and is still one of the best-selling NLP books in the world today. And it has actually been around for over 30 years. That's part of the special magic John brings to this programme - his legacy. 
He's been studying and teaching NLP for all that time, so he's developed highly refined nuances and distinctions that can really help you to develop sublime modelling skills with speed and elegance. This is a chance to learn from one of the foremost masters in the field. Why is John's background in NLP so important to this course? NLP is often thought of as the source-code for change, a set of psychological skills for understanding and influencing people and the world around you. But what you may not realise is that NLP was initially developed through modelling some of the most successful therapists and change agents and then codifying their success so that other people could be trained to get similar results with their clients. Since then the field has grown massively and the modelling has continued to roll out genuine results in areas such as business, sports, finance and social change, to name a few. John was around in those early days and worked directly with the founders of NLP in Santa Cruz. These days, other than the odd masterclass for 91 Untold, he is mostly retired, so this course offers a rare opportunity to work with him. The footage on this course was filmed in front of a live audience in Bristol (2019), so you get a feel for what it would be like to be in the audience with such a gifted trainer and educator. But please, don't just take our word for it. We've unlocked the first couple of videos so you can really get a feel for whether the programme will add value to you. We hope it will, and, like so many others who've been through John's courses, that you'll get closer to your dreams as a result of working with one of the leading lights of NLP and modelling. Who this course is for: - Individuals with a desire to take charge of their own learning and growth - Coaches and NLP Practitioners seeking to refine their modelling skills - Business executives looking to model best practice within their organisations - NLP Students looking to learn from one of the UK's most respected NLP Master Trainers - Graduates looking to turbo-charge their learning in a new career role - Learning and development/talent teams looking to unpack how star performers create their results Course content: Preview (02:31) - Download the Student Workbook (00:07) - Preview (05:27) - Download the Student Manual (00:04) - Preview (02:58) - Reflection on Modelling (2 questions) Instructors: Neil Almond is an inspirational trainer with a gift for helping his students to apply complex concepts in practical and pragmatic ways. Neil is a sought-after executive coach, facilitator and consultant. He has worked at the highest levels in business, charity and government – including facilitating sessions in 10 and 11 Downing Street for both Tony Blair and Gordon Brown and working with industry leaders such as Sir Richard Branson. In the Middle East, Neil is also known as the on-screen Success Coach for Stars of Science, a pan-Arab reality TV show. He was the winner of the coveted NLP Making a Difference Award at the 2017 NLP Awards. Neil is an NLP Master Trainer and Positive Psychology practitioner, and he also holds a Masters Degree in Coaching and Mentoring. Outside of the training and coaching world, Neil is perhaps best known as the founder and Chief Executive of the innovative youth charity K-Generation. 
Founded with a mission to apply entrepreneurial thinking to the youth sector, K-Generation (or Kikass, as it was affectionately known) fast made a name for itself by using guerrilla and viral marketing techniques to breathe life into complex social issues. Kikass is still regarded by many as one of the most successful deliverers of coaching and personal development experiences to young people.

John is an outstanding teacher. He has a gift for making the complexities of NLP profoundly simple, and the practical applications life-changing. He originally trained in NLP in Santa Cruz, California, where it all began, with John Grinder, Judith DeLozier and Robert Dilts, and later with Richard Bandler. His teaching style combines the best elements of the different schools of NLP. Since 1985 he has brought integrity, wisdom and humour to the teaching and learning of NLP. Co-author, with Joseph O'Connor, of the bestselling NLP book 'Introducing NLP', translated into 14 languages, he went on to write 'Training with NLP', the first book ever published on NLP training, based on modelling best practice from the top NLP trainers. With Martin Shervington, he co-authored 'Peak Performance with NLP', a book for the business world. He has made numerous appearances on radio and TV, in the UK and abroad. Over thirty years, John has worked extensively with hundreds of top organisations in the worlds of business, health and education. A leading authority in the field, he is one of the UK's first NLP Master Trainers and has three honorary trainer awards from professional NLP bodies in the USA and the UK.
https://www.udemy.com/course/nlp-modelling-skills-with-john-seymour/?referralCode=078C2FC93D5A8042F94B
Lao Tzu and Tao Te Ching. What is the Tao Te Ching? The Tao Te Ching, the Book about Tao and Its Characteristics, is traditionally attributed to Lao Tzu, though some passages suggest the Taoist masterpiece was never the work of a single author but of a larger collective of philosophers sharing the same views about life, the world and wisdom. All the central topics of Taoism are drawn from it. In short, Lao Tzu explains what Tao, the core concept of Taoism, is, along with its basic features:

The name that can be named is not the enduring and unchanging name. Conceived of as having no name, it is the Originator of heaven and earth; conceived of as having a name, it is the Mother of all things. Always without desire we must be found, If its deep mystery we would sound; But if desire always within us be, Its outer fringe is all that we shall see. Under these two aspects, it is really the same; but as development takes place, it receives the different names. Together we call them the Mystery. Where the Mystery is the deepest is the gate of all that is subtle and wonderful.

All in the world know the beauty of the beautiful, and in doing this they have the idea of what ugliness is; they all know the skill of the skilful, and in doing this they have the idea of what the want of skill is. So it is that existence and non-existence give birth the one to the idea of the other; that difficulty and ease produce the one the idea of the other; that length and shortness fashion out the one the figure of the other; that the ideas of height and lowness arise from the contrast of the one with the other; that the musical notes and tones become harmonious through the relation of one with another; and that being before and behind give the idea of one following another. Therefore the sage manages affairs without doing anything, and conveys his instructions without the use of speech. All things spring up, and there is not one which declines to show itself; they grow, and there is no claim made for their ownership; they go through their processes, and there is no expectation of a reward for the results. The work is accomplished, and there is no resting in it as an achievement. The work is done, but how no one can see; 'Tis this that makes the power not cease to be.

Therefore the sage, in the exercise of his government, empties their minds, fills their bellies, weakens their wills, and strengthens their bones. He constantly tries to keep them without knowledge and without desire, and where there are those who have knowledge, to keep them from presuming to act on it. When there is this abstinence from action, good order is universal.

How deep and unfathomable it is, as if it were the Honoured Ancestor of all things! We should blunt our sharp points, and unravel the complications of things; we should attemper our brightness, and bring ourselves into agreement with the obscurity of others. How pure and still the Tao is, as if it would ever so continue! I do not know whose son it is. It might appear to have been before God.

Heaven and earth do not act from the impulse of any wish to be benevolent; they deal with all things as the dogs of grass are dealt with. The sages do not act from any wish to be benevolent; they deal with the people as the dogs of grass are dealt with. May not the space between heaven and earth be compared to a bellows? Much speech to swift exhaustion lead we see; Your inner being guard, and keep it free.
Its gate, from which at first they issued forth, Is called the root from which grew heaven and earth. Long and unbroken does its power remain, Used gently, and without the touch of pain.

The reason why heaven and earth are able to endure and continue thus long is because they do not live of, or for, themselves. This is how they are able to continue and endure. Therefore the sage puts his own person last, and yet it is found in the foremost place; he treats his person as if it were foreign to him, and yet that person is preserved. Is it not because he has no personal and private ends, that therefore such ends are realised?

The excellence of water appears in its benefiting all things, and in its occupying, without striving (to the contrary), the low place which all men dislike. Hence its way is near to that of the Tao. The excellence of a residence is in the suitability of the place; that of the mind is in abysmal stillness; that of associations is in their being with the virtuous; that of government is in its securing good order; that of the conduct of affairs is in its ability; and that of the initiation of any movement is in its timeliness. And when one with the highest excellence does not wrangle (about his low position), no one finds fault with him.

If you keep feeling a point that has been sharpened, the point cannot long preserve its sharpness. When gold and jade fill the hall, their possessor cannot keep them safe. When wealth and honours lead to arrogancy, this brings its evil on itself. When the work is done, and one's name is becoming distinguished, to withdraw into obscurity is the way of Heaven.

When one gives undivided attention to the vital breath, and brings it to the utmost degree of pliancy, he can become as a tender babe.
https://vuqydikexap.regardbouddhiste.com/lao-tzu-tao-te-ching-essay-31439sb.html
Hello marine biology students. In this video we're going to be learning about some of the unique properties of water and seeing how life would be different living in water as opposed to life here on land. [Intro Music] So let's talk about some of the properties of water and why they might impact organisms living within it. In the water molecule itself, H2O, there are two atoms of hydrogen and one atom of oxygen covalently bonded together. While this molecule itself is quite simple, water is unique in many ways. Each water molecule has a slight positive and negative electrical charge, and this is due to polar covalent bonds. Polar covalent bonds do not share electrons evenly, and so there's an electrical charge imbalance on either side of that covalent bond. The positive charges are near the hydrogen atoms and the negative charges exist around the oxygen atom. And this is because oxygen has a higher electronegativity, or a stronger pull on those shared electrons: the electrons spend more time around the oxygen, and since electrons have a negative charge, we have that partial negative charge near the oxygen. So here in this diagram of water molecules we can see the water molecules themselves, composed of oxygen and hydrogen. We can see the partial charges, and then also notice that there is attraction between separate water molecules based on those electrical charge differences. These attractions are called hydrogen bonds. So again, due to these slight electrical charges, water molecules are attracted to each other. The negative charge of one molecule is attracted to the positive charge of the other molecules. This attraction of one water molecule to another is known as hydrogen bonding. So, it's important to realize that hydrogen bonds are not the covalent bonds holding the hydrogen to the oxygen; instead, hydrogen bonds are the attractions between two separate water molecules. When we look at a group of water molecules together, water in its liquid form consists of many water molecules having hydrogen bonds with their neighbors. These hydrogen bonds can break, reform and transfer from one molecule to another. So, the individual water molecules can move in relation to each other. If these water molecules are moving fast enough, if enough kinetic energy is added to them, they can actually move so quickly that they break the hydrogen bonds and break free of their connections to neighboring water molecules, and in that case the water has evaporated. It has gone from a liquid form into a vapor form. So water can be found in three states, and it's the hydrogen bonds that end up determining these states of water. Hydrogen bonds help keep water molecules as a cohesive group at most temperatures found on Earth. This is the reason why we have liquid water. Water can be found in three different phases of matter on the surface of the Earth: in its liquid form, in its gas form, which is water vapor, and also as a solid in the form of ice. Ice is the solid form of water that is caused by reducing the kinetic energy of the molecules. As the temperature is lowered, the hydrogen bonds lock into place, preventing water molecules from moving in relation to each other. This also holds the water molecules a bit further from each other than in liquid water.
An interesting property of seawater is that it has a lower freezing point than fresh water, and as seawater begins freezing it actually releases a concentrated salty brine, resulting in sea ice having less dissolved solute than regular seawater. We will see that this also plays a role in driving ocean circulation. Now, the gas form of water is known as water vapor, and humidity is the amount of water vapor that's in the air. Water vapor is formed when molecules of water escape the hydrogen bonds holding them together. And in this way, they become airborne. This process, called evaporation, increases with increasing temperatures. Water is the only substance on Earth that naturally exists in all three states: solid as ice, liquid as liquid water, and gas as water vapor. A concept we're going to visit many times over as we discuss water is the concept of density. At lower temperatures, water molecules are closer to each other than at warmer temperatures. In seawater at 75 degrees, the molecules are further apart than in the same water at 35 degrees. When water molecules are closer together, the substance is said to have a greater density. Substances with higher density are heavier than those with lower density when the same volume is present. Cold water therefore sinks underneath the warmer water. Colder water also holds more oxygen than the same volume of warm water, a crucial factor for organisms living at the ocean floor. Even though colder water is more dense than warmer water, this changes when the water gets cold enough to freeze. Ice is less dense than liquid water due to the distance between the water molecules increasing as the hydrogen bonds lock into place. This is why ice floats. This is important for organisms living in areas where freezing temperatures are common, such as in the Arctic Ocean or in Antarctica. It is also key to life on planet Earth as we know it. Now, this may sound like a bit of a stretch, but if water were to sink after it froze, that layer of ice would sink, crushing all life below it, and it would settle at the bottom. So it really would prevent life from surviving underneath ice. If ice did not float, a body of water would also freeze from the bottom up, or in layers of freezing and sinking. Eventually the whole body of water would freeze. Even though we think of it as being cold, ice is an excellent insulator. Since ice floats, the floating ice creates a barrier between the cold air temperature and the water below the ice, keeping it from freezing. In addition to hydrogen bonds, water has other unique chemical properties. One is a high latent heat of melting. The latent heat of melting is the amount of energy needed to change water in state from ice into liquid, or conversely the amount of heat that needs to be taken away to convert liquid water into ice. Water has a higher latent heat of melting than many other commonly occurring substances. This means that ice melts at a relatively high temperature and it absorbs a great deal of heat as it melts. Water also absorbs a great deal of heat before its temperature rises. This property is known as heat capacity, and it is defined as the amount of heat required to raise a substance's temperature by a given amount. Water has a very high heat capacity, making it resistant to easily changing temperature. This high heat capacity is significant for marine organisms because it means that they are not subjected to the wide temperature ranges so often seen on land.
An exception is very shallow water, like tide pools, which warm quickly due to the relatively small volumes of water and can subject certain marine organisms to sudden drastic temperature changes. A great deal of heat is also required for evaporation to occur. The amount of heat required for a substance to evaporate is known as the latent heat of evaporation, or the latent heat of vaporization. Water has a very high latent heat of evaporation. This means that it takes a lot of energy for liquid water to become water vapor without changing its temperature. When water is changing its state of matter, its temperature does not change until the state change is complete; the extra energy goes to the state change instead of the temperature change. So here in this graph we can see the temperature of water over time with a constant amount of heat being added to it. We see that as the ice starts being warmed, there's a pause in temperature change while all the ice melts into liquid water. Once the water is liquid, the temperature increases again until it hits the level where that liquid water is being converted into gas. This is also why boiling water reaches a maximum temperature and will not go higher unless pressure is increased. That'll be the end of this video, but we'll have a bit more to discuss about the properties of water in the next video. Thanks for your attention. See you then.
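The heating-curve plateaus described in the transcript can be checked with simple arithmetic. Below is a minimal sketch, not from the video itself; the constants are standard textbook values for pure water at atmospheric pressure. It totals the energy needed to take a gram of ice all the way to vapor, and the two latent-heat terms dominate, which is why the temperature graph pauses during melting and boiling.

```python
# A minimal sketch (standard textbook constants, not from the video) of the
# heating curve for water: warm ice -> melt -> warm liquid -> vaporize.

C_ICE = 2.09      # specific heat of ice, J/(g*K)
C_WATER = 4.18    # specific heat of liquid water, J/(g*K)
L_FUSION = 334.0  # latent heat of melting, J/g
L_VAPOR = 2260.0  # latent heat of vaporization, J/g

def heat_to_vapor(grams: float, start_temp_c: float = -10.0) -> float:
    """Total energy (J) to take ice at start_temp_c to water vapor at 100 C."""
    warm_ice = grams * C_ICE * (0.0 - start_temp_c)   # warm the ice to 0 C
    melt = grams * L_FUSION                           # plateau: melt at 0 C
    warm_water = grams * C_WATER * (100.0 - 0.0)      # warm the liquid to 100 C
    vaporize = grams * L_VAPOR                        # plateau: boil at 100 C
    return warm_ice + melt + warm_water + vaporize

if __name__ == "__main__":
    # The two latent-heat plateaus dwarf the warming segments, which is why
    # the temperature-vs-time graph pauses during melting and boiling.
    print(f"{heat_to_vapor(1.0):.0f} J per gram")  # roughly 3000 J
```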
This drawing employs the subtle sfumato shading technique of Leonardo da Vinci, in the manner of the Mona Lisa. The third important work of this period is the Virgin of the Rocks, commissioned in Milan for the Confraternity of the Immaculate Conception. Leonardo's anatomical drawings, including a physiological sketch of the human brain and skull (1510), comprise many studies of the human skeleton and its parts, and of muscles and sinews. Despite being created approximately 500 years ago, the work of Leonardo is just as influential to the art being created today as it was in the 15th and 16th centuries. Piero della Francesca had made a detailed study of perspective, and was the first painter to make a scientific study of light. Melzi inherited the artistic and scientific works, manuscripts, and collections of Leonardo and administered the estate. Leonardo also remembered his other long-time pupil and companion, Salai, and his servant Battista di Vilussis, who each received half of Leonardo's vineyards. While Leonardo's experimentation followed clear scientific methods, a recent and exhaustive analysis of Leonardo as a scientist by Fritjof Capra argues that Leonardo was a fundamentally different kind of scientist from Galileo, Newton and the other scientists who followed him. A marked development in Leonardo's ability to draw drapery occurred in his early works. The content of his journals suggests that he was planning a series of treatises to be published on a variety of subjects.
https://cazareromania.eu/leonardo-da-vinci-flowers-painting.html
AI-based Predictive Analytics. Canvass Analytics gives operational teams a faster path to operational insights. Canvass's vision is to break the barriers to deploying AI in manufacturing operations. According to a recent survey by the MAPI Foundation, 47% of respondents indicated their companies' workforces lack the digital skills needed to implement AI solutions. Canvass empowers manufacturers to overcome this challenge by providing a no-code, machine learning-based platform that puts the benefits of AI in the hands of operators by simplifying the process of building, training, and scaling applied industrial AI in their day-to-day operations. Click here to learn more about the latest platform enhancements to Canvass AI. ANALYZE Derive insights from across operations by automatically collecting data from diverse data sources and identifying patterns and correlations hidden deep within big data. PREDICT Shift from historical reports that tell you why something happened to predictive insights that prevent, optimize and foresee outcomes. ADAPT Optimize processes and operations, improve production yield and energy efficiency using powerful Artificial Intelligence models that continually adapt to changes in data and conditions in real time. SCALE Extend predictive insights from a single process to multiple plants using a central, scalable, flexible platform.
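To make the ANALYZE/PREDICT/ADAPT loop concrete, here is a hypothetical, minimal sketch in Python. It is not Canvass's actual API (the platform itself is no-code), and the sensor inputs and yield target are invented for illustration: a model learns patterns in historical process data, then predicts outcomes on held-out runs.

```python
# A hypothetical illustration (not Canvass's API) of an industrial
# predictive-analytics loop: analyze historical data, predict an outcome.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Invented plant data: temperature, pressure, feed rate -> production yield.
X = rng.normal(size=(5000, 3))
y = 70 + 3 * X[:, 0] - 2 * X[:, 1] ** 2 + X[:, 2] + rng.normal(0, 0.5, size=5000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out runs:", round(model.score(X_test, y_test), 3))

# "Adapt": in production, the model would be refit as new batches arrive,
# so predictions track drifting process conditions.
```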
https://www.canvass.io/platform
This video will teach you a general pronunciation rule for short words (with one syllable) with a final silent 'e'. Watch the video and do the short exercise. Many English words follow this rule, but please remember, as with most rules, there are exceptions! Can you think of any?

Long & Short Vowel Sounds Exercise – with students: This video is an exercise to practise long and short vowel sounds. This time, my students will teach you for a change! First, listen and repeat the sounds, then try the exercise. You have to read a word and decide if the vowel sound is long or short. It works best if you say the …

Word Stress: I hope you had a nice little Easter break? It's back to work for me tomorrow and time to post a new teaching video. This time we will have a look at word stress, a really important part of pronunciation. First you will learn to find the number of syllables in a word and then …

Consonant Sounds Part 2: This video teaches you the remaining 8 consonant sounds of the English language. You will learn the following sounds: /m/, /n/, /ŋ/, /l/, /r/, /w/, /j/ and /h/. Remember to watch Part 1 as well; that first video teaches you all the other consonant sounds. You can find it on this blog …

Consonant Sounds Part 1: In this video you will learn the sounds and phonetic symbols of the English consonants. This is part 1 of two videos. You will learn the following sounds: /p/, /b/, /t/, /d/, /ʧ/, /ʤ/, /k/, /g/, /f/, /v/, /θ/, /ð/, /s/, /z/, /ʃ/ and /ʒ/. Please leave me a post if you have enjoyed this …

Question Tags Exercise 2: Have you watched the video introduction on question tags? If you know how to form question tags, it is now time to practise your pronunciation with this short exercise. Listen carefully. Can you hear the difference between rising and falling intonation? Check and repeat each sentence.
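For readers who like to tinker, here is a toy sketch (my own illustration, not from the videos) of the silent-'e' rule in code: in many one-syllable words ending in a consonant plus 'e', the earlier vowel takes its long sound. As the post warns, there are exceptions, and the last test word shows the rule failing.

```python
# A toy check of the "magic e" rule: vowel + consonant + final silent 'e'
# usually makes the first vowel long ("kit" -> "kite"). Real English has
# exceptions ("have", "done"), so this is illustrative only.

VOWELS = set("aeiou")

def has_magic_e(word: str) -> bool:
    """True if word looks like vowel + consonant + final silent 'e'."""
    w = word.lower()
    return (
        len(w) >= 3
        and w.endswith("e")
        and w[-2] not in VOWELS               # a consonant before the final 'e'
        and any(c in VOWELS for c in w[:-2])  # an earlier vowel to lengthen
    )

for w in ["hat", "hate", "kit", "kite", "cub", "cube", "have"]:
    sound = "long" if has_magic_e(w) else "short"
    print(f"{w}: predicted {sound} vowel")  # "have" shows the rule failing
```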
https://tefltalk.net/tag/pronunciation/
Who was Zeus? Zeus was the god of the sky and ruler of the Olympian gods. He overthrew his father, Cronus, and then drew lots with his brothers Poseidon and Hades in order to decide who would succeed their father on the throne. Zeus won the draw and became the supreme ruler of the gods, as well as lord of the sky and rain. His weapon was a thunderbolt, which he hurled at those who displeased or defied him, especially liars and oath breakers. Zeus, the presiding deity of the universe, ruler of the skies and the earth, was regarded by the Greeks as the god of all natural phenomena in the sky; the personification of the laws of nature; the ruler of the state; and finally, the father of gods and men. He was married to Hera but often tested her patience, as he was infamous for his many affairs.

Divine Lovers

Aphrodite. She is the goddess of love, desire and beauty. Apart from her natural beauty, she had a magical girdle that compelled people to desire her. Zeus' jealous wife, Hera, found out that Aphrodite was carrying Zeus' baby, laid her hands on Aphrodite's belly and cursed the child with impotence, ugliness and foul-mindedness. Children with Zeus: - Priapos - God of fertility, vegetables, nature, livestock, fruit, beekeeping and gardens

Demeter. She is the Olympian goddess of corn, grain and the harvest. It was believed that she made the crops grow each year; thus the first loaf of bread made from the annual harvest was offered to her. She is also Zeus' sister. Children with Zeus: - Persephone - Queen of the Underworld

Dione. She is a Titan goddess and an Oceanid (sea nymph). She was an oracle and was worshipped alongside Zeus at the earliest oracle in Greece, located in Dodona. The priestesses and prophetesses at her shrine were called Peleiades (the doves), the dove being the sacred bird of her daughter. Children with Zeus: - Aphrodite - Goddess of love, desire and beauty

Eurynome. She is a Titan goddess and Oceanid who was worshipped at a sanctuary near the confluence of the rivers Neda and Lymax in the classical Peloponnesus. She was Zeus' third wife. Children with Zeus: - Charites (Graces): - Aglaea - Goddess of beauty, splendor, glory, magnificence and adornment - Euphrosyne - Goddess of mirth - Thalia - Goddess of good cheer

Gaia. She was the great mother of all: the primal Greek Mother Goddess; creator and giver of birth to the Earth and all the Universe. The heavenly gods, the Titans and the Giants were born to her. The gods reigning over their classical pantheon were born from her union with Uranus (the sky), while the sea-gods were born from her union with Pontus (the sea). Children with Zeus: - Manes - the eponymous first king of Maeonia, who later came to be known as the first king in the line of the primordial house of Lydia, the Atyad dynasty

Hera. She is the Queen of the Gods and goddess of marriage, women, childbirth and family. She is the wife and one of three sisters of Zeus. Due to Zeus' numerous affairs, she has a turbulent relationship with him. His infidelities made her extremely jealous and led to many quarrels between them. Children with Zeus: - Ares - God of war - Eileithyia - Goddess of childbirth - Eris - Goddess of discord - Hebe - Goddess of youth

Leto. She is a daughter of the Titans Coeus and Phoebe and the sister of Asteria. Hera found out that Leto was carrying Zeus' twins and in her jealousy caused all lands to shun her. Leto had to search for a place where she could give birth.
Finally, she found an island that wasn't attached to the ocean floor, so it wasn't considered land, and there she could give birth. Children with Zeus: - Apollo - God of music, poetry, art, oracles, archery, plague, medicine, sun, light and knowledge - Artemis - Goddess of the hunt, forests and hills, the moon, and archery

Metis. She is a Titan goddess and, like several primordial figures, an Oceanid. She was the first wife of Zeus, and became the goddess of wisdom, prudence and deep thought. Children with Zeus: - Athena - Goddess of wisdom and crafts

Mnemosyne. Zeus slept with Mnemosyne for nine consecutive nights, eventually leading to the birth of the nine Muses. The kings and poets were inspired by Mnemosyne and the Muses, thus getting their extraordinary abilities in speech and the use of powerful words. Children with Zeus: - Nine Muses: - Calliope - presides over eloquence and epic poetry - Clio - muse of history and lyre playing - Euterpe - muse of music, called the 'Giver of delight' - Erato - muse of lyric poetry, especially romantic and erotic poetry - Melpomene - muse of singing - Polyhymnia - muse of sacred poetry, sacred hymn, dance and eloquence, as well as agriculture and pantomime - Terpsichore - muse and goddess of dance and chorus - Thalia - presides over comedy and idyllic poetry - Urania - muse of astronomy

Nemesis. She is the goddess of divine retribution and revenge, who would show her wrath to any human being that committed hubris, i.e. arrogance before the gods. She was considered a remorseless goddess. Although a respected goddess, Nemesis brought much sorrow to mortals such as Echo and Narcissus. Narcissus was a very beautiful and arrogant hunter who disdained the ones who loved him. She lured him to a pool where he saw his own reflection and fell in love with it, not realizing it was only an image. He was unable to leave the beauty of his reflection, and he eventually died. Nemesis believed that no one should ever have too much good, and she always cursed those who were blessed with countless gifts. Children with Zeus: - Helen of Sparta

Persephone. She is the goddess of the Underworld, springtime, vegetation and maidenhood. Hades, the god-king of the Underworld, abducted her. The myth of her abduction represents her function as the personification of vegetation, which shoots forth in spring and withdraws into the earth after harvest; hence, she is also associated with spring as well as the fertility of vegetation. She was also Zeus' daughter. Children with Zeus: - Zagreus - the god of the Orphic Mysteries - Melinoe - Goddess of ghosts

Selene. She is the goddess of the moon. Like her brother Helios, the sun god, who drives his chariot across the sky each day, Selene is also said to drive across the heavens. Children with Zeus: - Ersa - Goddess of the plant-nourishing dew - The Nemean lion - a vicious monster that lived at Nemea - Pandia - meaning 'all brightness', the Greek personification of the moon

Themis. She is a Titan goddess described as 'of good counsel', and is the personification of divine order, law, natural law and custom (the traditional rules of conduct first established by the gods). She was also a prophetic goddess who presided over the most ancient oracles, including Delphoi.
Children with Zeus: - The three Horai: - Diké - goddess of moral justice - Eunomia - goddess of law and legislation - Eirene - personification of peace and wealth - The three Moirai: - Clotho - spun the thread of life from her distaff onto her spindle - Lachesis - measured the thread of life allotted to each person with her measuring rod - Atropos - cutter of the thread of life

Semi-divine

Aegina. She is the daughter of the river god Asopus and the nymph Metope, and is a Naiad nymph. Zeus carried her off in the guise of an eagle to the island of Aegina, which was named after her. Children with Zeus: - Aeacus - king of the island of Aegina in the Saronic Gulf

Carme. She is a female Cretan spirit who assisted the grain harvest of Demeter's Cretan predecessor. According to stories of the Olympian gods, she was the mother, by Zeus, of the virginal huntress Britomartis, also called Diktynna, whom she bore at Kaino. Children with Zeus: - Britomartis - Minoan goddess of mountains and hunting

Electra. A Pleiad nymph of the island of Samothrake (in the Greek Aegean), one of the daughters of the Titan Atlas. (She is not to be confused with the Electra of tragedy, the daughter of King Agamemnon and Queen Clytemnestra who plotted with her brother Orestes to avenge their father.) Children with Zeus: - Dardanus - founder of the city of Dardanus at the foot of Mount Ida in the Troad - Iasion - founded the mystic rites on the island of Samothrace - Harmonia - immortal goddess of harmony and concord

Himalia. She is a nymph of the eastern end of the island of Rhodes. Zeus seduced her when he came to the island to vanquish the Gigantes. Children with Zeus: - Kronios - Spartaios - Kytos

Io. She is a Naiad nymph of the Argive river Inakhos. She was seduced by Zeus, who changed her into a heifer to escape detection by his jealous wife, Hera. Children with Zeus: - Epaphus - whose name, meaning 'touch', shows the way he was born, that is, by Zeus' touch - Keroessa - heroine of the foundational myth of Byzantium

Plouto. She is a nymph of Mount Sipylos in Lydia (western Anatolia); her parents were Oceanus and Tethys (thus making Plouto one of the 3,000 Oceanids). Children with Zeus: - Tantalus - most famous for his eternal punishment in Tartarus

Taygete. She is a Naiad nymph of the Argolis (in southern Greece) who was abducted to Assyria (in Asia Minor) by Zeus. He promised her the fulfillment of a wish, and she declared, 'I wish to remain a virgin.' Children with Zeus: - Lacedaemon - mythical king of Laconia

Mortal Lovers

Alcmene. She was the wife of Amphitryon and the mother, by Zeus, of Heracles. She was also the mother, by Amphitryon, of Iphicles and Laonome. Children with Zeus: - Heracles - divine hero in Greek mythology

Antiope. She was the daughter of the Boeotian river god Asopus, according to Homer; in later sources she is called the daughter of the 'nocturnal' king Nycteus of Thebes or, in the Cypria, of Lycurgus, but for Homer her site is purely Boeotian. Children with Zeus: - Twins: - Amphion - Zethus - Together they are famous for building Thebes

Callisto. She was a nymph, the daughter of Lycaon. Transformed into a bear and set among the stars, she was the bear-mother of the Arcadians, through her son Arcas. Children with Zeus: - Arcas

Cassiopeia. She, the queen of Aethiopia, was the wife of King Cepheus and the daughter of Coronus and Zeuxo.
Very beautiful and vain, she committed hubris by saying that she and her daughter Andromeda were more beautiful than the daughters of the sea god Nereus, called the Nereids. Children with Zeus: - Atymnius

Lamia. She was a beautiful queen of Libya who became a child-eating daemon. Aristophanes claimed her name derived from the Greek word for gullet, referring to her habit of devouring children. Children with Zeus: - Achilleus - his most notable feat during the Trojan War was the slaying of the Trojan hero Hector outside the gates of Troy - Herophile (the Delphic Sibyl) - legendary figure who gave prophecies in the sacred precinct of Apollo at Delphi, located on the slopes of Mount Parnassus

Laodamia. She was the daughter of Bellerophon and Philonoe, and the sister of Hippolochus and Isander. She was shot by Artemis (that is, died a sudden, instant death) one day when she was weaving. Diodorus Siculus makes her the wife of Evander, who was a son of Sarpedon the elder and, by her, the father of Sarpedon the younger. Children with Zeus: - Sarpedon - king of Lycia

Niobe. She was a daughter of Phoroneus. She is not to be confused with the more famous Niobe, who was punished for boasting that she had more children than Leto. Children with Zeus: - Argus - king and eponym of Argos - Pelasgus

Pandora. She was a daughter of King Deucalion and Pyrrha who was named after her maternal grandmother, the more famous Pandora. Children with Zeus: - Graecus - Latinus

Protogeneia. She was a daughter of Deucalion and Pyrrha, progenitors in Greek mythology. She was married to Locrus but had no children by him; Zeus, however, who carried her off, became by her, on Mount Maenalus in Arcadia, the father of Opus, Aethlius and Aetolus. According to others she was not the mother, but a daughter, of Opus. Endymion also is called a son of Protogeneia. Children with Zeus: - Aethlius - first king of Elis - Opus - king of the Epeians and father of Cambyse or Protogeneia

Pyrrha. She was the daughter of Epimetheus and Pandora (of Pandora's box) and the wife of Deucalion. When Zeus decided to end the Bronze Age with the great deluge, Deucalion and his wife, Pyrrha, were the only survivors. Even though he was imprisoned, Prometheus, who could see the future and had foreseen the coming of this flood, told his son, Deucalion, to build an ark, and thus they survived. During the flood they landed on Mount Parnassus, the only place spared by the flood. Children with Zeus: - Hellen

Semele. The daughter of the Boeotian hero Cadmus and Harmonia, she was the mortal mother of Dionysus by Zeus in one of his many origin myths. Children with Zeus: - Dionysus - god of the grape harvest, winemaking and wine, of ritual madness, fertility, theatre and religious ecstasy

Thyia. Hers was the name of a female figure associated with the cults of several major gods. 'Thyia' was derived from the Ancient Greek verb θύω, meaning 'to sacrifice'. The name was also applied to a type of fragrant tree called a Thuja. Children with Zeus: - Magnes - Makedon

Ganymedes. Ganymedes is the only known male lover of Zeus. He was a handsome young Trojan prince who was carried off to heaven by Zeus, or his eagle, to be the god's lover and cup-bearer of the gods. Ganymedes also received a place amongst the stars as the constellation Aquarius; his ambrosial mixing cup became the Krater, and the eagle became Aquila. Ganymedes was frequently represented as the god of homosexual love, and as such appears as a playmate of the love-gods Eros (Love) and Hymenaios (Marital Love).
https://hubpages.com/education/Greek-Mythology-lovers
The intelligence of dolphins is well established, but how much do they resemble us in the way they communicate and maintain relationships? Much more than one might think. They can call on their comrades when certain tasks require some form of collaboration: for example, when it comes to herding females in heat and defending them in groups. A new study finds that they achieve this by learning the characteristic whistles of their closest allies (sometimes more than a dozen of them) and remembering which ones have cooperated with them in the past. The results are remarkable: dolphins do use a concept of team membership (a skill previously observed only in primates). The findings will thus help reveal how these animals maintain such complex and close-knit societies. "This is a revolutionary study," says Luke Rendell, behavioral ecologist at the University of St Andrews, who was not involved in the research. The results were published in the journal Nature Communications. This work also provides further evidence for the idea that dolphins evolved large brains in order to maintain their complex social environments.

Friends for life. Previous studies have shown that male dolphins generally cooperate in pairs or trios, in what researchers call a "first-order alliance". These small groups work together to find and herd fertile females. Males also cooperate in second-order alliances, which can number up to 14 dolphins and defend against rival groups that attempt to steal their female(s). Some second-order alliances join together into even larger third-order alliances, offering the males in these groups an even greater chance of having allies nearby in the event of an attack by rivals. Dolphins often change partners in their first-order alliances, but they retain allies in second-order groups for decades, according to long-term behavioral studies conducted in Shark Bay, Western Australia. These groups are considered to be the central unit of male dolphin society. "Males stay together their entire lives, at least up to 40 years," says Stephanie King, behavioral biologist at the University of Bristol.

A unique whistle. But how do males keep track of all the members of these complex groups? Scientists suspected their whistles were the key. Each dolphin learns a unique signature whistle from its mother, which it retains throughout its life. Dolphins recognize and remember other individuals' whistles, just as we recognize other people's names. To further study how male dolphins use their whistles, King and her colleagues turned to a population of Indo-Pacific bottlenose dolphins (Tursiops aduncus) living in the remarkably clear waters of Shark Bay. The team has been following the animals with a set of underwater microphones since 2016, allowing them to identify which dolphin was making which whistle. From 2018 to 2019, the researchers placed a speaker underwater and played the whistles of males to other males in their various alliances. These males were between 28 and 40 years old, and had been in these groups all their lives. Meanwhile, the scientists flew a drone over the water to film the dolphins' reactions. The researchers expected that males hearing the whistle of their first-order mates would respond most strongly. But when they looked at the videos, they found that the strongest responses came from the males of the second-order alliance, that is, the animals that had consistently cooperated with them to fend off rivals (see video at the bottom of the article).
"It was so striking," says King, lead author of the study. "In 90% of the experiments, dolphins who heard the whistles of second-order alliance members would immediately and directly turn towards the speaker." The results suggest that dolphins, like humans, have a "social concept of belonging to a team, based on the previous cooperative investment of an individual rather than on the strength of their friendship," she explains. This study "provides the missing link" between the signature whistles of male dolphins and their cooperative alliances, says Frants Jensen, behavioral ecologist at the Woods Hole Oceanographic Institution, who was not involved in the research. Jensen and other experts predict that the researchers' high-tech approach will help scientists unravel other mysteries of cetacean communication. In the video below, a male dolphin can be heard calling out to his mates. Shortly after, they turn around and follow him:
https://geeky.news/dolphins-learn-the-names-of-their-comrades-to-form-teams/
The Network has conducted several related studies in Rhesus monkeys that address how early social deprivation affects brain and behavioral development. Monkeys who were removed from their mothers at either 1 week or 4 weeks of age developed strikingly abnormal behaviors compared both to each other and to normal monkeys who left their mothers at 6 months, the usual time for maternal separation. Neuroanatomical studies of these monkeys have shown distinct differences in some areas related to social functioning. Other studies involve introducing monkeys separated at 1 week of age (who appear to lack any social drive after the separation) to a supermom (a female monkey known to adopt infants). These studies are showing that, while the behavioral abnormalities seen in these separated monkeys can be remediated by the introduction of a substitute mother, there appears to be a narrow window of opportunity for the reintroduction of maternal care. After this window of opportunity, it is highly unlikely that either the infant or the mother will be amenable to adoption. The behavioral abnormalities seen in the 1-week and 4-week separated monkeys appear to have long-lasting consequences. The first separated monkeys are now of reproductive age, and several have had infants of their own. Those monkeys separated from their mothers at 1 week of age display highly abnormal parenting behaviors. We are watching the infants of these mothers closely to see if there is any intergenerational transmission of the abnormal behavioral profile. Another study that involved re-organizing separated monkeys into unfamiliar social groups has just been concluded, and the behavioral data show striking abnormalities among the 1-week- and 4-week-separated monkeys. This suggests that there may be some further brain and behavioral abnormalities caused by the early deprivation that do not manifest themselves until later in life, or until the introduction of some social stress (perhaps akin to the social stress undergone by adolescents entering a new school setting).
https://macbrain.org/timing/
WEDNESDAY, May 5 (HealthDay News) -- Many patients with normal pressure hydrocephalus (NPH) experience clinical declines similar to those of Alzheimer's disease (AD) patients, and newly discovered cerebrospinal fluid (CSF) biomarkers may help better diagnose forms of early dementia, as well as better identify patients who may benefit from surgical implantation of a shunt, according to research presented at the annual meeting of the American Association of Neurological Surgeons, held from May 1 to 5 in Philadelphia. In an effort to assess the clinical association of NPH and AD, Sebastian F. Koga, M.D., of the University of Virginia Health Science Center in Charlottesville, and colleagues performed CSF profiling for the biomarkers beta-amyloid, T-tau and P-tau, along with APOε4 genotyping, and completed cortical biopsy evaluations for neuritic plaques and tau tangles in patients treated for NPH at their institution. They then assessed the results as they related to clinical progress and neuropsychological testing. The researchers found that the presence of a large number of neuritic plaques on biopsy and increased beta-amyloid levels were associated with a failure to improve after surgical shunt implantation. An analysis of the CSF biomarkers T-tau, P-tau and beta-amyloid showed that NPH progression was similar to the changes occurring in patients with AD. In addition, the researchers found that in a significant number of patients, a high number of plaques and tangles in frontal lobe biopsies was an indication of advanced AD. "Although there are certain differences in the clinical presentation of NPH and AD, our research suggests that these two forms of dementia are part of a wider spectrum of tau-protein abnormalities in the brain. This new perspective could change diagnostic criteria and redefine the surgical treatment options available to patients suffering from dementia," Koga said in a statement.
https://consumer.healthday.com/neurology-9/misc-surgery-news-650/aans-findings-shed-light-on-surgical-treatment-of-dementia-638547.html
The Department of Music at Texas A&M University-Kingsville is a comprehensive music program within the College of Arts and Sciences. The Music Department is a member of the Texas Music Educators Association and the Texas Association of Music Schools, and is an institutionally accredited member of the National Association of Schools of Music. The Music Department serves three main purposes: 1) To provide training to qualified students for the music profession; 2) To supply an area of artistic enrichment for non-music majors; and 3) To create a genuine musical influence on the entire university family. Music Department Goals and Objectives. Goal I: Students will acquire the necessary knowledge, experience, skills, and artistic abilities to pursue careers as successful and effective music educators and performers. Objective 1: Students will possess a thorough knowledge of music theory, history, pedagogy, literature, and listening skills. Objective 2: Students will have a thorough knowledge of and display ability in music performance. Objective 3: Students will be prepared to enter the job market with the skills needed for their chosen profession. Objective 4: Students will be prepared to enter graduate school with the skills needed to pursue an advanced degree. Goal II: Undergraduate and graduate music curricula and programs will remain current both in content and technology, and will reflect the pedagogical, artistic, cultural, and professional needs of our students. Objective 1: New degrees, programs, and areas of emphasis will be pursued to meet student need and demand within the profession. Objective 2: TAMUK will continue to be accredited through the National Association of Schools of Music. Goal III: The Music Department will provide an atmosphere of artistic enrichment that will create a musical influence on the University and the South Texas region, while establishing a growing state and national reputation. Objective 1: The Music Department will work to increase the number and quality of performances given by students and student ensembles. Objective 2: The Music Department will involve students in professionally recognized research, creative activities, and outreach. Goal IV: The Music Department will strive to maintain qualitative and quantitative growth, while pursuing ways to ensure that resources such as faculty, staff, facilities, curricula, equipment and operating budgets are sufficient to support the program. Objective 1: The Music Department will seek funding to support facilities, equipment, operating expenses, student and faculty travel, and scholarships. Objective 2: Enrollments will rise in relation to the department's ability to support that growth. Objective 3: Student recruitment will produce significant numbers of new, quality music students each year.
http://www.tamuk.edu/artsci/music/MissionStatementGoalsandObjectives/index.html
A historic day today in Hanoi for Vietnam's export industries: in the presence of numerous press, a joint statement was signed by the Vietnamese employers' organizations VITAS and LEFASO, the Vietnam Chamber of Commerce and Industry VCCI and the national trade union VGCL. The statement notes the negative impact of the COVID-19 pandemic on Vietnam's garment and shoe industries and sets out an action plan towards a sustainable industry that ratifies international conventions and complies with social responsibility requirements. CNV Internationaal supported the realization of the statement by facilitating a social dialogue between the different stakeholders. With the EU-Vietnam Free Trade Agreement going into effect in August 2020, the statement is a great impulse to help Vietnam's export industry grow and innovate in a sustainable and responsible manner. Since COVID-19 hit Vietnam, it has had a profound impact on the economy of Vietnam, Asia, and the world, especially on labor-intensive and key export industries such as the garment and textile and leather and footwear sectors. According to the employers' organizations, the industry's orders for April 2020 and May 2020 decreased by 20% and 50%, respectively. Most of the affected businesses and workers report that they have not been able to get access to the support package of the Vietnamese Government. Moreover, support from the EU for Vietnam's supply chains is necessary and urgent because 1 million workers have lost their jobs and income.

Promoting Social Dialogue. In this context, with the technical support of CNV Internationaal, the social partners VITAS, LEFASO, the Vietnam Chamber of Commerce and Industry VCCI, and the Vietnam General Confederation of Labour VGCL worked together, through social dialogue, to find common ground and solutions to overcome the crisis. Nguyen Thi Hai Yen (CNV Internationaal Coordinator Vietnam): "This statement gives an excellent example of effective social dialogue. CNV Internationaal is committed to continue supporting and working with the stakeholders to overcome the crisis and to promoting social dialogue within the garment sector and beyond."

Call for action. The statement is a call for action to partners in the European Union and the Vietnamese Government to invest in strategic partnerships to promote social dialogue in line with international labour standards. It further requests investment in the education of workers to make them more employable. The stakeholders also ask for timely and substantive support for workers and businesses affected by the COVID-19 pandemic, with easily accessible and simplified procedures. In conclusion, the Vietnamese partners aim to become an innovative industry able to produce more sustainable and ecological products through greener production methods.

Towards a social industry. Vietnam is an important source country for the European garment industry: a substantial percentage of the textiles on the European market is produced in Vietnam. Never before had the different stakeholders signed a joint statement. CNV will continue supporting social dialogue in Vietnam. We count on the partners to join forces towards a social and fair industry.
https://www.cnvinternationaal.nl/en/our-work/news/2020/june/historic-joint-statement-signed-in-vietnamese-garment-industry
The Nuclear Industry Association of Turkey (NIATR) and the United States Nuclear Infrastructure Council (NIC) signed a memorandum of understanding to develop cooperative activities between the nuclear industries of the two countries, NIATR announced Monday. The agreement was inked during a ceremony at the U.S. Department of Commerce global finance workshop on March 11, which included participation by both industry groups, according to the statement. "The agreement recognizes the importance of nuclear energy for sustainable development, both worldwide and in the two respective countries. Its goal is to foster dialogue, information exchange and communication between the nuclear industries of the two countries," read the statement. The agreement was signed by NIATR Vice President and Founder Erhan Atay and U.S. NIC CEO David Blee. "We are confident this agreement will enhance collaboration between our respective industries by promoting a close and collegial relationship and cooperation for the safe and peaceful uses of nuclear technology and energy," Atay said. U.S. NIC is the leading U.S. business consortium advocating for new nuclear technologies and promoting the American supply chain globally. NIATR was established to bring together global companies interested in investing in Turkey's nuclear energy sector and local companies interested in collaboration for technology and information transfer. Turkey plans to build three nuclear power plants. The first two projects are the Akkuyu Nuclear Power Plant in Turkey's Mersin province and the Sinop Nuclear Power Plant on the Black Sea coast at a location called Inceburun.
http://www.turkishmaritime.com.tr/turkish-us-nuclear-associations-ink-tech-transfer-mou-25449h.htm
Although better known for her interior design, Kate Hume started her career in the design industry making mouth-blown glass pieces in the late '90s. Since that time, she has produced nearly 1,000 unique glass objects and vessels. Kate joins us this week to discuss the design process for her recent Amphore collection.

Raymond Paul Schneider: When did you first start to develop this new collection?

Kate Hume: The Amphore collection of glass vessels was conceived several years ago, by happenstance. New ideas just evolve when I am in the studio. I started working on my glass collection in 1998. My husband Frans van der Heijden and I had developed our furniture collection (Heijden Hume) and I felt it needed accessorizing in some way. I had long been interested in glass and was convinced that this was the moment to learn more about the material (I knew nothing). Strangely, there was very little on the market at that time, save for some very elaborate pieces produced in Murano, and I wanted to develop more organic, fluid pieces using the material as my inspiration. I developed a technique of working with my master glassblower Richard Price which is about as 'hands on' as you can be whilst blowing glass. We are literally able to touch the pieces and mold the shapes. Around 15 years ago a Belgian company, When Objects Work, took over production of my Pebble/Rock/Caillou collection. This is distributed worldwide. I experimented a lot over the years and my pieces got bigger and bigger. I came up with the idea of throwing the big pieces into ice-cold water to 'crack' them and then re-heating them so that the scaly surface is 'locked in.' It's a wonderfully dramatic moment for visitors to the studio. But essentially it brings a different reflective surface to the vessel, uneven yet smooth at the same time.

Raymond: What was the overall timeline from conception to achieving the final design?

Kate: Seconds, really; it's quite instinctual. I just draw a chalk sketch of what I want on the floor in front of Richard and he makes that shape. No two pieces are the same, so it's almost as if I answer my question 'what are we making today?' and off we go.

Raymond: What was your initial inspiration, and where did the idea come from?

Kate: As I said, from watching the actual material in the studio, I realized how fluid it is, and I didn't want to control that too much. Of course, shapes I have seen in nature play a part, but I am actually more inspired by grouping pieces together to make a story. Color is very important to me.

Raymond: Please describe your overall creative and design process.

Kate: Sketches, of course, and some kind of zeitgeist color feeling which I have always had. But if I am designing for a specific location, I tend to work out the colors and shapes based on where the pieces will eventually be placed. (My main activity is interior design, so my glass pieces are often the finishing touch to a room in my own work.)

Raymond: Please describe the methods, tools, and materials that you used to develop and prototype this design.

Kate: That's complex. A glass studio is a world unto itself, and medieval really, as nothing much has changed for centuries. It's a wonderful, theatrical, dangerous environment.

Raymond: Did you utilize a new technique or technology to conceptualize this product?

Kate: No. I just wanted to make SIMPLE glass pieces, where color - and subtle color - is the story.
Raymond: Please describe any challenges that affected the design and perhaps steered you to an entirely new final design.

Kate: There is inevitable waste in production, as the material is so delicate and dangerous. I often halt production if I don't like the way a piece is forming, and start again; time is money in a studio with all the furnaces burning, so it's easier to move on. Also, you can't really stand and ponder about a piece once it's in production; it's 'take it or leave it.' I am very decisive, so this suits me.

Raymond: Please describe your overall brand DNA and ethos.

Kate: I am an interior designer first and foremost, so am concerned with large spaces most of the time. But as a firm, we do design everything - sometimes from the building itself to the dinnerware. I tend to custom design as much as possible for my projects. We are a small, flexible, international team who genuinely love creating individual properties for people, and seeing our design dreams come to life. Personally, I believe that ANY space can be made to look lovely.

Click here to see more of our 'Anatomy of a Design' series and SMW Home. Like what you see? Get it first with a subscription to ASPIRE DESIGN AND HOME Magazine.
https://aspiremetro.com/anatomy-of-a-design-kate-hume/
Chonyid Bardo: The Vision Of The Peaceful Deities

The Chonyid Bardo is the second after-death state described in the Bardo Thodol (Tibetan Book of the Dead), in which visual and auditory phenomena occur. After witnessing the Primordial Clear Light in the Chikhai Bardo, what follows is a progressive vision of the peaceful deities, emerging from the fivefold radiant light of the primordial Buddha, from the fourth until the eleventh day. Corresponding with the visions in this state is a feeling of intense tranquility and perfect knowledge. However, it is also said that the consciousness of the departed naturally goes astray during this bardo experience if the necessary effort was not made during his or her lifetime to fully recognize the Primordial Clear Light. This means it gets lost during the process and may end up some place it doesn’t like in the next rebirth.

What are the Deities of the Chonyid Bardo?

In Tibetan Buddhist doctrines, it is said that a concentration of five radiant lights emanates from the heart center of our spirit-body. It is from this center that the peaceful deities emerge. These deities occupy a very special position in the long line of buddhas and bodhisattvas. They embody many philosophical and religious teachings and serve as guiding symbols for the spiritual life. The deities that emerge in the Chonyid Bardo are said to be manifestations of the karmic fruits and experiences of one’s life. They are the fusion of joy and emptiness, which comes from the realm of pure self-awareness. These apparitions may be either mesmerizing or frightening in appearance.

The deities in the Chonyid Bardo can be invoked by mantras that correspond to them. These mantras are considered to be carriers of spiritual energy and channels for the wisdom of the Buddhas. Traditionally, they are only given from guru to disciple, which is why only those who know these mantras are able to communicate with the bardo deities.

What is the purpose of the Chonyid Bardo experience?

The purpose of the experience in the Chonyid Bardo is to help transform one’s consciousness via a dramatic display of psychic projections, purging one’s excessive karmic content. This is essentially the same thing that takes place when one reaches the deeper states of meditation. All the positive and negative emotional material accumulated during one’s lifetime takes its turn in coming to the surface of one’s consciousness. This transformation is in some ways analogous to the story in which Christ meets the devil, who offers him an easy life of self-indulgence and illusions of power, or when the Buddha meets Mara (the dark lord), who tempts him with beautiful women.

Vision of the Peaceful Deities (Days 4–11)

The vision of the peaceful and wrathful deities is said to take place from the 4th to the 11th day after a person’s death. The deities of the Chonyid Bardo are always depicted in a sitting, standing, or moving position on a lotus. They are surrounded by a powerful aura made of the intense colors of the five elements. The lotus represents spiritual unfoldment and attainment. It also signifies that the deities have prevailed over the cycle of birth, life, death, and rebirth.
There are five groups of peaceful deities that belong to the first part of the Chonyid Bardo experience:
- The Five Wisdom Buddhas and their consorts
- The Eight Mahabodhisattvas and their Dakinis
- The Buddhas of the Six Realms of Existence
- The Four Male & The Four Female Guardians
- The Five Knowledge-Holding Deities

The second part of the Chonyid Bardo experience involves the same deities, only this time turning into their wrathful counterparts (discussed in the next article).

Note: Due to the amount of time and space it would take to detail the meaning of these deities and their other symbols, I decided to provide only a summary. Many of the symbols here and their meanings can easily be found on the web.

1. The Five Wisdom Buddhas and their consorts

The first and second days in the Chonyid Bardo involve the vision of the Five Wisdom Buddhas (or Five Tathagatas), whose purpose is to purify the Five Aggregates (Skandhas); these aggregates are form, consciousness, perception, feeling, and mental formation. The five female buddhas, for their part, purify the five elemental realms, and they are usually depicted in inseparable tantric union, called Yab-Yum, with the Five Wisdom Buddhas. Above the Five Wisdom Buddhas, in the highest rank, you’ll see the Adibuddha as the pure Dharmakaya (truth body) and the source of all further manifestations. The Adibuddha is the mystical father and medium of all the buddhas and bodhisattvas in the bardo. The vision of the Adibuddha actually belongs to the Chikhai Bardo, but since he is the creator of the mandala in which all the other deities manifest, it is essential to mention his name here.

2. The Eight Mahabodhisattvas and their Dakinis

The third and fourth days within the Chonyid Bardo involve the vision of the Eight Mahabodhisattvas and their Dakinis for the purification of the eight functions of consciousness and their realms of activity. They generally appear in mandalas, together with the Five Wisdom Buddhas, as male-female pairs. The Eight Mahabodhisattvas rule over the eight kinds of awareness (the psychic organs of perception), and the Eight Dakinis are associated with the eight realms of operation of these kinds of awareness (the corresponding physical organs of perception).

3. The Buddhas of the Six Realms of Existence

The fifth day brings the vision of the Six Incarnations of the great Bodhisattva Avalokitesvara. This is the only experience carried out on the plane of the emanation body, or Nirmanakaya (see the first article). The six vices, which cause people to be reborn repeatedly due to karma, are said to be overcome through this experience. The six Buddhas are contemplated in detail in separate images during the death ritual so that the dead person in the bardo can realize early why these Buddhas appear as incarnations of Avalokitesvara in the Six Realms of Existence. According to Buddhist belief, as long as human life is attached to the world of suffering through ignorance, hatred, and desire, liberation from the chains of rebirth in the Six Realms of Existence is impossible. In order to communicate this fundamental knowledge to all six kinds of beings, Avalokitesvara appears in the Worlds of Existence in the form of the six Buddhas.

4. The Four Male & The Four Female Guardians

The sixth day brings the vision of the four male and the four female Guardians of the Mandala, each of whom possesses the third eye of higher knowledge and who help the dead person attain the Four Sublime States.
Like all the other deities of the Chonyid Bardo, they appear in tantric union with their female counterparts. They appear human but show wrathful faces, have bad haircuts, and wear crowns of five skulls. They guard the four cosmic directions of the mandala and at the same time become guides for the consciousness of the dead in the transcendent world.

5. The Five Knowledge-Holding Deities

The seventh day brings the vision of the Five Knowledge-Holding Deities, or Vidyadharas. They are the last of the peaceful deities to appear, and they occupy a special place in the mandala of deities. The Five Knowledge-Holding Deities form a mandala, or circle, in the throat chakra, which represents the spiritually enlightened verbal sphere of human activity. Because of this, they belong neither to the spiritual plane of the peaceful deities, which is the heart chakra, nor to the mental plane of the wrathful deities, which is the third-eye chakra. Their position is also special because no particular initiatory rite is associated with them. It could be that the knowledge of the mantras and the ritual process involved at this stage is a closely held secret among the Tibetans.

Illusory Images Of The Mind

One thing that is mentioned again and again in the Tibetan Book of the Dead is that these deities are not to be taken in the literal sense. No matter how extreme these figures may be, they are only projections of one’s unrecognized reality, i.e., the profound material that dwells in the subconscious. However, since they contain, as spiritual images, the most powerful forms of polarity and appear with such convincing effect, you will find it impossible not to think that they are real. This is why the Tibetan Buddhists do not treat them as mere mythological figures.

“The underlying problem of the Second Bardo is that any and every shape – human, divine, diabolical, heroic, evil, animal, thing – which the human brain conjures up or the past life recalls, can present itself to consciousness: shapes and forms and sounds whirling by endlessly. The underlying solution – repeated again and again – is to recognize that your brain is producing the visions. They do not exist. Nothing exists except as your consciousness gives it life. You are standing on the threshold of recognizing the truth: there is no reality behind any of the phenomena of the ego-loss state, save the illusions stored up in your own mind either as accretions from game (sangsaric) experience or as gifts from organic physical nature and its billion-year-old past history. Recognition of this truth gives liberation.”
– The Psychedelic Experience: A Manual Based on the Tibetan Book of the Dead (1964)

The deities in the Chonyid Bardo are not “gods” as we know them. They do not occupy any temporal or spatial realm, because their existence is so radically different from the reality of the physical world. Thus, if you want to find out who or what these deities really are, you would have to spend years and years of dedicated practice in meditation. However, if you want to take a shortcut, you do have the option of taking psychedelics. Just keep in mind that you are most likely to meet the wrathful deities first if you don’t know what you’re doing.
http://chinabuddhismencyclopedia.com/en/index.php?title=Chonyid_Bardo:_The_Vision_Of_The_Peaceful_Deities
There is another strong belief that cremation began in a real sense in the Stone Age, around 3000 B.C., in Europe and the Near East. In the late Stone Age it is believed to have spread across northern Europe. Moving to the British Isles in the Bronze Age (2500 to 1000 B.C.), cremation slowly became popular in Spain and Portugal as well. Cemeteries later developed in Hungary and Northern Ireland. Becoming firmly established in Grecian burial custom during the Mycenaean Age, it was also embraced by the early Romans around 600 B.C. Throughout history one death ritual has been preferred over the other, alternating between burial and cremation. In some places, like the Middle East and Europe, both burial and cremation have been evident.

Journey after Death (A Vedic Perspective)

“BIRTH AND DEATH is the beginning and the end of earthly life, and no human being who has within him even a faint longing for the Truth can disregard the two important questions – how does life enter the physical body, and what becomes of it after death?” – Herbert Volkmann

There are several sacred texts in India which speak of the theory that the way one leads one’s life decides one’s fate after death. In the Hindu tradition, Agni, the lord of fire, is the center of all ritualistic celebrations; offerings are made to Agni, and these are consumed and transformed by the same Agni. The last rite, which is the final sacrificial fire ritual performed after the death of the individual, is believed to establish the person well in the afterlife. In Vedic times there was also a belief that cremating the body helped return the physical remains of the person to nature as smoke and ashes. The afterlife thus depends to some extent on the performance of the correct rituals.

Hindu Death Rituals and Beliefs

There are strong beliefs in the Upanishadic teachings that propound the giving up of one’s desires in order to escape the ceaseless cycle of birth and death – samsara. This belief further propounds that the soul has the capability and the means to break free of the constant cycle of birth, death and rebirth. The outer or gross body is seen to fall away, and the subtle body, composed of the karmic tendencies, knowledge, mind, etc., also begins to disappear. After death, the soul or jiva remains near the body and later departs from it, entering into a temporarily delightful existence until it takes on a new physical body as per its karmic inclinations.

In the Hindu culture the time of death of a person is given great importance. There is therefore great emphasis on assisting the person in the crossover to the other realm, and hence the need for the rites and rituals. The manner in which the body is disposed of is also very significant in this respect. The two most common funeral rites adopted are cremation and burial. Hinduism requires that bodies be cremated at the earliest unless the deceased is a child who is less than three years old. Spiritual research has examined the effect of modern types of funeral practices from the viewpoint of helping one’s ancestors in the afterlife. The two predominant methods of disposal of the human body as per the Hindu religion and custom, as we know, are burial and cremation. Here we look at the spiritual effect of burial and cremation as per the Vedas.

Circumstances of the Human Body after Death

At the time of death the body expels excretory gases, which are regular physical gases such as putrefying gases.
These gases carry frequencies and vibrations that are negative in nature, which increases the tama (tamas) component in the immediate environment. These in turn attract negative energies to the dead body. Against this background, the various methods of disposal have evolved in the Hindu tradition.

Spiritual Effect of Cremation

From a spiritual perspective, the goals of a funeral rite are to:
- reduce the ill effects of negative forces and ensure that negative forces do not go near the body of the deceased;
- aid the subtle body in shaking off its bonds to the physical body; and
- make the subtle body light and give it momentum in its upward journey in the afterlife.

From a spiritual perspective, the cremation method is believed to have the following benefits, in sequence:
- The process itself is meant to be performed at the earliest opportunity, and before sunset. This minimizes the possibility of negative forces approaching the body after death, when it is most susceptible.
- Through the process of cremation, accompanied by the lighting of the funeral pyre and the recitation of mantras, the five vital energies, sub-vital energies, and excretory gases in the corpse are expelled and disintegrate in the atmosphere.
- It is believed that as the body burns on the pyre, a subtle protective sheath is formed around it by the fire element and the reciting of mantras, further protecting it from negative forces.
- Since there is complete disintegration of the five vital elements and sub-vital elements, any bond that may have existed between the subtle body and the physical body is broken.
- The fire, along with the mantras, destroys any rajasic or tamasic tendencies the body had and provides a protective sheath around it. The subtle body, thus cleansed, becomes lighter and more sattvik in nature. It has now gained the necessary momentum for its onward journey to the other realm.

The Environmental Perspective: Cremation versus Burial

People weigh the consequences of different forms of funeral rites and methods. A scientific analysis of cremation has shown that it meets most of the criteria of an effective funeral rite, whereas burial has been found wanting in several respects as far as Hindu rituals and beliefs are concerned. It is believed in the Hindu culture that even people who have led relatively good lives increase their chances of being affected by negative forces in the afterlife through the very act of burial. Burial is also a source of environmental contaminants, with the casket or coffin itself being the major contaminant. Another concern with burial is that of radioisotopes that may enter the body before or after death, although cremation does not seem to address this aspect either; cremation simply returns the radioisotopes quickly to the environment. A further concern about burial is that it takes up a lot of space. In traditional burial the body is buried in a casket or coffin made of various materials. All this requires space, and even many big cities have run out of permanent space. Hence the need for an alternative: the present-day crematorium.

The spiritual angle is indeed important in the decision-making process. From a purely environmental point of view, there have been several controversies surrounding cremation. This has led to the modern-day crematorium consisting of one or more cremator furnaces.
Open-air cremations are becoming less frequent in urban areas. Most major cities have crematoriums, which are in effect indoor electric or gas-based furnaces, and most cremations now take place in these indoor facilities.

Hindu Cremation – Its Impact and Future

It is estimated that about seven million Hindus die each year and that most of the bodies are cremated traditionally. Another estimate holds that eight million tonnes of carbon dioxide and other greenhouse gases are emitted from Hindu funeral pyres every year. There is growing concern about the smoke arising from these outdoor pyres, which is seen as a major health hazard, and the wood they consume leads to the felling of a large number of trees – around 50 to 60 million trees every year. According to environmentalists, the Hindu religious customs and ceremonies around the disposal of the body have become a threat to the living. Nor are these the only pollutants: large quantities of ash from the funeral pyres are later thrown into the rivers, increasing the toxicity of the waters. To tackle these problems, the government and environmental groups have promoted the use of crematoriums that use LPG as the primary fuel. These developments have been welcomed by many, as they help retain tradition while protecting the environment and remain cost-effective, viable solutions. They allow Hindus to perform all the rites that they are meant to.

Modern Cremation Process

Cremation reduces the dead body to 3–7 pounds of bone fragments and other organic and inorganic compounds. It is performed in a cremation chamber inside a crematorium, which may house several such chambers. Modern cremators are capable of generating temperatures of 870–980 °C (1,598–1,796 °F), ensuring quick disintegration of the corpse. Coal and coke were used until the early 1960s; modern crematoriums use natural gas and propane as fuel, and LPG is also in widespread use as a primary fuel. These furnaces have adjustable control systems that monitor the interior, and the furnace shuts down automatically when the process is complete. The time needed for cremation in these modern facilities varies from body to body and is roughly one hour per 45 kilograms (99 lb) of body weight. Cremation furnaces are not made to cremate more than one body at a time, unless it is a mother and her stillborn child or children, in which case all the bodies are placed in the same cremation container. The body is placed in a container, which is then inserted into the retort, a chamber lined with heat-resistant refractory bricks. The container with the body has to be inserted quickly to ensure that no heat is lost through the top-opening door. The latest crematoriums are computer-controlled to ensure legal and safe operation, and the retort door cannot be opened until the cremator has reached its operating temperature. Modern crematoria also allow relatives to view the charging (the insertion of the body). This is done for religious reasons, especially for traditional Hindu and Jain funerals.
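To make the timing rule of thumb above concrete, here is a minimal sketch in Python; the function name and the assumption of strictly linear scaling are mine, and actual chamber times vary by facility and fuel.

```python
def estimated_cremation_hours(body_mass_kg, hours_per_45_kg=1.0):
    """Estimate chamber time from body mass, assuming the roughly
    linear one-hour-per-45-kg figure cited above (an assumption,
    not a guarantee; real times depend on the facility and fuel)."""
    if body_mass_kg <= 0:
        raise ValueError("body mass must be positive")
    return (body_mass_kg / 45.0) * hours_per_45_kg

# A 90 kg (198 lb) body works out to roughly 2 hours of chamber time.
print(f"{estimated_cremation_hours(90):.1f} hours")
```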
http://alivechannel.com/cremator/cremation-history-2/
During a recent discussion and lecture in the writing class I teach, I filled two dry-erase boards with information and explained that much of it was not available in the text. One student scribbled away in a spiral notebook and a few others were typing notes on their laptops, but the vast majority simply sat quietly, watching. At the end of class, a few more students used their phones to take pictures of the lecture notes, but most walked out without any way to access that information again. I suspect my online students are even more reluctant to take notes on any lecture or reading material they encounter. It’s likely not complacency but rather lack of training that explains their passivity in the classroom.

Note-taking’s big impact on content retention

My students aren’t rare – many students resist taking notes as a regular habit. It’s unfortunate, too; information retention without note-taking is about 10 percent. If a student takes notes, content retention can hover around 80 percent. This significant difference makes note-taking an important educational skill that is worth teaching students.

The act of taking notes influences recall

According to Edudemic’s Katie Lepi, how we take notes has a direct influence on what we remember. While making audio recordings or taking pictures of an instructor’s notes seems like an efficient way to retain every available bit of material, the act of taking notes itself appears to have a direct influence on our ability to recall information. When notes are typed, more material can be recorded, and students often have live access to the internet, so they can search the web for key terms or clarification during the lecture. This is a significant feature of modern student culture, and it can come in handy if you encourage information and note sharing among students. Still, Lepi points out that nearly 40 percent of all people surveyed prefer a mix of typing and handwritten notes. Hand-writing notes is slower, but it has a higher rate of content retention.

Finding the note-taking strategy that works best for each student

Teaching students good note-taking strategies is essential. Cornell Notes are one effective method; other techniques include idea mapping, outlining, using multiple colors of highlighters, or recopying or typing notes the evening after a lecture. Because all students have different learning styles and needs, they should be encouraged to choose the process that works best for them. Students should be reminded that, due to the constraints of typing or writing by hand, they must use note-taking to capture key ideas and phrases, not everything the instructor says. Additionally, they should leave plenty of space in their notes to fill in later if an instructor skips around during a lecture.

Reinforcing note-taking: Strategies for teachers

To ensure that notes work, teachers can encourage students to review their notes after the lecture to identify any missing information or key questions. To reinforce this habit with my students, I open our classes by asking if they have any questions about our last class meeting or their reading material. In the blog The Art of Manliness, Brett and Kay McKay explain that the student’s job doesn’t end when the class lecture is complete. A key component of effective notes is to synthesize in-class material and reading notes into a master outline. The combination of lecture and reading materials helps foster both understanding and retention.
Modeling is another excellent way to share the importance of note-taking strategies with students. I often share my notes on reading material and am always swift to take notes when students are talking. One way to quietly drive home the effectiveness of note-taking is to listen to class discussion, review your notes each evening, and return to the next class to ask students questions about what they had to say. Consider sharing your own notes, or even dedicating class time to mind mapping or outlining together, to show students that the act of taking notes is often an essential piece of their educational success.

Monica Fuglei is a graduate of the University of Nebraska in Omaha and a current adjunct faculty member of Arapahoe Community College in Colorado, where she teaches composition and creative writing.
http://lessonplanspage.com/the-importance-of-note-taking-in-digital-and-face-to-face-classrooms/
As the weather gets colder and the days get shorter, something very special happens in the gallery. For a brief period in the late afternoon, works of bamboo art on view are transformed in the slanting rays of the low winter sun. Bamboo glows as fantastical shadows stretch across walls and pedestals. Already beautiful objects become beautiful in a different way. This winter, we celebrate the beguiling confluence of light, shadow, and Japanese bamboo art. Sometimes subtle, often dramatic, the shadows cast by the works featured in this exhibition are almost as beautiful as the works themselves.

Like a skyscraper in an urban landscape at dusk, Tanabe Chikuunsai IV’s Creative City casts a trailing edge of shadow that echoes the architectural shape of the piece. Says Tanabe, “Every time my father [Japanese bamboo artist Tanabe Chikuunsai III] drove on the Hanshin Expressway near the Nakanoshima district, he would say ‘I like to drive this road. It passes by many buildings.’ Indeed, he made many sculptures on the theme of the city. When traveling through the same area, I often remember his words. Over the years, the city has evolved and modernized. I began thinking about making a series of sculptures to capture the city as it is today, rather than as it was when it inspired my father years ago. This Creative City series was born as a result. I like to see the lights from the buildings in the city while I am driving at high speed at night. The shadows this piece casts represent the light and shadow of the city itself.”

Though a beautiful shadow can sometimes be a happy accident, often it is a tribute to the artist’s imaginative and technical skill with the play of light and shadow. Endo Gen’s Evening Sky, a golden blonde basket that seems to glow under the light, casts a shadow of vibrating geometry. “I wanted to express the sky and clouds at twilight through this piece,” states the artist. “It casts vivid undulating shadows. Open weave is one of my favorite techniques because of the visual effect it creates. Depending upon the angle of the light, this piece casts a radiating shadow within and outside of the piece.”

One of the most prominent contemporary Japanese bamboo artists, Morigami Jin, is already well known for making work with gorgeous shadows. With plenty of awards and stunning pieces already behind him, this artist seems always in contemplation of the play of shadow that is a secondary, but important, aspect of his work. Whether irregularly woven, twined or hexagonally plaited, the strips of bamboo in his pieces are arranged in patterns that cast startling silhouettes when lit from any angle. In regard to Sea of Clouds, Morigami says, “I made this piece to challenge my technical abilities. Artmaking to me is a constant physical battle between the medium and the artist. It is no different from martial arts. The work at the end of such a battle is a visual record of the fight I had. The undulation of the form could be seen as an indicator of how severe the battle was. Each side insists on its own will and ways but meets halfway in the end. What’s most important is that both the bamboo and I did the absolute best we could in the process. The shadow of each piece vividly tells the story of that exchange.”

The hidden, changing, and transitory shadows invoke curiosity and give the viewer a chance to participate, engage, and experience the work in a different way.
Elements or characteristics, like color, that might dominate in even, diffuse lighting recede with the arrival of shadows, and other qualities – like transparency, structure, and silhouette – take center stage. The exhibition will be on view through December 31, in person at the gallery, supplemented by online exhibitions on taimodern.com and artsy.net.
https://taimodern.com/exhibit/winter-shadows/?artist=
Jun 16, 2021 – Gold mining is a global business with operations on every continent except Antarctica, and gold is extracted from mines of widely varying types and scale. At a country level, China was the largest producer in the world in 2020, accounting for around 11 per cent of total global production. Our interactive gold mining map provides a breakdown of the top gold producing countries in the ...

For many years, South Africa was the world’s primary gold producer. The glory days of the gold sector started waning in the early 21st century as mines went deeper to find the rich reef patches. At the same time, the gold price had dropped significantly from its previous highs, and the global economy hit headwinds, culminating in the global financial crisis of 2008.

Jun 18, 2019 – South African gold output has been declining for several decades now. From a peak of around 1,000 t in 1970, the nation’s gold output fell to 130 t in 2018. A combination of closures, maturing assets and industrial strife has created an inhospitable operating environment.

Gold production in South Africa decreased to 44.50 percent in May from 177.90 percent in April of 2021 (source: Statistics South Africa). Gold production in South Africa averaged -3.97 percent from 1981 until 2021, reaching an all-time high of 177.90 percent in April of 2021 and a record low of -60.70 percent in April of 2020.

The gold deposits of South Dakota are in the Black Hills, in the western part of the state (fig. 23). South Dakota ranks third among the states in total gold production and has been the leading gold producer in recent years. Its total gold production through 1965 was 31,207,892 ounces.

Jan 17, 2018 – In 1970, South African mines produced 1,000 tons of gold. Since then, production has steadily dropped. The country produced only 167.1 tons in 2016. That represents an 83% drop from the 1970 peak. The fact that they have already dug out most of the easy-to-reach gold represents one of the biggest challenges facing South African miners.

Gold was developed and financed with the assistance of the South Australian Film Corporation. The film will be released in cinemas through Madman Entertainment, with Altitude Films handling worldwide sales of the film. The Stan Original Film Gold will premiere on Stan and in cinemas in 2021.

Jul 20, 2021 – In 2020, South Africa produced an estimated 90 tonnes of gold, down from 101.3 tonnes in 2019. South Africa is now the world’s 10th largest producer of gold. It is the 2nd largest producer of gold in Africa, after Ghana (since 2019). South Africa produces 4.2% of the world’s annual gold output.

Aug 06, 2011 – Up until a few years ago, South Africa was the world’s largest gold producer. China surpassed South Africa as the world’s largest producer in 2007. China continued to increase gold production and remained the leading gold-producing nation in 2009, followed by Australia, South Africa, and the United States.

Today, South Africa has around 80 operating gold mines and a handful of small-scale producers. The industry employs about 95,000 people, although the number of jobs fluctuates with changes in production. Gold mining boosted South Africa’s gross domestic product by 360.9 billion rand in 2019, compared to 351 billion rand in 2018.

May 19, 2021 – i-80 Gold Corp. is a Nevada-focused mining company with a goal of achieving mid-tier gold producer status.
In addition to its producing mine, El Nino at South Arturo, i-80 is beginning to plan for future production growth through the potential addition of the Phases 1–3 projects at South Arturo, advancing the Getchell Project through economic studies and then on to development, and the ...

Brief history of gold mining in South Africa, including major events:
- 1873: First large-scale production began when alluvial deposits were discovered at Pilgrim’s Rest.
- 1884: Gold was discovered in the Witwatersrand, which led to an influx of miners from around the world.
- 1886–1900: ...

May 19, 2021 – The production of gold worldwide has been steadily increasing since 2008, reaching 3,200 metric tons in 2020. Africa is the third-largest gold producing continent in ...

Apr 06, 2021 – However, large-scale gold production is minimal, with only one mine exceeding 300,000 ounces of gold: the Zijinshan gold-copper mine in Fujian ...

Excluding South African gold production, Rand Refinery now refines approximately 75% of gold production from the African continent by sourcing mining output from West Africa, East Africa, and other countries in Southern Africa. The list of countries within the continent whose gold is refined by Rand Refinery now includes Ghana, Namibia ...

May 02, 2019 – South Africa’s gold mining industry stretches back as far as its largest city, Johannesburg. As the most economically developed country on its continent, South Africa is home to a burgeoning mining sector that comprises a swathe of highly productive gold mines, which together accounted for 12% of the world’s gold production as recently as 2005.

Mining production in South Africa rose by 10.3 percent from a year earlier in July of 2021, following a 19.1 percent jump in the previous month. It was the fifth straight month of rising mining activity, reflecting the gradual recovery from the Covid-19 shock a year before. The largest positive contributions came from the production of iron ore (42.9 percent), PGMs (10.3 percent), chromium ore ...

The standard gold futures contract is 100 troy ounces. Gold is an attractive investment during periods of political and economic uncertainty. Half of the gold consumption in the world is in jewelry, 40% in investments, and 10% in industry. The biggest producers of gold are China, Australia, the United States, South Africa, Russia, Peru and Indonesia.

Oct 08, 2019 – Gold production in South Africa is declining, but it is on the rise in Ghana. It’s too soon for Ghana to pat itself on the back as it overtakes South Africa as the continent’s largest gold producer. Ghana’s gold output of 4.8 million ounces in 2018 surpassed South Africa’s 4.2 million ounce total for the first time.

May 21, 2020 – The Walhalla Belt Project has been consolidated by Fosterville South into a major 547 sq km land package. Total recorded historic hard-rock gold production from the project is 1,510,309 ounces of gold at a recovered grade of 33.59 g/t gold. There are dozens of high-priority exploration targets, including former mines, most with no modern drilling.

1 day ago – A hole drilled in the far northern extent of the deposit returned 7 m at 1 g/t gold, opening up further potential strike length to the north. The latest successful campaign comes off the back of a recent major discovery just 200 m to the south of the known mineralisation, where a bonanza 17 m hit was encountered grading 7 g/t gold from a shallow 11 m down-hole depth.
Jul 02, 2021 – In 2020, Peru was the largest gold-producing country in Latin America, with an estimated output of 120 metric tons. It was closely followed by Mexico, with a production of 100 tons.

May 04, 2021 – While global gold production declined in 2020, major mining companies benefitted from rising commodity prices during the year. The world’s largest gold mining companies operate in diverse regions across the world, from Africa and Asia to South America, the US and Russia.

The most expensive place in the world to mine gold is South Africa. There, all-in gold production costs can be more than twice as much as in Peru, which is the least expensive place to mine gold. According to the Thomson Reuters GFMS Gold Mine Economics Service, average all-in costs for South Africa were over $1,400 between 2005 and 2013.

May 19, 2021 – i-80 Gold Corp. is pleased to provide an update of Q1 2021 South Arturo production as well as full-year production guidance. South Arturo Q1 2021 production: 15,752 ounces of ...

Mar 12, 2015 – The fall in production has reduced gold’s contribution to the South African economy. The metal contributed 3.8% to gross domestic product in ...

Jun 14, 2018 – In 2017, global gold mine production was a reported 3,247 tonnes. Australia is the world’s second largest producer of gold, producing around 295.6 tonnes, followed by Russia (270.7 tonnes) and the United States (230.0 tonnes). 10. Ghana – 101.7 tonnes; 9. Mexico – 130.5 tonnes; 8. South Africa – 139.9 tonnes; 7. Indonesia – 154 ...

Gold in South Australia: The first recorded production of gold in South Australia was in 1846, from the Victoria Mine, 18 km northeast of Adelaide. The history of subsequent discoveries is characterised by short periods of high production which had a significant effect on population movements during the development years of the state.

Gold production: South Dakota gold mines have produced over 1,390 t (44.7 million ounces) of gold since 1875. In recent years, South Dakota typically ranks fourth in the nation in gold production, behind Nevada, California, and Utah. Annual gold production from the five large-scale mines has recently ranged between ...

Aug 11, 2021 – i-80 noted that the acquisition of the Granite Creek project provides the company with near-term production potential from the historic underground workings and mid-term potential for an open-pit project. i-80 Gold is a Nevada-focused mining company with ...

Aug 03, 2021 – South Africa is home to the Witwatersrand Basin, which holds the world’s largest known gold reserves. Although South Africa has the highest gold production in Africa, it ...

The Mponeng gold mine is located approximately 65 km west of Johannesburg in South Africa. Mponeng is currently the world’s deepest operating mine. The Mponeng mine produced 244,000 ounces (oz) of gold in 2019 and is expected to be in operation at least until 2027.

Oct 16, 1974 – Gold. Directed by Peter R. Hunt. With Roger Moore, Susannah York, Ray Milland, Bradford Dillman. A South African gold mine manager discovers a plot hatched by the mine owners and London bankers to flood the mine in order to curb gold production and consequently manipulate its price on the stock markets.
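As a quick check of the decline figure quoted in the Jan 17, 2018 snippet above, the arithmetic works out as stated; here is a minimal sketch (the variable names are mine):

```python
# Figures from the Jan 17, 2018 snippet above: 1,000 tons produced in 1970
# vs. 167.1 tons in 2016.
peak_1970_tons = 1000.0
output_2016_tons = 167.1

drop_pct = (peak_1970_tons - output_2016_tons) / peak_1970_tons * 100
print(f"Decline from the 1970 peak: {drop_pct:.0f}%")  # prints 83%, matching the text
```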
https://www.tt-solutions.nl/production-line/1998-02_27786.html
The koru is a spiral shape based on the appearance of a new unfurling silver fern frond. It is an integral symbol in Māori art, carving and tattooing, where it symbolises new life, growth, strength and peace. This ceramic sculpture ornament has been lovingly handmade in Kerikeri, New Zealand.
https://www.aratakiceramics.co.nz/product/modern-swirl-koru/
If you add physical activity to your life – even a bit at a time – you might be doing more than just losing weight. You’ll be reducing your chances of getting diabetes. People who aren’t very fit are two to three times more likely to develop diabetes, according to a new study published in Diabetes Care. The study is notable because it tracked almost 4,000 people for 20 years to find out the relationship between their fitness levels and whether they developed diabetes. In the end, people with lower levels of fitness were at significantly higher risk of developing diabetes over the 20 years. Patterns of physical activity are often established early and don’t change much. “Patients who have low fitness in their late teens and 20s tend to stay the same later in life or even get worse,” says Dr. Mercedes Carnethon of Northwestern University, who led the study. So how can you get started? It is ideal to exercise for 30 to 60 minutes a day. But Carnethon suggests that if that’s not part of your lifestyle, you can begin with small changes and build towards more exercise. “I encourage building up smaller periods of activity over the course of the day. Although 10-minute bouts of moderate to vigorous activity are ideal, there is evidence that small intense bouts of activity are also helpful. Those include taking the stairs instead of the elevator or escalator, getting off the bus one (or two) stops early and walking the remainder of the distance, and taking brief ‘activity breaks’ during long periods of sitting,” says Carnethon. She also suggests committing to “family walks” before or after mealtime and planning weekend activities such as going to parks and being physically active while there.
https://www.metro.us/get-fit-avoid-diabetes/
I will be interviewing for a School Psychologist position near Cranston, RI. The position is a full-time, school-year opening; the school serves students with mild to moderate needs requiring psychological services. You would be conducting evaluations and assessments as part of your standard duties. We have the ability to place the psychologist in an elementary, middle, or high school setting – please indicate your preference.

Job Requirements
• RIDE – Rhode Island Department of Education Certification
• Experience with evaluations, testing, and IEPs
• Effective communication with other staff, students, and parents

This school would ideally like to hire someone as a permanent staff school psychologist and needs someone who is passionate about the children and able to work in a team-oriented environment. For immediate consideration, please respond to John Kuzma by email with a resume to [email protected] or call 813-749-5192.
https://www.sunbeltstaffing.com/jobs/full-time-school-psychologist-job-openings-close-to-cranston-ri/hc/1092896/j/
The Sustainable Development Goals (SDGs) will not be achieved without significant public awareness and engagement. It is citizens who will hold governments accountable to the promises they made in 2015, and we need to find innovative ways of raising public pressure to deliver a more just and sustainable world by 2030. Only through such an “accountability revolution” will we have any chance of achieving the commitments made in the SDGs, and the lynchpin for that revolution is citizen participation.

Citizens and civil society have been actively involved in the SDGs since before they existed. They have raised awareness about the importance of the “post-2015” process through nongovernmental organisations (NGOs) and actively contributed to the drafting of the goals through the Open Working Group. As Secretary General of CIVICUS – a global civil society alliance actively involved in the post-2015 process – I have participated in countless United Nations meetings about the SDGs and their predecessor, the Millennium Development Goals (MDGs). Often at these meetings, I would make a half tongue-in-cheek remark that the problem with the MDGs was that no one ever lost their job for failing to meet an MDG target. This comment always made the officials in the room shift uneasily in their seats, especially when I would ask why, if we truly want the SDGs to be a success, we would not hold accountable those of us in governments, intergovernmental agencies, global business, or civil society organisations (CSOs) responsible for achieving them – even to the point that our jobs would depend on it.

If such an argument seems absurd, it is because we do not (yet) see the SDGs as having real political bite. They are not legally binding, their complexity and interconnectedness make apportioning blame (or credit) difficult, and they arise out of an intergovernmental system that is losing credibility among activists. When local and national leaders and institutions make promises, almost all societies, both democratic and undemocratic, have fairly sophisticated ways of holding them to account. If the SDGs are a set of global promises – made by our leaders and institutions – then it should follow that we have at least some ways of ensuring these leaders are held accountable.

However, I recognise that the accountability revolution will only be possible if civil society also adapts to the evolving development agenda. We will need to re-evaluate our strategies for cooperation and funding as well as for accountability and communication. We will also need to continue to defend and promote the civic space that citizens rely on to hold decision-makers to account. Despite their potential, the SDGs have arrived at a worrying time for civic freedoms. The civil society institutional landscape does not seem ready to channel citizen voice adequately toward the goals. The rising tide of populist politics across the globe poses a massive threat to the values that underpin the SDGs and internationalism more broadly. Meanwhile, there are not enough people thinking about innovative ways of promoting citizen participation in the SDGs or the global governance mechanisms that are their guardians.

The original post is on the website of Action for Sustainable Development and as a working paper.
https://watershed.nl/news/civicus-secretary-general-writes-toward-an-accountability-revolution/
President of Russia Vladimir Putin: Colleagues,

The election campaign is over, and the single election day in Russia saw elections in virtually all of the regions. More than 90,000 regional and local authority mandates were contested. Everyone who took part in this process did a great deal of work, and I want especially to thank the people who were in charge of preparing and holding the elections – representatives of political parties and public organisations, and the observers who monitored the voting.

I want to note that voter turnout increased: our citizens voted more actively, voicing their positions and expressing interest in the way their village, city or region will develop. I believe it is important that citizens support those political forces that back the country’s progressive development and the consolidation of its sovereignty, and support those candidates who are closer to the people – candidates whom the people see and know, and whose aims in competing for various posts they understand, first of all what they want to achieve for their voters.

The election campaign was open, honest and sometimes quite fierce. As regards the election of regional heads, in some regions the fight was for every vote, and in one region, as you know, a second round will have to be held to determine the winning candidate. I think that on the whole this is not bad, because it proves that the competition was open, honest and transparent. And the people there are worthy – I mean both candidates.

Despite the intensity of the competition, all the parties that participated in the elections have confirmed the legitimacy of the voting mechanism and of the results themselves, which once again shows that Russia is developing a mature multi-party system and that our political culture is growing. The standards set for holding truly open, competitive and legitimate election procedures should certainly be upheld in the upcoming 2016 State Duma elections as well.

This election campaign once again confirmed that the parliamentary parties play a key, system-forming role in our nation’s political life. Incidentally, their representatives were the main competitors in this gubernatorial election. Meanwhile, the non-parliamentary parties are drawing increasing attention; they participated actively in the last election at many different levels. I will also note that many strong candidates – your opponents in these elections – will likely be State Duma candidates next year, including in single-mandate constituencies in your regions. And it is good to see such prominent individuals who enjoy public esteem.

I just noted that both parliamentary and non-parliamentary parties participated actively in the elections. Incidentally, my colleagues and I were just saying that we will have twice as many political parties participating in the 2016 State Duma elections as last time: ten non-parliamentary parties have the right to participate directly in the elections, plus four parliamentary parties. We had only seven parties participating in the last State Duma elections, but this time we will have 14. I am confident that serious competition will help bring to the federal parliament those political forces that prove their ability to resolve the most pressing issues that concern people through their actions, not just words.

Colleagues, the support you received from citizens means exactly one thing: you must work with greater impact and with even more concerted effort. This is true of you and your teams.
We need to give constant attention to people, regardless of the political calendar, and we must do this without citing current difficulties as an excuse and without shying away from problems. And friends, what is most important: if you have come into power, if you have been elected, if the people have entrusted you with such high offices, then you must work honestly and with full dedication; otherwise, you should not have campaigned to be elected. It would seem that I am talking about basic things, but practice shows that not everyone, everywhere, always remembers these basic things. We should never forget them. And, of course, given the upcoming State Duma election, it is important to work to bring the public together, to unite all the constructive forces to resolve key challenges in our nation’s development. I expect that you all understand the level of responsibility and are ready to engage in serious practical work. Thank you for your attention.
http://en.kremlin.ru/events/president/news/50313/print
What are the symptoms of QuickBooks Error 392?

Asked on July 12, 2017 in No Category.

Hello Rafael,

The symptoms of Error 392:
- The active QuickBooks window crashes because of QuickBooks Error 392.
- The operating window runs slowly and responds sluggishly to commands entered through the mouse or keyboard.
- Frequent system freezes are also associated with Error 392.

The error can occur while you are starting up or shutting down your desktop. Another place this error commonly occurs is during installation or updating of your operating system. The point to keep in mind is that you should note when the error occurred and what you were doing on the system at that point.

Have a problem in QuickBooks? Contact the QuickBooks Error Support team.
https://askproadvisor.com/question/what-are-the-symptom-of-quickbooks-error-392/?sort=oldest
Submitted by NileshSharma • September 15, 2020 solaceinfotech.com

When it comes to selecting the best cross-platform mobile application development framework, many application owners and developers wonder why we picked Flutter over other mobile frameworks like React Native, AngularJS, or Xamarin. Why? Read on to learn the reasons Flutter is setting trends in mobile app development.
https://www.votetags.info/why-flutter-is-setting-the-trend-in-mobile-app-development/
There are the slow geomorphological changes associated with the geologic processes that cause tectonic uplift and subsidence, and the more rapid changes associated with earthquakes, landslides, storms, flooding, wildfires, coastal erosion, deforestation and changes in land use. The following sections cover each of these vegetation types in greater detail. Not only do they provide the drainage from the occasional rains, but they are also transportation routes for nutrients and wildlife from the interior to the sea. A two-syllable call is used to alert herd members to predators, while snorts indicate happiness. Liming and fertilization rates should be determined by soil testing. These areas, whether fallow fields, roadsides, honeysuckle patches, etc. As with most desert habitats, the majority of faunal life consists of insects, lizards, and small mammals. Tree species include the Caribbean pine (Pinus caribaea), calabash (Crescentia cujete), and oak (Quercus spp.). It is the only known food source of the larvae of the Karner blue. There are many varieties of field corn suitable for planting for deer. This is done by placing a line of ammonium nitrate about the diameter of a pencil along the entire corn row. A planting depth of one inch is recommended. There are ancient olive trees at the village of Tibneh near Irbid, some of which could be from Roman times. Wild lupine grows best in sandy soils where forest fires occasionally clear out old vegetation. Manatees give birth only every two to five years, and they have only one calf at a time. Corn plots should be soil tested for precise liming and fertilization rates. The Indian Forest Act helped to improve protection of the natural habitat. Unless deer density in the particular area is low to moderate, large plots may be necessary; if smaller plots are used, they may need to be protected with temporary electric fencing to allow soybean plants to become established. Adults range in color from green to brown to almost black, although they usually remain predominantly green as they mature. Corn: Corn is planted widely for white-tailed deer, and whole-kernel corn is an excellent energy source for deer from early fall through winter. The actual pattern of vegetation is, of course, much more complex than is implied by the map.

Forestry & Wildlife Sciences Extension: Our objective is to provide Alabamians with the information needed to make natural resource related decisions – from building bluebird boxes, to finding a professional logger, to water quality indicators.

Our Mission: Our mission is to build a future in which people live in harmony with nature. From our experience as the world’s leading independent conservation body, we know that the well-being of people, wildlife and the environment are closely linked.

The official home page of the Iowa Department of Natural Resources (DNR): our mission is to conserve and enhance our natural resources in cooperation with individuals and organizations to improve the quality of life for ...

Official site of the West Virginia Division of Natural Resources, WV State Parks, and WV hunting and fishing licenses.

The Human Touch: Humans are now responsible for causing changes in the environment that hurt animal and plant species.
http://gakemuxejypyje.motorcarsintinc.com/wildlife-and-natural-vegetation-268loh.html
Thank you to everyone who joined us for our recent webinar: Crisis Communications – Planning, Response and Barriers to Success. In David Kalson’s thought-provoking presentation, he discussed the key elements of creating an effective crisis communications strategy and shared his top 10 list of best practices:

1. Empathize (Be human)
2. Train and rehearse spokespersons
3. Know and use the principles of Risk Communication
4. Use 3rd-party trusted experts
5. Focus on employees, your most important stakeholder group
6. Integrate social media. Have a social media policy
7. Tell your story first, fast and honestly
8. Build a strong brand
9. Identify specific most likely/highest impact crisis scenarios and develop scenario-specific plans
10. Have a Crisis Communications Plan that’s integrated with your operational Crisis Plan

If you missed the webinar, you can view the recording here.
https://www.crisisconferences.com/top-10-best-practices-crisis-communications/
This book analyses the US drone attacks against terrorists in Pakistan to assess whether the ‘pre-emptive’ use of combat drones to kill terrorists is ever legally justified. Exploring the doctrinal discourse of pre-emption vis-à-vis the US drone attacks against terrorists in Pakistan, the book shows that the debate surrounding this discourse encapsulates crucial tensions between the permission and limits of the right of self-defence. Drawing from the long history of God-given and man-made laws of war, this book employs positivism as a legal frame to explore and explain the doctrine of pre-emption and analyses the doctrine of the state’s right to self-defence as it stretches into pre-emptive or preventive use of force. The book investigates why the US chose the recourse to pre-emption through the use of combat drones in the ‘war on terror’ and whether there is a potential future for the pre-emption of terrorism through combat drones. The author argues that the policy to ‘kill first’ is easy to adopt; however, any disregard for the web of legal requirements surrounding the policy has the potential to undercut the legal claims of an armed act. The book enables the framing and analysis of such controversies in legal terms as opposed to a choice between law and policy. An examination of the legal dilemma concerning drone warfare, this book will be of interest to academics in the fields of international relations, Asian politics, South Asian studies, and security studies, in particular, global security law, new wars, and emerging technologies of warfare.
https://books.apple.com/us/book/terrorism-and-the-us-drone-attacks-in-pakistan/id1568143839
It's a summer course like no other at UC Santa Barbara or anywhere else. Spread between the lecture halls of the Kavli Institute for Theoretical Physics (KITP) and the labs in the California NanoSystems Institute (CNSI) on campus, the new Santa Barbara Advanced School of Quantitative Biology is abuzz with activity. Here, participants –– graduate students, postdoctoral fellows, and even science faculty members from around the world –– rub shoulders with leading experts in the field and shed new light –– literally –– on the dynamics of morphogenesis. Morphogenesis is the process that converts the genetic blueprint of a multicellular organism into complex physical structure. "This program is really unique in that it's incredibly interdisciplinary," said Michelle Dickinson, a senior lecturer (assistant professor) at the University of Auckland and a student in the course. "It's physics combined with biology, and technically I'm an engineer so it combines engineering, too. It's a great place to meet world leaders and experts, and live and eat and breathe the science that we're trying to solve." The course, "New Approaches to Morphogenesis: Live Imaging and Quantitative Modeling," presented by KITP and CNSI, brings together close to a hundred international scientists from various fields to collaborate on the problem of animal development. "Quantitative biology is really a major construction project in science and so is a very natural focus for an interdisciplinary school," said Boris Shraiman, Susan F. Gurley Professor of Theoretical Physics and Biology, permanent member of KITP, and a founding co-director of the course. "Our special take on quantitative biology is to bring theoretical modeling together with experiments and integrate it and make it part and parcel of the biological method. This is where the KITP program comes together with the live microscopy lab at CNSI." "We put together eight core ideas for research projects, but it's really the students that bring this to life," said Joel Rothman, Wilcox Professor of Biotechnology in UCSB's Department of Molecular, Cellular and Developmental Biology (MCDB) and the other founding co-director of the course. "The students are driving the research in ways that we wouldn't have even come up with," he added. "They're bringing a lot of fresh ideas, so the synergy that's created by bringing scientists of this broad expertise together creates whole new ventures that wouldn't have been created in a typical course environment." Years in the making, the course is designed to advance late Scottish biologist and mathematician D'Arcy Thompson's agenda of quantitative description outlined in his 1917 book, On Growth and Form. But almost a century later, these scientists are using the full power of modern imaging and molecular genetics, which makes the field ready for rapid progress. "We designed it in such a way that these experiments would be open-ended," noted Shraiman. "We hope new collaborations will form and the work started here will continue." The experimental work, complemented by theoretical and computational modeling and analysis, involves a collaboration between course students and participants of the concurrent KITP workshop New Quantitative Approaches to Morphogenesis. Together with other researchers, students attend morning lectures-cum-discussion sessions with such luminaries as Eric F. Wieschaus, Squibb Professor in Molecular Biology at Princeton University and winner of the 1995 Nobel Prize in Physiology or Medicine. 
"Scientific-wise, the discussions are all very interesting," said Romain Levayer, a postdoctoral fellow from the University of Bern in Switzerland. "It's more dynamic than I would have expected. We have a lot of discussions during the breaks so there is interaction with many different people." A unique synergy comes from such interdisciplinary conversations. "The science is much greater than it would be if it were just focused on biologists or just focused on physicists," said Rothman. "And that provides a tremendous opportunity for the students. It also means there will be new research opportunities and discoveries that come out of this." On the technical level, the course introduces several model organisms, including fruit flies, roundworms, and sea squirts, and provides instruction on live imaging, micro-manipulation, and genetic and chemical perturbations as quantitative tools to study developmental dynamics. "We were able to draw on the research strength of our faculty colleagues on campus, several of whom –– MCDB professors Denise Montell and Bill Smith and mechanical engineering professor Otger Campas –– are engaged in running experimental projects," said Shraiman. The morphogenesis course also got help from outside UCSB. Thomas Lecuit, a group leader at the Developmental Biology Institute of Marseilles and Ewa Paluch of University College London (UCL) are co-directing the course. In addition, Lars Hufnagel and Pierre Neveu of the European Molecular Biology Lab, Suzanne Eaton of the Max Planck Institute of Molecular Cell Biology and Genetics, and Andrew Oates of UCL lead experimental projects. Hufnagel's project has students build a single-plane illumination microscope (SPIM) and then use it for imaging rapid developmental processes taking place in fruit flies and sea squirts. Another project, led by Thomas Gregor, an assistant professor of physics at Princeton University, examines with near molecular precision the spatio-temporal dynamics of gene expression in the fly embryo. Other projects focus on spatial arrangement and rearrangement of cells in developing tissues and on temporal fluctuations and oscillations that control cell differentiation and tissue patterning. "What's clear is they've got the right people, people with expertise from all over," said Seth Donoughe, a Harvard graduate student who is working on the SPIM project. "My fellow students have tons of experience, lots of expertise in different areas. It's a good place to learn from workshop members as well. In terms of getting exposure to scientific stuff that people are doing and thinking, it couldn't be better." "If I had a preconceived notion coming here, it would have been learning a bit about biology and try to understand something that is outside my field," Dickinson said. "What I've actually done is gain collaborators, learned about physics and things that I didn't even think were involved in the biological processes, and I've developed new collaborations worldwide and especially here at UCSB and in Santa Barbara that will probably go on for decades to come." The Santa Barbara Advanced School of Quantitative Biology is supported by grants from the Burroughs Wellcome Fund and the Gordon and Betty Moore Foundation, as well as by the loan of state-of-the-art imaging systems from Andor, Coherent, Leica, Nikon, Olympus, and Zeiss. The course continues until August 24.
https://kavlifoundation.org/news/new-approaches-quantifying-how-animals-acquire-shape-and-form
"Two steps forward, one back". 2010 meeting of the Congenital Cardiac Anesthesia Society (CCAS) and the Society for Pediatric Anesthesia (SPA). A comparison of the effects of halothane, isoflurane, and pentobarbital anesthesia on intracranial pressure and cerebral edema formation following brain injury in rabbits. A model for biliary and vascular access in the unanesthetized, unrestrained rat. Acute and Long-Term Effects of Brief Sevoflurane Anesthesia During the Early Postnatal Period in Rats. Alzheimer's disease, anesthesia, and surgery: a clinically focused review. American College of Cardiology/Society for Cardiac Angiography and Interventions Clinical Expert Consensus Document on cardiac catheterization laboratory standards. A report of the American College of Cardiology Task Force on Clinical Expert Consensus Documents. Anaesthetic management of the brain dead for organ donation. Analysis of potential shifts associated with recurrent spreading depression and prolonged unstable spreading depression induced by microdialysis of elevated K+ in hippocampus of anesthetized rats. Anesthesia Care Handovers and Risk of Adverse Outcomes. Anesthesia and Sedation Practices Among Neurointerventionalists during Acute Ischemic Stroke Endovascular Therapy. Anesthesia for a child with reflex anoxic seizures. Anesthesia for awake craniotomy: a how-to guide for the occasional practitioner. Anesthesia for catheter ablation procedures. Anesthesia in Experimental Stroke Research. Anesthesia-based pain services improve the quality of postoperative pain management. Anesthetic Evolution in Transcatheter Aortic Valve Replacement: Expert Perspectives From High-Volume Academic Centers in Europe and the United States. Anesthetizing the phantom: peripheral nerve stimulation of a nonexistent extremity. Animal models for protecting ischemic myocardium: results of the NHLBI Cooperative Study. Comparison of unconscious and conscious dog models. Bronchopulmonary inflammation and airway smooth muscle hyperresponsiveness induced by nitrogen dioxide in guinea pigs. Capnography monitoring the hypoventilation during the induction of bronchoscopic sedation: A randomized controlled trial. Cardiac anesthetic: is it unique? Cardiopulmonary bypass induces neurologic and neurocognitive dysfunction in the rat. Cerebrospinal fluid rhinorrhea after thermometer insertion through the nose. Changes in S1 neural responses during tactile discrimination learning. Children with sick hearts: towards anesthesia safety. Closed-loop systems in anesthesia: reality or fantasy? Defining Value-Based Care in Cardiac and Vascular Anesthesiology: The Past, Present, and Future of Perioperative Cardiovascular Care. Determinants of seizure threshold in ECT: benzodiazepine use, anesthetic dosage, and other factors. Differences in Seizure Expression Between Magnetic Seizure Therapy and Electroconvulsive Shock. Differential cerebral gene expression during cardiopulmonary bypass in the rat: evidence for apoptosis? Duration and recovery profile of cisatracurium after succinylcholine during propofol or isoflurane anesthesia. Effect of neonatal diethylstilbestrol exposure on luteinizing hormone secretion following ketamine anesthesia and gonadotropin-releasing hormone in castrated postpubertal rats. Endovascular approaches to complex thoracic aortic disease. Enhanced Recovery After Surgery (ERAS) for gastrointestinal surgery, part 2: consensus statement for anaesthesia practice. 
Epinephrine induces rapid deterioration in pulmonary oxygen exchange in intact, anesthetized rats: a flow and pulmonary capillary pressure-dependent phenomenon. Evaluation of the eZono 4000 with eZGuide for ultrasound-guided procedures. Evidence-based anesthesia for major gynecologic surgery. Extracorporeal membrane oxygenation of the future: smaller, simpler, and mobile. Facial nerve palsy: a complication following anaesthesia in a child with Treacher Collins syndrome. False increase BIS values with forced-air head warming. Focused anesthesia interview resource to improve efficiency and quality. High-dose fentanyl does not adversely affect outcome from forebrain ischemia in the rat. High-resolution measurement of electrically-evoked vagus nerve activity in the anesthetized dog. How much are patients willing to pay to avoid intraoperative awareness? How to manage drug interactions. Hypertension in CB57BL/6J mouse model of non-insulin-dependent diabetes mellitus. Hypertension: a new look at an old problem. Impact of restarting home neuropsychiatric medications on sedation outcomes in medical intensive care unit patients. In vitro effects of lidocaine on anaerobic respiratory pathogens and strains of Hemophilus influenzae. Influence of splanchnic intravascular volume changes on cardiac output during muscarinic receptor stimulation in the anaesthetized dog. Interactions between NMDA and AMPA glutamate receptor antagonists during halothane anesthesia in the rat. Ketamine activates breathing and abolishes the coupling between loss of consciousness and upper airway dilator muscle dysfunction. Limb tourniquets and central temperature in anesthetized children. Low interscalene block provides reliable anesthesia for surgery at or about the elbow. Management of postdischarge nausea and vomiting after ambulatory surgery. Maternal and preterm fetal sheep responses to dexmedetomidine. Myocardial depression by anesthetic agents (halothane, enflurane and nitrous oxide): quantitation based on end-systolic pressure-dimension relations. Neurocognitive dysfunction following cardiac surgery. Neuroleptic malignant syndrome postoperative onset due to levodopa withdrawal. Neuromuscular Disease in the Neurointensive Care Unit. Neurophysiological characterization of high-dose magnetic seizure therapy: comparisons with electroconvulsive shock and cognitive outcomes. New medications and techniques in ambulatory anesthesia. Paediatric preoperative teaching: effects at induction and postoperatively. Perioperative management of aneurysmal subarachnoid hemorrhage: Part 1. Operative management. Perioperative renal dysfunction and cardiovascular anesthesia: concerns and controversies. Pharmacokinetics of remifentanil in anesthetized pediatric patients undergoing elective surgery or diagnostic procedures. Phosphodiesterase-5 inhibitors oppose hyperoxic vasoconstriction and accelerate seizure development in rats exposed to hyperbaric oxygen. Postoperative nausea and vomiting. A retrospective analysis in patients undergoing elective craniotomy. Pressure support mode improves ventilation in "asleep-awake-asleep" craniotomy. Prevalence and Predictors of Adverse Events during Procedural Sedation Anesthesia-Outside the Operating Room for Esophagogastroduodenoscopy and Colonoscopy in Children: Age Is an Independent Predictor of Outcomes. Reexamining traditional intraoperative fluid administration: evolving views in the age of goal-directed therapy. 
Response to Letter to the Editor on "Risk Factors, Outcomes, and Timing of Manipulation Under Anesthesia After Total Knee Arthroplasty". Richard Wolf piezoelectric lithotripters: Piezolith 2300 and 2500. Risk Factors, Outcomes, and Timing of Manipulation Under Anesthesia After Total Knee Arthroplasty. Spinal or general anesthesia for inguinal hernia repair? A comparison of certain complications in a controlled series. Serum enzyme alterations after extensive surgical procedures. Simultaneous GCaMP6-based fiber photometry and fMRI in rats. Society for Ambulatory Anesthesia guidelines for the management of postoperative nausea and vomiting. Society for Pediatric Anesthesia/American Academy of Pediatrics/Congenital Cardiac Anesthesia Society: winter meeting review. Spatiotemporal structure of somatosensory responses of many-neuron ensembles in the rat ventral posterior medial nucleus of the thalamus. Steady-state gas exchange in normothermic, anesthetized, liquid-ventilated dogs. Stimulus onset quenches neural variability: a widespread cortical phenomenon. Surgery upregulates high mobility group box-1 and disrupts the blood-brain barrier causing cognitive dysfunction in aged rats. Sustained potential shifts and paroxysmal discharges in hippocampal formation. Techniques of experimental animal radiotherapy. The Perioperative Use of Dexmedetomidine in Pediatric Patients with Congenital Heart Disease: An Analysis from the Congenital Cardiac Anesthesia Society-Society of Thoracic Surgeons Congenital Heart Disease Database. The Year in Cardiothoracic Critical Care: Selected Highlights From 2017. The effects of ethanol on pancreatic blood flow in awake and anesthetized dogs. The importance of intraoperative transesophageal echocardiography in endovascular repair of thoracic aortic aneurysms. The management of resistant depression. The new age of medical genomics. The puzzling aspects of anesthesia and autism spectrum disorder. The role of cerebral metabolism in determining the local cerebral blood flow effects of volatile anesthetics: evidence for persistent flow-metabolism coupling. The role of corticosteroids in Duchenne muscular dystrophy: a review for the anesthetist. The systemic inflammatory response to cardiopulmonary bypass: pathophysiological, therapeutic, and pharmacological considerations. The year in Cardiothoracic and Vascular Anesthesia: selected highlights from 2014. The year in cardiothoracic and vascular anesthesia: selected highlights from 2012. Transthoracic echocardiography in models of cardiac disease in the mouse. Ultrasound in anesthesia: applying scientific principles to clinical practice. Updates in Enhanced Recovery Pathways for Total Knee Arthroplasty. Use of the bispectral index to monitor anaesthesia. When Interpolation-Induced Reflection Artifact Meets Time-Frequency Analysis. [Nphe1]nociceptin(1-13)-NH2 antagonizes nociceptin-induced hypotension, bradycardia, and hindquarters vasodilation in the anesthetized rat.
https://scholars.duke.edu/display/meshD000758
DEAR READERS: So many of you had ideas for the student who was undecided when it came to determining a major in college and, in turn, a career path that I am printing a few of your letters here. Thank you for such great input. DEAR HARRIETTE: I thought your answer to "Undeclared" was good, but it would have been helpful to include financial points as well. What do the various career paths pay? What will the required education cost? Will they have to incur debt? Will they be able to pay the debt back and still support themselves on the pay they earn in each option? How many jobs are available in the fields they are considering? These answers will prepare the student to make realistic choices that they are less likely to regret in 10 years. I would also encourage the student to consider a field with multiple career options rather than one with limited options. -- Long-Term View DEAR HARRIETTE: I have three kids who went through college. My advice to kids who are not sure of a major is to go the community college route and take as many courses as you can in all the disciplines you are interested in and see which is the one you really like and could make into a career. Community college is cheaper and will let you take any course you want. If the student can maintain decent grades, they can transfer to a four-year college later. -- Community College DEAR HARRIETTE: Please allow me to expand on your advice to a student seeking a college major. Most universities offer counseling or testing centers that help with this common problem, usually at no charge for enrolled students. Typically, a student completes an online inventory that helps to identify his or her interests; a computer program then lists areas in which graduates with similar interests have been successful. Pursuing these options in more detail often helps a student select an appropriate major. Alternatively, if the student does not wish to complete the interest inventory, or after narrowing to a few tentatively attractive areas of interest, the student can consult a wonderful reference called the "Dictionary of Occupational Titles" -- a comprehensive reference published by the U.S. Department of Labor that provides details on a wide variety of occupations. It includes the education and training required, job locations, typical duties, salary range, etc. -- Retired Professor DEAR ALL: I appreciate your thoughtful input and see that we all are interested in supporting our youth as they work to figure out their futures.
https://thetandd.com/readers-weigh-in/article_8340671b-551c-5412-b1fe-5cec0aedc2c2.html
presented by Robert Donatelli. Lumbopelvic control and power in the hips and lower legs are essential for improving performance and reducing injuries in athletes. This course will review the anatomy and mechanics of the lumbopelvic region and hips that pertain to balance, stability, and power. An overview of dynamic visual acuity and perturbation training, with emphasis on the athlete, will be presented. Several different types of lower extremity dysfunctions, including hip labral tears, will be discussed. Neuromuscular training and eccentric strengthening will be demonstrated in lab and developed into rehabilitation protocols. Evidence-based evaluation tools will be developed into an extensive assessment of muscle imbalance within the lumbopelvic region and hips. Finally, implementation of rehabilitation programs that are individualized according to evaluative findings will be reviewed and demonstrated in presented case studies, utilizing the research-based findings in the course. From March 2004 to August 2004, Dr. Donatelli worked as a physical therapist on the PGA Tour, helping treat injuries sustained by the golf professionals. In addition, he has served as a physical therapy consultant to the Montreal Expos, Philadelphia Phillies and Milwaukee Brewers baseball teams. Donatelli has lectured throughout the United States, Canada, England, Ireland, Scotland, and Australia, and at the Swedish Foot and Ankle Society. Dr. Donatelli is now running his own private practice, Las Vegas Physical Therapy & Sports, in Las Vegas, NV.
Learning objectives:
Define the anatomy of the core as it relates to lower limb injuries in the athlete.
Outline 6 complex movement patterns for the lumbar-pelvic-hip complex.
Define stability exercises and relate their benefit to returning the injured athlete to sport.
Explain the importance of the subtalar joint as a torque converter during rotation.
Define muscle dysfunction and its role in transmitting and shifting load to the discs and ligaments.
Explain the importance of stabilization to the spine.
Outline the 3 components of the stabilization systems.
Explain 'waste basket' terms associated with sports hernias.
Identify common deformities of the femoral neck.
Identify common impingements of the femoral head-neck.
Explain the role of hip flexors and their importance during gait.
Define the 3 movement patterns in triplanar hip movement.
Explain the function and importance of the gluteus maximus during gait.
Identify the importance of the adductors of the hip and why they should be strengthened.
Explain the importance of strengthening the posterior fibers of the gluteus medius in returning an athlete to sport.
Explain how external rotator strength is important to knee mechanics, especially in running activities.
https://beta.medbridgeeducation.com/courses/details/returning-the-injured-athlete-to-sports-lower-limb-trunk-and-hip
Normally present only in trace amounts in the Earth's atmosphere, CO2 has increased considerably since the end of the 19th century. Despite the various actions taken by governments, the continued use of fossil fuels and global warming only worsen the phenomenon. Last month, atmospheric CO2 reached its highest level in several hundred thousand years. Despite efforts to reduce greenhouse gas emissions, the atmospheric CO2 concentration reached a new record in April: 410 ppm by volume, its highest level in 800,000 years. The data collected at the Mauna Loa atmospheric observatory (Hawaii) and plotted on the Keeling curve (the graph of the evolution of terrestrial atmospheric CO2) showed a monthly average of 410.31 ppm. "We continue to burn fossil fuels," explains geochemist Ralph Keeling, son of Charles Keeling (originator of the Keeling curve) and director of the Scripps CO2 Program, which is responsible for carrying out these measurements. "And as long as we do, carbon dioxide will continue to accumulate in the air. It's as simple as that." Yet this is not the first warning sign: in April of last year, the carbon dioxide level passed 410.05 ppm for the first time. Although this year's increase seems small, it is symptomatic of the weakness of governments' efforts around the world. The Scripps CO2 Program has been measuring atmospheric carbon dioxide at the Mauna Loa observatory since 1958, and the corresponding Keeling curve shows a gradual and disturbing increase. In the 1950s, Charles Keeling's first measurements indicated a level of 310 ppm. (Since ppm here is a fraction by volume, 410 ppm means that out of every million molecules of air, about 410 are CO2, not that a million kilograms of air contain 410 kg of CO2.) To go back further in the history of the Earth, geochemists study the bubbles of gas trapped in ice. Such ice cores provide information on the chemical composition of the atmosphere thousands, even hundreds of thousands, of years ago. The last eight ice ages span 800,000 years, and according to the scientists' analysis, at no point in that record was the CO2 level as high as it is now. Comparisons with the level measured in April can be pushed even further back in time: last year, a report by the World Meteorological Organization (WMO) suggested that CO2 concentrations are at their highest in at least 3 million years. This allows researchers to show explicitly that the abnormal levels of the past century are due to human activities and not to a natural terrestrial cycle. As Scripps explains, about 4.5 billion years ago, before the advent of life and photosynthetic activity, the concentration of CO2 was roughly 100,000 times higher than today. With photosynthesis and occasional episodes like the "Azolla event", the CO2 level fell steadily. Chemical dating shows that the recent rise coincides with the emergence of human industrial activity, and carbon dioxide emissions follow the same trend. The use of fossil fuels, especially with the massive reopening of coal plants, accounts for the majority of measured emissions. The more CO2 accumulates in the atmosphere, the more outgoing heat it traps. This phenomenon accentuates global warming, leading to the release of additional CO2 from underground sources, sea ice and permafrost, setting up a self-perpetuating vicious cycle of carbon dioxide emissions. Currently, many projects are being developed to trap or recycle atmospheric CO2. However, these technological innovations will take time and every available effort to develop.
The Scripps calls on governments and citizens to take drastic measures, each at their own level, to minimize CO2 emissions.
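One detail worth making explicit: ppm figures such as 410.31 are mole (volume) fractions, so converting them to masses requires molar masses. A back-of-the-envelope sketch in Python, where the molar masses and the 5.15e18 kg total atmospheric mass are standard reference values rather than numbers from the article:

# Convert the 410 ppm (by volume) figure to a mass fraction and to a
# total mass of atmospheric CO2.
PPM = 410.31               # April monthly mean at Mauna Loa
M_CO2 = 44.01              # g/mol, molar mass of CO2
M_AIR = 28.96              # g/mol, mean molar mass of dry air
ATMOSPHERE_KG = 5.15e18    # approximate total mass of the atmosphere

mole_fraction = PPM * 1e-6
mass_fraction = mole_fraction * M_CO2 / M_AIR
co2_mass_kg = mass_fraction * ATMOSPHERE_KG

print(f"mass fraction: {mass_fraction * 1e6:.0f} ppm by mass")  # ~624
print(f"total CO2: {co2_mass_kg / 1e12:.0f} Gt")                # ~3200

The mass fraction (about 624 ppm) is roughly 1.5 times the volume fraction, because a CO2 molecule is about 1.5 times heavier than an average air molecule.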
https://www.thetalkingdemocrat.com/2018/05/atmospheric-co2-reaches-its-highest-rate-for-at-least-800000-years/
Speaker(s): Jerry P. Brodsky. Practice Area(s): International Construction and Infrastructure. Jerry P. Brodsky, Partner and Director of Peckar & Abramson's Latin American Practice, will participate as a panelist for the webinar "COVID-19 Leadership Series: International Perspectives on COVID-19," hosted by the American Bar Association's Forum on Construction Law. Taking place on Tuesday, June 30 at 4 p.m. EDT, the panel will discuss how the pandemic and related economic disruptions, travel limitations, and border closures have affected international design and construction and cross-border relationships among industry stakeholders. The panelists will also offer perspectives on the current situation and insights about what lies ahead for current and future projects, as well as international dispute resolution. ABA members can click here to register.
https://www.pecklaw.com/event/american-bar-association-covid-19-leadership-series-international-perspectives-on-covid-19/
The major goal of this project is to discover molecular mechanisms that regulate the connectivity of the visual cortex during development. Surprisingly little is known about the molecular substrates that underlie the formation of the highly specific sets of connections which both characterize and underlie the normal functioning of the visual system. This knowledge is critical not only for understanding visual system development, but also for enabling us to genetically target specific populations of neurons to determine their function in visual processing or to deliver therapies following disease or injury to the nervous system. (1) We shall identify molecules that are important in regulating the initial sets of input and output connections of primary visual cortex. To do this, we will utilize high-density DNA microarrays to discover genes that are expressed in visual cortex at very early stages in development in the mouse. It is likely that the genes that regulate the initial connections will be differentially expressed between cortical areas at these times. We will prepare biotinylated cRNA samples from visual, somatosensory and auditory cortex from embryonic and neonatal mice. Data will be analyzed to find genes that are consistently enriched in visual cortex compared to other sensory neocortical regions. (2) We shall examine the spatial and temporal expression of promising candidate genes using in situ hybridization. We will determine which populations of neurons express the candidate genes by combining in situ hybridization with anatomical tracing experiments and immunohistochemistry. (3) We shall identify molecules that regulate the activity-dependent refinement of connections which occurs during the critical period. We will therefore prepare samples from the visual cortex of mice in the mid-critical period for ocular dominance plasticity and from mature mice. We will analyze the data to find genes that show up- or down-regulation during the critical period compared to expression levels seen in neonates and mature animals. (4) We shall assess whether promising candidate genes identified in our third aim are regulated by visually driven activity, by performing in situ hybridization on tissue from visually deprived animals and comparing the signal strengths. In the longer term, the function of genes which show a spatial and/or temporal distribution pattern suggesting a role in the regulation of connectivity will be assessed using in vitro co-culture techniques and anatomical and physiological analysis of transgenic mouse models.
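The abstract describes the microarray analysis only at a high level. As a loose illustration of the enrichment screen in aim (1), a sketch along the following lines could flag candidate genes; the file name, column naming scheme, and thresholds are all hypothetical, not taken from the project.

# Flag genes consistently enriched in visual cortex relative to other
# sensory areas, given log2-scale expression values with rows = genes
# and columns = samples such as "visual_1", "somatosensory_1", ...
import pandas as pd
from scipy import stats

expression = pd.read_csv("cortex_expression.csv", index_col="gene")
visual = expression.filter(like="visual")
other = expression.drop(columns=visual.columns)

log2_fold = visual.mean(axis=1) - other.mean(axis=1)
t_stat, p_val = stats.ttest_ind(visual, other, axis=1)

# A real analysis would also correct for multiple testing
# (e.g. Benjamini-Hochberg) before calling candidates.
enriched = expression.index[(log2_fold > 1.0) & (p_val < 0.01)]
print(f"{len(enriched)} candidate visual-cortex-enriched genes")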
The Department of Surgery at Banner University Medical Center Tucson is actively seeking a Complex GI Surgical Oncologist (with HIPEC) at the Assistant or Associate Professor level to provide a full range of oncology surgical services at its Tucson campus locations. The Division of Surgical Oncology is part of Arizona's only NCI-designated Comprehensive Cancer Center. The division takes pride in being the leading academic center in the Southwest that is solely dedicated to the diagnosis and treatment of complex GI oncologic diseases (including complex, advanced, and recurrent diseases). We offer comprehensive and multi-disciplinary surgical and medical care for the richly diverse patient population of Southern Arizona, including the underserved areas. Our surgeons strive to provide the best evidence-based, compassionate patient care and resident and medical student education, together with extensive community and academic endeavors. This is an exceptional opportunity to take part in building and shaping the future of an academic complex GI oncology service (including hepatobiliary, esophageal, and gastric cancers, HIPEC, and some colorectal) with the support of a highly acclaimed medical center and academic program that has continued to grow significantly in the last few years. The qualified candidate will receive an academic appointment consistent with their credentials.
Position Details:
· Clinical responsibilities: OP clinic, IP rounding, OR cases, shared call coverage
· Academic research time provided DOE
· Collaborating with the oncology multidisciplinary team
· Teaching and mentoring of students, residents, and fellows
· 1.0 FTE
Minimum Qualifications:
· ABS BC/BE
· Fellowship-trained in Complex General Surgical Oncology with HIPEC
· AHPBA certification
· Desire to work in an academic setting
· Position open to experienced surgeons as well as new fellowship graduates with HIPEC experience
About Banner Health: You want to help lead change in the health care field – rather than just react to it. You want to spend your time doing what you do best – caring for patients. You belong at Banner Medical Group (BMG) and Banner – University Medicine Group (BUMG). As Banner Health's employed physician group with more than 1,300 physicians and advanced practitioners across more than 65 specialties, BMG is transforming the delivery of care. This transformation can most clearly be seen in our Patient-Centered Medical Home (PCMH) implementation. Through PCMH, we're organizing care around patients, working in teams, and coordinating and tracking care over time. The end result is the highest-quality and most efficient delivery of patient care. For physicians working in their own practices, we have two different paths to lead you to a colorful career with Banner Health. We also offer faculty positions at Banner – University Medical Center in partnership with the University of Arizona.
https://www.doximity.com/careers/job_cards/6dcec054-29ec-4fac-bb28-5dbe84998732?_csrf_attempted=yes&career_specialty_name=oral%2Band%2Bmaxillofacial%2Bsurgery&credentials=md%252fdo&employment_type=full-time
Companies operate in an increasingly complex world: Business environments are more diverse, dynamic, and interconnected than ever—and far less predictable. Yet many firms still pursue classic approaches to strategy that were designed for more-stable times, emphasizing analysis and planning focused on maximizing short-term performance rather than long-term robustness. How are they faring? To answer that question, we investigated the longevity of more than 30,000 public firms in the United States over a 50-year span. The results are stark: Businesses are disappearing faster than ever before. Public companies have a one-in-three chance of being delisted in the next five years, whether because of bankruptcy, liquidation, M&A, or other causes. That's six times the delisting rate of companies 40 years ago. Although we may perceive corporations as enduring institutions, they now die, on average, at a younger age than their employees. And the rise in mortality applies regardless of size, age, or sector. Neither scale nor experience guards against an early demise. We believe that companies are dying younger because they are failing to adapt to the growing complexity of their environment. Many misread the environment, select the wrong approach to strategy, or fail to support a viable approach with the right behaviors and capabilities. How, then, can companies flourish and persist? Our research at the intersection of business strategy, biology, and complex systems focuses on what makes such systems—from tropical forests to stock markets to companies themselves—robust. Some business thinkers have argued that companies are like biological species and have tried to extract business lessons from biology, with uneven success. We stress that companies are identical to biological species in an important respect: Both are what's known as complex adaptive systems. Therefore, the principles that confer robustness in these systems, whether natural or manmade, are directly applicable to business. To understand how, let's look at what these systems are and how they function. In a complex adaptive system, local events and interactions among the "agents," whether ants, trees, or people, can cascade and reshape the entire system—a property called emergence. The system's new structure then influences the individual agents, resulting in further changes to the overall system. Thus the system continually evolves in hard-to-predict ways through a cycle of local interactions, emergence, and feedback. In nature we see this play out when ants of some species, for example, although individually following simple behavioral rules, collectively create "supercolonies" of several hundred million ants covering more than a square kilometer of territory. In business we see workers and management, through their local actions and interactions, shape the overall structure, behavior, and performance of a firm. In both spheres these emergent outcomes influence individuals and create new contexts for their interactions. Whether we look at team dynamics, the evolution of strategies, or the behavior of markets, the pattern of local interactions, emergence, and feedback is apparent. Complex adaptive systems are often nested in broader systems. A population is a CAS nested in a natural ecosystem, which itself is nested in the broader biological environment. A company is a CAS nested in a business ecosystem, which is nested in the broad societal environment.
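The local-interaction, emergence, and feedback cycle lends itself to a toy simulation. The Python sketch below is purely illustrative (it is not a model from the research, and every parameter is arbitrary): agents mostly copy a randomly chosen peer, an aggregate adoption level emerges, and that aggregate feeds back into individual choices.

# A toy complex adaptive system: 200 agents holding a binary "practice".
import random

N_AGENTS = 200
STEPS = 30
random.seed(1)
agents = [random.randint(0, 1) for _ in range(N_AGENTS)]

for step in range(STEPS):
    adoption = sum(agents) / N_AGENTS  # emergent, system-level state
    for i in range(N_AGENTS):
        if random.random() < 0.8:
            # Local interaction: imitate a randomly chosen peer.
            agents[i] = agents[random.randrange(N_AGENTS)]
        else:
            # Feedback: the emergent adoption level nudges individuals.
            agents[i] = 1 if random.random() < adoption else 0
    print(f"step {step:2d}: adoption = {adoption:.2f}")

Runs of this kind typically drift toward an all-or-nothing consensus, a system-level outcome that no individual rule dictates, which is the sense in which emergence makes such systems hard to predict and control.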
Complexity therefore exists at multiple levels, not just within organizational boundaries; and at each level there is tension between what is good for an individual agent and what is good for the larger system. What does this mean for business leaders? First, they need to be realistic about what they can predict and control, what they can shape collaboratively, and what is beyond the reach of managerial influence. In particular, they need to expect that unpredictable and even extreme emergent outcomes will cascade from actions at the lower levels. A clear example is the financial crisis of 2007–2008, during which risk created by subprime lending in the US real estate market spread catastrophically throughout the global financial system. Second, they need to look beyond what their firms own or control, monitoring and addressing complexity outside their firms. CEOs must ensure that their companies contribute positively to the system while receiving benefits sufficient to justify participation. Companies that fail to create value for key stakeholders in the broader system will eventually be marginalized (similarly, business ecosystems that do not provide benefits to their members will experience defections). Consider Sony, which brought out its first e-reader three years before Amazon’s but lost decisively to the Kindle and withdrew from the market in 2014. Because it failed to provide a compelling value proposition that would mobilize key components of the publishing ecosystem—authors and publishers—it could offer only 800 titles when its e-reader launched. In contrast, Amazon initially sacrificed profits, selling e-books for less than what it paid to publishers. It also invested in digital rights management to spur the growth of its ecosystem. With the support of other stakeholders, it launched with 88,000 e-books ready for download. Third, leaders must embrace the inconvenient truth that attempts to directly control agents at lower levels of the system often create counterintuitive outcomes at higher levels, such as the stagnation of a strategy or the collapse of an ecosystem. They must avoid relying on simplistic causal models and trying only to directly manage individual behavior, and instead seek to shape the context for that behavior. For example, questions or simple rules aimed at fostering autonomy and cooperation and leveraging employees’ initiative can be more effective than top-down control in shaping collective behavior. We have identified six principles that can help make complex adaptive systems in business robust. (For a related examination of complex adaptive systems in nature, see Fragile Dominion, by Simon Levin.) They derive from our study of features that distinguish dynamic systems that persist from those that collapse or decline. The first three principles are structural; they deal with the design of the system and are broadly seen in nature. The second three are primarily managerial; they deal with the application of the intelligence and intentionality provided by humans. Their key features have been observed in a wide variety of managed systems, from fisheries to global climate control agreements. A few caveats: Each principle confers a cost and has an optimal level of application; leaders must carefully calibrate how aggressively to implement each one. Furthermore, the principles are in tension with one another; emphasizing one may require deemphasizing another. 
Leaders must consider how to balance the principles collectively rather than treat the application of each as a unitary goal. For clarity we will illustrate the principles one at a time, but robust systems typically exhibit many or all of their characteristics simultaneously. Variety in the units of a CAS allows the system to adapt to a changing environment. Biologically, such heterogeneity explains why many diseases persist despite efforts to eradicate them. The influenza A virus has a high mutation rate and thus a large number of strains. Although we regularly develop resistance to the current common strains, new ones are always emerging. More generally, variation is the stuff of evolutionary adaptation: Heterogeneous components make up the reservoir on which selection acts. In a business CAS, leaders must ensure that the company is sufficiently diverse along three dimensions: people, ideas, and endeavors. This may come at the cost of short-term efficiency, but it is essential to robustness. One obvious starting point is to hire people with varied personality types, educational backgrounds, and working styles. But even amid such diversity, employees are typically reluctant to challenge the dominant business logic, especially if the firm has been successful. An explicit cultural shift and active managerial support may be needed to encourage people to risk failure and create new ideas; indeed, the absence of mistakes is a clear indication of missed opportunities and ultimately of enterprise fragility. Many Silicon Valley firms celebrate productive, or "learning," failures (think of the mantras "Fail fast" and "Fail forward"), which contribute to their success. Fujifilm exemplifies robustness through strategic heterogeneity. Its industry faced a crisis in the late 1990s, when digital photography reached the consumer market and quickly eroded demand for photography film. Fujifilm responded with a series of radical reforms to diversify its business—partnering with new companies, investing heavily in R&D, and acquiring 40 firms. What distinguished Fujifilm's approach from that of the other industry giant, Kodak, was that the firm explored not only obvious adjacencies but also entirely new business areas, such as pharmaceuticals and cosmetics, where it could exploit its existing capabilities in chemistry and materials. It also did so to a sufficient degree. The exploratory efforts and resulting diverse offerings paid off. While the camera film market peaked in 2000 and shrank by 90% over the following ten years, Fujifilm grew; in contrast, Kodak declared bankruptcy in 2012. Business heterogeneity is often punished by markets through the "conglomerate discount"—a markdown on the stock price relative to pure-play competitors. However, when a company experiences an environmental shock like the one Fujifilm and Kodak faced, heterogeneity is a key source of robustness. A modular CAS consists of loosely connected components. Highly modular systems impede the spread of shocks from one component to the next, making the overall system more robust. We see this effect in nature. For instance, occasional local forest fires help maintain modularity by creating areas of lower combustibility. When such fires are artificially suppressed, modularity disappears over time, opening the way for catastrophic blazes that can destroy the entire system.
The performance of Canadian banks during the global financial crisis offers a vivid example of how modularity confers robustness in business. Canada’s regulations mandated less risk-taking behavior than was permitted in the United States, thus minimizing exposure to the complex financial instruments, such as collateralized debt obligations, that created hidden connectivity across US firms and built systemic risk. Furthermore, Canadian banks had a higher ratio of retail deposits, which are generally more dependable than other sources of funding. Thus a weakness in one part of the system was less likely to cause runs in other parts. Finally, the banks had relatively limited investments in foreign assets, which protected them from contagion elsewhere in the global financial system. Thanks to this modularity, Canadian banks emerged from the crisis largely unscathed. None needed recapitalization or government guarantees. In a business context modularity always brings trade-offs: Insulating against shocks means forgoing some benefits of greater connectivity. Within a company, tight connections across regions or businesses can enhance information flows, innovation, and agility, but they tend to make the company vulnerable to severe adverse events. In the broader ecosystem, integration with other business stakeholders can bolster effectiveness in similar ways, but interdependence also amplifies risk. Because the benefits of risk mitigation are subtle and latent, whereas efficiency gains are immediate, managers often overemphasize the latter. Despite the trade-offs, modularity is a defining feature of robust systems. A bias against it for the sake of short-term gains carries long-term risks. In systems with redundancy, multiple components play overlapping roles. When one fails, another can fulfill the same function. Redundancy is particularly important in highly dynamic environments, in which adverse shocks are frequent. Consider how the human immune system leverages redundancy to create robustness against disease. We have multiple lines of defense against pathogens, including physical barriers (the skin and mucous membranes), the innate immune system (white blood cells), and the adaptive immune system (antibodies), each of which consists of multiple cellular and molecular defense mechanisms. In healthy people these redundant mechanisms act in concert, so when one fails, others prevent infection. AIDS is so deadly because it effectively destroys the second and third lines of defense, removing redundancy. (The immune system also shows that multiple principles are usually at play in robust systems: It exhibits not only redundancy but also heterogeneity and modularity of defense systems, and it has feedback loops that permit adaptation.) Businesses often impugn redundancy, treating it as the antithesis of leanness and efficiency. This has led to some disastrous outcomes. In the 1990s, when Ericsson was one of the world’s leading mobile-phone manufacturers, it adopted a single-source procurement strategy for key components. In 2000 a fire incapacitated a Philips microchip plant, and Ericsson was unable to switch rapidly to another supplier. It lost months of production and recorded a $1.7 billion loss in its mobile-phone division that year, which resulted in the division’s merger with Sony. How can businesses implement redundancy while avoiding prohibitive expense or inefficiency? 
First, managers should identify the stakeholders—they might be suppliers or innovation partners—on which the business is most dependent. For Ericsson, that set would have included Philips. Next, they should determine the feasibility of creating redundancy to reduce risk. Often this will involve developing new partners, which necessitates careful consideration of the trade-offs involved. Toyota creates redundancy in a highly cost-effective manner. In 1997 a fire at Aisin Seiki, its sole source of P-valves, threatened to halt production for weeks. But Toyota and Aisin were able to call on more than 200 partners and resumed full production in just six days. Although only Aisin had the experience and knowledge for the production of P-valves, Toyota’s tight network of collaborators had, in effect, created redundancy and conferred robustness. Let’s turn now to the power of human agency, in the form of three managerial principles that can make complex adaptive systems in business more robust. A key feature of complex adaptive systems is that we cannot precisely predict their future states. However, we can collect signals, detect patterns of change, and imagine plausible outcomes—and take action to minimize undesirable ones. The Montreal Protocol, which established global rules for protecting the ozone layer, illustrates these capabilities. Scientists from many countries came together to analyze the human health impacts of ozone layer depletion by chlorofluorocarbons and propose interventions. Because the atmosphere is a complex adaptive system, the precise impact of human activity couldn’t be predicted. Nevertheless, by presenting rigorous evidence of the potential consequences of further degradation, the scientific community built a consensus for action. The United Nations called the protocol “perhaps the single most successful international environmental agreement to date.” Nature does not allow us to predict the future, but it can reveal enough for us to avert disasters. In business systems few things are harder to predict than the progress and impact of new technologies. But it can be worthwhile to actively monitor and react to the activities of maverick competitors in an effort to avoid being blindsided. Companies that do this follow a few best practices. First, if they are incumbents, they accept that their business models will be superseded at some point, and they consider how that may happen and what to do about it. Second, they understand that change often comes from an industry’s periphery—from start-ups or challengers who have no choice but to bet against incumbents’ models. Third, they collect weak signals from the smart money flows and early-stage entrepreneurial activity that constitute those bets against their models. Fourth, they practice contingent thinking: Rather than posing the unanswerable question of whether this or that company or technology will succeed, they ask, If the maverick’s idea worked, what would be the consequences for us? Finally, they take preemptive action against such threats by replicating the idea, acquiring it, or building defenses against it. The consumer optics firm Essilor, which has grown its top and bottom lines at double-digit rates in a mature low-growth industry for the past several decades, exemplifies this approach. “Technology is critical but unpredictable, and we can’t control it or do it all ourselves,” CEO Hubert Sagnières told us. 
“But we can scan systemically for threats and opportunities, jump on them decisively, and build capabilities to exploit them.” This approach led Essilor to acquire Gentex in 1995, making it the world leader in polycarbonate lenses. It also enabled the firm to access and exploit digital-surfacing technology, which had been identified as a major strategic threat some years earlier, by acquiring Johnson & Johnson’s Spectacle Lens Group in 2005. A major competitive threat was neutralized, and an $8 million business operating at a loss was turned around and grown into a $50 million operation with 35% margins. While heterogeneity provides the variety on which selection operates, feedback loops ensure that selection takes place and improves the fitness of the system. Feedback is the mechanism through which systems detect changes in the environment and use them to amplify desirable traits. The fact that selection occurs at the local level implies, paradoxically, that some lack of robustness at lower levels may be necessary for robustness in the larger system. That is, the system must destroy order locally to maintain its fitness at higher levels. In nature, mutation and natural selection—the variation, selection, and propagation of genes that contribute to reproductive success—is an autonomous process. In business the analog is a predominantly “managed” activity. The variation, selection, and propagation of innovations happen only when leaders explicitly create and encourage mechanisms that promote those things. In fact, mainstream management thinking, as taught in many business schools, may actively suppress the intrinsic “variance” and “inefficiency” associated with iterative experimentation. Yet the cultivation of this adaptive capability is now essential for companies that may have managed themselves for decades using only analysis and planning. How can a company implement the process of iterative innovation? First, it must detect the right signals from across the organization. This is not trivial. There is always some distance between the local actions of an employee or a business unit and the macrolevel outcomes they produce. We often don’t know which behaviors are worth strengthening and which should be discouraged. Still, frontline employees have valuable information that typically isn’t transmitted and amplified. Leaders need to engage with those employees to discover innovations that could improve robustness. That’s why Japanese manufacturing managers often go to the gemba (the “real place,” such as the factory floor) to glean fresh and rich information. By interacting directly with employees there, they can identify challenges and innovative solutions visible only at the local level. Second, the organization must translate those signals into action. This may seem self-evident, but large companies often find themselves unable to implement this crucial second step because it may require diverting resources from the dominant product or business model and perhaps allowing underperforming products, businesses, and employees to fail more quickly. James Cannavino, IBM’s chief strategist in the early 1990s, notes that strategic planners were aware of the rise of PCs and the increasing importance of software long before the company faced a crisis. But the mainframe business was so profitable that they weren’t inclined to translate their insight into a definitive shift to personal computing. 
Ambidexterity—the ability to simultaneously run and reinvent the company—requires effective feedback loops and is critical to robustness in changing environments. Tata Consultancy Services operates in the technology services space, which is characterized by unpredictability in both the technology itself and its application by large organizations. Recognizing that it needed to optimize for adaptability, the firm instituted its 4E model of experimentation—explore, enable, evangelize, and exploit. TCS explores by placing many small bets and scaling them up or down on the basis of market feedback. The strategy is enabled by heavy investments in analytic and knowledge management capabilities. Successes are evangelized within the organization, thereby enabling TCS to fully scale and exploit them. The company has had spectacular results, growing from $20 million in revenue in 1991 to $1 billion in 2003 and more than $15 billion in 2015. Its rise reflects an ability to rapidly adapt in a dynamic sector where many large rivals have faltered. This is not to say that the tighter the feedback, the better. If the feedback cycle becomes too short or the response to change is too strong, the system may overshoot its targets and become unstable. For example, financial regulatory systems tend to swing between overregulation and underregulation, making equilibrium impossible. As with the other principles, calibration is essential. In society, complex adaptive systems require cooperation in order to be robust; direct control of system participants is rarely possible. Individual interests often conflict, and when individuals pursue their own selfish interests, the system overall becomes weaker, and everyone suffers. This is the quandary of so-called collective action problems: Individuals lack incentives to act in ways that benefit the overall system unless they benefit in immediate ways themselves. Trust and the enforcement of reciprocity combine to provide a mechanism for organizations to overcome this quandary. (Indeed, this was key to the success of the Montreal Protocol.) The Nobel laureate Elinor Ostrom studied situations in which users of a common-pool resource, such as a fishery ecosystem, are able to avoid the "tragedy of the commons," whereby public resources are overexploited to the eventual detriment of everyone involved. Her insight was that trust, along with variables such as the number of users, the presence of leadership, and the level of knowledge, promotes self-organization for sustainability. It allows users to create norms of reciprocity and keep agreements. To leverage the power of trust, leaders should consider how their firms contribute to other stakeholders in their ecosystem. They must ensure that they are adding value to the system even as they seek to maximize profits. Novo Nordisk's approach to new markets illustrates how this can work. Let's look at the firm's entrance into the Chinese market for insulin. The company established a Chinese subsidiary in the early 1990s, well before there was widespread awareness of or established treatment protocols for diabetes in China, or even many physicians who could diagnose it. Novo Nordisk built relationships with other stakeholders in diabetes care, establishing partnerships with the Chinese Ministry of Health and the World Diabetes Foundation to teach the medical community about diagnosis and management, and facilitating more than 200,000 physician training sessions.
It also reached out to patients through grassroots campaigns and developed support groups to establish itself as more than just a provider of insulin. Finally, it demonstrated its local commitment by building production sites and an R&D center in China. These efforts not only developed the market—they built trust between the company and other stakeholders. And they paid off. Novo Nordisk now controls about 60% of China’s enormous insulin market. Making a clear commitment to other stakeholders and the social good enhanced the robustness of the firm and strengthened the broader CAS in which it is nested. Rising corporate mortality is an increasing threat, and the forces driving it—the dynamism and complexity of the business environment—are likely to remain strong for the foreseeable future. A paradigm shift in managerial thinking is needed. Leaders are used to asking, “How can we win this game?” Today they must also ask, “How can we extend this game?” They must monitor the changing risk environment and align their strategies with the threats they face. Understanding the principles that confer robustness in complex systems can mean the difference between survival and extinction. The Biology of Corporate Survival appeared in the January–February 2016 issue of Harvard Business Review. It is republished here with permission. Illustration by Janine Rewell. The BCG Henderson Institute is Boston Consulting Group’s strategy think tank, dedicated to exploring and developing valuable new insights from business, technology, and science by embracing the powerful technology of ideas. The Institute engages leaders in provocative discussion and experimentation to expand the boundaries of business theory and practice and to translate innovative ideas from within and beyond business. For more ideas and inspiration from the Institute, please visit Featured Insights.
https://www.bcg.com/en-hu/publications/2016/strategy-business-unit-strategy-biology-of-corporate-survival.aspx
Q: Sing Happy Birthday to your favourite programming language

Your favourite programming language has just had a birthday. Be nice and sing it the Happy Birthday song.

Of course you should accomplish this by writing a program in that language. The program takes no input, and writes the following text to the standard output or an arbitrary file:

Happy Birthday to You
Happy Birthday to You
Happy Birthday Dear [your favourite programming language]
Happy Birthday to You

You should substitute the bracketed part (and omit the brackets). This is a code golf — shortest code wins.

UPDATE

I'm glad that the question aroused great interest. Let me add some extra info about scoring.

As stated originally, this question is a code golf, so the shortest code is going to win. The winner will be picked at the end of this week (19th October). However, I'm also rewarding other witty submissions with up-votes (and I encourage everybody to do so as well). Therefore, although this is a code-golf contest, not-so-short answers are also welcome.

Results

Congratulations to Optimizer, the winner of this contest with his 42-byte-long CJam submission.

Leaderboard

Here is a Stack Snippet to generate both a regular leaderboard and an overview of winners by language.

/* Configuration */

var QUESTION_ID = 39752; // Obtain this from the url
// It will be like https://XYZ.stackexchange.com/questions/QUESTION_ID/... on any question page
var ANSWER_FILTER = "!t)IWYnsLAZle2tQ3KqrVveCRJfxcRLe";
var COMMENT_FILTER = "!)Q2B_A2kjfAiU78X(md6BoYk";
var OVERRIDE_USER = 48934; // This should be the user ID of the challenge author.

/* App */

var answers = [], answers_hash, answer_ids, answer_page = 1, more_answers = true, comment_page;

function answersUrl(index) {
  return "https://api.stackexchange.com/2.2/questions/" + QUESTION_ID + "/answers?page=" + index + "&pagesize=100&order=desc&sort=creation&site=codegolf&filter=" + ANSWER_FILTER;
}

function commentUrl(index, answers) {
  return "https://api.stackexchange.com/2.2/answers/" + answers.join(';') + "/comments?page=" + index + "&pagesize=100&order=desc&sort=creation&site=codegolf&filter=" + COMMENT_FILTER;
}

function getAnswers() {
  jQuery.ajax({
    url: answersUrl(answer_page++),
    method: "get",
    dataType: "jsonp",
    crossDomain: true,
    success: function (data) {
      answers.push.apply(answers, data.items);
      answers_hash = [];
      answer_ids = [];
      data.items.forEach(function(a) {
        a.comments = [];
        var id = +a.share_link.match(/\d+/);
        answer_ids.push(id);
        answers_hash[id] = a;
      });
      if (!data.has_more) more_answers = false;
      comment_page = 1;
      getComments();
    }
  });
}

function getComments() {
  jQuery.ajax({
    url: commentUrl(comment_page++, answer_ids),
    method: "get",
    dataType: "jsonp",
    crossDomain: true,
    success: function (data) {
      data.items.forEach(function(c) {
        if (c.owner.user_id === OVERRIDE_USER)
          answers_hash[c.post_id].comments.push(c);
      });
      if (data.has_more) getComments();
      else if (more_answers) getAnswers();
      else process();
    }
  });
}

getAnswers();

var SCORE_REG = /<h\d>\s*([^\n,]*[^\s,]),.*?(\d+)(?=[^\n\d<>]*(?:<(?:s>[^\n<>]*<\/s>|[^\n<>]+>)[^\n\d<>]*)*<\/h\d>)/;
var OVERRIDE_REG = /^Override\s*header:\s*/i;

function getAuthorName(a) {
  return a.owner.display_name;
}

function process() {
  var valid = [];
  answers.forEach(function(a) {
    var body = a.body;
    a.comments.forEach(function(c) {
      if (OVERRIDE_REG.test(c.body))
        body = '<h1>' + c.body.replace(OVERRIDE_REG, '') + '</h1>';
    });
    var match = body.match(SCORE_REG);
    if (match)
      valid.push({
        user: getAuthorName(a),
        size: +match[2],
        language: match[1],
        link: a.share_link,
      });
  });
  valid.sort(function (a, b) {
    var aB = a.size, bB = b.size;
    return aB - bB;
  });
  var languages = {};
  var place = 1;
  var lastSize = null;
  var lastPlace = 1;
  valid.forEach(function (a) {
    if (a.size != lastSize)
      lastPlace = place;
    lastSize = a.size;
    ++place;
    var answer = jQuery("#answer-template").html();
    answer = answer.replace("{{PLACE}}", lastPlace + ".")
                   .replace("{{NAME}}", a.user)
                   .replace("{{LANGUAGE}}", a.language)
                   .replace("{{SIZE}}", a.size)
                   .replace("{{LINK}}", a.link);
    answer = jQuery(answer);
    jQuery("#answers").append(answer);
    var lang = a.language;
    if (/<a/.test(lang)) lang = jQuery(lang).text();
    languages[lang] = languages[lang] || {lang: a.language, user: a.user, size: a.size, link: a.link};
  });
  var langs = [];
  for (var lang in languages)
    if (languages.hasOwnProperty(lang))
      langs.push(languages[lang]);
  langs.sort(function (a, b) {
    if (a.lang > b.lang) return 1;
    if (a.lang < b.lang) return -1;
    return 0;
  });
  for (var i = 0; i < langs.length; ++i) {
    var language = jQuery("#language-template").html();
    var lang = langs[i];
    language = language.replace("{{LANGUAGE}}", lang.lang)
                       .replace("{{NAME}}", lang.user)
                       .replace("{{SIZE}}", lang.size)
                       .replace("{{LINK}}", lang.link);
    language = jQuery(language);
    jQuery("#languages").append(language);
  }
}

body { text-align: left !important}
#answer-list { padding: 10px; width: 290px; float: left; }
#language-list { padding: 10px; width: 290px; float: left; }
table thead { font-weight: bold; }
table td { padding: 5px; }

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<link rel="stylesheet" type="text/css" href="//cdn.sstatic.net/codegolf/all.css?v=83c949450c8b">
<div id="answer-list">
  <h2>Leaderboard</h2>
  <table class="answer-list">
    <thead>
      <tr><td></td><td>Author</td><td>Language</td><td>Size</td></tr>
    </thead>
    <tbody id="answers">
    </tbody>
  </table>
</div>
<div id="language-list">
  <h2>Winners by Language</h2>
  <table class="language-list">
    <thead>
      <tr><td>Language</td><td>User</td><td>Score</td></tr>
    </thead>
    <tbody id="languages">
    </tbody>
  </table>
</div>
<table style="display: none">
  <tbody id="answer-template">
    <tr><td>{{PLACE}}</td><td>{{NAME}}</td><td>{{LANGUAGE}}</td><td>{{SIZE}}</td><td><a href="{{LINK}}">Link</a></td></tr>
  </tbody>
</table>
<table style="display: none">
  <tbody id="language-template">
    <tr><td>{{LANGUAGE}}</td><td>{{NAME}}</td><td>{{SIZE}}</td><td><a href="{{LINK}}">Link</a></td></tr>
  </tbody>
</table>

A: LOLCODE: 109 (105 with "correct" spelling)

LOLCODE is not a great language for golfing, especially since you lose all the beauty and expressiveness when shortening the code.

HAI
H R "HAPPY BIRTHDAY "
T R SMOOSH H "TO YOU"
VISIBLE T
VISIBLE T
VISIBLE SMOOSH H "DEAR LOLCODE"
VISIBLE T

Test it using loljs.

This is my preferred rendition, weighing in at 187 characters (spaces added for clarity):

HAI
H R "HAPPY BERFDAY "
IM IN YR LOOP UPPIN YR N TIL BOTH SAEM N AN 4
  VISIBLE H!
  BOTH SAEM N AN 2, O RLY?
    YA RLY
      VISIBLE "DEER LOLCODE"
    NO WAI
      VISIBLE "2U"
  OIC
IM OUTTA YR LOOP
KTHXBAI

A: Mathematica - barcode birthday wishes - way too many bytes

This prints the verses and reads them aloud.

Happy Birthday to You
Happy Birthday to You
Happy Birthday Dear Mathematica
Happy Birthday to You

StringReplace replaces each comma with a NewLine. Barcodes cannot contain control characters.

A: TI-Basic, 53 bytes

Well, since everyone is putting their favorite programming language up, I might as well add one of my old favorites. I spent a lot of time over the years (before I graduated to actual programming languages) typing commands into a window half the size of a smart phone.
"HAPPY BIRTHDAY TO YOU Disp Ans,Ans,sub(Ans,1,15)+"DEAR TI-BASIC Ans My calculator doesn't support lowercase letters, and the only variables that can be strings are Str1, Str2 etc.
The interchange of outgoing and incoming radiation, which heats the Earth, is known as the greenhouse effect because a greenhouse operates similarly. The incoming UV radiation passes through the glass walls of a greenhouse and is absorbed by the hard surfaces and plants inside. The weaker IR radiation, however, has a hard time passing back out through the glass and gets trapped inside, heating the greenhouse. In the atmosphere, the gas molecules that absorb the outgoing IR radiation retain enough energy to power the weather system (Seinfeld et al., 2016). These kinds of gas molecules are referred to as greenhouse gases. Carbon dioxide, along with the other greenhouse gases, functions as a blanket, absorbing IR radiation and preventing it from escaping to outer space. The net outcome is the gradual warming of the Earth's surface and atmosphere, a process referred to as global warming. The greenhouse gases comprise nitrous oxide (N2O), methane, CO2, water vapor, and other gases. Since the emergence of the Industrial Revolution in the 1800s, the burning of fossil fuels such as gasoline, oil, and coal has significantly amplified the concentration of greenhouse gases in the atmosphere, especially CO2. Atmospheric CO2 levels have increased by not less than 40 percent since the commencement of the Industrial Revolution, from an estimated 270 parts per million (ppm) in the 1700s to more than 400 ppm today. The Pliocene Epoch was the last period before modern times when the Earth's atmospheric carbon dioxide levels exceeded 300 ppm.

How do you think future climate change is likely to affect you, particularly considering the environmental problems we may be facing as a result of increased world population?

If the population continues to increase at the current rate, there is a strong probability that our grandchildren will see the Earth fall into an unprecedented environmental catastrophe. The world's scientific community has also established that the buildup of methane, CO2, and other greenhouse gases in the atmosphere (the result of intensified farming and land use, as well as the production, processing, and shipping of everything we consume) is gradually causing climate change. As a result, people have a serious problem at hand, and finding a solution to it is complicated for many of them.

Have a discussion with your parents or grandparents and write down the major changes that have occurred in their lifetime as well as yours. Which of these affected our environment at the local, regional, or global level? Which ones were most important to you personally?

Global warming and deforestation were the major environmental changes of my parents' time. As global temperatures rise, the Earth's climate patterns change radically, resulting in extreme weather and prolonged seasons of drought. Deforestation was also a significant factor, since in my parents' time people depended heavily on timber as a source of heat for cooking; as a result, rainfall fell to minimal levels. In my own time, ocean acidification and acid rain have most affected the environment. Acid rain occurs when people burn coal and sulfur dioxide and nitrogen oxides are discharged into the air. The gases rise and accumulate in the clouds until the clouds become saturated and rain acid, causing destruction on the Earth's surface.
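As a rough sketch of the chemistry involved (the actual atmospheric pathways proceed through radical intermediates), the net acid-forming reactions can be written as:

$$2\,\mathrm{SO_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{SO_3}, \qquad \mathrm{SO_3} + \mathrm{H_2O} \rightarrow \mathrm{H_2SO_4}$$

$$4\,\mathrm{NO_2} + \mathrm{O_2} + 2\,\mathrm{H_2O} \rightarrow 4\,\mathrm{HNO_3}$$

The resulting sulfuric and nitric acids are what fall to the surface as acid rain.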
Ocean acidification occurs when carbon dioxide dissolves into the sea, binding with seawater to produce carbonic acid. The acid then lowers the water's pH, altering the acidity of the sea. This has affected both people and the environment, locally and globally.

Reference

Seinfeld, J. H., & Pandis, S. N. (2016). Atmospheric chemistry and physics: From air pollution to climate change. John Wiley & Sons.
https://www.solutionessays.com/global-climate-change/
Here is a scenario I’ve viewed (too) many times in my role as a performance psychologist working with portfolio managers and traders in the financial world. Returns from markets have been positive, so the managers and their romantic partners are doing well, taking nice vacations, going out frequently, and buying large, impressive houses. Then the lean times hit in challenging markets and income no longer comes in. The tightened belts lead to frustration and then to conflict. Each partner becomes increasingly focused on their (increasingly unmet) needs. The sense of “we” evaporates. Eventually the relationship itself disintegrates. What’s going on here? Noted psychologist Abraham Maslow made an important distinction between what he called being-cognition and deficit-cognition. His observation was that neurotic individuals act out of a sense of deficit: what they lack. Self-actualizing individuals operate from a sense of fullness: what they seek to maximize. Deficit-cognition might lead a socially-awkward person to seek a job as a night watchman to avoid having to interact with people. Being-cognition might lead that person to seek a career in protective security, maximizing such strengths as reliability and perceptiveness. In the latter case, acting upon one’s assets can lead to opportunity, allowing for experiences of successful social interaction. In the former case, any interaction with the public becomes a perceived threat and potential failure. Who is most likely to succeed in their work? This same dynamic is present in relationships. When relationships are need-based, each partner can fill the empty spaces of the other and all can go well for a while. In time, however, empty spaces change and the ability to fill gaps can wane, as in the case of the portfolio manager who is no longer bringing in the accustomed income. Once needs are not met, the relationship becomes a continual source of complaints and frustrations. Partners start to feel like adversaries because each is perceived as not living up to their end of the bargain. When relationships are gift-based, however, each partner is focused on interests and passions, strengths, and what they bring to the other party. If work is not going well, there is a doubling down on areas of shared fulfillment, such as quality time with friends and family, time at houses of worship, or shared projects. Because the gift-based relationship is grounded in empathy and connection to the other—not primarily focused on one’s own unmet needs—it allows for the possibility of growth during times of setback. Conversely, if relationships are mere vehicles for filling gaps, setbacks become times of threat and conflict. This same dynamic occurs in business relationships. At some firms where I have worked, traders operate in solo mode and feel chronically lacking in research resources. As a result, they often reach out to others, but with the primary focus on what they can extract from those conversations. Conversely, at firms where traders are organized as teams, there is an emphasis on what each member can bring of value to teammates. The team culture is based on gifts and sharing, rather than on the filling of needs and deficits. This contributes to enhanced motivation and resilience during difficult market conditions. Why do gift-based relationships work so much better than need-based ones? Consider several areas of recent research in psychology: So how do we go about cultivating gift-based relationships? 
It begins with a recognition and appreciation of what we have to give and an openness to the possibility that gifts never unwrapped are every bit as detrimental to our well-being as needs not currently met. In point of fact, none of us are wholly deficit-and-need based; nor are any of us continual gift-giving saints. In a solution-focused vein, we can explore the relationships in which we are already giving and receiving gifts and look to broaden those, using them as models for new relationships. We can also look to occasions when we are successful gift-givers in existing relationships and build upon those successes. What if, every time we felt frustrated, we stepped back with a degree of mindfulness and vowed to reach deep inside, find a gift, and give it to someone we truly care about? That could be the best New Year’s present of all, for all!
https://www.forbes.com/sites/brettsteenbarger/2020/12/31/how-to-renew-your-relationships-in-the-new-year/
To meet the Sustainable Development Goals (SDGs), the world must ramp up development financing from billions to trillions of dollars. The COVID-19 pandemic has increased the financing needs and made them more complex. We must think beyond aid, to private finance and to unlocking developing countries' own resources. The roles of financiers and developing country partners in mobilizing and allocating aid need to change so that the international community can focus not only on country-by-country development, but also on pressing shared problems, such as climate change, global health and international migration. Financing must encourage a resilient and sustainable future.

At the same time that the world is looking to scale up development financing, the development financing system is becoming more complex. There are new donors, like China and India, with different development paradigms. And the emergence of new multilateral development agencies and national development banks adds resources to the mix but raises the question of whether new models of international cooperation are needed to maximize the leverage of scarce financing.

Our research focuses on five questions: How can the international financial system produce sufficient funding for recovery and sustainable development? How should it be allocated to help countries rebuild their economies, meet the SDGs and confront global challenges? How can financing most effectively mobilize private capital, safeguard public monies, and keep debt levels sustainable? How can domestic resources be mobilized within developing countries? And how should existing institutions be changed to best cooperate?

In this paper I set out the economic logic for why good global economic governance matters for reducing poverty and inequality and argue that a step towards better global governance would be better representation of developing countries in global and regional financial institutions.

The US government's proposed $5 billion Millennium Challenge Account (MCA) could provide upwards of $250-$300m or more per year per country in new development assistance to a small number of poor countries judged to have relatively "good" policies and institutions. Could this assistance be too much of a good thing and strain the absorptive capacity of recipient countries to use the funds effectively? Empirical evidence from the past 40 years of development assistance suggests that in most potential MCA countries, the sheer quantity of MCA money is unlikely to overwhelm the ability of recipients to use it well, if the funds are delivered effectively.

After a decade of economic reforms that dramatically altered the structure of economies in Latin America, making them more open and more competitive, and a decade of substantial increases in public spending on education, health and other social programs in virtually all countries, poverty and high inequality remain deeply entrenched. In this paper we ask the question whether some fundamentally different approach to what we call "social policy" in Latin America could make a difference — both in increasing growth and in directly reducing poverty. We propose a more explicitly "bootstraps"-style social policy, focused on enhancing productivity via better distribution of assets.
We set out how this broader social policy could address the underlying causes and not just the symptoms of the region's unhappy combination of high poverty and inequality with low growth.

Sub-Saharan African states urgently need expanded and more dynamic private sectors, more efficient and effective infrastructure/utility provision, and increased investment from both domestic and foreign sources. The long-run and difficult solution is the creation and reinforcement of the institutions that underpin and guide proper market operations. In the interim, African governments and donors have little choice but to continue to experiment with the use of externally supplied substitutes for gaps in local regulatory and legal systems.

The Burnside and Dollar (2000) finding that aid raises growth in a good policy environment has had an important influence on policy and academic debates. We conduct a data gathering exercise that updates their data from 1970-93 to 1970-97, as well as filling in missing data for the original period 1970-93. We find that the BD finding is not robust to the use of this additional data. (JEL F350, O230, O400)

National economic policies' effects on growth were over-emphasized in the early literature on endogenous economic growth. Most of the early theoretical models of the new growth literature (and even their new neoclassical counterparts) predicted large policy effects, which was followed by empirical work showing large effects. A re-appraisal finds that the alleged association between growth and policies does not explain many stylized facts of the postwar era, depends on the extreme policy observations, that the association is not robust to different estimation methods (pooled vs. fixed effects vs. cross-section), does not show up as expected in event studies of trade openings and inflation stabilizations, and is driven out by institutional variables in levels regressions.

This paper applies a new approach to the estimation of the impact of policy, both the levels and the changes, on wage differentials using a new high-quality data set on wage differentials by schooling level for 18 Latin American countries for the period 1977–1998. The results indicate that liberalizing policy changes overall have had a short-run disequalizing effect of expanding wage differentials, although this effect tends to fade away over time.

In Latin America, privatization started earlier and spread farther and more rapidly than in almost any other part of the world. Despite positive microeconomic results, privatization is highly and increasingly unpopular in the region. While privatization may be winning the economic battle, it is losing the political war: The benefits are spread widely, small for each affected consumer or taxpayer, and occur (or accrue) in the medium-term. In contrast, the costs are large for those concerned, who tend to be visible, vocal, urban and organized, a potent political combination.
https://cgdev.org/topics/sustainable-development-finance?qt-topic_tabs=2&page=500
After Flamer – her dance duet with her brother Lander – dancer and choreographer Femke Gyselinck creates the musical dance performance Moving Ballads, in search of unprecedented relationships between pop music and contemporary dance, lyrics and lyricism, representation and imagination, choreography and improvisation. Moving Ballads takes the word 'ballad' literally: in an etymological sense, a danced song, moving lyricism. The lyre is replaced by a battery of synthesizers, played by the four performers: three dancers and one pianist-composer. The latter, Hendrik Lasure, composed a ballad that musically and textually plays on Joni Mitchell's whimsical and rich oeuvre. From that one song, the four performers present a series of variations, from the strictly musical to silent dancing. The principle of variation invites the spectator to triangulate between the music, the words and the movements, to detect shifts in meaning and thus to 'read' the bodies as bearers of meaning. Moving Ballads searches for the no man's land where movements come close to speaking, in which a physical form of rhetoric takes shape and is then undone again. Femke Gyselinck uses choreographic principles that seek a balance between organisation (Joni Mitchell: 'fluid architecture') and physical individuality, between stylized movement and carelessness. The central theme is the counterpoint formed by the individual dancers, and how each of them finds a physical eloquence that is characteristic of them and gives shape to words.
https://wpzimmer.be/en/projects/moving-ballads/
The purpose of this study is to analyze the factors that hinder the implementation of the government policy eliminating the minimum capital requirement for legal entities, an effort to develop MSMEs and improve the regional economy in Palu, Central Sulawesi. This was empirical research using a qualitative descriptive approach. The results show that the factors inhibiting implementation of the policy are a lack of information about Government Regulation Number 29 of 2016 concerning Changes in the Basic Capital of the Establishment of Limited Liability Companies; constraints on the requirements for establishing a limited liability company, owing both to the large capital otherwise required and to a rather difficult administrative system; and, finally, an orientation toward individualist thinking.
https://garuda.kemdikbud.go.id/documents/detail/2221497
We are looking for a hands-on Director of Engineering to provide both technical and people leadership for the Innovation Development and Operation Services (IDOS) program, a large government contract. IDOS provides Agile system development and modernization services, including advanced data analytic services, in support of the Centers for Medicare and Medicaid Services (CMS) Innovation Center (CMMI). You will manage multiple engineering teams, such as DevOps, Security, and Operations. You will be accountable for overall solution quality, be a champion for best practices (e.g., performance, scalability, maintainability), be a mentor for all technical resources, and work effectively in a highly matrixed organization. You should have a practical approach to architecture, a modern approach to software development, and a servant leadership style. This is a customer-facing role with high visibility to stakeholders.

RESPONSIBILITIES:

- Working with senior leadership in development of the overall vision, roadmap, and implementation strategy
- Contributing to strategic planning, financial planning, staffing, and technical review boards
- Collaborating with project teams to engineer viable solutions and effectively illustrating proposed solution architecture
- Explaining all aspects of new solutions, modernizations, and alternative approaches to stakeholders, influencing as necessary
- Providing hands-on technical direction for the design, development, integration, and production support across multiple systems
- Conducting monthly 1:1s and annual performance reviews of direct reports and assisting in the career development of all technical staff
- Applying significant knowledge of industry trends and best practices to improve service to our customer
- Managing stakeholders' expectations, which requires excellent communication, facilitation, and negotiation skills
- Engaging stakeholders in discovery calls to understand their business and technical needs
- Presenting alternative solutions to the customer and senior leadership for solving business problems or enabling customer value
- Assisting in the preparation and/or review of quotes, proposals, and statements of work
- Taking accountability for thorough and accurate documentation of all engineered solutions to help ensure successful implementation
- Maintaining deep technical and business knowledge of relevant technology stacks, including web development, DevOps, security, analytics, and cloud computing
- Operating across multiple projects to enable synergies and cost savings from an application, infrastructure, data, and security perspective

QUALIFICATIONS:

- 13+ years of experience in relevant engineering/technology work; prior experience with CMS is a plus
- 5+ years of experience with people management
- 5+ years of experience in software development, working in an agile software product development environment
- 3+ years of experience with enterprise, solution, technology, or security architecture
- 3+ years of experience leading remote or distributed teams
- Bachelor's degree in Computer Science, Engineering, or a related technical discipline, or the equivalent combination of education, professional training, or work experience
- Ability to work in a fast-paced environment and bring rational thinking and a calm demeanor to difficult situations
- Hands-on technical experience with the ability to lead teams through complex troubleshooting and root cause analysis
- Prior experience with production support and ITIL practices
- CMMI Level 4 support or CMMI Level 4 development experience is a plus
- Understands complex enterprise, cloud-based concepts and effectively employs appropriate engineering practices
- Able to work closely with all project stakeholders, such as product owners, scrum masters, product/platform managers, testers, developers, DevOps engineers, and information security analysts, while delivering solutions
- Possesses demonstrated work experience with multiple relational database management systems
- Understands the benefits and ideal uses of various programming languages
- Possesses significant knowledge of modern microservices-based architectures
- Identifies opportunities for improvement and makes constructive suggestions for change
- Remains at the forefront of emerging industry practices
- Communicates effectively with customers to identify needs and evaluate alternative technical solutions
- Continually seeks opportunities to increase customer satisfaction and deepen stakeholder and vendor relationships

TECHNICAL SKILLS:
https://network.symplicity.com/remote-usa/director-engineeringsystem-architect/A784871793AA459B86AFD89BF1112EA6/job/?vs=28
Introduction to concepts related to common physiologic alterations and influences of genetics and genomics on disease development. Building upon science and math courses while integrating and threading concepts introduced in introductory nursing courses, this course illustrates resulting human adaptation processes to expand student critical thinking and judgment for patient care. Pharmacologic concepts and applications to associated alterations are integrated as applicable to physiologic processes throughout the course. Focuses on nursing care of adult clients experiencing complex health problems, those with unpredictable outcomes, consistent with the ANA (2004) Nursing: Scope and Standards of Practice. Therapeutic interventions address the nurse's leadership role of promoting health, guiding the person, and shaping the health environment through advocacy, multidisciplinary collaboration, evaluation of outcomes and effective management of resources. Socio-environmental factors influencing the person, nurse and health care decisions are analyzed.
https://www.umassd.edu/directory/treynolds/
Florida State University Artist-in-Residence Jawole Willa Jo Zollar — acclaimed choreographer and the founding and artistic director of New York dance company Urban Bush Women — has been named Florida State’s 2011-2012 Robert O. Lawton Distinguished Professor. It is the highest honor bestowed by the FSU faculty on one of its own. “Jawole Willa Jo Zollar is a force of nature who truly unleashes the power of dance to change lives, from her astonishing work since 1984 with contemporary dance company Urban Bush Women to her pioneering and inspiring work with students, artists, and communities around the world,” said FSU College of Visual Arts, Theatre and Dance Dean Sally McRorie. “Her celebrated work is a rich stew that reflects the artistic and historical diversity of human culture. “She is a national treasure,” McRorie said. “We are very lucky to have this exceptional artist, educator and humanitarian as our colleague, friend, devoted teacher and mentor.” The esteem with which colleagues and students regard Zollar has been reflected in her recognition as a 2009-2010 Guggenheim Fellow. She also is the recipient of, among numerous honors, a 2008 United States Artists Wynn Fellowship; a 2006 and 1992 New York Dance and Performance Award, better known as a “BESSIE” — the dance world’s Tony; the 2006 Joyce Award; the 2005 Master of African American Choreography Award from the Kennedy Center; FSU’s 1999 Dr. Martin Luther King, Jr. Distinguished Service Award; the 1997 FSU Alumna of the Year Award (Zollar received her Master of Fine Arts degree in dance from FSU in 1979); and the 1994 Capezio Award. Zollar and her Urban Bush Women were appointed as 2010 U.S. State Department cultural ambassadors to South America and, closer to home, Zollar has led community leadership institutes across the United States for many years. “It is most fitting,” said McRorie, “that Professor Zollar now joins the stellar assembly of Lawton Distinguished Professors at Florida State University.” To be named a Lawton Distinguished Professor is “exhilarating,” said Zollar, a Kansas City native for whom Tallahassee has been home for the past 14 years. “It’s a humbling and deeply moving honor to be so recognized by one’s esteemed peers.” In its 30-year history at Florida State the Robert O. Lawton Distinguished Professor award has gone to two dancers, including Zollar; the first was her graduate-school professor and mentor Nancy Smith Fichter, who retired from FSU in 1997. In fact, Zollar has had the distinction of serving as the Nancy Smith Fichter Professor in FSU’s School of Dance since 1996. Zollar’s dance workshops for students at Florida State have emphasized the positive role played by arts and culture in the lives of socially and culturally marginalized individuals. Workshop participants are taught to assess the needs of the community for which they’ll be performing and incorporate the local history and individual narratives into group performances and projects likely to be relevant and meaningful to targeted audiences. “While maintaining one of the more renowned contemporary dance companies in our country, Urban Bush Women, she brings undivided and generous attention to the development of the young artists here at FSU,” said Patty Phillips, FSU School of Dance professor and co-chairwoman, in her letter nominating Zollar for the Lawton award. 
“Her traditional classes offer excellent training for the dancers, but more importantly, she is constantly challenging the dancers to evolve their own creative voices and to determine how they can best impact the field in their own individual ways,” Phillips said. “She offers a clear window to the world that they will enter, and she often has extended her guidance and support beyond the confines of the university setting by involving students and alumnae through internships or as company members with Urban Bush Women.” For Zollar, the tradition of community engagement continues. This summer will be the second consecutive one that she takes the Urban Bush Women troupe to New Orleans with the goal of making a difference in that community and the lives of its citizens. Late summer will find her back at Florida State University, which will formally introduce and honor her as its newest Robert O. Lawton Distinguished Professor during the Aug. 6 summer commencement ceremony. Learn more about Zollar and the Urban Bush Women at www.urbanbushwomen.org. For additional information about the FSU School of Dance, visit the website at dance.fsu.edu.
https://news.fsu.edu/news/arts-humanities/2011/04/26/world-renowned-choreographer-earns-highest-honor-fsu-colleagues/
BACKGROUND OF THE INVENTION

The present invention relates to a process for producing 2,6-naphthalenedicarboxylic acid continuously by oxidizing 2,6-diisopropylnaphthalene. More in detail, the present invention relates to a process for producing 2,6-naphthalenedicarboxylic acid characterized by the process comprising: (1) a step of dissolving 2,6-diisopropylnaphthalene and a catalyst comprising a water-soluble salt of cobalt, a water-soluble salt of manganese or a mixture thereof, a water-soluble salt of cerium and a bromine compound in an aliphatic monocarboxylic acid as the solvent, (2) steps of supplying the obtained solution continuously into a reaction vessel and of oxidizing 2,6-diisopropylnaphthalene by molecular oxygen under an elevated temperature and a pressure, (3) a step of drawing the reaction mixture continuously out from the reaction vessel, then depositing and separating crude 2,6-naphthalenedicarboxylic acid crystals from the reaction mixture, (4) a step of bringing the separated crystals into contact with an aqueous solution of a mineral acid, thereby dissolving and removing metals of the catalyst from the crystals, (5) a step of purifying the crude 2,6-naphthalenedicarboxylic acid crystals, (6) a step of adding an alkali carbonate or an alkali bicarbonate to the filtrate of the step of dissolving and removing the metals, thereby depositing and separating the metals as carbonates and/or bicarbonates, (7) a step of supplying the separated carbonates and/or bicarbonates of the metals to the step of dissolving 2,6-diisopropylnaphthalene and the catalyst, and (8) a step of supplying the filtrate of the step of separating the crude 2,6-naphthalenedicarboxylic acid crystals into the step of dissolving 2,6-diisopropylnaphthalene and the catalyst.

Hitherto, as the process for producing 2,6-naphthalenedicarboxylic acid (hereinafter referred to as 2,6-NDCA), a process of oxidizing 2,6-dimethylnaphthalene or 2,6-diisopropylnaphthalene by molecular oxygen in acetic acid as the solvent in the presence of a catalyst comprising cobalt and/or manganese and bromine has been known [refer to Japanese Patent Publications No. 48-43893 (1973), No. 56-21017 (1981), No. 59-13495 (1984) and No. 48-27318 (1973), and Japanese Patent Applications Laid-Open (KOKAI) Nos. 48-34153 (1973), 49-42654 (1974), 52-17453 (1977), 60-89445 (1985) and 60-89446 (1985)].

Among the processes disclosed in the above references, the following two processes are particularly noteworthy:

(1) The process disclosed in Japanese Patent Application Laid-Open (KOKAI) No. 60-89445 (1985): a process for producing 2,6-NDCA by oxidizing 2,6-diisopropylnaphthalene or its oxidized intermediate with molecular oxygen in a solvent containing at least 50% by weight of an aliphatic monocarboxylic acid of less than 3 carbon atoms, wherein the oxidation of 2,6-diisopropylnaphthalene or its oxidized intermediate is carried out in the presence of a catalyst comprising (i) cobalt and/or manganese and (ii) bromine in the ratio of at least 0.2 mol of the heavy metal to one mol of 2,6-diisopropylnaphthalene or its oxidized intermediate.

(2) The process disclosed in Japanese Patent Application Laid-Open (KOKAI) No.
60-89446 (1985): a process for producing 2,6-NDCA by oxidizing 2,6-diisopropylnaphthalene or its oxidized intermediate with molecular oxygen in a solvent containing at least 50% by weight of an aliphatic monocarboxylic acid of less than 3 carbon atoms, wherein the oxidation of 2,6-diisopropylnaphthalene or its oxidized intermediate is carried out in the presence of a catalyst comprising (i) cobalt and/or manganese and (ii) bromine and containing cobalt and/or manganese in an amount of at least 1% by weight of the aliphatic monocarboxylic acid of less than 3 carbon atoms.

However, in the above processes, not only a large amount of impurities, for instance, aldehyde derivatives, ketone derivatives, colored substances and nuclear bromides, but also derivatives of phthalic acid and trimellitic acid due to cleavage of the naphthalene ring are formed; thereby not only is the yield of 2,6-NDCA reduced, but it also becomes necessary to provide a complicated purifying process. Moreover, since 2,6-NDCA is obtained together with by-products of the oxidation reaction, such as aldehydes, ketones, brominated derivatives and oxidized polymers of 2,6-NDCA and colored substances, when such 2,6-NDCA is used as a starting material for producing polyethylene 2,6-naphthalate, polyester, polyamide, etc., the degree of polymerization of the polymers becomes low, and the physical properties, such as heat resistance, and the appearance, such as the color, of the films and fibers prepared from the polymers are damaged.

Accordingly, the following methods of purifying 2,6-NDCA have been proposed:

(1) A method comprising the steps of dissolving crude 2,6-NDCA in an aqueous alkali solution, subjecting the solution to thermal treatment for from 1 to 5 hours at a temperature from 100° to 250°C while stirring, subjecting the solution to a decoloring treatment with a solid adsorbent, and blowing an acidic gas such as gaseous carbon dioxide or gaseous sulfur dioxide into the solution under pressure, thereby reducing the pH of the solution and precipitating 2,6-NDCA as a monoalkali salt from the solution [refer to Japanese Patent Publication No. 52-20993 (1977)].

(2) A method comprising the steps of treating an aqueous alkali solution of crude 2,6-NDCA with an oxidizing agent such as an alkali perhalogenate or alkali permanganate and blowing gaseous carbon dioxide or gaseous sulfur dioxide into the solution, thereby separating 2,6-NDCA as a monoalkali salt [refer to Japanese Patent Application Laid-Open (KOKAI) No. 48-68554 (1973)], and

(3) A method wherein, after dissolving crude 2,6-NDCA in an aqueous solution of sodium acetate, the solution is concentrated and crystals are deposited, thereby isolating 2,6-NDCA as a monoalkali salt [refer to Japanese Patent Application Laid-Open (KOKAI) No. 50-105639 (1975)].

By the way, the words "nuclear bromide(s)" in this application mean aromatic compounds in which hydrogen(s) of the aromatic nucleus is substituted by bromine(s), such as bromobenzene, bromonaphthalene, etc.

However, every one of the above methods of purification uses a procedure wherein crude 2,6-NDCA is dissolved in an aqueous alkali solution and crystals of the monoalkali salt of 2,6-NDCA are precipitated by adjusting the pH of the solution. In this method of purifying crude 2,6-NDCA, the pH is adjusted to from 6.5
to 7.5 by blowing gaseous carbon dioxide or sulfur dioxide under pressure into an alkali solution of a relatively high concentration of 2,6-NDCA while warming it, or by adding a mineral acid to the solution. The solution is then cooled to 20°C and the monoalkali salt of 2,6-NDCA is precipitated. This method has the demerit that the composition and the amount of the crystals are variable and unstable, depending on conditions such as pH, temperature and concentration, because there exists a delicate equilibrium relationship between the monoalkali salt, the dialkali salt and the free acid of 2,6-NDCA. Further, as other carboxylic acids having pKa values close to that of 2,6-NDCA are contained in the 2,6-NDCA obtained by oxidizing 2,6-dialkylnaphthalene, it is difficult to purify crude 2,6-NDCA to a high purity only by means of adjusting the pH. Moreover, after separating the monoalkali salt precipitated by pH adjustment, it is necessary to wash the crystals with water to remove the accompanying mother liquor. However, since the monoalkali salt of 2,6-NDCA is soluble in water, there is the defect that the rate of recovery of 2,6-NDCA is reduced by the washing. As it is impossible to purify crude 2,6-NDCA to a high purity only by crystallization, it is necessary to combine the crystallization with other methods such as thermal treatment, oxidative treatment or reductive treatment. However, when crystallization is combined with thermal treatment, which makes high temperature and pressure inevitable, or with an oxidative or reductive reaction, there is the problem of the numerous newly formed by-products which become impurities, resulting in the necessity of a means of removing those impurities. Accordingly, the combined method is incomplete as a method of purifying crude 2,6-NDCA. Still more, cobalt, which is used as a component of the catalyst in the production of 2,6-NDCA, is an expensive heavy metal which is relatively difficult to obtain. Accordingly, it is industrially important to reduce the amount of cobalt as far as possible; however, when the amount of cobalt is reduced in the conventional method, the amount of trimellitic acid, etc. formed increases and the yield and purity of 2,6-NDCA are reduced; therefore, the references have recommended using cobalt in a large amount. Moreover, since crude 2,6-NDCA obtained by a conventional method is accompanied by nuclear bromides [refer, for instance, to Japanese Patent Application Laid-Open (KOKAI) No. 48-96573 (1973) and Example 1 of Japanese Patent Application Laid-Open (KOKAI) No. 48-68555 (1973)], from 1000 to 2000 ppm of bromine is usually contained in the product. Also in the present inventors' experiments on the production of 2,6-NDCA by the conventional method, similar results were obtained, and there are many cases wherein from 2000 to 4000 ppm of bromine is contained in the product, depending on the reaction conditions. It has been known that the softening point of polyethylene naphthalate produced by using 2,6-NDCA containing a large amount of nuclear bromides is reduced and, as a result, the quality of the polymer becomes poor. Furthermore, the conventional process for producing 2,6-NDCA is performed batch-wise, and although the possibility of applying a continuous process for producing 2,6-NDCA has been suggested, no concrete proposal has been given so far.
In the conventional processes for producing 2,6-NDCA, a large quantity of by-products and decomposed products such as trimellitic acid, aldehydes, colored substances and nuclear bromides is formed; accordingly, a large quantity of the heavy metal salt of trimellitic acid and of nuclear bromides is contained in the crude 2,6-NDCA, and the purity of the crude 2,6-NDCA is usually about 80%. Accordingly, in order to obtain 2,6-NDCA of a purity higher than 99% from such crude 2,6-NDCA, a complicated purifying method with many steps is necessary; in addition, since expensive cobalt has been used in a large amount, the conventional process is unsatisfactory as an industrial process for producing 2,6-NDCA. Moreover, in the method for purifying the crude 2,6-NDCA, chemical reactions and operations such as concentration, cooling, etc. are performed; therefore, it is impossible to obtain 2,6-NDCA of a high purity at a high rate of recovery. Still more, in the production of 2,6-NDCA on an industrial scale, since the conventional process is carried out batch-wise, it is impossible to produce a large quantity of highly pure 2,6-NDCA. Accordingly, a proposal of a continuous process for producing 2,6-NDCA of a purity higher than 98% in a large quantity has been demanded.

As a result of the present inventors' studies on the process for producing 2,6-NDCA, they have found a way to control the amount of formation of trimellitic acid and nuclear bromides, the by-products which most strongly influence the recovery of the heavy metals (cobalt and/or manganese), their re-use, and the yield and purifying steps of 2,6-NDCA, while using a far smaller amount of cobalt catalyst than has hitherto been used: by adding cerium to the catalyst comprising cobalt and/or manganese and bromine, the formation of trimellitic acid is suppressed and 2,6-NDCA can be produced in a favorable yield.

In addition, they have developed a continuous process for producing highly pure 2,6-NDCA in a large quantity and at a moderate cost, the process comprising the following steps: (1) the step of dissolving 2,6-diisopropylnaphthalene and a catalyst comprising a water-soluble salt of cobalt, a water-soluble salt of manganese or a mixture thereof, a water-soluble salt of cerium and a bromine compound in an aliphatic monocarboxylic acid as the solvent, (2) the step of oxidizing 2,6-diisopropylnaphthalene by molecular oxygen under an elevated temperature and a pressure, while continuously supplying the solution into a reaction vessel, (3) the step of drawing the reaction mixture continuously out from the reaction vessel and precipitating and separating crude 2,6-NDCA from the reaction mixture, (4) the step of bringing the separated crude 2,6-NDCA crystals into contact with an aqueous solution of a mineral acid, thereby dissolving and removing the metals of the catalyst from the crystals, (5) the step of purifying the crude 2,6-NDCA crystals, (6) the step of adding an alkali carbonate or an alkali bicarbonate to the filtrate from the step of dissolving and removing the metals, thereby precipitating and separating the metals as carbonates and/or bicarbonates, (7) the step of supplying the carbonates and/or bicarbonates of the metals to the step of dissolving 2,6-diisopropylnaphthalene and the catalyst, and (8) the step of supplying the filtrate from the step of separating the crude 2,6-NDCA crystals to the step of dissolving 2,6-diisopropylnaphthalene and the catalyst.
On the basis of the above findings, the present inventors have completed the present invention.

SUMMARY OF THE INVENTION

The object of the present invention, in producing 2,6-NDCA by oxidizing 2,6-diisopropylnaphthalene, lies in reducing the formation of trimellitic acid and nuclear bromides, which are the main impurities, by adding cerium to cobalt, manganese or a mixture thereof and bromine, instead of using the hitherto-used catalyst of cobalt, manganese and bromine. Furthermore, the object of the present invention lies in the development of an industrial process for continuously producing 2,6-NDCA of a high purity in a large quantity at a moderate cost. Still more, the object of the present invention lies in offering highly pure 2,6-NDCA which is suitable as the raw material for producing polyethylene 2,6-naphthalate, polyester, polyamide, etc. for the production of films and fibers excellent in heat resistance.

BRIEF EXPLANATION OF THE DRAWINGS

FIG. 1 is a block flow-chart of a continuous manufacturing process for 2,6-NDCA according to the present invention, and FIG. 2 is a block flow-chart of a preferable embodiment thereof.

DETAILED DESCRIPTION OF THE INVENTION

The feature of the present invention lies in the process comprising: the first step of dissolving 2,6-diisopropylnaphthalene or its oxidized intermediate and the catalyst comprising a water-soluble salt of cobalt, a water-soluble salt of manganese or a mixture thereof, a water-soluble salt of cerium and a bromine compound in a lower aliphatic monocarboxylic acid as a solvent; the second step of supplying the solution continuously to an oxidation reaction vessel and oxidizing 2,6-diisopropylnaphthalene or its oxidized intermediate with molecular oxygen under an elevated temperature and a pressure; the third step of drawing the reaction mixture out from the reaction vessel continuously and separating the crude 2,6-NDCA crystals deposited by cooling; the fourth step of bringing the deposited crystals into contact with an aqueous solution of a mineral acid, thereby dissolving the metals of the catalyst and separating the resulting solution from the crude 2,6-NDCA crystals; the fifth step of purifying the crude 2,6-NDCA and separating pure 2,6-NDCA crystals; the seventh step of adding an alkali carbonate or an alkali bicarbonate to the filtrate of the fourth step, thereby depositing and separating the metals of the catalyst as carbonates and/or basic carbonates and supplying them to the first step; and the eighth step of supplying the filtrate of the third step, which separates the crude 2,6-NDCA crystals, to the first step.

Of the above-mentioned steps, the first step is the step wherein 2,6-diisopropylnaphthalene or its oxidized intermediate and a catalyst comprising a water-soluble salt of cobalt, a water-soluble salt of manganese or a mixture thereof, a water-soluble salt of cerium and a bromine compound are dissolved in a lower aliphatic monocarboxylic acid containing less than 30% by weight of water, as a solvent. "An oxidized intermediate of 2,6-diisopropylnaphthalene" herein means, of the many intermediates formed by oxidation of 2,6-diisopropylnaphthalene (hereinafter referred to as 2,6-DIPN), a compound which forms 2,6-NDCA upon further oxidation.
The compound(s) which can be used as the starting substance in the present invention is (are) the compound(s) shown by the following formula (I): ##STR1## wherein X is a group selected from the group consisting of ##STR2## and Y is a group selected from the group consisting of

As the water-soluble salt of cobalt, the water-soluble salt of manganese and the water-soluble salt of cerium, the hydroxide, carbonate, halide, salts of aliphatic acids, etc. may be exemplified; however, the acetate and carbonate are preferable. As the bromine compound, hydrogen bromide, hydrobromic acid, alkyl bromides such as methyl bromide and ethyl bromide, alkenyl bromides such as allyl bromide, and inorganic salts such as alkali bromides and ammonium bromide may be exemplified; however, ammonium bromide and cobalt bromide are preferable.

The water-soluble salt of cobalt, the water-soluble salt of manganese and the water-soluble salt of cerium are added so that the total amount of the heavy metals in the catalyst is from 0.01 to 0.15 g-atom, preferably from 0.02 to 0.12 g-atom, per 100 g of the solvent, a lower aliphatic monocarboxylic acid containing water. In a case where the heavy metal catalyst is used in a quantity over the above range, although the yield of 2,6-NDCA is not reduced, the product is accompanied by a large quantity of the heavy metals and purification of the product becomes difficult. On the other hand, in a case where the heavy metal catalyst is used in a quantity below the above range, the yield of 2,6-NDCA is reduced; accordingly, such situations are undesirable.

Moreover, the ratio of the water-soluble salt of cobalt, the water-soluble salt of manganese or the mixture thereof to the water-soluble salt of cerium depends on the reaction conditions in the oxidation step, that is, the reaction temperature, the concentration of the bromine catalyst and the partial pressure of oxygen; the mixing ratio is therefore difficult to predetermine, but the usual atomic ratio is in the range of 0.03 to 30, preferably 0.05 to 20, more preferably 0.10 to 10. When the ratio is over 30, it is absolutely uneconomical, and when it is less than 0.03, the reaction speed is so low that the process becomes impractical.

The mixing of the bromine compound is carried out so that the atomic ratio of bromine in the bromine compound to the total heavy metals of the catalyst used is from 0.001 to 1, preferably from 0.005 to 0.6, more preferably from 0.01 to 0.4. When the bromine compound is used over the above range, although the velocity of the oxidation reaction becomes larger, the amount of the nuclear bromides formed, which are difficult to separate from 2,6-NDCA, also becomes larger.

The solvent used in the process for producing 2,6-NDCA is a lower aliphatic monocarboxylic acid containing less than 30% by weight of water, preferably from 1 to 20% by weight of water. As the lower aliphatic monocarboxylic acid, those of not more than 4 carbon atoms are preferable, and although formic acid, acetic acid, propionic acid and butyric acid may be exemplified, acetic acid is most preferable.

When the starting substance of the present invention, that is, 2,6-DIPN or its oxidized intermediate, is present at a high concentration in the oxidation reaction system, the amount of molecular oxygen supplied to the reaction system is relatively reduced, resulting in the ready progress of side reactions, and accordingly the yield and the purity of 2,6-NDCA are reduced.
Therefore, such a situation is undesirable. Accordingly, it is necessary in the present invention to maintain the concentration of 2,6-DIPN or its oxidized intermediate at less than 20 g per 100 g of the solvent. The second step is a step wherein 2,6-DIPN or its oxidized intermediate is oxidized by molecular oxygen. Molecular oxygen for the reaction is supplied as gaseous oxygen or a mixture of oxygen and an inert gas; however, it is industrially preferable to use compressed air. Although the oxidation reaction proceeds faster as the partial pressure of oxygen in the reaction system becomes higher, for practical use a partial pressure of oxygen of from 0.2 to 8 kg/cm² (absolute pressure) is sufficient for the purpose, and there is no merit in using a partial pressure of oxygen higher than 8 kg/cm² (absolute pressure). Furthermore, although the total pressure of the gas containing molecular oxygen supplied to the oxidation reaction is not particularly limited, a pressure of 2 to 30 kg/cm² (absolute pressure) is preferably used. The temperature of the oxidation reaction is from 140° to 210° C., and preferably from 170° to 190° C. A reaction temperature over 210° C. is undesirable because the lower aliphatic monocarboxylic acid is oxidized and decomposed. In order to suppress the side reactions in the second step, the molar ratio of 2,6-DIPN or its oxidized intermediate to the heavy metals is to be kept under 0.4, preferably under 0.05. It should be noted, however, that with continuous supply of the starting material and the catalyst, and with a reaction rate high enough to consume the starting material, the actual molar ratio of the starting material to the heavy metals in the oxidation reactor is always very low. Accordingly, the requirement on the concentration of the starting material in the second step of the present invention can easily be fulfilled as long as the rate of oxidation is high enough. In the third step, the separation of the deposited crude 2,6-NDCA crystals is carried out using a separating apparatus such as a centrifugal machine, etc. The fourth step is a step wherein the separated crude 2,6-NDCA crystals are brought into contact with an aqueous solution of a mineral acid to dissolve and remove the metals of the catalyst. Namely, the separated crude 2,6-NDCA crystals are added, while stirring, to an aqueous solution of sulfuric acid or hydrochloric acid of a concentration of 1 to 10%, preferably 3 to 6% by weight, and the pH of the solution is adjusted to from 1 to 3, preferably from 1 to 2, thereby dissolving the heavy metals of the catalyst out of the crude 2,6-NDCA crystals. The fifth step is a purifying step of the crude 2,6-NDCA, and is performed by one of the following three methods. (1) The crude 2,6-NDCA is dissolved in an aqueous solution of an alkali such as sodium hydroxide, potassium hydroxide, sodium carbonate or potassium carbonate, the pH of the aqueous solution being higher than 9, preferably higher than 11, and into the alkali solution of 2,6-NDCA a water-soluble neutral salt of the same cation as that used in the aqueous alkali solution (hereinafter referred to as the common cation) is added so that the formed solution contains 10 to 30% by weight, preferably 15 to 25% by weight, of the water-soluble neutral salt, and the solution is stirred at a temperature of from 10° to 100° C., preferably from 20° to 50° C., to precipitate 2,6-NDCA as a dialkali salt.
(2) The crude 2,6-NDCA is added to an aqueous solution containing more than the neutralizing amount of an alkali, such as sodium hydroxide, potassium hydroxide, sodium carbonate or potassium carbonate, and 10 to 30% by weight, preferably 15 to 25% by weight, of a water-soluble neutral salt of the common cation therewith, and the mixture is stirred at a temperature of from 10° to 100° C., preferably from 20° to 50° C., to precipitate 2,6-NDCA as an alkali salt. (3) The crude 2,6-NDCA is added to methanol at a molar ratio of methanol to 2,6-NDCA greater than 15, preferably from 20 to 70, and the mixture is heated to a temperature in the range of 110° to 140° C., preferably 120° to 135° C., under stirring, and then concentrated sulfuric acid is added to the heated mixture to esterify 2,6-NDCA. After the reaction is over, the reaction mixture is cooled to precipitate the crystals of the dimethyl ester of 2,6-NDCA in a purified form. The water-soluble neutral salt of the common cation means a water-soluble neutral salt of an alkali metal such as sodium or potassium which dissolves in water at 20° C. in an amount of more than 10% by weight, preferably more than 15% by weight, and sodium chloride, potassium chloride, sodium sulfate, potassium sulfate, sodium nitrate and potassium nitrate may be exemplified. The amount of the water-soluble neutral salt added is limited to from 10 to 30% by weight, preferably from 15 to 25% by weight, of the solution subjected to the salting-out treatment; namely, the cation concentration is not higher than 10 mol/liter, preferably not higher than 5 mol/liter, and should also be less than the solubility of the neutral salt. Adding the water-soluble neutral salt beyond its solubility is not desirable, because the undissolved neutral salt mixes with the salted-out crystals. Furthermore, even when the water-soluble neutral salt is added so that the concentration of the cation is over 10 mol/liter, the effect of salting out is not improved, and since the specific gravity and the viscosity of the aqueous solution are raised, the solid-liquid separation becomes difficult. The solubility of the dialkali salt of 2,6-NDCA in the aqueous solution is rapidly reduced as the concentration of the common cation, for example, sodium ion in the case of the disodium salt and potassium ion in the case of the dipotassium salt, is increased. For instance, the solubility of 2,6-NDCA.2Na at 20° C. in an aqueous solution of a sodium ion concentration of 1.5 mol/liter, prepared by adding sodium chloride to an aqueous sodium hydroxide solution of pH 12, is about 11% by weight, and when the concentrations of sodium ion are 2.2, 3, 4 and 5.4 mol/liter, the solubilities are about 7, 1.7, 0.4 and 0.2% by weight, respectively. In this specification, "2,6-NDCA.dialkali" means the dialkali salt of 2,6-NDCA, and "2,6-NDCA.2Na" and "2,6-NDCA.2K" mean the disodium and dipotassium salts of 2,6-NDCA, respectively. In the salting-out step, an amount of alkali just larger than the neutralization equivalent is enough for salting out 2,6-NDCA; however, to precipitate and remove as hydroxide or oxide the minute amount of heavy metals contained in the crude 2,6-NDCA, it would probably be desirable to use the alkali in an amount of more than 1.2 times the neutralization equivalent while holding the pH of the aqueous solution at higher than 9, preferably higher than 11.
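The salting-out figures quoted above can be put to work numerically. The following is a minimal sketch, assuming log-linear interpolation between the quoted data points; the interpolation scheme and the helper name are our own illustration, not part of the specification:

import math

# Solubility of 2,6-NDCA.2Na at 20 C vs. Na+ concentration, from the text above.
# (mol/L Na+, wt% solubility)
data = [(1.5, 11.0), (2.2, 7.0), (3.0, 1.7), (4.0, 0.4), (5.4, 0.2)]

def solubility(na_mol_per_l):
    """Estimate wt% solubility by log-linear interpolation (our assumption)."""
    for (x0, y0), (x1, y1) in zip(data, data[1:]):
        if x0 <= na_mol_per_l <= x1:
            t = (na_mol_per_l - x0) / (x1 - x0)
            return math.exp(math.log(y0) + t * (math.log(y1) - math.log(y0)))
    raise ValueError("outside quoted range 1.5-5.4 mol/L")

print(f"{solubility(3.5):.2f} wt%")  # roughly 0.8 wt%, between the 3 and 4 mol/L points

Such an estimate illustrates why raising the common-cation concentration toward 5 mol/liter drives the recovery of the dialkali salt so strongly.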
The concentration of the alkali is from 2 to 10% by weight, preferably from 4 to 7% by weight. In the salting-out step, the amount of the water-soluble neutral salt to be added and the concentration of 2,6-NDCA.dialkali can be adjusted over a broad range according to the desired purity and the desired rate of recovery of 2,6-NDCA, and the concentration of each compound is therefore decided mutually in relation to the concentration of the common cation; as for the concentration of crude 2,6-NDCA, however, 50 to 250 g/liter, preferably 110 to 180 g/liter, is practical from the viewpoint of the purification operation. The sixth step is only necessary in the method of purifying by salting out, and is not necessary in purification by the esterification of 2,6-NDCA. Namely, the sixth step is a step of precipitating 2,6-NDCA crystals by adding 2,6-NDCA.dialkali into a mineral acid and bringing the pH of the obtained aqueous solution to lower than 4, preferably to 2 to 3. Namely, the crystals of 2,6-NDCA.dialkali which have been salted out in the fifth step are added into a solution of a mineral acid such as sulfuric acid, hydrochloric acid or nitric acid, the concentration of 2,6-NDCA.dialkali being from 5 to 60% by weight, preferably from 20 to 40% by weight, and the pH of the solution is adjusted to lower than 4, preferably from 2 to 3, to precipitate 2,6-NDCA free acid. By carrying out the solid-liquid separation of the crystals of 2,6-NDCA using a separator, 2,6-NDCA of a purity higher than 98% is obtainable. There is the seventh step, wherein an alkali carbonate or an alkali bicarbonate is added to the filtrate of the fourth step to separate the metals of the catalyst as carbonates and/or basic carbonates, which are returned to the first step, and there is the eighth step, wherein the filtrate of the third step is recycled to the first step. In the seventh step, a solution of an alkali carbonate such as sodium carbonate and potassium carbonate, or a solution of an alkali bicarbonate such as sodium bicarbonate and potassium bicarbonate, of a concentration of from 1 to 34% by weight, preferably from 15 to 25% by weight, is added to the filtrate of the fourth step, and its pH is adjusted to 7 to 10, preferably 7.5 to 9.5, to recover the metals of the catalyst as carbonates and/or basic carbonates, and the recovered carbonates and/or basic carbonates are returned to the first step (the dissolving step). Accordingly, an additional supply of the metals of the catalyst is usually not necessary. By providing a reflux condenser and a distilling tower at the top of the oxidation reaction vessel, it is possible to discharge the water formed by the oxidation reaction out of the system and to reflux the lower aliphatic monocarboxylic acid accompanying the water. Furthermore, after washing the crude 2,6-NDCA crystals obtained in the third step with a lower aliphatic monocarboxylic acid such as acetic acid, propionic acid and butyric acid containing less than 30% by weight of water, thereby removing impurities such as manganese trimellitate from the crystals, the washed crude 2,6-NDCA crystals are heated to a temperature higher than the boiling point of the lower aliphatic monocarboxylic acid and lower than 200° C. to recover the lower aliphatic monocarboxylic acid accompanying the crystals. In the case where the lower aliphatic monocarboxylic acid is acetic acid, the thermal treatment is carried out at a temperature of 120° to 200° C., preferably 130° to 160° C.
Moreover, a part of the filtrate in the eighth step may be supplied to a dehydrating tower and, after removal of the by-produced water in order to control the water balance in the oxidation reaction system, returned to the first step. Still more, it is possible to send a part of the filtrate treated in the dehydrating tower to distillation for recovery of the lower aliphatic monocarboxylic acid, to recycle the recovered lower aliphatic monocarboxylic acid to the first step of dissolving the starting substances, and to send the rest of the recovered solvent to the third step for washing the crude 2,6-NDCA crystals. By subjecting the obtained 2,6-NDCA crystals to the following treatment, the coloring substances can be removed therefrom and, accordingly, the purity of the product can be further improved: (1) A method of treatment wherein an aqueous solution, in which the crystals of 2,6-NDCA.dialkali separated in the fifth step have been dissolved, is subjected to adsorption treatment with activated carbon. (2) A method of treatment wherein the aqueous alkali solution of the crude 2,6-NDCA crystals obtained in the first half of the fifth step is subjected to adsorption treatment with activated carbon. For use in the above treatments, activated carbon of any shape and form may be used, such as particles, granules, globules, crushed forms and powder; however, powdery activated carbon of a large surface area acts effectively. A method of treatment with activated carbon is concretely explained as follows. In the case of carrying out the treatment with activated carbon after salting out in the fifth step, activated carbon may be directly added to an aqueous solution of 2,6-NDCA.dialkali and, after stirring the solution for more than 30 minutes, separated from the solution; however, in order to utilize the activated carbon effectively, it is preferable to pass the solution through a bed packed with activated carbon for carrying out the adsorptive treatment. The temperature at which the adsorptive treatment with activated carbon is performed is from 5° to 100° C., preferably from 10° to 30° C. Moreover, when sodium chloride is present in an amount of from 1 to 3% by weight in the aqueous solution of 2,6-NDCA.dialkali, the adsorptive activity of the activated carbon is increased; accordingly it is possible to reduce the amount of activated carbon used for the purification. For instance, when (1) crude 2,6-NDCA is dissolved in an aqueous solution of sodium hydroxide, (2) the solution is subjected to salting out by using sodium chloride, (3) the obtained crystals are washed with an aqueous solution of sodium chloride, and (4) the crystals are dissolved in water, an aqueous solution of 2,6-NDCA containing sodium chloride in a concentration suitable for the adsorption can be obtained. When such a solution is treated with activated carbon, the consumption of activated carbon can be reduced. By carrying out this adsorptive treatment after the salting-out step, it is possible to obtain colorless 2,6-NDCA of a purity higher than 99.8%. Further, the residue obtained from the bottom of the monocarboxylic acid-recovering tower can be supplied to the seventh step together with the filtrate from the fourth step.
Still more, since the filtrate of the salting out, from which 2,6-NDCA.dialkali has been separated in the fifth step, contains the impurities dissolved therein, the dissolved impurities are deposited and removed by adding a mineral acid having the same anion as that of the water-soluble neutral salt used in the salting-out step and adjusting the pH of the solution to below 3. Then, the salt concentration and the pH of the solution from which the impurities have been removed are adjusted, and the solution is circulated to the salting-out step. By such circulation of the filtrate, an additional supply of the water-soluble neutral salt becomes unnecessary in usual cases. According to the process for producing 2,6-NDCA of the present invention, the atomic ratio of the amount of cobalt used to the amount of cerium used is from 0.03 to 30, and by using such an amount of cerium it becomes possible to reduce the amounts of trimellitic acid and of the bromides of the naphthalene ring, which are difficult to remove from the product by purification. Furthermore, the purity of the crude 2,6-NDCA obtained according to the present invention is higher than 85% by weight, and accordingly the complexity of the purification step can be remarkably reduced as compared with the conventional process. Particularly, since the content of the heavy metal salt of trimellitic acid in the crude 2,6-NDCA is less than 5% by weight, the recovery and re-use of the heavy metals can be performed remarkably more easily than in the conventional process for producing 2,6-NDCA. Still more, although the amount of elementary bromine contained in crude 2,6-NDCA obtained by the conventional process is from 1000 to 4000 ppm, the amount can be reduced to less than 300 ppm according to the process of the present invention. Accordingly, by purifying the crude 2,6-NDCA obtained by the present process, 2,6-NDCA of a purity higher than 98%, with a content of elementary cobalt of less than 3 ppm, a content of elementary manganese of less than 3 ppm, a content of elementary cerium of less than 3 ppm, a content of bromine of less than 3 ppm and an optical density of less than 0.03, can be continuously obtained. By subjecting the thus-purified 2,6-NDCA further to the adsorptive treatment with activated carbon, 2,6-NDCA of a purity higher than 99.8%, with each of the contents of elementary cobalt, manganese and cerium being less than 2 ppm, a content of bromine of less than 2 ppm and an optical density of less than 0.02, can be obtained continuously. In this connection, when the whiteness of a polyester chip obtained by polycondensing ethylene glycol and 2,6-NDCA produced by the present invention is measured with a color difference meter, the value of Hunter scale b is 1.6; namely, a polymer excellent in whiteness can be obtained by using 2,6-NDCA manufactured by the present invention. Moreover, according to the process of the present invention, the waste discharged from the system of the process is mainly the water removed from the water-containing substances, while the metals of the catalyst and the lower aliphatic monocarboxylic acid are re-used in circulation and maintained inside the system.
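The composition limits summarized above (total heavy metals per 100 g of solvent, the cobalt-to-cerium atomic ratio, and the bromine-to-heavy-metal ratio) can be checked numerically for any proposed charge. Below is a minimal sketch, assuming the batch charge of Example 1 given later and standard hydrate molecular weights; the helper code is our own illustration, not part of the specification:

# Sketch: check a catalyst charge against the claimed ranges (Example 1 charge).
MW = {  # g/mol, standard handbook values
    "Co(OAc)2.4H2O": 249.08,
    "Ce(OAc)3.H2O": 335.26,
    "NH4Br": 97.94,
}

solvent_g = 800.0                       # 93% acetic acid
co_mol = 20.0 / MW["Co(OAc)2.4H2O"]     # ~0.080 g-atom Co
ce_mol = 150.0 / MW["Ce(OAc)3.H2O"]     # ~0.447 g-atom Ce
br_mol = 0.6 / MW["NH4Br"]              # ~0.006 g-atom Br

heavy_metals = co_mol + ce_mol          # no manganese in this charge
per_100g = heavy_metals / solvent_g * 100.0
print(f"heavy metals: {per_100g:.3f} g-atom/100 g solvent (spec 0.01-0.15)")
print(f"Co/Ce atomic ratio: {co_mol / ce_mol:.2f} (spec 0.03-30)")
print(f"Br/heavy-metal atomic ratio: {br_mol / heavy_metals:.3f} (spec 0.001-1)")

Run as written, the Example 1 charge comes out at about 0.066 g-atom of heavy metals per 100 g of solvent, a Co/Ce ratio of about 0.18 and a Br/metal ratio of about 0.012, i.e. inside all three claimed ranges, including the preferred ones.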
Furthermore, as 2,6-NDCA.dialkali is hardly soluble in an aqueous solution of a water-soluble neutral salt of the common cation, it is possible to wash 2,6-NDCA.dialkali with an aqueous solution of a suitable concentration of the water-soluble neutral salt, and since the impurities dissolved in the mother liquor and the washings of the salting-out step can be easily removed by adding a mineral acid, the treated mother liquor and washings can be re-used in circulation without causing any accumulation of the impurities; usually, an additional supply of the water-soluble neutral salt is unnecessary. In FIG. 1 of the attached drawings, 2,6-DIPN is supplied through the feed line (1) for 2,6-DIPN to the dissolving vessel (3) for the raw materials, acetic acid and the catalyst mixture are supplied via the feed line (2) to the vessel (3), and 2,6-DIPN is dissolved in acetic acid therein. Next, the acetic acid solution of 2,6-DIPN and the catalyst is continuously supplied via the raw-material feed line (4) to a reaction vessel (5) (for oxidation, titanium-lined and operated under pressure) equipped with a stirrer. Into the reaction vessel (5), compressed air is introduced via the feed line (6). The exhaust gas from the reaction vessel (5) is released out of the system through the line (7) after acetic acid is recovered. The oxidation product is continuously drawn from the reaction vessel (5) via the line (11) and sent to the flash vessel (12); the pressure of the reaction is reduced to the ordinary level in the flash vessel (12) and the temperature of the oxidation product goes down to 90° to 110° C. The solid matter comprising the crude 2,6-NDCA crystals deposited in the flash vessel is isolated by a centrifugal separator (13). The obtained crude 2,6-NDCA crystals are conveyed to the catalyst-extracting vessel (16) via the feed line (15). Into the catalyst-extracting vessel (16), a mineral acid is introduced through the line (17) and the mixture is stirred in the vessel (16) at a temperature of 80° to 90° C. After the extraction of the heavy metals is finished, the slurry in the vessel (16) is sent to a centrifugal separator (18) and subjected to solid-liquid separation therein, and the solid (crude 2,6-NDCA) is washed well with hot water of 80° C. The washed crude 2,6-NDCA is conveyed to the salting-out vessel (20) via the line (solid-matter conveyor) (19) for drawing crude 2,6-NDCA out of the separator (18). Into the salting-out vessel (20), sodium chloride and sodium hydroxide are supplied via the feed line (21), and the mixture is stirred in the vessel (20) at a temperature of 25° to 35° C. The resulting slurry is filtered by the separator (filter press) (22) and the solid matter is washed with an aqueous solution of sodium chloride. The washed solid matter (2,6-NDCA.dialkali) is introduced into the acid-treating vessel (25) via the conveying line (23). Into the acid-treating vessel (25), sulfuric acid is continuously supplied through the feed line (26), adjusting the pH of the solution in the vessel (25) to not more than 4, preferably 2 to 3. After carrying out solid-liquid separation in a separator (filter press) (27) for purified 2,6-NDCA, the solid matter is washed until chloride ion is no longer detected in the washings and then continuously dried in a drier (28) at a temperature of 120° to 150° C. to obtain 2,6-NDCA of a high purity continuously.
On the other hand, the filtrate from the centrifugal separator (13) is recycled to the dissolving vessel (3) via the drawing line (29), the line (38) and the recycling line (39). The extraction filtrate, which is sulfuric acid-acidic and contains the heavy metals of the catalyst, is led to a separating vessel (42) via the line (41). Into the separating vessel (42), an aqueous solution of sodium carbonate is continuously supplied through the feed line (43), and the pH of the solution in the vessel (42) is adjusted to from 7 to 10 to deposit the carbonates and/or basic carbonates of cobalt, manganese and cerium. The carbonates and/or basic carbonates of the heavy metals are subjected to solid-liquid separation in the centrifugal separator (44). The filtrate is neutralized and discharged out of the system via a line (46), and the catalysts are washed with water and returned to the dissolving vessel (3) via the line (45) and the line (39). The filtrate of the salting-out step is subjected to removal of the impurities and then discharged out of the system via the line (47). The filtrate from the separator (27) is neutralized and then discharged out of the system through the line (48). The preferable mode of operation will be explained while referring to FIG. 2 of the drawings, as follows. Into the dissolving vessel (3), 2,6-DIPN is supplied through the feed line (1), acetic acid and the catalyst mixture are supplied from the feed line (2), and 2,6-DIPN is dissolved in the vessel (3). Next, an acetic acid solution of 2,6-DIPN and the catalyst is continuously supplied via the feed line (4) to the reaction vessel (5) equipped with a reflux condenser (8) and a stirrer. Into the reaction vessel (5), operated under pressure, compressed air is introduced via the feed line (6). The exhaust gas from the reaction vessel (5) is led to the reflux condenser (8) via the line (7) and cooled therein. Acetic acid contained in the exhaust gas is recycled to the reaction vessel (5) via the line (9), and the exhaust gas is discharged to the outside via the line (10). The oxidation product is continuously drawn out via the line (11); the pressure of the oxidation reaction is reduced to the ordinary level in the flash vessel (12) and the temperature of the oxidation product goes down to 90° to 110° C. The solid matter comprising crude 2,6-NDCA is separated by the centrifugal separator (13) and the crude 2,6-NDCA is washed with hot acetic acid, the acetic acid used in the washing being separated and recovered in a drying tower (14). The dried crude 2,6-NDCA is conveyed to the catalyst-extracting vessel (16) via the feed line (15). Into the catalyst-extracting vessel (16), a mineral acid is supplied through the line (17), and the mixture in the vessel (16) is stirred at a temperature of 80° to 90° C. After the extraction is finished, the slurry is subjected to solid-liquid separation by the centrifugal separator (18), and the solid matter is washed well with hot water at 80° C. The washed crude 2,6-NDCA is conveyed to the salting-out vessel (20) via the line (solid-matter conveyor) (19) for drawing crude 2,6-NDCA out of the separator (18). Into the salting-out vessel (20), a recovered filtrate containing a water-soluble neutral salt of an alkali metal is supplied through the line (49), and an aqueous solution of sodium chloride and an alkali hydroxide is supplied through the feed line (21). The materials in the salting-out vessel are stirred at a temperature of 25° to 35° C.
After filtering the slurry formed by the salting out with the separator (filter press) (22), the solid matter (2,6-NDCA.dialkali) is washed with an aqueous solution of sodium chloride. The obtained 2,6-NDCA.dialkali is dissolved in warm water, introduced via the line (23) into the decolorizer (24) for treatment with activated carbon, and subjected to decoloring treatment in the vessel (24). The decolored solution is introduced into the acid-treating vessel (25). Into the vessel (25), sulfuric acid is continuously supplied through the feed line (26), and the pH of the solution in the vessel (25) is adjusted to not more than 4, preferably 2 to 3. After carrying out the solid-liquid separation in the separator (27) (filter press), the solid matter (purified 2,6-NDCA) is washed with water until no chloride ion is detected and then continuously dried by the dryer (28) at a temperature of 120° to 150° C. to obtain 2,6-NDCA of a high purity continuously. Further, since only the method of purification performed by salting out is described in FIGS. 1 and 2, the case where the purification is performed by the dimethyl ester method will be explained as follows. The crude 2,6-NDCA washed with hot water in the centrifugal separator (18) is conveyed to the vessel for esterification and mixed with methanol, and after the addition of sulfuric acid to the mixture, the esterification is carried out in the vessel. The crystals precipitated upon cooling the reaction mixture are subjected to solid-liquid separation and, after washing with methanol, dried in a drier to obtain the refined dimethyl ester of 2,6-NDCA. After neutralizing the filtrate with an alkali, for instance sodium hydroxide or calcium carbonate, the insoluble matter in the neutralized filtrate is separated and removed, and the treated filtrate is subjected to distillation in a methanol-recovering tower. A part of the recovered methanol is used for washing the dimethyl ester of 2,6-NDCA and the rest is re-used in circulation in the vessel for esterification. Returning to the explanation of the drawings, the filtrate from the centrifugal separator (13) is, via the drawing line (29) for the oxidation filtrate, partly returned to the dissolving vessel (3) via the line (38) and the recycle line (39), and the rest of the filtrate is introduced into the dehydrating tower (31) via the line (30), the water formed by oxidation of 2,6-DIPN being discharged to the outside via the drawing line (32). The liquid from which water has been removed is drawn out of the dehydrating tower (31) through the line (33) and a part of the liquid is returned to the dissolving vessel (3) via the line (37) and the recycle line (39). The rest of the liquid is led to the acetic acid-recovering tower (35) via the line (34) for the recovery of acetic acid. A part of the recovered acetic acid is used for washing the cake in the centrifugal separator (13) and the rest of the recovered acetic acid is recycled to the dissolving vessel (3) via the line (36) and the recycle line (39). The residue remaining at the bottom of the acetic acid-recovering tower (35) contains a part of the heavy metals of the catalyst and alkali-soluble organic by-products, and is introduced into the vessel (42) via the line (40) to recover the heavy metals. The extraction filtrate, which contains the heavy metals of the catalyst and is sulfuric acid-acidic, is led to the vessel (42) for separating the heavy metals via the line (41).
Into the vessel (42), an aqueous solution of sodium (bi)carbonate is continuously added through the feed line (43) and the pH of the solution in the vessel (42) is adjusted to from 7 to 10 to precipitate the carbonates and/or basic carbonates of cobalt, manganese and cerium. The precipitate is subjected to solid-liquid separation by the separator (centrifugal separator) (44). The separated filtrate is discharged to the outside via the line (46) and the solid (catalyst) is washed with water and recycled to the dissolving vessel (3) via the lines (45) and (39). The filtrate of the salting out is introduced into the acid-treating vessel (50) via the line (47). Into the vessel (50), a mineral acid is supplied through the line (52), and the pH of the mixture in the vessel (50) is adjusted to lower than 3 while stirring the mixture. After subjecting the mixture to solid-liquid separation by the centrifugal separator (51) to remove the solidified impurities, the filtrate from the separator (51) is returned to the salting-out vessel (20) via the line (49). The filtrate from the separator (27) for purified 2,6-NDCA is neutralized and then discharged to the outside via the line (48). The process for producing 2,6-NDCA according to the present invention will be concretely explained while referring to the following non-limitative Examples. In Example 1 and the Comparative Example, the oxidation of 2,6-DIPN was carried out batchwise to show the effect of cerium clearly, and the crude 2,6-NDCA formed was analyzed. Examples 2 and 3 show the continuous process for producing 2,6-NDCA according to the present invention, and Example 4 explains the method for purifying the crude 2,6-NDCA by the methyl ester process. Further, the quantitative analyses of 2,6-NDCA and trimellitic acid were carried out by high-performance liquid chromatography, and the quantitative analyses of the heavy metals were carried out by the ICP analytical method. The elementary analysis of bromine was carried out by the X-ray fluorescence analytical method, and the colored material was analyzed by the OD value (500 nm) of the solution in methylamine.
(1) High-performance liquid chromatography. Apparatus: HPLC analytical apparatus made by Waters Co., Model 510. Column: a connected column consisting of Lichrosorb® (5 μm, made by Merck Co.) and a Radialpack® cartridge C-8 (made by Waters Co.). Mobile phase: a 45:55 (by volume) mixture of water of pH 3 and acetonitrile, at a flow rate of 0.6 cc/min. Internal standard substance: 2-naphthoic acid. Wavelength for detection: 260 nm.
(2) X-ray fluorescence analytical method. Apparatus: X-ray fluorescence analytical apparatus (made by RIGAKU-DENKI Co., Model 3080 E 2). X-ray tube: rhodium (under 50 kV and 50 mA). Detector: PC detector. Crystals: germanium. The specimen (10 g) was processed into tablets of 30 mm in diameter and subjected to the analysis. Detection limit: 3 ppm.
(3) Analysis of the colored component: Into 10 ml of an aqueous 25% solution of methylamine, 1 g of a specimen was dissolved, and the optical density of the solution was measured in a quartz cell of 10 mm thickness with light of 500 nm in wavelength.
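The HPLC assay above uses 2-naphthoic acid as an internal standard; quantitation by the internal-standard method is conventional chromatographic practice. A minimal sketch of that arithmetic follows; the response factor and peak areas are invented for illustration, as the specification does not give them:

# Internal-standard quantitation (conventional formula; numbers are hypothetical).
def amount_analyte(area_analyte, area_istd, amount_istd_mg, response_factor):
    """mg analyte = RF * (A_analyte / A_istd) * mg internal standard."""
    return response_factor * (area_analyte / area_istd) * amount_istd_mg

# Hypothetical run: analyte peak 2.4e6 counts, 2-naphthoic acid peak 1.0e6 counts,
# 10 mg of internal standard added, calibrated response factor 0.95.
print(amount_analyte(2.4e6, 1.0e6, 10.0, 0.95))  # -> 22.8 mg of 2,6-NDCA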
EXAMPLE 1
Into a titanium-lined stainless-steel autoclave of a capacity of 2 liters, equipped with a reflux condenser, a gas-blowing tube, a discharge tube, a temperature-measuring tube and an electromagnetic stirrer, 800 g of 93% acetic acid, 20 g of cobalt acetate tetrahydrate, 150 g of cerium acetate monohydrate, 0.6 g of ammonium bromide and 55 g of 2,6-DIPN were introduced. Next, the mixture was oxidized for about 6.5 hours while being maintained at a temperature of 180° C. and stirred, with compressed air blown into the mixture at a rate of 600 liters/hour under a pressure of 20 kg/cm². After the reaction was over, the thus-formed mixture was cooled to 60° C., and after collecting the deposited material by filtration, the collected material was washed with acetic acid and dried to obtain 46.5 g of crude crystals. On carrying out each of the analyses, it was found that the content of 2,6-NDCA in the crude crystals was 95.7%, the content of the heavy metal salt of trimellitic acid was 1.42%, the OD value of the 25% solution of the crude crystals in methylamine, which is the index of the content of the colored material, was 0.82, and the content of bromine was 148 ppm. The yield of 2,6-NDCA out of the 2,6-DIPN was 79.5% on a molar basis.
COMPARATIVE EXAMPLE
Into the same reaction apparatus as in Example 1, 800 g of 93% acetic acid, 70 g of cobalt acetate tetrahydrate, 130 g of manganese acetate tetrahydrate, 6 g of ammonium bromide and 55 g of 2,6-DIPN were introduced, and after bringing the content of the autoclave into reaction under the same conditions as in Example 1, the reaction mixture was treated in the same manner as in Example 1 to obtain 50.5 g of crude 2,6-NDCA. On carrying out the same analyses as in Example 1 on the crude 2,6-NDCA, it was found that the purity was 78.5%, the content of bromine was 3765 ppm and the OD value was 0.85. The yield of 2,6-NDCA out of the 2,6-DIPN was 70.8% on a molar basis.
EXAMPLE 2
Into a 50 liter titanium-lined stainless-steel autoclave equipped with a stirrer and a reflux condenser were supplied 2,6-DIPN, 6-isopropyl-2-naphthoic acid (an oxidized intermediate of 2,6-DIPN), acetic acid, cobalt acetate tetrahydrate, cerium acetate monohydrate and ammonium bromide at the respective rates of 1000 g/hour, 20 g/hour, 6940 g/hour, 600 g/hour, 705 g/hour and 16 g/hour via the feed line (4) for the raw materials. Then, by introducing compressed air at a rate of 6 Nm³/hour into the reactor (5) via the feed line (6), 2,6-DIPN and 6-isopropyl-2-naphthoic acid were oxidized at 180° C. under a pressure of 9 kg/cm². The exhaust gas from the reactor (5) was cooled by the reflux condenser (8), and the acetic acid and water contained in the exhaust gas were recycled to the reactor (5) via the line (9). The oxidation product was continuously drawn off through the line (11); the pressure of the reaction was reduced to ordinary pressure in the flash vessel (12) and the temperature of the product went down to about 100° C. in the vessel (12). The solid material consisting mainly of 2,6-NDCA was separated by the centrifugal separator (13), washed with hot acetic acid at a rate of 2000 g/hour and sent to the drier (14), wherein about 600 g/hour of water-containing acetic acid was separated and recovered and 910 g/hour of dry, crude 2,6-NDCA crystals were obtained. The dry, crude 2,6-NDCA crystals were led to the catalyst-extracting vessel (16) via the line (15).
The content of 2,6-NDCA in the crude crystals in the line (15) was about 85% by weight. On the other hand, the filtrate and washings (about 10 kg/hour) from the centrifugal separator (13) were discharged via the drawing line (29), and most of them were led to the dehydrating tower (31) via the line (30) and, after removal of the water (about 500 g/hour) formed by oxidation of 2,6-DIPN, were returned to the dissolving vessel (3) for the raw materials. About a third by volume of the liquid drawn from the bottom of the dehydrating tower (31) via the line (33) was led to the acetic acid-recovering tower (35) via the line (34) and acetic acid was recovered from the liquid; most of the recovered acetic acid (2000 g/hour) was used for washing the cake in the centrifugal separator (13), and the rest was returned to the dissolving vessel (3) via the line (36). Further, the rest of the liquid remaining in the bottom of the dehydrating tower (31) was also returned to the vessel (3). The residue drawn from the bottom of the acetic acid-recovering tower (35) contained a part of the catalyst and alkali-soluble organic by-products and was led to the catalyst-separating vessel (42) via the line (40). On the other hand, into the dissolving vessel (3), 2,6-DIPN was added at a rate of 1000 g/hour via the line (1), and acetic acid, cobalt acetate tetrahydrate, cerium acetate monohydrate and ammonium bromide were added at the respective rates of 347 g/hour, 3 g/hour, 4 g/hour and 2 g/hour through the make-up line (2), and the thus-introduced materials were dissolved at 80° to 90° C. under a nitrogen atmosphere. Into the catalyst-extracting vessel (16), 4% sulfuric acid was supplied at a rate of 2300 g/hour and the mixture was stirred at 85° C. The slurry, after the extraction was finished, was subjected to solid-liquid separation in the centrifugal separator (18), and the solid matter was washed well with water at 80° C. (the average amount of washing water being 1900 g/hour). The water content of the crude 2,6-NDCA crystals after being washed with water was about 45% by weight and the amount of the wet crystals was about 1500 g/hour. The water-containing crystals were supplied to the salting-out vessel (20) via a solid-conveying apparatus (19), and an aqueous alkali solution containing 19% by weight of sodium chloride and 6% by weight of sodium hydroxide was supplied to the same vessel (20) at a rate of 4900 g/hour. Upon stirring the supplied materials in the vessel (20) at a temperature of 30° C., 2,6-NDCA.2Na was salted out. The slurry formed by the salting out was subjected to filtration in the filter press (22) and the obtained solid matter was washed with an aqueous 18% solution of sodium chloride. The filtrate of the filter press (22) was subjected, through the line (47), to acid treatment in the vessel (50), the deposited impurities were removed by the separator (51), and thereafter the filtrate was returned to the salting-out vessel (20) via the line (49). The crystals obtained by the salting out were dissolved in 5900 g/hour of water and, after passing through the activated carbon tower (24) for decoloration via the line (23), the solution was supplied to the acid-treatment vessel (25). Separately, 30% sulfuric acid was supplied to the vessel (25) continuously at 1300 g/hour from the line (26) to control the pH of the content of the vessel (25) to lower than 3, thereby effecting the acid treatment (deposition of crystals from the solution by adding an acid).
The milk-white and gruel-like slurry formed by the acid treatment was sent to the filter press (27) and, after subjecting the slurry to solid-liquid separation, the solid matter was washed with 6000 g/hour of water and then continuously dried in a drier (28) at a temperature of 140° C. to obtain 2,6-NDCA of a high purity at a rate of 720 g/hour. The filtrate from the filter press (a separator for the refined 2,6-NDCA) (27) was neutralized and then discharged through the line (48) to the outside. The purity of the obtained, purified 2,6-NDCA was 99.8%, the OD value (optical density) of the solution in methylamine, which represents the content of the colored component, was 0.013, and bromine was not detected in the purified 2,6-NDCA. The yield of 2,6-NDCA of a high purity out of the 2,6-DIPN supplied was 70.7% on a molar basis. The filtrate from the extraction was sent to the catalyst-separating vessel (42) via the line (41), together with the residue from the bottom of the acetic acid-recovering tower (35) via the line (40). Into the vessel (42), an aqueous 25% solution of sodium carbonate was continuously introduced from the line (43) while stirring the content of the vessel (42) to control the pH of the content thereof to 9.5, thereby depositing the mixture of the basic carbonates of cobalt and cerium. The slurry of the basic carbonates of cobalt and cerium was subjected to solid-liquid separation by the centrifugal separator (44), and the filtrate was neutralized and then discarded via the line (46). The crystals were washed with water and then sent to the dissolving vessel (3) at an average rate of 30 g/hour, wherein the catalysts were stirred to be regenerated as the acetates of cobalt and cerium.
EXAMPLE 3
Into a 50 liter titanium-lined stainless-steel autoclave equipped with a stirrer and a reflux condenser were supplied 2,6-DIPN, 6-isopropyl-2-naphthoic acid (an oxidized intermediate of 2,6-DIPN), acetic acid, cobalt acetate tetrahydrate, manganese acetate tetrahydrate, cerium acetate monohydrate and ammonium bromide at the respective rates of 1000 g/hour, 20 g/hour, 6940 g/hour, 270 g/hour, 1310 g/hour, 230 g/hour and 7 g/hour via the feed line (4). Simultaneously, compressed air was supplied at a rate of 6 Nm³/hour to the reactor (5) via the feed line (6), and 2,6-DIPN and 6-isopropyl-2-naphthoic acid were oxidized at 180° C. under a pressure of 9 kg/cm² G. The exhaust gas from the reactor (5) was cooled by the reflux condenser (8), and the acetic acid and water contained in the exhaust gas were recycled to the reactor (5) via the line (9). The oxidation product was continuously drawn off through the line (11); the pressure of the reaction was reduced to ordinary pressure in the flash vessel (12) and the temperature of the product went down to about 100° C. in the vessel (12). The solid material consisting mainly of 2,6-NDCA was separated by the centrifugal separator (13), washed with hot acetic acid at a rate of 2000 g/hour and sent to the drier (14), wherein about 630 g/hour of water-containing acetic acid was separated and recovered, and 980 g/hour of dry, crude 2,6-NDCA crystals were obtained. The dry, crude 2,6-NDCA crystals were led to the catalyst-extracting vessel (16) via the line (15). The content of 2,6-NDCA in the crude crystals in the line (15) was about 85% by weight.
On the other hand, the filtrate and washings (about 11 kg/hour) from the centrifugal separator (13) were discharged via the drawing line (29), and most of them were led to the dehydrating tower (31) via the line (30) and, after removal of the water (about 500 g/hour) formed by oxidation of 2,6-DIPN, were returned to the dissolving vessel (3). About a third by volume of the liquid drawn from the bottom of the dehydrating tower (31) via the line (33) was led to the acetic acid-recovering tower (35) via the line (34) and acetic acid was recovered from the liquid; most of the recovered acetic acid (2000 g/hour) was used for washing the cake in the centrifugal separator (13), and the rest was returned to the dissolving vessel (3) via the line (36). Further, the rest of the liquid remaining in the bottom of the dehydrating tower (31) was also returned to the vessel (3). The residue drawn from the bottom of the acetic acid-recovering tower (35) contained a part of the catalyst and alkali-soluble organic by-products and was led to the catalyst-separating vessel (42) via the line (40). On the other hand, into the dissolving vessel (3), 2,6-DIPN was added at a rate of 1000 g/hour via the line (1), and acetic acid, cobalt acetate tetrahydrate, manganese acetate tetrahydrate, cerium acetate monohydrate and ammonium bromide were added at the respective rates of 347 g/hour, 2 g/hour, 5 g/hour, 2 g/hour and 1.3 g/hour through the make-up line (2), and the thus-introduced materials were dissolved at a temperature of 80° to 90° C. under a nitrogen atmosphere. Into the catalyst-extracting vessel (16), 4% sulfuric acid was supplied at a rate of 2300 g/hour and the mixture was stirred at a temperature of 85° C. The slurry, after the extraction was finished, was subjected to solid-liquid separation in the centrifugal separator (18), and the solid matter was washed well with water at 80° C. (the average amount of washing water being 1900 g/hour). The water content of the crude 2,6-NDCA crystals after being washed with water was about 45% by weight and the amount of the wet crystals was about 1600 g/hour. The water-containing crystals were supplied to the salting-out vessel (20) via a solid-conveying apparatus (19), and an aqueous alkali solution containing 19% by weight of sodium chloride and 6% by weight of sodium hydroxide was supplied to the same vessel (20) at a rate of 4900 g/hour. Upon stirring the supplied materials in the vessel (20) at a temperature of 30° C., 2,6-NDCA.2Na was salted out. The slurry formed by the salting out was subjected to filtration in the filter press (22) and the obtained solid matter was washed with an aqueous 18% solution of sodium chloride. The filtrate of the filter press (22) was subjected, through the line (47), to acid treatment in the vessel (50), the deposited impurities were removed by the separator (51), and thereafter it was returned to the salting-out vessel (20) via the line (49). The crystals obtained by the salting out were dissolved in 6000 g/hour of water and, after passing through the activated carbon tower (24) for decoloration via the line (23), the solution was supplied to the acid-treatment vessel (25). Separately, 30% sulfuric acid was supplied to the vessel (25) continuously at a rate of 1300 g/hour from the line (26) to control the pH of the content of the vessel (25) to lower than 3, thereby effecting the acid treatment.
The milk-white and gruel-like slurry formed by the acid treatment was sent to the filter press (27) and, after subjecting the slurry to solid-liquid separation, the solid matter was washed with 6000 g/hour of water and then continuously dried in the drier (28) at a temperature of 140° C. to obtain 2,6-NDCA of a high purity at a rate of 760 g/hour. The filtrate from the filter press (a separator for the purified 2,6-NDCA) (27) was neutralized and then discharged through the line (48) to the outside. The purity of the obtained, purified 2,6-NDCA was 99.8% and the OD value of the solution in methylamine, which represents the content of the colored component, was 0.012. Bromine was not detected in the purified 2,6-NDCA. The yield of 2,6-NDCA of a high purity out of the 2,6-DIPN supplied was 77.5% on a molar basis. The filtrate from the extraction, which was sulfuric acid-acidic and contained the catalyst, and the washings were sent to the catalyst-separating vessel (42) via the line (41), together with the residue from the bottom of the acetic acid-recovering tower (35) via the line (40). Into the vessel (42), an aqueous 25% solution of sodium carbonate was continuously introduced from the line (43) while stirring the content of the vessel (42) to control the pH of the content thereof to 9.5, thereby depositing the mixture of the basic carbonates of cobalt, manganese and cerium. The slurry of the basic carbonates of cobalt, manganese and cerium was subjected to solid-liquid separation by the centrifugal separator (44), and the filtrate was neutralized and then discarded via the line (46). The crystals were washed with water and then sent to the dissolving vessel (3) at an average rate of 30 g/hour, wherein the catalysts were stirred to be regenerated as the acetates of cobalt, manganese and cerium.
EXAMPLE 4
Into a glass autoclave of an inner volume of 1 liter, 350 g of methanol and 50 g of crude 2,6-NDCA were introduced, and while stirring the content of the autoclave, 11 g of concentrated sulfuric acid were added into the autoclave, and the reaction was carried out for 5 hours under pressure after raising the temperature of the content to 127° to 130° C. After the reaction was over, the content of the autoclave was cooled to 30° C., and the deposited crystals of the dimethyl ester of 2,6-NDCA were separated from the reaction mixture and washed with 80 g of methanol. Upon drying the crystals at a temperature of 120° C., 47.3 g of the dimethyl ester of 2,6-NDCA were obtained. The purity of the dimethyl ester of 2,6-NDCA was 98.1% and the yield thereof out of the crude 2,6-NDCA was 92.0% on a molar basis.
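The molar yields quoted for Examples 1 and 2 can be reproduced from the quoted masses, rates and assays. A minimal check, assuming standard molecular weights (about 212.3 g/mol for 2,6-DIPN and 216.2 g/mol for 2,6-NDCA); the small discrepancy for Example 2 is rounding:

# Reproduce the molar yields quoted in Examples 1 and 2.
MW_DIPN, MW_NDCA = 212.33, 216.19   # g/mol, standard values

# Example 1 (batch): 55 g 2,6-DIPN in; 46.5 g crude crystals at 95.7% assay out.
mol_fed = 55.0 / MW_DIPN
mol_made = 46.5 * 0.957 / MW_NDCA
print(f"Example 1: {100 * mol_made / mol_fed:.1f}%")        # -> 79.5%

# Example 2 (continuous, per hour): 1000 g/h 2,6-DIPN in;
# 720 g/h purified product at 99.8% assay out. The quoted yield is expressed
# "out of the 2,6-DIPN supplied", so the small 6-isopropyl-2-naphthoic acid
# co-feed is excluded from the denominator.
mol_fed_h = 1000.0 / MW_DIPN
mol_made_h = 720.0 * 0.998 / MW_NDCA
print(f"Example 2: {100 * mol_made_h / mol_fed_h:.1f}%")    # -> 70.6% (70.7% quoted)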
LONDON (ICIS)--Logistics issues in Europe are continuing to bite for chemicals players, with driver shortages and port congestion straining already tight supply lines, although buyers may be starting to adapt to longer lead times for orders, according to sources. Europe has been hit by a shortage of heavy goods vehicle (HGV) drivers, resulting in delays and added expense in transporting chemicals. The situation is most dramatic in the UK, where the loss of access to EU drivers has created logjams in truck capacity and panic-buying of fuel. “Truck drivers are short… especially for UK. Supply is a problem because of missing drivers,” said an isocyanates and polyols buyer. Delays caused by a lack of driver access, or by COVID-19 infection, are causing unplanned production halts in some cases, according to a chlorvinyls industry player. “We are covered with our contracts, the only concern is the drivers,” it said. “A driver was about to load to serve to Glasgow [UK] site, the driver tested positive for COVID-19, [and] it took half a day to get an alternative truck driver to the site. Our production had to close down for a few hours.” LIMITED POLICY RESPONSE The UK has responded with 5,000 temporary emergency visas for EU drivers, after resisting the move earlier this year, but trade groups have questioned whether short-term visas will be sufficient to attract workers. “Europe is 400,000 drivers short, Poland has a 120,000 driver shortage, why would a European person come to the UK for a temporary job for three months?” Chemical Business Association chief Tim Doggett told ICIS in late September. The visa terms were later extended to six months. The driver shortage is also being felt in the rest of Europe, to a lesser extent, according to Dorothee Arns, director general of the European Association of Chemical Distributors (Fecc). “In continental Europe, the driver shortage is noticeable, although not as pronounced as in the UK. It is more a subtle development, which will exacerbate in the next years when the baby boomers are approaching their retirement age,” she told ICIS. Unlike in the UK, the situation has not deteriorated further in continental Europe, she added, but this may be a result of other shortages and price spikes up and down the value chain. “We have not heard that the situation became more severe over the summer,” she said. “The reason why several car-manufacturing or other industrial manufacturing sites – especially in Germany – had to close down temporarily is more due to the lack of semiconductors, certain raw materials or semi-finished goods.” Current spikes in European energy prices have reduced the attractiveness of producing lower-margin commodities in the region, resulting in output cuts. “In some cases chemical plants have reduced the production of above-average energy-intense chemical substances because of the recent hikes in natural gas prices,” Arns said. HEATING UP The sharp rebound in European manufacturing has softened slightly in recent months on the back of lengthening lead times and difficulty sourcing inputs, as well as driving substantial increases in inflation. Higher demand has weighed on already disordered global supply chain networks, where different regional paces of economic and demand recovery through the pandemic left ships and containers often in the wrong places, sending costs soaring more than tenfold for some routes compared to mid-2020.
The intensified demand, which is only expected to tick up further in the approach to the Christmas period, has increased port congestion at many of Europe’s key chemicals trade hubs. “Port congestion is an issue still,” said an epoxy resins trader. “Felixstowe [UK] is especially bad, but we also see problems at Rotterdam [Netherlands]. Lack of drivers also contributes to this… [with a] build-up of empty containers that can’t be moved and delays to onward transport.” Conditions are reportedly becoming so strained at the port of Felixstowe that operators are diverting large cargoes elsewhere due to congestion from a lack of drivers to move containers off the docks. MARKET VERSATILITY While there is no end in sight to some of the logistics issues, players are becoming more accustomed to dealing with them, despite extremely limited capacity to build up inventory in the tighter value chains. “I think customers are adapting to extended lead times, so, [there is] less panic buying even though availability is still tight and inventories cannot be built,” the epoxy trader said. In other cases, players can obtain raw materials but find themselves unable to send out their products. “We're able to get raw materials, are able to produce everything in time, and then can’t dispatch,” said an isocyanates player. Other operators are dealing with the issue by building up stores of product near the sites where it will be required, a move away from the ‘just in time’ approach to logistics, but one that increases costs along the chain. “We found a solution short term… we will store product [in the UK] to secure supply,” said the isocyanates and polyols buyer. “We expect the supplier to pay storage, because stopping… our production or [that] of the OEM [original equipment manufacturer] is more expensive than to store TDI, MDI or polyols locally.” “It is the better solution to have material there instead of stopping OEM production, [which can cost] millions,” it added. With warehouse space limited, particularly for hazardous chemicals, a more widespread shift to increased storage may stretch availability that is already likely to be tight. With conditions likely to remain volatile into 2022 and probably beyond, chemicals players will need to continue to adapt to the challenges ahead.
https://www.icis.com/explore/resources/news/2021/10/12/10693958/europe-logistics-disruption-continues-to-bite-for-chemicals-players
Sabicas was a tremendous genius of his day, not only in technique but in his major contributions, playing flamenco previously unimaginable and giving new tools and possibilities to the solo instrument. He brought this art to concert halls and major theaters where audiences of all classes could enjoy it. Notes: Flamenco. 133 bars. 11 pages. Time signature: 4/4. Key of A. Rasgueados. Tremolo at measures 16-25. Provided by Richard Lewis.
http://www.classclef.com/columbiana-sabicas/
INTERNET OF THINGS AND BIG DATA: VISION AND CONCRETE USE CASES
Starting with a keynote from the leading industry analysts at Machina Research, this webinar provides an introduction to the vision of using the potential provided by modern Big Data and NoSQL technologies in IoT applications. Three concrete IoT use cases will be discussed and their different requirements highlighted. Finally, we will introduce the 5 key capabilities of data management technologies. MongoDB and Bosch Software Innovations have collaborated to build a powerful IoT application platform to support these new IoT (Internet of Things) business models. Watch our three-part webinar series where we provide an overview of use cases, key capabilities, best practices and implementation examples for leveraging the Big Data and NoSQL features of this IoT platform. For more information and to register, visit the event website.
https://theinternetofthings.report/view-events.aspx?EventID=30