Since the election of Donald Trump as president of the United States, I have been reading a lot about the American Civil War, which suddenly feels extremely relevant – especially after Charlottesville. In my last post, I mentioned David Blight’s Race and Reunion, a study of the memory of the Civil War in the fifty years following the Emancipation Proclamation in 1863. It brilliantly shows how, during that period, a sectional reconciliation took place that was based on Southern terms and thus entrenched racism in America. But as a foreign policy analyst, I was particularly interested in Blight’s discussion of American imperialism at a time of worsening race relations in the 1890s, which, it seemed to me, raises difficult questions about the relationship between U.S. foreign policy and race relations in America.
Category Archives: international relations
Competition and “constitutionalization”
There has been much discussion of the role of ordoliberalism in Germany’s approach to the euro crisis (see for example this paper by two former colleagues at the European Council on Foreign Relations and this paper by my former Transatlantic Academy colleague Wade Jacoby). But of course the story of how German ideas have influenced the European Union does not begin with the Greek crisis in 2010. It is well known that the European Central Bank (ECB) reflects the values of the Bundesbank. (Actually, it doubles down on them – the ECB is even more independent, and has an even tighter focus on price stability, than the Bundesbank – see this explainer.) Less well known, though, is the way German ideas on competition policy that go back to ordoliberalism have shaped European integration since its beginnings in the 1950s. You might almost say that competition policy is the missing link between histories of ordoliberalism and the EU.
On rules
There seems to be a lot of discussion about rules these days. In particular, among foreign policy analysts, rules come up both in discussions about the liberal international order and in discussions about the eurozone. But it is striking to me how disconnected the two discussions are – and how differently rules are seen in each case. In discussions about the liberal international order, rules are widely seen as a good thing because they are thought of as an alternative to relations between states based simply on power. But in discussions about the eurozone, rules are seen by many as being much more problematic. In particular, critics of the German view, which emphasises rules over discretion (see Brunnermeier, James and Landau on this), see them as essentially post-democratic. So are rules a good or bad thing?
History and policy
In a thought-provoking recent paper, Petri Hakkarainen asks an important question: what role should history play in foreign policy? (The paper is part of the History and Policy-Making Initiative, which was launched jointly by the Geneva Centre for Security Policy and the Graduate Institute in 2015.) Hakkarainen, a Finnish diplomat, argues that “we seem to be living in increasingly ahistorical times, dominated by myopic presentism” – in other words that policymaking is insufficiently informed by history. “The tendency to see arising policy challenges as one-off events, detached from the past, not only misleads us in the present but also blurs our vision ahead.” At the same time, historical analogies can also be misleading. So how can policy be informed by history without being misled by it?
A pre-history of Indian foreign policy
Srinath Raghavan’s India’s War is a brilliant synthesis of the history of India’s struggle for independence and the history of World War II. In most versions of the story of how India finally became independent in 1947, the impact of World War II tends to be underplayed. Conversely, in most histories of World War II, India’s role in it tends to be underplayed. What Raghavan does in India’s War is to bring these two histories together. That alone makes this an important book – a contribution to an emerging global history that simultaneously connects and challenges Western narratives centred on the war and non-Western narratives centred on decolonisation. But it seems to me that India’s War is also more than that. It can be seen as a kind of pre-history of India’s post-independence foreign policy. As such, it illuminates not just the past but the present and the future.
The West and the anti-West
Last month I took part in a workshop run by the Transatlantic Academy in Washington on the development of the relationship between China and Russia – and its consequences for the West. Immediately after the European Union and the United States imposed sanctions on Russia following the annexation of Crimea, President Vladimir Putin signed a series of trade deals with China, including a $400 billion deal to export Russian gas to China. Since then, the two countries have also agreed to “co-ordinate” the development of the Eurasian Economic Union and the Silk Road Economic Belt. Beyond China’s need for energy and Russia’s need to replace trade with, and investment from, Europe, the two countries also share an interest in challenging U.S. power and in creating a “multipolar world”. So should the West worry about this relationship? And if so, how should it respond?
International relations in Europe
Since my book, The Paradox of German Power, came out, I’ve had some interesting discussions about the implicit assumptions about the nature of international relations in Europe on which it is based. In particular, especially in Germany, some have questioned whether the concepts I use make sense in the context of the European Union. The EU, they argue, has transformed international politics into domestic politics. So does it make sense to use concepts like hegemony in this context? Thus discussion of the “German question” – a phrase that implies continuity with pre-World War II Europe – inevitably raises broader questions about how to understand the way in which international politics in Europe has changed. How exactly has European integration transformed relations between European states?
A German or European question?
Since the euro crisis began five years ago, there has been much discussion of a return of the “German question” – though few of the commentators or analysts who have used the term have explicitly defined the new version of the “German question” or clearly explained what it has to do with the original (that is, pre-1945) “German question”. The argument in my book, The Paradox of German Power, is that what defined the “German question” between 1871 and 1945 was Germany’s position of “semi-hegemony” in Europe. It seems to me that since reunification in 1990 Germany has returned to something like this position of “semi-hegemony” – as some German historians such as Dominik Geppert have also argued. At the same time, there is no danger of war as there was between 1871 and 1945. So does it even make sense to speak of a “German question” in the current context?
Infrastructure and the “inevitable analogy”
In a recent article on relations between China and India by the historian Srinath Raghavan, I was struck by the following line: “Not since the late 19th century has infrastructure been so prominent an issue in great power relations.” Raghavan had in mind the Asian Infrastructure Investment Bank, China’s new alternative to the Asian Development Bank, which will be used to fund projects such as the 21st Century Maritime Silk Road and the Silk Road Economic Belt. China presents them as futuristic projects that exemplify a “win-win” logic in international relations. But perhaps, as Raghavan’s reference to the late nineteenth century suggests, they are more old-fashioned and zero-sum than China’s liberal rhetoric suggests. In particular, the Silk Road Economic Belt – which will run from China through Central Asia to Europe – reminds me of nothing so much as the Berlin-Baghdad railway.
Memory and security in Asia
Since taking part in a study trip to Tokyo (which prompted me to write another post on Japan and the concept of “civilian power”) over the summer, I’ve been thinking a lot about the role of collective memory in international relations in Asia. In Tokyo, where we spent a week in discussions with policymakers and analysts from all over Asia, we talked a lot about history and the role it plays in tensions between Asian countries. In particular, there is an ongoing dispute between China and Japan over the Japanese occupation of Manchuria in 1931 and the Nanking massacre in 1937. This is particularly important because it plays into the dispute between China and Japan over the Senkaku islands, which the Chinese call the Diaoyu. There are also acrimonious disputes between Japan and Korea over issues such as the “comfort women” the Japanese forced into sexual slavery during World War II. | https://hanskundnani.com/category/international-relations/ |
This summer is beginning to look more normal. Wolf Trap announced its lineup of outdoor concerts, which kicks off June 18.
The venue released a calendar of more than 30 concerts. In honor of its 50th anniversary, there will be a “Fifty Years Together” dinner and concert event. The star-studded evening includes performances by Wolf Trap Opera alumna Christine Goerke, Van Cliburn International Piano Competition winner Joyce Yang, and Tony, Emmy, and Grammy Award-winning vocalist Cynthia Erivo.
In the last week of June, there are select concerts for an “invited audience,” during which Wolf Trap will host healthcare workers, volunteers and educators in the community for complimentary performances in honor of their work during the pandemic.
All tickets go on sale at 10 a.m. on May 7. Note that tickets are only available in groups of 2-8; no single tickets will be sold. The Opera UNTRAPPED online streaming series will continue.
Here’s the schedule: | https://www.washingtonian.com/2021/04/27/wolf-trap-just-announced-a-summer-full-of-actual-in-person-concerts/ |
Batiashvili completed her Bachelor’s degree in Fine Art at the Tbilisi State Academy of Arts in 1997 and has lived and worked in Tbilisi, Georgia since then.
Batiashvili works in graphical drawing, oil painting, photography and film. Her exuberant work depicts scenes from the everyday. Through this subject matter, Batiashvili poses questions around the existential issues of life and death. In her work, everyday life resembles a playground where various scenes are performed. Ostensibly unmediated figures and the intense color scheme in Batiashvili's work share stylistic tendencies with the early modernist painting traditions of Georgia and Europe. Batiashvili's practice is an interesting cross between traditional and contemporary painting, developing the well-established painting tradition popular in the Republic of Georgia.
Bugiani completed his studies in Fine Art at the Tbilisi State Academy of Arts in 2001. He went on to study at the École des Beaux-Arts in Rouen, France, and at the State Academy of Fine Arts in Karlsruhe, Germany, and received an MA in Art History from the Heinrich Heine University of Düsseldorf, Germany in 2010.
Bugiani works in painting and collage. His works depict interiors and exteriors of architecture empty of human presence. The abstract imagery, composed with rough strokes of oil on canvas combined with collaged elements, leaves the impression of a work still in progress. The impressionistic landscapes, seascapes and washed-out portraits compose an exploration of the fictional and historical facts of one's memory. The combination of subject matter and working style evokes an unsettling ambiguity.
Uznadze works mostly on photo portraits, reportage and thematic photo series. He takes his inspiration from ordinary feelings and incidents which make up daily life: melancholy, happiness, love, separation, sex, fear and death. Uznadze's work is dominated in particular by portraits of people whom he has encountered in his immediate surroundings. Through them he narrates their stories, dreams and fears. To date his photo series include: 2B, In the Mood for Love, Don’t Wake Me, Tbilisi Portraits, Georgians in the UK, Broadway Market.
All you need to know about property approvals – DTCP & CMDA
Property approvals from the Directorate of Town and Country Planning (DTCP) and the Chennai Metropolitan Development Authority (CMDA) are important and sacrosanct permissions a builder must obtain before commencing construction. Any anomalies here, and you can be sure the project will be fraught with legal issues. Read on and you will understand all you need to know about these approvals.
Procedure for CMDA approval
The CMDA is a designated body appointed for the sole purpose of regulating real estate developments in the Chennai Metropolitan Area through issue of Planning Permission (PP) under section 49 of the Tamil Nadu Town and Country Planning Act 1971.
The CMDA has entrusted powers to the local bodies within the Chennai Metropolitan Area to issue planning permission for ordinary buildings and buildings under normally permissible categories of industrial, residential, institutional and commercial use zones, sub-divisions and small layouts.
IT/ITES, special and multi-storey buildings require direct permission issued by the CMDA.
The authority has set up guidelines for the execution of Planning Permission (PP) and the process is much easier now due to the streamlined procedure. It is prescribed that before a builder may commence development, it is mandatory for him to obtain PP from the CMDA. The permission is valid for three years from the date of issue.
Permissions are granted to only those who conform to the land use for which the site has been designated under the Master Plan or the Detailed Development Plan. Once the builder has obtained approvals he must complete the project within the time prescribed in the planning permit. CMDA approvals are withheld if the CMDA finds some anomalies related to the land or deficiencies in statutory approvals.
Developers carrying out constructions without proper permissions are taken to task, with the construction deemed unauthorized. In such a scenario, the construction may be demolished, placed under a stay order, or even sealed under the Act.
Procedure for DTCP approval
The Directorate of Town and Country Planning (DTCP) is responsible for carrying out developmental activities and for exercising control over town planning for Chennai, its suburbs and all the districts that come under its jurisdiction. DTCP or CMDA approval comes with a clear, explicit advantage: the land must be used for the purposes defined by the planning authority, giving the owner a lawful, legally recognized right to build. Once the builder has an approval for the said land, the building cannot be demolished, unless unapproved layouts are added later on. A DTCP approved layout will follow the guidelines with regard to road width and public spaces.
There are no specific size requirements for the property to get DTCP approval; however, the land should be within the limits of the DTCP and must fulfil the guidelines set by it.
As a buyer, you can ask for the approved CMDA or DTCP number. Typically, the builder will have it on his approved layout drawing. You can request and preserve a copy for yourself. Consult a lawyer if you have any doubts concerning the approvals from the builder. Buyers are best advised to wait for CMDA approval before deciding on and purchasing a house. | http://www.rubybuilders.in/blog/tag/dtcp-approved-apartments/ |
Sketching Out States in Fantasy: Two Cases from Avlis
As promised, here are two cases from Avlis of magic-equipped societies. I will begin with a (moderately) short description of each state, then compare the place of magic and its political economy. This post is meant as a sketch, with further development and detail to be presented based on comments received. If there are points that are unclear, or things that I’ve missed, feel free to point them out.
So let’s step through the portal…
The Fourth Kurathene Empire
Aristocrats, Mages, Merchants and Priests
The Fourth Kurathene Empire is a multi-national state located in the northwestern corner of the continent of Negaria. It stretches from the banks of the Methran River basin in the south through the heartland of the old Kurathene Empire.
Kurathene society is organized by a social class system based on a territorial aristocracy first established by Joral Kuras nearly two thousand years prior. The aristocracy began as a mix of traditional manorial landowners dating from before the Empire and military officers granted land-holdings after the unification of the Empire first in the early 1st century. Over the years the distinction between the two systems of nobility have largely vanished, and modern Kurathene aristocrats and gentry are largely pulled from families that have held on to their holdings over the intervening centuries.
Open-field systems of farming remain the predominant method of land use, with landholders owning tracts of land divided among tenant farmers. The system came about primarily as an adaptation of the three-field crop rotation system used by the more hive-minded Dracon. The use of rare and expensive farming tools such as ox-teams and magically enhanced farming tools helped cement open field farming as the predominant land-use style in western Kurathene.
Because the territories of the Fourth Empire were the coastal regions of the old Empire, coastal trade remains an important part of the Imperial economy. As access to inland territories and their raw resources has become more difficult, the Empire’s economy has become increasingly reliant upon foreign trade for metals. The concentration of population into coastal cities has helped spur urbanization. About a quarter of the Empire’s population lives in urban centers of ten thousand or more.
The wealth flowing out from the cities has helped to create a system of wealth based patronage that attracts artisans and mages from across the continent looking for sponsors. They often compete for resources with the established churches, who maintain a following through a combination of religion, tradition and community engagement. The strongest faiths within the Empire are those of Toran, Senath, Mikon and Hurine.
Much of the Empire’s infrastructure dates from the time of Joral Kuras and is often the result of centralized military planning. Enchanted road systems, navigation canals with fixed currents and wind sources, magically tended forests of hardwood trees and great artificial lakes and rivers are just a handful of the things created during the Golden Age of the Empire.
The modern Kurathene state has several organizations dedicated to maintaining these creations. The first is the Imperial Corps of War Mages, a centralized cadre of mages given extensive training in everything from engineering to alchemy and battlefield casting. As the premier organization of arcane magic in the Empire, it recruits from the most promising students and provides an avenue for substantial prestige for noble or gentry families with a history of arcane magic in their bloodlines. Public schools operated by the orders of magic are used to find the most promising applicants who are sponsored for entry in academies of higher learning. As a consequence the Empire enjoys a much higher literacy rate than other states in Negaria although at considerable expense.
While more informal, the Lords Spiritual of the Chamber of Lords are assigned the task of coordinating the recruitment of both clerics and holy warriors within their regional district. Often the seminaries of the major regional churches are located within the same campus, in order to increase access to instruction on the use of deific magic. Commoners tend to be chosen out of the same public schools as mages, while noble families with strong traditional ties to churches will send second or third sons into the clergy.
Overall the Fourth Empire is a prosperous state, though like most others there is a substantial gap in wealth between the rural tenant farmers and the landowners. There are also substantial regional disparities based on whether the territory had been part of the Imperial successor state or had been part of an independent principality until recent times.
Tabayelle
Glories of Mageocracy
Tabayelle is a continent located on the opposite side of Negaria upon the world of Avlis. The continent was settled by a group of one thousand mages who thought that coexistence with divine spellcasters was unbearable. Using a combination of volunteer settlers and slaves to bolster their numbers, eleven thousand settlers created the first cities on Tabayelle. The cities were built without much regard to geographic proximity, as teleportation spells were used to move settlers to areas with rich arable land and easy access to staples such as water and lumber. Each city was built around a central mage spire which had a portal linking it to the rest of the city-states.
The first generations of mages made extensive use of alchemical fertility enhancers to help increase population, while making use of enchantment magics for social conditioning of following generations. Constructs and summoned elementals made up large portions of manual labor, taking over tasks that were ordinarily done by beasts of labor and unskilled peasantry. This combination of factors led to rapid growth and the establishment of additional spire cities, all based on geographic access to resources.
As a result the settlement of Tabayelle is extensive but scattered. Of the roughly one thousand spire cities created by the mage orders, no two are within one hundred miles of one another. The urban centers of the cities tend to be dense, hive-like constructs made of stone or concrete, with high-rises common within a walled enclosure. Outside of the city there are extensive rural pasture fields tended by automated constructs that are often many generations old. The size of the fields will differ depending on the scale of the city.
Within the walls of a spire city, chained elementals are used to control the climate, keeping the city’s confines comfortable all year round. Illusionists ply their trade to create entertainment for the masses, while every citizen is provided a basic stipend of food provided by golems. Artisans exist to satisfy the needs of the more affluent, often working within generational halls. Most people spend their entire lives within a single spire-city, as geographic separation and access restrictions to the portal network that links the great spires keeps them within their city of birth.
Attainment in magic is directly related to the amount of influence an individual has within Tabayelle society. Because most paths of magic were available to all citizens through the education system, early families had an incentive to produce a large number of children with the hope that one or two might show an aptitude for magic. Wealthier families on the other hand would often focus their resources on preserving their abilities through subsequent generations, using an extensive system of alchemy, magic and magical artifacts to enhance the abilities of their offspring.
After the first thousand years population pressures forced most cities to create policies to curb growth. Some insisted upon alchemical sterilization of “undesirable” populations, while others implemented (but rarely enforced) a two or three child policy for all families. The heavy-handed enforcement of sterilization sometimes led to revolts, but the isolation of the spire cities kept these incidents from spreading beyond a single city. For the most part cities began to shunt excess population into subordinate colonies on a new continent, built to more mundane principles as an outlet for both population growth and as a repository for the criminal class.
The reliance on magic was the eventual downfall of Tabayelle, when the god Andrinor claimed the wellspring of Arcane power in 1950 OD. When, in a fit of pique, the new god cut access to magic from the world, the magic that kept Tabayellan society together failed. Outsiders broke free of their binds, golems shut down, and the protections that kept the mage class in control stopped working. The mage spires fell within weeks of this failure, but that, as they say, is another story entirely….
Comparing the States
Both the Kurathene Empire and the mage-spires of Tabayelle are magic heavy societies. The use of magic is implicitly responsible for allowing their forms of social organization to exist. In Kurathene magic was used to build the infrastructure needed to maintain a large empire, while in Tabayelle the entire way of life was shaped by the use of arcane magic.
That’s where most of the similarities end.
The Kurathene Empire has a non-magic oriented aristocracy buttressed by generations of accumulated wealth, social standing, cultural authority and of course the possession of magic swords that signify high office. Further this aristocracy can turn to a wider array of spellcasting sources: ranging from hedge casters like bards and holy warriors to divine casters like clerics and druids and finally to mages (wizards and sorcerers). The diffusion of power sources makes it difficult for a single type of spellcaster to gather absolute power, and the trading of wealth or social prestige for magic assistance helps to keep spellcasters on the second tier of society.
In Tabayelle, the oligarchy is based on magical ability. While there were certainly families who were able to cement their hold on the upper echelons of magedom through resource concentration, they could not keep a stranglehold on arcane magic. The lack of non-mages as a spellcasting class allows the mage class to hold a monopoly on wealth and hard power. Specialization also helps to narrow which types of mages hold the most power in Tabayelle. The most promising spellcasters are raised as generalists, but most of middling proficiency are shunted off into specialist roles where they can exercise influence in a limited sphere. Illusionists might make great entertainers, but poor pedagogues as they’re unable to wield enchantment spells well.
Then there’s the character of the states they’ve crafted. The Kurathene Empire is a geographically contiguous state, with traditional borders and geostrategic concerns. While the modern Empire is more of a thalassocracy with an emphasis on ports and urban centers, the population is still largely living on rural pasture and works the land. Local culture and identity are important, and the multi-national character of Kurathene can be seen in the many ethnic groups that consider themselves a distinct nation within the wider Empire.
Tabayelle on the other hand has no geostrategic logic. The spire cities are located not by proximity or access to other parts of the country, but based on where they can find the best pasture land, timber, fresh water sources and minerals. The use of magic for menial labor means that most of the population is concentrated within the cities, and the isolation imposed through geographic separation means only those with access to magical means of travel have access to trade and interaction between cities. The mageocracy, with its ability to travel between cities at whim and move about where it pleases, has an identity of its own, while most citizens of the spire cities are tied to their cities.
The next post will focus on comparing the military uses of magic between the two states. Stay tuned! | https://ordinary-times.com/2012/10/25/sketching-out-states-in-fantasy-two-cases-from-avlis/ |
Today, marketers must take advantage of digital marketing opportunities that are increasingly data-driven, programmatic, and addressable. This can only be achieved by understanding and reaching actual people with relevant content—across media, channels, and devices—that’s personalized to them as individuals. People-based marketing creates an enormous potential to simultaneously increase conversion and ROI while improving the overall customer experience. The downside? It’s extremely difficult to implement.
Watch this on-demand webinar to learn more. | https://www.merkleinc.com/news-and-events/webinars/2017/powering-people-based-marketing-financial-services |
Fear
fear
Fear
(1) In psychology, a negative emotion toward a real or imagined danger that threatens an individual’s life, personality, or values, including ideals, goals, and principles.
(2) One of the main tenets of existentialism introduced by S. Kierkegaard, who distinguished between a common, empirical fear (in German, Furcht) brought about by a concrete object or condition and an indefinite and uncontrollable dread (in German, Angst). Dread is a metaphysical fear unknown to animals. Its object is nothing, and it results from man being mortal and knowing it. For M. Heidegger dread functions to disclose the final potential of existence—death. J.-P. Sartre defines metaphysical, existential fear (angoisse) as anxiety before one’s own self, potential, and freedom.
(3) Early psychoanalysis distinguished between a rational fear in the face of external danger and a deep, irrational fear. The latter was interpreted to be a result of unrealized ambitions and a repression of unsatisfied desires. Modern neo-Freudianism interprets fear in terms of anxiety, which is a state of general irrationality associated with the irrational nature of bourgeois society. It is also considered to be the main source of neuroses.
Many theories on the origin of religion regard the emotion of fear to be a reason for the development of religious ideas and beliefs. This trend of thought was developed by Lucretius and Democritus in antiquity and D. Hume, P. Holbach, and L. Feuerbach in modern times.
What does it mean when you dream about your fears?
Fearful dreams are quite common, reflecting either anxiety about concrete problems in the world or anxieties arising from inner tensions. For a deeper understanding, the dreamer should attempt to identify the source of fear in the dream.
Fear
(dreams)
If you are experiencing great fear in your dreams, you are having nightmares. These types of dreams are positive because your unconscious mind is trying to tell you something. If you have repressed issues, they may be coming to the surface. Think about the fear in your dreams and try to be honest with yourself. Face your fears and as a great American president once said “The only thing we have to fear is fear itself.” Having fearful dreams seems to be relatively common. Most dreams are unpleasant and that is the nature of our private unconscious. Issues and concerns, repressed emotions, and daily stress all contribute to an uneasy sleep and to fear-filled dreams.
The first step in building such a system requires one to choose a chemical structure model. The structures generated by this model need to be standardized, easily interpreted by a computer, compact, and ideally human readable. The following sections describe the development of the currently available models from a historical perspective.
Jöns Jakob Berzelius introduced the classical system of chemical symbols and formulae which, with a few minor changes, is still in use today. His system provided a way to easily integrate chemical information with the medium (paper) and method (print) of the day using a collection of simple characters and numbers in algebraic-like expressions.
As time passed, organic chemists recognized the need to be able to represent a molecule as a parent structure substituted with various groups. In particular, Friedrich Konrad Beilstein introduced rigorous methods for classifying, naming and indexing "related compounds" in his Handbuch, first published in 1880. However, it was Alexander Crum Brown who was a key contributor to the development of a method that represents the spatial relationships between atoms. Below is his 1861 structure for phenol. Note that while Crum Brown's representation provides a mathematical graph of a molecule, where atoms are the nodes and bonds are the edges, it also highlights the issue that molecules are not two-dimensional.
Chemists subsequently approached the issue of projecting three-dimensional information onto a two-dimensional medium in two ways. One way is to circumvent the issue by building physical models, as August Wilhelm Hofmann did by using croquet balls as the atoms and steel rods as the bonds.
The difficulty with this approach is that the information is only communicated to those who can physically see the model. Emil Fischer used the alternative strategy: he created a projection by physically flattening a physical model.
Linear formulae continued to be used, but with improvements in printing techniques and the adoption of certain standards for atom numbering, graphical representations of structures began to predominate. It was not until the advent of computers that better methods were required. In 1949 William Wiswesser led the way by improving on the standard Berzelian system, streamlining the symbols used and adding structural and connectivity information.
Despite the fact that in practice Wiswesser's notation turned out to be primarily a shorthand for representing the systematic name of a compound, it was used widely until graphic entry technology was introduced. At that time, it became possible to store structure information as a connection table representing the underlying graph and thus expanding the possibilities for subgraph matching. Implicit in modern graph representations is a valence-bond model of structures where vertices are atoms with a variety of atomic properties and edges are bonds, usually constrained to a few defined types (single, double, triple, and aromatic). Below is the N-ethylaminobenzene compound represented in MDL molfile format.
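Such a connection table can also be generated programmatically. A minimal Python sketch using the open-source RDKit toolkit (an assumed tool choice, not one named by the source):

# Assumes RDKit is installed (pip install rdkit).
from rdkit import Chem

# N-ethylaminobenzene (N-ethylaniline), built from its SMILES string.
mol = Chem.MolFromSmiles('CCNc1ccccc1')

# Emit an MDL molblock: a counts line, an atom block and a bond block.
print(Chem.MolToMolBlock(mol))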
Despite many size and format constraints, connection tables are easily handled by computers. Unfortunately, connection tables and many of the other linear notation systems developed during this era, such as ROSDAL, are not easily interpreted by humans. It was not until the invention of SMILES™ in 1989 that a structure could be represented as a graph both in text documents and in a computer system. A SMILES™ string such as that shown below for the N-ethylaminobenzene compound is human-readable, very compact, and, if canonicalized, represents a unique string that can be used as a universal identifier.
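For reference, a SMILES string for N-ethylaminobenzene can be written CCNc1ccccc1. A minimal sketch of canonicalization, again assuming the RDKit toolkit:

from rdkit import Chem

# Two different spellings of the same molecule...
a = Chem.MolToSmiles(Chem.MolFromSmiles('CCNc1ccccc1'))
b = Chem.MolToSmiles(Chem.MolFromSmiles('c1ccccc1NCC'))

# ...reduce to a single canonical string, usable as a universal identifier.
assert a == b
print(a)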
The second step in building a chemical information system is to ensure that the appropriate mechanisms are available to retrieve structures that match a target structure or pattern in a variety of ways as in the following examples.
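Two of the most common retrieval modes can be sketched in code: exact-structure lookup by comparing canonical SMILES, and substructure search with a SMARTS pattern. The examples below are illustrative choices of mine (assuming RDKit), not the source's own:

from rdkit import Chem

target = Chem.MolFromSmiles('CCNc1ccccc1')

# Exact match: canonical SMILES strings are equal only for identical structures.
query = Chem.MolFromSmiles('c1ccccc1NCC')
print(Chem.MolToSmiles(target) == Chem.MolToSmiles(query))  # True

# Substructure match: does the target contain an aniline-like fragment?
pattern = Chem.MolFromSmarts('c1ccccc1[NX3]')  # aromatic ring bound to a trivalent N
print(target.HasSubstructMatch(pattern))  # True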
Incorporation of the ability to calculate structure-derived data such as physical properties and conformational information is the third step in building a chemical information system. A sample of typical calculated properties is shown below.
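As an illustration, a few typical descriptors computed with RDKit (the particular properties shown are my choice; the source does not name specific ones):

from rdkit import Chem
from rdkit.Chem import Descriptors

mol = Chem.MolFromSmiles('CCNc1ccccc1')

print(Descriptors.MolWt(mol))              # molecular weight
print(Descriptors.MolLogP(mol))            # calculated logP (lipophilicity)
print(Descriptors.TPSA(mol))               # topological polar surface area
print(Descriptors.NumRotatableBonds(mol))  # conformational flexibility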
Lastly, a good chemical information system should have the ability to execute various in silico operations on registered compounds. For example, the ability to perform a reaction transform on a set of compounds as illustrated below.
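A minimal sketch of applying such a transform with RDKit; the N-acetylation written below as reaction SMARTS is a hypothetical example of mine, not one taken from the source:

from rdkit import Chem
from rdkit.Chem import AllChem

# Hypothetical transform: acetylate a secondary amine.
rxn = AllChem.ReactionFromSmarts('[NX3;H1:1]>>[N:1]C(C)=O')

mol = Chem.MolFromSmiles('CCNc1ccccc1')
for (product,) in rxn.RunReactants((mol,)):
    Chem.SanitizeMol(product)
    print(Chem.MolToSmiles(product))  # CCN(C(C)=O)c1ccccc1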
Within the past 20 to 30 years, computers and cheminformatics software have evolved to the point where nearly all chemists routinely access huge chemical databases from their desktops. Many millions of structures of known compounds and many times that number of virtual compounds are stored as graphs and can be explored efficiently by chemists and biologists. Cheminformatics has revolutionized discovery chemistry. | http://daylight.com/cheminformatics/intro.html |
The 1920s in Germany witnessed a revolution in visual communication, typography, and graphic design that still influences us today.
In 1929, Hungarian avant-garde artist and Bauhaus professor László Moholy-Nagy was invited to design a room dedicated to the future of typography at the Martin-Gropius Bau in Berlin as part of a larger exhibition called New Typography ("Neue Typographie").
The exhibition was organized by the Ring of New Advertising Designers ("ring neue werbegestalter"), a group started by Kurt Schwitters in 1927 which consisted of 12 avant-garde designers and artists who explored a common vision of modernity in advertising and graphic design. In five years, the Ring put on over 20 shows in Germany and invited a number of guest artists to exhibit with them.
Moholy-Nagy's room in the New Typography show was called "Where is Typography Headed?" He created 78 wall charts illustrating the development of the “New Typography” since the turn of the century and extrapolating its possible future. To create these charts, he not only used his own designs but also included prints by colleagues associated with the Bauhaus.
The movement initiated by the “New Typography” exhibition, functional graphic design, broke with tradition and established a new practice of design based on artistic criteria. It aimed to achieve a modern look with standardized typefaces, industrial DIN norms, and adherence to such ideals as legibility, lucidity, and straightforwardness, in line with the key principles of constructivist art.
Moholy-Nagy's original New Typography charts are reproduced together in this book for the first time, along with an Abecedarium of terms and concepts by a roster of noted typography and design historians. Contributors include Peter Biľak, Günter Karl Bose, Gerda Breuer, Steven Heller, Richard Hollis, Annette Ludwig, Ellen Lupton, Julia Meer, Erik Spiekermann, and many more.
The book features recently discovered, previously unpublished archival materials by Moholy-Nagy from the Kunstbibliothek in Berlin, as well as work by Guillaume Apollinaire, F.T. Marinetti, Theo van Doesburg, Herbert Bayer, Walter Dexel, and El Lissitzky.
Published concurrently with an exhibition at the Staatliche Kunstbibliothek in Berlin. | https://draw-down.com/products/moholy-nagy-and-the-new-typography-a-z |
1 The upper extremity of the elbow as seen from the front. The inner surface of the coronoid process of the ulna is curved so as to clasp the pulleylike trochlea of the humerus.
2 The lower extremity of the humerus is somewhat flat. Projecting from each side are the internal and external condyles. Between the two is the rounded groove that receives the lip of the ulna.
3 Here the bones of the arm and forearm are connected. This is a view from the front. The humerus above shows the two condyles with a notch that receives the coronoid process of the ulna when the arm is bent. The ulna at the elbow swings hinge-like on the bone of the upper arm. It moves backward and forward in one plane only. Just below the outer condyle of the humerus is a small and rounded bursa, called the radial head of the humerus, on the surface of which rolls the head of the radius.
Elbow—Front View
The large bone, which carries the forearm, may be swinging upon its hinge at the elbow, at the same time that the lesser bone which carries the hand may be turning round it. Both these bones of the forearm, the radius and ulna, have prominent ridges and grooves. They are directed obliquely from above, downward and inward. The radius turns round the ulna in these grooves and on the tubercles at the heads of both bones.
The lower extremity of the humerus gives a key to the movements of the elbow joint. Above, the shaft of the humerus is completely covered by the muscles of the upper arm. Below, the inner and outer condyles come to the surface near the elbow. The inner condyle is more in evidence. The outer one is hidden by muscle, when the arm is straightened out. When the arm is bent, it becomes more prominent and easier to locate.
Elbow—Back View
1 The humerus at the elbow is flattened in front and back, terminating in two condyles. Between these is placed the trochlea, a rounded spool-like form that is clasped by the olecranon process of the ulna.
2 This is a diagram of the spool-like form of the trochlea with the embracing condyles at the sides.
3 From the back, the olecranon process of the ulna is lodged into the hollowed-out portion of the back of the humerus, forming the elbow point.
4 This shows the bony structure of the hinge joint at the elbow.
Elbow—Side View
1 The ulna swings on the pulley of the humerus. The articulation is known as a hinge joint.
2 Shows the mechanical device used in straightening the forearm, on the arm, at the elbow. The common tendon of the triceps grasps the olecranon of the ulna, which in turn clasps round the spool-like trochlea of the humerus.
3 When the forearm is flexed on the arm, the ulna hooks round the pulleylike device of the humerus. The triceps in this position is opposed by the biceps and brachialis anticus in front, which becomes the power that raises the forearm upward. The triceps in reverse is inert and somewhat flattened out.
| https://www.joshuanava.biz/anatomical/elbow.html |
Application of microwave-assisted fragmentation in excavation and comminution of hard rocks/ores
Enhancement of mineral resource employment efficiency is one of the main targets identified by the mining sector and the United Nations’ sustainable development roadmaps. However, the relatively low energy efficiency of rock fragmentation, including excavation/comminution, is among the most challenging hurdles for the mining industry. The microwave pre-treatment technique has yielded promising results compared to other pre-weakening solutions such as high-temperature rock cutting, electro-pulse defragmentation and hydrofracking. So far, the importance of the energy used in creating microwave-induced fractures has been mostly overlooked. The primary goal of our research is to connect energy consumption and strength reduction by using numerical methods. Towards this end, a COMSOL-based numerical tool has been developed to simulate the microwave-treatment process taking place within the cavity. In addition to this Finite Element Method (FEM) tool, a Discrete Element Modelling (DEM) tool has been developed to evaluate the behavior of rocks undergoing microwave treatment. These numerical tools will be used to better understand rock fragmentation and macro-scale responses to achieve energy viability in microwave-assisted rock fragmentation processes.
Mine hybrid renewable energy system for application in remote mines
Mining is considered one of the most energy-intensive industrial activities in Canada. Due to its high fossil fuel dependency, the mining sector is one of the primary contributors of carbon emissions from Canadian industry (82.6 megatons in 2017 according to Natural Resources Canada). Consequently, miners are seeking more innovative solutions for fully transitioning their energy supply off fossil fuels, especially in remote mines where lack of access to the electric grid and natural gas makes them rely solely on fossil fuels for the provision of power, haulage and heat. Renewable energy systems such as wind and solar photovoltaic can relieve this over-reliance on fossil fuels. Although some progress has been made in shifting towards green energies, the high cost of battery storage systems makes current mine renewable solutions economically non-competitive with conventional diesel-based systems. Using hydrogen and thermal storage systems, with their relatively cheaper technologies, can facilitate the application of renewable energies in the mining industry. In this project we aim to develop a novel integrated renewable-multi storage (Battery/Hydrogen/Thermal Storage) solution for the provision of fully-decarbonized energy in off-grid mining operations.
Design of integrated solar-borehole thermal storage systems
In this research project, we plan to design, develop, and optimize a solar-borehole thermal energy storage system to supply a complete heating solution to a residential high-rise building located in a cold climatic region like Canada. The solar thermal collector system absorbs solar thermal energy year-round, most of which is stored underground during the summer days through borehole heat exchangers. The stored energy is extracted during the winter months to supply the building’s heat demand. This research underlines the seasonal intermittency issues with renewable heating systems, which can be effectively dealt with by the use of borehole thermal energy storage systems. A numerical simulation code is developed to couple the solar thermal collector system with the borehole thermal energy storage system. This code aims to study and explore the heat transfer mechanisms involved in the integrated system to gain a better understanding and enhance the performance of the system.
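As a rough sketch of the coupling being simulated (the project's actual governing equations are not stated here, so this formulation is an assumption), the collector gain is commonly modeled with the Hottel-Whillier relation and the seasonal store with a simple energy balance, written in LaTeX:

Q_u = A_c F_R \left[ G_T(\tau\alpha) - U_L (T_{in} - T_a) \right]

\frac{dE_{store}}{dt} = Q_{inject} - Q_{extract} - Q_{loss}

Here Q_u is the useful collector gain injected into the boreholes during summer, and Q_{extract} is the heat recovered in winter to meet the building's demand.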
Spray cooling and heat recovery in mine ventilation
For deep mineral deposits, underground mining methods offer numerous options for feasible extraction of valuable minerals. On one hand, these options are rather flexible and can yield comparably smaller environmental footprints than some of the other bulk mining methods, and underground mining operations are steadily increasing in number due to the slow but gradual depletion of shallower deposits. On the other hand, the increasing depth of these workings requires the employment of advanced mechanical ventilation and air conditioning techniques. This brings several challenges to the industry in terms of finding ‘low cost / high performance’ options, as these systems are often energy intensive and expensive to acquire. Here, the mine ventilation research team at NBK Mining takes up the task of developing an understanding of such systems and bringing novel solutions to the industry, supported by contemporary research methods.
Numerical and experimental investigation of fluid flow and heat transfer of flue gas carbon sequestration in mine wastes
As the most abundant anthropogenic greenhouse gas, carbon dioxide has been significantly enhancing global warming. Ultramafic mine tailings, with their rich Mg and Ca content, can capture and store carbon in solid form through mineral carbonation. There is great potential to reduce the carbon footprint of off-grid mines that have fossil fuel-based power generation plants by injecting flue gas into the available ultramafic mine tailings. To assess the applicability of the concept, its techno-economics are currently being investigated through numerical and experimental studies. A practical engineering tool is under development to estimate the energy and costs associated with the injection of flue gas for the purpose of carbon sequestration in mine tailings. | https://mining.ubc.ca/research-group/advanced-mine-energy-systems/ |
Published: 12 August 2013
“The problem is that the system does not ...” (have a drop-down here, have this extra screen, a button for this, etc.), where the problem is defined as the lack of the solution they have already decided upon, without any investigation to ensure that the problem has been correctly identified. I brought up the use of Problem Statements, and was asked by my peers to write up my ideas in more detail. I ended up writing it in article format so that it could be shared ahead of time, and it was deliberately written to generate some discussion. I hope you find it useful as well.
A problem can be defined as “a difference between the expected state of affairs and the actual state of affairs.” And according to Wikipedia, a problem statement “is a concise description of the issues that need to be addressed by a problem solving team and should be presented to them (or created by them) before they try to solve the problem.”
Problem Statements are a common aspect of Project Management. They are frequently included in a Project Charter, with the Problem Statement identifying what problem the project is focused on solving and the Business Case identifying why the problem should be solved (usually in the form of some specific benefit(s) to be gained).
However, problem statements should also be used in the Elicitation and Requirements Analysis aspects of Business Analysis work.
A good problem statement in a Project Charter should essentially specify the scope by defining what the problem is, who is affected by it, what its impact is, and what measurable state will indicate that it has been solved.
There is some debate over whether a problem statement should include a solution to the problem, but it is usually a better idea to specify a measurable state that will indicate a solution has been achieved, rather than the method by which the problem will be solved. For more complex or larger projects, there may be a set of problem statements rather than just one.
However, specifying the problem in project management is not always an easy task. Common issues with defining a good problem statement for a project include:
The “problem” is nothing more than a list of business complaints that may or may not be related to the project goal, or that make the project goal nebulous and ill-defined.
The “problem” is actually describing the symptoms of the root problem.
The “problem” is defined as a solution state, not a problem. An example might be a problem statement of “The server is too slow, resulting in reduced response times.” Unless you have done the work to ensure it really is the server, and not network issues, software issues, or a number of other factors, you may solve the “problem” only to find it has had no effect.
The “problem” is defined too simplistically and without measures. This leads to uncertainty about the actual problem being solved, the scope of the project, and how success will be measured.
A good problem statement in a project charter becomes the key factor in deciding if something is in scope (or in deciding to change scope), in supporting Stakeholder management and engagement, in guiding Business Analysis work, and in evaluating Change Requests.
However, Problem Statements aren’t just useful in Project Management work. They can also be a valuable tool for Business Analysts in their Elicitation and Requirements Analysis activities.
Business Analysts frequently start Elicitation work with the question “What do you need?” These needs can then be evaluated to determine if they are within the project scope, and if so, the analyst can work with the client to validate the need and determine requirements that will fulfill the need.
The problem with this approach is that a client’s needs (or at least their perception of their needs) are almost always framed by their current process, business, or system constraints. Their needs are evolutions of their current situation, and are almost never revolutions. And while evolutions frequently cost less, are easier to envision, and are usually easier to implement; they will never deliver the real business transformations that are needed for major improvements. Indeed, evolutions rarely even consider revolution as a possibility and end up limiting the vision of potential solutions. Evolution is safe, but myopic. That is not to say that sometimes evolutionary change is not the best option, but you should always be aware of its limitations in generating solution options.
Additionally, asking for needs is inviting the client to provide their full wish-list of everything that they want and think they might be able to get the project to pay for. The needs may not be related the project goals or scope in any way. And frequently the Business Analyst must do all of this while relying almost entirely on the client for business context.
A better option may be to start elicitation with the question “What are your problems?” or “What are the barriers preventing you from doing your job better or the company being more effective?” By changing the focus to problems and barriers you begin to change the client focus from evolution to revolution. You also take the focus away from a general wish-list of new features that the client wants.
Another benefit of starting with problems and barriers is that the business analyst starts off the elicitation process by essentially educating themselves on the business processes and work. By identifying problems and decomposing those problems down to their most atomic level, the business analyst will learn about the business at a greater level of detail than they would likely learn by asking for needs. This puts the business analyst in a much better position to analyze the business needs and requirements that will solve the problem, making the requirements and the solution more effective at actually meeting the client needs.
At the requirements level, the Business Analyst should focus on identifying single, highly-specific problems (unlike the more general problems at the project level). Each problem should be traced back to its root cause and all stakeholders involved in that problem identified. Once this is done, a problem statement should be generated for that specific problem that every stakeholder involved can agree upon. This problem statement should describe a single problem at its root cause, assign no blame, be solution and system agnostic, and be stated in terms that every stakeholder can accept.
There are likely to be some doubts about why a problem statement should be solution and system agnostic, so let me explain my reasoning.
If you were to say that the problem is “the accounting system does not calculate year-end cost basis correctly for reporting to clients,” you have defined the problem as being with the accounting system. You have also limited all potential solutions to just the accounting system, because that is the root of the problem you have defined.
The question is whether the client (or business) problem is really with the accounting system. If you instead say that the problem is “accurate and correct year-end cost basis reporting figures are not available for clients,” you make the problem independent of any specific system. Now the range of solutions becomes much broader: for example, fixing the calculation in the accounting system, sourcing the figures from another system, introducing a manual verification step or process change, or using an external reporting service.
This process of creating a blame-free and solution agnostic problem definition should result in a discussion of what the problem is in abstract terms, which should lead to a discussion of business needs in more abstract, solution agnostic terms in the future. This separation of the client problem from specific solutions or systems enables a more strategic view of problem solutions that are outside of the current process and system limitations. In essence, it makes revolutionary change easier to consider while system-specific solutions are almost always evolutionary in nature.
If achieving agreement among all stakeholders is proving difficult, you may not have identified the true root cause, or you may need to build knowledge and perspective among the stakeholders of why the problem is valid in order to build consensus. This can often occur when stakeholders get bogged down in identifying the problem as being with a specific system or group, rather than in isolation.
Once a verified and agreed upon problem statement exists, the problem can be identified as in or out of scope of the project goals and prioritized among all problems identified for the project effort. The Business Analyst can then begin the needs identification part of the Elicitation process, focused on what the business needs are to solve that specific problem. This helps keep the needs elicitation process focused and more effective. It also helps to identify business needs that are “free-range,” and not confined within the current system and process limitations.
This does not mean that the final solution may not have to be built around those realities, but starting at a level outside of the current limitations means the underlying business needs are more likely to be identified. This makes the range of solution options more open, and may make non-technical solutions such as process changes easier to identify.
Writing problem statements can start with the same process at both the project management and requirements levels.
The first task is to identify the problem(s). One way to do this is to start with the “Five W’s” and then follow that up with the “5 Whys.”
The Five W’s are the classic Who, Where, What, When, and Why. In the case of problem statements, these might be better stated as:
This information is then further analyzed using the 5 Whys (asking "Why?" five times or more to elicit more details and to ensure the root cause is identified), or other methods, such as fishbone diagrams, that help with identifying root causes. This process should continue until there is agreement that the actual root-cause problem has been correctly identified and defined.
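To make the drill-down concrete, here is a minimal sketch of what a 5 Whys chain might look like for the cost basis example above. The chain and its wording are invented for illustration, not taken from a real analysis.

```python
# Illustrative 5 Whys drill-down; every answer below is invented.
why_chain = [
    "Clients receive incorrect year-end cost basis figures.",        # the problem
    "The year-end report omits reinvested dividends.",               # why 1
    "The report reads only the trade ledger.",                       # why 2
    "The dividend ledger sits in a separate, newer system.",         # why 3
    "The two systems were never integrated.",                        # why 4
    "No process exists to review reports when data sources change.", # why 5
]

for depth, answer in enumerate(why_chain):
    label = "Problem" if depth == 0 else f"Why #{depth}"
    print(f"{label}: {answer}")

print(f"\nCandidate root cause: {why_chain[-1]}")
```

Note how the final answer is a process gap rather than a system defect, which is exactly the kind of blame-free, system-agnostic root cause the problem statement should capture.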
Where the project management and requirements uses of problem statements usually diverge is at the problem decomposition stage. Once a problem has been identified, the business analyst will work with the client to decompose it into smaller, distinct elements. These may be new dependent problems, i.e., smaller but separate issues that cause the problem(s) at higher levels; or the components of a problem, such as who is involved in the process, the systems used, the business units or systems that provide input to the process or take output from it, and other characteristics.
In the end, each problem statement at the requirements level should meet the following criteria:
- It addresses a single, atomic problem traced to its root cause
- It is blame-free and solution and system agnostic
- Every stakeholder involved agrees with it
As each discrete problem is identified and decomposed, the business analyst identifies the business needs that a solution to that specific problem must meet. These business needs then drive further elicitation and analysis work until the business requirements are complete.
Starting requirements work from a problem perspective isn't, in most ways, very different from starting with needs elicitation. It inserts two additional steps at the start of the normal requirements process, so that the flow is now: problem identification, problem decomposition, needs elicitation, analysis, and requirements documentation.
In my opinion, the real difference is in how starting with problem identification and decomposition changes thinking on the part of both the business analyst and the stakeholders.
The real difficulties are likely to be in identifying the actual root cause of problems, and in the time it takes to do that analysis. The process of identifying root causes can lead to systems or groups that are not in scope, can require the business analyst to gain a significant amount of knowledge of the business, and can take a lot of the client's time. This can result in clients feeling the BA is wasting their time when they "already know what the problem is."
When documenting requirements, the business analyst should also make sure to tie specific requirements to specific problems. Just as tying specific requirements to specific business needs and goals is done through a traceability matrix, the same mechanism can tie those business needs to the specific problems that prevent achievement of the business goals.
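As an illustration, such a traceability chain can be represented with a few simple mappings. The identifiers and wording below are invented for the example and are not drawn from any particular tool.

```python
# Illustrative problem -> need -> requirement traceability chain.
problems = {
    "P1": "Accurate year-end cost basis figures are not available to clients.",
}
needs = {
    "N1": {"problem": "P1",
           "text": "Correct cost basis figures are needed for client reporting."},
}
requirements = {
    "R1": {"need": "N1",
           "text": "Year-end reports must include cost basis from all ledgers."},
}

def trace(req_id: str) -> None:
    """Walk the chain from a requirement back to the problem it addresses."""
    req = requirements[req_id]
    need = needs[req["need"]]
    problem = problems[need["problem"]]
    print(f"{req_id}: {req['text']}")
    print(f"  satisfies {req['need']}: {need['text']}")
    print(f"  which addresses {need['problem']}: {problem}")

trace("R1")
```

A requirement that cannot be traced back to a verified problem in this way is a candidate for the wish list rather than for the project scope.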
Starting with defining problem statements in requirements work has several benefits. These include better education of the business analyst about the business, a more focused and effective elicitation process, business needs that are not confined by current systems and processes, and requirements that address verified problems rather than a "wish list."
Comments and feedback are welcome. | http://bawiki.com/blog/20130812_Better_Business_Analysis_Through_Problem_Statements.html |
Learn More About a Project Manager
Project management has long been a vital business activity, and it has only become more important with the passing of time.
As a matter of fact, an estimated 87.7 million individuals will need to be working in project management roles by 2027. To better handle this need, 71% of global organizations now have a project management office, an increase of nearly 15% since 2007. The career outlook is clearly promising for practitioners with expertise in project management.
If you are looking at a future in project management following your degree or qualification, you will want to learn more about the various positions and duties.
Project Manager’s major roles
But what do project managers actually do?
In the broadest context, project managers (PMs) have a duty to plan, organize, and conduct an organization's projects in a timely and budget-conscious manner.
Project managers can shape the future of an enterprise by overseeing projects from start to finish, helping to minimize costs, optimize corporate efficiency, and raise revenues.
A project manager's specific duties will depend on the industry, and the types of tasks that a PM supervises may differ accordingly.
Below are the various duties that a project manager can have at each stage of the project life cycle.
Initiating
Project managers start a new project by identifying its key goals, purpose, and scope. They also identify important internal and external stakeholders, align on common priorities, and obtain the requisite approval to move the project forward.
Important questions project managers ask during initiation include:
Why does the project matter?
What problem are we solving?
What is the expected result?
What are the performance criteria for the project?
And many more.
Planning
Once the charter has been approved, the project manager collaborates with key partners to develop an integrated project plan that meets the outlined objectives.
The plan lets PMs manage scope, cost, deadlines, risk, quality issues, and coordination throughout the project. During this process, project managers will outline important deliverables and goals, and describe the tasks that each team member must accomplish.
Note that planning does not finish until the project itself is complete: the project plan is a living document that should evolve over the course of the project while keeping it on schedule.
Executing
During this phase, the team members perform the work defined in the plan to achieve the project objectives. The job of the PM is to delegate this work and to make sure it is carried out as planned. Usually, the project manager will:
(1) Protect the team from distractions
(2) Facilitate resolution of problems
(3) Lead the team in improving the project
Monitoring and controlling
Although identified as the fourth phase, monitoring and controlling actually begins at the start of a project and continues through planning, executing, and closing. In the monitoring and control phase, the role of a project manager involves:
(1) Monitoring project progress
(2) Managing the project budget
(3) Making sure important milestones are achieved
(4) Comparing actual results with projected performance
Naturally, things rarely go precisely as planned, so a project manager needs the flexibility to work to a project schedule while adapting easily to change.
Closing
Throughout this phase, PMs work to make sure that the tasks required to deliver the final outcome are completed. As a project closes, project managers will:
Work with the customer to get formal sign-off on the project
Release any resources (budget or staff) that are no longer required for the project
Review the work of third-party vendors or partners, close their contracts, and pay their invoices
After the project is finished, they usually conduct a post-implementation review to capture the major lessons learned: understanding what went right, and what could or should have been handled differently. | https://www.rpa-star.com/learn-more-about-a-project-manager/
Children who do not consistently live with two biological parents are only half as likely to ever attend a selective college, even after researchers take into account factors such as income and parent education, according to a new Cornell University study.
"The results suggest that students not living with two biological parents are educationally disadvantaged in a variety of ways," the researchers said. "We have known for some time that these students receive less education, but now we see they are less likely to attend a selective college as well. If this quality difference is reflected in later life income and other benefits, as other studies suggest, then these children will be disadvantaged in other life outcomes as well."
Dean Lillard, Cornell assistant professor of consumer economics and housing, reported the findings at the annual meeting of the Population Association of America in New Orleans on May 9. "Not only are these students far less likely to apply, be admitted and to attend college, but even given that they apply to college, they are substantially less likely to apply and attend a selective college," he said.
Lillard, with Jennifer Gerner, Cornell professor of consumer economics and housing, analyzed The High School and Beyond longitudinal survey of almost 12,000 high school seniors and almost 15,000 high school sophomores initially interviewed in 1980 and re-interviewed in 1982, 1986 and 1992.
After controlling for parents' income, employment and education, student's grade point average, SAT scores, participation in sports and other extra-curricular activities, and identifying the top 50 colleges in the nation, the consumer economists found striking differences between the two sets of students. They reported on these in their paper, "Family Composition and College Choice: Does it Take Two Parents to Go to the Ivy League?"
"Divorce turns out to be a marker for a whole array of factors that have a negative impact on later life outcomes," Gerner said.
She pursued this research after she noted that only 10 percent of the students in a large class at Cornell were from divorced households. She later discovered this same proportion among the entire undergraduate student body at Cornell, compared with the national average of almost 50 percent. When only students who go to college are considered, 38 to 40 percent are from divorced families, compared with the 10 percent at Cornell.
"Our analysis shows that it is not living without two biological parents itself that has this negative effect. Rather, it is the family disruption that influences a whole constellation of factors that are considered when students apply to college," said Gerner, who teaches a course on the economics of family policy pertaining to children. Gerner also is assistant dean for undergraduate and graduate students in Cornell's College of Human Ecology.
Students from divorced households, whether living with a step-parent or not, are generally less likely to score as well as students from intact families on grades, standardized tests, school activities and the other factors Lillard and Gerner considered.
In a related study, also conducted by Gerner and reported on at the PAA meeting by co-author Shelly Verploeg, the consumer economists compared how elementary and middle school children who experienced any kind of family disruption (parent separation, divorce, birth of a sibling, family move, grandparent moving out) fared on standardized tests compared with children who did not experience any significant disruption in their family life.
"We found that all these disruptions had significant negative impacts on children's scores. This is consistent with the notion that stability in a child's living arrangement matters," Gerner said. "Divorce is obviously a major family disruption and we're finding that it consistently and systematically negatively affects a wide range of significant factors."
In the United States, about 40 percent of white children and 75 percent of black children can expect to live with only one parent or no parents by the time they turn 17. Although other studies have looked at family composition and its relationship to performance on tests, years of schooling and whether or not the child will drop out of school, this study is one of the first to look at family composition and the type of college a student is likely to attend.
Specifically, the researchers found that, after controlling for factors such as income and parent education, 28 percent of students living with two biological parents were likely to apply to a selective college, compared with 17 percent of students not living with two parents; 25 percent were likely to get in, compared with 14 percent of those living with one parent; and 2.2 percent of two-parent students were likely to ever attend a selective college, compared with only 1.1 percent of one-parent students.
Next, the Cornell consumer economists want to explore the implications of these findings for tomorrow's policy leaders.
"Students at the most selective colleges are most likely to become policy leaders, making vital decisions concerning welfare and other benefits for single-parent families. Yet these students are, much more than others, isolated from such families. How can they make informed decisions if they are so unfamiliar with the all-too-common disputed family?" Gerner asked. | https://news.cornell.edu/stories/1996/05/children-divorced-families-only-half-likely-go-top-college-cornell-research-shows |
A team of students led by researchers in the BIG IDEAS lab in the biomedical engineering department will build and validate machine learning techniques to classify longitudinal illness trajectories of individuals with infections such as COVID-19 or flu. Students will construct a pipeline to query survey and wearable device data from our newly constructed database in the Microsoft Azure environment and modify existing machine learning and deep learning algorithms for wearables data analysis. This project will build upon the work accomplished by the Duke Bass Connections team and the Duke MIDS capstone project.
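As a rough sketch of what one stage of such a pipeline could look like, the example below loads a (hypothetical) extract of daily wearable summaries and fits a simple classifier. The file name, feature columns, labels, and model choice are all assumptions made for illustration, not details of the actual project, and the longitudinal structure of the data is ignored here for brevity.

```python
# Illustrative sketch only: file/column names and the model are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical daily summaries exported from the study database.
df = pd.read_csv("daily_wearable_features.csv")

features = ["resting_heart_rate", "steps", "sleep_minutes"]
X = df[features]
y = df["illness_label"]  # e.g., 0 = healthy, 1 = flu, 2 = COVID-19

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```
| https://bigdata.duke.edu/projects/covidentify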
Managing Relationship Conflicts
Conflict arises out of disagreements with others about how we should behave or act, or even how we think and feel. Usually it is not the conflict itself that is the problem, but how we choose to deal with it that brings us negative results and damaged relationships. Properly handled, conflict can lead to a healthy sharing of ideas and opinions and allow us to accommodate new concepts and ideas.
Conflict resolution is the process of trying to find a solution to a conflict. Ideally conflict resolution is collaborative problem-solving, a cooperative talking-together process that leads to choosing a plan of action that both can feel good about.
The first step in resolving the conflict is to discover what is going on and why it is happening. Once you know this information, the second step is to match it with an intervention strategy which addresses its origins and extent.
If steps one and two have not resolved the conflict OR you feel the conflict has advanced too far, then mediation is an option that can be utilised before starting down the dismissal or legal pathways.
Mediation is a process which employs a range of methods—notably reason and persuasion—to bring the parties to a mutually satisfactory solution. A mediator is a neutral third party acceptable to the contending parties. Mediators seek to clarify the issues, identify what is at stake for the parties, and employ problem-solving methods and techniques.
Stop your conflict from becoming World War 3 by contacting us today! | https://www.findingthelight.com.au/services/relationships-counselling/conflict-resolution2/ |
Worrying is the central feature of generalized anxiety disorder (GAD). Many people worry from time to time, but in GAD the worrying is prolonged and difficult to control. Worrying is a specific way of coping with perceived threats and feared situations. However, it is not generally considered a helpful coping strategy, and the phenomenological account developed in this paper aims to show why. It builds on several phenomenological notions, and in particular on Michael Wheeler's application of these notions to artificial intelligence and the cognitive sciences. Wheeler emphasizes the value of 'online intelligence' as contrasted with 'offline intelligence'. I discuss and apply these concepts with respect to worrying as it occurs in GAD, suggesting that GAD patients overrate the value of detached contemplation (offline intelligence), while underrating their embodied-embedded adaptive skills (online intelligence). I argue that this phenomenological account not only helps explain why worrying is used as a coping strategy, but also why cognitive behavioral therapy is successful in treating GAD.
Worrying is the core feature of generalized anxiety disorder (GAD) [1–4]. Many people worry from time to time, but in GAD the worrying is prolonged and difficult to control. Worrying is a way of coping with perceived threats. Yet, usually, it is not considered to be a helpful strategy for dealing with future situations, and the phenomenological account developed in this paper aims to show why.
There are various ways to explain the prolonged and uncontrollable worrying that occurs in GAD. The present paper, however, is restricted to an explanation in phenomenological terms. Central to this account is to conceive of the human being as 'being-in-the-world' in the sense of an agent in continuous, embedded interaction with the environment. More specifically, my account is informed by Michael Wheeler's application of phenomenological (mainly Heideggerian) notions to artificial intelligence (AI) and the cognitive sciences. His approach - in which the concept of 'online intelligence' is central - enables us to articulate an unhelpful 'metacognition' in GAD (see next section), and, moreover, to explain why treating this condition requires cognitive as well as behavioral interventions. I suggest that the overall picture derived from this phenomenological perspective can also be communicated to GAD patients, hopefully providing them with additional motivation to find more helpful coping strategies than worrying.
The outline of the paper is as follows. First, I introduce GAD and the concept of worrying which is central in theorizing about GAD. Next, I present and discuss a phenomenological account of 'being in the world'. I start out explaining some central Heideggerian notions and then proceed to Wheeler's radical interpretation of these notions, articulated by the concept of 'online intelligence'. Then, I show how both accounts - theorizing about worrying in GAD on the one hand and online intelligence on the other - can be linked in order to explain why worrying in GAD occurs and why it is unhelpful. In brief, I suggest that in GAD there is a metacognition that overrates the value of detached contemplation about future situations while underrating the value and resources of actual embodied-embedded engagement.
GAD is an often chronic and impairing disorder with a lifetime prevalence estimated to be up to 5% in the United States population [4, 6]. The core symptom is excessive or unreasonable worrying about all kinds of situations, and the worrying is difficult to control [3, 7]. There is a 15-20% heritability in GAD and, in general, treatment options are cognitive behavioral therapy (CBT) [2, 8] as well as medication.
Adrian Wells defines worrying as consisting of long chains of negative thoughts that are predominantly verbal in form and aimed at problem solving. The common type of worrying - 'normal' worrying without GAD - is very similar in content to its GAD variant, but the GAD-type is less controllable. Wells developed a model of worrying in GAD in which the concept of metacognitive beliefs and appraisals is central. In this model, worrying is not just taken to be the natural consequence of being anxious. Instead, the model tries to explain why it is that the patient, in an attempt to cope with fearsome stimuli, chooses the worry-strategy above other coping options. Wells proposes that the reason behind the patient's choice is the presence of specific metacognitions about worrying. Patients, in other words, hold certain beliefs and appraisals about worrying that make them start and continue to worry in response to fearsome stimuli.
GAD patients do not worry all day; usually the worrying is triggered by thoughts with an intrusive character, popping up in the patient's mind. They often take the form of "what if"-questions, like "what if I fail my exam?". Alternatively, they take the form of images of disasters. It is at this point, confronted with such thoughts or images, that metacognitive beliefs about the helpfulness of worrying come into play. Examples of such metacognitions are, as provided by Wells: "Worrying helps me cope; worrying keeps me safe; if I worry I'll be prepared." Such thoughts attach positive value to worrying. This hypothesis is supported by the observation by clinicians that GAD patients often maintain that worrying helps them to be prepared for negative outcomes. In addition, GAD patients indeed turn out to believe that worrying is more helpful in finding solutions and preventing negative outcomes as compared to moderate worriers who do not fulfill GAD criteria. GAD patients continue contemplating how to deal with possible scenarios until, at some point, they somehow feel that they will be able to cope. The assessment that the worrying process can be ended is, for example, based on internal cues like a "felt sense" that one is able to cope should the feared scenario unfold, or it may be related to some superstitious reasoning, or the worrying may stop because the patient is distracted.
One element should be added to this account of metacognitions about worrying in GAD. As Riskind, referring to Beck, says (p. 2), individuals vulnerable to developing GAD "overestimate the magnitude and severity of threat, underestimate the extent of their coping resources, and overuse compensatory self-protective strategies such as cognitive, affective, or physical avoidance." So, on such an account, GAD patients underestimate their actual coping skills with respect to real-life challenges. In other words, in addition to the ideas about the value of worrying in coping with future situations, there is the underestimation of the patient's own capabilities for dealing with such situations should they occur.
It is characteristic for GAD patients not only to have positive ideas about worrying. In fact, according to Wells, these patients have negative thoughts about worrying as well. They worry about worrying itself, contemplating themes like: "I cannot control the worrying", "worrying is harmful", "worrying means that I could go crazy", etcetera. Wells calls these negative ideas about worrying 'type 2' worries. The worries about all other things in life are the - common - type 1 worries. In this paper, I focus on type 1 worries, so the worries about future situations (and catastrophes).
We should note that in some patients the reason for worrying may be superstitious. As Wells (p. 303) puts it: "Such meta-cognitive beliefs may be linked to superstitious themes, such as not tempting fate through positive thinking, or to beliefs that worry is a good way of dealing with threat." In this paper, however, I will not go into the superstitious metacognitions, but concentrate on the non-superstitious metacognition that worrying somehow helps to prepare for future situations. In section 4, I propose that the metacognition that makes people choose prolonged worrying as a coping strategy can be linked to a view of human agency that overrates detached contemplation and underrates embodied-embedded online intelligence in dealing with the world (see also next section).
Phenomenologists, in particular the early Heidegger, suggest conceiving of the human agent¹ as being-in-the-world. What is clear from this term by itself is that the world should not be conceived of as something separate from the human being, but as intimately related to what it is to be a human agent. Such a view stands in contrast with an observant, detached 'Cartesian Ego' that mainly contemplates the world, as Michael Wheeler explains in more detail. Furthermore, the phenomenological tradition conceives of the human being in practical interaction with the environment. According to Merleau-Ponty, "[c]onsciousness is in the first place not a matter of 'I think that' but of 'I can'". Notably, the 'I can' implies the possibility of change in the actual world. While the 'I think' refers to reflection about the world, the 'I can' acknowledges the continuous potential for change.
The phenomenological focus on interaction with the environment has resulted in specific terminology, emphasizing that worldly entities appear first of all as tools, as things that enable us to perform certain actions. The objects are engaged in a way that has already appreciated the possibilities, options, or opportunities provided by these objects. Heidegger's famous example is his analysis of our appreciation of a hammer. The hammer is not primarily perceived as a 'thing' with a wooden component and an iron part, but has already been perceived - before its parts are recognized in their specific nature - as something which enables us to hammer. In everyday life, we engage 'objects' as enabling certain actions, and therefore, indeed, as tools. When we are building, for example, we pick up the tools without much thought; we are engaged in a certain praxis, which provides us with the practical eye that makes us recognize the specific tools suitable for the actions we intend to perform. In sum, we appreciate the world from a perspective that continuously recognizes actual possibilities for action.
The possibilities appreciated by the agent are not limited to one simple option. Rather, we recognize a range or network of options. Returning to the hammer, this tool does not merely refer to the act of hammering; rather, it opens up further possibilities, like fixing things and building a shed or a house. In fact, possibilities or options go on infinitely, because every possibility opens up a plethora of further possibilities. In order to acknowledge that it is not about isolated possibilities, but about a range of interdependent options, Wheeler uses the term involvement networks: "...the hammer is involved in an act of hammering; that hammering is involved in making something fast; and that making something fast is involved in protecting the human against bad weather." The analysis so far comes down to a basically action-oriented approach to our being in the world, in which practical options and possibilities provided by the actual encounter with the environment constitute the primary level of our understanding of the world.
Wheeler has integrated Heidegger's phenomenological analyses into the philosophy of artificial intelligence (AI), in part building on Dreyfus's earlier work. Wheeler is certainly not merely using Heideggerian phenomenological notions. A clear example is the central role of the body in Wheeler's account. In Heidegger's Being and Time, the body is absent, as Heidegger himself (p. 143) says: "This 'bodily nature' hides a whole problematic of its own, though we shall not treat it here." Meanwhile, the body is to be found - and its role emphasized - in the work of other representatives of the phenomenological tradition, like Merleau-Ponty, who was one of the sources for Dreyfus's initial criticism of AI (see also p. 167) that inspired Wheeler's account. So, Wheeler's work is informed also by other phenomenological strands (other than Heidegger), enabling him to articulate the embodied nature of human agency.
According to Wheeler, engagement with the world is an ongoing adaptive process with continuous action-oriented perception. He understands the engaged attitude toward the world as a form of 'online intelligence': "A creature displays online intelligence just when it produces a suite of fluid and flexible real-time adaptive responses to incoming sensory stimuli." This formulation shows a basic view of how both organisms (humans included) and robots relate to their environment - and is, indeed, a radical interpretation and application of some Heideggerian notions. Online intelligence is especially relevant in a world that is constantly changing, like our world. It is the opposite of offline intelligence: detached cognitive processes that are not in immediate interaction with the world, like contemplating the weather in Paris. Offline intelligence, in other words, is the opposite of embodied-embedded cognitive activity. More can be said about this distinction, and how it should be conceived of, but in this paper I intend to use it in Wheeler's sense.
For decades, Wheeler explains, people in AI tried to build robots equipped with cognitive maps of the world: these maps represented all kinds of aspects of the world. Contrary to what was expected, such robots were not able to smoothly interact with the environment. More recently, other types of robots emerged. These robots did not know that much about the world, but they were designed to continuously pick up environmental cues while interacting with the world. Without a precise representation of the world (so, without a map), but equipped with the capacity to continuously sense the world and interact with it, they are capable of smooth interaction. For this kind of robot, its specific bodily features and abilities to interact are crucial. So, without an elaborate map, but equipped with some relevant sensors, these robots are able to interact effectively with their environment - in a way their clumsy and detached predecessors were unable to.
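As a toy illustration of this contrast, consider the difference between a controller that consults a stored world model and one that couples current sensing directly to action. The scenario and numbers below are invented and are not taken from Wheeler or from any actual robot.

```python
# Toy contrast between map-consulting ("offline") and sense-act
# ("online") control; the scenario and values are invented.

WORLD_MAP = {"corridor": "clear"}  # built earlier; may now be stale

def offline_step(position: str) -> str:
    # Consults a stored representation of the world.
    return "advance" if WORLD_MAP.get(position) == "clear" else "stop"

def online_step(distance_to_obstacle_m: float) -> str:
    # Couples the current sensor reading directly to action.
    return "advance" if distance_to_obstacle_m > 0.5 else "turn"

# Suppose a chair was moved into the corridor after the map was built:
print(offline_step("corridor"))  # "advance" -- the stale map misleads
print(online_step(0.2))          # "turn"    -- the sensed world decides
```

The map-based controller fails precisely because the world has changed since the map was made, while the sense-act controller needs no map at all to respond fluidly.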
In other words, an 'online' approach to interaction with the world has been helpful to AI and robotics. It highlights that, contrary to what one might think, it is the actual situation that is the enabler of options and behavioral scenarios. In fact, AI for a long time overlooked the specific nature and complexity of interacting with the world. For example, three decades ago, AI specialists thought that the major challenge was building a chess computer able to beat the world champion. But beating a chess grandmaster - even the best in the (human) world - turned out to be the easy part. The real challenge was not this kind of cognitive activity, but real-time, online interaction with the world, of the kind ants, mice, and falcons perform. Already several decades ago, Hubert Dreyfus identified and explained these problems and challenges in AI, referring to Heideggerian notions like 'being-in-the-world' [16, 20, 23]. His account brought out that interaction with the world is something quite different from what we theoretically anticipated - and yet, in practice, we, as humans, are extremely good at it.
As indicated, the concept of online intelligence is intimately related to 'embodied-embedded cognitive science'. Wheeler (p. 11) says, "[i]n its raw form, the embodied-embedded approach revolves around the thought that cognitive science needs to put cognition back in the brain, the brain back in the body, and the body back in the world." In my account I emphasize cognition, body, and the environment - not the brain. In particular, I take the situated, embodied nature of our being in the world to be the core of online intelligence within the context of this paper.
It is well established that the phenomenological tradition emphasizes the role of emotions in our being in the world [19, 24, 25]. This has to be acknowledged when considering profound disturbances of emotions - like in anxiety disorders, GAD being an example - from a phenomenological perspective. Heidegger, who also had a particular interest in psychiatry [26, 27], explains that our engagement with the world always takes place in some mood. Mood, like the weather, is always there. We may be happy, sad, or anxious, but there is always a mood in which we engage our environment. And our mood is profoundly related to the actual options we appreciate in the environment [15, 25]. It is, therefore, likely that anxious patients will appreciate and perceive all kinds of smaller and bigger threats in their environment, because their mood makes them focus on such dangerous possibilities. So, an anxious patient won't have much trouble finding things to worry about, and one could phenomenologically explore this issue with respect to GAD. However, this is not the focus of my paper; I take the 'online intelligence' angle on worrying in GAD.
Notably, usually online and offline cognitive activity go hand in hand: we combine these processes. In fact, in our everyday activities it is about a balance between these two, and about the ability to change one's approach in accordance with the task we perform.
In the next section I show how both accounts - concerning the unhelpful worrying strategy on the one hand and the online intelligence view on the other - can be linked in a way that brings forward a deeper metacognition in GAD: a metacognition about the nature of our interaction with the world.
In the light of the analysis so far, how should we characterize worrying? It is a kind of thinking that takes a detached, offline - disembodied and disembedded - approach to future situations and scenarios that might become reality. Patients overusing such offline intelligence may be thinking all day about what it will be like to go to a supermarket and interact with the cashiers - whilst not actually going there but staying in bed instead. And without any of the environmental input that is in place in an actual supermarket, the patient can easily go on worrying for hours. The embodied approach recognizes and emphasizes that the human bodily resources are richer than is consciously available or accessible when detached contemplation is going on. The body is the means of online intelligence, especially equipped for actual engagement, not for offline contemplation about scenarios that may or may not become reality. In fact, it is virtually impossible to make a cognitive map for all events that may occur in a supermarket. Relying on such detached knowledge about the environment made it hard for robots to smoothly deal with their actual environment. Meanwhile, our online embodied cognitive capacities are perfectly capable of dealing with a vast variety of situations at the moment they occur. We are, as it appears, equipped for actual interaction rather than for detached contemplation about such interaction. This idea or 'metacognition' may be hard to accept for GAD patients, as it took time for AI scholars to accept its truth.
Let us take a further step. The metacognition with respect to the value of worrying on the part of the patient could be understood in the following terms: the patient appears to believe somehow that the offline, rational cognitive capacities can more or less be identified with our capacities as a whole. The idea of the human being is, in a way, equated with the rational Cartesian Ego: a detached mind, contemplating various scenarios. In contrast to such a Cartesian view of cognition and rationality stands the twentieth century pragmatic turn in phenomenology that focuses on actions (rather than thoughts), body, and a world or environment in flux (rather than the static object of detached contemplation). In other words, there is a parallel between the approach to being in the world that an exemplary GAD patient seems to take and a specific philosophical, Cartesian approach to the human agent. And since the GAD patients are likely to avoid the feared circumstances, the patients are (at least to some extent) deprived of the opportunity to experience the richness of their resources in actual online engagement. Consequently, their 'Cartesian' metacognition will remain uncorrected.
The gist of the analysis of the worrying process in terms of offline intelligence is that there appears to be a vital metacognition, namely that offline contemplation is the means of choice for preparing for future situations - ignoring the online cognitive and behavioral resources. We also noted that GAD patients tend to underestimate their ability to deal with whatever scenarios might obtain in reality. Taking these together, I propose the following perspective on worrying in GAD: These patients may underestimate their coping skills in such feared situations, because they tend to largely overlook their embodied-embedded resources, while they are convinced that detached contemplation is a helpful means to deal with the world. In fact, overlooking the online aspect of the human nature cannot but result in an underestimation of the actual skills one possesses.
Although the focus of the present paper is on metacognitions, it is important to note that GAD is not just a cognitive stance on our interaction with the environment, but a disorder in which people are anxious, and, therefore, a disorder that is about mood, emotions, stress and distress. The burden to the patient will often consist of - apart from avoidance - continuous anxiety and stress. Still, in this paper I focus on unhelpful (meta)cognitions, not on emotional distress.
It is clear that offline contemplation is not at all wrong or bad in itself; it is merely the uncontrolled and prolonged contemplation as it occurs in GAD that is unproductive and unhelpful. In fact, since we are no Cartesian Egos but embodied and embedded agents - not going anywhere without our bodies - the contemplation-strategy should be used selectively. We will never be able to cognitively grasp completely how we do what we do every day. Many of our own online capacities are likely to remain a mystery to ourselves: we do not even know how we raise our own arm. Thinking about behavioral responses will never be the same as actual engagement, and is, therefore, indeed, only of limited value.
The worry-strategy is not only unproductive; it also stimulates avoidance of the actual situations. I mentioned that the avoidance of feared situations indeed deprives the patient of the opportunity to correct the metacognition by showing that the patient can actually deal with the situation. The (repeated) behavioral experience that one is able to deal with actual situations is probably the strongest means to achieve metacognitive correction. This could also be part of the explanation of why the behavioral element of CBT is successful in treating GAD. In section 2, the case of a 25-year-old was mentioned, and sessions 5 and 6 of CBT contained a behavioral component: "Exposure to worried about situations was implemented (e.g., travelling alone, shopping alone) to test the accuracy of worry content against perceptions of real situations. This was intended to strengthen the replacement meta-belief that worries are inaccurate and therefore offer little advantage for coping." Such a behavioral approach is completely understandable from the phenomenological perspective proposed in this paper. Online intelligence should be experienced and performed as a means to achieve correction of the metacognitions.
A specific behavioral aspect of CBT in anxiety disorders in general could also be explained from an embodied-embedded perspective. Anxiety patients may be afraid that they will be overwhelmed by anxiety in the feared or worried-about situations. And these patients are right (at least in part): in such feared situations anxiety levels will increase, and usually these patients will flee from the situation as soon as possible. Yet, in CBT, anxiety patients are motivated to expose themselves to such stressful situations without fleeing. In such cases anxiety levels will initially increase - but since patients are not fleeing from the stressor, they will experience that the (extreme) anxiety levels eventually drop (often they decrease after, let's say, 10-20 minutes). Thus, CBT enables the patients to experience their bodily responses and to find out that, after a period of time, anxiety levels actually decrease in the feared situation. There is ample evidence for the effectiveness of this exposure component of CBT. On the phenomenological account developed so far, this is just one example of our adaptive, online bodily capacities and responses. Our body adapts - physiological anxiety responses decrease - to a situation in a way a worrying patient might well overlook. In fact, our bodily responses, like the physical responses to stressors, are not static, but dynamic in nature, and aimed at effective coping. The behavioral component of CBT, therefore, shows the patients their own online responses to the feared situation.
Phenomenologically informed research has provided evidence that 'disembodiment' is a problem in several psychiatric disorders, like schizophrenia and melancholia [30, 31]. In such cases of a 'disembodied mind' there is a bodily (mediated) disconnect between the mind and the environment. For instance, in melancholia patients may feel completely alienated from the world, not 'in touch' with it anymore. Still, there is a difference between this kind of 'disembodiment' and the approach to GAD developed in the present paper. For this paper is not about actual disembodiment or perceptual changes in GAD, but about metacognitions on the part of the patient pertaining to his interaction with the environment. In other words, in (more severe) disorders like schizophrenia, there are actual (developmental) changes in bodily interaction with the world, whilst in GAD the main problem concerns metacognitions about how to deal with future situations. Meanwhile, it would be interesting to study GAD in the way that has been done in melancholia and schizophrenia, aiming to find signs of actual, e.g., perceptual disembodiment.
The idea put forward in this paper could also be explained to patients, I suggest. In brief, a therapist could say something like this to the patient: "You are more resourceful than you think you are, because you focus and rely on detached contemplation skills. In fact, however, you are ignoring that you are a human being, which means a being that has the specific ability to deal with real life situations rather than with contemplated ones. Granted, detached deliberation and contemplation are indispensable features of our being in the world. Yet, their value is limited; as soon as contemplation takes over and becomes uncontrollable - becomes worrying - your cognitive endeavors stop adding to your resources in real life situations. Part of the therapy will be to experience your own behavioral resources in the situations you fear. We will do this gradually, but over time you will be surprised by your own capacities in dealing with actual situations." Explaining this view to patients might help motivate them to, indeed, engage bodily - instead of via worrisome imagination - in (feared) situations, and experience their own adaptive skills. In addition, providing patients with this overall picture might encourage them to identify specific aspects of their own behavioral responses in the feared situation that were unanticipated, i.e., overlooked in the worry-scenarios, yet were effective in dealing with the stressor. Identifying such elements of their own behavior could deepen their belief that worrying is not a helpful strategy.
Still, the emphasis on correcting the metacognition about how to deal with the world, should not suggest that other metacognitive corrections are irrelevant when treating GAD. Usually, in CBT, the therapist and the patient identify various (meta)cognitions that contribute to the GAD symptoms in that particular patient. Identifying and correcting these specific (meta)cognitions is a powerful part of CBT. Meanwhile, I suggest that there appears to be an overarching or deep metacognition which ignores the resources of online intelligence. Correcting this metacognition could be part of CBT as well.
GAD patients hold certain metacognitions, circling around the value of worrying as a way of preparing for future events. Such metacognitions are shared by many people, but in GAD they are more extreme. In addition, GAD patients tend to underrate their own coping skills should the feared situation occur. I related these phenomena to a philosophical - 'Cartesian' - position that concentrates on rational deliberation, while (at least in part) ignoring our embodied-embedded existence. More specifically, from the perspective of an online account, the GAD-metacognitions are in fact mistakes about what we are. Like philosophers tended to ignore the body as well as the environment (embeddedness), GAD patients tend to either ignore or underrate the embodied-embedded skills human agents naturally have. From this biased perspective it becomes understandable that people start figuring out all kinds of scenarios in a detached offline fashion and at the same time tend to avoid the actual situation just as long as they do not feel that they have worked out all sorts of eventualities. The online intelligence approach, however, acknowledges that, indeed, the future is uncertain (the world being in flux). Meanwhile, at some point detached contemplation adds very little to our coping resources; its value is limited.
The proposed view adds to our understanding of why CBT is effective in these patients: on the one hand it aims at direct cognitive correction of the relevant GAD-metacognitions; on the other hand its behavioral component provides the GAD patients with the experience that their resources are much richer than prolonged (fearful) offline contemplation can imagine.
Since many people value worrying to some extent and many people do it from time to time, it might be that more people do not fully realize the limitations of offline contemplation on the one hand and the resources of online intelligence in dealing with our world on the other.
Gerben Meynen received a PhD in Philosophy and in Medicine. In 2007 he started his current research project (NWO-Veni grant) on free will and mental disorder at the Faculty of Philosophy, VU University Amsterdam. He also works as a psychiatrist at the Outpatient Clinic for Anxiety Disorders at GGZ InGeest, partner of the VU Medical Center, Amsterdam. His main research interest is philosophy and ethics of psychiatry.
This work was supported by The Netherlands Organisation for Scientific Research, Grant 275-20-016. I thank Alan Ralston for his helpful suggestions.
1. In technical terms 'being-there' [German: Dasein]. Wheeler (p. 122) uses the terms "human agency" and "the human agent". For the analysis and interpretation of 'being-in-the-world' and Wheeler's account as developed in this section, see also the references cited in the text.
GM is responsible for the full content of the manuscript. | https://peh-med.biomedcentral.com/articles/10.1186/1747-5341-6-7 |
Living in rural Georgia, it's easy to think that our government doesn't care much about our communities. From broadband access to outgrown infrastructure to transportation and education, rural communities like ours face many challenges — challenges that we're often left to solve on our own. Right now, internet access is a barrier to education, job growth and investment: How are kids supposed to do their homework, residents to work in good, modern jobs, or companies to choose to come to an area where high-speed or even reliable internet is sparse or nonexistent? Because of this and other barriers, we need to elect officials who are committed to working to help our area, not just their own interests or those of wealthy donors.
As a longtime North Georgia resident, I have seen our state senators and representatives make promise after promise about bringing funds or new investments to our area that have yet to materialize. We are past the time for empty promises — we need action. I’m truly excited to support a leader who has real, concrete plans for our area, and whose track record shows that she’ll follow through. Stacey Abrams has a dedicated plan for rural Georgia — one that includes full funding and support for our schools and raising teacher pay to $50,000 to attract and retain educators all over the state. Real investment in public education will give our teachers and our students the skills and support they need to succeed.
In addition, Abrams’ plan includes working with local governments collaboratively to bring broadband to underserved areas like ours. Federal funding — already allocated but never spent by the current crop of politicians — can be used immediately to make sure we can attract good jobs, new families, and more investment. Colleges like UNG can also be partners in this effort, and Abrams has a plan to work with places of worship, schools and libraries for broadband in rural communities.
Her plans for rural Georgia also include healthcare — because all of us deserve to live healthy, safe and vibrant lives. By expanding Medicaid, we can bring $4 billion in federal funding — money our taxes already pay for, but under Brian Kemp, we never benefit from — to help keep our hospitals open, make doctor's visits and prescriptions more affordable, and make healthcare accessible to over 500,000 Georgians. She also has a plan to incentivize doctors and nurses to work in rural areas, making sure your city or county has a pediatrician or family practitioner to keep our families healthy.
As residents, we love our community and the beauty of our region—and we deserve a government that will work with us, not stand in the way of our progress. We have a chance to choose a leader who can bring investment, infrastructure, and hope to North Georgia, and I’m excited to vote for her this November. | https://www.gainesvilletimes.com/opinion/letter-editor/opinon-abrams-will-help-rural-georgia/ |
Pharmacist practice in neonatal intensive care units in Australia and Poland
- Publication Type: Thesis
- Issue Date: 2018
This item is open access.
The quality and safe use of medicines is a global priority, particularly in high-risk patients such as those in the neonatal intensive care unit (NICU). Whilst medication misuse and errors have been widely reported in the published literature across all patient populations, of particular concern are those that occur in neonatal patients. Pharmacotherapy is heavily used within the NICU, with a reported average of 8.6 medications prescribed per patient. Furthermore, neonates have a unique set of challenges, including immature and constantly changing body-systems, a lack of suitable formulations for administration, as well as a lack of evidence to inform medicines use in infants, rendering this population particularly vulnerable to experiencing medication errors. Medication errors with the potential to cause harm are eight times more likely to occur in the NICU compared with adult wards, and are more likely to cause significant consequences ranging from pressure on clinical resources and increased healthcare costs, to adversely affecting the health outcomes of neonatal patients, i.e., impairing the development of organs and body systems due to neonates’ physiological inability to buffer errors. As key facilitators of the quality use of medicines (QUM), clinical pharmacists possess the skills necessary to improve medication management in the NICU. Whilst studies have showcased pharmacist interventions and reported significant decreases in medication errors in the NICU, they have failed to describe roles that are provided in actual NICU settings. As such, there is a distinct gap in knowledge relating to what roles and services are provided to NICUs in current pharmacy practice, as well as what impact pharmacist-led services have upon clinical outcomes in neonates. Without relevant practice standards, differences in healthcare systems, legislation, culture, and tertiary education across countries may lead to the variable provision of pharmaceutical care services to this setting. As a result, there is potential for the quality of pharmaceutical care provided to NICU patients to also differ, which may impact on patient outcomes. The World Health Organisation (WHO) reports that health inequalities are a major concern for health systems globally. Currently, there is no literature describing what a quality level of pharmacy practice entails in NICUs, nor are there any standardised means of measuring the quality of pharmaceutical care provided to NICU patients. Quality assurance is an important concept to confirm whether the level of pharmaceutical care being provided is optimal. Healthcare service quality is most commonly measured via key performance indicators (KPIs) or other quality indicators that assess practice performance, helping to identify service gaps. These indicators are formulated according to evidence-based national or international clinical practice guidelines. However, there is currently (and surprisingly) an apparent lack of medication management policies or KPIs/frameworks needed to guide QUM in the NICU. Health equity is a shared responsibility of all nations worldwide, and it is a fundamental right of each human being to receive the highest possible standard of healthcare. The RIO Political Declaration on Social Determinants of Health states that all nations should collaborate to identify best practices and adopt coherent policies that promote uniformity across health settings worldwide.
Whilst there are significant differences in practice between third and first world countries, it is apparent that there are also variances in pharmacy practice between industrialised countries in Europe, as well as the US, UK, Australia, New Zealand and Canada. It is clear that many nations are challenged in striving for this global uniformity, regardless of their population, location, or wealth. This is also apparent in the context of pharmacy practice where, aside from large studies commissioned by the WHO, European Association of Hospital Pharmacists (EAHP) or the American Society of Hospital Pharmacists (ASHP) comparing general hospital pharmacy services around the world, there is little comparative research focussing on pharmacist practice in NICUs transnationally. In summary, there is a need to better understand the current state of pharmacy practice in NICUs worldwide, to identify specific issues relating to medication management issues or pharmacy practice, and to create reference points for quality pharmaceutical care and/or benchmarks against which to compare changes in international hospital pharmacy practice.
| https://opus.lib.uts.edu.au/handle/10453/129436
Incarnation & Reincarnation: Your Soul Changes Clothes Too!
Birth, death and rebirth form the major events in the circle of life and incarnation. Hinduism speaks of two certainties: firstly, that any individual who has been born must ultimately die; and secondly, that whoever dies will be reborn, unless freed from the process of incarnation.
The question then is why does the soul have to go through this cyclical journey? Are there obligations that cannot be dealt with in one lifetime? The answer involves the understanding of another concept: Karma. The literal translation of the Sanskrit word, karma, is work or duty. Birth presupposes a soul being assigned some particular duty that it has to accomplish through life. Every person has the power to generate both good karma and bad karma. Rebirth can be looked upon as an opportunity provided to the soul for nullifying or equalizing bad karma.
Let’s take a quick look at the processes that precede birth. When the higher soul is ready to incarnate, superior beings design the destiny of the incarnated soul. This serves as a blueprint for the life to be. The higher soul then embarks upon a journey through the lower mental world by moving “downward” with the seed of consciousness and forms the lower mental body. Following this, the higher soul extends the emotional permanent seed into the astral world thus forming the astral body. Finally, the physical permanent seed is attached during fertilization following the union of the sperm and the egg cell. The incarnated soul is the final entity to be lodged onto the physical body. This happens during the seventh month of pregnancy and the soul is harboured in the 12th chakra, located a foot above the head.
The processes mentioned above lead to birth in the physical world as we know it. Throughout life, the incarnated soul is provided opportunities to engage in good karma and bad karma. Moreover, the karmic blueprint that is developed for the incarnated soul is devised in such a way that some of the bad karma of its past lives get equalized.
People on the spiritual path, especially Arhatic yogis often realize that their lives become more difficult than it was before they became spiritually inclined and active. This is because higher spiritual practices hasten the process of incarnation in the sense that the incarnated soul is provided more opportunity to unite with the higher soul. This can be done only when a large part of the bad karma is neutralized.
So, how does one achieve liberation from the cycle of birth, death and rebirth? Is ‘nirvana’ or ‘moksha’ possible only if all negative karma has been equalized? How many lifetimes does it take to achieve freedom from the cycle of incarnation? To be able to arrive at the answers of such questions would require one to delve deeper into the realms of spirituality. The books and courses designed by Master Choa Kok Sui are truly enlightening and might help you understand many philosophies and clarify many doubts that you might have.
Our instructors, Nuray Hanım and Ali Bey, were very hospitable, polite and friendly. They taught us the curriculum well, probably within the limits given to them. No problem up to that point. However, I think the course was insufficient. That is my opinion. Overall, I was not satisfied; it felt unstructured. With love. | https://www.thepranichealers.com/pages/en/our-system/incarnation
1905: On 2 January 1905, American astronomer Charles Dillon Perrine discovered Elara, then the seventh of Jupiter's known satellites.
It orbits 11 737 000 kilometres from Jupiter and has a diameter of 76 kilometres.
Leda, Himalia, Lysithea and Elara may be the remnants of a single asteroid that was captured by Jupiter and broken up. In mythology, Elara was the mother by Zeus of the giant Tityus.
1959: On 2 January 1959, the USSR launched Lunik I in an attempt to hit the Moon. The spacecraft missed the Moon and continued past it into interplanetary space. It became the first man-made object to achieve an orbit around the Sun. | http://www.esa.int/Our_Activities/Space_Science/2_January
Location:
StarHub Green
Company: StarHub Ltd
Job Description
Provide Finance Business Partnering support for the Division, delivering comprehensive, timely and effective financial analysis, insights and recommendations.
Responsibilities
- Support the business in setting financial KPIs by providing key business insights and analysis outcomes to drive and track performance.
- Lead the quarterly forecasting and annual budget exercises by reviewing financial assumptions, conducting pipeline reviews and highlighting risks/issues.
- Participate in the annual Product Costing exercise and drive cost syndication to ensure a reasonable margin for each product.
- Review business cases and develop financial business models to assess financial feasibility of initiatives/products before presenting to the business leaders.
- Ensure timely and accurate closing of finance and accounting activities and providing in-depth analysis of the financial performance.
- Provide variance explanations of actual vs budget / forecast, includes reporting of key findings to the business.
- Participate in the annual target-setting exercise to ensure appropriate targets are assigned to sales personnel to meet the budget.
- Align and translate strategy into target deployment, including maintaining the profitability model for target measurement.
- Any other ad hoc projects as assigned
Qualifications
- Degree in Accountancy (preferred) with minimum 5 years relevant working experience.
- At least 3-4 years of Financial Planning and Analysis experience preferred.
- Strong analytical and conceptual skills, with attention to details and accuracy while being able to provide crisp and clear summarized analysis.
- Good communication with strong teamwork and interpersonal skills.
- Can handle ambiguity independently and deliver within strict timeline in a fast-paced environment.
- Tenacity and problem-solving skills.
- Emotional resilience and ability to withstand pressure on an ongoing basis.
- Competent in PC applications e.g. Microsoft Office. Knowledge of SAP, ANAPLAN, UiPath, Macros will be an advantage. | https://careers.starhub.com/job/Senior-Financial-Analyst/935234110/ |
We are recruiting Paid Staff – anyone is welcome
Indian Society of WA (ISWA), a not-for-profit organisation, is holding its biggest event of the year on 7th and 8th November 2020. It will run at Claremont Showgrounds from 3pm till 9.30pm on both days. The Premier is attending the event.
However, to make the event safe under the currently applicable Phase 4 Easing restrictions, we need COVID Safety Marshals to manage patrons and advise them on social distancing and other protocols. We are looking to train some 50 people and obtain the approvals. ISWA will cover the related expenses, and the training may also help you find additional employment in the future. This would be a paid job, apart from the training, if you can commit to the two days. Preference will be given to students who have registered with ISWA and already have an NPC. For those interested, some additional information is provided below:
In summary COVID Safety Marshals must:
• be over 18 years of age;
• have completed the online training course “COVID-19 Infection Control Training” provided by Aspen Medical and found at https://www.health.gov.au/resources/apps-and-tools/covid-19-infection-control-training#registration;
• have a National Police Clearance (NPC) that is less than 12 months old;
• wear distinctive clothing which easily identifies them as a COVID Safety Marshal (e.g. a safety (Hi-Vis) vest or uniform) to the public and any authorized officer;
• not be simultaneously performing another role at the event; and
• be responsible for monitoring that public health measures approved in the COVID Event Plan are implemented and complied with.
Duties of COVID Safety Marshals. A training program would be conducted for this:
• Promote, and encourage (as necessary) appropriate infection control practices. This could include monitoring the cleaning log and ensuring personal hygiene measures are maintained (e.g. provision of hand sanitizer at high touch points) by persons at the event (whether as patrons, employees or contractors);
• Promote, and encourage (as necessary) patrons to comply with the physical distancing principles (i.e. maintaining 1.5 m between individuals/groups who do not know each other).
• Ensure the COVID Event Plan requirements are being implemented effectively, and alert management of any issues that cannot be resolved. Please advise if you can help us get about 60 people to do this job.
• To be present on the ground and do the duty from 3pm till 10pm on 7th and 8th November.
If you’re a student, you will need an ISWA Student ID/Number to express your interest in these roles. If you are not yet an ISWA Student Wing member, register now using this link.
Registration for Covid Marshal role is now closed. | https://diwalimela.com.au/covidmarshalls/ |
Q:
HTML adding user input values together into one value that is updated depending on the input
I am not the best with HTML and I have not been able to find out how to do this by looking around on the internet, so I was hoping someone here could point me in the right direction.
This is the code I have now :
<p>
<label for="food">Food</label>
<input type="number" onkeypress="return isNumberKey(event)" id="food" />
</p>
<p>
<label for="auto">Auto</label>
<input type="number" onkeypress="return isNumberKey(event)" id="auto" />
</p>
<p>
<label for="bills">Bills</label>
<input type="number" onkeypress="return isNumberKey(event)" id="bills" />
</p>
<p>
<label for="monthlybudget">Monthly Budget</label>
/* Add food + auto + bills whenever there is input */
</p>
I am trying to add the user input of food, auto, and bills so that I can create a monthly budget depending on what their inputs were in the input fields. Each time a user inputs a value, the monthly budget field should be updated, but I cannot figure out how to do that.
For example:
Food: 200
auto: 150
bills: 400
Monthly budget $750
A:
Here is how you can do it without jQuery:
// When document elements have loaded:
document.addEventListener('DOMContentLoaded', function() {
// Get reference to all inputs and output:
var foodInput = document.getElementById('food');
var autoInput = document.getElementById('auto');
var billsInput = document.getElementById('bills');
var budgetOutput = document.getElementById('monthlybudget');
function calculateOutput() {
// Convert input values to numbers (+) and put sum in output:
budgetOutput.textContent = +foodInput.value + +autoInput.value + +billsInput.value;
}
// Calculate on any change in inputs:
foodInput.oninput = calculateOutput;
autoInput.oninput = calculateOutput;
billsInput.oninput = calculateOutput;
// Calculate at load:
calculateOutput();
});
<p>
<label for="food">Food</label>
<input type="number" id="food" />
</p>
<p>
<label for="auto">Auto</label>
<input type="number" id="auto" />
</p>
<p>
<label for="bills">Bills</label>
<input type="number" id="bills" />
</p>
<p>
Monthly Budget: <span id="monthlybudget"></span>
</p>
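A side note on the unary `+` conversions used in the script above (a standalone illustration, not part of the original answer): an empty field coerces to `0` under unary `+`, which keeps the sum well defined, whereas `parseFloat` would return `NaN` and poison the total.
// Unary plus vs. parseFloat on raw <input> value strings:
console.log(+"200" + +"150" + +"400"); // 750
console.log(+"");                      // 0   -> an empty field adds nothing
console.log(parseFloat(""));           // NaN -> NaN would propagate into the sum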
| |
Q:
What is the complexity class most closely associated with what the human mind can accomplish quickly?
This question is something I've wondered about for a while.
When people describe the P vs. NP problem, they often compare the class NP to creativity. They note that composing a Mozart-quality symphony (analogous to an NP task) seems much harder than verifying that an already-composed symphony is Mozart-quality (which is analogous to a P task).
But is NP really the "creativity class?" Aren't there plenty of other candidates? There's an old saying: "A poem is never finished, only abandoned." I'm no poet, but to me, this is reminiscent of the idea of something for which there is no definite right answer that can be verified quickly...it reminds me more of coNP and problems such as TAUTOLOGY than NP or SAT. I guess what I'm getting at is that it's easy to verify when a poem is "wrong" and needs to be improved, but difficult to verify when a poem is "correct" or "finished."
Indeed, NP reminds me more of logic and left-brained thinking than creativity. Proofs, engineering problems, Sudoku puzzles, and other stereotypically "left-brained problems" are more NP and easier to verify from a quality standpoint than poetry or music.
So, my question is: Which complexity class most precisely captures the totality of what human beings can accomplish with their minds? I've always wondered idly (and without any scientific evidence to support my speculation) if perhaps the left-brain isn't an approximate SAT-solver, and the right-brain isn't an approximate TAUTOLOGY-solver. Perhaps the mind is set up to solve PH problems...or perhaps it can even solve PSPACE problems.
I've offered my thoughts above; I'm curious as to whether anyone can offer any better insights into this. To state my question succinctly: I am asking which complexity class should be associated with what the human mind can accomplish, and for evidence or an argument supporting your viewpoint. Or, if my question is ill-posed and it doesn't make sense to compare humans and complexity classes, why is this the case?
Thanks.
Update: I've left everything but the title intact above, but here's the question that I really meant to ask: Which complexity class is associated with what the human mind can accomplish quickly? What is "polynomial human time," if you will? Obviously, a human can simulate a Turing machine given infinite time and resources.
I suspect that the answer is either PH or PSPACE, but I can't really articulate an intelligent, coherent argument for why this is the case.
Note also: I am mainly interested in what humans can approximate or "do most of the time." Obviously, no human can solve hard instances of SAT. If the mind is an approximate X-solver, and X is complete for class C, that's important.
A:
I don't claim this is a complete answer, but here are some thoughts that are hopefully along the lines of what you're looking for.
NP roughly corresponds to "puzzles" (viz. the NP-completeness of Sudoku, Minesweeper, Free Cell, etc., when these puzzles are suitably generalized to allow $n \to \infty$). PSPACE corresponds to "2-player games" (viz. the PSPACE-completeness of chess, go, etc.). This is not news.
People generally seem to do alright with finite instances of NP-complete puzzles, and yet find them non-trivial enough to be entertaining. The finite instances of PSPACE-complete games that we play are considered some of the more difficult intellectual tasks of this type. This at least suggests that PSPACE is "hitting the upper limits" of our abilities. (Yet our opponents in these PSPACE-complete games are generally other people. Even when the opponents are computers, the computers aren't perfect opponents. This heads towards the question of the power of interactive proofs when the players are computationally limited. There is also the technicality that some generalizations of these games are EXP-complete instead of PSPACE-complete.)
To an extent, the problem sizes that arise in actual puzzles/games have been calibrated to our abilities. 4x4 Sudoku would be too easy, hence boring. 16x16 Sudoku would take too much time (not more than the lifetime of the universe, but more than people are generally willing to sit to solve a Sudoku puzzle). 9x9 seems to be the "Goldilocks" size for people solving Sudoku. Similarly, playing Free Cell with a deck of 4 suits of 13 cards each and 4 free cells seems to be about the right difficulty to be solvable yet challenging for most people. (On the other hand, one of the smartest people I know is able to solve Free Cell games as though she were just counting natural numbers "1,2,3,4,...") Similarly for the size of Go and Chess boards.
Have you ever tried to compute a 6x6 permanent by hand?
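For reference, the permanent of an $n \times n$ matrix $A = (a_{ij})$ is defined (standard definition, not from the original answer) as
$$\operatorname{per}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)},$$
so a 6x6 permanent already sums $6! = 720$ products; no essentially faster exact method than Ryser's $O(2^n n)$ formula is known, and computing the permanent of a 0-1 matrix is #P-complete (Valiant).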
I suppose the point is that if you take natural problems in classes significantly above PSPACE (or EXP), then the only finite instances that people are capable of solving seem to be so small as to be uninteresting. Part of the reason "natural" is necessary here is that one can take a natural problem, then "unnaturally" modify all instances of size $< 10^{10}$ so that for all instances a human would ever try the problem becomes totally intractable, regardless of its asymptotic complexity.
Conversely, for problems in EXP, any problem size below the "heel of the exponential" has a chance of being solvable by most people in reasonable amounts of time.
As to the rest of PH, there aren't many (any?) natural games people play with a fixed number of rounds. This is also somehow related to the fact that we don't know of many natural problems complete for levels of PH above the third.
As mentioned by Serge, FPT has a role to play here, but (I think) mostly in the fact that some problems naturally have more than one "input size" associated with them.
A:
I think one is led to the wrong model by trying to extrapolate from the kind of things the human brain appears to compute, and I think it would be better to take the opposite view and instead extrapolate from the computational model it is.
So, to me the complexity class that most reasonably captures the human mind is the nonuniform circuit class $TC^0$. This view is supported by modeling the workings of the brain as a neural network performing computations in an instant.
Also, I do not agree with the statement in the question that the human mind can simulate a Turing machine. Rather, what it can do is to simulate the finite control of the Turing machine. To perform very complicated tasks, it seems necessary to be able to record information on a "tape".
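For reference, the standard definition assumed here: $TC^0$ is the class of languages decided by families of constant-depth, polynomial-size circuits with unbounded fan-in majority (threshold) gates and negations. It sits at the bottom of the hierarchy
$$AC^0 \subsetneq TC^0 \subseteq NC^1 \subseteq P,$$
where the first inclusion is known to be strict (PARITY is in $TC^0$ but not in $AC^0$), while the remaining inclusions are not known to be strict.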
A:
The Tractable Cognition thesis postulates that human cognitive capacities are constrained by computational tractability. In this way, the P-Cognition thesis uses deterministic polynomial time as a model for computational tractability, while in the paper below, it is argued that the FPT-Cognition thesis is more appropriate. See Iris van Rooij's article in the June 2009 edition of the Parameterized Complexity Newsletter for a more detailed discussion and pointers to other papers.
Iris van Rooij. The Tractable Cognition thesis. Cognitive Science, 32, pp. 939-984, 2008.
Iris van Rooij. Parameters that make Cognitive Work Light. FPT News: Parameterized Complexity Newsletter, June 2009.
| |
…including personnel administering medication on a field trip.
…physician’s order and/or an appropriately labeled, original medication container.
…in its original container will be used for students attending a field trip.
…medication must understand what to do in an emergency.
…of area where medications are administered.
…medication label and will double-check the dose. The medication will be given within 30 minutes either side of the prescribed time.
…occurrences, and a method of returning any medication not administered. | http://www.sau31.org/district/board-of-directors-1/policy-handbook/j---students/medication-administration-on-school-field-trips
Barreira and B. Saussol; J. Dai and S. …, Society, Vol. …; Mathematics (Oxford Journals), Vol. …; Elekes and T. …, Advances in Mathematics, Vol. …; Fan, L. Liao, J. Ma and B. …, Indagationes Mathematicae, Vol. …; Advances in Pure Mathematics, 1, doi: ….
This result was generalized to the …-ary case by Eggleston. Billingsley proved a more general version of this result in the context of probability spaces. Sets such as $N_p^K$ are studied in the context of multifractal theory (see [1,7,9,11]) and Billingsley-type results have been proved by several authors in this context. Recently, such a result has been proved for a countable symbol space.
Billingsley, Probability and Measure. Weak convergence and mappings.
Patrick Billingsley was Professor Emeritus of Statistics and Mathematics at the University of Chicago and a world-renowned authority on probability theory before his untimely death in 2011. Whether you've loved the book or not, if you give your honest and detailed thoughts then people will find new books that are right for them.
The limit, as the number of particles tends to infinity, of the random empirical measures associated with the Bird algorithm is shown to be a deterministic measure-valued function satisfying an equation close, in a certain sense, to the Boltzmann equation.
For a proof, see either Ash or Billingsley. One focus of probability theory is distributions that are the result of an interplay of a large number of random impacts. Patrick Paul Billingsley (May 3, 1925 – April 22, 2011) was an American mathematician and stage and screen actor, noted for his books in advanced probability theory and statistics. He was the author of Convergence of Probability Measures (Wiley), among other works.
Changes of measure arise in areas of wide applicability, such as mathematical finance, in the setting of so-called equivalent pricing measures. We introduce a monotone class theory of Prospect Theory's value functions, which shows that they can be replaced almost surely by a topological lifting comprised of a class of compact isomorphic maps that embed weakly co-monotonic probability measures, attached to state space, in outcome space.
Homework should be handed in on the dates specified in the table below. Lectures will be self-contained and in principle you do not need any of the books. Both written texts as well as a LaTeX pdf are allowed. Durrett, Probability: Theory and Examples, CUP paperback. Lehmann, Theory of Point Estimation, Wiley. Precise understanding of the concepts of probability space and random variable is therefore essential. In the opposite direction, convergence in distribution implies convergence in probability only when the limiting random variable X is a constant.
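For reference, the convergence notions these fragments refer to have the standard definitions
$$X_n \xrightarrow{P} X \iff \lim_{n\to\infty} P\big(|X_n - X| > \varepsilon\big) = 0 \text{ for every } \varepsilon > 0,$$
$$X_n \Rightarrow X \iff F_{X_n}(t) \to F_X(t) \text{ at every continuity point } t \text{ of } F_X,$$
and almost-sure convergence implies convergence in probability, which implies convergence in distribution; the last implication can be reversed exactly when the limit $X$ is a constant.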
Weak convergence of probability measures. These additional notes contain a short overview of the most important results on weak convergence of probability measures. The presentation of this material was influenced by Williams.
Central Library of Sharif University of Technology - Billingsley dimension in probability spaces, Cajar, Helmut.
Convergence almost surely implies convergence in probability. A new look at weak-convergence methods in metric spaces, from a master of probability theory: in this new edition, Patrick Billingsley updates his classic work Convergence of Probability Measures to reflect developments of the past thirty years. Three uses: ordering, approximation and convergence of probabilities.
Martingales and martingale convergence theorems, if not covered in Billingsley, Convergence of Probability Measures, 2nd ed. A change of probability measure often relies on the specification of a nonnegative martingale process. Convergence of the probability density functions implies convergence in distribution. Example: the Central Limit Theorem. $Z_n$ and $Z$ can still be independent even if their distributions are the same! Topics include probability measures, Lebesgue-Stieltjes integration, sigma-fields, random variables, expectation, moment inequalities, independence, convergence of random variables and sample moments.
By the definition of Skorokhod's metric (see, e.g., Billingsley on convergence of probability measures, of which he gave me a copy). According to Billingsley [6, Theorem 1.…]. To illustrate the meaning of the total variation distance, consider the following thought experiment. Discrete-parameter martingales.
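The total variation distance invoked above has the standard definition
$$d_{TV}(\mu, \nu) = \sup_{A \in \mathcal{F}} \big|\mu(A) - \nu(A)\big|,$$
i.e., the largest discrepancy that the probability measures $\mu$ and $\nu$ can assign to a single event.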
You should start working on each homework early; that way you will have time to ask questions in class. The course is based on the book Convergence of Probability Measures by Patrick Billingsley, partially covering Chapters …, 16, as well as appendices. The main goal of the paper is to study the asymptotic behavior of a random walk with stationary increments, which are interpreted as discrete-time speed terms satisfying the Langevin equation.
It provides mathematically complete proofs of all the essential introductory results of probability and measure theory. …the development of probability theory, in particular the theory of stochastic processes.
Tightness and relative compactness of families of probability measures. Resnick, A Probability Path, Birkhauser, Exercise 3.
Butzer and Hahn. Convergence of Probability Measures by P. Billingsley. Weak convergence of probability measures, characteristic functions of random variables, weak convergence in terms of characteristic functions.
Lebesgue-Stieltjes measures and probability distribution functions. | https://ininxiecito.tk/billingsley-dimension-in-probability-spaces.php
This session we're going to be heavily focusing on color and what you, as the artist, can achieve by thinking out your color before you begin to paint. But before we do that, let's review the four properties of color:
One: Hue - the color itself. For example, all the blues are one 'hue'.
Two: Temperature. Within the blue 'hue' are colors that are blue but are also warmer or cooler versions of the blue. A color's temperature is relative to the colors around it.
Three: Intensity - the most pure form of a color, found along the outside of the color wheel, is the most intense. As colors are mixed with other colors they become less intense, or grayed, in proportion to where the added color is on the color wheel. The colors exactly opposite - the complements - will produce a gray when mixed in equal portions.
Four: Value - how dark or light the color is. Yellows are usually the lightest, or most high key, while purples are usually the darkest. If you are using a medium where you will be adding white, light values can be mixed.
So how will you use color to evoke a mood, or to present an idea of what you have found memorable? Color will do that for you. How will you show the heat of the desert? Using color. The cool spray of the ocean? Again, color. How about something sad or somber? Or happy and light? Through the use of color.
As the artist, you will start to develop themes for your paintings. And these themes can be expressed using color. So, what is the best way to find out what colors will work for you? By experiment and discovery. And that's what we'll be working on. There are lots of tools in an artist's workbox. One of them is the color scheme, and we'll be working with several of these. This week we'll be looking at the Monochromatic Scheme.
| |
Dario Floreano Keynote Speaker
- Pioneering thinker and practitioner in the field of robotics and A.I.
- Inventor and entrepreneur in robotics and A.I.
- Director of the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology Lausanne
Dario Floreano's Biography
Professor Dario Floreano is director of the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology Lausanne (EPFL). He is also founder and director of the Swiss National Center of Competence in Robotics, which funds collaborative research and technology transfer in wearable and mobile robots.
Dario has held research positions at Sony, at Caltech/JPL, and at Harvard University. He pursues research in robotics and A.I. with the goal of making machines more life-like and bridging the gap between humans and machines. He is a recognized pioneer of Evolutionary Robots that autonomously co-evolve bodies and brains, of Autonomous Drones that safely fly in confined spaces and near humans, and of Soft Robots that redefine the mechanical and control foundations of robots of the future. He is currently interested in extending human experience and collaboration with wearable technologies and robots.
Dario has published more than 350 articles, tens of patents, and 4 books on Artificial Neural Networks (Il Mulino), Evolutionary Robotics (MIT Press), Bioinspired Artificial Intelligence (MIT Press), and Bio-inspired Flying Robots (Springer Verlag). His work has been cited more than 15,000 times and covered worldwide by hundreds of international media outlets, including CNN, BBC, and National Geographic, and has inspired the book Prey by best-selling novelist Michael Crichton (author of Jurassic Park).
Dario Floreano is on the Advisory Board of Future and Emerging Technologies of the European Commission, has been a founding member of the World Economic Forum Council on robotics and smart devices, is co-founder of the International Society of Artificial Life, Inc., and is an executive board member of the International Society for Neural Networks. He has spun off two robotics companies: senseFly (now part of the Parrot Group), which has become a world leader in imaging drones for professionals, and Flyability, which produces a game-changing drone for close inspection of confined spaces. | https://www.chartwellspeakers.com/speaker/dario-floreano/
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a Continuation of International application Ser. No. PCT/EP91/00906, filed May 15, 1991, now WO 91/18195.
The invention relates to a device for adjusting the throttle valve of an internal combustion engine, including a set value signal generator having an accelerator pedal and two position signal generators connected thereto, each outputting a set value signal characterizing the position of the accelerator pedal, and a control unit driving an actuation element for the adjustment of the throttle valve as a function of the set value signals.
In such a device the throttle valve is adjusted by an electromotive actuation element which is driven by a control unit. For this purpose, the control unit receives a set value for setting the throttle valve from a set value signal generator which has an accelerator pedal and two position signal generators connected thereto. The two position signal generators are usually potentiometers which have a common slide that taps off two separate resistance tracks.
The operational reliability of the device is thus substantially increased since the two potentiometers can be tested with respect to one another. A defect in one of the two potentiometers can be detected from the deviation of the output signals.
A further safety problem in such a device is the possible blocking of the set value signal generator. The blocking cannot be detected through the two potentiometers. In addition, when blocking occurs it is no longer possible to change the throttle valve position through the accelerator pedal. If the blocking occurs during an overtaking procedure, the driver can no longer accelerate or if he or she takes his or her foot off the accelerator pedal, the throttle valve no longer returns to the no-load position.
It is accordingly an object of the invention to provide a device for adjusting the throttle valve of an internal combustion engine and a method for testing the device, which overcome the hereinafore-mentioned disadvantages of the heretofore-known devices and methods of this general type and which do so in such a way that, even when the accelerator pedal is blocked, the desire of the driver to set full-load or no-load can be reliably detected and set.
With the foregoing and other objects in view there is provided, in accordance with the invention, a device for adjusting the throttle valve of an internal combustion engine, comprising a set value signal generator having an accelerator pedal and two position signal generators each being connected to the accelerator pedal for outputting a set value signal characterizing a position of the accelerator pedal, a control unit connected to the set value signal generator, an actuation element being driven by the control unit for adjusting a throttle valve of an internal combustion engine as a function of the set value signals, a no-load switch disposed at the set value signal generator for responding when the accelerator pedal is touched, even before the position signal generators change position, and a full-load switch disposed at the set value signal generator for responding to a specific pressure exerted on the accelerator pedal being greater than a pressure being necessary for keeping the accelerator pedal in a specific position and for moving the accelerator pedal into another position, and the control unit setting the throttle valve in accordance with positions of the no-load switch and of the full-load switch, into a no-load position when the no-load switch and the full-load switch have not responded, and into a full-load position when the no-load switch and the full-load switch have responded.
According to the invention, a no-load switch and a full-load switch are provided on the set value signal generator. The no-load switch responds when the accelerator pedal is touched, that is to say as soon as the driver puts his or her foot on the accelerator pedal and before the set value which has been output changes. The full-load switch responds when there is increased pressure on the accelerator pedal. The level of this pressure is selected in such a way that it is always greater than the pressure which is necessary to hold the accelerator pedal in a specific position or to move it into another position. The full-load switch therefore only responds when the driver presses against the full- load stop or against a blocked accelerator pedal.
When the set value signal generator is blocked, there are two cases to be differentiated. If the driver wishes to carry on accelerating the vehicle, e.g. during an overtaking procedure, he or she presses with increased pressure on the accelerator pedal. Thus, the no-load and the full-load switch have responded and the control unit switches the throttle valve into the full-load position. If, on the other hand, the driver takes his or her foot off the accelerator pedal, thus wanting no-load to be set, neither the no-load nor the full-load switch are actuated. In this case, the control unit switches the throttle valve into the no-load position.
An allowance is therefore made for the desire of the driver to set full-load or no-load, although the set value signal generator is blocked and thus the set value signals from the potentiometers always remain the same. Increased safety, e.g. for the critical case of full-load, is provided by the use of two switches. Full-load and no-load are only set if either both switches have responded or both switches have not responded.
In accordance with another feature of the invention, the throttle valve is only switched into the full-load or no-load position when the so-called single-fault criterion is fulfilled. If blocking of the accelerator pedal occurs, a fault is already present. In the case of each further fault in the system, the number of combinations of possible faults is multiplied. Therefore, full-load or no-load is only set when the two potentiometers indicate the same position of the accelerator pedal, since otherwise a further fault is present in one of the potentiometers.
In accordance with a further feature of the invention, a test is performed as to whether or not the position signals of the potentiometers have changed within a specific time interval. When such a change occurs, blocking is then in fact no longer occurring so that full-load in particular may no longer be set.
In accordance with an added feature of the invention, safety can be increased once more if one or more brake switches which respond when the brake pedal is actuated are included in the system. If the driver actuates the brake, it can be assumed that he or she does not desire to set full-load under any circumstances but rather no-load. Therefore, in this case the throttle valve is switched into the no-load position irrespective of the position of the other switches.
In accordance with an additional feature of the invention, the full-load switch additionally has a pressure sensor having an output signal which is proportional to the pressure exerted on the accelerator pedal. By comparing these output signals, a two out of three redundancy with respect to the two throttle valve potentiometers can be achieved.
In accordance with yet another feature of the invention, the output signals are conducted from the opposite threshold value switch which is set to the pressure limit value of the full-load switch, and the function of the full-load switch can be tested.
In accordance with another mode of the invention, in normal driving operation of the vehicle, there is provided a method which comprises testing all of the switches and position signal generators described above with respect to one another for plausibility.
In accordance with a further mode of the invention, if an implausible combination occurs in this case, there is provided a method which comprises selecting that component having a position or switching state which contradicts that of the other components, as being defective.
In accordance with a concomitant mode of the invention, there is provided a method which comprises maintaining emergency operation in this case, on the basis of the remaining intact components.
Other features which are considered as characteristic for the invention are set forth in the appended claims.
Although the invention is illustrated and described herein as embodied in a device for adjusting the throttle valve of an internal combustion engine and a method for testing the device, it is nevertheless not intended to be limited to the details shown, since various modifications and structural changes may be made therein without departing from the spirit of the invention and within the scope and range of equivalents of the claims.
The construction and method of operation of the invention, however, together with additional objects and advantages thereof will be best understood from the following description of specific embodiments when read in connection with the accompanying drawings.
FIG. 1 is a schematic and block circuit diagram of a device for adjusting a throttle valve of an internal combustion engine;
FIGS. 2 and 3 are two fragmentary, diagrammatic, elevational views of a set value signal generator; and
FIG. 4 is a flow diagram used for illustrating the mode of operation in the case of blocking of the set value signal generator.
Referring now to the figures of the drawing in detail and first, particularly, to FIG. 1 thereof, there is seen a set value signal generator which is designated by reference numeral 1. This set value signal generator 1 essentially contains an accelerator pedal 10 and a potentiometer 13 which outputs the position of the accelerator pedal 10 as a set value to a control unit 2. The control unit 2 is a microcomputer which performs a calculation on the set value signal and outputs a corresponding actuation signal for an actuation element 3. The actuation element 3 is an electric motor which is directly connected to a throttle valve 4 and adjusts the latter in accordance with the actuation signal.
FIGS. 2 and 3 show two views of the set value signal generator 1. The potentiometer 13 is a rotary potentiometer which is adjusted by a lever 15. Disposed inside a cylindrical housing of the potentiometer 13 are two resistance tracks which extend annularly independently of one another. A common slide which is connected to the lever arm 15 so as to be fixed in terms of rotation, taps off each of the two resistance tracks through a respective contact. The two contacts are electrically insulated from one another so that the effect of two potentiometers or position signal generators is achieved with the two separate resistance tracks.
The two resistance tracks are connected in opposite directions. In other words, the position of the slide which signifies the largest resistance value in one potentiometer signifies the smallest resistance value in the other potentiometer. As a result, a simple means of monitoring is provided since the sum of the two resistance values must always correspond to the maximum value of a potentiometer.
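The opposite wiring gives a simple plausibility check, sketched below with hypothetical names and values (not code from the patent): with a maximum track resistance $R_{max}$, the two tapped values must satisfy $r_1 + r_2 = R_{max}$ at every pedal position.
// Hypothetical plausibility check for the opposite-connected resistance tracks.
function tracksPlausible(r1, r2, rMax, tolerance) {
  return Math.abs((r1 + r2) - rMax) <= tolerance; // sum must equal the maximum value
}

console.log(tracksPlausible(3000, 7000, 10000, 50)); // true: readings consistent
console.log(tracksPlausible(3000, 5000, 10000, 50)); // false: one track is faulty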
The lever arm 15 is moved by the foot of the driver through the accelerator pedal 10, in the direction of rotation shown in FIG. 3. In this case, the respective position is detected by the potentiometer 13 and is fed to the control unit 2 as an electrical signal through feed lines 14.
A no-load switch 11 and a full-load switch 12 are provided on the lever arm 15. The two switches are located between the accelerator pedal 10 and the lever arm 15 and they are both constructed as push-button keys which only differ in the actuation force required.
The no-load switch 11 is constructed in this case in such a way that when the accelerator pedal 10 is actuated, it responds even before the lever arm 15 moves. A response of the no-load switch 11 therefore means that the driver has placed his or her foot on the accelerator pedal 10 but the lever arm 15 has not yet moved out of a no-load position.
On the other hand, the full-load switch 12 is constructed in such a way that it does not respond until the accelerator pedal 10 is depressed with a further increased force after a full-load stop has been reached. The full-load switch 12 thus also responds when the lever arm 15 is blocked in any position and the driver presses on the accelerator pedal 10 with increased force.
FIG. 4 shows part of the processing routine which is stored in the control unit 2 and is actuated in the event of blocking of the set value signal generator.
In a first case it will be assumed that the set value signal generator 1 is suddenly blocked during an overtaking or passing procedure. The driver continues to press on the accelerator pedal 10 since he or she requires even more engine power in order to be sure of completing the overtaking procedure.
In a step S0, in the case of each calculation routine for a new actuation signal to set the throttle valve 4, a test is performed as to whether or not the no-load switch 11 and the full-load switch 12 have both responded. Since this occurs in the above-mentioned first case, a step S1 follows with a test as to whether or not the same position of the lever arm 15 has been determined through the two resistance tracks of the potentiometer 13. If this is not the case, a branching occurs to an emergency program which only permits the internal combustion engine to continue operating at restricted power. This power restriction is necessary since in this case one of the two potentiometers is defective and thus a no-longer definable fault state is present.
If the two potentiometers agree, a step S2 follows, in which the response of two switches attached to the brake pedal is tested. The first of the two switches is the customary brake light switch. The second is also actuated with the brake pedal, like the brake light switch, and only serves to increase safety. Therefore, if either switch responds, this means that the driver is braking. In this case, it is assumed that the driver desires to set not full-load but rather no-load, and in a step S5 the throttle valve is adjusted into the no-load position. If, on the other hand, the brake is not actuated, in a step S3 the throttle valve 4 is adjusted into the full-load setting. It is to be noted that the steps S0 to S3 proceed identically, irrespective of whether the throttle is completely opened during normal, fault-free driving operation or full-load is requested by means of further pressure on the accelerator pedal 10 when the set value signal generator 1 is blocked. The steps S1 and S2 in this case serve to test whether or not a state is present which is critical in terms of safety and does not permit a full-load signal.
In a second case, it will be assumed that the set value signal generator is blocked when the throttle is being closed and the driver takes his or her foot off the accelerator pedal 10.
In this case, it is detected at the step S0 that neither the no- load switch 11 nor the full-load switch 12 have responded.
In a step S4, a test is then again performed as to whether or not the two potentiometers agree. In the negative case, the emergency program follows again; in the positive case, the throttle valve 4 is adjusted into the no-load position at the step S5. In this case as well, the routine proceeds identically, irrespective of whether it is a normal, fault-free no-load case or the set value signal generator 1 is blocked and the driver desires to set no-load by taking his or her foot off the accelerator pedal 10.
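Read as an algorithm, the decision routine of steps S0 to S5 can be sketched as follows. This is an illustrative reconstruction of the described flow, not code from the patent; all identifiers (throttleCommand, the switch flags, the potentiometer readings) are hypothetical.
// Illustrative sketch (hypothetical names) of the S0-S5 decision routine.
function throttleCommand(noLoadOn, fullLoadOn, pot1, pot2, brakePressed) {
  const potentiometersAgree = (pot1 === pot2);     // single-fault criterion
  if (noLoadOn && fullLoadOn) {                    // S0: both switches responded
    if (!potentiometersAgree) return "EMERGENCY";  // S1: restricted engine power
    if (brakePressed) return "NO_LOAD";            // S2: braking overrides full-load
    return "FULL_LOAD";                            // S3
  }
  if (!noLoadOn && !fullLoadOn) {                  // S0: neither switch responded
    if (!potentiometersAgree) return "EMERGENCY";  // S4
    return "NO_LOAD";                              // S5
  }
  return "FOLLOW_SET_VALUE"; // otherwise: normal operation from the potentiometer set value
}

console.log(throttleCommand(true, true, 42, 42, false));   // "FULL_LOAD"
console.log(throttleCommand(false, false, 42, 42, false)); // "NO_LOAD"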
A particular advantage of the system according to the invention is the possibility of mutual testing of the individual switches and potentiometers. Only a few examples of such tests will be mentioned herein:
in every sub-load case the no-load switch 11 must have responded;
if the no-load switch 11 is not actuated, the two potentiometers must indicate the no-load position of the throttle valve 4;
whenever full-load is set the full-load switch 12 must respond;
the two brake switches may only respond together; and
the two potentiometers must always indicate the same position. | |
The Camel Period is the most recent of the African rock art periods. In many African cultures, masks represent spirits, including ancestors, animals, and nature spirits.
What period does African rock art belong to?
African Rock Art: Tassili-n-Ajjer (? 8000 B.C.–?) Introduction to Prehistoric Art, 20,000–8000 B.C.
What is African Rock Art?
Evidence of early human artistic expression in Africa commonly takes the form of rock paintings and engravings. Some of these are thought to date back 12,000 years, but most are much more recent. They are found across the continent, with the best preserved sites found in the Sahara and the deserts of southern Africa.
What are three of the rock art periods in Africa?
Such artworks are often divided into 3 forms: petroglyphs, which are carved into the rock surface, pictographs, which are painted onto the surface, and earth figures, formed on the ground.
What is the specific name of the very old stone art of Africa?
Saharan rock art – there are over three thousand known sites where artists carved or painted on the natural rocks of the central Sahara desert. Tadrart Acacus (Libya) – rock art with engravings of humans and flora and fauna, which date from 12,000 BCE to 100 CE.
Which country is known for rock arts?
Southern Thailand is known for the first discovery of rock art in Thailand, at the site of Khao Khian in Phang Nga province, which was reported by Lunet de Lajonquière in 1912.
What are rock paintings called?
Pictograph from Petit Jean State Park. Anthropologists and archeologists define rock art as images carved, drawn, or painted onto immovable rock surfaces. Images that are carved or engraved into rock are called petroglyphs. Images made with paint or other pigment are called pictographs.
What is traditional African art?
Traditional art describes the most popular and studied forms of African art, which are typically found in museum collections. Wooden masks, which might depict human, animal or legendary creatures, are one of the most commonly found forms of art in western Africa.
When did humans first make art?
The oldest secure human art that has been found dates to the Late Stone Age during the Upper Paleolithic, possibly from around 70,000 BC, but with certainty from around 40,000 BC, when the first creative works were made from shell, stone, and paint by Homo sapiens, using symbolic thought.
Where is the Apollo 11 stone now?
The Apollo 11 Cave is an archeological site in the ǁKaras Region of south-western Namibia, approximately 250 km (160 mi) southwest of Keetmanshoop.
What are three types of masks created in Africa?
The three types are face masks, helmet masks, and body and belly masks.
What are Nok sculptures?
Nok Terracotta Bas-relief Sculpture. … These artifacts are mostly terracotta sculptures of human heads, human figures, and animals. One of the identifying characteristics of Nok sculptures is the triangular or oval-shaped eyes on human faces. Human figures also often have elaborate hair styles.
What are petroglyphs?
Petroglyphs are rock carvings (rock paintings are called pictographs) made by pecking directly on the rock surface using a stone chisel and a hammerstone. When the desert varnish (or patina) on the surface of the rock was chipped off, the lighter rock underneath was exposed, creating the petroglyph.
Which period in history is known as the Stone Age?
Paleolithic or Old Stone Age: from the first production of stone artefacts, about 2.5 million years ago, to the end of the last Ice Age, about 9,600 BCE.
What is the most famous Stone Age site in Africa?
The Blombos Cave site in South Africa, for example, is famous for rectangular slabs of ochre engraved with geometric designs. Using multiple dating techniques, the engravings were dated to around 77,000 years ago, within deposits spanning roughly 100,000 to 75,000 years old.
What’s the difference between Stone Age and Modern Age?
Stone Age man lived near the sources of his needs, whereas modern man lives anywhere (due to the advancement of technology). Stone Age man wore the skins of animals, whereas modern man wears clothes made of a number of fabrics such as cotton, silk and nylon. | https://flairng.com/africa/which-period-of-rock-art-in-africa-was-first-to-develop.html
In her new book, Reclaiming Conversation, Sherry Turkle examines how smartphones and social media have crowded out real conversation. She says “human relationships are rich, messy, and demanding. When we clean them up with technology, we move from conversation to the efficiencies of mere connection.” Turkle explores the power of face-to-face conversation in a time of “always on” technological connection.
Sherry Turkle is a professor of social studies of science at the Massachusetts Institute of Technology in Cambridge, Massachusetts and author of Reclaiming Conversation: The Power of Talk in a Digital Age (Penguin, 2015). | https://www.sciencefriday.com/segments/sherry-turkle-reclaiming-conversation/ |
FDIC Reports Analyze Growth of Nonbank Lending in US
FDIC published three reports analyzing the growth of nonbank lending in the United States. The analyses document the shift of some lending from banks to nonbanks over the years, examine how corporate borrowing has moved between banks and capital markets, and focus on the migration of some home mortgage origination and servicing from banks to nonbanks. These reports have been featured in the FDIC Quarterly 2019 (Volume 13, Number 4).
Bank and Non-bank Lending Over the Past 70 Years
Since the 1970s, the share of bank loans has fallen as non-banks gained market share in residential mortgage and corporate lending, the report concludes. In other business lines, the shift in loan holdings from banks to non-banks was less pronounced, as banks and non-banks continue to play important roles in lending for commercial real estate, agricultural loans, and consumer credit. Banks have regained market share of commercial real estate mortgages after a decline during the financial crisis, with the bank share of commercial mortgages remaining relatively steady at about 50% to 55% since the mid-1970s. In the second quarter of 2019, banks held 59% of commercial mortgages. In contrast, the bank share of multifamily residential mortgages decreased substantially between 1990 and 2012 and has modestly recovered since. The report highlights that the primary cause of the shift in loan origination from banks to non-banks is securitization. If less-regulated financial institutions play a larger role in lending, the shift may alter underwriting standards when loan demand increases. FDIC is also expected to publish a series of articles that look closely at the factors driving these trends and the related risks of residential mortgages and corporate debt and leveraged lending.
Leveraged Lending and Corporate Borrowing
The analysis examines the shift in corporate borrowing toward capital markets over the past several decades. It details the ways corporate debt has grown, the risks this shift has posed to banks since the 2008 financial crisis, and the factors that could mitigate those risks. The report also discusses corporate bonds, including the role of banks in these markets and developing risks, as well as the macroeconomic risks banks face from corporate debt and potential risk-mitigating factors. The report finds that the shift from bank financing to capital market financing through bonds and leveraged loans could have implications for banking system stability. The shift may reduce banking risk, because when corporations rely less on direct bank loans, direct bank exposure to corporate borrower credit risk is reduced. However, banks are still vulnerable to corporate debt distress during an economic downturn in several ways:
- Higher corporate leverage built up through capital markets could reduce the ability of corporate borrowers to pay bank and nonbank debt in times of distress.
- Banks lend to nonbank financial firms that in turn lend to corporations, so if corporations default on loans from nonbank financial firms, then nonbank financial firms may default on loans from banks.
- In a downturn, bond issuances and leveraged loan syndications could decline, and any income that a bank had been earning from organizing bond issuances and leveraged loan syndications would be likely to decline.
- The migration of lending activity away from the regulated banking sector has increased competition for loans and facilitated looser underwriting standards and risky lending practices that could expose the financial system to new risks.
- Any macroeconomic effects of corporate debt distress could affect the ability of small businesses, which borrow more heavily from banks, to service their debt.
Trends in Mortgage Origination and Servicing
The report highlights that, after the 2007 financial crisis, the mortgage market changed notably, with a substantial share of mortgage origination and servicing, and some of the risk associated with these activities, migrating outside the banking system. Some risk remains with banks or could be transmitted to banks through other channels, including bank lending to non-bank mortgage lenders and servicers. Changing mortgage market dynamics and new risks and uncertainties warrant investigation of the potential implications for systemic risk.
The characteristics of non-banks that have, in part, enabled them to gain a competitive edge in mortgage origination and servicing include continued reliance on short-term credit, a focus on conventional conforming and government (Federal Housing Administration or FHA, in particular) loan origination, origination of loans exhibiting incrementally eased underwriting standards, application of technological innovation to improve efficiency and origination profits, and less comprehensive regulatory oversight relative to banks. Many nonbank characteristics subject these entities to several risks, and the new competitive pressures facilitated by nonbanks have increased several risks in the financial system. These risks include the following:
- Liquidity and funding risks of the nonbank structure
- Interest rate risk inherent in refinancing-focused lending
- Risk of reduced availability of FHA-insured and other government loans in the case of widespread nonbank failures
- Moderate growth in credit risk caused by heightened competition in the market, driving incremental easing in historically tight credit standards
- Cyber-security and other risks related to increased reliance on technology
- Risks posed by the less stringent and more fragmented regulation of nonbanks relative to banks
Related Links
- Press Release
- Report on Bank and Nonbank Lending (PDF)
- Report on Leveraged Lending (PDF)
- Report on Mortgage Origination and Servicing (PDF)
- FDIC Quarterly
Keywords: Americas, US, Banking, Securities, Insurance, Leveraged Lending, Credit Risk, Securitization, Mortgage Lending, Mortgage Origination, FDIC Quarterly, Commercial Real Estate, FDIC
In public debates, the input of historians seems to play a subordinate role. Contemporary witnesses are considered more important, because they are the ones who can talk about "what it was really like".
Tag Archive for ‘Science Communication (Wissenschaftskommunikation)’
Public History and Spaces of Knowledge
Scientists have workspaces for research and teaching, e.g. laboratories or media platforms. Against the background of public history’s focus on participation and cooperation: where does public history position itself in this ‘question of space’?
Post-ism. The Humanities, Displaced by their Trends
What is post-ism about? The humanities are a scholarly institution in which human culture is thematized, investigated, and analyzed. They do it in the special way we call ‘scholarly’ or, in most other languages, ‘scientific’. But, nevertheless…
Reclaiming Relevance from the Dark Side
The ‘tyranny of relevance’ is a convenient and popular target for academic historians. Mention the ‘r’ word with a raised eyebrow during a conference coffee break, or condemn instrumentalist research policy at a committee meeting and you are likely to receive murmurs of sympathy.
Who We Are: Public Historians as Multiple Personalities?
There is no doubt that, since its inception in the United States, public history has been increasingly professionalized internationally as an academic teaching and research discipline. At German universities, however, its status is still fuzzy. Although …
Thank you—No Offence Meant! The Humanities in Splendid Isolation
From our “Wilde 13” section. Abstract: “Thank you for your kind introduction,” “Many thanks for your stimulating comments,” “Thank you for those further references,” “Thank you, I shall happily… Read More ›
Thanks, and No Offence Meant! The Humanities Entirely at Home with Themselves
From our "Wilde 13" section. Abstract: "Thank you for the kind introduction", "Many thanks for the stimulating comments", "Thank you for the further references", "Thank you, I shall happily… Read More ›
Hot Property, Cool Storage, Grey Literature
Are you still listening to the promises that interconnectedness would abolish all hierarchies, that that mythical entity, “The Web,” would dissolve all boundaries, providing everything for everyone, a promised digital realm within immediate reach? All one had to do, so the claim went, was to be creative yet highly disciplined, and permanently online with everyone else, all of us using the new, smart programmes. | https://public-history-weekly.degruyter.com/tag/science-communication-wissenschaftskommunikation/page/2/ |
Welcome back to my weekly recaps of DC’s Legends of Tomorrow! To read all of my past recaps of episodes, go here!
The Legion broke reality (but somehow not certain aspects of the timeline) and the Legends put it back together. "Aruba," despite some issues already built into the show and the questions the nature of time travel raises, was a delightful way to end Legends of Tomorrow's second season. It even had a semi-happy ending, which, you know, on a superhero show doesn't always happen.
After Captain Cold brutally murders Amaya in the alternate version of 2017, the Legends, complete with Mick on board, regroup. They concoct a plan to head back to 1916 to steal the spear of destiny at the moment before it’s taken by the Legion. However, they risk running into themselves, which would make their future selves aberrations, and time would cave in on itself. I know the show doesn’t ever really think these things through, but the writers kind of wrote themselves into a corner in this episode with its lack of logic. If the Legion did indeed alter reality, then how does an event which occurred in a different reality remain intact? If the Legends led different lives in their alternate states, then why would their past selves even be in 1916 to begin with? The spear of destiny would have altered everything, erasing their mission to stop the Legion and rendering the risk of bumping into themselves moot.
I know I should never approach this show armed with too much logic, but that thought swept through my mind during the entire episode and made many of its events questionable. However, despite the obvious conundrums of time travel and the writers' need to create events that don't always hold up logically, "Aruba" was well executed in style, action, and character interaction. The bad guys are defeated, the Legends win the day (which I'd also like to see on the other CW superhero shows), and all of the deaths, of which there are many, don't stick. The episode finds a way to be surprising, a bit heartfelt, and fun without having to sacrifice any of its main characters to death. It also sets up another mission the team will have to contend with in season three, but aside from the new problem, the season finale ended in a generally happy place.
One of my biggest concerns was that the show would decide to kill off Amaya or send her back to 1942 permanently. And while the show has dealt with destiny, it was nice to see Amaya allowing her heart to lead her into a new adventure. She accepts that going back to 1942 may be what she’s destined to do, but that perhaps time and destiny will wait (or maybe course correct) for now. I’ve never been more relieved to see Legends of Tomorrow keep a character around. She’s been a wonderful presence on the show and I like the dynamic she has with everyone. There is a sense of power and compassion in her demeanor and she has been one of the highlights of season two.
Rip’s place on the team is brought into question by episode’s end. He spent half of season two with amnesia and with another identity. By the time he rejoined the team, they had moved on, with Sara taking his place as captain of the Waverider. He’s feeling like he no longer belongs and that the team has become a well-oiled machine in his absence. So Rip decides to take his leave. Where he goes or what he’ll do is unclear (I’m not sure he’ll even be back for season three), but his departure doesn’t feel forced. It feels timely. And as much as I like Rip (he has been so much better in season two), it feels like a natural departure as his mission and time on the Waverider has come to an end.
The Legion of Doom proved to be better adversaries and really elevated the season. I’m not convinced that we’ve seen the last of Eobard Thawne (because, let’s be real, which version did Black Flash kill?) and he’ll probably pop back up on The Flash sooner or later, but his run on Legends was a good one. Sara even got to see Laurel again and had to make a conscious choice about altering reality to bring her back.
I wouldn’t have faulted her for this since Sara has had her issues in the past and to want to bring back Laurel would have been a personal choice that would have made sense on a human level. However, she’s made peace with her sister’s death and is no longer as angry as she was at the start of the season. Even Mick has seemingly found his place among his new friends, something he reveals later to Snart. All in all, it was nice to see Legends of Tomorrow end in optimism and not despair. With the team reassembled and a new mission on the horizon (and really, the Legends may as well be in an alternate universe given that none of the other superheroes have noticed any changes), I look forward to season three. Maybe the Legends will finally get penalized for breaking the timeline even more so than Barry ever could. | https://www.theyoungfolks.com/television/101146/legends-of-tomorrow-2x17-review-aruba/ |
Anunnaki Atlantis & The Babylonian Stargate
In this fascinating presentation, the host of Gaia's Mystery Teachings, Jonny Enoch, reveals hidden insights about the Anunnaki story and how it was actually the Atlantis story. He will share insights on how Zecharia Sitchin was an ET contactee, what he got right, and where we find evidence for these beings in the ancient world. He will also reveal all-new discoveries from Egypt and Iraq, including insider testimonials about the stargates in Babylon that were extracted by the US military, and where they are now.
Jonny Enoch is an author, a researcher of prehistoric civilizations, and a futurist who explores mysteries around the world. He is the host of popular TV shows such as Gaia's Mystery Teachings and the Odyssey of Enoch, and has been featured on Deep Space. He is also a regular on Ancient Civilizations and the Travel Channel's Alaska Triangle. Jonny frequently takes expeditions to remote parts of the world to explore the unknown, crawling on his hands and knees under temples and ruins searching for clues about our ancient past. He has used clinical hypnotherapy to work with ET contactees and to conduct past-life regressions. What makes his work different is how he connects the dots between the hidden symbolism found in world religions and quantum physics, alchemy, and the multiverse.
This book chapter identifies three crisis-warning indicators derived from trading in emerging markets' carry trades, and empirically examines whether these indicators could have predicted two major financial crises that hit the global financial markets in recent decades: the 1997–1998 Asian crisis and the 2007–2008 global crisis. A Probit regression is used to examine the power of the three indicators in forecasting financial crises, using data from eight Asian emerging countries, which serve as proxies for emerging markets independent of where the crisis originated. I use both fixed-effect and random-effect estimation to measure crisis impacts. The empirical results show that the financial crises could have been predicted: Probit estimates show that carry-trade returns can predict a financial crisis, and the estimation results are robust to both panel-level and country-level analysis.
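The abstract names the estimator but not the specification. As a minimal sketch of how such a probit crisis-prediction model might be set up, the following uses statsmodels' Probit on synthetic data; all variable names, coefficients, and data are hypothetical, not the chapter's actual specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic monthly panel for 8 emerging countries (hypothetical data).
n = 8 * 240
carry_return = rng.normal(0.0, 0.02, n)        # indicator 1: carry-trade return
carry_vol = np.abs(rng.normal(0.02, 0.01, n))  # indicator 2: return volatility
funding_spread = rng.normal(0.01, 0.005, n)    # indicator 3: funding-rate spread

# Invented crisis dummy: here, crises follow low returns and high volatility.
latent = -1.5 - 20 * carry_return + 30 * carry_vol + rng.normal(0, 1, n)
crisis = (latent > 0).astype(int)

# Pooled probit of the crisis dummy on the three warning indicators.
X = sm.add_constant(np.column_stack([carry_return, carry_vol, funding_spread]))
result = sm.Probit(crisis, X).fit(disp=False)
print(result.summary())

# In-sample predicted crisis probabilities.
print("mean predicted crisis probability:", result.predict(X).mean().round(3))
```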
Conditioning Carry Trades: Less Risk, More Return! | https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2660623 |
Consultancy-style report assessing development potential of specified sites or areas in light of real estate market circumstances, relevant planning policy and political realities.
2,750 words
Autumn
How the module will be delivered
Lectures/workshops: interactive short sessions of three kinds: a) introducing, and critically appraising, key dimensions of the real estate development process, e.g. institutional dynamics, funding, planning policy context, etc.; b) case studies of real estate development initiatives; c) consolidating techniques, e.g. valuation and report-writing. In each case, the experiences of current practitioners will be drawn upon.
Study visits: to significant real estate development initiatives in the local area
Outline Description of Module
This module provides students with a critical understanding of key concepts, techniques and trends associated with planning intervention in real estate development in market economies. The primary focus is the UK, but skills and analytical approaches will be transferable. It considers the implications for real estate development of public policy/planning challenges such as climate change and social justice. It develops skills in decision-making and judgement in relation to real estate development.
On completion of the module a student should be able to
- Critically understand the institutional landscape of the real estate development process and the roles played by key actors;
- Appreciate the social, economic and political significance of planning intervention in the development process, including cases revolving around notions of equity and climate change
- Show awareness of recent and current property market trends in the UK and elsewhere, and their implications for planning practice.
- Undertake simplified versions of development project appraisals with an appreciation of the strengths and weaknesses of various methods
- Research and prepare reports recommending preferred actions and outcomes in relation to real estate development proposals
Skills that will be practised and developed
- Accessing and analysing data on property-market trends from standard sources
- Assessing the real estate development implications of planning policy at various spatial scales
- Evaluating feasibility of detailed design options and mixes of use at site-specific level.
- Report writing
- Land valuation technique (residual), as shown in the sketch after this list
- Financial appraisal of real estate development proposals
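The residual technique itself is not spelled out above. As a minimal sketch of the standard logic (land value is what remains of gross development value after development costs, fees, finance, and profit), with purely illustrative figures and rates:

```python
def residual_land_value(gdv, build_costs, fees_rate=0.10,
                        finance_rate=0.07, profit_rate=0.20):
    """Simplified residual land valuation (illustrative rates only).

    gdv          -- gross development value: expected end value of the scheme
    build_costs  -- total construction costs
    fees_rate    -- professional fees as a share of build costs
    finance_rate -- rough finance allowance on build costs plus fees
    profit_rate  -- developer's profit as a share of GDV
    """
    fees = build_costs * fees_rate
    finance = (build_costs + fees) * finance_rate
    profit = gdv * profit_rate
    return gdv - (build_costs + fees + finance + profit)

# A scheme expected to sell for 5.0m, costing 3.0m to build:
print(residual_land_value(5_000_000, 3_000_000))  # 469,000 left for the land
```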
Assessment Breakdown
| Type | % | Title | Duration (hrs) | Period | Week |
| Written Assessment | 35 | Short Essay | N/A | 1 | N/A |
| Written Assessment | 65 | Report | N/A | 1 | N/A |
Essential Reading and Resource List
Reed, R and Sims, S (2015) Property Development, 6th ed. Abingdon: Routledge
Tiesdell, S and Adams, D (eds) (2011) Urban Design in the Real Estate Development Process. Oxford: Wiley-Blackwell
Williamson, I et al (2010) Land Administration for Sustainable Development. Redlands, CA: ESRI Press Academic
Background Reading and Resource List
Ambrose, P and Colenutt, B (1975) The Property Machine. Harmondsworth: Penguin, chs 1 and 2
Ball, M et al (eds) (1985) Land Rent, Housing and Urban Planning. London: Croom Helm, chs 1 and 11
Davy, B (2012) Land Policy: Planning and the Spatial Consequences of Property. Farnham: Ashgate
Guy, C (2006) Planning for Retail Development. London: Routledge
Seabrooke, W et al (eds) (2004) International Real Estate: An Institutional Approach. Oxford: Blackwell
Syllabus content
The module begins with an overview of the real estate development process and planning’s role in it. It introduces ideas of development appraisal and land valuation. This is followed by a critical exploration, through case studies, of the ways in which real estate development is bound up with challenges confronting the planning process, such as promoting social justice, and the challenge of climate change. A significant proportion of the latter stages of the module is devoted to considering how real estate development prospects and aspirations are shaped by spatially differentiated market trends, planning policy at various scales, and political circumstances. | https://data.cardiff.ac.uk/legacy/grails/module/CPT857.html |
The contents of this Register are intended for research purposes only. The heraldic emblems found in the Register may not be reproduced in any form or in any media without the written consent of the Canadian Heraldic Authority and/or the recipient.
Oromocto, New Brunswick
Grant of Arms, Flags and Badge
April 15, 2009
Vol. V, p. 436
Arms of Stephen Troy Chledowski
Blazon
Gules a griffin segreant Argent within a bordure Or charged with a maple leaf Gules at each angle;
Symbolism
The white griffin on a red field is from the Polish arms of Gryf, here particularized by a gold border charged with maple leaves for this Canadian branch of the family.
Crest
Blazon
A mermaid affronty proper queued Vert grasping in her dexter hand a scimitar and in her sinister hand a staff proper supporting a gonfalon ensigned by a rose Gules;
Symbolism
The wild rose alludes to Mr. Chledowski’s birth in Alberta. His wife’s ancestral background is symbolized by using a mermaid based on the arms of the city of Warsaw. The gonfalon alludes to her Polish herb Radwan.
Motto
Blazon
TUIS ET TIBI HONORI ESTO;
Symbolism
This phrase in Latin means “Bring honour to thy family and thyself”.
Flag of Stephen Troy Chledowski
Blazon
A standard, the Arms in hoist, the fly per fess Or and Gules charged with the Crest between two representations of the Badge all separated by two bends Argent inscribed with the Motto in letters Sable;
Symbolism
The symbolism of this emblem is found in other element(s) of this record.
Additional Information
Creator(s)
Original concept of Darrel Kennedy, Assiniboine Herald, assisted by the heralds of the Canadian Heraldic Authority. | https://www.gg.ca/en/heraldry/public-register/project/1933 |
Understanding comes through sacrifice, and our lives alternate between light (understanding) and darkness (confusion). Abraham's experiences teach us not to try to force God's will by contrivances of the flesh. When any sin or self-will is involved, the fruits of such an endeavor will be bitter and disappointing (as was the incident involving Abraham and Hagar). Abraham's righteousness equated with his acquired humble, unflinching trust in God rather than his skill at law-keeping. The gift of grace comes only to those who yield to God by faith, establishing a warm working relationship with Him and performing righteousness through the power of His Holy Spirit.
A Holy Family Catholic High School education is guided by its Catholic and Lasallian tradition of providing a welcoming, student-centered environment in which young people encounter opportunities to grow spiritually, morally, intellectually, and physically within a community of faith. In order to provide appropriate experiences, it is essential the Consent to Release Private Data form be submitted as part of the admissions process before final acceptance is confirmed.
Holy Family faculty and staff are committed to the development of each student's potential. We also recognize our programs do not support every need. Admission may be declined for reasons pertaining, but not limited, to academic, behavioral, emotional, or attitudinal needs that Holy Family is unable to serve. Such administrative decisions are final.
Non-discrimination Policy: Holy Family admits students of any race, color, national and ethnic origin to all the rights, privileges, programs, and activities generally accorded or made available to students at the school. We do not discriminate on the basis of race, color, national and/or ethnic origin in the administration of our educational policies, admissions policies, scholarship and loan programs, and athletic and other school-administered programs. | https://hfchs.org/final-acceptance/ |
This article was co-authored by Stephanie Wong Ken, MFA. Stephanie Wong Ken is a writer based in Canada. Stephanie's writing has appeared in Joyland, Catapult, Pithead Chapel, Cosmonaut's Avenue, and other publications. She holds an MFA in Fiction and Creative Writing from Portland State University.
The greatest difference between the literary essay and the school essay is that the literary essay springs from the interests of the writer and can be a joy to write. A literature review, by contrast, is a critical evaluation of published sources.
Figurative language: similes, metaphors, personification. These are choices that an author makes, and you may be asked to analyze why you think he or she made them. Symbols, irony, why they picked a particular setting, the sequence of the plot, why certain things happen in certain orders. Themes are a big one, where you are really looking at authorial intention: what message is the author trying to get across. Point of view: why they chose a certain narrator, and what effect that particular narrator had on the story. And then external literary theory or literary criticism. All of these things are literary analysis, where you take the book and really dig into it on a deeper level.
In countries like the United States and the United Kingdom, essays have become a major part of a formal education in the form of free-response questions. Secondary students in these countries are taught structured essay formats to improve their writing skills, and essays are often used by universities in these countries in selecting applicants (see admissions essay). In both secondary and tertiary education, essays are used to assess the mastery and comprehension of the material. Students are asked to explain, comment on, or assess a topic of study in the form of an essay. In some courses, university students must complete one or more essays over several weeks or months. In addition, in fields such as the humanities and social sciences, mid-term and end-of-term examinations often require students to write a short essay in two or three hours.
At the end of the introduction, you will include your thesis statement. It is best kept to a single sentence; that rule will push you towards clarity and brevity. After evaluating the sources, gather the information, divide it into parts, and work out the main ideas of your essay. Formulate each of them and build a draft.
What’s the role of introduction and the way might a character evaluation essays introduction appear like? For those who describe the characters from Batman,” for instance, start with a hook like Bruce Wayne was not a protagonist of the story; this character led to the deaths of many people by refusing to invest his money into charity, environmental points, and more.” It is an intriguing, non-normal hook. Most individuals are likely to view Batman as a optimistic character. It’s a good suggestion is shawshank redemption based on a true story to point out one other facet. Deal with the fact because his wealthy alter-ego did not help a few of the city’s enthusiast like the character of Pamela Lillian Isley who needed to assist the environment, many of these individuals find yourself mutating and turning into destructive characters. Stress these individuals had a chance if not Batman.
Thinking About Plans In essay sample
Swift Advice In literature essay samples
Effective Plans For literature essay examples Considered
It is an opinion essay in which the students share their opinion about the theme and other literary elements of a piece of writing. These views are supported with evidence from related works. It is important for students to research the topic before writing, and to collect enough material to help them answer or support their question.
The literary analysis lies at the core of your paper, so it needs to be well-structured and show your creative approach to the task. Structuring makes it easier to include subtopics in your article. You are also expected to provide expert analysis of your text or essay, as well as come up with a thesis statement so that readers understand why you decided to write your research paper. An argument should be offered to create a debate around the chosen topic, too. You are also required to present theoretical frameworks and ideas suggested by other scholars in the past.
Every story is an effort to resolve some kind of problem. The Protagonist’s success in this endeavor is largely determined by whether or not the appropriate solution is found.
Many stories, and not surprisingly many of Hollywood’s favorites, tell the tale of a Protagonist who injects the solution into the story’s problems, thereby bringing order and balance back into the lives of the characters. Stories of Triumph and Personal Triumph abound with Protagonists who win and Antagonists who lose. It is within the other side of the equation, the Personal Tragedy and Tragedy, where the success of these two dramatic opponents switches hands, leaving the original problem intact.
Failure is more than giving up
In his insightful treatise on play-writing, Backwards and Forwards, David Ball elaborates on what it means when a story ends:
Stasis comes about at the close of the play when the major forces of the play either get what they want or are forced to stop trying.
Stasis returns when the original problem is solved, thereby dissolving the original inequity. There are several stories, however, that do not regain this balance and do not return to a point of stasis. Hamlet, Amadeus, and Se7en are three prime examples of problems that linger on far beyond the last credit. It is less a case of the characters being forced to stop trying and more that a solution to the original problem was never found, or at the very least, was never employed.
Identifying the source of all problems
In How To Train Your Dragon problems exist because of the refusal of some characters to compromise. At first, this refusal only comes from Hiccup. His act of disobedience within the opening sequence sets the story off and forces Stoick, as Protagonist, to take the necessary action required to return things back to normal. As covered in the article How to Train Your Inciting Incident, Hiccup’s chaotic influence interrupts the tender balance between dragons and Vikings, ultimately driving Stoick to pursue the Story Goal of Training the next generation of dragon killers.
This inability to compromise though, can be found everywhere and not simply within Hiccup himself. Stoick’s refusal to allow Hiccup to train, the initial refusal by Toothless to take Hiccup on as a rider, Hiccup’s constant repudiation of traditional Viking teaching within the ring, Astrid’s resistance to Hiccup’s relationship with Toothless and the subsequent romantic flight that begins with Toothless’s initial rejection of Astrid…all of these are clear instances where standing one’s ground creates friction within the world of the story, friction that affects everyone.
Such friction becomes a good indicator of a story’s central problem.
Every problem carries with it its own solution
No matter what the problem is, its very existence automatically supplies the corresponding solution. If the problem is an overabundance of emotion, as it is in The Godfather (Sonny's overreaction, the Don's feelings about drugs), then it only follows that the solution would be a reliance on rational thought (as supplied by Tom Hagen and used to great success by Michael). If the problem is a repressive state of control, as it is in Casablanca (exemplified by the Nazis' presence and their willing collaborator Renault), then it would make sense that the solution be found in freedom (the rousing rendition of "La Marseillaise" and, of course, Ilsa and Laszlo's escape). Identify the problem in a story and the corresponding, dynamically opposed solution will present itself.
If the problem is a refusal to compromise, as it is in How To Train Your Dragon, then the solution would be a call for more tolerance, an acceptance of what is presented. But isn’t this what happened in the film? Didn’t Stoick come to accept his son for who he is?
Separate Throughlines and Their Solutions
Complete stories require that there be four throughlines. Beyond the obvious Main Character and Objective Storylines, there also needs to be someone who challenges the Main Character’s way of seeing things. This becomes the third throughline. The fourth and final throughline is covered by the relationship that develops between the two. In Dragon, this challenging character throughline is shared among three characters: Stoick, Astrid and Toothless. While each has their own motivations and place within the main story, their place within the structure of the story is the same: to force Hiccup to doubt his resolve.
In the end, Hiccup stands his ground (as all Steadfast Main Characters do), forcing the others in the relationship to change. Both Toothless and Astrid fall early, but it is Stoick’s change that carries with it the greater emotional resonance. His acceptance of Hiccup as his son resolves the problem within his own throughline. It does not, however, solve the problems within the story at large.
The reason for stories
These four throughlines provide an audience with an opportunity to look at problems from different perspectives simultaneously, something they cannot do in their own lives. This is the power of stories, and of movies, and the reason why audiences continue to seek out this experience time and time again. By assessing the outcomes of separate throughlines dealing with the same problem, an audience member can acquire some greater meaning to the order of things.
Failing to resolve the main story problem
So while Stoick was able to resolve his throughline by finding a greater tolerance for his son, in the larger picture that refusal to compromise persisted.
Remember in that previous article how Stoick was identified as the Protagonist and Hiccup the Antagonist? For Stoick to win, to successfully achieve his goal of Training the next generation of dragon killers Hiccup would have had to accept the Viking way and done away with all dragons. By mounting one of their heads above the fireplace, Hiccup would have employed that solution of tolerance the main story needed for a successful resolution.
But the story’s structure called for a completely different outcome.
Arriving at the final battle atop dragons wasn’t a sign of tolerating the dragons, it was a continuation of the mayhem originally begun by Hiccup. In effect, the kids were misbehaving, refusing to accept the training that Stoick and the others had hoped to instill in them. While they ultimately managed to save the day in the end, they did so by rejecting the Viking way.
Problems that persist
When a story ends in failure, the required solution was never employed. As explained in When Failure Becomes a Good Thing, the consequences of failing to reach the desired outcome can be painted in a more positive light as they are in How To Train Your Dragon. While the solution may be employed in one throughline, this does not necessarily guarantee a successful outcome in every throughline. By mixing success and failure within the separate perspectives on a story’s problem, an author can construct a meaningful dissonance that provides an audience with a memorable and lasting experience.
:::expert Advanced Story Theory for this Article The storyform for How To Train Your Dragon finds Non-Acceptance as a Problem in three of the four throughlines: Objective Story, Influence Character and Relationship Story. While failure is the outcome of the Objective Story (as described above), success is to be found in the Influence Character and Relationship Throughlines. Stoick’s Acceptance of his son shortly before his final attack on the huge dragon naturally rounds out his throughline and gives us a very clear example of a Change Influence Character. Their relationship finds resolve in Stoick’s Acceptance of Hiccup’s idea of how Vikings and Dragons should live together (proof of which lies within the closing sequence).
Interestingly enough, the storyform calls for Hiccup’s Problem to be Protection. As a Steadfast Main Character, this Problem will be seen more as a source of the Main Character’s drive rather than a problem to be solved and nowhere is this more evident than within the young Viking’s explanation for why he didn’t kill Toothless. As he explains to Astrid, he looked into the dragon’s eyes and saw himself (a prime candidate for the “You and I” montage if there ever was one), driven to Protect someone as helpless as he himself felt. In a more complex and extended story, the Solution of Inaction would have been seen in moments where Hiccup would have doubted himself and perhaps done nothing to protect a defenseless creature. With a running time close to 90 minutes, this sort of sophistication and nuance within a Steadfast Main Character falls by the wayside. ::: | https://narrativefirst.com/articles/what-it-means-to-fail/ |
The prestige of the Enlightenment has declined in recent years. Many consider its thinking abstract, its art and poetry uninspiring, and the assertion that it introduced a new age of freedom and progress after centuries of darkness and superstition presumptuous. In this book, an eminent scholar of modern culture shows that the Enlightenment was a more complex phenomenon than most of its detractors and advocates assume. It includes rationalist as well as antirationalist tendencies, a critique of traditional morality and religion as well as an attempt to establish them on new foundations, even the beginning of a moral renewal and a spiritual revival.
The Enlightenment's critique of tradition was a necessary consequence of the fundamental modern principle that we humans are solely responsible for the course of history. Hence we can accept no belief, no authority, no institutions that are not in some way justified. This foundation, for better or for worse, determined the course of the following centuries. Despite contemporary reactions against it, the Enlightenment continues to shape our own time and still distinguishes Western culture from any other.
This book had its origin in the surprise I experienced many years ago when considering the fundamental change in thinking and valuing that occurred during the period stretching from the second half of the seventeenth century until the end of the eighteenth. Curious to know what the intellectual principles of modern thought were, I made a study of the beginnings of modern culture before turning to the critical epoch that forms the subject of the present book. It soon appeared that no direct causal succession links the humanism of the fifteenth century with the Enlightenment. When Max Weber described modernity...
In 1783 the writer of the article "Was ist Aufklärung?" (What Is Enlightenment?), published in the Berlinische Monatsschrift, confessed himself unable to answer the question he had raised.¹ Today it remains as difficult to define the Enlightenment. The uncertainty appears in the conflicting assessments of the movement. The second edition of the Oxford English Dictionary describes it as inspired by a "shallow and pretentious intellectualism, unreasonable contempt for tradition and authority." Obviously a definition of this nature is not very helpful for understanding a phenomenon distinguished by its complexity. But neither is Kant's famous description of it as "man's release...
During the Enlightenment the concept of power that had dominated ancient and medieval physics underwent a profound transformation. Previously thought to derive from a source beyond the physical world, it came to be viewed as immanent in that world and eventually as coinciding with the very nature of bodiliness. Aristotle’s theory that all motion originated from an unmoved mover had continued to influence Scholastic theories throughout the Middle Ages. For Jewish, Muslim, and Christian thinkers, the impact of divine power went beyond motion and extended to the very existence of finite beings. According to the doctrine of creation, the dependence...
The success of the physical and mathematical sciences inspired a demand for a science of human nature. Not only would a systematic knowledge of the person round out the circle of sciences, but, as Hume understood it, such a knowledge would place all other sciences on a secure basis. “It is evident that all the sciences have a relation, greater or less, to human nature; and that, however wide any of them may seem to run from it, they still return by one passage or another. Here then is the only expedient, from which we can hope for success in...
Artistically the Enlightenment may not compare favorably with the Renaissance or Baroque periods, but the aesthetic criticism of the eighteenth century surpassed that of the two earlier periods and provided most of the categories used in the two centuries that followed. One of its major achievements was to raise the idea of beauty to the level of truth. It accomplished this in three stages: at the beginning the imitation theory prevailed, next the expressive theory, and at the end the symbolic theory made a tentative entrance. Each of these movements made a definitive contribution to the modern conception of art...
In chapter 1, I expressed reservations about applying the term “crisis” with its modern negative meaning to eighteenth-century culture as a whole. The questioning of the traditional foundations of morality, however, definitely caused a crisis. In France, libertinism flourished among the rich and the educated. During the reign of Louis XIV much of the Court was thoroughly corrupt—from the king’s own brother down. Corruption increased during the regency period and the reign of Louis XV. England also passed through a period of moral decline. The axiomatic beliefs that had supported traditional moral principles had become dubious. In the wake...
The Enlightenment may have made its most lasting impact on the way we live and think today through its social theory. Our institutions and laws, our conception of the state, and our political sensitivity all stem from Enlightenment ideas. This, of course, is particularly true in the United States, where the founding fathers transformed those ideas into an unsurpassed system of balanced government. Remarkably enough, at the center of these ideas stands the age-old concept of natural law. Much of the Enlightenment’s innovation in political theory may be traced to a change in the interpretation of that concept. Originally it...
The writing of history has always been inspired by the belief that the knowledge of the past sheds light on the present. Yet the nature of this knowledge has varied from one period to another. Ancient writers, both classical and biblical, assumed that the essential patterns of life remained identical and therefore that history provided lasting models for instruction and imitation. Hence the search for historical prototypes of current customs and institutions. Legendary founders of cities, ancestors of existing professions, prehistorical legislators, and establishers of rituals were believed to grant them legitimacy. This belief in tradition persisted among Christians, even...
The impact of the Enlightenment was undoubtedly felt most deeply in the area of religion, either as loss or as liberation. It was particularly severe in France and in England, where for a long time skeptical philosophies had undermined the foundations of Christian beliefs. By the end of the eighteenth century, the French masses, pressed by economic hardship, felt abandoned by a Church closely linked to a political regime indifferent to their suffering. In England, after two centuries of religious turmoil, the willingness of the Church to adapt its doctrine to the will of the sovereign had drained common people...
In this chapter I shall discuss the main philosophical responses to the challenges to religion described in the preceding one. Some philosophers, such as Leibniz and Clarke, responded from within the rationalist tradition. Others, among them Malebranche, Berkeley, and Jacobi, considered philosophical rationalism the very source of the religious crisis and repudiated it altogether. The first group attempted to revive philosophical theology, a branch of metaphysics that had existed since the early Stoics and that aimed at establishing the existence and nature of God. The Arabs, in their commentaries on the works of Aristotle, revived it as a rational foundation...
The ideas discussed in this chapter differ considerably from the ones we have come to consider characteristic of the Enlightenment. Not only do they fall outside the rationalist trends of the age, but they contrast just as much with those that opposed that rationalism. Some of the most prominent and influential thinkers of the time appear to have bypassed the dominant controversies altogether. Unlike the so-called anti-Enlightenment thinkers, the ones presented here do not seek, or do not seek in the first place, alternatives to the prevailing ideologies. They mostly ignore them. Their ideas remain largely continuous with those of...
The ideas of the Enlightenment continue to influence our present culture. The ideal of human emancipation still occupies a central place among them, though it has since passed through a number of changes. Marxism, which until recently played a leading role in European life, may serve as an example both of the continuity with and the transformation of the original ideal. It derived its goal of social liberation from the eighteenth-century ideal of emancipation. Yet the kind of social liberation Marx had in mind obviously differs from the emancipation pursued by the Enlightenment. Eighteenthcentury thought had at least in principle...
The gas giant Saturn contains many of the same components as the sun. Although it is the solar system's second largest planet, it lacks the necessary mass to undergo the fusion needed to power a star. Still, its gaseous composition — and the stunningly beautiful rings that surround it — make it one of the more interesting objects in the solar system.
Astrophotographer Jerry Lodriguss selected this photo of Saturn, taken by the Cassini spacecraft in 2006, as his favorite space photo.
Saturn is predominantly composed of hydrogen and helium, the two basic gases of the universe. The planet also bears traces of ices containing ammonia, methane, and water. Unlike the rocky terrestrial planets, gas giants such as Saturn lack the layered crust-mantle-core structure, because they formed differently from their rocky siblings.
Saturn is classified as a gas giant because it is almost completely made of gas. Its atmosphere bleeds into its "surface" with little distinction. If a spacecraft attempted to touch down on Saturn, it would never find solid ground. Of course, the craft would be fortunate to survive for long before the planet's increasing pressure crushed it.
Because Saturn lacks a traditional ground, scientists consider the surface of the planet to begin when the pressure exceeds one bar, the approximate pressure at sea level on Earth.
At higher pressures, below the determined surface, hydrogen on Saturn becomes liquid. Traveling inward toward the center of the planet, the increased pressure causes the liquefied gas to become metallic hydrogen. Saturn does not have as much metallic hydrogen as the largest planet, Jupiter, but it does contain more ices. Saturn is also significantly less dense than any other planet in the solar system; in a large enough pool of water, the ringed planet would float.
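The floating claim can be checked with round published figures for Saturn's mass and mean radius, treating the planet as a uniform sphere (values approximate):

```python
import math

# Approximate published values.
mass_kg = 5.683e26   # Saturn's mass
radius_m = 5.8232e7  # Saturn's mean radius

volume_m3 = (4 / 3) * math.pi * radius_m ** 3
density = mass_kg / volume_m3  # bulk density in kg per cubic metre

print(f"Saturn's bulk density: {density:.0f} kg/m^3")  # ~687 kg/m^3
print("Water's density:       1000 kg/m^3")
# Less dense than water, hence the claim that Saturn would float.
```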
Like Jupiter, Saturn is suspected to have a rocky core surrounded by hydrogen and helium. However, the question of how solid the core might be is still up for debate. Though composed of rocky material, the core itself may be liquid.
The distance to Saturn from the sun is significant, keeping the average temperature of Saturn low, but things are hotter within the rocky core. There, temperatures can reach as high as 21,000 degrees Fahrenheit (11,700 degrees Celsius).
During the formation of Saturn, the core would have been created first. Research suggests that Saturn's rocky core is between 9 to 22 times the mass of Earth. Only when it reached sufficient mass would the planet have been able to gravitationally pile on the light hydrogen and helium gas that make up most of the its mass.
As on Jupiter, the liquid metallic hydrogen drives the magnetic field of Saturn. Saturn's magnetosphere is smaller than its giant sibling's, but still significantly more powerful than those found on the terrestrial planets. With a magnetosphere large enough to contain the entire planet and its rings, Saturn's magnetic field is 578 times as powerful as Earth's.
When Italian astronomer Galileo Galilei turned his telescope toward Saturn, he observed two blobs on either side that he identified as bodies separate from the main planet. It wasn't until Dutch astronomer Christiaan Huygens studied the planet with a more powerful scope that the rings of Saturn were first identified. | https://www.space.com/18472-what-is-saturn-made-of.html |
Anxiety, stress, and fear are natural emotions that can become overwhelming reactions to the Coronavirus (COVID-19). Isolation and loneliness can also occur as you apply social distancing, and if you and/or your loved ones are quarantined, or a “Shelter In Place” has been issued in your location. Effects of these emotions can include worry over one’s health and the health of others, changes in diet or sleep, difficulty focusing, and decreased health.
WAYS TO COPE:
1. Take care of your body.
– Eat healthy food, drink water often, get 8-hours of sleep, and avoid illicit drugs and alcohol.
– Stretch your body, and become active in physical activities you enjoy.
– Follow your doctor’s advice, and take needed prescription medication as prescribed.
– Do all you can to prevent getting or spreading the virus by following the important steps listed in www.cdc.gov.
2. Take care of your mind.
– Reading or hearing about COVID-19 continually can be upsetting. Stay informed, but monitor and limit watching, listening to or reading news stories, including from social media.
– Get out in nature, even if it’s just outside your balcony or in the protection of your back yard.
– Seek out reading materials, activities, and interactions that will prompt positive, uplifting thoughts.
3. Take care of your emotions.
– Calm anxiety, stress and fear through slow, relaxed breathing techniques.
– Apply the power of gratitude, and count your blessings, large and small, that are still in your life.
– Listen to music that soothes and uplifts you.
– Process your feelings through journaling, exercising, creative expression, and sharing them with a professional therapist or safe loved one who can help you.
– If you have pets, be aware of their emotional needs as well. Our pets are often highly attuned to our emotional state, and can be impacted by our emotional condition. Be loving and gentle with your pet(s). This will comfort both your pet(s) and you.
– Seek out beauty, positive and inspirational resources, humor, and the tiny miracles that surround you.
– As you take walks, greet those you pass and say, “Hello.”
– Find a reason to smile.
4. Take care of your social needs.
– As you exercise Social Distancing, or are unable to leave your home due to a quarantine or “Shelter In Place,” stay in touch with loved ones through telephone calls and video telephony technology such as FaceTime, WhatsApp, or Zoom. Proactively reach out in this way to family, friends, and neighbors for connection. They will be as close to you as your cell phone, tablet or computer screen.
– While you are connected, inquire as to their well being, and offer emotional support and heart-felt connection. Uplift another during this time of uncertainty and fear. Be sensitive to the needs of vulnerable people, and assist where you safely can. This will uplift you as well as them.
5. Take care of your inner spirit.
– Connect to the Divine, whether you find that in nature, prayer, or within your own inner being.
– Breathe in inner peace (or whatever spiritual quality you need in the moment) on the inhale, and breathe out a "Yes" to receiving that quality on the exhale.
– Practice meditation, including various soothing guided meditations available on YouTube and various apps to calm your soul and heart.
– Open your heart to the love that surrounds you. Take that love in, and express the love you feel for those dear to you.
– Send out healing thoughts, energy and prayer for all medical, health care and emergency teams; our world, national and local leaders; those who are ill, suffering or at risk; and for actions to be taken by all to allow a flattening of the pandemic’s curve. | https://janeneforsythmft.me/tools-to-cope-with-covid-19-virus-anxiety-fear-and-stress |
Q:
Is the darktable manual implying that all RAW files need sharpening?
Early in the darktable manual (p13 of the pdf) there is the line:
"If you start your workflow from a raw image, you will need to have your final output sharpened."
Which confused me a little bit. Is one of the following interpretations correct or is there a better reasoning?
"In camera jpeg conversion includes a little sharpening so you'll want to replicate that"
"Humans like sharper images than the camera 'sees' so we all adapt by sharpening everything.
"There's something about RAW that implicity reduces the sharpness and you need to compensate"
(There is a similar question, but that focuses on a general sharpening of all digital files, and this interesting question is about the ordering of the steps)
A:
All of your "interpretations" are technically incorrect because there is nothing in the statement or context (as you've described) to imply any of them. You shouldn't read more into the statement than what it actually says. That is not to say that there is no underlying reason for the statement, just that you cannot divine what that reason is.
That said, your proposed statements are often true, so most analog-to-digital image captures do benefit from some sharpening. Interpolation is inherently "blurry", which sharpening can counteract to some extent.
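As a hedged illustration of the kind of output sharpening the manual is talking about, here is a minimal unsharp-mask sketch using Pillow rather than darktable itself; the file names and the radius/percent/threshold values are arbitrary placeholders:

```python
from PIL import Image, ImageFilter

# Load a demosaiced (hence slightly soft) image exported from a raw converter.
img = Image.open("raw_export.tif")

# Unsharp mask: blur a copy, then push pixel differences back into the original.
# radius ~ size of detail to enhance; percent ~ strength; threshold ~ noise floor.
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))

sharpened.save("raw_export_sharpened.tif")
```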
Please use this identifier to cite or link to this item:
http://hdl.handle.net/20.500.12666/761
Title: Modeling and Measuring the Shielding Effectiveness of Carbon Fiber Composites
Authors: Díaz Angulo, L. M.; Gómez de Francisco, Patricia; Plaza Gallardo, B.; Poyatos Martínez, D.; Ruiz Cabello Núñez, M. D.; Escot Bocanegra, D.; García, Salvador G.
Keywords: Carbon fiber composites; Electromagnetic compatibility
Issue Date: 25-Oct-2019
Publisher: Institute of Electrical and Electronics Engineers
DOI: 10.1109/JMMCT.2019.2949592
Published version: https://ieeexplore.ieee.org/document/8883044
Citation: IEEE Journal on Multiscale and Multiphysics Computational Techniques 4
Abstract: We provide a model able to predict the shielding effectiveness (SE) of carbon fiber composite (CFC) panels made of stacked layers of conducting fibers. This model permits us to obtain simple formulas in which the only parameters needed are the sheet square resistance and the effective panel thickness. These tools let us predict a minimum SE, which always increases with frequency and therefore constitutes the worst case from an electromagnetic shielding perspective. Consequently, compliance with minimum SE requirements can be verified simply with a micro-ohmmeter, using a specific experimental setup which is also described here. Additionally, this method allows measuring very high SE, falling far beyond the dynamic range of the values measurable with the most commonly used standard, ASTM D4935. After describing the modeling technique and the different test setups used, a cross-validation between theoretical and experimental results is made for four different samples of CFC: two designed to test the modeling assumptions and two representative of those used nowadays in a real aircraft.
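The closed-form expression is not reproduced in the abstract. A standard thin-sheet, plane-wave approximation relates minimum SE to the sheet square resistance alone, SE ≈ 20·log10(1 + Z0/(2·Rs)); the sketch below uses that textbook formula, which may differ from the paper's exact model:

```python
import math

Z0 = 376.73  # impedance of free space, ohms

def min_shielding_effectiveness_db(r_square_ohms):
    """Thin conductive sheet, plane-wave, electrically thin limit.

    r_square_ohms -- measured sheet (square) resistance in ohms/square
    """
    return 20 * math.log10(1 + Z0 / (2 * r_square_ohms))

# A CFC panel measuring 0.05 ohm/square (hypothetical reading):
print(f"{min_shielding_effectiveness_db(0.05):.1f} dB")  # ~71.5 dB
```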
URI: http://hdl.handle.net/20.500.12666/761
E-ISSN: 2379-8793
Appears in Collections: (Espacio) Artículos
Files in This Item:
| File | Description | Size | Format |
| acceso-restringido.pdf | | 221,73 kB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated. | https://digital.inta.es/jspui/handle/20.500.12666/761 |
NCIP agrees. “For close to a decade, we have had a detailed road map available for effective criminal justice reforms in the form of the 2008 recommendations from the Commission. But as those reforms have not been adopted or implemented, faulty convictions continue to take place and errors continue to be made,” says NCIP Executive Director Hadar Harris. “This important new study shows that, as a result, the State of California has wasted millions of dollars in taxpayer funds and destroyed decades of innocent peoples’ lives.”
The study was not limited to cases in which a claim of innocence was made; the authors noted that California’s uniquely difficult standard for proving actual innocence would restrict the evaluation’s parameters too much. Instead, the evaluation included all felony convictions that were reversed in the time period and in which charges were subsequently dismissed or the defendant acquitted.
“This study shows how pervasive the causes of wrongful convictions can be, and how some simple, commonsense reforms would both save taxpayers money and strengthen our system to make sure innocent people do not go to prison,” said Justin Brooks, Executive Director of the California Innocence Project. “The conclusions and recommendations found here should not be read as a criticism of current practices, but rather a way to open an earnest dialogue concerning the long-term effects of wrongful convictions and how to change things for the better.”
NCIP, CIP and LPI actively support criminal justice system reforms that rectify and prevent wrongful convictions. Reform efforts include statewide implementation of evidence-based eyewitness identification practices and mandatory videotaping of custodial interrogations.
“There are some very real costs to wrongful convictions. Of course, the most significant costs are to the individuals whose cases are not handled fairly by the judicial system. But there are also costs to the public and criminal justice system itself,” said Laurie Levenson, who holds the David W. Burcham Chair in Ethical Advocacy at Loyola Law School. “Everyone profits when the police, prosecutors and defense lawyers act zealously and honestly to ensure that a defendant’s rights are protected. Loyola’s Project for the Innocent is dedicated to rectifying wrongful convictions and preventing future injustices.”
For more information about NCIP’s, CIP’s and LPI’s exoneration efforts and policy initiatives, please contact:
NCIP Policy Director Lucy Salcido Carter at [email protected] or 650-400-4364 (Cell).
CIP Associate Director Alex Simpson at [email protected] or (619) 515-1525.
LPI’s Legal Director Paula Mitchell at [email protected] or (213) 736-8143.
About the Northern California Innocence Project
The Northern California Innocence Project’s (NCIP) mission is to create a fair, effective, and compassionate criminal justice system and to protect the rights of the innocent. NCIP is a project of Santa Clara University School of Law. NCIP is a member of the Innocence Network, an affiliation of independent organizations (including the Innocence Project, located in New York) dedicated to providing pro bono legal and investigative services to individuals seeking to prove innocence of crimes for which they have been convicted, and working to redress the causes of wrongful convictions. Since its founding in 2001, NCIP has attained justice for 18 innocent people who had collectively spent more than 230 years in prison. For more information, please visit www.ncip.scu.edu.
About the California Innocence Project
The California Innocence Project (CIP) is a law school clinical program at California Western School of Law dedicated to releasing wrongfully convicted inmates and providing an outstanding educational experience to the students enrolled in the clinic. Its three missions are: to free innocent people from prison; to provide outstanding training to our law students so they will become great lawyers; and to change laws and procedures to decrease the number of wrongful convictions and improve the justice system.
Founded in 1999, CIP reviews more than 2,000 claims of innocence from California inmates each year. Students who participate in the year-long clinic work alongside CIP staff attorneys on cases where there is strong evidence of factual innocence. Together, they have secured the release of many innocent people who otherwise may have spent the rest of their lives in prison. For more information, please visitCaliforniaInnocenceProject.org.
About Loyola’s Project for the Innocent
The Loyola Law School Project for the Innocent (LPI) investigates and litigates cases of wrongful conviction. LPI’s clients are men and women who are serving decades-long or life sentences in California prisons for crimes they did not commit. LPI is a proud member of the Innocence Network and is dedicated to providing pro bono legal and investigative services to indigent individuals seeking to prove their innocence. LPI’s social justice mission is also dedicated to reforming our criminal justice system to eradicate the primary causes of wrongful convictions. For more information, please visit www.lls.edu. | https://wrongfulconvictionsblog.org/2016/03/10/faulty-convictions-in-california-lead-to-2346-years-of-wrongful-imprisonment-and-282m-in-costs-over-23-years-new-study-finds/ |
Abstract: Literature and psychology stand in a two-way relationship, intersecting in their shared concern with people and human behavior. Just as literature and literary works can be approached and evaluated with the resources of psychology as well as those of literary studies, so literary works can be read through a psychological lens and psychological insights can be discovered in literature. Both psychologists and writers have therefore attended to the relationship between literature and psychology. The psychological study of literature, literary works, and writers introduced by Freud was continued by other outstanding theorists of psychology such as Adler, Jung, Lacan, Fromm, Reich and Klein. Likewise, writers and literary theorists such as N. Holland, Lev Tolstoy, Fyodor Dostoevsky and Virginia Woolf contributed to the psychology of literature. This paper analyzes the relationship between literature and psychology in light of the wide field that the science of psychology opens for literature. | http://dspace.khazar.org/handle/20.500.12323/3467
Beyond monetary measurements, we also need to recognize that being poor is not defined just by a lack of income.
Other aspects of life are critical for well-being, including education, access to basic utilities, healthcare, and security.
For example, someone may earn more than US$1.90 or $3.20 a day but still feel poor if lacking access to such basic needs as adequate water and sanitation, education, or electricity.
In fact, the number of people in South Asia living in households without access to an acceptable standard of drinking water, adequate sanitation, or electricity—about one South Asian in five lacks electricity at home—is far greater than those living in monetary poverty.
And when factoring in all aspects of well-being, the poverty rate more than doubles in five South Asian countries.
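To make that point concrete, here is a minimal sketch in Python, using entirely hypothetical household data, of why a multidimensional headcount can exceed a purely monetary one: a household counts as poor if it falls below the income line or lacks any of the basic services.

```python
# Hypothetical illustration only: multidimensional vs. monetary poverty.
# Each household: (income in US$/person/day, water, sanitation, electricity)
households = [
    (1.50, True,  True,  True),   # below the $1.90 line
    (2.50, False, True,  True),   # above the line, but no safe water
    (3.00, True,  False, False),  # above the line, but no sanitation/power
    (5.00, True,  True,  True),   # not deprived on any measure
]

LINE = 1.90  # extreme poverty line in US$ per person per day
monetary_poor = sum(income < LINE for income, *_ in households)
multidim_poor = sum(income < LINE or not all(services)
                    for income, *services in households)

print(f"Monetary headcount: {monetary_poor} of {len(households)}")         # 1 of 4
print(f"Multidimensional headcount: {multidim_poor} of {len(households)}") # 3 of 4
```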
This means that the challenge in securing higher living standards for the population of South Asia is far more daunting when poverty in all its forms is considered.
And while South Asia is expected to meet the goal of reducing extreme poverty to below 3 percent by 2030, many people will still be living in unsatisfactory conditions if the region does not make progress on other components of well-being.
We have made tremendous progress in the effort to end poverty, but the last leg of the journey will be the toughest.
To get there, we need more investment, particularly in building human capital, to promote the inclusive growth it will take to reach the remaining poor. Promoting opportunities for women and improving access to education and health services are vital.
We are adding new pieces to our understanding of the puzzle of poverty in South Asia and beyond.
This is a challenge that is not going away. We must sustain and accelerate the effort for Sharmin Akhtar and her family—and for all South Asians who seek a brighter, more fruitful, and productive future. | http://blogs.worldbank.org/endpovertyinsouthasia/endpovertyinsouthasia/comment/reply/1542
We’ve had our metadata standards included in the music submission guidelines up in our FAQ since the start, but we thought it was time to dive deeper and give you some examples of what we’re talking about! We asked some licensors we’ve enjoyed working with to let us use their songs as examples. Please note, however, the companies mentioned in the metadata text are not connected to this post.
I am going to continue today with an example of correct metadata for a cover. In this particular example, Gravelpit Music sent me a track for us to dissect.
Supe Troop recommends that you include as much information as possible in the metadata of all your digital music submissions. A lot of the fields are self-explanatory, but here are some helpful details specifically regarding metadata for a cover recording; a short sketch pulling the fields together follows the list below. If you want to know about all the fields, please see the complete metadata standards.
- Song Name – As this song is a cover, cite that in the Song Name field as “Song Name (Original artist cover)”
- Composer – As this is a cover, the composers are the same as the original release.
- Rating – Do NOT include rating. Leave this empty so music supervisors can use it themselves if they want.
- Grouping – Company clearing the master side (% controlled M) / Company clearing pub side (% controlled S). Although not the case in this example, it is possible for the publisher of the original song to send a cover version that they also control. In that case, the sender has the song one-stop and would state so here [e.g. Gravelpit Music (one-stop)]. Please note: The Grouping section includes the company that is the contact the music supervisor is dealing with, not the actual cue sheet publisher name (if they differ).
- Comments – Contact info for licensing party.
- Album – Album name, if it exists.
- Disc Number – Include if you have it, not required.
- Track – Include if you have it, not required.
- Artwork – Include if you have it; we love to see it, but it is not required.
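As promised above, here is a minimal sketch of the cover-recording fields as plain key/value pairs. All song details, company names and contact information below are hypothetical placeholders, not values from Gravelpit Music's actual submission; the composers shown are simply the writers of the original song being covered.

```python
# Hypothetical example of the cover-recording metadata fields described above.
cover_metadata = {
    "Song Name": "Yesterday (The Beatles cover)",        # flag the cover in the title
    "Composer": "John Lennon, Paul McCartney",           # same as the original release
    "Rating": None,                                      # leave empty for supervisors
    "Grouping": "Master Co (100% M) / Pub Co (100% S)",  # clearance contacts
    "Comments": "Licensing: [email protected]",     # contact for licensing party
    "Album": "Covers, Vol. 1",                           # if it exists
    "Disc Number": 1,                                    # optional
    "Track": 3,                                          # optional
}

# Optional fields (Disc Number, Track, Artwork) can simply be omitted.
```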
We’ll be back soon with our next example! Our previous post in this series discusses metadata for a recording of a public domain composition. | https://www.supetroop.com/metadata-for-a-cover/ |
6/24/2022
The Metropolitan Transportation Authority this week announced it has settled two class-action lawsuits brought against the New York City transit system for its subway stations that are inaccessible to people with disabilities.
Under the terms of the agreement, MTA will add elevators or ramps to create stair-free paths in 95% of the 346 currently inaccessible subway and Staten Island Railway (SIR) rail stations by 2055. The authority has committed to procuring contracts for accessibility construction to be completed at 81 stations by 2025, another 85 stations by 2035, another 90 stations by 2045 and the remaining 90 stations by 2055. The settlement still needs court approval.
"This settlement is not just the unveiling of a game plan, but the start of a closer collaboration between the MTA and advocates to achieve our shared goal, to ensure that everyone has the ability to ride mass transit without needing to plan around accessible stations," said Quemel Arroyo, MTA's chief accessibility officer and senior adviser.
Funds for a $5.2 billion accessibility upgrades program are included in the 2020-24 MTA Capital Program.
MTA has 472 subway stations, plus 21 SIR stations. Only 131 subway and SIR stations — or 27% — are fully accessible via elevators and ramps. Nine stations that are only partially accessible are also slated for upgrades. There are more than 1 million people with disabilities living in the city.
MTA leadership has been criticized for being slow to address accessibility concerns, reported The New York Times. Several transit systems in the nation's largest cities are either completely accessible or are more accessible than the MTA system, including in San Francisco, Washington, D.C., Boston, Philadelphia and Chicago.
Many subway lines have gaps of 10 or more stops between elevator access. The lack of access to one of the nation's largest transit systems has effectively blocked off some areas of the city from being fully accessible, the Times reported.
A group of advocacy organizations, including Disabled in Action, filed a lawsuit in 2017 claiming MTA's lack of accessibility was a violation of the city's human rights laws. And in 2019, a federal lawsuit claimed the MTA was violating the Americans with Disabilities Act when the agency renovated stations without installing elevators or ramps. Later that year, MTA officials approved the $5.2 billion capital plan to add elevators to 70 stations by 2024.
The remaining 5% of stations that are not included in accessibility upgrades have difficult engineering issues, making the addition of elevators or ramps unfeasible. | https://www.progressiverailroading.com/passenger_rail/news/MTA-settles-class-action-lawsuits-on-accessibility-issues--66876 |
What does a Technical Architect at the Home Office do?
Our Technical Architects are responsible for the design and build of the technical architecture that benefits Home Office teams as well as the services you use.
To do that we provide:
- technical governance of IT projects
- strategies for technology that meet our business needs
- translation of business problems into technical designs, aligning user needs with departmental objectives
- resolution of technical disputes
- translation of technical concepts to non-technical users so they’re understood by all
Our Technical Architects work closely with delivery teams and are hands-on – they might code prototypes or be involved in the technical management of teams. The type of projects we work on include:
- scaling the use of public cloud services, for example, migrating to Amazon Web Services to improve performance
- applying machine learning techniques in an operational context, such as projects to develop or implement speech recognition and language translation capabilities
- building shared technology DevOps platforms for hundreds of teams, reducing siloes and driving collaboration
Technical Architecture is recognised as one of the DDaT Professions. This means our people work within a Profession Capability Framework with clear career pathways and commitment to continuous professional development.
The framework includes Heads of Role, who are industry specialists. They build communities, set standards for what ‘good’ looks like, and ensure we have the right people in the right jobs at the right time.
‘A day in the life of a Tech Architect at the Home Office’: Meet Shueb
Hello, I’m Shueb. I’m the Lead Enterprise Architect in the Chief Technology Office (CTO) Architecture Team.
The CTO Architecture Team is responsible for establishing and maintaining architectural standards while exploring the latest IT trends.
As a Lead Technical Architect, I work across multiple projects and teams on challenges that require broad architectural thinking. I help guide project colleagues using architectures, principles, standards and tooling to deliver new systems or enhance existing systems – for both Home Office users and the public.
A big part of my role is gathering stakeholder opinions and balancing different demands to make design decisions. By demonstrating leadership, clarity of thought and impartiality I deal with objections and trade-offs.
I’ve worked on massive, complex projects as well as creating 5-year roadmaps for departmental portfolios. I’ve identified technical debt across the Home Office leading to substantial savings, completed exercises to allow us to better understand our current IT estate architecture and created roadmaps outlining how we can move toward our target IT architecture.
Joining the civil service from the corporate world
I joined Home Office DDaT a year ago. Before submitting my application I had many reservations. Some of them included being of Indian origin and having no experience in the civil service (having previously spent over 25 years in the corporate world). But all these concerns vanished at interview. The interview was one of the most inviting ones I’ve had in my entire career!
The level of engagement I received during the onboarding process made me feel so welcome – as though I were joining one big family. While I found my feet, colleagues did all they could to support me.
Loads of work variety and making a positive impact
As a Tech Architect in the Home Office not a single day is the same.
From internal business challenges to keeping our citizens and the country safe, there are many opportunities to help make a positive difference.
Our Home Office Careers site has more information on our roles and how to apply for them. | https://www.croydon.digital/2022/12/05/what-does-a-technical-architect-at-the-home-office-do/ |
Immediately following the publication of “Teaching the Origins Controversy: Science, or Religion, or Speech?” in the Utah Law Review in 2000, multiple law review articles appeared opposing the constitutionality of teaching intelligent design (ID). It seems that the law review article by Professors DeWolf, DeForrest, and Meyer hit a nerve that incited various law students to ardently defend the evolutionary theory they were uncritically taught in high school.
One such student was Eric Shih, who published an article in the Michigan State Law Review in 2007 entitled, “Teaching Against the Controversy: Intelligent Design, Evolution, and the Public School Solution to the Origins Debate.” Mr. Shih argues that “recent demands to ‘teach the controversy’ of intelligent design are nothing more than variations on the balanced tactics ruled unconstitutional by the Supreme Court in Edwards.” In other words, ID is nothing more than a mask for creationism.
Mr. Shih’s attacks are misplaced and confused. First, in real-world public policy debates, proposals to “teach the controversy” have explicitly opposed requirements to teach intelligent design. As Stephen C. Meyer explained in a 2002 op-ed titled “Teach the Controversy” in the Cincinnati Enquirer:
Recently, while speaking to the Ohio State Board of Education, I suggested this approach as a way forward for Ohio in its increasingly contentious dispute about how to teach theories of biological origin, and about whether or not to introduce the theory of intelligent design alongside Darwinism in the Ohio biology curriculum.
I also proposed a compromise involving three main provisions:
(1) First, I suggested–speaking as an advocate of the theory of intelligent design–that Ohio not require students to know the scientific evidence and arguments for the theory of intelligent design, at least not yet.
(2) Instead, I proposed that Ohio teachers teach the scientific controversy about Darwinian evolution. Teachers should teach students about the main scientific arguments for and against Darwinian theory. And Ohio should test students for their understanding of those arguments, not for their assent to a point of view.
(3) Finally, I argued that the state board should permit, but not require, teachers to tell students about the arguments of scientists, like Lehigh University biochemist Michael Behe, who advocate the competing theory of intelligent design.
(Dr. Stephen C. Meyer, “Teach the Controversy,” Cincinnati Enquirer (March 30, 2002).)
Second, Mr. Shih overlooks the fact that the sort of “teach the controversy” approach suggested by Dr. Meyer–to allow students to critique evolution–was implicitly supported by the U.S. Supreme Court in the Edwards ruling, which held: “We do not imply that a legislature could never require that scientific critiques of prevailing scientific theories be taught. … teaching a variety of scientific theories about the origins of humankind to schoolchildren might be validly done with the clear secular intent of enhancing the effectiveness of science instruction.” Edwards v. Aguillard, 482 U.S. 578, 593-594 (1987).
It is only by shoe-horning ID into the category of “creationism” that Mr. Shih makes his argument. The article asserts that public schools could never teach ID as an alternative to evolution because that would simultaneously advance an “inherently religious belief system” and inhibit “a secular government activity for religious reasons” (FN 205). Thus he argues that science teachers are “under an ethical duty to present accurate information to their students and are relied upon to maintain accepted scientific standards” (FN 219). The article goes on to contend that “design theory poses a unique threat to the education system in that it relies almost exclusively on attacks on a secular school subject in order to advance particular religious views” (FN 221).
Again, Mr. Shih’s arguments are misplaced. Design theory is not based upon religious views. Those who have fairly researched the theory will realize that it is actually a secular view that can be taught as a secular school subject alongside evolution without violating the Establishment Clause.
Yet despite the article’s diametrical opposition to intelligent design, it does find a place for it in the science classroom — but only where it is presented negatively:
[Once] students learn the basic tenets of the scientific method, teachers could directly address Intelligent Design theory during discussion about the naturalistic limitations of science…for example, students could be told that theories such as Intelligent Design are not scientific because every single study conducted by a design theorist relies upon an empirically unobservable and un-testable entity to explain what is being observed. (FN 234)
Under Mr. Shih’s vision of education, students will learn why the scientific method cannot be used to empirically determine the existence of an intelligent being and why any claim contrary to this teaching is false. Is this a fair solution? Mr. Shih’s hypocrisy should now be exposed: on the one hand he argues that ID should not be taught because it is an “inherently religious belief system,” but then he argues that taxpayer-funded schools should attack that viewpoint. The government is supposed to remain “neutral” about religion (see Epperson v. Arkansas, 393 U.S. 97, 104 (1968)), but perhaps in Mr. Shih’s view, there’s no problem with attacking what he calls “religion.”
The genius of Mr. Shih’s plan is that it creates rules that destroy the possibility of students considering any alternate theory before a fair presentation of the facts has been rendered. Why not present ID as a theory held by a minority of the scientific community that is in the beginning stages of development, and let students decide for themselves whether evolution or ID makes more sense? The last time I checked, science is not about clinging to one view and ending the investigation.
Additionally, Mr. Shih has not made a convincing case that intelligent design contradicts the scientific method. As Discovery Institute’s Intelligent Design Briefing Packet for Educators states:
The scientific method is commonly described as a four step process involving observations, hypothesis, experiments, and conclusion. ID begins with the observation that intelligent agents produce complex and specified information (CSI). Design theorists hypothesize that if a natural object was designed, it will contain high levels of CSI. Scientists then perform experimental tests upon natural objects to determine if they contain complex and specified information. One easily testable form of CSI is irreducible complexity, which can be discovered by experimentally reverse-engineering biological structures to see if they require all of their parts to function. When ID researchers find irreducible complexity in biology, they conclude that such structures were designed.
It seems that Mr. Shih is creating rules to effect the result he desires, ignoring that intelligent design is a theory that makes its claims using the scientific method. | https://evolutionnews.org/2008/08/to_teach_or_not_to_teach_commo/ |
Vancouver Olympics Info
Vancouver (/vænˈkuːvər/) is a coastal seaport city in western Canada, located in the Lower Mainland region of British Columbia. As the most populous city in the province, the 2016 census recorded 631,486 people in the city, up from 603,502 in 2011.
More @Wikipedia
Get the latest news about Vancouver Olympics from the top news sites, aggregators and blogs. Also included are videos, photos, and websites related to Vancouver Olympics.
Hover over any link to get a description of the article. Please note that search keywords are sometimes hidden within the full article and don't appear in the description or title.
Vancouver ... Featured News
Russian Olympic boss quits after worst showing
Medvedev demands resignations over Olympic flop
Canadians celebrate men's hockey win
'Night Train' gives US 1st 4-man gold since 1948
Apolo Anton Ohno disqualified in men's 500-meter final
Vancouver Olymp... Websites
Olympic Games | Winter Summer Past and Future Olympics
The new Olympic Channel brings you news, highlights, exclusive behind the scenes, live events and original programming, 24 hours a day, 365 days per year.
2010 Winter Olympics - Wikipedia
The 2010 Winter Olympics, officially known as the XXI Olympic Winter Games (French: Les XXIes Jeux olympiques d'hiver) and commonly known as Vancouver 2010, informally the 21st Winter Olympics, was an international winter multi-sport event that was held from 12 to 28 February 2010 in Vancouver, British Columbia, Canada, with some events held in the surrounding suburbs of Richmond, West ...
Special Olympics British Columbia – Vancouver – Production ...
This is just a friendly reminder that we have our monthly meeting coming up this Thursday, March 8th at 7PM in MTG 2 at Creekside Community Centre (1 Athletes Way, Vancouver, BC V5Y 0B1).
2018 PyeongChang Olympic Games | NBC Olympics
Visit NBCOlympics.com for Winter Olympics live streams, highlights, schedules, results, news, athlete bios and more from PyeongChang 2018.
Ice hockey at the 2010 Winter Olympics - Wikipedia
Ice hockey at the 2010 Winter Olympics was held at Rogers Arena (then known as GM Place, and renamed Canada Hockey Place for the duration of the Games due to IOC sponsorship rules), home of the National Hockey League's Vancouver Canucks, and at UBC Winter Sports Centre, home of the Canadian Interuniversity Sport's UBC Thunderbirds. Twelve teams competed in the men's event and eight teams ...
Vancouver Olympics Topics
10,000 Meters
2010 Winter Olympics
Andre Lange
Apolo Anton Ohno
Apolo Anton Ohno Disqualified
Bobsledding
Bond Girl
Broken Finger
Canada
Disqualified
Dmitri Medvedev
Endorsement Deals
Figure Skating
Gerard Kemkers
Germany
Gold Medal
Halle Berry
Hockey
Holland
Ice Dancing
Ice Hockey
Injury
Inner Lane
James Bond
Jane Seymour
Katarina Witt
Kim Yu-na
Lee Seung-hoon
Lindsey Vonn
Lindsey Vonn Injury
Men's 500-meters
Men's Hockey
Night Train
Olympic Flop
Olympic Record
Olympics
Pressure
Records
Resignation
Russia
Russian Olympic Committee
Scott Moir
Skater
Skating
Skiing
South Korea
South Korean Skater
Speedskater
Steve Holcomb
Sven Kramer
Switzerland
Tessa Virtue
United States
Ursula Andress
Vancouver
Vancouver Games
Winter Games
Winter Olympics
Winter Olympics Medal Count
Women's Alpine Skiing
Women's Hockey
Women's Short Program
Yu-na Kim
Zach Parise
| http://www.wopular.com/newsracks/vancouver%2Bolympics
Titus Livius “Livy” Patavinus (59 BC – AD 17)—Historian who wrote a monumental history of Rome and the Roman people.
“We can neither endure our vices nor face the remedies needed to cure them.” The Roman historian Livy (59 BC – AD 17) wrote this terse, sage summary of a culture in decline when he was in his 30s and Rome was just beginning her Golden Age under Augustus. How was this young historian prescient enough to see in Augustan Rome the dark dawning of its own destruction?
There are definite, unmistakable harbingers of the fall of past civilizations and of civilizations in our own time. A civilization starts out pristine and hopeful and gradually becomes tarnished and jaded. Civilizations fall from within, and only then are they vulnerable to the withouts. | https://earlychurchhistory.org/politics/decline-fall-of-civilizations/
Why Do Ceramics Not Corrode? Ceramics are materials made up of inorganic, nonmetallic elements. They are generally strong, hard, and brittle. Because of their inorganic composition, ceramics do not corrode the way metals do.
Is corrosion a ceramic material? No, corrosion is not a ceramic material. Ceramic materials are typically strong, non-metallic materials that are often used in construction or engineering applications. Corrosion, on the other hand, is a process that occurs when a metal or metal alloy corrodes, or deteriorates, as a result of a chemical reaction with its surroundings.
Why ceramics do not corrode? Ceramics are resistant to corrosion because of their chemical and physical properties. The oxides that form on the surface of ceramics create a barrier that inhibits corrosion.
Can plastics corrode? Yes, plastics can corrode. In fact, the term “corrosion” is often used interchangeably with “degradation” when referring to plastics. This is because plastics are susceptible to various types of degradation, including photodegradation (degradation caused by light), thermodegradation (degradation caused by heat), and hydrolysis (degradation caused by water).
Frequently Asked Questions
What Causes Corrosion On Plastic?
There are a number of factors that can cause corrosion on plastic. The most common are environmental factors, such as the presence of water or oxygen, as well as chemical factors, like acid or alkaline compounds.
Does Ceramic Undergo Corrosion?
Ceramic is a material that is used in many applications, including corrosion protection. It can be used to protect equipment and pipelines from corrosion.
Why Do Plastics Not Corrode?
The reason that plastics do not corrode is that they are made of organic materials, which means that they are not susceptible to the same types of corrosion as metals. In addition, plastics are often treated with additional coatings that make them even more resistant to corrosion.
Do Ceramics Corrode?
Ceramics can degrade over time in aggressive environments. Although they do not undergo the electrochemical corrosion that affects metals, the materials they are made of can slowly react with their surroundings, which can lead to the development of cracks and other damage to a ceramic object.
How Does Plastic Prevent Corrosion?
Plastic coatings can be used to prevent corrosion by forming a barrier between the metal and the environment. The plastic barrier prevents contact between the metal and the corrosive elements, which helps to stop the corrosion process.
Do Metals Plastics And Ceramics Corrode?
Metals, plastics, and ceramics corrode when they come into contact with other substances that can damage them. This can cause the material to break down over time, which can lead to decreased functionality or even complete failure.
Can Ceramics Corrode?
Ceramic materials are broadly corrosion-resistant, but they are not immune: they can corrode in the presence of strong acids, bases, salts, and other aggressive environments. The corrosion of ceramics can result in a loss of material, discoloration, pitting, and cracking.
Ceramics are resistant to corrosion because of their physical and chemical properties. They have a low porosity, which means they do not allow fluids or other substances to penetrate them. Additionally, they are non-reactive, meaning they do not corrode in the presence of other substances. | https://babyfingertalk.com/home-and-kitchen/simple-answer-why-ceramics-do-not-corrode/ |
My unconventional mixed-media visual work incorporates classic portraiture and classical references with hybrid art image creation. I use a varied method of combining beautiful hand drawings, strong digital image-making, screen-printing and ink transfers. I bring this together with skilled acrylic and aerosol painting on wood and other canvases.
In the 21st century, everything is affected by digital media and the Internet. Most works of art created today will be seen on digital devices more times than in person. I make art to encourage interactions―physically and digitally.
My work aims to ask questions regarding equality, immigration and what it means to be Black in North America. I explore culture and humanity's relationship with beauty, sex, nature and cultivation through my creative process. I use images to explore the notion that history and entertainment, including film and other media, shape the mass public perception of Black people and people of colour in North American society. I create art to document this with compositions brimming with references to media, popular culture, music, and art history. The work aims to add beauty to the world while invoking the unending social responsibility to capture thought. As Black people, Indigenous people and people of colour, we are not accustomed to seeing nuanced reflections of ourselves in contemporary visual culture. I create work featuring many of these faces and the issues we encounter in an attempt to be accurately represented.
I like happy mistakes in art, such as ink bleeds and artwork affected by age, sun, rain and natural elements, which creates areas that are worn away or lifted. I think some mistakes, simplicity and chance are beautiful fundamentals of creating.
I am truly captivated by the development process of taking a simple idea from nothing and watching it grow into a completed project that can be seen and touched, interpreted, and enjoyed by the viewer. Images allow me to share an idea or evoke an emotional response almost instantly. I channel this emotion and energy into creating, inspiring new work and exploring new ideas.
Lives and works in Toronto. | https://www.kestincornwall.com/artiststatement |
Aging is modulated by both genetic and environmental factors. Dietary nutrients have been shown to be among the most potent environmental factors with a significant impact on healthspan and lifespan. A number of pharmaceuticals and nutraceuticals have been identified as having prolongevity effects in model organisms. Nutraceuticals made from plants are rich in phytochemicals, which possess diverse bioactivities and exert numerous health benefits, including anti-aging effects. However, whether and how dietary nutrients influence the prolongevity effects of interventions for promoting healthy aging remains elusive. This is an important issue for the aging field to address, considering the diverse dietary customs among human populations of different geographic origins.

Invertebrates, including worms and flies, are ideal model organisms for investigating the prolongevity effects of pharmaceuticals and nutraceuticals, at least partially due to their short lifespans and rich genetic resources. We have summarized the research progress on prolongevity nutraceuticals using invertebrate models and published a review in Oxidative Medicine and Cellular Longevity (2013). This work should provide valuable guidance for future mechanistic studies on the effect of nutraceutical supplementation in delaying the aging process and improving healthspan.

To investigate the interaction between nutraceuticals and macronutrients in lifespan modulation, we have investigated the effect of cranberry-derived nutraceuticals on lifespan and determined the impact of dietary macronutrient composition on the prolongevity effect of cranberry in Drosophila. We have found that cranberry can extend the lifespan of flies fed a diet with modest amounts of sugar and protein, and increases lifespan more prominently in flies fed a high sugar-low protein diet, but does not extend or shorten the lifespan of flies fed a low sugar-high protein diet. We have further demonstrated that lifespan extension induced by cranberry supplementation is associated with increased lifetime reproductive output and higher expression of stress response genes. We have also shown that cranberry can improve the survival of flies fed a high-fat diet. This study reveals the critical role of dietary macronutrients in the prolongevity effect of cranberry supplementation and points out the importance of taking diet composition into account when implementing interventions for promoting healthy aging. This line of work has been accepted for publication in the Journal of Gerontology: Biological Sciences (2013). Future work will be directed toward understanding the molecular mechanisms underlying the prolongevity effect of cranberry supplementation and its interaction with dietary macronutrients in modulating lifespan. This will provide a comprehensive view of, and help improve, interventions for promoting healthy aging.

An important issue in aging studies is to assess healthspan, since a fundamental goal of aging research is not just to increase lifespan but to significantly improve healthspan through the preservation of function. Aging is associated with numerous behavioral changes, such as the gradual decline of locomotor activity, which is a parameter for healthspan. Many tools are available for measuring locomotor activity in model organisms and humans. However, age-related behavioral changes remain poorly understood, mainly due to the lack of tools capable of recording lifelong behavioral changes at high resolution in any organism.
We have previously developed a behavior monitor system (BMS) that can record six types of behaviors at a fine resolution over the lifetime of Mexican fruit flies (mexflies). Mexflies have been used extensively in demographic and aging intervention studies. In collaboration with Drs. Pablo Liedo in Mexico, Joanne Chiu and James Carey at UC Davis, and Donald Ingram at Pennington, we have taken advantage of the BMS to investigate the impact of diet on age-related changes in locomotor activity and in sleep quantity and quality, using high-resolution lifelong behavior recordings in mexflies. We have found that flies on a nutritionally balanced diet show little age-related change in activity profile, while flies on a suboptimal diet show a significant decrease in activity amplitude and lower sleep quality at old age. This line of work has been published in Scientific Reports (2013). Future work will use the BMS to evaluate lifelong behavioral changes induced by any prolongevity intervention, to shed light on the impact of aging interventions on healthspan. We will also develop a similar BMS for Drosophila in order to investigate the molecular mechanisms underlying lifelong behavioral changes.

In summary, we have determined the impact of dietary macronutrients on the prolongevity effect of a cranberry-containing nutraceutical in Drosophila, and we have assessed the impact of diet on healthspan by analyzing lifelong behavioral changes in mexflies under different dietary conditions. These findings provide the foundation for our future research directed towards understanding the molecular mechanisms underlying the interplay between dietary macronutrients and nutraceuticals or pharmaceuticals in modulating lifespan and healthspan. These studies should provide valuable information for developing efficient interventions to promote healthy aging in humans. This project should advance the objectives of the Translational Gerontology Branch and the overall mission of the National Institute on Aging. | https://grantome.com/grant/NIH/ZIA-AG000366-06
Snapdeal loss rises 5 times to Rs 1,350cr
Author: Digbijay Mishra | Published 2015-08-20 18:01
Data accessed by TOI from the Hong Kong Stock Exchange showed that Snapdeal's loss rose from around Rs 270 crore in March 2014 to nearly Rs 1,350 crore in March 2015 as the company shelled out $25 million (over Rs 150 crore) a month on discounts and marketing expenses.
Snapdeal may not be an isolated case. The situation, industry experts say, is similar across the e-commerce segment, and is not confined to India alone. For instance, Amazon recently emerged as the world's largest e-tailer and has been commanding a hefty premium on stock markets, despite reporting a surprise profit last month after several quarters of losses. | https://calport.com/article57
The following is the complete text of William Cullen Bryant's "The Embargo; or, Sketches of the Times." To see all available titles by other authors, drop by our index of free books alphabetized by author or arranged alphabetically by title.
Potential uses for the free books, stories and prose we offer:
* Rediscovering an old favorite book, short story or poem.
* Bibliophiles expanding their collection of public domain eBooks at no cost.
* Teachers trying to locate a free online copy of a short story or poem for use in the classroom.
NOTE: We try to present these classic literary works as they originally appeared in print. As such, they sometimes contain adult themes, offensive language, typographical errors, and often utilize unconventional, older, obsolete or intentionally incorrect spelling and/or punctuation conventions.
| https://www.accuracyproject.org/t-Bryant-TheEmbargo.html
At Fuel, we place audiences and communities at the heart of what we do: conversations with audiences as part of research and development, and around the shows themselves, in person and online. These conversations are essential to producing work that connects with audiences and has an impact: performances that give audiences a good night out, with new ideas to think about and different perspectives on the world we share.
Over the coming months and years, we will keep developing our public forums and socials (e.g. Theatre Clubs), as well as the opportunities digital engagement can offer.
Since 2004 we have reached audiences of 500,000+ and worked with 3,000+ participants, presenting work from Ullapool to Totnes, from inner-city schools to village halls, from working men’s clubs to the National Theatre, for all ages – always, in our very DNA, with audience experience at the heart of our mission. We choose to work with artists who seek to connect meaningfully with audiences in a conversation about the world we live in and how we relate to each other in that world.
In the last 5 years, we developed a programme entitled New Theatre in Your Neighbourhood (NTiyN), seeking to work in a deeper place-based way, over time, to build connections with under-served communities and learn from experiments in how to reach new audiences and deepen engagement.
Our Local Engagement Specialist model emerged out of NTiyN and was refined with the appointment of a dedicated Engagement Producer in 2017, bringing this learning and practice together into a coherent strategy and continuing to develop our approach to engagement. Since 2020, Fuel has committed to offering 10% of all production tickets for free to targeted groups who may not otherwise attend theatre.
Fuel is a Wellcome Trust Sustaining Excellence partner, enabling us to explore research-led creative processes with artists, scientists, and established cultural institutes, such as through the appointment of Fuel’s Associate Scientist, Dr Magda Osman of Queen Mary University of London. | https://fueltheatre.com/engagement/
Work-related musculoskeletal pain and injuries are a common occurrence in the workplace. Work-related pain and joint issues often result from long-term improper sitting, poor posture, and a bad work chair. This means the chair you have been sitting on during work hours could be the reason why your back has been under so much pressure lately.
If you or your employees suffer from prolonged and constant pain when working, you need to make some changes. From finding the best sitting position for hip pain to simple ways of relieving hip pain while sitting, this article introduces easy everyday tips for attaining better posture and combating body pain at work.
How to Sit With Hip Pain?
When it comes to sitting with hip flexor pain, office workers aren't the only ones susceptible to this issue. Students who spend hours studying, people who drive for hours at a time, and anyone traveling a long distance are also likely to suffer from hip pain.
Other than finding the correct sitting posture for hip pain and getting the best office chair, here are some other tips you can try to cure and prevent hip pain.
Maintain a Proper Angle
Hip angle matters most during prolonged sitting. An improper angle puts unnecessary strain on the joint and causes the muscles to bear extra stress. And especially for people who suffer from hip bone-related issues, an improper angle will only aggravate the problem and cause the situation to worsen.
Here are some tips on how to maintain a proper hip angle
- Don't sit and work on chairs low in height, such as working on a couch or a sofa
- Adjust your sitting position in a way so the hips are slightly higher than the knees
- Using a wedge cushion also helps with the right hip angle
- A little recline in the seatback can adjust the hip angle properly.
The Right Chair
When you are engrossed in work, spending hours in front of the desktop, you tend to shift into uncomfortable angles unintentionally; it is hard to stay aware of your sitting position at all times. The right chair helps you adopt the correct angle as soon as you sit in it. It is wise to invest in the best office chair for back pain, as it contains all the right adjustments to achieve the right posture.
While using an ergonomic chair, make sure you maintain the correct sitting posture at the desk.
Control Compression
Controlling compression is one of the most effective ways to improve your sitting position for hip pain. Hip pain is often the result of consistent compression on the bone; medically, one such condition is known as proximal hamstring tendinopathy, a major cause of hip pain in people who work while sitting. Hence, to control or prevent hip pain, you need to reduce the compression your body experiences.
A pressure relief cushion can help relieve compression across these structures when sitting. Alternatively, start with a pillow to add softness to the area you're sitting on. If that doesn't work, you can buy a piece of thick medium-density foam and cut a few holes for the sitting bones. It seems strange, but it works!
Keep Moving
A study reveals that switching between sitting and standing positions when working effectively fights several posture and back pain issues. This means that periodic movement is the key to preventing your body from physically being strained. Other than getting a standing desk so you can alternate between sitting and standing positions, you can also practice a few desk stretches to eliminate any pain and uneasy feeling in your muscles.
Using Heat
Heat is another remedy worth adding to your routine. If you suffer from hip pain because of a poor chair or prolonged sitting, heat can make the area feel better and reduce swelling. Besides taking preventive measures, you can repeat this procedure regularly to wave that hip pain a final goodbye.
Relax your Hip Muscles
Squeeze and relax the gluteal muscles in your buttocks to relieve strain on your bottom while stimulating blood flow. This practice will also help you feel relaxed when working so you don't feel fatigued sooner.
Eat Healthily
Eating healthy improves your overall health and boosts up the immune system. People who eat healthy experience better energy levels and have greater bone health. Hip pain is also a result of being obese; hence eating healthy can be helpful in that regard too.
Frequently Asked Questions
Can A Standing Desk Help With Hip Pain?
Yes. A standing desk allows you to switch between sitting and standing positions so that different parts of the body bear the stress in turn, which distributes that stress more evenly as you work. Standing also burns more calories, so the chances of developing serious bone and hip pain are significantly reduced.
Can Prolonged Standing Cause Hip Pain?
Workers who spend too much time on their feet are at a higher risk of developing pain and discomfort in their feet, shins and calves, knees, thighs, hips, and lower back.
How Do I Know If My Hip Pain is Serious?
Hip pain doesn't appear all at once; it's a slow transition from slight discomfort to an extremely uneasy feeling. If you want to know when to see a doctor, here are some telltale signs:
- A deformed joint that looks crooked
- You are unable to move your leg or hip.
- The injured limb is unable to bear weight.
- Intense discomfort
- Swelling that occurs suddenly
- Symptoms of infection (fever, chills, redness)
What is the Best Position for Hip Pain?
Hip pain results from extra strain on your hip bone and muscles. Sleeping, sitting or working in a position that takes stress off the hip joint relieves the pain. If you suffer from hip pain, make sure you rest on your side.
| https://www.autonomous.ai/ourblog/sitting-position-for-hip-pain-to-practice
StepStones for Youth ("StepStones") is a registered charitable organization whose mission is to provide support for disadvantaged children and youth who have experienced trauma, abuse and unstable guardian care, with minimum to no support in their lives. To create long-term change, we support youth in educational achievement, securing employment, finding stable housing and building lasting support networks.
Description
Our principal aim at StepStones for Youth is to prevent youth who are ageing out of the child welfare system from dropping out of school and becoming homeless and impoverished. We believe that if preventative measures are taken to support these youth, then we can foster stability and independence that will drastically reduce their dependence on corrective social services in adulthood, and consequently disrupt the cycle of poverty. StepStones’ programs demonstrate that these outcomes are possible when youth feel connected, empowered, and supported.
StepStones’ programs are designed to encourage skill development, promote educational attainment, increase youths’ employability, and reduce the likelihood of homelessness. To achieve these outcomes we offer a range of programs including the Youth in Transition Housing Program, the Youth in Transition Education and Mentoring Program, and the Summer Camp and Leadership Program. Collectively, our programs help youth to foster their inherent strengths and build their capacity to take charge of their own lives. Using this approach we have witnessed firsthand positive changes in youths’ physical, social, and emotional development that in turn promotes their self-esteem, self-advocacy, and overall self-sufficiency. | https://www.volunteermatch.org/search/org1072882.jsp |
Advisor & Advisee Responsibilities
Both the student and the advisor have responsibilities for an effective and beneficial partnership. In addition to both being active participants in the relationship, each is additionally responsible for the following:
The student can expect the advisor to:
- Assist the student in understanding the purposes and goals of higher education and its effects on the student’s life and personal goals.
- Understand and effectively interpret the curriculum, graduation requirements, and university and college policies and procedures.
- Assist the student in gaining decision-making skills and in assuming responsibility for educational plans and achievements.
- Encourage and guide the student as he or she defines and develops realistic goals.
- Encourage and support the student as he or she develops clear and attainable educational plans.
- Provide the student with information about, and strategies for, utilizing the available resources and services on campus.
- Monitor progress toward meeting the student’s degree plan and goals.
- Assist the student in working closely with other faculty.
- Be accessible for meetings via office hours, one-on-one appointments, telephone, or e-mail.
- Maintain confidentiality.
The advisor can expect students to:
- Schedule regular appointments or maintain regular contact with their advisor during each semester.
- Arrive prepared to each appointment with questions or material for discussion.
- Be an active learner by participating fully in the advising experience.
- Accept responsibility for decisions.
- Ask questions if they do not understand an issue or have a specific concern.
- Maintain a record of progress toward degree plan and goals, keeping all necessary official documents.
- Complete all assignments and follow through on recommendations of the advisor.
- Gather all relevant decision-making information.
- Clarify personal values and goals.
- Be knowledgeable about college programs, policies, and procedures. | https://www.butler.edu/academic-services/learning/advising/advising-responsibilities/ |
Leadville residents woke up to snow on historic Harrison Avenue on Sunday, October 12. It was the first (measurable) snow of the season. Photo: Leadville Today.
All you Ninja Turtles, Elsas, and Rocketmen can relax – those reports of the cancellation of this year’s Trick or Treat Street were greatly exaggerated! The event is on – with more enthusiasm than ever!
This annual Halloween event will be held on W. 7th Street (first two blocks) and will allow children to go traditional door-to-door Trick or Treating in a safe, supervised environment. Halloween falls on a Friday this year, and the event will run from 5 to 9 p.m. on October 31.
Trick or Treat? Children will enjoy door to door goodies as they don their favorite costumes for the annual Halloween tradition. Photo: Leadville Today.
Amanda Stinnett has taken over organizational responsibilities, with help from Bernadette Bifano. The team is requesting community donations in the form of candy or cash. Candy donations can be dropped off at the following locations: ALCO, the Leadville Sanitation District building which is located on the SW corner of McWethy and Hwy 24 South, Wells Financial Services (in the post office building), and the Stapletons are happy to accept donations at their house located on 118 W. 7th Street.
It takes about $1,000 to put on Trick or Treat Street, which provides candy for 700 ghosts and goblins. There’s never been a year when they HAVEN’T run out of candy, so if you’re in a position to pick up some extra treats or make a financial contribution, contact Amanda Stinnett at 719-293-0099.
The Stapleton Haunted Mansion on E. 7th Street in Leadville.
The Stapleton Haunted Mansion (118 W. 7th St.) will be open Halloween Night, October 31 and then again on Saturday, Nov. 1, from 6 – 9 p.m. Come and get your spook on in a fun, safe environment located in an historic 1880s carriage house. Children 13 years and younger must be accompanied by an adult. The cost is $5 per person. Proceeds, in part, go to Full Circle of Lake County, a non-profit youth services program. | https://leadvilletoday.com/2014/10/13/latest-news-october-13-mon/ |
Achilles is the personification of individualism, a ruling idea made flesh, a demigod, the best of the Greek warriors. Everything individualistic is alien to Hector. He is honest, an opponent of the war, and proposes to settle it not with armies but with single champions. Hector is also shown in peaceful life: his farewell to Andromache is the most psychologically subtle scene of the poem. He is a patriot; shame did not allow him to hide behind the walls. He is terrified when he sees Achilles and runs; three times they run around Troy before Hector masters his fear. The casting of lots decides Hector's death. He asks Achilles to give his body to his relatives, but Achilles refuses, because this is his revenge for Patroclus.
In the Iliad, both the Greeks and Achilles are inferior to Hector in sincerity. Hector, son of Priam, receives the most humane and appealing features Homer gives any hero. Unlike Achilles, Hector is a hero who knows what social duty is; he does not place his personal feelings above other people. Achilles is the personification of individualism (his personal quarrel with Agamemnon is inflated to a cosmic scale). Hector has none of Achilles' bloodlust; he is generally opposed to the Trojan War, sees it as a terrible tragedy, and understands all its horror, its whole dark and disgusting side. It is he who proposes to fight not with armies but with single champions (Paris for the Trojans, Menelaus for the Greeks). But the gods do not allow him to carry this through: Paris, with Aphrodite's help, escapes from the battlefield.
Hector, unlike Achilles and the other heroes, is shown from an entirely different side, in peaceful life. The scene of his farewell to his wife Andromache is one of the most psychologically subtle scenes in the poem. She asks him not to take part in the battle, because Achilles, who destroyed Thebes and her entire family, awaits him there. Hector loves his dear ones very much and knows that Andromache will be completely alone without him, but the duty of a defender of the Fatherland comes first for him. Shame will not allow him to hide behind the wall.
Thus, both Hector and Achilles are glorified warriors. But while Achilles places his personal emotions and personal gain above all, Hector sacrifices himself for the Fatherland, giving up his peaceful family life in the name of his city.
Hector too is accompanied by gods (Apollo, Artemis), but the difference between him and Achilles is vast. Achilles is the son of the goddess Thetis and is invulnerable to human weapons (except at the heel); in reality he is not a man but a demigod. Achilles puts on armor forged by Hephaestus when preparing for battle. Hector is a simple man facing a dreadful ordeal, and he knows that he alone can accept Achilles' challenge. It is not surprising that at the sight of Achilles he is seized with horror and runs (three times the heroes run around Troy: a hyperbole). The Moirai determine the heroes' fates by placing lots on the scales, and Athena helps Achilles. Dying, Hector asks only one thing: that his body be given to his relatives so they can perform the funeral ceremony (extremely important for the Greeks). But Achilles, avenging the death of his friend, says that he will throw Hector's body to the dogs.
The images of these two heroes differ significantly. If the name of Achilles opens the poem, the name of Hector closes it: "So they buried the body of Hector, tamer of horses." Everything human is gathered in Hector, both strengths and weaknesses (he is horrified by Achilles and runs), while Achilles is practically a demigod. | http://hkvchannel.com/2019/04/12/composing-relative-analysis-of-literary-heroes-on-16/
We don’t always have the luxury of measuring an entire population, so we take samples. For good or for bad, there is plenty of advice when it comes to sampling. Sir R. A. Fisher said that if all the elements in your population are identical, you need only a sample size of one. John Tukey noted that in general you never improve your estimate of variability more than when you go from a sample size of one to a sample size of two. George Runger has said that he’d rather have a small sample of parts collected over a couple of weeks of production than a couple of thousand parts from one batch produced this afternoon. At the risk of making a sweeping generalization, we see a lot of work done with samples of size 30 or so. Is there something magical about a sample size of 30?
With a lot of determination and a little help from some friends (including Fisher), William Gosset (aka “Student”) developed the properties of the t distribution. Gosset’s work at the Guinness brewery in Dublin led him to look closely at what it means to rely on a small sample to estimate the population mean and the population standard deviation (I’m pretty sure he worked more with yeast than he did with beer, by the way). His key observation was that for small samples, there’s uncertainty not only in our estimate of the mean value but also considerable uncertainty in our estimate of the standard deviation. The probability density function of a t distribution looks similar to that of a normal distribution, but t distributions have (among other things) heavier tails, accounting for the uncertainty in the estimate of the standard deviation. As sample sizes increase, the t distribution looks more and more like the normal (Z) distribution; at infinite sample size, the t is the Z. That makes sense, as an infinite sample size means we’re looking at the population.
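To make that convergence concrete, here is a minimal sketch (assuming Python with scipy, which is not part of the original post) that prints the two-sided 95% critical value of the t distribution for increasing degrees of freedom alongside the familiar Z value of about 1.96:

```python
from scipy import stats

# Two-sided 95% critical values: t narrows toward Z as df grows.
for df in (1, 2, 5, 10, 30, 100, 1000):
    print(f"df = {df:>4}: t = {stats.t.ppf(0.975, df):.3f}")
print(f"          Z = {stats.norm.ppf(0.975):.3f}")
```

At 30 degrees of freedom the t value (about 2.04) is already within a few percent of 1.96, which is exactly the "n = 30" effect discussed below.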
Introductory statistics books and classes often discuss the notion of Z-tests and t-tests to compare two sample means. T-tests are appropriate when the sample standard deviation is the best we’ve got as an estimate of the population sigma (actually, you could argue that t tests are always appropriate), while Z-tests are appropriate when the population standard deviation is known. For my own part, the last time I knew the population standard deviation was when it was given to me in a homework problem in an introductory statistics class, so it seems like the t-test is the way to go. Yet, we know there’s more to the story, including the fact that the normal distribution is just so convenient.
Many texts suggest that when the sample size is around 30 you can safely use the normal distribution instead of the t distribution when computing, for example, a confidence interval on the mean (standard deviation unknown, but estimated from the data). An important point to note is that a distribution of sample averages tends toward the normal distribution as the sample size increases (look to the central limit theorem for guidance here). The key, of course, is that we're talking about a distribution formed from average values, not individual values. A large sample size doesn't turn (for example) a lognormal distribution into a normal distribution; it just helps turn a distribution of sample averages from a lognormal distribution into something that approaches a normal distribution.
Keith Bower has written: “Regarding n = 30, I’m fairly sure the prevalence is due to one of Egon Sharpe Pearson’s papers … which runs thru many simulations to assess robustness of t-tests with regard to some non-symmetry in the underlying distribution. I’m fairly sure it was Shewhart who had recommended … to investigate it, in some correspondence between the two. Statisticians should always preach the ‘it depends’ mantra though, as I’m sure Pearson would be the first to agree with.”
Again, having a sample size of 30 does not somehow magically turn the underlying distribution into a normal distribution. If your data are uniformly distributed or lognormally distributed or gamma or beta or whatever, then individual values are modeled quite differently as compared with a normal distribution. What does happen is that for many cases, a sample size of 30 gets you to a point where the difference between using Z and t is relatively small for such things as confidence intervals of the mean. However, it doesn’t guarantee that your estimate of the mean is somehow spot on. Even with a sample size of 30, a 95% confidence interval based on a sample mean of 100 and sample standard deviation of 10 is ~ 96.3 to 103.7 (if you use Z instead of t, the interval would be 96.4 to 103.6).
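The interval arithmetic above is easy to check. Here is a sketch (again assuming scipy; the values 100, 10, and 30 come straight from the text):

```python
import math
from scipy import stats

n, xbar, s = 30, 100.0, 10.0
se = s / math.sqrt(n)                     # standard error of the mean

t_half = stats.t.ppf(0.975, n - 1) * se   # t-based half-width
z_half = stats.norm.ppf(0.975) * se       # Z-based half-width

print(f"t: {xbar - t_half:.1f} to {xbar + t_half:.1f}")   # ~96.3 to 103.7
print(f"Z: {xbar - z_half:.1f} to {xbar + z_half:.1f}")   # ~96.4 to 103.6
```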
By the way, for that same parameter with estimated mean 100 and estimated standard deviation 10, a 95% CI for the capability index Cpk is around 1.00 to 1.67. From a PPM standpoint, that’s around 2700 to about 0.57. To put it nicely, that’s a difference of a few orders of magnitude. In a related vein, Somerville and Montgomery point out that the assumption of normality would lead to an underestimation of 1428 PPM in the case where Cpk is calculated to be 1.00 with the underlying distribution actually t with 30 degrees of freedom.
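For readers who want to reproduce the PPM figures, a sketch follows. It assumes a centered normal process, in which case the nearest specification limit sits 3·Cpk standard deviations from the mean and both tails contribute to the defect rate; this is an illustrative assumption, not the Somerville and Montgomery calculation.

```python
from scipy.stats import norm

def ppm_normal(cpk: float) -> float:
    # Both tails of a centered normal process, scaled to parts per million.
    return 2 * norm.sf(3 * cpk) * 1e6

for cpk in (1.0, 5 / 3):
    print(f"Cpk = {cpk:.2f}: {ppm_normal(cpk):.2f} PPM")
# Cpk = 1.00 -> ~2700 PPM; Cpk = 1.67 -> ~0.57 PPM
```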
My wife used to work for a relatively large computer products manufacturer. She was in a meeting one day when the topic of proportion defective came up. That is, someone started a discussion about what sort of sample size would be needed to detect a particular proportion defective. For illustration, let’s say they were concerned about one particular kind of defect, and assume 1 out of 50 widgets had this issue. What would you infer about this problem if you took a random sample of 5 widgets? 50? 500? 5000? In the meeting, someone suggested that if you could take a ‘perfect’ sample of, say, 50 widgets and you saw that 1 of them had the defect, then you knew that your proportion defective was exactly 2% in the population. That’s an interesting sentiment, but it is essentially meaningless. Sampling isn’t about perfection, it’s about practical ways of dealing with uncertainty.
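A quick way to see why "1 defect in 50 means exactly 2%" is meaningless is to compute an exact (Clopper-Pearson) binomial confidence interval for the same observed defect rate at each of those sample sizes. A sketch, assuming scipy and a true 1-in-50 defect rate:

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, conf: float = 0.95):
    # Exact binomial confidence interval for k defects in n trials.
    a = (1 - conf) / 2
    lo = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - a, k + 1, n - k) if k < n else 1.0
    return lo, hi

for n in (50, 500, 5000):
    k = n // 50                      # one defect per 50 widgets observed
    lo, hi = clopper_pearson(k, n)
    print(f"{k}/{n}: 95% CI {lo:.4%} to {hi:.4%}")
```

With 1 defect in 50, the interval runs from roughly 0.05% to over 10%; only with thousands of samples does it tighten usefully around 2%.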
The museums of the Ministry of Culture and Sports expanded their collections during 2018 with the acquisition of 1,000 new cultural assets valued at 855,000 euros.
These are mostly letters, designs, drawings, and lithographs that now form part of the documentary heritage kept in these institutions. The sixteen centers under the Department also received 421 assets from private donations, valued at close to 200,000 euros.
The 10 most notable works can be viewed starting today on the Culture website (New Acquisitions 2018), along with a document with information about the collections of the Ministry's museums during the past year (Our Museums at a Glance: Collections).
Among the purchase highlights is San José with the Child Jesus by Pedro de Mena, a polychrome wooden sculpture of the seventeenth century, which was the most valuable piece acquired by Culture for its museums in 2018 (€150,000). It has been incorporated into the permanent collection of the National Sculpture Museum in Valladolid, where it will round out the museum's holdings of the Granada-born sculptor.
Another of the most important assets is a four-escudo coin of Felipe V, from the last series of gold coins struck at the Cuenca mint in 1725, of which only four examples are known; one is already preserved in the numismatic collection of the Museo Arqueológico Nacional, the museum that will exhibit this new acquisition.
In the field of design, Culture has purchased for the National Museum of Decorative Arts a prototype of the armchair Fauteuil B301 (1928-1929), created by Le Corbusier, Pierre Jeanneret and Charlotte Perriand, an icon of modern design.
Among the documentary collections, the acquisition of a letter from Joaquín Sorolla to the ceramist Ruiz de Luna stands out. The letter is the painter's response (he was a great connoisseur and collector of ceramics) to another letter, preserved in the archive of the Sorolla Museum, in which he was invited to visit the Talavera ceramics factory owned by the company Ruiz de Luna, Guijo y Cía.
The Costume Museum has received almost half of the donations
The museums of Culture have also received 421 donations, of which 186, 44% of the total, have been allocated to the Costume Museum-Ethnographic Heritage Research Center (CIPE), followed by the Museum of America (121).
Among the donated works, the portraits of Pilar Ordeig and Jacinto Galaup (1845) by Rafael Tegeo stand out; they have been assigned to the Museum of Romanticism and were recently shown in the monographic exhibition dedicated to the painter.
The Costume Museum-CIPE has received an album of stickers from the 1967-1968 league championship, which has been integrated into the museum's collections; these aim to recover and preserve the collective memory of Spanish society through the acquisition of assets that are part of everyday life.
More than 1,500 assets were part of 130 temporary exhibitions
The museums of the Ministry of Culture and Sports lent more than 1,500 works in 2018, which traveled to 130 exhibitions, 100 in Spain and 30 abroad. In addition, more than 2,000 works were lent to form part of the temporary exhibitions organized throughout the year.
Privacy is an important feature in today’s software, and DevOps teams should be focused on it.
Privacy by Design, Privacy Enhancing Technology and Privacy by Default are all terms associated with privacy and software. What do they mean, and how do they affect DevOps?
I’ll explain them all in a three-part blog series. This is the first part, which focuses on Privacy By Design.
Privacy by Design, what is it?
Simply put, Privacy by Design (PbD) is an approach to systems engineering which takes privacy into account throughout the whole engineering process. The concept first emerged in a joint report on “Privacy-enhancing technologies,” produced in 1995 by a working group from the Information and Privacy Commissioner of Ontario, Canada, the Dutch Data Protection Authority, and the Netherlands Organisation for Applied Scientific Research.
So it is not a new term. Although the concept has been circulating in the EU for more than 20 years, it will not be enshrined in legislation until next year; alas, European legislation takes its time. The EU General Data Protection Regulation (GDPR) will come into force in May 2018.
‘Privacy by Design is an approach to systems engineering which takes privacy into account throughout the whole engineering process’
With the GDPR approaching, PbD is very important in today's software development. How can DevOps teams address it?
Privacy by Design Principles and DevOps
Every approach has its principles.
For PbD, these are:
1 Proactive not reactive; also, preventative not remedial
2 Privacy as the default setting
3 Privacy embedded into design
4 Full functionality: positive-sum, not zero-sum
5 End-to-end security: full lifecycle protection
6 Visibility and transparency: keep it open
7 Respect for user privacy: keep it user-centric
These principles are, as you can see, closely related to information security, but they are still ambiguous. This can give DevOps teams a tough time creating user stories. That's why parties like OWASP try to provide PbD guidelines that can be put into practice.
The top 10 privacy risks OWASP recognizes are:
P1 Web application vulnerabilities
P2 Operator-sided data leakage
P3 Insufficient data breach response
P4 Insufficient deletion of personal data
P5 Non-transparent policies, terms and conditions
P6 Collection of data not required for the primary purpose
P7 Sharing of data with third party
P8 Outdated personal data
P9 Missing or insufficient session expiration
P10 Insecure data transfer
These risks sound less ambiguous than the seven PbD principles, and they can be incorporated as user stories in every DevOps backlog, as the sketch below illustrates for one of them.
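As one hedged illustration of turning a risk into code, consider P9 (missing or insufficient session expiration). The sketch below uses only the Python standard library; the names and the 15-minute default are hypothetical choices, not a prescription:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

SESSION_TTL = timedelta(minutes=15)  # conservative default (principle 2: privacy as the default)

@dataclass
class Session:
    user_id: str
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def is_expired(self) -> bool:
        return datetime.now(timezone.utc) - self.created > SESSION_TTL

def require_session(session: Session) -> None:
    # Fail closed: reject expired sessions instead of silently extending them.
    if session.is_expired():
        raise PermissionError("Session expired; please authenticate again")
```

An acceptance test asserting that require_session rejects a stale session makes the privacy requirement verifiable in the pipeline, which is the whole point of translating OWASP's risks into backlog items.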
Privacy by Design and Tooling
I could simply give you a list of DevOps tools that address PbD, but that's not the goal of this article.
As a DevOps team member, it is crucial that you know both what PbD is and how you can implement it in your daily work. Guidelines like those from OWASP can help. Once you follow their principles and guidelines, you can choose a suitable tool.
Wrap-Up
Privacy by Design is a systems engineering approach which takes privacy into account throughout the whole engineering process.
Due to new privacy regulation like the GDPR, it is very important for DevOps teams to incorporate its principles into their daily work.
OWASP guidelines can be used for this, including the selection of a suitable tool.
Human Population Growth is Accelerating Species Change. “We are more different genetically from people living 5,000 years ago than they were different from Neanderthals.” John Hawks, University of Wisconsin anthropologist.
In a fascinating discovery that counters a common theory that human evolution has slowed to a crawl or even stopped in modern humans, a study examining data from an international genomics project describes the past 40,000 years as a time of supercharged evolutionary change, driven by exponential population growth and cultural shifts.
Thanks to stunning advances in sequencing and deciphering DNA in recent years, scientists had begun uncovering, one by one, genes that boost evolutionary fitness. These variants, which emerged after the Stone Age, seemed to help populations better combat infectious organisms, survive frigid temperatures, or otherwise adapt to local conditions.
The findings may lead to a very broad rethinking of human evolution, especially in the view that modern culture has essentially relaxed the need for physical genetic changes in humans to improve survival.
A team led by University of Wisconsin-Madison anthropologist John Hawks estimated that positive selection just in the past 5,000 years alone — dating back to the Stone Age — has occurred at a rate roughly 100 times higher than in any other period of human evolution. Many of the new genetic adjustments are occurring around changes in the human diet brought on by the advent of agriculture, and resistance to epidemic diseases that became major killers after the growth of human civilizations.
“In evolutionary terms, cultures that grow slowly are at a disadvantage, but the massive growth of human populations has led to far more genetic mutations,” says Hawks. “And every mutation that is advantageous to people has a chance of being selected and driven toward fixation. What we are catching is an exceptional time.”
While the correlation between population size and natural selection is nothing new — it was a core premise of Charles Darwin, Hawks says — the ability to bring quantifiable evidence to the table is a new and exciting outgrowth of the Human Genome Project.
In the hunt for recent genetic variation in the genome map, the project has cataloged the individual differences in DNA called single nucleotide polymorphisms (SNPs). The project has mapped roughly 4 million of the estimated 10 million SNPs in the human genome.
John Hawks' studies include trying to make sense of genetic fragments from different populations, and of anthropological bone and tooth specimens, to show how humans have evolved during the past 30,000 years. He attempts to integrate that knowledge with data from archeology and the historical record. Hawks' research focuses on a phenomenon called linkage disequilibrium (LD): places on the genome where genetic variations occur more often than can be accounted for by chance, usually because these changes afford some kind of selection advantage.
How This Special Olympics Athlete Races Past Expectations
After all, he was in Albuquerque, running at elevation alongside some of the top middle distance runners in the country. Plus, it was snowing and cold.
But when the group reached the turnaround point, he said he wanted to keep going.
The 36-year-old from Woodinville, Washington, has autism spectrum disorder and has heard a chorus of “no’s” and “he can’t” throughout his life. But, thanks to running, he’s racked up an impressive list of accomplishments, finishing 30 marathons — including nine Boston Marathons.
He ended up running for two hours in Albuquerque with the Brooks Beasts Track Club as part of their training camp earlier this year.
“He’s very disciplined in his approach to training. He has intention behind what he does and structure to what he does,” says Danny Mackey, head coach of the Beasts. “If you ask him to do something, he just does it. The guy can work hard for a long time.”
It’s this head-down, grind-it-out work ethic and up-for-anything attitude that Mackey believes has made Bryant a successful athlete and will help him excel at the 2018 Special Olympics USA Games in July.
Bryant is one of Brooks’ newest additions to their roster and one of two Special Olympic athletes sponsored by the running footwear and apparel company. He first started running with his mother, Colleen, and late stepfather and later joined the high school track team despite the coach’s initial reservations that he might not keep up.
Andy Bryant. Photo courtesy of Brooks.
Not only did he run well, thanks to his natural gift for endurance, but everything about running seemed to lift him up spiritually — the workouts, competition, and community of runners. Running was his ticket to being able to communicate, to feel accepted and to be part of a group, Colleen says.
“One of the major issues with Andy’s disability is the ability to connect and make friendships,” Colleen says, adding that it can be hard at times to have a conversation or make eye contact.
“With running, he was able to find a way to communicate,” she says. “Even though he couldn’t function well intellectually in the classroom and had literacy issues, he could run and talk about running.”
Plus, there’s something about the way sport breaks down barriers and creates connections. Under normal circumstances, gathering Bryant together with 12 professional runners might feel forced and awkward, but the shared interest and talent for the sport has provided a common bond between Bryant and the other members of the Brooks Beasts team.
“As soon as you bring sport into it, all of a sudden you have this guy who’s just training with them and they’re playing games on a Friday night. It’s normal, natural, and authentically fun,” Mackey says. “How else do you get that sort of community except through running?”
Bryant works hard at his craft, too. He says his favorite distance is the marathon because it’s challenging and “you do a lot of training.” He plans on running his 10th Boston Marathon in 2019. And he’s also thinking of testing his endurance and signing up for an ultra.
Mackey said that his elite athletes have been inspired by how tough Bryant is and his passion for running.
He’s slated to run the 3,000-meter and 10,000-meter races for Team Washington at the USA Games.
As for his chances for medaling?
“He’s at least going to medal,” says Mackey. “I would be surprised if he doesn’t win.”
Introduction
============
Work-related musculoskeletal disorders (MSDs) are prevalent world-wide and associated with long-term pain and physical disability.[@b1-ehi-suppl.1-2014-001],[@b2-ehi-suppl.1-2014-001] Risk factors for such MSDs include forceful actions against large loads, repetitive movements, and awkward posture.[@b3-ehi-suppl.1-2014-001],[@b4-ehi-suppl.1-2014-001] An interaction between force and repetition exists such that repetition results in moderate increases in MSD risk for low-force tasks and elevated risk for high-force tasks.[@b3-ehi-suppl.1-2014-001] While exercise composed of muscle contractions against external resistance (ie, resistance training) can be adaptive and can improve the condition of muscle, inappropriate resistance training results in maladaptation characterized by diminished performance and the onset of a contraction-induced MSD.[@b5-ehi-suppl.1-2014-001] Extensive investigation of regimes to promote contraction-induced muscle adaptation is difficult to perform for human subjects largely because of the limitations in sampling tissue.
To overcome the limitations inherent in human studies, an experimental animal model was developed almost four decades ago.[@b6-ehi-suppl.1-2014-001] Cats were operantly conditioned to perform weight-lifting training (ie, wrist-flexion of their right paw) for food reward. Since that time, several investigators have examined other voluntary resistance training models that have included rearing up while wearing weighted jackets[@b7-ehi-suppl.1-2014-001],[@b8-ehi-suppl.1-2014-001] or lifting of a weighted ring within a vertical tube.[@b9-ehi-suppl.1-2014-001] Using such models, investigators have studied such topics as the extent of inflammation, molecular pathways of muscle hypertrophy, and the role of stem cells.[@b10-ehi-suppl.1-2014-001]--[@b13-ehi-suppl.1-2014-001] In a number of animal studies of chronic voluntary resistance training, morphological analysis was evaluated and a training-induced increase in muscle fiber number reported.[@b7-ehi-suppl.1-2014-001],[@b8-ehi-suppl.1-2014-001],[@b11-ehi-suppl.1-2014-001],[@b14-ehi-suppl.1-2014-001]--[@b16-ehi-suppl.1-2014-001] Despite these advances, assessment of reactive forces, a precise measure of performance, was not implemented. In addition, investigation of resistance training of animal models has been limited in that muscle morphology has not been assessed during a distinctive phase of training -- when performance gains precede muscle hypertrophy. A common observation in human subjects is that in the initial phase of resistance exercise training, performance improves in the absence of muscle hypertrophy.[@b17-ehi-suppl.1-2014-001]--[@b19-ehi-suppl.1-2014-001] Both neural factors (eg, motor unit discharge rate and synchronization) and muscular factors (eg, myofilament density and connective tissue) have been investigated for their potential role in initial strength gains, but the roles of these factors are not fully characterized.[@b17-ehi-suppl.1-2014-001],[@b20-ehi-suppl.1-2014-001]--[@b26-ehi-suppl.1-2014-001] Consequently, the motivation to explore factors for early muscle adaptation other than those already investigated still persists.
To address the limitations in research regarding animal models for resistance training, our group developed a custom-designed weight-lifting apparatus for operantly conditioned rats that includes a force plate for direct measure of reactive forces. The methods and components of this novel apparatus were reported previously.[@b27-ehi-suppl.1-2014-001] In that article, some experimental data were also reported, such as the observation of increased mass of various lower hindlimb muscles after two months of resistance training with a 700 g load. However, the report did not describe the effect of training on reactive forces. In addition, investigation of the training phase when strength gains precede muscle hypertrophy remained for future research. With this in mind, the purpose of the present study was to identify the alterations in performance and muscle morphology one month following training with two different loading regimes. We tested the hypothesis that load-dependent muscle adaptation is characterized by increased muscle fiber number in histological muscle sections and heightened reactive forces before increased muscle mass. The agonist soleus (SOL), medial gastrocnemius (MG), lateral gastrocnemius (LG), and plantaris (PL) muscles and the antagonist tibialis anterior (TA) muscle were excised and analyzed. Results from the present study support the hypothesis and establish a load-dependent adaptation in performance and muscle morphology in a volitional animal model of resistance training. These findings provide valuable insights regarding the loading and morphological features to consider for resistance training-induced muscle adaptation -- insights potentially important for addressing MSDs in terms of prevention and rehabilitation.
Methods
=======
Rats
----
A total of 24 male Sprague-Dawley (Hla:(SD)CVF, Hilltop, Scottdale, PA) rats, three to four months of age, were randomly assigned to three groups differing in training conditions: 700 g load training, 70 g load training, or cage controls. The rats were trained using operant conditioning procedures and a food reward, 45 mg pellets (Noyes Formula P; Research Diets). To ensure rats were hungry and to maintain food as a reinforcer, food was restricted to keep body mass at 80% of ad libitum mass, a standard target body mass in behavioral research.[@b28-ehi-suppl.1-2014-001] All animal procedures were approved by the Animal Care and Use Committee at the National Institute for Occupational Safety and Health (NIOSH) in Morgantown, WV.
Training
--------
Preparation for training and the training protocol were described in detail previously.[@b27-ehi-suppl.1-2014-001] Before training, each rat became accustomed to the operant chamber, which was modified with a custom-designed weight-lifting apparatus. The apparatus was designed for squat-type training and consisted of a vertical tube containing a ring assembly (ie, a yoke that moved along two vertical shafts) and a force plate at the base.[@b27-ehi-suppl.1-2014-001] The pre-training period (five days per week for three to five weeks) prepared the rat to enter the tube, stand on its hindlimbs, place its nose in the ring assembly, push a weighted ring assembly vertically until activating the nose-poke at the top of the tube, and retrieve a food reward (two pellets for each full lift).
Training sessions were done with a light load (70 g) or a heavier load (700 g, which is approximately two times the average body weight (BW)). The training terminated after 100 full lifts (lifts with successful nose pokes) or a fixed time limit, whichever occurred first. The time limit was 30 minutes for the 70 g load group and 60 minutes for the 700 g load group. Rats exposed to 70 g load training tended to achieve 100 full lifts before reaching the time limit, whereas the rats exposed to 700 g load training tended to reach the time limit having achieved ∼80 full lifts.[@b27-ehi-suppl.1-2014-001] One training session was performed each day with a training schedule of five days per week for one month. During each session, peak reactive force was determined for each lift regardless if the lift was partial or full. The mean peak reactive force of each session was recorded. Afterward, the mean of these values for the initial five sessions and the final five sessions of training was calculated for each rat and statistically analyzed. To determine whether transient effects of edema or inflammation were present closely following the training regimes, half of the rats in each training regime were euthanized within a week (four to seven days) post-training. To characterize muscle well after any potential transient inflammatory responses, the remaining trained rats were euthanized 14 days after the last training session. Since no overt edema or inflammatory response was observed within one week following the last training session and no effect of time was observed in the analyses of muscle fiber number or size used for this study, data from muscles 4--14 days post-training were pooled for each group.
Histology
---------
SOL, MG, LG, PL, and TA muscles were removed by dissection, weighed, covered with tissue freezing media, and frozen in cold isopentane at −80 °C. The mid belly of each muscle was cryosectioned at 10 μm thickness. These transverse sections of muscles were stained with hematoxylin and eosin. During histological analysis, each slide was coded to prevent the investigator from knowing to which group each slide belonged.
Quantitative morphology
-----------------------
Histological analysis was performed by a standardized stereological method that has been utilized and described by our group in previous publications.[@b29-ehi-suppl.1-2014-001]--[@b33-ehi-suppl.1-2014-001] Points of a 121-point 11-line overlay graticule (0.04 mm^2^ square with 100 divisions) were evaluated at 40× magnification. On either side (by 1 mm) of the midpoint of the midsection, stereological analysis was systematically repeated at five equally spaced sites across the muscle section. As 121 points were evaluated in 10 fields, a total of 1210 points were analyzed per muscle section. Percent of muscle tissue for degenerative muscle fibers, non-degenerative muscle fibers, and centrally nucleated muscle fibers was computed as the percentage of points that overlaid the type of muscle fibers of interest relative to the total number of points. Three criteria for degenerative muscle fibers were as follows: (1) those that lost contact with surrounding fibers, (2) cellular infiltrates interdigitating the sarcolemma, and (3) cellular infiltrates internal to the muscle fiber.[@b29-ehi-suppl.1-2014-001] If any of these criteria were not met, the fiber was considered non-degenerative. Centrally nucleated fibers were considered to be any fibers with at least one internal nucleus not in contact with the sarcolemma. Percent of muscle tissue for cellular interstitium and non-cellular interstitium was computed as the percentage of points that overlaid the type of interstitium of interest relative to the total number of points. Points that overlaid nuclei in sites between muscle fibers were counted as cellular interstitium. Points that overlaid regions between muscle fibers but did not directly overlay nuclei were counted as non-cellular interstitium.
For MG, LG, PL, and TA muscles, in addition to evaluating points of the overlay graticule at each of sites sampled per section, we evaluated the number of fibers within the boundary of the graticule. These values were used to estimate mean muscle fiber area and the number of fibers per unit cross-sectional area. A muscle fiber was counted when the topmost point of the fiber was within the outermost boundary of the graticule. As the sections consisted almost exclusively of non-degenerative muscle fibers and degenerative fibers lacked distinct borders, degenerative muscle fibers were not counted. The number of fibers per unit cross-sectional area (number of fibers per square millimeter) was calculated as the total number of fibers counted divided by the total area sampled over 10 regions (ie, 0.4 mm^2^). Because of the small size of SOL muscles, total counts of all the muscle fibers in each section were feasible to perform. These counts were divided by the area of each section to determine fiber number per unit cross-sectional area directly. Mean muscle fiber area (μm^2^) was determined by dividing the percent tissue fraction of non-degenerative muscle fibers by the fiber number per unit area.
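The arithmetic behind these estimates can be summarized in a short sketch. The counts below are hypothetical and for illustration only; they are not data from the study:

```python
# Stereology bookkeeping for one muscle section (hypothetical counts).
POINTS_PER_FIELD, FIELDS = 121, 10
GRATICULE_AREA_MM2 = 0.04

total_points = POINTS_PER_FIELD * FIELDS      # 1210 points evaluated
points_on_fibers = 1150                       # points over non-degenerative fibers
fibers_counted = 180                          # fibers counted across all fields
area_sampled_mm2 = GRATICULE_AREA_MM2 * FIELDS

fiber_fraction = points_on_fibers / total_points             # tissue fraction of fibers
fibers_per_mm2 = fibers_counted / area_sampled_mm2           # fiber number per unit area
mean_fiber_area_um2 = fiber_fraction / fibers_per_mm2 * 1e6  # mm^2 -> um^2

print(f"fiber fraction:  {fiber_fraction:.1%}")
print(f"fibers per mm^2: {fibers_per_mm2:.0f}")
print(f"mean fiber area: {mean_fiber_area_um2:.0f} um^2")
```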
Statistical analyses
--------------------
Peak reactive force data were analyzed with a two-way repeated measures ANOVA with Student--Newman--Keuls post hoc tests. Data from the histological analysis were analyzed using one-way ANOVA with Student--Newman--Keuls post hoc tests. If normality or equal variance could not be assumed, the data were analyzed by Kruskal--Wallis one-way ANOVA by ranks with the Wilcoxon method of post hoc tests. All data are shown as means ± SEM. *P* \< 0.05 was considered statistically significant.
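For readers wanting to replicate the omnibus tests, the sketch below applies the same pair of tests to made-up fiber counts for the three groups (scipy provides both; the Student-Newman-Keuls post hoc procedure is not in scipy, so only the omnibus step is shown):

```python
import numpy as np
from scipy import stats

# Hypothetical fiber counts per section for the three groups (n = 8 each).
rng = np.random.default_rng(0)
control = rng.normal(2800, 150, 8)
load_70 = rng.normal(2950, 150, 8)
load_700 = rng.normal(3300, 150, 8)

f, p = stats.f_oneway(control, load_70, load_700)       # one-way ANOVA
h, p_kw = stats.kruskal(control, load_70, load_700)     # rank-based fallback
print(f"ANOVA F = {f:.2f}, p = {p:.4f}")
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_kw:.4f}")
```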
Results
=======
Peak reactive force during each lift was used to assess performance. In general, greater peak reactive forces were required during 700 g load training compared with those during 70 g load training ([Fig. 1](#f1-ehi-suppl.1-2014-001){ref-type="fig"}). When comparing the peak reactive forces early in training with those late in training, an enhancement of 21 ± 6% was only observed for 700 g load training. Therefore, one month of training was sufficient to improve performance when exposed to a moderately heavy external load.
The performance gain was not accompanied by increased muscle mass, thereby excluding muscle mass as a factor for the adaptation ([Table 1](#t1-ehi-suppl.1-2014-001){ref-type="table"}, [Figure 2](#f2-ehi-suppl.1-2014-001){ref-type="fig"}). To investigate muscle morphology, muscle transverse sections were evaluated. No training effect was observed in the percentage of muscle tissue composed of non-degenerative muscle fibers, degenerative muscle fibers, or centrally nucleated fibers ([Table 2](#t2-ehi-suppl.1-2014-001){ref-type="table"}, [Figure 3](#f3-ehi-suppl.1-2014-001){ref-type="fig"}, [Supplementary Figure 1A--C](#s1-ehi-suppl.1-2014-001){ref-type="supplementary-material"}). This was also consistent with the absence of changes in the cellular and non-cellular interstitium, indicating a lack of edema and the absence of accumulated connective tissue ([Table 2](#t2-ehi-suppl.1-2014-001){ref-type="table"}, [Figure 3](#f3-ehi-suppl.1-2014-001){ref-type="fig"}, [Supplementary Figure 1A--C](#s1-ehi-suppl.1-2014-001){ref-type="supplementary-material"}). Overall, these results indicate that chronic cycles of degeneration/regeneration did not accompany training.
Despite the lack of a training effect on muscle mass, training had a significant effect on muscle fiber size and number. Following the 700 g load training, mean muscle fiber area decreased by 15% for SOL muscles and 21% for TA muscles relative to control values ([Fig. 4](#f4-ehi-suppl.1-2014-001){ref-type="fig"}). This decrease in muscle fiber size was accompanied by an increase in muscle fiber number per unit area -- 16% for SOL muscles and 23% for TA muscles relative to control values ([Fig. 5](#f5-ehi-suppl.1-2014-001){ref-type="fig"}). Whole muscle section areas were not significantly different between 700 g load training, 70 g load training, and cage control conditions (SOL muscles -- 12.8 ± 0.3 mm^2^, 12.3 ± 0.4 mm^2^, and 13.1 ± 0.5 mm^2^, respectively, *P* value = 0.45; TA muscles --47.7 ± 1.3 mm^2^, 47.0 ± 1.3 mm^2^, and 51.2 ± 1.8 mm^2^, respectively, *P* value = 0.16). Therefore, the increased fiber number per unit area indicated an increase in total fiber number per muscle section. Because of the small size of SOL muscles, total counts of all muscle fibers in each section were feasible to perform. Following 700 g load training, the total number of fibers per muscle section increased 18% relative to control muscles and 11% relative to 70 g load training, *P* \< 0.05 for both comparisons ([Fig. 6](#f6-ehi-suppl.1-2014-001){ref-type="fig"}).
Discussion
==========
A multitude of reports and reviews exist in the scientific literature regarding exercise as an intervention to reduce work-related MSDs.[@b34-ehi-suppl.1-2014-001],[@b35-ehi-suppl.1-2014-001] While resistance exercise in general appears to alleviate MSDs, novel insights into muscle adaptation and consensus on exercise prescription have been difficult to ascertain. This is largely because of the methodological complexities that arise when investigating a human population.[@b34-ehi-suppl.1-2014-001],[@b35-ehi-suppl.1-2014-001] Because of these complexities, investigators have worked to develop animal models of volitional resistance exercise.[@b6-ehi-suppl.1-2014-001]--[@b9-ehi-suppl.1-2014-001] The present investigation advances the research concerning volitional animal models of resistance exercise by demonstrating training-induced adaptation by the assessment of reactive forces and the observation of distinct alterations at the muscle fiber level.
The main findings include the observation of strength gain exclusively following the heavier training load (ie, 700 g). Also, improvements in performance preceded muscle hypertrophy -- a finding consistent with studies regarding human subjects.[@b17-ehi-suppl.1-2014-001]--[@b19-ehi-suppl.1-2014-001] In one month, 700 g load training induced an increase in peak reactive force of 21% despite unaltered muscle masses for the major muscles of the hindlimb. Previous work demonstrated that increased muscle mass occurs later -- at two months of 700 g load training.[@b27-ehi-suppl.1-2014-001] Despite the lack of muscle hypertrophy in the present study, alterations occurred at the muscle fiber level. Histological analysis indicated a 16 and 23% increase in muscle fiber number and a 15 and 21% decrease in mean muscle fiber area for SOL and TA muscles, respectively. Our results demonstrate early strength gains in a volitional animal model of resistance training and establish a synchronization between muscle fiber number and performance gains independent of alterations in muscle mass.
A training-induced increase in muscle fiber number has been observed in several investigations of voluntary models of animal resistance training.[@b7-ehi-suppl.1-2014-001],[@b8-ehi-suppl.1-2014-001],[@b14-ehi-suppl.1-2014-001]--[@b16-ehi-suppl.1-2014-001] Consistent with the present study, a training-induced effect on muscle fiber number has been noted previously for SOL muscles.[@b15-ehi-suppl.1-2014-001] Adult (19 months of age) rats were trained progressively in squat-like exercise for approximately 10 lifts per day, four days per week for 20 weeks (reaching ∼800 g external load toward the end of the training).[@b9-ehi-suppl.1-2014-001],[@b15-ehi-suppl.1-2014-001] Muscle mass increased by 22% and fiber number increased by 14%, muscle adaptations consistent with several reports regarding humans.[@b15-ehi-suppl.1-2014-001],[@b36-ehi-suppl.1-2014-001]--[@b40-ehi-suppl.1-2014-001] Despite these intriguing findings, these studies were limited because muscle morphology was analyzed only after muscle hypertrophy was achieved.[@b7-ehi-suppl.1-2014-001],[@b8-ehi-suppl.1-2014-001],[@b14-ehi-suppl.1-2014-001]--[@b16-ehi-suppl.1-2014-001] Consequently, these morphological alterations were associated with the muscle hypertrophic response and did not address whether they might occur before muscle hypertrophy or influence performance.
In the present study, we demonstrate that increased fiber number occurs before increased muscle mass following 700 g load training. This was possible because the increased fiber number was countered by a shift to smaller muscle fiber sizes. Such a shift in fiber size with resistance training has been observed previously in animal studies where muscle hypertrophy had already been reached.[@b7-ehi-suppl.1-2014-001],[@b16-ehi-suppl.1-2014-001] Overall, the implication is that increased muscle fiber size is not required for either early or late muscle adaptation. With 70 g load training, muscle fiber size and number were unaltered. These results are consistent with the reactive forces characteristic of rats training with 70 and 700 g loads. Even early in training, 700 g load training consisted of peak reactive forces that were twofold greater than those generated during 70 g load training. This implies that the significant increases in fiber number with 700 g load training were because of high reactive forces necessary for handling a heavy external load.
The results indicated that increases in fiber number also occurred for the TA muscle exclusively following 700 g load training. Biomechanical analysis of human subjects demonstrates that the TA muscle, an antagonist for ankle plantarflexion, is activated during squats.[@b41-ehi-suppl.1-2014-001] Such activation stabilizes the hindlimb and assists descent in controlled squats or squat-related movements.[@b41-ehi-suppl.1-2014-001],[@b42-ehi-suppl.1-2014-001] This control was, perhaps, most important during 700 g load training of the present study because of the higher descent velocities inherent to that training compared with 70 g load training.[@b27-ehi-suppl.1-2014-001] Therefore, the need for precise control was likely heightened and, consequently, required increased activation of the predominantly type II dorsiflexor TA muscle. These results highlight the importance of evaluating antagonist muscles in volitional training studies -- a practice rarely done for animal models.[@b7-ehi-suppl.1-2014-001],[@b8-ehi-suppl.1-2014-001],[@b14-ehi-suppl.1-2014-001]--[@b16-ehi-suppl.1-2014-001]
Our group previously reported that gastrocnemius (GTN) muscles also increase mass after two months of 700 g training.[@b27-ehi-suppl.1-2014-001] This occurred despite the absence of a training effect at the muscle fiber level after the shorter duration of training in the present study. In the previous report, the increased muscle mass for GTN muscles (∼5%) was half that of SOL muscles (∼10%). This implies that GTN muscles may have experienced a muted response compared with SOL muscles. The decreased responsiveness for GTN muscles is not surprising given the external load tested. That is, 700 g is twofold greater than the BW of the rats -- a moderate external load because rats are able to voluntarily train with three times their BW.[@b43-ehi-suppl.1-2014-001] In addition, the load was fixed so that training was not progressive, implying the load was perceived as submaximal during the weeks of training. Therefore, compared with maximal or near-maximal loads, the expectation for lifting in the present study is that full neuromuscular recruitment of muscles with predominantly type II muscle fibers (eg, GTN muscles) was not required. Rather, in this case, plantarflexion was more dependent on the predominantly type I SOL muscle. Additionally, it should be noted that the GTN muscle is a biarticular muscle (ie, crosses and acts upon two joints). Consequently, the different movement dynamics of such a muscle during each lift may have been a factor in the dissimilar response. If this is true, this would suggest that training-induced fiber number increases may be dependent on the muscle-to-joint relationship during each movement of resistance training, a possible factor to consider when designing and interpreting studies.
An explanation for how the morphological alterations observed in the present study originated and how such alterations potentially improved muscle performance could not be determined because of restricted availability of tissue. A limited amount of tissue was collected, sectioned, stained with hematoxylin and eosin, and stored for analysis in the present study. With a more expansive collection of tissue for analysis in future research concerning this animal model, several mechanisms could be explored further. For example, the mechanism for increased fiber number, fiber splitting or *de novo* fiber formation, could be determined.[@b13-ehi-suppl.1-2014-001] In addition, the mechanism by which the morphological alterations potentially contributed to strength gains could be established. One such mechanism to consider is whether the remodeling inherent in increasing muscle fiber number also induces remodeling at the neural and connective tissue levels -- two adaptations previously proposed to influence early strength gains.[@b20-ehi-suppl.1-2014-001]--[@b24-ehi-suppl.1-2014-001],[@b44-ehi-suppl.1-2014-001] Another mechanism to consider is whether a diminished metabolic/diffusion gradient inherent in small muscle fibers improves force development during prolonged training such as that tested in the present model (rats were exposed to a high number of lifts, ∼100 per session). Aside from a diminished diffusion gradient, small muscle fibers have a high sarcolemmal to cytoplasmic volume ratio, a feature that may improve lateral force transmission and increase sarcomere length homogeneity during contractions.[@b45-ehi-suppl.1-2014-001],[@b46-ehi-suppl.1-2014-001] Regardless of the particular mechanism involved, the finding of the present study that alterations in muscle fiber number and size coincide with early strength gains before increased muscle mass suggests the possibility that these morphological alterations directly influence performance.
The findings of the present investigation provide motivation to further research fiber number modulation in human studies. Indirect evidence regarding body builders indicates that training-induced increases in muscle fiber number are possible.[@b37-ehi-suppl.1-2014-001]--[@b40-ehi-suppl.1-2014-001] Such an adaptation has obvious benefits to conditions associated with muscle fiber loss such as aging,[@b47-ehi-suppl.1-2014-001],[@b48-ehi-suppl.1-2014-001] muscular dystrophy,[@b49-ehi-suppl.1-2014-001],[@b50-ehi-suppl.1-2014-001] and accident-related denervation.[@b51-ehi-suppl.1-2014-001] An encouraging finding from the present study is that chronic cycles of widespread degeneration/regeneration are neither inherent nor required, for adaptation to resistance training.[@b7-ehi-suppl.1-2014-001],[@b16-ehi-suppl.1-2014-001],[@b52-ehi-suppl.1-2014-001] This is in agreement with the notion that severe inflammation and muscle fiber degeneration follow acute extreme contractions associated with strains but are absent after acute bouts of stretch-shortening contractions modeled after customary resistance-type training.[@b52-ehi-suppl.1-2014-001] As a consequence, MSDs may be avoided by scientifically based training regimes where such training has the potential to benefit those persons with compromised conditions. Indeed, in one study regarding experimental animals, voluntary resistance training decreased age-related muscle fiber loss.[@b15-ehi-suppl.1-2014-001] In addition to the potential to slow fiber number reduction in several detrimental conditions, the present work offers a rationale to investigate muscle fiber number alteration further -- as a process that may benefit healthy individuals such as athletes/performers (eg, wrestlers, olympic-style weightlifters, ballet dancers) pursuing performance gains without directly increasing muscle bulk. If this notion is supported, fiber number modulation would provide an alternative goal for accelerating early strength gains induced by resistance training.
The authors thank Robert R. Mercer of the NIOSH for his critical comments regarding stereological analyses.
**SUPPLEMENT:** Occupational Health and Industrial Hygiene
**ACADEMIC EDITOR:** Timothy Kelley, Editor in Chief
**FUNDING:** This study was supported by internal NIOSH funds.
**COMPETING INTERESTS:** Authors disclose no potential conflicts of interest.
This paper was subject to independent, expert peer review by a minimum of two blind peer reviewers. All editorial decisions were made by the independent academic editor. All authors have provided signed confirmation of their compliance with ethical and legal obligations including (but not limited to) use of any copyrighted material, compliance with ICMJE authorship and competing interests disclosure guidelines and, where applicable, compliance with legal and ethical guidelines on human and animal research participants. Provenance: the authors were invited to submit this paper.
**Author Contributions**
EPR, GRM, OW, and BAB conceived and designed the experiments. EPR, RDC, OW, and BAB analyzed the data. EPR and BAB wrote the first draft of the manuscript. EPR, GRM, RDC, OW, and BAB agreed with the manuscript results and conclusions. EPR, GRM, RDC, OW, and BAB jointly developed the structure and arguments for the paper. EPR, GRM, RDC, OW, and BAB made critical revisions and approved the final version. All the authors reviewed and approved the final manuscript.
**Publication Disclaimer**
The findings and conclusions in this report are those of the authors and do not necessarily represent the views of the NIOSH.
Supplementary Data
==================
######
**Supplementary Figure 1A.** Low magnification (10X) image of a transverse section of SOL muscle following cage control conditions.
**Supplementary Figure 1B.** Low magnification (10X) image of a transverse section of SOL muscle following 70 g load training.
**Supplementary Figure 1C.** Low magnification (10X) image of a transverse section of SOL muscle following 700 g load training.
{#f1-ehi-suppl.1-2014-001}
{#f2-ehi-suppl.1-2014-001}
{#f3-ehi-suppl.1-2014-001}
{#f4-ehi-suppl.1-2014-001}
{#f5-ehi-suppl.1-2014-001}
{#f6-ehi-suppl.1-2014-001}
######
Body weights and muscle masses for cage control and training conditions.
CONTROL 70 g LOAD 700 g LOAD
----------------- ----------- ----------- ------------
Body weight (g) 404 ± 4 392 ± 5 396 ± 2
SOL (mg) 173 ± 5 174 ± 7 172 ± 5
TA (mg) 879 ± 25 820 ± 32 836 ± 18
GTN (mg) 2203 ± 62 2051 ± 57 2094 ± 47
PL (mg) 447 ± 11 438 ± 13 448 ± 6
**Notes:** Values are means ± S.E.M. Body weight is the weight at the time muscles were removed. Sample size was N = 8 per group with the exception of the TA muscle (sample size was N = 4 for controls and N = 5 for each of the 70 g and 700 g loads). No significant differences were observed.
**Abbreviations:** SOL, soleus; TA, tibialis anterior; GTN, entire gastrocnemius; PL, plantaris.
######
No change in percentage of muscle tissue composed of degenerative muscle fibers, centrally nucleated muscle fibers, and interstitium in muscle sections following training with different loads.
CONTROL 70 g LOAD 700 g LOAD
--------------------------------------- ------------- ------------- -------------
SOL
Non-degenerative muscle fibers (%)      93.7 ± 0.6    94.3 ± 0.3    93.7 ± 0.6
Degenerative muscle fibers (%)          0.33 ± 0.20   0.12 ± 0.09   0.29 ± 0.17
Centrally nucleated muscle fibers (%)   3.7 ± 1.4     3.9 ± 1.5     3.4 ± 1.0
Cellular interstitium (%)               1.1 ± 0.2     1.0 ± 0.2     0.9 ± 0.2
Non-cellular interstitium (%)           4.9 ± 0.4     4.7 ± 0.3     5.2 ± 0.5
TA
Non-degenerative muscle fibers (%) 97.9 ± 0.3 97.0 ± 0.3 96.7 ± 0.5
Degenerative muscle fibers (%) 0.00 ± 0.00 0.07 ± 0.07 0.00 ± 0.00
Centrally nucleated muscle fibers (%) 0.7 ± 0.2 1.0 ± 0.3 0.3 ± 0.2
Cellular interstitium (%) 0.2 ± 0.1 0.3 ± 0.1 0.3 ± 0.3
Non-cellular interstitium (%)           1.8 ± 0.4     2.6 ± 0.2     3.0 ± 0.5
LG
Non-degenerative muscle fibers (%) 93.9 ± 0.6 94.1 ± 0.6 93.1 ± 0.5
Degenerative muscle fibers (%) 0.03 ± 0.03 0.00 ± 0.00 0.01 ± 0.01
Centrally nucleated muscle fibers (%) 1.9 ± 0.7 0.8 ± 0.3 2.2 ± 0.6
Cellular interstitium (%) 1.8 ± 0.2 1.7 ± 0.2 2.1 ± 0.2
Non-cellular interstitium (%)           4.3 ± 0.5     4.1 ± 0.5     4.8 ± 0.5
MG
Non-degenerative muscle fibers (%) 94.8 ± 0.8 94.7 ± 0.7 95.7 ± 0.4
Degenerative muscle fibers (%) 0.03 ± 0.03 0.02 ± 0.02 0.00 ± 0.00
Centrally nucleated muscle fibers (%) 1.8 ± 0.4 3.8 ± 1.1 1.1 ± 0.4
Cellular interstitium (%) 1.3 ± 0.3 1.7 ± 0.4 1.6 ± 0.3
Non-cellular interstitium (%)           3.8 ± 0.6     3.5 ± 0.5     2.7 ± 0.3
PL
Non-degenerative muscle fibers (%) 96.5 ± 0.3 96.7 ± 0.5 97.0 ± 0.3
Degenerative muscle fibers (%) 0.02 ± 0.02 0.00 ± 0.00 0.01 ± 0.01
Centrally nucleated muscle fibers (%) 1.5 ± 0.4 2.4 ± 0.5 2.1 ± 0.4
Cellular interstitium (%) 0.3 ± 0.1 0.4 ± 0.1 0.2 ± 0.1
Non-cellular interstitium (%)           3.2 ± 0.3     2.9 ± 0.4     2.8 ± 0.2
**Notes:** Values are means ± S.E.M. Sample size was N = 8 per group with the exception of the TA muscle (sample size was N = 4 for controls and N = 5 for each of the 70 g and 700 g loads). Values expressed as percentage were in reference to percentage of tissue fraction. No significant differences were observed.
**Abbreviations:** SOL, soleus; TA, tibialis anterior; LG, lateral gastrocnemius; MG, medial gastrocnemius; PL, plantaris.
In Civil Infrastructure System (CIS) applications, the requirement of blending synthetic and physical objects distinguishes Augmented Reality (AR) from other visualization technologies in three aspects: (1) it reinforces the connections between people and objects, and promotes engineers' appreciation of their working context; (2) it allows engineers to perform field tasks with the awareness of both the physical and synthetic environment; and (3) it offsets the significant cost of 3D Model Engineering by including the real world background. This paper reviews critical problems in AR and investigates technical approaches to address the fundamental challenges that prevent the technology from being usefully deployed in CIS applications, such as the alignment of virtual objects with the real environment continuously across time and space; blending of virtual entities with their real background faithfully to create a sustained illusion of co-existence; and the integration of these methods into a scalable and extensible computing AR framework that is openly accessible to the teaching and research community. The research findings have been evaluated in several challenging CIS applications where the potential of having a significant economic and social impact is high. Examples of validation test beds implemented include an AR visual excavator-utility collision avoidance system that enables workers to "see" buried utilities hidden under the ground surface, thus helping prevent accidental utility strikes; an AR post-disaster reconnaissance framework that enables building inspectors to rapidly evaluate and quantify structural damage sustained by buildings in seismic events such as earthquakes or blasts; and a tabletop collaborative AR visualization framework that allows multiple users to observe and interact with visual simulations of engineering processes.
Recommended Citation
Behzadan, Amir H., Suyang Dong, and Vineet R. Kamat. "Augmented reality visualization: A review of civil infrastructure system applications." Advanced Engineering Informatics 29, no. 2 (2015): 252-267.
Senior Principal Manufacturing Engineer | MKS
Are you naturally curious? So are we at MKS. Our collective curiosity drives us to be an innovation leader in many industries. Our products drive technology advancements across a wide range of applications like cloud computing, clean drinking water, self-driving cars and space exploration. We are a team of collaborators who value fresh thinking and believe in mutual respect, constructive candor, diversity and inclusion. As a valued and trusted partner to our customers, we are continually pushing the boundaries of possibility. We believe in creating technology that transforms our world and are looking for like-minded individuals to join our team. If this is appealing to you, we want to meet you.
We are looking for a Senior Principal Manufacturing Engineer who will support Research & Development related to optical fabrication and peripheral processes with particular interest in requirements related to DUV/EUV cleanliness standards. As the technical lead, you will use your knowledge to assume ownership of the following.
- Leads the development and implementation of new processes and/or projects related to optical (EUV/DUV) cleaning and handling, photocontamination, and testing requirements.
- Communicates with customers regarding cleanliness and testing requirements.
- Supports multiple programs in Manufacturing to achieve division and company performance goals
- Provides sustaining engineering support to Production and continuously makes efforts to improve the current process for cost efficiency and time reduction within Manufacturing
What you will bring to the team:
- Advanced Degree in Engineering, Material Science, Physics, Chemistry, or related fields
- Experience/Knowledge of testing requirements and general requirements related to optical fabrication
- Optical component specifications/tolerancing and general optical fabrication knowledge desired
- Solid knowledge of optics theory (geometrical, physical) preferred but not required
- Experience working in cleanroom environments and knowledge of optical cleanliness standards
- Experience or knowledge of thin film coatings preferred
Globally, our policy is to recruit individuals from wide and diverse backgrounds. However, certain positions require access to controlled goods and technologies subject to the International Traffic in Arms Regulations (ITAR) or Export Administration Regulations (EAR). Applicants for these positions may need to be “U.S. persons.” “U.S. persons” are generally defined as U.S. citizens, noncitizen nationals, lawful permanent residents (or, green card holders), individuals granted asylum, and individuals admitted as refugees.
Old technologies revisited: we made and sold good looking furniture from recycled paper and cardboard and organized workshops. More than sustainable products, these pieces of furniture are true works of art. Visitors have learned how to bind a book, make good drawings and more.
Our goal is to create and sell affordably priced furniture made of recycled paper and cardboard that is at once an artwork, a revival of old technology, and a practical, sustainable household item.
Share the knowledge and promote the concept.
Space for the open workshop.
Set up the workshop and start the production.
Create a website and setup a web-shop.
Open the workshop to visitors.
Promote the enterprise offline and online.
Continuously improve the development process through all the listed stages.
- Creative design. Our intention is to collaborate with artists and designers to increase the creative value of our products.
- We are looking for collaboration with institutions supporting creative entrepreneurship that can help in the following areas: provide space for the open workshop, legal and accounting advice, funding.
- Kingdom: Animalia
- Class: Insecta
- Scientific Name: Phasmatodea
- Size: 3 - 30cm (1.2 - 11.8in)
- Average Life Span: 1 - 2 years
- Colour: Green, Brown, Tan, Grey
- Skin Type: Shell
- Special Features: Long body that helps with camouflage
- Stick insects are insects that look like the twigs of a tree or bush.
- There are around 3,000 species of stick insects found in the world.
- They vary in size from being 3cm to 30cm in length.
- They inhabit the rainforests, jungles and forests all around the world.
- They are extremely difficult for predators to spot, as they are perfectly camouflaged in their habitat.
- They have long, cylindrical bodies with a shape and color that closely resembles that of a stick.
- A few species have a more leaf-like appearance with flattened bodies.
- They are herbivores, feeding on green plants, leaves, fruits and berries.
- Their natural predators are rodents, small reptiles and birds.
- The female stick insect lays around 1000-1500 eggs that very closely resemble plant seeds.
- The eggs can remain dormant for months at a time.
- When the eggs hatch, the nymphs that emerge already look like miniature adult stick insects. | http://planetfauna.xyz/12-interesting-facts-you-should-know-about-stick-insect
CROSS-REFERENCE TO RELATED APPLICATION
The present application claims the benefit of, and priority to, U.S. Provisional Patent Application Ser. No. 61/049,548 filed on May 1, 2008, the entire disclosure of which is incorporated by reference herein.
BACKGROUND

1. Technical Field
The present disclosure relates to yarns that contain filaments made from high strength materials and/or absorbable materials, and braided multifilaments suitably adapted for use as surgical devices made from such yarns.
2. Background of Related Art
Braided multifilaments often offer a combination of enhanced pliability, knot security, and tensile strength when compared to their monofilament counterparts. The enhanced pliability of a braided multifilament is a direct consequence of the lower resistance to bending of a bundle of very fine filaments relative to one large diameter monofilament. However, a tradeoff between braid strength and pliability exists in the design of conventional braided multifilaments.
Braided multifilaments intended for the repair of body tissues should meet certain requirements: they should be substantially non-toxic, capable of being readily sterilized, possess good tensile strength and pliability, and have acceptable knot-tying and knot-holding characteristics. If the braided multifilaments are of the biodegradable variety, the degradation of the braided multifilaments should be predictable and closely controlled. Moreover, colored multifilaments may aid a surgeon during a surgical procedure by providing greater visibility of the device.
SUMMARY

The present disclosure describes first and second yarns that are interconnected to form surgical devices. The yarns may be interconnected in a braided construction. In embodiments, the yarns are braided, knitted, or woven into a suture, mesh, sternal closure device, cable, tape or tether.
The first yarns include a plurality of filaments including one or more filaments made from a high strength material and the second yarns include a plurality of filaments including one or more filaments made from an absorbable material. In embodiments, the yarns of the present disclosure include homogenous yarns that include all high strength filaments or all absorbable filaments. The present disclosure also describes heterogeneous yarns that include a plurality of filaments made from a high strength material and one or more filaments made from an absorbable material, and in other embodiments, heterogeneous yarns that include a plurality of filaments made from an absorbable material and one or more filaments made from a high strength material. The surgical devices may include combinations of homogenous and heterogeneous yarns.
The absorbable filaments of the yarns may include a color element for identifying the surgical device. The color element may be a single uniform color, a gradation of color, multiple colors, a design pattern, or combinations thereof along a portion of the filaments of the surgical device. In embodiments, the color element is substantially the same among the filaments, in other embodiments the color element varies between filaments of the same or different yarns.
DETAILED DESCRIPTION OF THE EMBODIMENTS

Filaments made from high strength materials and absorbable materials are used in accordance with the present disclosure to prepare yarns that can be incorporated into a braided, knitted, woven, or other suitable structure to provide a surgical device.
A plurality of filaments is used to form a yarn. A plurality of yarns is used to form a braid, knit or weave.
A “heterogeneous yarn” is a configuration containing at least two dissimilar filaments mechanically bundled together to form a yarn. The filaments are continuous and discrete, and therefore each filament extends substantially along the entire length of the yarn and maintains its individual integrity during yarn preparation, processing, and use.
Unlike a heterogeneous yarn, a “homogeneous” yarn is a configuration containing substantially similar filaments. The filaments are also continuous and discrete. Therefore each filament extends substantially along the entire length of the yarn and maintains its individual integrity during yarn preparation, processing, and use.
A “heterogeneous braid” is a configuration containing at least two dissimilar yarns. The two types of yarns are intertwined in a braided construction. The yarns are continuous and discrete, and therefore each yarn extends substantially along the entire length of the braid and maintains its individual integrity during braid preparation, processing, and use.
A “homogeneous braid” then, is a configuration containing substantially similar yarns. The yarns are intertwined in a braided construction. The yarns are continuous and discrete. Therefore each yarn extends substantially along the entire length of the braid and maintains its individual integrity during braid preparation, processing, and use.
In the broadest sense, this disclosure contemplates yarns that include at least one filament made from a high strength material and yarns that include at least one filament made from an absorbable material, articles made therefrom, and their use in surgery. Methods for forming filaments from high strength materials as well as filaments from absorbable materials are within the purview of those skilled in the art. The yarns can be a homogeneous yarn made entirely of either a high strength material or an absorbable material. In other embodiments, the yarns are heterogeneous. The yarns may be made from at least one high strength filament or absorbable filament in combination with a plurality of filaments made from at least one other fiber forming material. For example, the yarns may include a combination of high strength and absorbable materials.
High strength materials include extended chain fibers having a molecular weight of at least about 500,000 g/mole. In embodiments, the high strength material has a molecular weight between about 1,000,000 g/mole and about 5,000,000 g/mole, in embodiments between about 2,000,000 g/mole and about 4,000,000 g/mole. Examples of high strength polymers include, for example, ethylene vinyl acetate, poly(meth)acrylic acid, polyester, polyamides, polyethylene, polypropylene, polystyrene, polyvinyl chloride, polyvinylphenol, polyacrylonitrile, and copolymers and mixtures thereof. A particularly suitable non-biodegradable high strength fiber is ultra high molecular weight polyethylene, available under the tradename SPECTRA® (Honeywell, Inc., Morristown, N.J.). Other ultra high molecular weight polyethylene sutures are disclosed, for example, in U.S. Pat. No. 5,318,575, the entire contents of which are incorporated herein by reference.
Absorbable materials are absorbed by biological tissues and disappear in vivo at the end of a given period, which can vary for example from one day to several months, depending on the chemical nature of the material. Absorbable materials include both natural and synthetic biodegradable polymers.
Representative natural biodegradable polymers include polysaccharides such as alginate, dextran, cellulose, collagen, and chemical derivatives thereof (substitutions, additions of chemical groups, for example, alkyl, alkylene, hydroxylations, oxidations, and other modifications routinely made by those skilled in the art), and proteins such as albumin, zein and copolymers and blends thereof, alone or in combination with synthetic polymers.
Synthetically modified natural polymers include cellulose derivatives such as alkyl celluloses, hydroxyalkyl celluloses, cellulose ethers, cellulose esters, nitrocelluloses, and chitosan. Examples of suitable cellulose derivatives include methyl cellulose, ethyl cellulose, hydroxypropyl cellulose, hydroxypropyl methyl cellulose, hydroxybutyl methyl cellulose, cellulose acetate, cellulose propionate, cellulose acetate butyrate, cellulose acetate phthalate, carboxymethyl cellulose, cellulose triacetate, and cellulose sulfate sodium salt. These are collectively referred to herein as “celluloses.” Representative synthetic degradable polymers include polyhydroxy acids prepared from lactone monomers such as glycolide, lactide, trimethylene carbonate, p-dioxanone, ε-caprolactone, and combinations thereof. Polymers formed therefrom include, for example, polylactides, polyglycolides, and copolymers thereof; poly(hydroxybutyric acid); poly(hydroxyvaleric acid); poly(lactide-co-(ε-caprolactone)); poly(glycolide-co-(ε-caprolactone)); polycarbonates; poly(pseudo amino acids); poly(amino acids); poly(hydroxyalkanoate)s; polyanhydrides; polyortho esters; and blends and copolymers thereof.
Rapidly bioerodible polymers such as poly(lactide-co-glycolide)s, polyanhydrides, and polyorthoesters, which have carboxylic groups exposed on the external surface as the smooth surface of the polymer erodes, may also be used.
Turning now to FIGS. 1A and 1B, a plurality of filaments are commingled to form yarns. The filaments may be systematically or randomly arranged within a yarn, such as by twisting, plaiting, braiding, or laying the filaments substantially parallel to form the yarn. FIG. 1A illustrates a homogeneous yarn 10 including a plurality of substantially similar filaments 12. In embodiments, homogeneous yarn 10 includes a plurality of high strength filaments, and in other embodiments, homogeneous yarn 10 includes a plurality of absorbable filaments.
A heterogeneous yarn 20, on the other hand, contains a plurality of two dissimilar filaments 22, 24, as shown in FIG. 1B. In embodiments, first filaments 22 are made from a high strength material and second filaments 24 are made from an absorbable material. In other embodiments, first filaments 22 may be made from a high strength material or from an absorbable material and second filaments 24 may be formed from other fiber forming materials, such as non-degradable polymers like shape memory polymers or alloys.
Referring now to FIGS. 2A-2C, braids are formed from yarns. As shown in FIG. 2A, a braid 30 contains two similar heterogeneous yarns 32, 34. Each heterogeneous yarn contains a plurality of two dissimilar filaments. In embodiments, a first filament is a high strength material and a second filament is made from an absorbable material. The yarns 32, 34 are intertwined to form a substantially homogeneous braid 30.
FIG. 2B illustrates a heterogeneous braid 40 containing two dissimilar yarns 42, 44. In embodiments, a first yarn 42 contains a plurality of filaments made from a high strength material and a second yarn 44 contains a plurality of filaments made from an absorbable material. The homogeneous first and second yarns 42, 44 are intertwined to form a heterogeneous braid 40.
In another embodiment shown in FIG. 2C, a heterogeneous braid 50 contains a heterogeneous yarn 52 and a homogeneous yarn 54. As described above, a heterogeneous yarn contains a plurality of two dissimilar filaments. In embodiments, a first filament is made from a high strength material and a second filament is made from an absorbable material. The homogeneous yarn 54 contains a plurality of filaments made from any material capable of being spun into a filament. The heterogeneous yarn 52 and the homogeneous yarn 54 are intertwined to form a heterogeneous braid 50.
A braid and/or yarn can be prepared using conventional braiding, weaving, or other technology and equipment commonly used in the textile industry and in the medical industry for preparing multifilament sutures. Suitable braid constructions include, for example tubular, hollow, and spiroid braids and are disclosed, for example, in U.S. Pat. Nos. 3,187,752; 3,565,077; 4,014,973; 4,043,344; 4,047,533; 5,019,093; and 5,059,213, the disclosures of which are incorporated herein by reference. Illustrative flat braided structures (suitable, e.g., for tendon repair) which can be formed using the presently described yarns include those described in U.S. Pat. Nos. 4,792,336 and 5,318,575. Suitable mesh structures are shown and described, for example, in U.S. Pat. No. 5,292,328.
If desired, the surface of a filament, yarn, or braid can be coated with a bioabsorbable or nonabsorbable coating to further improve the performance of the braid. For example, a braid can be immersed in a solution of a desired coating polymer in an organic solvent, and then dried to remove the solvent.
A braid is sterilized so it can be used for a host of medical applications, especially for use as a surgical suture, cable, tether, tape and sternal closure device, which may be attached to a needle, suture anchor, or bone anchor.
Once sterilized, a braided multifilament surgical device, as described herein, may be used to repair wounds located between two or more soft tissues, two or more hard tissues, or at least one soft tissue and at least one hard tissue. The braided multifilament surgical device is passed through, wrapped around or secured to tissue and then the tissue is approximated by manipulating the braided multifilament surgical device, such as, for example, by tying a knot, cinching the device, applying a buckle, or the like.
In embodiments, a braid is made of heterogeneous yarns to form a surgical suture. The heterogeneous yarns contain filaments made from high strength materials and filaments made from absorbable materials. In embodiments, the heterogeneous yarns contain one or more high strength materials and one or more absorbable materials. In other embodiments, the braid may contain two sets of yarns, each containing different high strength and absorbable filaments. The high strength filaments may comprise from about 5% to about 95% of the cross-sectional area of the heterogeneous yarns, in embodiments from about 25% to about 75%, and in other embodiments from about 40% to about 60% of the heterogeneous yarns. The braid may be composed of yarns having the same or different proportion of high strength filaments to absorbable filaments.
In an embodiment, the heterogeneous yarns include filaments made from ultra high molecular weight polyethylene and filaments made from copolymers of glycolide and lactide, the ultra high molecular weight polyethylene filaments comprising about 10% to about 90% of the braid, in embodiments about 25% to about 75% of the braid, and in other embodiments about 30% to about 55% of the braid.
Sutures made in accordance with the foregoing description will exhibit superior strength and handling properties, as well as reduced long term implantable mass. High strength fibers, particularly ultra high molecular weight polyethylene, have a high tensile strength but an inherently low coefficient of friction thereby exhibiting poor knot security. Improved knot security is obtained by braiding absorbable filaments, which have a higher surface friction than high strength materials, into the device along with the high strength filaments. Additionally, absorbable filaments degrade after implantation, whereas high strength filaments are generally non-degradable. Thus, the braided suture would have less mass remaining long term after implantation.
In embodiments, the braided multifilaments include a color element to enhance visibility of the device. Ultra high molecular weight polyethylene filaments are substantially translucent or colorless and are currently only available without coloration. Absorbable filaments, on the other hand, can be colored in a number of conventional ways to produce a variety of different colors and/or color patterns. For example, a color element may be coated, sprayed, glued, dyed, stained, or otherwise affixed onto and/or into the absorbable material. By combining the translucent high strength filaments with colored absorbable filaments, color variation is achieved, resulting in visually identifiable and distinguishable sutures.
The color element may appear in various forms to provide visual identification and/or differentiation of the suture. Absorbable filaments are available in a variety of colors to visually distinguish sutures or to allow yarns to be woven into a wide variety of distinguishable patterns. In embodiments, all or a portion of the absorbable filaments in a yarn and/or braid may comprise a single uniform color. In other embodiments, all or a portion of the absorbable filaments in a yarn and/or braid may have a gradation of color, multiple colors, a design pattern, or combinations thereof. For example, as illustrated in FIG. 3A, the intensity of the color element 62 may decrease from the end portions 64, 66 of a filament 60 toward the middle portion 68 for visually identifying the location of an end portion. In an embodiment, the filament may include sections having different lengths of color where the spacing between the adjacent sections increases from one end toward the other end of the filament. In another exemplary embodiment shown in FIG. 3B, filament 70 may include a first color element 72 disposed on a first portion 76 of the filament 70 and a second color element 74 disposed on a second portion 78 of the filament 70. In yet other embodiments, the color element may also be in the form of a pattern, such as shapes or arrangements that afford a different identification effect. For example, the color may extend along the length of the absorbable filament in a spiral pattern or as a plurality of stripes. It is envisioned that the individual filaments within a yarn or braid may have different color elements and that the yarns within a braid may have different numbers of filaments containing color elements to form a visually distinct braid. It will be appreciated that other embodiments of color elements of an absorbable filament are also within the scope of the present disclosure.
In embodiments, the suture may also be a vehicle for delivery of pharmaceutical agents. A pharmaceutical agent as used herein is used in the broadest sense and includes any substance or mixture of substances that have clinical use. Consequently, pharmaceutical agents may or may not have pharmacological activity per se, e.g., a dye or fragrance. Alternatively a pharmaceutical agent could be any bioactive agent which provides a therapeutic or prophylactic effect, a compound that affects or participates in tissue growth, cell growth, cell differentiation, an anti-adhesive compound, a compound that may be able to invoke a biological action such as an immune response, or could play any other role in one or more biological processes. A variety of pharmaceutical agents may be incorporated into a coating and/or into the bulk polymer structure.
Examples of classes of pharmaceutical agents which may be utilized in accordance with the present disclosure include anti-adhesives, antimicrobials, analgesics, antipyretics, anesthetics, antiepileptics, antihistamines, anti-inflammatories, cardiovascular drugs, diagnostic agents, sympathomimetics, cholinomimetics, antimuscarinics, antispasmodics, hormones, growth factors, muscle relaxants, adrenergic neuron blockers, antineoplastics, immunogenic agents, immunosuppressants, gastrointestinal drugs, diuretics, steroids, lipids, lipopolysaccharides, polysaccharides, platelet activating drugs, clotting factors, and enzymes. It is also intended that combinations of agents may be used.
Other pharmaceutical agents which may be included as a bioactive agent in the coating composition applied in accordance with the present disclosure include: local anesthetics; non-steroidal antifertility agents; parasympathomimetic agents; psychotherapeutic agents; tranquilizers; decongestants; sedative hypnotics; steroids; sulfonamides; sympathomimetic agents; vaccines; vitamins; antimalarials; anti-migraine agents; anti-parkinson agents such as L-dopa; anti-spasmodics; anticholinergic agents (e.g. oxybutynin); antitussives; bronchodilators; cardiovascular agents such as coronary vasodilators and nitroglycerin; alkaloids; analgesics; narcotics such as codeine, dihydrocodeinone, meperidine, morphine and the like; non-narcotics such as salicylates, aspirin, acetaminophen, d-propoxyphene and the like; opioid receptor antagonists, such as naltrexone and naloxone; anti-cancer agents; anti-convulsants; anti-emetics; antihistamines; anti-inflammatory agents such as hormonal agents, hydrocortisone, prednisolone, prednisone, non-hormonal agents, allopurinol, indomethacin, phenylbutazone and the like; prostaglandins and cytotoxic drugs; chemotherapeutics, estrogens; antibacterials; antibiotics; anti-fungals; anti-virals; anticoagulants; anticonvulsants; antidepressants; and immunological agents.
Other examples of suitable pharmaceutical agents which may be included in the coating composition include viruses and cells, peptides, polypeptides and proteins, analogs, muteins, and active fragments thereof, such as immunoglobulins, antibodies, cytokines (e.g. lymphokines, monokines, chemokines), blood clotting factors, hemopoietic factors, interleukins (IL-2, IL-3, IL-4, IL-6), interferons (β-IFN, α-IFN and γ-IFN), erythropoietin, nucleases, tumor necrosis factor, colony stimulating factors (e.g., GCSF, GM-CSF, MCSF), insulin, anti-tumor agents and tumor suppressors, blood proteins, fibrin, thrombin, fibrinogen, synthetic thrombin, synthetic fibrin, synthetic fibrinogen, gonadotropins (e.g., FSH, LH, CG, etc.), hormones and hormone analogs (e.g., growth hormone), vaccines (e.g., tumoral, bacterial and viral antigens), somatostatin, antigens, blood coagulation factors, growth factors (e.g., nerve growth factor, insulin-like growth factor), bone morphogenic proteins, TGF-B, protein inhibitors, protein antagonists, and protein agonists, nucleic acids, such as antisense molecules, DNA, RNA, and RNAi, oligonucleotides, polynucleotides, and ribozymes.
Various modifications and variations of the yarns, braids and devices and uses thereof will be apparent to those skilled in the art from the foregoing detailed description. Such modifications and variations are intended to come within the scope of the following claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with a general description of the disclosure given above, and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
FIGS. 1A and 1B show illustrative embodiments of yarns in accordance with the present disclosure;

FIGS. 2A, 2B and 2C show illustrative embodiments of braids in accordance with the present disclosure; and

FIGS. 3A and 3B show illustrative embodiments of filaments including a color element for use in a surgical device as described herein.
Yes, there really is art outside photography. :)
The history and evolution of painting has undergone a similar transformation as most things adapting to a digital age. As photographers, we adapted techniques and tools commonly used in the darkroom to software, and found new ways to extend what was possible to help us achieve a vision. Just as we tried to adapt skills to a new environment, so too did traditional artists, like painters.
My headshot, as painted by Gustavo Deveze
These artists adapted by not only emulating the results of various techniques, but by pushing forward the boundaries of what was possible through these new (Free Software) tools.
Digital painting discussion within Free Software has lacked a good outlet for collaboration, one that opens the conversation for others to learn from and participate in. This is similar to the situation the Free Software + photography world was in that prompted the creation of pixls.us.
Due to this, both Americo Gobbo and Elle Stone reached out to us to see if we could create a new category in the community about Digital Painting with a focus on promoting serious discussion around techniques, processes, and associated tools.
Both of them have been working hard on advancing the capabilities and quality of various Free Software tools for years now. Americo brings with him the interest of other painters who want to help accelerate the growth and adoption of Free Software projects for painting (and more) in a high-quality and professional capacity. A little background about them:
Americo Gobbo studied Fine Arts in Bologna, Italy. Today he lives and works in Brazil, where he continues to develop studies and create experimentation with painting and drawing mainly within the digital medium in which he tries to replicate the traditional effects and techniques from the real world to the virtual.
Imaginary Landscape - Wet sketches, experiments on GIMP 2.9.+
Americo Gobbo, 2016.
Elle Stone is an amateur photographer with a long-standing interest in the history of photography and print making, and in combining painting and photography. She’s been contributing to GIMP development since 2012, mostly in the areas of color management and proper color mixing and blending.
Leaves in May, GIMP-2.9 (GIMP-CCE)
Elle Stone, 2016.
With this introductory post to the new Digital Painting category forum we feature Gustavo Deveze, a visual artist using free software. Deveze's work is characterized by mixing different media and techniques. With future posts we want to continue featuring artists using free software.
Gustavo Deveze is a visual artist and lives in Buenos Aires. He trained as a draftsman at the National School of Fine Arts “Manuel Belgrano”, and filmmaker at IDAC - Instituto de Arte Cinematográfica in Avellaneda, Argentina.
His works utilize different materials and supports, and he is published by different publishers, although in recent years he has worked mainly in digital format and with free software. He has participated in national and international shows and exhibitions of graphics and cinema, winning many awards. His last exposition can be seen on issuu.com: https://issuu.com/gustavodeveze/docs/inadecuado2edicion
Website: http://www.deveze.com.ar
Cudgels and Bootlickers: The Emperor’s happiness - Gustavo Deveze.
Let’s be clear: the village’s idiot is not tall… - Gustavo Deveze.
The new Digital Painting category is for discussing painting techniques, processes, and associated tools in a digital environment using Free/Libre software. Some relevant topics might include:
Emulating non-digital art, drawing on diverse historical and cultural genres and styles of art.
Emulating traditional “wet darkroom” photography, drawing on the rich history of photographic and printmaking techniques.
Exploring ways of making images that were difficult or impossible before the advent of new algorithms and fast computers to run them on, including averaging over large collections of images.
Discussion of topics that transcend “just photography” or “just painting”, such as composition, creating a sense of volume or distance, depicting or emphasizing light and shadow, color mixing, color management, and so forth.
Combining painting and photography: Long before digital image editing artists already used photographs as aids to and part of making paintings and illustrations, and photographers incorporated painting techniques into their photographic processing and printmaking.
An important goal is also to encourage artists to submit tutorials and videos about Digital Painting with Free Software and to also submit high-quality finished works.
Please feel free to stop into the new [Digital Painting category][dp-forum], introduce yourself, and say hello! I look forward to seeing what our fellow artists are up to.
Tuberculous pericarditis is an infrequent but serious form of tuberculosis. Its diagnosis is difficult and often delayed or not even reached, which results in complications such as constrictive pericarditis with high mortality rates [@B1]. In 2017, there were 10 million cases of active tuberculosis worldwide and 1.3 million related deaths making tuberculosis the leading cause of death by a single pathogen worldwide [@B2]. In Colombia, 13,626 new cases of tuberculosis were reported during 2016 of which 83% (11,338 cases) corresponded to pulmonary tuberculosis and 17% (2,288 cases) to extrapulmonary tuberculosis while 37 cases (1.6%) of these corresponded to tuberculous pericarditis [@B3].
We describe here the case of tuberculous pericarditis in a man with no apparent risk factors to develop the disease, which reinforces the concept that no predisposing condition is necessary to develop tuberculosis [@B4].
A 62-year-old man presented to the emergency room with a history of malaise, fever, cough, dyspnea, and loss of 5 kg of weight in the previous 30 days. His initial assessment showed normal vital signs and no abnormalities in the white blood cell count; the erythrocyte sedimentation rate was 56 mm/h, the C-reactive protein was 10.66 mg/l, and the procalcitonin level was less than 0.5 ng/ml; the serology for HIV was negative.
Chest X-rays showed global cardiomegaly with a rounded heart shape (figure 1). The chest tomography evidenced abundant homogeneous and hypodense pericardial effusion, thickening of the pericardial membrane, and enlarged lymph nodes (figure 2). An echocardiogram confirmed the accumulation of approximately 1,300 ml of pericardial effusion without hemodynamic compromise.
Figure 1Chest X-ray. PA projection. Global cardiomegaly with rounded cardiac shape. No opacities were observed.
Figure 2Contrasted chest computed tomography, mediastinal window. There is a 3-millimeter thickening and abnormal pericardial enhancement, pericardial effusion, which does not produce compression of the right ventricle, and pre-aortic adenopathy with heterogeneous enhancement (white arrow).
Taking into consideration the clinical and imaging characteristics, a pericardiocentesis was performed and 275 ml of yellowish liquid were obtained. The cytological analysis was negative for malignancy; the adenosine deaminase (ADA) measurement was 101 IU/l, the polymerase chain reaction for *Mycobacterium tuberculosis* (IS6110), smear microscopy, and culture (both MGIT and Lowenstein-Jensen) for mycobacteria were all negative.
Due to these inconclusive findings, biopsies of the pericardium and a mediastinal lymph node were performed. The pathological examination showed an extensive chronic granulomatous reaction with necrosis and giant cells (figure 3), while the Ziehl-Neelsen staining showed acid-fast bacilli. Based on these results, anti-tuberculosis treatment plus prednisone was started. After the stabilization of his clinical condition, the patient was discharged and completed six months of anti-tuberculosis treatment with complete clinical recovery.
Figure 3A and B. Pericardium compromised by chronic granulomatous inflammation with central necrosis and multinucleated giant cells. Hematoxylin-eosin. 40X. C. Ziehl-Neelsen stain showing acid-fast bacilli (black arrow), 100X
One to two percent of patients with pulmonary tuberculosis develops tuberculous pericarditis. However, it can also present as an isolated extrapulmonary form [@B5]. In a Spanish series of 294 immunocompetent individuals with acute pericarditis, thirteen cases of tuberculous pericarditis were identified (4%), cardiac tamponade was observed in five cases, and constrictive pericarditis in six patients [@B6].
Pericardial involvement can occur by extension from the lungs, adjacent lymph nodes, the sternum or even the spine, as well as through hematogenous spread. Frequently, tuberculous pericarditis corresponds to the reactivation of a previous infection without an apparent primary site [@B7], which is probably what happened in the case we describe here.
Four pathological stages are described. Initially, there is a fibrinous exudate with polymorphonuclear infiltration and the formation of early granulomas. This is followed by a serosanguineous effusion with abundant lymphocytes and, finally, adsorption of the effusion with the onset of granulomatous necrosis, pericardial thickening, and fibrosis that can progress to constrictive pericarditis [@B7].
The clinical presentation is nonspecific and insidious, with symptoms such as fever, night sweats, and weight loss usually preceding the cardiopulmonary symptoms; cough, dyspnea, and pleuritic pain are the most frequent of these [@B8]. This was the case of our patient, who did not develop a hemodynamic compromise.
Regarding the diagnostic approach, this is established through the detection of *M. tuberculosis* bacilli in smear microscopy or culture of the pericardial fluid and/or the identification of bacilli or granulomatous inflammation in the pathological examination of the pericardium [@B7].
Pericardiocentesis is a common and useful procedure for the diagnosis of tuberculous pericarditis. The extracted fluid should be evaluated by smear and culture, ADA concentration, and cytology [@B8]. In many cases, after this evaluation, the diagnosis is not reached and, therefore, a pericardium biopsy is necessary as described here.
**Citation:** Jurado LF, Pinzón B, de la Rosa Z, Mejía M, Palacios DM. Tuberculous pericarditis. Biomédica. 2020;40(Supl.1):23-5. <https://doi.org/10.7705/biomedica.4911>
**Author contributions:** All the authors contributed equally to the manuscript.
**Funding:** Hospital Universitario Fundación Santa Fe de Bogotá.
**Conflicts of interest:** The authors declare no conflicts of interest
| |
Iceland volcano eruption in 1783-84 did not spawn extreme heat wave: Massive Laki volcano eruption led to unusually cold winter in Europe
The study, in the Journal of Geophysical Research: Atmospheres, will help improve predictions of how the climate will respond to future high-latitude volcanic eruptions.
The eight-month eruption of the Laki volcano, beginning in June 1783, was the largest high-latitude eruption in the last 1,000 years. It injected about six times as much sulfur dioxide into the upper atmosphere as the 1883 Krakatau or 1991 Pinatubo eruptions, according to co-author Alan Robock, a Distinguished Professor in the Department of Environmental Sciences at Rutgers University-New Brunswick.
The eruption coincided with unusual weather across Europe.
The summer was unusually warm with July temperatures more than 5 degrees Fahrenheit above the norm, leading to societal disruption and failed harvests.
The 1783-84 European winter was up to 5 degrees colder than average. Benjamin Franklin, then the U.S. ambassador to France, speculated on the causes in a 1784 paper, the first publication in English on the potential impacts of a volcanic eruption on the climate. To determine whether Franklin and other researchers were right, the Rutgers-led team performed 80 simulations with a state-of-the-art climate model from the National Center for Atmospheric Research.
The computer model included weather during the eruption and compared the ensuing climate with and without the effects of the eruption. "It turned out, to our surprise, that the warm summer was not caused by the eruption," Robock said. "Instead, it was just natural variability in the climate system. It would have been even warmer without the eruption.
The cold winter would be expected after such an eruption." The warm 1783 summer stemmed from unusually high pressure over Northern Europe that caused cold polar air to bypass the region, the study says. After the eruption, precipitation in Africa and Asia dropped substantially, causing widespread drought and famine.
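In outline, that with/without comparison is an ensemble-mean difference. The sketch below is purely illustrative; the ensemble size matches the study's 80 runs, but the numbers are synthetic rather than actual NCAR model output.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members = 80  # same ensemble size as the study

# Synthetic July temperature anomalies (deg C) for each ensemble member
control = rng.normal(loc=0.0, scale=1.0, size=n_members)            # no eruption
forced = control + rng.normal(loc=-0.3, scale=0.2, size=n_members)  # with aerosol forcing

effect = forced.mean() - control.mean()  # ensemble-mean eruption effect
spread = control.std(ddof=1)             # natural (internal) variability

print(f"eruption effect: {effect:+.2f} C; natural spread: {spread:.2f} C")
# If the effect is small next to the spread, a warm summer can occur from
# internal variability alone, which is the study's conclusion for 1783.
```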
The eruption also increased the chances of El Niño, featuring unusually warm water in the tropical Pacific Ocean, in the next winter.
The eruption spawned a sulfuric aerosol cloud -- called the "Laki haze" -- that lingered over most of the Northern Hemisphere in 1783. Reports from across Europe included lower visibility and the smell of sulfur or hydrogen sulfide.
The air pollution was linked to reports of headaches, respiratory issues and asthma attacks, along with acid rain damage to trees and crops, the study notes. More than 60 percent of Iceland's livestock died within a year, and about 20 percent of the people died in a famine. Reports of increased death rates and/or respiratory disorders crisscrossed Europe. "Understanding the causes of these climate anomalies is important not only for historical purposes, but also for understanding and predicting possible climate responses to future high-latitude volcanic eruptions," Robock said. "Our work tells us that even with a large eruption like Laki, it will be impossible to predict very local climate impacts because of the chaotic nature of the atmosphere." Scientists continue to work on the potential impacts of volcanic eruptions on people through the Volcanic Impacts on Climate and Society project.
The Laki eruption will be included in their research. Volcanic eruptions can have global climate impacts lasting several years.
The study's lead author is Brian Zambri, a former post-doctoral associate who earned his doctorate at Rutgers and is now at the Massachusetts Institute of Technology. Scientists at the National Center for Atmospheric Research and the University of Cambridge contributed to the study. | https://articlefeed.org/iceland-volcano-eruption-in-1783-84-did-not-spawn-extreme-heat-wave-massive-laki-volcano-eruption-led-to-unusually-cold-winter-in-europe/
Las Vegas Sheriff Joseph Lombardo held a press conference on Thursday regarding the final report on the 1 October shooting, which reveals what we have already heard many times before — that Stephen Paddock was the only shooter, no motive, blah, blah, blah.
And if that doesn’t get your blood racing, Lombardo said during the conference that Paddock’s girlfriend Marilou Danley will not be charged in connection with the crime and will walk free.
When the question of multiple shooters at other casino locations popped up, Lombardo quickly deferred by saying “blood spatter” was to blame.
Additionally, the sheriff called out “keyboard cowboys” for pushing internet “conspiracies,” which Lombardo explained often waste the department’s time with false leads, and said that Paddock’s loss of wealth may have been a factor. | https://www.intellihub.com/watch-lombardo-discusses-the-final-report-on-the-1-october-shooting/
“The Big Lebowski,” “Seven,” and “Drive” are not movies one would expect to see when envisioning a class based around the horror and film noir genres.
On Mondays and Wednesdays, the Film Genres class screens exactly those films in the Fé Bland Forum of the Business Communications Building. It covers movies from as far back as 1922 to as recent as 2012.
“I wanted to pick movies to build basic conventions of the genre, but also pick films that broke those conventions,” said Michael Albright, adjunct Film Genres instructor. “Very often it’s what you don’t see or what’s lurking in the shadows is what becomes the most important part of the film.”
The Fé Bland Forum is home to many of the film department’s classes because of its large size, lit theater with stadium style seating, and violet upholstered chairs with retractable desks.
The class switches the genre focus every semester. Their options include: horror, film noir, western, science fiction, musicals, and crime films. Albright has taught at City College for two semesters and has “bounced around” California universities where instructors are needed.
“[It’s] a fun class to teach because there are a lot of taboos within horror films,” said Albright. “No other film or genre can talk about what was going on at the time like a horror film.”
Many of the movies have social and political implications that extend far beyond the silver screen.
‘Night of the Living Dead’ was produced in 1968 and is “very much about race,” said Albright.
“With the civil rights movements happening in the ‘60s, the assassination of Martin Luther King Jr.… All of those themes are in it,” he said. “Also, the lead character is an African-American, which was pretty uncommon for the time.”
Jaclyn Murdock, a Film Production major, is enrolled in the class and said she enjoys her time spent in the two hour lecture despite the occasionally uncomfortable seats.
“[Albright] is so funny and really knows what he’s talking about,” said Murdock. “Film genres is a cool class because every teacher you get will be different. They will use their own list of movies to teach the class.”
The two genres are taught in the same semester because of their stylistic similarities, which are rooted in German Expressionism.
“This is a more analytical sort of class,” said Murdock. “Whereas [other classes] would be more of dates, times and memorizing.”
A lot of camera angles, lighting choices and other cinematic techniques, are universal throughout all films. However, there are elements and ideas with specific implications within horror and film noir movies.
“When you see a hatchet you think immediately of a horror film, it’s all language. It’s like an alphabet,” said Albright. “The ordering and the way that those letters or objects are used in a film are really important to understand how [movies] work.”
Albright realizes that not all of his students will focus on film studies or create movies for a living; however, he hopes those students will take this experience and apply it as audience members with a broadened perspective. | https://www.thechannels.org/features/2013/03/15/horror-film-class-analyzes-movies-such-as-night-of-the-living-dead/
"tumbles"
The term describing bacteria with flagella distributed over the entire surface of the cell is ________. When essential nutrients are depleted, some strains of bacteria form ________, dormant structures that form within cells.
Peritrichous; Endospores
Many pathogenic (disease-producing) bacteria have ______ that protect them from phagocytosis by host cells.
Glycocalyx OR capsules
Endoplasmic reticulum that has ribosomes attached to its outer surface is referred to as
rough endoplasmic reticulum OR rough ER
Chemotaxis refers to the ability of microorganisms to
move toward or away from chemical stimuli
All of the following are found in the cell walls of gram-positive bacteria except
lipid A
Gram-negative cells contain a periplasmic space that is
rich in degradative enzymes
All of the following are true of the gram-negative outer membrane except
it contains enzymes for energy synthesis
A population of bacterial cells has been placed in a very nutrient-poor environment with extremely low concentrations of sugars and amino acids. Which kind of membrane transport becomes crucial in this environment?
Active transport
Spherical shaped bacteria that divide and remain attached in chainlike patterns are called
streptococci
Which of the following about a gram-negative cell wall is not true?
It has teichoic acids.
Which of the following processes requires energy?
Active transport
Which of the following bacterial structures are necessary for chemotaxis?
Flagella
Which of the following statements is true?
Endospores are extremely durable structures that can survive high temperatures.
The plane in which a bacterial cell divides determines the arrangement of cells.
True
The cell membrane is a fluid structure that allows membrane proteins to move freely.
True
Acid-fast bacteria demonstrate unique staining properties because of a special protein layer found in their cell walls.
False
Penicillin is more effective against gram-negative bacteria than gram-positive bacteria because it specifically interferes with the synthesis of lipopolysaccharide.
False
The endosymbiotic theory states that eukaryotic organelles evolved from symbiotic prokaryotes living within other prokaryotes.
True. | https://www.easynotecards.com/notecard_set/5967 |
Sterling Elementary is proud to be a PBIS school. Our staff members work very hard to ensure that all students are provided a safe atmosphere for learning. We will RISE at Sterling! We are excellent role models for Respect, Independence, Self-Control, and Effort!
WHAT is PBIS?
PBIS is a school-wide system that provides clear expectations for behaviors and consistent consequences for inappropriate behaviors across all classrooms and across all school settings. By rewarding positive behaviors and by explicitly teaching our students how to engage in these behaviors, we will foster a positive and safe learning environment. We are excited to have such a dedicated and positive staff who will be excellent role models throughout the school year!
How can Parents help?
As a parent of a Sterling Elementary student, you can help reinforce positive behaviors at home as well. Encourage your child to use respect towards all people at all times and to become an independent student who is responsible for his/her own actions and learning. | https://sterling.glynn.k12.ga.us/apps/pages/index.jsp?uREC_ID=1743153&type=d&pREC_ID=1922191 |
1. Field of the Invention
The present invention relates to a multi-point ignition system for an internal combustion engine, such as an automobile engine.
2. Description of Related Art
Multi-point ignition systems for automobile engines have a plurality of ignition spark gaps in each cylinder. In order to arrange a plurality of peripheral ignition spark gaps along an inner wall of a combustion chamber of each cylinder at appropriate circumferential separations, a plurality of spark plugs may be supported by an annular supporting ring. Such an annular supporting ring is disposed between a cylinder block and a cylinder head. Simultaneous fuel ignition by providing a spark across each of the plurality of peripheral ignition spark gaps causes rapid fuel combustion so as to realize ideal "iso-volume" combustion, particularly for Otto cycle engines. Such an ignition system is known from, for instance, Japanese Unexamined Patent Publication No. 57-148,021.
It has recently been attempted to provide an ignition spark gap at the center of a combustion chamber in addition to a plurality of peripheral ignition spark gaps. In order to optimize fuel combustion, these ignition spark gaps are selectively used to ignite fuel in different ignition modes; such ignition modes may include a first ignition mode and a second ignition mode.
When an engine operates under lower loads, the first ignition mode, in which only the peripheral ignition spark gaps are used to ignite fuel, is selected so as to cause sufficient fuel combustion along the inner wall of the combustion chamber. This suppresses increases in hydrocarbon (HC) which are produced easily in a peripheral area of the combustion chamber. Additionally, flame expansion in an early stage of fuel combustion is suppressed. Suppressing flame expansion lowers a peak speed of fuel combustion and leads to provision of a uniform combustion ratio throughout the combustion chamber during the entire period of combustion. This, in turn, results in a decrease in emission of nitrogen oxides (NOx). On the other hand, when the engine operates under higher loads, the second ignition mode, in which all of the ignition spark gaps are used at once to ignite fuel, is selected. This second ignition mode prevents the occurrence of engine knocking.
When considering both the relative placement of intake and exhaust ports and spark plug service efficiency, it is difficult to arrange the peripheral ignition spark gaps at equal circumferential separations. Having peripheral ignition spark gaps at unequal circumferential separations may produce a delay in what is known as "flame fusion" near part of the combustion chamber between adjacent peripheral ignition spark gaps having a separation larger than those of other adjacent peripheral ignition spark gaps. Such a delay in flame fusion is undesirable from the standpoint of providing sufficiently decreased hydrocarbon (HC) and nitrogen oxide (NOx) emissions.
I am defending the usefulness of modern logic to a class and I came across this modalized version of a Thomistic argument (https://www.bu.edu/wcp/Papers/Reli/ReliMayd.htm). However, although I have studied both predicate and modal logic before, some of this gentleman's notations confuse me immensely. Namely, I have never seen (3t) for example, or a modal quantifier within the exact same parentheses as an existential quantifier.
P.S. with respect to natural deduction proofs, I am looking for something like this:
- [](P -> Q)
- <>(P)
- W1: P ..... 2, Possible Instantiation.
- W1: P -> Q ..... 1, Necessary Instantiation
- W1: Q ..... 3,4 Modus Ponens
- <>Q ..... 5, Possible Generalization. | https://philosophy.stackexchange.com/questions/30306/can-someone-give-me-a-natural-deduction-proof-for-this-argument
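Note that step 4 (Necessary Instantiation) only applies to a boxed formula, which is why premise 1 is written [](P -> Q). A quick semantic sanity check of the same inference can be done with a toy Kripke model. The sketch below is my own illustration, not from Maydole's paper; the worlds, accessibility relation, and valuations are invented for the example, and it only demonstrates the inference in one model rather than proving validity.

```python
# Toy Kripke model: w0 sees w1 and w2; P -> Q holds at every world w0 sees,
# and P holds at some world w0 sees, so Q must hold at some world w0 sees.
access = {"w0": ["w1", "w2"], "w1": [], "w2": []}   # accessibility relation
truth = {                                           # atomic valuations
    "w0": {"P": False, "Q": False},
    "w1": {"P": True,  "Q": True},
    "w2": {"P": False, "Q": False},
}

def box_p_implies_q(w):
    """[](P -> Q): P -> Q holds at every world accessible from w."""
    return all((not truth[v]["P"]) or truth[v]["Q"] for v in access[w])

def diamond(w, atom):
    """<>atom: atom holds at some world accessible from w."""
    return any(truth[v][atom] for v in access[w])

assert box_p_implies_q("w0")   # premise 1
assert diamond("w0", "P")      # premise 2
assert diamond("w0", "Q")      # conclusion <>Q holds in this model
print("Both premises and the conclusion <>Q hold at w0")
```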
Maison & Objet 2020
The company’s corporate identity is built on the aesthetics of eco-minimalism, with a well-thought-out logic behind each piece. ZEGEN furniture not only transforms the interior of the house but also declares a new way of life, in which the central values are awareness, intelligence, culture, and personality.
At the exhibition, the company presents a desk from the DUOO collection by designer Andriy Mohyla, a GRID sideboard, and an OM chair by Pavel Vetrov. | https://pavelvetrov.com/maison-objet-2020/
Bar-Ilan University is one of Israel’s most dynamic nanotechnology research centers. The University’s Institute for Nanotechnology and Advanced Materials (BINA) is home to top innovators who publish hundreds of papers a year, collaborate with multi-national corporations, and help set the global nano-research agenda. BIU’s nano experts are making important advances in energy production, environmental protection and spintronics.
BINA is home to top innovators who publish and collaborate on a multi-national scale with corporations that help set the global nano-research agenda. Nanotechnology’s impact involves innovations related to healthcare, energy, security and computer technology, enhancing lives and livelihoods on every level.
Collaborating with the world’s most respected research centers – as well as multidisciplinary corporations such as Merck, General Motors, Phillips, Siemens and IBM – Bar-Ilan is leading the change towards a future in which the potential of nanotechnology makes life better for all of us.
Established in 2007 and housed in the Leslie and Susan Gonda (Goldschmied) Nanotechnology Triplex, BINA boasts 40 cutting-edge laboratories as well as state-of-the-art “Scientific Service” facilities for electron microscopy, nano fabrication, surface analysis, fluorescence, and magnetic measurement, all of which are available for use by the wider scientific community.
BINA offers a comprehensive set of educational frameworks, including PhD and MA studies, and a selective program for BIU undergraduates majoring in the sciences. BINA graduates have gone on to postdoctoral positions in prestigious academic institutions, and hold key positions in Israel’s science-based industries.
BINA is a world leader in materials science research. Ranked third in the world in terms of scientific citations in this area, and site of a Marie Curie Training Site for Fabrication of Nanoscale Materials, Bar-Ilan scientists are expanding our understanding and control of molecular and atomic self-assembly. BIU materials research is currently focusing on everything from electrical vehicle batteries, to energy-saving digital lighting displays, to medical nanoparticles for the targeted eradication of the herpes simplex virus, staph infections and cancer. This can impact discoveries in a wide range of areas including energy, medicine, magnetism, photonics, and cleantech.
BINA researchers have played a vital role in the development of renewable energy applications, forging a path toward practical and green solutions for a sustainable future.
BINA scientists are uncovering fundamental principles that govern human health and disease, while creating the path-breaking medical technologies that will save lives, from specially-designed nanoparticles for diagnostics and targeted drug delivery, to innovative approaches to neurodegenerative disease, viral infection and cancer.
From fundamental studies of the magnetic properties of materials, to the fabrication of new materials, researchers in the BINA Nano-Magnetism Center are making dramatic contributions that will lead to novel devices for communication, medicine and industry.
Research in BINA’s Nano-Photonics Center encompasses two main areas: imaging and vision, and optic information transport. BIU researchers are helping to advance our understanding and control of the quantum behavior of light.
Members of BINA’s Nano-Cleantech Center are advancing and developing the materials and methodologies that will lead to a sustainable, environmentally-friendly society.
Professor Rachela Popovtzer is a Senior Lecturer in the Faculty of Engineering and a member of the Nano-Medicine Center at the Bar-Ilan Institute of Nanotechnology and Advanced Materials (BINA). She came to Bar-Ilan University from the University of Michigan, as part of the 2007 cohort of returning scientists, a special program within Bar-Ilan University to recruit young Israeli scientists to return to work in Israel, which is supported by a Returning Scientist grant from the Crown Family of Chicago.
Professor Popovtzer is a bio-engineer who applies engineering principles to address challenges in biology and medicine. Methods developed in Popovtzer’s lab allow doctors to detect cancer earlier, determine how far the disease has advanced, and find small tumors with greater accuracy.
One of Popovtzer’s key innovations is an “intelligent” bio-sensor that integrates living organisms – genetically-engineered bacteria – into an electronic device used for identifying toxins. The bio-sensor effectively screens for thousands of chemicals at once, and can detect the presence of a toxin within sixty seconds. | https://afbiu.org/nanotechnology-bar-ilan-university |
Surgical consultations can take a long time to complete, but you can reduce that by following a few simple steps. The first step is ensuring you understand the procedure’s eligibility criteria. Then, you can apply a Lean Six Sigma approach and use CI to streamline the process.
Surgical waiting times are a significant policy challenge in many developed countries. The growing demand for elective surgeries has put pressure on health systems in these countries. Waiting times can be a stressful experience for both patients and service providers. They can also lead to reduced health outcomes.
Many countries have implemented innovative policies to reduce waiting times. However, more research is necessary to determine the impact of these measures.
In particular, standardized referral guidelines can improve cooperation between primary care practitioners and surgeons. These guidelines allow decision-makers to assess the appropriateness of a referral by examining the patient’s needs.
Better patient triage strategies can help avoid overcrowding in surgical clinics. This strategy also may improve the timeliness of elective surgical care. In addition, it may be worth considering using a simple questionnaire to track potential neurosurgical patients.
In addition to the self-administered 3-item questionnaire, another effective method of reducing waiting times is allowing non-surgical patients to consult at other clinics. These treatments may be just as beneficial as surgery.
Surgical consultations can be expedited using a CI (computer-integrated) approach. CIs provide a safe and convenient alternative to face-to-face consultations. Patients and clinicians are spared valuable time and money by reducing avoidable face-to-face visits. The proposed approach was tested in simulations and was found to be successful in improving average waiting times for elective surgeries.
The study’s results showed that the CI approach to surgical consultations reduced the mean waiting time for elective surgeries by 30 percent. This reduction was mainly due to adding patients to lists with shorter waiting times. However, some surgical subspecialties could not benefit from the CI approach.
While the use of eConsults in medical care has been studied extensively, several aspects still need to be addressed. One is the surgical yield, defined as the proportion of face-to-face specialist visits that result in scheduled surgery. Many factors can affect the result of an eConsult, particularly in surgical subspecialties, where a wide variety of conditions is seen.
Surgical scheduling is a tedious and expensive process requiring many manual interventions. Automated optimization techniques can generate optimized surgery schedules that save valuable staff time, and these schedules can be simulated to gauge the system’s performance under uncertainty.
A simulation engine may run alongside the scheduling system and provide estimates of operational metrics, such as surgery durations, operating-room open times, and post-anesthesia care unit (PACU) bed occupancy. These metrics may be displayed to the scheduling coordinator.
Optimized surgery schedules may be created in two stages. First, the scheduling system generates an optimized proposed surgery schedule based on resource constraints and operational metrics. Second, it produces a finalized program for the surgeries to be performed the following day.
The optimization model should be tailored to the objectives of the surgical facility. These may include ensuring the availability of critical surgical equipment, qualified anesthesiologists, and PACU beds.
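As a deliberately simplified illustration of that first stage, the following C sketch assigns a day’s cases to operating rooms using the classic longest-processing-time (LPT) heuristic for balancing room finish times. The case durations and room count are hypothetical, and this generic heuristic stands in for, rather than reproduces, the optimizer described above.

```c
/* Greedy longest-processing-time (LPT) assignment: sort cases by
 * estimated duration, then always give the next case to the room
 * that currently finishes earliest. A classic makespan heuristic. */
#include <stdio.h>
#include <stdlib.h>

#define N_ROOMS 3

static int cmp_desc(const void *a, const void *b) {
    return *(const int *)b - *(const int *)a;
}

int main(void) {
    /* Estimated durations in minutes (hypothetical cases). */
    int cases[] = {240, 180, 150, 120, 90, 75, 60, 45, 30};
    int n = sizeof cases / sizeof cases[0];
    int room_end[N_ROOMS] = {0};

    qsort(cases, n, sizeof cases[0], cmp_desc);
    for (int i = 0; i < n; i++) {
        int best = 0;                       /* room that frees up first */
        for (int r = 1; r < N_ROOMS; r++)
            if (room_end[r] < room_end[best]) best = r;
        room_end[best] += cases[i];
        printf("case %3d min -> room %d (ends at %d min)\n",
               cases[i], best, room_end[best]);
    }
    return 0;
}
```

A production scheduler would add the resource constraints the text mentions (equipment, anesthesiologists, PACU beds), but the core idea of balancing load across rooms is the same.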
Lean Six Sigma can also help reduce the wait time for surgical consultations, and healthcare organizations are embracing the methodology to improve patient care. It can reduce patients’ length of stay in the hospital as well.
The Cleveland Clinic Cardiac Catheterisation Laboratory used Six Sigma techniques to improve on-time patient arrivals and reduce physician downtime. This enabled the hospital to increase the number of cases handled daily without hiring more staff.
Memorial Health System, a nonprofit four-hospital system in Springfield, Illinois, completed over 300 Lean Six Sigma improvement projects and achieved almost $30 million in positive financial impact. These projects included a time-and-motion study that identified issues affecting service efficiency; the resulting data was analyzed using detailed process maps.
Lean Six Sigma was used to improve patient satisfaction scores. Patients were categorized based on their needs, allowing for better discharge process design. This increased the number of patients seen per session by 9%. | https://divinoplasticsurgery.co/how-to-reducing-the-wait-time-for-surgical-consultations/ |
Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general-purpose applications built from various discrete chips. In modern terminology, a microcontroller is similar to, but less sophisticated than, a system on a chip (SoC). An SoC may include a microcontroller as one of its components, but it usually integrates the microcontroller with advanced peripherals such as a graphics processing unit (GPU), a Wi-Fi module, or one or more coprocessors.
Different types of Microcontroller Programming used in Embedded Systems
A microcontroller is a chip optimized to control electronic devices. It is a single integrated circuit dedicated to performing a particular task and executing one specific application. Microcontrollers are specially designed circuits for embedded applications and are widely used in automatically controlled electronic devices.
A microcontroller is a small computer on a single integrated circuit. While the speed of microcontrollers has increased over the years, the name has stuck. As for the controller part, a microcontroller consists of a microprocessor unit, RAM, ROM, and some extra peripherals. When the power supply is turned on, the control logic enables the quartz oscillator. In the first few milliseconds, while early preparation is in progress, the parasitic capacitors charge. Once the voltage level reaches its maximum and the quartz oscillator’s frequency becomes stable, the process of writing bits to the special function registers begins.
The main difference between microprocessors and microcontrollers is that a microprocessor has only one or two types of bit-handling instructions, while microcontrollers have many. Before moving on to the differences, let’s have an overview of both. A microprocessor is the controlling unit of a micro-computer, fabricated on a small chip, capable of performing ALU (Arithmetic Logic Unit) operations and communicating with the other devices connected to it. A microprocessor consists of an ALU, a register array, and a control unit. The ALU performs arithmetic and logical operations on data received from memory or an input device.
Types of microcontroller
The fabrication technology used for microcontrollers is VLSI. An alternate name for the microcontroller is the embedded controller. Microcontrollers are used in electronic devices that require a degree of control to be exercised by the device’s operator. This article gives an overview of microcontroller types and how they work. A microcontroller is a small, low-cost, self-contained computer-on-a-chip that can be used as an embedded system. A few microcontrollers may use four-bit words and operate at clock frequencies as low as 4 kHz. Microcontrollers usually must have low power requirements, since many of the devices they control are battery-operated.
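Low power consumption in practice usually means sleeping between events. As a concrete illustration, the C sketch below puts the CPU into power-down mode and wakes it only on a button press. It assumes an ATmega328P-class AVR built with avr-gcc; the pin choice and the use of a pin-change interrupt are illustrative assumptions, not details from this article.

```c
/* Sleep until a button press: the MCU sits in power-down mode
 * (nanowatt-class draw) and a pin-change interrupt on PB0 wakes it.
 * Register names come from the ATmega328P datasheet. */
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>

ISR(PCINT0_vect) { /* waking the CPU is this handler's only job */ }

int main(void) {
    DDRB  &= ~(1 << PB0);      /* PB0 as input...              */
    PORTB |=  (1 << PB0);      /* ...with internal pull-up      */
    PCICR |=  (1 << PCIE0);    /* enable pin-change group 0     */
    PCMSK0 |= (1 << PCINT0);   /* unmask PB0                    */
    sei();

    for (;;) {
        set_sleep_mode(SLEEP_MODE_PWR_DOWN);
        sleep_enable();
        sleep_cpu();           /* blocks here until the interrupt */
        sleep_disable();
        /* handle the button event, then go back to sleep */
    }
}
```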
This is a list of common microcontrollers, listed by brand. In 2016, Atmel was sold to Microchip Technology. Some lines are clones of Microchip’s PIC processors, but with a different instruction-word width. Espressif Systems, a company headquartered in Shanghai, China, made its debut on the microcontroller scene with its range of inexpensive, feature-packed Wi-Fi microcontrollers such as the ESP8266. Holtek Semiconductor is a major Taiwan-based designer of 8-bit and 32-bit microcontrollers and peripheral products.
Microcontroller – Types of Microcontrollers & their Applications
What is a Microcontroller? It is essentially a small computer with all of its peripherals inside a single package. Microcontrollers are classified into different types on various bases. The bus in a microcontroller refers to the parallel lines of connection between the various components of the microcontroller.
A microcontroller is also known as an embedded controller. Today, various types of microcontrollers are available on the market with different word lengths, such as 4-bit, 8-bit, and 64-bit microcontrollers. A microcontroller is a compressed micro-computer manufactured to control the functions of embedded systems in office machines, robots, home appliances, motor vehicles, and a number of other gadgets.
How to Select the Microcontroller for Your New Product
– The 8052 microcontroller has 3 timers and 256 bytes of RAM; in addition, it has all the features of the traditional 8051 microcontroller. The 8051 is a subset of the 8052.
– The 8031 microcontroller is ROM-less; other than that, it has all the features of a traditional 8051.
| https://us97redmondbend.org/and-pdf/977-types-of-microcontrollers-and-their-applications-pdf-222-389.php
Product Details:
| Specification | Value |
| --- | --- |
| Minimum Order Quantity | 100 Piece |
| Package Type | DIP |
| Pin Count | 28 |
| Maximum Operating Temperature | +85 °C |
| Brand | MICROCHIP |
| ROM Type | Flash |
| RAM | 2 kB |
| Core Size | 8 Bit |
| Maximum Clock Frequency | 20 MHz |
| Number of Timers | 3 |
An ATMEGA328P-PU microcontroller is a small computer on a single integrated circuit. In modern terminology, it is similar to, but less sophisticated than, a system on a chip (SoC); an SoC may include a microcontroller as one of its components. A microcontroller contains one or more CPUs (processor cores) along with memory and programmable input/output peripherals. Program memory, in the form of ferroelectric RAM, NOR flash, or OTP ROM, is also often included on chip, as well as a small amount of RAM.
Microcontrollers are used in automatically controlled products and devices, such as automobile engine control systems, implantable medical devices, remote controls, office machines, appliances, power tools, toys and other embedded systems. By reducing the size and cost compared to a design that uses a separate microprocessor, memory, and input/output devices, microcontrollers make it economical to digitally control even more devices and processes. Mixed signal microcontrollers are common, integrating analog components needed to control non-digital electronic systems. In the context of the internet of things, microcontrollers are an economical and popular means of data collection, sensing and actuating the physical world as edge devices.
Some microcontrollers may use four-bit words and operate at frequencies as low as 4 kHz, for low power consumption (single-digit milliwatts or microwatts). They generally have the ability to retain functionality while waiting for an event such as a button press or other interrupt; power consumption while sleeping (CPU clock and most peripherals off) may be just nanowatts, making many of them well suited for long lasting battery applications. Other microcontrollers may serve performance-critical roles, where they may need to act more like a digital signal processor (DSP), with higher clock speeds and power consumption.
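To make the part in the table above concrete, here is the canonical first program for an ATmega328P, sketched for avr-gcc: toggling the LED on PB5 once per second. The 16 MHz clock is an assumption (it matches a typical Arduino Uno board); F_CPU should be set to the actual clock.

```c
/* Blink: toggle an LED on PB5 (the Arduino Uno's on-board LED)
 * at 1 Hz using the busy-wait delay from avr-libc. */
#define F_CPU 16000000UL
#include <avr/io.h>
#include <util/delay.h>

int main(void) {
    DDRB |= (1 << PB5);        /* PB5 as output */
    for (;;) {
        PORTB ^= (1 << PB5);   /* toggle the LED */
        _delay_ms(500);
    }
}
```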
| https://www.padmavatielectronics.com/pic-micro-controller.html
Retractions and corrections
Articles published by GPPS form part of the scholarly record, the integrity and completeness of which must be protected. As such, published articles cannot be simply amended or removed. If articles are found to contain errors or pose problems, we will issue a correction, expression of concern or retraction notice after investigation, in accordance with the Committee on Publication Ethics’ guidelines (see Wager et al., Retractions: Guidance from the Committee on Publication Ethics (COPE), Version 1, September 2009). Specifically, these guidelines state that:
Journal editors should consider retracting a publication if:
- they have clear evidence that the findings are unreliable, either as a result of misconduct (e.g. data fabrication) or honest error (e.g. miscalculation or experimental error)
- the findings have previously been published elsewhere without proper cross-referencing, permission or justification (i.e. cases of redundant publication)
- it constitutes plagiarism
- it reports unethical research
Journal editors should consider issuing an expression of concern if:
- they receive inconclusive evidence of research or publication misconduct by the authors
- there is evidence that the findings are unreliable but the authors’ institution will not investigate the case
- they believe that an investigation into alleged misconduct related to the publication either has not been, or would not be, fair and impartial or conclusive
- an investigation is underway but a judgement will not be available for a considerable time
Journal editors should consider issuing a correction if:
- a small portion of an otherwise reliable publication proves to be misleading (especially because of honest error)
- the author/contributor list is incorrect (i.e. a deserving author has been omitted or somebody who does not meet authorship criteria has been included)
Only the journal’s editor can decide to retract an article or publish a correction. Retraction or correction notices will state the reason for the notice and who is issuing it, and will clearly identify the affected publication.
Should the publisher introduce errors in the article, it will issue an erratum.
In any case, the original manuscript will remain available online, with a clear message indicating its retracted or corrected status. GPPS participates in the CrossMark scheme, a multi-publisher initiative to provide a standard way for readers to locate the current version of a piece of content. By applying the CrossMark logo GPPS is committing to maintaining the content it publishes and to alerting readers to changes, such as corrections and retractions, if and when they occur. Clicking on the CrossMark logo will tell you the current status of a document as well as the additional publication record information about the document (namely funding, license, copyright, peer review details and publication history).
In very rare cases, chiefly relating to legal issues, articles may be removed from the record (i.e. no longer available, although their metadata will remain).
Small inconsequential errors (e.g. some typos) may be corrected in the published article without issuance of a correction notice. | https://journal.gpps.global/CrossMark-Correction-and-Retraction-Policy,2727.html |
Kuwait Institute for Scientific Research.
cameras, spectrometers, telescopes; and calibration and diagnostic equipment.
Fluent in Arabic and in English. | http://spie.org/profile/Sami.Alaruri-150970 |
Claire is sure of herself, her work and family, until — like a bad dream — her husband disappears, leaving a trail of puzzling secrets that shatter her certainty.
| https://www0.123movieson.com/movies/claire-in-motion/
Derek qualified in 1998 from the University of the Witwatersrand in Johannesburg, South Africa. He is an experienced, forward-thinking dentist committed to providing high-quality dental care to all his clients. He has continued to grow our highly successful practice since taking over in 2003, when we became Christchurch Dental.
He has passionately sought further learning under the guidance of internationally renowned dentists and specialists in the UK and the USA, in order to provide the best level of care for clients.
Derek studied dental implantology in 2006 at the Eastman Dental Institute in London, so he would be able to replace missing teeth with implants – a fantastic long-lasting alternative to dentures. He is thrilled to be able to offer this life-changing treatment option to our clients.
Derek has received advanced training in minor oral surgery and sedation, and he has passed his Diploma of Membership of the Joint Dental Faculties at the Royal College of Surgeons of England (MJDF RCS Eng).
He is committed to continually improving his knowledge and skills so he can remain at the forefront of new enhancements and techniques to provide you with the best results and service possible.
As a testament to the excellence of his work, Derek has a loyal client base. He is known for his quiet, calm and gentle demeanour and this is greatly appreciated by nervous clients. | https://www.christchurchdental.co.uk/team/dr-derek-van-staden/ |
Globally, healthcare costs are rising at an unsustainable rate. Expenditures are expected to increase worldwide from $7.8 trillion in 2013 to more than $18 trillion by 2040 [i]. Unsurprisingly, healthcare costs represent an increasingly larger proportion of global GDP – rising from 8.6% in 2000 to nearly 10% today [ii]. There is no doubt that helping our healthcare customers effectively address escalating costs is critical to the ongoing success of their programs and ours.
The challenge is particularly acute in the U.S., where the Centers for Medicare and Medicaid Services predict that healthcare spending as a percentage of GDP will increase to nearly 20% by 2025 [iii]. Per person, the U.S. spends about twice as much on healthcare as Germany, France, the UK or Canada [iv].
As Chief Commercial Officer at Intuitive, a global leader in minimally invasive care and pioneer of robotic-assisted surgical technology, the cost of healthcare consistently surfaces in my conversations with hospital executives, healthcare providers, payers and others. Value and delivering value-based technologies and solutions are central to how these audiences think about healthcare delivery. That’s why Intuitive is committed to working with our customers to maximize the investments they make in our systems and technologies to improve patient outcomes, deliver clinical benefit and drive economic value.
Calculating value in healthcare is challenging throughout the world. Perceptions of value vary greatly, depending on how broad or narrow a lens on cost – and value – governments, payers and providers have. In the case of a surgical system or intervention, for example, are they looking at costs incurred and value delivered throughout the duration of a patient’s journey in the healthcare system? Pre- and post-operatively for a year or more? Or, just the costs observed and incurred in the operating room?
Simple analytical approaches have historically focused on the upfront costs associated with acquiring drugs, medical devices or other technologies, and the costs associated with their use. That’s only part of the equation. Measuring short- and long-term value in healthcare can be complex, and is made more so by the variety of methodologies that are influenced by geography, author preference, and more. And these analyses can be clouded by the belief that new or innovative devices and technologies add to – rather than help address – rising healthcare costs.
Any true assessment of value must look at both the total cost to treat a patient through the continuum of care and the value delivered throughout this journey. This more comprehensive approach showcases longer-term patient benefits and cost savings and is markedly different from assessments that focus only on the point of healthcare intervention (e.g., surgery) [v].
At Intuitive, we are investing in data analysis, clinical and economic evidence generation and machine learning that will contribute to a deeper understanding of healthcare cost and value through robust and well-supported longitudinal analyses. In doing so, we can provide hospitals and healthcare systems with customized analytics to understand – and model/project – their overall costs and savings.
It’s also important that “value” isn’t just a code word for cost reduction [ix]. We can’t lose sight of what these costs are incurred in service of. Medical device innovation and investment should be aligned with the Institute for Healthcare Improvement’s (IHI) “Triple Aim” of healthcare – improving the patient’s experience of care, improving the health of populations, and reducing the per capita cost of healthcare [x].
While working to help deliver economic value, the passions that really drive Intuitive are rooted in providing technologies and solutions that help surgeons improve patient outcomes. Today, complications in certain open surgical procedures exceed 30% [xi] and more than $41 billion is spent on hospital readmissions in the U.S [xii].
We believe the combination of robotic technology, vision, data analytics, and advanced training can make a difference. Patients should experience better and more consistent outcomes regardless of where they live or what surgeon they see. Similarly, technology such as robotic-assisted surgical simulators can help surgeons get through their learning curve quicker and understand anatomy better, which may ultimately help ensure that important measures, from cancer margins to functional outcomes, are addressed in a minimally invasive manner.
Built on two-plus decades of proven real-world experience with thousands of hospitals, surgeons and patients around the world, these are the considerations that contribute to Intuitive’s definition of delivering value: an understanding of customer and stakeholder needs during a time of rising healthcare costs, coupled with an unwavering commitment to innovate for minimally invasive care to increase predictability for surgeons and hospitals and help improve outcomes for patients.
Disclaimer: Patients should talk to their doctor to decide if da Vinci® Surgery is right for them. Patients and doctors should review all available information on non-surgical and surgical options and associated risks in order to make an informed decision.
In order to provide benefit and risk information, Intuitive Surgical reviews the highest available level of evidence on representative da Vinci® surgical system procedures. Intuitive Surgical strives to provide a complete, fair and balanced view of the clinical literature. However, our materials should not be seen as a substitute for a comprehensive literature review for inclusion of all potential outcomes. We encourage patients and physicians to review the original publications and all available literature in order to make an informed decision. The references provided here are a few examples of literature published for da Vinci Surgery. These references are not a substitute for a comprehensive literature review. Additional clinical literature is available at pubmed.gov.
[ii] “Current Health Expenditure (% of GDP).” The World Bank - IBRD - IDA, The World Bank, data.worldbank.org/indicator/SH.XPD.CHEX.GD.ZS?locations=UA-1W-US.
[iii] “2016-2025 Projections of National Health Expenditures Data Released.” The Centers for Medicare and Medicaid Services Newsroom, 15 Feb. 2017, www.cms.gov/newsroom/press-releases/2016-2025-projections-national-health-expenditures-data-released.
[v] Porter, Michael E. "What is value in health care?." New England Journal of Medicine 363.26 (2010): 2477-2481 https://www.nejm.org/doi/full/10.1056/NEJMp1011024. .
[vi] Chandra, Amitabh, et al. “Robot-Assisted Surgery For Kidney Cancer Increased Access To A Procedure That Can Reduce Mortality And Renal Failure.” Health Affairs, vol. 34, no. 2, 2015, pp. 220–228., doi:10.1377/hlthaff.2014.0986.
[vii] Niklas, C., et al. (2015). "da Vinci and Open Radical Prostatectomy: Comparison of Clinical Outcomes and Analysis of Insurance Costs." Urologia Internationalis 96(3): 287-294.
[ix] Porter, Michael E. "What is value in health care?." New England Journal of Medicine 363.26 (2010): 2477-2481 https://www.nejm.org/doi/full/10.1056/NEJMp1011024.
[x] “The IHI Triple Aim.” Institute for Healthcare Improvement, Institute for Healthcare Improvement, www.ihi.org/Engage/Initiatives/TripleAim/Pages/default.aspx.
[xi] Chen S-T, Wu M-C, Hsu T-C, Yen DW, Chang C-N, Hsu W-T, et al. Comparison of outcome and cost among open, laparoscopic, and robotic surgical treatments for rectal cancer: A propensity score matched analysis of nationwide inpatient sample data. J Surg Oncol. 2017:1-9.
[xii] Hines, Anika L., et al. "Conditions with the largest number of adult hospital readmissions by payer, 2011: statistical brief# 172." (2006). | https://www.forbes.com/sites/intuitivesurgical/2018/10/30/thinking-holistically-about-healthcare-costs-and-long-term-value/ |
What is the purpose of the book?
Explain the significance of the book’s subject matter at this time.
What is the author trying to accomplish?
How is the book relevant to issues and diversity in Canada?
Issues in diversity
What are the key diversity, equity and inclusion issues explored in this book?
What social justice theory does this author use to convey his point?
Please give specific examples of this from the book.
Provide 4 examples and use additional scholarly resources.
Describe and provide 2 specific examples where the book challenges the mainstream ideologies and theories of issues of diversity, equity and inclusion in Canada.
How can this book be applied to advance/improve the criminal justice system and broader Canadian society?
Please explain how this book can be applied in understanding the realities of other communities (Indigenous and People of Colour).
Critical analysis
What, if anything, makes this an important work?
What does the reader stand to gain by reading this book?
What new knowledge/information was produced by this book?
Provide a critique of the book. What is missing? Where are the gaps? What could have been improved? Use additional scholarly resources when answering this question.
Does the book deliver on its promise of highlighting the lived realities and experiences of African Canadians? Does this book provide hope for justice, and if not, explain what justice looks like to you? Use additional scholarly resources.
| https://customessayusa.com/book-review-2/
Nationally, the number of children under age 21 enrolled in Medicaid grew from 23.5 million in 2000 to 40.5 million in 2017, with the proportion of children in Medicaid managed care plans increasing from 65 percent to 94 percent, according to a study from Ann & Robert H. Lurie Children’s Hospital of Chicago published in the journal Academic Pediatrics.
A key principle of managed care is access to routine preventive care services. While managed care has become the predominant form of Medicaid coverage for youth, researchers found only a modest increase in the receipt of preventive care services in this population, with marked variation across states. Whereas some states experienced improvements in preventive care services delivery for children as they implemented managed care, others did not.
State-specific differences in the association of managed care Medicaid with preventive care for youth may include access to primary care, Medicaid reimbursement, availability of clinicians in managed care networks, and state oversight of the quality of care of Medicaid managed care organizations. Managed care by itself is not enough to improve care for children who are covered by Medicaid. States must consider multiple factors that influence access to care and delivery of care at the community and clinic level within managed care systems.”
Jennifer Kusma, MD, Study Lead Author and Physician, Lurie Children’s Hospital, Instructor, Pediatrics, Feinberg School of Medicine, Northwestern University
Dr. Kusma and colleagues used annual state-level data from the Centers for Medicare and Medicaid Services (CMS) to assess the relationship between Medicaid managed care and preventive care encounters for youth. Such services include immunizations, growth and development evaluation, anxiety and depression screening, lead level monitoring and oral health surveillance.
CMS has set a yearly goal of 80 percent participation in the Early and Periodic Screening, Diagnostic, and Treatment benefit, meaning that 80 percent of children enrolled in Medicaid should receive at least one visit or screen in a year.
The frequency of preventive care expectations is based on recommendations from the American Academy of Pediatrics and US Preventive Services Task Force. Routine screenings are particularly important in promptly detecting developmental delays, such as autism spectrum disorder, when early intervention is known to be beneficial.
Regular screenings are also important for adolescent health and well-being, as many engage in higher risk behaviors and up to 20 percent have undiagnosed behavioral health disorders that can be detected during regular check-ups in primary care.
“We found that older children had lower rates of preventive care than younger children,” says Dr. Kusma. “This pattern has been reported in other research, and it reveals an opportunity for managed care plans to help improve quality of care by encouraging preventive care visits for adolescents as well as for younger children.”
Kusma, J. D., et al. (2021) State-Level Managed Care Penetration in Medicaid and Rates of Preventive Care Visits for Children. Academic Pediatrics. doi.org/10.1016/j.acap.2021.02.008. | https://babyinformed.com/2021/03/02/proportion-of-children-in-medicaid-managed-care-plans-increases-from-65-to-94/ |