Despite varying degrees of impact, the allure of an investment policy with the potential to effect corporate and social change has been a recurring theme among a portion of socially minded investors. Known variously as “Socially Responsible Investing (SRI)” or “Environmental, Social, & Governance (ESG) Investing,” the approach has seen various attempts at integration into the investment process since the 1970s.
The Achilles heel of ESG today, as with all previous iterations, is fiduciary responsibility. Plan sponsors make decisions for recipients with a wide range of social orientations; they are unable or unwilling to sacrifice return for social or other exogenous factors and will switch mandates given underperformance. Thus, the argument for ESG investing cannot be that investors are willing to compromise return in order to “feel good” about environmental or social factors. Rather, the argument for ESG must be that investing in such a manner leads to equal or superior performance. Of course, there will always be some mission-driven endowments and foundations that will accept a trade-off, but by and large, no such trade-off will be acceptable. Herein lies the hurdle: the data is mixed. Consequently, many investors remain unconvinced that ESG investing has the ability to outperform.
Until those charged with fiduciary duty are convinced that ESG investing truly leads to greater long-term performance, the market will remain split in its views. | https://thesustainableinvestor.net/2013/11/15/the-hurdle-to-responsible-investing-fiduciary-duty/ |
Chuvash is an agglutinative language of the Bolgar branch of the Turkic language family, spoken west of the Ural Mountains in central Russia. It is the native language of the Chuvash people and an official language of Chuvashia, spoken by about two million people.
This page was last modified on 7 September 2008, at 11:38.
This article uses material from the English Wiktionary entry "Chuvash". | https://end.translatum.gr/wiki/Chuvash |
For these and other reasons, we will make a sub-par decision at times. I’ve done well with my stocks and investing over the years, but I still keep to the basics. I’ve seen the complicated Wall Street stock trading programs (I’ve helped put some of them together while working with advisors), but the basics are all you really need to make money in stocks.
- One important annual fair took place in the city of Antwerp, in present-day Belgium.
- Understanding different categories of stocks is key to building a strong portfolio.
- If these funds buy hundreds or thousands of shares, the sale tends to go fairly quickly.
- The stock market is really a kind of aftermarket, where people who own shares in the company can sell them to investors who want to buy them.
- For example, a company may show a profit of $2 million, but if that only translates to a 3% profit margin, then any significant decline in revenues may threaten the company’s profitability.
These early stock exchanges, however, were more akin to bond exchanges, as the small number of companies did not issue equity. In fact, most early corporations were considered semi-public organizations since they had to be chartered by their government in order to conduct business. Preferred shares are so named because preferred shareholders have priority over common shareholders to receive dividends as well as assets in the event of a liquidation.
These securities are chosen as a sample that reflects how the market in general is behaving. But because these indexes include companies from myriad industries, they are seen as solid indicators of how the U.S. economy is doing overall. “When I’m advising clients … anything under a couple of years, even sometimes three years out, I’m hesitant to take too much market risk with those dollars,” Madsen says. The S&P 500 (also known as the Standard & Poor’s 500) is a stock index that consists of the 500 largest companies in the U.S. It is generally considered the best indicator of how U.S. stocks are performing overall.
Demand normally reflects the prospects for the company’s future performance. Strong demand—the result of many investors wanting to buy a particular stock—tends to result in an increase in the stock’s share price. On the other hand, if the company isn’t profitable or if investors are selling rather than buying its stock, your shares may be worth less than you paid for them.
Getting Back To The Stock Market Basics
However, there’s usually a bit of confusion as to what diversification is exactly. True diversification isn’t just about having a bunch of different types of investments in different accounts. Go to a major financial publication to see how the stock price shifted over different periods of time. On the other hand, companies can also be adversely affected by economic conditions out of their control. For example, the stay-at-home orders in many states related to COVID-19 caused a drop in oil stocks because people not going anywhere caused demand for gas to crater and led to an oversupply of oil.
When it comes to the main pillars of financial wellness – earning, saving, investing and protecting – investing in the stock market can be the most intimidating of the bunch. Sometimes, the market moves from strength to weakness and back to strength in only a few months. Other times, this movement, which is known as a full market cycle, takes years. By skipping the daily financial news, you’ll be able to develop patience, which you’ll need if you want to stay in the investing game for the long term. It’s also useful to look at your portfolio infrequently, so that you don’t become too unnerved or too elated. These are great tips for beginners who have yet to manage their emotions when investing.
Trading stocks, analyzing investments, and following the financial markets are a part of an overall healthy financial strategy. You may not notice gains every day, but investing over the long term is a long-respected way to ensure financial stability in the future. Even if you want only to understand your financial advisor in meetings or build knowledge of new offerings like bitcoin, your knowledge can help in the long term.
To do this, you will incur $50 in trading costs—assuming the fee is $10—which is equivalent to 5% of your $1,000. If you were to fully invest the $1,000, your account would be reduced to $950 after trading costs. This represents a 5% loss before your investments even have a chance to earn.
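The cost arithmetic above is worth making concrete. A minimal sketch, using the article's own hypothetical figures ($1,000 account, $10 fee, five trades):

```python
def trading_cost_impact(amount_invested, fee_per_trade, num_trades):
    """Return total trading fees and their share of the amount invested."""
    total_fees = fee_per_trade * num_trades
    return total_fees, total_fees / amount_invested

fees, share = trading_cost_impact(1000, 10, 5)
print(fees)   # 50
print(share)  # 0.05 -> a 5% drag before the investments earn anything
```

The same function shows why costs matter less at scale: the identical five trades against a $10,000 account would cost only 0.5% of the amount invested.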
Since the publication of “The Intelligent Investor” by Ben Graham, what is commonly known as “value investing” has become one of the most widely respected and widely followed methods of stock picking. There are a number of regular participants in stock market trading. Although stock trading dates back as far as the mid-1500s in Antwerp, modern stock trading is generally recognized as starting with the trading of shares in the East India Company in London. Real estate may be a small part of the portfolio, but it’s an important component for diversification and generating income.
With everyone trying to sell and no one buying, the market crashed. Typically invest in well-established companies that have shown steady profitability over a long period of time and may offer regular dividend income. Value investing is more focused on avoiding risk than growth investing is, although value investors do seek to buy stocks when they consider the stock price to be an undervalued bargain. OTC stocks are stocks that do not meet the minimum price or other requirements for being listed on exchanges. Most stocks are traded on exchanges such as the New York Stock Exchange or the NASDAQ.
Learn the basics of the IPO market and the process of investing in IPOs. This chapter also helps us understand the IPO jargon that is commonly used. Get a FREE share of stock worth up to $1,600 when you open a Webull investing account – learn more here.
The reality is that investing in the stock market carries risk, but when approached in a disciplined manner, it is one of the most efficient ways to build up one’s net worth. While the value of one’s home typically accounts for most of the net worth of the average individual, most of the affluent and very rich generally have the majority of their wealth invested in stocks. Stock prices, and why they rise and fall may seem like another mystery. You will hear about the influence of earnings on stock prices or the economy or the credit market. While all of these factors figure into price changes, they have little direct impact on prices. What these and other factors do is change the balance of supply and demand.
Developing A Trading Strategy
Money that you need for a specific purpose in the next couple years should probably be invested in low-risk investments, such as a high-yield savings account or a high-yield CD. Understanding whether you’re investing for the long-term future or the short term can also help determine your strategy – and whether you should be investing at all. Sometimes short-term investors can have unrealistic expectations about growing their money. And research shows that most short-term investors, such as day traders, lose money. You’re competing against high-powered investors and well-programmed computers that may better understand the market. An alternative to individual stocks is an index fund, which can be either a mutual fund or an exchange-traded fund (ETF).
How To Invest Using The Business Cycle
Such diversification also means that fund shareholders, unlike owners of individual stocks, are at less risk when a single stock drops sharply in value. Because of these desirable features, mutual funds have become a popular investment alternative for many investors. Keep in mind that the price of a stock can fall as easily as it can rise. Investing in stock offers no guarantee that you will make money, and many investors lose money instead. Investors who can satisfy certain securities regulations may sell short, or sell shares of stock they do not actually own.
They are denominated in U.S. dollars and pay dividends in U.S. dollars. In this method, one holds a portfolio of the entire stock market or some segment of the stock market (such as the S&P 500 Index or Wilshire 5000). The principal aim of this strategy is to maximize diversification, minimize taxes from realizing gains, and ride the general trend of the stock market to rise. A financial market is one in which financial assets such as demand deposits, stocks, or bonds are traded.
Since the only sure bottom is zero, when you invest, consider adding protection (i.e. stop orders, options, etc.). To avoid another WorldCom, Enron, or even a 2008 type of disaster, be sure the company you’re buying is worth owning. Unless you love risk, you may wish to avoid putting too much of your money in one stock.
Reporting Investments, Pensions, And Financial Statement Analysis
Listed companies have greater visibility in the marketplace; analyst coverage and demand from institutional investors can drive up the share price. This changes the status of the company from a private firm whose shares are held by a few shareholders to a publicly-traded company whose shares will be held by numerous members of the general public. The IPO also offers early investors in the company an opportunity to cash out part of their stake, often reaping very handsome rewards in the process. When a company establishes itself, it may need access to much larger amounts of capital than it can get from ongoing operations or a traditional bank loan. It can do so by selling shares to the public through an initial public offering.
Eventually you must purchase the same number of shares borrowed and return them to the lender – this is referred to as closing out or covering the short-sale position. Taxation is a consideration of all investment strategies; profit from owning stocks, including dividends received, is subject to different tax rates depending on the type of security and the holding period. In many countries, corporations pay taxes to the government and shareholders once again pay taxes when they profit from owning the stock, known as “double taxation”. Information provided on Forbes Advisor is for educational purposes only. Your financial situation is unique and the products and services we review may not be right for your circumstances. We do not offer financial advice, advisory or brokerage services, nor do we recommend or advise individuals to buy or sell particular stocks or securities.
The investor tells the broker to buy “at the market,” which means to buy shares at the best available price at the time the order reaches the stock exchange. If the investor sets an exact price he or she is willing to pay, the order is called a “limit order,” and no sale can take place unless another stockholder wants to buy or sell at that price. Until 1869 it was easy for a company to have its securities listed on the exchange. | http://www.lottotech.mu/how-does-the-stock-market-work/ |
Warren Buffett’s older sister, Doris, has health problems, including advanced Alzheimer’s disease, the Boston Globe reported, and her charitable work is the subject of disputes involving her foundation’s staff members and her grandson, Alexander Buffett Rozek.
The Globe story by Sacha Pfeiffer, published last weekend as shareholders of Warren Buffett’s Berkshire Hathaway met in Omaha, said Rozek had disagreements with the people who run Doris Buffett’s nonprofit Sunshine Lady Foundation after she moved to Boston from Virginia.
After the story ran, Warren Buffett told The World-Herald, “Alex has taken extraordinarily good care of Doris for a great many years.”
Doris Buffett, 90, attracted attention with the 2010 publication of a book, “Giving It All Away,” about her small grants to people with financial problems and her support for prison education programs and other projects.
Her brother helped fund some of the foundation’s early grants, but she began providing the money herself from Berkshire stock she owns, valued at about $50 million in 2015.
The Globe story said a rift between Rozek, 39, and the Sunshine Lady staff resulted in him and his grandmother leaving the foundation and setting up the separate Letters Foundation, so named because it receives and sorts letters asking for money.
Earlier this month, a former Letters employee, Emily Walsh Holland (no relation to the late Dick Holland of Omaha, an early Berkshire shareholder), pleaded not guilty to charges of taking confidential financial information and Doris Buffett’s medical records from the Letters Foundation, the Globe reported.
Holland told the Globe she believes she was fired because she questioned the way the Letters Foundation’s finances were handled. Rozek told the Globe that an investigative firm reviewed Holland’s concerns and found no evidence of mismanagement. He brought the theft complaint against Holland, the Globe reported.
In a letter to the Globe about the news article, forwarded to The World-Herald by Warren Buffett’s office, Amy Kingman, director of external relations for the Letters Foundation, said the foundation is “doing exactly what Doris Buffett has always loved to do: help others and show her appreciation for her employees and volunteers.”
Kingman said Rozek and other family members “receive no financial benefit from the Letters Foundation and never will.”
She said the foundation has given away more than $3 million to more than 300 people since 2016 and has a small staff and more than 100 volunteers who review thousands of request letters a year.
Bumped from a flight, she missed the Berkshire meeting
Folks who came to Omaha for the May 5 Berkshire meeting had a good time, according to the World-Herald staffers covering the event, but Kalamazoo’s Dr. Virginia Little, daughter of a longtime shareholder, ended up not attending.
She had prepaid three days at the Magnolia Hotel downtown but came slightly late to the airport and was bumped by a weather-delayed passenger, she said. She said she could have gotten another flight the next day but wouldn’t have arrived until early on the morning of the meeting, so she stayed home.
The airline refunded her ticket and Magnolia refunded her first night, but because four or five other weather-delayed guests came to Omaha anyway and she could have come, hotel manager Tim Darby said, it seemed fair to not forgive the second and third nights.
There was some back and forth, and Little isn’t happy. Such disputes happen occasionally, and you would hope it won’t discourage her from being here on May 4, 2019, right?
Honoring Humboldt, 'the forgotten father of environmentalism'
Berkshire weekend isn’t just about money.
An Omaha group invited local guests and German investors, in town for the meeting, to the Fontenelle Forest Nature Center for the Alexander von Humboldt Dinner, named after a German explorer and scientist who traveled all over the Americas.
A contemporary of Thomas Jefferson, Humboldt is the namesake of that chilly Pacific Ocean current that runs north along the west side of South America. He gained fame for his observations and collections in geography, botany, climate, biology and other fields, but he has nearly dropped out of public consciousness.
Researcher and author Andrea Wulf rediscovered that missing piece of history with “The Invention of Nature: Alexander von Humboldt’s New World” (Vintage Books, 559 pages, $17).
She told the forest-goers that Humboldt’s understanding of the world influenced naturalist John Muir and poet Henry David Thoreau, among many others, but today he is “the forgotten father of environmentalism.”
Anyone want a purple ukulele?
A pair of visitors were still smiling when they left the Regency shopping center on the day after the Berkshire meeting, but they had failed in their mission: to give Buffett a ukulele with a Berkshire Hathaway Home Services-themed paint job.
Leigh Voruz, an Omaha Public Schools middle school art teacher, artist and founder of Unique New Uke, and her son Max, 8, who got a toy locomotive at the mall, knew Buffett is a ukulelist and hand-painted the instrument but didn’t spot him.
I have her number if someone from the company is interested in a purple ukulele.
Meeting the 'Early Bird' author
In April I mentioned Maya Peterson’s “Early Bird” book, aimed at young investors, but hadn’t met the high school student from St. Paul, Minnesota, until I saw her at the Hilton Omaha coffee shop after she ran in the post-meeting “Invest in Yourself 5K” charity race.
Next year, she hopes to have the book sold at the meeting. At this year’s session, Buffett discussed his first investment, at age 11, which fits exactly with her thesis: Start young so you have plenty of time to compound your money.
She’s trying to organize an investment club at her high school and was heartened at the number of children attending the meeting. | https://omaha.com/money/buffett/doris-buffett-has-alzheimer-s-and-now-her-foundation-is/article_fcfd35a8-89e0-527c-a795-063216437a3c.html |
Refugees prefer to stay in cities, but are cities ready for people seeking refuge? Cities offer the kind of anonymity that allows refugees some degree of freedom, unlike in prison-like “camp” situations. It is in cities that refugees hope to integrate better – with the mobility, the relative heterogeneity, and the economic opportunities that urban areas offer. Many seek towns as the best option to provide for their families. World Refugee Day is an opportune moment to see how we can make our cities more welcoming to refugees.
For many years, the dominant thinking has been that the refugee crisis is temporary and reversible, with the hope that refugees will return home sooner rather than later. In line with this narrative, refugees have long been housed in camps and camp-like situations. Such confinement allowed services – especially food, sanitation and documentation – to be delivered more efficiently while making it easier for governments to track and monitor refugees. Camps have also served as bases for further transfers, repatriations, or eventual integration in host countries.
It is increasingly clear that refugee crises are protracted issues in the modern world; in most cases, they are irreversible. UNHCR estimates that the duration of displacement is already high and increasing further. Protracted refugee situations across the globe last an estimated 26 years on average. Many refugees spend decades under challenging conditions in camps, with children knowing no other home while awaiting the future. The UN Refugee Agency, UNHCR, has advised member states that camps should be the exception rather than the rule.
ActionAid Association’s work with refugees – including its most recent humanitarian response supporting refugees in Poland and Romania with local NGOs – indicates that a youthful refugee population aspires to stay in cities. Urban areas offer not just anonymity and some freedom but also large, informal labour markets with unregulated openings for work without permits. There are better prospects for self-enterprise and informal work in towns. Affordable shelter, health and education facilities are more available in cities, and public services are better developed and therefore more accessible.
Given that refugee camps with protracted refugee displacements turn into almost permanent colonies, often away from markets and existing infrastructures, the future geography of refuge may be more and more urban. The idea of remotely located refugee encampments may need to be permanently buried, and planning for refugee settlement in cities should be the reality for a better future.
For now, the cities are not prepared for refugees. They have little experience in making policies and plans for hosting refugees. Since refugees are considered the responsibility of national Governments, cities have not considered the question of refugees in their planning and policy thinking. The recent Ukrainian refugee crisis, for instance, is now pushing towns in Europe to start implementing solutions for hosting refugees. Some years ago, mayors and leaders from United Cities and Local Governments, an umbrella organisation for cities, local and regional governments, and municipal associations throughout the world, met with International Organisation for Migration and organised 150 cities around the world to sign the Mechelen Declaration which contained the “rights of urban refugees”.
While UNHCR recognises that most refugees are in cities and will seek to move there in the future, there is little coordination with the UN agency for human settlements and urban development (UN HABITAT) on what needs to be done to support refugees in cities. Most countries also resist the idea of giving up camps or camp-like isolated settlements since tracking, controlling or repatriating them becomes difficult in cities.
The welfare state needs to step in to care for refugees. Throughout history, refugees have contributed significantly to the growth of nations’ economies, diversity and futures. An embrace of refugees perhaps embodies the path for tomorrow: a call to cities to build solidarity, care and support amongst the most oppressed members of humankind, with scarred pasts and unwelcoming futures.
Such a pathway means creating social housing facilities, allowing temporary and, in time, regular work permits, and ensuring refugees are part of all social protection programmes available to citizens until they choose to return home or move onwards.
It is about creating decent work and social welfare that is universal for citizens and refugees alike. Therefore, a semi-permanent work permit, revocable on onward movement, would be the basis for ensuring that refugees can not only make a livelihood but also, without discrimination, contribute as residents to the cities in which they now seek refuge. In addition, access to education and health services must be made freely available.
Cities must also incorporate refugees in their planning and make spaces for refugee involvement in their local governance. Community-based organisations must be encouraged in refugee communities to make the principle of participation and consultation more grounded in everyday urban life.
We also need to recognise that those whose flagrant footprints across the geo-politics of the globe have turned citizens into refugees have an accountability and responsibility to bear. Host cities in poorer countries, in addition to support from national governments, would also need international economic help to meet the needs of their newest members. The principle of common but differentiated responsibility applies as also that of equity. A lot needs to be done to prepare cities for refugees.
Disclaimer: The article was originally published on Morning Mail Live. The views expressed in the article are the author’s and do not necessarily reflect those of ActionAid Association.
| https://www.actionaidindia.org/blog/refugees-and-thecityweneednow/ |
Scottish wind farm operators received more than £14 million in the past two years in return for switching off their turbines, it has been revealed.
Since 2010, a system of “constraint payments” has been operated by the National Grid to compensate windfarms if they are taken off the grid when it cannot cope with high supply.
A breakdown of the payments made to wind farms has been published by the Renewable Energy Foundation (REF), a charity that has been sceptical of wind power. It believes the data should be made public.
All 15 of the wind farm sites that received constraint payments are in Scotland. They received a total of £14,249,194 in compensation over the past two years.
The 40-turbine Farr Wind Farm, near Inverness, which is operated by nPower renewables, received the highest payment, of more than £2.3m.
The lowest payment went to ScottishPower Renewables’ 60-turbine Arecleoch Wind Farm in South Ayrshire, which received £24,584.
The REF claimed the figures showed that the price of the constraint payments was often many times more than the loss in subsidy payments for wind farms, which are withdrawn for the period when they are taken off the grid, “suggesting that the market is not functioning in the consumer interest”.
Dr John Constable, director of REF, said: “The introduction of opaque trading arrangements to manage wind power is a very unwelcome step in the wrong direction and must be reversed without delay.
“It is time for the regulator, Ofgem, to step in to protect the consumer interest by ensuring UK electricity markets become more transparent, not less.”
The overloading problem in Scotland is a result of a lack of capacity in the grid to carry the electricity generated by growing numbers of renewable energy schemes.
Responding to REF’s figures, Catherine Birkbeck, grid and markets policy manager at Scottish Renewables, highlighted that all energy producers received constraint payments, not just wind farms.
“All electricity generators, including coal and gas power stations, are paid not to generate at times of lower than expected demand or when there is congestion on the grid,” she said.
“The payments made to renewables are tiny compared to what is paid to fossil-fuelled electricity generation.
“Scottish Renewables is working closely with National Grid to address how the industry can work with the regulator to ensure constraint payments are kept to a minimum.” She also questioned REF’s agenda, pointing out that it is a “well- established anti-wind farm group”.
Overall constraint payments to all types of generators, including fossil fuel firms, totalled £708m for the financial year 2010-11, and consumer groups recently called for a cap to be put on the payouts to energy firms.
A spokesman for energy regulator Ofgem said: “We are reviewing the rules around generator behaviour when transmission constraints are active.”
A National Grid spokesman said that it used various trading agreements and tools to control costs. He added: “They help keep prices, to manage constraints and balance the electricity transmission system, as low as possible.”
| https://www.wind-watch.org/news/2012/01/26/wind-farms-paid-14m-by-national-grid-for-switching-off-turbines/ |
Guest post by former student, William Matchin:
+++++++++++++++++++++++++++++++++++++
It’s been almost 10 years since the Society for the Neurobiology of Language conference (SNL) began, and it is always one of my favorite events of the year, where I catch up with old friends and see and discuss much of the research that interests me in a compact form. This year’s meeting was no exception. The opening night talk about dolphin communication by Diana Reiss was fun and interesting, and the reception at the Baltimore aquarium was spectacular and well organized. I was impressed with the high quality of many of the talks and posters. This year’s conference was particularly interesting to me in terms of the major trending ideas that were circulating at the conference (particularly the keynote lectures by Yoshua Bengio & Edward Chang), so I thought I would write some of my impressions down and hear what others think. I also have some thoughts about Society for Neuroscience (SfN), in particular one keynote lecture: Erich Jarvis, who discussed the evolution of language, with the major claim that human language is continuous with vocal learning in non-human organisms. Paško Rakić, who gave a history of his research in neuroscience, also had an interesting comment on the tradeoff between empirical research and theoretical development and speculation, which I will also discuss briefly.
The notions of abstractness, innateness, and modality-independence of language loomed large at both conferences; much of this post is devoted to these issues. The number of times that I heard a neuroscientist or computer scientist make a logical point that reminded me of Generative Grammar was shocking. In all, I had an awesome conference season, one that gives me great hope and anticipation for the future of our field, including much closer interaction between biologists & linguists. I encourage you to visit the Faculty of Language blog, which often discusses similar issues, mostly in the context of psychology and linguistics.
1. Abstractness & combinatoriality in the brain
Much of the work at the conference this year touched on some very interesting topics, ones that linguists have been addressing for a long time. It seemed that for a while embodied cognition and the motor theory of speech perception were dominant topics, but now the tables have turned. There were many presentations showing how the brain processes information and converts raw sensory signals into abstract representations. For instance, Neal Fox presented ECoG data on a speech perception task, illustrating that particular electrodes in the superior temporal gyrus (STG) dynamically encode voice onset time as well as categorical voicing perception. Then there was Edward Chang’s talk. I should think that everyone at SNL this year would agree that his talk was masterful. He clearly illustrated how distinct locations in STG have responses to speech that are abstract and combinatorial. The results regarding prosody were quite novel to me, and nicely illustrate the abstract and combinatorial properties of the STG, so I shall review them briefly here.
Prosodic contours can be dramatically different in frequency space for different speakers and utterances, yet they share an underlying abstract structure (for instance, rising question intonation at the end of a sentence). It appears that certain portions of the STG are selectively interested in particular prosodic contours independently of the particular sentence or speaker; i.e., they encode abstract prosodic information. How can a brain region encode information about prosodic contour independently of speaker identity? The frequency range of speech among speakers can vary quite dramatically, such that the entire range for one speaker (say, a female) can be completely non-overlapping with another speaker (say, a male) in frequency space. This means that the prosodic contour cannot be defined physically, but must be converted into some kind of psychological (abstract) space. Chang reviewed literature suggesting that listeners normalize pitch information by the speaker’s fundamental frequency, thus arriving at an abstract pitch contour that is independent of speaker identity. This is similar to work by Phil Monahan and colleagues (Monahan & Idsardi, 2010), who showed that vowel normalization can be obtained by dividing F1 and F2 by F3.
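The arithmetic behind this kind of normalization is simple enough to sketch. The snippet below is my own toy illustration, not the actual analysis pipeline from Tang et al.: dividing a pitch contour by the speaker’s median f0 removes the speaker’s absolute range, so a rising “question” contour comes out identical in shape for a low-pitched and a high-pitched speaker.

```python
import numpy as np

def normalize_pitch(f0_contour):
    """Express a pitch contour relative to the speaker's own baseline.

    Dividing each sample by the speaker's median f0 (a stand-in for the
    fundamental-frequency normalization Chang described) yields a contour
    in relative units that is comparable across speakers.
    """
    f0_contour = np.asarray(f0_contour, dtype=float)
    return f0_contour / np.median(f0_contour)

# A rising "question" contour produced by two speakers whose absolute
# ranges do not overlap (roughly 100 Hz male vs. 220 Hz female).
male = 100.0 * np.linspace(1.0, 1.3, 50)
female = 220.0 * np.linspace(1.0, 1.3, 50)

# After normalization the two contours are identical in shape, because
# the speaker-specific scale factor divides out.
assert np.allclose(normalize_pitch(male), normalize_pitch(female))
```

The same logic applies to the formant-ratio account of vowel normalization (F1/F3, F2/F3): dividing out a speaker-specific reference frequency leaves an abstract, speaker-independent representation.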
From Tang, Hamilton & Chang (2017). Different speakers can have dramatically different absolute frequency ranges, posing a problem for how common underlying prosodic contours (e.g., a Question contour) can be identified independently of speaker identity.
Chang showed that the STG also encodes abstract responses to speaker identity (the same response regardless of the particular sentence or prosodic contour) and phonetic features (the same response to a particular sentence regardless of speaker identity or pitch contour). Thus, it is not the case that there are some features that are abstract and others are not; it seems that all of the relevant features are abstract.
From Tang, Hamilton & Chang (2017). Column 1 shows the responses for a prosody-encoding electrode. The electrode distinguishes among different prosodic contours, but not different sentences (i.e., different phonetic representations) or speakers.
Why do I care about this so much? Because linguists (among other cognitive scientists) have been talking for decades about abstract representations, and I think there has often been skepticism about whether the brain could encode abstractness. But the new ECoG work by Chang and others illustrates that much of the organization of the speech cortex centers around abstraction – in other words, abstraction seems to be the thing the brain cares most about, computing it rapidly and robustly in sensory cortex.
Two last points. First, Edward also showed that all of the properties identified in the left STG are also found in the right STG, consistent with the claim that speech perception is bilateral rather than unilateral (Hickok & Poeppel, 2000). Thus, it does not seem that speech perception is the key to language laterality in humans (but maybe syntax – see section 3). Second, the two of us also had a nice chat about what his results mean for the innateness and development of these functional properties of the STG. He was of the opinion that the STG innately encodes these mechanisms, and that different languages make different use of this pre-existing phonetic toolbox. This brings me to the next topic, which centers on the issue of what is innate about language.
2. Deep learning and poverty of the stimulus
Yoshua Bengio gave one of the keynote lectures at this year’s SNL. For the uninitiated (such as myself), Yoshua Bengio is one of the leading figures in the field of deep learning. He stayed the course during the dark ages of connectionist neural network modeling, thinking that there would eventually be a breakthrough (he was right). Deep learning is the next phase of connectionist neural network modeling, centered on the use of massive amounts of training data and many hidden network layers. Such computer models can correctly generate descriptions of pictures and translate between languages – in sum, things for which people are willing to pay money. Given this background, I expected to hear him say something like this in his keynote address: deep learning is awesome, we can do all the things that we hoped to be able to do in the past, Chomsky is wrong about humans requiring innate knowledge of language.
Instead, Bengio made a poverty of the stimulus argument (POS) in favor of Universal Grammar (UG). Not in those words. But the logic was identical.
For those unfamiliar with POS, the logic is that human knowledge, for instance language, is underdetermined by the input. Question: You never hear ungrammatical sentences (such as *who did you see Mary and _), so how do you know that they are ungrammatical? Answer: Your mind innately contains the relevant knowledge to make these discriminations (such as a principle like Subjacency), making learning them unnecessary. POS arguments are central to generative grammar, as they provide much of the motivation for a theory of UG, UG being whatever is encoded in your genome that enables you to acquire a language, and what is lacking in things that do not learn language (such as kittens and rocks). I will not belabor the point here, and there are many accessible articles on the Faculty of Language blog that discuss these issues in great detail.
What is interesting to me is that Bengio made a strong POS argument perhaps without realizing that he was following Chomsky’s logic almost to the letter. Bengio’s main point was that while deep learning has had a lot of successes, such computer models make strange mistakes that children would never make. For instance, the model would name a picture of an animal correctly on one trial, but with an extremely subtle change to the stimulus on the next trial (a change imperceptible to humans), the model might give a wildly wrong answer. This is directly analogous to Chomsky’s point that children never make certain errors, such as formulating grammatical rules that use linear rather than structural representations (see Berwick et al., 2011 for discussion). Bengio extended this argument, adding that children have access to dramatically less data than deep learning computer models do, which shows that the issue is not the amount or quality of data (very similar to arguments made repeatedly by Chomsky, for instance, this interview from 1977). For these reasons, Bengio suggested the following solution: build in some innate knowledge that guides the model to the correct generalizations. In other words, he made a strong POS argument for the existence of UG. I nearly fell out of my seat.
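To make the “errors children never make” point concrete, here is a toy example of my own (not one Bengio used) of the classic linear-vs-structural contrast in yes/no question formation. A learner using a linear rule (“front the first auxiliary”) produces exactly the kind of error children are never observed to make; the structural rule targets the main-clause auxiliary instead.

```python
# Declarative: "the man who is tall is happy"
# Correct question: "is the man who is tall _ happy"

def front_first_aux(words):
    """Linear rule: move the first 'is' to the front of the sentence."""
    i = words.index("is")
    return ["is"] + words[:i] + words[i + 1:]

def front_main_clause_aux(words, main_aux_index):
    """Structural rule: move the auxiliary of the MAIN clause.

    A real grammar finds this index from hierarchical structure; here it
    is supplied by hand to keep the toy self-contained.
    """
    return ["is"] + words[:main_aux_index] + words[main_aux_index + 1:]

sentence = "the man who is tall is happy".split()

print(" ".join(front_first_aux(sentence)))
# → is the man who tall is happy   (ungrammatical; the error no child makes)
print(" ".join(front_main_clause_aux(sentence, 5)))
# → is the man who is tall happy   (grammatical)
```

Both rules agree on simple sentences like “the man is happy,” which is why impoverished input cannot decide between them – the POS point is that children nonetheless never entertain the linear rule.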
People often misinterpret what UG means. The claim really boils down to the fact that humans have some innate capacity for language that other things do not have. It seems that everyone, even leading figures in connectionist deep learning, can agree on this point. It only gets interesting when figuring out the details, which often include specific POS arguments. And in order to determine the details about what kinds of innate knowledge should be encoded in genomes and brains, and how, it would certainly be helpful to invite some linguists to the party (see part 5).
3. What is the phenotype of language? The importance of modality-independence to discussions of biology and evolution.
The central question that Erich Jarvis addressed during his Presidential Address on the opening night of this year’s SfN was whether human language is an elaborate form of the vocal learning seen in other animals or rather a horse of a different color altogether. Jarvis is an expert on the biology of birdsong, and he argued that human language is continuous with vocal learning in non-human organisms both genetically and neurobiologically. He presented a wide array of evidence to support his claim, mostly along the lines of showing how the genes and parts of the brain that do vocal learning in other animals have closely related correlates in humans. However, there are three main challenges to a continuity hypothesis that were either entirely omitted or extravagantly minimized: syntax, semantics, and sign language. It is remiss to discuss the biology and evolution of a trait without clearly specifying the key phenotypic properties of that trait, which for human language include the ability to generate an unbounded array of hierarchical expressions that have both a meaning and a sensory-motor expression, which can be auditory-vocal or visual-manual (and perhaps even tactile, Carol Chomsky, 1986). If somebody had only the modest aim of discussing the evolution of vocal learning, I would understand omitting these topics. But Jarvis clearly had the aim of discussing language more broadly, and his second slide included a figure by Hauser, Chomsky & Fitch (2002), which served as the bull’s-eye for his arguments. Consider the following a short response to his talk, elaborating on why it is important to discuss the important phenotypic traits of syntax, semantics, and modality-independence.
It is a cliché that sentences are not simply sequences of words, but rather hierarchical structures. Hierarchical structure was a central component of Hauser, Chomsky & Fitch’s (2002) proposal that syntax may be the only component of human language that is specific to it, as part of the general Minimalist approach of trying to reduce UG to a conceptual minimum (note that Bengio, Jarvis and Chomsky all agree on this point – none of them want a rich, linguistically-specific UG, and all of them argue against it). Jarvis is not an expert on birdsong syntax, so it is perhaps unfair to expect him to discuss syntax in detail. However, Jarvis merely mentioned that some have claimed to identify recursion in birdsong (Gentner et al., 2006), feeling that to be sufficient to dispatch syntax. He did not mention the work debating this issue (Berwick et al., 2012), which illustrates that birdsong has syntax that is roughly equivalent to phonology, but not human sentence-level syntax. This work suggests that birdsong may be quite relevant to human language as a precursor system to human phonology (fascinating if true), but it does not appear capable of accounting for sentence-level syntax. In addition, the main interesting thing about syntax is that it combines words to produce new meanings, unlike birdsong, which does not.
With respect to semantics, Jarvis showed that dogs can learn to respond to our commands, such as sitting when we say “sit”. He suggested that because dogs can “comprehend” human speech, they have a precursor to human semantics. But natural language semantics is far more than this. We combine words that denote concepts into sentences that denote events (Parsons, 1990). We do not have very good models of animal semantics, but a stimulus–response pairing is probably a poor one. It may very well be true that non-human primates have a semantic system similar to ours – desirable from a Minimalist point of view – but it needs to be explored beyond pointing out that animals learn responses to stimuli. Many organisms learn stimulus–response pairings, probably including insects – do we want to claim that they have a semantic system similar to ours?
The most important issue for me was sign language. I do not think Jarvis mentioned sign language once during the entire talk (I believe he briefly mentioned gestures in non-human animals). As somebody who works on the neurobiology of American Sign Language (ASL), this was extraordinarily frustrating (I cannot imagine the reaction of my Deaf colleagues). I believe that one of the most significant observations about human language is that it is modality-independent. As linguists have repeatedly shown, all of the relevant properties of linguistic organization found in spoken languages are found in sign languages: phonology, morphology, syntax, semantics (Sandler & Lillo-Martin, 2006). Deaf children raised by deaf parents learn sign language in the same way that hearing children learn spoken language, without instruction, including a babbling stage (Petitto & Marentette, 1991). Sign languages show syntactic priming just like spoken languages (Hall et al., 2015). Aphasia is similarly left-lateralized in sign and spoken languages (Hickok et al., 1996), and neuroimaging studies show that sign and spoken language activate the same brain areas when sensory-motor differences are factored out (Leonard et al., 2012; Matchin et al., 2017a). For instance, in the Mayberry and Halgren labs at UCSD we showed using fMRI that left hemisphere language areas in the superior temporal sulcus (aSTS and pSTS) show a correlation between constituent structure size and brain activation in deaf native signers of ASL (6W: six word lists; 2S: sequences of three two-word phrases; 6S: six word sentences) (Matchin et al., 2017a). When I overlap these effects with similar structural contrasts in English (Matchin et al., 2017b) or French (Pallier et al., 2011), there is almost perfect overlap in the STS. Thus, both signed and spoken languages involve a left-lateralized combinatorial response to structured sentences in the STS.
This is consistent with reports of a human-unique hemispheric asymmetry in the morphology of the STS (Leroy et al., 2015).
TOP: Matchin et al., in prep (ASL). BOTTOM: Pallier et al., 2011 (French).
Leonard et al. (2012), also from the Mayberry and Halgren labs, show that semantically modulated activity in MEG for auditory speech and sign language activates pSTS is nearly identical in space and time.
All of these observations tell us that there is nothing important about language that must be expressed in the auditory-vocal modality. In fact, it is conceptually possible to imagine that in an alternate universe, humans predominantly communicate through sign languages, and blind communities sometimes develop strange “spoken languages” in order to communicate with each other. Modality-independence has enormous ramifications for our understanding of the evolution of language, as Chomsky has repeatedly noted (Berwick & Chomsky, 2015; this talk, starting at 3:00). In order to make the argument that human language is continuous with vocal learning in other animals, sign language must be satisfactorily accounted for, and it’s not clear to me how it can. This has social ramifications too. Deaf people still struggle for appropriate educational and healthcare resources, which I think stems in large part from ignorance about how sign languages are fully equivalent to spoken languages among the scientific and medical community.
When I tweeted at Jarvis pointing out the issues I saw with his talk, he responded skeptically:
At my invitation, he stopped by our poster, and we discussed our neuroimaging research on ASL. He appears to be shifting his opinion:
This reaffirms to me how important sign language is to our understanding of language in general, and how friendly debate is useful to make progress in understanding scientific problems. I greatly appreciate that Erich took the time to politely respond to my questions, come to our poster, and discuss the issues.
If you are interested in learning more about some of the issues facing the Deaf community in the United States, please visit Marla Hatrak’s blog: http://mhatrak.blogspot.com/, or Gallaudet University’s Deaf Education resources: http://www3.gallaudet.edu/clerc-center/info-to-go/deaf-education.html.
4. Speculative science
Paško Rakić is a famous neuroscientist, and his keynote lecture at SfN gave a history of his work throughout the last several decades. I will only give one observation about the content of his work: he thinks that it is necessary to posit innate mechanisms when trying to understand the development of the nervous system. One of his major findings is that cortical maps are not emergent, but rather are derived from precursor “protomaps” that encode the topographical organization that ends up on the cortical surface (Rakić, 1988). Again, it seems as though some of the most serious and groundbreaking neuroscientists, both old and new, are thoroughly comfortable discussing innate and abstract properties of the nervous system, which means that Generative Grammar is in good company.
Rakić also made an interesting commentary on the current sociological state of affairs in the sciences. He discussed an earlier researcher (I believe from the late 1800s) who performed purely qualitative work speculating about how certain properties of the nervous system developed. He said that this research, which serves as a foundation for his own work, would probably be rejected today because it would be seen as too “speculative”. He mentioned how the term speculative used to be perceived as a compliment, as it meant that the researcher went usefully beyond the data, thinking about how the world is organized and developing a theory that would make predictions for future research (he had a personal example of this: he predicted the existence of a particular molecule that he did not discover for another 35 years).
This comment resonated with me. I am always puzzled about the lack of interest in theory and the extreme interest in data collection and analysis: if science isn’t about theory, also known as understanding the world, then what is it about? I get the feeling that people are afraid to postulate theories because they are afraid to be wrong. But every scientific theory that has ever been proposed is wrong, or will eventually be shown to be wrong, at least with respect to certain details. The point of a theory is not to be right, it’s to be right enough. Then it can provide some insight into how the world works which serves as a guide to future empirical work. Theory is a problem when it becomes misguiding dogma; we shouldn’t be afraid of proposing, criticizing, and modifying or replacing theories.
The best way to do this is to have debates that are civil but vigorous. My interaction with Erich Jarvis regarding sign language is a good example of this. One of the things I greatly missed about this year’s SNL was the debate. I enjoy these debates, because they provide the best opportunity to critically assess a theory by finding a person with a different perspective who we can count on to find all of the evidence against a theory, saving us the initial work of finding this evidence ourselves. This is largely why we have peer review, even with its serious flaws – the reviewer acts in part as a debater, bringing up evidence or other considerations that the author hasn’t thought of, hopefully leading to a better paper. I hope that next year’s SNL has a good debate about an interesting topic. I also feel that the conference could do well to encourage junior researchers to debate, as there is nothing better for personal improvement in science than interacting with an opposing view to sharpen one’s knowledge and logical arguments. It might be helpful to establish ground rules for these debates, in order to ensure that they do not cross the line from debate to contentious argument.
5. Society for the Neurobiology of …
I have pretty much given up on hoping that the “Language” part of the Society for the Neurobiology of Language conference will live up to its moniker. This is not to say that SNL does not have a lot of fine quality research on the neurobiology of language – in fact, it has this in spades. What I mean is that there is little focus in the conference on integrating our work with people who spend their lives trying to figure out what language is: linguists and psycholinguists. I find great value in these fields, as language theory provides a very useful guide for my own research. I don’t always take the letter of language theory in detail, but rather use it as inspiration for the kinds of things one might find in the brain.
This year, there were some individual exceptions to this general rule of linguistic omission at the conference. I was pleased to see some posters and talks that incorporated language theory, particularly John Hale’s talk on syntax, computational modeling, and neuroimaging. He showed that anterior and posterior temporal lobe are good candidates for basic structural processes, but not the IFG – no surprise but good to see converging evidence (see Brennan et al., 2016 for details). But, my interest in Hale’s talk only highlighted the trend towards omission of language theory at SNL that can be well illustrated by looking at the keynote lectures and invited speakers at the conference over the years.
There are essentially three kinds of talks: (i) talks about the neurobiology of language, (ii) talks about (neuro)biology, and (iii) talks about non-language communication, cognition, or information processing. What’s missing? Language theory. Given that the whole point of our conference is about the nature of human language, one would think that this is an important topic to cover. Yet I don’t think there has ever been a keynote talk at SNL about psycholinguistics or linguistics. I love dolphins and birds and monkeys, but doesn’t it seem a bit strange that we hear more about basic properties of non-human animal communication than human language? Here’s the full list of keynote speakers at SNL for every conference in the past 9 years – not a single talk that is clearly about language theory (with the possible exception of Tomasello, although his talk was about very general properties of language with a lot of non-human primate data).
2009
Michael Petrides: Recent insights into the anatomical pathways for language
Charles Schroeder: Neuronal oscillations as instruments of brain operation and perception
Kate Watkins: What can brain imaging tell us about developmental disorders of speech and language?
Simon Fisher: Building bridges between genes, brains and language
2010
Karl Deisseroth: Optogenetics: Development and application
Daniel Margoliash: Evaluating the strengths and limitations of birdsong as a model for speech and language
2011
Troy Hackett: Primate auditory cortex: principles of organization and future directions
Katrin Amunts: Broca’s region -- architecture and novel organizational principles
2012
Barbara Finlay: Beyond columns and areas: developmental gradients and reorganization of the neocortex and their likely consequences for functional organization
Nikos Logothetis: In vivo connectivity: paramagnetic tracers, electrical stimulation & neural-event triggered fMRI
2013
Janet Werker: Initial biases and experiential influences on infant speech perception development
Terry Sejnowski: The dynamic brain
Robert Knight: Language viewed from direct cortical recordings
2014
Willem Levelt: Localism versus holism. The historical origins of studying language in the brain
Constance Scharff: Singing in the (b)rain
Pascal Fries: Brain rhythms for bottom-up and top-down signaling
Michael Tomasello: Communication without conventions
2015
Susan Goldin-Meadow: Gestures as a mechanism of change
Peter Strick: A tale of two primary motor areas: “old” and “new” M1
Marsel Mesulam: Revisiting Wernicke’s area
Marcus Raichle: The restless brain: how intrinsic activity organizes brain function
2016
Mairéad MacSweeney: Insights into the neurobiology of language processing from deafness and sign language
David Attwell: The energetic design of the brain
Anne-Lise Giraud: Modelling neuronal oscillations to understand language neurodevelopmental disorders
2017
Argye Hillis: Road blocks in brain maps: learning about language from lesions
Yoshua Bengio: Bridging the gap between brains, cognition and deep learning
Ghislaine Dehaene-Lambertz: The human infant brain: A neural architecture able to learn language
Edward Chang: Dissecting the functional representations of human speech cortex
I was at most of these talks; most of them were great, and the rest at least entertaining. But it seems to me that the great advantage of keynote lectures is to learn about something outside of one’s field that is relevant to it, and both neurobiology AND language fit this description. This is particularly striking given the importance of theory to much of the scientific work I described in this post. And I can think of many linguists and psycholinguists who would give interesting and relevant talks, and who are also interested in neurobiology and want to chat with us. At the very least, they would be entertaining. Here are just some that I am thinking of off the top of my head: Norbert Hornstein, Fernanda Ferreira, Colin Phillips, Vic Ferreira, Andrea Moro, Ray Jackendoff, and Lyn Frazier. And if you disagree with their views on language, well, I’m sure they’d be happy to have a respectful debate with you.
All told, this was a great conference season, and I’m looking forward to what the future holds for the neurobiology of language. Please let me know your thoughts on these conferences, and what I missed. I look forward to seeing you at SNL 2018, in Quebec City!
-William
Check out my website: www.williammatchin.com, or follow me on twitter: @wmatchin
References
Berwick, R. C., & Chomsky, N. (2015). Why only us: Language and evolution. MIT press.
Berwick, R. C., Pietroski, P., Yankama, B., & Chomsky, N. (2011). Poverty of the stimulus revisited. Cognitive Science, 35(7), 1207-1242.
Berwick, R. C., Beckers, G. J., Okanoya, K., & Bolhuis, J. J. (2012). A bird’s eye view of human language evolution. Frontiers in evolutionary neuroscience, 4.
Brennan, J. R., Stabler, E. P., Van Wagenen, S. E., Luh, W. M., & Hale, J. T. (2016). Abstract linguistic structure correlates with temporal activity during naturalistic comprehension. Brain and language, 157, 81-94.
Chomsky, C. (1986). Analytic study of the Tadoma method: Language abilities of three deaf-blind subjects. Journal of Speech, Language, and Hearing Research, 29(3), 332-347.
Gentner, T. Q., Fenn, K. M., Margoliash, D., & Nusbaum, H. C. (2006). Recursive syntactic pattern learning by songbirds. Nature, 440(7088), 1204-1207.
Hall, M. L., Ferreira, V. S., & Mayberry, R. I. (2015). Syntactic Priming in American Sign Language. PloS one, 10(3), e0119611.
Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The faculty of language: what is it, who has it, and how did it evolve?. science, 298(5598), 1569-1579.
Hickok, G., Bellugi, U., & Klima, E. S. (1996). The neurobiology of sign language and its implications for the neural basis of language. Nature, 381(6584), 699-702.
Hickok, G., & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in cognitive sciences, 4(4), 131-138.
Leonard, M. K., Ramirez, N. F., Torres, C., Travis, K. E., Hatrak, M., Mayberry, R. I., & Halgren, E. (2012). Signed words in the congenitally deaf evoke typical late lexicosemantic responses with no early visual responses in left superior temporal cortex. Journal of Neuroscience, 32(28), 9700-9705.
Leroy, F., Cai, Q., Bogart, S. L., Dubois, J., Coulon, O., Monzalvo, K., ... & Lin, C. P. (2015). New human-specific brain landmark: the depth asymmetry of superior temporal sulcus. Proceedings of the National Academy of Sciences, 112(4), 1208-1213.
Matchin, W., Villwock, A., Roth, A., Ilkbasaran, D., Hatrak, M., Davenport, T., Halgren, E., & Mayberry, R. I. (2017). The cortical organization of syntactic processing in American Sign Language: Evidence from a parametric manipulation of constituent structure in fMRI and MEG. Poster presented at the 9th annual meeting of the Society for the Neurobiology of Language.
Matchin, W., Hammerly, C., & Lau, E. (2017). The role of the IFG and pSTS in syntactic prediction: Evidence from a parametric study of hierarchical structure in fMRI. Cortex, 88, 106-123.
Monahan, P. J., & Idsardi, W. J. (2010). Auditory sensitivity to formant ratios: Toward an account of vowel normalisation. Language and cognitive processes, 25(6), 808-839.
Pallier, C., Devauchelle, A. D., & Dehaene, S. (2011). Cortical representation of the constituent structure of sentences. Proceedings of the National Academy of Sciences, 108(6), 2522-2527.
Parsons, T. (1990). Events in the Semantics of English (Vol. 5). Cambridge, Ma: MIT Press.
Petitto, L. A., & Marentette, P. F. (1991). Babbling in the manual mode: Evidence for the ontogeny of language. Science, 251(5000), 1493.
Rakic, P. (1988). Specification of cerebral cortical areas. Science, 241(4862), 170.
Sandler, W., & Lillo-Martin, D. (2006). Sign language and linguistic universals. Cambridge University Press.
Tang, C., Hamilton, L. S., & Chang, E. F. (2017). Intonational speech prosody encoding in the human auditory cortex. Science, 357(6353), 797-801. | http://www.talkingbrains.org/2017/11/ |
BACKGROUND OF THE INVENTION
Field of the Invention
Description of the Prior Art
The present invention relates to a garbage container.
Generally, a garbage bag is placed in a barrel of a garbage container, which is convenient for packing garbage and keeps the barrel clean. For easy operation, garbage containers which receive garbage bags in their bottoms have been developed; when the garbage bag in use is full, an unused garbage bag is pulled up directly for use. Such garbage containers are disclosed in TWM574602, TWM525915, TWM517731 and TWM478012.
However, when the garbage bag in use is broken, the garbage in it can easily smudge the unused garbage bags received in the barrel, which results in cleaning problems and waste. Moreover, each of the garbage containers described above requires a roll of garbage bags to be disposed therein, so that multiple rolls of garbage bags have to be prepared. The roll of garbage bags is embedded in the barrel, and the garbage bag in use has to be taken out in order to reach an unused garbage bag, which is impractical and inconvenient.
The present invention is, therefore, arisen to obviate or at least mitigate the above-mentioned disadvantages.
SUMMARY OF THE INVENTION
The main object of the present invention is to provide a garbage container which is convenient to take and replace a garbage bag and can keep unused garbage bags clean.
To achieve the above and other objects, the present invention provides a garbage container, including: a main body and a storage box. The storage box is detachably positioned on and out of the main body and includes a casing, an inlet and an outlet. The casing defines a receiving space configured to receive at least one roll of garbage bags which includes a plurality of garbage bags connected with one another. The inlet and the outlet communicate the receiving space with an exterior of the receiving space, and the inlet is disposed on an upper portion of the casing and configured for the at least one roll of garbage bags to pass therethrough and into the receiving space. The outlet is disposed on a lower portion of the casing and is an elongate slit, and the outlet is configured for one of the plurality of garbage bags to pass therethrough and protrude out of the casing.
The present invention will become more obvious from the following description when taken in connection with the accompanying drawings, which show, for purpose of illustrations only, the preferred embodiment(s) in accordance with the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1
is a stereogram of a preferable embodiment of the present invention;
Fig. 2
Fig. 1
is a breakdown drawing of ;
Fig. 3
is a front view of a preferable embodiment of the present invention;
Fig. 4
is a schematic diagram of a preferable embodiment of the present invention when a roll of garbage bags is put into a storage box;
Fig. 5
is a schematic diagram of a preferable embodiment of the present invention in use;
Fig. 6
Fig. 5
is a partial cross-sectional view of ;
Fig. 7
Fig. 6
is a partial enlargement of .
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Figs. 1 to 7
Please refer to for a preferable embodiment of the present invention. A garbage container of the present invention includes a main body 1 and a storage box 2.
The storage box 2 is detachably positioned on and out of the main body 1 so that the storage box 2 and the main body 1 can be cleaned separately, which is also convenient for storage. The storage box 2 includes a casing 21, an inlet 22 and an outlet 23, and the casing 21 defines a receiving space 24 configured to receive at least one roll of garbage bags 8 which includes a plurality of garbage bags 81 connected with one another. The inlet 22 and the outlet 23 communicate the receiving space 24 with an exterior of the receiving space 24. The inlet 22 is disposed on an upper portion 25 of the casing 21 and configured for the at least one roll of garbage bags 8 to pass therethrough and into the receiving space 24. The outlet 23 is disposed on a lower portion 26 of the casing 21 and is an elongate slit, and the outlet 23 is configured for one of the plurality of garbage bags 81 to pass therethrough and protrude out of the casing 21.
The storage box 2 is disposed out of the main body 1 and separated from a garbage bag in the main body 1 so as to avoid contamination. When the garbage bag in the main body 1 is full, one of the plurality of garbage bags 81 is taken out from the outlet 23 to replace the garbage bag in the main body 1, which is convenient to use.
Preferably, the storage box 2 further includes a notch 321, and the notch 321 is disposed on the casing 21 and communicated with the inlet 22 and laterally communicated with the receiving space 24. Therefore, the notch 321 allows a user to observe the receiving space 24 and determine whether to supply the at least one roll of garbage bags 8. The user's hand can enter into the receiving space 24 through the notch 321, and the at least one roll of garbage bags 8 is dropped when its position is close to the lower portion 26 of the casing 21 so as to reduce force exerted on the casing 21, prevent the main body 1 from shaking and avoid damage to the casing 21. The notch 321 also allows the user's hand to enter into the receiving space 24 to take, place or adjust the at least one roll of garbage bags 8. Moreover, the storage box 2 is large enough to receive multiple rolls of garbage bags 8, boxes of garbage bags and various sizes of garbage bags.
In a height direction 35 of the storage box 2, a maximum distance of the notch 321 is defined as a first length 71, a maximum distance of the outlet 23 is defined as a second length 72, and a ratio of the second length 72 to the first length 71 is between 0.1 and 0.3. Therefore, the outlet 23 has a suitable size for the at least one roll of garbage bags 8 with a normal size to pass therethrough and the casing 21 has sufficient structural strength.
The main body 1 includes a barrel 12, a cover 13 and a pivot structure 15. The cover 13 is openably disposed on the barrel 12 by the pivot structure 15, and the storage box 2 is preferably disposed below the pivot structure 15. The storage box 2 is located at a backside of the barrel 12 so as to keep the at least one roll of garbage bags 8 clean. Moreover, in the height direction 35 of the storage box 2, a ratio of a height of the storage box 2 to a height of the main body 1 is between 0.25 and 0.45, so that sizes of the storage box 2 and the main body 1 are preferably cooperative and the receiving space 24 has a sufficient size.
The main body 1 further includes at least one first positioning portion 4, and the storage box 2 further includes at least one second positioning portion 5. A number of the at least one second positioning portion 5 is equal to a number of the at least one first positioning portion 4, and the at least one first positioning portion 4 is releasably engaged with the at least one second positioning portion 5.
Furthermore, one of the at least one first positioning portion 4 and the at least one second positioning portion 5 has an engaging concave 41, and the other of the at least one first positioning portion 4 and the at least one second positioning portion 5 has an engaging convex 51. The engaging convex 51 is inserted and engaged within the engaging concave 41. The engaging concave 41 includes a large-diameter hole 411 and a small-diameter hole 412 which are communicated with each other, and the small-diameter hole 412 is located below the large-diameter hole 411; the engaging convex 51 includes an engaging portion 511 and a necked portion 512 which are connected with each other, and the engaging portion 511 is inserted within the large-diameter hole 411 and moved to the small-diameter hole 412, and the necked portion 512 is engaged with a circumferential wall of the small-diameter hole 412 for stable connection.
In this embodiment, the storage box 2 includes three said first positioning portions 4, two of the three said first positioning portions 4 are spacingly aligned with each other, and another one of the three said first positioning portions 4 is located below the two of the three said first positioning portions 4. As viewed in a direction facing the three said first positioning portions 4, the three said first positioning portions 4 are symmetrically arranged relative to a central line 11 of the main body 1. A height of the main body 1 is defined as a first height 61, a distance between the another one of the three said first positioning portions 4 disposed below the two of the three said first positioning portions 4 and a bottom surface 14 of the main body 1 is defined as a second height 62, and a ratio of the second height 62 to the first height 61 is between 0.4 and 0.6.
Moreover, the casing 21 further includes a bottom wall 31, a first side wall 32, a second side wall 33 and two connecting side walls 34. The first side wall 32, the second side wall 33 and the two connecting side walls 34 are laterally connected with the bottom wall 31, and the inlet 22 faces toward the bottom wall 31. The first side wall 32 and the second side wall 33 face each other, and the two connecting side walls 34 are respectively connected between the first side wall 32 and the second side wall 33. The first side wall 32 is remote from the main body 1 and has the outlet 23, and the second side wall 33 faces toward the main body 1 and has the at least one second positioning portion 5. In this embodiment, the storage box 2 includes three said second positioning portions 5, two of the three said second positioning portions 5 are disposed on the upper portion 25, and another one of the three said second positioning portions 5 is disposed on the lower portion 26 and higher than the outlet 23.
Specifically, in the height direction 35 of the storage box 2, a ratio of a distance, between one of the three said second positioning portions 5 disposed on the upper portion 25 and the another one of the three said second positioning portions 5 disposed on the lower portion 26, to the height of the storage box 2 is between 0.5 and 0.7 so that the storage box 2 is stably disposed on the barrel 12.
In summary, the storage box of the garbage container can be used to receive the at least one roll of garbage bags which are unused, which is convenient to replace the garbage bag in the main body. Moreover, the storage box is detachably disposed on the main body so as to be convenient for cleaning and storage. The notch allows the user to observe the receiving space and take or place the at least one roll of garbage bags, which is convenient to use.
Although particular embodiments of the invention have been described in detail for purposes of illustration, various modifications and enhancements may be made without departing from the spirit and scope of the invention. Accordingly, the invention is not to be limited except as by the appended claims. | |
The learners will read several folktales related to forgiveness, investigate how compassion is interrelated with forgiveness, and describe challenges to real forgiveness.
Unit: Generosity of Spirit Folktales
Using folktales from various American cultures, learners will determine which character traits are valued. They will also debate the advantages of "paying a debt forward" rather than "paying it back." Learners will also determine how stories move from one continent to another based on historical...
Unit: Freedom to Choose
Students discuss what it feels like to not have a choice. They relate this experience to how the Pilgrims and other immigrants feel when they chose to come to the United States for democratic freedom.
Unit: Early American Influences
Introduce the philanthropic behavior of Native Americans through the speech attributed to Chief Seattle, using the book Brother Eagle, Sister Sky: The Words of Chief Seattle.
Unit: Community Collaboration
Using the data collected from the Blue Sky Activity in the previous lesson and the community interviews, students brainstorm possible members of the community who can help with the identified issues. Introduce students to the concept of neighborhood beautification....
Unit: We are the Positive School Culture
In this lesson, the students carry out their service plan to promote a positive school climate, then reflect on its impact and demonstrate their service-learning process to an audience.
Unit: George H.W. Bush and Points of Light
Students view primary documents to explore public policy on service. They make meaning of the government role and citizen responsibility in civic action. They make a personal plan of service based on their available time, talent, and treasure.
Unit: Attributes of a Civil Society
Learners define justice, kindness, peace, and tolerance and describe the importance of these attributes of a civil society. They look for examples in the media and brainstorm how they can promote these attributes in their school, community, and the world. ...
Unit: Constitution Day
Students learn how the Constitution relates to rules and community roles. This lesson is designed for Citizenship/Constitution Day (September 17) and connects students to improving their community for the good of all. ...
Unit: Cinderella Project
The lesson emphasis is on the shoe motif in Cinderella as well as the philanthropic ideals of giving and helping others. A service learning project will be developed where students create a "shoe drive" to donate to children in need. | https://www.learningtogive.org/resources/lessons-units?search_api_views_fulltext_1=Viewing&page=4 |
Battles between live music and local councils are an unfortunate yet commonplace occurrence across Australia.
Whether it’s Perth’s Claremont Council ongoing grievances with music festivals in the WA capital (latest bugbear: Big Day Out), or Sydney officials telling residents they need to be more forgiving of neighbouring live music hotspots following a year that saw noise complaints and costly legal battles threaten the city’s Annandale Hotel and even a political clash over Playbar.
The issue once again came to a head in Melbourne last week, when Collingwood's Bendigo Hotel was the target of complaints from residents over the venue's 300-capacity band room, though it was eventually rescued from a hearing with VCAT thanks to last-minute intervention. In the wake of the incident comes a plan from local council that is looking to help live music rather than hinder it.
In light of the Bendigo Hotel's issues, representatives of City of Yarra Council are eyeing plans to help live music venues pay for costly soundproofing as part of a new proposal, as The Herald Sun reports.
The motion to help cover the cost of complying with volume limits comes from Labor councillor Simon Huggins, with an aim to stem venues coming under fire from noise complaints and to avoid sticky legal disputes and court battles entirely.
“Venues are often dealing with old buildings and large costs for sound muffling. I want to set up a program for established venues to assist them make necessary modifications,” Cr Huggins said.
The Councillor has also emphasised the importance of nurturing the vibrancy of the City of Yarra's culture and the many live music venues situated within its borders, including The Tote, Corner Hotel, Old Bar (and the many more that featured in the Leaps And Bounds festival earlier this year).
The Councillor will bring the motion to his constituents at a future meeting with details on the scheme’s budget still to be determined.
Cr Huggins will definitely find support in fellow City of Yarra representative Stephen Jolly, one of the “2 legends”, in the words of the Bendigo Hotel, “[that] made it possible for the VCAT hearing to be stopped” over the sound dispute surrounding the venue last week.
After learning of the Bendigo Hotel’s plight, Huggins and Jolly helped broker 11th hour meetings between the Bendigo Hotel’s owner, Guy Palermo, and council that helped postpone the VCAT hearing while City of Yarra worked with the venue to find a solution.
Mr Palermo indicated he'd spent up to $10,000 on soundproofing for The Bendigo Hotel after it was accused of breaching EPA noise guidelines and was the subject of 14 noise complaints in the past year, according to City of Yarra Mayor Jackie Fristacky.
Cr Jolly explained that the volume breaches were due to ‘guest bands’ using their own live equipment that did not satisfy the sound levels that Palermo says he so diligently checked.
“The issue was, when [The Bendigo Hotel] used their own equipment they kept to sound levels,” Jolly said, “but when other bands came in with their own equipment they sometimes went a bit over the top – so it’s easily fixed.”
The councillor also noted “I was really angry with the council bureaucrats because they’d rushed to VCAT… taking the Bendigo Hotel and putting it at risk of closure without telling any of the councillors, without any discussion within the organisation.”
The costs placed on Melbourne music venues was also spotlighted earlier this year by Music Victoria when a commissioned report uncovered that outdated Building Codes were costing venue operators millions of dollars in order to satisfy legislative red tape and regulation compliances. | https://tonedeaf.thebrag.com/push-to-pay-for-live-music-venue-soundproofing-from-melbourne-councillor/ |
“Can I repair a punctured Bridgestone Run flat tyre?
With certain types of punctures in the tread area, the Bridgestone Run-Flat technology tyres may be repaired subject to certain conditions (Depending on how far and at what speed the car was driven after the puncture). Please consult your nearest RFT authorized dealer to have your tyres inspected.
Before attempting a repair, consult the vehicle owner's manual for restrictions. The vehicle manufacturer may restrict the use of repaired tires on its vehicles”.
As you can see, in my opinion the part in brackets is a grey area: "how far and at what speed", who knows? Most tyre retailers are responsible people who have the customer's and other road users' safety at heart. We also think long and hard before making these decisions.
Thomas Deacon Education Trust is a charitable organisation dedicated to raising educational outcomes across a range of primary and secondary phase schools and academies in the East Midlands. As a multi-academy trust we strive to provide outstanding teaching and learning for all and an ethos and culture that encourages everyone to be the very best that they can be.
We put learners, and their learning, at the heart of everything we do.
Thomas Deacon Academy
The Thomas Deacon Academy that we know today opened in 2007 as one of the UK’s first and largest academies. Whilst a building does not make a school, it is worth noting that Thomas Deacon Academy was designed by Norman Foster and Partners and in addition to our impressive main academy our beautiful 43-acre campus includes TDA Juniors - a modern, light and purpose-built Key Stage 2 facility - and expansive playing fields and sporting facilities.
In September 2018, a new extension to the existing main building led to new accommodation being created for a sixth form study centre, refectory expansion and staff professional learning centre.
Moving forward, our core purpose for the Academy is: “To ensure that each student acquires the necessary knowledge, skills and character to make a positive contribution to society and ‘thrive’ as a global citizen.”
The Academy’s core purpose is supported by our six pillars of character which provide the foundations for our TDA Way. Our character values are: Curiosity, Commitment, Courage, Compassion, Confidence and Courtesy.
In September 2019, TDA received a very positive Ofsted report, judging the academy to be Good overall. Ofsted commented favourably, amongst other things, on the overall leadership of the academy, good teaching, positive relationships between staff and students and students’ behaviour. The Sixth Form was graded as ‘Good’ and recognised as a strength of the Academy. These aspects of the academy have been maintained alongside the identified areas for development being addressed.
The Role
Thomas Deacon Academy are looking to appoint a highly motivated, inspiring teacher who has a passion for teaching Spanish and will be able to motivate, enthuse, excite and challenge students to produce high quality work.
The department is extremely supportive of each other and is led by fantastic practitioners who are forward-thinking, innovative and motivated to ensure all students achieve well and develop an enjoyment of the subject. Being part of TDET there will be a wide range of opportunities to grow and develop within the department, Academy and Trust wide.
The Trust offers:
• Excellent salary package.
• Fantastic training/CPD opportunities in a friendly supportive environment.
• Opportunity for progression across Thomas Deacon Education Trust.
• Use of Academy facilities (including a gym and onsite car parking).
• Teachers’ pension scheme.
• An engaging, creative and welcoming environment to learners who take pride in their school.
• An inclusive and collaborative approach.
• A talented, highly motivated, committed and professional team of colleagues, both within the school and across the trust.
• An actively supportive Local Governing Body and Trust leadership.
Thomas Deacon Education Trust is committed to safeguarding and promoting the welfare of children and young people and expects all staff and volunteers to share this commitment. The Academy will require the successful candidate to provide satisfactory references and undertake an Enhanced Check with the Disclosure and Barring Service. | https://www.eteach.com/careers/thomasdeaconacademy/job/teacher-of-mfl-spanish---part-time-1138988?lang=en-GB |
(Samarskite, a mineral) Discovered spectroscopically by its sharp absorption lines in 1879 by Lecoq de Boisbaudran in the mineral samarskite, named in honor of a Russian mine official, Col. Samarski.
Samarium is found along with other members of the rare-earth elements in many minerals, including monazite and bastnasite, which are commercial sources. It occurs in monazite to the extent of 2.8%. While misch metal containing about 1% of samarium metal has long been used, samarium was not isolated in relatively pure form until recent years. Ion-exchange and solvent extraction techniques have recently simplified separation of the rare earths from one another; more recently, electrochemical deposition, using an electrolytic solution of lithium citrate and a mercury electrode, is said to be a simple, fast, and highly specific way to separate the rare earths. Samarium metal can be produced by reducing the oxide with lanthanum.
The Trump administration has sought a slate of quick regulatory reforms over the past year, tweaking environmental permitting requirements everywhere from EPA to the Federal Communications Commission.
But potentially the most consequential change will be a slower burn. The White House Council on Environmental Quality is seeking to update its National Environmental Policy Act regulations, a process experts expect could take over a year.
The CEQ standards serve as the framework for NEPA permitting across the federal government. They got a minor amendment in 1986 under President Reagan, but otherwise, they’ve been untouched since they were first finalized in 1978.
"Anytime regulations are changed for the first time in more than 40 years — significantly changed — it’s a big deal," said Fred Wagner, a partner with Venable LLP’s Environmental Group who served as chief counsel for the Federal Highway Administration in the Obama administration.
"The regulations have served the community pretty well for a long time," he said, "but I think there’s a general sense that updating them in light of recent statutory changes, in light of recent administrative initiatives, makes sense."
CEQ declined to comment for this story. But Ted Boling, associate director for NEPA at CEQ, said at a conference this month that changes to the regulations are just one in a range of tools CEQ is looking at to clean up what the Trump administration sees as inefficiencies in the NEPA process.
For an infrastructure project, the average time between the beginning of scoping and producing a draft environmental impact statement is two years and 10 months, Boling said at the conference, sponsored by the Environmental Law Institute.
"So what you’re saying as part of the scoping process is, ‘Thank you for your input on this project. We’ll get back to you in maybe 2 ½ years with a draft environmental impact statement,’" Boling said. "We can do better than that."
Most projects don’t require an environmental impact statement. And some of those inefficiencies come as the result of individual agency policy or staffing, rather than CEQ’s regulations.
Still, delays on major projects that do require an EIS cost money year after year, Wagner said. And the two most recent major transportation bills — the Moving Ahead for Progress in the 21st Century Act (MAP-21) and the Fixing America’s Surface Transportation (FAST) Act in 2015 — provide models of what CEQ might seek to change.
CEQ might require, for example, that agencies combine the final EIS and record of decision (ROD) into a single document, a change that is already in place for certain transportation projects under MAP-21.
Currently, the law requires a 30-day cooling-off period between the two documents, but it sometimes gets extended as agencies deal with more public comments on the final EIS, Wagner said.
Another possibility would be to have one ROD document for the whole federal government, rather than one for each agency. That’s a tweak President Trump has already floated with his Aug. 15, 2017, executive order and a subsequent interagency agreement signed last month (Greenwire, April 9).
Other changes based on the FAST Act and MAP-21 might be in order, but generally speaking, the regulations are sound, said Larry Liebesman, a senior adviser with Washington water resources firm Dawson & Associates who worked on the 1978 standards during his time at the Justice Department.
"I think a lot of the real objections can be addressed through fine-tuning of the existing regs," he said. "Don’t throw the baby out with the bathwater, so to speak."
‘A little bit more oomph’
Industry groups and environmentalists alike will get a chance to weigh in as public comments get underway in coming months, but the process will be complicated.
CEQ earlier this month submitted a draft advance notice of proposed rulemaking to the Office of Information and Regulatory Affairs (E&E News PM, May 7). It was included in the spring Unified Agenda, though it hasn’t yet been published in the Federal Register for comment.
But for those seeking to streamline the regulations, it may be difficult to find common ground with the environmental groups that will inevitably comment and possibly sue if there are any legal blips in the process.
They’re looking to go in the opposite direction with reforms to CEQ’s NEPA regulations, said Raul Garcia, legislative counsel with Earthjustice.
"There is very little in there, and I think there needs to be more, on how to engage communities on the ground," Garcia said.
Garcia and other environmentalists argue that it’s a lack of staffing and funding — rather than statutes or regulations — that holds up the process.
"The problem is not NEPA; the problem is that you’re not funding the agencies that carry out NEPA, CEQ being front and center on this," Garcia said.
Other observers point out that one of the biggest holdups in the NEPA process — litigation — would have to be addressed through statute, rather than regulations.
For CEQ, it may also be difficult to pinpoint how, exactly, it can change its regulations to fix what the administration sees as a laborious NEPA process.
The current regulations state that EIS documents "shall normally" be fewer than 150 pages, and fewer than 300 for unusually complex projects.
The wording of that guidance is nearly identical to a memo Interior Deputy Secretary David Bernhardt issued to his agency last year (Greenwire, Sept. 6, 2017).
"It’s already here, but it’s just never really been enforced," Wagner said. "So the question becomes, why not? And if it’s already in the regulations, what else do you have to say?"
CEQ also issued a document in 1981 titled "Forty Most Asked Questions Concerning CEQ’s National Environmental Policy Act Regulations."
The memo advises that even large complex energy projects "would require only about 12 months for the completion of the entire EIS process."
Those are just two of many examples of where critics of NEPA — namely, the transportation and energy industries — might be able to work with agencies to cut down permitting time within existing regulatory frameworks, Wagner said.
"But I think what people want to see is a little bit more oomph, for lack of a better word, in the regulations," he said.
Road ahead
Environmentalists fear that even apparently reasonable changes to the NEPA regulations could be co-opted by bad-faith political forces in the Trump administration.
But for now, CEQ is without appointed political leadership, since Kathleen Hartnett White withdrew her name from consideration as its chair when it became clear that her nomination would not pass the Senate.
"Without a leader there that understands the NEPA process, that’s a problem," Liebesman said.
Boling, for his part, is a well-respected career official with more than a decade of experience working under Democratic and Republican presidents. He could help fend against those in the administration that see NEPA as an "albatross," Liebesman said.
Still, the agency may have time to get a leader confirmed before the process wraps up. Each step is likely to draw a wealth of public comments.
"I think it’s going to be several years before you see any revised NEPA regulations," Liebesman said. | https://www.eenews.net/articles/industry-wants-more-oomph-in-planned-nepa-overhaul/ |
Learning is a different process for different individuals. For some, it is an easy skill that does not require any extra effort while for some other individuals, much intentionality has to go into achieving the aim.
People have unique ways by which they assimilate information, and these ways differ from individual to individual. New learning methods are being discovered regularly to improve the process. In-depth information and reviews about learning methods in different fields can be found on britainreviews.co.uk.
Practical Ways To Learn Faster
1. Have enough rest
It may seem strange to hear that a way to learn faster is by resting, but it is the truth. The human body is made of systems that work round the clock, so it is only logical to expect that it needs rest to work better. Not having enough rest slows down activity in several organs, including the brain, which is the major organ responsible for learning. This first tip is mentioned in virtually all online reviews of online academies' excellence.
Getting enough sleep, therefore, helps the body to retain and recall faster when information is required and this is considered as one of the basic foundations of learning.
2. Taking notes
While it is common knowledge that taking notes is an essential part of learning, studies have also shown that a lot of people go about note-taking the wrong way. It has been shown that taking notes by hand is more productive than taking notes on a laptop or a smartphone.
Students have been said to be more involved in the learning process when they take their notes actively as it increases their ability to grasp concepts more and also increases retention.
3. Teach others
This particular tip is highly effective but greatly underrated. Information that is taught is retained better than information that is not. Teaching serves as a means of practice, this time with a concrete focus. Even if you do not have actual people to teach, practise your knowledge as if you were teaching, and you will be surprised at how much faster you retain and recall it.
4. Make use of mnemonics
Sometimes, information might seem quite overwhelming to learn because of the extent of new terms around it. In cases like this, try using mnemonics. Reducing the magnitude of what you have to learn by classifying it into simpler letter patterns or sounds is quite effective in enhancing retention. Another way around this could be associating what you have to learn with other already familiar information, which also serves to increase recall.
With learning, there is no end to it. Finding out other ways by which you can learn faster as an individual puts you at an advantage, hence, it’s best to discover what works for you best early enough so you can fully optimize them for maximum productivity. | https://thelegendedition.com/learn-anything-with-these-4-simple-tricks.html |
Review of Working at the Southwark Playhouse
Working is an impressionistic hybrid of a musical: a bit like a verbatim documentary crossed with a musical revue, in which monologues and scenes are patched together to tell multiple stories about the real working lives of an extensive canvas of characters.
Based on verbatim interviews conducted by Studs Terkel for a book that was first published in 1974, it features songs by six separate composers, that have been woven together with a moving and inspiring integrity. There's no narrative story, as such; but taken together, the songs and scenes cumulatively give feeling and texture to what the jobs people do means to their lives.
First briefly seen on Broadway in 1978, it has since been extensively updated and revised, with new songs added (notably by Hamilton composer Lin-Manuel Miranda), and this version is now receiving its UK premiere. Director Luke Sheppard and choreographer Fabian Aloise have made another authentic intervention to give extra shape to it: they've underscored it with six young newly qualified musical theatre graduates acting as listeners to the tales being told by a wonderful troupe of six more experienced actors. The young troupe also provide fluent illustrative dance accompaniment.
So it is never dull or dry; and there are moving juxtapositions of age and experience on display, too. A really fine cast is led by West End veteran Peter Polycarpou, who made me cry with his version of Stephen Schwartz's Fathers and Sons, while Gillian Bevan's magnificent performance as a waitress in It's an Art, also by Schwartz, could make you cry with laughter.
There are also immensely powerful vocal performances from Dean Chisnall, Krysten Cummings, Liam Tame and Siubhan Harrison to make this one of the best sung shows in town; and a six-piece band under Isaac McCullough lend them superb accompaniment.
The show is a rare and evocative fringe pleasure.
Working is at the Southwark Playhouse until 8th July 2017. | https://www.londontheatre.co.uk/reviews/review-of-working-at-the-southwark-playhouse |
Q:
Performing operations and simplification
The equation is $$2a-7b- \dfrac{4(a^2-16b^2)}{2a-3b}$$
I need to simplify the expression.
A:
Perhaps the main objective here is simply finding a common denominator, expanding, and simplifying:
$$\begin{align} 2a-7b- \dfrac{4(a^2-16b^2)}{2a-3b} & = \dfrac{(2a - 7b)(2a - 3b) - 4a^2 + 64 b^2 }{2a - 3b} \tag{1}\\ \\ & = \dfrac{4a^2 -20ab+ 21 b^2 - 4a^2 + 64b^2}{2a - 3b}\tag{2} \\ \\ &= \dfrac{85b^2 - 20ab}{2a-3b} \tag{3}\\ \\ & = \dfrac{5b(17b - 4a)}{2a - 3b}\tag{4}\end{align}$$
$(1)\quad $ Finding the common denominator and subtracting.
$(2)\quad$ Expanding the product in the numerator: $(2a - 7b)(2a-3b)$
$(3)\quad$ Adding/subtracting like terms in numerator.
$(4)\quad$ Factoring out common factor in numerator.
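As a quick sanity check, the identity can be verified with a computer algebra system. Here is a small SymPy sketch (the variable names are my own, not part of the original answer):

```python
import sympy as sp

a, b = sp.symbols('a b')

# The original expression and the factored result from step (4)
original = 2*a - 7*b - 4*(a**2 - 16*b**2) / (2*a - 3*b)
simplified = 5*b * (17*b - 4*a) / (2*a - 3*b)

# The difference reduces to zero, confirming the simplification
print(sp.simplify(original - simplified))  # -> 0
```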
It is very difficult to determine whether the ancient Greek vase paintings depict the myths on which theatrical performances were based or the performances themselves. However, there are a few observations that can be made to help us come to a conclusion.
- Firstly, the vases depict scenes of violence and slaughter. In Grecian theatre, actual physical violence was never enacted on stage, but described vividly to the audience afterwards. This would suggest that the paintings are representations of myth rather than performance.
- Secondly, the figures depicted in the paintings are not wearing the typical elaborate garments and headdresses synonymous with theatrical performances of that era. We know that Grecian costumes were large and striking, as the auditoriums in which performances took place were so vast that anything small would have been lost.
- Thirdly, the vase shown above has been dated to around 500 BC. It shows the murder of Aegisthus by Orestes. This myth was the subject of a play, The Oresteia, written in 458 BC. If these dates are correct then the vase predates the play by a considerable margin, suggesting that the vase is a depiction of the actual myth and not the performance.
In the light of these observations, it would be fair to say that the ancient vase paintings are not especially reliable as a source of information about theatrical performance. However, they do provide a useful record of the myths that formed the basis of theatre for the Greeks and the stories that originally inspired performance.
The small wooden stages known as Phlyax stages were simple constructions. There was a double door at the back which provided an entrance for the performers and steps leading down from the stage to the ground level of the audience.
In terms of tragedy performance, these stages would provide an intimate and claustrophobic atmosphere perfectly apt for the material being performed. For example, when the Furies crowd in on Orestes as he begs for assistance from Athene, the atmosphere should be tense and restricted, perfect for representation on a smaller stage. However, the Greek performances were spectacles of grandeur with elaborate costume and large choruses. Practically, therefore, these small stages may have been restricting, and it would probably have been necessary to make use of the space surrounding the stage as well as the stage itself.
The most sensible positioning for Apollo and Orestes in the opening scene would be in the area on the ground directly in front of the main stage. Here they could hold their private conversation intimately. They would be close enough to the audience to make them feel privy to the conversation, but in a good position to exit quickly, leaving the main stage free for the Furies and Clytemnestra, who dominate the scene with powerful speech and dramatic appearance.
The Furies would hang draped across the main stage and down the steps, like a horrible disease that has tarnished the shrine. Clytemnestra could then enter from the door in the back of the stage, gaining immediate impact.
The Theatre of Dionysus in the Lycurgan Period (338–326 BC)
The stone skene built during the Lycurgan period was a very impressive structure and certainly had its advantages.
- The stage had three entrance points onto the ground level for the performers to make their entrances.
- The performance space was larger allowing the true magnificence of the performers and their costumes to be appreciated.
- There were two definite levels for performance use, the ground level and the raised proskenion, allowing more diverse staging.
- There were also the additional areas on either side of the proskenion, known as the paraskenia, to be made use of.
However, the huge auditorium that was built to accommodate the vast audiences that attended the theatre meant that voice projection was of key importance. Also, the visual impression of the performance needed to be heightened for the benefit of the audience members seated far away from the action taking place on stage.
The three locations featured in The Eumenides are:
The Temple of Apollo
The Inner Shrine of the Temple of Apollo
The Acropolis
It is thought that the Greeks may have made use of painted cloths in order to give a sense of setting to their plays. However, large bold props could also have been used as visual symbols to suggest scene changes; for example, large pillars could have been wheeled on to suggest the columns of the Acropolis. A change of location on stage could also suggest that the setting for the action has shifted; for example, the scenes set in the temple of Apollo could have been presented on the ground level, with the scenes in the Acropolis taking place on the roof of the proskenion. The large performance space available at the Theatre of Dionysus during this period offered many possibilities for more diverse staging and scenery.
It has been traditionally assumed that the strongest position for a performer on the Greek stage was directly in front of the central doors of the skene. However, David Wiles has recently suggested that the most symbolically potent position was actually in the centre of the orchestra. Which of the two staging positions was used would undoubtedly have had an effect on the way a scene such as the binding scene in The Oresteia was interpreted by the audience.
Positioning the performers further away from the audience by the doors of the skene would probably lessen the impact of a scene that invites tension and emotion. Not only would the performers be restricted spatially, but the connection between them and the audience would be lessened and the emotional intensity of close proximity lost. However, bringing the performers out into the centre of the orchestra would be practically more successful, allowing free movement in a larger space. It would also heighten the emotional involvement of the audience. Having hideous representations of creatures such as the Furies performing in close proximity would increase the repulsive impact of the scene more than if they were removed to the skene.
Also, positioning the chorus and the actor playing Orestes on the same level out in the orchestra would suggest to the audience the hopelessness of his situation. Without the help of the Gods, he is left defenseless. The moral message of the dependency of the people on the Gods and their infinitely higher status could be aptly presented through certain spatial decisions. In a performance where the visual impact is of equal importance to the spoken script, staging becomes an important indicator of meaning.
In the Greek theatron the best seats would have been near to the front, where the dialogue of the performance could be fully appreciated as well as the visual spectacle. Audience members seated at the back of the theatron would probably have experienced difficulty hearing the dialogue of the performance and relied on the visual impact for entertainment. However, all the seats would probably have felt physically uncomfortable after extended periods of time, jostled together with thousands of other people. The experience was definitely not for the faint-hearted!
The theatrical experience for the ancient Greeks would warrant comparison today with attending the final of the World Cup. As W.B. Stanford has said:
"If someone beside you sobbed or shuddered or trembled, you would feel it directly, and a wave of physical reaction could pass like an electric shock through all your neighbours . . mass emotionalism flourishes in compact crowds of that kind."
Today, theatre as an experience is much more tame. The audience are seated in plush seats with personal armrests and an unwritten law about "personal space." The closest thing today to the united, patriotic experience that theatre gave to the Greeks would probably be the Last Night of the Proms in the Royal Albert Hall, where a small-scale demonstration of national pride can be seen.
Applications of compost and clay to ameliorate soil constraints such as water stress are potential management strategies for sandy agricultural soils. Water repellent sandy soils in rain-fed agricultural systems limit production and have negative environmental effects associated with leaching and soil erosion. The aim was to determine whether compost and clay amendments in a sandy agricultural soil influenced the rhizosphere microbiome of Trifolium subterraneum under differing water regimes. Soil was amended with compost (2% w/w), clay (5% w/w) and a combination of both, in a glasshouse experiment with well-watered and water-stressed (70 and 35% field capacity) treatments. Ion Torrent 16S rRNA sequencing and Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt) analysis of functional gene prediction were used to characterise the rhizosphere bacterial community and its functional component involved in nitrogen (N) cycling and soil carbon (C) degradation. Compost soil treatments increased the relative abundance of copiotrophic bacteria, decreased labile C and increased the abundance of recalcitrant C degrading genes. Predicted N cycling genes increased with the addition of clay (N2 fixation, nitrification, denitrification) and compost + clay (N2 fixation, denitrification) and decreased with compost (for denitrification) amendment. Water stress did not alter the relative abundance of phylum level taxa in the presence of compost, although copiotrophic Actinobacteria increased in relative abundance with addition of clay and with compost + clay. A significant role of compost and clay under water stress in influencing the composition of rhizosphere bacteria and their implications for N2 cycling and C degradation was demonstrated.
Biography: Dr Jenkins has over ten years of research experience in the fields of molecular ecology, waste management and sustainable agriculture. His research focuses on the development and adoption of low-cost waste treatment technologies for the recapture of bioenergy, nutrients and water.
This is a field-based position and requires professionals to credibly interact with thought leaders and centers of excellence. The position involves travel of more than 60%.
***Candidates must live within the territory and close to a major airport. Position will be filled at a level commensurate with experience.
Responsibilities will include, but are not limited to, the following:
• Identify, develop and maintain collaborative relationships with current and future Key Opinion Leaders (KOLs) and healthcare stakeholders in disease states of strategic importance to Cara Therapeutics.
• Provide clinical presentations and information in response to unsolicited questions (as appropriate) in academic, community, healthcare provider, and managed care settings in both group and one-on-one settings.
• Collaborate with the Clinical R&D and Medical Affairs organizations to enhance enrollment in Cara-sponsored clinical trials by identifying appropriate trial sites and interacting with investigators in ongoing studies.
• Develop and execute territory plans in alignment with Medical Affairs strategy.
• Gather competitive intelligence including therapeutic trends and unmet needs within the appropriate disease states and marketplace and provide timely feedback/information on emerging clinical/scientific information and opportunities to internal stakeholders.
• Provide scientific support at medical congresses.
Skills/Knowledge Required:
• Advanced scientific degree (MD, DO, PharmD, PhD, or DNP highly desired)
• Excellent interpersonal communication and presentation skills are required
• Full understanding of compliant interactions between MSLs and HCPs
• Must be a strong team player and effectively interface with other internal departments
• Overnight business travel of more than 60% is required
Preferred Skills/Experience: | https://www.caratherapeutics.com/about/careers/posting/medical-science-liaison-nephrology-dermatology-northwest/ |
This pasta dish is a family favorite — simple to prepare, but delicious and full of flavor. I always enjoy fortifying an ingredient and making it my own. When creating the tomato sauce, I used Pinot Noir. Have fun experimenting with your own favorite red wine, and enjoy a glass while preparing it! This recipe is very easy to adapt to create a whole new dish. Switch up the flavoring by changing the sauce to a pesto, or by swapping the cheese for ricotta. Be creative — go and play with your food!
Ingredients
- 32 ea Cheese Ravioli
- 1 gal salted water (For cooking ravioli)
- 1 cup tomato, diced
- 1 T garlic, fresh, clove, minced
- 1 cup Pinot Noir
- 12 oz Tomato sauce
- 3 T Basil, fresh, chopped
- 2 T Oregano, fresh, chopped
- 2 T Oil
- 3 cups spinach, fresh
- Pinch Salt, Kosher
- Pinch Pepper, black
- 1 T Pine nuts, toasted
- ½ T Parm Cheese, shaved or grated
Preparation
- In a medium size sauce pan on medium high heat add the oil, tomato and garlic. Cook for about 2 minutes or until garlic has some browning color.
- Pour in the Pinot Noir and reduce down until almost gone. Stir the bottom of the pan to get all of the flavors from the tomato and garlic while they are sautéed.
- When the wine is almost fully reduced, add your favorite tomato sauce and fresh herbs. Turn down the heat to medium and let simmer for about 30 minutes, stirring the bottom every so often.
- In a sauté pan over medium high heat, add oil and spinach. Cook down spinach and season with salt and pepper.
- Cook the ravioli in the salted water according to package directions. In a bowl, put 8 cooked ravioli down in the center. Place the tomato sauce over the ravioli, followed by the sautéed spinach. Garnish with toasted pine nuts and Parmesan cheese.
BACKGROUND
1. The Field of the Invention
This invention relates to medication regimens prescribed by caregivers and, more particularly, to novel systems and methods for communicating medication regimens to clients or patients.
2. The Background Art
As the field of medicine continues to advance, particularly with the emergence of many new and effective drugs, the average life span of human beings continues to increase. In many cases, a regimen prescribed to a client or patient by a caregiver may include numerous drugs or medications. Each medication may have a distinct dosage, timetable for consumption, method of consumption, along with distinct instructions and warnings corresponding thereto. In certain instances, the timing or dosage of a particular drug may change from day to day. Moreover, medications may be administered by a professional, a relative, or the patient. As the number of medications in a selected regimen increases, so does the complexity of the medication regimen and the likelihood of error by a caregiver or patient. The error may occur in communication of or in compliance with instructions provided by a healthcare professional.
During the training of medical professionals, considerable emphasis is placed on the importance of maintaining strict accuracy in the administration of medications. Errors or misuse of drugs and medications may be dangerous to a patient, undermine their efficacy, and be extremely costly. Since many patient's medication regimens are administered at home on an out-patient basis, they are effectively self-supervised. Methods are needed to simplify and manage the communication, administration, and tracking associated with consumption of medications, so that they may be properly administered. Moreover, the ability of patients to cope with the complexity of overseeing a medication regimen may be further complicated by the patient's illness. Other factors that may undermine a patient's ability to correctly follow a medication regimen may include caregivers's or a patient's age (youth or seniority), education, language barriers, mental condition, sight or hearing impairment, and aptitude.
Tools are needed by medical professionals and professional caregivers to properly communicate with and educate both patients and caregivers about medication regimens. Charts currently used to convey this information may be useful to the caregivers themselves and others who can become familiar with them over time. However, such charts may be confusing to patients due to age (youth or seniority), education, language barriers, mental condition, sight or hearing impairment, and aptitude. Moreover, charts suitable for use may prove insufficient as teaching tools to properly educate patients and amateur caregivers.
What is needed is a chart system and process that are simplified and structured in a manner that may allow use of clear visual, iconic communication enabling substantially all patients and caregivers of varied age and medical condition to successfully follow a medication regimen.
What is further needed is a charting system and method to overcome language barriers by providing terminology, color, and symbology that can be more universally understood.
What is further needed is a charting system and method to serve as a successful teaching tool for medical professionals and professional caregivers to properly educate clients, amateur caregivers, and patients of varied age and medical condition regarding a medication regimen.
What is further needed is a charting system and method that graphically illustrate to a client or patient proper times, medications, and dosages of a prescribed medication regimen.
Also needed is a system and method for tracking, by a patient, caregiver, or medical professional, the history of administration of medications. Doctors, nurses, and others need to know what has happened compared to what was prescribed. Patients need to know what was done. Anyone administering a medication may forget what has been done or whether it has been done as routines drag on and memories of events seem to blur together.
BRIEF SUMMARY AND OBJECTS OF THE INVENTION
In view of the foregoing, it is an object of the present invention to provide an apparatus and method for communicating a medication regimen, prescribed by a doctor or other caregiver, to a patient. A “medication communicator” chart, system, and method in accordance with the present invention may be used as a teaching tool to educate patients and to schedule events corresponding to a prescribed medication regimen. The “medication communicator” chart may include a scheduling graph having the shape of a 12-hour analog clock, 24-hour analog clock, or the like. This shape is used because it is universally understood and is more easily comprehended by the very young and elderly. In addition, the graphical nature of the analog clock may provide easier visualization over a typical rectangular chart or table. Moreover, poor eyesight is easily compensated by an analog dial, where written text may fail.
The scheduling graph may be divided into regions corresponding to each hour of a day. Fields on the “medication communicator” chart may be receptive to labels communicating information corresponding to numerous medications in a prescribed medication regimen. Timing indicators may be applied to the regions of the scheduling graph to indicate consumption or application times of each of the medications.
Each of the medications may be readily distinguished from one another by assigning each medication a distinct color, pattern, symbol, texture, or the like. In addition, in one embodiment the “medication communicator” chart may include one or several fields receptive to labels printed with patient information, such as name, address, emergency contact information, identification number, if any, or any other desired patient information.
The “medication communicator” chart may include fields receptive to labels containing frequency information for consuming the medications. For example, in certain embodiments, an “every day” label may be applied to the chart to indicate that a patient is to take a medication every day. Likewise, “every other day” labels or labels designating specific days of the week may be applied to the chart to indicate when a medication is to be taken.
A label sheet may provide labels to be adhered to the “medication communicator” chart in accordance with the invention. The labels may include a pair of medication labels for each medication in the regimen. One label may be adhered to the “medication communicator” chart. The other corresponding label may be adhered directly to the packaging of the referenced medication. Each label may have the same color, pattern, symbol, texture, or the like, so that a selected medication may be referenced directly to the chart.
Each medication may include corresponding timing indicator labels characterized by the same color, pattern, symbol, texture, or the like, as the medication label. These timing indicator labels may be adhered to the regions of the scheduling graph to indicate the time a selected medication is to be taken. In certain embodiments, the “medication communicator” chart may include calendars printed thereon to assist a patient in keeping track of the medication regimen.
In certain embodiments, the scheduling graph may be represented by two separate 12-hour analog clocks to represent the A.M. and P.M. hours, respectively. This embodiment may help further clarify what medications are to be taken in the A.M. hours as opposed to the P.M. hours by applying the corresponding timing indicators to separate scheduling graphs. In this embodiment, timing indicator labels having an A.M. or P.M. designation may be eliminated. In selected embodiments, the A.M. and P.M. scheduling graphs may be identified by an A.M. and P.M. icon, respectively.
In certain embodiments in accordance with the invention, a system for record keeping may be used. For example, when a medication of a medication regimen is ingested or applied, a record-keeping label, pin, magnet, magnetic label, marker, or the like, may be used on a calendar to record consumption times of medications in the medication regimen. Thus, a patient may keep records of the regimen and provide feedback to a caregiver that a medication regimen has been executed correctly.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
It will be readily understood that the components of the present invention, as generally described and illustrated in the Figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the system and method of the present invention, as represented in FIGS. 1 through 10, is not intended to limit the scope of the invention, but is merely representative of certain presently preferred embodiments of the system in accordance with the invention.
The term “patient” as used throughout the specification and claims, means any person or even an animal requiring management of a medication regimen. The terms “medicine” and “medication” as used throughout the specification and claims mean prescription and nonprescription drugs, vitamins, supplements, herbs, foods, bandages or other wraps, first aid devices, cleaning solutions, and the like. The term “caregiver” as used throughout the specification and claims means any doctor, physician, nurse, medical practitioner, family member, individual, patient, or and the like responsible for adherence to a medication regimen.
The embodiments of systems, methods, and devices in accordance with the invention will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout.
Referring to FIG. 1, a treatment process 10 may typically be initiated by either a regular checkup 12 of a patient or by a health complaint 14 initiated by a patient. As a result, an examination 16 of the patient may be performed by a doctor or caregiver. The examination 16 may result in an analysis 18 or a diagnosis 18 by a health professional. Depending on the result of the analysis or diagnosis 18, a doctor may prescribe a particular regimen 20 to the patient. For some over-the-counter medications, any caregiver may establish a regimen. However, a medical professional is more likely to do so. This regimen 20 may include either actions 22, medication 24, or a combination thereof 22, 24.
If medication 24 is prescribed, the prescription may include timing information 26 for ingesting or applying the prescribed medication, a required dosage 28 of the prescribed medication, a duration 30 for taking the prescribed medication, and information 32 on the actual composition being prescribed.
Once a medication 24 or action 22 regimen is prescribed, a caregiver may need to communicate 34 or educate 34 this regimen 20 to the patient or to a nonprofessional caregiver. The communication 34 or education 34 process may be facilitated by the use of selected tools 36. These tools 36 may assist a patient or client in executing 58 a regimen, or may help a caregiver in actually verifying 60 that the medical regimen has been executed correctly. Tools 36 may include various visual learning tools and techniques 38.
Other factors that may be important in implementing effective tools 36 to communicate or educate 34 a patient may include cost 40, ease of use 42 of the tool 36, as well as other patient considerations 44. By patient considerations 44 is meant taking into account factors that may determine if a tool 36 is successful at communicating or educating a patient. These may include a patient's age 46, education 48, language 50, mental condition 52, sight or hearing impairment 54, aptitude 56, or the like. An effective tool 36 may be effective in overcoming any or all of these communication barriers 46, 48, 50, 52, 54, 56.
Once a regimen has been successfully communicated 34, a patient or client may need to actually follow or execute 58 the regimen without supervision or with nonprofessional supervision. Thus where a patient is mentioned herein, a typically nonprofessional caregiver (or possibly other caregiver) may execute 58 appropriate actions. Once the regimen has been executed 58, a caregiver may need to verify 60 that the regimen has been properly executed. This may require providing feedback 62 or results 62 to a caregiver.
After receiving feedback 62 that a regimen 20 has been properly executed, if the patient's condition has not improved at a test 62, the caregiver may choose to perform another examination 16 and prescribe another regimen 20. If the patient's condition has improved according to the test 62, but the patient's condition remains uncured, then the caregiver may again choose to perform another examination 16 and prescribe another regimen 20. However, if the condition has improved at test 62 and the condition has been successfully treated and cured in the patient, then the prescribed regimen may be ended 68.
Referring to FIG. 2, a “medication communicator” chart 70 may include a scheduling graph 72 to display scheduling information related to the medication regimen 24 of a patient or client. In certain embodiments, the scheduling graph 72 may be presented in the form of a twelve-hour analog clock. The twelve-hour analog clock may be used because it is universally understood and because it has become a world-wide standard. This may prove particularly important in communicating regimen 20 information to a young or elderly patient, or if a language barrier is present.
The scheduling graph 72 may be divided into regions 74 to produce a plurality of scheduling fields 74 so that different medications of a medication regimen 24 may be scheduled. Each scheduling field 74 may correspond to a time 76 indicating when the selected medication is to be ingested or applied by the patient. As illustrated, the scheduling graph 72 is represented by a twelve-hour analog clock. However, in other selected embodiments, a twenty-four hour analog clock may also be used. The “medication communicator” chart 70 may be printed on a variety of substrates including, but not limited to, paper, cardboard, laminate materials, glass, metal, plastics, and the like.
The “medication communicator” chart 70 may further include a plurality of medication fields 78, each corresponding to a distinct medication in a medication regimen 24. Each of the medication fields 78 may include notes and administration comments pertaining to a particular medication. For example, notes may include the strength of the medication and directions from the caregiver related to using the particular medication.
The “medication communicator” chart 70 may further include a patient information field 80 which may include desired patient information such as the patient's name, identification number if needed, address, emergency contact information, as well as other desired information. Each of the medication fields 78 may include corresponding fields 82 where timing information may be located. These spaces 82 may contain labels which indicate the timing or frequency with which a particular medication should be used. For example, a field 82a may, for example, contain or include a label that informs the patient that a medication should be taken every day, every other day, or on selected days of the week. In addition, the “medication communicator” chart 70 may include a plurality of fields 78 to list each of the medications of a medication regimen 24.
Because of the clock-like layout of certain embodiments of the “medication communicator” chart 70, the chart 70 may be more easily understood by patients of all ages 46, particularly the young and elderly, or of disparate educations 48, languages 50, mental conditions 52, or aptitudes 56. Because of the graphical layout 72 of the “medication communicator” chart 70, medication regimen information may be more easily communicated by a caregiver to a patient or client. Moreover, because of the clock-like layout of the “medication communicator” chart 70, a patient or client may be able to easily understand the times a particular medication is to be taken by simply referencing the chart with the time on a normal twelve-hour analog clock.
The “medication communicator” chart 70 may be used to schedule medications of a prescribed medication regimen 24. However, one of ordinary skill in the art will be able to recognize that the chart 70 may be used to schedule not only medication 24, but also actions 22, or compliance with other instructions. In addition, the chart 70 may be used outside of the medical industry for many types of event scheduling. The color, pattern, symbol, or texture coding may be used in numerous applications to provide a simple, easy-to-follow, universally understood, scheduling system.
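Purely as an illustration of the scheduling model the chart embodies, and not part of the patent disclosure itself, the following hypothetical Python sketch represents color-coded medications assigned to hour fields on separate A.M. and P.M. twelve-hour dials; all class and medication names here are invented:

```python
# Hypothetical sketch: each medication gets a distinguishing color (or
# pattern/texture), and timing indicators are placed into hour-numbered
# fields on separate A.M. and P.M. twelve-hour dials.
from collections import defaultdict

class MedicationChart:
    def __init__(self):
        # dials["AM"][8] -> list of medication names scheduled for 8:00 A.M.
        self.dials = {"AM": defaultdict(list), "PM": defaultdict(list)}
        self.colors = {}  # medication name -> distinguishing color

    def add_medication(self, name, color, times):
        """times is a list of (hour 1-12, 'AM' or 'PM') tuples."""
        self.colors[name] = color
        for hour, period in times:
            self.dials[period][hour].append(name)

    def due_at(self, hour, period):
        """Return the medications scheduled for a given hour field."""
        return list(self.dials[period][hour])

chart = MedicationChart()
chart.add_medication("Medication A", "red", [(8, "AM"), (8, "PM")])
chart.add_medication("Medication B", "blue", [(12, "PM")])
print(chart.due_at(8, "AM"))  # -> ['Medication A']
```

The two separate dials mirror the A.M./P.M. scheduling-graph embodiment described above, in which timing indicators carry no A.M./P.M. designation because the dial itself supplies it.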
Referring to FIG. 3, a label sheet 81 may include a plurality of labels which may be applied to a “medication communicator” chart 70. For example, A.M. timing indicator labels 82 may be applied to the scheduling graph 72 to indicate when medications are to be taken by a patient or client in the “A.M.” hours. Likewise, P.M. timing indicator labels 83 may also be applied to a scheduling graph 72 to indicate when medications are to be taken in the “P.M.” hours. For example, if a medication is to be taken by a patient at 8:00 o'clock A.M., a label 82b may be applied to the space 74. However, if a medication is to be taken at 8:00 o'clock P.M. in the evening, a label 83b may be applied to the scheduling field 74. Likewise, the other labels 82, 83 may be applied to the various fields 74 of the scheduling graph 72 to indicate other times when medications are to be taken or applied by a patient.
The label sheet may also include a plurality of medication labels to provide information related to a particular medication. One medication label may be applied to the “medication communicator” chart in the medication field. A second medication label may be applied directly to the packaging or enclosure of a particular medication. In addition, if syringes or other implements are used to dispense or apply a particular medication, measurement labels may be used to indicate the measure, amount, etc. of medication to be dispensed. In addition, timing indicator labels may be provided in order to indicate to a patient the timing or frequency with which to take a particular medication.
For example, if a medication is to be taken every day, an “every day” label may be used. However, if a particular medication is to be taken every other day, an “every other day” label may be used. Likewise, if a medication is to be taken on some other day of the week, then labels may be used to indicate the other days of the week. Thus a label set may be used to provide information to a client regarding a particular medication. The label set may be identified by a particular color, symbol, or pattern to identify a particular medication. In other embodiments, a particular texture may be used to help blind or sight-impaired patients identify a particular medication. In the case that a patient is color-blind, a particular pattern or cross-hatching may be used to help the patient identify a particular medication.
In a similar manner, other label sets may be assigned a different color, pattern, symbol, texture, or the like to identify other medications in a medication regimen. The medication information labels may be configured to attach to a wide variety of medication containers including, but not limited to, bottles, tubes, inhalers, packages, boxes, blister packs, syringes, and the like.
In addition, medication information may be printed on the labels using common typewriters or software templates. A particular software template may be configured so that data may be entered directly into fields of the label sheet by a personal computer. In other embodiments, special software may be programmed so that data may be entered and then formatted to print on the labels.
The label sheet may also include one or a plurality of patient information labels. These information labels may include information corresponding to a particular patient such as name, identification number, address, emergency contact information, and the like. The labels may then be removed from the label sheet and applied to the “medication communicator” chart in the corresponding field.
Referring to FIG. 4, a caregiver may use a pair of “medication communicator” charts. A first “medication communicator” chart may be prepared for the caregiver's records. A second “medication communicator” chart may be prepared to assist the patient or client. Labels may be removed from the label sheet and applied to the “medication communicator” chart according to a medication regimen. For example, a timing indicator label may be removed from the label sheet and applied to the “medication communicator” chart to indicate that a particular medication is to be taken at 12:00 o'clock A.M.
Likewise, in another example, a timing indicator label may be removed from the label sheet and applied to the “medication communicator” chart to indicate that a medication is to be taken or applied at 8:00 o'clock A.M. Likewise, if a medication is to be taken in the evening, one of the labels may be applied to the “medication communicator” chart in order to indicate what time in the evening the medication is to be taken.
In a similar manner, a medication label may be removed from the label sheet and applied to the “medication communicator” chart to provide information pertaining to a first medication in a medication regimen. A second medication label may be removed from the label sheet and may be applied to a particular medication, such as a bottle of prescription drugs. The prescription medication may already include an information label containing information pertaining to the medication. The medication label may be adhered to the medication such that it does not hide or cover up information contained on the existing information label.
In certain embodiments, the medication label may be clear in order not to hide the inscription on the existing label. If a syringe, cup, spoon, or other implement is to be used to dispense a particular medication, a measurement label may be removed from the label sheet and applied to the implement or syringe to indicate to the patient a particular measurement or grade to fill the syringe or implement. In a like manner, a patient information label may be imprinted with desired patient or client information and may be removed from the label sheet and applied to the “medication communicator” chart.
Referring to FIG. 5, while continuing to refer generally to FIGS. 1 through 4, an education process may include preassembling a first “medication communicator” chart. A preassembly process may include first applying a client or patient information label to the chart, applying medication labels to the chart, applying timing labels to the chart indicating the days of the week, and applying timing labels to the chart indicating particular times of day.
A second step may include partially assembling a second “medication communicator” chart. The chart may be partially assembled by first applying client or patient information to the chart and then applying medication labels to the chart. A caregiver may then take the first preassembled “medication communicator” chart and the second partially assembled chart and display both “medication communicator” charts in front of a particular client or patient.
The assembly of the second “medication communicator” chart may then be completed while the client or patient observes. This may include first applying timing labels to the chart indicating the days of the week, applying timing labels to the chart indicating the times of day each medication is to be taken, and then indicating the time on an actual clock in order to visually remind the patient when to take the medication. Thus, the client or patient may observe the assembly of the second “medication communicator” chart by the caregiver. This may assist the learning process and provide a visual aid that may be more easily remembered.
As a caregiver is completing the second chart, the caregiver may show the client or patient each medication as the timing is designated on the chart. This may include showing the dosage to the client, as well as showing the method of preparation, application, or consumption of the medication. If an implement (e.g. a syringe) is needed to dispense a particular medication, the measurement may be shown on the grading of the implement at this time using the labels. This may include showing a patient how to use the implement, labeling the implement with the correct medication information, and using the labels to indicate the correct dosage of medication. Once the medication regimen is properly demonstrated to the client or patient, the caregiver may then use various methods to verify that the patient understood the regimen.
Referring to FIG. 6, an assembly process of a “medication communicator” chart may include initially determining a medication regimen. Colors, symbols, patterns, textures, or other identifiers may be assigned to each medication in the medication regimen. Any and all writing and coloring may be done in braille. After an identifier is assigned, medication information may be printed on the labels of the label sheet. This may include information such as the name of the patient, the name of the medication or composition, times to take the medication, the dosage of the medication to be ingested or applied, directions on taking the medication, and the like.
Once the proper information is printed on the labels, the information labels may then be applied to each of the medications in the medication regimen. Corresponding labels may then be applied to the “medication communicator” chart. Once the information labels are applied to the “medication communicator” chart, the timing labels may then be applied to the chart. This may include applying the labels such that they indicate the times of day, and applying the labels indicating the particular days of the week that the medication is to be taken. Once the timing labels are applied to the chart, measurement labels may be applied to any syringes, if used.
Referring to FIG. 7, in an alternative embodiment, the “medication communicator” chart may include one or a plurality of calendars indicating even and odd days of the week. The even and odd days may be used to schedule when particular medications are to be taken.
For example, referring to FIG. 8, labels assigned to a particular medication may include an “every day” label, an “every odd day” label, and an “every even day” label. This method may assist in eliminating confusion caused by the “every other day” label described with respect to FIG. 3. For example, under certain circumstances, a patient may forget to keep track of what days correspond to “every other day.” With respect to the “every odd day” label, a patient or client may simply refer to the calendars and take a selected medication on the odd days of the months. Likewise, with respect to the “every even day” label, a patient or client may simply refer to the calendars and take a selected medication on the even days of the months. Alternatively, the specific administration days of a week may be named, such as, for example, Monday and Thursday of each week.
Referring to FIG. 9, in selected embodiments, the “medication communicator” chart may include two scheduling graphs. A first scheduling graph may be used to establish an A.M. medication schedule and a second scheduling graph may be used to establish a P.M. medication schedule. This may help eliminate confusion caused by using one scheduling graph to represent both A.M. and P.M. hours. The two scheduling graphs may continue to represent a standard twelve-hour clock face, but may be effective to schedule events over a twenty-four hour day.
In certain embodiments, an A.M. icon may be used to identify the first scheduling graph and a P.M. icon may be used to identify the second scheduling graph. The icons may be located anywhere on the “medication communicator” chart as long as they may be correctly identified with the corresponding scheduling graphs. In selected embodiments, the icons may be located outside or inside the scheduling graphs.
Accordingly, referring to FIG. 10, the timing indicator labels illustrated in FIGS. 3 and 8 may simply be represented by labels lacking any A.M. or P.M. designation. These labels may then be applied to either of the A.M. or P.M. scheduling graphs illustrated in FIG. 9 to schedule a selected event. By using two scheduling graphs, any identifiers designating the labels as either A.M. or P.M. labels are unnecessary.
Referring to FIG. 11, the “medication communicator” chart may include both a pair of scheduling graphs, as well as calendars to identify the days of the month. This chart may provide the advantages of the chart illustrated in FIG. 7 as well as the chart illustrated in FIG. 9. A client or patient may easily identify events over a complete twenty-four hour day, eliminating the need for A.M. and P.M. timing labels. In addition, the calendars may aid the patient or client in identifying odd, even, or specified days of the week to take a selected medication. As was previously described with respect to FIG. 9, A.M. and P.M. icons may be used to identify each scheduling graph.
With respect to FIG. 12, a method for maintaining records of the execution of a medication regimen may be used in accordance with the present invention. For example, in certain embodiments, a timing label may be used to identify a time for taking a selected medication. A medication may be contained in a blister pack or other suitable packaging. In one selected embodiment, labels may be used to cover each pill or tablet in the blister pack. When a pill or tablet is removed from the blister pack, the label may first be removed from the blister pack. This label may then be applied to a calendar in a field indicating a particular day. As each successive pill or tablet is removed from the blister pack, the labels may be applied to the calendar. Thus, a method of record keeping may be provided to keep track of a client's or patient's actual administration of a medication regimen.
In another embodiment, a record label sheet may be used to track a client's or patient's medication regimen. For example, when a medication is taken corresponding to an event indicated by a timing label, a sticker or label may be removed from the label sheet and applied to the calendar. Each time a medication is ingested or applied, another label or sticker may be applied to a particular day of the calendar. In certain embodiments, reusable indicators, such as magnetic labels or reusable tacks or pins, may be used to indicate on the calendar when a medication has been taken. Likewise, the calendar may be constructed from a variety of materials including paper, cardboard, wood, laminates, plastics, a bulletin board material, or some other like material. In another embodiment, the labels may include a particular texture such as braille which might be used by a blind or sight-impaired patient.
In another embodiment, a record marker may be used to keep track of events on the “medication communicator” chart. When a medication is taken, the record marker may be used to annotate the event on the calendar. In certain embodiments, the record marker may be a dry-erase marker and the calendar may be a dry-erase white board. The marker may also include a tether to tie the marker to the “medication communicator” chart or the calendar.
The present invention may be embodied in other specific forms without departing from its essential characteristics. The described embodiments are to be considered in all respects only as illustrative, and not restrictive. All changes that come within the meaning and range of equivalency of the description are to be embraced within their scope.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other objects and features of the present invention will become more fully apparent from the following description, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only typical embodiments of the invention and are, therefore, not to be considered limiting of its scope, the invention will be described with additional specificity and detail through use of the accompanying drawings in which:
FIG. 1 is a flow chart illustrating a generalized treatment process;
FIG. 2 is a schematic block diagram of one embodiment of a “medication communicator” chart in accordance with the present invention;
FIG. 3 is a schematic block diagram of one embodiment of a label sheet in accordance with the present invention;
FIG. 4 is a flow diagram of one embodiment of a method of using the present invention;
FIG. 5 is a flow diagram illustrating one embodiment of an education process in accordance with the present invention;
FIG. 6 is a flow diagram illustrating one embodiment of an assembly process in accordance with the present invention;
FIG. 7 is a schematic block diagram of an alternative embodiment of the “medication communicator” chart in accordance with the invention;
FIG. 8 is a schematic block diagram of labels that may be used in accordance with the “medication communicator” chart of FIG. 7;
FIG. 9 is a schematic block diagram of another alternative embodiment of the “medication communicator” chart in accordance with the invention;
FIG. 10 is a schematic block diagram of labels that may be used with the “medication communicator” chart illustrated in FIG. 9;
FIG. 11 is a schematic block diagram of another alternative embodiment of the “medication communicator” chart in accordance with the invention; and
FIG. 12 is a flow chart illustrating a method for maintaining records in a medication regimen in accordance with the present invention.
Large aggregations of Sargassum, when at sea, provide important habitat for numerous marine species of vertebrates and invertebrates. It is especially important for the young of several species of sea turtles. However, when large aggregations of Sargassum come ashore on beaches frequented by tourists, it is often viewed as a nuisance or even a health hazard. It then becomes a burden to beach management and has to be physically removed as quickly as possible. Many Gulf coast beaches suffer from Sargassum accumulation on a regular basis. Timely information on the size and location of the Sargassum habitat is important to developing coastal management plans. Yet, little is known about the spatial and temporal distribution of Sargassum in the Gulf of Mexico. There is no systematic program to assess the distribution of the macroalgae, therefore practical management plans are difficult to execute. In 2008, Gower and King of the Canadian Institute of Ocean Sciences, along with Hu of the University of South Florida, using satellite imagery, identified extensive areas of Sargassum in the western Gulf of Mexico. These were not confirmed with ground truthing data. To date, ground truthing observations have not been directly compared with the corresponding satellite images to confirm that it was in fact Sargassum, as the satellite images suggested. By building on the information and research methods of Gower and King, current ground truthing data taken from Texas Parks and Wildlife Gulf trawl sampling surveys was analyzed. In addition, shoreline information and imagery was used to substantiate the data derived from current Moderate-resolution Imaging Spectroradiometer (MODIS) Enhanced Floating Algae Index (EFAI) images. As part of the NASA-sponsored research project Mapping and Forecasting of Pelagic Sargassum Drift Habitat in the Gulf of Mexico and South Atlantic Bight for Decision Support, NASA satellite MODIS EFAI images provided by Dr. Hu were used to identify and substantiate corresponding floating Sargassum patches in the Gulf of Mexico. Using the most recent advances in technology and NASA satellite remote sensing, knowledge can be obtained that will aid future decision making for addressing Sargassum in the Gulf of Mexico by substantiating the data provided by satellite imagery. Findings from this research may be useful in developing an early warning system that will allow beach managers to respond in a timely manner to Sargassum events.
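The detection described above rests on a floating-algae index computed from MODIS band reflectances. As a rough, illustrative sketch of the underlying idea (the band wavelengths and detection threshold below are standard FAI-style values assumed for the example, not parameters taken from this thesis or from the EFAI itself), the NIR reflectance is compared against a linear baseline interpolated between the red and shortwave-infrared bands:

```python
import numpy as np

# Band center wavelengths (nm) commonly used for MODIS FAI
# (red, NIR, SWIR); treat these as illustrative assumptions.
LAMBDA_RED, LAMBDA_NIR, LAMBDA_SWIR = 645.0, 859.0, 1240.0

def floating_algae_index(r_red, r_nir, r_swir):
    """FAI = NIR reflectance minus a linear baseline between red and SWIR.

    Floating vegetation such as Sargassum reflects strongly in the NIR,
    so it shows up as a positive departure from the water baseline.
    """
    frac = (LAMBDA_NIR - LAMBDA_RED) / (LAMBDA_SWIR - LAMBDA_RED)
    baseline = r_red + (r_swir - r_red) * frac
    return r_nir - baseline

# Toy 2x2 scene: open water (low NIR) vs. one Sargassum-like pixel
red  = np.array([[0.02, 0.02], [0.02, 0.03]])
nir  = np.array([[0.01, 0.01], [0.01, 0.12]])  # bottom-right pixel peaks in NIR
swir = np.array([[0.005, 0.005], [0.005, 0.02]])

fai = floating_algae_index(red, nir, swir)
mask = fai > 0.02  # illustrative detection threshold
print(mask)
```

Ground truthing, as in the thesis, then amounts to checking whether pixels flagged by such a mask coincide with in-situ Sargassum observations.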
Subject: Sargassum; Remote Sensing; Gulf of Mexico
Citation: Tabone, Wendy (2011). Ground Truthing Sargassum in Satellite Imagery: Assessment of Its Effectiveness as an Early Warning System. Master's thesis, Texas A&M University. Available electronically from http://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10543.
People with social anxiety experience fear and anxiety in response to social situations, such as parties, work gatherings, meeting new people, eating in public, and work presentations. They often worry a lot about what other people think, and fear that they'll do something embarrassing in front of people.
Social anxiety often means you worry before and after social situations too, not just during them. And when you are there, you might experience lots of physical symptoms of anxiety such as sweating or blushing, and worry that other people can notice this and will judge you negatively. This often leads to avoidance of social situations when possible.
Ruminating after being in a social situation by playing the scenario over in your head and analysing it, often cringing at what you said.
Often avoiding social situations when possible (i.e. turning down an invite to a party), or enduring them with intense fear.
Experiencing a range of physical symptoms of anxiety during social situations, such as trembling, shaking, sweating, blushing, heart racing, and feeling nauseous.
Feeling self-conscious during social interactions, i.e. having a conversation, meeting unfamiliar people, being observed, or performing in front of others.
Worrying before and during social situations - about whether you might do something humiliating or embarrassing.
Cognitive Behavioural Therapy for Social Anxiety Disorder begins with understanding how your social anxiety developed, how it presents itself, and what maintains it.
We then work on reducing your symptoms of anxiety and self-consciousness by using a variety of CBT tools and techniques (and I can include Mindfulness and Compassion Focused Therapy techniques too if useful).
And lastly we focus on practicing being in social situations to build up your confidence, and challenging any unhelpful negative beliefs you hold about yourself. | https://natalieenglander.com/therapy-for-social-anxiety |
Medical Moment- What is the Heme Biosynthetic Pathway? Why are some porphyrias in the liver and others erythropoietic? What does it have to do with hemoglobin?
The heme biosynthetic pathway is one of the key metabolic pathways that leads from the simple building blocks of the amino acid glycine and the dicarboxylic acid succinic acid to the formation first of 5-aminolevulinic acid [ALA], then to the monopyrrole porphobilinogen [PBG], then to porphyrinogens and porphyrins [four pyrrole rings strung together and closed on themselves], and, in the eighth step, to the insertion of an iron atom into the center of the protoporphyrin ring to form heme. Nearly all cells of the human body contain heme, which is essential to carry out many essential functions. Heme functions mainly as a small molecule that binds to proteins to form a large class of proteins called ‘hemoproteins.’ Hemoglobin is one such essential hemoprotein; it is found mainly in developing and adult red blood cells, where it functions to carry oxygen absorbed from air in the lungs to cells and tissue throughout the body. Hemoglobin also functions to carry carbon dioxide, a waste product formed by the metabolic functions of most cells, from these cells back to the lungs, where the carbon dioxide is released into the expired air and where the hemoglobin again picks up more oxygen.
Most of the heme that is synthesized in the human body is made in developing red blood cells, to provide for the formation of heme for hemoglobin. In the erythropoietic porphyrias, the major site of the overproduction of heme precursors is the developing red blood cells. Thus, the name erythropoietic porphyria.
The other major organ where heme is made is the liver. When the major overproduction of heme precursors occurs in the liver, the disorder is called a hepatic porphyria. | https://porphyriafoundation.org/purple-light-blog/medical-moment/ |
What are the Different Motherboard Form Factors?
Microsoft MTA O/S | CompTIA A+ Exam objectives 1.2: sizes.
Introduction to Motherboards
A motherboard (MB), also known as a Mainboard, system board or logic board, is the central or primary circuit board in a Personal Computer (PC). It is an extremely complex electronic system that every device in a computer system connects to in order to send and receive data. A typical motherboard is made up of a main microprocessor, called the CPU, two or more DIMM slots to hold memory modules, support chips called the Chipset, controller ports for connecting storage drives, expansion slots for adding connections, and integrated input and output ports for connecting external devices.
Also known as a mainboard, system board, mobo or MB, here’s how a motherboard looks like:
Motherboard Form Factors
Motherboard form factors refer to the layout, features, and size of a motherboard. While there are dozens of form factors for desktop computers, most of them are either obsolete or developed for specialized purposes.
As a result, almost all consumer motherboards sold today belong to one of these form factors: ATX, Micro-ATX, Mini-ITX and EATX
What Does Each One Mean?
Let's begin with the “standard”-sized motherboard, the ATX. ATX stands for “Advanced Technology eXtended” and was developed as far back as 1995. If you own, or have owned, a regular-sized PC, there's a good chance it has an ATX motherboard. This makes ATX the “regular” choice when purchasing a PC or motherboard.
From ATX, motherboards get either bigger or smaller in size. Going upward, you have the EATX motherboard (Extended ATX) which adds more to the ATX board and is slightly larger as a result. Going the other way, you have the Micro ATX which is smaller than the ATX. After that is the Mini ITX (“Information Technology eXtended”) which is even smaller than the Micro ATX.
ATX
The most popular standard for PC motherboards is ATX, which stands for Advanced Technology Extended. ATX motherboards are considered to be full-size with up to seven PCI/PCI Express (PCIe) expansion slots. Expansion slots are needed for things like graphics cards, sound cards, NVMe PCIe Solid State Drives (SSDs), and various peripherals. They also provide up to eight slots for RAM.
MINI-ITX
If you need a computer that is really small then you should look to Mini-ITX. These boards are primarily used in small form factor (SFF) computer systems where the entire computer must fit in a cabinet or on a bookshelf or otherwise be very portable. Typical uses include home theater PCs (HTPCs) where low power consumption means less noise from cooling fans and LAN gaming where you need something that is easy to carry around. Many new CPUs include integrated graphics eliminating the need for a dedicated graphics card if you aren’t after high resolution and/or high frame rates. This is good, because the Mini-ITX standard allows for just one PCI expansion port. To take full advantage of the smaller form factor you may need to find something other than a standard ATX power supply as they are generally too large for small Mini-ITX cases.
EATX
On the other hand, if space is not your concern, but performance and reliability are then eATX is for you. The e stands for extended making this an Extended Advanced Technology Extended motherboard. Boy that’s a mouthful. Generally these are used for enterprise-class high-performance workstations and servers. While it’s the same height as an ATX motherboard, it is 86 mm (3.39 inches) wider. This additional space is generally used for a second CPU, but single CPU boards are also available. They also have eight memory slots and up to seven PCI expansion slots, but using an older 64-bit PCI standard called PCI-X (PCI Extended).
Micro ATX
As computer technology developed, the market changed and demand for small yet powerful motherboards grew enormously. The Micro ATX was developed with the same ATX form factor in mind, and because these motherboards were inexpensive, demand increased rapidly.
Pros and Cons of each form factor

ATX
Pros:
- Excellent overclocking potential
- Easy to find compatible components
- Usually features great aesthetics
Cons:
- A little expensive
- Requires a lot of space

Micro-ATX
Pros:
- Very affordable
- Somewhat portable
- Small enough for on-desk setups
- Decent overclocking
Cons:
- Lower RAM capacity than ATX
- Not ideal for multi-GPU setups

Mini-ITX
Pros:
- Affordable
- Very portable (ideal for LAN parties)
- Makes a great HTPC
Cons:
- Not a great choice for overclocking
- Minimal RAM capacity
- No multi-GPU support

EATX
Pros:
- Enthusiast-tier overclocking
- More PCIe lanes
- High RAM capacity
- Ideal for 4-way GPU builds, servers, and high-end workstations
Cons:
- Very expensive
- Requires a lot of space
Motherboard Form Factor Comparison Chart

| Spec | ATX | Micro ATX | EATX | Mini-ITX |
|---|---|---|---|---|
| Maximum Size | 30.5 x 24.4 cm (12 x 9.6 in) | 24.4 x 24.4 cm (9.6 x 9.6 in) | 30.48 x 33.02 cm (12 x 13 in) | 17 x 17 cm (6.7 x 6.7 in) |
| RAM Slots | 2 to 8 | 2 to 4 | 8 | 2 |
| RAM Type | DIMM | DIMM | DIMM | DIMM |
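The size differences above are what determine whether a board physically fits a given case. As a quick illustrative sketch (the dimensions are the chart's values rounded to millimetres; the function name and case sizes are made up for the example), you could encode the form factors and check compatibility:

```python
# Form-factor footprints (width x depth, mm), rounded from the chart above.
FORM_FACTORS = {
    "ATX":       (305, 244),
    "Micro ATX": (244, 244),
    "EATX":      (305, 330),
    "Mini-ITX":  (170, 170),
}

def boards_that_fit(case_max_mm):
    """Return the form factors whose footprint fits within a case's
    maximum supported board size (width, depth in mm)."""
    cw, cd = case_max_mm
    return [name for name, (w, d) in FORM_FACTORS.items()
            if w <= cw and d <= cd]

# A typical mid-tower that supports up to ATX-sized boards:
print(boards_that_fit((305, 244)))
```

Here the EATX board is excluded because its 330 mm depth exceeds the 244 mm an ATX-only case supports, which mirrors the "requires a lot of space" trade-off noted earlier.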
Grey is a neutral color that combines well with practically any other color, very much the same as black. Color schemes with grey depend mostly on the shade of grey worn. Light grey shows contrast of brighter colors well, while darker grey combines better with lighter colors.
Light grey displays purples, yellows, blues, and reds well, while dark grey is a good base for silvers, whites, blacks and pastel colors. Both light and dark grey enhance patterned and textured garments in contrast. Men's suits in grey are a good "canvas" to present blue, white, pink or yellow shirts combined with plain, patterned or colorful contrasting ties. | https://www.reference.com/beauty-fashion/colors-well-grey-pants-a5d1c86bcd845df1 |
A study published in the journal GigaScience shows how the blind salamander (Proteus anguinus) manages to adapt to the dark, aquatic environment in which it lives. This species lives in underwater caves of Europe, and around the year 1600 it was nicknamed the “baby dragon” by the locals.
Now, using X-ray computed tomography scans, the scientists were able to generate 3D reconstructions of the soft tissue in the salamander’s head, allowing them to observe the changes that have taken place in the animal’s body over time.
One of the most relevant aspects highlighted by the researchers is the resilience of this species. Living in an underground environment, one of its main adaptations has been its resistance to starvation, allowing it to survive up to 10 years without eating. The animal also has both gills and lungs, unlike most amphibians.
“We accessed various collections to cover stages of development, from larvae to adult specimens. Therefore, the data can be used to study the differences in development and evolution between stages. Additionally, making salamander data accessible allows for an exemplary comparison between cave-dwelling and surface-dwelling paedomorphic salamanders,” the study notes.
The ancestors of this species were surface dwellers and had eyes, however, when they began to live in lightless caves, the selective pressure to retain vision disappeared. In this way, the visual organs of the salamander became smaller and incomplete, so the amphibian became blind.
In any case, and as an inheritance from its ancestors, the vision is present in the first stages of life of this specimen; loss of this ability occurs from youth to adulthood. | https://moneytrainingclub.com/study-explains-how-this-salamander-survives-in-the-dark-digital-trends-spanish/ |
Jet-lagged and appearing just a little surprised at the unusually vociferous welcome at his sold-out guitar clinic, Robben Ford strapped on his black Sakashta and plugged directly into a Fender Super Reverb amp.
And for the next hour and a half, he proved once and for all that tone comes from the head, heart and hands. The man exudes soul. Describing his style as ‘freeform but with a method’, Robben began by talking about his early years studying the saxophone. Growing up in the small town of Ukiah, CA, he listened to the local radio station, KUKI, “or kooky”, as he says with a laugh.
His parents also joined a record club, where he was exposed to Ravel’s Bolero and Dave Brubeck’s Take 5. Listening to saxophonist Paul Desmond on Take 5 made him want to play the alto. Playing the saxophone for 11 years, Robben learned to read music, but admitted that his reading skills did not transfer readily to the guitar. Teaching himself to play the guitar was a far more intuitive process, he says, and he learned by listening to the first Paul Butterfield Blues Band album featuring Mike Bloomfield. Listening intently to Bloomfield’s playing became a significant turning point, and for a while Ford reckons he sounded a lot like his hero.
Having become a household name himself, and a guitar hero to many, Ford nonchalantly described his style as a mixture of folk-blues and jazz, a musical fusion that has served him well. Elaborating further, Ford emphasized the need to experiment and make mistakes in order to create a personal style. Likening his approach to something akin to fingerpainting on the guitar, he was emphatic that music should come from a place of feeling and not just from technique.
When asked about his practice schedule, Ford replied that he practiced intensely at first. He joked that he learned his first ‘hip’ blues chord from looking at the picture on the cover of the first Paul Butterfield Blues Band album, where Mike Bloomfield was holding down a dominant 9th chord. After that early epiphany, Ford decided to bone up on his chordal knowledge. Laughing, he recalled getting hold of Mel Bay’s Jazz Chords Vol. 1 book and starting to use the jazzier chord voicings he learned when he began playing with Charlie Musselwhite. To demonstrate, Ford then launched into an elaborate jazz-blues progression, throwing a variety of chord substitutions into the mix.
Delving into his improvisational approach, Ford described how he learned a few scales plus some standard bebop licks, boiling everything down to ii-V progressions. Ford assured his audience that the language of music was actually very simple, and that, literally, it could all be learned in a couple of weeks. Emphasizing the need for simplicity and the importance of finding one’s own voice, Ford offered that although musicians diligently transcribed and learned Herbie Hancock and John Coltrane licks, it rarely evolved into finding their own voice. Doing it his own way, he says, has kept him unique.
Asked about his current amplification setup for tours, Robben expressed his preference for Fender Super Reverbs, explaining that his setup when he was with Jimmy Witherspoon’s group consisted of a Gibson L5 archtop into a Super Reverb amp. With good speakers and matched tubes, the Super Reverb, he says, is his favorite. When asked about pedals and effects, Ford was emphatic that they hindered one from finding one’s own sound. Having no pedals when he began, he says, enabled him to focus on his tone, and he encouraged every guitar player in the audience to do away with pedals, at least for a while.
Delving into his sophisticated soloing style, he spoke about his fondness for the diminished scale, which he learned from jazz guitarist Larry Coryell when Ford was 19 years old. Coryell described it to him as the half-tone/whole-tone scale, and Ford started practicing it immediately, making up some of his own licks. He says he could instantly hear that the b9 on the dominant 7th chord reminded him of ideas jazz trumpeter Miles Davis used in his own playing.
Following a tasty demonstration of some lines that outlined the changes to a blues progression perfectly, Robben explained how the diminished scale acted as a transition to the IV chord in a blues. Elaborating further, he talked about finding the common tones in the diminished scale that moved seamlessly to the next chord, and how they can be used in soloing when going to the IV and the V chord as well.
Pharmacokinetic profile and metabolism of N-nitrosobutyl-(4-hydroxybutyl)amine in rats.
N-Nitrosodibutylamine and its omega-hydroxylated metabolite N-nitrosobutyl(4-hydroxybutyl)amine (NB4HBA) induce tumors in the urine bladder of different animal species through their common urinary metabolite N-nitrosobutyl(3-carboxypropyl)amine (NB3CPA), resulting from the oxidation of the alcoholic group of NB4HBA to a carboxylic group. NB4HBA disappearance from blood, the formation of its main metabolites, NB3CPA and NB4HBA-glucuronide (NB4HBA-G), and their urinary excretion, were investigated in rats after an i.v. dose of 1 mg/kg (5.7 mumol/kg). NB3CPA and NB4HBA-G formation was readily detectable 2 min after treatment and levels were still measurable at 120 and 30 min, respectively. The parent compound disappeared from blood 90 min after injection. The NB4HBA blood concentration-time profile was adequately described by a one-compartmental linear model. NB4HBA half-life was 8 min, total body clearance and renal clearance were 86.1 and 0.22 ml/min/kg, respectively. The 0-96-h urinary excretion of NB4HBA was 0.3% of the administered dose. NB3CPA half-life was 15 min; NB3CPA and NB4HBA-G urinary excretion were 36 and 11.7%, respectively, urinary excretion of known compounds accounting for less than 50%. After i.v. injection of NB3CPA equimolar to the NB4HBA dose, only 50% of unchanged compound was recovered in the urine and after NB4HBA-G, 41% of the administered dose was excreted unchanged, NB3CPA accounting for 10%. Thus NB3CPA and NB4HBA-G might undergo further biotransformation, suggesting that NB3CPA may not be the ultimate carcinogen responsible for urinary bladder tumor induction.
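The reported constants can be cross-checked against the one-compartmental linear model mentioned in the abstract. A minimal sketch in Python (the derived rate constant, volume of distribution and predicted concentrations are back-of-the-envelope values computed from the reported half-life and clearance, not numbers taken from the study):

```python
import math

# Reported values for NB4HBA after a 1 mg/kg i.v. dose (one-compartment model)
half_life_min = 8.0    # elimination half-life (min)
clearance = 86.1       # total body clearance (ml/min/kg)
dose = 1.0             # mg/kg

# First-order elimination rate constant: k = ln(2) / t_half
k = math.log(2) / half_life_min

# Apparent volume of distribution follows from CL = k * Vd
vd = clearance / k     # ml/kg

# Predicted blood concentration over time: C(t) = C0 * exp(-k * t)
c0 = dose / (vd / 1000.0)   # mg/L, dose divided by Vd expressed in L/kg
for t in (0, 8, 30, 90):
    print(f"t = {t:3d} min  C = {c0 * math.exp(-k * t):.4f} mg/L")
```

With these inputs the predicted concentration at 90 min is a small fraction of a percent of the initial value, consistent with the parent compound disappearing from blood by that time.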
Xinhua News Agency reported that the Standing Committee of the National People’s Congress of China voted to amend the criminal law, introducing prison terms for coercing or inciting athletes to use doping.
Under the adopted amendment to Article 355, any person who lures or incites athletes to use doping in domestic or international competitions can go to prison for up to three years and will also be fined.
More severe penalties are reserved for those who organize the use of doping by athletes or force them to do so. The amendment will enter into force on March 1, 2021.
More Books:
Language: en
Pages: 152
Books about Galois Theory
Language: en
Pages: 176
A clear, efficient exposition of this topic with complete proofs and exercises, covering cubic and quartic formulas; the fundamental theorem of Galois theory; insolvability of the quintic; Galois's Great Theorem; and computation of Galois groups of cubics and quartics. Suitable for first-year graduate students, either as a text for a course
Language: en
Pages: 190
Galois theory is a mature mathematical subject of particular beauty. Any Galois theory book written nowadays bears a great debt to Emil Artin’s classic text "Galois Theory," and this book is no exception. While Artin’s book pioneered an approach to Galois theory that relies heavily on linear algebra, this book’s
Language: en
Pages: 122
Foundations of Galois Theory is an introduction to group theory, field theory, and the basic concepts of abstract algebra. The text is divided into two parts. Part I presents the elements of Galois Theory, in which chapters are devoted to the presentation of the elements of field theory, facts from
Language: en
Pages: 294
This 1984 book aims to make the general theory of field extensions accessible to any reader with a modest background in groups, rings and vector spaces. Galois theory is regarded amongst the central and most beautiful parts of algebra and its creation marked the culmination of generations of investigation.
Personnel vacancies and Big Pharma allies in the Biden administration threaten a landmark executive order on competition.
July 28, 2021
Coronavirus, Ethics in Government, Executive Branch, Pharma, Revolving Door
Revolver Spotlight: Elizabeth Fowler
Fowler, a former Johnson & Johnson executive, is the latest Biden hire to spin through Pharma’s revolving door.
July 22, 2021
Anti-Monopoly, Independent Agencies, Intellectual Property, Pharma, Trade Policy
The Industry Agenda: Big Pharma
In 2019, Gallup found that the pharmaceutical industry was “the most poorly regarded industry in Americans’ eyes,” and rightfully so. Pharmaceutical companies often set drug prices exorbitantly high, including life-saving drugs which patients literally cannot go without, such as insulin. This includes older drugs that are cheaper to produce — such as epinephrine (emergency medication used to treat severe allergic reactions and asthma attacks). These firms achieve this by stifling competition at the consumer’s expense, jealously protecting their money-makers from the generics which the pharmaceutical system is supposed to develop after a patent expires.
May 19, 2021
Revolver Spotlight: Ellisen Turner
If appointed, Turner would be a transparent and flagrant case study in the workings of the revolving door, which means he would be right in line with the IP orthodoxy PTO has upheld.
May 17, 2021
Revolver Spotlight: Kevin Rhodes
When choosing the next PTO director, the Biden administration should rule out those who have a history of prioritizing profits and corporate interests over public health and safety. One such individual is Kevin Rhodes, an ally of Big Pharma who has vigorously defended efforts to keep drug prices high. His current employer, 3M, has abused its monopoly on the military earplug market to sell overpriced and faulty products to veterans. This should be immediately disqualifying for any future PTO director. Here are a few of the most alarming aspects of Kevin Rhodes’s career:
April 23, 2021 | The American Prospect
Place Human Lives Over Pharma’s Property Rights
The Biden administration is divided over whether to waive trade protections for Big Pharma—with Commerce Secretary Gina Raimondo as a key industry ally.
FIELD OF INVENTION
BACKGROUND
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Example 1
Example 2
Example 3
Example 4
Example 5
Example 6
Example 7
Example 8
The present disclosure relates to a method of making a coated body using a PVD-method. The PVD-method includes a sequence varying the substrate bias voltage. The present disclosure also relates to a coated body made according to the method.
PVD coatings, especially on cutting tools, are well known in the art. The most commonly known PVD techniques are arc evaporation and magnetron sputtering. It is known in the art to vary the substrate bias voltage depending on the coating composition. Different target compositions can require different substrate bias voltages.
Another PVD depositing technique where the substrate bias voltage is varied is the uni- and bipolar pulsed techniques where the substrate bias voltage is varied at a high frequency.
US 2007/0275179 A1 discloses deposition of an aperiodic, multilayered coating having a MX/LX/MX/LX . . . laminar structure. The coating produced contains at least one electrically isolating layer. The coating is deposited with Bipolar Pulsed Dual Magnetron Sputtering (BPDMS) where the pulse times are in the range of μs.
Variations of the substrate bias voltage in order to improve PVD coatings have also been tested.
US2007/0218242 discloses a PVD-coating having variations in compressive stress within the coating. The compressive stress variation is obtained by varying the substrate bias voltage.
It is desired to obtain a coating having an improved wear resistance.
There is a constant striving to further improve the properties of PVD coatings to meet the increasing demands on improved wear resistance and increased tool life.
It is further desired to obtain a coating having an increased tool life.
Applicants discovered that improved wear resistance and increased tool life can be obtained by depositing a PVD coating by applying a sequence varying the substrate bias voltage in a certain pattern.
An aspect of the present invention relates to a method of making a coated body including a coating and a substrate where onto the substrate a coating is deposited, using a PVD deposition process. The coating includes a nitride, carbide, oxide, boride or mixtures thereof, of one or more elements selected from groups IVb, Vb, VIb of the periodic table and Al, Y and Si. The deposition process includes at least one sequence of varying the substrate bias voltage, while maintaining the active targets, where the sequence of varying the substrate bias voltage includes a subsequence S_i of:

depositing at a first substrate bias voltage, B_i, for a deposition time, T_i, of between 10 seconds and 60 minutes, then, during a ramping time, R_i, of between 10 seconds and 40 minutes, while depositing, gradually changing the substrate bias voltage to a second substrate bias voltage B_(i+1), where |B_(i+1) − B_i| ≧ 10 V,

where the subsequence, S_i, is repeated until i = n, where i = 0, 1, 2, . . . n, n ≧ 2, and where each new subsequence starts the deposition at the same substrate bias voltage used when ending the previous subsequence.
The value n is suitably 2≦n≦1000, more particularly 6≦n≦100 and yet more particularly 10≦n≦20.
The substrate bias voltages B_1, B_2, etc. are suitably between −10 and −300 V and particularly between −20 and −200 V.
The difference between the first and second substrate bias voltage in absolute value, |B_(i+1) − B_i|, is preferably ≧ 40 V and more particularly ≧ 70 V, but ≦ 290 V.
The deposition time, T_i, is preferably between 30 seconds and 30 minutes and more particularly between 1 and 15 minutes.
The ramping time, R_i, is preferably between 20 seconds and 20 minutes and more particularly between 30 seconds and 10 minutes.
As used herein, “by gradually changing the substrate bias voltage” refers to changing the substrate bias voltage either continuously or incrementally.
During the deposition process the active targets are maintained. Maintaining the active targets refers to continued use of the same targets through out the sequence of varying the substrate bias voltage.
In one embodiment, the sequence includes two different subsequences, A and B, alternated throughout the whole sequence of varying the substrate bias voltage. The two subsequences will then be:

A: Depositing at a substrate bias voltage, B_1, for a deposition time, T_1, of between 10 seconds and 60 minutes, then, during a ramping time, R_1, of between 10 seconds and 40 minutes, while depositing, gradually changing the substrate bias voltage to a substrate bias voltage, B_2;

B: Depositing at the substrate bias voltage, B_2, for a deposition time, T_2, of between 10 seconds and 60 minutes, then, during a ramping time, R_2, of between 10 seconds and 40 minutes, while depositing, gradually changing the substrate bias voltage to the substrate bias voltage B_1, where |B_1 − B_2| ≧ 10 V.

The subsequences A and B are alternated. One example of this embodiment is shown in FIG. 1a.
In yet another embodiment, the sequence of varying the substrate bias voltage is built up by subsequences such that the substrate bias voltage during deposition at deposition time T_i in the first, third, fifth etc. subsequence is gradually increasing, while the substrate bias voltage during deposition at deposition time T_i in the second, fourth, sixth etc. subsequence is also gradually increasing. One example of this embodiment is shown in FIG. 1b.
In yet another embodiment, the sequence of varying the substrate bias voltage is built up by subsequences such that the substrate bias voltage during deposition at deposition time T_i in the first, third, fifth etc. subsequence is gradually decreasing, while the substrate bias voltage during deposition at deposition time T_i in the second, fourth, sixth etc. subsequence is also gradually decreasing. One example of this embodiment is shown in FIG. 1c.
In yet another embodiment, the absolute value |B_(i+1) − B_i| is increasing for each subsequence. One example of this embodiment is shown in FIG. 1d.
In yet another embodiment, the sequence is built such that substrate bias voltages, deposition times and ramping times are varied randomly. One example of this embodiment is shown in FIG. 1e.
The method can also include a mixture of one or more of the above described embodiments.
The composition of the coating deposited is determined by the target composition and the process gas present in the deposition chamber. The coating deposited, including, for example during the at least sequence of varying the substrate bias voltage, is suitably a nitride, carbide, oxide, boride or mixtures thereof of one or more elements selected from groups IVb, Vb, VIb of the periodic table and Al, Y and Si. In particular embodiments, the coating deposited is a nitride of one or more elements selected from groups IVb, Vb, VIb of the periodic table and Al, Y and Si, and in more particular embodiments, a nitride of one or more of Ti, Al, Cr, Si and Y.
In one embodiment, the coating deposited is (Ti,Al)N. In a certain embodiment, the (Ti,Al)N composition is (Ti1−xAlx)N, where x suitably is between 0.2 and 0.9, particularly between 0.4 and 0.8, and more particularly between 0.5 and 0.7.
In another embodiment, the coating deposited is (Ti,Al,Cr)N. In a certain embodiment, the (Ti,Al,Cr)N composition is (Al1−x−yTixCry)N, where x is between 0.05 and 0.25, particularly between 0.10 and 0.20, and where y is between 0.05 and 0.30, particularly between 0.10 and 0.25, and 0.30 < x + y < 0.70.
In yet another embodiment, the coating deposited is (Ti,Al,Cr,Si)N.
The method can be applied to all common PVD techniques, like cathodic arc evaporation, magnetron sputtering, high power pulsed magnetron sputtering (HPPMS), ion plating etc., in particular cathodic arc evaporation or magnetron sputtering. Process parameters, other than the substrate bias voltage, can be conventional in the art for depositing PVD-coatings onto substrates and depend on the specific deposition equipment, coating composition etc. Typically, the deposition temperature varies between 100 and 900° C.
The pressure during deposition is typically between 0.1 and 10 Pa of the process gas present. The process gas can be one or more of O2, N2, Ar, C2H2, CH4 or silicon containing gases like, for example, trimethylsilane, depending on the aimed coating composition.
Suitable substrates include cutting tools, like cutting tool inserts, or round tools such as drills, end mills, taps etc. In certain embodiments, the substrate is made of any of cemented carbide, cermets, ceramics, cubic boron nitride, polycrystalline diamond or high speed steels. In more certain embodiments, the substrate is made of cemented carbide.
In one embodiment, the substrate can be pre-coated with an inner layer deposited directly onto the substrate to ensure a good adhesion to the substrate. The inner layer can include a pure metal and/or a nitride, particularly Cr, Ti, CrN or TiN. The inner layer can have a thickness of 0.005-0.5 μm, particularly 0.02-0.2 μm, and is deposited within the same coating process as the rest of the coating.
In one embodiment, the method can further include deposition of other PVD layers without the sequence of varying the substrate bias voltage, including, for example, at conventional deposition conditions. These additional deposition sequences can be performed either prior to or after the sequence with varying the substrate bias voltage. These additional deposition sequences can take place in the same deposition apparatus as the rest of the deposition steps.
In one embodiment, the method can further include one or more additional sequences where the active targets are changed between each sequence, including, for example, where during the sequence of varying the substrate bias voltage the active targets do not change, but the active targets are changed if a new sequence is started.
The total coating thickness is between 0.5 and 20 μm, particularly between 0.5 and 8 μm and more particularly between 1 and 6 μm.
All thicknesses given herein refer to measurements conducted on a reasonably flat surface being in direct line of sight from the targets. For inserts, being mounted on sticks during deposition, it means that the thickness has been measured on the middle of a side directly facing the target. For irregular surfaces, such as those on for example, drills and end mills, the thicknesses given herein refer to the thickness measured on any reasonably flat surface or a surface having a relatively large curvature and some distance away from any edge or corner. For example, on a drill, the measurements have been performed on the periphery, and on an end mill the measurements have been performed on the flank side.
In one embodiment, the method further includes a post treatment step. The post treatment step can for example, be brushing, blasting, shot peening, etc.
In one embodiment, the sequence of varying the substrate bias voltage is as follows:
a) Deposition at a substrate bias voltage of between −120 to −80 V, particularly between −110 and −90 V for a period of 2 to 10 minutes, particularly between 4 and 8 minutes;
b) During a period of 30 seconds and 4 minutes, particularly between 1 and 3 minutes, increasing the substrate bias voltage to −220 to −180 V, particularly between −210 and −190 V;
c) Deposition at a substrate bias voltage of between −220 to −180 V, particularly between −210 and −190 V for a period of 2 to 10 minutes, particularly between 4 and 8 minutes;
d) During a period of 30 seconds and 4 minutes, particularly between 1 and 3 minutes, decreasing the substrate bias voltage to −120 to −80 V, particularly between −110 and −90 V;
Steps a) to d) are repeated until the desired coating thickness is reached.
In yet another embodiment, the sequence of varying the substrate bias voltage is as follows:
a) Deposition at a substrate bias voltage of between −90 to −60 V, particularly between −80 and −70 V for a period of 2 to 10 minutes, particularly between 4 and 8 minutes;
b) During a period of 30 seconds and 4 minutes, particularly between 1 and 3 minutes, increasing the substrate bias voltage to −170 to −130 V, particularly between −160 and −140 V;
c) Deposition at a substrate bias voltage of between −170 to −130 V, particularly between −160 and −140 V; for a period of 2 to 10 minutes, particularly between 4 and 8 minutes;
d) During a period of 30 seconds and 4 minutes, particularly between 1 and 3 minutes, decreasing the substrate bias voltage to −90 to −60 V, particularly between −80 and −70 V;
Steps a) to d) are repeated until the desired coating thickness is reached.
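The alternating hold-and-ramp pattern described by steps a) to d) traces out a bias-versus-time profile that is easy to sketch. A minimal illustration in Python (the `bias_profile` helper, the sampling step and the specific −100 V/−200 V, 6-minute/2-minute values are our illustrative choices within the ranges recited above, not part of the patent):

```python
def bias_profile(b_low, b_high, t_dep, t_ramp, n_pairs, dt=0.5):
    """Sample the substrate bias (V) over time (min) for alternating
    subsequences: hold at b_low, ramp to b_high, hold at b_high, ramp back."""
    samples = []
    t = 0.0
    start, end = b_low, b_high
    for _ in range(2 * n_pairs):          # each pass is one subsequence
        hold_end = t + t_dep
        while t < hold_end:               # deposition at constant bias
            samples.append((round(t, 2), start))
            t += dt
        ramp_end = t + t_ramp
        while t < ramp_end:               # gradual (here: linear) ramp while depositing
            frac = 1 - (ramp_end - t) / t_ramp
            samples.append((round(t, 2), start + frac * (end - start)))
            t += dt
        # next subsequence starts at the bias the previous one ended on
        start, end = end, start
    return samples

# Mid-range values from the embodiment above: -100 V / -200 V, 6 min holds, 2 min ramps
profile = bias_profile(-100, -200, t_dep=6, t_ramp=2, n_pairs=2)
print(profile[0], profile[-1])
```

Plotting such a profile reproduces the square-wave-with-ramps patterns sketched schematically in FIG. 1.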
Another aspect of the invention relates to coated bodies made according to embodiments of the method described above. The sequences varying the bias are displayed as a layered structure which can be seen when using Scanning Electron Microscopy (SEM) or Transmission Electron Microscopy (TEM). For example, FIG. 2 shows a substrate (1), pre-coated with an inner layer (2), a coating deposited according to an embodiment of the present invention (3) and an outer layer (5). The coating (3) includes sequences (4) varying the substrate bias, and has a layered appearance.
Cemented carbide end mills with the geometry R216.34-10050-BC22P were coated with PVD arc evaporation using Ti0.33Al0.67 targets. The substrates were first subjected to an etching process, prior to deposition of a starting layer of Ti having a thickness of approximately 0.050 μm.

After that, deposition of the (Ti,Al)N coating took place. The coating was deposited at a temperature of 600° C. and at a N2 pressure of 1.0 Pa. The substrate bias voltage was varied according to the following sequence:
a) Deposition at −100V for 6 minutes
b) During a period of 2 minutes, increasing the substrate bias voltage to −200 V
c) Deposition at −200V for 6 minutes
d) During a period of 2 minutes, decreasing the substrate bias voltage to −100 V
Steps a) to d) were repeated until the coating reached a coating thickness on the flank side of 2.8 μm.
The end mills are herein referred to as Invention Example 1.
Cemented carbide end mills of the same geometry and composition as in Example 1 were coated with PVD arc evaporation using Ti0.33Al0.67 targets. The substrates were first subjected to an etching process, prior to depositing a starting layer of Ti having a thickness of approximately 0.050 μm. After that, deposition of the (Ti,Al)N coating took place. The coating was deposited at a temperature of 600° C., at a N2 pressure of 1.0 Pa, and at a constant substrate bias voltage of −100 V until a final coating thickness of 3.8 μm was reached. The end mills according to Example 2 are herein referred to as Reference 1.
End mills according to Examples 1 and 2 respectively, were tested in a semi-finishing cutting operation in steel at the following cutting conditions:
Material: SS2244
Quantification: milled length
Vc = 300 m/min
ap = 10 mm
ae = 1 mm
fz = 0.1 mm/tooth
Note: Coolant
Tool life criterion: Vb/Vbmax ≧ 0.15/0.20

A third variant, Comparative 1, of the same cemented carbide end mill as in Examples 1 and 2 (composition and geometry), which had been deposited by an external supplier with a homogenous Al65Ti35N layer as analyzed with EDS and with a thickness of 3.2 μm on the flank side, was also included as reference. Three end mills of each variant were tested and the results in Table 1 give the average of the three:
TABLE 1
Variant                Tool life (m)
Invention Example 1    323
Reference 1            217
Comparative 1          220
Table 1 clearly shows that the end mills of Invention Example 1, have a considerably longer tool life than prior art, including, for example, Reference 1 and Comparative 1.
End mills according to Examples 1 and 2 were tested in a semi-finishing cutting operation in stainless steel at the following cutting conditions:
Material: 316Ti
Quantification: maximum wear in mm at 200 meters milled length
Vc = 105 m/min
ap = 10 mm
ae = 1 mm
fz = 0.071 mm/tooth
Note: Coolant

A third variant, Comparative 1, of the same cemented carbide end mill as in Examples 1 and 2 (composition and geometry), which had been deposited by an external supplier with a homogenous Al65Ti35N layer as analyzed with EDS and with a thickness of 3.2 μm on the flank side, was also included as reference. Two end mills of each variant were tested and the results in Table 2 give the average of the two:
TABLE 2
Variant                Max wear (mm)
Invention Example 1    0.10
Reference 1            0.16
Comparative 1          0.17
Table 2 clearly shows that the end mills of Invention Example 1, have a considerably better wear resistance, including, for example, a lower maximum wear, than both Reference 1 and Comparative 1.
Threading inserts of the geometry, 266RG-16MM01A150M, were coated according to the method described in Example 1 to a coating thickness of 2.3 μm and according to the method in Example 2 to a coating thickness of 2.1 μm, respectively. They are herein referred to as Invention Example 2 and Reference 2. They were tested in an intermittent threading application as follows:
Material: SS2541
Quantification: Number of threads
Vc = 110 m/min
Number of passes = 8
Length of thread = 25 mm
Tool life criterion: Vb/Vbmax ≧ 0.15 mm
Two inserts of each variant were tested and the results in Table 3 give the average of the two:
TABLE 3
Variant                Tool life (number of threads)
Invention Example 2    115
Reference 2            65
Table 3 clearly shows that the threading inserts of Invention Example 2, have a longer tool life than prior art inserts, including, for example, Reference 2.
Cemented carbide threading inserts with the geometry R166.OG-16VM01-002 were coated using PVD arc evaporation with Ti0.30Al0.70 targets and Cr30Al70 targets. The substrates were first subjected to an etching process, prior to depositing a starting layer of TiN having a thickness of approximately 0.10 μm.

After that, deposition of the (Ti,Cr,Al)N coating took place. The coating was deposited at a temperature of 600° C. and at a N2 pressure of 1.0 Pa. The 3-fold rotation of the substrates resulted in alternating layers of TiAlN and AlCrN with sublayer thicknesses in the range of 0.2 nm to 30 nm. The substrate bias voltage was varied according to the following sequence:
a) Deposition at −75 V for 6 minutes
b) During a period of 2 minutes, increasing the substrate bias voltage to −150 V
c) Deposition at −150 V for 6 minutes
d) During a period of 2 minutes, decreasing the substrate bias voltage to −75 V
Steps a) to d) were repeated until the coating reached a coating thickness on the flank face of 2.2 μm. The deposition cycle was ended with a thin TiN color layer of approximately 0.2 μm. The threading inserts are herein referred to as Invention Example 3.
Cemented carbide threading inserts with the geometry R166.OG-16VM01-002 were deposited with TiN at 450° C. using an ion plating method. The threading inserts are herein referred to as Reference 3.
Threading inserts of Examples 6 and 7, and threading inserts with the same geometry but deposited according to Example 1, referred to herein as Invention Example 4, were tested in a threading operation as follows:
Material: 316Ti
Quantification: Number of threads
Vc = 90 m/min
Number of passes = 14
Length of thread = 30 mm
Tool life criterion: Vb/Vbmax ≧ 0.15 mm
Two threading inserts of each variant were tested and the results in Table 4 give the average of the two:
TABLE 4
Variant                Coating thickness (μm)    Tool life (number of threads)
Invention Example 3    2.2                       96
Invention Example 4    2.3                       96
Reference 3            2.8                       45
Table 4 clearly shows that the threading inserts of Invention Example 3 and Invention Example 4, have a considerably longer tool life than prior art, including, for example, Reference 3.
BRIEF DESCRIPTION OF THE FIGURES
FIGS. 1a-1e show different embodiments of the present invention where the substrate bias is varied in different patterns.

FIG. 2 is a schematic drawing of one embodiment of the present invention, showing how it would look in a Scanning Electron Microscope.
Let's say you have a biological sample with trace amounts of DNA in it. You want to work with the DNA, perhaps characterize it by sequencing, but there isn't much to work with. This is where PCR comes in. PCR is the amplification of a small amount of DNA into a larger amount. It is quick, easy, and automated. Larger amounts of DNA mean more accurate and reliable results for your later techniques.
PCR can be used to create a DNA "fingerprint," which is unique to each individual. These DNA fingerprints can be useful in real-world applications relating to paternity/maternity, kinship, and forensic testing.
The technique was developed by Nobel laureate biochemist Kary Mullis in 1984 and is based on the discovery of the biological activity at high temperatures of DNA polymerases found in thermophiles (bacteria that live in hot springs).
Most DNA polymerases (enzymes that make new DNA) work only at low temperatures. But at low temperatures DNA is tightly coiled, so the polymerases don't stand much of a chance of getting at most parts of the molecules.
But these thermophile DNA polymerases work at 100°C, a temperature at which DNA is denatured (in linear form). This thermophilic DNA polymerase is called Taq polymerase, named after Thermus aquaticus, the bacterium it is derived from.
Taq polymerase, however, has no proofreading ability. Other thermally stable polymerases, such as Vent and Pfu, have been discovered to both work for PCR and to proofread.
You'll need four things to perform PCR on a sample:
1. The target sample. This is the biological sample you want to amplify DNA from.
2. Primers. Short strands of DNA that adhere to the target segment. They identify the portion of DNA to be multiplied and provide a starting place for replication.
3. Taq polymerase. This is the enzyme that is in charge of replicating DNA. This is the polymerase part of the name polymerase chain reaction.
4. Nucleotides. You'll need to add nucleotides (dNTPs) so the DNA polymerase has building blocks to work with.
There are three major steps to PCR and they are repeated over and over again, usually 25 to 75 times. This is where the automation is most appreciated.
1. Your target sample is heated. This denatures the DNA, unwinding it and breaking the bonds that hold together the two strands of the DNA molecule, leaving you with single stranded DNA (ssDNA).
2. Temperature is reduced and the primer is added. The primer molecules now have the opportunity to bind (anneal) to the pieces of ssDNA. This labels the portions of DNA to be amplified and provides a starting place for replication.
3. New pieces of ssDNA are made. Taq polymerase catalyzes the generation of new pieces of ssDNA that are complementary to the portions marked by the primers. The job of Taq polymerase is to move along the strand of DNA and use it as a template for assembling a new strand that is complementary to the template. This is the chain reaction in the name polymerase chain reaction.
PCR is so efficient because it multiplies the DNA exponentially for each of the 25 to 75 cycles. A cycle takes only a minute or so and each new segment of DNA that is made can serve as a template for new ones.
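The doubling arithmetic behind this efficiency can be sketched in a few lines of Python. This is an idealized model — real reactions eventually plateau as primers and nucleotides run out:

```python
def amplify(starting_copies: int, cycles: int) -> int:
    """Ideal PCR yield: every cycle doubles each template strand."""
    copies = starting_copies
    for _ in range(cycles):
        copies *= 2  # each newly made segment serves as a template next cycle
    return copies

# A single DNA molecule after 30 one-minute cycles:
print(amplify(1, 30))  # 1073741824 — over a billion copies
```

Thirty cycles of perfect doubling already turn one molecule into more than a billion, which is why even trace samples yield enough DNA for sequencing.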
Perhaps the most important thing to remember is to be very aware of contamination. If, for example, you unknowingly slough off a piece of skin into your sample, then your DNA may be amplified in the PCR reaction. Here are some other factors to optimize your results with PCR:
1. Annealing temperature. Start at the low end of what you think will work, then move up as necessary. If the temperature is too low, the primers will anneal less specifically and you'll get too many bands when you run your sample on a gel. If the temperature is too high you will get no results and your gel will be blank. You want to be about 3°C to 5°C below the melting temperature (Tm). A rough formula for determining Tm is Tm = 4(G + C) + 2(A + T).
2. Magnesium concentration. You want your Mg2+ concentration to be about 1.5mM to 3mM. If you go too high, the polymerase will make more mistakes.
3. Think carefully about primer design. Both primers should have approximately the same Tm so they both anneal at the same temperature. Two out of three bases on the 3' end should be G or C to get good hybridization (G and C have three H-bonds so you get better polymerization). Lastly, avoid primer dimers, which occur when the primers have ends that will anneal to each other. This will produce NO product.
4. More is not necessarily better. More polymerase produces more nonspecific product, so don't just carelessly dump in a bunch of polymerase. Additionally, PCR reactions don't work if there is too much DNA.
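The Tm formula from point 1 and the 3'-end rule from point 3 can be sketched as quick Python helpers. The primer sequences below are made-up illustrations, not validated primers:

```python
def wallace_tm(primer: str) -> int:
    """Rough melting temperature: Tm = 4(G + C) + 2(A + T)."""
    p = primer.upper()
    return 4 * (p.count("G") + p.count("C")) + 2 * (p.count("A") + p.count("T"))

def gc_clamp_ok(primer: str) -> bool:
    """At least two of the last three 3' bases should be G or C."""
    tail = primer.upper()[-3:]
    return sum(base in "GC" for base in tail) >= 2

forward = "GCTAGCTAGGACGC"   # hypothetical primer
reverse = "ATCGGCATTAGCGG"   # hypothetical primer

print(wallace_tm(forward))   # 46 (9 G/C bases, 5 A/T bases)
print(gc_clamp_ok(forward))  # True: 3' end is ...CGC
# Both primers should anneal at the same temperature:
print(abs(wallace_tm(forward) - wallace_tm(reverse)) <= 4)  # True
```

Checks like these catch mismatched primer pairs before you waste a reaction; checking the two primers against each other for complementary ends (primer dimers) follows the same pattern.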
RT-PCR
Taq polymerase does not work on RNA samples, so PCR cannot be used to directly amplify RNA molecules. The enzyme reverse transcriptase (RT), however, can be combined with traditional PCR to allow for the amplification of RNA molecules. After you add your RNA sample to the PCR machine, add a DNA primer as usual and allow it to anneal to your target molecule. Then add RT along with dNTPs, which will elongate the DNA primer and make a cDNA copy of the RNA molecule. Finally, run the PCR reaction as usual. The product of RT-PCR is a double-stranded DNA molecule analogous to the target segment of the RNA molecule.
Guruatma "Ji" Khalsa. (2010, April 12). PCR (polymerase chain reaction). ASU - Ask A Biologist. Retrieved January 21, 2022 from https://askabiologist.asu.edu/pcr-polymerase-chain-reaction
Potentially surprising fact: there’s a positive correlation between ice cream consumption and drowning deaths, as one goes up, so does the other. At first glance this sounds pretty horrifying, but, as my stats-teacher-husband loves to remind me, they have nothing to do with each other directly. Instead, their correlation is governed by a third “confounding variable,” which is the weather. As it gets hotter, more people eat ice cream. Likewise, the heat causes more people to go swimming, thereby increasing the likelihood of a drowning death.
Since I was a kid, my dad has enjoyed reminding me that “correlation does not mean causation.” The ice cream-drowning story was the first to drive the notion home for me.
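The confounding-variable idea can be reproduced in a toy simulation — the numbers here are invented purely for illustration: temperature drives both ice cream sales and drownings, which never influence each other, yet the two still correlate strongly:

```python
import random
import statistics

random.seed(1)

# Temperature is the confounder: it drives both outcomes independently.
temps = [random.uniform(10, 35) for _ in range(500)]
ice_cream = [t * 2 + random.gauss(0, 3) for t in temps]    # sales rise with heat
drownings = [t * 0.5 + random.gauss(0, 2) for t in temps]  # more swimmers in heat

def pearson(x, y):
    """Pearson correlation coefficient, computed by hand."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Strong positive correlation, with zero direct causation in the model:
print(round(pearson(ice_cream, drownings), 2))
```

The two outcome variables never appear in each other's formulas, yet the correlation comes out large — exactly the trap the "correlation does not mean causation" slogan warns about.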
A new paper from Northeastern psychology professor Judith Hall and her colleagues at the University of Rochester brought the idea to mind once again. The paper, “The relation between intelligence and religiosity: A meta-analysis and some proposed explanations,” reports the team’s analysis of 63 separate studies carried out over the last century, and confirms a long-examined hunch that more intelligent people tend to be less religious.
I’d been puzzling over how to write about the article for a while, when I found myself eating sushi with that stats teacher I mentioned earlier. This is the sort of subject that is bound to get people’s knickers into twists so I wanted to be careful. I kept returning to the same phrase: “The studies they reviewed show that intelligent people tend to be less religious,” I kept telling the teacher-husband. I couldn’t disentangle that truth from the question of causation. It sounded so much like they were inextricably linked. Then the husband-teacher asked me a couple important questions: “Does being intelligent cause people to be less religious?….Does being religious cause people to be less intelligent?”
The answer to those questions seemed unlikely, to me, but very difficult to prove with any amount of certainty. If you wanted to prove that religion causes lower intelligence, or vice versa, you’d have to do some kind of double-blind, randomized trial, conferring intelligence on some people and looking at whether they became more or less religious, or vice versa. Despite the obvious challenges with this setup, I’m sure no institutional review board would ever go for it.
This all then brought to mind a phenomenal story I heard on NPR the day before on the TED Radio Hour. Margaret Heffernan, a British writer and businesswoman, told the story of a researcher named Alice Stewart, who, in the 1950s, wanted to figure out why the rate of childhood cancer was increasing, particularly among affluent families. Not knowing at all where to begin, she sent a ginormous questionnaire to parents of children with and without cancer. She asked them every question she could possibly think of. One of these was a question to mothers: did they have an obstetric x-ray while pregnant? It turned out there was a serious statistical correlation between answering "yes" to this question and having a child with cancer.
At this point, Stewart’s data was akin to what Hall and her team have got now: Responses from people of varying levels of intelligence about their religious beliefs and practices. But lucky for Stewart, she would have an easier (if not at all easy…it took another quarter century before clinics would stop giving pregnant women x-rays) go at proving a causation behind the correlation she’d observed.
With the intelligence and religion question, things are less straightforward. “The present findings are correlational and cannot support any causal relation,” Hall and her colleagues write in the article. But they do present a few hypotheses as to what might explain the statistically significant correlation they and their predecessors observe. In one scenario, they note that intelligent people, who also tend to sway towards nonconformity, may be less religious because religiosity is a societal norm that they are trying to steer away from. Another possibility, they suggest, is that the cognitive style of people with high IQs is typically analytical. And when it comes to spiritual beliefs, there’s simply no way to empirically test the questions at hand.
Finally, the authors suggest that something called “functional equivalence” makes religion fundamentally unnecessary to people with higher IQs: Other studies have shown (through statistics of course) that religion helps people make sense of the world around them, offering a sense of order and external control. It helps us feel better about ourselves and others, helps us delay gratification, lowers our sense of loneliness, confers safety and security in times of distress….Other studies have shown all of these things to be true of high IQ as well. So, the authors suggest that perhaps where religion serves a particular purpose for some people, intelligence does the same for others.
I should also mention that the meta-analysis mostly looked at studies of Christian religions in western societies…because that's where most of the research has been done in the field so far. Looking at other cultures and religions could obviously open a whole bunch of doors for more explanations and hypotheses.
So, what does all of this tell us? Basically, don’t jump to conclusions: | https://news.northeastern.edu/2013/08/30/intelligence-religion-and-the-twisted-up-mind-game-that-is-statistics/ |
The endometrium, the lining of the uterus, thickens during an ovulatory cycle and, if a pregnancy is established, provides crucial support and nourishment to a developing embryo. Underdeveloped or hormonally out-of-phase uterine linings can lead to implantation failure or early miscarriage. Low-grade infection of the endometrium, chronic endometritis, can also cause early miscarriage.
Historically, endometrial biopsies, or sampling of the endometrium, were done frequently in an attempt to determine if the endometrium was "out of phase". Inaccurate results and a painful, expensive procedure led to infrequent use of the test. Currently, blood testing for progesterone and evaluating the length of the luteal phase enable us to more accurately diagnose a potentially out-of-phase endometrium.
Today, we use endometrial biopsies for new types of testing that have developed recently. Patients can take medication to ensure a more comfortable sampling procedure for these important tests.
ERA TEST: We are able to sample the endometrium after a course of medication to determine the optimal timing for future embryo transfers.
E-Tegrity Test: The endometrium can be tested for the presence of substances that are thought to be important for embryo implantation.
Endometrial Pathology: The endometrium can be sampled to check for endometrial hyperplasia, a precursor to endometrial cancer. Endometrial hyperplasia is more commonly seen in patients with PCOS.
Chronic Endometritis: An endometrial biopsy can be used to diagnose Chronic Endometritis, a low-grade infection of the uterine lining which can lead to miscarriage. Patients who test positive can be treated with a simple course of antibiotics. | https://www.fertilitydr.com/fertility-test-endometrial/ |
"The Wellington Catholic District School Board is committed to the success of all students. We continue to build a welcoming and inclusive environment for each student, regardless of challenge, need or exceptionality. Inclusion is a cornerstone of our philosophy. An inclusive community is one where all students feel supported and valued in their journey of learning. We are so excited that you and your family will be joining us!"
"All are Called" is a resource designed to help guide you in your transition to high school. Please contact Mr. Jeff Mawhinney, Special Education Resource Teacher - Lead for your copy. | https://ololguidance.weebly.com/all-are-called.html |
The long-awaited European General Data Protection Regulation ("GDPR") entered into force on 24 May 2016 and, following a two year transition period, will apply from 25 May 20181. The GDPR will replace Directive 95/46/EC (the "Directive") on which the Irish Data Protection Regime2 is based. The GDPR will be directly applicable in all Member States without the need for implementing national legislation and it is hoped that the use of a Regulation will bring greater harmonisation throughout the European Union ("EU").
The GDPR does not fundamentally change the core rules regarding the processing of personal data which are contained in the Directive but rather seeks to expand and strengthen the rights of data subjects. The GDPR aims to make businesses more accountable for data privacy compliance and offers data subjects extra rights and more control over their personal data. This bulletin aims to summarise some of the main changes that will arise under the GDPR.
Consent
Obtaining consent for the lawful processing of personal data is more onerous under the GDPR. Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject's agreement to the processing of personal data relating to him or her. The data controller is required to be able to demonstrate that consent was given.
If consent is given in the context of a written statement/declaration which also concerns other matters, the request for consent must be presented in a manner which is clearly distinguishable from other matters, in an intelligible and easily accessible form using clear and plain language.
A data subject will have the right to withdraw his or her consent at any time.
Data Subjects' Rights
The GDPR largely retains and in some cases enhances existing rights of data subjects whilst also introducing new rights relating to (i) data portability, (ii) restricting processing and (iii) the right to be forgotten.
Data Portability
Under the right to data portability, data subjects have the right to obtain their data and to have their data transmitted to another data controller without hindrance where technically feasible. This right only applies to personal data provided to the data controller and does not therefore extend to personal data generated by the data controller.
Restricting Processing
The GDPR gives data subjects the right to restriction of processing in certain circumstances; i.e. a data subject can ask a data controller to restrict processing. This right to restrict processing replaces the right to blocking which is contained in the Directive.
Right to be Forgotten
Data subjects have the right to be forgotten without undue delay in certain circumstances including where the data subject withdraws his/her consent and there is no other legal ground for the processing. A data controller must provide information on the action it has taken to comply with the request to be forgotten without delay and in any case at least within one month of the receipt of the request from the relevant data subject.
Accountability and Governance
Under the GDPR, the data controller must be able to both comply with the principles relating to processing of personal data and also be able to demonstrate its compliance with the GDPR.
The data controller and data processor must implement appropriate technical and organisational measures to ensure that data is processed in a manner that ensures appropriate security and confidentiality of the personal data.
The GDPR also requires data controllers and data processors to retain records of their processing activities, which should be made available to the supervisory authority on request. Organisations with fewer than 250 employees are exempt from the record retention obligation unless the processing they carry out is likely to result in a risk to the rights and freedoms of data subjects, the processing is not occasional, or the processing includes special categories of data or personal data relating to criminal convictions and offences.
Where processing operations are likely to result in a high risk to the rights and freedoms of natural persons, the data controller must, prior to the processing, carry out a data protection impact assessment of the envisaged processing operations on the protection of personal data to evaluate, in particular, the origin, nature, particularity and severity of that risk. The precise meaning of 'high risk' has not yet been defined. The GDPR sets out some examples of circumstances which should be regarded as high risk processing however it is hoped that further guidance on this point will issue from the Article 29 Working Group in due course. Data controllers should consider the outcome of the assessment when determining the appropriate measures to be taken in order to demonstrate compliance with the GDPR.
International Data Transfers
The rules in the GDPR regarding the transfer of data outside the EEA are broadly similar to the current regime. The GDPR prohibits the transfer of personal data outside the EEA unless certain conditions can be satisfied. In particular, the consent exemption has been amended such that explicit consent is required where an entity wants to transfer personal data outside the EEA.
Territorial Scope
The GDPR expands the territorial scope of EU data protection law. The GDPR applies to both data controllers and data processers established in the EU regardless of whether the data processing takes place in the EU or not.
It also applies to the processing of personal data of data subjects who are in the EU by a data controller or data processor not established in the EU where the processing activities relate to:
- the offering of goods or services to such data subjects in the EU, irrespective of whether a payment of the data subject is required; or
- the monitoring of their behaviour as far as their behaviour takes place within the EU.
Where data controllers and data processors outside of the EU are caught by the new territorial rules, they will need to designate a representative in the EU unless they can avail of an exemption3. This representative must be established in one of the Member States in which the data subjects, whose personal data is processed in relation to the offering of goods or services to them, or whose behaviour is monitored, are located.
One Stop Shop
Under the GDPR, Member States must establish a supervisory authority that will be responsible for monitoring the application of the GDPR in order to protect the fundamental rights and freedoms of natural persons in relation to processing and to facilitate the free flow of personal data within the EU. The supervisory authority must be independent of the Member State and appointed for a minimum period of four years. There will also be a European Data Protection Board made up of one member from each of the supervisory authorities of each Member State.
In respect of cross-border processing, the GDPR also introduces the concept of the "one stop shop" whereby a lead supervisory authority will be appointed to the data controller or data processor that will cooperate with the other national supervisory authorities, where relevant. A business that carries out cross-border processing should be primarily regulated by the supervisory authority of the Member State in which it has its main establishment. There are circumstances in which the lead supervisory authority will be required to co-operate and consult with authorities of other Member States.
Data Processors
The GDPR will apply directly to data processors. The GDPR also expands the list of provisions data controllers must include in their contracts with data processors. This is a significant change as data processors were largely exempt from regulation under the Directive. Some of the main obligations imposed on data processors by the GDPR include the following:
- the obligation to appoint a representative if not established in the EU;
- the obligation to ensure certain minimum clauses in contracts with data controllers;
- the obligation to keep a record of all categories of processing activities carried out on behalf of a data controller;
- the obligation to cooperate with the supervisory authority;
- the obligation to notify the data controller in the event of a data breach without undue delay;
- the obligation to appoint a data protection officer, where applicable; and
- the obligation to comply with the rules on the transfer of personal data outside of the EU.
Data processors will now be liable for material or non-material damage suffered by any person as a result of an infringement of the GDPR. However, a data processor's liability will be limited to the extent that it has not complied with the data processor obligations of the GDPR or where it has acted outside or contrary to lawful instructions of the data controller.
Data Protection Officer
Under the GDPR, certain data controllers and data processors will need to appoint a Data Protection Officer ("DPO"). The entities that are caught by the requirement to appoint a DPO are (i) public authorities, (ii) data controllers and data processors whose core activities consist of regular and systematic monitoring of data subjects on a large scale or (iii) data controllers and data processors which consist of large scale processing of personal data. A group of undertakings may appoint a single DPO provided that the DPO is easily accessible from each establishment. The DPO may be an employee of the data controller or data processor or fulfil the tasks on the basis of a service contract. Details of the DPO shall be published by the data controller or data processor. The DPO is responsible for monitoring compliance with the GDPR and must report to the highest level of management within an entity.
Data Breach Notifications
The GDPR requires data controllers to notify the supervisory authority without undue delay, and where feasible, not later than 72 hours after having become aware of a personal data breach unless the breach is unlikely to result in a risk to the rights and freedoms of natural persons. If the data controller does not notify the supervisory authority within 72 hours, it must give reasons for the delay. The GDPR sets out what should be included in the notification to the supervisory authority. The data controller must document any personal data breaches.
When the personal data breach is likely to result in a high risk to the rights and freedoms of natural persons, the data controller must communicate the personal data breach to the data subject without undue delay.
Under the GDPR, the data processor must notify the data controller without undue delay after becoming aware of a personal data breach.
Sanctions
The GDPR increases the sanctions which may be imposed on organisations that breach EU data protection law. Organisations may now be subject to administrative fines of up to €20,000,000 or 4% of annual global turnover, whichever is higher. Administrative fines may be imposed in addition to, or instead of, the supervisory authority's corrective powers. The GDPR sets out a list of factors for supervisory authorities to consider when deciding on whether or not to impose a fine and the level of any fine to impose.
Currently in Ireland, the Data Protection Commission (the "DPC") does not have the power to impose administrative fines for infringements of data protection law. The DPC's power to issue fines under the GDPR will significantly increase the risk profile of data protection compliance/non-compliance.
Footnotes
1 The GDPR is accompanied by the Criminal Law Enforcement Data Protection Directive (2016/680) (which applies to the processing of personal data by law enforcement authorities) and must be implemented in all Member States by 6 May 2018 however it is not considered further in this article.
2 The Directive was transposed into Irish law by virtue of the Data Protection Act 1998 and the Data Protection Amendment Act 2003 (the "DPA").
3 There is a limited exemption to the obligation to appoint a representative where the processing is occasional, is unlikely to be a risk to individuals and does not involve large scale processing of sensitive personal data.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances. | https://www.mondaq.com/ireland/data-protection/570132/the-european-general-data-protection-regulation |
Universal Declaration of Human Rights
The Universal Declaration of Human Rights (UDHR) is a milestone document in the history of human rights. Drafted by representatives with different legal and cultural backgrounds from all regions of the world, the Declaration was proclaimed by the United Nations General Assembly in Paris on 10 December 1948 (General Assembly resolution 217 A) as a common standard of achievements for all peoples and all nations. It sets out, for the first time, fundamental human rights to be universally protected and it has been translated into over 500 languages.
## Preamble
Whereas recognition of the inherent dignity and of the equal and inalienable rights of all members of the human family is the foundation of freedom, justice and peace in the world,
Whereas disregard and contempt for human rights have resulted in barbarous acts which have outraged the conscience of mankind, and the advent of a world in which human beings shall enjoy freedom of speech and belief and freedom from fear and want has been proclaimed as the highest aspiration of the common people,
Whereas it is essential, if man is not to be compelled to have recourse, as a last resort, to rebellion against tyranny and oppression, that human rights should be protected by the rule of law,
Whereas it is essential to promote the development of friendly relations between nations,
Whereas the peoples of the United Nations have in the Charter reaffirmed their faith in fundamental human rights, in the dignity and worth of the human person and in the equal rights of men and women and have determined to promote social progress and better standards of life in larger freedom,
Whereas Member States have pledged themselves to achieve, in co-operation with the United Nations, the promotion of universal respect for and observance of human rights and fundamental freedoms,
Whereas a common understanding of these rights and freedoms is of the greatest importance for the full realization of this pledge,
Now, therefore,
The General Assembly,
Proclaims this Universal Declaration of Human Rights as a common standard of achievement for all peoples and all nations, to the end that every individual and every organ of society, keeping this Declaration constantly in mind, shall strive by teaching and education to promote respect for these rights and freedoms and by progressive measures, national and international, to secure their universal and effective recognition and observance, both among the peoples of Member States themselves and among the peoples of territories under their jurisdiction.
## Article 1
All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.
## Article 2
Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. Furthermore, no distinction shall be made on the basis of the political, jurisdictional or international status of the country or territory to which a person belongs, whether it be independent, trust, non-self-governing or under any other limitation of sovereignty.
## Article 3
Everyone has the right to life, liberty and security of person.
## Article 4
No one shall be held in slavery or servitude; slavery and the slave trade shall be prohibited in all their forms.
## Article 5
No one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment.
## Article 6
Everyone has the right to recognition everywhere as a person before the law.
## Article 7
All are equal before the law and are entitled without any discrimination to equal protection of the law. All are entitled to equal protection against any discrimination in violation of this Declaration and against any incitement to such discrimination.
## Article 8
Everyone has the right to an effective remedy by the competent national tribunals for acts violating the fundamental rights granted him by the constitution or by law.
## Article 9
No one shall be subjected to arbitrary arrest, detention or exile.
## Article 10
Everyone is entitled in full equality to a fair and public hearing by an independent and impartial tribunal, in the determination of his rights and obligations and of any criminal charge against him.
## Article 11
1. Everyone charged with a penal offence has the right to be presumed innocent until proved guilty according to law in a public trial at which he has had all the guarantees necessary for his defence.
2. No one shall be held guilty of any penal offence on account of any act or omission which did not constitute a penal offence, under national or international law, at the time when it was committed. Nor shall a heavier penalty be imposed than the one that was applicable at the time the penal offence was committed.
## Article 12
No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.
## Article 13
1. Everyone has the right to freedom of movement and residence within the borders of each state.
2. Everyone has the right to leave any country, including his own, and to return to his country.
## Article 14
1. Everyone has the right to seek and to enjoy in other countries asylum from persecution.
2. This right may not be invoked in the case of prosecutions genuinely arising from non-political crimes or from acts contrary to the purposes and principles of the United Nations.
## Article 15
1. Everyone has the right to a nationality.
2. No one shall be arbitrarily deprived of his nationality nor denied the right to change his nationality.
## Article 16
1. Men and women of full age, without any limitation due to race, nationality or religion, have the right to marry and to found a family. They are entitled to equal rights as to marriage, during marriage and at its dissolution.
2. Marriage shall be entered into only with the free and full consent of the intending spouses.
3. The family is the natural and fundamental group unit of society and is entitled to protection by society and the State.
## Article 17
1. Everyone has the right to own property alone as well as in association with others.
2. No one shall be arbitrarily deprived of his property.
## Article 18
Everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change his religion or belief, and freedom, either alone or in community with others and in public or private, to manifest his religion or belief in teaching, practice, worship and observance.
## Article 19
Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.
## Article 20
1. Everyone has the right to freedom of peaceful assembly and association.
2. No one may be compelled to belong to an association.
## Article 21
1. Everyone has the right to take part in the government of his country, directly or through freely chosen representatives.
2. Everyone has the right of equal access to public service in his country.
3. The will of the people shall be the basis of the authority of government; this will shall be expressed in periodic and genuine elections which shall be by universal and equal suffrage and shall be held by secret vote or by equivalent free voting procedures.
## Article 22
Everyone, as a member of society, has the right to social security and is entitled to realization, through national effort and international co-operation and in accordance with the organization and resources of each State, of the economic, social and cultural rights indispensable for his dignity and the free development of his personality.
## Article 23
1. Everyone has the right to work, to free choice of employment, to just and favourable conditions of work and to protection against unemployment.
2. Everyone, without any discrimination, has the right to equal pay for equal work.
3. Everyone who works has the right to just and favourable remuneration ensuring for himself and his family an existence worthy of human dignity, and supplemented, if necessary, by other means of social protection.
4. Everyone has the right to form and to join trade unions for the protection of his interests.
## Article 24
Everyone has the right to rest and leisure, including reasonable limitation of working hours and periodic holidays with pay.
## Article 25
1. Everyone has the right to a standard of living adequate for the health and well-being of himself and of his family, including food, clothing, housing and medical care and necessary social services, and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control.
2. Motherhood and childhood are entitled to special care and assistance. All children, whether born in or out of wedlock, shall enjoy the same social protection.
## Article 26
1. Everyone has the right to education. Education shall be free, at least in the elementary and fundamental stages. Elementary education shall be compulsory. Technical and professional education shall be made generally available and higher education shall be equally accessible to all on the basis of merit.
2. Education shall be directed to the full development of the human personality and to the strengthening of respect for human rights and fundamental freedoms. It shall promote understanding, tolerance and friendship among all nations, racial or religious groups, and shall further the activities of the United Nations for the maintenance of peace.
3. Parents have a prior right to choose the kind of education that shall be given to their children.
## Article 27
1. Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.
2. Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.
## Article 28
Everyone is entitled to a social and international order in which the rights and freedoms set forth in this Declaration can be fully realized.
## Article 29
1. Everyone has duties to the community in which alone the free and full development of his personality is possible.
2. In the exercise of his rights and freedoms, everyone shall be subject only to such limitations as are determined by law solely for the purpose of securing due recognition and respect for the rights and freedoms of others and of meeting the just requirements of morality, public order and the general welfare in a democratic society.
3. These rights and freedoms may in no case be exercised contrary to the purposes and principles of the United Nations.
## Article 30
Nothing in this Declaration may be interpreted as implying for any State, group or person any right to engage in any activity or to perform any act aimed at the destruction of any of the rights and freedoms set forth herein. | |
Showing results for: Waste and resource use
Food waste is common in both developing and developed countries. Estimates of the scale of waste and loss are between 30% and 40% of all food produced. Waste and loss occur during production, distribution and at the consumer stage. In richer nations, more food is wasted at the consumer level than in poorer countries: in Europe, an average of 95 kg of food is thrown out by each consumer each year. In developing countries much produce is lost due to a lack of suitable packaging and storage facilities (so-called post-harvest losses). According to the FAO, consumers in rich countries waste almost as much food (222 million tonnes) a year as the entire net food production of sub-Saharan Africa (230 million tonnes). Food waste also represents a waste of all the embedded resources involved in producing it (land, water, fossil fuel inputs, agrochemicals) and in this sense is also a source of 'unnecessary' GHG emissions.
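As a back-of-the-envelope check, the FAO comparison in this paragraph can be restated numerically. This sketch uses only the figures quoted above, with no external data:

```python
# Figures quoted above (FAO estimates, million tonnes per year).
rich_consumer_waste = 222   # food wasted by consumers in rich countries
ssa_net_production = 230    # entire net food production of sub-Saharan Africa

share = rich_consumer_waste / ssa_net_production
print(f"{share:.0%}")   # -> 97%, i.e. "almost as much" as stated above
```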
This report from the Food Ethics Council sets out how UK food businesses and government could learn from the Danish food system. Although Denmark and the UK have similarities, e.g. in climate, Denmark ranked 7th in the 2018 Food Sustainability Index while the UK ranked 24th.
This paper outlines the main sustainability challenges linked to nitrogen, including inadequate access to nitrogen fertiliser in some parts of the world and excessive fertiliser application in other areas, leading to water pollution, algal blooms and risks to human health. The paper argues that solving nitrogen problems would have co-benefits for other sustainability issues such as hunger, air, soil and water quality, climate and biodiversity.
This book presents case studies and guidance on extracting high-value compounds from waste and by-products from foods such as dairy, meat, sweet potato, cereals and olive oil.
This report from the UK charity Waste and Resources Action Programme (WRAP) provides the latest estimates for food losses and food waste in primary production (i.e. on farms) in the UK. It finds that 3% of food harvested is wasted at the farm stage (sent to waste treatment such as composting without first being used for another purpose, or left in the field) and 4% is surplus (material intended for food uses that ends up being redistributed to people, fed to animals or used for other purposes), making a total of 7%.
This report from the Food and Land Use Coalition proposes ten critical transitions that could enable the food system to provide healthy diets for nine billion people by 2050 while also protecting the climate and biodiversity. The transitions are estimated to provide over 15 times more social benefit than their investment cost, which is likely to be less than 0.5% of global GDP.
This paper quantifies the carbon emissions, water use and land use associated with the consumption of food excess to requirements, on the basis that overnutrition has sometimes been classified as a form of food waste. It finds high geographical variation in the environmental impacts of so-called excess food consumption, with impacts being an order of magnitude greater in Europe, North America and Oceania than in sub-Saharan Africa.
This paper reviews the literature on the supply chain of phosphorus, a nutrient required in agriculture, and finds that current reporting is inadequate regarding phosphorus reserves and resources, losses along the supply chain, environmental and sociopolitical externalities, and open access to data.
FCRN member Peter Alexander has co-authored this paper, which finds that incremental improvements in several areas of the food system (including production efficiency, reducing food waste and changing diets) could reduce agricultural land use by between 21% and 37%, depending on adoption rates.
Conservation NGO WWF has released the 40-minute film “Our planet, our business”, which sets out five principles for businesses to follow in order to protect nature and their own future.
Over 100 food organisations, including many supermarkets, have signed the “Step up to the Plate” pledge (organised by the UK’s Department for Environment, Food and Rural Affairs) to halve food waste by 2030, support a week of action in November 2019, empower citizens and change their individual habits so as to reduce food waste.
According to this paper, households in the Netherlands wasted 41kg of solid food per person in 2016 - a 15% decline since 2010. Furthermore, 57 litres per person of potable liquids such as coffee, tea and milk are disposed of via the sink or toilet each year. Rice, bread, pasta, vegetables and pastries are among the food types most likely to be wasted (as a percentage of purchased quantity).
Wageningen University and Research has formed a consortium together with several private companies to research the use of co-products and residues from the food sector and industry as animal feed. A particular research focus will be on increasing Europe’s self-sufficiency in feed materials.
This paper calculates the environmental impacts (climate change, acidification, eutrophication, land use, and water use) caused by either making a meal by using a meal kit (which contains pre-portioned ingredients for cooking a meal) or by buying the ingredients from a grocery store.
Free-range eggs in the Agbogbloshie slum in Ghana are contaminated with some of the highest levels ever measured (in eggs) of certain toxic substances due to the illegal dumping of electronic waste from Europe, according to this report from Swedish non-profit IPEN and US non-profit Basel Action Network.
This paper maps the potential for different subnational, national, or regional areas to reduce their agricultural dependence on imported phosphorus fertiliser by recycling manure or urban waste (including both human excreta and household and industrial wastes).
FCRN member Tom Quested of resource efficiency organisation WRAP Global recommends the REFRESH Community of Experts, which is an online platform to find and share information (such as best practices) on food waste prevention.
This policy briefing from EU food waste research project REFRESH outlines policy options for reducing food waste at the consumer level, based on both desktop research and a survey of households in four countries. | https://fcrn.org.uk/research-library/issues/waste-and-resource-use |
In an era of rising anti-globalization; bitter trade wars between the US and its partners, China and Mexico; Europe in the throes of rising populism; and an impending Brexit, it is reassuring to know that some of the world’s institutions are still global-minded and focused on how we can all work together to achieve economic progress.
Founded in 1961, the Organisation for Economic Co-operation and Development’s (OECD’s) mandate is to shape policies that stimulate economic prosperity and global trade. The OECD is a forum for countries committed to democracy and the market economy with convening powers that provide a platform to compare policy experiences, seek answers to common problems, identify good practices and coordinate domestic and international policy standards for its 36 members and up to another 150 countries.
Greg Medcraft is the Director for Financial and Enterprise Affairs at the OECD and is a champion for standard setting in the global community. As a former tier one European banker and securities specialist – in addition to former Australian Securities and Investments Commission (ASIC) Commissioner and International Organization of Securities Commissions (IOSCO) Chair – he has the technical knowledge and experience of the “engine room workings” of global markets regulation combined with the charm of a Saturday evening television compere and the soundbite narrative of a presidential candidate.
Last year, the OECD focused on the role of digital assets and blockchain technologies in transforming society. It convened global policymakers, regulators and industry blockchain and digital assets specialists in a number of forums over the year, culminating in the creation of the OECD Global Blockchain Policy Forum, which will convene again this year on September 12th and 13th in Paris.
I sat down with Greg to discuss his vision for the OECD’s Department for Financial and Enterprise Affairs, how the OECD is using its convening power to move the dial on global standards for blockchain, the importance of blockchain and digital assets, and how the power of the people can help industry set higher standards for conduct.
Lawrence Wintermeyer: The OECD Financial and Enterprise Affairs team has been busy, what are some of the innovations and highpoints from the past year?
Greg Medcraft: On top of our business-as-usual work supporting fair and efficient markets, we’ve established three priorities:
1. Blockchain
We have seen a huge demand from the private sector for regulatory certainty on blockchain-based services, products and assets, especially in the financial sector. Governments are now regularly seeking guidance and opportunities to exchange experience and collaborate.
The OECD’s Global Blockchain Policy Forum was the high point of this work last year; it was the first global conference to look at the policy impact of blockchain across the full range of government priorities, including tax, finance, supply chains and government services.
2. Trust in Business
This is a unifying theme for our work on markets and responsible business conduct. We’re looking to give investors, consumers and businesses the tools they need to direct resources towards business activities that build trust and meet social expectations - essentially harnessing the power of the crowd.
3. Infrastructure Financing
The world needs to invest around $95 trillion in infrastructure over the next decade and we need to mobilize private finance to achieve this. Again, our work here focuses on providing information to the market to direct resources towards infrastructure projects that are viable and responsible over their life cycle.
Wintermeyer: The OECD has an extraordinary global convening power with governments, policy makers and regulators – you have really focused on extending this invitation to industry innovators and disruptors – how has this gone?
Medcraft: The OECD’s main audience is governments; this is whom we direct our advice to. But the private sector is at the core of the priorities I’ve mentioned.
Engaging with industry is critical because businesses are affected by the policy standards and advice that the OECD creates. Business is often a big part of the solution to our shared challenges.
Take blockchain as an example. With cryptocurrencies and token offerings, governments have scrambled to respond to a host of real and perceived market issues.
The OECD responded by bringing regulators from across the globe to the table alongside entrepreneurs, start-ups and engineers at the forefront of blockchain innovation in a series of roundtables last year and most recently with the Financial Stability Board (FSB) in February.
This has been enormously beneficial. It has allowed the global policy community to improve their understanding of the technology - its risks, and its benefits - while taking a more considered approach to blockchain innovation in finance.
Wintermeyer: You are a big fan of using the voice of the community to help develop emerging standards for regulation in global financial services, what does “the power of the crowd” mean?
Medcraft: The power of the crowd recognizes that a business’s community – its customers, investors, employees and other stakeholders – have growing expectations around good business conduct – what you could call the social license to operate.
At the same time, the crowd has been empowered by social media and the 24-hour news cycle to monitor business behavior and hold it to account.
I see this as a particularly important development for business conduct and long-term value creation. Businesses now need to deliver good environmental, social and governance outcomes if they are to properly manage reputational risk and secure success in the long-term.
The social license is also constantly shifting, which means companies need to be responding a lot faster than it takes regulators to make rules.
The social license doesn’t replace the need for good market governance, but it is a powerful driver of good conduct and risk management on top of a robust regulatory regime.
Wintermeyer: How important is the OECD’s focus on the non-G20 members and countries when it comes to blockchain and digital assets. Can the blockchain and new forms of digital assets accelerate economic development?
Medcraft: Some of our OECD standards include over 150 countries as signatories. We work with countries of all sizes and levels of economic development.
This has been important in our blockchain work for two reasons:
1. Blockchain offers great potential for economic development. We’ve seen this in use-cases that support financial inclusion, like remittance services, and that support property rights like land titling, a pre-condition for economic development.
These are early days and the technology is not yet at scale, but we are seeing how smaller jurisdictions like Mauritius or Jamaica – as well as jurisdictions like France, which has passed ground-breaking legislation on digital assets – are especially willing to innovate.
2. Digital assets are global by nature, which means they can flow freely between jurisdictions.
As with our tax transparency work, we need countries of all sizes to work together if we are to build a global market which participants can trust and avoid opportunities for regulatory arbitrage.
Wintermeyer: How do you see the developments in blockchain and digital assets disrupting the mature financial services markets and top tier global banks? How does China figure in these developments?
Medcraft: Clearly banks are looking closely at how blockchain and digital assets can streamline activities and create efficiency and transparency in inter-bank operations, as we’ve seen with the Corda project, the Utility Settlement Coin and JPM Coin.
It’s likely that such large private networks, with assets that are backed by real-world assets, are the future of blockchain’s use in finance.
While not as open and inclusive as bitcoin, it is still a quantum leap in efficiency and transparency in banking and will probably pave the way for even more disruptive innovation like central bank digital currencies.
Developments in China really show where blockchain is headed. We’re seeing new fintech offerings marrying blockchain’s security and transparency with internet-enabled sensors and AI to develop a new generation of financial products, for example, automated livestock insurance.
Wintermeyer: The OECD has always worked closely with the International Monetary Fund (IMF) and the World Bank – what does this partnership seek to achieve?
Medcraft: International organizations like the IMF, World Bank and OECD have strong links because our work is complementary and each of us plays to our comparative advantages.
We’ve all been looking at fintech and blockchain from different angles:
- The IMF’s deep expertise in the global financial system has given it an excellent view of how fintech and blockchain could shape international financial flows and central bank priorities.
- The World Bank has leveraged its operational financial functions to develop our collective understanding of digital assets, as we saw from the blockchain-based bond issuance and ‘learning coin’ project developed with the IMF.
- At the OECD, we look at how we can help ensure fair and orderly financial markets – which means supporting things like good financial consumer protection and financial literacy in the digital age.
But actually, our true comparative advantage is the wide range of policy areas we cover, from finance to tax, health and agriculture, anti-corruption, public governance, etc.
When it comes to looking at blockchain’s impact between sectors and across the whole economy, the OECD is where the dots get connected.
Wintermeyer: You have dedicated your career to the development of standards, policies and regulations in both developed and developing economies. What is the single biggest lesson you have learned?
Medcraft: Things move quickly in business – more quickly than regulation and laws in this fast-paced digital era. In the past, industry often looked at implementing the minimum standards when it should have been implementing good standards above the minimum, especially cognizant of what the community expects.
With the power of the crowd, industry now has the tools to set high conduct standards in line with community expectations for customers, investors, employees, other stakeholders, and the environment – industry should also harness this opportunity to engage and work with global policy makers and regulators. | https://www.forbes.com/sites/lawrencewintermeyer/2019/06/06/the-power-of-the-crowd-a-new-approach-to-financial-regulation/ |
Due to their confinement to specific host plants or restricted habitat types, Auchenorrhyncha are suitable biological indicators to measure the quality of chalk grassland under different management practices for nature conservation. They can especially be used as a tool to assess the success of restoring chalk grassland on ex-arable land. One objective of this study was to identify the factors which most effectively conserve and enhance biological diversity of existing chalk grasslands or allow the creation of new areas of such species-rich grassland on ex-arable land. A second objective was to link Auchenorrhyncha communities to the different grassland communities occurring on chalk according to the NVC (National Vegetation Classification). Altogether 100 chalk grassland and arable reversion sites were sampled between 1998 and 2002. Some of the arable reversion sites had been under certain grazing or mowing regimes for up to ten years by 2002. Vegetation structure and composition were recorded, and Auchenorrhyncha were sampled three times during the summer of each year using a "vortis" suction sampler. Altogether 110 leafhopper species were recorded during the study. Two of the species, Kelisia occirrega and Psammotettix helvolus, although widespread within the area studied, had not previously been recognized as part of the British fauna. By displaying insect frequency and dominance as is commonly done for vegetation communities, it was possible to classify preferential and differential species of distinct Auchenorrhyncha communities. The linking of the entomological data with vegetation communities defined by the NVC showed that different vegetation communities were reflected by distinct Auchenorrhyncha communities. Significant differences were observed down to the level of sub-communities. The data revealed a strong positive relationship between the diversity of leafhopper species and the vegetation height.
There was also a positive correlation between the species richness of Auchenorrhyncha and the diversity of plant species. In that context it is remarkable that there was no correlation between vegetation height and botanical diversity. There is a substantial decrease in Auchenorrhyncha species richness from unimproved grassland to improved grassland and arable reversion. The decline of typical chalk grassland and general dry grassland species is especially notable. Consequently, the number of stenotopic Auchenorrhyncha species, which are confined to only a few habitat types, is drastically reduced with the improvement of chalk grassland. Improved grassland and arable reversion fields are almost exclusively inhabited by common habitat generalists. The decrease in typical chalk grassland plants due to improvement is mirrored in the decline of Auchenorrhyncha species, which rely monophagously or oligophagously on specific host plants. But even where suitable host plants re-colonize arable reversion sites quickly, there is a considerable delay before leafhoppers follow. That becomes especially obvious with polyphagous leafhoppers like Turrutus socialis or Mocydia crocea, which occur on improved grassland or arable reversion sites only in low frequency and abundance, despite wide appearance or even increased dominance of their host plants. These species can be considered the most suitable indicators to measure success or failure of long-term grassland restoration. A time period of ten years is not sufficient to restore species-rich invertebrate communities on grassland, even if the flora indicates an early success.
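The reported correlation between leafhopper species richness and vegetation height can be illustrated with a minimal Pearson-correlation sketch. The plot-level values below are invented for illustration; they are not the study's data:

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient for two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical plots: (vegetation height in cm, leafhopper species count).
height = [5, 10, 15, 20, 30, 40]
richness = [4, 6, 9, 11, 14, 18]

r = pearson(height, richness)
print(round(r, 3))   # a value near +1 reflects a positive relationship
```

On real survey data, an r close to +1 would correspond to the "strong positive relationship" described in the abstract, while an r near 0 would match the reported absence of correlation between vegetation height and botanical diversity.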
Searching database of minor planet names
The catalog of asteroids discovered at Klet can be searched in two ways. In the first, pick the starting letter of an asteroid's name from the listed alphabet. In the second, start typing a search expression (diacritics are allowed); once at least three letters are typed, the results are listed automatically. The search can also be restricted to asteroid names, catalogue numbers, dates of discovery or names of the discoverers.
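The behaviour described, with results appearing only once three letters have been typed and diacritic-insensitive matching across the name, number, date and discoverer fields, can be sketched as a small filter. The field names and sample records below are assumptions for illustration, not the site's actual implementation:

```python
import unicodedata

def fold(s: str) -> str:
    """Lower-case and strip diacritics so 'Kvíz' matches the query 'kviz'."""
    decomposed = unicodedata.normalize("NFKD", s.lower())
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

def search(records, query, fields=("name", "number", "discovered", "discoverer")):
    """Return matching records, but only once the query has >= 3 characters."""
    if len(query) < 3:
        return []          # mirrors the page: results appear at three letters
    q = fold(query)
    return [r for r in records if any(q in fold(r.get(f, "")) for f in fields)]

# Illustrative records only; the second entry is entirely made up.
records = [
    {"name": "Kvíz", "number": "8137", "discovered": "19.09.1979"},
    {"name": "Sample", "number": "0001", "discovered": "01.01.2000"},
]

print([r["name"] for r in search(records, "kvi")])   # -> ['Kvíz']
print(search(records, "kv"))                          # -> [] (too short)
```

Typing "kvi" without diacritics still matches "Kvíz" because both the query and the record fields are normalized before comparison.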
Kvíz
8137
1979 SJ
Discovered 19.09.1979 at the Klet observatory
MPC 32349 (8.08.1998)
https://ssd.jpl.nasa.gov/sbdb.cgi?sstr=8137;old=0;orb=1;cov=0;log=0;cad=0#orb
Named in memory of Zdeněk Kvíz (1932-1993), Czech astronomer. His early works dealt with meteor showers, although his main interest was in light curves of close binaries and eclipsing binaries. Beginning in 1968 he lived in Australia, working also in Switzerland. The Czech Astronomical Society prize for young astronomers bears his name. Name suggested by J. Tichá. | https://names.klet.org/en/detail/8137 |
New Chair of Communities Committee
Warwick Town Council’s Community and Cultural Committee meets every two or three months to support local community organisations, and guard the town’s cultural heritage.
A representative of Unlocking Warwick, the Town Council’s community volunteer group, is always invited to these meetings to report on our activities. At the recent meeting (5th June) Cllr Moira-Ann Grainger was elected to be the new Chair of the committee, replacing long-serving Cllr Mandy Littlejohn who has stepped down because of other commitments, but remains on the committee as Deputy Chair for continuity.
At the meeting, the committee members were warm in their appreciation of all the work carried out by the volunteers, including the monthly social gatherings ‘In the Ballroom’, the Warwick War Memorial Project, our Court House Tours, the compilation of the ‘What’s On in Warwick at a Glance’ long-term list of local events, assistance in the Visitor Information Centre, and the organisation of community events in the ballroom such as the Warwick Barn Dance coming up on July 20th and the Armistice Afternoon Tea on October 7th.
We look forward to working with Moira-Ann and her committee members to help promote the wonderful historic town of Warwick and to make the refurbished Court House in Jury Street a central hub of community activity. | https://www.unlockingwarwick.org/new-chair-of-communities-committee/ |
Humility treads the fine line between arrogance and self-deprecation. Humility, modesty and down-to-earth are synonyms deriving from the Latin ‘humus’, translatable as ‘grounded’ or ‘from the earth’. So, what makes someone humble? Are the humble meek or psychologically weak?
In today’s stressful world, greatly concerned with the pursuit of happiness, a bit of humility can deliver great reserves of inner strength. All spiritual traditions value humility and make it essential for a person to be humble to be able to receive divine benediction. The Bhagwad Gita, 13:8, lists humility as the first of the twenty qualities that comprise wisdom. Significantly, the Gita mentions the idea of humility by a negative definition to convey its subtlety: ‘Amanitvam’, absence of the craving for respect or absence of ego. Surprisingly, it is difficult to find Indian equivalents to the word humility in daily usage, while references to the concept are abundant in our scriptures. Many terms use 'neti', meaning ‘no me’ or ‘i am not', and give rise to words such as ‘viniti’ and ‘samniti’. Not surprisingly the Sanskrit word ‘ahamkar’ literally translates into ‘the-sound-of-i’, quite simply, the sense of the self or ego.
Gandhiji felt that humility is an essential virtue that must exist in a person for other virtues to emerge. To Swami Vivekananda, humility did not mean crawling on all fours and calling oneself a sinner. Instead, it meant recognising and feeling oneness with everyone and everything else in the universe, without inferiority or superiority or any other bias.
Although humility is deeply revered in most spiritual traditions, in interpersonal narrative or management lingua, it hasn’t found much salience. Amongst qualities of leaders, humility finds hardly any mention, although most of the world’s great leaders are themselves lessons in the art of it. Fortunately, a great deal of management and psychological research today is devoted to the role of humility in character building and leadership.
As workplaces tend to be aggressive arenas and breeding grounds for misogyny and other abuse, humble co-workers are valued. They put their interlocutors at ease, and it takes fear and trepidation out of social intercourse. From the interpersonal perspective, being humble facilitates trust, and builds relationships. The humble may be more talented, gifted, or skilled than anyone else and above all better learners and problem solvers. In fact, studies show that humility is more important as a predictive performance indicator than IQ. With humbleness comes a self-acceptance from grounding one’s worth in one’s intrinsic value as human beings rather than other trappings of power and wealth.
Is it possible to develop humility? We must first embrace our humanness and have an accurate understanding of our strengths and weaknesses. Expressing gratitude can induce humility in us, and humble people have a greater capacity for conveying gratitude. Holding nature in high esteem, recognising it is an overwhelming and awe-inspiring force reminds us of our own insignificance in the cosmic scale. Being curious and open to learning fosters humility. Emulating great people and imbibing from them what we lack in our own understanding can build our own reserves of humility. After all, as Socrates said, wisdom is, above all, knowing what we don’t know. | https://www.speakingtree.in/article/humility-opens-the-door-to-inner-strength |
Re: Theistic beliefs are not rationally justified
Originally Posted by Squatch347
We don't?
No, we don't have any examples of the change from non-existence to existence which is required for P1. Further, even if we accept that the rearranging of atoms is an information creation process, which we don't since there was information there before, the creation of information which you claim takes place when wood is changed to chair is not comparable to the change from non-existence to existence which you are applying in P1.
Originally Posted by Squatch347
If we go from the state in picture 1 to the state in picture 2, a chair has been created.
Again, this is equivocating between the usage of "created" in the change from wood to chair and the usage in P1 of "begin to exist" meaning a change from non-existence to existence. If you want to use the wood-chair comparison for the universe, you need to offer the "wood", or material from which the universe came.
Originally Posted by Squatch347
The rearranging of atoms is an information creation process.
The information process in wood-chair is not comparable to the change from non-existence to existence which you try to apply to the universe in P1.
Originally Posted by Squatch347
Thus we cannot simply dismiss that information and arrangement creation as irrelevant as it is fundamentally described in thermodynamics as an underlying law for the universe.
Sure we can, and we do. Information creation is simply not comparable to matter/energy/time/space/reality/universe creation.
Originally Posted by Squatch347
What's more, we can reject your objection further with the example of virtual particles (which cause Hawking radiation). These are particles that pop into existence at the quantum level and after a brief period annihilate themselves. These particles don't appear out of nothing as is sometimes described, but come from probabilistic fluctuations in the quantum foam that exists at the Planck-distance level of the universe.
For all practical purposes, virtual particles exist only in the mathematics of the models used to describe the measurements of real particles. They don't exist in the same sense as the existence following the change from non-existence which you are trying to apply in P1. Further, even if we accepted virtual particles as changing from demonstrably and actually not-existing to demonstrably and actually existing, this would still not be comparable to the change from non-existence to existence which you are applying to the universe, if for the sole reason that the virtual particles would begin to exist inside this universe. The way in which you are trying to apply P1 has no universe into which this universe began to exist. You're grasping at virtual straws, here, Squatch.
Originally Posted by Squatch347
Thus, we have two examples of the causation proposed in my argument.
No, we have two examples which prove yet again why you should leave the scientisting to real scientists. If all you can offer as evidence that anything at all can actually begin to exist is "creation of information when wood is changed to chair", and "virtual particles pop into existence at the quantum level", your confidence in P1 is in no way rationally justified.
I don't see why that would be the case. Given, for example, quantum entanglement, the cause and effect are simultaneous; no linear time is required for the state change of the particles. We can even remove the state change here and retain the causation. If particle 1 is spinning downward, it causes particle 2 to spin upward to retain information symmetry. No time is taken in that sentence; it is a static description that requires causation.
The problem with your example is that both the cause and effect are present within space-time, which makes it yet another invalid comparison. Even the word "simultaneous" is itself necessarily temporal. By removing time in the case of the cause and effect for the universe you are placing the cause in a state without temporal properties and the effect in a state with temporal properties at t=0. All you're doing is making nonsensical statements devoid of any actual meaning. Again, it's best that you leave the scientisting to actual scientists. That's the only way we'll get to the bottom of this.
Re: Theistic beliefs are not rationally justified
Again, this is equivocating between the usage of "created" in the change from wood to chair and the usage in P1 of "begin to exist" meaning a change from non-existence to existence.
I'm not sure you understand the term equivocating here. Equivocation is when you use a word that has two different definitions and your argument conflates those two definitions.
Take for example, the classic example of the word "bank."
P1: Otters live in banks
P2: Banks store money
C: Otters store money
Obviously fallacious, because we are using two dramatically different definitions of the word bank (specifically, P1 uses definition 1, P2 uses definition 2).
That isn't occurring here. The creation of a chair specifically fulfills the only (non-archaic) definition for that word. You can't have an equivocation fallacy if there is only one definition and it describes the activity.
Take a look at the examples:
‘he created a thirty-acre lake’
‘over 170 jobs were created’
‘In its draft resolutions, the ANC called on all levels of government to create projects that generated jobs.’
‘This deliberate thrust for creating an enabling environment brings about the shift in growth strategy.’
‘From Spain he brought a translator who created a Latin summary of Aristotle's biological and zoological works.’
‘Saying that humans, being creatures of flesh, could not obey the law was to say, in effect, that God made a bad job of creating them.’
‘In an effort to justify their existence they create documents that only a fool would sign without modifying it.’
‘The company plans to centralise its business by moving into the large distribution warehouse in Kettlestring Lane, creating an extra 30 jobs.’
‘Plans to rejuvenate the River Eden could create dozens of new jobs and bring millions of pounds into the local economy, according to a new report.’
‘How far can we use the imagination to create a videogame that brings someone to nirvana?’
‘Scotland can demonstrate that plans to revitalise health and safety in the workplace can be made a reality by creating real partnerships to bring the accident figures down.’
‘The system, if adopted, is predicted to bring in revenues of nearly £50 billion and create two million jobs.’
‘It effectively created a new bank which has brought us back into the mainstream of competing with the big Scottish banks.’
‘With a wide array of workshop topics, Career Services has information on every aspect of the job hunt, from creating a resume to selling your skills in an interview.’
‘This new recording features two dozen carols brought together to create a concert performance.’
‘If granted, it will generate power for thousands of homes, creating hundreds of jobs in the Doncaster area.’
‘His paintings are attempts at getting outside of time, at creating timeless icons of existence.’
‘Chaos is a calm Goddess, who loves to work with Existence to create things and let them run amok on their own.’
‘We are going to create new jobs from bringing in new products and services to the community.’
‘It was this love of generations yet unborn that brought God to create the universe.’
‘Mayo County Council have done an exemplary job in creating this trail and bringing the visual arts to the people.’
‘At the same time, the Commission was not brought into being to create a historical document.’
Literally none of the examples used in the dictionary follow the strict interpretation you mention. They all follow the informational focused creation process I mentioned.
Originally Posted by Future
Information creation is simply not comparable to matter/energy/time/space/reality/universe creation.
You are confusing differences in scale with differences in kind.
Building a wooden chair is nothing like the complexity of building a Virginia class submarine, but that doesn't mean both aren't building something.
How, exactly, do the two concepts materially differ?
Originally Posted by future
For all practical purposes, virtual particles exist only in the mathematics of the models used to describe the measurements of real particles. They don't exist in the same sense as the existence following the change from non-existence which you are trying to apply in P1.
Well, let's see what actual physicists say.
Gordon Kane, director of the Michigan Center for Theoretical Physics at the University of Michigan at Ann Arbor, provides this answer.
Virtual particles are indeed real particles. Quantum theory predicts that every particle spends some time as a combination of other particles in all possible ways. These predictions are very well understood and tested.
Quantum mechanics allows, and indeed requires, temporary violations of conservation of energy, so one particle can become a pair of heavier particles (the so-called virtual particles), which quickly rejoin into the original particle as if they had never been there. If that were all that occurred we would still be confident that it was a real effect because it is an intrinsic part of quantum mechanics, which is extremely well tested, and is a complete and tightly woven theory--if any part of it were wrong the whole structure would collapse.
...
Thus virtual particles are indeed real and have observable effects that physicists have devised ways of measuring. Their properties and consequences are well established and well understood consequences of quantum mechanics.
No, we have two examples which prove yet again why you should leave the scientisting to real scientists.
You mean like the ones I've quoted throughout this thread? Or are you engaging in a no true Scotsman fallacy?
Remind me again, which one of the two of us actually has physics training?
Originally Posted by future
The problem with your example is that both the cause and effect are present within space-time, which makes it yet another invalid comparison.
Welcome to a begging the question fallacy. You are insisting that the only support for the premise is the premise itself. Sorry man, that isn't a valid rebuttal. If I were to say, "prove the sky is blue" and you offered evidence based on light refraction to which I object "those don't count because they aren't the actual sky," it would be dismissed out of hand as a dumb objection. That is literally the same argument structure you've employed here.
I should take a step back and discuss the consequences of rejecting premise 1 as you have. You can either mean: a) things cannot begin to exist, they must always exist or b) things can begin to exist without a cause.
A is clearly not a tenable position given the consensus scientific opinion that this universe, and its dimensions, did in fact begin to exist. B is equally bizarre, as it posits a universe where things can begin to exist at random, or must always exist because they don't need a cause to begin existing. This is, essentially, appealing either to an eternal universe rejected by physicists or to a fantasy universe of magic where things begin to exist on their own.
There is a reason that the principle of causation isn't really disputed by philosophers or physicists (good luck finding one that agrees with your point), because it is fundamentally nonsensical to reject. Your attempt to limit the principle to our universe only is a remnant of 1950s thinking that was long ago abandoned in physics. It is also a pretty classic example of a taxicab fallacy. You take the principle as far as it suits you, then abandon it with no explanatory reason.
"Suffering lies not with inequality, but with dependence." -Voltaire
"Fallacies do not cease to be fallacies because they become fashions.” -G.K. Chesterton
Re: Theistic beliefs are not rationally justified
Originally Posted by Squatch347
I'm not sure you understand the term equivocating here.
Equivocation:
"the use of ambiguous language to conceal the truth or to avoid committing oneself"
You are using the "creation" of a chair (meaning the change from wood to a chair - something existed and then something else existed, the process between them being called "creation") as a comparison and proof that the universe was created (meaning the change from the universe not existing to the universe existing - nothing existing and then something existing, the process between them also being called "creation"). Your use of creation with the chair and the universe is ambiguous, as the processes are not the same.
If you want to commit to using wood-chair as proof of "creation", then you by definition are saying that the universe resulted from the same kind of process as wood-chair.
Originally Posted by Squatch347
You mean like the ones I've quoted throughout this thread? Or are you engaging in a no true Scotsman fallacy?
No, I mean the two examples you provided as support for the creation of the universe (wood-chair, which isn't valid since it's a different kind of creation, and virtual particles, which aren't valid since their creation isn't actually observed)
Originally Posted by Squatch347
Remind me again, which one of the two of us actually has physics training?
Oh that's right, I forgot that ODN was where real physicists came to suss out the truth behind the origins of the universe. Really, Squatch, such a statement from you only highlights how little weight KCA actually carries.
Originally Posted by Squatch347
Welcome to a begging the question fallacy. You are insisting that the only support for the premise is the premise itself.
No, I'm insisting that you have the intellectual honesty to make valid comparisons when grasping at straws for support of P1. As it stands, we have no observed instances of things beginning to exist in the sense that you are attempting to use it in your argument. All you've offered is observed instances of things beginning to exist (in the case of wood-chair), and not-observed instances of things beginning to exist (in the case of virtual particles, only their effects have been observed), both of which take place inside an already-existing universe. Therefore, P1 remains unsupported and incoherent.
Originally Posted by Squatch347
I should take a step back and discuss the consequences of rejecting premise 1 as you have. You can either mean: a) things cannot begin to exist, they must always exist or b) things can begin to exist without a cause.
No, the consequence of rejecting P1 is that one maintains rational skepticism. No further conclusions must be reached if we don't even know what you're talking about. Please provide a coherent explanation for P1 in the sense that you are using it with the universe.
Originally Posted by Squatch347
the consensus scientific opinion that this universe, and its dimensions did, in fact begin to exist
Do you mean there are scientists which say that there was nothing and then there was the universe? Please provide a coherent explanation for P1.
Originally Posted by Squatch347
Your attempt to limit the principle to our universe only is a remnant of 1950s thinking that was long ago abandoned in physics.
I have not attempted to limit any principles to our universe only. I've explained why your use of the terms is ambiguous and why your examples fail to serve as support for even just your 1st premise.
Re: Theistic beliefs are not rationally justified
You are using the "creation" of a chair (meaning the change from wood to a chair - something existed and then something else existed, the process between them being called "creation")
How does my example (the creation of the chair) not meet the definition offered?
Bring (something) into existence.
Again, take a look at the examples. Is the dictionary wrong in using the word creation here?
Originally Posted by future
If you want to commit to using wood-chair as proof of "creation", then you by definition are saying that the universe resulted from the same kind of process as wood-chair.
Moving the goal posts fallacy. You've shifted from giving examples of something being created to something being created by the same process. Surely creating a lake and creating a chair don't involve the same process, but both (according to the dictionary) are creations.
Originally Posted by future
No, I mean the two examples you provided as support for the creation of the universe (wood-chair, which isn't valid since it's a different kind of creation, and virtual particles, which aren't valid since their creation isn't actually observed)
Including where I quote a physicist showing that your understanding of virtual particles isn't correct?
...
But while the virtual particles are briefly part of our world they can interact with other particles, and that leads to a number of tests of the quantum-mechanical predictions about virtual particles. The first test was understood in the late 1940s. In a hydrogen atom an electron and a proton are bound together by photons (the quanta of the electromagnetic field). Every photon will spend some time as a virtual electron plus its antiparticle, the virtual positron, since this is allowed by quantum mechanics as described above. The hydrogen atom has two energy levels that coincidentally seem to have the same energy. But when the atom is in one of those levels it interacts differently with the virtual electron and positron than when it is in the other, so their energies are shifted a tiny bit because of those interactions. That shift was measured by Willis Lamb and the Lamb shift was born, for which a Nobel Prize was eventually awarded.
Quarks are particles much like electrons, but different in that they also interact via the strong force. Two of the lighter quarks, the so-called "up" and "down" quarks, bind together to make up protons and neutrons. The "top" quark is the heaviest of the six types of quarks. In the early 1990s it had been predicted to exist but had not been directly seen in any experiment. At the LEP collider at the European particle physics laboratory CERN, millions of Z bosons--the particles that mediate neutral weak interactions--were produced and their mass was very accurately measured. The Standard Model of particle physics predicts the mass of the Z boson, but the measured value differed a little. This small difference could be explained in terms of the time the Z spent as a virtual top quark if such a top quark had a certain mass. When the top quark mass was directly measured a few years later at the Tevatron collider at Fermi National Accelerator Laboratory near Chicago, the value agreed with that obtained from the virtual particle analysis, providing a dramatic test of our understanding of virtual particles.
Another very good test some readers may want to look up, which we do not have space to describe here, is the Casimir effect, where forces between metal plates in empty space are modified by the presence of virtual particles.
ibid
Originally Posted by future
Oh that's right, I forgot that ODN was where real physicists came to suss out the truth behind the origins of the universe.
Who ever said it was? I pointed out the incredibly broad error in your statement about leaving it to actual physicists.
1) I'm the only one citing physicists here. You are relying on your own understanding.
2) I have physics training, you do not.
Now, I'm happy to drop all of that and discuss the mechanics in detail (I've already offered peer-reviewed papers on them), but I think you might get a bit overwhelmed.
Originally Posted by future
No, the consequence of rejecting P1 is that one maintains rational skepticism.
You cannot hold that both a and ~a are valid. If you say that a is not a true statement you must, by definition, be saying that ~a is. That is foundational critical thinking. If "the sky is blue" is not true, then "the sky is not blue" has to be true.
So by saying that premise 1 isn't true, you are de facto, accepting one of these two positions, so which is it?
A: things cannot begin to exist, they must always exist
B: things can begin to exist without a cause.
How does my example (the creation of the chair) not meet the definition offered?
All your examples are cases of creatio ex materia, meaning creation from something or somewhere which already exists. Or more accurately, they're examples of things beginning to exist in an already-existing universe.
Originally Posted by Squatch347
Moving the goal posts fallacy. You've shifted from giving examples of something being created to something being created by the same process.
No, I'd actually rather you refrain from using the term "created" at all, since P1 is "begin to exist" and will no longer respond to statements using "created". Please provide a coherent definition of "begin to exist".
Originally Posted by Squatch347
Surely creating a lake and creating a chair don't involve the same process, but both (according to the dictionary) are both creations.
Again, your examples "begin to exist" in an already-existing universe. Are you saying that the universe began to exist in an already-existing universe? If not, then you are using "begin to exist" ambiguously. Please provide a coherent definition of the terms used in P1.
Originally Posted by Squatch347
So by saying that premise 1 isn't true, you are de facto, accepting one of these two positions, so which is it?
I'm not saying P1 isn't true, I'm saying I reject it because it isn't even coherent in the context you are using it. Please explain what you mean by "begin to exist" when you say the universe began to exist. It would also help if you defined how you are using the term "universe", to avoid further ambiguity inherent in KCA.
“That was bizarre. I could not believe that, actually. Like, really? That’s my brother,” Aniston, 52, told Entertainment Tonight. “But I understand it, though. It just shows you how hopeful people are for fantasies for dreams to come true.” At the time, reps for both Aniston and Schwimmer, 54, denied the reports that they rekindled their onscreen romance in real life, calling the relationship rumors “false.”
Per Closer’s August report, Schwimmer and Aniston “began texting immediately after filming” the special. The source also said that, in July, David flew from New York to see Jen in L.A. and that they had been “spending time at Jen’s home,” where they “enjoyed quality time together, chatting and laughing.” The source added that the two had “lots of chemistry” and were “spotted drinking wine, deep in conversation, as they walked around one of Jen’s favourite vineyards in Santa Barbara.”
Within seconds, Aniston and Schwimmer’s rumored relationship was spreading like wildfire on social media. “The Jennifer Aniston and David Schwimmer rumors have restored my faith in love,” one fan tweeted. Someone else wrote that they needed to “take time off work to recover.” It’s not hard to understand why fans were so excited. In late May, the duo made headlines when the American Crime Story actor admitted during the special that he “had a major crush” on Aniston while filming Season 1 of Friends.
During the reunion, Aniston admitted that the feelings were “reciprocated,” but they never took their relationship offscreen. “At some point, we were both crushing hard on each other, but it was like two ships passing because one of us was always in a relationship,” Schwimmer said. “And we never crossed that boundary.” “Bullsh*t,” co-star Matt LeBlanc interjected, suggesting that he witnessed the two canoodling on set.
Schwimmer also recalled cuddling on a couch with the We’re The Millers star between takes. And even though he didn’t think his co-stars noticed that they were “crushing on each other,” Matthew Perry and Courteney Cox both said that they “knew for sure.” Cox also added that their chemistry was “palpable,” especially in their first kiss scene from Season 2. “It was just perfect,” she said.
Aniston, for her part, recalled wanting her first kiss with Schwimmer to be off-camera, but it didn’t work out. “I just remember saying one time to David, it’s gonna be such a bummer if the first time you and I actually kiss is going to be on national television,” she said, reflecting on their iconic Central Perk smooch. “Sure enough! First time we kissed was in that coffee shop. We just channeled all of our adoration and love for each other into Ross and Rachel.” | |
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to a process for the hydrolysis and condensation of alkyl/aryl-trichlorosilanes which may contain up to 40 mole % of dialkyl or diaryl or alkylaryl-dichlorosilanes.
2. Description of the Prior Art
The hydrolysis of halogen silanes usually takes place in open systems of the kind wherein halogen silanes are added to water which is already present. However, this method does not allow alkyl/aryl-trichlorosilanes, or silane mixtures which contain the trichlorosilanes predominantly, to be hydrolyzed without gelling the reaction products. "Chemie und Technologie der Silicone" (Chemistry and Technology of the Silicones) by Walter Noll, published by Chemie GmbH in 1968, page 164, states that methyltrichlorosilane forms highly cross-linked gel-like or powder-like polymers on being hydrolyzed with water. The other known processes of hydrolysis are also dealt with in this book. It is reported, for example, that trichlorosilanes can be hydrolyzed in the presence of larger amounts of solvents which dilute the reaction medium. However, when using trichlorosilanes having bulky organic groups and ether as the solvent, only low molecular weight cyclic siloxanes are primarily obtained.
The hydrolysis of trichlorosilanes can also be carried out with water/alcohol mixtures. In this instance also, the concomitant use of solvents is customary so that this process is expensive and does not completely avoid the danger of gelling when high proportions of trichlorosilanes are present in the silane mixture.
The well known "reverse hydrolysis" process, in which water is added to halogen silanes, also leads to complete gelling of the reaction products when trichlorosilanes are hydrolyzed, even in the presence of solvents.
Hydrolysis at lower temperatures, e.g., at -73° C. in ether leads to low molecular weight chlorosiloxanes with a high residual chlorine content. The ratio of chlorine to silicon in these compounds is greater than 1.
It is a particular disadvantage of these processes that portions of the halosilane starting materials are carried along with the hydrogen chloride liberated in the reaction. As a consequence, the composition of the reaction mixture in relation to the chlorosilane starting material changes uncontrollably during the reaction so that the reproducibility of the products manufactured is impaired.
The gaseous HCl which escapes cannot be utilized further since it is contaminated by silanes. It must therefore be neutralized. The conventional processes are therefore disadvantageous since the waste air and water produced must be further treated before disposal. Also, the chlorine of the chlorosilane starting materials is lost for further use. Also, the halogen silanes which are carried off are deposited and collected on various surfaces of the equipment, especially in the waste air lines, where they are hydrolyzed by the moisture in the air. In addition to the waste of the materials, the collection of these materials creates the danger of further corrosion.
SUMMARY OF THE INVENTION
We have discovered a method for hydrolyzing alkyl/aryl-trichlorosilanes which avoids many of the disadvantages of the known processes. Particularly, the object of the present invention is to hydrolyze and condense alkyl/aryl-trichlorosilanes and mixtures which may contain up to 40 mole % of dialkyl or diaryl or alkylaryl-dichlorosilanes in such a manner that there is no gelling of the reaction products and highly viscous or fusible condensation products are obtained which are soluble in organic solvents, such as toluene. At the same time, the hydrolysis of the pure alkyltrichlorosilane, particularly of methyltrichlorosilane, is preferred and of particular industrial interest.
A further object of the present invention is to avoid the concomitant use of organic solvents in the hydrolysis and condensation reactions to minimize the burden with respect to the waste air and water. Thus, the silanes and water should be the only starting materials required.
The present process is carried out by dissolving the chlorosilanes in liquid hydrogen chloride and hydrolyzing them at a pressure between 15 to 80 atmospheres and a temperature of -17° to +47° C. with 0.165 to 0.465 moles of water per chlorine atom attached to a silicon atom and, after subsequent condensation, freeing the reaction product from HCl and silanes that have not been hydrolyzed using conventional techniques.
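As a rough illustration of the water charge implied by this ratio (not part of the original disclosure), the following sketch computes the permissible water range for a batch of methyltrichlorosilane, which carries three chlorine atoms per silicon atom. The molar masses and the 1 kg batch size are assumptions made for the example only.

```python
# Illustrative stoichiometry for the hydrolysis described above.
# Assumptions: pure methyltrichlorosilane (CH3SiCl3), three Si-bound Cl atoms
# per molecule; the 0.165-0.465 mol water per Cl atom range is from the text.

M_SILANE = 12.011 + 3 * 1.008 + 28.086 + 3 * 35.453  # g/mol, CH3SiCl3
M_WATER = 18.015                                      # g/mol
CL_PER_SILANE = 3

def water_range_g(silane_g: float) -> tuple[float, float]:
    """Return (min, max) grams of water for a given mass of CH3SiCl3."""
    mol_cl = silane_g / M_SILANE * CL_PER_SILANE
    return (mol_cl * 0.165 * M_WATER, mol_cl * 0.465 * M_WATER)

lo, hi = water_range_g(1000.0)  # a hypothetical 1 kg silane batch
print(f"water needed: {lo:.1f}-{hi:.1f} g")
```

Note that even the upper end of the range is well below the stoichiometric amount for full hydrolysis, consistent with the patent's aim of obtaining soluble, non-gelled condensation products.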
Preferably, the reaction product is freed from HCl and silanes that have not been hydrolyzed by releasing the pressure and/or increasing the temperature.
DESCRIPTION OF THE PREFERRED EMBODIMENT
The preferred alkyltrichlorosilanes are those in which the alkyl group contains 1 to 4 carbon atoms and the most preferred alkyl group is methyl. However, those silanes whose alkyl group is longer than 4 carbon atoms, for example, alkyl groups with even 12 to 18 carbon atoms, may also be reacted. Additionally, the alkyl group may be linear or branched.
Mixtures of alkyltrichlorosilanes and aryltrichlorosilanes are preferably used if the incorporation of aryl groups into the siloxane skeleton is desired. The alkyl/aryl groups are defined as above for those cases where diorganodichlorosilanes are used concomitantly.
Liquid hydrogen chloride is used as the solvent in the process of the present invention. At the end of the reaction, this hydrogen chloride, together with the hydrogen chloride formed during the reaction, can be transported in the gaseous state by releasing the pressure in the reaction vessel. It may then be compressed once again and supplied to a second reaction batch or completely or partially drawn off and used for a different purpose.
At the same time, it is of particular significance that the hydrogen chloride is obtained in a very pure form, so that the contamination problems and waste air and water problems which occur in the prior art process do not occur. No extraneous solvent is required nor is it necessary to neutralize the hydrogen chloride. Alternatively, the hydrogen chloride may also be distilled off while maintaining an adequate pressure for this purpose and condensed again directly into the liquid form by cooling.
The rate of the reaction can be influenced by the amount of liquid hydrogen chloride used and particularly by the temperature and pressure conditions used. It is best to dissolve very reactive trichlorosilanes in larger amounts of hydrogen chloride and carry out their hydrolysis at lower temperatures than those trichlorosilanes whose hydrocarbon residues decrease the reactivity, for example, by their bulkiness. In the case of propyltrichlorosilane, for example, the amount of liquid hydrogen chloride is kept very small and the hydrolysis is carried out at an elevated reaction temperature.
In general, it is advisable to use an amount of liquid hydrogen chloride that is approximately equal in volume to that of the chlorosilane to be hydrolyzed. The use of an amount of solvent greater than twice the volume of the silane is not recommended since it impairs the space-time yield, which is particularly efficient in this process.
The water required for the hydrolysis is advisably added in the form of an aqueous solution of HCl. It has proven to be especially advantageous to use a concentration of HCl in water that corresponds to the solubility of HCl in water under the reaction conditions used for the process. This requires the aqueous HCl solution to be prepared first in a separate pressure vessel.
In a preferred embodiment of the present process, the halogen silanes are dissolved in liquid hydrogen chloride in a first pressure reactor and sufficient HCl is dissolved in the water required for the hydrolysis so that a solution is obtained which is saturated under the conditions existing in the first pressure reactor. This solution of HCl in water is now fed either continuously or discontinuously into the pressure reactor through the appropriate piping. The aqueous HCl solution is added in proportion to the rate of the reaction of the water with the halogen silane.
The rate of reaction can be readily recognized by the fact that the aqueous HCl solution added is not miscible with the silane/HCl phase already present so that two separate layers are formed. If the reactants are stirred, these two layers become dispersed. The interfaces disappear as the reaction proceeds. It is therefore possible to follow the addition of HCl-saturated water visually and to control it so as to correspond to the reaction rate.
Since a reaction temperature of +20° C. must not be exceeded for highly reactive halogen silanes, such as methyltrichlorosilane, and since the optimum range of the reaction temperature lies between -10° and +10° C., the pressure reactor may have to be cooled.
The addition of the required amount of water may be followed by a post-reaction time of 15 minutes to 2 hours before the pressure in the reactor is released or before the hydrogen chloride is distilled off. When the pressure is released after the reaction is completed, very pure HCl gas escapes which, as noted above, is used for the next reaction batch or may be compressed and filled into cylinders for a different use.
The reaction product obtained may be heated to temperatures of up to 200° C. At the same time, the residual hydrogen chloride contained in the siloxane and, depending on the degree of hydrolysis achieved, any unconverted halogen silane escape. The reaction products so obtained are free of silanol groups, at least to the extent that such groups cannot be detected by IR spectroscopy.
The product obtained by the inventive process is a siloxane of the general formula ##EQU1## wherein R is the hydrocarbon group of the starting silane,
a has a value of 1.0 to 1.4 and can be calculated from the amount of the diorganodichlorosilane,
b has a value of 0.18 to 0.5.
The sum of a+b is 1.2 to 1.70.
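The formula itself appears only as an image placeholder (##EQU1##) in this text. A hedged reconstruction, assuming the customary average-unit notation for partially hydrolyzed organochlorosilanes with R, a, and b as defined above, would read:

```latex
% Assumed average-unit formula (the original image is not reproduced here):
% R = hydrocarbon group, a = R/Si ratio, b = Cl/Si ratio.
R_a\,\mathrm{SiO}_{(4-a-b)/2}\,\mathrm{Cl}_b,
\qquad 1.0 \le a \le 1.4,\quad 0.18 \le b \le 0.5,\quad 1.2 \le a+b \le 1.70
```

The oxygen index (4-a-b)/2 simply balances the remaining silicon valences, so the stated ranges of a and b fix the degree of cross-linking.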
It is evident from the formula that the siloxane obtained still contains a certain amount of chlorine. The product is therefore readily accessible to further reactions that attack the SiCl bond. At the same time, it should be noted that these reactive chlorine atoms are present in a highly cross-linked siloxane skeleton, particularly in the case where the value of `a` is low. Such chlorosiloxanes could not be obtained in a fusible state or of a viscous consistency, without gelling, by the conventionally known processes.
The invention therefore also relates to new compounds of the general formula ##EQU2## in which b has a value of 0.2 to < 0.5.
Examples of chlorosiloxanes that may be prepared in accordance with the invention are: ##STR1## The chlorine atoms in these compounds are terminal groups.
The inventive process has the advantages that
(1) the reaction products do not gel,
(2) it is possible to work without requiring additional organic solvent,
(3) silane contamination of the equipment and of the HCl gas is avoided,
(4) problems with waste air and waste water do not arise,
(5) neutralization is not required, and
(6) the products can be obtained with excellent reproducibility.
As is evident from the description of the present process, no more than 0.465 moles of H₂O per mole of SiCl may be employed. Above this limit, insoluble gels are obtained from methyltrichlorosilane. On the other hand, the less water used, the greater the amount of chlorosilane that remains unreacted. This is surprising, since the chlorosilanes are highly reactive substances which are susceptible to hydrolysis. Too small an amount of water is, however, not recommended because it impairs the space-time yields. The lower limit of water that may be added is thus based on this factor.
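As a rough sanity check on such recipes, the water-to-SiCl ratio can be computed directly. The sketch below is illustrative only; the 0.465 limit and the quantities of Example 1 below are taken from the text, and the molar mass of water is standard:

```python
# Illustrative check of a batch recipe against the stated upper limit of
# 0.465 mol H2O per mol Si-Cl (above which methyltrichlorosilane gels).
M_WATER = 18.015  # g/mol, molar mass of water

def water_per_sicl(moles_silane, cl_per_silane, grams_water):
    """Moles of water added per mole of Si-Cl bonds in the batch."""
    moles_water = grams_water / M_WATER
    return moles_water / (moles_silane * cl_per_silane)

# Example 1 below: 2.97 mol of CH3SiCl3 (3 Cl per Si) and 71.8 g of water.
ratio = water_per_sicl(2.97, 3, 71.8)
print(round(ratio, 3))  # ~0.447, safely below the 0.465 gelling limit
```

The same function reproduces the per-chlorine ratios quoted in the other examples from their stated silane and water charges.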
Especially preferred is the synthesis of compounds corresponding to Formulas I and II from methyltrichlorosilane, which is obtained in surplus amounts from the synthesis of silanes. As a result of the reactive SiCl groups, which are still present in sufficient amounts in products of Formulas I or II, and because of their good solubility, such materials can be processed into other valuable products.
The following examples illustrate the present process:
EXAMPLE 1
475 g (2.97 moles) of methyltrichlorosilane are added to a pressure reactor into which 700 ml of hydrogen chloride are then condensed at a temperature of -8° C. 71.8 g of water (3.98 moles), corresponding to 0.448 moles of water per chlorine atom attached to a silicon atom, are added in the form of a 37% hydrochloric acid solution to a second pressure vessel and saturated under an HCl gas pressure of 30 kilopond/cm² (kp/cm²). Subsequently, the aqueous hydrochloric acid is introduced to the hydrolysis reactor in proportion to the amount consumed. The hydrolysis is completed after a reaction time of 6.5 hours at a reaction temperature of -8° to +2° C.
The chlorosiloxane obtained is discharged into a receiver for further processing. Traces of methyltrichlorosilane (0.4 g) are removed by heating the hydrolysis product to 90° C. at 30 torr.
The acid value of the viscous pressure-hydrolysis product is 4.44×10⁻³ equivalents per gram. This value corresponds to the general formula ##STR2##
EXAMPLE 2
302 g of methyltrichlorosilane (2.02 moles) are added to a pressure-hydrolysis reactor and dissolved in 250 ml of condensed hydrogen chloride. 49.0 g of water (2.72 moles), corresponding to 0.449 moles of water per chlorine atom attached to a silicon atom, are filled in the form of a 37% hydrochloric acid solution into a second pressure vessel and saturated under an HCl gas pressure of 45 kp/cm². The hydrolysis is carried out at reaction temperatures between -7° C. and +2° C. by the slow addition of aqueous hydrochloric acid solution into the hydrolysis reactor during a period of 7 hours. The hydrogen chloride present in the system is subsequently removed by evaporation and the hydrolysis product is transferred to a receiver under inert gas pressure.
The product is freed from HCl residues by heating it to 90° C. for 0.5 hours at a pressure of 30 torr. At the same time, 2.9 g of a distillate having an acid value of 1.82×10⁻² equivalents per gram are collected.
The hydrolysate obtained has an acid value of 3.67×10⁻³ equivalents per gram, a Cl/Si ratio of 0.27 and is readily soluble in toluene. The general formula ##STR3## corresponds to this reaction product. In the solvent-free state, the reaction product solidifies at room temperature and softens at 80° C.
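The structural formula itself appears only as a placeholder (##STR3##). Assuming the hydrolysate can be written as an average unit CH3SiO_{(3-b)/2}Cl_b (an assumption, but one consistent with the Cl/Si ratio reported in this example), the parameter b can be recovered from the measured acid value, i.e. the equivalents of hydrolyzable chlorine per gram:

```python
# Sketch under an ASSUMED average-unit formula CH3SiO((3-b)/2)Cl(b);
# the acid value (equivalents per gram) then determines b = Cl/Si.
M_CH3 = 15.03   # g/mol
M_SI = 28.09    # g/mol
M_O = 16.00     # g/mol
M_CL = 35.45    # g/mol

def cl_per_si(acid_value):
    """Solve acid_value = b / (M_CH3 + M_SI + M_O*(3 - b)/2 + M_CL*b) for b."""
    base = M_CH3 + M_SI + 1.5 * M_O   # unit mass at b = 0
    slope = M_CL - 0.5 * M_O          # change in unit mass per unit of b
    return acid_value * base / (1.0 - acid_value * slope)

# Example 2 reports an acid value of 3.67e-3 eq/g and a Cl/Si ratio of 0.27.
print(round(cl_per_si(3.67e-3), 2))  # -> 0.27
```

Under the same assumption, Example 1's acid value of 4.44×10⁻³ eq/g would correspond to b ≈ 0.34.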
EXAMPLE 3
299 g of methyltrichlorosilane (2.0 moles) are added to the pressure-hydrolysis reactor described above and are dissolved in 250 ml of condensed hydrogen chloride. 45.0 g of water (2.5 moles), corresponding to 0.417 moles of water per chlorine atom attached to a silicon atom, are brought in the form of a 37% hydrochloric acid solution into the second pressure vessel and saturated under an HCl pressure of 45 kp/cm². The hydrolysis is carried out within 8 hours at reaction temperatures between -8° C. and +4° C. After evaporation of the liquid hydrogen chloride, a further 17.8 g of CH₃SiCl₃ are removed from the reaction product by heating to 90° C. at 30 torr. The viscous reaction product obtained then has an acid value of 4.435×10⁻³ equivalents per gram, corresponding to the general formula: ##STR4##
EXAMPLE 4
2.0 moles of CH₃SiCl₃ (299.0 g) are reacted with 2.0 moles of water (36.0 g), corresponding to 0.335 moles of water per chlorine atom attached to a silicon atom, according to the experimental conditions of Example 2. 69.2 g of CH₃SiCl₃ are removed from the HCl-free reaction product. The resulting hydrolysis product has an acid value of 5.39×10⁻³ equivalents per gram, which corresponds to an average composition of the formula ##STR5##
EXAMPLE 5
266 g of n-propyltrichlorosilane (1.5 moles) are mixed with 150 ml of condensed hydrogen chloride in an apparatus like that of Example 1. The hydrolysis is carried out at reaction temperatures between 20° C. and 27.5° C. with 31.6 g (1.75 moles) of water, corresponding to 0.385 moles of water per chlorine atom attached to a silicon atom, the water being in the form of a 37% hydrochloric acid solution which has been saturated under an HCl gas pressure of 56 kp/cm². The resultant highly concentrated hydrochloric acid is added to the hydrolysis reactor within 4 hours.
After a reaction time of 6 hours, the reaction product is freed from the remaining HCl-gas pressure and the resulting chlorosiloxane is filled into a separate receiver.
The reaction product is subsequently freed from the adhering HCl and residual n-propyltrichlorosilane at a temperature of 150° C. and in a vacuum produced by an oil pump. In so doing, 22.9 g of propyltrichlorosilane are recovered. The chlorosiloxane produced is obtained as a highly viscous liquid with an acid value of 3.95×10⁻³ equivalents per gram. It therefore corresponds to the general composition ##STR6##
EXAMPLE 6
150 ml of hydrogen chloride are condensed at -6° C. into a silane mixture, consisting of 0.76 moles of methyltrichlorosilane (113.6 g) and 0.5 moles of dimethyldichlorosilane (64.6 g), in a pressure reactor. Over a period of 5.5 hours, 1.20 moles of water (21.6 g), corresponding to 0.366 moles of water per chlorine atom attached to a silicon atom, are added to these chlorosilanes in the form of an aqueous hydrochloric acid solution saturated under an HCl pressure of 36 kp/cm². In so doing, the hydrolysis reaches a maximum temperature of +6° C.
After a further reaction time of 1.5 hours, the reaction product is freed from the solvent HCl by allowing the HCl to evaporate. The separated cohydrolysate is subsequently heated at 150° C. in a vacuum of 30 torr. In so doing, 33.6 g of dimethyldichlorosilane are recovered. The reaction product has an acid value of 4.32×10⁻³ equivalents per gram and a viscosity of 7500 cps at 20° C. The chlorosiloxane obtained corresponds to the formula ##STR7##
EXAMPLE 7
1.8 moles of methyltrichlorosilane (269.0 g) and 0.2 moles of phenyltrichlorosilane (42.2 g) are mixed with 400 ml of condensed hydrogen chloride in a pressure reactor.
2.72 moles of water (49.0 g), corresponding to 0.453 moles of water per chlorine atom attached to a silicon atom, are metered in, in the form of a solution saturated at 32 kp/cm² with HCl gas, from a separate pressure vessel within a period of 4.5 hours. The reaction is carried out within a temperature range of -10° C. to +10° C. After the aqueous hydrochloric acid has been added completely, the reaction is continued for a further 2 hours before the solvent HCl is evaporated off completely. A viscous chlorosiloxane with an acid value of 3.46×10⁻³ equivalents per gram is obtained as the reaction product. This product corresponds to the formula ##STR8##
EXAMPLE 8
423.2 g (2.0 moles) of phenyltrichlorosilane are introduced into a pressure-hydrolysis apparatus and mixed with 600 ml of liquid hydrogen chloride. This silane is hydrolyzed in the temperature range of 15° to 22° C. by the addition of 45 g (2.5 moles) of water, corresponding to 0.417 moles of water per chlorine atom attached to a silicon atom. This water is added to a separate reactor as 37% hydrochloric acid and saturated under an HCl gas pressure of 45 kp/cm².
After a hydrolysis time of 3.5 hours, the solvent HCl is evaporated from the reaction product which is then taken from the reaction apparatus as a viscous product.
After the volatile components are eliminated by heating to 125° C. for 2 hours in the vacuum of an oil pump, 11.0 g of phenyltrichlorosilane and a solid reaction product, which is readily soluble in toluene and acetone and which has an acid value of 3.02×10⁻³ equivalents per gram, are obtained.
In addition, 2.1 g of benzene are obtained as decomposition product. The acid value obtained corresponds to a product of the general formula ##STR9##
EXAMPLE 9 (outside the scope of the present invention)
489.4 g (3.06 moles) of methyltrichlorosilane are added, for the purpose of pressure hydrolysis, into a reactor into which 700 ml of hydrogen chloride are then condensed at a temperature of -9° C. 77.58 g of water (4.31 moles), corresponding to 0.47 moles of water per chlorine atom (this amount of water is outside the limit of the present process), are added in the form of 37% hydrochloric acid to the second pressure vessel and saturated under an HCl gas pressure of 30 kp/cm². Subsequently, the aqueous HCl is brought into the hydrolysis reactor at a rate corresponding to the inlet pressure. The hydrolysis is terminated after a reaction time of 7 hours at reaction temperatures from -9° C. to +7° C.
The condensed hydrogen chloride is subsequently removed in gaseous form over a system of three intensive cold traps (-70° C.), which remained free of condensable products. The working up of the product was troublesome because of the presence of gelled silsesquioxane portions.
Critical resource sharing among multiple entities in a processing system is inevitable, which in turn calls for the presence of appropriate authentication and access control mechanisms. Generally speaking, these mechanisms are implemented via trusted software "policy checkers" that enforce certain high-level, application-specific "rules". Whether implemented as operating system modules or embedded ad hoc inside the application, these policy checkers expose additional attack surface beyond that of the application logic itself. In order to protect application software from an adversary, modern secure processing platforms, such as Intel's Software Guard Extensions (SGX), employ principled hardware isolation to offer secure software containers or enclaves to execute trusted sensitive code with some integrity and privacy guarantees against a privileged software adversary.
We extend this model further and propose using these hardware isolation mechanisms to shield the authentication and access control logic essential to policy checker software. While relying on the fundamental features of modern secure processors, our framework introduces productive software design guidelines which enable a guarded environment to execute sensitive policy checking code -- hence enforcing application control flow integrity -- and afford flexibility to the application designer to construct appropriate high-level policies to customize policy checker software. Joint work with Syed Kamran Haider, Hamza Omar, Ilia Lebedev, and Srini Devadas.
Marten van Dijk is the Charles H. Knapp Associate Professor at the ECE department at the University of Connecticut. He has over 15 years of experience in system security research in both academia (MIT and UConn) and industry (Philips Research and RSA Laboratories). He has contributed in widely varying areas, from Physical Unclonable Functions (PUFs), to Aegis, the first single-chip secure processor that verifies integrity and freshness of external memory, to Oblivious RAM, authenticated file systems with proofs of retrievability and fully homomorphic encryption over the integers. | https://ece.umass.edu/event/seminar-marten-van-dijk |
Government has failed to meet its target of 2000 housing units in the first quarter of the year after managing only 569 units.
This translates to just above a quarter of the target.
According to Information Minister Monica Mutsvangwa, “a total of 569 housing units were constructed and completed for the quarter out of a targeted 2000 units.”
“The Covid-induced lockdown adversely affected the construction industry which was declared a non-essential service,” she said.
The failure has been made more dire by a backdrop of widespread demolitions of residential houses.
Over 3000 houses were demolished in Melfort whilst a further 11 000 face demolition in Chitungwiza.
Last year over 200 houses were also destroyed along High Glen road.
Many other cooperatives have also suffered the same plight.
Government’s National Development Strategy (NDS1) envisions the delivery of 220 000 housing units by 2023.
Out of the total target, government is left with only 2 years to deliver the remaining 219 431 units.
Earlier this year, Housing Minister Garwe said they would channel $5 billion released by Treasury towards affordable housing for low-income earners.
The country has a housing backlog of over 1.4 million.
The government has previously pledged to clear the backlog by 2030. | https://marondera.opencouncil.co.zw/government-misses-75-of-housing-target/ |
Last month, the average car rental length in Breda was 4 days.
The average rental car length in Breda is 3 days.
The most booked rental car type currently in Breda is ECONOMY cars.
Last year, the most booked rental car type in Breda was ECONOMY cars.
How much does it cost to rent a car in Breda? Last month, the average car rental price was 142 USD.
Last year, how much did it cost to rent a car in Breda? Last year, the average car rental price was 112 USD.
What is the current average daily price to rent a in Breda? Last month, the average rental price was 40 USD per day.
How much did it cost to rent a car in Breda over the past 12 months? Last Year, the average rental price was 42 USD per day.
See below the last 5 customer reviews. Our customers rated Breda Car Rental with an average of 10.00 based on 1 rating.
The person who gave me a car very friendly nice good knowing his work man. | https://www.rentalcargroup.com/en/netherlands/breda-car-rental/ |
Celie reads another of Nettie's letters. Nettie has been working hard in the village, from early in the morning till late at night, helping with every aspect of life there, and with Samuel and Corrine's missionary labors. Nettie enjoys the work, but finds it utterly exhausting.
Nettie begins her missionary post with a great deal of idealism. Although the work taxes her physically and mentally, she, at the very least, has been allowed to come to Africa as a free and independent woman, in command of her own future.
Catherine, a woman in the village, has a daughter named Tashi, who enjoys playing with Olivia. Catherine says Tashi does not need to be educated, since her only job will be to act as a good wife to a man one day. Nettie disputes this, and says she herself is not married. Catherine counters that, for this reason, Nettie does not matter very much, since she has no husband and no family.
Immediately, however, Nettie is confronted with the Olinka's view of Nettie's unmarried life. Catherine thus perfectly recapitulates a feeling that Nettie and Celie had when living in America: that women can only matter if they are mothers, taking care of households, children, and their husbands.
But Olivia is very smart, and she learns a great deal in school, even teaching some to Tashi when they are playing. Corrine tells Nettie, later, that, to avoid confusion, Nettie should make clear to the villagers that she is not married to Samuel, and that Corrine is. Nettie is slighted and upset by this, but she nevertheless agrees. Nettie does not understand why, all of a sudden, Corrine seems to treat her with the greatest circumspection.
Corrine's jealousy is here made plain. It is hard for Corrine to imagine that Nettie would be willing to move to Africa without somehow having designs on her husband. The Olinka, interestingly, also believe, at first, that Samuel has two wives, as is permitted in their culture.
Nettie then describes her small hut, and says that she wishes she had a picture of Celie to put in it, next to her picture of Jesus Christ.
Although the move is not subtle, Walker appears to equate Nettie's powerful religious feelings with her devotion to her sister, so many thousands of miles away. She puts the connections and love between family on the same level as connection with and love for God. | https://www.litcharts.com/lit/the-color-purple/letter-62 |
Good Afternoon, Chairperson Cheh, Committee members, Councilmembers, and your staffs. My name is Lucinda Babers, and I am the Director of the District of Columbia Department of Motor Vehicles (DC DMV).
Prior to sharing the major FY14 and year-to-date FY15 DC DMV accomplishments, I would like to thank our customers, this Committee and the rest of the Council for your support. We often rely on all of you to provide the necessary feedback that guides our decisions for service improvements. I also must extend a special thanks to my DC DMV coworkers whose dedication and efforts make it possible for us to service the needs of the customers. As I always say, I am truly humbled to serve with them and thank them for all they do for the District and Team DMV.
DMV provides service to over 576,000 licensed drivers/identification card holders and 296,000 registered vehicles at four service centers. We provide adjudication services and collect ticket payments for more than 2.1 million tickets annually. We also conduct over 187,000 annual vehicle inspections. DMV interacts with DC residents and non-residents, with an average of 3,200 daily customer contacts—more than almost any other District government agency.
During FY14 and year-to-date FY15, we continued our efforts to streamline and improve service delivery. Following are the highlights of our past accomplishments.
In November 2013, we completed the implementation of a more secure credential with advanced security features, including taking digital photos upfront. We also introduced a central issuance process which further deters fraud. Our strong collaboration with the Office of the Chief Technology Officer (OCTO) enabled the successful implementation of this major project.
With assistance from the Department of General Services (DGS), we opened a new DMV service center located at Georgetown Park Mall on April 29, 2014. This facility, which offers licensing, identification, registration and titling services, increased our capacity to service District residents and allowed for the implementation of Limited Purpose credentials.
On May 1, 2014, we implemented the Driver Safety Amendment Act of 2013 which primarily allows for the issuance of limited purpose credentials to undocumented residents. We also rolled out REAL ID, a federally compliant credential with a star in the upper right-hand corner, on May 1, 2014, and we received full REAL ID certification from the Department of Homeland Security on November 17, 2014. REAL ID requires all District residents to recertify, in person, their proof of identity, proof of social security number and two proofs of residency when they obtain an original credential, renew, change their address or obtain a duplicate credential. It is important to note a resident’s existing DC credential remains valid for federal purposes until the expiration date. In other words, I’m good until 2020!
In terms of information technology enhancements, in August 2014, we implemented an online transaction to allow customers to request a registration of out-of-state automobiles exemption to prevent non-DC residents from receiving tickets for failure to display District tags. We also implemented an online process for customers to renew their various reciprocity permits; once again allowing customers to avoid an in-person visit. Our September 2014 introduction of free Wi-Fi in all DMV locations allows residents to remain connected while waiting to transact DMV business.
Another major initiative was the October 1, 2014 implementation of the Traffic Adjudication Amendment Act of 2013. To comply with this Act, DMV significantly modified the adjudication process to allow customers to request reconsideration within thirty days after a liable hearing decision and prior to the appeals process. Additionally, in limited circumstances, customers are able to submit a ticket for a hearing within 365 days of the ticket issuance date.
As always, prior to closing, I would like to provide the listening public with a few important DC DMV tips. To avoid insurance lapse fines and possible registration suspension, insurance must be maintained on all vehicles that are currently registered in the District. Therefore, prior to cancelling the insurance for a vehicle which you are no longer driving, please also cancel your vehicle registration by surrendering your license plates to DC DMV. This tag/registration cancellation can be done online at www.dmv.dc.gov.
There is often confusion among customers related to the functions of the various agencies involved with parking. The District Department of Transportation regulates parking by managing policy, signage and meters; the Department of Public Works enforces parking by issuing tickets; and DC DMV adjudicates tickets by allowing you to contest a ticket you believe was issued in error. If you receive a ticket which you believe to have been issued in error, you must officially adjudicate the ticket, within the timeframe allowed, using the instructions on the back of the ticket or on our website to protect your legal right to adjudicate. You can submit your request to adjudicate tickets online at dmv.dc.gov. Additionally, if you receive a ticket which doubles (i.e., has a penalty added), and you wish to contest just the penalty amount, you should not pay the original fine amount. Payment of any portion of the ticket is an admission of liability and prevents you from adjudicating any part of the ticket.
Finally, DC DMV rivals almost any motor vehicle jurisdiction in terms of online services with more than forty transactions. Therefore, we encourage you to take advantage of these online services to skip the in-person DMV trip and avoid the lines. All DC local libraries have secure Internet connections to assist those without computers or Internet access. Furthermore, most online services can also be done by mail; however, you should allow sufficient mailing time.
Although we have made improvements during the past year, we are aware more is needed. Therefore, we will continue to move forward to identify and implement innovative operational processes, especially those aimed at reducing customers’ in-person visits. Again, we appreciate the support we have received from the Council and look forward to continuing our efforts to improve the quality of service to the residents of the District of Columbia. We will now address any questions you may have. Thank You! | https://dmv.dc.gov/release/dc-dmv-testimony-committee-transportation-environment |
Hearts Of Mercy Ministries Uganda
Love one another as I loved you.
Why We Serve in Uganda
86% of the population of Uganda lives in rural areas, and poverty is widespread. Many children live in remote communities, with 55 percent of the villages' children living below the poverty line.
Orphans and widows are among the hardest hit from the country's turmoil as they have been left to fend for themselves—either on the streets or in the remains of IDP camps. Healthcare is a great concern for families. Access to medical facilities is limited and costly. Without access to even the most basic necessities or services, acute conditions such as malnutrition are rampant. Psychological and emotional stresses affect all ages. Unresolved trauma resulting from horrific war-time experiences haunts adults and children alike. | https://hommu.cfsites.org/custom.php?pageid=49350 |
We begin each new client relationship with a thorough risk analysis, which we have found to be the single most important contributor to long-term satisfaction with portfolio management. Next, we prepare a thorough evaluation of your current holdings and compare them to our strategy to determine current income and risk divergences from our benchmarks. We will outline our buy-sell recommendations free of charge, and if you decide to engage us as your advisor, we will prepare an investment policy designed personally for you. Upon your approval of your investment management documents, we will immediately begin rebalancing your portfolio, strategically planning each change depending on current market conditions and our near-term forecasts.
You will find us available any time for conferences and updates and we will formally review your portfolio with you at least quarterly. Based on your investment goals, we expect to make significant modifications to your portfolio on average 2 to 3 times a year.
Disclaimer: Investing involves risk including the potential loss of principal. Please note that rebalancing investments may cause investors to incur transaction costs and, when rebalancing a non-retirement account, taxable events will be created that may increase your tax liability. Investment strategies, such as rebalancing a portfolio, cannot guarantee a profit or protect against loss in every market environment. | http://www.tappanco.com/Services-Provided.3.htm |
In an effort to make information about clinical trials widely available to the public, the U.S. Department of Health and Human Services today issued a final rule that specifies requirements for registering certain clinical trials and submitting summary results information to ClinicalTrials.gov. The new rule expands the legal requirements for submitting registration and results information for clinical trials involving U.S. Food and Drug Administration-regulated drug, biological and device products. At the same time, the National Institutes of Health has issued a complementary policy for registering and submitting summary results information to ClinicalTrials.gov for all NIH-funded trials, including those not subject to the final rule.
Clinical trials are vital to medical advances because they test new and existing health-related interventions, helping us understand whether they are safe and effective in humans when used as intended. Some clinical trials provide information about which medical treatments work best for certain illnesses or certain groups of people.
Expanding the registration information in ClinicalTrials.gov improves people’s ability to find clinical trials in which they may be able to participate and access investigational therapies. More information about the scientific results of trials, whether positive or negative, may help inform healthcare providers and patients regarding medical decisions. Additional information will help researchers avoid unnecessary duplication of studies, focus on areas in need of study and improve study designs, ultimately advancing the development of clinical interventions.
Requirements under the final rule apply to most interventional studies of drug, biological and device products that are regulated by the FDA. The requirements do not apply to phase 1 trials of drug and biological products, or small feasibility studies of device products. The final rule specifies how and when information collected in a clinical trial must be submitted to ClinicalTrials.gov. It does not dictate how clinical trials should be designed or conducted, or what data must be collected.
The final rule also provides a list of potential legal consequences for non-compliance.
Learn more about these changes in the related links in the right-hand sidebar to this release.
The NIH policy applies to all NIH-funded trials, including phase 1 clinical trials of FDA-regulated products and small feasibility device trials as well as products that are not regulated by the FDA, such as behavioral interventions.
HHS values the public’s participation in clinical trials and the knowledge gained by their participation; considers it an obligation to support the maximal use of this knowledge for the greatest benefit to human health; and strongly supports sharing of clinical trial summary data to allow the broader scientific research community to use and build upon clinical trial findings.
The Office of the Director is responsible for setting policy for NIH, which includes 27 Institutes and Centers. The Office of the Director also includes program offices which are responsible for stimulating specific areas of research throughout NIH. Additional information is available at https://www.nih.gov/institutes-nih/nih-office-director.
About the National Library of Medicine (NLM): The world’s largest biomedical library, NLM maintains and makes available a vast print collection and produces electronic information resources on a wide range of topics that are searched billions of times each year by millions of people around the globe. It also supports and conducts research, development, and training in biomedical informatics and health information technology. Additional information is available at http://www.nlm.nih.gov. | https://www.nih.gov/news-events/news-releases/hhs-take-steps-provide-more-information-about-clinical-trials-public |
Purpose: Because of the automated nature of knowledge, experts tend to omit information when describing a task. A potential solution is cognitive task analysis (CTA). The authors investigated the percentage of knowledge experts omitted when teaching a cricothyrotomy to determine the percentage of additional knowledge gained during a CTA interview.
Method: Three experts were videotaped teaching a cricothyrotomy in 2010 at the University of Southern California. After transcription, they participated in CTA interviews for the same procedure. Three additional surgeons were recruited to perform a CTA for the procedure, and a "gold standard" task list was created. Transcriptions from the teaching sessions were compared with the task list to identify omitted steps (both "what" and "how" to do). Transcripts from the CTA interviews were compared against the task list to determine the percentage of knowledge articulated by each expert during the initial "free recall" (unprompted) phase of the CTA interview versus the amount of knowledge gained by using CTA elicitation techniques (prompted).
Results: Experts omitted an average of 71% (10/14) of clinical knowledge steps, 51% (14/27) of action steps, and 73% (3.6/5) of decision steps. For action steps, experts described "how to do it" only 13% (3.6/27) of the time. The average number of steps that were described increased from 44% (20/46) when unprompted to 66% (31/46) when prompted.
Conclusions: This study supports previous research that experts unintentionally omit knowledge when describing a procedure. CTA is a useful method to extract automated knowledge and augment expert knowledge recall during teaching. | https://pubmed.ncbi.nlm.nih.gov/24667500/?dopt=Abstract |
Mosasaurus was a prehistoric marine reptile with immense, powerful jaws that preyed on other marine reptiles, ammonites and large nautili. It belonged to a group of marine reptiles that attained great size in the Cretaceous before becoming extinct, forerunners of the great sharks and cetaceans that filled the gap they left in the world's oceans.
This Cretaceous-period marine reptile featured in the Jurassic World film saga. The mosasaurs were a group of highly developed marine predators, combining crocodilian and lizard-like forms, with huge double-hinged jaws and skulls armed with many teeth; it is thought they could unhinge their jaws and gulp down large prey. They had powerful flippers and streamlined bodies. At first it was thought that mosasaurs swam like snakes or eels, with an undulating side-to-side movement. However, the most recent research suggests their bodies remained stiff and that a large fluked tail provided the locomotion that propelled the mosasaur.
Descended from a more ancient group of aquatic lizards, the aigialosaurs, mosasaurs breathed air and gave birth to live young. They ranged in length from around 3 feet (Dallasaurus turneri, the smallest mosasaur) to 50 feet (Mosasaurus hoffmannii, the largest). The mosasaurs reigned for roughly the last 20 million years of the Cretaceous period, the Turonian to Maastrichtian ages, before the mass extinction at the K-T boundary around 65 million years ago, which also ended the dinosaurs' existence.
The first fossils were discovered in an 18th-century limestone quarry near Maastricht on the river Meuse. When first discovered, the skeleton was thought to be that of a dragon, and the mayor of Maastricht had the fossil bones put on display in a glass case for all to marvel at the size of the amazing new discovery.
Genus: Prognathodon mosasaurus (family Mosasauridae).
Age: Mesozoic era, Cretaceous period, Turonian to Maastrichtian ages, approximately 135 to 65 million years ago.
Origin: From the Khouribga formation, Casablanca, Morocco, North Africa.
Wolf!
Economists and other industry watchers predict U.S. auto sales will decline next year. Just as they predicted last year, for 2000.
They were incredibly wrong about 2000 sales. But this time, they insist, the wolf really is at the door.
Sales forecasts for 2001 from nine analysts interviewed for this report averaged 16.2 million light vehicles, down 6.9 percent from an estimated record of 17.4 million this year.
Although that would be a decline of 1.2 million units, no one is sounding the alarm.
'That's still a great number,' said Ellen Hughes-Cromwick, Ford Motor Co. manager of corporate economics. Ford expects 2001 light-vehicle sales of 16 million to 16.5 million units, she said.
Although some slowing is likely, the U.S. economy is expected to stay strong in 2001, growing about 3 percent. And Federal Reserve Chairman Alan Greenspan signaled Dec. 5 that the central bank is inclined toward easier interest rates, remarks that jolted Wall Street back to life.
'I see no red lights,' said Van Bussmann, DaimlerChrysler corporate economist.
Many forecasters have predicted a so-called 'soft landing' for auto sales and the economy in general for six straight years. But the stock market, the dot-com economy and the automakers themselves made mincemeat of those conservative forecasts.
This year U.S. light-vehicle sales will set a record for the second straight year: an estimated 17.4 million units, vs. 16.6 million in 1999. It took 13 years to beat the old record of 16 million, set in 1986.
By the end of 2000, customers will have bought almost 80 million light vehicles in a five-year stretch - an average of 16 million a year - vs. about 68 million in the five years up to and including 1986.
Who's left?
'People have bought an awful lot of cars. I don't think there are that many people left,' said Cynthia Latta, chief U.S. economist for Standard & Poor's DRI in Lexington, Mass.
DRI has done better than most in predicting sales the past two years, but even its forecasts fell short of actual sales. DRI's 2001 forecast is 16.1 million light vehicles.
'What we have is a less exuberant stock market than we had before,' Latta said. Oil prices are relatively high now, but they are expected to moderate in the spring. DRI also expects the Federal Reserve to cut interest rates next year to stimulate the U.S. economy.
The automakers helped negate their own predictions for 2000 by tearing up their sticker prices.
'This was the year of the great price wars,' said Diane Swonk, chief economist for Chicago-based Bank One Corp. The bank expects 2001 light-vehicle sales of 16.5 million.
'The reason I think sales will be slow, but remain at a heightened level, is that Main Street is in very good shape, even though Wall Street took it on the chin this year,' she said.
Heavy incentives
Incentives have grown from an average of around $800 per vehicle in 1996 to an estimated $2,600 this year, based on a year-to-date, sales-weighted average for the traditional Big 3 brands, according to Bussmann at DaimlerChrysler.
Bussmann said DaimlerChrysler is spending $2,650 per vehicle on incentives, vs. an estimated $2,525 per unit at GM and $2,596 for Ford.
That points to the obvious. While incentives have sparked demand, said Swonk, they have cost the industry a lot of money.
'That's how you get record vehicle sales, yet automakers are being hammered (on profits),' she said. 'Zero down and zero percent financing in an environment when interest rates are rising is pretty expensive stuff.'
Jim Meil, chief economist for Cleveland-based Eaton Corp., says two powerful forces are at work in the marketplace.
'The (automakers) say they can't increase incentives because they have to make money. The customer says, "We've got them built into our pricing parameters," ' he said.
'To increase incentives, even in the face of a declining market, might sound attractive to the marketers, but I think the word in Detroit now is to preserve profits,' he said.
Eaton's 2001 forecast of 16.2 million light vehicles depends on a moderation in oil prices; a cut in interest rates next year; and continued price restraint.
'But for 2001 there are two or three factors that have to change to keep the sales decline to just 5 percent. You're going to have to have a reversal of factors that right now are hurting the market. And that's not a for-sure thing by any means,' he said.
Less to give
What will stop automakers from cranking up incentives even higher in 2001? Fundamentally, they are earning less money, so they have less to give away. They will have still less, if volume declines.
'When volume is declining, you can't cut costs fast enough. Profits always go down substantially more than units,' said Scott Merlis, auto industry analyst for Wasserstein Perella Securities Inc. in New York.
'We're looking for the soft landing in sales, and a somewhat hard landing on profits. One problem is that labor is quasi-fixed; it's hard to cut. It's hard to accelerate other cost reductions.'
Merlis said automakers can't maintain profits by cutting costs during a market correction because cost adjustments take too long to work through to the bottom line.
'When you get a 1-million-unit decline in the industry, you just can't move fast enough' to maintain profits, Merlis said.
DaimlerChrysler already is in trouble. Precisely because of high incentives and lower volumes, the former Chrysler Corp. brands posted an operating loss of $512 million for the third quarter, vs. a year-earlier operating profit of $1 billion.
Worldwide, earnings fell 78 percent to $289 million in the third quarter, not counting one-time gains and losses.
At the same time, cash on hand had fallen to $4 billion on Sept. 30, from $9.1 billion on Dec. 31, 1999. Burning up that much cash was one reason Moody's Investors Service downgraded DaimlerChrysler's long-term debt rating on Dec. 1. That action could raise DaimlerChrysler's cost of borrowing money.
Meanwhile, Ford Motor Co. third-quarter net income fell 7.4 percent to $888 million. Ford said higher U.S. interest rates made it more expensive to offer lease incentives and cut-rate loans.
Ford said its third-quarter earnings would have been a record if not for the Firestone recall.
GM's third-quarter earnings fell 5.5 percent to $829 million. Paul Ballew, General Motors general director of global market and industry analysis, said GM has been reining in incentives, despite offering some zero-down and zero-interest deals.
A dip, not a plunge
Ballew said he expects incentives to stop escalating, but he does not predict a sharp drop.
'It's pretty hard to see a dramatic pullback on incentives (in 2001). GM, for instance, has sacrificed some share this year, where if we wanted to push incentives or fleet sales, we could have done so,' he said.
Bussmann said automakers don't even want sales to increase, if higher incentives are the only way it can be done.
'I don't see the industry going from an average $3,000 incentive this year, to an average $5,000 incentive next year,' he said. Not only are incentives more expensive; they seem to be getting less effective, he said.
'What produced all the impact this year was not so much the level of incentives, but the fact that they increased so much,' Bussmann said.
Last year at this time, Bussmann said he did not expect incentives to escalate in 2000. But in the third quarter of 1999, the situation was different. The old Chrysler Corp. brands were making an average operating profit of $1,425 per vehicle, on the basis of U.S. operating results divided by the number of units sold in the United States.
In the third quarter of 2000, using the same measure, the Chrysler brands had an operating loss of $822 per vehicle. | https://www.autonews.com/article/20001218/ANA/12180732/no-red-lights-seen-for-u-s-economy |
Introduction
The Astro-logic of Consciousness
Although few astrologers would argue that any birthchart is written in stone, beyond the exercise of free will, choice or consciousness, little is written about the subjective interplay between astrological symbolism and the consciousness exercised in relation to it. The astrological community has little to say about consciousness – neither in terms of how it impacts the subjective perception of a birthchart, nor the way the meaning of a birthchart changes as the symbolism is embodied more or less consciously. As the first in a series elaborating an astropoetic approach to astrology, this book will be a partial attempt to remedy this glaring omission. For as we discuss the various tenets of exoteric astro-logic – hoping to feel our way into a more esoteric place of subjective wisdom – we can only do so within a framework recognizing the primacy of consciousness as the overarching context within which everything astrological must be understood.
Exoteric astro-logic can be helpful as a language of inquiry into the nature, meaning and purpose of life experience – as I will demonstrate in some detail in this book – but it is the inquiry that is primary. Ultimately, the birthchart and the exoteric astro-logic that infuses it with abstract meaning are only really useful as the point of departure for an exploration of the life that colors it with subjective nuance and the intimate particulars of actual experience. To put it bluntly: without an assessment of the choices made by the soul living the chart, the chart cannot be interpreted, except as an exercise in speculation. Our ability to say anything at all about a chart depends upon knowing something about who or what the chart refers to.
Interpretation as an Exercise in Consciousness
We must begin by recognizing that the astrological inquiry itself is an exercise in consciousness. What we “see” in a birthchart will depend upon the consciousness that we bring to it at various stages of our inquiry. However astutely it is approached in the moment, the birthchart is not something that reveals itself all at once, nor will it necessarily reveal itself in the same way tomorrow as it did yesterday. What we see when we look within it changes as we change. The same chart will be ours throughout a lifetime of experience, but it will not mean the same thing at age 47 that it does at age 7. The exoteric astro-logic we apply to the chart may be consistent, but the fruit of our application – the esoteric understanding that we gather – will ripen with age and spiritual maturity. Both life and our understanding of life as it is reflected in our birthchart are thus primarily a reflection of the evolution of our consciousness.
If we are attempting to understand our own birthchart, the quest is necessarily one that will evolve over the course of a lifetime. If this is true of each of us end-users of astrology (hopefully each practicing astrologer is also an end-user), it must be an even more profoundly humbling realization when we turn our attention to the birthchart of another. For what we see in any birthchart, and how we interpret what we see will depend upon who we are as souls, where we have been in our own spiritual journeys and what we are capable of seeing.
At best, when we interpret the birthchart of another, we stand outside of that life, attempting to peer through its walls with the x-ray vision afforded by our exoteric knowledge of astrology, and hoping to grab a snapshot of a moving human work-in-progress that the subject of the snapshot will recognize as theirs. Although many astrologers do accomplish this minor miracle on a routine basis, the ability to do so is at heart a profound demonstration – not of astrological prowess per se – but of the capacity of the astrologer to enter into the experience of the client and see it within the framework that astrology provides. It is not just our competence as astrologers that allows us to do this, but also – and perhaps primarily – our maturity as souls endowed with empathy, an awareness of the human tendency to project, and a high level of responsibility for our own subjective state of being.
This self-evident truth is rarely acknowledged among astrologers, who are weaned along with the rest of our culture on the expectations inherent in a dominant scientific paradigm. As discussed in some detail in The Seven Gates of Soul, science has vigorously sought to neutralize, if not eliminate altogether, the impact of the observer on the observation. Through double-blind experiments, endless replication by other experimenters, and other rigorous procedural checks and balances, the subjective is effectively squeezed out of any statement of scientific fact. Despite the fact that some of the more scientifically-minded in our community call for scientific rigor in the articulation of exoteric astro-logic, few practicing astrologers I know would feel comfortable knowing they are not an integral part of the equation by which the truth in a birthchart reveals itself in the course of a reading. In the best readings, a synergistic magic takes place that is as much a product of the interaction between the astrologer and the client – as two human beings in dialogue – as it is of mere astrological knowledge. Any practicing astrologer who has experienced this must acknowledge that the consciousness that the individual astrologer brings to the actual practice of astrology matters profoundly.
Most astrologers celebrate the fact that different astrologers bring different perspectives to the same birthchart, but we tend to consider our differences mostly in terms of preferred techniques, or of astrological orientation, rather than the more intangible factors that each of us brings to our work – personal history and background; cultural, socio-economic, political and religious biases; individual strengths and weaknesses; life experiences; unresolved psychological issues; personal beliefs about the meaning and purpose of life; temporary moods and mindsets at the time of a given reading, etc. Our focus is almost exclusively on astrology and our relationship to the language – even though each of us will speak a different language and see a chart differently – not just because of what we know about astrology, but also because of who we are.
It is important to acknowledge that the picture in the mirror gets passed through an additional set of filters when a professional astrologer “reads” the birthchart for a client. In order to function as a clear intermediary between a birthchart and its rightful owner, the professional astrologer must possess, in addition to his exoteric training, a deep esoteric understanding of his own chart and who he is in relation to it. Ideally, he will be consciously and intentionally on his own spiritual journey, preferably one facilitated by the application of astrology to his own life process. In addition to standard counseling skills, he must cultivate the ability to astro-empathize – to listen with astrologically trained ears for the subjective truth behind the symbolism of the chart. To do this effectively, he must set the intention to “get out of the way,” be upfront and honest about any personal biases that may skew a reading this way or that, and suspend his exoteric knowledge of astrology so that new information – supplied by the client – can register. In this way, his presence in the reading can become as transparent a filter as possible, and a genuine guide to self-knowledge for those who come to him for perspective.
Astrology as a Language for Self-Inquiry
Although I know from personal experience that this can be done, I have gradually shifted my own focus away from reading charts for other people to helping them to read their own. In 1993, after 25 years of professional practice, I began to promote self-reliance in the quest for guidance by teaching a correspondence course called Eye of the Centaur to those wanting to learn astrology to facilitate their own process of self-discovery. This book is written to serve this same student base and audience, as well as conscientious professional astrologers who wish to use an esoteric knowledge of their own chart as a platform for their practice.
Each individual, armed with a knowledge of exoteric astro-logic, and taught a few techniques of self-observation and memory work, is in a much better position to intuitively penetrate to the heart of the deeper esoteric understanding of the symbolism than any stranger, however well-versed in astrology they might be. We all need help from time to time, and well-trained professional astrologers can be a godsend in this capacity. But in the end, each of us must make sense of our own lives, and bring as much consciousness to it as possible. Astrology is even more valuable as an aid to this more personal, solitary process.
Some would argue that it is not possible to be objective about oneself. However, since it is actually subjective wisdom that we seek, and not mere astrological information, it is only as we attempt to see the image reflected in the mirror for ourselves that we can find what we’re looking for. It may take time, but it is not something that can just be handed to us, even by the most skilled astrologer. What we seek in this mirror is not just a static interpretation of symbolism, but rather an evolving sense of self, reflected through the dynamic track record of choices mapped to various astrological cycles. This mapping of choices to cycles, in turn, renders the birthchart comprehensible to us as a useful reflection of the evolving consciousness we have brought to it.
The Birthchart as a Template for the Tracking of Consciousness
Having established the primacy of this track record of personal choices, and encouraged the reader to explore her own, we are now in a position to understand how the birthchart itself might reveal, purely on the level of exoteric astro-logic, the tracks upon which consciousness will tend to travel. In The Seven Gates of Soul, I suggested that in the age-old argument between fate and free will, the birthchart represents a template of fate, while free will is what each of us brings to the template. Here I would add that fate can be understood, in part, as a set of habits of consciousness, which we are free to indulge unconsciously or to break with new awareness. Exercising this freedom is the essence of the choice we make at each juncture of our journey. Thus, the choices we track as we explore the birthchart are essentially between a set of defaults and a departure from habit, which broadens and deepens our way of being. Claiming the broadest and deepest possible way of being is then tantamount to fully incarnating as Spirit within a body – of becoming a fully embodied soul.
Astrology can be helpful in this process by revealing the default position from which we attempt to expand and deepen. Or put another way, the natal birthchart can be understood as a template of habit patterns, in the exercise of consciousness, against which we can measure our subsequent growth. In Part Two, we will look in some detail at how this template is constructed. We will map various states of consciousness to the appropriate astrological correlates, and develop a system through which any birthchart can be understood explicitly as a template for the awakening and development of consciousness. In Part Three, I offer additional suggestions for moving beyond the system into a more open-ended awareness of the evolutionary process at the heart of the life behind the birthchart. First, however, it will be helpful to differentiate between various states of consciousness, discuss the psychology that governs each state, and explore the habits of default that are implicated at each level.
Consciousness is a fluid medium, and any attempt to discuss it in terms of discrete states is somewhat contrived. On the other hand, having a conceptual framework in which to explore the spiritual psychology of consciousness can be a useful point of departure for evolving a system of exoteric astro-logic conducive to a more nuanced quest for subjective esoteric wisdom, so we will persist a while longer in this folly.
The Yogic Model of Consciousness
An elegant and relatively simple system for understanding consciousness was conceptualized at least four thousand years ago (Judith, Eastern Body, Western Mind 5) by yogic practitioners of a Hindu mystical tradition, postulating a series of seven chakras through which primal energy and consciousness rise in the course of spiritual evolution. These practitioners were intent on charting their spiritual progress and articulating the inner work yet to be done, and used the chakra system as a point of reference. The system was passed down to successive generations of disciples as an oral teaching, and eventually codified in writing a couple of centuries before the birth of Christ. Though other frameworks are possible, in my experience none speaks quite so simply or eloquently to the possibility of human evolution. This perspective gains immensely in power and clarity when synthesized with astrology, as I will demonstrate comprehensively in this book.
Our modern understanding of the chakra system comes primarily through a Hindu teacher known as Patanjali, who lived three centuries before the birth of Christ. His teachings were first written down as a set of aphorisms in the Yoga-Sûtra, c. 100-200 CE, elaborated in the 16th century by Tantrik Purnanda-Swami in a text called Sat-Chakra-Nirupana, and transmitted to the West largely through the promotional activities of the Theosophical Society, an esoteric metaphysical group founded in New York in 1875 by Madame Helena Blavatsky and Colonel H.S. Olcott. The Society counted among its members early western chakra authorities such as Alice Bailey, Annie Besant, and Charles Leadbeater. In the late 19th century and on into the early 20th, the Society published a number of translations, and sponsored various East Indian teachers in the US and western Europe, who brought with them variations of this ancient yogic wisdom teaching. In the 1960s and 70s, the chakra system was further popularized by spiritual teachers such as Paramahansa Yogananda, Swami Satchidananda, Gopi Krishna, Swami Kriyananda, Yogi Amrit Desai, Haridas Chaudhuri, Sri Chinmoy, Yogi Bhajan, Swami Muktananda, and others who traveled from India and established ashrams in both the US and Europe. More recently, knowledge of the chakras has been refreshed by a new wave of teachers, such as spiritually oriented psychologist Anodea Judith, medical intuitive Caroline Myss, and energy medicine pioneer Rosalyn Bruyere.
My understanding of the chakra system comes from both Yogi Bhajan and Swami Muktananda, with whom I studied intensively from 1971 – 1979. This understanding necessarily differs from more traditional teachings through a consideration of this system within an astrological context, as well as through allowing my understanding of the system to percolate through a quarter century of life experience. While I have no interest in reinventing the wheel, I do intend to turn that wheel in kaleidoscopic fashion to reveal a layered multi-dimensional perspective that to my knowledge is not available elsewhere.
The Five Levels of Manifestation
Our discussion of chakras will be further developed through reference to a second, lesser-known system, as described by the Hindu concept of the five koshas, or levels of penetration of Spirit into matter. In The Seven Gates of Soul, I speak of the soul’s experience of embodiment as an inhabitation of the body by Spirit. Here, I will elaborate that understanding through reference to the koshas, which describe in more detail the extent to which Spirit has penetrated the material realm. Reference to this second complementary system lets us assess not only the quality of consciousness brought to focus through each astrological pattern that we study, but also the depth of awareness the embodied soul is being called to by its experience.
Within the Hindu framework, the koshas exist as intermediate states of receptivity to Spirit along a continuum of being that lies between pure matter, or prakriti, at one end of the scale, and pure consciousness, or purusha, at the other. Kosha is roughly translated as “sheath,” implying that within the yogic system, Spirit is essentially clothing itself with matter at various levels of transparency. At one end, Spirit wears the thick winter clothing of matter, and the soul is essentially oblivious to the presence of Spirit within the body. At the other end, Spirit is essentially naked, and the soul is fully conscious and essentially identified with Spirit.
Between the endpoints of pure matter and pure consciousness are distinct but interrelated levels of kosha, or sheathing, that find their expression within the human body. According to the yogic system, these are annamaya kosha, or the physical body itself; pranamaya kosha, or the vital energy body; manomaya kosha, or the realm of sense perception, emotion, memory, and ego-consciousness; vijnanamaya kosha, or the intuitive, meditative mind; and anandamaya kosha, or the ongoing state of bliss in which a full embodiment of Spirit is being realized in each moment.
While the five koshas can be understood as a hierarchy along which a soul becomes increasingly more conscious in the embodied state, it is also useful to understand that Spirit is continuously manifesting concurrently on all five levels. It is our awareness that draws one or more of the koshas into focus at any given time. Those koshas seemingly activated by our awareness will in turn determine the apparent circumstances of our lives. At any given time, however, it is possible to shift our awareness from one kosha to another, in order to understand more deeply how Spirit is available to us on a deeper or more all-encompassing level of penetration. Such a shift can suggest strategies for more effectively coping with whatever life issues appear to be related to the more obviously activated kosha.
For example, if you have just been diagnosed with cancer, then obviously your attention is being directed to annamaya kosha, or the physical body – that is to say, the densest layer of clothing assumed by Spirit. If you choose traditional allopathic treatment for this cancer, the available healing modalities of radiation, chemotherapy, and surgery will all be directed exclusively to this level. Cancer – or any seemingly physical manifestation of Spirit in the body – is, however, never just a physical manifestation. It also reverberates throughout all five koshas, whether we are aware of these reverberations or not.
If you move to pranamaya kosha, or the vital energy body, and ask what is going on there, for example, you might well notice how you feel shut down on an energetic level, as though you don’t quite have the vital force you need to cope on the physical level. Going more deeply still, to manomaya kosha, or the realm of sense perception, emotion, memory, and ego-consciousness, you might then trace these feelings back to an incident of betrayal five years earlier by someone you loved deeply, from which you have never fully recovered. Moving to vijnanamaya kosha, or the intuitive, meditative mind, you might then realize the necessity for forgiveness and letting go, and understand – in a way that a merely medical model of intervention could never fathom – that your healing and recovery in some sense is actually more dependent on your ability to do this, than any drug you might take or operation you might choose to undergo. Were you able to accomplish this inner task, you might then become aware of how Spirit was manifesting on the level of anandamaya kosha, and suddenly know beyond a shadow of a doubt, that regardless of what appeared to be happening on the physical level – whether you cured your cancer or not – you would be absolutely fine.
This is a hypothetical example. But as we will explore later in this book, what happens at each kosha, and the interplay between koshas can be tracked with exquisite detail as we view life through an astrological template. Each astrological indication, including the birthchart as a whole, can be interpreted at the level of each kosha, so that it gives one level of meaning relevant to the functioning of our physical bodies, another relevant to the flow of energy through our vital body, another relevant to the ongoing operation of our sensory-emotional experience, and so on. Each level of interpretation will parallel, confirm, and enhance our understanding at every other level, creating a developed synthesis of observations at all five koshas that can provide a much more complete astrological picture than a mere consideration of astro-logic alone. For this reason, as we discuss the astrology of consciousness, I will employ the koshas as a useful component of the system.
The Chakra System Revisited
The primary component of this system – the seven chakras – is more familiar to western seekers, since it has been part of our common vocabulary since the integration of eastern and western religious cultures in the late 1960s and 1970s. The word chakra means “circle,” “wheel of light,” or “vortex,” and refers to a state of consciousness through which life energy, or prana, is processed and released into expression. The chakras are not explicitly physical in nature, though each is associated with an endocrine gland and with a group of nerves called a plexus. Each chakra also has its psychological correlates, which in turn determine the needs, desires, source of motivation, intention, and characteristic patterns of fear and resistance that can be associated with the prana or life energy that is channeled through it.
Chakras are sometimes also referred to in the Vedantic literature as granthi, meaning knots, or sankhocha, meaning contractions, implying that as we work through the blockages of fear and resistance associated with each center, we rise to a higher level of consciousness. Thus, within the system of seven chakras, we have everything we need to understand the consciousness that is brought to bear upon our perception of the birthchart. We also have a broader context in which to place our study of astro-logic that recognizes the central role consciousness plays in the soul’s evolution through the patterns represented by the birthchart.
Before we explore this system in more detail, I wish to present a perspective about the system as a whole that is somewhat different than that which originally governed the yogic practitioners that developed it. These practitioners envisioned the system as a hierarchical progression of evolutionary states that paralleled and in some ways depended upon the raising of kundalini energy up the spine from one chakra to the next. Kundalini, sometimes referred to as shakti, was a reserve of psychic energy thought to rest at the base of the spine like a coiled serpent. Various yogic practices were designed to arouse the kundalini, and draw it up the spine where it would open and activate higher centers of awareness. Through the ongoing practice of kundalini yoga, coupled with an ascetic life based on vows of chastity, minimal concern for material needs, and non-violence, the higher chakras could be opened on a more permanent basis, and serve as the energetic basis for a life of refined motivation and expression.
While kundalini yoga is still practiced in both India and the West today, one need not subscribe to the practice in order to benefit from an understanding of the chakra system. Indeed, our purpose here is not to facilitate the practice of yoga, but to provide a context in which ordinary, everyday life might be understood with reference to consciousness, and a system of astro-logic might be used more consciously to map ordinary life as a vehicle for soul growth. From this perspective, it is somewhat misleading to think of the system in hierarchical terms, because everyday life is not a simple matter of progression from one state to the next.
Even within the practice of kundalini yoga itself, spiritual progress is rarely a strictly linear proposition. One may experience the temporary opening of a so-called higher chakra, for example, before a so-called lower one is completely purified. One may also find it necessary to return to more intensive work on a lower chakra in order to sustain one’s experience at a higher level. In fact, while the aim of kundalini yoga is to raise consciousness up the scale, in practice, the real work of spiritual growth often requires going down the scale to deal with unresolved issues connected to the lower centers.
Since the ultimate goal – in the practice of kundalini yoga and in life – is to experience all chakras open and vibrating at an optimal level, even the notion that we are moving up or down the scale can impede our understanding of the process. In practice, our spiritual work, however we choose to pursue it, will involve moving between various centers and learning to negotiate more skillfully and gracefully the energy dynamic between them. For this reason, I prefer to think of the seven chakras as being arranged in a circle, rather than as a straight line, as this allows us to work with them conceptually in a more flexible way.
The Chakras Considered as a Circle of Circles
Some precedent exists for this view among Taoist yogis, who speak in modern terms of the microcosmic orbit (Chia 6), through which ch’i, or the Taoist equivalent of kundalini, circulates through the chakras in a continuous circuit. While energy flows up through the chakra system along the spine at the back of the body, it flows down through the same system in front of the body. Anodea Judith also speaks of the necessity for balancing the upward flow of energy through the chakras, which she calls the “current of liberation,” with a downward flow, the “current of manifestation” (Eastern Body, Western Mind 14-15).
Conceptualizing the chakras as a circular continuum also creates a practical advantage in visualizing how they work together as a system. Among practitioners of both Taoist and Hindu yoga, there are known connections between various chakras that are not successive in nature – such as that between the second, or sexual center and the fifth or throat chakra, or that between the root chakra at the base of the spine and the crown at the top of the head. A circular model more easily allows for consideration of these kinds of connections, particularly within the context of spiritual psychology that is not dependent upon strict physiological correlates. The advantage of this will become clear as we proceed.
A third reason for moving away from a hierarchical model is that hierarchies invariably encourage judgment, which is nearly always detrimental to a clear understanding of the soul’s process. The natural assumption in contemplating any hierarchy is that higher is better than lower, but in the case of the chakra system, especially as it is considered from the perspective of spiritual psychology, this is not always the case. Energy medicine pioneer Rosalyn Bruyere points out that it was the Victorian mindset, prevalent throughout the age of British Colonialism, that endowed the lower chakras with their negative associations. She calls for a less judgmental approach that distinguishes between the frequency of energy moving through a chakra (consciousness) and the chakra itself (Wheels of Light 58).
Outside of the context of yogic discipline, it is a mistake to make judgments on the basis of which chakras seem more open than others, or where the major concentration of energy lies. This is necessarily so, because different processes experienced at different points in the evolutionary process, or within different life contexts, may require a different mix of energies, and so-called lower chakras may well play as important a role as, if not a more important role than, so-called higher chakras.
A young couple, desirous of children, for example, may be appropriately focused on clarifying issues within their second or sexual centers and their fourth or heart centers, while an older, solitary writer may be almost exclusively focused in her fifth or creative/expressive center. This does not mean, as a hierarchical interpretation of chakras might suggest, that the process of writing is a higher, more evolved activity than starting a family. Within the context of a quest for subjective wisdom, such judgments are, in fact, quite meaningless.
In The Seven Gates of Soul, I suggest that a major conceptual barrier to understanding soul process is the judgment projected onto that process, largely by religion. As the by-product of a religious orientation, the chakra system must also be purged of its judgmental overtones, before it can be truly useful. From a symbolic perspective, this can readily be accomplished by arranging the chakras conceptually within a circle, rather than along a straight line.
This approach also nicely mirrors our understanding that the circle is the basic foundation for all subsequent development of a cogent system of astro-logic. By synthesizing the astro-logic of the circle with a circular interpretation of the chakra system, we have the basis for understanding the movement of circles (chakras) within circles (the birthchart as a whole, and the various planetary cycles that circulate within it). Such a conceptual basis is itself astrological, since it employs the same symbol to explore the connection between the conscious evolution of an individual soul and the circulation of consciousness throughout the larger Whole or Cosmos of which the individual is a part.
A Word About the Psychic Correlates to the Chakra System
Before we can make the leap to a broader system of astro-logic, it will be necessary to briefly explore each of the seven chakras, and discuss their related psychologies. In the Hindu system, each chakra was part of an elaborate system of symbolic correspondences, involving animals, colors, a particular number of petals, and sounds. Since these are often suggestive on a subliminal level, and may intuitively engender a more personal set of associations, I will include them here for the reader to ponder, mostly without additional comment. Other aspects of the Hindu symbology can be rather esoteric, and are perhaps less meaningful or useful to the Western mind.
Although there is not universal agreement on these correspondences, the descriptions I have chosen come from a scholarly classic, The Serpent Power: The Secrets of Tantric and Shaktic Yoga by Arthur Avalon (Sir John Woodroffe), who translated them from original Tantric texts, written in Sanskrit, including Tantrik Purnananda-Swami’s Sat-Chakra-Nirupana. Woodroffe spent half his life locating these documents, which were well-guarded secrets of esoteric literature, and we can be reasonably sure they are as close to the source of the original teachings about the chakras as it is possible for modern western scholars to get, though certainly not the final word on the subject.
Even though we can talk about chakras as an objective system, chakras are by nature subjective phenomena that will be experienced differently by different people. As Theosophist authority Charles Leadbeater notes, referring to the wide divergence of opinion regarding the colors to be associated with each chakra, “It is not surprising that such differences as these should be on record, for there are unquestionably variants in the chakras of different people and races, as well as in the faculties of observers” (The Chakras 97).
Woodroffe’s correspondences, drawn from original texts, are best understood as corroborated observations made by yogic practitioners whose intention was to purify and cleanse their chakras, not necessarily replicable by contemporary observers viewing the auras of ordinary people. On the other hand, the modern tendency to simply map the chakras to the colors of the rainbow (Bruyere Wheels of Light 79, Judith Eastern Body Western Mind 2) – while conceptually appealing – is probably not any closer to the actual truth for most people. For the sake of comparison, I include observations by Leadbeater, Judith, Bruyere, and Myss, where they differ from Woodroffe’s translations of Tantric texts.
The Chakras as a Model of Psychological Process
Given that our interest here is not the psychic correlates to the chakras, but rather their relevance to a psychology of soul, I offer this brief taste of the ancient teachings only as an appetizer to the main course. The primary discussion will revolve around an exploration of each chakra in more contemporary psychological terms, as it manifests on the level of each kosha. The psychology of each chakra will be most visible on the level of manomaya kosha, or the realm of sense perception, emotion, memory, and ego-consciousness – and it is largely here that we will look for the perceptual framework that makes the mirror of the birthchart what it appears to be at each level of consciousness. Manomaya kosha is where most of us focus most of the time, since this is where the melodrama of the soul’s everyday journey appears to be playing itself out. Yet, the melodrama is often merely the most noticeable manifestation of a process that is unfolding concurrently at all five koshas.
Each chakra will function not just in terms of psychology, but also on the level of the physical body, on an energetic level, as an intimately personal play of suggestive imagery that speaks directly to the intuitive mind, and as a statement of spiritual purpose, where even the most mundane of life circumstances becomes grist for the mill of our awakening. Most modern interpretations of the chakra system assume that the so-called lower chakras are closer to the physical end of the spectrum, while the so-called higher chakras are more spiritual (Myss 68-70, Bruyere 44-47, Judith 7). This conception largely comes from the linear arrangements of the chakras up the spine, in which the lower chakras are literally closer to the earth. Using the circular model, by contrast, allows us to think of each chakra as a neutral domain in which Spirit can manifest on various levels of penetration as indicated by the koshas.
At shallower depths of penetration by Spirit, the first three chakras will appear to encompass a psychology that revolves primarily around earthly concerns, and to some extent, there will be a developmental progression to less worldly concerns as one moves into the fifth, sixth and especially the seventh chakra. But there will also be situations in which a deeper level of penetration of Spirit into matter in one of the so-called lower chakras can spark a powerful spiritual awakening, or conversely where shallow levels of penetration by Spirit in the upper chakras can manifest physically. In any case, spiritual development will not be a strictly linear progression from physical manifestation at the lower chakras to awakening of Spirit in the higher centers, and a dual system allows greater latitude in considering the actual course of the evolutionary process.
The Astro-Chakra System
In many ways, the chakra system – as understood by the ancient practitioners and modern adherents alike – constitutes a stand-alone approach to spiritual psychology. The depth and sophisticated simplicity of the system, refined through thousands of years of observation and practice, rivals anything modern psychology – still in its infancy – has to offer. This is especially true to the extent that modern psychology is built upon a scientific model, since, as discussed in The Seven Gates of Soul, such a foundation precludes serious consideration of the spiritual implications of psychological experience.
As many have pointed out, astrology is also a stand-alone system of spiritual psychology, refined through thousands of years of observation and practice. Any attempt to combine the two systems might seem an exercise in redundancy, were it not for the fact that each contributes something unique to the synthesis. As discussed earlier, astrology lacks an explicit understanding of the way in which consciousness alters the meaning of its symbolism, nor does it inherently include a discussion of consciousness as a framework for spiritual evolution that the chakra system offers. Individual astrologers may bring a sense of this to their work, but it is not intrinsically a part of the astrological language.
What astrology does contribute is the unparalleled ability to personalize the spiritual process, and to time it. Each chart is a unique signature of possibilities for spiritual growth belonging to a particular soul, and encoded within each chart is a timetable for the outworking of those possibilities. For all its sophistication, the chakra system lacks these two essential ingredients. For this reason, bringing the two systems together – in a creative synthesis I will call the astro-chakra system – can only serve to enhance them both.
As we correlate astrological patterns with various chakras and koshas in Part Two, we will gradually evolve a larger conceptual framework in which the birthchart can be understood as a multi-dimensional template for the tracking of consciousness through the life of an embodied soul. The astro-chakra system will provide a point of access to the esoteric wisdom beyond the exoteric logic of astrology’s symbolism. It will render an astrology more fully capable of articulating the astropoetic impulse at the heart of the human experience and of facilitating answers to the deepest, most intimate questions the soul is capable of posing. It will also provide a solid foundation on which subsequent volumes in The Astropoetic Series – exploring the astro-logic of number, astronomy and mythology – can proceed toward the development of a true language of soul, which must necessarily have a systemic appreciation for the interplay of consciousness and symbolism at its base.
Endnotes
As Bruyere points out, the chakras were known to other cultures as well – including the Egyptians, the Chinese, the Greeks and Native Americans – “although they may have called them by different names” (Wheels of Light 27).
Unfamiliar terms, from various spiritual traditions, specific to the astro-chakra system, or unique to the practice of astropoetics can be found in the glossary, beginning on page 425.
According to Ken Wilber, “Liberation . . . is not the actual untying of these knots, but rather the silent admission that they are already untied. Herein lies the key to the paradox of the chakras: They are ultimately dissolved in the realization that they need not be dissolved” (Kundalini, Evolution and Enlightenment 121).
This fearful attitude toward the lower centers was also adopted by the Theosophists, who often focused it toward the second or sexual center, which they replaced by the spleen. In a footnote to his classic, The Chakras, Leadbeater acknowledges this when he says, “The spleen chakra is not indicated in the Indian books; its place is taken by a centre called the Svadhisthana, situated in the neighbourhood of the generative organs. . . From our point of view the arousing of such a centre would be regarded as a misfortune, as there are serious dangers connected with it” (7).
# Robert Kegan
Robert Kegan (born August 24, 1946) is an American developmental psychologist. He is a licensed psychologist and practicing therapist, lectures to professional and lay audiences, and consults in the area of professional development and organization development.
He was the William and Miriam Meehan Professor in Adult Learning and Professional Development at Harvard Graduate School of Education, where he taught for forty years until his retirement in 2016. He was also Educational Chair for the Institute for Management and Leadership in Education and the Co-director for the Change Leadership Group.
## Education and early career
Born in Minnesota, Kegan attended Dartmouth College, graduating summa cum laude in 1968. He described the civil rights movement and the movement against the Vietnam War as formative experiences during his college years. He took his "collection of interests in learning from a psychological and literary and philosophical point of view" to Harvard University, where he earned his Ph.D. in 1977.
## The Evolving Self
In his book The Evolving Self (1982), Kegan explored human life problems from the perspective of a single process which he called meaning-making, the activity of making sense of experience through discovering and resolving problems. As he wrote, "Thus it is not that a person makes meaning, as much as that activity of being a person is the activity of meaning-making". The purpose of the book is primarily to give professional helpers (such as counselors, psychotherapists, and coaches) a broad, developmental framework for empathizing with their clients' different ways of making sense of their problems.
Kegan described meaning-making as a lifelong activity that begins in early infancy and can evolve in complexity through a series of "evolutionary truces" (or "evolutionary balances") that establish a balance between self and other (in psychological terms), or subject and object (in philosophical terms), or organism and environment (in biological terms). Each evolutionary truce is both an achievement of and a constraint on meaning-making, possessing both strengths and limitations. Each subsequent evolutionary truce is a new, more refined, solution to the lifelong tension between how people are connected, attached, and included (integrated with other people and the world), and how people are distinct, independent, and autonomous (differentiated from other people and the rest of the world).
Kegan adapted Donald Winnicott's idea of the holding environment and proposed that the evolution of meaning-making is a life history of holding environments, or cultures of embeddedness. Kegan described cultures of embeddedness in terms of three processes: confirmation (holding on), contradiction (letting go), and continuity (staying put for reintegration).
For Kegan, "the person is more than an individual"; developmental psychology studies the evolution of cultures of embeddedness, not the study of isolated individuals. He wrote, "One of the most powerful features of this psychology, in fact, is its capacity to liberate psychological theory from the study of the decontextualized individual. Constructive-developmental psychology reconceives the whole question of the relationship between the individual and the social by reminding that the distinction is not absolute, that development is intrinsically about the continual settling and resettling of this very distinction."
Kegan argued that some of the psychological distress that people experience (including some depression and anxiety) are a result of the "natural emergencies" that happen when "the terms of our evolutionary truce must be renegotiated" and a new, more refined, culture of embeddedness must emerge.
The Evolving Self attempted a theoretical integration of three different intellectual traditions in psychology. The first is the humanistic and existential-phenomenological tradition (which includes Martin Buber, Prescott Lecky, Abraham Maslow, Rollo May, Ludwig Binswanger, Andras Angyal, and Carl Rogers). The second is the neo-psychoanalytic tradition (which includes Anna Freud, Erik Erikson, Ronald Fairbairn, Donald Winnicott, Margaret Mahler, Harry Guntrip, John Bowlby, and Heinz Kohut). The third is what Kegan calls the constructive-developmental tradition (which includes James Mark Baldwin, John Dewey, George Herbert Mead, Jean Piaget, Lawrence Kohlberg, William G. Perry, and Jane Loevinger). The book is also strongly influenced by dialectical philosophy and psychology and by Carol Gilligan's psychology of women.
Kegan presented a sequence of six evolutionary balances: incorporative, impulsive, imperial, interpersonal, institutional, and interindividual. The following table is a composite of several tables in The Evolving Self that summarize these balances. The object (O) of each balance is the subject (S) of the preceding balance. Kegan uses the term subject to refer to things that people are "subject to" but not necessarily consciously aware of. He uses the term object to refer to things that people are aware of and can take control of. The process of emergence of each evolutionary balance is described in detail in the text of the book; as Kegan said, his primary interest is the ontogeny of these balances, not just their taxonomy.

| Evolutionary balance | Subject (S) | Object (O) |
| --- | --- | --- |
| 0. Incorporative | Reflexes (sensing, moving) | None |
| 1. Impulsive | Impulses, perceptions | Reflexes (sensing, moving) |
| 2. Imperial | Needs, interests, wishes | Impulses, perceptions |
| 3. Interpersonal | Interpersonal relationships, mutuality | Needs, interests, wishes |
| 4. Institutional | Authorship, identity, ideology | Interpersonal relationships, mutuality |
| 5. Interindividual | Interindividuality, interpenetration of selves | Authorship, identity, ideology |
The final chapter of The Evolving Self, titled "Natural Therapy", is a meditation on the philosophical and ethical fundamentals of the helping professions. Kegan argued, similarly to later theorists of asset-based community development, that professional helpers should base their practice on people's existing strengths and "natural" capabilities. The careful practice of "unnatural" (self-conscious) professional intervention may be important and valuable, said Kegan; nevertheless "rather than being the panacea for modern maladies, it is actually a second-best means of support, and arguably a sign that the natural facilitation of development has somehow and for some reason broken down". Helping professionals need a way of evaluating the quality of people's evolving cultures of embeddedness to provide opportunities for problem-solving and growth, while acknowledging that the evaluators too have their own evolving cultures of embeddedness. Kegan warned that professional helpers should not delude themselves into thinking that their conceptions of health and development are unbiased by their particular circumstances or partialities. He acknowledged the importance of Thomas Szasz's "suggestion that mental illness is a kind of myth", and he said that we need a way to address what Szasz calls "problems in living" while protecting clients as much as possible from the helping professional's partialities and limitations.
The Evolving Self has been cited favorably by Mihaly Csikszentmihalyi, Ronald A. Heifetz, Ruthellen Josselson, and George Vaillant. Despite the book's wealth of human stories, some readers have found it difficult to read due to the density of Kegan's writing and its conceptual complexity.
## In Over Our Heads
Kegan's book In Over Our Heads (1994) extends his perspective on psychological development formulated in The Evolving Self. What he earlier called "evolutionary truces" of increasing subject–object complexity are now called "orders of consciousness". The book explores what happens, and how people feel, when new orders of consciousness emerge, or fail to emerge, in various domains. These domains include parenting (families), partnering (couples), working (companies), healing (psychotherapies), and learning (schools). He connects the idea of orders of consciousness with the idea of a hidden curriculum of everyday life.
Kegan repeatedly points to the suffering that can result when people are presented with challenging tasks and expectations without the necessary support to master them. In addition, he now distinguishes between orders of consciousness (cognitive complexity) and styles (stylistic diversity). Theories of style describe "preferences about the way we know, rather than competencies or capacities in our knowing, as is the case with subject–object principles". The book continues the same combination of detailed storytelling and theoretical analysis found in his earlier book, but presents a "more complex bi-theoretical approach" rather than the single subject–object theory he presented in The Evolving Self.
In the last chapter, "On Being Good Company for the Wrong Journey", Kegan warns that it is easy to misconceive the nature of the mental transformations that a person needs or seeks to make. Whatever the virtues of higher orders of consciousness, no one should expect us to master them when we are not ready or when we are without the necessary support; and we are unlikely to be helped by someone who assumes that we are engaged at a certain order of consciousness when we are not. He ends with an epilogue on the value of passionate engagement and the creative unpredictability of human lives.
In Over Our Heads has been cited favorably by Morton Deutsch, John Heron, David A. Kolb, and Jack Mezirow.
## How the Way We Talk Can Change the Way We Work
Kegan's next book, How the Way We Talk Can Change the Way We Work (2001), co-authored with Lisa Laskow Lahey, jettisons the theoretical framework of his earlier books The Evolving Self and In Over Our Heads and instead presents a practical method, called the immunity map, intended to help readers overcome an immunity to change. An immunity to change is the "processes of dynamic equilibrium, which, like an immune system, powerfully and mysteriously tend to keep things pretty much as they are".
The immunity map continues the general dialectical pattern of Kegan's earlier thinking but without any explicit use of the concept of "evolutionary truces" or "orders of consciousness". The map primarily consists of a four-column worksheet that is gradually filled in by individuals or groups of people during a structured process of self-reflective inquiry. This involves asking questions such as: What are the changes that we think we need to make? What are we doing or not doing to prevent ourselves (immunize ourselves) from making those changes? What anxieties and big assumptions does our doing or not doing imply? How can we test those big assumptions so as to disturb our immunity to change and make possible new learning and change?
Kegan and Lahey progressively introduce each of the four columns of the immunity map in four chapters that show how to transform people's way of talking to themselves and others. In each case, the transformation in people's way of talking is a shift from a habitual and unreflective pattern to a more deliberate and self-reflective pattern. The four transformations, each of which corresponds to a column of the immunity map, are:
1. "From the language of complaint to the language of commitment"
2. "From the language of blame to the language of personal responsibility"
3. "From the language of New Year's resolutions to the language of competing commitments"
4. "From the language of big assumptions that hold us to the language of assumptions we hold"
In three subsequent chapters, Kegan and Lahey present three transformations that groups of people can make in their social behavior, again from a lesser to greater self-reflective pattern:
1. "From the language of prizes and praising to the language of ongoing regard"
2. "From the language of rules and policies to the language of public agreement"
3. "From the language of constructive criticism to the language of deconstructive criticism"
## Immunity to Change
Immunity to Change (2009), the next book by Kegan and Lahey, revisits the immunity map of their previous book. The authors describe three dimensions of immunity to change: the change-preventing system (thwarting challenging aspirations), the feeling system (managing anxiety), and the knowing system (organizing reality). They further illustrate their method with a number of actual case studies from their experiences as consultants, and they connect the method to a dialectic of three mindsets, called socialized mind, self-authoring mind, and self-transforming mind. (These correspond to three of the "evolutionary truces" or "orders of consciousness" in Kegan's earlier books.) Kegan and Lahey also borrow and incorporate some frameworks and methods from other thinkers, including Ronald A. Heifetz's distinction between technical and adaptive learning, Chris Argyris's ladder of inference, and a reworded version of the four stages of competence. They also provide more detailed guidance on how to test big assumptions.
The revised immunity map worksheet in Immunity to Change has the following structure:

0. Generating ideas
1. Commitment (improvement) goals
2. Doing / not doing
3. Hidden competing commitment (and worry box)
4. Big assumption
5. First S-M-A-R-T test: Safe, Modest, Actionable, Research stance (not a self-improvement stance), Testable
The immunity to change framework has been cited favorably by Chris Argyris, Kenneth J. Gergen, Manfred F.R. Kets de Vries, and Tony Schwartz.
## An Everyone Culture
The book An Everyone Culture: Becoming a Deliberately Developmental Organization (2016) was co-authored by Robert Kegan, Lisa Laskow Lahey, Matthew L. Miller, Andy Fleming, and Deborah Helsing. The authors connect the concept of the deliberately developmental organization (DDO) with adult development theory and argue that creating conditions for employees to successfully navigate through the transitions from socialized mind to self-authoring mind to self-transforming mind (described in Kegan's earlier works) "has a business value", at least in part because they expect demand for employees with more complex mindsets "will intensify in the years ahead". Three different and successful DDOs are introduced and analyzed throughout the book. These DDOs are Next Jump, Bridgewater Associates, and The Decurion Corporation. Kegan and his co-authors explore the business practices that promote a culture in which individual growth and personal satisfaction can flourish.
The book elaborates on three concepts that the authors believe to be critical to the success of a DDO. These three concepts are what they refer to as "edge", "groove", and "home". The "edge" of a DDO is the drive of the organization to uncover weaknesses and to develop. The "groove" is the practices or "flow" of the company from day-to-day that foster development. "Home" is the supportive community within a DDO that allows people to be vulnerable and trust each other. The authors emphasize that underlying each of these parts of a DDO is the idea that adults are truly capable of continuous improvement and development. The authors also explain that for DDOs, the goals of adult development and business success are not mutually exclusive, but both ultimately become one objective.
## Criticism
Adult education professor Ann K. Brooks criticized Kegan's book In Over Our Heads. She claimed that Kegan fell victim to a cultural "myopia" that "perfectly reflects the rationalist values of modern academia". Brooks also said that Kegan excluded "the possibility of a developmental trajectory aimed at increased connection with others". Ruthellen Josselson, in contrast, said that Kegan "has made the most heroic efforts" to balance individuality and connection with others in his work.
In an interview with Otto Scharmer in 2000, Kegan expressed self-criticism toward his earlier writings; Kegan told Scharmer: "I can go back and look at things I've written and think, ugh, this is a pretty raw and distorted way of stating what I think I understand much better now."
In the 2009 book Psychotherapy as a Developmental Process by psychologists Michael Basseches and Michael Mascolo—a book which Kegan called "the closest thing we have to a 'unified field theory' for psychotherapy"—Basseches and Mascolo said that they "embrace both Piagetian models of psychological change and their organization into justifications of what constitutes epistemic progress (the development of more adequate knowledge)". However, Basseches and Mascolo rejected theories of global developmental stages, such as those in Kegan's earlier writings, in favor of a more finely differentiated conception of development that focuses on "the emergence of specific skills, experiences, and behavioral dispositions over the course of psychotherapy as a developmental process".
## Key publications
- Kegan, Robert; Lahey, Lisa Laskow; Miller, Matthew L.; Fleming, Andy; Helsing, Deborah (2016). An everyone culture: becoming a deliberately developmental organization. Boston: Harvard Business Review Press. ISBN 9781625278623. OCLC 907194200.
- Kegan, Robert; Lahey, Lisa Laskow (2009). Immunity to change: how to overcome it and unlock potential in yourself and your organization. Boston: Harvard Business Press. ISBN 978-0787963781. OCLC 44972130.
- Wagner, Tony; Kegan, Robert (2006). Change leadership: a practical guide to transforming our schools. San Francisco: Jossey-Bass. ISBN 978-0787977559. OCLC 61748276.
- Kegan, Robert; Lahey, Lisa Laskow (2001). How the way we talk can change the way we work: seven languages for transformation. San Francisco: Jossey-Bass. ISBN 978-0787963781. OCLC 44972130.
- Kegan, Robert (1994). In over our heads: the mental demands of modern life. Cambridge, MA: Harvard University Press. doi:10.2307/j.ctv1pncpfb. ISBN 978-0674445888. JSTOR j.ctv1pncpfb. OCLC 29565488.
- Kegan, Robert (1982). The evolving self: problem and process in human development. Cambridge, MA: Harvard University Press. ISBN 978-0674272316. JSTOR j.ctvjz81q8. OCLC 7672087.
- Kegan, Robert (1976). The sweeter welcome: voices for a vision of affirmation—Bellow, Malamud, and Martin Buber. Needham Heights, MA: Humanitas Press. ISBN 978-0911628258. OCLC 2952603.
Technical Support Agent
The Role:
Due to Company expansion, Lightnet is currently seeking to recruit a Technical Support Agent to join the existing team. This is a technical support role covering customer technical queries, remote troubleshooting and resolution of issues, and network monitoring. It will suit a friendly, outgoing person with excellent communication and organisational skills.
This is a permanent role with the potential for long-term career development.
Location:
The position will be based in our Galway office, with a blend of office and remote working on offer.
Responsibilities:
⦁ Deal with customers’ technical queries by phone and email, ensuring a prompt and efficient response at all times
⦁ Troubleshoot connections and identify any faults to determine correct course of action
⦁ Track and take ownership of all outstanding queries to ensure follow-through in all cases
⦁ Work closely with onsite technicians to activate connections
⦁ Assist Field Engineers with remote network maintenance
⦁ Deliver excellent customer service and ensure proper recording, documentation and closure of trouble tickets
⦁ Provide guidance to customers on upgrade solutions
⦁ Work proficiently with minimal daily guidance
⦁ Network monitoring to identify faults and first-level resolution of issues
⦁ Contribute to process improvement and development projects
⦁ Any additional tasks as the role requires
Requirements:
⦁ Minimum 1-2 years’ experience in a similar role
⦁ Competent computer skills and technical knowledge
⦁ IT/Networking qualifications such as A+ or MCP would be an advantage
⦁ Good phone manner and customer service skills
⦁ Ability to handle multiple tasks concurrently and prioritise appropriately
⦁ Willing to work shift hours and weekends
How to apply: | https://www.lightnet.ie/open-position-technical-support-agent/ |
As an Automation Controls Lead at EaglePicher, you will be responsible for supporting automation projects and troubleshooting/maintaining automated equipment such as robotic-based automation systems, scanners, cameras, printers, scales, HMI, PLC systems, and control cabinets. You will work closely with operations to schedule project and maintenance work. Additionally, you will mentor and develop/train automation controls technicians.
Responsibilities
- Schedule project work and equipment down time.
- Assign tasks to technicians and ensure completion in a safe, timely, and efficient manner.
- Coordinate contractors for major projects.
- Troubleshoot and repair all equipment; perform equipment failure analysis and preventative maintenance
- Maintain control systems, including PLC controllers and industrial networks (such as Ethernet, DeviceNet and Remote I/O), motor control systems, servo drives, frequency drives, and electrical distribution systems.
- Perform PLC control level diagnosis and program edits using ladder logic software.
- Install, maintain and troubleshoot relay logic, ladder diagrams, control components - photo eyes, motor starters, relays, limit switches, proximity sensors, timers, solenoids, servo drives, frequency inverters, machine vision, and encoders.
- Maintain, troubleshoot, and edit Fanuc robotics automation system using teach pendant.
- Work with the automation team to develop concept designs and offer suggestions during the quoting, design, and build processes.
- Work with operations on root cause analysis and problem resolution
- Communicate with vendors on specification and function of commercial parts for a project via email or phone, resolving issues whenever necessary
- Submit parts and material orders for the assigned project(s)
- Support Automation team project implementation.
Qualifications
Hiring Requirements:
- 5 or more years of maintenance/controls experience in production environment with automated processes.
- 2 or more years in a lead role.
- High school diploma or GED
- Must work well with others and be a team leader
- Excellent verbal and written communication skills
- Ability to read and modify robot, PLC, HMI programs
- Understanding of automation tooling
- Project management experience
- Mechanical experience with ability to apply theory to actual practice.
- Experience with industrial pneumatics and hydraulics
- Ability to read electrical schematics and engineering part drawings
- Knowledge of Allen Bradley control systems.
- Hands-on experience with robotics and vision systems
- Control system start-up experience
- Hands on knowledge troubleshooting hardwire electrical controls.
- Experience with industrial network communication protocols such as Ethernet, DeviceNet, and serial.
Hiring Preferences:
- Applicable technical Associates degree or VO-tech education
- Experience with Fanuc Robots
- Experience with Cognex vision systems
ABOUT EAGLEPICHER
EaglePicher Technologies, LLC is a leading producer of batteries and energetic devices for the defense, aerospace, medical, commercial, oil, and gas industries. The company provides the most experience and broadest capability in battery electrochemistry of any battery supplier in the United States. Battery technologies include lithium ion, thermal, silver zinc, lithium carbon monofluoride, lithium thionyl chloride, lithium manganese dioxide, lithium sulfur dioxide, and reserve lithium oxyhalide. EaglePicher also provides custom battery assemblies, battery management systems, pyrotechnic devices, and other power solutions. EaglePicher Technologies is headquartered in Joplin, MO. and is ISO9001:2008, ISO 13485, and AS9100C certified. For more information, visit www.eaglepicher.com.
PERKS OF BEING AN EAGLEPICHER EMPLOYEE
Some of the great things about being an EaglePicher employee include:
· Medical, dental, vision, life, and disability insurance;
· 10 paid holidays and PTO;
· Matching 401K;
· Tuition reimbursement;
· Dependent scholarship programs.
EaglePicher Technologies LLC is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, national origin, religion, sexual orientation, gender identity, status as a veteran, and basis of disability or any other federal, state or local protected class. | https://jobs.eaglepicher.com/controls-automation-lead/job/11490455 |
The GDSC MPSTME AI Summit is an exciting and highly-anticipated event for anyone interested in the field of artificial intelligence. This one-day summit will bring together some of the most renowned experts and thought leaders in the industry. Speakers will share their insights and knowledge through a series of engaging and informative talking sessions.
During the event, attendees will have the opportunity to learn about the latest advancements and trends in AI, as well as hear about practical applications of the technology in various industries. Whether you're a researcher, a developer, a business leader, or simply someone who is passionate about AI, you'll find valuable information and inspiration at the summit.
In addition to the informative talking sessions, attendees will also have the opportunity to network and connect with other like-minded individuals in the industry. This is a great opportunity to expand your professional network, learn from others in the field, and make valuable connections that could benefit you in your career.
Don't miss out on this fantastic opportunity to learn from some of the most respected experts in the field of artificial intelligence, and to connect with other professionals who share your passion for this exciting technology. Register now for the GDSC MPSTME AI Summit and be a part of the discussion that's shaping the future.
Where is it happening? Mukesh Patel Auditorium, NMIMS Main Building, Mumbai, India
I don’t read very many fiction books, but I am a huge fan of Sir Arthur Conan Doyle’s character Sherlock Holmes! I just finished listening to a wonderful collection of short adventures from the book The Return Of Sherlock Holmes. The audiobook I listened to was expertly narrated by Stephen Thorne.
Without being as wordy as some authors, Doyle paints such descriptive pictures of Dr. Watson, Holmes, his clients, his villains, and the crime scenes. I can “see” exactly how the characters look, “hear” how they talk, and feel the emotions they are feeling. The crime scenes are also painted in such vivid detail by Doyle that I can catch all of the same details that Sherlock Holmes is taking in.
I cannot stand how some detective story authors “uncover” some hidden details at the very end that magically helps their protagonist solve the crime. The “magic” of Sherlock Holmes’ solutions is that Doyle allowed you to see everything Holmes saw. The real art is in the way Holmes uses his gift of deductive reasoning to solve the clues.
These mysteries are not always crimes. Often times they are simply perplexing problems. I’ve never been called upon to solve a crime before, but I certainly am called upon to find solutions to thorny problems. In that regard, I owe a debt to Sir Arthur Conan Doyle for helping me learn from Sherlock Holmes how to deduce the most logical solution to my mysterious situations.
These are also great stories to read aloud, especially to your kids. | https://craigtowens.com/2020/09/15/the-return-of-sherlock-holmes-book-review-2/ |
"Emergency neurological life support: airway, ventilation, and sedation" by David B Seder, Andy Jagoda et al.
Emergency neurological life support: airway, ventilation, and sedation.
Airway management and ventilation are central to the resuscitation of the neurologically ill. These patients often have evolving processes that threaten the airway and adequate ventilation. Furthermore, intubation, ventilation, and sedative choices directly affect brain perfusion. Therefore, airway, ventilation, and sedation was chosen as an emergency neurological life support protocol. Topics include airway management, when and how to intubate with special attention to hemodynamics and preservation of cerebral blood flow, mechanical ventilation settings, and the use of sedative agents based on the patient's neurological status.
Seder, David B; Jagoda, Andy; and Riggs, Becky, "Emergency neurological life support: airway, ventilation, and sedation." (2015). Maine Medical Center. 380. | https://knowledgeconnection.mainehealth.org/mmc/380/ |
The Team USA Council on Racial and Social Justice has released its final recommendation on racism and discrimination, the United States Olympic & Paralympic Committee (USOPC) announced Thursday.
The recommendations, which are the organization’s fourth and final set, are aimed at promoting racial and social justice across the Olympic and Paralympic community while preventing acts of discrimination.
The fourth recommendation outlines how the USOPC and National Governing Bodies “can serve as collaborative leaders in eradicating systematic injustices, including structural racism, that leads to discrimination.”
“The fourth recommendation released today culminates the work of the Council that began more than a year ago – work that aims to better the Olympic and Paralympic movements,” said Moushaumi Robinson, 2004 Olympic gold medalist in track and field, and chair of the Council.
“We’ve seen actionable change in the past year, but this is just the beginning. We will continue to see change as the organizations that make up the movements look to create a better, more inclusive community for all.”
Fourth Set of Recommendations
The full detail of the Council’s fourth recommendation can be found here.
Acknowledge the Organization’s Role in Perpetuating Racial Inequities
The first step in creating antiracist environments devoid of discrimination is to acknowledge that the organizations that make up the Olympic and Paralympic community are expressive of structural racism and other social ills.
Counteract the Dehumanization and Exploitation of BIPOC and All Minoritized Members of the Olympic and Paralympic Community
Over the past year, Black Olympic and Paralympic athletes have shared with the Council powerful and brave stories about navigating structural racism and white supremacy within the USOPC and NGBs. At the heart of these stories was the perception that athletes are often reduced to the athletic value they bring to the organization rather than be seen as humans with various intersecting identities. One expression of structural racism is the dehumanization of members from minoritized groups, and an antiracist framework for change prompts us to counteract such racist tendencies.
Increase Protections for Minoritized Populations
Athletes’ safety and well-being (physical, psychological, and emotional) are of utmost importance when it comes to changing systems of oppression. Oftentimes, athletes from racially minoritized groups and other minoritized groups do not feel safe to call out discrimination. Additionally, over the past months physical threats have been disproportionately directed at minoritized Team USA athletes. Indeed, threats to the physical safety of athletes are often directed at members from minoritized groups. Therefore, a commitment to antiracism includes increased protections for members of minoritized groups.
Create Cultures of Accountability & Transparency
A lack of proper communication channels, accountability, and transparency often help reinforce racial inequities. A commitment to antiracism embraces a culture of accountability and transparency.
Provide Structural Support for Antiracist Efforts
To be effective in driving antiracist efforts and counteract discrimination across the Olympic and Paralympic movements, the USOPC and NGBs must provide structural support for initiatives targeting racial and social inequities.
Formed in September 2020, the Council was created to “create pathways for dialogue and to advocate for action and work that will implement impactful and meaningful change.”
About James Sutherland
James swam five years at Laurentian University in Sudbury, Ontario, specializing in the 200 free, back and IM. He finished up his collegiate swimming career in 2018, graduating with a bachelor's degree in economics. In 2019 he completed his graduate degree in sports journalism.
Prior to going to Laurentian, James swam … | |
The United States Olympic and Paralympic Committee (USOPC) reiterated its opposition to a boycott of the 2022 Beijing Winter Olympics on Wednesday, saying athletes should not be used as “political pawns.”
In remarks to reporters ahead of a US Olympic team media event, USOPC president Susanne Lyons repeated the organisation’s stance that boycotts were ineffective.
“We at the USOPC oppose athlete boycotts because they’ve been shown to negatively impact athletes while not effectively addressing global issues,” Lyons said.
“For our athletes, their only dream is to represent the USA and what we stand for on the international field of play.
“We do not believe that Team USA’s young athletes should be used as political pawns.”
The Beijing Winter Games are scheduled to begin on 4 February next year, just six months after the summer Tokyo Olympics.
China is facing global scrutiny over a range of issues, notably the mass internment of Uighur Muslims in the western region of Xinjiang, which the US has said amounts to genocide.
It is also under pressure for its rights clampdown in the former British colony of Hong Kong.
On Tuesday, US State Department spokesman Ned Price was asked if the United States would consider a joint boycott with allies and said it was “something that we certainly wish to discuss.”
But he later stressed that the United States does not “have any announcement regarding the Beijing Olympics,” writing on Twitter that “we will continue to consult closely with allies and partners to define our common concerns and establish our shared approach.”
China, which has rejected criticism of its human rights record, on Wednesday hit back at suggestions of a boycott, accusing the United States of “politicising sports”.
USOPC president Lyons acknowledged the human rights issues in China, but said the issue was best dealt with on a government level.
“We certainly do not want to minimise the serious human rights issues that are happening in China,” Lyons said. | https://www.blueprint.ng/us-olympic-chiefs-oppose-beijing-winter-olympics-boycott/ |
'There cannot be 'social problems' that are not the product of social construction – naming, labelling, defining and mapping them into place – through which we can 'make sense' of them' (Clarke, 2001). It will be argued that to understand 'crime', it must first be understood that it is a historical and social construction. This is equally true when looking at 'youth'. The concepts of 'crime' and 'youth' are neither fixed in time or place and therefore definitions of either are as such are both contested and contestable.
It will be argued that, due to the problematic nature of these individual definitions, 'youth crime' is also a social construct, and as such problematic. The criminalisation of youth, through the imposition of age restrictions and responsibilities, will be specifically focused on. This essay does not intend to comment on the rights or wrongs of the resulting constructions, nor does it intend to comment on youth crime causation.
Crime is not a unitary concept (Henry, 2001), and as such it cannot be seen outside its broader demographic, economic, religious or political contexts (Briggs, Harrison, McInnes & Vincent, 1996:18), as it is constructed from within these contexts. That is to say that, as a social construction, what is or is not 'criminal' changes over time and across societies (Lilly, Cullen, and Ball, 2002). The Oxford English Dictionary states that crime is 'an act or omission constituting an offence (usu. a grave one) against an individual or the State and punishable by law' (Simpson and Weiner, 1993).
However, the law is itself socially constructed: it changes across time and place. For example, in 2003 it is illegal to buy alcohol at 15 in France, 17 in the UK and 20 in the USA (1). Also, in the USA the drink purchase age of 21 was only made a national law in 1984; prior to this it varied from state to state and from 18 to 21 (2). Hester and Eglin (1992) argue that the law is constructed within society, and as such crime is a social construction. Becker (1963) furthers this point from an interactionist stance and argues that deviance, crime, is created by social groups making rules, infraction of which is labelled a crime or criminal.
As such it is not the quality of an act but what is conferred on the act by society that is at issue for Erikson (1962), Becker (1963) and Kitsuse (1962). This interactionist stance has subsequently been reinterpreted with a Marxist slant. It has been argued, for example, that when we examine the social construction of categories of criminal law it becomes clear why certain social groups, such as youths, are over-represented in criminal statistics (Box, 1983; Chambliss and Seidman, 1971).
The argument is that the law, rather than being a fair reflection of behaviours that cause us collectively the most suffering, is in fact an ideological construct (Box 1983). As such the criminal law is an artfully created construct designed to criminalize only some behaviours, usually those more commonly committed by the powerless, and to exclude other behaviours that are more usually committed by the powerful against others (Box 1983, 7). In other words, no matter how immoral or harmful to the individual or society as a whole the behaviour may appear to be, it cannot be regarded as crime unless it comes under current legislation.
As such, those in power criminalize the 'other', a prime example of which is the criminalization and problematising of youth (4). This Marxist stance has been argued by revisionist historians as consistent through out history, both modern and pre modern (Muncie, 1999). Muncie (1999) argues that 'the notion of childhood and youth are not universal biological states, but social constructions in particular historical contexts'. These ideas of social construction can be applied to the concept of 'youth'. Like crime, youth is not a unitary category (Cohen, 1986 cited in Muncie, 1999).
Newburn (2002) states that ''youth' is an elastic concept. It means different things, at different times, and in different places.' The concept of childhood, as we understand it today, did not emerge until the late nineteenth century (Valentine, Skelton and Chambers, 1999); similarly, 'adolescence' is a historically invented category (3). The emergence of the concept of adolescence, which also began in the nineteenth century, could be attributed to the ability of the middle classes to school their children for longer than previously, following the rise of industrial capitalism (3).
Another term often considered synonymous with that of 'youth' is the term 'teenager', a term that did not emerge until the 1950's and 60's, when the young started to be seen as relatively affluent and therefore prime targets for commercial retailers, as consumers (3). As such it can be argued that patterns of crime are an inevitable consequence of the extension of youth as a phase in the life cycle, a process that can be seen to have been occurring since the late eighteenth century (Furlong & Cartmel, 1997).
It can be seen that terms considered synonymous with youth are historically constructed; however, the boundaries of child, youth and adulthood can be seen not only to be arbitrary but also equally constructed. According to Sibley (1995: 34) 'The limits of the category 'child' vary between cultures and have changed considerably through history within Western, capitalist societies. The boundary separating child and adult is a decidedly fuzzy one. Adolescence is an ambiguous zone within which the child/adult boundary can be variously located according to who is doing the categorizing.
To add to the problem, young people negotiate the social meanings of different age boundaries themselves. Youth is a fluid stage of life where someone is continuously becoming someone else, as opposed to adulthood, where it is presumed one's identity has coalesced into a state of permanence (Sefton-Green, 1999:3). As seen, the concepts of childhood, adolescence and teenage are historical constructions with their boundaries as social constructions, which will be discussed further in the points made on the criminalisation of youth.
It is important to point out that if the category of youth does not exist, neither can the category of youth crime exist, and as such young people were treated as adults in the eyes of the law until the 1900's. Juvenile courts were not in formal existence until 1906 (Newburn, 2002). Despite historical amnesia, since this time youth crime has been seen as an area of great concern, be it focusing on the 'dangerous classes' of the nineteenth century (Newburn, 2002) or mods and rockers in the early 1960's, football hooligans of the 1970's or yob culture in 1994 (Muncie and Mclaughlin, 1996).
It was even considered a problem prior to times when crimes of the young were separated from those of adults (Shore, 2000). As such it can be argued that concern over the behaviours of the young is not a recent phenomenon (Furlong and Cartmel, 1997). Pearson (1983, 1989, 1994 cited in Newburn, 2002) indicates that much academic writing on 'youth crime' is ahistorical in character. Historical amnesia is an important concept if the construction of youth crime is to be understood. Pearson (1994) points out that everything was always better thirty years ago; however, they were saying the same thing thirty years ago.
The specifics may vary but the underlying principle does not. For example, worries about hooliganism are most commonly associated with the 1970's onwards, yet the term was actually coined in 1898 (Muncie, 1999). 'It is a truism that juvenile crime and the petty and not so petty delinquencies of youth have been a central concern in society from time immemorial' (Shore 2000). Shore argues that concern over modern youth as out of control is not just a modern phenomenon, or even one that emerged in the late eighteenth and nineteenth centuries; it can even be seen to stem back as far as 1585 and William Fleetwood's report on 'judicial nyppers' (2000). Equally, Pearson has highlighted complaints about the behaviour of the young going back to the seventeenth century (1983, 1989, 1994 cited in Newburn, 2002).
Equally Pearson has highlighted complaints about the behaviour of the young going back to the seventeenth century (1983, 1989, 1994 cited in Newburn, 2002) However traditional historians argue that juvenile (youth) crime was 'invented' in the nineteenth century (Muncie, 1999), with a pivotal point being 1816, shortly after the Napoleonic wars when a report of the committee for investigating the alarming increases of juvenile crime in the metropolis (Shore, 2000).
Rawlings (1999:25) explains this as being because peace meant the return of young men with no useful experience for regular peacetime employment, along with the end of a useful way of ridding England of troublesome youths. It is useful to note that the boundaries of youth are frequently defined by exclusion, i.e. by defining what cannot be done at certain ages (3), such as in legislation controlling activities, for example paid employment, sexual practice, drinking, voting or fighting in a war (3).
What cannot be done at certain ages changes over time. For example, in 1997 a 15-year-old could not buy fireworks, but a 16-year-old could. In 2003 a person was required to be eighteen before they could purchase fireworks (Muncie, 1999). What cannot be done at certain ages also changes across nations at the same time. The age of criminal responsibility in England and Wales is ten, yet the age for criminal responsibility in Scotland is eight (Muncie, 1999). Young adults or old children?
Depending on the circumstances, this can lead to oscillating constructions of youth as savage and innocent; pure and tainted; ignorant and intuitive, among other binary oppositions (Sefton-Green, 1999: 2). Hendrick (1990b cited in Muncie, 1999) illustrates several competing constructions of childhood, for example the romantic child (innocence) versus the evangelical child (in need of discipline and control). These competing constructions highlight the reinvention and redefinition of youth and consequently youth offending, with the conception of the delinquent child (Muncie, 1999).
In conclusion, it can be seen that 'youth', 'crime' and 'youth crime' are historically, socially, and ideologically constructed. Although the problem of 'youth' is often seen as a modern phenomenon, it has been demonstrated that this is a fallacy that is covered up by a process of historical amnesia. The reason behind the argument in this essay, that youth crime is socially constructed, is that unlike in the seventeenth century, when there was one law for everyone, with some gender differences, we now live in a society where specific activities are criminalised for certain age groups.
However, with this deconstructionist idea often associated with postmodernism, many issues can be raised, yet it does not offer any way forward in solving the so-called crime problem, except possibly that of radical non-intervention, which is not a politically viable option. As Carlen (2000) states, '"would it pass The Sun test?" continues to be the routine query of pusillanimous politicians and officials, reluctant to implement less punitive and more effective crime reduction policies.' (Emphasis in original)
Bibliography
Becker, H. (1963) Outsiders: Studies in the Sociology of Deviance. New York, Free Press.
Box, S. (1983) Power, Crime and Mystification. London, Routledge.
Briggs, J., Harrison, C., McInnes, A. & Vincent, D. (1996) Crime and Punishment in England: An Introductory History. London, UCL Press.
Carlen, P. (2000) Youth justice? In Green, P. and Rutherford, A. (eds) Criminal Policy in Transition. Oregon, Hart Publishing.
The work, most prominently, of anthropologist Franz Boas and sociologist W. E. B. Du Bois in the early twentieth century went a long way to establishing the predominantly correct view that race has no basis in actual physical differences between groups of human beings. The anthropologist of race Ashley Montagu, concerned about the Nazis’ eugenicist practices, agreed (Montagu 1962). Following the end of Nazi rule, the idea that race is socially constructed became widely – if not universally – accepted in scientific and political circles. The most well-known exponent of the social constructionist position on race from within genetics is Richard Lewontin, who first argued in 1972 that there is more genetic difference between individuals than there is among population groups, and that ‘there is no objective way to assign the various human populations to clear-cut races’ (Lewontin 2006). This appeared to be borne out in broad terms by the publication of the human genome project in 2003 (El-Haj 2007).
Nevertheless, there has never been a time in which race was not in use both colloquially and by scientists. Amade M’charek reminds us that even the 1950 UNESCO ‘Declaration on Race and Racial Prejudice’ wished to conserve a separation between the ‘fact’ of biological race as it may pertain in the laboratory and the mythical nature of race as it is applied in common parlance (M’charek 2013: 431). Race is under constant, silent production, with research continuously emerging that appears to open caveats in the dominant position that there is no way to equate race with human genetic diversity (Hartigan 2008). However, the general public’s lack of scientific literacy, the political investment in the idea of natural racial differences that can be ‘read’ in our DNA, which, as I have shown, is resurgent today, as well as the popular fascination with genetics as a mode of explanation for a range of human phenomena, often leading ‘to a reductive stance that biology is destiny’ (Yehuda et al. 2018: 5), all conspire to make it incumbent upon us to be better at explaining what race does.
The explosion in popularity of DNA testing services such as 23 and Me, a company that claims to ‘democratize personal genetics’, is evidence of the epistemic primacy of genetics in the twenty-first century. An online search for ‘DNA’ will reveal a panoply of articles about whether genetics can tell your politics or whether or not you are likely to be more promiscuous or monogamous. DNA ancestry testing is the object of particular popular fascination. Harvard Professor Henry Louis Gates has spurred a digital genealogy industry through his role as producer of highly successful television series such as African American Lives. There are a multitude of social media forums and DIY reality television-style YouTube posts in which people reveal the results of their ancestry tests. DNA testing is even proposed to have an antiracist impact, as seen in attempts to use test results to confront avowed white supremacists on the fallacy of racial purity. In one notorious case, a white supremacist activist called Craig Cobb, so convinced of his ‘racial purity’, took up the challenge to take a DNA test, which was revealed on American daytime talk programme The Trisha Goddard Show. The test revealed that 14% of Cobb’s DNA came from sub-Saharan Africa, a result that he rejected as a multiculturalist plot (WYSO 2018). Indeed, research into white supremacist reactions to DNA test results revealed a tendency to ‘bargain’ over what percentage of white ancestry makes a person white, or to condemn ancestry testing as a whole as a ‘Jewish conspiracy’ if the desired results were not received (Panofsky and Donovan 2017).
DNA is the object of intense politicization, as was seen in the revelation by US Democratic senator and presidential hopeful Elizabeth Warren of the results of her DNA ancestry test in late 2018. The publication of the results was Warren’s attempt to quell Republican criticisms of her claim to have Cherokee and Delaware heritage and Donald Trump’s derogatory references to her as ‘Pocahontas’. The test revealed that she had ‘a small but detectable amount of Native American DNA’ and ‘concluded there is “strong evidence” she had a Native American ancestor approximately six to 10 generations ago’ (McDonald 2018). However, the reliance on DNA to prove Indigenous identity directly contravenes tribal protocols for assessing membership, which do not see genetic testing as valid. As Kim TallBear remarks, ‘It is one of the privileges of whiteness to define and control everyone else’s identity’ (Johnson 2018). TallBear contends that, rather than sitting down with tribal leaders, which the senator had repeatedly refused to do until meeting with Cherokee representatives in August 2019 during the Democratic Party primaries campaign, Warren ‘privileges DNA company definitions in this debate, which are ultimately settler-colonial definitions of who is indigenous’.
Assessing indigeneity according to a scale of racial purity has dangerous implications given the use of ‘blood’ and ‘genes’ to exclude rather than include. For example, Australia’s far-right One Nation Party announced proposals to submit Aboriginal people to DNA testing and introduce a ‘qualifying benchmark of twenty-five percent Indigenous DNA ancestry’ in order to quell what it called the ‘widespread “rorting” [cheating] of the welfare system’. However, there is no test for genetic Aboriginality and no Australian Aboriginal genome (Fryer 2019b). Race under settler colonialism was a project of what the late Australian historian of race and colonialism Patrick Wolfe refers to as replacement and elimination, with the ultimate aim of wresting land away from its original inhabitants for the purposes of European wealth creation. In order to achieve this, European invaders had to construct Indigenous peoples as ‘maximally soluble, encouraging their disappearance into the settler mainstream’ (Wolfe 2016: 39). The measurement of blood quantum was used colonially in the process of Indigenous elimination. ‘Blood’, as Wolfe notes, ‘is like money, which also invokes liquidity to disguise the social relations that sustain it’ (2016: 39). Hence to possess Aboriginal lands, white colonizers set about diluting blood, dissolving Indigenous people, and scattering those left around the landscape. The separation of Aboriginal peoples from their homelands and the forced mixing of different tribal groups on missions, under a policy euphemistically titled ‘protection’, was integral to the cultural genocide endured by Aboriginal peoples. This historical fact makes the appeal to racial measurement dressed up as genomic science particularly egregious to many Indigenous people within a context of ongoing colonization.
Sadly, Indigenous people’s views do not stop the rise of genetic absolutism in the public sphere, with ‘savvy political commentators … taking new findings by geneticists and directly assailing social constructionist perspectives’ (Hartigan 2008: 164). The problem for antiracists confronted with the resurgence of racial science among ‘race realists’ and their ‘alt-right’ mouthpieces is that the maxim that race is a social construct is often the only riposte we have recourse to. Yet, far from ending the discussion of whether biological race is real, according to anthropologist Jason Antrosio, the idea that race is a social construction is actually a ‘conservative goldmine’ because it was never ‘connected to concrete political change’ (Antrosio 2012). It is thus especially important not to leave the questioning of the social construction of race to those such as Quillette’s Claire Lehmann, who tweeted that ‘we abhor racism yet do not believe that race is merely a social construct (another pernicious blank slate dogma that has repercussions in the real world)’.
Antiracists are very good at denying the biological facticity of race, but not very good at explaining what is social about race. Echoing Patrick Wolfe’s point in this chapter’s epigraph, Antrosio suggests that the social construction of race ‘should have never been a stopping point, but a way to analyse the particular circumstances that result in current configurations’. Focusing our arguments on whether race is or is not about biology is meaningless outside of academia because ‘underlying socioeconomic structural racism is unaltered’ (Antrosio 2012). Failures to properly explicate the social construction of race in the public domain have led to statements such as that Eduardo Bonilla-Silva reports hearing from a colleague: ‘Race is a myth, an invention, a socially constructed category. Therefore, we should not make it “real” by using it in our analyses. People are people, not black, white, or Indian. White males are just people’ (Bonilla-Silva 2018: 207). Social constructionism lends itself to such wilfully ignorant semantic arguments. According to Antrosio, we need to judge the theory that race is socially constructed on whether or not it has contributed to alleviating basic issues of racially determined power imbalances and inequality. On all measures, Antrosio claims, it is impossible to say that it has.
This problem is not confined to the social sciences. As John Hartigan notes, ‘Genetics is not going to provide the basis for either proving or disproving the “social” reality of race’ (Hartigan 2008: 167). If activists and social scientists are not good at parsing research in the natural sciences, geneticists and those in the biomedical sciences concerned with public misinterpretations of their findings may not be adept at reading the political writing on the wall which spells out that there is no way to discuss race outside of the political context in which it is continually reproduced. The problem with the pure social constructionist position is that it runs the risk of reasserting the primacy of race as biological rather than political. In a debate with the philosopher of race Charles W. Mills, Barnor Hesse asks: what is race the social construction of? The usual answer, he says, is ‘race is a construction of the idea that there is a biological racial hierarchy’. However, this does not answer the question ‘What is race?’ ‘In effect,’ Hesse remarks ‘social constructionists do not have anything to say about race that is not already said by the biological discourses’ (Hesse 2013). There is abundant evidence that ideas of race developed in situ and that there were competing ideas among various actors within and across various colonial contexts and vis-à-vis a range of different populations about what race meant for a generalized understanding of the human (Wolfe 2016).
According to Ian Hacking in The Social Construction of What?, social constructionist critiques usually contain three elements: that the thing being socially constructed is neither natural nor inevitable, that it is undesirable, and that it can be changed (Hacking 2003). Hesse argues that to resolve the tautology posed by the formulation ‘race is a social construction of the idea of biological race’, we need an alternative account of race that goes beyond this unexplanatory circularity, because ‘our account of race as a social fact cannot be the same as the very thing we’re discrediting’. If race can be changed because it is not natural, we need, as Antrosio also suggests, a way of explaining how race is socially produced that proposes ways of dismantling it. And because race does not originate in nineteenth-century biological theorizations, but is, as Hesse explains, ‘colonially assembled over a period of time’ which goes back at least to the fifteenth century, we need more complete historical and political accounts of how race emerged and became institutionalized. What is clear is that there is no way of reducing the broad scope of racial rule to only the ‘bodily or the biological’ (Hesse 2013). | https://publicseminar.org/essays/why-race-still-matters/ |
"RINA R&D is an inspiring environment composed of a strong team of brilliant and enthusiastic colleagues. Working together, we are able to successfully develop amazing projects in the various sectors where RINA is active, where we can experience new technologies and find solutions to real problems in collaboration with the most distinguished research institutions and industrial stakeholders."
Research and Development is a prime mover in RINA’s evolution and is key to secure ongoing success.
We practice research to build our future knowledge assets and open new market opportunities on the one hand, and to support our customers on the other.
We believe that R&D and innovation must also connect with strategic business thinking, so that new ideas are quickly transferred to industrial practice.
We foster truly open collaborative innovation paradigms, leveraging an international network of leading universities, research and technology organizations, and research-intensive enterprises.
by Crispin Boyer
I loved this funny book! The animals all have vivid personalities, and I loved their hilarious dialogue. It’s so clever how Greek mythology is woven into the pet store setting.
My favorite thing was seeing how the story of Jason and the Argonauts and their quest for the Golden Fleece is reimagined with all the major plot points like Charybdis, Phineas the soothsayer, the harpies, a dragon who guards the fleece, and even the Oracle of Delphi. Locations around the pet store correspond to locations in ancient Greece, like Mount Olympus, the island of Crete, and the Aegean Sea. It’s so imaginative!
The illustrations are adorable, and I loved the cartoony style. It really brings the characters to life with their silly expressions and funny antics.
I would recommend this book for children ages 8-12 who “don’t like to read.” It’s sure to capture their attention, and help them to discover a love for reading!
Disclaimer: I received a copy of this book from the publisher via Media Masters Publicity in exchange for a free and honest review. All the opinions stated here are my own true thoughts, and are not influenced by anyone. | https://luminouslibro.com/2019/12/31/book-review-zeus-the-mighty/ |
In one afternoon, sample different classes offered by Philly Dance Fitness
You can try ballet, Bollywood and other dance styles during the workshop
Sample different workout programs at Philly Dance Fitness, then refuel with healthy snacks. You can try ballet, Bollywood, hip-hop and more during the workshop.
If you're interested in trying out a new workout in 2020, then join Philly Dance Fitness to sample eight of its classes in one afternoon. You may find one, or maybe more, that you like and want to keep doing.
There are two dates you can take the workshop at the South Philly studio. The first will take place Saturday, Jan. 18, and the next will be on Saturday, Feb. 15. | |
There will be an increased opportunity for public input in deer management decision-making under a pilot project launched today by the state Department of Environmental Conservation (DEC). This new project will incorporate modern technology and gather input directly from a broader cross-section of New Yorkers.
"The old method of collecting public input on desired deer population levels was ground-breaking at the time and has served DEC well for a quarter century," said Acting Commissioner Marc Gerstman. "However, we know we can make the program better by obtaining input from a broader range of citizens, by taking better advantage of current electronic communication methods and by making the process easier for those participating."
DEC is initiating this pilot effort in central New York and has selected a 1,325-square-mile group of three WMUs (7H, 8J and 8S) which encompass Seneca County and portions of Ontario, Wayne, Yates, Schuyler, Tompkins and Cayuga counties.
The Human Dimensions Research Unit (HDRU) and the Cooperative Extension in the Department of Natural Resources at Cornell University are assisting DEC with the research and educational outreach aspects of the pilot. In addition, Cornell Cooperative Extension of Seneca, Cayuga, and Tompkins counties will play a central role in implementation of the pilot process.
The new process is intended to replace the existing Citizen Task Force (CTF) model for seeking public recommendations on desired deer population levels within individual Wildlife Management Units (WMUs), in place since 1990.
In keeping with DEC's Management Plan for White-tailed Deer in New York State: 2012-2016, DEC began grouping the existing 92 WMUs into fewer, larger WMU aggregates that will allow for better use of existing and new data and improved deer population monitoring. Public recommendations for deer population change will also be identified for WMU aggregates rather than individual WMUs. DEC is evaluating the best approach to engage the public at this larger scale.
Planning for the revised public input process started in 2013. Activities included interviews with DEC and Cooperative Extension staff, as well as citizens who were involved in the original CTFs to identify the strengths and shortcomings of the old method. In addition, last spring DEC and HDRU conducted a broad-based survey of residents in the central New York pilot WMU aggregate to collect information on public values for deer and their experiences and concerns with deer impacts (e.g., deer-vehicle collisions, landscape damage, agricultural damage) in that area.
The new process concludes by using the recommendations of the citizen group, together with the results of the public survey, to define the public recommendation for deer population change in the pilot WMU aggregate.
Citizens participating in the process will no longer be asked to gather input themselves from other stakeholders, which was one of the limitations under the previous CTF approach. Solicitation of input, now via broad public survey, will be more far-reaching and representative than collecting opinions on a limited one-on-one basis.
The public recommendation for deer population change will be considered alongside data describing the ecological impacts of deer within the WMU aggregate. DEC biologists will base final objectives for deer population change on whether the public recommendation is compatible with existing levels of deer impacts on forests.
Results of the process, as well as the decisions pursuant to it, will be shared with the public broadly, serving as an audit on the pilot system, and providing feedback for improving the process before expanding it to other WMU aggregates in the future. Once refined, DEC intends to implement the new process on a routine cycle in each aggregate in the state to respond to changing conditions and attitudes about deer impacts over time.
The original CTF process involved the selection of a relatively small group of citizens, usually eight to 12 individuals, each representing a particular stake in the deer population level in a WMU. Members included farmers, hunters, motorists, foresters, landowners and others having an interest in the size of a unit's deer herd. Task Force members were asked to seek opinions about desired deer numbers from other citizens in their stakeholder group, form a collective stakeholder position and then report that position back to the CTF. The group as a whole then debated the merits of the various positions and settled on one collective recommendation to the DEC on which direction the local deer population should go and by how much. The recommendation was expressed as percent change desired in the deer population, including no change. DEC then used the CTF recommendation to guide deer management actions in that particular WMU.
For information regarding other DEC deer programs, visit the Department's Deer Management web page. | https://www.nyoutdoortalk.com/news/view/dec-launches-pilot-project-to-improve-collection-of-public-input-about-deer-populations-341.html |
In the December 2012 edition of the Journal of Cancer Survivorship, an article titled “Racial and ethnic differences in health status and health behavior among breast cancer survivors—Behavioral Risk Factor Surveillance System, 2009” examined racial/ethnic differences in health status and behaviors among female breast cancer survivors.
What the researchers concluded, yet again, is that surviving breast cancer comes down to what you do as an individual, and that interventions that promote healthy lifestyles are key. These conclusions ignore the social context in which behavior choices are made. Different communities have different social advantages or disadvantages which determine the options we have to make healthy choices. Inequities in breast cancer outcomes among different racial and ethnic communities are based on a complex interplay of numerous social factors of where we live, learn, work and play.
Encouraging women to eat better, exercise more and drink and smoke less is part of the solution, but continuing to tell people to make better choices without changing policies that increase access to resources is setting up communities for failure after failure and blaming women for their disease. It also does nothing to change the growing disparities we see in breast cancer and actually might make them worse.
This study’s results, to me, underscore the need for community-based approaches that include policy, systems, environmental, and individual-level changes. The CDC agrees: their Racial and Ethnic Approaches to Community Health Across the U.S. (REACH) program acknowledges that there are numerous societal, policy, environmental, cultural, and individual-level factors that must be changed to eliminate racial and ethnic disparities and that developing appropriate programs that address the complex root causes of racial and ethnic health disparities is critical.
As we look at studies that focus on personal behavior, we should continue to examine how factors outside an individual’s control – such as race, economic status and political power – affect who enjoys good health and who does not, and whether communities are involved in decision-making on policies that affect their access to resources.
Breast Cancer Action continues to challenge our society’s strong emphasis on personal behavior as the silver bullet, and believes that effective strategies to eliminate inequities, to reduce disparities in breast cancer incidence, mortality and survival, require a broader focus on the social and economic contexts in which we all live. | https://bcaction.org/2012/12/10/women-are-to-blame-again-for-their-breast-cancer/ |
15 Depictions of Robots and AI in Film
Written by Edmund H. North and based on the 1940 science fiction short story "Farewell to the Master" by Harry Bates, in The Day the Earth Stood Still, a humanoid alien visitor named Klaatu comes to Earth, accompanied by a powerful eight-foot tall robot, Gort, to deliver an important message that will affect the entire human race.
2001: A Space Odyssey
The film follows a voyage to Jupiter with the sentient computer HAL after the discovery of a mysterious black monolith affecting human evolution. The film deals with themes of existentialism, human evolution, technology, artificial intelligence and the possibility of the existence of extraterrestrial life.
Star Wars
The Imperial Forces, under orders from cruel Darth Vader, hold Princess Leia hostage in their efforts to quell the rebellion against the Galactic Empire. Luke Skywalker and Han Solo, captain of the Millennium Falcon, work together with the companionable droid duo R2-D2 and C-3PO to rescue the beautiful princess, help the Rebel Alliance, and restore freedom and justice to the Galaxy.
The film's success led to two critically and commercially successful sequels, The Empire Strikes Back in 1980 and Return of the Jedi in 1983, and later to a prequel trilogy, a sequel trilogy, and anthology films.
Blade Runner
The film is set in a dystopian future Los Angeles of 2019, in which synthetic humans known as Replicants are bio-engineered by the powerful Tyrell Corporation to work on off-world colonies. When a fugitive group of replicants led by Roy Batty escapes back to Earth, burnt-out cop Rick Deckard reluctantly agrees to hunt them down. The sequel, Blade Runner 2049, was released in 2017.
The film stars Jeff Bridges as a computer programmer who is transported inside the software world of a mainframe computer where he interacts with programs in his attempt to escape. Over time, Tron developed into a cult film and eventually spawned a franchise, which consists of multiple video games, comic books, an animated television series and a sequel, Tron: Legacy in 2010.
The Terminator, a cyborg assassin, is sent back in time from 2029 to 1984 to kill Sarah Connor (Linda Hamilton), whose son will one day become a savior against machines in a post-apocalyptic future. Its success led to a franchise consisting of four sequels (Terminator 2: Judgment Day, Terminator 3: Rise of the Machines, Terminator Salvation and Terminator Genisys), a television series, comic books and novels.
Set in a crime-ridden Detroit, Michigan, in the near future, RoboCop centers on police officer Alex Murphy who is murdered by a gang of criminals and subsequently revived by the megacorporation Omni Consumer Products (OCP) as a superhuman cyborg law enforcer known as RoboCop.
The film depicts a dystopian future in which reality as perceived by most humans is actually a simulated reality called "the Matrix", created by Sentient machines to subdue the human population, while their bodies' heat and electrical activity are used as an energy source. Cybercriminal and computer programmer Neo learns this truth and is drawn into a rebellion against the machines, which involves other people who have been freed from the "dream world."
Directed and produced by Steven Spielberg and set in a futuristic post-climate change society, A.I. tells the story of David, a childlike android uniquely programmed with the ability to love.
I, Robot
In the year 2035, humanoid robots serve humanity, which is protected by the Three Laws of Robotics. Del Spooner, a Chicago police detective, hates and distrusts robots because he was rescued from a car crash by a robot using cold logic (his survival was statistically more likely), leaving a 12-year-old girl to drown.
The Hitchhiker's Guide to the Galaxy
Arthur Dent is trying to prevent his house from being bulldozed when his friend Ford Prefect whisks him into outer space. It turns out Ford is an alien who has just saved Arthur from Earth's total annihilation.
Moon
The film follows Sam Bell, a man who experiences a personal crisis as he nears the end of a three-year solitary stint mining helium-3 on the far side of the Moon with his robot companion, GERTY.
Her
The film follows Theodore Twombly, a man who develops a relationship with Samantha, an intelligent computer operating system personified through a female voice.
Transcendence
Dr. Will Caster, the world's foremost authority on artificial intelligence, is conducting highly controversial experiments to create a sentient machine. When extremists try to kill the doctor, they inadvertently become the catalyst for him to succeed.
Ex Machina
Caleb Smith, a programmer at a huge Internet company, wins a contest that enables him to spend a week at the private estate of Nathan Bateman, his firm's brilliant CEO. When he arrives, Caleb learns that he has been chosen to be the human component in a Turing test to determine the capabilities and consciousness of Ava, a beautiful robot.
(This list is not intended to be a best of list, but merely highlights films that had some influence in the world of AI and Robotics).
You can set up additional users to be class schedulers in your TMS organization. The new user account will need the Operations Manager role. Follow the steps below to create a new user with the permissions to schedule classes.
Go to the Site Administration page and, on the Users tile, click Create User. On the Basic Information page, fill in the user’s information; First Name, Last Name, Primary Email, User Name and Password are required fields. Next, click on the Roles tab and click + Assign Role. This opens the Choose User Role dialog box. Select the Operations Manager – Subscription Centers role and click OK. You will also need to give this user Organization Management over your organization: go to the Organization Management tab and check the box next to your organization's name. Once you are finished, click Save. Please provide the new user with their login credentials.
If the user already has an account within your organization, navigate to the user’s profile page. Click Edit and select the Roles tab. Assign the user the Operations Manager – Subscription Centers role and give them Organization Management over your organization. Click Save to finalize the changes. If the user is already logged in, they may need to log out of their account and log back in for the changes to take effect.
This article reports on a citation study examining the use of archives by researchers in the field of Catholic history. The authors collected citation data from three Catholic history journals published from 2010 through 2012. They analyzed two citation attributes: the type of materials cited and, for archival materials, the type of repository. This article presents results and observations from the study and discusses them in the context of archival practice. The authors discuss how findings from this study can inform collection development and archival description as well as ideas for further research.
Inclusive pages: 43-56
ISBN/ISSN: 1067-4993
Comments
The data used for this research are archived in this repository as supplemental files attached to this record. They are available in the following file formats:
- Excel 2010 (.xlsx)
- Excel 97-2003 (.xls)
- Comma-separated (.csv), with each worksheet saved as a separate file
- Open Document Spreadsheet (.ods)
Volume: 36
Issue: 1
Place of Publication: Chicago, IL
Peer Reviewed: yes
Keywords: Catholic history, Citation study, Catholic archives, Citation analysis
eCommons Citation: Jillian M. Slater (0000-0003-0805-3097) and Colleen Hoelscher (2014). Use of Archives by Catholic Historians, 2010-2012: A Citation Study. Archival Issues.
Quantum entanglement and non-locality are non-classical characteristics of quantum states with phase coherence that are of central importance to physics, and relevant to the foundations of quantum mechanics and quantum information science. This thesis examines quantum entanglement and non-locality in two- and three-component quantum states with phase coherence when they are subject to statistically independent, classical, Markovian, phase noise in various combinations at the local and collective level. Because this noise reduces phase coherence, it can also reduce quantum entanglement and Bell non-locality. After introducing and contextualizing the research, the results are presented in three broad areas. The first area characterizes the relative time scales of decoherence and disentanglement in 2 x 2 and 3 x 3 quantum states, as well as the various subsystems of the two classes of entangled tripartite two-level quantum states. In all cases, it was found that disentanglement time scales are less than or equal to decoherence time scales. The second area examines the finite-time loss of entanglement, even as quantum state coherence is lost only asymptotically in time due to local dephasing noise, a phenomenon entitled "Entanglement Sudden Death" (ESD). Extending the initial discovery in the simplest 2 x 2 case, ESD is shown to exist in all other systems where mixed-state entanglement measures exist, the 2 x 3 and d x d systems, for finite d > 2. The third area concerns non-locality, which is a physical phenomenon independent of quantum mechanics and related to, though fundamentally different from, entanglement. Non-locality, as quantified by classes of Bell inequalities, is shown to be lost in finite time, even when decoherence occurs only asymptotically. This phenomenon was named "Bell Non-locality Sudden Death" (BNSD).
Description
Thesis (Ph.D.)--Boston University.
Supervises employees involved in a variety of production and/or operations functions such as assembly, inspection, test, and/or final test related to the manufacturing of the company's capital equipment and systems (electronic, mechanical, and electro-mechanical) and/or components, subassemblies and subsystems.
Prioritizes production schedules based on available manpower, equipment efficiency, capacity and materials requirements.
Participates in operational meetings. Supports the development and tracking of organization performance metrics. Schedules and conducts milestone meetings. Responsible for meeting or improving cycle time performance and other metrics.
Ensures timely response to operations issues impacting customer satisfaction; acts as an interface in the establishment of and ensuring conformance to customer/vendor requirements; prepares detailed analysis of cost of quality opportunity and initiates corrective action.
Oversees the prevention of employee accidents and injuries. Responsible for ensuring and documenting that all department employees (including temporaries) receive and follow appropriate department training including Environmental, Health and Safety training. Implements, emphasizes importance of, and monitors compliance to appropriate safety policies and procedures.
Develops personnel to include training and career development; manages the performance management process to include the development of team and individual goals, implementing employee development plans, and coaching. Manages the employee selection, hiring, reward and discipline processes.
Supports the analysis of and planning for maximum production capacity optimization; implements and monitors manufacturing or department processes that collect, analyze and report key measurement data and real-time status.
Identifies process and quality changes designed to improve manufacturing or department capabilities. Drives Lean, Safety and Quality. Takes corrective action.
Functional Knowledge
- Demonstrates understanding and application of procedures and concepts within own job family and basic knowledge of other related job families.
Business Expertise
- Applies understanding of how the team relates to other closely related areas to improve efficiency of own team
Leadership
- Has formal supervisory responsibilities; sets priorities for and coaches employees to meet daily deadlines
Problem Solving
- Uses judgment to identify and resolve day-to-day technical and operational problems
Impact
- Impacts the quality, efficiency and effectiveness of own team and its contribution to the business unit, department or sub-function
Interpersonal Skills
- Uses tact and diplomacy to exchange information and handle sensitive issues. May be required to interact with outside customers, vendors or suppliers.
Position requires understanding of Applied Materials global Standards of Business Conduct and compliance with these standards at all times. This includes demonstrating the highest level of ethical conduct reflecting Applied Materials' core values.
Qualifications
Education:
Bachelor's Degree
Skills
Certifications:
Languages:
Years of Experience:
2 - 4 Years
Work Experience:
Additional Information
Travel:
No
Relocation Eligible:
No
Applied Materials is an Equal Opportunity Employer committed to diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, national origin, citizenship, ancestry, religion, creed, sex, sexual orientation, gender identity, age, disability, veteran or military status, or any other basis prohibited by law. | https://mass-green.jobs/newburyport-ma/manager-ii-operations-management/1F3F38B3D5BF4AFE9E24784C47298AD5/job/?vs=28 |
These educational groups have joined together and been accompanied by a network of universities, organizations and institutions that collaborate in the educational, formative and social development of our University.
LiberQuare University aims to go one step further in the pursuit of quality and educational excellence, without losing its values and identity, adapting to and helping our students and teachers experience an educational offer that is distinctive, special, and of high quality.
LiberQuare University has been strongly committed to society from its start. The history of this University is one of foundations, schools and associations that have made education a way of contributing to the development of individuals and of society as a whole. LiberQuare University carries this legacy forward by contributing to progress through quality higher education.
Each lecture is a reference point in a specific field. By creating a lecture, LiberQuare seeks to address its field in a distinctive manner.
Lectures will be given by a person of special relevance and of international standing and recognition, supported by a Lecture Assistant. The lecture's activity consists of organizing, or collaborating occasionally in, events of scientific, cultural or social significance.
The director of the lecture may give talks or preside over its activities. The assistant's mission is to provide all the information the lecture deems necessary for its follow-up online, not only by the students of the University but, in many cases, as an extracurricular activity.
The lectures and their activities will contribute to the social good in their respective fields and make the University visible to the world.
Likewise, LiberQuare University, together with the Mundy Family Foundation, develops care activities for families in crisis or socially disadvantaged situations, and promotes a movement of international solidarity in favor of the family.
The 'Scream: Resurrection' Ending Is A Creepy Callback To The Original Movie
Spoilers ahead for Scream: Resurrection. The conclusion of Scream: Resurrection left a lot of corpses strewn around the Atlanta area, but it appears the killings have finally come to an end. The identity of Ghostface has been revealed, and if you didn't see the original movie, you might have been surprised to find out that asking who the killer in Scream: Resurrection is was actually the wrong question. Fans should have been asking who the killers — plural — are.
The second-to-last episode of Scream suggests that Jamal, Deion's half-brother, was the killer the entire time. However, audiences are treated to a fake-out when another Ghostface shows up and brutally injures Jamal. When Deion saves him, Jamal reveals that he actually is the real killer and that he was attacked by his former partner in murder. He then reveals his motive for the killings, echoing the motivation of the killers in the original Scream film.
At the end of the first Scream, main character Sidney Prescott learns that the killing pair behind the Ghostface identity is her own boyfriend, Billy, accompanied by his best friend, Stu. Billy, however, was the true mastermind of the two, formulating the killings as a way to get revenge on Sidney's mother for ruining his family.
A year prior to the events of Scream, Billy killed Sidney's mother for having an affair with his father, breaking his family apart. Stu, however, was mostly just along for the ride. Scream: Resurrection expands on the idea of the original by having two killers where neither is a sidekick, but merely helping each other act out their own fantasies.
Jamal wanted to get revenge on Marcus — that's right, Marcus — for letting Deion die. Jamal reveals that he may be the only person who knows that it was actually Deion who died in that salvage yard so many years ago. Marcus assumed Deion's identity, not knowing that Deion had met Jamal or that the twins even had a half-brother. Jamal was responsible for the deaths of nearly everyone that wasn't in the Dead Fast club, to prove to Marcus that he was serious about killing and to get revenge for Deion's death — which is why he killed the hook man. All the other deaths, however, are the proud creation of the Dead Fast Club's own horror expert, Beth.
Beth simply wanted to be in her own horror movie, so she created one. She killed her "friend" Shane, then Manny and Amir, and almost killed Liv and Deion/Marcus so that she could prove to be a better killer than the ones she grew up watching. All those questions about why the killer obsessed with Deion/Marcus' past would spend time killing Manny and Amir are finally explained by the fact that these two killers worked together but had very different motives.
Despite Beth's insistence that the killer is the real hero of every slasher movie, her death seems pretty final. Although Deion/Marcus gets a phone call from a hidden number at the end of the series, the fact that he ignores it suggests that he can actually focus on enjoying his life for a change instead of looking over his shoulder for someone trying to plunge a knife into his back. | |
A micro-credentialing methodology for improved recognition of HE employability skills
International Journal of Educational Technology in Higher Education volume 19, Article number: 10 (2022)
Abstract
Increasingly, among international organizations concerned with unemployment rates and industry demands, there is an emphasis on the need to improve graduates’ employability skills and the transparency of mechanisms for their recognition. This research presents the Employability Skills Micro-credentialing (ESMC) methodology, designed under the EPICA Horizon 2020 (H2020) project and tested at three East African universities, and shows how it fosters pedagogical innovation and promotes employability skills integration and visibility. The methodology, supported by a competency-based ePortfolio and a digital micro-credentialing system, was evaluated using a mixed-method design, combining descriptive statistics and qualitative content analysis to capture complementary stakeholder perspectives. The study involved the participation of 13 lecturers, 169 students, and 24 employers. The results indicate that the ESMC methodology is a promising approach for supporting students in their transition from academia to the workplace. The implementation of the methodology and the involvement of employers entails rethinking educational practices and academic curricula to embed employability skills. It enables all actors to broaden their understanding of the relationship between higher education and the business sector and to sustain visibility, transparency, and reliability of the recognition process. These findings indicate that there are favourable conditions in the region for the adoption of the approach, which is a meaningful solution for the stakeholder community to address the skills gap.
Introduction
The need to improve the quality and relevance of skills development and their visibility and comparability for better career perspectives is emphasized in “Agenda 2063” (African Union Commission, 2015). Likewise, the International Labour Organization (ILO) highlights the positive impact of skills recognition on the labour market, particularly with regard to matching skills and jobs (Braňka, 2016). The Organization for Economic Co-operation and Development Skills Strategy (OECD, 2011) continues to acknowledge the importance of skills and their recognition (OECD, 2021). In this scenario, higher education institutions (HEIs) are increasingly focusing on their students’ employability skills as an integral part of their goals (Suleman, 2018) to improve their chances on entering the economic and labour market.
In parallel, the shift of focus from the recognition of conventional qualifications to micro-credentials has also emerged as a trend (Kato et al., 2020). The special interest in their adoption stems from a long-standing debate on the value of degrees for the future of work (Gallagher, 2018). As a consequence, one of the current challenges for education policies and systems is to provide students with the option to accumulate meaningful, skills-focused digital credentials in order to meet today’s workforce requirements.
Micro-credentials offer an opportunity to bridge the skills gap that is acknowledged to make the transition from post-secondary education to the world of work difficult and which is affecting many developing countries, some of which belong to the East African Community (ILO, 2020). Against this background, a twofold strategy is required to respond to the growing concern among employers about the preparation of African graduates for the workplace (Leopold et al., 2017). On one hand, HEIs in Africa need to start questioning how to reduce this discrepancy by designing and implementing curriculum innovation and supporting platforms where employability skills can be recognised and easily shared, thus enhancing the visibility of student achievements. On the other hand, industry partners need to participate in innovations in traditional modes of recognition (Guàrdia et al., 2021) in order to deliver effective solutions that aid in the selection of the best-qualified graduates for employment.
Purpose of the study
This article presents the methodological approach designed in the EPICA H2020 project for micro-credentialing employability skills of students approaching graduation which is supported by (a) a competency-based ePortfolio as a transition tool from academia to the workplace along with (b) a digital micro-credentialing system addressed to make the skills visible to prospective employers. Additionally, the article reports on the results of a pilot that took place at three East African universities: Maseno University (Kenya), Makerere University (Uganda), and the Open University of Tanzania (Tanzania). The evaluation of the methodology considers the complementary perspectives of the three different stakeholder groups directly involved in the process: lecturers, students and employers. The guiding research questions of the study were:
- RQ1. Which pedagogical innovations focused on employability skills integration and recognition could be fostered by the Employability Skills Micro-credentialing (ESMC) methodology?
- RQ2. How may the ESMC methodology promote employability skills visibility, transparency and trustability for employability purposes?
The focus of the study is on the academic community, particularly on lecturers and students’ perspectives regarding the capacity of the methodology to foster curriculum innovation toward micro-credentialing for employability skills and, hence, to enhance employment opportunities. Special attention was paid to key actors in the professional context and their need to optimise candidate recruitment processes, and the analysis therefore also draws on the employer perspectives regarding the pertinence of an approach that encourages students to showcase badges as evidence of their proficiency in specific skills.
Background
Context and the significance of the problem
The lack of focus in HEIs on helping students develop the skills they need to be fully equipped to enter the labour market is one of the reasons for the skills gap reported in the East-Africa region (Njeg’ere Kabita & Ji, 2017). Similarly, the African Development Bank (2019) emphasizes the relevance of adopting a skills-based approach to learning to enhance the employability of undergraduates.
Despite the description of the skills and competencies associated with a particular qualification provided by both the National Qualifications Frameworks (NQFs) and the East African Qualifications Framework for Higher Education (EAQFHE), traditional credentials are proving to be insufficient to address the growing disconnect between what employers want and what the credential communicates (Nikusekela et al., 2016). Credentials, in fact, leave out what and how students learned, and the skills and competencies they acquired within and beyond the walls of the university (Wienhausen & Elias, 2017) which makes them inadequate to reflect the transferable skills needed in a changing workplace. As a consequence, they are becoming increasingly ineffective as a screening mechanism for recruiters.
The need to include more granular skills, abilities and dispositions in class-level learning outcomes also led Transforming Employability for Social Change in East Africa (TESCEA) to undertake different initiatives with the aim of transforming HE for employability and social change (Wild & Omingo, 2020). Among these initiatives, a TESCEA partnership joined forces to develop a skills matrix for graduates’ skills and employment to guide the course redesign process within the university partners. This growing emphasis on the identification of graduate employability skills, suggests micro-credentials as a possible solution to the challenges facing the African labour market today.
Micro-credentials as a contribution to enhance employability
The European Commission (2020) defines a micro-credential as a certified document issued by an institution or organization of learning outcomes achieved through a learning experience, following quality assurance standards, and containing additional information regarding the holder’s name, the applied assessment methods, and, “where applicable, the qualifications framework level and the credits gained” (p.5). They are owned by the recipient who can share them, combine them with others, or showcase them in different digital contexts. According to Milligan and Kennedy (2017), micro-credentials are part of “a digital credentialing ecosystem, made possible by digital communications technologies establishing networks of interest through which people can share information about what a learner knows and can do” (p.43).
As reported by Oliver (2019), micro-credentials can stand alone or interact with formal qualifications. In the latter case, they may be used as an alternative entry mechanism to degree programs, a means to provide value-added programs during degrees or/and a way to connect ready-to-graduate learners to work experience and employment. Micro-credentials when used to add value to academic programs, therefore, make it possible to acknowledge skills and experiences that are not shown on academic transcripts or CVs, including interpersonal skills and extracurricular or volunteer activities (Braxton et al., 2019) and hence complement conventional qualifications and map career paths.
To sum up, micro-credentials enable the capturing of the extensive range of experiences and skills that students develop during their careers and showcase them to employers as additional signals that go beyond traditional transcripts. The trust of relevant stakeholders in the skills certified by micro-credentials, however, should be fostered by a digital credentialization system that makes relevant information behind the micro-credential (contents, quality, outcomes, assessment, workload, etc.) digitally available and presents it appropriately. The provision of transparent information on the learning experience that led to the credential also addresses the perennial problem that arises from the different ways HE and the labour market describe achievements (Orr et al., 2020).
Micro-credentialing employability skills in the higher education curriculum
HEIs are one of the key providers of micro-credentials and well placed to drive innovation in this area (MICROBOL, 2020). They have a crucial role in guiding students in the aggregation of learning experiences into structured learning journeys and in recognising the skills required for success in the workplace (Gauthier, 2020).
Integrating and micro-credentialing skills in HE curricula, however, requires that academic institutions overcome the knowledge transfer paradigm in favour of active learning models and authentic assessment scenarios (Sokhanvar et al., 2021) that surface both academic knowledge and workplace skills (Kilsby & Goode, 2019).
Additionally, institutions should take into account not only the employability skills students learn throughout the curriculum but also those developed outside the classroom. Capturing evidence of learning gained in different contexts in a way that makes both supplemental learning achievements and non-cognitive attributes visible (Tyton Partners, 2015) requires institutions to reframe the way outcomes and learning are recognised.
According to Selingo (2017), students’ ePortfolios of assets and data enhance the credibility of micro-credentials by documenting the learning process, progress and performances. In this way, employers are able to see the incremental advances that students have made and use this information in hiring decisions. Additionally, the fact that ePortfolios are integrative in nature promotes a process of reflection and articulation by the students of the range of experiences, knowledge, and competencies that constitute their education (Wienhausen & Elias, 2017). Digital badges are often paired with ePortfolios. Although the terms digital badges and micro-credentials have often been used interchangeably in the literature, the term digital badge usually refers to the process through which a technological system handles and issues them. Digital badges, therefore, should be understood as “a representation of an accomplishment, interest or affiliation that is visual, available online, and contains metadata that help explain the context, meaning, process and result of an activity” (Gibson et al., 2015, p.403). These electronic symbols stored in the competency-based ePortfolio provide recognition of students’ skills, and give access to a collection of evidence about their capacities and readiness for work. The intersection between digital badges and ePortfolios, when successful, can unlock the power of the evidence behind the badge and enhance both the learner’s ability to present a collection of projects and their capacity to make claims about their competences (Ambrose et al., 2016). The pairing of badges and ePortfolios has also been identified as a future direction for research and practice (Eynon & Gambino, 2017).
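To make the “metadata” behind such a badge concrete, the following is a minimal sketch of what a skill badge issued through an ePortfolio system might carry, loosely modelled on the Open Badges convention. All field values here (skill name, issuer, URL, date) are hypothetical illustrations, not fields prescribed by the EPICA project.

```python
import json

# Hypothetical metadata for one skill badge; every value is illustrative.
badge = {
    "name": "Problem Solving",               # the micro-credentialed skill
    "description": "Demonstrated through curricular and extracurricular evidence",
    "issuer": "Example University",          # hypothetical issuing institution
    "criteria": "Assessed by a lecturer against a skill-specific rubric",
    "evidence": ["https://eportfolio.example/students/1234/problem-solving"],
    "issuedOn": "2020-06-15",
}

# Serializing the badge makes the context behind the credential portable,
# so an employer's system can inspect what the badge actually certifies.
payload = json.dumps(badge)
print(json.loads(payload)["criteria"])
```

The key design point is that the evidence links resolve back to the competency-based ePortfolio, so the badge is not a bare symbol but an entry point into the work that earned it.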
The EPICA micro-credentialing methodology
With the aim of providing graduates with more detailed accounts of their learning and accrediting their employability skills, the EPICA project developed and tested a new methodology for employability skills assessment, micro-credentialing and visibility that employs a competency-based ePortfolio as a transition tool, where evidence is attached to badges (Maina et al., 2020). The methodology draws from different successful initiatives. First, Deakin University’s Professional Practice Credentials (Jorre De St Jorre et al., 2016) which adopts the use of evidence, rubrics and micro-credentials to certify professional skills acquired throughout a career. Second, the UWaterloo curriculum vitae project (WatCV) that supports students in articulating their skills focusing on their transferability to the workplace, and the use of a digital portfolio as a high-impact educational practice (Watson et al., 2016), the Catalyst Framework (Eynon & Gambino, 2017), and the STAR (Situation, Task, Action, Results) method, a structured manner of answering behavioural interview questions focusing on skills and the strategic description of significant lived experiences, which is frequently used in hiring processes. Third, the Comprehensive Learner Record project (Green & Parnell, 2017) which highlights the importance of developing a record showing achievements in employability skills to complement the HE degree and transcript, and linking them with evidence coming from curricular and extracurricular experience. Finally, the VALUE initiative (McConnell et al., 2019) on students learning outcomes assessment, provided a set of initial generic rubrics for adaptation and application.
The underlying assumption for the design of this methodology is that university courses and programs currently do provide opportunities for the development of employability skills but they are not directly and explicitly managed or assessed (Tomasson Goodwin et al., 2019). This premise led to the identification of a solution capable of raising awareness of the significant skills already addressed, albeit unintentionally, by the curriculum and engaging students, lecturers, and employers in a collaborative endeavour focused on micro-credentialing these skills. This process was organised in two articulation phases (see Fig. 1) in which the students developed the ability to identify and communicate their skills to specific target groups (Guàrdia et al., 2021).
Articulation 1
In the first articulation, students were required to articulate their employability skills in written, visual and verbal form for lecturers by engaging in inquiry, reflection and integration tasks following the pedagogical design principles of the Catalyst Framework (Eynon & Gambino, 2017) and using an ePortfolio. Hence, they had to:
- Inquire into their own curricular and extracurricular experiences to identify significant situations within and outside their academic contexts which could provide direct or indirect evidence to demonstrate the development of their employability skills;
- Reflect on how the situations they had identified contributed to the development of these skills and select pieces of evidence that best illustrate their application. Each piece of evidence is presented with a description providing contextual explanation;
- Integrate their learning by elaborating a reflective narrative that carefully explains and justifies how the overall evidence presented demonstrates their skills. A skill-specific rubric with the assessment criteria enables students to self-assess their level of development, and thus guide the justification and the presentation of the evidence.
Lecturers scaffold their students throughout the whole process providing formative feedback and encouraging the submission of their work. The ability to demonstrate the selected employability skills is then assessed by academics on the basis of a customized rubric. Effective showcasing is acknowledged by a digital badge per skill showcased.
Articulation 2
The second articulation of the micro-credential process is aimed at ensuring that students are able to effectively communicate their employability skills to employers. Students are required to review their profiles and personalize their ePortfolios by:
- Writing or reviewing the presentation of themselves, including their short bios and pictures and other relevant information considered of interest to an employer;
- Reviewing their awarded digital badges and evidence to start developing a script for a short video presentation;
- Adding new evidence of achievements that contributes to building a more comprehensive portrait of their capacities;
- Recording a 3 to 5-minute video testimony to communicate their profile to a prospective employer following the STAR method and highlighting their experiences and achievements;
- Customizing the ePortfolio, paying attention to the formal organization of all elements and to aesthetic aspects, before sharing it with an employer for appraisal.
Employers are required to appraise the employability skills on the basis of the student’s presentation and the evidence provided through the competency-based ePortfolio. In cases where the showcased work is of exceptional quality, employers may endorse the student with a written personal commendation.
Both articulations are bridged by the digital badge as the main evidence that serves as formal academic recognition linked to experiences and achievements and thus providing substantive information on the student’s capacities.
Materials and methods
Research context
The pilot of the EPICA ESMC methodology took place between January and July 2020 and, despite the COVID outbreak that forced it to be conducted completely online, it involved the participation of 13 lecturers, 169 students, and 24 employers. The micro-credentialing process was implemented, taking Ornellas et al.’s (2019) employability skills taxonomy as a reference, in 11 bachelor programs from different disciplines including Education, Law, Management, Mathematics, Social Work, Nursing, Informatics, and others. During this experience each of the 169 students engaged in the assessment of 2 to 4 employability skills, accounting for a total of 526 assessed skills, mainly creative thinking (161, 30.6% of skills), communication and interpersonal skills (139, 26.4%), and problem-solving (108, 20.5%). 136 students (80%) then continued with Articulation 2, showcasing their badges to potential employers. It is worth mentioning that the employers were expressly appointed to the pilot for research purposes. The future deployment of the methodology is expected to be managed by university staff and integrated into the curriculum as part of an innovation aiming at increasing graduates’ employability opportunities.
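As a quick sanity check, the skill shares reported above follow directly from the pilot counts. A short sketch (the category labels mirror the text; the remaining assessments fall under the other skills in the taxonomy):

```python
total_skills = 526  # skills assessed across the 169 students

counts = {
    "creative thinking": 161,
    "communication and interpersonal skills": 139,
    "problem-solving": 108,
}

# Percentage share of each skill, rounded to one decimal as in the article
shares = {skill: round(100 * n / total_skills, 1) for skill, n in counts.items()}
print(shares)  # reproduces the reported 30.6%, 26.4% and 20.5%

# Assessments outside the three most frequent categories
remaining = total_skills - sum(counts.values())
print(remaining)  # → 118
```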
Research design
A mixed-method approach was applied, combining qualitative and quantitative techniques, and convergent design (Clark & Ivankova, 2016), to capture the perspectives of participating African universities students, lecturers and appointed employers.
A set of data collection instruments was designed (see Fig. 2) based on the Catalyst Framework (Eynon & Gambino, 2017) dimensions: inquiry, reflection, integration, and outcomes assessment. And, with the aim of gaining insights into students and lecturers’ perceptions of the potential of the methodology to enhance employability opportunities, the Electronic Portfolio Student Perspective Instrument, EPSPI (Ritzhaupt et al., 2010) was also integrated into the analysis. EPSPI involves four distinct constructs defined as primary purposes: learning, assessment, visibility, and employment. In particular, the latter two provided information relating to the use and relevance of the EPICA solution in the transition from university to the workplace.
The students and lecturers’ online questionnaire was answered by 50 and 8 participants respectively and measured the Catalyst dimensions on a scale of 1–7. The employers’ questionnaire drew on EPSPI and was answered by 11 African employers. In parallel, 28 semi-structured interviews based on EPSPI were carried out with students (21) and lecturers (7). Additionally, student reports, following a similar script, were produced by six students (2 per university) which described their experience in a narrative form. By the end of the project, 29 free and open testimonies produced by students (12), lecturers (11), and employers (6) were also collected and included in the analysis as a secondary source, as they provided a valuable narrative corpus.
Data analysis
As part of a pilot project, the group of participants represents a non-probabilistic sample selected by criterion, to which probabilistic measures of inferential validity and reliability do not apply. Considering the sample sizes, the quantitative results of the students’, lecturers’, and employers’ questionnaires were analysed applying univariate descriptive statistics with the SPSS© program. The results are reported through parameters appropriate to the variables’ metrics and sample size.
Qualitative content analysis (Schreier, 2012) was applied to the corpus of data from the students and lecturers’ interviews, students’ reports, and employers’ open questions, as well as testimonies. Atlas.ti© was used for coding and analysis. A coding manual (Syed & Nelson, 2015) was developed based on Catalyst and EPSPI.
The code attributes were established in a collaborative exercise among the researchers, including definition and inclusion criteria (Creswell & Poth, 2018). Two researchers individually started by coding 20% of the total data as a trial coding before comparison for consistency. The accuracy and scope of the codes were discussed, and adjustments were made. This process assisted in the refinement of the existing codes and the identification of other emerging ones in a deductive-inductive approach (Elo & Kyngäs, 2008). To reach a consensus on the meaning attributed to data and to ensure the reliability of the results, this procedure was also carried out with the data from students and employers. The codes were clustered into categories (see Table 1, in Appendix), and semantic networks representing visual depictions of the conceptual structure and the connections between concepts (see Figs. 6, 7, 8, in Appendix) were created to increase the understanding of each group of participants’ views.
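The paper does not report which agreement statistic, if any, was used when the two researchers compared their trial coding of 20% of the data; Cohen's kappa is one common choice for quantifying inter-coder consistency on nominal codes, sketched here on hypothetical segments labelled with categories drawn from the Catalyst/EPSPI coding manual:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' nominal codes on the same segments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of segments on which the coders agree.
    observed = sum(x == y for x, y in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical trial coding of 10 data segments (illustrative only;
# code labels loosely follow the Catalyst/EPSPI categories).
a = ["inquiry", "reflection", "reflection", "integration", "assessment",
     "inquiry", "employment", "reflection", "inquiry", "assessment"]
b = ["inquiry", "reflection", "integration", "integration", "assessment",
     "inquiry", "employment", "reflection", "reflection", "assessment"]

print(f"kappa = {cohens_kappa(a, b):.2f}")
```

A kappa well above chance would support proceeding from trial coding to full coding; disagreements (as on segments 3 and 9 here) are exactly the cases the researchers would discuss when refining code definitions and inclusion criteria.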
Results
This section presents lecturers, students, and employers’ perspectives organized into three research concerns: the pedagogical principles underlying the micro-credentialing process, the micro-credentials and ePortfolio for employability enhancement, and the methodology uptake.
Lecturer perspective
Pedagogical principles
Lecturers evaluated the pedagogical principles for employability skills micro-credentialing (ESMC) positively both in the questionnaires (see Fig. 3) and in the interviews. The questionnaires revealed promising results with little variation regarding inquiry, reflection, and integrative learning (75%, n = 6 lecturers).
The lecturers highlighted that the methodology enabled students to inquire into their experience and identify appropriate situations and evidence demonstrating the acquisition of skills. The collection of the evidence along with the articulation of skills fostered reflection on their learning journey and contributed to a better understanding of their strengths in relation to the requirements of the labour market. One of the lecturers mentioned that “the ePortfolio requires students to upload their own evidence where they can actually understand what they’ve learnt, and show to the employers exactly what they know”. Some others explained that this experience has helped students to “improve their self-confidence in various fields of study” and, even though “there were some challenges, they became more resilient; it was a full win situation”.
Additionally, lecturers concurred that the methodology provides strong support for outcomes assessment (87.5%, n = 7). Specifically, they stated that the ePortfolio has proven to be a very useful tool for assessing students’ achievements across an entire programme. The use of rubrics was also appreciated as they provide common criteria and detailed descriptors that increase transparency and homogeneous assessment. Moreover, some lecturers highlighted the benefits of formative feedback and continuous assessment, especially in the process of identifying the best evidence in support of their skills demonstration. This brings opportunities to students to fine-tune their submissions. The interaction among students and lecturers was also perceived by the latter as a valuable strategy to support reflection and to improve the awareness of their own employability skills: “through ePortfolio, students can simply send their activities online and get immediate feedback for further improvement where necessary. Contrary to traditional assessment, students can be assessed several times to make sure that they are able to fully demonstrate their capacity in a particular skill”.
The qualitative analysis also showed that the implementation of the methodology prompted a conceptual change that influenced the lecturers’ teaching practices, as shown in the following quote: “Prior to my participation in the project, I did not know how to incorporate employability skills in my lecturing activities. I can now include employability skills in the assignments and tasks. My module design skills have also greatly improved”. Lecturers concurred that this experience provided clues on how to integrate employability skills by rethinking the design of their courses, focusing on the use of active learning, the definition of learning outcomes and an assessment no longer based on the student’s ability to memorise, as testified by one of them: “This experience will be integrated into the course preparations, specifically in the selection of the skills, how they will be put into actual practice (…) and how they will be reflected in the evidence”.
Employability enhancement
The lecturers acknowledged the potential of the micro-credentialing methodology to foster employability. Most of them agreed that the badges linked to the selected evidence allow students to showcase their value as future employees: “The EPICA ePortfolio has been proven to definitely increase the visibility of our students’ employability skills to employers, after having gone through a digital-based review and approval process by our teachers”. Great importance was attributed by lecturers to the authenticity of the evidence from students’ lived professional or work-related experiences as they provide substantive information to employers about the students’ performance and application of knowledge and know-how to real situations and problems. Besides this, the contextualisation of the evidence with further explanation was also considered as an added value as it helps employers develop a comprehensive understanding of the presented experiences. Finally, the ePortfolio was also perceived to be an optimal tool for students to demonstrate their skills and for employers to more reliably appraise student profiles.
In addition to this, the communication between academia, employers and students encouraged by the methodology was also seen by lecturers as a means to discover more about the demands of the labour market and therefore to redesign the curriculum: “My participation in EPICA project has helped me understand in a broader perspective what it entails to make graduates competent enough to become employable as well as create employment by themselves”.
Regarding the visibility of skills, lecturers felt enthusiastic with the idea that students show their achievements and productions to employers or other people, like potential clients. One of the lecturers expressed: “I encouraged students to set their profiles to a public setting. They are proud and full of esteem for their work, so they want to share it”. Moreover, they stated that most students expressed eagerness to present their profiles to others as they were proud of the micro-credentials earned and their achievements. However, the lecturers expressed the need to give the student control over their own ePortfolios and over what they share and with whom.
Methodology uptake
Despite some difficulties experienced during the pilot due to Covid-19, lecturers positively valued the overall experience and expressed their willingness to integrate the micro-credentialing methodology into their teaching practices (87.5%, n = 7). However, they highlighted the need for wide adoption of a competency-based approach and for transformation of the curriculum to close the gap between graduate readiness and labour market expectations: “My participation made me realize that the University curriculum needs transformation in order to prepare students so that they are competent for the labour market at the time they graduate”. They also recommended extending the implementation of the micro-credentialing process to other academic programs within the university to reach a larger audience and benefit all university students.
Student perspective
Pedagogical principles
Students have an even better perception than lecturers of the pedagogical dimensions of the micro-credentialing process (see Fig. 4). They also presented greater homogeneity in their opinions: inquiry (98%, n = 49 students), reflection (100%) and integration (92%, n = 46) present similar scores.
The qualitative analysis revealed that the micro-credentialing of their employability skills helped them develop a new perspective regarding their educational journey. Some students explained that identifying relevant situations and evidence demonstrating their skills raised their awareness of the usefulness of these skills within and outside their academic contexts. The formative feedback was perceived as essential to foster reflection on students’ own achievements and hence on their level of performance, progress over time and aspects to improve: “Due to the feedback I got from my teachers and the employer about my evidence in the ePortfolio, I realized that there are some aspects that I needed to improve in my professional development”. However, the students commented that the production of the reflective narrative linking situations and evidence to provide an integrated account of their skills proficiency was a challenging task. They appreciated the availability of the assessment rubrics at all times as a conceptual support tool for this purpose, but also as an element that contributed to the development of their self-assessment skills. Similarly, they reported that the STAR method encouraged them to identify connections between skills, tasks, actions and the results attained, improving their awareness of the skills developed in a variety of situations.
Most of the students also stated that the integrative process not only helped develop their awareness but also improved their skills (e.g. writing and editing, self-assessment, digital and analytical skills, self-presentation, self-directed learning): “I’ve gained not only writing and editing skills, but also I’ve learned how to express myself, improving my self-esteem and confidence”. Some students commented that they increasingly improved their performance by repeating the same process for the demonstration of each skill. The exercise of communicating the skills first to lecturers and then to employers also helped students to better understand what each target group was expecting from them.
Students also agreed with lecturers that the most highly valued aspect is the outcomes assessment (94%, n = 47) since it provided them with a clear view of their skills development: “it is an appropriate strategy to assess the level of development of my skills in a progressive way and throughout the whole program”.
Employability enhancement
The micro-credentialing process stands out among students for its potential to enhance employability. Some of them concurred that the badges awarded by the lecturers make skills not explicitly mentioned in the curriculum visible along with their level of development, and that this increases job opportunities. In addition, this process entails linking academic and professional sectors and hence putting recruiters in contact with possible candidates. This connection along with the option to interact with employers and receive their appraisal is perceived as an opportunity to understand what they are expecting and, therefore, to fine-tune the way they present their profiles.
According to most of the students, the process also provided them with the opportunity to showcase the skills developed through curricular or extracurricular activities and to connect them with real evidence and certificates demonstrating their achievements. The availability of this information makes the process behind the badge transparent for employers who have to make hiring decisions and increases trust and confidence among stakeholders.
Moreover, digital badges and the ePortfolio, by making skills visible, also provide added value to the diploma that could increase opportunities to find a job: “I think that the use of the ePortfolio adds value to my diploma as I attain more skills and also present real up-to-date evidence that never leaves questions hanging about the truth and in this way, this adds more employability opportunities on my scale”. Some of the students also claimed that the ePortfolio is a powerful tool to be used together with the CV as it provides the job applicant with an advantage over other candidates. The potential to enhance employability is also attributed to the use of the STAR method, which is perceived as a useful strategy to improve performance in job interviews.
Besides this, most of the students stated that they felt comfortable sharing digital badges and the related evidence with others as this exercise also increases their visibility. The majority of them shared their ePortfolio with lecturers, tutors and employers but also with colleagues, family and friends. Conversely, a few of them, while comfortable with showcasing badges, prefer to limit their sharing of their ePortfolios and evidence to lecturers, and are very cautious about what they show to others.
Methodology uptake
Students expressed positive attitudes regarding the micro-credentialing process and the ePortfolio and stated their intention to use them in the future (92%, n = 46).
Among the benefits they perceived is the opportunity to develop employability skills relevant to the workplace and to complement the traditional curriculum vitae (CV) with micro-credentials. However, they pointed to two additional aspects that affect successful adoption of the methodology: wider use in different programs and courses and ePortfolio ownership. Student control over their ePortfolios once their studies are finished will play a significant role in ensuring their badges remain linked to the supporting evidence and in helping them build their lifelong learning and career on a centralized platform. Likewise, some students proposed that in order to capture a comprehensive overview of the skills to be showcased to employers, their assessment should be applied throughout the whole programme.
Additionally, the students stated that they would appreciate it if employers were involved earlier in the programme as the interaction with them is key to improve their performance: “I am of the view that employers should be in the system from the beginning of the program so as to enable students to be guided from the very beginning and improve their confidence”.
Cooperation with peers should also be enhanced. Another suggestion that emerged that would facilitate uptake of the solution is to involve students who already know the micro-credentialing process in the training of their peers. A few students also revealed their intention to keep using it in the future thanks to the opportunity it offers to work from home, given the relatively low bandwidth needed. For this reason, the implementation of this methodology should be recommended to other African universities, especially after the pandemic, which forced students to move all their activities online.
Employer perspective
Pedagogical principles
The employers’ overall perception of the students’ ePortfolios, understood as intentionally organized collections of artefacts, showed that the ways in which students expressed themselves and presented their profiles were useful both for getting to know the candidate’s profile (81.8%, n = 9 employers) and for verifying how skills had been developed (81.8%, n = 9). Asked also about the linking of badges to supporting evidence and the video testimony, the results also reflected high scores (see Fig. 5), in line with the overall opinion of the ePortfolio, thus reinforcing the relevance of each element in building a trustworthy portrait of the candidates’ capacities (90.9%, n = 10).
The qualitative analysis provides further information regarding employer perceptions of the experience, which is seen as an innovation that enables students not only to learn but also to better prepare themselves for the labour market. Through the appraisal of students’ profiles, they observed that the students presented significant situations revealing their capacity to perform complex tasks and that the evidence was clearly explained, providing credibility to the earned micro-credentials. In addition, some of them saw the experience as an opportunity for students to improve their network of contacts, reflect on a personal development plan and career goals, and enhance the skills required by the labour market: “It has been a good and inspiring experience for the students, in a way that it has given room to the learners to build their employability skills in communication and interpersonal skills, teamwork and problem-solving skills and with this, I believe the students can be in a position to be prepared for the job market and also equipped with the necessary skills for it”.
Some employers mentioned that they had been pleasantly impressed by the students’ engagement in the process, their great sense of responsibility in the presentation of the contents and their enthusiasm for the feedback received.
Employability enhancement
Most employers consider that the micro-credential process and the ePortfolio provide new employability opportunities for graduates. The award of badges is perceived as a viable solution for showcasing skills that help differentiate and identify the potential best graduates for a job. Some of them commented that the badges and the attached evidence provide a clear view of the candidate’s skills, owing also to the availability of rich information that complements what is reported in a traditional curriculum vitae: “the ePortfolio provides practical evidence that supports students’ resumes/CV”. Moreover, some of them underlined that it meets their need to see more than the academic achievements reported in transcripts and résumés. Through the ePortfolio they can access a broad range of evidence that links university learning with extracurricular and work-related experiences that are of particular value in hiring processes. This also increases the reliability of the process as “the recruitment is based on verifiable information about the candidates”.
Most of the employers also agree that digital badges and the ePortfolio provide some advantages in the selection process over other methods. They make it easier to identify the strengths of a candidate and thus easily match organizational goals and job requirements with the graduate’s skills, qualifications, talents and personal interests. To sum up, employers recognise the potential of the EPICA solution to support job interviews and to simplify the recruitment of new candidates: “I believe this is the best system and it should be adopted and implemented because it clearly illustrates someone’s skills. All the students’ ePortfolios I have appraised clearly illustrated their problem-solving, communication, and teamwork skills”. According to them, the use of the STAR method also contributes to the capture of the skills of a job seeker as applied to a real context.
Additionally, some of them acknowledged the benefit of the digitalization of achievements and credentials. Making relevant information digitally available enhances graduates’ visibility and provides opportunities for students to be noticed by companies looking for candidates for a given position.
Methodology uptake
Most employers saw a benefit in the use of badges attached to evidence within an ePortfolio as part of the hiring process: 81.8% said they would use it always or frequently. Although this is only a picture of employers who were inclined to participate in the pilot, their comments on the subject provide insights into their view of the EPICA solution. Their participation in the pilot was seen as having a positive impact on their efficiency, in terms of productivity and the time spent on exploration of students’ profiles, and their effectiveness, due to improved performance in the identification of the best candidates based on a clear understanding of their characteristics and documented previous experience. The easy access to rich profiles in digital form makes it possible to foresee adoption: “(…) with the ePortfolio available online employers can find online students’ credentials for a particular position more easily than with a traditional CV”.
Discussion
The analysis of each stakeholder perspective provides substantive information on the implementation of the ESMC methodology and how it can lead to curricular transformation and provide students with increased opportunities for employability. This section presents a general overview looking at how this pedagogical innovation raises students’ awareness of their actual capacities, and improves the recognition of these achievements on the basis of a formal assessment procedure, and their acknowledgement by employers.
ESMC as a pedagogical innovation that supports the transition to the workplace
This pilot experience shows the ESMC methodology to be a promising approach for supporting students in their transition from academia to the workplace. The employability skills micro-credentialing methodology and the involvement of employers in the students’ academic journey are perceived in general as a challenging endeavour that entails rethinking of educational practices, academic curricula, and lecturers’ professional development.
With regard to the application of the ESMC methodology, the lecturers identified three main actions that challenged their practice, the first being the focus on employability skills, the second a special attention to a program view where their course is closely interlinked with other courses, and the third, the integrated approach connecting the students’ educational experiences within and outside the curriculum. This experience also enabled them to broaden their understanding of the relationship between HE and the business sector.
The students reported similar opinions in relation to the methodology that enabled them to create connections between courses and non-curricular experiences, and identify value, as is the intention of the pedagogical dimension of the Catalyst Framework (Eynon & Gambino, 2017), in terms of the mastery of their employability skills. This inquiry and reflection process, and the effort in shaping the way they introduce themselves and present their profiles to specific targets (Tomasson Goodwin & Lithgow, 2018), increased their awareness of what their expectations are, and enhanced their self-esteem and self-confidence in dealing with the challenges of the professional world.
The lecturers also pointed out a shift in the assessment practices implemented during the pilot. This change was mainly driven by the fact that the students’ achievements were being considered across the programme and by the use of common criteria for the assessment of the learning outcomes. In line with Oliver (2019), both lecturers and employers emphasized the positive role of continuous evaluation, feedback, and interaction with students, although they also underline that this approach could be challenging when dealing with a large number of students. The feedback received from lecturers and employers is also perceived by the students as especially useful to improve their ability to identify and communicate their skills to specific target groups.
ESMC fostering employability skills awareness
The students emphasized that the articulation of their employability skills has improved their awareness regarding key moments throughout the curriculum and other contexts where they have been confronted with situations requiring the use of their skills. They also highlighted that the interaction with lecturers and particularly with employers helped them identify current expectations in the labour market more clearly and increased their understanding of how employers might appraise their profile as highlighted by Tomasson Goodwin et al. (2019). This strategic knowledge is perceived as especially useful when preparing for job applications, particularly in tailoring their ePortfolio and in developing bold arguments regarding their readiness for work. Their awareness of their skills and their increasing ability to showcase them using the STAR method aligns with the postulate that the skills gap is best characterised as a ‘skills-articulation gap’ (Watkins & McKeown, 2018). The ESMC methodology, besides accrediting employability skills, also properly scaffolds students in closing this skills articulation gap and in gaining a deeper understanding of their readiness for the workplace.
The lecturers recognized that through their intervention, and the examination of employer participation, they were able to identify key curriculum changes to support students’ awareness, mainly related to the scaffolding of reflective learning and the provision of multiple opportunities for articulation.
All three stakeholders agreed on the capacity of the methodology to support learners in the identification of personal strengths and the development of a career plan.
Micro-credentials enhancing skills visibility, transparency and trustability
The fact that badges provide direct access to supporting evidence through the ePortfolio is in line with initiatives focused on enhancing learners’ records (Green & Parnell, 2017) and ensuring increased visibility, transparency and trustability of the recognition process. Although this perception was shared by all participants, it was especially emphasized by employers, who saw concrete advantages in the recruitment process over more traditional methods. These findings fit with the results reported by Gallagher (2018), which highlight that skills-based hiring is gaining significant interest and momentum among HR leaders. In particular, employers referred to the video testimony, where they can “see and hear” students introducing themselves, and to the clear and systematic way in which students present their awarded skills and evidence. They also pointed to the benefit of going online, facilitating access to a digital set of structured information and documentation reflecting what the student knows and can do, as advocated by Milligan and Kennedy (2017). The organized and easily navigable collection of evidence provides greater depth, helping employers see what the students’ full potential could be, and assists in matching the candidate profile to the job requirements.
Extended certification, CV and employability opportunities
The students pointed out that the badges awarded enrich their current CV providing additional relevant information regarding their capacities and, in combination with the evidence stored in the ePortfolio, help them distinguish themselves from other candidates. They can be used in addition to the traditional diploma and thus increase employment opportunities. Employers, in turn, underlined that the micro-credentials provided significant additional information on skills and experiences that are not shown on academic transcripts or traditional CVs and which are highly relevant and required when deciding on a job applicant, as pointed out by Braxton et al. (2019) and Kato et al. (2020). They add that the use of micro-credentials displayed in the form of digital badges through the ePortfolio could also modernize their hiring processes, reducing the time spent by employers in reviewing candidates’ profiles and the multiple rounds of interviews. These results suggest that the dissatisfaction of the marketplace with traditional credentials might be, at least in part, mitigated by the use of the micro-credentialing system.
Future implementation of the ESMC methodology
From a different angle, the participants’ views and interest in the adoption of the micro-credentialing methodology provide additional information regarding its potential. The lecturers expressed their willingness to integrate the methodology into their teaching practices and to consider the pedagogical principles in the design of their courses. Similarly, students expressed their intention to keep using the EPICA solution. Future adoption, however, will require further transformation toward a more focused and explicit integration of employability skills throughout the curriculum. The lecturers concurred that successful implementation should be accompanied by significant learning-centred institutional change (Tomasson Goodwin & Lithgow, 2018) involving the redesign of the curriculum at all levels, including methodologies focused on skills development, the design of authentic learning experiences, including internships, externships and similar, and innovation in teaching practices relating to the facilitation of learning and assessment.
The students, for their part, emphasized that ownership and control over their ePortfolio beyond the end of their period of study is pivotal to encouraging their use. They pointed to new opportunities after graduation for updating their profiles and making use of digital badges in other contexts, such as embedding them in professional social networks while at the same time ensuring continued access to the related supporting evidence and experiences. Employers were also keen on using the ePortfolio, and particularly pointed to micro-credentials as trustable achievements endorsed by universities, and backed up by transparent recognition processes. As also stated by Oliver (2019), trust is a crucial part of winning stakeholders’ confidence. Employers also pointed to the benefits of the EPICA solution over other conventional hiring processes which they mainly located in a more efficient mechanism for identifying potential candidates based on tangible and clear information about the students’ experiences.
Conclusion
Employability skills are of concern to HEIs and the business sector which questions graduates’ readiness to enter the labour market. This research presents the development and testing of the ESMC methodology for the assessment and recognition of employability skills through the implementation of an ePortfolio as a transition tool. The methodology is centred on the micro-credentialing process, providing university recognition of students’ employability skills and building external trust in students' full capacities on the basis of a transparent procedure and access to rich, concrete and multiple evidence.
The findings point to a conceptual change influencing the teaching practices of the lecturers taking part in the pilot, entailing mainly the integration of an ePortfolio strategy supporting outcomes-based assessment and the issuing of badges. Results also show that students have increased awareness of their own employability skills and of the expectations of the labour market. They became more competent in identifying their strengths and gained confidence by receiving formal university recognition through micro-credentials. Moreover, they improved their communication skills by developing academic and experiential accounts, contributing to the projection of a professional digital identity that may bring new opportunities for employability. The employers, for their part, valued the students’ presentation through the ePortfolio and the university-endorsed badges as access points to well organized and contextualized evidence. This opportunity, besides facilitating the match between candidates’ profiles and job requirements, is also a positive way of making the recognition process more transparent and trustable.
Future studies should benefit from reinforcements in curriculum design focused on employability, implementing the micro-credentialing methodology throughout the entire duration of the academic program and closely involving the business and administration sectors. A more transversal, but also progressive and sustained approach could improve not only students’ awareness but also promote further development of their skills.
Availability of data and materials
The data generated and analysed during this study are included in this published article and its additional information files. Additional material can be shared upon reasonable request.
Abbreviations
- CV: Curriculum vitae
- EPSPI: Electronic Portfolio Student Perspective Instrument
- ESMC: Employability skills micro-credentialing
- H2020: Horizon 2020
- HE: Higher education
- HEI: Higher education institution
- ILO: International Labour Organization
- STAR: Situation, Task, Action, Results method
- TESCEA: Transforming Employability for Social Change in East Africa
Acknowledgements
We would like to thank all partners in the EPICA project for their support in the data collection.
Funding
Our research was undertaken as part of the EPICA H2020 project, a new strategic partnership between Europe and Africa, co-funded by the Horizon 2020 Research and Innovation Programme of the European Union (Project No. 780435).
Ethics declarations
Ethics approval and consent to participate
The present study was carried out in compliance with the ethics requirements included in the project grant agreement and in the ‘ethics deliverables’, as required in all activities funded under Horizon 2020. The participation of the stakeholders in the study was approved by the legal, ethical and data protection officer appointed by the consortium who also monitored the compliance of their involvement with the ethics, data protection and privacy management plan drawn up at the beginning of the project and validated by the European Commission.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Maina, M.F., Guàrdia Ortiz, L., Mancini, F. et al. A micro-credentialing methodology for improved recognition of HE employability skills. Int J Educ Technol High Educ 19, 10 (2022). https://doi.org/10.1186/s41239-021-00315-5
At TRS Training Ltd we go further in fully understanding what it is that our employer clients want to achieve from their training. We work as their partner in delivering on that vision, developing stronger employees who work well as individuals and as part of a team.
This apprenticeship is designed for individuals who are typically the first point of contact with customers and whose role involves delivering high-quality products and services to the customers of their organisation. Their actions will influence your customers’ experience and satisfaction with your organisation.
Typical Job Roles: Customer Service – for individuals who are the first point of contact with customers, working in any sector or organisation type.
The learner must have the knowledge to be able to carry out (as a minimum) the list below:
The duration of this apprenticeship is minimum 12 months and an independent End Point Assessment must be completed at the end in order to pass. See below.
During the apprenticeship, the learner will have a dedicated trainer-assessor who will visit them within the workplace at least once per month in order to support their learning, development of competency and generation of evidence.
This will also be supported between visits by off-site information, advice, guidance, academic progress and technical competence support.
The trainer-assessor will work with the learner and the employer in order to ensure that all learning needs are being met for both parties, in order to ensure successful progression against all elements of the apprenticeship.
The learner will be required to have or achieve level 1 English and Maths and to have taken level 2 English and Maths tests prior to completion of their Apprenticeship.
An employer must be prepared to provide the learner with the opportunity to carry out work and be part of projects which will enable the learner to produce substantial evidence towards their qualification. In order to ensure the successful progression of the learner, we request that employers participate in joint reviews of the learner’s progress at regular intervals throughout the apprenticeship. This ensures continued and positive progress through the apprenticeship. It will also provide the opportunity to discuss and agree how any issues are to be resolved and how additional stretching and challenging activities can be built in.
The Level 2 Customer Service includes the following elements:
Knowledge
Skills
Behaviours/Attitude
To successfully complete the apprenticeship, the learner needs to pass an End Point Assessment. This assessment is an independent assessment which has several stages:
The assessor from the end point assessment body will then decide whether to award successful apprentices with a pass, a merit or a distinction.
Find out more: | https://www.trstrainingltd.com/our-courses/customer-service-level-2-apprenticeship-programme/ |
A Social Service Worker will help people in solving and coping with their problems. A well-drafted Social Service Worker Resume will focus on duties like –identifying clients who need help or support, assessing client’s needs and strengths, developing plans for improving client’s well being, responding to crises situation, helping clients get resources, following up with clients to make sure their situations have improved, assessing services provided to check its efficiency, and advocating clients.
To be successful in this career line, candidates have to depict the following skills and abilities – strong communication skills, plenty of resilience, the ability to be flexible and practical, good listening and observation skills; and plenty of knowledge about social work and public services. A bachelor’s degree or an associate’s degree in the field of Social Work is the most common qualification for this post.
Headline : Competent, confident, and organized Social Service Worker with considerable, diverse experience augmented by formal education in Psychology and administrative training. Seeking an opportunity in the field of communication to utilize these skills for the growth of the organization and to upgrade skills and knowledge in the communication sector.
Skills : Literacy, Numeracy, Computer Literacy, Communication, Team Worker, Leadership, Independent, Critical Thinking, Problem-Solving, Research, Information Finding, Interpersonal, Great Time Management.
Description :
Summary : Social Service Worker I with education and experience in several different fields of social work, which have provided the ability to quickly assess multiple situations. Possesses strong investigative skills, a medical background, communication skills, excellent customer service skills, organizational skills, self-motivation, and time management skills.
Skills : Social Services, Substance Abuse Counselor, Interpersonal, Great Time Management.
Description :
Headline : To become a successful professional and to work in an innovative and competitive environment. To contribute to the progress of the company through hard work, dedication and team spirit. To secure a challenging position where organizational skills, educational background, and the ability to work well with people can be put to efficient use.
Skills : Microsoft Office, Excel, Case Management, Needs Assessment, Intake Procedures
Description :
Headline : To secure a challenging position that will utilize experience and unique abilities: a high-impact leadership position requiring creative and innovative approaches to problem-solving, strategy development and the fulfillment of personal goals. To pursue a career in a field that offers a meaningful and challenging position, enables learning new things and allows for advancement.
Skills : Proficient in Word, Excel, Access, and Outlook, Discharge Planning, Resource Referral, Documentation, Progress Notes.
Description :
Objective : Junior Social Service Worker, driven to explore new areas for business development. A zest for knowledge and always eager to learn new things. To obtain a challenging position in the area of retailing and hospitality and to strive for excellence with dedication, a proactive approach, a positive attitude, and passion for work that will fully utilize logical and reasoning abilities in the best possible way for the fulfillment of personal and organizational goals.
Skills : Microsoft Office Certification. Shorthand, Literacy, Numeracy, Computer Literacy, Communication, Team Worker
Description :
Objective : Multi-faceted, efficient, and reliable Social Service Worker/In-charge with 3+ years of experience working with people and providing help and support. Proficient in standard office desktop software. Diversified skill set covering administrative support, client relations, writing, human resources and recruiting, and account management. Excellent interpersonal, phone, and digital communication skills.
Skills : Customer Service, Data Entry, Typing, Insurance Verification, Interpersonal Skills
Description :
Objective : Social Service Worker skilled in professional social work, case management, evaluations and assessments, management, training and teaching, surveys and interviews, group meetings, team leadership, and related administrative duties, including strong customer service skills. Reliable and dependable, often seeking new responsibilities within a wide range of employment areas.
Skills : Literacy, Numeracy, Computer Literacy, Communication, Team Worker, Leadership, Independent
Description :
Headline : Highly trustworthy and ethical Volunteer Social Service Worker. Self-starter with drive, initiative and a positive attitude. Capable of completing assigned work and effective at multi-tasking. Profound ability to analyze a problem and identify the solution, with strong analytical and resolving powers. A proficient, creative and energetic team player with a strong ability to handle queries and problems and resolve them.
Skills : Microsoft, Analytical Skills, Team Building, Bilingual, Computer Skills, Strong Work Ethic, Strong Communication Skills, Strong Organizational Skills.
Description :
Summary : Seeking a role that allows me to quickly and effectively connect with people in a meaningful way, with opportunities for providing education, outreach, and support to a variety of populations, working within a collaborative team that shares the desire to make the world a better place for those who are often marginalized.
Skills : Microsoft Office, Typing, Telephone Skills, Word, Workers Compensation, Human Resources, Insurance Verification, Policy Development.
Description :
Summary : Good team player with a client-centric mindset and people management skills. An articulate communicator combining strong interpersonal, negotiation, problem-solving, relationship management, analytical and decision-making capabilities. An energetic professional with experience in delivering over and beyond client expectations, looking to secure a position in a well-established organization with a creative and challenging environment.
Skills : Case Planning And Management, Client Advocacy, Community Client Relations, Community Outreach, Fundraising
Description :
Objective : To evolve into a hardworking and sincere professional, contributing to the success of the organization while enhancing knowledge and developing communication, managerial and interpersonal skills. To be a part of an organization that promotes team effort and provides opportunities for value-based growth as well as career advancement.
Skills : Donor Relations, Licensing Homes, Staff Development And Supervision, Interagency Partnership.
Description : | https://www.qwikresume.com/resume-samples/social-service-worker/ |
Provo Canyon School has long maintained a tradition of academic excellence within the residential treatment community. Accredited by the Northwest Association of Schools since 1973, we offer direct classroom instruction for students in grades 3-12.
Teachers are certified in their content area, and currently have or are working on a Special Education Licensure. We maintain a low student-to-teacher ratio in order to provide students individualized help in both core and elective classes.
Many students who come to Provo Canyon School have struggled with academic performance in a traditional school setting. We offer students the opportunity to work toward their full potential through a supportive academic environment that includes positive incentives for good grades, two week monitoring of each student’s academic progress, daily study hall time, access to individualized teacher help, and daily feedback on assignment completion. We also offer access to a library, and a career counseling center for high school students.
Provo Canyon School operates on a year-round, three semesters per year system. Providing three full semesters of coursework per year enables students who are credit deficient to work towards earning a high school diploma and provides opportunities for students to overcome learning gaps they may have encountered in their school experience.
Students attend class in a traditional setting: Monday through Friday, 5 ½ hours each day, typically enrolled in six classes and a study hall. Those enrolled in the elementary program will study subject matter in an environment similar to that of a public elementary school setting, yet with the supportive academic atmosphere already mentioned.
Provo Canyon offers a curriculum that is challenging yet attainable. Throughout the school’s history, we have emphasized the belief that students have the ability to learn, change, progress, and grow. We are thankful and proud to have played a small part in the lives of the thousands of students who have made these changes. Provo Canyon School was founded in 1971 for the purpose of effecting change in the lives of students whose problems were serious enough to warrant residential placement. Throughout its forty-plus-year history, Provo Canyon School has emphasized the belief that a student has the ability to change, progress, and grow. The school is committed to academic services that are of a high standard, individualized to the needs of each student, and that assist students in achieving their educational goals while providing a safe therapeutic setting in which to learn.
Highlights of our Academic Services
Ongoing High Quality Education:
- All teachers are certified teachers in their subject area
- All teachers are Special Education certified or are completing endorsement licensure
- 120 semester course offerings
- Year round schooling (3 semesters)
- Daily Traditional school schedule (Traditional Bell Schedule Class Rotation)
Environment that Maximizes Students talents and abilities:
- Orton-Gillingham instructional method for struggling readers
- Direct instruction with a focus on reading, writing and classroom conversation
- Project-Based and Cooperative Learning classroom experiences
- Small class sizes
- A two week grading system
- Competitive sports programs
- Student improvement recognition
- Student government opportunities
- Academic Communication with parents and staff through PowerSchool (web-based access to student progress reports)
Safe Therapeutic Environment:
- Daily homework lists and feedback to encourage students to submit assignments accurately
- Supervised Study Halls
- Teacher assistance before or after school hours
- Independent practice with teacher
- Therapists on campus and in the school area for students to speak to as needed
- Full nursing and medical staff
Students Learn:
- Peer tutoring
- Authentic learning experiences in the classroom
- School Improvement areas focus on Student Reading, Writing, & increased academic conversation
- Report cards every two weeks help students track improvement
For additional information regarding our academic services contact one of our expert admissions coordinators who will be able to answer your questions.
Other Academic Information (download): | http://www.provocanyon.com/academics/ |
The McKusick-Nathans Institute of Genetic Medicine manages seven clinics which are staffed by medical providers who are among the world’s experts in genetic conditions.
Staff who answer our main appointment line will collect your medical information and will work with our genetic counselors to determine which clinic and health care providers are the most appropriate for you.
Pediatric and Adult Genetics Clinic: visit this clinic if your doctor has asked you to see a geneticist.
Metabolic Genetics Clinic: phenylketonuria (PKU), maple syrup urine disease (MSUD) and other inborn errors of metabolism.
Lysosomal Storage Disease Center: Gaucher, Fabry, Pompe, Niemann–Pick disease, Hunter syndrome, and other diseases that affect the lysosomes. | https://www.hopkinsmedicine.org/institute-genetic-medicine/patient-care/genetics-clinics/ |
TYPE OF INFORMATION PROCESSED AND WHY IT IS PROCESSED
Browsing data.
During normal use, the information systems and software procedures required to make this website function acquire some personal data that is implicitly transmitted when using internet communication protocols. This information is not collected in order to be associated with identified users but, because of its nature, may allow users to be identified by processing and associating the data with information held by third parties. This category of data includes the IP addresses or domain names of the computers used by visitors to the website, the URI addresses of requested resources, the time of the request, the method used to submit the request to the server, the size of the file received in reply, the numerical code indicating the status of the server’s reply (successful, error, etc.) and other parameters regarding the user’s operating system and computing environment. The sole purpose of this data is to collect anonymous statistical information on how the website is used and to make sure it functions correctly, and it is deleted immediately after processing. This information may be used to determine responsibility in the event of hypothetical cybercrimes committed against the website. In all other cases, the data collected about website contacts is never stored for more than seven days.
Data supplied voluntarily by users/visitors.
If the users/visitors of this website send us their personal data to access specific services or to make requests by email, Convivio srl Unipersonale will come into possession of their email addresses and/or other personal data, which will be processed solely to meet the specific request, i.e. to provide the service required. The personal data supplied by users/visitors will only be communicated to third parties if this is strictly necessary to satisfy their requests.
Cookies.
Cookies are not used to transmit personal information; moreover, no so-called persistent cookies – i.e. user-tracking systems – of any kind are used. The use of so-called session cookies (which are not persistently stored on the user’s computer and disappear when the browser is closed) is strictly limited to transmitting the session identifiers (consisting of random numbers generated by the server) required to allow safe and efficient website browsing. This website uses session cookies in order to avoid having to employ other data-processing techniques that could be detrimental to the privacy of users’ browsing habits. Session cookies do not allow any of the user’s personal identity data to be collected.
Data processing methods.
Personal data is processed using automatic tools for the time strictly necessary to achieve the purposes it has been collected for. Specific safety measures are adopted to prevent loss of, unlawful or improper use of, and unauthorised access to, personal data.
Opting to provide personal data.
With the exception of browsing data, users/visitors are free to decide whether to provide their personal data or not. The only consequence of failing to supply such information may be the impossibility of obtaining the requested service.
Place where data is processed.
The personal data related to the abovementioned website is processed at the Convivio srl Unipersonale company headquarters in Viale de la Comina 27/D – 33170 Pordenone, at the Data Controller’s headquarters and/or at the hosting company’s and/or website management company’s headquarters. Processing is performed exclusively by the I.T. personnel of the data processing department or by occasional maintenance personnel. None of the data collected for the website service is communicated or divulgated. Personal data supplied by users to request informative material is used exclusively to provide the requested service or performance and will be communicated to third parties only if strictly necessary to achieve this purpose.
Data controller.
Mr Leandro Cappellotto (BBA) is the personal data controller.
Data subject’s rights.
The subjects to whom the personal data refers are entitled, at any time, to obtain confirmation that such data exists and to know its contents and source, to check that it is correct, or to request that it be supplemented, updated or corrected (art. 7 of Italian Legislative Decree 196/03). Under the same article, data subjects are entitled to request that any unlawfully processed data be erased, transformed into an anonymous form or blocked, and to object to the processing of their data on legitimate grounds. Requests must be sent to the Data Controller at the addresses specified above.
"Must reading for anyone who seeks a better understanding of the U.S. Supreme Court's role in race relations policy." - Choice

"Beware! Those committed to the Supreme Court as the ultimate defender of minority rights should not read Race Against the Court. Through a systematic peeling away of antimajoritarian myth, Spann reveals why the measure of relief the Court grants victims of racial injustice is determined less by the character of harm suffered by blacks than by the degree of disadvantage the relief sought will impose on whites. A truly pathbreaking work." - Derrick Bell

"As persuasive as it is bold, Race Against The Court stands as a necessary warning to a generation of progressives who have come to depend on the Supreme Court of the perils of such dependency. It joins with Bruce Ackerman's We, the People and John Brigham's Cult of the Court as the best in contemporary work on the Supreme Court." - Austin Sarat, William Nelson Cromwell Professor of Jurisprudence and Political Science, Amherst College

The controversies surrounding the nominations, confirmations, and rejections of recent Supreme Court justices, and the increasingly conservative nature of the Court, have focused attention on the Supreme Court as never before. Although the Supreme Court is commonly understood to be the guardian of minority rights against the tyranny of the majority, Race Against The Court argues that the Court has never successfully performed this function. Rather, the actual function of the Court has been to perpetuate the subordination of racial minorities by operating as an undetected agent of majoritarian preferences in the political process. In this provocative, controversial, and timely work, Girardeau Spann illustrates how the selection process for Supreme Court justices ensures that they will share the political preferences of the elite majority that runs the nation.
Customary safeguards that are designed to protect the judicial process from majoritarian predispositions, Spann contends, cannot successfully insulate judicial decisionmaking from the pervasive societal pressures that exist to discount racial minority interests. The case most often cited as the icon of Court sensitivity to minority rights, Brown v. Board of Education, has more recently served to lull minorities into believing that efforts at political self-determination are futile, fostering a seductive dependence and overreliance on the Court as the caretaker of minority rights. Race Against The Court demonstrates how the Court has centralized the law of affirmative action in a way that stymies minority efforts for meaningful political and economic gain and how it has legitimated the legal status quo in a way that causes minorities never even to question the inevitability of their subordinate social status. Spann contends that racial minorities would be better off seeking to advance their interests in the pluralist political process and proposes a novel strategy for minorities to pursue in order to extricate themselves from the seemingly inescapable grasp of Supreme Court protection. Certain to generate lively, heated debate, Race Against The Court exposes the veiled majoritarianism of the Supreme Court and the dangers of allowing the Court to formulate our national racial policy.
The present Supreme Court has been noticeably unreceptive to legal claims asserted by racial minorities. Although it is always possible to articulate nonracial motives for the Court’s civil rights decisions, the popular perception is that a politically conservative majority wishing to cut back on the protection of minority interests at majority expense now dominates the Supreme Court. In reviewing the work of the Court during its infamous 1988–89 term, U.S. LAW WEEK reported that “[a] series of civil rights decisions by a conservative majority of the U.S. Supreme Court making it easier to challenge affirmative action programs and more...
Under the traditional model of judicial review, which is traceable to John Marshall’s seminal decision in Marbury v. Madison, the function of the Supreme Court is to protect the rights of minorities who are unable to protect themselves effectively in the pluralist political process. Racial minorities have typically been thought to be among those who require Supreme Court protection because their “discrete and insular” character precludes their effective participation in the political process. Although a variety of substantive, process, and hybrid theories of judicial review have evolved as an elaboration upon the traditional Marbury model, all theories share the belief...
Despite the aspirations of the traditional model, the Supreme Court is ultimately unable to protect minorities from the tyranny of the majority. In fact, the Court is institutionally incapable of doing anything other than reflecting the very majoritarian preferences that the traditional model requires the Court to resist. Because Supreme Court justices are socialized by the same majority that determines their fitness for judicial office, they will arrive at the bench already inculcated with majoritarian values. And none of the traditional safeguards can reliably prevent those values from controlling judicial decisions. The formal safeguards of life tenure and salary protection,...
Many legal principles expressly incorporate majoritarian preferences into their substantive contents. As a result, such principles cannot be relied upon to insulate judicial decisionmaking from the desires of the majority. On the contrary, the principles themselves ensure that the will of the majority is what ultimately controls any minority claims that are subject to those principles. Surprisingly, the Supreme Court has expressly incorporated majoritarian preferences into constitutional principles, as it did in McCleskey v. Kemp, even where the effect has been to permit the majority to define the content of racial minority rights. More subtly, the Supreme Court often incorporates...
The process of principled adjudication begins with specification of the legal principles that govern proper resolution of a disputed issue. A variety of legal principles will arguably be relevant, but the Court must somehow decide which of the candidates actually apply. Selecting applicable principles is an act of loosely constrained discretion that once again creates opportunities for a judge’s personal attitudes to enter into the decision-making process. Where obviously controlling rules or precedents exist, the problem may appear to be insignificant, but in fact, serious difficulties often lurk beneath the surface of such apparent certainty. Moreover, in cases of first...
In theory, once a governing legal principle is identified, it eliminates the danger of majoritarian exploitation of minority interests because the governing principle rather than majoritarian-influenced judicial discretion will generate case outcomes. A legal principle can emanate from a constitutional provision, a statute, a regulation, or from common law precedents. But regardless of its source, proper application of a principle to the facts of a case, in accordance with the accepted tenets of logical analysis, will control the outcome of the case. Even if the principle does leave room for the exercise of some discretion, the sphere within which that...
Contemporary minority attraction to judicial review has been premised on the belief that the framers’ political safeguards against factionalism could not adequately protect the interests of racial minorities who would effectively be under-enfranchised by their discrete and insular character.¹ Moreover, any effectiveness that the structural safeguards might initially have had was further called into question by the substantial dilution of those safeguards that occurred during the New Deal.² However, reexamination of these assumptions in light of the majoritarianism inherent in judicial review suggests that whatever their defects, the political safeguards hold more promise for contemporary racial minorities than continued reliance...
Brown v. Board of Education¹ is the case typically offered as evidence of the countermajoritarian capacity of the Supreme Court. In the face of massive popular resistance, the Court not only desegregated the public schools, but also invalidated the constitutional standard adopted by Plessy v. Ferguson² that tolerated separate-but-equal public facilities. Since Brown, racial minorities have concentrated their efforts at achieving equality on the Supreme Court, because the Court has appeared to be more receptive to minority claims of right than the representative branches of government. Despite the countermajoritarian rhetoric that has been cultivated by the Brown decision, the case...
The law of affirmative action is the most significant body of law affecting contemporary race relations in the United States. The Supreme Court, however, has developed the legal doctrines that govern affirmative action in a way that adversely affects the interests of racial minorities. It has done this by insinuating itself into the political policymaking process that governs affirmative action, and by incorporating centralized rather than local standards into the regulatory framework that it has imposed upon that process. In City of Richmond v. J. A. Croson Co.,¹ the Court held that state and municipal affirmative action plans were subject...
Racial minorities in the United States have suffered centuries of brutal inequality with remarkable quiescence. Slave rebellions were rare; race riots have been few and far between; and concerted minority political action has remained largely untried. Even recent minority political victories in majority voting districts have been unsuccessful at securing true minority participation in the political process, because the victorious minority candidates have had to strip their candidacies of anything other than diluted concern for racial issues in order to make themselves acceptable to majority voters.¹ As Chapter 7 has explained, minorities have become dependent upon the Supreme Court rather...
For racial minorities, judicial review has proven to be more of a curse than a blessing. Rather than protecting racial minority interests from the tyranny of the majority, the Supreme Court has done just the opposite. It has protected the majority from claims of equality by racial minorities. During the early history of the Supreme Court, the Court was fairly explicit in its sacrifice of minority interests for majoritarian gain. Whether the Court was abandoning the Cherokee Tribe in the face of majoritarian hostility as it did in Cherokee Nation v. State of Georgia,¹ denying citizenship to blacks in gratuitously...
The following Family practice note provides comprehensive and up to date legal information covering:
The provisions regarding statements of truth in family proceedings are contained in Part 17 of the Family Procedure Rules 2010, SI 2010/2955, (FPR 2010) and in the supporting Practice Direction, FPR 2010, PD 17A, which prescribes the forms of words to be used in statements of truth.
A statement of truth must be included in a number of different documents. The deponent is required to verify that they have an honest belief in the accuracy of the content of the document, i.e. that they believe the facts stated in the document are true.
A false statement may be the subject of contempt proceedings. See: False statements of truth.
The following documents must be verified by a statement of truth:
a statement of case—this means the whole, or part, of an application form or answer (an application form means a document in which the applicant states their intention to seek a court order other than in accordance with the Part 18 procedure), except that it does not include:
an application under Article 56 of the Council Regulation (EC) No 4/2009 on jurisdiction, applicable law, recognition and enforcement of decisions and cooperation in matters relating to maintenance obligations (EU Maintenance Regulation) made on the form in Annex VI or VII to that Regulation
an application under Article 10 of the 2007 Hague Convention using the
A Quick and Dirty Guide to Renting an Apartment in Budapest – Part 2
In the first installment of this guide to renting a home in Budapest we looked at the process of actually finding the right property and ensuring that the legal and immigration aspects have been properly covered.
#1. Renting an Apartment in Budapest: Failure may cause problems at the immigration office
Now we’ll look at the terms and conditions you need to ensure are included in your lease contract:
First of all the contract must state the right of all users of the property to live there. The contract must state “and family” or mention those family members by name as users of the property. Failure to do so may cause problems later at the immigration office.
#2. Renting an Apartment in Budapest: key contract clauses
There are then key contract clauses that you should ensure are included:
1. The security deposit should be refundable and would typically be an amount of one or two months’ rent.
2. In Hungary the tenant is not expected to return the property to the landlord in the condition it was given. The law allows that normal wear and tear during a lease is acceptable and not recoverable from the security deposit. The definition of what constitutes “normal wear and tear” is quite broad, however, and disputes can easily arise when it comes to handing back the property.
3. It’s important to have a clause which states that should anything go wrong with the property it be fixed within seven calendar days. For critical losses of service such as power, water supply, heating, etc. to be addressed within 24 hours of notification by the tenant.
4. If the tenant has relocated to Budapest for work, it’s important to add what’s called a diplomatic clause to the contract. This allows the tenant to break the terms of the lease at one month’s notice in the event that their position in Hungary is terminated and they can provide proof to that end. The pain of losing your job should not be compounded by having to pay rent on a property you no longer live in.
#3. Renting an Apartment in Budapest: housing law tends to favour the tenant
The Hungarian housing law (lakástörvény) actually tends to favour the tenant, and any contract clause that contradicts the law is considered invalid. The key of course in all such contracts is to reach an agreement that ultimately avoids the need for resolution via the courts.
Finally, before moving into your new home it’s vital to document the condition of the property to avoid any misunderstandings later. Most tenants accept some small fault or imperfection when they move into a property, and to avoid being charged to fix that fault at the end of the lease it’s important to write some kind of handover protocol, and ideally to have it witnessed when both landlord and tenant sign it. Ideally you should also take photographs of every room and specifically of anything that isn’t perfect when you move in. | https://interrelo.com/a-q%C2%ADuick-and-dirty-guide-to-renting-an-apartment-in-budapest-part-2-2/ |
A novel mutation in NDUFS4 causes Leigh syndrome in an Ashkenazi Jewish family.
Leigh syndrome is a neurodegenerative disorder of infancy or childhood generally due to mutations in nuclear or mitochondrial genes involved in mitochondrial energy metabolism. We performed linkage analysis in an Ashkenazi Jewish (AJ) family without consanguinity with three affected children. Linkage to microsatellite markers D5S1969 and D5S407 led to evaluation of the complex I gene NDUFS4, in which we identified a novel homozygous c.462delA mutation that disrupts the reading frame. The resulting protein lacks a cAMP-dependent protein kinase phosphorylation site required for activation of mitochondrial respiratory chain complex I. In a random sample of 5000 healthy AJ individuals, the carrier frequency of the NDUFS4 mutation c.462delA was 1 in 1000, suggesting that it should be considered in all AJ patients with Leigh syndrome.
The following summary is from Orphanet, a European reference portal for information on rare diseases and orphan drugs.
Orpha Number: 67047
Definition
3-methylglutaconic aciduria type III (MGA III) is an organic aciduria characterised by the association of optic atrophy and choreoathetosis with 3-methylglutaconic aciduria.
Epidemiology
The vast majority of reported cases involved the Iraqi-Jewish population, in which the prevalence of the disorder has been estimated at around 1 in 10 000.
Clinical description
Onset of the optic atrophy occurs during infancy with a progressive decrease in visual acuity. The choreoathetoid movement disorder manifests later, usually within the first ten years of life. Other clinical features may include spastic paraparesis, mild ataxia and cognitive deficit, dysarthria, and nystagmus.
Etiology
MGA III is caused by mutations in the OPA3 gene (19q13.2-q13.3). The biological function of the OPA3 gene product remains to be defined but MGA III is hypothesised to be a primary mitochondrial disorder.
Diagnostic methods
Diagnosis may be suspected upon presentation with early-onset optic atrophy and choreoathetosis (particularly in individuals of Iraqi-Jewish origin) and by detection of an elevation in the levels of 3-methylglutaconic and 3-methylglutaric acid in the urine. Diagnosis can be confirmed by detection of mutations in the OPA3 gene.
Differential diagnosis
MGA type III can be distinguished from other forms of MGA (types I, II and IV; see these terms) on the basis of the clinical phenotype and, more specifically, from 3-MGA type I by the absence of an elevation in 3-hydroxyisovaleric acid levels and normal 3-methylglutaconyl-CoA hydratase activity in cultured fibroblasts. The differential diagnosis may also include Behr syndrome (see this term) and cerebral palsy.
Antenatal diagnosis
Prenatal testing is clinically available for affected families through molecular analysis of amniocytes or chorionic villus samples.
Genetic counseling
MGA III is transmitted as an autosomal recessive trait.
Management and treatment
Treatment is symptomatic only and should be managed by a multidisciplinary team.
Prognosis
The long-term prognosis remains unknown: although the disease progresses during childhood, it appears to stabilise during early adulthood.
Visit the Orphanet disease page for more resources.
Source: GARD Last updated on 05-01-20
Council is seeking valuable community input to guide the development of a new environmental sustainability strategy for Baw Baw Shire.
Environmental management and sustainability practices are constantly evolving along with the expectations of our community. As a result, the Council is undertaking a review of the Environmental Sustainability Strategy 2018 - 2022 to ensure it remains relevant and serves the needs of our growing population.
Council is currently undertaking a review of its Environmental Sustainability Strategy 2018 - 2022 to develop a new strategy that will guide Council's environmental sustainability efforts. The community is encouraged to send feedback and have their say.
Service and responsibilities in the area
Council plays a vital role in directly managing the environment within the municipality. Some of its key service and responsibility areas include:
- Maintenance of council-owned urban spaces, trees, parks, gardens and natural reserves
- Assessment and implementation of environmental planning controls under the Victorian Planning Provisions
- Waste management services
- Monitoring electricity use and generation of greenhouse gas emissions across Council owned assets
- Managing the risk of climate change on its operations and services
In areas that are not Council managed, Council partners with a number of external organisations and government authorities to ensure management practices are carried out in the most sustainable manner possible.
Managing biodiversity and environmental assets
Mayor Cr Danny Goss said, "This strategy is so important as it sets out the approach to managing Baw Baw's unique biodiversity and environmental assets for now and into the future. It's something that impacts us all. The environment essentially guides how we work, play and live here in Baw Baw Shire."
"It relates to our open spaces, our buildings, our tourism and the overall livability of the Shire. I strongly encourage our residents to review the existing strategy, fill in the survey questions and don't miss this opportunity to have your say and help shape a sustainable future for Baw Baw." Cr Danny said.
Send your feedback
To develop the new strategy, the Council is now seeking community input on the environmental matters, priorities and sustainability practices that are most important to them. This feedback will inform the draft strategy that will preserve Baw Baw's natural environment and promote the Shire as a vibrant and healthy place to live.
To review the current strategy in full and to provide your input, visit the Have Your Say section of Council's website. This round of consultation is open until 5pm Friday 15 October 2021.
Pictures from Baw Baw Shire Council website. | http://gippsland.com/News/Default.asp?guidNewsID=A86A13A64BC44579A1CC03C8BFECBD0B |
Ingredients: Protein blend [soy protein isolate, hydrolysed gelatin, whey protein concentrate, whey protein isolate, soybean nuggets (soy protein isolate, rice flour, salt)], sugars (corn syrup, glucose-fructose, fructose), chocolate-flavored coating [maltitol, fractionated palm kernel oil, modified milk ingredients, cocoa powder, cocoa powder (treated with alkali), soy lecithin], sorbitol, polydextrose, fractionated palm kernel oil, canola oil, water, natural and artificial flavors, glycerin, soy lecithin, caramel color, tocopherols blend, sucralose.
Contains: Milk and soy.
Nutrition Facts for 1 bar (41 g)
Calories: 150
Carbohydrates: 15 g
Protein: 15 g
May contain eggs, peanuts, nuts and sesame seeds. | https://www.gestionnutrition.ca/en/barre-caramel-supreme.html |
Lactic acidosis is a rare but serious side effect of metformin use. The estimated incidence is 6 cases per 100,000 patient-years (9).
How do you know if metformin causes lactic acidosis?
Signs and symptoms of biguanide-induced lactic acidosis are nonspecific and include anorexia, nausea, vomiting, altered level of consciousness, hyperpnoea, abdominal pain and thirst. Doctors should suspect lactic acidosis in patients presenting with acidosis, but without evidence of hypoperfusion or hypoxia.
Can 500mg of metformin cause lactic acidosis?
High overdose of metformin or concomitant risks may lead to lactic acidosis. Lactic acidosis is a medical emergency and must be treated in hospital. The most effective method to remove lactate and metformin is haemodialysis.
Can metformin cause high lactate levels?
Metformin, along with other drugs in the biguanide class, increases plasma lactate levels in a plasma concentration-dependent manner by inhibiting mitochondrial respiration predominantly in the liver.
Is lactic acidosis rare?
Lactic acidosis is a rare but serious metabolic condition, with a mortality rate of 50%. It classically causes a high anion gap metabolic acidosis and is divided into two types.
Is lactic acidosis fatal?
Lactic acidosis is a rare, potentially fatal metabolic condition that can occur whenever substantial tissue hypoperfusion and hypoxia exist.
How do you reverse lactic acidosis?
Increasing oxygen to the tissues and giving IV fluids are often used to reduce lactic acid levels. Lactic acidosis caused by exercising can be treated at home. Stopping what you’re doing to hydrate and rest often helps.
Can you restart metformin after lactic acidosis?
Reevaluation of eGFR 48 hours after imaging procedure is recommended and metformin can be restarted if the renal function is stable.
What are the long term effects of taking metformin?
The medication can cause more serious side effects, though these are rare. The most serious of these is lactic acidosis, a condition caused by buildup of lactic acid in the blood. This can occur if too much metformin accumulates in the blood due to chronic or acute (e.g. dehydration) kidney problems.
How fast does lactic acidosis happen?
Lactic acidosis occurs when the body produces too much lactic acid and cannot metabolize it quickly enough. The condition can be a medical emergency. The onset of lactic acidosis might be rapid and occur within minutes or hours, or gradual, happening over a period of days.
How much metformin is too much?
Your doctor may increase your dose by 500 mg every week if needed until your blood sugar is controlled. However, the dose is usually not more than 2500 mg per day.
What does lactic acid build up feel like?
When lactic acid builds up in your muscles, it can make your muscles feel fatigued or slightly sore. Other symptoms may include: nausea. vomiting. | https://diabeticdiscountdirect.com/diabetes/how-rare-is-lactic-acidosis-from-metformin.html |
Job Description:
Who We Are
Micro Focus is one of the world’s largest enterprise software providers, delivering the mission-critical software that keeps the digital world running. We combine pragmatism, discipline, and customer-centric innovation to deliver trusted, proven solutions that customers need in order to succeed in today’s rapidly evolving marketplace. That’s high tech without the drama.
High Level Role Description:
We are looking for hands-on consultants to join the Micro Focus team in partnering with key clients. You will play an integral part in the solution implementation & success of these exciting & rewarding programs of work.
The successful applicant is required to have a proven track record in the following key attributes:
Responsible for verifying and implementing the detailed technical design solution to the problem as identified by the Project/Technical Manager.
Often responsible for providing a detailed technical design for enterprise solutions.
Is often the Principal Consultant who analyzes and develops enterprise technology solutions.
Regularly leads in the technical assessment and delivery of specific technical solutions to the customer. Provides a team structure conducive to high performance, and manages the team lifecycle stages.
Coordinates implementation of new installations, designs, and migrations for technology solutions in one of the following work domains: networks, applications or platforms.
Provides advanced technical consulting and advice to others on proposal efforts, solution design, system management, tuning and modification of solutions.
Provides input to the company strategy moving forward.
Collects and determines data from appropriate sources to assist in determining customer needs and requirements.
Responds to requests for technical information from customers.
Develops customer technology solutions using various industry products and technologies.
Engages in technical problem solving across multiple technologies; often needs to develop new methods to apply to the situation.
Owns and manages knowledge sharing within a community (e.g. team, practice, or project). Ensures team members support knowledge sharing and re-use requirements of project. Contributes significant knowledge to job family community.
Proactively encourages membership and contributions of others to professional community and coaches others in area of expertise. Regularly produces internally published material such as knowledge briefs, service delivery kit components and modules, etc. Presents at multi-customer technology conferences.
Creates and supports sales activities. Manages bids, or major input into the sales lifecycle. Manages activities and provides qualitative and quantitative information for successful sales. Produces complete proposals for smaller engagements within area of expertise. Actively grows the company portfolio with existing customers through new opportunities and change management.
Impact/Scope/Complexity:
Assists with multiple customers.
Leads and/or provides expertise to functional project teams and may participate in cross-functional initiatives.
Sustained and consistent contribution at the work group level.
Medium to large projects/programs, multiple client sites and management organizations.
Must Have Skills & Experience:
NV1 clearance is requirement for the role
Experience working on ITIL/ITSM solutions in a technical consulting role, ideally with Micro Focus tool set
Demonstrable experience in a customer-facing role drawing out requirements, creating designs & delivering projects at a senior level
Excellent consultative, interpersonal communication skills, requirements analysis & design skills, including documentation & communication skills
Experience across ITIL service support processes within an organisation (Incident, Problem, Change) is essential
Solid technical requirements analysis & ability to translate into application workflow logic or configurations
Collaborate with project managers to ensure effective and efficient delivery including providing direction to team activities and facilitates information validation and decision making process
Desirable Skills:
Strong knowledge & experience of other commercially available ITSM software applications, such as BMC Remedy, CA Service Desk, Infra, FrontRange Solutions ITSM or ServiceNow, including technical activities such as:
Requirements gathering and solution design
Software installation
Configuration & customisation to design/requirements
Problem identification and resolution
ITIL v2/3 Foundations certification, ITIL v2/3 Managers/Expert certification would be advantageous
Education and Experience:
8+ years of professional experience and a Bachelor of Arts/Science or equivalent degree in computer science or related area of study; without a degree, three additional years of relevant professional experience (11+ years in total). | https://au.bebee.com/job/20210223-d672b6abc20997b4a9e2fc859baa5153 |
Since there are NO coincidences, we know that this thing that is happening in our qigong community is not an aberrant happening … it is a call to Awaken those parts of us that are still lost in blame…
Personally I have no desire to talk anybody in to or out of anything. That is not the intention of this post. I trust that we each are handling the situation of abuse in our qigong community in the way we must … and I hold a vision of a greater peace becoming possible between us all. I want to write about THAT possibility.
As I see it, we either trust the Guiding Principles to be true or we don't. We trust they mean what they teach on EVERY LEVEL, including situations like this which we find ourselves in now, or we don't.
For our qigong community and for our super-star teacher (for that he is!) it means that what is IS for a reason. The first guiding principle is based on the Law of Cause and Effect, which, among other things, teaches that there are no coincidences in life. What we experience with one another is the natural consequence of a previous cause.
For example, let's say your dog, frightened by stormy weather, snapped at the hand of a friend, who retaliated by slapping the dog, which prompted you to intervene for your pet by verbally dressing down the friend whose hand was bitten. As a result your friend turns on you in anger and resentment, prompting you to react negatively to her, and on and on and on … Of course, which side of the fence you are riding determines whether you see what's happening as a “good” or a “bad” thing. And that will change as you shift places with your friend, moving from one unhappy position to another. (Notice there is no happy place on the triangle!) The bottom line is that every happening was connected to a previous “cause,” creating a chain of cause and effect. In that same way, we are all links on a chain of events connected by the same original cause. Spiritual principles teach us that the original cause is always mental by design. In other words, what causes us to be and do what we do is the thoughts behind the action. Blaming thoughts hold us in victim consciousness and make peace impossible.
As long as we blame we are still moving around the victim triangle, from feeling victimized (Victim), to justified rage (Persecutor), to feeling the need to rescue the perceived victims (Rescuer) by pointing out to them their victim status, and initiating a few rounds of “Ain't It Awful” with them to really drive home the fact that they have been violated.
PLEASE DO NOT MISUNDERSTAND ME! I am not trying to imply that we must minimize what has happened, nor does it help to justify, or rationalize the actions of our teacher. Far from it … It's just that when we step back from blame and look at the situation through the guiding principles, we quickly realize that there is no “side” to take! What we see instead is an opportunity in our community to move past blame.
After all, we get to choose whether to split ourselves off into two categories, victims versus perpetrators, or whether to move past the need to ascribe blame toward a greater understanding instead. To insist on ascribing victim/persecutor status to individuals is to keep us going round and round on the victim triangle.
Although the reasons we are having this experience in our lives may vary from individual to individual, we are still one body with varied parts having this experience together and interpreting it in our own unique way. When we accuse others, it is always important to also look for where the accused lives within us!
It's also helpful to remember that we are vibrational beings, which means we magnetically attract the people and experiences we need to encounter that will motivate us for change most. Perhaps we attracted this teacher into our lives because we needed to experience the voice in us that identifies itself as a victim. Perhaps we need to stand up for ourselves, speak our truth, or walk away from something to take care of ourselves … Perhaps on the other hand, we tend to be the one who “takes advantage of” others … in some way … Perhaps we are being given the opportunity to make peace with the part of us that abuses ourselves!
We only know that we need to do what we do, because those are the things we find ourselves doing. OR perhaps we need to stay and experience abuse, or for some other reason … perhaps we don't see abuse at all … but whatever our Reality is, it is always based on our own thoughts, beliefs and interpretations of the situation.
There is not a single thing we encounter on our path that is not there by the Universe's design and Its desire to help us awaken to the Reality we create through the thoughts we believe. This is what we mean when we say there is NO problem. I hold no “should's” about what I'm hearing, or about what the Teacher or any of the students who are stepping forward to present their thoughts and truth “should” do. I trust that each of us are learning from this event in our lives. (I know I am!)
While shifting our perception away from blame may not alter the course of the action we feel we must take, it dramatically changes the vibrational frequency of both outcome, AND the way we feel and see it from inside.
Seeing ourselves as victims of a “sexual predator” for instance, may spiral us down on the victim triangle to hell, whereas seeing ourselves as having gotten involved with someone who uses females for his own purposes leaves us free to ask why we needed that particular experience (how do we know we needed it? because we had it – it is what is… ) It allows us to look for how we can grow from the situation, and free us up to choose differently next time: i.e. “I learned that lesson, and now must move on … or take a stand … or report abuse… ” or whatever it is we feel we must do – not from a place of victim/blame, but because it is what we must do to learn what this is happening to teach us.
But when we see ourselves as being a victim (remember we can be victimized, without ever resorting to victim consciousness!) we take that ride… we feel all the feelings and think all the thoughts that go with the belief that we are a victim of the circumstances, and we will play the part. To see the situation as something we experienced to help us see our own beliefs better and what those beliefs are attracting into our lives leaves us feeling stronger, more enlightened, and like we took an unpleasant situation and used it for our own refinement! NO BLAME – of them – of ourselves.
But it lets THEM off the hook, we think, right? It's our job to punish them? We must control their behavior? Are these things true? When we believe these things who do we become?
It's like being in a pit with a rattlesnake … we appreciate its beauty, even as we respect its bite… but we don't make their behavior about or at us! We don't need to see ourselves as the snake's victim … nor do we need to turn the snake into some evil force … it's just a snake doing what snakes do …
Teachers simply do what they do to play out their own beliefs for their own growing edge so they can see in visible form that which they believe … in other words, they are JUST LIKE US!
And if a teacher is playing out a story that mixes and mingles with our own, that is not by accident either! Perhaps our work is to call him on his behavior … it could be that we are called on to play that part. And we can trust he must have needed to experience that too … life relentlessly reflects to us the frequency we put out … this is how we come to know what our internal programming is – through playing it out with others and facing the consequences that programming brings. Perhaps, for instance, he needed to hear his own internal voice speaking to him through our harsh words … there is no right or wrong … it's all happening right on time for the purpose of evolving souls.
To move past blame we access the Observer Consciousness which does not operate from blame.
The Observer has no need to deem one person's acts “wrong,” and another person's acts “right.” It does not need to take sides – even though it may appear to do so on occasion.
The Observer in us looks at the situation through a lens that sees no blame; It sees people, not as good or bad, right or wrong, but as having the life process they came to have through the encounters and people they meet on their vibrational frequency pathway…
The Observer is not invested personally in what action is taken … It knows that people do what they do because they believe what they think, and that what they think is what causes them to feel and react the way they do, and that is what then attracts to them that which will prove them right. (so says the Reality Formula™) …
The Observer Self does not have a list of should's about how the outcome of a particular situation should go; it does not demand that others act any differently than they do. The Observer simply sees what is. It sees Reality as a growth opportunity for everyone involved.
Since Earth is where we come to live out, in three-dimensional form, the thoughts we believe, the Observer trusts that which we each attract into our lives – not to be painless – but to expand and grow us … which is truly the soul's greatest interest.
The Observer sees what is, without needing to make how others respond into our business. What any of us, including the Teacher in question, gets from this current situation, or what anyone else learns from it, is NOT our business – the Observer notes. We do not need to judge or blame to set thing right. We simply see Reality for what it is and apply what we see to refining ourselves for the greatest results.
To recapitulate, there is no remedy, no real transformation possible through Blame. Blaming another, even when we are right, will not “make” them different. But the widely accepted worldview says WE must judge, label, and punish those we blame. Nonetheless, regardless of how many systems use blame as their primary default to stop “abuse,” it does not enhance its ability to do so. Quite the opposite, in fact; the more we blame the more we will be blamed, and the more we resist the more we will be met with resistance, and the further from peace we move.
The guiding principles offer an alternative to blame that does not need to separate ourselves from the other, regardless of what they may have done, without denying their behavior or making excuses for them either.
Giving up blame does not mean to DENY or JUSTIFY abuse. It does not mean to pretend it did not happen. It does not mean we must give up our feelings about it happening …
Giving up blame simply means that we have come to ACCEPT the Reality that people do what they do, including violate others, because they believe their own unhappy thoughts. We accept, without making excuses for them, we see the repercussions their choices bring and we step out of the way of trying to intervene on those consequences. We understand that they, like us, are here to experience their path and part of that experience is to live out their beliefs so they can experience the full impact of that … whether they are making choices we approve of or not … that is not our business.
There's a cause and an effect that the Universe is after … and it helps to remember that the Universe always works FOR us (God is LOVE!).
In greatest respect and love for my qi community and the teacher who brought us all together. | https://www.lynneforrest.com/observer-self/2015/07/the-law-of-cause-effect-denies-coincidence-even-in-cases-of-abuse/ |
Accountability is an integral component of ‘empowerment’ and hence poverty reduction. Social Accountability relies on ordinary citizens and/or civil society organizations participating directly or indirectly in exacting accountability from leaders and public officials.
PeaceOpoly is a demand-driven accountability initiative designed to improve governance and transparency by empowering youth and women to play a role in their nation’s democracy. Traditionally, efforts have concentrated on improving the supply side of governance through political checks and balances and administrative rules, but these have had very limited success. PeaceOpoly strengthens the voice and capacity of citizens (including the poor, vulnerable and disabled) to directly demand greater accountability and responsiveness from public officials.
PeaceOpoly amplifies the voices of citizens and enables political leaders to listen and respond effectively, with the goal of creating a more efficient and transparent democratic government. The effectiveness and sustainability of social accountability mechanisms are improved when they are ‘institutionalized’ and when the state’s own ‘internal’ mechanisms of accountability are rendered more transparent and open to civic engagement. PeaceOpoly helps that process along by empowering its participants to take matters into their own hands.
PeaceOpoly leverages information and communication technology (ICT) to develop informed, inclusive and accountable relationships between citizens and political leaders. ICT provides government with the tools it needs to be more transparent and accountable to its citizens. PeaceOpoly draws on skills for media-making, research, writing, collaboration, problem-solving, public speaking, leadership and digital literacy.
By providing critical information on rights and entitlements and soliciting feedback from the marginalized, social accountability mechanisms provide a means to increase and aggregate the voice of disadvantaged and vulnerable groups. | http://www.peaceopoly.org/social-accountability/ |