The education and training sector has benefited greatly from digital publishing and its commitment to evolving publishing technologies. Vmags digital publishing software supports educational publishers and institutions alike in creating user-friendly online publications and enabling educators to reach their audience with cost-effective solutions. Easy editing and collaborative features also supplement the publishing tools available in the classroom.
Manage online textbooks, e-books, sports marketing materials and other kinds of educational content with accessible and collaborative online tools.
Engage students with text, video, multiple languages and other multimedia tools.
Distribute materials in a green, economical format which supports subscription-based access.
Gauge student interest levels via detailed statistics down to the page level.
Make content available through multiple mobile devices, including the iPad, iPhone, Android and other mobile platforms. | http://www.vmagsmedia.com/education/ |
The OFT report 'More competition, less waste' called on local authorities to set more flexible selection criteria, offer shorter-term contracts and create a more level playing field for in-house providers and private bidders.
The OGC report 'Improving competition and capacity planning in the municipal waste market' found a number of market characteristics which are inhibiting competition and acting as barriers to new entrants.
Foremost among these is the small number of companies which dominate, with only eight or nine responsible for around 78% of the market. With the current trend towards consolidation, it could become even harder for smaller organisations to challenge the established order.
The OGC report also found that a supplier's ability to bid effectively was dependent on a strong regional presence and the ability to utilise or expand existing infrastructure. In addition, larger organisations, with their existing regional infrastructure, are better placed to secure the necessary finance.
While securing investment may not be such a problem for international organisations like the Tiru Group, which has a reputation for successful development and operation of such facilities in other countries, for smaller companies without such a track record, this must be one of the biggest barriers.
Clear guidance needed
The Tiru Group is a relatively new entrant to the UK market - two of the issues we have found most challenging are the lack of clear information on procurement requirements, processes and timescales, and the poor co-ordination of the process.
Communication with individual authorities is usually good, but the number of organisations and channels providing information on funding, regulation, planning or just trying to offer advice can make the process unnecessarily complex, time-consuming and ultimately costly.
The lack of co-ordination between LAs over contracts can also be a problem because of the sheer number up for grabs. Our experience on this is borne out by the OGC report findings that there are more than 50 LA contracts to be awarded each year. This scale limits our ability to respond to these opportunities. For smaller companies with even more limited resources, this must make responding to more than a few opportunities at one time all but impossible.
Another issue highlighted by the report is the lack of clear policy direction. LAs need a clear steer from the Government to give them the confidence to determine their strategy. Without this clarity, there is an understandable degree of nervousness among waste management companies about committing the level of investment required for modern treatment facilities.
The OGC report makes a number of recommendations aimed at helping to improve the operation of the market. The first steps have been taken by Defra in the establishment of the waste infrastructure development programme (WIDP) which will be responsible for the delivery of the report's recommendations.
It is hoped that the publication of the revised Waste Strategy, the formation of the WIDP and the implementation of the reports' findings will help to open up the market to more new entrants like ourselves.
This is vital if the UK is to secure the level of new facilities required to meet its commitments on waste management. Beyond this, a more open and competitive market will provide greater levels of choice for LAs and help to drive up standards of delivery.
© Faversham House Group Ltd 2006. edie news articles may be copied or forwarded for individual use only. No other reproduction or distribution is permitted without prior written consent. | https://www.edie.net/library/Market-must-adapt-for-new-players-to-prosper/3708 |
No subject suffers continuous and unproductive beatings as often as the subject of economic inequality. The conventional analysis of economic inequality considers measurements of income and wealth to identify trends in inequality. We abandon this method and propose an economic theory in terms of production. We theorize that, through worker inclusion in the right to control and return, firms are more productive and experience more growth. We suppose that inequality is a secondary effect of nondemocratic production structures. Research on worker cooperatives and employee stock ownership plans (ESOPs) confirms that the right to control and return enhances firm productivity and profitability. Following the economic theory section, we evaluate unions, cooperatives, and ESOPs as mechanisms for public policy solutions. The evaluation of each leads to three potential policy options. We recommend an increase in the tax-deductible limit for firms with fewer than 500 employees and less than 250 million dollars in assets. This incentive should foster the implementation of ESOPs in smaller firms, where few currently exist and where they are most productive. The implementation of ESOPs promises more productive small businesses and a stronger economy for the United States.
We synthesize these two theories into one for policymaking. Instead of treating production generally, we maintain four economic assumptions about worker inclusiveness in terms of the right to control and return. Our first two assumptions are shown in Figure 1: first, worker inclusion increases the marginal product of labor; second, such inclusion is characterized by diminishing marginal returns. We theorize that excessive right to control or return is inefficient.
The third and fourth assumptions are shown in Figure 2. The increase in the marginal product of labor, subject to diminishing marginal returns, constitutes an increase in the supplied quantity of the product of labor. Finally, actual wages are lowered to W2, but wages and returns together constitute a value above W2.
If employees receive a portion of the firm's return without any form of control rights, there is no significantly positive impact on productivity. In this case, employees face the free-rider problem. As returns are divided equally among employees, an employee receives an equal share of the return regardless of productive contribution. If the costs of additional effort exceed the resulting share of the return, then employees lack the incentive to work. In the framework of game theory, free-ridership creates a Cournot-Nash equilibrium in which none of the employees works harder. Return rights alone do not positively impact production. Conversely, if employees receive control rights without returns, the effect on productivity proves to be less clear. Motivation to work may increase if employee autonomy is significantly enhanced. However, employees may not consider firm profitability if control is separated from financial incentive. Thus, firms should consider forms of worker inclusiveness that combine control and return rights to foster increases in productivity.
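The free-rider logic can be made concrete with a small payoff model. The sketch below is illustrative only; the team size, effort cost, and output figures are assumptions chosen for clarity, not numbers from the research cited here.

```python
# Illustrative sketch of the free-rider problem under pure return rights.
# All parameter values are assumptions chosen for illustration.

N = 10        # employees sharing the firm's return equally
OUTPUT = 100  # extra output one employee's additional effort produces
COST = 20     # that employee's personal cost of the additional effort

def payoff(works: bool, others_working: int) -> float:
    """An employee's net payoff: an equal share of total extra output,
    minus the effort cost if that employee works."""
    total_output = OUTPUT * (others_working + (1 if works else 0))
    return total_output / N - (COST if works else 0)

# Regardless of how many others work, shirking pays better whenever
# OUTPUT / N < COST (here 10 < 20), so "nobody works harder" is the
# equilibrium the text describes.
for others in (0, 5, 9):
    print(others, payoff(True, others), payoff(False, others))
```

Running the loop shows the working employee always earns 10 less than the shirking one, whatever the others do, which is exactly the no-effort equilibrium described above.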
The purpose of unions is to protect and support the occupational rights and status of employees. Through collective bargaining, a workplace negotiation process between employees and employers, workplace standards to which a firm must adhere are formed. Such standards may concern wages, occupational safety, compensation, or benefits. The general idea is that the democratic nature of collective bargaining enhances the position of the worker.
The long history of unions in the United States demonstrates their feasibility. For the sake of space and time, we do not intend to flesh out a full argument about union effects on productivity. Economist Barry Hirsch claims that unions lead to wage distortion which compresses productivity. Richard Freeman, in America Works: Critical Thoughts on the Exceptional U.S. Labor Market, argues that while unions decrease employment in exchange for higher wages, actual worker productivity does not suffer. The difference between these two conclusions lies in whether productivity is measured at the level of the firm or the individual worker. We consider unions as mechanisms that constitute some degree of worker control through collective bargaining. The reduction in productivity appears to be a decrease in total productivity caused by the decrease in employment, rather than a decrease in individual worker productivity due to increased control through collective bargaining.
Union feasibility in the United States continues to wane. In the economic sense, economic dynamism fosters an environment of instability for unions. This is particularly true in the private sector, where firms are constantly made and remade. Firms prefer alternative mechanisms to handle employee demands. The modern economy popularizes nonunion setups that harness the capabilities of human resource departments. However, these setups do not satisfy our assumptions of workplace inclusion. In the social sense, managers fight hard to feed their employees anti-union messages while silencing pro-union messages, as Richard Freeman further postulates in his book. The managerial class does not purposefully set out to proscribe union formation. Rather, in the best economic interest of the firm, managers fight hard against unions to prevent productivity being foregone at the cost of higher wages and firm sustainability.
We argue that the cost of foregone productivity generated by unions does not outweigh the benefits of increased workplace democracy and worker wages. Unions must be able to campaign and form without significant barriers imposed by management. Requiring managers to allow equal time for both pro- and anti-union messages would increase worker access to information and potentially galvanize workers to form unions in the private sector once again.
Worker cooperatives are different from unions. While unions foster worker control and right to return through the collective bargaining mechanism, cooperatives eliminate the employer-employee relationship and transfer the entirety of control and return rights to workers. Through democratic processes, member-owners of cooperatives collectively decide the economic questions of production and distribution. Comparatively, cooperatives are better business models than unions under our theory. On the productivity of cooperatives, the best evidence analyzes differences within the plywood industry in the Northwest and finds that cooperative plywood firms were around 10 percent more productive than conventional plywood firms. Cooperatives essentially turn unions upside-down: others cite evidence that cooperatives lower wages while maintaining employment levels.
If worker cooperatives are so productive, then why is the United States economy not a labor-managed system? In short, worker cooperatives are expensive. The costs to start-up and maintain cooperatives constrain the development of cooperatives. Cooperatives rarely find access to capital and investment mechanisms. Specifically, smaller cooperatives generally require $10,000-30,000 in loans for short-run costs. To incentivize and support the establishment of small cooperatives, wiping out interest on loans and easing approval would stimulate worker cooperative growth.
Employee stock ownership plans (ESOPs) are defined contribution benefit plans in which employers contribute shares of stock or cash to a trust fund. The ESOP then distributes the shares to individual employee ESOP accounts based on set criteria. When vested employees exit the company, the shares are relinquished to those employees. According to the National Center for Employee Ownership (NCEO), if there is no liquid public market for the shares, the employee sells the shares back to the company at a price set by an independent third party. The structure of an ESOP gives employees a larger financial stake in the company and, moreover, provides employees more right to control. Public companies are mandated to allow employees with ESOP shares to vote on all issues that would be addressed through a shareholder proxy vote.
Research in a co-authored book by Joseph Blasi, Richard Freeman, and Douglas Kruse, The Citizen's Share: Reducing Inequality in the 21st Century, argues that ESOPs also foster more productivity. The National Bureau of Economic Research surveyed over 40,000 employees in 14 differently sized corporations. Each respondent was given a shared capitalism score. The scores were compared with those of other workers similar in occupation, wage, tenure, gender, and age. The results showed that workers with higher shared capitalism scores were significantly more likely to remain with the firm for longer and were more willing to work productively. Using a paired-sample difference test, Kramer quantifies the effect of ESOP arrangements on productivity through an analysis of employee sales. On average, firms with ESOPs had 8.8 percent higher sales per employee, and the sales-per-employee advantage was 0.8 percentage points higher for each 100 fewer employees a firm employed. While the productivity gains are impressive, the US Government Accountability Office found that implementing an ESOP program alone will not lead to higher productivity; firms also have to work to create a more inclusive corporate culture.
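The size gradient Kramer reports can be written as a simple formula. In the sketch below, the baseline firm size against which "each 100 fewer employees" is counted is an assumption made for illustration; the summary above does not state the paper's own reference point.

```python
# Sketch of the reported ESOP sales-per-employee advantage:
# 8.8% on average, plus 0.8 percentage points per 100 fewer employees.
# BASELINE_EMPLOYEES is an assumed reference size for illustration.

BASELINE_EMPLOYEES = 1000

def esop_sales_advantage_pct(employees: int) -> float:
    """Estimated sales-per-employee advantage (percent) of an ESOP firm."""
    return 8.8 + 0.8 * (BASELINE_EMPLOYEES - employees) / 100

for n in (1000, 500, 100):
    print(f"{n} employees: {esop_sales_advantage_pct(n):.1f}% advantage")
# 1000 -> 8.8%, 500 -> 12.8%, 100 -> 16.0%
```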
NCEO reports that there are an estimated 6,717 ESOPs covering 14.1 million employees in the United States, including ESOPs at 21 of the Fortune 100 companies. While small firms see the largest productivity benefit from the implementation of ESOPs, fewer than 20 percent of businesses with fewer than 500 employees have an ESOP. The under-representation among small firms is partly due to the costs associated with establishing an ESOP: attorney fees, independent appraisal costs, trustees, and plan administration costs. ESOPs are supported through preferential tax treatment at the federal level. Employer contributions to ESOPs are tax deductible up to a cap of 25 percent of covered payroll. As many firms finance share purchases for ESOPs through debt accumulation mechanisms, employers may also deduct interest payments on the debt. As a result, leveraged ESOPs may take advantage of tax deductions for both the interest and the principal used to finance them. To encourage the creation of ESOPs within smaller firms, we suggest the tax-deductible cap be raised. Worker inclusiveness in terms of control and return is critical in small and large businesses alike. The tax-deductible benchmark increase incentivizes the creation of ESOPs within small firms and creates new opportunities for business growth and productivity in the United States.
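The mechanics of the 25-percent cap can be illustrated with a short calculation. The payroll figure and the raised cap rate below are hypothetical, chosen only to show how the proposal would change the deductible amount.

```python
# Sketch of the ESOP contribution deduction cap described above.
# The payroll figure and the proposed raised cap rate are hypothetical.

def deductible_contribution(covered_payroll: float, cap_rate: float = 0.25) -> float:
    """Maximum tax-deductible employer ESOP contribution under a payroll cap."""
    return covered_payroll * cap_rate

payroll = 2_000_000  # hypothetical small-firm covered payroll
print(deductible_contribution(payroll))        # current 25% cap -> 500,000.0
print(deductible_contribution(payroll, 0.35))  # hypothetical raised cap -> 700,000.0
```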
The combination of worker right to control and return increases the marginal product of labor and the supplied quantity of the product of labor. While wages are lowered in theory, the right to return sets actual earnings above the new wage. Furthermore, right to control and return are not mutually exclusive: one without the other may engender the free-rider dilemma. Research indicates that unions are antiquated and generally infeasible in the current political economy. While unions do not reduce the marginal product of labor per employee, there is deadweight loss associated with union wages and decreases in labor supply. Unions may be the solution for manufacturing economies, but we argue that the modern United States economy requires a different solution. The current political climate would likely reject an attempt to restrict managers.
While unions offer partial forms of worker right to control and return, worker cooperatives provide the full right to control and return. The evidence is clear that worker cooperatives are productive business models, but they are costly in the short run. The amount of capital and loans small cooperatives require to start up may currently be a problem beyond the bandwidth of public policy. An increase in the tax-deductible amount for firms with fewer than 500 employees and less than 250 million dollars in assets would help kickstart ESOPs in small businesses. The nature of economics and policymaking is to gravitate toward equilibrium, which ESOPs deliver. Policymakers must abandon the conventional methodology of wealth and income inequality, taxation, and other redistributive mechanisms. The economic and political problems of our time require bold and innovative solutions that address the fundamentals of how we relate to one another economically. | https://ussen.org/2018/03/28/considerations-of-workplace-democracy-as-a-new-business-model/ |
The royal witches of Anglion have always been bound to serve their country. But Lady Sophia Mackenzie, whose unbound magic and near claim to the crown made her a target, was forced to flee Anglion, leaving behind a dead assassin and shattered loyalties. Now she finds herself in Illvya, where the magic is everything she’s been taught to fear and the only person she can trust is her new husband, Cameron.
Sophia and Cameron must navigate a strange world of illicit temptations, dangerous threats and political intrigue, where as a royal witch Sophia is both prized and reviled.
As she begins to master her powers, the factions seeking to control Sophia close in, and her magical and emotional bonds with Cameron are pushed to the limit. To survive long enough to claim the future she seeks, she may have to choose between love and loyalty, and hope the price of her choice is one she can bear.
Other Books in "The Four Arts" | http://amairobookshelf.com/books/m-j-scott-the-forbidden-heir/ |
For the staff of the University of Hyderabad and student volunteers of Wild Lens, it was time to own up to their responsibility to ensure cleanliness on the campus.
With the motto 'Our Campus – Our Responsibility', the university sanitation staff and Wild Lens conducted a clean-up drive of the old nursery on the campus. The teams split into groups, picked up the litter and trash, and put them in bags to dispose of them.
A similar initiative under the same motto had the team taking up a drive to clean the lakes, in addition to a School of Life Sciences clean-up drive.
Wild Lens team founder Dr Ravi Jillapalli thanked Registrar P Sardar Singh, Deputy Registrar A Srinivasa Rao, Section Officer B Mallesh, Shankar Naik and their staff for making the initiative a success. The students who participated in the event included Karthik Jirra, Raghu Ghanapuram and Gowtham Bandi.
Established in 2012, Wild Lens is a voluntary team of students working for biodiversity conservation on the campus. Apart from plantation drives, their activities include clean-up drives, anti-poaching drives and wildlife rescue.
| https://archive.telanganatoday.com/clean-up-drive-at-uoh-campus |
The book Matteo Ricci and the Catholic Mission to China written by Ronnie Po-Chia Hsia at first seems to be a story about a missionary’s life and achievements. However, as one becomes more acquainted with it, a deeper meaning behind the narrative is revealed. This relates to the uniqueness of Matteo Ricci’s personality, especially in comparison with other religious figures of the time. The title also demonstrates the global scope of his activity, and this impression is complemented by the correspondence included in the book. Therefore, this work is not a mere reflection on Ricci’s deeds but the depiction of how one’s character influences their successes in life.
In the book, the author highlights the importance of this historical figure for the expansion of Catholic thought in the sixteenth century. This contributes to a description of the Church in the throes of the Counter-Reformation and depicts China in the time of the Ming dynasty (Hsia 102). In this way, a reader’s interest is enhanced not only by the significant role of Matteo Ricci in these processes but also by an insight into this historical epoch. Indeed, it is such a fascinating story of a man who managed to authentically fit into the community while pursuing his goals.
While I was reading about the adventures of Matteo Ricci, two questions kept running through my mind. They were related to the application of this man’s personality traits to modern people and the effects of globalization on missionary work as a whole. The first question is: does one’s character affect the success or failure of religious initiatives in the present-day world? The second question is: how did the perception of such projects’ positive outcomes change over time in the context of globalization? In other words, it is especially interesting to know how global societal processes affect religious endeavors and one’s personality.
Work Cited
Hsia, Ronnie Po-Chia. Matteo Ricci and the Catholic Mission to China, 1583–1610. Hackett Publishing Company, 2016. | https://studycorgi.com/matteo-ricci-and-the-catholic-mission-to-china-by-hsia/ |
This MOIST, MOIST, MOIST CHOCOLATE CAKE is our go-to for birthdays! SUPER MOIST with a FUDGY, LIGHT, TENDER CRUMB. I guarantee this cake will stay moist and fresh for days! This is our FAVORITE CHOCOLATE CAKE recipe on the site.
Man with Pan: www.manwithpan.com
Recipe type: Dessert
Cuisine: American
Ingredients
⅔ cup DARK COCOA POWDER (USE HERSHEY'S SPECIAL DARK TO GET THE DARK COLOR)
1 cup LUKEWARM BREWED COFFEE
1 pinch of CAYENNE
2 cups SUGAR
½ cup VEGETABLE OIL
3 LARGE EGGS
1 teaspoon VANILLA
1 cup BUTTERMILK
1¾ cups ALL PURPOSE FLOUR
1 teaspoon SALT
1 teaspoon BAKING POWDER
2 teaspoons BAKING SODA
butter for greasing pans
2 parchment circles, cut to size for the inside of cake pans
cocoa for dusting pans
Two 9 inch cake pans (buttered inside and dusted with cocoa powder; see note below)
FOR THE FROSTING:
1 cup DARK COCOA
4½ cups CONFECTIONERS SUGAR
¾ cup BUTTER (1½ sticks), softened to room temperature
½ cup BUTTERMILK
2 teaspoons VANILLA EXTRACT
Instructions
Preheat oven to 350 degrees
In a bowl dissolve ⅔ cup of COCOA POWDER and 1 pinch of CAYENNE with 1 cup of LUKEWARM STRONG BREWED COFFEE and set aside
In the bowl of your stand mixer using paddle attachment, cream ½ cup of VEGETABLE OIL with 2 cups of SUGAR until well combined
Add 3 EGGS and beat until light and creamy (2 minutes). Be sure to scrape down bowl as needed.
Slowly add in COFFEE/COCOA mixture, 1 cup of BUTTERMILK, 1 teaspoon of VANILLA and beat until batter is smooth. Be sure to scrape down the sides of the bowl.
In a separate bowl, sift 1¾ cups of ALL PURPOSE FLOUR, 1 teaspoon of SALT, 1 teaspoon BAKING POWDER and 2 teaspoons BAKING SODA.
Add the dry ingredients to the wet ingredients in your stand mixer and beat on LOW speed until incorporated. Scrape down the bowl and mix until combined. Do not overbeat after adding the flour.
Butter the bottom and sides of two 9 inch cake pans. Insert parchment paper and butter the top of the parchment paper then sprinkle the whole inside with cocoa powder covering the bottom and sides and shaking out the excess.
Pour batter evenly into two pans and place in center of preheated oven.
Bake for 30 to 35 minutes or until a toothpick inserted comes out clean with a small amount of cake crumbs. Crumbs are ok but wet batter is not.
Do not overbake as cake will lose moisture. Start checking for doneness at 25 minutes.
Cool pans on wire racks for 10 minutes then carefully invert out of pan and onto rack. Cool for another hour before adding any icing or frosting.
FROSTING
Sift 1 cup of DARK COCOA POWDER with 4½ cups of CONFECTIONERS SUGAR in a large bowl.
In the bowl of a stand mixer or with a hand mixer cream 1½ sticks of SOFTENED BUTTER until smooth.
Add pre-sifted SUGAR COCOA powder mixture along with ½ cup of BUTTERMILK and 2 teaspoons of VANILLA EXTRACT and beat until smooth and creamy. If frosting is too thick add additional buttermilk 1 tablespoon at a time until you reach desired consistency.
FROST THE COOLED CAKE!
ENJOY!
Notes
Be sure to prepare the cake pans with butter and cocoa powder and add the parchment circle to the bottom of the pan according to recipe instructions.
Use room temperature butter: cut the butter into slices and let it sit in a bowl for at least an hour to come to room temperature.
Be sure the coffee is lukewarm before adding to your mixer. We don't want it to curdle the eggs!
Use the best quality DARK COCOA powder to get the dark look to the cake. | https://manwithpan.com/easyrecipe-print/3599-0/ |
If you have been hurt in a car accident in Decatur or Atlanta, it's important to be very careful about what you say and do immediately after the accident. Your words and actions can have a significant effect on the car accident claim that your auto accident lawyer may need to file on your behalf. Here is a look at what not to say following a motorcycle accident or auto accident.
Speculation About Who Caused the Accident
Any speculation as to the cause of the auto accident can be used as evidence in a later personal injury claim. Do not make any guesses, admissions, declarations, or accusations about why the auto accident occurred. When an accident claim is filed, it will be the attorneys’ jobs to argue as to who was at fault and who should pay any accident-related expenses. Instead, you should immediately call the police, your insurance company, and your auto accident attorney. It is more important for you and the other driver to receive medical attention than to discuss the cause of the accident.
Apologies or Admissions of Guilt
The other driver and his personal injury lawyer may interpret an apology as an admission of guilt. While it’s only natural to want to apologize and express sympathy for the other person who was hurt in a car accident, you should avoid apologizing. Your statements may be misinterpreted or used against you later if the other party files an accident claim. When you speak to the police officer who arrives at the scene of the auto accident to file a police report, stick to the facts. Do not admit that you weren’t paying attention, didn’t follow traffic rules, or didn’t see the other driver.
Threatening or Incendiary Language
Auto accidents are very intense and stressful, and the situation can be exacerbated if one party becomes angry or threatening. To maintain the peace and defuse a tense situation, you should remain calm and polite. Do not become accusatory or loud, and do not say anything threatening or incendiary. | https://www.attorneybigalatlanta.com/blog/2015/12/what-not-to-say-after-a-car-accident/ |
To maintain group cohesion while coordinating group movements, individuals might use signals to advertise the location of a route, their intention to initiate movements, or their position at a given time. In highly mobile ...
MURCIÉLAGOS de la región del Golfo Dulce, Puntarenas, Costa Rica
BATS of the Golfo Dulce Region, Costa Rica - Field Guides
(2019-11-18)
The Golfo Dulce region is composed of primary and secondary seasonally humid forest. This guide includes representative species of the families present in the lowlands of the region (below 400 m a.s.l.), ...
Ontogeny of an interactive call-and-response system in Spix's disc-winged bats
(2020)
We investigated the ontogenetic changes of two call types, the inquiry call and the response call, which comprise an interactive communication system in Spix's disc-winged bats, Thyroptera tricolor. We documented structural ...
Kinship, association, and social complexity in bats
(2019-01)
Among mammals, bats exhibit extreme variation in sociality, with some species living largely solitary lives while others form colonies of more than a million individuals. Some tropical species form groups during the day ...
Social Calls Produced within and near the Roost in Two Species of Tent-Making Bats, Dermanura watsoni and Ectophylla alba
(2013-04)
Social animals regularly face the problem of relocating conspecifics when separated. Communication is one of the most important mechanisms facilitating group formation and cohesion. Known as contact calls, signals exchanged ...
Social communication in bats
(2018)
Bats represent one of the most diverse mammalian orders, not only in terms of species numbers, but also in their ecology and life histories. Many species are known to use ephemeral and/or unpredictable resources that require ...
Unveiling the Hidden Bat Diversity of a Neotropical Montane Forest
(2016-10)
Mountain environments, characterized by high levels of endemism, are at risk of experiencing significant biodiversity loss due to current trends in global warming. While many acknowledge their importance and vulnerability, ...
Variation in echolocation call frequencies in two species of free-tailed bats according to temperature and humidity
(2017)
Bats can actively adjust their echolocation signals to specific habitats and tasks, yet it is not known if bats also modify their calls to decrease atmospheric attenuation. Here the authors test the hypothesis that individuals ...
Mating System of the Tent-Making Bat Artibeus watsoni (Chiroptera: Phyllostomidae)
(2008-12)
Vertebrate mating systems are influenced by ecological and phylogenetic factors, and the variation observed in mating behavior is frequently attributable to the extent to which male assistance in the rearing of offspring ...
Response of a Specialist Bat to the Loss of a Critical Resource
(2011-12)
Human activities have negatively impacted many species, particularly those with unique traits that restrict their use of resources and conditions to specific habitats. Unfortunately, few studies have been able to isolate ... | http://www.kerwa.ucr.ac.cr/handle/10669/273/discover?filtertype=author&filter_relational_operator=equals&filter=Chaverri+Echandi%2C+Gloriana |
Have You Failed By Taking A Short-Term Anything Job?
Suppose you’re one of those people – and there’s a lot of them out there these days – who have some education beyond High School. You’ve planned all along on pursuing a job that makes use of that education.
However, with a widening gap of unemployment on your résumé matching your growing frustration at not working, you’ve found yourself finding the idea of just taking a job – any job – more and more appealing; something you thought you never would. There’s this nagging notion that you’ve failed though that keeps you from actually applying for work outside your field of education. So have you?
The short answer is no, you haven’t. Exhale and breathe a sigh of relief. Do that a few times and read on.
There’s a lot of common sense involved in doing exactly what you’ve contemplated and like I pointed out in the beginning, you’re one of many who are well-educated and unemployed. It is not only understandable that you’d be looking at broadening your job search at some point – perhaps where you are at the moment – it’s also a very good idea.
So how come? I mean, Employment Coaches and Counsellors often say you should stick to your career plan and never give up on what you really want. Doing anything else is just settling isn’t it? What happened to finding your passion and not letting any setbacks get in your way of going after what’s going to make you truly happy? Flipping burgers, selling clothes, walking school kids across busy intersections: these aren’t the kind of jobs you thought you’d give more than a passing glance at. Could you ever imagine you’d actually be seriously thinking of going after one of these jobs at this point having finished College or University?
The jobs we’re discussing here have been in the past called survival jobs. More and more they are also called transition jobs; work that bridges the gap of time and space between the present and a job in the future. These are typically short-term positions outside your field of training and education.
When you find yourself browsing these ads more and more and seriously thinking about actually applying, may I suggest you change your line of perception. Instead of thinking that you’ve failed; that your post-secondary education was a waste of both time and money, consider the positives of these transition jobs.
First and foremost, the income from a job – any entry-level job – will stem some financial bleeding. Admittedly while likely minimum wage, money is money and some is better than none. Perhaps more important than money however is the inclusion factor. Right now you’re outside the workforce; remember feeling that everyone has a job but you? That so many people you see from your window seem to have somewhere to go, something to do, while you sit and grow despondent, frustrated and perhaps depressed? Uh huh. Yep, getting up, showered, dressed and out the door with a purpose is always good. That routine you’ve been missing is more important than you might have thought.
Now if you’ve looked at that School Crossing Guard advertised on some Municipality’s website and scoffed at it, think again. First of all those hours; before school, at noon and late afternoon leave you two chunks of time – mid-morning and mid-afternoon – to continue your targeted job search. Of even more significance perhaps is that once you land a Crossing Guard job, even though you’re working outside, you’ve at the same time become an internal employee. Had you considered that? Yes, you’re now able to see and apply for the internal jobs with that Municipality; jobs that up until now you had no access to. Full-time jobs that pay much better and perhaps come with benefits too.
That Crossing Guard job might be one you have to take for 3 or 6 months before you’re eligible to apply for another internal job. Okay so do it. Do the job at present and do it with a positive attitude. You’ve got this job so you might as well enjoy it and keep telling yourself you’re in transition from this to your next job – the one you really want.
Remember you don’t have to add a short-term job on your résumé, but consider doing so because it does bridge a gap. In your cover letter or at an interview you can certainly state with confidence that you took the short-term job where you are working to pay the bills but you’re highly motivated to seek work in your field as this is where your passion and strong interest are.
A failure? Far from it. You’re wise enough not to let pride get in the way and perhaps it even demonstrates your belief that no job, and certainly not the people doing them, should be looked down on. Perhaps it’s helped you learn humility and an appreciation for the hard work involved which you’d previously overlooked. Perhaps too you’re actually better for the experience and will be all the more grateful for the opportunity to work in the field of your choice doing what you love.
Suddenly, you might be more attractive to your employer of choice.
Have You Failed By Taking A Short-Term Anything Job? was originally published @ Employment Counselling with Kelly Mitchell and has been syndicated with permission. | http://www.socialjusticesolutions.org/2017/08/31/failed-taking-short-term-anything-job/ |
SPINDALE (June 6, 2022) – Five aspiring chefs were awarded certificates recently by Isothermal Community College.
The Basic Culinary Skills students prepared food for the ceremony attendees to enjoy during a reception at the event.
The graduates are Corey Dover, Frieda Petty, Bob Blanton, Michael Demers, and Roger Campana.
The intensive training covered the concepts, skills, and techniques involved in volume food production for restaurant or institutional settings. Emphasis was on the development of knife skills, tool and equipment handling, and the principles of food preparation and safety.
In addition, students built their palates, learning how one taste or texture affects another while broadening their knowledge of the chemistry of food.
The course paired lecture with a great deal of hands-on training. Students focused on traditional cooking techniques as well as new and innovative methods. They also developed industry-recognized professional prep skills and explored opportunities for exciting careers in the culinary field.
The class was held in the college's culinary training facility at the Rutherford Opportunity Center in Forest City.
To reserve a spot in the next class, contact Dee Spurlin at 828-395-1416 or [email protected].
The Culinary Arts program is made possible through the generous support of partners including the Appalachian Regional Commission and Rutherford County Schools. | https://www.isothermal.edu/newscenter/2022/06/culinary-arts-students-graduate.html |
The most global phenomenon in human history has generated a generational category, the pandemials. This category should be expanded to take advantage of the opportunities ...
By Martín Padulla for staffingamericalatina
We have long been convinced that we cannot separate education from work. The knowledge economy demands an intimate relationship between both disciplines. Some of us are even working on hybridization projects between both concepts.
Development in the 21st century seems to be a consequence of the good business climate that a city, a country or a region can offer and the relevant talent it is able to produce.
Alain Dehaze, CEO of Adecco Group alerted us last year that we lose 40% of our skills every 3 years; this means that in a decade we reach obsolescence. The executive introduces a key factor in the analysis: the speed needed to achieve employability in people and competitiveness in organizations.
The need for frequent reinvention and higher levels of digital education are essential to access today’s jobs.
How to address this reality? How to rebuild a framework that was seriously affected by political decisions of strict quarantines and class suspensions? How to reverse a context of school dropouts? How to get closer to young people so badly affected by the consequences of the pandemic? Does the educational system as we know it provide the content needed to access today's jobs? Does it provide the content needed to develop in the labor market? And what about the labor market itself? Do the regulatory frameworks reflect the reality of the dynamics of 21st century work? Have they been reconverted through new sets of portable rights that allow for modern labor trajectories? Are they inclusive, or do they exclude millions of citizens in the region?
More than 80% of organizations report a skills gap and expect it to be even wider in the future. No one doubts that it is essential to work on massive skilling processes. No one in the region seems to disagree with the need to develop a strategy to mitigate the consequences of school suspensions, school dropouts, mental health problems and skills shortages that the pandemic will leave us with. This is not an education issue or a labor issue, it is a social phenomenon that transcends both disciplines and requires a comprehensive multi-stakeholder approach.
Reskilling is the formation of an entirely new set of skills that allows for a completely different role. Amazon, for example, last year began a retraining program for 100,000 workers.
Upskilling refers to the improvement of knowledge or skills, it is a concept that has to do with the deepening and specialization to assume roles of higher level of complexity.
New skilling has to do with the need for continuous learning in high-demand skills. To achieve this, obviously, it is necessary to know what those skills are and to survey them.
New skills for today’s jobs and new skills for the jobs of the future. Transversal skills (soft) and technical skills (hard). A map of knowledge applied to the productive world.
The Employability Observatory being developed in Ecuador is an important step in the right direction towards the essential transformation for the post-COVID world. Cities and countries compete for productive investments; without supporting information to operate and transform, they cannot drive one of the two key variables for development, which is to create more and better relevant talent.
What about a good business climate? As far as the structure of the labor market is concerned, we are evidently witnessing social pressure to reform. The voices that refer to the past come from there and seek to operate on a reality that no longer exists. We are witnessing some very interesting social phenomena in which the rift is generational. The new generations want to live and work in freedom and with flexibility. When asked about the way they want to work, only 5% want to work in person next year. The remaining 95% want to work 100% remotely, or 50% remotely and 50% face-to-face, or to define this flexibly. Diverse ways of working, flexible working careers, and the need for new skills training during transitions….
However, it is necessary to create new legal environments aimed at flexicurity, career development assistance under different forms of work associated with rights, to think a world of work away from informality, with public-private articulation for the training of skills based on demand and rapid access to the formal labor market.
Some analysts have called this month Striketober for the gig economy due to multiple demonstrations or strikes in different cities around the world. At the same time, in other latitudes, initiatives to rethink pension systems for a post-COVID world of work are being promoted. Meanwhile, workertechs are being developed to provide benefits to platform workers who cannot obtain them through the law. Systems creak.
The region needs to move forward on a 21st century agenda to settle the debts of the past that were compounded by decisions made during the pandemic.
Many countries in the region have legislative or presidential elections this year. Citizens have the power to elect those who can read reality, rise to the occasion and act accordingly. | https://staffingamericalatina.com/en/skilling-reskilling-upskilling-y-new-skilling/ |
Ataxia
What Is Ataxia?
People who are diagnosed with ataxia lose muscle control in their arms and legs, which may lead to a lack of balance, coordination, and possibly a disturbance in gait. Ataxia may affect the fingers, hands, arms, legs, body, speech, and even eye movements.
Ataxia is often used to describe the symptom of incoordination that may accompany infections, injuries, other diseases, and/or degenerative changes in the central nervous system. The symptom of ataxia can be caused by stroke, multiple sclerosis, tumors, alcoholism, peripheral neuropathy, metabolic disorders, and vitamin deficiencies. In these cases, treating the condition that caused the ataxia may improve it.
While the term ataxia usually describes symptoms, it also describes a group of specific degenerative diseases of the central nervous system called the hereditary and sporadic ataxias. The remainder of this article discusses these disorders.
| https://stanfordhealthcare.org/medical-conditions/brain-and-nerves/ataxia.html |
So that the CPU's PCI initialization code can address devices that are not on the main PCI bus, there has to be a mechanism that allows bridges to decide whether or not to pass Configuration cycles from their primary interface to their secondary interface. A cycle is just an address as it appears on the PCI bus. The PCI specification defines two formats for PCI Configuration addresses: Type 0 and Type 1. Type 0 PCI Configuration cycles do not contain a bus number and are interpreted by all devices as PCI configuration addresses on this PCI bus. Bits 31:11 of the Type 0 configuration cycles are treated as the device select field. One way to design a system is to have each bit select a different device. In this case bit 11 would select the PCI device in slot 0, bit 12 would select the PCI device in slot 1 and so on. Another way is to write the device's slot number directly into bits 31:11. Which mechanism is used in a system depends on the system's PCI memory controller.
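As a concrete sketch of the two formats, the helpers below build configuration addresses bit by bit. The register, function, and bus field positions follow the conventional PCI specification layout, and the one-hot device select models the per-slot scheme described above; treat the exact layout as an assumption to be checked against the specification for any real system.

```python
# Sketch of the Type 0 and Type 1 PCI configuration address formats.
# Field positions follow the conventional PCI specification layout
# (register bits 7:2, function bits 10:8, bus bits 23:16) and are
# illustrative rather than authoritative.

def type0_address(slot: int, function: int, register: int) -> int:
    """Type 0: no bus number. Bits 31:11 are the device select; here we
    model the one-hot scheme where slot N drives bit 11 + N."""
    return (1 << (11 + slot)) | ((function & 0x7) << 8) | (register & 0xFC)

def type1_address(bus: int, device: int, function: int, register: int) -> int:
    """Type 1: carries the target bus number so PCI-PCI bridges can decide
    whether to route the cycle downstream. Bits 1:0 = 01 mark it as Type 1."""
    return ((bus & 0xFF) << 16) | ((device & 0x1F) << 11) | \
           ((function & 0x7) << 8) | (register & 0xFC) | 0x1

print(hex(type0_address(slot=1, function=0, register=0x10)))          # 0x1010
print(hex(type1_address(bus=2, device=3, function=0, register=0x10)))  # 0x21811
```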
Type 1 PCI Configuration cycles contain a PCI bus number and this type of configuration cycle is ignored by all PCI devices except the PCI-PCI bridges. All of the PCI-PCI Bridges seeing Type 1 configuration cycles may choose to pass them to the PCI buses downstream of themselves. Whether the PCI-PCI Bridge ignores the Type 1 configuration cycle or passes it onto the downstream PCI bus depends on how the PCI-PCI Bridge has been configured. Every PCI-PCI bridge has a primary bus interface number and a secondary bus interface number. The primary bus interface is the one nearest the CPU and the secondary bus interface is the one furthest away. Each PCI-PCI Bridge also has a subordinate bus number and this is the maximum bus number of all the PCI buses that are bridged beyond the secondary bus interface. Or to put it another way, the subordinate bus number is the highest numbered PCI bus downstream of the PCI-PCI bridge. When the PCI-PCI bridge sees a Type 1 PCI configuration cycle it does one of the following things:

- Ignores it if the bus number specified is not between the bridge's secondary bus number and subordinate bus number (inclusive).
- Converts it to a Type 0 configuration cycle if the bus number specified matches the secondary bus number of the bridge.
- Passes it onto the secondary bus interface unchanged if the bus number specified is greater than the secondary bus number and less than or equal to the subordinate bus number.
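The bridge's decision procedure can be sketched as follows; the function and its return strings are illustrative only, not actual kernel code.

```python
# Sketch of a PCI-PCI bridge's handling of a Type 1 configuration cycle,
# using the secondary and subordinate bus numbers described above.
# Names are illustrative; this is not Linux kernel code.

def handle_type1_cycle(target_bus: int, secondary: int, subordinate: int) -> str:
    if target_bus == secondary:
        # The cycle targets a device on the bus directly behind this bridge.
        return "convert to Type 0 and issue it on the secondary bus"
    if secondary < target_bus <= subordinate:
        # The target bus is further downstream; a later bridge converts it.
        return "pass it onto the secondary bus unchanged as Type 1"
    # The target bus is not behind this bridge at all.
    return "ignore the cycle"

# Example: a bridge with secondary bus 1 and subordinate bus 3.
print(handle_type1_cycle(1, 1, 3))  # convert to Type 0
print(handle_type1_cycle(2, 1, 3))  # pass downstream as Type 1
print(handle_type1_cycle(5, 1, 3))  # ignore
```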
It is up to each individual operating system to allocate bus numbers during PCI configuration but whatever the numbering scheme used the following statement must be true for all of the PCI-PCI bridges in the system:
``All PCI busses located behind a PCI-PCI bridge must reside between the seondary bus number and the subordinate bus number (inclusive).''
If this rule is broken then the PCI-PCI Bridges will not pass and translate Type 1 PCI configuration cycles correctly and the system will fail to find and initialise the PCI devices in the system. To achieve this numbering scheme, Linux configures these special devices in a particular order. A later section describes Linux's PCI bridge and bus numbering scheme in detail, together with a worked example. | http://www.science.unitn.it/~fiorella/guidelinux/tlk/node72.html |
It seems people have confused progression with captivity driven by materialism. His desires become spirited with the understanding of this fact, and he is liberated from the desires he previously thought necessary. These rulers are best suited to rule because, for them, ruling is not imposing power but serving the people. In the allegory Plato tells the story of a man who is put on a Gnostic's path. While most of us are not familiar with the allegory of the cave, most of us have read or at least heard of the enormously popular Harry Potter series. The word prisoner refers to ourselves, arguing that we are prisoners of our own beliefs.
I completely agree with Plato and I think we can apply this logic to many equations we face in life as intelligent, moral and empathetic people. According to this, if one believes that empirical evidence should be taken as truth, then that individual is a believer of what is shadowed of the truth. Their entire lives have been based on these shadows on the wall. Behind these unfortunates is a fire, the only light in their universe. I was very naïve and wanted to believe the best about people.
In other psychology classes, a student must understand how to interpret an Excel spreadsheet used in a research study to prove the efficacy of a particular antidepressant drug. Through his use of the term and his allegory of the cave, Plato makes the strong implication that philosophers must actively seek to discover the absolute truth, rather than relying on traditional methods of contemplation and the persuasive tone of rhetoric to prove its existence. The Prince (1513) essentially lays out a how-to guide of how to obtain power and how to keep it; The Qualities of the Prince contains a list of qualities that one should appear to have while in power; this work will be used to represent the case against. This paper discusses how the internet, computers, and… Existentialism takes the human subject -- the holistic human, and the internal conditions -- as the basis and start of the conceptual way of explaining life.
The capacity to learn exists in the soul. There is a critical distinction between an accountant and a philosopher like Socrates, though. In Plato's Allegory of the Cave, the prisoners can resemble those who do not thirst for knowledge. The caves in this case represent the world of senses in which most people are trapped and imprisoned in their own thoughts. For Plato, Forms are the essential, real things which human beings may only experience through thought or imperfect representations in physical objects.
As Plato says, it is the business of the Founders of the State to urge those citizens who are capable of learning towards the light of truth, that they may later labor alongside one another amongst the prisoners, accepting honors when they are bestowed, whether they like it or not (519b). What created the shadows is the opening of the cave that faced the light. This can be seen as representing the limited knowledge gained in life before entering the illuminating environment of college. The Souls of Black Folk. Their view of reality is solely based upon this limited view of moving shadows; this is what is real to them. Plato illustrates, in The Allegory of the Cave, that humanity believes aimlessly in its beliefs, prohibiting any comprehension of the truth of its existence. Plato's story not only opened up my outlook on life, but was an interpretation of my allegory of the cave—being saved spiritually.
King did not take into consideration most of American history from the past. Moreover, most of all, why are we here and are we free to act as individuals toward the greater good? Perhaps the most discussed allegory in today's popular culture is the Allegory of the Cave. He sees it as what happens when someone is educated to the level of philosopher. A philosopher should see the inner beauty of things and understand, abstractly, the natural causes of… The discrepancy between the ideal and the real, and the difficulty of arriving at the truth through deduction and induction, is something that everyone who deals with the ethics of a profession, like accounting, must grapple with.
Sensory perception is the world of appearance, which we perceive with the help of our sensory organs. The Allegory of the Cave, however, is not the easiest image that Plato uses. The captives in the cave represent the people. Similarly, there is also another world outside the cave, but between these two worlds a wall is raised. He abandoned his political career and turned to philosophy, opening a school on the outskirts of Athens dedicated to the Socratic search for wisdom. In an ideal state, there is equality among the people because no one is superior or inferior in this world.
Any hint of anything out of the ordinary drives the head of the Dursley family wild. Plato was a Greek philosopher who used his past experiences as a playwright to help develop the necessary emotional content within his writing to elicit substantial responses. Education should lead humans out of the cave of ignorance and turn their souls towards the Truth and the Good. The point is that Cypher knows that the matrix is nothing but illusion, but he accepts this fact, or rather ignores it, and intentionally chooses to enjoy the fake pleasure of the illusory world. What Socrates fails to mention, in my own opinion, is how this allegory supports a role in the nature of education.
I did everything and anything just to fit in with everybody else. The relationship also exists in a cyclical sense. Does a person who indulges in a certain muse that is premised on a philosophy -- directly or indirectly related to it -- become a philosopher? Behind the prisoners is a fire and a raised walkway, and people along this walkway create shadows, which are projected onto the stone wall. It is not suggested that one would go back to the former state of believing the shadows to be reality, because having seen the real world outside, it would be more painful and pathetic than ever to still dare to return to the old belief in the shadows. The boys attempt to create their own civilization, but it fails when certain members of the group let their dark sides take over. Plato shows how humankind should. | http://webstreaming.com.br/allegory-of-the-cave-analysis-essays.html |
Unlike the economic and social determinants of health, which are well researched and widely accepted, cultural determinants of health and wellbeing are an emerging concept in research. The protective aspects of culture represent an important part of the Indigenous holistic view of health, and an avenue for both prevention and healing. On an academic level, measuring, defining and exploring ‘cultural determinants of health’ present many challenges: it is a complex, multidisciplinary concept that traverses epidemiology, psychology, sociology, anthropology and history. Moreover, Indigenous understandings of culture are understandably diverse, reflecting the lived experiences of different Indigenous populations, within Australia and internationally. Despite this, the need to incorporate cultural determinants into Indigenous health research is vital and has potential to provide valuable insights into how community strengths enable better health outcomes. But to what extent are our existing research methods capturing this complex, but powerful, influence?
Our presentation will reflect on the practices and experiences that shape culture and identity among Indigenous individuals, communities, nations and nationhood, and how these impact positively on health. We will discuss commonalities across different Indigenous cultures and how they can be utilised to improve health and wellbeing. We will also explore how processes of racism, assimilation and colonisation serve to undermine these relationships. We will speak to the possibilities of current research methods for exploring cultural determinants and discuss how emerging research methods are being employed to capture the core features of, and nuances within, culture. Finally, we will present some preliminary results from cross-sectional and longitudinal studies that examine aspects of Aboriginal culture, identity, health and wellbeing outcomes.
Contributors to the presentation comprise Aboriginal, Sami and non-Indigenous researchers and practitioners, who will bring a wealth of diverse personal and research experience to this discussion. We anticipate our presentation, and subsequent discussion, will inform and guide future research practice, including upcoming Indigenous-led longitudinal studies. | http://lowitja-2016.p.asnevents.com.au/days/2016-11-08/abstract/34835 |
His scientific interests lie mostly in Cognitive psychology, Neuroscience, Subliminal stimuli, Brain mapping and Consciousness. His study in Cognitive psychology is interdisciplinary in nature, drawing from both Cognition and Perception. His Subliminal stimuli research incorporates elements of Intracranial electrodes, Brain activity and meditation and Amygdala.
His Brain mapping research includes elements of Prefrontal cortex and Electroencephalography. Within one scientific family, Lionel Naccache focuses on topics pertaining to Minimally conscious state under Electroencephalography, sometimes addressing related concerns such as Mutual information and Wakefulness. As part of the same scientific study, Lionel Naccache usually deals with Consciousness, concentrating on Electrophysiology, and frequently engages with Developmental psychology and Consciousness Disorders.
The scientist’s investigation covers issues in Electroencephalography, Consciousness, Cognitive psychology, Neuroscience and Cognition. His studies deal with areas such as Minimally conscious state, Coma and Audiology as well as Electroencephalography. His Minimally conscious state research integrates issues from Wakefulness, Internal medicine and Functional connectivity.
His Consciousness research includes themes of Developmental psychology, Cognitive science and Event-related potential. His Cognitive psychology research focuses on Subliminal stimuli in particular. His Subliminal stimuli study frequently links to related topics such as Visual word form area.
Lionel Naccache mainly investigates Electroencephalography, Neuroscience, Minimally conscious state, Consciousness and Wakefulness. Lionel Naccache has researched Electroencephalography in several fields, including Cognitive psychology, Working memory, Disorders of consciousness and Intensive care unit. His Neuroimaging study in the realm of Neuroscience interacts with subjects such as Anti-NMDA receptor encephalitis.
His Consciousness study incorporates themes from Comprehension, Audiology, Level of consciousness, Cognition and Intensive care medicine. His work in the field of Cognition brings together such areas as Unconscious States and covert processing. He combines subjects such as Contingent negative variation, Subliminal stimuli, Unconscious mind, Expectancy theory and Unconscious cognition with his study of Brain activity and meditation.
Lionel Naccache spends much of his time researching Electroencephalography, Consciousness, Minimally conscious state, Neuroscience and Persistent vegetative state. His research on Electroencephalography focuses in particular on Brain activity and meditation. His Consciousness study combines topics in areas such as Mental health, Psycholinguistics and Set.
His work focuses on many connections between Minimally conscious state and other disciplines, such as Wakefulness, that overlap with his field of interest in Transcranial direct-current stimulation and Stimulation. His research on Neuroscience frequently links to adjacent areas such as Pattern recognition. His studies deal with areas such as Functional magnetic resonance imaging, Neuroimaging, Habituation and Unconsciousness as well as Cognition.
This overview was generated by a machine learning system which analysed the scientist’s body of work. If you have any feedback, you can contact us here.
Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework.
Stanislas Dehaene;Lionel Naccache.
Cognition (2001)
Conscious, preconscious, and subliminal processing: a testable taxonomy.
Stanislas Dehaene;Jean-Pierre Changeux;Lionel Naccache;Jérôme Sackur.
Trends in Cognitive Sciences (2006)
The visual word form area: spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients.
Laurent Cohen;Stanislas Dehaene;Lionel Naccache;Stéphane Lehéricy.
Brain (2000)
Imaging unconscious semantic priming
Stanislas Dehaene;Lionel Naccache;Gurvan Le Clec'H;Etienne Koechlin.
Nature (1998)
Cerebral mechanisms of word masking and unconscious repetition priming.
Stanislas Dehaene;Lionel Naccache;Laurent Cohen;Denis Le Bihan.
Nature Neuroscience (2001)
Unconscious Masked Priming Depends on Temporal Attention
Lionel Naccache;Elise Blandin;Stanislas Dehaene.
Psychological Science (2002)
The Priming Method: Imaging Unconscious Repetition Priming Reveals an Abstract Representation of Number in the Parietal Lobes
Lionel Naccache;Stanislas Dehaene.
Cerebral Cortex (2001)
Unconscious semantic priming extends to novel unseen stimuli
Lionel Naccache;Stanislas Dehaene.
Cognition (2001)
Neural signature of the conscious processing of auditory regularities
Tristan A. Bekinschtein;Stanislas Dehaene;Benjamin Rohaut;François Tadel.
Proceedings of the National Academy of Sciences of the United States of America (2009)
Converging intracranial markers of conscious access.
Raphaël Gaillard;Stanislas Dehaene;Claude Adam;Stéphane Clémenceau.
PLOS Biology (2009)
| https://research.com/u/lionel-naccache
1. Introduction {#sec1-brainsci-09-00176}
===============
Emotions are expressed and perceived in many different sensory domains, multimodally \[[@B1-brainsci-09-00176]\], with emotional information conveyed via faces, voices, odors, touch, and body posture or movement \[[@B2-brainsci-09-00176],[@B3-brainsci-09-00176],[@B4-brainsci-09-00176],[@B5-brainsci-09-00176]\]. Our ability to infer the emotional state of others, identify the potential threat they pose, and act accordingly is crucial to social interaction. While more static information conveyed by a face, such as gender or race, can be extracted by visual information alone, more dynamic information, such as emotional state, is often conveyed by a combination of emotional faces and voices. Many studies have examined emotional processing in a given sensory domain, yet few have considered faces and voices together, a more common experience, which can take advantage of multimodal processes that may allow for more optimal information processing.
1.1. Processing Emotion across the Senses {#sec1dot1-brainsci-09-00176}
-----------------------------------------
From very early on we can make use of emotional information from multiple sources \[[@B6-brainsci-09-00176]\]. For example, infants are able to discriminate emotions by 4 months of age if exposed to stimuli in two different modalities (bimodal), but, only by 5 months of age if exposed to auditory stimuli alone. Likewise, infants are able to recognize emotions by about 5 months if stimuli are bimodal, but not until 7 months if exposed to visual stimuli alone. Starting around 5 months, infants make crossmodal matches between faces and voices \[[@B7-brainsci-09-00176],[@B8-brainsci-09-00176]\], and by 6.5 months can also make use of body posture information in the absence of face cues \[[@B9-brainsci-09-00176]\]. Crossmodal matches also take into account the number of individual faces and voices, with infants, starting at 7 months, showing a looking preference for visual stimuli that match auditory stimuli in numerosity \[[@B10-brainsci-09-00176]\].
Combining behavioral and event-related potential (ERP) methods, Vogel and colleagues \[[@B8-brainsci-09-00176]\] examined the development of the "other-race bias", the tendency to better discriminate identities of your own race versus identities of a different race. The authors described a perceptual narrowing effect in behavior and brain responses. They found no effect of race on crossmodal emotional matching and no race-modulated congruency effect in neuronal activity in five-month-olds, but found such effects in nine-month-olds, who could only distinguish faces of their own race. Furthermore, seven-month-olds can discriminate between congruent (matching in emotional valence) and incongruent (non-matching in emotional valence) face/voice pairs \[[@B7-brainsci-09-00176]\], with a larger negative ERP response to incongruent versus congruent face/voice stimuli and a larger positive ERP response to congruent versus incongruent stimuli.
These studies in infants, measuring crossmodal matching of emotional stimuli and perceptual advantages in detecting and discriminating emotional information based on bimodal stimulus presentations and the congruency between stimuli, laid important groundwork for the processing of emotional information across the senses. Studies in adults have been more focused on how emotional information in one sense might influence the judgement of emotional information in another sense. To go beyond crossmodal matching or changes in the detection or discrimination of bimodal versus unimodal emotional stimuli, adaptation has been used, mostly in adults, to quantify by how much emotional information in one modality, such as audition, can bias the processing of emotional information in another modality, such as vision.
1.2. Exposure to Emotion: Perceptual Changes {#sec1dot2-brainsci-09-00176}
--------------------------------------------
A powerful tool, adaptation, has been deemed the psychophysicist's electrode and has been used to reveal the space in which faces are represented. In adaptation, repeated exposure to a stimulus downregulates neuronal firing in response to that stimulus and can yield a perceptual change, a contrastive after-effect. For example, repeated exposure to female faces can bias androgynous faces to appear more masculine \[[@B11-brainsci-09-00176]\]. Previous work has shown that many features of a face can be adapted, such as gender, ethnicity, and even emotions (for a review, see Reference \[[@B12-brainsci-09-00176]\]).
Adapting to emotional information can bias perception, producing a contrastive after-effect within a sensory modality, either visual or auditory \[[@B13-brainsci-09-00176],[@B14-brainsci-09-00176],[@B15-brainsci-09-00176],[@B16-brainsci-09-00176]\]. Repeated exposure to positive faces produces a bias to perceive neutral faces as angry, while repeated exposure to negative faces produces a bias to perceive neutral faces as happy. Complementary biases are found when perceiving neutral sounds after exposure to emotional sounds \[[@B13-brainsci-09-00176]\]. Furthermore, the representation of emotion has been shown to be supramodal, with repeated exposure to emotional information in one sensory modality transferring to yield a contrastive after-effect in another sensory modality never directly exposed \[[@B17-brainsci-09-00176],[@B18-brainsci-09-00176],[@B19-brainsci-09-00176]\].
Although emotional information can be adapted within and across the senses, faces and voices often occur simultaneously. Yet, few studies have examined if there is a perceptual advantage to presenting visual and auditory stimuli concurrently and results have been inconclusive. For example, de Gelder and Vroomen \[[@B20-brainsci-09-00176]\] found that an emotional voice, happy or sad, could bias perception of a simultaneously presented neutral face to match that of the voice. Similarly, Muller and colleagues \[[@B21-brainsci-09-00176]\] found that negative emotional sounds, e.g., screams, could bias perception of a simultaneously presented neutral face to appear more fearful, compared to neutral emotional sounds, e.g., yawns. However, Fox and Barton \[[@B22-brainsci-09-00176]\] did not find biased facial perception from emotional sounds. In a related study, using an adaptation paradigm, Wang and colleagues \[[@B19-brainsci-09-00176]\] also found no benefit, no increased adaptation, when visual and auditory stimuli were presented together and matched in emotional valence (congruent) compared to when a unimodal visual stimulus was presented in isolation, suggesting that emotional auditory information carried little weight in biasing emotional visual information.
Some discrepancies in results across studies might arise from differences in experimental paradigms. For example, adaptation paradigms may not have been optimized to adapt to emotion per se. Since adaptation effects are stronger when adapting to the same face versus different faces \[[@B22-brainsci-09-00176]\] or for easily recognized face/voice pairs, prior studies have often used few exemplar faces and voices. However, if one wants to test for interactions between visual and auditory emotional information, providing many exemplars helps assure one is not adapting to the unique configuration of features of a given face or voice, but rather to emotion. Furthermore, if only a few faces and voices are used and presented as unique pairs during adaptation, presentation of a single stimulus in one modality after adaptation might induce imagery of a stimulus in the other modality due to the associations formed during adaptation. In such a scenario, learned associations induce imagery which then might appear as a strengthening of adaptation effects for stimuli across modalities. In order to promote adaptation to emotion rather than to unique configurations of features of a given face, and to prevent induced imagery of an associated stimulus in the other modality, the current study used 30 unique faces and 15 unique crowd sounds presented at random during adaptation.
Furthermore, we used crowd sounds, where multiple voices are presented at once, as another way to ensure that unique face/voice pairs did not get formed and to ensure adaptation is to emotion, and not to characteristics of a particular voice or a particular face/voice pair. While many previous studies have used only a few exemplars of face/voice pairs, crowd stimuli can be more informative than single identities. For instance, the gaze of a group is more effective at directing attention than the gaze of an individual \[[@B23-brainsci-09-00176]\]. In situations where multiple stimuli are presented at once, it has been shown that participants extract the mean emotion of the stimuli without representing individual characteristics \[[@B24-brainsci-09-00176],[@B25-brainsci-09-00176]\]. Thus, we expected not only that participants in the current study would efficiently extract the emotional information from multiple voices without representing characteristics of individual voices, but that this information from a crowd would be more informative than information from a single identity.
1.3. Exposure to Emotion: Cortisol Changes {#sec1dot3-brainsci-09-00176}
------------------------------------------
Interestingly, not only can repeated exposure to emotional information alter perception, it can also induce changes in mood, such that exposure to positive emotional content can induce a positive mood and bias perception of faces to be more positive, while induction of a negative mood can bias perception of faces to be more negative (reviewed in Reference \[[@B26-brainsci-09-00176]\]). Furthermore, the initial mood of the participant can bias perception, such that a more positive mood at baseline can bias faces to be perceived as more positive \[[@B27-brainsci-09-00176],[@B28-brainsci-09-00176]\].
In considering how emotional information might alter a physiological marker for the stress response, particularly for negative emotional exposure, we assessed cortisol levels. Cortisol excretion is the final product of hypothalamic--pituitary--adrenocortical (HPA) axis activation in response to stress \[[@B29-brainsci-09-00176]\]. Salivary cortisol levels have been used as a non-invasive biomarker of stress response (e.g., \[[@B30-brainsci-09-00176],[@B31-brainsci-09-00176]\]). Cortisol has also been linked to attention and arousal, with higher cortisol levels correlated with increased attention \[[@B32-brainsci-09-00176]\] and have been linked to enhanced emotional face processing \[[@B33-brainsci-09-00176]\]. Furthermore, changes in cortisol have been linked to induced negative emotional state such that negative mood induction is associated with elevated cortisol \[[@B34-brainsci-09-00176],[@B35-brainsci-09-00176]\].
Although exposure to emotional information can alter stress levels, the relationship between changes in perception and changes in cortisol following emotional exposure is not well understood. Of note, repeated exposure to emotional information yields opposite effects on perception and mood. While repeated exposure to negative facial emotion biases perception to be more positive (contrastive after-effect), it biases mood to be more negative. It remains to be seen if changes in cortisol positively or negatively correlate with changes in perception. Furthermore, although many studies have investigated the effects of exposure to emotional faces, it is unclear how emotions conveyed by other senses, such as voices, may interact to bias perception and cortisol.
The current study utilized an adaptation paradigm to investigate perceptual shifts, cortisol shifts, and their correlation as a function of exposure to visual and/or auditory emotional information. Participants were exposed to angry faces with or without concurrent emotional sounds that matched (congruent) or did not match (incongruent) facial emotion. We quantified post-adaptation perceptual changes, normalized to baseline perceptual biases, and post-adaptation cortisol changes during the same exposure, normalized to baseline cortisol biases, uniquely for each participant.
In line with perceptual after-effects, we expected adaptation to negative emotional information would bias perception to be more positive, with stronger effects for congruent versus incongruent emotions. We also assessed perceptual effects post-adaptation to only visual or only auditory emotional information. These conditions provide baseline measures by which to assess differences between congruent and incongruent conditions. Namely, they can distinguish if a congruent emotion enhanced or an incongruent emotion suppressed relative to a baseline measure within a single modality. Given the results of Wang and colleagues \[[@B19-brainsci-09-00176]\], we expected the weakest effects following adaptation to only auditory emotional stimuli and expected congruent effects to be stronger and incongruent effects to be weaker than a visual only baseline.
In line with stress-induced changes in cortisol, we expected exposure to negative emotional information would decrease cortisol if in accord with perceptual effects but increase cortisol if in accord with mood effects. We expected cortisol changes to be largest for congruent emotions and weakest for only auditory emotions. Given pilot data suggesting our negative emotional stimuli were not acutely threatening and not very effective at increasing cortisol, we expected differences in the relative *decrease* in cortisol across adaptation conditions.
2. Methods {#sec2-brainsci-09-00176}
==========
2.1. Participants {#sec2dot1-brainsci-09-00176}
-----------------
A total of 97 participants, aged 18 or older, were recruited from the University of Massachusetts Boston community and contributed data. Our goal was to gather usable behavioral data in \~20 participants in each of the 4 conditions. A G\*power analysis using a medium effect size of 0.5 estimated a total sample size of 73 participants across the 4 conditions of our study. Thus, we aimed to gather data in \~80 usable participants, anticipating additional data loss from running cortisol assays. Of our 97 participants, sixteen were excluded for the following reasons: experimenter error (2); biased behavioral responses (4), where faces that were 80% happy were judged happy less than 75% of the time or faces that were 80% angry were judged happy more than 25% of the time; participant error (3), where participants failed to press the correct buttons to make their responses; or problems with cortisol measurements (7), where cortisol measures were too high or too low relative to normative measures from cortisol standards. Of the 81 participants who contributed usable behavioral data, a subset of 72 contributed usable salivary cortisol samples.
Our sample consisted of 61 females (mean age = 22.230 years; SD = 3.8747; range = 18--34) and 12 males (mean age = 24.567 years; SD = 11.0601; range = 18--54). One participant did not report their gender and seven did not report their age. Participants reported normal hearing and normal or corrected-to-normal vision, provided written informed consent, and were compensated (US) \$20, or received extra credit for an eligible undergraduate course. All experimental procedures and protocols (protocol \#2013148) were approved by the University of Massachusetts Boston Institutional Review Board and complied with the Declaration of Helsinki. Please see [Table 1](#brainsci-09-00176-t001){ref-type="table"} below for detailed demographics.
2.2. Questionnaires {#sec2dot2-brainsci-09-00176}
-------------------
Participants completed a demographics questionnaire and the Positive and Negative Affect Schedule--State Version (PANAS). The PANAS uses 20 items to assess current affective state, separated into negative (NA) and positive (PA) affect subscales, designed to be orthogonal measures \[[@B36-brainsci-09-00176]\]. Participants indicated how much they were experiencing each of the listed emotions at the present moment (after the experimental procedures were described and consent obtained, but before presenting stimuli and starting to adapt) via a 5-point Likert scale. The PANAS subscales for PA and NA were calculated to assess state-level affect before the study commenced.
2.3. Behavioral Measures {#sec2dot3-brainsci-09-00176}
------------------------
### 2.3.1. Apparatus {#sec2dot3dot1-brainsci-09-00176}
Visual stimuli were presented on a Nexus cathode-ray tube (CRT) monitor and responses were recorded via laptop keyboard button press using Matlab and the psychophysics toolbox \[[@B37-brainsci-09-00176],[@B38-brainsci-09-00176],[@B39-brainsci-09-00176]\]. Participants were seated 45 cm from the screen and positioned on a chin and forehead rest to maintain stable head position and constant viewing distance. Auditory stimuli were presented via noise-cancelling headphones (3M-Peltor headset), which helped minimize distractions from ambient sounds.
### 2.3.2. Stimuli {#sec2dot3dot2-brainsci-09-00176}
We selected 30 unique angry face images from the NimStim Face Stimulus database \[[@B40-brainsci-09-00176]\]. Only faces with validity ratings of 75% or higher for appearing happy or appearing angry were chosen. All stimuli were gray-scaled to 50% and non-facial features, such as hair, were obscured by a grey oval (see sample stimuli in [Figure 1](#brainsci-09-00176-f001){ref-type="fig"}, \[[@B41-brainsci-09-00176]\]). Overall, 30 possible faces (21 White, 3 Asian, and 6 Black) were presented during the adaptation phase of the task.
A subset of face images was morphed along an emotional continuum. Eight unique identities (4 female and 4 male faces; 5 White, 2 Asian, and 1 Black) were each morphed from a fully affective (100%) happy face to the complementary neutral face for that identity, or from a fully affective (100%) angry face to the complementary neutral. We used MorphMan software (version 4.0, STOIK Imaging, Moscow, Russia) to create morphs ranging from neutral to 10%, 20%, 40%, and 80% happy or angry. Each of the 8 identities had 9 morphs (4 angry morphs, 4 happy morphs, and 1 neutral). Overall, 72 possible faces could be presented during the *test phase*.
Participants viewed and judged 64 test faces, presented at random (4 repeats for the 80% angry and happy morphs; 8 repeats for the 10%, 20%, and 40% angry morphs, happy morphs, and neutral). All faces were 595 × 595 pixels, subtended 19.8° of visual angle, and were presented at central fixation. Participants were instructed to direct their gaze at central fixation, but eye position was not monitored.
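For concreteness, the 64-trial test schedule can be assembled as in the following minimal Matlab sketch (variable names are illustrative and not taken from the experiment code; assignment of morph levels to the 8 identities is omitted):

```matlab
% Build the 64-trial test schedule: 4 repeats of the +/-80% morphs,
% 8 repeats of every other morph level (negative = angry, positive = happy).
morphLevels = [-80 -40 -20 -10 0 10 20 40 80];
repeats     = [  4   8   8   8 8  8  8  8  4];
trials = [];
for i = 1:numel(morphLevels)
    trials = [trials; repmat(morphLevels(i), repeats(i), 1)]; %#ok<AGROW>
end
assert(numel(trials) == 64);               % sanity check: 2*4 + 7*8 = 64
trials = trials(randperm(numel(trials)));  % random presentation order
```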
Auditory stimuli were rated by an independent group of 20 UMass Boston undergraduates. Participants listened to 87 one-second sound clips of crowd sounds expressing positive (39) or negative (48) emotions, such as cheering or booing, respectively. Sound clips contained no spoken words, since words might convey different emotional valences for different participants. Sound clips were judged on a 6 point Likert scale for ratings of emotional valence (how angry, how happy, and how mocking) and overall sound quality. Only sound clips with 75% validity ratings for sounding happy or angry and having good sound quality were included. Sound clips conveying mocking emotions were excluded. Overall, 15 positive and 15 negative sounds were presented during the *adaptation phase*.
### 2.3.3. Behavioral Procedure {#sec2dot3dot3-brainsci-09-00176}
Before the experiment, all participants were familiarized with the procedures. They completed a minimum of 3 practice trials, consisting of an auditory alerting cue, followed by a blank oval (1 s), and then a question mark (1.5 s), during which time participants had to press a button to practice the timing of when to indicate their judgment. For the experiment, each participant completed one baseline and one adapt condition (see [Figure 1](#brainsci-09-00176-f001){ref-type="fig"} below), each lasting 15 min, with a 5 min break between.
During *baseline*, participants viewed a gray screen with a central fixation cross and were instructed to maintain gaze at central fixation. Following fixation (180 s), a 500 Hz auditory alerting cue was presented briefly, followed by 1 of 72 possible face morphs (1 s). The face morph was followed by a question mark (1.5 s), during which time participants had to judge, via keypress, if the face morph they had just viewed was perceived as happy or angry. Only responses made during the 1.5 s interval were included for analysis, i.e., trials where participants responded too early or too late were not considered incorrect and were excluded. After each response, a gray screen with a fixation cross was presented (8 s). Judgments from a total of 64 face morphs were assessed during baseline and used to determine each participant's point of subjective equality (PSE), the unique face judged emotionally neutral, equally likely to be perceived as happy or angry (as described below in data analysis for behavioral measures).
*Adaptation* contained the same sequence of events in time as baseline. The crucial difference was that instead of a blank fixation screen for the first 180 s (*initial adaptation*) and for the 8 s (*top-up adaptation*) following each judgment, faces were presented in 1 of 4 possible adaptation conditions. The 4 possible adaptation conditions were (1) **[Congruent]{.ul}**: visual face stimuli and auditory crowd sounds matched in emotional valence, such that angry faces were presented concurrently with negative crowd sounds (**Ac**); (2) **[Incongruent]{.ul}**: face stimuli and auditory sounds mismatched in emotional valence, such that angry faces were presented with positive crowd sounds (**Ai**); (3) **[Visual Alone]{.ul}**: angry faces were presented in isolation, with no concurrent emotional sound (**Av**); and (4) **[Auditory Alone]{.ul}**: negative auditory crowd sounds were presented in isolation, with no concurrent emotional face (**Aa**).
During *initial adaptation,* a total of 180 emotional faces (out of 30 unique identities at 100% emotional valence) and/or emotional crowd sounds (out of 15 unique sound clips) were presented (1 s each). During top-up adaptation, 8 emotional faces (100% angry) and/or emotional crowd sounds were presented (1 s each). Post-adaptation, participants judged the same face morphs as in baseline. Judgments from a total of 64 morphed test faces were assessed post-adaptation and used to determine the change in each participant's unique PSE (as described in the data analysis for behavioral measures below).
Participants were presented randomly selected faces which had been morphed along an emotional continuum from angry to happy (8 unique face identities: 4 males, 4 females). They judged each face morph as either happy or angry. Baseline started with a 3 min fixation and participants were instructed to maintain central fixation. This was followed by a beep and a face morph presented for 1 s, followed by a 1.5 s response period during which a question mark was presented and participants had to indicate if they thought the previously presented face morph was happy or angry by pressing a key on a keyboard. Baseline consisted of 64 trials. For adaptation, participants were presented the same face morphs and made the same judgments as during baseline. However, adaptation began with a 3 min exposure to 100% angry faces and each judgment was followed by an 8 s top-up exposure during which eight 100% emotional faces were presented, each for 1 s. Adaptation also consisted of 64 trials. Cortisol was collected at the start and 5 min after the end of the adaptation block. A given participant was presented with baseline followed by 1 of 4 possible adaptation conditions. Unimodal adaptation conditions included: Angry faces alone (Av) and negative sounds alone (Aa). Bimodal adaptation conditions included matched emotional valence: angry faces and negative sounds (Ac), and unmatched emotional valence: angry faces and positive sounds (Ai).
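The timing of a single test trial can be sketched with Psychtoolbox primitives as follows (a schematic only: window setup, image loading, and the response-key mapping shown here are assumptions, not the authors' code):

```matlab
% One test trial: alerting beep, 1 s face morph, 1.5 s response window.
win = Screen('OpenWindow', 0, 128);              % gray background (opened once in practice)
tex = Screen('MakeTexture', win, faceImage);     % faceImage: one 595x595 morph, loaded elsewhere
Beeper(500, 0.4, 0.1);                           % brief 500 Hz auditory alerting cue
Screen('DrawTexture', win, tex);
Screen('Flip', win);                             % face morph on
WaitSecs(1);
Screen('Flip', win);                             % face off; question mark would be drawn here
respOnset = GetSecs; response = NaN;
while GetSecs - respOnset < 1.5                  % only responses in this window count
    [keyIsDown, ~, keyCode] = KbCheck;
    if keyIsDown
        if keyCode(KbName('h')), response = 1; end   % judged happy (assumed key)
        if keyCode(KbName('a')), response = 0; end   % judged angry (assumed key)
        break
    end
end
% Trials with no keypress inside the window are excluded from analysis.
```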
### 2.3.4. Data Analysis for Behavioral Measures {#sec2dot3dot4-brainsci-09-00176}
All data were analyzed using Matlab. We used psignifit \[[@B42-brainsci-09-00176]\], which implements the maximum-likelihood method described in Reference \[[@B43-brainsci-09-00176]\], to fit the data for each participant for the baseline and post-adapt condition separately. We plotted the function such that the *x*-axis represents the emotional morph continuum and the *y*-axis represents the percentage of trials the participant responded that a given face appeared happy. Each participant's baseline data were fit with a cumulative normal to determine the face morph supporting a 50% happy response, where the subject was equally likely to judge the face as happy or angry. The face judged emotionally neutral was the unique neutral point, or PSE, for the participant and conveys the percent of emotion in a face required to perceive the face as emotionally ambiguous. The PSE is a common measurement used in previous psychophysical studies of crossmodal emotion (for example, References \[[@B13-brainsci-09-00176],[@B19-brainsci-09-00176]\]). Given our convention of plotting happy emotions to the right of 0 (positive values) and angry emotions to the left of 0 (negative values), a positive baseline PSE indicates that more positive affect is required to see a face as neutral, i.e., a negative perceptual bias. Conversely, a negative baseline PSE indicates that more negative affect is required to see a face as neutral, i.e., a positive perceptual bias. To quantify the strength of perceptual biases post-adaptation, for each adaptation condition and each participant, we quantified how judgments of the face considered emotionally neutral at baseline changed post-adaptation. Importantly, such a quantification normalizes for any biases existing at baseline. In addition to estimating the PSE as a measure of perceptual bias, we also quantified the slope of the PSE fit, an estimate of variance in the data \[[@B44-brainsci-09-00176]\].
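A stripped-down maximum-likelihood version of this fit, using a lapse-free cumulative normal in place of the full psignifit machinery, is sketched below (x, nHappy, and nTotal are illustrative variable names for the morph levels, happy-response counts, and valid trial counts per level):

```matlab
% Fit a cumulative normal to happy-judgment proportions and extract the PSE.
psychFun  = @(p, x) normcdf(x, p(1), p(2));     % p(1) = PSE, p(2) = spread
negLogLik = @(p) -sum( nHappy           .* log(max(    psychFun(p, x), eps)) + ...
                      (nTotal - nHappy) .* log(max(1 - psychFun(p, x), eps)) );
pHat  = fminsearch(negLogLik, [0, 20]);         % crude starting guess
PSE   = pHat(1);                                % morph judged equally happy/angry
slope = 1 / pHat(2);                            % steeper slope = less variable judgments
% PSE shift = PSE(post-adapt) - PSE(baseline), computed per participant,
% which normalizes the shift for any bias present at baseline.
```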
[Figure 2](#brainsci-09-00176-f002){ref-type="fig"} depicts hypothetical data and fits to illustrate predicted behavioral effects for each adaptation condition. In this hypothetical example, if the face judged neutral at baseline is judged happy on 75% of trials post-adaptation, the shift is a positive bias, as expected from adaptation to negative, angry emotions.
We expected biases in emotional perception to vary based on adaptation condition. We expected *positive* bias to be (1) strongest after congruent adaptation, where visual and auditory both conveyed *negative* emotional content, (2) moderate after adaptation to only visual *negative* emotional content, (3) weaker after incongruent adaptation, where visual emotions were *negative* but auditory emotions were *positive*, and (4) the weakest after adaptation to only auditory *negative* emotional content.
The *x*-axis depicts the emotional continuum from 80% angry to 80% happy, with the standard neutral from the NimStim database at zero. The *y*-axis shows the hypothetical percentage of happy judgments. We measured each participant's unique point of subjective equality (PSE): the face the participant judged equally likely to be happy or angry. The thick black curve reflects a cumulative normal fit to judgments of each face morph at baseline. Baseline PSE was quantified as the face morph supporting 50% happy judgments. The predicted direction and magnitude of changes in the baseline PSE (PSE shift) after adaptation to angry faces are depicted in the direction and magnitude of arrows extending from the baseline PSE. We predicted a positive perceptual bias after adapting to angry faces, a positive PSE shift, with the largest shift for congruent visual and auditory emotional stimuli (Ac), progressively weaker shifts for visual only (Av), and then for incongruent visual and auditory stimuli (Ai), with the weakest shifts for the auditory only condition (Aa).
2.4. Physiological Measures {#sec2dot4-brainsci-09-00176}
---------------------------
### 2.4.1. Quantifying Salivary Cortisol {#sec2dot4dot1-brainsci-09-00176}
Saliva samples were collected with salivettes (Sarstedt, Germany), which participants were instructed to chew gently for two minutes. Samples were stored immediately on ice before transfer for longer-term storage at −80 °C until quantification. Salivary cortisol levels were quantified using a competitive enzyme-linked immunosorbent assay (ELISA) (Enzo Life Sciences) according to manufacturer instructions. Samples were thawed on ice and centrifuged at 1600× *g* for 10 min at 4 °C. All samples from a given participant were analyzed on the same plate and samples on the same plate were selected at random from different conditions.
A micro-plate reader was used to measure optical density at 405 nm. Assay sensitivity was 56.72 pg/mL, intra-assay coefficient of variation was 7.3--10.5%, and inter-assay coefficient of variation was 8.6--13.4%. Specificity was 100% for cortisol and cross reactivity was 3.64% for progesterone and less than 1% for prednisone, testosterone, androstenedione, cortisone, and estradiol. Data from 7 known cortisol standards were fit using a 4-parameter logistic curve-fitting program (GraphPad PRISM) and concentrations of unknown cortisol samples were determined from this standard curve.
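As an illustration of the standard-curve step (standing in for the GraphPad PRISM routine; conc, od, and odUnknown are illustrative names for the standard concentrations, their optical densities, and the unknown samples' optical densities):

```matlab
% 4-parameter logistic (4PL) fit to the cortisol standards, then
% inverse-interpolation of unknown samples from their OD at 405 nm.
fourPL = @(p, c) p(4) + (p(1) - p(4)) ./ (1 + (c ./ p(3)).^p(2));  % p = [top slope EC50 bottom]
sse    = @(p) sum((od - fourPL(p, conc)).^2);
p0     = [max(od), 1, median(conc), min(od)];    % rough starting values
pHat   = fminsearch(sse, p0);
% Invert the fitted curve to read unknown concentrations off their ODs:
unknownConc = pHat(3) .* ((pHat(1) - pHat(4)) ./ (odUnknown - pHat(4)) - 1).^(1 ./ pHat(2));
```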
Cortisol samples were collected before (pre-adapt) and starting five minutes after adaptation ended (post-adapt). Given that adaptation lasted 15 min and free salivary cortisol levels should start changing once exposure to stressful stimuli begins and should peak 15--20 min following exposure \[[@B45-brainsci-09-00176]\], we should be assessing cortisol changes during or right after our 3 min emotional exposure---the initial adaptation phase. To minimize other factors which could also alter cortisol levels, participants were instructed to refrain from caffeine and exercise for 90 min prior to the experiment and all studies were conducted between 13:30--18:30 to ensure cortisol levels were not at ceiling and to limit the effects of circadian rhythms on cortisol levels \[[@B46-brainsci-09-00176],[@B47-brainsci-09-00176]\].
### 2.4.2. Data Analysis for Cortisol Measures {#sec2dot4dot2-brainsci-09-00176}
Cortisol measurements for each participant were normalized to account for baseline biases, as defined by the equation below:

$$\text{cortisol shift} = \frac{\text{cortisol}_{\text{post-adapt}} - \text{cortisol}_{\text{pre-adapt}}}{\text{cortisol}_{\text{pre-adapt}}}$$
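Computed per participant, and assuming the proportional form above, the shift reduces to:

```matlab
% Baseline-normalized cortisol shift per participant (assumed proportional form;
% negative values indicate a decrease post-adapt relative to pre-adapt).
cortShift = (cortPost - cortPre) ./ cortPre;
```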
2.5. Statistical Analyses {#sec2dot5-brainsci-09-00176}
-------------------------
Given that most data were normally distributed (skew/kurtosis between +/−2), parametric statistics were used for data analysis, except for negative affect and cortisol measurements, which were transformed to be normally distributed. A log transformation was applied to cortisol measurements. Outcomes of statistical testing reported reflect tests performed on transformed data where appropriate.
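The normality screen and transformation can be sketched as follows (thresholds as stated above; skewness and kurtosis are Matlab Statistics Toolbox functions, and kurtosis returns \~3 for a normal distribution, so 3 is subtracted to obtain excess kurtosis):

```matlab
% Screen for approximate normality; log-transform raw cortisol if violated.
isRoughlyNormal = @(v) abs(skewness(v)) <= 2 && abs(kurtosis(v) - 3) <= 2;
if ~isRoughlyNormal(cortRaw)
    cortRaw = log(cortRaw);     % cortisol concentrations are strictly positive
end
```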
Planned statistical analyses included ANOVAs for quantifying perceptual shifts and cortisol shifts across adaptation conditions and correlational analyses between perceptual shifts and cortisol shifts. T-tests were used to examine if perceptual and cortisol shifts were significantly different from zero or if there was no change post-adaptation. Non-parametric tests were used to assess changes in slope across adaptation conditions. We also examined baseline biases that could potentially minimize the size of perceptual or physiological shifts following adaptation. Planned analyses included correlations between baseline biases in perception (PSE) and cortisol, between perception and mood (PANAS-PA and NA), and between cortisol and mood.
3. Results {#sec3-brainsci-09-00176}
==========
3.1. Behavioral Measures {#sec3dot1-brainsci-09-00176}
------------------------
Of the 64 possible test trials of face morphs used to determine perceptual biases in judging emotion, participants completed an average of 57.49 (SD = 7.01) baseline test trials and 60.21 (SD = 5.43) post-adaptation test trials.
In order to assess the goodness-of-fit of the psychometric function to our data, we considered measures of deviance, quantified using psignifit, for PSE measures at baseline and post-adaptation. On average, the deviance at baseline was 5.50 (SD = 2.82) and post-adaptation was 5.33 (SD = 2.89).
An ANOVA, Bonferroni corrected to account for multiple comparisons (alpha = 0.0083), was run to test the hypothesis that the strength of changes in perception varied across adaptation conditions. We expected the largest perceptual change, shift in PSE, for congruent adaptation, an intermediate effect for only visual adaptation, relatively weaker changes for incongruent adaptation, and the weakest change for only auditory adaptation.
We found a significant main effect of adaptation condition on PSE shift after adapting to negative emotions (*F*(3,77) = 9.080, *p* \< 0.001, partial *η^2^* = 0.261; see [Figure 3](#brainsci-09-00176-f003){ref-type="fig"}), with a significantly more positive PSE shift for Ac, Ai, and Av compared to Aa. This indicated that the mean neutral face appeared happier post-adaptation for Ac, Ai, and Av relative to Aa (Ac: *t*(40) = 3.584, *p* \< 0.001; Av: *t*(35) = 4.716, *p* \< 0.001; Ai: *t*(38) = 3.657, *p* = 0.001). No significant differences were found between other conditions (Ac versus Av: *t*(39) = −0.964, *p* = 1; Av versus Ai: *t*(37) = 1.183, *p* = 1; Ac versus Ai: *t*(42) = 0.100, *p* = 1). One-sample *t*-tests indicated all conditions except Aa showed significant adaptation effects; PSE shifts were significantly different from baseline (Ac: *t*(22) = 5.185, *p* \< 0.001; Av: *t*(17) = 8.833, *p* \< 0.001; Ai: *t*(20) = 5.674, *p* \< 0.001; Aa: *t*(18) = −0.384, *p* = 0.706).
We also examined perceptual shifts with a Bayesian ANOVA in JASP (JASP Team, 2018), using a Cauchy prior distribution with *r* = 1/sqrt(2). The Bayes factor (BF~10~) of 654.274 suggests that the data were approximately 650 times more likely to occur under the alternative hypothesis than under the null (suggesting extreme evidence), with an error percentage of 0.011%. This indicates that PSE shift differs as a function of adaptation condition. Post-hoc tests corrected for multiple testing by fixing the prior to 0.5 suggest a similar pattern of results to the ANOVA above: more positive PSE shifts for Ac, Ai, and Av compared to Aa (Ac: posterior odds = 13.946, Ai: posterior odds = 16.031, Av: posterior odds = 206.343) and no differences between other conditions (Ac versus Av: posterior odds = 0.184; Av versus Ai: posterior odds = 0.224; Ac versus Ai: posterior odds = 0.124). Bayesian one sample *t*-tests indicated all conditions, except Aa showed an adaptation effect (Ac: BF~10~ = 724.4, error \< 0.001%; Av: BF~10~ = 157,852, error \< 0.001%; Ai: BF~10~ = 1531, error \< 0.001%; Aa: BF~10~ = 0.254, error \< 0.05%).
Although PSE shifts did not differ across congruent, incongruent, and visual only conditions, another quantification of our data, slope, might differ, with steeper slopes indicative of less variance in perceptual data \[[@B44-brainsci-09-00176]\]. However, we found no significant differences in slope changes across adaptation conditions (*p* = 0.919; data not shown).
3.2. Physiological Measures {#sec3dot2-brainsci-09-00176}
---------------------------
An ANOVA, Bonferroni corrected to account for multiple comparisons (alpha = 0.0083), was run to test the hypothesis that the strength of changes in cortisol varied across adaptation conditions. We expected the largest cortisol change for congruent adaptation, an intermediate change for only visual adaptation, relatively weaker changes for incongruent adaptation, and the weakest change for only auditory adaptation. Furthermore, we expected relatively weak increases in cortisol and mostly effects on relative differences in decreases in cortisol.
We found no significant main effect of adaptation condition on cortisol shift after exposure to negative emotions (*F*(3,58) = 1.618, *p* = 0.195, partial *η^2^* = 0.077; see [Figure 4](#brainsci-09-00176-f004){ref-type="fig"}). One-sample *t*-tests indicated cortisol changes were significantly different from baseline after exposure to negative emotions, except for condition Aa (Ac: *t*(12) = −2.562, *p* = 0.025; Av: *t*(16) = −3.184, *p* = 0.006; Ai: *t*(13) = −5.224, *p* \< 0.001; Aa: *t*(17) = −2.027, *p* = 0.059).
We also examined physiological shifts with a Bayesian ANOVA in JASP (JASP Team, 2018), using a Cauchy prior distribution with *r* = 1/sqrt(2). The Bayes factor (BF~01~) of 2.314 suggests that the data were approximately 2.3 times more likely to occur under the null hypothesis than under the alternative, with an error percentage of \< 0.001%. This indicates that the strength of cortisol shifts did not vary under different adaptation conditions. Bayesian one sample *t*-tests indicated cortisol changes were more likely to occur under the alternative relative to the null hypothesis for all conditions, with varying degrees of evidence (Ac: BF~10~ = 2.779, error \< 0.005%; Av: BF~10~ = 8.505, error \< 0.001%; Ai: BF~10~ = 186.5, error \< 0.001%; Aa: BF~10~ = 1.276, error \< 0.01%).
Changes in cortisol levels are shown for each adaptation condition. The *x*-axis depicts the adaptation condition and the *y*-axis depicts the mean change in cortisol, normalized by baseline (+/− SEM across participants), with data from individual participants shown via open circles. A mean shift in the negative direction indicates that cortisol decreased post-adapt relative to pre-adapt. Conversely, a mean shift in the positive direction indicates that cortisol increased post-adapt relative to pre-adapt. There were no significant differences in cortisol shift across conditions and mean cortisol shifts tended to be negative.
3.3. Correlations between Behavioral and Physiological Measures {#sec3dot3-brainsci-09-00176}
---------------------------------------------------------------
Given the high variability in our data, considering only differences in means across participants might obscure important relationships arising from individual differences. Thus, we tested whether shifts in the perception of emotion (PSE shift) correlated with shifts in cortisol, using a Pearson correlation between PSE and cortisol shifts (see [Figure 5](#brainsci-09-00176-f005){ref-type="fig"}). We found a significant negative correlation between PSE shifts and cortisol shifts across adapt conditions (*r* = −0.303, *p* = 0.017). This negative correlation suggests that, following exposure to negative emotions, as the neutral face was judged more positive, post-adapt cortisol levels were more negative relative to baseline cortisol levels, possibly due to lower stress, lower arousal, or less attention. The same correlation run with Bayesian statistics yielded a BF~10~ of 2.596, suggestive of anecdotal evidence for a relationship between shifts in cortisol and shifts in perception.
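With per-participant shift vectors in hand (illustrative variable names), the key test reduces to a single call:

```matlab
% Pearson correlation between perceptual (PSE) shifts and cortisol shifts,
% pooled across the four adaptation conditions.
[r, p] = corr(pseShift, cortShift, 'Type', 'Pearson');   % reported: r = -0.303, p = 0.017
```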
3.4. Underlying Biases in Behavioral and Physiological Measures {#sec3dot4-brainsci-09-00176}
---------------------------------------------------------------
All our measures quantifying perceptual and cortisol changes were normalized to starting baseline values. We examined perceptual and cortisol measures at baseline, to determine if baseline biases could influence the effects of interest. We found no significant main effect of adaptation condition on perception at baseline nor on cortisol at baseline (perception: *F*(3,77) = 0.803, *p* = 0.496, partial *η^2^* = 0.030; cortisol: *F*(3,58) = 0.506, *p* = 0.680, partial *η^2^* = 0.026; data not shown) and no significant correlations between baseline cortisol and baseline PSE (*r* = 0.0180; *p* = 0.161; data not shown), or baseline PSE and baseline state affect (PA: (*r* = 0.196; *p* = 0.192); NA: (*r* = 0.055; *p* = 0.647; data not shown)).
Of note, while not a main measure, as state affect was only assessed before but not after adaptation, we found a significant negative correlation between cortisol and positive affect at baseline (*r* = −0.237, *p =* 0.037; data not shown), such that more elevated cortisol at baseline was associated with less positive affect at baseline. No significant correlation was found between cortisol and negative affect at baseline (*r* = −0.117, *p* = 0.369; data not shown).
4. Discussion {#sec4-brainsci-09-00176}
=============
We assessed the relative contributions of visual and auditory emotional information in biasing changes in perception and cortisol and the correlation between the strength of changes in perception and changes in cortisol. Unlike previous work using unique face-voice pairings for only a few individual face identities, we used a wide range of facial identities and unassociated emotional crowd sounds to assess emotional processing.
We hypothesized that (1) the emotion perceived in a face would show a positive bias post-exposure to negative emotional information, in accord with contrastive perceptual after-effects, and that (2) such after-effects would vary based on whether emotion was conveyed by visual and/or auditory information and whether visual and auditory emotional valence matched. Overall, we found exposure to negative emotions yielded positive perceptual biases (positive PSE shift) in all but the auditory only adaptation condition, which showed no effect. In accord with our expectations and replicating previous literature, PSE shifts were weakest following only auditory emotional exposure. Contrary to our expectations, the magnitude of PSE shifts did not differ for congruent versus incongruent emotions nor for congruent versus visual only emotions. The failure to find a benefit for congruent versus visual only adaptation was also noted in a previous study using unique face-voice pairings and finding no benefit, no increased PSE shift, for congruent visual and auditory happy emotions versus only visual happy emotions \[[@B19-brainsci-09-00176]\].
We hypothesized that cortisol would decrease after exposure to negative emotional information and that decreases would vary based on whether the emotion was conveyed by visual or auditory information and whether visual and auditory emotional valence matched. Overall, we found cortisol decreased after exposure to negative emotional information, but we found no significant differences as a function of adaptation condition.
Given the variability of perceptual and cortisol shifts across individuals, we also tested the correlation between the magnitude of perceptual shifts and cortisol shifts. Here, we found a significant negative correlation across participants, such that the stronger the positive bias in perceiving a face after exposure to negative emotional content the stronger the decrease in cortisol. While perceptual shifts correlated with cortisol shifts, baseline cortisol levels did not correlate with baseline perceptual biases. Thus, underlying baseline differences could not account for the correlations we observed between shifts in perception and shifts in cortisol.
Is Cortisol a Proxy for Stress, Arousal or Attention?
-----------------------------------------------------
Our results highlight that changes in cortisol may correlate with changes in perception: the more that exposure to angry emotions biases faces to be perceived as happy, the *more pronounced* the *decrease* in cortisol. This is contrary to what one might expect if changes in cortisol correlated with changes in mood. Thus, while repeated exposure to negative emotional content increases *positive* biases in perception, such exposure increases *negative* biases in mood. One would expect negative biases in mood to correlate with a *less pronounced* decrease in cortisol, or even an increase in cortisol. Yet, we find that exposure to negative emotional content yielded larger, not smaller, *decreases* in cortisol, such that the larger the decrease in cortisol the greater the *increase* in *positive* perceptual bias. Thus, in our paradigm and with our emotional stimuli, changes in cortisol correlate with changes in perception rather than changes in mood.
One might have expected *changes* in cortisol to serve as a proxy for *changes* in mood since some studies find a correlation between baseline mood and baseline cortisol levels. For example, some studies find that positive affect correlates with decreased cortisol and negative affect with increased cortisol levels \[[@B48-brainsci-09-00176],[@B49-brainsci-09-00176]\]. Yet, while several studies find urinary and salivary cortisol levels are associated with anxiety and depression \[[@B50-brainsci-09-00176],[@B51-brainsci-09-00176]\], or with negative state affect \[[@B34-brainsci-09-00176],[@B48-brainsci-09-00176]\], other studies do not find an association between negative trait affect and cortisol level \[[@B51-brainsci-09-00176],[@B52-brainsci-09-00176]\]. Taken together, these studies do not provide a clear picture of the relationship between positive or negative state affect and cortisol levels.
Of note, although cortisol has long been known as the stress hormone, it has also been referred to as a marker of attention and arousal. Van Honk and colleagues \[[@B32-brainsci-09-00176]\] found that baseline cortisol levels correlated with the probability of orienting away from threatening faces, suggesting that individuals with higher baseline cortisol levels had higher levels of arousal, or more engaged attention, and thus responded more quickly to threatening faces than individuals with lower cortisol levels. More recently, Kagan \[[@B53-brainsci-09-00176]\] highlighted that the term "stress" is assigned too broadly and should be reserved for describing reactions to an experience that directly threatens an organism. He describes cortisol as a marker for exploratory activity and responses to novel situations. In response to Kagan \[[@B53-brainsci-09-00176]\], McEwen and McEwen \[[@B54-brainsci-09-00176]\] call for more investigations into epigenetic factors that might underlie cortisol responses to positive and negative events to help clarify distinctions between "good stress", "tolerable stress", and "toxic stress". They emphasize that responses to stressors are highly individualized and that early life stressors may result in a reduced ability to cope in certain stressful situations.
5. Limitations {#sec5-brainsci-09-00176}
==============
It is possible that certain aspects of our experimental design could have minimized the shifts in perception and/or cortisol we observed. For example, given that directing attention to emotional information can enhance adaptation effects (e.g., \[[@B55-brainsci-09-00176]\]) and given the possibility that changes in cortisol may reflect changes in attention state, the failure to optimize attention to emotional information in the current study could have minimized the shifts we observed in perception and/or cortisol. Individual differences in attending more to visual or auditory emotional information could also have influenced results.
Furthermore, the stimuli used in this study could have yielded weaker shifts in perception and/or cortisol. We expected smaller adaptation effects given our use of multiple face identities (e.g., \[[@B22-brainsci-09-00176]\]) and no unique face-voice pairings. Yet, it was important to use such stimuli to minimize the confounds of adapting to unique face or voice features and to minimize visual imagery in assessing the transfer of emotion across the senses. While a review is beyond the scope of this paper, studies have begun to consider behavioral as well as neural correlates of crossmodal emotional processing yet have found conflicting results (for reviews, see \[[@B56-brainsci-09-00176],[@B57-brainsci-09-00176]\]). For example, while some ERP studies find enhanced behavioral and neuronal processing for congruent bimodal compared to unimodal emotional stimuli \[[@B58-brainsci-09-00176]\], other studies find neuronal but not behavioral enhancement \[[@B59-brainsci-09-00176],[@B60-brainsci-09-00176]\].
In our study, we also did not evaluate social anxiety status. Yet, social anxiety status can affect the physiological stress response, such that basal cortisol levels are higher in individuals high in social anxiety, and HPA axis reactivity may be blunted, yielding weaker changes in cortisol in those high in social anxiety \[[@B61-brainsci-09-00176]\]. If some individuals in our sample were high in social anxiety, this could have obscured and/or minimized the changes in cortisol we observed. Further study of crossmodal emotional processing in general, and in social anxiety in particular, is warranted (e.g., \[[@B62-brainsci-09-00176]\]).
Finally, we also could not assess the effect of race on emotion perception. Perceptual narrowing early in life creates more salient face representations for faces of one's own race relative to those of another race \[[@B63-brainsci-09-00176]\]. Thus, identity discrimination is easier for faces from one's own race compared to faces from another race. This has been called the other-race effect, or other-race bias (see Reference \[[@B64-brainsci-09-00176]\] for a review). Further, studies in both infants and adults suggest that emotional perception is more accurate when viewing a face of the participants' same race versus a different race \[[@B8-brainsci-09-00176]\]. The current study did not have a large enough sample size to consider how the race of the participant could have influenced perceptual processing of emotional information for faces of one's own race versus a different race. Furthermore, we only sampled a limited number of races in the faces presented. Thus, we did not have adequate power to examine this effect, and cannot come to any meaningful conclusion about the effect of race in this context. Future research should assess the role of race in crossmodal emotional perception.
6. Conclusions {#sec6-brainsci-09-00176}
==============
We studied crossmodal emotional processing to quantify if perceptual and physiological effects were stronger if visual and auditory emotions were matched in valence (congruent) or unmatched (incongruent). We quantified how emotional exposure altered perception, using psychophysics, and how it altered a physiological proxy for stress or arousal using salivary cortisol. This is an interesting question as repeated exposure to emotional content can induce a contrastive perceptual after-effect in the opposite direction (i.e., adapting to negative emotion induces a positive emotional bias) but also induce mood in the same direction, making it unclear how changes in cortisol may be related. The results of our study suggest a relationship between perceptual changes and changes in a physiological measure of stress, cortisol, such that after exposure to negative emotional face content, larger decreases in cortisol correlated with more positive perceptual after-effects. This suggests an orthogonal relationship between measurements of stress and perceptual after-effect, with adaptation to negative emotional face stimuli inducing both a bias to see neutral faces as happier, and a decrease in stress as measured by cortisol. Additionally, while we observed a perceptual bias to judge faces as happier in all 3 conditions of adaptation to negative faces (congruent, incongruent, and visual only), we did not observe significant differences in adaptation strength across conditions, except that all three of the above conditions differed from an auditory only condition, which failed to show a significant perceptual after-effect. We also found no significant differences in cortisol changes across our 4 adaptation conditions.
We thank a wonderful and dedicated team of undergraduate research assistants, especially Alexia Williams, Ryan McCarty, Anh Phan, Brandon Mui, Abrar Ahmed, and Erinda Morina, for their hard work and help with data collection and analysis. Finally, much of this work would not have been possible without generous support from the UMass Boston Dean's Research Fund and the UMass Boston Department of Psychology Research Fund to V.M.C.
Conceptualization, V.M.C.; Methodology, V.M.C., H.E.L., & R.G.H.; Formal Analysis, V.M.C., S.C.I., & H.E.L.; Resources, V.M.C. & R.G.H.; Data Curation, V.M.C., S.C.I., & D.A.H.; Writing -- Original Draft Preparation, V.M.C. & S.C.I.; Writing -- Review and Editing, V.M.C., S.C.I., D.A.H., H.E.L., & R.G.H.; Project Administration, V.M.C.; Funding Acquisition, V.M.C.
This research was funded by the UMass Boston Dean's Research Fund and the UMass Boston Department of Psychology Research Fund to V.M.C.
All authors declare that they have no conflict of interest and that this work has not been previously published.
Table 1. Demographics.

| Demographics | Congruent | Incongruent | Visual | Auditory |
| --- | --- | --- | --- | --- |
| **Mean Age (SD)** | 20.5 (2.2) | 22.5 (4.7) | 25.5 (9.5) | 22.1 (2.8) |
| **White** | | | | |
| Male *N* | 0 (0%) | 1 (50%) | 2 (33.3%) | 0 (0%) |
| Female *N* | 7 (36.8%) | 7 (36.8%) | 6 (50%) | 8 (50%) |
| **Hispanic** | | | | |
| Male *N* | 1 (25%) | 0 (0%) | 2 (33.3%) | 1 (33.3%) |
| Female *N* | 4 (21.1%) | 7 (38.9%) | 2 (16.7%) | 0 (0%) |
| **African/African American** | | | | |
| Male *N* | 0 (0%) | 0 (0%) | 1 (16.7%) | 0 (0%) |
| Female *N* | 1 (5.3%) | 0 (0%) | 3 (25%) | 4 (25%) |
| **Asian** | | | | |
| Male *N* | 0 (0%) | 1 (50%) | 0 (0%) | 1 (33.3%) |
| Female *N* | | | | |
| **Multiracial** | | | | |
| Male *N* | 0 (0%) | 0 (0%) | 0 (0%) | 0 (0%) |
| Female *N* | 1 (5.3%) | 0 (0%) | 0 (0%) | 1 (6.3%) |
| **Unspecified** | | | | |
| Male *N* | 3 (75%) | 0 (0%) | 1 (16.7%) | 1 (33.3%) |
| Female *N* | 3 (15.8%) | 2 (11.1%) | 0 (0%) | 0 (0%) |
In this Capitol Report:
Colorado Chamber Submits Comments Opposing Draft Legislation Changing Taxation of Computer Software
The Colorado Chamber has submitted formal comments to the Legislative Oversight Committee Concerning Tax Policy regarding draft legislation that would change the way computer software is taxed. The comments reflect concerns expressed by Colorado employers about the draft legislation, which the Committee will consider for a vote. If the Committee votes to move the draft bill forward, it will be introduced during the 2023 Legislative Session.
The Colorado Chamber has advocated for years on behalf of employers for clarity in current laws and regulations on how computer software is taxed, and the current statute, which is based on bipartisan legislation adopted in 2011 (HB 1293), has provided that much-needed clarity.
Colorado Chamber’s Tax Council Chair, Ryan Woods, will be testifying at the Oversight Committee hearing to share the concerns by the Chamber and its members. We strongly encourage Chamber members to contact the Committee if you would like to maintain the current tax policy for computer software. Committee member contact information can be found here.
Please contact Larry Hudson at [email protected] or Loren Furman at [email protected] with any questions.
KICK-OFF WORKSHOP: Colorado Family & Medical Leave Insurance Program (FAMLI)
Details: This workshop will provide legal guidance for employers on the State mandated FAMLI program, current regulations & private plan requirements.
Date: October 10, 2022
Time: 2:00 p.m. – 3:15 p.m.
Location: Colorado Chamber, 1600 Broadway, Ste. 1000, Denver. Remote participation is available.
Instructor: Stacey Campbell, Shareholder & Owner, Campbell Litigation, P.C.
About Stacey: https://www.campbelllitigation.com/about/stacey-a-campbell/
Register for the event HERE or contact Laura Moss at [email protected].
Colorado Chamber Now Offering New Employer Training Workshops!
The Colorado Chamber has been working vigorously to defend and support Colorado employers on state policy decisions that impact business operations. However, businesses of all sizes and industries can't keep up with the labyrinth of new laws and regulations. Any unintended compliance mistakes could trigger penalties and fines for a business!
To help businesses navigate the complex regulatory environment, the CO Chamber is offering a new Employer Training Workshop Program that includes classes led by highly respected attorneys and training professionals with an expertise in current statutes and regulations and in creating a healthy workplace environment.
Upcoming Workshop Topics:
- Understanding the interaction between paid sick leave and family & medical leave, and avoiding duplication of leave;
- Understanding the Workplace Discrimination law;
- Compliance with current environmental regulations;
- Understanding the Equal Pay Law;
- Diversity, equity & inclusion training;
- Many other opportunities…
Sign up NOW and avoid penalties for not understanding current laws and regulations! Contact Laura Moss at [email protected] to sign up!
Leadership Colorado Applications Are Now Open!
Leadership Colorado (formerly known as the EXECs Advocacy Program) is a 9-month executive development program for emerging executives and industry professionals. Participants explore prominent Colorado companies through exclusive tours and hands-on experiences.
Please click here for the 2023 e-brochure and contact Priscilla B. Varner, Leadership Colorado Program Manager, with any questions that arise.
Modupe Olusola is a Nigerian business executive and Managing Director/CEO of Transcorp Hotels.
Before becoming Transcorp Hotels’ CEO, she was the Group Head of Marketing of UBA Group.
Modupe is the wife of the popular motivational speaker and life coach Lanre Olusola (The Catalyst).
Education
Dupe attended Queens College, Lagos, for her GCE O-levels before proceeding to the University of Leicester, United Kingdom, for a degree in economics.
Later, she headed to the University of Kent, where she obtained a Master of Science (MSc) in Development Economics.
Additionally, she has certifications in PRINCE2, Project Management Professional (PMP), and Investor Management (all from the UK).
Career
Dupe started her professional path in 2002 as the SME Manager for African Capital Alliance. She moved to the investor relations division of the UBA Group in 2008 and, in January 2010, was appointed Director of Resources at Transcorp Nigeria Plc.
She was appointed chief executive officer and managing director of Transcorp – Teragro Commodities Limited’s now-defunct Agribusiness division in 2014.
At United Bank for Africa (UBA) Plc, she was appointed Group Head of Embassies, Multilaterals, and Development Organizations (EMDOs) and Global Investors Services (GIS) in 2016. She oversaw the group that provided multilateral development organizations and embassies operating in Africa with specialized finance solutions.
Later, she was appointed Group Head, Marketing at UBA. In this capacity, she created and implemented the integrated marketing strategy for all UBA Group bank and non-bank subsidiaries.
In 2020, Dupe was appointed as the CEO of Transcorp Hotels Plc.
Marriage
Dupe Olusola is married to the famous life coach and motivational speaker Lanre Olusola (The Catalyst), and they are blessed with two children.
Net worth
NewsWireNgr cannot independently verify Dupe Olusola's net worth.
Achievements
In 2015, Dupe was listed among Africa's 10 Most Influential Nigerian CEOs. While she was CEO of Teragro, the company received the Nigerian Agriculture Award.
Dupe Olusola was named as part of the Leading Ladies Africa/ YNaija’s “100 Most Inspiring Women in Nigeria.”
She was also listed as one of the top 4 women in Africa in the Strategic Women in Leadership Award (SAWIL).
Disclaimer
The information in this article was curated from online sources. NewsWireNGR or its editorial team cannot independently verify all details.
Confusion surrounds future of Excel Media Solutions
Manchester-based Excel Media Solutions has been liquidated and its staff made redundant.
On Friday, Simmonds and Co informed the company's staff - believed to total more than 30 - that they were being made redundant. Subsequent letters informed staff that they would be entitled to arrears in wages, holiday pay and redundancy pay up to a maximum of £479 per week in line with the Employment Act.
Excel Publishing - one of the region’s largest independently owned publishing companies - was bought out of administration by Buxton Press in January this year.
A company called Excel Media Solutions Ltd, registered in Manchester and with Bernard Galloway as its sole director, was set up in the same month.
Galloway is the chairman and chief executive of Buxton Press. Simmonds and Co in Stockport has been appointed as liquidators of Excel Media Solutions.
Neither Simmonds and Co, nor Buxton Press, has responded to Prolific North's numerous requests for information. The Excel Media Solutions website is down and phones are not being answered.
Another company called Excel Media Solutions Ltd, registered on the Fieldhouse Industrial Estate in Rochdale, with John Howarth and Joanna Lawlor as directors, was dissolved on February 9th 2016.
Q:
Is there a way, using scikit-learn, to plot the OOB ROC curve for random forest?
Using Python and sklearn I want to plot the ROC curve for the out-of-bag (oob) true positive and false positive rates of a random forest classifier.
I know this is possible in R but can't seem to find any information about how to do this in Python.
A:
You need .oob_decision_function_, which returns the prediction probabilities for the out-of-bag samples after fitting.
P.S.: This is available as of scikit-learn 0.22.
Small example:
import matplotlib.pyplot as plt
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn import metrics

X, y = load_wine(return_X_y=True)
y = y == 2  # turn it into a binary problem

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

rfc = RandomForestClassifier(n_estimators=10, random_state=42, oob_score=True)
rfc.fit(X_train, y_train)

# Use the OOB probability of the positive class (column 1), not argmax labels,
# so the ROC AUC is computed from scores rather than hard 0/1 predictions.
oob_probs = rfc.oob_decision_function_[:, 1]

# With few trees, some samples may never be out-of-bag and get NaN scores.
mask = ~np.isnan(oob_probs)
metrics.roc_auc_score(y_train[mask], oob_probs[mask])
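To plot the OOB ROC curve itself (rather than just the AUC), you can pass the same masked OOB scores to roc_curve. A minimal sketch, continuing from the variables above (note that plot_roc_curve takes an estimator and re-predicts on the data you give it, so it cannot consume precomputed OOB scores directly):

from sklearn.metrics import roc_curve, auc

fpr, tpr, _ = roc_curve(y_train[mask], oob_probs[mask])
plt.plot(fpr, tpr, label="OOB ROC (AUC = %.3f)" % auc(fpr, tpr))
plt.plot([0, 1], [0, 1], linestyle="--", color="grey")  # chance level
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()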
Strength & Conditioning
Our strength and conditioning classes provide a supportive environment for everyone from complete beginner to advanced athlete. We focus on maximizing your performance and fitness for any physical challenge or activity, while empowering you to have fun and take charge of your health.
Our training is a combination of athletic drills, bodyweight exercises, weight training, powerlifting, and Olympic weightlifting. With hands-on coaching emphasizing proper technique and progressions, you will gain an understanding of how to train hard and efficiently while overcoming and avoiding injuries. Whether your goals are to see changes in body composition or to compete at a high level, expect to see gains in strength, coordination, endurance, and power!
This video and text was originally featured on The Pew Charitable Trusts. It refers to efforts underway in the Mid-Atlantic. New England officials will likely consider something similar.
“Everywhere we go, we find new species,” says Les Watling, Ph.D., an expert on corals. Fragile, slow-growing corals abound in deep canyons and along the edges of the continental shelf off the U.S. Atlantic coast. But as fishing activity has pushed into deeper waters, these amazing life forms are threatened by damaging gear that scrapes the seafloor. Fisheries officials for the mid-Atlantic region have an opportunity to protect corals in a large swath of the seabed and leave a proud conservation legacy. As Dr. Watling explains, these deep-sea corals play an important role as habitat for other marine life, and we must act to protect these ancient corals of the deep.
In the ever-changing world of visual storytelling, Meg Weston ensures that Maine Media Workshops + College stays relevant
New England is certainly known for our prestigious academic institutions oriented towards knowledge, technology and creativity. Maine Media Workshops + College seems to be the intersection of all three, and its President, Meg Weston, fully understands what this intersectionality should look like in the context of our current cultural landscape.
Starting her career in the consumer photography industry, where she eventually ran a photo-finishing company that processed 50 million rolls of film a year, Meg began to understand the importance of imagery and how we relate to the images we create. After departing from her twenty-year career in consumer photography, Weston earned her MFA in creative writing and was appointed to the Board of Trustees at the University of Maine.
What space do you feel Maine Media Workshops + College occupies in the broader scope of new culture happening in New England?
We focus on teaching the art and craft of visual storytelling. We’ve been around for more than 40 years, so we started with photography workshops and then added film and television workshops as time went on. The idea of visual storytelling is more relevant today than ever before. In 1999, consumers took about 80 billion photos. By 2015, two billion photos a day were being uploaded to Facebook alone by its 1.4 billion users. Clearly, we’ve become such a visual culture, and to effectively tell a story, whether it’s a very personal one that you want to tell as a fine artist, or an organizational story or a news story, we’re all using visual media more and more to tell those stories. It’s such a big change, and the way we see the world (image-wise) affects what we think about the world. If you can be a more effective storyteller, then you can change the way people see the world.
Do you feel you have a way to address regionality in the context of the kinds of storytelling you teach at the college?
We’re in a very beautiful part of the world here – the coastal harbor of Rockport, Maine – and it is partly the location that draws people to come and immerse themselves in the environment. We have a long history of the arts, unique light, and culture in this part of Maine, which makes participating in this school and learning how to tell stories here unique.
As you pointed out, the college adapted in the past to cater to drastic shifts in our culture and the mediums we use to tell stories. Do you have an idea of how Maine Media Workshops + College will adjust to future movements in culture and education in terms of programming, workshops and courses?
This all changes constantly because technology changes. Originally, we had all these darkrooms where students were processing film, and we were shooting movies and still images on film. In the late 1990s and early 2000s there was the whole shift in technology towards digital, and we were at the forefront of that.
We teach these historic processes as well as the latest technologies. For instance, we offer a practice lab and many workshops in what we call alternative or historic processes, which means we teach people how to make images in the way they were made in the 1800s. At the same time, students can use digital negatives to take an image on their iPhone and print it as if it had been printed in the 1800s. We try to expand the breadth of our courses and workshops from these historical processes to the most recent technological advances.
The college embraces the whole range from historic to the latest technology and we embrace the convergence and diversity of ways you can tell a story. We’ve added a book art studio, a historic dark room, lighting classes. We’ve always taught screen writing, but now Richard Blanco teaches a poetry-writing class called Images in Imagination, where you write poetry based on your internal imagery.
In addition, we changed one of our certificate programs from a professional photography certificate to a professional certificate in visual storytelling. Students come for 30 weeks, but they don’t just graduate with a portfolio of pictures; they learn multi-media skills, including audio recording and video recording. We want them to learn all this because even if a student wants to be a fine art photographer and show their work in galleries, they’re going to need to promote themselves via several channels. Students will need all these other skills if they want to work editorially. Our professional certificate programs have evolved, and we see more photographers who want to pick up some video skills and videographers who understand the power of the still image. Even though they’re very different courses and demographics, there’s a lot more crossover between them.
1. Introduction {#s0005}
===============
Production diseases, mainly gastro-intestinal and respiratory, are defined as diseases induced by management practices; they are multifactorial, with the environment, nutrition and stress all contributing to a compromised immune system ([@bb0215]).
Production disease in the pig industry is a significant source of economic loss and continues to impact on animal welfare. The most recent figures for European Union (EU) farms show that endemic diseases cost between £21 and £28 per fattened pig, with parasitic disease accounting for losses of £5 per pig and respiratory infections accounting for a loss of £3.40 per finisher pig ([http://www.fp7-prohealth.eu/news-index/newsletter-november-2015/production-diseases-cost-pig-producers/](http://www.fp7-prohealth.eu/news-index/newsletter-november-2015/production-diseases-cost-pig-producers){#ir0005}). [Table 1](#t0005){ref-type="table"} lists some of the more common production diseases in pigs, as well as some of the notifiable pathogens involved.

Table 1. Some of the more common production diseases in pigs caused by different bacteria, parasites and viruses, including their geographical distribution, as well as some of the notifiable pathogens involved.

| Pathogen | Global distribution |
| --- | --- |
| **Bacteria** | |
| *Escherichia coli* | Endemic infections worldwide |
| *Lawsonia intracellularis* | Endemic infections worldwide |
| *Mycoplasma hyopneumoniae* | Endemic infections worldwide |
| *Salmonella* | Endemic infections worldwide |
| Swine dysentery | Endemic infections worldwide |
| **Parasite** | |
| Coccidiosis | Endemic infections worldwide |
| **Virus** | |
| African swine fever | Endemic in sub-Saharan Africa; it has become established in the Caucasus and Eastern Europe |
| Classical swine fever | Distributed in many countries worldwide, but large areas of Europe, Australasia and North America are normally free from disease |
| Foot and mouth disease | Endemic in many countries in Africa, the Middle East and Asia, and also present in some regions of South America. Europe and North and Central America are free from the disease |
| Porcine reproductive and respiratory syndrome | Strains of varying pathogenicity are endemic in many swine-producing countries. Highly pathogenic strains are currently circulating in Asian countries including China, Vietnam, Cambodia, Laos and Malaysia, among others |
| Porcine respiratory coronaviruses | Different classes of coronaviruses are circulating globally: Alpha, Beta and Delta. Alphacoronaviruses are endemic in Europe and Asia, but the circulating strains in Europe are less pathogenic than those in Asia. Transmissible gastroenteritis virus is caused by an alphacoronavirus, and sporadic outbreaks can occur. Betacoronaviruses are widespread but often cause subclinical disease. Deltacoronaviruses are newly emerging in the USA, thought to have originated in China, and are causing widespread economic losses |
| Rabies/Aujeszky's disease | Almost worldwide distribution, particularly in regions with high population densities of domestic swine. Eradication programmes have led to virtual disappearance from regions such as Europe and North America |
| Rotavirus | Has been found worldwide |
| Swine influenza | Distributed globally in pigs, with occasional outbreaks; a global pandemic of swine-origin influenza occurred in humans in 2009 |
Many different factors associated with intensive rearing contribute to increased susceptibility to disease, including mixed infections, stress, poor husbandry and nutrition. Whilst not an infectious disease, stress can adversely affect performance. Stress can be caused by overcrowding, frequent mixing of different litters, and too high a temperature. Tail biting, a common consequence of stress, has been estimated to cost around 18 Euros per affected pig, which includes medication, veterinary care and carcass condemnation ([@bb0065]).
Rapid diagnosis of disease is important because it enables earlier intervention through treatment or isolation of infected animals. Traditionally, serological testing has been used for this purpose, although this is very much a retrospective approach to diagnosis and is more appropriate for surveillance ([@bb0265], [@bb0290]). Molecular methods such as the polymerase chain reaction (PCR) and microarrays offer much more sensitive means of diagnosis and can be used to detect the presence of a pathogen rather than antibody responses to it. Whilst both serological and molecular methods will continue to be used as surveillance tools, molecular methods clearly enable rapid diagnosis. However, in addition to detecting the presence of the pathogen, molecular technology, including next generation sequencing, can also be used to measure gene expression patterns in the host during infection. Changes in expression of smaller subsets of genes may be coordinated, and detection of these changes as biomarkers of production disease could be of immense value in improving diagnosis and risk analysis to determine best practice, with an impact on increased economic output and animal welfare. In the last ten years we have seen a rise in the number of publications using whole genome arrays to analyse the pig transcriptome, which have been reviewed in greater detail elsewhere ([@bb0335]). In particular, there has been a great deal of focus on the pig's immune system and the response to various pathogens such as PRRSV ([@bb0235], [@bb0420]).
Pig breeders have increased production performance as high-producing animal breeds have been successfully bred from native breeds ([@bb0305]). Any increase in the genetic potential of the animal requires simultaneous advances in nutrition and management to support the expression of these traits ([@bb0170]). Nutrition and management, when used effectively, can improve feed efficiency, shorten production cycles, and reduce feed requirements ([@bb0340]). However, these two factors alone will not completely remove the stresses of overcrowding associated with adverse effects on immunity leading to infection; thus, biosecurity and vaccination are also important factors to consider ([@bb0225]).
This review will highlight the technologies available to study gene expression for this purpose, how these have revolutionised human medicine and how these could be applied to production disease in farm animals.
2. Technologies which have driven translational genomics {#s0010}
========================================================
Serology identifies animals that have been exposed to a pathogen, but may not necessarily be infected at the time of sampling. To overcome this, nucleic acid-based technologies are becoming increasingly prevalent in surveillance of pathogens ([@bb0020], [@bb0380], [@bb0360]) facilitated by the recent developments in new technology platforms including PCR, microarray and next generation sequencing.
3. Polymerase chain reaction (PCR) {#s0015}
==================================
Most existing assays for detecting pathogens by the presence of their nucleic acid involve PCR or derivatives thereof. PCR enables easy identification by electrophoresis of a product of specific amplification using species/strain-specific primers. Although this uses DNA, RNA viruses can also be detected in this way by incorporating an initial reverse transcription step. PCR assays are highly sensitive and specific, are rapid, and have the potential for automation. PCR can be adapted to detect several pathogens simultaneously by using primers designed to produce amplification products of different sizes, which can be separated by electrophoresis. The amount of target pathogen DNA can also be estimated using quantitative PCR (qPCR). PCR can also be used to identify non-culturable or very slowly growing pathogens, the latter because of rapid detection compared with bacterial culture, which may take days to weeks ([@bb0085]). Novel pathogens can also be detected by using generic or degenerate primers ([@bb0370], [@bb0025]). qPCR is often used in the diagnosis and detection of economically important pathogens including classical swine fever ([@bb0055]) and African swine fever ([@bb0275]), the emerging porcine deltacoronaviruses ([@bb0430]) and porcine epidemic diarrhoea virus ([@bb0070]). qPCR was regarded as a relatively low-throughput assay, limiting the number of samples that could be tested simultaneously; as a result, researchers have looked at ways of increasing throughput, for example by combining it with microfluidic assays such as the BioMark™ qPCR system, which produces data that correlate well with conventional qPCR and reportedly gives better reproducibility than DNA microarrays ([@bb0350]). Up to 9216 qPCR reactions can take place in a single run with the BioMark™ chip ([@bb0255]). Microfluidic assays such as these make use of nanotechnology, which is becoming more commonplace in applications including drug discovery, biomarker detection and enzymatic reactions as lab-on-a-chip applications ([@bb0180]). Nanotechnology allows researchers to use lower volumes of RNA and reagents per sample, increasing the number of tests possible.
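Although the assays above concern pathogen detection, the same qPCR readout underlies host gene-expression measurement. As an illustration of how cycle-threshold (Ct) values translate into relative expression, the widely used 2^-ΔΔCt method (Livak and Schmittgen, 2001) normalises a target gene to a reference gene and then to a control sample. A minimal sketch in Python with made-up Ct values (the gene roles and numbers are illustrative only, not data from any cited study):

# Made-up Ct values for one target gene and one reference (housekeeping) gene
ct = {
    "control":  {"target": 28.4, "reference": 18.1},
    "infected": {"target": 25.9, "reference": 18.3},
}

def fold_change(sample, control, ct):
    """Relative expression by the 2^-ddCt method."""
    d_sample = ct[sample]["target"] - ct[sample]["reference"]     # dCt for the sample
    d_control = ct[control]["target"] - ct[control]["reference"]  # dCt for the control
    return 2.0 ** -(d_sample - d_control)                         # 2^-ddCt

print(fold_change("infected", "control", ct))  # ~6.5-fold up-regulation in this toy example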
4. Microarray analysis {#s0020}
======================
A DNA microarray is an array of DNA probes arranged in miniature on a solid surface. Labelled DNA from a sample is hybridised to the array, and those probes which are complementary to the DNA in the sample are detected by a fluorescent marker or other signal. Sequence-specific probes have been used in a variety of methods including Northern blot, Southern blot, and in situ hybridisation. A key advantage of microarrays for surveillance is the ability to analyse simultaneously thousands of targets that may be present in a sample ([@bb0045]). They can be used for multiplex pathogen detection ([@bb0285]) and also for gene expression studies ([@bb0330], [@bb0105]). Different platforms are available commercially, including Affymetrix (Santa Clara, CA, USA), Agilent (Santa Clara, CA, USA), Illumina (San Diego, CA, USA) and Alere (Jena, Germany). These platforms can be distinguished by the type of surface substrate, probe length, and the spotting and labelling techniques.
In addition to the use of multiplex PCR or microarrays to detect individual or mixed pathogen infections, they can also be used to monitor the host response to infection using expression assays in which cellular RNA is converted to cDNA and amplified for detection or followed by application to a microarray. The patterns of gene expression associated with individual cellular damage or initiation of the early immune response may indicate the type of pathology, and by extension the types of pathogen involved. Molecular methods can therefore be used to monitor the presence of the pathogen and the host response in endemic production disease; these are very powerful tools. Many studies have shown their consistency and utility in the diagnosis of infectious diseases in pigs, for example the detection of Porcine Circovirus (PCV) in clinical specimens from diarrhoeic pigs ([@bb0140]). Other studies have reported results using the Virochip, a panviral DNA microarray that is able to detect all known viruses and has been used to simultaneously identify Porcine Reproductive and Respiratory Syndrome virus (PRRSV), Influenza A virus and Porcine Respiratory Coronavirus in clinical serum samples ([@bb0260]).
Microarrays have also been used to detect novel pathogens such as the Severe Acute Respiratory Syndrome (SARS) virus ([@bb0405]), and have been shown to be very reliable in genotyping clinical or environmental pathogen strains.
In comparison with human gene expression ([@bb0435], [@bb0130]), very few studies have been done on the pig. Microarrays have, however, indicated that genetic selection for residual feed intake (RFI) in pigs can affect immune capacity ([@bb0135]). They have also been used to assess differences in in vitro gene expression in response to important porcine pathogens such as PCV-2, indicating that the virus increases the expression of a large number of immune-related and pro-apoptotic genes, mainly in monocyte-derived dendritic cells ([@bb0220]).
5. Next generation sequencing {#s0025}
=============================
Rather than detecting the presence of pathogen nucleic acid together with patterns of host gene expression in clinical samples by PCR or array-based assays, simply sequencing all the nucleic acid (DNA and cDNA derived from the RNA) that is present in a sample should provide information on the pathogens present and, depending on the sample, the host response.
Next-generation sequencing (NGS) is a term that covers several high-throughput sequencing technologies, including but not limited to Illumina, Roche 454 and SOLiD sequencing, and applications such as RNA-Seq ([@bb0075]). RNA-Seq, for example, has been used to investigate differentially expressed genes in the transcriptomes of different breeds of pig ([@bb0445]), showing that genes involved in body growth and the immune system were more highly expressed in Berkshire pigs than in Jeju native pigs. RNA-Seq has also been used to identify genes and inhibitory, non-coding microRNAs (miRNAs) that are differentially expressed between pigs with different feed efficiencies ([@bb0145], [@bb0040]). miRNAs modulate the activity of specific mRNA targets in animals by targeting specific mRNAs for cleavage or effecting post-transcriptional repression ([@bb0010]). Recently they have been shown to have a role in the differential expression of genes involved in the regulation of the innate immune response, in functions such as the response to cytokines and the inflammatory response ([@bb0410]). The genes identified in studies such as these could be of use in breeding strategies to improve RFI in pigs ([@bb0395], [@bb0115], [@bb0190]).
Dual RNA-Seq has been used to study the interaction between a bacterial pathogen (*Salmonella* Typhimurium) and the host during the course of an infection ([@bb0415]), an approach that can be used to discover novel functions of pathogen genes in relation to the host. In addition, sequencing of the hypervariable V2 and V3 regions of 16S rRNA has been found suitable for distinguishing most bacterial species to the genus level ([@bb0050]).
The main advantage of NGS is the ability to generate large quantities of highly detailed sequence data which, in some cases, can be in excess of one billion reads of sequence per run ([@bb0385]). Any nucleic acid, host or pathogen, in a sample will be sequenced, and prior knowledge of the genome sequence is not required ([@bb0230]). This has allowed large-scale comparative studies, such as identifying and quantifying microbes from the gut microbiota of pigs, many of which can be extremely difficult to grow in the laboratory ([@bb0165]). The most recent and widespread application of NGS has been sequencing human genomes to increase our understanding of the genetic basis of disease ([@bb0125], [@bb0300]). As with other molecular applications such as PCR, NGS can be expected to become increasingly available to laboratories as reagents and the necessary equipment become less expensive. NGS remains more expensive than the other methods described above, and the large amounts of data require extensive bioinformatics analysis ([@bb0015]). In contrast, data analysis pipelines such as GeneSpring, Partek, Genowiz, Pathway Studio and Bioconductor ([www.bioconductor.org](http://www.bioconductor.org){#ir0010}) are well established for microarrays, and data analysis is currently easier than for NGS. Array protocols are optimised and validated, and they are commonly used as a high-throughput tool for biological analysis. Microarray design currently needs a priori knowledge of the genome which, for most microorganisms and livestock hosts, is freely available, so that customisable array design is possible and relatively easy.
6. Systems biology: making sense of multiple biological data sets {#s0030}
=================================================================
It has become commonplace to identify a small number of genes or proteins which are over- or under-expressed following a particular pathological or infection event. With high-throughput tools such as whole-genome microarrays it is possible to measure the entire transcriptome for changes in expression levels. Systems biology is the integration of large quantities of gene or protein expression data on individual metabolic, physiological and immunological pathways, generated for the whole genome, into a functional and regulatory biological network in order to create predictive models of the changes associated with, for example, a particular disease process ([@bb0005]). The use of NGS or whole genome microarrays can generate the raw data needed for these detailed analyses. Systems biology studies can show that phenotypically similar diseases are caused by functionally related genes ([@bb0425]). Advancement in the field of systems biology is being aided by advances in genomics and bioinformatics. Where large amounts of data are available, trained bioinformaticians, using specifically designed software packages, are required to analyse the relevant data. Molecular biomarkers can thus be defined as the gene(s) whose changes in expression are associated statistically with a particular pathological or physiological process, and which can be used to identify the cause of these changes. Such biomarkers are likely to provide a more sensitive means of disease diagnosis and to help in the development of more specific therapeutics which may be more beneficial to the patient.
Advances in the field of molecular biology, including array analysis, bioinformatics and high-throughput sequencing, are generating the complex genomic-level data with which molecular biomarkers might be identified, validated and then applied. Tools such as MammaPrint measure the mRNA expression of 70 genes to screen patients for breast cancer and assign them to either low- or high-risk prognostic groups ([@bb0390], [@bb0120]). Screening methods such as these allow clinicians to make a quicker and more accurate diagnosis, which is also beneficial in selecting the appropriate treatment, which may vary from person to person. Predictive biomarkers are already in use in clinical practice for the treatment of cancers such as leukemia, colon, breast, lung and melanoma ([@bb0150]). Difficulties arise when there is a large degree of variation between individuals, and even within any one individual at different time points in a day and under different nutritional conditions ([@bb0270], [@bb0245], [@bb0355]). This is also true for livestock where, despite genetic variation being lower than in humans, the relationship between genotype and phenotype is complex ([@bb0195], [@bb0205]). In addition, technical issues such as standardisation of sample collection require a great deal of attention.
7. Molecular biomarkers in human disease {#s0035}
========================================
Biomarkers of disease have commonly been specific (disease-associated) proteins circulating in blood. Measurement of these proteins can be time-consuming and, in some cases, not very accurate in determining a specific disease or prognosis. For example, a blood protein/biomarker concentration needs to be high enough to be detected by conventional diagnostic methods such as ELISA, whilst a high concentration of the same protein (e.g. cytokines) could arise for a number of different reasons. The development of technology platforms such as PCR, microarray and deep sequencing facilitates the detection of low concentrations of DNA and RNA and of small and complex changes in host gene expression which may be associated with disease. Some of these technologies can also amplify small amounts of analyte (e.g. RNA) to allow accurate examination of base-pair sequences, which can be used to look for single nucleotide polymorphisms (SNPs) in healthy and diseased tissue. The identification of genes responsible for specific diseases has been one of the major objectives in the field of human genetics for many years ([@bb0425]). As more powerful high-throughput technologies have become available, it has been possible to establish connections between genes, biological functions and a wide range of human diseases. In addition to the presence or absence of particular haplotypes, gene-expression profiling has been used to elucidate the mechanisms underlying patterns of pathology and, for example, to predict cancer prognosis ([@bb0185]). This method has provided researchers with new therapeutic targets and biomarkers for the classification and diagnosis of cancer subtypes ([@bb0030], [@bb0310], [@bb0060], [@bb0005]).
The development of high-throughput molecular platforms such as microarrays and deep sequencing has been paramount in the discovery and biological study of miRNAs. miRNAs are non-coding RNAs involved in gene regulation, suppressing RNA translation and inducing mRNA degradation. Specific miRNA clusters can also be used to classify different types of human cancers ([@bb0200]). miRNAs have also been implicated in nearly all types of cardiovascular disease, including heart failure, cardiac hypertrophy, arrhythmias, atherosclerosis, atrial fibrillation and peripheral artery disease ([@bb0035], [@bb0345]). Biomarkers for monitoring other human diseases such as Alzheimer's disease ([@bb0315]) and multiple sclerosis ([@bb0095]) have also been identified. Infection with *Mycobacterium tuberculosis* has been widely studied, and higher expression of chemokine (C-C motif) receptor 7 (CCR7) and interleukin 18, together with lower expression of *Bcl2*, has been identified in RNA extracted from the blood of patients with tuberculosis ([@bb0400]). Studies such as those discussed above have opened new avenues in the detection, classification, prognosis and possible future therapeutic approach to cancer and other human diseases.
8. Molecular biomarkers in pigs {#s0040}
===============================
The variation in expression observed in the very heterogeneous human population is likely to be less marked in the more genetically homogeneous livestock breeds. Many other variables, such as nutrition and the environment, which are difficult to control in human studies, are more easily controlled in animal studies. The publication of the porcine whole genome sequence ([@bb0110]) will facilitate analysis of the expression of all pig genes under different farm environments.
A study performed on five breeds of pig (Duroc, Piétrain, Landrace, Hampshire and Large White) found that the number of genes differentially expressed between these breeds in response to in vitro lipopolysaccharide was relatively small, but included the immune-related genes Interleukin 12A (*IL12A*) and Colony Stimulating Factor 2 (*CSF2*), which were more abundantly expressed in Hampshire than in Large White or Piétrain pigs ([@bb0155]). In this study, macrophage gene expression was also assessed with an Affymetrix Snowball Porcine Array covering the entire transcriptome ([@bb0090]). Among the differentially expressed genes was *CXCR2* (the IL-8 receptor), which was expressed substantially less in Landrace pigs than in the other breeds.
How underlying genetic differences between breeds contribute to differences in response to infection can also be studied by mapping variation in such responses to genes or regions of chromosomes ([@bb0080], [@bb0320]). Such genome-wide association studies (GWAS) can identify genetic variation controlling resistance or susceptibility in pigs. Genetic traits associated with variation in resistance to a number of pig pathogens have been located and mapped within different pig breeds, including for Gram-negative bacteria such as *Haemophilus parasuis* (Glasser's disease), *Salmonella* and *Escherichia coli* (diarrhoea and haemorrhagic enteritis) and *Actinobacillus pleuropneumoniae* (bronchopneumonia) (reviewed by [@bb0440]). These can include, for example, the presence or absence of the receptor for K88, a cell-surface antigen present on some *E. coli* which has been shown to contribute to diarrhoea in pigs ([@bb0240]). These types of studies may be of interest to breeding schemes in identifying genetic factors that could confer susceptibility or resistance to certain diseases in pigs ([@bb0225]). For example, one study involved pigs vaccinated with inactivated *Mycoplasma hyopneumoniae* and used a microarray platform consisting of 10,010 unique genes to identify molecular biomarkers including Granulysin (*GNLY*), Killer Cell Lectin-Like Receptor G1 (*KLRG1*), Arachidonate 12-Lipoxygenase 12S Type (*ALOX12*), C-X3-C Motif Chemokine Receptor 1 (*CX3CR1*) and Ral Guanine Nucleotide Dissociation Stimulator (*RALGDS*). These were identified as potential biomarkers for αβ T lymphocyte counts and other immune traits in response to *M. hyopneumoniae* ([@bb0210]). In a separate study, SNPs in the porcine genes Haptoglobin (*HP*), Neutrophil Cytosolic Factor 2 (*NCF2*) and Phosphogluconate Dehydrogenase (*PGD*) were associated with persistent *Salmonella* shedding ([@bb0375]). A SNP in one of the guanylate binding protein family genes (*GBP5*) has been identified in a major quantitative trait locus (QTL) linked to the variance in how young pigs respond to infection with the economically important PRRSV ([@bb0175]).
Pathogenesis in pigs has also been studied using miRNA profiling ([@bb0295]), in which deep sequencing highlighted a cluster of 17 miRNAs that were upregulated and 11 that were downregulated in necrotic biopsies excised from lung tissue of pigs infected with *A. pleuropneumoniae*, compared with infected but non-necrotic tissue. One miRNA that was upregulated in both the infected non-necrotic tissue and the infected necrotic tissue was *miR-155* ([@bb0295]). This miRNA has previously been shown to modulate the effects of LPS and TNF-α in murine studies ([@bb0365]) and may prove to be a generic marker of Gram-negative infection. However, it should also be pointed out that extrapolating data between species may be problematic, since current knowledge of the number of miRNA genes in pigs is only about 20% of that in humans and less than 50% of that in mice ([@bb0280]); further work will increase knowledge of pig miRNAs. In addition, the course of infection in different species can be very different. Biomarkers from muscle tissue have also recently been used to detect harmful or stressful situations that may affect animal welfare and meat quality prior to slaughter ([@bb0325]). Results from that study suggested that mixing unfamiliar animals at the farm or at the slaughterhouse can increase oxidative stress and autophagy in muscle tissue ([@bb0325]). Biomarkers for autophagy include the *Beclin1* gene, which shows increased activity under more stressful conditions and could be useful for detecting inappropriate strategies which lead to animal stress and poorer meat quality.
Improving feed efficiency by genetic selection is becoming increasingly frequent, and an accepted method of measuring feed efficiency is RFI. The RFI is the difference between the actual feed intake of an animal and the feed intake estimated for that animal based on its growth rate and carcass composition; selection for a low RFI has been hypothesised to improve feed efficiency whilst maintaining production levels ([@bb0160], [@bb0100]). However, one study identified a number of genes involved in the immune response and the regulation of the inflammatory response which were under-expressed in animals with a low RFI compared to a high RFI, suggesting that selecting for low RFI may affect the immune status and defence mechanisms of the pig ([@bb0135]). A statistical difference was also found in the numbers of circulating lymphocytes, basophils and monocytes, with animals from a low-RFI line having lower cell counts than animals from a high-RFI line ([@bb0250]).
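To make the definition concrete, RFI can be written as the residual of a regression of feed intake on production traits (the notation below is ours, for illustration, and is not taken from the cited studies):

$$\mathrm{RFI}_i = \mathrm{FI}_i - \left(\beta_0 + \beta_1\,\mathrm{ADG}_i + \beta_2\,\mathrm{BF}_i\right)$$

where $\mathrm{FI}_i$ is the observed feed intake of animal $i$, $\mathrm{ADG}_i$ its average daily gain (growth rate), $\mathrm{BF}_i$ a carcass-composition measure such as backfat depth, and the $\beta$ coefficients are estimated across the population. A negative residual identifies an animal that eats less than predicted for its level of production, i.e. a more feed-efficient animal.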
9. Conclusion {#s0045}
=============
The field of biomarker discovery and implementation is expanding as the technologies reviewed briefly above become more common. Whilst a large number of studies have been carried out in human medicine, further work is needed to identify molecular biomarkers in veterinary medicine, in particular those associated with production disease in the pig livestock industry. Pork is a major source of animal protein for large regions of the world, and demand is likely to increase as the global population grows. To cope with this demand, the pig industry will need to improve productivity, disease resistance and efficiency. The pig transcriptome is highly complex and still not fully understood, requiring further studies on gene expression to identify those molecular biomarkers which may have predictive value in identifying the environmental, nutritional and other risk factors associated with the production diseases that contribute to economic loss and welfare issues in the pig industry.
This project has received funding from the European Union Seventh Framework Programme FP7-KBBE Grant Number (613574) for research, technological development and demonstration under grant agreement no 613574.
Conflicts of interest {#s0050}
=====================
None.
Volvo Cars, which is owned by China's Zhejiang Geely Holding Group, is voluntarily recalling about 200,000 cars after it found an engineering issue that could potentially cause fuel leakage in the engine compartment over time.
The group said its probe had identified that some vehicles may have small cracks inside one of the fuel lines in the engine compartment, which along with a pressurised fuel system may over time lead to fuel leakage in the engine compartment.
About 219,000 cars of 11 different models produced in 2015 and 2016 are affected, the Swedish company said, with the highest numbers of affected cars in Sweden, the UK and Germany. The Swedish recall was first reported by the daily Aftonbladet.
Volvo sold 503,127 cars in 2015 and 534,332 cars in 2016.
‘There are no reports alleging injuries or damages related to this issue. Volvo preventatively recalls the cars to avert any possible future problems,’ Volvo said in its statement.
The company's fortunes have been revived since Geely bought it in 2010 and its popular new premium models now compete with larger rivals Daimler and Volkswagen. It sold a record 642,253 cars in 2018.
However, a prolonged US-China trade war has inflated raw materials costs and resulted in a slowdown in Chinese demand for cars. That has forced Volvo to spend to retool its global factories to limit the negative tariff impact and led it to postpone its plans to go public indefinitely.
This month, Geely Automobile, the main listed unit of the Geely empire which owns Volvo, forecast flat sales this year, as China's most successful carmaker struggles with slowing economic growth and more cautious consumers.
A Volvo spokesman declined to comment on Wednesday on the cost of the latest recall.
The company made its largest recall ever in 2004, when it called back 460,000 cars to fix wiring in an electronic control module for the cars' main cooling fan.
Seeking to emphasize the unique leadership that G7 countries contribute amid ongoing governmental and non-governmental international efforts, including recognizing the needs of the world's most vulnerable people; and,
Recognizing the importance of shared values, including freedom of inquiry, merit-based competition, openness, transparency, and reciprocity, as well as the protection of human rights and fundamental freedoms, privacy, and democratic values in international cooperation;
The G7 Science and Technology Ministers intend to work collaboratively, with other relevant Ministers to:
Enhance cooperation on shared COVID-19 research priority areas, such as basic and applied research, public health, and clinical studies. Build on existing mechanisms to further priorities, including identifying COVID-19 cases and understanding virus spread while protecting privacy and personal data; developing rapid and accurate diagnostics to speed new testing technologies; discovering, manufacturing, and deploying safe and effective therapies and vaccines; and implementing innovative modeling, adequate and inclusive health system management, and predictive analytics to assist with preventing future pandemics.
Make government-sponsored COVID-19 epidemiological and related research results, data, and information accessible to the public in machine-readable formats, to the greatest extent possible, in accordance with relevant laws and regulations, including privacy and intellectual property laws. Identify research results, data, and information crucial to addressing the current COVID-19 pandemic response and to preventing potential future pandemics, in an effort to advance research, clinical care, public health, and public communication. Identify current data gaps, make anonymized data findable, accessible, interoperable, and reusable, and recognize the importance of open science, which increases public accessibility to research results and data. Exchange best practices and lessons learned on the ethical and transparent use of data in the COVID-19 response and beyond. Share tools and methods for responsible use of data, and for more transparent, participatory, and accountable use of data, recognizing ongoing initiatives, including on repositories.
Strengthen the use of high-performance computing for COVID-19 response. Make national high-performance computing resources available, as appropriate, to domestic research communities for COVID-19 and pandemic research, while safeguarding intellectual property. Enhance cooperation between G7 partners and ongoing initiatives, such as the COVID-19 High Performance Computing Consortium, the Partnership for High Performance Computing in Europe, and the High Performance Computing Infrastructure in Japan.
Launch the Global Partnership on AI, envisioned under the 2018 and 2019 G7 Presidencies of Canada and France, to enhance multi-stakeholder cooperation in the advancement of AI that reflects our shared democratic values and addresses shared global challenges, with an initial focus that includes responding to and recovering from COVID-19. Commit to the responsible and human-centric development and use of AI in a manner consistent with human rights, fundamental freedoms, and our shared democratic values.
Exchange best practices to advance broadband connectivity; minimize workforce disruptions, support distance learning and working; enable access to smart health systems, virtual care, and telehealth services; promote job upskilling and reskilling programs to prepare the workforce of the future; and support global social and economic recovery, in an inclusive manner while promoting data protection, privacy, and security.
Source: U.S. Department of State
With Nepal’s FSC National Forest Stewardship Standard (NFSS) approved without conditions, the first in the Asia Pacific, Dr Bhishma shares with FSC APAC his experience of the standard development process, his motivations, and what he anticipates FSC national standards can offer Nepal.
How did you first get in touch with FSC?
While I had been familiar with FSC since its inception, my direct involvement with FSC for the certification and governance system at the practical level started in the early 2000s.
When I was developing a programme to ensure sustainable management of forests and better international marketing for Nepal’s non-timber forest products (NTFPs), FSC came to my mind as a means to achieve that goal. The project was developed and funded by USAID with the support of several alliance partners for the initiative: international businesses and domestic enterprises, Certification Bodies (CBs) for certification expertise, Nepali NGOs, government agencies, and local community organisations at various levels.
When did you start to involve in the FSC standard development process, for how long?
In 2002, we did a review and concluded that FSC would best suit our purposes. However, developing a Nepal standard would take a lot of effort and time, especially when we had no actual field experience. So we worked with Rainforest Alliance, a Certification Body, to develop interim standards focusing on NTFPs to address the immediate need, while at the same time supporting the formation of a national working group to achieve the long-term goal. This was a very useful exercise in familiarising all the stakeholders with the benefits, usefulness, requirements and costs of FSC certification.
Later, in 2005, an FSC interim national working group was formed to develop FSC national standards and promote FSC certification in Nepal, but a final draft had yet to be produced and submitted to FSC. With FSC and UNEP, we developed the Forest Certification for Ecosystem Services (ForCES) programme, a pilot scheme to expand and enhance global and national environmental standards for emerging markets in biodiversity conservation and ecosystem services. I’d call it an "upgrade" on top of the current FSC certification.
Under ForCES, and building on the knowledge and experience of my organisation, ANSAB (Asia Network for Sustainable Agriculture and Bioresources), from previous FSC certification work, including the interim certification standards developed by the certification body and the FSC interim national working group, we began facilitating the development of the FSC National Forest Stewardship Standard (NFSS) for Nepal in 2013, and the standard was completed and approved in 2018.
What motivated you to take on this work?

My commitment and firm belief in sustainable forest management. I believe that balancing social, environmental and economic concerns is crucial for creating synergy between people and nature for happy, harmonious and peaceful living. We saw a lot of benefits from the proper use of FSC standards and certification: improved governance and productivity, increased efficiency in the operation and processing of products, better market prices with more stable markets, community benefits, environmental and social safeguards, and recognition of good systems and practices at the local, domestic and international levels.
Can you share the top 3 challenges that you encountered during the process? How were they resolved at the end?
First, the technical detail and the complexity to be addressed at both systemic and practical levels that encompass the social, economic and environmental aspects of SFM at the field level. It was especially difficult to bring all relevant stakeholders with various interests and level of understanding to a common view. With ANSAB’s strong technical expertise, long-term on-the-ground experience on SFM, and extensive network, the organisation was able to pull the resources and provide clear guidance on the technical issues and their on-the-ground implication.
Reaching consensus was another big challenge. During public consultations and at Standard Development Group (SDG) meetings, the stakeholders that have different positions will present different views because of their interests. To resolve this, ANSAB organised consultation workshops involving relevant national level stakeholders, where there were sessions to describe the essence of each indicator, and had group discussions so the participants could work to build a common understanding at indicator level. For the SDG members, I was fortunate to work along with FSC representative who provides guidance on each principle, criterion and indicator to build understanding and consensus on each indicator. Also, chamber- level discussion meetings were frequently organised to bring their constructive inputs to the process.
Local adaptation was something we had to overcome as well. For example, the minimum requirement of the conservation area network was worrying for representatives from community managed forests. There was no precise definition to distinguish ‘conservation area’ versus ‘protected area system’ in Nepal, and they might be treated as equivalent and might enable the government to take over the community managed forests into the protected area system. Local people might be left deprived of not being able to use forest resources and benefits. After many attempts to remedy the issue, SDG discussed and agreed to provide the interpretation for small organisations, and the standards finally got the support of all SDG members and stakeholders.
What are the things that you’re most proud of about getting this NFSS done?
The first time Nepal successfully have its own FSC standards to ensure SFM and safeguard ecosystem services. Although ANSAB has trained to produce certified auditors and a significant number of stakeholders on FSC certification process, requirements and benefits, some stakeholders still perceived that certification was externally imposed, and the requirements and utility of the certification were not precise. This perception can be changed now with the national level standards developed together with the relevant local stakeholders. The standard development process itself also helped educate the local and demonstrate the importance of forests, SFM and ecosystem services.
I’m glad to see that ANSAB’s long-term wishes for the Incentive-based forest and ecosystem management can be realised, especially for community-based forest management. Certification provisions, such as Ecosystem Services, are now included in policy documents like REDD+, Forestry Sector Strategy and Forest Policy.
What are the top 3 things that you’re most impressed about FSC?
I’d say i) FSC standards provide a very practical way to ensure sustainability balancing social, economic and environmental concerns. It includes HCV, biodiversity, indigenous knowledge, involvement and benefit to local communities, efficiency in FM operation and use of products and services; ii) With its highly trusted label for its rigour, it is a means to access a green market, build trust, and generate multiple values to local communities, businesses and other stakeholders, and iii) FSC has proven measures to help the smallholders, such as group certification and small and low-intensity consideration in the standards, to reduce the cost while meeting the standards.
What are the benefits that you’re anticipating now that the national standard of Nepal has passed?
With the country-specific practical indicators and verifiers, NFSS can provide an incentive to forest managers to adopt SFM practices; can create an opportunity to public and private entities to procure certified forest products and services in line with their policy and commitment on sustainable development. It can also help the forest managers to get better access to a responsible market for ecosystem goods and services at national and international level.
As a tool, the NFSS can let the government and other conservation and development agencies to evaluate the performance and impact of forest management, and to improve good governance in sustainable forest management including equitable benefit sharing mechanism.
What is a forest to you?
A complex ecosystem, primarily dominated by trees and other plants, to provide habitats for many life forms. It offers a variety of natural products and ecosystem services to human and other animals such as wood, food, nutrition, medicine, spices, essence, energy, water, clean air, watershed protection, recreational service, and more… it is a well stock of natural capitals.
It is also a source of peace, harmony and happiness for many as it reflects the complex natural phenomena on the ground.
Bhishma P. Subedi, Executive Director of ANSAB (Asia Network for Sustainable Agriculture and Bioresources), has over 30 years of experience in participatory conservation and rural development programs, research, policy analysis, university teaching, and networking. He has designed over 100 development and research projects and led the implementation of over 70 projects including those with multiple donors, partners and countries; developed strategies, methodologies and tools; monitored and evaluated conservation and development programs; and delivered key notes and invited technical presentations in national and international forums.
He has received a Master of Forest Science from the Yale School of Forestry and Environmental Studies and a Ph.D. in Forestry from the Kumaun University. He has over 70 published articles, books, practical manuals, guidelines and toolkits, and over 100 research/technical reports. He has been recognized as the “Champion of the Asia-Pacific Forests” by the Food and Agriculture Organization of the United Nations, and honored with the Best Paper Award by the International Congress on Ethnobiology and the Most Innovative Development Project Award (Second Prize) by the Global Development Network, among others. | https://blogapac.fsc.org/2018/08/10/interview-with-dr-bhishma-nepal/ |
In March 2021, CQC published a report titled, “Protect, respect, connect – decisions about living and dying well during COVID-19: CQC’s review of ‘do not attempt cardiopulmonary resuscitation’ decisions during the COVID-19 pandemic”. The report was published following CQC’s review of ‘do not attempt cardiopulmonary resuscitation’ (“DNACPR”) decisions during the COVID-19 pandemic. A copy of the report can be accessed here.
As providers know, a DNACPR decision is an instruction to healthcare professionals involved in a person’s care not to attempt cardiopulmonary resuscitation (“CPR”). DNACPR decisions are intended to be a positive intervention and as stated in CQC’s report, “designed to protect people from unnecessary suffering by receiving CPR that they don’t want, that won’t work or where the harm outweighs the benefits.” Every decision about whether or not a person should receive CPR must be made after careful assessment of each individual’s situation. This should be done in consultation with the person (and / or their representative depending on their individual circumstances).
From the beginning of the COVID-19 pandemic, there were concerns that DNACPR decisions were being made without involving people (or their families or carers) and were being applied to groups of people, rather than taking each person’s individual circumstances into account. There were particular concerns that this was affecting people with a learning disability and older people. CQC’s review looked at how DNACPR decisions were made in the context of advance care planning, across all types of health and care sectors (including care homes, primary care and hospitals).
CQC’s findings from the review were that, going forwards, there needs to be a focus on three key areas:
- Information, training and support.
- A consistent national approach to advance care planning.
- Improved oversight and assurance.
Issues identified by CQC in relation to DNACPR decisions
CQC found that whilst, “some people felt they had been involved in the decision-making process, as part of a holistic conversation about their care… others felt that conversations around whether they would want to receive cardiopulmonary resuscitation (CPR) came out of the blue and that they were not given the time or information to fully understand what was happening or even what a DNACPR was. In some cases, people were not always aware that a DNACPR decision was in place.” Out of 2,048 adult social care providers who responded to CQC’s information request for information regarding DNACPR decisions, it was found that 508 out of 9,679 of DNACPR decisions put in place since 17 March 2020 had not been agreed in discussion with the person, their relative or carer.
CQC attributed the training and support that staff received, as a key factor in whether advance care planning conversations were held in a person-centred way and protected their human rights. CQC found that there were many types of advance care planning in use (including a variety of acronyms) and concluded that there needs to be a consistent national approach to advance care planning and DNACPR decisions. CQC also found that there needed to be a consistent use of accessible language, communication and guidance to enable shared understanding and information sharing among commissioners, providers and the public.
CQC also found that there was a need for improved oversight of DNACPR decisions to ensure that people’s rights are protected. The report stated that, “Without proper oversight, systems could not be sure that clinicians, professionals and workers were being supported to keep their professional practice and knowledge up to date in line with best practice, and to work within this. This is an area that needs rapid evaluation given the issues we have identified with staff knowledge and understanding. It is also pivotal to the development of end of life strategies at a system-wide level.”
Human rights implications
CQC heard evidence that in some cases, ‘blanket’ DNACPR decisions had been made. Blanket DNACPR decisions, failing to discuss with people whether or not they want CPR to be attempted and people not understanding when a DNACPR decision is in place are all potential human rights issues. They are also potentially discriminatory and unlawful under the Equality Act 2010.
Measures for providers to implement
CQC’s report stated that CQC will, “ensure a continued focus on DNACPR decisions through… monitoring, assessment and inspection of all health and adult social care providers”. Therefore, now that we have come out of the third national lockdown and restrictions have been eased, it would be a good time for providers to ensure that they have the following measures in place:
- Suitable staff training in relation to advance care planning and end of life care, covering the topic of DNAR decision making. Staff training should help to enable DNACPR decision making conversations to take place in a person-centred way and to reduce the risk of inappropriate decision making.
- Clear communication – with the individuals to whom the DNACPR decision relates to and / or their families and representatives (depending on the person’s capacity and wishes).
- Robust record keeping – ensure there are comprehensive records of conversations with and decisions agreed with, people, their families and/or representatives. To assist with this providers should ensure that there is consistency in records by using the same advance care planning forms, same acronyms and accessible language.
- Governance of DNACPR decisions – providers should ensure that there is sufficient oversight of DNACPR decisions relating to service users in their service. Providers may want to consider including reviewing DNACPR decisions as part of their service’s monthly audits. This will also help to demonstrate to CQC that this is a “Well-led” service.
Conclusion
Advanced care planning is a difficult and sensitive area. Reaching and making DNACPR decisions is most certainly not a routine form filling exercise and caution should be exercised when DNACPR decision making conversations take place with service users.
Any deaths which are a result of a failure to act are just as unlawful as death caused by a deliberate action. If either the failure or the act are not sanctioned by the law that may be categorised as neglectful or even criminal. Therefore, it is of the utmost importance that DNACPR decisions are properly completed with full knowledge of relevant information and that the consent remains valid when the decision is to be made.
Providers should also be mindful that CQC will be monitoring DNACPR decisions during future inspections. It is therefore a good idea to ensure that the measures set out in bullet points above are either implemented or if they are already in place, reviewed and strengthened further.
If providers need any advice or assistance in relation to issues arising from DNACPR decisions or issues arising from CQC inspections, our specialist solicitors can help. Please contact Ridouts Professional Services Ltd using the email address [email protected] or by calling 0207 317 0340. | https://www.ridout-law.com/considering-cqcs-report-on-do-not-attempt-cardiopulmonary-resuscitation-decisions/ |
Process Improvement Tips for Project Managers
Whether you are a project manager at a large project or a team member in a small organization, you may be in need of changes for process improvement. You can make some small or big adjustments in favor of more effectively reaching your project objectives. In fact, it is difficult to adopt changes for people but, there are some ways to make it as simple as adding a new tool to your project management toolbox. In this article, we will talk about business process improvement methods and techniques and share tips to guide team members to ensure constant improvement in their business processes.
What is Process Improvement?
Before to share tips, let’s focus on process improvement. Project managers, team members, and stakeholders often look for ways to optimize performance, meet standards, improve quality and lower costs.
Although process improvement may have different names such as business process re-engineering or continual improvement process, they refer to the same purpose. The main purpose is to minimize errors, reduce waste, improve productivity.
What are Process Improvement Methods and Techniques?
There are various techniques designed to help your organization to improve productivity in processes. Those techniques and methodologies help to identify defects and causes of failures. But not all the methodologies suit all the needs. Some of them focus on reducing wastes while others focus on taking the company culture one step further. There are also methodologies for process mapping.
- Kaizen Methodology
- 5s Methodology
- PDCA (Plan, Do, Check, Act)
- Six Sigma Methodology
- Lean Methodology
- Cause and Effect Analysis
- SIPOC (Suppliers, Inputs, Process, Outputs, Customers) Analysis
- VSM (Value Stream Mapping)
- TQM (Total Quality Management)
- Kanban Methodology
- Process Mapping
Top 5 Process Improvement Tips for Project Managers
Regardless of what type of methodology you are using, here below are some tips that make process improvement easier.
Tip # 1: Empathy Promotes High Commitment and Cooperation
Changes are inevitable in the processes. Changes often impact team member’s workflow. Empathy is one of the most important values that promotes teamwork while trying to incorporate a new process.
Before to adopt changes, try to understand the feelings of others who will be affected. Be sure to gather multiple approaches to decide any update. You can gather people together to talk about the positive and negative effects of the change to ensure more effective adoption.
Tip # 2: Think Strategically
While making a project plan, we set a time frame for each deliverable. Because there is a time and a place for everything in project management. The same goes for process improvement. Think strategically and plan for the key variables which are time, cost and conditions before to initiate a change. If all the plans are in place, don’t be afraid to move forward because you are on the right way!
On the other hand, poor planning, inadequate preparation, and wrong estimates are reasons for project failure while adopting changes.
Tip # 3: Make Realistic Assumptions
Making realistic assumptions is key to successful process improvement. But, how do you know if your assumptions are correct? Encourage your team members to ask questions and raise concerns regarding the changes. Getting their feedback will be invaluable to decide optimizations for change.
Note that, even the best plans may fail during the implementation. Be realistic and analyze the process correctly to make adjustments when needed. Making realistic assumptions will help you to keep your feet on the ground. Bear in mind that process improvement should improve efficiency – not make things difficult.
Tip # 4: Dedicate Yourself
Believe in your process change yourself before to decide to adopt. Think of the change from different perspectives and ask yourself why the process improvement is necessary. Then, encourage team members to ask questions and make comments regarding the advantages and disadvantages of the proposed change. Ask team members if they have any concerns. This will promote collaboration during the implementation of change.
Tip # 5: Be Patient and Positive
Be patient and don’t wait for everything to happen at once. Because changes and process improvements will not happen overnight. If the process takes longer than planned, don’t be too discouraged and keep doing adjustments. Because you will reap the fruits over time. Use metrics to analyze your plans and measure the performance to understand if they are effective or not.
Improvement Steps
Basically, process improvement focuses on the alignment and performance of a particular process with the organizational strategy. Below are some steps to be followed for this purpose.
- Understand the Process to be Improved
- Find Out the Improvements
- Implement the Improvements
- Monitor and Control the Effects of Improvements
Conclusion
In this article, we discussed process improvement methods and techniques. Simply put, the goal of business process management is to improve the efficiency of all tasks throughout the production chain. If all the processes are efficient, the company will achieve more profit. Various methodologies such as Kaizen, Kanban, 5s and Six Sigma are applicable to process improvement studies. Regardless of what methodology you are using, involving people to decision making is key to gather different approaches regarding changes. In other words, getting everyone on board is key to the success of process improvement.
We learn many lessons from visitors who use projectcubicle.com every day. Most of the experiences are gained based on experience. If you would like to share your idea or advice, please write a comment and share your experience with our audience.
Lorelei Anrei is a PMP, & ITIL certified Project Controls Manager. Ms. Anrei has worked in IT Industry for over 15 years. As a guest speaker, Anrei has shared presentations on Efficient and Effective Meetings. She is sales and marketing manager at LB Training Solutions. | https://www.projectcubicle.com/process-improvement-tips-for-project-managers/ |
New York state has announced a new, “first-in-the-nation regulation…to protect [it] from the ever-growing threat of cyber-attacks.” The proposed regulation requires banks, insurance companies, and other financial services institutions regulated by its Department of Financial Services to establish and maintain a cyber-security program designed to protect consumers and ensure the safety and soundness of the financial services industry.
The proposed regulation requires regulated financial institutions to establish a cyber-security program; adopt a written cyber-security policy; designate a Chief Information Security Officer responsible for implementing, overseeing and enforcing its new program and policy; and have policies and procedures designed to ensure the security of information systems and nonpublic information accessible to, or held by, third-parties, along with a variety of other requirements to protect the confidentiality, integrity and availability of information systems.
Each covered entity will be required to implement and maintain a written cyber-security policy setting forth its policies and procedures for the protection of information system and the nonpublic Information stored on those systems. The policy, at a minimum, must address:
information security;
data governance and classification;
access controls and identity management;
business continuity and disaster recovery planning and resources;
capacity and performance planning;
systems operations and availability concerns; systems and network security;
systems and network monitoring;
systems and application development and quality assurance;
physical security and environmental controls;
customer data privacy;
vendor and third-party service provider management;
risk assessment; and
incident response.
The cyber-security policy, prepared on at least an annual basis, must be reviewed by a firm’s board of directors, or equivalent governing body, and approved by a senior officer.
Each covered entity must designate a qualified individual to serve as Chief Information Security Officer, responsible for overseeing and implementing the cyber-security program and enforcing its cyber-security policy. To the extent this requirement is met using third party service providers, the firm will: retain responsibility for compliance; designate a senior executive or employee responsible for oversight of the provider; and require the provider to maintain a cyber-security program that meets the regulation’s requirements.
The CISO of each entity would be required to develop a report, at least bi-annually, that is presented to the board of directors or equivalent governing body and made available to the superintendent upon request. It must: assess the confidentiality, integrity and availability of the firm’s information systems; detail exceptions to the cyber-security policies and procedures; identify cyber risks; assess the effectiveness of the cyber-security program; propose steps to remediate any identified inadequacies; and include a summary of all material cyber-security events during the time period addressed by the report.
The cyber-security program should, at a minimum, include penetration testing of information systems at least annually, and vulnerability assessments on a quarterly basis. The program must include implementing and maintaining audit trail systems that:
track and maintain data that allows for the complete and accurate reconstruction of all financial transactions and accounting necessary to detect and respond to a cyber-security event;
track and maintain data logging of all privileged authorized user access to critical systems;
protect the integrity of data stored and maintained as part of any audit trail from alteration or tampering;
protect the integrity of hardware from alteration or tampering, including by limiting electronic and physical access permissions to hardware and maintaining logs of physical access to hardware that allows for event reconstruction;
log system events including, at a minimum, access and alterations made to the audit trail systems by the systems or by an authorized user, and all system administrator functions performed on the systems;
and maintain records produced as part of the audit trail for at least than six years.
As part of cyber-security programs, each firm must limit access privileges to information systems that provide access to nonpublic Information solely to individuals who require it to perform their responsibilities. Access privileges should be periodically assessed. Firms must also implement written policies and procedures designed to ensure the security of information systems and nonpublic data accessible to, or held by, third parties doing business with them.
Firms will be expected to: require multi-factor authentication for any individual accessing internal systems or data from an external network; require multi-factor authentication for privileged access to database servers that allow access to nonpublic Information; and require risk-based authentication in order to access web applications that capture, display or interface with nonpublic Information.
As part of its cyber-security program, each firm will be required to include policies and procedures for the timely destruction of any nonpublic Information that is no longer necessary for the products or services it was provided for, except when the information is required to be retained by law or regulation.
Training is also a requirement in the proposed regulation. Firms must require all personnel to attend regular cyber-security awareness training sessions that are updated to reflect risks identified in the annual assessment.
On an annual basis, by Jan. 15, each firm is required to provide the NYDFS superintendent a written statement (an example is provided as an addendum to the rule proposal) certifying that they are in compliance with all requirements.
To the extent areas, systems, or processes that require material improvement, updating, or redesign are uncovered, firms are expected to document the remedial efforts planned and underway. The identification of any material risk of imminent harm relating to its cyber-security program requires that the superintendent be notified within 72 hours.
The proposed regulation “includes certain regulatory minimum standards while maintaining flexibility so that the final rule does not limit industry innovation and instead encourages firms to keep pace with technological advances.”
A limited exemption is included in the rule for firms with fewer than 1000 customers in each of the last three calendar years, less than $5,000,000 in gross annual revenue in each of the last three fiscal years, and less than $10,000,000 in year-end total assets, calculated in accordance with generally accepted accounting principles. In the event that a firm, as of its most recent fiscal year end, ceases to qualify for the exemption it has 180 days from the fiscal year end to comply with all requirements.
Prior to proposing this new regulation, NYDFS surveyed nearly 200 regulated banking institutions and insurance companies to obtain insight into the industry's efforts to prevent cyber-crime. Officials also met with a cross-section of those surveyed, as well as cyber-security experts, to discuss emerging trends and risks, as well as due diligence processes, policies and procedures governing relationships with third party vendors. The findings from these surveys led to three reports which helped to inform the rulemaking process.
The proposal is subject to a 45-day notice and public comment period before its final issuance. Covered Entities shall have 180 days from the effective date of the regulation to comply. | https://www.complianceweek.com/nys-financial-regulator-will-oversee-new-cyber-security-rules/10374.article |
This proposal is designed to build upon the epidemiologic and molecular genetic findings from our research on tobacco-related epithelial cancers. Tobacco exposure is an established risk factor for renal cell carcinoma (RCC). However, there are limited data on susceptibility markers and epidemiologic profiles for RCC We therefore propose to use a molecular epidemiologic approach in a case-control study to identify interindividual differences in inherited genetic instability, focusing on assessing DNA damage/repair, telomere length, and telomerase activity as predictors of RCC risk. In addition, deletions involving 3p are the most common genetic alterations in RCC, we will also study 3p latent genetic instability. We will accrue 300 patients with RCC, who have not received chemotherapy or radiotherapy, identified from two hospitals in the Houston, Texas metropolitan area. We will also recruit 300 controls identified from population-based random digit dialing in the Houston metropolitan area. The controls will be matched to the patients by sex, age (+/- 5 years), and ethnicity. Comprehensive epidemiologic prof'des will be constructed for these patients and controls. Specific goals of this project are: 1): To assess two mutagen sensitivity or DNA repair assays performed in parallel and measured by Komet 4.0 image system in patients and controls. One assay quantifies gamma-radiation-induced lymphocytic tail moment reflecting base excision repair (BER) and double strand break (DSB)/recombination repair; the other assay quantifies benzo[a]pyrene diol-epoxide (BPDE)-induced lymphocytic tail moment reflecting nucleotide excision repair (NER). Our hypothesis is that subjects who show increased y-radiation and BPDE sensitivity are at greater risk for RCC than are those who do not show these sensitivities. 2): To determine the telomere length in peripheral blood lymphocytes (PBLs) at baseline in patients and controls. Our hypothesis is that individuals susceptible to RCC will have shorter telomere length at baseline compared with normal individuals and that telomere length at baseline might be inversely correlated with y-radiation sensitivity. 3): To determine the levels of telomerase activity in PBLs at baseline and after 7- radiation treated of patients and controls. Our hypothesis is that upon exposure to y-radiation, individuals susceptible to RCC will have higher telomerase activity compared with healthy individuals. 4): To determine the frequencies of BPDE-induced chromosome aberrations on 3p12.3, 3p14.2, 3p21.3, and 3p25.2 in PBLs of patients and controls. 3p is the most frequently reported abnormal region in RCC. Our hypothesis is that cases exhibit higher frequencies of BPDE-induced 3p aberrations in PBLs than do controls. These chromosomal loci may reflect genetic susceptibility of specific loci to BPDE and that individuals with aberrations at these loci are at increased risk for RCC. We also plan to conduct a substudy as a secondary aim to perform the loss of heterozygosite (LOH) analysis on 3p on the corresponding tumor tissue from Specific Aim 4. We hypothesize that there will be concordance in the severity of site-specific chromosomal lesions in target and surrogate tissues. This study will further the understanding of the genetic events leading to the development of RCC; explore the genetic basis for genetic instability and how it affects cancer risk; and eventually provide a means of identifying a subgroup who are most likely to develop RCC. 
Such individuals may then be targeted for intervention programs such as chemoprevention or dietary modification.
| |
Arylpyrrole compounds are highly effective insecticidal, acaricidal and nematocidal agents.
A is
W is
L is
M and R
R₁ is
R₂ is
R₃ is
R₄ is
R₅ is
R₆ is
R₇ is
Z is
n is
hydrogen, phenyl or C₁-C₆ alkyl optionally substituted with phenyl;
CN, NO₂, CO₂R₁ or SO₂R₂;
hydrogen or halogen;
are each independently hydrogen, C₁-C₄ alkyl, C₁-C₄ alkoxy, C₁-C₄ alkylthio, C₁-C₄ alkylsulfinyl, C₁-C₄ alkylsulfonyl, CN, NO₂, Cl, Br, F, I, CF₃, R₃CF₂Z, R₄CO or NR₅R₆ and when M and R are on adjacent positions they may be taken together with the carbon atoms to which they are attached to form a ring in which MR represents the structure
-OCH₂O-, -OCF₂O- or -CH=CH-CH=CH-;
C₁-C₆ alkyl, C₃-C₆ cycloalkyl or phenyl;
C₁-C₆ alkyl, C₃-C₆ cycloalkyl or phenyl;
hydrogen, F, CHF₂, CHFCl or CF₃;
C₁-C₄ alkyl, C₁-C₄ alkoxy or NR₅R₆;
hydrogen or C₁-C₄ alkyl;
hydrogen, C₁-C₄ alkyl or R₇CO;
hydrogen or C₁-C₄ alkyl;
n
S(O) or O and
an integer of 0, 1 or 2 which comprises reacting a compound of formula II
wherein A, W, L, M and R are described above with at least one molar equivalent of a compound of formula III
wherein X is Cl, Br or I in the presence of an acid and a solvent.
The present invention is directed to a process for the preparation of arylpyrrole compounds of formula I
wherein
The arylpyrrole compounds of formula I are highly useful as insecticidal, acaricidal and nematocidal agents and, further, are important intermediates in the manufacture of certain insecticidal arylpyrrole compounds.
o
o
o
o
o
o
Surprisingly, it has been found that pyrrole rings substituted at the α-positions may be effectively prepared in a single step process via the condensation of a suitable enamine with an α-haloketone. Thus, pyrrole compounds of formula I may be prepared by reacting an enamine of formula II with about one molar equivalent of an α-haloketone of formula III in the presence of an acid and a solvent at preferably an elevated temperature. The reaction is illustrated in flow diagram I.
The solvents suitable for use in the process of the present invention include organic solvents such as hydrocarbons and aromatic hydrocarbons having a boiling range of about 80 to 250C, such as benzene, toluene, xylene and the like, preferably toluene. Acids suitable for use in the invention include organic acids such as acetic acid, propionic acid and the like, preferably acetic acid. Reaction temperatures of about 80 to 150C are suitable, with 90-130C. being preferred.
The compounds of formula II wherein A is hydrogen may be prepared by reacting the appropriate benzonitrile of formula IV with a compound of formula V in the presence of a base as shown in flow diagram II.
The compounds of formula II wherein A is other than hydrogen may be prepared by reacting the appropriate aroyl compound of formula VI with a suitable amine of formula VII as shown in flow diagram III.
Arylpyrrole compounds of formula I may be useful as intermediates in the manufacture of insecticidal arylpyrroles. For example, compounds of formula I may be halogenated using a suitable halogenating agent such as a halogen, a hypohalite or the like to afford the corresponding 2-aryl-4-halopyrrole insecticidal agents of formula VIII. The reaction is shown in flow diagram IV.
By varying the substituents, A, W, L, M and R and the halogen, X, numerous possible arylpyrroles may be prepared from the intermediate compounds of formula I.
Preferred is the process according to the present invention wherein W is CN or NO₂; W is CN, A is hydrogen or methyl, L and R are hydrogen and M is halogen; W is CN, A is hydrogen or methyl, L is hydrogen and M and R are halogen.
In order to facilitate a further understanding of the present invention, the following examples are set forth primarily for the purpose of illustrating certain more specific details thereof. The invention is not to be limited, thereby except as defined in the claims. The terms IR and NMR designate infrared and nuclear magnetic resonance, respectively. The term HPLC designates high pressure liquid chromatography.
p
in
vacuo
o
o
A solution of -chloro-β-(methylamino)cinnaminitrile (10.0 g, 0.052 mol) in toluene and acetic acid is treated dropwise with 3-bromo-1,1,1-trifluoro-2-propanone (10.0 g, 0.052 mol) at room temperature, heated at reflux temperature for about 1 hour or until the disappearance of starting material by thin layer chromatography, cooled to room temperature and diluted with ethyl acetate. The organic phase is washed sequentially with water and 5N NaOH, dried (Na₂SO₄) and concentrated to give a brown oil residue. The residue is flash chromatographed (silica gel, hexanes/ethyl acetate, 80/20) to give the title product as a pale yellow solid 6.7 g (48% yield) mp 129.5C to 130.5C, identified by IR and NMR spectral analyses.
in
vacuo
o
A solution of 3,4-dichloro-β-(methylamino)-cinnaminitrile (7.0 g, 0.031 mol) in toluene and acetic acid is treated dropwise with 3-bromo-1,1,1-trifluoro-2-propanone (6.0 g, 0.031 mol) at room temperature, heated at reflux temperature for 5 hours, cooled and diluted with ethyl acetate. The organic phase is washed sequentially with water and aqueous sodium hydroxide, dried (Na₂SO₄) and concentrated to give a brown oil residue. The residue is flash chromatographed (silica gel, hexanes/ethyl acetate, 80/20) to give the title compound as a pale yellow solid, mp 130.2C, identified by mass spectral, IR and NMR analyses.
in
vacuo
o
A solution of β-(methylamino)-2-naphthalene-acrylonitrile (2.5 g, 0.012 mol) in toluene and acetic acid is treated dropwise with 3-bromo-1,1,1-trifluoro-2-propanone (2.3 g, 0.012 mol) at room temperature, heated at reflux temperature for 6 hours, cooled and diluted with ethyl acetate. The organic phase is washed sequentially with water and 5N NaOH, dried (Na₂SO₄) and concentrated to give a brown oil residue. The residue is flash chromatographed (silica gel, hexanes/ethyl acetate, 80/20) to give the title compound as a yellow solid, mp 134C, identified by mass spectral, IR and NMR analyses.
p
in
vacuo
o
o
o
l
A mixture of β-amino--chlorocinnaminitrile potassium salt (2.2 g, 0.01 mol) in acetic acid is treated dropwise with 3-bromo-1,1,1-trifluoro-2-propanone (1.91 g, 0.01 mol) at room temperature, heated at 100C for 1 1/2 hours, stirred at room temperature for 16 hours and diluted with water and ethyl acetate. The organic phase is washed sequentially with water and aqueous sodium hydroxide, dried (Na₂SO₄) and concentrated to give a semi-solid residue. The residue is crystallized in ethyl acetate/heptane to give the title compound as a brown solid, mp 238C to 240C, identified by ¹³C and HNMR analyses.
p
in
vacuo
o
o
l
A mixture of -chlorobenzoylacetonitrile (18.0 g, 0.1 mol), methylamine hydrochloride (10.13 g, 0.15 mol) and sodium acetate (12.3 g, 0.15 mol) in toluene is heated at reflux temperature (with a Dean Stark trap) for 5-6 hours, cooled to room temperature and diluted with water and ethyl acetate. The organic phase is separated and concentrated to a residue which is crystallized from toluene/heptane to give the title product as a pale yellow solid, 17.1 g, (89% yield), mp 111.0C to 113.0C, identified by ¹³C and HNMR spectral analyses.
p
l
A solution of -chlorobenzonitrile (13.8 g, 0.1 mol) in dimethoxyethane is treated with acetonitrile (4.93 g, 0.012 mol) at room temperature, treated portionwise with potassium t-butoxide (11.8 g, 0.105 mol), heated at reflux temperature for 1 hour, cooled to room temperature, diluted with ether and filtered. The solid filter cake is air dried and a 10 g sample is recrystallized from ethanol to give the title compound as a white solid, 3.9 g, identified by IR, ¹³C and HNMR spectral analyses.
in
vacuo
o
l
A solution of β-oxo-2-naphthalenepropionitrile (5.0 g, 0.0256 mol) in toluene is treated with methylamine hydrochloride ()2.6 g, 0.0384 mol), sodium acetate (3.15 g, 0.0386 mol) and a catalytic amount of acetic acid, heated at reflux temperature (fitted with a Dean Stark trap) for 6 hours, cooled, diluted with ethyl acetate and dilute hydrochloric acid. The organic phase is dried over Na₂SO₄ and concentrated to give a residue which is triturated under hexanes to give the title compound as a yellow solid, 3.1 g (58% yield) mp 138C, identified by IR, NHMR and mass spectral analyses.
p
in
vacuo
o
o
o
o
A solution of 2-(-chlorophenyl)-1-methyl-5-(trifluoromethyl)pyrrole-3-carbonitrile (5.70 g, 0.02 mol) in chlorobenzene is treated with bromine (3.52 g, 0.022 mol), heated at 80C for 20 hours, cooled to room temperature, treated with additional bromine (3.52 g, 0.022 mol) and heated at 100C until reaction is complete by HPLC analysis. The reaction mixture is cooled to room temperature and diluted with ethyl acetate and water. The organic phase is washed with aqueous sodium metabisulfite, dried (MgSO₄) and concentrated to afford a solid residue. The residue is recrystallized from ethyl acetate/heptane to give the title product as a white solid, 6.50 g (89.4% yield), mp 126C to 129C.
p
t
t
o
o
o
o
o
l
A solution of 2-(-chlorophenyl)-5-(trifluoromethyl)pyrrole-3-carbonitrile (20.0 g, 0.0739 mol) in monochlorobenzene is treated with -butylhypochlorite (19.6 g, 0.087 mol), heated at 70C for 2 hours, treated with additional -butylhypochlorite (2.0 g, 0.009 mol), heated at 80C to 82C for 1 hour, cooled to room temperature, diluted with heptane and filtered. The filter cake is air-dried to give the title product as a pale solid, 18.5 g, (82.5% yield), mp 242.5C to 243.0C, identified by ¹⁹F and HNMR spectral analyses.
EXAMPLE 1
2-(p-Chlorophenyl)-5-(trifluoromethyl)-pyrrole-3-carbonitrile
EXAMPLE 2
Preparation of 2-(3,4-Dichlorophenyl)-1-methyl-5-(trifluoromethyl)pyrrole-3-carbonitrile
EXAMPLE 3
Preparation of 1-Methyl-2-(2-naphthyl)-5-(trifluoromethyl)pyrrole-3-carbonitrile
EXAMPLE 4
EXAMPLE 5
EXAMPLE 6
EXAMPLE 7
Preparation of β-(Methylamino)-2-naphthaleneacrylonitrile
EXAMPLE 8
EXAMPLE 9 | |
Q:
js not passing form data for php mail
I'm a beginner attempting to create a HTML webpage. I'm using a free online template and trying to create a Contact Page. The contact calls a php script to send an email of the captured fields. I can get this to work when I send the email as pure php with no javascript or ajax. However when I try to use the javascript with the ajax code, the contents of the web form are not being passed. Two near identical issues have been raised here but I am finding the javascript to complicated for myself to understand as a beginner. The slight differences in the js has resulted in hours of trying to resolve without success.
js deleting submitted form data
PHP form post data not being received due to jQuery
The HTML code is
<div class="col-md-4 col-sm-12">
<div class="contact-form bottom">
<h2>Send a message</h2>
<form id="main-contact-form" name="contact-form" method="post" action="sendemail.php">
<div class="form-group">
<input type="text" name="name" class="form-control" required="required" placeholder="Name">
</div>
<div class="form-group">
<input type="email" name="email" class="form-control" required="required" placeholder="Email Id">
</div>
<div class="form-group">
<textarea name="message" id="message" required class="form-control" rows="8" placeholder="Your text here"></textarea>
</div>
<div class="form-group">
<input type="submit" name="submit" class="btn btn-submit" value="Submit">
</div>
</form>
</div>
The PHP script is called sendemail.php
<?php
header('Content-type: application/json');
$status = array(
'type'=>'success',
'message'=>'Thank you for contact us. As early as possible we will contact you '
);
$name = @trim(stripslashes($_POST['name']));
$email = @trim(stripslashes($_POST['email']));
$subject = @trim(stripslashes($_POST['subject']));
$message = @trim(stripslashes($_POST['message']));
$email_from = $email;
$email_to = '[email protected]';
$body = 'Name: ' . $name . "\n\n" . 'Email: ' . $email . "\n\n" . 'Subject: ' . $subject . "\n\n" . 'Message: ' . $message;
$success = @mail($email_to, $subject, $body, 'From: <'.$email_from.'>');
echo json_encode($status);
die;
The javascript is as follows
// Contact form
var form = $('#main-contact-form');
form.submit(function(event){
event.preventDefault();
var form_status = $('<div class="form_status"></div>');
$.ajax({
url: $.post(this).attr('action'),
beforeSend: function(){
form.prepend( form_status.html('<p><i class="fa fa-spinner fa-spin"></i> Email is sending...</p>').fadeIn() );
}
}).done(function(data){
form_status.html('<p class="text-success">Thank you for contacting us. We will reply as soon as possible.</p>').delay(3000).fadeOut();
});
});
There are two issues, the first being that the form data doesnt pass when using the javascript code. The second is that it displays the message twice and sends two emails. I think the second issue is related to the php script calling the function again.
Help & guidance will be really appreciated, I am a beginner only attempting a small challenge.
A:
The mail form appears to be deliberately disabled. It took me a while to fix it.
The code below will make it work. I hope this helps.
type : "POST",
cache : false,
url : $(this).attr('action'),
data : $(this).serialize(),
}
});
});
| |
In 2020, there were more than 1.2 million suicide attempts, and almost 46,000 people died by suicide in the United States.1 In addition, almost 40 million Americans live with migraine disease.2 Unfortunately, people living with migraine have higher rates of suicidal ideation. Pain intensity, migraine type, age and other comorbidities play a role in the relationship between suicide and migraine.
Suicidal Thought Versus Suicidal Ideation
Suicidal thoughts may be a common occurrence but it is important to note that there is a difference between a passing thought and active suicidal ideation, which is when a person has thoughts with a plan and means.
Risk Factors for Suicide
Researchers have identified a number of risk factors that may make someone more susceptible to suicide. About 90% of people who die by suicide have an underlying mental health condition such as anxiety, depression, schizophrenia, etc. but there are often other factors involved as well.3 See other risk factors below.
Medical History
- One of the strongest risk factors for suicide is a history of an attempt
- Mental health conditions can increase risk of suicide3
- Living with a chronic pain condition such as migraine, especially migraine with aura
- History of a traumatic brain injury
- Family history of a suicide attempt. A study reported that a child of a parent who has attempted suicide is almost five times more likely to attempt suicide.4
Environmental Factors
- Childhood adverse events, such as physical and/or sexual abuse or trauma
- Domestic abuse
- Bullying/cyberbullying
- Stressful event such as divorce or loss of a loved one
- Exposure to another suicide
- Recent release from jail or prison
- Homelessness
- Foster care or adoption
Other Factors
- Men are at higher risk of suicide than women
- A close friend history of suicide or suicide attempts5
- Young adults between the ages of 10 – 35 have the highest suicide rates6
The Relationship Between Suicide and Migraine
Association of Suicide Risk With Headache Frequency Among Migraine Patients With and Without Aura – Lin et al.
This study found that people who live with migraine experience higher levels of suicidal ideation compared to healthy controls.7 Of note, people who have migraine with aura were found to be 5.8 times more likely to attempt suicide compared to those without aura.7 Among people living with chronic migraine with aura, the study found 47.2% of people had suicidal ideation and 13.9% attempted suicide.7 In addition, the number or frequency of migraine attacks were correlated with people who experienced an aura rather than other types of migraine.7 Other factors that were also associated with suicide in people who live with migraine were a low education level and a high depression score (based on the Beck Depression Inventory scale).7
Depression, Suicide and Migraine
Depression is a significant comorbidity among people living with migraine. More information about the relationship between depression and migraine can be found here. According to Dr. Dawn Buse at the AMD 2021 symposium, those with depression are 3.4 times more likely to develop migraine, and conversely those with migraine are 5.8 times more likely to develop depression. Regarding the relationship between suicide and depression, about 90% of people who die by suicide have an underlying mental health condition such as anxiety, depression, schizophrenia, etc.3 It is important to note that depressive symptoms are common during a migraine attack, and people who are at a high risk for suicide may need additional support and/or monitoring during their attacks.8
Migraine in Adolescents
Suicide is one of the leading causes of death among adolescents.9 A study that looked at suicidal ideation in adolescents with migraine aged 13-15 found that those with migraine had a higher frequency of suicidal ideation (16.1%) compared to those without migraine (6.2%).10 The adolescents who had migraine with aura showed the highest risk of suicidal ideation (23.9%).10 The study also assessed the number of headache days in relation to suicidal ideation. Those who had 7-14 headache days per month experienced the highest rate of suicidal ideation.10
Migraine, Pain Intensity and Suicide
As previously discussed, migraine with aura, comorbid depression and the number of headache days can all contribute to suicidal ideation. But what about pain intensity? Breslau et al. assessed a group of people with migraine disease, non-migraine headache and people who never had a headache above mild severity at baseline and then again at two years.11 The researchers found that the risk of suicide attempts increased by 17% with each 1-point increase in headache severity on a pain intensity scale (0-10).11 They also found that baseline head pain level was higher in people who attempted suicide (7.58) versus people who did not attempt suicide (5.18).11 This highlights the need for adequate migraine treatment, frequent pain assessment by a clinician and a sufficient migraine action plan.
Commonalities Between Migraine and Suicide
- History of trauma including childhood physical and/or sexual abuse is a risk factor for both migraine and suicide
- Veterans have higher suicide rates and are more likely to experience migraine and other headache disorders than non-veterans.12,13
- LGBTQI individuals are more likely to live with migraine and also have a higher rate of suicide.12
- Mental health conditions are more prevalent among people who die by suicide and also in people who live with migraine disease. See the Migraine Comorbidities Library for more information on migraine and mental health conditions.
- Both suicide and migraine are more prevalent in those with mental health conditions
- The pathophysiology between migraine and suicide likely involves dysfunction of the hypothalamic–pituitary–adrenal (HPA) axis and abnormalities in serotonin.7
- A variation in a dopamine gene has been associated with suicide, migraine with aura, major depression, and generalized anxiety disorder.7,14
- Fibromyalgia is a comorbid condition to migraine and suicide. A study found that fibromyalgia was a predictor of suicidal ideation and attempts in patients with migraine. There was a higher suicide risk in people living with migraine without aura, migraine with aura and chronic migraine.15 Read about the relationship between migraine and fibromyalgia here.
Warning Signs
In addition to risk factors, people may exhibit warning signs that can alert others of potential suicidal behavior.
- Talking about dying or having no reason to live
- Feeling like a burden or feeling hopeless or trapped
- Selling or giving away items
- Withdrawing from family and friends
- Saying goodbye to loved ones
- Sleeping too much or too little
- Engaging in risky behavior such as excessive drinking, using drugs, etc.
For more warning signs, visit the American Foundation for Suicide Prevention.
Treatments
Treatment varies depending on the situation. If someone has attempted suicide, it is recommended to call 911 (in the USA). The person will be brought to the emergency room and evaluated by a clinician. They may also be kept under observation depending on the threat of harm to themselves or to other people. If someone has suicidal ideation, it is recommended to contact the Suicide Hotline (988). They may also be brought to the emergency room depending on the evaluation of emergency personnel.
Treatment is often multidisciplinary and may involve therapy, intervention and/or medication.
- Talk therapy, cognitive behavioral therapy for suicide prevention (CBT-SP) and grief therapy are commonly used
- Different types of interventions may include: safety planning and crisis response planning
- Antidepressants, antipsychotic or anti-anxiety medications such as are often used if the person has a comorbid mental health condition. Antidepressants have a black box warning, and people with suicidal ideation should be closely monitored when beginning an antidepressant
View more information about treatments here.
Ketamine is an emerging treatment for both suicidal thoughts and migraine. Ketamine was found to have benefits for the acute treatment of suicidal ideation and has also been used for the treatment of migraine.16 You can learn more about the benefits of ketamine here. Psychedelics have also been found to decrease suicidality as well as migraine frequency and pain.17,18
What Type of Doctor Should I See for Migraine and Suicide Risk?
Migraine is best treated by a clinician who specializes in headache medicine. A psychiatrist, pain psychologist, clinical health psychologist, rehabilitation psychologist or other mental health professional is typically involved in the assessment and evaluation of a suicidal person.
A Note To Patients and Providers
People who live with migraine, particularly migraine with aura, are at the highest risk for suicide. Other risk factors for suicide among people with migraine include: comorbid mental health conditions, increased pain intensity and frequency of migraine attacks. Clinicians should regularly screen for suicidal thoughts and/or previous suicide attempts and re-evaluate as necessary. Patients should report suicidal ideation, increased pain or previous attempts. Treatment will vary based on the individual. For more information about suicide, visit the American Foundation for Suicide Prevention.
What Happens When Someone Calls the Crisis Line? By Yuri Cárdenas
I was a San Francisco Suicide Prevention Crisis Line counselor, before chronic migraine stopped me, and I was also a Crisis Line caller. The people who answer the calls are there to support you through a time of crisis, or if you just need someone anonymous to talk to. You’ll first hear a recording with some options, and then the calls are automatically routed to counselors in your area. You can expect to be evaluated for the level of risk of suicide. It is important to note the possibility of police or other emergency responders being called if someone is in imminent danger of hurting themselves or others. That can be really detrimental, especially for people of color. If you are concerned about this risk, visit our resources section below. Remember, there is nothing to be ashamed of and the feeling will pass, even when you are sure it will go on forever.
Thank you to our sponsor Lundbeck!
Lundbeck, a global pharmaceutical company based in Denmark and founded in 1915, is tirelessly dedicated to restoring brain health, so every person can be their best. Lundbeck has a long heritage of innovation in neuroscience and is focused on delivering transformative treatments that address unmet needs in brain health.
For more information, visit: http://www.lundbeck.com/us
Resources
- Risk Factors and Warning Signs
- What We’ve Learned Through Research
- Interventions and Treatments for Suicide
- Migraine and Suicide Blog
- US Veterans and service members can call 988 and then press 1, or text 838255, or chat using veteranscrisisline.net/get-help-now/chat/
- The US Suicide and Crisis Lifeline also has a Spanish-language phone line: 1-888-628-9454
- 988 – New Nationwide Number for the Suicide Crisis Hotline
Additional Resources from NPR19
- Blackline – Peer support hotline for Black, Brown and Indigenous people
- Kiva Centers – Daily online peer support groups
- Peer Support Space – Virtual peer support groups twice a day Monday through Saturday
- Project LETS – Text support for urgent issues that involve involuntary hospitalization
- Trans Lifeline – Peer support hotline for trans and questioning individuals
- Wildflower Alliance – Has a peer support line and online support groups focused on suicide prevention
References
- https://www.cdc.gov/mmwr/volumes/71/wr/mm7108a5.htm
- https://headaches.org/facts-about-migraine/
- https://afsp.org/what-we-ve-learned-through-research
- JAMA Psychiatry. 2015 February ; 72(2): 160–168. doi:10.1001/jamapsychiatry.2014.2141.
- Bilsen J (2018) Suicide and Youth: Risk Factors. Front. Psychiatry 9:540. doi: 10.3389/fpsyt.2018.00540
- https://www.medicinenet.com/what_age_group_is_the_most_suicidal/article.htm
- Lin YK, Liang CS, Lee JT, Lee MS, Chu HT, Tsai CL, Lin GY, Ho TH, Yang FC. Association of Suicide Risk With Headache Frequency Among Migraine Patients With and Without Aura. Front Neurol. 2019 Mar 19;10:228. doi: 10.3389/fneur.2019.00228. PMID: 30941087; PMCID: PMC6433743.
- https://www.sciencedirect.com/science/article/pii/S0165032721011289
- https://www.cdc.gov/nchs/fastats/adolescent-health.htm#:~:text=Leading%20causes%20of%20deaths%20among,Suicide
- Migraine and suicidal ideation in adolescents aged 13 to 15 years.
- https://www.researchgate.net/profile/Richard-Lipton/publication/221690245_Migraine_Headaches_and_Suicide_Attempt/links/5bf32fb192851c6b27cadd27/Migraine-Headaches-and-Suicide-Attempt.pdf
- https://www.cdc.gov/suicide/facts/index.html
- https://news.va.gov/38022/veterans-who-deployed-are-more-likely-to-develop-migraines-or-headache-disorders/
- Peroutka SJ, Price SC, Wilhoit TL, Jones KW. Comorbid migraine with aura, anxiety, and depression is associated with dopamine D2 receptor (DRD2) NcoI alleles. Mol Med. 1998 Jan;4(1):14-21. PMID: 9513185; PMCID: PMC2230268.
- Liu HY, Fuh JL, Lin YY, Chen WT, Wang SJ. Suicide risk in patients with migraine and comorbid fibromyalgia. Neurology. 2015 Sep 22;85(12):1017-23. doi: 10.1212/WNL.0000000000001943. PMID: 26296516.
- Ketamine for the acute treatment of severe suicidal ideation: double blind, randomised placebo controlled trial doi:10.1136/bmj-2021-067194
- Zeifman RJ, Wagner AC, Watts R, Kettner H, Mertens LJ and Carhart-Harris RL (2020) Post-Psychedelic Reductions in Experiential Avoidance Are Associated With Decreases in Depression Severity and Suicidal Ideation. Front. Psychiatry 11:782. doi: 10.3389/fpsyt.2020.00782
- Schindler E. Psychedelics and clinical trials: What is the data for primary headache disorders? Presented at: American Headache Society annual scientific meeting; June 9-12, 2022; Denver.
- https://www.npr.org/sections/health-shots/2022/08/11/1116769071/social-media-posts-warn-people-not-to-call-988-heres-what-you-need-to-know
*The contents of this blog are intended for general informational purposes only and do not constitute professional medical advice, diagnosis, or treatment. Always seek the advice of a physician or other qualified health provider with any questions you may have regarding a medical condition. The writer does not recommend or endorse any specific course of treatment, products, procedures, opinions, or other information that may be mentioned. Reliance on any information provided by this content is solely at your own risk.
As the petals of a red rose turn from the light, what color will the red petal become?
What is the color of a shadow cast upon a bright red ball on a field of grass?
I find these exercises extremely useful.
In watercolor there are several variables that can alter the result of the mix. Always keep in mind the value of the pigment at full strength. More water will create the effect of light reflecting back into the shadow, causing it to be a lighter value, but still neutralized. Whether the pigment is opaque or translucent also alters the results. The amount of pigment used to gray the red will determine whether or not the object retains red as its local color. Too much of the pigment used to neutralize the red will turn it either too purple, too brown, or too gray. It will no longer look like a red object in or out of the light.
I like to see the subtle differences between the mixes. When working with a limited palette, it’s good to know what the possibilities are. I should have included Ultramarine Blue and Cobalt Blue just for reference, even though they are too far from being complements to be considered as pigments for graying vermillion. I try to keep in mind that graying is neutralizing, not simply altering the intensity of a color.
I prefer graying pigments using near-complements rather than perfect complements, since perfect complements often take the life out of a color, neutralizing too well. By using near-complements I can suggest the lighting situation and nearby objects reflecting color back onto the object while at the same time creating harmony with the other palette colors.
Individuals express their creativity differently depending on their creative style. Creativity research has suggested that there may be distinct personality patterns linked to specific creative styles. The present study attempts to distinguish creative styles in terms of personality characteristics. The sample was composed of 65 eighth-grade students who attend a middle school in a rural area in New Jersey. Creative style was evaluated with the Kirton Adaption-Innovation Inventory (KAI). The KAI is designed to measure the degree to which individuals are “adaptive” or “innovative” in their cognitive style. Personality characteristics were evaluated using the Personality Research Form-E (PRF-E). Descriptive statistics were calculated followed by Pearson correlations to determine the interrelationships between the variables. A multivariate analysis of variance design was used to assess the statistical significance of the effects of creative style and gender on the PRF-E personality characteristics. A discriminant function analysis was conducted to measure the degree to which the PRF-E variables contributed to predicting whether individuals were more innovative or more adaptive. Separate univariate analyses of variance were computed to compare the various mean scores for adaptors and innovators. The results indicated that the personality characteristics of individuals were significantly different depending on whether they had an adaptive or innovative creative style. Adaptors tended to have high needs for cognitive structure and order, with a low need for impulsivity. Innovators tended to have high needs for autonomy and impulsivity, with a low need for cognitive structure. Significant differences were not found between adaptors and innovators with regard to the personality characteristics of affiliation, social recognition, change, and defendence. Significant differences were not found between males and females with regard to the personality characteristics. The results suggest that the educational methods and teaching styles of classroom instruction should vary so that adaptors and innovators may optimally benefit from instruction. Adaptors may be most productive when they derive their answers from an existing paradigm, receive specific assignments, structure, and support. Innovators may work best when they are spontaneous and allowed to function independently.
Subject Area
Personality|Cognitive therapy
Recommended Citation
Alter, Claudia Elizabeth, "Creativity styles and personality characteristics" (2001). ETD Collection for Fordham University. AAI3003020.
The Premier League has finally decided to allow all the teams to make five substitutions and this landmark change will come into effect from the 2022/23 season.
The English top division had introduced this rule back in May 2020 to help the teams manage their players better during the pandemic. Back then, the league was just restarting after an abrupt halt which meant teams had to play three or even four matches in a span of eight to nine days. The five substitutions allowed all the sides to rest their players, thereby avoiding unnecessary injuries.
While all the other European leagues decided to continue with five subs, the EPL was surprisingly the only top-division league to abandon the plan, because some clubs thought a bigger squad would give an unfair advantage to the bigger teams.
Now, the Premier League shareholders have voted to bring back the five substitutes rule but the change will only come into effect from the next campaign.
The official statement read: “Clubs agreed to change the rules relating to substitute players.
“From next season, clubs will be permitted to use five substitutions, to be made on three occasions during a match, with an additional opportunity at half-time.
“A total of nine substitutes can be named on the team sheet.”
Premier League Shareholders met today and discussed a range of matters.
Clubs agreed to change the rules relating to substitute players. From next season, clubs will be permitted to use five substitutions.
Full statement: https://t.co/Ub985Gl3Lj pic.twitter.com/T27WXiXbUM
— Premier League (@premierleague) March 31, 2022
Will the five substitutes rule help the Premier League sides?
Almost all the managers from the bigger clubs have come out to support this decision, but this is also because their clubs will benefit the most.
“I’m glad at the end we have unified a criteria for the whole of Europe,” Arsenal boss Mikel Arteta had said, as quoted.
“It doesn’t slow the game down too much and I think it’s good. It gives players the opportunity to be on the pitch, which is what they want to do.”
The Saints manager has also come out in support of this change, despite the claims of an unfair advantage.
Hasenhuttl said, as per the Daily Echo: “It’s no surprise, we have spoken a lot about it that for us it would be a gamechanger with the way we play – it helps for us, definitely.
“There are some arguments from the smaller clubs not to do it because of the squad difference between us and the bigger clubs, but I see it always as an advantage.”
It was the sides outside the top six who had initially opposed the amendment of the substitution rule, but they have finally caved in.
We can understand why the smaller teams are so worried because there’s no denying that the bigger sides will get a bigger boost due to this rule change.
Teams like Man United, Man City and Liverpool work with really big squads so naming nine quality substitutes will be really effortless for them.
But the sides who traditionally finish outside the top ten will find it really challenging to fill all the spots and many of them will eventually have to name youngsters from the academies to fill the gap.
Naming nine players on the bench just for the sake of it and using nine robust substitutes tactically are two totally different things and as time passes by, this gap is just going to widen.
Far fewer players moving out on loan:

Another major drawback of this amendment is that far fewer players will now be sent out on loan. This might help some footballers get more game time with their parent club, but several smaller sides depend on loan signings to add creativity and that X factor to their squad.
The Premier League will now introduce the use of five substitutes from next season.
Managers dream! 😍 pic.twitter.com/27rCGKcy16
— 90min (@90min_Football) March 31, 2022
If clubs like Norwich cannot sign a Brandon Williams or Palace can’t get someone like Conor Gallagher, it will be next to impossible for them to stay in the top tier for long periods of time.
In short, these fringe players will be much better off going to the smaller clubs and showcasing their talents rather than sitting on the bench for weeks on end just to play the final five or 10 minutes of games.
The bigger Premier League sides needed this:
Not everything is against the top-six though because these sides also play in Europe which means by the end of the season, they play a lot more matches than teams like Brentford, Watford or Brighton.
Combine the Champions League and the Europa League with domestic tournaments like the FA Cup and the Carabao Cup and there’s an immediate justification in favour of the giants. Getting more subs for their weekend games will really help such teams keep their players fresh for the crucial midweek ties.
A massive boost for the players:
No matter how much we argue for or against the 20 teams, we are sure all the players will be really happy with this rule.
Since the pandemic hit, many players have been playing almost non-stop and with the World Cup coming up, most of them will not even get proper rest during the summer.
Gruelling schedules have resulted in a lot more injuries, so this rule change will massively help avoid unnecessary injuries.
Will the rule be eventually rolled back?
Billionaire owners are coming into the Premier League, which means more money is going to be pumped in, and if by any chance FFP is done away with, then the big sides will massively exploit this five-substitution rule.
The men behind the curtains are opening up the doors for more entertaining football but in doing so, they are invariably helping the bigger sides to get even more triumphant.
The job of the Premier League stakeholders is not to help the big sides get bigger but to make sure all 20 sides get a level playing field. This will not only help such teams stay afloat but will also make sure that the ultra-competitiveness of the Premier League is not washed away.
We don’t think this rule is going away anytime soon, and the best we can hope is that the big sides do not exploit it unnecessarily and are still willing to help out the minnows as well as their backup players.
In computer science, streaming algorithms are algorithms for processing data streams in which the input is presented as a sequence of items and can be examined in only a few passes, typically just one. They may also have limited processing time per item. These constraints may mean that an algorithm produces an approximate answer based on a summary or "sketch" of the data stream. Though streaming algorithms had already been studied by Munro and Paterson as early as 1978, as well as by Philippe Flajolet and G. Nigel Martin in 1985, the field was first formalized and popularized in a 1996 paper by Noga Alon, Yossi Matias, and Mario Szegedy.
The Flajolet-Martin algorithm was introduced by Philippe Flajolet and G. Nigel Martin; CloStream was proposed by Yen et al. Why is it useful? The key challenge in data stream mining is extracting valuable knowledge in real time from a massive, continuous, dynamic data stream in only a single scan.
The example is taken from Data Mining: Concepts and Techniques by Han and Kamber. 1. Learning step: the training data is fed into the system to be analyzed by a classification algorithm. It occurs when a component has access to inputs of different sizes. The algorithms we are going to describe act on massive data that arrive rapidly and cannot be stored.
In the next chapter, we show a practical example of how to use MOA with some of the methods briefly presented in this chapter. A data stream algorithm is not allowed random access but can retain a small amount of information about the data it has seen so far. The Flajolet-Martin algorithm approximates the number of unique objects in a stream or a database in one pass.
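The one-pass counting idea can be made concrete with a minimal sketch of a Flajolet-Martin-style estimator. This is a toy under simplifying assumptions: it uses a single std::hash function and the classic 2^R / 0.77351 correction, whereas practical versions average many hash functions (or use LogLog/HyperLogLog refinements).

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <functional>
#include <string>

// Toy Flajolet-Martin sketch: remember the maximum number of trailing zero
// bits seen in any item's hash; roughly 2^R distinct items are needed before
// a hash with R trailing zeros becomes likely.
class FlajoletMartin {
    int maxTrailingZeros_ = 0;
public:
    void add(const std::string& item) {
        std::size_t h = std::hash<std::string>{}(item);
        if (h == 0) return;                      // an all-zero hash carries no rank information
        int r = 0;
        while ((h & 1u) == 0) { h >>= 1; ++r; }  // count trailing zero bits
        maxTrailingZeros_ = std::max(maxTrailingZeros_, r);
    }
    double estimate() const {
        return std::pow(2.0, maxTrailingZeros_) / 0.77351;  // FM correction factor
    }
};
```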
The volume of data arrives in real time; the above imposes the need for a smart counting algorithm. Such algorithms work on a stream of data in a single pass. It will have two input parameters, which both supply point coordinates (Stream A and Stream B). The main algorithms in data stream mining are classification, regression, clustering, and frequent pattern mining. The labels in this machine-learning training data indicate whether that particular example (set of data records) represents a good or bad set of sensor values.
We use a multiset data structure with two iterator-type pointers, left and right; as and when we insert an element into the multiset, we update these pointers so that they point at the middle element(s) of the sorted stream.
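A minimal C++ sketch of that idea follows; the class and member names are illustrative. Two multiset iterators track the middle of the sorted contents, so each insert costs O(log n) and the median query is O(1).

```cpp
#include <iterator>
#include <set>

// Running median of a stream: a std::multiset plus two iterators (left, right)
// kept on the middle of the sorted contents. For an odd count both point at
// the median element; for an even count they sit on the two middle elements.
class RunningMedian {
    std::multiset<int> s_;
    std::multiset<int>::iterator left_, right_;
public:
    void insert(int x) {
        auto it = s_.insert(x);                  // inserted at the upper bound of x
        if (s_.size() == 1) { left_ = right_ = it; return; }
        if (s_.size() % 2 == 0) {                // odd -> even: middles spread apart
            if (x < *left_) left_ = std::prev(left_);
            else            right_ = std::next(right_);
        } else {                                 // even -> odd: middles converge
            if      (x < *left_)   right_ = left_;
            else if (x >= *right_) left_ = right_;
            else                   left_ = right_ = it;  // landed between the middles
        }
    }
    double median() const { return (*left_ + *right_) / 2.0; }  // O(1) query
};
```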
Stream data algorithms sometimes cannot process the data more than once. Let us take an example to understand the algorithm. Data matching is a problem without a clean solution. A data streaming algorithm A takes S as input and computes some function f of stream S.
A streaming algorithm needs to see each incoming item only once. In this example, the class label is the attribute i. Ensembles for Data Stream Mining. Give the updating-buckets approach of the DGIM algorithm.
Data Stream Algorithms. We start with three real-life scenarios motivating the use of such algorithms. Data Streams: Models and Algorithms primarily discusses issues related to the mining aspects of data streams. When a new voter comes, if he matches any candidate in the pool, then we increment that candidate's counter by one.
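The candidate-pool counting just described is the core of the Misra-Gries frequent-items summary; below is a minimal C++ sketch (names illustrative). Note the third case, which the text elides: when the pool is full and the new item matches no candidate, every counter is decremented.

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>

// Misra-Gries frequent-items summary with k-1 counters: any item occurring
// more than n/k times in a stream of length n is guaranteed to remain in the
// candidate pool (with an undercounted tally) after one pass.
class MisraGries {
    std::unordered_map<std::string, long> counters_;
    std::size_t k_;
public:
    explicit MisraGries(std::size_t k) : k_(k) {}
    void add(const std::string& item) {
        auto it = counters_.find(item);
        if (it != counters_.end()) {
            ++it->second;                          // matches a candidate: increment
        } else if (counters_.size() < k_ - 1) {
            counters_.emplace(item, 1);            // a free slot: new candidate
        } else {
            // No match and no free slot: decrement every counter, dropping zeros.
            for (auto jt = counters_.begin(); jt != counters_.end(); ) {
                if (--jt->second == 0) jt = counters_.erase(jt);
                else ++jt;
            }
        }
    }
    const std::unordered_map<std::string, long>& candidates() const { return counters_; }
};
```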
CloStream is an algorithm for incrementally mining closed itemsets from a data stream.
This volume focuses on the theory and practice of data stream management, and the novel challenges this emerging domain poses for data-management algorithms, systems, and applications. A short introductory chapter provides a brief summary of some basic data streaming concepts and models, and discusses the key elements of a generic stream query processing architecture. Subsequently, Part I focuses on basic streaming algorithms for some key analytics functions, e.g.
S. Muthukrishnan, “Data Streams: Algorithms and Applications,” Foundations and Trends in Theoretical Computer Science, 2005.
They are derived from motivating data stream applications and are representative of currently known data stream algorithms.
We study the emerging area of algorithms for processing data streams and associated applications.
BACKGROUND

Field of Invention

The present application relates to a system for allowing a user to imagine, create, improve, and test a recipe of perfume.

SUMMARY
In an embodiment, a scent dispensing device is provided, comprising: a plurality of containers, each containing a different scent ingredient; a driving system configured to separately cause each of the plurality of containers to dispense a quantity of the respective scent ingredient; a delivery system configured to transport any dispensed quantities from the scent containers to a common receiving area for dispensing onto to a single medium.
In an embodiment, the dispensed quantity of each unitary drop of the respective scent ingredient is less than or equal to 2 μL.
In an embodiment, the dispensed quantity of each unitary drop of the respective scent ingredient is greater than 2 μL, such as less than or equal to 4 μL, or less than or equal to 10 μL.
In an embodiment, a total quantity of the dispensed quantity of the respective scent ingredients onto the single medium is less than 200 μL.
In an embodiment, the driving system includes an electric motor that drives a gearhead which pushes a piston to dispense a quantity of a scent in the respective container.
In an embodiment, an encoder is coupled to the electric motor and is configured to count step movements of the gearhead, wherein the electric motor controls the dispensing of the quantity of the scent based on the count detected by the encoder.
In an embodiment, the encoder is configured to detect a total quantity of the scent dispensed by the respective container based on an amount of step movements that have been performed by the gearhead.
In an embodiment, a movement of the piston causes an amount of the scent liquid to be pushed through a capillary which is connected to a tube which leads to the common receiving area.
In an embodiment, an internal diameter of the tube is between 0.5 mm and 3 mm.
In an embodiment, an internal diameter of the tube is between 0.7 mm-1 mm.
In an embodiment, a system is provided comprising: a user terminal configured to receive an input of a recipe to form a scent based on one or more quantities of different scents chosen from a plurality of different scent ingredients; and a scent dispensing device that includes: a communication interface configured to receive the recipe from the user terminal, a plurality of containers, each containing one of the plurality of different scent ingredients; wherein the scent dispensing device is configured to separately cause the plurality of containers to dispense a quantity of a respective scent ingredient onto a single medium according to the recipe.
In an embodiment, after the plurality of containers dispense the quantity of a respective scent ingredient onto the single medium according to the recipe, the user interface is configured to allow the user to revise the recipe.
In an embodiment, after the plurality of containers dispense the quantity of a respective scent ingredient onto the single medium according to the recipe, the user interface is configured to allow the user or manager of the system to store the recipe into a memory.
In an embodiment, after the plurality of containers dispense the quantity of a respective scent ingredient onto the single medium according to the recipe, the user interface is configured to allow the user to request a large volume bottle of perfume be created according to the recipe.
In an embodiment, after the plurality of containers dispense the quantity of a respective scent ingredient onto the single medium according to the recipe, the user interface is configured to allow the user to provide feedback regarding the user's opinion of the created scent.
DETAILED DESCRIPTION

FIG. 1 shows a scent dispensing device 100 according to an embodiment. The scent dispensing device is configured to dispense a small amount of fragrance/scent ingredient from scent receiver 101, which is presented as an opening in the exterior housing of the device 100. The device 100 also may include a communication interface, which is shown as a Universal Serial Bus (USB) interface in FIG. 1, but it may be another type of communication interface, as will be described below. The device includes a cover 103 which allows access to the scent containers when released (see FIG. 4 below). Also shown are paper strips 104, which may be inserted into the scent receiver area 101; however, other items may be inserted to receive the scent drops, such as a user's finger or a container. The exterior housing includes one or more buttons 105 for opening the exterior housing cover to access the internal components of the device.
A system, which includes the device 100, is based on several scent containers and very precise dispensers able to create perfume samples at the level of 100-200 μL, and it functions to help users create and progress in perfume making. The system enables a user to try different blends and go back and forth to arrive at a satisfactory recipe in a few minutes and without any effort. It is noted that while the term “user” is commonly used throughout this specification, it may refer to a customer, employee, manager, or any other position with respect to the location of the system. The system further enables an understanding of the tastes of different users.
As will be discussed below, the device 100 may include:
- 6 (or more) scents
- 6 high precision dispensers
- the ability to make blends at the level of drops (100 μL)
- a unitary drop (example precision) of 2 μL; the unitary drop precision is not limited to this example, and may be larger, such as on the order of equal to or less than 4 μL, or even equal to or less than 10 μL
Each container delivers its drops in the same area but separately (the distance between the ends is around 1 mm). The containers are designed so that there is no risk of pollution or contamination from one dispenser to another.
FIG. 2 shows a detailed view of the inside of the device 100. The device includes scent containers 208 as mentioned above. The scent containers are in cartridge form and are configured to be dispensed with a screw pushing a piston. Preferably, the diameter doesn't exceed 20 mm, and more preferably is between 5 and 18 mm. In case containers with different volumes are needed, the system should keep the diameters in the same range as defined above, but the height of the containers may be adapted according to volume needs. The containers' height may be between 1 cm and 1 m, and preferably between 4 cm and 20 cm. FIG. 2 shows six scent containers disposed in a radial fashion around a center axis of the device 200, but more or fewer scent containers may be used. While the arrangement of the containers is shown in a radial manner, other arrangements are possible.
The device 100 further includes an electric motor 206 that drives a planetary gearhead 207. The gearhead 207 is coupled to a ball screw 210 and a ball nut 211, via a coupling bellow, which in turn are coupled to the screw 204 which pushes the piston of the scent container in an upward direction, so that turning the gearhead causes vertical movement of the screw against the piston to dispense a certain amount of the scent.
An encoder 205 that is coupled to the motor 206 can precisely count the step movements of the gearhead to ensure a precise dispensing operation. The quantity of scent is checked by the device to warn when a container is low. This is accomplished by checking the amount of dispensing that has been performed, based on the encoder 205 detecting the number of step movements that have been performed by the gearhead and reporting the data to the microcontroller.
Reference switch 209 is configured to initialize the encoders at the start of receiving a new container.
A motor controller 202 is provided for each motor 206 to actuate the motor with control signals.
Each scent container is held in place by a cartridge lock 203. The movement of the piston of the container causes an amount of the scent liquid to be pushed through a capillary at a cartridge interface 201. The capillaries interfacing with each scent container lead to a capillary output 205 which releases the liquid drops into the scent receiver 101 (shown in FIG. 1).
FIG. 3 shows that each capillary is connected to a tube 301 which leads to the center axis of the dispenser, and then to the capillary output 205. The internal diameter of the tube is small, but not too small, in order not to create too high a pressure loss. Preferably, the internal diameter is higher than 0.5 mm, and more preferably higher than 0.6 mm. It can be larger, but it should not be too high in order to avoid vacant space. Thus, preferably the internal diameter should not exceed 3 mm. Preferably, the internal diameter is in the range of 0.7 mm-1 mm. The six tubes go down to just above the chamber, so that drops can fall onto the paper (or a container, finger, etc.). The length of the tube can be in the range of a few centimeters to a few decimeters and should not exceed 1 m.
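To see why the bore matters, the laminar pressure loss can be estimated with the Hagen-Poiseuille relation, sketched below; the viscosity and flow-rate figures are assumptions for illustration only.

```cpp
#include <cmath>
#include <iostream>

// Hagen-Poiseuille pressure drop for laminar flow in a round tube:
// dP = 8 * mu * L * Q / (pi * r^4). The r^4 dependence is why a very narrow
// bore is penalized: halving the radius raises the drop sixteen-fold.
double pressureDropPa(double viscosityPaS, double lengthM,
                      double flowM3PerS, double radiusM) {
    const double pi = 3.14159265358979323846;
    return 8.0 * viscosityPaS * lengthM * flowM3PerS / (pi * std::pow(radiusM, 4));
}

int main() {
    double mu = 1.2e-3;       // ~ethanol-like fragrance carrier, Pa*s (assumed)
    double L  = 0.2;          // 20 cm tube (within the range stated above)
    double Q  = 2e-9 / 60.0;  // one 2 uL drop delivered over a minute, m^3/s
    std::cout << pressureDropPa(mu, L, Q, 0.4e-3) << " Pa\n";  // 0.8 mm bore
}
```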
FIG. 4 shows a top view of the device 100 when the cover 103 is released. To change the scent cartridges, the top of the cover 103 is pushed down by the user. Then, the cover 103 automatically rises (spring effect) and the user may manually rotate the cover to reveal the opening 401 for access to the scent container cartridges 208. Once the cover 103 is in the position shown in FIG. 4, by pushing any of the six buttons 402, cartridges are released and can be removed and changed.
FIG. 5A shows a hardware diagram of the device 100 according to an embodiment. The device 100 may include a processor 501, a memory 502, an optional display or output indicator 503, and a power source 504. These elements are in addition to the communication interface 102, the motor controllers 202, and the encoder 205 already discussed above.
The hardware can be designed for reduced size. For example, the processor 501 may be a CPU or micro-controller as understood in the art. For example, the processor may be an APL0778 from Apple Inc., or may be other processor types that would be recognized by one of ordinary skill in the art. Alternatively, the CPU may be implemented on an FPGA, ASIC, PLD or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, the CPU may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above.
Further, the communication interface (I/F) 102 can include circuitry and hardware for communication with a user terminal, as will be discussed below. The communication interface 102 may be a universal serial bus (USB) interface, or it may include a network controller such as a BCM43342 Wi-Fi, Frequency Modulation, and Bluetooth combo chip from Broadcom, for interfacing with a network.
Alternatively, as shown in FIG. 5B, the processor 501 is not required to centrally coordinate the functionality and control of the device 100. In this case, the solution is based on the circuitry of the 6 encoders 205A-F. One of the six encoders (205A) receives the communication from the communication interface 102, and then acts as a hub for the other five encoders. The six encoders are connected as a chain operating under a local addressing protocol such that each encoder understands which encoder is the destination of any received instructions.
According to an embodiment, the device 100 can be used to test blends of various scents easily and instantly while using a small quantity (100 μL). It enables a user to try, test, and retry, going back and forth toward the creation of a preferred scent.
FIG. 6 shows that the device 100 may be connected to a user terminal 601 via the communication interface 102. The combination of the device 100 and the user terminal may be considered to be part of a “system.”
FIG. 7 shows a user interface 700 for the device 100 that is displayed on the user terminal 601 after the user terminal establishes a connection with the device 100. The interface allows creation of a recipe on a display screen. The screen may display scents as an image (such as a fruit, flower, etc.) to invoke the type of scent. Buttons 701, 702 may be used to add or increase the number of drops of each scent. After various quantities of one or more of the scents contained in the device 100 are selected, the user may provide an input to the “SNIF” button to cause the combined scent to dispense from the device 100. The user then may revise the recipe (after sniffing) using the same buttons described above. The user may store the recipe in memory at button 703 (“save creation”). Also, the system may store all the past recipes that have been made/tried. The user may try to purchase the scent at button 704. Such a purchase can be made by transmitting the recipe to a manufacturer or seller who can custom-make a bottle according to the recipe. In this regard, another machine can be used to create a custom bottle based on programmable dispensers. A printing of the recipe can also be ordered.
The device 100 or the user terminal 601 may store basic data such as certain predetermined recipes. The system can have tutorials to explain each scent and the rules to follow for blending. While explaining the scents, the system can deliver them so that the client understands them better (by a quotation and/or by a ranking).
FIG. 8 shows a basic process performed by the system, which includes the scent dispensing device 100 and the user terminal 602. At step 801, the user terminal 601 receives a user selection of a scent combination and quantities to form a recipe. This step may be performed by the operations on the user interface described above. The user terminal then transmits the recipe to the device 100, and then at step 802, the device 100 activates dispensing of scents from the scent containers, according to the recipe, onto a medium such as paper, a user's finger, or a container. Next, an optional step 803 may be performed where the user terminal receives feedback from the user regarding the dispensed scent(s). This feedback may take many forms, as will be described below. Additionally, the feedback is not necessarily performed at the user terminal, and it may be performed on a separate device (such as the user's smartphone) and uploaded to a commercial server (such as a cloud server).
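As a hypothetical sketch (the specification does not define a message format), the transmitted recipe could be as simple as a list of container/drop-count pairs; all names below are illustrative assumptions.

```cpp
#include <iostream>
#include <vector>

// Hypothetical sketch of the recipe sent from the user terminal to the device:
// a list of (container index, number of unitary drops). The message layout and
// names are assumptions; the specification does not define a wire format.
struct RecipeLine {
    int container;  // 0..5, one of the six cartridges
    int drops;      // unitary drops (on the order of 2 uL each)
};

void dispense(const std::vector<RecipeLine>& recipe) {
    for (const auto& line : recipe) {
        // On the device, each line would translate into a step count for the
        // corresponding motor controller; here we only log the action.
        std::cout << "container " << line.container
                  << ": dispense " << line.drops << " drop(s)\n";
    }
}

int main() {
    dispense({{0, 3}, {2, 1}, {5, 2}});  // e.g. 3 drops of scent A, 1 of C, 2 of F
}
```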
While the above describes the basic structure and operation of the scent dispensing device 100, the advantage the device provides for easily creating perfume scent samples in small doses leads to innovations in determining user perfume preferences, at the individual level and over multiple users. This functionality will be described below.
[Advanced Functions for Utilizing the Dispensing Device]

The system may include advanced functions, such as a procedure for beginning a session with a user. The system can propose a session to understand the tastes and wishes of the client, or to understand whether the client can be classed in one or another classification of clients. Based on the session, the system may deduce the best scents to use.
For instance, a questionnaire may be given to the user which asks for personal information, such as age, habits, and favorite foods. The questionnaire may ask the user for their preferred and disliked commercialized perfumes, and further ask what is liked and not liked in these (in terms of notes, level of strength, level of lasting).
A combination of the scents contained in the device 100 may be referred to as a “rack.” Using a given rack, the system can propose a session to understand the tastes and wishes of the client. It may propose to begin with:
- A “preloaded recipe” to be used as a pre-recipe
- An original recipe that the system creates, to be used as a pre-recipe
- Advice (such as: “you should use scent X as main and scent Y as auxiliary . . . ”)
The system can use the information and conversion tables to deduce a rack, pre-recipes, or advice, such as:
- By automatic programs. For instance, if the user likes several blends, the system can calculate an “intermediate” recipe or “extrapolated” recipe.
- By help of humans. For instance, an expert can deduce from the answers a recipe to propose.
The system may ask if the user is a “beginner” at using the device 100 and then propose help for the session. In particular, it can ask to activate functions like “quotation” or “automatic advice.”
The system may include advanced functions, such as an “automatic helping” function. During tests and sniffs, the system can propose frequent questions and answers (e.g.: Q = what to do if the blend is too sweet; A = avoid using scents X and Y together). The proposal can be targeted if the system interprets a particular disequilibrium.
The system may propose to return to one recipe if the client is far from what is expected through the questionnaire. The system will identify, among the different recipes done so far, the one that is the most appropriate.
During tests and sniffs, the system can detect if the client feels lost (for instance, if the client moves from one recipe to another without logic, or if the client takes more and more time to go from one sniff to another). In this case, the system can then select a few of the recipes (the ones identified as the most different) and have the client sniff them again. The system may propose to activate the quotation function. The system may propose particular protocols, such as asking the user to stop and resume another time, asking for an expert, resuming from scratch, or coming back to the questionnaire.
During tests and sniffs, the system can detect if the client makes always the same blends. In this case, it can then propose to explore other blends or it can propose to activate the quotation function.
During tests and sniffs, system can detect if the client comes back often to a same group of blend or use always a specific scent. In this case, the system can then propose pre-recipe based on this group or this scent
An advanced function mentioned above is the use of a “quotation.” During tests and sniffs, the client may quote his/her creation (this can be a score, or a score plus comments). In this case, the system may use the data to identify, among the different recipes done so far, the one that is best quoted. It can propose to come back to the “best three”: sniff them again, and propose help around them. The system may detect the case where most of the quotations are bad, and then propose different actions, such as asking the user to stop and resume another time, asking for an expert, resuming from scratch, or coming back to the questionnaire.
The system may use the data to identify other clients who had the same tastes and then propose to work on their “best recipe” (i.e., if a client A creates a recipe and provides a good quotation, and the system detects that another client B did the same recipe and made a similar quotation, then the system can propose to client A the best formula of client B). The same can be done on the basis of bad quotations (i.e., if a client A does a recipe and provides a bad quotation, and the system detects that another client B did the same recipe and made a bad quotation, then the system can propose to client A the best formula of client B).
The system may use the data to make a profile of the client and compare it to other profiles in order to propose different specific directions and a series of best proposals.
During tests and sniffs, the system may detect that the client does not provide a quotation. In this case, the system may request a quotation, or it may propose to reproduce the blends (the most significantly different) and then request the client to provide a quotation.
Also contained in the advanced functions of the system are steps for finalizing the session. At the end of a session, or when a quotation appears high, the system may:
- propose to test on the skin;
- provide the user an option to ask friends what they think;
- propose to make a mini-sample to take home; or
- propose for the user to wear it during the day and decide later on a next action.
At the end of the session, when the client is happy with her creation, the client or the system can propose to store the recipe in the memory of the system. The system has a memory and can store the recipe under the user's name.
The user may ask for, or the system can propose, a bottle for the user or for another person as a gift. The system can send the recipe to a human or to a machine for the preparation of the bottle. The client can give, or the system can propose to give, rights to others to use her recipe (this can be limited to a selection of friends). The system sends the data to a server.
The user may ask for, or the system may propose, a series of variations of the recipe (i.e., a spicy one, a sweet one . . . ). The system can automatically make proposals to sniff in the moment, or send a proposal afterwards to the client.
At the end of the session, if the client is not perfectly happy with her creation, the user can ask for someone to finalize her blends. For that, the system will ask which of her blends she wants to be finalized. She can identify one or another, or ask the specialist to choose the blend to work on. She can allow the specialist to use the questionnaire data to do this work.
In any case (happy or not happy), the client can resume her work in another session, and all data can be stored in order to be able to reuse all blends to work again. The system may propose for other users to quote one client's recipe. The system may open a group for co-design, as follows:
- The group can be created randomly
- The group can be created on a logic of questionnaire
- The group can be created on a logic of accordance of quotations
- The group can be created on a logic of discordance of questionnaire or quotations
- The group can integrate a specialist
Each user in the group has the possibility to provide a quotation and to create on the basis of one or more recipes. The system can also make calculations on the blends and propose to the group to quote and create. The system may provide a user with a list of blends already done by her, by others, or by a selection of others, or warn the user if she goes in a direction already tested by her, by others, or by a selection of others.
The system can identify that certain recipes are well appreciated and put them in the preloaded library. The system may deduce the scents that are less or more used for optimizing the rack. The system can identify that certain recipes are original and put them in the preloaded library.
The system may also identify users who create blends that most users also create, or users who create blends that are very original. In that case, the system can propose that these “special clients” become creators, co-creators, or testers of commercial perfume products.
The system can progress the rack (update the types of scents in the device) by proposing to a client to do another session with a changed rack of new scents (such as 6 new scents). The system can also propose to do another session using a rack with certain new scents while keeping scents already used. For instance:
- The client can select, or the system can propose to select, the scents that were the most appreciated during previous sessions (i.e., 4 of the 6), and the system can propose to add new scents (2)
- The client can ask, or the system can propose, to use one or more recipes as “one” of the scents
- The client can ask, or the system can propose, to combine two or more scents into one and then leave a new place for additional scents
- The client can ask, or the system can propose, to deconvolute one scent into two scents (i.e., if one scent contains orange and lemon, the system can propose to have one scent with orange and one with lemon, in order to let the client choose the best proportions)
- The client can ask, or the system can propose, to change the formula of one scent to make it more appropriate (i.e., I want to keep citrus, but less bitter)
The system may make a calculation on different wishes of clients to identify the best rack to offer. A new rack can be open or limited to a group.
The system may perform an advanced learning function. The system may store the quotations of one client in a matrix in X dimensions (X = number of scents; if the client has worked with only one rack, X = 6; if he/she has worked on several racks, X > 6). From this matrix, the system can deduce:
- The best appreciated blends
- The best appreciated scents
- The best combinations of scents
- The blends that were not tested
- The scents that were not tested
- The racks that were not tested
From these results, the system may propose:
- Commercial products whose odor is close to the best appreciated blends
- Commercial products which use the best appreciated scents
- Commercial products which use the best combinations of scents
- The client to test blends that were not tested
- The client to test scents that were not tested
- The client to test racks that were not tested
By comparison of the matrices of different clients, we can deduce:
- Groups of clients that seem to have the same appreciation, and use this to select products to propose
- Propose to some clients the products (cosmetic or not) that others like
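One plausible way to compare such matrices, sketched below with illustrative names, is a cosine similarity over per-scent quotation vectors; the specification does not prescribe a particular similarity measure.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative only: cosine similarity between two clients' per-scent
// quotation vectors, one possible basis for "clients with the same tastes".
// The vector representation is an assumption, not part of the specification.
double cosineSimilarity(const std::vector<double>& a, const std::vector<double>& b) {
    double dot = 0, na = 0, nb = 0;
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return (na == 0 || nb == 0) ? 0.0 : dot / (std::sqrt(na) * std::sqrt(nb));
}
```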
Regarding the role of scents, it is noted that a “note” may be equivalent in role to a scent. Each of the 6 scents may be a note or a combination of notes. Some scents can be a top note, others a heart note, others a base note. Each of the 6 scents can be a combination of notes with equilibrium (top note + heart note + base note).
In some cases, notes are not equivalent in role. There is one scent that is central, such that it will always be used. For instance:
- One is central and the others are used to change the smell of the central
- The central note can be an existing perfume or a simplified existing perfume, and the others can be non-existing in the perfume market
- The central note can be an existing perfume or a simplified existing perfume, and the others can be perfumes on the market
The following example describes a scenario in which a combination of users is involved in the creation of a recipe. In this scenario, one person (P1) wishes to create a perfume that will suit her and also another person (P2, a husband for instance). In the solution, P1 will do a series of trials and quotations. P2 will do the same. The system will identify a scent which suits both P1 and P2, taking into account the following possibilities:
- It will be hard for P1 and P2 to like the exact same formulation.
- P1 and P2 tend to like one of the six scents
- P1 and P2 tend to like one combination of the six scents
- Or it could be that P1 and P2 tend to dislike one of the six scents
- P1 and P2 tend to dislike one combination of the six scents
The Artificial Intelligence of the machine, knowing the tastes of P1, may be able to perform the following:
- propose to P2 scents, or combinations of scents, to test in priority
- propose to P2 to increase or decrease specific scents in his formula
- propose to P2 to start from a collection of specific formulations (it can be one of the formulations of P1) and propose to P2 to work on it
The AI of the machine, knowing the tastes of P1, can also let P2 work on his own and, after a while, cross all the data and perform the following:
- propose to P2 scents, or combinations of scents, to test in priority
- propose to P2 to increase or decrease specific scents in his formula
- propose to P2 to start from a collection of specific formulations (it can be one of the formulations of P1) and propose to P2 to work on it
FIGS. 9-11 show different exemplary models and scenarios in which the device 100 may be utilized for either creating a personalized perfume for a customer or for developing a perfume product for a company.
FIG. 9 shows a first model, “Model #1”, in which at step 901 a perfume company may select a perfume that is already on the market, and then in step 902 the company will determine characteristic scents (CS) already in the perfume. In the model shown in FIG. 9, there are five CSs, but it may be more or fewer. In step 903, the company may contact fans of the selected perfume and invite them to the location of the device 100 (such as a perfume retail store). In step 905, the customer uses the device 100 as described above by adding CSs at different levels to obtain a desired perfume scent combination. In step 907, the user can obtain their personalized perfume formulation based on the results of using the device 100. Then, in either step 908 or 909, the customer may obtain a bottle of the personalized perfume by one of two methods: either at the store itself by hand or a special machine (step 908), or by sending the formulation to a remotely located factory so that the perfume can be sent to the customer via the store or by delivery directly to the customer's home.
FIG. 10 shows a second model, “Model #2”. Model #2 is similar to Model #1, except that instead of directly using the CSs based on what is actually used in the selected perfume (in step 1001) that is already on the market, the fans of the selected perfume are asked what scents they would like to experiment with in step 1002. In step 1003, the results are interpreted, either by an expert or a machine, to identify the CSs which will be used. Then steps 1004-1009 are similar to steps 905-909 above.
FIG. 11 shows a third model, “Model #3”, for performing “inter-perfumes personalization.” In this model, the company contacts customers who are fans of two or more perfumes P1 and P2 (step 1101), and the company identifies the CSs of perfume P2 (step 1102). The company then invites one or more of the fans of the two perfumes to use the device 100 (step 1103). Then, in step 1104, the customer works on adding CSs of P2 into P1 at different levels to get a desired perfume scent. This may be done by ensuring that the P1 perfume is always dispensed from one of the containers in device 100. Then steps 1105-1107 are similar to steps 907-909 above.
Variations of the above models may be used as necessary. For instance, the company may contact a group of customers who are fans of a perfume and ask the group members to work together with a combination of scents in the machine to obtain a consensus desired perfume. In that case, each group member may receive the personalized perfume via steps 907-909 above, or the company may use the results for testing and/or launching a new perfume for the general marketplace.
In another variation, the company may choose multiple predetermined scents, each being a simple perfume with top, middle, and base notes, and a customer can create blends of the multiple scents to obtain a personalized perfume. In this variation, the quality of the final perfume depends on the quality of each of the predetermined scents, which may help a company stand out from other scent-makers.
In another variation, instead of adding just the CSs from a selected perfume on the market, a scent note may be added to each of the CSs in each container, where the scent notes are predetermined to fit well with the selected perfume.
In another variation, an expert (guru) may be involved in the models described above. Such an expert does not even need to be co-located with the customer, but just needs to be able to reproduce the creation by the user based on using the same scents at the same levels as the user. After the expert reproduces the customer's creation, the expert can provide advice for proposed variations, which the customer then uses to produce a new customized perfume formulation.
Alternatively, instead of a human expert, a machine may be able to provide advice for a proposed variation to a customer's personalized perfume by comparing the customer's initial creation to perfumes created by other users. For instance, by using tables that link the differences between different users' creations, the machine is able to deduce a proposed variation to the user, which the customer may then use to produce a new customized perfume formulation.
In another variation, the company may select a “pre-perfume,” which is a high-quality perfume that is not yet finalized for selling on the market. CSs from the pre-perfume are used similarly to any of the models above. However, in this case, since there are no identified “fans” of the perfume available yet, the customers who are invited to use the device 100 may be contacted based on any number of characteristics. For instance, they may be invited based on their tastes in perfumes similar to the pre-perfume, or they may be invited based on their personal information, such as age and gender.
In another variation, a company may use multiple groups to find a perfume formulation that is ideal for the marketplace. In this case, a similar set of CSs may be given to different groups, and each group will make their own consensus perfume creation. All the creations may be formed into a collection of perfumes to be launched to market. Or, all creations may be analyzed to find a “mean” perfume for launching to market that reflects an average of the characteristics of each of the separate creations.
In another variation, both the company and the customer/client may combine any precious scents that are available separately to them into the device 100 to create one or more preferred perfume formulations for either the customer's personal use or for the company to launch to market.
The device 100 and the processes described above are not limited to being used for scents in perfumes, but can be used for scents in any number of applications, such as food, clothing, hair products, air fresheners, and others.
The principles, representative embodiments, and modes of operation of the present disclosure have been described in the foregoing description. However, aspects of the present disclosure which are intended to be protected are not to be construed as limited to the particular embodiments disclosed. Further, the embodiments described herein are to be regarded as illustrative rather than restrictive. It will be appreciated that variations and changes may be made by others, and equivalents employed, without departing from the spirit of the present disclosure. Accordingly, it is expressly intended that all such variations, changes, and equivalents fall within the spirit and scope of the present disclosure, as claimed.
DESCRIPTION OF THE DRAWINGS

The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:

FIG. 1 shows a scent dispensing device according to an embodiment.

FIGS. 2-3 each show a detailed view of the inside of the dispensing device according to an embodiment.

FIG. 4 shows a top view of the dispensing device when the cover is released according to an embodiment.

FIGS. 5A-5B show hardware diagrams of the dispensing device according to different embodiments.

FIG. 6 shows a system that includes the dispensing device and a user terminal according to an embodiment.

FIG. 7 shows a user interface for the dispensing device according to an embodiment.

FIG. 8 shows a basic process performed by the system which includes the dispensing device and the user terminal according to an embodiment.

FIGS. 9-11 show example methodologies which utilize the dispensing device to create a personalized perfume or a new perfume product according to different embodiments.
Jakarta, October 11, 2021 – In recent months, many media have reported on the energy crisis in Europe. In the UK, for example, many electric and gas utility companies went bankrupt and were forced to close. People are also seen queuing at gas stations to buy fuel. This phenomenon shows us that even countries with strong economies are still quite vulnerable to energy security issues.
The CASE for Southeast Asia Project held a discussion entitled “Energy Crisis in UK and Europe: Lessons Learned for Indonesia’s Energy Transition”, which invited speakers from the UK and Europe (11/10/2021). The discussion brought the Indonesian public into the conversation to learn important facts and findings about the energy crisis currently unfolding in the UK and Europe.
In the UK, the industrial and household sectors are quite dependent on natural gas. With the winter season approaching, the demand for gas is increasing as the need to heat homes also increases. This condition, in which a country relies heavily on energy sources that are vulnerable to global markets, raises a question: is this really an energy crisis, or is it a fossil energy crisis?
On this occasion, William Derbyshire, Director of Economic Consulting Associates (ECA), UK, explained that natural gas makes up as much as 42% of the UK's primary energy mix. William also showed data illustrating that the price of natural gas has risen steadily from 2017 to 2021, which has pushed up the selling price of electricity.
“If high fossil fuel prices are the problem, then the answer is reducing dependence on coal and gas, not adding more fossil fuels,” William said.
Based on this conclusion, renewable energy is a good solution for reducing dependence on fossil energy. But it is not without challenges: the UK, where wind power makes up 16% of the power generation mix, has several important points to note. For example, Gareth Davies, Managing Director of Aquatera, explained that wind farms in the UK have a fairly high degree of output variability.
Responding to this challenge, Gareth conveyed the need to conduct spatial analysis and planning to identify areas with sufficient wind potential, also taking into account historical climate data.
“By distributing wind power production over a wider geographic area, it will help improve energy security and balance the UK’s energy supply through renewable energy,” said Gareth.
In line with William’s statement on the importance of making an immediate energy transition, Dimitri Pescia, Program Manager Southeast Asia at Agora Energiewende, pointed out that in Germany, for example, the investment cost of building renewable energy plants is much lower than that of building fossil-fuel plants. In this context, Dimitri explained that investment in renewable energy can be considered a hedging strategy to minimize the risk of relying on fossil energy during the energy transition period over the next few years.
This discussion helped the public understand the real situation and the lessons that can be drawn for the energy transition process in Indonesia. Fabby Tumiwa, Executive Director of IESR, said that Indonesia needs to quickly adopt renewable energy to minimize the risk of an energy crisis caused by dependency on fossil energy. Fabby added that the development of Indonesia's abundant renewable energy potential needs to be accompanied by energy efficiency, the development of energy storage technology, and inter-island interconnectivity.
“It should be remembered that the current energy crisis is a fossil energy crisis. The volatility of fossil energy prices is very high, and an increase in fossil energy prices has an effect on other aspects,” said Fabby, emphasizing the real cause of the energy crisis in the UK and Europe.
Closing the discussion, Fabby expressed the urgency of the public understanding this issue in context so that there would be no panic in the community. “Indonesia itself does not need to worry about the energy crises that occur in Europe, China, Britain, and India, because Indonesia has the advantage of having planned its energy transition towards decarbonization much earlier,” concluded Fabby.
Watch the discussion again here:
Ban on water-wasteful buildings proposed
Minimum water efficiency requirements are to be set for all new homes, shops and office buildings under proposals announced this week.
The proposed measures are part of the Code for Sustainable Homes, the Government’s scheme aimed at addressing energy and water-related issues through planning rules, and could cut water use by 15-20%, ministers say.
The Government is now consulting on details of the proposals, including the exact level of efficiency that will be required and whether standards should be set for whole buildings or individual appliances such as water-efficient taps and dual-flush toilets.
Angela Smith, Minister for Sustainable Buildings, said: “This is an important step in transforming the way we use water in the home and the workplace.
“By installing products such as low flush toilets and water efficient taps in new homes we could reduce household consumption by 15-20 per cent,” she said.
Water use in the canteens and washrooms of new shops and offices – but not water used for industrial purposes and the like – will be subject to the same regulations.
“These are relatively cheap and effective ways to reduce water demand. These regulations will ensure that water efficiency becomes the norm in all new homes and workplaces,” said Angela Smith.
The measures for new homes will be part of a wider drive to cut water use in existing houses and new-build, following the recommendations of the Water Saving Group, said minister for Climate Change Ian Pearson.
“Setting minimum standards for new buildings will not deliver all the savings we need to make, but will provide a strong signal to consumers and to manufacturers of water appliances, fixtures and fittings that they have a role as part of that joined up action, and that we all have a responsibility to find ways of using water wisely,” he said.
The Environment Agency welcomed the proposals as a valuable contribution to the fight against climate change, but pointed out that the real challenge will lie in making existing homes more water-efficient.
The EA’s chief executive Barbara Young said: “In the past, building regulations have included minimum standards for energy, but not minimum standards for water efficiency. The consultation on bringing water efficiency into building regulations announced today is crucial to reducing future water use and we look forward to seeing further details.
“But building regulations is only part of the story – the Government also needs to encourage the use of much more water efficient fittings and appliances in buildings,” she said.
Efficient use of energy as well as water is needed if England is not to run out of resources as the Government proceeds with its planned housing expansion, she added:
“In many parts of the country – and particularly the south east – the current environmental infrastructure is struggling to cope with the existing level of demand.
“With the Government’s ambition to increase housing supply in England to 200,000 a year by 2016, this package of initiatives is crucial in recognising that careful planning and more sustainable homes are needed to accommodate the proposed number of new houses,” she said.
Details of the consultation on mandatory water efficiency standards for new homes can be found here.
Goska Romanowicz
Seeking out Forensic Science Courses is an exciting and ambitious choice. This is a rewarding career and one offering much opportunity for advancement and engagement. If you have got this far, reading this article, then you have some idea of what it might entail. Yet, it is possible this might be more informed by television shows and novels than anything else.
So, before we leap into telling you about the different routes into forensic science, we will give you some background to the career and the sort of day you can expect at work. It is always important to be fully informed before beginning the learning process, making sure the career will meet your expectations.
The forensic scientist is the collector and analyser of evidence from crime scenes. This is the bit you have seen on CSI: Crime Scene Investigation that likely appeals. This is where you will work in the field gathering the evidence, whether it is blood, fluids, hair, fibres, paint and glass fragments, tyre marks, or more. When not in the field you will be in the laboratory processing the findings and producing a written report. You will coordinate your work with the Crown Prosecution Service, making sure processes and procedures are followed so that convictions in court are secure. On rare occasions you may be expected to give evidence.
There are many specialisms in forensic science. You will take a route that follows your interest. You could be expected to process blood and DNA; you could be responsible for analysing handwriting; you could be involved in computer analysis and data recovery. Alternatively, you could stay in the field, collecting forensic evidence at crime scenes.
Your responsibilities as a general forensic scientist fall into three broad areas: chemistry, biology, or drugs and toxicology. You will need to be able to analyse samples and apply specific techniques used to uncover findings within the evidence. You will need to show close attention to detail, as the sifting and sorting of evidence could be at the minuscule level.
Attending the aftermath of crimes and accidents may at first seem glamorous – but as with all careers – the glamorous and exciting case is the exception. When you attend scenes, it could be a car accident or a robbery; it could be an assault, and occasionally it may be a murder.
You are not the investigator as such, you are the processor of findings – but you will be expected to liaise with other team members and other agencies – such as the police.
A lot of work will be done on a computer, analysing results and supervising the work of assistants. You will also be expected to keep up to date with new forensic techniques and be thoroughly prepared should you need to appear in court.
If the visiting of crime scenes is off-putting – it is worth knowing that some forensic scientists specialise in laboratory work – never leaving to enter the real world!
What Are The Working Hours For A Forensic Scientist?
Criminals are notoriously thoughtless and tend to fail to understand the need to be active between 9 am and 5 pm. More seriously, you could be called out to a crime scene at any time. You may be asked to work shifts, or you may be asked to be on call. You will more than likely work office hours – but you should expect to work some evenings and weekends – and maybe even be available in the middle of the night.
What Are The Different Types Of Forensic Scientist?
There are many types of forensic scientist, and your job role will depend on what you choose to specialise in. You could take a broad approach and specialise in one of three main areas: biology, which is the processing of blood, hair, DNA, etc.; chemistry, usually connected to crimes against property such as burglary and arson; or drugs and toxicology, which is the testing for restricted drugs and poisons, examining tissue specimens and processing samples relating to alcohol.
There are also specific specialisms such as toxicology, forensic psychology, podiatry, odontology and more. It is likely that you would become an expert in your field if you chose one of these specific routes.
What Can A Forensic Scientist Expect To Experience On The Job?
The role of a forensic scientist is not one of day-to-day excitement. You are behind the scenes and undertaking painstaking and quiet analysis and report writing. You are a key part of the process of convicting criminals – but you are not out actively investigating and arresting these criminals single-handedly – that is a job for detectives. You will liaise with detectives – passing on findings and analysis.
Only a few forensic scientists attend crime scenes – and technically their job title is pathologist. The work of the scientist is more often back at the lab identifying blood, DNA, and fibres. Therefore, you will be given a case, a set of evidence and a list of tests to run – you will be expected to know how to complete these tests to standard procedures that can be upheld in court, and then you will write a report. This is likely to be 95% of your role, day in, day out.
Being a forensic scientist is precise work – and often a lonely activity – even though you are part of a team and expected to work with other agencies. There are times when you will decide this is a tedious and repetitive role. There are other times when you decide it is too grim. However, largely speaking, forensic scientists find work rewarding because they are a crucial cog in achieving a conviction.
You will more than likely get your job satisfaction and interest from the science rather than the crimes. It is intense chemistry and biology – and the fascination will come from the ever-changing techniques and developments.
Saying this, you do need to take account of the fact that some of the work may be distressing. You are likely to see dead bodies and the consequences of some awful acts and accidents. Therefore, you should consider methods by which you can manage the stress and the distress.
Who Does A Forensic Scientist Work For?
Many companies in the private sector employ forensic scientists. These commercial companies in the UK include Cellmark Forensic Services, ESG and Eurofins Forensic services.
In Scotland, the processing of evidence such as DNA, fingerprints, etc. are done by the Scottish Police Authority Forensic Services.
It is also possible to be a specialist with the Metropolitan Police – become an SC&O – or a Specialist Crime and Operations Officer. There are also SC&O offices within local police forces. Alternatively, you could work for the government. There is DSTL (Defence Science and Technology Laboratory) and CAST (Centre for Applied Science and Technology).
It is also possible to be employed by universities and specialist laboratories – or in public health laboratories. The range of job contexts makes this an interesting career in itself, as it is possible to find your niche.
You are unlikely to find a single site where forensic scientists find jobs. Therefore, you are going to have to look at the relevant organisations who hire forensic scientists – as well as looking in industry publications such as New Scientist. You are likely to find a close link between industry and your university or your course provider. Therefore, once you are on a course you will be given a lot of guidance on how to progress into employment.
What Skills Does A Forensic Scientist Need?
As well as a detailed understanding, knowledge and application of scientific principles and strong IT proficiency – there are many soft skills you will be expected to master. You should be an excellent communicator. Remember part of your role will be liaising and ensuring that someone can action your work. You will therefore need to be a team player – and show strong interpersonal skills.
As a lot of the work you do will be self-managed, you will need to be a strong organiser of your work and excellent at time management. A lot of your work will be time sensitive and people will be waiting on your results.
Most importantly, you will need attention to detail and patience. There will be a lot of scanning through small details – looking for a needle in a haystack amongst a field of haystacks. It could be that your efforts will be fruitless for a long time. You need resilience to keep going.
A final thought: can you remain independent and unbiased? Are you able to be methodical and logical and follow the science? You will be expected to analyse results as a scientist, removing the personal and the emotional and reporting only on what the evidence presents.
How Much Does A Forensic Scientist Earn?
You will likely begin as an assistant or an associate. These positions start at approximately £20,000 annual salary. The salary of a forensic scientist can rise to somewhere between £25,000 and £45,000. As with any career where expertise and level of qualification are valued, the more specialised and the more desired your knowledge, the more you will likely be paid.
The average salary for all forensic scientists in 2018 is reported to be £26,832. With the hours you may work, this annual average wage comes out at approximately £10 per hour.
What Qualifications Do You Need To Become A Forensic Scientist?
The route into forensic science is academic. SC&Os are often police officers who have retrained – so there is a higher-than-normal rate of late entry into this area of forensic science.
It is likely that you will have taken A-levels in a combination of science-related subjects. As with any increasingly competitive sector of employment, it is a good idea to seek the highest grades in Biology and Chemistry at the very least. You will also benefit from taking Maths to a higher level. Taking a course, such as a level 3 forensic science course, could also be beneficial.
The essential qualification is a Forensic Science Degree, or more likely a postgraduate degree in forensic science once you have gained your science degree. You are more than likely going to have to continue to study as you work. The area of forensic study is ever-evolving and there is an expectation you keep up with the latest techniques. It is usual for forensic scientists to continue study to a master’s degree and PhD.
As this is a competitive field, you will be expected to undertake work experience. Some forensic scientists work as volunteers with police authorities to gain the necessary experience in the field. There is nothing wrong with sending out letters and CVs to agencies requesting experience or role shadowing. However, it is also possible to gain work experience placements in a hospital or research centre. It may be that you begin by working short-term contracts and agency work before being offered a full-time appointment.
What Are The Career Advancement Opportunities For A Forensic Scientist?
Once you have entered forensic science, the career prospects are excellent. The competition for your first role will be intense – and it is a good idea to be geographically flexible and offer evidence of the breadth of skills you will be expected to demonstrate. These other skills can be shown through volunteer work or your social interests and hobbies.
Promotion within forensic science will be based on your experience, on your continuing acquisition of qualifications and learning, on your proof of levels of responsibility and your appraisal reports. You will likely work for approximately five years at an entry level before you should expect to progress to the role of reporting officer. Once you become a reporting officer you will be able to take on your own caseload and you will be expected to deal directly with the police. At this level, you can also be called as an expert witness in court cases.
The next level is casework examiner. Here you will coordinate an area of speciality. You will supervise the work of others, attend conferences and be expected to research and publish articles in your field. You will become increasingly specialised and expert – and will on occasion be expected to attend the scenes of crimes.
There is also a managerial route through forensic science, as in most other professions. It is important that you research these roles to ensure that the job specification continues to fulfil your expectation of your chosen profession.
Step One: Is It The Job You Expect?
First, assess whether being a forensic scientist is the job you expect it to be. There is a lot of concern in forensic science organisations that television programmes such as Silent Witness, CSI and Prime Suspect have given a warped understanding of the role. Remember this is a scientist's position – and not an investigator's position. The day-to-day work could be a lot more repetitive than television suggests it might be.
You will need to gain A-levels in Biology and Chemistry, and ideally Maths and Computer Science. This range of A levels will give you the grounding needed for all aspects of forensic science. There are lots of other combinations of A-levels, based on science, that you can take that can lead to the same place. Therefore, if you have started your A-levels it is still possible for you to progress into forensic science, even if you have chosen different subjects. Even if you have taken a general science degree – there are still routes into forensic science from there.
You will need a science degree and postgraduate forensic science qualification, or you will need a degree in forensic science. There are a lot of forensic science courses now – so it is possible to gain a specialised degree without having to gain a post-graduate qualification. It is a competitive field, therefore the more detailed your knowledge and understanding, the more desirable you will be as a candidate for a post.
As with all competitive professions, you will need to demonstrate more than just academic ability. Therefore, you need to work hard on your CV and what it says about you. You should develop broad interests and hobbies that will show your commitment, your teamwork skills, your ability to balance work and life; your resilience and your levels of energy and enthusiasm.
You can get started as a forensic scientist straight from university, and this will largely depend on the links between your course provider and employers. More likely, you will need to seek work experience and/or volunteer work. You can shadow someone in role or offer to do the work for free for a period, to gain the all-important in-the-field understanding of the profession. Another route could be through short-term contracts and agency work.
In short, you will also need to work on this element of your CV before applying. You need to shape yourself into the perfect candidate for the post – amongst potentially hundreds of others.
Jobs for forensic scientists are not collated onto a single website. It is likely that you will need to scour the specialist magazines such as New Scientist, and the websites of organisations who employ forensic scientists. Your first role is likely to be an assistant or an associate.
Once in post, you will be expected to continue to learn. This is an explicit requirement of the job, as the technology and techniques are ever-changing. However, continued promotion in the field requires increasing specialism. It is more than likely that you will go onto further qualifications, whether it is a master’s degree or PhD in your specialist field.
Imagine you are watching Crimewatch or reading a newspaper and there is a conviction that was a result of the solid forensic work you have completed. Your work is responsible for people feeling safer and a victim being given a sense of justice served. This is the level of reward you could achieve as a forensic scientist.
There may be aspects of the role that are distressing – but largely the work is fine detail analysis of evidence – and the careful presentation of the results in a report.
This area of science is ever-changing and evolving and as such is fascinating for those interested in Biology and Chemistry in particular. You will be at the cutting edge of developments – and if you become specialised – you could be the person making these breakthroughs.
This is a well-paid profession that offers many opportunities for advancement and specialisation. | https://www.ncchomelearning.co.uk/how-to-become-a-forensic-scientist |
Alzheimer disease is characterized by abnormal protein deposits in the brain, such as extracellular amyloid plaques and intracellular neurofibrillary tangles. The tangles are made of a protein called tau comprising 441 residues in its longest isoform. Tau belongs to the class of natively unfolded proteins, binds to and stabilizes microtubules, and partially folds into an ordered beta-structure during aggregation to Alzheimer paired helical filaments (PHFs). Here we show that it is possible to overcome the size limitations that have traditionally hampered detailed nuclear magnetic resonance (NMR) spectroscopy studies of such large nonglobular proteins. This is achieved using optimal NMR pulse sequences and matching of chemical shifts from smaller segments in a divide and conquer strategy. The methodology reveals that 441-residue tau is highly dynamic in solution with a distinct domain character and an intricate network of transient long-range contacts important for pathogenic aggregation. Moreover, the single-residue view provided by the NMR analysis reveals unique insights into the interaction of tau with microtubules. Our results establish that NMR spectroscopy can provide detailed insight into the structural polymorphism of very large nonglobular proteins.
Competing interests. The authors have declared that no competing interests exist.
Thomas Aquinas, prominent philosopher and Catholic Priest of the Middle Ages, defined beauty as ‘something which gives pleasure when seen’ – or as more accurate translation might suggest, ‘when contemplated’. He believed that beauty is both objective (can be formulated and recreated) and requires active intelligence to be appreciated.
Much of what we think of as beautiful is likely the result of human design and creation – a painted masterpiece or an emotive musical composition, for example. But have you ever seen anything so stunningly, yet seemingly unnecessarily, beautiful that it has made you question how and why such a thing exists at all? You're not alone: beauty in existence has been the inspiration behind philosophical thinking for centuries!
Aquinas concluded that the only reasonable answer to nature’s naturally occurring beauty is an intelligent designer, or God, and that the only reason we are able to contemplate, recreate and find pleasure in it is because that God must be good and care about us.
Did you know that, with all of science's accumulated knowledge and understanding, no-one can yet explain why leaves are green? We know the thing that makes leaves green is chlorophyll. We know that chlorophyll works by absorbing light to fuel the photosynthesis needed to grow and sustain plants. But we also know that, of all the available colours on the natural spectrum, black is by far the most efficient at light absorption. Evolution teaches us that things exist as they are because, through a process of trial and error, they have simply become the most efficient at doing what they do and have effectively beaten all the others to it. Why is it, then, that leaves, which can come in all colours and designs, have remained predominantly green? As I said before, no-one really knows.
Imagine, for a moment, that leaves were predominantly black though. Imagine that evolutionary function stripped the world’s fields and forests and jungles and plains of their vibrant and lush green hues and replaced them with dark monochrome tones. Is it an image that brings you pleasure?
Perhaps one of the most comforting privileges of embracing a spiritual faith is not just being able to appreciate the natural and often needless beauty of creation – but also being able to be grateful to the designer who designed it to evolve that way.
Adolescent/Young Adult Programs
PROGRAM DESCRIPTION
Pacific Quest is an outdoor therapeutic program for struggling teens and young adults, located on the Big Island of Hawaii. Pacific Quest offers a clinical, yet holistic approach to treatment, moving beyond traditional wilderness therapy, and teaching sustainable life skills.
Pacific Quest’s proprietary therapeutic model is a groundbreaking approach that utilizes Horticulture Therapy to create concrete metaphors for its students as they cultivate their own health and happiness.
Pacific Quest cultivates Sustainable Growth in our students, in our families, in our communities, and in ourselves.
PRIMARY PURPOSE
Program Guide positions are responsible for the direct supervision, safety, health, and wellbeing of the students while implementing the program curriculum and facilitating therapeutic weekly treatment plans. The Guide role’s primary purpose is to learn the program and the necessary hard and soft skills while providing support to other members of the Program Guide team and following the direction of the Program Manager, Field Managers, and Program Supervisors.
ESSENTIAL DUTIES AND RESPONSIBILITIES
- Uphold safety, supervision, and monitoring of students in accordance with Program Policy
- Learn and articulate the Pacific Quest mission, guiding principles, and curriculum
- Effectively communicate with students in a group setting and direct student groups toward common goals
- Demonstrate flexibility with camp placement, program, location, and student supervision assignments
- Observe and identify the needs of individual students and the group to create a physically and emotionally safe and supportive environment
- Role model program expectations at all times, which includes upholding our diet and wellness standards while at work
- Build and maintain rapport with students while holding boundaries that further therapeutic outcomes
- Maintain client confidentiality
- Keep appropriate boundaries related to self-disclosure, avoid teaching or speaking about controversial beliefs while at work including but not limited to political, religious, medication, diet, or lifestyle choices
- Maintain professional appearance and attitude in accordance with Program Policy
- Ensure all work locations & common areas are clean and organized, as well as following all COVID-19 protocols
- Participate in professional development through mandatory and voluntary monthly in-service training
The Program Guide positions have the following roles that require additional expectations and job responsibilities. Movement between roles will be based on the discretion of the Management Team and will account for seniority, performance assessment, participation in training, and operational needs.
APPRENTICE GUIDE (First 90 days of employment)
- Gain knowledge and experience of Pacific Quest’s curriculum and operating procedures
GUIDE
- Share responsibility for facilitation of the daily activities including meal preparation, exercise, group meals and activities
ASSISTANT LEAD GUIDE
- Manage the student group in the absence of the Lead or Program Supervisor
- Prepare and deliver student downloads for your assigned students when needed
- Exhibit solid organization of the daily schedule; Able to plan and run a day effectively
- Demonstrate initiative and independence in your position
LEAD GUIDE
- Hold the schedule for the shift and ensure completion of daily tasks
- Collaborate with clinicians and field management to ensure completion of therapeutic objectives
- Delegate appropriate tasks to guide team members
MASTER GUIDE
- Fill in the absence of lead guides
- Facilitate on-site training and role-modeling for program guides
- Coordinate special program functions such as outings, family program, rites of passage, horticultural therapy
- Be an extension of the Management Team
WELLNESS GUIDE
- Ensure timely and documented medication distribution
- Be an extension of the Wellness Team in assessing and implementing client wellness plans
OVERNIGHT GUIDE
- Supervise students sleeping in the field or bunkhouses as needed
- Ensure sanitization of Pacific Quest facilities
LEAD OVERNIGHT GUIDE
- Ensure proper documentation of client sleeping patterns
- Administer nightly or PRN medications as needed
REQUIRED SKILLS/QUALIFICATIONS
- Ability to be a team player and form mutually respectful relationships with all employees
- Ability to maintain physical alertness and activity over the course of designated work hours
- Must provide and maintain current CPR/First Aid certification. CPR certifications must be from a classroom training, and no substitutions for First Aid certs (i.e. Wilderness First Aid, Wilderness First Responder) can be accepted
- Maintain a current NVCI (Non-Violent Crisis Intervention Training) certification provided by Pacific Quest
- Ability to treat all residents, families, and coworkers with dignity and respect
- Able to pass a thorough background check and drug screen
- Ability to work in a constant state of alertness and in a safe manner
- Maintain current physical and TB clearance
DESIRED SKILLS/QUALIFICATIONS
- Bachelor’s degree preferable, specifically in Psychology, Sociology, Social Work, Farming/Agroecology, or Outdoor Education/Leadership OR a minimum of two years relevant experience
- Experience is preferred in Rites of Passage, Education, Horticultural Therapy, Health and Wellness, 12 Step Recovery, Gardening, and/or experience in Therapeutic Settings such as Wilderness Therapy or Residential Treatment Centers
- Valid driver’s license with a clean record
- Preferred age 25 years or older
WORK HOURS
- Typical Guide Shifts are as follows:
- Apprentice and Guide: 7am-3pm or 3pm-11pm
- Assistant Lead and Lead: 7am-7pm or 11am-11pm
- Overnight and Lead Overnight: 7pm-7am or 11pm-7am
- Work hours may be adjusted due to operational needs
- Employees will clock in 10 minutes before their scheduled shift and be prepared to enter direct care at the start of their shift
- Clock out time may be earlier or later, depending on other responsibilities as assigned. These may include, but are not limited to: supervising students, accompanying transports, participating in Huli Ka’e activities, completing reports, completing student observation notes, and participating in meetings.
- If required to sleep at work, all waking hours count as work hours
PHYSICAL DEMANDS
The role of a Program Guide is physically demanding; please communicate in writing any accommodations you may need related to the following parameters. With or without reasonable accommodation, the physical and mental requirements of this job may include the following: seeing, hearing, speaking, and writing clearly; reaching with hands and arms; stooping, kneeling, crouching, and crawling; frequent sitting, standing, and walking; and climbing stairs and walking up inclines and on uneven terrain, which may be required for long periods of time. Additional physical requirements may include lifting and/or moving up to 50 pounds.
How to Apply
Please apply for this job using our online application system. | https://pacificquest.org/job-opportunities/program-guide-5/ |
The Exploring Leadership Programme is aimed at teachers who have completed their NQT year, have demonstrated that they are effective classroom practitioners and have a high commitment to professional learning, but as yet have limited or no experience of leadership.
Participants will gain a deeper understanding of effective leadership as well as a greater awareness of their own leadership strengths and areas for development. They will also have the opportunity to identify their next steps in pursuing leadership opportunities and further professional learning within their schools.
The programme consists of 5 sessions - two afternoons and three twilights.
Session 1: What is Leadership?
Consider aspects of management and leadership.
Participants will consider aspects of leadership within their current teaching roles.
We will look at case studies of effective leaders to draw out the underlying principles of effective leadership.
Session 2: Self Awareness
Participants will review outcomes from a 360 diagnostic leadership survey before exploring conscious leadership.
Session 3: Building Relationships
Considering the importance of positive relationships in effective leadership.
There will be the opportunity to consider frameworks for effective conversations, enrolling staff in shared visions and modelling positive behaviours.
Session 4: Managing the Demands of Leadership
Presentation of, and discussion about, a toolkit for supporting teachers in managing the demands of a leadership role, whilst maintaining the quality of their teaching.
Session 5: Reflections and Next Steps
Participants will present their reflections on leadership and the impact this has had on their own professional development before developing a plan of 'next steps'.
There will be a range of inter-sessional and school-based tasks. These will explore aspects of leadership and personal understanding and observations of leadership activity and behaviours. The tasks will include: | https://cpd.otsa.org.uk/courses/bookings/c_printdetail.asp?cid=10386&iscancelled=0&curpage=&keyword=&ds=&unconfirmed=&cs=&subid=&keystage=&sdate=&searchcode=&asearch=&tutid=&estid=&sday=&smonth=&syear=&targetid=&cal=&calday=&calmonth=&calyear=&caldate=&submonth=&subyear=&list=&palist=&frompage=&a=&b=&c=&d=&s_leaid= |
Science has always been one of the most important pursuits for people who are interested in growth, and it has therefore been a major influence on the world. Where science is concerned, there is always much to explore. All over the world, scientific breakthroughs are considered highly significant, and they are what has kept the world moving forward. Specific scientific breakthroughs have had a major impact on many fields, and many things now work better because of them. Quite a lot about scientific breakthroughs may interest you, and it is a good idea to know the different types of breakthrough that may shape the future. Fortunately, this doesn't have to be difficult. What you want to do is make sure you understand more about the different types of scientific breakthroughs, both current ones and those expected in the future. Being open-minded about scientific breakthroughs is always a good thing, and getting detailed information about them is a good idea.
One of the main current scientific breakthroughs is the detection of gravitational waves, which have now been properly studied. Albert Einstein, in 1916, is considered to be the first person to have predicted them. Basically, they are ripples in space-time caused by the motion of massive objects. These ripples can be measured, especially when the movements are produced by very large objects, for example, stars. The subatomic particle known as the Higgs boson is another of the major discoveries made in our era. The Higgs boson has been considered the last piece of the puzzle in completing the Standard Model of particle physics. In addition, rRNA analysis has also been considered a major discovery of this age. Genetic structures are studied in things like viruses through the use of rRNA analysis, because such analysis can focus on organisms other than man.
Gene editing is also considered one of the biggest discoveries, and it is a major topic in its own right. It may also be interesting to know more about the different types of future scientific breakthroughs; this may be something you can explore. There may be specific things in relation to this that you want to understand more about, and there are sources that can give you this information today.
CROSS-REFERENCE TO RELATED APPLICATIONS
The present application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2010-284229, filed Nov. 21, 2010, the entire contents of which are incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a performance apparatus and an electronic musical instrument, which generate musical tones, when held and swung by a player with his or her hand.
2. Description of the Related Art
An electronic musical instrument has been proposed, which is provided with an elongated member of a stick type with a sensor installed thereon, and which generates musical tones when the sensor detects a movement of the elongated member. In particular, the elongated stick-type member has the shape of a drumstick and is constructed so as to generate musical tones, as if percussion instruments were generating sounds, in response to the player's motion of striking drums and/or a Japanese drum.
For instance, U.S. Pat. No. 5,058,480 discloses a performance apparatus, which has an acceleration sensor installed in its stick-type member, and generates a musical tone when a certain period of time has lapsed after an output (acceleration sensor value) from the acceleration sensor reaches a predetermined threshold value.
But in the performance apparatus disclosed in U.S. Pat. No. 5,058,480, generation of musical tones is controlled simply based on the acceleration sensor values of the stick-type member, and therefore the performance apparatus has the drawback that it is not easy for a player to change musical tones as he or she desires.
Further, Japanese Patent No. 2007-256736 A discloses an apparatus which is capable of generating musical tones having plural tone colors. The apparatus is provided with a geomagnetic sensor and detects the orientation of a stick-type member held by the player based on a sensor value obtained by the geomagnetic sensor. The apparatus selects one from among plural tone colors for a musical tone to be generated, based on the detected orientation of the stick-type member. In the apparatus disclosed in Japanese Patent No. 2007-256736 A, since the tone color of a musical tone is changed based on the direction in which the stick-type member is swung by the player, various swing directions must be assigned to the various tone colors of musical tones to be generated. As the kinds of tone colors to be generated increase, the angle range in which the stick-type member must be swung to generate each tone color becomes narrower, and therefore it becomes harder to generate musical tones of the tone color desired by the player.
SUMMARY OF THE INVENTION
The present invention has an object to provide a performance apparatus and an electronic musical instrument, which allow the player to easily change musical tone elements including tone colors, as he or she desires.
According to one aspect of the invention, there is provided a performance apparatus, which comprises a holding member which is held by a hand of a player, a space/parameter storing unit which stores (a) information for specifying plural spaces each defined by imaginary side planes, at least one of which is perpendicular to the ground surface, as plural sound generation spaces, and (b) parameters of a musical tone corresponding respectively to the plural sound generation spaces, a position-information obtaining unit provided in the holding member which obtains position information of the holding member, a holding-member detecting unit which detects (a) whether a position of the holding member, which is specified based on the position information obtained by the position-information obtaining unit, is included in any of the plural sound generation spaces specified by the information stored in the space/parameter storing unit, and (b) whether the holding member has been moved in a predetermined motion, a reading unit which reads from the space/parameter storing unit a parameter corresponding to the sound generation space, in which the holding-member detecting unit determines that the position of the holding member is included, and an instructing unit which gives an instruction to a musical-tone generating unit to generate a musical tone specified by the parameter read by the reading unit at a timing of sound generation, wherein the beginning time of sound generation is set to a timing when the holding-member detecting unit has detected that the holding member has been moved in a predetermined motion.
According to another aspect of the invention, there is provided an electronic musical instrument, which comprises a performance apparatus and a musical instrument unit which comprises a musical-tone generating unit for generating musical tones, wherein the performance apparatus comprises a holding member which is held by a hand of a player, a space/parameter storing unit which stores (a) information for specifying plural spaces each defined by imaginary side planes, at least one of which is perpendicular to the ground surface, as plural sound generation spaces, and (b) parameters of a musical tone corresponding respectively to the plural sound generation spaces, a position-information obtaining unit provided in the holding member which obtains position information of the holding member, a holding-member detecting unit which detects (a) whether a position of the holding member, which is specified based on the position information obtained by the position-information obtaining unit, is included in any of the plural sound generation spaces specified by the information stored in the space/parameter storing unit, and (b) whether the holding member has been moved in a predetermined motion, a reading unit which reads from the space/parameter storing unit a parameter corresponding to the sound generation space, in which the holding-member detecting unit determines that the position of the holding member is included, and an instructing unit which gives an instruction to the musical-tone generating unit to generate a musical tone specified by the parameter read by the reading unit at a timing of sound generation, wherein the beginning time of sound generation is set to a timing when the holding-member detecting unit has detected that the holding member has been moved in a predetermined motion, and wherein both the performance apparatus and the musical instrument unit comprise communication units, respectively.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Now, embodiments of the present invention will be described with reference to the accompanying drawings in detail. FIG. 1 is a block diagram of a configuration of an electronic musical instrument 10 according to the first embodiment of the invention. As shown in FIG. 1, the electronic musical instrument according to the first embodiment has a stick-type performance apparatus 11, which extends in its longitudinal direction to be held or gripped by a player with his or her hand. The performance apparatus 11 is held or gripped by the player to be swung. The electronic musical instrument 10 is provided with a musical instrument unit 19 for generating musical tones. The musical instrument unit 19 comprises CPU 12, an interface (I/F) 13, ROM 14, RAM 15, a displaying unit 16, an input unit 17 and a sound system 18. As will be described later in detail, the performance apparatus 11 has an acceleration sensor 23 and a geomagnetic sensor 22 provided in a head portion of the elongated performance apparatus 11 opposite to its base portion. The player grips or holds the base portion of the elongated performance apparatus 11 to swing it.
The I/F 13 of the musical instrument unit 19 serves to receive data (for instance, a note-on event) from the performance apparatus 11. The data received through I/F 13 is stored in RAM 15 and a notice of receipt of such data is given to CPU 12. In the present embodiment, the performance apparatus 11 is equipped with an infrared communication device 24 at the edge of the base portion, and I/F 13 of the musical instrument unit 19 is also equipped with an infrared communication device 33. Therefore, the musical instrument unit 19 receives infrared light generated by the infrared communication device of the performance device 11 through the infrared communication device 33 of I/F 13, thereby receiving data from the performance apparatus 11.
CPU 12 controls the whole operation of the electronic musical instrument 10. In particular, CPU 12 serves to perform various processes including a controlling operation of the musical instrument unit 19, a detecting operation of a manipulated state of key switches (not shown) in the input unit 17 and a generating operation of musical tones based on note-on events received through I/F 13.
ROM 14 stores various programs for executing various processes, including a process for controlling the whole operation of the electronic musical instrument 10, a process for controlling the operation of the musical instrument unit 19, a process for detecting operation of the key switches (not shown) in the input unit 17, and a process for generating musical tones based on the note-on events received through I/F 13. ROM 14 has a waveform-data area for storing waveform data of various tone colors, in particular, including waveform data of percussion instruments such as bass drums, hi-hats, snare drums and cymbals. The waveform data to be stored in ROM 14 is not limited to the waveform data of the percussion instruments, but waveform data having tone colors of wind instruments such as flutes, saxes and trumpets, waveform data having tone colors of keyboard instruments such as pianos, waveform data having tone colors of string instruments such as guitars, and also waveform data having tone colors of other percussion instruments such as marimbas, vibraphones and timpani can be stored in ROM 14.
RAM 15 serves to store programs read from ROM 14 and to store data and parameters generated during the course of the executed process. The data generated in the process includes the manipulated state of the switches in the input unit 17, sensor values and generated-states of musical tones (sound-generation flag) received through I/F 13.
The displaying unit 16 has, for example, a liquid crystal displaying device (not shown) and is able to indicate a selected tone color and contents of a space/tone color table to be described later. In the space/tone color table, sound generation spaces are associated with tone colors of musical tones. The input unit 17 has various switches (not shown) and is used to specify a tone color of musical tones to be generated.
The sound system 18 comprises a sound source unit 31, an audio circuit 32 and a speaker 35. Upon receipt of an instruction from CPU 12, the sound source unit 31 reads waveform data from the waveform-data area of ROM 14 to generate and output musical tone data. The audio circuit 32 converts the musical tone data supplied from the sound source unit 31 into an analog signal and amplifies the analog signal to output the amplified signal through the speaker 35, whereby a musical tone is output from the speaker 35.
FIG. 2 is a block diagram of a configuration of the performance apparatus 11 in the first embodiment of the invention. As shown in FIG. 2, the performance apparatus 11 is equipped with the geomagnetic sensor 22 and the acceleration sensor 23 in the head portion of the performance apparatus 11 opposite to its base portion. The portion on which the geomagnetic sensor 22 is mounted is not limited to the head portion; the geomagnetic sensor 22 may be mounted on the base portion. Taking the head of the performance apparatus 11 as the reference (that is, keeping eyes on the head of the performance apparatus 11), the player often swings the performance apparatus 11. Therefore, since it is taken into consideration that information of the head position of the performance apparatus 11 is obtained, it is preferable for the geomagnetic sensor 22 to be mounted on the head portion of the performance apparatus 11. It is also preferable to mount the acceleration sensor 23 in the head portion of the performance apparatus 11 so that the acceleration sensor 23 shows an acceleration rate, which varies greatly.
The geomagnetic sensor 22 has a magnetic-resistance effect element and/or a Hall element, and is a tri-axial geomagnetic sensor, which is able to detect magnetic components respectively in the X-, Y- and Z-directions. In the first embodiment of the invention, the position information (coordinate value) of the performance apparatus 11 is obtained from the sensor values of the tri-axial geomagnetic sensor. Meanwhile, the acceleration sensor 23 is a sensor of a capacitance type and/or of a piezo-resistance type. The acceleration sensor 23 is able to output a data value representing an acceleration sensor value. The acceleration sensor 23 is able to obtain acceleration components in three axial directions: one component in the extending direction of the performance apparatus 11 and two other components in the directions perpendicular to the extending direction of the performance apparatus 11. A moving distance of the performance apparatus 11 can be calculated from the respective components in the three axial directions of the acceleration sensor 23. Further, a sound generation timing can be determined based on the component in the extending direction of the performance apparatus 11.
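For illustration only, the following minimal sketch shows one way a sound generation timing could be derived from the extending-direction acceleration component: a timing is reported when the component rises above a threshold and then falls back below it, echoing the threshold-based detection discussed in the background section. The threshold value, the sample representation and the detection rule are assumptions made for this sketch, not the claimed method.

```python
# Minimal sketch: detecting a sound-generation timing from the acceleration
# component along the stick's extending direction. The rule used here (report
# a timing when the value rises above a threshold and then drops back below
# it, i.e. the end of a swing) is an illustrative assumption.

THRESHOLD = 2.0  # in g; an assumed value

def detect_timings(extending_axis_samples):
    timings = []
    above = False
    for i, a in enumerate(extending_axis_samples):
        if not above and a >= THRESHOLD:
            above = True            # swing has started
        elif above and a < THRESHOLD:
            timings.append(i)       # swing peak has passed: generate tone here
            above = False
    return timings

print(detect_timings([0.1, 0.5, 2.4, 3.1, 1.2, 0.3]))  # [4]
```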
The performance apparatus 11 comprises CPU 21, the infrared communication device 24, ROM 25, RAM 26, an interface (I/F) 27 and an input unit 28. CPU 21 performs various processes such as a process of obtaining the sensor values in the performance apparatus 11, a process of obtaining the position information in accordance with the sensor values of the geomagnetic sensor 22 and the acceleration sensor 23, a process of setting a sound generation space for generating a musical tone, a process of detecting a sound-generation timing of a musical tone based on the sensor value (acceleration sensor value) of the acceleration sensor 23, a process of generating a note-on event, and a process of controlling a transferring operation of the note-on event through I/F 27 and the infrared communication device 24.
ROM 25 stores various process programs for obtaining the sensor values in the performance apparatus 11, obtaining the position information in accordance with the sensor values of the geomagnetic sensor 22 and the acceleration sensor 23, setting the sound generation space for generating a musical tone, detecting a sound-generation timing of a musical tone based on the acceleration sensor value, generating a note-on event, and controlling the transferring operation of the note-on event through I/F 27 and the infrared communication device 24. RAM 26 stores values such as the sensor values, generated and/or obtained in the process. In accordance with an instruction from CPU 21, data is supplied to the infrared communication device 24 through I/F 27. The input unit 28 has various switches (not shown).
FIG. 3 is a flow chart of an example of a process to be performed in the performance apparatus 11 according to the first embodiment of the invention. CPU 21 of the performance apparatus 11 performs an initializing process at step 301, clearing data and flags in RAM 26. In the initializing process, a timer interrupt is released. When the timer interrupt is released, CPU 21 reads the sensor values of the geomagnetic sensor 22 and the acceleration sensor 23, and stores the read sensor values in RAM 26 in the performance apparatus 11. Further, in the initializing process, the initial position of the performance apparatus 11 is obtained based on the initial values of the geomagnetic sensor 22 and the acceleration sensor 23, and stored in RAM 26. In the following description, a current position of the performance apparatus 11, which is obtained in a current position obtaining process (step 304), is a position relative to the above initial position. After the initializing process at step 301, the processes at step 302 to step 308 are repeatedly performed.
CPU 21 obtains and stores in RAM 26 the sensor value (acceleration sensor value) of the acceleration sensor 23, which has been obtained in the interrupt process (step 302). Further, CPU 21 obtains the sensor value (geomagnetic sensor value) of the geomagnetic sensor 22, which has been obtained in the interrupt process (step 303).
Then, CPU 21 performs the current position obtaining process at step 304. FIG. 4 is a flow chart showing an example of the current position obtaining process to be performed in the performance apparatus 11 according to the first embodiment of the invention. Based on the geomagnetic sensor value, which was obtained and stored in RAM 26 in the process performed last time at step 303, and the geomagnetic sensor value currently obtained at step 303, CPU 21 calculates a moving direction of the performance apparatus 11 (step 401). As described above, since the geomagnetic sensor 22 in the present embodiment is the tri-axial magnetic sensor, the geomagnetic sensor 22 is able to calculate the direction based on a three-dimensional vector consisting of differences among components along the X-, Y-, and Z-directions.
Further, using the acceleration sensor value, which was obtained and stored in RAM 26 in the process performed last time at step 302, and the acceleration sensor value currently obtained at step 302, CPU 21 calculates a moving distance of the performance apparatus 11 (step 402). The moving distance is found by performing integration twice using the acceleration sensor values and the time difference (time interval) between the time at which the former sensor value was obtained and the time at which the latter sensor value is obtained. Then, CPU 21 calculates the coordinate of the current position of the performance apparatus 11, using the last position information stored in RAM 26, and the moving direction and the moving distance calculated respectively at steps 401 and 402 (step 403).
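The position tracking just described is, in effect, simple dead reckoning: a unit direction vector taken from the change in the magnetometer reading, a travel distance from twice-integrated acceleration, and a running coordinate. The following Python sketch illustrates the idea only; the function names and the fixed sampling interval DT are illustrative assumptions, not part of the original description.

    import math

    DT = 0.01  # assumed sampling interval of the timer interrupt, in seconds

    def moving_direction(prev_mag, cur_mag):
        # Direction from the differences of the X-, Y- and Z-components of
        # two successive tri-axial geomagnetic readings, as a unit vector.
        d = [c - p for p, c in zip(prev_mag, cur_mag)]
        norm = math.sqrt(sum(v * v for v in d)) or 1.0
        return [v / norm for v in d]

    def moving_distance(prev_acc, cur_acc, velocity):
        # Integrate the acceleration twice over the interval DT: once to
        # update the velocity, once more to obtain the distance moved.
        acc = 0.5 * (prev_acc + cur_acc)                 # mean acceleration
        distance = velocity * DT + 0.5 * acc * DT * DT   # second integration
        return distance, velocity + acc * DT             # first integration

    def update_position(position, direction, distance):
        # New coordinate = last coordinate + unit direction x distance.
        return [p + u * distance for p, u in zip(position, direction)]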
CPU 21 judges at step 404 whether or not any change has been found between the current coordinate of the position and the previous coordinate of the position. When it is determined YES at step 404, CPU 21 stores in RAM 26 the calculated coordinate of the current position as new position information (step 405).
After the current position obtaining process at step 304, CPU 21 performs a space setting process at step 305. FIG. 5 is a flow chart showing an example of the space setting process to be performed in the performance apparatus 11 according to the first embodiment of the invention. CPU 21 judges at step 501 whether or not a setting switch in the input unit 28 of the performance apparatus 11 has been turned on. When it is determined YES at step 501, CPU 21 obtains the position information from RAM 26 and stores the obtained position information as the position information (apex coordinate) of an apex in RAM 26 (step 502). Then, CPU 21 increments a parameter N in RAM 26 (step 503). The parameter N represents the number of apexes. In the present embodiment, the parameter N is initialized to "0" in the initializing process (step 301 in FIG. 3). Then, CPU 21 judges at step 504 whether or not the parameter N is larger than "4". When it is determined NO at step 504, the space setting process finishes.
When it is determined YES at step 504, coordinates of four apexes have been stored in RAM 26, and CPU 21 therefore obtains information for specifying a plane (quadrangle) defined by the four apex coordinates (step 505). CPU 21 obtains the positions of the apexes of the quadrangle that results when the plane (quadrangle) defined by the four apex coordinates is projected onto the ground, and stores the information of the sound generation space defined by the obtained positions in a space/tone color table in RAM 26 (step 506). Thereafter, CPU 21 initializes the parameter N in RAM 26 to "0" and sets a space setting flag to "1" (step 507).
In the present embodiment of the invention, the player specifies plural apexes, and a sound generation space is set over the area defined by these apexes. In the present embodiment, a plane (quadrangle) defined by four apexes is set as the sound generation space, but the number of apexes for defining the sound generation space can be changed. For example, a polygon such as a triangle can be set as the sound generation space.
FIG. 7 is a view schematically illustrating how a sound generation space is decided in the first embodiment of the invention. In FIG. 7, reference numerals 71 to 74 denote positions of the performance apparatus 11, which is held by the player at the times when the player turns on the setting switch four times. The head positions of the performance apparatus 11 held at the positions 71 to 74 are represented as follows:

P1 (Reference numeral 71): (x1, y1, z1)
P2 (Reference numeral 72): (x2, y2, z2)
P3 (Reference numeral 73): (x3, y3, z3)
P4 (Reference numeral 74): (x4, y4, z4)

A plane defined by straight lines connecting these four coordinates P1 to P4 is denoted by reference numeral 700.
A plane 701 is obtained by projecting the plane 700 onto the ground (Z-coordinate = z0), and the coordinates of the four apexes of the plane 701 are given by:

(x1, y1, z0)
(x2, y2, z0)
(x3, y3, z0)
(x4, y4, z0)
In the first embodiment of the invention, the sound generation space is defined by the space specified by the plane 701 defined by the four coordinates (x1, y1, z0), (x2, y2, z0), (x3, y3, z0) and (x4, y4, z0), and by perpendiculars 74 to 77 to the plane 701 passing through these four coordinates, as shown in FIG. 7. As will be described later, when the performance apparatus 11 is swung while the performance apparatus 11 is kept in the sound generation space 710, a musical tone can be generated. The space can also be set by other methods and can be given other shapes.
After the space setting process has finished at step 305, CPU 21 performs a tone-color setting process at step 306. FIG. 6 is a flow chart showing an example of the tone-color setting process to be performed in the performance apparatus 11 according to the first embodiment of the invention. CPU 21 judges at step 601 if the space setting flag is set to "1". When it is determined NO at step 601, then the tone-color setting process finishes.
When it is determined YES at step 601, CPU 21 judges at step 602 if a tone-color confirming switch in the input unit 28 has been turned on. When it is determined YES at step 602, CPU 21 generates a note-on event including tone-color information in accordance with a parameter TN (step 603). The parameter TN represents a tone-color number, which uniquely specifies a tone color of a musical tone. In the note-on event, the information representing the sound volume level and the pitch of a musical tone can be previously determined data. Then, CPU 21 sends the generated note-on event to I/F 27 (step 604). I/F 27 makes the infrared communication device 24 transfer an infrared signal of the note-on event to the infrared communication device 33 of the musical instrument unit 19. The musical instrument unit 19 generates a musical tone having a predetermined pitch based on the received infrared signal. The sound generation in the musical instrument unit 19 will be described later.
Then, CPU 21 judges at step 605 whether or not a tone-color setting switch has been turned on. When it is determined NO at step 605, CPU 21 increments the parameter TN representing the tone color (step 606) and returns to step 602. When it is determined YES at step 605, CPU 21 associates the parameter TN representing the tone color with the information of the sound generation space and stores them in the space/tone color table in RAM 26 (step 607). Then, CPU 21 resets the space setting flag to "0" (step 608).
FIG. 8 is a view illustrating an example of the space/tone color table stored in RAM 26 in the first embodiment of the invention. As shown in FIG. 8, a record (for example, Reference numeral 801) in the space/tone color table 800 contains items such as a space ID, apex-position coordinates (Apex 1, Apex 2, Apex 3, and Apex 4), and a tone color. The space ID is prepared to uniquely specify the record in the table 800, and is given by CPU 21 every time one record of the space/tone color table 800 is generated. In the first embodiment of the invention, the space ID specifies the tone color of a percussion instrument. It is also possible to arrange the space/tone color table to specify the tone colors of musical instruments other than percussion instruments (keyboard instruments, string instruments, wind instruments and so on).
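The space/tone color table can be pictured as a list of records keyed by a space ID. A rough Python sketch follows; the field names and the example tone color are illustrative assumptions, not the actual table layout.

    # One record per sound generation space; "apexes" holds the projected
    # (x, y) pairs of the quadrangle on the ground.
    space_tone_color_table = [
        {"space_id": 0,
         "apexes": [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)],
         "tone_color": "vibraphone"},
    ]

    def add_space(table, apexes, tone_color):
        # The space ID is assigned when the record is created, mirroring
        # the description above that an ID is given per generated record.
        record = {"space_id": len(table),
                  "apexes": list(apexes),
                  "tone_color": tone_color}
        table.append(record)
        return record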
Two-dimensional coordinates (x, y) in the X- and Y-directions are stored as the apex coordinates in the space/tone color table 800. As described above, this is because the sound generation space in the first embodiment of the invention is a three-dimensional space, defined by the plane specified, for example, by four apexes on the ground and the perpendiculars 75 to 78 passing through the four apexes, so that the value of the Z-coordinate is arbitrary.
When the tone-color setting process has finished at step 306 in FIG. 3, CPU 21 performs a sound-generation timing detecting process at step 307. FIG. 9 is a flow chart of an example of the sound-generation timing detecting process to be performed in the performance apparatus 11 according to the first embodiment of the invention. CPU 21 reads position information from RAM 26 (step 901). CPU 21 judges at step 902 whether or not the position of the performance apparatus 11 specified by the read position information is within any of the sound generation spaces. More specifically, it is judged at step 902 whether the two-dimensional coordinates (x, y) (that is, the two components in the X- and Y-directions) in the position information fall within a space defined by the position information stored in the space/tone color table.
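Deciding whether the coordinates (x, y) fall within a stored quadrangle is an ordinary point-in-polygon test. The patent does not spell out the test itself, so the ray-casting sketch below is one standard choice, reusing the record layout assumed earlier.

    def point_in_polygon(x, y, apexes):
        # Ray casting: count how many polygon edges a horizontal ray from
        # (x, y) crosses; an odd count means the point lies inside.
        inside = False
        n = len(apexes)
        for i in range(n):
            x1, y1 = apexes[i]
            x2, y2 = apexes[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    def find_space(table, x, y):
        # Return the first record whose projected quadrangle contains (x, y).
        for record in table:
            if point_in_polygon(x, y, record["apexes"]):
                return record
        return None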
When it is determined NO at step 902, CPU 21 resets an acceleration flag in RAM 26 to "0" (step 903). When it is determined YES at step 902, CPU 21 refers to the sensor value of the acceleration sensor 23 stored in RAM 26 to obtain an acceleration sensor value in the longitudinal direction of the performance apparatus 11 (step 904).
Then, CPU 21 judges at step 905 whether or not the acceleration sensor value in the longitudinal direction of the performance apparatus 11 is larger than a predetermined threshold value (first threshold value α). When it is determined YES at step 905, CPU 21 sets the acceleration flag in RAM 26 to "1" (step 906). CPU 21 then judges at step 907 whether or not the acceleration sensor value in the longitudinal direction of the performance apparatus 11 (the acceleration sensor value obtained at step 904) is larger than the maximum acceleration sensor value stored in RAM 26. When it is determined YES at step 907, CPU 21 stores in RAM 26 the acceleration sensor value in the longitudinal direction of the performance apparatus 11 (the acceleration sensor value obtained at step 904) as a fresh maximum acceleration sensor value (step 908).
When it is determined NO at step 905, CPU 21 judges at step 909 whether or not the acceleration flag in RAM 26 has been set to "1". When it is determined NO at step 909, the sound-generation timing detecting process finishes. When it is determined YES at step 909, CPU 21 judges at step 910 whether or not the acceleration sensor value in the longitudinal direction of the performance apparatus 11 is less than a predetermined threshold value (second threshold value β). When it is determined YES at step 910, CPU 21 performs a note-on event generating process (step 911).
FIG. 10 is a flow chart of an example of the note-on event generating process to be performed in the performance apparatus 11 according to the first embodiment of the invention. The note-on event generated in the note-on event generating process shown in FIG. 10 is transferred from the performance apparatus 11 to the musical instrument unit 19. Thereafter, a sound generating process (refer to FIG. 12) is performed in the musical instrument unit 19 to output a musical tone through the speaker 35.
Before describing the note-on event generating process, a sound generation timing in the electronic musical instrument 10 according to the first embodiment will be described. FIG. 11 is a view illustrating a graph schematically showing the acceleration value in the longitudinal direction of the performance apparatus 11. When the player holds a portion of the performance apparatus 11 and swings the apparatus 11, a rotary movement of the performance apparatus 11 is caused around the wrist, elbow or shoulder of the player. The rotary movement of the performance apparatus 11 centrifugally generates an acceleration in the longitudinal direction of the performance apparatus 11.
When the player swings the performance apparatus 11, the acceleration sensor value gradually increases (refer to Reference numeral 1101 on a curve 1100 in FIG. 11). When the player swings the stick-type performance apparatus 11, in general, he or she moves as if striking a drum, and therefore stops the striking motion just before hitting an imaginary striking surface of the percussion instrument (such as a drum or marimba). Accordingly, the acceleration sensor value begins to gradually decrease from a certain time (refer to Reference numeral 1102). The player assumes that a musical tone is generated at the moment when he or she strikes the imaginary surface of the percussion instrument with the stick, so it is preferable to generate the musical tone at that timing.
The present invention employs logic, described below, to generate a musical tone at the moment or just before the player strikes the imaginary surface of the percussion instrument with the stick. Suppose the sound generation timing were simply set to the time when the acceleration sensor value in the longitudinal direction of the performance apparatus 11 decreases below the second threshold value β, which is slightly larger than "0". Due to the player's unintentional movement, the acceleration sensor value in the longitudinal direction of the performance apparatus 11 can vary to reach a value close to the second threshold value β. To avoid the unintentional effect of such a variation in the acceleration sensor value, the sound generation timing is instead set to the time when the acceleration sensor value in the longitudinal direction of the performance apparatus 11 has once increased beyond the first threshold value α (refer to time tα) and has thereafter decreased below the second threshold value β (refer to time tβ). The first threshold value α is sufficiently larger than the second threshold value β. When it is determined that the sound generation timing has been reached, the note-on event is generated in the performance apparatus 11 and transferred to the musical instrument unit 19. Upon receipt of the note-on event, the musical instrument unit 19 performs the sound generating process to generate a musical tone.
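The two-threshold logic is a small state machine: arm when the value exceeds α, track the maximum while armed, and fire when the value falls below β. A Python sketch follows; the threshold constants are placeholders, since the patent does not give concrete values.

    ALPHA = 8.0   # first threshold value (placeholder)
    BETA = 0.5    # second threshold value, slightly larger than zero

    class TimingDetector:
        def __init__(self):
            self.armed = False       # corresponds to the acceleration flag
            self.max_value = 0.0     # maximum sensor value while armed

        def feed(self, acc):
            # Returns the maximum value at the sound generation timing,
            # or None while no note-on event should be generated.
            if acc > ALPHA:
                self.armed = True
            if self.armed:
                self.max_value = max(self.max_value, acc)
                if acc < BETA:
                    peak = self.max_value
                    self.armed, self.max_value = False, 0.0
                    return peak      # generate the note-on event now
            return None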
In the note-on event generating process shown in FIG. 10, CPU 21 refers to the maximum acceleration sensor value in the longitudinal direction stored in RAM 26 to determine a sound volume level (velocity) of a musical tone (step 1001). Assuming that the maximum acceleration sensor value is denoted by Amax and the maximum sound volume level (velocity) by Vmax, the sound volume level Ve is obtained by the following equation:
Ve = a × Amax (where, if a × Amax > Vmax, then Ve = Vmax, and "a" is a positive coefficient)
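In code, this volume mapping is a single clipped multiplication. A sketch, with an assumed coefficient and a MIDI-style maximum velocity:

    A_COEFF = 0.9    # the positive coefficient "a" (illustrative value)
    V_MAX = 127      # maximum sound volume level (MIDI-style velocity)

    def sound_volume(a_max):
        # Ve = a x Amax, clipped at Vmax.
        return min(A_COEFF * a_max, V_MAX)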
CPU 21 refers to the space/tone color table in RAM 26 to determine, as the tone color of the musical tone to be generated, the tone color in the record for the sound generation space corresponding to the position where the performance apparatus 11 is kept (step 1002). Then, CPU 21 generates a note-on event including the determined sound volume level (velocity) and tone color (step 1003). A defined value is used as the pitch in the note-on event.
CPU 21 outputs the generated note-on event to I/F 27 (step 1004). Further, I/F 27 makes the infrared communication device 24 send an infrared signal of the note-on event. The infrared signal is transferred from the infrared communication device 24 to the infrared communication device 33 of the musical instrument unit 19. Thereafter, CPU 21 resets the acceleration flag in RAM 26 to "0" (step 1005).
When the sound-generation timing detecting process has finished at step 307 in FIG. 3, CPU 21 performs a parameter communication process at step 308. The parameter communication process (step 308) will be described together with a parameter communication process to be performed in the musical instrument unit 19 (step 1205 in FIG. 12).
FIG. 12 is a flow chart of an example of a process to be performed in the musical instrument unit 19 according to the first embodiment of the invention. CPU 12 of the musical instrument unit 19 performs an initializing process at step 1201, clearing data in RAM 15 and an image on the display screen of the displaying unit 16, and further clearing the sound source unit 31. Then, CPU 12 performs a switch operating process at step 1202. In the switch operating process, CPU 12 sets parameters of effect sounds of a musical tone to be generated, in accordance with the switch operation on the input unit 17 by the player. The parameters of effect sounds (for example, the depth of reverberant sounds) are stored in RAM 15. In the switch operating process, the space/tone color table transferred from the performance apparatus 11 and stored in RAM 15 of the musical instrument unit 19 can also be edited by the switching operation. In the editing operation, the apex positions defining the sound generation space can be modified and the tone colors can be altered.
CPU 12 judges at step 1203 whether or not a note-on event has been received through I/F 13. When it is determined YES at step 1203, CPU 12 performs the sound generating process at step 1204. In the sound generating process, CPU 12 sends the received note-on event to the sound source unit 31. The sound source unit 31 reads waveform data from ROM 14 in accordance with the tone color represented by the received note-on event. When musical tones of the tone colors of percussion instruments are generated, the waveform data is read from ROM 14 at a constant rate. When musical tones of the tone colors of pitched instruments, such as keyboard instruments, wind instruments and string instruments, are generated, the pitch follows the value included in the note-on event (in the first embodiment, the defined value). The sound source unit 31 multiplies the waveform data by a coefficient according to the sound volume level (velocity) contained in the note-on event, generating musical tone data of a predetermined sound volume level. The generated musical tone data is supplied to the audio circuit 32, and a musical tone of the predetermined sound volume level is output through the speaker 35.
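The velocity scaling performed by the sound source unit amounts to multiplying each waveform sample by a coefficient derived from the note-on velocity. A minimal sketch, assuming a simple linear mapping:

    def render_tone(waveform, velocity, v_max=127):
        # Scale the stored waveform samples by a coefficient proportional
        # to the sound volume level (velocity) in the note-on event.
        k = velocity / v_max
        return [k * sample for sample in waveform]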
Then, CPU 12 performs the parameter communication process at step 1205. In the parameter communication process, CPU 12 gives an instruction to the infrared communication device 33 to transfer data of the space/tone color table edited by the switching operation (step 1202) to the performance apparatus 11. In the performance apparatus 11, when the infrared communication device 24 receives the data, CPU 21 receives the data through I/F 27 and stores the data in RAM 26 (step 308 in FIG. 3).
At step 308 in FIG. 3, CPU 21 of the performance apparatus 11 performs the parameter communication process. In the parameter communication process of the performance apparatus 11, a record is generated based on the sound generation space and the tone color set respectively at steps 305 and 306, and the data in the space/tone color table stored in RAM 26 is transferred to the musical instrument unit 19.
When the parameter communication process of the musical instrument unit 19 has finished at step 1205 in FIG. 12, CPU 12 performs other processes at step 1206. For instance, CPU 12 updates the image on the display screen of the displaying unit 16.
FIG. 13 is a view schematically illustrating examples of the sound generation spaces and the corresponding tone colors set in the space setting process and the tone-color setting process performed in the performance apparatus 11 according to the first embodiment of the invention. The examples shown in FIG. 13 correspond to the records in the space/tone color table shown in FIG. 8. As shown in FIG. 13, three sound generation spaces 135 to 137 are prepared. These sound generation spaces 135 to 137 correspond to the records of space IDs 0 to 2 in the space/tone color table, respectively.
The sound generation space 135 is a three-dimensional space defined by a quadrangle 130 and four perpendiculars extending from the four apexes of the quadrangle 130. The sound generation space 136 is a three-dimensional space defined by a quadrangle 131 and four perpendiculars extending from the four apexes of the quadrangle 131. The sound generation space 137 is a three-dimensional space defined by a quadrangle 132 and four perpendiculars extending from the four apexes of the quadrangle 132.
When the player swings the performance apparatus down (or up) in the sound generation space 135 (refer to Reference numerals 1301 and 1302), a musical tone having the tone color of a vibraphone is generated. Further, when the player swings the performance apparatus down (or up) in the sound generation space 137 (refer to Reference numerals 1311 and 1312), a musical tone having the tone color of a cymbal is generated.
In the first embodiment of the invention, the sound generation timing is set to the time when the performance apparatus 11 is kept within a sound generation space set in space and the acceleration detected in the performance apparatus 11 satisfies a predetermined condition; at that timing, CPU 21 gives the musical instrument unit 19 an instruction to generate a musical tone having the tone color corresponding to that sound generation space. In this manner, musical tones having various tone colors, corresponding respectively to the sound generation spaces, can be generated.
In the first embodiment of the invention, the performance apparatus 11 is provided with the geomagnetic sensor 22 and the acceleration sensor 23. CPU 21 calculates the moving direction of the performance apparatus 11 based on the sensor value of the geomagnetic sensor 22, and also calculates the moving distance of the performance apparatus 11 based on the sensor value of the acceleration sensor 23. The current position of the performance apparatus 11 is obtained from the moving direction and the moving distance, whereby the position of the performance apparatus 11 can be found without using large-scale equipment or performing complex calculations.
In the first embodiment of the invention, the sound generation timing is set to the time when the acceleration sensor value in the longitudinal direction of the performance apparatus 11 has once increased beyond the first threshold value α and has thereafter decreased below the second threshold value β (first threshold value α > second threshold value β); at that timing, CPU 21 gives the musical instrument unit 19 an instruction to generate a musical tone having the tone color corresponding to the sound generation space. In this manner, a musical tone can be generated substantially at the same timing as the player actually strikes the imaginary striking surface of the percussion instrument with the stick.
CPU 21 finds the maximum sensor value of the acceleration sensor 23, calculates a sound volume level based on the maximum sensor value, and gives the musical instrument unit 19 an instruction to generate a musical tone having the calculated sound volume level at the above sound generation timing. In this manner, a musical tone can be generated at the player's desired sound volume level in response to the player's swinging operation of the performance apparatus 11.
In the first embodiment of the invention, a space defined by an imaginary polygonal shape specified on the ground and perpendiculars extending from the apexes of the imaginary polygonal shape is set as the sound generation space, and information specifying the sound generation space is associated with a tone color, and stored in the space/tone color table, wherein the imaginary polygonal shape is defined by projecting onto the ground a shape specified based on position information representing not less than three apexes. The player is allowed to specify apexes to define an area surrounded by said apexes, thereby setting the sound generation space based on the area. In the above description, the polygonal shape defined by four apexes is set as the sound generation space but the number of apexes for specifying the sound generation space can be changed. For example, an arbitrary shape such as a triangle can be used to specify the sound generation space.
Now, the second embodiment of the invention will be described. In the first embodiment of the invention, the performance apparatus 11 is used to specify plural apexes for defining an area, and the area is projected onto the ground to obtain an imaginary polygonal shape; a space defined by the polygonal shape and the perpendiculars extending from the apexes of the polygonal shape is set as the sound generation space. Meanwhile, in the second embodiment of the invention, a central position C and a passing-through position P are set to define a cylindrical sound generation space. A disc-like shape is defined, which has its center at the central position C and a radius "d" given by the distance between the central position C and the passing-through position P. The sound generation space is defined based on this disc-like shape.
FIG. 14 is a flow chart of an example of the space setting process to be performed in the second embodiment of the invention. CPU 21 of the performance apparatus 11 judges at step 1401 whether or not a center setting switch of the input unit 28 is kept on. When it is determined NO at step 1401, then the space setting process finishes. When it is determined YES at step 1401, CPU 21 judges at step 1402 whether or not the center setting switch has been newly turned on. When it is determined YES at step 1402, CPU 21 reads position information from RAM 26, and stores in RAM 26 the read position information as the position information (coordinate (xc, yc, zc)) of the central position C (step 1403).
When it is determined NO at step 1402, that is, when the center setting switch is simply kept on, or after the process at step 1403, CPU 21 judges at step 1404 whether or not the center setting switch has been turned off. When it is determined NO at step 1404, then the space setting process finishes. When it is determined YES at step 1404, CPU 21 reads position information from RAM 26, and stores in RAM 26 the read position information as the position information (coordinate (xp, yp, zp)) of the position P, at which the performance apparatus 11 is held when the center setting switch is turned off (step 1405).
CPU 21 obtains the coordinate (xc, yc, z0) of a position C′ and the coordinate (xp, yp, z0) of a position P′ (step 1406), wherein the position C′ and the position P′ are specified by projecting the central position C and the position P onto the ground (Z-coordinate = z0), respectively. CPU 21 calculates the distance "d" between the position C′ and the position P′ (step 1407). Thereafter, CPU 21 obtains information of a sound generation space based on a disc-like plane, which has its center at the position C′ and a radius "d" given by the distance between the position C′ and the position P′ (step 1408). In the second embodiment of the invention, the sound generation space is set as a three-dimensional cylindrical space whose circular bottom has its center at the position C′ and the radius "d".
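Projecting C and P onto the ground and taking the radius as the distance between the projections is straightforward; the containment test for the resulting cylinder only compares ground-plane distances. A sketch (the function names are illustrative):

    import math

    def cylinder_space(center, passing):
        # Drop the Z-components to project C and P onto the ground, then
        # take the radius "d" as the distance between C' and P'.
        cx, cy = center[0], center[1]
        px, py = passing[0], passing[1]
        return {"center": (cx, cy), "radius": math.hypot(px - cx, py - cy)}

    def in_cylinder(space, x, y):
        # A point lies in the cylindrical space when its ground-plane
        # distance from C' does not exceed the radius (Z is arbitrary).
        cx, cy = space["center"]
        return math.hypot(x - cx, y - cy) <= space["radius"]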
The information of the sound generation space (the x- and y-coordinates of the central position C′ and of the passing-through position P′) and the radius "d" are stored in the space/tone color table in RAM 26 (step 1409). Then, CPU 21 sets the space setting flag to "1" (step 1410). Since the disc-like shape on the ground can be defined by the central position and the radius alone, there is no need to store the coordinate of the passing-through position P′.
As described above, when the player turns on the setting switch of the performance apparatus 11 at a position where he or she wants to set the central position C, moves the performance apparatus 11 with the setting switch kept on to a position P corresponding to the radius, and then turns the setting switch off, the central position C and the passing-through position P are specified. Further, when the central position C and the passing-through position P are projected onto the ground, the positions C′ and P′ are determined on the ground. A cylinder with a circular bottom having its center at the position C′ and a radius "d" given by the distance between the position C′ and the position P′ can thus be set as the sound generation space in the second embodiment of the invention.
FIG. 15 is a view illustrating an example of the space/tone color table stored in RAM 26 in the second embodiment of the invention. As shown in FIG. 15, a record (Reference numeral 1501) in the space/tone color table 1500 in the second embodiment contains a space ID, the coordinates (x, y) of a central position C′, the coordinates (x, y) of a passing-through position P′, a radius "d", and a tone color.
The tone color setting process in the second embodiment is substantially the same as the process (FIG. 6) in the first embodiment of the invention.
FIG. 16 is a view schematically illustrating examples of sound generation spaces and corresponding tone colors set in the space setting process and the tone color setting process performed in the performance apparatus 11 according to the second embodiment of the invention. These examples correspond to the records in the space/tone color table shown in FIG. 15. As shown in FIG. 16, four sound generation spaces 165 to 168 are prepared in the second embodiment of the invention, wherein the sound generation spaces 165 to 168 are cylindrical spaces with bottoms (Reference numerals 160 to 163) having the central positions C′ and radii "d".
The sound generation spaces 165 to 168 correspond to the records of the space IDs 0 to 3 in the space/tone color table, respectively. When the player swings the performance apparatus down (or up) in the sound generation space 165 (Reference numerals 1601 and 1602), a musical tone having the tone color of a tom is generated. And when the player swings the performance apparatus down (or up) in the sound generation space 166 (Reference numerals 1611 and 1612), a musical tone having the tone color of a snare is generated.
The other processes, such as the current position obtaining process and the sound-generation timing detecting process, in the second embodiment are substantially the same as those in the first embodiment of the invention. In the second embodiment of the invention, as the sound generation space associated with the corresponding tone color, CPU 21 stores in the space/tone color table in RAM 26 information of a cylindrical space with a circular bottom having its center at the position C′ and the radius "d" given by the distance between the position C′ and the position P′, wherein the position C′ and the position P′ are defined by projecting a specified central position C and the other position P onto the ground, respectively. In this manner, the player is allowed to designate two positions to set a sound generation space of his or her desired size.
Now, the third embodiment of the invention will be described. In the third embodiment of the invention, sound generation spaces having a cylindrical shape with a circular or oval bottom are set. The player moves the performance apparatus 11 so as to trace a circle or oval in space, and the traced circle or oval is projected onto the ground to specify an imaginary shape on the ground. The specified imaginary shape becomes the bottom of the cylindrical sound generation space in the third embodiment. FIG. 17 is a flow chart of an example of the space setting process performed in the third embodiment of the invention. In the third embodiment of the invention, the input unit 28 of the performance apparatus 11 has a setting-start switch and a setting-finish switch.
CPU 21 judges at step 1701 whether or not the setting-start switch has been turned on. When it is determined YES at step 1701, CPU 21 reads position information from RAM 26 and stores in RAM 26 the read position information as the coordinate of a starting position (starting-position coordinate) (step 1702). CPU 21 then sets the setting flag in RAM 26 to "1" (step 1703).
When it is determined NO at step 1701, CPU 21 judges at step 1704 whether or not the setting flag is set to "1". When it is determined YES at step 1704, CPU 21 reads position information from RAM 26 and stores in RAM 26 the read position information as the coordinate of a passing-through position (passing-through position coordinate) (step 1705). The process at step 1705 is repeatedly performed until the player turns on the setting-finish switch of the performance apparatus 11. Therefore, one passing-through position coordinate is stored in RAM 26 every time the process at step 1705 is performed, and as a result, plural passing-through position coordinates are stored in RAM 26.
Thereafter, CPU 21 judges at step 1706 whether or not the setting-finish switch has been turned on. When it is determined YES at step 1706, CPU 21 reads position information from RAM 26 and stores in RAM 26 the read position information as the coordinate of a finishing position (finishing-position coordinate) (step 1707). Then, CPU 21 judges at step 1708 whether or not the finishing-position coordinate falls within a predetermined range of the starting-position coordinate. When it is determined NO at step 1708, the space setting process finishes. Likewise, when it is determined NO at steps 1704 and 1706, the space setting process finishes.
When it is determined YES at step 1708, CPU 21 obtains information for specifying a circle or oval passing through the starting-position coordinate, the passing-through position coordinates and the finishing-position coordinate (step 1709). CPU 21 creates a closed curve consisting of lines connecting adjacent coordinates and obtains a circle or oval closely approximating the closed curve; a well-known method such as the method of least squares is useful for obtaining the circle or the oval. CPU 21 calculates information of a circle or oval obtained by projecting the circle or oval specified at step 1709 onto the ground, and stores in the space/tone color table in RAM 26 the information of the circle or oval as the information of the sound generation space (step 1710). Thereafter, CPU 21 resets the setting flag to "0" and sets the space setting flag to "1" (step 1711).
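The patent only says that a least-squares method is useful here, so the concrete formulation below, the algebraic (Kasa) circle fit that solves a small linear system for the circle through the recorded positions, is an assumption.

    import numpy as np

    def fit_circle(points):
        # Algebraic least-squares circle fit: minimize the residuals of
        # x^2 + y^2 + D*x + E*y + F = 0 over the recorded coordinates.
        pts = np.asarray(points, dtype=float)
        x, y = pts[:, 0], pts[:, 1]
        A = np.column_stack([x, y, np.ones_like(x)])
        b = -(x ** 2 + y ** 2)
        (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
        cx, cy = -D / 2.0, -E / 2.0
        radius = np.sqrt(cx ** 2 + cy ** 2 - F)
        return (cx, cy), radius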
The other processes to be performed in the third embodiment of the invention, such as the current position obtaining process and the sound-generation timing detecting process, are performed substantially in the same manner as in the first embodiment of the invention. Also in the third embodiment of the invention, the player is allowed to set a sound generation space having a cylindrical shape with a circular or oval bottom of his or her desired size. Particularly in the third embodiment of the invention, the player can set a sound generation space of a cylindrical shape having a side surface defined by the track along which the performance apparatus 11 is moved.
Now, the fourth embodiment of the invention will be described. In the first to third embodiments of the invention, every sound generation space is assigned a corresponding tone color, and the information for specifying the sound generation space, associated with the information of the tone color, is stored in the space/tone color table. When the performance apparatus 11 is swung within a sound generation space, the tone color of the musical tone to be generated is determined on the basis of the space/tone color table. In the fourth embodiment of the invention, every sound generation space is assigned a corresponding pitch. When the performance apparatus 11 is swung within a sound generation space, a musical tone having the pitch corresponding to that sound generation space is generated. This arrangement is appropriate for generating musical tones with the tone colors of percussion instruments that can produce various pitches, such as marimbas, vibraphones and timpani.
In the fourth embodiment of the invention, a pitch setting process is performed in place of the tone-color setting process (step 306) in the process shown in FIG. 3. FIG. 18 is a flow chart of an example of the pitch setting process to be performed in the fourth embodiment of the invention. In the fourth embodiment of the invention, any one of the space setting processes of the first to third embodiments can be employed. In the fourth embodiment of the invention, the input unit 28 has a pitch confirming switch and a pitch decision switch. A parameter NN representing a pitch (pitch information in accordance with MIDI) is set to an initial value (for example, the lowest pitch) in the initializing process. CPU 21 judges at step 1801 whether or not the space setting flag has been set to "1". When it is determined NO at step 1801, then the pitch setting process finishes.
When it is determined YES at step 1801, CPU 21 judges at step 1802 whether or not the pitch confirming switch has been turned on. When it is determined YES at step 1802, CPU 21 generates a note-on event including pitch information in accordance with the parameter NN representing a pitch (step 1803). The note-on event can include information representing a sound volume and a tone color determined separately. CPU 21 outputs the generated note-on event to I/F 27 (step 1804). Further, I/F 27 makes the infrared communication device 24 transfer an infrared signal of the note-on event. The infrared signal of the note-on event is transferred from the infrared communication device 24 to the infrared communication device 33 of the musical instrument unit 19, whereby the musical instrument unit 19 generates a musical tone having the predetermined pitch.
Then, CPU 21 judges at step 1805 whether or not the pitch decision switch has been turned on. When it is determined NO at step 1805, CPU 21 increments the parameter NN representing the pitch (step 1806) and returns to step 1802. When it is determined YES at step 1805, CPU 21 associates the parameter NN representing the pitch with the information of the sound generation space and stores them in a space/pitch table in RAM 26 (step 1807). Then, CPU 21 resets the space setting flag to "0" (step 1808).
In the pitch setting process shown in FIG. 18, every time the pitch confirming switch is turned on, a musical tone one pitch higher than the last tone is generated. When a musical tone of the pitch desired by the player is generated, the player turns on the pitch decision switch to associate his or her desired pitch with the sound generation space. In the fourth embodiment of the invention, the space/pitch table in RAM 26 has substantially the same items as shown in FIG. 8. In the space/tone color table shown in FIG. 8, the space ID and the information for specifying the sound generation space (in the case of FIG. 8, the center position C, the passing-through position P and the radius "d") are associated with the tone color. Meanwhile, in the space/pitch table of the fourth embodiment, the space ID and the information for specifying the sound generation space are associated with the pitch.
In the fourth embodiment of the invention, the sound-generation timing detecting process is performed substantially in the same manner as in the first to third embodiments (refer to FIG. 9), and then the note-on event generating process is performed. FIG. 19 is a flow chart of an example of the note-on event generating process to be performed in the fourth embodiment of the invention. The process at step 1901 in FIG. 19 is performed substantially in the same manner as the process at step 1001 in FIG. 10. CPU 21 refers to the space/pitch table in RAM 26 to read the pitch in the record corresponding to the sound generation space in which the performance apparatus 11 is kept, and determines the read pitch as the pitch of the musical tone to be generated (step 1902). CPU 21 generates a note-on event including the decided sound volume level (velocity) and pitch (step 1903). In the note-on event, the tone color is set to a defined value. The processes at steps 1904 and 1905 correspond respectively to those at steps 1004 and 1005 in FIG. 10. In this way, a musical tone having the pitch corresponding to the sound generation space can be generated.
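The fourth embodiment only changes which field the record lookup supplies: a pitch instead of a tone color. A sketch that reuses the find_space helper sketched earlier; the MIDI-style note numbers are illustrative.

    # Same layout as the space/tone color table, with a pitch per space.
    space_pitch_table = [
        {"space_id": 0, "apexes": [(0, 0), (1, 0), (1, 1), (0, 1)], "pitch": 60},
        {"space_id": 1, "apexes": [(2, 0), (3, 0), (3, 1), (2, 1)], "pitch": 62},
    ]

    def note_on_for_position(x, y, velocity):
        # find_space is the point-in-polygon lookup sketched earlier.
        record = find_space(space_pitch_table, x, y)
        if record is None:
            return None
        return {"pitch": record["pitch"], "velocity": velocity}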
In the fourth embodiment of the invention, the sound generation spaces are assigned respective pitches, and when the performance apparatus 11 is swung within one sound generation space, a musical tone having the pitch corresponding to that sound generation space is generated. Therefore, the fourth embodiment of the invention can be used to generate musical tones of desired pitches, as if percussion instruments such as marimbas, vibraphones and timpani were being played.
The present invention has been described with reference to the accompanying drawings and the first to fourth embodiments, but it will be understood that the invention is not limited to these particular embodiments described herein, and numerous arrangements, modifications, and substitutions may be made to the embodiments of the invention described herein without departing from the scope of the invention.
In the embodiments described above, CPU 21 of the performance apparatus 11 detects an acceleration sensor value and a geomagnetic sensor value while the player swings the performance apparatus 11, and obtains the position information of the performance apparatus 11 from these sensor values to judge whether or not the performance apparatus 11 is kept within a sound generation space. When it is determined that the performance apparatus 11 has been swung within a sound generation space, CPU 21 of the performance apparatus 11 generates a note-on event including the tone color corresponding to the sound generation space (in the first to third embodiments) or the pitch corresponding to the sound generation space (in the fourth embodiment), and transfers the generated note-on event to the musical instrument unit 19 through I/F 27 and the infrared communication device 24. Meanwhile, on receiving the note-on event, CPU 12 of the musical instrument unit 19 supplies the received note-on event to the sound source unit 31, thereby generating a musical tone. The above arrangement is preferably used in the case where the musical instrument unit 19 is a device not specialized in generating musical tones, such as a personal computer and/or a game machine provided with a MIDI board.
The processes to be performed in the performance apparatus 11 and the processes to be performed in the musical instrument unit 19 are not limited to those described in the above embodiments. For example, an arrangement can be made such that the performance apparatus 11 transfers the information of the space/tone color table to the musical instrument unit 19, or obtains the position information of the performance apparatus 11 from the sensor values and transfers the obtained position information to the musical instrument unit 19. In this arrangement, the sound-generation timing detecting process (FIG. 9) and the note-on event generating process (FIG. 10) are performed in the musical instrument unit 19. Such an arrangement is suitable for use in electronic musical instruments in which the musical instrument unit 19 is a device specialized in generating musical tones.
Further, in the embodiments, the infrared communication devices 24 and 33 are used for infrared signal communication between the performance apparatus 11 and the musical instrument unit 19 to exchange data between them, but the invention is not limited to infrared signal communication. For example, data can be exchanged between the performance apparatus 11 and the musical instrument unit 19 by means of radio communication and/or wired communication in place of the infrared signal communication through the devices 24 and 33.
In the above embodiments, the moving direction of the performance apparatus 11 is detected based on the sensor value of the geomagnetic sensor 22, the moving distance of the performance apparatus 11 is calculated based on the sensor value of the acceleration sensor 23, and then the position of the performance apparatus 11 is obtained based on the moving direction and the moving distance. The method of obtaining the position of the performance apparatus 11 is not limited to the above; the position of the performance apparatus 11 can also be obtained using the sensor values of a tri-axial acceleration sensor and the sensor value of an angular rate sensor.
In the embodiments described above, the sound generation timing is set to the time when the acceleration sensor value in the longitudinal direction of the performance apparatus 11 has once increased beyond the first threshold value α and has thereafter decreased below the second threshold value β. But the sound generation timing is not limited to this. For example, the sound generation timing can be detected based not on the acceleration sensor value in the longitudinal direction of the performance apparatus 11 but on the resultant value of the X-, Y- and Z-components of the tri-axial acceleration sensor (the sensor resultant value: the square root of the sum of the squares of the X-, Y- and Z-components of the tri-axial acceleration sensor).
FIG. 20 is a flow chart of an example of the sound-generation timing detecting process to be performed in the fifth embodiment of the invention. The processes at steps 2001 to 2003 are performed substantially in the same manner as those at steps 901 to 903 in FIG. 9. When it is determined YES at step 2002, CPU 21 reads an acceleration sensor value (x-component, y-component, z-component) (step 2004) to calculate a sensor resultant value (step 2005). As described above, the sensor resultant value is given by the square root of the sum of the squares of the X-, Y- and Z-components of the tri-axial acceleration sensor.
Then, CPU 21 judges at step 2006 whether or not the acceleration flag in RAM 26 is set to "0". When it is determined YES at step 2006, CPU 21 judges at step 2007 whether or not the sensor resultant value is larger than a value of (1+a)G, where "a" is a small positive constant. For example, if "a" is "0.05", CPU 21 judges whether or not the sensor resultant value is larger than a value of 1.05 G. When it is determined YES at step 2007, this means that the performance apparatus 11 has been swung by the player and the sensor resultant value has increased beyond the gravitational acceleration of 1 G. The value of "a" is not limited to "0.05". On the assumption that a = 0, it is possible to judge at step 2007 whether or not the sensor resultant value is larger than a value corresponding to the gravitational acceleration of 1 G.
When it is determined YES at step 2007, CPU 21 sets the acceleration flag in RAM 26 to "1" (step 2008). When it is determined NO at step 2007, then the sound-generation timing detecting process finishes.
When it is determined NO at step 2006, that is, when the acceleration flag in RAM 26 has been set to "1", CPU 21 judges at step 2009 whether or not the sensor resultant value is smaller than a value of (1+a)G. When it is determined NO at step 2009, CPU 21 judges at step 2010 whether or not the sensor resultant value calculated at step 2005 is larger than the maximum sensor resultant value stored in RAM 26. When it is determined YES at step 2010, CPU 21 stores in RAM 26 the calculated sensor resultant value as a new maximum sensor resultant value (step 2011). When it is determined NO at step 2010, then the sound-generation timing detecting process finishes.
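The fifth embodiment applies the same arm-and-fire pattern to the magnitude of the tri-axial reading, with (1+a)G as the threshold on both sides. A sketch; the numeric value of G in sensor units is an assumption.

    import math

    G = 9.8          # gravitational acceleration in sensor units (assumed)
    A_SMALL = 0.05   # the small positive constant "a"

    def resultant(ax, ay, az):
        # Square root of the sum of the squares of the three components.
        return math.sqrt(ax * ax + ay * ay + az * az)

    class ResultantDetector:
        def __init__(self):
            self.armed = False      # the acceleration flag
            self.max_value = 0.0    # maximum sensor resultant value

        def feed(self, ax, ay, az):
            r = resultant(ax, ay, az)
            threshold = (1.0 + A_SMALL) * G
            if not self.armed:
                if r > threshold:            # steps 2007-2008: arm
                    self.armed = True
                    self.max_value = r
                return None
            if r < threshold:                # step 2009: fire the note-on
                self.armed = False
                return self.max_value        # used for the volume level
            self.max_value = max(self.max_value, r)   # steps 2010-2011
            return None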
When it is determined YES at step 2009, CPU 21 performs the note-on event generating process (step 2012). This note-on event generating process is performed substantially in the same manner as in the first embodiment, as shown in FIG. 10. In the fifth embodiment of the invention, the sound volume level at step 1001 is determined based on the maximum sensor resultant value. In the fifth embodiment of the invention, a musical tone is generated at a sound generation timing determined in the following manner.
FIG. 21 is a view illustrating a graph schematically showing the sensor resultant value of the acceleration values detected by the acceleration sensor 23 of the performance apparatus 11. As shown by the graph 2100 in FIG. 21, when the performance apparatus 11 is kept still, the sensor resultant value corresponds to a value of 1 G. When the player swings the performance apparatus 11, the sensor resultant value increases, and when the player stops swinging the performance apparatus 11 and keeps it still, the sensor resultant value returns to a value of 1 G.
In the fifth embodiment of the invention, a timing at which the sensor resultant value has increased beyond the value of (1+a)G, where "a" is a small positive constant, is detected, and thereafter the maximum value of the sensor resultant value is updated. The maximum value Amax of the sensor resultant value is used to determine the sound volume level of the musical tone to be generated. At the timing T1, when the sensor resultant value has decreased below the value of (1+a)G, the note-on event process is performed to generate a musical tone.
In the fifth embodiment of the invention, the sound generation timing is determined based on the sensor value of the acceleration sensor 23, but the sound generation timing can also be determined based on other data. That is, another sensor such as an angular rate sensor can be used, and the sound generation timing can be determined based on a variation in the sensor value of the angular rate sensor.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a configuration of an electronic musical instrument according to the first embodiment of the invention.
FIG. 2 is a block diagram of a configuration of a performance apparatus according to the first embodiment of the invention.
FIG. 3 is a flow chart of an example of a process performed in the performance apparatus according to the first embodiment of the invention.
FIG. 4 is a flow chart showing an example of a current position obtaining process performed in the performance apparatus according to the first embodiment of the invention.
FIG. 5 is a flow chart showing an example of a space setting process performed in the performance apparatus according to the first embodiment of the invention.
FIG. 6 is a flowchart showing an example of a tone-color setting process performed in the performance apparatus according to the first embodiment of the invention.
FIG. 7 is a view schematically illustrating how a sound generation space is decided in the first embodiment of the invention.
FIG. 8 is a view illustrating an example of a space/tone color table stored in RAM in the first embodiment of the invention.
FIG. 9 is a flow chart of an example of a sound-generation timing detecting process performed in the performance apparatus according to the first embodiment of the invention.
FIG. 10 is a flow chart of an example of a note-on event generating process performed in the performance apparatus according to the first embodiment of the invention.
FIG. 11 is a view illustrating a graph schematically showing an acceleration value in the longitudinal direction of the performance apparatus according to the first embodiment of the invention.
FIG. 12 is a flow chart of an example of a process performed in a musical instrument unit according to the first embodiment of the invention.
FIG. 13 is a view schematically illustrating examples of the sound generation spaces and corresponding tone colors set in the space setting process and the tone-color setting process performed in the performance apparatus according to the first embodiment of the invention.
FIG. 14 is a flowchart of an example of the space setting process performed in the second embodiment of the invention.
FIG. 15 is a view illustrating an example of the space/tone color table stored in RAM in the second embodiment of the invention.
FIG. 16 is a view schematically illustrating examples of the sound generation spaces and corresponding tone colors set in the space setting process and the tone color setting process performed in the performance apparatus according to the second embodiment of the invention.
FIG. 17 is a flowchart of an example of the space setting process performed in the third embodiment of the invention.
FIG. 18 is a flow chart of an example of a pitch setting process performed in the fourth embodiment of the invention.
FIG. 19 is a flowchart of an example of the note-on event generating process performed in the fourth embodiment of the invention.
FIG. 20 is a flow chart of an example of the sound-generation timing detecting process performed in the fifth embodiment of the invention.
Missing data are prevalent in many public health studies for various reasons. For example, some subjects do not answer certain questions in a survey, or some subjects drop out of a longitudinal study prematurely. It is important to develop statistical methodologies that appropriately address missing data in order to reach valid conclusions. For regression analysis on data with missing values in the response variable, when data are not missing at random, the missing-data mechanism usually needs to be modeled. When the missingness depends only on the response variable, a pseudolikelihood method that avoids modeling the nonignorable missing-data mechanism was developed in the past. A corresponding mean imputation method was used to impute the missing responses under this pseudolikelihood method. In this dissertation, we consider inference on the moments of the response variable for missing data analyzed by this pseudolikelihood method. First, we compared three methods for estimating the variance of the corresponding pseudolikelihood estimate in simulation studies: the delta method, the bootstrap method and a re-sampling method. Second, we modified the mean imputation method and developed a corresponding stochastic imputation method. Multiple imputations were subsequently used to obtain estimates of the moments and the corresponding variance estimates. We compared the performance of these two imputation methods in simulation studies and illustrated them through analysis of data from a schizophrenia clinical trial. Compared to the mean imputation method, the stochastic imputation method leads to smaller, negligible bias. | http://d-scholarship.pitt.edu/8574/
Course Description
Missing data is part of any real world data analysis. It can crop up in unexpected places, making analyses challenging to understand. In this course, you will learn how to use tidyverse tools and the naniar R package to visualize missing values. You'll tidy missing values so they can be used in analysis and explore missing values to find bias in the data. Lastly, you'll reveal other underlying patterns of missingness. You will also learn how to "fill in the blanks" of missing values with imputation models, and how to visualize, assess, and make decisions based on these imputed datasets.
1. Why care about missing data? (Free)
Chapter 1 introduces you to missing data, explaining what missing values are, their behavior in R, how to detect them, and how to count them. We then introduce missing data summaries and how to summarise missingness across cases and variables, and how to explore it across groups within the data. Finally, we discuss missing data visualizations: how to produce overview visualizations for the entire dataset and over variables, cases, and other summaries, and how to explore these across groups.
Lessons: Introduction to missing data; Using and finding missing values; How many missing values are there?; Working with missing values; Why care about missing values?; Summarizing missingness; Tabulating missingness; Other summaries of missingness; How do we visualize missing values?; Your first missing data visualizations; Visualizing missing cases and variables; Visualizing missingness patterns.
2. Wrangling and tidying up missing values
In chapter two, you will learn how to uncover hidden missing values like "missing" or "N/A" and replace them with `NA`. You will learn how to efficiently handle implicit missing values (values implied to be missing, but not explicitly listed). We also cover how to explore missing data dependence, discussing Missing Completely at Random (MCAR), Missing at Random (MAR), and Missing Not at Random (MNAR), and what they mean for your data analysis.
Lessons: Searching for and replacing missing values; Using miss_scan_count; Using replace_with_na; Using replace_with_na scoped variants; Filling down missing values; Fix implicit missings using complete(); Fix explicit missings using fill(); Using complete() and fill() together; Missing data dependence; Differences between MCAR and MAR; Exploring missingness dependence; Further exploring missingness dependence.
Chapter 3: Testing missing relationships
In this chapter, you will learn about workflows for working with missing data. We introduce special data structures, the shadow matrix and nabular data, and demonstrate how to use them in workflows for exploring missing data so that you can link summaries of missingness back to values in the data. You will learn how to use ggplot to explore and visualize how values change as other variables go missing. Finally, you learn how to visualize missingness across two variables, and how and why to visualize missing values in a scatterplot. A sketch follows.
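A minimal sketch of the Chapter 3 workflow, assuming naniar, dplyr, and ggplot2 with the built-in airquality data (illustrative only):

```r
# Nabular data: bind a shadow matrix, then condition summaries and
# plots on whether another variable is missing.
library(naniar)
library(dplyr)
library(ggplot2)

aq <- bind_shadow(airquality)   # appends shadow columns such as Ozone_NA

aq %>%
  group_by(Ozone_NA) %>%                # levels are "!NA" and "NA"
  summarise(mean_temp = mean(Temp))     # does Temp differ when Ozone is missing?

ggplot(aq, aes(x = Temp, fill = Ozone_NA)) +
  geom_density(alpha = 0.5)             # distribution of Temp by Ozone missingness
```

The shadow columns take the values "!NA" and "NA", which is what lets summaries and plots condition on missingness.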
Chapter 4: Connecting the dots (imputation)
In this chapter, you will learn about filling in the missing values in your data, which is called imputation. You will learn how to impute and track missing values, and what the good and bad features of imputations are, so that you can explore, visualise, and evaluate the imputed data against the original values. You will learn how to use, evaluate, and compare different imputation models, and explore how different imputation models affect the inferences you can draw from them. A sketch follows.
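A minimal sketch of the Chapter 4 workflow, assuming naniar and simputation with the built-in airquality data (illustrative only):

```r
# Impute with a linear model while tracking which values were imputed.
library(naniar)
library(simputation)
library(dplyr)
library(ggplot2)

aq_imp <- airquality %>%
  bind_shadow() %>%                  # remember which values were missing
  impute_lm(Ozone ~ Temp + Wind)     # model-based imputation of Ozone

ggplot(aq_imp, aes(x = Temp, y = Ozone, colour = Ozone_NA)) +
  geom_point()                       # imputed points appear in their own colour
```

Tracking the missings with bind_shadow() before imputing is what makes it possible to colour imputed values separately and judge whether the imputation looks plausible.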
Prerequisites: Introduction to R; Introduction to the Tidyverse
| https://www.datacamp.com/courses/dealing-with-missing-data-in-r
The interconnectivity of biological systems, from the scale of interacting molecules up to the chemical communication between organ systems, underlies much of the complexity associated with disease diagnosis and treatment. This complexity is thus driving the need for quantitative experimental and computational approaches that enhance our understanding of system behavior at multiple spatial and temporal scales. In the Department of Pharmacology, we are developing a range of novel approaches for the characterization of these complex systems, as well as their manipulation for the creation of new and/or more effective treatments.
The cellular protein landscape is dynamically regulated through changes in transcription and by the ubiquitin-dependent degradation of specific proteins. In addition to its critical role in regulated protein degradation, the ubiquitin pathway has recently been shown to play a more complex role in regulating signal transduction by controlling protein interactions and localization. The diversity of ubiquitin signaling outputs, and the perturbation of the ubiquitin proteasome system (UPS) in cancer, underscore the importance of understanding key ubiquitin signaling events. However, a key challenge in the ubiquitin field has been to connect UPS enzymes with the substrates that they regulate. Our lab applies emerging genetic and proteomic technologies to systematically explore ubiquitin-dependent signal transduction during cell cycle progression and in response to DNA damage. We are implementing and developing technologies that can assess, proteome-wide, the changes controlled by ubiquitination. Global Protein Stability Profiling (GPS) is a genetic platform that utilizes fluorescent reporters together with cell sorting to assess changes in protein stability. The GPS system employs a collection of more than 15,000 human open reading frames (ORFs) expressed from a fluorescent reporter construct to simultaneously assess changes in the stability of 15,000 human proteins. As a complement to GPS, we utilize a proteomic approach termed QUAINT (Quantitative Ubiquitylation Interrogation). QUAINT is a mass spectrometry-based platform that quantitatively measures changes in protein ubiquitylation for endogenous proteins. Together, these emerging technologies will provide a deep snapshot of the regulated proteome and allow us to better understand global ubiquitin signaling networks regulated during cell growth, in response to stress, and during disease.
As an organizing principle, networks provide a useful representation of biological components and their relationships at multiple scales. In the areas of cancer and infectious disease, our group is actively involved in the development of computational approaches for the creation and analysis of such networks so as to directly aid diagnosis and treatment. For example, with approximately 518 members in humans, protein kinases form the backbone of cellular signaling and play a central role in health and disease. As an integrated network, the kinome is commonly dysregulated in cancer, driving the current interest in the development of kinase inhibitors for use in therapy. Recent work by Gary Johnson's group has led to the development of Multiplexed Inhibitor Beads coupled with Mass Spectrometry (MIB/MS), which allows one to assess the activity state of the protein kinome en masse. In collaboration with Gary's group, we are developing computational approaches to represent and characterize the kinome in breast and other cancers. In addition, we are establishing novel predictive techniques that allow for the prediction of the response of the kinome to treatment with kinase inhibitors. Development of such methods will, for the first time, enable the rational design of combination inhibitor therapies for difficult-to-treat cancers.
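As a rough illustration of the network representation described above, here is a minimal R sketch assuming the igraph package; the listed interactions are purely hypothetical placeholders, not measured MIB/MS data:

```r
# Represent kinase relationships as a directed graph and rank nodes
# by connectivity (degree), a simple first-pass network summary.
library(igraph)

edges <- data.frame(
  from = c("MAP2K1", "MAP2K1", "MAPK1",   "MAPK3"),
  to   = c("MAPK1",  "MAPK3",  "RPS6KA1", "RPS6KA1")  # illustrative only
)

g <- graph_from_data_frame(edges, directed = TRUE)
sort(degree(g), decreasing = TRUE)   # most connected kinases first
```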
We are applying new methodologies to evaluate kinase adaptations to targeted kinase inhibitors. Previous studies demonstrated that the kinome is highly malleable and remodels quickly in response to small molecules (1-3). The objectives of the studies in this new NC TraCS-funded pilot grant are two-fold. One is to collect kinome data in response to targeted kinase inhibitors such as those targeting MEK or IKK. From the quantitative mass spectrometry data, significant increases or decreases in specific kinases are defined as "state changes". The second objective is to take these data and apply them to knowledge-based analytical programs to identify unexpected responses or signaling connections. If successful, these studies will develop causal reasoning methods to identify novel hypotheses that will be further verified by experimental approaches.
The figure below shows a knowledge-based analysis of a specific MEK inhibitor treatment of triple-negative breast cancer (TNBC). SUM159 cells were treated with 5 uM AZD6244 and the kinome analyzed in Duncan et al. SILAC MS data were used for knowledge-based causal reasoning by Dr. Levy. Shown are predicted kinase increases and decreases from these studies. Yellow = predicted up (none shown); blue = predicted down; green = observed up; red = observed down.
We are studying the rapid kinetics of GTPase signaling ‘circuits’ and how their transient construction at specific locations controls cell motility. We are focused now on the 'logic' of signaling networks that integrate and relay information from receptors to GTPases, with emphasis on the role of guanine exchange factors. New approaches for network imaging are used to quantify and control specific protein activities, to understand the interactions of adhesion, cytoskeletal and trafficking systems, and to decipher mechanisms of cell polarization, directionality and turning. New microscope techniques are helping us quantify signaling kinetics in individual cells with great accuracy for quantitative modeling and a deeper understanding of network architecture.
In collaboration with John Sondek, Gaudenz Danuser , Keith Burridge and Alan Hall.
Ly T, Ahmad Y, Shlien A, Soroka D, Mills A, Emanuele MJ, Stratton MR, Lamond AI. A proteomic chronology of gene expression through the cell cycle in human myeloid leukemia cells. eLife 2014;3:e01630.
Olive AJ, Haff MG, Emanuele MJ, Sack LM, Barker JR, Elledge SJ, Starnbach MN. Chlamydia trachomatis-Induced Alterations in the Host Cell Proteome Are Required for Intracellular Growth. Cell Host Microbe. 2014 Jan 15;15(1):113-24.
Emanuele MJ, Elia EH, Xu Q, Thoma CR, Izhar L, Guo A, Rush J, Hsu PW, Yen HS, Elledge SJ. Global Identification of Modular Cullin-Ring Ligase Substrates. Cell. 2011 Oct 14;147(2):459-74. Epub 2011 Sep 29.
Emanuele MJ, Ciccia A, Elia AE, Elledge SJ. Proliferating cell nuclear antigen (PCNA)-associated KIAA0101/PAF15 protein is a cell cycle-regulated anaphase-promoting complex/cyclosome substrate. PNAS 2011. 108 (24) 9845-9850; Epub ahead of print May 31, 2011.
Duncan JS, Whittle MC, Nakamura K, Abell AN, Midland AA, Zawistowski JS, Johnson NL, Granger DA, Jordan NV, Darr D, Usary J, Major B, He X, Hoadley K, Sharpless NE, Perou CM, Gomez SM, Jin J, Frye SV, Earp HS, Graves LM and Johnson GL. Dynamic reprogramming of the kinome in response to targeted MEK inhibition in triple-negative breast cancer. Cell. 2012 149(2):307-21.
Doolittle JM and Gomez SM. Mapping protein interactions between Dengue virus and its human and insect hosts. PLoS Neglected Tropical Diseases. 2011 Feb 15;5(2):e954.
Midland AA, Whittle MC, Duncan JS, Abell AN, Nakamura K, Zawistowski JS, Carey LA, Earp HS 3rd, Graves LM, Gomez SM and Johnson GL. Defining the expressed breast cancer kinome. Cell Research. 2012 Feb 7. doi:10.1038/cr.2012.25.
Cooper MJ, Cox NJ, Zimmerman EI, Dewar BJ, Duncan JS, Whittle MC, . . . Graves LM. (2013). Application of multiplexed kinase inhibitor beads to study kinome adaptations in drug-resistant leukemia. PLoS One 8(6):e66755. doi:10.1371/journal.pone.0066755.
Duncan JS, Whittle MC, Nakamura K, Abell AN, Midland AA, Zawistowski JS, . . . Johnson GL. (2012). Dynamic reprogramming of the kinome in response to targeted MEK inhibition in triple-negative breast cancer. Cell 149:307–321. doi:10.1016/j.cell.2012.02.053.
Graves LM, Duncan JS, Whittle MC, and Johnson GL. (2013). The dynamic nature of the kinome. Biochemical Journal 450:1–8. doi:10.1042/BJ20121456. | https://www.med.unc.edu/pharm/research/computer-vision-initiative-1/network-analysis |
It really hasn’t.
The title does its job, giving the viewer an instant impression of the kind of show it is going to be: surreal, energetic and maybe just a little naïve. "Vick and Rick's" (comedian Rick Wood and actress Victoria Cansfield) sketch comedy grabs the audience's attention right from the beginning with truckloads of energy, launching headlong into as many sketches as can be crammed into the allotted fifty minutes. Some sketches are long, some only two lines, reminiscent of a very naïve version of Robot Chicken. Rather than flowing from one bit to the next, the cast say "end of sketch" at the end of each section. It may only be a matter of preference, but after a while it did grate a tiny bit.
At times, this nervous excitement gets in the way. Often jokes are lost within the sketches as they are not allowed time to ferment. This is a real pity, because there were points where the audience was in stitches. I have to say I really enjoyed a brilliant Harry Potter parody of the "Fresh Prince of Bel-Air" theme and, despite the show playing upon worn characters and scenarios much of the time, a few of the jokes were rather good.
By and large, this is not for a connoisseur of comedy; the jokes are not the most advanced and are often only a string of silly puns - something more of a scrawl than anything that could be considered a fully-developed sketch. All in all, this show seems very raw and not the most considered. But if you are in the area and just want a bit of guilt-free entertainment then I would say that you could do worse. | https://broadwaybaby.com/shows/this-show-has-nothing-to-do-with-penguins/701028 |
The term ‘racism’ has many definitions. What does it mean for a person to be a ‘racist’? What does it mean for a person to have ‘racist beliefs’? What does the term ‘racism’ refer to? The answers to these questions will then inform the next part: what does racism have to do with stress and physiology?
What is ‘racism’?
Racism has many definitions—so many, and for use in so many different contexts—that it has been argued, for example by those on the far right, that it is therefore a meaningless term. However, just because there are many definitions of the term does not mean that there is no referent for it. A referent is the thing that is signified. In this instance, what is the referent for racism? I will provide a few on-hand definitions and then discuss them.
In Part VI of The Oxford Handbook of Race and Philosophy (edited by Naomi Zack, 2016) titled Racisms and Neo-Racisms, Zack writes (pg 469; my emphasis):
Logically, it would seem as though ideas about race would have to precede racism. But the subject of racism is more broad and complicated than the subject of race, for at least these two historical reasons. First, the kind of prejudice (prejudged cognitions and negative emotions) and discrimination (treating people differently on the grounds of group identities) that constitute racism have a longer history than the modern idea of race, for instance in European anti-Semitism. And second, insofar as modern ideas of race have been in the service of the dominant interests in international and internal interactions, these ideas of race are ideologies that have devalued non-white groups. That is, ideas of race are themselves already inherently racist.
In philosophy, racism has been treated as attitudes and actions of individuals that affect nonwhites unjustly and social structures and institutions that advantage whites and disadvantage nonwhites. The first is hearts-and-minds or classic racism, for instance the use of stereotypes and harmful actions by whites against people of color, as well as negative feelings about them. The second is structural racism, for instance the use of stereotypes or institutional racism, for instance, the facts of how American blacks and Hispanics are, compared to whites, worse off on major measures of human well-being, such as education, income, family wealth, health, family stability, longevity, and rates of incarceration.
John Lovchik in his book Racism: Reality Built on a Myth (2018: 12) notes that “racism is a system of ranking human beings for the purpose of gaining and justifying an unequal distribution of political and economic power.” Note that using this definition, “hereditarianism” (the theory that individual differences between groups and individuals can be reduced to genes; I will give conceptual reasons why hereditarianism is false as what I hope is my final word on the debate) is a racist theory as it attempts to justify the current social hierarchy. (The reason why IQ tests were first brought to America and created by Binet and Simon; see The History and Construction of IQ Tests and The Frivolousness of the Hereditarian-Environmentalist IQ Debate: Gould, Binet, and the Utility of IQ Testing.) This is why hereditarianism saw its resurgence with Jensen’s infamous 1969 paper. Indeed, many prominent hereditarians have held racist beliefs, and were even eugenicists espousing eugenic ideas.
Headley (2000) notes a few definitions of racism—motivational, behavioral, and cognitive racism. Motivational racism is “the infliction of unequal consideration, motivated by the desire to dominate, based on race alone“; behavioral racism is “failure to give equal consideration, based on the fact of race alone”; and cognitive racism is “unequal consideration, out of a belief in the inferiority of another race.”
I have presented six definitions of racism—though there are many more. Now, for the purposes of this article, I will present my own: the ‘inferiorization’ of a racialized group, which is then used to explain disparities in things like IQ test scores, social class/SES, education, personality, etc. Now, knowing what we know about physiological systems and how they react to the environment around them—the immediate environment and the social environment—how does this relate to stress and physiology?
Racism, stress, and physiology
Now that we know what racism is, having had a rundown of several definitions of ‘racism’, I will discuss the physiological effects such stances could have on groups racialized as ‘races’ (note that I am using social races in this article; recall that social constructivists about race need to be realists about race).
The term ‘weathering’ refers to the body’s breaking down due to stress over time. Such stressors can come from one’s immediate environment (e.g., pollution, foodstuffs, etc.) or their social environment (a demanding job, how one perceives themselves and how people react to them). So as the body experiences more and more stress, it becomes more and more ‘weathered’, which then leads to heightened risk for disease in stressed individuals/populations.
Allostatic states “refer to altered and sustained activity levels of the primary mediators (e.g., glucocorticosteroids) that integrate energetic and associated behaviours in response to changing environments and challenges such as social interactions, weather, disease, predators and pollution” (McEwen, 2005). Examples of allostatic overload such as acceleration of atherosclerosis, hypertension (HTN), stroke, and abdominal obesity (McEwen, 2005) are more likely to be found in the group we racialize as ‘black’ in America—particularly women (Gillum, 1987; Gillum and Hyattsville, 1996; Barnes, Alexander, and Staggers, 1997; Worral et al, 2002; Kataoka et al, 2013).
Geronimus et al (2006) set out to find out whether the heightened rate of stressors (e.g., racism, environmental pollution, etc.) can explain why black bodies are more ‘weathered’ than white bodies. They found that such differences were not explained by poverty, indicating that weathering affects even well-off blacks. Allostatic load refers to heightened hormonal production in response to stressors. We know that physiology is homeodynamic and therefore changes based on the immediate environment and social environment (for example, when you feel like you’re about to get into a fight, your heart rate increases and you get ready for ‘fight or flight’).
Experiencing racism (an environmental stimulus; real or imagined, the outcome is the same) is associated with increased blood pressure (HTN). So if one experiences racism, they will then experience an increase in blood pressure, as BP is a physiological variable (Armstead et al, 1987; McNeilly et al, 1995; see Doleszar et al, 2018 for a review). The concept of weathering, then, shows that racial health disparities are, in fact, racist health disparities (Sullivan, 2015: 106). Racism, then, contributes to higher levels of allostasis and, along with it, higher levels of certain hormones associated with higher allostatic load.
One way to measure biological age is by measuring the length of telomeres. Telomeres are found at the ends of chromosomes. Since telomere lengths shorten with age (Shammas, 2012), those with shorter telomeres are ‘biologically older’ than those of the same age with longer telomeres. Geronimus et al (2011) showed that black women had shorter telomeres than white women, which was due to subjective and objective stressors (i.e., racism). Black women in the age group 49-55 were 7.5 years ‘older’ than white women. Thus, they had an older physiological age compared to their chronological age. It is known that direct contact with discriminatory events is associated with poor health outcomes. Harrell, Hall, and Taliaferro (2003) note that:
“…physiological set points and the mechanisms governing them are not fixed. External stressors can permanently alter physiological functioning. Racism increases the volume of stress one experiences and may contribute directly to the physiological arousal that is a marker of stress-related diseases.”
Social factors can, indeed, influence physiology and there is a wealth of information on how the social becomes biological and how environmental (social) factors influence physiological systems. Forrester et al (2019) replicated Geronimus’ findings, showing that blacks have a higher ‘biological age’ than whites and that psychosocial factors affect blacks more than whites. Simons et al (2020) also replicated Geronimus’ findings, showing that persistent exposure to racism was associated with higher rates of inflammation in blacks which then predicted higher rates of disease in blacks compared to whites. Such discrimination can help to explain differences in birth outcomes (e.g., Jasienska, 2009), stress, inflammation, obesity, stroke rates, etc in blacks compared to whites (Molnar, 2015).
But what is the mechanism by which higher allostatic load scores contribute to negative outcomes and shorter telomeres indicating a higher biological age? When one feels that they are being discriminated against, chronic stress activates the sympathetic nervous system and, along with it, produces HPA dysfunction. What this means is that the anti-inflammatory effect of cortisol is lost—it becomes blunted. This then increases oxidative stress and inflammation. The inflammatory processes, in turn, result in cardiovascular disease and immune and metabolic dysfunction. The HPA axis monitors and responds to stress—allostatic load. When stress hormones are released, the adrenal gland is targeted: when it receives a signal from the pituitary gland, it pumps epinephrine and norepinephrine into the body, causing our hearts to beat faster and our breathing to deepen—what is known as ‘fight or flight.’ Cortisol is also released and is known as a stress hormone, but when the stressful event is over, all three hormones return to baseline. Thus, a higher amount of stress hormones in circulation indicates a higher level of allostatic load—a higher level of stress in the individual in question. We know that blacks have higher levels of allostatic load (i.e., stress-related hormones) than whites (Duru et al, 2012).
Imagine, though, that before the allostatic load has a chance to return to its baseline level, another stressor is sensed by the hypothalamus. The allostatic load will once again increase to the plateau level. Should the perception of stressors be ongoing, the allostatic load will not have the chance to ever fully recharge, and the adrenal gland will be producing an ongoing stream of stress response hormones. The body will experience chronic elevation in its allostatic load. […] A person experiencing repeated stressors, without the opportunity for intervals that are relatively stress-free, will experience a chronically elevated allostatic load, with higher than normal levels of circulating stress response hormones.
Conclusion
What these studies show, then, is that race is a cause of health inequalities, though not one inherent in biology: the cause lies in social factors that influence the physiology of the individual in question. The term ‘racism’ has many referents, and using one of them identifies ‘hereditarianism’ as a racist ideology (it is inherently ideological). These overviews of studies show that racial health inequalities are due, in part, to perceived discrimination (racism); thus, they are racist health disparities. We know that physiology is a dynamic system that can respond to what occurs in the immediate environment—even the social environment (Williams, 1992). Thus, what explains part of the health inequalities between races is perceived discrimination—racism—and how it affects the body’s physiological systems (HPA axis, HTN, etc.) and telomeres.
Follow the Leader? Selfish Genes, Evolution, and Nationalism
1750 words
Yet we get tremendously increased phenotypic variation … because the form and variation of cells, what they produce, whether to grow, to move, or what kind of cell to become, is under control of a whole dynamic system, not the genes. (Richardson, 2017: 125)
In 1976 Richard Dawkins published his groundbreaking book The Selfish Gene (Dawkins, 1976). In the book, Dawkins argues that selection occurs at the level of the gene—“the main theme of his book is a metaphorical account of competition between genes …” (Midgley, 2010: 45). Others then took note of the new theory and attempted to integrate it into their thinking. But is it as simple as Dawkins makes it out to be? Are we selfish due to the genes we carry? Is the theory testable? Can it be distinguished from other competing theories? Can it be used to justify certain behaviors?
Rushton, selfish genes, nationalism and politics
JP Rushton is a serious scholar, perhaps most well-known for attempting to use r/K selection theory to explain human behavior (Anderson, 1991); he also made perhaps the most controversial use of Dawkins’ theory. The main axiom of the theory is that an organism is just a gene’s way of ensuring the survival of other genes (Rushton, 1997). Thus, Rushton formulated genetic similarity theory, which posits that those who are more genetically similar—who share more genes—will be more altruistic toward one another even if they are not related, and will therefore show negative attitudes toward less genetically similar individuals. This is the gene’s “way” of propagating itself through evolutionary time. Richardson (2017: 9-11) tells us of all of the different ways in which genes are invoked to attempt to justify X.
In the beginning of his career, Rushton was a social learning theorist studying altruism, even publishing a book on the matter—Altruism, Socialization and Society (Rushton, 1980). Rushton reviews the sociobiological literature and concludes that altruism is a learned behavior. Though, Rushton seems to have made the shift from a social learning perspective to a genetic determinist perspective in the years between the publication of Altruism, Socialization and Society and 1984 when he published his genetic similarity theory. So, attempting to explain altruism through genes, while not part of Rushton’s original research programme, seems, to me, to be a natural evolution in his thought (however flawed it may be).
Dawkins responded to the uses of his theory to attempt to justify nationalism and patriotism through an evolutionary lens during an interview with Frank Miele for Skeptic:
Skeptic: How do you evaluate the work of Irena”us Eibl-Eibesfeldt, J.P. Rushton, and Pierre van den Berghe, all of whom have argued that kin selection theory does help explain nationalism and patriotism?
Dawkins: One could invoke a kind “misfiring” of kin selection if you wanted to in such cases. Misfirings are common enough in evolution. For example, when a cuckoo host feeds a baby cuckoo, that is a misfiring of behavior which is naturally selected to be towards the host’s own young. There are plenty of opportunities for misfirings. I could imagine that racist feeling could be a misfiring, not of kin selection but of reproductive isolation mechanisms. At some point in our history there may have been two species of humans who were capable of mating together but who might have produced sterile hybrids (such as mules). If that were true, then there could have been selection in favor of a “horror” of mating with the other species. Now that could misfire in the same sort of way that the cuckoo host’s parental impulse misfires. The rule of thumb for that hypothetical avoiding of miscegenation could be “Avoid mating with anybody of a different color (or appearance) from you.”
I’m happy for people to make speculations along those lines as long as they don’t again jump that is-ought divide and start saying, “therefore racism is a good thing.” I don’t think racism is a good thing. I think it’s a very bad thing. That is my moral position. I don’t see any justification in evolution either for or against racism. The study of evolution is not in the business of providing justifications for anything.
This is similar to his reaction when Bret Weinstein remarked that the Nazi’s “behaviors” during the Holocaust “were completely comprehensible at the level of fitness”—at the level of the gene.” To which Dawkins replied “I think nationalism may be an even greater evil than religion. And I’m not sure that it’s actually helpful to speak of it in Darwinian terms.” This is what I like to call “rampant adaptationism.”
This is important because Rushton (1998) invokes Dawkins’ theory as justification for his genetic similarity theory (GST; Rushton, 1997), attempting to justify ethno-nationalism from a gene’s-eye view. Rushton did what Dawkins warned against: using the theory to justify nationalism/patriotism. Rushton (1998: 486) states that “Genetic Similarity Theory explains why” ethnic nationalism has come back into the picture. Kin selection theory (which, like selfish gene theory, Rushton invoked) has numerous misunderstandings attached to it, and, of course, Rushton, too, was an offender (Park, 2007).
Dawkins (1981), in Selfish genes in race or politics stated that “It is annoying to find this elegant and important theory being dragged down to the ephemeral level of human politics, and parochial British politics at that.” Rushton (2005: 494), responded, stating that “feeling a moral obligation to condemn racism, some evolutionists minimised the theoretical possibility of a biological underpinning to ethnic or national favouritism.“
Testability?
The main premise of Dawkins’ theory is that evolution is gene-centered and that selection occurs at the level of the gene—genes that propagate fitness will be selected for while genes that are less fit are selected against. This “genes’-eye view” of evolution states “that adaptive evolution occurs through differential survival of competing genes, increasing the allele frequency of those alleles whose phenotypic trait effects successfully promote their own propagation, with gene defined as “not just one single physical bit of DNA [but] all replicas of a particular bit of DNA distributed throughout the world.“
Noble (2018) discusses “two fatal difficulties in the selfish gene version of neo-Darwinism“:
The first is that, from a physiological viewpoint, it doesn’t lead to a testable prediction. The only problem is that the central definition of selfish gene theory is not independent of the only experimental test of the theory, which is whether genes, defined as DNA sequences, are in fact selfish, i.e., whether their frequency in the gene pool increases (18). The second difficulty is that DNA can’t be regarded as a replicator separate from the cell (11, 17). The cell, and specifically its living physiological functionality, is what makes DNA be replicated faithfully, as I will explain later.
Noble (2017: 156) further elaborates in Dance to the Tune of Life: Biological Relativity:
Could this problem be avoided by attaching a meaning to ‘selfish’ as applied to DNA sequences that is independent of meanings in terms of phenotype? For example, we could say that a DNA sequence is ‘selfish’ to the extent to which its frequency in subsequent generations is increased. This at least would be an objective definition that could be measured in terms of population genetics. But wait a minute! The whole point of the characterisation of a gene as selfish is precisely that this property leads to its success in reproducing itself. We cannot make the prediction of a theory be the basis of the definition of the central element of the theory. If we do that, the theory is empty from the viewpoint of empirical science.
Dawkins’ theory is, therefore, “not a physiologically testable hypothesis” (Noble, 2011). Dawkins’ theory posits that the gene is the unit of selection, whereas the organism is only used to propagate the selfish genes. But “Just as Special Relativity and General Relativity can be succinctly phrased by saying that there is no global (privileged) frame of reference, Biological Relativity can be phrased as saying that there is no global frame of causality in organisms” (Noble, 2017: 172). Dawkins’ theory privileges the gene as the unit of selection, when there is no direct unit of selection in multi-level biological systems (Noble, 2012).
In The Solitary Self: Darwin and the Selfish Gene, Midgley (2010) states “The choice of the word “selfish” is actually quite a strange one. This word is not really a suitable one for what Dawkins wanted to say about genetics because genes do not act alone.” As Dawkins later noted, “the cooperative gene” would have been a better description, while The Immortal Gene would have been a better title for the book. Midgley (2010: 16) states that Dawkins and Wilson (in The Selfish Gene and Sociobiology, respectively) “use a very simple concept of selfishness derived not from Darwin but from a wider background of Hobbesian social atomism, and give it a general explanation of all behaviour, including that of humans.” Dawkins and others claim that “the thing actually being selected was the genes” (Midgley, 2010: 47).
Conclusion
Developmental systems theory (DST) explains and predicts more than the neo-Darwinian Modern Synthesis (Laland et al, 2015). Dawkins’ theory is not testable. Indeed, the neo-Darwinian Modern Synthesis (and along with it Dawkins’ selfish gene theory) is dead, an extended synthesis explains evolution. As Fodor and Piattelli-Palmarini (2010a, b) and Fodor (2008) state in What Darwin Got Wrong, natural selection is not mechanistic and therefore cannot select-for genes or traits (also see Midgley’s 2010: chapter 6 discussion of Fodor and Piattelli-Palmarini). (Okasha, 2018 also discusses ‘selection-for- genes—and, specifically, Dawkins’ selfish gene theory.)
Dawkins’ theory was repurposed, used to attempt to argue for ethno-nationalism and patriotism—even though Dawkins himself is against such uses. Of course, theories can be repurposed from their original uses, though the use of the theory is itself erroneous, as is the case with regard to Rushton, Russel and Wells (1984) and Rushton (1997, 1998). Since the theory is itself not testable (Noble, 2011, 2017), it should therefore—along with all other theories that use it as its basis—be dropped. While Rushton’s change from social learning to genetic causation regarding altruism is not out of character for his former research (he began his career as a social learning theorist studying altruism; Rushton, 1980), his use of the theory to attempt to explain why individuals and groups prefer those more similar to themselves ultimately fails since it is “logically flawed” (Mealey, 1984: 571).
Genes ‘do’ what the physiological system ‘tells’ them to do; they are just inert, passive templates. What is active is the cell—the genome is an organ of the cell and is what is ‘immortal.’ Genes don’t “control” anything; they are used by and for the physiological system to carry out certain processes (Noble, 2017; Richardson, 2017: chapter 4, 5). There are new views of what ‘genes’ really are (Portin and Wilkins, 2017), what they are and were—are—used for.
Development is dynamic and not determined by genes. Genes (DNA sequences) are followers, not leaders. The leader is the physiological system.
A Systems View of Kenyan Success in Distance Running
1550 words
The causes of sporting success are multi-factorial, with no cause being more important than the others, since the whole system needs to work in concert to produce the athletic phenotype—call this “causal parity” among athletic success determinants. For a refresher, take what Shenk (2010: 107) writes:
As the search for athletic genes continues, therefore, the overwhelming evidence suggests that researchers will instead locate genes prone to certain types of interactions: gene variant A in combination with gene variant B, provoked into expression by X amount of training + Y altitude + Z will to win + a hundred other life variables (coaching, injuries, etc.), will produce some specific result R. What this means, of course, is that we need to dispense rhetorically with the thick firewall between biology (nature) and training (nurture). The reality of GxE assures that each person’s genes interact with his climate, altitude, culture, meals, language, customs and spirituality—everything—to produce unique lifestyle trajectories. Genes play a critical role, but as dynamic instruments, not a fixed blueprint. A seven- or fourteen- or twenty-eight-year-old is not that way merely because of genetic instruction. (Shenk, 2010: 107) [Also read my article Explaining African Running Success Through a Systems View.]
This is how athletic success needs to be looked at: not reducing it to genes, or to a group of genes, that ’cause’ athletic success, since being successful in the sport of the athlete’s choice takes more than being born with “the right” genes.
Recently, a Kenyan woman—Joyciline Jepkosgei—won the NYC marathon in her debut (November 3rd, 2019), while Eliud Kipchoge—another Kenyan—became the first human ever to complete a marathon (26.2 miles) in under 2 hours. I recall reading in the spring that he said he would break the 2-hour mark in October. He also attempted to break it in 2017 in Italy but, of course, he failed: his official time in Italy was 2:00:25, while he set the world record in Berlin at 2:01:39. Kipchoge’s latest time was 1:59:40—twenty seconds shy of 2 hours—which means his average mile pace was about 4 minutes and 34 seconds. That is insane. (But the IAAF does not accept the time as a new world record since it was not set in an open competition: Kipchoge had a slew of Olympic pacesetters following him, and an electric car drove just ahead of him and pointed lasers at the ground showing him where to run, shaving about 2 crucial minutes off his time, according to sport scientist Ross Tucker. So he did not set a world record. His feat, though, is still impressive.)
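A quick base-R check of the pace arithmetic above (a sketch, nothing more):

```r
# 1:59:40 over 26.2 miles works out to roughly a 4:34 mile pace.
total_seconds <- 1 * 3600 + 59 * 60 + 40     # 1:59:40 as seconds
pace <- total_seconds / 26.2                 # seconds per mile
sprintf("%d:%02d per mile",
        as.integer(pace %/% 60),
        as.integer(round(pace %% 60)))       # "4:34 per mile"
```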
Now, Kipchoge is Kenyan—but what’s his ethnicity? Surprise surprise! He is of the Nandi tribe, more specifically of the Talai subgroup, born in Kapsisiywa in Nandi county. Jepkosgei, too, is Nandi, from Cheptil in Nandi county. (Jepkosgei also set the record for the half marathon in 2017. Also, see her regular training regimen and what she does throughout the day. This, of course, is how she is able to be so elite—without hard training, even with “the right genetic makeup”, one will not become an elite athlete.) What a strange coincidence that these two individuals who won recent marathons—one of whom set the best time ever in the 26.2-mile race—are both Kenyan, specifically Nandi.
Both of these runners are from the same county in Kenya. Nandi county sits about 6,716 ft above sea level. Being born and living at such a high elevation produces distinct physiological adaptations. Living and training at such high elevations means that they have greater lung capacities, since they are breathing thinner air. Those born in highlands, like Kipchoge and Jepkosgei, have larger lung and thorax volumes, while oxygen intake is enhanced by increases in lung compliance, pulmonary diffusion, and ventilation (Meer, Heymans, and Zijlstra, 1995).
Those exposed to such elevation develop what is known as “high-altitude hypoxia.” Humans born at high altitudes are able to cope with such a lack of oxygen, since our physiological systems are dynamic—not static—and can respond to environmental changes within seconds of them occurring. Babies born at higher elevations have increased ventilation and a rise in alveolar and arterial oxygen pressure (Meer, Heymans, and Zijlstra, 1995).
Kenyans have 5 percent longer legs and 12 percent lighter muscles than Scandinavians (Suchy and Waic, 2017). Mooses et al (2014) notes that “upper leg length, total leg length and total leg length to body height ratio were correlated with running performance.” Kong and de Heer (2008) note that:
The slim limbs of Kenyan distance runners may positively contribute to performance by having a low moment of inertia and thus requiring less muscular effort in leg swing. The short ground contact time observed may be related to good running economy since there is less time for the braking force to decelerate forward motion of the body.
An abundance of type I muscle fibers is conducive to success in distance running (Zierath and Hawley, 2004), though Kenyans and Caucasians show no difference in type I muscle fibers (Saltin et al, 1995; Larsen and Sheel, 2015). That, then, throws a wrench in the claim that a whole slew of anatomic and physiologic variables—specifically the type I fibers—causes Kenyan running success, right? Wrong. Recall that the appearance of the athletic phenotype is due to nature and nurture—genes and environment—working together in concert. Kenyans are more likely to have slim, long limbs with lower body fat, and they live and train at over 6,000 ft. Their will to win—to better themselves and their families’ socioeconomic status—also plays a part. As I have argued in depth for years, we cannot understand athletic success and elite athleticism without understanding individual histories, how athletes grew up, and what they did as children.
For example, Wilbur and Pitsiladis (2012) espouse a systems view of Kenyan marathon success, writing:
In general, it appears that Kenyan and Ethiopian distance-running success is not based on a unique genetic or physiological characteristic. Rather, it appears to be the result of favorable somatotypical characteristics lending to exceptional biomechanical and metabolic economy/efficiency; chronic exposure to altitude in combination with moderate-volume, high-intensity training (live high + train high), and a strong psychological motivation to succeed athletically for the purpose of economic and social advancement.
Becoming a successful runner in Kenya can lead to economic opportunities not afforded to those who do not do well in running. This, too, is a factor in Kenyan running success. So, for those who—pushing a false dichotomy of genes and environment—would state that Kenyan running success is due to “socioeconomic status”: they are right, to a point (even if they are mocking the idea to make their genetic determinism seem more palatable). See figure 6 of their paper for their hypothetical model:
This is one of the best models I have come across explaining the success of these people. One can see that it is not reductionist; note that there is no appeal to genes (just variables that genes are implicated in, which is not the same as reductionism). One cannot swap in an endomorphic somatotype and still expect Kenyan training and the psychological drive to become a runner to produce the same result. The ecto-dominant somatotype is a necessary factor for success, but all four factors—biomechanical, physiological, training, and psychological—together explain the success of the running Kenyans and, in turn, the success of Kipchoge and Jepkosgei. African dominance in distance running is, moreover, dominated by the Nandi subtribe (Tucker, Onywera, and Santos-Concejero, 2015). Knechtle et al (2016) also note that male and female Kenyan and Ethiopian runners are the youngest and fastest at the half and full marathons.
The actual environment—climate—on the day of the race, too plays a factor. El Helou et al (2012) note that “Air temperature is the most important factor influencing marathon running performance for runners of all levels.” Nikolaidis et al (2019) note that “race times in the Boston Marathon are influenced by temperature, pressure, precipitations, WBGT, wind coming from the West and wind speed.”
The success of Kenyans—and other groups—shows how the dictum “athleticism is irreducible to biology” (St. Louis, 2004) is true. How does it make any sense to attempt to reduce athletic success down to one variable and say that it explains the overrepresentation of, say, Kenyans in distance running? A whole slew of factors must come together in an individual, along with the desire to compete, in order for them to succeed at distance running.
So, what makes Kenyans like Kipchoge and Jepkosgei so good at distance running? It is due to an interaction between genes and environment, since we take a systems view, not a reductionist view, of sporting success. Even though Kipchoge’s time does not count as an official world record, what he did was still impressive (though less impressive than it would have been without all of the help he had). Looking at the system, and not trying to reduce the system to its parts, is how we will explain why some groups are better than others. Genes, of course, play a role in the ontogeny of the athletic phenotype, but they are not the be-all-end-all that genetic reductionists make them out to be. The systems view of Kenyan running success shown here is how and why Kenyans—Kipchoge and Jepkosgei—dominate distance running.
Usain Bolt, Michael Phelps, and Caster Semenya: Should Semenya Take Drugs to Decrease Testosterone Levels?
1300 words
In the past week in the world of sport, all the rage has been over mid- to long-distance runner Caster Semenya. Semenya has won the 800 m in 1:56.72 and set national records in the 400, 800, and 1500 m with times of 50.74, 1:58.45 and 4:10.93 respectively. In 2012 and 2016, Semenya won the gold for the 800 m with times of 1:57.23 and 1:55.28 respectively. I won’t really discuss the anatomic and physiologic advantages today. What I will discuss, though, is the fact that Semenya has been told that she must take drugs to decrease her testosterone levels or face a ban in the 800 m. The new rules state that:
Female athletes affected must take medication for six months before they can compete, and then maintain a lower testosterone level.
If a female athlete does not want to take medication, then they can compete in:
- International competitions in any discipline other than track events between 400m and a mile
- Any competition that is not an international competition
- The male classification at any competition, at any level, in any discipline
- Any intersex, or similar, classification
But Semenya has declined taking these drugs—so her future is up in the air. So, if Semenya—or any other athlete—has to take drugs to decrease their levels since it gives an unfair advantage, then, in my opinion, this may lead to changes in other sports as well.
Look at Michael Phelps. Michael Phelps has won a record 28 Olympic medals, 23 of them gold. Phelps has a long, thin torso which decreases drag in the water. Phelps’ wingspan is 6’7” while he is 6’4”—which is disproportionate to his height. He has the torso of a 6’8” person, which gives him a greater reach per stroke, and the lower body of someone 5’10”, which lowers his resistance against the water. He has large hands and feet (with flexible ankles), which help with paddling capacity (size 14 shoe; yours truly wears a size 13).
There is one more incredible thing about Phelps: he produces around 50 percent less lactic acid than his competitors. Think of the last time you ran for some distance. The burning you feel in your legs is a build-up of lactic acid. Lactic acid causes fatigue and also slows muscle contractions—this occurs through lactic acid passing through the bloodstream, becoming lactate. (Note that it does not necessarily cause fatigue; Brooks, 2001.) Phelps does not produce normal levels of lactic acid, and so he is ready to go again shortly after a bout of swimming.
Phelps said “In between the 200m free and the fly heats I have probably had in total about 10 minutes to myself.” A normal person’s muscles would be too fatigued and cramped. I would also assume that Phelps has an abundance of type I muscle fibers as well.
Now take Usain Bolt. The 100 m dash is, mostly, an anaerobic race. What this means is that mitochondrial respiration has minimal effect on the type of energy used during the event (Majumdar and Robergs, 2011). So during anaerobic events, there is no free oxygen to drive energy—the energy stored in the muscle is used to perform movement through a process called glycolysis. Sprinting is an intense exercise—fuel choice during exercise is determined by the intensity of said exercise. “A 100-meter sprint is powered by stored ATP, creatine phosphate, and anaerobic glycolysis of muscle glycogen.”
Now we can look at the physical advantages they have. Swimmers and runners, on average, have different centers of mass (Bejan, Jones, and Charles, 2010). In all actuality, Phelps and Bolt are the perfect examples of this phenomenon. Winning runners are more likely to have a West-African origin, and winning swimmers are more likely to be white. These somatotypic differences influence why the two groups excel in these two different sports.
Usain Bolt is 6’5”. Since he is that height, and he has long legs, he necessarily has a longer stride—Bolt is the perfect example of Bejan, Jones, and Charles’ (2010) paper. So take the average white sprinter of the same height as Bolt. Ceteris paribus, Bolt will have a higher center of mass than the white athlete due to his longer limbs and smaller circumference. Krogman (1970) found that, in black and white youths of the same height, blacks had shorter trunks and longer limbs, which lends credence to the hypothesis.
Phelps is 6’4”. As noted above, he has a long torso and long limbs. Long torsos are conducive to a lower center of mass—which whites and Asians have, on average. Longer torsos also mean taller sitting heights: whites and Asians have taller sitting heights than blacks, who have shorter torsos. That taller sitting height, reflecting the longer torso, is why whites excel in swimming. Bejan, Jones, and Charles (2010) also note that, if it were not for the short stature of Asians, they would be better swimmers than whites.
In any case, the different average centers of mass between blacks and whites are conducive to faster times in the sports each excels at. For whites, the roughly three percent lower center of mass translates into a 1.5 percent increase in winning speed and a 1.5 percent decrease in winning time in the case of swimming. The same holds for blacks, but in the case of running: their higher center of mass is conducive to a 1.5 percent increase in winning speed and a 1.5 percent decrease in winning time, which would be a 0.15 second decrease, or from 10 s to 9.85 s—a large differential when it comes to sprinting. (Note that this phenomenon also holds for black women and white women—black women are better sprinters and white women are better swimmers. Asian women excel in the 100 m freestyle, but not Asian men, for reasons discussed above.)
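A quick base-R check of the 1.5 percent figure above (a sketch only):

```r
# A 1.5 percent decrease on a hypothetical 10.00 s time is 0.15 s.
winning_time <- 10.00                    # hypothetical 100 m time, seconds
winning_time * (1 - 0.015)               # 9.85 (a 0.15 s decrease)
```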
Now put this all together. If Phelps and Bolt have such advantages over their competition and they—supposedly—win due to them, then if Semenya has to decrease her T levels, why shouldn’t Phelps and Bolt decrease X, Y, or Z since they have physiologic/anatomic advantages as well? Why does no one talk about Semenya’s anatomic advantages over, say, white women and why only bring up Semenya’s testosterone levels? Forcing Semenya to decrease T levels will set a bad precedent in sport. What would stop a losing competitor from complaining that the winner—who keeps winning—has an “unfair” physiologic/anatomic advantage and must do X to change it? (Or say that the anatomic advantage they possess is “unfair” and they should be barred from competition?)
Here’s the thing: Watching sport, we want to see the best-of-the-best compete. Wouldn’t that logically imply that we want to see Semenya compete and not rid herself of her advantage? If Semenya’s physiologic advantage(s) is being discussed, why not Semenya’s anatomic advantages? It does not make sense to focus on one variable—as all variables interact to produce the athletic phenotype (Louis, 2004). Phelps and Bolt perfectly embody the results of Bejan, Jones, and Charles (2010)—they have, what I hope are—well-known advantages, and these advantages, on average, are stratified between race due to anatomic differences (see Gerace et al, 1994; Wagner and Heyward, 2000).
Phelps and Bolt have anatomic and physiologic advantages over their competition, just as Semenya does, just like any elite athlete, especially the winners compared to their competition. If Semenya is forced to decrease her testosterone levels, then this will set a horrible precedent for sport, and people may then clamor for Phelps and Bolt to do X, Y, and Z due to their physical advantages. For this reason, Semenya should not decrease her testosterone levels and should be allowed to compete in mid-distance running.
Athleticism is Irreducible to Biology: A Systems View of Athleticism
1550 words
Reductionists would claim that athletic success comes down to the molecular level. I disagree. Though, of course, understanding the molecular pathways and how and why certain athletes excel in certain sports can and will increase our understanding of elite athleticism, reductionist accounts do not tell the full story. A reductionist (which I used to be, especially in regard to sports; see my article Racial Differences in Muscle Fiber Typing Cause Differences in Elite Sporting Competition) would claim that the cause of elite athletic success comes down to the molecular level. That I no longer hold such reductionist views in this area does not mean I deny that there are certain things that make an elite athlete. However, I was wrong to attempt to reduce a complex bio-system and to pinpoint one variable as “the cause” of elite athletic success.
In the book The Genius of All of Us: New Insights into Genetics, Talent, and IQ, David Shenk dispenses with reductionist accounts of athletic success in the fifth chapter of the book. He writes:
2. GENES DON’T DIRECTLY CAUSE TRAITS; THEY ONLY INFLUENCE THE SYSTEM.
Consistent with other lessons of GxE [Genes x Environment], the surprising finding of the $3 billion Human Genome Project is that only in rare instances do specific gene variants directly cause specific traits or diseases. …
As the search for athletic genes continues, therefore, the overwhelming evidence suggests that researchers will instead locate genes prone to certain types of interactions: gene variant A in combination with gene variant B, provoked into expression by X amount of training + Y altitude + Z will to win + a hundred other life variables (coaching, injuries, etc.), will produce some specific result R. What this means, of course, is that we need to dispense rhetorically with the thick firewall between biology (nature) and training (nurture). The reality of GxE assures that each person’s genes interact with his climate, altitude, culture, meals, language, customs and spirituality—everything—to produce unique lifestyle trajectories. Genes play a critical role, but as dynamic instruments, not a fixed blueprint. A seven- or fourteen- or twenty-eight-year-old is not that way merely because of genetic instruction. (Shenk, 2010: 107) [Also read my article Explaining African Running Success Through a Systems View.]
This is looking at the whole system: genes, training, altitude, will to win, and numerous other variables conducive to athletic success. You can’t pinpoint one variable in the entire system and say that it is the cause: each variable works together in concert to produce the athletic phenotype. One can invoke Noble’s (2012) argument that there is no privileged level of causation in the production of an athletic phenotype. There are just too many factors that go into the production of an elite athlete, and attempting to reduce them to one or a few factors to look for in regard to elite athleticism is a fool’s errand. So we can say that there is no privileged level of causation in regard to the athletic phenotype.
In his paper Sport and common-sense racial science, Louis (2004: 41) writes:
The analysis and explanation of racial athleticism is therefore irreducible to biological or socio-cultural determinants and requires a ‘biocultural approach’ (Malina, 1988; Burfoot, 1999; Entine, 2000) or must account for environmental factors (Himes, 1988; Samson and Yerlès, 1988).
Reducing anything, sports included, to either biology or environmental/socio-cultural determinants alone doesn’t make sense; I agree with Louis that we need a ‘biocultural approach’, since biology and socio-cultural determinants are linked. This, of course, upends the nature vs. nurture debate: neither “nature” nor “nurture” has won, since they causally depend on one another to produce the elite athletic phenotype.
Louis (2004) further writes:
In support of this biocultural approach, Entine (2001) argues that athleticism is irreducible to biology because it results from the interaction between population-based genetic differences and culture that, in turn, critiques the Cartesian dualism ‘which sees environment and genes as polar-opposite forces’ (p. 305). This critique draws on the centrality of complexity, plurality and fluidity to social description and analysis that is significant within multicultural common sense. By pointing to the biocultural interactivity of racial formation, Entine suggests that race is irreducible to a single core determinant. This asserts its fundamental complexity that must be understood as produced through the process of articulation across social, cultural and biological categories.
Of course, race is irreducible to a single core determinant; but it is a genuine kind in biology, and so we must understand the social, cultural, and biological causes and how they interact with each other to produce the athletic phenotype. We can look at athlete A, see that he’s black, look at his somatotype, and ascertain that part of the reason athlete A is a good athlete is his biology. Indeed, it is part of the reason: one needs the requisite morphology to succeed in a given sport, though morphology is quite clearly not the only variable needed to produce the athletic phenotype.
One prevalent example here is the Kalenjin (see my article Why Do Jamaicans, Kenyans, and Ethiopians Dominate Running Competitions?). There is no core determinant of Kalenjin running success; indeed, one study I cited in my article shows that Germans had a higher level of a physiological variable conducive to long-distance running success than the Kalenjin did. On the systems view of athleticism, this is no refutation: low Kenyan BMI (the lowest in the world), combined with altitude training (they live at high altitude and typically compete at lower altitudes), a meso-ecto somatotype, the will to train, and even running to and from wherever they need to go, all combine to show how and why this small tribe of Kenyans excels so much in long-distance running competitions.
Sure, knowing what we know about anatomy and physiology, we can say that a certain parameter may be “better” or “worse” in the context of the sport in question; no one denies that. What is denied is the claim that athleticism reduces to biology. It does not, because biology, society, and culture all interact, and the interaction itself is irreducible; it makes no sense to partition biology, society, and culture into percentage points in order to say that one variable has primacy over another, because each level of the system interacts with every other level. Genes, anatomy and physiology, the individual, the overarching society, cultural norms, peers, and a whole slew of other factors explain athletic success, not only in the Kalenjin but in all athletes.
Broos et al (2016) showed that the RR genotype, coupled with the right morphology and fast twitch muscle fibers, leads to more explosive contractions. They write:
In conclusion, this study shows that a-actinin-3 deficiency decreases the contraction velocity of isolated type IIa muscle fibers. The decreased cross-sectional area of type IIa and IIx fibers may explain the increased muscle volume in RR genotypes. Thus, our results suggest that, rather than fiber force, combined effects of morphological and contractile properties of individual fast muscle fibers attribute to the enhanced performance observed in RR genotypes during explosive contractions.
This shows the interaction between genotype, morphology, fast twitch fibers (which blacks have more of; Ceaser and Hunter, 2015) and, of course, the grueling training these elite athletes go through. All of these factors interact, which further buttresses my argument that different levels of the system causally interact with each other to produce the athletic phenotype.
Pro-athletes also have “extraordinary skills for rapidly learning complex and neutral dynamic visual scenes” (Faubert, 2013). This is yet another part of the system, along with the physical variables, that an elite athlete needs to have. Indeed, as Lippi, Favaloro, and Guidi (2008) write:
An advantageous physical genotype is not enough to build a top-class athlete, a champion capable of breaking Olympic records, if endurance elite performances (maximal rate of oxygen uptake, economy of movement, lactate/ventilatory threshold and, potentially, oxygen uptake kinetics) (Williams & Folland, 2008) are not supported by a strong mental background.
So now we have: (1) strong mental background; (2) genes; (3) morphology; (4) VO2 max; (5) altitude; (6) will to win; (7) training; (8) coaching; (9) injuries; (10) peer/familial support; (11) fiber typing; (12) heart strength, etc. There are, of course, myriad other variables conducive to athletic success, and they are irreducible: each must be looked at in the context of the whole system we are observing.
In conclusion, athleticism is irreducible to biology. Since athleticism is irreducible to biology, to explain athleticism we need to look at the entire system, from the individual all the way up to the society that individual is in (and everything in between), to explain how and why athletic phenotypes develop. There is no logical reason to attempt to reduce athleticism to biology, since all of these factors interact. Therefore, the systems view of athleticism is the way we should view the development of athletic phenotypes.
(i) Nature and Nurture interact.
(ii) Since nature and nurture interact, it makes no sense to attempt to reduce anything to one or the other.
(iii) Since nature and nurture interact, and it therefore makes no sense to reduce anything to either one, we must dispense with the idea that reductionism can causally explain differences in athleticism between individuals.
Muscle Fibers, Obesity, Cardiometabolic Disorders, and Race
2650 words
The association between muscle fiber typing, obesity, and race is striking. It is well-established that blacks have a higher proportion of type II skeletal muscle fibers than whites, and these higher proportions lead to physiological differences between the two races, which in turn lead to differing health outcomes, along with differences in athletic competition. Racial differences in health are no doubt complex, but there are certain differences between the races that we can look at and say that there is a relationship warranting further scrutiny.
Why is there an association between negative health outcomes and muscle physiology? The answer is simple if one knows the basics of muscle physiology and how and why muscles contract. (It is worth noting that, out of a slew of anatomic and physiologic factors, movement is the only one we can consciously control; compare menstruation and similar physiologic processes, which are beyond our control.) In this article, I will describe what muscles do, how they are controlled, muscle physiology, the differences in fiber typing between the races, and what those differences mean for health outcomes.
Muscle anatomy and physiology
Muscle fiber number is determined by the second trimester. Bell (1980) noted that skeletal muscle fiber in six-year-olds is not different from normal adult tissue, and so we can say that between time in the womb and age 6, muscle fiber type is set and cannot be changed (though training can change how certain fibers respond; see below).
Muscle anatomy and physiology is interesting because it shows us how and why we move the way we do. Tendons attach muscle to bone, and attached to the tendon is the muscle belly. The muscle belly is made up of fascicles, and the fascicles are made up of muscle fibers. Muscle fibers are made up of myofibrils, and myofibrils are made up of myofilaments. Finally, myofilaments are made up of proteins, specifically actin and myosin; this is what our muscles are built from.
(Diagram of skeletal muscle structure, from the muscle belly down to actin and myosin; panel C shows the sarcomere.)
Muscle fibers are encased by the sarcolemma, which contains cell components such as sarcoplasm, nuclei, and mitochondria. Within each fiber are structures called myofibrils, which contain myofilaments, which are in turn made up of actin (thin filaments) and myosin (thick filaments). These two types of filaments form numerous repeating sections within a myofibril, and each repeating section is known as a sarcomere. Sarcomeres are the “functional” unit of the muscle, like the neuron is for the nervous system. Each ‘z-line’ denotes another sarcomere along a myofibril (Franzini-Armstrong, 1973; Luther, 2009).
Other than actin and myosin, there are two more proteins important for muscle contraction: tropomyosin and troponin. Tropomyosin is found on the actin filament and it blocks myosin binding sites which are located on the actin filament, and so it keeps myosin from attaching to muscle while it is in a relaxed state. On the other hand, troponin is also located on the actin filament but troponin’s job is to provide binding sites for calcium and tropomyosin when a muscle needs to contract.
So the structure of skeletal muscle can be broken down like so: epimysium > muscle belly > perimysium > fascicle > endomysium > muscle fibers > myofibrils > myofilaments > myosin and actin. Note diagram (C) from above; the sarcomere is the smallest contractile unit in the myofibril. According to sliding filament theory (see Cooke, 2004 for a review), a sarcomere shortens as a result of the ‘z-lines’ moving closer together. The ‘z-lines’ converge because myosin heads attach to the actin filament and asynchronously pull the actin filament across the myosin, which results in the shortening of the muscle fiber. Sarcomeres are the basic unit controlling changes in muscle length, so how quickly or slowly they fire depends on the majority fiber type in that specific area.
But skeletal muscle will not contract unless it is stimulated. The nervous system and the muscular system communicate through what is called neural activation, defined as the contraction of muscle generated by neural stimulation. We have “motor neurons”, neurons located in the CNS (central nervous system) which send impulses to muscles to move them. A motor neuron together with the muscle fibers it connects to is called a motor unit, and the point where motor neuron and muscle fiber meet is called the neuromuscular junction: a small gap (synapse) between the nerve and the muscle fiber. Action potentials (electrical impulses) are sent down the axon of the motor neuron from the CNS, and when an action potential reaches the end of the axon, chemical messengers called neurotransmitters are released. Neurotransmitters carry the signal across the junction from the nerve to the muscle.
Muscle fiber types
The two main categories of muscle fiber are type I and type II, ‘slow’ and ‘fast’ twitch, respectively. Type I fibers contain more blood capillaries, higher levels of mitochondria (which transform nutrients into ATP) and more myoglobin, which allows for improved delivery of oxygen. Since myoglobin is similar to hemoglobin (the red pigment found in red blood cells), type I fibers are also known as ‘red fibers.’ Type I fibers are smaller in diameter and slower to produce maximal tension, but they are also the most fatigue-resistant type of fiber.
Type II fibers have two subdivisions, IIa and IIx, based on their mechanical and chemical properties. Type II fibers are in many ways the opposite of type I fibers: they contain far fewer blood capillaries, less mitochondria and less myoglobin. Since they have less myoglobin, they are not red but white, which is why they are known as ‘white fibers.’ IIx fibers have a lower oxidative capacity and thus tire out more quickly. IIa fibers, on the other hand, have a higher oxidative capacity and fatigue more slowly than IIx fibers (Herbison, Jaweed, and Ditunno, 1982; Tellis et al, 2012). IIa fibers are also known as intermediate fast twitch fibers, since they can use both anaerobic and aerobic metabolism equally to produce energy; in this sense IIa fibers combine properties of type I and type IIx fibers. Type II fibers overall are bigger, quicker to produce maximal tension, and quicker to tire out.
Now, when it comes to fiber typing between the races, blacks have a higher proportion of type II fibers, whereas whites have a higher proportion of type I fibers (Ama et al, 1986; Ceaser and Hunter, 2015; see Entine, 2000 and Epstein, 2014 for reviews). Higher proportions of type I fibers are associated with a lower chance of cardiovascular events, whereas type II fibers are associated with a higher risk. Thus, “Skeletal muscle fibre composition may be a mediator of the protective effects of exercise against cardiovascular disease” (Andersen et al, 2015).
Now that the basics of muscle anatomy and physiology are laid out, the hows and whys of muscle contraction and what the different fibers do should be clear. These fibers are distributed between the races in uneven frequencies, which leads to differences in sporting performance, but also to differences in health outcomes.
Muscle fibers and health outcomes
We now know the physiology and anatomy of muscle and the differences between each type of skeletal muscle fiber. Since the two races differ, on average, in the proportions of each fiber type they possess, we should expect stark differences in health outcomes, with these fiber-type differences being part of the reason.
While blacks on average have a higher proportion of type II muscle fibers, whites have a higher proportion of type I muscle fibers. Noting what I wrote above about the differences between the fiber types, and knowing what we know about racial differences in disease outcomes, we can draw some inferences about how differences in muscle fiber typing between races and individuals affect the acquisition and severity of disease.
In their review of black-white differences in muscle fiber typing, Ceaser and Hunter (2015) write that “The longitudinal data regarding the rise in obesity indicates obesity rates have been highest among non-Hispanic Black women and Hispanic women.” And so, knowing what we know about fiber type differences between races and how these fibers act when they fire, we can see how muscle fiber typing would contribute to differences in disease acquisition between groups.
Tanner et al (2001) studied 53 women (n = 28 lean, n = 25 obese) who were undergoing elective abdominal surgery (either a hysterectomy or a gastric bypass). Their physiologic/anatomic measures were taken, and the subjects were grouped by race (black and white) along with obesity status. Tanner et al found that the lean subjects had a higher proportion of type I fibers and a lower proportion of type IIx fibers, whereas the obese subjects were more likely to have a higher proportion of type IIx muscle fibers.
Like other analyses on this matter, Tanner et al (2001) showed that the black subjects had a higher proportion of type II fibers than the white subjects, who had a higher proportion of type I fibers (adiposity not taken into account). Fifty-one percent of the fiber typing from whites was type I, whereas for blacks it was 43.7 percent. Blacks had a higher proportion of type IIx fibers than whites (16.3 percent for whites, 23.4 percent for blacks). Lean blacks and lean whites, though, had similar percentages of type IIx fibers (13.8 percent for whites, 15 percent for blacks). Interestingly, there was no difference in type I fibers between lean whites and lean blacks (55.1 percent for whites, 54.1 percent for blacks), though muscle fiber from obese blacks contained far fewer type I fibers than that from obese whites (48.6 percent for whites, 34.5 percent for blacks). Muscle fiber from obese blacks also had a higher proportion of type IIx fibers than that from obese whites (19.2 percent for whites, 31 percent for blacks). Lean subjects of both races had a higher proportion of type I fibers than obese subjects, and obese subjects of both races had more type IIx fibers than lean subjects.
So, since type II fibers are insulin resistant (Jensen et al, 2007), they should be related to glucose intolerance (type II diabetes), and blacks with ancestry from West Africa should be most affected. Fung (2016, 2018) argues that obesity is a disease of insulin resistance, and we can bring that same rationale to racial differences in obesity. Indeed, Nielsen and Christensen (2011) hypothesize that the higher prevalence of glucose intolerance in blacks is related to their lower percentage of type I fibers and their higher percentage of type II fibers.
Nielsen and Christensen (2011) hypothesize that since blacks have a lower percentage of type I (oxidative) fibers, this explains their lower fat oxidation along with lower resting metabolic rate, sleeping metabolic rate, resting energy expenditure and VO2 max in comparison to whites. Since type I fibers are more oxidative than the glycolytic type II fibers, the lower oxidative capacity in these fibers “may cause a higher fat storage at lower levels of energy intake than in individuals with a higher oxidative capacity” (Nielsen and Christensen, 2011: 611). Though the ratio of IIx to IIa fibers is extremely plastic and affected by lifestyle, Nielsen and Christensen note that individuals with different fiber typings had similar oxidative capacities if they engaged in physical activity. Recall that Ceaser and Hunter (2015) note that blacks have a lower maximal aerobic capacity and a higher proportion of type II fibers, and that lack of physical activity exacerbates the negative effects of having majority type II rather than majority type I fibers. And so, some of these differences between the two racial groups can be ameliorated.
The point is, individuals and groups with a higher percentage of type II fibers who do not engage in physical activity have an even higher risk of low oxidative capacity. Furthermore, a higher proportion of type II fibers implies a higher percentage of IIx fibers, “which are the least oxidative fibres and are positively associated with T2D and obesity” (Nielsen and Christensen, 2011: 612). Nielsen and Christensen note that this may explain the rural-urban difference in diabetes prevalence, with urban populations having a higher proportion of type II diabetics, and the difference in type II diabetes between US blacks and native West Africans (though the reverse is true for West Africans living in the US). With modernization comes a higher chance of being physically inactive, and an individual who is less physically active and has a higher proportion of type II fibers has a higher chance of acquiring metabolic diseases (obesity is also a metabolic disease). Since whites have a higher proportion of type I fibers, they can increase their fat intake, and with it their fat oxidation, but this does not hold for blacks, who “may not adjust well to changes in fat intake” (Nielsen and Christensen, 2011: 612).
Nielsen and Christensen end their paper writing:
Thus, Blacks of West African ancestry might be genetically predisposed to T2D because of an inherited lower amount of skeletal muscle fibre type I, whereby the oxidative capacity and fat oxidation is reduced, causing increased muscular tissue fat accumulation. This might induce skeletal muscle insulin resistance followed by an induced stress on the insulin-producing beta cells. Together with higher beta-cell dysfunction in the West African Diaspora compared to Whites, this will eventually lead to T2D (an overview of the ‘skeletal muscle distribution hypothesis’ can be seen in Figure 2).
Lambernd et al (2012) show that muscle contractions eliminated insulin resistance by blocking pro-inflammatory signalling pathways: this is the mechanism by which physical activity decreases glucose intolerance and thus improves health outcomes, especially for those with a higher proportion of type II fibers. It is therefore important for individuals with majority type II fibers to exercise, since sedentariness is associated with age-related insulin resistance due to impaired GLUT4 utilization (Bunprajun et al, 2013).
(Also see Morrison and Cooper’s (2006) hypothesis that “reduced oxygen-carrying capacity induced a shift to more explosive muscle properties” (Epstein, 2014: 179). Epstein notes that the only science on this hypothesis is one mouse and rat study showing that low hemoglobin can “induce a switch to more explosive muscle fibers” (Epstein, 2014: 178); it has not been tested in humans. If it is tested in humans and holds, that would lend credence to Morrison and Cooper’s (2006) hypothesis.)
Conclusion
Knowing what we know about muscle anatomy and physiology and how muscles act, we can understand the influence the different fiber types have on disease and how they contribute to disease variation between races, sexes and individuals. Knowing how type II fibers act when the individual in question is insulin resistant is especially important, though it has been noted that individuals who participate in aerobic exercise decrease their risk for cardiometabolic disease and can shift the distribution between IIx and IIa fibers, lowering their risk of acquiring cardiometabolic diseases (Ceaser and Hunter, 2015).
Thinking back to sarcomeres (the smallest contractile unit in the muscle) and how they act in type II fibers: they contract much faster in type II fibers than in type I fibers, and those fibers tire faster than type I fibers. Since type II fibers are more likely to be insulin resistant, those with a higher proportion of them need to focus more on aerobic activity, to “balance out” type IIx and IIa fibers and decrease the risk of cardiometabolic disease through more muscle contractions (Lambernd et al, 2012). Since blacks have a higher proportion of type II fibers and are more likely to be sedentary than whites, and since those with a higher proportion of type II fibers are more likely to be obese, it is clear that exercise can and will ameliorate some of the disparity in cardiometabolic disease between blacks and whites.
Race, Body Fat, and Skin Folds
1250 words
Racial differences in body fat are clear to the naked eye: black women are more likely to carry more body fat than white women, and Mexican American women are, too. Different races/ethnies (and the sexes within them) have different formulas for assessing body fat through the use of skin-folds, and the sites where the skin is grasped differ by sex and race.
Body mass index (BMI) and waist circumference overestimate adiposity in blacks, which means that different formulas are needed to assess their adiposity and lean mass. Race-specific formulas and methods are needed to assess body fat and, along with it, disease risk, since blacks are more likely to be obese (black women, at least; it is different for black American men with more African ancestry, see below). The fact of the matter is, when matched on a slew of variables, blacks had lower total and abdominal fat mass than whites.
This is even noted in Asian, black and white prepubertal children. He et al (2002) show that sex differences in body fat distribution are present in children who have yet to reach puberty, and that body fat distribution in Asians differs from that in blacks and whites and also varies by sex. Asian girls had greater gynoid fat by DXA scan only, with girls having greater gynoid fat than boys. Asian girls had lower adjusted extremity fat and gynoid fat compared to white and black girls, while Asian boys had lower adjusted extremity fat as shown by DXA (a gold standard in body fat measurement) compared to whites, but greater gynoid fat than whites and blacks.
Vickery, Cureton, and Collins (1988), Wagner and Heyward (2000), and Robson, Bazin, and Soderstrom (1971) show that there are considerable body composition differences between blacks and whites. These differences come down partly to diet, of course, but there is also a genetic/physiologic component. Combine this with the fact that skin-fold testing does not yield good estimates across groups, and note that black American men with more African ancestry are less likely to be obese (see below).
Vickery, Cureton, and Collins (1988) argue that, if accurate estimates of body fat percentages are to be obtained, race-specific formulas need to be developed and used as independent variables to assess racial differences in body fat percentage. Differences in muscularity don’t seem to account for these skinfold differences, nor does greater mesomorphy. One possible explanation for differences in skinfold thickness is that blacks may store most of their body fat subcutaneously. (See Wagner and Heyward, 2000 for a review on fat patterning and body composition in blacks and whites.)
The often-used Durnin-Womersley formula predicts body fat from skin folds alone. However, “The 1974 DW equations did not predict %BF(DXA) uniformly in all races or ethnicities” (Davidson et al, 2011). Truesdale et al (2016) even show that numerous formulas used to estimate percent body fat are flawed, including some used on different races; most of the equations tested led to starkly different conclusions. But this is based on NHANES data, and the only skin-fold data NHANES provides are the tricep and subscapular skinfolds, so there may be still more problems with the equations used to assess body fat percentage between races. (Also see Cooper, 2010.)
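To make the mechanics concrete, here is a minimal sketch (my own illustration, not code from any paper cited here) of how such two-step estimates work: a Durnin-Womersley-style density equation feeding the Siri equation. The coefficients are the commonly tabulated values for adults aged 20-29 and are assumptions for demonstration only; check them against Durnin and Womersley (1974) before any real use.

```python
import math

# Durnin-Womersley (1974): body density from log10 of the sum of four
# skinfolds (biceps, triceps, subscapular, suprailiac, in mm), with
# age/sex-specific coefficients. Values below are the commonly tabulated
# ones for ages 20-29 -- treat them as placeholders, not gospel.
DW_COEFFS = {
    ("male", "20-29"):   (1.1631, 0.0632),
    ("female", "20-29"): (1.1599, 0.0717),
}

def percent_body_fat(skinfolds_mm, sex, age_band="20-29"):
    """Estimate %BF by chaining Durnin-Womersley density into Siri (1961)."""
    c, m = DW_COEFFS[(sex, age_band)]
    density = c - m * math.log10(sum(skinfolds_mm))
    return 495.0 / density - 450.0  # Siri: %BF = 495/D - 450

# Identical skinfold readings, different coefficient sets: the estimate
# shifts by several percentage points, which is why coefficients fitted
# to one population bias estimates for another with different fat patterning.
folds = [6.0, 10.0, 12.0, 14.0]  # biceps, triceps, subscapular, suprailiac
print(round(percent_body_fat(folds, "male"), 1))    # ~16.8
print(round(percent_body_fat(folds, "female"), 1))  # ~24.4
```

The whole estimate hangs on regression coefficients fitted to a particular reference population; apply them to a group that stores fat differently (say, more subcutaneously) and the bias is built in, which is exactly the argument for population-specific equations.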
Klimentidis et al (2016) show that black men (but not black women) seem to be protected against obesity and central adiposity (fat gain around the midsection) and that African ancestry negatively correlated with adiposity. The combination of male gender and West African ancestry predicted low levels of adiposity compared to black Americans with less African ancestry. Furthermore, since black men and women have, theoretically, the same SES, cultural/social factors would not play as large a role as genetic factors in explaining the differences in adiposity between black men and black women. Black men with more African ancestry had a lower WHR and less central adiposity than black men with less African ancestry. If we assume they had similar levels of SES and lived in similar neighborhoods, genetic differences are the most plausible remaining explanation.
Klimentidis et al (2016) write:
One interpretation is that AAs are exposed to environmental and/or cultural factors that predispose them to greater obesity than EAs. Possibly, some of the genes that are inherited as part of their West-African ancestry are protective against obesity, thereby “canceling out” the obesifying effects of environment/culture, but only in men. Another interpretation is that genetic protection is afforded to all individuals of African descent, but this protection is overwhelmed by cultural and/or other factors in women.
Black men do, as is popularly believed, prefer bigger women over smaller women. For example, Freedman et al (2004) showed that black American men were more likely to prefer bigger women: they “are more willing to idealize a woman of a heavier body size, with more curves, than do their White American counterparts” (Freedman et al, 2004: 197). It is hypothesized that black American men finding such figures attractive (figures with “more curves” (Freedman et al, 2004: 197)) protects against eating pathologies such as anorexia and bulimia. So, it has been established that black men have thinner skin folds than whites, which leads to skewed lean mass/body fat readings, and that black men with more African ancestry are less likely to be obese. These average differences between races, of course, contribute to differing disease acquisition.
I have covered differences in body fat in a few Asian ethnies and have come to the obvious conclusion: Asians at the same height and weight as whites and blacks will have more adipose tissue on their bodies. They, too, like blacks and whites, have different sites that need to be assessed when using skin folds to estimate body fat.
Henriques (2016: 29) has a table of equations for calculating estimated body density from skin-fold measures in various populations. Of interest are the ones for blacks or ‘Hispanics‘, blacks or athletes, and blacks and whites. (The table is reproduced from NSCA, 2008, so the references are not in the back of the text.)
For black and ‘Hispanic’ women aged 18-55 years, the skin-fold sites are the chest, abdomen, triceps, subscapular, suprailiac, midaxillary, and thigh. For blacks or athletes aged 18-61 years, the sites are the same (but a different equation is used for body fat estimation). For white or anorexic women aged 18-55, the sites are just the triceps, suprailiac and thigh. For black and white boys aged 6-17, only the triceps and calf are used. The same holds for black and white girls, but, again, a different formula is used to assess body fat (Henriques, 2016: 29).
Morrison et al (2012) showed that white girls had a higher percent body fat than black girls at ages 9-12, but at every age after, black girls had a higher percent body fat (which relates to earlier menarche in black girls, since higher body fat means earlier puberty; Kaplowitz, 2008). Black girls, though, had more fat in their subscapular skin folds than white girls at all ages.
So, it seems, population- and race-specific formulas need to be created to better assess body fat percentage in different races/ethnies; we should not assume that one formula or method of assessing body fat works for all racial/ethnic groups. According to the literature (some reviewed here and in Wagner and Heyward, 2000), these types of formulas are sorely needed to better assess health markers in certain populations. These differences in body fat percentage and distribution then have real health consequences for the races/ethnies in question.
DNA is not a “Blueprint”
2200 words
Leading behavior geneticist Robert Plomin is publishing Blueprint: How DNA Makes Us Who We Are in October of 2018. I, of course, have not yet read the book. But if its main thesis is that DNA is a “code”, “recipe”, or “blueprint”, then it is already wrong, because presuming that DNA is any of those three things marries one to certain ideas, even if they are never explicitly stated. Plomin is what one would term a “hereditarian”: he believes that genes, more than environment, shape an individual’s psychological and other traits. (That’s a false dichotomy, though.) In the preview for the book at MIT Press, they write:
In Blueprint, behavioral geneticist Robert Plomin describes how the DNA revolution has made DNA personal by giving us the power to predict our psychological strengths and weaknesses from birth. A century of genetic research shows that DNA differences inherited from our parents are the consistent life-long sources of our psychological individuality—the blueprint that makes us who we are. This, says Plomin, is a game-changer. It calls for a radical rethinking of what makes us who we are.
Genetics accounts for fifty percent of psychological differences—not just mental health and school achievement, but all psychological traits, from personality to intellectual abilities. Nature defeats nurture by a landslide.
Plomin explores the implications of this, drawing some provocative conclusions—among them that parenting styles don’t really affect children’s outcomes once genetics is taken into effect. Neither tiger mothers nor attachment parenting affects children’s ability to get into Harvard. After describing why DNA matters, Plomin explains what DNA does, offering readers a unique insider’s view of the exciting synergies that came from combining genetics and psychology.
I won’t get into most of these things today (I will wait until I read the book for that), but this will be just an article showing that DNA is, in fact, not a blueprint, and DNA is not a “code” or “recipe” for the organism.
It’s funny that the little blurb says that “Nature defeats nurture by a landslide“, because, as I have argued at length, nature vs. nurture is a false dichotomy (see Oyama, 1985, 1999, 2000; Moore, 2002; Schneider, 2007; Moore, 2017). Nature vs. nurture is the battleground on which the false dichotomy of genes vs. environment is fought. It makes no sense to partition heritability estimates if genes interact with environment; that is, if nature interacts with nurture.
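To see why interaction wrecks the partition, consider a toy simulation (my own construction, not drawn from any of the authors cited) in which the phenotype is simply the product of a genetic value and an environmental value. The “share of variance due to genes” then depends on how variable the environments are, making it a property of the population-environment pair rather than of the genes themselves.

```python
import random

random.seed(1)

def variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

def apparent_heritability(env_spread):
    """Toy GxE model: phenotype = genotype * environment.

    Crudely estimates the share of phenotypic variance 'explained' by
    genotype alone as var(E[P|G]) / var(P).
    """
    n = 20_000
    genos = [random.gauss(1.0, 0.2) for _ in range(n)]
    envs = [random.gauss(1.0, env_spread) for _ in range(n)]
    phenos = [g * e for g, e in zip(genos, envs)]
    # With P = G*E and E independent of G, E[P|G] = G * mean(E).
    mean_e = sum(envs) / n
    return variance([g * mean_e for g in genos]) / variance(phenos)

# The very same genotypes look roughly 94% 'heritable' when environments
# are nearly uniform and roughly 10% when environments vary widely.
print(round(apparent_heritability(0.05), 2))
print(round(apparent_heritability(0.60), 2))
```

The same genotypes, dropped into a different distribution of environments, yield a different “heritability”; nothing about the genes changed, only the context.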
DNA is also called “the book of life”. For example, in her book The Epigenetics Revolution: How Modern Biology Is Rewriting Our Understanding of Genetics, Disease, and Inheritance, Nessa Carey writes that “There’s no debate that the DNA blueprint is a starting point” (pg 16). This, though, can be contested. “But the promise of a peep into the ‘book of life’ leading to a cure for all diseases was a mistake” (Noble, 2017: 161).
Developmental psychologist and cognitive scientist David S. Moore concurs. In his book The Developing Genome: An Introduction to Behavioral Epigenetics, he writes (pg 45):
So, although I will talk about genes repeatedly in this book, it is only because there is no other convenient way to communicate about contemporary ideas in molecular biology. And when I refer to a gene, I will be talking about a segment or segments of DNA containing sequence information that is used to help construct a protein (or some other product that performs a biological function). But it is worth remembering that contemporary biologists do not mean any one thing when they talk about “genes”; the gene remains a fundamentally hypothetical concept to this day. The common belief that there are things inside of us that constitute a set of instructions for building bodies and minds—things that are analogous to “blueprints” or “recipes”—is undoubtedly false. Instead, DNA segments often contain information that is ambiguous, and that must be edited or arranged in context-dependent ways before it can be used.
Still, others may use terms like “genes for” trait T. This, too, is incorrect. In his outstanding book Making Sense of Genes, Kostas Kampourakis writes (pg 19):
I also explain why the notion of “genes for,” in the vernacular sense, is not only misleading but also entirely inaccurate and scientifically illegitimate.
[…]
First, I show that genes “operate” in the context of development only. This means that genes are implicated in the development of characters but do not determine them. Second, I explain why single genes do not alone produce characters or disease but contribute to their variation. This means that genes can account for variation in characters but cannot alone explain their origin. Third, I show that genes are not the masters of the game but are subject to complex regulatory processes.
Genes can only be seen as passive templates, not ultimate causes (Noble, 2011), and while they cannot explain the origin of different characters, they can account for variation in physical characters. Genes only “do” something in the context of development; they are inert molecules and thus cannot “cause” anything on their own.
Genes are not ‘for’ traits, but they are difference-makers for traits. Sterelny and Griffiths (1999: 102), in their book Sex and Death: An Introduction to Philosophy of Biology write:
Sterelny and Griffiths (1988) responded to the idea that genes are invisible to selection by treating genes as difference makers, and as visible to selection by virtue of the differences they make. In doing so, they provided a formal reconstruction of the “gene for” locution. The details are complex, but the basic intent of the reconstruction is simple. A certain allele in humans is an “allele for brown eyes” because, in standard environments, having that allele rather than alternatives typically available in the population means that your eyes will be brown rather than blue. This is the concept of a gene as a difference maker. It is very important to note, however, that genes are context-sensitive difference makers. Their effects depend on the genetic, cellular, and other features of their environment.
(Genes can be difference makers for physical traits, but not for psychological traits because no psychophysical laws exist, but I’ll get to that in the future.)
Note how the terms “context-sensitive” and “context-dependent” keep appearing. The DNA-as-blueprint claim presumes that DNA is context-independent, but we cannot divorce genes—whatever they are—from their context, since genes and environment, nature and nurture, are intertwined. (It is even questioned whether ‘genes’ are truly units of inheritance; see Fogle, 1990. Fogle (2000) argues for dispensing with the concept of “gene” altogether, with biologists using terms like intron, promoter region, and exon instead. Nevertheless, there is a huge disconnect between the term “gene” in molecular biology and in classical genetics. Keller (2000) argues that there are still uses for the term “gene” and that we should not dispense with it. I believe we should.)
Susan Oyama (2000: 77) writes in her book The Ontogeny of Information:
“Though a plan implies action, it does not itself act, so if the genes are a blueprint, something else is the constructor-construction worker. Though blueprints are usually contrasted with building materials, the genes are quite easily conceptualized as templates for building tools and materials; once so utilized, of course, they enter the developmental process and influence its course. The point of the blueprint analogy, though, does not seem to be to illuminate developmental processes, but rather to assume them and, in celebrating their regularity, to impute cognitive functions to genes. How these functions are exercised is left unclear in this type of metaphor, except that the genetic plan is seen in some peculiar way to carry itself out, generating all the necessary steps in the necessary sequence. No light is shed on multiple developmental possibilities, species-typical or atypical.“
The Modern Synthesis is one of the causes of genes-as-blueprints thinking; it gets causation in biology wrong. Genes are not active causes but passive templates, as argued by many authors, and thus cannot “cause” anything on their own.
In his 2017 book Dance to the Tune of Life: Biological Relativity, Denis Noble writes (pg 157):
As we saw earlier in this chapter, these triplet sequences are formed from any combination of the four bases U, C, A and G in RNA and T, C, A and G in DNA. They are often described as a genetic ‘code’, but it is important to understand that this usage of the word ‘code’ carries overtones that can be confusing.
A code was originally an intentional encryption used by humans to communicate. The genetic ‘code’ is not intentional in that sense. The word ‘code’ has unfortunately reinforced the idea that genes are active and even complete causes, in much the same way as a computer is caused to follow the instructions of a computer program. The more neutral word ‘template’ would be better. Templates are used only when required (activated); they are not themselves active causes. The active causes lie within the cells themselves since they determine the expression patterns for the different cell types and states. These patterns are communicated to the DNA by transcription factors, by methylation patterns and by binding to the tails of histones, all of which influence the pattern and speed of transcription of different parts of the genome. If the word ‘instruction’ is useful here at all, it is rather that the cell instructs the genome. As Barbara McClintock wrote in 1984 after receiving her Nobel Prize, the genome is an ‘organ of the cell’, not the other way around.

Realising that DNA is under the control of the system has been reinforced by the discovery that cells use different start, stop and splice sites for producing different messenger RNAs from a single DNA sequence. This enables the same sequence to code for different proteins in different cell types and under different conditions [here’s where context-dependency comes into play again].

Representing the direction of causality in biology the wrong way round is therefore confusing and has far-reaching consequences. The causality is circular, acting both ways: passive causality by DNA sequences acting as otherwise inert templates, and active causality by the functional networks of interactions that determine how the genome is activated.
This takes care of the idea that DNA is a ‘code’. But what about DNA being a ‘blueprint’, the claim that all of the information needed to construct the organism is contained in its DNA before conception? DNA is clearly not a ‘program’ in that sense. The complete cell is also needed, and its “complex structures are inherited by self-templating” (Noble, 2017: 161). Thus, the “blueprint” is the whole cell, not just the genome itself (remember that the genome is an organ of the cell).
Lastly, GWA studies have been all the rage recently, but there is only so much we can learn from association studies before we need to turn to the physiological sciences for functional analyses. Indeed, Denis Noble (2018) writes in a new editorial:
As with the results of GWAS (genome-wide association studies) generally, the associations at the genome sequence level are remarkably weak and, with the exception of certain rare genetic diseases, may even be meaningless (13, 21). The reason is that if you gather a sufficiently large data set, it is a mathematical necessity that you will find correlations, even if the data set was generated randomly so that the correlations must be spurious. The bigger the data set, the more spurious correlations will be found (3).
[…]
The results of GWAS do not reveal the secrets of life, nor have they delivered the many cures for complex diseases that society badly needs. The reason is that association studies do not reveal biological mechanisms. Physiology does. Worse still, “the more data, the more arbitrary, meaningless and useless (for future action) correlations will be found in them” is a necessary mathematical statement (3).
Nor does applying a highly restricted DNA sequence-based interpretation of evolutionary biology, and its latest manifestation in GWAS, to the social sciences augur well for society.
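Noble’s point about large data sets is easy to demonstrate. The sketch below (mine, purely illustrative) generates an outcome and thousands of “markers” that are pure noise by construction, then reports the strongest correlation found; with enough markers, an impressive-looking association always turns up.

```python
import random

random.seed(0)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n_subjects, n_markers = 200, 5000
outcome = [random.gauss(0, 1) for _ in range(n_subjects)]

# Every 'marker' is generated independently of the outcome, so any
# correlation found below is spurious by construction.
best = max(
    abs(pearson([random.gauss(0, 1) for _ in range(n_subjects)], outcome))
    for _ in range(n_markers)
)
print(round(best, 2))  # typically ~0.25-0.30
```

With 200 subjects, a correlation near 0.28 carries a tiny nominal p-value, yet here it is guaranteed to be meaningless; scale the marker count up to GWAS dimensions and the problem only grows, which is why associations need functional, physiological follow-up.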
It is further worth noting that there is no privileged level of causation in biological systems (Noble, 2012). A priori, there is no justification for privileging one level over another in regard to causation, so saying that one level of the organism is “higher” than another (for instance, that genes are, and should be, privileged over the environment or any other part of the system) is clearly false: there is both upwards and downwards causation, influencing all levels of the system.
In sum, it is highly misleading to refer to DNA as a “blueprint”, a “code”, or a “recipe.” Referring to DNA in this way presumes that DNA can be divorced from its context, that it does not work together with the environment. As I have argued in the past, association studies will not elucidate genetic mechanisms, nor will heritability estimates (Richardson, 2012). We need physiological testing for these functional analyses, and association studies like GWAS, and even heritability estimates, do not give us this type of information (Panofsky, 2014). So, it seems, what Plomin et al are looking for, which they assume is “in the genes”, is not there, because they use a false model of the gene (Burt, 2015; Richardson, 2017). Genes are resources, templates to be used by and for the system, not causes of traits and development. They can account for differences in variation but cannot be said to be the origin of trait differences. Genes can be said to be difference makers, but whether they are difference makers for behavior, in my opinion, cannot be known.
(For further information on genes and what they do, read Chapters Four and Five of Ken Richardson’s book Genes, Brains, and Human Potential: The Science and Ideology of Intelligence. Plomin himself seems to be a reductionist, and Richardson took care of that paradigm in his book. Lickliter (2018) has a good review of the book, along with critiques of the reductionist paradigm that Plomin et al follow.)
Genotypes, Athletic Performance, and Race
2050 words
Everyone wants to know the keys to athletic success; however, as I have argued in the past, to understand elite athletic performance we must understand how the system works in concert as a whole, especially in the environments the biological system finds itself in. Reducing the question to genes, or training, or any other single factor does not make sense: while reductionism allows us to identify certain differences between athletes, it does not allow us to appreciate the full range of how and why elite athletes differ in their sport of choice. One large meta-analysis has been done on the effects of a few genotypes on elite athletic performance, and it shows us what we already know (blacks are more likely to have the genotype associated with power performance, so why are there no black Strongmen, nor black competitors in the World’s Strongest Man?). A few studies and one meta-analysis attempt to get to the bottom of the genetics of elite athletic performance and, while genetics of course plays a factor, we must take a systems view of the matter.
One 2013 study found that a functional polymorphism in the angiotensinogen (AGT) gene was 2 to 3 times more common in elite power athletes than in (non-athlete) controls and elite endurance athletes (Zarebska et al, 2013). The sample was Polish (n = 223; 156 males, 67 females), broken down into tiers: 100 power athletes (29 100-400 m runners; 22 powerlifters; 20 weightlifters; 14 throwers; 15 jumpers) and 123 endurance athletes (4 triathletes; 6 race walkers; 14 road cyclists; 6 15-50 km cross-country skiers; 12 marathon runners; 53 rowers; 17 3-10 km runners; 11 800-1500 m swimmers).
Zarebska et al (2013) attempted to replicate associations found in other studies (Buxens et al, 2009), most notably the association with the M235T polymorphism in the AGT (angiotensinogen) gene. Their main finding was a higher representation of the CC genotype and C allele of the M235T polymorphism among elite power athletes compared with endurance athletes and controls, which suggests that the C allele “may be associated with a predisposition to power-oriented events” (Zarebska et al, 2013: 2901).
Elite power athletes were more likely to possess the CC genotype: 40 percent of power athletes had it, whereas 13 percent of endurance athletes and 18 percent of non-athletes did. So power athletes were more than three times as likely as endurance athletes to have the CC genotype, and roughly twice as likely as non-athletes. At least one copy of the C allele was found in 55 percent of the power athletes, versus about 40 percent of endurance athletes and non-athletes alike. (Further, in elite anaerobic athletes, explosive power has consistently been found to be a difference maker in predicting elite sporting performance; Lorenz et al, 2013.)
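As an aside on how such “X times more likely” figures are derived: below is a quick worked computation (my own, using the genotype frequencies summarized above) of both the plain frequency ratio and the odds ratio, since papers often report the latter.

```python
def freq_and_odds_ratio(p_case, p_control):
    """Frequency ratio and odds ratio for carrying a genotype,
    given its frequency in two groups."""
    fr = p_case / p_control
    oratio = (p_case / (1 - p_case)) / (p_control / (1 - p_control))
    return fr, oratio

# CC-genotype frequencies as summarized above (Zarebska et al, 2013):
# 40% of power athletes, 13% of endurance athletes, 18% of non-athletes.
for label, p_other in [("vs endurance", 0.13), ("vs controls", 0.18)]:
    fr, oratio = freq_and_odds_ratio(0.40, p_other)
    print(f"{label}: frequency ratio {fr:.1f}x, odds ratio {oratio:.1f}")
```

This reproduces the prose above: roughly 3x against endurance athletes and 2x against non-athletes by plain frequency, with the corresponding odds ratios running higher (about 4.5 and 3.0).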
Now we come to the more interesting parts: ethnic differences in the M235T polymorphism. Zarebska et al (2013: 2901-2902) write:
The M235T allele distribution varies widely according to the subject’s ethnic origin: the T235 allele is by far the most frequent in Africans (~0.90) and in African-Americans (~0.80). It is also high in the Japanese population (0.65–0.75). The T235 (C4027) allele distribution of the control participants in our study was lower (0.40) but was similar to that reported among Spanish Caucasians (0.41), as were the sports specialties of both the power athletes (throwers, sprinters, and jumpers) and endurance athletes (marathon runners, 3- to 10-km runners, and road cyclists), thus mirroring the aforementioned studies.
Zarebska et al (2013: 2902) conclude that their study, along with the study they replicated, supports the hypothesis that the C allele of the M235T polymorphism in the AGT gene may confer a competitive advantage in power-oriented sports, which is partly mediated through ANGII production in the skeletal muscles. Mechanisms can explain this mediation, such as a direct skeletal muscle hypertrophic effect, along with the redistribution of muscle blood flow between type I (slow twitch) and type II (fast twitch) fibers, which would augment power and speed. It is interesting to note, however, that Zarebska et al (2013) did not find any differences between “top-elite” athletes who had won medals in international competitions and elite-level athletes who were not medalists.
The significance of this gene is that AGT is part of the renin-angiotensin system, which is partly responsible for blood pressure and body salt regulation (Hall, 1991; Schweda, 2014). There is an ethnic difference in this polymorphism: according to Zarebska et al (2013), African Americans and Africans are more likely to carry the variant associated with elite power performance.
There is also a meta-analysis of genotypes and elite power athlete performance (Weyerstrab et al, 2017), which analyzed 36 studies attempting to find associations between genotype and athletic ability. One of the polymorphisms studied was the famous ACTN3. It has been noted that, when conditions are right (i.e., the right morphology), the combined effects of morphology and the contractile properties of individual muscle fibers contribute to the enhanced performance of those with the RR ACTN3 genotype (Broos et al, 2016), while Ma et al (2013) also lend credence to the idea that genetics influences sporting performance. This is, in fact, the most-replicated association in regard to elite sporting performance: we know the mechanism by which muscle fibers contract and the morphology needed to maximize the effectiveness of fast twitch (type II) fibers. (Blacks have a higher proportion of type II fibers; see Ceaser and Hunter, 2015 for a review.)
Weyerstrab et al (2017) found significant associations between genotype and elite power performance: ten polymorphisms were significantly associated with power athlete status. Their most interesting findings, though, were on race. Weyerstrab et al (2017: 6) write:
Results of this meta-analysis show that US African American carriers of the ACE AG genotype (rs4363) were more than two times more likely to become a power athlete compared to carriers of the ACE preferential genotype for power athlete status (AA) in this population.
“Power athlete” does not necessarily have to mean “strength athlete” as in powerlifters or weightlifters (more on weightlifters below).
Lastly, the AGT M235T polymorphism, while associated with other power movements, was not associated with elite weightlifting performance (Ben-Zaken et al, 2018). As noted above, this polymorphism was observed in other power athletes, and since these movements are largely similar (short, explosive movements), one would rightly reason that this association should hold for weightlifters, too. However, this is not what we find.
Weightlifting, compared to other explosive power sports, is different: the start of a lift takes explosive power, but during the ascent the lifter moves the weight more slowly, owing to biomechanics and the heavy load. Ben-Zaken et al (2018) studied 47 weightlifters (38 male, 9 female) and 86 controls. Every athlete studied competed in national and international meets on a regular basis, and thirty of the weightlifters were classified as “elite” (which entails participating in and winning national and international competitions such as the Olympics and the European and World Championships).
Ben-Zaken et al (2018) did find that weightlifters had a higher prevalence of the AGT 235T polymorphism than controls, though there was no difference in its prevalence between elite and national-level competitors, which “[suggests] that this polymorphism cannot determine or predict elite competitive weightlifting performance” (Ben-Zaken et al, 2018: 38). A favorable genetic profile is important for sporting success, but despite the higher prevalence of AGT 235T in weightlifters compared to controls, it could not explain the difference between national- and elite-level competitors. Other polymorphisms could, of course, contribute to weightlifting success, and variables “such as training experience, superior equipment and facilities, adequate nutrition, greater familial support, and motivational factors, are crucial for top-level sports development as well” (Ben-Zaken et al, 2018: 39).
I should also comment on Anatoly Karlin’s new article The (Physical) Strength of Nations. I don’t disagree with his main overall point; I only disagree that grip strength is a good measure of overall strength, even though it does follow the expected patterns. Racial differences in grip strength exist, as I have covered in the past. Furthermore, there are associations between muscle strength and longevity, with stronger men being more likely to live longer, fuller lives (Ruiz et al, 2008; Volaklis, Halle, and Meisinger, 2015; Garcia-Hermoso et al, 2018), so strength training can only be seen as a net positive, especially in regard to living a longer and fuller life. Handgrip strength does correlate highly with overall strength (Wind et al, 2010; Trosclair et al, 2011), and it can tell you a good deal about your overall health (Lee et al, 2016), but there is no better proxy for one’s strength on a lift than actually doing the lift.
There are replicated genetic associations with explosive, powerful athletic performance, along with some understanding of the causal mechanisms behind the polymorphisms and their carry-over to power sports. We know that if the morphology is right and the individual has the RR ACTN3 genotype, they will excel in explosive sports, and we know the causal pathways of ACTN3 and how it relates to differences in sprinting competitions. It is worth noting that, while we know a lot more about the genomics of sports than we did 20, or even 10, years ago, current genetic testing has zero predictive power in regard to talent identification (Pitsiladis et al, 2013).
So, for parents and coaches who wonder about the athletic potential of their children and students, the best way to gauge whether they will excel in athletics is to have them compete and compare them to other kids. Even if the genetic basis of elite power performance is fully unlocked one day (which I doubt it will be), the best way to ascertain whether someone will excel in a sport is to put them to the test and see what happens. We are in our infancy in understanding the genomics of sporting performance, but when we do understand which genotypes are more prevalent in certain sports (and, of course, how genotypes interact with the environment and with other genes), we will better understand how and why some people are better in certain sports.
The genomics of elite sporting performance is very interesting; however, the answer that reductionists want to see will not appear: genes are difference makers (Sterelny and Griffiths, 1999), not causes. A favorable genetic profile, sufficient training, and a whole slew of other environmental and mental factors (Lippi, Favaloro, and Guidi, 2008) are needed for the athlete to reach their maximum athletic potential (see Guth and Roth, 2013). Genetic and environmental differences between individuals and groups most definitely explain differences in elite sporting performance, though elucidating what causes what—and the mechanisms that produce the studied trait in question—will be tough.
Just because group A has gene or gene networks G and they compete in competition C does not mean that gene or gene networks G contribute in full—or in part—to sporting success. The correlations could be coincidental and non-functional in regard to the sport in question. Athletes should be studied in isolation, meaning studying a specific athlete in a specific discipline to ascertain how, what, and why works for that athlete, along with taking anthropometric measures, seeing how badly they want "it", and weighing other environmental factors such as nutrition and training. Looking at the body as a system will take us away from privileging one part over another—while still understanding that the parts do play a role, just not the role that reductionists believe.
These studies, while they attempt to show us how genetic factors cause differences at the elite level in power sports, will not tell the whole story, because we must look at the whole system, not reduce it down to the sum of its parts (Shenk, 2011: chapter 5). While blacks are more likely to have these polymorphisms that are associated with elite power athletic performance, this does not obviously carry over to strongman and powerlifting competition.
Somatotyping, Constitutional Psychology, and Sports
1600 words
In the 1940s, psychologist William Sheldon created a system of body measures known as "somatotyping", then took his somatotypes and attempted to link each soma (endomorph, ectomorph, or mesomorph) to differing personality types. It was even said that "constitutional psychology can guide a eugenics program and save the modern world from itself."
Sheldon attempted to correlate different personality dimensions with different somas. His somas fell out of favor before being revived by two of his disciples—without the "we-can-guess-your-personality-from-your-body-type" canard that Sheldon used. Somatotyping, while of course put to use in a different way today compared to what it was originally created for, gives us reliable dimensions for human appendages, and from there we can ascertain what a given individual would excel at in regard to sporting events (obviously this is just on the basis of physical measures and does not measure the mind one needs to excel in sports).
The somatotyping system is straightforward: you have three values, say, 1-1-7; the first refers to endomorphy, the second to mesomorphy, and the third to ectomorphy. A 1-1-7, therefore, would be an extreme ectomorph. However, few people are at the extreme end of each soma, and most people have a combination of two or even all three of the somas.
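As a toy illustration of the rating scheme just described, here is a minimal Python sketch that labels a somatotype triple by its dominant component. The simple max-rule classification is my own assumption for illustration—it is not Sheldon's or Carter's actual scoring procedure, which derives the three ratings from anthropometric measurements.

```python
def dominant_soma(endo: int, meso: int, ecto: int) -> str:
    """Label a somatotype triple (e.g., 1-1-7) by its dominant component.

    Components are rated on a 1-7 scale in the fixed order:
    endomorphy, mesomorphy, ectomorphy.
    """
    ratings = {"endomorph": endo, "mesomorph": meso, "ectomorph": ecto}
    top = max(ratings.values())
    leaders = [soma for soma, rating in ratings.items() if rating == top]
    # Most people are a blend, so report ties rather than forcing one type.
    return leaders[0] if len(leaders) == 1 else "mixed: " + "/".join(leaders)

print(dominant_soma(1, 1, 7))  # the extreme ectomorph from the example above
print(dominant_soma(3, 5, 5))  # a blend: "mixed: mesomorph/ectomorph"
```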
According to Carter (2002): "The somatotype is defined as the quantification of the present shape and composition of the human body." So, obviously, somas can change over time. However, it should be noted that the somatotype is largely based on one's musculoskeletal system. This is where the appendages come in, along with body fat, wide or narrow clavicles and chest, etc. This is why the typing system, although it began as a now-discredited method, can still be used today, since we no longer pair the pseudoscientific personality measures with somatotyping.
Ectomorphs are long and lean—lanky, you could say. They have a smaller, narrower chest and shoulders, longer arms and legs, a short upper body, and a hard time gaining weight (I'd say they have a harder time gaining weight due to a slightly faster metabolism, within the normal range of metabolic variation, of course). Put simply, ectomorphs are just skinny and lanky with less body fat than mesos and endos. Human races that fit this soma are East Africans and South Asians (see Dutton and Lynn, 2015; one of my favorite papers from Lynn for obvious reasons).
Endomorphs are stockier and shorter, and have wider hips, short limbs, a wider trunk, and more body fat, and they can gain muscular strength more easily than the other somas. Endos, being shorter than ectos and mesos, have a lower center of gravity along with shorter arms. Thus, we should see these somas dominate pure strength competitions, such as Strongman and powerlifting—and this is what we see. Races that generally conform to this type are East Asians, Europeans, and Pacific Islanders (see Dutton and Lynn, 2015).
Finally, we have mesomorphs (the “king” of all of the types). Mesos are more muscular on average than the two others, they have less body fat than endos but more body fat than ectos; they have wider shoulders, chest and hips, a short trunk and long limbs. The most mesomorphic races are West Africans (Malina, 1969), and due to their somatotype they can dominate sprinting competitions; they also have thinner skin folds (Vickery, Cureton, and Collins, 1988; Wagner and Heyward, 2000), and so they would have an easier time excelling at running competitions but not at weightlifting, powerlifting, or Strongman (see Dutton and Lynn, 2015).
These anatomic differences between the races of man are due to climatic adaptations. The somatotypic differences between Neanderthals and Homo sapiens mirror the somatotype difference between blacks and whites; since Neanderthals were cold-adapted, they were shorter and had wider pelves and could thus generate more power than the heat-adapted Homo sapiens, who had long limbs and narrow pelves to better dissipate heat. Either way, we can look at the differences in somatotype between races that evolved in Europe and Africa to ascertain the somatotype of Neanderthals—and we have fossil evidence for these claims, too (see, e.g., Weaver and Hublin, 2009; Gruss and Schmitt, 2016).
Now, just because somatotyping, at its conception, was mixed with pseudoscientific views about differing somas having differing psychological types does not mean that these differences in body type have no bearing on sporting performance. We can chuck the "constitutional psychology" aspect of somatotyping and keep the anthropometric measures, and, along with the knowledge of human biomechanics, we can then discuss, in a scientific manner, why one soma would or would not excel in sport X. That somatotyping began as crank pseudoscience does not mean that it is not useful today, since we do not ascribe inherent psychological differences to these somas (saying that this soma has a harder time gaining weight compared to that soma is not ascribing a psychological difference to the soma; it is a physiological claim—on average, different somas have different propensities for weight gain).
In her book Straightening the Bell Curve: How Stereotypes about Black Masculinity Drive Research about Race and Intelligence, Hilliard (2012: 21) discusses the pitfalls of somatotyping and how Sheldon attempted to correlate personality measures with his newfound somatotypes:
As a young graduate student, he [Richard Herrnstein] had fallen under the spell of Harvard professor S. S. Stevens, who had coauthored with William Sheldon a book called The Varieties of Temperament: A Psychology of Constitutional Differences, which popularized the concept of “somatotyping,” first articulated by William Sheldon. This theory sought, through the precise measurement and analysis of human body types, to establish correlations comparing intelligence, temperament, sexual proclivities, and the moral worth of individuals. Thus, criminals were perceived to be shorter and heavier and more muscular than morally upstanding citizens. Black males were reported to rank higher on the “masculine component” scale than white males did, but lower in intelligence. Somatotyping lured the impressionable young Herrnstein into a world promising precision and human predictability based on the measuring of body parts.
Though constitutional psychology is now discredited, there may have been something to some of Sheldon's theories. Ikeda et al (2018: 3) conclude in their paper, Re-evaluating classical body type theories: genetic correlation between psychiatric disorders and body mass index, that "a trans-ancestry meta-analysis of the genetic correlation between psychiatric disorders and BMI indicated that the negative correlation with SCZ supported classical body type theories proposed in the last century, but found a negative correlation between BD and BMI, opposite to what would have been predicted." (Though it should be noted that SCZ is a largely, if not fully, environmentally-induced disorder; see Joseph, 2017.)
These different types (i.e., the differing limb lengths/body proportions) have implications for sporting performance. Asfaw and A (2018) found that Ethiopian women high jumpers had the highest ectomorph values, whereas long and triple jumpers were found to be more mesomorphic. Sports that suit ectos are distance running, tennis, etc.—anything in which the individual can use their light frame to advantage. Since they have longer limbs and a lighter frame, they can gain more speed in the run-up to the jump compared to endos and mesos (who are heavier). This shows why ectos have a biomechanical advantage when it comes to high jumping.
As for mesomorphs, the sports they excel at are weightlifting, powerlifting, strongman, football, rugby, etc.—any sport where the individual can use their power and heavier bone mass. Gutnik et al (2017) even concluded that "These results suggest with high probability that there is a developmental tendency of change in different aspects of morphometric phenotypes of selected kinds of sport athletes. These phenomena may be explained by the effects of continuous intensive training and achievement of highly sport-defined shapes," while also writing that mesomorphy could be used to predict sporting ability.
Finally, we have the endomorphs: they too would excel in weightlifting, powerlifting, and strongman, and on average they do better, since they have different levers (i.e., shorter appendages, so they can move more weight through a shorter range of motion in comparison to those with longer limbs, like ectos).
Thus, different somatotypes excel in different sports. Different races and ethnies have differing somatotypes (Dutton and Lynn, 2015), so these different bodies that the races have, on average, are part of the cause of differences in sporting ability. That somatotyping began as a pseudoscientific endeavor 70 years ago does not mean that it has no use in today's world—it clearly does, given the sheer number of papers relating differences in sporting performance to somatotype. For example, blacks have thinner skin folds (Vickery, Cureton, and Collins, 1988; Wagner and Heyward, 2000), which is due to their somatotype, which is in turn due to the climate their ancestors evolved in.
Somatotyping can show us the anthropometric reasons for how and why certain individuals, ethnies, and races far-and-away dominate certain sporting events. It is completely irrelevant that somatotyping began as a psychological pseudoscience (what isn't in psychology, am I right?). Understanding anthropometric differences between individuals and groups will help us better understand the evolution of these somas, along with how and why these somas lead to increased sporting performance in certain domains. Somatotyping has absolutely nothing to do with "intelligence" nor with how morally upstanding one is. I would claim that somatotype does have an effect on one's perception of masculinity, and thus more masculine people/races would tend to be more mesomorphic, which would explain what Hilliard (2012) discussed when talking about somatotyping and the attempts to correlate differing psychological tendencies to each type. | https://notpoliticallycorrect.me/tag/physiology/
One of the most common and yet most dreaded injuries a soccer player can endure is a torn ACL (anterior cruciate ligament). The ACL is a ligament in the center of the knee joint that prevents the tibia (lower leg bone) from sliding anteriorly, or forward, on the femur (upper leg bone).
This week we will look at common causes of and reasons for ACL tears, BUT stay tuned for next week, where we discuss more specifically HOW we can reduce these injuries, and then in week 3, what to expect if you do need to have your ACL repaired and what the post-repair rehab and return to sport should look like.
What's worse than tearing your ACL alone is sustaining the 'terrible triad': tearing the ACL, the medial meniscus, and the MCL (medial collateral ligament) all together.
We are going to go through a brief overview of ACL injury statistics and some of their mechanisms and risk factors, and then, most importantly, discuss what injury reduction programs should look like and include.
Statistics for ACL Injuries
Unfortunately, female athletes competing in sports that include jumping and cutting demonstrate a 4-10x higher incidence of knee injury than do male athletes in the same sports.
Mechanisms of ACL Injuries: Contact vs Non-Contact Injuries
Two-thirds of all ACL injuries are non-contact injuries, and there are three major ways these injuries occur:
1. Planting and Cutting
2. Straight knee landing
3. One-step stop landing with a hyperextended knee
All non-contact injuries are correlated with knee torsion, deceleration, and rapid changes in direction. These are the types of ACL injuries that can be reduced via proper training programs.
Only one-third of ACL injuries are from a contact injury.
Risk factors that may CONTRIBUTE TO ACL tears:
- EXTRINSIC FACTORS (happening outside of our body):
- Meteorological/weather conditions — think playing on wet surfaces (Orchard et al). We can't necessarily avoid this, but we can be more aware of our body positioning and angles.
- Type of surface — there has been some research showing that turf fields have a higher risk of injury (Myers MC, Barnhill BS).
- Type of footwear — not wearing the right type of footwear for the surface you are playing on (Lambson et al).
- INTRINSIC FACTORS (factors within the body — the physical aspects of the athlete's body that can contribute to injury):
For the most part, we can control or influence the first 5 intrinsic factors with the implementation of proper strengthening and neuromuscular control programs.
- Q angle- (Shambaugh et al)
- Knee valgus- (Ford et al; Hewett et al) Over 22.5 Nm indicates high risk (Hewett et al)
- Foot pronation- (Allan and Glascoe; Woodford-Rogers et al)
- Excessive hip adduction due to weak hip stabilizers- (C. Powers)
- Body mass index- (Brown et al; Knapik et al)
1. Q Angle: The Q angle is measured by creating two intersecting lines: one from the center of the patella (kneecap) to the anterior superior iliac spine of the pelvis; the other from the patella to the tibial tubercle. There is research to support that the larger the Q angle, the more prone an athlete might be to a knee injury. However, we would argue that this athlete just needs to be more diligent about following an ACL injury reduction program. (A rough computational sketch of this geometry follows below.)
2. Knee Valgus/Foot Pronation and Hip Adduction: it is very common to see these 3 factors occur together.
Knee control from the hip stabilizers, posterior chain muscles and foot intrinsics are arguably the most important factors in the reduction of ACL injuries.
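As a rough computational sketch of the Q-angle geometry described in item 1, the snippet below computes the angle from three frontal-plane landmark coordinates (ASIS, patella center, tibial tubercle). The coordinate values here are hypothetical, and the flat 2D treatment is a simplifying assumption — clinically, the Q angle is measured with a goniometer on the patient, not from code.

```python
import math

def q_angle(asis, patella, tubercle):
    """Q angle (degrees): deviation between the ASIS-to-patella line
    (extended) and the patella-to-tibial-tubercle line."""
    v1 = (asis[0] - patella[0], asis[1] - patella[1])          # patella -> ASIS
    v2 = (tubercle[0] - patella[0], tubercle[1] - patella[1])  # patella -> tubercle
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_theta = dot / (math.hypot(*v1) * math.hypot(*v2))
    # The two lines would be collinear (180 degrees apart) at a Q angle of 0,
    # so the Q angle is the deviation from a straight line.
    return 180.0 - math.degrees(math.acos(cos_theta))

# Hypothetical landmarks in cm (x = lateral offset, y = height):
print(round(q_angle(asis=(4.0, 40.0), patella=(0.0, 0.0),
                    tubercle=(1.0, -8.0)), 1))  # ~12.8 degrees
```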
Additional Risk Factors
- Notch Size/ACL Size (Souryal and Freeman; Shelbourne and Kerr)
- Hormones (Shultz et al; Wojtys et al; Slauterbeck et al)
- Altered muscle activation patterns — quadriceps-dominant contraction (Huston and Wojtys; Malinzak et al)
- Inadequate muscle stiffness (Kibler and Livingston; Granata et al)
- Abnormal loading of the knee due to unsuccessful dynamic postural adjustments (Chris Powers; Griffin et al.)
Neuromuscular Performance Characteristics in Elite Female Athletes
Huston et al., The American Journal of Sports Medicine, Vol. 24: 427-434
- The purpose of the research was to identify possible predisposing neuromuscular factors for knee injuries in female athletes. These factors include anterior knee laxity, lower extremity muscle strength, endurance, muscle reaction time, and muscle recruitment order in response to anterior tibial translation.
ACL RESEARCH RESULTS (Huston et al.)
- Female athletes demonstrated more anterior tibial laxity than their male counterparts and significantly less muscle strength and endurance
- Compared with the male athletes the female athletes took significantly longer to generate maximum hamstring muscle torque during isokinetic testing
- Female athletes relied more on their quadriceps muscles in response to anterior tibial translation; the three other test groups relied more on their hamstring muscles for initial knee stabilization
- So what does this mean?
STAY TUNED FOR NEXT WEEK'S ACL INJURIES – HOW TO DECREASE RISK! | https://www.soccernation.com/acl-injuries-the-good-the-bad-the-ugly-a-preventative-approach-part-1/
Aristotle's Metaphysics
Metafisica de Aristoteles by Aristoteles
"It is the mark of an educated mind to be able to entertain a thought without accepting it."
Metaphysics is a branch of philosophy that studies the ultimate structure and constitution of reality, of that which is real, insofar as it is real. The term, which means literally “what comes after physics,” was used to refer to the treatise by Aristotle on what he himself called “first philosophy.”
Plato, in his theory of forms, separates the sensible world (appearances) from the intelligible world (ideas); for Plato, the intelligible world is the only reality, the foundation of all truth.
Aristotle, by contrast, grounded his philosophy in the observable physical world. He rejected Plato's transcendentalism (his notion that there is a higher reality that is only graspable by the mind).
In Plato's theory, material objects are changeable and not real in themselves; rather, they correspond to an ideal, eternal, and immutable Form by a common name, and this Form can be perceived only by the intellect. Thus a thing perceived to be beautiful in this world is in fact an imperfect manifestation of the Form of Beauty. Aristotle's arguments against this theory were numerous. Ultimately he rejected Plato's ideas as poetic but empty language; as a scientist and empiricist he preferred to focus on the reality of the material world.
Substance is a unique category: it is basic. For Aristotle, a substance is a particular thing and its properties. The substance is the matter and the secondary categories or properties are form. A substance consists of matter and form. Form is not a separable realm as it was for Plato; it must exist with matter.
While Plato holds that the more abstract Forms are the most real, Aristotle thinks that the more concrete things are most real.
Whereas Plato's philosophy is integrally positioned around his understanding of the heavenly Forms, Aristotle's Metaphysics and other works build on bottom-level truths that lead upward to larger truths.
Deduction refers to a logical system that draws conclusions from higher truths about the way things are or are not. Induction, its complementary opposite, refers to a logical system that draws its conclusions via extrapolation from lower truths. Deduction relies on abstract truths trickling down into arguments, whereas induction constructs arguments from the ground up.
In the Metaphysics, Aristotle creates arguments about the way things are true that depend heavily on observable or demonstrable conditions in the natural world.
Essentially, the difference can be understood this way. Plato looks up into heaven to find truth. Aristotle looks around and uses logic to find truth.
Arthur Madigan presents a clear, accurate new translation of the third book (Beta) of Aristotle's Metaphysics, together with two related chapters from the eleventh book (Kappa). Madigan's accompanying commentary gives detailed guidance to these texts, in which Aristotle sets out what he takes to be the main problems of metaphysics or 'first philosophy' and assesses possible solutions to them.
An edition of this book was published by Penguin Australia. | https://www.librarything.com/work/170115/184730837 |
Plato's theory of art in The Republic claims that art is nothing more than a copy of a copy of an ideal, thrice removed. Using a couch as an example, Plato believed that the true artist was god, who then inspired the carpenter, who then inspired the painter: "thus we have three forms of…" Three concepts (and their derivatives): Imitation → Invention → Innovation. Certainly, a lot has been written on imitation (literary theory and art theory), as well as invention (history, sociology, management and economics of technology).
Imitation held its position of strength in the theory of art for at least three centuries. However, over this period the evolution of the theory did not manifest any uniformity, and different meanings were ascribed to it in the contexts of the visual arts. Aristotle differs with Plato on the pragmatic value of poetry. Plato, as a dualist, divides reality into two worlds—the world of ideas and the world of senses. The world of ideas has eternal and immutable patterns, spiritual and abstract in their nature, and all things of the sensory world are fashioned after it and are an imitation of it.
Imitation required a continuity between art and reality, and prohibited works that might ultimately challenge the security of the viewer’s perception of the world. | http://thirdechelonpi.com/new-south-wales/imitation-theory-of-art-pdf.php |
Resource Center: News & Publications
Artificial Intelligence and Disappearing Jobs
by Donna Colfer
Artificial intelligence (AI) will drive job displacement in our immediate future. While there are benefits to adopting new technological changes for the economy and society, those changes have the potential to disrupt the livelihoods of millions of Americans.
Andrew Yang is the Founder and CEO of Venture for America, a fellowship program that places top college graduates in emerging start-ups in U.S. cities. In a recent article he said, “Stephen Hawking [believes] ‘we are at the most dangerous moment in the development of humanity’ and that the ‘rise of artificial intelligence is likely to extend job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.’” Yang added, “The White House published a report … that reinforced this view.1 Some of the headline stats:
83% of the jobs where people make less than $20/hour will be subject to automation or replacement.
Between 9% and 47% of jobs are in danger of being made irrelevant due to technological change, with the worst threats falling among the less educated.
Between 2.2 and 3.1 million car, bus, and truck driving jobs in the U.S. will be eliminated by the advent of self-driving vehicles.”
You might be thinking this won't happen for several decades, but researchers estimate that this scale of job displacement will take place over the next 10–15 years. This wouldn't be the first time major technological transformation affected our economy and job prospects. For example, in the nineteenth century, highly skilled artisans saw their livelihoods threatened by the rise of mass production technologies. Many skilled crafts were replaced by the combination of machines and lower-skilled labor.
In the late twentieth century, the onset of computers and the lightning-fast acceptance of the internet raised productivity of higher-skilled workers exponentially. Easily programmable tasks—such as switchboard operations, filing, travel booking, and manufacturing applications—were particularly vulnerable to replacement by new technologies. Over this period of time, we saw increased productivity in abstract thinking, creative tasks, and problem-solving which were partially responsible for substantial growth in jobs employing people with those traits.
The economy has repeatedly proven capable of handling this scale and speed of change, although the impact on individuals depends on how rapidly changes occur and how concentrated the losses are in specific occupations.
The White House report mentioned above advocates these strategies to prepare new workers to enter the workforce, help workers who lose jobs, and combat inequality:
Strategy #1: Invest in and develop AI for its benefits. With care to optimize its development responsibly, AI will make important, positive contributions to productivity growth, and advances in AI technology hold incredible potential to help the United States stay on the cutting edge of innovation.
Strategy #2: Educate and train Americans for jobs of the future. As AI changes the nature of work and the skills demanded by the labor market, American workers must be prepared with education and training for continuous success. This starts with providing all children with access to high-quality early education so all families can prepare students for continuing education.
Strategy #3: Aid workers in the transition, and empower workers to ensure broadly shared growth. Policymakers should ensure that workers and job seekers are able to pursue the job opportunities for which they're qualified, and that they receive appropriate, rising wages.
The Council of Economic Advisors (CEA) has identified four categories of jobs that might experience AI-driven growth in the future. Employment will grow where humans engage with existing AI technologies, develop new AI technologies, supervise AI technologies in practice, and facilitate societal shifts that accompany new AI technologies. Employment requiring manual dexterity, creativity, social interactions and intelligence, and general knowledge will most likely thrive.
Workers now earning less than $20 an hour who don’t have a college degree but want future job security need to advocate for themselves. They should learn new skills that increase earning potential. Contact a career counselor or a financial counselor for guidance. Use computer and library resources to stay informed about job markets and trends to avoid being caught off guard. These actions will keep changes in hiring trends uppermost in your mind as opportunities grow in the AI economy. We can’t stop these inevitable changes, but can prepare to meet the demands of a changing workforce
Donna Colfer is an Accredited Financial Counselor, Certified Money Coach, speaker, and writer based in the San Francisco Bay Area. She combines practical financial guidance, sound psychological principles and universal spiritual beliefs to guide her clients to a more conscious awareness of their limiting behaviors relative to money. She writes a monthly column in the Kenwood Press called “Understanding Your Relationship with Money.” In addition to her financial background she has been a Minister and Spiritual Counselor since 1996. She blends these two areas in a professional and compassionate way that is truly powerful and effective in her work with individuals, couples, and groups. Visit her website at www.buildingwealthfromwithin.com or email her at [email protected]
| |
Researchers estimated that air pollution was responsible for about 10,000 to 30,000 deaths in Delhi every year, mostly from heart attacks and strokes.
Key Highlights
Not many people are aware that exposure to air pollution increases the risk of stroke
The association between stroke and air pollution was not well understood 30 years back
There are no two opinions about the fact that air pollution is causing serious damage to the health of Indians
A report analyzing global air quality prepared by the US-based Health Effects Institute has recently categorized air pollution as the largest risk factor for death in Indians. At 83.2 μg/cubic metre, India faces the highest per capita pollution exposure in the world. While the association between air pollution and increased risk of cancers, respiratory illnesses and cardiovascular diseases is well known, not many people are aware that exposure to toxic air pollutants also increases the risk of stroke.
The burden of stroke has significantly increased in India over the years. Ischemic stroke is today the third leading cause of death and one of the leading causes of disability. A large ageing population, increasing incidence of diabetes and hypertension, lifestyle changes and environmental factors have contributed to increasing stroke incidence in India. However, lack of awareness, absence of preventive strategies, shortage of neurologists and poor access to rehabilitation make strokes difficult to survive. Disability and paralysis are a debilitating after-effect of strokes and impact a large number of people. Air pollution continues to be a major yet understated contributor to this rising burden of premature death and disability.
Air Pollution and Stroke: What’s the link?
The association between stroke and air pollution was not well understood 30 years back. However, in recent decades a series of epidemiological studies have shown a clear link. A study undertaken over 7 years in Seoul in the 1990s concluded that the effects of air pollutants on ischemic stroke mortality were statistically significant. Suspended particulates, sulfur dioxide, nitrogen dioxide and carbon monoxide were found to be the main culprits in inducing an acute pathogenetic process in the cerebrovascular system.
Another study published in the French neurological journal Revue Neurologique concluded that air pollution, of which small particulate matter is the most toxic component, contributes to about one-third of the global burden of stroke. This study identified air pollution as a new modifiable neurovascular risk factor, requiring public health intervention. A paper published in the Journal of Stroke cited substantial evidence linking both short- and long-term air pollution with cardiovascular diseases, including stroke.
Another study conducted by international researchers estimated that air pollution was responsible for about 10,000 to 30,000 deaths in Delhi every year, mostly from heart attacks and strokes.
What can be done to reduce air pollution and protect health?
There are no two opinions about the fact that air pollution is causing serious damage to the health of Indians. A report by the Energy Policy Institute at the University of Chicago (EPIC) said that India’s dreadful particulate pollution shortens the average Indian’s life expectancy by more than four years. For a resident of Delhi, the average gain in life expectancy could be up to 10.2 years if the WHO guidelines on air quality are met. But, it is not just Delhi! At least 14 Indian cities find themselves in the notorious list of the world’s most polluted cities. According to the Global Burden of Disease study, the annual average for fine particulate PM 2.5 concentrations in India increased by 25 per cent between 1998 and 2015. Worrisomely, over 660 million Indians live in areas that flout the standards of safe exposure to PM 2.5.
That urgent measures are needed to curtail the concentrations of harmful pollutants in the air is a no-brainer. The government has launched a long-term policy push to shift India's vehicular fleet to electric vehicles. However, that shift is likely to take several years, during which the toxic effects of hazardous pollutants will continue to impose a heavy burden on India's overall health. We need air pollution to be recognised more widely as one of the most important modifiable risk factors for the prevention and management of cardiovascular disease and stroke. Governments at different levels need to work to bring about behavioural changes by promoting cycling, walking and public transport, as well as cleaner fuels, to reduce harmful tailpipe emissions from personal vehicles.
At the same time, individuals must also exercise caution and reduce their exposure to harmful toxicants in the air. Limiting time spent outdoors during highly polluted periods, avoiding outdoor exercise in the polluted winter air, wearing masks when stepping out and reducing usage of personal motorised vehicles are some such interventions.
Also, we must understand that during lockdown there was no air pollution or water pollution, which implies that it is purely man-made and it can be controlled by humans by just being more responsible towards the environment.
The importance of stroke awareness
Unfortunately, despite rising disease incidence, awareness about the disease is low. This results in a large number of people failing to get medical attention in time to avert death or disability. Educating people about the golden hour is crucial in addressing delays in arrival. At the same time, training primary healthcare centres in stroke identification and management is also crucial in saving lives in areas where accessibility to neurologists is a problem. Easy and affordable access to rehabilitation centres is another critical gap in stroke management that needs to be filled, particularly in rural areas where rehabilitation and physiotherapy remain highly inaccessible. | |
Prominent Analyst Says RT and TeleSUR Broadcast Halt in Argentina is Psychological Warfare
Buenos Aires, June 11 (RHC)-- Russia Today contributor Adrian Salbuchi said that the suspension of the broadcasts of RT Spanish and TeleSUR in Argentina is “psychological warfare” and part of the West’s all-out onslaught on Latin America.
Salbuchi is an international political analyst, researcher, consultant and author of several books on geopolitics in both, Spanish and English.
He said that the true reason behind the Macri administration's decision to take the two TV channels off the air is that RT and TeleSUR both provide much-needed leftist views and alternative viewpoints on international politics, international finance, and international economic treaties, for instance the Trans-Pacific Partnership.
He added: "Regrettably, the new government of President Mauricio Macri has completely aligned Argentina to the interests of the United States, the European Union, the United Kingdom and also Israel."
The prominent researcher further said: "What we are seeing, at least in Argentina, is a case of censorship where the government does not want an alternative viewpoint, which is different to CNN and Fox News and the New York Times and the Daily Telegraph, to be heard by the Argentinian population, so that we may have a clearer view of what is happening in the world and in Argentina."
Adrian Salbuchi writes op-ed pieces for RT Spanish as well as RT English, and is a regular guest on alternative media radio and TV shows in the US, Europe and Latin America. | http://www.radiohc.cu/en/noticias/internacionales/96546-prominent-analyst-says-rt-and-telesur-broadcasts-halt-in-argentina-is-psychological-warfare |
# Stochastic forensics
Stochastic forensics is a method to forensically reconstruct digital activity lacking artifacts, by analyzing emergent properties resulting from the stochastic nature of modern computers. Unlike traditional computer forensics, which relies on digital artifacts, stochastic forensics does not require artifacts and can therefore recreate activity which would otherwise be invisible. Its chief application is the investigation of insider data theft.
## History
Stochastic forensics was invented in 2010 by computer scientist Jonathan Grier to detect and investigate insider data theft. Insider data theft has been notoriously difficult to investigate using traditional methods, since it does not create any artifacts (such as changes to the file attributes or Windows Registry). Consequently, industry demanded a new investigative technique.
Since its invention, stochastic forensics has been used in real-world investigations of insider data theft, been the subject of academic research, and met with industry demand for tools and training.
### Origins in statistical mechanics
Stochastic forensics is inspired by the statistical mechanics method used in physics. Classical Newtonian mechanics calculates the exact position and momentum of every particle in a system. This works well for systems, such as the Solar System, which consist of a small number of objects. However, it cannot be used to study things like a gas, which have intractably large numbers of molecules. Statistical mechanics, however, doesn't attempt to track properties of individual particles, but only the properties which emerge statistically. Hence, it can analyze complex systems without needing to know the exact position of their individual particles.
We can't predict how any individual molecule will move and shake; but by accepting that randomness and describing it mathematically, we can use the laws of statistics to accurately predict the gas's overall behavior. Physics underwent such a paradigm shift in the late 1800s... Could digital forensics be in need of such a paradigm shift as well?
— Jonathan Grier, Investigating Data Theft With Stochastic Forensics, Digital Forensics Magazine, May 2012
Likewise, modern day computer systems, which can have over $2^{8^{10^{12}}}$ states, are too complex to be completely analyzed. Therefore, stochastic forensics views computers as a stochastic process, which, although unpredictable, has well defined probabilistic properties. By analyzing these properties statistically, stochastic forensics can reconstruct activity that took place, even if the activity did not create any artifacts.
## Use in investigating insider data theft
Stochastic forensics chief application is detecting and investigating insider data theft. Insider data theft is often done by someone who is technically authorized to access the data, and who uses it regularly as part of their job. It does not create artifacts or change the file attributes or Windows Registry. Consequently, unlike external computer attacks, which, by their nature, leave traces of the attack, insider data theft is practically invisible.
However, the statistical distribution of filesystem metadata is affected by such large-scale copying. By analyzing this distribution, stochastic forensics is able to identify and examine such data theft. Typical filesystems have a heavy-tailed distribution of file access. Copying in bulk disturbs this pattern, and is consequently detectable.
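To make the idea concrete, here is a minimal, hypothetical Python sketch of the kind of analysis described above: bucket file access timestamps and flag time windows where far more files were touched than the heavy-tailed baseline would predict. This is an illustration of the general approach, not Grier's actual method, and it assumes the filesystem records access times (atime)—which, as noted under Criticism below, many systems disable by default. The directory path and thresholds are placeholders.

```python
import os
import time
from collections import Counter

def access_time_histogram(root, bucket_seconds=3600):
    """Count file accesses (atime) per fixed time window under root."""
    buckets = Counter()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            try:
                atime = os.stat(os.path.join(dirpath, name)).st_atime
            except OSError:
                continue  # unreadable entry; skip it
            buckets[int(atime // bucket_seconds)] += 1
    return buckets

def flag_bulk_access(buckets, bucket_seconds=3600, spike_factor=10):
    """Flag windows whose access count dwarfs the median window.

    Normal use yields a heavy-tailed spread of access times; a bulk
    copy concentrates thousands of accesses into one narrow window.
    """
    counts = sorted(buckets.values())
    if not counts:
        return []
    median = counts[len(counts) // 2]
    return [(bucket * bucket_seconds, n)
            for bucket, n in buckets.items()
            if n > spike_factor * max(median, 1)]

if __name__ == "__main__":
    hist = access_time_histogram("/srv/shared")  # hypothetical file share
    for window_start, n in flag_bulk_access(hist):
        print(time.ctime(window_start), "-", n, "files accessed")
```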
Drawing on this, stochastic forensics has been used to successfully investigate insider data theft where other techniques have failed. Typically, after stochastic forensics has identified the data theft, follow-up using traditional forensic techniques is required.
## Criticism
Stochastic forensics has been criticized as only providing evidence and indications of data theft, and not concrete proof. Indeed, it requires a practitioner to "think like Sherlock, not Aristotle." Certain authorized activities besides data theft may cause similar disturbances in statistical distributions.
Furthermore, many operating systems do not track access timestamps by default, making stochastic forensics not directly applicable. Research is underway in applying stochastic forensics to these operating systems as well as databases.
Additionally, in its current state, stochastic forensics requires a trained forensic analyst to apply and evaluate. There have been calls for development of tools to automate stochastic forensics by Guidance Software and others. | https://en.wikipedia.org/wiki/Stochastic_forensics |
—bookless, adj. —booklike, adj./book/, n.1. a written or printed work of fiction or nonfiction, usually on sheets of paper fastened or bound together within covers.2. a number of sheets of blank or ruled paper bound together for writing, recording business transactions, etc.3. a division of a literary work, esp. one of the larger divisions.4. the Book, the Bible.5. Music. the text or libretto of an opera, operetta, or musical.6. books. See book of account.7. Jazz. the total repertoire of a band.8. a script or story for a play.9. a record of bets, as on a horse race.10. Cards. the number of basic tricks or cards that must be taken before any trick or card counts in the score.11. a set or packet of tickets, checks, stamps, matches, etc., bound together like a book.12. anything that serves for the recording of facts or events: The petrified tree was a book of Nature.13. Sports. a collection of facts and information about the usual playing habits, weaknesses, methods, etc., of an opposing team or player, esp. in baseball: The White Sox book on Mickey Mantle cautioned pitchers to keep the ball fast and high.14. Stock Exchange.a. the customers served by each registered representative in a brokerage house.b. a loose-leaf binder kept by a specialist to record orders to buy and sell stock at specified prices.15. a pile or package of leaves, as of tobacco.16. Mineral. a thick block or crystal of mica.17. a magazine: used esp. in magazine publishing.18. See book value.19. Slang. bookmaker (def. 1).20. bring to book, to call to account; bring to justice: Someday he will be brought to book for his misdeeds.21. by the book, according to the correct or established form; in the usual manner: an unimaginative individual who does everything by the book.22. close the books, to balance accounts at the end of an accounting period; settle accounts.24. in one's bad books, out of favor; disliked by someone: He's in the boss's bad books.25. in one's book, in one's personal judgment or opinion: In my book, he's not to be trusted.26. in one's good books, in favor; liked by someone.27. like a book, completely; thoroughly: She knew the area like a book.28. make book,a. to accept or place the bets of others, as on horse races, esp. as a business.b. to wager; bet: You can make book on it that he won't arrive in time.29. off the books, done or performed for cash or without keeping full business records: esp. as a way to avoid paying income tax, employment benefits, etc.: Much of his work as a night watchman is done off the books.30. one for the book or books, a noteworthy incident; something extraordinary: The daring rescue was one for the book.31. on the books, entered in a list or record: He claims to have graduated from Harvard, but his name is not on the books.32. the book,a. a set of rules, conventions, or standards: The solution was not according to the book but it served the purpose.b. the telephone book: I've looked him up, but he's not in the book.33. throw the book at, Informal.a. to sentence (an offender, lawbreaker, etc.) to the maximum penalties for all charges against that person.b. to punish or chide severely.34. without book,a. from memory.b. without authority: to punish without book.35. write the book, to be the prototype, originator, leader, etc., of: So far as investment banking is concerned, they wrote the book.v.t.36. to enter in a book or list; record; register.37. to reserve or make a reservation for (a hotel room, passage on a ship, etc.): We booked a table at our favorite restaurant.38. 
to register or list (a person) for a place, transportation, appointment, etc.: The travel agent booked us for next week's cruise.39. to engage for one or more performances.40. to enter an official charge against (an arrested suspect) on a police register.41. to act as a bookmaker for (a bettor, bet, or sum of money): The Philadelphia syndicate books 25 million dollars a year on horse racing.v.i.42. to register one's name.43. to engage a place, services, etc.44. Slang.a. to study hard, as a student before an exam: He left the party early to book.b. to leave; depart: I'm bored with this party, let's book.c. to work as a bookmaker: He started a restaurant with money he got from booking.45. book in, to sign in, as at a job.46. book out, to sign out, as at a job.47. book up, to sell out in advance: The hotel is booked up for the Christmas holidays.adj.48. of or pertaining to a book or books: the book department; a book salesman.49. derived or learned from or based on books: a book knowledge of sailing.50. shown by a book of account: The firm's book profit was $53,680.[bef. 900; ME, OE boc; c. D boek, ON bok, G Buch; akin to Goth boka letter (of the alphabet) and not of known relation to BEECH, as is often assumed]Syn. 39. reserve, schedule, bill, slate, program.Ant. 39. cancel.
* * *

I. Written (or printed) message of considerable length, meant for circulation and recorded on any of various materials that are durable and light enough to be easily portable. The papyrus roll of ancient Egypt is more nearly the direct ancestor of the modern book than is the clay tablet; examples of both date to c. 3000 BC. Somewhat later, the Chinese independently created an extensive scholarship based on books, many made of wood or bamboo strips bound with cords. Lampblack ink was introduced in China c. AD 400 and printing from wooden blocks in the 6th century. The Greeks adopted the papyrus roll and passed it on to the Romans. The parchment or vellum codex superseded the papyrus roll by AD 400. Medieval parchment or vellum leaves were prepared from the skins of animals. By the 15th century, paper manuscripts were common. Printing spread rapidly in the late 15th century. Subsequent technical achievements, such as the development of offset printing, improved many aspects of book culture. In the late 1990s, downloadable electronic books became available over the Internet.

II. (as used in expressions): Book of Changes; Godey's Lady's Book; Mormon, Book of; Revelation, Book of; Book of Brightness; Book of Splendour; Book of the Law; Mendele the Book Peddler
* * *

published work of literature or scholarship; the term has been defined by UNESCO for statistical purposes as a "non-periodical printed publication of at least 49 pages excluding covers," but no strict definition satisfactorily covers the variety of publications so identified.

Although the form, content, and provisions for making books have varied widely during their long history, some constant characteristics may be identified. The most obvious is that a book is designed to serve as an instrument of communication—the purpose of such diverse forms as the Babylonian clay tablet, the Egyptian papyrus roll, the medieval vellum or parchment codex, the printed paper codex (most familiar in modern times), microfilm, and various other media and combinations. The second characteristic of the book is its use of writing or some other system of visual symbols (such as pictures or musical notation) to convey meaning. A third distinguishing feature is publication for tangible circulation. A temple column with a message carved on it is not a book, nor is a sign or placard, which, though it may be easy enough to transport, is made to attract the eye of the passerby from a fixed location. Nor are private documents considered books. A book may be defined, therefore, as a written (or printed) message of considerable length, meant for public circulation and recorded on materials that are light yet durable enough to afford comparatively easy portability. Its primary purpose is to announce, expound, preserve, and transmit knowledge and information between people, depending on the twin faculties of portability and permanence. Books have attended the preservation and dissemination of knowledge in every literate society.

The papyrus roll of ancient Egypt is more nearly the direct ancestor of the modern book than is the clay tablet of the ancient Sumerians, Babylonians, Assyrians, and Hittites; examples of both date from about 3000 BC.

The Chinese independently created an extensive scholarship based on books, though not so early as the Sumerians and the Egyptians. Primitive Chinese books were made of wood or bamboo strips bound together with cords. The emperor Shih Huang Ti (Shihuangdi) attempted to blot out publishing by burning books in 213 BC, but the tradition of book scholarship was nurtured under the Han dynasty (206 BC to AD 220). The survival of Chinese texts was assured by continuous copying. In AD 175, Confucian texts began to be carved into stone tablets and preserved by rubbings. Lampblack ink was introduced in China in AD 400 and printing from wooden blocks in the 6th century.

The Greeks adopted the papyrus roll and passed it on to the Romans. The vellum or parchment codex, which had superseded the roll by AD 400, was a revolutionary change in the form of the book. The codex introduced several advantages: a series of pages could be opened to any point in the text, both sides of the leaf could carry the message, and longer texts could be bound in a single volume. The medieval vellum or parchment leaves were prepared from the skins of animals. By the 15th century paper manuscripts were common. During the Middle Ages, monasteries characteristically had libraries and scriptoria, places in which scribes copied books.
The manuscript books of the Middle Ages, the models for the first printed books, were affected by the rise of Humanism and the growing interest in vernacular languages in the 14th and 15th centuries.

The spread of printing was rapid in the second half of the 15th century; the printed books of that period are known as incunabula. The book made possible a revolution in thought and scholarship that became evident by the 16th century: the sources lay in the capacity of the press to multiply copies, to complete editions, and to reproduce a uniform graphic design along new conventional patterns that made the printed volume differ in appearance from the handwritten book. Other aspects of the printing revolution—cultural change associated with concentration on visual communication as contrasted to the oral modes of earlier times—have been emphasized by Marshall McLuhan.

In the 17th century books were generally inferior in appearance to the best examples of the art of the book in the 16th. There was a great expansion in the reading public in the 17th and 18th centuries in the West, in part because of the increasing literacy of women. Type designs were advanced. The lithographic process of printing illustrations, discovered at the end of the 18th century, was significant because it became the basis for offset printing.

In the 19th century the mechanization of printing provided the means for meeting the increased demand for books in industrialized societies. William Morris, in an effort to renew a spirit of craftsmanship, started the private press movement late in the 19th century. In the 20th century the book maintained a role of cultural ascendancy, although challenged by new media for dissemination of knowledge and its storage and retrieval. The paperbound format proved successful not only for the mass marketing of books but also from the 1950s for books of less general appeal. After World War II, an increase in use of colour illustration, particularly in children's books and textbooks, was an obvious trend, facilitated by the development of improved high-speed, offset printing.
* * *
Universalium. 2010. | https://universalium.en-academic.com/84005/book |
Doctor insights on:
Depression Irritability Mood Swings
1
Does social anxiety cause mood swings?
Anything distressing: Can contribute to feeling depressed, especially if you can't find a way to relieve the distress. Learn to manage the anxiety with relaxation training and cognitive therapy by talking to a mental health professional.
Depression (Definition)
Depression is a mood disorder that can affect behavior and emotions. Symptoms of depression include feeling down most of the time, losing interest in previously enjoyable activities, increase or decrease in appetite or weight, sleeping more or less, becoming easily agitated or lethargic, feeling worthless, feeling guilty, having difficulty concentrating, and thinking more about death and dying. Depression can sometimes result in suicidal thoughts and plans. In this case, emergent evaluation is needed.
2
Is manic depression just bad mood swings?
No: Manic depression is a serious mental disorder which leads to more than mood swings. Bipolar Disorder is a mood disorder that can present with possible symptoms of mania, hypomania, depression, mixed states and normal states. In mania or hypomania, one may show aggression, agitation, decreased judgment & impulse control, distractibility, rapid thoughts & speech, increased libido, decreased sleep, spending sprees, and high-risk behavior.
3
Does bipolar include manic kinds of mood swings?
Yes: Yes.
4
Mood swings. Depressed, hyperactive. Cloudy, busy, vivid thoughts & daydreams/dreams. Mild blackouts/forgetfulness. Angry outbursts. & more. Help?!
See below: Are these new or old symptoms? Have you started any new medication? Tried any new drugs? All these things could potentially have an effect on you. It's always a good idea to rule out anything medical going on. You might want to schedule an appointment with your primary care doctor for a complete workup and then take things from there. Good luck!
5
Can provera (medroxyprogesterone) cause major mood swings and irritability?
Possible: Possible.
6
Tiredness headaches mood swings dizziness depression anxiety and inattention. What causes all my symptoms?
See below: All the symptoms you describe are not uncommon with depressive disorders, although more details are needed to know for sure. I hope you are in psychotherapy working on the underlying emotional issues. I have seen many a client have their physical symptoms disappear as their depression lifted. Of course, a general physical workup with your MD is a good idea as well. Wishing you well!
7
What's manic depression?
Bipolar: Manic depression is a mood disorder that presents with extreme highs & lows, with at least a few weeks of manic phases (abnormally elevated mood & energy level) alternating with a few or more weeks of depressed phases (sad, guilty, hopeless/helpless, changes in sleep/appetite/energy level). Any phase can be severe enough to involve delusions or hallucinations.
8
Do bipolar mood swings have variations in each extreme mood?
Yes: Not sure exactly what you are asking, but there are certainly variations in degree of symptoms between individuals and between episodes of the same individual. One thing we do know is that the more episodes you have, the more frequently they occur and the more severe they become. The more stable the mood over time, the better the prognosis.
9
How are depressive mood and depressive disorder different?
Mood vs disorder: We all have a depressive mood from time to time, e.g., my goldfish dies and I feel sad. However, a major depressive disorder is a minimum of a two-week period of feeling low which is associated with things like changes in sleep pattern or appetite. It can affect self-esteem, motivation levels, and the ability to feel joy. One might feel helpless or hopeless or even have thoughts of suicide.
10. Are mood swings or depression part of PCOS?
Can be: Yes, women with PCOS can have depression and mood swings.
11. Mood swings, violent thoughts toward myself/others, easily irritated/agitated, restless/exhausted, racing thoughts, can't focus, depressed. Help?
Clearly: This is not something that can be handled online. Your medications are not helping enough, and you need to see a psychiatrist asap for a complete evaluation and treatment. If you are currently having suicidal thoughts or thoughts of violence towards others, please go to an emergency room for evaluation. Please get a friend or family member to drive you, or call 911.
12. Headache, bloated, high temp, mood swings, memory loss, fatigue, lack of appetite. Help??
High temp: If you are running a high fever and you have significant symptoms, it is a good idea to see a doctor in person for an examination and diagnosis. Once this is accomplished, your doctor can give you antipyretic and analgesic medicine to treat your fever and pain respectively. There may also be anti-infective treatment your physician will suggest. Remember to bring your medicines, if any, and to know your allergies.
13. Could hypothyroidism cause mood swings?
Down moods: Hypothyroidism causes people to be fatigued, lethargic and down.
14. Obsessive thoughts. Mood swings throughout the day. Extreme anxiety, depression. Obsessive actions. What is this?
In addition to the: medical advice already given, please see a mental health professional for evaluation for possible obsessive-compulsive disorder, especially if you are trying to ward off impending danger that you may not even be able to identify. Peace and good health.
15. Can Prozac (fluoxetine) cause manic depression?
Maybe: Prozac (fluoxetine) is an antidepressant. Many antidepressants, including Prozac, can cause patients who have bipolar disorder, or who are prone to symptoms of bipolar, to develop bipolar symptoms. The best way to avoid this is to work with a psychiatrist who can provide meds for both the depressive and the manic symptoms of bipolar moods.
16. Anxiety, forgetful, can't finish things, can't concentrate, mood swings, depression?
Anxious depression?: Or maybe a bipolar type of anxious depression? Is this a new problem? Is there loss of energy, drive and motivation? Are the mood swings random, or are they caused by stress you can identify? Severe anxiety alone can cause most of those symptoms, and concentration is generally not good in anxiety and depression. Talk to your physician or see a counselor for an evaluation. They can help you solve it.
17. I am experiencing emotional problems (quality: feelings of unreality, detachment), mood swings, hallucinations, impulsive or reckless behavior, t...
See your doctor: Please see your doctor as soon as possible. If you feel like harming yourself, go to the emergency room right now! The very best of wishes.
18. Are bipolar disorder and depression related?
Mood: They are related in that they are both mood disorders; in other words, both affect emotions or moods. Treatment, however, differs in terms of medication. Bipolar usually requires a mood stabilizer, whereas depression is usually treated with an antidepressant. Sometimes an antidepressant is added to a mood stabilizer, but it's uncommon for bipolar to be treated solely with an antidepressant.
19. What are mood disorders besides mania and depression?
Cyclothymia also: I agree with Drs. Fox and Ali, but want to add cyclothymia to the mood disorders. It causes emotional ups and downs, but not as extreme as in bipolar disorder type I or II. It is not very common but starts in the teen years, and stable periods last less than two months. It can bring about all kinds of difficulties if untreated.
20. Can smoking weed lead to mood swings and bipolar moods?
Weed: Very possible, more so if predisposed to mood changes.
Mood (Definition)
How a patient says they feel, or how someone feels internally, versus how someone's mood appears, which is called "affect." If you say your mood is "happy" and you appear to be so, we would say that your affect is congruent. If, however, you actually look really angry, we would say your mood is "happy" and your affect is incongruent.
Emotional Instability (Definition)
Emotional instability is a clinical finding in which a person's mood either varies significantly over time between various emotions or a person frequently experiences a strong emotion like ...
| https://www.healthtap.com/topics/depression-irritability-mood-swings
An armoured fighting vehicle (AFV) is a combat vehicle, protected by strong armour and armed with weapons; AFVs can be wheeled or tracked. A tank is an armoured fighting vehicle designed for front-line combat which combines operational mobility with tactical offensive and defensive capabilities. Firepower is normally provided by a large-calibre main gun in a rotating turret and secondary machine guns, while heavy armour and all-terrain mobility provide protection for the tank and its crew, allowing it to perform all primary tasks of the armoured troops on the battlefield.
Tanks in World War I were developed separately and simultaneously by Great Britain and France as a means to break the deadlock of trench warfare.
Quotations
"The finest tank in the world." - Field-Marshal Ewald von Kleist
"All you saw in your imagination was the muzzle of an 88 behind each leaf." - British tank commander Andrew Wilson
"You need five of your tanks to destroy a single German one, but you always have six." - A captured German tanker, to Allied soldiers
"The Tiger was the best tank and was particularly successful in heavy fighting." - German tank commander Oberst Franz Bäke
"If the tanks succeed, then victory follows." | http://image.absoluteastronomy.com/topics/Tank
A lesson in music theory to help you understand how chords fit together and how to make a song move forward.
This is for anyone who would like to know what exactly goes on between the chords you play, why certain chords are chosen over others in progressions, and how to use this knowledge to create chord progressions with the specific feeling you're after, whether that feeling is one of the song moving forward, staying still, getting more tense, or being very chilled. Bear in mind this is the foundation for harmonic forward motion; there are many other aspects.
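To make the idea concrete, here is a rough illustrative sketch in Python (an illustration of the concept, not taken from the lesson or video) that lists the seven diatonic chords of a major key alongside the function labels most commonly given to them; note that conventions differ, particularly over the iii chord:

```python
# A rough sketch of chord "functions": each diatonic chord in a major key
# leans toward one of three roles, and moving between those roles is what
# gives a progression its sense of motion, tension, or rest.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]
QUALITIES = ["", "m", "m", "", "", "m", "dim"]  # I ii iii IV V vi vii(dim)

# One common labelling convention; theorists differ on the iii chord.
FUNCTIONS = ["tonic (rest)", "subdominant (movement)", "tonic (rest)",
             "subdominant (movement)", "dominant (tension)",
             "tonic (rest)", "dominant (tension)"]

def diatonic_chords(key_index=0):
    """Yield the seven diatonic triads of a major key with function labels."""
    for degree in range(7):
        root = NOTES[(key_index + MAJOR_SCALE_STEPS[degree]) % 12]
        yield f"{root}{QUALITIES[degree]}: {FUNCTIONS[degree]}"

for chord in diatonic_chords(0):  # key of C major
    print(chord)
```

Read this way, a progression like C, F, G, C moves from rest through movement into tension and back to rest, which is one simple recipe for making a song feel like it is moving forward.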
If you have any questions please leave them in the comments, either here or on the video. I look forward to helping you understand this topic.
Video:
https://www.youtube.com/watch?v=Uk-JvWrOg8Q
Full post:
https://jimjamguitar.net/2019/02/19/how ... ginners-2/
How to Write a Song | Chord Functions Pt.1
| http://forums.guitarnoise.com/viewtopic.php?f=10&t=61903&p=488883
There are many companies which offer business culture training, including:
- country-specific training courses
- cross-cultural awareness training courses
- business etiquette and customs training courses
- one-to-one executive cultural coaching
- international communications skills training
- international presentation skills training
- international negotiation skills training.
This training can be delivered via face-to-face, videoconference, podcast or online workshops in both the UK and overseas locations.
We have a network of quality cultural training companies in the UK who offer intercultural training programmes to suit your business needs. | https://www.growglobal.com/product/business-culture-training/ |
Before her premature death in 2010, Edna Ullmann-Margalit produced a series of elegant and highly original essays, focused above all on rationality and its limits. Hers is the best philosophical work on invisible-hand explanations and presumptions. With Sidney Morgenbesser, she is responsible for the important distinction between choosing (deciding on the basis of reasons) and picking (as in flipping a coin). In her later years, she examined how rational-choice theory might be defeated when people make Big Decisions, where their values and even their character are on the line. Ullmann-Margalit also explored, with grace and sensitivity, the idea of considerateness and its role both in daily life and within the family. (This may be her finest work.) None of her essays shouts from the rooftops, but all of them leave large subjects richer than they were before -- and some of them create new subjects altogether. They have a timeless quality, and they also have a kind of synergy; at some point, they ought to be collected into a single volume.
In the meantime, Oxford University Press has just published the first paperback edition of Ullmann-Margalit’s The Emergence of Norms (1977), which has a strong claim to having spurred the last decades’ outpouring of work on that topic. Almost four decades later, the book repays careful reading, not least (I think) because of the discussion of norms of partiality, which has been much neglected, and which raises a number of unresolved problems of social theory.
Ullmann-Margalit sees her work as “an essay in speculative sociology” (p. 1), designed to understand both the rise and the function of social norms. She describes her thesis as this: “certain types of norms are possible solutions to problems posed by certain types of social interaction situations” (p. 1). Her claim is one of “rational reconstruction,” involving not historical evidence, or indeed anything at all empirical, but instead a plausible claim of how a practice or a phenomenon might have emerged. She treats three such situations as paradigmatic or “core,” involving prisoner’s dilemmas, coordination, and inequality. In all three situations, familiar social norms turn out to resolve the specified problems. In her view, norms typically have that effect insofar as they impose “a significant social pressure for conformity and against deviation,” alongside a “belief by the people concerned in their indispensability for the proper functioning of society,” and an expectation of clashes between the dictates of norms “on the one hand and personal interests and desires on the other” (p. 13).
With respect to prisoner’s dilemma situations, the central argument is straightforward. If people are facing such situations, they cannot easily produce a mutually beneficial state of affairs without “a norm, backed by appropriate sanctions” (p. 22). Suppose, for example, that the question is whether to pay one’s income tax, to vote in a general election, to keep a promise, or to cut through a neighbor’s well-tended lawn. In each case, a PD norm might turn out to be especially important, all the more so “the larger and the more indeterminate the class of participants, and the more frequent the occurrence of the dilemma among them” (p. 25).
In short, norms operate as stabilizing devices. Ullmann-Margalit understands the logically faulty but widespread generalization argument (“what if everyone did that?”) as a reflection of the psychological power and hence the utility of PD norms. She also emphasizes the important but underappreciated fact that it sometimes make sense for norms or law to keep people in PD situation; consider the antitrust laws (pp. 44-45).
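To see the core claim in miniature, consider an illustrative sketch with made-up payoff numbers (they are not Ullmann-Margalit's): in a one-shot prisoner's dilemma, mutual defection is the only equilibrium, but once a norm-backed sanction on defection exceeds the temptation gain, mutual cooperation becomes the equilibrium instead.

```python
# Illustrative one-shot prisoner's dilemma with made-up payoffs.
# Strategies: 0 = cooperate, 1 = defect; payoffs[(a, b)] = (row player, column player).
payoffs = {
    (0, 0): (3, 3),  # both cooperate
    (0, 1): (0, 5),  # row cooperates, column defects and takes the temptation payoff
    (1, 0): (5, 0),
    (1, 1): (1, 1),  # both defect: the dilemma's only equilibrium
}

def with_norm(p, sanction):
    """Model a sanction-backed norm by charging a social cost to any defector."""
    return {(a, b): (ra - sanction * a, cb - sanction * b)
            for (a, b), (ra, cb) in p.items()}

def equilibria(p):
    """Return strategy pairs where neither player gains by deviating unilaterally."""
    return [(a, b) for a in (0, 1) for b in (0, 1)
            if p[(a, b)][0] >= max(p[(x, b)][0] for x in (0, 1))
            and p[(a, b)][1] >= max(p[(a, y)][1] for y in (0, 1))]

print(equilibria(payoffs))                # [(1, 1)]: mutual defection
print(equilibria(with_norm(payoffs, 3)))  # [(0, 0)]: the norm stabilizes cooperation
```

This is only a toy rendering, of course, of her point that a norm "backed by appropriate sanctions" changes the payoff structure the parties face.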
Though Ullmann-Margalit devotes a lot of pages to coordination norms, they are in a sense much simpler, because the interests of the parties coincide. In the pure version, we both (or all) want to meet somewhere in New York City; the question is where. A coordination norm tells us to meet at Grand Central Station. In the non-pure version, our convergence of interests is not exactly perfect (maybe Penn Station is a bit easier for you), but it is close enough, because we care more about coordinating than we do about getting our way on exactly where.
Ullmann-Margalit urges that in recurring coordination problems, people tend to arrive at a successful solution, which then becomes a norm. Familiar examples include norms involving dress, the acceptance of legal tender (and hence, perhaps, the rise of money), driving on the left (or right), and etiquette. In novel coordination problems, bottom-up development of norms is not possible, and hence “a solution is likely to be dictated by a norm issued specifically for that purpose by some authority” (p. 83). In either case, she insists that a coordination norm is no mere regularity of behavior. It is “supported by social pressure” and in that sense “it even slightly changes the corresponding pay-off matrix” by making a “particular coordination equilibrium a somewhat more worthwhile outcome to be aimed at than it would otherwise have been” (p. 87). Here she understates; if a norm is supported by a lot of social pressure, it might do far more than “slightly” change people’s payoffs.
In an instructive brief discussion, Ullmann-Margalit urges that for the individual conformist, all decisions are, in essence, coordination problems, in the sense that the goal is “to meet the others in their choices” (p. 93). From that point of view, the conformist faces “a unilateral coordination problem” (p. 94). We can even imagine “a society composed entirely of conformists,” whose members think that conformity matters more than anything else, thus raising the question whether, in such a society, any decision at all might turn out to be a coordination problem. The thought experiment is interesting, because some groups are a lot like that. Ullmann-Margalit urges that if “it is common knowledge in this society that they are all avowed conformists who are, moreover, content to act that way,” then coordination problems are indeed pervasive.
From the standpoint of the contemporary reader, her most novel and challenging (and alas far from entirely satisfying) discussion involves norms of partiality. Ullmann-Margalit begins by assuming a status quo of inequality, “such that one party is more favourably placed than the other” (p. 134). As she sets up the situation,, the disfavorably placed party is quite aware of being disfavorably placed and wants to improve his position. She emphasizes that the goal might be to improve either absolute position (for example, by having more opportunities or resources) or relative position (by narrowing the gap with the other party, holding absolute position constant). By contrast, the favorably placed party wishes to maintain the status quo.
In Ullmann-Margalit’s account, the most interesting problems, for these particular players, have two features. First, the status quo is in game-theoretical equilibrium, in the sense that neither side can improve his absolute position by a unilateral deviance. Think, for example, of a relationship between employers and employees, or husbands and wives. Second, the status quo is strategically unstable, in the sense that the disfavorably placed party might be able to improve his relative position, but sacrifice his absolute solution (at least in the short-run), by a unilateral move (or rebellion). Think, for example, of a threat to strike or to leave the marriage. We can imagine many real-world analogues, including not only labor rights and the division of labor within the family, but also the wide variety of groups that consider whether to engage in civil disobedience.
Ullmann-Margalit’s question is this: How can the favored party try to stabilize the status quo? There are a lot of possibilities. One, of course, is to use force. Another is “to share some of the benefits of his favored positions with the other party” (p. 169). Another is to conceal and blur his favored position (an especially interesting and potentially successful strategy). Another is to try to exclude himself from the frame of reference used by the disfavored part. Yet another is to take steps to convince the disfavored party that he is, in fact, much better off than he was in the past (or might be under some other arrangement). All of these strategies are familiar from past movements for equality, and all have both potential and risks.
But Ullmann-Margalit’s particular interest is in the use of norms of partiality, which operate to stabilize otherwise volatile situations. As examples, she points in particular to norms associated with property, including prohibitions on trespass and “the inheritance institution,” which she says are meant “to preserve, protect, and perpetuate the position of the ‘haves’ – and their descendants – in states which are inherently states of inequality” (p. 173). She acknowledges that norms of this kind might take the form of law or instead customs. But she insists that in any case, an unequal status quo is often able to perpetuate itself only because of their support. From the standpoint of the favored party, a special virtue of norms of partiality is that in many cases, “the air of impersonality remains intact and successfully disguises what underlies the partiality norm, viz. an exercise of power” (p. 189). In this respect, norms of partiality are altogether different from PD norms and coordination norms.
Let’s back up from the particular claims and notice that Ullmann-Margalit could be clearer about what, exactly, ensures that her categories of norms will have a stabilizing effect. She often speaks of social pressure, which leaves open the question: Would people follow norms if no one were watching? For PD norms, most of us would: You pay your taxes because it is the right thing to do, not because other people would think less of you if you didn’t. The same is at least sometimes true of norms of partiality. If you are supposed to show deference to those of higher status (your boss, your teacher, a senator), you might well have internalized that practice, so that you would feel that you had done something wrong if you did not. (The term “impertinence” captures the idea.)
In Explaining Social Behavior, Jon Elster contends that “social norms operate through the emotions of shame in the norm violator and of contempt in the observer of the violation” (p. 355). This understanding helps to clarify much of Ullmann-Margalit’s discussion, and it puts real pressure on her discussion of coordination norms. If the most sensible meeting place in New York City is Grand Central Station, and if you are the only one in a group of friends to show up at Penn Station, you might feel stupid, but shame would be a bit excessive. Ullmann-Margalit is aware of the problem and tries to distinguish between coordination norms and conventions, but I am not sure that the effort is successful.
For all three classes of norms, she is correct to emphasize the likely relevance of social pressure, but over the last decades, we have learned a great deal about that ambiguous idea. For example, Elster emphasizes ostracism, avoidance, and (perhaps more important) perceived contempt, and Ernst Fehr explores “altruistic punishment” (as when norm-enforcers take action at their own expense) and the anticipation of such punishment by would-be norm violators. We continue to learn about the immense power of shame, in the face of one’s own norm violations, even when no such pressure is likely to be brought to bear. And of course legal norms, which Ullmann-Margalit sometimes seems to conflate with social norms, have an enforcement machinery of their own, even if they grow out of or codify social norms.
As her title suggests, Ullmann-Margalit is concerned with the emergence of norms, not only with their functions. The whole idea of “rational reconstruction” is designed to see how norms might plausibly have come about. (In this connection, she offers not the first but perhaps the clearest game-theoretic reading of Hobbes, in accordance with which government could emerge from a generalized PD-structured problem faced by humanity in the state of nature.) Those who emphasize rational reconstruction think that it is valuable to describe “the essential features of situations in which such an event could occur: it is a story of how something could happen – and when human actions are concerned, of what is the rationale of its happening that way – not of what did actually take place” (p. 1; emphasis in original). I might be missing something here, but wouldn’t it be simpler and better to dispense with any kind of causal claims, even hypothetical ones, thus jettisoning an argument about the emergence of norms, and to speak instead of their functions?
A possible response is that if we are speaking of PD norms and coordination norms – and perhaps norms of partiality as well – their functions might turn out to have something to do with their emergence or at least their stability over time. Perhaps spontaneous orders generate norms of this kind. Elinor Ostrom’s work so suggests, and Ullmann-Margalit’s own work on invisible-hand explanations bears on the possibility (and also raises doubts about it). But the basic claim raises a host of questions, and they are empirical rather than conceptual in nature.
Taken in conceptual terms, and as analysis of functions rather than emergence, Ullmann-Margalit's discussion of PD norms and coordination norms remains broadly convincing (and contains many refinements that I have not been able to capture here). But with respect to norms of partiality, there is much more to say. Sure, some social norms stabilize situations of inequality, but Ullmann-Margalit acknowledges her own struggle to identify clear examples. Private property and rights of inheritance are protected by law, not only and perhaps not mostly by social norms. As a matter of logic, rights of inheritance have to flow through legal institutions, and in advanced nations, no stable social norm specifies the content of those rights. In any case, private property can easily be understood as a solution to a generalized PD problem, and if they are taken as part of private property, rights of inheritance can be seen in just the same way.
With respect to norms of partiality, there is a mismatch between Ullmann-Margalit’s very brief list of concrete examples and her extended statement of the abstract problem. In practice, norms that stabilize unequal situations might well be harder to maintain than PD norms and coordination norms – unless those norms are truly perceived as in the interest of those who seem to be disadvantaged by them (if, for example, they improve absolute position). To make progress, it would be useful to have a not-short catalogue of possible norms of partiality, understood as norms (Elster’s sense) that disfavored people actually accept, or at least act as if they accept (because of a fear of sanctions).
Here’s a possible direction. After the appearance of Ullmann-Margalit’s book, Elster, Amartya Sen, and others have explored the idea of “adaptive preferences,” by which disadvantaged people end up preferring their disadvantageous circumstances, and do not rebel against them. If your culture is pervaded by inequality, and if prevailing norms support that equality, you might not question them, and you might even end up accepting them (in part to reduce cognitive dissonance). Describing the hierarchical nature of pre-Revolutionary America, Gordon Wood writes that those "in lowly stations ... developed what was called a 'down look,' and "knew their place and willingly walked while gentlefolk rode; and as yet they seldom expressed any burning desire to change places with their betters." In Wood's account, it is impossible to "comprehend the distinctiveness of that premodern world until we appreciate the extent to which many ordinary people still accepted their own lowliness." Here, then, is a concrete account of adaptive preferences and their relationship to norms of partiality.
Alternatively, you might end up silencing yourself, and hence decline to rebel, not because you accept your own status, but simply because of reputational and other sanctions associated with rebellion – producing what Timur Kuran calls “preference falsification,” which can contribute to social stability. Consider forms of inequality on the basis of sex, sexual orientation, and disability (physical and mental). Both adaptive preferences and preference falsification have played large roles, and they help to maintain norms of partiality. Both empirically and conceptually, there is a lot more to do on this subject.
In her final essay on considerateness, published posthumously and over thirty years after The Emergence of Norms, Ullmann-Margalit turned to an intriguing set of norms, by which people contribute to the well-being of others at low cost to themselves. She urged, quite boldly, that “considerateness is the foundation upon which our relationships are to be organized in both the thin, anonymous context of the public space and the thick, intimate context of the family.” Focused not on the emergence of norms but their consequences, she notes that while a lover might send “a bouquet of a hundred roses,” families typically have smaller, more routinized gestures and “deals,” which reflect “their preferences and aversions, their different competencies and skills, their relative strengths, weaknesses, and vulnerabilities, as well as their fantasies, whims, and special needs” (p. 221). Ullmann-Margalit argues that in that context, it may be too much to aspire to justice, but a good family can certainly be fair.
With respect to the family, Ullmann-Margalit was focused both on mutual advantage and on partiality. She insisted that we cannot proceed “’with eyes wide shut’ – namely, in an imagined original position, behind a veil of ignorance” (p. 344). On the contrary, “the fair family deal is adopted considerately and partially, ‘with eyes wide open’ – namely, with the family members sympathetically taking into account the full particularity of each, and in light of fine-grained comparisons of preferences between them.” Note that in this sentence, the word “partially” is paired with “considerately”; she sees a form of partiality as connected or at least compatible with fairness. Ullmann-Margalit thus began to explore a puzzle on which she was uniquely positioned to make progress: the role of norms of considerateness, and their highly complex functions not only within families and other close-knit units, but also within society as a whole.
CASS R. SUNSTEIN is currently the Robert Walmsley University Professor at Harvard. Mr. Sunstein is author of many articles and books, including Republic.com, Risk and Reason, Why Societies Need Dissent, The Second Bill of Rights, Laws of Fear: Beyond the Precautionary Principle, Worst-Case Scenarios, Nudge: Improving Decisions about Health, Wealth, and Happiness, Simpler: The Future of Government and most recently Why Nudge? and Conspiracy Theories and Other Dangerous Ideas. He is now working on group decisionmaking and various projects on the idea of liberty. | https://newramblerreview.com/book-reviews/philosophy/the-emergence-of-norms |
If you’re new to therapy or exploring different options for treatment, it’s natural to have questions about the first steps. A common but often misunderstood initial step is the psychological or “psych” evaluation.
It might sound intimidating, but a psych evaluation is a simple way for your therapist or health care provider to understand what you’re going through right now. Read on to learn more about psychological evaluations, including when it might be time to get one, what types there are, and what you can expect from the process.
What is a Psychological Evaluation?
To avoid confusion, the psych evaluation in this article refers to a psychological evaluation, not a psychiatric evaluation. While similar, a psychiatric evaluation looks more at physical or chemical aspects, whereas a psychological evaluation looks at social or personal aspects. A psychological evaluation is a mental health assessment in which a health professional, such as a family doctor, psychologist, or psychiatrist (or another mental health professional), evaluates you to see whether you're experiencing a mental health problem and to assess your condition.
Evaluations generally involve multiple components, including answering questions verbally, undergoing a physical exam, and completing one or more questionnaires. An evaluation is often the first step for those seeking mental health treatment.
For psychologists, an assessment like this helps determine the exact nature and extent of a person's mental health conditions. Using various evaluation tools, mental health professionals can gain insight into a person's personality. It's important to note that at no point in the process is anyone judging you. Rather, they're working to help you identify and manage any issues or symptoms impacting your life and to give treatment recommendations.
Think of these types of assessments as serving the same purpose as medical tests. If you have physical symptoms, for instance, a doctor might order blood work or X-rays to better understand the cause of what you’re experiencing, which will help determine an effective treatment plan. Psychological evaluations serve the same purpose, as mental health professionals use these tools to measure and observe your behavior to diagnose and treat specific issues. Sometimes treatment can include individual therapy, family therapy or group therapy, medication, self-care techniques, or a combination of these.
When to Get a Psych Evaluation
A psych eval can help identify the cause of mental health symptoms. If either you or a loved one has shown signs of a mental health condition, it may be time to talk to a professional.
Signs that someone may need a psych evaluation might include:
- Sudden mood changes
- Unexplained memory loss
- Social withdrawal
- Difficulty concentrating
- Uncontrollable crying
- Changes in sleep or eating patterns
- Problems at school or work
- Loss of motivation or interest in activities, especially in those that were once enjoyed
- Increased sensitivity to noise, visuals, or being touched
- Paranoia
- High levels of anxiety
- Feeling disconnected from what’s happening around you
- Sudden bursts of anger
- Depression
- Other uncharacteristic behaviors
While these symptoms aren’t always a cause for concern, it’s wise to seek out a psych evaluation if several new or unusual symptoms occur. It’s essential to schedule an evaluation if symptoms are interfering with relationships, day-to-day life, or the ability to function daily. It’s essential to talk to a professional immediately if someone is struggling with thoughts of self-harm.
Main Types of Psych Evaluations
There are multiple categories of psychological assessments that mental health professionals use during a psych evaluation. They may use one or more during the assessment process. The primary types of evaluations are:
Clinical interview
During a clinical interview, mental health professionals talk with patients to learn more about their symptoms. They’ll follow a semi-standardized interview format known as the Structured Clinical Interview for DSM Disorders (SCID).
Another form of assessment is the Clinical Diagnostic Interview (CDI), which is a non-structured conversation between a professional and a patient. CDI questions are fairly broad, while SCID questions are more specific.
“The psychological evaluations have been researched to be accurate based on self-disclosure. In other words, your honesty about your emotions and how they are affecting you are key to the evaluation’s accuracy.” – Talkspace therapist Dr. Karmen Smith LCSW DD
Assessment of intellectual functioning (IQ)
IQ tests are designed to measure cognitive functioning. They can provide more information about spatial skills, memory, concentration, communication, intellectual capacity, and more.
There are 2 main categories of IQ tests: intelligence tests and neuropsychological assessments, which measure brain function.
Behavioral assessment
Behavioral assessments are a structured and detailed analysis of behavior. Typically, several tools are used to gather information about behaviors, including interviews, observation, and questionnaires. While behavioral assessments are especially common when evaluating children and adolescents, they are used with patients of all ages.
Personality assessment
Mental health professionals use personality testing to learn more about someone, so they can provide an accurate diagnosis.
One commonly used personality test is the five-factor model (FFM), which identifies 5 basic personality traits:
- Extraversion (also sometimes spelled as extroversion)
- Neuroticism (also sometimes referenced as emotional stability)
- Agreeableness
- Conscientiousness
- Openness to experience (also sometimes referenced as intellect)
Other tests commonly used by psychologists include the Minnesota Multiphasic Personality Inventory (MMPI), Thematic Apperception Test (TAT), and the Rorschach test.
What to Expect from a Psych Evaluation
Mental health assessments may include a variety of components, including:
- Formal questionnaires
- Checklists
- Surveys
- Interviews
- Behavioral observations
Often, the depth of evaluation will depend on the person, their concerns or symptoms, and what they need assessed.
In general, you can expect parts of a psych evaluation to take between 20 and 90 minutes, depending on the reason behind testing and which test or tests are administered. Keep in mind multiple visits might be required. Some parts of the assessments can be completed virtually or in person. Things you should be prepared for include:
Physical exam
In certain cases, a physical illness can cause symptoms that mirror some mental health conditions. A physical exam can help determine if a physical disorder (such as a thyroid disorder) or a neurological issue are to blame for symptoms. Be sure to inform your doctor about any conditions you already have or any medications you take.
Lab tests
During a psych evaluation, you may be asked to complete blood work, a urine test, or a brain scan. These tests are designed to rule out any physical conditions. You may also be asked to answer questions about drug and alcohol use to confirm what you’re experiencing isn’t a side effect.
Mental health history
You will likely be asked about how long you’ve been experiencing certain symptoms, about your personal and family history of mental health, and about any psychiatric or psychological treatments you may have received in the past.
Personal history
Medical and mental health professionals may ask questions about lifestyle and personal history to determine the largest sources of stress in your life. They’ll ask about any past major traumas. For instance, you may be asked about your marital status, occupation, military service, or your childhood.
Mental evaluation
In this instance, you’ll likely be asked questions about your symptoms in more detail. This evaluation portion will focus specifically on your thoughts, feelings, and behaviors. It’s also important to know how you’ve tried to manage symptoms on your own thus far. Your doctor will observe your appearance and behavior to help get a sense of your overall mental health.
Cognitive evaluation
This differs slightly from the mental evaluation, as here, the intent will be to gauge your ability to think clearly, recall information, and use sound reasoning.
“The evaluation does ask questions about your feelings, thoughts, and behavior. You can also ask questions about your prognosis and length of treatment. You and your therapist can discuss the results and partner together on the treatment plan. Evaluations can be an effective way to show progress during and after treatment.” – Talkspace therapist Dr. Karmen Smith LCSW DD
Connect with a Mental Health Professional
Psych evaluations can help you understand your symptoms so you can get the help you need. If you or a loved one is experiencing psychological symptoms that are having an adverse effect on your life, you may need professional help. Therapy can provide the support and guidance you need to get an accurate diagnosis and develop an effective treatment plan.
Talkspace, the online therapy platform that makes getting help easy and affordable, offers mental health evaluations that can help determine next steps for treatment and recovery. Don’t wait another day. If you’re in pain and need help, Talkspace is there for you. Learn more about how to start therapy with Talkspace today.
Sources:
- Spitzer R. The Structured Clinical Interview for DSM-III-R (SCID). Arch Gen Psychiatry. 1992;49(8):624. doi:10.1001/archpsyc.1992.01820080032005. https://pubmed.ncbi.nlm.nih.gov/1637252/. Accessed October 2, 2022.
- Sharma E, Srinath S, Jacob P, Gautam A. Clinical practice guidelines for assessment of children and adolescents. Indian J Psychiatry. 2019;61(8):158. doi:10.4103/psychiatry.indianjpsychiatry_580_18. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6345125/. Accessed October 2, 2022.
- McCrae R, John O. An Introduction to the Five-Factor Model and Its Applications. J Pers. 1992;60(2):175-215. doi:10.1111/j.1467-6494.1992.tb00970.x. https://pubmed.ncbi.nlm.nih.gov/1635039/. Accessed October 2, 2022.
| https://www.talkspace.com/blog/what-is-a-psych-evaluation/
Maclean’s has for many years provided readers the ability to comment beneath virtually every story that goes online. We’ve watched thousands of comments of every imaginable tone and tenor stitch themselves into back-and-forth conversations between excited readers.
In rethinking how we foster debate and engage with readers, however, we are moving this conversation over to our social platforms. Starting today, comments on stories at Macleans.ca have been closed.
This decision comes at a time of unprecedented opportunity for our readers to engage and share their thoughts on our journalism, on social platforms like Facebook, Twitter and any number of other flashpoints for online debate. It’s already the case that the vast majority of comments we receive now come via those platforms. Other media organizations have seen the shift, too, as outlets as varied as NPR, CNN, the Toronto Star and most recently the Atlantic have all closed their comment sections.
Every week subscribers of our monthly print and weekly digital editions send us letters about the stories and opinion pieces they read. We already publish these in our print magazine as well as online. But we want to hear from our online readers too, and invite you to send your letters to us at [email protected]. We’ll curate the smartest, most thought-provoking letters and publish them regularly online.
We look forward to hearing from you. | https://www.macleans.ca/general/comments-are-changing-at-macleans/ |
This post is part of a series on common myths and misconceptions about charity. Taking time to learn the facts will help prevent the spread of misinformation and inspire more people to use their resources effectively to improve the world.
Do effective altruists only value short-term, measurable outcomes?
People in the effective altruism movement often support organisations doing direct work with easily measurable outcomes. After all, effective altruists try to make evidence-based decisions whenever possible, and it's difficult to find evidence supporting interventions with very indirect, long-term, and/or abstract consequences. This has led to the misconception that people in the effective altruism movement only care about short-term, measurable change. In fact, many effective altruists care about outcomes that are harder to measure and are supportive of working towards systemic change.
Critics of effective altruism have pointed out that effective altruists are particularly susceptible to a bias towards measurable outcomes. Because effective altruists rely on evidence to figure out how to best help others, they are often drawn towards interventions that produce a quantifiable or measurable impact.
However, while it may be easiest to study and promote measurable change, effective altruists value other kinds of change as well. On the 80,000 Hours blog, Rob Wiblin argues that effective altruists love systemic change.
Organisations within the effective altruism movement work across a range of cause areas and rely on a wide variety of evidence — both quantitative and qualitative — to make inferences about the potential impact of interventions. There are many highly effective charities working towards systemic change. For instance:
- The Future of Humanity Institute studies global catastrophic risks, like human-caused climate change and threats from artificial intelligence. These risks and the impact of potential interventions can't be measured through randomised controlled trials, so FHI relies on a range of other methodologies to assess future risk.
- The Clean Air Task Force engages in legal and legislative advocacy to support climate-friendly policy change.
- The Johns Hopkins Center for Health Security works with policymakers to prepare for threats to public health.
- The Nuclear Threat Initiative works with political leaders across the world to improve global nuclear policy.
- Animal Charity Evaluators distributes "Movement Grants" to support organisations doing more speculative or long-term work.
- The Good Food Institute, in addition to developing and promoting alternatives to animal products, is working to secure fair policy and public funding for research to develop a more sustainable global food system.
You can help support systemic, long-term change by donating to one of these or many other effective charities. Consider making a giving pledge and joining our worldwide community of like-minded people who are working to make the world a better place.
This post is part of an update of our "Myths About Charity" page. Multiple authors contributed. | https://www.givingwhatwecan.org/post/2021/04/do-effective-altruists-only-value-short-term-measurable-outcomes/ |
The learning portal architecture is a tool designed to help learning leaders understand the technological components involved in a learning portal and how they interact to create a customized experience for each type of user.
Overview
The learning portal architecture enables learning leaders to better understand how technology impacts and improves learning. A learning portal is an integrated platform for administration, collaboration, analytics and e-commerce. The platform provides users with access to relevant content in a variety of forms and the ability to publish content. In addition, it allows users to track and analyze learners' use of the system, as well as pay for or subscribe to content.
The diagram above illustrates a learning portal and its basic components, beginning with each category of user: administrators, employees, and customers and channel partners. User groups are managed through a layer of filters: a combination of each user’s preferences aligned with the content and usability requirements set by the portal administrator. Filters determine the experience for each learner.
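As a rough sketch of how such a filter layer can behave (the field names, groups, and rules here are hypothetical, not drawn from any particular product), the portal resolves each user's view by intersecting administrator-set access rules with the user's own preferences:

```python
# Hypothetical sketch of the filter layer: what a user sees is the intersection
# of administrator-set access rules and the user's own preferences.
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    title: str
    kind: str        # e.g. "e-learning", "blog", "webinar", "podcast"
    audiences: set   # user groups the administrator allows to see this item

@dataclass
class User:
    name: str
    group: str       # "administrator", "employee", or "customer/partner"
    preferred_kinds: set = field(default_factory=set)

def visible_content(user, catalog):
    """Apply the administrator's access rules first, then the user's preferences."""
    allowed = [c for c in catalog if user.group in c.audiences]
    if user.preferred_kinds:
        allowed = [c for c in allowed if c.kind in user.preferred_kinds]
    return allowed

catalog = [
    ContentItem("Onboarding 101", "e-learning", {"employee"}),
    ContentItem("Product roadmap webinar", "webinar", {"employee", "customer/partner"}),
    ContentItem("Partner pricing update", "blog", {"customer/partner"}),
]

partner = User("Dana", "customer/partner", preferred_kinds={"webinar"})
print([c.title for c in visible_content(partner, catalog)])  # ['Product roadmap webinar']
```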
The model then divides the learning portal into three functional areas: learning and content management, web technologies and content management, and e-commerce and analytics. Learning and content management includes the various functions associated with scheduling, assessments and testing, e-learning, cataloging, and individual profiles. These functions are generally considered the traditional learning management system functionality. Web technologies include content publishing, such as articles, blogs, wikis, communities, polls and surveys, webinars, and podcasts. Some learning portals include the integration of authoring and delivery technologies within the web technologies functionality. E-commerce and analytics include tools for functions such as financial transactions, tracking, security and reporting.
Many consider the combination of these tools the definition of a learning management system (LMS). “LMS” is often used as a generic term for virtually all types of administration systems for learning, but in reality, the LMS is only one component of the learning portal.
Uses
The learning portal architecture helps learning leaders understand which components they should use create a relevant and effective user experience. It also demonstrates how the user interface is generated, depending on the type of user and the filters selected. Through the learning portal architecture, learning leaders can easily identify their user groups and which type of functionality meets the needs of their users. Administrators can also set filters within the learning portal that determine what the learners see and which content they can access.
| https://trainingindustry.com/wiki/learning-technologies/learning-portal-architecture/
New green lacewing larva from Burmese amber. Credit: Nanjing Institute of Geology and Palaeontology (Photo: Xinhua)
Chinese paleontologists reported a new lacewing species that can mimic liverworts, which is a rare camouflage in both modern and fossil ecosystems.
In a study published on Thursday in the journal Current Biology, researchers from China Agricultural University and the Nanjing Institute of Geology and Palaeontology under the Chinese Academy of Sciences found that the larvae, which lived 100 million years ago, were anatomically modified to mimic coeval liverworts, the earliest terrestrial plants.
This discovery represents the first record of liverwort mimicry by fossil insects and brings to light an evolutionary novelty, both in terms of morphological specialization as well as plant-insect interaction.
Camouflage and mimicry are pervasive throughout the biological world as part of the usual interactions between predators and their prey, allowing both to avoid detection. Among insects, the icons of mimicry include familiar stick and leaf insects, leaf-like moths and katydids.
These larvae have broadly foliate lateral plates on their thorax and abdomen. It is the only species known among lacewings with distinctive foliate lobes on the larval body, according to the researchers.
Such morphological modifications grossly match some coeval liverworts. The new larvae are the first example of direct mimicry in lacewing larvae. The morphological specialization in the new chrysopoid larvae is unique and is unknown among any living or fossil lacewings.
While the anatomy of these larvae allowed them to avoid detection, the lack of setae or other anatomical elements for entangling debris as camouflage means their sole defense was mimicry.
Members of the species could have been stealthy hunters like living and other fossil Chrysopoidea or been ambush predators aided by their disguise.
Liverworts are a diverse group distributed throughout the world today, including approximately 9,000 extant species. Liverworts have been diverse since the start of the Late Cretaceous, including in the Burmese amber forest, which was a typical wet, tropical rainforest.
Like their extant counterparts, Cretaceous liverworts grew on the leaves and bark of trees as well as on other plant surfaces.
The researchers suggested that the larvae probably lived on trees densely covered by liverworts, with their liverwort mimicry aiding their survival. | https://peoplesdaily.pdnews.cn/china/chinese-scientists-find-insect-species-mimicking-liverworts-100-mln-yrs-ago-59296.html |
... themselves part of the story."
That comes in response to newspaper stories raising concerns about journalists caught in the middle of a crisis where the need for help is all around.
"I think it's important for journalists to be cognizant of their roles in disaster coverage," said SPJ President Kevin Smith. "Advocacy, self-promotion, offering favors for news and interviews, injecting oneself into the story or creating news events for coverage is not objective reporting, and it ultimately calls into question the ability of a journalist to be independent, which can damage credibility."
So, SPJ isn't saying that a medical correspondent shouldn't render aid in a crisis.
"No, I'm not saying that," Smith told B&C. "What we are saying is that it is walking a very tight rope."
"Journalists need to be cognizant of what their role and responsibility is there. It doesn't mean you can't lend assistance or aid," he says. "We understand a lot of humane activity is going on there." What he wants journalists to think about is whether they are doing it for an exclusive story or footage.
Smith says there is just a little "too much participatory journalism going on."
He says the public and even his first-year journalism students "are starting to question what they see taking place down there as maybe not being our primary and first responsibility."
| https://www.nexttv.com/news/spj-cautions-journalists-haiti-36033
How we work
Community Action Groups (CAG) Devon was established in 2016 as a new way to support communities in Devon to deliver exciting and innovative community waste projects. The project is run by Resource Futures and funded by Devon County Council.
We currently support groups in Mid Devon and Teignbridge but, in time, we hope to expand this support Devon-wide.
Objectives
Our aim is to:
- Help community-led groups in the Mid Devon and Teignbridge districts to increase the impact they have on reducing waste and improving sustainability
- Reduce the amount of household waste
- Improve and increase recycling, composting and reuse
- Empower groups and individuals, increase social cohesion and build more resilient communities.
Our approach
The CAG Devon approach is one of empowerment. Building on your passion and commitment, we will work with you to help you improve sustainability within your community and equip you with the tools, skills and know-how to be more impactful.
Whether you’re establishing a new community group such as a repair café, community larder or beach clean group or you need help to grow an existing group or deal with a specific challenge, we offer support through one-to-one mentoring and guidance, and by facilitating peer-to-peer learning.
We can advise you on basic governance, provide your group with insurance and training, and help you to promote your activities. We also champion mutual support, enabling people to learn from the experiences of others who have undertaken similar projects.
Find out how to join our network
Our approach is based on Community Action Groups (CAG) Oxfordshire, which was set up by Resource Futures in 2001. CAG Oxfordshire has gone from strength to strength. Since its launch, it has grown to become the largest network of its kind in the UK and has now become an independent organisation. It consists of more than 80 groups across Oxfordshire who are taking action on issues including waste, transport, food, energy, biodiversity and social justice.
Sharing ideas, skills and expertise
Our annual Skillshare event provides an opportunity for you to come together with other CAG Devon groups as well as hear from organisations and groups outside of our network. It’s a chance to inspire and be inspired, to swap ideas, share information and expertise, connect with and learn from others, and plan your group’s next steps. It is also an excellent opportunity for us to celebrate the individual and collective achievements of the groups in the CAG Devon network.
We run regular ‘Collaborate’ sessions which enable peer-to-peer support and collaboration on a specific topic, for example repair cafés or plastic waste reduction. These sessions provide a platform for group members who are more experienced in a specific activity to support those who are just starting out. Collaborate groups also provide a forum to share ideas on a particular topic, explore ways to overcome the challenges specific to this area of activity, and establish ways in which you can work collaboratively with other groups to have a greater impact. | https://cagdevon.org.uk/about/how-we-work/ |
Worksheet: Multiplying Two-Digit Numbers: The Column Method
In this worksheet, we will practice using the standard algorithm to multiply a two-digit number by another two-digit number and regrouping when necessary.
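As a worked illustration of the column method, using numbers of our own choosing rather than one of the worksheet's questions, consider 23 × 45:

```latex
% 23 x 45 by the column method: multiply by the ones digit, then by the
% tens digit, then add the partial products.
\begin{array}{r}
       23 \\
\times 45 \\ \hline
      115 \\   % 23 x 5
      920 \\   % 23 x 40
\hline
     1035
\end{array}
```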
Q4:
An area model is useful for solving more complex calculations such as .
Use an area model to solve .
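The specific numbers for this question were lost from the page, so as an illustration only, here is an area model for the same product used above, 23 × 45, splitting each factor by place value:

```latex
% Area model for 23 x 45: split 23 = 20 + 3 and 45 = 40 + 5,
% multiply the parts, and add the four partial products.
\begin{array}{c|cc}
       & 40  & 5   \\ \hline
  20   & 800 & 100 \\
  3    & 120 & 15
\end{array}
\qquad 800 + 100 + 120 + 15 = 1035
```

Each cell is the product of one row part and one column part, and the four partial products sum to the same answer the column method gives.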
Q6:
A car travels 95 miles each day. How far does it travel in 25 days?
Q7:
A lion can eat 18 pounds of meat a day. How many pounds of meat can a lion eat in 12 days?
Q8:
Liam saves $15 each week. How much does he save in 12 weeks?
Q9:
Calculate the following.
Q10:
Multiply the following.
Q11:
Elephants can drink about 50 gallons of water per day. How much would an elephant drink in 15 days, if it drinks that amount each day? | https://www.nagwa.com/en/worksheets/670157682798/ |
The core strength of Photoshop is the way it enables you to improve the quality of your images, whether you're fixing a major problem or making a subtle adjustment. In this workshop Tim Grey explores a wide variety of techniques to help you get the best results when optimizing your images. He begins with basics like cropping, changing brightness and contrast, and correcting color balance, then moves on to more advanced adjustments like Shadows/Highlights, Curves, and dodging and burning. You'll learn how to make targeted adjustments that affect only selected parts of the image and apply creative adjustments that don't so much fix a problem as add a unique touch. And best of all, Tim teaches all these techniques as part of an overall workflow designed to help you work quickly, efficiently, and nondestructively. | http://down.cd/8362/buy-Video2Brain-Photoshop-CS6-Image-Optimization-Workshop-download/ |
Simone Biles, Coco Gauff, Brigid Kosgei: Young Sportstars Make History
Over the weekend three women athletes stunned the world, both making and shattering sports records. Simone Biles, Coco Gauff, and Brigid Kosgei led their respective sporting events to not just win but conquer. Here’s celebrating these electrifying young women champs and their records.
Simone Biles: most decorated gymnast at world championships
22-year-old Biles won her 25th medal at the gymnastics world championships on Sunday, bringing her career world championship haul to 19 golds, three silvers and three bronzes. In doing so, this Olympic champ has secured her place among the best of the best in gymnastics, men and women.
In Stuttgart, Germany, Biles, four-time Olympic gold medallist, participated in her fifth world championship. She now has claimed more medals at the international event than any other male or female gymnast in history.
The previous record of 23 medals was set in 1996 by Belarusian Vitaly Scherbo and has now been broken by Biles with a medal in her balance beam routine. But a 24th medal wasn’t enough. Shortly after, she broke her own record with a 25th medal, another gold, in the floor exercise. “I can’t be more thrilled with the performance I put out at this World Championships and it only gives me confidence moving forward,” she said after the competition.
Coco Gauff: Youngest WTA winner
15-year-old Coco Gauff just won her first WTA title, becoming the youngest player to do so in the last 15 years. Nicole Vaidisova had held the record as the youngest WTA title winner since 2004, when she won her first title. Breaking that record, Coco beat Jelena Ostapenko 6-3, 1-6, 6-2 to lift the Upper Austria Ladies trophy on Sunday.
With this win, Coco is now expected to rise into the Top 75 in the WTA rankings on Monday.
She also became the second player in the last two seasons to claim a debut WTA singles title as a lucky loser, sharing the feat with fellow teenager Olga Danilovic, who was a lucky loser when she won the trophy at the Moscow River Cup last season.
Brigid Kosgei ran 2:14:04 at the Chicago Marathon
25-year-old Brigid Kosgei of Kenya broke Paula Radcliffe’s women’s marathon world record at the Chicago Marathon, finishing in two hours, 14 minutes, four seconds on Sunday. “I can go quicker,” said Brigid Kosgei after smashing the world record, which previously stood at 2:15:25.
She eclipsed the 16-year-old women’s marathon world record held by Britain’s Paula Radcliffe, beating it by 81 seconds.
Brigid also won the Chicago Marathon for the second year in a row. | https://www.shethepeople.tv/shesport/simone-biles-coco-gauff-brigid-kosgei |
mul24 − Fast integer function to multiply 24−bit integer values.
mul24 multiplies two 24−bit integer values x and y. x and y are 32−bit integers but only the low 24−bits are used to perform the multiplication. mul24 should only be used when values in x and y are in the range [−2^23, 2^23−1] if x and y are signed integers and in the range [0, 2^24−1] if x and y are unsigned integers. If x and y are not in this range, the multiplication result is implementation−defined.
Fast integer functions can be used for optimizing performance of kernels. We use the generic type name gentype to indicate that the function can take int, int2, int3, int4, int8, int16, uint, uint2, uint3, uint4, uint8, or uint16 as the type for the arguments.
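A minimal usage sketch follows; it is not part of the reference page, and the kernel name and buffer layout are illustrative assumptions:

```c
/* Illustrative OpenCL C kernel (not from the reference page): scales each
 * element by a factor known to fit in 24 bits, using mul24 for speed. */
__kernel void scale24(__global const int *in,
                      __global int *out,
                      const int scale)   /* assumed to lie in [-2^23, 2^23 - 1] */
{
    int gid = (int)get_global_id(0);
    /* Defined only while both operands stay within the 24-bit signed range;
     * outside it, the result is implementation-defined. */
    out[gid] = mul24(in[gid], scale);
}
```

On hardware without a fast 32-bit integer multiplier this can save cycles; when the operand ranges cannot be guaranteed, the ordinary * operator is the safe choice.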
OpenCL Specification
integerFunctions(3clc)
The Khronos Group
Copyright © 2007-2011 The Khronos Group Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and/or associated documentation files (the "Materials"), to deal in the Materials without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Materials, and to permit persons to whom the Materials are furnished to do so, subject to the condition that this copyright notice and permission notice shall be included in all copies or substantial portions of the Materials.
| http://man.m.sourcentral.org/ubuntu1710/3+mul24 |
Abstract: In this paper, we introduce a newly developed quantile function model that can be used for estimating conditional distributions of financial returns and for obtaining multi-step ahead out-of-sample predictive distributions of financial returns. Since we forecast the whole conditional distributions, any predictive quantity of interest about the future financial returns can be obtained simply as a by-product of the method. We also show an application of the model to the daily closing prices of the Dow Jones Industrial Average (DJIA) series over the period from 2 January 2004 to 8 October 2010. We obtained the predictive distributions up to 15 days ahead for the DJIA returns, which were further compared with the actually observed returns and those predicted from an AR-GARCH model. The results show that the new model can capture the main features of financial returns and provide a better fitted model together with improved mean forecasts compared with conventional methods. We hope this talk will help the audience see that this new model has the potential to be very useful in practice.
Keywords: DJIA, financial returns, predictive distribution, quantile function model
2203 Clustering of Extremes in Financial Returns: A Comparison between Developed and Emerging Markets
Authors: Sara Ali Alokley, Mansour Saleh Albarrak
Abstract:This paper investigates the dependency or clustering of extremes in the financial returns data by estimating the extremal index value θ∈[0,1]. The smaller the value of θ the more clustering we have. Here we apply the method of Ferro and Segers (2003) to estimate the extremal index for a range of threshold values. We compare the dependency structure of extremes in the developed and emerging markets. We use the financial returns of the stock market index in the developed markets of US, UK, France, Germany and Japan and the emerging markets of Brazil, Russia, India, China and Saudi Arabia. We expect that more clustering occurs in the emerging markets. This study will help to understand the dependency structure of the financial returns data.
Keywords: clustering, extremes, returns, dependency, extremal index
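As background for the abstract above (our addition, not part of it): the extremal index θ measures the clustering of extremes, and its reciprocal approximates the mean cluster size,

```latex
\theta \in (0,1], \qquad \text{mean cluster size} \approx \tfrac{1}{\theta}, \qquad \theta = 1 \ \text{for independent extremes.}
```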
2202 The Impact of Financial News and Press Freedom on Abnormal Returns around Earnings Announcements in Greater China
Authors: Yu-Chen Wei, Yang-Cheng Lu, I-Chi Lin
Abstract: This study examines the impacts of news sentiment and press freedom on abnormal returns around earnings announcements in Greater China, including the Shanghai, Shenzhen and Taiwan stock markets. The news sentiment ratio is calculated using content analysis of semantic orientation. The empirical results show that news released prior to the event date may decrease the cumulative abnormal returns prior to the earnings announcement, regardless of whether it is released in China or Taiwan. By contrast, companies with optimistic financial news may see increased cumulative abnormal returns on the announcement date. Furthermore, differences in press freedom across Greater China are considered to compare the impact of press freedom on abnormal returns. The findings show that the freer the press is, the more negatively significant is the impact of news on abnormal returns, which suggests that press freedom may decrease the ability of news to move abnormal returns. The intuition is that investors may receive alternative news related to each company in a market with greater press freedom, which supports the efficiency of the market and reduces possible excess returns.
Keywords: news, press freedom, Greater China, earnings announcement, abnormal returns
2201 The Influence of the Company's Financial Performance and Macroeconomic Factors to Stock Return
Authors: Angrita Denziana, Haninun, Hepiana Patmarina, Ferdinan Fatah
Abstract: The aims of the study are to determine the effects of the company's financial performance, measured by the Return on Assets (ROA) and Return on Equity (ROE) indicators, and of macroeconomic factors, measured by the Indonesian interest rate (SBI) and the exchange rate, on the stock returns of non-financial companies listed on the IDX. The results of this study indicate that ROA has a negative effect on stock returns, ROE has a positive effect on stock returns, and the SBI interest rate and the exchange rate have positive effects on stock returns. In the regression analysis, the independent variables ROA, ROE, SBI interest rate and exchange rate are highly significant (p value < 0.01). Thus, all the above variables can be used as a basis for investment decision making in the Indonesia Stock Exchange (IDX), mainly for shares of non-financial companies.
Keywords: ROA, ROE, interest rate, exchange rate, stock return
2200 Effect of Media Reputation on Financial Performance and Abnormal Returns of Corporate Social Responsibility Winner
Authors: Yu-Chen Wei, Dan-Leng Wang
Abstract: This study examines whether reputation in the media press affects the financial performance and market abnormal returns around the announcement of corporate social responsibility (CSR) awards in the Taiwan stock market. The difference between this study and the prior literature is that the media reputation measures, media coverage and net optimism, are constructed using content analysis. The empirical results show that corporations that won CSR awards improved their financial performance in the following year. The media coverage and net optimism related to CSR winners are higher than for non-CSR companies before and after the CSR award is announced, and the differences are significant, but the difference decreases as the announcement day approaches. We propose that non-CSR companies may try to manipulate the media press to increase the coverage and positive image received by investors compared to the CSR winners. The cumulative real returns and abnormal returns of CSR winners were not significantly higher than those of the non-CSR samples; however, the leading returns of CSR winners were higher in the two months after the award announcement. Comparisons of performance between CSR and non-CSR companies could inform portfolio management for mutual funds and investors.
Keywords: corporate social responsibility, financial performance, abnormal returns, media, reputation management
2199 On the Impact of Oil Price Fluctuations on Stock Markets: A Multivariate Long-Memory GARCH Framework
Authors: Manel Youssef, Lotfi Belkacem
Abstract:This paper employs multivariate long memory GARCH models to simultaneously estimate mean and conditional variance spillover effects between oil prices and different financial markets. Since different financial assets are traded based on these market sector returns, it’s important for financial market participants to understand the volatility transmission mechanism over time and across these series in order to make optimal portfolio allocation decisions. We examine weekly returns from January 1, 2003 to November 30, 2012 and find evidence of significant transmission of shocks and volatilities between oil prices and some of the examined financial markets. The findings support the idea of cross-market hedging and sharing of common information by investors.
Keywords: oil prices, stock indices returns, oil volatility, contagion, DCC-multivariate (FI) GARCH
2198 Ethical Investment Instruments for Financial Sustainability
Authors: Sarkar Humayun Kabir
Abstract: This paper aims to investigate whether ethical investment instruments could contribute to stability in financial markets. In order to address the main issue, the study investigates the stability of returns in seven conventional and Islamic equity markets of Asia, Europe and North America and in five major commodity markets from 1996 to June 2012. In addition, the study examines the unconditional correlation between the returns of the assets under review to investigate the portfolio diversification benefits for investors. Applying relevant methods, the study finds that investors may enjoy sustainable returns from their portfolios by investing in ethical financial instruments such as Islamic equities. In addition, it should be noted that most of the commodities, gold in particular, are either weakly or negatively correlated with equity returns. These results suggest that investors would be better off investing in portfolios combining Islamic equities and commodities in general. The sustainable returns of ethical investments have important implications for investors and markets, since these investments can provide stable returns while allowing investors to avoid funding the production of goods and services believed to be harmful to humans and society as a whole.
Keywords: financial sustainability, ethical investment instruments, islamic equity, dynamic conditional correlation, conditional volatility
2197 Price to Earnings Growth (PEG) Predicting Future Returns Better than the Price to Earnings (PE) Ratio
Authors: Lindrianasari Stefanie, Aminah Khairudin
Abstract: This study aims to provide empirical evidence regarding the ability of the Price to Earnings Ratio and the PEG Ratio to predict future stock returns. The samples used in this study are stocks included in the LQ45 index. The main contribution is to provide empirical evidence on whether the PEG Ratio can deliver better returns than the Price to Earnings Ratio. This study used a sample of all companies in the LQ45 group over the observation period. The data are limited to the financial statements of companies included in LQ45 between July 2013 and July 2014, using the financial statements and the company's closing stock price at the end of 2010 as a reference benchmark for the growth of the company's stock price compared to the closing price of 2013. This study found that the PEG Ratio method can outperform the PE Ratio method in predicting future returns on the LQ45 stock portfolio.
Keywords: price to earnings growth, price to earnings ratio, future returns, stock price
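For reference (standard definitions, added by us; they are not part of the abstract), the two ratios compared above are:

```latex
\mathrm{PE} = \frac{\text{price per share}}{\text{earnings per share}}, \qquad
\mathrm{PEG} = \frac{\mathrm{PE}}{\text{expected annual EPS growth rate (in \%)}}
```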
2196 Whether Asset Growth is Systematic Risk: Evidence from Thailand
Authors: Thitima Chaiyakul
Abstract: The previous literature on the effect of asset growth on equity returns is small, and it mainly focuses on developed markets. To my knowledge, there is no published paper examining the effect of asset growth on equity returns in the Stock Exchange of Thailand across different industry groups. The main objective of this research is to test the effect of asset growth on equity returns in different industry groups. This study employs data on the listed companies in the Stock Exchange of Thailand between January 1996 and December 2014. Data from the financial industry are excluded from this study due to the different meaning of accounting terms. The results show supporting evidence that asset growth positively affects equity returns at a statistical significance level of at least 5% in the Agro & Food Industry, Industrials, and Services industry groups. These results are inconsistent with previous research on developed markets. Nevertheless, the statistical significance of the effect of asset growth on equity returns appears only in some cases. In summary, asset growth is a non-systematic risk and a mispricing factor.
Keywords: asset growth, asset pricing, equity returns, Thailand
2195 Contagion of the Global Financial Crisis and Its Impact on Systemic Risk in the Banking System: Extreme Value Theory Analysis in Six Emerging Asia Economies
Authors: Ratna Kuswardani
Abstract: This paper aims to study the impact of the recent Global Financial Crisis (GFC) on 6 selected emerging Asian economies (Indonesia, Malaysia, Thailand, Philippines, Singapore, and South Korea). We first trace the contagion of the GFC from the US and Europe to the selected emerging Asian countries by studying the tail dependence of market stock returns between those countries. We apply the concept of Extreme Value Theory (EVT) to model the dependence between multiple returns series of the variables under examination. We explore the factors causing the contagion between the regions. We find dependencies between markets that are influenced by their size, especially for large markets in emerging Asian countries, which tend to have a higher dependency on markets in more advanced countries such as the U.S. and some countries in Europe. The results also suggest that the dependencies between market returns and bank stock returns in the same region tend to be higher than the dependencies between these returns across two different regions. We extend our analysis by studying the impact of the GFC on systemic risk in the banking system. We also find that larger institutions have more dependencies with the stock market, suggesting that larger banks can cause disruption in the market. Further, a higher probability of extreme loss can be seen during the crisis period, which is shown by the non-linear dependency between the pre-crisis and the post-crisis periods. Finally, our analysis suggests that systemic risk appears in the domestic banking systems in emerging Asia, as shown by the extreme dependencies within banks in the system. Overall, our results provide caution to policy makers and investors alike on the possible contagion of the impact of the global financial crisis across different markets.
Keywords: contagion, extreme value theory, global financial crisis, systemic risk
2194 The Conditionality of Financial Risk: A Comparative Analysis of High-Tech and Utility Companies Listed on the Shenzhen Stock Exchange (SSE)
Authors: Joseph Paul Chunga
Abstract: The investment universe is awash with financial choices that investors must weigh, which principally culminates in a duality between aggressive and conservative approaches. However, it is pertinent to emphasize that investment vehicles with an aggressive approach tend to take on more risk than the latter group in an effort to generate higher future returns for their investors. This study examines the conditionality effect that such partiality in financing has on the High-Tech and Public Utility companies listed on the Shenzhen Stock Exchange (SSE). Specifically, it examines the significance of the relationship between the capitalization ratios of Total Debt Ratio (TDR) and Degree of Financial Leverage (DFL) and the profitability ratios of Earnings per Share (EPS) and Return on Equity (ROE) on the financial risk of the two industries. We employ a modified version of the panel regression model used by Rahman (2017) to estimate the relationship. The study finds a significant positive relationship between the capitalization ratios and financial risk, stronger for Public Utility companies than for High-Tech companies, and a substantial negative relationship between the profitability ratios and financial risk, stronger for the former than for the latter. This spells an important insight for prospective investors with regard to the volatility of earnings of such companies.
Keywords: financial leverage, debt financing, conservative firms, aggressive firms
2193 Financial Analysis of Selected Private Healthcare Organizations with Special Referance to Guwahati City, Assam
Authors: Mrigakshi Das
Abstract: Private sector investment and the quantum of money required in this sector critically hinge on the financial risk and returns the sector offers to providers of capital. Therefore, it becomes important to understand the financial performance of hospitals. Financial analysis is useful for decision makers in a variety of settings. Consider a small proprietary hospital, say, a physicians' clinic. The managers of such clinics need the information that financial statements provide. Attention to the financial statements of healthcare organizations can provide answers to questions like: How are they doing? What is their rate of profit? What is their solvency and liquidity position? What are their sources and applications of funds? What is their operational efficiency? The researcher has studied the financial statements of 5 private healthcare organizations in Guwahati City.
Keywords: not-for-profit organizations, financial analysis, ratio analysis, profitability analysis, liquidity analysis, operational efficiency, capital structure analysis
2192 Volatility Switching between Two Regimes
Authors: Josip Visković, Josip Arnerić, Ante Rozga
Abstract: Based on the fact that volatility is time varying in high frequency data and that periods of high volatility tend to cluster, the most successful and popular models for modelling time varying volatility are GARCH type models. When financial returns exhibit sudden jumps that are due to structural breaks, standard GARCH models show high volatility persistence, i.e. integrated behaviour of the conditional variance. In such situations, models in which the parameters are allowed to change over time are more appropriate. This paper compares different GARCH models in terms of their ability to describe structural changes in returns caused by the financial crisis in the stock markets of six selected central and east European countries. The empirical analysis demonstrates that the Markov regime switching GARCH model resolves the problem of excessive persistence and outperforms uni-regime GARCH models in forecasting volatility when sudden switching occurs in response to the financial crisis.
Keywords: central and east European countries, financial crisis, Markov switching GARCH model, transition probabilities
2191 Day of the Week Patterns and the Financial Trends' Role: Evidence from the Greek Stock Market during the Euro Era
Authors: Nikolaos Konstantopoulos, Aristeidis Samitas, Vasileiou Evangelos
Abstract: The purpose of this study is to examine whether financial trends influence not only stock markets' returns but also their anomalies. We choose to study the day of the week effect (DOW) for the Greek stock market during the Euro period (2002-12), because during this specific period there were no significant structural changes and there were long-term financial trends. Moreover, in order to avoid possible methodological counterarguments that usually arise in the literature, we apply several linear (OLS) and nonlinear (GARCH family) models to our sample until we reach the conclusion that the TGARCH model fits our sample better than any other. Our results suggest that in the Greek stock market there is a long-term predisposition for positive/negative returns depending on the weekday. However, the statistical significance is influenced by the financial trend. This influence may be the reason why there are conflicting findings in the literature over time. Finally, we combine the DOW's empirical findings from 1985-2012 and we may assume that in the Greek case there is a tendency for a long-lived turn-of-the-week effect.
Keywords: day of the week effect, GARCH family models, Athens stock exchange, economic growth, crisis
2190 Cryptocurrency as a Payment Method in the Tourism Industry: A Comparison of Volatility, Correlation and Portfolio Performance
Authors: Shu-Han Hsu, Jiho Yoon, Chwen Sheu
Abstract: With the rapid growth of blockchain technology and cryptocurrency, various industries, including tourism, have added cryptocurrency as a payment method for their transactions. More and more tourism companies accept payments in digital currency for flights, hotel reservations, transportation, and more. For travellers and tourists, using cryptocurrency as a payment method has become a way to circumvent costs and limit risks. Understanding volatility dynamics and interdependencies between standard currencies and cryptocurrencies is important for appropriate financial risk management and assists policy-makers and investors in making more informed decisions. The purpose of this paper is to understand and explain the risk spillover effects between six major cryptocurrencies and the ten most traded standard currencies. We use data for the daily closing prices of cryptocurrencies and currency exchange rates from 7 August 2015 to 10 December 2019, with 1,133 observations. The diagonal BEKK model was used to analyze the co-volatility spillover effects between cryptocurrency returns and exchange rate returns, which measure how shocks to returns in different assets affect each other's subsequent volatility. The empirical results show there are co-volatility spillover effects between the cryptocurrency returns and the GBP/USD, CNY/USD and MXN/USD exchange rate returns. Therefore, these currencies (British Pound, Chinese Yuan and Mexican Peso) and cryptocurrencies (Bitcoin, Ethereum, Ripple, Tether, Litecoin and Stellar) are suitable for constructing a financial portfolio from an optimal risk management perspective and also for dynamic hedging purposes.
Keywords: blockchain, co-volatility effects, cryptocurrencies, diagonal BEKK model, exchange rates, risk spillovers
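As a reference for the model named in the abstract above (a standard textbook form, not taken from the paper itself), the diagonal BEKK(1,1) recursion for the conditional covariance matrix is:

```latex
H_t = C'C + A'\,\varepsilon_{t-1}\varepsilon_{t-1}'\,A + B'\,H_{t-1}\,B,
```

where C is lower triangular and A and B are restricted to be diagonal.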
2189 Modelling Impacts of Global Financial Crises on Stock Volatility of Nigeria Banks
Authors: Maruf Ariyo Raheem, Patrick Oseloka Ezepue
Abstract: This research aimed at determining the most appropriate heteroskedastic model for predicting the volatility of 10 major Nigerian banks: Access, United Bank for Africa (UBA), Guaranty Trust, Skye, Diamond, Fidelity, Sterling, Union, ETI and Zenith banks, using daily closing stock prices of each of the banks from 2004 to 2014. The models employed include ARCH (1), GARCH (1, 1), EGARCH (1, 1) and TARCH (1, 1). The results show that all the banks' returns are highly leptokurtic, significantly skewed and thus non-normal across the four periods, except for Fidelity bank during the financial crisis; these findings are similar to those of other global markets. There is also strong evidence for the presence of heteroscedasticity, and volatility persistence during the crisis is higher than before the crisis across the 10 banks, with that of UBA taking the lead, about 11 times higher during the crisis. Findings further revealed that asymmetric GARCH models became dominant especially during the financial crisis and after it, when the second round of reforms was introduced into the banking industry by the Central Bank of Nigeria (CBN). Generally, one could say that Nigerian banks' returns are volatility persistent during and after the crisis, and characterised by leverage effects of negative and positive shocks during these periods.
Keywords: global financial crisis, leverage effect, persistence, volatility clustering
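For readers unfamiliar with the models listed in the abstract above, the GARCH(1,1) and threshold (TARCH/GJR) conditional variance equations take the standard forms (our addition, not from the paper):

```latex
\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,
\qquad
\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2
           + \gamma\,\varepsilon_{t-1}^2\,\mathbf{1}\{\varepsilon_{t-1}<0\}
           + \beta\,\sigma_{t-1}^2,
```

where the γ term captures the leverage effect of negative shocks.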
2188 The Impact of the Enron Scandal on the Reputation of Corporate Social Responsibility Rating Agencies
Authors: Jaballah Jamil
Abstract: KLD (Peter Kinder, Steve Lydenberg and Amy Domini) Research & Analytics is an independent intermediary of social performance information that adopts an investor-pay model. The KLD rating agency does not explicitly monitor the rated firm, which suggests that KLD ratings may not include private information. Moreover, the inability of KLD to predict accurately the extra-financial rating of Enron casts doubt on the reliability of KLD ratings. Therefore, we first investigate whether KLD ratings affect investors' perception by studying the effect of KLD rating changes on firms' financial performance. Second, we study the impact of the Enron scandal on investors' perception of KLD rating changes by comparing the effect of KLD rating changes on firms' financial performance before and after the failure of Enron. We propose an empirical study that relates a number of equally-weighted portfolio returns, excess stock returns and the book-to-market ratio to different dimensions of KLD social responsibility ratings. We first find that over the last two decades KLD rating changes significantly and negatively influence the stock returns and book-to-market ratio of rated firms. This finding suggests that a rise in a corporate social responsibility rating lowers the firm's risk. Second, to assess the Enron scandal's effect on the perception of KLD ratings, we compare the effect of KLD rating changes before and after the Enron scandal. We find that after the Enron scandal this significant effect disappears. This finding supports the view that the Enron scandal annihilated KLD's effect on socially responsible investors. Therefore, our findings may question the results of recent studies that use KLD ratings as a proxy for corporate social responsibility behavior.
Keywords: KLD social rating agency, investors' perception, investment decision, financial performance
2187 Financial Markets Performance: From COVID-19 Crisis to Hopes of Recovery with the Containment Polices
Authors: Engy Eissa, Dina M. Yousri
Abstract: COVID-19 has massively hit the world economy, financial markets and even societies' livelihoods. The infectious disease caused by the most recently discovered coronavirus was held responsible for a 4.4% contraction of the global economy in 2020. Shortly after the first case in Wuhan was identified, a quick surge in the number of confirmed cases in China was evident, and a vast spread worldwide was recorded, with cases surpassing 500,000. Irrespective of the disease's trajectory in each country, immediate action and prompt government intervention were needed. Given that there is no one-size-fits-all approach across the world, a range of containment and adaptation policies was embraced, from enforcing complete lockdowns, as in China, to even stricter policies targeted at containing the spread of the virus, augmenting the efficiency of health systems, and controlling the economic outcomes arising from this crisis. Hence, this paper has three aims: first, to examine the impact of containment policies taken by governments on controlling the number of cases and deaths in the given countries; second, to assess the ramifications of COVID-19 on financial markets measured by stock returns; third, to study the impact of containment policies, measured by the government response index, the stringency index, the containment health index, and the economic support index, on financial market performance. We use a sample of daily data covering the period from 31 January 2020 to 15 April 2021 for the 10 countries hit hardest by COVID-19 in wave one, namely Brazil, India, Turkey, Russia, UK, USA, France, Germany, Spain, and Italy. The aforementioned relationships were tested using panel VAR regression. The preliminary results showed that the number of daily deaths had an impact on stock returns; moreover, the health containment policies and the economic support provided by governments had a significant effect in lowering the impact of COVID-19 on stock returns.
Keywords: COVID-19, government policies, stock returns, VAR
2186 Asymmetric Relation between Earnings and Returns
Authors: Seungmin Chee
Abstract: This paper investigates which of two arguments, conservatism or the liquidation option, is the true underlying driver of the asymmetric slope coefficient result regarding the association between earnings and returns. The analysis of the relation between earnings and returns in four mutually exclusive settings, segmented by 'profits vs. losses' and 'positive returns vs. negative returns', suggests that the liquidation option rather than conservatism is likely to cause the asymmetric slope coefficient result. Furthermore, this paper documents the temporal changes between the Basu period (1963-1990) and the post-Basu period (1990-2005). Although no significant change in the degree of conservatism or the value relevance of losses is reported, a stronger negative relation between losses and positive returns is observed in the post-Basu period. Separate regression analysis of each quintile based on the rankings of the price-to-sales ratio and the book-to-market ratio suggests that the strong negative relation is driven by growth firms.
Keywords: conservatism, earnings, liquidation option, returns
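One common specification of the asymmetric-timeliness regression underlying this literature (our sketch; the paper's exact specification may differ) regresses earnings on returns with a negative-return dummy D:

```latex
\frac{E_{it}}{P_{it-1}} = \beta_0 + \beta_1 D_{it} + \beta_2 R_{it} + \beta_3 D_{it} R_{it} + \varepsilon_{it},
\qquad D_{it} = \mathbf{1}\{R_{it} < 0\},
```

where an asymmetric slope appears as a nonzero β₃.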
2185 Small Entrepreneurs as Creators of Chaos: Increasing Returns Requires Scaling
Authors: M. B. Neace, Xin Gao
Abstract: Small entrepreneurs are ubiquitous. Regardless of location, their success depends on several behavioral characteristics and several market conditions. In this concept paper, we extend this paradigm to include elements from the science of chaos. Our observations, research findings, literature search and intuition lead us to the proposition that all entrepreneurs seek increasing returns, as did the many small entrepreneurs we have interviewed over the years. There will be a few whose initial perturbations may create tsunami-like waves of increasing returns over time, resulting in very large market consequences – the butterfly impact. When small entrepreneurs perturb the marketplace and their initial efforts take root, a series of phase-space transitions begins to occur. They sustain the stream of increasing returns by scaling up. Chaos theory contributes to our understanding of this phenomenon. Sustaining and nourishing the increasing returns of small entrepreneurs as complex adaptive systems requires scaling. In this paper we focus on the most critical element of the small entrepreneur scaling process – the mindset of the owner-operator.
Keywords: entrepreneur, increasing returns, scaling, chaos
2184 Evaluation of Merger Premium and Firm Performance in Europe
Authors: Matthias Nnadi
Abstract: This paper investigates the relationship between premiums and returns in the short and long terms in European merger and acquisition (M&A) deals. The study employs the Calendar Time Portfolio (CTP) model and finds strong evidence that, in the long run, premiums have a positive impact on performance; we also establish evidence of a significant difference between the abnormal returns of the high premium paying portfolio and the low premium paying ones. Even in cases where all sub-portfolios show negative abnormal returns, the high premium category still outperforms the low premium category. Our findings have implications for companies engaging in acquisitions.
Keywords: mergers, premium, performance, returns, acquisitions
2183 Relationship between Independence Directors and Performance of Firms During Financial Crisis
Authors: Gladie Lui
Abstract: The global credit crisis of 2008 aroused renewed interest in the effectiveness of corporate governance mechanisms to safeguard investor interests. In this paper, we measure the effect of the crisis from 2008 to 2009 on the stock performance of 976 Hong Kong-listed companies and examine its link to corporate governance mechanisms. It is evident that the crisis and the economic downturn affected industries differently. Empirical results show that firms with an independent board and a high concentration of ownership and management ownership had lower abnormal stock returns but lower price volatility during the global financial crisis. These results highlight that no single corporate governance mechanism is fit for all types of financial crises and time frames. To strengthen investors' confidence in the ability of companies to deal with such swift financial catastrophes, companies should enhance the dynamism and responsiveness of their governance mechanisms in times of turbulence.
Keywords: board of directors, capital market, corporate governance, financial crisis
2182 Performance of Shariah-Based Investment: Evidence from Pakistani Listed Firms
Authors: Mohsin Sadaqat, Hilal Anwar Butt
Abstract: Following the stock selection guidelines provided by the Sharia Board (SB), we segregate the firms listed on the Pakistan Stock Exchange (PSX) into Sharia Compliant (SC) and Non-Sharia Compliant (NSC) stocks. Subsequently, we form portfolios within each group based on market capitalization and volatility. The purpose is to analyze and compare the performance of these two groups, as SC stocks have fewer diversification opportunities due to SB restrictions. Using data ranging from January 2004 until June 2016, our results indicate that in most cases the risk-adjusted returns (alphas) for the return differential between SC and NSC firms are positive. In addition, SC firms, in comparison to their counterparts on the PSX, provide excess returns that are hedged against market, size, and value-based systematic risk factors. Overall, these results reconcile with the prevailing notion that SC stocks, which have lower financial leverage and higher investment in real assets, are less exposed to market-based risks. Further, SC firms that are more capitalized and less volatile perform better than less capitalized and more volatile SC and NSC firms. To sum up, we do not find any substantial evidence of opportunity loss due to limited diversification opportunities in the case of SC firms. To optimally utilize scarce resources, investors should consider SC firms as candidates in portfolio construction.
Keywords: diversification, performance, sharia compliant stocks, risk adjusted returns
2181 Performance Effects of Demergers in India
Authors: Pavak Vyas, Hiral Vyas
Abstract: Spin-offs, commonly known as demergers in India, represent the dismantling of conglomerates, a common phenomenon in financial markets across the world. Demergers are carried out with different motives. A demerger generally refers to a corporate restructuring where a large company divests its stake in its subsidiary and distributes the shares of the subsidiary (the demerged entity) to the existing shareholders without any consideration. Demergers in Indian companies are an over decade-old phenomenon, with many companies opting for them. This study examines the demerger regulations in Indian capital markets and the announcement-period price reaction of demergers during the years 2010-2015. We study a total of 97 demerger announcements by companies listed in India and try to establish that demergers result in abnormal returns for the shareholders of the parent company. Using event study methodology, we analyze the security price performance from 10 days prior to the demerger announcement to 10 days after it. We find significant out-performance of the security over the benchmark index after demerger announcements. The cumulative average abnormal returns range from 3.71% on the day of announcement of a private demerger to 2.08% over the 10 days surrounding the announcement, and from 5.67% on the day of announcement of a public demerger to 4.15% over the 10 days surrounding the announcement.
Keywords: demergers, event study, spin offs, stock returns
2180 Risk Management of Natural Disasters on Insurance Stock Market
Authors: Tarah Bouaricha
Abstract: The impact of the worst natural disasters that happened between 2010 and 2014, in terms of insured losses, on the S&P insurance index is analysed. Event study analysis is used to test whether natural disasters impact the insurance index stock market price. There is no negative impact on the insurance stock market price around the disaster events. To analyse the reaction of the insurance stock market, normal returns (NR), abnormal returns (AR), cumulative abnormal returns (CAR), cumulative average abnormal returns (CAAR) and a parametric test on AR and on CAR are used.
Keywords: study event, natural disasters, insurance, reinsurance, stock market
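The event-study quantities named in the abstract above are conventionally defined as (standard definitions, added for reference):

```latex
AR_{it} = R_{it} - E[R_{it}], \qquad
CAR_i(t_1,t_2) = \sum_{t=t_1}^{t_2} AR_{it}, \qquad
CAAR(t_1,t_2) = \frac{1}{N}\sum_{i=1}^{N} CAR_i(t_1,t_2),
```

where E[R_it] is the normal return predicted by a benchmark such as the market model.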
2179 Stock Market Integration of Emerging Markets around the Global Financial Crisis: Trends and Explanatory Factors
Authors: Najlae Bendou, Jean-Jacques Lilti, Khalid Elbadraoui
Abstract:In this paper, we examine stock market integration of emerging markets around the global financial turmoil of 2007-2008. Following Pukthuanthong and Roll (2009), we measure the integration of 46 emerging countries using the adjusted R-square from the regression of each country's daily index returns on global factors extracted from the covariance matrix computed using dollar-denominated daily index returns of 17 developed countries. Our sample surrounds the global financial crisis and ranges between 2000 and 2018. We analyze results using four cohorts of emerging countries: East Asia & Pacific and South Asia, Europe & Central Asia, Latin America & Caribbean, Middle East & Africa. We find that the level of integration of emerging countries increases at the commencement of the crisis and during the booming phase of the business cycles. It reaches a maximum point in the middle of the crisis and then tends to revert to its pre-crisis level. This pattern tends to be common among the four geographic zones investigated in this study. Finally, we investigate the determinants of stock market integration of emerging countries in our sample using panel regressions. Our results suggest that the degree of stock market integration of these countries should be put into perspective by some macro-economic factors, such as the size of the equity market, school enrollment rate, international liquidity level, stocks traded volume, tax revenue level, imports and exports volumes.
Keywords: correlations, determinants of integration, diversification, emerging markets, financial crisis, integration, markets co-movement, panel regressions, r-square, stock markets
2178 Hedging and Corporate Governance: Lessons from the Financial Crisis
Authors: Rodrigo Zeidan
Abstract: The paper identifies failures of decision making and corporate governance that allowed non-financial companies around the world to develop hedging strategies that led to hefty losses in the aftermath of the financial crisis. The sample comprises 346 companies from 10 international markets, of which 49 companies (and a subsample of 13 distressed companies) lost a combined US$18.9 billion. An event study shows that most companies that present losses in derivatives experience negative abnormal returns, including a number of companies in which the effect is persistent after a year. The results of a probit model indicate that the lack of a formal hedging policy, the lack of monitoring of CFOs, and considerations of hubris and remuneration contribute to the mismanagement of hedging policies.
Keywords: risk management, hedging, derivatives, monitoring, corporate governance structure, event study, hubris
2177 On the Influence of the Covid-19 Pandemic on Tunisian Stock Market: By Sector Analysis
Authors: Nadia Sghaier
Abstract: In this paper, we examine the influence of the COVID-19 pandemic on the performance of the Tunisian stock market and 12 sectors over a recent period from 23 March 2020 to 18 August 2021, including several waves and the introduction of vaccination. The empirical study is conducted using cointegration techniques which allow for long and short-run relationships. The obtained results indicate that daily growth in both confirmed cases and deaths has a negative and significant effect on stock market returns. In particular, this effect differs across sectors. It seems more pronounced in the financial, consumer goods and industrial sectors. These findings have important implications for investors to predict the behavior of stock market or sector returns and to implement hedging strategies during the COVID-19 pandemic.
Keywords: Tunisian stock market, sectors, COVID-19 pandemic, cointegration techniques
2176 Application of Generalized Autoregressive Score Model to Stock Returns
Authors: Katleho Daniel Makatjane, Diteboho Lawrence Xaba, Ntebogang Dinah Moroke
Abstract: The current study investigates the behaviour of time-varying parameters that are based on the score function of the predictive model density at time t. The mechanism for updating the parameters over time is the scaled score of the likelihood function. The results revealed high persistence in the time-varying parameters, as the location parameter is higher, and the skewness parameter implied a departure of the scale parameter from normality, with the unconditional parameter at 1.5. The results also revealed persistence of leptokurtic behaviour in stock returns, which implies the returns are heavy-tailed. Prior to model estimation, the White Neural Network test showed that the stock price can be modelled by a GAS model. Finally, we proposed further research, specifically to model the time-varying parameters with a more detailed model that accounts for the heavy-tailed distribution of the series and computes the risk measure associated with the returns.
Keywords: generalized autoregressive score model, South Africa, stock returns, time-varying
2175 The Effect of Behavioral and Risk Factors of Investment Growth on Stock Returns
Authors: Majid Lotfi Ghahroud, Seyed Jalal Tabatabaei, Ebrahim Karami, AmirArsalan Ghergherechi, Amir Ali Saeidi
Abstract: In this study, the relationship between the investment growth and stock returns of companies listed on the Tehran Stock Exchange is discussed, along with whether that relationship is driven by behavioral or risk factors. Generally, there are two perspectives: the risk-based approach and the behavioral approach. According to the risk-based approach, as investment increases, systematic risk and consequently stock returns are reduced. According to the second approach, excessive optimism or pessimism leads investors to price stocks with high past investment growth above their intrinsic value and stocks with lower investment growth below their intrinsic value. The investigation period is eight years, from 2007 to 2014. The sample consisted of all companies listed on the Tehran Stock Exchange. The method is a portfolio test, and the analysis is based on the Student's t-test. The results indicate that there is a negative relationship between the investment growth and stock returns of companies, and this negative correlation is stronger for firms with higher cash flow. Also, the negative relationship between asset growth and stock returns is due to behavioral factors. | https://publications.waset.org/abstracts/search?q=financial%20returns |
So, what are black holes? When I was younger black holes scared me so much, I swore one was going to destroy our world.
My irrational fear of them grew once I learned that there were black holes that traveled throughout the universe, swallowing up galaxies and planets. Who wouldn’t be afraid of such darkness and power?
Well, my fear of black holes subsided as I grew older, and my fascination with them only rose. They are among the most potent objects in the known universe, and now we have our first-ever picture of a black hole, and it is fascinating, intriguing, and awe-inspiring. So how did we get here, and what exactly is a black hole?
First, black holes reflect no light; a black hole is a region of blackness, with gravity so powerful that it distorts time and space. Most black holes are born from a massive star that has died. Once formed, they start absorbing mass from their surroundings.
By incorporating other stars and merging with other black holes, some grow into supermassive black holes. Many of these massive black holes live at the centers of galaxies; our own galaxy's central black hole is called Sagittarius A*.
Three Known Types Of Black Holes
There are three known types of black holes – stellar, supermassive, and miniature – depending on their mass.
A stellar black hole is created when a star collapses upon itself, creating gravitational forces so powerful that no light escapes it. Supermassive black holes have masses equivalent to millions or even billions of suns; most if not all exist at the centers of galaxies, including the Milky Way.
As of now, we don't know precisely how supermassive black holes form, but it's likely that they're a byproduct of galaxy formation, making them one of the most powerful and destructive forces in our known universe. Lastly, we have miniature black holes, which may have formed when the universe began 13.7 billion years ago.
No one has ever captured an image of a miniature black hole, which would have a mass equal to or less than our sun's, but it is believed they formed right after the big bang from the rapid expansion.
First Captured Images
The image is a result of work carried out over ten years by the Event Horizon Telescope (EHT) Collaboration. Radio telescopes from around the world were used to capture the image.
Telescopes focused on a pair of supermassive black holes – the one at the center of the Milky Way galaxy, known as Sagittarius A*, and a second that lies at the heart of an elliptical galaxy called M87.
For instance, the black hole Sagittarius A*, at the center of the Milky Way, is about 4.3 million times the mass of our sun, while the black hole at the heart of the M87 galaxy, which it has now released an image of, is about 6 billion solar masses.
The bright orange ring captured in the image surrounds the black hole's shadow, the dark region bounded by the event horizon. Capturing the first-ever photos of a black hole is a monumental feat of human engineering and marvel. It is mind-boggling that scientists were able to capture an image of a black hole from a distance of approximately 55 million light-years away.
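For reference (this formula is not from the original article, but it is standard physics), the radius of the event horizon of a non-rotating black hole, the Schwarzschild radius, grows linearly with mass:

```latex
r_s = \frac{2GM}{c^2}
```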
So why didn't scientists photograph Sagittarius A*? We are not in the right position to capture a photograph of our own black hole. The Milky Way is around 150,000 to 200,000 light-years across.
The Milky Way is shaped like a spiral, with arms filled with hundreds of billions of stars.
There is too much stuff in the way including stars, planets, gas, and dust to capture our own black hole. Radio telescopes are capable of carving through a lot of the cloudy rubble and light that hides our view of Sagittarius A*.
Another factor in not being able to capture a clear image of our own black hole is how much its signal changes, and how rapidly those changes occur.
The Event Horizon Telescope (EHT) managed to capture the next best thing: M87*, the supermassive black hole at the center of the galaxy Messier 87, which proved to be the perfect first candidate for observation due to its enormous size and consistency.
What Now and What Does This All Mean?
The pictures of M87 will usher in a new era of astronomy. It’s an achievement that took supercomputers, eight telescopes stationed on five continents, hundreds of researchers, and vast amounts of data to accomplish.
We are the first humans ever to see a black hole, and the results helped to confirm (again) Einstein's theory of general relativity. M87's black hole is so enormous that its event horizon is larger than our entire solar system, just to give you a reference for its scale.
Beyond capturing an image of a black hole and verifying Einstein's theory of relativity, researchers also wanted to understand more about how black holes grow and what makes material orbiting a black hole eventually fall in, and to get a better idea of why and how supermassive black holes at the centers of some galaxies, like the elliptical galaxy M87, seem to propel massive streams of subatomic particles out of the galaxy and into the broader universe.
Hopefully, in time, scientists will be able to answer more questions about black holes; so far, Sagittarius A* and M87* offer two very different examples to study.
For me, black holes may represent a one-way exit out of our universe, or they could be the glue that keeps our galaxies together. Whatever you think black holes are and whatever they do, scientists are a step closer to unlocking one of the universe's most exotic and destructive forces. | https://theocdnerd.news.blog/2019/09/11/black-holes-are-everything/?shared=email&msg=fail |
Black holes have been fascinating the scientific world for a really long time, and now, more and more details on them are surfacing, baffling experts.
Astronomers who want to learn about the mechanisms that formed black holes in the early history of the Universe gained essential clues with the discovery of 13 such black holes in dwarf galaxies, which are less than a billion light-years from Earth.
These galaxies are more than 100 times less massive than our beloved Milky Way. In other words, they are the smallest galaxies that are known to host huge black holes.
Experts expected the black holes in such small galaxies to have about 400,000 times the mass of our Sun.
“We hope that studying them and their galaxies will give us insights into how similar black holes in the early Universe formed and then grew, through galactic mergers over billions of years, producing the supermassive black holes we see in larger galaxies today, with masses of many millions or billions of times that of the Sun,” according to Amy Reines of Montana State University.
Reines and her colleagues used the National Science Foundation’s Karl G. Jansky Very Large Array (VLA) to make the discovery.
It’s also worth noting that, according to SciTechDaily, she and her team used the VLA in order to discover the very first massive black hole in a dwarf starburst galaxy back in 2011. This was a surprise to astronomers, according to the same online publication.
13 dwarf galaxies have evidence of massive black holes
They continued and quoted Reines saying the following:
“The new VLA observations revealed that 13 of these galaxies have strong evidence for a massive black hole that is actively consuming surrounding material.”
Reines continued and explained that “We were very surprised to find that, in roughly half of those 13 galaxies, the black hole is not at the center of the galaxy, unlike the case in larger galaxies.”
You should check out the original article in order to find out more details.
A couple of days ago, a roundup of the most important black hole discoveries of the past year was published. | https://www.dualdove.com/enormous-black-holes-are-lurking-in-dwarf-galaxies/6174/ |
BACKGROUND OF THE INVENTION
SUMMARY OF THE INVENTION
DETAILED DESCRIPTION OF THE INVENTION
Other Embodiments of the Invention
The present invention relates to a recorder for universal digital caption broadcast capable of receiving video signals, audio signals, and caption data signals from a digital caption broadcast that is in a foreign language, as well as Japanese. The recorder for universal digital caption broadcast variably adjusts the timings for video display and sound generation, and generates sound and displays video with caption data. In addition, the recorder for universal digital caption broadcast is capable of recording and storing only caption data on a universal serial bus (USB) flash drive or the like; enables setting of text size, text color, and display method (such as units of pages, number of displayed lines, and scrolling) on a monitor of a television or the like; and is capable of connecting to a printer, a Braille machine, and the like.
Conventionally, when caption data is generated from live video and audio, such as a real-time caption broadcast, the captions are displayed with a delay in relation to the video, causing viewers to feel uncomfortable with caption broadcasts.
To resolve shortcomings such as this, a caption broadcast system is being considered in which time lag between video and audio, and captions is reduced.
However, a caption broadcast system such as this requires changes to the conventional caption broadcast system. These changes are laborious and costly. Therefore, because large amounts of money cannot be spent on caption broadcasts that are unlikely to generate income, the reality is that such systems are not put to practical use. Other methods are limited to those that match timings, at best. To ensure information accessibility for the deaf, an information accessibility system having both multifunctional and multilayered features is an absolute requisite. It should also be noted that this type of system will not be put to widespread practical use unless it has universal design features useful for not only the deaf but also the general public.
[Patent Literature 1] Japanese Patent Laid-open Publication No. 2010-81141
SUMMARY OF THE INVENTION

The present invention has been achieved in light of the shortcomings of the past, such as that described above. An object of the present invention is to provide a recorder for digital caption broadcast capable of receiving video signals, audio signals, and caption data signals from a conventional caption broadcast, and displaying captions as is, without modulation, on a viewer monitor. The recorder for digital caption broadcast is additionally capable of modulated viewing display from level 0 to level 31 in units of seconds to match the timing of caption display, thereby enabling viewer-friendly operations and viewing in an optimal state.
Another object of the present invention is to provide a recorder for digital caption broadcast capable of recording and storing caption data, and reproducing and displaying the caption data on a viewer monitor as required, and enabling printout using a USB flash drive or the like.
The above-described objects of the present invention, as well as other objects and novel features of the present invention, will become more completely clear when the following description is read with reference to the accompanying drawings.
However, the drawings are provided mainly for descriptions and do not limit the technical scope of the present invention.
To achieve the above-described objects, the present invention is a recorder for digital caption broadcast composed of: a video signal receiver, an audio signal receiver, and a caption data signal receiver that respectively receive video signals, audio signals, and caption data signals from a digital caption broadcast; a video signal transmitter, an audio signal transmitter, and a caption data signal transmitter that respectively receive signals from the video signal receiver, the audio signal receiver, and the caption data signal receiver and transmit the received signals to a viewer monitor; and a variable display adjustor provided between the video signal receiver and audio signal receiver, and the video signal transmitter and audio signal transmitter, the variable display adjustor being capable of adjusting timings of video display and sound generation to match caption display on the viewer monitor.
In addition, the recorder for digital caption broadcast is configured such that the caption data signal receiver is provided with a reproduction display device that records and stores caption data, and reproduces and displays the caption data on the viewer monitor.
Furthermore, the present invention is a recorder for digital caption broadcast composed of: a video signal receiver, an audio signal receiver, and a caption data signal receiver that respectively receive video signals, audio signals, and caption data signals from a broadcast, via a live-caption/recorded-caption identifying device that identifies whether a broadcast from a digital caption broadcast is a live caption broadcast or a recorded caption broadcast; a video signal transmitter, an audio signal transmitter, and a caption data signal transmitter that respectively receive signals from the video signal receiver, the audio signal receiver, and the caption data signal receiver and transmit the received signals to a viewer monitor; and a variable display adjustor provided between the video signal receiver and audio signal receiver, and the video signal transmitter and audio signal transmitter in which, when the live-caption/recorded-caption identifying device identifies the broadcast to be a live caption broadcast, the variable display adjustor enters an ON state in which timings of video display and sound generation can be adjusted to match the caption display on the viewer monitor and, when the live-caption/recorded-caption identifying device identifies the broadcast to be a recorded caption broadcast, the variable display adjustor enters an OFF state in which the signals received by the video signal receiver and the audio signal receiver are transmitted to the viewer monitor via the video signal transmitter and the audio signal transmitter without the timings being adjusted.
As is clear from the descriptions above, the present invention achieves the following effects.
(1) According to a first set of features of the invention, video signals, audio signals, and caption data signals from a conventional digital caption broadcast can be received, and captions can be displayed as is, without modulation, on a viewer monitor. In addition, video display and sound generation that have been variably adjusted to match the timing of the caption display can be performed.
Therefore, viewers can view even conventional digital caption broadcasts in an optimal state.
(2) Also as a result of (1), described above, equipment on the broadcasting station-side is not required to be changed. Therefore, caption broadcasts can be performed as in the past.
(3) As a result of (1), the recorder for digital caption broadcast can be used such as to be connected in a manner similar to a conventional recording and reproducing device that connects to caption broadcasts and a viewer monitor. Therefore, the recorder for digital caption broadcast can be easily used.
(4) As a further result of (1), text data is recorded using the caption data signals. Therefore, an enormous archive of television programs can be easily and accurately searched by text search. In addition, during caption broadcasts such as broadcasts of Diet sessions, for example, shorthand recordings can be easily created in real-time by the text data being recorded, in other words, using a “text recording feature”. Therefore, promptness, publication, cost reduction, and the like of session minutes can be improved at once, thereby enhancing sessions.
(5) According to a second set of features of the invention as well, effects similar to those in (1) to (4), described above, can be achieved.
(6) According to a third set of features of the invention as well, effects similar to those described above in (1) to (4) can be achieved. In addition, caption data can be reproduced on a viewer monitor as required.
(7) According to a fourth set of features of the invention as well, effects similar to those described above in (1) to (4) can be achieved. In addition, caption display, video display, and sound generation can be optimally performed in a viewer's monitor even for live caption broadcasts and recorded caption broadcasts.
(8) According to a fifth set of features of the invention as well, effects similar to those described above in (1) to (4) and (7) can be achieved.
DETAILED DESCRIPTION OF THE INVENTION

The present invention will hereinafter be described in detail according to the embodiments of the present invention shown in the drawings.
According to a first embodiment of the present invention shown in FIG. 1 to FIG. 3, reference number 1 represents a recorder for digital caption broadcast of the present invention. The recorder 1 for digital caption broadcast is capable of receiving video signals, audio signals, and caption data signals of a digital caption broadcast 2 and displaying captions as is, without modulation, on a viewer monitor 3. The recorder 1 for digital caption broadcast is also capable of performing video display and sound generation that have been variably adjusted to match the timing of the caption display. The recorder 1 for digital caption broadcast is composed of: a video signal receiver 4, an audio signal receiver 5, and a caption data signal receiver 6 that respectively receive video signals, audio signals, and caption data signals of the digital caption broadcast 2 by radio waves or over a cable; a video signal transmitter 7, an audio signal transmitter 8, and a caption data signal transmitter 9 that respectively receive signals from the video signal receiver 4, the audio signal receiver 5, and the caption data signal receiver 6 and transmit the received signals to the viewer monitor 3; and a variable display adjustor 11 provided between the video signal receiver 4 and audio signal receiver 5, and the video signal transmitter 7 and audio signal transmitter 8. The variable display adjustor 11 is capable of variably adjusting the timings for video display and sound generation to match the caption display that is displayed as is, without modulation, on the viewer monitor 3. The timings can be adjusted between level 0 and level 31 (0 to 31 seconds) by, for example, a remote controller 10.
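To make the role of the variable display adjustor concrete, here is a minimal illustrative sketch in Python. It is not part of the patent disclosure; the class name and the buffering approach are assumptions chosen only to show how holding video/audio frames for a viewer-selected 0 to 31 seconds could line the picture and sound up with late-arriving captions.

```python
import collections
import time

class VariableDisplayAdjustor:
    """Illustrative sketch only: holds video/audio frames for a
    viewer-selected number of seconds (level 0 to level 31) so that
    picture and sound line up with late-arriving captions."""

    def __init__(self, delay_seconds=0):
        self._buffer = collections.deque()  # (release_time, frame) pairs
        self.set_delay(delay_seconds)

    def set_delay(self, delay_seconds):
        # The patent describes adjustment levels of 0 to 31 seconds.
        self.delay = max(0, min(31, int(delay_seconds)))

    def push(self, frame):
        # Hold each incoming frame until its release time.
        self._buffer.append((time.monotonic() + self.delay, frame))

    def pop_ready(self):
        # Emit, in arrival order, every frame whose delay has elapsed.
        ready = []
        now = time.monotonic()
        while self._buffer and self._buffer[0][0] <= now:
            ready.append(self._buffer.popleft()[1])
        return ready
```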
The caption data signal receiver 6 is configured to record and store caption data such as to include date and time by a USB flash drive 13 being inserted into a USB terminal 12. The caption data recorded and stored such as to include date and time is in caption text file format and can be displayed on a television screen that is a viewer monitor.
Text size for reproduction display reproduced on the television screen can be variably set to large, medium, or small. Text color can be variably set to white, yellow, light blue, or green. Screen color can be variably set to black or blue. Display switching can be variably set to units of pages, one to eight lines, scrolling, or the like.
According to the first embodiment, the full text of the caption broadcast of a designated program is automatically recorded if the channel is set, even for programs not directly viewable on the television. In addition, the automatically recorded text data enables the broadcast content to be acquired in a personalized manner, such as “speed-reading”, “re-reading”, or “skimming”, regardless of the broadcast time.
In addition, the viewer monitor 3 can be a household television. A “television reader function” can be added to change the household television from a “viewing-only television” to an “advanced multifunctional television”.
Furthermore, when the USB flash drive 13 is inserted into a personal computer, text output can be performed by a printer. In addition, when the USB flash drive 13 is directly connected to a printer including a USB port, the text can be printed out instantly, even when a personal computer is not available.
In addition, viewers wish to have, in printed form, many of the various programs that are caption-broadcast. The recorder 1 for digital caption broadcast enables viewers to instantly print out the programs for personal use.
Text data on the USB flash drive 13 generated by a personal computer or the like can be directly displayed as text on the television screen when the USB flash drive 13 is inserted into the USB terminal 12.
Moreover, diversely expanded use of the recorder 1 for digital caption broadcast can be considered, such as connection to a peripheral Braille machine, devices for social care purposes, and the like, by taking advantage of the USB terminal 12.
Other Embodiments of the Invention

Next, other embodiments of the present invention shown in FIG. 4 to FIG. 9 will be described. In the description of the other embodiments of the present invention, constituent sections that are the same as those according to the first embodiment are given the same reference numbers. Repetitive descriptions are omitted.
According to a second embodiment of the present invention shown in FIG. 4 to FIG. 6, the invention mainly differs from that according to the first embodiment in that sound generation, video display, and caption display in the viewer monitor 3 can be performed through a video recorder 14. The video recorder 14 receives signals from the video signal transmitter 7, the audio signal transmitter 8, and the caption data signal transmitter 9, and records and stores the signals therein such as to include date and time. A recorder 1A for digital caption broadcast such as this can also achieve operational effects similar to those according to the first embodiment of the present invention. In addition, the recorder 1A for digital caption broadcast can reproduce digital caption broadcasts, as well as only captions, using the video recorder 14. Furthermore, the recorder 1A for digital caption broadcast can retrieve video by retrieving caption data from the data recorded and stored such as to include date and time using the video recorder 14.
According to a third embodiment of the present invention shown in FIG. 7 to FIG. 9, the invention mainly differs from that according to the first embodiment in that a live-caption/recorded-caption identifying device 15 and a variable display adjustor 11A are provided. The live-caption/recorded-caption identifying device 15 identifies whether a broadcast from the digital caption broadcast 2 is a live caption broadcast or a recorded caption broadcast. When the live-caption/recorded-caption identifying device 15 identifies the broadcast to be a live caption broadcast, the variable display adjustor 11A enters an ON state in which the timings of video display and sound generation can be adjusted to match the caption display on the viewer monitor 3. When the live-caption/recorded-caption identifying device 15 identifies the broadcast to be a recorded caption broadcast, the variable display adjustor 11A enters an OFF state in which the signals received by the video signal receiver 4 and the audio signal receiver 5 are transmitted to the viewer monitor 3 via the video signal transmitter 7 and the audio signal transmitter 8 without the timings being adjusted. A recorder 1B for digital caption broadcast such as this can also achieve operational effects similar to those according to the first embodiment of the present invention. In addition, when the broadcast from the digital caption broadcast 2 is a live caption broadcast, video display and sound generation of which the timings have been automatically adjusted by the variable display adjustor 11A to match the caption display are performed in the viewer monitor 3. When the broadcast is a recorded caption broadcast, sound is generated and the video display is displayed on the viewer monitor 3 as is, without the timings being adjusted by the variable display adjustor 11A.

The live-caption/recorded-caption identifying device 15 can be that which performs identification only on the recorder 1B for digital caption broadcast-side, identifying the broadcast based on a display symbol >> displayed in front of the caption in the digital caption broadcast 2. Alternatively, the live-caption/recorded-caption identifying device 15 can be that which performs identification based on an identifier signal provided by the broadcast equipment-side. The identifier signal indicates live caption broadcast when the broadcast is a live caption broadcast.
The present invention can be used in the industry of manufacturing a recorder for digital caption broadcast capable of receiving video signals, audio signals, and caption data signals from a digital caption broadcast that is in a foreign language, as well as Japanese, and displaying captions as is, without modulation, on a viewer monitor. In addition, the recorder for digital caption broadcast is capable of performing video display and sound generation that have been variably adjusted to match the timing of the caption display.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is an explanatory diagram of a usage state according to a first embodiment of the present invention;
FIG. 2 is a block diagram according to the first embodiment of the present invention;
FIG. 3 is a front view according to the first embodiment of the present invention;
FIG. 4 is an explanatory diagram of a usage state according to a second embodiment of the present invention;
FIG. 5 is a block diagram according to the second embodiment of the present invention;
FIG. 6 is a front view according to the second embodiment of the present invention;
FIG. 7 is an explanatory diagram of a usage state according to a third embodiment of the present invention;
FIG. 8 is a block diagram according to the third embodiment of the present invention; and
FIG. 9 is a front view according to the third embodiment of the present invention.
- Art therapy supervision and consultation for professionals.
About
Jennifer Patterson is an American-trained psychotherapist and art therapist (MA, Antioch University Seattle). She holds an individual and family therapist license (LMFT, State of Washington) and is a board-certified art therapist (ATR-BC). Jennifer has extensive clinical experience in the US working with adolescents, adults, and families. She currently works with English-speaking expat and digital nomad adults and third culture kids in central Lisbon. Jennifer specializes in working with people dealing with family-of-origin concerns, anxiety, life changes, and past experiences of trauma.
Philosophy
I work with people making their way through stressful situations, anxiety, loss, life changes, family issues, and experiences of past trauma and abuse.
My style as a therapist/counselor is very interactive. We will have a conversation about what is really going on. Sometimes it might be uncomfortable, but that means we are getting somewhere. My goal is to get you thinking about YOU. I ask questions, give suggestions, and tell stories that hopefully will help you to understand why you do what you do.
I work from a culturally aware, holistic approach and take into account physical (somatic), psychological, environmental, and emotional health. I also use an eclectic blend of techniques to achieve client goals, including cognitive-behavioral (CBT), dialectical-behavioral (DBT), solution-focused, client-centered, mindfulness, and gestalt approaches.
To achieve our vision, we must be national and global leaders giving our customers confidence we are innovating to achieve outcomes for them. As a leader in South Australia, we support our local community and economy.
Our stretch Reconciliation Action Plan 2017-20 (RAP) guides our reconciliation actions, and in 2018-19 we achieved results in a number of areas detailed below and in the following report items.
Working together with remote communities we have completed a project for wastewater reuse and greening of the Amata oval to AFL standard. Upgraded infrastructure at Watinuma has been commissioned, see APY Lands upgrades for a better life for details of these upgrades.
In September 2018, the innovation space in SA Water House was named Kurlanaintyerlo, a Kaurna word meaning ‘on the crest of a wave’, with the assistance of the Kaurna community. The word and its meaning guided the design of the space, which is an adaptable area suitable for a range of activities and aims to help us challenge the status quo.
In addition to SA Water House, the Aboriginal and Torres Strait Island flags are now installed at our offices in Port Lincoln and Crystal Brook.
Tia Tuckia, Warevilla and Yarilena homelands have been supported with community infrastructure matters throughout the year, including leak detection and fixes.
At the end of June 2019, 57 per cent of our RAP actions have been delivered.
Through education and training, we worked with communities and groups across South Australia, including through the activities described below.
During National Reconciliation Week, 27 May to 3 June 2019, we hosted nine events for our people across the state aligned to the theme: grounded in truth walk together with courage. Activities included Kaurna language sessions for people based in SA Water House helping to bring Aboriginal culture to the forefront of our work.
In Mount Gambier, newly installed paintings and signage were unveiled, giving the Blue Lake its traditional Boandik name: War War. War War means ‘the sound of many crows’ and reflects the crow Dreaming of the Boandik people, who are the Traditional Owners of the Mount Gambier region.
Recognising and acknowledging the lake’s name and story is a step towards revitalising culture and reforming the community’s connection with Country.
Dual naming, and use of local Aboriginal languages, including Kaurna lessons during National Reconciliation Week, aligns with 2019 being designated the UN Year of Indigenous Languages.
A 40 metre section of the above ground water pipeline near Port Lincoln was painted with an artwork we commissioned depicting the importance of water to the Barngarla people.
Several members of the Barngarla Aboriginal community worked with local school students in early 2019 to design and create the art, which was unveiled to the wider public during National Reconciliation Week.
The painting tells the story of the goordla gawoo ngaoowiridi – fresh water cycle. It shows the strong relationship Aboriginal people have to water and their connection to the sea, the animals and plants that rely on it, and how water was and continues to be used to sustain life.
At Beetaloo Reservoir in the state’s mid-north, an artwork by Nukunu artist Jessica Turner was installed in September 2018.
Erected at the reservoir’s public lookout, the colourful artwork titled ‘Wobma’ details the cultural and spiritual relationship the Nukunu people have with land and water in the Spencer Gulf and southern Flinders Ranges.
Working together with the Burrandies Aboriginal Corporation, Boandik community artists and Elders, as well as several local school students in Mount Gambier, four panels were painted to tell the story of Craitbul, the giant Boandik ancestor. These floor to ceiling pieces are now on display inside the Blue Lake/War War pumping station.
Working with the Barngarla people (Port Lincoln to the far west coast), the Kaurna people (Adelaide), and the Boandik people (Mount Gambier), we produced digital stories about Aboriginal innovation, management and treatment of water.
By capturing important insights and knowledge about how water was used and managed by the first Australians, we can respectfully convey the understanding and practice of sustainable water management, and how it has shaped spiritual and living connection to Country and, importantly, how it can influence contemporary water management practices.
It is hoped Water Wisdom will build understanding and appreciation of the significant innovations and technologies developed and used by Aboriginal people for thousands of years. Documenting this knowledge ensures it can be made available to the wider community and future generations to keep culture and knowledge alive.
To keep the knowledge of Kaurna people alive, their knowledge about key water sites and resources on the Adelaide Plains is being gathered and captured to form a cultural water knowledge database on how water ways were used and managed.
Continuing our efforts to support the growth of South Australian Aboriginal businesses, we brought 24 Aboriginal-owned businesses together with our tier one construction partners in October 2018.
At our inaugural Aboriginal Business Forum, everyone learned more about the challenges faced by major contractors and Aboriginal businesses, and had the opportunity to establish or foster effective working relationships.
This supported our commitment to reconciliation, by bringing people together, promoting equity and working to find the best outcomes possible for Aboriginal and Torres Strait Islander people through economic development and business opportunities.
In 2018-19, our direct spend with Aboriginal owned businesses was nearly $500,000 and the indirect spend was in excess of $3 million. Contributing to this has been the Aboriginal Business Forum, greater awareness among our people about Aboriginal businesses, and updates to procurement methodology and plans to support commitments in our Reconciliation Action Plan.
In 2018-19, the installation of advanced smart water network technology expanded to four targeted locations in metropolitan and regional areas of South Australia, building on the successful trial in the Adelaide CBD.
This technology enables us to identify and proactively fix a number of fault types before they affect our customers or inconvenience commuters.
As the smart network in the CBD continues to evolve, we are further refining its operation, including how to use the data to best effect in network management. With the CBD implementation bringing benefits for our customers through network management improvements, the technology is now being expanded.
An analysis of our water network identified Athelstone, North Adelaide, Penneshaw and Port Lincoln as appropriate areas to extend our smart water network. The type of technology installed at each location differs, depending on the issue being addressed.
Athelstone has a relatively high rate of water main breaks due to some of the most reactive clay soils in Adelaide, coupled with high supply pressure as a result of the area’s topography.
To help combat this, we installed a pressure modulating control station, as well as sensors to monitor the pressure and sound activity within the network. Using data from the sensors we can use the control station to remotely measure and maintain a stable water pressure in the network at varying periods of demand through the day.
We have also installed several sensors along a large trunk main on Gorge Road in Athelstone to reduce the impact of breaks and leaks on commuters in this high traffic area.
In total, 35 pressure sensors (including 15 transient loggers), 19 flow meters, 120 acoustic leak detection sensors and two water quality sensors were fitted across the four locations.
In late 2018, people living in Penneshaw were our first residential customers to access smart technology en masse, with the installation of about 300 smart water meters at residences and businesses in the Kangaroo Island township. Flow and pressure sensors were also placed at key points in the broader local network as part of our smart water network expansion.
As with customers participating in the smart water network trial in the Adelaide central business district, customers in Penneshaw are using these smart meters to monitor their water consumption through a secure, online portal.
The smart meters send water use information to the portal every hour with customers able to opt in to receive text message or email notifications about water use or inconsistencies, on a daily, weekly or monthly basis. This interconnected system has helped customers identify leaks and other faults in their plumbing which may have resulted in high water use, had it not been detected by the smart meter.
Smart meter data is also providing us with a holistic view of Penneshaw’s water needs. The two flow and pressure sensors have the ability to help identify any network water losses and inform our operational, planning and investment decisions.
The trial in Penneshaw will continue through to August 2019 and guide wider smart network investments.
Our world-leading adoption of smart water network technology was recognised with a bronze prize at the International Water Association’s Project Innovation Awards held in Tokyo in September 2018.
The smart water network trial in Adelaide’s CBD edged out 160 entries from 45 countries in the Smart Systems and the Digital Water Economy category, cementing our position as an international leader in integrating digital and smart technologies for the benefit of customers.
The network, implemented in a $4 million trial, combines acoustic sensors, pressure and flow data, high speed transient pressure sensors, smart meters and water quality sensors to identify potential leaks and trigger intervention before leaks or breaks escalate to inconvenience customers or commuters.
Stonyfell and Gawler are the two locations in our $5 million trial of advanced smart sewer technology, which aims to reduce the incidence and impact of sewerage network faults and issues for our customers and the wider community.
In Stonyfell, our focus is on detecting sewer pipe blockages to prevent overflows either inside or outside homes, which occur in the Adelaide foothills suburb at a higher than average rate than other areas.
The smart technology complements existing ongoing sewer maintenance programs by enabling a more targeted approach. We are one of the first Australian water utilities to use the technology in a comprehensive, whole-of-suburb approach.
The sewer system in Stonyfell has been fitted with flow and level sensors, which monitor the movement of sewage in the pipes. This gives us near-real time information on where a blockage is, making it easier to despatch our crews to fix it, well before it affects our customers.
In Gawler, we have installed odour detection sensors and weather stations to better understand the behaviour of odour in this part of the network and how we can better manage the issue over time.
In total for the wastewater network, there are 88 level sensors, 88 odour detection sensors and 11 weather stations.
The combination of technology across both our water and wastewater networks, a world-leading analytics platform and the expertise of our smart network team is giving us a more detailed view of our underground systems than ever before, and helping us continually improve.
This trial was named the Best Industrial Internet of Things Project at the 2019 Internet of Things Awards, acknowledging our pioneering work in the rapidly evolving field of smart networks.
Water produced from Woolpunda Water Treatment Plant was awarded best tasting tap water in South Australia at the Water Industry Operators Association of Australia annual competition in August 2018.
Woolpunda was one of 14 samples from across the state judged by a panel of water industry experts and interested members.
The competition showcases South Australia’s drinking water, acknowledging the somewhat unsung work of water operators, and getting people talking about tap water.
Improving, refining and innovating are helping us lead the way in the water industry.
In 2018-19, our Innovation Speaker Series continued to expose us to fresh ideas and approaches, with presentations from Queensland Urban Utilities, the University of Adelaide and Uber.
People from all areas of the business also came together to develop a framework for growing our people’s ideas. Since forming in September 2018, the group has continued to uncover insights into how we create a cultural change to encourage and spread innovation.
The innovation team continued to connect and collaborate across the business on a number of projects.
A case study of our recent innovation journey was presented to industry peers at Ozwater’19, the national water industry’s annual conference.
The idea to create a zero cost energy future was conceived and shaped by our people and is a demonstration of how we are leading the way to integrate renewable energy and storage with the nation’s longest water network.
Installation of solar panels was completed at three of our Adelaide metropolitan sites: Hope Valley Water Treatment Plant, Christies Beach Wastewater Treatment Plant and Glenelg Wastewater Treatment Plant.
At Hope Valley, the system achieved savings on electricity costs of 61 per cent in June 2019, down from an average of $21,000 to $8,000.
In February 2019 we appointed Enerven as the contractor to deliver our $304 million investment in solar photovoltaic panels and energy storage. This project will see the installation of 154 megawatts of solar generation and 34 megawatt hours of storage across up to 70 of our sites.
With an energy bill of $83 million in 2018-19, this investment in more than 500,000 solar panels is expected to deliver a return in six years with the view to reducing our costs and keeping water service charges as low and stable as possible over time for our customers.
Construction work will support about 250 jobs, and includes a commitment to engage local Aboriginal-owned businesses evaluated on their competitive rates, as well as apprentice training and opportunities for the supply chain within South Australia.
Two remote-controlled boats are navigating the lagoons of our wastewater treatment plants in a novel and efficient new way to improve sludge management and minimise odour at facilities across the state.
Developed by the University of Western Australia, vessels use sonar navigation technology to remotely survey sludge build-up at the bottom of wastewater lagoons.
Fine sediment that remains suspended in the water after primary treatment stages settles at the bottom of wastewater polishing lagoons to form a sludge, which is then periodically removed to maintain the lagoons’ holding capacity and minimise the potential for odour to develop.
A sonar unit scans the bottom of the lagoon and records data to an SD memory card that is then overlaid with a Google Earth map to visually display the sludge depths.
Removing sludge is an important yet often time consuming exercise, and this new technology provides a highly efficient way to accurately survey and know when to de-sludge.
A study undertaken by our Water Expertise and Research team into high-quality organic biosolids this year confirmed a faster timeframe to eliminate pathogens, realising significant benefits to the agriculture and water industries in Australia.
Each year we collect and safely treat about 30,000 tonnes of organic biosolids from our wastewater treatment plants, and we provide it free to primary producers who use it to improve soil quality for crops such as cereals, citrus or vines.
Our research challenged the guidelines to store high-grade biosolids for three years to ensure all pathogenic microorganisms were inactive before delivering the final product. The project demonstrated a better quality biosolid product can be achieved in just one year.
Following an intensive monitoring program for both fresh and aged biosolids of up to 30 months, we detected no additional improvement in the guideline requirements of biosolids tested by extending the stockpiling period beyond 12 months.
This is a significant outcome for both farmers and water utilities across the country, with the potential to reduce costs for on-site storage and deliver a better quality biosolid product to primary producers.
Our world-leading climate change research has successfully demonstrated an ability to monitor and reduce nitrous oxide emissions by 30 per cent from Bolivar Wastewater Treatment Plant.
Setting a benchmark for treatment plants around the world, the research has also informed the United Nations’ greenhouse gas guidelines.
Trialled in partnership with the University of Queensland, the ground-breaking research collected and modelled nitrous oxide through floating ‘hoods’ anchored along activated sludge plants and pipes through a computer monitoring system, which then analysed the gas in short intervals.
The technology was created using the engineering resources and more than 20 years’ expertise of our commercial business unit Water Engineering Technologies, which designs and manufactures customised solutions for water and wastewater utilities across Australia.
The technology developed by the team allows emissions of the gas to be monitored in real time, for the very first time.
The research has been scientifically validated and published in four academic publications, putting us at the forefront of addressing a major problem commonly facing wastewater treatment across the world.
With nitrous oxide having a global warming potential 310 times greater than carbon dioxide, it is vitally important that all utilities work to reduce emissions in their operations without compromising on plant performance.
Riders and spectators at this year’s Tour Down Under could escape the extreme hot weather and keep cool thanks to unique new ’cool zones’ we created in partnership with race organisers.
Throughout this major event, hundreds of water misting jets provided refreshing blasts of cool air to people visiting the City of Adelaide Tour Village in Victoria Square/Tarntanyangga, and helped them manage the warm conditions experienced during the race.
Following the successful use of misters at the corporate hospitality facility for the Down Under Classic and Women’s Tour Down Under, they were installed at the Tour Village for everyone to enjoy.
Using water in different ways can increase the use of spaces and liveability during our hot summers and is an important way we are working to create a better life for South Australians.
The ‘cool zones’ proved popular and followed trials looking at ways to keep suburban houses and gardens cool, as well as our world-first heat mitigation trial at Adelaide Airport.
A similar misting system was installed at TreeClimb in the Adelaide Park Lands, to help keep aerial adventurers cool over the busy summer school holiday period.
In 2018-19, the Australian Water Quality Centre (AWQC), our national laboratory service, positioned itself to better meet the current and future needs of water utilities seeking its services. This included building understanding of the national water industry and its needs, and research to inform and shape service development for AWQC laboratories in Adelaide and Melbourne.
Molecular testing services were expanded in the Adelaide laboratory, while the Melbourne laboratory increased its capabilities to include sampling and a wider range of chemical testing services. To support growth in the Melbourne laboratory, new premises were identified ahead of a move planned for early 2020.
The AWQC continued to actively support the national water industry through conference exhibitions, sponsorships and presentations.
At the 2018 Water Industry Alliance Smart Water Awards, the AWQC was recognised for its world-leading molecular services winning the Innovation in Large Organisations Award. The paper by the AWQC’s Method Development Coordinator was shortlisted for best paper at Ozwater’19.
In 2018-19, the AWQC actively participated in a number of national water industry conferences including Water Industry Operators Association of Australia, Australian Water Association state conferences in South Australia, Northern Territory and Tasmania, and Ozwater’19.
Q:
How to choose a classifier after cross-validation?
When we do k-fold cross validation, should we just use the classifier that has the highest test accuracy? What is generally the best approach in getting a classifier from cross validation?
A:
You do cross-validation when you want to do either of these two things:
Model Selection
Error Estimation of a Model
Model selection can come in different scenarios:
Selecting one algorithm vs others for a particular problem/dataset
Selecting hyper-parameters of a particular algorithm for a particular problem/dataset
(Please note that if you are both selecting an algorithm - better to call it a model - and also doing a hyper-parameter search, you need to do Nested Cross-Validation. Is Nested-CV really necessary?)
Cross-validation ensures, up to some degree, that the error estimate is as close as possible to the generalization error of that model (although this is very hard to approximate). When observing the average error among folds, you can get a good projection of the expected error for a model built on the full dataset. It is also important to observe the variance of the prediction, that is, how much the error varies from fold to fold. If the variation is too high (considerably different values), then the model will tend to be unstable. Bootstrapping is another method providing a good approximation in this sense. I suggest reading carefully section 7 of the "Elements of Statistical Learning" book, freely available at: ELS-Stanford
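As a minimal sketch of this idea (the dataset and classifier below are placeholder choices, not recommendations), scikit-learn's cross_val_score returns the per-fold scores, from which both the average error estimate and the fold-to-fold variance can be read:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)      # placeholder dataset
model = RandomForestClassifier(random_state=0)  # placeholder model

# k=10 folds: each score is the accuracy on one held-out fold
scores = cross_val_score(model, X, y, cv=10)

print("mean accuracy: %.3f" % scores.mean())    # error estimate
print("std across folds: %.3f" % scores.std())  # stability check
```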
As has been mentioned before, you must not use the model built in any one of the folds. Instead, you have to rebuild the model with the full dataset (the one that was split into folds). If you have a separate test set, you can use it to try this final model, obtaining a similar (and most surely higher) error than the one obtained by CV. You should, however, rely on the estimated error given by the CV procedure.
After performing CV with different models (algorithm combinations, etc.), choose the one that performed best regarding its error and the error's variance among folds. You will then need to rebuild the model with the whole dataset. Here comes a common confusion in terms: we commonly refer to model selection, thinking that the model is the ready-to-predict model built on data, but in this case it refers to the combination of algorithm and preprocessing procedures you apply. So, to obtain the actual model you need for making predictions/classification, you need to build it using the winning combination on the whole dataset.
The last thing to note is that if you are applying any kind of preprocessing that uses the class information (feature selection, LDA dimensionality reduction, etc.), this must be performed within every fold, and not previously on the data. This is a critical aspect. You should do the same if you are applying preprocessing methods that involve direct information from the data (PCA, normalization, standardization, etc.). You can, however, apply preprocessing that does not depend on the data (deleting a variable following expert opinion, but this is kinda obvious). This video can help you in that direction: CV the right and the wrong way
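One common, hedged sketch of doing this correctly in scikit-learn is to put the preprocessing inside a pipeline, so that scaling and dimensionality reduction are refit on the training part of each fold only (the particular steps and dataset here are just placeholders):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset

# The pipeline is refit inside each training fold, so scaling and PCA
# never see the held-out fold -- avoiding the leakage described above.
pipe = make_pipeline(StandardScaler(),
                     PCA(n_components=10),
                     LogisticRegression(max_iter=1000))

scores = cross_val_score(pipe, X, y, cv=10)
print("mean accuracy: %.3f" % scores.mean())
```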
Here, a final nice explanation regarding the subject: CV and model selection
A:
No. You don't select any of the k classifiers built during k-fold cross-validation. First of all, the purpose of cross-validation is not to come up with a predictive model, but to evaluate how accurately a predictive model will perform in practice. Second of all, for the sake of argument, let's say you were to use k-fold cross-validation with k=10 to find out which one of three different classification algorithms would be the most suitable for solving a given classification problem. In that case, the data is randomly split into k parts of equal size. One of the parts is reserved for testing and the remaining k-1 parts will be used for training. The cross-validation process is repeated k (fold) times so that on every iteration a different part is used for testing. After running the cross-validation, you look at the results from each fold and wonder which classification algorithm (not any of the trained models!) is the most suitable. You don't want to choose the algorithm that has the highest test accuracy on one of the 10 iterations, because maybe it just happened randomly that the test data on that particular iteration contained very easy examples, which then led to high test accuracy. What you want to do is choose the algorithm which produced the best accuracy averaged over all k folds. Now that you have chosen the algorithm, you can train it using your whole training data and start making predictions in the wild.
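To illustrate, here is a small sketch of that workflow in scikit-learn; the three candidate algorithms and the dataset are arbitrary stand-ins for whatever you are actually comparing:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset

candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "decision tree": DecisionTreeClassifier(random_state=0),
}

# Score each *algorithm* by its accuracy averaged over all k folds
mean_scores = {name: cross_val_score(clf, X, y, cv=10).mean()
               for name, clf in candidates.items()}
best_name = max(mean_scores, key=mean_scores.get)

# The per-fold models are discarded; the winner is retrained on all data
final_model = candidates[best_name].fit(X, y)
print("selected:", best_name, mean_scores)
```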
This is beyond the scope of this question, but you should also optimize the model's hyperparameters (if any) to get the most out of the selected algorithm. People usually perform hyperparameter optimization using cross-validation.
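For completeness, a brief sketch of cross-validated hyperparameter optimization with scikit-learn's GridSearchCV (the parameter grid shown is only an example):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset

# Cross-validated search over an example hyperparameter grid
search = GridSearchCV(SVC(),
                      param_grid={"C": [0.1, 1, 10],
                                  "gamma": ["scale", 0.01, 0.001]},
                      cv=10)
# refit=True (the default) retrains the best model on all the data
search.fit(X, y)

print(search.best_params_, "CV accuracy: %.3f" % search.best_score_)
```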
If you’re a songwriter, you must understand that there a lot of factors to consider when writing a song. The lyrics and chord progressions are always important at delivering the mood and message, but a factor that is often overlooked, especially by beginner songwriters is the length of the song. Well, how long should a song be? Should we be setting these types of limitations on ourselves as songwriters in the first place?
The short answer to these questions is that it all depends. While the industry standard currently has hit songs at around 3 to 3 1/2 minutes in length, we've seen plenty of good and successful songs that are both longer and shorter. There is no one-size-fits-all rule for the length of the songs you write because every songwriter has different styles, ideas, and goals. To help you figure out how long you should be writing your songs, here are 5 things to consider when it comes to the length of your songs.
Table of Contents
1. What Your Fans Expect From You
The first thing you need to consider is your fans since they’ll probably be the first to hear your new original music. What are your fans more accustomed to when it comes to listening to your music? How would they react if you changed your songwriting formula? Are you at the point of your career where you have more flexibility in the type of music you create or do your fans have an expectation of what they want to hear from your songs? These are some questions you should ask yourself when thinking about your fans.
It's up to the artist how they put together their original songs; however, if they want their music to be successful, they'll need to pay attention to their fans. If you feel like you can be more flexible with the length of your songs without upsetting your fans, go for it. It can definitely be a nice change that your fans will appreciate. At the same time, if you notice that your fans enjoy your music because it's short and straight to the point, then don't change what's not broken. Keep creating songs that work for you at the point where you currently are in your music career.
Always remember that the songs you create are your art, but make sure to keep your fans in mind when putting them together.
2. Are You Aiming to Be on the Radio?
Another thing to consider is if one of your main goals is to get your songs on the radio. It’s very unlikely for you to get on mainstream radio, however, if that’s your ultimate goal, it’s best to understand the formula now. Become accustomed to the formula of the songs that go on the radio.
The average song length of music that goes on the radio is between 3 to 5 minutes, however, it’s closer to 3 to 3 1/2 minutes. To understand why it’s like this, we have to discuss the history of music and the radio.
History of Music and the Radio
Between the late 1890s and the late 1950s, flat records called "78s" were made. These records spun at 78 revolutions per minute, limiting how long a song could be due to storage limitations. There were two sizes: a 10-inch disc that could hold three minutes of music and a 12-inch disc that held four minutes.
In 1949, RCA introduced the 45 rpm disc, and by the mid-1950s it had quickly taken over from the 78. These were made of vinyl instead of shellac, making them more durable and portable. They were also cheaper to make and buy, making them easier to market to teenagers at the time. However, just like the 78, the storage limitation was the same, capping a side at about three minutes of music. Radio stations used 45s to broadcast music live. This meant that if bands and artists wanted their songs to be radio-friendly, they had to comply with the song length limitations.
This is what influences the song length limitations of what goes on the radio, even today.
Present Day
As technology gets better and better every day, especially in the music industry, you would think that the 3-minute limitation is but a thing of the past. However, this isn’t the case. Songs on the radio are still around 3 to 5 minutes in length. While this is mainly true for mainstream radio stations, there are other options. Nowadays, you could get your music on online radio stations thanks to streaming services and platforms. There are also local radio stations that might not be as strict on song lengths. You can find these stations in your local community and colleges. It’s much easier for musicians and artists to get their music heard by a wider audience.
3. Can the Song be Split into Two?
If you put together a longer song, ask yourself if it’s possible for your song to be split into two. When we get into the groove of writing new music, it’s easy for us songwriters to get caught in the moment. We’ll just keep on writing like there’s no tomorrow, resulting in a longer song. It can be tempting to leave it as it is, however, a good approach would be to see if it’s possible to split the song into two separate ones. If it is possible, then you get yourself a win-win situation, having two new original songs in your arsenal. However, if you feel that it’s not possible, don’t force it as you might be doing your song more harm than good.
Of course, if you do decide to split the song into two, it does create more work. You’ll have to make new musical adjustments to both songs which will take more time, however, in the end, it will all be worth it.
4. Do You Need a Longer Song to Deliver your Message?
Similar to the previous topic we’ve discussed, try to figure out the best way to deliver your song’s message. Like I’ve already said earlier when we get into the flow writing, we might write too much than needed. The message of the songs we create is just as important as the music and the lyrics. What makes songwriting so fun is that there are so many ways that we can deliver our message through our songs. It is possible to have a song with as many verses and choruses as you want, but is it absolutely necessary for your message?
This is why editing and going over your songs is important. After all, it's a part of the songwriting process that you should never skip. If you can deliver your song's message in less time, it can be more beneficial. Listeners will have an easier time listening to your song. Nowadays our attention spans are shorter, making it harder for us to dedicate our attention to longer songs.
A shorter song can be just as effective as longer songs. It pushes us to write more impactful lyrics that deliver the ultimate messages we are trying to tell people through our songs. Always keep in mind when writing that sometimes it’s possible to tell everything you want in fewer words. You just might have to think more creatively when putting your song together.
5. What Will Make You Happy?
Of course, one of the most important things you need to consider is your own happiness with the songs you write. Songwriting is a very personal art form where we can pour out any emotions that we are feeling into music and lyrics. It’s a great way for us to truly express our thoughts. While following a specific song formula to achieve the most success is important, never neglect your own happiness and satisfaction.
If you shortened a song just to fit the mold that the music industry has created but feel unsatisfied with it, don’t be afraid of going back to what it was. It’s very easy to forget that we create songs not just for other people, but also for ourselves. It’s important that we also give priority to our own satisfaction when writing songs.
There are also downsides of always trying to limit yourself to certain song lengths when songwriting. It can be creatively draining, which can lead to songwriting burnout. Always remember that it’s important that you are happy with the songs you create.
Final Thoughts
As you can see, answering this question isn't as easy as it sounds. Every song we write has a different approach that might result in a shorter song or a longer one. If there's anything we'd like you to take from this article, it's that you can still have an impactful song that is shorter in length. At the same time, a song that is longer still has the potential to be successful in a musical world where 3-minute pop songs are dominant, thanks to all of the new platforms available for musicians and artists.
We hope this article has helped you figure out how long the songs you are writing should be. Best of luck with your songwriting, and we hope you continue to put out music that you'll be happy about.
Week 9 Plan
- View Audio Quiz Correct Answers on WyoCourses until 11:15 a.m.
- Today: In-Class Assignment
- Write down your responses on a paper with your name. This should be a short paragraph or two.
- What is the difference between crowdsourcing and open-source reporting?
- How could you use crowd-powered collaborations in your future media job?
- Swap responses with your neighbor. Read. Discuss where your responses differ and converge.
- I will “cold call” on students to discuss as a class.
- Today’s Content: We will review online articles that provide tips on social media for journalists, PR, and advertising careers.
- For Friday:
- Using Twitter
- Review Blog Post #6 on Live-Tweeting
- In-class assignment on “live-tweeting” a speech on YouTube.
How Social Media is Used by the Big Three Media Fields
Social media is for you. The aspiring journalist, sports writer, marketing executive, advertising director, or public relations manager, all of these fields rely on social media now.
You can use social media:
- To help you create a presence and voice
- To promote your stories or your products
- To search for story ideas and sources
- To network with others in your field
- To engage with your audience, start a conversation
No doubt, social media is changing our media world.
Let’s see what the public thinks about social media and our news environment (Pew survey data). Note that this is relevant to journalists and strategic communicators because it shows what the public thinks about various social media platforms for information.
Let’s review some resources.
Please choose to review either the journalism, PR, or advertising sections below.
Review as many links in your chosen section as possible.
Write down 3 things that you’d like to share with the class about what you learned from reading these articles in your section.
Then, meet with a small group of 2-3 people who examined the same field as you. Decide as a group what the top 3 things are that you’d like to share with the class.
As you read, keep in mind your desired career path (i.e., what companies you may want to work for) and your desired content field (e.g., agriculture, entertainment, politics, etc.). How can you directly relate this advice to your own goals? Doing this simple exercise will help make solid connections and lessons that will stick beyond this class.
Journalism
- Facebook for Journalists and Digital Courses for Journalists: Tips from Facebook about how to use Facebook effectively as a journalist
- Twitter for Journalists: Tips from Twitter about how journalists can use the platform
- Snapchat for Journalists: A beginner’s guide to Snapchat for journalism
- Instagram for Journalists: Tips from the Poynter Institute, a nonprofit leader in journalism education
- Instagram Stories Tips: Tips from the International Journalists’ Network about how to use Instagram Stories to share and promote content
- When journalists delete tweets, they may be erasing the first draft of history: This story discusses the ethics questions, “Should journalists delete their tweets? Should news organizations be archiving and documenting their journalists’ social media use?”
Public Relations (See Advertising below for more related content)
- 7 of the Best Social Media Campaigns (And What You Can Learn From Them)
- 12 under-the-radar Instagram writing tips brands should use
- The NBA’s China crisis is another reminder of the dangers of Twitter for corporate leaders: How does this story’s lessons translate to a more local or regional company and its social media use?
- Three Ways Social Media Works for Public Relations
Advertising (See PR above for more related content)
- Facebook’s Getting Started with Campaign Planner: Familiarize yourself with how to create an ad campaign on Facebook
- 5 Successful Social Media Campaigns You Can Learn From: Provides examples of ad campaigns on Facebook, Instagram, and Twitter and why they were successful
- Report: Just under 400 brands ran video ads on Snapchat Discover channels in past 3 months: Discusses what brands are advertising on Snapchat and why
- Snapchat video ads outperform other social media platforms: Discover why Snapchat video ads are more effective
Social Media is for Everyone
Even if you don't personally enjoy social media, it will likely be something your future employer expects you to manage, and may even rely on you to manage.
In this field of media, it is your responsibility to keep up to date on the latest methods of collecting information, distributing information, and conversing with audiences about information.
Remember that you aren’t alone. There are many free resources online that you can use to improve your digital media literacy skills. Here are just a few:
- Google News Lab
- Facebook and Instagram for Business and Marketing
- Facebook for Journalists
- Poynter’s News University (Free and Reasonably Priced Online Courses for Journalists)
If there’s one lesson to take from this class, let it be: Don’t let technology intimidate you. You can do this. It’s trial and error for nearly everyone in the media industry. Seek out help and resources from others, online and offline. | http://uwyojournalism.com/?p=2896 |
Ethics is an important issue in psychology; the American Psychological Association publishes a code of ethics and conduct for psychologists as the standard guidelines in the field. This essay is an attempt to relate ethical awareness and ethical principles to psychology professionals' work and personal conduct.
Ethical Awareness Inventory Chart
Observations from the ethical inventory chart help a person assess his or her personal approach to ethics and to understanding others.
This chart assesses personal beliefs and mental abilities in different situations. The results are then totaled to reveal the score, which makes a person aware of psychological issues that may influence decision-making in the future.
Personal, Spiritual, Social, and Organizational Issues in Psychology
Ethics plays an important part in everyday life. A person working in an environment with people from different cultures and customs may encounter ethical conflicts. Situations may arise in which ethical decisions conflict with organizational values.
Issues concerning organizational integrity and personal ethical conduct may cause a person to experience negative emotions.
Personal, spiritual, and social growth is important to understanding one's ethical inventory chart. Research indicates that psychological well-being is directly related to spiritual and social growth (Kowalski & Westen, 2009).
How Ethics Affects Psychological Knowledge and Principles
Personal growth, development, and the health of an individual may affect that person's psychological well-being.
Knowledge and ethical principles can help guide a person through business situations and personal issues in life. Research indicates that as people age, they become more accepting of different issues in life. Personal growth may also interact with gender; for example, women are more likely than men to experience and embrace spiritual well-being.
The effect of psychological knowledge and principles on health and personal growth differs at each stage of lifespan development.
Psychology as a career choice can be very beneficial for some individuals; those seeking careers in psychology should possess patience and the desire to help others. A career in psychology takes four to six years of intense work before the individual is ready to earn income in this field. Psychology is an excellent career choice for the person willing to go the distance and earn the degree.
The Influence of Ethics on Career Choice in Psychology
The decision to make psychology a career choice usually involves diverse influences, such as the need to understand human behavior.
Other influences include how behavior affects aspects of society and how individuals and groups influence the majority. Most of the questions and assumptions psychology makes involve ethical choices. Psychology is one career choice that lets individuals research different segments of society. Cross-cultural research influences diversity in many areas, such as law enforcement, medicine, mental health, research, corporate settings, and family life. A career in psychology gives an individual access to many important disciplines that use psychology as a foundation for understanding human behavior (Society of Clinical Psychology, 2006).
This essay examines ethics and how ethics affects individuals as professional psychologists and in everyday life. The ethical awareness inventory assesses an individual's personal ethical perspective and examines how individuals will react in different situations. After the ethical awareness inventory, the essay examines the spiritual and personal growth involved in psychology. How ethics influences one's personal life and employment is another issue the essay discusses. Applying such principles of knowledge, personal growth, and health is part of the decision-making process.
The essay examines the role ethics plays in one's decision to pursue a degree in psychology, and how this degree will affect how the individual maintains ethical conduct and principles. The essay concludes with the results of the personal inventory chart.
Conclusion
This essay began with the ethical awareness inventory chart and the assessment of personal ethics. After taking the ethical awareness inventory, the results were not fully understood at first. The results indicate that the individual needs to be more flexible in ethical decisions.
According to the inventory assessment, personal ethics may conflict with group or organizational values. The personal assessment reveals that the individual sees ethical issues in black and white; there is no gray area between a lie and the truth. Such personal ethics may create conflict in an employment situation.
| https://onlinenursingprofessor.com/ethical-assement-inventory/ |
It’s been wonderful journeying with you over the years. Many of our departments in the Herbert Wertheim College of Engineering from Chemical Engineering to Materials Science, Biomedical, Mechanical & Aerospace and Industrial & Systems Engineering were able to contribute to the knowledge and know-how of space exploration, thanks to you.
We look forward to many more years of successful and productive collaboration. For now, here’s a look at some major milestones and research projects associated with NASA.
Left: Created by NASA graphic artist Matthew Skeins, the official 60th anniversary logo depicts how NASA is building on its historic past to soar toward a challenging and inspiring future.
The late Dr. Byron E. Ruth, Professor Emeritus, Department of Civil and Coastal Engineering, received UF’s first recorded NASA grant in 1978 to explore “The Use of Remote Sensing in Solving Florida’s Geological and Engineering Problems".
That same year, Dr. Richard L. Fearn, Professor Emeritus, Department of Mechanical & Aerospace Engineering, received a NASA grant for a study about the velocity and pressures of a subsonic jet exhaust in the presence of crosswinds, known as “A Jet in a crossflow”.
The Florida Space Grant Consortium (FSGC) is an association of seventeen public and private Florida universities and colleges.
Its mission is to support the expansion and diversification of Florida’s space industry through grants, scholarships, and fellowships to students and educators from Florida’s public and private institutes of higher education.
UF received a University Research, Engineering & Technology Institute (URETI) grant from NASA in 2003 to fund the Institute for Future Space Transport.
The UF-led Institute for Future Space Transport focuses on four research areas: new propulsion technologies; lighter, stronger and more reliable spacecraft; improved systems to monitor the health and well-being of the spacecraft’s technological and life-support systems; and better ways to integrate all of the spacecrafts’ diverse systems.
Dr. Manuel, chair of the Department of Materials Science & Engineering, is collaborating with NASA to develop a lightweight magnesium alloy, which is stronger and lighter than steel and aluminum. The goal is to create light spacecraft and reduce radiation exposure to both crew and equipment.
Dr. Segal is executive director of the Institute for Future Space Transportation, a NASA URETI established in 2002. He is an Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA). | https://www.eng.ufl.edu/newengineer/news/nasa60/ |
This session focuses on innovative small spacecraft designs, systems, missions and technologies for the exploration of space beyond Earth orbit. Target destinations for these miniaturized space probes include the Earth's Moon, Mars, small bodies and other deep-space destinations, as well as the near-Earth vicinity for necessary development and technology demonstration missions. Small exploration probes covered by this session may come in many different forms, including special-purpose miniature spacecraft, standard-format small platforms such as cubesats, or other microsats, nanosats, picosats, etc. Topics include new and emerging technologies in miniaturized subsystems including propulsion, avionics, guidance navigation & control, power supply, communication, thermal management, and sensors and instruments. The main focus of this session is on new and emerging systems and mission applications for deep-space exploration using small spacecraft. | https://iafastro.directory/iac/browse/IAC-17/B4/8/ |
How To Connect Today’s Lunch To A Motivating Purpose
Despite having spent my entire professional life in the variously-called “purpose economy,” “social enterprise sector,” or “impact organizations,” I always feel a bit squeamish talking about ‘purpose,’ much less proclaiming my own. Purpose statements, whether individual or organizational, feel lofty, abstract, and ungrounded, which drives me nuts, as a Type A, action-oriented pragmatist. I’ve seen many other professionals struggle with purpose in similar ways. At the same time, I believe the ample academic research, and the first-hand experience of wise and diverse voices, showing that having a sense of purpose is good for us, as individuals, organizations, and communities. Indeed, I’ve built a business on the premise that purpose drives performance.
Powerful statistics about the power of purpose (and, importantly, the meaning that underlies it) for us as professionals and employers.
BetterUp
I’ve resolved this apparent conflict by recognizing two mundane and concrete building blocks that underlie purpose. Particularly in this age of ‘peak purpose,’ it’s critical to make these terms tangible and specific, to avoid devaluing as meaningless the essential lenses for our survival and thriving as individuals and societies. The first component of purpose is impact, which can be simply understood as the outcomes of our actions on the people and places around us. Newton’s cradle is a perfect visual for understanding impact: it embodies the law of physics stating that for every action, there is an equal and opposite reaction. Second is the meaning we derive from that impact: in other words, the story we tell ourselves and others about the impact we create in the world.
For example, if I do a diversity and inclusion audit of my team’s hiring process, the impacts might include some extra work for my team members who have to collect the data; insights about shortcomings, bias, or strengths of our hiring process; and learning about our individual assumptions about talent. I could take this effort to be meaningful for the learning by me and my team, improving our team’s performance by expanding the range of talent we attract, or contributing to reducing the wage and wealth gaps in our economy.
Team working to understand where their hiring process upholds unconscious bias.
Getty
In the personal realm, if I uphold a commitment to eating a plant-based diet four days a week, there are several impacts. I reduce the carbon output associated with my consumption, set an example for friends and family around me, improve my personal health outcomes, and partially distance myself from the raising and killing of animals for my food. There are several ‘meanings’ that I could draw from these impacts. Perhaps reducing my contribution to environmental degradation through carbon emissions is most meaningful to me. Or it could be that I prioritize my physical well-being now and as I age so that I can best provide for and lead my family or team.
From the ‘objective’ impacts of our individual efforts, we each make meaning about the significance of those outcomes. Or we don’t. But who do you think will implement more learnings from the diversity audit or maintain the four days per week of plant-based eating? The person who’s doing it out of compliance with a rule or guideline, or desire to keep up with a trend they observe? Or the person who connects those actions to whatever larger meaning has the most significance for themselves?
This connection of our actions to the meaning they have – for us personally, not for our bosses, auditors, regulators, or social media audiences – is what underlies the benefits of meaning. We feel engaged, motivated, and rewarded by doing things that we see to have significance in some way that’s good for the people and places around us. These feelings lead to the productivity, performance, and satisfaction that research shows are higher among people who identify as having meaning in their work and lives.
It’s also important to recognize the profoundly personal nature of these meanings. There’s not a ‘right’ meaning to derive from a certain action. This is not to say that there aren’t things that are better or worse for the world. But it’s important to recognize that two colleagues might feel profoundly engaged in and satisfied by the same work for different reasons.
One of the most vivid illustrations of job crafting is this delightful video of a traffic cop dancing in the midst of rush hour traffic. Imagine the quality of air she’s breathing, commuters’ commentary, and the physical risk of standing amidst rushing vehicles. The researchers asked her how and why she was so joyful doing what seems to be a really difficult gig. She explained: “I am responsible for getting 30,000 people home safely to their families every day. What better impact could I achieve in an eight-hour shift?” In a situation where the tasks of a job would be hard to link to very significant meaning, and relationships are distant if not adversarial, this woman successfully laddered up to the context of what she’s doing to find meaning.
This example is such a brilliant reminder of the fact that we do not have to be engaged in lifesaving cancer research or working directly with the most challenged populations, whether refugees, immigrants, or survivors of domestic violence. There are many more mundane ways to have real, significant meaning. Based on this broad understanding of the impact we have, and the possibility of connecting it to a larger meaning that is personally significant to us, let’s see how this all connects to the broadest and loftiest element: purpose.
While it is intimidating for many of us to commit to an eloquent ten-word statement about why we wake up every morning, we can’t avoid having impact on the people and places around us. As we get more mindful about these impacts (because mindfulness is the first, unavoidable step to purpose), their meaning becomes more vivid too. And from this growing pile of evidence of what we care about in the world, certain themes start to emerge.
Maybe you realize that in the past year, you’ve instituted walking meetings for your team’s weekly catchup, committed to cooking your family a plant-based diet four days a week, and developing a niche with clients in the agriculture industry. This same set of actions could be explained because you understand your purpose as, “Building a coalition of people committed to regenerating our planet through what we eat,” or equally, “Promoting understanding of and access to healthy lifestyles.” On the other hand, someone who understands their purpose to be, “Unlocking human potential by connecting people from diverse backgrounds,” might do these same actions and find great meaning in them, but not necessarily find them to be the most purposeful elements of their day.
The adventure of trying to cook a plant-based diet.
Getty
All of the myriad actions we take each day have impact. We may not take the time to recognize the meaning from everything we do, particularly when we’re feeling busy, stressed, or overwhelmed. And even if we do connect the dots between an activity, its impact, and how it’s meaningful, not all of them will ladder up to our overriding purpose. It’s not that those “less purpose-aligned” actions don't feel good and have positive ripples for the people and places around us. They just don’t create the potently motivating force and sense of flow that research has linked to doing work with purpose.
So if you are daunted by the idea of sitting down to script a purpose statement, but compelled by the research and anecdotal examples in your life of the benefits of living and working with purpose, start paying attention to what impacts most catch your attention. Keep a simple daily or weekly log of things you did that felt meaningful, whether right away or with the benefit of a few days or weeks of distance. After a few months of keeping these notes (I love Evernote for this kind of tracking if you’re opposed to carrying an actual physical notebook), use a word cloud generator to find themes, or a highlighter to identify recurring words.
If some technical assistance appeals, check out PurposeMatch’s algorithm-based individual purpose assessment. (Disclosure: my company partners with PurposeMatch – more on that partnership here.) This tool provides immediate insights about your unique strengths and the areas of impact you care about, which can be a useful complement to your day-to-day observations.
On the other hand, if you love nothing more than a whiteboard session with yourself, colleagues or friends to wordsmith a purpose statement as you look to the year (or decade!) ahead, go for it! Your next steps are to break that rallying cry down into the day-to-day impacts you’re going to have, so that you can hold yourself accountable for whether you’re living your purpose in a more concrete way.
The important point is that purpose is an iterative concept, that we have to connect to tangible impact and the meaning it has to us. Enjoying the benefits (motivation, productivity, fulfillment, reduced anxiety and stress, better sleep, and more) of purpose requires this back and forth laddering up and down from our specific actions to their larger significance, and then the broader goal they advance. Because we are dynamic beings living in fast-changing times, it’s essential to observe, reflect on, and document these three elements on an ongoing basis.
After all, you have multiple impacts every single day. The things you buy, the work you do, the interactions you have, and the way you care for yourself all ripple out and affect people and places near and far. Don’t you want to take advantage of the opportunity to link these elements of everyday life to their potential power to solve the problems you care most about?
This year marks the 30th anniversary of political economist Andrew Weiss’ seminal paper demonstrating that the relationship between high school graduation and earnings can be explained by non-cognitive factors — such as a lower propensity to quit — rather than the simple accumulation of knowledge.
Twenty years after that revelation, Nobel Prize-winning economist James Heckman demonstrated that “personality, persistence, motivation, and charm” are of paramount importance to success in life.
None of this is new news. But as a nation we still generally teach non-cognitive skills only as a byproduct of the educational process, rather than an intentional outcome. And that is unfortunate, because research has consistently demonstrated that the non-cognitive skills gap is a key factor in socioeconomic disparity and intergenerational poverty.
A lot of times, progress in education is stalled by a lack of conclusive research. That’s not the case when it comes to the importance of non-cognitive skills development. The research is conclusive. As a nation, it is well past time to prioritize the addition of non-cognitive skills development for our students.
•••
Do you want to help your students develop resiliency? You can learn more about Graduation Alliance’s social-emotional learning assessment and curriculum, ScholarCentric, here. | https://www.graduationalliance.com/2018/10/22/non-cognitive-skills-development-should-be-an-intentional-outcome-of-our-education-system/ |
Lesson 12: The Poisson Distribution
Introduction
In this lesson, we learn about another specially named discrete probability distribution, namely the Poisson distribution.
Objectives
To learn the situation that makes a discrete random variable a Poisson random variable.
To learn a heuristic derivation of the probability mass function of a Poisson random variable.
To learn how to use the Poisson p.m.f. to calculate probabilities for a Poisson random variable.
To learn how to use a standard Poisson cumulative probability table to calculate probabilities for a Poisson random variable.
To explore the key properties, such as the moment-generating function, mean and variance, of a Poisson random variable.
To learn how to use the Poisson distribution to approximate binomial probabilities.
To understand the steps involved in each of the proofs in the lesson.
To be able to apply the methods learned in the lesson to new problems.
Poisson Distributions
Situation
Let the discrete random variable X denote the number of times an event occurs in an interval of time (or space). Then X may be a Poisson random variable taking values x = 0, 1, 2, ...
Examples
Let X equal the number of typos on a printed page. (This is an example of an interval of space — the space being the printed page.)
Let X equal the number of cars passing through the intersection of Allen Street and College Avenue in one minute. (This is an example of an interval of time — the time being one minute.)
Let X equal the number of Alaskan salmon caught in a squid driftnet. (This is again an example of an interval of space — the space being the squid driftnet.)
Let X equal the number of customers at an ATM in 10-minute intervals.
Let X equal the number of students arriving during office hours.
Definition. If X is a Poisson random variable, then the probability mass function is:
\(f(x)=\dfrac{e^{-\lambda} \lambda^x}{x!}\)
for x = 0, 1, 2, ... and λ > 0, where λ will be shown later to be both the mean and the variance of X.
Recall that the mathematical constant e is the unique real number such that the value of the derivative (slope of the tangent line) of the function \(f(x)=e^x\) at the point x = 0 is equal to 1. It turns out that the constant is irrational, but to five decimal places, it equals:
e = 2.71828
Also, note that there are (theoretically) an infinite number of possible Poisson distributions. Any specific Poisson distribution depends on the parameter λ.
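To make the p.m.f. concrete, here is a minimal Python sketch (our addition, not part of the original lesson; the function name `poisson_pmf` is simply an illustrative choice):

```python
import math

def poisson_pmf(x: int, lam: float) -> float:
    """P(X = x) for a Poisson random variable with mean lam (the parameter lambda)."""
    return math.exp(-lam) * lam**x / math.factorial(x)

# Sanity checks for lambda = 3:
print(poisson_pmf(0, 3.0))                           # e^{-3} ≈ 0.0498
print(sum(poisson_pmf(x, 3.0) for x in range(50)))   # ≈ 1.0, as a p.m.f. should sum
```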
"Derivation" of the p.m.f.
Let X denote the number of events in a given continuous interval. Then X follows an approximate Poisson process with parameter λ > 0 if:
(1) The number of events occurring in non-overlapping intervals are independent.
(2) The probability of exactly one event in a short interval of length h = 1/n is approximately λh = λ(1/n) = λ/n.
(3) The probability of exactly two or more events in a short interval is essentially zero.
With these conditions in place, here's how the derivation of the p.m.f. of the Poisson distribution goes. Partition the interval into n subintervals of length h = 1/n. For large n, each subinterval contains at most one event, independently of the others, with probability approximately λ/n, so the number of events X is approximately binomial:

\(P(X=x) \approx \dbinom{n}{x}\left(\dfrac{\lambda}{n}\right)^x \left(1-\dfrac{\lambda}{n}\right)^{n-x}\)

Now, let's make the intervals even smaller. That is, take the limit as n approaches infinity (n → ∞) for fixed x. Since \(\dbinom{n}{x}\dfrac{1}{n^x} \to \dfrac{1}{x!}\), \(\left(1-\dfrac{\lambda}{n}\right)^{n} \to e^{-\lambda}\), and \(\left(1-\dfrac{\lambda}{n}\right)^{-x} \to 1\), doing so, we get:

\(P(X=x)=\dfrac{e^{-\lambda} \lambda^x}{x!}\)
Finding Poisson Probabilities
Example
Let X equal the number of typos on a printed page with a mean of 3 typos per page. What is the probability that a randomly selected page has at least one typo on it?
Solution. We can find the requested probability directly from the p.m.f. The probability that X is at least one is:
P(X ≥ 1) = 1 − P(X = 0)
Therefore, using the p.m.f. to find P(X = 0), we get:
\(P(X \geq 1)=1-\dfrac{e^{-3}3^0}{0!}=1-e^{-3}=1-0.0498=0.9502\)
That is, there is just over a 95% chance of finding at least one typo on a randomly selected page when the average number of typos per page is 3.
What is the probability that a randomly selected page has at most one typo on it?
Solution. The probability that X is at most one is:

\(P(X \leq 1)=P(X=0)+P(X=1)=\dfrac{e^{-3}3^0}{0!}+\dfrac{e^{-3}3^1}{1!}=4e^{-3}=0.1991\)

That is, there is just under a 20% chance of finding at most one typo on a randomly selected page when the average number of typos per page is 3.
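If you have SciPy installed, both typo probabilities can be verified in a couple of lines (a sketch we added; `scipy.stats.poisson` takes the mean λ as its `mu` argument):

```python
from scipy.stats import poisson

lam = 3.0
print(1 - poisson.pmf(0, lam))  # P(X >= 1) ≈ 0.9502
print(poisson.cdf(1, lam))      # P(X <= 1) ≈ 0.1991
```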
Just as we used a cumulative probability table when looking for binomial probabilities, we could alternatively use a cumulative Poisson probability table, such as Table III in the back of your textbook. If you take a look at the table, you'll see that it is three pages long. The top of the first page of the table gives a feel for how the table works.
In summary, to use the table in the back of your textbook, as well as that found in the back of most probability textbooks, to find cumulative Poisson probabilities, do the following:
Find the column headed by the relevant λ. Note that there are three rows containing λ on the first page of the table, two rows containing λ on the second page of the table, and one row containing λ on the last page of the table.
Find the x in the first column on the left for which you want to find F(x) = P(X ≤ x).
Let's try it out on an example. If X equals the number of typos on a printed page with a mean of 3 typos per page, what is the probability that a randomly selected page has four typos on it?
Solution. The probability that a randomly selected page has four typos on it can be written as P(X = 4). We can calculate P(X = 4) by subtracting P(X ≤ 3) from P(X ≤ 4). To find P(X ≤ 3) and P(X ≤ 4) using the Poisson table, we:
Find the column headed by λ = 3.
Find the 3 in the first column on the left, since we want to find F(3) = P(X ≤ 3). And, find the 4 in the first column on the left, since we want to find F(4) = P(X ≤ 4).
Now, all we need to do is (1) read the probability value where the λ = 3 column and the x = 3 row intersect, and (2) read the probability value where the λ = 3 column and the x = 4 row intersect. What do you get?
Reading from the table, P(X ≤ 4) = 0.815 and P(X ≤ 3) = 0.647, so P(X = 4) = 0.815 − 0.647 = 0.168. That is, there is about a 17% chance that a randomly selected page would have four typos on it. Since it wouldn't take a lot of work in this case, you might want to verify that you'd get the same answer using the Poisson p.m.f.
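The same table lookup can be reproduced in code (our sketch, again assuming SciPy is available):

```python
from scipy.stats import poisson

lam = 3.0
print(poisson.cdf(4, lam) - poisson.cdf(3, lam))  # P(X = 4) ≈ 0.1680
print(poisson.pmf(4, lam))                        # same value, directly from the p.m.f.
```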
What is the probability that three randomly selected pages have more than eight typos on it?
Solution. Solving this problem involves taking one additional step. Recall that X denotes the number of typos on one printed page. Then, let's define a new random variable Y that equals the number of typos on three printed pages. If the mean of X is 3 typos per page, then the mean of Y is:
\(\lambda_Y\) = 3 typos per page × 3 pages = 9 typos per three pages
Finding the desired probability then involves finding:
P(Y > 8) = 1 − P(Y ≤ 8)
where P(Y ≤ 8) is found by looking in the Poisson table under the column headed by λ = 9.0 and the row headed by x = 8. What do you get? The table gives P(Y ≤ 8) = 0.456, so P(Y > 8) = 1 − 0.456 = 0.544.
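A quick check in code (our sketch) uses the survival function, which computes P(Y > y) directly:

```python
from scipy.stats import poisson

lam_y = 9.0                  # mean typos per three pages
print(poisson.sf(8, lam_y))  # P(Y > 8) = 1 - P(Y <= 8) ≈ 0.544
```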
Approximating the Binomial Distribution
Example
Five percent (5%) of Christmas tree light bulbs manufactured by a company are defective. The company's Quality Control Manager is quite concerned and therefore randomly samples 100 bulbs coming off of the assembly line. Let X denote the number in the sample that are defective. What is the probability that the sample contains at most three defective bulbs?
Solution. Can you convince yourself that X is a binomial random variable? Hmmm.... let's see... there are two possible outcomes (defective or not), the 100 trials of selecting the bulbs from the assembly line can be assumed to be performed in an identical and independent manner, and the probability of getting a defective bulb can be assumed to be constant from trial to trial. So, X is indeed a binomial random variable. Well, calculating the probability is easy enough then... we just need to use the cumulative binomial table with n = 100 and p = 0.05.... Oops! The table won't help us here, will it? Even many standard calculators would have trouble calculating the probability using the p.m.f.:

\(P(X \leq 3)=\sum\limits_{x=0}^{3} \dbinom{100}{x}(0.05)^x(0.95)^{100-x}\)
But, if you recall the way that we derived the Poisson distribution,... we started with the binomial distribution and took the limit as n approached infinity. So, it seems reasonable then that the Poisson p.m.f. would serve as a reasonable approximation to the binomial p.m.f. when your n is large (and therefore, p is small). Let's calculate P(X ≤ 3) using the Poisson distribution and see how close we get. Well, the probability of success was defined to be:
\(p=\dfrac{\lambda}{n}\)
Therefore, the mean λ is:
\(\lambda=np\)
So, we need to use our Poisson table to find P(X ≤ 3) when λ = 100(0.05) = 5. What do you get?
The cumulative Poisson probability table tells us that P(X ≤ 3) = 0.265. That is, if there is a 5% defective rate, then there is a 26.5% chance that a randomly selected batch of 100 bulbs will contain at most 3 defective bulbs. More importantly, since we have been talking here about using the Poisson distribution to approximate the binomial distribution, we should probably compare our results. When we used the binomial distribution, we determined that P(X ≤ 3) = 0.258, and when we used the Poisson distribution, we determined that P(X ≤ 3) = 0.265. Not too bad of an approximation, eh?
It is important to keep in mind that the Poisson approximation to the binomial distribution works well only when n is large and p is small. In general, the approximation works well if n ≥ 20 and p ≤ 0.05, or if n ≥ 100 and p ≤ 0.10.
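To see how good the approximation is here, the sketch below (ours, not from the lesson) computes the exact binomial probability and its Poisson approximation side by side:

```python
from scipy.stats import binom, poisson

n, p = 100, 0.05
print(binom.cdf(3, n, p))     # exact binomial:        ≈ 0.2578
print(poisson.cdf(3, n * p))  # Poisson approximation: ≈ 0.2650
```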
Elba Alves Rolim (born 26 April 1910) is a Brazilian supercentenarian whose age is currently unvalidated by the Gerontology Research Group (GRG).
Biography
Elba Alves Rolim was born in Brazil on 26 April 1910. Her mother died of Spanish flu. She had six siblings. She married Herminio de Freitas Rolim.
At age 105 years, she fell and broke her femur, but after surgery, she started walking again. In April 2020, she celebrated her 110th birthday.
Elba Alves Rolim currently lives in the Copacabana neighbourhood of Rio de Janeiro City, Rio de Janeiro State, Brazil, at the age of 110 years, 36 days.
References
- 110th birthday Facebook Post, 27 April 2020
- Senhora de duas pandemias Projeto Colabora, 10 May 2020
| https://gerontology.wikia.org/wiki/Elba_Alves_Rolim |
National Disability Services (NDS) is Australia’s peak body for non-government disability organisations. NDS represents over 1,100 non-government service providers and operates several thousand services for Australians with various types of disabilities.
NDS came to Interite Healthcare Interiors requiring an office in which reflected professionalism and encouraged productivity, allowing the staff members to perform at their best ability.
Within six weeks, the 320sqm space in Parkville, VIC, was completed. Interite Healthcare Interiors successfully executed this vision, seamlessly harmonising the concept with a visually appealing and productivity-provoking space. The modern and professional environment has improved staff culture and significantly increased client engagement. | https://www.interitehealthcare.com.au/portfolio/national-disability-services/ |
With almost 13,000 participants and 200,000 spectators, the Deutsche Post Marathon Bonn is one of the ten largest running events in Germany. In 2019 it will take place on 7 April. The run leads the athletes through the federal city of Bonn, along the Rhine and past many sights and landmarks of the city. Marathon and half-marathon runners, walkers, inline skaters and handbikers will be there. You can also start as a team, whether to open the marathon season or in the Bonn company run. Students start at the BAUHAUS school marathon.
finisher medal
There are 6 refreshment points on the route, all serving drinks and solid food. Since the marathon course covers two laps, marathon runners pass 12 refreshment points plus an additional 2 water points during the race.
Timing is done by ChampionChip. A rental chip is provided for all participants who have not specified a ChampionChip number when registering.
It is not an interesting half marathon. The route is very boring. The finish area is very tight. The marathon is two laps.
18 April, 2015
Half marathon
In front of Koblenzer Tor.
Belderberg 6, 53111 Bonn, Germany
Regina-Pacis-Weg 3, 53113 Bonn, Germany
The marathon course (two laps) for runners is officially measured by the DLV (42.195 km), level, paved and free of traffic. The route presents the federal city of Bonn from its most beautiful side: museums, churches and the Rhine lie along the route.
The race information has been found on the official website of the event or through publicly available sources. Always refer to the official website for the latest race information. Please let us know if any data is wrong or missing by emailing us. | https://worldsmarathons.com/marathon/german-post-marathon-bonn |
Kenyan Kipchoge sets new world record at the Berlin marathon
Marathon olympic champion Eliud Kipchoge stormed to his next world record in Berlin. The 37-year-old Kenyan ran the 42.195 kilometers on Sunday in 2:01:09 hours. Kipchoge set the previous record in 2018 at the same place in 2:01:39 hours. At first it even looked as if he could be the first in an official race to undercut the two-hour mark.
Three years ago in Vienna, the two-time Olympic champion became the first person to stay under two hours over the classic distance. However, since this run was not an open competition and took place under laboratory-like conditions, the time of 1:59:40 hours is not considered a world record.
Favorable conditions for Kipchoge
After some rain during the night, the external conditions at the 9.15 am start were very favorable for a fast race, with cloudy skies, mild temperatures and hardly any wind. Led by his pacemakers, Kipchoge set a world-record pace from the start and was 40 seconds under the previous record after a third of the distance. Only the Ethiopian outsider Andamlak Belihu was able to follow; last year's winner Guye Adola, also from Ethiopia, could not.
The leading duo passed the half-marathon mark behind the pacemakers after an almost unbelievable 59:51 minutes. The last pacemaker dropped out a little later; after a good 25 kilometers Kipchoge broke away from Belihu and from then on ran only against the clock. He couldn't quite keep up the pace of the first half in front of hundreds of thousands of spectators along the route, but he still beat the previous world record by half a minute.
For Kipchoge it was the fourth victory at the Berlin Marathon, which ties him with the Ethiopian Haile Gebrselassie as the record winner of the largest German city run. Gebrselassie won there from 2006 to 2009 and also set two world records.
A total of around 45,000 runners had registered for the race through downtown Berlin. | https://newsingermany.com/kenyan-kipchoge-sets-new-world-record-at-the-berlin-marathon/ |
Criminal law - Trial - Directions to jury - Defendant being convicted of possession of prohibited weapon and sentenced to five years' imprisonment - Defendant appealing on basis judge erring in reversing burden of proof - Defendant contending judge erring in directing jury on failure to answer questions in interview - Whether imposition of legal burden on defendant involving encroachment of presumption of innocence - Whether derogation from presumption of innocence being justified and proportionate - Whether judge erring in giving direction on interview - Whether sentence manifestly excessive - European Convention on Human Rights, art 6.
The Case
Criminal law Trial. The defendant was convicted of possession of a prohibited weapon (an imitation gun that was readily convertible into a firearm) and sentenced to five years' imprisonment. The defendant appealed on the basis that the imposition of a legal (persuasive) burden on him had involved derogation from the presumption of innocence. The Court of Appeal, Criminal Division, held that the derogation was justified as necessary, reasonable and proportionate. It further held that there was no arguable basis for challenging the sentence. | https://lexisweb.co.uk/cases/2012/october/r-v-williams |
Large Parkland Garden & Lakes, Lincolnshire
Client brief: To prepare a design covering the 14.5 acres surrounding this new house, to include and improve an old brickpit lake. The design must include naturalised areas of woodland, secret areas, and access points for fishing and boating.
The design section shown is approx. 1/5th of the total garden area and includes large living/entertaining areas, self-contained water features, formal and informal areas, walkways and sporting facilities.