Upon the death of his great-grandfather, Brandon Davis (Ben Browder), a wedding photographer, inherits an antique camera famous for taking Victorian death photographs. After he photographs his subjects, they begin to die horrible, bizarre deaths, then reappear as eerie death portraits. One by one, Brandon begins to lose people very close to him as he struggles to uncover the haunting mystery behind the cursed camera. When his eleven-year-old son goes missing, Brandon discovers the camera has supernatural powers and has trapped his son inside it. He must now risk it all and journey beyond the realm of all imagination to fight the hideous entities within, save his son and reverse the deadly curse that plagues them before they all become... Dead Still.
Cast:
- Aspen Steib
- Ben Browder
- Benjamin Boucvalt
- Carrie Lazar
- Chris Pezzano
- Corey Mendell Parker
- D.J. Mills
- Elle Lamont
- Gavin Casalegno
- Han Soto
- Heather Buckley
- Jia Calderini
- Joshua Hidalgo
- Lawrence Turner
- Luke J. Watson
- Mark Mills
- Natalie Mejer
- Niki Boyle
- Rachel Marie Booth
- Ray Wise
- Steffie Grote
- Toby Nichols

Genre: | http://rabbittvgo.com/guide/movie/info/114456-dead-still/ |
NEMA, FRSC sensitize road users on traffic rules ahead of Christmas
The National Emergency Management Agency (NEMA) and the Federal Road Safety Corps (FRSC) have raised motorists' awareness of the need to observe traffic rules and regulations in a bid to minimize road traffic accidents during the yuletide season.
Speaking at the flag-off of the 2021 sensitization campaign for road users plying Benin-Lagos expressway and other roads in Edo State, Dahiru Yusuf, head of operations, Benin operation office, NEMA, urged motorists to drive at a minimal speed and avoid wrongful overtaking.
Yusuf said the awareness campaign, with the theme, “safety above all”, was to rally support from other critical stakeholders to support the FRSC in curbing road mishaps, loss of lives and property during and after the festive period.
According to him, “during the Christmas and new year period, we always carry out campaigns at the end of the year to let road users know why they should obey traffic rules and regulations and drive safely to stay alive.”
Henry Benamaisia, Edo State sector commander of the FRSC, commended the emergency management agency for deploying its assets to support the corps during this period of high vehicular movement.
Benamaisia, while assuring that the FRSC would continue to collaborate with NEMA to ensure safe driving on roads, appealed to the agency to assist them with equipment such as stretchers, body bags and fire extinguishers among others. | https://businessday.ng/news/article/nema-frsc-sensitize-road-users-on-traffic-rules-ahead-of-christmas/ |
In April last year, the U.K. parliament suddenly realised that—even with the Great Repeal Bill—if new sanctions were imposed on a country by the U.N., it had no legislation to deal with introducing or complying with such sanctions. A White Paper was hurriedly put together, and in August, the Sanctions and Anti-Money Laundering Bill was introduced to the House of Commons. On 18 October, the same bill was introduced to the House of Lords. Which is where it still is, having reached the second committee stage by mid-December. The report stage, a further chance to examine the bill and make changes, is scheduled to begin on 15 January this year.
But why doesn’t Britain already have sanctions regulations? The U.K. currently uses European law to implement sanctions, regardless of whether they originated at the U.N. or the EU. The EU currently imposes some 30 sanction regimes, of which about half come from the U.N., for example, restrictions against people, institutions and trade in Russia, North Korea, Syria, Iraq, Iran and other countries. The new law will allow the U.K., for the first time, to impose sanctions on another country by itself. Currently, the U.K. has limited powers to impose some financial sanctions unilaterally, such as through the Terrorist Asset Freezing etc. Act 2010 or the Anti-terrorism, Crime and Security Act 2001. This new legislation is needed because, after Brexit, the Great Repeal Bill would only be able to maintain current sanctions.
The Sanctions and AML bill aims to:
create powers for the government to make regulations to impose sanctions
allow financial, immigration, trade, aircraft and shipping sanctions to be imposed
allow for regulations to create exceptions and licences to allow activities to take place that would otherwise be prohibited or restricted by sanctions
have ministerial and judicial review processes to allow individuals and organisations to challenge sanctions imposed on them
allow regulations to update existing provisions on anti-money laundering and terrorist financing, particularly the Money Laundering Regulations 2017, after the UK's exit from the EU
During the same period last year, the U.K. Office of Financial Sanctions Implementation (OFSI) was given new civil powers to impose fines and pursue prosecutions. It reportedly opened 125 investigations between March 2016 and July 2017. The OFSI also revealed that there are 60 ‘live’ investigations into organisations suspected of breaching the U.K. sanctions regime. The OFSI has come to be known as a new ‘U.K.-OFAC’ (referring to the U.S. Treasury’s sanctions enforcement agency, the Office of Foreign Assets Control).
What is the nature of the sanctions regime in the U.K.?
The most frequently applied measures are:
arms embargoes
bans on exporting equipment that might be used for internal repression
export controls
asset freezes and financial sanctions on designated individuals and corporate entities
travel bans on named individuals
bans on imports of raw materials or goods from the sanctions target
In general terms, it is a criminal offence to:
deal with funds or economic resources belonging to, owned, held or controlled by a Designated Person, if it is known, or if you have reasonable cause to suspect, that you are dealing with such funds or economic resources
make funds available to, or for the benefit of, a Designated Person if it is known, or if you have reasonable cause to suspect, that you are making funds so available
make economic resources available to, or for the benefit of, a Designated Person if it is known, or if you have reasonable cause to suspect, that you are making economic resources so available and, in the case of making economic resources available to a Designated Person, that the Designated Person would be likely to exchange the economic resources, or use them in exchange, for funds, goods or services
Source: Eversheds Sanctions Guide
Companies must now make the same kind of compliance efforts to manage the risk of OFSI enforcement, on top of existing EU and OFAC enforcement, especially since many continue to have a financial presence in London. If a problem does arise, OFSI guidance indicates that early internal investigation and, where appropriate, voluntary disclosures to the relevant U.K. authorities may help reduce financial penalties and/or criminal enforcement.
In the midst of all this activity comes a new report from Lexis Nexis—Better Safe Than Sorry—about building an effective sanctions compliance programme. The report's gist is that, pushed by recent OFAC actions, enforcement agencies, which are growing in number, are moving beyond their traditional targets, financial institutions. More than half of the 17 OFAC penalties levied in 2015, for example, involved non-banking organisations. The report notes that, according to the National Law Review, seven of those nine companies did not have a compliance programme at the time of the sanctions violations.
Take the case of National Oilwell Varco cited in the report. From 2002 to 2005, senior-level finance executives at Varco approved at least four commission payments from its Dreco subsidiary to a U.K.-based entity related to the sale and export of goods from Dreco to Iran. From 2006 to 2008, two deals worth about $13 million involved actual sales to Iran. From 2007 to 2009, Dreco engaged in 45 transactions valued at about $1.7 million involving the sale of goods to Cuba. Finally, there was a single transaction with Sudan worth around $20,000 in either 2005 or 2006; OFAC was not able to establish the precise date.
The lack of a compliance programme was not an issue with Varco, however, which introduced its U.S. Export Controls & Economic Sanctions policy in 1997. The most recent revision date for this policy was 2009 which, given its prosecution in 2015, might seem a little lax. Barclays, another company prosecuted by OFAC in 2015, and in more potential trouble if it turns out that it is associated with the Zupta scandal in South Africa, does not put a date on its sanctions policy, but it feels more recent. And HSBC—already implicated in the Zupta affair—is working on a brand-new sanctions and AML policy for introduction this year.
Barclays Sanctions Policy
The Barclays Group Sanctions Policy is designed to ensure that the Group complies with applicable sanctions laws in every jurisdiction in which it operates.
All Barclays Group Companies are required to screen against United Nations, European Union, UK Treasury and US Office of Foreign Assets Control (OFAC) sanctions lists at a minimum in all jurisdictions in which we operate, unless to do so would conflict with local legislation.
All employees receive training on the Sanctions Policy at least once a year, with more detailed and advanced training for those whose roles involve heightened sanctions risks. Failure to comply with the policy may give rise to disciplinary action, up to and including dismissal.
Source: Barclays AML Policy Statement
But sanctions are a fast-moving issue, says the report, citing new sanctions on Russia and North Korea in the last month, and fines can run into the billions of dollars. Without a robust, fast-moving compliance programme to address these issues, the costs for companies, not just financial but reputational, can be substantial. However, Lexis Nexis found that more than half of the companies it surveyed did not have a sanctions compliance programme in place.
The report puts together several useful checklists, including the items below; a short illustrative screening sketch follows the list:
know your customer & other third parties
know your product or service
know the receiving country
know the end-use
know the end-user
know the transaction
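As an illustration of the first of these checks, here is a minimal, hypothetical sketch of screening a counterparty name against a designated-persons list. The list entries, the fuzzy-matching approach and the 0.85 threshold are all illustrative assumptions, not an official OFSI or OFAC matching standard; real programmes load the published UK/EU/UN/OFAC lists and use specialised matching engines.

```python
# Minimal sketch of "know your customer" sanctions-list screening.
# Names, threshold and matching method are illustrative assumptions only.
from difflib import SequenceMatcher

# Hypothetical consolidated list; in practice, load the published
# UK/EU/UN/OFAC designated-persons lists instead.
DESIGNATED_PERSONS = [
    "Example Trading Company Ltd",
    "John Q. Sample",
]

def similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio between two names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen(counterparty: str, threshold: float = 0.85) -> list:
    """Return list entries resembling the counterparty name closely
    enough to warrant manual review."""
    scores = ((name, similarity(counterparty, name)) for name in DESIGNATED_PERSONS)
    return [(name, round(score, 2)) for name, score in scores if score >= threshold]

if __name__ == "__main__":
    # A near-miss spelling should still be flagged for a human reviewer.
    print(screen("Example Trading Co Ltd"))
```

Any hit from a check like this would go to a human reviewer rather than trigger an automatic block; the point is simply that screening must tolerate spelling variants.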
But who are the regulators in the United Kingdom? They are, unfortunately, manifold, and it does not appear that the new Bill will shrink the number of agencies involved in overseeing the sanctions regime. The Foreign and Commonwealth Office, according to the law firm Eversheds, “has overall responsibility for the U.K.’s policy on sanctions and embargoes, including the scope and content of international sanctions regimes.” HM Treasury is responsible for implementing and administering financial sanctions in the UK, work that is now carried out by OFSI. The Financial Conduct Authority is responsible for ensuring that regulated firms have adequate systems and controls to comply with requirements. The Export Control Organisation within BIS (the Department for Business, Innovation and Skills) “is responsible for trade sanctions, such as bans on weapon exports and for export licences.” The International Trade and Export Control Directorate of BIS and HM Revenue and Customs “advise and deal with trade policy, regimes and procedural issues governing imports to the U.K.”
With all of that enforcement in mind, it is possible to be granted exemptions under the regime. HM Treasury is responsible for granting exemptions to financial sanctions. The Export Control Organisation is in charge of issuing exemptions for exporting and trading in certain controlled goods. And finally, the Import Licensing Branch (also part of BIS) is responsible for licensing exemptions for importing controlled products.
But, without a compliance programme in place, a firm would not even know it needed to apply for an exemption. | https://www.complianceweek.com/post-brexit-new-uk-sanctions-laws-and-regulations-needed/2417.article |
This beautiful, free-flowering, evergreen shrub is fast growing, drought resistant and easy to grow. Tecomaria capensis normally has bright orange-red trumpet-shaped flowers, but colour forms from yellow through orange to deep red are commonly available. It is an excellent plant for a wildlife-friendly garden, attracting bees, butterflies, moths and nectar-feeding birds, especially sunbirds, their main pollinators.
Name Derivation:
- Tecomaria – “tecoma”, a closely related genus and “aria” denoting this relationship.
- capensis – from the Cape of Good Hope.
Common Names:
- Cape-honeysuckle (Eng), Kaapse kanferfoelie (Afr), morapa-šitšane (Nso), xunguxungu (Tso), mpashile (Ven), and umunyane (Zul).
SAF Number: 673.1
Size: 2 – 4 m (occasionally to 17 m) by 3 – 5 m.
Flowers:
- Narrow tubular flowers, widening to a 5-lobed, 2-lipped mouth.
- Flowers in dense, terminal clusters.
- Very showy when in full flower.
- Flowers produce copious amounts of nectar.
Colour: Orange-red, but also yellow, salmon, peach, apricot and red.
Flowering Months: All year with peaks in Sep-Oct and Mar-May.
Fragrance: Not fragrant.
Leaves:
- Evergreen.
- Leaves are compound, usually about seven leaflets and dark glossy green.
- The rachis (the stalk to which the leaflets are attached) and the petiole (leaf stalk) are slightly winged (flattened).
- The leaflets are variable in shape, elliptic to almost round, with scalloped margins.
Thorns: No thorns.
Fruit:
- Long and narrow (13 × 1 cm), flattened pod-like capsule, green to brown.
- The capsule splits lengthwise when ripe, releasing the papery winged seeds.
Bark:
- Pale brown.
- Young stems have many lenticels.
In the Garden:
- Tecomaria capensis can be used to great effect in almost any garden or landscape.
- Planted en masse on large embankments it can make a spectacular show.
- Tecomaria may be used to make a stunning hedge.
- Inter-plant with other shrubs like Plumbago auriculata, Bauhinia galpinii, Freylinia tropica and Leonotis leonurus for a spectacular colourful effect.
- Use as a screen to hide walls and fences, or as a feature plant on its own.
- There are many colour forms to choose from, from pale yellow to deep red.
- It can be used along boundary walls or amongst other plants in a shrubbery.
- The yellow form, known as Tecomaria capensis “Lutea” is a more compact shrub and more cold hardy than other varieties.
- A good choice for coastal gardens as it is wind resistant.
- A wildlife friendly plant, attracting insects and birds.
- Quick growing.
Care:
- A low maintenance plant.
- Feed with organic fertilizer and mulch with compost.
- Prune after winter to shape and stimulate flowering.
- Protect young plants against the cold in winter.
Cold Hardiness:
- Semi-hardy, but frosted plants normally recover quickly.
- Protect young plants.
Water Requirements:
- Drought hardy, but thrives on regular water in summer.
- Water-wise.
Light Requirements: More sun than shade.
Roots: The roots are not aggressive.
Birds:
- Insect-eating birds are attracted to the insects that visit these plants.
- Nectar feeders and sunbirds feed on the rich nectar in the flowers.
Insects and Butterflies:
- Honey bees, adult butterflies and other insects feed from the flowers.
- Tecomaria capensis is the larval host to Barker’s Smokey Blue and the Common Blue.
- Adult butterflies feed on nectar from the flowers.
- It is also larval host to 10 moth species.
Medicinal Uses:
- Used to treat many conditions, including fever, insomnia, bleeding gums, dysentery, pain and chest ailments.
- Also used to stimulate milk flow in nursing mothers.
Poisonous: Not poisonous.
Notes of interest:
- Some American botanists decided a few years ago that Tecomaria should be included in the genus Tecoma. This has since been shown to be incorrect and the genus Tecomaria was reinstated, but some nurseries and books published at the time carry the incorrect name.
- A bright red variety called ‘Rocky Horror’ is sometimes available.
- Some antelope do browse the leaves.
- Grown as an ornamental shrub in many other parts of the world, it is an unwanted invasive alien in some places, including the West Indies and New Zealand.
- Found in the WC, EC, KZN, Swa, Moz, M and L.
- The subspecies Tecomaria capensis subsp. capensis is endemic to southern Africa.
Natural Habitat:
- Fynbos, thicket, savanna-lowveld and savanna-bushveld.
- In margins of evergreen forests, in bush and scrub in coastal areas and along streams.
- From sea level to 1 200m. | https://witkoppenwildflower.co.za/tecomaria-capensis/ |
ANN ARBOR, Mich., Oct. 25, 2021 (GLOBE NEWSWIRE) -- Arotech Corporation's Training and Simulation Division (ATSD) announces a research and development partnership between MILO's Cognitive Division and the Kent State University Electrophysiological Neuroscience Laboratory of Kent (ENLoK).
"ENLoK is generating path-breaking social science discoveries," said Dr. Lisa Troyer, program manager, U.S. Army Combat Capabilities Development Command Army Research Laboratory. "The team's efforts are leading the use of immersive virtual reality and capabilities to identify neurological signals of influencers in groups that can support Army missions by better understanding Army influence networks as well as networks of adversarial groups."
DEVCOM ARL recently funded the DURIP (instrumentation) award to Dr. Joshua Pollock and Dr. Will Kalkhoff to enable the creation of an immersive virtual reality lab at the ENLoK that can be used in tandem with their other biophysiological technologies to continue to advance major breakthroughs in social dynamics. The funding sparked a research idea that prompted them to contact MILO after completing a research project with a local police department where they used a MILO Range Pro training system.
"The simulator performed flawlessly, and it inspired us to connect with MILO to discuss mutual interests and possible collaborative projects aimed at taking modern police and military training to the next level," said Dr. Will Kalkhoff, Executive Director of the ENLok and Professor in the KSU Sociology Department. "By conducting systematic, state-of-the-art research together, our shared ultimate goal is to enhance the efficacy of modern policing and make it safer for officers and citizens alike. We are honored to have formed a partnership with an industry leader in simulation training for public safety and military agencies."
"Support of this kind of social research is exactly why we established the MILO Cognitive Division," explained Robert McCue, MILO's General Manager. "We're thrilled to partner with ENLoK to learn more about how expanded MILO technology used for military training prepares their brains not only for reality but for improved social interactions."
"Kent State is well-known for both their academic leadership and their resilience," said Dr. Joy VerPlanck, MILO's academic liaison. "It's a great honor that they chose MILO as the most accurate and scalable tool for their advanced research needs in the military and law enforcement space."
For more information regarding MILO Cognitive, contact Amanda Williams ([email protected]).
About Arotech Training and Simulation Division
Arotech's Training and Simulation Division (ATSD) provides world-class simulation and training solutions. ATSD develops, manufactures, and markets right-fidelity solutions to train military, law enforcement, security, municipal, and private industry personnel. The division's fully interactive vehicle operator/crew systems feature state-of-the-art simulator technology enabling training in situation awareness, risk analysis and decision-making, emergency reaction and avoidance procedures, conscientious equipment operation, crew coordination, and engineering research. The division's judgment skills training products and live fire range solutions allow organizations to train their personnel in safe, productive, and realistic environments. The division supplies pilot decision-making support software for the F-15, F-16, F-18, F-22, and F-35 aircraft, simulation models for the ACMI/TACTS air combat training ranges, and Air-Refueling Boom Arm simulators. The division's live training and test instrumentation systems provide aircraft, including the Apache helicopter, with an immersive networked training environment.
Arotech Training and Simulation Division consists of FAAC Incorporated (www.faac.com), Inter-Coastal Electronics (www.inter-coastal.com), MILO (www.milo-lvc.com), and Realtime Technologies (www.simcreator.com).
This content was issued through the press release distribution service at Newswire.com. | https://ca.finance.yahoo.com/news/milo-announces-research-partnership-kent-150000336.html |
Oct 17, 2017
The effect of healthy choices on academic performance and how tutors can help their students make good decisions.
The link between academic success and healthy habits has become a popular topic. Perhaps not surprisingly, researchers have found strong evidence that students who make good health-related decisions perform better in school, while those who make poor decisions often see their grades suffer. In-depth studies look at a variety of health factors, each of which generally contributes to student success. Many school systems have recently looked to educators to guide students toward healthy choices. As a tutor, you can play a role as well, ultimately leading to better academic outcomes for your students. Here are 3 health-related factors that contribute to academic performance and a look at how tutors can help improve them.
Research has shown that physical fitness is correlated with academic achievement. In a 2013 study, sixth and ninth grade students who scored higher than their peers in fitness also exhibited better grades in math and social studies. A New York Times article examined the relationship between exercise and health, and noted that a “panel of experts assembled by the institute says that ‘a growing body of evidence’ suggests children who are more active are better able to focus their attention, are quicker to perform simple tasks, and have better working memories and problem solving skills than less-active children” and they also outscored their peers on standardized tests (source). Other studies have further supported the benefits, with exercise improving blood flow to the brain along with a variety of other brain functions. Another important reason for students to exercise is to help lower obesity among the student population, a problem that negatively affects grades as well.
How can tutors help?
Sessions don't have to be stationary: make your meetings mobile. Try taking your student on a walk instead of a regularly planned seated session. It's a great way to mix things up a bit, but it can also be a means to an end. For example, if you're a reading tutor, take your walk over to the public library. History tutor? Head to a local historical site. The options are endless and it's a fun and easy way to get students moving.
Try connecting your lessons to the concept of exercise. For example, calorie burn is a perfectly relevant topic for science tutors, just as math tutors can give students problems focusing on exercise-related factors, such as distance biked. If anything, including the topic throughout your lessons can keep the concept of exercise top of mind for students.
Be a role model to your students by exhibiting good exercise habits. Talk to your students about the sports you participate in, or try walking to sessions. Your students look up to you, and setting a good example is an easy way to influence their own decisions.
Routine is a contributor to overall health, as planning makes it easier for people to lower stress and find time for activities such as exercise and good nutrition. A student's schedule can affect academic performance in numerous ways. Research has shown a definitive correlation between schedule and academic performance. The organization of a student's day plays a role in how well they function in an educational environment. Routine changes can negatively affect academic performance as “revisions to classroom routines can cause some students to become unsure of exactly what to expect; not surprisingly, students perform better if there is consistency between teachers’ expectation, student responses, and teacher feedback” (source).
When considering which time of the day would be optimal for learning, researchers found that students who took math earlier in the day had higher grades in the subject. It’s unclear whether this is associated with higher attentiveness earlier in the day or a different factor, but it’s certainly a significant finding to consider. Another benefit of earlier learning is the room it creates for more sleep in the evening, which has been shown to be a significant contributor to academic outcomes.
How can tutors help?
Try following a relatively strict routine. Because organization and regularity within a student's schedule lead to better academic outcomes, keeping your sessions regular can be key. You can also follow a similar lesson plan outline from session to session to align with student expectations.
Keep sessions on the earlier side, if possible. Ensuring your sessions aren’t scheduled too late in the day will lead to higher levels of student attention and will make it easier for students to get to sleep earlier.
Ensure your sessions aren’t contributing to a student’s schedule-related stress. It’s common today for adolescent and child agendas to be jam-packed with extracurriculars outside of the classroom, and by scheduling your sessions on relatively light days you’ll be helping your student more easily find much-needed downtime on the off days.
It’s no secret that making healthy food choices comes with numerous benefits, and academic performance is no exception. Good nutrition improves grades and helps concentration. Research has shown that “healthier meals could raise student achievement by about 4 percentile points on average”. Meeting the recommended amount of fruits and vegetables in one’s diet contributes to improved academic outcomes. Another side of the nutrition argument is meal regularity. Students who eat breakfast exhibit improved concentration compared with peers who skip the important meal, and also show lower rates of obesity.
How can tutors help?
During sessions, provide healthy snacks for your students. By sharing food, you can lead by example. Social influence is a powerful factor in decision making, and when students see the choices you make they will often be inspired to make similar healthy choices.
Make it interactive by taking your student on a field trip. Depending on the subject you teach your students, there are a variety of locations that can contribute to a student’s understanding of nutrition. If you’re a biology tutor, for example, visit a local farm to look at the lifecycle of plants as part of your lesson. Math tutors can take students to the grocery store to work on math through shopping cart totals. Get creative with your lessons to incorporate food in a setting where you can educate on nutrition as you go along.
Developing healthy habits can make a significant impact on a student’s academic performance. As a tutor, you are an individual that your students look up to, and you have the opportunity to influence the decisions your students make when it comes to health. Even if it doesn’t seem simple to incorporate health-related topics into your sessions, the most important factor in changing human motivation is education. Simply speaking with your students about the correlating factors between health and grades can make all the difference.
For more tips on improving the academic outcome of your students, join Clark and follow along with the blog. | https://www.hiclark.com/blog/the-link-between-healthy-habits-and-academic-achievement/ |
How can we ensure that decision-makers really consider environmental ethics? For truly sustainable developments, it is vital to run local forums (1) and incorporate environmental ethics into the planning decisions. However, this is all too often ignored. In this article, we investigate how developers of a mini wind farm in Orkney failed to consider ecological ethics during the negotiation process.
Jordi Albacete
Before the creation of the national parks, Scottish naturalist John Muir (1829-1914) said:
‘Everybody needs beauty as well as bread, places to play and pray in which nature may heal and give strength to body and soul’.
In our post-industrial world, with our accelerated lifestyles, we often look for refuge and therapy in wild spaces. These places also provide local identity for people in the surrounding communities.
The current necessity to reduce dependence on fossil fuels means that some of these natural landscapes are altered. The presence of wind farms is not welcomed by everyone.
In 2013, I was studying for a master's degree at the University of Glasgow on how societies use technology to respond to environmental challenges. I looked at the ethical legitimacy of the development of a mini wind farm in Orkney, which began in 2007. The project, which had been approved by local authorities, had met with opposition from the local community. The development consisted of the construction of three 900kW turbines right in the middle of an area of high archaeological value. Orkney has some of the oldest monuments in the United Kingdom and Europe, such as the ruins of Skara Brae, which are more than 5,000 years old. These isles are a hub for leading research projects (2) into renewable energy, particularly in marine energy production. The headquarters of the European Marine Energy Centre are based there.
Scotland, territory of renewable energy
Scotland is a leading country in the decarbonisation of the economy. In 2009 it passed the Climate Change Act with important targets for the development of renewable energies. In 2011, Scotland set the ambitious goal of supplying, by 2020, 100% of its energy demand from renewable resources (SPP6, Scottish Planning Policy). By 2014, this target was half met.
The planning of energy production projects in Scotland is decided by agreement between local and national authorities. However, projects to decarbonise the economy have been criticised by rural communities.
In 2009, Scottish local authorities rejected 18 out of 35 wind farm proposals. Frequently, opposition to wind farms rests on two main arguments: the protection of wildlife, particularly birds; and the effect on the landscape. In 2011, Carolyn Riddell-Carre, a representative of the local government of the Scottish Borders Council (one of the regions with the lowest population densities in Scotland), said: “My main concern is [that] our picturesque landscape must not be destroyed in order to satisfy some national agenda”.
Indeed, preserving the landscape from wind farm development is a recurring theme in the renewable energy debate, tied to the protection of cultural and historical identity.
Under Scottish law, technologically innovative projects, such as wind farms, should meet two requirements when carried out in areas of archaeological interest. Firstly, an evaluation of the impact on the landscape must be carried out by two agencies of archaeological conservation. Secondly, it’s necessary to include a public consultation, which should ensure the participation of residents.
The visual impact on Orkney's archaeological landscape was reported on by the two public agencies, Scottish Natural Heritage and Historic Scotland. They claimed that the visual impact of the wind turbines would particularly affect the emotional values of connection with this ancient landscape. Despite this report, local authorities estimated that the visual impact of the three turbines would be minimal (in agreement with one of the companies participating in the project, Scotrenewables).
The other important stage to legitimise the project is public consultation. Some residents criticised the consultation, which found in favour of the project, saying that it was so poorly advertised that few in the community could attend. According to a letter from the neighbours, only 20 people attended that meeting.
Orkney Heritage and Renewable Energy
Orkney has some of the oldest archaeological sites in Britain. In the southwest of its main island, known locally as the Mainland, different archaeological monuments make up the Heart of Neolithic Orkney. This is formed by the ruins of Skara Brae, a human settlement more than 5,000 years old; the standing stones of the Ring of Brodgar and Stenness; and burial monuments like Maeshowe. Through these monuments, the presence of different prehistoric civilizations, including the Vikings, can be precisely tracked. These sites give a very clear idea of the megalithic culture of north-western Europe and its ceremonial practices, such as funerary rites. All these monuments were nestled in the topography with a purpose, forming a series of curves and slopes.
Over the course of contemporary history, this archaeological landscape has also been taken into account in the construction of modern buildings, creating a very diverse and complex landscape.
Orkney has become a reference point for developing renewable energy sources and a microcosm demonstrating to what extent an economy can be decarbonised.
This archipelago was the first territory of the United Kingdom to have a smart grid. In addition, it is a leader in the development of tidal power (produced by tidal current energy). The headquarters of the European Marine Energy Centre in Orkney is the first centre of its kind in the world, with machines producing energy from waves and tides. Currently, Orkney is a net exporter of renewable energy to the rest of Scotland.
Dominant Discourse vs. Social Ecology
The case of the wind farm in Orkney is quite unique. The ethical problem posed by the University was to reach a conclusion about the best ethical decision for this community. On one side of the fence, acceptance of the project would be a valuable step towards a decarbonised national economy; on the other, rejection or revision of the project would fall in line with the reports from the public archaeological agencies and promote the inclusion of social values and cultural identity.
Prioritising ethical values and participation in the decision-making process is the basis of a critical sociological theory named Social Ecology. Founded by Murray Bookchin (1921-2006), this theory connects environmental and social justice. From this perspective, hierarchies and a lack of horizontality in community decision-making generate exploitation, both between individuals and of the environment. The main goal of this philosophy is to develop a moral economy that moves beyond hierarchy and scarcity, in a world where communities are in harmony with the natural environment.
Applying social ecology to a conflict of interest might generate more questions than it answers. In the case of this wind farm, both the positions for and against the development encouraged the redefinition of some concepts like social welfare and even what exactly constitutes a natural resource.
The value that can be granted to social welfare is very volatile and depends on the system of values and beliefs of an individual and their collective identity. In this case, when we think about the welfare state, a social ecological perspective might pose the following question: which option is more important in an area where opportunities for economic development are limited: economic security and energy self-sufficiency, or preservation of archaeological landscapes, regardless of some of the immediate needs of the local, regional or national economy? On which time scale can social welfare be defined? Only in the present, or must future generations be taken into account? Perhaps the problem lies in the increasing distance between individual well-being and collective well-being.
“In which time scale can social welfare be defined? Only in the present or must future generations be taken into account?”
The definition of a natural resource also leads to many questions. How are the natural resources of this island assessed? By their economic production value (from wind power, in this case) or by their contribution to local identity? How can a symbolic value be evaluated and quantified? How can money be measured against such a symbolic value? The problem, therefore, might lie in scaling rather than in recognition. In other words, what could be the economic cost of cultural identity, and should it be costed in economic terms in the first place? All these questions require discussion, thorough and participative reflection, study, analysis, and perhaps a large dose of idealism and altruism. But perhaps the most important and obvious question for social ecology is: who asks and answers these questions? Where and how should strategic decisions for the development of a community be taken? Who defines the community, and how?
“Is it possible to price cultural identity?”
The redefinition of concepts is a challenge for citizen participation and interdisciplinary debate, where science, humanities, economics, social values and beliefs must all be involved in the discussion. Responsibilities arising from these discussions must also be well balanced with individual freedoms. William Blackstone, author of Ethics and Ecology, outlined that the meaning of equality and liability should be based on the balance between freedom and the moral responsibility to protect human rights. For example, it is broadly accepted that freedom of speech is limited by our respect for honour and integrity.
Internationally, there are already many initiatives where environmental and social ethics articulate community development plans. One of the best-known international initiatives is probably The Earth Charter, which seeks to apply ethics through the inclusion of the most deprived (e.g. prisoners) in sustainable development decisions.
In Scotland, there are various initiatives in which communities have organized to produce and manage their own renewable energy supplies, as is the case for the Community Power Scotland initiative. Other community initiatives in line with social ecology have emerged in the context of exploration for fracking in Scotland. The Community Charter is a document produced by a community of neighbours in the area of Falkirk (in East Central Scotland) where residents agree on “assets” that are fundamental to the present and the future of their communities, such as: a clean environment, future generations, dwellings, the stability of the community, the biodiversity of local ecosystems, food security, the health of the economy, and confidence in their political representatives.
The case of the wind farm, and its framing within the philosophy of social ecology, creates opportunities for debate, reflection, and ultimately its review. Climate change has raised the stakes for renewable energy. However, the alteration of the landscape in the installation of wind farms involves an incalculable impact on the emotional and cultural values transmitted over generations.
In these days when history is being made in Greece, we should recall that Greece begat democracy. The challenges of climate change are already obvious at every discussion table, and we should consider what collective response we must give to our planet and all its beings yet to come.
We will probably have to make some sacrifices for the sake of sustainable development (3) and to mitigate many of the mistakes that have been made in the past. A globalised world should be aware that there are millions of people who are never consulted, or who do not even know that they have that right.
International solidarity needs to ensure that democratic resources are granted to the most disadvantaged communities to empower them to make their own decisions, and that these decisions are respected. Orkney, 5,000 years ago, was home to one of the first prosperous settlements in the United Kingdom, which probably overused its natural resources.
In Greece, 2,500 years ago, the dawn of democracy began a chapter of world history that is now being shaken by the financial trap of the European economy. These are perhaps signs from the past to remind us that we must redirect ourselves towards a more sustainable future for our planet.
Use of English for Spanish Speakers
(1) To run local forums.
- Example: “For truly sustainable developments, it is vital to run local forums […] “. (“Para desarrollos verdaderamente sostenibles, es vital organizar foros […]“).
- Translation: organizar un foro.
- Comment: Pay attention to the use of run for meetings, forums, etc.
(2) A hub for leading research projects.
- Definition: (for hub) a focus of activity.
- Example: “These isles are a hub for leading research projects into renewable energy”. (“Estas islas son un centro líder en los proyectos de investigación de energías renovables”).
- Translation: centro.
- Comment: Be aware of this use, to have alternative options for centre.
(3) To make some sacrifices for the sake of sustainable development.
- Definition: (sake) benefit or well-being.
- Example: “We will probably have to make some sacrifices for the sake of sustainable development“. (“Probablemente tendremos que hacer algunos sacrificios para el beneficio del desarrollo sostenible […]“).
- Translation: beneficio, bienestar.
- Comment: be aware of the different uses and idioms with “sake”. | http://cosmopolitascotland.org/environmental-ethics-in-green-energy/ |
Susceptibility to climate change differs across sectors and regions. A clear
example is sea-level rise, which mostly affects coastal zones (see Box
19-2). People living in the coastal zone generally will be negatively affected
by sea-level rise, but the numbers of people differ by region. For example,
Nicholls et al. (1999) found that under a sea-level rise of about 40 cm by the
2080s, assuming increased coastal protection, 55 million people would be flooded
annually in south Asia; 21 million in southeast Asia, the Philippines, Indonesia,
and New Guinea; 14 million in Africa; and 3 million in the rest of the world.
The relative impacts in small island states also are significant (see Section
19.3). In addition, the Atlantic coast of North and Central America, the
Mediterranean, and the Baltic are projected to have the greatest loss of wetlands.
Inland areas face only secondary effects, which, unlike the negative primary
effects, may be either negative or positive (Yohe et al., 1996; Darwin and Tol,
2001).
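As a quick arithmetic check on the regional estimates quoted above, summing the Nicholls et al. figures implies a global total of roughly 93 million people flooded annually under that scenario. A minimal sketch (the dictionary keys are just shorthand labels for the regions named in the text):

```python
# Quick check: sum the Nicholls et al. (1999) regional estimates of people
# flooded annually under ~40 cm sea-level rise by the 2080s, assuming
# increased coastal protection.
flooded_millions = {
    "south Asia": 55,
    "southeast Asia, Philippines, Indonesia, New Guinea": 21,
    "Africa": 14,
    "rest of the world": 3,
}

total = sum(flooded_millions.values())
print(f"Global total: ~{total} million people flooded annually")  # ~93 million
```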
Box 19-3. The Impact of Climate Change on Agriculture
The pressures of climate change on the world's food system are better
understood than most other impacts. Research has focused on crop yields;
on the basis of those insights, many studies also look at farm productivity,
and a smaller number look at national and international agricultural markets.
Climate change is expected to increase yields at higher latitudes and
decrease yields at lower latitudes. Changes in precipitation, however,
also can affect yields and alter this general pattern locally and regionally.
Studies of the economic impact of this change (in all cases, climate change
associated with 2xCO2) conclude that the aggregated global
impact on the agricultural sector may be slightly negative to moderately
positive, depending on underlying assumptions (e.g., Rosenzweig and Parry,
1994; Darwin, 1999; Parry et al., 1999; Mendelsohn et al., 2000). Most
studies on which these findings are based include the positive effect
of carbon fertilization but exclude the negative impact of pests, diseases,
and other disturbances related to climate change (e.g., droughts, water
availability). The aggregate also hides substantial regional differences.
Beneficial effects are expected predominantly in the developed world;
strongly negative effects are expected for populations that are poorly
connected to regional and global trading systems. Regions that will get
drier or already are quite hot for agriculture also will suffer, as will
countries that are less well prepared to adapt (e.g., because of lack
of infrastructure, capital, or education). Losses may occur even if adaptive
capacity is only comparatively weak because trade patterns will shift
in favor of those adapting best. Overall, climate change is likely to
tip agricultural production in favor of well-to-do and well-fed regions, which
either benefit under moderate warming or suffer less severe losses, at
the expense of less-well-to-do and less well-fed regions. Some studies
indicate that the number of hungry and malnourished people in the world
may increase, because of climate change, by about 10% relative to the
baseline (i.e., an additional 80-90 million people) later in the
21st century (e.g., Parry et al., 1999).
Agriculture, to turn to another example, is a major economic
sector in some countries and a small one in others. Agriculture is one of the
sectors that is most susceptible to climate change, so countries with a large
portion of the economy in agriculture face a larger exposure to climate change
than countries with a lower share, and these shares vary widely. Whereas countries
of the Organisation for Economic Cooperation and Development (OECD) generate
about 2-3% of their GDP from agriculture, African countries generate 5-58%
(WRI, 1998).
Activities at the margin of climatic suitability have the most to lose from
climate change, if local conditions worsen, and the most to win if conditions
improve. One example is subsistence farming under severe water stressfor
instance, in semi-arid regions of Africa or south Asia. A decrease of precipitation,
an increase in evapotranspiration, or higher interannual variability (particularly
longer droughts) could tip the balance from a meager livelihood to no livelihood
at all, and the unique cultures often found in marginal areas could be lost.
An increase in precipitation, on the other hand, could reduce pressure on marginal
areas. Numerous modeling studies of shifts in production of global agricultureincluding
Kane et al. (1992), Rosenzweig and Parry (1994), Darwin et al. (1995), Leemans
(1997), Parry et al. (1999), and Darwin (1999)have estimated that production
in high-latitude countries is likely to increase and production in low-latitude
countries is likely to decrease, even though changes in total global output
of agriculture could be small. Results in the temperate zone are mixed. Low-latitude
countries tend to be least developed and depend heavily on subsistence farming.
Under current development trends they will continue to have a relatively high
share of GDP in agriculture. Thus, the impacts of declines in agricultural output
on low-latitude countries are likely to be proportionately greater than any
gains in high-latitude countries (see Box 19-3).
Vulnerability to the health effects of climate change also differs across regions
and within countries, and differences in adaptive capacity again are important.
Box 19-4 notes that wealthier countries will be better
able to cope with risks to human health than less wealthy countries. Risks also
vary within countries, however. In a country such as the United States, the
very young and the very old are most sensitive to heat waves and cold spells,
so regions with a rapidly growing or rapidly aging population would have relatively
large exposure to potential health impacts. In addition, poor people in wealthy
countries may be more vulnerable to health impacts than those with average incomes
in the same countries. For example, Kalkstein and Greene (1997) found that in
the United States, residents of inner cities, which have a higher proportion
of low-income people, are at greater risk of heat-stress mortality than others.
Differences among income groups may be more pronounced in developing and transition
countries because of the absence of the elaborate safety nets that developed
countries have constructed in response to other, nonclimate stresses.
These observations underscore one of the critical insights in Chapter
18: Adaptive capacity differs considerably between sectors and systems.
The ability to adapt to and cope with climate change impacts is a function of
wealth, technology, information, skills, infrastructure, institutions, equity,
empowerment, and ability to spread risk. The poorest segments of societies are
most vulnerable to climate change. Poverty determines vulnerability via several
mechanisms, principally in access to resources to allow coping with extreme
weather events and through marginalization from decisionmaking and social security
(Kelly and Adger, 2000). Vulnerability is likely to be differentiated by gender, for
example, through the "feminization of poverty" brought about by differential
gender roles in natural resource management (Agarwal, 1991). If climate change
increases water scarcity, women are likely to bear the labor and nutritional
impacts.
The suggested distribution of vulnerability to climate change can be observed
clearly in the pattern of vulnerability to natural disasters (e.g., Burton et
al., 1993). The poor are more vulnerable to natural disasters than the rich
because they live in more hazardous places, have less protection, and have less
reserves, insurance, and alternatives. Adger (1999), for instance, shows that
marginalized populations within coastal communities in northern Vietnam are
more susceptible to the impacts of present-day weather hazards and that, importantly,
the wider policy context can exacerbate this vulnerability. In the Vietnamese
case, the transition to market-based agriculture has decreased the access of
the poor to social safety nets and facilitated the ability of rich households
to overexploit mangroves, which previously provided protection from storms.
Similarly, Mustafa (1998) demonstrates differentiation of flood hazards in lowland
Pakistan by social group: Insecure tenure leads to greater impacts on poorer
communities. See Chapter 18 for further examples. The
natural disaster literature also concludes that organization, information, and
preparation can help mitigate large damages at a moderate cost (e.g., Burton
et al., 1993). This underscores the need for adaptation, particularly in poor
countries.
Box 19-4. The Health Impacts of Climate Change
Global climate change will have diverse impacts on human health, some
positive, most negative. Changes in the frequency and intensity of extreme
heat and cold, floods and droughts, and the profile of local air pollution
and aeroallergens will directly affect population health. Other effects
on population health will result from the impacts of climate change on
ecological and social systems. These impacts include changes in infectious
disease occurrence, local food production and nutritional adequacy, and
the various health consequences of population displacement and economic
disruption. Health impacts will occur very unevenly around the world.
In general, rich populations will be better protected against physical
damage, changes in patterns of heat and cold, introduction or spread of
infectious diseases, and any adverse changes in world food supplies.
The geographic range and seasonality of various vector-borne infectious
diseases (spread via organisms such as mosquitoes and ticks) will change,
affecting some populations that currently are at the margins of disease
distribution. The proportion of the world's population living in
regions of potential transmission of malaria and dengue fever, for example,
will increase. In areas where the disease currently is present, the seasonal
duration of transmission will increase. Decreases in transmission may
occur where precipitation decreases reduce vector survival, for example.
An increased frequency of heat waves will increase the risk of death
and serious illness, principally in older age groups and the urban poor.
The greatest increases in thermal stress are forecast for mid- to high-latitude
(temperate) cities, especially in populations with limited air conditioning.
Warmer winters and fewer cold spells, because of climate change, will
decrease cold-related mortality in many temperate countries. Basic research
to estimate the aggregate impact of these changes has so far been limited
largely to the United States and parts of Europe. Recent modeling of heat-wave
impacts in 44 U.S. urban populations, allowing for acclimatization, suggests
that large U.S. cities may experience, on average, several hundred extra
deaths per summer. Although the impact of climate change on thermal stress-related
mortality in developing country cities may be significant, there has been
little research in such populations.
For each anticipated adverse health impact, there is a range of social,
institutional, technological, and behavioral adaptation options that could
lessen that impact. The extent to which health care systems will have
the capacity to adopt them is unclear, however, particularly in developing
countries. There is a basic and general need for public health infrastructure
(programs, services, surveillance systems) to be strengthened and maintained.
The ability of affected communities to adapt to risks to health also depends
on social, political, and economic circumstances.
Many communities in the UK are threatened by flood risk, but some people are more disadvantaged than others.
The effects of flooding on individuals are wide ranging and include:
- power shortages
- disruptions to livelihood
- impacts on physical and mental health and wellbeing.
Flood effects also vary with the severity of flooding, and current investment in flood-risk strategies and defences is driven by physical exposure to flood risk.
But this misses a key consideration.
The social vulnerability of individuals and communities at risk determines their capacity to respond to the consequences of flooding.
Personal, social or economic circumstances significantly affect people’s vulnerability. Those most at risk include children, the elderly, the ill and the homeless.
For example, those with less economic security may not have insurance to cover flood damage costs, elderly individuals may be at higher risk of becoming ill after floods, and cultural differences in communities may lead to misunderstandings over flood warnings.
Flood disadvantage = Exposure to climate hazard + social vulnerability
Social vulnerability is influenced by a range of factors including:
- sensitivity (personal factors like age)
- adaptive capacity (ability to prepare, respond and recover, which is affected by income, mobility, access to information and insurance)
- environment (for instance housing and location).
Areas with significant exposure to flood risk and a large number of individuals with social vulnerability are areas at most flood disadvantage. Flooding in these areas may lead to greater losses than elsewhere.
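To make the relationship above concrete, here is a minimal sketch of how the two components might be combined into a single flood-disadvantage score for ranking areas. The 0-1 scales, the equal weighting of the vulnerability factors and the simple addition are illustrative assumptions, not the methodology of the research cited below.

```python
# Minimal sketch of: flood disadvantage = exposure + social vulnerability.
# All scales, weights and example figures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Area:
    name: str
    exposure: float           # exposure to flood hazard, scaled 0-1
    sensitivity: float        # personal factors such as age, 0-1
    adaptive_capacity: float  # income, mobility, information, insurance, 0-1
    environment: float        # housing quality and location, 0-1

    def social_vulnerability(self) -> float:
        # Low adaptive capacity raises vulnerability, so it enters inverted.
        return (self.sensitivity
                + (1.0 - self.adaptive_capacity)
                + self.environment) / 3.0

    def flood_disadvantage(self) -> float:
        return self.exposure + self.social_vulnerability()

areas = [
    Area("Riverside estate", exposure=0.8, sensitivity=0.7,
         adaptive_capacity=0.2, environment=0.6),
    Area("Hillside suburb", exposure=0.3, sensitivity=0.3,
         adaptive_capacity=0.8, environment=0.2),
]

# Rank areas so that investment can target the greatest flood disadvantage.
for area in sorted(areas, key=Area.flood_disadvantage, reverse=True):
    print(f"{area.name}: {area.flood_disadvantage():.2f}")
```

Ranking by a combined score like this, rather than by exposure alone, is what distinguishes targeting flood disadvantage from targeting flood risk.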
Social vulnerability needs to be taken into consideration in government funding, allocation and investment in response to flood risk.
And flood risk is increasing
The number of people exposed to flood risk in the UK is increasing due to environmental, social and policy changes.
Climate change is a driver of more frequent flooding, with higher river flows, more extreme weather and rising sea levels.
Population growth compounds this risk, with increasing housing demands putting pressure on councils to build houses in flood-risk areas.
An ageing population means more vulnerable people at risk, with greater challenges for coping and recovery after flood events.
Flood insurance in the UK is currently arranged so that premiums are higher for those living in areas at higher risk of flooding. This arrangement doesn’t account for individuals’ varying levels of income and poverty, or their ability to cover the cost of insurance.
Poverty increases individual and community vulnerability to the impacts of floods. As flood risk increases, so too does flood disadvantage.
Is the government doing enough?
The government’s investment programme to minimise exposure to flood risk in England does not fully consider flood disadvantage.
The government does allocate 2.25 times more funding to those in the 20% most deprived areas than in the 60% least deprived areas (Targeting flood investment and policy to minimise flood disadvantage. Kit England and Katherine Knox, June 2016). However, this allocation does not mean the money is being spent directly on reducing flood risk.
Funding to local flood authorities is not ring-fenced. In 2012, over a third of local flood authorities reported that some of their funding was diverted from flood risk management.
Funding takes long-term investment into consideration using a cost-benefit approach, but does not address the urban bias arising from larger populations, or the varying costs of flooding to different communities and individuals.
The Netherlands is leading the way
A legal flood-protection standard has been introduced in the Netherlands, which addresses economic capacity and social protection alongside cost-benefit analysis. This specifies a minimum safety level for everyone at risk of flooding.
The last word
The government needs to review its approach to flood investment to ensure that social vulnerability and deprivation are adequately addressed.
- Social vulnerability should be considered in local flood-risk management strategies in guidance and development plans.
- A minimum standard of protection for all should be considered.
- Funding policy should ensure that allocated funding is spent on flood risk management.
- Plans for new developments should include a better understanding of the different people affected and their varying capacity to respond to flooding when planning applications are reviewed.
- The increased risks associated with climate change should be factored in.
- Investment to increase flood resilience should take better account of the social context and equity issues. | https://www.hydrosolutions.co.uk/2019/03/19/flood-disadvantage-time-for-a-change/ |
What can you experience at COY13?
COY13 is made up of more than 250 individual programme contributions, organised and performed by more than 600 young climate activists from all over the world. The COY13 Organising Team’s task was to fit these contributions into an overall framework.
The core objective of COY13 is to empower and strengthen young individuals and youth movements to take responsibility for and action against climate change. Both participants and programme contributors shall gather knowledge, experience and expertise by being actively involved in COY13.
Capacity-building includes policy-training sessions and training in the other everyday “tools” of an activist. It also includes ensuring participants are up to date with recent scientific findings and political agendas. Sharing knowledge lays the foundation for successful participation in political and social processes. Furthermore, capacity-building focuses on strengthening existing youth movements, networks and organisations by showcasing best practices and providing space for the exchange of experiences.
COY13 is a key preparatory event for YOUNGO members participating in the following COP23 climate negotiations. This includes both updating all YOUNGO members on the latest status of the negotiations and discussing and developing policy papers within the different Working Groups. This is key to bringing the voice of youth to the negotiations and making their participation in COP23 as effective and successful as possible.
COY13 wants to encourage participants to discover existing opportunities and find new creative solutions to face the many challenges of climate change and shape the future we want. These solutions will have to happen at different levels: global, national, local and individual. Moreover, creativity is a means to process the challenges and threats posed by climate change and make them more comprehensible.
COY13 will be an opportunity to address a wide-ranging field of topics related to climate change and sustainability. As Fiji holds the presidency of this year’s COP, we will focus on the perspective of Small Island Developing States and the issue of Climate Justice. Furthermore, we want the programme to address the wide variety of theories of change pursued by different parts of the youth climate movement, broadly falling under the categories of activism and policy work.
The Fijian COP23 Presidency has raised overall awareness of the perspectives of Small Island Developing States (SIDS), which are vulnerable to and disproportionately affected by the impacts of climate change.
Climate Justice puts climate change and climate action in a broader social and environmental context. This includes aspects such as intergenerational equity, gender, human rights and the rights of indigenous peoples. Climate justice also recognises the responsibilities of the Global North due to its historic emissions. Building on this theme, participants will be encouraged to reflect not only on the current challenges in the international decision-making process, but also on their personal responsibilities towards nature.
COY13 will connect multiple strands and approaches of activism to spark discussion and share the skills required in the fields of policy and action. This also includes possibilities to develop national, regional or international networks and movements, and to showcase how individuals can get actively involved.
Bearing in mind that current negotiations at COP23 are highly technical, COY13 will provide policy training for young participants at COP23. There will be content-related sessions regarding topics addressed in the upcoming sessions of COP, APA, SBI and SBSTA. Furthermore, workshop sessions will share the necessary knowledge and skills for working with policy documents and advocating youth positions in various circumstances. | https://www.coy13.org/index.php/programme/ |
How many times have you been asked this question? I bet many, many times. It’s that birth order theory. You know the one: when you were born in relation to your siblings makes you who you are. (Just yesterday, someone in my yoga class asked me if I was the youngest sibling. “Nope,” I said with some satisfaction, “I’m the oldest.” As is often the case, she was dead wrong.)
So, What Have You Heard?
If you are using an e-reader or tablet that allows you to make notes, you can write your answers to this and all the other exercises in the book. Otherwise, you can keep some paper handy and write there. Or you can consider your answers and ideas and see how they compare to what experts and other readers have to say.
Think of and write down at least three words to describe what you’ve heard about birth order, for oldest child, middle child, and youngest child.
Then see how your descriptions line up with some of the standard characteristics.
Oldest Child
Middle Child
Youngest Child
In 1961, a man by the name of Walter Toman got the birth order theory off and running in a book titled Family Constellation: Its Effects on Personality and Social Behavior. Since then, folks have argued over whether birth order, more than anything else, determines our personality and how we get along outside the family.
Actually, Toman details eight basic birth order positions:
Whew! And as if that’s not enough, Toman also has portraits of the male only and female only child.
The problem is: most studies have failed to show that the order in which we’re born affects who we are. (One major exception is a study by Frank J. Sulloway, who found that over the last four hundred years, the most important scientists in the world were younger than at least one sibling in their family.)
Birth order is not so different from your astrological sign. You may find some characteristics that match and some that don’t. For example, I’m a Leo. And as the lion, I am supposed to be creative and passionate, but also self-centered, lazy, and someone who likes expensive things. Well, some of that is accurate (creative, passionate) but some is way off base (lazy, self-centered). At least, that’s my opinion.
Don’t get me wrong: birth order is a factor in who we are and our relationships outside of the family. But it’s only one of many influences on how we see ourselves and our family ties. | https://www.bublish.com/bubble/stream/janemerksyleder/the-sibling-connection-how-siblings-shape-our-lives-7976 |
Why Pursue Intercultural Studies?
Canada today is more diverse than ever before, as globalization translates into greater workplace and domestic cultural diversity. Language, cultural reference points, history, and social norms differ widely among the many groups constituting the Canadian mosaic. That makes it more necessary than ever for Canadians to develop knowledge of the world’s diverse cultures, and of how they relate and interact.
Saint Mary’s Intercultural Studies program will allow you to develop both efficient practical strategies and theoretical knowledge for interacting successfully with colleagues, customers, clients, students, and acquaintances from different sociocultural backgrounds—regionally, nationally, and globally.
What can I do with a degree in Intercultural Studies?
Developing intercultural communication skills is becoming critical for professionals, employers and employees, business owners, educators, and social workers. Theories in Intercultural Studies have flourished in the past 20 years in a wide spectrum of professional and academic specializations, from health to social work, education and business.
Graduates with an Intercultural Studies major will be better equipped to work in changing, diverse, multilingual contemporary workplaces. Students who plan to pursue careers in environments that involve understanding between different cultural groups—especially where different linguistic backgrounds are at play—will find it especially valuable. Intercultural Studies will be an asset in many professions, including social work, counselling, teaching, education, administration, business, customer relations, international development, law, information management, museum studies, film studies, fine arts, and many more.
Hands-on Learning
Students will be encouraged to volunteer with our partner community organizations. In the fourth year of the program, students must take the course “Applied Intercultural Studies”. This course is designed to provide students with an experiential learning component in which they put into practice the knowledge and know-how they have gained in the Intercultural Studies program.
Students will have different options to choose from: study-abroad programs, Community Service Learning (abroad or local), and special projects.
Sample Courses offered
The Intercultural Studies program (ICST) is an interdisciplinary program structured around four core courses:
- 1st year: Intercultural Communication
- 2nd year: Located Voices and Decentered Subjectivities
- 3rd year: Cross-Cultural Psychology
- 4th year: Applied Intercultural Studies
Students are asked to take complementary courses from a list of accredited courses divided into four overarching themes:
- Intercultural Issues in Global Context
- Communities and Identities
- Historical Perspectives on Cultural Representations
- Language and Power
An Intercultural Studies degree complements studies throughout the Faculty of Arts and beyond. | https://smu.ca/academics/intercultural-studies.html |
What is grey literature?
Grey literature is information which has been published informally or non-commercially, or remains unpublished. It can appear in many forms, including conference papers, government reports, statistics, patents, patient information sheets or posters.
Grey literature is not usually peer-reviewed, but may still be good, reliable information. Appraisal of grey literature should be conducted with the same rigour as for peer-reviewed sources.
See the PHCRIS Introduction to Accessing the Grey Literature for a more detailed overview. It is aimed at medical researchers but is relevant to anyone searching for grey literature.
The University of South Australia has a comprehensive Grey Literature guide.
Why is grey literature important?
"Published trials tend to be larger and show an overall greater treatment effect than grey trials. This has important implications for reviewers who need to ensure they identify grey trials, in order to minimise the risk of introducing bias into their review"
Hopewell et al. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database Sys. Rev. 2007.
"failure to identify trials reported in conference proceedings and other grey literature might affect the results of a systematic review"
6.2.1.8 Grey literature databases. Cochrane Handbook for Systematic Reviews of Interventions.
Grey literature can be found through a number of methods, including:
This guide lists some resources to help you get started, but is not comprehensive - you'll need to look elsewhere as well.
Grey literature is hard to find - be systematic!
Before you start searching, ask yourself:
This will influence how and where you search.
Tips for using search engines:
For more detailed guidance, see the online tutorials Academic research on the internet and Developing a search strategy.
Evaluating grey literature
You should evaluate every source you use, but it's particularly important to do so when using grey literature.
The quality of grey literature can vary greatly as it comes from a wide range of sources and doesn't go through the traditional editorial process.
The AACODS checklist is designed to enable evaluation and critical appraisal of grey literature.
The checklist includes the following areas:
Tyndall, J. AACODS Checklist. Flinders University, 2010. Available from http://dspace.flinders.edu.au/dspace/
The Evaluating sources section of the Academic research on the internet tutorial can also help guide you through an evaluation process.
Selected databases, repositories and website gateways:
Selected websites
Nursing and midwifery:
Nutrition and dietetics:
Rural and remote health:
Indigenous health:
Library databases that include some grey literature:
Many other databases include conference proceedings, check their descriptions on the databases library guide.
Theses:
Use Search to find Monash theses, and Trove for other Australian theses. See the theses library guide for international theses and more detailed information.
Government publications:
See the library's government publications guide for more detailed advice on locating documents from Australia and other countries.
Selected Australian government sites:
Search tools, directories and newsletters:
Other websites: | https://guides.lib.monash.edu/c.php?g=219702&p=3821290 |
As part of the Student Services Wellbeing, Counselling and Mental Health team, my role involves both mental health development work and practitioner work in evidence-based cognitive behavioural therapy interventions for mild to moderate mental health problems. Prior to my current role at Student Services, I gained a range of academic and professional experience that informs my work - completing my doctorate and teaching at the University, working as Registry Student Support Co-ordinator, and gaining years of experience and training within the Student Services Wardennial Service and Wellbeing team.
In my mental health development role, I am a member of the University's strategic mental health and suicide prevention working groups, and support the delivery of projects and initiatives to improve student and staff mental health and wellbeing at St Andrews. I play a lead role in institutional work on our Student Mental Health Agreement, and deliver training and presentations to support the development of staff and student literacy in mental health and suicide prevention.
In my practitioner role, I work to support and empower students in overcoming the challenges they face in order to fulfil their potential in their studies and life. My delivery of dynamic, evidence-based CBT interventions is characterised by a person-centred, psychoeducational approach rooted in the strengths and values of each person, and by the integration of mindfulness-based cognitive defusion techniques.
Alongside my daytime role I am a Warden, managing the Wardennial team and service delivery at McIntosh Hall. As Warden, I am passionate about the creation of a welcoming, supportive and inclusive environment within which students can feel a sense of belonging and find opportunities for personal growth and development within their hall community. | https://www.st-andrews.ac.uk/student-services/staff/adamwelstead/ |
DIGITALEUROPE believes that codes of conduct, like certification, can play an important role in facilitating as well as demonstrating compliance with the General Data Protection Regulation (GDPR). The GDPR text provides sufficient flexibility as to how codes can be brought into actual existence, and GDPR implementation must make it practical for organisations to develop and participate in codes.
To date, no EU-wide code has been approved, and the limited number of codes that do exist are all restricted to national application. This inherently fragments the European market for codes of conduct and greatly reduces their potential to facilitate GDPR compliance.
d. Ensuring that flexible and harmonised rules for monitoring bodies are put in place.
We understand this language as a general requirement for codes to consider how their application could benefit different industry sectors, types of processing operations and/or SMEs, but not as a hard requirement that codes be applicable only to a single industry sector or an SME subset of a single sector. Throughout the draft Guidelines, however, the EDPB describes a code’s applicability to a single sector as an absolute requirement.
We believe that the final Guidelines should explicitly recognise that, provided a code ‘aim[s] to codify how the GDPR shall apply in a specific, practical and precise manner,’2 it can in principle be applicable to more than one industry sector. This also applies to the definition of ‘code owners,’3 which should be clarified as potentially referring to more than a single association hailing from a single industry sector.
Drafting codes for a single sector using legal and technology concepts only applicable to it ‘is an acceptable method,’4 but the GDPR does not prescribe that it be the only one. This should be a factual determination based on the contents and merits of each code.
Organisations from different sectors, or organisations comprising multiple sectors, might find it appropriate to adopt largely similar solutions to implement GDPR compliance, with respect to specific types of data processing or even to their data processing operations as a whole.5 The fact that a code detailing such solutions might be open to companies – be they multinationals or SMEs – from sectors as varied as retail and manufacturing shouldn’t in and of itself preclude its approval.
We regret the EDPB’s choice not to include more detailed consideration of transfers to third countries beyond paragraph 17. Although we appreciate that separate guidelines are being announced, we believe that transfers to non-EEA jurisdictions will represent a key factor in generating uptake of GDPR codes and should therefore be dealt with in the final Guidelines.
Because GDPR codes can in principle allow for a comprehensive assessment of an organisation’s processing activities,6 which may include transfers to third countries or international organisations, we believe the final Guidelines should explicitly state that, to the extent that the commitments required by Art. 46(2)(e) are included in a code, adherence to such code can represent an appropriate safeguard to enable third-country transfers.
This approach appears narrow and of limited value. On the one hand, the relatively straightforward procedure described for national codes does not address the complexity generated by different Member State laws. On the other, the procedure described for ‘transnational’ codes adds layers of redundancy compared to the GDPR text. This will not help in the assessment of codes and, most importantly, will inhibit the approval of EU-wide codes, which contradicts the GDPR’s fundamental goal of ensuring the free movement of personal data within the Union.
By contrast, the draft Guidelines require the national DPA to whom a code was submitted to individually identify and notify the DPAs concerned, letting them participate in a joint review, although final approval of the code would still rest with the original DPA. In addition, the DPAs concerned are subsequently also provided with an opportunity to raise issues before the code is submitted to the EDPB.
We would find it more beneficial if the final Guidelines focused on the elements necessary for DPAs to determine whether codes submitted to them could be considered as relating to processing activities in several Member States, which would trigger reference to the EDPB as described in Art. 40(7) and the Commission’s assessment under Art. 40(9). While it is the Commission alone who can grant a code EU-wide validity,8 the Commission’s assessment is the final step in a longer process that essentially rests with the DPAs and the EDPB, and as such we urge the EDPB to focus on this aspect.
While codes should quite clearly not contradict Member State – or, for that matter, EU – law, and while in some cases – particularly for solely national codes – specific reference to national legislation might be in order, a general requirement for all codes to explicitly cover national legislation is not included in the GDPR text. The final Guidelines should therefore make it clear that reference to specific Member State law should only be provided if relevant.
The draft Guidelines stipulate that transnational codes should always be submitted in the language of the relevant national DPA.10 We believe that a more pragmatic approach should be described whereby, unless clear exceptions can be found, EU-wide codes can be submitted in English so as to facilitate the procedure for an EDPB Opinion and subsequent Commission implementing act.
We welcome the draft Guidelines’ recognition that codes may be monitored by either external or internal monitoring bodies, provided that relevant procedures and structures to ensure their independence and expertise are in place.11 This flexible approach will make codes more easily implementable and scalable.
Along the same lines, we’d like to draw the EDPB’s attention to the fact that the draft Guidelines state that accreditation ‘applies only for a specific code.’12 However, footnote 11 states that ‘a monitoring body may be accredited for more than one code provided it satisfies the requirements for accreditation.’ We believe this should be the default position and that this text should be moved to the body of the document.
The draft Guidelines refer in passing to the similarities between internal monitoring bodies and data protection officers (DPOs). We would welcome it if the final Guidelines could elaborate more on this relationship and on whether, or under what additional safeguards, DPOs could be accredited as monitoring bodies in their own right in light of their statutory independence with respect to their tasks and duties.
Finally, it is important to stress that a code cannot be approved if it doesn’t identify a monitoring body. As a consequence, it appears that no codes can be submitted unless criteria for the accreditation of monitoring bodies have been approved by the competent DPAs. Given the delays and the potentially divergent results that this process may create, we encourage the EDPB to consider approving an Opinion, or more detailed Guidelines, setting out consistent and harmonised criteria for accreditation. | https://www.digitaleurope.org/resources/response-to-public-consultation-on-draft-edpb-guidelines-on-codes-of-conduct-and-monitoring-bodies/ |
DESPITE FAWCETT’S once-enormous fame, many details of his life, like those of his death, have been shrouded in mystery. Until recently, Fawcett’s family kept the bulk of his papers private. Moreover, the contents of many of the diaries and correspondence of his colleagues and companions, such as Raleigh Rimell, have never been published.
In trying to excavate Fawcett’s life, I have drawn extensively on these materials. They include Fawcett’s diaries and logbooks; the correspondence of his wife and children, as well as those of his closest exploring companions and his most bitter rivals; the journals of members of his military unit during World War I; and Rimell’s final letters from the 1925 expedition, which had been passed down to a cousin once removed. Fawcett himself was a compulsive writer who left behind an enormous amount of firsthand information in scientific and esoteric journals, and his son Brian, who edited
I also benefited from the tremendous research of other authors, particularly in reconstructing historical periods. I would have been lost, for instance, without John Hemming’s three-volume history on the Brazilian Indians or his book
Anything that appears in the text between quotation marks, including conversation in the jungle from vanished explorers, comes directly from a diary, a letter, or some other written document and is cited in the notes. In a few places, I found minor discrepancies in the quotations between published versions of letters, which had been edited, and their original; in these cases, I reverted to the original. In an effort to keep the notes as concise as possible, I do not include citations for well-established or uncontroversial facts, or when it is clear that a person is speaking directly to me. | http://indbooks.in/mirror1/?p=294214 |
‘These songs are my salvation’: Dundee’s Be Charlotte pens emotional note as she releases new song amid lockdown
A Dundee pop star has released her latest single in the hopes of helping people get through the lockdown.
Be Charlotte wants her new song Lights Off – which was unveiled on Friday morning – to give music fans “three minutes of relief” amid the Covid-19 pandemic.
In a personal note, the singer-songwriter explained her decision to release the song amid the health crisis, and admitted she had debated for weeks whether or not to do so.
She said Lights Off is an “incredibly personal” track which she wrote in a flat in Glasgow in August 2017 at a time when she felt “defeated, depressed and uninspired”.
She described creating music as “my salvation” in the emotional message to fans.
She wrote: “We are in a time of unprecedented difficulty. The world as we knew it has changed. I realise that a lot of people are struggling just now.
“The pandemic has affected millions of people around the world and might continue to do so for some time.
“I have weighed up all the pros and cons of releasing music at this time of international crisis and changed my mind many times over the last few weeks.
“I have decided to release Lights Off as planned because of the circumstances in which it was written.
“I initially wrote it on an old out of tune piano in my flat in Glasgow, Scotland, at a time I felt defeated, depressed and uninspired.
“At that moment I recognised that I have felt like that quite often throughout my life.
“I think this song was the realisation that I had to find a way to express that sadness.
“Since writing the song it has opened up a lot of doors for me and is often the song that my fans want to hear the most. It’s the song that my record label wanted to release since day one.”
She added she has waited “a long time to share it (Lights Off) with the world”.
Be Charlotte said: “These songs are my salvation. They are my way to figure things out.
“I know music isn’t the cure for what we are collectively experiencing right now but I hope this song can give you three minutes of relief from whatever you are dealing with and maybe even make you dance a little.
“There are a lot of pressures involved in releasing music but in my mind, the song has already done what I intended it to do.
“I hope that it can connect with people who are feeling like they are not sure what to do next or how to get through the tough times.
“I hope it can encourage people to express themselves when their own mental health is suffering.
“We can dream with the lights off. We can see the rainbow through the clouds. We can get through this together.”
Be Charlotte is the stage name of former Morgan Academy pupil Charlotte Brimner.
In September 2018, she joined the likes of Lewis Capaldi and Primal Scream in playing the 3D Festival which marked the opening of V&A Dundee.
In 2018 she signed a deal with record label Columbia/Sony Music. | https://www.thecourier.co.uk/fp/news/local/dundee/1302814/these-songs-are-my-salvation-dundees-be-charlotte-pens-emotional-note-as-she-releases-new-song-amid-lockdown/ |
Insights at the Atrium: "Fire in my mouth: Remembering the Triangle Shirtwaist Factory Fire"
Composer Julia Wolfe, Forward archivist Chana Pollack, and Remember the Triangle Fire Coalition founder Ruth Sergel will discuss the 1911 Triangle Shirtwaist Factory fire, its impact, and the importance of memorializing it. Ms. Wolfe will discuss her new multimedia work Fire in my mouth, and Ms. Pollack will present archival materials from the Forward's front-lines coverage of the tragedy and its aftermath. Philharmonic President and CEO Deborah Borda moderates.
Drawing inspiration from folk, classical, and rock genres, Julia Wolfe’s music brings a modern sensibility to each while tearing down the walls between them. Her art-ballad Steel Hammer, runner-up for the Pulitzer Prize in Music, was inspired by the legends and music of Appalachia. Its text is culled from more than 200 versions of the John Henry ballad — telling the story of the story. The work, for Trio Mediaeval and the Bang on a Can All-Stars, was released on CD in April 2014 and will be presented at the Brooklyn Academy of Music’s (BAM) 2015 Next Wave Festival, directed by Anne Bogart. Ms. Wolfe’s recent body concerto, riSE and fLY, for percussionist Colin Currie and orchestra, features Mr. Currie playing rapid-fire rhythms on his body. Ms. Wolfe has a major body of work for strings. In addition to her quartets, Cruel Sister for string orchestra, inspired by a traditional English ballad, received its U.S. premiere at the Spoleto Festival. A CD of the work, coupled with Fuel, performed by Hamburg’s Ensemble Resonanz, is on the Cantaloupe label. Julia Wolfe’s collaborators include Anna Deavere Smith, Diller Scofidio+Renfro, Bill Morrison, Ridge Theater, Francois Girard, Jim Findlay, Jeff Sugg, and Susan Marshall. Her music has been heard at BAM, Settembre Musica (in Italy), Theatre de la Ville (Paris), Lincoln Center, Carnegie Hall, NCPA in Beijing, and LG Arts Center Korea, among others. In 2009 Ms. Wolfe joined the New York University–Steinhardt School composition faculty. She is co-founder and co-artistic director of New York’s music collective Bang on a Can. Her music is published by Red Poppy Music (ASCAP) and is distributed worldwide by G. Schirmer, Inc. | https://nyphil.org/concerts-tickets/1819/insights-jan-15 |
South Korea’s fertility rate is expected to hit a record-breaking 0.96 births per woman this year. This is due to various factors, including a high unemployment rate and the increasing cost of living, but fundamentally it stems from the high costs of marriage and childbirth. In a country influenced by Confucianism, women in South Korea are in charge of most (if not all) household chores and childcare, while men are in charge of providing income for the family.
This long-standing uneven distribution of household labor can also be seen in the labor market, where care workers continue to receive low wages compared to other white-collar and blue-collar workers. According to research, illegal discrimination through sex segregation and employment instability were core reasons for the total wage differentials, and low human capital was also common among care workers. However, measured human capital is limited by the inherent difficulty of quantifying the output of care work.
The undervaluing of care work has led to women spending more time in unpaid household labor within families, which is a crucial part of family formation. According to research, South Korean men perform an average of more than 7 hours of paid labor every day, while South Korean women spend the same amount of time on household labor. Korea’s ratio of women’s time spent on household labor to men’s is more than double, the highest compared to France, Spain, the UK, the Netherlands, the USA, Germany, Norway and Denmark. Men’s unpaid work time increases in societies with an equal gender ideology and norms, reduced paid labor time and high employment levels among married women, while women’s unpaid work time increases in societies with reduced total working time and long parental leave. Therefore, in order to increase men’s household labor time, national policies must reduce total working time and reform the unequal gender ideology present in the society.
References:
Kim, Young Mi. 2014. “A Comparative Study on the Relation between Welfare State Policy and the Unpaid Work Time by Gender.” Korea Social Policy Review 21(1): 143-177.
Lee, Ju Hwan and Ja Young Yoon. 2015. “Wage Penalty and Decomposition of Care Employment.” Korean Journal of Social Welfare Studies 46(4): 33-57. | http://inequality.kr/en/2018/11/30/childcare-and-household-labor-in-south-korea/ |
1. Field of the Invention
The present invention relates to a vibration-proof construction method, more particularly to a vibration-proof construction method for preventing or reducing the vibrations from vibration generating sources such as a road, railroad structure, or the like, to surrounding structures and the ground surface, by suppressing vibration propagation directly underneath the vibration generating sources or in the nearby ground.
2. Description of the Related Art
In recent years, vibrational disturbances alongside roads, railroad structures, and the like have frequently occurred due to traffic vibration or mechanical vibration. In particular, the negative effects of such vibrations on surrounding houses and residents are serious where traffic is heavy or railway tracks are close by, and accordingly effective and efficient countermeasures for suppressing such vibrations have been strongly demanded.
As conventionally known suppression methods, there are, for example, the vibration-screening trench construction method, in which a hollow space is provided on a propagation path of vibrations in the ground, and the vibration-impeding underground wall construction method, in which the hollow trench is filled with a suitable material. These construction methods obtain vibration-proofing effects by directly blocking vibrations propagating in the ground with the hollow trench or the underground wall. The former method, however, suffers not only from the increased cost of the additional construction needed for soil-retaining structures or supporting members, since it is realistically impossible to retain the hollow trench as it is, but also from the loss of vibration-blocking effects caused by that additional construction. The latter method, on the other hand, does nothing but replace the hollow trench with an underground wall of uniform material quality so as to eliminate the additional construction required in the former method, and therefore cannot obtain vibration-proofing effects as great as those of the former method.
As a solution to these problems, the present inventors have proposed an anti-vibration method (the Wave Impeding Block (WIB) construction method using horizontal blocks) in which flat blocks are laid underground (Japanese Patent No. 2850187 (claims, etc.)), and in a later application have proposed an improved construction method (Japanese Patent No. 2764696; U.S. Pat. No. 5,779,397 (claims, etc.)). In these techniques, flat blocks of predetermined size, stiffness, and depth are laid underground beneath or around a substructure which generates or receives vibration. These methods were realized based upon a theory regarding wave propagation in the ground (an identification method for the propagation/non-propagation phenomenon of waves) which had been established by the present inventors.
Moreover, with the above-described WIB method, a problem has remained in that the anti-vibration effects are low for vibrations in the low-frequency band below 5 Hz, and also for earthquakes and artificial vibration sources such as traffic vibration in ground influenced by low-frequency bands. In order to solve this problem, the present inventors have proposed a technique that obtains anti-vibration effects for vibrations in the low-frequency band below 5 Hz while retaining the advantages of the WIB construction method (see Japanese Unexamined Patent Application Publication No. 2000-282501 (claims, etc.)).
Furthermore, the present inventors carried out studies to improve vibration-proofing effects based upon the theory regarding wave propagation within the ground described in the foregoing Japanese Patent No. 2850187. As a result of these studies, they found that a building structure which takes advantage of the physical properties of scrap tires can obtain excellent vibration-proofing effects which cannot be obtained with the conventional methods, and they presented this discovery to the Society (The 36th Geotechnical Conference Presentation (2001 Presentation Lectures, May 8, 2000, Japanese Geotechnical Society)).
Although each of the above-described vibration-proof construction methods proposed by the present inventors is an effective vibration-suppressing method, in recent years the required performance has been increasing more and more; furthermore, suppressing construction costs, including material costs, has been demanded more strongly than ever.
| |
COMPANY: City of Kelowna
POSITION: Communications Advisor – 12+ month term
LOCATION: Kelowna
Posting Number: CK52044P
Position Title: Communications Advisor – 12+ month term
Job Classification: Communications Advisor
No. of Staff Required: 2
Job Type: Term
Division: Corporate Strategic Services
Job Summary/Basic Function:
Lead the conversation on the future of our city!
Are you an experienced communications professional looking for new challenges and creative opportunities?
Join the Communications Department at the City of Kelowna, where you will support City-led projects and programs that deliver services to citizens and advance community vision.
A Communications Advisor leads communications and community engagement on projects that advance the City’s strategic priorities through a corporate service delivery model.
You will have key internal business partners within the City whom you guide and support, delivering a full suite of services, from media event planning to issues management, to internal and external communications strategies and tactics for both traditional and digital channels.
Engage your strong writing, research and interpersonal skills to develop and implement communication plans, public participation strategies and public awareness campaigns.
As a creative communicator with both tactical and strategic experience, you can move seamlessly from defining goals, to analyzing audiences and risks, to creating the in-market collateral for a variety of channels.
Flexibility to balance priorities and multiple projects is critical, as you will support multiple business partners in your portfolio.
The successful candidate will have a university degree in Communications or Public Relations and will have at least four years of related employment experience that includes public engagement, strategic and collaborative communications, media relations, issues management and social media.
Experience in web content management, design software and digital engagement tools will be preferred.
Duties
Qualifications
Posting Date: 10/22/2021
Closing Date: 11/02/2021
Special Instructions to Applicants
We are currently recruiting for 2 term positions.
These positions are work-from-home roles and are open to candidates across British Columbia:
Please attach copies of your education/membership(s) as ‘other documents’.
Testing may be required.
If no fully qualified candidates apply, those not fully qualified may be considered for a training opportunity.
Training rates will apply. | https://www.mncv.org/other-general/communications-advisor-12-month-term-9cf689/ |
Background. Similarly to other age groups, there are significant social inequalities in health among young adults (YA). Education is thought to be the most appropriate indicator of YA socioeconomic status (SES), yet it is often in progress at that age and may not be representative of future achievement. Therefore, scholars have explored YA ‘expected’ education as a proxy of SES. However, no study has examined how it compares to the more common SES indicator, ‘completed’ education. Methods. Using data from 1,457 YA surveyed twice over a two-year period, we describe associations between participants’ completed and expected education at baseline and completed education at follow-up. We then compare associations between these two measures and three health outcomes – smoking status, self-rated mental health, and participation in physical activity and sports – at baseline and follow-up using regression models. Results. At baseline, half of the participants were imputed a higher ‘expected’ level than that ‘completed’ at that time. In regression models, ‘expected’ and ‘completed’ education were strongly associated with all outcomes and performed slightly differently in terms of effect size, statistical significance, and model fit. Conclusions. ‘Expected’ education offers a good approximation of future achievement. More importantly, ‘expected’ and ‘completed’ education variables can be conceptualized as complementary indicators associated with inequalities in health in YA. Using both may help better understand social inequalities in health in YA. | https://papyrus.bib.umontreal.ca/xmlui/handle/1866/19939 |
Socioeconomic disadvantage is a fundamental cause of morbidity and mortality. One of the most important ways that governments buffer the adverse consequences of socioeconomic disadvantage is through the provision of social assistance. We conducted a systematic review of research examining the health impact of social assistance programs in high-income countries.
Methods
We systematically searched Embase, Medline, ProQuest, Scopus, and Web of Science from inception to December 2017 for peer-reviewed studies published in English-language journals. We identified empirical patterns through a qualitative synthesis of the evidence. We also evaluated the empirical rigour of the selected literature.
Results
Seventeen studies met our inclusion criteria. Thirteen descriptive studies rated as weak (n = 7), moderate (n = 4), and strong (n = 2) found that social assistance is associated with adverse health outcomes and that social assistance recipients exhibit worse health outcomes relative to non-recipients. Four experimental and quasi-experimental studies, all rated as strong (n = 4), found that efforts to limit the receipt of social assistance or reduce its generosity (also known as welfare reform) were associated with adverse health trends.
Conclusions
Evidence from the existing literature suggests that social assistance programs in high-income countries are failing to maintain the health of socioeconomically disadvantaged populations. These findings may in part reflect the influence of residual confounding due to unobserved characteristics that distinguish recipients from non-recipients. They may also indicate that the scope and generosity of existing programs are insufficient to offset the negative health consequences of severe socioeconomic disadvantage.
2. Faraz Vahid Shahidi, Odmaa Sod-Erdene, Chantel Ramraj, Vincent Hildebrand, Arjumand Siddiqi (2018), “Government social assistance programmes are failing to protect the health of low-income populations: evidence from the USA and Canada (2003–2014).” Journal of Epidemiology and Community Health, Publisher’s Site
Abstract
Background Social policies that improve the availability and distribution of key socioeconomic resources such as income, wealth and employment are believed to present the most promising avenue for reducing health inequalities. The present study aims to estimate the effect of social assistance recipiency on the health of low-income earners in the USA and Canada.
Methods Drawing on nationally representative survey data (National Health Interview Survey and the Canadian Community Health Survey), we employed propensity score matching to match recipients of social assistance to comparable sets of non-recipient ‘controls’. Using a variety of matching algorithms, we estimated the treatment effect of social assistance recipiency on self-rated health, chronic conditions, hypertension, obesity, smoking, binge drinking and physical inactivity.
Results After accounting for underlying differences in the demographic and socioeconomic characteristics of recipients and non-recipients, we found that social assistance recipiency was associated with worse health status or, at best, the absence of a clear health advantage. This finding was consistent across several different matching strategies and a diverse range of health outcomes.
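As a rough, illustrative sketch of the propensity score matching described in the Methods above: estimate each respondent's probability of receiving assistance from observed covariates, pair each recipient with the most similar non-recipient, and compare outcomes. The data, covariates and outcome below are synthetic placeholders; the study's actual variables, matching algorithms and survey weights are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic placeholder data: columns stand in for covariates such as
# age, education and income; 'treated' marks social assistance receipt.
X = rng.normal(size=(1000, 3))
treated = rng.random(1000) < 1.0 / (1.0 + np.exp(-X[:, 0]))

# Step 1: model the probability of receiving assistance (propensity score).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: nearest-neighbour matching (with replacement) on the score.
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# Step 3: compare a (placeholder) health outcome between recipients
# and their matched controls.
outcome = rng.normal(size=1000)
att = outcome[t_idx].mean() - outcome[matches].mean()
print(f"Estimated effect on the treated (toy data): {att:.3f}")
```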
3. Arjumand Siddiqi, Faraz Vahid Shahidi, Vincent Hildebrand, Anthony Hong, Sanjay Basu (2018), “Illustrating a ‘Consequential’ Shift in the Study of Health Inequalities: A Decomposition of Racial Differences in the Distribution of Body Mass,” Annals of Epidemiology, 28(4), 236-241. Publisher’s Site
Abstract
Purpose
We present a conceptual introduction to “distributional inequalities”—differences in distributions of risk factors or other outcomes between social groups—as a consequential shift for research on health inequalities. We also review a companion analytical methodology, “distributional decomposition”, which can assess the population characteristics that explain distributional inequalities.
Methods
Using the 1999–2012 U.S. National Health and Nutrition Examination Survey, we apply statistical decomposition to (a) document gender-specific, black-white inequalities in the distribution of body mass index (BMI) and, (b) assess the extent to which demographic (age), socioeconomic (family income, education), and behavioral predictors (caloric intake, physical activity, smoking, alcohol consumption) are associated with broader distributional inequalities in BMI.
Results
Black people demonstrate favorable or no different caloric intake, smoking, or alcohol consumption than whites, but worse levels of physical activity. Racial inequalities extend beyond the obesity threshold to the broader BMI distribution. Demographic, socioeconomic, and behavioral characteristics jointly explain more of the distributional inequality among men than women.
Conclusions
Black-white distributional inequalities are present both among men and women, although the mechanisms may differ by gender. The notion of “distributional inequalities” offers an additional purchase for studying social inequalities in health.
4. Can Erutku and Vincent Hildebrand (2018), “Carbon Tax at the Pump in British Columbia and Quebec,” Canadian Public Policy, 44(2), 126-133. Publisher’s Site
British Columbia and Quebec introduced a carbon tax on the sale of retail gasoline in July 2008 and October 2007, respectively. Our findings suggest that the BC carbon tax had a short-term negative effect on gasoline consumption per capita and led to an amplified behavioural response, but only initially. This amplified response might have been the consequence of a constant carbon tax after July 2012. In comparison, we find weak evidence that the QC carbon tax had a negative effect on gasoline consumption per capita and created an amplified behavioural response. Moreover, these impacts appeared only years after the introduction of the QC carbon tax. This delay might be explained by the increase in carbon cost incurred by QC fossil fuel distributors after their participation in the Western Climate Initiative Regional Carbon Market, which started in January 2015. We believe, however, that more research is necessary to reach more definitive conclusions about the effect of carbon taxes on gasoline consumption. | https://www.glendon.yorku.ca/economics/professor-vincent-hildebrand-publishes-four-new-articles-in-bmc-public-health-journal-of-epidemiology-and-community-health-energy-policy-annals-of-epidemiology-and-canadian-public-policy/ |
Matthew was an extraordinary young man in so many ways: an excellent student; an amazing athlete; a sophisticated entrepreneur; a budding musician; an accomplished computer programmer; and a genuinely loyal friend. He was a perfect son as well. Always well behaved and unfailingly responsible, Matt never had a drink or an illegal drug.
Matt was kind, empathetic, and generous with implacable integrity. He had a wry, good sense of humor, but would never offend or insult anyone.
In his brief 18 years, Matt’s accomplishments were impressive. He had created two online businesses. He had become an accomplished computer scientist. He was becoming a very good rock drummer. And he was named “Captain” of his high school track team. Calculus and physics were courses where he excelled, and he regularly tutored friends who struggled in these subjects. Although these subjects came easily to him, he always showed unfailing patience with the friends he helped.
Matt was very shy, never wanting to be the center of attention. Always showing appropriate restraint, he lacked the false bravado so many young men affect to shield insecurity. He was truly loved and respected by all who knew him, and had a lasting positive impact on his family and friends. Everyone who knew him, and even those who did not have that pleasure, regularly comment on how strongly they were influenced by him. | http://www.mattsfoundation.org/about-matthew |
Which type of bond is present in hydrogen sulfide (H2S)?
Usually, covalent bonds form between two nonmetals (p-block with p-block elements) when the electronegativity difference between the atoms is less than 1.7. Hydrogen sulfide (H2S) is a covalent compound because the bonds formed between the two hydrogen atoms and the one sulfur atom are covalent in nature. Here are some related questions people asked in various search engines.
Is there hydrogen bonding in H2S?
Short answer: a hydrogen bond forms between two molecules if they have hydrogen covalently bonded to one of the three most electronegative atoms (N, O, F). As there is no N, O or F in H2S, there is no hydrogen bonding, although the molecule does have dipole-dipole forces.
Is H2S a polar bond?
H2S is a slightly polar molecule because of the small difference in electronegativity values of Hydrogen (2.2) and Sulfur (2.58) atoms.
Is H2S single bond?
Step 3: Now construct the structure of H2S by connecting each hydrogen atom to the central atom with a single bond. Step 4: After placing the central atom and connecting the single bonds, we have to fulfill the octet rule for every atom in H2S to satisfy the last requirement of the Lewis diagram.
Does N2 have covalent bonds?
Nitrogen has 5 valence electrons and therefore needs three more electrons to complete its octet configuration. Mutual sharing of three pairs of electrons gives a triple covalent bond.
Why is hydrogen bond not present in H2S?
However, in the H2S molecule the central sulfur atom is less electronegative and larger in size, so it is unable to form intermolecular hydrogen bonds. This is why H2S, unlike water, does not exist as a liquid at room temperature.
What type of bond is h20?
H2O contains covalent bonds, as the two elements are non-metals. A water molecule is a simple molecule, and simple molecules consist of a small number of atoms joined by covalent bonds.
What is H2S molecular polarity?
Because the H2S molecule is not symmetrical there is a region of unequal sharing. The bent shape means that the top (where the lone pairs of electrons are) is more electronegative. The Hydrogen atoms at the bottom of the structure are then more positive. Therefore, H2S is a polar molecule.
Is hydrogen fluoride polar or nonpolar?
Hydrogen fluoride (HF) can be described as a very polar molecule, while hydrogen (H2) is nonpolar. The origin of the polarization of the HF covalent bond has to do with electronegativity, an inherent property of all atoms.
What type of bond is between the nitrogen atoms in N2?
The N2 Lewis structure has a triple bond between two nitrogen atoms. According to the octet rule, nitrogen atoms need to bond three times. The N2 molecule is diatomic, meaning that two atoms of the same element are connected in a pair.
How does hydrogen and sulfur bond?
Each of the hydrogen atoms can share an electron with the sulfur atom and make a molecular bond. … Because its outer shell is not full, a single atom of sulfur can make a molecular bond with each of the two hydrogen atoms, resulting in a single molecule of hydrogen sulfide.
How many bonds does H2 have?
But since hydrogen wants a complete shell of 2 electrons, the two hydrogen atoms share one pair of electrons, so H2 contains a single covalent bond.
Why does nitrogen form covalent bonds?
Nitrogen atoms will form three covalent bonds (also called a triple covalent bond) between two atoms of nitrogen because each nitrogen atom needs three electrons to fill its outermost shell. … The two atoms share these electrons equally, creating three nonpolar covalent bonds.
What type of bonding is present in a hydrogen chloride molecule?
Hydrogen chloride is made from molecules. The hydrogen atom and the chlorine atom are joined by a covalent bond.
Does hydrogen fluoride have hydrogen bonding?
Hydrogen fluoride has an abnormally high boiling point for a molecule of its size (293 K or 20°C), and can condense under cool conditions. This is due to the fact that hydrogen fluoride can form hydrogen bonds. … Hydrogen bonds form between the δ+ hydrogen on one HF molecule and a lone pair on the fluorine of another.
Is SrCl2 ionic or covalent?
SrCl2 is ionic; you can check by the electronegativity difference: Cl (3.0) – Sr (1.0) = 2.0. A difference of 0.0 to 0.4 is covalent, 0.4 to 1.7 is polar covalent, and above 1.7 is ionic. You can also assume that a metal and a non-metal will form an ionic bond. So basically it is an ionic compound.
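The electronegativity-difference rule of thumb quoted above is easy to mechanise. The sketch below applies it to several compounds discussed on this page, using standard Pauling values (note that Sr is 0.95 on the Pauling scale, which the answer above rounds to 1.0); the 0.4 and 1.7 cutoffs are the approximate ones given in the answer, not hard physical boundaries.

```python
# Rule-of-thumb bond classifier using Pauling electronegativities and the
# approximate 0.4 / 1.7 cutoffs quoted above. Real bonding is a continuum,
# so borderline cases should be treated with caution.

ELECTRONEGATIVITY = {
    "H": 2.20, "S": 2.58, "O": 3.44, "F": 3.98, "Cl": 3.16, "Sr": 0.95,
}

def classify_bond(a, b):
    diff = abs(ELECTRONEGATIVITY[a] - ELECTRONEGATIVITY[b])
    if diff <= 0.4:
        kind = "covalent (non-polar)"
    elif diff <= 1.7:
        kind = "polar covalent"
    else:
        kind = "ionic"
    return f"{a}-{b}: difference {diff:.2f} -> {kind}"

for pair in [("H", "S"), ("H", "O"), ("H", "F"), ("Sr", "Cl")]:
    print(classify_bond(*pair))
```

Note the limits of the heuristic: H-S (difference 0.38) falls just inside the non-polar band even though H2S is described above as slightly polar, and H-F (difference 1.78) is flagged ionic even though HF is a polar covalent molecule.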
Why is hydrogen fluoride a polar bond?
The hydrogen fluoride (HF) molecule is polar by virtue of polar covalent bonds; in the covalent bond, electrons are displaced toward the more electronegative fluorine atom. The polar covalent bond, HF. … The resulting hydrogen atom carries a partial positive charge.
Is hydrogen iodide polar or nonpolar?
All heteronuclear diatomic molecules are polar. Hydrogen iodide (HI): notice the symmetry of the molecule: when divided, the top and bottom as well as the left and right are not mirror images of one another. One also knows the molecule is polar because the bond is polar.
Is nif2 ionic or covalent?
Nickel(II) fluoride is the chemical compound with the formula NiF2. It is an ionic compound of nickel and fluorine and forms yellowish to green tetragonal crystals.
What type of bonds are there in a N2 molecule?
A pair of atoms may be connected by one or by two pi bonds only if a sigma bond also exists between them; in the molecule of nitrogen (N2), for example, the triple bond between the two nitrogen atoms comprises a sigma bond and two pi bonds.
What type of bond is N2 polar or nonpolar?
It is non-polar, because it is made up of two identical nitrogen atoms, both of which have the same electronegativity. Therefore none of the nitrogen atoms pull the electrons at a greater strength towards its nucleus than the other, and so the bond is not polar.
How many sigma bonds are present in N2 molecules?
One sigma bond and two pi bonds are present in the N2 molecule.
What is the expected bond angle in H2S?
Thus, the predicted bond angle in hydrogen sulfide is approximately 92.1°.
How many covalent bonds are made by the central atom of H2S?
Each of the two hydrogen atoms forms a bond by sharing an electron with the central sulfur atom, giving covalent bonds. b) The electron geometry of H2S is tetrahedral. c) The central sulfur atom is bonded to two hydrogen atoms by two single bonds. Therefore, there are two bonds.
What type of bond occurs between nitrogen and sulfur?
|Atom|Valence|
|Oxygen|2|
|Sulfur|2|
|Nitrogen|3|
|Carbon|4|
Which type of bonding is found in all molecular substances?
Which type of bonding is found in all molecular substances? In all molecules, the atoms are bonded by the sharing of electrons, that is, by covalent bonding.
What type of bond is carbon and hydrogen?
The carbon-hydrogen bond (C–H bond) is a bond between carbon and hydrogen atoms that can be found in many organic compounds. This bond is a covalent bond meaning that carbon shares its outer valence electrons with up to four hydrogens. This completes both of their outer shells making them stable.
How many bonds does fluorine form?
It has 9 electrons, 2 core and 7 valence. Rather than forming 7 bonds, fluorine only forms a single bond for basically the same reasons that oxygen only forms two bonds. Hydrogen fluoride, HF, has one bond, but four centers of electron density around the fluorine.
How many bonds do carbon nitrogen oxygen and hydrogen form?
Oxygen forms two single covalent bonds, carbon forms four single covalent bonds and hydrogen forms one single covalent bond.
Can hydrogen form 3 bonds?
Hydrogen can form only one covalent bond, by pairing its only electron with another unpaired electron of another atom (of hydrogen or other element). | https://whoatwherewhy.com/which-type-of-bond-is-present-in-hydrogen-sulfide-h2s/ |
The cube has all its sides equal; therefore, its volume is equal to the cube of its side length. If the radius and height of a cone and a cylinder are the same, then the volume of the cone is equal to one-third of the volume of the cylinder. The formula for the volume of a cuboid and that of a rectangular prism are the same.
https://www.wikihow.com/Calculate-Volume
What are the 3 ways to measure volume? There are other units for measuring volume; cubic inches, cubic feet, and cubic yards are all units used for measuring volume. Milliliters, liters, and gallons are also used, especially when measuring liquids. We write cubic units using a small 3 (a superscript) next to the unit.
https://reviews.tn/wiki/what-is-net-volume-chemistry/
Determining the volume of shapes can be done using certain formulas. The formulas for each of these shapes are: Cube volume (V) equals s^3 where s is the side of the cube, V=s^3. Rectangular prism volume (V) equals L times W times H, where L is the length, W is the width and H is the height, V = L x W x H.
https://www.wikihow.com/Calculate-Volume
This is the volume of the rectangular shape which corresponds to the dimensions entered for length, width and height.
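As a quick illustration of these formulas, here is a short Python sketch (the function names are illustrative, not from the sources above):

```python
import math

def cube_volume(s):
    return s ** 3                     # V = s^3

def rectangular_prism_volume(l, w, h):
    return l * w * h                  # V = L x W x H

def cylinder_volume(r, h):
    return math.pi * r ** 2 * h       # V = pi * r^2 * h

def cone_volume(r, h):
    return cylinder_volume(r, h) / 3  # one-third of the matching cylinder

# A cone with the same radius and height as a cylinder has one-third its volume:
print(cylinder_volume(2, 5))  # ~62.83 cubic units
print(cone_volume(2, 5))      # ~20.94 cubic units
```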
5 Elements Of Crafting A Compelling Story
As a writer, you want people to pay attention to what you want to say. With so much clutter and noise, how can you make your work stand out? The answer is storytelling. Stories fire neurons and light up our brain in the same way as an actual physical action would. The makeup of a compelling and well-structured story comes down to engaging characters, relatable plots, and most importantly, a feeling of connection. Here are the five C’s of storytelling.
The five elements of crafting a compelling story:
#1 Connect
The first element of storytelling is to connect with your audience. We may think rationally, but we make decisions based on emotions. As a writer, it is crucial to tap into your reader’s emotions. Without this connection, engagement is impossible. Emotions help you build relationships with your audience. This is your opportunity to personalize your story and have people hooked from start to finish. You need to leverage this connection to your advantage by introducing an obstacle, which leads us to the second element of crafting a compelling story.
#2 Challenge
The second element of storytelling is to challenge your audience, with the goal of resolving that challenge by the end of the story. Make your audience feel special. Your writing should appeal to many but talk directly to only one person. Articulate how you too were burdened by the same challenge. Highlight the core problem your audience has. To keep them engaged and begging for more, you need to make them really feel the pain. Just telling them isn’t enough. It’s time for the rollercoaster of conflict to begin.
#3 Conflict
Now it’s time to add opposing forces that create an unfavorable result for your audience. Walk your audience through the worst parts of the problems that you are trying to solve. Pile up the hurdles standing between them and their goals. Introduce new hurdles and make your audience feel hopeless, ready to give up. Leave them emotionally drained. This will set them up for what’s coming next, when all their hardships wither away.
#4 Conquer
Now it’s time to provide some hope. Show your audience how the characters have overcome the hurdles presented. Give them the feeling that a positive result is possible. As the story develops, take your audience on a journey from giving up all the way through to the results, feelings or accolades that your characters achieve on the other side. You are putting your readers in the shoes of your characters, and making them feel like the solution to all their problems is in their hands. By this part of your story, the reader is deeply emotionally invested in your character’s journey. They start to visualize themselves conquering their problems and achieving their goals. The emotional rollercoaster is coming to an end, and your reader finally sees a solution. They are inspired, ready and willing to take action.
#5 Conclude
Your job as a storyteller isn’t complete until you take your reader’s hand and guide them to the very next step they should take. Deliver a solution to the challenge, ending on a positive message the audience can take away. This is usually one part of a greater solution. But your job isn’t quite done. You need to guide your reader through their next step. What is the very next thing they need to do? Is there somewhere they should visit? Where can they get help or inspiration? What other resources would be helpful?
Wrap
Stories will help you break down barriers and sail you through the sea of noise your readers are exposed to. The more personal you can make the experience, the greater the connection will be. Your job as a writer is to evoke emotion and connection at an early stage in your story. But that’s not enough. Create a visceral experience that magnifies a core challenge and creates a feeling of mental conflict. Until you resolve this conflict and empower your readers to conquer their demons, the story isn’t complete. Every step of the way, you are guiding your readers on a journey, an experience, and an adventure until finally, you tell them exactly what they should do next. Keep your readers hooked for as long as you can. Hand over the reins and let others tell the story for you when the time is right.
The Best Man by: Kristan Higgins
Faith Holland left her hometown after being jilted at the altar. Now a little older and wiser, she’s ready to return to the Blue Heron Winery, her family’s vineyard, to confront the ghosts of her past, and maybe enjoy a glass of red. After all, there’s some great scenery there.
Like Levi Cooper, the local police chief—and best friend of her former fiancé. There’s a lot about Levi that Faith never noticed, and it’s not just those deep green eyes. The only catch is she’s having a hard time forgetting that he helped ruin her wedding all those years ago. If she can find a minute amidst all her family drama to stop and smell the rosé, she just might find a reason to stay at Blue Heron, and finish that walk down the aisle.(Goodreads)
In honor of the Romance Writers of America (RWA 2013) 33rd Annual Conference being held in Atlanta, Georgia next week (July 17-20), I decided to write a book review of one of my favorite Romance authors, Kristan Higgins. She has about twelve books published and I have devoured five. They are all different in story plots and characters (no series here, maybe a few recurring faces) but all guarantee to whisk you away to a part of America with beautiful landscapes, well-developed characters, quirky friends and families, a touch of heartache and a simmering, building romance.
“Maybe love isn’t just a bouquet of roses once in a while. Maybe it’s just sticking it out, when it’s hard, when you’re mad, when you’re tired.”
The Best Man is her most recent endeavor and probably my favorite to date. Well, Chief Levi Cooper might have had something to do with it. But anyway… Until There Was You was my first Kristan Higgins book, and I enjoyed it so much that I wanted to see if her others would be as good. They are.
After reading some very heavy books that create emotional exhaustion, I turn to Higgins for a lighter read. That in no way means fluffy or “less of a book” to me. She is an excellent writer who can easily transport the reader to a small town (this one: the beautiful Finger Lakes) and make you feel as if you are part of the cast. The well-developed, zany sub-characters and dysfunctional family dynamics are always a favorite of mine. I always look forward to the new people I will meet when I open one of her books.
Higgins also adds elements of drama realistically without going over the top. Just enough to have you rooting for the characters to “heal” and to pull on your heart strings a bit. The humor and embarrassing scenes she creates give the characters humility in a relatable way: the string of bad dates for Faith, the wardrobe malfunction (kill me now), the forever bickering grandparents, her sister’s crazy sexcapades, finding her Dad a new wife…just to name a few.
The Best Man is a perfect summer read. Or beach read. An “in-between darker stories” book. Perfect to curl up with on a rainy day with a cup of coffee or tea. Actually, great with a glass of wine and some snacks. When you need a good laugh and a simmering romance kind of book. Or when you are reminiscing about…oh, just pick it up already!
Summary: Unlike other University-owned residence areas, the University Villas had minimal composting opportunities despite every apartment having a full-functioning kitchen. As a solution, the project team proposed providing personal compost bins for each of the units in the hopes of decreasing the amount of food waste going to landfill. An original pilot test of 60 units was approved, and with the Housing Office’s assistance the implementation has expanded to every unit in the University Villas.
Summary: When the building was initially constructed in 1999, the three basketball courts of the Malley Center were lit via high-intensity discharge lighting. With the development of energy-efficient LED lighting, the applicants felt it was time to renovate the courts to be more sustainable. As a result, the LED lighting is anticipated to reduce the facility’s energy consumption by 25%--amounting to $56,523 in savings over 5 years.
Summary: The SCU chapter of the national nonprofit Food Recovery Network works with Dining Services by Bon Appetit twice a week to transport leftover food to Martha’s Kitchen -- a soup kitchen in San Jose. The approved funding is for the registered student organization to open a ZipCar business account to increase the reliability of transportation for the pick-ups, addressing one of the RSO’s largest challenges. In addition to helping feed the homeless, the increased transports will keep more food out of the landfill.
Summary: All too often classroom lights and media projectors are left on all night in the main academic buildings on campus. This team’s research found that 41 metric tons of carbon dioxide are emitted needlessly every year between Kenna, Vari, Lucas, and O’Connor Halls. Their solution is to install motion and ambient light sensors in the classrooms of these buildings to reduce overlighting and the chances of lights being left on throughout the night. As a pilot test, the full scope of the motion and ambient light sensors will be installed in Kenna Hall, and motion sensors will be installed in Lucas Hall.
Summary: Currently, the water used for the sinks and showers in Swig Hall is heated via natural gas, which emits carbon dioxide. As a way to help with Santa Clara’s goal of climate neutrality by 2020, the project team proposed the installation of solar panels on the roof of Swig to be the power source for the building’s hot water. By connecting the solar heater to Swig's plumbing the need for natural gas will be eliminated, and CO2 emissions will decrease.
Summary: Currently the cyclorama sheet at the back of the Mayer Theatre stage, used for scene lighting for performances, is lit by 30 one-kilowatt and 30 750-watt incandescent bulbs. The applicants proposed replacing the 60 incandescent bulbs with 24 LED fixtures that would provide the same quality of lighting at significantly reduced energy consumption; the team estimated a 93% reduction in electrical usage. The proposal requested $31,000 for the 24 LED lights and the electrical supporting materials needed for the new lights, while labor was considered cost-free as members of the Theatre & Dance Department faculty and staff would install the fixtures; the estimated ROI of the project is a savings of $5,577.98 per year (plus savings from reductions in energy used to cool the building due to less heat emitted from the new fixtures, and increased efficiency due to less staff labor time spent replacing bulbs). The project received endorsements from Butch Coyne (Director of SCUpresents) and David Popalisky (Theatre & Dance Department Chair), with product consultation provided by Dina Myers (Musson Theatrical).
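For a rough sense of the payback implied by these figures, here is a back-of-the-envelope sketch (illustrative only; it uses just the numbers quoted above and ignores the unquantified cooling and labor savings):

```python
project_cost = 31_000.00    # requested funding for fixtures and materials
annual_savings = 5_577.98   # estimated electricity savings per year

payback_years = project_cost / annual_savings
print(f"Simple payback: {payback_years:.1f} years")  # about 5.6 years
```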
Summary: The University Operations department currently uses two-cycle leaf blowers that run on gasoline and pre-mix oil, which operate for about 10 hours each week. The addition of a rechargeable electric blower would substantially limit carbon emissions, producing six times less CO2 than the existing blowers. Through this pilot program, University Operations will determine if additional funding should go towards electric leaf blowers.
Summary: This project will replace existing fluorescent fixtures in the two stairwells and hallways on all three floors of Kenna Hall with higher efficiency LED fixtures. In doing so, the project aims to reduce electricity consumption, provide more light, and reduce the maintenance costs for replacing the current fluorescent lamps and ballasts in Kenna Hall. | https://www.scu.edu/sustainability/programs/investmentfund/projects/ |
The use of artificial intelligence (AI) is growing across a wide array of business segments. From medical sciences to space engineering, the influence of AI in business is widespread. According to a recent report, the global AI software market is showing signs of rapid growth and is expected to reach around US $126 billion by 2025 (Liu, 2021).
These trends point to the fact that AI technology is being deployed to build solutions for all sorts of uses, from both a business and a customer perspective. One of the key functions of AI, therefore, is to provide a decision-making answer as an output to a user-driven input or activity. As such, it won’t be an exaggeration to say that an AI system operates on the notion of giving an answer to a question about the problem the user is interested in. Artificial intelligence technologies such as machine learning/deep learning and natural language processing (NLP) essentially infer and predict from the data fed into them, providing an output that serves as an answer to a particular question or query for decision-making and further information processing.
For instance, an AI system trained on MRI scans helps with cancer diagnosis and treatment protocols. The system processes the MRI data fed into it to help human decision-makers assess the probability of the presence or absence of cancer in the patient (NIH, 2018; Recht & Sodickson, 2020).
Similarly, an AI system using natural language processing (NLP) technology to convert voice to text is simply solving the problem of producing text transcripts of voice data.
What it means is that AI’s functioning is predominantly problem-solving oriented. Luger (2005, p. 25) concurs and highlights that AI programmes are designed to solve useful problems. As such, the term “intelligence” in artificial intelligence causes confusion for people, as AI does not possess any genuine intelligence per se.
When it comes to human intelligence, it is important to recognize the various challenges that we face to describe and define what intelligence is.
First, despite a lot of progress in our knowledge of how the brain functions, the understanding of what intelligence is remains elusive. Is it a collection of various abilities or a single faculty? What are perception, intuition and creativity, and how do such concepts develop? What are cognitive capabilities and how are they developed? (Luger, 2005). Second, human intelligence is not limited to problem solving and decision-making alone. It is much more than that. Human intelligence involves interacting with the environment dynamically and responding to emerging scenarios and situations. This inherently dynamic, cognitive character of human intelligence is not organized around question-and-answer sequences. Third, how do creativity and intuitive intelligence drive the actions and behaviours of humans? How do creativity and intuitive intelligence manifest in real time? These are some other areas that need more work to understand human cognitive processes.
For AI machines to be called intelligent in any meaningful sense, they need to possess at least some limited intelligence capabilities similar to those the human mind possesses. This raises the question of whether current AI systems’ abilities and functioning are limited by their focus on problem solving. Is a problem-solving focus a problem for the evolution and development of AI? If so, what should be done for the further development of AI?
Note: This article was previously published on: www.academiasolution.com
References
- David, E. (2020). How the future of deep learning could resemble the human brain. Forbes. https://www.forbes.com/sites/forbestechcouncil/2020/11/11/how-the-future-of-deep-learning-could-resemble-the-human-brain/?sh=3566d2f9415c
- Liu, S. (2021). Artificial intelligence software market revenue worldwide 2018-2025. Statista. https://www.statista.com/statistics/607716/worldwide-artificial-intelligence-market-revenues/
- Luger, G. F. (2005). Artificial intelligence: Structures and strategies for complex problem solving. Pearson Education.
The first step in creating meaningful, long-term, sustainable innovation in any organization is to recognize that cultures cause outcomes. And if this is true, bad cultures will cause bad outcomes. And if this is true, it further follows that bad leadership causes bad cultures, which in turn cause bad outcomes. The challenge in innovation is not projects, and initiatives, and programs and so on: the challenge is how human beings can change their way of thinking and the way they lead so as to foster innovation. It really is that simple.
Or, maybe not.
Leadership in innovative organizations operates on multiple, often contradictory levels. An innovative leader is simultaneously both leader and role model, both doing and causing. If you are leading innovation, you are simultaneously encouraging risk-taking, and acting as steward of your organization's resources; you are advocating change, while preserving your organization's legacy, history and current success; you are thinking both about a strong current performance and a future, likely different state. All at the same time. It can be a dizzying state of being.
Leaders who operate in this state of contradiction share many common traits and practices and habits. These shared attributes are like cardinal points of navigation, places you can return to in moments of uncertainty. They can be returned to as centering points, and they can be taught, nurtured and encouraged. And with effective powerful leadership and role models, these attributes of leadership almost magically, over time, morph into systemic attributes of an organization. And that is when innovation states are achieved.
Accountability
Accountability for the innovative leader has little to do with what we normally think of as "job accountabilities." Accountability in innovation is a powerful way of becoming one with your organization. Although that may sound just a little vague or warm and fuzzy, it's actually a very pragmatic outcome or state. Being accountable is not a list of tasks and outcomes and goals and objectives that are the sum of things you are expected to do. Those are responsibilities, the table stakes of good performance in your job. Holding yourself accountable (or, better, being in a state of accountability) means that you embrace the entirety of the organization you belong to as yours. You hold yourself accountable for the organization's success, and the success of every single person within the organization.
One way to think about this is that leaders who are fully accountable are those who have wholly and unconditionally bonded with their organization, and this leads them to a belief and intentional faith in the intrinsic value and meaning of what they are committed to. It seems difficult to imagine a powerful state of being accountable without an equally powerful state of believing in what you do. Innovation in particular -- in individuals and in organizations -- needs energy and drive. Broad and deep individual accountability, along with a deep understanding of what really constitutes innovation culture, is a powerful driver of innovation.
Self-awareness
An innovation culture is not generally a tidy culture, nor is it conducive to predictability; the ride can be fast and unsettling, breeding a lot of uncertainty. Leaders immersed in true innovation must be able to accommodate this uncertainty, and not a little personal self-doubt. Self-awareness allows you to understand your own weaknesses and failures, and doing so means that you have more empathy for others. Developing the ability to be comfortable in your own skin -- warts and all -- means failure, random events, disappointments and all the many daily surprises innovation can bring will not be about you -- they will be about the organization.
And just as it is with accountability, as individual leaders nurture their own self-awareness, they will nurture it in others. And then the organization itself becomes one that values, encourages and understands the returns that emerge from broadly distributed self-awareness.
Being ritual
Rituals form the foundation of life, giving us a sense of permanence and of stability. So too, all organizations benefit from the power and influence of shared celebrations. Rituals create a sense of belonging and alignment to common goals. Leaders of innovative organizations are mindful of the power of ritual to increase performance, to create loyalty and to help carry individuals through rough organizational times. They understand that the celebration and recognition intrinsic to ritual is the single biggest lever for organizational alignment. Powerful leaders are not so much practicing ritual as they are being ritual.
But ritual plays a different and especially important role in highly innovative organizations, and a strong leader of innovation knows this. Because of the sense of constant change and disruption associated with innovation, there is a great need for grounding, for creating both the sense and reality of a constant thread of organizational history, which of course in turn implies a more likely and reliable future. There's a difference between established, legacy rituals which honor past accomplishments and past history, and the process of creating new rituals to bind up the present. In the face of rapid change - even rapid desired and encouraged change -- an effective innovation leader encourages the process of ritualizing the human parts of an organization. Absent a process of ritualizing, change and failure can be unsettling; with ritual, they are celebrated.
Contradiction
The most challenging aspect of innovation is contradiction. Over time all organizations naturally arc toward a status quo state, a need to optimize and harvest the return on investment in existing business models and strategies. Generally, the more successful an organization, the less there is an appetite for innovation. A strong leader knows that there is a profound organizational inertia that competes with change, and is mindful of the conflict this can cause in other individuals. The conflict arises from contrary expectations. We may want individuals to take more risk (as part of being innovative), but at the same time discourage risk institutionally. We may want everyone to trust more, to move faster, to get more done, but we also want verification, documentation and i's dotted and t's crossed.
An innovation leader knows this, understands the conflict and -- more important -- knows how this kind of whipsaw contradiction can leave individuals within an organization disoriented and uncertain. Accordingly, a committed leader will work to ensure transparency, openness and disclosure, so that conflict is a shared problem, not an individual burden. Contradiction then becomes more of an opportunity, rather than a barrier.
With a little luck and perseverance, the consistent application of these ideas can lead to an organization that is resilient, quick to adapt to change and opportunity, and which values human beings. Or, in other words, a highly innovative organization.
---
abstract: |
**Keywords:** rock-scissors-paper game, non-transitive interactions, evolutionary game theory, replicator dynamics, spatial heterogeneity, metacommunity dynamics

 **Abstract.** The rock-paper-scissor game – which is characterized by three strategies R, P, S, satisfying the non-transitive relations S excludes P, P excludes R, and R excludes S – serves as a simple prototype for studying more complex non-transitive systems. For well-mixed systems where interactions result in fitness reductions of the losers exceeding fitness gains of the winners, classical theory predicts that two strategies go extinct. The effects of spatial heterogeneity and dispersal rates on this outcome are analyzed using a general framework for evolutionary games in patchy landscapes. The analysis reveals that coexistence is determined by the rates at which dominant strategies invade a landscape occupied by the subordinate strategy (e.g. rock invades a landscape occupied by scissors) and the rates at which subordinate strategies get excluded in a landscape occupied by the dominant strategy (e.g. scissors get excluded in a landscape occupied by rock). These invasion and exclusion rates correspond to eigenvalues of the linearized dynamics near single-strategy equilibria. Coexistence occurs when the product of the invasion rates exceeds the product of the exclusion rates. Provided there is sufficient spatial variation in payoffs, the analysis identifies a critical dispersal rate $d^*$ required for regional persistence. For dispersal rates below $d^*$, the product of the invasion rates exceeds the product of the exclusion rates and the rock-paper-scissor metacommunities persist regionally despite being extinction prone locally. For dispersal rates above $d^*$, the product of the exclusion rates exceeds the product of the invasion rates and the strategies are extinction prone. These results highlight the delicate interplay between spatial heterogeneity and dispersal in mediating long-term outcomes for evolutionary games.
**Author’s Summary:** The rock-paper-scissor game, which might initially seem to be of purely theoretical interest, plays an important role in describing the behavior of various real-world systems including the evolution of alternative male mating strategies in the side-blotched lizard, the evolution of bacterial populations, and coexistence in plant communities. While the importance of dispersal in mediating coexistence for these intransitive communities has been documented in theoretical and empirical studies, these studies have, by and large, ignored the role of spatial heterogeneity in mediating coexistence. We introduce and provide a detailed analysis of models for evolutionary games in a patchy environment. Our analysis reveals that spatial heterogeneity coupled with low dispersal rates can mediate regional coexistence, despite species being extinction prone in all patches. The results suggest that diversity is maintained by a delicate interplay between dispersal rates and spatial heterogeneity.
author:
- 'Sebastian J. Schreiber$^1$ and Timothy P. Killingback$^2$'
bibliography:
- '../../seb.bib'
title: 'Spatial heterogeneity promotes coexistence of rock-paper-scissor metacommunities'
---
Introduction {#introduction .unnumbered}
============
Since its inception over 30 years ago, evolutionary game theory has become a major theoretical framework for studying the evolution of frequency dependent systems in biology [@maynardsmith-82; @hofbauer-sigmund-98; @hofbauer-sigmund-03]. There have been numerous applications of evolutionary game theory in biology (and increasingly also in economics and the social sciences), ranging from the evolution of cooperation [@axelrod-84; @axelrod-hamilton-81] and animal conflicts [@maynardsmith-price-73], to the evolution of sex ratios [@hamilton-67], and the origin of anisogamy [@parker-etal-72]. Indeed it is striking that three of the simplest possible games that can be considered, the Prisoner’s Dilemma game [@axelrod-84], the Hawk-Dove (or Snowdrift) game [@maynardsmith-82], and the Rock-Paper-Scissor game [@hofbauer-sigmund-98], have all found fruitful applications in the study of important biological problems, namely, the evolution of cooperation [@axelrod-84; @axelrod-hamilton-81], the evolution of animal contests [@maynardsmith-82; @maynardsmith-price-73], and the evolution of Red Queen dynamics [@sinervo-lively-96; @kerr-etal-02; @kirkup-riley-04] (in which the system cycles constantly between the different possible strategies).
In formulating evolutionary game theory it is often assumed that the individual strategists interact at random in a well-mixed population. Under this assumption the evolutionary game dynamics can be formulated as a system of ordinary differential equations, the replicator equations, which describe the time evolution of the different strategies in the game [@maynardsmith-82; @hofbauer-sigmund-98]. Any evolutionarily stable strategy (i.e. a strategy which, if adopted by almost all members of the population, cannot be invaded by any mutant strategy) is a stable equilibrium of the replicator equations [@hofbauer-sigmund-98].
In many situations the assumption that the population is well-mixed, with individuals interacting randomly throughout the whole population, is not realistic. This will often be the case if there is some spatial structure in the population, which results in individuals interacting more with neighboring individuals than with more distant ones. One way of modeling a structured population is to assume that individuals are associated with the vertices of a graph, with two individuals interacting if the corresponding vertices are connected by an edge. This approach leads to a network based formulation of evolutionary game theory in which the evolutionary dynamics on the graph is determined by a suitable deterministic or stochastic analogue of the replicator dynamics. Evolutionary games on graphs have been rather well studied [@nowak-may-92; @killingback-doebeli-96; @nakamaru-etal-97; @hauert-szabo-03; @ifti-etal-04; @hauert-doebeli-04; @santos-pacheco-05; @ohtsuki-etal-06]. One of the basic conclusions of this work is that the evolutionary dynamics of a game on a graph can be quite different from the dynamics of the game in a well-mixed population. A particularly important instance of this is that cooperation can be maintained in the Prisoner’s Dilemma game on a graph. In contrast, in a well-mixed population cooperation is always driven to extinction by defection.
An alternative way of modeling a structured population is to assume that it is composed of a number of local populations, within which individuals interact randomly, coupled by dispersal. In this approach the total population or community is modeled as a metapopulation or metacommunity. Metapopulation and metacommunity structure is known to have important implications for population dynamics in ecology and evolution [@hanski-99; @holyoak-etal-05; @prsb-10].
In spite of the considerable amount of work that has been devoted to understanding the ecological and genetic consequences of metacommunity structure there has been much less attention devoted to studying the dynamics of evolutionary game theory in the metacommunity context. The purpose of this paper is to provide a general mathematical formulation of metacommunity evolutionary game dynamics, and to obtain detailed results for the case of a particularly interesting game – the rock-paper-scissors game. In the last few years the rock-paper-scissor game, which might initially seem to be of purely theoretical interest, has emerged as playing an important role in describing the behavior of various real-world systems. These include the evolution of alternative male mating strategies in the side-blotched lizard *Uta Stansburiana* [@sinervo-lively-96], the *in vitro* evolution of bacterial populations [@kerr-etal-02; @nahum-etal-11], the *in vivo* evolution of bacterial populations in mice [@kirkup-riley-04], and the competition between genotypes and species in plant communities [@lankau-strauss-07; @cameron-etal-09]. More generally, the rock-scissors-paper game – which is characterized by three strategies R, P and S, which satisfy the non-transitive relations: P beats R (in the absence of S), S beats P (in the absence of R), and R beats S (in the absence of P) – serves as a simple prototype for studying the dynamics of more complicated non-transitive systems [@buss-jackson-79; @paquin-adams-83; @may-leonard-75; @jmb-97; @oikos-04; @vandermeer-pascual-05; @allesina-levine-11].
One of the central issues that has arisen in recent years in ecology is the degree to which metacommunity structure can lead to the coexistence of competing species [@hanski-99; @amarasekare-nisbet-01; @moquet-etal-05; @gravel-etal-10]. Here, we study an interesting aspect of this larger question, namely, the effect of a general metacommunity structure on the coexistence of the strategies in the rock-paper-scissor game. In a well-mixed population the evolutionary dynamics of the rock-paper-scissor game is known to be determined by the sign of the determinant of the payoff matrix [@hofbauer-sigmund-98]. If the determinant of the payoff matrix is positive then the replicator dynamics converges to a stable limit point, in which the frequencies of the three strategies tend to constant values. If, however, the determinant of the payoff matrix is negative then the replicator dynamics converges to a heteroclinic cycle, in which the frequencies of the three strategies continue to undergo increasingly extreme oscillations. In the latter case the frequencies of the different strategies successively fall to lower and lower levels as the population dynamics approach the heteroclinic attractor. Consequently, stochasticity would result in the ultimate extinction of one of the strategies followed by the elimination of the remaining dominated strategy.
In this paper we study the dynamics of the rock-scissors-paper game in a metacommunity context, and show that dispersal in spatially heterogeneous environments can alter dynamical outcomes. In particular, we characterize under what conditions dispersal in heterogeneous environments stabilizes or destabilizes rock-paper-scissor metacommunities. When dispersal is stabilizing, all strategies in the rock-scissors-paper metacommunity are maintained indefinitely by a Red Queen type dynamic.
Model and Methods {#model-and-methods .unnumbered}
=================
Evolutionary Games in Space. {#evolutionary-games-in-space. .unnumbered}
----------------------------
We consider interacting populations playing $m$ distinct strategies ($i=1,\dots,m$) in a spatially heterogeneous environment consisting of $n$ patches ($r=1,\dots,n$). Space is the primary limiting resource for the populations and is assumed to be fully saturated, i.e. all sites within a patch are occupied. Let $x_i^r$ denote the frequency of strategy $i$ in patch $r$. Within-patch reproductive rates of individuals are determined by pairwise interactions where an individual in patch $r$ playing strategy $i$ receives a payoff of $A_{ij}(r)$ following an encounter with an individual playing strategy $j$. Individuals reproduce at a rate equal to their net payoff. For individuals playing strategy $i$ in patch $r$, this net payoff equals $\sum_j A_{ij} (r) x_j^r$. All individuals in patch $r$ experience a per-capita mortality rate $m^r$. Dying individuals free up space that can be colonized with equal likelihood by all offspring living in the patch. In the absence of dispersal, the probability that a site emptied by a dying individual gets colonized by an offspring playing strategy $i$ is $\frac{\sum_j A_{ij} (r) x_i^r x_j^r}{\sum_{j,k} A_{jk}(r) x_j^r x_k^r}$. Thus, in the absence of dispersal, the population dynamics in patch $r$ are $$\label{local}
\frac{dx^r_i}{dt} = -m^r\,x_i^r + m^r \frac{\sum_j A_{ij}(r) x_i^r x_j^r}{\sum_{j,k} A_{jk}(r) x_j^r x_k^r}.$$
To account for movement between patches, let $d_{sr}$ denote the fraction of progeny born in patch $s$ that move to patch $r$. In this case, the rate at which offspring of strategy $i$ arrive in patch $r$ equals $\sum_s d_{sr} \sum_j A_{ij}(s) x_i^s x_j^s$ and the probability that an offspring playing strategy $i$ colonizes an emptied site equals $\frac{\sum_s d_{sr}\sum_j A_{ij}(s) x_i^s x_j^s}{\sum_s d_{sr}\sum_{j,k} A_{jk}(s) x_j^s x_k^s}$. Hence, the full spatial dynamics are $$\label{replicator}
\frac{dx^r_i}{dt} = -m^r\,x_i^r + m^r \frac{\sum_s d_{sr} \sum_j A_{ij}(s) x_i^s x_j^s}{\sum_s d_{sr}\sum_{j,k} A_{jk}(s) x_j^s x_k^s}.$$ We assume that the matrix $D$ of dispersal probabilities is primitive (i.e. after sufficiently many generations, the descendants of any individual in any one patch occupy all patches).
For the rock-paper-scissor game, there are three strategies with rock as strategy $1$, paper as strategy $2$, and scissor as strategy $3$. Let $a^r$ be the basal reproductive rate of an individual in patch $r$. Let $b_i^r$ (i.e. the benefit to the winner) be the payoff to strategy $i$ in patch $r$ when it wins against its subordinate strategy, and $-c_{i}^r$ (i.e. the cost to the loser) be the payoff to strategy $i$ in patch $r$ when it loses against the dominant strategy. Under these assumptions, the payoff matrix in patch $r$ is given by $$\label{payoff}
\A(r)=a^r+\begin{pmatrix}
0& -c_1^r& b_1^r \\
b_2^r& 0 & -c_2^r \\
-c_3^r & b_3^r & 0
\end{pmatrix}$$ Throughout this article, we assume that $a^r>0$, $0<c_i^r<a^r$, $b_i^r>0$. The assumption $a^r>c_i^r$ ensures that payoffs remain positive.
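To make the model concrete, here is a minimal simulation sketch of the metacommunity dynamics (\[replicator\]) with the rock-paper-scissor payoffs (\[payoff\]). The paper’s own simulations used the deSolve package in R; the Python/SciPy version below is an independent illustration, assuming the all-to-all dispersal and the payoff parameterization quoted later in the figure captions.

```python
import numpy as np
from scipy.integrate import solve_ivp

n, m_rate, a, d = 30, 1.0, 3.0, 0.005            # patches, mortality m, basal payoff a, dispersal fraction
c = 1.0 + np.arange(n) / 30.0                    # costs c^r (as in the figure captions)
b = 0.85 * c                                     # benefits b^r = 0.85 c^r

# Payoff matrices A(r) = a + cyclic rock-paper-scissors part, shape (n, 3, 3).
A = np.array([a + np.array([[0.0, -c[r], b[r]],
                            [b[r], 0.0, -c[r]],
                            [-c[r], b[r], 0.0]]) for r in range(n)])

# All-to-all dispersal: D[s, r] = fraction of patch-s offspring arriving in patch r.
D = d / (n - 1) * (np.ones((n, n)) - np.eye(n)) + (1 - d) * np.eye(n)

def rhs(t, y):
    x = y.reshape(n, 3)                          # x[r, i] = frequency of strategy i in patch r
    payoff = np.einsum('rij,rj->ri', A, x)       # sum_j A_ij(r) x_j^r
    prod = x * payoff                            # local production of strategy-i offspring
    arrivals = D.T @ prod                        # sum_s d_{sr} (production in patch s)
    total = arrivals.sum(axis=1, keepdims=True)  # normalization over all strategies
    return (m_rate * (arrivals / total - x)).ravel()

rng = np.random.default_rng(0)
x0 = rng.dirichlet(np.ones(3), size=n).ravel()   # random initial frequencies in each patch
sol = solve_ivp(rhs, (0.0, 2000.0), x0, rtol=1e-8, atol=1e-10)
print(sol.y.reshape(n, 3, -1)[0, :, -1])         # final strategy frequencies in patch 1
```

Varying the dispersal fraction `d` between, say, $0.005$ and $0.5$ should qualitatively separate the persistent and extinction-prone regimes described below.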
Analytical and Numerical Methods {#analytical-and-numerical-methods .unnumbered}
--------------------------------
To understand whether the strategies persist in the long term, we analyze (\[replicator\]) using a combination of analytical and numerical methods. Long-term persistence of all the strategies is equated with *permanence*: there exists a minimal frequency $\rho>0$ such that $$x_i^r(t) \ge \rho \mbox{ for all $i,r$}$$ whenever $t$ is sufficiently large and all strategies are initially present (i.e. $\sum_r x_i^r (0)>0$ for $i=1,2,3$). Permanence ensures that populations recover from rare large perturbations and continual small stochastic perturbations [@dcds-07; @benaim-etal-08]. Using analytical techniques developed by @jde-10, we derive an analytical condition for permanence in terms of products of eigenvalues at the single strategy equilibria of the model. These criteria take on an explicit, interpretable form when (i) populations are relatively sedentary (i.e. $d_{rr} \approx 1$ for all $r$) and (ii) populations are well mixed (i.e. there exists a probability vector $v=(v_1,\dots,v_n)$ such that $d_{rs}\approx v_s$ for all $r,s$). To better understand permanence at intermediate dispersal rates, we derive an analytical result about critical dispersal thresholds for persistence of metacommunities exhibiting unconditional dispersal (i.e. the probability of leaving a patch is independent of location) and numerically simulate (\[replicator\]) using the deSolve package of R [@r]. To simplify our exposition, we present our results under the assumption that $m^r=m$ and $a^r=a$ for all $r$, i.e. there is only spatial heterogeneity in the benefits and in the costs.
Results {#results .unnumbered}
=======
[Figure \[fig:hetero\] (two panels: (a) repelling cycle, (b) attracting cycle): Boundary dynamics for rock-paper-scissors. For within-patch and metacommunity dynamics, there is a cycle of trajectories (i.e. a heteroclinic cycle) connecting the pure-strategy equilibria. In (a), the cycle is repelling and the community persists; in (b), the cycle is attracting and the community is extinction prone. Simulated metapopulations consist of $30$ patches with all-to-all coupling for dispersing individuals and spatial variation in payoffs ($c^r=1+(r-1)/30$, $b^r =0.85\, c^r$, $a=3$). The fraction dispersing equals $d=0.005$ in (a) and $d=0.5$ in (b).]
Local coexistence {#local-coexistence .unnumbered}
-----------------
We begin by studying the behavior of the within-patch dynamics (\[local\]) in the absence of dispersal. If only strategy $i$ is present in patch $r$, then the per-capita growth rate of the strategy, call it $j$, dominated by strategy $i$ is $-m\, c_j^r/a$. Alternatively, the per-capita growth rate of the strategy $j$ dominating strategy $i$ equals $m \,b_j^r/a$. The three single-strategy equilibria are connected by population trajectories in which dominant strategies replace subordinate strategies (Fig. \[fig:hetero\]). This cycle of population trajectories in patch $r$ is known as a *heteroclinic cycle* [@hofbauer-sigmund-98]. Using average Lyapunov functions, time-one maps, or measure-theoretic techniques [@hofbauer-81; @krupa-melbourne-95; @jde-00], one can show that the strategies in patch $r$ locally coexist in the sense of permanence provided that the product of the invasion rates exceeds the product of the exclusion rates: $$\label{local permanence}
\prod_{i}b_i^r> \prod_{i} c_i^r.$$ Interestingly, inequality (\[local permanence\]) is equivalent to the determinant of the payoff matrix being positive.
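This equivalence can be checked directly (a sketch of the calculation; writing the payoff matrix (\[payoff\]) as $A(r)=a+M^r$, the determinant in question is that of the cyclic part $M^r$, as in the classical zero-diagonal formulation of the game). Expanding along the first row gives $$\det M^r = \det\begin{pmatrix}
0& -c_1^r& b_1^r \\
b_2^r& 0 & -c_2^r \\
-c_3^r & b_3^r & 0
\end{pmatrix} = b_1^r b_2^r b_3^r - c_1^r c_2^r c_3^r,$$ which is positive exactly when $\prod_i b_i^r > \prod_i c_i^r$.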
When coexistence occurs, the heteroclinic cycle on the boundary of the population state space is repelling and there is a positive global attractor for the within-patch dynamics (Fig. \[fig:hetero\]a). When inequality (\[local permanence\]) is reversed, the heteroclinic cycle on the boundary is attracting (Fig. \[fig:hetero\]b). The strategies asymptotically cycle between three states (rock-dominated, paper-dominated, scissor-dominated), and the frequencies of the under-represented strategies asymptotically approach zero. Hence, all but one strategy goes extinct when accounting for finite population sizes.
Metacommunity coexistence. {#metacommunity-coexistence. .unnumbered}
--------------------------
**Analytical results.** When the patches are coupled by dispersal, we show in Appendix A that for any pair of strategies, the dominant strategy competitively excludes the subordinate strategy. Hence, as in the case of the dynamics within a single patch, the metacommunity exhibits a heteroclinic cycle on the boundary of the metacommunity phase space.
Work of @jde-10 on permanence for structured populations (see Appendix B) implies that metapopulation persistence is determined by invasion rates and exclusion rates at single strategy equilibria. More specifically, consider the rock strategy equilibrium where $x_1^r =1$ and $x_2^r=x_3^r=0$ for all patches $r$. Linearizing the paper strategy dynamics at the rock equilibrium yields $$\frac{dx_2^r}{dt}\approx -m\, x_2^r +m \frac{\sum_s d_{sr} (a+ b_2^s) x_2^s}{\sum_s d_{sr} a}.$$ Equivalently, if $\x_2 = (x_2^1,\dots, x_2^n)^T$ where $^T$ denotes transpose, then $$\frac{d\x_2}{dt} \approx \left(-m I+m \Psi D^T (aI+B_2) \right) \x_2$$ where $I$ is the identity matrix, $\Psi$ is the diagonal matrix with entries $1/\sum_s d_{s1} a^s,\dots,1/\sum_s d_{sn} a^s$, $B_2$ is the diagonal matrix with diagonal entries $b_2^1,\dots,b_2^n$, and $D^T$ is the transpose of the dispersal matrix. Corresponding to the fact that the paper strategy can invade the rock strategy, the stability modulus of $-mI+m\Psi D^T(aI+B_2)$ (i.e. the largest real part of the eigenvalues) is positive. Call this stability modulus $\mathcal{I}_2$, the invasion rate of strategy $2$. Linearizing the scissor strategy dynamics at the rock equilibrium yields $$\frac{d\x_3}{dt} \approx \left(-mI+m \Psi D^T (aI-C_3) \right) \x_3$$ where $C_3$ is the diagonal matrix with diagonal entries $c_3^1,\dots,c_3^n$. Corresponding to the fact that the scissor strategy is displaced by the rock strategy, the stability modulus of $-mI+m\Psi D^T (aI-C_3)$ is negative. We call the negative of this stability modulus ${\mathcal E}_3$, the exclusion rate of strategy $3$. By linearizing around the other pure strategy equilibria, we can define the invasion rates ${\mathcal I}_i$ for each strategy invading its subordinate strategy and the exclusion rates ${\mathcal E}_i$ for each strategy being excluded by its dominant strategy.
Appendix A shows that the metapopulation persists if the product of the invasion rates exceeds the product of the exclusion rates: $$\label{eq:condition}
\prod_{i=1}^3 \mathcal{I}_i > \prod_{i=1}^3 \mathcal{E}_i$$ If the inequality is reversed, then the metapopulation is extinction prone as initial conditions near the boundary converge to the heteroclinic cycle and all but one strategy is lost regionally. While inequality (\[eq:condition\]) can be easily evaluated numerically, one cannot, in general, write down a more explicit expression for this permanence condition. However, when the metapopulation is weakly mixing (i.e. dispersal rates are low) or well-mixed (i.e. dispersal rates are high), we are able to find more explicit criteria. Furthermore, when dispersal is unconditional, we show that there is a critical dispersal rate below which persistence is possible (Appendix C).
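In practice, condition (\[eq:condition\]) can be checked numerically by computing the stability moduli directly, e.g. with NumPy. The sketch below is an illustration only (not the authors' code), using the symmetric parameterization $b_i^r=b^r$, $c_i^r=c^r$ from the figure captions, under which the three invasion rates coincide, as do the three exclusion rates:

```python
import numpy as np

n, m_rate, a, d = 30, 1.0, 3.0, 0.005
c = 1.0 + np.arange(n) / 30.0                    # c^r
b = 0.85 * c                                     # b^r
D = d / (n - 1) * (np.ones((n, n)) - np.eye(n)) + (1 - d) * np.eye(n)

Psi = np.diag(1.0 / (a * D.sum(axis=0)))         # Psi_rr = 1 / (sum_s d_{sr} a)
J_inv = -m_rate * np.eye(n) + m_rate * Psi @ D.T @ (a * np.eye(n) + np.diag(b))
J_exc = -m_rate * np.eye(n) + m_rate * Psi @ D.T @ (a * np.eye(n) - np.diag(c))

I_rate = np.linalg.eigvals(J_inv).real.max()     # invasion rate (stability modulus)
E_rate = -np.linalg.eigvals(J_exc).real.max()    # exclusion rate (minus the stability modulus)
print(I_rate, E_rate, I_rate**3 > E_rate**3)     # permanence condition for symmetric payoffs
```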
At sufficiently low dispersal rates, i.e. $d_{rr}\approx 1$ for all $r$, the metacommunity coexistence criterion simplifies to $$\label{metacommunity permanence low}
\prod_{i=1}^3\max_r b_i^r > \prod_{i=1}^3 \min_r c_i^r.$$ Unlike the local coexistence criterion (\[local permanence\]), which requires that the geometric mean of benefits exceeds the geometric mean of costs within a patch, inequality (\[metacommunity permanence low\]) requires that the geometric mean of the maximal benefits exceeds the geometric mean of the minimal costs. Here, the maxima and minima are taken over space. Thus, inequality (\[metacommunity permanence low\]) implies that localized dispersal promotes coexistence if there is sufficient spatial variation in relative benefits, costs, or mortality rates.
For well-mixed metacommunities (i.e. $d_{rs}\approx v_s$ for all $r,s$), the invasion rate $\mathcal{I}_i$ of strategy $i$ is approximately $\frac{m}{a}\,\frac{1}{n}\sum_r b_i^r$. Conversely, the exclusion rate $\mathcal{E}_i$ of strategy $i$ is approximately $\frac{m}{a}\,\frac{1}{n}\sum_r c_i^r$. These well-mixed metacommunities coexist provided that the geometric mean of the spatially averaged benefits exceeds the geometric mean of the spatially averaged costs: $$\label{metacommunity permanence high}
\prod_{i=1}^3 \left(\frac{1}{n} \sum_r b_i^r \right)> \prod_{i=1}^3 \left(\frac{1}{n} \sum_r c_i^r\right).$$ Since (\[metacommunity permanence high\]) implies (\[metacommunity permanence low\]), it follows that persistence of well-mixed communities implies persistence of weakly-mixing communities, but not vice-versa. We can refine this observation under the assumption of unconditional dispersal.
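For the symmetric parameterization used in the figures, the two limiting criteria reduce to one-line comparisons; the snippet below is an illustration only (not from the paper):

```python
import numpy as np

c = 1.0 + np.arange(30) / 30.0   # costs c^r, as in the figure captions
b = 0.85 * c                     # benefits b^r = 0.85 c^r

weakly_mixing_ok = b.max() ** 3 > c.min() ** 3   # criterion (metacommunity permanence low)
well_mixed_ok = b.mean() ** 3 > c.mean() ** 3    # criterion (metacommunity permanence high)
print(weakly_mixing_ok, well_mixed_ok)           # expect True, False for these parameters
```

Consistent with Fig. \[fig:hetero\], these parameters satisfy the weakly-mixing criterion but violate the well-mixed one, so persistence is expected only at sufficiently low dispersal rates.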
Unconditional dispersal occurs when the fraction of individuals dispersing, $d$, is independent of location. Let $p_{rs}$ denote the fraction of dispersing individuals from patch $r$ that end up in patch $s$, i.e. $p_{rs}$ is a dispersal kernel that describes how dispersing individuals redistribute across patches. Under these assumptions, the fraction $d_{rs}$ of individuals in patch $r$ dispersing to patch $s\neq r$ equals $d\, p_{rs}$. The fraction $d_{rr}$ of individuals remaining in patch $r$ is $1-d$. In Appendix C, we show that there is a critical dispersal threshold $d^*$ (possibly $0$ or $1$) such that the metacommunity persists if its dispersal rate is below $d^*$ and is extinction prone when its dispersal rate is above $d^*$. It follows that if the metacommunity persists when highly dispersive (i.e. $d^*=1$), then it persists at all positive dispersal rates. Conversely, if a metacommunity is extinction prone when weakly mixing (i.e. (\[metacommunity permanence low\]) is violated), then it is extinction prone at all positive dispersal rates.
**Numerical results.** To illustrate the implications of our analytical results, we consider two scenarios: either there is only spatial variation in the payoffs, or there is both within-patch and spatial variation in payoffs. There are $n=30$ patches that are equally connected. A fraction $d$ of individuals disperse and dispersing individuals are distributed equally amongst the remaining patches (i.e. $d_{rs}=d/(n-1)$ for $r\neq s$). For this form of dispersal, the metapopulation is well-mixed when $d=(n-1)/n$, in which case $d_{rs}=1/n$ for all $r,s$.
First, we consider the case where there is spatial variation in payoffs, but all strategies within a patch fare equally well when they are the dominant player in an interaction and fare equally poorly when they are the subordinate player in the interaction (i.e. $b_i^r =b^r$, and $c_i^r=c^r$ for all $i=1,2,3$). Local coexistence requires that the benefit $b^r$ to the winner must exceed the cost $c^r$ to the loser. For well-mixed communities, regional coexistence requires that the spatially averaged benefit $\frac{1}{n}\sum_r b^r$ must exceed the spatially averaged cost $\frac{1}{n} \sum_r c^r$. From these two conditions, it follows that metapopulation persistence for well-mixed communities requires that at least one of the patches promotes local coexistence.
Figure \[fig:bif2D\] (panels (a) and (b)): The effect of spatial variation and dispersal rate on the persistence criterion (a) and the long-term metapopulation frequencies (b). Metapopulations consist of $30$ patches with all-to-all coupling for dispersing individuals and spatial variation in payoffs ($c^r=1+(r-1)\sigma/30$, $b^r =0.85\, c^r$, $a=3$). In (a), the difference between the product $\prod_i \mathcal{I}_i$ of the invasion rates and the product $\prod_i \mathcal{E}_i$ of the exclusion rates is plotted as a function of the fraction $d$ of dispersing individuals and the range $\sigma$ of spatial variation in the payoffs; positive values correspond to persistence and negative values to the metapopulation being extinction prone. In (b), the minimal and maximal frequencies for one patch and the spatial average are plotted as a function of $d$ and $\sigma$. In both panels, the white curve marks where the difference between the products of invasion and exclusion rates equals zero.
Figure \[fig:bif1D\]: The effect of dispersal rates on metapopulation dynamics. Metapopulations consist of $30$ patches with all-to-all coupling for dispersing individuals and spatial variation in payoffs ($c^r=1+(r-1)/30$, $b^r =0.85\, c^r$, $a=3$). In (a), the minimal and maximal frequencies for one patch and the spatial average are plotted as a function of the fraction $d$ of dispersing individuals. In (b)-(d), the spatio-temporal dynamics are plotted for low, intermediate, and high dispersal rates. Rock frequencies are color-coded as indicated.
When all patches fail to promote local coexistence (i.e. $c^r>b^r$ for all $r$), weakly mixing metacommunities persist provided that the benefit in some patch exceeds the cost in another (possibly the same) patch, i.e. $\max_r b^r > \min_r c^r$. When this condition is met, there is a critical dispersal threshold $d^*$ below which the metacommunity persists, and above which the metacommunity is extinction-prone.
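Using the payoff parametrization from Figure \[fig:bif2D\] ($c^r=1+(r-1)\sigma/30$, $b^r=0.85\,c^r$), this weak-mixing condition can be checked numerically. The sketch below is our own illustration rather than code from the original analysis:

```python
import numpy as np

def payoffs(sigma, n=30):
    # Parametrization from Fig. [fig:bif2D]: c^r = 1 + (r-1)*sigma/30, b^r = 0.85*c^r
    r = np.arange(1, n + 1)
    c = 1.0 + (r - 1) * sigma / 30.0
    return 0.85 * c, c

for sigma in (0.1, 0.2, 0.5):
    b, c = payoffs(sigma)
    print(sigma,
          bool(np.any(b > c)),       # local coexistence in any patch? (never: b = 0.85*c)
          bool(b.max() > c.min()))   # weak-mixing persistence: max_r b^r > min_r c^r
# Solving 0.85*(1 + 29*sigma/30) > 1 gives sigma > (30/29)*(1/0.85 - 1) ≈ 0.183,
# so only sufficiently large spatial variation rescues the weakly mixing metacommunity.
```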
Figure \[fig:bif2D\]a demonstrates the analytical prediction that the difference between the products of the invasion and exclusion rates is a decreasing function of the fraction $d$ of dispersing individuals. Furthermore, the difference in products is an increasing function of the amplitude of the spatial variation in payoffs. Hence, the critical dispersal threshold increases with the amplitude of the spatial variation of the payoffs. Intuitively, higher dispersal rates are needed to average out greater spatial variation. Unlike the difference between the products of invasion and exclusion rates, the minimum frequency of strategies exhibits a highly nonlinear response to increasing dispersal rates (Fig. \[fig:bif2D\]b): the minimal frequency initially increases with dispersal rate, reaches a plateau of approximately one-third at intermediate dispersal rates, and decreases abruptly to zero once the critical dispersal threshold is crossed.
At low dispersal rates, metacommunity persistence is achieved by a spatial game of hide and seek (Figs. \[fig:bif1D\]a,b). At any point in time, each strategy is at high frequency in some patches and at low frequency in the remaining patches. Strategy composition in each patch cycles as dominant strategies displace subordinate strategies. Intermediate dispersal rates stabilize the local and regional dynamics (Figs. \[fig:bif1D\]a,c). As a consequence, local diversity is maximal at intermediate dispersal rates. At high dispersal rates, the population dynamics synchronize across space as they approach the heteroclinic cycle (Figs. \[fig:bif1D\]a,d).
Figure \[fig:bif2D-2\] (panels (a) and (b)): The effect of spatial variation and dispersal rate on the persistence criterion (a) and the long-term metapopulation frequencies (b). Metapopulations consist of $30$ patches with all-to-all coupling for dispersing individuals. Each strategy has $10$ patches in which its benefit equals $b_{high}$ and equals $0$ in the remaining patches; $c=1$, $a=2$, $m=0.1$ in all patches. In (a), the difference between the product $\prod_i \mathcal{I}_i$ of the invasion rates and the product $\prod_i \mathcal{E}_i$ of the exclusion rates is plotted as a function of the fraction $d$ of dispersing individuals and the maximal benefit $b_{high}$; positive values correspond to persistence and negative values to the metapopulation being extinction prone. In (b), the minimal and maximal frequencies for one patch and the spatial average are plotted as a function of $d$ and $b_{high}$. In both panels, the white curve marks where the difference between the products of invasion and exclusion rates equals zero.
Figure \[fig:bif1D-2\]: The effect of dispersal rates on metapopulation dynamics. Metapopulations consist of $30$ patches with all-to-all coupling for dispersing individuals. Each strategy has $10$ patches in which its benefit equals $b_{high}=4$ and equals $0$ in the remaining patches; $c=1$, $a=2$, $m=0.1$ in all patches. In (a), the minimal and maximal frequencies for one patch and the spatial average are plotted as a function of the fraction $d$ of dispersing individuals. In (b)-(d), the spatio-temporal dynamics are plotted for low, intermediate, and high dispersal rates. Rock frequencies are color-coded as indicated.
For the second numerical scenario, we consider the case where payoffs vary within patches (e.g. rock gets a higher benefit than scissors when playing its subordinate opponent in one patch, but scissors gets the higher benefit in another patch) as well as spatially. In this case, well-mixed communities can persist despite being locally extinction prone. To understand why, assume each strategy wins big in some patches but wins nothing in others. Let $f$ denote the fraction of patches where a strategy wins big and receives a payoff $b_{high}$ against its subordinate strategy. In the remaining fraction $1-f$ of patches, each strategy receives no benefit when playing against its subordinate strategy. Furthermore, assume that there is no variation in the costs, i.e. $c_i^r=c$ for all $i,r$. Under these assumptions, local coexistence is impossible, as in every patch the cost exceeds the benefit received by at least one strategy ($c > 0$). In contrast, a well-mixed metacommunity persists if $f\, b_{high} > c$, and a weakly mixing metacommunity persists if $b_{high}>c$. Therefore, provided that $b_{high}$ is sufficiently large, coupling the communities by any level of dispersal mediates regional coexistence despite local communities being extinction prone.
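With the values used in Figures \[fig:bif2D-2\] and \[fig:bif1D-2\] ($f=10/30$, $c=1$), these two limiting persistence conditions reduce to simple thresholds. The following sketch (ours; the limiting-case formulas from the text are hard-coded and are not exact at intermediate dispersal rates) makes this explicit:

```python
f, c = 10 / 30, 1.0  # each strategy wins big in 10 of the 30 patches; uniform cost

def persists(b_high, regime):
    # Limiting-case conditions quoted in the text.
    if regime == "weakly mixing":  # d -> 0
        return b_high > c
    if regime == "well mixed":     # d -> (n-1)/n
        return f * b_high > c
    raise ValueError(regime)

for b_high in (0.5, 2.0, 4.0):
    print(b_high, persists(b_high, "weakly mixing"), persists(b_high, "well mixed"))
# Weakly mixing communities persist once b_high > 1, well-mixed ones once b_high > 3,
# which matches the persistence boundary in Fig. [fig:bif2D-2].
```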
Consistent with these analytical predictions, Fig. \[fig:bif2D-2\] illustrates that the metacommunity persists at all dispersal rates if the difference in payoffs is sufficiently great ($b_{high}>3$) and persists only at low dispersal rates for intermediate differences in the payoffs. When the differences are large, metapopulation abundance and stability increase continually with dispersal rate (Fig. \[fig:bif1D-2\]). In contrast, metapopulation abundance is maximized at intermediate dispersal rates whenever there are intermediate differences in the payoffs (Fig. \[fig:bif2D-2\]b).
Discussion {#discussion .unnumbered}
==========
The rock-paper-scissors game represents the prototypical situation in which the components of a system satisfy a set of non-transitive relations. It is a surprising and fascinating feature of recent work in evolutionary biology and ecology that such interactions have been discovered in a wide range of natural systems [@buss-jackson-79; @sinervo-lively-96; @kerr-etal-02; @kirkup-riley-04; @lankau-strauss-07; @cameron-etal-09]. The existence of non-transitive interactions in biological systems has been suggested as an important mechanism for maintaining biodiversity [@durrett-levin-97; @kerr-etal-02; @lankau-strauss-07; @roelke-eldridge-10; @allesina-levine-11]. This suggestion, however, raises an important theoretical question: Is it possible for all components of such a system to persist in the long term? This question is pertinent since modeling the dynamics of the rock-paper-scissors game (and related non-transitive systems) using the replicator equation shows that cyclic behavior corresponds to convergence toward a heteroclinic attractor on the boundary of the strategy space, and this process must ultimately result in the extinction of some strategies [@hofbauer-sigmund-98].
It is widely believed in ecology that the inclusion of spatial structure, in which the interactions of individuals are local, can result in the coexistence of communities that could not persist in a panmictic situation [@durrett-levin-97; @hanski-99; @amarasekare-nisbet-01; @holyoak-etal-05]. There are numerous ways in which a spatially structured population can be modeled mathematically, depending on the assumptions made regarding the nature of the spatial interactions of the individuals in the population [@durrett-levin-94]. Possible approaches include reaction-diffusion systems [@cantrell-cosner-03], metapopulation and metacommunity theory [@hanski-99; @holyoak-etal-05], coupled lattice maps [@hastings-93; @holland-hastings-08], and cellular automata and related lattice models [@nowak-may-92; @killingback-doebeli-96; @durrett-levin-97; @durrett-levin-98; @iwasa-etal-98; @kerr-etal-02].
Most previous attempts to understand the effect of spatial structure on the persistence of systems with non-transitive interactions have utilized cellular automata-type models [@durrett-levin-97; @durrett-levin-98; @iwasa-etal-98; @frean-abraham-01; @kerr-etal-02; @karolyi-etal-05; @reichenbach-etal-07; @rojas-allesina-11]. The main conclusion that can be drawn from these cellular automata studies is that in three-species systems with non-transitive interactions, it is possible for all species to coexist in a spatially structured model even when they could not all persist in the corresponding panmictic system. Coexistence in these models, when formulated in two spatial dimensions, results from the different species aggregating in regions that cyclically invade each other. It is worth noting that in the reaction-diffusion approach of @nakamaru-iwasa-00, coexistence is not possible in one-dimensional systems. This issue has not, however, been investigated using lattice models. Cellular automata models have the virtue of explicitly introducing space through a lattice of cells and of directly modeling the spatial interactions between individuals. However, such models also have a number of significant limitations. Since spatial structure is introduced in a very concrete fashion, through an explicit choice of a spatial lattice (almost always taken to be a two-dimensional square lattice) and a spatial interaction neighborhood (usually taken to be the eight cells surrounding the focal cell), it is, in general, unclear how changes in these structures affect species coexistence. A second limitation of cellular automata models is the difficulty of using them to study the effects of spatial heterogeneity. In all the lattice models of non-transitive interactions that have been studied, the rules determining how cells are updated are the same at every spatial location, although it is known, in general, that spatial heterogeneity may have important implications for species coexistence [@amarasekare-nisbet-01]. A third limitation is that cellular automata are notoriously difficult to study analytically, and indeed almost all the key results on coexistence of species with non-transitive interactions in lattice models have been obtained from simulations (see, however, @durrett-09).
In this paper we have adopted the metacommunity perspective to formulate a new approach to studying the dynamics of spatially structured communities in which rock-paper-scissors-type interactions hold. This approach assumes that the overall metacommunity is composed of a number of local communities, within each of which the interactions are panmictic, and that the local populations are coupled by dispersal. The resulting metacommunity model allows for a very general treatment of the population dynamics of spatially structured systems with non-transitive interactions, which overcomes many of the limitations inherent in cellular automata-type models. In particular, our model allows a very general treatment of dispersal between spatial patches, includes spatial heterogeneity in a fundamental way, and allows precise analytic derivations of the central results.
In our model, in the absence of dispersal, the population dynamics within each patch exhibit a heteroclinic cycle. Coexistence of all strategies in any given patch requires that the geometric mean of the benefits obtained from the payoffs exceed the geometric mean of the costs within that patch. Moreover, when the spatial patches are coupled by dispersal, the metacommunity possesses a heteroclinic cycle, and all members of the metacommunity persist when a regional coexistence criterion holds: the geometric mean of the invasion rates when rare of the dominant strategies exceeds the geometric mean of the exclusion rates when rare of the subordinate strategies. Although it is not possible, in general, to write down an explicit formula for the eigenvalues associated with these invasion and exclusion rates, it is possible to find more explicit expressions in the limiting cases of weakly mixed and well-mixed metacommunities. Weak mixing occurs when dispersal rates are low. In this case, our analysis reveals that sufficient spatial heterogeneity in the payoffs for pairwise interactions allows metacommunity coexistence even when every local community is extinction prone. Thus, in the presence of spatial heterogeneity, local dispersal promotes coexistence. Alternatively, when dispersal rates are high, the metacommunity is well-mixed. In this case, the coexistence criterion requires that the geometric mean of the spatially averaged benefits obtained from the payoffs exceed the geometric mean of the spatially averaged costs. These coexistence criteria imply that the coexistence of a well-mixed metacommunity guarantees the coexistence of the corresponding weakly mixed one. The converse does not hold. Thus, metacommunities with higher dispersal rates are less likely to persist than those with lower ones.
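Since the geometric mean is a monotone function of the product, the regional criterion can equivalently be stated in terms of the products of the rates, which is the quantity plotted in Figures \[fig:bif2D\] and \[fig:bif2D-2\]:

$$\left(\prod_{i=1}^{3}\mathcal{I}_i\right)^{1/3} > \left(\prod_{i=1}^{3}\mathcal{E}_i\right)^{1/3}
\quad\Longleftrightarrow\quad
\prod_{i=1}^{3}\mathcal{I}_i > \prod_{i=1}^{3}\mathcal{E}_i .$$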
For unconditional dispersal (i.e. when the fraction $d$ of individuals dispersing is independent of location), the interaction between spatial heterogeneity and dispersal leads to a threshold effect: there exists a critical dispersal value $d^*$ such that if the dispersal rate is less than $d^*$ the metacommunity persists, while if the dispersal rate is greater than $d^*$ it is extinction prone. This threshold effect occurs whenever well-mixed communities are extinction prone but weakly mixed communities are not; this happens, for example, when there is sufficient spatial variation in the payoffs even though the cost paid by the loser exceeds the benefit gained by the winner in every pairwise interaction. Similar dispersal thresholds have been demonstrated for two-species competitive communities exhibiting either priority effects or local competitive dominance [@levin-74; @amarasekare-nisbet-01]. However, unlike these transitive systems, regional coexistence in these intransitive systems does not require each species to have regions in space where it is either initially more abundant or competitively dominant.
Our results on the effect of dispersal on the coexistence of rock-paper-scissors metacommunities are in broad qualitative agreement with the conclusions that can be drawn from cellular automata-type models that include the movement of individuals, which is the lattice analogue of dispersal. @karolyi-etal-05 considered a two-dimensional lattice model of non-transitive interactions in which individuals moved due to a chaotic flow, such as might occur in a fluid system. @reichenbach-etal-07 also studied the effect of mobility on coexistence in a two-dimensional cellular automata model of rock-paper-scissors interactions, where individual movement was modeled using techniques of dimer automata [@schofisch-hadeler-96]. In each case it was found through simulation that there exists a critical level of mobility, below which all species coexist and above which only one species survives in the long term. This critical mobility level in lattice models of rock-paper-scissors interactions is the analogue of the critical dispersal rate $d^*$ in our metacommunity model. It is interesting to note in this context that a similar threshold also occurs in a model of cyclic interactions on complex networks studied by @szabo-etal-04. In this case, if the fraction of long-range interactions present in a small-world network is below a critical value, coexistence of all species is possible, while if it is exceeded, species extinctions occur.
We also note that a further example of a lattice model that has been used to study the effect of spatial structure in maintaining metacommunity persistence in a system with non-transitive interactions occurs in the area of prebiotic evolution. @eigen-schuster-79 observed that there is a fundamental problem in the evolution of self-replicating molecules: there exists an information threshold, since the length of the molecule is restricted by the accuracy of the replication process. Eigen and Schuster proposed as a solution to this problem the concept of the hypercycle, in which a number of molecules catalyze the replication of each other in a cyclic fashion. The dynamics of a hypercycle can be modeled mathematically as a replicator equation with a cyclic payoff matrix [@hofbauer-sigmund-98], and thus the hypercycle corresponds dynamically to a replicator system with non-transitive interactions. The concept of a hypercycle has, however, a crucial flaw: it is not evolutionarily stable, as selection will favor the evolution of a parasitic mutant which does not provide any catalytic support to other molecules in the hypercycle even though it receives such catalytic support itself [@maynard-smith-79; @bresch-etal-80]. The evolution of parasitic mutants results in the collapse of hypercycles as entities capable of encoding information. Interestingly, the inclusion of spatial structure can prevent the evolution of selfish mutants and may result in the persistence of hypercycles. The effect of spatial structure on the persistence of hypercycles has been studied using a cellular automaton model in [@boerlijst-hogeweg-91]. It is shown in this model that local spatial interactions result in the formation of self-organized spiral waves, and that selection acting between these spiral waves can counteract the effect of selection acting at the level of the individual molecules, with the consequence that the hypercycle can be resistant to the evolution of parasitic mutants.
The metacommunity model we have introduced here provides a complementary approach to the lattice models that have previously been used to study coexistence in rock-paper-scissors-type systems. It seems likely that each type of model will most naturally describe different types of empirical systems with non-transitive interactions. For example, the lattice modeling approach may describe reasonably well an *in vitro* microbial population growing on a plate [@kerr-etal-02]. In contrast, our metacommunity model would seem to be a more natural approach to use to describe an *in vivo* microbial population inhabiting many host organisms with transmission between the hosts, as in the model system of @kirkup-riley-04, or plant communities living on different soil types [@lankau-strauss-07; @cameron-etal-09]. This observation raises the possibility that it may be possible to use such systems to empirically test the predictions of our metacommunity model.
| |
FACTS AND ALLEGATIONS

Plaintiff was 54 years old on the date of the accident and employed as a cake decorator for Zaro's Bakery.
Plaintiff, a pedestrian, was attempting to cross Bruckner Boulevard within the crosswalk, with the crosswalk signal in her favor, when she was struck by Defendant's van, which was attempting to turn left onto Bruckner Boulevard. Defendant claimed Plaintiff was not crossing within the crosswalk but rather in the middle of the street, and that he did not see Plaintiff because of the weather conditions, which included sleet and snow.
INJURIES/DAMAGES
Plaintiff sustained an injury to her neck as a result of the accident. After conservative treatment failed to provide any relief she was referred for an MRI study of the cervical spine which revealed a herniation at the C5-C6 level. When all attempts at continued treatment failed to provide any relief Plaintiff was recommended for, and underwent, a fusion surgery at the C5-C6 level. As a result of the fall Plaintiff also sustained injuries to her knees bilaterally. MRI studies confirmed multiple tears to both knees requiring bilateral arthroscopic surgeries. Additionally, Plaintiff sustained a right shoulder injury. Again, after conservative treatment failed to provide relief Plaintiff was referred for an MRI that revealed a labral tear resulting in an arthroscopic surgery.
Plaintiff was unable to return to work after the accident and most likely will never work again. Additionally, as a result of these significant injuries, Plaintiff has been unable to engage in her activities of daily living. Plaintiff testified how the injuries have prevented her from working and from caring for and playing with her young grandchildren, which is crushing to her spirit.
The Defendants argued that the impact was minimal; that Plaintiff's cervical spine injury, bilateral knee injuries, and right shoulder injury were all degenerative in nature and not causally related to the subject accident; and that all diagnostic studies failed to illustrate any traumatically induced injuries. Further, the Defendants argued that no further treatment was necessary for any of the claimed injuries and that Plaintiff could engage in all activities of daily living without restriction.
Plaintiff made claims for loss of earnings, past and future medical expenses, as well as pain and suffering and loss of enjoyment of life.
CASE RESULT
The parties negotiated a $1,500,000 pretrial settlement.
ATTORNEYS
This case was handled by Stephen J. Murphy, Esq. and Michael J. Hurwitz, Esq. | https://www.blockotoole.com/Verdicts-Settlements/1-500-000-Settlement-For-Pedestrian-Struck-By-Truck.shtml |
The Eclipse Necklace is the result of a fruitful collaboration between Caralarga and Taller Maya, both from Mexico. It is inspired by the understanding of eclipses in Mayan culture, in which an eclipse is a conflict between the Earth and the Moon; the necklace is also a tribute to Mayan knowledge of astrology.
The necklace is part of the Ancestral Collection, which is made up of unique pieces that reflect the natural and cultural wealth of the Mayan region, with designs crafted by artisans from the Yucatan Peninsula.
- Sansevieria Fiber
- Recycled bull horn
- 925 oxidized silver hook
- Adjustable length
With every purchase you directly contribute to the sustainable development of the indigenous Mayan communities behind each product. | https://www.whitelabel-project.com/products/handmade-eclipse-necklace-caralarga-x-taller-maya |
VR Safety create immersive virtual reality and augmented reality solutions which can be applied to a wide range of industry sectors in order to deliver tangible business improvements.
Virtual or augmented reality can be utilised to speed up learning in both on- and off-the-job training, and to analyse the robustness of a solution against a specific parameter set. By using interactive simulators to access what would be hazardous environments in the real world, VR Safety's technology allows organisations to engage with their workforce and build a culture of safety.
The team has a wide range of industry skills, with first-hand knowledge of complex environments, and can offer valuable business improvement solutions across the aerospace, rail, construction, manufacturing, and food sectors.
Angela Tooley, Project Director for Enscite, shared her experience of how the relationship has developed.
Quick Answer: What Is RAIM Prediction?

Receiver Autonomous Integrity Monitoring (RAIM) assesses the integrity of Global Positioning System (GPS) signals. This system predicts outages for a specified geographical area. These predictions are based on the location, path, and scheduled GPS satellite outages.
What is RAIM and when is it required?
Receiver autonomous integrity monitoring (RAIM) is a technology developed to assess the integrity of global positioning system (GPS) signals in a GPS receiver system. It is of special importance in safety-critical GPS applications, such as in aviation or marine navigation.
What is RAIM and how does it work?
Receiver Autonomous Integrity Monitor (RAIM) is a form of integrity monitoring performed within the avionics themselves. By comparing the distance measurements of a number of satellites, the RAIM function can identify a satellite failure and issue an alert to the pilot.
How many satellites does Raim predict?
RAIM — requires 5 satellites in view (1 extra) to provide the extra geometry needed to check the integrity of each satellite being used. Predictive RAIM — Uses almanac data or NOTAMS to determine in advance if any satellites should be excluded.
What causes rain?
Clouds are made up of tiny water droplets. When these droplets grow, they eventually become too heavy to stay suspended in the sky and fall to the ground as rain. Some droplets fall through the cloud and coalesce into raindrops on their way down.
Can you fly without RAIM?
If there is no RAIM available during part of your flight, you can’t rely on GPS during that part. The GPS may still work fine, but there is no way to check its integrity. A single fault in a GPS satellite or, more likely, a corruption in the downlinked GPS satellite almanac will cause the position fix to be off.
What happens if you lose RAIM?
Loss of the required number of satellites in view, or the detection of a position error, cannot be displayed to the pilot by such receivers. In receivers with no RAIM capability, no alert would be provided to the pilot that the navigation solution had deteriorated, and an undetected navigation error could occur.
How does GPS RAIM work?
RAIM (Receiver Autonomous Integrity Monitoring) is a technology that is used in GPS receivers to assess the integrity of the GPS signals that are being received at any given time. RAIM uses redundant signals to produce several GPS position fixes and then compares them to figure out if there are any inconsistencies.
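The leave-one-out comparison at the heart of RAIM can be sketched in a few lines of Python. This is a toy illustration only; the function and threshold names are our own, `solve_fix` stands in for a caller-supplied least-squares position solver, and certified RAIM algorithms are considerably more sophisticated:

```python
import numpy as np

def raim_consistent(pseudoranges, sat_positions, solve_fix, threshold_m=100.0):
    """With m >= 5 satellites, compute a position fix from every subset that
    drops one satellite, then flag the solution as inconsistent if the fixes
    disagree by more than threshold_m meters."""
    m = len(pseudoranges)
    fixes = []
    for drop in range(m):
        keep = [i for i in range(m) if i != drop]
        fixes.append(solve_fix([pseudoranges[i] for i in keep],
                               [sat_positions[i] for i in keep]))
    fixes = np.asarray(fixes)
    spread = np.max(np.linalg.norm(fixes - fixes.mean(axis=0), axis=1))
    return spread <= threshold_m  # False would trigger an integrity alert
```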
Does ForeFlight predict RAIM?
Yes. RAIM is available in the Flights view Navlog for Performance customers. ForeFlight’s RAIM prediction service covers flights within the continental United States, Alaska, and Hawaii.
Do you need RAIM to have WAAS?
WAAS enhances the reliability of the GPS system and thus no longer requires a RAIM check if WAAS coverage is confirmed to be available along the entire route of flight; in this case the pilot can plan the flight to a destination and file an alternate airport using only the WAAS navigation capabilities.
Is RAIM required for LPV?
En route, you can check RAIM if you’re planning to fly to LNAV minimums. But RAIM can’t tell you if you’ll have sufficient signal quality to fly to LPV minimums, so there’s no need to check it if you plan to fly to LPV approach or LNAV/VNAV minimums.
What is meant by GNSS?
Global navigation satellite system (GNSS) is a general term describing any satellite constellation that provides positioning, navigation, and timing (PNT) services on a global or regional basis.
How does a GPS receiver determine from which satellites it is receiving the signal?
GPS uses a lot of complex technology, but the concept is simple. The GPS receiver gets a signal from each GPS satellite. The satellites transmit the exact time the signals are sent. By subtracting the time the signal was transmitted from the time it was received, the GPS can tell how far it is from each satellite.
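The distance calculation itself is simple arithmetic. The sketch below (ours; it ignores the receiver clock bias and atmospheric delays that real receivers must solve for) shows the idea:

```python
C = 299_792_458.0  # speed of light in m/s

def pseudorange_m(t_transmit_s, t_receive_s):
    # Distance implied by the signal's travel time.
    return C * (t_receive_s - t_transmit_s)

# A travel time of about 0.07 s implies roughly 21,000 km, on the order of
# the ~20,200 km altitude of a GPS satellite.
print(pseudorange_m(0.0, 0.07) / 1000)  # ≈ 20985.5 km
```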
How will rain come?
Water vapor turns into clouds when it cools and condenses—that is, turns back into liquid water or ice. In the cloud, with more water condensing onto other water droplets, the droplets grow. When they get too heavy to stay suspended in the cloud, even with updrafts within the cloud, they fall to Earth as rain.
Why is rain important?
Rain and snow are key elements in the Earth’s water cycle, which is vital to all life on Earth. Rainfall is the main way that the water in the skies comes down to Earth, where it fills our lakes and rivers, recharges the underground aquifers, and provides drinks to plants and animals.
What is rainfall explain?
Rainfall is the amount of precipitation, in the form of rain (water from clouds), that descends onto the surface of Earth, whether it is on land or water. It develops when air masses travel over warm water bodies or over wet land surfaces. The clouds eventually release this water vapor, which is dropped as rainfall. | https://lastfiascorun.com/africa/quick-answer-what-is-raim-prediction.html |
Creative Engagement Case Studies Provide Proof of Concept
As with many processes, creative engagement is best understood by witnessing examples — or trying it oneself. At the Yale School of Medicine’s Child Study Center, for example, creative engagement workshops were essential to drawing out valuable ideas to ensure functional, flexible spaces that make children feel comfortable in a clinical study setting. The overall project involved consolidating and relocating the center’s clinical services to an existing building while “significantly improving the patient experience and integrating programs that are now scattered over multiple sites,” according to Yale School of Medicine. Key goals included creating patient-centered clinical spaces with appropriately sized rooms for families, good acoustical privacy, and roomy waiting areas.
Helping children feel comfortable in the new facility — an admirable aim — required a highly effective process and design. In the creative engagement phase of asking animating questions, the facility management leadership and interior design consultants worked to inspire broad thinking while remaining on the same page in terms of project goals and aspirations.
For one exercise, the facilities director asked group participants to identify and present imagery that made them feel comfortable and happy. The collected input created a working list of what the center needed to help make children — many with mental health differences and a range of psychological and developmental needs — truly comfortable. In a second key exercise, the facilities team queried participants on what feelings they wanted child study center staff and visitors to have upon entering the facility. The team gathered the words and assembled a word cloud based on frequency and importance of the terms and adjectives used. The imagery, affixed on panels, and the word cloud captured highly relevant input that became touchstones for all downstream decisions.
These workshops helped to create the design narrative or charter for the project. They were an essential part of the engagement process.
With the unique mandate of the Yale Child Study Center — preventing childhood mental illness through integrated research, clinical practice, and professional training — the qualitative aspects of occupant ease and a positive experience are essential to a successful mission. It’s not surprising they became central to the creative engagement process.
The work helped to galvanize varied stakeholders from all different work environments, seen as critical to the most successful outcomes for the new Child Study Center. These outcomes included significantly improving the patient experience and access for families, promoting center-wide collaboration and innovation, accommodating program growth, and improving relationships with referring providers.
Proof of concept: Benefits of creative engagement
For another project, the Yale New Haven Health System required a revamped and reorganized new workplace for its systems business office in New Haven, Conn. More than simply adapting a former high school gymnasium and storage facility to create 30,000 square feet of the acclaimed healthcare provider’s administrative and business workplace, the project had to reflect the group’s values of integrity, patient-centeredness, respect, accountability, and compassion.
Collaboration was achieved through both brainstorming and value engineering of choices, maximizing spatial requirements within the confines of a fixed concrete structure. The brainstorming elicited valuable workplace values for the healthcare provider, including the idea that “the entire healthcare system benefits when medical and support staff are themselves as healthy as possible.” The main reason: Hospital employees experiencing wellness and workplace comfort will make better decisions and engage fully with their work.
The creative engagement approach also addressed physical facility attributes seen to help reduce absenteeism, employee error, and staff turnover. Formerly working in an office setting with dark, taller dividers and less visual connections to coworkers, the occupants asked for a more enticing and even joyful interior through the creative yet iterative process of “wash, rinse, repeat.” The facility management team saw a direct connection between specific design moves — the dashes of color, the exposed and whitewashed ceilings, and the open workstation specifications — and the work challenges as they were distilled in the creative engagement takeaways.
The renovation incorporated a new mezzanine structure at the street and main entrance level to add 3,500 square feet of expansion space and create an engaging, open, and bright workplace ideal for collaboration and efficiency among the approximately 200 professionals. The design left exposed the original structure and mechanical systems. These were painted a soft white, complementing the new, high-efficiency lighting. Utilizing existing furniture and new furniture systems, the low workstations include individual desks and benching systems in groups of two people, back to back. A new stand-up huddle space serves as a mini seminar room, with pull-up trolleys and hidden ottoman-style seating. Throughout are woodgrain laminate finishes on work surfaces, as well as comfortable, colorful fabrics in greys and neutrals.
Lynn Brotman ([email protected]), IIDA, NCIDQ, is associate principal at Svigals + Partners, an architecture and design firm. Robynne Orr, is a project planner for the Yale University School of Medicine and West Campus for Yale Office of Facilities. Svigals + Partners, working with varied clients including Yale University, developed the creative engagement template presented in this article. Barry Svigals, FAIA, who helped develop the creative engagement method, contributed to this article. | https://www.facilitiesnet.com/facilitiesmanagement/article/Creative-Engagement-Case-Studies-Provide-Proof-of-Concept--18421 |
Although no cases of Dipentum overdose have been reported, it is possible for a person to take too much of the medicine. Based on what has been seen with similar drugs, healthcare providers believe that vertigo, drowsiness, diarrhea, and other symptoms could occur if too much of the drug is taken. A Dipentum overdose would likely be treated by pumping the stomach or administering supportive care, depending on how recently the overdose occurred.
An Introduction to Dipentum Overdose

Dipentum® (olsalazine sodium) is a medication that is used to treat ulcerative colitis. As with all medicines, it is possible for a person to take too much Dipentum. Overdose effects will vary based on a number of factors, including how much Dipentum is taken and if it is taken with any other medicines.
Symptoms of a Dipentum Overdose

No cases of Dipentum overdose in humans have been reported. However, it is expected that overdose symptoms would be similar to those seen with other medications that are closely related to Dipentum. For example, symptoms could include:
- Tinnitus (ringing in the ears)
- Vertigo (a spinning sensation)
- Confusion
- Drowsiness
- Sweating
- Hyperventilation
- Vomiting
- Diarrhea.
Treatment for a Dipentum Overdose

The treatment for a Dipentum overdose will also vary. If the overdose was recent, the healthcare provider may place a tube into the stomach to "pump the stomach" or give certain medicines. However, once the drug has been absorbed into the body, there is no treatment that can remove it quickly. Therefore, in these cases, treatment typically involves supportive care, which is treating the symptoms that occur as a result of the overdose. For example, supportive treatment options may include:
- Fluids through an intravenous line (IV)
- Other treatments based on complications that occur.
It is important that you seek medical attention immediately if you believe that you may have overdosed on Dipentum. | http://colitis.emedtv.com/dipentum/dipentum-overdose.html |
The Harvard Medical School Center for Global Health Delivery–Dubai hosted a technical meeting in Dubai bringing together leaders in TB care and prevention to discuss a best-practice framework of program indicators for monitoring a comprehensive approach to the tuberculosis epidemic. Sixty participants—including clinicians, researchers, program implementers, and policy makers—represented 14 countries. Many of the participants are associated with the Zero TB Initiative. This Initiative aims to build broad-based local coalitions that can together implement a comprehensive “Search, Treat, Prevent” approach to rapidly accelerate progress against TB.
As programs move toward adopting a comprehensive approach to tackling the TB epidemic, a need has been identified for indicators that can be used to monitor, evaluate, and improve program performance. The current performance indicators used by TB programs focus almost exclusively on the treatment of patients who passively present to the health system and are treated for TB disease. In contrast, a comprehensive approach includes actively searching for people who have TB, enhanced support during treatment, and the treatment of TB infection to prevent future disease. An expanded indicator framework is required for such an approach. At this meeting, a group of researchers from the Zero TB Initiative proposed a best-practice framework of process indicators that can be used to monitor program performance, identify gaps, and measure progress in the implementation of a comprehensive approach to the TB epidemic. They presented a draft guidance document intended to help programs develop systems to operationalize the framework. Participants discussed both the framework and the guidance document in both small group and full group settings, sharing their diverse perspectives, experience, and expertise, and providing constructive critique.
Participants enthusiastically supported the development of a comprehensive framework of indicators and expressed their intent to employ this framework to improve performance and streamline implementation of their TB programs. Their input has informed the further development of the guidance document. Ultimately, these indicators will help Zero TB coalitions improve their programs to eliminate TB. | https://ghd-dubai.hms.harvard.edu/event/tb-indicators-meeting-0 |
1. Introduction {#sec1-cancers-12-00153}
===============
Cancer is one of the leading causes of death worldwide \[[@B1-cancers-12-00153]\]. Cancer mortality is mainly due to metastases, especially as secondary tumors develop in vital organs. Metastasis is described as the spread of tumor cells that detach from the primary tumor, circulate through the blood and/or lymphatic vessels, then escape the circulation to develop a secondary tumor, with the characteristics of the primary tumor, at a distant site within the human body \[[@B2-cancers-12-00153]\]. According to Chambers et al. \[[@B3-cancers-12-00153]\], the metastatic process is composed of five different steps: (1) epithelial-mesenchymal transition; (2) intravasation; (3) survival in circulation; (4) extravasation; and (5) seeding and colonization. The epithelial-mesenchymal transition, accompanied by extracellular matrix degradation, is the loss of cell-to-cell adhesion (and associated polarity) as well as the acquisition of migratory and invasive properties \[[@B4-cancers-12-00153]\]. Acquiring mesenchymal properties enables the tumor cell to invade surrounding and distant tissues. Conversely, during extravasation, tumor cells undergo a mesenchymal-to-epithelial transition after the invasion of the distant tissue. Intravasation is the metastatic step where tumor cells enter the systemic circulation via existing or newly formed blood and/or lymphatic vessels to reach and invade a distant tissue \[[@B3-cancers-12-00153]\]. Tumor cells that have reached the lymphatic and/or blood circulation are called circulating tumor cells \[[@B5-cancers-12-00153]\]. Circulating tumor cells acquire motility and survival characteristics against the mechanical and hemodynamic shear stress generated by the blood stream, the immune system, and apoptosis induced by the detachment of tumor cells from the extracellular matrix (also called anoikis), enabling dissemination into secondary tissues \[[@B5-cancers-12-00153]\]. The fourth step of the metastatic process consists in the arrest of the circulating tumor cells that have survived, their passage through the endothelial membrane to escape the circulation, and their subsequent invasion of a distant tissue \[[@B6-cancers-12-00153]\]. Extravasation depends on various factors: mechanical characteristics of blood vessels, such as size and permeability, expression of specific binding factors, and secretion of lytic enzymes, among others. The last step of the metastatic cascade is the seeding of the tumor cell in the distant tissue where it extravasates. The seeding is followed by colonization, which requires a favorable microenvironment for proliferation. The 'seed and soil' theory \[[@B7-cancers-12-00153]\] proposes that the proliferation of a tumor cell in a secondary tissue is not random and depends on favorable interactions between the tumor cell, which originates from the primary tissue (the seed), and the characteristics of the secondary tissue (the soil). Only an optimal combination of both permits the cell to develop and form a secondary tumor. This may explain why certain cancers develop metastases in specific sites and not in others \[[@B3-cancers-12-00153]\].
According to the World Health Organization, physical inactivity is the predominant avoidable risk factor for developing cancer \[[@B8-cancers-12-00153]\]. In Europe, 9% to 19% of cancers could be prevented if physical activity were performed \[[@B9-cancers-12-00153]\]. A meta-analysis reported that regular physical activity before and after the diagnosis of cancer is associated with reduced cancer mortality, especially in breast, colon, and endometrial cancer, in a manner that depends on the activity level \[[@B10-cancers-12-00153]\]. Furthermore, an increased level of physical activity from before to after the diagnosis, in comparison to subjects who did not change their activity level, was also shown to decrease the risk of cancer mortality \[[@B10-cancers-12-00153]\]. In contrast, a decreased physical activity level between pre- and post-diagnosis was associated with a higher risk of cancer mortality \[[@B10-cancers-12-00153]\]. Altogether, this reinforces the hypothesis that physical activity, before and after diagnosis, may decrease cancer mortality.
The links between exercise, cancer risk, and primary tumor metabolism are well studied, but less is known about the regulation of the factors involved in the metastatic cascade by exercise. Therefore, the main purpose of this narrative review is to provide an overview of the currently available knowledge about the mechanisms by which physical activity may affect metastatic development. The secondary aim of this study is to describe the adequate type, volume, frequency, and intensity of exercise to induce beneficial effects on metastatic development.
2. Results {#sec2-cancers-12-00153}
==========
2.1. Whole Metastatic Development {#sec2dot1-cancers-12-00153}
---------------------------------
Three studies examined the effect of physical activity on overall metastatic development without focusing on a specific metastatic phase. In the first study, a protective effect of physical activity on tumor spread was found in mice previously trained by running for 9 weeks, particularly for less aggressive tumor cells \[[@B11-cancers-12-00153]\] ([Table 1](#cancers-12-00153-t001){ref-type="table"}). In the second study, in a model of breast cancer in mice, 35 weeks of moderate-intensity exercise training protected against pulmonary metastases, whereas two metastases were measured in the control group \[[@B12-cancers-12-00153]\]. In contrast, in the third study, enhanced tumor growth and metastasis were found in mice that ran voluntarily before (6 weeks) and after (8 weeks) inoculation with liposarcoma cells compared to sedentary mice \[[@B13-cancers-12-00153]\]. These contrasting results on the effects of physical activity on metastatic development nicely reflect the literature in this research area. The rest of the results section will attempt to present the conditions under which beneficial or detrimental effects of physical activity are found for each metastatic phase. Those conditions will then be discussed in the last section.
2.2. Epithelial-Mesenchymal Transition and Extracellular Matrix Degradation {#sec2dot2-cancers-12-00153}
---------------------------------------------------------------------------
Jones et al. investigated whether the observed lower metastatic burden after 8-week voluntary wheel running was related to changes in pro-metastatic gene expression in a model of murine prostate cancer \[[@B21-cancers-12-00153]\]. No clear picture emerged, as a downregulation of the mRNA levels of hepatocyte growth factor receptor, and a tendency towards a downregulation of insulin-like growth factor-1 (IGF-1) receptor, as well as an upregulation of the mRNA levels of C-X-C chemokine receptor type 4 (CXCR4) and a tendency towards an upregulation of matrix metalloproteinase (MMP)-9, were measured in the exercising compared to the control group after 8 weeks. Zhang et al. compared the effects of different swimming durations on the levels of dopamine secretion and its effect on liver cancer progression and dissemination in mice \[[@B31-cancers-12-00153]\]. Voluntary swimming (8 min/day for 9 weeks) reduced the rate of metastases and prolonged survival, while forced prolonged swimming (16 or 32 min/day for 9 weeks) had the opposite effect \[[@B31-cancers-12-00153]\]. In the prefrontal cortex, serum, and tumor tissue, the levels of dopamine, known to inhibit angiogenesis, proliferation, and invasion of cancer cells, increased in the voluntary swimming group only. Voluntary swimming decreased the protein expression of transforming growth factor-β1 (TGF-β1), vimentin, and N-cadherin and increased the expression of E-cadherin, while forced prolonged swimming had the opposite effect on the expression of those molecules in liver cancer tissue. Altogether, those results suggest that the epithelial-mesenchymal transition, tumor proliferation, and tissue invasion are slowed down by moderate swimming in mice, while the opposite seems to occur in the case of forced prolonged swimming \[[@B31-cancers-12-00153]\].
Extracellular matrix degradation by MMPs enhances tumor invasion, alters tumor cell behavior, and leads to cancer progression. In nasopharyngeal carcinoma cells incubated with platelet-enriched plasma from exercising men, decreased MMP-2 and -9 activities and increased platelet-nasopharyngeal carcinoma cell aggregation were measured when the subjects had exercised at high intensity (80%--100% VO~2~max) \[[@B33-cancers-12-00153]\]. Moderate-intensity exercise (60% VO~2~max) reduced the formation of platelet-nasopharyngeal carcinoma cell aggregates without changes in MMP activities. Warming up attenuated the changes induced by high-intensity exercise, possibly by deactivation of adhesion molecules on platelets and regulation of the redox status \[[@B33-cancers-12-00153]\].
2.3. Intravasation {#sec2dot3-cancers-12-00153}
------------------
Angiogenesis is a key process leading to intravasation. Theoretically, the molecular mechanisms regulating angiogenesis may, therefore, influence intravasation, among which are hypoxia and the hypoxia-responsive transcription factor, hypoxia-inducible factor 1 (HIF-1). Higher expression of HIF-1 is associated with an elevated rate of invasiveness and tumor cell motility \[[@B6-cancers-12-00153]\]. Exercise-induced stabilization of HIF-1 would thus be expected to increase metastatic spread, but the opposite was observed in voluntary wheel running mice with prostate \[[@B21-cancers-12-00153]\] or breast cancer \[[@B20-cancers-12-00153]\]. The favorable or unfavorable outcome induced by HIF-1 on metastasis seems to depend on the mechanisms by which HIF-1 is activated. Hypoxia induced under resting conditions is linked with pathological and inefficient angiogenesis that sustains a hypoxic tumor microenvironment. In contrast, exercise induces physiological angiogenesis that leads to improved tumor vascularization \[[@B34-cancers-12-00153]\]. Circulating endothelial progenitor cells promote this physiological angiogenesis by incorporating into growing blood vessels \[[@B21-cancers-12-00153]\], but they may also facilitate metastasis by supporting the establishment of the pre-metastatic niche \[[@B32-cancers-12-00153]\]. As the levels of those cells increased after a 12-week aerobic exercise program on a cycle ergometer in women suffering from breast cancer, it cannot be excluded that aerobic training could facilitate the acquisition of an invasive tumor phenotype or accelerate metastasis \[[@B32-cancers-12-00153]\].
Chronic physical activity stimulates angiogenesis by increasing vascular endothelial growth factor (VEGF) expression, which leads to a better association between pericytes and endothelial cells and reduced endothelial permeability \[[@B17-cancers-12-00153]\]. In female rats inoculated with mammary cancer, running on a treadmill for 35 weeks induced a higher expression of VEGF-A and higher tumor vascularization, which led to increased tumor growth \[[@B17-cancers-12-00153]\]. However, the tumors were less aggressive than in the sedentary controls, and the latency between tumor inoculation and development was longer in exercising rats \[[@B17-cancers-12-00153]\]. Following lung tumor inoculation in mice, 4 weeks of wheel running did not change tumor or metastasis weight or volume compared to the sedentary group \[[@B27-cancers-12-00153]\]. VEGF is known to upregulate monocyte chemotactic protein-1 (MCP-1) expression, and high MCP-1 plasma levels are associated with lymph node metastases \[[@B30-cancers-12-00153]\]. Voluntary wheel running did not modify the increase in VEGF and MCP-1 after injection of Lewis lung carcinoma cells compared to sedentary mice \[[@B30-cancers-12-00153]\].
Another important regulator of intravasation is nitric oxide (NO), which may play a dual and opposite role in this process. On the one hand, NO could promote metastasis by stimulating proliferation, migration, and cellular invasion; on the other hand, the cytotoxic effect of NO on damaged DNA could prevent cancer cell proliferation \[[@B35-cancers-12-00153]\]. A study in mice found that 4 weeks of wheel running after breast cancer cell injection induced lower NO production together with a higher number of pulmonary metastases compared to sedentary controls \[[@B26-cancers-12-00153]\].
2.4. Survival in the Circulation {#sec2dot4-cancers-12-00153}
--------------------------------
Regmi et al. looked at the effect of hemodynamic shear stress on the survival of circulating tumor cells, i.e., breast, lung, and ovarian cancer cells, in vitro \[[@B16-cancers-12-00153]\]. Various shear stress conditions were tested, from the resting state (15 dynes/cm^2^ in human arteries and 1--6 dynes/cm^2^ in human veins) to intense exercise (up to 60 dynes/cm^2^). Shear stress of 45 and 60 dynes/cm^2^ killed 82% and 99% of the circulating tumor cells, respectively, whereas shear stress of 15 and 30 dynes/cm^2^ allowed 48% and 64% circulating tumor cell viability after the same incubation time in the microfluidic system \[[@B16-cancers-12-00153]\]. Viability was inversely related to incubation time \[[@B16-cancers-12-00153]\].
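Because different studies in this review report flow stimulation in different units (dynes/cm^2^ here, Pa for the osteocyte experiments in Section 2.5), it can help to place the reported conditions on a single scale. Below is a minimal sketch of that conversion (1 dyne/cm^2^ = 0.1 Pa), re-using only the viability figures quoted above; the incubation-time dependence is not modeled.

```python
# Unit conversion: 1 dyne/cm^2 = 0.1 Pa, so the conditions above span
# roughly 1.5 Pa (resting arterial) to 6 Pa (intense exercise).
DYNE_PER_CM2_TO_PA = 0.1

# Circulating-tumor-cell viability at each shear stress, taken from the
# figures quoted in the text (killed fractions converted to viability).
viability_by_shear = {15: 0.48, 30: 0.64, 45: 1 - 0.82, 60: 1 - 0.99}

for dynes in sorted(viability_by_shear):
    viable = viability_by_shear[dynes]
    pa = dynes * DYNE_PER_CM2_TO_PA
    print(f"{dynes:>2} dynes/cm^2 = {pa:.1f} Pa -> "
          f"{viable:.0%} viable, {1 - viable:.0%} killed")
```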
Circulating tumor cells are particularly sensitive to immunologic agents circulating in the blood vessels. Thus, exercise could indirectly modulate circulating tumor cell viability via regulation of immunologic factors. In mice that performed 9 weeks of wheel and treadmill running, higher splenic natural killer (NK) cell activity was detected 3 weeks after tumor cell injection and cessation of exercise compared with sedentary controls, although this was not associated with reduced tumor incidence in the lung \[[@B23-cancers-12-00153]\]. The effect of NK activity on tumor dissemination seems to be effective only if exercise is initiated before the metastatic spread. Exercise training enhances NK cytotoxicity and levels, but these adaptations do not appear after tumor injection into the blood vessels, possibly due to the suppressive effect of tumor cells on NK activity \[[@B36-cancers-12-00153]\].
Macrophages are also important in the control of metastasizing tumor cells. Their cytotoxicity was enhanced in mice after a session of running to fatigue at an increasing intensity up to 68%--78% VO~2~max that lasted about 3 h, and this effect lasted 8 h post-exercise \[[@B37-cancers-12-00153]\]. In the same study, running for 30 min at 55%--65% VO~2~max was less effective at stimulating macrophage cytotoxicity \[[@B37-cancers-12-00153]\]. Similarly to acute exercise to fatigue, repeated moderate-intensity treadmill running during the 6 days before tumor inoculation enhanced macrophage cytotoxicity and decreased metastatic spread of B16 melanoma cells in mice \[[@B25-cancers-12-00153]\].
In sedentary men suffering from nasopharyngeal carcinoma, high-intensity cycling exercise (80%--100% VO~2~max) increased platelet-tumor aggregation and tissue factor-induced coagulation, which are known to promote metastasis \[[@B33-cancers-12-00153]\]. These effects were limited when a warm-up preceded the exercise intervention. Moderate-intensity exercise (60% VO~2~max) inhibited platelet-tumor aggregation and, thereby metastatic dissemination \[[@B33-cancers-12-00153]\].
2.5. Extravasation {#sec2dot5-cancers-12-00153}
------------------
Extravasation begins as circulating tumor cells get stuck in a blood vessel distant from the primary tumor site. After finding that tumor cell retention in secondary organs post tumor injection was lower in wheel-running trained mice than in sedentary controls, Hoffmann-Goetz et al. hypothesized that exercise could affect the adherence of tumor cells to the vessel endothelium \[[@B19-cancers-12-00153]\]. However, others did not find any difference in the localization of metastases between mice running 30 min at 55%--65% VO~2~max, mice running to fatigue at up to 68%--78% VO~2~max, and control mice; only a small decrease in the number of entrapped circulating tumor cells was measured in the run-to-fatigue group \[[@B37-cancers-12-00153]\].
Wolff et al. \[[@B29-cancers-12-00153]\] investigated whether voluntary wheel running in mice would modify the expression of tight junction proteins, such as occludin, zonula occludens, and claudin-5, as enhanced tight junctions in blood vessels may prevent the escape of circulating tumor cells into distant tissues \[[@B29-cancers-12-00153]\]. After 5 weeks of voluntary wheel running, mice were injected with highly metastatic tumor cells into the brain vasculature. While a decrease in occludin, zonula occludens, and claudin-5 expression was measured in the sedentary mice, an increase in claudin-5 expression and unchanged occludin and zonula occludens levels were found in the exercising mice 48 h post tumor injection. Three weeks post-injection, the expression of occludin and claudin-5 was increased in the exercising mice. As fewer metastases were observed in exercising mice, those results suggest that the protection provided by the blood-brain barrier against metastases could be enhanced by exercise. Activation of small GTPases results in claudin-5 and occludin phosphorylation, which impairs tight junction protein function and thereby decreases endothelial cell tightness and increases blood-brain barrier permeability \[[@B38-cancers-12-00153]\]. In another study in mice, Wolff et al. investigated whether 4 weeks of endurance training modulated endothelial permeability after tumor inoculation via the small GTPases Ras, Rac1, and Rho, which are redox-sensitive \[[@B28-cancers-12-00153]\]. Activation of the small GTPase Rho was negatively correlated with running distance in the tumor cell-infused mice, probably due to exercise-induced enhanced antioxidant capacity \[[@B28-cancers-12-00153]\].
Circulating tumor cells interact with cells such as bone-resorbing osteoclasts to alter bone remodeling and invade bones \[[@B14-cancers-12-00153]\]. The effect of mild-intensity physical activity on this process has been mimicked in vitro by applying oscillatory fluid flows of 0.8--3 Pa to osteocytes \[[@B14-cancers-12-00153]\]. Conditioned medium from flow-stimulated osteocytes increased migration and reduced apoptosis of breast cancer cells. The opposite was observed with conditioned medium from osteoclasts that had themselves been cultured in conditioned medium from flow-stimulated osteocytes. Cancer cell trans-endothelial migration was reduced when using flow-stimulated osteocyte conditioned medium, an effect that was abolished when blocking intercellular adhesion molecule 1 (ICAM-1) and interleukin 6 in the medium \[[@B14-cancers-12-00153]\]. Those results suggest that osteocyte activation through oscillatory fluid flow can indirectly reduce the migration of breast cancer circulating tumor cells into the bone matrix and increase circulating tumor cell apoptosis \[[@B14-cancers-12-00153]\]. In a follow-up study looking at molecular regulators of extravasation, the same group found that, in addition to ICAM-1, MMP-9 and frizzled 4 regulate the invasion of bone-metastatic breast cancer cells \[[@B15-cancers-12-00153]\]. Altogether, flow-stimulated osteocytes can downregulate the bone-metastatic potential of breast cancer cells by signaling through endothelial cells.
2.6. Seeding and Colonization {#sec2dot6-cancers-12-00153}
-----------------------------
In a recent review, Koelwyn et al. explored the putative effects of exercise in reprogramming the interaction between the host and the tumor microenvironment \[[@B39-cancers-12-00153]\]. Focusing on seeding and colonization, bone marrow-derived dendritic cells seem particularly interesting to study. Those cells regulate angiogenesis and primary tumor growth, and promote the formation of the premetastatic niche and the colonization of tumor cells. Moderate to intensive cycling training for 12 weeks was shown to increase the number of circulating bone marrow-derived dendritic cells in women with breast adenocarcinoma, suggesting that exercise could facilitate the acquisition of an invasive tumor phenotype or augment seeding and/or colonization \[[@B32-cancers-12-00153]\]. However, all the aforementioned factors in this results section contribute, to varying degrees, to the interaction between the host and the tumor environment. As such, a more integrative approach and discussion are needed, which is the purpose of the next section.
3. Discussion {#sec3-cancers-12-00153}
=============
The purpose of this review was to investigate if and how the metastatic cascade is regulated by physical activity, with a specific focus on the nature, i.e., aerobic or resistance exercise, and the modalities, i.e., intensity, volume, and frequency, with which exercise is performed ([Figure 1](#cancers-12-00153-f001){ref-type="fig"}). As physical activity was found to either enhance or prevent metastatic spread, it seems that different conditions and modalities of exercise can induce distinct, and sometimes opposite, effects on the metastatic process, mainly due to the specific hormonal responses induced by those different conditions. It is, therefore, crucial to highlight and discuss those specific conditions.
While their levels generally increase after resistance exercise in humans \[[@B40-cancers-12-00153]\], IGF-1 and TGF-β1 have been shown to be regulated by voluntary or forced swimming in mice, initiated either before or after tumor inoculation \[[@B31-cancers-12-00153]\]. Activation of TGF-β1 helps tumor cells to detach from the primary tumor by remodeling the extracellular matrix \[[@B31-cancers-12-00153]\]. IGF-1 promotes metastasis through the inhibition of proteasome-mediated cathepsin degradation. Cathepsin permits tumor invasion and spread through degradation of the extracellular matrix \[[@B41-cancers-12-00153]\]. However, further investigation is necessary to examine whether hormones secreted following resistance exercise may directly enhance extracellular matrix degradation in the tumor microenvironment, particularly in humans. Catecholamines secreted during intensive exercise can regulate inflammatory cytokines, such as TNF-α and interleukins (i.e., IL-1 and IL-6) \[[@B42-cancers-12-00153]\]. These hormones enhance endothelial permeability and adhesion to the vascular wall, helping circulating tumor cells to enter a distant tissue. Increased secretion of catecholamines may also enhance NK cell activity and cytotoxicity, as NK cells express β-adrenergic receptors \[[@B43-cancers-12-00153]\]. Voluntary running suppresses tumor growth and spread through epinephrine- and IL-6-dependent NK cell mobilization and redistribution, which modulates immunity and increases efficiency against circulating tumor cells \[[@B43-cancers-12-00153]\]. Higher NK cell activity has been linked with a lower rate of metastasis \[[@B11-cancers-12-00153]\]. Finally, inflammatory cytokines and growth hormones may also increase MMP expression, which plays a role, among others, in extracellular matrix degradation \[[@B44-cancers-12-00153]\]. MMP expression was shown to be either increased or decreased in physically active patients, but it correlated directly with TNF-α levels. The following sections will discuss how different exercise modalities may regulate the metastatic cascade.
3.1. Moderate Versus High Intensity Physical Activity {#sec3dot1-cancers-12-00153}
-----------------------------------------------------
In healthy people, higher intensity leads to higher levels of exercise-induced stress and, thereby, more adaptations and a more potent increase in exercise capacity. However, in cancer patients, exercising at higher intensities does not always appear to be beneficial with respect to tumor spread. Exercise intensity may affect platelet adhesion to tumor cells, as platelet aggregation has been found to be higher than resting levels after high-intensity endurance exercise \[[@B33-cancers-12-00153]\]. When the intensity was moderate, platelet adhesion was lower, thereby decreasing circulating tumor cells' survival in the vessels. Of note, platelet-tumor aggregation was decreased below resting levels after high-intensity exercise if the subjects performed a warm-up beforehand \[[@B33-cancers-12-00153]\].
In addition to platelet adhesion, exercise intensity also modulates immunity \[[@B18-cancers-12-00153],[@B22-cancers-12-00153],[@B24-cancers-12-00153]\], which is especially important for the survival of circulating tumor cells. A higher phagocytic capacity of macrophages was reported after moderate-intensity exercise, but not after high-intensity treadmill running or in sedentary controls \[[@B45-cancers-12-00153]\]. It is important to note that the macrophage phenotype may either enhance (M2 macrophages) or prevent (M1 macrophages) tumor growth and dissemination. Both moderate- and high-intensity exercise decrease macrophage recruitment into the tumor microenvironment, enhance expression of the anti-tumor M1 phenotype, and increase macrophage cytotoxicity \[[@B46-cancers-12-00153]\].
The regulation of adhesion molecules, such as ICAM-1, is dependent on exercise intensity as well. ICAM-1 plays a role in endothelial barrier disruption, cell adhesion, and transmigration \[[@B47-cancers-12-00153]\] and thus promotes extravasation. In breast cancer cells, ICAM-1 activation was lower after moderate-intensity mechanical stimulation (oscillatory fluid flow) in vitro \[[@B14-cancers-12-00153],[@B15-cancers-12-00153]\]. In cancer patients with an activity level of 150 or 300 min/week at 50%--70% of maximum heart rate (i.e., moderate-intensity exercise), ICAM-1 levels were lower, which in turn reduced the number of circulating tumor cells in the vessels \[[@B48-cancers-12-00153]\]. A recent review concluded that moderate-intensity endurance exercise in healthy people is associated with a decrease in adhesion molecules, whereas high-intensity endurance exercise is associated with an increased expression of these factors for several hours post-exercise \[[@B47-cancers-12-00153]\]. In addition, interval training is more effective at reducing ICAM-1 levels than continuous training. Whether those conclusions apply to cancer patients remains to be tested, as does their possible involvement in the modulation of metastatic development.
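To make the "moderate intensity" band above concrete, the 50%--70% of maximum heart rate range can be estimated for a given patient. The sketch below assumes the common age-based estimate HRmax ≈ 220 − age, which is an approximation introduced here for illustration, not a method taken from the cited studies (which measured intensity directly):

```python
def moderate_intensity_hr_range(age_years: int,
                                low: float = 0.50,
                                high: float = 0.70) -> tuple[float, float]:
    """Target heart-rate band for 'moderate intensity' as defined above
    (50%--70% of maximum heart rate).

    Assumes HRmax ~= 220 - age, a rough population-level estimate only.
    """
    hr_max = 220 - age_years
    return low * hr_max, high * hr_max

# Example: a 60-year-old patient aiming for 150--300 min/week of
# moderate-intensity activity.
lo, hi = moderate_intensity_hr_range(60)
print(f"Target heart rate: {lo:.0f}--{hi:.0f} beats/min")
```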
3.2. Acute Versus Chronic Physical Activity {#sec3dot2-cancers-12-00153}
-------------------------------------------
The responses to physical activity can be categorized into short-term adaptations, during or directly after a single exercise session, and long-term adaptations, when exercise sessions are repeated over a certain period. The immune response is a typical example of this differential adaptation. Circulating immune markers have been shown to decrease directly after an exercise session, but the immune response improves after a training period \[[@B49-cancers-12-00153]\]. In addition to the chronicity of physical activity and the training status, other factors may affect the immune response to acute exercise, including age, nutritional status, and extreme environments, but the most critical determinants are the intensity and duration of the exercise bout \[[@B49-cancers-12-00153]\]. The intensity mainly affects changes in blood lymphocyte numbers during and after exercise, while the duration has a stronger influence on the neutrophil and total leukocyte count \[[@B50-cancers-12-00153]\]. It is, therefore, important not to exercise at too high an intensity or for too long a period, to avoid a significant drop in immune activity, as the latter would favor cancer cell survival in the circulation and metastatic spread. The difficulty is to determine the intensity and duration thresholds for each patient, which mainly depend on the training status and the progression of the disease.
Regular moderate exercise is usually associated with reductions in circulating pro-inflammatory cytokines, increased T-cell proliferation, and NK cell activity, leading to enhanced immuno-surveillance in resting state \[[@B49-cancers-12-00153]\]. Indeed, immunity is a first response to eliminate abnormal cells that develop into a malignant tumor mass and tumor cells \[[@B51-cancers-12-00153]\]. Furthermore, immune cells mobilized by exercise tend to be more differentiated and to have an increased cytotoxic function. In mice, cytotoxic immune cells are particularly mobilized after wheel running by increasing the number of immune-attractive chemokines and ligands that activate NK cells \[[@B43-cancers-12-00153]\].
In addition to the immune system, chronic exercise has been shown to regulate the Wnt-β-catenin pathway. The latter regulates the primary step in the metastatic process, i.e., tumor invasion, and, more particularly, the regulation of cell differentiation, polarity, and cell-to-cell adhesion \[[@B47-cancers-12-00153]\]. After wheel running for 6 weeks, levels of E-cadherin were higher, and nuclear levels of β-catenin were lower, in small intestine tumors in mice \[[@B52-cancers-12-00153]\]. Together with the higher expression of the tumor suppressor E-cadherin \[[@B52-cancers-12-00153]\], the lower expression of β-catenin indicates that tumor invasion is reduced after 6 weeks of wheel running in mice. In humans, low levels of nuclear β-catenin in normal and colorectal cancer cells of exercising patients were associated with reduced mortality risk \[[@B53-cancers-12-00153]\]. These results confirm previous findings that patients with low nuclear β-catenin levels and high physical activity levels (\>18 MET-h/week) after cancer diagnosis had lower cancer mortality rates \[[@B54-cancers-12-00153]\]. This all suggests that chronic exercise may be beneficial for controlling tumor invasion by decreasing β-catenin levels. Here as well, both the chronicity and the intensity of physical activity seem important in the regulation of tumor invasion.
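The \>18 MET-h/week threshold cited above can be translated into weekly activity totals. A minimal sketch follows; the MET values are rough figures from standard activity compendia and are assumptions for illustration, not data from the cited studies:

```python
# Approximate MET values from common activity compendia (assumptions,
# not values reported in the cited studies).
MET = {"walking_brisk": 3.5, "cycling_leisure": 5.5, "swimming": 6.0}

def met_hours_per_week(sessions: dict[str, float]) -> float:
    """sessions maps activity name -> total hours per week."""
    return sum(MET[activity] * hours for activity, hours in sessions.items())

# Example: 3 h of brisk walking plus 1.5 h of leisure cycling per week.
weekly = {"walking_brisk": 3.0, "cycling_leisure": 1.5}
total = met_hours_per_week(weekly)
print(f"{total:.1f} MET-h/week; above the 18 MET-h threshold? {total > 18}")
```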
Vascularization, an important long-term adaptation in response to exercise training, has been shown to play a predominant role in the second step of the metastatic cascade. Angiogenesis can be induced by NO, which usually increases in response to both acute and chronic physical activity \[[@B17-cancers-12-00153]\]. In breast cancer patients performing aerobic exercise, higher NO production was related to less aggressive and invasive tumors \[[@B32-cancers-12-00153]\]. Conversely, lower NO production after training was associated with higher metastatic rates in mice \[[@B26-cancers-12-00153]\]. The latter result is in line with the inhibitory role of NO in platelet aggregation, together with its vasodilatory and anti-oxidative properties \[[@B26-cancers-12-00153]\]. High NO production would reduce, while low NO production would rather favor, metastasis development. While those results seem consistent regarding metastasis, it remains to be determined why exercise training for a few weeks induces opposite results on NO production, which is probably not only a matter of model, i.e., patients vs. mice. Other, as yet unknown, mechanisms are probably involved.
3.3. Forced versus Voluntary Physical Activity {#sec3dot3-cancers-12-00153}
----------------------------------------------
Forced physical activity refers to treadmill running or any structured exercise, while voluntary activity refers to wheel running or any exercise performed spontaneously. Forced prolonged physical activity may exert deleterious effects through excessive physiological and psychological stress and consequently counteract the positive effects of exercise \[[@B38-cancers-12-00153]\]. For example, voluntary swimming for 8 min was linked with decreased metastatic burden, whereas forced prolonged swimming enhanced tumor growth and lung metastatic spread in mice with transplanted liver cancer \[[@B31-cancers-12-00153]\]. After voluntary exercise, levels of dopamine, which exhibits anti-tumor properties, were increased, and levels of TGF-β1, a key factor in inducing epithelial-mesenchymal transition, were decreased \[[@B31-cancers-12-00153]\]. Those levels evolved in the opposite direction in the forced swimming group \[[@B31-cancers-12-00153]\]. No resting time was allowed during the forced prolonged swimming sessions, whereas voluntary swimming usually allows intermittent floating and recovery. Sufficient recovery time between exercise sessions also needs to be taken into account to avoid excessive fatigue and potentially deleterious outcomes in cancer patients \[[@B55-cancers-12-00153]\].
Based on the previous sections, it seems particularly important to adapt the training program to the training status of the patient and to progressively increase the volume, intensity, and frequency of the training sessions, as well as the recovery between sessions.
3.4. Lactate Levels {#sec3dot4-cancers-12-00153}
-------------------
When dealing with cancer and physical activity, it is impossible to ignore lactate, as it plays key roles in both conditions. Contrary to what was thought for years, lactate is not a waste product of exercise but rather an important energy substrate for gluconeogenesis, which can also be oxidized in type I muscle fibers \[[@B56-cancers-12-00153]\]. In addition, the buffering capacity of lactate limits the pH decrease during exercise \[[@B56-cancers-12-00153]\]. In cancer, lactate plays a major role in tumor proliferation and malignant phenotyping, i.e., self-sufficient metabolism, angiogenesis, immune escape, migration, and metastasis \[[@B57-cancers-12-00153]\]. Its modulation by exercise may thus have an impact on tumor invasion. The link between exercise-induced lactate metabolism and tumor spread has been reviewed by San-Millan and Brooks \[[@B57-cancers-12-00153]\]. For more details, the reader is referred to this review, but the key message is that lactate is an important factor in tumor proliferation and dissemination and, therefore, a new main target for cancer therapies. Its role in cancer metabolism parallels its role in exercise and forms a new field of investigation in exercise-oncology, but it needs to be better understood. Blood lactate levels are directly correlated with exercise intensity. It can thus be hypothesized that, on the one hand, moderate-intensity exercise may improve tumor lactate metabolism and mitochondrial function, while, on the other hand, high-intensity exercise may lead to overexpression of lactate receptors, thus enhancing lactate's action in tumor growth and spread. Based on this and the current knowledge, it is probably safer not to recommend high-intensity training to cancer patients.
3.5. Limitations {#sec3dot5-cancers-12-00153}
----------------
A first limitation of this review is the heterogeneity of the investigated variables, which makes it difficult to compare studies. Tumor types, the method and timing of tumor inoculation, the investigated species, and the exercise protocols, among others, differed across studies. These differences may also help to explain divergent results among studies investigating similar factors. A second limitation is the fact that most studies were conducted in vitro or in rodents. Observations may, therefore, not entirely reflect the same processes in humans subjected to the same kinds of protocols. Third, no study evaluated the effect of resistance training alone on metastatic development, although resistance training is regularly included in exercise programs for cancer patients.
3.6. Perspectives {#sec3dot6-cancers-12-00153}
-----------------
Resistance exercise is known to activate the mammalian target of rapamycin (mTOR) pathway to increase protein synthesis and, in the long term, muscle mass \[[@B58-cancers-12-00153]\]. As the mTOR pathway is also highly active in proliferating cancer cells \[[@B59-cancers-12-00153]\], it will be crucial to determine whether the activation of the mTOR pathway by resistance exercise exacerbates tumor proliferation and metastasis development. Another perspective is a better understanding of the role of myokines in metastasis development. Some myokines, such as decorin, IL-6, irisin, and oncostatin-M, have been found to play a role in cancer modulation \[[@B59-cancers-12-00153],[@B60-cancers-12-00153]\]. Due to their systemic effects, those myokines form an interesting new research area in the regulation of tumor cell metabolism. Finally, knowing that exercise-induced beneficial effects on tumor spread may vary according to the type of tumor and the step in the metastatic process \[[@B30-cancers-12-00153]\], further investigation will be required to adjust exercise prescription accordingly.
4. Materials and Methods {#sec4-cancers-12-00153}
========================
The search terms exercise \[Title/Abstract\] AND metastasis \[Title/Abstract\] returned 222 articles on PubMed on 10 February 2019. A first selection was made based on title relevance and language; articles not written in English were excluded. Sixty-two articles remained after this first selection round. This was followed by a careful reading of the abstracts, which reduced the number of articles from 62 to 31. The final selection was made by reading the 31 remaining articles in full, and 24 original articles were finally included ([Table 1](#cancers-12-00153-t001){ref-type="table"}). Studies were selected for their relevance in terms of adaptations to exercise and metastatic development. Studies with a focus on primary tumor growth and those with mixed conditions, e.g., diet and exercise combined with no exercise-only group, were excluded. Finally, two studies were excluded because they only described the protocol of an experiment and presented no results. Applying the same search methodology in Embase and Scopus yielded no additional articles.
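For transparency, the selection funnel described above can be summarized programmatically. This sketch simply re-encodes the counts reported in the text; the stage labels are paraphrases:

```python
# The screening funnel described above, as (stage, number of articles kept).
funnel = [
    ("PubMed search (title/abstract)", 222),
    ("After title/language screening", 62),
    ("After abstract screening", 31),
    ("After full-text reading (included)", 24),
]

# Print the retention rate at each screening stage.
for (stage, kept), (_, prev) in zip(funnel[1:], funnel):
    print(f"{stage}: kept {kept}/{prev} ({kept / prev:.0%})")
```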
5. Conclusions {#sec5-cancers-12-00153}
==============
This narrative review investigated whether acute and chronic physiological changes in response to physical activity regulate metastatic spread. Chronic adaptations to moderate-intensity endurance exercise (60%--70% VO~2~max) seem the most effective way to limit excessive stress and to achieve a preventive effect of exercise on metastases, whereas high-intensity exercise (\>60%--70% VO~2~max) was shown to enhance metastatic spread in some cases. Altogether, the data gathered here reinforce the importance of encouraging cancer patients to perform some form of moderate physical activity several times a week. To limit undesired effects, good knowledge of the patient's training level is important in order to establish an adapted and progressive exercise training program, with sufficient recovery between exercise sessions.
S.v.D.d.t.R. wrote the first draft, drew the figure and made the summarizing table. L.D. corrected the draft and wrote the final version that was approved by S.v.D.d.t.R. All authors have read and agreed to the published version of the manuscript.
This research received no external funding.
The authors declare no conflict of interest.
{#cancers-12-00153-f001}
cancers-12-00153-t001_Table 1
######
Design and main results of the studies selected using the search strategy.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Reference Model Tumor Type Exercise Intervention Measured Effects Main Results (vs. Sedentary) Conclusions
---------------------------------------------------------- ------------------------ ------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------ -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
In vitro
Ma et al. (2018) \[[@B14-cancers-12-00153]\] In vitro Breast cancer cells (non-metastatic and highly metastatic) In vitro oscillatory fluid flow 1 Pa (mimicking mechanistic loading during mild exercise) Osteocytes activity\ Mimicked loading ↗ osteocytes activation\ Opposite effects on breast cancer cell migration and apoptosis directly and indirectly mediated by osteocytes exposed to oscillatory fluid flow
Osteoclasts activity\ Osteocytes directly ↗ cancer cell migration and ↘ apoptosis\
IL-6 and ICAM-1 Osteocytes inhibit osteoclasts differentiation → ↘ migration and ↗ apoptosis\
Osteocytes ↘ IL-6 expression → ↘ ICAM-1 → ↘ apoptosis
Ma et al. (2018) \[[@B15-cancers-12-00153]\] In vitro Bone metastatic breast cancer cells and osteocytes Oscillatory Fluid Flow (2 h at 1 Hz and 1 peak shear stress at 1 Pa) Endothelial permeability\ ↘ Endothelial permeability\ Flow-stimulated osteocytes ↘ bone-metastatic potential of breast cancer cells by signaling through endothelial cells
Adhesion on endothelial monolayer\ ↘ Breast cancer cell adhesion onto endothelial monolayer (mediated by ICAM-1)\
Cancer cells genes expression Alteration of cancer cell genes expression via endothelial cells\
↘ MMP-9 and FZD4
Regmi et al. (2017) \[[@B16-cancers-12-00153]\] In vitro Breast cancer (+ lung metastases)\ Microfluidic circulatory system:\ CTC necrosis\ ↗ CTC death in high shear stress\ Intensive exercise may be a good strategy for generating high shear stress that can destroy CTC and prevent cancer metastasis
Ovarian and lung cancer cells\ Low shear stress of 15 dynes/cm^2^ (resting state)\ CTC apoptosis 90% necrosis within 4 h with high shear stress\
Leukemic cancer cells High shear stress of 60 dynes/cm^2^ (intensive ex) 10% apoptosis within 16--24h\
↘ Viability of highly metastatic tumor cells in prolonged high shear stress treatment
**Rodents**
Alvarado et al. (2017) \[[@B12-cancers-12-00153]\] Female rats Mammary cancer cells, ER and PR positive tumor cells 35 weeks of moderate exercise training (treadmill running), 60 min/day, 5 days/week; post-tumor injection Primary tumor and metastases development\ ↘ Mammary tumors numbers and masses\ Long-term exercise training ↘ the risk of metastatic dissemination of breast cancer\
ER and PR immunoexpression No metastasis developed in exercising animals vs. 2 developed in the sedentary group\ Anti-metastatic effects of exercise training are hormone-independent
↗ ER and PR immuno-expression in neoplasms from sedentary and exercising groups
Assi et al. (2017) \[[@B13-cancers-12-00153]\] Male mice Liposarcoma 6 weeks of spontaneous physical activity before tumor injection\ Intramuscular tumor size\ Larger intramuscular tumors\ Physical activity ↗ liposarcoma development by ↗ autophagy in tumor mass
8 weeks of voluntary wheel running post-tumor injection FABP4\ ↗ Expression of FAPB4, C/EBP-β, and PPAR-γ\
IL-6, C/EBP-β\ ↗ IL-6 levels in both active and inactive groups\
PPAR-γ\ ↗ Expression of autophagy markers Beclin-1 and GABARAPL-1
Autophagy markers
Faustino-Rocha et al. (2016) \[[@B17-cancers-12-00153]\] Female rats Mammary cancer 35 weeks of treadmill running (60 min/day, 5 days/week) VEGF-A\ ↗ VEGF-A expression\ Long-term exercise training:\
Vascularization\ ↗ Tumor vascularization\ ↗ Tumor vascularization\
Aggressiveness ↘ Aggressiveness ↘ Tumor multiplicity and aggressiveness
Hoffman-Goetz et al. (1994) \[[@B18-cancers-12-00153]\] Female mice Mammary tumor line 66 (in vivo)\ 8 weeks before tumor inoculation: forced treadmill (T), voluntary wheel running (W) or sedentary (S)\ NK cell activity\ Exercise before tumor injection:\ ↗ LAK activity with endurance training or physical activity before tumor injection\
YAC-1 (in vitro) 3 weeks post-inoculation: continuation (TT/WW), cessation (TS/WS), start activity (ST/SW) or maintenance sedentary (SS) LAK\ ↘ Basal NK activity (WS, TS)\ ↗ Natural immunity with exercise training but no significant difference in tumor burden and spread\
Pulmonary tumor density ↗ LAK activity (WS, TS)\ Exercise-induced changes depend on the tumor sensitivity to NK and LAK cells
↘ Pulmonary tumor density\
Exercise continued after tumor injection:\
↘ LAK activity (TT, WW)\
↗ Pulmonary tumor density (TT, WW)
Hoffmann-Goetz et al. (1994) \[[@B19-cancers-12-00153]\] Male mice CIRAS 1 tumor cells 9 weeks of voluntary wheel running before tumor cells injection Tumor cell lung residency\ ↘ Lung tumor cell residency in exercise-trained animals at 5 min post-injection, up to 30 min\ Exercise training ↘ tumor cell adherence to the vascular wall
Tumor cell radioactivity ↘ Tumor cell radioactivity in liver, spleen, kidney of wheel running mice 30 min and 3 h post-injection
Jadeski et al. (1996) \[[@B11-cancers-12-00153]\] Mice CIRAS 1 and CIRAS 3; Beige mutation (impaired NK) 9 weeks of treadmill running (30 min/day, 5 days/week) before tumor cell injection Tumor cell retention in lungs; NK cell contribution to exercise-mediated effects ↘ Tumor retention in trained (mutated or not) mice; ↘ Tumor cell retention in running CIRAS 1 mice vs. sedentary; No effect of training on CIRAS 3 retention Exercise training ↗ innate natural immune responses; Physical activity is protective early in the metastatic cascade, before selection of a more aggressive and metastatic phenotype
Jones et al. (2010) \[[@B20-cancers-12-00153]\] Female mice Human mammary adenocarcinoma Voluntary wheel running Tumor volume; Tumor vascularization; Hypoxia; Tumor energy status No difference in tumor volume; ↗ Tumor vascularization; Higher intratumoral hypoxia levels (HIF-1); No difference in ATP, PGC-1α and AMPK levels Aerobic exercise ↗ intratumoral vascularization → normalization of tumor microenvironment → ↘ rate of metastasis and ↗ cancer therapy efficiency
Jones et al. (2012) \[[@B21-cancers-12-00153]\] Male mice Murine prostate cancer 8 weeks of voluntary wheel running after tumor inoculation Tumor blood perfusion; HIF-1 and angiogenesis; Prometastatic mRNA levels; Tumor MAPK and PI3K signaling; Circulating proinflammatory cytokines + metabolites ↘ Expression of pro-metastatic genes in exercising animals; Improved tumor vascularization associated with ↗ intratumoral levels of HIF-1α and VEGF; ↗ Expression of metabolic genes in tumors; ↘ Plasma angiogenic cytokines Exercise-induced stabilization of HIF-1α upregulates VEGF expression. This led to physiological tumor vascularization with a shift toward suppressed metastasis.
MacNeil et al. (1993) \[[@B22-cancers-12-00153]\] Mice CIRAS 1 tumor cells with pulmonary metastases Voluntary wheel running (W) or sedentary (S) 9 weeks pre- and 3 weeks post tumor inoculation; four groups: WW, WS, SW, and SS Lung tumor number; Natural immunity (NK cell activity, splenic serine esterase) ↘ Number of lung metastases in WW and WS; No difference in splenic esterase activity, nor in NK cell activity Exercise training before, but not after, tumor inoculation ↘ number of lung metastases
MacNeil et al. (1993) \[[@B23-cancers-12-00153]\] Male mice and in vitro CIRAS 1 tumor cells; YAC-1 tumor cells 9 weeks before tumor injection: continuous access to wheel, or treadmill exercise 30 min/day, 5 days/week; no exercise for 3 weeks post-tumor injection Lung metastasis; Lung tumor cell retention; NK cell spleen number and cytotoxicity No change in lung metastases incidence; ↗ Number of tumors with running distance; Small ↘ tumor cell retention in lung; ↗ Splenic NK cell cytotoxicity but no change in number Exercise ↗ the development of metastases; Low clinical impact of enhanced immunity on tumor growth and spread
MacNeil et al. (1993) \[[@B24-cancers-12-00153]\] Male mice CIRAS 3 tumor cells; YAC-1 tumor cells 9 weeks before tumor injection, 30 min/day, 5 days/week: voluntary wheel running or forced treadmill running Citrate synthase activity; Tumor cell retention; In vivo and in vitro NK cell cytotoxicity Citrate synthase activity: treadmill running \> wheel running = sedentary; ↘ Lung tumor cell retention; ↗ In vitro splenic NK cytotoxicity Exercise training ↗ natural immunity and NK cell cytotoxicity against tumor cells and ↘ pulmonary tumor retention in wheel and treadmill running mice
Murphy et al. (2004) \[[@B25-cancers-12-00153]\] Male mice B16 melanoma cells 6 days of treadmill running (1 h/day) before tumor cell injection Metastatic spread; Macrophage cytotoxicity ↘ Amount of lung metastases; ↗ Macrophage cytotoxicity Short-term moderate-intensity exercise training ↘ metastatic spread by ↗ macrophage function
Smeda et al. (2017) \[[@B26-cancers-12-00153]\] Female mice Orthotopic breast cancer cells 4 weeks of voluntary wheel running after cancer injection Tumor volume and number; NO production; Platelet activation ↗ Number of pulmonary metastases associated with ↘ NO production in aorta; No significant ↘ in systemic NO bioavailability; No change in plasma P-selectin concentration and platelet activation markers Pro-metastatic effect of voluntary exercise associated with lower NO production; Potential explanations: ↗ ROS and RNS production, pseudo-vessel formation, untrained animals
Tsai et al. (2013) \[[@B27-cancers-12-00153]\] Male mice Lewis lung carcinoma 4 weeks of aerobic training, 5×/week, post-inoculation: interval (6 × 10 min at 65% maximum speed) or continuous (45 min at 54%--58% VO~2~max) VEGF; Tumor growth and metastasis ↗ Serum VEGF levels in both exercising groups but not in tumor VEGF levels; No difference between exercising conditions; No difference in weight and volume of lung and liver tumors No effect of moderate exercise on tumor growth despite higher plasma VEGF levels
Wolff et al. (2014) \[[@B28-cancers-12-00153]\] Male mice Lewis lung carcinoma 4 weeks of voluntary wheel running before tumor inoculation Oxidative status of brain microvessels; Anti-oxidative enzyme gene expression; Rho-GTPase activation ↘ Superoxide levels in high running group; ↘ Antioxidant capacity with tumor cell infusion + exercise; ↘ Levels of Rho-GTPase activation in high running group Exercise can protect microvessels from blood-brain barrier instability by decreasing Rho activation
Wolff et al. (2015) \[[@B29-cancers-12-00153]\] Male mice Murine Lewis lung carcinoma 5 weeks of voluntary wheel running before tumor inoculation Tight junction proteins: occludin, ZO-1, claudin-5 48 h post tumor cell injection: maintained expression of occludin and ZO-1 in exercised tumor mice, ↗ claudin-5 expression with exercise; 3 weeks post tumor cell injection: ↗ basal levels of occludin and claudin-5 ↗ Blood-brain barrier regulation by exercise; ↘ Extravasating tumor cells in the brains of exercised mice
Yan et al. (2011) \[[@B30-cancers-12-00153]\] Male mice Lewis lung carcinoma 9 weeks of non-motorized voluntary running before tumor inoculation; 2 weeks of voluntary running after tumor inoculation Plasma angiogenic cytokines (PDGF, VEGF, and MCP-1); Insulin, leptin, adiponectin No differences in number and size of metastases; ↘ Plasma insulin and leptin levels; ↗ Adiponectin levels; ↗ PDGF but not VEGF and MCP-1 No effect of voluntary running on metastasis; Voluntary running may favorably affect energy expenditure and adipogenesis (against obesity)
Zhang et al. (2016) \[[@B31-cancers-12-00153]\] Mice and in vitro Liver cancer transplantation with lung metastasis 9 weeks of regular moderate (8 min/day) or overload (16 and 32 min/day) swimming; swimming before and after cancer inoculation Lung metastases ratio; DR2 activation; TGF-β1 expression ↘ Mean lung metastases ratio in the 8 min group and ↗ in the 16 and 32 min groups; DR2 activation ↘ cancer cell proliferation and invasion; Moderate swimming ↘ EMT induced by TGF-β1 Divergent activation of the dopamine system may explain the opposite effects on tumor growth and metastasis between moderate and overload swimming
**Human**
Jones et al. (2013) \[[@B32-cancers-12-00153]\] Women Breast adenocarcinoma 20--45 min cycle ergometer at 55% to 100% of VO~2~max, 3×/week, 12 weeks Tumor blood flow; Tumor hypoxia; CEP; Plasma cytokines and angiogenic factors IL-1β, IL-2, PLGF ↘ Tumor hypoxia and enhanced tumor vascularization; ↗ CEP, PLGF; ↘ IL-1β, IL-2; No change in tumor phenotype markers (CD-31, HIF-1, GRP78); ↘ Gene expression of NF-κB signaling and inflammation Aerobic exercise training can modulate tumor progression and metastasis: ↗ provascular adaptations; normalized tumor microenvironment
Wang et al. (2007) \[[@B33-cancers-12-00153]\] Men Nasopharyngeal carcinoma (NPC) Bicycle ergometer: moderate-intensity exercise (60% VO~2~max for 40 min); high-intensity exercise (up to 100% VO~2~max); warm-up exercise (20 min at 60% VO~2~max → 30 min rest → VO~2~max) Platelet-NPC aggregation; Platelet-promoted tissue factor; TFPI; MMP-2 and -9; Tissue inhibitor of MMP-1 High-intensity exercise: ↗ binding affinity to fibrinogen, ↗ tissue factor expression/activity, ↗ tissue inhibitor of MMP release, ↘ TFPI release, ↘ MMP-2 and -9 activities; warm-up ↘ these effects; Moderate-intensity exercise: ↘ formation of platelet-NPC aggregates High-intensity exercise: ↗ aggregation and coagulation, ↘ MMP bioactivity; ↘ effect of HIE after warm-up; Moderate-intensity exercise: minimizes the risk of thrombosis induced by platelet-NPC interactions
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Characteristics of tumor cell types: CIRAS 1: sensitive to NK-cell lysis, less aggressive than CIRAS 3; Mammary tumor line 66: highly metastatic, NK-cell resistant. C/EBP-β: CCAAT-enhancer-binding proteins; CEP: circulating endothelial progenitor cells; CTC: circulating tumor cells; DR: dopamine receptor; EMT: epithelial-mesenchymal transition; ER: estrogen receptor; FABP4: fatty acid-binding protein 4; FZD4: frizzled-4; HIE: high-intensity exercise; HIF-1: hypoxia-inducible factor 1; ICAM-1: intercellular adhesion molecule 1; IL: interleukin; LAK: lymphokine-activated killer; MCP-1: monocyte chemotactic protein-1; MMP: matrix metalloproteinases; NK: natural killer; NO: nitric oxide; NPC: nasopharyngeal carcinoma; PDGF: platelet-derived growth factor; PPAR-γ: peroxisome proliferator-activated receptor gamma; PR: progesterone receptor; RNS: reactive nitrogen species; ROS: reactive oxygen species; TFPI: tissue factor pathway inhibitor; TGF-β1: transforming growth factor β-1; VEGF: vascular endothelial growth factor; ZO: zonula occludens.
Climate Justice (Part I)
Climate change affects all nations and people throughout the world. Within the wider climate change discourse, however, increasing attention is being drawn to the equally significant issue of climate justice. In the first of this two-part series on climate justice, we explore what climate justice is and who is most affected by it.
What Is Climate Justice?
Whilst discussions of climate change tend to focus on the crisis through a scientific and environmental lens, climate justice shifts the focus, framing the climate emergency as an ethical and political issue instead. Rather than simply considering the physical and environmental impact of climate change upon the planet, climate justice seeks to shed light on the causes and effects of climate change from a social and human perspective as well. For a growing number of people, climate justice is not merely terminology, but a movement.
Although climate change affects everyone in the world, the level of impact is not fairly distributed across all nations, with the poorest people and communities facing the brunt of the crisis, despite contributing the least to it. Climate justice, therefore, seeks to balance the scales.
Through the movement, it aims to repair the damage caused to the poorest communities and the biggest victims of climate change, whilst simultaneously holding those responsible to account. It acknowledges that climate change can exacerbate existing inequalities and inequitable social conditions and seeks to promote strategies and policies to address these, in order to create a fairer, more balanced society moving forward.
Moreover, the climate justice movement aims to ensure that all people in the world are able to prepare for, respond to and recover from the impacts of climate change when faced with them. Whilst at present different groups within society are disproportionately affected, the goal of climate justice is to ensure that all people and nations are given increased access to universal resources.
Additionally, climate justice highlights the fact that current climate injustice is not due to a singular factor. Instead, it emphasises that existing vulnerability is due to a combination of interconnected factors and struggles, including social, economic, environmental and cultural factors, as well as institutional practices and policies.
Who Is Most Affected by Climate Injustice?
According to research by Oxfam, the world’s richest 10% of people cause 50% of the world’s emissions. Similarly, the world’s wealthiest countries have also contributed the most towards climate change. Although at present many of these countries are attempting to reduce their carbon emissions and are in the process of making commitments towards zero net emissions, many were historically the biggest polluters. Yet, despite this, it is often those individuals, groups and countries which are the least responsible who often feel the most acute effects.
According to research, those countries in the global south, as well as low-income communities in the industrialised North are the ones most likely to suffer the harshest consequences of climate change. Similarly, typically marginalised communities are the most likely to face some of the worst consequences of climate change, including low-income groups and individuals, people of colour, women, disabled people and indigenous groups.
Whilst richer nations and communities are able to retreat to safety and rebuild following natural disasters, poorer communities are less likely to have the same level of access to funding and resources, therefore suffering the impacts more significantly. For instance, many low-income communities may not be able to evacuate when natural disasters strike, or have access to funds and resources to help them rebuild their homes. In many cases such groups may already have been the subject of existing discriminatory state and corporate policies, which limit their ability to prepare and cope with such situations when they arise.
For poorer nations, droughts and extreme weather events can lead to increased food insecurity and lack of access to clean water, housing and medical care. Similarly, rising sea levels and melting glaciers are increasingly threatening coastal communities, who often rely on the sea for their livelihood. In many areas, indigenous communities are also being forced to abandon their land and move to higher ground as a result of flooding or deforestation. Diseases which spread through crops, causing them to fail are also extremely problematic, particularly for poorer nations where many of their peoples rely upon farming to survive.
Low-income groups in richer countries are also negatively affected. Often, they tend to live in lower-quality accommodation or cramped conditions, which negatively impact on their health. Such groups are also less financially able to transition towards more sustainable living practices and energy use. Yet, they are less likely to contribute towards carbon emissions. Proportionally more people within low-income or vulnerable groups are likely to use public transport, less likely to fly and tend to have lower household emissions. Yet, because of their socio-economic circumstances, they are less able to adequately prepare for, respond to and recover from negative circumstances relating to climate change when they arise.
Similarly, younger generations and the elderly are more likely to be negatively affected by climate change. Many elderly people do not have the financial or physical means to adapt to changing circumstances. By contrast, younger generations are the ones most likely to have to bear the brunt of past inaction on climate change.
The climate justice movement not only highlights these inequalities, but proposes ways in which we, as a global society, can mobilise and make changes towards a fairer world. In Climate Justice (Part II), we will examine who is responsible for climate injustice and what steps we can take to rectify the problem for future generations.
An Invitation...
Climate justice is a topic which affects all of us. Inaction in tackling climate change is no longer an option, but how we address the issue and what changes we make are very much dependent on a range of factors. In many cases, individuals may wish to make a significant change or contribution, but not have the means required at their disposal.
This week, we invite you to assess your own carbon footprint and circumstances. Tackling climate change may feel overwhelming, but even individual small changes can make a big overall impact.
Consider one thing which you could do to help combat climate change and, in so doing, combat climate injustice. It may be something big, such as donating to a climate change organisation or switching to a green energy supplier or buying an electric car. Or perhaps, it is something smaller, like reducing your use of plastic bags, reducing your water usage or walking instead of driving.
Or perhaps you can begin by simply finding out more about climate change and climate justice. You could research the issue more deeply, sign up to a newsletter or online group or forum. You could also share your findings with family and friends, or learn about it with your children. Educating yourself and others is an important and valuable step in the process towards developing new habits and moving towards more sustainable living practices.
We’d love to hear your suggestions and thoughts! Connect with us in the comments below or on our Facebook page and, as always, please share this post! | https://www.dunami-somatics.com/post/climate-justice-part-i |
###### Strengths and limitations of this study
- This systematic review aims to objectively estimate the acceptance, adherence and dropout rates of people with chronic obstructive pulmonary disease (COPD) enrolled in telehealth interventions, and the associated variables that potentially impact or are impacted by these rates.
- This systematic review will update existing knowledge on trial-related, patient-related and intervention-related factors potentially influencing acceptance, adherence and dropout rates.
- Exploring acceptance, adherence and dropout rates in COPD telehealth care and the associated factors will allow future researchers to design prospective clinical trials, while increasing the validity and generalisability of their results.
- The exclusion of papers written in languages other than English might leave relevant studies out of the review.
Introduction {#s1}
============
According to the Global Initiative for Chronic Obstructive Lung Disease (GOLD), chronic obstructive pulmonary disease (COPD) is a common disease characterised by persistent limitation of airflow to the lungs. It can be prevented and treated; furthermore, it is progressive in nature and associated with enhanced chronic inflammatory responses (in the airways and lungs) to noxious gases.[@R1] Airflow limitation in COPD can lead to respiratory exacerbations, defined as an acute worsening of respiratory symptoms.[@R1] Exacerbations can negatively impact an individual's health status, often resulting in hospitalisation.[@R2] COPD is a major public health problem, and individuals with COPD require appropriate management strategies to minimise the likelihood of hospitalisation.[@R4]
Telehealth refers to the use of electronic information and communication technologies to support distance healthcare, allowing healthcare professionals and long-distance patients to exchange information and enabling access to healthcare services. Various terms are used throughout the medical industry to reference specific applications and use cases for telehealth---these are presented in [table 1](#T1){ref-type="table"}.[@R5] For example, telehealth interventions can be used to deliver care to people with COPD and can help to detect exacerbations at an early stage, minimising the potential for emergency admissions and facilitating self-management.[@R5] Telehealth is also used for remote monitoring of a patient's clinical data, such as their vital signs; this enables healthcare teams to identify disease deterioration at an early stage and provide the requisite care in a timely manner, helping individuals manage their diseases and facilitating early detection of disease exacerbation.[@R9] There is growing evidence that telehealth may be a useful tool for minimising hospital admissions due to respiratory exacerbations, particularly for individuals who are constrained by geographical barriers or have limited access to healthcare services.[@R5] Clinical trials have shown that individuals with COPD have positive attitudes towards participating in telehealth and that telehealth can promote patients' independence in self-management.[@R11] However, the precise impact of telehealth on avoiding exacerbation and reducing hospital readmissions remains inconclusive.[@R5] This uncertainty may be due to non-adherence or partial adherence to intervention techniques, as well as the withdrawal of participants over the course of previous studies.[@R18] Dropout rates for telehealth vary across clinical trials.[@R6] It is unclear which variables are most strongly associated with non-adherence and withdrawal, although possible factors may be related to participant characteristics, intervention characteristics, and the context and environment in which the intervention is delivered. Understanding the characteristics of individuals with COPD, the features of the interventions undertaken, and the environment of clinical trials is essential for reducing dropout rates in future studies. Such understanding will help with designing prospective clinical trials, while also increasing the validity and generalisability of their results.[@R23] Evaluating the reasons that prevent individuals with COPD from enrolling in and completing telehealth interventions may help clinicians appropriately tailor interventions to individuals' needs and limit dropout rates.[@R18] Moreover, researchers can explore individuals' preferences and use them to develop more desirable and feasible telehealth interventions.
###### Telehealth applications and definitions

| Terminology | Definition |
| --- | --- |
| Telehealth | Using electronic information and communication technologies to support distance healthcare, allowing healthcare professionals and long-distance patients to exchange information and supporting access to healthcare services.[@R5] |
| Telemonitoring | Using electronic technologies, equipment and sensors to transfer clinical data from patient settings to healthcare providers in clinical settings.[@R5] |
| Telemedicine | Using e-health and communications networks for the delivery of healthcare services and medical education from one geographical location to another.[@R30] |
| Telehomecare | Using electronic information and communication technologies to support care and treatment between a patient's home and professional healthcare settings.[@R31] |
| Teleconsultation | Using videoconferencing and webcams to connect the healthcare provider with patients, allowing the healthcare provider to assess, diagnose and treat patients.[@R5] |
| Tele-education | Using web-based platforms to educate patients about the management of their disease.[@R5] |
| Telehealth pulmonary rehabilitation | Using telehealth to deliver pulmonary rehabilitation to COPD patients via communication technologies, maintaining connections between patients and healthcare professionals.[@R5] |
| Dropout rate | The number of participants who drop out divided by the number of participants who consented to participate.[@R32] |
| Acceptance rate | The number of participants who consented to participate divided by the number of eligible participants.[@R32] |
| Adherence | The ability to measure telehealth use and observe the intention to use telehealth technology.[@R33] |
Objectives {#s2}
==========
The objectives of this systematic review are to:

1. Estimate acceptance, adherence and dropout rates in trials of telehealth interventions (including randomised controlled trials (RCTs), crossover trials and pre-post studies).
2. Identify the reasons for dropout from the intervention.
3. Estimate the impact of trial-related factors, sociodemographic factors and intervention-related factors on acceptance, adherence and dropout rates.
4. Estimate the extent to which acceptance, adherence and dropout rates affect patients' outcomes.
Methods {#s3}
=======
This systematic review will be conducted according to the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P).
Patient and public involvement {#s3a}
------------------------------
Patients and/or the public were not involved in this systematic review.
Inclusion criteria {#s3b}
------------------
1. Study type: randomised or non-randomised control trials, observational single arm pre-post trials, and crossover clinical trials.
2. Population: studies including individuals diagnosed with COPD based on reported forced expiratory volume in one second as a percentage of predicted (FEV~1~%) will be considered for this review.
3. Type of intervention: this review includes any information technology tool designed for the clinical support of patients with COPD involving the remote exchange of data between a patient and a healthcare professional. This includes, for example, telehealth, telecare, telehomecare, e-health, telemonitoring, telerehabilitation, telemedicine, home monitoring, digital monitoring, web-based monitoring, or internet-based monitoring as part of a COPD-management plan.
4. Type of outcome: outcomes include health-related quality of life, adherence to the action plan, exacerbations, duration of hospital stay, hospitalisation or utilisation of health services (including COPD related cost), and exercise capacity.
Exclusion criteria {#s3c}
------------------
1. Trials not published in English.
2. Studies that do not describe the telehealth intervention researched, including delivery methods, mode of administration and frequency of data transmissions.
3. Studies that do not report the number of individuals who were approached, who gave their consent, and who dropped out.
Search strategy {#s4}
===============
Electronic databases {#s4a}
--------------------
A systematic search of the following databases from earliest records to November 2018 will be undertaken to identify relevant articles: CINAHL; Medline (Ovid); Cochrane Library and Embase. The following Medical Subject Headings (MeSH terms), subject headings and keywords or combinations thereof will be used: telecare; telehomecare; telehealth; e-health; telemonitoring; telerehabilitation; telemedicine; home monitoring; digital monitoring; web-based monitoring; internet-based monitoring; Chronic Obstructive Pulmonary Disease; Chronic Obstructive lung disease and COPD. The search strategy was developed in collaboration with a health sciences librarian, to ensure the involvement of appropriate and necessary keywords in the review. Keywords and subject terms will be customised for each database. Further, all words with the prefix 'tele-' will be searched both with and without a hyphen (eg, both 'tele-monitoring' and 'telemonitoring'). The search strategies from Medline (Ovid) are presented in online [supplementary appendix 1](#SP1){ref-type="supplementary-material"}.
10.1136/bmjopen-2018-026794.supp1
Manual literature search {#s4b}
------------------------
We will perform manual searches of reference lists of all relevant primary studies and systematic reviews to identify any additional studies that were not captured by our original search.
Reference manager {#s4c}
-----------------
All articles will be imported to EndNote software and any duplicates removed.
Search procedures {#s4d}
-----------------
The search will be performed by two team members (SA), after which all articles will be imported to EndNote V.7.7 and any duplicates removed. All article titles and abstracts will be screened by two independent reviewers. A manual search of the reference lists of relevant studies will be undertaken to identify any additional articles that were missed by the database search but that might be suitable for inclusion in the review. Subsequently, a full-text review of all the included articles will be carried out. Disagreements between reviewers will be resolved through discussion. If no consensus can be reached, a third reviewer's decision will be considered. Any study that does not meet the inclusion criteria will be excluded and the reasons for exclusion recorded according to the PRISMA flowchart.
Study selection and data extraction {#s4e}
-----------------------------------
A data extraction form will be created using an Excel sheet. Two independent reviewers will perform the data extraction. First, reviewers will pilot the data extraction form based on ten included studies. Second, any disagreement between reviewers at this stage will be resolved by consensus. If no consensus can be reached, a third reviewer will make the decision. The first reviewer will then start extracting data. The second reviewer will check the consistency of the data and identify any errors. In case information is missing from an included study's published manuscript, its authors will be contacted and asked for clarification.
Data extraction and data management {#s4f}
-----------------------------------
Data related to the study characteristics, population characteristics and intervention characteristics, as defined in the Intervention Complexity Assessment Tool for Systematic Reviews, will be extracted.[@R25]
Study characteristics: authors' names; year of publication; country; research design, as well as recruitment methods.
Population characteristics: age; gender; level of education; GOLD grade and/or FEV~1~ as a percentage of predicted (FEV~1~%); smoking history; number of COPD patients who consented to participate, were approached, dropped out and completed the study, as well as reasons for dropout.
Intervention characteristics: settings, methods, frequency and components of telehealth (active elements, targeted behaviour, targeted users, the degree of tailoring, health professional assistance) and duration of intervention.
Outcomes {#s5}
========
All reported outcomes of COPD will be extracted, as will the effect size (ES) of the telehealth intervention on these outcomes. The ES will be calculated if it is not reported by the author(s). ES calculation will be based on results from the first post-interventional evaluation, which reflects the earliest impact of telehealth interventions on outcomes. Any results after the first post-interventional evaluation (eg, results from multiple follow-up points) will not be considered in the ES calculation. If a study reports more than one outcome, the ES for the main outcome will be included in the analysis.
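The protocol leaves the choice of ES metric to match how each trial reports its outcomes. Purely as an illustration, assuming a continuous outcome reported as group means and standard deviations at the first post-interventional evaluation, a standardised mean difference (Hedges' g) could be computed as in the sketch below; the function and all numbers are hypothetical and not taken from any included study.

```python
import math

def hedges_g(mean_tx, sd_tx, n_tx, mean_ctl, sd_ctl, n_ctl):
    """Standardised mean difference (Hedges' g) at the first
    post-interventional evaluation. Illustrative only: the metric
    used in practice will depend on how each trial reports outcomes."""
    # Pooled standard deviation across the two arms
    sp = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctl - 1) * sd_ctl**2)
                   / (n_tx + n_ctl - 2))
    d = (mean_tx - mean_ctl) / sp          # Cohen's d
    j = 1 - 3 / (4 * (n_tx + n_ctl) - 9)   # small-sample correction factor
    return d * j

# Hypothetical example: quality-of-life scores at first follow-up
print(round(hedges_g(62.0, 9.5, 40, 57.5, 10.2, 38), 3))
```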
Outcomes {#s5a}
--------
Outcomes extracted from each study:
All primary and secondary outcomes defined by each study will be extracted. These include, but are not limited to:
Hospitalisation: admissions due to exacerbations and causes of hospitalisation will be reported. Attention shall be paid to differences between count and dichotomous data (eg, the count of participants in each group who experience at least one exacerbation event vs number of events per intervention group).
Exacerbation rate is a commonly reported outcome.[@R26] As exacerbations can be reported in different ways, the data collection allows for the following numbers to be recorded: the number of exacerbations or the exacerbation rate (which may also be classified by patient disease severity), all-cause mortality and the number of patients per study group who died during the study.
Adherence to the action plan: (including any measurement mentioned by the authors to report adherence to the action plan---for example, adherence to intervention, adherence to physiological monitoring, adherence to symptom monitoring, adherence to medication, adherence to exercise and adherence to telehealth and/or telemonitoring).
Health-related quality of life: disease-specific or non-disease-specific quality of life reported by a validated instrument.
Physical activity measurements (any type reported by a validated measurement system).
Outcomes of interest for this review:

The outcomes of this review are acceptance, adherence and dropout rates. When these outcomes are not reported in the original studies, we will calculate the rates as follows:
The acceptance rate will be calculated by dividing the total number of participants who accepted, agreed and consented to participate in the study by the number of participants who were approached for involvement in the telehealth intervention. The adherence rate will be calculated as the total number of participants who completed the telehealth intervention according to the study protocol divided by the number who started the intervention. The dropout rate will be calculated as the number of participants who withdrew from or did not continue with the intervention divided by the number of participants who consented to participate in the study. All rates will be presented using an overall average.
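As a minimal sketch of these three calculations, assuming simple per-study counts (all numbers below are hypothetical), the rates could be computed as follows:

```python
def review_rates(approached, consented, started, completed):
    """Acceptance, adherence and dropout rates as defined in this protocol.
    All counts are per study; the figures used below are hypothetical."""
    acceptance = consented / approached   # consented / approached
    adherence = completed / started       # completed per protocol / started
    # Withdrawals approximated as consented minus completed; in practice,
    # each study's reported withdrawal count would be used where available.
    dropout = (consented - completed) / consented
    return acceptance, adherence, dropout

# Hypothetical study: 120 approached, 80 consented, 75 started, 60 completed
acc, adh, dro = review_rates(120, 80, 75, 60)
print(f"acceptance={acc:.2f}, adherence={adh:.2f}, dropout={dro:.2f}")
```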
Risk of bias assessment {#s5b}
-----------------------
Two team members (SA and RA) will independently assess the risk of bias for each study included in the review; the Cochrane Collaboration Risk of Bias criteria will be used for randomised clinical trials, and the Scottish Intercollegiate Guidelines Network (SIGN) checklist will be used for observational studies. Reviewers will independently report justifications and comments for their decisions. A third team member will be consulted to resolve any discrepancies. The AMSTAR II tool will be used to assess the risk of bias for the systematic review.
Data analysis {#s6}
=============
Statistical Analysis System (SAS) software will be used to run regression models. Possible variables associated with the rates will be categorised and tested using a univariate analysis model. Subsequently, a random-effects meta-regression analysis will be used to estimate the effects of participant, study and intervention characteristics on acceptance, adherence and dropout rates. A separate model will be fitted for each rate. If we are unable to perform a meta-analysis, we will synthesise and summarise the results narratively.
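The protocol specifies SAS for the modelling. Purely as a hedged sketch of the random-effects meta-regression idea (inverse-variance weighting with an additive between-study variance), the Python fragment below uses hypothetical study-level numbers and a deliberately simplified variance estimator; a real analysis would use dedicated meta-regression routines (eg, PROC MIXED in SAS or metafor in R).

```python
import numpy as np

# Hypothetical per-study data: logit-transformed dropout rates, their
# sampling variances, and one candidate moderator (intervention length, weeks)
logit_rate = np.array([-1.10, -0.65, -1.40, -0.30, -0.95])
var = np.array([0.040, 0.055, 0.030, 0.070, 0.050])
weeks = np.array([8, 26, 12, 52, 16])

# Fixed-effect weighted least squares fit first
X = np.column_stack([np.ones_like(weeks, dtype=float), weeks])
w_fe = 1.0 / var
beta_fe = np.linalg.solve(X.T @ (w_fe[:, None] * X), X.T @ (w_fe * logit_rate))
resid = logit_rate - X @ beta_fe
q = np.sum(w_fe * resid**2)
df = len(logit_rate) - X.shape[1]

# Simplified DerSimonian-Laird-style estimate of between-study variance
c = np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each study's sampling variance
w_re = 1.0 / (var + tau2)
beta_re = np.linalg.solve(X.T @ (w_re[:, None] * X), X.T @ (w_re * logit_rate))
print("intercept, slope per week:", beta_re)
```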
Dealing with missing data {#s6a}
-------------------------
Authors will be contacted to obtain any unreported data.
Discussion {#s7}
==========
This systematic review aims to objectively estimate the acceptance, adherence and dropout rates of COPD populations enrolled in telehealth and the associated variables that might affect or be affected by these rates. It will help identify the extent to which associated variables can be used for an improved design of clinical trials, to suggest the characteristics of associated target populations, and to recommend elements for inclusion in telehealth intervention to support self-management.
To the best of our knowledge, this will be the first systematic review estimating acceptance, adherence, and dropout rates of COPD populations participating in telehealth, as well as the associated factors influencing these rates. Previous systematic reviews were unable to provide information about effective elements contributing to better acceptance, adherence and dropout rates using meta-regression analysis; in contrast, the current study will try to explore elements of telehealth that impact acceptance, adherence and dropout rates.[@R27] Our systematic review will analyse the literature using meta-analysis, and in doing so provide the advantage of having an opportunity to investigate and understand the correlation between pertinent factors and acceptance, adherence and dropout rates. We will provide specific information about the trials' characteristics (RCTs vs non-RCTs), population characteristics (ie, mild severity vs moderate severity) and intervention characteristics (ie, primary care settings vs specialty care settings), as well as how such information may facilitate users' adherence to telehealth interventions.
Furthermore, existing evidence on acceptance, adherence and dropout rates will inform methods for designing future telehealth projects focusing on COPD. This study may also be beneficial for the management of grants for research in the field.[@R29] It will contribute to future research by identifying the target populations among which telehealth are accepted, and identify feasible interventions. Finally, this systematic review will help tailor technological interventions to more effectively meet the needs of COPD patients.
Ethics and dissemination {#s7a}
------------------------
This systematic review requires no ethics approval. This research will use no confidential or personal patient data. Findings will be disseminated through publication in a peer-reviewed specific journal.
Supplementary Material
======================
###### Reviewer comments
###### Author\'s manuscript
The authors thank the reviewers of the manuscript for their constructive feedback.
**Contributors:** SMA, SA, TF-J and RA developed the idea and designed the study protocol. SMA, SA designed and wrote the search strategy and the first protocol draft. SMA, SA and RA planned the data extraction and statistical analysis. SA, TF-J, RA provided critical insights. All authors have approved and contributed to the final written manuscript.
**Funding:** The main author disclosed receipt of the following financial support for the research, authorship and/or publication of this article: this study was supported by a scholarship from Umm Al Qura University in Saudi Arabia. Sara Ahmed and Tania Janaudis-Ferreira are supported by Fonds de recherche du Québec – Santé (FRQS) career awards.
**Competing interests:** None declared.
**Provenance and peer review:** Not commissioned; externally peer reviewed.
**Patient consent for publication:** Not required.
As I am preparing my presentation for the upcoming PDT Europe 2017 conference in Gothenburg, I was reading relevant experiences to a data-driven approach. During PDT Europe conference we will share and discuss the continuous transformation of PLM to support the Lifecycle Model-Based Enterprise.
One of the direct benefits is that a model-based enterprise allows information to be shared without the need to convert documents to a particular format, saving costs and resources and bringing unprecedented speed of information availability, like what we are used to having in a modern digital society.
For me, a modern digital enterprise relies on data coming from different platforms/systems and the data needs to be managed in such a manner that it can serve as a foundation for any type of app based on federated data.
This statement implies some constraints. It means that data coming from various platforms or systems must be accessible through APIs / Microservices or interfaces in an almost real-time manner. See my post Microservices, APIs, Platforms and PLM Services. Also, the data needs to be reliable and understandable for machine interpretation. Understandable data can lead to insights and predictive analysis. Reliable and understandable data allows algorithms to execute on the data.
Classical ECO/ECR processes can become highly automated when the data is reliable and the company’s strategy is captured in rules. In a data-driven environment, there will be much more granular data that requires some kind of approval status. We cannot do this manually anymore, as it would kill the company: too expensive and too slow. Therefore, the need for algorithms.
What is understandable data?
I tried to avoid as long as possible academic language, but now we have to be more precise as we enter the domain of master data management. I was triggered by this recent post from Gartner: Gartner Reveals the 2017 Hype Cycle for Data Management. There are many topics in the hype cycle, and it was interesting to see Master Data Management is starting to be taken seriously after going through inflated expectations and disillusionment.
This was interesting as two years ago we had a one-day workshop preceding PDT Europe 2015, focusing on Master Data Management in the context of PLM. The attendees at that workshop coming from various companies agreed that there was no real MDM for the engineering/manufacturing side of the business. MDM was more or less hijacked by SAP and other ERP-driven organizations.
Looking back, it is clear to me why MDM was not a real topic in the PLM space at that time. We were, and still are, too focused on information stored in files and documents. The only area touched by MDM was the BOM and Part definitions, as these objects also touch the ERP and After Sales domains.
Actually, there are various MDM concepts, and I found an excellent presentation from Christopher Bradley explaining the different architectures on SlideShare: How to identify the correct Master Data subject areas & tooling for your MDM initiative. In particular, I liked the slide below, as it comes close to my experience in the process industry.
Here we see two MDM architectures: the one on the left is driven from ERP; the one on the right could be based on the ISO-15926 standard, as the process industry has worked for over 25 years to define a global exchange standard and data dictionary. The process industry was able to reach such a maturity level due to the need to support assets for many years across the lifecycle and its relatively stable environment. Other sectors are less standardized, or depend so much on new concepts that it would be hard to have an industry-specific master.
PLM as an Application Specific Master?
If you were currently to start an MDM initiative in your company and look for providers of MDM solutions, you would discover that their value propositions are based on technology capabilities, bringing data together from different enterprise systems in the way the customer thinks it should be organized. This is more a toolkit approach than an industry approach. And in the cases where there is an industry approach, it is rarely related to manufacturing companies. Remember my observation from 2015: manufacturing companies do not have MDM activities related to engineering/manufacturing because it is too complicated and too diverse, with too many documents instead of data.
Now with modern digital PLM, there is a need for MDM to support the full digital enterprise. Therefore, when you combine the previous observations with a recent post on Engineering.com from Tom Gill: PLM Initiatives Take On Master Data Transformation, I started to come to a new hypothesis:
For companies with a model-based approach that has no MDM in place, the implementation of their Product Innovation Platform (modern PLM) should be based on the industry-specific data definition for this industry.
Tom Gill explains in his post the business benefits and values of using the PLM as the source for an MDM approach. In particular, in modern PLM environments, the PLM data model is not only based on the BOM. PLM now encompasses the full lifecycle of a product instead of initially more an engineering view. Modern PLM systems, or as CIMdata calls them Product Innovation Platforms, manage a complex data model, based on a model-driven approach. These entities are used across the whole lifecycle and therefore could be the best start for an industry-specific MDM approach. Now only the industries have to follow….
Once data is able to flow, there will be another discussion: who is responsible for which attributes? Bjørn Fidjeland from plmPartner recently wrote: Who owns what data when …? The content of his post is relevant; I would only change the title to Who is responsible for what data when, as I believe that in a modern digital enterprise there is no ownership anymore – it is about sharing and responsibilities.
Conclusion
Where MDM in the past did not really focus on engineering data due to the classical document-driven approach, in modern PLM implementations the Master Data Model might be based on industry-specific data elements, managed and controlled through the PLM data model.
Do you follow my thoughts / agree?
Background: Applications of artificial intelligence (AI) in health care have garnered much attention in recent years, but the implementation issues posed by AI have not been substantially addressed.
Objective: In this paper, we have focused on machine learning (ML) as a form of AI and have provided a framework for thinking about use cases of ML in health care. We have structured our discussion of challenges in the implementation of ML in comparison with other technologies using the framework of Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability of Health and Care Technologies (NASSS).
Methods: After providing an overview of AI technology, we describe use cases of ML as falling into the categories of decision support and automation. We suggest these use cases apply to clinical, operational, and epidemiological tasks and that the primary function of ML in health care in the near term will be decision support. We then outline unique implementation issues posed by ML initiatives in the categories addressed by the NASSS framework, specifically including meaningful decision support, explainability, privacy, consent, algorithmic bias, security, scalability, the role of corporations, and the changing nature of health care work.
Results: Ultimately, we suggest that the future of ML in health care remains positive but uncertain, as support from patients, the public, and a wide range of health care stakeholders is necessary to enable its meaningful implementation.
Conclusions: If the implementation science community is to facilitate the adoption of ML in ways that stand to generate widespread benefits, the issues raised in this paper will require substantial attention in the coming years.
doi:10.2196/13659
Introduction
Artificial intelligence (AI) has become a topic of central importance to the ways in which health care will change in the coming decades, with recent commentaries addressing potential transformations in clinical care, public health, and health system planning. AI is a general purpose technology (GPT), which means it represents a core set of capabilities that can be leveraged to perform a wide variety of tasks in different contexts of application. Understanding the core capabilities of AI as a GPT, and the ways in which it stands to be incorporated into health care processes, is essential for the implementation research community to contribute to promoting a positive place for AI in the future of health care. We believe that AI has the potential to substantially reconfigure health care, with implications that reach beyond enhancing the efficiency and effectiveness of care delivery. Due to this potential, we suggest that implementation science researchers and practitioners make a commitment to more fully consider the wider range of issues that relate to its implementation, which include health system, social, and economic implications of the deployment of AI in health care settings.
We suggest that the most appropriate language for discussions of AI in health care is actually to discuss machine learning (ML), which is the specific subfield of AI that is currently making the most impact across industries. We then focus on 2 questions about the deployment of ML in health care. First, how should ML be understood in terms of its actual use cases in health care? This question addresses the nature of ML as an implementation object in health-related contexts. We present a basic framework for thinking about use cases of ML in terms of decision support versus automation and elaborate clinical, operational, and epidemiological categories of these use cases.
Second, what are the unique challenges posed by ML that may require consideration during an implementation initiative? As opposed to focusing on strategies for the adoption of digital technologies in general, which has been addressed extensively in other literature, we focus on what we understand to be the most important risks arising from the implementation of ML in health care. Our discussion of the risks associated with implementing ML in health care is guided by the work of Greenhalgh et al in the framework for theorizing and evaluating Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability (NASSS) of health and care technologies.
The NASSS framework is based on the premise that when considering influences on whether and how a technology is successfully taken up and used, it is important to keep in mind that “it is not individual factors that make or break a technology implementation effort but the dynamic interaction between them”. The NASSS framework outlines a range of considerations that are relevant to understanding how a technology might be adopted across an entire region or health system, ranging from a focus on the particular health condition in the clinical scenario to the wider political, regulatory, and sociocultural system in which it is to be embedded. In our paper, we examine ML as a GPT that has the potential to apply across clinical conditions and focus our analysis on elements of the NASSS framework: the technology, its value propositions, and the adopters, organizations, and systems into which it might be introduced. We emphasize the evolutionary nature of ML as a GPT and explicitly acknowledge that it will continue to develop and change over the coming years, which is also an important feature of the NASSS framework. We conclude by advocating for further research on the risks posed by ML from an implementation science perspective.
AI has been described in many ways. Using the framing in Agrawal et al, we emphasize that recent advances in AI can be best understood as “prediction technology”. Quite simply, prediction is defined for this purpose as “taking information you have, often called ‘data’, and using it to generate information you don’t have” (PM, p. 24). This newly generated information estimates the true information that is missing, leading to the potential for people and technology to take actions that may have otherwise been based on less accurate information.
Predicting illness episodes that might be experienced in the future is an obvious application of AI in this sense, but prediction as we have defined it has many other uses as well. Examples include an automatic translator predicting the phrases of Spanish that correspond to a particular set of phrases in English or a chat bot predicting the most appropriate cluster of words in response to a given query. These examples might not represent the very intuitive understanding of prediction that we have become used to in everyday usage or the way we tend to think of prediction of health-related events and outcomes in health care. However, they represent the prediction of information that we do not have based on information we do have and point toward the potentially widespread applications of AI as a GPT.
The phrase “predictive analytics” is very intuitive with regard to defining AI as a prediction technology, using advanced computer algorithms to predict health-related events from existing data in ways that exceed the ability of individual researchers applying individual analyses. However, AI opens new opportunities for prediction beyond the familiar predictive analytics for hospital admissions, length of stay, and patient survival rates. As a process of filling in missing information, better and cheaper prediction is already being used in new areas, from transcribing audio to enhancing security to informing diagnoses.
At its core, current applications of AI bring statistical modeling, computer code, and advanced computing power to bear on large amounts of representative data. In his recent commentary on the potential of deep learning (a form of AI) to transform health care, Hinton gave the example of deciding whether a patient has a particular disease and explained that a common approach would be to use a simple logistic regression (using data to predict a binary outcome: the patient has the disease or does not). However, he suggested that if there are extremely high numbers of potential influences or predictors of whether the person has the disease, many of which may interact with one another, the prediction challenge becomes much more complex. This is especially the case where we have imperfect knowledge of the causes and correlates of a particular disease. This example also pertains only to binary queries specifically about whether a patient has a single disease, which is different from the typical reasoning processes involved in differential diagnosis among clinicians, where multiple confounding, interdependent outcomes must be considered.
Specific applications of AI can fall under distinct categories, with AI serving as an umbrella concept covering more specific frameworks. In this paper, we are primarily concerned with the subdomain of AI referred to as ML, in which statistical models are automatically (or semiautomatically) induced from data according to some criterion (eg, best expected discriminative power or maximum likelihood given the training data). This means that complex statistical models capable of executing advanced predictions are generated in part by using data to train the model to achieve a particular goal.
Often, ML involves supervised methods that categorize data points (for example, as images of skin cancer or otherwise), given datasets in which all data points (or at least a substantial subset) are associated with a label, ordinal, or category that is meant to be predicted or inferred. This process requires datasets that have the appropriate labels indicating what the data means; in the example of images of skin cancer, each data point would be labeled according to its representation of a mass as malignant or benign or some variation thereof. Given these labels and the statistical models they help to train, ML can be very effective at determining the category in which any newly available individual data point belongs, thereby being useful in the effort to, for example, identify malignant cancers based on particular images.
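As a minimal, hedged sketch of this supervised setup (using scikit-learn, with synthetic data standing in for labelled clinical records; nothing here reflects a real dataset), model induction and prediction might look like this:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Hypothetical stand-in for labelled clinical data: 1,000 "patients",
# 25 candidate predictors, binary label (disease present / absent)
X, y = make_classification(n_samples=1000, n_features=25, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model induction: parameters are fit to maximise the (regularised)
# likelihood of the training labels -- the "criterion" described above
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Prediction: estimating the label we do not have for new data points
print("held-out accuracy:", model.score(X_test, y_test))
```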
Much of the power of modern ML also derives from unsupervised pattern recognition, in which hidden (or latent) aspects of the data are automatically identified by the algorithms and exploited according to the aforementioned criteria. Unsupervised ML can often identify patterns in the data that humans do not even think to look for. Often, these hidden aspects are nonlinear combinations of many parts of the input.
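A correspondingly small sketch of unsupervised pattern recognition, again on synthetic data, is given below; the groupings are recovered without any labels being supplied, which is the sense in which the structure is "hidden":

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic, unlabelled data: no outcome labels are used, so any
# structure found is discovered by the algorithm itself
X, _ = make_blobs(n_samples=500, centers=3, n_features=10, random_state=0)

# The algorithm induces groupings directly from the data
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("points per discovered cluster:",
      [int((labels == k).sum()) for k in range(3)])
```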
ML can also improve its ability to take actions according to these induced hidden patterns and particular functions of cost and reward in a process called reinforcement learning. For example, ML can dynamically adapt survey questions to more quickly identify possible diseases, dynamically avoid potential communication breakdowns during speech conversation in the assessment of dementia, and even recommend treatments directly when using structured institutional data. As so much health care information can be represented digitally, the potential of ML to improve health care practices is profound.
Methods
Use Cases of Machine Learning in Health Care
In the remainder of our paper we refer primarily to ML as opposed to AI, focusing our analysis on the concrete possibilities of ML in health care. We can think about use cases of ML in health care in 2 broad ways. The first is through decision support, wherein ML algorithms are used to provide some form of input into human decision making. An example is where an algorithm is used to provide more accurate predictions of the outcome of a particular procedure given a particular clinical presentation. This helps to inform a human decision about whether a given procedure is the best course of action. The second is through automation, wherein algorithms are used not only to predict an output but also to take action to achieve a particular outcome. An example is the automatic transcribing of a clinical note when dictated into a computer program, resulting in a complete note being added to a patient’s record (technically referred to as Automated Speech Recognition). These 2 broadly defined categories of use cases can be thought of as applying to various types of tasks in health care, and we suggest it is instructive to consider 3 types of tasks as most relevant for the implementation of ML for health: clinical, operational, and epidemiological.
Clinical tasks refer to health-related assessment, intervention, and evaluation, generally performed by qualified health care providers, for example, determining a differential diagnosis. Operational tasks are those related to activities that are ancillary to clinical tasks but necessary or valuable in the delivery of services, such as generating, storing, and retrieving medical records. Finally, epidemiological tasks are those related to more accurately identifying the health needs and outcomes of a set of people within a given population. An example is the development of a warning system for disease outbreak. As epidemiological use cases of ML are related to enhancing the ability of humans to make decisions in the other categories described here (clinical or operational), there are no examples of pure automation for epidemiological tasks that contain an output other than informing a human decision. Hypothetical examples of both decision support and automation are given under each of these categories in the table below.
This table presents a basic framework for thinking about use cases of ML in health care as falling into 2 primary categories: decision support and automation. These use cases apply in categories of clinical, operational, and epidemiological tasks. As no examples of pure automation exist for epidemiological tasks, no example is presented in that cell.
The considerations most pertinent to the implementation of ML will depend on the particular use case being proposed in a given implementation initiative, and the categories outlined in the table below provide a framework for understanding those use cases. The NASSS framework and other work in implementation science for digital health technologies emphasize the importance of attending to the particular value proposition that a new technology offers for health care stakeholders. The value proposition of digital technology might be different for different stakeholder groups, and implementation frameworks direct attention to the implications of newly introduced technologies for patients, health care providers, managers, health policymakers, and others. The clinical, operational, and epidemiological task types presented in the table below will correspond to different value propositions for different stakeholder groups, meaning that specific applications of ML might preferentially benefit one group over another; for example, identifying a scheduling process to maximize efficiency in operating costs might preferentially benefit managers over health care providers inconvenienced by a new system. Understanding how value propositions differ for the various stakeholders implicated in a given implementation of ML is an essential consideration for successful adoption and use.
|Type of use case||Clinical^a^||Operational^b^||Epidemiological^c^|
|Decision support||Producing a more accurate prediction of the likely outcome of a particular intervention||Identifying potential staff scheduling changes related to forecasted emergency room volumes||Warning systems for disease outbreak|
|Automation||Automatically altering insulin treatment in response to monitored glucose-insulin dynamics||Use of robotics for operational tasks in dementia care, such as meal delivery||N/A^d^|

^a^Tasks related to the assessment, intervention, and evaluation of health-related issues and procedures, generally performed by qualified health care providers.

^b^Tasks related to activities that are ancillary to clinical tasks but necessary or valuable in the delivery of services.

^c^Tasks related to more accurately identifying the health needs and outcomes of people within a given population.

^d^Not applicable.
The potential value propositions of an ML technology offering decision support versus one offering automation are very different and bring along different sorts of implementation issues. The implementation of decision support systems in health care that do not include applications of ML has been well studied, and the difficulties include perceived challenges to autonomy, lack of time, and dissatisfaction with user interfaces. Implementation initiatives involving decision support applications of ML will need to consider this past work to develop implementation strategies that more effectively address known challenges.
Implementation initiatives involving automation are likely to face some similar and some different challenges. For example, a study of stakeholder views on the introduction of automated robotics into a variety of health care settings found a widespread lack of interest and understanding, and fear of the ways in which work would be disrupted and distributed. Although automation has existed in health care for decades through technologies such as heart rate monitors, the question of how acceptable stakeholders will perceive new forms of automation to be remains an important issue. This point raises the overarching issue of the extent of automation that is possible through applications of ML, linked to speculation about whether ML will mostly augment or actually replace health care providers’ work.
Augmentation and Replacement of Health Care Work
We agree with a growing chorus of health care providers and researchers who suggest that ML will primarily serve to augment, as opposed to replace, the work of humans in the provision of health care in the near term, despite applications of automation in health care. This is because the role of ML in the current generation of capabilities functions at the level of the task, and not at the level of an entire job. Agrawal et al explained that “the actual implementation of AI is through the development of tools. The unit of AI tool design is not ‘the job’ or ‘the occupation’ or the ‘the strategy’, but rather ‘the task’.” (p. 125). Therefore, for a health care provider to be entirely replaced, every single task performed by that provider would need to be automated by an ML tool or handed off to a different human.
The complete automation of the full range of human tasks involved in providing clinical care is not yet possible; activities such as making treatment decisions based on a differential diagnosis that integrates data from laboratory investigation, visual observation, and patient history are still too complex for automation. In emphasizing this point, we are suggesting that although much of the hype about AI (and specifically ML) in health care has focused on its potential role in automating processes of health service delivery, it is more likely that near-term applications of ML will fall under the category of decision support.
Further comments about prediction tasks and decision tasks will help to clarify this point. As stated earlier, ML applications fundamentally perform some form of prediction. The specific instance of prediction that the application is performing may be thought of as the prediction task, which may be paired with a complementary decision task. The decision task is where the newly generated information is used to select a particular action in a given context. In applications of ML that function as decision support, the decision task is performed by a human. As ML diffuses, an important new challenge for health care providers is to make choices using the predictions that arise from decision support applications of ML, involving new forms of input to clinical thought processes related to risks, benefits, and previously unrecognized influences on health. The examples of decision support in the table above involve generating better information to inform human decision making.
In applications of ML that function as automation, both the prediction task and the decision task are accomplished by machines. A clear example is self-driving cars. The sensors surrounding the car enable predictions of the best direction in which the car should travel. However, it is the selection from a predetermined set of actions and execution of one action over another that makes self-driving cars an example of automation as distinct from one of decision support. ML is not yet sophisticated enough to complete these selection and execution functions for many health care tasks, across both clinical and operational levels.
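To make the split between the prediction task and the decision task concrete, the toy sketch below separates a model's probability output from the rule that acts on it; the threshold, the risk value, and the action names are entirely hypothetical:

```python
def predicted_risk(patient_features):
    """Prediction task: stand-in for a trained model's probability output."""
    return 0.27  # hypothetical probability of readmission

features = {"age": 71, "fev1_pct": 43}  # hypothetical inputs
risk = predicted_risk(features)

# Decision support: the prediction is surfaced to a human, who decides
print(f"Predicted 30-day readmission risk: {risk:.0%} -- clinician to review")

# Automation: a hard-coded rule both predicts AND selects the action
if risk > 0.25:
    action = "auto-schedule follow-up call"  # the machine takes the decision
else:
    action = "no action"
print("Automated pathway would take:", action)
```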
As prediction tasks become more amenable to being performed by ML, decision tasks become more valuable. This is because predictions are improved, meaning that decisions can be made with greater confidence and impact. The enhanced value of these decisions represents the potential value of ML as a decision support tool and illustrates the potential breadth of value propositions that could arise from this technology with a wide range of implications for the implementation process. However, for decision support to be valuable in health care, the outputs of algorithms must have a clear entryway into the human decision-making processes that pervade health service delivery. This points us toward one of a series of important issues raised from an implementation science perspective on the introduction of ML in health care settings, which we turn to next.
Results
Unique Considerations for Implementation Science
We have described use cases (and attendant value propositions) of ML in health care as more likely relating to decision support and less likely to automation, which begins to illustrate the implementation object of focus in ML initiatives. In many cases of decision support, the implementation object is actually not all that different from the statistical tools that are already used as part of common practice, such as risk prediction. In cases of automation, there are similarly many examples of technologies that have already been successfully implemented in health care settings (such as automatic transcription mentioned earlier). However, ML as a GPT raises a number of issues that run across use cases and might be anticipated as unique in comparison with implementation projects for other digital technologies.
Best practices of implementation for digital innovations will be fundamental to the adoption of ML in health care. Here, we discuss considerations that might appear in implementation projects involving ML that may be less likely to appear in implementation projects involving other digital technologies and yet stand to have a potentially strong influence on the success of such projects. We organize this section based on distinct levels of consideration that are presented in the NASSS framework that we have not yet addressed: health care providers, patients and the public, health care organizations, and health policy and systems. Although we consider the primary considerations of health technology vendors working on the development of ML applications in health care to be outside the scope of this paper, we acknowledge this is a gap in the literature that requires attention.
Health Care Providers
Health care providers are those responsible for doing the actual work of health care delivery and are being increasingly expected to adopt and use new technologies in health care environments. We suggest that the core considerations or risks of the implementation of ML for health care providers will fall into the categories of meaningful decision support and explainability.
Meaningful Decision Support
For ML to function as decision support in a way that is valuable to health care stakeholders, the outputs of algorithms must have a meaningful entryway into decision making. From an operational or epidemiological perspective, isolated analyses of risk prediction may help to inform resource allocation and subsequent analysis decisions fairly simply. However, from a clinical perspective, algorithms that perform isolated risk prediction may be less useful. Clinical decision making is a complex process involving the integration of a variety of data sources, incorporating both tacit and explicit modes of intelligence. To inform this decision-making process more intuitively, attention is increasingly being devoted to communication tools such as data visualization. The nature and value of these communication tools are central to the implementation process, helping to determine whether and how algorithmic outputs are incorporated in everyday routine practices. This point primarily relates to the decision support use case across clinical, operational, and epidemiological tasks.
Explainability
There is a growing concern in the AI community related to the explainability of the results achieved by ML algorithms, wherein the ways in which algorithms enhance the performance of prediction often cannot be understood. As a result of the processes described earlier in this paper, the ways in which data are being used to train algorithms cannot be traced out in sequential, logical detail. Hence, the actual ways in which models achieve their results are in some instances not knowable even to the computer scientists who create them. Evidence-based medicine rests on a foundation of the highest standards of explainability; medical decision making aspires to incorporate a sound understanding of the mechanisms by which diseases and their treatments function and the particular treatments that have demonstrated the greatest benefits under particular experimental circumstances (in addition to patient needs and values). The lack of understanding of those mechanisms and circumstances poses challenges to the acceptability of ML to health care stakeholders. Although the issue of explainability relates clearly to decision support use cases of ML as explained here, the issue may apply even more profoundly to automation-focused use cases as they gain prominence in health care.
Patients and the Public
The issues of public trust and public input into the governance of ML initiatives in health care have been widely discussed as the popularity of AI has grown, with advocates suggesting that future developments of AI ought explicitly to support a broader public interest. We suggest that 2 pairs of issues frame the risks of ML related to patients and the public. The first pair is privacy and consent, and the second is representative data and algorithmic bias.
Privacy and Consent
The training of ML models requires large amounts of data, which means that applications of ML in health will likely rely on health-related data from patients and the public. As governments and other actors internationally become interested in developing applications of ML, health-related data are increasingly made available to private entities with the capability of producing AI applications that are relevant to people's health. Currently, data from wearable devices such as smart watches and mobile apps are not widely covered by health information legislation, and many health-related apps have unclear consenting processes related to the flow of data generated through their use. Furthermore, data that are de-identified may be reidentifiable when linked with other datasets. These considerations create major risks for initiatives that seek to make health data available for use in the development of ML applications, potentially leading to substantial resistance from health care providers such as that seen in primary care in Denmark in recent years. This will be particularly important for population and public health use cases that require data from very large segments of the population. The meaning of consent and strategies to maintain patient privacy are central considerations in ML implementation initiatives. The related issues of privacy and consent pertain especially to clinical and epidemiological use cases of ML in both decision support and automation categories, as data from patients and/or the public are essential to train algorithms in these areas (whereas operational use cases may only rely on other forms of data, such as clinical scheduling histories).
Representative Data and Algorithmic Bias
Algorithms are only as good as the data used to train them. In cases where training data are partial or incomplete or only reflect a subset of a given population, the resulting model will only be relevant to the population of people represented in the dataset. This raises the question of data provenance and represents a set of issues related to the biases that are built into algorithms used to inform decision making. One high-profile example was the hiring bias exhibited when algorithms were used to make hiring decisions at Amazon, resulting in only men being advanced to subsequent stages of hiring. This is notable in part because the algorithm performed extremely well based on the available data, simply extending the bias that already existed in hiring practices at the company. When applied to health care or public health, data provenance and potential bias in training data represent important issues that are likely to be of major concern for the stakeholders involved in the implementation of an ML initiative. Public health has health equity as a primary goal, and representativeness in terms of which populations can be addressed by an ML initiative will be a central consideration.
A further challenge with the nature of the data on which algorithms are trained relates to concept drift, a phenomenon in which the data on which an algorithm was trained change over time (or become out of date), altering the performance of the algorithm as new data are acquired. The possibility of concept drift means that those overseeing the performance of ML-based technologies in health care must identify strategies to determine how well the algorithm deals with new data and whether concept drift is occurring. Applications to support this effort are emerging in the literature [ ].
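A minimal version of such a surveillance strategy is sketched below: compare rolling accuracy on recent cases against a validation-time baseline and raise an alert when the gap exceeds a tolerance. The window size, baseline, and tolerance are placeholder values for illustration, not clinical recommendations.

```python
# Minimal sketch of concept-drift surveillance via a rolling window.
from collections import deque

WINDOW, BASELINE, TOLERANCE = 200, 0.90, 0.05
recent = deque(maxlen=WINDOW)  # 1 if the model was right, else 0

def record_outcome(prediction, actual):
    """Call once per case as ground truth becomes available."""
    recent.append(prediction == actual)
    if len(recent) == WINDOW:
        rolling_acc = sum(recent) / WINDOW
        if rolling_acc < BASELINE - TOLERANCE:
            alert(rolling_acc)

def alert(acc):
    print(f"Possible concept drift: rolling accuracy {acc:.2f} vs "
          f"baseline {BASELINE:.2f}; consider retraining or recalibration.")
```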
The issues addressed here apply most clearly to ML applications that use patient data to inform clinical and epidemiological use cases that enhance clinical care and health system planning. And although the use of public data will likely be the most contentious issue in this domain, the challenges of representativeness and bias apply to all ML use cases across decision support and automation domains.
Health Care Organizations
Health care and public health systems are composed of independent organizations that need to develop and execute strategies within the limits of the resources available to them. Organizations have been the driving force behind the adoption of many innovations in health care and have a collection of considerations that are unique from the broader systems of which they are a part. We suggest that the issues of security and computational resources become particularly important for organizations as they adopt ML initiatives in health care and public health.
Security
As data are collated and stored for training ML models, the risk and potential severity of security breaches grow. The global attack on health care organizations using WannaCry ransomware in May 2017 shows the vulnerability of even well-protected health data to malicious interests. This particular attack is estimated to have affected 200,000 systems in over 150 countries, indicating the potential scope of security problems as the value of data grows. Strategies to prevent such security breaches of Web-accessible health data are now being proposed in the literature [ , ], and the high profile of security issues makes this a particularly important issue as ML applications develop in health care and public health. The issue of security transcends any particular use case of ML and includes any application or analysis that relies on big data more generally.
Computational Resources
Advanced applications of ML require substantial computing power, with some predictive analyses and model-training runs requiring up to several weeks to complete. The more extensive the computing support, the more efficient ML applications will become, raising the question of the cost and availability of such advanced computing power for health care organizations. Health care is publicly funded in many countries around the world, and public support to secure the resources to fund the necessary computing power may not be present. Cloud-based analytics present both an opportunity and a challenge for health-related organizations in relation to the issue of computational resources. Cloud-based data analysis means that organizations would not need to own computational resources directly, but it also introduces potential challenges of data safety. These issues are relevant to the training phase of a newly developed algorithm; of course, less computing power is required to simply apply algorithms that have been generated and trained elsewhere. How data are stored and processed is thus also an important consideration in ML implementation initiatives. The issue of computational resources also applies more generally than any given ML use case, relating to the development and functioning of many kinds of AI algorithms.
Health Policy and Systems
The challenges associated with ML initiatives at the level of health policy and systems are extensive. These include broad legislative frameworks related to emerging health-related technologies more generally and to the innovation procurement systems that vary across health system settings [ , ]. The policy issues presented by ML in health care are beginning to garner more attention [ , ], but here we present one issue that we have not seen addressed in health care or public health literature: the challenge of scalability.
Scalability and Normal Accidents
A major challenge that extends beyond any single implementation of ML, and therefore requires a system-wide view, relates to the scalability of ML. Scalability in this sense refers to the unanticipated effects of the appearance of multiple ML technologies that will inevitably interact with one another by some means. As applications of ML proliferate across health care and public health, eventually some algorithmic outputs will confront others. The effects of this interaction are impossible to predict in advance, in part because the particular technologies that will interact are unclear and likely not yet implemented in the course of usual care.
Health care represents what Charles Perrow referred to as a complex system, or a system in which processes are tightly linked to one another and interact in unintended ways in the effort to achieve the goals of the system. This acknowledgement has led to the high-reliability movement in health care and other industries [ ], intending to implement management strategies that could mitigate the risk of disasters arising from such immense complexity. Perrow's work was titled Normal Accidents: Living with High Risk Technologies, suggesting that in systems characterized by complexity and the use of advanced technologies, accidents are bound to happen [ ]. This basic point about the seeming inevitability of accidents in the context of complex systems and new technologies underscores the significance of the scalability challenge of ML in health care. We suggest that implementation scientists will need to consider the unintended consequences of the implementation and scale of ML in health care, creating even more complexity and greater opportunity for risks to the safety of patients, health care providers, and the general public. ML safety will likely need to become a dedicated focus of patient safety research internationally. This point about scalability frames the broader challenge for implementation scientists who are committed to a system-wide perspective on health innovations and relates not only to each type of use case identified in our framework but also to the interactions between them.
Discussion
Intersecting Issues in the Future of Health Care
In our brief Discussion section, we outline 2 overarching issues that we consider to frame the challenges facing health care systems that are hoping to adopt ML in the coming years. The discussion here is informed by the explicit recognition in the NASSS framework that both the technology and context in which innovations are being introduced shift and change over time. Greenhalgh et al suggest that although the levels of the framework can be distinguished analytically, “at an empirical level they are inextricably interlinked and dynamically evolving, often against a rapidly shifting policy context or continued evolution of the technology” (p. 14). Our assessment of the 2 issues we address here is intended to represent the connections between the changes that will be required as the policy context and technology evolve concurrently. The first is the issue of the role of corporations in health-related applications of ML, and the second is the issue of the role of ML in the evolving nature of health care.
The Role of Corporations
As the innovations enabled by ML have taken on a more powerful role in driving global economies, corporations have strategically sought to acquire larger amounts of more diverse data to boost their capacity to develop ML algorithms. The shifting focus of many large corporations to the collection and manipulation of data characterizes what Zuboff refers to as surveillance capitalism, a relatively recent phenomenon in the global economy that relies on data for innovation and corporate success. The more that large corporations enter the health care industry with the power to collect, store, and use data, the more intertwined health care will become with the corporate realities of these large, multinational companies [ ].
As large corporations acquire more data and develop more sophisticated forms of ML that transcend any individual geographical region, the implications for domestic health care policy are at risk of being overlooked. Although recent efforts to create regional protections around data collection and use have appeared to make an impact, such as the General Data Protection Regulation in Europe, health care policy is well behind. In cases where health-related data are already being stored in a country other than where the user is living, what are the regulations on how those data can be used? Where users voluntarily engage with technologies that collect their data for explicit health-related use by a corporation outside of their political jurisdiction, what legislative frameworks apply to protect patients and the public? These issues represent the important challenge of making health policy matter when conventional political boundaries are less able to contain the potential of large corporations to develop and use their technological capabilities.
The Changing Nature of Artificial Intelligence–Enabled Health Care
AI applications represent a potential impetus for major change in the institutions that constitute health care. In this sense, the term institution refers not just to the organizations in which health care providers work but to a complex collection of cognitive, cultural, regulative, and moral influences that shape the way that health care workers see their work and their lives. The social sciences have worked to provide clear definitions of institutions through decades of research and theory [ - ]. Scott explained that institutions are combinations of 3 pillars: norms of the way things are usually done around here (cultural-cognitive influences), laws and regulations (regulative influences), and assumed moral codes (normative influences) [ ]. Health care represents a confluence of institutions understood in this sense, many of which are naturally oriented toward maintaining some version of the status quo. Particularly for members of institutions who maintain power over resources, such as the medical profession, embracing institutional change is a point of resistance and difficulty.
We suggest that ML will confront the realities of entrenched institutions through issues such as meaningful decision support and explainability described earlier. These 2 issues represent the authority of health care providers over the decisions that come to define health care as a multi-institutional field, both in terms of their rightful positions within the system and the fabric of decision making that has always defined health care processes. These issues point toward an important challenge that we suggest implementation scientists must grapple with: the changing nature of health care work. In Prediction Machines, the authors explain that as AI technology develops, “the value of substitutes to prediction machines, namely human prediction, will decline. However, the value of complements, such as the human skills associated with data collection, judgment, and actions, will become more valuable.” (p. 81). As the implementation science community considers how to encourage the adoption of ML technologies, it will also need to consider how such technologies stand to change the ways in which health care planning, decision making, and delivery are understood and the evolving role of human health care providers within that context.
The challenges described here refer to unique considerations of ML that pose novel challenges to implementation beyond the work of promoting the routine use of technologies among health care providers. We suggest that the hype and high stakes of ML make these issues more prominent in the mindsets of health care stakeholders and therefore more likely to impact upon an ML implementation project. The implementation science community will need to establish strategies to address these issues as ML becomes more prominent, each of which requires ongoing work to be adequately addressed.
Conclusions
In this paper, we have provided an overview of ML for implementation scientists informed by the NASSS framework, outlining the use cases of ML as falling into the categories of decision support and automation. We suggest these use cases apply to clinical, operational, and epidemiological tasks and that the primary ways in which ML will enter into health care in the near term will be through decision support. We then outlined unique implementation issues posed by ML initiatives from 4 perspectives, those of health care providers, patients and the public, health care organizations, and health policy and systems.
Ultimately, we suggest that the future of ML in health care remains positive but uncertain, as support from patients, the public, and a wide range of health care stakeholders is necessary to enable its meaningful implementation. However, as applications of ML become more sophisticated and investment in communications strategies such as data visualization grows, ML is likely to become more user-friendly and more effective. If the implementation science community is to facilitate the adoption of ML in ways that stand to benefit all, the issues raised in this paper will require substantial attention in the coming years.
Acknowledgments
This research is supported by an Associated Medical Services Phoenix Fellowship to the corresponding author.
Authors' Contributions
JS led the writing of the manuscript. JS, TJ, AG, and FR contributed to the conceptualization, design, and approach for the manuscript. JS, TJ, AG, and FR contributed to analysis and interpretation of the argument made in the manuscript. All authors contributed to writing and revising the manuscript. All authors provided the final approval of the manuscript. All authors agree to be accountable for the manuscript.
Conflicts of Interest
None declared.
References
- Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and pathologists as information specialists. J Am Med Assoc 2016 Dec 13;316(22):2353-2354. [CrossRef] [Medline]
- Naylor CD. On the prospects for a (deep) learning health care system. J Am Med Assoc 2018 Sep 18;320(11):1099-1100. [CrossRef] [Medline]
- Thiébaut R, Thiessard F, Section Editors for the IMIA Yearbook Section on Public Health and Epidemiology Informatics. Artificial intelligence in public health and epidemiology. Yearb Med Inform 2018 Aug;27(1):207-210. [CrossRef] [Medline]
- Harwich E, Laycock K. Wilton Park. London: National Health Service; 2018. Thinking On Its Own: AI in The NHS URL: https://www.wiltonpark.org.uk/wp-content/uploads/Thinking-on-its-own-AI-in-the-NHS.pdf
- Agrawal A, Gans J, Goldfarb A. The Economics of Artificial Intelligence: An Agenda. Washington DC: National Bureau of Economic Research; 2019.
- Nilsen P, Ståhl C, Roback K, Cairney P. Never the twain shall meet?-a comparison of implementation science and policy implementation research. Implement Sci 2013;8(1):63. [CrossRef]
- Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci 2015 Apr 21;10:53 [FREE Full text] [CrossRef] [Medline]
- Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A'Court C, et al. Beyond adoption: a new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. J Med Internet Res 2017 Dec 1;19(11):e367 [FREE Full text] [CrossRef] [Medline]
- Shaw J, Agarwal P, Desveaux L, Palma DC, Stamenova V, Jamieson T, et al. Beyond 'implementation': digital health innovation and service design. NPJ Digit Med 2018;1(1):48. [CrossRef]
- May C. Agency and implementation: understanding the embedding of healthcare innovations in practice. Soc Sci Med 2013;78:26-33. [CrossRef] [Medline]
- Agrawal A, Gans J, Goldfarb A. Prediction Machines: The Simple Economics of Artificial Intelligence. Brighton, Massachusetts: Harvard Business Review Press; 2018.
- Cohen IG, Amarasingham R, Shah A, Xie B, Lo B. The legal and ethical concerns that arise from using complex predictive analytics in health care. Health Aff (Millwood) 2014 Jul;33(7):1139-1147. [CrossRef] [Medline]
- Eva KW. What every teacher needs to know about clinical reasoning. Med Educ 2005 Jan;39(1):98-106. [CrossRef] [Medline]
- Victor-Chmil J. Critical thinking versus clinical reasoning versus clinical judgment: differential diagnosis. Nurse Educ 2013;38(1):34-36. [CrossRef] [Medline]
- Esteva A, Kuprel B, Novoa R, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542(7639):115-118. [CrossRef] [Medline]
- Kourou K, Exarchos TP, Exarchos KP, Karamouzis MV, Fotiadis DI. Machine learning applications in cancer prognosis and prediction. Comput Struct Biotechnol J 2015;13:8-17 [FREE Full text] [CrossRef] [Medline]
- Rajpurkar P, Polamreddi V, Balakrishnan A. ResearchGate.: Cornell University Library; 2017. Malaria likelihood prediction by effectively surveying households using deep reinforcement learning URL: https://www.researchgate.net/publication/321324506_Malaria_Likelihood_Prediction_By_Effectively_Surveying_Households_Using_Deep_Reinforcement_Learning
- Chinaei H, Currie LC, Danks A, Lin H, Mehta T, Rudzicz F. Identifying and avoiding confusion in dialogue with people with alzheimer's disease. Comput Linguist 2017 Jun;43(2):377-406. [CrossRef]
- Liu N, Logan B, Liu N, Xu Z, Tang J, Wang Y. Deep reinforcement learning for dynamic treatment regimes on medical registry data. Healthc Inform 2017 Aug;2017:380-385 [FREE Full text] [CrossRef] [Medline]
- Chekroud AM, Zotti RJ, Shehzad Z, Gueorguieva R, Johnson MK, Trivedi MH, et al. Cross-trial prediction of treatment outcome in depression: a machine learning approach. Lancet Psychiatry 2016 Mar;3(3):243-250. [CrossRef] [Medline]
- Jones SS, Thomas A, Evans RS, Welch SJ, Haug PJ, Snow GL. Forecasting daily patient volumes in the emergency department. Acad Emerg Med 2008 Feb;15(2):159-170 [FREE Full text] [CrossRef] [Medline]
- Chen M, Hao Y, Hwang K, Wang L, Wang L. Disease prediction by machine learning over big data from healthcare communities. IEEE Access 2017;5:8869-8879. [CrossRef]
- Miller S, Nimri R, Atlas E, Grunberg EA, Phillip M. Automatic learning algorithm for the MD-logic artificial pancreas system. Diabetes Technol Ther 2011 Oct;13(10):983-990. [CrossRef] [Medline]
- Casey D, Beyan O, Murphy K, Felzmann H. Robot-Assisted Care for Elderly With Dementia: Is There a Potential for Genuine End-User Empowerment? In: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts. 2015 Presented at: HRI'15 Extended Abstracts; March 2-5, 2015; Portland, OR p. 247-248.
- Chaudoir SR, Dugan AG, Barr CH. Measuring factors affecting implementation of health innovations: a systematic review of structural, organizational, provider, patient, and innovation level measures. Implement Sci 2013 Feb 17;8:22 [FREE Full text] [CrossRef] [Medline]
- Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009 Aug 7;4:50 [FREE Full text] [CrossRef] [Medline]
- Légaré F, O'Connor AM, Graham ID, Saucier D, Côté L, Blais J, et al. Primary health care professionals' views on barriers and facilitators to the implementation of the Ottawa decision support framework in practice. Patient Educ Couns 2006 Nov;63(3):380-390. [CrossRef] [Medline]
- Lyell D, Coiera E. Automation bias and verification complexity: a systematic review. J Am Med Inform Assoc 2017 Mar 1;24(2):423-431. [CrossRef] [Medline]
- Cresswell K, Cunningham-Burley S, Sheikh A. Health care robotics: qualitative exploration of key challenges and future directions. J Med Internet Res 2018;20(7):e10410 [FREE Full text] [CrossRef] [Medline]
- Vayena E, Blasimme A, Cohen IG. Machine learning in medicine: addressing ethical challenges. PLoS Med 2018;15(11):e1002689 [FREE Full text] [CrossRef] [Medline]
- Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol 2017;2(4):230-243 [FREE Full text] [CrossRef] [Medline]
- Agrawal A, Gans J, Goldfarb A. WSB Wiki. Toronto: University of Toronto; 2018. The Labor Market Impact of Artificial Intelligence URL: https://wiki.bus.wisc.edu/download/attachments/45908081/AGG_2018_10_15.pdf?version=1&modificationDate=1549310002037&api=v2
- Shaw J, Shaw S, Wherton J, Hughes G, Greenhalgh T. Studying scale-up and spread as social practice: theoretical introduction and empirical case study. J Med Internet Res 2017;19(7):e244 [FREE Full text] [CrossRef] [Medline]
- Greenhalgh T, Wieringa S. Is it time to drop the 'knowledge translation' metaphor? A critical literature review. J R Soc Med 2011 Dec;104(12):501-509 [FREE Full text] [CrossRef] [Medline]
- Greenhalgh T, Howick J, Maskrey N, Evidence Based Medicine Renaissance Group. Evidence based medicine: a movement in crisis? Br Med J 2014 Jun 13;348:g3725 [FREE Full text] [CrossRef] [Medline]
- Patel VL, Kaufman DR, Arocha JF. Emerging paradigms of cognition in medical decision-making. J Biomed Inform 2002 Feb;35(1):52-75 [FREE Full text] [CrossRef] [Medline]
- Rossi RA, Ahmed NK. The Network Data Repository with Interactive Graph Analytics and Visualization. In: Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence. 2015 Presented at: AAAI'15; January 25-30, 2015; Austin Texas p. 4292-4293.
- Samek W, Wiegand T, Müller KR. arXiv. 2017. Explainable artificial intelligence: understanding, visualizing and interpreting deep learning models URL: https://arxiv.org/pdf/1708.08296.pdf
- Evidence-Based Medicine Working Group. Evidence-based medicine. A new approach to teaching the practice of medicine. J Am Med Assoc 1992 Nov 4;268(17):2420-2425. [CrossRef] [Medline]
- Tracy CS, Dantas GC, Upshur RE. Evidence-based medicine in primary care: qualitative study of family physicians. BMC Fam Pract 2003 May 9;4:6 [FREE Full text] [CrossRef] [Medline]
- Powles J, Hodson H. Google DeepMind and healthcare in an age of algorithms. Health Technol (Berl) 2017;7(4):351-367 [FREE Full text] [CrossRef] [Medline]
- Wadmann S, Hoeyer K. Dangers of the digital fit: rethinking seamlessness and social sustainability in data-intensive healthcare. Big Data Soc 2018 Jan 11;5:205395171775296. [CrossRef]
- Vezyridis P, Timmons S. Understanding the care.data conundrum: new information flows for economic growth. Big Data Soc 2017 Jan;4(1):205395171668849. [CrossRef]
- Gostin LO, Halabi SF, Wilson K. Health data and privacy in the digital era. J Am Med Assoc 2018 Jul 17;320(3):233-234. [CrossRef] [Medline]
- Grundy Q, Held FP, Bero LA. Tracing the potential flow of consumer data: a network analysis of prominent health and fitness apps. J Med Internet Res 2017;19(6):e233 [FREE Full text] [CrossRef] [Medline]
- Culnane C, Rubinstein B, Teague V. arXiv. 2017. Health data in an open world URL: https://arxiv.org/ftp/arxiv/papers/1712/1712.05627.pdf
- Mittelstadt BD, Allo P, Taddeo M, Wachter S, Floridi L. The ethics of algorithms: mapping the debate. Big Data Soc 2016 Dec;3(2):205395171667967. [CrossRef]
- Yu KH, Kohane IS. Framing the challenges of artificial intelligence in medicine. BMJ Qual Saf 2019 Mar;28(3):238-241. [CrossRef] [Medline]
- Higginbottom K. Forbes. 2018. The pros and cons of algorithms in recruitment URL: https://www.forbes.com/sites/karenhigginbottom/2018/10/19/the-pros-and-cons-of-algorithms-in-recruitment/#79f3658b7340
- Tsymbal A. ResearchGate.: Computer Science Department, Trinity College Dublin; 2004. The problem of concept drift: definitions and related work URL: https://www.researchgate.net/publication/228723141_The_Problem_of_Concept_Drift_Definitions_and_Related_Work
- Žliobaitė I, Pechenizkiy M, Gama J. An overview of concept drift applications. In: Big Data Analysis: New Algorithms for a New Society. Switzerland: Springer International Publishing; 2016:91-114.
- Martin G, Martin P, Hankin C, Darzi A, Kinross J. Cybersecurity and healthcare: how safe are we? Br Med J 2017;358:j3179. [CrossRef] [Medline]
- Clarke R, Youngstein T. Cyberattack on Britain's national health service - a wake-up call for modern medicine. N Engl J Med 2017 Aug 3;377(5):409-411. [CrossRef] [Medline]
- Sittig DF, Singh H. A socio-technical approach to preventing, mitigating, and recovering from ransomware attacks. Appl Clin Inform 2016;7(2):624-632 [FREE Full text] [CrossRef] [Medline]
- Youssef AE. A framework for secure healthcare systems based on big data analytics in mobile cloud computing environments. Int J Ambient Syst Appl 2014;2(2):1-11. [CrossRef]
- Jogova M, Shaw J, Jamieson T. The regulatory challenge of mobile health: lessons for Canada. Healthc Policy 2019;14(3):19-28. [CrossRef] [Medline]
- Torbica A, Cappellaro G. Uptake and diffusion of medical technology innovation in Europe: what role for funding and procurement policies? J Med Market 2010 Jan;10(1):61-69. [CrossRef]
- Allen B, Wade E, Dickinson H. Bridging the divide - commercial procurement and supply chain management: are there lessons for health care commissioning in England? J Public Procure 2009 Mar;9(1):79-108. [CrossRef]
- Perrow C. Normal Accidents: Living With High Risk Technologies. Princeton, NJ: Princeton University Press; 2011.
- Vogus TJ, Welbourne TM. Structuring for high reliability: HR practices and mindful processes in reliability-seeking organizations. J Organ Behav 2003 Nov;24(7):877-903. [CrossRef]
- Zuboff S. Big other: surveillance capitalism and the prospects of an information civilization. J Inf Technol 2015;30(1):75-89. [CrossRef]
- CB Insights. 2018. AI In Healthcare Heatmap: From Diagnostics To Drug Discovery, Deals Heats Up URL: https://www.cbinsights.com/research/artificial-intelligence-healthcare-heatmap-expert-intelligence/
- Scott WR. Institutions and Organizations: Ideas, Interests, and Identities. New York: Sage Publications; 2013.
- North DC. Institutions. J Econ Perspect 1991;5(1):97-112. [CrossRef]
- Battilana J, Leca B, Boxenbaum E. How actors change institutions: towards a theory of institutional entrepreneurship. Acad Manag Ann 2009 Jan;3(1):65-107. [CrossRef]
Abbreviations
AI: artificial intelligence
GPT: general purpose technology
ML: machine learning
NASSS: Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability
Edited by G Eysenbach; submitted 07.02.19; peer-reviewed by KL Ong, S Chen, S Zheng; comments to author 27.04.19; revised version received 16.05.19; accepted 31.05.19; published 10.07.19
Copyright
©James Shaw, Frank Rudzicz, Trevor Jamieson, Avi Goldfarb. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 10.07.2019.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included. | https://www.jmir.org/2019/7/e13659/ |
BCBS Report on Open Banking and Application Programming Interfaces
BCBS published a report that monitors the evolving trend of open banking and application programming interfaces (APIs) in certain Basel Committee member jurisdictions. The report presents key trends and challenges identified in this area through information gathered from 25 Basel Committee members from 17 jurisdictions, with a focus on supervised banks and customer-permissioned data. The report also discusses the implications of these developments for banks and bank supervision. The report builds on the findings of the BCBS paper on the implications of fintech developments for banks and bank supervisors.
The following are the key findings of the report with respect to the open banking frameworks:
- Traditional banking is evolving into open banking. A number of jurisdictions have adopted, or are considering adopting, open banking frameworks to require, facilitate, or allow banks to share customer-permissioned data with third parties.
- Open banking frameworks vary across jurisdictions in terms of stage of development, approach, and scope. Open banking is still in the early stages of development in a number of jurisdictions. Approximately half of the Basel Committee members have not observed significant open banking developments in their jurisdictions. There are benefits and challenges associated with each approach to open banking when it comes to balancing bank safety and soundness, encouraging innovation, and protecting consumers.
- Data privacy laws can provide a foundation for an open banking framework. Many jurisdictions that have adopted open banking frameworks also updated or plan to update their data protection and/or privacy laws.
- Multi-disciplinary features of open banking may require greater regulatory coordination. Within each jurisdiction, multiple authorities can have a role in addressing issues related to banks’ sharing of customer-permissioned data with third parties owing to the multi-disciplinary aspects of open banking.
Open banking comes with not only benefits but also various challenges for banks, such as risks to business models and reputation, and issues regarding data, cyber-security, and third-party risk management. Therefore, banks and bank supervisors will need to pay more attention to the challenges that accompany the increased sharing of customer-permissioned data and the growing connectivity of the various entities involved in the provision of financial services. The report identifies the following key challenges for banks and supervisors:
- Challenges of adapting to the potential changes in business models
- Challenges of ensuring data and cyber-security in an open banking framework
- Time and cost to build and maintain APIs and the lack of commonly accepted API standards
- Oversight of third parties can be limited, especially in cases where banks have no contractual relationship with the third party, or where the third party has no regulatory authorization
- Assigning liability in the event of financial loss, or in the event of erroneous sharing or loss of sensitive data, which is more complex with open banking, as more parties are involved
- Increase in reputational risk, even in jurisdictions where there are established liability rules
Keywords: International, Banking, Open Banking, API, Operational Risk, Governance, Fintech, Cyber Risk, BCBS
| https://www.moodysanalytics.com/regulatory-news/Nov-19-19-BCBS-Report-on-Open-Banking-and-Application-Programming-Interfaces
1. Assist in strategic analysis and strategic planning.
2. Assist in preparing long-term business plans.
3. Assist in research into pricing, competitors and factors affecting performance.
4. Control income, cash flow and expenditure.
5. Assist in carrying out business modelling and risk assessments.
6. Take responsibility for special reports and analysis involving financial data.
7. Assist the Senior Manager; often charged with responsibility for one of the functional areas, such as financial accounting or budgetary planning and control.
8. Identify loopholes and recommend risk-aversion measures and cost savings.
9. Perform monthly balance sheet, income statement and changes in financial position/budget variance analysis.
10. Investigate and report to the Manager any inconsistencies or improprieties.
11. Analyze data to ensure proper accounting procedures have been followed.
12. Assist in the development of product costing techniques and cost control measures; ensure timely and accurate labor, material and overhead reports; supervise special cost studies; and periodically review the allocation of overhead costs.
13. Assist in the preparation of financial reports, charts, tables and other exhibits as requested.
14. Provide timely, relevant and accurate reporting and analysis of the division's performance against historical, budgeted, forecasted and strategic planning results to facilitate decision-making toward achievement of the budget and strategic plan.
15. Maintain and develop various financial models and standard templates distributed for use by all of Finance during the planning processes, ensuring quality, accuracy and focused analytic review.
16. Demonstrate appropriate understanding/working knowledge of accounting principles and internal controls, and apply them.
17. Demonstrate insightful use of financial analysis techniques, tools and concepts to provide practical counsel to business area partners and management to drive business results.
18. Identify non-value-added processes within the department and seek solutions.
19. Forecast daily cash requirements and execute daily financing decisions.
20. Prepare or monitor the company's various cash flow forecasts and perform financial modelling.
21. Evaluate, develop and implement cash management systems to optimize efficiencies.
22. Participate in compensation management and semi-annual incentive compensation payments; provide financial and analytical support for the incentive compensation accounting and forecasting processes; respond to ad-hoc projects related to the incentive compensation program.
23. Monitor and report on deviations from credit standards.
24. Conduct credit checks on all customers; establish and manage limits.
25. Weekly reporting of invoicing totals/aging totals/cash receipts/invoice adjustments. | https://www.timesjobs.com/job-detail/finance-job-in-jdc-recruitment-services-bengaluru-bangalore-jobid-0c5MZhgGnTtzpSvf__PLUS__uAgZw==
Does iambic pentameter have to be 10 syllables?
It is used both in early forms of English poetry and in later forms; William Shakespeare famously used iambic pentameter in his plays and sonnets. As lines in iambic pentameter usually contain ten syllables, it is considered a form of decasyllabic verse.
How do you make an iambic pentameter?
Putting these two terms together, iambic pentameter is a line of writing that consists of ten syllables in a specific pattern of an unstressed syllable followed by a stressed syllable, or a short syllable followed by a long syllable. Five iambs/feet of unstressed and stressed syllables: simple!
Can iambic pentameter have 9 syllables?
A given line may have 9, 11 or even 12 syllables instead of 10. Not all of these lines could be called iambic pentameter (since they're not all pentameter, or five-foot, lines), but they might be variations if they vary from (but not too far from) an established iambic pentameter pattern.
Which is an example of iambic pentameter?
Iambic pentameter is one of the most commonly used meters in English poetry. For instance, in the excerpt "When I see birches bend to left and right / Across the line of straighter darker Trees…" (Birches, by Robert Frost), each line contains five feet, and each foot uses one iamb.
How do you know if a word is iambic?
A foot is an iamb if it consists of one unstressed syllable followed by a stressed syllable, so the word remark is an iamb. Penta means five, so a line of iambic pentameter consists of five iambs “ five sets of unstressed and stressed syllables.
What is an example of iambic?
An iamb is a metrical foot of poetry consisting of two syllables”an unstressed syllable followed by a stressed syllable, pronounced duh-DUH. An iamb can be made up of one word with two syllables or two different words. An example of iambic meter would be a line like this: The bird has flown away.
What words are Iambs?
A simple iamb contains two syllables, the first unstressed and the second stressed, such as in the words "equate," "destroy," and "belong." An extended iamb is a unit of three or four syllables, with an added end-syllable that is unstressed, such as in the words "revising," "surprising," and "intended."
How do you tell if a syllable is stressed?
A stressed syllable combines five features:
- It is l-o-n-g-e-r – comPUter.
- It is LOUDER – comPUTer.
- It has a change in pitch from the syllables coming before and afterwards.
- It is said more clearly – the vowel sound is purer.
- It uses larger facial movements – Look in the mirror when you say the word.
Which line is written in iambic pentameter?
An iamb consists of 2 syllables. The first syllable is unstressed and the second one is stressed. Since iambic pentameter is 5 sets of 2 syllables, the line has 10 syllables in all. The only line that has 10 syllables is "I came to see a man about a horse": /I/ /came/ /to/ /see/ /a/ /man/ /a/ /bout/ /a/ /horse/.
Who invented iambic pentameter?
Henry Howard, Earl of Surrey
What effect does iambic pentameter have?
Iambic pentameter is thought to be the sound of natural conversation and so poets will often use it to create a conversational or natural feel to the poem. It often helps the reader to be able to focus on the words in a comfortable rhythm.
Is iambic pentameter like a heartbeat?
Iambic pentameter is the name given to the rhythm that Shakespeare uses in his plays. The rhythm of iambic pentameter is like a heartbeat, with one soft beat and one strong beat repeated five times.
Why is iambic pentameter so popular?
The most common meter used in poetry is iambic pentameter (penta = five). Poets choose this meter because it gives the poem a strong underlying structure as a formal writing device. Iambic pentameter can be rhymed or unrhymed; when unrhymed, it is called blank verse.
Why did Shakespeare use iambic pentameter in Romeo and Juliet?
The majority of Shakespeare's 'Romeo and Juliet' is written in blank verse, or unrhymed iambic pentameter. Shakespeare also dispenses with iambic pentameter to underline the abrupt or crass nature of certain sections of dialogue – for example, during bawdy jokes, or when servants are conversing amongst themselves.
Is Romeo and Juliet real?
Romeo and Juliet was based on the lives of two real lovers who lived in Verona, Italy, in 1303, and who died for each other. Shakespeare is reckoned to have discovered this tragic love story in Arthur Brooke's 1562 poem entitled The Tragical History of Romeus and Juliet and rewrote it as a tragic story.
Is Romeo and Juliet written in Old English?
Romeo and Juliet by William Shakespeare is written in English. The English language is normally divided into Old English, Middle English, and Modern English, according to the following criteria: Old English or Anglo-Saxon: (ca.
Who speaks in prose in Romeo and Juliet?
Prose in Romeo and Juliet usually marks either comic speech or the speech of low-status characters. The Nurse, Peter and the Musicians usually speak in prose, because they are comic and low-status characters. Mercutio and Romeo mostly use verse, but they often use prose when they are exchanging jokes.
What is the name of the man that Juliet rejects?
Lady Capulet tells Juliet about Capulet's plan for her to marry Paris on Thursday, explaining that he wishes to make her happy. Juliet is appalled. She rejects the match, saying "I will not marry yet; and when I do, I swear / It shall be Romeo—whom you know I hate— / Rather than Paris" (3.5.121–123).
Why did Rosaline not like Romeo?
Whereas Romeo had told Benvolio that Rosaline had rejected him because she'd sworn to remain chaste forever, Friar Laurence suggests that Rosaline didn't believe Romeo's love to be authentic, saying "Oh, she knew well, / Thy love did read by rote that could not spell." In other words, she knew Romeo was only acting …
What is Queen Mab the queen of?
Mab, also called Queen Mab, in English folklore, the queen of the fairies. Mab is a mischievous but basically benevolent figure. In William Shakespeare’s Romeo and Juliet, she is referred to as the fairies’ midwife, who delivers sleeping men of their innermost wishes in the form of dreams. | https://citiesthemagazine.com/does-iambic-pentameter-have-to-be-10-syllables/ |
Iambic pentameter is a line of poetry written in alternating unstressed and stressed syllables, with a total of ten syllables to the line.
You can train yourself to hear the rhythm with a little practice. Not as hard as you thought?
The Petrarchan or Italian sonnets follow a rhyme scheme of a b b a a b b a for the first eight lines, followed by a different grouping of two or three rhyming sounds for the last six lines, with no couplet.
This should sound correct to your ear.
Because pentameter is a measure of five feet, each line of your poem needs to have five feet -- five sets of an unstressed syllable followed by a stressed syllable.
There are five iambic feet in a line of iambic pentameter.
Meter is the rhythm in a line of poetry. Iambic pentameter is often used to signal class differences in Renaissance plays and works.
It is the most melodic way to fashion a line of poetry in English. | https://hotizywikebas.usainteriordesigners.com/how-to-write-a-poem-in-iambic-pentameter-example267112091ag.html |
The problem, according to Euler: "Let x, y and z be the three numbers being sought, of which the largest is x and the smallest z, and let x = pp+qq and y = 2pq, so that x+y = (p+q)^2 and x–y = (p–q)^2. In the same way, setting x = rr+ss and z = 2rs, then x+z = (r+s)^2 and x–z = (r–s)^2. In addition to these four conditions being satisfied, it must be that rr+ss = pp+qq. Then, two additional conditions must be added, that y+z = 2pq+2rs and y–z = 2pq–2rs must both be squares." Euler gets x=50, y=50, z=14, then x=733025, y=488000, z=418304. Then, characteristically, he proposes a slightly different problem (section 16) and solves it by the same means.
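A quick computational check (ours, not Euler's) confirms that both reported triples satisfy all six conditions, that is, the sum and difference of every pair of the three numbers is a perfect square:

```python
# Verify Euler's two solutions: for each pair of numbers in a triple,
# both the sum and the difference must be perfect squares.
from itertools import combinations
from math import isqrt

def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n

for triple in [(50, 50, 14), (733025, 488000, 418304)]:
    ok = all(is_square(a + b) and is_square(a - b)
             for a, b in combinations(triple, 2))
    print(triple, "->", ok)   # both print True
```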
Original Source Citation
Mémoires de l'académie des sciences de St.-Petersbourg, Volume 6, pp. 54-65.
Opera Omnia Citation
Series 1, Volume 5, pp.20-27. | https://scholarlycommons.pacific.edu/euler-works/753/ |
Rearrange a string in sorted order followed by the integer sum
Given a string containing uppercase alphabets and integer digits (from 0 to 9), the task is to print the alphabets in sorted order, followed by the sum of the digits.
Examples:
Input: AC2BEW3
Output: ABCEW5
Alphabets in lexicographic order, followed by the sum of the integers (2 and 3).
Implementation:
1. Start traversing the given string.
   a) If an alphabet comes, increment its occurrence count in a hash table.
   b) If a digit comes, add it to a running sum.
2. Using the hash table, first append all the characters in sorted order to a string; then, at the end, append the digit sum.
3. Return the resultant string.
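The code listings did not survive in this copy of the article (the original offered C++, Java, C# and PHP tabs). The sketch below reimplements the O(n) approach described above in Python; the driver input is a hypothetical choice that reproduces the output shown below.

```python
# Sketch of the approach described above: count letter occurrences,
# sum the digits, then rebuild the string in sorted order.
def arrange_string(s):
    char_count = [0] * 26   # occurrence counts for 'A'..'Z'
    digit_sum = 0

    for ch in s:
        if ch.isupper():
            char_count[ord(ch) - ord('A')] += 1
        elif ch.isdigit():
            digit_sum += int(ch)

    # Letters in lexicographic order, then the digit sum at the end.
    result = ''.join(chr(ord('A') + i) * char_count[i] for i in range(26))
    return result + str(digit_sum)

# Hypothetical driver input chosen to reproduce the output shown below.
print(arrange_string("ACCBA2D1EW3"))   # AABCCDEW6
```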
Output:
AABCCDEW6
Time Complexity: O(n)
Reference: https://www.careercup.com/question?id=13382661
This article is contributed by Sahil Chhabra. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.
Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
| https://www.geeksforgeeks.org/rearrange-a-string-in-sorted-order-followed-by-the-integer-sum/
According to the data of the analytical agency AUTOSTAT, as of July 1st, 2019, there were 4.6 thousand electric cars in our country.
According to the assessment of the analytical agency AUTOSTAT, in the first 8 months of 2020, only 5% of used cars were sold using the classic car loan scheme.
This low figure is due to two factors. First, the overwhelming majority of transactions in the secondary market are made directly between individuals, or with the participation of "outbids". The share of dealer sales in the used car market is still not high. Secondly, it is easier and often cheaper for buyers of cars on the secondary market to get a consumer loan than a classic car loan with a transfer of money to a dealer and a car pledge.
According to the executive director of the agency AUTOSTAT, Sergei Udalov, growth in car loans for used cars has clearly been seen lately. In August, against the background of a decrease in the number of loans for new cars (-6%), lending for used cars grew by 15%. Considering the low share of such loans in the total volume of this market (3.3 million used cars were sold in 8 months), the potential for banks and dealers is extremely high. | https://eng.autostat.ru/news/19013/
Q:
Verify my thought process on permutations
There are $15$ distinguishable objects, all of which will be placed into $2$ boxes. There needs to be at least one object in each box. How many ways can you place these objects into the $2$ boxes?
Tried solution
I learned about stars and bars; however, this method does not work here because the objects in question are distinguishable. I'm thinking of it this way:
For each object, there's a choice: it will be placed in box $1$ or box $2$. With two choices for each of $n$ objects, there are $2^n$ (permutations?). There needs to be at least $1$ object in each box. However, there are two cases where there's not at least $1$ object in each box, so the count amounts to:
$$2^{15} - 2 = 32768 - 2 = 32766$$
I'm not sure if my solution is right, can anyone verify? Maybe direct me to where I can learn more about the common types of combination and permutation problems?
A:
You are looking for Stirling numbers of the second kind. Specifically, the number of ways to partition $n$ distinct objects into $k$ identical, non-empty groups is given by $S(n,k)$, where $S(n, k)$ are the Stirling numbers of the second kind.
You are right that the stars and bars approach won't work, because the objects are now distinct. However, it is crucial to take into consideration whether the groups (boxes) are distinct or identical. If the boxes are identical, then there are $S(n, k)$ ways; if they are distinct, there are $S(n,k)k!$ ways.
Applying specifically to the context you provided: There are indeed $S(15, 2)2! = 2^{15} - 2$ ways to distribute $15$ distinct objects into $2$ distinct, non-empty boxes. In fact, one can easily generalize this to the result: $S(n, 2) = 2^{n-1} - 1$ for all $n \in \mathbb{Z^+}, n \ge 2$.
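For readers who want to check this numerically, here is a short sketch computing $S(n,k)$ from the standard recurrence $S(n,k) = k\,S(n-1,k) + S(n-1,k-1)$ and confirming the counts above:

```python
# Stirling numbers of the second kind via the standard recurrence,
# then the distinct-boxes count for the question above.
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n, k):
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print(stirling2(15, 2))                  # 16383: identical boxes
print(stirling2(15, 2) * factorial(2))   # 32766: distinct boxes, = 2**15 - 2
```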
It's entirely subjective. But that does not mean it's invaluable. If you are looking for signs from God and open to what happens, you just might be a better listener, observer and attentive companion, even if your interpretation is wrong.
As there is no god, all that leaves is chance. I find money all the time; too bad it is only a penny here and a nickel there. All that means is someone is too lazy to pick it up, or values a nickel or penny less than the effort of bending down and picking it up.
I mean besides subjectivity or assigning ONLY the good or positive occurrences and events to God....the bad or negative occurrences to "Nature" "Free Will" or even "Adam's Fall"?
For instance, I find a $20 on the sidewalk (with nobody around for blocks)....is that random chance or God's will?
I come down with the flu right before a big job interview and miss it.....is that random chance or God's will?
A massive tornado misses the town of Podunkville, Kansas and only tears up some empty pasture land.....is that random chance or God's will?
I'm bored on a rainy Sunday afternoon and TCM just happens to show one of my favorite movies.....is that random chance or God's will?
A random event is merely an event that has no discernible pattern, hence unpredictable. However, over a sufficiently large range of events, the frequency of an individual event follows a probability field. What might seem like an individual 'random' occurrence to you is actually just part of a larger, discernible pattern.
When people talk about divine will, it usually pertains to a much larger, overall design--your individual event merely a very small part of that design.
It's entirely subjective. But that does not mean it's invaluable. If you are looking for signs from God and open to what happens, you just might be a better listener, observer and attentive companion, even if your interpretation is wrong.
Metaphysical balderdash. Nature, and our experiences in it, work as if there is no god or gods whatsoever.
rose bush with no leaves and only one flower
My once-lovely rose bush now has only one flower, and the leaves fell off. Why?
Certified GKH Gardening Expert
Usually when a rosebush drops its leaves it is some form of stress or shock. The stress can be anything from strings of very hot, intense-sun days to not enough water, insect or fungus attacks, etc. Here is a link to an article I wrote on this subject for you: https://www.gardeningknowhow.com/ornamental/flowers/roses/leaves-falling-off-roses.htm
Keep the bush watered well and the foliage should return once the temps cool or the stress is otherwise relieved. I recommend watering the rosebush with some water that has both a good root stimulator and a product called Super Thrive mixed into it. Water the rosebush with a freshly mixed batch the next 4 to 5 times she needs watering; this will help relieve the stresses. Only spray with an insecticide or fungicide if you are sure there is a problem with either one that is causing the trouble.
| https://questions.gardeningknowhow.com/rose-bush-with-no-leaves-and-only-one-flower/
There are many things that make us nostalgic for the old days, even if they require more effort: handwritten letters, homemade bread, mixtapes. But in the world of manufacturing, traditional processes can represent increased cost, decreased flexibility and slowed production schedules.
While many industries have adapted 3-D printing to help facilitate speedier, more economical production, particularly for small-batch products, it too has its limits.
“When it comes to complex geometries, for instance multiple curvatures or difficult overhangs, it can be very challenging for a traditional 3-D printer to print those parts well, no matter how much it tries,” said Yeo Jung Yoon, a USC Viterbi Ph.D. candidate in the USC Viterbi School of Engineering Department of Aerospace and Mechanical Engineering.
Yoon and a team of researchers at the USC Viterbi Center for Advanced Manufacturing (CAM), including Smith International Professor of Mechanical Engineering and Computer Science and CAM Director Satyandra Gupta, have spent the last year overcoming this problem—with robots. The team developed a robotic 3-D printing system that offers an alternative to the traditional additive manufacturing process, improving the surface quality, reducing build time, and enhancing the mechanical properties of the final product. Essentially, an extruder—a tool that helps create objects with a fixed cross-sectional profile—is placed on the end of the robotic arm, transforming it into a 3-D printer.
Coupled with an algorithm that plans out the path the robot will take to print a specific part, the robotic system achieved faster printing than a traditional 3-D printing option. The final product is also mechanically stronger, including supporting a higher load and demonstrating greater stiffness than the traditionally 3-D printed piece.
“Because robots can orient the deposition head—a tool that lays down materials to build up parts—in any direction, deposit material on curved surfaces and change building directions while printing, they can tackle these complex shapes while improving on print quality,” Yoon said.
The research was published in the Journal of Computing and Information Science in Engineering and presented at the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference and at the International Conference on Robotics and Automation.
Building a 3-D Printing Robot
Additive manufacturing technologies, which create three-dimensional objects by adding materials layer by layer, have been widely used to reduce build costs and times. However, particularly with more complex designs, such as those with curvatures, traditional 3-D printing technologies can create many challenges, Yoon said.
Typically, a technique called material extrusion is used to construct 3-D parts; the material is fed through the extruder, where it is heated and then deposited on the build platform, layer by layer, Yoon said. In a traditional additive manufacturing set up, the 3-D printer is restricted to x and y axis movements — meaning up and down or left and right.
The researchers used six-degree-of-freedom robotic arms, which allow freedom of movement in both rotation and positioning (compared with three degrees of freedom, which allow only positioning freedom). To convert these arms into printers, the researchers made custom extruders—a three-nozzle extrusion system, which deposits the materials—to attach to the ends of the arms.
With this setup, the researchers found they were able to orient material fibers in desired directions. Compared to a traditional 3-D printer, Yoon said the robotic printer produced parts with a good surface finish, fewer layers and enhanced mechanical properties (e.g., high stiffness, high peak loads).
The researchers demonstrated numerous ways to enhance the robotic set up to improve upon different printing technologies. For example, using a three-nozzle extrusion system, where one extruder prints support materials and the other two print structural material with different resolutions, build time is significantly reduced and surface finish is significantly improved, Yoon said.
Path Planning Algorithms
To successfully print using the robotic 3-D printer, path planning—or mapping of the exact route the robot will take to build the prototype—is essential. In its absence, given the numerous angles and degrees of freedom to the robotic arm, collisions and disruptions are more likely.
Once successfully implemented, path planning can help save material, improve print speeds and reduce energy use. To ensure such optimization, the researchers generated sequences of positions and tool angles to optimize the robot’s path or approach to printing. For instance, while printing on a concave surface, there is the risk that an extruder tip will hit the substrate or already-printed structure if it’s left in the usual default position, which is perpendicular to the surface. Thus, in path planning, different angles might be considered. Printing layers with varying fiber orientation has additional benefits: this process makes the output smoother and stronger, Yoon said.
Automating Industries
Aerospace and automotive companies benefit from additive manufacturing from both a cost and productivity standpoint. Said Yoon: “Companies used to need a lot of parts for a product, but now, with 3-D printers, a lot fewer parts are needed to build the same product.” Ultimately, these companies want to automate manufacturing, and robots are a great way to implement that automation. In addition, when the robots are not used for 3-D printing, they can be used for other tasks such as finishing, assembly and part handling, Yoon said.
Next up, Yoon and team members, including USC Viterbi mechanical engineering master’s students Oswin Almeida, Ashish Kulkarni and Aniruddha Shembekar and CAM Manager Alec Kanyuck, want to look at making this system smarter. They want to use AI to improve accuracy and efficiency. “When you print a complex part, there might be a gap between the filaments or oppositely, material accumulation in one spot. This is a common issue in both traditional and robotic 3-D printers,” Yoon said. “We want to print defect-free parts, first by installing a camera and taking a picture of printed samples, as well as recording when these defects are created. That way we can observe how different variables impact the process.” The data gathered from the camera will help train an algorithm to prevent conditions that create the defects, helping to print better parts.
This work is funded in part by the Viterbi Graduate School Merit Fellowship for Yeo Jung Yoon. | https://viterbischool.usc.edu/news/2020/03/too-complex-to-print-call-a-robot/ |
This walkthrough uses an ML algorithm called an image classification model. These models learn to distinguish between different objects by observing many examples over many iterations. This post uses a technique called transfer learning to dramatically reduce the time and data required to train an image classification model. For more information about transfer learning with Amazon SageMaker built-in algorithms, see How Image Classification Works. With transfer learning, you only need a few hundred images of each type of trash. As you add more training samples and vary the viewing angle and lighting for each type of trash, the model takes longer to train but improves its accuracy during inference, when you ask the model to classify trash items it has never seen before.
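As a rough illustration of the transfer-learning idea, here is a minimal sketch using torchvision rather than the SageMaker built-in algorithm; the model choice, class count, and learning rate are illustrative assumptions only.

import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet and freeze its backbone.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new 3-way head (landfill, recycling, compost).
model.fc = nn.Linear(model.fc.in_features, 3)

# Only the new head is trained, which is why a few hundred images per class
# can be enough to get a usable classifier.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)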
Before going out and collecting images yourself, consider using the many sources for images that are publicly available. We want images that have clear labels (often applied by humans) describing what’s inside the image. Several public datasets and image repositories can serve this purpose, depending on your use case.
A good practice for collecting images is to use pictures at different possible angles and lighting conditions to make the model more robust. The following image is an example of the type of image the model classifies into landfill, recycling, or compost.
When you have your images for each type of trash, separate the images into folders.
|-images
 |-Compost
 |-Landfill
 |-Recycle
After you have the images you want to train your ML model on, upload them to Amazon S3. First, create an S3 bucket. For AWS DeepLens projects, the S3 bucket names must start with the prefix deeplens-.
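A hedged sketch of these two steps with boto3 is shown below; the bucket name deeplens-trash-sorter is an example only, and in regions other than us-east-1 create_bucket also needs a CreateBucketConfiguration with a LocationConstraint.

import os
import boto3

s3 = boto3.client("s3")
bucket = "deeplens-trash-sorter"  # example name; must start with "deeplens-"
s3.create_bucket(Bucket=bucket)

# Upload the sorted image folders, mirroring the local layout in the object keys.
for category in ["Compost", "Landfill", "Recycle"]:
    folder = os.path.join("images", category)
    for filename in os.listdir(folder):
        s3.upload_file(os.path.join(folder, filename), bucket,
                       f"images/{category}/{filename}")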
This recipe provides a dataset of images labeled under the categories of recycling, landfill, and compost. | https://www.awsdeeplens.recipes/400_advanced/410_trash_sorter/412_collect_training_data/ |
- as purchased (AP) weight
- The gross weight.
- average cover
- The average amount spent by a customer in a meal period or month.
- baker’s percentage
- A formula that states the ingredients in relation to the amount of flour.
- breakeven point
- The point at which cost and revenue are equal.
- capital
- Physical assets or money used in the production of goods and services.
- closing inventory
- The amount of product on hand at the end of the inventory period.
- contribution margin
- Portion of sales that can be applied against fixed costs; gross sales minus variable costs.
- count
- 1. Number of items in stock
2. Number of items in a case or to the pound or kilogram
- directs
- Products purchased and used as soon as they arrive or on the same day.
- edible product (EP) weight
- The amount of usable product after cleaning or portioning.
- extending
- Calculating the total value of goods on hand after taking a physical inventory.
- FIFO
- First in, first out; a system of managing inventory so that the product received first gets used first.
- fixed cost
- Costs which do not change based on the volume of business.
- food cost
- The direct cost of food.
- inventory
- Total goods in stock at any one time.
- invoice
- A document indicating the amount owed for goods or services.
- labour cost
- The cost of labour required for a fixed period of time; usually reflected as a percentage of sales.
- menu engineering
- 1. To maximize profitability by encouraging customers to buy what you want them to buy
2. Structuring of a menu to balance low- and high-profit items to achieve overall target food costs and profit
- opening inventory
- The amount of product on hand at the start of the inventory period.
- overhead
- The ongoing expenses required to operate a business that are not direct costs of producing goods or services.
- par stock
- Maximum amount of an item that should be in stock at any one time.
- perpetual inventory
- A system of tracking product as it is received and used, thereby keeping a running total of items on hand.
- physical inventory
- A physical inventory requires that all items in storage be counted periodically.
- point-of-sale (POS) system
- A computerized system that coordinates customer purchases, sales, and costs through various linked terminals in a business.
- portion cost
- The cost of a single portion.
- productivity
- A measure of the amount of work done in a fixed period.
- profit
- Any revenue left over after all costs have been covered.
- profitability
- The amount of profit a business generates compared to sales, usually reflected in a percentage.
- purchase order
- A document indicating the approval of a quantity of goods ordered from a supplier.
- ratio
- The proportion between two amounts, usually with one item being referred to as 1.
- receiver
- The individual responsible for accepting and checking deliveries.
- rotate
- To rearrange inventory so that the oldest product is placed in front of newly acquired product.
- sales
- Total revenue received for goods or services in a fixed period.
- specific gravity
- The density of a substance (mass for a given volume), when compared against a reference substance, such as water.
- specifications
- Purchase criteria such as size, grade, packaging, market form.
- standardized recipe
- Consistent, tested recipe that is used by everyone in the kitchen to prepare the same product.
- stores
- Goods taken from the storage area and used.
- turnover
- 1. Number of times in a period that inventory is turned into revenue.
2. Number of times in a day that a seat is filled.
- volume
- 1. Quantity of product or business.
2. A type of measurement that measures the space taken up by a substance.
- yield
- Amount of usable product.
- yield test
- A test to determine the net or edible product (EP) weight from the gross or as purchased (AP) weight. | https://opentextbc.ca/basickitchenandfoodservicemanagement/back-matter/key-terms/ |
Lijiang is around 200 kilometers from Dali. Driving a car will take 2 to 3 hours. Taking a regular train will take 2 hours and 30 minutes. There are also buses available for this route.
By Plane
There is no flight available between Dali and Lijiang, probably because it takes only a few hours to reach Lijiang from Dali by train.
By Bus
Long-distance buses take 3 to 4 hours from Dali to Lijiang. The fare is 50 to 80 CNY, which is almost the same as the train. The main advantage of taking the bus is that the terminal is within walking distance of Lijiang Old Town.
By Train
The train takes about 3 hours from Dali to Lijiang. We chose the soft seat, since the journey is only a few hours. We spent just 49 CNY, had a smooth trip, and were able to nap for a while.
Hiring a Taxi or Private Car
We tried getting a car to Lijiang, but prices were 300 to 350 CNY. A car can drop you straight at Lijiang Old Town, but the taxis are small and not comfortable.
Hiring a private car ranges from 300 to 400 CNY.
Research suggests that people with more symmetric faces are perceived as more attractive, have better-developed immune systems, and are more resistant to upper respiratory tract infections. What are the reasons for and the implications of this relationship? Dr hab. Urszula Marcinkowska Trimboli from the JU MC Institute of Public Health will shed some light on this issue.
The perception of human faces plays an important role in many areas of our lives. Persons who look like ourselves seem more trustworthy than strangers (DeBruine et al. 2005), femininity or masculinity of facial features has an impact on their perceived attractiveness (Little et al. 2011), and so does the similarity of a given face to the faces of our parents (Marcinkowska et al. 2012) or siblings (Marcinkowska et al. 2013). What’s more, the perceived attractiveness of other people’s faces differs depending on the beholder’s living conditions, (Marcinkowska et al. 2014, Marcinkowska et al. 2019), age (Marcinkowska et al. 2017), and the level of sex hormones (Marcinkowska et al. 2018).
One of the features that impact the perception of faces is symmetry, and, more precisely, lack of what is known as fluctuating asymmetry, which is defined as the random deviation from perfect symmetry in bilateral physical traits, which does not display any directional tendency (Vanvalen 1962). For several dozen years, researchers have been searching for the causes of fluctuating asymmetry and trying to understand its association with the perceived attractiveness and other personal traits (Grammer and Thornhill 1994).
It is thought that the level of facial asymmetry can serve as an indicator showing the body’s capability of bilaterally symmetrical development in the face of adverse environmental factors (such as energy deficiencies or pathogens, Parsons 1990). This means that facial symmetry can reflect developmental stability (also prenatal) and indicate the so-called inherited genetic quality (Thornhill and Moller 1997), which is broadly understood as the individual’s chances of passing on genes to future generations. In other words, this would mean that we are attracted to symmetrical faces, as they are a sign of good general health (including reproductive health) of a given person.
This line of reasoning links facial symmetry both to the real (Van Dongen and Gangestad 2011) and the perceived (Jones 2018) state of health. It has also turned out that oxidative stress (a marker of the body’s aging speed) is related to facial symmetry. The studies recently conducted at the Institute of Public Health of the Jagiellonian University Medical College have shown that the faces of women with high oxidative stress levels (whose bodies aged more quickly) were perceived as less symmetrical by randomly selected judges and were indeed less symmetrical (Marcinkowska et al. 2020).
Yet, not all research suggests such a simple relationship between symmetry on the one hand and health and attractiveness on the other. A recently published study based on a large population sample has not confirmed the association between facial symmetry and a number of real indicators of health condition (Foo et al. 2017). It is thus unclear which specific aspects of health are linked to facial symmetry, and at which point of an individual’s development (from conception to death) this relationship can be strengthened or weakened.
I recently asked about criticism regarding short pieces of fiction. What about longer works like novels? Surely no one can sit down and agonize over individual word choices in a larger work the way they can over something as short as three to four pages. So how do you approach longer works? What else do you do differently? What are your main concerns when reading over it to give feedback? Is size alone the only difference in such approaches?
While @wetcircuit and @Liquid make important points, I'll once more try a step by step list.
There are two options when analysing a novel: either it's an academic assignment, or it isn't. If it's the former (and the mark really matters to you), read the whole thing once, then read it again and take your sweet time going through all the details. And then re-read it a third time if need be. If it's the latter, you once more have two options: either your focus is having fun with the read, or your focus includes being able to give accurate feedback or a deep analysis/review. I'm going to assume the latter is the case.
1. Prepare yourself
Get a notebook to jot down your impressions as you read. Make a list of what you have to consider: narrator, characters, time and place, action, description, symbols, style.
2. Theme
Sometimes, one knows the main theme when going in (most love stories and spy-action stories will have the same basic one). If this is the case, you can add theme to the list of things you're keeping track of.
If you don't know the theme, do not try to identify it from the get go. Let the story develop and reveal it to you stress-free.
3. Chapters
Read one chapter at a time, and stop for a reflection at the end of each.
3.1 narrator
Identify the type of narrator and see how much impact it has on the tone of the story.
If it's a third-person narrator, check if the POV is focused on one or more characters. See how that POV shifts and how it affects the read.

Was the best POV chosen, or would another character have made more sense? If there are multiple POVs, is there a confusing head-hopping approach or is there a logic to it? And does it work?
3.2 characters and action (including time and space)
Who did what, basically.
Look at how the characters are presented and described. Look at their importance both for the narrator and for the action. Start wondering about their motives and their character (do their actions match their ideas/ideals?).
As you advance in the book, map out the action. Identify the main plot and the secondary plots and see how they are weaved together. Pay attention to time jumps (to the past and to the future), how the events relate to one another (think in terms of what causes what), and how they interact. See how space is dealt with: are there locations in the plot that are negative or positive? How are they described? Get a feel for the rhythm: is it slow or fast paced? Are there slow moments that help to up the tension, or is it the speed of events that creates it? How do the events affect the characters and make them grow (or not)? Do the characters' motives change?
If you want to be fancy, make an actual map where you jot down which plots are dealt with in which chapters and then show off cool graphics proving that the plots are balanced throughout the book.... or not. You'll be accused of nitpicking, but! Everybody loves cool graphics.
3.3 Description and Dialogue
When there are descriptions, pay attention to them. Is there something meaningful about one of them? Are there colours or moods that are insisted on? Are they too long or too short? Do they use metaphors, images, etc? How are adjectives used?
Analyse dialogues carefully (but not word by word). How do words match the thoughts of the characters? Are the dialogues solely to advance the plot, solely to characterise the characters, or a mix of both? Is there balance between dialogues and narration (e.g., unbalanced walls of dialogue)? If the dialogues are long, is there a balance of direct, indirect and free indirect speech to avoid monotony?
3.4 themes and symbols
Allow the theme and symbols to manifest themselves at their own rhythm.
To be honest, you can do the above with little to no writing (if you have a good memory). You can also analyse some dialogues and skip others. You can ignore every description save one that caught your fancy. However, if there's a chapter that feels particularly good - or bad - I strongly advise you to re-read it to identify what caused that impression.
If the objective is to give some feedback, a relaxed approach may be better. If the objective is as much feedback as possible, treat each chapter as a short story and then see how the chapters build the greater picture.
Surely no one can sit down and agonize over individual word choices in a larger work the way they can over something as short as three to four pages.
You can agonise about anything, but that is usually the author's privilege in a novel. Seriously, there are moments when it's right to agonise over word choice. If you're going through a dialogue, keep an eye out for 'said / asked / shrugged / ...'. If it's a description of an important character / landscape / dress, then check if the words convey the right feeling. Again, how thorough you are depends on what your aim is.
When I beta for someone, I look at every detail because that is what I feel my job entails. I often look at the use of parallel and contrasting structures / events within the book, individual chapters, paragraphs or sentences. I jot down important characters' physical and psychological descriptions to make sure they're consistent througout the work and, yes, I do make cool graphics. I once used a graphic to prove a secondary character deserved its own arc because it nearly had more appearances than the MC.
Of course the author has to make clear if they want their work nitpicked or not. Some people welcome a 'why did you have the character look at the horizon with beady eyes', while some are only interested in having plotholes pointed out.
I tend to judge stories on the success of their theme.
Characters create relatable and enviable anchors to follow through a story, while Plot is the sequence of story beats that are arranged to invoke and subvert reader expectations. Characters are usually a relatable hook, and plot is constructed to entertain, but the theme is a larger overall effect of the story that is not endlessly variable just by shifting around the details, it's a broader immutable structure that emerges from the synergy of the story components – hard to define exactly what is "theme" but most can recognize it when they see it.
Short stories do not have room to fully explore a theme. They often work as vignettes or a tableau – a slice of the theme which does not evolve, but just illustrates the concept. There may be room for a single twist that proves or subverts the theme, raising more questions than it answers. That twist typically comes at the end of the short story like a punctuation mark on the theme.
In a novel, I expect the theme to be explored in multiple ways. Several characters may represent various aspects of the theme, or the theme effects each differently. A novel may be able to fully abstract the theme, or transition to an unexpected theme, creating an analogy that feeds back onto the story. Themes of failed morals can be followed by a theme of redemption, themes of excess can be followed by themes about consequences.
There is also more time for a main character to repeat a theme, to show it is a lesson they need or cannot learn. The theme may be a hidden structure of the story that supersedes character and plot. The full exploration of the theme is what makes the novel feel finished. We know the story is over when the theme has played out or has looped back to the beginning.
Is size alone the only difference in such approaches?
My answer:
It's not about size, it's about a sense of scale.
Short pieces are like snacks; easily eaten and digested. A bad one will leave a bad aftertaste in your mouth; a good one will leave you wanting for more, or, if it's really good, make you wonder at the writer's ability that condensed so many ingredients in such a small thing.
Novels, though, are a different thing. Unless you are an editor, you won't read a novel a day, and after all they are not supposed to be consumed on the spot. Hence, novels have a larger scale; they give you more time to breathe, and they need more time to breathe, also.
In a short story the author must set things straight in a short time. Characters must be explained, stakes must be clarified, something resembling an arc must rise and fall in a given set of words.
In a novel you have whole chapters to explore the very same concepts and - possibly - much more.
Surely no one can sit down and agonize over individual word choices in a larger work the way they can over something as short as three to four pages.
To be honest, I wouldn't fret over individual word choices in any case. But it makes kind of sense for a short story, since your words are limited. Word-choice sets the tone; if your upper limit is 1000 words, you've got to be careful with them (consider poetry an even more extreme example).
But then again, being so sharply focused on word choice in a novel doesn't make sense and is dangerously close to nitpicking. Not because novelists don't need to worry about style or lexicon (they do, as all writers) but because there is a bigger picture to look at.
A single page of a novel, or a single chapter, may be faulty or badly written. But setting aside particular cases (e.g., you don't want your prologue to be that chapter) the overall novel can still be good. A bigger, more articulate structure will tolerate some faults. In other words, a novel is more than the sum of its chapters, and more than the sum of each individual scene.
So, when inspecting a novel, I'd keep an eye on the writing, of course, but I'd be more lenient. Things like character arcs, subplots, worldbuilding, branching narratives, theme and so on can all take advantage of a novel's longer span.
1.
เซ็นทรัลพลาซา ขอนแก่น
99,99/1 Mittraphap-Srichan road Intersection, Naimuang, Muang District
0.07 Miles Away
6.38354
Landmark
Shopping Mall
2.
CentralPlaza Khon Kaen
ถนน มิตรภาพ
0.07 Miles Away
6.21477
Shopping Mall
Landmark
Outdoors
3.
Der La Jazz : Restaurant & Live Music : Khon Kaen
The Houze Condominium 123 Srichan Road
0.24 Miles Away
5.06927
Bar
Landmark
4.
Khon Kaen Wittayayon School
Khon Kaen
0.47 Miles Away
4.07001
Landmark
High School
Public School
5.
ร้านผัดไทยโคราช - เจ้าเก่า
1/4 ถ.สถิตยุติธรรม ต.ในเมือง อ.เมือง
0.57 Miles Away
2.77657
Landmark
6.
วัดหนองแวงพระอารามหลวง
จังหวัดขอนแก่น
0.59 Miles Away
4.85076
Religious organization
Landmark
Religious Organization
7.
P.R. CAR PARK
520 Namuang Rd.
0.65 Miles Away
3.18051
Landmark
Parking
8.
อำเภอเมืองขอนแก่น
Khonkean
0.75 Miles Away
4.34721
State/province/region
Landmark
County
Cafe
9.
Mueang Khon Kaen District
Amphoe Muang Khon Kaen
0.75 Miles Away
2.61193
State/province/region
Landmark
City
Restaurant
10.
N.E.T. เทคโนภาคตะวันออกเฉียงเหนือ
Khon Kaen
0.90 Miles Away
4.33959
Landmark
Results
1 - 10
<<
<
1
2
>
>>
EXPLORE NEARBY CENTRAL PLAZA KONKAEN | เซ็นทรัลพลาซ่า ขอนแก่น
Residence
Restaurant
Restaurant/cafe
Buddhist Temple
Shopping/retail
Coffee Shop
Community & Government
Government Organization
School
Deli
Professional Service
Elementary School
Hotel
Food & Restaurant
Apartment & Condo Building
Travel & Transportation
Dorm
Community Center
Cafe
Automotive
Workplace & Office
Outdoors
Education
Resort
Corporate Office
County
Sports & Recreation
Food/grocery
Place To Eat/Drink
Hospital/Clinic
Religious Center
Thai Restaurant
Tours & Sightseeing
Region
Tourist Information
Lodge
Mountain
Middle School
Spas/Beauty/Personal Care
Arts & Entertainment
Business Service
Health/beauty
Bar
Repair Service
Grocery Store
Clinic
Park
Buffet Restaurant
Other
Dessert Place
Food/beverages
Beauty Salon
Lodging
Gym
Tours/sightseeing
Convenience Store
Beach
Fairground
Medical & Health
College & University
Wholesale & Supply Store
Bar & Grill
Cosmetics & Beauty Supply
Farm
Bakery
Restaurant Wholesale
Military Base
Seafood Restaurant
Landmark
Local Education
City
Shopping District
Sports/recreation/activities
Pet Service
Manufacturing
Real Estate
Eco Tours
Social Services
Car Wash & Detailing
Steakhouse
Market
Automotive Parts & Accessories
Pub
Ice Cream Parlor
Shopping Mall
Church/religious Organization
Police Station
Convent & Monastery
Fine Dining Restaurant
Nursing
Junior High School
Department Store
Mobile Phone Shop
Arts/entertainment/nightlife
Fast Food Restaurant
Home Improvement
Family Style Restaurant
Late Night Restaurant
Community/government
TH.LOCALE.ONLINE
Welcome to Thailand's most updated platform to explore Places. We guides you through 2530161 Places with real-time reviews and ratings. | https://th.locale.online/a/central-plaza-konkaen----137715050/t/landmark/ |
What is NetworkX?
NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. It has powerful data structures for graphs, digraphs, multigraphs, and so on. You may refer to the NetworkX reference documentation to learn more.
Many systems of scientific and societal interest consist of a large number of interacting components. The structure of these systems can be represented as networks where network nodes represent the components, and network edges, the interactions between the components. Network refers to the real world object, such as a road network, whereas a graph refers to its abstract mathematical representation. Graphs consist of nodes, also called vertices, and links, also called edges. Mathematically, a graph is a collection of vertices and edges where each edge corresponds to a pair of vertices. When we visualize graphs, we typically draw vertices as circles and edges as lines connecting the circles.
Get started with NetworkX
import networkx as nx
import matplotlib.pyplot as plt

# Creating an empty graph
G = nx.Graph()

# adding single node in graph
G.add_node("A")

# to add multiple nodes in graph
G.add_nodes_from(["B", "C", "D", "E", "F"])

# to display the nodes
G.nodes()
NodeView(('A', 'B', 'C', 'D', 'E', 'F'))
# to add edge between nodes
G.add_edge("A","B")

# to add multiple edges at a time
G.add_edges_from([("A","C"), ("A","D"), ("A","E"), ("E","F")])

# to view the edges
G.edges()
EdgeView([('A', 'B'), ('A', 'C'), ('A', 'D'), ('A', 'E'), ('E', 'F')])
# to know the number of edges
print(G.number_of_edges())

# to know the number of nodes
print(G.number_of_nodes())

5
6
# We have created a graph with 6 nodes and 5 edges.
# Now let's view our graph.
nx.draw(G, with_labels=True, node_color="red", edge_color="blue")
# to remove node from graph
G.remove_node("F")

# to remove edge from graph
G.remove_edge("A","E")
# We can now visualize our graph after removing
# node F and edge A to E
nx.draw(G, with_labels=True, node_color="red", edge_color="blue")
Random Graph
The simplest possible random graph model is the so-called Erdos-Renyi (ER) graph model. This family of random graphs has two parameters, capital N and lowercase p. Here, capital N is the number of nodes in the graph, and p is the probability for any pair of nodes to be connected by an edge. Here's one way to think about it: imagine starting with N nodes and no edges. You can then go through every possible pair of nodes and, with probability p, insert an edge between them. In other words, you consider each pair of nodes once, independently of any other pair; you flip a coin to see if they're connected, and then you move on to the next pair. If the value of p is very small, typical graphs generated from the model tend to be sparse, meaning they have few edges. In contrast, if the value of p is large, typical graphs tend to be densely connected. Here we will implement our own ER model graph in order to learn how it works and be able to create more complex graphs on our own. We will use the scipy bernoulli function for the probability p.
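Below is a minimal sketch of such a generator, assuming networkx and scipy are installed; the function name er_graph is our own choice, not part of either library.

from scipy.stats import bernoulli
import networkx as nx

def er_graph(N, p):
    """Return an ER random graph on N nodes with edge probability p."""
    G = nx.Graph()
    G.add_nodes_from(range(N))
    # consider each pair of nodes exactly once
    for node1 in G.nodes():
        for node2 in G.nodes():
            if node1 < node2 and bernoulli.rvs(p=p) == 1:
                G.add_edge(node1, node2)
    return G

# Example: draw one sample from the model
nx.draw(er_graph(50, 0.08), node_size=40)

For large graphs this double loop is slow; NetworkX also ships a built-in generator, nx.erdos_renyi_graph(N, p), that does the same job. | https://cslesson.org/NetworkX/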
The simple answer is to check the bottom of the pizza. If the dough is still raw, the oven isn't getting hot enough to fully cook it, or it needs more time. Also watch out for pizza recipes that make the dough soggy.
Can you eat undercooked pizza dough?
after taking it out and eating some slices some parts of the pizza smelled like dough and tasted doughy. Shouldn’t be harmful. All of those ingredients can be eaten raw, so undercooked won’t hurt you. You may just get indigestion, not food poisoning.
How do I know if my pizza is cooked?
You want to eat it right away. And if you peek in the oven, and it looks done, you might be tempted to do just that. But keep in mind, a pizza generally needs about 3-4 minutes more in the oven after it looks “done.” You want to be sure that the dough underneath your toppings is completely cooked through.
Why is my pizza dough raw in the middle?
The most common reasons for undercooked pizza dough are too low a baking temperature, too short a baking time, too thick a crust, and too many toppings.
How Do You Know When dough is ready?
When we make yeasted breads such as Challah, we press the dough gently with our knuckle or finger to determine if it is properly proofed and ready for baking. If the dough springs back right away, it needs more proofing. But if it springs back slowly and leaves a small indent, it’s ready to bake.
Why is my pizza dough chewy?
There are a number of things that can cause a pizza crust to become excessively tough or chewy. The tough and chewy stage is set when a high protein (very strong) flour is used to make the dough. Another cause of a tough and chewy crust is the development of a gum line in the pizza. …
Why is my pizza dough not crispy?
You can also try setting the stone on the lowest rack in the oven, which can help get that bottom really thoroughly cooked. Also, what are you topping it with? Sometimes overloading it with too much can make it harder to get that crispy crust. Pizza at home has to be done with the oven as hot as possible.
At what degree should we bake pizza?
Bake pizza in the 475°F (245°C) oven, one at a time, until the crust is browned and the cheese is golden, about 10-15 minutes. If you want, toward the end of the cooking time you can sprinkle on a little more cheese.
Should I bake my pizza dough before adding toppings?
It’s absolutely essential to pre-bake the dough for 5-6 minutes before adding your toppings. Once you’ve added pizza sauce and all your toppings, return it to the oven to finish baking! This will result in a crust that holds up on its own and is crispy on the outside and soft and airy on the inside.
Why is raw dough bad for you?
Bacteria are killed only when food made with flour is cooked. This is why you should never taste or eat raw dough or batter—whether made from recalled flour or any other flour. In recent years (2016 and 2019), two outbreaks of E. coli infections linked to raw flour made more than 80 people sick.
How do you fix undercooked pizza?
The best way to fix undercooked pizza is to lower the temperature to about 350°F and move the oven rack down to the lowest notch. Then, cook for an additional 3 minutes. If the bottom is golden brown, it is done. If not, keep cooking the pizza in 3-minute increments until it is done.
What can you do with undercooked pizza dough?
Try dropping your oven temperature slightly or lowering your pizza to a level or two on your oven rack. You can then bake for longer – try 4 minutes longer – and you should cook your dough through without burning the top.
Does undercooked dough make you sick?
The short answer is no. Eating raw dough made with flour or eggs can make you sick. Raw dough may contain bacteria such as E. … Raw eggs may contain Salmonella bacteria, and should never be consumed raw or undercooked.
Can I let dough rise all day?
Standard dough left to rise at room temperature typically takes between two and four hours, or until the dough has doubled in size. If left for 12 hours at room temperature, this rise can slightly deflate, though it will still remain leavened. Some doughs should be left to rise overnight or be kept in a refrigerator.
What does over kneaded dough look like?
When you cut into an over-kneaded dough, you will notice that the interior is very dry and crumbly. The slices will likely fall apart rather than holding their shape. While the general taste of the bread may be the same, it will not have a nice mouthfeel but, again, be dry, dense and crumbly. No thank you! | https://checkfoodmenuprices.com/your-questions/question-how-can-you-tell-if-pizza-dough-is-cooked/
Glycemic index ⓘ
61 (medium)
Serving Size ⓘ Serving sizes are taken from FDA's Reference Amounts Customarily Consumed (RACCs)
Acidity (Based on PRAL) ⓘ PRAL (Potential renal acid load) is calculated using a formula. On the PRAL scale the higher the positive value, the more is the acidifying effect on the body. The lower the negative value, the higher the alkalinity of the food. 0 is neutral.
1.1 (acidic)
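The page does not show the PRAL formula itself; for orientation, the commonly cited Remer and Manz (1995) estimate, quoted here as an assumption about the site's method, is

$$\mathrm{PRAL}\;(\mathrm{mEq}) \approx 0.49\cdot\mathrm{protein\,(g)} + 0.037\cdot\mathrm{P\,(mg)} - 0.021\cdot\mathrm{K\,(mg)} - 0.026\cdot\mathrm{Mg\,(mg)} - 0.013\cdot\mathrm{Ca\,(mg)}.$$

Plugging in the per-100 g values listed below (7.91 g protein, 253 mg phosphorus, 371 mg potassium, 26 mg magnesium, 284 mg calcium) gives roughly 1.1, consistent with the stated value.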
Calories: 321
Richest in: Calcium, Vitamin B2, Carbs, Potassium, Phosphorus
Explanation: The given food contains more Calcium than 91% of foods. Note that this food itself is richer in Calcium than it is in any other nutrient. Similarly, it is relatively rich in Vitamin B2, Carbs, Potassium, and Phosphorus.
Macronutrients chart
Protein: 7.91 g of 50 g (16% DV)
Fats: 8.7 g of 65 g (13% DV)
Carbs: 54.4 g of 300 g (18% DV)
Water: 27.16 g of 2,000 g (1% DV)
Other: 1.83 g
Nutrition Facts
Amount Per 100g
Calories 321
% Daily Value*
Total Fat 9g (14%)
Saturated Fat 5g (23%)
Cholesterol 34mg (11%)
Sodium 127mg (6%)
Total Carbohydrate 54g (18%)
Dietary Fiber 0g (0%)
Total Sugars 54g
Includes ? g Added Sugars
Protein 8g
Vitamin D 6mcg (1%)
Calcium 284mg (28%)
Iron 0mg (0%)
Potassium 371mg (11%)
* The % Daily Value (DV) tells you how much a nutrient in a serving of food contributes to a daily diet. 2,000 calories a day is used for general nutrition advice.
Health checks
Low in Cholesterol
Dietary cholesterol is not associated with an increased risk of coronary heart disease in healthy individuals. However, dietary cholesterol is common in foods that are high in harmful saturated fats. Source
No Trans Fats
Trans fat consumption increases the risk of cardiovascular disease and mortality by negatively affecting blood lipid levels. Source
Low in Saturated Fats
Saturated fat intake can raise total cholesterol and LDL (low-density lipoprotein) levels, leading to an increased risk of atherosclerosis. Dietary guidelines recommend limiting saturated fats to under 10% of calories a day. Source
Low in Sodium
Increased sodium consumption leads to elevated blood pressure. Source
Low in Sugars
While the consumption of moderate amounts of added sugars is not detrimental to health, an excessive intake can increase the risk of obesity, and therefore, diabetes. Source
Mineral coverage chart
Calcium: 284 mg of 1,000 mg 28%
Iron: 0.19 mg of 8 mg 2%
Magnesium: 26 mg of 420 mg 6%
Phosphorus: 253 mg of 700 mg 36%
Potassium: 371 mg of 3,400 mg 11%
Sodium: 127 mg of 2,300 mg 6%
Zinc: 0.94 mg of 11 mg 9%
Copper: 0.015 mg of 1 mg 2%
Manganese: 0.006 mg of 2 mg 0%
Selenium: 14.8 µg of 55 µg 27%
Choline: 89.1 mg of 550 mg 16%
Mineral chart - relative view
Calcium
284 mg
TOP 9%
Potassium
371 mg
TOP 22%
Phosphorus
253 mg
TOP 22%
Magnesium
26 mg
TOP 39%
Sodium
127 mg
TOP 46%
Choline
89.1 mg
TOP 53%
Selenium
14.8 µg
TOP 54%
Zinc
0.94 mg
TOP 55%
Iron
0.19 mg
TOP 90%
Copper
0.015 mg
TOP 94%
Manganese
0.006 mg
TOP 95%
Vitamin coverage chart
Vitamin A: 267 IU of 5,000 IU 5%
Vitamin E : 0.16 mg of 15 mg 1%
Vitamin D: 0.2 µg of 10 µg 2%
Vitamin C: 2.6 mg of 90 mg 3%
Vitamin B1: 0.09 mg of 1 mg 8%
Vitamin B2: 0.416 mg of 1 mg 32%
Vitamin B3: 0.21 mg of 16 mg 1%
Vitamin B5: 0.75 mg of 5 mg 15%
Vitamin B6: 0.051 mg of 1 mg 4%
Folate: 11 µg of 400 µg 3%
Vitamin B12: 0.44 µg of 2 µg 18%
Vitamin K: 0.6 µg of 120 µg 1%
Vitamin chart - relative view
Vitamin B2
0.416 mg
TOP 17%
Vitamin A
267 IU
TOP 30%
Vitamin C
2.6 mg
TOP 34%
Vitamin B5
0.75 mg
TOP 44%
Vitamin B12
0.44 µg
TOP 50%
Vitamin B1
0.09 mg
TOP 52%
Vitamin D
0.2 µg
TOP 53%
Folate
11 µg
TOP 59%
Vitamin B6
0.051 mg
TOP 78%
Vitamin E
0.16 mg
TOP 80%
Vitamin K
0.6 µg
TOP 80%
Vitamin B3
0.21 mg
TOP 87%
Protein quality breakdown
Tryptophan: 112 mg of 280 mg 40%
Threonine: 357 mg of 1,050 mg 34%
Isoleucine: 479 mg of 1,400 mg 34%
Leucine: 775 mg of 2,730 mg 28%
Lysine: 627 mg of 2,100 mg 30%
Methionine: 198 mg of 1,050 mg 19%
Phenylalanine: 382 mg of 1,750 mg 22%
Valine: 529 mg of 1,820 mg 29%
Histidine: 214 mg of 700 mg 31%
Fat type information
Saturated Fat: 5.486 g
Monounsaturated Fat: 2.427 g
Polyunsaturated fat: 0.337 g
Carbohydrate breakdown for Condensed milk
Sugar: 54.4 g
Fiber: 0 g
Other: 0 g
All nutrients for Condensed milk per 100g
|Nutrient||DV%||In TOP % of foods||Value||Comparison|
|Protein||19%||50%||7.91g||2.8 times more than Broccoli|
|Fats||13%||38%||8.7g||3.8 times less than Cheese|
|Carbs||18%||19%||54.4g||1.9 times more than Rice|
|Calories||16%||27%||321kcal||6.8 times more than Orange|
|Sugar||0%||23%||54.4g||6.1 times more than Coca-Cola|
|Fiber||0%||100%||0g||N/A|
|Calcium||28%||9%||284mg||2.3 times more than Milk|
|Iron||2%||90%||0.19mg||13.7 times less than Beef|
|Magnesium||6%||39%||26mg||5.4 times less than Almond|
|Phosphorus||36%||22%||253mg||1.4 times more than Chicken meat|
|Potassium||11%||22%||371mg||2.5 times more than Cucumber|
|Sodium||6%||46%||127mg||3.9 times less than White Bread|
|Zinc||9%||55%||0.94mg||6.7 times less than Beef|
|Copper||2%||94%||0.02mg||9.5 times less than Shiitake|
|Vitamin E||1%||80%||0.16mg||9.1 times less than Kiwifruit|
|Vitamin D||2%||53%||0.2µg||11 times less than Egg|
|Vitamin C||3%||34%||2.6mg||20.4 times less than Lemon|
|Vitamin B1||8%||52%||0.09mg||3 times less than Pea|
|Vitamin B2||32%||17%||0.42mg||3.2 times more than Avocado|
|Vitamin B3||1%||87%||0.21mg||45.6 times less than Turkey meat|
|Vitamin B5||15%||44%||0.75mg||1.5 times less than Sunflower seed|
|Vitamin B6||4%||78%||0.05mg||2.3 times less than Oat|
|Folate||3%||59%||11µg||5.5 times less than Brussels sprout|
|Vitamin B12||18%||50%||0.44µg||1.6 times less than Pork|
|Vitamin K||1%||80%||0.6µg||169.3 times less than Broccoli|
|Tryptophan||0%||77%||0.11mg||2.7 times less than Chicken meat|
|Threonine||0%||76%||0.36mg||2 times less than Beef|
|Isoleucine||0%||75%||0.48mg||1.9 times less than Salmon|
|Leucine||0%||76%||0.78mg||3.1 times less than Tuna|
|Lysine||0%||75%||0.63mg||1.4 times more than Tofu|
|Methionine||0%||75%||0.2mg||2.1 times more than Quinoa|
|Phenylalanine||0%||80%||0.38mg||1.7 times less than Egg|
|Valine||0%||75%||0.53mg||3.8 times less than Soybean|
|Histidine||0%||78%||0.21mg||3.5 times less than Turkey meat|
|Cholesterol||11%||40%||34mg||11 times less than Egg|
|Saturated Fat||27%||23%||5.49g||1.1 times less than Beef|
|Monounsaturated Fat||0%||49%||2.43g||4 times less than Avocado|
|Polyunsaturated fat||0%||70%||0.34g||140 times less than Walnut|
References
The source of all the nutrient values on the page (excluding the main article and glycemic index text the sources for which are presented separately if present) is the USDA's FoodCentral. The exact link to the food presented on this page can be found below. | https://foodstruct.com/food/condensed-milk |
Q:
Why is this relation recursive?
A relation $R \subset \mathbb{N}^d$ is called recursive if there exists a primitive recursive function f with
$$ (x_1 ,\dots,x_d) \in R \Leftrightarrow f(x_1,\dots,x_d)=0.$$
In Kurt Gödel's article 'Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I', he proves the following proposition IV:
Assume $f : \mathbb{N}^n \to \mathbb{N}$ is a recursive function and $R \subset \mathbb{N}^{m+1}$ is a recursive relation. Then, the relation $S \subset \mathbb{N}^{n+m}$
$$ (a,b)\in S :\Leftrightarrow \exists x\in \mathbb{N} \text{ with }x \leq f(a) \text{ and } (x,b)\in R $$
with $a=(x_1,\dots,x_n)$ and $b=(y_1,\dots,y_m)$, is also recursive.
After this proposition he claims that it is clear that $U \subset \mathbb{N}^2$
$$ (x_1,x_2)\in U :\Leftrightarrow \exists x\in \mathbb{N} \text{ with }x \leq x_1 \text{ and } x_1 = x_2\cdot x $$
is a recursive relation. However, I do not see that this follows from proposition IV because the relation $R$ should only depend on $(x,x_2)$ not on $(x,x_1,x_2)$.
Is it still possible to show that $U$ is a recursive relation?
A:
Because of proposition IV the relation $S$ defined by
$$ (x_1,x_2,y_1,y_2) \in S : \Leftrightarrow \exists z\in \mathbb{N} \text{ with } z \leq x_1 \text{ and } y_1 = y_2 \cdot z $$
is recursive. Therefore, it exists a recursive function $f$ with
$$ (x_1,x_2,y_1,y_2) \in S \Leftrightarrow f(x_1,x_2,y_1,y_2) = 0 .$$
Since $g(x,y):=f(x,y,x,y)$ is a recursive function (it arises from $f$ by composition with projection functions, and composition preserves recursiveness), we have that the relation $U$ defined by
$$ (x,y) \in U :\Leftrightarrow (x,y,x,y) \in S \Leftrightarrow g(x,y)=0 $$
is recursive.
Pepper is the fruit of a flowering vine of the family Piperaceae, which grows in the tropics. The pepper plant starts bearing berries in its third year, twice a year, for about thirty years. It can climb up to 10 meters if its supporting tree grows that high; for commercial growing, pepper plants are kept under a height of 4 meters. The pepper plant needs shade and lots of water, but should not stand in water or it will die.
Green pepper is made from unripe berries.
The fresh green pepper berries have to be preserved by drying. Green pepper is used in desserts or in green pepper sauce. | https://www.orlandosidee.de/spices/pepper/peppergreen.htm
The utility model provides a movable partition for interior decoration, comprising a cross frame and first and second movable panels connected at the bottom of the cross frame. Pulleys are installed at the bottoms of both movable panels, and the side of the first movable panel is provided with a lock and a panel groove. A divider is built into the cross frame, with a first slideway and a second slideway on either side of it; in the middle of the divider, a gap is left whose length matches that of the first movable panel. The surfaces of both movable panels are provided with blank areas, flanked on both sides by decorative color bars. Using the gap in the cross-frame divider, the first and second movable panels can be placed in a single slideway so that the two panels form one continuous plane, giving a better decorative effect; when the two panels are combined into one, the blank areas on their surfaces can be used together.
After months of tests, troubleshooting and repairs, NASA ran into problems during fueling of the Space Launch System moon rocket early Monday, forcing the agency to scrub the planned launch of its Artemis 1 test flight — a critical mission to send an unpiloted Orion crew capsule on a 42-day mission beyond the moon and back.
Launch originally was planned for 8:33 a.m. EDT, the opening of a two-hour window. The next opportunity for launch will be Friday, Sept. 2, at 12:48 p.m. EDT, if the issues are resolved by then.
After the countdown was paused overnight because of stormy weather and troubleshooting to resolve an apparent hydrogen leak, 750,000 gallons of super-cold liquid oxygen and hydrogen fuel were loaded into the SLS core stage, clearing the way for another 22,000 gallons to be pumped into the upper stage.
The Artemis 1 test flight is intended to verify the rocket’s ability to propel Orion capsules into Earth orbit and then onto the moon. Engineers also will test the crew ship’s myriad systems in deep space and make sure its heat shield can protect returning astronauts from the 5,000-degree heat of re-entry.
NASA plans to follow the uncrewed Artemis 1 mission by launching four astronauts on a looping around-the-moon flight in 2024, setting the stage for the first astronaut landing in nearly 50 years when the first woman and the next man step onto the surface in the 2025-2026 timeframe.
But first, NASA must prove the rocket and capsule will work as planned — and that begins with the Artemis 1 launch.
Fueling was delayed 55 minutes by an approaching storm and lightning within about 6 miles of launch pad 39B. The six-hour fueling procedure began about 1:13 a.m. only to be interrupted by indications of a hydrogen leak near the area where propellant lines enter the base of the rocket.
During a transition from “slow fill” to a 10 times faster rate, sensors detected higher-than-allowable concentrations of hydrogen, indicating a leak somewhere in the system. After reverting back to slow fill and enabling temperatures to equalize across the plumbing, fast fill was restarted and this time, there were no issues.
Still to be determined: the status of a 4-inch quick-disconnect fitting used to route hydrogen to the core stage engines to cool them prior to ignition. NASA reported three of the engines were being properly conditioned, but engine No. 3 did not initially “see” the desired flow rates. That prompted additional troubleshooting.
And if that wasn’t enough, an unusual line of frost was spotted on the exterior of the rocket’s core stage — a possible indicator of a leak of some sort, a crack in thermal insulation or some other issue.
Backup launch opportunities are available on September 2 and 5. But if the rocket isn’t off the pad by then, the SLS will have to be hauled back to the Vehicle Assembly Building for servicing. | http://www.tettap.com/2022/08/launch-of-nasas-artemis-1-test-flight.html |
My first bike was a 20” BMX-style bike, a Schwinn Predator. I rode it with training wheels at first, and it was the bike I learned to ride without training wheels on. I remember learning how to ride in the yard around our house, at a park, or on a neighborhood street.
Our driveway had quite a steep hill, and our street was also hilly, so it was all about either going up or down. One day, a reporter from the local paper stopped and asked to take some pictures of my dad and me. The picture ran in the paper the next day as a “sight of spring” image.
My next two bikes were more “mountain bikes.” A Peugeot Lizardhead followed by a Trek 800 Antelope. They were my first bikes with gears and the ones I rode when I started to explore and get on some mountain bike trails.
My first real mountain bike trail ride was on the Trek 800, and I was super excited! I had ridden some trails through the local forests and fields but never on a dedicated MTB trail. Our next-door neighbor, Brian, was into mountain biking and had a crew of guys that rode the local trails. They took me out to Cannonsburg Ski Area (the same ski hill I lived on during the winters), which had a few miles of trails. I was worried I wouldn’t be fast enough to ride with them or wouldn’t have the skills to handle the downhills, but I immediately enjoyed it and wanted to go back.
Around 12 or 13, I switched over to clipless pedals. There were definitely a few tip-overs, but it didn’t seem to slow me down and pretty soon, I was hooked. Looking back, I probably should have waited to be attached to the bike as my skills really developed after that and I was a mess if I ever rode on flat pedals.
As a kid, I liked working on my bike but always liked riding it even more. Over the years, I worked at a couple of different bike shops and fortunately had some very patient teachers that help me learn the basics. At those jobs, I typically was relegated to building the kids bikes or more entry-level bikes, but this got me familiar with the basics. I rarely received a paycheck because every dollar I made was immediately spent buying new gear. | https://www.brentbookwalter.com/new-blog/2018/2/12/bikes-through-the-years |
The opening of “Room Package”, the installation by Aron Cheroes at the ContestaRockHair San Quirico salon in Florence, is scheduled for Thursday, December 19.
Already presented in Rome at CRHmonti and at the Albero salon in Florence, this traveling installation was developed by the artist expressly for ContestaRockHair.
Pictorial creativity needs a large space, sufficient to contain the materials and thoughts that characterize the place where the artistic process unfolds. According to Aron Cheroes (the pseudonym of the artist Francesco Giannino), the boxes represent rooms, chosen as places where works are created and elected as treasure chests where memories are collected. During the event, visitors can enter this world by scanning a QR code placed on each box. Images and cubic portions of worlds are united by a creative field connecting the various rooms, and by a screen displaying 100 photos mounted in stop motion.
The last connection is with the present, a live moment in which the artist paints during the event using an open box connected to the other rooms, while images of the artwork in progress are displayed on screen on a loop.
Evaluate these powers of 67. What do you notice? Can you convince someone what the answer would be to (a million sixes followed by a 7) squared?
If a number N is expressed in binary by using only 'ones,' what can you say about its square (in binary)?
What is the sum of: 6 + 66 + 666 + 6666 ............+ 666666666...6 where there are n sixes in the last term?
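A sketch of one way to evaluate this sum (our own working, not the NRICH solution): each term with $k$ sixes equals $\frac{6}{9}(10^k-1)$, so

$$\sum_{k=1}^{n}\underbrace{66\cdots6}_{k\text{ sixes}}=\frac{6}{9}\sum_{k=1}^{n}\left(10^{k}-1\right)=\frac{2}{3}\left(\frac{10^{n+1}-10}{9}-n\right)=\frac{2\left(10^{n+1}-9n-10\right)}{27}.$$

As a check, $n=2$ gives $2(1000-18-10)/27=72=6+66$.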
Find the decimal equivalents of the fractions one ninth, one ninety ninth, one nine hundred and ninety ninth etc. Explain the pattern you get and generalise.
Watch the video to see how to sum the sequence. Can you adapt the method to sum other sequences?
Explore what happens when you draw graphs of quadratic equations with coefficients based on a geometric sequence.
How can visual patterns be used to prove sums of series?
This article by Alex Goodwin, age 18 of Madras College, St Andrews describes how to find the sum of 1 + 22 + 333 + 4444 + ... to n terms.
In the limit you get the sum of an infinite geometric series. What about an infinite product (1+x)(1+x^2)(1+x^4)... ?
When is a Fibonacci sequence also a geometric sequence? When the ratio of successive terms is the golden ratio!
Simple additions can lead to intriguing results...
Can you correctly order the steps in the proof of the formula for the sum of a geometric series?
The interval 0 - 1 is marked into halves, quarters, eighths ... etc. Vertical lines are drawn at these points, heights depending on positions. What happens as this process goes on indefinitely?
Generalise the sum of a GP by using derivatives to make the coefficients into powers of the natural numbers.
If you continue the pattern, can you predict what each of the following areas will be? Try to explain your prediction.
A circle is inscribed in an equilateral triangle. Smaller circles touch it and the sides of the triangle, the process continuing indefinitely. What is the sum of the areas of all the circles?
Each week a company produces X units and sells p per cent of its stock. How should the company plan its warehouse space?
What is the total area of the triangles remaining in the nth stage of constructing a Sierpinski Triangle? Work out the dimension of this fractal.
Make a poster using equilateral triangles with sides 27, 9, 3 and 1 units assembled as stage 3 of the Von Koch fractal. Investigate areas & lengths when you repeat a process infinitely often. | https://nrich.maths.org/public/topic.php?code=-64&cl=4&cldcmpid=1940 |
Geology (2012) 40 (4): e259.
Fassett et al. (2011, herein) analyzed fossil bone fragments from the San Juan Basin of New Mexico and claim to have achieved the “first successful direct dating of fossil vertebrate bone” (p. 159). This claim is asserted to establish the survival of dinosaurs into the Paleogene, thus supporting a view championed by J. Fassett for nearly 30 years (Fassett, 1982). This would be a unique discovery if valid and, consequently, the burden of proof is high. Unfortunately, the data presented by Fassett et al. are unconvincing for several reasons.
The samples were fossils, not bones composed of unaltered hydroxyapatite, and as such they have been open systems whose uranium and lead uptake/leaching histories are conjectural. Uranium typically adsorbs preferentially in dinosaur bones buried within the local water table (Gillette, 1994), and many elements are drawn from surrounding sediments during fossilization (Goodwin et al., 2007, and references therein). The best that can be hoped for, if the isotopic composition of non-radiogenic lead is known and early absorbed uranium and its radiogenic lead are quantitatively retained (none of which is demonstrated by Fassett et al.), is that a minimum age for the bone can be postulated. How closely such a minimum age coincides with the pre-mortem age of the animal or with the depositional age of entombing sediments depends on many taphonomic variables, sediment geochemistry, groundwater flux history, and the porosity of fossil bone.
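For context, the conventional U/Pb age equation, which presumes closed-system behavior from the time of uranium uptake onward, is

$$t=\frac{1}{\lambda_{238}}\ln\!\left(1+\frac{^{206}\mathrm{Pb}^{*}}{^{238}\mathrm{U}}\right),\qquad \lambda_{238}\approx 1.55125\times10^{-10}\ \mathrm{yr}^{-1},$$

where $^{206}\mathrm{Pb}^{*}$ is radiogenic lead. Continuous or episodic uranium uptake and leaching violate the closed-system premise, so a $t$ computed this way has no unique interpretation.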
Both samples analyzed (22799-D and BB1) are fossil bone fragments rather than articulated fossil skeletons and are therefore suspect a priori of being reworked. Failure to recognize faunal reworking of isolated dinosaur teeth in the uppermost Hell Creek Formation of northeastern Montana resulted in a similar conclusion that dinosaurs survived into the Paleocene. Detailed stratigraphic and faunal analysis (Fastovsky and Dott, 1986; Lofgren, 1995) disproved this sensational claim, and the so-called “Bug Creek Problem” is considered by most workers to be resolved.
Decades of fieldwork above the Cretaceous-Paleogene boundary in the Western Interior has produced dozens of articulated skeletons and/or skulls of champsosaurs, crocodilians, lizards, turtles, and mammals firmly in place in Paleocene sediments. To date, no dinosaur skulls, skeletons, or nests—fossils unlikely to survive reworking unrecognized—are recorded in museum collections across North America from these or contemporaneous formations (www.paleodb.org; ucmpdb.berkeley.edu; muse.museum.montana.edu/paleodb_pub/; collections.nmnh.si.edu/search/paleo/). A long term, multi-institutional field study (1999–2009) of the geology and paleontology of the Hell Creek Formation produced no evidence of non-avian dinosaurs in situ above the Cretaceous-Paleogene boundary (Horner et al., 2011).
As in the “Bug Creek Problem,” it is entirely possible that the fossil fragments studied by Fassett et al. were reworked postmortem. U-uptake may have occurred at any time postmortem and could have continued during its subsequent history. There is no way to prove, based on the data presented, that this was not the case. After highly subjective data selection, Fassett et al. interpreted a U/Pb age of the control sample (22799-D) to be in agreement with a 40Ar/39Ar age for a volcanic ash bed at “virtually the same stratigraphic level” (p. 159) some 3.5 km away. Terrestrial facies can change dramatically over such distances, and chronostratigraphic correlation on even much shorter length scales can be difficult—a lesson of the “Bug Creek Problem.” Even if the correlation is correct, a reworked fossil fragment could easily acquire a U/Pb age coeval with deposition, which would have little bearing except as a one-sided constraint on the pre-mortem age of the bone. The two oldest inferred, but rejected, ages for sample 22799-D could be an indication that uranium uptake began prior to 80 Ma, long before the interpreted depositional age, and therefore indicate that the fossil fragment was reworked.
The data of Fassett et al. support their conclusions if and only if highly subjective data interpretation is employed. Even if such data could be used to date early postmortem diagenesis, taphonomic and stratigraphic evidence must be used to establish that the age of U-uptake places a valid constraint on the age of the animal. The claim for Paleocene dinosaurs in this case is simply not credible based on the evidence presented.
| https://pubs.geoscienceworld.org/gsa/geology/article/40/4/e259/130919/Direct-UPb-dating-of-Cretaceous-and-Paleocene |
Ahead of Uttar Pradesh Assembly polls, Yogi Adityanath govt announces smartphones, laptops for one crore youths; all you need to know
Uttar Pradesh, Aug 19 (ANI): Uttar Pradesh Chief Minister Yogi Adityanath speaks on building houses for the poor and taking action against the land mafia at the State Assembly, in Lucknow on Thursday. | (ANI Photo)
Lucknow: Just before the assembly polls in Uttar Pradesh, the Yogi government has made a big announcement for the youth. Chief Minister Yogi Adityanath on Thursday announced that the state will give smartphones or laptops to one crore youths. This is the biggest-ever poll gift in the state. Earlier, the Samajwadi Party government had given laptops to 15 lakh students who had passed the 12th standard exam in 2017.
The Yogi government would give smartphones or laptops to those students who take admission in graduate, postgraduate or diploma courses.
While making this announcement in the state assembly on Thursday, the chief minister said that his government has made a budgetary provision of Rs 3,000 crore in the supplementary budget for this purpose. In the supplementary budget tabled on Wednesday, the state government provided Rs 3,000 crore to the electronics department for making the youth digitally capable. While concluding the debate on the supplementary budget in the state assembly on Thursday, the chief minister said that his government would provide special allowances to students appearing in competitive examinations. This allowance would be given to those students who are appearing in at least three competitive examinations.
Regarding government employees, the chief minister said that his government would provide an increase of 28 per cent in the dearness allowance with effect from July this year. He said that lawyers in the state would get Rs 5 lakh in the name of social security, as against the earlier Rs 1.5 lakh. Accusing opposition parties, especially the Congress, of shielding mafias, the chief minister said that his government has seized the properties of criminal elements in the state. He announced that houses for the poor and Dalits would be constructed on the lands seized from the clutches of the mafia.
Meanwhile, the state assembly was adjourned sine die on Thursday after clearing the supplementary budget. On Wednesday, the state government had tabled a supplementary budget of Rs 7,301 crore. The monsoon session of the UP assembly concluded in just three days. During this period 14 important bills were passed. |
The National Fire Protection Association’s standards (e.g., NFPA 25) list specific requirements for what equipment needs to be inspected or tested and how often. Some items need to be inspected or tested weekly, others monthly, quarterly, semiannually, annually, or after five years. Many of Ryan Fireprotection’s clients choose to have us come out quarterly and annually, while handling the more frequent requirements in-house. The following is a list of what should be checked during the quarterly and annual inspections for a typical wet- or dry-pipe fire sprinkler system.
Quarterly Inspection
The quarterly inspection includes everything that is required for a monthly inspection (1–2), plus some additional items (3–7):
- Inspect valves to verify that they are in the following condition:
- In their normal open or closed position
- Properly sealed, locked, or supervised
- Accessible
- Free from external leaks
- Free of physical damage
- Appropriately labeled
- (Alarm Valves) Retarding chamber or alarm drains are not leaking
- Inspect all of the gauges to verify they are in the following condition:
- For Wet Systems, gauges shall be inspected monthly to ensure they are in good condition and that normal water supply pressure is being maintained.
- For Dry Systems:
- The gauge on the supply side of the dry pipe valve must indicate that the normal supply water pressure is being maintained.
- The gauge on the quick-opening device, if provided, must indicate the same pressure as the gauge on the system side of the dry pipe valve.
- Gauges on systems with low air or nitrogen pressure alarms must be inspected monthly.
- Inspect water flow alarm and supervisory alarm devices for physical damage.
- Inspect and test the water flow alarm by opening the test connection on a wet pipe system and the bypass connection on a dry pipe system.
- If the sprinkler system is hydraulic, inspect the hydraulic nameplate to ensure that it’s attached and easily seen.
- Inspect fire department connections to make sure they are visible and undamaged, and ensure that gaskets and valves are not leaking or damaged.
- Inspect pressure-reducing valves and relief valves, if provided, to verify that they are:
- In the open position
- Not leaking
- Maintaining downstream pressures in accordance with the design criteria
- In good condition, with hand-wheels installed and unbroken.
Annual Inspection
The annual inspection includes everything in the quarterly inspection as well as the following:
- Inspect all sprinkler heads, including the pipes and fittings (floor level only).
- Ensure that there are extra sprinkler heads on site, as well as tools to change out the sprinkler heads.
- Inspect interior of dry pipe valves while resetting (if applicable).
- Conduct a main drain water flow test to determine whether there has been a change in the condition of the water supply piping. (This test is required quarterly if the water is supplied through a backflow preventer and/or pressure-reducing valve.)
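For teams that track these items in maintenance software, the two visits above can be encoded as plain data. The sketch below is illustrative only: item wording is abbreviated, and NFPA 25 itself remains the authority.

INSPECTIONS = {
    "quarterly": [
        "valves: position, seals/locks, access, leaks, damage, labels",
        "gauges (wet and dry systems)",
        "water flow and supervisory alarm devices",
        "water flow alarm test via test/bypass connection",
        "hydraulic nameplate attached and visible",
        "fire department connections",
        "pressure-reducing and relief valves",
    ],
    "annual": [
        "sprinkler heads, pipes and fittings (floor level)",
        "spare sprinkler heads and wrench on site",
        "interior of dry pipe valves while resetting",
        "main drain water flow test",
    ],
}

def items_due(visit):
    # An annual visit also covers everything on the quarterly list.
    extra = INSPECTIONS["annual"] if visit == "annual" else []
    return INSPECTIONS["quarterly"] + extra

print(len(items_due("annual")), "line items on the annual visit")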
Contact Ryan Fireprotection to schedule the quarterly and annual inspections of your wet or dry sprinkler system. Other systems such as anti-freeze, standpipe, hose, water spray, and foam systems have their own set of requirements. But whatever the system, Ryan Fireprotection can handle your required inspections and testing needs, as well as full sprinkler system installation. Ryan Fireprotection professionals can also resolve any issues found during the inspections to help keep your system functioning when it is needed most—when lives are on the line during a fire. | https://www.ryanfp.com/quarterly-annual-sprinkler-inspection-requirements/ |
Just months after the National Park Service started a relocation program to trap and transport new wolves to Michigan’s remote Isle Royale in hopes of boosting the dwindling pack, a winter survey that will give researchers their first peek at how the new wolves are fitting into their new home might be called off because of the ongoing federal government shutdown.
Staff from the research project posted a message Sunday night on the Wolves and Moose of Isle Royale Facebook page, alerting their followers that the winter survey, typically done by plane, might be grounded this year.
“It is our present understanding that the 61st Winter Study of Wolves and Moose in Isle Royale National Park will not be allowed during the partial shutdown of the Federal government,” read the brief note.
Based at Michigan Technological University, it is the longest-running predator-prey study of its kind.
For decades, researchers have been tracking the number and pack structure of the island’s wolves and its moose. For the last few years, only two wolves have survived on the island in Lake Superior, located about 56 miles from the Upper Peninsula mainland. Meanwhile, the moose population has ballooned to more than 1,500, researchers have said.
The large number of moose and the real possibility of them deforesting the island wilderness, which is also a national park, prompted federal officials to lay out a plan to bring more wolves to the island. They want to build back up the wolf numbers so new packs can keep the moose population in check.
The first new wolves arrived on the island this past fall. More are supposed to come this winter from Ontario, Canada. Future wolves may be trapped in Michigan’s U.P.
While the three new wolves have tracking collars that allow researchers to see where they go on the island, the winter survey is designed to give study researchers even more information. In the past, researchers have been able to get great aerial photos of the island’s two older wolves, their winter kills, and large groupings of moose as they gather in the forest.
Hundreds of thousands of federal workers are going without pay as the partial federal government shutdown is in its third week. | https://www.boreal.org/2019/01/07/187058/government-shutdown-threatens-wolf-and-moose-winter-survey-on-michigan-s-isle-royale |
Microseismicity may be fluid-induced or it may be caused by changing stress conditions in the reservoir; therefore, not all seismicity will contribute to production. Development of a microseismic-based DFN model can describe fracture networks which have been activated during stimulation, but further interpretation is required to determine how these fractures will impact reservoir drainage. This interpretation starts with an examination of stimulated reservoir volume (SRV).
Estimates of SRVs have evolved over the lifetime of the technology. Early attempts to define SRV by using envelope functions around microseismic event distributions generally resulted in large overestimates of the stimulated zone, by incorrectly accounting for outlier events and by an inability to distinguish between fluid-induced and stress-induced events. Further refining SRV to an estimate of the most seismically deformed volume addressed the issue of outliers, but does not incorporate knowledge of failure mechanisms or activated fracture sets. By considering that stimulated fractures can form a number of intersections, the stimulated volume can be interpreted in terms of fracture complexity (FC).
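As a toy illustration of why envelope-style estimates overshoot, the sketch below compares a convex hull around all events with a hull around only the densest events; the synthetic event cloud and the 90% density cut are assumptions for demonstration, not a published workflow.

import numpy as np
from scipy.spatial import ConvexHull, cKDTree

# Synthetic event cloud (metres); a few far outliers mimic noisy picks.
rng = np.random.default_rng(0)
events = rng.normal(scale=(120.0, 60.0, 40.0), size=(500, 3))
events[:10] *= 6.0                        # inject outlier events

naive_srv = ConvexHull(events).volume     # envelope around *all* events

# Keep the densest 90% of events: rank by distance to the 10th neighbour.
d10 = cKDTree(events).query(events, k=11)[0][:, -1]
core = events[d10 <= np.quantile(d10, 0.90)]
filtered_srv = ConvexHull(core).volume

print(f"naive SRV:    {naive_srv:,.0f} m^3")
print(f"filtered SRV: {filtered_srv:,.0f} m^3")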
A final consideration for the stimulated reservoir volume is to determine where fracture complexity allows a part of the reservoir to be well connected back to the perforations, in essence providing a drainage pathway. Using advanced SMTI analysis, high-quality events can be inverted for a general solution, which enables determination of whether mixed-mode shear-tensile events exhibit fracture-opening or fracture-closing components. With reference to a geomechanical model, the amount of net opening within the fracture networks defines a volume of enhanced fluid flow (EFF) in the reservoir. By evaluating the orientation, density and size of fractures as they intersect within the fracture network, it is possible to better delineate drainage pathways within the reservoir.
Name:
JAPANESE LATE 20TH TO EARLY 21ST CENTURY PORCELAIN VASE BY LNT TOKUDA YASOKICHI III
SOLD
Inventory #:
XMA-02
Description:
Japanese late 20th to early 21st century porcelain vase by Japanese Living National Treasure Tokuda Yasokichi III, born Tokuda Masahiko in Komatsu, Japan, in 1933. He passed away in August 2009. The vase is done in a large round shape with a small spout top. Tokuda Yasokichi III became a Living National Treasure in 1997 and is recognized for his work with saiyu, a glaze color gradation technique. Comes with TOMOBAKO, the original artist-signed wooden storage box. The vase measures 9 1/8" tall and is 8 1/2" in diameter. (inesu)
Age:
Late 20th to Early 21st Century
Size:
9 1/8"H by 8 1/2"Diameter
Price:
Sold
Q:
Pyspark Average interval for RDD
I am trying to use PySpark to find the average difference between adjacent values in a list of tuples.
For example if I have a RDD like so
vals = [(2,110),(2,130),(2,120),(3,200),(3,206),(3,206),(4,150),(4,160),(4,170)]
I want to find the average difference for each key.
For example for key value "2"
The average difference would be (abs(110-130) + abs(130-120))/2 = 15.
This is my approach so far. I am trying to change the average-calculation code to accommodate this instead, but it doesn't seem to be working.
from pyspark import SparkContext
aTuple = (0,0)
interval = vals.aggregateByKey(aTuple, lambda a,b: (abs(a[0] - b),a[1] + 1),
lambda a,b: (a[0] + b[0], a[1] + b[1]))
finalResult = interval.mapValues(lambda v: (v[0]/v[1])).collect()
I want to do this using the RDD functions, no Spark SQL or any other additional packages.
What would be the best way to do this?
Please let me know if you have any questions.
Thank you for your time.
A:
I came up with a naive approach to this. I am not sure if this will work in all cases. It goes something like this.
Let's first make a function to calculate the average absolute difference between adjacent values. Please correct me if this is not the correct way to calculate it.
def get_abs(num_list):
    '''
    >>> get_abs([110, 130, 120])
    15.0
    '''
    acc = 0
    num_pairs = 0
    for i in range(len(num_list)-1):
        acc += abs(num_list[i]-num_list[i+1])
        num_pairs += 1
    # Guard against keys holding a single value (no adjacent pairs).
    return acc/num_pairs if num_pairs else 0.0
Next, we parallelize the list
>>> vals = [(2,110),(2,130),(2,120),(3,200),(3,206),(3,206),(4,150),(4,160),(4,170)]
>>> rdd = sc.parallelize(vals)
>>> rdd.collect()
[(2, 110),
(2, 130),
(2, 120),
(3, 200),
(3, 206),
(3, 206),
(4, 150),
(4, 160),
(4, 170)]
Then, group the values belonging to the same list.
>>> vals = rdd.groupByKey().mapValues(list)
>>> vals.collect()
[(4, [150, 160, 170]), (2, [110, 130, 120]), (3, [200, 206, 206])]
Then we just need to call the function that we defined above to calculate the average difference on the grouped values.
>>> vals.mapValues(get_abs).collect()
[(4, 10.0), (2, 15.0), (3, 3.0)]
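One more caveat from me: groupByKey does not guarantee the order of values within a key once data is shuffled across partitions, and the "adjacent" differences depend on that order. If the original RDD order matters, you can carry an index along and sort inside each group first (a sketch, assuming the same toy data):

# (key, value) zipped with its global position i -> (key, (i, value))
indexed = rdd.zipWithIndex().map(lambda t: (t[0][0], (t[1], t[0][1])))
# Sort each group by position before stripping the index back off.
ordered = indexed.groupByKey().mapValues(lambda p: [v for _, v in sorted(p)])
print(ordered.mapValues(get_abs).collect())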
| |
Vitamin E supplementation does not improve survival from infection in mice when given after burn injury.
Previous studies have shown that vitamin E acetate given by gavage to mice before thermal injury can improve survival from subsequent infection. We tested the efficacy of aqueous vitamin E given parenterally after burn injury. Female BALB/c mice (n = 120) were given 15% total body surface area full-thickness flame burns. Three groups of mice were randomized to receive water-miscible vitamin E with the resuscitation fluid in doses of 0.5 mg per mouse, 0.167 mg per mouse, or 0.056 mg per mouse. These doses represented approximately nine, three, and one times the murine recommended daily allowance for vitamin E, respectively. Control mice were given saline only. The next day the mice were challenged with 2.5 x 10(5) Pseudomonas aeruginosa beneath the eschar. Administration of vitamin E was continued on a daily basis for a total of three doses and ending the day after bacterial challenge. Mortality rates were observed for 1 week and were not statistically different among the four groups. We conclude that vitamin E supplementation started after thermal injury in mice does not improve outcome from subsequent challenge with Pseudomonas aeruginosa.
| |
Earth observation solutions addressing coastal erosion issues, their impact on ecosystems and the risks to infrastructure assets
This project develops and tests innovative data processing tools to monitor the impact of climate change on marine ecosystems, particularly in coastal areas. It provides new Earth Observation (EO) products, including high-resolution shore ice mapping, which will be integrated into the vulnerability assessment of Canadians living in coastal areas and used for policy planning and implementation.
The project “Earth observation solutions addressing coastal erosion issues, their impact on ecosystems and the risks to infrastructure assets” has been included within the Climate Change Impacts and Ecosystem Resilience (CCIER) portfolio of projects, supported by the Canadian Space Agency (CSA).
Project overview
Coastal areas are home to some of the most productive ecosystems on the planet. For example, seagrass beds and salt marshes are among the most important carbon sinks in the world and provide many ecosystem services. However, these ecosystems are very sensitive to environmental changes, whether due to human activity (infrastructure, resource exploitation, coastal activities, etc.) or to phenomena associated with climate change. Indeed, the decreasing ice cover, the global sea-level rise and the increasing frequency of storms aggravate the impacts of coastal erosion, which is one of the main causes of the disappearance of highly valuable coastal ecosystems. In the context of climate change, Canada therefore urgently needs more operational coastal monitoring methods to better protect coastal ecosystems and preserve the ecological and economic benefits they provide to humans.
Monitoring of coastal habitats for prevention and public safety, ecosystem conservation and other socio-economic objectives requires highly responsive tools that can monitor the territory over large spatial scales, detect changes that occur at different time scales, and yet remain affordable. Although in situ data collection can provide detailed portraits of coastal areas with high-quality data, it is expensive and difficult to carry out on a large scale, so it is mostly ad hoc, rather than recurrent, and confined to specific areas.
In this context, remote sensing is a tool of choice. Earth observation satellites have short revisit times and data acquisition capabilities that open up a world of possibilities for environmental monitoring of coastal areas over large territories. Remote sensing can thus provide continuous large-scale physical and biological information which, combined with point measurements on the ground in critical or vulnerable areas, can result in continuous, near-real-time monitoring of coastal ecosystems.
Objectives
This project aims to develop and test innovative tools to monitor the impact of climate change on coastal areas and assess their vulnerability to erosion. It provides new EO solutions to coastal end-users, including high-resolution maps of shore ice, coastline, ecosystems and suspended sediments. This project represents the first step in implementing an effective monitoring system capable of providing up-to-date information on coastal zone dynamics. The new EO products and tools, which are highly sought after for management and decision-making purposes, benefit stakeholders to help mitigate vulnerabilities in the context of climate change.
The main objective is to map the impacts of climate change on coastal ecosystems and assess their vulnerability to coastal erosion. The specific objectives are to:
- Investigate and develop new EO data analysis routines for the detection, qualification and quantification of shore ice (e.g. ice-foot, sea-ice, frazil) on the shoreline that protects the coastline, in conjunction with the application of ESA’s shoreline mapping algorithm for Canadian waters to assess erosion from cross-shore bathymetric/topographic profiles and cross-shore displacement;
- Develop a multi-sensor approach (i) to map coastal marine ecosystems based on machine learning and (ii) to assess suspended sediment concentrations. Suspended matter influences water turbidity and light penetration, which impacts the health of marine habitats.
- Demonstrate the efficiency of the EO solutions for end-users by producing time series of Level 2 products.
Consortium
This project is led by ARCTUS Inc. (Rimouski, QC) as prime contractor, in close collaboration with Hatfield Consultants (Vancouver, BC), the Research Chair in Coastal Geoscience of the Université du Québec à Rimouski (Rimouski, QC) and ARGANS Ltd (Plymouth, UK).
Products
Shorelines and waterlines
The waterline is the instantaneous transition between land (or ice) and water detected by a segmentation method, which identifies differences in the physical parameters of features in an EO product. Its spatial resolution is limited to the pixel size of the initial image. The position of this boundary varies in time with the tidal level. As part of the ESA-funded Coastal change from space project, our partners at Argans Ltd apply corrections to adjust the lines to the beach profile and to a tidal datum. The adjusted shorelines correspond to a mean (or extreme value) of tidal elevation.
Ice classification
The automated generation of sea ice classification products is based on a machine learning algorithm using spaceborne C-band SAR data. Environmental variables (temperature, salinity, snow cover, etc.) and ice characteristics such as stage of development (ex. first-year ice), shape (ex. floes, ridges) and concentration affect the appearance of sea ice in SAR images. The different ice characteristics produce a variety of tones and textures in the images, allowing the classification of land, unconsolidated ice, deformed ice, and smooth or rough open water.
Total suspended sediment
Total suspended sediment (TSS) refers to organic (e.g. algae or decaying matter) or inorganic (e.g. mud) particles floating or drifting in the water column. Major hydrodynamic events (ex. waves, storms) or river discharges (ex. flood, spring freshet) contribute to increased TSS in coastal areas. Excessive suspended sediment can impair water quality for aquatic life, for example by increasing nutrient pollution and preventing light from entering the water column. To monitor TSS in coastal water, a retrieval algorithm connects satellite measurements (i.e. surface reflectance) to TSS concentration.
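For illustration only, here is a generic single-band retrieval of the kind used in published semi-empirical algorithms (e.g. Nechad et al., 2010); the coefficients A and C below are placeholders, not the calibrated values used in this project.

def tss_single_band(rho_w, A=300.0, C=0.17):
    """Map water-leaving reflectance (dimensionless) to TSS in g/m^3."""
    # Quasi-linear at low reflectance, saturating as rho_w approaches C.
    return A * rho_w / (1.0 - rho_w / C)

print(tss_single_band(0.02))   # ~6.8 g/m^3 with these placeholder numbers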
Ecosystem Classification
Multispectral satellite imagery contains enough information to discriminate between several vegetation types and to map different habitats in optically shallow waters (i.e. water shallow and clear enough to allow the satellite to detect the bottom). High and very high spatial resolutions (i.e. <30 m pixel resolution) provide sufficient coverage and revisit time to monitor annual to decadal variations in vegetation distribution and ecosystem changes. A pixel-by-pixel supervised classification method is used to classify cloud-free (<10%) Sentinel-2 and Landsat-8 images acquired at low tide (<1 m above chart zero) during summer and early fall (July-October). The classification maps identify salt marshes, submerged vegetation, submerged and emerged sand, and optically deep water.
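A minimal sketch of the pixel-by-pixel supervised step, with random reflectances and labels standing in for a co-registered band stack and field-truth polygons (the classifier choice here is illustrative; the project's actual model is not specified in this text):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

n_pixels, n_bands = 5000, 6              # e.g. six visible/NIR bands
X = np.random.rand(n_pixels, n_bands)    # placeholder reflectance stack
y = np.random.randint(0, 5, n_pixels)    # 5 classes: marsh, SAV, sand, ...

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))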
Find out more about our innovative EO solutions to tackle coastal erosion issues
Web mapping platform
In order to promote operational EO data for integrated coastal zone management, our products are published on interactive web maps specially designed for end-users.
Click here to access the web platform (an account is required).
Click here to fill in the registration form to create an account (please allow up to 48 business hours). | https://arctus.ca/current_projects/ccier/ |
Imagine your object to be made up of a lot of infinitesimally small "straws" - little cylinders.
Each cylinder has an area $dA$ and a length $\ell$. You know that the volume of such a cylinder is $\ell dA$.
Now look at the pressure difference between the top and bottom of that cylinder: at the bottom, the pressure will be greater by $\rho \ell g$ (where $\rho$ is the density of the liquid, and $g$ is the gravitational acceleration) - that's just the way water pressure works: the pressure exactly supports the weight of the column of liquid above it. The difference in pressure is proportional to the difference in depth, which is $\ell$. So with an area of $dA$ at top and bottom, the net upward force on the cylinder is $\rho \ell g \, dA$ - difference in pressure, times area. But since the cylinder's volume is $dV = \ell \, dA$, that force is just $\rho g \, dV$. Summing over all the cylinders,
$$F = \int_A \rho g \ell \, dA = \rho g \int_A \ell \, dA = \rho g V$$
Now if an object is made up of many such cylinders, each experiencing a force equal to the weight of the liquid it displaced, then the entire object will also experience a force equal to the weight of the displaced liquid. | https://physics.stackexchange.com/questions/174159/derivation-of-archimedes-principle?noredirect=1 |
Warrants served in Oceanside, Fallbrook, Bonsall, Encinitas, Vista, Rainbow and Lake Elsinore
San Diego County CA— Operation Double Down in the North County has led to 49 arrests, with 25 people taken into custody in today's sweep. From 7:00 a.m. to 2:00 p.m. today, Sheriff's Deputies served arrest and search warrants in Fallbrook, Bonsall, Encinitas, Vista, Oceanside, Rainbow and Lake Elsinore.
Operation Double Down started in January 2017 as a follow-up to Operation El Niño, which concluded in the summer of 2016 in Fallbrook. The goal is to disrupt the operations of drug dealers who would fill the void left by those arrested in Operation El Niño. During the nine-month crackdown, deputies with the Fallbrook Sheriff's Substation, Sheriff's Special Investigations Division (SID) and Criminal Intelligence Detail (CID) conducted more than 50 undercover "buy-walk" operations. They bought fentanyl, heroin, methamphetamine and cocaine from the suspects. A dozen people were identified as part of the drug trafficking operation and were arrested this morning along with other drug dealers.
Photos courtesy: San Diego County Sheriff’s Department
Deputies also recovered stolen weapons, as well as items connected with at least 13 residential burglaries in Fallbrook. They include a pickup truck, a truckload of household items, electronics, furniture, tools, toys, shoes, clothing, as well as $15,000 worth of custom carved wooden bowls.
Those arrested today include:
- Nathan Belleville 10/10/83 – Fallbrook
- Trevor Rogers 6/11/79 – Fallbrook
- Enrique Cazares 7/4/73 – Fallbrook
- Heath Rothenay 4/6/92 – Fallbrook
- Michael Evans 1/14/72 – Fallbrook
- Joseph Sims 4/10/80 – Fallbrook
- Sandra Gilbo 2/8/70 – Fallbrook
- Michael Tetu 12/27/86 – Fallbrook
- Michael Ochoa 7/7/73 – Fallbrook
- Adrian Verdugo 7/2/76 – Bonsall
- Shawn Orr 7/24/72 – Fallbrook
- Bethany Villarreal 3/25/70 – Bonsall
- Eileen Zaragosa Quintana 1/21/68 – Fallbrook
- Crittenton Zayak 10/30/83 – Rainbow
The names of 11 other suspects arrested today are not being released at this time to avoid jeopardizing ongoing investigations.
Those arrested face federal and state charges of conspiracy to distribute a controlled substance, drug sales, possessing stolen property and/or drugs, burglary, resisting arrest and violating parole or probation. Some also had outstanding felony and misdemeanor warrants. Suspects in this investigation face sentences that vary with their criminal backgrounds, ranging from probation to 25 years in prison.
Deputies will work to return the stolen items to their owners. If you are a victim, you can call the Fallbrook Sheriff’s Substation at (760) 451-3100. You can always report suspicious activity to the Sheriff’s Department at (858) 565-5200.
Follow the Sheriff’s Department on Twitter: @SDSheriff, @SDSOFallbrook. | https://www.osidenews.com/2017/09/21/49-people-arrested-north-county-drug-dealing-operation/ |
To work out whether dual tasking helps or increases the risk of falls, work out the 10% range either side of baseline, i.e. for Nick's baseline of 14.56 seconds, the range is 13.10 to 16.02 seconds. Adding the motor and cognitive tasks to baseline pre-treatment did not take him outside the 10% baseline range, so dual tasking does not increase the risk of falling on this test, but his combined transfer, walking pattern and turns take longer to complete than is deemed 'safe'.
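A quick arithmetic check of that band (a throwaway snippet, using the numbers from the text):

baseline = 14.56                            # Nick's baseline TUG time, seconds
low, high = baseline * 0.9, baseline * 1.1
print(f"{low:.2f} to {high:.2f} seconds")   # prints: 13.10 to 16.02 seconds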
The TUG may not be a tool of choice to look at gait, but using a tool such as the Tinetti gait scale, you would record no hesitancy of gait initiation during the baseline walk, but hesitation when a second task is added; step length and height of feet are fine until he turns; symmetry is affected but path and continuity of gait pattern are fine, as is foot distance, but the trunk is stiffer – this would give a score between 7 and 9 out of 12 depending on the addition of a second task – again demonstrating there are components that put Nick at moderate risk of a fall in the future.
Tragus to Wall Test
Nick never fully extends either his knees or his hips during his walk (forwards or backwards) despite the good step size. Although his walk is purposeful, the stress through the anterior knee joint, which remains flexed while the joint is loaded in stance, could be a reason for his pain. As I wanted to see whether the flexion was correctable, we performed a Tragus to Wall Test to understand the influence of forward pull on upright stance, and therefore gait.
CFD Projects List for Mechanical and Diploma Students: CFD stands for Computational Fluid Dynamics, which is a branch of fluid dynamics. CFD uses numerical analysis and algorithms to solve and analyze problems that involve fluid flow. To perform the calculations, computers are used to simulate the interactions of liquids and gases with surfaces defined by boundary conditions.
As CFD plays a vital role in the field of fluid dynamics, I want to explore the (90+ Updated) CFD Projects list so that it can be used by readers to make their projects successful in the area of Computational Fluid Dynamics (CFD).
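To make the "numerical analysis and algorithms" part concrete before the list, here is a deliberately tiny, self-contained example (not taken from any of the projects below): an explicit finite-difference update for the 1D diffusion equation, the kind of scheme full CFD codes generalize to three dimensions and real geometries.

import numpy as np

# Forward-time, centred-space (FTCS) march of du/dt = alpha * d2u/dx2.
# Grid size, diffusivity and time step are arbitrary demo values; the
# explicit scheme is stable only while dt <= dx**2 / (2 * alpha).
nx, alpha, dx, dt = 51, 1.0, 0.02, 1e-4
u = np.zeros(nx)
u[nx // 2] = 1.0                       # initial heat spike mid-domain

for _ in range(500):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(f"peak temperature after 500 steps: {u.max():.4f}")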
CFD Projects List for Mechanical and Diploma Students:
The (90+ Updated) CFD Projects List for Mechanical and Diploma Students is shown below. A wide variety of CFD projects are carried out daily in research and by mechanical students from a project point of view. The CFD Projects list is as follows.
- CFD Simulation of Soot Formation And Flame Radiation
- CFD Modeling and Quality Forecasting for Cooling and Storage of Pelagic Species
- Hydraulic Design and performance analysis of a Mixed Flow Pump for Marine Water Jet Propulsion
- Computational Fluid Dynamics Simulation of a Spark Ignition Engine for Gaseous Fuels with Different Spark Times
- Estimation & CFD analysis of Re-entry Parameters for Hybrid Space Probe Capsule
- CFD Analyses of Ship Hull Forms
- CFD analysis of flow through venturimeter to determine the Coefficient of Discharge.
- Hydrofoil analysis using CFD
- Study of F1 car aerodynamics front wing using computational Fluid dynamics (CFD)
- Design and analysis of dust collector using CFD
- Design and analysis of globe valve as control valve using CFD software
- CFD analysis of shell and tube heat exchanger with fins for waste heat recovery
- CFD analysis of engine valve
- CFD study of Conventional and Under Floor Air Distribution System
- Aluminum Melting Furnace Design Optimization to Improve Energy Efficiency by Integrated Modeling
- Performance improvement of an automobile radiator using CFD analysis
- CFD Analysis Of Airflow And Temperature Distribution In Buildings
- A Project on Flow in gutters and downpipes
- Evaluation of CFD Sub-Models for the Intake Manifold Port Flow Analysis
- Evaluation of CFD to predict smoke movement in complex enclosed spaces
- CFD analysis of a simple convergent flow using ANSYS
- CFD analysis of supersonic exhaust in a scramjet engine
- CFD Analyses Of The Gas Flow Inside The Vessel Of A Hot Isotactic Press
- CFD based process on modeling of a rotary furnace for Aluminum scrap melting
- CFD analysis of combustion and emissions to study the effect of compression ratio and biogas substitution in a diesel engine
- Effect of Swept Blade on Performance of a Small Size Axial Fan
- CFD Solution of Internal (shock tube and Couette flow) & External (Airfoil and cylinder) flow
- CFD Analysis of Mixing and Combustion of a Scramjet Combustor with a Planer Strut Injector
- CFD analysis for transient turbocharger flows by varying flow rate.
- A study of computational fluid dynamics Applied to room air flow
- Design and Analysis of a Radial Turbine with Back Swept Blading
- Investigation of Film Cooling Strategies of CFD versus Experiments-Potential for Using Reduced Models
- CFD analysis of exhaust manifold
- CFD Analysis of economizer in a tangential fired boiler
- CFD design for electric car battery cooling system
- A Project on Aerodynamic design study of ground vehicles
- CFD Application of Flameless Oxidation in Glass Melting Furnaces
- CFD prediction of loads on marine structures
- CFD analysis of fuel tank sloshing
- Optimizing flow rate of a carburetor by CFD analysis
- Computational Fluid Dynamics Modeling to Validate HVAC System Design
- Thermal modeling of the "Greenhouse Effect"
- CFD Analysis of PACE Formula-1 Car
- CFD modeling of the automobile catalytic converter
- Heat Transfer in an Automotive Turbocharger Under Constant Load Points: a Computational Investigation
- CFD analysis of centrifugal fan
- CFD calculation of convective heat transfer coefficient and its Validation
- Air and fuel flow interaction in combustion chamber for Various injector locations
- Combined aerodynamic and structural optimization of a high-speed civil transport wing
- Manifold optimization of an internal combustion engine by using CFD analysis.
- Analysis and Optimization of Micro channel Heat Sinking
- Reduction of drag in a buggy car model
- CFD simulation and field application by Mitigating snowdrift at the elevated SANAE IV research station in Antarctica
- Flow through/past sparse bodies
- Analysis of water flow for Laminar & Turbulent Flow in Conventional Water Tap
- CFD using the discrete-vortex method
- A vehicle body Drag Analysis using Computational Fluid Dynamics
- Numerical Solution of Navier – Stokes Equations for Separating and Reattaching Flow over a Double Steps Expansion and Contraction
- CFD analysis of an air cooled condenser by the copper & aluminum material.
- Computational fluid dynamics for the design of turbo machinery
- CFD prediction to optimize front end cooling module of a passenger vehicle
- Design and CFD Simulation of a Battery Module for a Hybrid Electric Vehicle Battery Pack
- Aerodynamic Design for Bus/Car Vehicle
- Conjugate heat transfer analysis in electronics devices
- Analysis of Cyclone dust collector air flow
- CFD analysis of natural convection in differentially heated enclosure
- Fluid and thermal behavior of natural convective boiling at a submerged heated surface
- Heat Transfer Modeling of Large Shipping Containers
- . CFD analysis of an opposed piston internal combustion engine
- Flow Characteristics in a Cross-Flow Fan with Various Design Parameters
- Turbulence models in CFD
- CFD analysis of intake manifold in SI engines
- A theoretical analysis and CFD simulation on the ceramic monolith heat exchanger
- CFD analysis of a diffuser
- Flow analysis of marine propeller
- Fluid flow and temperature distribution in radiators used in automobiles
- Simulating the Blood Flow for the Aorta with a Stenosis
- Numerical analysis of wax melting
- Assessment of turbulence modeling for CFD
- CFD analysis and comparison of vertical tube with smooth tube
- Turbulent flow simulation in Kaplan draft tube
- CFD analysis of rocket nozzle
- A CFD-based analysis of the 14-bis aircraft aerodynamics and stability
- CFD analysis of mixed flow pump Impeller
- CFD Investigation of Airflow in any Car by using Fluent Analysis
- Design improvements on mixed flow pumps by Computational Fluid Dynamics
- Advanced Design For A Tail Wing For Better Performance.
- Computational flow field analysis of a vertical axis wind turbine
- Analysis of multiphase flow in open channel flows using CFD
- CFD analysis of fluid flow and heat transfer in a single tube-fin arrangement of an automotive radiator
- Nozzle design optimization to reduce noise for turbo jet engine.
- Design And Optimization of Automotive Cabin Cooling
- CFD analysis of gas flow behavior in economizer duct
- CFD analysis of an ejector for cooling applications
You should also know the explanation for the projects of:
- Effects Of Minimum Quantity Lubrication On Turning Aisi9310 using Vegetable oil Cutting Fluid
- What is Blue Brain and how a man thinks even after his death?
- What is the Material Selection for Unmanned Aerial Vehicle(Drone)?
- What is Smart Note Taker and where we can use it?
- What is the Present and Future Scope of Powder Metallurgy?
- Optimization of Manufacturing Process Plan of GIMBAL for Targeted Missiles
- A Project on Modelling and Manufacturing of Input Shaft using Computer Aided Manufacturing
- Automatic Speed Control In 4-wheeler by Cruise Control
- Development of CNC Program for End Shield by Unigraphics Software
- Generating NC Program of Journal Bearing using NX CAM software
This is the complete explanation of the updated list of 90+ CFD projects in a detailed manner. Hope this CFD Projects list is helpful to you. If you have any doubts, feel free to ask in the comments section. Please share and like this blog with the whole world so that it can reach many.
Q:
Oracle machines that halt on every input
Suppose $x, y \subset \mathbb{N}$ and $x \leq_T y$; that is, there is some oracle machine $e$ such that $n \mapsto \{e\}^y(n)$ is the characteristic function of $x$. (In particular $n \mapsto \{e\}^y(n)$ is total.) Is it always possible to choose $e$ such that, additionally, for any $y'$, $n \mapsto \{e\}^{y'}(n)$ is total?
A:
No, it is not. Let $T=\{t_0, t_1, t_2, ...\}$ be the set of all total indices - that is, all $e$ such that $\Phi_e^X$ is total for all $X$. (The "$\Phi_e^X$" notation is more commonly used nowadays, and I find it much easier to read.) Now consider the set $$A=\{n: \Phi_{t_n}^T(n)=0\}.$$ We have $A\le_TT$, but clearly no total index witnesses this.
Something even stronger is true. For a Turing degree ${\bf d}$, say that $e\in\omega$ is ${\bf d}$-total if for all $X\in{\bf d}$ we have $\Phi_e^X$ is total. For $X\subseteq\omega$, say $e$ is $X$-total if $e$ is $deg(X)$-total. Then we can ask:
If $A\le_TB$, need we always have this witnessed by a $B$-total functional?
Note that the above argument does not address this, and on the face of it this problem seems much harder since the set of $X$-total indices depends on $X$.
However, the answer is still no! Here's a silly proof, via Martin's cone theorem which states that every "nicely definable" set of Turing degrees either contains or is disjoint from a cone (exactly what "nicely definable" means depends on your background set theory, but the Borel case is provable in ZFC alone and is already much more than we need).
Specifically, for $e\in\omega$ let $T_e=\{{\bf d}: e\mbox{ is ${\bf d}$-total}\}$. $T_e$ is clearly Borel (or rather, the set of reals in elements of $T_e$ is Borel), so each $T_e$ contains or is disjoint from a cone. Since the intersection of countably many Turing cones contains again a Turing cone, we have that there is some Turing cone $\mathcal{C}$ satisfying $$\mbox{For all ${\bf d}_0, {\bf d}_1\in$ $\mathcal{C}$ and $e\in\omega$, $e$ is total for ${\bf d}_0$ iff $e$ is total for ${\bf d}_1$.}$$ That is, $\mathcal{C}$ is homogeneous for totality.
Let the set of indices which are total for some (equiv. all) degrees in $\mathcal{C}$ be denoted $\mathcal{T}=\{t_0, t_1, ...\}$. Then since $\mathcal{C}$ is a cone, we can find some set $X\in\mathcal{C}$ with $X\ge_T\mathcal{T}$; now, consider as above the reduction $$\{n: \Phi_{t_n}^X(n)=0\}\le_TX.$$
The point is that even though the set of $X$-total indices seems to depend on $X$, since it's degree-invariant it eventually stabilizes, and then we have sets which compute their own totality predicates. Note that the degree-invariance here is crucial: obviously this same trick fails to give an $X$ which computes $\{e: \Phi_e^X\mbox{ is total}\}$, which in fact always has degree $X''>_TX$. Basically, in a certain sense degree-invariant questions are always "weak": sufficiently powerful reals can answer those questions about themselves.
| |
It has probably crossed your mind at one time or another whether insects are greatly affected by water. You ask yourself questions like: can bees drown? Do bees even drink or need water to survive? What do bees do when they are caught out in the rain? Do they even breathe? These are questions that cross our minds whenever we think along this line of thought.
Do bees breathe? Yes, they do. It's hard to think of insects – as seemingly tiny as they are – having a respiratory system; you wonder whether they've got tiny noses and lungs and blood rushing in tiny veins. You would be terribly off course if you've been thinking along that line. While bees have a respiratory system, it does not in any way resemble that of humans or of most other species that aren't insects. The honey bee's respiratory system consists of three main parts:
- the spiracles: external respiratory openings (10 of them) that control the flow of air into and out of the bee
- the trachea, which has extended arms that expand and form air sacs
- the air sacs (expanded extensions of the trachea arms) that serve as reservoirs for oxygen
How do bees breathe?
Bees, like most insects, breathe by taking in oxygen through the spiracles, which open into airways that branch out to all parts of the bee's body. When oxygen is taken in through the spiracles, it is directed to the part of the bee's body where it is needed. You're probably wondering at this point whether bees go through the hassle that vertebrates must go through to deliver oxygen to where it's needed. The simple answer to that is no: the bee's respiratory system is so efficient that air is deposited into the tissues directly from the spiracles. After air is deposited into the tissues, the spiracles supply the trachea by transferring air into it. The trachea arms widen from the air being pumped in from the spiracles to form sacs where unused air can be stored for later use. So technically, bees have oxygen reservoir tanks. Neat, right?
Bees can also speed up the passage of air into their bodies by contracting the air sacs, thereby raising the rate of oxidation. To further impress on the efficiency of a bee's respiratory system, it should be noted that while the air sacs in bees do not have unlimited capacity, a resting bee stops taking in air while it releases carbon dioxide. This process allows the bee to keep oxygen and carbon dioxide at balanced levels in the body so as to prevent damage to the bee from too much oxygen.
Do bees drink water and do they need it to survive?
Among many, water is considered a sign, or sometimes a source, of life. That isn't necessarily incorrect, since living things need water to survive. Plants, animals and even insects must take in water regularly to be able to continue living. Bees are no exception: "like most other animals, the bodies of honey bees are mostly water. Thus honey bees need to drink water routinely as we do," says Eric Mussen, an apiculturist. Bees also need water for other bee activities, most especially as a solvent for diluting honey and pollen for the bees to swallow. Nectar can also be used as a replacement for water in diluting pollen and honey. In addition to drinking and helping dilute gelatinous food, water is also very essential to the survival of bees, as they need it to keep the brood nest area at the right relative humidity.
Can bees breathe underwater?
Can bees breathe under water? No, they can't, but they sure can last longer submerged underwater than most humans. Unlike humans, bees don't breathe with a nose; they are equipped with spiracles and air sacs that resemble holes and tubes. Bees have very fine control of the spiracles, allowing them to dictate when air is to be taken in. While underwater, bees close the openings into their bodies (the spiracles), effectively blocking water from entering the body, and they are also equipped with fine hydrophobic hairs that help repel liquid from covering the spiracles. Thanks to the air sacs, bees can – for a relatively long time – tap into the stored air without having to take in air from the atmosphere. So if you're being chased by a swarm of Africanized bees, you may want to reconsider jumping into a river or a pool as a means of escape. Your reserve of oxygen will likely run out faster than their patience, unless of course you are Tom Sietas.
What do bees do in the rain?
Like most insects, bees can detect the fall in air pressure that occurs when it is about to rain. When bees detect the drop in air pressure, they tend to pack up and head for the cool safety of their hives, where they bunch together to get warm. Bees that are caught out will not necessarily fly straight for the hive, at least not in light rain; they'll likely just close their spiracles and continue about their business. But if it's a downpour, they find shelter as soon as they can, because they cannot cope with the pelting the rain administers on everything in its path. You'd be surprised to know that there is also competition among insects; during the rain, many insects go into hiding, effectively reducing the competition for food and other substances that are essential to insect life. It wouldn't be so hard to explain why some insects are still found out in the rain: those insects are probably just more hardworking or water-tolerant than their cousins.
Also, many bees, like most other insects, are ectothermic (they are cold blooded) which therefore means that the temperature of their immediate environment greatly affects their body temperature and their body temperature in turn affects their movements. When it rains, bees are generally slower, due in no small part to the cold. The warmer the bee, the faster and more efficient it is. So if you farm a hive, you may want to help regulate the temperature of the hive’s environment to suit the bees’ ectothermic condition if and when possible.
What are the different species of bees and can any breathe underwater?
It is pleasing to think that every living thing you can think of has more than one species of its kind. For bees, there are over 20,000 known species. The most common are bumble bees, honey bees, stingless bees, etc. Of all the known species, none can breathe underwater. Since they all have essentially identical respiratory systems, it is safe to say that none can breathe underwater. Although bees can survive longer underwater than most humans, that still doesn't mean they have the ability to breathe underwater. If, hypothetically, they were to try breathing while submerged, opening the spiracles to let the small amount of oxygen in the water into their bodies, they would bloat and die, as they do not have water outlets able to relieve the pressure from the surrounding water.
Related Questions
Can bees have their hive underwater?
No, bees cannot have their hives under water. Because of surface tension and other basic biological factors, any living thing that isn't aquatic in nature would find movement very difficult underwater. Not only can bees not survive the water pressure for long, they would have to open their spiracles to let in air when their reservoir of oxygen is exhausted, which would kill them as surely as if you sprayed insecticide directly onto their bodies.
Do bees thirst for water?
Yes, bees thirst for water. Water is very essential to the survival of all living organisms, and bees are no different. When the temperature of a beehive becomes extremely high, the water collectors leave the hive in search of water to take back to the hive. It is interesting to note that not all bees forage for water; in a division of labor, a select number are entrusted with the task of collecting water to quench the hive's thirst and help reduce its temperature. Bees do not only take water when thirsty; they also need water to dilute gel-like food substances such as honey for easy feeding. Bees also need water to regulate the relative humidity of their hives.
Can bees drown?
Yes bees can drown. While they can last for a relatively short time on stored air, if they are not taken out of the water, they may eventually die once the oxygen in their air sacs is depleted. | https://schoolofbees.com/can-bees-breathe-underwater/ |
The utility model relates to a biogas slurry biological filtering device. The biogas slurry biological filtering device comprises a box body (1) with the open top and a cover plate (2) covered at the top of the box body (1), wherein the interior of the box body (1) is sequentially divided into a first oxygen-consuming chamber (4), an anaerobic chamber (5) and a second oxygen-consuming chamber (6) by partition plates (3), and the first oxygen-consuming chamber (4), the anaerobic chamber (5) and the second oxygen-consuming chamber (6) are sequentially communicated; the first oxygen-consuming chamber (4) and the second oxygen-consuming chamber (6) are provided with gas inlet ends (7) for communicating with an oxygen source, and the first oxygen-consuming chamber (4) is further provided with a liquid inlet end (8) for communicating with a biogas slurry source; and first flow-separated balls inoculated with oxygen-consuming bacteria are arranged in the first oxygen-consuming chamber (4) and the second oxygen-consuming chamber (6), and second flow-separated balls inoculated with anaerobic bacteria are arranged in the anaerobic chamber (5). The biogas slurry biological filtering device can filter and purify biomass slurry and has the advantages of few process steps and low cost. | |
Q:
Forming a committee of 5 with at least 3 teachers
I am working on the following problem
In how many ways can a committee of 5 be formed from a group of 11 people consisting of 4 teachers and 7 students if the committee must include at least 3 teachers?
So I know that the answer is ${4 \choose 3}{7 \choose 2} + {4 \choose 4}{7 \choose 1} = 91$ and I can follow the reasoning behind this answer, but I still do not understand why my original approach to solving the problem is incorrect.
My thought was to first choose 3 teachers and then add the extra teacher left to the group of students and pick two. In other words
$${4 \choose 3}{8 \choose 2}$$
I thought that both cases of groups with 3 and 4 teachers would be covered by including the teacher in the group of students when choosing two more people to complete the committee, but I seem to overestimate the correct answer (ie the above is equal to 112), but I don't understand why.
Question: What is wrong with my approach to this problem?
A:
Your approach counts some possibilities twice.
If you have a committee of $4$ teachers and $1$ student, it will be counted $4$ times in your formula, because you are differentiating between the possible ways of selecting $3$ teachers and then another one. For example, for a commitee consisting of $4$ professors $A,B,C,D$ and a student $E$ you are counting
$\{A,B,C\}+\{D,E\}$
$\{A,B,D\}+\{C,E\}$
$\{A,C,D\}+\{B,E\}$
$\{B,C,D\}+\{A,E\}$
as four different options.
Notice, by the way, that you have $21$ too many, which is exactly $3$ times the seven ways to choose only $1$ student and $4$ teachers (each such committee is counted $4$ times instead of once).
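(A brute-force check, my addition rather than part of the argument: enumerating all committees directly confirms the count.)

from itertools import combinations

people = ["T1", "T2", "T3", "T4"] + ["S%d" % i for i in range(1, 8)]
count = sum(1 for c in combinations(people, 5)
            if sum(p[0] == "T" for p in c) >= 3)
print(count)   # 91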
| |
The invention discloses a marble chamfering device. The marble chamfering device comprises a workbench. The marble chamfering device is characterized in that a group of vertical sliding rails is arranged on the upper surface of the workbench, a group of first electric small cars are arranged on the group of vertical sliding rails, first mounting blocks are arranged on the first electric small cars, first linear motors with upward telescopic ends are arranged on the upper surfaces of the first mounting blocks, a controller is arranged on the surface of one side of the workbench, an electric supply connector is arranged on one side of the controller, a supply terminal of the controller is connected with the electric supply connector through a guide wire, and the input end of the controller is correspondingly connected with the first electric small cars, the first linear motors, the second linear motors, the mini-type linear motors and the second mini-type linear motors through guide wires. The marble chamfering device has the beneficial effects that the device is simple to operate and has low maintenance cost, can adjust the chamfering angle at any time, the chamfering speed is high, the working efficiency is improved, the device is suitable for some small and medium-sized enterprises, use is convenient, and the novelty is high.
Story Highlights
Kingston Mayor, Senator Councillor Delroy Williams, says work to redesign markets operated by the Kingston and St. Andrew Municipal Corporation (KSAMC) is slated for completion within three months.
Senator Williams, who was speaking during a digital town hall meeting on Tuesday (March 31), said this is being undertaken by the Corporation’s engineers in order to facilitate greater management of the market districts.
“The Municipal Corporation has been spending a lot of time over the last 18 months crafting and redesigning the markets. We have to redesign for enforcement, for commerce, [and] for traffic flow and movement of pedestrians,” he said.
Mayor Williams noted that it is challenging for the Corporation to institute enforcement measures without first renovating the market space.
“So as soon as we renovate the markets and redesign the entire market district, there will be a lot of changes and enforcement to accompany that,” he said.
Meanwhile, Minister of Local Government and Community Development, Hon. Desmond McKenzie, indicated that the Ministry is conducting a review of markets islandwide, adding that an accompanying mapping exercise has been completed.
There are 22,000 registered vendors and 77 markets across the country, with the KSAMC having responsibility for 30 of those facilities.
A total of 17 of the markets in the Corporate Area are situated in downtown Kingston. | |
Derrygonnelly's wing half-back, Oisin Smyth, comes from a long line of Smyth footballers who've pulled on the purple and gold. In 1924, the founding year of the Harps club, Charlie Smyth, Oisin's great-grandfather, played for the club. Then followed Oisin's grandfather, Sean, who is current joint president along with Francie Rasdale.
From 1988 to 1996, Oisin’s father Niall donned the jersey and in 1995 he played at centre half back on Derrygonnelly’s first ever senior championship winning side. This weekend Oisin will follow in his father’s footsteps but this time on a bigger stage, the Ulster final and he too will wear the number 5 jersey with pride.
Three years ago, at the age of 18, Oisin made his championship debut as a substitute against Roslea in the 2018 quarter-final. While he has three senior championships to his name, it really took until this season to nail down a starting position, a feat that he puts down to pre-season training.
“Being a wee bit more prepared (was key). Obviously, at underage level I would’ve played quite a bit but the level of preparation is completely different to try to break into the (senior) team.
“Realising that I had to get myself into really good shape and be available for the team when the games are on during the league and stuff” has ensured Smyth is a regular on the team sheet now and has gained great experience and confidence.
“Starting games now and the more games I’ve started, the nerves have lessened and I’m a bit calmer and have learned how to use them to my advantage.
“The more you play in Ulster Club the more you understand it. I hadn’t played in Ulster Club at senior level but these few games have really helped me. As a team we know what level to expect. It’s very different to playing in your own county championship. In Ulster we tend to play freer and let the shackles off.”
Oisin is in his final year at university in Dundee and admits football is in the blood.
“You don’t really have a choice in my house when it comes to football” he laughs, “they throw the colours on you and out you go. As soon as I could walk really, dad had me out at the pitch, whether I was old enough to play with a team or not I don’t know, but I don’t think there was any choice in who I was playing for or what I was doing.”
The Smyth family live in Enniskillen, and Brewster Park is closer to them than Canon Maguire Park, but Oisin has never had any doubt about where his allegiance lies, recalling early memories at his grandparents’ house in Derrygonnelly.
Here is how to measure the PRC length with a set of distance measurements in the optical setup.
We need to take distance measurements between reference points on each mirror suspension. For the large suspensions (SOS), which are used for the BS, PRM and ITMs, the reference points are the corners of the second rectangular base: not the one directly in contact with the optical bench (since the chamfers make it difficult to define a clear corner), but the rectangular one just above it. For the small suspensions (TT) the points are directly the corners of the base plates.
From the mechanical drawings of the two kinds of suspensions I got the distances between the mirror centers and the reference corners. The mirror is not centered in the base, so it is a good idea to cross-check that the numbers are correct with some measurements on the dummy suspensions.
I assumed that the dimensions of the mirrors, as well as the beam incidence angles, are known, so we don't need to measure them again. Small errors in the angles should have a small impact on the results.
I wrote a MATLAB script that takes the measured distances as input and produces the optical path lengths. The script also produces a drawing of the setup as reconstructed, showing the measurement points, the mirrors, the reference base plates, and the beam path. Here is an example output that can be used to understand which five distances have to be measured. I used dummy measured distances to produce it.
The beam path in vacuum is drawn in red and the beam path in the substrate in magenta. The mirrors are the blue rectangles inside the reference bases, which are in black. The thick lines are the HR faces. The green points are the measurement points and the green lines the distances to be measured. The names on the measurement lines are those used in the MATLAB script.
The MATLAB scripts are attached to this elog. The main file is survey_v2.m, which contains all the parameters and the measured values. Update it with the real numbers and run it to get the results, including the graphic output. The other files are auxiliary functions to create the graphics. I checked the code and the computations many times, but I can't be sure that there are no errors, since there's no way to check whether the output is correct... The plot is produced in a way which is somewhat independent of the computations, so if it makes sense this gives at least a self-consistency test.
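% --- Excerpt from survey_v2.m: the measured distances and the auxiliary drawing functions ---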
global sos_lx sos_ly sos_cx sos_cy tt_lx tt_ly tt_cx tt_cy  % base-plate dimensions and mirror-center offsets for the SOS and TT suspensions (inferred from the names)
%% Survey of the PRC length
%% measured distances (dummy placeholder values -- replace with the real survey numbers)
d_MB2_MY  = 2000.0;
d_MB3_MX  = 2000.0;
d_MB1_M31 = 400.0;
d_M32_M21 = 3000.0;
d_M22_MP  = 2000.0;
% Euclidean distance between two points given as coordinate vectors
function d = distance(c1, c2)
    d = sqrt(sum((c1-c2).^2));
end
% draw a beam segment between points c1 and c2
function draw_beam(c1, c2, color)
    plot( [c1(1), c2(1)], [c1(2), c2(2)], color, 'LineWidth', 2)
end
% draw a measured distance as a thin line, with its name printed just above the midpoint
function draw_measurement(c1, c2, color, name)
    plot( [c1(1), c2(1)], [c1(2), c2(2)], color)
    text( (c1(1)+c2(1))/2, (c1(2)+c2(2))/2 + 20, name, ...
        'FontSize', 5, 'HorizontalAlignment', 'center')  % the listing is truncated here in the original; the call is closed minimally
end
Peer review is a cornerstone of the academic publication process but can be subject to the flaws of the humans who perform it. Evidence suggests subconscious biases influence one's ability to objectively evaluate work: In a controlled experiment with two disjoint program committees, the ACM International Conference on Web Search and Data Mining (WSDM'17) found that reviewers with author information were 1.76x more likely to recommend acceptance of papers from famous authors, and 1.67x more likely to recommend acceptance of papers from top institutions.6 A study of three years of the Evolution of Languages conference (2012, 2014, and 2016) found that, when reviewers knew author identities, review scores for papers with male-first authors were 19% higher, and for papers with female-first authors 4% lower.4 In a medical discipline, U.S. reviewers were more likely to recommend acceptance of papers from U.S.-based institutions.2
These biases can affect anyone, regardless of the evaluator's race and gender.3 Luckily, double-blind review can mitigate these effects1,2,6 and reduce the perception of bias,5 making it a constructive step toward a review system that objectively evaluates papers based strictly on the quality of the work.
Three conferences in software engineering and programming languages held in 2016, namely the IEEE/ACM International Conference on Automated Software Engineering (ASE), the ACM International Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), and the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), collected data on anonymization effectiveness, which we^a use to assess the degree to which reviewers were able to successfully deanonymize the papers' authors. We find that anonymization is imperfect but fairly effective: 70%–86% of the reviews were submitted with no author guesses, and 74%–90% of reviews were submitted with no correct guesses. Reviewers who believe themselves to be experts on a paper's topic were more likely to attempt to guess author identities but no more likely to guess correctly. Overall, we strongly support the continued use of double-blind review, finding the extra administrative effort minimal and well worth the benefits.
The authors submitting to ASE 2016, OOPSLA 2016, and PLDI 2016 were instructed to omit author information from the author block and obscure, to the best of their ability, identifying information in the paper. PLDI authors were also instructed not to advertise their work. ASE desk-rejected submissions that listed author information on the first page, but not those that inadvertently revealed such information in the text. Authors of OOPSLA submissions who revealed author identities were instructed to remove the identities, which they did, and no paper was desk-rejected for this reason. PLDI desk-rejected submissions that revealed author identities in any way.
The review forms included optional questions about author identities, the answers to which were only accessible to the PC chairs. The questions asked if the reviewer thought he or she knew the identity of at least one author, and if so, to make a guess and to select what informed the guess. The data considered here refers to the first submitted version of each review. For ASE, author identities were revealed to reviewers immediately after submission of an initial review; for OOPSLA, ahead of the PC meeting; for PLDI, only for accepted papers, after all acceptance decisions were made.
Threats to validity. Reviewers were urged to provide a guess if they thought they knew an author. A lack of a guess could signify not following those instructions. However, this risk is small; for example, OOPSLA PC members were allowed to opt out uniformly and yet 83% of the PC members participated. Asking reviewers if they could guess author identities may have affected their behavior: they may not have thought about it had they not been asked. Data about reviewers' confidence in guesses may affect our conclusions. Reviewers could submit multiple guesses per paper and be considered correct if at least one guess matched, so making many uninformed guesses could be considered correct, but we did not observe this phenomenon. In a form of selection bias, all conferences' review processes were chaired by (and this Viewpoint is written by) researchers who support double-blind review.
For the three conferences, 70%–86% of reviews were submitted without guesses, suggesting that reviewers typically did not believe they knew or were not concerned with who wrote most of the papers they reviewed. Figure 1 summarizes the number of reviewers, papers, and reviews processed by each conference, and the distributions of author identity guesses.
Figure 1. Papers, reviews, reviewers, and author guesses. Reviewers include those on the program and external committees, but exclude chairs. All papers received at least three reviews; review load was non-uniform.
When reviewers did guess, they were more likely to be correct (ASE 72% of guesses were correct, OOPSLA 85%, and PLDI 74%). However, 75% of ASE, 50% of OOPSLA, and 44% of PLDI papers had no reviewers correctly guess even one author, and most reviews contained no correct guess (ASE 90%, OOPSLA 74%, PLDI 81%).
Are experts more likely to guess and guess correctly? All reviews included a self-reported assessment of reviewer expertise (X for expert, Y for knowledgeable, and Z for informed outsider). Figure 2 summarizes guess incidence and guess correctness by reviewer expertise. For each conference, X reviewers were statistically significantly more likely to guess than Y and Z reviewers (p < 0.05). But the differences in guess correctness were not significant, except that the Z reviewers for PLDI were statistically significantly correct less often than the X and Y reviewers (p < 0.05). We conclude that reviewers who considered themselves experts were more likely to guess author identities, but were no more likely to guess correctly.
Figure 2. Guess rate, and correct guess rate, by self-reported reviewer expertise score (X: expert, Y: knowledgeable, Z: informed outsider).
Are papers frequently poorly anonymized? One possible reason for deanonymization is poor anonymization. Poorly anonymized papers may have more reviewers guess, and also a higher correct guess rate. Figure 3 shows the distribution of papers by the number of reviewers who attempted to guess the authors. The largest proportion of papers (26%–30%) had only a single reviewer attempt to guess. Fewer papers had more guesses. The bar shading indicates the fractions of the author identity guesses that are correct; papers with more guesses have lower rates of incorrect guesses. Combining the three conferences' data, the χ² statistic indicates that the rates of correct guessing for papers with one, two, and three or more guesses are statistically significantly different (p < 0.05). This comparison is also statistically significant for OOPSLA alone, but not for ASE and PLDI. Comparing guess rates (we use one-tailed z tests for all population proportion comparisons) between paper groups directly: For OOPSLA, the rate of correct guessing is statistically significantly different between one-guess papers and each of the other two paper groups. For PLDI, the same is true between one-guess and three-plus-guess paper groups. This evidence suggests a minority of papers may be easy to unblind. For ASE, only 1.5% of the papers had three or more guesses, while for PLDI, 13% did. However, for PLDI, 40% of all the guesses corresponded to those 13% of the papers, so improving the anonymization of a relatively small number of papers would potentially significantly reduce the number of guesses. Since the three conferences only began using the double-blind review process recently, the occurrences of insufficient anonymization are likely to decrease as authors gain more experience with anonymizing submissions, further increasing double-blind effectiveness.
Figure 3. Distributions of papers by number of guesses. The bar shading indicates the fraction of the guesses that are correct.
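To make the proportion comparisons above concrete, here is a minimal Python sketch of a one-tailed two-proportion z-test of the kind mentioned parenthetically; the counts in the example call are invented for illustration and are not the conference data.
from math import sqrt, erf

def z_test_one_tailed(x1, n1, x2, n2):
    """Test H0: p1 = p2 against H1: p1 > p2; returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                 # pooled proportion under H0
    z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))     # upper tail of the standard normal
    return z, p_value

# Hypothetical counts: 45 of 100 vs 30 of 100 correct guesses in two groups.
z, p = z_test_one_tailed(45, 100, 30, 100)
print(f"z = {z:.3f}, one-tailed p = {p:.4f}")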
Are papers with guessed authors more likely to be accepted? We investigated if paper acceptance correlated with either the reviewers' guesses or with correct guesses. Figure 4 shows the acceptance rate for each conference for papers without guesses, with at least one correct guess, and with all incorrect guesses. We observed different behavior at the three conferences: ASE submissions were accepted at statistically the same rate regardless of reviewer guessing behavior. Additional data available for ASE shows that for each review's paper rating (strong accept, weak accept, weak reject, strong reject), there were no statistically significant differences in acceptance rates for submissions with different guessing behavior. OOPSLA and PLDI submissions with no guesses were less likely to be accepted (p < 0.05) than those with at least one correct guess. PLDI submissions with no guesses were also less likely to be accepted (p < 0.05) than submissions with all incorrect guesses (for OOPSLA, for the same test, p = 0.57). One possible explanation is that OOPSLA and PLDI reviewers were more likely to affiliate work they perceived as of higher quality with known researchers, and thus more willing to guess the authors of submissions they wanted to accept.
Figure 4. Acceptance rate of papers by reviewer guessing behavior.
How do reviewers deanonymize? OOPSLA and PLDI reviewers were asked if the use of citations revealed the authors. Of the reviews with guesses, 37% (11% of all reviews) and 44% (11% of all reviews) said they did, respectively. The ASE reviewers were asked what informed their guesses. The answers were guessing based on paper topic (75 responses); obvious unblinding via reference to previous work, dataset, or source code (31); having previously reviewed or read a draft (21); or having seen a talk (3). The results suggest that some deanonymization may be unavoidable. Some reviewers discovered GitHub repositories or project websites while searching for related work to inform their reviews. Some submissions represented clear extensions of or indicated close familiarity with the authors' prior work. However, there also exist straightforward opportunities to improve anonymization. For example, community familiarity with anonymization, consistent norms, and clear guidelines could address the incidence of direct unblinding. However, multiple times at the PC meetings, the PC chairs heard a PC member remark about having been sure another PC member was a paper author, but being wrong. Reviewers may be overconfident, and sometimes wrong, when they think they know an author through indirect unblinding.
After completing the process, the PC chairs of all three conferences reflected on the successes and challenges of double-blind review. All PC chairs were strongly supportive of continuing to use double-blind review in the future. All felt that double-blind review mitigated effects of (subconscious) bias, which is the primary goal of using double-blind review. Some PC members also felt so, indicating anecdotally that they were more confident their reviews and decisions had less bias. One PC member remarked that double-blind review is liberating, since it allows for evaluation without concern about the impact on the careers of people they know personally.
All PC chairs have arguments in support of their respective decisions on the timing of revealing the authors (that is, after review submission, before PC meeting, or only for accepted papers). The PLDI PC chair advocated strongly for full double-blind, which enables rejected papers to be anonymously resubmitted to other double-blind venues with common reviewers, addressing one cause of deanonymization. The ASE PC chairs observed that in a couple of cases, revealing author identities helped to better understand a paper's contribution and value. The PLDI PC chair revealed author identities on request, when deemed absolutely necessary to assess the paper. This happened extremely rarely, and could provide the benefit observed by the ASE PC chairs without sacrificing other benefits. That said, one PC member remarked that one benefit of serving on a PC is learning who is working on what; full anonymization eliminates learning the who, though still allows learning the what.
All PC chairs were strongly supportive of continuing to use double-blind review in the future.
Overall, none of the PC chairs felt the extra administrative burden imposed by double-blind review was large. The ASE PC chairs recruited two review process chairs to assist, and all felt the effort required was reasonable. The OOPSLA PC chair noted the level of effort required to implement double-blind review, including the management of conflicts of interest, was not high. He observed that it was critical to provide clear guidance to the authors on how to anonymize papers (for example, http://2016.splash-con.org/track/splash-2016-oopsla#FAQ-on-Double-Blind-Reviewing). PLDI allowed authors to either anonymize artifacts (such as source code) or to submit non-anonymized versions to the PC chair, who distributed to reviewers when appropriate, on demand. The PC chair reported this presented only a trivial additional administrative burden.
The primary source of additional administration in double-blind review is conflict of interest management. This task is simplified by conference management software that straightforwardly allows authors and reviewers to declare conflicts based on names and affiliations, and chairs to quickly cross-check declared conflicts. ASE PC chairs worked with the CyberChairPro maintainer to support this task. Neither ASE nor OOPSLA observed unanticipated conflicts discovered when author identities were revealed. The PLDI PC chair managed conflicts of interest more creatively, creating a script that validated author-declared conflicts by emailing PC members lists of potentially conflicted authors mixed with a random selection of other authors, and asking the PC member to identify conflicts. The PC chair examined asymmetrically declared conflicts and contacted authors regarding their reasoning. This identified erroneous conflicts in rare instances. None of the PC chairs found identifying conflicts overly burdensome. The PLDI PC chair reiterated that the burden of full double-blind reviewing is well worth maintaining the process integrity throughout the entire process, and for future resubmissions.
Data from ASE 2016, OOPSLA 2016, and PLDI 2016 suggest that, while anonymization is imperfect, it is fairly effective. The PC chairs of all three conferences strongly support the continued use of double-blind review, find it effective at mitigating (both conscious and subconscious) bias in reviewing, and judge the extra administrative burden to be relatively minor and well worth the benefits. Technological advances and the now-developed author instructions reduce the burden. Having a dedicated organizational position to support double-blind review can also help. The ASE and OOPSLA PC chairs point out some benefits of revealing author identities midprocess, while the PLDI PC chair argues some of those benefits can be preserved in a full double-blind review process that only reveals the author identities of accepted papers, while providing significant additional benefits, such as mitigating bias throughout the entire process and preserving author anonymity for rejected paper resubmissions.
1. Budden, et al. Double-blind review favours increased representation of female authors. Trends in Ecology and Evolution 23, 1 (Jan. 2008), 4–6.
2. Gastroenterology, Bethesda, MD, USA. U.S. and non-U.S. submissions: An analysis of reviewer bias. JAMA 280, 3 (July 1998), 246–247.
3. Moss-Racusin, C.A. et al. Science faculty's subtle gender biases favor male students. PNAS 109, 41 (Oct. 2012), 16474–16479.
4. Roberts, S.G. and Verhoef, T. Double-blind reviewing at EvoLang 11 reveals gender bias. J. of Language Evolution 1, 2 (Feb. 2016), 163–167.
5. Snodgrass, R. Single- versus double-blind reviewing: An analysis of the literature. SIGMOD Record 35, 3 (May 2006), 8–21.
6. Tomkins, A., Zhang, M., and Heavlin, W.D. Single versus double-blind reviewing at WSDM 2017. CoRR, abs/1702.00502, 2017.
a. Sven Apel and Sarfraz Khurshid were the ASE'16 PC chairs, Claire Le Goues and Yuriy Brun were the ASE'16 review process chairs, Yannis Smaragdakis was the OOPSLA'16 PC chair, and Emery Berger was the PLDI'16 PC chair.
Kathryn McKinley suggested an early draft of the reviewer questions used by OOPSLA and PLDI. This work is supported in part by the National Science Foundation under grants CCF-1319688, CCF-1453474, CCF-1563797, CCF-1564162, and CNS-1239498, and by the German Research Foundation (AP 206/6).
The Digital Library is published by the Association for Computing Machinery. Copyright © 2018 ACM, Inc.
Very interesting! It seems then that manual reviewers are no better at guessing than using a machine learning-based approach based on the number of times authors appear in the references, see https://peerj.com/preprints/1757v1.pdf.
Interesting. In a recent double-blind review process we came across one issue: multiple entries by the same author(s), involving self-plagiarisation. The double blind process made this very hard to detect; theoretically a CRP system could detect and warn without revealing author identities, but ours at least did not do so.
The results reported here are interesting. However, since some of the figures cited in the introductory paragraph are being repeated in online discussions, it seems worth pointing out the following:
1. The 1.76x and 1.67x odds multipliers quoted from Tomkins et al. for the effects of author fame and institution reputation are from an early version of that paper. The numbers from the latest version (v6) on arXiv are smaller (1.63 and 1.58, respectively). This most recent version was probably not available when this Viewpoint was submitted or initially drafted. Version 5, posted May 2017, has factors of 1.66 and 1.61, respectively. Of course, these are still large effects.
2. The confidence intervals for these odds multipliers from Tomkins et al. are quite large: the bottom end of the range for the "author fame" correlation coefficient corresponds to an odds multiplier of 1.05. In light of this, and the changes from different version of the paper, I think one should be careful about placing too much emphasis on their exact size.
3. I think it is worth noting some negative findings from the cited papers, particularly when they contradict the other mentioned positive findings. For instance, the Viewpoint mentions that in a certain medical field, earlier work found a bias from American reviewers in favor of American authors. On the other hand, Tomkins et al. find no significant bias based on shared nationality of reviewers and authors. Additionally, Tomkins et al. do not find a statistically significant bias based on author gender (though they argue that when placed in context of a larger meta-analysis of other studies, these results may be considered significant). | https://cacm.acm.org/magazines/2018/6/228027-effectiveness-of-anonymization-in-double-blind-review/fulltext?mobile=true?mobile=false |
This Developmental/Exploratory study will use theoretical models that have been used in medicine to promote access and improve outcomes as the point of departure for examining why low-income and socially disadvantaged Hispanic, African American, and Caucasian parents do not obtain dental care for their children. The short-term goal of this proposal is to determine the psychosocial, structural and cultural factors that impact parental help seeking and to refine and evaluate an intervention to enhance parental dental care seeking behavior. The long-term goal of this pilot research is to test a community, family, and practice approach to improve access to care in an R01 multi-site efficacy trial. The pilot research will be accomplished in two phases. In Phase I we will use focus groups to develop a family and community based intervention and the measures that will be used in Phase II. In Phase II the intervention will be pilot tested using a randomized controlled repeated-measures design in which families will be assigned to an intervention group and a control group. Data will be collected from all participating parents by telephone interview at three assessment periods prior to and immediately after the family intervention implementation. Data will be collected a third time from a subsample of parents in the intervention group who visit the dental office and all parents in the control group. Analysis of covariance and logistic regression procedures will be used to analyze the direct effects of the intervention. An evaluation of the intervention processes will also be conducted. The results of this pilot project are needed to directly support implementation of an R01 multi-site intervention efficacy trial that will (1) empirically cross-validate the measures used in the pilot study and refine the intervention model, and (2) test the direct and indirect effects of the interventions. The proposed research addresses the NIDCR's Health Disparities Plan, which recognizes the need for patient-oriented research to understand the bases of health disparities: specifically, the sociological, anthropological and political underpinnings of health-care seeking behavior.
| |
Design an arrangement of display boards in the school hall which fits the requirements of different people.
Can you work out how many cubes were used to make this open box? What size of open box could you make if you had 112 cubes?
Hover your mouse over the counters to see which ones will be removed. Click to remove them. The winner is the last one to remove a counter. How you can make sure you win?
Can you cover the camel with these pieces?
What happens when you try and fit the triomino pieces into these two grids?
Can you shunt the trucks so that the Cattle truck and the Sheep truck change places and the Engine is back on the main line?
What is the best way to shunt these carriages so that each train can continue its journey?
How will you go about finding all the jigsaw pieces that have one peg and one hole?
A magician took a suit of thirteen cards and held them in his hand face down. Every card he revealed had the same value as the one he had just finished spelling. How did this work?
Swap the stars with the moons, using only knights' moves (as on a chess board). What is the smallest number of moves possible?
A toy has a regular tetrahedron, a cube and a base with triangular and square hollows. If you fit a shape into the correct hollow a bell rings. How many times does the bell ring in a complete game?
You have 4 red and 5 blue counters. How many ways can they be placed on a 3 by 3 grid so that all the rows columns and diagonals have an even number of red counters?
How can you arrange these 10 matches in four piles so that when you move one match from three of the piles into the fourth, you end up with the same arrangement?
This challenge involves eight three-cube models made from interlocking cubes. Investigate different ways of putting the models together then compare your constructions.
In how many ways can you fit two of these yellow triangles together? Can you predict the number of ways two blue triangles can be fitted together?
This task, written for the National Young Mathematicians' Award 2016, involves open-topped boxes made with interlocking cubes. Explore the number of units of paint that are needed to cover the boxes...
What is the smallest cuboid that you can put in this box so that you cannot fit another that's the same into it?
If you split the square into these two pieces, it is possible to fit the pieces together again to make a new shape. How many new shapes can you make?
Move just three of the circles so that the triangle faces in the opposite direction.
How many different ways can you find of fitting five hexagons together? How will you know you have found all the ways?
Take a rectangle of paper and fold it in half, and half again, to make four smaller rectangles. How many different ways can you fold it up?
How many DIFFERENT quadrilaterals can be made by joining the dots on the 8-point circle?
A tetromino is made up of four squares joined edge to edge. Can this tetromino, together with 15 copies of itself, be used to cover an eight by eight chessboard?
10 space travellers are waiting to board their spaceships. There are two rows of seats in the waiting room. Using the rules, where are they all sitting? Can you find all the possible ways?
What is the least number of moves you can take to rearrange the bears so that no bear is next to a bear of the same colour?
Find your way through the grid starting at 2 and following these operations. What number do you end on?
Cut four triangles from a square as shown in the picture. How many different shapes can you make by fitting the four triangles back together?
One face of a regular tetrahedron is painted blue and each of the remaining faces are painted using one of the colours red, green or yellow. How many different possibilities are there?
Can you find ways of joining cubes together so that 28 faces are visible?
When I fold a 0-20 number line, I end up with 'stacks' of numbers on top of each other. These challenges involve varying the length of the number line and investigating the 'stack totals'.
A dog is looking for a good place to bury his bone. Can you work out where he started and ended in each case? What possible routes could he have taken?
In this town, houses are built with one room for each person. There are some families of seven people living in the town. In how many different ways can they build their houses?
What is the greatest number of counters you can place on the grid below without four of them lying at the corners of a square?
How many different triangles can you make on a circular pegboard that has nine pegs?
Try to picture these buildings of cubes in your head. Can you make them to check whether you had imagined them correctly?
How can you arrange the 5 cubes so that you need the smallest number of Brush Loads of paint to cover them? Try with other numbers of cubes as well.
Take it in turns to place a domino on the grid. One to be placed horizontally and the other vertically. Can you make it impossible for your opponent to play?
This article for teachers describes how modelling number properties involving multiplication using an array of objects not only allows children to represent their thinking with concrete materials...
A hundred square has been printed on both sides of a piece of paper. What is on the back of 100? 58? 23? 19?
Can you predict when you'll be clapping and when you'll be clicking if you start this rhythm? How about when a friend begins a new rhythm at the same time?
Use the three triangles to fill these outline shapes. Perhaps you can create some of your own shapes for a friend to fill?
Imagine a wheel with different markings painted on it at regular intervals. Can you predict the colour of the 18th mark? The 100th mark?
In each of the pictures the invitation is for you to: Count what you see. Identify how you think the pattern would continue.
Can you cut a regular hexagon into two pieces to make a parallelogram? Try cutting it into three pieces to make a rhombus!
Here you see the front and back views of a dodecahedron. Each vertex has been numbered so that the numbers around each pentagonal face add up to 65. Can you find all the missing numbers?
What happens to the area of a square if you double the length of the sides? Try the same thing with rectangles, diamonds and other shapes. How do the four smaller ones fit into the larger one?
Investigate the number of paths you can take from one vertex to another in these 3D shapes. Is it possible to take an odd number and an even number of paths to the same vertex?
Players take it in turns to choose a dot on the grid. The winner is the first to have four dots that can be joined to form a square.
Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 will be?
Make one big triangle so the numbers that touch on the small triangles add to 10. | https://nrich.maths.org/public/topic.php?code=-68&cl=1&cldcmpid=2185 |
Linear Optimization (aka Linear Programming, LP) is one of the earliest improvements in Operations Research history. Despite the fact that more than 73 years have passed since Kantorovich modeled the first LP, it is still widely used in many sectors. In order to use some algorithms (e.g., for Simplex the problem should be in standard form), the model should be in a certain form. Here, we will briefly show how a problem can be transformed so that these algorithms can be applied.
1) Objective Function
A problem can easily be changed to minimization or maximization type. Assume our problem is given in the form
$$\max \; c^T x$$
and we want to solve this problem as a minimization problem. All we need to do is multiply the objective function by the magical number, -1, and change max to min. So the equivalent form is
$$\min \; -c^T x$$
(the optimal solutions are the same; the optimal objective values differ only in sign).
2) Constraint – Inequality direction: $\geq$ to $\leq$ and vice versa
A greater-than-or-equal-to ($\geq$) inequality can be transformed into a less-than-or-equal-to ($\leq$) one, similarly using our magic number, -1. We need to multiply both sides by -1 and change the direction:
$$a^T x \geq b \quad \Longleftrightarrow \quad -a^T x \leq -b$$
3) Constraint – Inequality to Equality
While transforming an inequality into an equality, we need to add a new (slack) nonnegative variable. The direction of the inequality is important, since it changes the sign of the slack variable. For a $\leq$ constraint we can add a slack variable with a +1 coefficient. So, if we are transforming the following inequality
$$a^T x \leq b$$
into an equality, the new form should be
$$a^T x + s = b, \quad s \geq 0.$$
And for a $\geq$ inequality
$$a^T x \geq b$$
we can perform the transformation as
$$a^T x - s = b, \quad s \geq 0.$$
4) Constraint – Equality to Inequality
This time things are a little bit different. Here, we can use the logic that an equality is the intersection of a greater-than-or-equal-to and a less-than-or-equal-to inequality. Assume we have
$$a^T x = b,$$
and then we can write it in the following form:
$$a^T x \leq b \quad \text{and} \quad a^T x \geq b.$$
5) Variable – Forcing variables to be nonnegative
a) If a variable is set to be nonpositive,
$$x \leq 0,$$
then it can easily be transformed by simply multiplying by -1. Let $x' = -x$ and we get
$$x' \geq 0,$$
and you need to replace all $x$'s with $-x'$ in the constraints as well.
b) If a variable is free, then there is a traditional way: you need to add two nonnegative variables to define $x$. Let $x^+$ and $x^-$ be our nonnegative variables ($x^+, x^- \geq 0$), then set
$$x = x^+ - x^-.$$
This is meaningful because every number can be written as the difference of two nonnegative numbers. So, practically, we need to change all $x$'s in the constraints to $x^+ - x^-$.
This transformation is especially useful if you will use the Simplex Algorithm. This is because $x^+$ and $x^-$ are obviously dependent (their columns differ only in sign), so they cannot both be basic variables at the same time. Therefore, for instance, if $x = -5$, Simplex will select $x^- = 5$ but not $x^+$. So it's a useful transformation.
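Putting the pieces together, here is a minimal Python sketch (using NumPy) of these transformations. It rewrites max $c^T x$ subject to $Ax \geq b$ with free $x$ into the standard form min $\tilde{c}^T z$ subject to $\tilde{A} z = \tilde{b}$, $z \geq 0$; the variable names are illustrative, not part of the original post.
import numpy as np

def to_standard_form(c, A, b):
    """Rewrite  max c^T x  s.t.  A x >= b, x free
    as         min c2^T z  s.t.  A2 z = b2, z >= 0.

    Steps (matching the sections above):
      1)  max c^T x  ->  min -c^T x
      2)  A x >= b   ->  -A x <= -b
      5b) free x     ->  x = xp - xm, with xp, xm >= 0
      3)  <= rows    ->  equalities via slack variables s >= 0
    """
    m, n = A.shape
    A_le, b_le = -A, -b                          # step 2: flip >= into <=
    A2 = np.hstack([A_le, -A_le, np.eye(m)])     # columns for xp, xm, s
    b2 = b_le
    c2 = np.concatenate([-c, c, np.zeros(m)])    # step 1 applied to xp - xm
    return c2, A2, b2

# Tiny example: max x1 + 2*x2  s.t.  x1 + x2 >= 4
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([4.0])
c2, A2, b2 = to_standard_form(c, A, b)
print(A2)   # one row, 2n + m = 5 columns: [-1, -1, 1, 1, 1]
print(b2)   # [-4]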
Reference: Sobel, Joel. “Linear Programming Notes V Problem Transformations.” N.p., n.d. Web. 5 Sept. 2012. <http://www.econ.ucsd.edu/~jsobel/172aw02/notes5.pdf>. | https://orcomplete.com/uncategorized/sertalpbilal/quick-tip-problem-transformations-for-linear-optimization |
Most people think that Quantum Mechanics is a successful theory, and that it is definitive. But it is not. Quantum Mechanics has failed; it has some inconsistencies, and quantum physicists themselves know it.
Precisely because it has failed, the quantum field theorists created a new theory in the 20th century, named Quantum Field Theory (QFT), so as to eliminate the inconsistencies of Quantum Mechanics.
Quantum Mechanics is based on several fundamental principles, such as Bohr's Principle of Complementarity, Heisenberg's uncertainty principle, Pauli's Exclusion Principle, de Broglie's duality, etc.
The quantum field theorists' attempt to eliminate the inconsistencies of Quantum Mechanics proceeds via mathematics, while keeping the fundamental principles of the theory. They believe it is possible to eliminate the inconsistencies by improving the mathematics of the theory, with no need to change the principles.
- then it would be expected that two neutrons should form a dineutron, but it is known that dineutrons do not exist in nature.
But Heisenberg's isospin is a mere mathematical concept, and it cannot explain why two neutrons do not form the dineutron, because only a force of repulsion can separate two neutrons interacting through the attraction of the strong force. A mathematical concept cannot create a repulsive force, and so isospin is an unacceptable explanation.
However, as the quantum field theorists try to eliminate the inconsistencies via mathematics, it seems impossible to eliminate the inconsistency of the isospin by improving the mathematics proposed by Heisenberg, because mathematics cannot create a repulsive force to separate two neutrons attracted by the strong force. It seems necessary to change the physical structure of the neutron assumed in standard Nuclear Physics, because only in this way would it be possible to explain the existence of a repulsive force between two neutrons when they interact at a distance of 2 fm.
Another inconsistency concerns de Broglie's duality. His hypothesis, advanced in 1924, says that particles of matter such as electrons have wave-like properties. His hypothesis was supposedly confirmed by the Davisson-Germer experiment in 1927, and the scientific community concluded that duality is a property of matter, as proposed by de Broglie.
But there was another, different interpretation of the Davisson-Germer experiment, because Schrödinger had discovered a trembling motion of the electron in Dirac's electron equation. Schrödinger interpreted it as a helical trajectory, and so under this new interpretation duality would not be a property of matter; it would actually be a manifestation of the helical trajectory of the elementary particles.
Heisenberg did not accept the hypothesis of the helical trajectory, because that hypothesis would introduce undesirable conjectures into Theoretical Physics. So the theorists adopted Heisenberg's proposal of treating the electron's trembling motion in Dirac's equation in a purely mathematical way (without any physical meaning, contrary to what Schrödinger had proposed), and thus they rejected the physical way of interpreting it.
Now a question may have occurred to you, and you say to yourself: "Suppose that Heisenberg was wrong. Suppose that the electron's trembling motion in Dirac's equation is a helical trajectory, as interpreted by Schrödinger. Well, in this case Quantum Mechanics is wrong, there is a serious inconsistency in the theory, and it is impossible to eliminate it via mathematics. Actually, there is a need to change the fundamental principle of duality as it was proposed by de Broglie, and to consider that duality is a manifestation of the helical trajectory. Therefore Quantum Field Theory must be developed in a way different from that taken by the quantum field theorists: instead of trying to eliminate that inconsistency via mathematics, it must be eliminated by changing a fundamental principle of Quantum Mechanics."
Well, if that question occurred to you, you are right. The question is analysed in my book THE MISSED U-TURN, the duel Heisenberg vs Schrödinger - from Newton to Rossi's eCat, where it is shown that the inconsistencies of Quantum Mechanics must be eliminated by changing some fundamental principles of the theory.
The book was written so as to be understood by the lay reader, and Cambridge International Science Publishing decided to publish it. On 16 September 2011 the publisher, Mr. Victor Riecansky, and I signed the Agreement for the publication of my book.
Unfortunately, some physicists discovered that my book would be published by that important London publishing house. And they do not want people to learn that Quantum Field Theory may be being developed in the wrong way. That is why they began to threaten Mr. Riecansky and the publishing house, telling him not to publish the book. And so the publisher decided to break the Agreement and not publish my book.
”It is hard for me to believe those dificulties raised in this manuscript will have escaped the scrutinity of all those proeminent particle theorists. For instance, the author proposes a new Planck constant for the uncertainty principle in the femtometer scale. Had this been true, the string theorists should have encountered the difficulty long time ago and even have proposed their own third different Planck constant”.
3- According to the principles of standard Nuclear Physics, light nuclei with Z=N=even must have a spherical shape. That is why, for 80 years, theorists never supposed that those nuclei could have a non-spherical shape.
The non-spherical shape of light nuclei with Z=N=even was predicted in my book Quantum Ring Theory, published in 2006. On page 131 of the book it is explained why they have a non-spherical shape in spite of having a null electric quadrupole moment (before 2012 the nuclear theorists supposed that a null electric quadrupole moment always requires a spherical shape). The authors of the paper published in the journal Nature used the same argument proposed in my book (so Nature published a plagiarism).
4- Light nuclei with Z=N=even (with Z = 2, 6, 8, 10, 12, 14, 16, 18, 20) are stable, but the beryllium isotope 4Be8, with Z=4, is not. For 80 years the nuclear theorists tried to explain this anomaly, each theorist proposing a different method.
Of course, if the fundamental principles of standard Nuclear Physics were correct, 80 years of attempts and several different methods would not have been necessary.
Besides, in 2009 a new experiment showed that in the isotope 4Be11 the halo neutron is 7 fm away from the rest of the nucleus. This new experimental finding shows that the nuclear theorists are on the wrong track.
5- According to the new nuclear model proposed in Quantum Ring Theory, the aggregation of nuclei is not promoted by the strong force. This hypothesis is corroborated by an experiment published in 2009, when for the first time scientists measured the size of a one-neutron halo with lasers and found that in the beryllium isotope 11Be the neutron is 7 fm away from the rest of the nucleus. As the strong force acts over a maximum distance of 2 fm, the experiment shows without doubt that the agglutination of nuclei is not promoted by the strong force.
So the theorists are trying to explain this strange anomaly via mathematics, instead of accepting the obvious conclusion: some fundamental principles of standard Nuclear Physics are wrong.
Recent experiments published between 2009 and 2012 suggest that it is not possible to eliminate the inconsistencies of Quantum Mechanics via mathematics while keeping the fundamental principles of the theory, as the quantum field theorists are trying to do. Therefore the way adopted by the theorists for the development of Quantum Field Theory is wrong.
A successful theory capable of eliminating the inconsistencies of Quantum Mechanics must be developed from new fundamental principles, different from those proposed in the theory. This is exactly the way adopted in Quantum Ring Theory.
We are pleased to announce the twenty-first release (code name "DeWitt-Morette") of the Einstein Toolkit, an open-source, community developed software infrastructure for relativistic astrophysics. The highlights of this release are:
This release includes NRPyPN, a Python code to compute initial data parameters for binary black hole simulations.
Lean_Public supports curvilinear coordinates provided by Llama.
The include style ("old") Tmunu interface using thorn ADMCoupling has been removed.
One new thorn has been added: NRPyPN.
In addition, bug fixes accumulated since the previous release in May 2020 have been included.
The Einstein Toolkit is a collection of software components and tools for simulating and analyzing general relativistic astrophysical systems that builds on numerous software efforts in the numerical relativity community including code to compute initial data parameters, the spacetime evolution codes Baikal, lean_public, and McLachlan, analysis codes to compute horizon characteristics and gravitational waves, the Carpet AMR infrastructure, and the relativistic magneto-hydrodynamics codes GRHydro and IllinoisGRMHD. The Einstein Toolkit also contains a 1D self-force code. For parts of the toolkit, the Cactus Framework is used as the underlying computational infrastructure providing large-scale parallelization, general computational components, and a model for collaborative, portable code development.
The Einstein Toolkit uses a distributed software model and its different modules are developed, distributed, and supported either by the core team of Einstein Toolkit Maintainers, or by individual groups. Where modules are provided by external groups, the Einstein Toolkit Maintainers provide quality control for modules for inclusion in the toolkit and help coordinate support. The Einstein Toolkit Maintainers currently involve staff and faculty from five different institutions, and host weekly meetings that are open for anyone to join.
Guiding principles for the design and implementation of the toolkit include: open, community-driven software development; well thought-out and stable interfaces; separation of physics software from computational science infrastructure; provision of complete working production code; training and education for a new generation of researchers.
For more information about using or contributing to the Einstein Toolkit, or to join the Einstein Toolkit Consortium, please visit our web pages at http://einsteintoolkit.org, or contact the users mailing list [email protected].
The Einstein Toolkit is primarily supported by NSF 2004157/2004044/2004311/2004879/2003893 (Enabling fundamental research in the era of multi-messenger astrophysics).
The Einstein Toolkit contains about 327 regression test cases. On a large portion of the tested machines, almost all of these tests pass, using both MPI and OpenMP parallelization.
The changes between this and the previous release include:
Fully support gcc / gfortran 10
ExternalLibraries support PAPI version 6
Testsuite harness supports running multiple tests in parallel
Fix accidentally removed caching in Piraha
Documentation uses mathjax when creating HTML docs
Cactus now checks that parameter types declared in USE and EXTEND statements match
Build system correctly propagates VERBOSE=no to sub-makes in make version 4.3 and newer
All tests that used to use ADMConstraints now use ML_ADMConstraints
Thorn Vectors supports POWER9 cpus used in Summit
Correct long standing read-after-free bug in Carpet's SplitAlongDir routine
Correct outputting non-gridfunction data in CarpetIOHDF5
Implement correct "midpoint" rule in thorn Multipole
Fix interaction between thorn NewRad and Cartoon2D
All example parameter files include thorn SystemTopology
Support Python3 in GW150914 example parameter file
Support wide outer boundaries in lean_public and Baikal
Work around slow compilation with new gcc in Baikal
The following features are being marked as deprecated in this release and will be removed in the next release:
The "old" (include file based) interface to Tmunu provided by ADMCoupling is no longer included.
The non-Piraha parser has been removed from the CST.
ADMCoupling and ADMMacros will be removed in the next release.
READS / WRITES statements that refer to non-existing variables cause compile-time errors and are no longer ignored at runtime, even if presync_mode = off.
This release includes contributions by Steven R. Brandt, Federico Cipolletta, Matthew Elley, Zachariah Etienne, Roland Haas, Ian Hinder, Jonah Miller, Erik Schnetter, Barry Wardell, Helvi Witek, and Miguel Zilhao.
To upgrade from the previous release, use GetComponents with the new thornlist to check out the new version.
See the Download page (http://einsteintoolkit.org/download.html) on the Einstein Toolkit website for download instructions.
The SelfForce-1D code uses a single git repository, thus running "git pull ; git checkout ET_2020_11" will update the code.
Supported (tested) machines include:
Default Debian, Ubuntu, Fedora, CentOS 7, Mint, OpenSUSE and MacOS Catalina (MacPorts) installations
Bluewaters
Comet
Cori
Queen Bee 2
Stampede 2
Mike / Shelob
SuperMIC
SuperMUC-NG
Summit
Wheeler
Note for individual machines:
TACC machines: defs.local.ini needs to have sourcebasedir = $WORK and basedir = $SCRATCH/simulations configured for these machines. You need to determine $WORK and $SCRATCH by logging in to the machine.
SuperMUC-NG: defs.local.ini needs to have sourcebasedir = $HOME and basedir = $SCRATCH/simulations configured for this machine. You need to determine $HOME and $SCRATCH by logging in to the machine.
All repositories participating in this release carry a branch ET_2020_11 marking this release. These release branches will be updated if severe errors are found. | http://einsteintoolkit.org/about/releases/ET_2020_11_announcement.html |
On the same rail there are two trains approaching each other. The train at the left goes to the right at constant speed t1. The train at the right goes to the left at constant speed t2. Initially, the noses of the trains are d distance units apart. There is a fly at the nose of the left train that starts flying to the right at constant speed f1, where f1 > t1. Similarly, there is a fly at the nose of the right train that starts flying to the left at constant speed f2, where f2 > t2. The flies are so small that we can consider them as points. Any time that a fly reaches another fly, or reaches a train, that fly turns around immediately, never changing the absolute value of its speed. Thus, the movement of each fly is like a zig-zag with an infinite number of rebounds.
Given all the information, can you compute the total distance travelled by each fly until the trains collide? If so, you would prove yourself even better than von Neumann!
Input consists of several cases, each one with d, t1, t2, f1 and f2. All given numbers are strictly positive integers, and no larger than 10^6. Assume f1 > t1 and f2 > t2.
For every case, print with four digits after the decimal point the total distance travelled by the first fly and by the second fly. The given cases have no precision issues. | https://jutge.org/problems/P64984_en |
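The classic von Neumann shortcut makes the computation easy: each fly moves at a constant speed (only its direction flips when it rebounds), so its total distance is simply its speed multiplied by the time T = d / (t1 + t2) until the trains collide. Below is a minimal Python sketch along these lines; it assumes whitespace-separated input and a single space between the two printed distances, which the statement does not fully specify.
import sys

def main():
    data = sys.stdin.read().split()
    # Each case consists of five integers: d, t1, t2, f1, f2.
    for i in range(0, len(data), 5):
        d, t1, t2, f1, f2 = map(int, data[i:i + 5])
        T = d / (t1 + t2)                  # time until the trains collide
        # A fly's speed never changes in absolute value, so distance = speed * time.
        print(f"{f1 * T:.4f} {f2 * T:.4f}")

main()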