How to Make an International Call to Thailand from India
Calling Thailand from India requires you to dial India's exit code, followed by the Thailand country code (ISD code) and the recipient's 10-digit telephone number.
Below is the dialing method for calling Thailand from India. These calls are charged at international calling rates. The format is shown below:
| Calling from | Calling to | Country ISD Code | Continent | Capital City |
|---|---|---|---|---|
| India | Thailand | +66 | Asia | Bangkok |
When calling Thailand from India, use the international dialing code prefix +66, or 0066.
The same code can be used to call Thailand from any part of the world.
Follow this dialing pattern to make an international call to Thailand from India:
- For a landline, dial: +66 + area code + landline number
- For a mobile phone, dial: +66 + 10-digit mobile number
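The two patterns above can be expressed as a small helper that builds the dial string. This is a sketch only: the function name and parameters are illustrative assumptions, and real numbers should still be validated against the destination's numbering plan.

```python
def thailand_dial_string(number, area_code=None, use_exit_code=False):
    """Build a dial string for calling Thailand (+66) from India.

    number:        the local landline or mobile number, as a string of digits
    area_code:     required for landlines, omitted for mobiles
    use_exit_code: use the 00-exit-code form (0066...) instead of +66
    """
    prefix = "0066" if use_exit_code else "+66"
    if area_code:                       # landline: +66 + area code + number
        return f"{prefix}{area_code}{number}"
    return f"{prefix}{number}"          # mobile: +66 + mobile number

# Hypothetical example numbers:
print(thailand_dial_string("2123456", area_code="2"))        # landline form
print(thailand_dial_string("812345678", use_exit_code=True)) # 0066 form
```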
Instructions before dialing: | http://planetroam.in/how-to-call-dial-other-countries-from-india-other-phone-numbers/how-to-call-thailand-from-india-thailand-isd-code-dialing-pattern-instruction/ |
Building garage shelves – Storage shelves are the heavyweights among household racks, typically required to hold many oversized items, including machinery and sports equipment. Making your own garage shelves is typically much cheaper than buying finished shelving. Do-it-yourself shelves can also be adapted so that a large freezer, bulky sewing machine or other irregularly shaped object can be covered and stored safely.
Find the wall studs with a stud finder and mark their positions at the approximate height of the garage shelves. Most studs are about 16 inches apart. Measure the exact height you want your shelf to be. Draw a line between the two stud marks at this height, and check it with a level; adjust if necessary so it is exactly level. Position a 10-inch plank vertically along the stud, with its top edge flush with the horizontal shelf line. Pre-drill a hole through each corner of the wood into the stud behind the wall, then screw down the plank, which will anchor the shelf; use 2.5-inch screws. Repeat on the other stud with the remaining plank.
12 Photos Gallery of: DIY Building Garage Shelves
Position a shelf bracket on top of the plank anchor. The angle of the bracket should be flush with the top edge of the anchor, with the 10-inch end pointing down. Pre-drill the screw holes into the anchor and the stud behind the wall, then screw down. Center the shelf plank over the brackets. The rear edge should sit on top of the anchor, with about a one-inch lip overhanging the front. There will be about four inches of overhang on each side. | https://anandasoulcreations.com/diy-building-garage-shelves/ |
WASHINGTON - Housing and Urban Development Secretary Alphonso Jackson today unveiled HUD's simplified "SuperNOFA," a notice that makes available $2.3 billion in funding opportunities to help produce more affordable housing, assist homeless individuals and families, and promote community development. The Fiscal Year 2004 Notice of Funding Availability includes 49 separate funding opportunities that will help States, local governments and nonprofit grassroots organizations to house and serve lower income families living in their communities (see attached chart).
This notice continues HUD's efforts to further improve the grant application process, promoting greater access for faith-based and other community organizations, and reducing excessive regulations that inhibit the creation of affordable housing. It is HUD's intent to have all of its applications in fiscal year 2005 available on www.Grants.gov for electronic submission.
"I'm proud to announce we've streamlined HUD's grant application process," said Jackson. "This new simplified process is creating a more level playing field for organizations of all sizes to apply for funds and demonstrate their capacity to build neighborhoods, strengthen communities and help our most vulnerable neighbors."
This year, HUD is placing the highest priority on funding local communities and organizations that are working toward removing excessive and burdensome regulations that restrict the development of affordable housing at the local level. HUD will begin awarding priority points to certain applicants in communities that have successfully demonstrated efforts to reduce regulatory barriers that prevent many families from living in the communities where they work. HUD's policy on removal of Regulatory Barriers to Affordable housing can be found at /initiatives/affordablecommunities/index.cfm.
In addition, the Department is continuing to level the playing field for faith-based and other grassroots community organizations applying for federal funding. Applicants will be asked to fill out a questionnaire that will help HUD determine if it is meeting the goal of increasing the participation of these organizations in the Department's programs. Meanwhile, HUD is seeking to remove unnecessary federal regulations that prevent faith-based and community groups from competing on an equal footing with other applicants seeking grants.
To ensure that HUD programs are accessible to small, disadvantaged or women-owned businesses, the Department continues to require grantees make every effort to contract with these business partners in their HUD-funded programs. Too often, these businesses still experience difficulty accessing information and successfully bidding on federal contracts. Currently, HUD leads all other federal agencies in contracting with these businesses.
The grant opportunities announced today are in addition to the $31.5 billion HUD allocates to State and local communities, Public Housing Agencies, and Native-American Tribes in the form of block grants, housing choice vouchers and other formula-based funding.
HUD is the nation's housing agency committed to increasing homeownership, particularly among minorities; creating affordable housing opportunities for low-income Americans; and providing supportive housing for the homeless, elderly, people with disabilities and people living with AIDS. The Department also promotes economic and community development and enforces the nation's fair housing laws. More information about HUD and its programs is available on the Internet and at espanol.hud.gov.
###
IMPORTANT NOTE FOR APPLICANTS: HUD will be providing training for potential applicants via satellite and webcast. Contact the HUD field office in your area for details on how to view these sessions via satellite. | https://archives.hud.gov/news/2004/pr04-043.cfm |
Historically low spending and interventions on health from the Centre and the States:
- India has not invested sufficiently in health, though its fiscal capacity to raise general revenues increased substantially from 5% of GDP in 1950-51 to 17% in 2016-17.
- India's public spending on health has hovered around 1% of GDP for decades, accounting for less than 30% of total health expenditure.
- Besides low public spending, neither the Central nor the State governments have undertaken any significant policy intervention, except the National Health Mission, to redress the issue of widening socioeconomic inequalities in health.
- But the National Health Mission, with a budget of less than 0.2% of GDP, is far too small to make a major impact. And worryingly, the budgetary provision for the NHM decreased by 2% in 2018-19 from the previous year.
National Health Policy 2017 envisaged raising public spending on health to 2.5% of GDP by 2025:
- The Policy seeks to reach everyone in a comprehensive, integrated way to move towards wellness. It aims at achieving universal health coverage and delivering quality health care services to all at affordable cost.
- It seeks to promote quality of care, with a focus on emerging diseases and investment in promotive and preventive healthcare. The policy is patient-centric and quality-driven. It addresses health security and "Make in India" for drugs and devices.
- In order to provide access and financial protection at the secondary and tertiary care levels, the policy proposes free drugs, free diagnostics and free emergency care services in all public hospitals.
Clear trends that India needs to adopt — two important trends can be discerned:
- As countries develop, governments tend to invest more on health, and the share paid out of pocket declines. Economists have sought to explain this phenomenon as a "health financing transition", akin to the demographic and epidemiologic transitions. Economic, political and technological factors move countries through this transition. Of these, social solidarity for redistribution of resources to the less advantaged is the key element in pushing for public policies that expand pooled funding to provide health care. Out-of-pocket payments push millions of people into poverty and deter the poor from using health services. Hence, most countries, including developing ones, have adopted either of the above two financing arrangements or a hybrid model to achieve Universal Health Coverage (UHC) for their respective populations.
- For example, according to the World Health Organisation's recent estimates, out-of-pocket expenditure contributed only 20% of total health expenditure in Bhutan in 2015, whereas general government expenditure on health accounted for 72%, about 6% of its GDP.
- Similarly, public expenditure represents 2%-4% of GDP among developing countries with significant UHC coverage, examples being Ghana, Thailand, Sri Lanka, China and South Africa.
Measures that need immediate implementation:
- District hospitals are to be strengthened to provide several elements of tertiary care alongside secondary care. Sub-district hospitals too would be upgraded.
- A National Healthcare Standards Organisation is proposed to be established to develop evidence-based standard management guidelines.
- A National Health Information Network would also be established by 2025.
- A National Digital Health Authority would be set up to develop, deploy and regulate digital health across the continuum of care. | https://unacademy.com/lesson/25th-january-daily-important-editorial-discussion/HL37X3I8 |
Fashion for Good recognises the Cradle to Cradle (C2C) Certified™ Product Standard as one way to measure good fashion, and hence embarked on the successful development of the world’s first two C2C Certified GOLD T-shirts. Based on this experience, Fashion for Good has created this practical and in-depth How-To Guide to help apparel manufacturers and brands begin their journey toward only good fashion.
The How-To Guide was developed in close collaboration with McDonough Innovation (MI), MBDC and two Indian apparel manufacturers, Pratibha Syntex and Cotton Blossom.
The Cradle to Cradle (C2C) Certified™ How-To Guide
The How-To Guide outlines the principles and criteria of the C2C Certified Products Program in order to inspire apparel manufacturers, brands and retailers to start their own C2C Certified journey by: | https://fashionforgood.com/news/resource-library/good-fashion-guide/ |
The Washington Voting Rights Act has passed both chambers of the Legislature, sending the bill to Gov. Jay Inslee.
The bill will establish an easier process for cities, counties and school districts to move from city-wide elections to district elections — before resorting to a lawsuit. Supporters say this will enable candidates to better reflect the demographic, ethnic and economic make-up of their neighborhoods.
“It sets up a collaborative process for communities to work it out before you have to go to court,” said Rep. Zack Hudgins, D-Tukwila.
The Washington cities of Yakima and Pasco were sued by residents who said that their city-wide city council elections were racially polarized, keeping Hispanic candidates from winning council seats. After switching to elections based on geographic districts, both cities elected their first Hispanic council members.
The House had passed versions of the Voting Rights Act bill in past legislative sessions, but those bills stalled in the Senate. This time, the Senate voted 29-20 on Monday for final passage.
This year, in both chambers several Republicans crossed the aisle to support the measure — but several Republicans still spoke against the bill during the floor debates over the past week.
“This bill is an insult to people of color and to minorities,” said Rep. Liz Pike, R-Camas, during the House floor debate. “It says: ‘We don’t believe you’re smart enough or attractive enough to be elected.’”
Rep. Monica Stonier, D-Vancouver, replied: “As a woman of color, I’m not offended by this policy.” She said the bill enables local elected officials to better reflect the people of their neighborhoods. Her ancestry includes some Mexican and Japanese roots.
The bill sets up procedures for cities, towns and school districts to decide whether to switch from at-large elected officials to district elected representatives. Those procedures map out how petitions can be submitted to governing bodies to set up referendums on such revisions. The procedures apply when a protected class, such as a significant racial or ethnic minority, is noticeably underrepresented on a school board or city council.
At-large elections tend to skew toward those candidates who have greatest name recognition or the biggest campaign budget. That’s a disadvantage for less established candidates or those with less money — who often are people of color or women. District elections aim to break that cycle, so voters can put representatives from their own neighborhoods on boards and councils.
One of the Republicans supporting the bill last week was Rep. Larry Haler of Richland, a city that is 87 percent white. Haler said that the changes in the Voting Rights Act could benefit all underrepresented groups, including lower-income candidates. Weeks ago, in a public hearing, he said he supported the bill because five of Richland's seven city council members came from the same upper-class neighborhood, with the city's poorer neighborhoods not represented on the council.
History of change
More than two years ago Yakima was forced to adopt district-wide elections after a federal judge determined that the city’s at-large election system was racially polarized, stifling the voice of the Hispanic population. According to the U.S. Census, Yakima’s population is 45 percent Hispanic.
After the change to district elections, all the seats were up for election. Yakima voters elected three Latina candidates to the seven-member council. It was the first time in Yakima history that the voters had elected Hispanic council members.
Since then, the council has hired a bilingual city manager, has bilingual interpreters at city events, has more public forums, meets with the school board to discuss youth issues and has improved access to a 24/7 shelter for the homeless.
Pasco switched to district elections last year. That same year, Pasco, which is 55 percent Hispanic, elected its first three Latinos to the city council. | https://www.seattleglobalist.com/2018/03/05/washington-voting-rights-act-passes-legislature/72542 |
Change is inevitable both personally and in the workplace and can be particularly stressful when it occurs so quickly. Remote workers must have the clarity, support, and resources needed to remain focused and connected working from home, while managers and leaders must also be mindful of the resistance to change and apprehension likely being experienced by many during these times.
Consider This: National Institute of Mental Health
Keep in touch with people who can provide emotional support and practical help. To reduce stress, ask for help from friends, family, and community or religious organizations.
Total: 19 Videos
Creating Your Circle of Trust
Rebounding When Resilience Wears Down
Three Mindsets to Embrace Change
Beware of Adaptability Demons
Managing Uncertainty
Reach Outside Your Comfort Zone
Avoiding Situations
Why Does Trying New Things Feel Uncomfortable?
Change: Deal With It
Making the Complex Simple
Invest in the Process, Not the Outcome
Embrace Whatever Comes Along
How to Be As Resilient As a Daruma Doll
Turn Stress Into Positive Pressure
To Adapt, Change Your Vantage Point
The Importance of Change
A Challenge Is an Opportunity
The Role of the Amygdala: The Almond Effect
How to Increase Resilience
Total: 7 Courses
Metagility: Managing Agile Development for Competitive Advantage
Bouncing Back: Rewiring Your Brain for Maximum Resilience and Well-Being
Resilience: Powerful Practices for Bouncing Back from Disappointment, Difficulty, and Even Disaster
Building Resilience for Success: A Resource for Managers and Organizations
Forging Ahead with Perseverance and Resilience
Organizations Change So Get Ready
Navigating Challenging Situations with Diplomacy and Tact
This book provides a comprehensive approach for managing a new and highly effective breed of agility from the executive level on down.
| https://www.trainup.com/trainingflo/free-learning-center/change-stress-management |
The present invention is directed, in general, to microprocessors and, more particularly, to a processor architecture employing an improved floating point unit (FPU).
The ever-growing requirement for high performance computers demands that computer hardware architectures maximize software performance. Conventional computer architectures are made up of three primary components: (1) a processor, (2) a system memory and (3) one or more input/output devices. The processor controls the system memory and the input/output ("I/O") devices. The system memory stores not only data, but also instructions that the processor is capable of retrieving and executing to cause the computer to perform one or more desired processes or functions.
The I/O devices are operative to interact with a user through a graphical user interface ("GUI") (such as provided by Microsoft Windows™ or IBM OS/2™), a network portal device, a printer, a mouse or other conventional device for facilitating interaction between the user and the computer.
Over the years, the quest for ever-increasing processing speeds has followed different directions. One approach to improve computer performance is to increase the rate of the clock that drives the processor. As the clock rate increases, however, the processor's power consumption and temperature also increase. Increased power consumption is expensive and high circuit temperatures may damage the processor. Further, the processor clock rate may not increase beyond a threshold physical speed at which signals may traverse the processor. Simply stated, there is a practical maximum to the clock rate that is acceptable to conventional processors.
An alternate approach to improve computer performance is to increase the number of instructions executed per clock cycle by the processor ("processor throughput"). One technique for increasing processor throughput is pipelining, which calls for the processor to be divided into separate processing stages (collectively termed a "pipeline"). Instructions are processed in an "assembly line" fashion in the processing stages. Each processing stage is optimized to perform a particular processing function, thereby causing the processor as a whole to become faster.
"Superpipelining" extends the pipelining concept further by allowing the simultaneous processing of multiple instructions in the pipeline. Consider, as an example, a processor in which each instruction executes in six stages, each stage requiring a single clock cycle to perform its function. Six separate instructions can therefore be processed concurrently in the pipeline; i.e., the processing of one instruction is completed during each clock cycle. The instruction throughput of an n-stage pipelined architecture is therefore, in theory, n times greater than the throughput of a non-pipelined architecture capable of completing only one instruction every n clock cycles.
Another technique for increasing overall processor speed is "superscalar" processing. Superscalar processing calls for multiple instructions to be processed per clock cycle. Assuming that instructions are independent of one another (the execution of each instruction does not depend upon the execution of any other instruction), processor throughput is increased in proportion to the number of instructions processed per clock cycle ("degree of scalability"). If, for example, a particular processor architecture is superscalar to degree three (i.e., three instructions are processed during each clock cycle), the instruction throughput of the processor is theoretically tripled.
These techniques are not mutually exclusive; processors may be both superpipelined and superscalar. However, operation of such processors in practice is often far from ideal, as instructions tend to depend upon one another and are also often not executed efficiently within the pipeline stages. In actual operation, instructions often require varying amounts of processor resources, creating interruptions ("bubbles" or "stalls") in the flow of instructions through the pipeline. Consequently, while superpipelining and superscalar techniques do increase throughput, the actual throughput of the processor ultimately depends upon the particular instructions processed during a given period of time and the particular implementation of the processor's architecture.
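The ideal-case arithmetic behind pipelining and superscalar issue can be sketched numerically. The helper below is an illustration only; modeling all hazards as a single lump-sum stall count is an assumption for clarity, not anything taken from the patent.

```python
def ideal_ipc(n_stages, degree, n_instructions, stall_cycles=0):
    """Instructions per cycle for an idealized superpipelined,
    superscalar processor.

    n_stages:     pipeline depth (fill latency is n_stages - 1 cycles)
    degree:       degree of scalability (instructions issued per cycle)
    stall_cycles: lump-sum penalty for bubbles/stalls in the pipeline
    """
    cycles = (n_stages - 1) + n_instructions / degree + stall_cycles
    return n_instructions / cycles

# On long instruction streams, a 6-stage scalar pipeline approaches
# 1 instruction per cycle, and a degree-3 superscalar version approaches 3;
# stalls pull the achieved rate below the ideal.
print(ideal_ipc(6, 1, 1_000_000))
print(ideal_ipc(6, 3, 3_000_000))
print(ideal_ipc(6, 1, 1_000, stall_cycles=500))
```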
The speed at which a processor can perform a desired task is also a function of the number of instructions required to code the task. A processor may require one or many clock cycles to execute a particular instruction. Thus, in order to enhance the speed at which a processor can perform a desired task, both the number of instructions used to code the task as well as the number of clock cycles required to execute each instruction should be minimized.
Statistically, certain instructions are executed more frequently than others. If the design of a processor is optimized to rapidly process the instructions that occur most frequently, then the overall throughput of the processor can be increased. Unfortunately, the optimization of a processor for certain frequent instructions is usually obtained only at the expense of other less frequent instructions, or requires additional circuitry, which increases the size of the processor.
As computer programs have become increasingly more graphic-oriented, processors have had to deal more and more with the conversion between integer and floating point representations of numbers. Thus, to enhance the throughput of a processor that must generate data necessary to represent graphical images, it is desirable to optimize the processor to efficiently convert between integer and floating point representations of data.
U.S. Pat. No. 5,257,215 to Poon, issued Oct. 26, 1993, describes a circuit and method for performing integer to floating point conversions in a floating point unit. The method disclosed, however, requires a two's complement operation for the conversion of negative numbers; a two's complement operation requires additional clock cycles and is thus undesirable if the throughput of the floating point unit is to be optimized.
To address the above-discussed deficiencies of the prior art, it is a primary object of the present invention to provide an efficient system and method for converting numbers from integer notation to floating point notation and a computer system employing the same. Preferably, the optimization of the processor should not require any additional hardware or degrade the performance of the processor in performing tasks other than integer to floating point conversions; in particular, the conversion of negative numbers should not require the performance of a two's complement operation.
In the attainment of the above primary object, the present invention provides, for use in a processor having a floating point execution core, logic circuitry for, and a method of, converting negative numbers from integer notation to floating point notation. In one embodiment, the logic circuitry includes: (1) a one's complementer that receives a number in integer notation and inverts the received number to yield an inverted number, (2) a leading bit counter, coupled to the one's complementer, that counts leading bits in the inverted number to yield leading bit data, (3) a shifter, coupled to the one's complementer and the leading bit counter, that normalizes the inverted number based on the leading bit data to yield a shifted inverted number, (4) an adder, coupled to the shifter, that increments the shifted inverted number to yield a fractional portion of the received number in floating point notation and overflow data, the adder renormalizing the fractional portion based on the overflow data and (5) exponent generating circuitry, coupled to the leading bit counter and the adder, that generates an exponent portion of the received number in floating point notation as a function of the leading bit data and the overflow data.
The present invention therefore fundamentally reorders the process by which numbers are converted from integer to floating point notation to allow such numbers to be converted in a pipelined process. The present invention is founded on the novel realization that one's complementing (a part of the two's complementing process required in converting negative numbers) can be allowed to occur before normalization (shifting). The present invention is therefore particularly suited to floating point units ("FPUs") having a pipelined load converter and adder, as the hardware already present in the converter and adder can be employed to perform integer to floating point conversion.
In one embodiment of the present invention, the logic circuitry further includes a multiplexer, interposed between the one's complementer and the shifter, that selects one of the received number and the inverted number based on a sign of the received number. Thus, the present invention can be adapted for use in additionally converting positive numbers. Positive numbers have no need to be two's complemented during conversion. Therefore, in this embodiment, steps are taken to bypass the one's complementing to which negative numbers are subjected.
In one embodiment of the present invention, the exponent generating circuitry comprises a bias converter that generates an uncompensated biased exponent, the exponent generating circuitry adjusting the uncompensated biased exponent as a function of the leading bit data and the overflow data to yield the exponent portion. Those skilled in the art are familiar with the manner in which exponents are biased or unbiased during notation conversion. In this embodiment, the present invention enhances the bias process by further adjusting for any "overguessing" that may occur in the adder.
In one embodiment of the present invention, the leading bit counter counts the number of leading zeroes in the inverted number. Alternatively, leading ones in the received (uninverted) number may be counted. Those skilled in the art are familiar with conventional normalization processes in which integers are shifted and thereby normalized.
The foregoing has outlined rather broadly the features and technical advantages of the present invention so that those skilled in the art may better understand the detailed description of the invention that follows. Additional features and advantages of the invention will be described hereinafter that form the subject of the claims of the invention. Those skilled in the art should appreciate that they may readily use the conception and the specific embodiment disclosed as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. Those skilled in the art should also realize that such equivalent constructions do not depart from the spirit and scope of the invention in its broadest form.
Recently, I read that the retailer Sunflower Farmers Markets is now growing produce for its stores on a 40-acre farm in the Longmont, Colo. area. According to The Denver Post story, the retailer-operated farm will supply about 5 percent of the produce needed to fill Sunflower's 27 stores. Not a lot, but Sunflower also plans to hold tours and workshops at the farm, using the retailer-farmer role to educate consumers on organic production and to bring them closer to their food.
What do you think of this concept? On one hand, it could build trust between consumers and retailers because retailers can share their intimate knowledge of where and how the fruits and vegetables were grown. On the other hand, grocers who operate farms could be perceived as pushing out small growers. And is the "retailer-farmer" role a realistic undertaking for any grocer other than a larger chain?
Let us know your thoughts. | https://www.newhope.com/blog/what-do-you-think-retailer-farmer-concept |
George Walford: Political Gravitation
The communist parties work in the belief that if they could only gain power they could establish a society which, if not fully communist, would be nearer to that condition than is existing society. Given power they would, they believe, be able to lead, drive, educate or manipulate the general body of the people into moving from their present way of life toward communism. Experience does not support this belief. In Russia and China communist movements have gained power and engaged in a struggle to impose their wishes which has continued over decades and cost millions of lives, but the outcome has not been that the people have moved toward the party. The party has moved toward the people; in both these countries the communist rulers are coming to accept the respect for tradition, the competitiveness, the pursuit of private interests and the patriotism which the general body of the people tend to accept but which communist theory repudiates.
It reminds one of Sir Isaac Newton. His law of gravitation states that every physical body attracts every other physical body, but the earth did not, to any perceptible extent, move toward the apple; it was the apple that fell to earth.
from Ideological Commentary 16, January 1985. | https://www.gwiep.net/george-walford-political-gravitation/ |
Performance evaluation of Pichia kluyveri, Kluyveromyces marxianus and Saccharomyces cerevisiae in industrial tequila fermentation.
Traditionally, industrial tequila production has used spontaneous fermentation or Saccharomyces cerevisiae yeast strains. Despite the potential of non-Saccharomyces strains for alcoholic fermentation, few studies have been performed at industrial level with these yeasts. Therefore, in this work, Agave tequilana juice was fermented at an industrial level using two non-Saccharomyces yeasts (Pichia kluyveri and Kluyveromyces marxianus) with fermentation efficiency higher than 85%. Pichia kluyveri (GRO3) was more efficient for alcohol and ethyl lactate production than S. cerevisiae (AR5), while Kluyveromyces marxianus (GRO6) produced more isobutanol and ethyl acetate than S. cerevisiae (AR5). The level of volatile compounds at the end of fermentation was compared with the tequila standard regulation. All volatile compounds were within the allowed range except for methanol, which was higher for S. cerevisiae (AR5) and K. marxianus (GRO6). The variations in methanol may have been caused by the Agave tequilana used for the tests, since this compound is not synthesized by these yeasts.
(Also Kopernik) Polish astronomer and mathematician.
Copernicus is one of the extraordinary thinkers credited with inaugurating the Scientific Revolution in the sixteenth century with the publication of his De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Bodies, 1543). The revolution in science represents one of the greatest developments in the Western intellectual tradition. Thinkers such as Copernicus, the French philosopher Rene Descartes (1596-1650) and the British mathematician Sir Isaac Newton (1642-1727) departed radically from classical thought and from the ecclesiastical institutions of the Middle Ages. These thinkers brought about a change in the way people think and perceive both themselves and their place in the universe.
Biographical Information
Copernicus was born into a well-to-do family in 1473. Copernicus's father, a copper merchant, died when Copernicus was ten, and Copernicus was taken in by an uncle. In 1491, Copernicus entered the University of Krakow where he studied mathematics and painting. In 1496, he went to Italy for ten years where he studied medicine at Padua and obtained a doctor's degree in canon law at Ferrara. In 1500, in the midst of his studies, Copernicus experienced two events that helped to shape the rest of his life: he attended a conference in Rome dealing with calendar reform and in November of that year witnessed a lunar eclipse. Copernicus continued his medical and legal studies, but also pursued his interest in astronomy, being exposed to the Pythagorean doctrines of cosmology taught in Italy. He developed a dissatisfaction with the Ptolemaic system and conceived the idea of a solar system with the sun at the center. In 1505, Copernicus returned to his native Poland, where he worked as physician to his uncle in his uncle's palace in Heilsberg. In 1512, when Copernicus's uncle died, Copernicus moved to Frauenberg where he belonged to the chapter or regular staff of the cathedral of Frauenberg. While serving in this capacity, Copernicus also developed a system of reform for the currency of the Prussian provinces of Poland (presented as De monetae cudendae ratione, 1526, and published in 1816) and began to make astronomical observations to test his belief in a heliocentric world system.
Copernicus was reluctant to make his ideas public because of their controversial nature. He did allow a summary of the Commentariolus (1530) to circulate among scholars. Johann Albrecht Widmanstadt presented Copernicus's views in lectures at Rome, with the pope at the time, Clement VII, expressing no disapproval. Cardinal Schönberg made a formal request for publication of Copernicus's views, and Copernicus at last agreed to publish On the Revolutions of the Heavenly Bodies. In 1540, George Joachim Rheticus, a follower of Copernicus, published another brief account of Copernicus's views in his Narratio prima. The task of overseeing the publication of Copernicus's book was undertaken by a Lutheran minister named Andreas Osiander. Osiander seems to have felt obliged to present Copernicus's material in a way that would not offend Church officials (Martin Luther, the founder of Lutheranism, firmly opposed Copernicus's new theory). Osiander wrote and appended a preface to On the Revolutions of the Heavenly Bodies stating that the heliocentric theory was being presented as a concept to allow for better calculations of planetary positions. The unsigned preface gave the impression that Copernicus himself was undercutting his own theory. In 1542, Copernicus suffered a stroke and paralysis, and continued to decline until his death on May 24, 1543. Tradition relates that the first copy of On the Revolutions of the Heavenly Bodies reached him on his death-bed, but in fact he may never have seen his most important work published. In 1609 the German astronomer Johannes Kepler (1571-1630) discovered that Osiander was the author of the preface to the first edition of Copernicus's On the Revolutions of the Heavenly Bodies.
Major Works
On the Revolutions of the Heavenly Bodies sets forth Copernicus's heliocentric theory of the solar system, with the sun at the center of a number of planetary orbits, including that of the Earth. Long before Copernicus, Aristarchus of Samos, a Greek astronomer living around 270 BC, had proposed that the sun was the center of things, but his theory was displaced by the teachings of Claudius Ptolemy (c. 90-168 AD). Ptolemy proposed that the Earth was the center of the universe. In this system, all the planets, including the Sun and Moon (which were classified as planets), were attached to concentric spheres surrounding and rotating around the Earth. Their motion was governed by the Prime Mover or First Cause, God. Motions of the planets that presented problems for this geocentric and spherical model were accounted for by means of epicycles (or cycles within cycles). Ptolemy's model of the universe remained dominant for over a thousand years. By Copernicus's time, the tables of planetary positions had become very complex but still did not offer accurate predictions of the positions of the planets over long periods of time. Copernicus realized that tables of planetary positions could be calculated more accurately by working from the assumption that the Sun, not the Earth, was the center of the world system and that the planets, including the Earth, moved around the sun. Copernicus was not an especially good astronomical observer. It is said that he never saw the planet Mercury, and he made an incorrect assumption about planetary orbits, believing that they were perfectly circular. Because of this, he found it necessary to use Ptolemy's cumbersome concept of epicycles (smaller orbits centered on the larger ones) to reduce the discrepancy between his predicted orbits and those he observed. It wasn't until Johannes Kepler that the elliptical nature of planetary orbits was understood. According to critic Harold P. Nebelsick, Copernicus's system was able to describe the "main movements of the planets with greater simplicity and harmony" than the Ptolemaic system could, and it was able to provide "a more accurate measurement of the distance of planetary orbits" from one orbit to another. The heliocentric model developed by Copernicus could explain the astronomical phenomenon known as retrograde ("backwards") motion better than Ptolemy's geocentric model. The fact that most of the planets appear to change direction periodically is more readily explained by the fact that their orbits are outside that of the Earth. The heliocentric model also explained the absence of such "backward" motion in the planet Venus, whose orbit is inside that of the Earth and therefore smaller.
Critical Reception
The earliest reaction to On the Revolutions of the Heavenly Bodies was subdued. Only a limited number of books were printed. Books—and in particular scientific texts with numerous illustrations—were expensive and consequently had limited circulation. The book did achieve a number of converts, but only a few highly advanced mathematicians and astronomers could fully understand it. Copernicus himself dedicated the book to mathematicians and did not seem to think that his findings would appeal to a general readership. A later generation of astronomers building on Copernican theories, including Tycho Brahe (1546-1601) and Johannes Kepler, continued to demonstrate that humankind was still learning about what had previously been thought to be a "fixed firmament" of stars and planets, and Copernicus has grown in regard as a significant and revolutionary thinker for his times.
Principal Works
Monetae cudendae ratio [On Minting Money] (essay) 1528
De revolutionibus orbium coelestium [On the Revolutions of the Heavenly Bodies] (essay) 1543
Criticism
Robert Small (essay date 1804)
SOURCE: "Of the Copernican System" in An Account of the Astronomical Discoveries of Kepler, The University of Wisconsin Press, 1963, pp. 81-92.
[In the following excerpt from an essay originally written in 1804, Small discusses how Copernicus came to his conclusions regarding heliocentrism and the diurnal rotation of the earth.]
Though the imperfections of the Ptolemaic system were not immediately perceived, especially during the confusion which attended the decline and destruction of the Roman empire, their effects did not fail, in process of time, to become fully evident. In the ninth century, on the revival of science in the east, under the encouragement of the...
Marian Biskup and Jerzy Dobrzycki (essay date 1972)
SOURCE: "Copernicus the Economist" and "De Revolutionibus" in Copernicus: Scholar and Citizen, Interpress Publishers, 1972, pp. 83-115.
[In the essays below, Biskup and Dobrzycki discuss first Copernicus's work as an economic advisor to the Prussian Estates and then the development of the ideas and text of his De Revolutionibus.]
Copernicus the Economist
Copernicus was for many years in Warmia engrossed in economic matters and monetary questions. He introduced many new and stimulating ideas into economics, some of them much ahead of his time, and hence did not always meet with understanding. But it is worth looking closer at his...
Owen Gingerich (essay date 1973)
SOURCE: "From Copernicus to Kepler: Heliocentrism as Model and as Reality" in Proceedings of the American Philosophical Society, Vol. 117, No. 6, December, 1973, pp. 513–22.
[In the following essay, Gingerich discusses controversies in the early publishing history of De revolutionibus.]
Near the close of Book One of the autograph manuscript of his great work, Copernicus writes:
And if we should admit that the course of the sun and moon could be demonstrated even if the earth is fixed, then with respect to the other wandering bodies there is less agreement. It is credible that, for these and similar causes (and not because of the...
Owen Gingerich (essay date 1975)
SOURCE: "'Crisis' versus Aesthetic in the Copernican Revolution" in Yesterday and Today: Proceedings of the Commemorative Conference Held in Washington in Honour of Nicolaus Copernicus, Vistas in Astronomy, Vol. 17, 1975, pp. 85-93.
[In the following essay, Gingerich argues against the notion that there was an astronomical crisis in astronomy before Copernicus published his theories.]
In a chapter in The Structure of Scientific Revolutions entitled "Crisis and the Emergence of Scientific Theories", Thomas Kuhn states: "If awareness of anomaly plays a role in the emergence of phenomena, it should surprise no one that a similar but more profound awareness is prerequisite...
John Norris (essay date 1981)
SOURCE: "Copernicus: Science versus Theology" in The Tradition of Polish Ideals: Essays in History and Literature, Orbis Books (London) Ltd., 1981, pp. 132-49.
[In the following essay, Norris discusses the reception of Copernicus's astronomical findings by the Catholic and Protestant churches during the sixteenth century.]
This paper is about a Polish citizen and about a revolution which he made. Unlike most Poles, he didn't know he was going to make a revolution, and he probably didn't intend it. He wrote a great and abstruse work, De Revolutionibus orbium coelestium, which hardly any of his contemporaries could read, let alone understand. It was...
Edward Rosen (essay date 1983)
SOURCE: "The Exposure of the Fraudulent Address to the Reader in Copernicus' Revolutions" in Sixteenth Century Journal, Vol. XIV, No. 3, Fall, 1983, pp. 283-91.
[In the following article, Rosen discusses the reasons for and outcome of Andreas Osiander's inserting an anonymous preface into the first publication of Copernicus's De revolutionibus.]
In opposition to the immemorial belief that the earth is stationary, Nicholas Copernicus' De revolutionibus orbium coelestium (Nuremberg, 1543) proclaimed that the earth is a planet in motion. On its title page this epoch-making work announced the names of its author and publisher. But it gave no hint...
Harold P. Nebelsick (essay date 1985)
SOURCE: "Copernican Cosmology" in Circles of God: Theology and Science from the Greeks to Copernicus, Scottish Academic Press, 1985, pp. 200-57.
[In the following chapter, Nebelsick discusses in detail Copernicus's contributions to astronomical research, including his theory of heliocentrism and his revision of the work of Ptolemy and other ancient astronomers.]
The Development of "Heliocentricity"
When and where Copernicus first began to think seriously about his "heliocentric" system is as difficult to ascertain as are his motives for developing it. By the end of the fifteenth century Cracow had gained a reputation as a good...
Bernard Vinaty (essay date 1987)
SOURCE: "Galileo and Copernicus" in Galileo Galilei: Toward a Resolution of 350 Years of Debate—1633-1983, Duquesne University Press, 1987, pp. 3-43.
[In the following article, Vinaty discusses the relevance of Copernicus's research to the development of Galilean cosmology.]
In the course of the second day of the "Dialogue Concerning the Two Principal World Systems, the Ptolemaic and Copernican," Gianfrancesco Sagredo, Venetian patrician and one of the three persons taking part in the dialogue, recounts:
Certain events had but recently befallen me, when I began to hear this new opinion [Copernican] talked about. Being still very...
Hans Blumenberg (essay date 1987)
SOURCE: "The Theoretician as 'Perpetrator'" in The Genesis of the Copernican World, The MIT Press, 1987, pp. 264-89.
[In the following essay, Blumenberg discusses the metaphors of revolution and violence that have characterized assessments of Copernican cosmology through the years.]
On the base of the Copernicus monument in Torun stands this inscription: Terrae Motor Solis Caelique Stator [Mover of the Earth and Stayer of the Sun and the Heavens]. The kings of Prussia had owed the monument to Copernicus for a long time. On 12 August 1773—that is, in the year of the astronomer's 300th birthday—Frederick the Great had made this promise in a letter to Voltaire....
Ann Blair (essay date 1990)
SOURCE: "Tycho Brahe's Critique of Copernicus and the Copernican System" in Journal of the History of Ideas, Vol. LI, No. 3, July-Sept., 1990, pp. 355-77.
[Below, Blair discusses astronomer Tycho Brahe's ambivalence toward Copernican cosmology. Brahe admired Copernicus's desire for mathematical simplicity in his calculations of the motions of the heavenly bodies, but he could not accept Copernicus's theory of heliocentrism.]
For Luther he was the "fool who wanted to turn the art of astronomy on its head"; for François Viète he was the paraphraser of Ptolemy and "more a master of the dice than of the (mathematical) profession"; for nearly...
Irving A. Kelter (essay date 1995)
SOURCE: "The Refusal to Accommodate: Jesuit Exegetes and the Copernican System" in Sixteenth Century Journal, Vol. XXVI, No. 2, 1995, pp. 273-83.
[In the following essay, Kelter traces the early response of the Catholic exegetical community to Copernican theory.]
On March 5, 1616, the Roman Catholic Church's Sacred Congregation of the Index issued a decree concerning the new Copernican cosmology and current works defending it. The edict prohibited, until corrected, both Nicholas Copernicus' classic work, the Revolutions of the Heavenly Spheres (1543), and the commentary on Job (1584) by the Spanish theologian Didacus à Stunica (Diego de Zúñiga). The Carmelite...
Further Reading
American Philosophical Society. Proceedings of the American Philosophical Society Held at Philadelphia for Promoting Useful Knowledge: Symposium on Copernicus. Philadelphia: American Philosophical Society, 1973, 550 p.
A collection of scholarly papers presented at the symposium by Owen Gingerich, Anthony Grafton, Willy Hartner, and Noel Swerdlow on the five hundredth anniversary of Copernicus's birth.
Armitage, Angus. Copernicus: The Founder of Modern Astronomy. London: George Allen & Unwin Ltd, 1938, 183 p.
Presents an account of the research that led Copernicus to form his theories of... | http://www.enotes.com/topics/nicolaus-copernicus |
UCR ARTSblock Film Series, Thursday-Saturday, March 21-23, 7 p.m., $9.99 general admission and $5 for students with I.D.
This mini series of space films is shown in conjunction with the ARTSblock exhibition Free Enterprise: The Art of Citizen Space Exploration.
Thursday, March 21, Moon UK 2009
“In an age when our space and distance boundaries are being pushed way beyond the human comfort zone, how do we deal with the challenges of space in real time? How do our minds deal with long periods of isolation? Space is a cold and lonely place, pitiless and indifferent. What kind of a man would volunteer for this duty? What kind of a corporation would ask him to? Moon is a superior example of that threatened genre, hard science fiction, which is often about the interface between humans and alien intelligence of one kind or another, including digital.” Roger Ebert, Chicago Sun-Times.
Friday, March 22, Alien (The Director’s Cut) USA/UK 1979
“The rerelease of Alien in a director’s cut 24 years after its debut turns out to be a great corrective to prevailing warps in movie space and time. In one dimension, the classic sci-fi thriller’s wide-screen grandeur and director Ridley Scott’s verve in filling his huge canvas with elaborate, abstract landscapes is an upside-the-head rebuke to home viewing. And the movie’s tantalizingly slow, oozing pace is a heartbeat-tripping reminder that today’s sped-up blockbuster conventions may improve on speed, but not on thrills. Even the rib-ripping birth scene unfolds at a tempo more familiar to a waltz than a rupture. Pay attention to the enhanced sound mix, which may be the most important cleaning job of all; silence and score have never twined so hauntingly.” Lisa Schwarzbaum, Entertainment Weekly.
Saturday, March 23, 2001: A Space Odyssey USA/UK 1968
“Stanley Kubrick’s 2001: A Space Odyssey is one of the greatest films of all time and it is the director’s most profound and confounding exploration of humanity’s relationship to technology, violence, sexuality and social structures. Kubrick’s philosophical inquiries about the nature of humanity are explored throughout all his films but here he explored his preoccupations by examining the place that humans occupy in the universe, asking questions about the way humanity has evolved and suggesting what the next stage of our evolution will be like. But the ultimate meaning of the film is as deliberately ambiguous as the motives and origins of the black monoliths whose gift of intelligence gave humanity the tools it needed to both survive and self destruct.” Thomas Caldwell, Cinema Autopsy. | http://www.ucira.ucsb.edu/ucr-artsblock-film-series-thursday-saturday-march-21-23-7-p-m-9-99-general-admission-and-5-for-students-with-i-d/ |
Five Facts About Boswellia
Boswellia is claimed to be a rather effective natural anti-inflammatory herb. It is an herb that is very important in Ayurvedic medicine. You will often find it in combination with other herbs to address specific issues. Boswellia is used to naturally treat many conditions in the body. Here is what you need to know:
Boswellia supports the normal function of the kidneys
It helps to maintain and support healthy joints
It can support healthy circulation
Boswellia also provides antioxidant activity
It is most commonly used to promote healthy inflammation response in the body.
I don’t feel that Boswellia is something that we should consider giving a power athlete. I personally have not yet seen enough research out there to promote the use of this herb. While it does have historical, religious, cultural, and medicinal importance, Boswellia has not been thoroughly studied yet, and the scientific data available does not yet support our knowledge of the traditional uses of it. It’s very possible that it is a safe herb, but I am very cautious when making recommendations about unregulated supplements, and I value my clients’ safety above anything else.
I do not think it is good that companies suggest other herbs to use in additional therapies. As I stated above, there is still much that we do not know about each one and how they interact with each other.
Sources: | https://www.asparagusandgold.com/post/2017/08/24/five-facts-about-boswelia |
DID YOU KNOW…….
In some instances, a color change can occur when fabrics are exposed to sun or artificial light for a period of time.
WHAT DOES IT LOOK LIKE?
Light fading is more intense on the side of the fabric that is exposed to light while in use or storage. It can be just in a small area or a more extensive color loss. Often it appears as angular patterns or streaks. Most of the time, the loss of color leaves a lighter shade, but in some cases, the areas discolored from light exposure could appear to have a darker hue. The original color of the fabric can be seen in the areas protected from light, such as inside the seam allowance, under the collar, darts, or pleats, inside pockets, or on the underside of the fabric.
Exposure to sunlight or artificial light causes this type of dye fading. Such direct and indirect light sources contain invisible light spectra, such as ultraviolet rays, that can alter the chemical structure of dyes. The main factors that determine whether a fabric will change color from light include the fiber content of the fabric and the chemistry of the dye, as well as the light source and the time and strength of exposure.
The manufacturer must select durable dyes appropriate for expected conditions of normal use and storage for the fabric’s intended end product. However, the owner of the item must take precautions not to allow the fabric to be exposed to prolonged and concentrated light during use, transportation, and storage, because practically all dyes will fade from light if the exposure is strong and long enough. | https://fabricare.ca/tales-from-the-lint-trap-july-16/ |
Hick’s Law, better known as the paradox of choice, states that increasing the number of options increases the decision time to make a choice.
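The relationship above is commonly formalized as T = b · log2(n + 1), where n is the number of equally likely options and b is an empirically fitted coefficient. A minimal sketch in Python (the 0.2 s coefficient below is purely illustrative, not a figure from this webinar):

```python
import math

def hick_decision_time(n_choices: int, b: float = 0.2) -> float:
    """Mean decision time (seconds) for n equally likely choices.

    Hick's Law: T = b * log2(n + 1). The coefficient b is empirical
    and varies by task and person; 0.2 s is an illustrative value only.
    """
    return b * math.log2(n_choices + 1)

# Decision time grows with the logarithm of the option count:
# doubling the options does not double the time, but every
# added option still slows the choice.
for n in (2, 4, 8, 16):
    print(f"{n:2d} options -> {hick_decision_time(n):.2f} s")
```

Because the growth is logarithmic, each doubling of the options adds roughly one more "bit" of decision time, which is why trimming a long menu helps, but trimming an already short one helps even more per item removed.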
Cognitive bias plays a vital role in the paradox of choice and can increase or decrease your conversion rate. Achieving simplicity can itself be complex, because limiting options means deciding which audience segments to appeal to. Learning the scientific principles of choice is imperative for increasing transactions and retaining customers long-term.
Darrell is a growth marketing expert, and the founder of Growth Hack Guides. He comes with a unique perspective to influence conversions that can help any online business build a loyal customer base in the long term. | https://vwo.com/webinars/hicks-law-cro-success/ |
Many scientists have been investigating the planet’s countless extreme environments, on the hunt for molecules that will inspire the next blockbuster drug or technological breakthrough. With some scientists estimating that less than 5% of fungal species and 1% of prokaryotic species on Earth are known, a lot of biodiversity remains to be explored.
…
From abandoned copper mines in Montana and Vermont to a coal seam in Kentucky that’s been burning for half a century, natural product chemists have begun to identify potential new pharmaceuticals in the most unlikely of places.
“Extremely hostile environments are an evolutionary playground,” says Tomasz Boruta, a bioprocess engineer at Lodz University of Technology, in Poland. “Organisms need to evolve all sorts of new compounds to adapt to those harsh conditions.”
…[T]he chances that these tiny survivor organisms will yield useful molecules is high: the compounds they synthesize have to be potent and unique to help keep them alive. Though the work has yet to arrive at a doctor’s prescription pad, experts say that it’s probably only a matter of time until some of humanity’s biggest disasters begin to yield some lifesaving compounds. | https://geneticliteracyproject.org/2019/05/21/mining-extremely-hostile-environments-for-keys-to-new-drugs-tech-breakthroughs/ |
When Jenn Norrie shares how she built her rewarding career, her involvement in a specific extracurricular activity plays a supporting role early in her story.
While studying agricultural economics and marketing at the University of Saskatchewan, Norrie joined the student chapter of the Canadian Agri-Marketing Association (CAMA), a decision that proved tremendously valuable for the networking and mentorship opportunities it provided.
“That really led me into my path of agriculture marketing and communications for a career,” says Norrie, who is the communication manager at Alltech, covering North America and Europe from her home in Calgary.
“Through CAMA at the student level, we had a chance to network with industry professionals,” she explains. “That was actually really imperative, the networking side of things, because a few individuals that I met along the way while I was a student ended up resulting in a job.”
Norrie, who grew up outside of Calgary, started her career with the agricultural advertising agency AdFarm. “You take marketing classes at school, but to really get in and see it all come to fruition and work on strategy, public relations and digital media — to see it all play out into large campaigns across Canada — was really rewarding,” she recalls.
Her enthusiasm for Canadian agriculture is evident in all she’s done, from earlier marketing and communication roles with UFA and Bayer Crop Science to her present position with Alltech.
“It’s an easy part of the job to be really passionate about agriculture and sharing stories about the industry, and the farmers and ranchers that work hard every day to produce food,” she says. “When you believe in something and you’re passionate about it, it’s a great opportunity to want to share that with more people.”
The international context of her position allows Norrie to share this passion with a vast audience. Her job originally covered North America, and the opportunity to add the European market came up in the past year, involving plenty of early-morning online meetings to connect with colleagues around the world. Thanks to Alltech’s annual international conference, she has had the opportunity to meet people from around the world, many of whom she now works with.
“I’ve met a lot of the key people from our organization that I needed to work with, and through having those relationships we’ve continued to work together virtually.”
Norrie advises those who want to build their own network to step outside their comfort zone and introduce themselves to industry professionals they admire. “Then stay connected with them because you never know where that might lead.”
While the current lack of in-person events has diminished the opportunity for face-to-face networking, Norrie believes this has opened the doors for more connection through virtual methods. “I think people are way more open to getting an email from someone or a LinkedIn request or a direct message on Twitter,” she says, advising people to ask questions about social media posts on topics that interest them to start a conversation.
She has also drawn on the expertise of mentors throughout her career, starting with the industry connections she made through CAMA. In addition to formal mentorship programs, Norrie is a fan of informal mentorship and has reached out to those she admires for career-related discussions over the years. “I know if I have questions or am looking for advice or just want to bounce ideas off of, they’re people who I know I can call,” she says.
“It’s really open conversation and typically none of us in my mentorship capacity have worked together,” she adds. “You can just talk about different things and broaden your horizons because when we’re working in a specific job or a specific company or a specific region… we might get a little blinded from that larger scope.”
In considering the scope of opportunities now available for women in agriculture, Norrie is thankful for the strides made by previous generations that opened doors for so many. “Even from the time when I was in school and started my career, I see more and more women pursuing careers in agriculture.”
While she recognizes that there are areas of agriculture in which improvement is needed, she’s encouraged by the number of women who are making their mark on the industry. “Our federal ag minister is a woman, the Canadian Federation of Agriculture president is a woman — there are more women in many key roles,” she says. | https://www.advancingwomenconference.ca/2021/08/01/making-connections-that-matter/ |
Yes.. this is something that has been researched and tested in the laboratory. I don't know that anyone has actually tried reconfiguring around a damaged piece of an FPGA, if for no other reason than permanent damage in a reconfigurable FPGA is extremely unusual (and probably hasn't ever occurred). There are soft upsets in the configuration memory, and the Virtex and Virtex II have a potential failure mode where an upset in just the wrong place could cause damage (having two logic element outputs fighting each other), but it's very unlikely.
There's a fair amount of test data on radiation behavior (klabs.org or MAPLD are places to look). I'm not sure there's a failure mechanism (with high enough probability) that causes a hard failure of just some gates. (These parts are typically latchup-immune, for instance). I suppose some sufficiently high energy particle could damage a few gates permanently. You'd need very high Linear Energy Transfer, though. There's a paper by Fuller, et al, out there where they zapped a Virtex with 2068 MeV Au ions, looking to see if latchup could be observed at any LET below 125 MeV-cm^2/mg (this is the upper bound for galactic cosmic rays). No latchup detected. They did see an increase in current, but it's because of the configuration upsets causing internal logic contention, and went away when the device was reconfigured. (fluence was 1E7-1E8 ions/cm^2, which is HUGE compared to what you see in real life. There were some changes in current that stuck around for a few hours, but gradually annealed away)
As far as upsets go, typical predicted upset rates are on the order of 2 upsets/device day in LEO up to 5.9 upsets/device day in GEO. With flare enhancement, it's like 21 upsets/device day for LEO and 81.5 for GEO. (of course, life is better than this.. in most designs, the vast majority of configuration bits are "don't care", so you wouldn't see the upset.. a typical multiplier is 4:1. That is, half an upset/device day for LEO) (all these are for the XVQR300)
(another source reports a cross section for proton SEU of 5E-13 cm^2/bit.. the device has, say, 6E6 bits, so you can figure out what kind of proton flux you need to get a given upset rate)
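The back-of-the-envelope calculation invited here is just: upsets per device-day = (cross section per bit) × (bits) × (flux). A sketch in Python using the numbers quoted in this thread (the function names and the 4:1 "don't care" derating applied at the end are mine):

```python
CROSS_SECTION = 5e-13   # cm^2 per bit, proton SEU cross section (quoted above)
CONFIG_BITS = 6e6       # configuration bits in the device (quoted above)

def upset_rate(flux: float) -> float:
    """Config upsets per device-day for a proton flux in protons/cm^2/day."""
    return CROSS_SECTION * CONFIG_BITS * flux

def flux_for_rate(upsets_per_day: float) -> float:
    """Proton flux needed to produce a given upset rate (inverse of above)."""
    return upsets_per_day / (CROSS_SECTION * CONFIG_BITS)

# e.g. the ~2 upsets/device-day LEO figure quoted above implies a flux of:
flux = flux_for_rate(2.0)
print(f"{flux:.2e} protons/cm^2/day")   # roughly 6.7e5

# With the typical 4:1 "don't care" derating, the design-visible rate drops:
print(f"visible upsets/day: {upset_rate(flux) / 4:.2f}")   # roughly 0.5
```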
And, of course, now there's a rad hardened Virtex 5 available (you too can own one for about $80k/copy).. 1Mrad(Si) total dose, config mem upset rate in GEO 3.8E-10 errors/bit/day. Single Event Functional Interrupt (SEFI) of configuration control logic (this would prevent you from reconfiguring on the fly) in GEO is once every 10,000 years.
So it's not really clear that you NEED to be able to reconfigure around damage..
We've only been flying Xilinx Virtex parts for long durations since 2005 (Mars Reconnaissance Orbiter) (there might be some other earlier experiments.. CANDOS used a couple of Virtex II parts on Shuttle for 2 weeks in 2003 and they only operated for 10s of hours). We do periodic scrubbing/reloading of the configuration memory, and I'm not sure we even know if there was a transient upset (that is, we don't read it back, we just rewrite, blindly). There's some DoD comm payloads that use Virtex parts, and their mitigation strategy for configuration upsets is to have two devices and ping pong between them.. while chip 1 is being configured, use chip 2; when done, flip, reconfigure chip 2 and use chip 1.
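That ping-pong mitigation can be sketched as a toy scheduling loop (purely illustrative Python; real payloads do this in hardware, and all names here are hypothetical):

```python
# Toy model of the ping-pong scheme: while one FPGA is being scrubbed
# (blindly reloaded with a known-good bitstream), its partner does the work.

class Fpga:
    def __init__(self, name: str):
        self.name = name

    def reconfigure(self) -> None:
        # Blind rewrite of the configuration memory, as described above:
        # no readback, just reload the bitstream.
        pass

def ping_pong(fpgas, cycles: int):
    """Yield the active device each cycle while its partner is scrubbed."""
    active, standby = fpgas
    for _ in range(cycles):
        standby.reconfigure()              # scrub the idle part...
        yield active                       # ...while the other carries the load
        active, standby = standby, active  # then swap roles

a, b = Fpga("chip1"), Fpga("chip2")
print([f.name for f in ping_pong((a, b), 4)])  # ['chip1', 'chip2', 'chip1', 'chip2']
```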
When all is said and done, reconfiguring to get around a human coding error is actually much more likely.
Jim Lux
From: beowulf-bounces at beowulf.org [mailto:beowulf-bounces at beowulf.org] On Behalf Of Nathan Moore
Sent: Wednesday, September 05, 2012 8:24 AM
To: beowulf at beowulf.org
Subject: Re: [Beowulf] Servers Too Hot? Intel Recommends a Luxurious Oil Bath
> On Tue, 4 Sep 2012, Ellis H. Wilson III wrote:
Which is why I was suggesting that, "Maybe the whole thing is just
built, sealed for good, primed with [hydrogen/oil/he/whatever], started,
allowed to slowly degrade over time and finally tossed when the still
working equipment is sufficiently dated."
I remember an "ancient" IBM technical article about the BlueGene, here: http://researcher.watson.ibm.com/researcher/files/us-ajayr/SysJ_BlueGene.pdf
In the work (or maybe it was a closely related paper), the authors make the point that as core count increases and feature size decreases, CPU units will have to be fault tolerant, e.g. if cosmic rays have toasted 10% of your chip's cores, it should still be able to function. Relatedly, this is one of the great beauties of FPGAs. Jim Lux can probably tell us if this would be real, but it would seem to make sense to program a space probe (i.e. Voyager type) with an FPGA-emulated CPU for the sake of damage survivability. In the worst case that the probe encounters something unpleasant and part of the FPGA is damaged, perhaps the rest of the LUTs in the FPGA could be reprogrammed to produce a less powerful, yet still functional, controller. This would take the "field programmable" aspect of the device to a new height...
Nathan
The Strange Life and Times of Mystic Baba Vanga
Who was Baba Vanga, and why did her predictions become so well known? Born in 1911 in what is now Bulgaria, she lost her eyesight to a mysterious illness at the age of twelve. She quickly gained notoriety as a seer who could make accurate prophecies about the future, and her predictions were sought after by figures such as dictators and world leaders. In this article, I will explore some of the secrets behind Baba Vanga's predictions.
Insight into an Unknown Future
Baba Vanga's forecasts have been studied for decades, and many believe she had access to special insight into the future. While it is impossible to know for certain how her prophecies came about, several theories have been proposed to explain them. One popular theory suggests that she used supernatural powers to tap into hidden information about the future. Others suggest she may have used astrology or other forms of divination to produce her visions.
The Power of Intuition
Another possible explanation for Baba Vanga's prophecies is her intuition. While not everyone believes in intuition, it has been suggested that Baba Vanga may have had an innate ability to sense that something was going to happen before it actually did. This intuitive ability could explain why some of her prophecies were so accurate, even when they were made decades before the events occurred.
Influencing Events with Prophecies
It is also possible that some of Baba Vanga's prophecies were self-fulfilling, meaning that simply making the prediction influenced events in a way that brought it about or prevented it from coming true. For example, if someone heard one of her prophecies about a major event in their lifetime, they might change their behavior accordingly to ensure it did not come true, thereby inadvertently fulfilling the prophecy themselves!
The exact source of Baba Vanga's prophetic power remains unknown; however, various theories attempt to explain how she was able to make such accurate prophecies about the future. Whether her visions were driven by supernatural forces or simply by a remarkably strong intuition, one thing is certain: Baba Vanga left an indelible mark on history with her prophecies and continues to captivate people around the world today. Her influence can still be felt, as many people look back on her words with awe and admiration. | https://www.cripplecreektx.com/the-strange-life-and-times-of-mystic-baba-vanga/ |
DWP has published statistics from the beginning of the Work Programme in June 2011 to the end of December 2015. In this data release, we are able to report on the two-year job outcome performance, i.e. whether or not an individual has secured a job outcome during the entire length of time on the programme.
The headline results are:
- The two-year Job Outcome performance is 26.0%, 2.9 percentage points above DWP’s expectations. This figure is for the whole Work Programme from June 2011 to December 2015.
- Two-year performance over the whole programme has increased slightly, from 25.7% in the December 2015 release to 26% now.
- For those completing the programme in the latest two months, two-year performance has risen from around 22% in 2013 to around 31% now.
- 1.81 million people have been referred to the Work Programme since June 2011.
- 503,200 people have had a ‘sustained’ job outcome through the Work Programme.
- ERSA – the Providers Trade Association, report that 772,000 participants have started work – and may eventually get a ‘Job Outcome’.
- 12.5% of ESA new claimants get a job outcome within two years, below DWP’s expectation of 12.7%. The equivalent figure for ex-IB ESA participants is 4.7%.
- People with a disability and those aged 50 and over are the least successful in getting a job through the Work Programme.
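A quick arithmetic cross-check of the quoted totals is straightforward. Note that these crude ratios are not the same measure as the headline 26.0%, which is a two-year cohort figure, so they are not expected to match it:

```python
# Crude ratios from the totals quoted above; these are sanity checks,
# not the official cohort-based performance measures.

referrals = 1_810_000         # referred to the Work Programme since June 2011
sustained_outcomes = 503_200  # people with a 'sustained' job outcome
job_starts = 772_000          # ERSA-reported participants who started work

outcome_share = sustained_outcomes / referrals  # roughly 0.278
start_share = job_starts / referrals            # roughly 0.427
```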
LEARNING AND WORK INSTITUTE COMMENT
Duncan Melville, Chief Economist at the Learning and Work Institute commented:
“The latest Work Programme performance figures show that just over one in four programme participants (26.0%) have secured a sustained job outcome. Evaluation evidence has indicated that the programme has performed similarly to those previous programmes it replaced but at a lower cost. Last year’s Spending Review announced that a new Work and Health Programme would replace the Work Programme and Work Choice.
Subsequently, it became clear that funding for this new programme would only be one fifth of the level of funding previously provided for the Work Programme and Work Choice. Yesterday’s Labour Market statistics release and OBR projections have again made clear that this is a mistake that the Government needs to revisit. The labour market already appears to be cooling, and the OBR’s projections are for a rising unemployment rate from around the middle of next year.
Even with the current unemployment rate of around 5%, we have 2.2 million people who have been on out of work benefits for 2 years or more and 1.6 million who have been on them for 5 years. This is a waste of human potential that needs to be addressed. This requires greater spending on active welfare to work measures to help those suffering long term worklessness to return to work, not 80% cuts in funding.”
Download links
| https://www.learningandwork.org.uk/resource/work-programme-statistics-march-2016/ |
Mankind is a species of habit. Based on our childhood, on our education and the experiences we make, we develop certain concepts to explain the world around us. This is a very helpful tool in coping with all the different impressions we perceive everyday. Especially in a world evolving so quickly. Although helpful most of the times, it can also hinder us from further growth, which is why we should always be open to rethink and adjust or even neglect our concepts.
In a way these concepts we form become like an actual physical construct. Like a small house, that we build around us. It offers us comfort and safety. When we interact with the world, we open the shutters and let some light in, knowing that in this house we are secure. Everything that comes in, comes through the windows, which are like lenses. They filter every information based on our experiences. It becomes quite difficult to see the world for what it is, since we always see it through these filters.
Although this house offers tremendous comfort, it also comes with limitations. Since we feel so safe in it, we tend to be reluctant to change anything about it. Oftentimes, instead of taking impressions in for what they are, we tend to reaffirm what we already believe to know. We don't really search for new insights, but for a confirmation of the construct we have built. Most of the time this happens subconsciously; we are not even aware of the process.
The safe house therefore becomes more of a limitation, something that is holding us back in our personal evolution. Some might even call it a prison!
This process appears not only on an individual scale, but also on a societal level. Take the subject of ancient civilisations for example. Modern science tells us, that civilisation is about 5,000 years old and that prior to that time, we were hunters and gatherers.
This is the construct that has been built, the lens through which every new information is seen.
There is, however, information that strongly suggests civilisation dates back far longer than that. Even with hundreds of out-of-place artefacts, scientists are not willing to change their point of view.
This gets even more evident with the Sphinx in Egypt. According to geologists, the erosion on the outside of that structure could not have been caused merely by sand and wind. Only heavy rainfalls could have caused erosion like that. How is that possible? Egypt is not really known for that kind of weather. The last time the region had strong rainfalls needed to cause such an erosion was after the last ice age, dating the Sphinx to be at least 12,000 years old.
Despite this clear evidence, contemporary egyptologists do not sway from their opinion. Since all of their theories are built on that foundation of civilisation being 5,000 years old, they are not willing to alter their point of view for it would question everything they believe in.
So instead of taking in new information to possibly gain a clearer understanding of reality and of the questions we ask, the information is merely used to reaffirm what is already there. And if it does not fit that concept, it is dismissed as insubstantial.
For true evolution, we have to go beyond these limiting beliefs. The first step is to become aware of our constructs and beliefs. See them for what they are and try to understand where they are coming from. This will allow us to become more self aware and gain a deeper understanding of our Self. It may also help us to discover some flaws in the constructs we have created.
It would be too much to ask, and also not very practical to get rid of these constructs all at once. Instead, we can begin to question them, to alter them and to remain open for new insights. Every day offers tremendous potential for new learning experiences. And a certain degree of openness is essential, at least to widen these limiting beliefs. So that the small prison may evolve to a mansion, a whole field, a country, the planet, and one day may even evolve to the whole universe.
At this moment there are no longer any concepts at place, but pure reality for what it is. And doesn’t that sound intriguing? Isn’t that something worth striving for?
Let us therefore remain constantly open. Open for new insights, open for new perspectives, open for questioning our current beliefs to gain a deeper understanding of our Self and our surrounding.
It may be uncomfortable, surprising or even difficult at times, but employing that principle and having it in mind, will undoubtedly lead us to a deeper understanding and accelerate our growth as an individual and as a society. | https://journeyofamystic.com/go-beyond-your-limitations/ |
The invention discloses a power distribution network edge side photovoltaic adaptive integrated prediction method, and the method comprises the steps: taking a Tradaboost algorithm as a main body integrated frame, and taking an extreme learning machine as a basic predictor in a model initial training stage; in the integration process, the extreme learning machine predictors with low prediction performance are deleted, and the extreme learning machine predictors with high prediction performance are correspondingly improved, so that the final integration scale is reduced, and the calculation overhead and storage resources of subsequent prediction are reduced. In a day-ahead rolling prediction stage, in combination with daily actual photovoltaic output data, an online sequence extreme learning machine algorithm is adopted to carry out parameter updating on each basic extreme learning machine predictor, the self-adaption problem of an edge side photovoltaic prediction model is solved, and an effective basis is provided for a power distribution network to generate a control strategy of a distributed power supply. | |
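The basic predictor named in the abstract, the extreme learning machine, is compact enough to sketch: hidden-layer weights are random and never trained, and the output weights come from a single least-squares solve. The code below is a generic textbook ELM regressor, not the patented Tradaboost ensemble or its online-sequential update; the network sizes and toy data are invented for illustration.

```python
# Generic extreme learning machine (ELM) regressor, the kind of basic
# predictor the abstract builds its ensemble from. This is a textbook
# sketch, not the patented method; sizes and data are invented.
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    def __init__(self, n_inputs, n_hidden):
        # Random hidden layer, fixed at construction and never trained.
        self.W = rng.normal(size=(n_inputs, n_hidden))
        self.b = rng.normal(size=n_hidden)
        self.beta = None  # output weights, set by fit()

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        # One least-squares solve (Moore-Penrose pseudo-inverse) gives
        # the output weights; this is the whole "training" step.
        self.beta = np.linalg.pinv(self._hidden(X)) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Smoke test on a smooth toy target.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
model = ELM(n_inputs=2, n_hidden=50).fit(X, y)
max_err = float(np.max(np.abs(model.predict(X) - y)))
```

An online-sequential variant (OS-ELM), as used in the day-ahead rolling stage, would update `beta` recursively as new daily photovoltaic data arrives instead of refitting from scratch.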
Huawei CFO Meng back in Canadian court fighting U.S. extradition
VANCOUVER – Huawei Technologies Chief Financial Officer Meng Wanzhou will be back in a Canadian courtroom on Monday as her lawyers resume their fight to block the United States’ efforts to extradite her.
Meng, 48, was arrested in December 2018 on a warrant from the United States charging her with bank fraud for misleading HSBC about Huawei’s business dealings in Iran and causing the bank to break U.S. sanction law.
Huawei lawyers will argue that the U.S. extradition request was flawed because it omitted key evidence showing Meng did not lie to HSBC about Huawei’s business in Iran.
Meng, the daughter of billionaire Huawei founder Ren Zhengfei, has said she is innocent and is fighting extradition from her house arrest in Vancouver.
The arrest has strained China’s relations with both the United States and Canada. Soon after Meng’s detention, China arrested Canadian citizens Michael Spavor and Michael Kovrig, charging them with espionage.
Meng will appear in British Columbia’s Supreme Court on Monday for five days of Vukelich hearings – in which the judge will ultimately decide whether to allow the defence to admit additional pieces of evidence in their favour.
In this case, Huawei lawyers will use a PowerPoint presentation to show HSBC knew the extent of Huawei’s business dealings in Iran, which they say the United States did not accurately portray in its extradition request to Canada.
In previously submitted documents, Meng’s lawyers claim the case that the United States submitted to Canada is “so replete with intentional and reckless error” that it violates her rights.
The argument is part of Meng’s legal strategy to prove that Canadian and American authorities committed abuses of process while arresting her.
Lawyers representing the Canadian attorney general are arguing for her extradition to the United States.
Vukelich hearings are rare in extradition cases, said Gary Botting, an extradition lawyer based in Vancouver, but given the complexity of Meng’s case it is not surprising.
The defence’s success “depends entirely on the nature of the evidence… and whether or not there is any substance to their allegations,” Botting added.
Meng’s extradition trial is currently set to wrap up in April 2021, although if either side appeals the case, it could drag on for years through the Canadian justice system.
Consumers’ concerns about what is healthy when it comes to food intake have increased as a result of the rise in food-related issues.
One of these concerns is whether it is safe to eat soup that has been left out overnight. We discovered that eating soup that has been left out overnight has health consequences based on observation and research.
No, you cannot eat soup left overnight because it is not safe to eat. Soup that has been left out overnight has been exposed to bacteria that can cause illness.
This article covers everything you need to know about whether you can consume soup that has been left out overnight. We have also included information on what happens if you consume soup that has been left out overnight, how to identify spoiled soup, and how to keep your soup fresh.
Can You Eat Soup Left Out Overnight?
It is not safe to eat soup that has been left out overnight. Soup left out overnight is unlikely to spoil, but it may become contaminated and cause foodborne illness.
When you leave your soup out for an extended period, it becomes contaminated with bacteria. Staphylococcus aureus, Salmonella enteritidis, and Campylobacter are among the microorganisms.
These bacteria are among the most common causes of food poisoning. Some illnesses caused by these bacteria are not life-threatening, but they can make a patient sick and uncomfortable for hours, if not days when they are not treated promptly.
What Happens When I Eat Soup Left Overnight?
These happen when you eat soup left overnight:
1. Decreased Flavor And Aroma
Eating leftover soup deprives you of a pleasant taste and aroma. The soup's flavor and aroma diminish and it becomes bland, so there is little chance you will enjoy it. Soup tastes best when it is hot.
2. Food Poisoning
When you eat soup that has been left out overnight, you risk getting food poisoning. When you leave your soup at room temperature, it cools and becomes contaminated with bacteria. Most bacteria cannot survive heat, which is why you should always reheat your soup before eating it. As your soup cools, bacteria multiply rapidly, increasing the risk of food poisoning.
Diarrhea, vomiting, nausea, and abdominal cramping are the most common symptoms of food poisoning. The symptoms may appear after 30 minutes of consuming the soup in some cases. Food poisoning of this kind can be treated with over-the-counter medications. However, if your symptoms persist, we recommend that you contact a doctor.
3. It Impedes The Rate of Digestion
Cold soup reduces the rate of digestion because it makes your gut work overtime to break down your food. This is unlike hot soup that has been partially broken down. Most people experience bloating after eating cold soup.
4. Reduced Nutritional Content
Because cold soup takes a long time to digest, the body absorbs nutrients slowly. This indicates that your body is not retaining the necessary nutrients for appropriate growth when you eat cold soup.
All of these problems can be avoided by reheating leftover soup or storing it appropriately in a cooler, refrigerator, or freezer. To avoid microbial accumulation, the CDC recommends that perishable food be refrigerated within two hours.
How to Know Soup Has Gone Bad
These are indications your soup has gone bad. First, smell it: if the soup has an unpleasant odor, throw it out right away, because eating it could give you food poisoning. Second, examine the appearance and texture for any changes. If the soup shows signs of mold or is slimy, discard it immediately; this indicates an extreme level of microbial accumulation.
How to Preserve Soup
Although warming your soup can preserve it, if it is not fully reheated, it can be exposed to bacteria in a short period of time. Make sure your soup is hot and steaming when reheating it. Stir and rotate the pot continuously to ensure that it warms evenly.
Keep any soup that will not be consumed within a few hours refrigerated.
Make sure the soup is consumed within three days of refrigeration, and store it away from the refrigerator door, where temperature fluctuations increase the risk of spoilage. Freezing your soup is a good option for long-term storage.
Under the right temperature conditions, frozen soup can last up to 6 months. Slow cookers can also be used to keep your soup warm for an extended period of time. Coolers and thermos flasks can keep soup warm for several hours.
Final Note
Because you can’t eat soup that has been left out overnight, use the preservation measures we have provided.
Furthermore, if your soup has been sitting out for a long time, look for symptoms of spoilage to avoid eating spoiled food. Always reheat your soup before consuming it for your health’s sake.
| https://loving-food.com/can-you-eat-soup-left-out-overnight/ |
The project "Danube-Networkers-Lectures" (DALEC) was a multilingual European online lecture series organized by the danube office Ulm / Neu-Ulm together with the ZAWiW of the University of Ulm and other European partner universities.
The first online lecture series (DALEC 1) was held under the title "Values and Routes along the Danube" from November 2011 to August 2013, with five lectures given by the partners from Vienna, Belgrade, Budapest, Craiova and Ruse. The lectures were in German, with consecutive translation into the national languages of all partners.
From March to April 2013, DALEC 2 was jointly organized by the danube office Ulm / Neu-Ulm with the ZAWiW of the University of Ulm and partners from Serbia, Bulgaria and Romania under the title "Blickwechsel zwischen Generationen und Kulturen aus vier Donauländern" ("Changing Perspectives between Generations and Cultures from Four Danube Countries"). The three lectures were given in English by scientists from the respective countries, transmitted simultaneously by video conference to the other venues, and translated consecutively into each national language. This was followed by a well-structured discussion with participants from all four countries.
The aim of the project was to use the new media to work out different perspectives between the Danube countries and the different generations, to identify similarities and differences, and to search for solutions to social problems from different perspectives. The event series thus contributed to a better understanding between the people in the Danube region. The project was aimed at all generations, involving senior citizens, professionals, students and pupils. | https://www.uni-ulm.de/en/einrichtungen/zawiw/offers/europe-wide-activities/archiv/dalec/ |
The Ann Greyson scrapbook contains programs, photographs, clippings, fliers, and financial receipts documenting some theater productions of producer and stage manager Ann Greyson.
The M.R. Lauterer collection of general reference theater scrapbooks date from 1876 to 1915 and contain nine volumes of clippings and programs.
Boris Brodenov (1895-1960) was an actor and radio announcer. The Boris Brodenov papers hold clippings, correspondence, photographs, programs, scripts, and scrapbooks documenting his career.
Edward Tuckerman Mason (1847-1911) was an American literary and drama critic. The Edward Tuckerman Mason papers, 1833 to 1914, consist of 15 scrapbooks and a file of letters. | http://archives.nypl.org/controlaccess/3285?term=Theater%20--%20United%20States%20--%2020th%20century |
A new partnership between Texas A&M AgriLife Extension Service and the Texas Council for Developmental Disabilities, TCDD, is creating a statewide presence to improve the lives of individuals with disabilities, caregivers, partners and providers in communities throughout Texas.
According to the U.S. Census Bureau, 19% of people in the U.S. and more than 5 million Texans have a disability, said Andy Crocker, AgriLife Extension statewide program specialist in gerontology and health, Amarillo.
“With Texas A&M AgriLife Extension’s mission to help Texans better their lives, this partnership with TCDD will allow us to serve new audiences with our research-based, practical, applicable education,” Crocker said.
AgriLife Extension programs are delivered throughout the state by a network of local educators and volunteers. For instance, the Family and Community Health Unit focuses on topics such as child and adult health, nutrition, child care, financial management, passenger and community safety, and building strong families. The goal is to encourage lifelong health and well-being for every person, every family and every community.
The disability population in Texas is just as varied and diverse as the state they live in, said Beth Stalvey, Ph.D., TCDD executive director. Just like all Texans, people with disabilities are from diverse cultural backgrounds, live in concentrated urban centers or remote rural areas, and participate in community life in different ways. Because the state is diverse in so many ways, the experiences of people with disabilities and those who provide care are also diverse, which includes how they access support and services, information and resources.
By establishing five regional community outreach coordinators in conjunction with AgriLife Extension, TCDD gains the visibility, benefit and reputation of being active and engaged in understanding specific needs, identifying strategies to reduce barriers and networking to form new partnerships and support at the local level, she said.
Stalvey said the partnership focuses on multiple goals and objectives in TCDD’s 2017- 2021 state plan. This includes support of promising new practices, addressing linguistic and cultural barriers, and promoting leadership and advocacy training among self-advocates, families and other allies.
For more information, go to TCDD’s website: https://tcdd.texas.gov.
—
Through the application of science-based knowledge, AgriLife Extension creates high-quality, relevant continuing education that encourages lasting and effective change.
Subscribe to our e-mail newsletter or connect with us on Facebook, Instagram and Twitter. | https://agrilifeextension.tamu.edu/blog/agrilife-extension-effort-will-expand-impact-for-texans-with-disabilities/ |
Abrash
A change of tone color in the field or border caused by differences in wool or dye batches. It frequently occurs when the weaver runs out of one batch of yarn and continues with a second. It can also appear when one dye fades at a different rate than another.
All-Over Design
A pattern that is repeated throughout the field. No central medallion is present. Floral all-over designs or Herati designs are good examples.
Arabesque
A popular design in oriental rugs and carpets consisting of intertwining vines, leaves, flowers, buds or branches. Arabesques can be either floral or geometric in nature and are used both in the field and border.
Border
The patterned bands and colored bands framing the field. There are generally three or more bands, the widest being referred to as the main stripe.
Boteh
A leaf-like motif with a curved tip. Frequently found to decorate the whole field as a repetitive pattern. "Bush" in Persian.
Bleeding
Occurs when dyed yarn has not been washed properly after the dyeing process, causing it to "bleed" or run into the surrounding areas. It can also happen to chemical dyes, which are not stable or colorfast. The most common color affected is red.
Broken Border Design
When the border designs cross over the line and enter the field (or vice versa). It looks as though the motif is not confined to its intended position on the rug or carpet. Frequently seen in Persian Kermans and other weavings with French influences.
Cartoon
The design for a carpet or rug, often copied onto graph paper to make it easier to follow. The weaver follows the design on the cartoon.
Chrome Dye
A group of modern synthetic dyes that are composed of potassium dichromate. These dyes are colorfast.
Colorfast
When a dye is stable to both light and washing.
Dye
A substance used to color fibre, yarn or textiles.
Field
The area of a rug or carpet enclosed by a border. It usually has a design, but not always.
Gabbeh
Term used to denote a thick and long piled rug or carpet, usually woven in the Fars region of Iran. It means “unclipped” in Persian.
Gul
A lobed or stepped polygon with geometrical ornamentation that is characteristic of Turkoman weave. In some cases, it has totemic significance for the particular tribe from which it originated.
Herati
A repeating pattern consisting of a rosette bearing palmettes at its four corners.
Kilim
A rug or carpet with no pile. Design is created by the different colors of the weft strands as they are woven through the warp strands.
Knot
The wrapping around the warps of the yarn (wool) threads, the ends of which project to form the pile of the rug or carpet. There are two basic types of knots used in the Persian carpets: the symmetrical (Ghiordes) or Turkish knot and the asymmetrical (Senneh) or Persian knot.
Knot Count
The number of knots per unit of measure, multiplying the number transversely by the number longitudinally per inch (for example). This can sometimes become complicated when the warps become depressed in finely woven rugs and not all the knots can be seen.
Loom
The basic frame used for weaving. Two horizontal beams are used to tie the vertical warps and hold them tightly in place. Looms can be either horizontal or vertical. Horizontal looms are small, used for nomadic weavings and normally used horizontally on the ground. These are easily folded for transportation during migration. Vertical looms are used for weavings of larger rugs and carpets in city and town workshops and are stationary. Several people can sit side-by-side weaving simultaneously.
Mahee
Persian for "fish". The term refers to the overall repeating pattern of a "fish-eye" design.
Medallion
A main field design located in the center of the rug. Shapes can be of diamonds, hexagons, circles, stars, octagons or ovals. A rug may have more than one medallion.
Natural Dye
Dyes derived from vegetal or animal sources to color yarn or textiles.
Common colors are:
- Blue from Indigo
- Red from the Madder plant and from Cochineal insects
- Brown or black from oak bark, acorn husks, tea and walnut husks
- Yellow from artemisia, centaury, daphne, onionskin, pomegranate, turmeric
- Green from Indigo mixed with any of the yellow dyes
Overcasting
A method of finishing the edges of a fabric parallel to the warp in which several warps are wrapped in a circular manner with yarn, which is separate from the rest of the rug.
Palmettes
A stylized fan-like motif resembling the cross section of a lotus flower. It appears in both field and border designs.
Pendant
A small flower or cluster of flowers at the top and bottom of a medallion.
Persian Knot
One of the two major knot types used in oriental rugs and carpets, the symmetrical knot being the other. Both knots usually wrap around two strands of warp. The Persian knot (Senneh) can be either looped over a warp on the left and opened up to the right, or it can be looped over a right warp and opened up to the left.
Pictorial Rug
A rug or carpet that depicts representations of people, places or any other images other than conventional design motifs.
Pile
The raised surface of a rug formed by the weaving of yarn, which projects from the foundation.
Ply
The twisting of two or more strands of yarn together. They are usually plied in the opposite direction in which they were spun.
Prayer Rug
A directional rug with a representation of a Mihrab (prayer niche in the wall of a mosque). In Islam, the rug should be placed towards Mecca, and the faithful will kneel in the Mihrab and pray.
Runner
A long, narrow rug, which usually has a width of up to three and a half feet.
Selvedge
The vertical edge of a rug or carpet where two or more cords of warp are usually wrapped with separate wefts to reinforce the sides. It is in the selvedge that the wefts reverse direction.
Symmetrical Knot
One of the two major knot types used in Oriental rugs and carpets, the asymmetrical knot being the other. Both knots usually wrap around two strands of warps. The symmetrical knot (Turkish knot) wraps around both warps and opens up between the two.
Synthetic Dye
Dyes derived from chemical processes rather than from natural resources. Synthetic dye production began in the 1850s and by the 1870s, these dyes began replacing natural ones in the main rug weaving areas. They were cheap and fast to produce and therefore were much more affordable to weavers. By the early part of the 20th century, chrome dyes were introduced. These are modern synthetic dyes used with potassium dichromate. They are usually stable and colorfast.
Tree of Life Design
A design in Oriental rugs or carpets depicting a tree with limbs pointing upwards. This motif can be depicted in many different variations: naturalistic, geometric or abstract.
Warp
Vertical foundation strands running the length of the carpet. Before weaving can begin, warps need to be correctly positioned on the loom. The warp is generally made from cotton, wool or silk.
Weft
The horizontal foundation strands, which are passed over and under the warps at right angles. Besides helping to lock the knots into place, wefts together with warps make up the foundation of a rug. | https://www.ecarpetgallery.com/us_en/glossary/ |
The Hand and Foot card game is a variation of another rummy-style meld-making game known as Canasta. The differences are explained below in the FAQ.
How to Play Hand and Foot?
Hand and Foot is played with the Joker Variant of the Anglo-American Standard deck. This means that instead of 52, the game is played with decks of 54 cards.
More than one deck is needed in order to play Hand and Foot. Traditionally, 5 to 6 decks are combined into one when playing with five or more Players.
Instructions on how to play Hand and Foot are presented below.
First Deal
To start the game, the decks are shuffled into one shoe and each Player is first dealt 11 cards. This first set of 11 cards, known as the Hand, may be looked at by the Player it is dealt to. Once each Player has been dealt their Hand, another set of 11 cards will be dealt.
Second Deal
This second set of cards cannot be looked at until the Hand has been emptied through normal course of play. The second set, known as the Foot, will only be turned over and looked at once the Hand is empty. Once the Foot has also been emptied through the course of play, the game will end.
Set up
Once each Player has been dealt their 22 cards, the remaining cards will be placed face down in the center of the area of play.
This deck is known as the “Stock” and will be the drawing area of the game.
Next to the stock, turn over the card at the top of the stock face-up. If it is a red (♦♥) 3, a 2 of any kind, or a joker, place it on the bottom of the stock and turn over another card. This upturned card represents the discard pile.
Taking Turns
Each Player will take their turn, starting off by drawing two cards from the stock. The course of the Player’s turn, during which time they may lay out any melds on the table, will end with discarding a single card into the discard pile.
If a Player has a card in their Hand or Foot of the same rank as the top card of the discard pile, they are allowed to "take the pile" and draw the top 7 cards of the discard pile.
However, when doing this, the Player may only use that card which was on top of the discard pile for a meld that turn. The other 6 cards may not be played until that Player’s next turn.
Melding
In order to start playing melds, a certain number of cards must be placed down in order to meet a minimum point requirement.
Players must keep drawing and discarding until the requisite number of points have been placed down in the area of play. A Player may play a meld in order to meet this point requirement.
- Round 1, 50 points must be contributed.
- Round 2, the minimum is 90.
- Round 3 it is 120.
- Round 4 it is 150.
Books
Melds in this game are finalized into piles called “books” that will be set out on the area of play. There are two kinds of books, Black or dirty books, and Red or clean/natural books.
Clean books are formed from any 7 cards of the same rank. For example, seven 9 cards. These cards may be any color, and any suit, so long as they are the same rank.
There are also dirty melds. Dirty melds are formed from at least 4 cards of the same rank, plus up to three wild cards (Jokers and 2s of any suit) to make up the remaining cards of the meld.
Once a meld or book is complete, it is placed on the table in a single-stack pile, with all cards turned face up. The top card of the pile represents the kind of meld. If there is a normal card on top of the pile, it is Red. If there is a Joker on top of the pile, then it is Black.
Points
Points are scored both for books, and for the value of cards in the books. Kings, for example, are worth ten points per card, whereas 5s are only worth five points.
So, a Red book of Kings and a Red book of 5s both get the same meld score, five-hundred points. The book of Kings is better however because points are then scored for each individual card.
- Red book of Kings is worth 570 points in total.
- Red book of 5s is only worth 535 points.
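The arithmetic in the two examples above can be sketched in a few lines. The 500-point clean-book value and the per-card values (10 for a King, 5 for a 5) are the ones used in the text's examples; any other values would need to come from the scoring charts.

```python
# Score for a completed book: the meld value plus the face value of each card.
# Values here are the ones used in the text's examples (clean book = 500).
CLEAN_BOOK_VALUE = 500

def book_score(per_card_value, num_cards=7, meld_value=CLEAN_BOOK_VALUE):
    """Total points for a book: meld value plus per-card values."""
    return meld_value + per_card_value * num_cards

print(book_score(10))  # Red book of Kings -> 570
print(book_score(5))   # Red book of 5s   -> 535
```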
The game continues until the first Player empties both their Hand and their Foot cards. The final card must be discarded at the end of the turn, following the normal discard procedure.
Playing with Partners
Hand and Foot may be played with partners. The game continues as normal, with each "team" acting as a single Player would. Each team is dealt 11 cards for the Hand and 11 cards for the Foot, and partners work together to make their plays and decide which cards to discard.
Hand and Foot Rules
- According to traditional rules of Hand and Foot, the top card from the stock must be discarded at the start of the game.
- When taking the pile, only the top card may be used for any melds within that same turn. Also, 7 cards must be taken when taking the pile.
- At the start of each turn, 2 cards are drawn. At the end of each turn, one card is discarded. In order to end a round, a Player must discard their last card in this fashion.
Scoring Charts
Meld Values
Individual Card Values
Example Hand
5♦ 7♥ 8♥ Q♥ Q♠ Q♠ 4♣ 5♣ 9♣ Q♣ K♣ K♣
The above is a possible Hand that might be dealt at the start of a round.
This hand qualifies for "start play", as it is round 1 and the required point total to "start play" in round 1 is 50. There are already two possible melds forming in this hand: a possible meld of Kings and a possible meld of Queens.
Strategy & Tips
- Strategy for Hand and Foot should include clever discards, and taking the pile at key times.
- Taking the pile adds 7 cards, so only use this tactic when you are behind in points but ahead in number of cards.
- Clean melds are worth many more points, but are harder to come by. If there is an opportunity to make a dirty meld, it may be advantageous to take it. There is always the possibility that the perfect card doesn't come up in the stock.
Frequently Asked Questions
How many cards do you deal in Hand and Foot?
Each player is dealt two different sets of 11 cards. The first deal, and the cards which must be played first, is the Hand. The second set, left face down until the hand has been totally melded or discarded, is the Foot.
Can you play Hand and Foot with three players?
Yes. However, Hand and Foot is traditionally played with five or more players, due to the number of cards in a six-deck shoe (including jokers). If you are planning on playing Hand and Foot with fewer players, consider using fewer decks, perhaps only three or four instead of the usual five or six.
How much is a red 3 worth in Hand and Foot?
In Hand and Foot, the red (♦♥) 3 is worth five points when calculating the starting point value during the “in the game” phase.
How many points do you need to win?
The round (or game if only playing a single round) ends when the first Player discards their last card and empties both their Hand and Foot cards. Players then calculate their scores both for their melds, and the face values of the cards played.
The Players who have not discarded all of their cards must also subtract the point values of the cards still in their hand from their total meld and point score. In order to win, you must have more points than your opponents once the round ends.
What is the difference between Hand and Foot and Canasta?
There are several key differences between Hand and Foot and Canasta, most significantly the mechanics in Canasta about “freezing” the discard pile, and the specific rules around play etiquette, particularly with one’s partner. | https://www.coololdgames.com/card-games/rummy/canasta/hand-and-foot/ |
An asteroid about the size of a car will pass close by Earth Wednesday (Feb. 9), the second space rock in five days to fly near – but pose no threat of hitting – our planet.
The asteroid is called 2011 CA7 and will fly within 64,300 miles (103,480 kilometers) of Earth tomorrow, according to an alert from NASA's Asteroid Watch program. It is about 9 1/2 feet across (nearly 3 meters) and was discovered by astronomers earlier this month.
The asteroid will make its closest pass by Earth at around 2:25 p.m. EST (1925 GMT), according to the small-body database overseen by NASA's Jet Propulsion Laboratory in Pasadena, Calif.
On Feb. 4, the asteroid 2011 CQ1 sailed within 3,400 miles (5,471 km) of Earth during its brief encounter. That asteroid was about 4 feet (1.3 meters) wide, less than half the size of 2011 CA7.
For comparison, the distance between Earth and the moon is about 238,900 miles (384,402 km).
Like Friday's asteroid flyby, 2011 CA7 poses no threat of impacting Earth. Even if it did enter Earth's atmosphere, the space rock would never survive the fiery trip to the surface. It's so small, it would likely break apart or incinerate on the way down.
Scientists say asteroids the size of 2011 CA7 and 2011 CQ1 regularly fly close by Earth, but their small size makes them hard to spot.
"It's predicted that tiny space rocks pass between Earth and moon almost daily but are too small to be detected and pose very little threat," NASA's asteroid-tracking team wrote in a Twitter message Monday (Feb. 7).
NASA and other dedicated astronomers routinely track asteroids and comets as part of a near-Earth observation program designed to seek out potentially hazardous objects that could pose an impact risk to Earth.
Potentially hazardous asteroids are space rocks about 490 feet (nearly 150 meters) wide or larger that fly too close to Earth for comfort, NASA officials have said. The space agency's Near-Earth Object office at JPL coordinates efforts to detect, track and characterize the threats posed by asteroids and comets. | |
Concealed amidst lush forests of rhododendron and oak at an altitude of 6,959 ft is a serene hamlet called Lepchajagat. Located 19 km from Darjeeling, the Queen of Hills, Lepchajagat is a hidden jewel free from the tourist rush. As the name suggests, it was once a hamlet of the Lepcha tribe; the word 'Jagat' means 'the world', so it was the world of the Lepchas. It was subsequently taken over by the West Bengal Forest Development Corporation (WBFDC) and is now a reserved forest area.
Lepchajagat, with its serene ambiance and picturesque view of Mt. Kanchenjunga, has bewitched many tourists traveling in Darjeeling. Enclosed by rich forest, far from the maddening crowd and the hassle of daily life, it has become a safe haven for tourists seeking solitude amidst wildlife. Lepchajagat is a sparsely populated area with no big market place, so it would not be wrong to call it a secluded hamlet of Darjeeling District. Located on the road connecting Ghoom to Sukhiapokhari, traveling to Lepchajagat is easier than one might imagine.

Lepchajagat offers its visitors an astounding view of the snow-capped peaks of the Himalayan range, a view that has made it famous among tourists and botanical enthusiasts. Numerous trekking trails run in and around the quaint hamlet through dense forests of rhododendron and oak, where tourists can relish the beauty of Himalayan flora on a walk. Bird watching is another activity travelers can take part in while visiting. The only sounds to be heard are the melodious chirping of exquisite Himalayan birds during the day and, in the evening, the calls of small insects such as crickets. Witnessing the sunrise over Mt. Kanchenjunga, the third highest mountain peak, from the top of Lepchajagat can be a phenomenal experience for any nature lover: the sky changes from a dark shade of blue to a lighter one as the sun slowly rises above the horizon, revealing the mountainous valley and glazing the mountains with an orange hue until they shimmer like precious jewels.
Find inner peace rather than getting lost in the crowded city. Overlooking the snow-capped mountains above lush forests of rhododendron and oak, Lepchajagat is an ideal destination far from rambunctious city life. Set in a secluded area, with its rich biodiversity and untainted air, it is a place where you can spend quality time with your loved ones while enjoying the wonderful gifts of Mother Nature.
The best time to visit Lepchajagat is during the summer season: when the metro cities become hot ovens, Lepchajagat, with its pleasant weather and cool temperatures, becomes an ideal refuge. Winters are quite cold and chilly due to the high altitude, but if you want an unhindered view of the Himalayan range, winter is the best time to visit; just remember to carry all your winter wear if you are visiting then.
Why Teamwork Is Important In A Work Place
Effective teamwork in the workplace is important for many reasons, but one of the most important is achieving success. When a team works together effectively, a successful outcome of a high-quality standard is far more likely. Working together as a team means different people offering creative ideas and solutions to problems. A team that works well together is also able to encourage one another as they work through their tasks and efficiently meet their goals.
3 ELEMENTS OF EFFECTIVE TEAMWORK
COMMUNICATION
When working with a group, it’s very important to communicate. When you have good communication, you have the ability for open discussions amongst each other to be on the same page to ensure your work product is completed and as efficient as possible. Effective communication allows a team to bring together different ideas and creativity to achieve more together.
RESPONSIBILITIES
Each team member is assigned specific responsibilities to complete in pursuit of the same goal. Assigning responsibilities to the right person is significant, as each team member has different strengths. Team members have different levels of experience and can help in creating the optimal solution. Learning from people's strengths can enhance others' knowledge and also increase their confidence.
SUPPORT
A support network can create a sense of belonging that can contribute greatly to job satisfaction. Team members will help and rely on each other and build trust within each other. During challenging or stressful times at work, support is vital for a successful outcome. Look to team members for guidance and support so the focus doesn’t deviate from the overall goal.
The best results come from working effectively as a team: a positive team spirit, increased productivity and a better work product. All in all, working as a team will inevitably translate into success.
1. Field Of Invention
The present invention relates generally to a sonication method of encapsulating a hyperbaric gas for use in treating atherosclosis, infections and neoplasms, and for providing systemic oxygenation of tissues.
2. Related Art Statement
Most living organisms require oxygen to maintain homeostasis and viability. Tissues in man and other mammals are oxygenated by virtue of the dissolution and binding of oxygen in blood within capillaries of the lung after diffusion of oxygen across thin alveolar membranes of the lung. The quantity of oxygen bound to hemoglobin and, to a lesser extent, dissolved within serum is usually adequate to maintain an optimal level of oxygenation of all tissues by diffusion of oxygen from blood capillaries to tissue. Although the rate of diffusion of oxygen through soft tissues is actually quite slow, the intercapillary distance is usually small, so that only very short diffusional distances are required, For some tissues, however, the diffusional distances for oxygen are large, and local tissue hypoxia results. The lack of an optimal supply of oxygen interferes with local tissue homeostasis, and pathologic tissue growth is initiated and/or promoted.
Efforts have been made to improve blood oxygenation by inspiration of oxygen at a higher than normal oxygen concentration in air. These efforts have not been satisfactory because: 1) prolonged inspiration of oxygen at a high partial pressure produces lung toxicity, and 2) blood is nearly saturated with oxygen during ordinary air breathing--accordingly, an increase in the inspired oxygen concentration above that in air does little to increase the content of oxygen within blood.
One approach to problems of improving blood oxygenation would be to encapsulate oxygen under pressure in a manner which allows parenteral injection of oxygen. Gas-containing capsules have been prepared from a variety of substances, including glass. Methods to make such glass particles are known. By way of example, one method is disclosed in U.S. Pat. No. 3,972,721 to Hammel et al, entitled "Thermally Stable and Crush Resistant Microporous Glass Catalyst Supports and Methods of Making", the relevant teachings of which are incorporated herein by reference. The technology presently exists for the manufacture of hollow glass microballoons as small as two microns. For example, FTF-15 glass microballoons can be purchased from Emerson and Cumming of W.R. Grace, Inc. Thus, it is feasible to make hyperbaric gas-filled glass microballoons sufficiently small to pass through all capillaries of the body (approximately 5 microns in diameter) without entrapment following intravenous injection of a suspension of the glass shells. However, only low molecular weight gases such as helium can permeate the glass shells during heating of the latter under hyperbaric gas conditions, so that the gas will be trapped within the microballoons upon subsequent cooling of the glass. Since the permeability of higher molecular weight gases through glass even at elevated temperatures is quite low, a sufficient quantity of oxygen cannot be entrapped.
One method for forming fine glass foam is disclosed in U.S. Pat. No. 4,332,907 to Vieli entitled "Granulated Foamed Glass and Process for the Production Thereof", filed Oct. 4, 1979, the relevant teachings of which are incorporated herein by reference. Another method is disclosed in U.S. Pat. No. 3,963,503, entitled "Method of Making Glass Products, Novel Glass Mix and Novel Glass Product", the relevant teachings of which are also incorporated herein by reference. U.S. Pat. No. 4,347,326 to Iwami et al entitled "Foamable Glass Composition and Glass Foam", filed Aug. 31, 1982, the relevant disclosure of which is also incorporated herein by reference, also teaches a method for making a glass foam. See also, U.S. Pat. No. 4,332,908 to Vieli filed Nov. 27, 1979 entitled "Foamed Granular Glass", and U.S. Pat. No. 4,104,074 entitled "Pulverulent Borosilicate Composition and a Method of Making a Cellular Borosilicate Body Therefrom", the relevant teachings of which patents are also incorporated herein by reference.
However, none of those methods is capable of viably producing microbubbles that are small enough to permit injection and that contain gases at the sufficiently high pressures critical to the process disclosed below.
Accordingly, it is an object of the present invention to provide a method for encapsulating hyperbaric oxygen in order to treat diseases associated with hypoxia of tissues.
It is also an object of the present invention to provide products made by the disclosed process.
| |
Since the origin of life on this planet, various organisms have evolved and dominated the Earth during the various periods of the geological time chart. This has been established by evidence obtained from the discovery and study of fossils, which allows biologists to place organisms in a time sequence. As geological time passes and new layers of sediment are laid down, the older organisms should lie in the deeper layers, provided the sequence of the layers has not been disturbed.
In addition, it is possible to date rocks by comparing the amounts of certain radioactive isotopes they contain. The older sediment layers contain less of these specific radioactive isotopes than the younger layers. A comparison of the layers gives an indication of the relative age of the fossils found in the rocks. Therefore, the fossils found in the same layer must have been alive during the same geological period.
You can have an idea about the temporal distribution of various forms of life, both plants and animals, in the various geological periods (Fig. 1.3).
Phyletic Lineage Of Time
When we look at biodiversity (the number and variety of species in a place), we find that there are nearly 2,500,000 species of organisms currently known to science. More than half of these are insects (53.1%) and another 17.6% are vascular plants. Animals other than insects account for 19.9% of species, and 9.4% are fungi, algae, protozoa, and various prokaryotes.
This list is far from being complete. Various careful estimates put the total number of species between 5 and 30 millions. Out of these only 2.5 million species have been identified so far.
The life of today has come into existence through phyletic lineages, or evolving populations of organisms living in the remote past. Evolutionary change often produces new species and thereby increases biodiversity. A phyletic lineage is an unbroken series of species arranged in ancestor-to-descendant sequence, with each later species having evolved from one that immediately preceded it. If we had a complete record of the history of life on this planet, every lineage would extend back in time to the common origin of all early life. We lack that record because many soft-bodied organisms of the past have not left a preserved record as fossils.
Biological Method:
Science is systematized knowledge. Like other sciences, the biological sciences also have a set methodology, based on experimental inquiry. It always begins with chance observation. Observations are made with the five senses, viz. vision, hearing, smell, taste and touch, depending upon their functional ability. Observations can be both qualitative and quantitative. Quantitative observations are more accurate than qualitative ones, since in the former the variables are measurable and are recorded in terms of numbers. An observer organizes observations into data form and gives a statement based on experience and background knowledge of the event. This statement is the hypothesis, a tentative explanation of the observations.
At this stage you should look at the ways of devising a hypothesis. There are two ways of formulating a hypothesis: it can be the result of deductive reasoning, or it can be the consequence of inductive reasoning.
Deductive reasoning moves from the general to the specific. It involves drawing a specific conclusion from some general principle or assumption. The deductive logic of "if ……. then" is frequently used to frame testable hypotheses. For example, if we accept that all birds have wings (premise #1), and that sparrows are birds (premise #2), then we conclude that sparrows have wings. If all green plants require sunlight for photosynthesis, then any green plant placed in the dark would not synthesize glucose, the end product of photosynthesis. The other way of reasoning used in the formulation of hypotheses is inductive reasoning, which is reasoning from the specific to the general. It begins with specific observations and leads to the formation of a general principle. For instance, if we know that sparrows have wings and are birds, and we know that eagles, parrots, hawks and crows are birds, then we induce (draw the conclusion) that all birds have wings. Science also, therefore, uses inductive methods to generalize from specific events.
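The deductive "if … then" pattern above can be sketched in code. This is only an illustrative model: representing the premises as simple sets is an assumption made here, not something from the text.

```python
# Premise 1 (general): all members of the category "birds" have wings.
# Premise 2: these specific animals are birds.
birds = {"sparrow", "eagle", "parrot", "hawk", "crow"}

def has_wings(animal, winged_categories={"birds"}):
    """Deduce: if the animal is a bird, and all birds have wings, it has wings."""
    return animal in birds and "birds" in winged_categories

print(has_wings("sparrow"))  # True -- the specific conclusion follows
```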
In fact sometimes scientists also use other ways to form a hypothesis, which may include
- Intuition or imagination
- Aesthetic preference
- Religious or philosophical ideas
- Comparison and analogy with other processes
- Discovery of one thing while looking for some other thing.
These ways can also sometimes form the basis for a scientific hypothesis. Hypotheses, as you already know, are subjected to rigorous testing.
Repeated exposure of a hypothesis to possible falsification increases scientists' confidence in the hypothesis when it is not falsified. Any hypothesis that is tested again and again without ever being falsified is considered well supported and is generally accepted. This may be used as the basis for formulating further hypotheses. There is soon a series of hypotheses supported by the results of many tests, which is then called a theory. A good theory is predictive and has explanatory power. One of the most important features of a good theory is that it may suggest new and different hypotheses. A theory of this kind is called productive.
However, even in the case of a productive theory the testing goes on. In fact many scientists take it as a challenge and exert even greater efforts to disprove the theory. If a theory survives this skeptical approach and continues to be supported by experimental evidence, it becomes a scientific law. A scientific law is a uniform or constant fact of nature; it is virtually an irrefutable theory. Biology is short on laws because of the elusive nature of life.
Examples of biological laws are Hardy-Weinberg law and Mendel’s laws of inheritance. You will learn about them in later chapters. You can see that laws are even more general than theories and afford answers to even more complex questions, therefore there are relatively a few laws in biology. | http://askzaib.com/tag/download-notes/ |
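As a concrete illustration of one of the laws named above, the Hardy-Weinberg principle predicts genotype frequencies from allele frequencies: p² for AA, 2pq for Aa, and q² for aa, with p + q = 1. The allele frequency 0.6 below is an arbitrary example value, not one from the text.

```python
def hardy_weinberg(p):
    """Expected genotype frequencies (AA, Aa, aa) for allele frequency p."""
    q = 1.0 - p
    return p * p, 2 * p * q, q * q

aa, het, rec = hardy_weinberg(0.6)
print(round(aa, 2), round(het, 2), round(rec, 2))  # 0.36 0.48 0.16
print(abs(aa + het + rec - 1.0) < 1e-9)  # the three frequencies sum to 1
```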
Background: In animal studies tumor size is used to assess responses to anticancer therapy. Current standard for volumetric measurement of xenografted tumors is by external caliper, a method often affected by error. The aim of the present study was to evaluate if microCT gives more accurate and reproducible measures of tumor size in mice compared with caliper measurements. Furthermore, we evaluated the accuracy of tumor volume determined from 18F-fluorodeoxyglucose (18F-FDG) PET.
Methods: Subcutaneously implanted human breast adenocarcinoma cells in NMRI nude mice served as the tumor model. Tumor volume (n = 20) was determined in vivo by external caliper, microCT and 18F-FDG-PET, and subsequently the reference volume was determined ex vivo. Intra-observer reproducibility of the microCT and caliper methods was determined by acquiring 10 repeated volume measurements. Volumes of a group of tumors (n = 10) were determined independently by two observers to assess inter-observer variation.
Results: Tumor volume measured by microCT, PET and caliper all correlated with reference volume. No significant bias of microCT measurements compared with the reference was found, whereas both PET and caliper had systematic bias compared to reference volume. Coefficients of variation for intra-observer variation were 7% and 14% for microCT and caliper measurements, respectively. Regression coefficients between observers were 0.97 for microCT and 0.91 for caliper measurements.
Conclusion: MicroCT was more accurate than both caliper and 18F-FDG-PET for in vivo volumetric measurements of subcutaneous tumors in mice. 18F-FDG-PET was considered unsuitable for determination of tumor size. External caliper measurements were inaccurate and encumbered with a significant and size-dependent bias. MicroCT was also the most reproducible of the methods.
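The intra-observer coefficients of variation reported above (7% for microCT, 14% for caliper) are the standard deviation of repeated measurements divided by their mean. A minimal sketch, using made-up volume measurements rather than the study's data:

```python
import statistics

def coefficient_of_variation(measurements):
    """CV = sample standard deviation divided by the mean."""
    return statistics.stdev(measurements) / statistics.mean(measurements)

# Hypothetical 10 repeated volume measurements (mm^3) of one tumor
volumes = [102, 98, 105, 97, 101, 99, 103, 100, 96, 104]
cv = coefficient_of_variation(volumes)
print(f"CV = {100 * cv:.1f}%")  # around 3% for these made-up values
```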
A pentagon is a geometrical shape, which has five sides and five angles. Here, “Penta” denotes five and “gon” denotes angle. The pentagon is one of the types of polygons. The sum of all the interior angles for a regular pentagon is 540 degrees. If a pentagon is regular, then all the sides are equal in length, and five angles are of equal measures. If the pentagon does not have equal side length and angle measure, then it is known as an irregular pentagon. If all the vertices of a pentagon are pointing outwards, it is known as a convex pentagon. If a pentagon has at least one vertex pointing inside, then the pentagon is known as a concave pentagon.
The Area of Concave Pentagon calculator uses area = (3/4)*(Side^2) to calculate the Area. The Area of Concave Pentagon formula is defined as the measure of the total area that the surface of the object occupies for a concave pentagon, where a = concave regular edge and A = Area of the concave pentagon. Area is denoted by the symbol A.
How to calculate Area of Concave Pentagon using this online calculator? To use this online calculator for Area of Concave Pentagon, enter Side (S) and hit the calculate button. Here is how the Area of Concave Pentagon calculation can be explained with given input values -> 60.75 = (3/4)*(9^2). | https://www.calculatoratoz.com/en/area-of-concave-pentagon-calculator/Calc-23484 |
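The formula and the worked example above (side = 9 giving 60.75) can be reproduced directly, together with the 540-degree interior-angle sum stated earlier for a pentagon:

```python
def concave_pentagon_area(side):
    """Area of the concave pentagon as defined here: A = (3/4) * side^2."""
    return 0.75 * side ** 2

print(concave_pentagon_area(9))  # 60.75, matching the example calculation

# Sum of interior angles of an n-sided polygon: (n - 2) * 180 degrees.
print((5 - 2) * 180)  # 540 for a pentagon
```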
On Saturday from 10 a.m. to 2 p.m., the Darien Health Department, in conjunction with the Darien Police Department and the Drug Enforcement Administration, will give the public an opportunity to prevent pill abuse and theft by ridding their homes of expired, unused, and unwanted prescription drugs.
Residents can bring medications for disposal to the Darien transfer station on Ledge Road. The service is free and anonymous, no questions asked.
Last September, Americans turned in 242,000 pounds -- 121 tons -- of prescription drugs at nearly 4,100 sites operated by the DEA and more than 3,000 state and local law enforcement partners, Darien Health Department Director David A. Knauf said in a press release.
The initiative addresses a vital public safety and public health issue. Medicines that languish in home cabinets are highly susceptible to diversion, misuse, and abuse, Knauf said.
Rates of prescription drug abuse in the United States are alarmingly high, as are the number of accidental poisonings and overdoses due to these drugs. Studies show that a majority of abused prescription drugs are obtained from family and friends, including from the home medicine cabinet, Knauf said.
In addition, Americans are now advised that their usual methods for disposing of unused medicines -- flushing them down the toilet or throwing them in the trash -- both pose potential safety and health hazards, he said.
- Leave the meds in the original containers.
- Block out any personal identifying information.
- Never flush meds down the toilet.
- Items such as needles or hazardous materials will not be accepted.
Abstract: Quantum ergodicity, which expresses the semiclassical convergence of almost all expectation values of observables in eigenstates of the quantum Hamiltonian to the corresponding classical microcanonical average, is proven for non-relativistic quantum particles with spin 1/2. It is shown that quantum ergodicity holds, if a suitable combination of the classical translational dynamics and the spin dynamics along the trajectories of the translational motion is ergodic.
Submission history
From: Jens Bolte
[v1] Fri, 7 Apr 2000 16:27:07 UTC (20 KB)
Washington, D.C., January 31, 2011 – The National Pork Producers Council expressed support for federal dietary guidelines released today whose goals are to reduce obesity, encourage the consumption of nutrient-rich foods and increase physical activity. Many cuts of pork, the organization pointed out, are lean, nutrient-dense sources of protein.
NPPC recognizes for food policy and nutrition guidance the importance of the 2010 Dietary Guidelines, which were issued by the U.S. departments of Agriculture (USDA) and Health and Human Services (HHS).
“NPPC agrees with the guidelines’ call for eating nutrient-dense foods, and many cuts of lean pork, including tenderloin and loin chops, contain quality nutrients,” said NPPC President Sam Carney, a pork producer from Adair, Iowa.
Lean meat offers nutrients that often are lacking in Americans, including heme iron, potassium and vitamin B-12, a micronutrient not found in plant-based foods. Based on current consumption data from the HHS National Health and Nutrition Examination Survey, Americans on an average 2,000 calorie-a-day diet consume 5.3 ounces of meat or meat equivalents. The USDA “Food Pyramid” suggests two to three servings of 2- to 3-ounce portions of meat, poultry or fish, meaning from 4 to 9 ounces a day.
“The solution to the obesity problem is not a shift from animal-based foods to plant-based ones but rather a shift from nutrient-poor foods to nutrient-rich foods, emphasizing the consumption of lean meats, including pork, along with vegetables, nuts and beans,” Carney said.
# # #
NPPC is the global voice for the U.S. pork industry, protecting the livelihoods of America’s 67,000 pork producers, who abide by ethical principles in caring for their animals, in protecting the environment and public health and in providing safe, wholesome, nutritious pork products to consumers worldwide. For more information, visit www.nppc.org.
While in earlier times the authoritarian style of parenting was dominant in many families, nowadays there are a variety of different parenting concepts. These include, for example, anti-authoritarian parenting, also known as laissez-faire parenting style, and authoritative parenting style. The latter forms a middle ground between the two concepts.
But what are the advantages of the authoritative parenting style and how can you implement it in everyday life? We would like to explain this to you in the following article.
The authoritative style of education is a democratic style of education; however, it is not anti-authoritarian education. The distinction from the latter concept is that there are clear rules and limits. If these are not observed, the child must expect appropriate consequences.
However, unlike the authoritarian style of parenting, the authoritative style is characterized by warmth, caring and appreciation of the child's personality. The family rules are not arbitrarily dictated, but discussed and explained together. The same applies to the consequences. The child feels like a full member of the family whose needs and views are taken into account.
In addition, the children are given a certain amount of freedom to act and make decisions, an element that draws on anti-authoritarian education. The child's freedoms are based on his or her age and are continuously adjusted. The parents guide the upbringing through respectfully formulated rules and expectations.
The authoritative parenting style is characterized by the following six features:
As well as authoritarian and anti-authoritarian education, the authoritative style of education also brings with it various advantages and disadvantages. We would like to take a closer look at these in the following.
The authoritative parenting style offers numerous advantages:
Independence and sense of responsibility
Children who are raised authoritatively are often more independent and responsible than their peers who enjoy an authoritarian or anti-authoritarian upbringing. This is due to the fact that the children not only have a certain amount of free scope for decision-making, but also have to deal with the consequences of wrongly made decisions.
High self esteem
Since parents regard their children as full personalities and take their opinions and wishes into account, children develop strong self-esteem, which is also an advantage later on in working life.
Ability to work in a team
As a result of the responsiveness (willingness to agree) practiced in the family, authoritatively raised children usually have no problems integrating into social structures. They treat their fellow human beings with tact, empathy and respect, as they have been taught this by their parents. Furthermore, they are extremely willing to compromise.
Trust
Authoritatively raised children experience a secure bond with their parents, making it easy for them to build trust with others later in life. The chance of having equal partnerships and friendships is high.
Flexibility
The authoritative style of education is neither characterized by rigid rules like authoritarian education, nor is it an almost rule-free concept like anti-authoritarian education. Authoritative education is characterized by flexibility, which makes it possible to always readjust the applicable rules as needed. In this way, the child receives age- and situation-appropriate guidance from the parents.
Despite all the advantages, we would like to mention the possible risks of the authoritative parenting style:
Free development questionable
The concept of free development, which is aimed at by authoritative education, is considered questionable by critics. In practice, it often happens that parents do not let their children decide freely after all, in order to protect them from foreseeable negative consequences.
Consequences too harsh
An essential feature of authoritative parenting is that any misconduct results in an immediate consequence. If parents take this too closely, disproportionate reprimands can occur under certain circumstances.
In theory, the concept of authoritative parenting sounds right to you, but you're wondering how to make it work in your everyday life? The following six tips should help you:
Call a family council where you all sit down together and discuss the rules for your harmonious life together. Both the adults and the children may express their wishes and expectations. It is best to put your rules in writing. It is advisable to create a poster with the family rules and hang it up in the home.
As a parent, you have a responsibility to explain to your child the difference between right and wrong. A child is testing his or her limits and cannot always know what behavior will cause harm to others. Therefore, you should clearly articulate what behaviors are unacceptable (e.g., hitting, kicking, biting, insulting).
For your child to be able to make decisions freely, he or she needs to know what consequences to expect. However, make sure that the punishments are proportionate and ideally related to the offense. For example, if your child pushes another child off the swing in the playground, the logical consequence would be that you leave the playground.
Your child depends on your emotional care to develop in a healthy way. Show your child that he or she is important to you and that you love them unconditionally. Never punish your child by withdrawing love, no matter what he or she has done.
Your child is constantly developing. For this reason, it is necessary that you adjust your parenting style accordingly. An older child should gradually be given more and more responsibility and freedom. It is also important to regularly reflect on the consequences to see if there is a need for change.
To learn to make good choices, your child needs ample opportunities to do so. Offer your child choices as often as possible: From choosing what to wear to whether to do homework right after school or late in the afternoon.
The authoritarian parenting style is characterized by strictness, high parental expectations, and little praise. The views and wishes of the children are subordinated to those of the adults. There is no democratic consultation, and the parents have the sole authority to make decisions. Almost until the middle of the 20th century, authoritarian education was common in many families.
Today we know that the authoritarian style of upbringing can have a negative impact on child development. Creativity and individuality are hardly promoted. The child's emotional needs are also only inadequately satisfied or not satisfied at all, because the parents do not act as loving caregivers but as strict authorities.
As a result, mental illnesses such as anxiety disorders, obsessive-compulsive disorders or paranoia can develop. Aggressive behavior is also observed more frequently in children raised in an authoritarian manner, as scientific research on the subject has shown.
Anti-authoritarian education was designed in the 1960s as a counterpart to authoritarian education. Anti-authoritarian education gives children a great deal of freedom to make their own decisions. There are no fixed rules of behavior. If an important decision is pending, parents make suggestions and proposals to their children. For this reason, anti-authoritarian education is not entirely uncontroversial.
Critics note that anti-authoritarian education fosters a lack of a sense of duty, since children must always do only those things that they enjoy. The latter, however, is alien to life in the adult world. Anti-authoritarian education should therefore be well considered.
The authoritative parenting style offers a middle ground between authoritarian and anti-authoritarian parenting. As is so often the case, the golden mean between two extremes proves to be the most successful method of achieving the desired goal: in this case, raising children to become strong and self-confident adults.
Since authoritatively raised children have to follow rules but are nevertheless granted a high degree of decision-making freedom and personal responsibility, they stand out in later life for their self-confidence and sense of responsibility. This is a great advantage in both private and professional matters.
Let's take up the aspect of professional advantages in a little more detail: Children who have been brought up authoritatively are best able to fit into existing hierarchies. In their childhood, they have learned to respect authority and rules and to behave accordingly. They are therefore rarely conspicuous for their rebellious behavior. The latter often suggests an anti-authoritarian upbringing.
However, children who have been raised in an authoritative manner are quite capable of questioning instructions and defending their own point of view. In doing so, however, they proceed with tact and respect.
Authoritative children are also open to the views of their peers, which makes them perfect team players. They are always on the lookout for constructive solutions and can work out compromises that are acceptable to both parties. The teamwork aspect is a basic requirement in many professions.
It has long been scientifically proven that childhood experiences have an influence on mental health and shape later behavior accordingly. Often, depression or anxiety disorders have their origins in childhood.
Researchers at Friedrich-Alexander University Erlangen-Nuremberg in Germany have now shown in a study that the suicide rate among authoritatively raised children is significantly lower than in the general population.
The authoritative parenting style brings numerous advantages for both parents and children. This becomes especially clear when you compare authoritarian and anti-authoritarian parenting. Provided you implement the parenting style correctly, your children have a very good chance of growing up to be self-confident, empathetic and successful adults.
Of course, you will have to avoid some stumbling blocks along the way. The authoritative parenting style demands an immediate consequence after misbehavior, which is not always easy to implement in everyday life. If it concerns small things, it is recommended that you first of all relax and rethink the situation. Sometimes rules also need to be adapted.
In summary, it can be said that a rigid educational concept rarely leads to success and can hardly be implemented consistently. The guidelines of authoritative parenting, coupled with a little composure, can offer you a good orientation for accompanying your child in the best possible way on the road to independence.
Imperial Chemical Industries Ltd v Merit Merrell Technology Ltd – Who would be an expert?
by Mark Woodward-Smith, Group Managing Director -
Introduction
Merit Merrell Technology Ltd (MMT) were employed by ICI on a new paint making plant for the manufacture, supply, installation and commissioning of steelwork, together with offloading and positioning free-issue equipment. An NEC3 contract was entered into for £1.9m. However, significant change occurred, and by 2018 MMT had been paid in excess of £20m by ICI for work undertaken between late 2012 and early 2015. Matters on the project were being resolved amicably until mid-2014, when ICI’s parent company, Akzo Nobel, sent its Director of Engineering Excellence, Henk Boerboom, to reduce the cost of the project in line with Akzo Nobel’s expectations.
At this time Akzo Nobel effectively took over administration of the contract leading to the resignation of the independent project manager. There were subsequently four adjudications and multiple hearings at the Technology and Construction Court. The last two court cases related to liability and quantum and were heard by Mr Justice Fraser where ICI received severe criticism from the Judge in respect of the approach adopted by both their witnesses of fact and their expert witnesses.
Witness of fact
During the previous judgement on liability in 2017, Mr Justice Fraser found substantially in favour of MMT and determined that:
- The project manager resigned due to interference in the administration of the contract by Mr Boerboom
- Mr Boerboom’s assumption of the role of the project manager was invalid
- Mr Boerboom and Akzo Nobel had decided on a course of action designed to force MMT into insolvency and limit ICI and Akzo Nobel financial exposure
- ICI had repudiated the contract
- There were also various other findings that were highly critical of ICI and their approach
Despite this, in the quantum case heard in 2018, ICI (and Mr Boerboom in particular) ignored the earlier findings where it suited them. In his 2018 judgement, Mr Justice Fraser included a particularly scathing appraisal of Mr Boerboom’s honesty and approach: Mr Boerboom wilfully ignored the findings from the previous judgement; displayed an “extraordinarily selective memory…” and was “simply pretending not to have any knowledge” of certain matters; that his evidence was “not remotely accurate, … wholly disingenuous, … positively misleading, and … contrary to the text of a vast number of contemporary emails”; that the judge had reached the point where he “..would not accept anything Mr Boerboom says …”; that the “… attitude by Mr Boerboom to the facts to be reprehensible”; he also concluded that Mr Boerboom’s evidence “… bears remarkably little, if any, resemblance to the truth…” and finally “.. should there be any room for doubt … Mr Boerboom is a wholly unreliable witness”. On the whole, it is difficult to understand how Mr Justice Fraser’s opinion of the evidence provided by ICI’s key witness of fact could have been any worse.
Expert witnesses
Mr Justice Fraser's criticism of witnesses was not limited to Mr Boerboom; he also had concerns about the quality of the expert evidence, the usefulness of the joint statements, the (unrequested) Scott Schedule and the fact that the court had been left to determine the correct valuation.
Again, ICI and their ‘independent’ experts were on the receiving end of particular criticism.
ICI’s experts were not considered to be independent and their quantum expert particularly was heavily criticised. In an attempt to justify ICI’s stance that MMT had been overpaid by £10m ICI’s expert had to open up all of the agreements that had been contemporaneously reached with the independent Project Manager as well as the sums that had been certified on an interim basis. Whilst the court accepted that in principle it was possible to open up these sums in order to determine their true value (there was an express term allowing this) the onus was on those challenging the values to demonstrate that these were not appropriate. Whilst previous agreements and values might not be cast in stone they were contemporaneous assessments that were independently arrived at by those familiar with the work and would consequently attract “powerful evidential weight”.
ICI’s expert, Mr Kitt from Arcadis, totally ignored these values and instead took a position on what was an issue of fact and law in order to depress the value to suit ICI. The judge made it clear that the question of which facts were to be preferred was a matter for the court to decide and Mr Kitt had gone beyond his remit as an independent expert and had adopted a highly partisan approach in order to support ICI’s position. Mr Kitt also incorrectly ignored a schedule of rates that were contained in the contract that should have been used to value change. Instead of adopting this schedule, or even considering the possibility that this might be preferred by the court, he argued that the works should be valued based upon cost – an approach that was not provided for in the Contract. In what the judge described as a “wholly artificial and contrived” argument, Mr Kitt attempted to portray the schedule of rates as ‘Daywork Rates’ that would result in MMT receiving a windfall. The judge found this to be inexplicable and demonstrated an approach of working towards a desired result rather than allowing the correct value to emerge from the application of due process. The inconvenient fact was that adopting the schedule of rates would not have given the answer ICI wanted and hence these rates were ignored!
The judge found Mr Kitt’s approach as an independent expert was “wholly unsatisfactory”, and his further comments included:
- Despite saying he could not support either party’s valuation he then concluded that he could “do no better than to include ICI’s valuation”
- He chose to entirely ignore agreements that had been reached and took a position on an issue of fact and law that is not within the sphere of an expert witness
- He provided evidence that an independent expert, complying with their duty to the court, should not be giving
- He had not prepared his evidence with sufficient attention to his duty to the court as an independent expert
- He made witness statements in support of ICI’s contention that certain documents were necessary to enable him to perform his expert function. The judge expressed surprise at this and concluded that the requested information was far more extensive than what could reasonably be required and that would ordinarily be available
- He was ignorant of a report that vindicated MMT’s costs and sought to repeat the exercise afresh
- Most importantly, he failed to grasp that he was required to provide an independent valuation and not argue for ICI or “adopt points in a partisan fashion”
To summarise, Mr Kitt had not satisfied his duty to the court and actively sought to favour ICI instead of providing impartial assistance. The judge found a remarkable contrast between the approach adopted by the respective quantum experts and preferred the evidence of MMT’s expert in all respects.
Similarly, the biased approach adopted by ICI’s accountancy expert was also questionable. ICI’s expert, Mr Thompson, was found to have made a statement that he must have known to be incorrect, namely that MMT’s expert, Mrs Baker, “agreed that further information was needed to fulfil her instructions”. Under cross-examination it became clear that he knew that Mrs Baker did not and would not agree with this and the judge stated that “quite how this was then changed by him into an express statement … that she did agree with him is unclear, remarkable, highly regrettable, and simply a demonstration of the further lack of credibility of his evidence”.
Additionally, statements demonstrating Mr Thompson taking a partisan stance on matters of fact were evident throughout his report and he also ignored or contradicted the previous judgment on liability. It was also noted that Mr Thompson had previously been the subject of judicial criticism for presenting a one-sided picture and favouring one party over another and that this had not led to him modifying his approach.
As with quantum, the judge concluded that wherever the accountancy experts differed he preferred the evidence of MMT’s expert in all instances.
The judge also noted that there was “such a preponderance of partisan experts, all called by the same party” and that “if it is a coincidence, it is a remarkable one”. The implication of this is that the experts had allowed themselves to be unduly influenced by ICI and the resultant lack of impartiality was contrary to the overriding obligation to the court set out in the Civil Procedure Rules.
Obligations of the expert
The judge made it clear that the principles governing expert evidence should be carefully adhered to and that any necessary guidance should be sought from instructing solicitors. He referred to the duties of expert witnesses that was set out by Mr Justice Cresswell in what is known as The Ikarian Reefer case, namely:
- Expert evidence presented to the Court should be, and should be seen to be, the independent product of the expert uninfluenced as to form or content by the exigencies of litigation (Whitehouse v. Jordan, 1 W.L.R. 246 at p. 256, per Lord Wilberforce).
- An expert witness should provide independent assistance to the Court by way of objective unbiased opinion in relation to matters within his expertise (see Polivitte Ltd. v. Commercial Union Assurance Co. Plc., 1 Lloyd’s Rep. 379 at p. 386 per Mr. Justice Garland and Re J, F.C.R. 193 per Mr. Justice Cazalet). An expert witness in the High Court should never assume the role of an advocate.
- An expert witness should state the facts or assumption upon which his opinion is based. He should not omit to consider material facts which could detract from his concluded opinion (Re J sup.).
- An expert witness should make it clear when a particular question or issue falls outside his expertise.
- If an expert’s opinion is not properly researched because he considers that insufficient data is available, then this must be stated with an indication that the opinion is no more than a provisional one (Re J sup.). In cases where an expert witness who has prepared a report could not assert that the report contained the truth, the whole truth and nothing but the truth without some qualification, that qualification should be stated in the report (Derby & Co. Ltd. and Others v. Weldon and Others, The Times, Nov. 9, 1990 per Lord Justice Staughton).
- If, after exchange of reports, an expert witness changes his view on a material matter having read the other side’s expert’s report or for any other reason, such change of view should be communicated (through legal representatives) to the other side without delay and when appropriate to the Court.
- Where expert evidence refers to photographs, plans, calculations, analyses, measurements, survey reports or other similar documents, these must be provided to the opposite party at the same time as the exchange of reports (see 15.5 of the Guide to Commercial Court Practice).
Mr Justice Fraser then expanded upon this indicating:
- Experts should have equal access to the same material
- Where there are matters of fact that affect the expert’s opinion it is not the place of an independent expert to determine the version of facts that they prefer – this is a matter for the court
- Experts should not take a partisan stance in favour of the party that appointed them
- Experts should seek to narrow the issues in line with CPR 35.12 and adopt a constructive and cooperative process that is governed by the overriding obligation to help the court.
- Where late material arises that requires further consideration then notice of this should be given to their opposite number as soon as possible and only in exceptional circumstances should further reports be produced during the trial.
- No expert should allow the principles in The Ikarian Reefer to be loosened
Takeaways of this case
The courts will give “powerful evidential weight” to contemporaneous interim assessments and agreements and it will only be in exceptional circumstances that these will be opened up and overturned. In such instances the onus will be on the party disputing the contemporaneous position to demonstrate why this position should not be upheld. Experts should consider any such agreements or interim positions together with any other relevant matters.
Experts should consider all of the facts, including those that are not in favour of the party that appointed them, and should arrive at a range of outcomes based upon the various views on the facts – it is not the job of the expert to determine which version of these ‘facts’ are preferred.
Experts should be independent, objective and unbiased and have an overriding obligation to assist the court in line with Part 35 of the Civil Procedure Rules, the associated Practice Directive and the principles set out in The Ikarian Reefer. This duty to the court overrides any obligation to the instructing party or those paying for the expert services.
Failure to comply with the rules and consider relevant evidence will be rightly criticised and may be costly in terms of court time, the outcome of proceedings and reputation.
The discovery of a radio-controlled copter on the White House lawn injects a new complication into the debate over the growing popularity of drones used by civilians.
The unintentional security breach Monday at the U.S. presidential mansion in Washington gives ammunition to those who want to see tight restrictions on who can fly unmanned aircraft and where, said Patrick Egan, a drone advocate. It also raises questions about how the government can even enforce such rules.
Hobbyists, filmmakers and other enthusiasts had been making progress in getting the Federal Aviation Administration to be more permissive about civilian drones. The Obama administration was set to release new privacy standards and was reviewing a proposal to allow drones for commercial purposes such as for sporting events and oil-field inspections. Then one landed on the president’s lawn.
“I think this might chill it,” said Egan, who has lobbied the federal government for broader approval to fly unmanned aircraft. “It definitely shows some holes in the plan.”
The owner of the drone, who wasn’t identified by authorities, called the Secret Service Monday morning and told agents that the small unmanned aerial vehicle, or UAV, was being flown recreationally when it accidentally crossed over the fence that surrounds the White House, Brian Leary, an agency spokesman, said in an e-mail.
Common Sense
“The individual has been interviewed by Secret Service agents and been fully cooperative,” Leary said.
The president, who is traveling, and his family were never in danger, according to the White House.
Such a flight never should have been attempted under U.S. regulations and commonsense safety guidelines, said Michael Drobac, executive director of the Washington-based Small UAV Coalition, which represents companies including Amazon.com Inc. and Google Inc. The incident shouldn’t be used as justification to slow the approvals for commercial drone flights, he said.
His coalition is supporting an education campaign on drone rules along with the FAA and other industry groups.
Wake-up Call
While no one was injured, it should still be a wake-up call for security and military officials, said Randall Larsen, a retired Air Force colonel who served as a department chairman at the National War College in Washington. Security officials have been discussing such risks and are concerned that the new wave of cheaper unmanned devices makes them a potential tool for terrorists.
“It’s an enormous concern,” Larsen said in an interview from Austin, Texas. “You’ve got to remember that a small amount of explosive can do a tremendous amount of damage if delivered at the right spot.”
Such concern has drone advocates fearing that the incident may become another roadblock to regulations allowing wider use of the devices.
“Everything that’s negative — and someone can look at this as very negative — doesn’t help people who believe this is technology whose time has come,” said Benjamin Trapnell, an associate professor of aeronautics at the University of North Dakota.
Weapon Potential
Lightweight quadcopters and other such classes of drones have limited ability to carry anything beyond a small camera and can only fly for about 20 minutes or less, he said.
“It would be quite difficult to weaponize unless someone had access to the right kind of explosives,” Trapnell said in an interview.
The most common commercially available quadcopters, such as SZ DJI Technology Co.’s Phantom 2, weigh only a few pounds and frequently come equipped with a camera.
President Barack Obama and first lady Michelle Obama were in New Delhi last night on a diplomatic mission to forge closer ties with India. Their two daughters, Sasha and Malia, remained in the U.S. though it wasn’t clear whether the girls were at the White House in Washington at the time.
The incident is the latest to raise safety and security concerns over the explosion of small, relatively affordable unmanned aircraft on the market. The FAA logged 193 reports of the devices flying too close to other aircraft, buildings or crowds from Feb. 22 through Nov. 11, 2014.
Drones have grown in popularity as prices have fallen and improvements in technology have made them simpler to fly and there are now hundreds of thousands in the U.S., according to Egan.
Washington Airspace
Since shortly after the Sept. 11, 2001, terrorist attacks, it has been illegal to fly planes, including unmanned aircraft, over Washington without special approval.
The FAA does permit drones to be flown by hobbyists provided the flight is purely for recreation and follows safety guidelines, such as flying no higher than 400 feet above the ground. The rules also say they shouldn’t be flown in populated areas or within 5 miles of an airport, both of which would prohibit a flight near the White House. Reagan National Airport is in Virginia across the Potomac River from Washington.
Entry-level small drones must be flown within a short distance of the operator because the radio-control signal has a limited range. Models may be equipped with better radios and video controls, which enable a pilot to fly the craft over longer distances.
More expensive or custom-made quadcopters are built to carry heavier sensors or cameras and would be capable of carrying bigger payloads, Trapnell said.
–With assistance from Del Quentin Wilber in Washington.
The Discovery Core Experience (DCX) immerses students in small “learning communities” composed of other first year and pre-major students where they can grow fundamental skills for success, identify and connect with campus resources, engage in reflective practices, collaborate in an inclusive and diverse community, and cross disciplinary boundaries.
DCX courses are at the center of FYPP and include a wide range of electives that fulfill UWB's distribution and prerequisite requirements while providing students the opportunity to explore a range of topics such as biology, business, chemistry, mathematics, literature, writing, psychology, sociology, computer science, or philosophy.
The First Year Learning Goals are designed to encourage students, faculty, and staff to dynamically claim our own education as we practice ever more effective forms of learning. Emerging from UW Bothell's central values of transformative learning, engaged scholarship, and the fostering of an inclusive culture, the goals are shaped to create the context for understanding the many traditions that converge here at the university, to support the creation of knowledge, and to shape new social practices.
The Discovery Core Experience (DCX) invites students to foster relationships within the campus community and beyond; develop connections to campus resources and co-curricular opportunities; and develop the skills necessary for success at the University of Washington Bothell.
The DCX offers students a set of interdisciplinary experiences in order to prepare them for success in their personal, academic, and career pathways. These experiences reinforce themes of inclusivity and diversity while nurturing a sense of belonging and purpose.
The heart of DCX is supporting students in forming connections with communities of peers and scholars by encouraging them to take ownership of their growth through transformative learning, engaged scholarship, and active reflection.
Students will articulate their strengths and growth areas in essential university-level skills such as time-management, study and testing skills, academic integrity, information literacy, and quantitative literacy as fundamental skills for success.
Students will use university resources to develop a sense of community and meaningful connection through co-curricular opportunities that will contribute to their academic goals and their physical and emotional well-being.
Students will engage in reflective practices to develop ownership of their academic plans and personal goals. Students will use self-reflection to develop resiliency, confidence, self-efficacy, self-worth, persistence and growth mindset.
Students will build a toolkit for successful collaborative practices through scaffolded and reflective group assignments, including essential skills in conflict negotiation and cognitive empathy.
Students will join reason and imagination to explore ways to investigate, critique, and pursue meaning through innovative classes that cross disciplinary boundaries. | https://www.uwb.edu/premajor/first-year-discovery/discovery-core-overview |
Pathology associated with the Achilles tendon is a common problem, particularly at the site of insertion. A better understanding of the anatomy in this area would assist in developing and fine-tuning treatment options. A cadaveric examination was conducted using 60 human lower extremities (40 cadavers) to determine the location for the terminal insertion site of the Achilles tendon on the posterior aspect of the calcaneus. The average age of the specimens was 67.8 years (range, 43-98 years). Three different investigators examined each specimen, and a consensus as to the site of termination of the Achilles tendon was made. Upon inspection, 55% (22/40) of the limbs had the Achilles tendon inserting on the superior 1/3 aspect of the calcaneus, 40% (16/40) of the limbs inserted on the middle 1/3, and 5% (2/40) of the limbs inserted on the inferior 1/3. The distribution of the insertion was statistically different from random (P = .000371). Further, 8% (3/40) of the specimens revealed a partially contiguous relationship between the Achilles tendon and the plantar fascia. This correlated with the younger specimens (P < .0001). This study provides a better understanding of the anatomical relationship between the Achilles tendon, the calcaneus, and the plantar fascia.
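The significance figure in the abstract can be reproduced from the reported counts. Assuming a Pearson chi-square goodness-of-fit test against a uniform null over the three insertion zones (the abstract does not name the test, so this is an assumption), the numbers line up with the reported P value:

```python
import math

# Observed insertion-site counts from the abstract: superior, middle, and
# inferior thirds of the calcaneus (22 + 16 + 2 = 40 limbs).
observed = [22, 16, 2]
n = sum(observed)

# Assumed null hypothesis: insertions uniformly distributed across the three
# zones, i.e. an expected count of n/3 per zone.
expected = n / 3

# Pearson chi-square statistic
chi2 = sum((o - expected) ** 2 / expected for o in observed)

# With k = 3 categories there are df = 2 degrees of freedom; for df = 2 the
# chi-square survival function has the closed form p = exp(-x / 2).
p_value = math.exp(-chi2 / 2)

print(f"chi2 = {chi2:.2f}, p = {p_value:.6f}")  # p ≈ 0.00037, matching the reported P = .000371
```

The computed p of about 0.00037 matches the paper's P = .000371, supporting the conclusion that the insertion-site distribution is not random.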
Article info
Publication history: Published online June 25, 2010.
Financial Disclosure: None reported.
Conflict of Interest: None reported.
© 2010 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
Diversity can be found in India's food as well as in its culture, geography and climate. Spices are a vital part of food preparation and are used to enhance the flavour of a dish. The correct use and blending of aromatic spices is crucial to the proper preparation of Indian cuisine. Even oil is an important part of cooking: whether it's mustard oil in the north or coconut oil in the south, each part of the country has its preferences. Although a number of religions exist in India, the two cultures that have influenced Indian cooking and food habits are the Hindu and the Muslim traditions. The Portuguese, the Persians and the British also made important contributions to the Indian culinary scene. The Hindu vegetarian tradition is widespread in India, although many Hindus now eat meat. The Muslim tradition is most evident in the cooking of meat.
In the Bhagavad-Gita, Sri Krishna declares that food is of three types, as are sacrifices, austerity and charity.
Sattvic food is the base of a good meal.
There are six base dishes for preparing a good meal:
A few of these dishes are enough to make a tasty and nutritious meal.
Dal, vegetables and rice combine especially well.
Besides these six main dishes, there are plenty of Indian side dishes to choose from.
Very well known Indian (side) dishes are:
Very well known breads are chapatti, nan, idli and paratha.
North Indian cuisine:
A typical North-Indian meal would consist of chapattis or rotis (unleavened bread baked on a griddle) or paranthas (unleavened bread fried on a griddle), rice and an assortment of accessories like dals, fried vegetables, curries, curd, chutney, and pickles. For dessert one could choose from the wide array of sweetmeats from Bengal like rasagulla, sandesh, rasamalai and gulab-jamuns. North Indian desserts are very similar in taste as they are derived from a milk pudding or rice base and are usually soaked in syrup. Kheer is a form of rice pudding, shahi tukra or bread pudding and kulfi, a nutty ice cream are other common northern desserts.
Central Indian cuisine:
Here you will find drier vegetable dishes that can be prepared in a matter of minutes.
South Indian cuisine:
South Indian food is largely non-greasy, roasted and steamed. Rice is the staple diet and forms the basis of every meal. It is usually served with sambhar, rasam (a thin soup), dry and curried vegetables and a curd preparation called pachadi.
Coconut is an important ingredient in all South Indian food.
The South Indian dosa (rice pancakes), idli (steamed rice cakes) and vada, which is made of fermented rice and dal, are now popular throughout the country. The popular dishes from Kerala are appams (a rice pancake) and thick stews. Desserts from the south include the Mysore pak and the creamy payasum.
East Indian cuisine:
East Indian cuisine is famous for its desserts, such as rasagolla, chumchum, sandesh, rasabali, chhena poda, chhena gaja, chhena jalebi and kheeri.
These dishes originated in Bengal, Bihar and Orissa, but are now also very popular in North Indian cuisine.
The East Zone of India offers a rich mix of vegetarian and non-vegetarian food.
The fondness of the people of West Bengal for fish, rice and sweets is legendary and contributes significantly to the popular cuisine, not just of the east zone but of India as a whole. Part of Orissa shares this love for fish and rice with West Bengal, owing to the long coastline the two states share on the Bay of Bengal. Fish and other seafood are plentiful in this region, and so are the recipes.
People in Bihar and Jharkhand love a platter with all the colours of the seasonal vegetables that grow here in abundance and rich variety. The influence of Buddhism is apparent, as a majority of the population practices vegetarianism. The region once came under the influence of the mighty Mughals, and the famous Mughal cuisine naturally left its mark here too.
Western Indian cuisine:
The cuisine of Western India is diverse.
It comprises three regions: Gujarat, Maharashtra and Goa.
Maharashtrian cuisine is diverse and ranges from bland to fiery hot. Pohay, shrikhand, pav bhaji and vada pav are good examples of Maharashtrian cuisine.
Goan cuisine is dominated by the use of rice, coconut, seafood, kokum and cashew nuts. With its distinct spices and coconut oil as the cooking medium, vegetarian and non-vegetarian cuisine are equally popular. The Portuguese made important contributions to the Goan culinary scene. Gujarati cuisine is almost exclusively vegetarian. Gujarat is one of three states in India with prohibition on alcohol, along with Mizoram and Manipur.
Defense Department personnel pose a greater potential threat to DOD's information systems than hackers on the outside, a senior National Security Agency official said recently.
For many years, we pursued a strategy of isolating our sensitive information systems from outsiders by using strongly encrypted and isolated communications networks, said Michael Jacobs, NSA's deputy director of information systems security.
These measures, while fairly effective against the outsider threat, did little to protect against accidental or malicious threats from insiders, Jacobs said at the MILCOM 98 conference in Bedford, Mass.
DOD has hundreds of thousands of computer users with access to classified Defense networks, Jacobs said. The department must limit individual access to information domains, he said.
A recent DOD report underscored the seriousness of the insider threat and recommended a series of technical and nontechnical countermeasures, he said.
The widespread implementation of access control methods, such as a robust, scalable and interoperable public key infrastructure, will help deter and protect against unauthorized actions, Jacobs said.
DOD's technical strategy for information assurance, called defense in-depth, is a series of layered defense levels that act as multiple roadblocks between sensitive Defense information systems and internal and external hackers, Jacobs said.
If you've been in the security business for a long time, you recognize that the concept of defense in-depth is not new, the deputy director said.
For years, security practitioners, especially in the physical security arena, have learned that no single defensive measure can adequately protect vital assets, he said.
Perimeter defensive measures, such as fences, guards and surveillance cameras, must be augmented with internal security controls, such as locks on the doors to rooms and cabinets, and use of personnel badges and sign-in sheets, Jacobs said.
DOD must launch initiatives to detect, protect and respond to IS security in a number of critical areas, including WANs, boundary points between WANs and LANs, hosts, servers, and networking applications and operating systems used within DOD's LANs, Jacobs said.
Defense has two major concerns with regard to WANs: denial of service attacks that could interfere with communications prior to or during an operational deployment, and the confidentiality of DOD classified and sensitive information, he said.
To ensure that information remains confidential, DOD must employ network encryption technology, firewalls, remote access solutions, virus scanners, and intrusion-detection capabilities, Jacobs said.
Field Asset Data Inventory
Location: Independence, Mo.
Client: Water Pollution Control Department
Burns & McDonnell was contracted by the Water Pollution Control (WPC) Department of Independence, Mo., to procure mobile data collection equipment, develop an inventory plan, and to train city staff on data collection techniques and protocols. The goal of the initiative was to field inventory the estimated 13,000 sanitary system assets and 24,000 stormwater system assets owned and maintained by the city.
Burns & McDonnell conducted staff interviews to determine inventory requirements and needs. After assessing the inventory requirements, Burns & McDonnell wrote a request for proposal (RFP) for WPC to select and procure Global Positioning System (GPS) equipment. The inventory project dictated that both survey-grade and mapping-grade equipment would be obtained. Burns & McDonnell worked with WPC to select and procure the necessary equipment. The project deploys a permanent base station and survey-grade rovers that communicate with the base station through cell phone technology. The survey-grade setup is wireless and operates through Bluetooth connectivity. The mapping-grade configuration is Trimble GeoXTs paired with ESRI's ArcPad mobile geographic information system (GIS) solution.
Burns & McDonnell also developed the inventory plan documenting all inventory logistics and data collection and data management processes and protocols. Burns & McDonnell also developed WPC’s ESRI geodatabase to house and manage the inventoried data. Custom ArcPad applications were built to facilitate field data collection and quality assurance requirements. Burns & McDonnell’s information management experts developed quality assurance protocols for both field and office use. A pilot study was conducted to proof and refine the equipment setups and the inventory processes. Once the pilot study was completed, Burns & McDonnell trained WPC’s maintenance staff to conduct the inventory.
WPC staff is reaping the benefits of using reliable, accurate data now available through their GIS. Data that wasn’t available previously is now facilitating effective decision making and improving the efficiency of maintenance activities. | http://dataconservation.com/Projects/Detail/Field-Asset-Data-Inventory |
Jungna Nana Park is a designer specializing in knitwear, currently based in London and studying at the Royal College of Art. She has lived in various places: Seoul, where she spent her childhood; Chicago, where she studied fine arts and fashion at the School of the Art Institute of Chicago; and New York, where she worked as a designer at a design label. Having spent her time in various locations across the globe, she was persistently met with expectations as a woman. Inspired by personal encounters with preconceived notions of living as a female in today's society, she questions what it means to live as a woman, confronts prescribed notions of beauty, and embraces the beauty of being different and having flaws.
Her personal experiences are heavily embedded in her work, exploring the perception of body image for her final collection.
Her recent works have been featured in various magazines, including Vogue Italia, Sicky Mag, and Exhibition Magazine, which held its fashion week over five days with five designers; she was selected as the new designer in its "YOUNG DESIGNERS" series.
Critical Literature Review: Clinician and patient experience of psychological formulation: A qualitative synthesis using meta-ethnography.

Background: Formulation is generally deemed an essential part of mental health treatment and psychological practice. Considering this and the abundance of existing formulation frameworks, there is a surprising paucity of research. Several reviews of the little quantitative evidence have already been conducted, but there has been no review of the existing qualitative literature. Inclusion of qualitative findings in reviews and evidence-based practice is important as they can provide a deeper understanding of patient and clinician experience.

Aims: The current review thus aimed to: (1) systematically find, synthesise, and critique qualitative research on the experience of psychological formulation according to patients and clinicians; (2) use meta-ethnography to develop an interpretative conceptual model of existing literature to provide a better overview for patients, healthcare professionals, and policy makers within mental healthcare; and (3) make recommendations to improve practice and guide further research.

Methods: Meta-ethnography was used to interpret and synthesise findings. A systematic search found 17 papers meeting inclusion criteria: ten regarding patient views, six regarding clinician views of formulation with patients, and four regarding clinician views of formulation with other staff. Quality was assessed using the Critical Appraisal Skills Programme's (CASP, 2017) qualitative appraisal tool.

Results: Four core themes with 12 subthemes were identified: (1) "Function of formulation"; (2) "Intra-connection: Connecting with the self"; (3) "Inter-connection: Connecting with others"; and (4) "Wider context". Sensitivity analysis demonstrated that the overall theme pattern did not differ according to quality.

Conclusions: Themes were synthesised using a "line of argument" approach, producing a new conceptual model regarding patient and clinician experience of formulation. Clinical implications for patients and their carers, clinicians, service managers, policy makers, and funders are discussed, and directions for further qualitative and quantitative research are given.

Improvement Project: Patient and staff views of psychiatric ward activities and efforts to increase choice: A qualitative study.

Background: Assessing staff and patient views of psychiatric inpatient activities is both clinically and economically important. However, no study has yet answered the National Institute of Clinical Excellence's (NICE, 2011a) call for qualitative research into the "activities and occupations service users want on inpatient wards".

Aims: This paper aimed to respond to this call and fill gaps in the literature by exploring staff and patient views of activities in one acute psychiatric inpatient unit, including: which activities are viewed as most beneficial and best-liked, and why; which other activities participants wish to see offered; whether staff and patient views differ; and how efforts to increase choice are experienced.

Method: Seven staff and three inpatients participated in two focus groups using open-ended interview schedules.

Findings: Thematic analysis resulted in five core themes: 1) Preferred Activities, 2) Benefits, 3) Challenges, 4) Choice, and 5) Improvement. Each had two to six subthemes.

Conclusions: Themes echoed the limited existing research and guidelines on psychiatric inpatient activity provision. Both groups identified their best-liked activities. Several activity suggestions and possible benefits of activities were described, alongside best-liked activities, experience of choice, and challenges to these. Both similarities and differences were found between staff and inpatients.

Implications: Further research to explore activities in mental health units with inpatients and staff from different professional backgrounds is needed to continue developing evidence-based guidelines.

Research Project: Unpacking the relationship between social anxiety and state paranoia through experimental manipulation of state anxiety.

Background: Research demonstrates significant overlap between social anxiety (SA) and paranoia, relating to comorbidity, shared psychological processes, and developmental pathways. Taylor and Stopa (2013) suggest heightened anxiety can temporarily shift individuals with trait-SA towards experiencing increased paranoia, but this has not been experimentally investigated.

Aims: The present study aimed to test this theory by evaluating the effects of an anxiety-task on state-paranoia and state-SA in three groups: those with clinical trait-SA (SA-group), those with both clinical trait-SA and trait-paranoia (SAP-group), and healthy controls.

Method: 47 participants (twelve SAP-participants, ten SA-participants, and 25 controls) were asked to complete one sociodemographic and four baseline questionnaires (Social Anxiety Interaction and Social Phobia Scales, Green et al. Paranoid Thoughts Scale, and Depression Anxiety Stress Scale-Short Form) to evaluate trait-levels of SA, paranoia, and affect, respectively. Participants then completed three Visual Analogue Scales (VAS) before and after an anxiety-task (Bentall Anagrams Task) to assess differences in state-SA, state-paranoia, and state-affect.

Results: Contrary to previous research, results did not find an effect of the anxiety-task on state-symptomatology. Although findings supported hypotheses regarding differences between state-SA and state-paranoia scores before the anxiety-task, they therefore did not substantiate the hypothesis that the anxiety-task would lead to increased state-paranoia for individuals with SA.

Discussion: This is the first study that aimed to experimentally evaluate Taylor and Stopa's (2013) hypothesis and one of few to include both clinical groups and controls. Due to failed manipulation of the anxiety-task, the experiment was not a true test of their hypothesis. Several possible reasons are discussed with important implications for research.
Join the International Women’s Alliance (IWA) at the Women’s Tent, People’s Global Camp against WTO!
December 5, 2013, GOR Ngurah Rai Sports Center, Jalan Melati, Denpasar, Indonesia
In a bid to highlight the current struggles and perspectives of grassroots women in crafting an alternative to the neoliberal agenda of the WTO, the International Women’s Alliance (IWA), in partnership with the Indonesian women’s organizations, Seruni, RUPARI and Beranda Perempuan/Srikandi and the Asian Rural Women’s Coalition (ARWC), GABRIELA Philippines, Asia Pacific Research Network (APRN), Amihan Peasant Women Federation, Phils., Women of Diverse Origins-Canada, Fire-USA, Peace for Life People’s Forum and Movement for Global Justice and Peace, Women’s International Democratic Federation (WIDF) Asia, One Billion Rising Movement and the International Migrants’ Alliance (IMA) will be organizing activities dubbed “Grassroots Women’s Solidarity” as a part of the People’s Global Camp against the WTO scheduled on December 2-6, 2013 at the GOR Ngurah Rai Sports Center, Jalan Melati, Denpasar, Indonesia.
The Grassroots Women’s Solidarity aims to gather leaders and advocates from various women’s organizations, CSO’s and grassroots social movements to raise our collective voices against the devastating impacts of neoliberal trade in our lives and livelihoods and to craft an alternative system that takes grassroots women’s perspective on human rights and women’s equality at heart.
The following are activities set to take place during the Grassroots Women’s Solidarity:
December 5:
- 9:00-11:00am—Workshop on “Grassroots Women’s Struggles and Alternatives to WTO” aims to tackle the wide range of struggles being undertaken by grassroots women and their perspectives on alternatives to the present neoliberal trade and investment system;
- 11:00am-1:00pm—Women’s Solidarity Action Against the WTO, a women- initiated and led action wherein female participants wearing traditional garments will expose the destructive effects of neoliberal trade, not only on the global garments industry, but on women’s lives and livelihoods in general;
- “Women Building Peace and Resisting US Military Intervention in the Asia Pacific and Beyond” Campaign Launch aims to assist and consolidate women’s organizations and advocates in addressing negative impacts of the increasing US military intervention in the Asia-Pacific region. IWA will also be launching a toolkit for this campaign.
Apart from IWA-initiated side-events, members of the Alliance will also be participating in the People’s Global Camp plenary sessions, as well as other side events including
- December 2: Global Day of Action –consisting of internationally-coordinated actions registering the perspectives of various social movements, organizations, CSO’s and advocacy groups regarding the WTO’s neoliberal agenda;
- December 2-6: Travelling Journal: Our Stories, One Journey: Empowering Rural Women in Asia—a travelling journal, featuring journal entries written by rural women, which forms part of the global campaign to achieve food security through a more equitable and sustainable system of growing food;
- December 4: Workshop on Land-grabbing and Women’s Resistance—organized by the Asian Rural Women’s Coalition (ARWC) and the Asia Pacific Forum on Women Law and Development (APWLD) discussing the experiences and campaigns waged by rural women against the neoliberal offensive on agriculture.
- December 4: Speak Out: Women Rise, United Resist for Liberation—initiated by Indonesian organization Seruni, which aims to provide a venue for women to share their thoughts and experiences on the negative impacts of WTO trade facilitation agreements, free trade agreements and Economic Partnership Agreements on women’s lives.
The Grassroots Women’s Solidarity will be an opportune moment for women’s advocates from around the world to come together and articulate their visions and alternatives for a genuinely rights-based, transformative, democratic and equitable trade and investment system.
IWA is enjoining all advocates and parties interested to take part in this momentous occasion to please get in touch with us via email at [email protected]. | https://intlwomensalliance.org/2013/11/14/take-part-take-action/ |
Upon finishing my Bachelor's degree in Technical physics, I found myself at a crossroads as to which path I should choose. I considered pursuing a Master's degree in either Biomedical engineering or Environmental sciences and eventually decided on the latter, as I believed I could apply this knowledge for the benefit of humanity and the planet. I almost continued with a PhD in hydrogeology, but before committing to it, I was inspired to look into other scientific areas, which led me to come back to my unfulfilled wish of researching biomedical and computer sciences. Although I haven't exactly followed a traditional and streamlined career path, I consider my versatile knowledge and experiences to be my greatest assets, as they provide me with a unique toolbox for solving problems in different and unconventional ways.
Barcelona Supercomputing Center, Barcelona, Spain
Supervisor: Dr. Nataša Pržulj
Secondments
Secondment 1: Validating in vitro predictions (disease genes and potential drug repurposing) through wet lab experiments; Biological analysis of uncovered disease mechanisms
Secondment 2: From validated predictions to medical practice in dermatology using QIAGEN’s “sample to insight” strategies
ESR 9 Project
Patient-centric data integration framework for highly dimensional data
The goal of this project is to develop a patient-centric framework that integrates heterogeneous data types into a single framework (systems-level analysis) to advance precision medicine. The data may include drugs, genes and proteins (multi-omics data), diseases, different exposure chemicals (e.g., environmental molecules, toxins, food & beverage, alternative medicine). Integration will follow a novel non-negative matrix tri-factorization (NMTF) based approach. These types of approaches are characterized by a matrix completion property, which means that the resulting reconstructed matrices contain more information about the system than the original input matrices. Therefore, multi-relational heterogeneous data can be mined to uncover novel relationships between mentioned data types, furthering precision medicine. In addition, NMTF approaches perform co-clustering of the data, which will be utilized for patient subtyping / stratification. | https://h2020transys.eu/katarina-mihajlovic/ |
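The non-negative matrix tri-factorization step described above can be sketched in a few lines. The multiplicative updates below are a standard heuristic for minimizing ||R − G S Hᵀ||_F with non-negative factors; this is an illustrative sketch only, not the project's actual implementation, and the function name, matrix sizes and iteration counts are invented for the example.

```python
import numpy as np

def nmtf(R, k1, k2, n_iter=200, eps=1e-9, seed=0):
    """Non-negative matrix tri-factorization R ~ G @ S @ H.T via
    multiplicative updates (illustrative sketch, hypothetical API)."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    G = rng.random((n, k1))
    S = rng.random((k1, k2))
    H = rng.random((m, k2))
    for _ in range(n_iter):
        # Each factor is rescaled by the ratio of the gradient's
        # positive and negative parts, which preserves non-negativity.
        G *= (R @ H @ S.T) / (G @ S @ H.T @ H @ S.T + eps)
        H *= (R.T @ G @ S) / (H @ S.T @ G.T @ G @ S + eps)
        S *= (G.T @ R @ H) / (G.T @ G @ S @ H.T @ H + eps)
    return G, S, H
```

The reconstruction `G @ S @ H.T` illustrates the matrix-completion property mentioned above: it assigns scores to entries that were zero or unobserved in `R`, which is how novel relationships (e.g. gene–disease links) would be ranked, while the low-dimensional factors `G` and `H` can be co-clustered for patient stratification.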
With the landing of a MA60 aircraft, a comprehensive remote sensing experiment wrapped up in Rizhao City, Shandong Province on November 30, 2021.
Starting November 7, the mission lasted 24 days. Eight flights covering more than 10,000 square kilometers were carried out in Rizhao, bringing back visible-light and synthetic aperture radar (SAR) data.
These data will provide solid support to civil applications and scientific research such as multi-band/multi-polarization SAR data fusion, ground object classification, quantitative analysis of target characteristics, high-precision extraction of typical surface elements, safety risk prediction and early warning along the high-speed railways.
The MA60 aircraft is part of China's Airborne Remote Sensing System (ARSS), a national major S&T infrastructure hosted by the Aerospace Information Research Institute (AIR), Chinese Academy of Sciences.
ARSS consists of two medium-sized manned aircraft together with a range of remote sensing technologies developed by AIR. The two MA60 aircraft form an extensively modified remote sensing platform that carries multiple instruments with high observation efficiency, flying radar and optical devices on board simultaneously.
ARSS, which is integrated with SAR, infrared, lidar and optical aeronautical equipment, also incorporates the functions of real-time processing and satellite communication, contributing to emergency-response operations, including disaster prevention and reduction.
According to the Airborne Remote Sensing Center at AIR, more airborne remote sensing missions will be carried out over this area using ARSS, with the other MA60 remote sensing aircraft, which carries multi-band SAR and hyperspectral equipment, serving as the flight platform.
With the help of this integrated airborne experiment platform, scientists can obtain more kinds of comprehensive scientific data under multi-temporal, multi-payload, multi-viewing-angle, multi-altitude and multi-control flying conditions in varied regions.
Based on acquired data, a standard and quantitative scientific dataset of aerial remote sensing time series will be produced for agriculture, disaster reduction, ecology, urban development, and other research purposes. | https://techxplore.com/news/2021-12-airborne-remote-home-grown-ma60-aircraft.html |
The present Handbook on Teaching and Learning for Sustainable Development offers a wide range of perspectives, and a comprehensive overview of innovative teaching methods and innovative approaches (e.g., technological, non-technological, social and governance) that show how sustainability teaching may be practised. It contributes to a further understanding of:

- the role of sustainable development in different teaching realities;
- the contribution of sustainable development to citizenship;
- future perspectives in the curriculum;
- the means to reorient education for a sustainable future;
- the various challenges in implementing the principles of sustainable development in practice.

In this context, the contributions of the authors play a key role and outline the many ramifications of a broader understanding of sustainability. | https://china.elgaronline.com/abstract/edcoll/9781839104640/9781839104640.00006.xml |
From 2023 most subjects will be taught on campus only with flexible options limited to a select number of postgraduate programs and individual subjects.
To learn more, visit COVID-19 course and subject delivery.
About this subject
- Overview
- Eligibility and requirements
- Assessment
- Dates and times
- Further information
- Timetable
Contact information
Please refer to the specific study period for contact information.
Overview
| Availability | Semester 1 (Early-Start) - Dual-Delivery |
| Fees | Look up fees |
This subject will support Teacher Candidates’ understanding of science as both a body of knowledge and the enactment of skills and processes associated with scientific thinking, reasoning and inquiry. Teacher Candidates will learn how to use scientific thinking to engage children in science learning experiences and to encourage children’s use of scientific thinking, reasoning and inquiry in order to develop science knowledge.
Seminar topics are centred on children’s everyday lives and the natural world. Topics draw on contemporary research evidence and introduce science content knowledge. In addition, the subject explores opportunities for teachers to collaborate with children, their families and the wider community whilst investigating opportunities for sustainable living. The subject will equip Teacher Candidates to teach children from infancy to eight years of age and is informed by the Victorian Early Years Learning and Development Framework and the science standards in the Victorian Curriculum Foundation to Level Two.
Topics include the relationship between climate change and sustainability, the basic needs of all living things, principles of ‘classification’ in science, plants and their uses in everyday life, invertebrates (‘minibeasts’) and their characteristics, air and water, light and sound, magnetism and electricity, technology, and space and the universe. Teacher Candidates will design, implement, review and refine sequences of science learning experiences.
Intended learning outcomes
On completion of this subject, Teacher Candidates should be able to:
Graduate Standards refers to the Graduate-level Australian Professional Standards for Teachers.
- Critically reflect on research into how students learn and understand the concepts, substance, structure and implications for effective teaching practice, including the creation of effective learning environments (Graduate Standards 1.2, 1.5, 2.1, 3.2, 3.6)
- Understand how to design lesson plans and learning sequences, using knowledge of student learning, curriculum, assessment and reporting, as well as effective teaching resources (Graduate Standards 1.2, 1.5, 2.2, 3.2, 3.3, 3.6)
- Understand how to set learning goals that provide achievable challenges for students of varying abilities and characteristics (Graduate Standards 3.1, 3.2, 3.4, 3.6)
- Select appropriate strategies to differentiate teaching to meet the specific needs of students, including digital technologies, literacy, numeracy and 21st Century skills, in order to engage and empower students in their learning (Graduate Standards 1.2, 1.5, 2.6, 3.3, 3.4)
- Evaluate teaching programs to improve learning and to determine the effectiveness of strategies and resources (Graduate Standards 3.3, 3.4, 3.6, 5.1)
- Identify assessment strategies including formal and informal diagnostic, formative and summative approaches to assess and to support students’ learning (Graduate Standards 2.3, 3.4, 3.6, 5.1)
- Demonstrate understanding of how to encourage children’s scientific thinking, reasoning and inquiry by providing clear directions, differentiating teaching as appropriate (Graduate Standards 2.1, 2.3)
- Implement teaching strategies for using ICT to expand curriculum learning opportunities for students (Graduate Standards 2.6)
- Demonstrate science content knowledge and competence in seeking out additional information to supplement their own knowledge as teachers (Graduate Standards 2.1, 6.2, 6.4).
Generic skills
This subject will develop the following set of key transferable skills:
- Clinical reasoning and thinking
- Problem solving
- Evidence based decision making
- Creativity and innovation
- Teamwork and professional collaboration
- Learning to learn and metacognition
- Responsiveness to a changing knowledge base
- Reflection for continuous improvement
- Linking theory and practice
- Inquiry and research
- Active and participatory citizenship.
Last updated: 29 July 2022
Eligibility and requirements
Prerequisites
None
Corequisites
None
Non-allowed subjects
None
Inherent requirements (core participation requirements)
The University of Melbourne is committed to providing students with reasonable adjustments to assessment and participation under the Disability Standards for Education (2005), and the Assessment and Results Policy (MPF1326). Students are expected to meet the core participation requirements for their course. These can be viewed under Entry and Participation Requirements for the course outlines in the Handbook.
Further details on how to seek academic adjustments can be found on the Student Equity and Disability Support website: http://services.unimelb.edu.au/student-equity/home
Last updated: 29 July 2022
Assessment
| Description | Timing | Percentage |
| --- | --- | --- |
| Written task focussing on science topic(s) and early childhood pedagogy | Second half of the teaching period | 50% |
| Expository essay | Week after SWOTVAC | 50% |
| Hurdle requirement: Minimum of 80% attendance at all scheduled lectures, tutorials, seminars and workshops. | Throughout the teaching period | N/A |
Last updated: 29 July 2022
Dates & times
- Semester 1 (Early-Start)
- Principal coordinator: Jayson Cooper
- Mode of delivery: Dual-Delivery (Parkville)
- Contact hours: 18 hours. This study period is for Master of Teaching (Early Childhood) students only.
- Total time commitment: 85 hours
- Teaching period: 21 February 2022 to 29 May 2022
- Last self-enrol date: 4 March 2022
- Census date: 31 March 2022
- Last date to withdraw without fail: 6 May 2022
- Assessment period ends: 24 June 2022
Semester 1 (Early-Start) contact information
Time commitment details
85 hours
Additional delivery details
Master of Teaching (Early Childhood) students must enrol in the February study period.
Graduate Diploma in Teaching (Early Childhood) students must enrol in the Winter Term study period.
Last updated: 29 July 2022
Further information
- Texts
Prescribed texts
Britannica Online
Participants will be provided with a collection of readings via the online Learning Management System (LMS).
- Related Handbook entries
This subject contributes to the following: | https://handbook.unimelb.edu.au/subjects/educ90889/print |
Dear future students,
I have finally completed my Spring 2022 semester, and Writing for Engineering is one of the first English courses I’ve taken in college. This course is easily distinguishable when compared to Freshman Composition, Speech, and all other previous classes I’ve taken, due to its specialty in Engineering writing genres and deep dive into genre analysis. Although I did not have high expectations going into the school year, I was pleasantly surprised by the challenging and intriguing experience that this class rewarded.
To what extent have I achieved the course learning objectives?
I feel that I’ve achieved a considerable understanding of the course learning objectives throughout this course. In fact, with every assignment, the course learning objectives were exercised once again. There were four major assignments in total: the Memo, the Technical Description, the Lab Report, and the Engineering Proposal. With each assignment, there was a repeated process: analyzing the genre structure and sample content, examining effective and ineffective techniques applied by previous students, and critically thinking about how rhetorical elements pertained to our own projects. Then, we would research information using online libraries, find credible sources, create rough drafts, peer review with our classmates, and go into the editing, revising, and finalizing process. Paraphrasing and crediting our sources by citing was also a huge part of each assignment. Even the reflection required at the end of each paper allowed me to think back on what course learning objectives I was practicing and refining, which helped me achieve them even more.
In what ways have my perceptions on what writing is and does evolved this semester?
My previous perception on writing was that it was a creative form. I knew that writing can have many different purposes, such as persuading, informing, inspiring, teaching, or simply expressing emotion. In the past, I’ve analyzed speeches made by Presidents during elections, letters written to military generals in times of war, memoirs, poems, novels, scientific essays, and much more. But all of those genres fit into a certain box for me, I still considered writing as something expressive, because somehow, every piece of writing is always selling a narrative. However, this semester my perception on writing truly broadened. I never considered the technical side of writing, or even the fact that writing guidebooks, instructions, proposals, memos, and other pieces required skill and practice. Instructions and memos, they all seem self explanatory at first glance, especially because they are so common and widely used. However, every genre has a specific structure and conduct that should be followed, and it’s not as easy as it looks. They are fulfilling some sort of purpose, such as gaining the approval of a project, fixing an inconvenience, or receiving funding. These are all things I never really thought about. To elaborate, my perception on writing was previously very narrow. For me, English and STEM were two separate fields. Now, I’m more open minded and I understand that writing skills are essential in every single subject, including engineering.
How does the audience impact the content and purpose of text?
The content and purpose of a text is heavily dependent on the audience. This is because in order to achieve your purpose, you must cater towards your audience’s preferences, accessibility, and beliefs, which causes your content to change. For example, if the purpose of my memo is to persuade my audience, which is Professor Carr, to reconsider a grade for my assignment, then I must speak professionally and respectfully, while also giving valid reasons. If I’m overly casual or aggressive, it may come across as entitled and disrespectful, which will not help me achieve my purpose. Additionally, my content must be clear, straight to the point, and supplied with reasons why I deserve a better grade. Another example is the balloon powered car instructions my group, group 6, wrote and presented. Our audience ranged from middle schoolers to adults. Thus, our content was kid/teen friendly, while also being sophisticated and mature enough for parents or teachers.
Was there a challenge in writing across genres and addressing specific audiences?
I think my biggest challenge was my unfamiliarity with the genres. The engineering proposal had a lot of aspects that I’ve never considered before, like creating a budget and estimating how much we would pay employees, the costs of supplies and materials, etc. The lab report was also difficult for me to start, because I wasn’t sure how to approach an abstract, discussion, and results section. I don’t think I’ve ever done any of these writing pieces before. Understanding the audience took some critical thinking because you needed to understand the other rhetorical elements thoroughly as well in order to find the audience. I needed time to realize who my audiences were.
What happens to the other rhetorical elements when you change one of the elements within the situation? for example, when you change media, do the other elements change?
The other rhetorical elements shift drastically when one of the elements are changed. If the media is changed, the audience changes as well. The purpose and genre can also change. For instance, an advertisement shown in a printed newspaper has a different audience than one shown in an online website.
Now that we have returned to traditional teaching/learning in our “appropriate” classroom environment, discuss how the shift back to the classroom has affected your educational experience, conducting group work, and student life. Which do you prefer? Discuss your transition/experience.
After almost two years of online learning, the traditional in person classroom system was very enjoyable for me. I’m someone who has a difficult time concentrating, especially when there’s so many distractions around me, like my phone or my family members. If my class is online, I am more prone to slacking, giving myself breaks during class, and not being able to pay attention. I was also less motivated to learn and rarely excited to go to class, because I was just waking up, opening my laptop, and looking at a powerpoint screen on Zoom. Every day was repetitive and sometimes even depressing.
Going to classes in person, however, changed a lot of that for me. The class itself was a breath of fresh air, because I was interacting with people directly and we would have intellectual discussions about the topic at hand. When Professor Carr spoke, I felt compelled to listen and take notes. I loved listening to my classmates debate, and I was actually interested in the dialogue. I was also not afraid to voice my own opinion. I felt safe to politely disagree with my peers without being ostracized. My group mates were all kind, helpful, and easy to work with. Presentations were a bit nerve wracking at first, but eventually they became much easier, and I definitely improved my public speaking skills. Additionally, although most people don’t really think about this, the classroom environment itself has a big impact on my learning experience. Since the classroom was bright, decently spaced, and had fun swivel desks, I was more alert and eager to learn. However, there were different challenges I had to face, such as getting to school on time despite the unreliable, dangerous commute, and my irregular sleeping schedule. Overall, I prefer in person learning and this class helped me transition better. | https://tahsinakhan015.commons.gc.cuny.edu/ |
Fostering Resilience offers information for parents and professionals to help kids build resilience and tools for teens to manage stress and seek help.
We need resilience to get through life’s challenges. Use these tools and resources to build your own resilience and help others build resilience, too.
One of the best things we can do for our kids is to help them build their own resilience so they can handle what life throws their way. Here you'll find others who are working to do just that.
A leading trauma expert and clinical psychologist explains why talking about trauma helps the healing process.
The Obama administration's liaison to the LGBTQ and Asian American and Pacific Islander communities shares insights about how we can take a stand against bullying and build resilience in LGBTQ communities.
Psychologist Kelly McGonigal outlines how we can build resilience by connecting with and caring for other people during times of stress.
Lee Woodruff’s husband, ABC News anchor Bob Woodruff, was critically injured on assignment in Iraq when his vehicle was hit by a roadside bomb in 2006. Sheryl Sandberg and Lee Woodruff discuss resilience and how the Woodruff family has made the most out of Option B.
Resilience is the strength and speed of our response to adversity. It’s a skillset we develop over the course of our lives, and there are concrete steps we can take to build resilience long before we face any kind of difficulty.
Writer and counseling psychologist Lee Daniel Kravetz describes five steps we can take to find realistic hope in the face of adversity.
Resilience expert Ann Masten shares the factors that prepare kids to be resilient when faced with adversity.
Read practical tips developed by pediatrician Kenneth Ginsburg to help you prepare your children to build resilience, overcome setbacks, and thrive.
Youth advocate Limabenla Jamir describes how resilience develops from social support in conflict-affected communities and how it can help young people drive changes in their societies.
Staying silent can make your loved ones feel even more isolated after grief, loss, or hardship. Talking about the elephant in the room is one way to acknowledge your friend’s suffering and speak with empathy and honesty.
Groups of students, educators, parents, and community members who volunteer their time to ensure safe and supportive schools for LGBTQ students.
Peer-to-peer support for LGBTQ people and their allies.
A community where LGBTQ youth can connect with others and discuss issues related to discrimination and identity.
Sheryl Sandberg and Adam Grant discuss how kids are often more resilient than we think. There are concrete things we can do to help them build that resilience, including making sure they know they aren’t facing adversity alone. They discuss the concept of mattering, which is knowing that others notice you, care about you, and rely on you.
Psychologist and author Guy Winch describes how changing our responses to failure can build resilience.
Sesame Street offers bilingual multimedia tools for parents and providers to help kids build resilience.
Good Grief helps kids and their families cope and build resilience following the death of a loved one.
When you treat yourself with the same kindness and understanding you’d show a friend, that’s self-compassion. When you believe in your abilities, that’s self-confidence. We can practice self-compassion and develop our self-confidence on a daily basis to build resilience.
Advice for how to help children navigate the holidays in the wake of death or divorce, from experts on helping kids build resilience.
How can we support a loved one who has recently acquired a disability? How can people become better allies to the disabled community? Disability rights activist Judith Heumann shares advice on building resilience and making communities more inclusive and tolerant for those living with a disability.
After her mother died, Cheryl Strayed – the author of Wild, Tiny Beautiful Things, and Brave Enough – had never felt more lost. Cheryl hiked herself back to “the woman her mother wanted her to be” on the Pacific Crest Trail, developing her resilience along the way. In this conversation with Sheryl Sandberg, Cheryl shares what she learned on her hike and about the enduring power of a mother’s love.
Get tips and resources from OptionB.Org emailed to you or sent straight to your phone. | https://optionb.org/search?q=Collective%20resilience |
Talk about yum! Your dog will love these homemade peanut butter and pumpkin biscotti dog treats. Frankly, I think many humans would find it pretty tasty! Give it a go and let me know if your dog liked his special baked treats.
2. Combine the dry ingredients: whole wheat flour, cinnamon and baking powder, and stir until well mixed. Add peanut butter, pumpkin puree and milk to the dry mix and knead until all ingredients are well combined.
3. Separate dough into two balls and gently knead 1/2 teaspoon carob powder into each ball, until it is marbled through out the dough.
4. Form dough into flat logs about 6 inches wide and 1 inch high. Bake on prepared baking sheet for 15 to 20 minutes. Remove from oven and allow to cool for 10 minutes then cut logs into 1 inch slices.
5. Place slices on prepared baking sheet and bake for 15 to 20 minutes until the slices are dry and crunchy. Allow to cool.
This dog treat recipe has been carefully researched and tried and tested on various dogs. However, I cannot rule out the possibility of particular food intolerances or allergies. Please do not make these treats if your dog is allergic to or can’t tolerate any of the ingredients listed below.
Remember – this homemade dog treat recipe is intended for special occasions. It’s meant as an addition to your dog's regular diet and should not replace a proper balanced diet. It is essential to ask the advice of your veterinarian before feeding your dog any homemade food. | https://familypet.com/peanut-butter-pumpkin-biscotti-dog-treats/ |
Experimental and numerical investigations of passive scalars advected by turbulent flows have shown that passive scalar structure functions, $T_{2n}(r)$, have an anomalous power-law behaviour: $T_{2n}(r) = {\langle}(\theta(x+r)-\theta(x))^{2n}{\rangle}= {\langle}(\delta_r \theta(x))^{2n}{\rangle}\sim r^{\zeta(2n)}$, where by anomalous scaling we mean that the exponents $\zeta(2n)$ do not follow the dimensional estimate $\zeta(2n) = n \zeta(2)$. A great theoretical challenge is to develop a theory which allows a systematic calculation of $\zeta(n)$ from the Navier-Stokes equations. Recently [@KR94], it has been realized that intermittent power laws are also present in a model of a passive scalar advected by stochastic velocity fields, for $n>1$ [@GAKU95; @CFKL95]. The model, introduced by Kraichnan, is defined by the standard advection equation: $$\label{kraichnan}
\partial_t \theta + {\bf u} \cdot {\mbox{\boldmath $\partial$}} \theta = \kappa \Delta \theta + \phi,$$ where ${\bf u}$ is a Gaussian, isotropic, white-in-time stochastic $d$-dimensional field with a scaling second-order structure function: $\langle(u_i(x)-u_i(x+r))(u_j(x)-u_j(x+r))\rangle = D_0 r^{\xi}((d+\xi-1)\delta_{ij} -\xi r_ir_j/r^2)$. The physical range for the scaling parameter of the velocity field is $0 \leq \xi \leq 2$; $\phi$ is an external forcing and $\kappa$ is the molecular diffusivity.\
A huge amount of work has been done in recent years on the Kraichnan model. Due to the white-in-time character of the advecting velocity field, the equations for passive correlators of any order $n$ are linear and closed. This allows explicit, perturbative calculations of anomalous exponents in terms of zero-mode solutions of the closed equation satisfied by the $n$-point correlation function, by means of expansions in $\xi$ [@GAKU95] or in $1/d$ [@CFKL95], with $d$ the physical space dimensionality.\
The connection between anomalous scaling and zero modes, fascinating as it is, looks very difficult to exploit for the most important problem, that of the Navier-Stokes equations. In that case, the problem being non-linear, the hierarchy of equations of motion for velocity correlators is not closed, and the zero-mode approach would have to be pursued in a much less tractable functional space.\
From a phenomenological point of view, a simple way to understand the presence of anomalous scaling is to think of the scalar field as made of singular scaling fluctuations $\delta_r \theta(x) \sim r^{h(x)}$, with a probability to develop an $h$-fluctuation at scale $r$ given by $P_r(h) \sim r^{f(h)}$, where $f(h)$ is the co-dimension of the fractal set on which $h(x) = h$. This is the multifractal road to anomalous exponents [@Fr95], which leads to the usual saddle-point estimate for the scaling exponents of structure functions: $\zeta(2n) = \min_h{(2n h +f(h))}$ [@ISM]. In this framework, high-order structure functions are dominated by the most intense events, i.e. fluctuations characterized by the exponent $h_{min}$: $\lim_{n \rightarrow \infty} \zeta(n) = nh_{min}$. The emergence of singular fluctuations, at the basis of the multifractal interpretation, naturally suggests that instantonic calculus can be used to study such special configurations in the system. Recently, instantons have been successfully applied in the Kraichnan model to estimate the behaviour of high-order structure functions when $d(2-\xi)\gg 1$ [@BaLe98], and to estimate PDF tails for $\xi =2$ [@FKLM96].\
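The saddle-point estimate $\zeta(2n) = \min_h(2nh + f(h))$ is a Legendre transform, and its large-order behaviour $\zeta(n) \to n h_{min}$ can be checked numerically on a toy spectrum. The parabolic $f(h)$ below is purely illustrative (not taken from the model or from any data); it only serves to show how the minimum migrates to the edge $h_{min}$ as the order grows.

```python
import numpy as np

# Toy multifractal spectrum (hypothetical): a parabola on [h_min, h_max],
# where f(h) is the co-dimension of the set with local exponent h.
h_min, h_max = 0.1, 0.6
h = np.linspace(h_min, h_max, 2001)
f = 4.0 * (h - 0.45) ** 2 / (0.45 - h_min) ** 2

def zeta(p):
    """Saddle-point (Legendre) estimate zeta(p) = min_h [p*h + f(h)]."""
    return float(np.min(p * h + f))

# For large p the minimum saturates at the edge h = h_min, so the
# increments zeta(p+1) - zeta(p) approach h_min, i.e. zeta(n) -> n*h_min.
slopes = [zeta(p + 1) - zeta(p) for p in (20, 40, 80)]
```

With this toy spectrum, $\zeta(p)$ comes out concave, so $\zeta(4) < 2\zeta(2)$ (anomalous scaling), and the large-order slope equals $h_{min}$, mirroring the limit quoted above.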
In this letter, we propose an application of the instantonic approach in random shell models for passive scalar advection, where explicit calculation of the singular coherent structures can be performed.\
Let us briefly summarize our strategy and our main findings. First, we restrict our hunt for instantons to coupled, self-similar configurations of noise and passive scalar, a plausible assumption in view of the multifractal picture described above. We develop a method for computing in a numerical but exact way such configurations of optimal Gaussian weight for any scaling exponent $h$. We find that $h$ cannot go below some finite threshold $h_{min}(\xi)$. We compare, at varying $\xi$, the $h_{min}(\xi)$ given by the instantonic calculus with the values extracted from numerical simulations, showing that the agreement is perfect and therefore supporting the idea that self-similar structures govern high-order intermittency.\
Second, assuming that these localized pulse-like instantons constitute the elementary bricks of intermittency also for finite-order moments, we compute their dressing by quadratic fluctuations. We obtain in this way the first two terms of the function $f(h)$ via a “semi-classical” expansion. Let us notice that a rigorous application of the semi-classical analysis would call for a small parameter controlling the rate of convergence of the expansion, like $1/n$, where $n$ is the order of the moment [@FKLM96], or $1/d$, where $d$ is the physical space dimension [@BaLe98]. As we do not have such a small parameter at our disposal in our problem, the reliability of our results concerning the statistical weight of the $h$-pulses can only be checked by an [*a posteriori*]{} comparison with numerical data existing in the literature. At the end of this communication, we will present some preliminary results on this important issue, while much more extensive work will be reported elsewhere.\
Shell models are simplified dynamical models which have demonstrated in the past to be able to reproduce many of the most important features of both velocity and passive turbulent cascades [@ISM].\
The model we are going to use is defined as follows. First, a shell-discretization of the Fourier space in a set of wavenumbers defined on a geometric progression $k_m = k_0 \lambda^m$ is introduced. Then, passive increments at scale $r_m=k_m^{-1}$ are described by a real variable $\theta_m(t)$. The time evolution is obtained according to the following criteria: (i) the linear term is purely diffusive and is given by $-\kappa k_m^2
\theta_m$; (ii) the advection term is a combination of the form $k_m \theta_{m'} u_{m''}$, where $u_m$ are random Gaussian and white-in-time shell-velocity fields; (iii) interacting shells are restricted to nearest-neighbors of $m$; (iv) in the absence of forcing and damping, the model conserves the volume in the phase-space and the energy $E = \sum_m |\theta_m|^2$. Properties (i), (ii) and and (iv) are valid also for the original equation (\[kraichnan\]) in the Fourier space, while property (iii) is an assumption of locality of interactions among modes, which is rather well founded as long as $0 \ll \xi \ll 2$. The simplest model exhibiting inertial-range intermittency is defined by [@BBW97]: $$\begin{aligned}
[\frac{d}{dt} + \kappa k_m^2]\,\theta_m (t) = c_{m}\theta_{m-1}(t) u_{m}(t) +
\nonumber \\
+ a_m \theta_{m-1}(t) u_{m-1}(t) +\delta_{1m} \phi(t),
\label{shellmodel}\end{aligned}$$ with $a_{m} = -c_{m-1}= k_{m}$, and where the forcing term acts only on the first shell. Following Kraichnan, we also assume that the forcing term $\phi(t)$ and the velocity variables $u_m(t)$ are independent Gaussian and white-in-time random variables, with the following scaling prescription for the advecting field: $$\langle u_m (t) u_n(t') \rangle = \delta(t-t') k_m^{-\xi} \delta_{mn}.$$ Shell models have been proved analytically and non-perturbatively [@BBW97] to possess anomalous zero modes similarly to the original Kraichnan model (\[kraichnan\]).\
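Property (iv) can be checked numerically: for any frozen velocity configuration, the advection term alone leaves $E=\sum_m \theta_m^2$ invariant. The sketch below is illustrative only; it assumes $k_0=1$, $\lambda=2$, zero boundary shells, and takes the couplings in the nearest-neighbor form $c_m\theta_{m+1}u_m + a_m\theta_{m-1}u_{m-1}$ with $a_m=-c_{m-1}=k_m$, which is the form consistent with energy conservation.

```python
import numpy as np

def advection_rhs(theta, u, k):
    """Inviscid, unforced advection term of the shell model,
    d theta_m/dt = c_m theta_{m+1} u_m + a_m theta_{m-1} u_{m-1},
    with a_m = -c_{m-1} = k_m (hence c_m = -k_{m+1}).
    Shells outside the truncation are set to zero (0-based indexing)."""
    M = len(theta)
    rhs = np.zeros(M)
    for m in range(M):
        c_m = -k[m + 1] if m + 1 < M else 0.0
        up = theta[m + 1] if m + 1 < M else 0.0
        dn = theta[m - 1] if m >= 1 else 0.0
        u_prev = u[m - 1] if m >= 1 else 0.0
        rhs[m] = c_m * up * u[m] + k[m] * dn * u_prev
    return rhs

rng = np.random.default_rng(0)
M = 12
k = 1.0 * 2.0 ** np.arange(M)       # k_m = k0 * lambda^m with k0 = 1, lambda = 2
theta = rng.standard_normal(M)
u = rng.standard_normal(M)          # frozen velocity snapshot
# dE/dt = 2 * theta . rhs must vanish for any theta and u (property (iv))
print(abs(theta @ advection_rhs(theta, u, k)))  # a round-off-level number
```

The cancellation happens pairwise between neighboring shells, which is exactly what the constraint $a_m=-c_{m-1}$ enforces.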
The role played by fluctuations with local exponent $h(x)$ in the original physical-space model is here replaced by the formation at large scale of structures propagating self-similarly towards smaller scales. The existence in the inviscid, unforced problem of such solutions, associated with the appearance of finite-time singularities, is well established. The analytical resolution of the instantonic problem, even in the simplified case of shell models, is a hard task. In [@DDG99], a numerical method to select self-similar instantons was introduced in the case of a shell model for turbulence. In the following, we apply a similar method to our case.\
We rewrite model (\[shellmodel\]) in a more concise form: $$\label{model}
\frac{d{\mbox{\boldmath $\theta$}}}{dt}= {\rm M}[{ {\bf b} }]{\mbox{\boldmath $\theta$}}.$$ The scalar and velocity-gradient vectors, ${\mbox{\boldmath $\theta$}}$ and ${ {\bf b} }$, are made from the variables $\theta_m$ and $k_m u_m$. As far as inertial scaling is concerned, we expect strong universality properties to apply with respect to the large-scale forcing. Indeed, the forcing changes only the probability with which a pulse appears at large scale, but not its inertial-range scaling behaviour, $P_{k_m}(h) \sim k_m^{-f(h)}$. So, as we are interested only in the evaluation of $f(h)$, we drop the forcing and dissipation in (\[model\]). The matrix ${\rm M}[{ {\bf b} }]$ is linear in ${ {\bf b} }$ and can be deduced straightforwardly from (\[shellmodel\]). The stochastic multiplicative equation (\[model\]) must be interpreted [*à la*]{} Stratonovich. Nevertheless, once the Ito prescription for time discretization is adopted, the dynamics becomes Markovian and a path-integral formulation can easily be implemented. This changes (\[model\]) into: $$\label{modelito}
\frac{d{\mbox{\boldmath $\theta$}}}{dt}= -B{\rm D}{\mbox{\boldmath $\theta$}}+
{\rm M}[{ {\bf b} }]{\mbox{\boldmath $\theta$}},$$ where ${\rm D}$ is a diagonal matrix (Ito-drift) ${\rm D}_{mm}=k_m^{2-\xi}$, and $B$ is a positive constant.\
As we said before, we are looking for coherent structures developing a scaling law $\theta_m \sim k_m^{-h}$ as they propagate towards small scales in the presence of a velocity realization of optimal Gaussian weight. The probability of going from one point to another in the configuration space (spanned by ${\mbox{\boldmath $\theta$}}$) between times $t_i$ and $t_f$ can be written quite generally as a path integral over the three fields ${ {\bf b} }$, ${\mbox{\boldmath $\theta$}}$, ${\mbox{\boldmath $p$}}$ of the exponential $e^{-S[{ {\bf b} },
{\mbox{\scriptsize \boldmath$\theta$}},{\mbox{\scriptsize \boldmath$p$}}]}=e^{-\int_{t_i}^{t_{\!f}}
{\cal L}[{ {\bf b} },{\mbox{\scriptsize \boldmath$\theta$}},{\mbox{\scriptsize \boldmath$p$}}]dt} $, where the Lagrangian ${\cal L}$ is given by the equation: $$\label{def_action}
{\cal L}({ {\bf b} },{\mbox{\boldmath $\theta$}},
{\mbox{\boldmath $p$}})=\frac{1}{2}{ {\bf b} }.{\rm D}^{-1}{ {\bf b} }+
{\mbox{\boldmath $p$}}.(\frac{d{\mbox{\boldmath $\theta$}}}{dt}+B{\rm D}{\mbox{\boldmath $\theta$}}-
{\rm M}[{ {\bf b} }]{\mbox{\boldmath $\theta$}}),$$ and ${\mbox{\boldmath $p$}}$ is an auxiliary field conjugated to ${\mbox{\boldmath $\theta$}}$ which enforces the equation of motion (\[modelito\]). The minimization of the effective action $S$ leads to the following coupled equations: $$\begin{aligned}
\frac{d{\mbox{\boldmath $\theta$}}}{dt}&=&-B{\rm D}{\mbox{\boldmath $\theta$}}+
{\rm M}[{ {\bf b} }]{\mbox{\boldmath $\theta$}},\label{eqteta}\\
\frac{d{\mbox{\boldmath $p$}}}{dt}&=&B{\rm D}{\mbox{\boldmath $p$}}-\,^{t}{\rm M}[{ {\bf b} }]
{\mbox{\boldmath $p$}},\label{eqzzeta}\end{aligned}$$ with the self-consistency condition for ${ {\bf b} }$: $$\label{eqC}
{ {\bf b} }={\rm D}\,^{t}{\rm N}[{\mbox{\boldmath $\theta$}}]{\mbox{\boldmath $p$}},$$ where the matrix ${\rm N}[{\mbox{\boldmath $\theta$}}]$ is defined implicitly through the relation ${\rm N}[{\mbox{\boldmath $\theta$}}]{ {\bf b} }={\rm M}[{ {\bf b} }]{\mbox{\boldmath $\theta$}}$.\
We are now able to predict the scaling dependence of the variables $b_m$. For a truly self-similar propagation, the cost in action per step along the cascade must be constant. The characteristic turn-over time required by a pulse localized on the $m$-th shell to move to the next one can be dimensionally estimated as $1/(u_m k_m) \equiv b_m^{-1}$. Recalling the scaling dependence of ${\rm D}$ and the definition of the action (\[def\_action\]), we expect $\Delta S =
\int_{t_{m}}^{t_{m+1}} {\cal L}\,dt \sim
k_m^{-(2-\xi)}b_m$. We thus deduce that $b_m \sim
k_m^{2-\xi}$.\
Let us now discuss how to explicitly find solutions of the above system of equations. Clearly, there is no hope of finding exact analytical solutions of these deterministic, nonlinear, coupled equations. Numerically, too, the problem is quite delicate, because (\[eqteta\]) and (\[eqzzeta\]) are dual of each other and have opposite dynamical stability properties, a feature that can hardly be captured by a direct time integration. To overcome this obstacle, a general alternative scheme based on an iterative procedure has been proposed in [@DDG99]. For a given configuration of the noise, each step consists in integrating the dynamics of the passive scalar (\[eqteta\]) forward in time, to let the solution of optimal growth emerge. Conversely, the dual dynamics of the auxiliary field (\[eqzzeta\]) is integrated backward in time, along the direction of minimal growth, in agreement with the prediction deduced from (\[eqC\]): $\|{\mbox{\boldmath $p$}}\|
\sim \|{\mbox{\boldmath $\theta$}}\|^{-1}$. Then the noise ${ {\bf b} }$ is recomputed from the self-consistency equation (\[eqC\]) and the process is repeated until convergence is reached.\
Self-similar passive solutions must be triggered by self-similar noise configurations: $$b_m(t)=\frac{1}{(t^{*}-t)}F(k_m^{2-\xi}(t^{*}-t)),
\label{pippo}$$ where $t^{*}$ is the critical time at which a self-similar solution reaches infinitesimally small scales in the absence of dissipation. To overcome the non-homogeneity of the time evolution seen by these accelerating pulses, we introduce a new time variable $\tau=-\log (t^{*}
-t)$. Then, the advecting self-similar velocity field (\[pippo\]) can be rewritten in the form ${ {\bf b(\tau)} }=e^{\tau}{ {\bf C(\tau)} }$, where $C_m(\tau )$ is still the velocity-gradient field, but expressed in a different time scale, such that $C_m(\tau )=F(m\,(2-\xi)\log\lambda -\tau)$.\
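The change of variables can be illustrated directly: building $C_m(\tau)=e^{-\tau}b_m$ from (\[pippo\]) with an arbitrary smooth profile (chosen here purely for illustration) yields a traveling wave, $C_{m+1}(\tau+T)=C_m(\tau)$ with $T=(2-\xi)\log\lambda$. A minimal sketch, assuming $k_0=1$:

```python
import numpy as np

xi, lam, t_star = 1.0, 2.0, 1.0
T = (2.0 - xi) * np.log(lam)           # traveling-wave period

def F(x):
    # Arbitrary smooth, positive-argument profile; illustrative choice only.
    return np.exp(-(np.log(x)) ** 2)

def b(m, t):
    """Self-similar noise pulse b_m(t) = F(k_m^{2-xi}(t*-t)) / (t*-t), k0 = 1."""
    k_m = lam ** m
    return F(k_m ** (2.0 - xi) * (t_star - t)) / (t_star - t)

def C(m, tau):
    """Same pulse in the logarithmic time tau = -log(t*-t): C_m = e^{-tau} b_m."""
    t = t_star - np.exp(-tau)
    return np.exp(-tau) * b(m, t)

tau = np.linspace(0.0, T, 7)
# Traveling wave: shifting one shell down equals waiting one period T
print(np.allclose(C(3, tau), C(4, tau + T)))   # True
```

The periodicity holds for any profile $F$, since only the combination $k_m^{2-\xi}(t^*-t)$ enters the pulse.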
The sought self-similar solutions appear in this representation as traveling waves, whose period $T=(2-\xi) \log\lambda$ is fixed by the scaling considerations reported above. In this way, we can limit the search for solutions to the time interval \[$0,T$\], and the action at the final time $t_f=m T$ is given by $S({t_f})=m S(T)$.\
Then comes the main point of our algorithm. For a fixed noise configuration ${ {\bf C} }$, the field ${\mbox{\boldmath $\theta$}}$ must be the eigenvector associated with the maximal (in absolute value) Lyapunov exponent $\sigma_{max}$ of the Floquet evolution operator: $$\label{Floquet}
U(T;0)={\cal T}_{-1}\,\exp\,\int_0^{T}
(-B{\rm D}e^{-\tau}+{\rm M}[{ {\bf C(\tau)} }])d\tau.$$ Here ${\cal T}_{-1}$ denotes the translation operator by one unit to the left along the lattice. Similarly, the auxiliary field must be the eigenvector associated with the Lyapunov exponent $-\sigma_{max}$ of the inverse dual operator $^{t}U^{-1}$.\
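The extraction of $\sigma_{max}$ can be illustrated on a toy time-periodic system. All matrices and rates below are illustrative stand-ins, not the actual ${\rm M}[{\bf C}]$, and the lattice shift ${\cal T}_{-1}$ is omitted: the monodromy matrix is approximated by a time-ordered product of short-time propagators, and $\sigma_{max}=\log|\mu_{max}|/T$ is read off from its leading eigenvalue $\mu_{max}$.

```python
import numpy as np

def monodromy(A, T, n_steps=4000):
    """Time-ordered product U(T;0) ~ prod_j (1 + A(tau_j) dtau) for the
    linear system d theta/d tau = A(tau) theta (ordered exponential)."""
    d = A(0.0).shape[0]
    U = np.eye(d)
    dtau = T / n_steps
    for j in range(n_steps):
        U = (np.eye(d) + dtau * A((j + 0.5) * dtau)) @ U
    return U

# Toy periodic generator: constant growth rates plus a zero-mean oscillation.
rates = np.array([0.3, -0.1, -0.7])
T = 2.0
def A(tau):
    return np.diag(rates + np.sin(2.0 * np.pi * tau / T))

mu = np.linalg.eigvals(monodromy(A, T))
sigma_max = np.log(np.max(np.abs(mu))) / T
print(round(sigma_max, 3))   # close to max(rates) = 0.3
```

In the actual algorithm the dominant eigenvector of $U(T;0)$ provides ${\mbox{\boldmath $\theta$}}$, while the eigenvector of $^{t}U^{-1}$ associated with $-\sigma_{max}$ provides the auxiliary field.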
Starting from an initial arbitrary traveling-wave shape for ${ {\bf C} }(\tau)$ with period $T$, we have computed the passive scalar and its conjugate field at any time between $0$ and $T$ by diagonalization of the operator $U$, recomputed the velocity-gradient field ${ {\bf C(\tau)} }$ from the self-consistency equation (\[eqC\]), and iterated this procedure until an asymptotically stable state, ${\mbox{\boldmath $\theta^{0}$}}$, ${\mbox{\boldmath $p^{0}$}}$, ${ {\bf C^{0}} }$, was reached. The scaling exponent $\theta_{m}\sim k_{m}^{-h}$ of the passive scalar can be deduced from $\theta^{0}_{m} (h)
\sim e^{m \sigma_{max}T}$, so that $h=(\xi-2)\sigma_{max}$. Note that $h$ is bound to be positive due to the conservation of energy. In our algorithm, the norm of the velocity-gradient field ${ {\bf C} }(0)$ acts as the unique control parameter, in one-to-one correspondence with $h$. The action $S^0(h)$ is, in multifractal language, nothing but the first estimate of the $f(h)$ curve based only on the contribution of the pulse-like solutions; more precisely, $f(h)=S^0(h)/\ln\lambda$.\
We now turn to the presentation and discussion of our main result. By varying the control parameter, we obtain a continuum of exponents in the range $ h_{min}(\xi)\leq h \leq h_{max}(\xi)$. The simple analysis of the $h$-spectrum allows predictions only for observables which do not depend on the $f(h)$ curve, i.e. which depend only on the scaling of $T_{2n}$ for $n \rightarrow \infty$ ($\zeta(n)\sim h_{min}n$ for $n$ large enough).\
Unfortunately, high-order exponents are the most difficult quantities to extract from numerical or experimental data. Nevertheless, thanks to the extreme simplicity of shell models, very accurate numerical simulations have been performed [@BW96] at different values of $\xi$, and in some cases a safe upper-bound prediction on the asymptotic behavior of the $\zeta(n)$ exponents could be extracted.
To compare our results with the numerical data existing in the literature, we have analyzed the shell-model version of passive advection proposed in [@BW96]. In Fig. 1, we show the $h_{min}$ curve obtained at various $\xi$ from the instantonic calculation, together with the $h_{min}^{num}$ values extracted from direct numerical simulations of the quoted model [@BW96] performed at two different values of $\xi$: the agreement is good. Our calculation predicts, within numerical errors, the existence of a critical $\xi_c \sim 1.75$ above which the minimal exponent reaches its lowest bound, $h_{min}=0$.\
This phenomenon goes under the name of saturation, and it is the signature of the presence of discontinuous-like solutions in physical space, $\delta_r \theta \sim r^0$. Theoretical [@BaLe98] and numerical [@FrMaVer98] results suggest the existence of such an effect in the Kraichnan model for any value of $\xi$. The saturation in the latter is due to typical real-space effects, and it is therefore not surprising that there is not a complete quantitative analogy with the shell-model case.\
Let us now present our second, preliminary, result, i.e. the role played by instantons for finite-order structure functions. If we just keep the zeroth-order approximation $f(h)=S_0(h)/\log{\lambda}$, we get the $\zeta_n$ curve shown in Fig. 2, which is quite far from the numerical results of [@BWmm] (the asymptotic linear behavior is in fact not even reached in the range of $n$ represented in the figure). In order to get a better assessment of the true statistical weight of the optimal solutions, we computed the next-to-leading-order term in a “semi-classical” expansion. The action was expanded to quadratic order in the fluctuations around ${\mbox{\boldmath $\theta^{0}$}}$, ${\mbox{\boldmath $p^{0}$}}$, ${ {\bf C^{0}} }$, and the summation over all perturbed trajectories leading to the same effective scaling exponent for the ${\mbox{\boldmath $\theta$}}$ field after $m$ cascade steps was performed. It turns out (see Fig. 2) that the contribution to the action of the quadratic fluctuations, $S_1(h)$, greatly improves the evaluation of $\zeta(n)$.
Naturally, in the absence of any small parameter in the problem, we cannot take for granted that the next correction(s) would not spoil this rather nice agreement with the numerical data. But the surprising fact that $S_0+S_1$ is strongly reduced with respect to $S_0$, even for the most intense events, does not by itself imply a lack of consistency of our computation. In any case, the prediction of the asymptotic slope of the $\zeta_n$ curve, based on the value $h_{min}$, is obviously valid beyond all orders of perturbation.\
Moreover, for values of $\xi>1$, we find that the second-order exponent extracted from our calculation is in good agreement with the exact result $\zeta_2=2-\xi$, suggesting that our approach is able to give relevant statistical information also on not-too-intense fluctuations.\
In conclusion, we have presented an application of the semi-classical approach in the framework of shell models for random advection of passive scalar fields. Instantons are calculated through a numerically assisted method solving the equations coming from the probability extrema: the algorithm has proved capable of picking up those configurations giving the main contributions to high-order moments. Of course, we are far from having a systematic approach, under full analytical control, to calculate anomalous exponents in this class of models. Nevertheless, the encouraging results presented here raise some relevant questions which go well beyond the realm of shell models. To quote just one, we still lack a full comprehension of the connection between the usual multiplicative-random-process and the instantonic approaches to multifractality: in particular, it is not clear what the prediction would be for multi-scale and multi-time correlations of the kind discussed in [@bbct] within the instantonic formulation.\
It is a pleasure to thank J-L. Gilson and P. Muratore-Ginanneschi for many useful discussions on the subject. LB has been partially supported by INFM (PRA-TURBO) and by the EU contract FMRX-CT98-0175.
[99]{} I. Daumont, T. Dombre and J.-L. Gilson, e-Print archive chao-dyn/9905017. R.H. Kraichnan, Phys. Rev. Lett. [**72**]{}, 1016 (1994). K. Gawedzki and A. Kupiainen, Phys. Rev. Lett. [**75**]{}, 3608 (1995). M. Chertkov, G. Falkovich, I. Kolokolov and V. Lebedev, Phys. Rev. E [**52**]{}, 4924 (1995). E. Balkovsky and V. Lebedev, Phys. Rev. E [**58**]{}, 5776 (1998). G. Falkovich, I. Kolokolov, V. Lebedev and A. Migdal, Phys. Rev. E [**54**]{}, 4896 (1996). U. Frisch, [*Turbulence: The Legacy of A. N. Kolmogorov*]{}, Cambridge University Press, Cambridge (1995). T. Bohr, M.H. Jensen, G. Paladin and A. Vulpiani, [*Dynamical Systems Approach to Turbulence*]{}, Cambridge University Press, Cambridge (1998). R. Benzi, L. Biferale and A. Wirth, Phys. Rev. Lett. [**78**]{}, 26 (1997). G. Parisi, [*A Mechanism for Intermittency in a Cascade Model for Turbulence*]{}, unpublished (1990). T. Dombre and J.-L. Gilson, Physica D [**111**]{}, 265 (1998). L. Biferale and A. Wirth, Phys. Rev. E [**54**]{}, 4892 (1996). U. Frisch, A. Mazzino and M. Vergassola, Phys. Chem. Earth, in press (1999). L. Biferale and A. Wirth, Lecture Notes 491, p. 65 (1997). L. Biferale, G. Boffetta, A. Celani and F. Toschi, Physica D [**127**]{}, 187 (1999).
"The drilling is alleged to have taken place through a man-made Mesolithic chalk platform at Blick Mead -- a tepid spring at the edge of the Salisbury Plain -- where archaeological excavations have been ongoing since 2005.
Archaeologists are concerned that further drilling, which is part of a $2 billion scheme to build a tunnel under the ancient site, will cause the water table to drop, potentially destroying key artifacts which have been preserved in the water-logged ground.
"This is a travesty," Professor David Jacques from the University of Buckingham said in a statement. "If the tunnel goes ahead the water table will drop and all the organic remains will be destroyed."
One auroch provided food for 300 people
Among these is the hoof print of an auroch, an extinct species of prehistoric cattle.
"Thousands of flint tools and bones of extinct animals eaten during prehistoric feasts were also uncovered at the site, according to researchers.
The site is believed to be the earliest known inhabited settlement following the last Ice Age, dating back 12,000 years.
It is also deemed to be highly important as it is where the hunter-gatherers who once roamed Britain first encountered the Neolithic farmers who went on to build Stonehenge, researchers said.
Key to understanding Stonehenge?
"It may be that there are footprints here which would be the earliest tangible signs of life at Stonehenge," Jacques concluded. "If the remains aren't preserved, we may never be able to understand why Stonehenge was built.
"Archaeologists say engineers dug a 10-foot borehole through the chalk platform, which is believed to be constructed from flint and animal bones, without consulting the researchers.
Kate Fielden, honorary secretary of Stonehenge Alliance, told CNN that she is also concerned about the development, and warned that it could "irreparably damage" one of the world's most famous heritage sites.
Highways England nevertheless stated that their installation of a piezometer -- a device used to measure pressure and depth of groundwater in boreholes -- had resulted in no archaeological damage.
"The spokesperson said that the work will have no significant effects on the Blick Mead area.
"The works have been undertaken in a highly professional manner, with an archaeologist on site and with due care being exercised at all times," they continued.
The digging is part of preparations for the construction of a 1.8-mile tunnel and link road past Stonehenge in a bid to hide a busy highway from the popular tourist site.
Reader Comments
It was used as a military training ground in the 1940s. Extensively excavated by so-called archaeologists in the 1960s, with stones buried and moved, there is no real context.
Now it is nothing more than what one could call a theme-park structure, where thousands of people come and wonder at the mythology of druids and days gone by; now a celebration of Bacchanalia at the time of the solstices, related to the Roman era.
And of course, the Romans civilized the ancient Brits. Parallels could be made with the US Empire today.
Sadly, in the UK no real recognition is given to neolithic structures and the many other stone circles dotted around the landscape of the United Kingdom, in Wales, Scotland and Ireland: structured societies and communities, all before civilization was brought to the barbarians by the Roman Empire.
Strangely, Julius Caesar didn't see it that way; he apparently came to the UK's shores to learn. But documented history tells another story; after all, the victors write the history.
C'mon, CNN, don't be so precious. The singular word for cattle is... cow.
Fertilization is a central component of farming that impacts both crop productivity and the quality of the environment. The goal is to ensure there are sufficient levels of nutrients based on crop requirements, peak harvest times, and the soil’s natural ability to provide nutrients. Since proper fertilization means achieving a dynamic balance, any imbalance can lead to economic losses and the contamination of groundwater and surface water. The risks are even greater when the soil’s health is affected, since degraded soil requires more fertilizer to compensate for its lower fertility.
IRDA helps farmers make the most of healthy soil by developing new crop management practices that incorporate green manures and rotations, and by optimizing the use of organic and mineral fertilizers. Our experts also perform tests to determine nitrogen, phosphorus, and potassium efficiency coefficients for various organic substances and weigh in on the benefits of different biostimulants. Furthermore, they work to develop indicators to determine a soil’s fertilization requirements from any given sample. IRDA is currently heading up a major project to revise 61 fertilization charts for Ministère de l’Agriculture, des Pêcheries et de l’Alimentation du Québec that have, for decades, served agronomists and farmers as benchmarks for preparing agri-environmental fertilization plans.
IRDA irrigation experts are engaged in a number of fertilization and fertigation projects aimed at maximizing nutrient use by plants. Since water supply plays a major role in how plants respond to fertilizer doses, our research team systematically evaluates a number of factors—such as water availability, temperature, and root distribution in the soil—when devising fertilization strategies.
Québec farmers who implement IRDA’s recommendations achieve stable or higher crop yields, reductions in production costs, and improvements in the quality of their most valuable asset, the soil.
IRDA helps farmers select the best fertilizer and spreading techniques for achieving optimal yields.
A second unpiloted test flight of Boeing’s Starliner crew capsule — ordered after an initial demonstration mission fell short of reaching the International Space Station — is now scheduled for launch from Cape Canaveral in August or September, leaving little margin to conduct the spaceship’s first flight with astronauts before the end of the year.
Boeing and NASA officials confirmed the new schedule in recent statements, following a delay earlier in the year from the test flight’s previous target launch date of April 2. Managers blamed that schedule slip on delays in performing software testing to prepare for the upcoming test flight, including difficulties stemming from a winter storm in February that impacted Boeing’s software lab in Houston.
The CST-100 Starliner spacecraft is one of two commercial crew ships developed by U.S. industry under contract to NASA. SpaceX is NASA’s other commercial crew contractor, and that company’s Crew Dragon spacecraft began flying astronauts to the station last year.
Boeing’s Starliner, meanwhile, is still months away from its initially unplanned second unpiloted test flight, and a crew test flight is expected at least several months after that.
Officials said external considerations drove the schedule to launch Boeing’s second Starliner Orbital Flight Test, or OFT-2 mission, in the August/September timeframe.
The Starliner spacecraft uses the same space station docking ports as SpaceX’s Dragon crew and cargo ships. One of those ports is currently taken by a Crew Dragon capsule, and both ports will be occupied for a few days later this month with the handover of one Crew Dragon mission to the next.
SpaceX’s next Dragon cargo mission is scheduled to launch June 3 and will spend about a month-and-a-half docked with the space station to deliver fresh supplies, experiments, and a new pair of solar arrays. That precludes a Starliner docking before the second half of July.
The operational crew and cargo missions get priority over test flights in the space station’s schedule.
NASA and Boeing officials also have to find a window in United Launch Alliance’s Atlas 5 launch schedule at Cape Canaveral Space Force Station. Unlike SpaceX, which launches Crew Dragon missions on its own Falcon 9 rockets, Boeing contracted with ULA to boost Starliner crew capsules into orbit.
ULA is a 50-50 joint venture between Boeing and Lockheed Martin, but it operates as an independent company and has other customers. The U.S. Space Force currently has payloads scheduled to launch on three Atlas 5 missions in May, June, and August, carrying a new billion-dollar military missile warning satellite, a menagerie of tech demo experiments, and two space surveillance payloads.
Boeing previously had an early September launch slot booked with ULA for the Starliner’s Crew Flight Test — the capsule’s first demonstration mission with astronauts — when the OFT-2 mission was set for launch earlier this year. That launch slot is now available for the OFT-2 mission, and officials aren’t ruling out moving up the OFT-2 launch to August if one of the Space Force delays one of its missions.
The Atlas 5 launch pad will be tied up in late September through much of October with preparations to launch NASA’s robotic Lucy spacecraft on a marathon trip through the solar system to study asteroids. Lucy has a 23-day planetary launch window opening Oct. 16, and NASA will give the asteroid probe priority over the agency’s other missions.
Steve Stich, NASA’s commercial crew program manager, said last week the Starliner spacecraft assigned to the OFT-2 mission is in “good shape” as it undergoes preparations in a facility at NASA’s Kennedy Space Center in Florida.
“It’s almost ready for launch,” Stich said.
In a statement, Boeing said it will be “mission ready” in May in case an opening arises in the Atlas 5 launch schedule.
“The Starliner team has completed all work on the OFT-2 vehicle except for activity to be conducted closer to launch, such as loading cargo and fueling the spacecraft,” Boeing said. “The team also has submitted all verification and validation paperwork to NASA and is completing all Independent Review Team recommended actions including those that were not mandatory ahead of OFT-2.”
Boeing is taking more time to complete software testing on the Starliner spacecraft while officials wait for an opening in the space station schedule and ULA’s launch manifest, according to Stich. Boeing said in a statement it expects to complete software simulations, including end-to-end confidence and integration testing, before the end of April and will provide the results to NASA reviewers.
Investigators blamed a software error for the OFT-1 mission’s failure to dock with the space station in 2019. A mission timer was wrongly programmed, causing the spacecraft to think it was in a different mission phase when it separated from its Atlas 5 rocket after an otherwise-successful liftoff from Cape Canaveral.
The error caused the Starliner capsule to burn more propellant than expected, consuming the fuel it needed to maneuver toward the space station. Mission managers elected to end the mission early, and the spacecraft landed in New Mexico.
Assuming the OFT-2 mission gets off the pad in late summer, Stich said the Starliner’s Crew Flight Test could take off “toward the end of the calendar year.”
The Crew Flight Test will carry NASA astronauts Butch Wilmore, Mike Fincke, and Nicole Mann to the space station. They will fly on the same reusable Starliner spacecraft that launched and landed in December 2019 on Boeing’s first Orbital Flight Test, while the OFT-2 mission will fly on an unused vehicle.
Boeing said its teams are preparing for the “shortest turnaround time possible” between the OFT-2 mission and the Crew Flight Test. Wilmore, Fincke, and Mann recently suited up and climbed aboard the spacecraft set to fly the OFT-2 mission for life support and communications systems checkouts.
Once Boeing accomplishes the two remaining Starliner test flights, NASA will certify the capsule for regular crew rotation missions to the space station, just as the agency did for SpaceX’s Crew Dragon last year.
NASA has nearly $7 billion in contracts with Boeing and SpaceX covering the development of the two commercial crew spaceships, and six operational crew rotation flights by each company.
With Boeing’s delays, SpaceX is likely to have launched four Crew Dragon missions with NASA astronauts — a test flight and three operational launches — before the Starliner flies with people for the first time.
Steve Jurczyk, NASA’s acting administrator, said the agency originally planned to alternate commercial crew missions between Boeing and SpaceX.
“The plan right now is to alternate — SpaceX, Boeing, SpaceX, Boeing — however, the first Boeing crew flight is delayed, and we’re going to most likely … have four crew flights with SpaceX before the crew test flight with Boeing,” Jurczyk said Tuesday. “So we may have to relook at that, but we haven’t gotten around to talking about that yet.”
NASA will also soon start considering how and when to procure more commercial crew missions to meet the space station’s requirements beyond 2024, he said. But those talks are still to come.
“We really haven’t talked in detail about how we’re going to move forward beyond the current contracts and commitments,” Jurczyk said in an interview with Spaceflight Now.
Generally speaking, doctors should not prescribe medications to themselves except in rare emergency situations, according to the ethical guidelines of the American Medical Association as cited by the New Hampshire Board of Medicine. Individuals who prescribe medications to themselves, whether through legal channels because they are licensed to write prescriptions or through illicit channels because they work around and have access to narcotics, may be placing themselves at risk for further problems for a variety of reasons.
- When a doctor prescribes drugs to a patient, he or she will collect valuable and necessary information about medical history, including the medical history of the patient’s family members. This can help them when forming a proper diagnosis.
- Doctors perform a thorough physical or psychiatric examination prior to writing prescriptions for certain drugs, because the drugs may mask symptoms that need to be addressed.
- Patients who receive a prescription medication are provided with the warnings and hazards associated with the medication from the doctor as well as the pharmacy staff when the prescription is filled through legal channels.
- The instructions for use of the medication are often specific to the individual, including dosages, frequency of use and symptoms to watch for.
When a medical professional bypasses this process, they may use the drugs intended to relieve their symptoms in a manner that leads to tolerance or addiction, and they may rob themselves of the treatment they need to combat physical or mental illnesses. Working in the medical profession is a taxing career, physically and mentally. Patients sometimes die, which can drastically affect their caregivers’ well-being. The hours are long and arduous. All these factors can create an environment that can lead to addiction.
Inpatient and Outpatient Treatment Services for Addicted Medical Professionals
Some medical professionals may respond well to outpatient treatment, which means they will continue to work and live with their families during the counseling and treatment process. However, there are some differences between outpatient services and inpatient services that can apply directly to a physical or other health care provider.
When undergoing outpatient care, a doctor who is addicted to drugs or alcohol is going to face the same temptations to abuse drugs due to the chronic, relapsing nature of addiction. With so many substances available at the tip of one’s fingers, it may prove a greater burden when the professional is trying to avoid using drugs. By choosing a residential, inpatient treatment program, the recovering addict is separated completely from their normal routine and their access to drugs is significantly limited.
The process of detoxification is another reason why inpatient treatment may be a better option for medical professionals. Depending upon the types of drugs to which one has become addicted, the withdrawal period during initial detoxification may affect one’s ability to work. For instance, withdrawal from high levels of alcohol can result in trembling and shaking, which may cause a medical professional to become understandably self-conscious. This might lead to the abuse of alcohol, not for the “high” or intoxication, but simply to reduce or eliminate the obvious signs of withdrawal. In a medically assisted detox program, the recovering addict can work through these issues privately and with assistance from knowledgeable staff and other medical professionals.
Another benefit of choosing a treatment center away from home is the privacy aspect. There is a stigma attached to those who abuse drugs, particularly when addiction has developed to the point that the individual is making poor choices in many areas of their life. If the medical professional who needs treatment is concerned about how their colleagues or patients will view their need for recovery, residential treatment can provide the privacy they need to fully concentrate on their health and progress.
Many individuals who suffer from drug addiction also suffer from another mental illness, according to research conducted by the National Institute on Drug Abuse. In order for treatment to be fully effective, the individual needs to receive treatment for both conditions simultaneously. When considering this relationship and the unique variables that each person brings to the treatment process, it is important to receive comprehensive care as quickly as possible in order for the health professional to return to work in a recovered state. Outpatient treatment takes place only a few hours each day. These are difficult hours to set aside when one is working in the health care industry under the best of circumstances, certainly. The treatment process can take many more weeks on an outpatient basis than at a residential facility, where one’s full attention is focused on recovery.
Alternative and Complementary Therapy Can Create a Foundation for Future Health
Once the initial treatment period has ended and the recovering medical professional returns home, he or she will go back to work. The same stress and overwhelming schedule will be waiting for them, leading one to believe that the recovery process may be in jeopardy. Truthfully, addiction has the same relapse rates as other chronic diseases, according to the experts. Therefore, it is important that the recovering addict has an arsenal of tools and life skills to address the risks of relapse. During the course of residential treatment, he or she may participate in alternative and complementary therapies that are designed to last a lifetime. Yoga and meditation have been shown to:
- Improve overall quality of life
- Reduce stress
- Lower blood pressure
- Lower heart rate
- Reduce anxiety levels
- Reduce levels of depression
- Decrease episodes of insomnia
- Increase stamina and strength
As medical professionals return to the workplace, we here at Axis are here to help. We provide our clients with the tools needed to live their lives free from the effects of addiction. If you are a medical professional who is struggling with addiction, there is help available. To learn more about how we can help, contact us today. | https://axisresidentialtreatment.com/drug-rehab/medical-professionals/ |
On-site traffic management
You must manage the risk of collision and injuries when vehicles and powered mobile machinery and equipment operate in the same area as pedestrians.
What are the risks of traffic in the workplace?
Between 2014 and 2016, 10 workers died in Queensland after being hit or trapped by mobile plant. In the same period, there were 1,200 accepted workers’ compensation claims for serious injuries. Effectively managing worksite traffic can help prevent these kinds of incidents and injuries.
Harm can result from:
- being trapped between a vehicle and a structure
- vehicles colliding with each other or a structure
- being hit by a vehicle
- items that fall off vehicles (unsecured or unstable loads).
How do I manage the risks?
Workers and management can work together to reduce the risks from vehicles and mobile machinery and equipment. You can watch our video on managing traffic on site and read more information below.
For workers
Workers have a duty to take reasonable care of their own health and safety and to not negatively affect the health and safety of others.
As a worker, you must:
- follow any reasonable instruction
- cooperate with any reasonable policy or procedure relating to health and safety at your place of work.
The Traffic management for construction or maintenance work code of practice 2008 (PDF, 0.8 MB) has information about the hazards, risks, and responsibilities associated with traffic management for construction or maintenance work. It also has information about traffic control measures.
For businesses
If you’re an employer or a person conducting a business or undertaking (PCBU), it’s your duty to use a risk management approach to manage traffic, as outlined in the Work Health and Safety Regulation 2011.
You may also have responsibilities and obligations under the Traffic management for construction or maintenance work code of practice 2008.
Following a four-step risk management process will help your business meet its responsibilities under work health and safety (WHS) laws.
You can also refer to our:
- traffic management self-assessment tool (PDF, 0.36 MB) for guidance on how to manage traffic risks
- principles for creating safe work.
Four steps to risk management
You can identify potential hazards by:
- observing the workplace to identify areas where pedestrians and vehicles interact. Think about:
- the floorplan
- if work is done close to public areas
- when traffic volumes are higher
- where potential blind spots are
- areas of poor visibility.
- asking your workers, pedestrians and visiting drivers about traffic-management problems they’re aware of
- reviewing your incident and injury records including near misses.
Safe Work Australia has a useful checklist for identifying traffic hazards.
When you’ve identified risks, consider:
- how likely it is that they’ll cause harm
- how serious the harm could be.
This will help you determine what you must do to control the risk and how urgently you have to do it.
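The "how likely × how serious" assessment above is often formalised as a simple risk matrix. The sketch below shows one illustrative scoring scheme — the numeric scales, band thresholds and action labels are assumptions for demonstration, not figures from the WHS Regulation:

```python
# Illustrative risk matrix: rating = likelihood score x consequence score.
# Scales and action bands are assumptions for demonstration only.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
CONSEQUENCE = {"minor": 1, "moderate": 2, "major": 3, "severe": 4, "fatal": 5}

def risk_rating(likelihood: str, consequence: str) -> int:
    """Multiply the two scores to get an overall rating (1-25)."""
    return LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]

def action_band(rating: int) -> str:
    """Map a rating to how urgently the risk must be controlled."""
    if rating >= 15:
        return "act immediately"
    if rating >= 8:
        return "act urgently"
    if rating >= 4:
        return "plan controls"
    return "monitor"

# A reversing truck near pedestrians: likely, with potentially fatal consequences.
rating = risk_rating("likely", "fatal")
print(rating, action_band(rating))  # 20 act immediately
```

A higher rating pushes the hazard up the priority list, matching the guidance that likelihood and seriousness together determine both what you must do and how urgently.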
Most vehicle incidents at the workplace are from collisions between pedestrians and vehicles that are reversing, loading, or unloading. It’s important to control this risk by keeping people, including customers and visitors, away from vehicles as much as possible.
The traffic management self-assessment tool (PDF, 0.36 MB) will help you identify and assess risks and develop a traffic management plan to control them.
Work health and safety laws require a business or undertaking to do all that is reasonably practicable to eliminate or minimise risks. Ways to control risks are ranked from the highest level of protection and reliability, to the lowest. This ranking is known as the hierarchy of risk control. You must work through this hierarchy to manage risks.
Completely remove hazards, if possible
If possible, completely remove hazards from the workplace. For example, physically separate pedestrian routes from vehicle areas. You could do this by:
- using physical barriers or overhead walkways, or
- only using machinery and vehicles when no pedestrians are around.
Minimise risks
If it’s not reasonably practicable to completely eliminate the risk, consider one or more of the following options, in the order they appear below, to minimise risks:
- substitute the hazard for something safer, for example, replace forklifts with other load-shifting equipment like a walker stacker or pallet jacks
- isolate the hazard from people, for example, create a delivery area away from other pedestrians or work activities
- use engineering controls, for example, speed limiters on forklifts, presence-sensing devices, or interlocked gates.
If the above control measures do not remove the risk, consider the following controls, in the order below, to minimise the remaining risk:
- use administrative controls, for example, warning signs, or schedule delivery times to avoid or reduce the need for pedestrians and vehicles to interact
- use personal protective equipment (PPE), for example, high-visibility clothing.
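The steps above — eliminate first, then work down through substitution, isolation, engineering, administrative controls and PPE — can be sketched as an ordered walk through the hierarchy. The function and example scenario below are illustrative assumptions, not part of the code of practice:

```python
# The hierarchy of risk control, ordered from highest to lowest protection,
# with example measures taken from the guidance text.
HIERARCHY = [
    ("eliminate", "physically separate pedestrian routes from vehicle areas"),
    ("substitute", "replace forklifts with walker stackers or pallet jacks"),
    ("isolate", "create a delivery area away from pedestrians"),
    ("engineering", "speed limiters, presence-sensing devices, interlocked gates"),
    ("administrative", "warning signs, scheduled delivery times"),
    ("ppe", "high-visibility clothing"),
]

def controls_to_apply(practicable: set[str]) -> list[str]:
    """Walk the hierarchy in order, keeping every reasonably practicable
    control; stop early only if the hazard can be eliminated entirely."""
    chosen = []
    for level, _example in HIERARCHY:
        if level in practicable:
            chosen.append(level)
            if level == "eliminate":
                break  # hazard removed entirely; nothing further needed
    return chosen

# Elimination is not practicable here, so lower-order controls combine.
print(controls_to_apply({"isolate", "engineering", "ppe"}))
# ['isolate', 'engineering', 'ppe']
```

Note that when elimination is not reasonably practicable, the remaining controls are combined rather than treated as alternatives, which mirrors the guidance above.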
Refer to Safe Work Australia’s traffic management guide for detailed information on how to control traffic risks.
Regularly review your control measures to make sure they’re working as planned and are effective. Take account of any changes and of the nature and duration of work.
Further information on the risk management process is in the How to manage work health and safety risks code of practice 2021 (PDF, 0.65 MB).
Traffic management plans
If your workplace is large and has a high volume of traffic, a traffic management plan can help you communicate how you’re managing traffic risks in your workplace. It may include things such as:
- the flow of pedestrian and vehicle traffic
- the responsibilities of people managing traffic
- procedures for controlling traffic in an emergency.
See Safe Work Australia’s traffic management guide for more information about traffic management plans.
Standards and compliance
The Work Health and Safety Act 2011 (the WHS Act) provides a framework to protect the health, safety and welfare of all workers at your place of work. It also protects the health and safety of all other people who might be affected by the work.
Apply remote sensing principles and methods to analyze data and solve problems in areas such as natural resource management, urban planning, or homeland security. May develop new sensor systems, analytical techniques, or new applications for existing systems.
Other Job Titles Remote Sensing Scientists and Technologists May Have
Data Analytics Chief Scientist, Geospatial Intelligence Analyst, Remote Sensing Analyst, Remote Sensing Scientist, Research Scientist, Scientist, Sensor Specialist
Tasks & Responsibilities May Include
- Manage or analyze data obtained from remote sensing systems to obtain meaningful results.
- Analyze data acquired from aircraft, satellites, or ground-based platforms, using statistical analysis software, image analysis software, or Geographic Information Systems (GIS).
- Process aerial or satellite imagery to create products such as land cover maps.
- Design or implement strategies for collection, analysis, or display of geographic data.
- Integrate other geospatial data sources into projects.
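Producing land cover maps from satellite imagery, as in the tasks above, often starts from simple per-pixel band arithmetic. The sketch below computes the widely used NDVI vegetation index from near-infrared and red reflectances; the sample pixel values and classification thresholds are assumptions for illustration only:

```python
# NDVI = (NIR - Red) / (NIR + Red); values near +1 indicate dense vegetation.
def ndvi(nir: float, red: float) -> float:
    denom = nir + red
    return 0.0 if denom == 0 else (nir - red) / denom

def classify(value: float) -> str:
    """Very coarse land-cover label from NDVI (thresholds are illustrative)."""
    if value > 0.4:
        return "vegetation"
    if value > 0.1:
        return "sparse cover"
    return "bare/water"

# (NIR, Red) reflectance pairs for three hypothetical pixels.
pixels = [(0.6, 0.1), (0.35, 0.25), (0.1, 0.2)]
for nir, red in pixels:
    v = ndvi(nir, red)
    print(f"{v:.2f} {classify(v)}")
```

In practice this kind of band math would run over whole rasters (e.g. with array libraries and GIS tooling), but the per-pixel logic is the same.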
This page includes information from the O*NET 25.0 Database by the U.S. Department of Labor, Employment and Training Administration (USDOL/ETA). Used under the CC BY 4.0 license. O*NET® is a trademark of USDOL/ETA.
in vulnerable communities by building capacities and improving collaboration and service delivery in southern Belize
From 2018 to 2020, HPPB - in partnership with the National Emergency Management Organization (NEMO) - will upgrade the Hurricane Plan to a Multi-Hazard Operation Plan and provide training and supplies to strengthen response to natural disasters in southern Belize.
The project will also improve training curriculums for early warning systems; strengthen Village Emergency Committees; and conduct a nation-wide awareness campaign.
The project is implemented in 30 villages:
Aguacate, Barranco, Bladen, Blue Creek, Corazon Creek, Crique Sarco, Dangriga Town, Galespoint, Graham Creek, Hope Creek, Hopkins, Jordan, Mabilha, Machakil Ha, Monkey River, Mullins River, Placencia, Punta Gorda, Punta Negra, Red Bank, San Benito Poite, San Jose, San Juan/ Cowpen, San Lucas, Santa Ana, Santa Theresa, Sarawee, Seine Bight, Sittee River and Trio.
Provide supplies and materials necessary for disaster response to selected communities.
Strengthen community-based governance and response systems related to natural disasters.
Strengthen accountability of Government disaster response.
During project implementation, HPPB and NEMO will collaborate with: | https://www.humana-belize.org/projects/18-20-us-response/ |
Calcium bentonite clay is an absorbent kind of clay that typically forms after volcanic ash ages. It's named after Fort Benton, Wyoming, where the largest source of the clay can be found, but ...
Kaolin, also called china clay, is a soft white clay that is an essential ingredient in the manufacture of china and porcelain and is widely used in the making of paper, rubber, paint, and many other products. Kaolin is named after the hill in China (Kao-ling) from which it was mined for centuries.
The properties of bentonite depend largely on its ion-exchange characteristics. ... The mineral montmorillonite and minerals of the smectite group are usually highly expansive and can take on large amounts of water. Unfortunately, the term bentonite is commonly applied to any light-colored, medium to high plasticity clay.
Efficient utilization of low-rank coal has long been a vexing problem, especially when firing high alkali-containing coal in a typical pulverized fuel boiler, where severe slagging and fouling originating from the alkali metal vapors may occur. Additive injection technology has proved to be a promising method for combating these problems.
Kaolin, bentonite and other special clays processing with Verdés machines: At Verdés, we have a long experience in the supply and installation of machines for crushing and processing special clays such as bentonite, sepiolite, attapulgite (palygorskite) and kaolin.
Fig. 2, Fig. 3 show the infrared spectra of the kaolin and sodium bentonite clay samples respectively. It has been observed that different functional groups absorb characteristic frequencies of infrared radiation. Table 2 shows the FT-IR absorption characteristics of the kaolin sample. FT-IR spectroscopy was used to investigate the chemical ...
Kaolinite (/ˈkeɪəlɪnaɪt/) is a clay mineral, part of the group of industrial minerals, with the chemical composition Al2Si2O5(OH)4. It is a layered silicate mineral, with one tetrahedral sheet of silica (SiO4) linked through oxygen atoms to one octahedral sheet of alumina (AlO6) octahedra. Rocks that are rich in kaolinite are known as kaolin (/ˈkeɪəlɪn/) or china clay.
Clay minerals are one of the potential good adsorbent alternatives to activated carbon because of their large surface area and high cation exchange capacity. In this work the adsorptive properties of natural bentonite and kaolin clay minerals in the removal of zinc (Zn2+) from aqueous solution have been studied by laboratory batch adsorption kinetic and equilibrium experiments.
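Batch adsorption experiments like those described above are conventionally summarised with a mass balance: uptake qe = (C0 − Ce)·V/m and removal % = (C0 − Ce)/C0 × 100. A minimal sketch of those standard formulas, with invented sample numbers:

```python
# Standard batch-adsorption mass balance:
#   qe (mg/g) = (C0 - Ce) * V / m      metal uptake per gram of clay
#   removal % = (C0 - Ce) / C0 * 100   fraction of Zn2+ removed from solution
def uptake(c0: float, ce: float, volume_l: float, mass_g: float) -> float:
    return (c0 - ce) * volume_l / mass_g

def removal_percent(c0: float, ce: float) -> float:
    return (c0 - ce) / c0 * 100.0

# Hypothetical run: 50 mg/L Zn2+ drops to 8 mg/L with 0.5 g clay in 0.1 L.
print(round(uptake(50.0, 8.0, 0.1, 0.5), 2))   # 8.4 (mg/g)
print(round(removal_percent(50.0, 8.0), 1))    # 84.0 (%)
```

Here C0 and Ce are the initial and equilibrium metal concentrations; comparing qe across clays is how the bentonite and kaolin adsorbents would be ranked.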
Kaolin, bentonite, and zeolites as feed supplements for animals: health advantages and risks — the characteristics of kaolin and its use in animal or human nutrition. Kaolin availability: kaolin deposits in the Czech Republic. Kaolin is formed under acidic conditions through
Kaolin is an excellent ceramic raw material; stoneware bodies and glazes are among its users. Besides, most whiteware and sanitary ware use kaolin as well, because kaolin fires white, has good plasticity, and has good shrinkage and strength properties.
construction clays, are the kaolin and bentonite industries, although these large tonnages belie the variety of product specifications and special consumer-designed products that are available as a result of research and development in close liaison with customer needs. It is the physical characteristics of clays, more so than the
Environmental characteristics include the nature and distribution of inorganic contaminants, such as metals and metalloids like arsenic, iron, and lead, in clay-bearing rocks. These environmental factors have the potential to affect the use of clays in natural and industrial applications.
Sedimentation characteristics of kaolin and bentonite in concentrated solutions (Charakteristika sedimentácie kaolínu a bentonitu v koncentrovaných roztokoch): The sedimentation characteristics of two clays, namely kaolinite and bentonite, were determined at high clay (5% wt/vol) and electrolyte (1 N) concentrations using various inorganic-organic compounds.
Six types of clays are mined in the United States: ball clay, bentonite, common clay, fire clay, fuller's earth, and kaolin. Mineral composition, plasticity, color, absorption qualities, firing characteristics, and clarification properties are a few of the characteristics used to distinguish between the different clay types. Major domestic markets for these clays are as
Clay colors are caused by impurities. For instance, red clay generally contains a high level of iron relative to other clays, and black clay contains manganese. Clay workability refers to how plastic the clay is. Higher levels of kaolin make clays less plastic, but also less porous. Porcelain clays have the highest firing temperature.
Clay mineral - Clay mineral - Chemical and physical properties: Depending on deficiency in the positive or negative charge balance (locally or overall) of mineral structures, clay minerals are able to adsorb certain cations and anions and retain them around the outside of the structural unit in an exchangeable state, generally without affecting the basic silicate structure. These …
Clay is a kind of generic term for the fine, sticky earth substance that forms from the weathering of more complex rock-forming minerals. Here is a description of generic clay: Six types of clays are mined in the United States: ball clay, bento...
Bentonite is a soft, fine-grain, inhomogeneous rock that particularly consists of a clay mineral called montmorillonite. Montmorillonite provides the bentonite with its characteristics – great sorption ability, internal swelling capacity when in contact with water, bonding ability and high plasticity. Bentonites are used for many purposes.
The objective of this experiment was to examine the influence of kaolinite clay supplementation (0%, 1%, or 2% diet dry matter [DM] basis) on characteristics of digestion, feedlot growth performance and carcass characteristics of calf-fed Holstein steers fed a steam-flaked corn-based diet.
Free Online Library: "Effects of Dietary Inclusion of Sodium Bentonite on Biochemical Characteristics of Blood Serum in Broiler Chickens" (report), International Journal of Agriculture and Biology.
The cream containing the Garfield bentonite allowed for a passage of only approximately 9% to 28% of the incident rays (at 280 nm and 400 nm, respectively), much less than the creams containing the other bentonites. The Chambers bentonite displayed a distinct UV-protection behavior, especially in the UV-B range.
Kaolin clay is a versatile clay with a number of uses and benefits, and it has unique chemical and physical properties that contribute to its use. Read on to learn more about this clay powder: how it is good for your skin and hair, whether it can be eaten, its uses in soap, deodorants, and Surround WP spray, where to buy it, and its side effects.
“Pseudoneglect” refers to a spatial processing asymmetry consisting of a slight but systematic bias toward the left shown by healthy participants across tasks. It has been attributed to spatial information being processed more accurately in the left than in the right visual field. Importantly, evidence indicates that this basic spatial phenomenon is modulated by emotional processing, although the presence and direction of the effect are unclear.
Over the last two decades, much research has focused on the influence of emotion on spatial biases in both patients and neurologically intact individuals, based on the strong influence that emotion has on attention in everyday life, on the tight interconnection between the neural mechanisms that mediate these two phenomena, and on the brain lateralization of emotion processing. In this context, spatial attention tasks such as line bisection have been used in an attempt to disentangle the issue of emotion and attention lateralization. The rationale is that if attention is right-lateralized and emotion is also right-lateralized (the "right-hemisphere hypothesis"), then both functions concur in shifting the activation balance in favor of the right hemisphere, enhancing pseudoneglect in the left hemifield. An alternative account, the "valence-specific hypothesis," sees positive emotion lateralized to the left and negative emotion to the right, and predicts that negative emotion should increase the relative activation of the right hemisphere and enhance pseudoneglect, whereas positive emotion should increase the relative activation of the left hemisphere and attenuate pseudoneglect.
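Performance on the line bisection task mentioned above is conventionally scored as the signed deviation of the marked point from the true midpoint, expressed as a percentage of the half-length; negative scores indicate the leftward error of pseudoneglect. A minimal sketch of that conventional score, with made-up trial data:

```python
# Signed bisection bias: (marked - true midpoint) / (half line length) * 100.
# Negative values = leftward bias (pseudoneglect); positive = rightward bias.
def bisection_bias(marked_mm: float, line_length_mm: float) -> float:
    true_mid = line_length_mm / 2.0
    return (marked_mm - true_mid) / true_mid * 100.0

# Hypothetical trials: marked positions (mm from the left end) on a 200 mm line.
trials = [97.0, 98.5, 96.0]
biases = [bisection_bias(m, 200.0) for m in trials]
mean_bias = sum(biases) / len(biases)
print(round(mean_bias, 2))  # -2.83 -> slight leftward bias, i.e. pseudoneglect
```

Averaging this score per participant and condition is how studies quantify whether an emotional manipulation shifts the bias leftward or rightward relative to baseline.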
The association between emotion and the right hemisphere goes back to the very early neurology literature, when Mills observed that patients with a lesion in the right side of the brain had an impairment in emotional expression. For the right-hemisphere hypothesis, the perception of emotional stimuli is related to the activity of the right hemisphere, regardless of affective valence. Conversely, the valence-specific hypothesis is based on evidence that lesions in the left frontal lobe were related to negative emotional states, while lesions in the right hemisphere were more associated with positive or manic emotional states. For the valence-specific hypothesis, the left hemisphere processes positive emotions, whereas the right hemisphere processes negative emotions. An alternative, the "approach–withdrawal" hypothesis, proposes that the brain asymmetries observed for positive and negative emotions are related to the underlying motivational systems linked to them. Accordingly, the left prefrontal cortex is involved in processing approach-related emotions, such as happiness and anger, whereas the right prefrontal cortex processes withdrawal-related emotions, such as sadness and fear. Despite a large body of research, evidence on the interaction between emotion and spatial attention is still not well understood. A systematic review of the relation between pseudoneglect and emotion, conducted according to the PRISMA guidelines (see Figure 1), yielded 15 studies published by February 2021 that measured the relationship between emotional processing and spatial attention pseudoneglect.
Inclusion criteria were: (1) original, peer-reviewed articles; (2) written in English; (3) conducted on adults; (4) included at least one task to measure pseudoneglect (line bisection task, landmark task, greyscales task, grating scales task, tactile rod bisection task, lateralized visual detection, cancellation task); and (5) included at least one task with emotional stimuli or employed a measure of emotional state/trait as they relate to pseudoneglect. Articles from all publication years were accepted (see Table 1).
Of the 15 studies meeting the inclusion criteria, 11 studies used visual stimuli, such as faces, words, and pictures with emotional connotations. The main finding is that the majority of the studies found that pseudoneglect was modulated by emotional stimuli or by participants’ self-reported emotional state or trait. However, the direction of these effects is less clear-cut. Of the studies with emotional faces or words, three reported that emotion induces a rightward bias (or attenuates the leftward bias): one study used emotional words , one used angry and happy faces , and one used happy and sad faces . Four studies reported that emotion induces a leftward bias (or attenuates the rightward bias): one study used happy and sad faces and three studies used negative words . One study with faces and words reported mixed results . The two studies using auditory stimuli report a rightward bias when listening to sad and happy music. Moreover, studies on the effects of self-reported affect and traits on pseudoneglect show that positive affect and positive attitude are correlated with a rightward bias. Finally, greater self-reported claustrophobic fear is related to a rightward bias when the line bisection is performed at a short distance .
The entry concludes that there are substantial methodological differences across studies that could account for the heterogeneity in the observed findings. Firstly, the time between presenting the emotional stimuli and the spatial attention task varies, with some studies employing simultaneous and others sequential presentation. This difference means that low-level effects of simultaneous versus sequential stimulus presentation (such as surround suppression) cannot be ruled out as contributors to the attention bias. Secondly, some studies present the line flanked by two emotional stimuli and others flanked by just one stimulus on the left or right side of the line. However, contextual stimuli may influence the localization of the subjective midpoint, biasing the bisection away from the location of the flanker. Indeed, using one flanker seems to increase the attentional load for extracting the segment from the background and to reduce the salience of the flanked-line segment. Thirdly, there are individual differences in the attention bias at baseline, and this variability does not seem to predict the direction of changes driven by the emotional modulation of the bisection bias. Finally, an additional neural factor may contribute to the complex picture that emerges from the literature. This is related to which hemisphere is preferentially involved in processing the specific category (e.g., faces, words, sounds, etc.) of the stimuli used and their relative position in the visual field (i.e., central vs. peripheral presentation). For instance, visual stimuli such as faces and words likely activate networks of non-parietal visual category-selective regions that include the right fusiform face area and the left visual word form area.
Future studies should consider comparing brain activation asymmetries during the baseline and during the task while taking into account the brain hemisphere that is preferentially involved in processing the category of stimuli used. | https://encyclopedia.pub/entry/13936 |
With the advent of a more creative curriculum in the United Kingdom, teachers and teaching assistants have more freedom in how they deliver lessons which dove tail into the teaching objectives and expected outcomes as provided in the framework of the national curriculum. For children aged between five and seven years of age (primary school children in years 1 and 2), a suitable topic for the spring term might be “Dinosaurs”. With the spring term ending at Easter, the addition of an imaginative series of lesson activities designed around looking at dinosaur eggs would help to tie in the term topic with the holiday period that comes immediately at the end of this term.
A dinosaur egg can be made very simply using a balloon and paper mache to create the desired effect. A single, large egg can be created by the teacher and the teaching assistants or, if school resources allow, the children themselves can have a go at making and painting their own paper mache dinosaur eggs. Often the eggs that are made to represent a dinosaur egg are quite large; many people think that dinosaurs hatched from huge eggs, but this is not the case. Although a number of dinosaur genera are known to have laid large eggs, most dinosaurs laid very much smaller eggs than most people imagine. Egg size in egg-laying, terrestrial vertebrates is limited by a number of factors. For example, the egg has to be strong enough to hold the volume of liquid that it contains, but the egg shell cannot be too thick, otherwise the baby inside would not be strong enough to break out of the egg (to hatch). The largest dinosaur eggs known to science have been ascribed to a genus of titanosaurs (long-necked dinosaurs), which may have measured more than fifteen metres in length. Even so, the eggs of these prehistoric animals are about the same size as a football.
To read an article about the size of dinosaur eggs: The Big Eggs of a Dinosaur – Hypselosaurus
When the egg has been made and painted, simply create a little nest for it, using leaves, twigs and such like. If the class has a pet hamster or guinea pig, some hay or straw normally reserved for the class pet also works well. Then, over the course of the spring term, the children can observe their egg and record any differences in how it looks. For example, once the egg has been put in the nest, take a picture of it and post it up on the class notice board. Then, after a week or so, turn the egg round in the nest and get the children to compare what they see with the earlier photograph. A teacher can use this simple exercise to get children to think about what differences they can see and why the differences might have occurred. What does it mean when the egg has moved? What may be going on inside the egg?
Over the next two weeks, the egg can be given a crack and the children made ready for the “hatching” of their own dinosaur; again, the change in the state of the egg can be used to encourage the children in a creative writing exercise as they compose short letters to their “baby dinosaur”.
A Typical Dinosaur Nest (Fossils)
Finally, the school day dawns (towards the end of the term topic) when the baby dinosaur hatches. However, rather than have to go to the trouble of creating a baby dinosaur for the class, here is a simple tip for any teacher or teaching assistant, allow your dinosaur to escape. To show the escape, simply break the egg open using a sharp pair of scissors before the children come in and ask the caretaker to move one of the ceiling tiles on the suspended ceiling (a feature of most classrooms). The children can learn of their dinosaur’s escape into the roof space and the moved tile in the ceiling would be proof to them of the escape of their pet dinosaur. The teacher can easily leave a trail of three-toed (tridactyl) dinosaur prints from the nest area to the floor immediately below the ceiling tile, creating a trail for the young pupils to follow.
Then it is simply a question of developing plenty of extension activities around the school’s pet dinosaur project. For example, the children can be encouraged to draw what they think the dinosaur may have looked like, what name should it have been called and why? In addition, the children can be asked to think up stories that they might want to read to the dinosaur, or to imagine the adventures that their escaped dinosaur might be having.
Such imaginative and creative ideas can help teachers and teaching assistants to develop interesting lesson plans that challenge pupils to observe, explore and ask questions about living things. Reference materials can be used to find out what palaeontologists know about the fossilised eggs of dinosaurs. As well as covering aspects of the science element of the national curriculum, cross curricula activities such as creative writing and grammar usage which relates to the objectives of the English element of teaching can be incorporated.
To learn more about Everything Dinosaur’s teaching activities in schools: Everything Dinosaur’s School Workshops
Having your own dinosaur egg and watching the egg change and eventually hatch provides an excellent basis for the development of many enriching and challenging lesson ideas with key stage one children. | https://blog.everythingdinosaur.co.uk/blog/_archives/2013/01/20 |
Lawmakers focus on job creation, tax cuts, business incentives and tackling the debt, calling for immediate action.
Key Republican leaders excoriated the Obama administration and Senate Democrats for crying the doom of joblessness and outsourcing, all the while sitting on their hands with no budget as the economy goes to pot.
“We’ve got to get back to sound economic principles that work, instead of putting on another Band-Aid,” said Rep. Allen West of Florida.
In the “CBS Town Hall” earlier this week, West, along with Rep. Paul Ryan of Wisconsin, South Carolina Gov. Nikki Haley and Sen. Tom Coburn of Oklahoma, responded to audience questions on the economy, offering concrete solutions and calling on their fellow lawmakers for immediate action.
“Washington’s bad habit is, we treat the symptoms of the problem rather than the problem,” Coburn said, referring to the President’s stimulus package passed in the wake of the 2008 economic downturn.
“We don’t have any choice. We have to eliminate $9.7 trillion out of the federal government’s expenditures over the next 10 years to not go in the tank. Now, they can dispute that, but that’s the fact,” he said.
Job creation and reducing the budget deficit were the primary themes discussed during the event.
Audience member Crystal Grant, speaking, she said, for lower- and middle-class Americans who make sacrifices every day, asked the Republican lawmakers why they advocated tax breaks for wealthy Americans.
Coburn responded that there would need to be compromise, but lower tax rates are generally good for the economy.
“You’re taking away capital and saying that the government can spend the money better. The fact is, if we were to lower tax rates, we could actually generate increased revenue,” he said.
Coburn said the government could increase revenue by getting rid of ethanol tax credits and other subsidies and by eliminating programs—particularly economic stimulus programs—that are not effective. He referred to 80 of 188 such programs, accounting for $6.6 million, that have never been tested for effectiveness.
All four Republican lawmakers on the panel criticized the federal government for creating conditions adverse to the growth of business.
Haley spoke of her state several times as an economic model for the federal government.
Although the economy is going down nationally, in South Carolina it is going up because the state is doing the right things, like tort reform and reform of Medicaid, Haley said.
“Companies come with confidence to South Carolina. We are fighting the unions every step of the way. We are a strong right-to-work state and we’re going to stay that way. That’s what gives a company confidence to come and say, ‘This is a state where we can make money. This is a state where we can invest and know that we’re right to hire,’ ” she said.
Haley criticized President Obama for allowing the National Labor Relations Board to sue Boeing for creating jobs in her state. Situations like these create incentives for companies to do business overseas, she said.
In South Carolina, the state government is cutting unemployment taxes for employers and cutting down some unemployment benefits, she said.
“We are incentivizing those that are successful to want to be more successful,” she said.
When asked why businesses seem to be sitting on capital instead of hiring the many unemployed, West said that government is not setting the conditions for the private sector to grow.
“We’ve got a business tax rate of 35 percent, the highest in the world. Who is going to be able to create a job in the United States of America, bring production and manufacturing back to this country, if you’re pushing them away?” he asked.
When asked by CBS News business and economics correspondent Rebecca Jarvis why he focused so much on the budget deficit when Americans need jobs, Ryan responded, saying that the issues are interconnected. Businesses see today’s deficit as tomorrow’s tax increases and interest rate increases, he said.
“These massive deficits are showing businesses that there’s an uncertain future, an uncertain future they don’t want to invest in,” he said.
“That is why a lot of businesses are sitting on capital. I had a round table in Kenosha with business leaders last Thursday,” Ryan said. “All of them are telling me the same thing. ‘We don’t know what’s coming from government next. So many new regulations coming down the pike.’ ”
Coburn was very critical of his fellow senators for a focus on short-term political expediency.
“We’ve got the lowest-level of votes in the Senate in 25 years. People don’t vote because they don’t want to have the courage to defend it,” he said.
Coburn even referred to instances where Senators introduced new legislation for programs that already exist. There is hardly any oversight in the federal government, making for billions of dollars in duplication, he said.
Haley said that the federal government should start with zero-base budgeting.
“Ask, what do we have? And go from there,” Haley said. | https://humanevents.com/2011/06/18/republicans-offer-tough-solutions-in-cbs-town-hall-on-the-economy/ |
The Meaning of Human Existence considers humanity’s purpose and place in the grand scheme of things
Biologist Edward O. Wilson tackles mankind – our origins, our unique place in the universe, and what the future of the species holds – in about 200 pages.
By Danny Heitman, Christian Science Monitor
In The Meaning of Human Existence, Edward O. Wilson tackles the puzzle at history’s heart: “Does humanity have a special place in the Universe? What is the meaning of our personal lives? I believe that we’ve learned enough about the Universe and ourselves to ask these questions in an answerable, testable form.”
Wilson isn’t a philosopher or a theologian, but a biologist who’s spent much of his life studying ants. At age 85, he now presides as an elder statesman of science and a popular explainer of it, following in the literary tradition of Loren Eiseley, Lewis Thomas, and Stephen Jay Gould. Wilson’s career defies easy category, and he likes it that way, arguing that academia’s insular disciplines overlook the central question of humanity’s role on Earth. “Scientists who might contribute to a more realistic worldview are especially disappointing,” he laments. “Largely yeomen, they are intellectual dwarves content to stay within the narrow specialties for which they were trained and are paid.”
That kind of myopia is especially troublesome, Wilson suggests, because science now seems advanced enough to sharply clarify what it means to be human. He begins his quest to explain the purpose of human existence by asserting what it’s not. We’re not here, he maintains, to serve a divine master and prepare for a heavenly afterlife. Religion developed early across humanity to strengthen social bonds, but Wilson concludes that it’s outlived its usefulness. “The great religions are also, and tragically, sources of ceaseless and unnecessary suffering,” he writes. “They are impediments to the grasp of reality needed to solve most social problems in the world.”
Wilson traces human origins to a lucky series of biological developments that conspired to make us masters of the planet. The complex organization of human society invites easy comparison with ants – creatures that, in his other books, have inspired some of Wilson’s best prose. He includes a seemingly obligatory chapter on ants here, and there’s a marvelous passage in which he gently exhales into a colony of leafcutter ants and draws them out with his breath. “I admit that this observation has no practical use,” he concedes, “unless you like the thrill of being chased by really serious ants.”
Such moments of interspecies communion aside, Wilson cautions that the intensely nuanced interactions among humans have little in common with life on an anthill: “Almost all human beings seek their own destiny…. They will always revolt against slavery; they will not be treated like worker ants.”
Evolution wired humans to both compete and cooperate with each other, impulses that can be beneficial but obviously contradictory, says Wilson. “In a nutshell,” he writes, “individual selection favors what we call sin and group selection favors virtue. The result is the internal conflict of conscience that afflicts all but psychopaths….” Life would be much simpler if we could resolve this wrinkle, but Wilson warns against it: “The instability of the emotions is a quality we should wish to keep. It is the essence of human character, and the source of our creativity…. We must learn to behave, but let us never even think of domesticating human nature.”
He mentions another evolutionary trick complicating human destiny. Our refined brains and ability to walk upright make us lords of nature, but also distance us from it, a blind spot that inclines us to abuse the planet we need for our survival. To reach our fullest promise, Wilson proposes a grand partnership between science and the humanities. One reason that humans have thrived is because of our intense awareness of each other, a quality nurtured by the art, literature, music, and theater the humanities produce. Science can benefit from these insights, Wilson tells readers, even as science enriches the humanities, too. “The greatest contribution that science can make to the humanities,” he writes, “is to demonstrate how bizarre we are as a species, and why.”
The Meaning of Human Existence is itself something of a marriage between science and the liberal arts, blending empirical observation with memoir to make its points. Even so, Wilson’s voice doesn’t register as intimately as in Naturalist or A Window on Eternity, previous works in which his gifts as a literary artist shone brighter. The Human Age, naturalist Diane Ackerman’s latest offering, ponders many of the questions addressed in Wilson’s new book, but her sentences seem more grounded in the kinds of precise observation that Wilson, at his best, has used to such good effect.
Wilson proves, as usual, a briskly economical writer; any commentator who can summarize the quandary of human existence in less than 200 pages deserves a gold star for brevity. Some of this concision is achieved, though, through a valedictory tone that flies high above the fray, unfettered by inconvenient details. He doesn’t fret, for example, over just how science and the humanities might deepen their connection, especially in a higher education climate where the liberal arts get such short shrift. His prediction of a mechanized future defined by windfalls of leisure time also sounds pie-in-the-sky. Aldous Huxley made the same forecast in 1932, and decades later, we seem busier than ever.
Like the subject it chronicles, The Meaning of Human Existence reads, perhaps necessarily, like a work in progress – a string of commencement speeches stitched together with thread: “Human existence may be simpler than we thought. There is no predestination, no unfathomed mystery of life. Demons and gods do not vie for our allegiance…. What counts for long-term survival is intelligent self-understanding, based on greater independence of thought than that tolerated today even in our most advanced democratic societies.” One can almost see Wilson at the podium, shaded beneath a mortar board, as he offers this assessment of his fellow Homo sapiens. | https://eowilsonfoundation.org/christian-science-monitor-review-the-meaning-of-human-existence-is-itself-something-of-the-marriage-between-science-and-liberal-arts-blending-empirical-observation-with-me/ |
I am passionate about and dedicated to my own personal and professional growth. Beyond completing my formal education and earning advanced degrees, I continue to be a seeker of knowledge and regularly pursue my own in-depth development. For me, learning is a lifelong process, and I have been fortunate and blessed to have worked with many brilliant and masterful teachers to whom I am eternally grateful. They include:
Brugh Joy, M.D., author of Joy’s Way and Avalanche. Dr. Joy introduced me to the mystery of life and opened a door that I had not known existed. I am forever changed by what I experienced from his wisdom and guidance. Brugh affected the way I view behavioral patterns and emotional dynamics at a profound level, as well as the way I perceive the world. He taught me to view the human condition from a “what’s right about it” perspective, rather than “what’s wrong with it,” and embrace all aspects of ourselves as a path to centeredness, personal growth and spiritual evolvement.
Brian Weiss, M.D., author of Many Lives, Many Masters. Dr. Weiss taught me the art and skill of Regression Therapy and its healing power. He is a masterful clinician who focuses on the sources of our wounds, which can lead to healing. Brian is a most compassionate and gifted teacher/healer, and I am so grateful to have had the opportunity to know and learn from him.
Carolyn Conger, Ph.D., embodies the marriage of the psychological and spiritual, with particular attention to the process of opening and unfolding. Carolyn has inspired me to trust the feminine process of intuition and creativity to help guide my clients to an inner wisdom and deeper self-compassion.
Meredith Sabini, Ph.D., has written extensively on dreams related to creativity, cultural issues, spiritual experience, the dying process and our relationship with nature. She is a pioneering researcher on dreams that diagnose illness. I am grateful for her enormous depth of wisdom and insight as I continue to learn and grow from her workshops and dream process group.
Marty Rossman, M.D., is an innovator in the field of mind/body medicine. Marty was my teacher as I completed a certification process in Interactive Guided Imagery. This profound healing tool accesses one’s own inner wisdom to work through deep wounds and practical concerns in a creative and lasting way.
Angeles Arrien, Ph.D., is a world-renowned teacher, author and cultural anthropologist whose teachings bridge the disciplines of anthropology, psychology and comparative religion while focusing on universal beliefs shared by humanity. I am so appreciative of learning from, and experiencing, her profound and compassionate wisdom and grace, enabling me to draw insight from indigenous intelligence.
Spacecraft Structures - Session 5: Construct a storyboard or poster of final design testing results, sketches, steps throughout development and journals. Spacecraft Structures - Session 6: Student presentations linking design strategies and observations to science concepts.
Spacecraft Structures and Mechanisms: From Concept to Launch. Editors: Thomas P. Sarafin and Wiley Larson.
The spacecraft structure is the physical platform that supports and integrates the subsystems and payload, and as such it is of fundamental importance for any spacecraft. Structural dynamics is fundamental to guaranteeing appropriate performance, and the implementation of mechanisms to enable specific functions, ranging from attitude control to the deployment of various elements, is interlinked with the structural design. Our research group is working on projects in collaboration with major aerospace companies in the UK. The premise of a universal failure criterion is impractical given the number of adherend-adhesive configurations available. However, for a finite number of joint configurations, design rules can be developed based on experimental test data and detailed FE modelling.
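As a minimal illustration of the kind of design rule such test data can support, structural sizing commonly compares an allowable stress (from test) against the applied stress via a margin of safety. The function name, the factor of safety, and the stress values below are illustrative assumptions, not figures from the source:

```python
# Minimal margin-of-safety check of the kind used when turning joint test
# data into design rules. The factor of safety and stress values here are
# illustrative assumptions, not values from any specific program.

def margin_of_safety(allowable_stress, applied_stress, factor_of_safety=1.25):
    """MS = allowable / (FS * applied) - 1; the design passes if MS >= 0."""
    return allowable_stress / (factor_of_safety * applied_stress) - 1

ms = margin_of_safety(allowable_stress=50.0, applied_stress=32.0)
print(f"MS = {ms:.3f}, pass = {ms >= 0}")
```

With these numbers the margin is 50 / (1.25 × 32) − 1 = 0.25, so the joint passes; a negative margin would flag the configuration for redesign or further testing.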
This book describes how to develop spacecraft structures and mechanisms, from requirements to ensuring mechanical readiness for launch. The contents cover material selection, spacecraft configuration, mechanism design, and quality assurance. The book addresses its goal of helping people on space programs develop mechanical systems that work at the lowest total cost by presenting the big picture of developing high-quality mechanical products while also providing enough detail to define requirements, establish preliminary designs, develop verification plans, plan tests, and document compliance with requirements.
Revolutionary satellite structural systems technology: a vision for the future. Abstract: A number of revolutionary spacecraft structures and control concepts are currently being developed which, when successfully implemented, will dramatically change the way future space systems are designed and developed. Many of these concepts will offer new challenges to the designers of spacecraft structures and control systems. Among the concepts being explored are: (i) Ultralight Precision Deployable Systems: revolutionary concepts for compact, lightweight, deployable precision reflectors based on recent breakthroughs in ultralight optical quality mirrors and precision mechanisms.
PT Goodyear Indonesia Tbk - Goodyear is one of the world's largest tire companies. It employs about 72,000 people and manufactures its products in 53 facilities in 22 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry.
Goodyear Indonesia is now looking for the talented people with a passionate, enterprising spirit to help us shape the future of our business in Indonesia. These are people who enjoy responsibility, strive to achieve, open to change and have a collaborative style. The current opportunity is:
Electrical Technician
Requirements:
- Diploma student(s) majoring in Electrical Engineering
- Having at least 2 years experience in related area is advantage
- Experience in heavy equipment manufacturing is preferable
- Enthusiastic, communicative and willing to learn
- Willing to work on shift
- Good command in English, both verbal and written is preferable
- Computer literate
Responsibilities:
- Maintain and analyze price reference documentation
- Make recommendations that will facilitate optimal pricing decisions by the Pricing, Finance, Marketing and Sales teams
- Define and deliver the pricing strategy by working closely with segment pricing; formulate pricing logic, rules and controls that support delivery of the strategy
- Monitor and control actual pricing vs. planned pricing and identify opportunities to optimize sales and profitability through price strategies
- Monitor and research competitive prices in the market
- Evaluate mix results and advise on mix improvements
- Work with the Product Manager to develop and implement product price strategies, including segment strategies for the various markets and channels
- Bachelor's degree in Finance; an MBA background is preferred
- Minimum 7 years' experience in sales & marketing, including 3 years in the same or a similar position
- Proven ability to influence cross-functional teams without formal authority
- Excellent communication and interpersonal skills
- A good team player: independent, proactive, and possessing excellent problem-solving and organizational skills
This is a great occasion, a great 20th anniversary of events that had world historical importance. But so that we do not forget where we currently are in Central Europe, we should also recall what happened in 2001 when the macabre “coalition of the willing” was formed. I at least cannot forget that with the notable exception of the two greatest, Jacek Kuron and Janos Kis, the whole cohort of the best Central European intellectuals became cheerleaders and enablers on the road to the Iraq war. The revolutionary impulse led them to support what revolutionaries from 1791 to 1948, with a few exceptions, always supported, revolutionary regime change carried out by an external vanguard.
When in 1981 my then journal Telos planned an eventually successful issue on the Polish workers movement, the editor, the late Paul Piccone, predicted that the end result of Solidarity would only be Polish Meanys and Hoffas. He did not say Polish Torquemadas only because, at heart a good Catholic, he probably admired the Inquisition. More subtly perhaps, in 1989 Jürgen Habermas skeptically called the Central European transformations “nachholende Revolution,” catch-up revolutions, and our relations have been strained ever since. I did not think the great transformations were revolutions in my definition of the term, and certainly thought that something new and important was being created.
Neither Piccone, nor Habermas, nor anyone else could have had the imagination to predict either the “coalition of the willing,” or the Polish twins (Lech and Jaroslaw Kaczynski, holding the top two political positions in Poland between 2006 and 2007), or for that matter the possibility that the Hungarian hard right would get over 70% of the actual votes in a European election in 2009.
But I hold on to my claim (that owes much to the theoretical considerations of Janos Kis and the great research of Andras Bozoki on the Hungarian Round Table), namely that 1989 did produce something dramatically new: a political paradigm of radical transformation significantly beyond the dichotomy of reform and revolution, yielding a historically new, superior model of constitutional creation beyond the revolutionary democratic European models, and more generalizable than the American version with its dangers of dual power whose results we have seen in Russia.
More importantly, I would now argue that it was insufficient understanding and internalization of what was actually new and dramatic about 1989 that led intellectuals at least toward the hard right internationally or nationally or both, though generally either one or the other. In short they came to think of themselves either as the revolutionaries of the journalistic cliche “the revolutions of 1989” or, more accurately but even more fatefully, as the protagonists of a revolution that did not happen: a “betrayed revolution” that needed to be resumed.
The revolutionaries of the imagined “revolutions of 1989” like most revolutionaries everywhere wished to see their revolution extended and exported. And indeed, how could anyone in Central Europe not observe the fall of the dominos with pleasure, or if they could overcome their fairly general Euro-centrism, the full realization of the new paradigm in the dramatically successful transformation of the Republic of South Africa in the 1990s. As long as one focused on the actual details of the paradigm, and not what was intrinsic to the category of revolution with all its elitist and authoritarian implications – Carl Schmitt’s sovereign dictatorship in the domain of constitution making – there was no risk in these aspirations.
But it was especially fateful when the supposed revolutionaries relied on the concept of totalitarianism, something they often did in spite of the fact that no concept could be more misleading for Gierek’s Poland, Kadar’s Hungary, Gorbachev’s or even Brezhnev’s Soviet Union, or even, pace Kanan Makiya, Sadam’s decrepit Iraq, more a failed state than a system of total rule. It was this concept that led them to discover a la Jeanne Kirkpatrick places that on their own could never generate civil society based oppositions, the fundamental pre-supposition of negotiated transformations such as their own. Then the revolutionary impulse led them to support what revolutionaries from 1791 to 1948, with a few exceptions like Giuseppe Mazzini, always supported, revolutionary regime change carried out by an external vanguard. Their earlier staunch critiques of Leninism were now sacrificed to their hatred of a supposed totalitarianism. The much admired Hannah Arendt’s explicit warning that totalitarianism is a highly exceptional historical regime, as well as her implicit diagnosis that revolution leads to constitutional democracy again only in highly exceptional circumstances with inherited republican structures of self-government, were entirely forgotten.
I will not focus on the right today that rejects the Round Tables and the paradigm of 1989 because they did not represent true revolutions. Quite consistently they have fought for a genuine revolutionary rupture since 1989, and have so far, probably because of the role of the constitutional structures and the influence of Europe, not succeeded. Let us note, however, that their strategy of de-communization has been aped by the American occupiers of Iraq whose de-Baathification has helped to make an unlikely road to constitutional democracy less likely still. The revolutionary right in each country, as the communitarian left in South Africa, can be effectively opposed in my view only if we become clear about what was done in 1989 and the 1990s and what it is that must be avoided both at home and abroad. Again focusing on international politics, the lesson will be especially important with respect to a place like Iran whose democratic activists have great reasons, strategic and normative, to reject the entire heritage of revolution, Islamic or any other. We must on our part spare them all threats, real and imaginary, of revolutionary regime change imposed by an external vanguard. Nothing will be more useful in their own internal attempts at regime change than the dissolution of such a specter now again linked to a hysteria over WMDs. Can we finally learn from the not so distant past? | https://publicsphere.ssrc.org/arato-from-revolutionaries-to-external-vanguards/ |
Safety in Construction | How You Can Avoid the Top 5 Causes of Personal Injury
Last updated Thursday, June 17th, 2021
Safety in construction is an area that the team here at Shuman Legal is keen to educate our community on. Below are the top causes of injury in OSHA construction reporting, together with the top prevention measures recommended for each of these areas, which have led to personal injury claims:
Cause # 1 – Falling from heights
Safety in construction solution: Work from the ground wherever possible – start by eliminating the need to work at height, it is the most effective way of protecting workers from the risk of falls. Complete activities at ground level wherever possible, for example, by using prefabrication methods and tools with extendable handles.
Four basic components to fall protection:
- Proper worker training.
- Select appropriate equipment for your specific work environment.
- Ensuring that all equipment is properly fitted to everyone who will be using it.
- Frequent equipment inspections.
There are four generally accepted categories of fall protection: fall elimination, fall prevention, fall arrest, and administrative controls.
Cause # 2 – Trench collapse
Safety in construction solution: Plan to place equipment a safe distance away from the trench opening and locate all utilities. Water and soil make mud, so always be extra cautious during and after rainstorms. Beware of low oxygen and toxic fumes. Never assume you have time to move out of the way if a collapse starts.
- Move extra excavation materials at least 2 feet away from the trench.
- Remove personnel from the edge of the trench who are not working on it.
- Keep all equipment away from the site to prevent cave-ins and blunt force trauma.
- Do not enter trenches that have not been reinforced or inspected at the start of the day or after a rainstorm.
- Do not work under suspended loads.
- Never start digging till all underground utilities in the area have been accounted for.
- Keep materials and soil piles at least 2 feet away from the edges.
- Make sure air tests are carried out if the trench is more than 4 feet deep. Oxygen deprivation is the second leading cause of fatalities in unregulated trenches.
- Evacuate the trench immediately if you smell a strange odor or see rainwater accumulating at the bottom.
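As a toy illustration, the two numeric thresholds in the list above (the 2-foot spoil-pile distance and the 4-foot air-test depth) can be encoded as a simple pre-dig checklist. The function name and parameters are illustrative, not part of any real OSHA tooling, and a script is no substitute for a competent person's inspection:

```python
# Toy pre-dig checklist encoding the numeric thresholds mentioned above.
# Illustrative only: real excavation work must follow OSHA requirements
# and a competent person's on-site inspection, not a script.

def trench_warnings(depth_ft, spoil_distance_ft, air_tested):
    """Return a list of warnings for a planned trench."""
    warnings = []
    if spoil_distance_ft < 2:
        warnings.append("Keep soil and materials at least 2 ft from the edge.")
    if depth_ft > 4 and not air_tested:
        warnings.append("Trench is deeper than 4 ft: run an air test first.")
    return warnings

print(trench_warnings(depth_ft=6, spoil_distance_ft=1, air_tested=False))
```

An empty list from the check means only that these two numeric rules are met; the other precautions in the list above still apply.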
Cause # 3 – Collapsed scaffolding
Safety in construction solution: To prevent scaffolding collapses, provide an access ladder and use only scaffold-grade lumber. Install guardrails and toe boards on all scaffolding 10 or more feet above the ground. Make sure the scaffold can support four times the maximum intended load (including the weight of the scaffold).
- Inspect Scaffolding Before Use.
- Adhere to Guidelines.
- Train Workers Properly.
- Ensure Scaffold Stability.
- Use the Proper Safety Equipment.
- Know the Load Capacity.
- Beware of Power Lines.
- Stay Organized.
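The four-times rule quoted above can be made concrete with a small worked check. Under the common reading of the rule (a scaffold must carry its own weight plus at least four times the maximum intended load), a quick sketch with made-up numbers looks like this:

```python
# Worked example of the "four times the maximum intended load" rule.
# Assumes the reading: rated capacity >= scaffold's own weight + 4x the
# maximum intended load (workers, tools, materials). Numbers are made up.

def scaffold_capacity_ok(rated_capacity_lb, scaffold_weight_lb, max_intended_load_lb):
    required = scaffold_weight_lb + 4 * max_intended_load_lb
    return rated_capacity_lb >= required

# 1,200 lb of workers/tools/materials on an 800 lb scaffold
# needs 800 + 4 * 1200 = 5,600 lb of rated capacity.
print(scaffold_capacity_ok(6000, 800, 1200))   # True
print(scaffold_capacity_ok(5000, 800, 1200))   # False
```

In other words, a 6,000 lb rated scaffold passes for that load while a 5,000 lb one does not; actual load ratings should always come from the manufacturer and a competent person's inspection.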
Cause # 4 – Electric shock
Safety in construction solution: Use lock-out/tag-out practices to ensure that circuits are de-energized before servicing equipment. Ensure all electrical equipment is properly grounded or double insulated. Inspect tools prior to use and check extension and power cords for wear and tear; if damaged, remove the equipment from service.
The more aware you are, the lower the risk of danger. Follow these preventative measures to protect yourself from electrocution:
- Be aware of overhead power lines and keep a safe distance.
- Use ground-fault circuit interrupters (GFCI).
- Check tools and extension cords for cuts, abrasions, and damaged insulation.
- Do not use power tools and equipment in ways they were not designed for.
- Follow procedures for lockout/tagout.
- Receive proper training.
Cause # 5 – Failure to use appropriate protective gear
Safety in construction solution: Involve employees in discussions concerning what specific protective gear brands, colors, and models to purchase since they will be the ones using it during the workday. Ask employees how their protective gear is working for them and what recommendations they have for the next time you are purchasing more. Address complaints promptly and keep open communication with employees to provide the most comfortable and appealing equipment possible.
- Create a Company Culture of Safety. Creating a company culture in which the health and safety of all employees are a priority will instill an internal motivation in workers to wear protective gear.
- Conduct a Hazard Analysis.
- Carry out regular protective gear training.
- Choose the right protective gear.
- Enforce Your Policy.
If you have been injured at your workplace and need a tireless advocate to get employers or insurance agencies to cover your recovery needs, you will find fierce and experienced advocates in the team here at Shuman Legal. We believe that the way we treat our clients is as important as our courtroom skills.
Our Chicago law firm serves the state of Illinois. This includes the city of Chicago, the greater Chicago region, as well as numerous other cities and towns, including Joliet, Schaumburg, Orland Park, Maywood, Kankakee, Peotone, Marseilles, Peoria, Decatur, Effingham, Danville, Moline, Galena, Cicero and more. | https://www.shumanlegal.com/safety-in-construction-how-to-avoid-the-top-5-causes-of-personal-injury/ |
Termites Foretell Climate Change in Africa’s Savannas
Using sophisticated airborne imaging and structural analysis, scientists at the Carnegie Institution’s Department of Global Ecology mapped more than 40,000 termite mounds over 192 square miles in the African savanna. They found that their size and distribution is linked to vegetation and landscape patterns associated with annual rainfall. The results reveal how the savanna terrain has evolved and show how termite mounds can be used to predict ecological shifts from climate change. The research is published in the September 7, 2010, advanced online edition of Nature Communications.
Mound-building termites in the study area of Kruger National Park in South Africa tend to build their nests in areas that are not too wet, nor too dry, but are well drained, and on slopes of savanna hills above boundaries called seeplines. Seeplines form where water has flowed belowground through sandy, porous soil and backs up at areas rich in clay. Typically, woody trees prefer the well-drained upslope side where the mounds tend to be located, while grasses dominate the wetter areas downslope.
“These relationships make the termite mounds excellent indicators of the geology, hydrology, and soil conditions,” commented lead author Shaun Levick at Carnegie. “And those conditions affect what plants grow and thus the entire local ecosystem. We looked at the mound density, size, and location on the hills with respect to the vegetation patterns.”
Most research into the ecology of these savannas has focused on the patterns of woody trees and shorter vegetation over larger, regional scales. Work at the smaller, hill-slope scales has, until now, been limited to 2-dimensional studies on specific hillsides. The Carnegie research was conducted by the Carnegie Airborne Observatory (CAO)–a unique airborne mapping system that operates much like a diagnostic medical scan. It can penetrate the canopy all the way to the soil level and probe about 40,000 acres per day. The CAO uses a waveform LiDAR system (light detection and ranging) that maps the 3-dimensional structure of vegetation and, in this case, termite mounds and combines that information with spectroscopic imaging—imaging that reveals chemical fingerprints of the species below. It renders the data in stunning 3-D maps.
“We looked at the vegetation and termite mound characteristics throughout enormous areas of African savanna in dry, intermediate, and wet zones,” explained Levick. “We found that precipitation, along with elevation, hydrological, and soil conditions determine whether the area will be dominated by grasses or woody vegetation and the size and density of termite mounds.”
The advantage of monitoring termite mounds in addition to vegetation is that mounds are so tightly coupled with soil and hydrological conditions that they make it easier to map the hill-slope seeplines. Furthermore, vegetation cover varies considerably between the wet and dry seasons, while the mounds are not subject to these fluctuations.
“By understanding the patterns of the vegetation and termite mounds over different moisture zones, we can project how the landscape might change with climate change,” explained co-author Greg Asner at Carnegie. “Warming is expected to increase the variability of future precipitation in African savannas, so some areas will get more, while others get less rain. The predictions are that many regions of the savanna will become drier, which suggests more woody species will encroach on today’s grasslands. These changes will depend on complex but predictable hydrological processes along hill slopes, which will correspond to pattern changes in the telltale termite mounds we see today from the air.”
This research was funded by a grant from the Andrew Mellon Foundation. The Carnegie Airborne Observatory is supported by the W.M. Keck Foundation and William Hearst, III. SANParks provided logistical support.
Serious sports injuries aren't confined to athletes -- spectators also run that risk, a new study finds. "You don't expect to be injured when you attend a sporting event as a spectator," said Dr. Amit Momaya, a sports medicine orthopedic surgeon at the University of Alabama at Birmingham. "You certainly don't expect to die, yet there are any number of cases where spectators are injured, some fatally, at sporting events."
One little girl had a close call after she was struck by a line drive at a 2017 baseball game at Yankee Stadium in New York City, winding up in the ER with multiple facial injuries. The Yankee organization has since expanded the protective netting at the stadium. For the study, Momaya and his colleagues scoured two databases, PubMed and Embase, looking for studies on spectator injuries. They found that spectator injuries are uncommon, but when they occur they can be life-threatening and life-changing.
Going back to 2000, the researchers identified 181 spectator injuries. Most of these (123) came from automobile or motorcycle racing. Cycling accounted for 25 injuries, cricket 12, baseball 10, and hockey eight. Among the injuries, 62 were fatal. Of these, 38 were from vehicle racing, 17 from cycling, four from hockey, two from baseball and one from cricket.
Most of these injuries occurred when a spectator was hit by a ball, puck, car or other projectile, Momaya said. In some cases, injury occurred as a player crashed into the stands, hitting a spectator, he added.
The researchers think a central database that records all spectator injuries is needed to see if these injuries are increasing and to guide efforts to limit dangers to sports fans.
"For example, Major League Baseball recently increased the area covered by netting to reduce the risk of fans being struck by foul balls," Momaya said in a university news release. "Without a systematic way to record injuries, there is no way to measure whether that effort is sufficient or if netting should be extended."
Some obvious ways to protect fans -- such as impenetrable barriers at racetracks to prevent vehicles or crash debris from hitting spectator areas -- can be implemented, he said. In addition, higher transparent barriers in hockey arenas could prevent pucks from striking fans.
Many of the injuries at bike races were due to collisions between spectators and vehicles, including a publicity caravan, a security motorcycle and a tanker truck. Also, in car racing, injuries occur when support vehicles hit spectators, the researchers found.
Crowd control, event planning and staff training also play important roles in spectator safety, Momaya said.
"There is a fine line between an enhanced fan experience on one hand, balanced against spectator safety on the other," he said. "As a physician, I think safety is the top priority."
The report was published recently in the Journal of Sports Medicine and Physical Fitness. | https://www.theindependentbd.com/home/printnews/178734 |
1. Introduction
===============
There are numerous strategies, each with inherent advantages and disadvantages, that may be used for the evaluation of DNA damage and repair. DNA is the primary target following exposure to stimuli such as ultraviolet (UV) radiation, DNA alkylators, certain environmental carcinogens, oxidative stress and chemotherapeutic drugs ([@b1-ol-0-0-6002]). All of these damaging factors produce DNA lesions and base alterations that promote breaks in the DNA helix ([@b2-ol-0-0-6002]). Double-strand breaks (DSBs) are lethal to cells, as they affect both strands of DNA and promote the loss of genetic information ([@b3-ol-0-0-6002]). DNA damage, which frequently occurs in eukaryotic cells, may promote genomic instability and aid the development of disease, including cancer ([@b4-ol-0-0-6002]). Following DNA damage, cellular responses are induced that allow the cell to repair or otherwise process the damage via a variety of mechanisms ([@b5-ol-0-0-6002]). Therefore, DNA repair proteins are important biomarkers for predicting the response of tumors to genotoxic stress and the prognosis of patients more accurately. This highlights the importance of detecting and quantifying DNA damage. A number of strategies allow the investigation of these underlying mechanisms, and the current review discusses these strategies and highlights their importance. The techniques may be separated into two categories: techniques for detecting DNA damage and techniques for evaluating the underlying repair mechanisms.
2. Molecular strategies
=======================
### Polymerase chain reaction (PCR) and agarose gel electrophoresis
Breaks in DNA reduce the molecular weight of a single DNA strand, and this may be caused by physical, chemical or enzymatic reagents ([@b6-ol-0-0-6002]). DNA breaks and lesions may be detected by PCR or using agarose gel electrophoresis ([@b7-ol-0-0-6002]).
PCR is one of the most frequently used techniques for detecting DNA damage ([@b7-ol-0-0-6002]). DNA amplification is stopped at the sites of damage by the blocking of *Taq* polymerase progression, which decreases the quantity of PCR product, as only templates that do not contain *Taq*-blocking lesions are amplified ([@b8-ol-0-0-6002]). This is considered to be a simple and reliable method in which particular segments of DNA are specifically replicated and visualized on agarose gels, which resolve a range of DNA fragments (50--50,000 bp) depending on the agarose percentage ([@b8-ol-0-0-6002]).
Quantitative PCR (qPCR) has been performed to quantify the amount of DNA damage on both strands, as well as the kinetics of DNA damage removal, in the mitochondrial DNA (mtDNA) of humans and other organisms ([@b7-ol-0-0-6002],[@b9-ol-0-0-6002]). The technique has been used to measure the formation and repair of UV-induced photoproducts in a 1.2-kb fragment of the *LacI* gene from *Escherichia coli* ([@b8-ol-0-0-6002]) and to measure the damage to mtDNA in *Schizosaccharomyces pombe* cells treated with hydrogen peroxide ([@b10-ol-0-0-6002]). The frequency of cisplatin-induced lesions has been investigated in a series of fragments ranging from 150 to 2,000 bp from the hamster *aprt* gene ([@b11-ol-0-0-6002]). Taken together, these previous studies have demonstrated the ability to detect and analyze gene-specific DNA damage and repair with PCR ([@b12-ol-0-0-6002]). The qPCR method depends on high-molecular-weight DNA, DNA quantification, qPCR conditions, quantification of amplification products and the calculation of lesion frequencies ([@b8-ol-0-0-6002]), and has the advantages of quantitative detection of DNA damage in a specific gene, expressed mathematically in lesions per kb, and of requiring only 1--2 ng of total genomic DNA ([@b9-ol-0-0-6002]).
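To illustrate how qPCR results are converted into lesions per kb, the calculation commonly assumes that polymerase-blocking lesions are randomly (Poisson) distributed, so the fraction of amplifiable templates decays exponentially with lesion load. A minimal sketch of that arithmetic (the function name and the example amplification values are hypothetical, not taken from the cited studies):

```python
import math

def lesions_per_10kb(amp_damaged, amp_control, fragment_kb):
    """Estimate lesion frequency from relative qPCR amplification.

    Assumes polymerase-blocking lesions are Poisson-distributed, so the
    fraction of undamaged (fully amplifiable) templates is exp(-lambda):
        lambda = -ln(amp_damaged / amp_control)
    lambda is then normalized to lesions per 10 kb of template.
    """
    relative_amplification = amp_damaged / amp_control
    lesions_per_fragment = -math.log(relative_amplification)
    return lesions_per_fragment / fragment_kb * 10.0

# e.g. a damaged sample amplifying at 60% of the untreated control
# over a 10-kb fragment corresponds to ~0.51 lesions per 10 kb
print(round(lesions_per_10kb(0.60, 1.0, fragment_kb=10.0), 3))  # 0.511
```

An undamaged sample (relative amplification of 1.0) correctly yields a lesion frequency of zero under this model.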
Ligation-mediated PCR (LMPCR) analyzes the distribution of the two types of UV-induced DNA photoproducts, namely cyclobutane pyrimidine dimers and 6--4 photoproducts. The technique has the capability to detect an individual DNA photoproduct at low UV doses (10--20 J/m^2^) and is also highly sensitive for studying the interactions of proteins and DNA *in vivo* ([@b13-ol-0-0-6002]), and for measuring the repair of cyclobutane pyrimidine dimers ([@b14-ol-0-0-6002]). By contrast, terminal transferase-dependent PCR (TDPCR) is a technique that adds a terminal transferase prior to ligation to an oligonucleotide, and as with LMPCR, this method is able to map pyrimidine 6--4 pyrimidone photoproducts and obtain information on the *in vivo* chromatin structure ([@b15-ol-0-0-6002]).
Immuno-coupled PCR (ICPCR) combines nucleic acid amplification with an antibody-based assay in which the detection enzyme in the ELISA is replaced with a biotinylated reporter DNA bound to an antigen-antibody complex ([@b16-ol-0-0-6002]). This methodology allows for the quantification of thymine dimer formation in genes, which has been established to be directly proportional to the global levels identified in UV radiation-exposed human genomic DNA ([@b17-ol-0-0-6002]). Short interspersed DNA element (SINE)-mediated PCR is also a highly sensitive assay that detects DNA adducts produced by drug treatment, including cisplatin ([@b18-ol-0-0-6002]), or UV-B-induced damage, and detects repair in the mammalian genome ([@b19-ol-0-0-6002]). This assay relies on the abundance, dispersion and conservation of SINEs in mammalian genomes ([@b19-ol-0-0-6002]). Compared with conventional PCR and qPCR, this method differs in that it amplifies long segments of DNA in the transcribed regions of the genome in a faster and more cost-effective manner ([@b18-ol-0-0-6002]).
### DNA repair proteins that are used as molecular markers
#### Ku protein
Ku is a heterodimer consisting of two subunits (70 and 80 kDa) that bind to a 470-kDa catalytic subunit termed the DNA-dependent protein kinase, which is involved in repairing DNA DSBs ([@b20-ol-0-0-6002]). The DSB repair pathway is dependent on Ku protein and is the primary DNA DSB repair mechanism in mammalian cells ([@b21-ol-0-0-6002]). The ability of Ku to function affects numerous nuclear processes besides DNA repair, including telomere maintenance and apoptosis ([@b22-ol-0-0-6002]). Ku protein has also been implicated in cell survival, which suggests that the detection of Ku protein expression may be used as a strategy for evaluating DNA damage and repair ([@b22-ol-0-0-6002]). The majority of previous studies have focused on the function of Ku in DNA DSB repair via the non-homologous end joining pathway, and cells or animals deficient in this protein are defective in DSB rejoining and are hypersensitive to ionizing radiation ([@b23-ol-0-0-6002]). For the expression and purification of full-length Ku heterodimer, it is necessary to have co-expression of Ku70 and Ku80, and subsequently, the protein must be separated and purified via chromatographic techniques ([@b24-ol-0-0-6002]).
#### Phosphorylated histone 2AX (γH2AX) protein
H2AX is a member of the histone H2A family, and it has been established that phosphorylation of H2AX increases within 1--3 min of genomic DNA damage ([@b25-ol-0-0-6002]). The detection of γH2AX phosphorylated at Serine-139 provides an approach for detecting and quantifying DNA DSBs, as the number of Serine-139-γH2AX molecules is associated with the quantity of DNA damage ([@b26-ol-0-0-6002]); therefore, it may be used as a marker of DSBs. The primary method for detecting γH2AX is immunofluorescence, using an antibody specific for Serine-139-γH2AX to demonstrate its localization in chromatin foci at the sites of DNA damage ([@b25-ol-0-0-6002]). Indirect identification has been achieved via flow cytometry (FCM) using secondary antibodies tagged with fluorescein isothiocyanate (FITC), with DNA counterstained with propidium iodide (PI) to analyze the association between the presence of DSBs and cell cycle phase ([@b27-ol-0-0-6002]).
#### X-ray repair cross complementing 1 (XRCC1) protein
The XRCC1 protein serves an important role in promoting efficient repair of DNA single-strand breaks (SSBs) in mammalian cells ([@b28-ol-0-0-6002]). XRCC1 is able to interact with multiple enzymatic components that are involved in the repair process, including DNA ligase IIIa, DNA polymerase β, apurinic/apyrimidinic endonuclease 1, polynucleotide kinase/phosphatase, poly(ADP-ribose) polymerase 1 and 2, and 8-oxoguanine DNA glycosylase ([@b29-ol-0-0-6002],[@b30-ol-0-0-6002]). Previous studies have established that certain polymorphisms in the XRCC1 gene are associated with cancer risk ([@b31-ol-0-0-6002]). The regulation of XRCC1 protein levels in human cell lines has been investigated using RNA interference, demonstrating that the reduction of XRCC1 affects the repair pathways of SSBs and that the protein has an important role in DNA base excision repair (BER) ([@b30-ol-0-0-6002],[@b32-ol-0-0-6002]). These events may be evaluated using the comet assay or the fluorescent and analytical techniques described in this review. For example, DNA repair assays evaluating the possible role of XRCC1 in the rejoining of chromosomal SSBs have used alkaline elution, alkaline unwinding or the comet assay, while neutral-pH elution from a DNA filter has been employed to evaluate the role of XRCC1 in the rejoining of DSBs ([@b33-ol-0-0-6002]).
3. Fluorescence strategies
==========================
### Comet assay
The comet assay, also known as single-cell gel electrophoresis, is simple and is considered to be one of the gold standard methods for measuring DNA strand breaks (single or double) in eukaryotic cells ([@b34-ol-0-0-6002],[@b35-ol-0-0-6002]). In addition to being a method for detecting DNA breaks, it is also possible to detect UV-induced pyrimidine dimers, oxidized bases and alkylation damage following the introduction of lesion-specific endonucleases ([@b36-ol-0-0-6002]).
This technique identifies the head of the comet as a spherical mass of undamaged DNA, and the damaged DNA (DNA loops around strand breaks) streams out from the head as a tail ([@b37-ol-0-0-6002],[@b38-ol-0-0-6002]). The comet structure was first described in a study by Ostling and Johanson ([@b39-ol-0-0-6002]), which explained the tail in terms of DNA with relaxed supercoiling. In the most frequently performed type of comet assay, cells are embedded in agarose to immobilize the DNA and a lysis process is performed using a detergent and high salt. The comet assay has a limited resolution of 10--800 kb using standard conditions ([@b40-ol-0-0-6002]). Other variants of the comet assay are also used to assess DNA damage and its detection.
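The head and tail described above are typically quantified from fluorescence intensity profiles; two common readouts are the percentage of DNA in the tail and the Olive tail moment (fraction of DNA in the tail multiplied by the distance between head and tail intensity centroids). A simplified sketch, assuming the intensities have already been extracted by image analysis (function and variable names are illustrative, not from the cited studies):

```python
def comet_metrics(head_intensity, tail_profile):
    """Compute two common comet-assay readouts from fluorescence intensities.

    head_intensity: total fluorescence in the comet head
    tail_profile:   list of (distance_um, intensity) pairs along the tail,
                    with distance measured from the head centre
    Returns (% DNA in tail, Olive tail moment).
    """
    tail_total = sum(i for _, i in tail_profile)
    total = head_intensity + tail_total
    pct_tail = 100.0 * tail_total / total
    # intensity-weighted centroid of the tail distribution
    centroid = (sum(d * i for d, i in tail_profile) / tail_total
                if tail_total else 0.0)
    olive_tail_moment = (pct_tail / 100.0) * centroid
    return pct_tail, olive_tail_moment

# here 20% of the DNA lies in a tail centred 15 um from the head,
# giving an Olive tail moment of 3.0
pct, otm = comet_metrics(800, [(10, 100), (20, 100)])
print(pct, otm)  # 20.0 3.0
```

An undamaged nucleoid (no tail intensity) scores 0% tail DNA and a tail moment of zero, which is why these metrics scale usefully with strand-break load.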
### Alkaline single-cell gel electrophoresis
This version of the comet assay uses alkaline denaturation surrounding a DNA break to reveal the break (single or double) ([@b41-ol-0-0-6002]). This method enhances comet tails and extends the range of DNA damage that is detected, but sensitivity has not been increased compared to the use of lesion-specific enzymes ([@b34-ol-0-0-6002]).
### Neutral single-cell gel electrophoresis
This is a variant of the comet assay that uses an alkaline treatment, after which the conditions are restored to neutral, followed by gel electrophoresis in neutral or mild alkaline conditions ([@b42-ol-0-0-6002]). This method is less sensitive but remains able to detect SSBs ([@b43-ol-0-0-6002]).
### Use of lesion-specific enzymes
The use of lesion-specific enzymes may aid in the detection of other types of DNA damage, other than SSBs or DSBs, including oxidized bases or pyrimidine dimers ([@b44-ol-0-0-6002]). The enzymes create an apurinic/apyrimidic site by removing the damaged base; endonucleases specifically detect oxidized pyrimidines, and formamidopyrimidine DNA glycosylases detect 8-oxo-7,8-dihydroguanine and ring opened-purines ([@b35-ol-0-0-6002]).
### Bromodeoxyuridine-labelled DNA-comet fluorescence in situ hybridization (FISH)
This technique combines a comet assay and FISH, and is effective in detecting damage and repair site-specific breaks in DNA regions in individual cells ([@b40-ol-0-0-6002]). This assay may be used to measure and discriminate between SSBs or DSBs or modifications from DNA repair.
### Halo assay
This technique is based on the intercalation of PI into the DNA helix, which causes the DNA to adopt a supercoiled structure ([@b45-ol-0-0-6002]). Following lysis, the nucleoids of individual cells appear as 'halos' that correspond to DNA loops, which may be measured to determine chromatin fragility. The 'halo' diameter is proportional to the PI concentration, with relaxed supercoils at low PI and rewound supercoils at high PI ([@b45-ol-0-0-6002]). This method may aid the study of the effects of induced DNA damage, although it only detects alterations in the organization of DNA if the damage has not been repaired, which occurs at radiation doses of 2 Gy. This assay has limited sensitivity, but its advantages are that it measures the DNA damage of a single cell and requires no labeling of DNA with radioactive precursors ([@b46-ol-0-0-6002]).
### Terminal deoxynucleotidyl transferase (TdT) dUTP nick-end labeling (TUNEL) assay
The TUNEL assay detects SSBs or DSBs, as well as levels of apoptosis, via the visualization of DNA fragmentation ([@b45-ol-0-0-6002]). This assay primarily uses the ability of the enzyme TdT to incorporate nucleotide analogues conjugated with a fluorochrome onto the free 3′-OH of a DNA strand, thereby allowing the visualization of nuclei that contain fragmented DNA ([@b47-ol-0-0-6002]). Additionally, fluorescence may be detected using a fluorescent dye-conjugated antibody that recognizes biotin- or digoxigenin-tagged nucleotides ([@b48-ol-0-0-6002]). As the assay detects DNA fragments with fluorescence or radioactivity, microscopy techniques, FCM, photomultipliers and charge-coupled device arrays may be used to detect and quantify DNA damage caused by apoptosis ([@b49-ol-0-0-6002]). Typically, the visualization of DNA damage is possible because of the morphological alterations that occur in the nucleus, including alterations in structural organization and the collapse of chromatin ([@b49-ol-0-0-6002]). During DNA degradation, a specific pattern of fragments is generated by the activity of endonuclease enzymes, and genomic DNA is fragmented into lower-molecular-weight fragments ([@b47-ol-0-0-6002]).
Although this method was designed for detecting DNA damage following apoptosis, DNA fragments with 3′-OH ends may occur in a number of other situations where apoptosis does not take place, including necrosis ([@b49-ol-0-0-6002]). The TUNEL assay is limited in its sensitivity and specificity, but it may also be used to stain cells undergoing DNA repair ([@b50-ol-0-0-6002]). TUNEL is not considered sufficient to establish the type of cell death and must be accompanied by another method that allows for the distinction of the origin of the DNA fragmentation in cells undergoing apoptosis or non-apoptotic DNA damage ([@b51-ol-0-0-6002]). One of the assays that is considered to specifically detect DNA DSBs and used in combination with TUNEL assay is the *in situ* ligation assay ([@b52-ol-0-0-6002]), which is based on ligation of double-stranded oligonucleotide probes by T4 DNA ligase to the ends of the DNA breaks directly in tissue sections ([@b53-ol-0-0-6002]).
### DNA breakage detection (DBD)-FISH
FISH is a technique for the visualization of nucleic acids that improves resolution, speed and safety compared with older methods that use isotopic detection ([@b54-ol-0-0-6002],[@b55-ol-0-0-6002]). This technology also allowed for the development of simultaneous detection of multiple targets, quantitative analyses and live-cell imaging ([@b54-ol-0-0-6002]). FISH is typically used to locate and examine chromosomal, genetic and genomic aberrations that are associated with the development and progression of disease ([@b56-ol-0-0-6002]). Therefore, it has clinically important applications in cytogenetic and oncology, including in identifying gene alterations in patients with cancer ([@b56-ol-0-0-6002]). A modification of this technique, DBD-FISH, has been used to investigate cervical cancer progression by detecting and quantifying DNA breaks in genomic regions that are sensitive to destabilization ([@b57-ol-0-0-6002]). This technique allows detection and quantification of SSBs and DSBs in the genome or in a specific DNA sequence from a single cell ([@b58-ol-0-0-6002]). There are certain disadvantages in fluorescence assays, including the reproducibility and irregularity of the signals, and background autofluorescence ([@b54-ol-0-0-6002]).
### FCM-Annexin V labeling
When DNA breakage occurs, it is important to differentiate between necrosis, autolysis and apoptosis ([@b59-ol-0-0-6002]). FCM was developed to detect apoptosis ([@b60-ol-0-0-6002]); this method allows for the measurement of a large number of cells, and is also used to detect DNA strand fragmentation, chromosomal aberrations and chemical adducts in DNA ([@b61-ol-0-0-6002],[@b62-ol-0-0-6002]).
Annexin V protein is used to quantify the number of dead or apoptotic cells ([@b63-ol-0-0-6002]). The lipid bilayer in healthy cells does not allow for Annexin V binding; however, in cells undergoing apoptosis, Annexin V binds to the outer surface of the cell membrane following the translocation of phosphatidylserine in the presence of Ca^2+^ ([@b64-ol-0-0-6002]). The number of apoptotic cells may be quantified using FCM ([@b65-ol-0-0-6002]). With the use of a secondary antibody tagged with FITC or PI, this method may detect important proteins involved in DNA repair complexes ([@b27-ol-0-0-6002]). FCM is able to measure DNA damage more rapidly and sensitively than the frequently used comet assay.
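The Annexin V-FITC/PI combination described above is usually analyzed as a four-quadrant dot plot. The sketch below shows only the quadrant logic; the cutoff values are placeholders that would in practice be set from unstained and single-stained controls, and the function names are illustrative:

```python
def classify_cell(annexin_fitc, pi, fitc_cutoff=1e3, pi_cutoff=1e3):
    """Assign a cell to one quadrant of an Annexin V-FITC / PI dot plot.

    The cutoff values are placeholders; in a real experiment they are
    derived from unstained and single-stained control samples.
    """
    annexin_pos = annexin_fitc >= fitc_cutoff
    pi_pos = pi >= pi_cutoff
    if annexin_pos and pi_pos:
        return "late apoptotic"    # PS exposed and membrane permeable
    if annexin_pos:
        return "early apoptotic"   # PS externalized, membrane still intact
    if pi_pos:
        return "necrotic/damaged"  # PI enters only through a damaged membrane
    return "viable"                # membrane intact, PS not exposed

def apoptotic_fraction(events):
    """Fraction of (FITC, PI) events falling in either apoptotic quadrant."""
    labels = [classify_cell(fitc, pi) for fitc, pi in events]
    return sum(label.endswith("apoptotic") for label in labels) / len(labels)
```

The same two-channel logic is what distinguishes apoptosis from necrosis: PI-only events indicate membrane damage without the phosphatidylserine exposure that marks apoptosis.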
### Radioimmunoassay (RIA)
The RIA binding assay is used to measure the concentration of antigens using specific antibodies. The target antigen is synthesized with a radiolabel and without a label, and is subsequently bound to specific antibodies ([@b66-ol-0-0-6002]). Following the introduction of a sample, a competitive reaction develops between the radiolabeled antigens and the unlabeled antigens from the sample, which displaces a proportion of the radiolabeled antigen. Standard curves may be obtained from this process by mixing equal amounts of antibody and radiolabeled antigen with increasing concentrations of non-labeled antigen in a constant volume; unknown antigen is similarly mixed with antibody and radiolabeled antigen, and the concentration may subsequently be determined ([@b67-ol-0-0-6002]). This assay may be used to estimate the quantity of 6--4 photoproducts and cyclobutane dimers in DNA ([@b45-ol-0-0-6002]).
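The standard-curve step described above can be illustrated with a simple readback: the unknown's bound signal relative to the zero-antigen tube (B/B0) is interpolated against the standards in log-concentration space. This is a hedged sketch only (a real analysis would typically fit a four-parameter logistic curve rather than interpolate between points; the names and values are hypothetical):

```python
import math

def interpolate_concentration(standards, b_over_b0):
    """Read an unknown off a competitive RIA standard curve.

    standards: (concentration, B/B0) pairs, where B/B0 is the bound
    radiolabel relative to the zero-antigen tube and decreases as the
    unlabeled antigen concentration rises. Interpolates between the two
    bracketing standards in log-concentration space.
    """
    pts = sorted(standards)  # ascending concentration, descending B/B0
    for (c_lo, r_lo), (c_hi, r_hi) in zip(pts, pts[1:]):
        if r_hi <= b_over_b0 <= r_lo:
            t = (r_lo - b_over_b0) / (r_lo - r_hi)
            log_c = math.log10(c_lo) + t * (math.log10(c_hi) - math.log10(c_lo))
            return 10.0 ** log_c
    raise ValueError("signal is outside the range of the standard curve")

standards = [(1.0, 0.9), (10.0, 0.5), (100.0, 0.1)]
print(round(interpolate_concentration(standards, 0.5), 3))  # 10.0
```

Signals above the highest or below the lowest standard are rejected rather than extrapolated, mirroring the usual rule that unknowns must fall within the calibrated range.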
4. Chemiluminescence strategies
===============================
### Enzyme-linked immunosorbent assay (ELISA)
This is one of the most commonly used immunological methods for the quantification of DNA damage ([@b67-ol-0-0-6002]). It consists of affixing an unknown quantity of antigen to a surface and then applying a specific antibody so that the antibody binds to the antigen. The antibody is linked to an enzyme that may be quantified via the addition of an appropriate substrate (colored, fluorescent or radioactive) ([@b45-ol-0-0-6002],[@b67-ol-0-0-6002]).
### Immunohistochemical assay
This assay utilizes fixed cells that have previously been treated with proteases and RNase. This process removes proteins and RNA, ensuring that cross-reaction with DNA does not occur ([@b67-ol-0-0-6002]). A solution of PI is used to counterstain the cells. The resulting immunofluorescence allows for visualization of the nuclei in adduct-negative cells ([@b45-ol-0-0-6002]). Immunohistochemical assays, in addition to FISH, have served as a more effective screening and diagnostic tool to detect alterations in certain metabolites, as in the case of the ALK gene in non-small cell lung cancer ([@b68-ol-0-0-6002]).
### Immunological assay
This technique measures the presence of oxidative DNA damage via the immunoslot-blot system, and uses chemiluminescent detection and secondary antibodies that are conjugated to alkaline phosphatase enzymes and radioactive iodine ([@b69-ol-0-0-6002]). This assay is effective, but is limited by the cross-reactivity of the antibodies with normal DNA bases.
5. Analytical strategies
========================
### High performance liquid chromatography (HPLC)-electrospray tandem mass spectrometry (MS)
Oxidative stress and the absorption of UV light by nucleic acids have been established as causes of oxidative DNA damage, which may promote cancer development ([@b70-ol-0-0-6002],[@b71-ol-0-0-6002]). HPLC coupled to tandem MS in electrospray ionization mode may be a sensitive and accurate method for detecting the modified bases of oxidatively damaged DNA and UV-induced dimeric pyrimidine photoproducts ([@b72-ol-0-0-6002]). Notably, the nucleobases altered and released from genomic DNA during the initial steps of BER may be simultaneously detected and quantified using HPLC-MS ([@b73-ol-0-0-6002]). Therefore, this technique may be useful for detecting SSBs, as these lesions and base alterations involve proteins of the BER pathway ([@b74-ol-0-0-6002]).
This assay has been used to quantify oxidized nucleosides, including 8-oxo-7,8-dihydro-2′-deoxyguanosine, 8-oxo-7,8-dihydro-2′-deoxyadenosine, 5-formyl-2′-deoxyuridine, 5-hydroxymethyl-2′-deoxyuridine, 5-hydroxy-2′-deoxyuridine and the four diastereomers of 5,6-dihydroxy-5,6-dihydrothymidine within isolated and cellular DNA following exposure to γ-rays ([@b75-ol-0-0-6002]). It is also possible to detect tandem DNA lesions as dinucleoside monophosphates, and in addition to detecting the type of DNA damage, HPLC-MS may also provide information on the location and quantity of DNA damage ([@b75-ol-0-0-6002],[@b76-ol-0-0-6002]). Despite the advantage of accuracy, this assay is limited by its high cost and by the extensive experience required to accurately monitor the formation of low levels of oxidized bases within cellular DNA ([@b75-ol-0-0-6002]). However, it remains the method of choice for measuring modified DNA bases.
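Quantification of an oxidized nucleoside such as 8-oxo-7,8-dihydro-2′-deoxyguanosine by HPLC-MS is often performed by isotope dilution: a known amount of a stable-isotope-labelled internal standard is spiked into the sample, and the analyte amount is normalized to the parent nucleoside. A minimal sketch of the arithmetic (the function name, the assumed equal response factor and the example values are illustrative assumptions, not taken from the cited studies):

```python
def lesions_per_million_dg(area_analyte, area_istd, istd_pmol, dg_nmol):
    """Isotope-dilution estimate of oxidized-nucleoside frequency.

    area_analyte, area_istd: chromatographic peak areas of the analyte
    and of the stable-isotope-labelled internal standard
    istd_pmol: amount of internal standard spiked into the sample (pmol)
    dg_nmol:   amount of unmodified 2'-deoxyguanosine in the same run (nmol)
    Assumes an equal MS response for analyte and labelled standard.
    """
    analyte_pmol = (area_analyte / area_istd) * istd_pmol
    dg_pmol = dg_nmol * 1000.0  # nmol -> pmol
    return analyte_pmol / dg_pmol * 1e6

# a peak-area ratio of 0.25 against 0.1 pmol of standard, with 5 nmol
# of dG, corresponds to ~5 oxidized guanines per 10^6 dG
print(lesions_per_million_dg(0.25, 1.0, 0.1, 5.0))
```

Normalizing to the parent nucleoside in the same injection is what makes the readout robust to variation in the amount of DNA digested per sample.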
### Gas chromatography-mass spectrometry (GC-MS)
To understand diverse cellular processes, including DNA damage and repair and their biological consequences, it is important to characterize and quantify DNA lesions.
MS provides structural evidence for a biological or chemical analysis, and in combination with gas chromatography, it enables measurements of more complex samples ([@b77-ol-0-0-6002]). GC-MS is a technique capable of measuring numerous products of DNA damage, including those of the sugar moiety and heterocyclic bases, as in HPLC-MS ([@b78-ol-0-0-6002]). The MS analysis provides sensitive detection of a single DNA lesion in DNA with multiple lesions or nucleobases following chemical or enzyme degradation of the nucleic acids ([@b79-ol-0-0-6002]). Additionally, this technique measures the kinetics of a number of DNA repair enzymes and is able to identify and quantify the expression levels of DNA repair proteins in human tissues ([@b80-ol-0-0-6002],[@b81-ol-0-0-6002]). Typically, these measurements include the hydrolysis of DNA, the derivatization of hydrolysates and the separation via gas chromatography of hydrolysates that are identified and quantified using MS ([@b78-ol-0-0-6002]). GC-MS has also been used to identify DNA-protein crosslinks, including Thy-Gly, Thy-Ala and Cyt-Tyr, in mammalian chromatin *in vitro* ([@b82-ol-0-0-6002]--[@b84-ol-0-0-6002]).
### Electrochemical methods (EM)
It has been established that DNA may be damaged by reactive oxygen species, and the resulting alterations in DNA can be detected using electrochemical methods based on the inherent sensitivity of DNA-mediated charge transport (CT). These methods are also capable of detecting base pair mismatches and the majority of base damage products ([@b85-ol-0-0-6002]). This methodology may detect DNA-mediated CT as a damage detection mechanism for DNA repair enzymes ([@b86-ol-0-0-6002]). It has been proposed that a sensor for the detection of single base mutations and DNA base lesions in duplex DNA could be developed to exploit the sensitivity of this charge transport through DNA films ([@b87-ol-0-0-6002]). The electrochemical method of electrocatalysis has provided the basis for novel assays to detect low levels of lesions, and is potentially useful as an early diagnostic tool. Although this method provides sensitive, selective and low-cost detection of DNA damage, it has the limitation of being unable to recognize thymidine dimer lesions unless they are associated with distortion of the DNA double helix ([@b45-ol-0-0-6002]).
6. Conclusions
===============
[Fig. 1](#f1-ol-0-0-6002){ref-type="fig"} presents a summary of the distinct types of DNA lesions, the repair pathways that are involved and the experimental strategies used to evaluate each type. The study of DNA damage, and of how such damage may be repaired, merits further attention, as it has clinical implications in multifactorial diseases, including cancer and diabetes. There are a number of methods available for the detection, analysis and quantification of DNA lesions, and it is important to identify the advantages and disadvantages of each approach. The combination of these methodologies may provide an overview of DNA lesion analysis and complementary information. The molecular strategies described in the present review may be considered accurate and sensitive, as they examine the type of DNA damage as well as the repair mechanism involved. Notably, the research accumulated in the current review may promote further studies to demonstrate potential phenotypic alterations that occur as a result of DNA lesions.
This review was supported by CONACyT research funds (grant no. PN-2014-249020) and the National Autonomous University of México (grant no. PAPIIT-IN207216).
{#f1-ol-0-0-6002}
Transfection of a human gene for the repair of X-ray- and EMS-induced DNA damage.
EM9 cells are a line of Chinese hamster ovary cells that are sensitive to killing by ethylmethanesulfonate (EMS) and X-rays, since they are unable to repair the DNA damage inflicted by these agents. Through DNA-mediated gene transfer, human DNA and a selectable marker gene, pSV2neo, were transfected into EM9 cells. Resistant clones of transfected cells were selected for by growth in EMS and G418 (an antibiotic lethal to mammalian cells not containing the transfected neo gene). One primary clone (APEX1) and one secondary clone (TEMS2) were shown to contain both marker and human DNA sequences by Southern blot. In cell survival studies, APEX1 was shown to be as resistant to EMS and X-rays as the parental cell type AA8 (CHO cells). TEMS2 cells were found to be partially resistant to EMS and X-rays, displaying an intermediate phenotype more sensitive than AA8 cells but more resistant than EM9 cells. Alkaline elution was used to assess the DNA strand-break rejoining ability of these cells at 23 degrees C. APEX1 cells showed DNA repair capacity equal to that of AA8 cells; 75% of the strand breaks were repaired with a rejoining t1/2 of 3 min. TEMS2 showed similar levels of repair but a t1/2 for repair of 9 min. EM9 cells repaired only 25% of the breaks and showed a t1/2 for repair of 16 min. The DNA repair data are consistent with the survival data in that the more resistant cell lines showed a greater capacity for DNA repair. The data support the conclusion that APEX1 and TEMS2 cells contain a human DNA repair gene.
Chance and Necessity Do Not Explain the Origin of Life
Original Article
Editor’s Note: Trevors & Abel are not fellows of the Discovery Institute but their conclusions, aptly noted in the name of their article “Chance and necessity do not explain the origin of life” echo and reinforce the negative arguments against the sufficiency of chance and necessity of Center for Science & Culture Director Stephen C. Meyer in his article titled “DNA and the Origin of Life: Information, Specification and Explanation,” from Darwinism, Design and Public Education (Michigan State University Press, 2003).
Editor’s Annotation:
How did the complex genetic instructions encoded into DNA come into existence? According to Trevors & Abel, all origin-of-life models that seek to answer the problem either through extraterrestrial means (lithopanspermia), through some other form of life (silicon-based life), or through another medium of information conveyance (RNA) suffer from a fundamental flaw. Namely, they fail to recognize that the generation of instructions is a separate and distinct problem from devising a language system with which to record those instructions (730). As they observe, each specific genetic message from DNA to RNA to protein can only be decoded if the coding/decoding apparatus and operating system pre-exist the message (734). They also say appeals to necessity fall flat on the mathematical truism that no natural mechanism of nature reducible to law can explain the high information content of genomes (734), which they show through information theory. Chance seems to be an equally unviable candidate, as random sequences are themselves the antithesis of prescribed genetic information, and such new information could not be inserted into DNA without sophisticated restriction and ligase enzymes (735). Furthermore, natural selection seems an equally implausible candidate, as evolution works through the differential survival and reproduction of the superior members of each species. Yet it is the origin of the nucleic acid algorithms at the covalently-bound primary structure level (730) that later gives rise to these species that itself needs to be explained. Nature, they observe, has no ability to optimize a conceptual cybernetic system at the decision node (covalently-bound sequence) level (730). Thus new approaches to investigating the origin of the genetic code are required.
Abstract
Where and how did the complex genetic instruction set programmed into DNA come into existence? The genetic set may have arisen elsewhere and was transported to the Earth. If not, it arose on the Earth, and became the genetic code in a previous lifeless, physical-chemical world. Even if RNA or DNA were inserted into a lifeless world, they would not contain any genetic instructions unless each nucleotide selection in the sequence was programmed for function. Even then, a predetermined communication system would have had to be in place for any message to be understood at the destination. Transcription and translation would not necessarily have been needed in an RNA world. Ribozymes could have accomplished some of the simpler functions of current protein enzymes. Templating of single RNA strands followed by retemplating back to a sense strand could have occurred. But this process does not explain the derivation of "sense" in any strand. "Sense" means algorithmic function achieved through sequences of certain decision-node switch-settings. These particular primary structures determine secondary and tertiary structures. Each sequence determines minimum-free-energy folding propensities, binding site specificity, and function. Minimal metabolism would be needed for cells to be capable of growth and division. All known metabolism is cybernetic – that is, it is programmatically and algorithmically organized and controlled. | https://www.discovery.org/a/2664/
Learn about the Earth, life, and how we can search for life elsewhere in the universe. Super-Earths And Life is a course about life on Earth, alien life, how we search for life outside of Earth, and what this teaches us about our place in the universe. Now we know of thousands circling nearby stars....
Learn about the physics, chemistry, biology, and geology of the earth’s climate system. Global Warming Science teaches you about the risks and uncertainties of future climate change by examining the science behind the earth’s climate. You will be able to answer such questions as, "What is the Greenhouse Effect?" Earth’s climate history...
An extensive introduction to synchrotron and X-Ray Free Electron Lasers (XFELs) facilities and associated techniques. Are you interested in investigating materials and their properties with unsurpassed accuracy and fidelity? Synchrotrons and XFELs (X-ray free-electron lasers) are considered to be Science’s premier microscopic tools. What x-rays are and how are they produced Interactions of x-rays with matter...
Learn about DNA structure, how atoms are bonded to make DNA molecules and the periodic table, through Nobel lectures and key scientific papers. In this chemistry course, you will learn about "Life in the Universe." We will explore DNA as genetic material and atoms as the building blocks of life....
Discover the ultimate origin of all chemical elements essential for life. Explore the Big Bang through Nobel Lectures and scientific papers in part 2 of Life in the Universe. Three pillars of the big bang cosmology will be elaborated. The discovery of the proton as the ultimate building block of all nuclei will also be covered....
The course deals with how to simulate and analyze stochastic processes, in particular the dynamics of small particles diffusing in a fluid. The motion of falling leaves or small particles diffusing in a fluid is highly stochastic in nature. Therefore, such motions must be modeled as stochastic processes, for which exact predictions are no longer possible....
Learn two methods used to determine molecular structures and their properties in this introduction to Quantum Mechanics. Knowing the geometrical structure of the molecules around us is one of the most important and fundamental issues in the field of chemistry. In molecular spectroscopy, molecules are irradiated with light or electric waves to reveal rich information, including:...
Learn the basics of cement chemistry and laboratory best practices for assessment of its key properties. Every day, we see concrete used all around us – to build our houses, offices, schools, bridges, and infrastructure. But few people actually understand what gives concrete its strength, resistance, and utility. Understanding of the hydration of cement...
Learn about the science and engineering of future quantum networks whose security is guaranteed by laws of quantum physics. Applying exotic quantum properties such as entanglement to every-day applications such as communication and computation reveals new dimensions of such applications. Quantum encoding and entanglement distribution provide means to establish fundamentally secure communication links for transfer of classical and quantum data....
Learn about fundamental concepts and engineering challenges of quantum technologies. Emerging quantum systems are disruptive technologies redefining computing and communication. Teaching quantum physics to engineers and educating scientists on engineering solutions are critical to address fundamental and engineering challenges of the quantum technologies. Identify fundamental differences between quantum mechanics and classical mechanics.... | http://aviationsub.org/physicsd9b2.html?tpl=a100456 |
After many weeks of excitement and anticipation, three groups recently completed their outdoor adventure education component of the program. A total of 28 students and their teachers, coaches and outdoor education leaders spent five days in the Southern Highlands completing a variety of team-building activities.
The participants were challenged in many ways including facing fears, enduring cold and wet weather, and working collaboratively to achieve a common goal. Activities such as abseiling, zip lining and canoeing were used to teach important life skills such as hope, self-regulation and resilience.
Moving forward in the program, participants will be able to reflect upon their experience, the challenges they faced and how they dealt with these challenges. Our coaches will continue to support them in developing teamwork and collaborative skills through the execution of a community project.
Well done to all participants for showing self-confidence and courage throughout the challenging adventure. | https://thehelmsmanproject.org.au/blog-detail.php?Program-update-outdoor-adventure-education-week-36 |
White is on the attack while red attempts to block.
Each team consists of six players. To start play, a team is chosen to serve by coin toss. A player from the serving team throws the ball into the air and attempts to hit it over the net on a course such that it will land in the opposing team's court (the serve). The opposing team may use no more than three contacts with the volleyball to return it to the opponent's side of the net. These contacts usually consist, first, of the bump or pass, aimed so that the ball's trajectory is directed towards the player designated as the setter; second, of the set (usually an over-hand pass using the wrists to push the finger-tips at the ball) by the setter, aimed towards a spot where one of the players designated as an attacker can hit it; and third, of the attack, in which the attacker spikes (jumping, raising one arm above the head and hitting the ball so it will move quickly down to the ground on the opponent's court) to return the ball over the net. The team in possession of the ball that is attacking as described is said to be on offence.
The team on defence attempts to prevent the attacker from directing the ball into their court: players at the net jump and reach above the top (and if possible, across the plane) of the net to block the attacked ball. If the ball is hit around, above, or through the block, the defensive players arranged in the rest of the court attempt to control the ball with a dig (usually a fore-arm pass of a hard-driven ball). After a successful dig, the team transitions to offence.
The game continues in this manner, rallying back and forth until the ball touches the court within the boundaries or until an error is made. The most frequent errors that are made are either to fail to return the ball over the net within the allowed three touches, or to cause the ball to land outside the court. A ball is "in" if any part of it touches the inside of a team's court or a sideline or end-line, and a strong spike may compress the ball enough when it lands that a ball which at first appears to be going out may actually be in. Players may travel well outside the court to play a ball that has gone over a sideline or end-line in the air.
Other common errors include a player touching the ball twice in succession, a player "catching" the ball, a player touching the net while attempting to play the ball, or a player penetrating under the net into the opponent's court. There are a large number of other errors specified in the rules, although most of them are infrequent occurrences. These errors include back-row or libero players spiking the ball or blocking (back-row players may spike the ball if they jump from behind the attack line), players not being in the correct position when the ball is served, attacking the serve in the frontcourt and above the height of the net, using another player as a source of support to reach the ball, stepping over the back boundary line when serving, taking more than 8 seconds to serve, or playing the ball when it is above the opponent's court. | http://genderi.org/volleyball-plan.html?page=5 |
Published at Friday, June 08th 2018, 16:39:16 PM by Gaetane Marchetti. area rugs. Texture is very important and is dependent on whether the space is tailored or more organic. Carpet affects the acoustics of the room: in spaces that need a little dampening, the fullest, deepest carpet will quiet the space.
Published at Friday, April 06th 2018, 20:44:49 PM. area rugs By Irmine Wagner. TIMOTHY CORRIGAN. Unlike most American designers, who start with a rug and build the room up from there, I like to take a more European approach in which the rug is just a part of the overall design; I hate it when all of the room's colors correspond to the colors of the rug…that looks a little too studied or "decorated" to me.
Published at Wednesday, April 04th 2018, 20:22:05 PM. area rugs By Mare O'Connor. JUAN MONTOYA. First of all I look into the climate (tropical, mountain, cold, etc) before choosing material and design. Size depends on if I want the floor to show or not, but also to delineate the space.
Published at Tuesday, April 03rd 2018, 17:43:21 PM. area rugs By Laycie Dupont. WILLIAM GEORGIS. I look for character when choosing or designing an area rug. I generally look for a rug which leaves between 6" and 15" of wood border, depending on the size of the room.
Published at Sunday, April 01st 2018, 20:00:23 PM. area rugs By Jeraldo Sanz. I do not know what a standard rug really means. I know of antique rugs that are 8 by 10 and cost $250,000. A good contemporary rug should probably cost $10,000 to $15,000, unless it is a normal rug, in which case $5,000 should do it. I must say that I am not quite sure.
Published at Saturday, March 31st 2018, 21:32:50 PM. area rugs By Gwidon Jankowski. Carpets also reflect history and culture (one could literally tour the globe and discover its history by looking at carpets). Carpets can be art and should be purchased with one's budget in mind and the timeline for that particular space.
Published at Friday, March 30th 2018, 19:20:20 PM. area rugs By Gwidon Jankowski. ELLIE CULLMAN. We look for beauty and durability. Every rug should be well made with color fast dyes and good yarns and straight seams regardless of whether it is hand made of silk or machine made of wool. For bedrooms, a good rule of thumb for size is to leave approximately 12" of floor space between the edge of the rug and the base molding.
Published at Wednesday, March 28th 2018, 20:08:12 PM. area rugs By Gwidon Jankowski. CECIL HAYES. I usually choose hand-knotted area rugs in my installations because of their durability. A good rule to follow when selecting a size is to always choose a rug that is between 1/2 and 3/4 the size of the furniture grouping. The average cost for an 8' x 10' area rug is $3,000 to $3,500.
Q. The armature of a 6-pole dc generator has a wave winding containing 664 conductors. Calculate the generated emf when the flux per pole is 0.06 weber and the speed is 250 rpm. At what speed must the armature be driven to generate an emf of 250 V if the flux per pole is reduced to 0.058 weber?

Sol. Given: P = 6; A = 2 (wave wound); Z = 664; Φ1 = 0.06 Wb; N1 = 250 rpm

Eg1 = PΦZN/60A = (6 × 0.06 × 664 × 250)/(60 × 2) = 498 V

Now, given Eg2 = 250 V and Φ2 = 0.058 Wb, find N2.

Since Eg ∝ ΦN,

Eg1/Eg2 = (Φ1 × N1)/(Φ2 × N2)

498/250 = (0.06 × 250)/(0.058 × N2)

N2 ≈ 130 rpm
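As a quick sanity check on the arithmetic, here is a minimal Python sketch of the emf relation Eg = PΦZN/60A used above (the function and variable names are ours, not part of the original question):

```python
# Check the dc-generator emf relation Eg = P * phi * Z * N / (60 * A).
def generated_emf(poles, flux_wb, conductors, speed_rpm, parallel_paths):
    """Return the generated emf (volts) of a dc machine."""
    return poles * flux_wb * conductors * speed_rpm / (60.0 * parallel_paths)

P, Z = 6, 664
A = 2  # a wave winding always has two parallel paths

e1 = generated_emf(P, 0.06, Z, 250, A)           # ≈ 498 V

# Required speed for Eg = 250 V at the reduced flux, rearranged from Eg ∝ Φ·N:
target_emf, new_flux = 250.0, 0.058
n2 = target_emf * 60.0 * A / (P * new_flux * Z)  # ≈ 130 rpm

print(round(e1), round(n2))
```

Running this reproduces both answers from the worked solution (498 V and roughly 130 rpm).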
8 Basic Psychological Processes
Our behavior is one of the resources we have for adapting to the world around us. It allows us to modify our environment and our reality to adapt to what happens in our lives. We know that our behavior is mediated by internal mental processes. But what are those mental processes?
The 8 basic psychological processes are: (a) perception, (b) learning, (c) language, (d) thought, (e) attention, (f) memory, (g) motivation, and (h) emotion. Let’s look at each process individually. All are closely related to each other. Although they maintain their terminological independence, many could not exist without the others. It is better to understand this distinction as an artificial classification that facilitates scientific work.
Perception
Perception is responsible for us having an “image” of the reality that surrounds us. It processes the information we receive from the external stimuli of our senses.
Perception is responsible for organizing and giving meaning to all sensory stimuli. The function of this is obvious: knowing the environment around us allows us to move and interact with it. These are basic and necessary skills to achieve an efficient adaptation.
Learning
This is how we modify and acquire knowledge, abilities, skills, behaviors, etc. It works through what happened in the past. Learning also helps us relate our behaviors with their consequences. It is closely related to memory.
The study of learning is given largely to the field of behaviorism. This gave us theories of classical and operant conditioning to explain how we learn.
This process is useful because it allows us to vary our repertoire of behaviors according to what happened in the past. It allows us to respond more adaptively in present and future situations.
Language
The human being is a social being. That’s why language is such an important process. It gives us the ability to communicate with others. This communication, in the case of humans, is carried out through a complex symbolic code, or language. The complexity of our language allows us to accurately describe almost anything, be it past, present, or future.
The usefulness of this process comes from our need to maintain complex social relationships that allow us to survive in a hostile environment. Language allows us a mode of communication broad enough to maintain human societies.
Thought
This is a complex process that psychology defines as the process in charge of transforming information to organize it and give it meaning. The study of thought began with Aristotelian logic. However, this was not an effective form of analysis, because the human being does not reason with logic.
Reasoning is a quick, but somewhat imprecise process that allows us to act effectively in our environment.
The function of thought is a controversial issue. This is partly due to the existing terminological confusion around it. The most accepted idea is that its objective is to act as a control mechanism in the face of situations presented to us.
Attention
Attention focuses our resources on a series of stimuli while ignoring the rest. We receive a large number of stimuli all at once and we cannot attend to all at the same time.
The attention process is adaptive because, if it did not exist, we would find ourselves overwhelmed by stimuli. We would not know which to react to. It is paradoxical that the self-imposition of a cognitive limitation implies an evolutionary adaptation, but it’s true.
Memory
Memory allows us to encode information for future storage and retrieval. This is an essential process and closely related to all other processes.
Memory allows us to remember explicit information such as the capital of France or procedural information like how to ride a bike. Memory exists because it is really useful to have information about our past experiences at our disposal. This allows us to make guesses about the future and act on them. Without this process, the other basic psychological processes would not exist, since all are strongly supported by memory.
Motivation
Motivation is responsible for providing the body with resources to perform a behavior. It is the process in charge of activating the body and putting it in the ideal state. Another important aspect of motivation is direction. Not only does it prepare the body, it is also responsible for directing behavior among possible options.
The function of motivation is to get the individual to direct their behavior toward their goals and objectives. It prevents them from standing still. This process is closely related to emotion and learning.
Emotion
Emotions are reactions to external stimuli. They allow us to guide our behavior and act quickly in response to the demands of our environment. Emotions have three components:
- Somatic: the physiological changes provoked by emotion
- Behavioral: the spectrum of behavior triggered by an emotion
- Feeling: the subjective experience of the individual
Emotion manages our behavior in a fast and effective way. Most decisions lack enough importance for us to spend a lot of time on. That’s where emotion comes in. It is important to understand that any decision is mediated by our emotions to some degree.
In this article, we have exposed the basic processes in a very superficial way. They are all subjects of extensive study with many more details than we could include here. The intensive study of each of them gives us the basic information to understand the behavior and mental processes of the human being. | https://exploringyourmind.com/8-basic-psychological-processes/ |
September Birthdays: Astrological Myths 2
From celebrations honoring ancient goddesses, such as Demeter and Hathor to a Chinese myth involving the Moon, this crop of astrological connections to folklore, legends, and myths of the past also includes events, such as the Ceremony of Lighting the Fire and the annual Pumpkin Festival in France.
September 13th Birthdays
During ancient Egyptian days, if you were born on September 13th, then you would have shared your special day with the Ceremony of Lighting the Fire, which saw participants light lamps and place them in front of images of worshipped gods. It's not uncommon to find that a Virgo born on this day is filled with compassion and a charitable nature. Serving society in a meaningful way might just be your calling.
September 15th Birthdays
On this day, a Chinese legend is said to have taken place. The Chinese emperor Ming Wong was in his garden when he decided to ask a priest what the Moon was made of. Instead of receiving an answer, the priest supposedly transported the emperor to the Moon. Upon his return, the emperor spent days giving gold coins to his people. His reason? He told the people that it was in celebration of the Moon’s birthday and that he had witnessed a miracle. Some of the qualities that an individual born on this day typically possesses include the ability to understand technical details, as well as creativity and artistic talent.
September 17th Birthdays
Today is the first day of the Egyptian month of Hathor, who served as the sky goddess and patron of lovers. During ancient Greek times, this was the date of the annual celebration centered on the goddess Demeter. A festival of secret women’s rites took place on this date. While often seen as serious and self-contained, a Virgo born on this day also showcases characteristics, such as friendliness, compassion, and a kind heart.
Additional September birthday connections include:
September 10th: Beginning at dawn, the Ceremony of the Deermen would take place in Staffordshire, England, with dancers placing antler horns on their heads and carrying around poles with antlers to pay homage to Robin Hood and Maid Marion.
September 11th: A festival known as the Day of the Queens took place on this day in Egypt, which honored Queens Hatshepsut, Cleopatra, and Nefertiti, goddesses of the ancient world.
September 12th: The annual Pumpkin Festival in France sees people flock to produce markets in search of the largest specimen, which by the end of the festivities is cut open and made into bread and soup that is shared amongst those attending the festival. | https://www.unexplainable.net/space-astrology/september-birthdays-astrological-myths-2.php
---
author:
- 'I. Olivares-Salaverri'
- 'Marcelo B. Ribeiro'
title: Cosmological Models and the Brightness Profile of Distant Galaxies
---
Introduction
============
The most basic goal of cosmology is to determine the spacetime geometry and matter distribution of the Universe by means of astronomical observations. Accomplishing this goal is not an easy or simple task, and due to that, since the early days of modern cosmology several methods have been advanced such that theory and observations are used to check one another. Detailed analysis of the cosmic microwave background radiation, galaxy number counts and supernova cosmology are just a few of the methods employed nowadays in cosmology, deriving results that complement one another. In this work we aim at discussing one of these methods, namely the connection between galaxy brightness profiles and cosmological models.
Thirty years ago, Ellis and Perry (1979) advanced a very detailed discussion where such a connection is explored. Their aim was to determine the spacetime geometry of the universe by connecting the angular diameter distance, also known as area distance, obtained from a relativistic cosmological model, and the detailed photometry of galaxies. They then discussed how the galaxy brightness profiles of high redshift galaxies could be used to falsify cosmological models as the angular diameter distance could be determined directly from observations.
Nevertheless, to carry out this program to its full extent, one would need detailed information about galaxy evolution. Without a consistent theory on how galaxies evolve, it is presently impossible to analyze cosmological observations without assuming a cosmological model. In addition, brightness profiles are subject to large observational errors, making it difficult to achieve Ellis and Perry’s aim of possibly using the angular diameter distance determination to distinguish cosmological models.
This work is based on Ellis and Perry (1979) theory, although our aim is more limited in the sense that we do not seek to determine the underlying cosmological model by directly measuring the angular diameter distance, but to assume the presently most favored cosmology, deriving cosmological distances from it and seeking to discuss the consistency between its predictions and detailed observations of surface brightness of distant galaxies. Our goal is to obtain a theoretical brightness profile by means of the assumed cosmological model and compare it with its observational counterpart at various redshift ranges, and for different galaxy morphologies.
The outline of the paper is as follows. In §2 we introduce the cosmological distances and their connections to astrophysical observables. In §3 we describe the parameters that determine the surface brightness structure, and in §4 we discuss the criteria for selecting galaxies, in view of the importance of evolutionary effects in galactic surface brightness.
Cosmological Distances
======================
Let us consider that source and observer are in relative motion with respect to each other. From the point of view of the source, the light beams that travel along future null geodesics define a solid angle $d\Omega_{\scriptscriptstyle G}$ with origin at the source and have a transversal section area $d\sigma_{\scriptscriptstyle G}$ at the observer.
The flux $F_{\scriptscriptstyle G}$ measured at the source considering a 2-sphere $S$ lying in the locally Euclidean space-time centered on the source is related to the source luminosity by, $$L = \int_{S} F_{\scriptscriptstyle G} d\sigma_{\scriptscriptstyle G}
= 4\pi F_{\scriptscriptstyle G}$$ assuming that it radiates with spherical symmetry and that locally this is a unit 2-sphere. If we now consider the flux $F_r$ radiated by the source but measured at the observer, the source luminosity is $$L = \int_{S} (1+z)^2 F_r d\sigma_{\scriptscriptstyle G},$$ where the factor $(1+z)^2$ comes from the *area law* (Ellis 1971) and $z$ is the redshift. This law establishes that the source luminosity is independent of the observer. Hence these two expressions are equal, and we may write, $$L = \int_{S} F_{\scriptscriptstyle G} \; d\sigma_{\scriptscriptstyle G} =
\int_{S} (1+z)^2 F_r \; d\sigma_{\scriptscriptstyle G},$$ $$(1+z)^2 F_r \; d\sigma_{\scriptscriptstyle G} = const =
F_{\scriptscriptstyle G} \; d\Omega_{\scriptscriptstyle G}
\label{e1}$$ From the viewpoint of the source, we may now define the *galaxy area distance* $d_{\scriptscriptstyle G}$ as, $$d\sigma_{\scriptscriptstyle G} = {d_{\scriptscriptstyle G}}^2
d\Omega_{\scriptscriptstyle G},$$ which considering eq. (\[e1\]), becomes, $$F_r = \frac{L}{4\pi}\frac{1}{(d_{\scriptscriptstyle G})^2(1+z)^2}.
\label{e2}$$ The factor $(1+z)^2$ may be understood as arising from *(i)* the energy loss of each photon due to the redshift $z$, and *(ii)* the lower measured rate of arrival of photons due to time dilation. Eq. (\[e2\]) alone, however, has little practical use, since the *galaxy area distance* $d_{\scriptscriptstyle G}$ cannot be measured.
Considering a bundle of null geodesics converging to the observer, that is, light beams traveling from source to observer, they define a solid angle $d\Omega_{\scriptscriptstyle A}$ with the origin at the observer and have a transversal section area $d\sigma_{\scriptscriptstyle A}$ at the source. We may now define the *angular diameter distance* $d_{\scriptscriptstyle A}$ by $$d\sigma_{\scriptscriptstyle A} = {d_{\scriptscriptstyle A}}^2
d\Omega_{\scriptscriptstyle A}.$$ The *reciprocity theorem*, due to Etherington (1933; see also Ellis 1971, 2007), relates $d_{\scriptscriptstyle G}$ and $d_{\scriptscriptstyle A}$ by means of the following expression, $${d_{\scriptscriptstyle G}}^2 = (1+z)^2 {d_{\scriptscriptstyle A}}^2.
\label{recip}$$ This relation is purely geometric, valid for any cosmology and contains information about spacetime curvature effects. Combining eqs. (\[e2\]) and (\[recip\]), it is possible to connect the flux received by the observer and the *angular diameter distance* by $$F_r = \frac{L}{4\pi {d_{\scriptscriptstyle G}}^2}\frac{1}{(1+z)^2} =
\frac{L}{4\pi {d_{\scriptscriptstyle A}}^2}\frac{1}{(1+z)^4}.$$
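To make the distance relations above concrete, the sketch below numerically evaluates $d_{\scriptscriptstyle A}(z)$ for an assumed spatially flat FLRW model. The parameter values ($H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$) are illustrative choices, not values derived in this work:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def angular_diameter_distance(z, h0=70.0, omega_m=0.3, omega_l=0.7, n=10001):
    """Angular diameter distance d_A(z) in Mpc for a spatially flat FLRW model.

    Trapezoidal integration of the comoving distance
    D_C = (c/H0) * int_0^z dz'/E(z'), with E(z) = sqrt(Om*(1+z)^3 + OL),
    and d_A = D_C / (1 + z) for zero spatial curvature.
    """
    zs = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(omega_m * (1.0 + zs) ** 3 + omega_l)
    dz = zs[1] - zs[0]
    d_c = (C_KM_S / h0) * np.sum(0.5 * (f[1:] + f[:-1])) * dz
    return d_c / (1.0 + z)

def luminosity_distance(z, **kw):
    """d_L from the Etherington reciprocity relation: d_L = (1+z)^2 * d_A."""
    return (1.0 + z) ** 2 * angular_diameter_distance(z, **kw)
```

By construction, the reciprocity relation of eq. (\[recip\]) holds exactly here; note also that in this model $d_{\scriptscriptstyle A}(z)$ is not monotonic, peaking near $z \sim 1.6$.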
Connection with the surface photometry of cosmological sources
==============================================================
Galaxies can be used to measure cosmological parameters because they are located far enough away for spacetime curvature effects to be significant. The flux emitted by these objects and received by the observer depends on the surface brightness, which, by definition, is distance independent, although it is redshift dependent (Ellis 2007). Based on the reciprocity theorem, and bearing in mind that we actually observe in very restricted wavelength ranges, it is possible to connect the emitted and received *specific surface brightness*, respectively denoted by $B_{e, \nu_e}$ and $B_{r, \nu_r}$, according to the following equation (Ellis and Perry 1979), $$B_{r,\nu_r}(\alpha ,z) = \frac{B_{e}(R,z)}{(1+z)^3} J[\nu_r (1+z),R, z].
\label{e4}$$ Here $J$ is the *spectral energy distribution* *(SED)*, $R$ is the *intrinsic galactic radius*, $\nu_r$ and $\nu_e$ are respectively the *received* and *emitted frequencies*, and $\alpha$ is the angle measured by the observer between the galactic center and its outer luminous limit, such that (Ellis & Perry 1979), $$R = \alpha \; d_{\scriptstyle A} (z).$$ Note that $d_{\scriptstyle A}$ is given by the assumed cosmological model. Our aim is to compare the surface brightness observational data with the theoretical results calculated by means of eq. (\[e4\]) and reach conclusions about the observational feasibility of the assumed cosmological model.
To calculate the theoretical surface brightness, we have to assume some dependence between the surface brightness and the intrinsic galactic radius. Since a fundamental assumption in observational cosmology is that homogeneous populations of galaxies do exist, the structure and evolution of each member of such a group of galaxies will be essentially identical. This assumption implies that *(i)* the frequency dependence of the emitted galaxy radiation does not change across the face of the galaxy, that is, it is $R$ independent, and *(ii)* the radial variation of the brightness is characterized by an amplitude $B_0$, which may evolve with the redshift, i.e., $B_0(z)$, and a normalized radial functional form $f[R(z)/a(z)]$ that does not evolve. So, the emitted surface brightness can be characterized as (Ellis and Perry 1979), $$B_{e,\nu_e} (R,z) = B_{0}(z)J(\nu_e,z)f[R(z)/a(z)].
\label{e5}$$ Now, let us define the parameter $\beta = R(z)/a(z)$, where $a(z)$ is the scaling radius. The redshift dependence of the parameters in the equation above is due to galactic evolution. A detailed study of the parameters of eq. (\[e5\]) and their evolution is fundamental to this work; otherwise, we will not be able to infer whether the difference between the observational data and the modeled surface brightness is due to the cosmological model or to a poor characterization of the brightness structure and its evolution.
Surface brightness profiles
---------------------------
The function $f[R(z)/a(z)]$ characterizes the shape of the surface brightness distribution. Various profiles exist in the literature. Some of them are one-parameter profiles, like *Hubble* (1930), *Hubble-Oemler* and *Abell-Mihalas* (1966), which characterize the galactic brightness distribution quite well when the disk or bulge is dominant. They are given by, $$B_{\mathrm{H}, e, \nu_{e}} (R,z) = \frac{B_{0}(z)J(\nu_e,z)}{(1+\beta )^2};$$ $$B_{\mathrm{HO}, e, \nu_{e}}(R,z) = \frac{B_{0}(z)J(\nu_e,z)e^{-R^2/R^2_t}}{(1+\beta )^2};$$ $$\begin{aligned}
B_{\mathrm{AM}, e, \nu_{e}}(R,z) & = & \frac{B_{0}(z)J(\nu_e,z)}{(1+\beta)^2}; \\
& & (\beta \leq 21.4); \nonumber \end{aligned}$$ $$\begin{aligned}
B_{\mathrm{AM}, e, \nu_{e}}(R,z) & = & \frac{22.4 B_{0}(z)J(\nu_e,z)}{(1+\beta)^2}; \\
& & (\beta > 21.4). \nonumber\end{aligned}$$ Other profiles like *Sérsic* and *core-Sérsic* use two or more parameters, reproducing the galactic profile almost exactly (Trujillo et al. 2004). $$\begin{aligned}
B_{\mathrm{S}, e, \nu_{e}}(R,z)= B_{eff}J(\nu_e,z) e^{ \left\{ - b_{n} \left[
{\left( \frac{R}{R_{eff}} \right)}^{1/n} - 1 \right] \right\} }\end{aligned}$$ $$\begin{aligned}
B_{\mathrm{cS}, e, \nu_{e}}(R,z)&=&B_bJ(\nu_e,z) 2^{-\frac{\gamma}{\alpha}} {\left[1+{\left(\frac{R_b}{R}\right)}^{\alpha}
\right]}^{\gamma/\alpha} \times \nonumber \\
& \times & e^{\left[-b { \left( \frac{R^{\alpha}+R^{\alpha}_b}{R^{\alpha}_{eff}} \right) } ^{1/n\alpha}
+b2^{1/\alpha n} {\left( \frac{R_b}{R_{eff}} \right)} ^{1/n}\right]},\end{aligned}$$ where $B_{eff}$ is the surface brightness at the effective radius $R_{eff}$ that encloses half of the total light, $B_b$ is the surface brightness at the core or break radius $R_b$. $\gamma$ is the slope of the inner power-law region, $\alpha$ controls the sharpness of the transition between the cusp and the outer Sérsic profile and $n$ is the shape parameter of the outer Sérsic. The quantity $b$ is a function of the parameters $\alpha$, $R_b/R_{eff}$, $\gamma$ and $n$. The parameter $b_n$ depends only on $n$.
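As a minimal numerical sketch (not part of the analysis itself), the one-parameter Hubble law and the Sérsic law above can be evaluated as follows. The closed-form expression used for $b_n$ is the standard asymptotic expansion, accurate for $n \gtrsim 0.5$; all parameter values in the assertions are arbitrary:

```python
import numpy as np

def b_n(n):
    """Approximate Sersic b_n via the standard asymptotic expansion."""
    return 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n) + 46.0 / (25515.0 * n ** 2)

def sersic(r, b_eff, r_eff, n):
    """Sersic profile B(R) = B_eff * exp{-b_n [(R/R_eff)^(1/n) - 1]}."""
    return b_eff * np.exp(-b_n(n) * ((r / r_eff) ** (1.0 / n) - 1.0))

def hubble(r, b0, a):
    """One-parameter Hubble (1930) law B(R) = B_0 / (1 + R/a)^2."""
    return b0 / (1.0 + r / a) ** 2
```

By construction, the Sérsic profile returns $B_{eff}$ exactly at $R = R_{eff}$, and both profiles decrease monotonically with radius.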
Sample Selection Criteria
=========================
To analyze only the effect of the cosmological model on the surface brightness and minimize the effect of evolution, we assume that there exists a homogeneous class of objects whose properties are similar at all redshifts, allowing us to carry out comparisons at different values of $z$. The galaxy sample selection follows this assumption. When choosing galaxies of different morphologies, we must consider the following requirements:
1. The existence of different morphological populations at different redshift values. From the Hubble sequence we know that not all types of galaxies exist in all epochs. Therefore, it seems reasonable to choose early-type galaxies because they exist at different redshift values and have a lower star formation rate, which could imply smoother evolution.

2. The best frequency band to observe. If we consider all wavelengths, the theory tells us that the total intensity is equal to the surface brightness, so the chosen bandwidth should include most of the SED in the interval between $\nu_e$ and $\nu_r$.

3. Whether the chosen galaxies are located in clusters or are field galaxies.
I.O.-S. is grateful to CAPES for the financial support.
1966, , 71, 7
1999, *Galactic Astronomy*
1971, in: Sachs, R.K. (ed.) *General relativity and cosmology*, Proceedings of the International School of Physics “Enrico Fermi”, Course 47, pp. 104–182. Academic Press, New York and London. Reprinted in Gen. Relativ. Gravit., 2009, 41, 581
2007, Gen. Relativ. Gravit., 39, 1047
1979, , 187, 357
1933, Phil. Mag., 15, 761. Reprinted in Gen. Relativ. Gravit., 2007, 39, 1055
1930, , 71, 231H
2004, , 127, 1917
The Pittsburgh Steelers have been busy in free agency, including the signing of quarterback Mitchell Trubisky, and today I wanted to provide some data context to what he has done in his career so far. Let’s get right to it, starting with Expected Points Added (EPA) and Completion Percentage Over Expected (CPOE) from nflfastR:
As we can see, Trubisky is below the mean in both data points for his career, with four years with the Chicago Bears leaving much to be desired overall, plus minimal time backing up quarterback Josh Allen in Buffalo last season. He ranked 30th in EPA out of the 39 quarterbacks on the graph and 31st in CPOE, highlighting his well below average play compared to his peers. I would be remiss if I did not mention recently retired Ben Roethlisberger, to see how Trubisky has stacked up to him and many other starters in the time period.
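For context on how summary numbers like these are built, here is a toy sketch of computing EPA per play and CPOE from play-by-play rows. The column names mirror nflfastR conventions (epa, complete_pass, and cp for expected completion probability), but every value below is made up purely for illustration:

```python
import pandas as pd

# Toy play-by-play sample; column names mirror nflfastR conventions,
# but the numbers are invented for illustration only.
plays = pd.DataFrame({
    "passer": ["QB_A"] * 4 + ["QB_B"] * 4,
    "epa": [0.5, -0.2, 1.1, -0.6, 0.1, 0.3, -0.4, 0.2],
    "complete_pass": [1, 0, 1, 0, 1, 1, 0, 1],
    "cp": [0.65, 0.55, 0.70, 0.40, 0.60, 0.62, 0.48, 0.66],
})

summary = (
    plays.groupby("passer")
    .agg(epa_per_play=("epa", "mean"),
         comp_pct=("complete_pass", "mean"),
         exp_comp_pct=("cp", "mean"))
)
# CPOE: actual completion rate minus the model's expected completion rate
summary["cpoe"] = summary["comp_pct"] - summary["exp_comp_pct"]
print(summary.round(3))
```

With real nflfastR data the idea is the same, just aggregated over a full season of dropbacks per quarterback.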
Considering the strong free agency Pittsburgh has had, it will be interesting to see (if he is indeed the starter in 2022) what kind of trend emerges with arguably the best top-to-bottom roster Trubisky has been a part of.
Speaking of trends, what did this data look like in 2020, the last year he received meaningful snaps?
Here we see an improvement compared to his career numbers, mainly in his CPOE. Trubisky’s ranks improved to 20th in EPA out of 36 quarterbacks, along with 19th in CPOE. It is important to note he only played in nine games that season, playing the first three weeks, then being benched for Nick Foles (on the left of the graph for comparison), missing time with a right shoulder injury, and finally returning to play the last six games of the season.
Seeing the names above and below him gives context to his level of improvement and to realistic expectations, whether positive, negative, or stagnant. It is also interesting to see him land above Roethlisberger on the above graph for optimism compared to recent seasons, and to see quarterback Dwayne Haskins ranking at the bottom in both data points in his last opportunity with Washington.
How did Trubisky fare in completed and intended air yards in 2020?
Trubisky was right at the mean for the qualifying quarterbacks in intended air yards, but had the sixth-lowest rank (though above Roethlisberger) in completed air yards at just over five. This points to a hopeful improvement for Pittsburgh regarding the depth of throws compared to the offense in recent seasons, but obviously the completed number and his accuracy will be the crucial factors to improve if he is in fact the guy next season. (P.S. HASKINS…)
While we’re on the topic, let’s look at red zone data for Trubisky’s career, which highlights accuracy when defenses have less ground to cover, along with decision making and team value in scoring situations using Points Earned and Season EPA from SIS:
Unfortunately, Trubisky has been below average in the red zone throughout his career, which is definitely a concern moving forward, but hopefully this change of scenery and a 2021 year sitting and learning behind Allen was beneficial.
Trubisky’s only positive numbers in the data points came in 2018 when he played 14 games with 64 red zone attempts, 18 touchdowns, and one interception, paired with a solid 64.1 completion percentage that ranked top 25 in this five-year sample, and a personal best on target percentage of 75%!
In his other three seasons his completion percentage was a low 55% or below. His worst season, and the second-worst points earned result on the graph, was 2019, when he played 13 games and had 61 red zone attempts, with a low 50.8% completion percentage and 67.2% on target percentage, 14 touchdowns, but tied for the second-most interceptions in a season with four!
Seeing an improvement (while minimal) of 14 touchdowns and two interceptions in 2020 got me wondering what his overall touchdown/interception ratios were in that season:
With fewer games than the players on the right, Trubisky threw 16 touchdowns and eight interceptions for a plus-eight differential. Landing on the upper left is obviously important to note, tying for the second-best differential of players on the left side of the graph. So hopefully this points to some lessons learned with experience, along with a hopeful positive trend around his 2018 red zone numbers with his opportunity in 2022.
Another interesting element to monitor will be the Matt Canada equation, and how the passing attack will look with the scheme and style expected to change from Roethlisberger’s Hall of Fame input and influence to Trubisky’s mobile skillset. Let’s start with outside-the-pocket passing:
Similar to the red zone data, 2018 was the peak season for Trubisky, with above-the-mean results in both data points. His points earned ranked 32nd in the time period, along with an EPA that ranked 14th! Each of his other seasons was below the mean in both data points, with 2019 being his only other positive number in points earned. So again, this points out the highs and lows of his time with the Bears.
When Trubisky decided to take off and scramble, he fared best in his first two seasons. Looking at data from PFF, he had the most volume on such plays in 2018 with 38 attempts and 330 yards. In each of his other three seasons, he had 20-25 scrambles, and his rookie year provided the next-most yardage at 229. For recency context, he had 20 scrambles in 2020 for 176 yards, ranking 12th out of 21 players with 20 attempts or more, good for 8.8 yards a clip. For his career he averaged 8.2 yards per scramble, hinting at good overall decisions about when to take off.
Another big and expected change I’ll wrap up with is the use of play action, and here is how Trubisky has fared in his career using data from SIS:
This element of the game helps most passers across the board, as we can see on the graph with the mean in each data point landing at higher values. Trubisky had his best results in 2019 and 2020, and if the trend holds true, this could be the most noticeable and successful change to the Steelers’ offense, considering the additions and expected improvement on the offensive line setting up the threat of the running game and opening up easy opportunities to pass or scramble off of that.
Overall, the data confirms much of what we have heard: the tenure with the Bears left much to be desired, along with some flashes of potential. Hopefully my goal of adding context was enjoyable to many of us who did not watch him weekly during that time. Here’s to hoping the Steelers get the most out of a more seasoned player, maximizing his strengths in a seemingly better situation.
In an unusually aggressive and active free agency with ample cap space, it will be interesting to see how the pieces come together for Pittsburgh in 2022, and what Trubisky’s time looks like in this next chapter of the post-Ben Roethlisberger era.
What are your thoughts on the Trubisky signing? Do you think the team will draft a quarterback as well? Thanks for reading, and let me know your thoughts in the comments! | https://steelersdepot.com/2022/03/mitchell-trubisky-passing-scramble-data/ |
Surveys have shown that the economic environment tends to receive the greatest amount of attention from export planners. The primary concern in analysing the economic environment is to assess opportunities for marketing the company’s products abroad or possibly for locating some of the company’s production and distribution facilities outside of South Africa. Indeed, when striving to identify potential countries to focus on, one of the major differentiating factors will be the differences in the economic environments that exist between potential target countries.
Decisions about how much of a product people buy and which products they choose to buy are largely influenced by their purchasing power. If a large portion of a country’s population is poor, the market potential for many products may be lower than it would be if the population were reasonably prosperous. If a country is expected to enjoy rapid economic growth and large sectors of the population are expected to share in the increased wealth, sales prospects for many products would clearly be more promising than if the economy were stagnating.
Thus, if you are comparing potential countries to focus your export efforts on, you must consider factors such as the general economic outlook, employment levels, levels and distribution of income, growth trends, etc. It should be borne in mind, however, that when income levels drop, people will generally cut back on their purchases of luxury items before they cut back on necessities. Thus, poor countries which are allocating scarce foreign exchange reserves only to necessities (e.g. cheap clothing, simple agricultural tools, etc.) may prove to be more reliable markets than rich countries for certain export products.
Below are some of the economic factors which should be of interest to the exporter. To learn more about these factors, please follow the corresponding links: | https://exporthelp.co.za/steps/1-planning/environments/economic/ |
TECHNICAL FIELD
The present invention relates to a thermoacoustic temperature control system.
BACKGROUND ART
Conventionally, thermoacoustic temperature control systems in which a prime mover and a load are incorporated in a piping with a working gas encapsulated therein have been known (see, for example, Patent Literature 1). The prime mover includes a prime mover-side heat accumulator and prime mover-side heat exchangers connected to opposite end portions, in an extension direction of the piping, of the prime mover-side heat accumulator. The load includes a load-side heat accumulator and load-side heat exchangers connected to opposite end portions, in the extension direction of the piping, of the load-side heat accumulator.
This thermoacoustic temperature control system can be used as a thermoacoustic refrigeration system in which a refrigerator is employed as a load or a thermoacoustic heating system in which a heater is employed as a load. For example, the aforementioned literature describes a thermoacoustic refrigeration system in which a refrigerator is employed as a load. In this thermoacoustic refrigeration system, at the prime mover, a temperature gradient is generated between the opposite end portions of the prime mover-side heat accumulator, using heat of a fluid provided from the outside to the prime mover-side heat exchanger (for example, exhaust heat from a plant), the fluid having a temperature that is higher than room temperature. The temperature gradient makes the working gas perform self-excited vibration, and thermal energy is thereby converted into acoustic energy (vibrational energy) inside the prime mover-side heat accumulator.
On the other hand, at the load (refrigerator), a temperature gradient is generated between opposite end portions of the load-side heat accumulator, using the acoustic energy transmitted to the load-side heat accumulator through the piping. This temperature gradient produces the working gas having a temperature that is lower than room temperature. As a result of the working gas having a temperature that is lower than room temperature being supplied to the load-side heat exchanger, a temperature of an object connected to the load-side heat exchanger is lowered and the object is thus maintained at a low temperature.
CITATION LIST
Patent Literature
Patent Literature 1: Japanese Patent No. 5799515
SUMMARY OF THE INVENTION
The aforementioned literature indicates an example of a thermoacoustic refrigeration system in which a piping includes a looped piping portion having a looped shape and a branch piping portion extending so as to branch from a part of the looped piping portion, a prime mover is incorporated in the branch piping portion and a load is incorporated in the looped piping portion (see, for example, FIG. 6 in Patent Literature 1).
Generally, in a looped piping portion, an acoustic mass flow of a working gas is generated because of a pressure difference (temperature difference) inside the looped piping portion. Therefore, in the configuration in which a load is incorporated in a looped piping portion, an acoustic mass flow passes through the inside of the load. The passage of the acoustic mass flow through the inside of the load makes it impossible to form an ideal temperature gradient between opposite end portions of a load-side heat accumulator because of the movement of the working gas.
In order to solve this problem, in the thermoacoustic refrigeration system indicated in the aforementioned literature, a blocking film is inserted at a position in the vicinity of a load-side heat exchanger on the low temperature side of the looped piping portion. The blocking film prohibits an acoustic mass flow (working gas) from passing therethrough but is capable of vibrating along with vibration of the working gas, and thus allows transmission of a vibrational wave (vibrational energy) of the working gas. Therefore, insertion of the blocking film as above enables solving the aforementioned problem while allowing transmission of vibrational energy.
Here, since the blocking film vibrates along with vibration of the working gas, stress repeatedly acts on the blocking film. Therefore, there is a problem in durability of the blocking film. Regarding this point, the above literature discloses a technique in which the blocking film is disposed in the vicinity of a position a distance that is half of a maximum amplitude of the blocking film away from the load-side heat exchanger on the low temperature side, in the looped piping portion. The technique prevents interference between the load-side heat exchanger on the low temperature side and the blocking film, enabling enhancement in durability of the blocking film.
On the other hand, for enhancement in durability of the blocking film, unlike in the above literature, the present inventor looked at distribution in magnitude of acoustic energy (vibrational energy) inside the looped piping portion. Then, the present inventor has obtained knowledge on conditions for enhancement in durability of the blocking film from the perspective of the distribution in magnitude of acoustic energy inside the looped piping portion.
The present invention has been made in view of the above point and an object of the present invention is to provide a thermoacoustic temperature control system that enables enhancement in durability of a blocking film inserted in a part of a looped piping portion.
In a thermoacoustic temperature control system according to the present invention, as in the above, a prime mover including a prime mover-side heat accumulator and prime mover-side heat exchangers and a load including a load-side heat accumulator and load-side heat exchangers are incorporated in a piping with a working gas encapsulated therein. Then, the piping includes a looped piping portion having a looped shape and a branch piping portion branching from a branching point that is a part of the looped piping portion, the prime mover is incorporated in the branch piping portion and the load is incorporated in the looped piping portion.
A characteristic of the thermoacoustic temperature control system according to the present invention lies in that a blocking film that prohibits the working gas from passing therethrough and is capable of vibrating along with vibration of the working gas is inserted at a position in the vicinity of the branching point, in a part of the looped piping portion between the load-side heat exchanger on the low temperature side and the branching point.
Acoustic energy (vibrational energy) formed by the prime mover incorporated in the branch piping portion reaches the branching point via the branch piping portion and then makes a circuit of the looped piping portion from the branching point in a direction in which the acoustic energy passes through the inside of the load from the high temperature side to the low temperature side, and after reaching the branching point again, merges with acoustic energy newly reaching the branching point via the branch piping portion and circulates in the looped piping portion again.
Here, the distribution in magnitude of the acoustic energy (vibrational energy) inside the looped piping portion is considered. When the acoustic energy moves inside the piping, the magnitude of the acoustic energy is gradually decreased because of energy loss that inevitably occurs. Therefore, the magnitude of the acoustic energy becomes gradually smaller as the acoustic energy moves from the branching point along the looped piping portion, reaches a minimum immediately before the acoustic energy reaches the branching point again, becomes larger again at the point in time when the acoustic energy has reached the branching point (because of merging with new acoustic energy), and subsequently becomes gradually smaller as stated above. In other words, in the looped piping portion, the magnitude of the acoustic energy reaches a maximum at the branching point and reaches a minimum at a position in the vicinity of the branching point, between the load-side heat exchanger on the low temperature side and the branching point.
On the other hand, for enhancement in durability of the blocking film, maximum stress acting on the blocking film may be reduced. In order to reduce the maximum stress acting on the blocking film, a maximum amplitude of the blocking film may be reduced. In order to reduce the maximum amplitude of the blocking film, the magnitude of the acoustic energy (vibrational energy) passing through the blocking film may be reduced. In other words, insertion of the blocking film at a position at which the acoustic energy reaches a minimum inside the looped piping portion enables enhancement in durability of the blocking film to the extent possible.
The above-stated characteristic of the thermoacoustic temperature control system according to the present invention is based on such knowledge. In other words, inserting the blocking film at a position in the vicinity of the branching point, in the part of the looped piping portion between the load-side heat exchanger on the low temperature side and the branching point enables inserting the blocking film at a position at which the acoustic energy reaches a minimum inside the looped piping portion. As a result, the durability of the blocking film can be enhanced to the extent possible.
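Purely for illustration, and not as part of the claimed configuration, the distribution described above can be sketched with a simple exponential-attenuation toy model; the loop length and attenuation coefficient below are arbitrary assumed values:

```python
import math

def attenuated_energy(e_in, alpha, x):
    """Toy model: acoustic power entering the loop at the branching point
    (x = 0) decays exponentially, E(x) = E_in * exp(-alpha * x), as it
    travels once around the loop back toward the branching point."""
    return e_in * math.exp(-alpha * x)

# Purely illustrative numbers (not taken from the embodiment)
loop_length = 10.0   # loop circumference [m]
alpha = 0.05         # attenuation coefficient [1/m]
xs = [i * loop_length / 100.0 for i in range(101)]
es = [attenuated_energy(100.0, alpha, x) for x in xs]

# The minimum lies at x -> loop_length, i.e. just upstream of the
# branching point, which is where the blocking film is inserted.
print(xs[es.index(min(es))])  # -> 10.0
```

In this simplified model the energy is largest right after the branching point and smallest just before the flow returns to it, matching the placement rationale for the blocking film.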
In the thermoacoustic temperature control system according to the present invention, it is preferable that: each of respective end portions of three parts of the piping, the three parts converging from three directions toward the branching point, may be connected to a corresponding connection end portion of three connection end portions of a three-way piping joint; and the blocking film may be directly inserted between an end portion of a part of the piping, the part extending from the load-side heat exchanger on the low temperature side toward the branching point and the corresponding connection end portion of the connection end portions of the three-way piping joint.
According to the above, the blocking film is directly attached to the corresponding connection end portion of the three connection end portions of the three-way piping joint. Therefore, the configuration in which “the blocking film is inserted at a position in the vicinity of the branching point, in the part of the looped piping portion between the load-side heat exchanger on the low temperature side and the branching point” can easily be provided.
Also, instead of the blocking film alone, a blocking film sub-assembly including the blocking film and a pair of ring-like holding members that hold the blocking film so as to sandwich the blocking film from opposite sides may be directly inserted between an end portion of a part of the piping, the part extending from the load-side heat exchanger on the low temperature side toward the branching point and the corresponding connection end portion of the connection end portions of the three-way piping joint.
According to the above, when the blocking film is replaced, the blocking film sub-assembly may be replaced instead of the blocking film alone. In the blocking film sub-assembly, the blocking film is protected by the pair of holding members, and thus, handling of the blocking film is easy in comparison with the blocking film alone. Therefore, in comparison with the case where the blocking film is replaced alone, ease of the work of replacement is enhanced. Furthermore, in preparation for future replacement of the blocking film, a number of blocking films can be kept not in the state of blocking films alone but in the state of blocking film sub-assemblies. Therefore, ease of keeping the blocking films is enhanced in comparison with the case where the blocking films are kept alone.
Also, in the thermoacoustic temperature control system according to the present invention, it is preferable that a length, from the connection end portion connected to the end portion of the part of the piping, the part extending from the load-side heat exchanger on the low temperature side toward the branching point, to the branching point, of the three-way piping joint be shorter than a length, from the connection end portion connected to an end portion of a part of the piping, the part extending from the load-side heat exchanger on the high temperature side connected to an end portion on the high temperature side of the opposite end portions, in the extension direction of the piping, of the load-side heat accumulator, toward the branching point, to the branching point, of the three-way piping joint.
According to the above, the blocking film can be brought closer to the branching point in comparison with a case where a three-way piping joint is used in which the length, from the connection end portion connected to the end portion of the part of the piping, the part extending from the load-side heat exchanger on the low temperature side toward the branching point, to the branching point, is larger than the length, from the connection end portion connected to the end portion of the part of the piping, the part extending from the load-side heat exchanger on the high temperature side toward the branching point, to the branching point. As a result, the blocking film can be inserted at a position at which acoustic energy becomes further smaller inside the looped piping portion, enabling further enhancement in durability of the blocking film.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram schematically illustrating a thermoacoustic temperature control system according to an embodiment of the present invention.

FIG. 2 is a diagram illustrating an example of a section of the prime mover-side heat accumulator and the load-side heat accumulator illustrated in FIG. 1.

FIG. 3 is a graph illustrating variation in magnitude of acoustic energy relative to positions in the looped piping portion illustrated in FIG. 1.

FIG. 4 is a diagram illustrating a specific configuration of piping around a branching point in the thermoacoustic temperature control system illustrated in FIG. 1.

FIG. 5 is a diagram of a case where a blocking film sub-assembly is employed instead of a blocking film alone in the thermoacoustic temperature control system illustrated in FIG. 1, the diagram corresponding to FIG. 4.

FIG. 6 is a diagram of a thermoacoustic temperature control system according to an alteration of the embodiment of the present invention, the diagram corresponding to FIG. 1.

FIG. 7 is a diagram illustrating a specific configuration of piping around a branching point of the thermoacoustic temperature control system illustrated in FIG. 6.

FIG. 8 is a diagram of a case where a blocking film sub-assembly is employed instead of a blocking film alone in the thermoacoustic temperature control system illustrated in FIG. 6, the diagram corresponding to FIG. 7.
MODES FOR CARRYING OUT THE INVENTION
Configuration
Operation
Position of Insertion of Blocking Film 40 and Operation and Effects of Blocking Film 40

Specific Piping Configuration for Inserting Blocking Film 40 at Position f in Vicinity of Branching Point p

A thermoacoustic temperature control system 1 according to an embodiment of the present invention will be described below with reference to the drawings.
As illustrated in FIG. 1, the thermoacoustic temperature control system 1 includes a piping 10 made of a metal, a prime mover 20 incorporated in the piping 10, a load 30 incorporated in the piping 10 and a blocking film 40. As described later, the load 30 can function as a refrigerator that maintains a temperature of an object at a temperature that is lower than room temperature (refrigeration temperature) or a heater that maintains a temperature of an object at a temperature that is higher than room temperature. In other words, the thermoacoustic temperature control system 1 has a function that adjusts a temperature of an object connected to the load 30.
The piping 10 includes a looped piping portion 11, which is a piping part having a looped shape, and a branch piping portion 12 that branches from the looped piping portion 11, a space inside the branch piping portion 12 communicating with a space inside the looped piping portion 11. The branch piping portion 12 is a piping part that extends linearly from a branching point p at which the branch piping portion 12 branches from the looped piping portion 11. An end portion in the extension direction of the branch piping portion 12 is sealed by a predetermined sealing member.
The piping 10 is actually formed by joining a plurality of linear pipings and curved pipings using predetermined joining members (typically, bolts and nuts). As described later, a part of the piping 10, the part corresponding to the branching point p, may be used as a three-way piping joint. It is a matter of course that the branch piping portion 12 may be a piping part extending in a curved manner or may be a piping part that is a combination of a piping part extending in a curved manner and a piping part extending linearly.
A predetermined working gas (helium in the present embodiment) is encapsulated under predetermined pressure in the entirety of the piping 10, that is, both of the looped piping portion 11 and the branch piping portion 12. Note that for the working gas, e.g., nitrogen, argon, air or any of mixtures thereof may be employed instead of or in addition to helium.
The prime mover 20 is incorporated at an intermediate point in the branch piping portion 12. The prime mover 20 includes a heat accumulator 21 incorporated inside the branch piping portion 12, a high temperature-side heat exchanger 22 disposed so as to face an end portion on the high temperature side of the heat accumulator 21 and a low temperature-side heat exchanger 23 disposed so as to face an end portion on the low temperature side of the heat accumulator 21. Note that although a single prime mover 20 is provided in the present example, a plurality of prime movers 20 may be incorporated in series in the branch piping portion 12 as necessary.
As illustrated in FIG. 2, the heat accumulator 21 is, for example, a cylindrical structure having a round shape in a section in a direction perpendicular to the extension direction of the branch piping portion 12. The heat accumulator 21 includes a plurality of through flow channels 21a extending parallel to one another along the extension direction of the branch piping portion 12. The working gas vibrates inside the plurality of flow channels 21a.
In the example illustrated in FIG. 2, the plurality of flow channels 21a are defined and formed in a matrix by a multitude of walls vertically and horizontally partitioning the inside of the heat accumulator 21. Note that as long as a plurality of through flow channels extending in the extension direction of the branch piping portion 12 are formed inside the heat accumulator 21, the inside of the heat accumulator 21 may be partitioned in any manner that may be, e.g., a honeycomb manner.
For the heat accumulator 21, typically, e.g., a structure made of ceramic, a structure in which a plurality of mesh thin plates of stainless steel are stacked in parallel with a fine pitch or a non-woven fabric material formed of metal fiber can be used. Note that for the heat accumulator 21, instead of one having a round shape in lateral section, one having, e.g., an elliptical shape or a polygonal shape in lateral section can be employed.
Upon a predetermined temperature gradient being generated between the opposite ends of the heat accumulator 21, the working gas inside the branch piping portion 12 becomes unstable and performs self-excited vibration along the extension direction of the branch piping portion 12. As a result, a vibrational wave (also referred to as a “sound wave”, “vibration flow” or “work flow”) formed by a longitudinal wave vibrating along the extension direction of the branch piping portion 12 is formed and the vibrational wave is transmitted from the branch piping portion 12 to the looped piping portion 11 via the branching point p.
The high temperature-side heat exchanger 22 is connected to a high temperature-side heat source (illustration omitted) and the low temperature-side heat exchanger 23 is connected to a low temperature-side heat source (illustration omitted) having a temperature that is lower than that of the high temperature-side heat source. Typically, for the high temperature-side heat source and the low temperature-side heat source, a heat source having a temperature that is higher than room temperature and a heat source having room temperature are used, respectively. As the heat source having a temperature that is higher than room temperature, for example, a heat source relating to exhaust heat from a plant can be used. Note that for the high temperature-side heat source and the low temperature-side heat source, a heat source having room temperature and a heat source having a temperature that is lower than room temperature may be used, respectively.
In the high temperature-side heat exchanger 22, heat exchange is performed between a medium supplied from the high temperature-side heat source and the working gas inside the high temperature-side heat exchanger 22. Consequently, a temperature of the working gas around the end portion on the high temperature side of the heat accumulator 21 is adjusted so as to be close to the temperature of the high temperature-side heat source. In the low temperature-side heat exchanger 23, heat exchange is performed between a medium supplied from the low temperature-side heat source and the working gas inside the low temperature-side heat exchanger 23. Consequently, a temperature of the working gas around the end portion on the low temperature side of the heat accumulator 21 is adjusted so as to be close to the temperature of the low temperature-side heat source. Note that for each of the configurations of the high temperature-side heat exchanger 22 and the low temperature-side heat exchanger 23, a configuration of a known heat exchanger can be used.
A temperature gradient is generated between the opposite ends of the heat accumulator 21 by means of cooperation between the high temperature-side heat exchanger 22 and the low temperature-side heat exchanger 23 described above. In other words, the high temperature-side heat exchanger 22 and the low temperature-side heat exchanger 23 form “prime mover-side heat exchangers” that perform heat exchange with the working gas so as to generate a temperature gradient between the opposite end portions of the plurality of flow channels 21a of the heat accumulator 21 in order to make the working gas encapsulated in the piping 10 perform self-excited vibration.
The load 30 is incorporated in a part of the looped piping portion 11. The load 30 includes a heat accumulator 31 incorporated inside the looped piping portion 11, a high temperature-side heat exchanger 32 disposed so as to face an end portion on the high temperature side of the heat accumulator 31 and a low temperature-side heat exchanger 33 disposed so as to face an end portion on the low temperature side of the heat accumulator 31.
As illustrated in FIG. 2, the heat accumulator 31 has a configuration that is similar to that of the heat accumulator 21 of the prime mover 20. In other words, the heat accumulator 31 is, for example, a cylindrical structure having a round shape in a section in a direction perpendicular to an extension direction of the looped piping portion 11 and includes a plurality of through flow channels 31a extending parallel to one another along the extension direction of the looped piping portion 11. The working gas vibrates inside the plurality of flow channels 31a.
Upon a vibrational wave of the working gas, the vibrational wave being generated on the prime mover 20 side, being transmitted to the inside of the heat accumulator 31, a temperature gradient is generated between the opposite end portions of the heat accumulator 31 by acoustic energy provided by the vibrational wave.
Where the load 30 is used as a refrigerator, typically, the high temperature-side heat exchanger 32 is connected to a heat source having room temperature (illustration omitted) and the low temperature-side heat exchanger 33 is connected to an object to be maintained at a temperature that is lower than room temperature (low temperature). In the high temperature-side heat exchanger 32, heat exchange is performed between a medium supplied from the heat source having room temperature and the working gas inside the high temperature-side heat exchanger 32. Consequently, a temperature of the working gas around the end portion on the high temperature side of the heat accumulator 31 is adjusted so as to be close to room temperature.
As a result, a temperature of the working gas around the end portion on the low temperature side of the heat accumulator 31 is adjusted to a temperature that is an amount of a temperature difference lower than room temperature, the temperature difference corresponding to the temperature gradient generated between the opposite end portions of the heat accumulator 31. As a result of the working gas having the temperature that is lower than room temperature being supplied to the inside of the low temperature-side heat exchanger 33, in the low temperature-side heat exchanger 33, heat exchange is performed between the working gas having the temperature that is lower than room temperature and the object. Consequently, a temperature of the object is adjusted so as to be maintained at the low temperature. Note that for each of the respective configurations of the high temperature-side heat exchanger 32 and the low temperature-side heat exchanger 33, a configuration of a known heat exchanger can be used.
Where the load 30 is used as a heater, typically, the low temperature-side heat exchanger 33 is connected to a heat source having room temperature (illustration omitted) and the high temperature-side heat exchanger 32 is connected to an object to be maintained at a temperature that is higher than room temperature (high temperature). In the low temperature-side heat exchanger 33, heat exchange between a medium supplied from the heat source having room temperature and the working gas inside the low temperature-side heat exchanger 33 is performed. Consequently, the temperature of the working gas around the end portion on the low temperature side of the heat accumulator 31 is adjusted so as to be close to room temperature.
As a result, the temperature of the working gas around the end portion on the high temperature side of the heat accumulator 31 is adjusted to a temperature that is an amount of a temperature difference higher than room temperature, the temperature difference corresponding to a temperature gradient generated between the opposite end portions of the heat accumulator 31. As a result of the working gas having the temperature that is higher than room temperature being supplied to the inside of the high temperature-side heat exchanger 32, in the high temperature-side heat exchanger 32, heat exchange is performed between the working gas having the temperature that is higher than room temperature and the object. Consequently, the temperature of the object is adjusted so as to be maintained at the high temperature.
As described above, the high temperature-side heat exchanger 32 and the low temperature-side heat exchanger 33 form “load-side heat exchangers” that produce a working gas for adjusting a temperature of an object, the working gas having a temperature that is lower than room temperature or a temperature that is higher than room temperature, and perform heat exchange between that working gas and the object to adjust the temperature of the object. Specifically, the high temperature-side heat exchanger 32 forms a “load-side heat exchanger on the high temperature side” and the low temperature-side heat exchanger 33 forms a “load-side heat exchanger on the low temperature side”.
The blocking film 40 is inserted in a part of the looped piping portion 11 in order to prevent generation of an acoustic mass flow of the working gas inside the looped piping portion 11. In other words, in a looped piping portion such as the looped piping portion 11, generation of an acoustic mass flow due to a pressure difference (temperature difference) inside the looped piping portion makes the working gas circulate inside the looped piping portion. Note that in a piping portion with an end portion sealed such as the branch piping portion 12, no acoustic mass flow is generated because there is no destination of movement of the working gas. Therefore, in the present configuration, no acoustic mass flow is generated on the prime mover 20 side and an acoustic mass flow can be generated on the load 30 side.
If an acoustic mass flow passes through the inside of the load 30, it becomes impossible to form an ideal temperature gradient between the opposite end portions of the heat accumulator 31 because of movement of the working gas. In order to solve this problem, in the present configuration, the blocking film 40 is inserted in a part of the looped piping portion 11. The blocking film 40 prohibits passage (movement) of the working gas itself and is capable of vibrating along with vibration of the working gas and thus allows transmission of a vibrational wave (thus, acoustic energy or vibrational energy) of the working gas.
Therefore, the blocking film 40 requires a degree of airtightness that enables prohibiting passage (movement) of the working gas itself, and a degree of flexibility (elasticity) that enables a center portion of the film to vibrate in the extension direction of the looped piping portion 11 with a peripheral edge portion fixed. For a material forming the blocking film 40, e.g., metal, glass, ceramic, resin, rubber or fiber can be employed.
In the present configuration, the blocking film 40 is inserted at position f in the vicinity of the branching point p, in the part of the looped piping portion 11 between the low temperature-side heat exchanger 33 and the branching point p. The insertion position of the blocking film 40 will be described in detail later.
Operation of the thermoacoustic temperature control system 1 configured as described above will briefly be described below based on the content of the above description. As illustrated in FIG. 1, in the thermoacoustic temperature control system 1, where the load 30 is used as a refrigerator, the high temperature-side heat exchanger 32 is connected to a heat source having room temperature and the low temperature-side heat exchanger 33 is connected to an object to be maintained at a temperature that is lower than room temperature (low temperature). With this situation, upon activation of the high temperature-side heat exchanger 22 and the low temperature-side heat exchanger 23 of the prime mover 20 and the high temperature-side heat exchanger 32 and the low temperature-side heat exchanger 33 of the load 30, a temperature gradient is generated between the opposite ends of the heat accumulator 21 by means of cooperation between the high temperature-side heat exchanger 22 and the low temperature-side heat exchanger 23. The temperature gradient causes a vibrational wave resulting from self-excited vibration of the working gas to be formed in the heat accumulator 21. This vibrational wave (sound wave) travels in the looped piping portion 11 from the branch piping portion 12 via the branching point p and is transmitted into the heat accumulator 31 of the load 30.
Upon the transmission of the vibrational wave of the working gas into the heat accumulator 31, a temperature gradient is generated between the opposite end portions of the heat accumulator 31 by acoustic energy provided by the vibrational wave. In addition, as a result of the activation of the high temperature-side heat exchanger 32, the temperature of the working gas around the end portion on the high temperature side of the heat accumulator 31 is adjusted to a temperature close to room temperature. As a result, the temperature of the working gas around the end portion on the low temperature side of the heat accumulator 31 is adjusted to a temperature that is an amount of a temperature difference lower than room temperature, the temperature difference corresponding to the temperature gradient between the opposite end portions of the heat accumulator 31. The working gas having the temperature that is lower than room temperature is supplied to the inside of the low temperature-side heat exchanger 33. Therefore, in the low temperature-side heat exchanger 33, heat exchange is performed between the working gas having the temperature that is lower than room temperature and the object. Consequently, a temperature of the object is adjusted so as to be maintained at the low temperature.
On the other hand, where the load 30 is used as a heater, the low temperature-side heat exchanger 33 is connected to a heat source having room temperature and the high temperature-side heat exchanger 32 is connected to an object to be maintained at a temperature that is higher than room temperature (high temperature). As a result, the temperature of the working gas around the end portion on the low temperature side of the heat accumulator 31 is adjusted to a temperature close to room temperature. Therefore, the temperature of the working gas around the end portion on the high temperature side of the heat accumulator 31 is adjusted to a temperature that is an amount of a temperature difference higher than room temperature, the temperature difference corresponding to the temperature gradient between the opposite end portions of the heat accumulator 31. The working gas having the temperature that is higher than room temperature is supplied to the high temperature-side heat exchanger 32. Therefore, in the high temperature-side heat exchanger 32, heat exchange is performed between the working gas having the temperature that is higher than room temperature and the object. Consequently, a temperature of the object is adjusted so as to be maintained at the high temperature.
Note that as described above, in the present configuration, in the branch piping portion 12, no acoustic mass flow is generated because there is no destination of movement of the working gas, and in the looped piping portion 11, no acoustic mass flow is generated as a result of the insertion of the blocking film 40.
As described above, the blocking film 40 vibrates along with vibration of the working gas, and thus, stress repeatedly acts on the blocking film 40. Therefore, it is very important to ensure durability of the blocking film 40.
For enhancement in durability of the blocking film 40, the present inventor looked at distribution in magnitude of acoustic energy (vibrational energy) inside the looped piping portion 11. Then, the present inventor obtained knowledge on an insertion position of the blocking film 40 necessary for enhancement in durability of the blocking film 40 from the perspective of the distribution in magnitude of acoustic energy inside the looped piping portion 11. This point will be described below.
Acoustic energy (vibrational energy) formed by the prime mover 20 incorporated in the branch piping portion 12 reaches the branching point p via the branch piping portion 12 and then makes a circuit of the looped piping portion 11 from the branching point p in a direction in which the acoustic energy passes through the inside of the load 30 from the high temperature side to the low temperature side (direction indicated by the two black arrows in FIG. 1). Then, after the acoustic energy that has made the circuit reaches the branching point p again, the acoustic energy merges with acoustic energy newly reaching the branching point p via the branch piping portion 12 and circulates in the looped piping portion 11 again.
Here, distribution in magnitude of the acoustic energy (vibrational energy) inside the looped piping portion 11 is looked at. When the acoustic energy moves inside the piping 10, the magnitude of the acoustic energy is gradually decreased because of energy loss that inevitably occurs. Therefore, as illustrated in FIG. 3, the magnitude of the acoustic energy becomes gradually smaller as the acoustic energy moves in the order of points a, b, c, d inside the looped piping portion 11 from the branching point p (for points a to f, see FIG. 1).
Until the acoustic energy that has reached point d (therefore, the high temperature-side end portion of the load 30) reaches point e (the low temperature-side end portion of the load 30), the acoustic energy is partly consumed in order to generate a temperature gradient inside the load 30 and is also partly consumed because of viscous dissipation caused by passage through the plurality of fine flow channels 31a. Therefore, a gradient of the decrease of the acoustic energy becomes particularly large between points d and e.
After the acoustic energy reaches point e, the magnitude of the acoustic energy becomes gradually smaller because of the aforementioned energy loss as the acoustic energy moves from point e to the branching point p. As described above, the magnitude of the acoustic energy reaches a minimum immediately before the acoustic energy reaches the branching point p again. Then, at a point of time when the acoustic energy has reached the branching point p again, the acoustic energy becomes larger again because of merging with new acoustic energy and subsequently gradually becomes smaller as described above. In other words, as can be understood from FIG. 3, in the looped piping portion 11, the magnitude of the acoustic energy reaches a maximum at the branching point p and reaches a minimum at a position in the vicinity of the branching point p, between the low temperature-side heat exchanger 33 and the branching point p.
On the other hand, for enhancement in durability of the blocking film 40, maximum stress acting on the blocking film 40 may be reduced. In order to reduce the maximum stress acting on the blocking film 40, a maximum amplitude of the blocking film 40 may be reduced. In order to reduce the maximum amplitude of the blocking film 40, the magnitude of the acoustic energy (vibrational energy) passing through the blocking film 40 may be reduced. In other words, insertion of the blocking film 40 at a position at which the acoustic energy reaches a minimum (a magnitude close to the minimum) inside the looped piping portion 11 enables enhancement in durability of the blocking film 40 to the extent possible.
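The relationship described above between loop position and acoustic energy can be sketched numerically. The following fragment is illustrative only and is not part of the disclosed system: the loop length, the position of the load section and both loss rates are arbitrary assumed values chosen to reproduce the qualitative shape of the energy distribution.

```python
import math

def acoustic_energy(s, load_start=4.0, load_end=5.0,
                    pipe_loss=0.05, load_loss=1.5):
    """Relative acoustic energy at arc position s along one circuit of the loop.

    s = 0 is the branching point p, traversed in the direction of the arrows
    in FIG. 1. Plain piping attenuates slowly (pipe_loss per unit length); the
    load section between load_start and load_end (points d to e) attenuates
    strongly (load_loss), standing in for the work consumed by the temperature
    gradient and the viscous dissipation in the fine flow channels.
    All parameter values are assumptions, not values from the disclosure.
    """
    pipe_dist = min(s, load_start) + max(0.0, s - load_end)
    load_dist = max(0.0, min(s, load_end) - load_start)
    return math.exp(-pipe_loss * pipe_dist - load_loss * load_dist)

# Sample one full circuit (assumed total loop length 10, ending back at p).
positions = [i / 10 for i in range(101)]
energies = [acoustic_energy(s) for s in positions]

# Loss only accumulates along the circuit, so the minimum sits at the very
# end of the circuit, i.e. immediately before the energy returns to the
# branching point p -- the preferred insertion position f.
min_pos = positions[energies.index(min(energies))]
```

Because the losses are cumulative, `min_pos` coincides with the last sampled position, mirroring the minimum of FIG. 3 located between the load-side low temperature-side heat exchanger and the branching point p, and the steepest drop occurs across the load section, mirroring the gradient between points d and e.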
Based on the above knowledge, in the present configuration, as illustrated in FIG. 1, the blocking film 40 is inserted at position f in the vicinity of the branching point p, in the part of the looped piping portion 11 between the low temperature-side heat exchanger 33 and the branching point p. Consequently, the blocking film 40 can be inserted at a position at which the acoustic energy reaches a minimum (a magnitude close to the minimum) inside the looped piping portion 11. As a result, the durability of the blocking film 40 can be enhanced to the extent possible.
In order to easily provide the configuration in which “the blocking film 40 is inserted at position f in the vicinity of the branching point p, in the looped piping portion 11” such as illustrated in FIG. 1, more specifically, as illustrated in FIG. 4, a piping configuration using a three-way piping joint 13 around the branching point p can be employed.
In the example illustrated in FIG. 4, from among three connection end portions 13a, 13b, 13c of the T-shaped three-way piping joint 13, an end portion 12a of the branch piping portion 12 is connected to the connection end portion 13c corresponding to an end portion of a right-side arm portion of a right-left pair of linearly-extending arm portions of the T-shape, an end portion 11a of the looped piping portion 11 extending from the low temperature-side heat exchanger 33 toward the branching point p is connected to the connection end portion 13a corresponding to an end portion of a left-side arm portion of the T-shape, and an end portion 11b of the looped piping portion 11 extending from the high temperature-side heat exchanger 32 toward the branching point p is connected to the connection end portion 13b corresponding to an end portion of a leg portion of the T-shape.
Then, the blocking film 40 is directly inserted between the end portion 11a of the looped piping portion 11 and the connection end portion 13a of the three-way piping joint 13. In other words, a circumferential edge portion of the blocking film 40 is directly attached between a ring-like end surface of the end portion 11a and a ring-like end surface of the connection end portion 13a so as to be in contact with and held between the end surfaces.
The blocking film 40 can be fixed using, for example, predetermined joining members (typically, bolts and nuts) and a predetermined adhesive. As described above, the blocking film 40 being directly attached to the corresponding connection end portion 13a of the three-way piping joint enables easily providing the configuration in which “the blocking film 40 is inserted at position f in the vicinity of the branching point p, in the looped piping portion 11”.
Here, in the example illustrated in FIG. 4, it is preferable that a length d1, from the connection end portion 13a to the branching point p, of the three-way piping joint 13 be shorter than a length d2, from the connection end portion 13b to the branching point p, of the three-way piping joint 13. Consequently, the three-way piping joint 13, the length d1 of which is small, can be used, enabling the blocking film 40 to be brought further closer to the branching point p. As a result, the blocking film 40 can be inserted at a position at which acoustic energy becomes further smaller inside the looped piping portion 11, enabling further enhancement in durability of the blocking film 40.
Also, as illustrated in FIG. 5, instead of the blocking film 40 alone, a blocking film sub-assembly 60 may directly be inserted between the end portion 11a of the looped piping portion 11 and the connection end portion 13a of the three-way piping joint 13. The blocking film sub-assembly 60 is an integrated object including the blocking film 40 and a pair of annular holding members 50 that holds the blocking film 40 so as to sandwich the blocking film 40 from the opposite sides. The pair of holding members 50 can be fixed to the blocking film 40 using, for example, predetermined joining members (typically, bolts and nuts) and a predetermined adhesive.
As described above, where the blocking film sub-assembly 60 is employed, when the blocking film 40 is replaced, the blocking film sub-assembly 60 may be replaced instead of the blocking film 40 alone. In the blocking film sub-assembly 60, the blocking film 40 is protected by the pair of holding members 50, and thus, handling of the blocking film 40 is easy in comparison with the blocking film 40 alone. Therefore, in comparison with the case where the blocking film 40 is replaced alone, ease of the work of replacing the blocking film 40 is enhanced. Furthermore, in preparation for future replacement of the blocking film 40, a number of blocking films 40 can be kept not in the state of blocking films 40 alone but in the state of blocking film sub-assemblies 60. Therefore, ease of keeping the blocking films 40 is enhanced in comparison with the case where the blocking films 40 are kept alone.
The present invention is not limited only to the above-described typical embodiment and various applications and alterations are possible without departing from the object of the present invention. For example, each of the following modes to which the above-described embodiment is applied can be carried out.
In the above-described embodiment, as illustrated in FIG. 1, in a part of the looped piping portion 11, the part extending from the branching point p in a direction along the extension direction of the branch piping portion 12, the load 30 is disposed in such a manner that the low temperature-side end portion of the load 30 faces the branching point p, and the blocking film 40 is inserted at a position in the vicinity of the branching point p, between the low temperature-side end portion of the load 30 and the branching point p. On the other hand, as illustrated in FIG. 6, a load 30 may be disposed in a part of a looped piping portion 11, the part extending from a branching point p in a direction orthogonal to an extension direction of a branch piping portion 12, in such a manner that a low temperature-side end portion of the load 30 faces the branching point p, and a blocking film 40 may be inserted at a position in the vicinity of the branching point p, between the low temperature-side end portion of the load 30 and the branching point p.
In order to easily provide the configuration in which "the blocking film 40 is inserted at a position in the vicinity of the branching point p, in the looped piping portion 11" such as illustrated in FIG. 6, specifically, as illustrated in FIG. 7, a piping configuration using a three-way piping joint 13 around the branching point p can be employed.
In the example illustrated in FIG. 7, from among three connection end portions 13a, 13b, 13c of a T-shaped three-way piping joint 13, an end portion 12a of the branch piping portion 12 is connected to the connection end portion 13c corresponding to an end portion of a right-side arm portion of a right-left pair of linearly-extending arm portions of the T-shape, an end portion 11b of the looped piping portion 11 extending from a high temperature-side heat exchanger 32 toward the branching point p is connected to the connection end portion 13b corresponding to an end portion of a left-side arm portion of the T-shape, and an end portion 11a of the looped piping portion 11 extending from a low temperature-side heat exchanger 33 toward the branching point p is connected to the connection end portion 13a corresponding to an end portion of a leg portion of the T-shape. Then, the blocking film 40 is directly inserted between the end portion 11a of the looped piping portion 11 and the connection end portion 13a of the three-way piping joint 13.
This configuration also enables easily providing the configuration in which "the blocking film 40 is inserted at a position in the vicinity of the branching point p, in the looped piping portion 11" by the blocking film 40 being directly attached to the corresponding connection end portion 13a of the three-way piping joint.
Here, in the example illustrated in FIG. 7, also, it is preferable that a length d1, from the connection end portion 13a to the branching point p, of the three-way piping joint 13 be shorter than a length d2, from the connection end portion 13b to the branching point p, of the three-way piping joint 13. Consequently, the three-way piping joint 13, the length d1 of which is small, can be used, enabling the blocking film 40 to be brought further closer to the branching point p. As a result, the blocking film 40 can be inserted at a position at which acoustic energy becomes further smaller inside the looped piping portion 11, enabling further enhancement in durability of the blocking film 40.
Also, in the example illustrated in FIG. 7, as illustrated in FIG. 8, instead of the blocking film 40 alone, a blocking film sub-assembly 60 may directly be inserted between the end portion 11a of the looped piping portion 11 and the connection end portion 13a of the three-way piping joint 13.
Also, in various examples described above (FIGS. 1 and 6), the prime mover 20 is incorporated in the branch piping portion 12 with the end portion sealed. On the other hand, an additional looped piping portion including another branching point is formed at the end portion of the branch piping portion 12 branching from the branching point p of the looped piping portion 11, and the prime mover 20 may be incorporated in a part of the additional looped piping portion. In this case, in order to prevent generation of an acoustic mass flow of a working gas inside the additional looped piping portion, it is preferable to insert another blocking film in a part of the additional looped piping portion.
REFERENCE SIGNS LIST
1 THERMOACOUSTIC TEMPERATURE CONTROL SYSTEM
10 PIPING
11 LOOPED PIPING PORTION
11a, 11b END PORTION
12 BRANCH PIPING PORTION
12a END PORTION
13 THREE-WAY PIPING JOINT
13a, 13b, 13c CONNECTION END PORTION
20 PRIME MOVER
21 HEAT ACCUMULATOR (PRIME MOVER-SIDE HEAT ACCUMULATOR)
22 HIGH TEMPERATURE-SIDE HEAT EXCHANGER (PRIME MOVER-SIDE HEAT EXCHANGER)
23 LOW TEMPERATURE-SIDE HEAT EXCHANGER (PRIME MOVER-SIDE HEAT EXCHANGER)
30 LOAD
31 HEAT ACCUMULATOR (LOAD-SIDE HEAT ACCUMULATOR)
32 HIGH TEMPERATURE-SIDE HEAT EXCHANGER (LOAD-SIDE HEAT EXCHANGER)
33 LOW TEMPERATURE-SIDE HEAT EXCHANGER (LOAD-SIDE HEAT EXCHANGER)
40 BLOCKING FILM
50 HOLDING MEMBER
60 BLOCKING FILM SUB-ASSEMBLY
It is extremely difficult to predict when a crisis will start or end. Even though there might be a consensus regarding its causes (Abbot et al. 2009), there is typically less agreement on which one(s) dominate(s) and, consequently, which measures will effectively tackle them. That applies to crises from the global financial panic to swine flu and certainly includes the food price crisis. From a policymaking viewpoint, the critical question is how to balance short- and long-term interventions and how, in practice, to triangulate cautious macroeconomic measures, effective compensatory social policies, and a lasting stimulus without disastrous distortions.
Overly general recommendations from soft evidence
In a recent article, I analyse the formulation of policy recommendations for the food price crisis in a comprehensive sample of (more than thirty) studies by international institutions (Cuesta 2009). In particular, I focus on the depth and merits of the policy discussion and the connection between generated knowledge and specific policy advocacy. Most of the studies either fail to provide any policy discussion or provide “soft” general recommendations such as calls for further analysis or the need to adopt both short- and long-run policies. Recommendations also include policy directions hardly linked specifically to the food crisis but rather to any crisis, long-standing sectoral concerns, and institutional mandates: investing in agriculture, increasing food production, easing assistance to small-scale producers to increase their productivity, and investing in the improvement of existing systems of social protection and security. Even at this broad level of analysis, there is a set of policies receiving less clear support or commitment: elimination of trade barriers, limitations on the production of bio-fuels, and global coordination in the implementation of policies.
Overly specific recommendations from hard evidence
None of the studies mentioned above, however, draw recommendations from a systematic quantitative comparison across policy alternatives. Only a handful – Arndt et al. (2008), IMF (2008) and Valero-Gil and Valero (2008) – conduct more rigorous exercises evaluating policy options in the context of the food price crisis. They typically simulate the distributive effects of interventions from the expansion of conditional cash transfer programmes, provision of price subsidies, and elimination of tariffs. Results show that increasing transfers – little is analysed in terms of improving targeting – is the most effective way to compensate for consumption losses and expected increases in poverty incidence and depth. These schemes outperform bold but short-ranging tariff reductions and typically regressive food price subsidies. Unfortunately, this hard evidence relates to a tiny sample of countries – Mozambique, Mexico and Nicaragua – leaving us to question its global relevance.
Searching for a systematic yet representative comparison
In response to the shortcomings in such policymaking analyses, some colleagues at the Inter-American Development Bank and I have proposed a systematic way of comparing policy interventions based on their expected consequences across critical policy-making dimensions: degree of targeting and scope of the measures (coverage), fiscal cost (cost), degree of distortion (efficiency), and reversibility (political economy). The outcome is a detailed physiognomy of interventions’ potential effects that might be complemented by specific estimations of orders of magnitude.
Table 1. Physiognomy of crisis interventions
The following table characterises the package of interventions adopted in the Andean Region, a part of the world interesting for its heterogeneous mix of net oil-exporters and food-importers, on the one hand, and economic and political ideologies, on the other. “Desirable” interventions are those that:

- have broad coverage or, if targeted, transfer effectively to the poorest sectors;
- entail low fiscal costs;
- cause low levels of distortion, and may even generate positive incentives;
- are easily reversible after completing their mission.
Table 2. Balance of crisis policy mix
Conclusions
Since the brunt of the food price crisis drifted out of the spotlight, little has been achieved in terms of a rigorous comparison and ranking of alternative policy responses. This gap calls for a humble reflection on our collective ability to anticipate the emergence and magnitude of crises. We need to work towards analytical toolkits and/or protocols that deal specifically with knowledge generation for crisis prevention. Toolkits of this sort already exist in the analysis of poverty impacts and empowerment, for example. Rather than establishing or relying on a mechanical procedure, such as a database, these protocols should describe analytical techniques, relevant indicators, scopes for analyses, and participatory strategies. With such tools, policymakers will be better equipped to confront the next crisis.
References
Abbot, P., C. Hurt and W. Tyner (2009) What’s Driving Food Prices. March 2009 Update. Farm Foundation Issue Report. Farm Foundation.
Arndt, C., R. Benfica, N. Maximiano, A. Nucifora and J. Thurlow (2008) Higher Fuel and Food Prices: Impacts and Responses for Mozambique. Agricultural Economics, 39, Supplement: 497-511
Cuesta, J. (2009) ‘Knowledge’ or Knowledgeable Banks? International Financial Institutions Generation of Knowledge in Times of Crisis, Development Policy Review, forthcoming
International Monetary Fund, IMF (2008) Elevated Food Prices and Vulnerable Households: Fiscal Policy Options, chapter 4 in 2008 Regional Economic Outlook, Western Hemisphere. Washington DC: IMF.
Veto by Nixon Secures Transparency of MO Government
Early in July, amidst a much cooler climate, Gov. Jay Nixon vetoed a measure that sought to limit the openness and transparency of public and governmental entities.
Specifically, the vetoed legislation aimed to shelter public entities from disclosing minutes, votes, and records; it also allowed for closed meetings.
Without public access to important information — whether it is school district board minutes or the budget of fire protection districts — injustices may go unnoticed and our public officials may be tempted to act in unethical and elusive ways.
Take, for example, a recent embezzlement scandal in Brentwood. As Chad Carson reported, the city administrator of the suburban municipality was found to have stolen nearly $30,000 of city funds. That money, largely from tax receipts, was thrown away at a riverboat casino. Increased government accountability is the only effective solution Missouri citizens have to prevent such abuses in the future.
It is improbable that the general public will suddenly besiege public entities with information requests — commonly known as Sunshine Law requests. Therefore, the protection of this right is critical to policy analysts and journalists statewide who, in their endeavor for truth, rely on accountability. After all, your government cannot be accountable without transparency.
We often chastise our elected officials’ performance — ironic, since we elect them. However, when they strive to bolster the sense of public duty, as Gov. Nixon illustrated here, some praise and an attaboy are due.
So now, even as the mercury seems to climb higher and higher each day, Missourians can feel good about greater openness, transparency, and accountability in government.
What it means
- African Charter on Human and Peoples Rights (Article 18.2-4): The State shall have the duty to assist the family which is the custodian of morals and traditional values recognized by the community. The State shall ensure the elimination of every discrimination against women and also ensure the protection of the rights of women and the child as stipulated in international declarations and conventions. The aged and the disabled shall also have the right to special measures of protection in keeping with their physical or moral needs.
- American Convention on Human Rights (Article 17.4): The States Parties shall take appropriate steps to ensure the equality of rights and the adequate balancing of responsibilities of the spouses as to marriage, during marriage, and in the event of its dissolution. In case of dissolution, provision shall be made for the necessary protection of any children solely on the basis of their own best interests.
- ASEAN Human Rights Declaration: The family as the natural and fundamental unit of society is entitled to protection by society and each ASEAN Member State. Men and women of full age have the right to marry on the basis of their free and full consent, to found a family and to dissolve a marriage, as prescribed by law.
- Arab Charter on Human Rights (Article 33.2): The State and society shall ensure the protection of the family, the strengthening of family ties, the protection of its members and the prohibition of all forms of violence or abuse in the relations among its members, and particularly against women and children. They shall also ensure the necessary protection and care for mothers, children, older persons and persons with special needs and shall provide adolescents and young persons with the best opportunities for physical and mental development.
How it relates to violence against women
It is important to remember that violence against women is not only about the act itself; it is also about power. The Universal Declaration of Human Rights tells us the family is the basic unit of society. Yet, one of the most dangerous places for a woman to be is in her own home. When inequality within the family goes unaddressed, it limits women’s decision-making and control over their futures. In these situations, the likelihood of experiencing gender-based violence increases. And when acts of VAW go unaddressed or are even socially accepted, they can contribute to a dangerous culture of impunity for gender-based crimes that perpetuates gender inequality. For example, 75% of men (aged 15-49) in the Central African Republic believed a husband was justified in beating his wife for certain reasons – including burning the food and refusing sexual relations – while 80% of women (aged 15-49) believed the husband was justified for the same reasons.
Inequality in the family may also be unjustly supported by law. Article 40 of Yemen’s Personal Status Act No. 20 (1992) states that: “A husband has the right to be obeyed by his wife in the interest of the family…” These types of laws violate a state’s human rights obligations and prevent women from accessing justice when gender-based violence occurs at the hands of their husbands.
Examples of violence against women that violate this right include:
- Economic abuse by a spouse or family member
- Marital rape
- Female infanticide
Many passionate environmental activists are using social media platforms to raise awareness about the harsh effects of climate change and the importance of sustainable living.
Whether you are already committed to eco-friendly practices or you’re aiming to join in the fight to save the planet, we’ve compiled a list of our picks for 10 eco influencers you need to follow heading into the next decade.
“It’s not about Meatloaf. It’s not about me. It’s not about what some people call me. It’s not about left or right. It’s all about scientific facts. And that we’re not aware of the situation. Unless we start to focus everything on this, our targets will soon be out of reach. https://t.co/UwyoSnLiK2” — Greta Thunberg (@GretaThunberg) January 6, 2020
1. Greta Thunberg
Teen climate and environmental activist Greta Thunberg earned the honorable distinction of being named Time magazine’s 2019 Person of the Year for her ongoing efforts to protect and preserve the Earth. The 16-year-old Swedish student gained global attention with her own climate strikes from school, asking government leaders to act against growing climate issues. Furthermore, her urgent, passionate call for change inspired millions of other students to launch protests and strikes to support the cause.
Thunberg has spoken at numerous climate conferences, including the 2019 UN Climate Action Summit which took place in New York. Among her many accolades, she was also nominated for the 2019 Nobel Peace Prize. She has over 9 million followers on Instagram (@gretathunberg) and over 3 million followers on Twitter (@GretaThunberg).
“For the holidays, Trump wants to give us more asthma, disease, and higher electricity bills. Even Scrooge tried to reduce coal use. https://t.co/UqxPzkXePc” — Mike Bloomberg (@MikeBloomberg) December 20, 2019
2. Mike Bloomberg
The former three-term New York City mayor is recognized as a global leader for his ardent support of environmental change. In June 2019, Bloomberg launched Beyond Carbon, “the largest-ever coordinated campaign against climate change in the United States.”
During his time as mayor, he helmed PlaNYC, a plan to unite a multitude of NYC agencies to help create a “greener, greater New York.” Reducing the city’s carbon footprint and increasing water conservation are just two of Bloomberg’s targeted initiatives. As a 2020 presidential hopeful, Bloomberg is pushing his goals to build a 100 percent clean economy. @MikeBloomberg has over 2.4 million followers on Twitter.
[Embedded Instagram post: Bea Johnson shows her family’s entire trash output for 2019 fitting in a single jar, and reflects on why this year’s trash doubled compared with the previous one.]
3. Bea Johnson
French native best-selling author of “Zero Waste Home” Bea Johnson is credited with founding the Zero Waste lifestyle movement. She travels around the world to educate live audiences about the importance and ease of living a trash-free life.
Her book has been translated in 27 languages and tells the story of how she transformed her life to reduce waste. Johnson has made over 100 television appearances and has amassed over 400K followers on social media. Johnson declares her eco-conscious mantra on her website: “Refuse, Reduce, Reuse, Recycle, Rot (and only in that order) is my family’s secret to reducing our annual trash to a jar since 2008.”
“Did you get new clothing for the holidays? If you want to transition your wardrobe, don't throw old clothing away! You can re-sell it at a secondhand store or on a site like Poshmark, consign it on a site like The RealReal, donate it to friends or a charity, compost it if it's natural, and lastly TerraCycle it. What's your favorite piece of clothing that you have purchased secondhand or received from a friend? #TrashIsForTossers #PackageFree #PackageFreeShop” — Lauren Singer on Instagram
4. Lauren Singer
Lauren Singer is a New York-based eco influencer who documents her Zero Waste lifestyle journey on her Trash Is For Tossers blog, as well as her @Trashis4Tossers Twitter and @trashisfortossers Instagram pages.
Singer gives her 340K Instagram followers and her website audience easy-to-implement changes for achieving an eco-friendlier lifestyle. The multi-tasking blogger also owns online stores with sustainable products including Package Free and The Simply Co. which sells organic, vegan cleaning products, and detergent.
“What if we actually pulled off a Green New Deal? What would the future look like? As part of our year in review, The Intercept presents a film narrated by @AOC. https://t.co/o0fUZXf0YZ pic.twitter.com/RyQa2U08VF” — The Intercept (@theintercept) January 1, 2020
5. Alexandria Ocasio-Cortez
Rep. Alexandria Ocasio-Cortez (D-N.Y.) and Sen. Ed Markey (D-Mass.) introduced the Green New Deal resolution in Congress; New York City’s own Green New Deal was put into effect by Mayor Bill de Blasio in April 2019. The plan aims to cut the city’s greenhouse gas emissions by 30 percent by 2030.
The Green New Deal is part of the OneNYC 2050 strategy to build a strong and fair city and secure its future. Ocasio-Cortez is very outspoken about the climate crisis and has addressed it to leaders in speeches across the globe, including at the C40 World Mayors summit in Copenhagen in October 2019. Ocasio-Cortez has over 6 million followers on Twitter @AOC.
“Solving the world’s toughest challenges—like fighting the worst impacts of climate change—requires lots of new ideas and talented people working across many fields. I’m glad to see this effort to support this important work. https://t.co/eeFpmXtgv9” — Bill Gates (@BillGates) January 6, 2020
6. Bill Gates
Microsoft Co-Founder Bill Gates is consistently ranked as one of the richest people in the world. His high business profile enables him to reach a massive social media audience across the globe.
Gates discusses the harsh effects of climate change in multiple posts on his GatesNotes blog. He’s also written a book on the subject called “How to Avoid a Climate Disaster” which is due for release from Doubleday in June 2020. The Seattle native executive currently boasts over 48 million followers on Twitter at @BillGates.
[Embedded Instagram post: Kate Nelson shares her recipe for a blue “mermaid” smoothie bowl made with frozen banana, chia and hemp seeds, oat milk, and blue spirulina, ahead of hosting a plastic-free mermaid retreat.]
7. Kate Nelson
Kate Nelson uses the handle @plasticfreemermaid and has over 84K followers on Instagram. She gives lots of tips for sustainable living on her website blog at iquitplastics.com. The Aussie native influencer isn’t just talking the talk. She has notably led a single-use plastics free lifestyle for an entire decade. Plus, her Instagram account features photos of her eco-friendly food purchases and visits to natural locales around the world with important messages like “plastic kills” while scuba diving.
“Tesla’s mission is to accelerate the world’s transition to sustainable energy. (a thread)” — Tesla (@Tesla) May 9, 2019
8. Elon Musk
Globally renowned tech billionaire Elon Musk is the founder and CEO of SpaceX and the CEO of Tesla. He is aiming to make huge strides in the fight against climate change by launching reusable rockets into space, producing a fleet of self-driving cars, and building a high-speed low power transportation system called the Hyperloop.
The innovative corporate leader has already succeeded in sending reusable rocket parts to the International Space Station (ISS) with his Falcon rocket series. Ultimately, he aims to make space travel as convenient as air travel. Currently, @ElonMusk has over 30 million followers on Twitter.
“Sad Christmas story. Under this 2017 Trump admin rule, a timber co. would not be responsible for killing partridges if it cut down pear trees. Even if it knew the partridges would die in the act. Here’s what 2 years of this enviro rollback has wrought: https://t.co/576d5j4Lrm” — Lisa Friedman (@LFFriedman) December 24, 2019
9. Lisa Friedman
Lisa Friedman is a well-known climate change reporter for The New York Times. She covers climate and environmental policy in Washington in her expansive body of articles. She also travels around the world to raise awareness of climate-focused topics. Friedman reports on politically-driven climate change action and provides tips her readers can use to combat climate change at home. @LFFriedman has over 33K followers on Twitter.
[Embedded Instagram post: Eco Warrior Princess thanks donors supporting relief efforts during Australia’s nationwide bushfires and lists organisations accepting bushfire donations.]
10. Jennifer Nini
Jennifer Nini is the editor-in-chief and founder of Eco Warrior Princess, a website that is run by a team of eco-conscious writing and blogging “warriors.” The passionate group covers multiple aspects of Zero Waste and sustainable living practices including eco-friendly fashion, beauty, wellness, and other lifestyle tips. As a brand, Eco Warrior Princess seeks to redefine “what it means to live green.” The site currently has over 43K followers on Instagram.
Some writers may think that placing in and to together in a sentence means that they should become into. This is a common error. Although they have similar spellings and sounds, into and in to are not synonyms. Keep reading for examples of when to use into versus in to.
In to vs. Into: Usage Tips You Won’t Forget
How to Use Into
While both in and to are prepositions that describe position, into indicates movement. It is part of a verb phrase that establishes the action and moves the noun. There are a few different ways to use into correctly in different sentences.
Going Inside Something Else
Into can indicate a movement toward the interior of a concrete or abstract noun. These items can include a place, an item, or a thought. Here are some examples of into in a sentence.
- I headed into the club.
- Marv put the money into his wallet.
- Please put the laundry into your drawers.
- It’s common to go into shock after being in a car accident.
It’s tempting to use in instead of into for these examples. However, in describes an object when it is already in its place, not when it is moving there. If your noun ends up within another noun by the end of the sentence, you’ve used into correctly.
Transformation
Another way to use into is when one noun transforms into another noun. This process can be magical or figurative, depending on the rest of the sentence. Check out these sentences in which into indicates transformation.
- The caterpillar transitioned into a butterfly.
- I realized that I am turning into my mother.
- A great website can convert clicks into customers.
- The noise in the classroom was quickly turning into chaos.
These examples could convey the same meaning as the word become. Like the other use of into, transformative sentences take a noun from one state to another. They work inside a verb phrase to demonstrate movement.
How to Use In To
In to uses the same two prepositions as into, but that’s where the similarities end. Both in and to are common parts of separate phrases in which in is the last word of a phrase and to the first word of another phrase. It gets confusing when they are next to each other in a sentence.
When In Is Part of a Phrasal Verb
In is often the last word in a phrasal verb, which refers to a verb followed by a preposition. You use phrasal verbs every day without knowing it. Examples of phrasal verbs include:
- Drop in
- Log in
- Break in
- Turn in
- Fill in
- Move in
- Give in
- Chime in
- Hand in
It gets challenging when a phrase with to immediately follows these phrasal verbs. You may feel that combining in and to is the correct answer, but that’s not the case.
When To Is a Preposition
Like into, the preposition to indicates movement from one location to another. However, it does not include the in part of into that denotes the interior position of the noun. Check out these examples of prepositional phrases that coincidentally come after a phrasal verb with in.
- You can chime in to the discussion.
- Don’t give in to his demands.
- Now log in to the email server.
- She handed her notice in to the shift supervisor.
- Turn in to the third driveway on the right.
The most problematic phrasal verb is often turn in, as turn has several different meanings. The sentence “Turn in to the driveway on the right” seems like it should have the word into instead because it shows movement. However, a reader could misinterpret the sentence to mean “transform into a driveway.” You should use in to when using this phrasal verb to avoid confusion.
You can also rearrange the fourth sentence to avoid this issue altogether: She handed in her notice to the shift supervisor. This way, in and to are no longer next to one another.
When To Is Part of an Infinitive
Infinitive phrases use the word to plus a verb. When combined with a phrasal verb, they can form in to in a sentence. However, they should never be shortened to into. For example:
- Consider chipping in to support this charity.
- Did he break in to steal the necklace?
- You have to log in to see your account.
- The student dropped in to discuss his paper.
- Henry is moving in to save money on rent.
These instances are less confusing than when to is used as a preposition, since into doesn't look as correct in these cases. Still, it's important to understand exactly why in to is the proper choice.
Useful Tips for Telling Them Apart
If you’re staring at in and to and wondering whether or not to combine them, there are some easy ways to figure it out. Use this checklist to determine whether you mean to say in and to or into.
Use into if:
- Your noun moved inside something else.
- Your noun was transformed into another noun.
Use in to if:
- Your noun hasn’t moved in the sentence.
- You’re trying to say “in order to.”
- Into doesn’t make sense.
More Tricky Grammar Resources
Into and in to aren’t the only prepositions that are easy to mix up. Learn how to use in and on correctly with an informative article. Then explore the difference between maybe and may be. You can also practice telling prepositions apart with a selection of fun preposition games. | https://grammar.yourdictionary.com/vs/in-to-vs-into-usage-tips-you-wont-forget.html |
10 Most Famous Paintings by Paul Cezanne
Paul Cezanne was one of the leading artists of Post Impressionism. Cezanne's exploration of geometric simplification and optical phenomena inspired many painters of the 20th Century to experiment with simplifications and complex multiple views....
10 Most Famous Paintings by Vincent Van Gogh
Considered among the greatest painters of all time, Vincent Van Gogh was a Post-Impressionist artist of Dutch origins. His work, which is notable for its rough beauty, emotional honesty, and bold color, had a...
10 Most Famous Paintings by Pablo Picasso
Pablo Ruiz y Picasso (1881 - 1973) was a Spanish artist who is regarded by many as the greatest painter in history. He was one of the most influential artists of the twentieth century....
10 Most Famous Paintings by Salvador Dali
Salvador Dalí (1904 – 1989) was a Spanish artist who is most famous for his works in Surrealism, an influential 20th century movement, primarily in art and literature. Surrealist artists rejected the rational in...
10 Major Inventions of the Industrial Revolution
The Industrial Revolution was a period of major industrialization which began in Great Britain in the mid-18th century and spread to other European countries, including Belgium, France and Germany, and to the United States....
10 Most Famous Modern Art Artists And Their Masterpieces
Modern art is a term used to describe the artworks produced in the period from around the 1860s to the 1970s. Art after the 1970s is often called contemporary art or postmodern art. Primarily,...
10 Most Famous Spanish Artists And Their Masterpieces
Spain has a rich tradition in art and has played a major role in the history of western painting. Spanish Golden Age was a period from the early 16th century to the late 17th...
10 Most Famous Paintings by Rene Magritte
René François Ghislain Magritte (1898 – 1967) was a Belgian artist most renowned for being one of the leaders of the influential 20th century art movement, Surrealism. Surrealist artists rejected the rational in art....
10 Most Famous Poems by Langston Hughes
Active in the twentieth century, James Mercer Langston Hughes (1902 - 1967) was an African American writer most renowned for his poetry and for being the leading figure of the movement known as the...
10 Most Famous Graffiti Artists In The World
Graffiti are writing or drawings that have been scribbled, scratched or painted illicitly on a wall or other surface, often within public view. The word graffiti, or its singular form "graffito", come from the... | https://learnodo-newtonic.com/category/articles/top-ten-lists?filter_by=popular7 |
Not so long ago, I was eating out during my working hours. I thought it was a good idea. But after a month I checked my bank account, and I was shocked.
My food expenses made up 45% of my monthly spending!
So I decided I'd cook on my own. Unfortunately, there were times when I was hungry as hell because I didn't have anything to eat, especially when I needed to work 40 hours weekly and attend classes at the university.
Then an idea came to my mind: what if I just prepared meals in advance, so I wouldn't be so pissed off by constantly cooking?
Then I found out about meal planning and set it up.
Introduction
What’s that?
It's a cooking system that gives you access to your favorite food wherever you are, as cheaply as possible. You simply buy food in advance, cook 1-2 times per week, portion the meals into plastic containers, and keep them in the fridge.
Why Even Bother?
It’ll save you a lot of time, energy and money. It will also improve your health. It’s a good investment.
Who Can Benefit?
People who want to save money and time.
How to Apply it?
I decided to finally automate my process of eating.
I will tell you what I’ve done and how you can copy it.
Before we start:
My goal is to show you how to automate your life and avoid spending money on stupid things. I'm not a nutritionist, so don't take my word on your meal choices. Also, if you have any health problems, consult your doctor to be sure about what you can eat.
Step 1: Identify How Much You Should Eat in a Body Calculator.
The first question that comes to my mind is: what should I eat, and in what amount?
I'm not a bodybuilder, so I'm not gonna count every calorie, but I'll use this tool just to estimate how much I should eat. As a famous quote says: "What gets measured gets managed".
I shouldn’t be hungry during the day and on the other hand, I shouldn’t eat too much.
Let’s say I need to eat 3000 calories during the day.
I’d prefer to eat 4 meals a day.
3000/4=750 calories per meal.
Now choose your macro proportions. They may differ depending on whether you prefer fat or carbs as your main fuel.
Regardless, you should eat about 2 grams of protein per kilogram of your body weight.
So in my case, that would be 140 grams. Divided over 4 meals, that's 140/4 = 35 grams of protein per meal.
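If you want to sanity-check the arithmetic, here's a tiny Python sketch. The 70 kg body weight is an assumption I'm plugging in (it's the value that makes 2 g/kg come out to 140 g); swap in your own numbers.

```python
# Per-meal targets from a daily budget.
# Assumed inputs: 3000 kcal/day, 4 meals, 2 g protein per kg,
# and a 70 kg body weight (consistent with the 140 g figure above).
daily_calories = 3000
meals_per_day = 4
body_weight_kg = 70
protein_per_kg = 2

calories_per_meal = daily_calories / meals_per_day   # 750.0 kcal per meal
daily_protein = body_weight_kg * protein_per_kg      # 140 g per day
protein_per_meal = daily_protein / meals_per_day     # 35.0 g per meal

print(calories_per_meal, daily_protein, protein_per_meal)
```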
Step 2: Specify Your Weekly Meal Plan.
Find out what healthy meals you like to eat. I'm looking forward to bringing some variety to my meal plan, but for now it's very straightforward. For me, each day looks like this:
Meal 1: Protein Shake
Meal 2: Chilli con Carne OR chicken with rice OR Beef with Pasta
Meal 3: Chilli con Carne OR chicken with rice OR Beef with Pasta
Meal 4: Oatmeal
Check the macronutrients and calories of the ingredients in each meal and define the right portions.
Step 3: Buy Kitchen Equipment
In the past, I struggled with cooking because of all the equipment. A pan alone can be trouble, since you need to stay nearby the whole time.
That's why I recently bought a slow cooker and a rice cooker. They save a lot of time.
All I need to do now is put all the ingredients into one big pot, the slow cooker, and then I can do whatever the hell I want!
The rice cooker is helpful as well. Instead of cooking rice every day, I can cook a lot in one go and then enjoy it for the next couple of days.
Look at your recipes and your kitchen and define what you'd need to buy to prepare the food you like to eat. Chances are you've already got everything you need.
Buying a slow cooker and a rice cooker is optional, but if you have a busy schedule like me and don't like staying too long in the kitchen, you should consider getting them.
Step 4: Do Shopping
Many people buy their food in a supermarket. But what if you bought it online instead? It'll save you a lot of time and energy, and you'll be less likely to buy junk food when you shop from a prepared list.
You know that feeling when you walk past the junk food in a supermarket and can't resist buying some? Yeah, I know it too. This way you'll avoid it.
Now, what to buy in what amount?
You need to know how much space you have in your fridge and your freezer.
But first, let's divide your food into a few categories:
First: perishable and non-perishable.
Second: Protein-based, carb-based and fat-based.
You can buy your non-perishable food in bulk every 4 months.
When it comes to perishable food, however, buy it every 1-2 weeks depending on the size of your fridge.
Remember that you not only want to automate your diet but also hit your macros, so you'll stay lean and healthy.
Step 5: Prepare your Meals
Now that you have all the ingredients, you can prepare everything in bulk.
Let's start with the meat. Put it in the slow cooker, then add the rest of the ingredients except the rice: vegetables, spices and so on. Set the slow cooker for 4 hours on high or 8 hours on low.
Put the rice in the rice cooker.
What about the proportions? Define how much you need for 1 meal and multiply it by 6, so you'll have 6 meals.
After your food is cooked, divide it among your boxes and put them in the fridge.
Now that all your food is prepared, you just need to pick one or two of these boxes and carry them with you wherever you go.
Math behind it
Let's see the math behind preparing lunches at home.
First, how much time do I spend on all of that?
I spent:
- 10 minutes on determining what I need to buy and buying it.
- 10 minutes on repacking my food.
- 10 minutes on preparing my food in a slow cooker and rice cooker 2 times per week.
- 10 minutes on putting meals in boxes 2 times a week
- 5 minutes per day on washing the dishes.
10 + 10 + 2*10 + 2*10 + 7*5 = 95 minutes per week
How much does it cost me?
I live in Poland, so my prices may differ from yours. But let's say I spend about $1.50 per lunch at home, while a lunch costs $4.50 if I eat out. So it saves me $3 on each meal per day.
So it's 30 * $3 = $90 a month per meal.
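If you want to plug in your own numbers, here's the same time-and-money math as a small Python sketch. The session counts and prices are just my figures from above; yours will differ.

```python
# Weekly time budget for meal prepping (minutes).
shopping = 10            # deciding what to buy and buying it, once a week
repacking = 10           # repacking food, once a week
cooking = 2 * 10         # two slow-cooker/rice-cooker sessions a week
boxing = 2 * 10          # two box-filling sessions a week
dishes = 7 * 5           # five minutes of dishes a day
minutes_per_week = shopping + repacking + cooking + boxing + dishes
print(minutes_per_week)  # 95

# Monthly savings versus eating out (dollars, my Polish prices).
home_lunch = 1.5
eating_out = 4.5
monthly_savings = 30 * (eating_out - home_lunch)
print(monthly_savings)   # 90.0
```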
Comparison
What are the alternatives?
Cooking Each Day:
- Spending a lot of time
- Saving money
- Too much focus on preparing meals
- Control over portions
Ordering Meals/Eating out:
- Saving time
- Spending a lot of money
- It can be unhealthy
- No control over portions
Meal Prepping (Diff Path):
- Saving time
- Saving money
- Not distracted by food
- Control over portions
- Automating preparation
- Immediate access to food
My Results
I have full energy during the whole day and I’ve saved a lot of money. I also have more time to do the most important stuff.
What’s more, I’m no longer worried that I have nothing to eat or that I throw my money away.
And last but not least, I can be fully concentrated on my work.
Where to Start?
Just pick the simplest recipe possible. Then go to the grocery store (or shop online) and buy all your ingredients. Cook it and see if you like it.
You can always switch back if you don’t find it easy and practical.
Try for 2 weeks and see it for yourself! | https://www.diffpath.com/automate-diet/ |
The Arizona Highway Patrol came upon a pile of smoldering metal embedded into the side of a cliff rising above the road at the apex of a curve. The wreckage resembled the site of an airplane crash, but it was a car. The type of car was unidentifiable at the scene. The lab finally figured out what it was and what had happened.
It seems that a guy had somehow gotten hold of a JATO unit (Jet Assisted Take Off – actually a solid fuel rocket) that is used to give heavy military transport planes an extra “push” for taking off from short airfields. He had driven his Chevy Impala out into the desert and found a long, straight stretch of road. Then he attached the JATO unit to his car, jumped in, got up some speed and fired off the JATO!
The facts as best as could be determined are that the operator of the 1967 Impala hit JATO ignition at a distance of approximately 3.0 miles from the crash site. This was established by the prominent scorched and melted asphalt at that location. The JATO, if operating properly, would have reached maximum thrust within 5 seconds, causing the Chevy to reach speeds well in excess of 350 mph and continuing at full power for an additional 20-25 seconds. The driver, soon to be pilot, most likely would have experienced G-forces usually reserved for dog-fighting F-14 jocks under full afterburners, basically causing him to become insignificant for the remainder of the event. However, the automobile remained on the straight highway for about 2.5 miles (15-20 seconds) before the driver applied and completely melted the brakes, blowing the tires and leaving thick rubber marks on the road surface, then becoming airborne for an additional 1.4 miles and impacting the cliff face at a height of 125 feet leaving a blackened crater 3 feet deep in the rock.
Most of the driver’s remains were not recoverable; however, small fragments of bone, teeth and hair were extracted from the crater and fingernail and bone shards were removed from a piece of debris believed to be a portion of the steering wheel.
Note: This is the original “Darwin Award” story which circulated widely on the net and inspired the Darwin Awards as an internet tradition. It is apocryphal; it never really happened. However it’s too good a story not to have happened.
This page was last updated January 1, 2004. | https://richardhartersworld.com/darwin95-2/ |