The point is to provide entropy. TrueCrypt must generate a secret key for the volume. It does so by generating a bunch of random bits. Here (as often in cryptography), what is important is not really that the bits that make up the key are random in a statistical sense, but rather that the key cannot be predicted or reproduced by an attacker. A computer is a deterministic machine¹: if the attacker knows what state it was in when you started to generate the key, he can run TrueCrypt and generate the same key.
By moving the mouse, you are providing input that the attacker cannot reproduce. The more input you provide, the harder the key will be to reproduce. For example, if the computer only recorded a single motion as left or right, then there would only be two possible keys, and the attacker could try them both; the key would only have 1 bit of entropy (no matter how long the key is). Ideally, the key must be completely random; if the key is, say, a 128-bit key, the random number generator must have 128 bits of entropy available. Human movements are somewhat predictable (you aren't going to move the mouse two meters left), but the more you move, the more entropy you feed into the pool.
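Conceptually, each mouse event gets mixed into an entropy pool, for example by hashing it together with the pool's current contents. The sketch below is illustrative only (the `mix` function and the event format are made up for the example; TrueCrypt's real pool mixing is more elaborate):

```python
import hashlib

def mix(pool: bytes, event: tuple) -> bytes:
    """Fold one mouse event (x, y, timestamp) into the entropy pool."""
    data = ",".join(str(v) for v in event).encode()
    return hashlib.sha256(pool + data).digest()

pool = b"\x00" * 32  # the pool starts in a known (predictable) state
for event in [(12, 340, 1001), (15, 338, 1002), (19, 333, 1004)]:
    pool = mix(pool, event)

key_material = pool[:16]  # enough bytes for a 128-bit key
```

The pool only becomes unpredictable to the extent that the events fed into it are; three mouse samples would still leave very few possibilities for an attacker to enumerate, which is why the dialog asks you to keep moving.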
The mouse motion is not related to the 100,000 rounds. The rounds are a different issue, related to how hard it is for the attacker to reproduce your password. Humans are notoriously poor at choosing and remembering complex passwords, so the attacker can try all plausible passwords by brute force. For this reason, cryptographic systems that use passwords don't use them as-is, but perform some computation (a cryptographic hash, say; PBKDF2 is generally recommended these days) on the password many times over. This computation is expensive; its running time is proportional to the number of rounds. The system must perform this iterated computation once per password attempt; the attacker must also perform it once per password attempt. If it takes 1 second for your system to process your password when you mount the volume instead of 10 microseconds, it's not a big deal, because password processing is only a tiny fraction of what you use your CPU for anyway. But for the attacker, who's spending all his CPU time brute-forcing passwords, being able to perform only 1 cracking attempt per second and not 100,000 per CPU is a big hit.
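The cost of iterated password processing is easy to see with the PBKDF2 implementation in Python's standard library. The password, salt, and round counts below are illustrative, not TrueCrypt's actual parameters:

```python
import hashlib
import os
import time

password = b"correct horse battery staple"
salt = os.urandom(16)  # stored alongside the volume header

# Each extra round multiplies the attacker's cost per password guess,
# while the legitimate user pays the price only once per mount.
for rounds in (1_000, 100_000):
    start = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", password, salt, rounds, dklen=32)
    elapsed = time.perf_counter() - start
    print(f"{rounds:>7} rounds -> {len(key)}-byte key in {elapsed:.4f} s")
```

Running this shows the time per derivation growing roughly linearly with the round count, which is exactly the asymmetry the scheme relies on.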
¹ Some computers have a hardware random number generator, which derives its randomness from sources that are physically impossible to predict (or at least hidden and extremely hard to predict). Nuclear decay is good for this but impractical. On mobile devices, camera white noise works fairly well. But many computers lack such a hardware random number generator.
Profession wastewater treatment operator
Wastewater treatment operators operate equipment used in a water or wastewater plant. They treat and clean drinking water before it is distributed to the consumer and process wastewater to remove harmful substances before returning it to rivers and seas. They take samples and perform tests to analyse the water quality.
- Water chemistry analysis
Principles of complex water chemistry.
- Perform water treatments
Perform regularly water testing, ensuring that water management and filtration processes follow reasonable management practices, industry standards, or commonly accepted farming practices. Record previous water contaminations, the source of contamination and contamination remedied. Take mitigation measures to guard against further contamination.
- Perform water treatment procedures
Perform operations such as filtering, sterilising, and dechlorinating in order to purify water for consumption and food production using different procedures and technologies such as micro-filtration, reverse osmosis, ozonation, carbon filtration, or ultraviolet (UV) light.
- Perform water chemistry analysis
Carry out chemical analyses of water samples to determine their composition and quality.
- Use water disinfection equipment
Operate equipment for water disinfection, using different methods and techniques, such as mechanical filtration, depending on needs.
- Measure water quality parameters
Quality assure water by taking into consideration various elements, such as temperature.
- Document analysis results
Document on paper or on electronic devices the process and the results of the samples analysis performed.
- Carry out waste water treatment
Perform waste water treatment according to regulations checking for biological waste and chemical waste.
- Interpret scientific data to assess water quality
Analyse and interpret data like biological properties to know the quality of water.
- Operate water purifying equipment
Operate and adjust equipment controls to purify and clarify water, process and treat wastewater, air and solids, recycle or discharge treated water, and generate power.
- Maintain water treatment equipment
Perform repairs and routine maintenance tasks on equipment used in the purification and treatment processes of water and waste water.
- Monitor water quality
Measure water quality: temperature, oxygen, salinity, pH, N2, NO2, NH4, CO2, turbidity, chlorophyll. Monitor microbiological water quality.
- Dispose of sewage sludge
Operate equipment to pump the sewage sludge and store it into containers in order to transform the gases it emits into energy. After this phase, dry the sludge and evaluate its potential reuse as fertilizer. Dispose of the sludge if it contains hazardous elements.
Optional knowledge and skills: perform sample testing, ensure proper water storage, maintain water distribution equipment, maintain desalination control system, water reuse, test samples for pollutants, maintain specified water characteristics, operate sewage treatment plants on ships, manage desalination control system, use personal protection equipment, water policies, apply health and safety standards, ensure equipment maintenance, ensure compliance with environmental legislation, operate pumping equipment, operate hydraulic machinery controls, laboratory techniques, prepare samples for testing
Source: Sisyphus ODB
In our eBook, Building a Digital Literacy Program that Nurtures Future-Ready Students, we discuss the process for building an inclusive and aligned digital literacy program. An essential element of this is empowering teachers to address these higher-order skills with students – to move them from passive users of technology to creators, innovators, and global learners – through impactful technology integration in the classroom.
Ultimately, empowered teachers empower learners. The success of any program, including a digital literacy program, rides on the ability of teachers to effectively implement it in their classrooms, which means that teachers must be well-supported in order to achieve the program’s vision.
The Levels of Technology Integration
More and more we hear about technology integration in the classroom. Districts are pushing for it. States are creating funding options to make technology more accessible. And at the national level, both government and non-governmental organizations are developing programs for it. Heck, we even talk about it a lot (Exhibit A, B, and C). But what does it mean to effectively integrate technology in the classroom?
Kim Mattina – teacher and technology education enthusiast – uses the SAMR model as a gauge for measuring the efficacy of technology integration in the classroom. As she explains:
The SAMR model is a framework created by Dr. Ruben Puentedura that categorizes four different levels of classroom technology integration. The acronym ‘SAMR’ stands for Substitution, Augmentation, Modification, and Redefinition.
- Technology is used as a direct substitute with no functional change.
- Example: students type a research paper into a Google document instead of writing it on paper
- Technology is used as a direct substitute with functional improvement.
- Example: student include videos and links to resources in their research paper
- Technology allows for significant task redesign.
- Example: students share and collaborate in a Google document
- Technology allows for new tasks that were never created before.
- Example: students virtually connect with other classrooms nationwide or worldwide to present their research
Ultimately, the goal is to get to those deeper levels of technology integration in the classroom – modification then redefinition – rather than just substitution and augmentation. Recent studies, though, have concluded that the majority of technology use in classrooms exists only at those lower levels.
The State of Current Technology Integration
A study by PwC found that nearly two-thirds of technology integration in the classroom is passive (watching videos and reading websites) with less than a third of technology use being active (producing videos, coding, and analyzing data).
Another report affirmed that, when using technology in the classroom, teachers are likely “digitizing traditional learning instead of enhancing it.” The survey’s respondents revealed that digital learning occurred through the following formats:
- 90 percent use PDFs and Word documents
- 70 percent use online videos
- 42 percent use games
Similarly, an AdvancEd study that performed classroom observations determined that only:
- 47 percent of classrooms showed evidence of using technology to gather or evaluate information for learning.
- 35 percent of classrooms used technology to communicate or work collaboratively.
- 37 percent used technology to problem solve, research, or create projects and original works.
Finally, in her thesis, Dr. Delnaz Hosseini assessed the barriers to digital literacy instruction in K-2 classrooms. She explains:
“Overall, results indicate that students are provided with opportunities to develop basic computer literacy skills…but they seldom engage in activities that promote the development of information literacy skills…which focus on the students’ ability to gather, analyze, and effectively apply information acquired through digital sources.”
From the Teacher’s Perspective
These results, though, shouldn’t come as a surprise. In the same study by PwC, only 10 percent of teachers surveyed feel confident teaching higher-order digital skills. And eSchool News found that 78 percent of teachers feel underprepared to integrate technology into their teaching.
What does this indicate? Teachers do not yet feel comfortable teaching digital skills, which is reflected in the passive use of technology in the classroom. This is likely because teachers lack the training and instructional resources to do so.
Barriers to Tech: In Dr. Hosseini’s research, teachers were asked to identify the most significant barriers to teaching digital skills. Unsurprisingly, lack of time, which includes competing priorities for limited classroom time and planning time to devote to technology lessons, was a resounding response.
Teachers also expressed student-related barriers like self-management skills, reading and writing abilities, age, and student-to-teacher ratio. These hindered the ability for students to meaningfully engage in digital literacy instruction and activities.
Supports for Tech: Aside from barriers to digital literacy instruction, teachers also cited enhancements, which included knowledge sharing about technology standards, demo lesson observations, district tech coaches, onsite tech monitors, and access to technology.
Interestingly, teachers expressed they were more knowledgeable about 21st century skills and digital literacy and less about adopted technology standards. This indicates that teachers understand the value of teaching higher-order digital skills but need the resources to address them in ways that align with grade-level expectations.
Teachers also ranked confidence to design technology lessons highly on the list of enhancements. Confidence really gets to the core of what teachers need to incorporate digital literacy in their classrooms. Tech monitors, tech coaches, and lesson observations help reinforce effective technology integration, allowing teachers to enhance their practice and grow through feedback. And knowledge of technology standards gives teachers the confidence to meaningfully address digital literacy within curriculum.
Similarly, participants in PwC’s study rank their biggest needs for improved technology integration in the classroom:
- 79 percent want more professional development
- 81 percent want more funds to attend professional development
- 81 percent want more ‘release time’ to attend professional development
- 81 percent want more resources and other course materials
So, teachers need professional development, professional development, professional development, and resources.
Reimagine Technology Integration in the Classroom
If we return to the earlier listed barriers, a lot of these dissipate with training, support, and resources. With classroom-ready resources and an understanding of how to integrate technology into core curriculum, time problems are significantly reduced. Appropriate grade-level materials that are developed to maximize learning allow for a seamless integration of digital literacy. And ongoing support empowers teachers to grow as technology does, to also become creators, innovators, and global learners, too.
Digital literacy is not just another program in a long list of them; it is the program. At its core, this program aims to transform the ways students learn by empowering them with digital skills to succeed in their future. Teachers are the key to achieving this vision.
By Jonny Lupsha, Wondrium Staff Writer
A Chinese metal miner has written two collections of poetry about his work. Mining for rare metals involves detonating explosives underground. Mining of any kind is a dangerous occupation.
Working as a miner is rarely seen as glamorous. Miners burrow and blast into the Earth for precious metals and coal, emerging covered in dirt and soot. Dozens are killed every year. However, one Chinese miner has turned the worries, dangers, and hopes of the mining industry into two successful books of poetry. Chen Nianxi is becoming a major figure in “migrant worker literature,” in which the honor and dangers of everyday work collide with the toll they take.
Explosions and cave-ins make mining a risky job. Coal mining is especially dangerous since it adds combustible materials, poisonous gases, and inhalation of coal dust into the mix. In his video series The Industrial Revolution, Dr. Patrick N. Allitt, Cahoon Family Professor of American History at Emory University, revealed the dangers of coal mining, using Britain as an example.
Ancient Coal Mining
“Since ancient times, inhabitants of Britain had burned coal from geological outcrops, but weathering made it poor fuel,” Dr. Allitt said. “By the time of the Romans, they understood the need to dig into the ground to recover it to find fuel that was going to burn better. Coal is the compressed remnants of plants that grew millions of years ago; mining it has always been, and still is, among the most dangerous jobs in the world.”
Dr. Allitt cited cave-ins, carbon monoxide release, and disease-causing coal dust inhalation as risks of coal mining specifically. These dangers often breed animosity between miners and the mine owners who profit from their work. The work environment isn’t the only source of stress for miners.
“The traditional coal mine had a shaft about eight feet in diameter,” he said. “This dug straight down into the earth until it meets the coal seams, and then lateral tunnels are dug out from the bottom of the shaft, the “pit head,” to dig up the coal itself. It’s hauled up the shaft in woven baskets […] called corves by a team of horses or turning a windlass at the top to drag the corves up the shaft.”
Early miners used what’s called the “bord-and-pillar” system. Using pickaxes, they’d cut out some of the coal but leave large pillars of it to support the roof of the cave. That system wasted approximately half of the available coal.
Modern Times, Modern Problems
As with any fossil fuel, coal is a finite resource. As its use continued, mines had to get bigger and deeper. This came with a series of dangers that could no longer be handled with ancient mining methods. One of the biggest dangers was that of flooding.
“Flooding was severe, especially in mines near the coast, such as the huge mining area around Newcastle upon Tyne, which developed early on,” Dr. Allitt said. “Primitive pumps were unsatisfactory; chains of buckets drawing water out of the mines simply weren’t efficient enough. If the mine was set on high ground, it was sometimes possible to build an adit or a drain lower down, but better methods were urgently needed by 1700.”
Another danger involved ventilation. Mines are very difficult to supply with circulating air, and the poisonous gases hadn’t gone away over the years. However, this problem was bigger than carbon monoxide or coal dust inhalation. A phenomenon known as “choke damp” was a mixture of carbon dioxide, nitrogen, and water vapor. It was unbreathable and could suffocate miners.
Finally, there was “fire damp,” which is methane. Methane can explode.
“Newcastle on Tyne had a reputation for ‘fiery pits’—in other words, pits that were full of fire damp—and there were frequent lethal explosions,” Dr. Allitt said. “For example, 30 men were killed in Gateshead near Newcastle in an explosion in 1705; 69 more were killed at Chester-Le-Street in a 1708 explosion.”
If there is a silver lining to these clouds, it may be found in migrant worker literature like that of Chen Nianxi, the miner-turned-poet.
A sedentary lifestyle is made up of many small habits. Every day we make choices that can affect our health: taking the lift to the first floor, the bus to go from one stop to the next, driving to the shop on the corner… Being a sedentary person is not strictly a matter of avoiding gyms or sports in general.
So what is the real meaning of being sedentary? The World Health Organization (WHO) defines physical activity as “any bodily movement that requires energy expenditure”. That means that if you avoid moving your body throughout the day, you fit into the sedentary category (and that is not a compliment). It is considered that 10,000 steps a day is the minimum required to leave the sedentarism zone.
Are we sedentary?
In 2008, data from WHO pointed out that globally, around 31% of adults aged 15 and over were insufficiently active and that around 3.2 million deaths each year are attributable to insufficient physical activity.
Ten years later, a study published in September 2018, declared Latin America as the region with the highest number of sedentary people. In this context, Brazil leads the rank with 47% of the population being considered sedentary. The study collected data from the past 15 years and it shows that Brazil had the largest increase in the sedentarism numbers (over 15%).
The same study pointed out that 1 in every 5 adults around the world is considered sedentary. Even more alarming, 4 out of 5 teenagers (people under 15 years old) are not physically active.
Why is sedentarism a problem?
Even if we don’t consider the fact that exercising helps you in terms of losing weight and getting your body shaped, there are a lot of benefits in the background that are far more important than the visual side of it.
Avoiding sedentarism helps you improve your muscular and cardiorespiratory fitness and reduce the risk of heart disease, diabetes and even cancer. Studies have shown that some types of cancer (such as breast and colon) could be avoided with an active lifestyle.
It is extremely important to note that a sedentary life may also mean a shorter life. Studies show that people who are insufficiently active have a 20% to 30% increased risk of death compared to people who are sufficiently active.
Sedentarism and depression
If you are a sedentary person, you probably feel surprised when someone tells you that exercising makes them feel happier, right? Some may even risk saying that they are “addicted to the gym”, which makes all the sedentary people wonder if that is possible… Well, yes it is and there is a very real reason why that happens.
From a more chemical point of view, sedentary people are more prone to depression and fatigue.
Cortisol is a hormone produced when our body is under stress or any negative feeling such as anxiety. Exercising helps to “burn” that hormone and it also releases another hormone called endorphin.
On its own, endorphin is one of the best hormones our body can release, as it interacts with the receptors in your brain and reduces our perception of pain. Those hormones also trigger a positive feeling in the body, making us feel more relaxed and even happier. That is the reason why the people you know who exercise constantly always say they feel so good doing it, and why, if you have a sedentary lifestyle, that sounds so… impossible.
How can I change?
According to the WHO study and a plan launched earlier this year to increase people’s physical activity, city infrastructure also affects how citizens relate to sedentarism. Fear of crashing your bike or being run over while taking a walk, lack of security outside your home and even the lack of space are the main reasons why people (especially those on a low income who can’t afford a gym) end up not exercising. Creating cycling and walking tracks, improving road safety and developing infrastructure for physical activities in parks and open spaces are steps the government should take to encourage people to keep moving.
According to the World Health Organization, adults over 18 years old should do at least 150 minutes of moderate-intensity physical activity per week. To improve their health further, the best practice would be a total of 300 minutes a week of physical activity.
When it comes to kids and adolescents, the minimum required is 60 minutes a day of moderate to vigorous activity, which is usually fulfilled with Physical Education classes at school. However, it is highly recommended that they go beyond 60 minutes a day, as it will help them develop and will provide additional health benefits.
It all can be done with small choices, such as walking or cycling to work, taking the stairs instead of the lift and dedicating at least one hour a day to look after your body. Physical health is also mental health and even yoga and pilates classes are a great option to escape sedentarism.
The first step is to want to make a change. Check your path to work or school, or even to the supermarket, and start adding those 10,000 steps to your day-to-day activities. If you don’t like the gym environment, search for parks closer to your home and try to take long walks before work or even during your lunchtime, and bring yourself as far away as possible from being a sedentary person.
The Politics of Palm Oil
KYOTO: Palm oil plantations and processing have become a strategic industry for Indonesia. Palm oil is the country’s third largest export earner, contributing substantial foreign exchange earnings and providing opportunities for small-scale farmers to partake in this vibrant agro-business, thus developing the rural economy and spurring local employment. In Southeast Asia, palm oil is a traditional commodity dating back to the colonial period. But by the 1980s, increasingly high global demands for palm oil – for food products, cosmetics and even biofuels – led to industrial-scale plantations, particularly on Indonesia’s Sumatra and Kalimantan islands with their favorable climate and fertile, loamy soil conditions.
In 2008, Indonesia’s replaced Malaysia as the world’s top exporter of palm oil as a result of a series of state-led programs designed to boost palm oil production, such as privatization of previously state-run estates. Today, Indonesia has 6 million hectares of oil palm plantations. It produces up to 25 million tons of palm oil annually, or half of the world’s total production, delivering around 5 percent of the country’s annual gross domestic product. This success is also due to the industry opening to foreign investment. Malaysia and Singapore happen to represent the majority of foreign investors, outnumbering those from outside the region. Through single investments and joint ventures with local companies, the two countries control more than two thirds of the total production of Indonesia’s palm oil.
This context reveals an inescapable correlation between investments from Malaysia and Singapore and the forest fires caused by the habitual slash-and-burn method used by farmers as a cheap and convenient way to clear the land for a rapid turnaround of cultivation. This year in particular, the smoke haze from Sumatra has had an even more devastating impact on Malaysia and Singapore, in terms of economic loss and potential health hazards. The polluted haze reached dangerous levels in the two neighboring countries; Malaysia even declared a state of emergency in the Muar and Ledang districts of the southern Johor state.
So when the governments of Malaysia and Singapore condemned Indonesian farmers, they seemed to overlook the fact that private firms from their own countries have played a major part in the outbreak of the smoke haze. A crisis of good governance is responsible for this transnational problem.
Indonesia does have laws prohibiting slash and burn methods. For example, Article 78 of the 1999 Forestry Law stipulates that anyone found guilty of burning forests is subject to up to 15 years in prison and a maximum fine of Rp 5 billion (US$525,000). At the same time, Central Kalimantan issued its own regulation in 2008 which allows controlled burning by some small farmers. The rationale behind such regulation is that a complete ban would have adversely affected small producers and hurt the province’s rice output.
One question that must be tackled is why managers of commercial palm oil plantations in Indonesia can continue to pose a threat to the environment and the regional economy. Helena Varkkey, who studies corporate communications and sustainable development, argues in a recent study that the regionalization of the oil palm plantation sector has shaped a political culture characterized by a deep-rooted patronage system. Owing to a similar shared culture of patronage politics, Malaysia and Singapore were successful in inserting themselves into the existing patronage networks in Indonesia, which also operate in key industries like palm oil.
In the palm oil sector, the patronage system serves as an essential structure revolving around production, marketing and distribution, while connecting significant actors to facilitate their businesses through legitimate mechanisms such as palm oil consortiums. These consortiums normally consist of local producers, senior bureaucrats and influential businessmen who have forged close links with top national leaders. For example, in the case of Indonesia, a powerful politician plays a leading role in key decisions of the group which owns a large palm oil company. These decisions can have a huge impact on the nation’s palm oil industry. For foreign companies, it is imperative to establish links with Indonesia’s powerful individuals or institutions to break into the industry.
Several Malaysian companies doing palm oil business are significant investors with connections with Indonesian authorities, Varkkey explains. Singapore companies have in recent years also emerged as players in the Indonesian palm oil industry. Some of these conglomerates have become the world’s largest palm oil producers based in Indonesia. Normally, their board of directors consists of high-flying Singaporean personalities in politics and business. As an example of the existing patronage system, many Malaysian companies gained benefits from the Malaysian-Indonesian investment treaty in 1997 when Indonesia pledged to allocate 1.5 million hectares of land to Malaysian investors for palm oil development.
Following the pattern of Indonesia’s patronage system, Malaysian and Singaporean companies found the need to build relations with local strongmen and the national leaders of Indonesia. From setting up subsidiaries, earning licences to production and property rights to plantation lands, to appointing influential Indonesian figures to sit on the board, Malaysian and Singaporean companies have further entrenched the patronage politics within the palm oil industry. Strong connections with leaders at the top can help lubricate all kinds of transactions.
Peatlands are suitable for oil palm, yet also extremely prone to fire. Under such sensitive conditions, the Indonesian government enacted legislation in 1999 to control the proportion of peatland used for palm oil plantations and to ban slash-and-burn tactics. Often, such legislation is ignored, simply because of protections offered to firms by those of influence within the Indonesian government and a lack of enforcement. Thus, plantations prefer ground burning instead of the more expensive and inconvenient mechanical approach of clearing land with excavators and bulldozers. Indonesia’s Duta Palma is one of the companies with the worst records of illegal burning, Varkkey claims. And many political leaders largely remained silent or showed indifference when the smoke struck Singapore and Malaysia in June, with one party member telling Singapore to stop acting like a child.
Some state agencies, like the Indonesian Anti-Corruption Commission, work closely with a local NGO, Indonesia Corruption Watch, and are investigating a number of cases involving foreign companies and alleged illegal land clearing. But their efforts are stonewalled by the Indonesian courts. Instead of acting in defence of good governance, courts choose to protect the powerful in the industry in which they have vested interests. In 2010, an unnamed Malaysian-owned plantation was brought to court, but the case was stopped from continuing on to a higher court.
The intricate cross-border connections within the palm oil industry create an awkward situation and, more importantly, a crisis of good governance in Southeast Asia. With name-calling and scapegoating over the polluted haze, the governments of Indonesia, Malaysia and Singapore have engaged in a rhetorical exercise. In reality, all parties are skating around the real issues, discomforted over delving too deeply into the root of their shared problem. | <urn:uuid:df1a0efc-d24b-467e-b1fa-5e897ab8cc8a> | CC-MAIN-2019-47 | https://yaleglobal.yale.edu/content/politics-palm-oil?fbclid=IwAR1oq_1wHDaz5Jk-d8Z5JruRD7M2BMz4HLs-kOtlrtAozNfdjmkQJfuZLCs | s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668716.69/warc/CC-MAIN-20191116005339-20191116033339-00329.warc.gz | en | 0.946827 | 1,412 | 3.203125 | 3 |
Ok so mathematically you can divide any number by any other (nonzero) number, and you can keep dividing that number however many times you want — like dividing 1 by 2 and then by 2 again, etc. And this is the basis of the famous paradox that mathematically, you can't really move from point A to point B, because first you need to get to the middle of A and B, and then to the middle of the middle, and then the middle of the middle of the middle, etc. But what if space is not continuous, but quantized? Like what if there is a smallest possible length, and you cannot be in between multiples of that length, meaning you cannot physically divide that length by 2 to get to the middle (even though mathematically you could)? Wouldn't that have some serious consequences for the physical application of calculus to the real world? (Maybe not when working with large bodies, but definitely at small scales?) For example, the intermediate value theorem wouldn't hold true... I don't know, I'm no calculus expert (only had Calc I and II and basic physics), but this thought occurred to me and has bothered me.
Sampling: How Much Can I Know With Only A Limited Amount of Data?
We rarely have complete data about something we want to know. For example, we often want to know who will win a political election, but we do not have the time to ask every potential voter who they plan to vote for. And people's attitudes change during a campaign.
However, it is possible to get the opinions of a small number of people and make an estimate of the possible outcomes of the election.
That small number of people is called a sample, and the estimate we make about the broader population is called an inference. The use of statistical techniques to make inferences is called inferential statistics.
Types of Sampling
The method used to determine what subset of people or things you base your inference on is called sampling.
There are two broad types of sampling techniques, and a number of subtypes within those broad types.
Probability sampling is when any member of the target population has a known probability of being included in the sampled population. Such techniques include:
- Simple random sampling involves randomly selecting members of the population, so that there is an equal probability that any member will be selected. For example, a random number generator (like the Excel RAND() function) can be used to select a subset of numbers from a phone number list that covers the population. In the natural sciences, an example might be placing remote cameras at random locations in an area when trying to get a count of the population of a specific type of animal in that area
- Systematic sampling involves selecting members from the target population at a regular interval. For example, every 10th number from a sorted list of addresses
- Stratified sampling involves taking random samples from different subgroups of the target population and then using known information about the size or characteristics of those subgroups to adjust the results. For example, if studying the opinions about college students toward a university policy, different classes (freshmen, sophomores, etc) or majors might have different opinions about that policy. Separating calculations into different sub-groups can be useful for discerning the differences between those groups
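The first two probability techniques can be sketched in a few lines of Python (an aside — this tutorial's own examples use R and Excel, and the 1,000-entry sampling frame here is invented purely for illustration):

```python
import random

# Hypothetical sampling frame: entries numbered 1..1000 (e.g. a phone list)
population = list(range(1, 1001))

random.seed(42)  # fixed seed so the sketch is reproducible

# Simple random sampling: every member has an equal chance of selection
simple_sample = random.sample(population, 10)

# Systematic sampling: every 100th member after a random starting point
start = random.randrange(100)
systematic_sample = population[start::100]

print(sorted(simple_sample))
print(systematic_sample)
```

Stratified sampling would apply `random.sample` separately within each subgroup and then weight the results by the known subgroup sizes.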
Nonprobability sampling is used in cases where probability sampling is impractical. While these techniques are not as reliable as probability sampling techniques for making inferences about the broader population, results from these samples can offer suggestions of research paths that would justify the expense and difficulty of further probability sampling. Some common subtypes of nonprobability sampling include:
- Convenience sampling involves sampling a group of people you can conveniently access. For example, American college students are some of the most studied populations in history. College students tend to be young and from a specific set of social classes, so they cannot be said to be a random sample of the broader population. However, college students are readily accessible to the academic researchers that are their professors, and they often are happy to be research subjects in exchange for pizza or other relatively small amounts of compensation
- Purposive sampling is a variant on convenience sampling that is commonly used when information about a specific subgroup of the broader population is needed. For example, if information about the opinions of men are needed, volunteers might stand on a busy street corner and specifically target only men that pass by for further inquiry (and likely rejection)
- Snowball sampling involves asking sampled subjects for recommendations of other people that might fit a specific profile. For example, if studying a relatively small ethnic group, asking a subject for a recommendation on friends and family members that might participate would make it easier to find other members of that ethnic group
Central Limit Theorem
If we sample a characteristic of a population that can be represented as a continuous variable, we can take the mean of that sample to get the sample mean. But how confident can we be that the sample mean is anywhere close to the population mean?
A curious and wondrous fact about random sampling is that if you take many random samples, each estimating the arithmetic mean of some population characteristic, the distribution of the sample means (and therefore of their deviations, or errors, from the population mean) will approximate a normal distribution.

This is the central limit theorem, which was originally proposed by French mathematician Abraham de Moivre in 1733 and later developed by French mathematician Pierre-Simon Laplace in 1812 and Russian mathematician Aleksandr Lyapunov in 1901. What it means is that you can use standard deviation and the rules of probability to calculate confidence intervals and assess how reliable your sampling results are. The central limit theorem is one of the most important theorems in statistics.
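The theorem is easy to see numerically. A minimal Python simulation (an aside; the tutorial's own workflow uses R and Excel) draws many samples from a decidedly non-normal population and shows the sample means clustering around the population mean:

```python
import random
import statistics

random.seed(1)

# Population: uniform on [0, 1), so the population mean is 0.5
# and the population standard deviation is sqrt(1/12) ~= 0.2887
sample_means = [
    statistics.mean(random.random() for _ in range(30))  # one sample of n = 30
    for _ in range(2000)                                 # repeated 2000 times
]

# The means cluster tightly around 0.5, with spread close to
# sigma / sqrt(n) = 0.2887 / sqrt(30) ~= 0.053 -- the standard error
print(round(statistics.mean(sample_means), 2))
print(round(statistics.stdev(sample_means), 2))
```

A histogram of `sample_means` would show the familiar bell shape even though the underlying population is flat.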
Within a normal distribution, we know that around 68% of the values are within one standard deviation of the mean, about 95% are within two standard deviations, and 99.7% are within three standard deviations of the mean.

Therefore, we can be 68% certain that our sample mean is within one standard deviation of the population mean, and 95% certain that it is within two standard deviations. Since what we are dealing with here is a potential amount of error between the sample mean and population mean, this standard deviation of error is called standard error.
The standard error is used to estimate how far the sample mean deviates from the actual population mean.
Standard error for a sample mean (x̄) is calculated as the standard deviation of the population (σ) divided by the square root of the sample size. The greater the sample size, the closer the sample mean can be expected to be to the actual population mean.
σx̄ = σ / √n
Since we rarely know the standard deviation of the population (which is why we're sampling in the first place) we estimate the standard error using the standard deviation of the sample (s):
σx̄ = s / √n
Uses for standard error are described below.
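As a quick numeric illustration of the formula (a Python aside; the standard deviation of 2.8 inches is the value used for the simulated heights later in this tutorial):

```python
import math

s = 2.8  # sample standard deviation, in inches

for n in (25, 100, 400):
    stderr = s / math.sqrt(n)
    print(n, round(stderr, 3))

# stderr works out to 0.56, 0.28, and 0.14: quadrupling the sample size
# only halves the standard error, so precision gets more expensive as it improves.
```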
Confidence Intervals For Means
The level of confidence you want determines how many standard errors above or below the mean you are willing to accept. Commonly used levels of confidence include 90%, 95%, and 99%. Which one of these is acceptable depends on how accurate you need your estimates to be.
In a normal distribution, we know that 95% of the values are within 1.96 standard deviations (a z-score of 1.96) above or below the mean.
Accordingly, if we have a random sample, we can be 95% confident (a 95% confidence interval) that the actual population mean is within 1.96 standard errors above or below the sample mean (x̄).

x̄ ± 1.96 * σx̄
Confidence interval is sometimes described in terms of margin of error. Confidence interval is the whole range of values above and below the mean. Margin of error is the difference between the mean and the bottom or top of the confidence interval (z-score times standard error).
Using a simulated sample of heights from 100 American men in this CSV file, the confidence interval based on standard error can be calculated in R:
> heights = read.csv("simulated-male-height.csv")$Inches
> print(heights)
63.3 63.6 63.8 64.1 64.3 64.5 64.8 65.0 65.2 65.4 65.6 65.7 65.9 66.0 66.1
66.2 66.3 66.5 66.6 66.7 66.8 66.9 67.0 67.1 67.2 67.3 67.4 67.5 67.6 67.7
67.7 67.8 67.9 68.0 68.1 68.1 68.2 68.3 68.3 68.4 68.5 68.6 68.6 68.7 68.8
68.8 68.9 69.0 69.0 69.1 69.2 69.2 69.3 69.4 69.5 69.5 69.6 69.7 69.8 69.8
69.9 70.0 70.1 70.2 70.3 70.3 70.4 70.5 70.6 70.7 70.8 70.9 71.0 71.1 71.2
71.3 71.4 71.5 71.6 71.7 71.8 72.0 72.1 72.2 72.3 72.4 72.5 72.7 72.8 73.0
73.2 73.4 73.6 73.9 74.1 74.3 74.6 74.8 75.1 75.3
> n = length(heights)
> stderr = sd(heights) / sqrt(n)
> moe = round(1.96 * stderr, 2)
> paste("Estimated average height for men = ", mean(heights), "inches +/-", moe, "inches")
"Estimated average height for men = 69.235 inches +/- 0.56 inches"
Using the same simulated sample of heights from 100 American men in this CSV file, this Excel formula can be used to calculate standard error:
=STDEV(A2:A101) / SQRT(COUNTA(A2:A101))
To calculate margin of error for means given:

- The standard deviation of the sample in cell B2
- The Z-score for a 95% level of confidence (z = 1.96) in cell B3
- The sample size in cell B4

=B3 * B2 / SQRT(B4)
Confidence Intervals for Proportions
If your sample data is dichotomous (e.g. Mac vs PC), or categorical data that can be expressed as dichotomous (e.g. people whose favorite fruit is apple), what you are estimating is the population proportion in each group (x% use Mac, y% use PC). In that case, the confidence interval is:
p ± Z * σp
based on the standard error for proportions:
σp = √(p * (1 - p) / n)
Z is the z-score for the desired confidence interval (1.96 for a 95% level of confidence)
p is the proportion from the sample
n is the size of the sample
For example, in a survey of 32 people asking what operating system their home computer or laptop uses, 13 (40.6%) said Mac OS and 19 (59.4%) said PC (Windoze). To estimate the proportion of Mac users in the general population:
σp = √(0.406 * (1 - 0.406) / 32) = 0.087
p ± 1.96 * 0.087
40.6% ± 17%
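The arithmetic above can be double-checked in a few lines of Python (an aside; the tutorial's own R and Excel versions follow):

```python
import math

p = 13 / 32   # sample proportion of Mac users (~0.406)
n = 32        # sample size
z = 1.96      # z-score for a 95% level of confidence

stderr = math.sqrt(p * (1 - p) / n)
moe = z * stderr
print(f"{p:.1%} +/- {moe:.0%}")  # 40.6% +/- 17%
```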
Using survey responses from this CSV file, we can calculate the confidence interval for proportions in R:
> # Read survey results
> os = read.csv("operating-system-survey.csv")$OS
> print(os)
Mac PC PC Mac PC Mac PC Mac PC PC PC PC Mac PC PC PC Mac Mac PC PC PC PC
Mac Mac Mac Mac PC PC Mac Mac PC PC
Levels: Mac PC
>
> # Get the proportion of people using a Mac
> mac = sum(os == 'Mac') / length(os)
>
> # Calculate standard error
> stderr = sqrt(mac * (1 - mac) / length(os))
> moe = 1.96 * stderr
> paste0("Estimated Mac users = ", round(mac * 100, 1), "% +/- ", round(moe * 100, 1), "%")
"Estimated Mac users = 40.6% +/- 17%"
Using survey responses from this CSV file, we can calculate the confidence interval for proportions in Excel.
When dealing with categorical data in Excel, you can use the COUNTIF() function to find the number of cells in your sample that match a particular value. The first parameter is the data and the second parameter is the condition. Note that the second parameter must be enclosed in quotes.
For example, with the CSV file linked above, the data in cells A2:A33 is either "Mac" or "PC". This will return the number of Mac users:

=COUNTIF(A2:A33, "Mac")

You can then get the proportion (percentage) of Mac cells with:

=COUNTIF(A2:A33, "Mac") / COUNTA(A2:A33)
If you put your proportion in cell B2 and your sample size in cell B3, the standard error for proportions is:
=SQRT(B2 * (1 - B2) / B3)
If you then put the Z-score for your level of confidence in cell B4 (for 95% confidence, this is 1.96), the Excel formula for margin of error for proportions is:
=B4 * SQRT(B2 * (1 - B2) / B3)
Z-Scores For Other Levels of Confidence
The Z-score for a 95% level of confidence is 1.96 standard errors.
In R, the qnorm() (normal curve quantile function) can be used to calculate the z-score for other confidence intervals. The math for the parameter is a bit confusing because it is based on margin of error rather than confidence level:
> confidence = c(0.99, 0.95, 0.9, 0.8)
> z = qnorm(1 - ((1 - confidence) / 2))
> paste0("Z scores for ", confidence * 100, "% confidence interval = ", round(z, 2))
"Z scores for 99% confidence interval = 2.58"
"Z scores for 95% confidence interval = 1.96"
"Z scores for 90% confidence interval = 1.64"
"Z scores for 80% confidence interval = 1.28"
In Excel, z-scores for other levels of confidence can be calculated with the NORMSINV() function. With the confidence interval in cell B2 (as a percent on a scale of 0 to 1):
=NORMSINV(1 - ((1 - B2) / 2))
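The same z-scores can be cross-checked with Python's standard library (an aside; `statistics.NormalDist` is available in Python 3.8+):

```python
from statistics import NormalDist

def z_for_confidence(confidence):
    # Two-sided interval: put half the leftover probability in each tail,
    # so look up the inverse CDF at 1 - alpha/2
    return NormalDist().inv_cdf(1 - (1 - confidence) / 2)

for c in (0.99, 0.95, 0.90, 0.80):
    print(f"{c:.0%} confidence: z = {z_for_confidence(c):.2f}")
```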
Confidence Intervals With Small Sample Sizes (the t-distribution)
When the sample size is 30 or fewer, the possibility of large errors increases and the distribution of errors follows a Student's t-distribution rather than a normal distribution.
The t-distribution also requires the number of degrees of freedom (the size of the sample minus one) for calculation.
The density graph compares the normal distribution with an analogous t-distribution with 10 degrees of freedom. Note that the tails of the t-distribution are taller, which represents the additional uncertainty associated with small samples compared to large sample sizes. As degrees of freedom increase, the t-distribution begins to approximate the normal distribution.
T is used instead of the z-score to calculate a margin of error for sample means or proportions:
x̄ ± t * σx̄
p ± t * σp
Confidence interval for means:
> # Simulated small sample
> set.seed(0)
> heights = rnorm(15, 69.2, 2.8)
>
> # 95% confidence (two-sided, so use the 0.975 quantile of t)
> degrees_freedom = length(heights) - 1
> t = qt(0.975, degrees_freedom)
> stderr = sd(heights) / sqrt(length(heights))
> moe = t * stderr
>
> paste("Estimated average height is", round(mean(heights), 1), "inches +/-", round(moe, 1), "inches")
"Estimated average height is 69.5 inches +/- 1.7 inches"
Confidence interval for proportions: Using the example above of a survey where 40% of respondents indicated using a Mac rather than a PC for their home computer or laptop, we can see the wider margin of error for proportions associated with a small sample:
> mac = 0.4
>
> # Small sample (t distribution; two-sided 95%, so use the 0.975 quantile)
> sample_size = 10
> stderr = sqrt(mac * (1 - mac) / sample_size)
> t = qt(0.975, sample_size - 1)
> moe = t * stderr
>
> paste0("Estimated Mac users (small sample) = ", round(mac * 100, 1), "% +/- ", round(moe * 100, 1), "%")
"Estimated Mac users (small sample) = 40% +/- 35%"
>
> # Large sample (normal distribution)
> sample_size = 100
> stderr = sqrt(mac * (1 - mac) / sample_size)
> moe = 1.96 * stderr
>
> paste0("Estimated Mac users (large sample) = ", round(mac * 100, 1), "% +/- ", round(moe * 100, 1), "%")
"Estimated Mac users (large sample) = 40% +/- 9.6%"
In Excel, the TINV() function can be used to calculate t for use with standard error to calculate a confidence interval.
To calculate the margin of error for means given:
- A level of confidence in cell B2
- The standard deviation in cell B3
- The sample size in cell B4
=TINV(1 - B2, B4 - 1) * B3 / SQRT(B4)
To calculate the margin of error for proportions, given the sample proportion in cell B2, the level of confidence in cell B3, and the sample size in cell B4:

=TINV(1 - B3, B4 - 1) * SQRT(B2 * (1 - B2) / B4)
Excel Confidence Interval Functions
As you might expect, Excel has functions to simplify confidence intervals for means. CONFIDENCE.NORM() can be used to calculate margin of error with sample sizes over 30 and CONFIDENCE.T() can be used with sample sizes of 30 or fewer. Both take the same parameters:
=CONFIDENCE.NORM(alpha, stdev, sample_size)
=CONFIDENCE.T(alpha, stdev, sample_size)

alpha = 1 minus level of confidence (0.05 for a 95% confidence level)
stdev = standard deviation of the sample
sample_size = count of values in the sample
For example, given a level of confidence in cell B2, the standard deviation in cell B3, and the sample size in cell B4, the margin of error (for a sample size less than 30) using the t-distribution function:
=CONFIDENCE.T(1 - B2, B3, B4)
Excel does not have a convenience function for confidence interval for proportions.
How Big a Sample Do I Need to Have the Confidence I Want?
Using simple algebra, we can transform the formulas for margin of error to find the sample size (n) that we need to get the margin of error (E) that we are able to tolerate:
For population mean estimates:
n = (Z * s / E)²
For population proportion estimates:
n = Z² * p * (1 - p) / E²
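Both formulas are one-liners in Python (a sketch; the standard deviation of 2.8 inches and the 40% proportion are borrowed from this tutorial's earlier examples):

```python
import math

def n_for_mean(z, s, e):
    """Minimum sample size to estimate a mean to within +/- e."""
    return math.ceil((z * s / e) ** 2)

def n_for_proportion(z, p, e):
    """Minimum sample size to estimate a proportion to within +/- e."""
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

print(n_for_mean(1.96, 2.8, 1))           # height to +/- 1 inch, SD ~2.8: 31
print(n_for_proportion(1.96, 0.4, 0.05))  # Mac share to +/- 5 points: 369
```

Rounding up with `ceil` ensures the sample is at least large enough for the requested margin of error.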
Estimating s and p
For the mean formula, you need an estimate of the population standard deviation (s), and for the proportion formula you need the sample proportion (p). But since you have not yet done the sampling, you cannot know these values.
There are three imperfect but practical ways to estimate these values:
- Two-stage sampling design involves doing a preliminary survey to get an estimate of the standard deviation (s) or the proportion (p)
- For means or proportions, when available, you can use results from a prior survey or estimate
- For proportions you can use p = 50% as a worst case scenario. This causes the p * (1 - p) part of the formula to be its maximum possible value so the sample size is the largest possible value
Note that the examples below presume a random sample from the entire population you are basing the estimate upon. If you are trying to make estimates about Americans, your random sample would need to be drawn from a list of all Americans with 100% participation. Drawing a perfectly random sample from anything other than a trivial or captive population is usually impossible, requiring more sophisticated modeling techniques to adjust the results and compensate for segments of the population that were undersampled.
Example Sample Size Estimation in R
Following the examples using male height above, suppose you wish to get an estimate for female height ±1 inch (E = 1) with 95% confidence (Z = 1.96). For s you use an estimated standard deviation of 2.8 inches (the value used for the simulated heights above):

> z = 1.96
> s = 2.8
> maxerr = 1
> n = (z * s / maxerr)^2
> paste("Minimum sample size =", ceiling(n))
"Minimum sample size = 31"
Following on the proportions example above, suppose you wish to get a better estimate of the percent of Americans that use Macs as their personal home computers or laptops ±5% (E = 0.05) at a 95% level of confidence (Z = 1.96). Using the estimated proportion of 40% from the survey given above:

> z = 1.96
> p = 0.4
> e = 0.05
> n = (z^2) * p * (1 - p) / (e^2)
> paste("Minimum sample size =", ceiling(n))
"Minimum sample size = 369"
Example Sample Size Estimation in Excel
Following the examples using male height above, suppose you wish to get an estimate for female height ±1 inch (E = 1) with 95% confidence (Z = 1.96), again using an estimated standard deviation of 2.8 inches. The POWER() function is used for the exponent; round the result (about 30.1) up to 31:

=POWER(1.96 * 2.8 / 1, 2)
Following on the proportions example above, suppose you wish to get a better estimate of the percent of Americans that use Macs as their personal home computers or laptops ±5% at a 95% level of confidence (Z = 1.96). Using the estimated proportion of 40% given above:
=POWER(1.96, 2) * 0.4 * (1 - 0.4) / POWER(0.05, 2) | <urn:uuid:e6b7fc10-d87b-4774-a30e-ea8f2e95a2b2> | CC-MAIN-2018-26 | http://michaelminn.net/tutorials/r-sampling/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864958.91/warc/CC-MAIN-20180623113131-20180623133131-00101.warc.gz | en | 0.864857 | 4,678 | 3.859375 | 4 |
Diagnosing liver disease may involve liver function tests, a liver biopsy and more advanced forms of imaging. Read more.
Hepatitis is inflammation of the liver. It can cause liver damage, affecting its vital functions. It is often caused by various forms of hepatitis viruses; the most common in the U.S. are hepatitis A, B, and C. People can also get inflammation of the liver from heavy alcohol use, toxins, some medications, and some medical conditions, such as diabetes and obesity. Learn more about the different types of Hepatitis:
Fatty liver disease is a condition in which excess fat is stored inside liver cells, making it harder for the liver to function. One cause of fat buildup in the liver is heavy alcohol use, referred to as alcoholic fatty liver disease. This is a common, but preventable disease and is the earliest stage of alcohol-related liver disease. Read more about the different stages of alcohol-related liver disease.
When the buildup of fat in the liver is not related to significant alcohol consumption, the condition is called nonalcoholic fatty liver disease (NAFLD).
Liver transplantation is a surgical procedure performed to remove a diseased or injured liver from one person and replace it with a whole or a portion of a healthy liver from another person, called the donor.
Helpful resources for finding a clinical trial are available online.
Eating well is a great lifestyle change that can help your liver function at its fullest potential. Changes to your diet can include limiting fats and sugars while increasing consumption of fruits, vegetables, lean meats and whole grains.
Last updated on July 28th, 2022 at 03:11 pm | <urn:uuid:716df382-e7da-437d-9267-2395e5865622> | CC-MAIN-2022-40 | https://liverfoundation.org/about-your-liver/frequently-asked-questions/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00182.warc.gz | en | 0.949216 | 347 | 3.546875 | 4 |
Weigela is a flowering perennial shrub that is hardy in USDA zones 4 through 8. It is prized for its spring and summer flowers, whose fragrance resembles that of honeysuckle. Commonly planted as hedges, foundation plantings or in beds and borders, weigela is low-maintenance and thrives in sunny sites and moist soil. Preparation for winter will help ensure that the plant emerges in the spring healthy and with minimal damage.
Water your weigela shrub deeply several times in the fall, well before the first hard frost, to help the roots withstand winter drought conditions. Water a few times during winter if conditions are dry. Water only on a sunny day, in the mid-morning, when the surface of the soil is warmed. Resume a regular irrigation schedule in the spring after the last hard frost.
Mulch around the base of your weigela with an organic material laid down in at least a 3-inch-thick blanket. Use pine straw, leaf mold or shredded bark to insulate the roots from cold and drought. Reapply a fresh layer each year in the fall as winter approaches.
Prune back any branches that did not make it through the winter, in the spring, after the last hard frost has passed. Cut down to a point of healthy wood or down to the crown of the plant. Use loppers for small-diameter branches and a pruning saw for larger branches.
Pull all of the loose cuttings from the canopy and collect any debris that has accumulated on the soil below, to prevent the disease and insect prone conditions that rotting plant tissues provide. Burn, chip, compost or otherwise discard the cuttings as you prefer. | <urn:uuid:e41ccd6d-3864-4cd6-97b1-48f4ed815ce1> | CC-MAIN-2016-30 | http://www.gardenguides.com/104873-care-weigela-winter.html | s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826773.17/warc/CC-MAIN-20160723071026-00097-ip-10-185-27-174.ec2.internal.warc.gz | en | 0.929633 | 350 | 2.65625 | 3 |
During Holy Week, it is Christian tradition to trace the pathway which Jesus took towards Jerusalem, sometimes following the stories recounted in Mark 11-14. In the city of Jerusalem, Jesus was arrested, crucified and died; in this city, for untold years, pilgrims had gathered in festive celebration, to remember, to retell the stories, to nurture their faith, to seek the Lord.
In Jewish tradition, the pilgrims travelling towards the city would join in songs—some of which are included within the book of Psalms in Hebrew Scripture and Christian Bibles. On their journey towards the city, according to this tradition, the pilgrims would sing Psalms 120—134. These are known as The Songs of Ascent, for they were sung as the pilgrims climbed higher towards the city, and then higher still towards the Temple at the highest point in the city.
This series of blogs use these ancient songs as the focus for reflecting, to envisage what that journey was like for Jesus and his followers, travelling as pilgrims to the city to celebrate Passover.
It was during that week that everything came to a head.
A gathering of friends and family; a joyful occasion, with exuberant celebration, meeting up after months or years in our own villages. We had walked with other pilgrims, heading towards the city, climbing the road, singing the psalms, looking forward to the festival.
Each step closer to the city was a step that brought us closer to the heart of our faith. Each step along the way was a step that brought us higher, nearer to the holy mount. Each stage along the way was matched with a psalm of ascent, singing with joy as we drew near to the holy place.
So we sang, together:

Those who trust in the LORD are like Mount Zion,
which cannot be moved, but abides forever.
As the mountains surround Jerusalem,
so the LORD surrounds his people,
from this time on and forevermore. (125)

In the silence, reflect on Psalm 125
When the LORD restored the fortunes of Zion,
we were like those who dream.
Then our mouth was filled with laughter,
and our tongue with shouts of joy;
Those who go out weeping,
bearing the seed for sowing,
shall come home with shouts of joy,
carrying their sheaves. (126)

In the silence, reflect on Psalm 126
And then, we were at the foot of the holy place, the Temple first built by Solomon, then rededicated and rebuilt in the time of Herod; the Temple where the Lord God dwelt, where he dwelt in the Holy of Holies.
So we sang:

Unless the LORD builds the house,
those who build it labour in vain.
Unless the LORD guards the city,
the guard keeps watch in vain. (127)

In the silence, reflect on Psalm 127
A gathering of friends and family; a joyful occasion, with exuberant celebration. We had walked with other pilgrims, heading towards the city, climbing the road, singing the psalms, looking forward to the festival.
Each step closer to the city was a step that brought us closer to the heart of our faith. Each step along the way was a step that brought us higher, nearer to the holy mount. Each stage along the way was matched with a psalm of ascent, singing with joy as we drew near to the holy place. So we stepped out, full of faith, on our journey to Jerusalem.
It was during that week that everything came to a head.
Water is on our mind, on the east coast of Australia, at the moment. Widespread flooding has occurred. Houses and businesses in many seaside locations, as well as in inland flood plains beside rivers, have been inundated by rising waters. People have been evacuated, some were stuck away from home, and some now have no home to return to and live in.
The power of water has been on display all around us. Constant sheets of wind-driven rain have fallen across hundreds of kilometres on the eastern coast of Australia. Surges of creek and river waters created currents that moved vehicles—even houses—and spread across flood plains, invading domestic and industrial spaces in towns and suburbs. Crashing ocean waves menaced beaches and cliff-faces, and currents swirled fiercely in the ocean.
We stand in awe and trepidation before the power of water—just as, a little over a year ago, we stood in awe and trepidation as roaring fires swept through bushland, invaded towns and suburbs, and wrought widespread and long-lasting damage. Then, we pondered, as now, we reflect on what this manifestation of “Nature, red in tooth and claw” means for us, as people of faith. (See https://johntsquires.com/2020/01/12/reflecting-on-faith-amidst-the-firestorms/)
Is this a demonstration of divine power in the pouring rain and rising floodwaters? Is this, somehow (as some would maintain), God declaring judgement on human beings, for our sinful state and rebellious nature?
I have been looking at a range of public commentary on the floods. One church website (not Uniting Church) includes these statements: “[These] devastating floods are not to be considered as an act of judgement upon our world, but instead, a warning to repent. Whether it’s drought, bushfire, flood or pandemic, these disasters are an important time for us all to consider Christ in the crisis. As we pray for the recovery of our land from these devastating floods, let us also pray that through this disaster might be a fresh opportunity for people to find eternal comfort and security in Christ Jesus.”
This appears to understand the floods as God seeking to make human beings respond with an act of faith in Jesus. Whilst ancient understandings may have made this kind of immediate connection between an event in nature and the intentions of God, we cannot make such a simple link. It’s much more than just “flood—warning—repentance—faith”. We need to reflect more deeply.
Water in our bodies helps us to form saliva, regulate body temperature through sweating, contribute to the brain’s manufacturing of hormones and neurotransmitters, lubricate our joints, and enable oxygen to be distributed throughout the body. Water facilitates the digestion of food, and the waste that is produced in our bodily systems is regularly flushed out as we pass urine. And we use water every day, to wash away solid bodily waste, to clean our hair and skin, to wash our clothes and keep our kitchen utensils clean.
Water is also a source of enjoyment: sitting on the beach, watching the powerful rhythmic surge of wave after wave; sitting beside the babbling brook, appreciating the gentle murmuring of running water; sitting beside the pool, listening the the squeals of delight as children jump into the water, splashing and playing with unrestrained glee.
The power of the ocean, of course, has often drawn the attention of human beings. We are reminded of this when swimmers are caught in rips and transported rapidly out into the ocean, or towards the jagged rocks at the edge of the beach. Sadly, the son of a friend was caught in a rip one day a few years ago. His two companions were rescued; the body of our friend’s son has never been found. The power of the ocean, whipped up by the wind, can be intense and unforgiving.
Water makes regular appearances in the Bible. It is a key symbol throughout scripture. It appears in the very first scene, when the priestly writer tells how, “in the beginning … the earth was without form and void … and a wind from God was hovering over the face of the waters” (Gen 1:1-2).
It also appears near the very end of the last book of scripture, where the exiled prophet reports that “the Spirit and the bride say, ‘Come.’ And let everyone who hears say, ‘Come.’ And let everyone who is thirsty come. Let anyone who wishes take the water of life as a gift” (Rev 22:17).
Water flows throughout the scripture as a central image, appearing another 720 times in the intervening pages of scripture. Water enables healings to occur, for instance (Namaan, commander of the army of the king of Aram, in 2 Kings 5; the man by the pool at the Sheep Gate in Jerusalem, in John 5).
To the people of Israel, as they retold their foundational myth of the Exodus and the subsequent forty years of wandering in the wilderness, the gift of water was a sustaining grace. Parched by desert thirst, the Israelites cried out for water, Moses struck the rock, and water flowed (Exod 17:1–7; Num 20:2-13). Rivers flowing with water then provided food for the people living in the land—the fish of the waters (Deut 14:9; Lev 11:9), alongside the beasts of the land and the birds of the air (Ezek 29:3-5; Deut 14:3–20; Lev 11:1–45).
Flowing water—“living water”—is one of the images adopted in John’s account of Jesus, to explain his role within the society of his day: “Let anyone who is thirsty come to me, and let the one who believes in me drink. As the scripture has said, ‘Out of the believer’s heart shall flow rivers of living water” (John 7:37–38).
The precise scriptural quote is unclear—commentators suggest that the reference may be to Prov 18:4 (“the fountain of wisdom is a bubbling brook”), or Zech 14:8 (“living waters shall flow out of Jerusalem”), or Psalm 78:16 (“[God] made streams come out of the rock, and caused waters to flow down like rivers”), or Rev 22:1–2 (“the river of the water of life, bright as crystal, flowing from the throne of God and of the Lamb through the middle of the street of the city”). The uncertainty as to the precise reference alerts us, however, to the many instances where “living water” is mentioned.
The imagery of water was used, in addition, in earlier stories in this Gospel. To the request of the woman of Samaria at the well, “give me some water”, Jesus replies, “If you knew the gift of God, and who it is that is saying to you, ‘Give me a drink,’ you would have asked him, and he would have given you living water” (John 4:7–10).
To the crowd beside the Sea of Galilee, who asked, “Sir, give us this bread always”, Jesus replied, “I am the bread of life. Whoever comes to me will never be hungry, and whoever believes in me will never be thirsty” (John 6:34–35). Water is powerfully creative, restorative, empowering.
Water also threatens destruction: witness the paradigmatic stories of the Flood (Gen 6:1–9:17) and the Exodus from Egypt (Exod 14:1–15:21, retold in Psalms 78 and 105). The destructive power of massive flows of water is evident in both of these stories: water falling from the heavens (Gen 7:4, 12) in one version of The Flood story, water rising from The Deep in an alternate version (Gen 7:11, 8:2).
Although (as we noted above), the gift of water was a sustaining grace to the people of Israel as they wandered in the wilderness, from the time of settlement in the land of Canaan, the Great Sea to the west of their lands (what we know as the Mediterranean Sea) was seen as a threat. In the sea, Leviathan and other monsters dwelt (Ps 74:13-14; 104:25–26; Isa 27:1).
The Exodus was made possible because the waters of the Red Sea had caught and drowned the Egyptian army (Exod 14:23–28); this unleashing of destructive divine power was celebrated by the escaping Israelites in victory songs (Exod 15:2–10, 19–21), in credal remembrance (Deut 11:2–4; Josh 24:6–7), and in poetic allusions in psalms (Ps 18:13–18; 66:6; 77:18–20; 78:13, 53; 106:8–12; 136:10–16).
In like manner, the waters in The Flood caused almost complete annihilation of living creatures on the earth (Gen 6:12–13, 17); only the family of Noah and the animals they put onto the Ark were saved from the destructive waters (Gen 6:19–21 indicates “two of every sort”, whilst Gen 7:2–3 refers to “seven pairs of all clean animals … and a pair of the animals that are not clean”).
Both the creative power of water, and destructive capabilities of water, led the people of Israel to ascribe power to God over the seas and the rivers. The Psalmist affirms of God that “the sea is his, for he made it, and the dry land, which his hands have formed” (Ps 95:5).
Accordingly, the Lord God, who “made heaven and earth, the sea, and all that is in them (Ps 146:6), was seen as able to “rule the raging of the sea; when its waves rise, you still them” (Ps 89:9). God’s power over creation is also expressed through flooding: “The floods have lifted up, O LORD, the floods have lifted up their voice; the floods lift up their roaring. More majestic than the thunders of mighty waters, more majestic than the waves of the sea, majestic on high is the LORD!” (Ps 93:4).
In our current context, such words are deeply troubling. Can it be that God is exercising divine judgement through the increased rainfall and rising floodwaters currently being experienced? There are two problems with this point of view, both with an inherently theological note to be sounded.
The first relates to the nature of God, and how God interacts with the created world. The ancients had a view that God was an interventionist God, directly engaging with the created world. When something happened “in nature” (like a birth, a death, a flood, a fire, an earthquake, etc), it was seen to be directly attributable to God. It simply happened “to” human beings.
Contemporary scientific and sociological views, however, would provide much more room for human agency. When things happen, what contribution does the human being (or an animal of some kind) have in the process? We would want to say that events that take place do not “just happen”; they are shaped by the actions of human beings in history, by our intention and interaction.
So, the second element I see as integral to understanding the current situation, theologically, is the contribution that human beings have made to the current environmental situation. Why are floods occurring more regularly, and with more intensity, in recent times? The answer is, simply, that we are seeing the effects of climate change right around the earth.
We human beings know this. We have known it for some decades, now. Yet policy makers bow to the pressures and enticements they receive from vested interests in business, pressing and bribing to ensure that their businesses can continue—even though those businesses contribute the greatest proportion to the rise in temperature.
For every one degree Celsius that temperature rises, the atmosphere holds 7% more water. Given the right atmospheric conditions (such as we have seen develop in the last week), that water will get dumped somewhere—in recent times, that has been over much of the east coast of Australia, in massive amounts.
And it is obvious to thinking human beings, that how we have lived, how we have developed industries, how we have expanded international travel, how we have expanded the transportation of food and other goods around the globe, how we have mined deeper and wider to find fossil fuels to sustain this incessant development, has all contributed to that rise in temperature.
Certainly, a fundamental human response to the tragedies we have seen unfolding around us through the rainfall and flooding, is one of compassion. Compassion for the individuals who have borne the brunt of the damage that has occurred.
Compassion and thankfulness for the emergency services personnel and others who have spent countless hours in assisting those caught by the floods. Compassion and careful listening provided by Disaster Recovery Chaplains in many evacuation centres.
Compassion, practical support, and prayerful support for all who have been affected by these events, is fundamental.
Yet whilst the massive rainfall and the high floods are the processes of nature at work around us, we know that we have intensified and exacerbated them. And we see tragic results in the rivers that have surged and flooded in recent days—just as the same instability in the earth’s system has generated more intense and more frequent cyclones, created more intense and more frequent fires, warmed the oceans and melted the edges of the polar caps, and caused other observable events around the world.
This past week, there have been two opportunities for us to remember what we are doing to the planet—opportunities to commit to a different way of living in the future. The first was Australia’s Overshoot Day, on 22 March. This is the day that Australia has used up its yearly allocation of the earth’s resources. What should have taken 365 days has taken Australians 81 days. You can read about this at https://www.insights.uca.org.au/overshoot-day-and-a-theology-of-creation/
So, in the midst of the increased and more intense cyclones, and more regular meltings, and bleachings of coral, and eruptions of fire storms, and flooding of plains, God is communicating with us: the world cannot go on like this, the planet cannot sustain our incessant disregard for its natural ways.
So let’s not blame God for dumping all that water and flooding all those homes and businesses. Let’s look closer to home, and consider how, in the years ahead, we can adjust our lifestyle, reduce our carbon footprint, live more sustainably, and treat God’s creation with respect and care. | <urn:uuid:5f35e32b-d817-4a4c-bec7-818fea9c13d8> | CC-MAIN-2022-21 | https://johntsquires.com/2021/03/30/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543797.61/warc/CC-MAIN-20220522032543-20220522062543-00305.warc.gz | en | 0.962803 | 3,912 | 3.125 | 3 |
Detector used to measure the projector's response to changing inputs
This page details the method I used to optimize the gamma tracking of my projector. The basic idea came from Steve Smallcombe's VPL-VW10HT page. If you want to try this and don't want to build your own detector, they are available from EnhancedHT.
What is Gamma?
In an ideal world, when a display device (like a TV or projector) is given an input signal of a given level, say 1/2 of the maximum value, the display would then produce an image that was 1/2 of the maximum picture intensity. This would be a nice linear relationship. Input a value of 1/4, get 1/4 out.
Unfortunately, CRT display devices like TV sets and computer monitors do not behave this way. Instead there is a non-linear relationship between the input and output. This relationship is described by a power function whose exponent is the gamma of the display. To compensate for this, video cameras and video transfer operators apply the inverse of this gamma function to make everything come out nice in the end. The gamma used in video production is 2.2 (Output = Input ^ 2.2, approximately).
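The power relationship and its inverse are easy to sketch in code. This is a minimal illustration with function names of my own choosing; real video encoding uses piecewise transfer curves (e.g. Rec. 709) rather than a pure power law, but the pure-power version shows the idea:

```python
def display_output(signal, gamma=2.2):
    """Display transfer function: relative luminance for a normalized (0-1) input."""
    return signal ** gamma

def camera_encode(luminance, gamma=2.2):
    """Inverse (encoding) gamma applied in production, so the whole chain is linear."""
    return luminance ** (1.0 / gamma)

# The two cancel out: a mid-grey scene value survives the chain unchanged.
scene = 0.5
signal = camera_encode(scene)    # ~0.73 after encoding
shown = display_output(signal)   # back to ~0.5 on the display
```

When the display's exponent drifts away from 2.2, the cancellation fails, which is exactly what the measurements on this page are checking for.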
If a display device does not have the proper gamma response, the picture will be less than satisfactory. If the gamma is too low, the picture will have too much contrast and will lose detail. If it is too high, the picture will be washed out.
LCD projectors do not have an inherent gamma response the way CRTs do, so they must be designed and set up correctly to mimic the proper gamma curve.
Digital multimeter measures changes in photocell resistance relative to the incident light level
Complete detector, made of a piece of rolled black foam and electrical tape with photocell mounted at back.
Photocell used: Radio Shack part 276-1657, 5-piece assortment. I used one of the smaller photocells. Photocells are really just light-sensitive resistors: as the level of incident light increases, photocell resistance decreases. Photocells have a non-linear response to light; the cell I used had a gamma of 0.7.
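Because the cell's own response is non-linear, a raw resistance reading has to be linearized before it says anything about light level. Here is one way that could be done, assuming the common photoresistor model R = R_ref * L^(-gamma); the reference-resistance parameter is a stand-in for a real calibration reading, not a value from this setup:

```python
def relative_light(resistance_ohms, ref_resistance_ohms, cell_gamma=0.7):
    """
    Convert a photocell resistance reading to a relative light level.
    Assumed model: R = R_ref * L^(-cell_gamma), hence L = (R_ref / R)^(1/cell_gamma).
    ref_resistance_ohms is the reading at whatever level you define as 1.0
    (e.g. a full-screen 100 IRE white field).
    """
    return (ref_resistance_ohms / resistance_ohms) ** (1.0 / cell_gamma)

# Halving the resistance corresponds to well over double the light,
# because of the cell's 0.7 exponent.
print(relative_light(500.0, 1000.0))   # ~2.69
```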
Using the Do-It-Yourself (DIY) detector and digital multimeter described above, and the graduated IRE windows on Title 17 of Video Essentials, I measured the projector's response to each input level. You can also use the Avia test DVD.
Tests were done with the CC40R color correcting filter in place.
What is IRE? IRE units are the scale defined by the Institute of Radio Engineers to measure the amplitude of a video signal. 0 IRE is black, and 100 IRE is white.
The Initial Projector Settings Plot shows an ideal 2.2 Gamma curve (in blue with diamonds) and the projector's actual curve (in pink with squares).
What does this tell us? The overall shape is close, but the gamma level is too high, which means that the picture will look a bit washed out. Details in bright scenes may be lost. Also note the dogleg turn at about 90 IRE, which suggests that one or more colors (Red, Green or Blue) may be overdriven, with output flattening out before 100 IRE (brightest white) is reached.
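From a table of such readings, the effective gamma exponent can be estimated with a least-squares fit in log-log space. A sketch (the function is my own; it assumes the readings are normalized so the 100 IRE measurement equals 1.0, and 0 IRE points are excluded because log(0) is undefined):

```python
import math

def fit_gamma(ire_levels, luminances):
    """
    Least-squares slope of log(luminance) vs. log(input), constrained
    through the origin (100 IRE -> luminance 1.0). The slope is the gamma.
    """
    xs = [math.log(ire / 100.0) for ire in ire_levels]
    ys = [math.log(lum) for lum in luminances]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Sanity check against synthetic readings from an ideal 2.2 display:
ire = [10, 20, 30, 40, 50, 60, 70, 80, 90]
lum = [(i / 100.0) ** 2.2 for i in ire]
print(round(fit_gamma(ire, lum), 3))  # → 2.2
```

A fitted value noticeably above 2.2 quantifies the washed-out look; a single exponent will not capture a dogleg like the one near 90 IRE, which is why the per-channel drive adjustments described below are still needed.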
Gamma Curve Comparison:
Just for fun, and to see if the methodology used would detect the difference, I repeated the measurements with the projector's alternate gamma setting. This setting is intended for use when displaying text for presentations, so I didn't expect much.
As you can see by the radical departure from the 2.2 Gamma response, this setting stinks for video. Watching some test footage proved this. Details in light scenes were crushed out of existence, overall dark scenes were mostly okay, but sunlit scenes looked washed out and dull.
*Note to self: DO NOT USE THIS SETTING!! EVER!
Adjustments and Validation
To improve on the gamma response found in the first plot; I reset the Theater Tek DVD player video adjustments to their default state. The Theater Tek default video settings are supposed to correspond to a scoped 0-700mV output on the Radeon graphics card in the HTPC.
After resetting the defaults, I used Video Essentials to adjust the brightness and contrast on the projector the best that I could. I ended up needing to adjust the Theater Tek player's contrast down a bit in order to be able to properly adjust the brightness on the projector (the projector was clipping the input). This costs a bit of headroom, but I couldn't find another way to do it.
I also adjusted the color bias and gain using the service menu, and tweaked the program drive level for the LCD panels a bit as well. If you really want to know the gory details of what was done, look here.
The results can be seen in the Corrected Settings plot. Really tracks nicely. The visual results are also easy to see. The overall picture level seems lower but the contrast is still there and the black level seems improved. The highlights on peoples faces seem smoother, less stark, more natural.
The combined results of the filter and calibration please me greatly. With the improvements in black level, color rendition, and the resulting improvements in picture depth, it's like getting a new projector.
The Unfolding Saga of the Fourth Industrial Revolution
Anchored in the heart of innovation and transformative technology, the Fourth Industrial Revolution, also known as Industry 4.0, presents an exciting new chapter in the history of human advancement. This epoch is distinguished by a fusion of technologies that blur the lines between physical, digital, and biological spheres, where interconnected devices play a starring role.
Interconnected Devices: The Sinews of Industry 4.0
Interconnected devices, or the Internet of Things (IoT), form the fundamental fabric of the Fourth Industrial Revolution. These “smart” devices, ranging from household appliances to industrial machines, are equipped with sensors, software, and other technologies that allow them to connect and exchange data with other devices and systems over the internet.
This enhanced connectivity gives rise to unprecedented levels of communication, system integration, and real-time data exchange. It’s a symphony of interactions that offers vast opportunities for efficiency, personalization, and innovation in diverse fields from manufacturing to healthcare, agriculture to urban planning.
A New Paradigm in Manufacturing
The impact of interconnected devices is perhaps most visible in the manufacturing sector, where the concept of the ‘smart factory’ is taking root. In these dynamic environments, machinery and equipment armed with sensors can relay real-time data about their performance, maintenance needs, and operational efficiency.
This continuous stream of information enables predictive maintenance, reducing downtime, and enhancing productivity. Furthermore, it allows for a modular approach to production where systems can self-organize and adapt to changes or faults in real-time, fostering a more flexible and resilient manufacturing process.
Transforming Healthcare with Interconnectivity
In healthcare, interconnected devices are revolutionizing how we monitor, manage, and enhance well-being. Wearable devices can track vital health metrics, while more sophisticated IoT-enabled medical devices allow for remote patient monitoring, reducing hospital visits and enabling more personalized care.
Additionally, real-time data sharing between medical devices can facilitate more precise diagnostics, more efficient resource use, and the opportunity for AI-based analytics that could unlock new insights in disease prevention and treatment.
Interconnected Devices and Sustainable Cities
The power of interconnected devices extends into the realm of urban planning and smart cities. Here, IoT technologies provide real-time insights into various aspects of city life, from energy usage and traffic patterns to air quality and waste management.
These insights enable more efficient use of resources, improved services, and a more sustainable approach to urban living. By integrating various city systems, from transportation to utilities, a holistic and more sustainable approach to urban management can be achieved.
Security Considerations in an Interconnected World
As interconnected devices become more prevalent, the matter of security becomes increasingly important. Protecting these devices from cyber threats and ensuring the privacy of the vast amounts of data they generate is paramount. The challenge lies in creating robust security protocols that can keep pace with the rapidly evolving technological landscape.
Despite the complexities, the potential benefits of interconnected devices in the Fourth Industrial Revolution are vast. They offer a pathway to a world of increased efficiency, sustainability, and personalization. As we continue to embrace these advancements, we’re not just spectators to the Fourth Industrial Revolution, but active participants shaping our collective future. | <urn:uuid:7106eea4-c9f3-4349-b79b-c7fc08147e07> | CC-MAIN-2023-50 | https://4ir.co.za/ | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679516047.98/warc/CC-MAIN-20231211174901-20231211204901-00883.warc.gz | en | 0.901415 | 667 | 3.015625 | 3 |
Contrary to popular belief, artificial intelligence (AI) is actually creating many new jobs. According to Forbes, eighty percent of companies said that AI led to the creation of new jobs. Although this obviously varies based on industry, and some industries will be adversely affected, two-thirds of respondents said that AI did not lead to a decrease in jobs.
So which jobs are being created by AI? Following are just some of them, according to the MIT Sloan Management Review.
Trainers will be needed to teach artificial intelligence how to operate. This is especially important when it comes to customer service. Empathy trainers will be needed to teach Alexa and other chat bots how to respond with the proper empathy in certain situations. Explainers will also be needed to help explain to nontechnical business leaders how artificial intelligence operates and how it can be used.
Of course, there will also be a need for sustainers. Humans will be needed to make sure that artificial intelligence is working the way it is supposed to be and to make the necessary changes when needed. Many businesses do not have full confidence in letting artificial intelligence run on its own course without any monitoring.
In addition, according to the CEO of Microsoft, artificial intelligence can help people with certain disabilities do more jobs than they were previously able to.
Of course, there will be a need for specific professionals to operate AI. There will be a need for engineers. There will be a need for copywriters to write the scripts for chatbots. | <urn:uuid:d6293708-036a-47af-bdce-a3bd6b231b03> | CC-MAIN-2019-09 | https://www.common.org/artificial-intelligence/creating-jobs/ | s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247489425.55/warc/CC-MAIN-20190219061432-20190219083432-00585.warc.gz | en | 0.965117 | 300 | 2.8125 | 3 |
No matter where you are in your health journey, it’s always a good time to check in and correct those sneaky habits you may have picked up along the way that are “supposed” to be good for you, but actually do more harm than help. Here’s five that will surprise you:
1. Brushing after every meal
Your dentist hasn’t been lying to you all these years, we promise! Brushing after meals is a healthy habit that helps prevent plaque build up and other dental disasters, but the key is to brush at the right time. Experts recommend waiting at least 30 minutes after eating highly acidic foods before you brush. The acid from foods like citrus fruits and soft drinks can soften your tooth enamel and speed up the acid’s erosion effects.
2. Drinking bottled water
You MUST drink water! Drinking at least half your body weight in ounces daily is ideal for overall wellness. However, reconsider drinking bottled water. The plastic containers may cause some bottled water to become tainted with chemicals and bacteria. Invest in an aluminum or BPA-free water bottle, or a water filter for your tap.
3. Squatting over toilet seats
The idea of sitting on a public toilet seat is gross, but according to one doctor, squatting may increase women’s risk of urinary tract infections. “Squatting causes the pelvic muscles to contract and tighten around the urethra,” says Elizabeth Kavaler, MD, urologist. This prevents the bladder from emptying all the way, setting up a perfect environment for UTIs. Instead of squatting, use the paper toilet liners most restrooms provide (or line with tissue) and have a seat.
4. Sitting up straight
Good posture is good for your health, but sitting up straight can also be bad for your back. Researchers found that sitting in a reclined position is best because it takes stress off of the spinal disks in the low back. Too much sitting – “sitting disease” – is also common and has been linked to serious health issues like heart disease. If you sit for long periods of time, take breaks to stand and walk around.
5. Catching up on sleep
Most of us are living for Friday just so we can rest and sleep on the weekends, but trying to catch up on sleep is like trying to catch the wind: you can’t! “It’s a myth that if we miss sleep over the course of the workweek we need to catch up on an hour-for-hour basis on the weekend,” says Gregory S. Carter, MD, of UT Southwestern Medical Center. The best thing you can do is get yourself in the habit of going to sleep earlier during the week and getting at least seven to nine restful hours of sleep a night.
This article by Dr Dolly Theis, then a PhD student with the Unit’s Population Health Interventions programme, was shortlisted for the 2022 Max Perutz Science Writing Award. You can read all ten shortlisted and winning articles here.
It’s April 2020. The Prime Minister Boris Johnson has just left hospital where he was in intensive care with serious Covid-19-related health problems. Evidence about who is more vulnerable to Covid-related complications and death is still emerging. However, one thing that appears to be clear is that people living with obesity are at a greater risk of Covid-19 related hospitalisations, serious illness and death. Having been sceptical of strong government intervention on diet and obesity just one year prior, the Prime Minister’s close encounter with death is catalytic and he decides that the government must do something about obesity. In July 2020, his government publishes an obesity strategy.
This year, 2022, officially marks three decades of government obesity strategies in England. The first was published in 1992 and it included some ambitious population obesity reduction targets. Needless to say, these were not met. In fact, in this time the obesity prevalence has actually increased from 13% of men and 16% of women living with obesity in 1993 to now more than a quarter of adults (27% of men and 29% of women) in 2019. How and why has this happened? How can government obesity policy have failed so badly after all these years? And what are the consequences of this epic policy failure?
Fuelled by these questions, I analysed the 14 obesity strategies for England that have been published since 1992, which collectively contain no less than 689 policies. My research found that successive governments have failed to successfully reduce the obesity prevalence and related inequalities not only because of the policy ideas proposed, but also because of the way they have been proposed.
The hundreds of different policy ideas to tackle obesity and the related inequalities include school food and curriculum changes, guidance and standards for the food industry, provision of healthy food vouchers for low-income families, and a weighing and measuring programme for primary school-aged children. However, research shows that the largest proportion of the policy ideas are unlikely to be effective or equitable. For example, information campaigns have remained very popular with the government. The thinking being that government publishes dietary advice, people engage with it, they change their behaviour, and then ultimately their health and weight improves. However, this is unlikely to work for most people because individual behaviour change is tremendously difficult, especially long-term and especially when you live in conditions or face circumstances that make such change very hard. Evidence shows that shaping the environment and other key external influences to make it easy for people to enjoy a healthy life is much more likely to be effective and equitable. And yet, a much smaller proportion of the government’s obesity policies have focused on doing this.
My research also found that the government has tended to propose policies in a way that makes it unlikely they will be implemented. I identified seven key pieces of information necessary for effective implementation, but only 8% of policies fulfilled all seven criteria, versus the largest proportion of policies (29%) that were proposed without a single one. Only 9% of policies were proposed with a cost or allocated budget, 19% with any cited scientific evidence upon which the policy was based, and just 24% were proposed with a monitoring or evaluation plan.
The above has led to an obesity policy merry-go-round where the same or similar policies are proposed again and again by different governments or different secretaries of states, and yet are largely unlikely to be effective and equitable or get progressed fully from implementation right through to monitoring, evaluation and beyond. For example, there has been a Conservative Party Government since 2015, which has published not one, but four obesity strategies containing many of the same or similar policies. New prime ministers have come in and instead of seeing through the policies already proposed or in progress, they have all published new strategies. But it’s not just new governments that can come in and start again. In the last year, Prime Minister Boris Johnson has scrapped or sought to delay or revoke some of his own 2020 obesity strategy policies.
Meanwhile, problems such as poor diets and rising obesity rates are getting worse. The Covid-19 pandemic revealed how serious the consequences of failed obesity policy can be, as the Prime Minister Boris Johnson so personally experienced. The Global Burden of Disease (2017) found that poor diet is a factor in one in five deaths around the world. Four of the top five risk factors for healthy years of life lost to disease, disability and death are related to poor diet and physical inactivity. The evidence is writ large that poor diets have devastating consequences and there is increasing evidence on likely effective and equitable interventions. So, why does government obesity policy not reflect this?
One major barrier is that governments have tended to favour a less interventionist approach to reducing obesity, regardless of political party. Political decision-making is a primary arena in which scientific evidence comes up against ideology. The influence of neoliberalism, which advocates broad notions of individual responsibility, choice, a market-driven economy, and anti-government intervention, has been found by previous research to clash with more interventionist public health policies. Governments may have avoided stronger interventionist policies, e.g., legislation and fiscal measures, for fear of being perceived as controlling what people eat. The vilification of such intervention is commonly referred to as “nanny-statism” – the unwelcome interference of the state in people’s liberties and choices. Since politicians rely on the electorate to vote them back into power and Government relies on Parliament to support and facilitate policies, maintaining public and political popularity and avoiding potentially unwelcome policies are important. The question that remains is can scientific evidence be viewed as being more compatible with a neoliberal ideology? And if so, then how?
Through my research, I am trying to understand how government policy can more effectively, equitably and rapidly solve major problems like rising obesity rates. Breaking the decades-long cycle of ineffective obesity policies not only has profound implications for population health, but for government and the way it works too. A 2021 National Audit Office report found that the Department of Health and Social Care did not know how much it spent tackling obesity and yet it continues to spend billions of pounds treating the consequences. From our own health to the way that our country is run, improving government obesity policy matters to us all. | <urn:uuid:609ae27e-adf4-4986-b044-8c5b5a43af42> | CC-MAIN-2024-10 | https://www.mrc-epid.cam.ac.uk/blog/2022/11/01/failed-obesity-policy-max-perutz-2022/ | s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475727.3/warc/CC-MAIN-20240302020802-20240302050802-00487.warc.gz | en | 0.965974 | 1,317 | 2.84375 | 3 |
Your jaw is among the most complicated joints in your body. It has a broad range of motion. It can open and close, move side to side, and slide forward and backward. All of these motions are utilized during talking, chewing, and yawning. The jaw joint is commonly referred to as the TMJ or temporomandibular joint. There are many reasons why the jaw may not be functioning properly, and that can lead to pain and discomfort, as well as other associated symptoms.
The way people experience TMJ dysfunction can vary greatly:
Tenderness in the muscles of the face and jaw
Because of the variety of symptoms that a person with TMJ problems can experience, it can be difficult to find a way to address the root cause. Recommendations often include pain medications or anti-inflammatories, and perhaps being fitted for a bite guard. These approaches, unfortunately, provide only surface-level relief.
Upper Cervical Care and the TMJ
The focus of upper cervical chiropractic is on the upper two vertebrae in the spine. What many people may not realize is how closely positioned these vertebrae are to the joints of the jaw on each side. Because of this, even a slight malposition of the atlas (C1) or axis (C2) can affect the jaw's ability to work properly. Given the fact that many people who suffer from TMJ pain or discomfort also have neck pain and headaches, it makes sense that the jaw and the neck are connected. The upper cervical chiropractic approach is designed to be extremely specific. Adjustments are very gentle and do not use a great deal of force to accomplish the necessary correction. When the upper cervical spine is realigned, the surrounding nerves and soft tissues have the chance to heal. This can lead to a jaw that moves and works the way it was designed to. | <urn:uuid:598b3414-e1d5-4c44-81bf-d2bc406c3d0d> | CC-MAIN-2019-22 | https://www.drnimira.com/blog/tmjd-a-natural-solution-to-a-complex-problem | s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254731.5/warc/CC-MAIN-20190519081519-20190519103519-00490.warc.gz | en | 0.956482 | 383 | 2.734375 | 3 |
Behavioral finance is a new area of financial research that explores the psychological factors affecting investment decisions. It attempts to explain market anomalies and other market activity that is not explained by the efficient market hypothesis.
The fundamental basis of behavioral finance is that psychological factors, or behavioral biases, affect investors, which limits and distorts their information and may cause them to reach incorrect conclusions even if the information is correct.
Behavioral biases can be categorized in many ways, and many of these categories may be overlapping or indistinguishable. The following factors have been published as having some impact on the market. Contrarian and momentum strategies take advantage of some of these psychological factors.
Emotion often overrules intelligence in decisions and can filter facts: the investor may give too much weight to facts that are agreeable and ignore or underweight facts that are antithetical to one's predisposition.
There is the fear of regret, or regret avoidance, which causes investors to hold losing positions too long in the hope that they will become profitable or sell winners too soon to lock in profits lest they turn into losses.
Overconfidence in one's abilities can lead to higher portfolio turnover and to lower returns. Despite the fact that few investors or even professional portfolio managers beat the market over an extended time, there is still a considerable amount of active portfolio management. One study has shown that, on average, men earn lower returns than women because they trade more actively, presumably because they have greater confidence in their abilities.
Conservatism is the hesitation of many investors to act on new information, even when such information should dictate an action or a change to one's strategy. This hesitation plays out at different times for different investors, leading to momentum as more investors start reacting to the news.
Belief perseverance is the persistence of one's selection process and investment strategies even when such strategies are failing or are suboptimal, causing investors to ignore new information that may contradict their decisions.
Loss aversion is the propensity for people to avoid losses even at the cost of forgoing possible gains. Hence, people often hang onto losing stocks longer than they should, since selling would actualize the loss; otherwise, it is still just a "paper loss". Loss aversion is related to the marginal utility of money, where the 1st dollars are more valuable than additional dollars. For instance, most investors would avoid an investment where there was a 50% chance of either earning $50,000 on a $100,000 investment, or an equal chance of losing the same amount, because the $50,000 lost would have greater marginal utility, and therefore be more valuable to the investor, than the $50,000 potentially gained.
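The marginal-utility arithmetic in that example can be checked numerically. As an illustration only (the square-root utility function is an assumption for the sketch, not something taken from behavioral-finance data): the gamble is fair in dollar terms, yet its expected utility falls below the utility of simply keeping the $100,000, so a loss-averse investor declines it.

```python
import math

def utility(wealth):
    # Concave utility: each additional dollar is worth less than the last.
    return math.sqrt(wealth)

wealth = 100_000
stake = 50_000

u_keep = utility(wealth)  # utility of declining the gamble
u_gamble = 0.5 * utility(wealth + stake) + 0.5 * utility(wealth - stake)

print(f"utility of keeping $100,000: {u_keep:.2f}")
print(f"expected utility of gamble:  {u_gamble:.2f}")
# The gamble is "fair" in dollars (its expected value is $100,000) but not
# in utility: the $50,000 lost carries more marginal utility than the
# $50,000 gained, so u_gamble < u_keep.
```

Any strictly concave utility function produces the same ordering; the square root is just the simplest choice.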
Misinformation and Thinking Errors
Misinformation and thinking errors probably account for most market inefficiencies, since no one knows everything and, even if they did, they may not make the best decisions based on that information.
Forecasting errors are a typical example, since even stock analysts are frequently wrong about future earnings and prospects for the companies that they cover.
Representativeness is the extrapolation of future results from a limited set of observations or facts; hence, this is sometimes known as small sample neglect. This is best illustrated by investors seeking a fund in which to invest and basing their decision on the fund's most recent performance rather than on its performance over a longer duration, especially one that includes bear markets. Often, this is the result of a lack of due diligence, where the investor relies on brochures or other advertising materials put out by the company or fund instead of doing independent research.
Narrow framing is the evaluation of too few factors that may affect an investment. For instance, an investor may buy a security because of its past performance without considering changing economic factors that could alter that performance. Mental accounting is a specific kind of narrow framing in which different investments are mentally segregated, applying different criteria and due diligence to each where the different treatment may be unwarranted: for example, treating separate investments as if each were the only investment rather than as part of one's portfolio.
Biased information gathering and thinking is the distortion of personal bias on facts and thinking to conform with one's current opinions or actions.
Limits to Arbitrage
So if behavioral bias misprices stocks, why don't arbitrageurs take advantage of the mispricing, buying and selling until the mispriced stocks return to their intrinsic value? Behavioral advocates have argued that mispricing persists because there are limits to arbitrage.
First, there is the risk that the mispricing will get worse, and, therefore, present a risk to arbitraging. The market influence of behavioral bias may be greater than the influence of arbitrageurs, especially since arbitraging behavioral bias is not risk-free. And even if prices do trend toward their intrinsic prices, it may take longer than arbitrageurs think—maybe longer than their investment horizon.
Secondly, financial models may be inaccurate, making arbitrage risky.
Thirdly, much of the arbitrage activity would require short selling, which, with stocks, is very risky, and many institutional investors are not permitted to sell short.
Can Behavioral Bias Explain Market Anomalies?
Possibly, but there has been no connection of cause to effect. The problem with behavioral finance is that it is too general, too murky. Virtually any anomaly can be explained by selecting whatever reasons seem plausible. But this doesn't establish cause and effect. In fact, many things that seem reasonable are wrong.
The biggest critique of behavioral finance is that it is more of a philosophy than an actual science, since there are few if any controlled experiments to verify cause and effect. And even if there were such experiments, they probably wouldn't be useful for investors since it would be almost impossible to quantify these factors, not only for an individual, but certainly for the investing population, and using statistics would not yield precise enough results to be useful for investors. Not even institutional portfolio managers with high-speed computers and sophisticated financial models can predict the markets, for if they could, most of them would be outperforming the indexes, and, yet, many studies have shown that they rarely do over an extended time, and even the ones that do may do so out of pure chance. After all, when thousands of portfolio managers are trying to outperform the market, some will succeed simply because of luck.
The Largest Behavioral Factor: Lack of Information
No doubt behavioral bias does explain much of the market activity. However, I believe that the biggest factor in behavioral finance is simply the lack of information. Because financial markets are so complex, no one can even know all of the relevant factors that explain its activity, let alone quantify those factors to arrive at an accurate forecast. Moreover, different investors will have different information, and will have a different reaction to that information than others. Even to say that a stock is mispriced is an opinion—what is the true price of a stock? Many would say its intrinsic value, but intrinsic value depends on the present value of the stock's future cash flows. To know the present value of future cash flows, you would have to know the capitalization rate that the market is using to discount those cash flows, then you would have to know what those cash flows will be, including dividends and the stock sale price. Since no one can know these things with certainty, intrinsic value is only an opinion as to what it should be, and different investors will have a different opinion. Furthermore, the opinions of any individual investor will change over time as she learns new information.
Markets may be rational if every factor that affects them could be known and quantified, but that isn't possible. How do you explain the stock market bubble of the late 1990s? Irrational exuberance—is that another behavioral bias? No doubt. After all, the market was rising so fast and so high that people thought it would just continue, just as they thought about real estate prices a few years afterwards, before the real estate bubble finally collapsed. Obviously, in these cases, people were buying because they thought the market would continue its trend and that they would get out just before it fell. In other words, they were buying based on what they expected the market to do, not on what individual stock prices should be.
As I sit here, writing this in early January, 2009, the stock market is still sitting near its nadir since December, 2008, and yet, even though the market has dropped about 40% in 2008, with most stock prices much less than they were in 2007, nobody is rushing in to buy at these depressed prices. Why not? Shouldn't the arbitrageurs be rushing in, buying stocks at a frenzied pace, until the stocks reached their "true price"? The limits to arbitrage were listed earlier, but those limits don't apply here. For instance, it seems almost a certainty that prices are not going to go much lower than they are now. Secondly, regardless of what financial model you use, you would have to agree that stocks are much better bargains now than they were in the recent past. Thirdly, since prices are depressed, only buying is needed to bring those prices back to their "proper level". No short selling is needed. And, yet—few arbitrageurs are taking advantage of these bargain prices—at least not enough to raise them quickly. Anyone buying now can probably earn abnormal returns, since the market will almost certainly go up from here—albeit, slowly.
The main problem in researching behavioral finance is linking cause to effect. While the psychological basis of many investment decisions can be explained by principles that seem reasonable, there is no hard evidence that such principles actually explain the events studied. Few studies examine individual investors in enough detail to link their motivations to their investment decisions.
A good example of this rational attribution to specific causes is the daily reporting of news. Regardless of what the markets do, or what individual stocks do, their activity is "explained" by the reporters as being due to this or that. The explanations sound reasonable because the reporters select what seems rational to them, then look no further for anything that might contradict their explanation. If the market is up, they select reasons why it is up, and if it is down, then they select different reasons that "explain" the market decline. But there is little reason to believe that the proffered explanations have validity. After all, it is reasonable to assume that the markets will always do something even if there was no news at all, which is why they exhibit the random walk.
While there is no doubt that behavioral bias does affect the markets significantly, it has little use in forecasting the markets, since the many facets of human behavior cannot be quantified, and it will not enable the individual investor to earn abnormal returns based on its tenets. | <urn:uuid:ad085adc-7c8a-4973-a1db-e27c4a90a328> | CC-MAIN-2018-17 | https://thismatter.com/money/investments/behavioral-finance.amp.htm | s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946120.31/warc/CC-MAIN-20180423164921-20180423184921-00414.warc.gz | en | 0.96988 | 2,236 | 3.09375 | 3 |
The switching of the three-phase inverter needs to be controlled by digital logic – typically a programmable microcontroller (MCU) – to regulate the torque or velocity of the motor while maximizing efficiency (producing the required torque with minimum current usage). With the use of hall sensors on the motor, it is reasonably straightforward to control a BLDC motor using a six-step (trapezoidal) commutation control technique with limited digital logic resources (a very small programmable MCU or even a hard-coded ASIC).
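A rough sketch shows why six-step control needs so little logic. This is a generic illustration, not TI code, and the hall-code ordering below is an assumption that varies between motors: each of the six valid hall-sensor states maps through a lookup table to one inverter state, naming which phase is driven high, which is driven low, and which floats.

```python
# Hypothetical hall-state -> (high phase, low phase, floating phase) table.
# Three hall sensors give a 3-bit code; the six valid codes each select one
# of the six inverter states. Codes 0b000 and 0b111 indicate a sensor fault.
COMMUTATION_TABLE = {
    0b001: ("A", "B", "C"),
    0b011: ("A", "C", "B"),
    0b010: ("B", "C", "A"),
    0b110: ("B", "A", "C"),
    0b100: ("C", "A", "B"),
    0b101: ("C", "B", "A"),
}

def commutate(hall_code):
    """Return (high, low, floating) phases for a hall-sensor reading."""
    try:
        return COMMUTATION_TABLE[hall_code]
    except KeyError:
        raise ValueError(f"invalid hall state {hall_code:#05b}") from None

# Stepping through the codes in rotation order advances the stator field
# one sixth of an electrical revolution at a time.
for code in (0b001, 0b011, 0b010, 0b110, 0b100, 0b101):
    high, low, floating = commutate(code)
    print(f"hall {code:03b}: drive {high} high, {low} low, float {floating}")
```

A hard-coded ASIC can implement the same mapping as pure combinational logic, which is exactly why six-step is so cheap.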
This six-step approach, however, has limitations, notably torque ripple and reduced efficiency.
An approach that works better for most of these motors is called Field Oriented Control (FOC). In FOC, you can produce a stator field that is oriented and synchronized to the rotor field, which maximizes torque production. The transition between stator states is smooth, removing torque ripple and improving the dynamic performance of the system. The voltages seen by the motor phases are sinusoidal, enhancing efficiency. FOC isn't that much more complex than six-step BLDC: it measures at least two phase currents instead of one bus current, does some additional math, runs two proportional-integral (PI) current controllers instead of one, and performs a few more calculations for the pulse width modulation (PWM) generation.
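The extra work FOC adds (the two measured phase currents, the reference-frame math, and the pair of PI current controllers) can be sketched in a few lines. This is a schematic single control step with assumed gains and an externally supplied rotor angle, not TI's implementation:

```python
import math

class PI:
    """Minimal proportional-integral controller."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

def foc_step(i_a, i_b, theta, pi_d, pi_q, id_ref=0.0, iq_ref=1.0):
    """One FOC iteration: two transforms plus two PI current loops."""
    # Clarke transform: two measured phase currents (third implied) -> a
    # stationary two-axis (alpha/beta) frame.
    i_alpha = i_a
    i_beta = (i_a + 2.0 * i_b) / math.sqrt(3.0)
    # Park transform: rotate into the rotor frame using the rotor angle.
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    # Two PI controllers: flux-axis (d) and torque-axis (q) current.
    v_d = pi_d.step(id_ref - i_d)
    v_q = pi_q.step(iq_ref - i_q)
    # Inverse Park: rotate the voltage command back for PWM generation.
    v_alpha = v_d * math.cos(theta) - v_q * math.sin(theta)
    v_beta = v_d * math.sin(theta) + v_q * math.cos(theta)
    return v_alpha, v_beta
```

In a real drive this loop runs at the PWM rate, the angle comes from a sensor or observer, and v_alpha/v_beta feed a space-vector modulator.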
However, there is the issue of the rotor sensor. The hall sensors used in six-step BLDC do not give enough accuracy on the position of the rotor magnetic field for FOC. Further, hall sensors have some upfront costs (including additional wiring and voltage requirements), as well as lifetime costs due to their low reliability and high system failure rate. Additionally, some applications simply can't use hall sensors due to mechanical limitations (e.g. compressors). A solution could be to use a different type of rotor magnetic sensor. Digital encoders (often used in high-precision servo drives) and analog resolvers (often used for the EV propulsion motor) give the resolution required for FOC, but are expensive and impractical compared to simple hall sensors. The only solution then is sensorless FOC.
Sensorless FOC relies on software algorithms to estimate the rotor magnetic field position (and often rotor velocity) based on the currents and/or voltages in the inverter. Sensorless rotor position estimators (or observers) have been theorized, developed and in use for over 25 years. But their practical implementations have largely been confined to companies with extensive investment in this expertise (AC drives, industrial motor control, some advanced appliance and automotive). At TI, we have been providing software libraries and system examples of sensorless FOC for 20 years. Through this process, we have realized some significant limitations of the conventional sensorless FOC solutions available from semiconductor suppliers (including our own). Therefore, we created a new software observer (FAST) and control solution (InstaSPIN-FOC) that solves these challenges.
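A toy version of what such an observer does (not the FAST algorithm, whose internals are not described here) integrates the back-EMF, i.e. the applied voltage minus the resistive drop, to estimate the stator flux, and takes the rotor's electrical angle from the flux components. Parameter sensitivity and integrator drift, which real products must handle, are ignored:

```python
import math

class FluxObserver:
    """Minimal sensorless angle estimate: psi = integral(v - R*i) dt."""

    def __init__(self, resistance, dt):
        self.r, self.dt = resistance, dt
        self.psi_alpha = 0.0
        self.psi_beta = 0.0

    def step(self, v_alpha, v_beta, i_alpha, i_beta):
        # Back-EMF is the applied voltage minus the resistive drop;
        # integrating it gives the stator flux linkage.
        self.psi_alpha += (v_alpha - self.r * i_alpha) * self.dt
        self.psi_beta += (v_beta - self.r * i_beta) * self.dt
        # The flux vector rotates with the rotor, so its angle serves
        # as an estimate of the rotor's electrical angle.
        return math.atan2(self.psi_beta, self.psi_alpha)
```

Estimators of this kind fail at zero and very low speed, where the back-EMF vanishes, which is one reason production-grade observers are considerably more involved.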
InstaSPIN-FOC capability is made available through use of an on-chip library integrated into three members of TI's 32-bit real time Piccolo™ MCU controllers. Piccolo MCU devices are widely used in industrial and automotive applications and are available in industrial (-40 to 105C) and AEC automotive Q100 (-40 to 125C) temperature grades. The quickest way to get started spinning your motor is to purchase an InstaSPIN-FOC enabled three-phase motor control evaluation module of the appropriate voltage and current level.
Reflexology is a method whereby the application of pressure by the fingers and thumbs to specific areas and points located on the hands, feet and even ears brings about the release of stress and improves health.
Reflexologists identify specific reflex points and areas on the hands and feet that correspond to other parts and systems of the body. They state that applying pressure to these reflex points has a beneficial effect on the corresponding organs, as well as on overall health and healing in the body.
For example, reflexology holds that a particular area in the "arch" of one's foot corresponds to the bladder, and that appropriate pressure on this area will therefore affect the functioning of the bladder. There are many reflexologists in Bowmanville, but Spinewise is the best clinic for treating all kinds of problems.
Millions of people all around the world are using reflexology as a complementary therapy along with other treatments to combat anxiety, diabetes, kidney malfunctions, headaches, sinusitis etc.
Reflexology is found to be beneficial for restoration of harmony and balance in the body as well as for releasing tension and stress. It can produce a serene mind and a complete state of relaxation. Research done in the US and across the globe indicates that reflexology has positive benefits. | <urn:uuid:195cc59d-b496-4fae-96a4-71e5c2b502c3> | CC-MAIN-2022-05 | https://www.spinewise.ca/2017/05/ | s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305341.76/warc/CC-MAIN-20220128013529-20220128043529-00290.warc.gz | en | 0.954933 | 281 | 2.578125 | 3 |
Update: 1/25/2021 | How to Propagate Plants
If you’ve got a green thumb and want to add a new skill to your arsenal, plant propagation can be a rewarding way to expand your indoor jungle.
Plant propagation is very interesting to watch. It’s also a great way to save money when adding indoor plants to your home.
And what is really exciting is that you can potentially get new plants into your home for free or at low cost through the water propagation method.
We prefer the water propagation method because it’s a great way to get the exact replica of a plant without much effort. All that is required is some time and your attention.
Below we’ll go over how to propagate and what you need in order to do it successfully.
What Does It Mean to Propagate a Plant?
Plant propagation is the process of growing new plants from a variety of sources. These sources include seeds, cuttings, and other plant parts (Wikipedia).
Why Do People Propagate Plants?
There are two reasons: 1) to get new plants for free or at low cost, and 2) to trim house plants that are overgrown and cluttering your home.
How to Propagate Plants
While there are many ways to propagate plants, in this guide we’ll show you how to water propagate.
What Do I Need to Propagate Plants?
- Propagation soil, or a mix of potting soil, vermiculite (helps keep the soil moist), and perlite (prevents compaction).
- Propagation chamber – clear container & lid, deep enough for tall seedlings. We love using mason jars.
- Sharp clippers, knife or razor blade
- Rooting hormone – speeds up the rooting process and protects from disease
- Small pots for rooted cuttings
Get Root Cuttings of Your Desired Plant
To propagate your plant, you need to get a root cutting. You can get a root cutting by purchasing them online or by swapping with other plant enthusiasts. There are tons of plant groups on Facebook where you can buy or trade for different plants.
You can also find plants along the sidewalk, in a park, or anywhere you are allowed to cut and bring plant trimmings home with you.
Root cuttings have nodes, which are where the stems will grow from during propagation.
Add Your Plant to Water
Using a propagation vessel, such as a mason jar, water bottle, vase or even these cute beakers, add water until the vessel is 4/5 full. Add the plant and make sure the nodes are submerged. Do not submerge any of the leaves. If leaves happen to become submerged due to length, cut them off with scissors or a blade.
The next part is to wait. It will take weeks, maybe even months before you see roots start to grow.
If your plant is struggling with root formation, you can always encourage it with some root hormone powder. Simply sprinkle it onto the roots, shake off any excess and add it back to the propagation vessel.
How Long Does It Take Roots to Form?
It can take anywhere from 1 to 4 weeks for roots to fully form. On average, we typically get roots long enough for potting at around 3-4 weeks.
You do not want your roots to be too long because it could be detrimental to the long term wellbeing of the plant. Why? Because water has no nutrients and it can damage the plant in the long run.
At week 1 of propagation, I could see 3/4″ stems forming on my Mini Monstera cutting.
When Should I Pot My Propagated Plant?
You should pot your propagated plant when the stem is 1-2 inches long. I typically wait until the stem is 2″ long and there are at least two stems to support the plant when potted.
You can continue to leave the plant in water until the stems start to coil. We typically recommend not letting it get too long to preserve the health of the plant.
When Will My Propagated Plants Grow New Leaves?
This depends on a few conditions; current weather conditions, soil, type of plant, etc.
When I initially potted my plants, the leaves started to brown. I thought my plants were going to die and all the money and time I spent propagating went down the drain.
I kept my plant soil moist (but not damp) and moved my plants near a humidifier 24/7. One month after potting, I finally saw some new growth! See the images below for new leaves from my propagated plants.
What Soil Should I Use?
You should use a soilless potting mix to start and slowly add soil with each watering. Why? Because the soilless potting mix will allow the roots to breathe as the plant grows. This will also prevent root rot.
We like using this soilless mix recipe post propagation:
- 4 to 6 parts sphagnum peat moss or coir (coconut shards)
- 1 part perlite (small white stones)
- 1 part vermiculite
With each watering, add a bit of soil so that it permeates the soilless mix over time.
Propagation takes time, attention and patience. It’s a great way to get new plants that would have been too expensive to buy fully grown. And the results of your hard work will pay off in the end when you start to see new growth.
If you have any questions about propagation or how to care for your plants, feel free to send us an email.
In the chapter on Simple harmonic motion, we showed how it could be represented as a projection of uniform circular motion. Now let's be explicit:
On this still from the animation above, the graph at right shows the displacement y of simple harmonic motion with amplitude A, angular frequency ω and zero initial phase:
y = A sin ωt.
If uniform circular motion has radius A, angular frequency ω and zero initial phase, then the angle between the radius (of length A) and the x axis is ωt as shown. The rotating arm here is called a phasor, which is a combination of vector and phase, because the direction of the vector (the angle it makes with the x axis) gives the phase.
Two phasors with fixed phase
Let's see how useful this phasor representation is when we add simple harmonic motions having the same frequency but different phase.
Here we have the brown phasor with magnitude A and initial phase 0
y1 = A sin ωt
and the green one with magnitude B and initial phase φ:
y2 = B sin (ωt + φ).
We say y2 is ahead of y1 by φ or, more commonly, y2 leads y1 by φ.
From vector addition, we can see that the red phasor is the sum of the brown and the green ones. The amplitude and phase of the red phasor can then be obtained by trigonometry or geometry.
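That trigonometry can be checked with complex arithmetic by representing each phasor as a complex number (an illustration; the example numbers are arbitrary). The resultant amplitude obeys the law-of-cosines form √(A² + B² + 2AB cos φ):

```python
import cmath
import math

def add_phasors(a, b, phi):
    """Sum of A·sin(ωt) and B·sin(ωt + φ) as a single phasor.

    Returns (amplitude, phase) of the resultant C·sin(ωt + δ).
    """
    resultant = a + b * cmath.exp(1j * phi)  # vector addition of the arms
    return abs(resultant), cmath.phase(resultant)

# Example: equal amplitudes 90° apart.
amp, delta = add_phasors(1.0, 1.0, math.pi / 2)
print(f"amplitude = {amp:.4f}, phase = {math.degrees(delta):.1f} deg")
# → amplitude = 1.4142, phase = 45.0 deg
```

Setting phi = 0 gives amplitude A + B (constructive interference) and phi = π gives |A − B| (destructive interference), matching the animation.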
Mixing two sounds
This film clip from the chapter Interference shows an example of the addition of two sine waves with varying amplitude and phase.
Both loudspeakers are being driven by the same oscillator and, assuming that the speakers are similar, they each output the same sound wave. The one that is closer to the camera and microphone will be both louder (larger amplitude) and ahead in phase. Adding such signals becomes a little more complicated.
Constructive and desctructive interference
When one adds two simple harmonic motions having the same frequency and different phase, the resultant amplitude depends on their relative phase, on the angle between the two phasors. In this animation, we vary the relative phase to show the effect.
Here, the phasor diagram shows that the maximum amplitude occurs when the two are in phase: this is called constructive interference. The minimum sum arises when they are 180° out of phase, which is called destructive interference.
What if the frequencies are different?
If one simple harmonic oscillation has angular frequency ω1 and the other angular frequency ω2, we could say that the second leads the first by a variable angle φ, where dφ/dt = ω2 − ω1.
In general, this representation doesn't lead to substantial simplification. However, an interesting phenomenon arises when
f2/f1 = ω2/ω1 = m/n, where m and n are small integers.
This is interesting, because it is also the condition for musical consonance. In the example below, the ratio is 3/2, which in music is the perfect fifth, one of the most harmonious consonances.
This animation shows how consonance may be demonstrated using Lissajous figures, in which one harmonic oscillation is plotted as the y coordinate and the other as the x coordinate. We have a separate page on Lissajous figures. | <urn:uuid:c771c656-3104-405c-a1eb-9c3441b64ea8> | CC-MAIN-2015-11 | http://www.animations.physics.unsw.edu.au/jw/phasor-addition.html | s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462426.4/warc/CC-MAIN-20150226074102-00059-ip-10-28-5-156.ec2.internal.warc.gz | en | 0.919013 | 711 | 4.1875 | 4 |
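For a concrete illustration (the sampling choices here are arbitrary), the points of a Lissajous figure can be computed directly from the two oscillations; the curve closes on itself exactly when the frequency ratio is m/n with small integers m and n, as for the 3:2 perfect fifth:

```python
import math

def lissajous(m, n, n_points=1000, phase=0.0):
    """Points (x, y) where y oscillates at m/n times the frequency of x.

    For integer m and n the figure closes after t = 2*pi*n: x completes
    n full cycles while y completes m.
    """
    points = []
    for k in range(n_points + 1):
        t = 2.0 * math.pi * n * k / n_points
        x = math.sin(t)
        y = math.sin((m / n) * t + phase)
        points.append((x, y))
    return points

# Perfect fifth: frequency ratio 3/2 gives a closed figure.
curve = lissajous(3, 2)
print(curve[0], curve[-1])  # first and last points coincide (closed curve)
```

Irrational frequency ratios never satisfy the closure condition, so the corresponding figure never repeats, which is the visual counterpart of dissonance in this demonstration.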
Etching (from the Latin radere "scratch, take away, remove") designates a graphical gravure
process of artistic printmaking.
In gravure printing, the smooth, flat surface of a printing plate (copper or zinc) is injured by adding lines or dots that form the drawing. For this, an etching needle (steel needle) is used.
In drypoint, the drawing is made with a needle of the hardest steel directly on the printing plate. Depths can be produced ranging from the most delicate lines to stronger furrows with raised edges, which hold a lot of color and give a greater density when printing. No etching liquid is used.
In etching, the drawing is scratched into a relatively soft covering first applied to the plate. The plate is then etched with an etchant; only the points where the top layer has been breached are attacked. After rinsing the plate, the cover layer is removed. Source: Wikipedia
"Landscape" Aquatinta etching / drypoint 2017
"Sylt" Aquatinta etching / drypoint 2017
"Cologne" drypoint 2010
"leaf skeleton" drypoint 2008
"oiltubes" drypoint 2008
"dolls" drypoint 2008 | <urn:uuid:eb5988ce-2616-4f4c-8ade-d1aa2de9a5aa> | CC-MAIN-2018-05 | https://www.sabine-odenthal.com/home-english/printmaking-etching/ | s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889660.55/warc/CC-MAIN-20180120142458-20180120162458-00030.warc.gz | en | 0.865877 | 267 | 3.21875 | 3 |
This important title, often found in connection with the name ‘El, is found in several biblical passages in reference to Israel’s God (e.g., Gen. 17.1; 28.3; 35.11; 49.25; Ex. 6.3; Num. 24.4, 16; Ps. 68.15; Job 8.3, 5, etc.). ‘El-Shaddai is P’s favored title for God before the revelation of the divine name to Moses. But what is its meaning, and what is its historical derivation? Traditionally, following the LXX (i.e., the Septuagint, or ancient Greek translation of the Hebrew Bible), which uses pantokrator, and the Vulgate (a Latin translation of the Bible by St. Jerome), which uses omnipotens, the term has often been rendered in English translation as “Almighty,” but it is now generally considered that this interpretation is fallacious, possibly stemming from a similar-sounding Hebrew root $-d-d, meaning “to destroy.” Some modern scholars have suggested several other possibilities, such as connecting it with the Hebrew word $ad, meaning breast. However, since ‘El-Shaddai was a male deity, this seems somewhat unlikely. Another suggestion is that it is related to the Hebrew word sadeh, meaning “field.” However, this root uses a different sibilant (sin) than does Shaddai (shin).
Exploring how education emplaced in local biodiversity can deepen values of equality, diversity and inclusivity among human beings
Specially written for Vikalp Sangam
A Black-hooded Oriole was hovering and nibbling a half-ripe papaya on a tree, at the center of a brinjal field, and feeding it to its fledgling sitting on a branch. After a while the adult bird flew away to a gooseberry tree nearby and watched the young one with a side glance, as if to see if it would begin feeding on its own.
At the end of October 22’, four of us nature educators from Palluyir Trust were in the village of Pichanur, in Coimbatore, Tamil Nadu, for two full days of activities with the local children. The previous day we had gone birding (when we watched the behavior of the black hooded oriole closely), did a nature journaling exercise and played a few games in the afternoon. Around 50 children had come, some from Pichanur and adjacent villages, and others from a tribal settlement located a little outside the village. The fact that caste segregation ran deep in this landscape, even among children, was only slightly apparent to me as an outsider. But its back blatantly breached the water in how the children sat separately to eat lunch. And in how some children did not budge, or come together, when I tried to get them into small groups, which then I left to the teacher to make.
The next day, at the children's popular request, it was half a day of butterflies. In the morning we went on a long walk in the village outskirts, field guides and observation tables in hand, searching for butterflies. We saw Common Banded Peacock butterflies (Mayil Azhagi in Tamil) mud-puddling on a heap of wet red soil. A Southern Birdwing (Ponnazhagi) patrolled over the coconut plantations, bringing much excitement and yelling each time it passed overhead. Along the waysides between plantations and fallow lands, Four-rings (Nangu Valaiyan) were plentiful with their slow flutter low over the verge grasses. We also saw the full life cycles of the Plain Tiger (Vendhaya Variyan) and the Lime butterfly (Elumicchai Azhagi) on a Calotropis and a lime plant respectively. At the end we sat on the banks of the Kumittipathi river to share our findings, observations and questions, and to listen to each other. Some children had spotted over 25 species. Later in the day, Sandhya, one of their teachers, shared with us rather movingly that somehow during the activity the children had gradually started interacting, and by midway they were freely talking to one another, helping each other find, identify and observe. This was something she said she struggles to bring about on a daily basis, given the socio-political setting of that place.
This occurrence left me thinking for several days after we returned to Chennai. What was it in the nature of watching butterflies or birds or trees that was able to erase, if only for a few hours, such deep social segregation? Was it simply that while watching a butterfly together, caste was irrelevant? Or, even more significantly, that to pay attention, to wonder and to raise questions, one had to set aside social constructs, if not bring those constructs too into observation, and be on a humanly equal plane? Something, or several things, about observing deeply and connecting to other beings could connect human beings too.
A month later, I was presenting to teachers of Abacus Montessori school possible ways of integrating climate literacy as a core aspect of school education. During the closing discussion, my colleague Kaveri, a teacher of humanities and political science, said something deeply intriguing. Her grade 9 children, she shared, who had been through the ‘Farm, Environment and Society’ programme (which I co-design with the teachers) since elementary school, were far more political, more active as discussants and more self-thinking than other classes she had experienced. Somehow this seemed to transfer – from the nature-based learning pedagogy they had experienced over the last three years, in which most modules focused on observing biodiversity in Chennai – into other areas of life. She too independently echoed that “something about practicing the skills of observing keenly for oneself” extrapolated into their engagement and thinking around society and history too. Later in December, on the stage of the second Green Literature Festival at Bangalore, I heard Professor Mahesh Rangarajan, among India’s eminent environmental historians, say that observing other species, with their diversity and different ecological needs, living together, ‘makes the human being sensitive to all kinds of otherness’.
What are the ways in which biodiversity is a political science teacher? People have been finding numerous transformative political ideas in other species. Alexis Pauline Gumbs finds profound teachings of resistance, and ways of shattering capitalism, from whales, dolphins and seals in her utterly brilliant book ‘Undrowned’ – black feminist lessons from marine mammals. Jean Paul Gagnon, in his three-part essay, explores the question of non-human democracy and uses ‘interspecies-thinking’ to draw operative democratic lessons from bees, bonobos, termites and microbes that are deeply applicable in human society, as well as provoking ideas for what a multispecies democracy might look like. In the book ‘Evolution’s Rainbow’, Joan Roughgarden tells us ways in which other species, from insects to fishes, can teach us to live in a diverse society, especially a gender-diverse one. That nature is profoundly queer. That binary, polarized nature is hard to come by.
But at a simpler level, for a child, what kinds of subtle political learning happen through the regular practice of observing nature, and how?
I realize, from experience and from reviewing research, that the simplest and most formative political value direct engagement with the rest of nature is able to offer is an immersive exposure to ‘diversity’. Children meet and learn implicitly that there is never just one voice, one narrative, one story in the profoundly non-binary multi-species world. One need not even highlight this truth as an educator. ‘Difference yet coexistence’ is the lens through which the living world lets itself be seen. Other beings speak to us subliminally. They tell us plainly what theologian Catherine Keller articulates – “for difference itself is relation; we exist only in the relationality of our differences”. Observing biodiversity can shift us from the consumer/recipient position capitalist culture has cornered most human beings into. Direct observation makes us active foragers of deeper meanings and purposes – which is in itself politically countercurrent. Gregory Cajete, a Tewa elder and educator, writes in his book ‘Look to the Mountain’ – “Observing how things happen in the natural world is the basis of some of the most ancient and spiritually profound teachings of Indigenous cultures. Nature is the first teacher and model of process. Learning how to see Nature enhances our capacity to see other things”.
My team-mates and bold young nature educators Nikkitha and Charlotte shared with me, when I discussed this question with them, how they and their friends have naturally developed a daily practice of looking – in the waysides, shrubs and grasses – especially for ‘what is not easily seen’. This, they felt, was the beginning of critical thinking, which transferred to other areas of their life, and to interactions with people too. The perpetual effort to look for the invisible or the invisibilized. Surendhar Boobalan, a friend and fellow nature educator in Pondicherry, shared another aspect of equity that emerges when he takes his primary-school children birding: he is no longer able to notice the distinction between studious and unstudious, bright and dull children, which a confined classroom sometimes forces on him.
The political-pedagogical processes one follows as a nature educator are also vastly different from traditional classroom instruction – where power and spotlight are concentrated in one person – what I’ve begun calling a ‘pedagogy of control’. In a marshland or a park, if a frog or a heron decides to show or teach something else, and the learners’ energies flow in a direction other than my own plan, I am always learning to leave space for it – for nature to directly be the teacher – aware of the fact that I am always both educator and learner in that setting, as is everybody else. When the learning space is the real world, the educator has little choice but to drop control and evolve a ‘cooperative pedagogy’ – where power, knowledge and focus are beautifully, sometimes equally, distributed across many people and many species. These multi-species values are already present and practiced in several Adivasi cosmologies – those of the Idu Mishmi, the Santhal, the Jenu Kurubas and the Kattunayakans, to name a few.
Through Palluyir Trust (Palluyir in Tamil means biodiversity/multispecies/all of life), and in collaboration with Pudiyador (an organization which works to empower marginalized communities across Chennai through education), we run the Youth Climate Internship – a programme for youth from three climate-vulnerable communities: Urur-Olcott kuppam, Ramapuram, and Kakkan colony. Through the programme we make 10 field trips to deeply observe and understand the ecology and socio-political landscape of Chennai. We learn advocacy tools and the law, we study other species and habitats in our neighborhoods, and the youth engage the people in their locality in walks and activities. This December, on a cold Sunday morning at Urur kuppam, we had a half-day module on ‘questioning’. In the morning we did a ‘curiosity map’, an exercise to actively strengthen our muscles of wonder and curiosity. All through that week, Blue buttons (coin-sized jellyfish-like creatures which float on the ocean’s surface) had been washing ashore along the city’s coast – a phenomenon which happens two or three times a year, sometimes due to strong landward winds or seismic events, and at other times inexplicably. We pondered the recent Blue button beaching, then asked questions about it – covering which, when, what, why, how, and who, and some questions beyond the purview of these words. We made sure we asked questions past what the mind could easily think of and across the threshold of comfort, consciously challenging our capacity to wonder. Then we headed out onto the beach to each make a curiosity map of our own. The winter sun was two fist-spans over the ocean and pleasantly warm. Some fisherboats were coming back, having cast crab-nets early in the morning. An Olive ridley sea turtle had washed ashore dead, with an impact injury on the bottom right of its shell, possibly from a trawler strike. Among the sixteen of us, we each chose one creature or scene on the coast and exercised our curiosity.
We drew and coloured, then made a map of questions, consciously pushing our wonder beyond its zone of comfort. Decorator worms, Tower shells, Crows, Ark shells, Ghost crabs, Goose barnacles, a sand star and the sea turtle helped us exercise our wonder. “How do barnacle shells form under the sea?” “How does a clam make the inside of its shell soft and the outside rough?” “How far can a turtle see inside water?” “How does it help a tower snail to be shaped like a screw?” “What happened to the creatures inside these empty shells?” “Can turtles dream?” To wonder, to question as a daily practice of living is a radical political act. These practices help change age-old, often obsolete, social constructs and myths holding in place structural inequalities and patterns of capitalist existence on the Earth. Wonder will keep alive constant reimagination – political, cultural, spiritual – which is perhaps the mark of a sapient species.
K-12 teachers across the country have been asked to undertake a daunting task: take the classroom and culture they’ve been building since August and move it into a virtual space. For some, tools like Zoom, Google Classroom, or even Slack are relatively new or haven’t been used as anything more than a homework tracker.
For educators thinking about how best to begin the shift to online learning, Usable Knowledge has compiled practices and strategies from higher education institutions (which have previously offered online instruction) that can translate into the K-12 classroom.
Let your pedagogy inform the technology you choose to incorporate.
- Consider the way you teach and your values as an educator in order to choose your platforms. Do you find that lecturing is the best way to transmit information? Or do students need to be able to collaborate and talk to one another? Are you teaching to an end-of-the-year final? Or do you want your students to demonstrate a certain skill set by the end of the semester? There are no wrong answers, but certain virtual formats are likely better equipped to handle certain pedagogical styles.
Rebuild your classroom community in a digital space.
- Know your students and what their limitations are in terms of technology — send out a questionnaire to gather the information you need about each student's learning environment.
- Let students take the initiative in designing the class format. How do they want to use their time? Is it easier to do the reading on their own time or watch a prerecorded lecture and use virtual meetings for questions? Allowing students to have agency over learning is especially critical now that so many other aspects of life are in flux.
- Reestablish norms. Should students express their questions in a chat or discussion board, or should they use the “Raise Hand” feature available in some group video chats?
- Allow time for students to socialize and even to be silly — after all, they’re not getting much interaction outside of their homes. Incorporate things like icebreakers before easing into instruction.
- Establish expectations clearly and early on. Make sure students know what they can do to succeed — whether that’s participate on discussion boards or complete a project. Also be willing to be flexible in success metrics as this is new for many and what works in a classroom may not translate to online learning.
Develop new ways of encouraging engagement.
- Think about value-added technology. Many technologies offer a whiteboard feature or the option to share your screen.
- Arrange breakout sessions where small groups can meet virtually to discuss a text or work on a problem together, then reconvene with the whole class.
- Keep lessons brief — around 10 minutes. This is especially critical if students are working in an environment where they can be easily distracted or have other tasks that require their attention.
- Consider using polling features in the software as a way to check in and monitor what students are learning.
Other things to consider
- Get in touch with parents and let them know what your expectations are for their children. Let them know how they can support learning.
- Think about closed captioning and prerecording to make content accessible to all — and that includes parents who will be partners in this work.
- Use a calendar feature to outline due dates and class meeting times.
Overall, flexibility and iteration are key to ensuring student success. Be willing to let go of something if it’s not working. Communicate clearly and often with students to let them know you are in this together with them and you are there to support their learning.
Eating Disorder Awareness Week is February 1 - 7, 2018
It's closer than you think!
Eating Disorder Awareness Week (EDAW) is approaching fast.
This year we are promoting the message that One Size DOESN'T Fit All, to shine a light on the fact that Eating Disorders can and do affect individuals of all genders, ages, races and ethnic identities, sexual orientations and socio-economic backgrounds.
- English Toolkit (PDF) (plain text)
- French Toolkit (PDF) (plain text)
- Visuals (English 1 2 3 4 5) (French 1 2 3 4 5)
- Canadian Research on Eating Disorders (pdf)
- Key Messaging from NIED (pdf)
As always, EDAW is a truly collective effort. Provinces, territories and many municipalities are set to light notable landmarks in purple from February 1 to 7 to support eating disorder awareness - from Vancouver City Hall to the Peace Bridge in Fort Erie, ON to Charlottetown City Hall. Groups across the country are planning awareness-raising and educational activities for their communities, using the hashtags #EDAW2018 (#SemTA2018 in French) and #7billionsizes to get the word out.
At NEDIC, we'll be hosting a number of activities as well: our 5th annual voicED event; our 4th annual Twitter chat; and a community panel co-hosted with Sheena's Place and the National Initiative for Eating Disorders (NIED).
If you have an event planned, let us know and we'll happily promote it on our website as well as our Facebook and Twitter feeds.
We want to remind Canadians that talking has the potential to save lives. We know that through open, supportive dialogue, we can help break the shame, stigma and silence that affect nearly 1 million Canadians who are living with a diagnosed eating disorder.
We'll be reaching out to the media to seek coverage for EDAW with stories that demystify this often misunderstood and underfunded mental illness. You can help by reaching out to local news outlets or by displaying one of our brand-new EDAW 2018 posters. Click here to purchase this poster today (quantities are limited!) or visit our online store to check out the poster bundle-packs available.
Every effort, small or large, makes a difference. If you are unable to get involved this year, please consider making a donation to NEDIC. Your donation will support NEDIC's National Toll-Free Helpline as well as our community Outreach & Education programming. All gifts of $20 or more receive a tax receipt.
The final stages of liver cirrhosis are referred to as decompensated cirrhosis and involve progressive failure of the liver due to the accumulation of scar tissue, according to the University of Maryland Medical Center. A number of symptoms may occur as cirrhosis progresses into its final stages; these include jaundice, fluid accumulation in the abdomen, kidney failure, bleeding in the stomach and intestines, and encephalopathy due to toxin accumulation in the blood.
Doctors determine the presence and stage of liver fibrosis, or fibrosis that has progressed to cirrhosis, with a biopsy, according to the American College of Gastroenterology. In some cases, a biopsy is not necessary and a diagnosis can be made with a combination of blood tests, endoscopy, a physical examination or imaging studies.
Early symptoms of liver failure caused by severe cirrhosis include tiredness, diarrhea, queasiness and loss of appetite. Confusion, jaundice, edema around the abdomen and sleepiness are signs of advanced liver failure, notes WebMD.
The most common causes of cirrhosis of the liver are alcohol abuse and infection with diseases such as viral hepatitis and schistosomiasis, according to the University of Maryland Medical Center. Rarely, certain genetic disorders, bile duct problems and exposure to certain chemicals can also lead to cirrhosis. Regardless of the cause, cirrhosis results in the build-up of scar tissue in the liver. The liver is mostly able to offset the loss of function due to this scarring in the beginning stages of cirrhosis, though symptoms such as fatigue or abdominal pain may occur. The decompensated stage of cirrhosis begins once liver function declines sufficiently that other body systems are affected.
Progressive scarring within the liver can also cause high blood pressure inside of the organ. This condition, called portal hypertension, is a common condition for people with cirrhosis, explains eMedicineHealth. As it advances, portal hypertension can cause overall fluid retention and intestinal bleeding. This bleeding can progress to the stomach and esophagus, causing enlarged veins and potentially fatal bleeding. This gastrointestinal bleeding often causes someone with cirrhosis to vomit blood.
Many of the complications experienced by patients suffering from decompensated cirrhosis can be life-threatening, notes the University of Maryland Medical Center. Bleeding from enlarged veins in the abdomen presents a serious medical emergency, and encephalopathy due to cirrhosis can eventually progress into coma. People with advanced cirrhosis also have a higher risk of developing liver cancer. While treatments can help slow or stop the progression of cirrhosis, the only way to restore liver function once end-stage liver disease occurs is a liver transplant.
St. John the Baptist is the most ancient church of Alba after St. Lawrence’s Cathedral and Santa Maria del Ponte, which has now disappeared. This church gave its name to the third neighborhood of the city, as is well documented at number CCCXI of the medieval “Rigestum Comunis Albae”. The coeval codex of Alerino dei Rembaudi confirms that San Giovanni Church is the same age as San Lorenzo Cathedral. Its original shape can only be recognized from the few surviving representations (a sixteenth-century drawing on panel, the Theatrum Sabaudiae, some drawings by C. Rovere and some old photographs) and from archive documents.
In 1556 the church and part of the surrounding buildings were given to the Augustinian friars, whose monastery, located beyond the spinning mill and near S. Maria della Consolazione Church, had been destroyed during the wars between France and Spain. In 1700 a magnetic compass was added to the bell tower and provided with a clock that, during the Napoleonic invasion, was transferred to the bell tower of San Damiano Church; the façade was beautified, some artworks were added to the church and the colonnade was erected on its south side. After the Napoleonic invasion the Augustinians abandoned the church, which became a warehouse.
Michele Travaglio became parish priest of the church in 1819, the same year as its restoration, and he tried in various ways (even by petitioning the King) to bring the cure of souls back to San Giovanni and to redecorate the church as best he could, because it was unstable and lacking every kind of furnishing. Various holy furnishings and artworks were recovered from the demolished church of San Francesco.
His successors tried to beautify the church: in 1830 the choir was rebuilt, in 1834 the façade was renovated, and in 1876 the brothers Vittino di Centallo put a new organ on the platform. In 1884 the canon Nicolao Strumia raised the church by 4 meters and built a new façade, along with the panelled ceiling still present today, to a design by the engineer Fantazzini from Turin.
In recent decades the structure of the church and its furnishings have remained unchanged, except for the reconstruction of the presbytery, the renovation of the floor and the heating system, and finally the recent and important restoration works carried out on the church and the rectory by the current priest, Renato Gallo.
Even though only a few parts of the ancient medieval building survive, the ancient artworks hosted there are varied and even precious. Fragments of frescoes are on the wall of the counter-façade, hidden by the platform of the organ. They show some feminine figures kneeling in prayer and two St. Johns (the Evangelist and the Baptist). Stylistically, they recall French painting: in particular, they are similar to some works of the Master of Nicola, active in Savigliano.
The first chapel on the right hosts the baptistery. As the inscription on the wall recalls, its current aspect dates back to 1939, when the Alba architect Giovanni Oreste Dellapiana intervened. Inside there is the marble baptismal font and, on the wall, a sculptural terracotta group showing the baptism of Christ, realized by Virgilio Audagna (1903-1995) from Turin.
In the third chapel an interesting Renaissance panel shows the Virgin Mary with the Child between the Saints Agostino and Lucia; this work was commissioned by the Counts Verri della Bosia, patrons of the altar, from an unknown artist of the beginning of the sixteenth century. For its characteristics, the painting is considered to be influenced by the school of Macrino.
In the chapel once dedicated to Saint Onorato and property of the Society of Silk Workers (which was strictly connected to this church because the market of cocoons and silkworms had place in the square in front of the church) an image of the protector was positioned in 1822: San Giobbe in disgrace, realized by the painter Giuseppe Chiantore (1747-1824) from Savigliano. The painting was restored in 2006.
In the presbytery, rebuilt from 1830 onwards, there’s the high altar, realized in 1894 by Cesare Fantazzini. Over the altar there are five panels painted in tempera, attributed with certainty to the painter Gandolfino da Roreto from Asti (documented between 1493 and 1518). These little paintings may have been part of the predella of a great polyptych signed by Gandolfino, dated 1493 and now kept in the Sabauda Gallery of Turin. This artwork, which was originally located in St. Francis’ Church in Alba, was taken away after the demolition of the building and given (excluding the predella) by the bishop Michele Fea to the royal collections of Turin in 1837.
On the right of the altar, against the wall, there’s a bench made with fragments coming from an ancient wooden choir, originally located in St. Francis’ Church and later dismembered. Thanks to what has been reported by the scholar Giuseppe Vernazza from Alba, who saw it when it was still intact, we know that it was realized in 1429 by Urbanino da Surso (c. 1400-1461/63) from Pavia, commissioned by the parish priest Marco from Sommariva.
Over the bench there’s a beautiful eighteenth-century painting where the Virgin Mary is represented with the child, St. John, God, the Holy Spirit and St. Julius del Taricco.
On the back wall of the church one can observe the frescoed triptych by Paolo Gaidano, a painter from Poirino who realized it in 1887: at the centre the Virgin Mary with the Child between St. Teresa of Avila and St. Francis, with two little angels at the sides.
Under the triptych there’s The Baptism of Christ, a beautiful painting by Giovanni Antonio Molineri (1577-1631), executed during the last years of his life.
In the presbytery, on the left wall, there’s the great painting Madonna del Carmine, originally located in the chapel of the same name. It was long attributed to Guglielmo Caccia, called “Moncalvo”, because of its strong similarity to some of his works on a similar subject; today it is considered the work of an anonymous follower of the artist.
In the chapel that has today become the access to the rooms of the rectory there’s an interesting painting that arrived at St. John’s Church through a donation made in 1992. The painting shows St. John the Baptist in the desert with an angel, and it is attributed to Giuseppe Doneda (or Danedi), called “Montalto” (1609 - ca. 1678-1679).
On the wall in front of the chapel there’s the painting showing Emmaus’ Dinner with the characteristic iconography influenced by Caravaggio. The painting, signed GAM and dated 1629, can be attributed or referred to the already mentioned painter Giovanni Antonio Molineri.
In the first chapel on the left there’s the precious panel showing the Virgin Mary with the Child, signed and dated by Barnaba da Modena in 1377. Barnaba, documented between 1361 and 1383, grew up in the Emilian environment, and his style has archaic and Byzantine traits.
St. John the Baptist’s Church is located at Piazza Pertinace 4, in Alba (Cuneo). The church can be visited every day, except during religious celebrations, from 8:00 to 18:00.
Mother's Day is commonly celebrated on the second Sunday of May. It is a worldwide tradition honoring mothers, motherhood, maternal bonds and the importance of mothers in society, in every part of the world. This celebration is renowned as an important day for every family; it is a significant time of the year when mothers feel important and cherished. Although it is celebrated on different dates, the essence of the day is more important than the date.
Back in 1912, Anna Jarvis trademarked the phrases "second Sunday of May" and "Mother's Day", and also created the Mother's Day International Association. The apostrophe in the term "Mother's Day" is specifically placed between the letters "R" and "S", denoting a singular possessive term: each family honoring its own mother, not all the mothers around the world. In countries like the United States, the Philippines and Brazil, it is celebrated in the month of May, specifically on the second Sunday, while in some Arab countries it is celebrated in the month of March. In Roman Catholic churches, Mother's Day is associated with devotion to the Virgin Mary, the mother of Jesus Christ. During Mother's Day a special prayer service is held in her honor.
Gifts such as cards and flowers are often given to mothers by their family and friends. These simple gifts symbolize respect and love for mothers who have dedicated their lives to their families. Just for one day, mothers around the world are given importance, and where the government permits, this kind of celebration can become a special holiday that allows families to celebrate it outside their homes.
Aside from birthdays, Mother's Day is one of those days when mothers feel special and appreciated. It is a reminder of how important it is for women and mothers to be treated with respect and love, for they have been a big part of the family. In fact, they are the reason why families are established and started. Mothers take the major role in starting a family. They are the fruit bearer and the mediator of the family. That is why mothers should not be taken for granted, for they have contributed a great deal to our society. And for mothers: take this opportunity to feel loved and cherished, for it comes only once a year. Take advantage of it and celebrate it with your family.
White blood cell protein aids melanoma
Unwitting helper: Scientists have pinpointed a molecular mechanism in mice that helps skin cancer cells confound the animal's immune system, according to a study.
The discovery, if duplicated in humans, could one day lead to drug treatments that block this mechanism, and thus the cancer's growth, the study's authors report.
In experiments on mice, researchers showed for the first time that a protein called interferon-gamma (IFN-gamma) plays a key role in the spread of melanoma, a notoriously aggressive form of cancer resistant to standard chemotherapy.
The same kind of ultraviolet radiation that leads to sunburn caused white blood cells to infiltrate the skin of the mice, says Glenn Merlino, a scientist at the US National Cancer Institute and the main architect of the study.
The white blood cells, in turn, "can produce IFN-gamma. We believe that IFN-gamma can promote melanoma in our model system, and perhaps in people," he says.
Injecting the mice with antibodies that block IFN-gamma interrupted this signalling process, effectively reducing the risk of UV-induced skin cancer, the researchers found.
"We are trying to develop inhibitors that are more practical than antibodies, a small molecule, for example," says Merlino.
Ideally, such a treatment would mean that someone exposed to large doses of UV radiation could escape the potentially lethal threat of skin cancer.
"But we would never encourage intense sunbathing, even if such a treatment were available," cautions Merlino.
Cases of cutaneous malignant melanoma are increasing faster than any other type of cancer.
In 2000, more than 200,000 cases of melanoma were diagnosed and there were 65,000 melanoma-associated deaths, according to the World Health Organisation (WHO).
Upending assumptions about interferons
The findings, reported in the journal Nature, could upend assumptions about the relationship between interferon proteins and cancer, say the study's researchers.
Up to now, interferons were thought to impede the formation of cancer tumours. One in particular, interferon-alpha, has been widely used to treat melanoma, both as a first-line drug and as an adjuvant.
Earlier research has raised doubts as to effectiveness of the treatment, which also has serious side effects.
The highest recorded incidence of melanoma is in Australia, where the annual rates for women are 10 times the rate in Europe, and more than 20 times for men.
The main risk factors are high exposure to the Sun and other UV sources such as sun-beds, along with genetic factors.
The disease is far more common among people with a pale complexion, blue eyes, and red or fair hair.
Objective: This overview of ultraviolet (UV) phototoxicity considers the interaction of UVA and short-wavelength VIS light with the retina and retinal pigment epithelium.
Methods: The damage mechanisms underlying UV retinal phototoxicity are illustrated with a literature survey and presentation of experimental results.
Results: Depending on the wavelength and exposure duration, light interacts with tissue by three general mechanisms: thermal, mechanical, or photochemical. Although the anterior structures of the eye absorb much of the UV component of the optical radiation spectrum, a portion of the UVA band (315-400 nm) penetrates into the retina. Natural sources, such as the sun, emit energetic UV photons in relatively long durations, which typically do not result in energy confinement in the retina, and thus do not produce thermal or mechanical damage but are capable of inducing photochemical damage. Photochemical damage in the retina proceeds through Type 1 (direct reactions involving proton or electron transfers) and Type 2 (reactions involving reactive oxygen species) mechanisms. Commonly used drugs, such as certain antibiotics, nonsteroidal anti-inflammatory drugs, psychotherapeutic agents, and even herbal medicines, may act as photosensitizers that promote retinal UV damage, if they are excited by UVA or visible light and have sufficient retinal penetration.
Conclusions: Although the anterior portion of the eye is the most susceptible to UV damage, the retina is at risk to the longer UV wavelengths that propagate through the ocular media. Some phototoxicity may be counteracted or reduced by dietary intake of antioxidants and protective phytonutrients. | <urn:uuid:bf4652a8-78e8-4649-811d-4f5432c040a4> | CC-MAIN-2016-44 | http://journals.lww.com/claojournal/Abstract/2011/07000/Ultraviolet_Phototoxicity_to_the_Retina.6.aspx | s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719465.22/warc/CC-MAIN-20161020183839-00254-ip-10-171-6-4.ec2.internal.warc.gz | en | 0.897518 | 328 | 3.28125 | 3 |
Scientists find humans genetically meant to be vegetarians; meat linked to kidney stones
31 August 2006

Scientists studying kidney-stone diseases have stumbled across evidence that humans may be genetically more suited to vegetarianism than meat eating.
The discovery was made when the placement of an enzyme known as AGT, which is linked to the rare kidney-stone disease PH1, was found in one area of the liver in herbivores and another in carnivores, Professor Chris Danpure, of University College London, said yesterday.
Evolutionary science indicated that about 10 million years ago the distribution of the enzyme in human ancestors appeared to change from favouring an omnivorous diet to plant eating.
Humans began eating meat only in the past 100,000 years, a habit which has increased dramatically in recent times.
"It would appear that the diet we have now is incompatible with the distribution of this enzyme, which was designed for a herbivore diet, not meat eating," he said.
The human placement of the enzyme was the same as in rabbits, sheep and horses.
"One of the consequences of this could be the high frequency of kidney stones in humans, especially in western societies."
Danpure, who is a guest speaker at the annual Queenstown Molecular Biology Meeting this week, said if the link was proven it had potential for identifying people susceptible to kidney-stone diseases.
More than 300 leading scientists from New Zealand and overseas are attending the conference this week, which was opened by Research, Science and Technology Minister Steve Maharey on Tuesday night.
Molecular biology was helping transform the New Zealand economy and a recent survey found that biotechnology income to New Zealand companies in 2005 was $855 million, he told delegates.
Chris Holbein | Vegan Special Projects Coordinator
President's seal of office
Seals were historically used to authenticate the origins of documents. Wax to seal the document was seared with the unique symbol of the government, university, or other office from which it came.
The seal that will be featured in the inaugural ceremony is similar to the one used by Fr. John Nobili, S.J., the founder and first president of the university.
In the center is a shield containing the signet of the Jesuit order: the first three letters of the Greek spelling of the word Jesus (IHS), over a cross, at the base of which are the three nails of the crucifixion.
An American eagle with fully extended wings perches atop the shield, holding the olive branch of peace in its talons. The almost simultaneous dates of the formation of the United States of America and the founding of the Mission Santa Clara are symbolized in the 13 stars above the eagle. Surrounding this are the words Universitas Sanctae Clarae in California. | <urn:uuid:4b2e86a6-7ddb-49a8-a305-554d0d37da76> | CC-MAIN-2015-35 | http://www.scu.edu/president/history/inaguration/seal.cfm | s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645330174.85/warc/CC-MAIN-20150827031530-00338-ip-10-171-96-226.ec2.internal.warc.gz | en | 0.926281 | 225 | 2.953125 | 3 |
The county of Los Angeles may not like this distinction, but environmental engineer John Novak says the sludge from this area of California has the worst odor of any he has ever tested. A walk inside his laboratory, sealed-off from other testing facilities on the Virginia Tech campus, produces instant agreement.
"This county can haul its sludge hundreds of miles into the desert, and it still gets complaints," Novak smiles.
On the East Coast, a $400 million sludge handling system, slated to be built along the Potomac River by the Washington D.C. Water and Sewer Authority by 2010, may not be able to completely thwart the odor problems if it uses current technology.
Novak, the Nick Prillaman Professor of Civil and Environmental Engineering, is working with both localities, as well as others, to identify better processes for the destruction of organic solids and the elimination of disease-causing organisms in biosolids.
Any time a treatment plant works with water or wastewater, sludge is generated. And twice a week, Novak's lab receives two shipments of the processed solids from the sewage. Novak laughingly admits that if "a package stinks, it belongs to me."
"Biosolids management is one of the most important aspects of wastewater treatment because of economic and health and safety issues," Novak says. "The cost of biosolid treatment and hauling is a major expenditure for wastewater treatment utilities. Pathogens and odor problems may restrict the biosolid disposal options and affect hauling costs."
Biosolids applied to land in the form of fertilizer can also impact ground water quality, primarily through nitrogen contamination.
Novak's approach to reduce the volatility of waste and to remove nitrogen from the process differs from some of the previously tried techniques. His work is based in part on some successful treatments of wastewater where a sequential anaerobic and aerobic digestion, called a dual-digestion process, is used.
"Recent studies suggest that some solids in sludge are degraded only during the anaerobic digestion and some only during the aerobic digestion treatments," Novak explains. "Therefore, a dual digestion, using both anaerobic and aerobic treatments would be expected to provide a reduction in the volatile solids beyond that achieved when using only one of the processes."
His initial studies indicate that his theory is correct. The dual treatment achieved up to a 65 percent reduction in volatile solids, compared with 46 and 52 percent for the single digestion processes alone. His studies also showed that more than 50 percent of the nitrogen and 80 percent of the ammonia can be removed from anaerobic effluent after digesting it aerobically.
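The reduction figures quoted above can be checked with simple arithmetic. The sketch below uses the plain mass-based definition of volatile solids (VS) reduction, (mass in minus mass out) divided by mass in; the tonnage values are made-up illustrative numbers chosen to reproduce the quoted percentages, not data from Novak's study.

```python
def vs_reduction(vs_in_kg, vs_out_kg):
    """Fraction of volatile solids destroyed during digestion,
    by the simple mass-based definition: (in - out) / in."""
    if vs_in_kg <= 0:
        raise ValueError("influent VS mass must be positive")
    return (vs_in_kg - vs_out_kg) / vs_in_kg

# Hypothetical daily VS masses (kg), picked so the single-stage
# digesters match the 46% and 52% figures quoted in the article.
vs_in = 1000.0
anaerobic_only = vs_reduction(vs_in, 540.0)   # single-stage anaerobic
aerobic_only = vs_reduction(vs_in, 480.0)     # single-stage aerobic
dual = vs_reduction(vs_in, 350.0)             # sequential dual digestion

print(f"anaerobic only: {anaerobic_only:.0%}")
print(f"aerobic only:   {aerobic_only:.0%}")
print(f"dual digestion: {dual:.0%}")
```

Note that practitioners often use the Van Kleeck equation instead of this raw mass balance when influent and effluent flows are hard to meter; the mass-based form is used here only because it is the most transparent way to illustrate the comparison.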
He reported his findings at the 2006 Residuals and Biosolids Management Conference in Cincinnati, Ohio.
Novak has also investigated the role that two specific metals, iron and aluminum, play in odor coming from sludge treated anaerobically. Working with researchers from Carollo Engineers and CH2M-Hill, they used a centrifuge simulation method developed at Virginia Tech to anaerobically digest a blend of primary and waste activated sludge from 12 different wastewater treatment plants.
Their findings indicated that aluminum reduced the odor potential for sludges that were high in iron.
The Water Environment Research Foundation has supported Novak's research on odors from sludges since 2000. As he has conducted these studies, the 35-year veteran of water, sludge, solid, and hazardous-waste treatment has learned that some new technologies are partially responsible for an increase in odors.
"In recent years, companies started selling sludge dewatering systems that consist of new centrifuges that reduce the amount of water in the process, thus reducing costs," Novak says. However, the odor increases. A $600,000 facility in Charlotte, N.C., that uses the more recently developed centrifuge technology is one example of a new plant drawing complaints about its foul aroma.
"The production of odors from sludges is a complex biochemical process," Novak says. "Odors, primarily from organic sulphur compounds, can be produced from anaerobically digested dewatered sludge cakes, especially when high solids centrifuges are used for dewatering. Even when digestion is effective, centrifugation can generate headspace concentrations of total volatile organic sulphur that are quite high and likely to cause odor problems."
If odors remain a problem, the dewatering process may need to be changed, Novak asserts.
The College of Engineering at Virginia Tech is internationally recognized for its excellence in 14 engineering disciplines and computer science. The college's 5,500 undergraduates benefit from an innovative curriculum that provides a "hands-on, minds-on" approach to engineering education, complementing classroom instruction with two unique design-and-build facilities and a strong Cooperative Education Program. With more than 50 research centers and numerous laboratories, the college offers its 1,800 graduate students opportunities in advanced fields of study such as biomedical engineering, state-of-the-art microelectronics, and nanotechnology. Virginia Tech, the most comprehensive university in Virginia, is dedicated to quality, innovation, and results to the commonwealth, the nation, and the world.
What is efflorescence?
Concrete products and mortar consist of sand, gravel, cement and water. The cement is produced by burning together, among other things, alumina and lime. Because these are natural raw materials, their composition varies depending on their origin.
Although quite rare, efflorescence is a naturally occurring phenomenon caused by rainwater, condensation or dew penetrating into the pores of concrete. This carries calcium hydroxide, otherwise known as lime, to the outer surface. The water then evaporates, leaving a white film or bloom.
When does efflorescence occur?
Lime naturally occurs in the cement which is used in the manufacture of all concrete products including roof tiles. Since the lime content of the concrete can vary and the weather conditions obviously differ, the level of efflorescence can also fluctuate considerably.
In the case of mortar bedding at ridges and hips, rainwater may wash the lime down the roof and deposit it on the roof tiles. Once deposited on the surface of the roof tile, the calcium hydroxide reacts with carbon dioxide in the air and becomes an insoluble calcium carbonate.
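The deposition step described above is the standard carbonation of lime, written here as a chemical equation in LaTeX:

$$\mathrm{Ca(OH)_2\;(soluble)} + \mathrm{CO_2} \longrightarrow \mathrm{CaCO_3\;(insoluble)} + \mathrm{H_2O}$$

Because the calcium carbonate is insoluble, the deposit remains on the tile surface until weathering gradually degrades it, which is why the bloom does not simply rinse away in the next shower.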
Does efflorescence disappear naturally?
The process soon ceases as the outer pores of the mortar are closed by the insoluble carbonate. The same chemical process which brings the lime to the surface of a tile carries on, enabling it to be degraded and washed away by the rain, so that eventually the efflorescence disappears by itself - usually in a matter of months. Once the lime has disappeared from the surface of the tiles it rarely re-occurs. Rainwater is slightly acidic, therefore long term weathering will eventually remove the efflorescence, but it is difficult to predict how long this will take. If it is considered necessary to remove the efflorescence without waiting for natural weathering to take its course, then a suitable proprietary hydrochloric acid solution can be applied to the affected area.
Sandtoft treat the surface of all their concrete tiles with acrylic polymer coatings to not only minimise the formation of efflorescence, but to give stronger and longer lasting colours. If efflorescence does appear, it has no detrimental effect on the long-term performance of the tile.
What is lime blow?
Occasionally clay tiles may show small white-centred chips on their surface. These occur when pockets of lime immediately below the surface expand, causing the surface above the lime to be pushed up or 'blown'. This expansion takes place as soon as the tiles leave the kiln, as the tiles begin to absorb moisture from the atmosphere. Lime occurs naturally in most clays, and expansion can usually be prevented by the manufacturer submerging the tiles in water.
How can you prevent lime blow?
Manufacturers make every effort to prevent lime blows, although occasionally the process can still occur before the tiles have been fully soaked. The expansion action of the lime only occurs immediately after the tiles leave the kiln. The process stops once the tiles have absorbed moisture and cannot re-occur. Therefore there is no risk of further 'pitting' to the tile surface after the tiles have been laid on the roof.
It is a common misconception that clay products can be attacked by frost action due to irregularities within the surface finish but there is no possibility that the small pits, or 'lime blows', will affect the future durability of the tiles. | <urn:uuid:58ee1ee3-1657-4c09-95a3-2db3449d1229> | CC-MAIN-2023-50 | https://www.wienerberger.co.uk/tips-and-advice/roofing/what-are-the-different-types-of-roof-tile-surface-finishes.html | s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100264.9/warc/CC-MAIN-20231201021234-20231201051234-00652.warc.gz | en | 0.937603 | 702 | 3.6875 | 4 |
Rehoboam chose to listen and he did well. He chose the group to whom he would listen and chose wrongly. The choices he made led to an outcome for which he had no choice!
When all Israel saw that the king refused to listen to them, they answered the king: “What share do we have in David, what part in Jesse’s son? To your tents, Israel! Look after your own house, David!” So all the Israelites went home. But as for the Israelites who were living in the towns of Judah, Rehoboam still ruled over them. King Rehoboam sent out Adoniram, who was in charge of forced labor, but the Israelites stoned him to death. King Rehoboam, however, managed to get into his chariot and escape to Jerusalem. So Israel has been in rebellion against the house of David to this day (2 Chronicles 10:1-19).
Rehoboam’s choices brought loss to him as a leader. No leader would deliberately choose to lose. But leaders who choose to not listen are ultimately choosing to lose. Leaders choose to lose when they fail to listen. Servant leaders learn to listen well as they observe what Rehoboam lost.
Leaders lose influence when they don’t listen.
Rehoboam’s influence dissolved immediately as thousands of people “went home.” Yes, he was still on the throne, but he lost his ability to influence, which is the essence of true leadership. The people talked of David’s house (Rehoboam’s blood line) as something they no longer had a part in. Rehoboam was still the king, but he was no longer their leader!
Servant leaders recognize that leadership is influence. If they lose the ability to influence others, their leadership is finished. Servant leaders listen well so they can continue to influence well.
Leaders lose authority when they don’t listen.
People turned away from Rehoboam’s leadership when they saw he wasn’t listening. Like many leaders who sense they are losing power, he tried to prove that he still had authority by sending Adoniram to force people to work. In the process he lost Adoniram and had to run for his own life! It was a visible demonstration of the authority he lost. Tough leaders, even dictators, can rule with power for some time. But when the people they are leading have enough, the people will become unmanageable.
Servant leaders don’t seek authority, but they earn it as they listen well.
Leaders lose people when they don’t listen.
Rehoboam’s refusal to listen caused him to lose most of the nation. He lost leaders, he lost priests, he lost farmers, he lost mothers and fathers. He lost people who could have been a part of his call to build the nation. But more than having the size of his kingdom significantly reduced, he lost all the potential good that could have come from a united kingdom.
God’s purposes for the nation were greatly hindered by Rehoboam’s actions. The nation would never recover. Never again would there be the worship that characterized David’s reign or the splendor of Solomon’s. The kings of the world would no longer come to learn from God’s people. That’s a high price to pay for refusing to listen!
Servant leaders seek to leverage maximum impact for the Kingdom of God with the largest number of people they can influence. They know that as they lead, they will lose some followers. But servant leaders listen well so they don’t lose any that should be on their team.
Until next time, yours on the journey,
For further reflection and discussion:
- In my leadership journey, was there a time that I did not listen well? What did I lose? What have I learned from that and what am I currently doing differently?
- As a leader, who have I lost along the way? Did their leaving have anything to do with me failing to listen? What might I have done differently?
- Are there people currently following me as a leader, but they have lost some confidence in my leadership because I have not listened well? What can I do to correct that?
Copyright, Global Disciples 2019. | <urn:uuid:4ef8c36b-5d0f-4463-b721-068c818fb7d4> | CC-MAIN-2021-31 | https://leadersserve.com/2019/10/30/learning-from-rehoboam-leaders-lose-when-they-dont-listen/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152129.33/warc/CC-MAIN-20210726120442-20210726150442-00433.warc.gz | en | 0.983209 | 923 | 2.515625 | 3 |