Dataset fields:
text: string (205 to 677k characters)
id: string (47 characters)
dump: string (1 class)
url: string (15 to 2.02k characters)
file_path: string (125 to 126 characters)
language: string (1 class)
language_score: float64 (0.65 to 1)
token_count: int64 (47 to 152k)
score: float64 (2.52 to 5.16)
int_score: int64 (3 to 5)
March 31, 2017 – You can’t blame Diane Kodama, manager of the Ellicott Slough National Wildlife Refuge, south of San Francisco, for having a strong connection to Santa Cruz long-toed salamanders, given how much effort she and her colleagues have put into helping the endangered amphibians survive the historic drought that plagued California the last several years. When they undertook a delicate, first-of-its-kind captive rearing of the salamanders two years ago, Kodama and her team didn’t know whether the young creatures would even survive the first few days, much less after they released them into the wild. So it was a tremendous relief when mature salamanders were spotted in the area last November. But in a surprising twist, at least some of them appeared to have returned not to Buena Vista Pond, where they historically breed, but to the site just up the hill where they were reared in plastic tubs almost two years before. Through a process biologists don’t yet understand—magnetic geo-positioning, navigation by stars or perhaps tracking of site odors—some of the salamanders seem to have been imprinted with the location of the rearing tubs as they grew from finned larvae to legged metamorphs. “We are fairly certain the returning ones are the rescue-reared salamanders, given their age and unprecedented numbers,” Kodama says. The mature salamanders’ appearance at the captive breeding site was a wrinkle in the normal lifecycle, necessitating relocation to their original breeding pond. Typically, the larvae hatch from eggs in marshy ponds, sprout legs and migrate upland to underground burrow habitat, then return to the same ponds as mature adults to court and lay eggs on stalks of submerged vegetation. The drought that began in 2011 shrunk Buena Vista Pond and threatened to wipe out the salamanders that breed there. So in 2014, Kodama, refuge biologist Chris Caris and their partners at the U.S. Fish and Wildlife Service’s Ventura office, California Department of Fish and Wildlife and Santa Cruz County’s Resource Conservation District made the decision to rehabilitate the pond, lining it with clay so that it would retain more rainwater. The relined pond made a positive difference during the salamanders’ winter breeding season of 2014-15, capturing more of the minimal winter rainfall. Unfortunately, drought conditions persisted and something else had to be done. The team determined that it wouldn’t be feasible to artificially augment the water in the pond, given the pond’s location and the sheer quantity of water needed. Instead, they opted to begin the tricky task of rescue-rearing—a technique that had never been used for the endangered sub-species. After a good deal of research—including consultations with fresh water aquarium experts and biologists experienced in captive rearing of other amphibian species—they made the decision to go ahead. Still, Kodama says, “We were going in with a lot of unknowns.” The idea was to capture larvae in the breeding pond through dip-netting and transfer them via small buckets to pond-like habitat created in 100-gallon tubs. It was a necessarily slow procedure to introduce the larvae to the water of their temporary homes, involving several hours of acclimatization before the buckets were submerged in the tubs and the larvae swam out. Over the next several weeks, staff closely attended to the larvae, making sure frozen blood worms were eaten, testing water quality and monitoring for metamorphosis. 
In anticipation of legs sprouting, they added floating platforms to the tanks and gradually drew the water levels down. When the first legged metamorphs arrived, the biologists transferred them to tubs with moist soil, wood cover objects and live food, to train for salamander life in the surrounding oak woodlands. For Caris, the operation was a flashback to high school. “I was an aquarium nerd back then. But I never expected I’d be using that experience in my work at the refuge.” He was excited about the rescue-rearing, but only cautiously hopeful. “They were going to die if we didn’t do anything—that much we knew. We thought it would be worth it if we could save a dozen.” Kodama describes her emotions during the rescue-rearing: “Every triumph you have—‘They’re eating worms!’ ‘They’re taking to the water!’ ‘They’re growing!’—you take pride in it.” As it turned out, more than 300 young salamanders were successfully reared and released into the wild. The local ecosystem stood to benefit from the successful rearing as well. The Santa Cruz long-toed salamanders exist nowhere on earth except a small range in neighboring Santa Cruz and Monterey counties. Among other critical roles they play in their native habitat is the consumption of mosquito larvae. As with other amphibians, their migration between water and land also makes them a good indicator of the health of the entire watershed in the area. “Salamanders aren’t flashy,” Kodama says, “but they are critical to the ecosystem. If they’re doing well, the watershed is likely to be doing well. It can be frustrating when you see the threats from humans—the development and contamination, especially from pesticides.” Caris echoes the point: “The health of the salamander is an indication of how much humans have altered the environment. I don’t believe we have the right to be responsible for the extinction of a species.” The return of the rescue-reared salamanders shows that, sometimes, we can make a positive intervention, as well.
<urn:uuid:250fa04f-b3bf-4188-8dc7-f92291029a04>
CC-MAIN-2023-50
https://yubanet.com/california/return-of-the-rescue-reared-salamanders/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100135.11/warc/CC-MAIN-20231129173017-20231129203017-00000.warc.gz
en
0.963713
1,235
3.3125
3
Create Digital Children's Books They say, if you can't figure out how to do something with your computer, ask a teenager. While computers intimidate adults, they seem to pique the curiosity of children. This computer curiosity starts at a very young age. Some Web entrepreneurs have learned how to profit from this young audience - by creating children's ebooks. If you can tell stories that are interesting to children, you can tap this market too. An important component of a children's ebook is the pictures. If the ebook is written for preschoolers, the artwork should be built from basic shapes with bold colors. If written for older children, the pictures don't have to be of professional artist quality, but they must be interesting. Children's ebooks should be designed to teach reading. Woven within the story should be additional learning about history, geography, mathematics or science. Most importantly, your story should teach a moral lesson. This will please the parents, who are, after all, the ones who will pay for the ebooks. How to start a children's ebook business: Create a Web site that is attractive to, and usable by, children. Give free access to the beginning of each story. This gets the child involved and curious to learn what happens in the rest of the story. Curious enough, we hope, to convince mom or dad to pay a small fee to download the entire story in an ebook. Each ebook should contain the beginning of another story and a link to the download page for that ebook. Include extra free goodies with each ebook, for example, cartoons, games, puzzles, coloring pages, and patterns for making crafts or toys. This will make the child and the parent anticipate what kind of goodies might come with the next ebook purchase. Place ads for child-related products on your Web site and in your ebooks. Select toys that are interesting to children, and have an educational element that justifies purchase by mom or dad. There is a wide variety of ebook formats and ebook compilers to choose from. I recommend Help Workshop, a free download from Microsoft's Web site. With this application you can develop your ebook as regular web pages, and then compile them into a standard Windows help file. You can have static pictures in your ebook or visit the Design section of this Web site to learn how to use JavaScript to easily add animation, sound, and music to your ebooks (and your Web site). You can sell ebooks individually for the low price of $4 or $5 each. This is a digital download product, so you have no manufacturing, delivery, or overhead cost. After you have created eight or ten ebooks, you can sell a subscription that lets the child download any ebooks they want for one year. The subscription fee should represent a cost savings over purchasing the ebooks individually. You can accept payments online through PayPal. Place a link to PayPal on your Web site. The customer logs in (or signs up for a free account) at PayPal's Web site. Using your email address, they deposit money in your PayPal account. PayPal sends you an email notification of the payment, along with the customer's email address. You email the customer a password to download the ebook. For examples of successful children's ebook Web businesses, visit the links below:
<urn:uuid:70dbee09-e260-44c0-adcf-1b80a4e7b221>
CC-MAIN-2023-50
http://bucarotechelp.com/getpaid/webizspc/98002640.asp
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.911934
716
2.5625
3
Set pointer to thread id #include <linux/unistd.h> long set_tid_address(int *tidptr); For each thread, the kernel maintains two attributes (addresses) called set_child_tid and clear_child_tid. These two attributes contain the value NULL by default. If a thread is started using clone(2) with the CLONE_CHILD_SETTID flag, set_child_tid is set to the value passed in the ctid argument of that system call. When set_child_tid is set, the very first thing the new thread does is to write its thread ID at this address. If a thread is started using clone(2) with the CLONE_CHILD_CLEARTID flag, clear_child_tid is set to the value passed in the ctid argument of that system call. The system call set_tid_address() sets the clear_child_tid value for the calling thread to tidptr. When a thread whose clear_child_tid is not NULL terminates, then, if the thread is sharing memory with other threads, then 0 is written at the address specified in clear_child_tid and the kernel performs the following operation: futex(clear_child_tid, FUTEX_WAKE, 1, NULL, NULL, 0); The effect of this operation is to wake a single thread that is performing a futex wait on the memory location. Errors from the futex wake operation are ignored. set_tid_address() always returns the caller's thread ID. set_tid_address() always succeeds. This call is present since Linux 2.5.48. Details as given here are valid since Linux 2.5.49. This system call is Linux-specific. This page is part of release 3.74 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at http://www.kernel.org/doc/man-pages/.
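A minimal usage sketch (not part of the man page itself): since glibc does not provide a wrapper for set_tid_address(), the call below goes through syscall(2); the value returned should match the calling thread's ID as reported by gettid().
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Address handed to the kernel as the calling thread's clear_child_tid. */
static int tid_slot;

int main(void)
{
    /* set_tid_address() always succeeds and returns the caller's thread ID. */
    long tid = syscall(SYS_set_tid_address, &tid_slot);

    printf("set_tid_address() returned %ld, gettid() reports %ld\n",
           tid, (long) syscall(SYS_gettid));
    return 0;
}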
<urn:uuid:dc8d719b-fd64-4ea7-be3a-955dd7cb36f5>
CC-MAIN-2023-50
http://carta.tech/man-pages/man2/set_tid_address.2.html
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.794357
452
3.3125
3
Collection of the objects from The Babylonian Collection of the Department of Near Eastern Civilizations of Yale University. The Babylonian Collection is an independent branch of Yale University. It originated from a gift of J. Pierpont Morgan in 1909. It includes the largest assemblage of cuneiform inscriptions in the United States and is one of the five largest in the world as regards documents, seals, and other artifacts from ancient Mesopotamia. In addition, the Yale Babylonian Collection houses a small selection of South Arabian antiquities. This group of objects includes 11 inscribed items, mainly bases of stelae and statues, five alabaster heads, and three other small articles, i.e. one bull’s head, one small statue fragment and one small alabaster jar lid. The items were purchased for Yale from Mr. C. Tutenberg by Prof. C. C. Torrey in 1926, with a subvention from Simeon Baldwin, former governor of the State of Connecticut. On the basis of the object typology and of the onomastics, Prof. F. Renfroe maintained that the items are Qatabanian and that their original provenance was Timnaʿ. The Yale South Arabian collection was photographed in June 2012 by a team of the DASI project including Prof. Alessandra Avanzini, Alessia Prioletta (epigraphist) and Gianluca Buonomini (photographer). Ulla Kasten, YBC associate curator, provided organizational support. Photos are courtesy of the Yale Babylonian Collection.
<urn:uuid:429324a0-f993-4a90-ab3c-4ac83d97d965>
CC-MAIN-2023-50
http://dasi.cnr.it/index.php?id=135&prjId=1&corId=0&colId=11&navId=721553109&rl=yes
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.922166
379
2.53125
3
Art therapy students present their first artwork exhibition! - Category: News - Date: July 8, 2014 During the art therapy course children learnt to listen to their friends, confidently express their own opinions, follow group rules and express their emotional memory through colours. The course also worked on boosting their self-confidence and independence, and gradually the students became more responsible for their actions, as well as more sensitive and helpful to those around them. So what did our children and parents think? All students enjoyed learning about and practicing calming relaxation and meditation exercises which they can now use before starting homework. Older children agreed that the classes and topics covered had been interesting and they particularly liked the fact that the artworks portraying their negative emotions could be ripped up or torn at the end. Parents of younger children were happy to see that their children had acquired skills which would assist them during their first steps at mainstream school, while mothers of children aged 6 upwards were impressed by their children’s deepened insight and ability to analyse their own works as well as their friends’. So, what’s next? Children and parents are looking forward to starting the next stage of the programme, called “Self-awareness”, which will start in September 2014 and will focus on emotional intellectual development. Parents agree that it is essential for junior children (who will soon attend senior school) to be able to cope with their emotions, as they will soon have to make their own decisions. So the objective of the course’s second stage is to foster the development of a balanced personality and to educate the children about emotional skills and qualities including: - Self-perception (an understanding and awareness of one’s own emotions and feelings) - Self-control (management of one’s emotions, states and internal resources as well as how to release negative emotions in a socially acceptable manner) - Communication (social sensitivity) - Self-confidence (genuine self-esteem and responsibility including responsible decision making). Please see Our Classes page for more information about our art therapy classes for children of all ages. For further information and to register call 07900106971 or email [email protected] Art therapy is the primary method of the tested and socially responsible personality development programme entitled “I am a creator”, compiled by Marija Mendele-Leliugiene. The programme has been successfully implemented in Lithuania since 2001 in various ‘Raphael’ social and business initiatives, working with people of all ages and professions.
<urn:uuid:4d2d925c-ee6c-4e5c-a155-239bdefac784>
CC-MAIN-2023-50
http://ltlt.co.uk/2014/07/art-therapy-students-present-first-exhibition/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.98009
522
2.609375
3
Livingston County, MI Bat Control Company Bat Exclusion Process The bat exclusion process is a humane and effective method of removing bats from a structure while preventing their re-entry. It is crucial to note that bats are important for ecosystems as they contribute to insect control, so exclusion methods focus on providing an alternative habitat for them rather than causing harm. Here are the general steps involved in the bat exclusion process: Inspection: A thorough inspection of the building is conducted to identify entry points, roosting sites, and the extent of the bat colony. This often involves checking for gaps, cracks, or other openings in the structure. Sealing Entry Points: Identified entry points are sealed off with exclusion devices such as one-way doors or tubes. These devices allow bats to leave the roost but prevent them from re-entering. It’s crucial to ensure that all potential entry points are addressed. Monitoring and Removal of Exclusion Devices (Live Bat Exclusion): The exclusion devices are monitored to ensure that all bats have left the building. Once it is confirmed that the colony has vacated, the exclusion devices are removed, and remaining entry points are permanently sealed. Bat Guano Cleanup and Sanitation: Guano (bat droppings) and urine can accumulate in attics, and cleanup may be necessary to eliminate odors and potential health risks associated with guano. This step may involve removing contaminated insulation and disinfecting the area. Common Bat Removal Questions Getting rid of bats should be approached with care and in adherence to local regulations, as many species are protected due to their ecological importance. If bats have become a nuisance in or around your home, consider the following questions. To deter bats from roosting in or around your property, consider the following methods. It’s important to note that these methods are generally designed to make the area less appealing to bats rather than causing harm: use lights, seal entry points, install bat houses, netting, odor repellents, keep the area clean, and most importantly, arrange a professional exclusion. Bats may be attracted to your house for several reasons, and understanding these factors can help you implement appropriate measures to discourage their presence. Here are common reasons bats might be attracted to a residential area: roosting sites, insect attraction, water sources, vegetation and landscaping, lighting, warmth, and structural gaps and openings. While some people use mothballs as a repellent for bats, it’s important to note that the effectiveness of mothballs in deterring bats is questionable, and their use for this purpose is not recommended. Mothballs contain naphthalene or paradichlorobenzene, and the strong odor emitted by these chemicals is intended to repel moths and other fabric-damaging insects. The presence of one bat in your house does not necessarily mean there are more, but it’s important to investigate the situation thoroughly. Bats are generally social animals and often live in colonies, so finding one bat may indicate that others are nearby. If bats are found inside your house, it can indicate a few different possibilities, and understanding the situation is important for taking appropriate action. Here are some potential meanings: accidental entry, roosting nearby, mating or breeding activity, or structural issues.
<urn:uuid:1ebb4004-2861-431c-9ffe-ee5d6dac4cbc>
CC-MAIN-2023-50
http://mibatcontrol-livingston.com/wildlife-control-services/bat-removal/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.936692
676
3.09375
3
“Happiness is when what you think, what you say, and what you do are in harmony. The frequency of vibration determines the mass of a particle. The greater the vibration, the greater the mass. Likewise, the electric charge and the weak and strong interactions of a particle are determined by the way the string vibrates. The strings are basically the same, but they vibrate differently. For example, the proton is nothing more than a trio of vibrating strings, each of which corresponds to a quark. When we listen to Chopin or Beethoven, each piece of music is a shared vibration that carries us away. Just as the three quark strings produce proton music. When captured by measuring instruments, like particle accelerators, and bubble chambers, the proton music is translated into mass, positive electric charge, and spin. The atom, which is a combination of protons, electrons, and neutrons, has many more instruments in its orchestra to create its music. For further information about Planck E PressCenter, please contact us. The mission of the Planck E PressCenter is to promote ideas, products and theories that have not yet reached the mainstream, as captured in our first release Eccentrics and their Ingenious Solutions. We encourage you to submit your ingenious solution, article, press release or "out of the mainstream" technical idea for publication on the Planck E PressCenter. Please send us an e-mail to [email protected] and enquire how. To learn more about holistic engineering, solutions inspired by nature, monetization of diseconomies, training courses or the incorporation of Being Data to your day-to-day, please follow us on the social networks.
<urn:uuid:1223fd2f-9b06-40e4-8824-80e3e0fa6cc9>
CC-MAIN-2023-50
http://presscenter.planck-e.com/article/Superstrings_From_Music_to_Quantum_Mechanics/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.91823
364
3.0625
3
As Hurricane Hilary barrels towards the Southwest, residents are bracing themselves for what experts are predicting to be a catastrophic and life-threatening flood event. The rare occurrence of Hurricane Hilary transforming into a tropical storm is set to commence on Saturday and extend through the coming week. Despite its recent downgrade from a menacing Category 4 to a Category 3 hurricane on Saturday, the National Hurricane Center reports that Hurricane Hilary still maintains its strength as a significant hurricane with maximum sustained winds clocking in at 125 miles per hour. The storm’s pace has quickened, now surging at 16 mph, and it is currently situated 235 miles west of the southern tip of the Baja California Peninsula. It is projected that as Hurricane Hilary moves north-northwestward, it will gradually weaken due to the influence of cooler waters on its trajectory towards Southern California. Recent updates indicate that the storm’s acceleration will hasten its impact on the United States. The Southwest can anticipate the onset of heavy rain from the storm as early as Saturday, a precursor to the core’s stronger winds, which are expected to arrive as early as Sunday morning. These winds will bring with them a surge of intense and perilous rainfall, according to statements from the National Hurricane Center. The National Weather Service in San Diego elucidated, “Hilary’s accelerated pace and its slight eastern shift in trajectory indicate that the most impactful time frame will be from Sunday morning through Sunday evening.” In response to the imminent threat, California has issued its first-ever tropical storm warning that spans from the state’s southern border to just north of Los Angeles. The forecast for the Southwest region paints a picture of continuous heavy rainfall until early next week. The most severe conditions are expected to manifest from Sunday into Monday, as Hurricane Hilary marches forward. The excessive rainfall could potentially deliver more precipitation in certain areas than the annual average, engulfing parts of California, Nevada, and Arizona. Specifically, sections of Southern California and Nevada are projected to receive between 3 to 6 inches of rain, with localized areas possibly accumulating up to 10 inches, as outlined by the National Hurricane Center. Elsewhere, rainfall amounts ranging from 1 to 3 inches are foreseen. While the core of Hurricane Hilary remains a potent threat, the National Hurricane Center has underscored that the impact of heavy winds and rain will begin well in advance of the storm’s center. With road conditions poised to worsen, potential power outages, and flood hazards looming, regional authorities have initiated preparations across the board. Governor Joe Lombardo of Nevada has deployed 100 state National Guard troops to the southern part of the state, a region susceptible to significant flooding. Anticipating the situation, President Joe Biden announced the proactive positioning of Federal Emergency Management Agency (FEMA) personnel and resources in Southern California and surrounding areas, should the need for their intervention arise. Southern California is rallying its resources and reinforcing its defenses. If Hurricane Hilary makes landfall as a tropical storm, it will be the first such incident in the state in nearly 84 years, according to data from the National Oceanic and Atmospheric Administration (NOAA). 
Parts of Southern California are bracing for the highest level of risk for excessive rainfall, an unprecedented Level 4 threat. Although such high-risk conditions have been rare, responsible for a small fraction of days per year, they have accounted for the majority of flood-related damage and a significant number of flood-related deaths over the last decade. As a response to this impending threat, California has marshaled water rescue teams, California National Guard personnel, and flood-fighting equipment in preparation for Hurricane Hilary’s arrival. In addition, constant highway maintenance will be in effect to enhance road safety. Southern California Edison, a major electricity utility serving over 15 million people in the region, has warned of potential impacts from Hurricane Hilary. The utility is gearing up to address power outages while urging residents to gather emergency supplies like flashlights, portable chargers, and coolers. Recognizing the vulnerability of the homeless community, officials in Los Angeles and San Diego are conducting outreach efforts and offering temporary shelter. Los Angeles County Sheriff’s Department is even mapping out encampments at risk and providing aerial advisories to inform the affected population. Los Angeles County Sheriff Robert Luna expressed, “Our hope is for minimal damage and, most importantly, no loss of life. However, we are preparing for the worst-case scenario and are ready to assist not only our county but also neighboring counties if needed.” In parallel, San Diego has been working diligently to clear storm drains, ensure street accessibility, and ready equipment under the guidance of Mayor Todd Gloria. The threat posed by Hurricane Hilary has prompted significant scheduling changes across various events. Major League Baseball has reshuffled its weekend games, converting Sunday matchups hosted by the Los Angeles Angels, Los Angeles Dodgers, and San Diego Padres into split doubleheaders on Saturday. In the realm of Major League Soccer, the LA Galaxy and LAFC matches originally scheduled for Sunday have been postponed to later dates.
<urn:uuid:4a07d3db-4869-4dd5-a9dd-be58f0f17899>
CC-MAIN-2023-50
http://truefeed.info/tag/southwest/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.946262
1,045
2.578125
3
How Telescopes Work I remember the first time that I looked through my Dad’s telescope lens. He loved all things scientific, especially if they could be observed while he was outdoors! I saw the surface of the moon for the first time through the lens of my Dad’s scope. It looked like it was so close that I could touch it. I was hooked. That telescope succeeded in beginning a lifelong love affair with physics. In 1608, a Dutch eyeglass maker placed 2 lenses, one in front of the other, and discovered that he could see things, magnified. The concept of the telescope was born. Telescopes work by focusing light from distant objects to form an image. An eyepiece then magnifies this image for your eye. There are 3 types of telescopes: refractors, reflectors, and telescopes that incorporate both concepts. 1. Refracting telescopes are the most widely used and oldest form of telescope. They work by “bending” light through convex and flat glass. The lens at the end of the telescope is curved and will collect the light from the faraway object. Its curved surface will focus the light and magnify it. The flat lens will then “straighten” the light and allow the human eye to see it. The eyepiece will also magnify the object and, along with the flat lens, allow the human eye to see it as normal. 2. Reflecting telescopes use mirrors instead of lenses to manipulate the photons (light) from the faraway object. The first reflecting telescope can be credited to Sir Isaac Newton, and his design is still the most popular reflecting scope and remains very popular with serious astronomers today. Because light is distorted when it passes through lenses, the mirrors will show a “truer” image. However, they do “flip” the image you are viewing. This is not relevant unless you are using the scope to find north or south. Reflecting scopes give much truer color than refracting scopes, and for this reason many amateur astrophysics buffs use them. Keep in mind that they are much harder to keep clean. The mirrors can become foggy and dirty when using the scope outside. 3. For the third kind of telescope, think Hubble or Keck. These incorporate both reflectors and refractors and would not be feasible for the amateur or common astronomer. I think that to cover the workings of the telescope we must cover not only “how” they work, but what you need to look for if you decide to purchase a telescope of your own. Purchasing for your specific needs is relevant to “how” your telescope will work for you. 1. Steer clear of department store telescopes. They are that cheap for a reason. They typically use plastic optics (lenses), and if you think that light bends drastically when going through glass, try to get it to go through plastic! They’re not worth your hard-earned money. 2. Large print announcing “great magnification” should be steered clear of, too. Magnification is over-hyped. The magnification of your telescope is controlled by your eyepiece, not the telescope itself (see the short example at the end of this piece). What you are looking for is a low-powered eyepiece with a wide field of view. This will allow more light to enter your eye, and therefore provide a clearer picture. High magnification will only amplify optical flaws and the blurring caused by the earth’s rotation. So beware! 3. Choose a telescope mount wisely, as well. Your primary concerns for your mount will be portability, stability and ease of tracking. 4. The eyepiece on your telescope is very important. This is your window through your telescope. The eyepiece is the small, removable lens that you actually look through.
Remember, if you purchase more eyepieces for your telescope, to match each eyepiece to the aperture for the clearest image! As a side note: skip the computer software. It is typically worthless and of almost no help at all. Buy a good astrophysics book instead; you will find it much more helpful, and it can be brought along with you! Finally, like Galileo and Newton, you can be fascinated by the world around you. You do not need to be a physicist to appreciate all that you see. You just need to be able to “see” it. There is more out there than the moon. A good telescope will give you a new set of eyes. Eyes that see the phases of Venus, some of Jupiter’s moons, comets, nebulae or perhaps even glimpses of the Great Spiral Galaxy. The beauty of space and all she holds is at the end of that telescope lens.
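The earlier point that the eyepiece, not the tube, sets the magnification can be made concrete with a small sketch. The focal lengths below are assumptions for illustration, not values from the article, and the relation used is the standard one from optics: magnification equals the objective focal length divided by the eyepiece focal length.
#include <stdio.h>

int main(void)
{
    /* Hypothetical telescope: a 900 mm focal-length objective. */
    double objective_fl_mm = 900.0;
    /* Hypothetical eyepieces the owner might swap in. */
    double eyepiece_fl_mm[] = { 25.0, 10.0, 6.0 };

    for (int i = 0; i < 3; i++) {
        /* Magnification = objective focal length / eyepiece focal length. */
        double mag = objective_fl_mm / eyepiece_fl_mm[i];
        printf("%5.1f mm eyepiece -> %.0fx magnification\n",
               eyepiece_fl_mm[i], mag);
    }
    return 0;
}
Swapping the 25 mm eyepiece for the 6 mm one takes the same tube from 36x to 150x, which is exactly why the eyepiece, rather than the tube, determines the magnification.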
<urn:uuid:18852297-f865-4fd6-b318-d9ee380387f9>
CC-MAIN-2023-50
http://www.actforlibraries.org/how-telescopes-work-3/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.953583
989
3.875
4
Click here to read the original article. What is international macroeconomics? The Mundell-Fleming trilemma is rooted in international macroeconomics, which is the study of policymaking decisions that affect how global economies interact with one another. International macroeconomics deals with issues such as balance of payments, trade agreements, and capital controls. What is the Mundell-Fleming trilemma? The Mundell-Fleming trilemma is described as the essence of international macroeconomics, according to Michael Klein from Tufts University. It states that, given the choice between monetary autonomy (explained under ‘Context’), free capital mobility, and a fixed exchange rate, an economy can only choose two of them. The trilemma is represented below. For example, let’s take an economy that decides to maintain both monetary autonomy and a fixed exchange rate. Assume the U.K. has its interest rate set at 2% a year, and its exchange rate is at parity with the U.S. dollar, i.e. $1 will get you £1. The U.K. decides to fix its exchange rate such that it is always at parity with the U.S. dollar. Given this scenario, U.S. investors convert $10,000 a day to £10,000 and invest it in the U.K. economy. Now, assume that the Bank of England wants to decrease interest rates in order to encourage British investment in the local economy: the interest rate falls to 1%. U.S. investors now feel less inclined to invest as much money as their return has decreased by 1%. Thus, they want to convert fewer U.S. dollars into the British sterling, i.e. the exchange rate falls. This is illustrated below. Forsaking a fixed exchange rate If an economy decides it wants monetary autonomy and free capital mobility, it will have to give up control over its exchange rate. Assume that a certain event in the U.K., such as Brexit, has caused panic amongst international investors. They withdraw their financial capital (such as money) due to uncertainty over the future of the British economy, which depletes the U.K.’s reserves of foreign currency. Without the Bank of England’s increasing interest rates to encourage capital influx, foreign reserves stay low. Without these reserves, the U.K. cannot maintain a fixed exchange rate (explained under ‘Context’), and must allow its exchange rate to float. Forsaking monetary autonomy If an economy decides to maintain both a fixed exchange rate and free capital mobility, it must forsake monetary autonomy. To maintain a fixed exchange rate, the U.K. must have a certain level of foreign reserves, i.e. it has to ensure that foreign investors are incentivized to invest in the British economy, which would maintain the level of foreign reserves. If it wants to allow capital to move freely between borders, it must set an interest rate high enough to encourage a certain level of foreign investment. In other words, the U.K.’s interest rate must be determined by the amount of foreign reserves it requires to maintain a currency peg. An example of a currency peg is the Hong Kong dollar, which, since 1983 has been pegged to the U.S. dollar at a rate of US$1 = HK$7.80. Between 1974 and 1983, the Hong Kong dollar was allowed to float. In 1974, the U.S. dollar depreciated, which encouraged capital influx to the U.S., and consequently, capital outflows from Hong Kong. As Hong Kong chose not to stem capital outflows, it had to choose between allowing its currency to float and forsaking monetary autonomy. Until 1983, it chose the former. To read more about the history of the Hong Kong dollar, click here. 
The trilemma and the EU The Euro was first launched in 1992, under the Maastricht Treaty. In order for an economy to participate in the Euro, it had to fulfill six conditions, one of which was to have an interest rate set close to the EU average. In the run-up to the establishment of the Euro, economies fixed their currencies to the Deutschmark (the currency used earlier in Germany) and allowed their capital to move freely across borders. The participating European economies then relinquished monetary autonomy and followed Germany’s interest rate closely. Wim Duisenberg, the head of the Dutch central bank, was dubbed “Mr. Fifteen Minutes” due to the alacrity with which he copied the interest rate decisions of the Bundesbank, the German central bank. Today we can see the implications of a single interest rate in the EU. For economies that followed Germany’s business cycle back when the Euro was first established, such as the Netherlands, there was little impact of copying Germany’s interest rate. However, for economies that did not parrot Germany’s business cycle, such as Greece and Spain, interest rates were too low during booms, which caused major troubles when their economies faced busts. After the Global Financial Crisis (GFC), the EU was hesitant to embark upon QE, a policy that necessitated lower interest rates across the common market. Even though it would have helped the suffering PIIGS (Portugal, Italy, Ireland, Greece, and Spain) economies by encouraging domestic investment, Germany voiced its hesitance, stating that it would leave Germany vulnerable to hyperinflation. Determining an interest rate that placates all members of a common market has proven to be nearly impossible. The history of the trilemma The first mention of some tension when it comes to international macroeconomic policymaking was by J.M. Keynes, who, in his 1930 essay “A Treatise on Money”, stated that “[preserving]… the stability of local currencies… and [preserving] an adequate local autonomy for each member over its domestic rate of interest and its volume [poses a dilemma]”. This dilemma (Keynes assumed free capital mobility) was the basis of Keynes’ criticism of the interwar gold standard: trade imbalances forced deficit countries to raise interest rates and lower wages to stop the hemorrhage of capital, which led to mass unemployment. If surplus economies increased their imports, this problem would be self-solving, but no surplus economy had any mandate to do so. In the Bretton Woods conference, Keynes proposed a solution in which an international clearing bank (ICB) aids with deficits and dissuades surpluses. Unsurprisingly, this idea faced great opposition from America, which was an economy with a large trade surplus. The ICB was abandoned, but the idea of an international bank aiding deficit countries became the basis for the IMF. Marcus Fleming was in touch with Keynes when he wrote his paper on the impotence of monetary policy in the face of a fixed exchange rate and freely-moving capital. Independently, Canadian economist Robert Mundell reached a similar conclusion, but was inspired by different circumstances. Years after the Second World War, there were scarcely any countries that faced rapid and free capital mobility. Canada was an exception: it allowed capital to travel freely through its border with America. Because it valued monetary autonomy highly, it had no choice but to let its currency float from 1950 to 1962. 
Maurice Obstfeld, the current Chief Economist of the IMF, was the first to use the term “policy trilemma” in a paper he published in 1997. Since then, the trilemma has become a centerpiece of macroeconomic textbooks, and a conversation piece for international policymakers. Why the trilemma matters International trade and globalized economic activity became increasingly commonplace after the Second World War. Economies that had to deal with sudden capital flight or influx, or struggled to maintain control over their currency, had to turn to a previously-ignored idea not only to understand why they did not have as much control over their markets as they did before the war, but also to understand how to deal with it and what the opportunity costs of choosing certain policies were. The Mundell-Fleming trilemma paved the way for conversation about policymaking with reference to international economies. Today, we notice the impact of all policy decisions on global economic activity: divergence in terms of QE policies between the U.S. and Japan might lead to carry trade; China’s maintaining both monetary autonomy and a strongly-managed exchange rate means that it must employ capital controls; the Bank of England’s decision to continue with QE resulted in a depreciation of the GBP. A criticism of the trilemma A big critique of the Mundell-Fleming trilemma has been provided by Helene Rey, from the London Business School, who stated that an economy that allows both capital flows and a floating exchange rate will not necessarily have control over its monetary policy. To hear her lecture on why, click here. 1. What is monetary autonomy? Monetary autonomy denotes a central bank’s ability to choose its policies, especially interest rates, without taking into account the impact of its interest rate on international markets. If a central bank has to take into account the reaction of international markets due to an interest rate decision, it would be because higher interest rates would encourage investors to purchase local government bonds, leading to capital influx. Alternatively, lower interest rates would lead to capital flight. 2. Why does a fixed exchange rate require lots of foreign reserves? A central bank that wants to peg its exchange rate must meet decreased demand for its currency by lowering the stock of its currency in the forex market, and increased demand by increasing the stock of its currency in the forex market. To increase the stock of its currency, a central bank must buy foreign currency using its own currency. Conversely, to decrease the stock, it must buy its own currency using foreign currency, which means that it should have a stock of foreign currency to do so. An economy must have enough foreign currency in its reserves to have the full flexibility to both buy and sell its own currency. Hong Kong, for example, has a large stock of foreign reserves to maintain its peg to the U.S. dollar. For this reason, speculators do not attack the Hong Kong dollar.
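As a rough illustration of question 2 (a toy sketch assumed for this explanation, not taken from the article): a central bank defending a peg must sell foreign reserves whenever holders of its currency want to exit at the pegged rate, and once reserves run out the peg can no longer be defended.
#include <stdio.h>

int main(void)
{
    double reserves = 100.0;  /* hypothetical stock of foreign currency */
    double peg = 1.0;         /* pegged rate: units of foreign currency per unit of local currency */
    /* Hypothetical net capital outflow per period, in local-currency units. */
    double outflow[] = { 10.0, 15.0, 20.0, 30.0, 40.0 };

    for (int t = 0; t < 5; t++) {
        double fx_needed = outflow[t] * peg;  /* foreign currency sold to buy back local currency */
        if (fx_needed > reserves) {
            printf("Period %d: reserves exhausted, the peg cannot be held\n", t);
            break;
        }
        reserves -= fx_needed;
        printf("Period %d: sold %.0f of foreign currency, reserves left %.0f\n",
               t, fx_needed, reserves);
    }
    return 0;
}
In this toy run the bank defends the peg for four periods and fails in the fifth, which is also the mechanism behind a speculative attack: once traders believe reserves will run out, they accelerate the outflow.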
<urn:uuid:201bb5cb-52f9-47ed-9528-245e6f6e7a71>
CC-MAIN-2023-50
http://www.econdiscussion.com/articles/category/monetary-policy
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.957064
2,144
3.4375
3
According to the USDA, the plant hardiness index for Background Information How the map was started Every plant can adapt to a range of environments. Gardeners have learned through experience where the great variety of landscape plants can be grown. Over the years many schemes have been proposed to help gardeners locate those environments when they introduce new species, forms, and cultivars. The pooling of many of these schemes culminated in the development of the widely used 'Plant Hardiness Zone Map,' under the supervision of Henry T. Skinner, the second director of the U.S. National Arboretum. In cooperation with the American Horticultural Society, he worked with horticultural scientists throughout the United States to incorporate pertinent horticultural and meteorological information into the map. Indicator Plant Examples The following are names of representative persistent plants listed under the coldest zones in which they normally succeed. Such plants may serve as useful indicators of the cultural possibilities of each zone.
<urn:uuid:0ff6ddc5-a5a1-4653-816c-ecdb988aa15f>
CC-MAIN-2023-50
http://www.ersys.com/usa/48/4819972/usda.htm
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.90644
215
3.5
4
The normal fallopian tube extends from the area of its corresponding ovary to its terminus in the uterus. The tube measures between 9-11 cm in length. At the ovarian end, the tube opens into the peritoneal cavity and is composed of approximately 25 finger-like projections termed the fimbriae. In its extrauterine course, the fallopian tube is enveloped in a peritoneal fold along the superior margin of the broad ligament. The fallopian tube is histologically composed of three layers: a mucosal membrane, a wall of smooth muscle and a serosal coat. The serosa is lined by flattened mesothelial cells. The muscularis mucosae is composed of two layers: an outer longitudinal and an inner circular layer. There is additionally, an inner longitudinal layer in the intramural segment of the tube that extends for 2 cm laterally. The inner circular layer forms the bulk of the muscular coat. The outer longitudinal layer comprises inconspicuous smooth muscle cells interspersed with loose connective tissue. The mucosa rests directly on the muscularis. It consists of a luminal epithelial lining and a scant underlying lamina propria with sparse spindle and angulated cells. The luminal complexity is more marked towards the ovarian end compared to the interstitial and isthmic portions containing only five to six blunt papillae. Three histologic cell types comprise the epithelial layer: ciliated (20-30%), secretory (55-60%) and intercalary cells. Ciliated cells are believed to be more frequent in the ovarian end of the fallopian tube. The ciliated cell has a columnar shape and contains a oval or round nucleus, often located perpendicular or parallel to the long axis of the cell. The secretory cell is usually a more narrow columnar cell with approximately the same height as the ciliated cell. The nucleus is ovoid and perpendicular to the long axis of the cell. The chromatin is more dense and the nucleolus smaller than that seen in the ciliated cell. The intercalary cell, or peg cell is a columnar cell occupied chiefly by a thin, dark-staining nucleus.
<urn:uuid:e15db9af-5f63-47f5-a78a-a3c81e6444db>
CC-MAIN-2023-50
http://www.proteinatlas.org/learn/dictionary/normal/fallopian+tube
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.916823
447
3.140625
3
The timeline is England, late-18th century. Tea has found its way from the table of the aristocracy to the table of the every man. Gone are the days when tea was served to only men in English coffeehouses. In the homes of the aristocracy, tea now is locked away in elaborately-carved wooden tea chests; the key kept safe by the “lady of the house” should the chambermaid, the footman or the butler decide to help him or herself. Tea now is also being enjoyed in almost every home, tea room and workplace. Maid servants are enjoying a ‘tea break’ twice a day … with an allowance taken from their wages to pay for the tea. Apprentices in manufacturing plants are allocated a parlor where they can have a twice daily tea break. Children in orphanages are given tea with milk and sugar. Not everyone, however, thinks this ‘tea drinking’ is appropriate for the lower classes. Jonas Hanway in A JOURNAL OF EIGHT DAYS JOURNEY wrote “The use of tea descended to the Pleboean order among us, about the beginning of the century … men seem to have lost their stature, and comeliness, and women their beauty. Your very chambermaids have lost their bloom, I suppose by sipping tea … It is the curse of this nation ….” He wasn’t alone. William Cobbett wrote in COTTAGE ECONOMY, “The tea drinking has done a great deal in bringing this nation into the state of misery in which it now is. It must be evident to every one that the practice of tea drinking must render the frame feeble, and unfit to encounter hard labor or severe weather.” By the mid-1800’s, a professional man (doctor, lawyer) might earn £50 a year, while the average workman was only earning about 20 shillings a week. A live-in chambermaid might earn £5 per year, while the butler of the house would earn £20 . With Tea selling for more than £26 a pound, how was anyone ever going to afford this beverage? One word …. “smouch“. We might call it recycling, they called it “smouch“. Servants in the royal and affluent households, as well as workers in coffee houses, would take the used tea leaves and sell them through the back door to unscrupulous dealers. These “smouch” dealers would then add things like tree leaves, sheep’s dung and saw dust as fillers. They would color the leaves with iron sulphate, verdigris and copper. They would dry this mixture and then sell the “smouch” back to the tea merchants. It is believed that within an eight mile area, approximately 20 tons of “smouch” was manufactured every year. This flourishing underground market, in addition to smuggling, is what made it possible for tea to reach the commoner. “METHOD OF MAKING SMOUCH WITH ASH LEAVES TO MIX WITH BLACK TEAS” “When gathered they are first dried in the sun then baked. They are next put on the floor and trod upon until the leaves are small, then lifted and steeped in copperas, with sheep’s dung, after which, being dried on the floor, they are fit for use.” Taken from Richard Twining’s “Observations on the Tea and Window Act and on the Tea Trade, 1785”. The tea that was being imported from China and enjoyed now by all classes was green tea … not black tea as so many people associate with Great Britain. It was what we now refer to as “gunpowder” green tea. Black tea came about because the Chinese were becoming just as unscrupulous as the “smouch” dealers. The Chinese, knowing that people expected their green teas to have a bluish tint when steeped began adding gypsum to their tea just before firing the leaves, giving their cheaper teas the right color. 
Partly due to the fact that forests were being completely decimated in order to manufacture “smouch”, and due to the fact that poisonous dyes were being used, an Act of Parliament was passed in 1725 banning the mixing of tea leaves with any other leaves. This Act went completely unnoticed, which prompted another edict from the government in 1777 banning the sale of “smouch” altogether. Tea drinkers eventually became concerned about some of the more bizarre ingredients they were ingesting. When you think of all the copper, lead, gypsum and iron that people were drinking, sheep’s dung doesn’t sound so bad! The public became so concerned about these poisonous dyes that they began asking for ‘black’ tea … which is why black tea is the predominant tea enjoyed throughout Great Britain. And with smuggling so rampant at that time, “smouch” was no longer an issue. So how about it …… would you like a little “smouch” with your tea?
<urn:uuid:c8662122-261e-45eb-86eb-a210cb310f8e>
CC-MAIN-2023-50
http://www.teatoastandtravel.com/tag/iron/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.978719
1,091
3.234375
3
Passwords. Encryption. Firewalls. Secure networks. None of these are out of the reach of the US National Security Agency (NSA), their British counterparts at the Government Communications Headquarters (GCHQ), and other government as well as private agencies around the world. Any sense of privacy and security should be dismissed completely in this world. Assume that your data is open to prying eyes. Assume that your private information is an open book. If you put anything on a digital device that has access to the internet, it can no longer be considered private, secure, or safe. The whistleblowing efforts of Edward Snowden as well as investigations by journalists and watchdog groups are showing us every day just how insecure our data really is. The latest log to throw on the fire comes from reports that the US and British security agencies (and probably other organizations) have “cracked much of the online encryption relied upon by hundreds of millions of people to protect the privacy of their personal data, online transactions and emails.” If you think your 23-character password with lower-case letters, capital letters, numbers, and special characters will protect you, you’re wrong. These agencies are working from within. They have established vulnerabilities within the coding of websites and services that give them backdoor access regardless of how long your password is. They don’t need to know your mother’s maiden name or the city in which you met your spouse to gain access to your accounts and data. They probably already have it. According to Maximum PC: The most worrying bit, though, is that the agency owes a lot of its eavesdropping capabilities to its success in secretively influencing tech companies to alter their product designs, “insert vulnerabilities into commercial encryption systems” and weaken security standards. All these activities are part of the SIGINT (signals intelligence) Enabling Project, a program the NSA has spent around $800 million on since 2011. The paranoid ramblings of people (myself included) over the past few years are proving to show barely the tip of the NSA security iceberg. Their greatest strength (besides undisclosed and untracked budgets) is time – they’ve been working on these projects for over a decade. Today, they’ve mastered the science and art of getting into any file, any computer, and any database they want. They’ve also mastered the ability to catalog, sort, and retrieve the data as well. To those who take the security of their data, their personal information, and their digital lives seriously, there really is only one thing that can be done. If unplugging is impossible, people have to take the necessary steps to keep anything important to them offline completely.
<urn:uuid:d7481bfc-6718-4273-b0fd-878e82a8bbc7>
CC-MAIN-2023-50
http://www.techi.com/2013/09/assume-that-no-personal-data-is-safe-from-the-nsa-gchq-and-others/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.957677
583
2.53125
3
The Plan for Romeo and Juliet to consummate their marriage
When the Nurse explained the plan—a rope ladder, “the cords,” would be placed from Juliet’s bedroom, “the highway to her bed,” so that Romeo and Juliet could have sex and thereby consummate their marriage—Romeo responded by saying “bid my sweet prepare to chide.”
What does "to chide" mean?
I’ve never been fully confident that I understood this line. What does “to chide” mean in this context? Why should Juliet “prepare to chide”? I’ve never seen the line analyzed or glossed, but it is the pivotal moment in the drama. Up to this point, alternatives are possible. The marriage has not been consummated and can be easily annulled. Juliet still could marry Paris, and Romeo find another Rosaline or Juliet. Or Romeo and Juliet could announce that they have married and accept the ire of their families and banishment. Their marriage might, as Friar Lawrence planned all along, put an end to the enmity between their families and soften the Prince against banishment. This is the point of no return—a secret marriage followed by a clandestine consummation and a cloak-and-dagger plan for resurrection and return—and this is the line which marks the point of no return: “bid my sweet prepare to chide.”
The dictionary definition of "to chide"
The dictionary definition of “to chide” is “to scold or rebuke” and the word is used elsewhere in the play with this meaning, but what could Romeo possibly mean by saying “Juliet should prepare to scold or rebuke”? From the context of the dialogue, we would expect Romeo to say something like “Juliet should prepare to be my lover” or some more poetic Shakespearean equivalent. Basically, in the simplest of terms, he must be saying “tell her to prepare to have sex.” But why does he say it this way or, more to the point, why does Shakespeare have him say it this way?
Shakespeare's pun on chide/chafe
I have long suspected that “to chide” was, in this context, a pun suggesting “to chafe.” Finding this web page which compares “to chide” and “to chafe” <http://wikidiff.com/chafe/chide>, my suspicions were confirmed, my prophecy fulfilled. The verb “to chafe” means “to excite heat by friction; to rub in order to stimulate and make warm.” More telling for our purposes, Shakespeare uses “to chafe” and “to chide” in ways that bring their meanings close together. For example, compare: “the troubled Tiber chafing with her shores” from Shakespeare’s Julius Caesar, and “As doth a rock against the chiding flood” from Shakespeare’s Henry VIII
<urn:uuid:b357750f-33a2-4ebf-9dc2-8a77e1e24462>
CC-MAIN-2023-50
http://www.thesourgrapevine.com/2017/11/test-question-how-did-romeo-respond.html?m=0
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.944517
656
2.515625
3
Foot Pain Treatment
Foot Pain Remedy Guide
There are many reasons why people experience foot pain. Some are due to high-impact exercise, wearing the wrong size of shoes, underlying medical conditions, a diet low in vitamins and minerals, long periods of walking, and weak foot muscles. With various foot pain treatments, all these problems can be solved.
The foot is one of the most injury-prone parts of the body due to strains and pressure from walking, running, jogging, and other activities. This part of the human body consists of 26 bones, 33 joints, and 120 muscles. The feet serve as the means of pushing the leg forward, the body's shock absorber from pressure, and the balancing mechanism for the whole body frame. Due to their relatively small size, the feet receive more than half of the body's weight; this means that these parts of the human anatomy bear a combined load amounting to hundreds of tons in a single day. It has been estimated that an average person walks around 7,000 to 10,000 steps every day. For people who are overweight or obese, foot pain is a frequent occurrence, since the pressure on these body parts is much heavier than normal. Foot pain may also be a result of underlying medical conditions such as rheumatism and arthritis. Also, studies have revealed that wearing the wrong size of shoes results in foot injuries, which are usually painful.
There are Various Foot Pain Treatments and Remedies:
- For those experiencing foot pain, opt for low-impact exercise, since this will reduce injuries to the feet. Since most foot pain is caused by repetitive movement, avoid high-impact exercises that put too much pressure and strain on these areas.
- There are topical medicines available over the counter which give relief from foot pain.
- Soaking the feet in warm water will have a soothing effect on the muscles, since the warmth is able to penetrate painful areas.
- Massage therapy for ailing feet. Just be sure to hire a massage specialist who can target painful muscle points. There are also electronic massaging devices available today, which no longer require people to go to spas and massage centers.
- Seek a doctor's advice if the pain is intolerable, since it may result from an underlying medical condition.
- Do foot exercises which can make the feet's muscles stronger and more flexible.
Exercises for Foot Pain
- Controlled Rise: Stand on both feet (a medium width apart) and slowly raise the left foot onto its ball. Do this over a span of four seconds and then hold the position for the same amount of time. Gradually lower the heel back to the floor. Repeat the same procedure on the right foot.
- Strengthening the toes: (This must be done barefoot.) Lift the toes simultaneously to strengthen their joints and muscles. This can be done either standing or lying down.
<urn:uuid:f8575b63-0d54-40b5-9c94-248db58b8e5b>
CC-MAIN-2023-50
http://www.tonehealth.org/Foot-Pain-Treatment.htm
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.947166
592
2.546875
3
Tina Rosenberg, writing in The Haunted Land: Facing Europe’s Ghosts After Communism: The Stasi complex on Normannenstrasse in the Lichtenberg district consisted of 41 brown concrete buildings. In addition, the Stasi possessed 1,181 safe houses, 305 vacation homes, 98 sports facilities, and 18,000 apartments for meetings with spies. The Stasi had a budget of 4 billion East German marks. It had 97,000 full-time employees—after the army, it was East Germany’s largest employer. There were 2,171 mail readers, 1,486 phone tappers, and another 8,426 people who monitored phone conversations and radio broadcasts. In addition, there were about 110,000 active unofficial collaborators and perhaps ten times that many occasional informants. The Stasi kept files on 6 million people. There were 39 separate departments—even a department to spy on other Stasi members. A master file with a single card for each Stasi employee, collaborator, and object of surveillance stretches for more than a mile—the cards for people named Müller alone reach a hundred yards. Stacked up, the Stasi’s complete files reached 125 miles. They weighed fifty tons per mile; in total, 62,500 tons.
<urn:uuid:774f3746-0d81-4fb4-8e69-f634ace4e03e>
CC-MAIN-2023-50
http://www.vault45.com/tag/tina-rosenberg
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.955483
259
2.859375
3
Studying Andy Warhol starts with the Warhol Museum, which offers a whole set of lessons for school-aged pupils that explore his life, artistic practice and legacy. There is a lot of material available here, including how to critique a piece of art. There are lots of questions for children to discuss about his work, his techniques, etc. This is a brilliant aid for anyone studying Warhol.
Arty Craftsy Mom has ten great Andy Warhol projects for children:
- Printable Soup Can Labels
- Warhol and the Queen
- Flowers Print Art
- Andy Warhol Paper Toy
- DIY Andy Warhol Costume
- Warhol Art Project
- Uncle Andy’s
- Warhol’s Cat
- Self-Portrait Marker Art
- Foam Tray Printing
KinderArt has a project on Pop Art Portraits for younger children. Scholastic has a similar project aimed at older pupils. There is also an Andy Warhol activity, Hands in Silhouette, using brightly coloured paper as a background. Teaching Ideas has some Playing with Colour ideas based on Warhol’s work. There is also a mass of free Andy Warhol templates on the Internet to get younger children started.
The TES makes a couple of unit plans available; see the Andy Warhol Unit Plan by Fairy Kitty: “6 great lesson plans which perfectly cover this unit. Particularly helpful questions to engage children and provoke meaningful discussions. Some good tips to give to children while they are producing their work.” (TES) Also: a scheme of work and presentation from Guilz.
I really like this short project from Crayola; it is a way of incorporating both facts about Warhol and work in his style.
<urn:uuid:ed619529-ebf4-48ce-904e-4c5a0cdfe8fc>
CC-MAIN-2023-50
https://123ict.co.uk/2019/08/16/studying-andy-warhol/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.922974
360
3.625
4
Climate change teaching resources have recently been provided by the Educational Foundation, linked to The Economist. The Educational Foundation runs new themed events for children; the ideas started as weekly club events, but the resources can easily be used in classrooms, after-school clubs, or for home schooling. The following links provide resource downloads.
There are nine sets of resources linked to the Climate Change theme. The most recent is Building Back Greener – a scheme of work based on a green recovery of the globe after the COVID pandemic. There is one on plastics, packaging and sustainability, looking to see what impact plastics have on the environment. These are linked to 2021 news. Coronavirus and the Environment looks at how the levels of emissions in the atmosphere plummeted dramatically during lockdown. Extreme Weather – a set of six lessons in the climate change teaching resources pack – helps children explore the impact of extreme weather on different communities and considers responses to it. The World Earth Day resources, even though they were designed two years ago, will be new to many children. This set of lessons challenges learners to consider the specific steps they can take to tackle climate change, discuss the obstacles and think about whose responsibility it is.
<urn:uuid:6dc62643-e9ca-4b5f-ae6c-0187ac59bdb1>
CC-MAIN-2023-50
https://123ict.co.uk/2021/05/25/climate-change-teaching-resources/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.9535
242
3.953125
4
By Stian Skjerping, Screen Story
A New Approach to Treatment
Phobias are a type of anxiety disorder that can be debilitating for those who suffer from them. They are characterised by an intense and irrational fear of a specific object, situation, or activity. Phobias can be triggered by a variety of things, including animals, heights, flying, and social situations. Traditional treatment for phobias involves exposure therapy, where patients are gradually exposed to the object of their fear. Taking a participant to many different real environments is expensive, time-consuming, and logistically challenging.
By utilising 360° videos and virtual reality, individuals can be exposed to their phobia through a virtual reality headset. This technology enables patients to experience their fear in a realistic and immersive way, without the need to visit other places. Exposure therapy using 360° videos can therefore be a cost-effective alternative to real environments.
One of the key benefits of 360° video technology is that it can be customised to meet the needs of each patient. The therapist can control the level of exposure, gradually increasing it as the patient becomes more comfortable. This allows patients to progress at their own pace, without feeling overwhelmed or anxious. It also allows therapists to treat a wide range of phobias. Whether the patient has a fear of spiders, heights, or public speaking, the technology can be adapted to meet their needs. By providing a virtually safe and controlled environment for patients to confront their fears, this technology can make it easier and more effective for patients to overcome their phobias. As the technology continues to evolve, it is likely that we will see more and more therapists incorporating it into their treatment plans.
<urn:uuid:5f919cba-4f03-4561-819a-95b95f79f759>
CC-MAIN-2023-50
https://360visi.eu/2023/06/09/overcoming-phobias-with-360o-video-technology/?vp_page=2
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.941665
383
3.078125
3
Millions of people are facing the ice-cold reality of huddling under blankets to stay warm this winter to keep money for food on the table, as spiralling energy costs leave many unable to heat their homes. On top of the financial and emotional burden placed on people, the energy crisis has also created an underlying health time bomb. Heart attacks and strokes, as well as hypothermia, are all potential risks to the body and its function when it is in cold conditions for long periods of time. When you are sitting in the house feeling cold, your body is making adjustments to keep you safe and well, but they come at a cost. “The body is working jolly hard at 10 degrees” is how one university professor described the matter. Jolly hard to survive.
Life in all its forms is a biological masterpiece. Organs such as the heart, liver and brain are capable of transferring data with 100% accuracy to perform highly precise functions that keep the body running at safe and sustainable levels. It is instinctive information held in genes that tells penguins to huddle as one to withstand the horrendously cold Antarctic blasts. The saltwater crocodile automatically knows to bathe in the baking sun of Australia to heat up its engine before going about its devastatingly brutal lifestyle of hunting and surviving. From the blubber-packed 150-ton blue whale to the million-strong ant colonies whose individuals act as one organism for the common good, life in all manner of creatures has mastered the function and equipment necessary to survive.
Then we come to what remains the greatest mystery in evolutionary history – the human body. Such is the leap in intelligence and capability of the human brain that we have evolved differently from our animal counterparts. Our knowledge of how to control fire has prevented the need for us to grow shaggy coats or thick fur. We get to wear our own fashion-branded and celebrity-endorsed insulating garments, if we choose to. Cunning, patience and planning – the ability to hunt, farm and fish – mean large fangs, venom, claws or stamina were not necessary body features for humans to nourish themselves throughout our time on earth.
However, as global warming causes species such as the polar bear to face a heart-wrenching struggle to evolve and survive, we humans, too, face our own never-ending period of adaptation. With industrialisation across all corners of the globe, humankind has not only created a pressure-cooker planet that sees millions of species fight for their very future, it has also opened up a worldwide chasm of its own – between wealth and poverty. This divide can be seen in all countries, and one of the most pressing concerns regarding it is the issue of people heating their homes. The global panic around fuel supply has seen energy costs skyrocket and brought the realisation that millions of people now cannot afford to heat their homes effectively.
One person who has seen the problem at its harshest is Belfast foodbank volunteer Paul Doherty. Each week he sorts and delivers food parcels packed with cereals, soup, beans, pasta, nappies and much more to help those struggling in the Northern Irish capital. However, as well as struggling to put food on the table, people are also unable to heat their homes, and Doherty admitted that conversations with locals are increasingly turning towards the cost of heating. “To be honest with you, it’s people at their wit’s end. You’re seeing the worry and despair in their face.
“We’re seeing whole families sitting round a dinner table wearing coats. That is a reality. I’ve seen that multiple times.”
Ambulance workers in Glasgow have reported visiting homes which feel ice cold, where the patients are clearly struggling to cope. Tanya and Will work as an ambulance crew in the east of the Scottish city. They have seen how serious the issues can become. Tanya said: “It is sad to see people are living like that.
“There’s been quite a few patients I have been out to who can’t afford to buy food. They have to choose one or other, heating or food.
“So they’ll sit quietly at home and it’s usually a relative or a friend who will phone for them as they don’t want to bother anybody.
“They’re sitting there and you can’t get a temperature off them because they’re so cold.
“So you take them into hospital because they are not managing. You know if you leave that person at home they are probably going to die through the fact they are so cold.”
Temperatures in Scotland have plummeted below -10C during this winter, and about 44 people a day were taken to hospital suffering from hypothermia. Will explained the problems for patients sitting at home in freezing conditions: “If they are not turning the heating on they are not going to feel better.
“Respiratory illnesses and seasonal bugs take hold much easier if they are not able to look after their basic needs such as food and heating.
“If they are not able to keep on top of that then they will get sick.”
An experiment was carried out by BBC journalist James Gallaher to see what harm living in an unheated home causes to the body. He visited the University of South Wales and was made the subject of a test by Professor Damian Bailey, who summarised: “Ten degrees is the average temperature that people will be living in, if they can’t afford to heat their homes.”
The scientists measured how Gallaher’s body performed at 21C, before the temperature was dropped to see what changes occurred. Prof Bailey described how the brain tastes the patient’s blood and sends signals to the rest of the body. The body needs to keep its core, which includes major organs such as the heart and liver, at 37C.
At 18C, Gallaher noticed the hairs on his arms starting to stand up to help insulate his body. As the experiment continued and the temperature dropped further, the blood vessels in the hands were the next sacrifice made by the body to keep warm blood flowing to its critical organs. At this point women are affected more than men, as oestrogen makes their blood vessels more likely to constrict. At 11.5C the body begins to shiver so that its muscles can generate heat.
The instinctive brain will adapt and distribute its resources to maintain the critical functions of the body, but logical thinking suffers, as shown by Gallaher’s drop in performance on a shape-sorting game. He commented: “I wouldn’t want to be trying to do school homework in a cold room or to have this compound something like dementia.”
Professor Bailey explained the increased risk of harm as temperatures drop. He said: “Science tells us that 18 degrees is the tipping point… the body is now working to defend that core temperature.
“The body is working jolly hard at 10 degrees. That increasing blood pressure is a risk factor for a stroke, it’s a risk factor for a heart attack.
“You’re delivering less blood to the brain, so there’s less oxygen and less glucose sugar getting into the brain and the downside of that is it’s having a negative impact on your mental gymnastics.”
The cold also causes the blood itself to thicken, which increases the risk of a dangerous blockage. Professor Bailey concluded: “The evidence clearly suggests that cold is more deadly than the heat, there are a higher number of deaths caused through cold snaps than there are through the heat snaps.
“So I really do think that more recognition needs to be paid for the dangers associated with cold.”
Icy conditions also allow many infections such as flu and pneumonia to thrive, as inflammation in the lungs becomes more common. The cold, hard reality is that many cannot afford to heat their homes day and night throughout the winter, and the advice from Professor Bailey is one of common sense and practicality. He described it as “preparing for a mountaineering expedition”.
Some of the best ways to stay warm are to wear clothes made of good insulating materials such as wool, to wear gloves, socks and hats to prevent the loss of crucial body heat, and to generate body heat by moving around rather than just sitting watching TV. The body is capable of protecting itself, but a few sensible tips will help you maintain its resources and fend off the long winter cold.
<urn:uuid:21812790-eab3-4fb8-8c1a-b9303ed17486>
CC-MAIN-2023-50
https://activalways.com/health/damage-cold-can-do-to-your-body/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.96616
1,794
2.828125
3
COVID-19 Deaths Largely Preventable
The thousands of deaths caused by Covid-19 may be largely preventable. Covid-19 causes mild symptoms such as cough and body aches in about 81% of those infected. However, in about 4% of cases, patients develop severe acute respiratory distress syndrome (ARDS). ARDS kills an average of 9,000 patients a year in the United States alone due to sepsis and has a mortality rate of up to 50%. However, new research has found a treatment that is very successful in resolving ARDS and reducing the death rate. This therapy is high-dose vitamin C in combination with thiamine and hydrocortisone.
There have been several studies using various forms of this treatment. Some of these studies have shown that the death rate can be reduced substantially. Some of the studies have been ignored because the SOFA score, a marker for sequential organ failure used in evaluating critically ill patients, did not change substantially. However, the death rate did, and more importantly, the number of days in the ICU decreased substantially! If the treatment protocol had been more aggressive and continued for a longer period of time, I think the results would have been much better.
Dr. Marik's trial showed that ARDS can be quickly controlled with a combination of thiamine, a 4.5 gm dose of vitamin C, and hydrocortisone. So let’s hear from the nurses involved in the trial. These patients are recovering overnight. As a functional medicine doctor with over 25 years of experience with vitamin C, I can tell you that the doses used in the Marik study are well below what we use in the office every day on patients.
There was another trial showing a drop in the death rate but not in the SOFA score. Still, they did not load the patients with enough vitamin C; their dose was very low. Further, they discontinued the vitamin C after only 96 hours. This resulted in the patients' symptoms getting worse after vitamin C was stopped. Let's hear from Dr. Paul Marik: this diagram from the trial shows the mortality rate increased sharply after the vitamin C infusion was stopped. See figure 5, the Kaplan-Meier mortality curves.
So what does this data tell us? It tells us that ARDS caused by the coronavirus can be treated effectively and quickly with nutritional medicine techniques. Rather than mass-producing respirators, we need to improve and duplicate this treatment throughout the United States.
<urn:uuid:0fe4f90f-a1a1-486f-8c2e-7fccfef4635d>
CC-MAIN-2023-50
https://agapenutrition.com/blogs/coronavirus/covid-19-deaths-largely-preventable
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.965548
517
2.78125
3
The spiral model was first described by Barry Boehm in his 1986 paper "A Spiral Model of Software Development and Enhancement"; in 1988 he published a similar paper to a wider audience (Computer, May 1988, pp. 34-45), and the diagram it introduced has been reproduced in many subsequent publications. Boehm, a professor at the University of Southern California who previously worked at General Dynamics, RAND and TRW, directed the DARPA Information Science and Technology Office from 1989 to 1992, is a fellow of the ACM and IEEE, and is also known for the COCOMO cost model. His impatience with rigid sequential development is captured in the line attributed to him at TRW Defense Systems Group: "Stop the life cycle – I want to get off."
The spiral model combines the sequential (waterfall) model with iterative development and prototyping; because it follows the philosophy of iterative development, it is sometimes also called the prototyping model. The primary function of any software process model is to determine the order of the stages, and IEEE defines the spiral model as one in which the constituent activities – typically requirements analysis, preliminary and detailed design, coding, integration, and testing – are performed iteratively until the software is complete. It combines aspects of the incremental build, waterfall and prototyping models but is distinguished by a set of six invariant characteristics. A software project repeatedly passes through four phases – planning, risk analysis, engineering and evaluation – in iterations called spirals. In its diagrammatic representation the process looks like a spiral with many loops; each loop is a phase, the exact number of loops is not fixed and varies from project to project, and the baseline spiral starts in the planning phase, covering the early waterfall-style activities needed to get a software product started. The development team begins with a small set of requirements and goes through each development phase for that set, so the product is designed, implemented and tested incrementally until it is finished. By contrast, the V-model is an SDLC model in which execution proceeds in a strictly sequential, V-shaped order, and the agile model encourages continuous iteration of development and testing throughout the project's lifecycle.
Above all, the spiral model is risk-driven and was intended by Boehm for high-risk projects. Risk is essentially any adverse circumstance that might hamper the successful completion of a software project, and the overall success of a spiral project depends heavily on the risk-analysis phase. Based on the unique risk patterns of a given project, the model guides a team to adopt elements of one or more other process models, such as incremental, waterfall, or evolutionary prototyping. For example, spiral architecture-driven development is a spiral-based SDLC that shows one possible way to reduce the risk of an ineffective architecture by drawing on best practices from other models.
The spiral model is typically used when most of the requirements are known up front but are expected to evolve over time; when there is a need to get basic functionality to market early; on projects with lengthy development schedules; or on projects using new technology. In each cycle it adds risk analysis and 4GL/RAD prototyping to the waterfall model. It is widely used in the software industry because it is in sync with the natural development process of any product, and it is best suited to large projects that involve continuous enhancement and require substantial management and planning; it can be pretty costly to use and does not work well for small projects.
The WinWin spiral approach is an extension of the spiral model. Its phases are the same, but when requirements are being identified, the development team and the customer hold discussions and negotiate which requirements to include in the current iteration. WinWin is also the name of a groupware tool that makes it easier for distributed stakeholders to negotiate mutually satisfactory agreements, and the sidebar elements of the WinWin spiral model describe these extensions and their goals in more detail.
Free spiral diagram templates for PowerPoint, built from simple shapes, are available for presentations on the software development process model. Hope you found the information on the spiral model helpful.
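Because the spiral model is described above as a risk-driven loop of planning, risk analysis, engineering and evaluation, a small illustrative sketch may help. This is not from Boehm's paper; it is a minimal Python outline, with toy stand-in functions, of how one spiral cycle after another could be driven by a risk check.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class SpiralCycle:
    """One loop of the spiral: plan, analyse risk, engineer, evaluate."""
    number: int
    requirements: List[str] = field(default_factory=list)
    residual_risk: float = 1.0  # 1.0 = completely unproven, 0.0 = no known risk


def analyse_risk(cycle: SpiralCycle) -> float:
    """Toy risk model: assume each completed cycle retires roughly half the remaining risk."""
    return cycle.residual_risk * 0.5


def engineer(cycle: SpiralCycle) -> None:
    """Stand-in for building the prototype or increment for this cycle."""
    print(f"Cycle {cycle.number}: building increment for {len(cycle.requirements)} requirements")


def evaluate(cycle: SpiralCycle) -> List[str]:
    """Stand-in for the customer review that feeds requirements into the next loop."""
    return cycle.requirements + [f"feedback-from-cycle-{cycle.number}"]


def run_spiral(initial_requirements: List[str],
               acceptable_risk: float = 0.2,
               max_cycles: int = 6) -> SpiralCycle:
    """Repeat spiral cycles until residual risk is acceptable or the cycle budget runs out."""
    requirements = list(initial_requirements)
    risk = 1.0
    for n in range(1, max_cycles + 1):
        cycle = SpiralCycle(number=n, requirements=requirements, residual_risk=risk)
        # Planning quadrant: objectives, alternatives and constraints would be set here.
        risk = analyse_risk(cycle)      # risk-analysis quadrant drives the whole model
        engineer(cycle)                 # engineering quadrant: prototype / increment
        requirements = evaluate(cycle)  # evaluation quadrant: plan the next iteration
        cycle.residual_risk = risk
        if risk <= acceptable_risk:
            return cycle                # risk is low enough to commit to the full product
    raise RuntimeError("Risk never fell to an acceptable level within the cycle budget")


if __name__ == "__main__":
    final = run_spiral(["login", "checkout", "reporting"])
    print(f"Accepted after cycle {final.number} with residual risk {final.residual_risk:.2f}")
```

The fixed halving of risk is of course a placeholder; in a real project the risk-analysis quadrant is where prototyping, simulation and benchmarking would actually happen.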
<urn:uuid:84ae9f8b-8bac-43a7-a1a1-9ee6477b2fd1>
CC-MAIN-2023-50
https://anescapde.web.app/733.html
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.921554
2,399
2.90625
3
(as of Jan 17, 2023 15:28:17 UTC – Details)
Would you like your kids to be more clever than other kids? Wondering how to improve your kids' Science, Technology, Engineering, and Math skills? The Huaker kids building block set is the best choice for you!
Exercise hand-eye coordination – children need to use their hands dexterously when building with the blocks, which helps promote the development of fine motor skills. Stacking scattered building blocks to create complex objects exercises hand-eye coordination as well.
Cultivate observation – objects such as cars built by children are actually common in life. They must first learn to observe, and then, in the process of playing, use building blocks to express the things they observe in daily life. Observation is cultivated unconsciously.
Cultivate communication skills – it is best to let children build blocks with other children, which is more fun than playing alone. Moreover, children building blocks together will stimulate each other's inspiration, so they will play more seriously, which is also good for cultivating children's ability to get along with others.
Make children more confident – the process of building blocks can be completely controlled by the child, which brings satisfaction and self-confidence. In the process of playing with building blocks, children can also learn a lot of mathematics and cultivate a sense of space, imagination, creativity, language expression ability, and more.
SUPPORTS STEM EDUCATION: The STEM Construction Engineering Building Blocks set is designed to improve your children's Science, Technology, Engineering, and Math skills and enhance your kids' creativity and imagination. Suitable as a toy for 3, 4, 5, 6, 7, 8, 9, and 10 year olds.
CREATIVITY & IMAGINATION TOYS: With 125 building pieces, kids can combine them into many different models such as cars, trucks, and robots – any kind of object they can imagine – which helps improve their Science, Technology, Engineering, and Math skills and enhances their creativity and imagination.
EASY TO CLEAN UP, WITH A REUSABLE STORAGE BOX: The toys can be cleaned with water when dirty and put back into the storage box so the kids don't make a mess of the house. Each of the 125 bright, colorful, non-toxic pieces is certified Phthalate-, Lead-, Cadmium-, and BPA-free. We invested in child-friendly materials and rigorous lab tests to make sure your kids are 100% safe from harmful chemicals.
BEST GIFT FOR KIDS: Building block toys are perfect for 3, 4, 5, 6, 7, 8, 9 and 10 year old boys and girls, as teen girls' gifts, kids' birthday gifts, and for holidays, Christmas, Halloween and New Year!
WHAT IS INCLUDED: With 125 pieces, this set allows kids to design a virtually infinite number of configurations. 1x kids building block set (125 pieces), 1x reusable storage box. Recommended for ages 3 and up.
CREATIVITY & IMAGINATION: The building toys are designed as a 125-piece set including 6 building tools. The blocks can be combined into many different patterns such as vehicles, robots, animals and anything else the kids can imagine, which can greatly improve their hands-on ability, imagination, creativity, fine motor skills and hand-eye coordination. It makes a good and interesting interactive game between children and parents, kids and their friends, or teachers and students!
GREAT STEM LEARNING: This teacher-recommended, innovative play set provides your kids with a solid foundation for in-demand skills! From science and physics to math and geometry, our educational toys encourage kids to build their critical thinking and problem-solving abilities as well as their teamwork and social skills.
SAFE MATERIALS: The kids' Building Blocks STEM Construction Engineering Kit is easy to wash and comes with a sturdy storage box for quick and convenient cleanups. All 125 bright, colorful pieces are certified non-toxic and Phthalate-, Lead-, Cadmium-, and BPA-free, so your kids can use them without any worry!
PERFECT GIFT FOR KIDS: Building block toys are perfect gifts for 3-10 year old boys, teen girls, kids' birthdays, holidays, Christmas, Halloween and New Year! A full-color builder's guide with 17 ideas for beginner, intermediate and advanced builders ensures creativity and success in making all kinds of vehicles, robots, and animals.
<urn:uuid:1132e6a9-107b-4d2d-b423-af2b9bdd9d82>
CC-MAIN-2023-50
https://ans32.com/huaker-kids-building-stem-toys-125-pcs-educational-construction-engineering-building-blocks-kit-for-ages-3-4-5-6-7-8-9-10-year-old-boys-and-girls-best-gift-for-kids-creative-games-fun-activity/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.92244
987
3.15625
3
Why aren’t there flea collars for people? I see ads for products to protect pets from fleas and ticks, and nasty tick-borne diseases are becoming more common. I’m tired of having to strip and do an extensive tick check after every walk in the woods. —Bill Consider the dog, Bill, and how it lives. It sheds and slobbers. It dwells close to the ground. It doesn’t shower much. It rolls around in dirt, and will happily do the same in feces or rotting animal remains where available. Let’s just say if your personal habits depart much from the aforementioned, you might not really need flea control of any sort. Particularly in developed countries, modern hygiene has rendered fleas pretty much a medical nonissue. Where they remain a problem (e.g. in sub-Saharan Africa) it’s often because they burrow into the feet and hands—more easily countered with a pesticide wash than with dedicated neckwear. But let’s separate the fleas from the ticks here, and the havoc-wreaking potential of each. Granted, fleas have run up a more impressive score if you take the historical view—they carried bubonic plague, after all. But while we’ve got plague all but under control these days, one can’t say the same about the infectious diseases passed along by ticks, which as you note present increasingly grave threats to human health. Blame climate change in part, as more regions become warm and humid enough to support tick activity; growing populations of deer and mice that carry ticks are playing a role too. The major Lyme-spreading tick was found in just 30 percent of American counties in 1998, but nearly 50 percent by 2016. Last month, the Centers for Disease Control and Prevention welcomed summer by announcing a three-fold increase in the number of people infected with vector-borne diseases—vectors here being ticks, mosquitoes, and their colleagues—between 2004 and 2016, noting that public-health bodies are woefully underprepared for the growing epidemiological menace. Conditions spread by ticks constitute a small rogues’ gallery of disease, including low-profile up-and-comers like babesiosis and old favorites like Rocky Mountain spotted fever. But Lyme disease remains the biggest vector-borne game in town: some 30,000 cases are reported every year in the U.S., and studies estimate the actual number is ten times that. If you don’t catch it early, long-term Lyme symptoms include arthritic joint pain, brain inflammation, and facial palsy. And it’s true: dogs do enjoy better protection against Lyme than we do, thanks to readily available vaccination. Why no equivalent for dog’s best friend? In fact, a safe, largely effective Lyme vaccine was cleared for use 20 years ago—and disappeared shortly thereafter. Lymerix, as it was called, had the misfortune of showing up at a crucial juncture of the anti-vax era—i.e., shortly after the 1998 publication of the infamous (and since retracted) report in the Lancet falsely linking the measles-mumps-rubella vaccine with autism. That year, during the FDA approval process for Lymerix, some members of the reviewing panel expressed concerns the vaccine could theoretically trigger an immune response leading to arthritis. The drug having tested safe in clinical trials and this risk being, again, purely hypothetical, the panel approved it unanimously. Word got out, though, to a public then in its first flush of vaccine panic. Soon enough, news reports were linking Lymerix to isolated cases of fever and joint pain, and sales of the product fell through the floor. 
A 2007 study found no increased incidence of arthritis in vaccine recipients, but the Lymerix ship had long since sailed: facing lawsuits and turning relatively little profit, its manufacturer pulled it off the market in 2002. (Other factors that probably didn’t help its chances: the vaccine was expensive, and even after the 12-month, three-shot regimen required for full protection you still had a non-negligible 20 percent chance of remaining susceptible to Lyme disease anyway.) Where’s that leave us? A French company is developing a Lyme vaccine that might provide even better protection, though its CEO acknowledges it will be “hard to convince anti-vax lobbyists,” and the drug’s years away from public release. And while Lyme may be the worst tick-borne offender out there, it may not hold that title for long: meet the Powassan virus, currently still rare but on the rise in the Northeast. Yale epidemiologist Durland Fish wrote recently that Powassan “could surpass Lyme disease in its impact upon public health”: infection leads to encephalitis, causing fatality in 10 percent of cases and permanent brain damage in fully half. There’s no vaccine for this one either, although in Europe they’re vaccinating against a similar form of encephalitis, so we’ve at least got a starting place. But you may as well get used to those full-body tick checks, Bill. One worries any sensible prophylactic treatment could meet its match in a vaccine-wary American populace, just as Lyme vaccine did and may again. You can wear bug spray, tall socks, and long sleeves, but, as they say, there’s no cure for stupid.
<urn:uuid:7b5a6eae-3d0b-4336-9271-c736697ac7d6>
CC-MAIN-2023-50
https://anvil.connectsavannah.com/extras/the-straight-dope/do-dogs-have-better-protection-from-ticks-than-people-do/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.95327
1,133
2.578125
3
High atop Fort Mountain lie the remains of a cyclopean wall approximately 855 feet in length. While nobody knows who built the zigzagging wall or why, many theories have been tossed around as to its existence. All that's known for sure is that the wall, commonly called the Rock Fort, was made from rocks gathered from the mountain's summit. Someone spent a lot of time stacking the stones, but as of yet no one knows just who that was.
Currently, the most frequent explanation for the enigmatic barrier is that Native Americans built it sometime around the sixth century for religious ceremonies. Before 1917, some believed Spanish conquistador Hernando de Soto built the wall as a fortification to protect himself or his men from the Cherokee Indians that resided in the area. It's known he had been scouring the land for gold, and the mountain's supply of gold and silver made it a logical place for him to be. Today, however, many believe he never actually made it far enough north to have created the wall. Another theory is that the wall is the work of Madoc, the legendary Welsh explorer. Still others claim the "moon-eyed people" from Cherokee mythology were the wall's architects.
Despite its murky origins, one thing is for sure: the fort is definitely an enchanting place. If you visit today, you can see what remains of the wall. Though much of it has now fallen, if you use your imagination you can see it must have been something quite special in its heyday.
Know Before You Go
Because this is within the State Park, there is a parking fee of $5. There is also a small hike to get to the wall after you park, and at this time there is no wheelchair access. The hike isn't too bad and is well worth it. Also be sure to check out the Fire Tower that is north of the wall; just follow the signs. Beyond that is a beautiful lookout that shows off the city of Chatsworth and the amazing landscape that surrounds the entire area.
<urn:uuid:d2f43d22-bdc0-4f33-b77f-5a96c7809e03>
CC-MAIN-2023-50
https://assets.atlasobscura.com/places/fort-mountain-state-park
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.979251
429
3.578125
4
Help Seabirds by Protecting Two Forage Fish Species Please urge the Pacific Fishery Management Council (PFMC) and National Marine Fisheries Service (NMFS) to protect the shortbelly rockfish and northern anchovy, two critically important fish for seabirds. West coast seabirds like Marbled Murrelet, Common Murre, and Rhinoceros Auklet rely on northern anchovy and shortbelly rockfish to feed their chicks. These little fish are the base of the food chain for seabirds, marine mammals, and larger fish like salmon. Forage fish support the health of the entire ocean ecosystem, making them key to supporting the economies of coastal communities. When these fish are plentiful, seabirds and other wildlife thrive. When they’re scarce, many species often suffer from starvation.
<urn:uuid:47d44f20-34ff-4b22-8690-484019f144d2>
CC-MAIN-2023-50
https://audubonportland.org/take-action/help-seabirds-by-protecting-two-forage-fish-species/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.893751
172
3.578125
4
Passing healthy habits on to your children sets them up for better, healthier lives as they grow. Unfortunately, babies don’t understand the value of these habits; they just want to know why there’s unfamiliar food on their plates. Improving your baby’s eating habits is an ongoing process that takes patience, but the results are worth it for both you and your child. Make mealtimes healthier and more enjoyable with these tips that can improve your baby’s eating habits.
Introducing a variety of foods early is crucial to avoiding picky eating habits as your kids grow older. Once your child is ready to start exploring solid foods, encourage a little adventure with the foods you eat. Try different types of foods along with a variety of flavors and textures. Introduce new things in small amounts so that your little one can take their time exploring. If your baby makes a face when trying something new, they might just be surprised at the change. If they turn their nose up at a specific food one day, roll with the rejection and simply try again the next day. Staying patient will create a positive and encouraging experience as your little one discovers new foods.
Enjoy Meals Together
Babies learn a lot by watching. If you want your baby to enjoy healthier foods—especially foods with unique textures and flavors—be a good role model with your own eating habits. Sit together at the table and serve the same meal to yourself and your child. Your little one will be much more enthusiastic about trying new foods when they get to copy you.
Add Color, Convenience, and Fun to Mealtimes
One of the most important tips that can improve your baby’s eating habits is to create a relaxing and encouraging atmosphere during meals. The more pressure you put on your baby to eat certain foods, the less likely they are to do so. Keep things calm by staying patient and positive. You can also make mealtimes more fun by using colorful dishware like suction-cup bamboo baby plates. The fun colors and gentle materials are perfect for your baby, while the suction-cup base helps prevent messes and makes mealtime less stressful for you.
Building healthy habits starts early. With a little patience and encouragement, you can teach your child healthy and valuable eating habits that will benefit them for the rest of their lives.
<urn:uuid:41679f2b-f580-44f4-8f8d-ff87d6e9e97f>
CC-MAIN-2023-50
https://avanchy.com/blogs/avanchy/tips-that-can-improve-your-babys-eating-habits
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.950913
482
2.875
3
The object of this pamphlet is to stimulate the use of Portland Cement Concrete as a material for the building purposes on landed estates in Great Britain and Ireland, on the grounds of its applicability and economy, more especially where suitable materials are on the spot and labour plentiful. Having had considerable experience of its uses, I venture to recommend it, and beg to submit a few well-chosen and inexpensive designs for the consideration of those who may have occasion to erect such buildings and require to study economy. (preface, vii)
The accompanying designs are well worth a look for those interested in either concrete or nineteenth-century architecture. Birch's drawings remain picturesque, employing elements such as half-timbering and rustication. If I had not known Birch was advocating for the use of concrete, I would not have guessed that it was the intended material for construction.
Beauty is probably not the first word that comes to mind when people think of concrete, but one look through the photographs in Le Béton en Représentation: La Mémoire Photographique de l'Entreprise Hennebique 1890-1930 is enough to make you fall in love with the material. François Hennebique's "ferro-concrete", a system of steel-reinforced concrete patented in 1892 as Béton Armé, helped popularize the use of concrete in Europe and the Near East. In addition to establishing his own company, Hennebique cannily used a network of agents to license the use of his patented system to firms constructing their own designs, with the amazing result that between 1892 and 1902 over 7,000 structures were built using the Hennebique system.
You might think, that is all very interesting, but this book is in French and I can't read it. Why would I check this out? Check it out for the amazing, inspiring photographs! The commercial photographers who documented these buildings may have only been interested in creating realistic images to use in advertising and promotional materials, but these images are so beautiful! Photography captures the prismatic surfaces and stark lines of concrete silos, bunkers, and staircases and transforms bland industrial structures into stunning modernist compositions. The fine tonal gradations of concrete are captured so exquisitely by the medium of black and white photography that the hard shell metamorphoses into a sensual surface, almost suggestive of human skin.
To be inspired by more images like these, Le Béton en Représentation can be found in the library catalog here.
Blog from the University of Texas Architecture and Planning Library
<urn:uuid:e4202d44-7bcf-4433-beb7-b9c0d3950803>
CC-MAIN-2023-50
https://battlehall.lib.utexas.edu/tag/concrete/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.951719
538
2.59375
3
According to a recent Pew Research Center report, children in the United States are more likely to live in a single-parent household than children in at least 129 other countries, including China, India, and all of Europe. Maintaining a household as a single parent presents a number of challenges, the largest of which for many single-parent families are financial. According to the Bureau of Labor Statistics’ 2018 Consumer Expenditure Survey, single parents with minor children devote a larger share of their spending on necessary costs of living, like food and housing, than married parents do. COVID-19 has only intensified the challenges faced by single parents. Those who have lost their jobs might struggle to cover living expenses, while those still employed must cope with a lack of child care options and schools being closed. In spite of the strain experienced by many single parents, the percentage of single-parent households has tripled from less than 10 percent of families with children in 1950 to about 30 percent in 2019. The 22.7 percent of households with children that were headed by a single mother last year represents the lowest percentage of single-mother households since 2003. On the other hand, single-father households reached an all-time high of 7.4 percent in 2019. In 1950, single-father households accounted for only 1.1 percent of households with kids. The proportion of single-parent families also varies widely by race and ethnicity. According to the U.S. Census Bureau’s Current Population Survey, about one in four white families, and one in three Hispanic families, are headed by a single parent. Meanwhile, nearly 60 percent of black families are headed by a single parent, although this represents the lowest value since 1982. The percentage of white and Hispanic single-parent families has increased by 5.5 and 4.1 percentage points, respectively, over the same time period. Geographically, Southern states tend to have the highest percentage of single-parent households, with Louisiana leading the nation at just over 40 percent. Mississippi, South Carolina, Arkansas, Florida, and Georgia all report percentages of at least 35 percent. Only 19 percent of family households in Utah are headed by a single parent. Hawaii and Idaho have the second and third lowest rates of single parenting, both at about 26 percent. To determine which states have the most single parents, researchers at Smartest Dollar analyzed data from the U.S. Census Bureau’s 2018 American Community Survey. For each state, researchers calculated the percentage of families with children under 18 living in households headed by a single parent. The analysis also includes the percentage of households with single moms, single dads, and the total number of households headed by a single parent in each location. Consistent with state-level trends, researchers found that a number of cities in the South report a high percentage of single-parent households. Several cities in the Midwest also appear atop the rankings, including Cleveland with the national high of 73.3 percent of single-parent households. The analysis found that in Pennsylvania, 33.1% of families are headed by a single parent, which is above the national average of 32.1%. 
Here is a summary of the data for Pennsylvania: - Percentage of families with a single parent: 33.1% - Total families with a single parent: 415,373 - Percentage of families with a single mom: 24.0% - Percentage of families with a single dad: 9.1% For reference, here are the statistics for the entire United States: - Percentage of families with a single parent: 32.1% - Total families with a single parent: 10,519,285 - Percentage of families with a single mom: 23.7% - Percentage of families with a single dad: 8.4% For more information, a detailed methodology, and complete results, you can find the original report on Smartest Dollar’s website: https://www.smartestdollar.
<urn:uuid:fc1b17d2-22e3-4bc4-a376-ffc1e1cdc5c0>
CC-MAIN-2023-50
https://beavercountyradio.com/news/in-pennsylvania-33-1-of-families-are-headed-by-a-single-parent/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.956968
810
2.9375
3
No matter how much water you have stored, eventually you are going to run out. In an emergency situation, you may find yourself without clean and safe drinking water. Knowing how to find and treat additional sources of water will help you "be ready" for an emergency. Remember to always start with the cleanest water you can find.
- Learn the three steps of treating water and how to make it safe for drinking, cooking, first aid, and hygiene.
- Learn about different kinds of water filters and the situations for which they are best suited.
- Learn different methods of purifying water and when to use them.
- Talk to your local emergency preparedness or outdoor retailer and purchase water treatment supplies for your disaster supply kit and for at-home use.
- BONUS: Purchase a large-capacity water filter for home use that filters down to at least a 0.3 or 0.2 micron size.

Contamination in Water
Water can be contaminated with organisms such as protozoa (about 3 microns in size or larger) like Cryptosporidium and Giardia. These relatively large microorganisms are easily removed by a quality filter, but some chemical purification processes may not be potent enough to kill them. Water can also be contaminated with smaller bacteria (0.4 microns in diameter and up) like E. coli, Salmonella, and the bacteria that cause cholera. Most of these can be removed with a high-quality water filter, but a purification step is recommended as well. Smaller still are viruses (about 0.01 microns) like rotavirus and hepatitis A and E. Dangerous viruses are generally not found in North American water systems. Very few water filters can remove something as small as a virus, but almost any effective purification process will kill them.

Contamination can also come from chemicals and heavy metals. Some of these occur naturally in water, and some come from industrial or agricultural runoff. These can only be removed by filtration. Finally, sediment contamination (dirt, sand, organic matter) can be stirred up by the movement of the water or by disturbing the bottom of streams or other outdoor water sources. Sediment can be removed during the pre-filter and filter processes.

Whatever your source of water, don't assume the water is clean just because it's flowing and looks clean. Treat all outside water sources with caution. Even if it looks clean, TREAT IT! If there is any question as to the safety of your water, from inside or outside sources, treat it. You can't afford to get sick, especially during an emergency situation.

Water treatment is a three-part process. All three parts are necessary, especially if you are unsure of the quality of your water source and what is contaminating it. Before anything else, always choose the CLEANEST water you can find.

Pre-filtration is the first process. It removes the large particles, the "chunks," and lengthens the useful life of your filter. Your filter can easily remove these large, visible particles, but it will clog much sooner. Save money and save your filter by pre-filtering. Allow suspended particles to settle to the bottom of the container, or let them collect at the top to be skimmed off. Then pour the water through cloth, cotton, cheesecloth, or a t-shirt; anything clean will work. Have a container ready to catch the pre-filtered water. Remember, this is only pre-filtered water. It is not yet ready for drinking. The next two processes can be done in either order depending on your situation.
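To make the size comparison above concrete, here is a small illustrative sketch (not from the original article) that checks which contaminant classes a filter of a given pore rating can physically trap, using the approximate micron figures quoted above.

```python
# Illustrative only: rough size thresholds taken from the text above.
# A filter can physically block organisms larger than its pore size.
CONTAMINANT_SIZE_MICRONS = {
    "protozoa (Cryptosporidium, Giardia)": 3.0,
    "bacteria (E. coli, Salmonella)": 0.4,
    "viruses (rotavirus, hepatitis A and E)": 0.01,
}

def blocked_by_filter(pore_size_microns):
    """Return the contaminant classes a filter of this pore size should trap."""
    return [name for name, size in CONTAMINANT_SIZE_MICRONS.items()
            if size >= pore_size_microns]

# A 0.2-micron filter (the home-filter bonus goal above) traps protozoa and
# bacteria, but not viruses, which is why a purification step is still recommended.
print(blocked_by_filter(0.2))
```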
Filtration is a physical barrier that traps particles and contaminants as water passes through. The lower the micron rating of the filter, the smaller the particles it will filter out. There are many different options, levels of quality, sizes, types, and styles of water filters. Depending on the filter, they can remove chemicals, sediments, protozoa, bacteria, and heavy metals. Most filters do not remove viruses, but viruses are generally not an issue in North American water. All filters have a suggested water capacity before replacement. How well you pre-filter your water will determine whether your filter's life span falls short of or exceeds its suggested replacement time. When water flow greatly diminishes or pumping becomes extremely difficult, it is time to replace or clean your filter. Refer to the manufacturer's directions for replacement or cleaning. Consider your needs, abilities, the number of people you will be supplying water for, potential sources of water, and potential contaminants when deciding which water filters to purchase.
- Gravity-fed filters use gravity to pull the water through the filter. They tend to be larger and sit on a tabletop or hang. They are slower than other filters but do not require constant attention. They work well at home, at work, or when camping with a group.
- Pump filters are usually what people think of as water filters. The hand-pumping action pushes water through the filter. They are faster than gravity-fed filters and can be used with a small group of people. This is the style you would want in a group emergency evacuation kit.
- Straw filters are meant to be used by only one person. One end of the filter goes in the water while the user sucks on the other end like a straw, drawing the water through the filter. Some of these can also be used in the line of a hydration bladder for filtration on the go. To be most effective, straw filters need to have their source water purified first.
- Water bottle filters are similar to straw filters in that the user draws the water through the filter while sucking on the mouthpiece. The source water is contained in the bottle around the filter. To be most effective, these too need to have their source water purified first.
- Ceramic and fiberglass filters use different media to block particles as the water passes through them. Microscopic pores in the materials allow the water to pass while catching the contaminants. Ceramic filters have the added benefit of being rinsable, which prolongs their usefulness. Fiberglass and other types of filters need to be replaced periodically according to the manufacturer's instructions. Gravity-fed, pump, straw, and water bottle filters usually use some combination of these materials to filter water.
- Reverse osmosis is a process used in a home's pressurized in-line system. It will work as long as there is pressure in the water line. Most systems do not require electrical power unless an electric pump is needed.
- Activated carbon adsorbs chemicals and dissolved minerals as the water passes through. It is often used to make tap water taste better and to remove bad odors. Activated carbon filters by themselves are NOT adequate for removing harmful microorganisms. Some gravity-fed or pump filters use activated carbon as part of their filtration process to help remove chemicals and dissolved minerals while improving taste.

Avoid "homemade" filters unless there is no other option. There is a lot of misinformation about how to make a filter yourself.
If you choose, you can learn the principles so you know how to build one if you ever need to, but treat it as a last resort. Spend the money now to get quality filters for disaster supply kits and for your other emergency needs. Ask local outdoor or preparedness dealers to help you find the water filters that are right for your preparedness needs and budget.

Purification involves a heat, UV light, or chemical process that kills the organisms living in questionable water. Most purification processes do not remove anything from the water.
- Heat purification requires fuel, which may be a scarce commodity in an emergency.
- A rolling boil (recommended) for three minutes at Utah's average elevation will kill all organisms in the water. After the three minutes, let the water cool before using it. Higher elevations require a longer boiling time. A lid helps water boil faster and prevents loss from evaporation. Remember, boiling does not remove anything from the water; it just kills what is in it. Sediments, chemicals, heavy metals, and salts remain if they have not already been filtered out. Distilling is more involved, but it does a better job than boiling alone because it separates the water from the contaminants. While boiling will kill most microorganisms in water, distillation will also remove heavy metals, salts, and most other chemicals. Distillation involves boiling water and then collecting only the vapor that condenses; the condensed vapor will not include salt or most other impurities. Easy distiller: Fill a pot halfway with water. Tie a cup to the handle on the pot's concave lid so that the cup will hang right-side-up when the lid is upside-down. Make sure the cup is not dangling in the water. Boil the water for 20 minutes. The water that drips from the lid into the cup is distilled.
- Ultraviolet (UV) light or sunlight can also be used to purify water.
- There are commercial products available that use UV light, but they require batteries. UV light pens can be used for small quantities of water and are popular with travelers for use at restaurants in foreign countries when discretion is needed. They are not practical for large quantities of water.
- UV sunlight and a clear plastic two-liter soda bottle can be used to purify water. It takes time and a clear, sunny day, but it works. Know that it is not very effective with cloudy water, so pre-filter the water first. The bottle must be clear plastic and no larger than two liters, though smaller bottles work. Lay the bottle on its side, perpendicular to the sun's rays, for the greatest sunlight penetration. It is more effective if the bottle is laid on a reflective surface like foil or a mirror. Wait six hours on a sunny day, or two full days if it is partly cloudy.
- Chemical purification does not remove anything from the water; it only kills what is in it. Be sure to closely follow the manufacturer's directions for their products.
- Iodine has been used to chemically treat water for many years. It has a 4- to 5-year shelf life in an unopened bottle, about 1 year in an opened bottle, and is susceptible to heat, light, and moisture. It comes in tablets or drops. Iodine kills most microorganisms but not larger protozoa, so if you are unsure of what is in the water, be sure to filter it as well. Do not use iodine for longer than a few weeks straight; it is intended for short-term emergency use only. Do not use it if you are pregnant, have an allergy to iodine, or have a thyroid problem.
- Sodium hypochlorite (liquid bleach) has a six-month to one-year shelf life.
It rapidly begins to break down into nothing more than salt and water, so don't purchase a large supply unless you plan on using it and rotating it. Use only pure bleach, not scented and not colored. Know also that it may not be effective against larger protozoa by itself. To use, add about eight drops per gallon of water. The Red Cross website says to use 16 drops, and some bleach manufacturers say six drops; it depends on the potency of your bleach. After adding the drops, stir and wait 30 minutes. If there is not a faint smell of bleach, add about eight more drops and wait another 30 minutes. If there is still no faint bleach smell, the water is too contaminated for drinking and should be disposed of. Please note: Calcium hypochlorite (dry bleach) is NOT the same thing as sodium hypochlorite (liquid bleach). Some people want to use calcium hypochlorite instead because it has a longer shelf life than liquid bleach. However, calcium hypochlorite is difficult to dose to the correct concentration in drinking water: too little does not effectively treat the water, and too much will make you sick or worse. You would need pool water testing equipment to get the correct parts per million. For this purpose calcium hypochlorite is not recommended; if you insist, talk to the manufacturer about its proper use in drinking water purification. Additionally, it may not be effective against all protozoa by itself.
- Chlorine dioxide (recommended) has a 4- to 5-year shelf life and comes in liquid or tablets. The liquid form treats water faster (about 30 minutes), but it is a little more work: you must mix the two reacting chemicals together and let them sit for 15 minutes before pouring them into the water, whereas a tablet is simply dropped in but requires about four hours of processing. If this is your preferred method of water purification, use the liquid for at-home treatment and the tablets for disaster supply kits and backpacking. Chlorine dioxide is iodine- and chlorine-free. It works by releasing nascent oxygen, a strong oxidant and powerful germicidal agent, into the water. It has greater pathogen-killing power than iodine or chlorine, yet it is much safer for continual use as long as it is used at the correct dosages in drinking water. Unlike iodine, chlorine dioxide does not discolor water, nor does it give water an unpleasant taste; it is often used to improve the taste of water.
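As a quick illustration of the dosing arithmetic above, the sketch below simply scales the drops-per-gallon figure to larger containers. The eight-drop figure is only the article's starting point; actual dosing depends on your bleach's potency, so always verify the faint chlorine smell after 30 minutes as described.

```python
# Sketch of the dosing arithmetic above. Drops-per-gallon varies with bleach
# potency (the article cites 6 to 16 drops), so treat 8 as a starting point and
# always confirm a faint chlorine smell after 30 minutes before drinking.
def bleach_drops(gallons, drops_per_gallon=8):
    return round(gallons * drops_per_gallon)

for volume in (1, 5, 55):  # e.g., a jug, a water-cooler bottle, a storage barrel
    print(f"{volume} gal -> about {bleach_drops(volume)} drops, then wait 30 minutes")
```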
<urn:uuid:d6004cbe-04fa-48ce-b85d-3886e9c1c156>
CC-MAIN-2023-50
https://bereadyutah.org/family-preparedness/12-areas-of-preparedness/water/water-treatment/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.932644
2,762
3.734375
4
This page compiles our content related to dysrhythmias (cardiac). For further information on diagnosis and treatment, follow the links below to our full BMJ Best Practice topics on the relevant conditions and symptoms. Cardiac dysrhythmia (or arrhythmia) is a disturbance in the rate of cardiac muscle contractions, or any variation from the normal rhythm or rate of heart beat. The term encompasses abnormal regular and irregular rhythms as well as loss of rhythm. Cardiac dysrhythmias are found in a vast range of conditions and may be defined in a number of ways, including by site of origin (e.g., supraventricular, ventricular, atrial), mechanism of disturbance (e.g., fibrillation, automaticity, reentry or triggered activity), rate of disturbance (e.g., tachycardia, bradycardia) and electrocardiogram appearance (e.g., long QT syndrome). Dysrhythmias may be acute or chronic, and some (especially ventricular arrhythmias) may be life-threatening. Sudden cardiac death is the most severe manifestation of ventricular arrhythmias (e.g., ventricular fibrillation). Characterized as a rapid regular rhythm arising from a discrete area within the atria. It occurs in a wide range of clinical conditions, including catecholamine excess, digoxin toxicity, pediatric congenital heart disease, and cardiomyopathy. On ECG, P waves are visible before every QRS, and different from the P waves in sinus rhythm. Onset and termination of arrhythmia are abrupt. Response to vagal maneuvers and adenosine may be evaluated to exclude alternative diagnoses. Typical atrial flutter (counterclockwise cavotricuspid isthmus-dependent atrial flutter) is a macroreentrant atrial tachycardia with atrial rates from 250 to 320 bpm. Ventricular rates range from 120 to 160 bpm, and associated 2:1 atrioventricular block is common. ECG shows negatively directed saw-tooth atrial deflections (f waves) seen in leads II, III and aVF, with positively directed deflections in lead V1. This rhythm is closely related to atrial fibrillation. Atrial fibrillation (AF) is a supraventricular tachyarrhythmia characterized by uncoordinated atrial activation and variable ventricular response. Acute AF is defined as a new onset or a first detectable episode of AF, whether symptomatic or not. ECG shows absent P waves; presence of rapidly oscillating fibrillatory waves that vary in amplitude, shape and timing; and irregularly irregular QRS complexes. Atrial fibrillation (AF) is a supraventricular tachyarrhythmia characterized by uncoordinated atrial activation and variable ventricular response. Chronic AF may be paroxysmal (defined as recurrent AF >1 episode ≥30 seconds in duration that terminates spontaneously within 7 days), persistent (AF that is sustained >7 days or lasts <7 days but necessitates pharmacologic or electrical cardioversion), longstanding persistent (a subgroup of persistent AF, defined as continuous AF >1 year in duration), or permanent (refractory to cardioversion or accepted as a final rhythm). ECG shows P waves are absent and are replaced by rapid fibrillatory waves that vary in size, shape and timing, leading to an irregular ventricular response when atrioventricular conduction is intact. The term "lone AF" applies to patients younger than 60 years of age without echocardiographic or clinical evidence of cardiac, pulmonary, or circulatory disease. However, because definitions are variable, and all patients with AF have some form of pathophysiological basis, the term “lone AF” is potentially confusing and should not be used. 
The most common arrhythmias diagnosed in Wolff-Parkinson-White syndrome are atrioventricular reentrant tachycardia, atrial flutter and atrial fibrillation. Typically the heart rate varies between 150 and 240 bpm. Congenital cardiac abnormalities are strong risk factors (especially Ebstein anomaly). Ectopic ventricular rhythm faster than 100 bpm lasting at least 30 seconds or requiring termination earlier due to hemodynamic instability. Ventricular tachycardia (VT) is defined on ECG by the presence of a wide complex tachycardia (QRS ≥120 msec) at a rate ≥100 bpm. Usually observed in ischemic and non-ischemic cardiomyopathy, but idiopathic VT may also be observed in patients without structural heart disease. Ectopic ventricular rhythm with wide QRS complex (≥120 milliseconds), rate faster than 100 bpm lasting for at least 3 consecutive beats but terminating spontaneously in less than 30 seconds. ECG shows nonsustained ventricular tachycardia with a single QRS (monomorphic) or changing QRS (polymorphic) morphology at cycle length between 600 and 180 ms. It may occur in the absence of any underlying heart disease, but is more commonly associated with ischemic and non-ischemic heart disease; known genetic disorders such as long QT syndrome, Brugada syndrome, and arrhythmogenic right ventricular cardiomyopathy; congenital heart disease; metabolic problems, including drug toxicity; or electrolyte imbalance. Characterized by a prolonged QT interval on ECG and may be congenital or acquired. In congenital LQTS, mutations within 15 identified genes result in a variety of channelopathies affecting myocardial repolarization, thus prolonging the QT interval. Acquired LQTS may occur secondary to ingestion of QT interval-prolonging drugs, electrolyte imbalances, or bradyarrhythmias. ECG shows a prolonged QT interval and abnormal T-wave morphology. Patients are at increased risk for syncope, ventricular arrhythmias (e.g., torsade de pointes) and sudden cardiac death. Unless there is an identifiable reversible cause, treatment primarily involves lifestyle modification and beta-blocker therapy with the implantation of a cardioverter-defibrillator (ICD) in selected cases. Sudden cardiac arrest is a sudden state of circulatory failure due to a loss of cardiac systolic function. Can result from 4 specific cardiac rhythm disturbances: ventricular fibrillation, pulseless ventricular tachycardia, pulseless electrical activity (electrical activity and no cardiac output) and asystole. Any heart rhythm slower than 50 bpm, even if transient. Some consider bradycardia to be a heart rate <60 bpm; however, this is in dispute and most consider rates of <50 bpm to represent bradycardia. Some patients, even if asymptomatic, may require interventions (e.g., pacemaker) to prevent life-threatening complications. Rhythm disturbances responsible may be acute, chronic or paroxysmal long-standing. They include: sinus node dysfunction (sinus bradycardia, sinoatrial nodal pauses/arrest, sinoatrial nodal exit block); atrioventricular (AV) conduction disturbance (first degree AV-block, second degree AV-block [Mobitz I, Mobitz II, 2:1 block, high degree AV-block], third degree AV-block); and AV dissociation (isorhythmic dissociation, interference dissociation). Impaired conduction from the atria to the ventricles, with various degrees of severity. Some patients may be asymptomatic. Signs and symptoms include heart rate <40 bpm, high (or, less commonly, low) blood pressure, cannon A waves, nausea or vomiting, and hypoxemia. 
Atrioventricular block may be classified by degree, although the degree of block and the severity of symptoms are not necessarily directly related. Strong risk factors include AV-nodal blocking agents and antiarrhythmic medications, and increased vagal tone. It may occur in patients with Lyme disease. Palpitations are defined as the abnormal awareness of one's own heartbeat. May present in non-life-threatening cardiac conditions (e.g., ventricular and atrial premature contractions, and supraventricular tachycardias) and potentially life-threatening conditions (e.g., ventricular tachycardia, hypertrophic cardiomyopathy, Brugada syndrome, and long QT syndrome). Detailed evaluation of palpitations (e.g., rate and degree of regularity, association with position, presence on awakening) can help diagnose the type of arrhythmia present. Tachycardia, generally defined as a heart rate ≥100 bpm, can be a normal physiological response to a systemic process or a manifestation of underlying pathology. Several methods of classification of tachyarrhythmia are helpful in organising and assessing tachycardias. These include: sinus versus non-sinus causes; atrial versus ventricular arrhythmias; narrow- versus wide-complex tachycardias; regular versus irregular arrhythmias; and classification based on the site of origin of the arrhythmia. Digoxin toxicity typically presents with components of gastrointestinal, constitutional and/or cardiovascular symptoms. Paroxysmal atrial tachycardia is a classic arrhythmogenic toxic manifestation of digoxin overdose. Initial workup should focus on determining whether the patient is hemodynamically compromised from the rhythm itself, and if so, consideration of digoxin immune antibody fragments (Fab) therapy if the patient is found to be digoxin toxic. BMJ Publishing Group. This overview has been compiled using the information in existing sub-topics. Use of this content is subject to our disclaimer.
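Several of the entries above hinge on simple numeric cutoffs: a rate of at least 100 bpm for tachycardia, below 50 bpm for bradycardia, and a QRS of at least 120 ms for a wide complex. The toy sketch below merely restates those thresholds; it is an illustration, not a clinical decision tool, and real rhythm diagnosis requires a full ECG and clinical context.

```python
# Toy illustration of the numeric cutoffs quoted above (rate >= 100 bpm for
# tachycardia, < 50 bpm for bradycardia, QRS >= 120 ms for a wide complex).
# Not a diagnostic tool.
def describe_rhythm(rate_bpm, qrs_ms):
    if rate_bpm >= 100:
        width = "wide-complex" if qrs_ms >= 120 else "narrow-complex"
        return f"{width} tachycardia"
    if rate_bpm < 50:
        return "bradycardia"
    return "rate within the usual range"

print(describe_rhythm(rate_bpm=150, qrs_ms=140))  # wide-complex tachycardia
print(describe_rhythm(rate_bpm=42, qrs_ms=90))    # bradycardia
```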
<urn:uuid:9c99c719-231f-4afe-8250-7f6cbc44c365>
CC-MAIN-2023-50
https://bestpractice.bmj.com/topics/en-us/837
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.850769
2,057
3.609375
4
Memorial Day was originally called Decoration Day and became a federal holiday after the American Civil War to commemorate the soldiers who died in the war. By the 20th century, Memorial Day was extended to honor all Americans who died while serving in our country’s armed forces. Nearly one in four seniors is a veteran, making Memorial Day a special holiday among retirement communities. Here are some of the ways you can observe and honor the holiday with loved ones this year. A great way to show your support for veterans is by attending a local Memorial Day parade. Wave your American flag and celebrate those who have served and continue to serve our country. Children and grandchildren will love the fun atmosphere and learning about American history. Memorial Day also marks the unofficial beginning of summer, so be sure to dress for warm weather and bring plenty of water and sunscreen. Red, White and Blue Picnic Many retirement communities observe Memorial Day by hosting a picnic for residents. Enjoy red, white, and blue themed food and drinks while gathering with family and friends to celebrate and honor the holiday. Visit Local Memorials Another great way to observe Memorial Day is by visiting the memorials or grave sites of veterans. Honoring our fallen heroes with American flags or flowers is a special way to remember their service and sacrifice. Observe the National Moment of Remembrance To honor those who have served, observe the National Moment of Remembrance on Memorial Day. At 3 p.m. local time, all Americans are asked to observe a moment of remembrance for those who have died while serving. During this time, there is a moment of silence followed by the playing of the “Taps” bugle call. Write a Letter to Veterans A special way to pay our respect to veterans and current servicemen and women is by sending a care package or handwritten letter. Organizations like Operation Gratitude send thousands of letters and packages to veterans and individuals currently in service. Writing these letters is an easy and fun activity for seniors and grandchildren to complete together on Memorial Day.
<urn:uuid:7f544b95-28b2-4d7d-8d72-01b472a55af0>
CC-MAIN-2023-50
https://bethanylutheranvillage.org/5-ideas-for-celebrating-memorial-day-with-seniors/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.946386
415
2.578125
3
International Institute of Tropical Agriculture Research Repository

Thermal conductivity, thermal diffusivity, and thermal capacity of some Nigerian soils

Thermal conductivity increased with increasing soil water content. Clayey soils had lower thermal conductivity than sandy soils at all water levels studied. Thermal conductivity ranged from 0.37 to 1.42 for sandy loam, from 0.37 to 1.90 for loam, from 0.38 to 1.71 for sandy clay loam, and from 0.39 to 0.41 mcal/(s·cm·°C) for clay soils at water contents from 0.02 to 0.16 cm³ cm⁻³. Thermal conductivity differences among soils were smaller at lower soil water contents than at higher water contents. Thermal conductivity did not continuously increase with water content for the washed sand. Soil containing gravel had lower thermal conductivity than gravel-free soil. Thermal conductivity for a gravelly soil measured in situ was appreciably lower than that of the sieved and homogeneous laboratory soil columns. Thermal diffusivity of sandy or loamy soils increased with water content to a peak and then decreased with further increases in water content. Soils of fine texture, however, did not exhibit a distinct thermal diffusivity peak. Volumetric heat capacity calculated from measured values of thermal conductivity and diffusivity agreed closely with values estimated from the volume fractions of soil components (by the de Vries equation) for coarse- to medium-textured soils, but not for fine-textured soils. At air-dry wetness, clayey soils generally had higher thermal capacity than sandy soils.

Permanent link to this item: https://hdl.handle.net/20.500.12478/1880

Showing items related by title, author, creator and subject:

Plantain (Musa spp. AAB) bunch yield and root health response to combinations of physical, thermal and chemical sucker sanitation measures. Hauser, S. (2007). Plantain is an important staple food in West and Central Africa and the Congo Basin. The crop is largely grown in extensive 'slash and burn' systems, drawing heavily on the natural resource base, but is low-yielding due to its high susceptibility to a complex of root and corm pests and diseases. Farmers are unaware of nematodes, fungi and banana weevil eggs, and therefore practise virtually no pest or disease control. This study evaluated the effects on plantain bunch yield of factorial combinations ...

Diet dependent life history, feeding preference and thermal requirements of the predatory mite Neoseiulus baraki (Acari: Phytoseiidae). Domingos, C.A.; Melo, J.W.D.S.; Gondim, M.G.; Moraes, G.J. de; Hanna, R.; Lawson-Balagbo, L.M.; Schausberger, P. (2010). Neoseiulus baraki Athias-Henriot (Acari: Phytoseiidae) has been reported from the Americas, Africa and Asia, often in association with Aceria guerreronis Keifer (Acari: Eriophyidae), one of the most important pests of coconut (Cocos nucifera L.) in different parts of the world. That phytoseiid has been considered one of the most common predators associated with A. guerreronis in Brazil. The objective of this study was to evaluate the feeding preference and the effect of food items commonly present ...
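Returning to the main abstract: it compares volumetric heat capacity obtained two ways, from the measured conductivity and diffusivity (C = λ/α) and from a de Vries-style volume-fraction sum. The sketch below illustrates both calculations; the component heat capacities are typical textbook values, not figures from the paper, and the example fractions are invented.

```python
# Sketch of the two heat-capacity estimates the abstract compares. The component
# heat capacities below are typical textbook values (cal cm^-3 °C^-1) used as
# assumptions, not numbers taken from the paper.
def heat_capacity_from_measurements(k_mcal_per_s_cm_C, diffusivity_cm2_per_s):
    """C = k / alpha, returned in cal cm^-3 °C^-1 (k is given in mcal/(s·cm·°C))."""
    return (k_mcal_per_s_cm_C / 1000.0) / diffusivity_cm2_per_s

def heat_capacity_de_vries(x_mineral, x_organic, x_water):
    """Volume-fraction weighted sum (de Vries-style); air's contribution is negligible."""
    return 0.46 * x_mineral + 0.60 * x_organic + 1.00 * x_water

# Example: a soil with 50% mineral solids, 2% organic matter, 16% water by volume.
print(heat_capacity_de_vries(x_mineral=0.50, x_organic=0.02, x_water=0.16))  # ~0.40
```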
<urn:uuid:91bdedaf-3597-4d6e-9988-263a42070c60>
CC-MAIN-2023-50
https://biblio1.iita.org/handle/20.500.12478/1880
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.877144
780
2.890625
3
Ivermectin 12 mg is a medication primarily used to treat parasitic infections in humans and animals. It belongs to the class of drugs known as anthelmintics and is effective against various types of parasites, including worms and mites. Ivermectin 12 mg is often prescribed for conditions such as river blindness and strongyloidiasis. While its potential use for other purposes, like addressing COVID-19, has been debated, its efficacy and safety should be determined through rigorous scientific research. Always consult a healthcare professional before using Ivermectin 12 mg, adhere to prescribed dosages, and ensure its proper sourcing to ensure appropriate usage and minimize risks.
<urn:uuid:adf607a3-0cae-470d-92c5-3423e6e035fa>
CC-MAIN-2023-50
https://biomolecula.ru/authors/23182/favs
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.948997
145
2.5625
3
Modals are a special group of helping verbs: can and could, will and would, shall and should, may and might, and must. They are a tricky lot, bringing a range of nuances to sentences. Perhaps the most interesting feature of modals is the contrast between their epistemic and deontic uses. Epistemic uses of modals relate to the speakers’ beliefs and opinions about an action as being possible or necessary. Deontic uses relate to canons or principles and express what ought to be or is permitted, logically or ethically. A third use is known as dynamic and refers to ability, as in “Marketa can speak fluent Czech.” Together dynamic and deontic modals characterize real-world obligations, possibilities, and abilities as opposed to ones based on a speaker’s beliefs.

Let’s look at some examples. Must shows its epistemic sense when a speaker says something like “it must be raining” after seeing evidence of rain, like wet people or dripping umbrellas. Might and may can also be used epistemically, with weaker evidence, like cloudy skies. Must is used deontically to indicate necessity or obligation: “You must follow the instructions exactly,” while deontic uses of may indicate permission as in “You may borrow my book.” Can is most often used dynamically to indicate ability, but it is also used deontically to give or ask for permission or to make a command (“You can go now.”). Can can even be used epistemically as in “The bridge can be icy,” where it indicates a belief based on experience or knowledge.

Could functions much like can:
She could speak fluent Czech. (past ability or belief about a current possibility)
You could go now. (giving permission or acknowledging an opportunity)
The bridge could be icy. (belief based on experience or knowledge)

Will and shall are somewhat marginal modal verbs, used primarily for future actions and states, but even they show epistemic and deontic uses. The first sentence below reports a belief while the second sentence asserts an obligation.
The pizza will be here soon. (prediction based on belief)
Owners shall maintain the premises. (obligation based on law or contract)

Should behaves similarly, expressing both beliefs and obligations:
The pizza should be here soon. (prediction based on belief)
You should wash your hands and avoid touching your face. (obligation based on good practice)

Modal would is unlikely to be used deontically. Instead, would often expresses past habitual actions or a conditional future, viewed as a situation of high probability:
She would always send postcards. (habitual or usual action)
I would visit, but I’m afraid to fly. (belief)
That ringing would be the doorbell. (belief)

The distinction between dynamic, epistemic, and deontic is one of the most puzzling pieces of the verb system. For me, the easiest way to keep things straight is with the mnemonic ABC: for ability, belief, and canon. So when you encounter a modal, ask how it is being used. Is it A, B, or C?
<urn:uuid:3317405f-6d6d-47bd-bbaa-922eacbfdd8a>
CC-MAIN-2023-50
https://blog.oup.com/2021/04/the-abcs-of-modal-verbs/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.952533
688
3.890625
4
Yesterday, California Governor Jerry Brown spoke at the release of the updated State Water Action Plan, emphasizing the impact of climate change on water resources and the importance of applying climate science to water planning. At the release, one of his senior staff, Wade Crowfoot, put a finer point on it: “Water planning and infrastructure need to deal with the climate reality…longer and more severe droughts and flooding” This is interesting in light of decisions made by the Brown Administration’s own California Water Commission just a few weeks ago. Tasked with allocating $2.7 billion to new water infrastructure projects, the Commission approved regulations in the last days of 2015 that do include planning for climate change, but stop short of dealing with the “climate reality.” The irony is that California has some of the most proactive climate policies and some of the best climate science in the nation. Indeed, at a time when international consensus has been achieved around the real and present dangers of climate change and the need for global action, California should be positioning itself as a leader, not a laggard. Climate reality or climate fiction? While the Governor’s office is asking for water managers to plan for the “climate reality,” the California Water Commission approved draft regulations suggesting that climate change impacts do not occur after mid-century: “After 2050, climate conditions shall be assumed to remain at 2050 conditions.” (Excerpt from draft regulations for Water System Investment Program, approved by the California Water Commission on December 16, 2015) They also require project planners to build for what is being termed a “median” scenario—not for a range that includes more extreme scenarios that would significantly stress the system. These regulations are problematic for two main reasons. First, climate science tells us that the most significant changes in temperature, water supply, and water demand occur after 2050. Stopping the analysis of climate change at 2050 makes very little sense, particularly when project benefits can be calculated out to 2099. Secondly, when it comes to building long-lived infrastructure, we are most concerned about extreme conditions, not average or median conditions. Together, these provisions call into question whether new water infrastructure will be able to deal with the impacts of climate change. California has prepared for earthquake uncertainty, and can prepare for climate uncertainty Think for a minute about how we build buildings for earthquakes. Because California is crisscrossed with active earthquake faults, almost all buildings are constructed to withstand a severe earthquake, not a “median” earthquake. In addition, the uncertainty of whether, where, and with what force an earthquake will happen has not stopped the state from continuing to construct buildings, highways, or bridges. Rather, it has required a new level of analysis and robustness in building safety that is continually evaluated and updated. The same should be required when it comes to climate change. 
In fact, the Department of Water Resources’ Climate Change Technical Advisory Committee advised that water planning and infrastructure use a similar standard, what they called a “stress test framework,” to analyze the impact of future climate conditions on water resources: “A stress-test approach using scenarios of constructed extreme events along with analyses of vulnerability to these events, offers a vehicle to assess extremes in a planning process… Stress tests focus on identifying weaknesses and breaking points to the water system that stem from different facets of extreme events” (DWR Climate Change Technical Advisory Committee 2015, pgs. 41-42). When California voters approved the water bond, they provided a once-in-a-generation opportunity to secure our water needs well into the future. In discussions at the California Water Commission preceding approval of the project regulations, several Commissioners suggested or supported important amendments that would have helped ensure that taxpayer money would be well-spent. But pressure to finish quickly led to the approval of inadequate regulations that do not follow the advice of the state’s own Climate Change Technical Advisory Committee, and fall short of what is needed to ensure a reliable and resilient water system for the future.
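To illustrate what the Committee's "stress test framework" means in practice, here is a deliberately simplified sketch: a toy annual water balance run against a constructed extreme-drought scenario, flagging the years in which storage fails. All quantities are invented, and this is not an actual DWR or Water Commission model.

```python
# Minimal illustration of the "stress test" idea described above: run a toy
# annual water balance against constructed extreme scenarios and flag the
# years in which storage hits bottom. All numbers here are invented.
def simulate_storage(inflows, demand, capacity, start):
    storage, failures = start, []
    for year, inflow in enumerate(inflows):
        storage = min(capacity, storage + inflow) - demand
        if storage < 0:
            failures.append(year)
            storage = 0.0
    return failures

scenarios = {
    "median climate":   [100, 95, 105, 90, 100, 95],
    "extended drought": [70, 55, 40, 45, 50, 60],   # constructed extreme event
}
for name, inflows in scenarios.items():
    failed = simulate_storage(inflows, demand=90, capacity=250, start=150)
    print(f"{name}: {'fails in years ' + str(failed) if failed else 'no failures'}")
```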
<urn:uuid:714657bb-7fe7-40c5-b47b-b233bebef8c2>
CC-MAIN-2023-50
https://blog.ucsusa.org/juliet-christian-smith/governor-brown-water-action-plan/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.941793
837
2.578125
3
Biking is one of the most popular outdoor activities worldwide. It is a fantastic chance to see new areas, work up a sweat, and have a good time, and getting on a bike regularly can significantly improve your health and well-being. In this post, we will discuss the many benefits of biking and how it may help improve your overall quality of life.

Benefits of Biking for Your Physical Health
Cycling is a great way to enhance your cardiovascular fitness and general health. Regular biking may help reduce the risk of developing cardiovascular illnesses such as heart disease and stroke. Cycling may also help lower blood pressure and improve circulation, reducing the risk of cardiovascular events such as heart attacks and strokes.

Cycling is also an excellent exercise for weight loss. It burns a large number of calories without placing excessive strain on your body, so riding your bike often can help you burn calories and lose weight without putting undue stress on your joints.

Muscle Strength and Endurance
Cycling may help you build both muscle strength and endurance. It works several leg muscles, including the quadriceps, hamstrings, and glutes, and riding a bike also engages your core muscles, which helps improve your balance and stability.

Cycling is a low-impact exercise that is easy on the joints because of the pedaling action. Unlike high-impact activities like running, biking does not put substantial pressure on your knees, ankles, or hips. Riding a bike may also improve your joint mobility and flexibility.

Improved Immune System
Cycling can support the immune system. Regular exercise may help reduce the risk of chronic conditions such as diabetes, heart disease, and obesity, all of which can weaken the immune system. Riding your bike regularly may therefore help strengthen your immune system and reduce your chances of getting sick.

Benefits of Biking for Your Mental Health
Reduced Stress and Anxiety
Cycling can be an excellent way to relieve stress and anxiety. Several studies have shown that exercise is one of the most effective ways of relieving stress and anxiety, so riding your bike regularly may reduce both while improving your overall mood.

Improved Brain Function
Cycling has been shown to benefit cognitive health. A regular exercise regimen has been shown to improve both cognitive function and memory, so riding your bike regularly may help sharpen your thinking.

Cycling has also been shown to improve sleep. Regular physical exercise improves the quality and duration of sleep, so everyday bike riding may help you sleep better, enhancing your overall health and well-being.

Riding a bike can also be an excellent way to boost your self-esteem. Exercise has been shown to improve self-esteem and body image, so riding your bicycle regularly may help you feel better about yourself.

Improved Social Connection
Cycling may help you connect with people. Meeting new people and bonding with those who share your interests are two of the many advantages of going on bike rides. Spending time with loved ones and friends also supports your emotional and physical health, which is another way riding may help you feel better overall.

Cycling is a fantastic sport for improving one's health and well-being.
If you ride your bike regularly, you may improve your cardiovascular health, lose weight, build physical strength and endurance, and improve the health of your joints. Riding a bike may also help reduce stress and anxiety, improve cognitive function, improve sleep quality, boost self-esteem, and enhance social connection. These benefits can improve your overall quality of life, and they are accessible whether you are an avid cyclist or just starting out. So get on your bike and start experiencing the benefits of biking as soon as you can!
<urn:uuid:9403aba6-9faf-403b-94f8-ec05b321b3c4>
CC-MAIN-2023-50
https://blogking.uk/benefits-of-biking/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.94682
794
2.953125
3
According to a study by the Northwestern University Super Aging Project, some seniors have been able to outsmart their peers and avoid the mental decline that usually occurs as we age. What makes them different from the majority? A certain amount of mental decline is normal as we age, but it seems that some seniors have been able to outwit it. The Northwestern study found seniors over 80 years of age who not only matched, but even beat, the memory performance of people in their fifties! The researchers found important differences in cortical thinning between some older brains and normal-aging brains of the same age. The research compared the brains of 10 normally aging seniors over 80 and 12 seniors of the same age who showed no memory decline against 14 middle-aged adults whose average age was 58. The "super-agers," as they were dubbed, had a much thicker left anterior cingulate cortex than even the middle-aged group. In addition, the brain of one super-ager who had died revealed that, although there were some plaques and tangles in the mediotemporal lobe (which appear in larger quantities in patients with Alzheimer's), there were none at all in the anterior cingulate. The researchers are not yet certain why people with this thickening are better at keeping their brains intact, but they do know that the anterior cingulate is part of the brain's attention network and plays a role in error detection and motivation. Although the reason is still a mystery, this preliminary study does suggest that super-agers have less of the pathological damage and fewer of the clinical problems that are all too common in cognitive aging. There seem to be no obvious lifestyle factors at work; perhaps it is simply the luck of the draw when it comes to "good genes." It is becoming increasingly clear that "super-protective genes" on the one hand, and health problems resulting from poor lifestyle choices on the other, account for much of the difference in how well our brains hold up. Note: It should be emphasized that these unpublished results are preliminary only. The conference presentation reported on data from only 12 of 48 subjects studied.

Source: Mempowered!, "Why a select group of seniors retain their cognitive abilities": http://www.memory-key.com/research/news/why-select-group-seniors-retain-their-cognitive-abilities

Harrison, T., Geula, C., Shi, J., Samimi, M., Weintraub, S., Mesulam, M. & Rogalski, E. (2011). Neuroanatomic and pathologic features of cognitive SuperAging. Presented at a poster session at the 2011 Society for Neuroscience conference.
<urn:uuid:d2093ead-c306-4c4f-89cd-a5cca6fd50de>
CC-MAIN-2023-50
https://brainathlete.com/signs-brain-aging/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.969895
588
2.96875
3
Published on: 01.04.2015
Number of pages: 130
Written by: Vineeth G. Nair
Published by: PACKT Publishing

As far as I know this is the first book about Beautiful Soup, so I was really excited to read it. I prefer learning from books, with tutorials as a second choice and documentation as a last resort. For somebody who is starting to use Beautiful Soup I would recommend this book, and even somebody who has used Beautiful Soup (like myself) can find value in it. What I have learned from the "Getting Started with Beautiful Soup" book:
- Beautiful Soup can be used to change the content of an HTML/XML document.
- how to set the input encoding for a Beautiful Soup document
- the default output encoding for .prettify() and .encode() is UTF-8, but it can be changed
- different formatters (minimal, html, None, function) can be passed as parameters to prettify(), encode(), and decode()
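Those points map directly onto Beautiful Soup 4's documented API. Here is a short sketch of my own (not taken from the book) touching each feature; exact output formatting may vary slightly between Beautiful Soup versions.

```python
# Quick demonstration of the points above, using Beautiful Soup 4's API.
from bs4 import BeautifulSoup

html = b"<html><body><p class='msg'>caf\xe9 &amp; bar</p></body></html>"

# Input encoding can be forced with from_encoding (otherwise it is guessed).
soup = BeautifulSoup(html, "html.parser", from_encoding="latin-1")

# Beautiful Soup can modify the document, not just parse it.
soup.p["class"] = "greeting"
soup.p.string = "café & bar"

# prettify()/encode() default to UTF-8, but accept another codec...
print(soup.prettify())                      # str, re-indented, "minimal" formatter
print(soup.encode("latin-1"))               # bytes in a different encoding
# ...and a formatter: "minimal", "html", None, or your own function.
print(soup.p.prettify(formatter="html"))    # special characters escaped as HTML entities
print(soup.p.prettify(formatter=None))      # no escaping at all
```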
<urn:uuid:5706bed8-4799-4674-b1d3-d945a18b859a>
CC-MAIN-2023-50
https://buklijas.info/blog/2015/04/01/getting-started-with-beautiful-soup-book-review/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.886822
211
2.625
3
Disputes between neighbors over roaming cats can be a common issue in residential areas. These disputes often arise due to concerns about the cat’s safety, the impact on wildlife, property damage, or hygiene-related problems. Resolving these disputes requires open communication, understanding, and a cooperative approach. 15 Problems Roaming Cats Cause Roaming cats may cause various problems for neighbors, including: - Damaging gardens, flower beds, or lawns. - Leaving waste on neighboring properties. - Hunting birds and other wildlife. - Disturbing or fighting with other pets. - Creating noise, especially at night. - Triggering allergies – Roaming cats may cause discomfort for neighbors who suffer from cat allergies, as their fur and dander can spread into neighboring properties. - Trespassing into homes – Curious cats may enter open windows or doors, causing distress for both the cat and the homeowner. - Spreading disease – Free-roaming cats can spread diseases, such as feline leukemia or feline immunodeficiency virus, to other cats in the neighborhood. - Attracting unwanted animals – Food left outside for roaming cats can attract other unwanted animals, such as raccoons, rats, or possums, leading to additional problems for neighbors. - Traffic hazards – Roaming cats may be at a higher risk of getting hit by cars, creating a dangerous situation for both the cat and drivers in the neighborhood. - Overpopulation – Unspayed or unneutered roaming cats can contribute to overpopulation, leading to an increase in stray and feral cats in the area. - Property marking – Cats may spray urine to mark their territory, leading to unpleasant odors and property damage. - Disturbing bird feeders – Cats may target bird feeders in neighboring yards, impacting the local bird population and upsetting homeowners who enjoy birdwatching. - Disrupting garbage bins – Roaming cats can knock over trash cans or dig through garbage, creating a mess and attracting other pests. - Scratching or damaging outdoor furniture – Cats may use outdoor furniture or structures to sharpen their claws, resulting in damage to the property. To address disputes over roaming cats, consider the following steps: 1. Open Communication Approach your neighbor calmly and respectfully to discuss the issue. Clearly express your concerns and try to understand their perspective as well. 2. Offer Solutions Suggest practical solutions that benefit both parties. These could include: - Encouraging the cat owner to keep their cat indoors, especially at night. - Installing cat-proof fencing or deterrents around your property. - Providing the cat owner with information on cat enclosures or “catios” that allow outdoor access while keeping the cat contained. 3. Be Empathetic Remember that the cat owner may not be aware of the problems their cat is causing. Try to maintain a friendly and understanding tone to foster cooperation. 4. Involve a Mediator If direct communication fails, consider involving a neutral third party, such as a community mediator or a representative from a local animal welfare organization, to help facilitate a resolution. 5. Research Local Regulations Familiarize yourself with local laws and regulations related to pet ownership, roaming cats, and property rights. In some cases, legal action may be necessary, but it should be considered a last resort. 6. 
Promote Responsible Pet Ownership Encourage your community to adopt responsible pet ownership practices, such as spaying/neutering, microchipping, and providing secure outdoor spaces for cats. This can help prevent future disputes and create a more harmonious environment for everyone involved. By addressing disputes over roaming cats with empathy, open communication, and practical solutions, neighbors can work together to find a resolution that meets everyone’s needs and promotes the well-being of the cats involved. How to Prevent Your Cat From Visiting Your Neighbor To safely and humanely prevent your cat from going next door to your neighbor without causing pain or distress, consider the following strategies: Keep your cat indoors – Provide a stimulating indoor environment with toys, scratching posts, and climbing structures to keep them engaged and entertained, reducing their desire to roam outdoors. Supervised outdoor time – Allow your cat to explore outside under your supervision. Use a harness and leash specifically designed for cats to guide them around your yard, keeping them away from your neighbor’s property. Create a “catio” – Build or purchase a secure outdoor enclosure or “catio” that allows your cat to enjoy the outdoors while staying within a designated area on your property. Install cat-proof fencing – Consider installing cat-proof fencing around your property. Options like fence rollers or specialized barriers at the top of your existing fence can make it difficult for your cat to climb over without causing harm. Enrich the indoor environment – Make your home more stimulating for your cat by providing a variety of toys, hiding places, and climbing structures, making the indoors more appealing and reducing their desire to roam outdoors. Spay or neuter your cat – Spaying or neutering your cat can decrease their urge to wander, making them less likely to venture onto your neighbor’s property. Tire your cat out – Engage your cat in interactive play sessions to expend its energy and reduce its desire to roam outdoors. Establish a routine – Maintain a consistent daily routine for your cat, including feeding, playtime, and sleep Which Cat Breeds Roam the Most? Some cat breeds are more prone to roaming and exploring due to their high energy levels, curiosity, or strong hunting instincts. While individual personalities and temperaments can vary significantly within each breed, the following cat breeds are known for their tendencies to roam: - Bengal – Bengals are highly active, intelligent, and adventurous cats with a strong instinct to explore their surroundings. - Siamese – Siamese cats are known for their curiosity, sociability, and athleticism, which can contribute to their desire to roam. - Abyssinian – Abyssinians are energetic, playful, and highly intelligent cats that enjoy exploring new environments. - Norwegian Forest Cat – These large, athletic cats are natural climbers and have strong hunting instincts, which can lead to a desire to roam outdoors. - Maine Coon – Maine Coons are large, intelligent, and independent cats that may be more inclined to explore their surroundings. - Savannah – As a hybrid breed with wild ancestors, Savannah cats are known for their high energy levels and strong instincts to hunt and explore. - Oriental – Oriental breeds, like the Siamese, are known for their high energy levels, intelligence, and curiosity, which can lead them to roam. 
- Turkish Van – Turkish Vans are highly active and agile cats that enjoy exploring and climbing, which can contribute to their desire to roam. Do note that even though these breeds may have a greater tendency to roam, individual cats’ personalities and behaviors can vary significantly. Providing a stimulating indoor environment, appropriate outdoor enclosures, or supervised outdoor time can help satisfy their natural instincts while keeping them safe and minimizing the risks associated with roaming. Outdoor Cat Forces Himself Into Neighbor’s Life (Video) "In ancient times cats were worshipped as gods; they have not forgotten this." -- Terry Pratchett
<urn:uuid:1d38c9ab-6561-4398-bbd8-acb60fa5e294>
CC-MAIN-2023-50
https://catcheckup.com/how-can-neighbors-resolve-cat-disputes
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.92278
1,524
2.953125
3
This module covers common base, common collector and common-emitter amplifiers. In addition, the student is introduced to the effect of AC signals on amplifiers, FET amplifiers and multistage amplifiers. The student will also learn the differences between Class A, B, and C amplifiers and their applications in industry. Emphasis is placed on design, problem solving, and troubleshooting of amplifier circuits. Upon completion of this module the student will be able to: - List three main characteristics of linear amplifiers. - Describe the effect of AC signals on an amplifier. - Name three configurations for BJT amplifiers. - Explain why coupling capacitors and bypass capacitors are used in amplifier circuits. - List three configurations for FET amplifiers. - Discuss the advantages and disadvantages of direct coupling, capacitor coupling and transformer coupling. - Differentiate between class A, B and C amplifiers. - Define crossover distortion. - Troubleshoot amplifier circuits.
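The outline above lists objectives rather than worked examples, but the flavor of the design work it describes can be suggested with the standard first-order gain estimate for a common-emitter stage (Av ≈ -RC/re, with re ≈ 25 mV / IE). This sketch is mine, not part of the course material, and the component values are arbitrary examples.

```python
# Rough first-order estimate of common-emitter voltage gain, the kind of
# calculation amplifier design and troubleshooting involve. Values are arbitrary.
def common_emitter_gain(rc_ohms, ie_ma, re_external_ohms=0.0):
    r_e_internal = 25.0 / ie_ma          # dynamic emitter resistance, ~25 mV / IE
    return -rc_ohms / (r_e_internal + re_external_ohms)

print(common_emitter_gain(rc_ohms=3300, ie_ma=1.0))                        # ~ -132 (bypassed emitter)
print(common_emitter_gain(rc_ohms=3300, ie_ma=1.0, re_external_ohms=330))  # ~ -9.3 (unbypassed)
```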
<urn:uuid:ce28c3b9-e6dc-4bdd-9a63-f0cd1562997d>
CC-MAIN-2023-50
https://cfcc-gbc.com/node/63
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.835061
207
3.921875
4
India is known for providing a strong education system, with a wide variety of learning opportunities available to its people. Since the pandemic started, however, e-learning has played a major role in Indian education. At first, online learning was new to students, but now most have adapted to it, and it has numerous benefits over traditional learning. E-learning has become possible for almost everyone because of developing technologies and the internet. After 2020, the education system saw a major shift and many changes were adopted; e-learning in India was introduced because students could not go out and learn in person during the pandemic. Let us look at the benefits of e-learning in detail.

One-to-One Interaction during e-learning
In online learning, each student gets a chance to speak up. Teachers can spend extra time with students by scheduling individual meetups with every student once or twice a week. This helps teachers get to know their students individually and provide the support each one needs.

Interaction with Fellow Students
Teachers can include engaging activities that help students get to know one another even in digital learning. Team activities and projects should be assigned so that students can participate, collaborate with their friends, and get to know each other better.

Parent and Teacher Interaction
Teachers also get more chances to interact with parents, since parents no longer need to come to the school and wait a long time to meet them.

Let us look at some advantages of e-learning and how it is going to bring major changes to the education system.

Benefits of e-learning
There are many benefits of online learning that make it well suited for the future. Here are the top advantages.

Affordable for everyone
Students who want to study at top universities but cannot afford it can benefit greatly from e-learning. For students who dream of pursuing higher education at institutions located in other states, e-learning is a boon: they do not need to move to a different state or travel for hours for their education, and they can save on accommodation and travel costs and time by learning in the comfort of their homes. You just need a smartphone or laptop and good internet connectivity to pursue your education. Students also do not need to pay for study material every year; they can use ebooks and PDFs and save the cost of buying physical books. By making learning digital, we can also reduce the waste of paper. E-learning in schools gives students early experience with distance learning, which makes it easier and more comfortable for them if they later opt for universities abroad that offer distance learning.

Some students struggle mostly because everyone is expected to follow the same learning pattern. Every student has different capabilities and needs a different amount of time to complete tasks. In e-learning, students can study at their own pace, which ensures flexibility in learning; encouraging students to learn at their own pace can improve their academic performance. Teachers can also suggest a variety of study materials and references, since numerous sources are available online, and students get the chance to pick the material that suits them best.
Fast learning of technologies
Students who start their education online learn about technology faster. As they search the internet for references and answers to their queries, they become experienced in using tools and technologies at a very young age. E-learning is not only about watching lecture videos online; it also involves online assessments, online assignment and project submission, presentation preparation and more, which makes students well aware of the technology they use.
Aspects of e-learning in the future
Since online learning has become easier nowadays, making the most of it in the future will require some improvements. A great deal of research is being done by various universities and researchers on techniques to improve the quality of e-learning.
- E-learning could make use of augmented reality and virtual reality to give students a live experience and let them learn through gamification methods.
- Teachers should be well trained to handle the technologies and manage online learning.
- Cloud labs should be given to students to practice hands-on experiments.
- Online assessments should be established in a proper way that benefits students.
E-learning platforms are being transformed to provide a better experience for students, and more technologies and tools are being infused into learning. As we use technology for everything in our lives today, it is essential to use it for our education as well. E-learning has helped us immensely and will continue to do so. Classplus supports the growth of your coaching business with your own online coaching app, offering unlimited video uploads, a marketing dashboard, a stats dashboard and more.
<urn:uuid:f065c268-99e5-4031-83de-634d61f3e8af>
CC-MAIN-2023-50
https://classplusapp.com/growth/e-learning-in-india/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.968021
1,014
2.65625
3
Cloud computing has revolutionized the world as we know it. The cloud is involved in nearly every aspect of our lives now, and the way we function in this decade is something our ancestors never even dreamed of. The cloud is still evolving at lightning speed, and its prospects are remarkable. Which, of course, begs the question: where did the cloud start, and how did we get here today?
Back in the 50s…
When we think of the history of the cloud, we probably think of the 90s, when the internet became popular and started integrating with people's daily lives. In truth, however, the actual origins of the cloud go back to the 50s. In 1950, a mainframe computer was developed to connect computers across a network for military purposes, and back then the development of a mainframe cost millions of dollars. In the 50s and 60s, computers were mainly enormous mainframes that catered to multiple users at a time. Most of these mainframes provided no storage for their clients, and the client terminals could barely compute anything on their own. These mainframes were the true origins of cloud computing, long before there was even a vision for it.
ARPANET and Virtual Machines
In the late 60s, ARPANET (the Advanced Research Projects Agency Network) was developed; it is often referred to as the internet's forerunner. It was exclusive to the United States Department of Defense and was initially intended to connect computers at the Pentagon only. In the 70s came the virtual machine (VM) from IBM: an operating system that allowed mainframe systems to present a separate interface to each node. This is where cloud computing was first visualized. With the development of VMs, different servers could compute in the same shared environment. Even though VMs were not yet as commonly used as they are today, the technology was no longer limited to military use.
The 90s and the Internet
It was not until the 90s that the internet became popular and the term "cloud computing" was officially coined. ARPANET formally shut down in the early 90s. The World Wide Web was the invention of an English researcher, Tim Berners-Lee, at CERN. Conceptually, the internet wasn't much different back then than it is now: it was intended to hyperlink documents and other resources to each other, enabling easy access for everyone. By the end of the 90s, the internet was being used to host many websites that offered all sorts of services. People could watch videos and listen to music, and UX design came into being, giving everyone the access and utility to handle complex tasks that previously only coders could do. This digital world, which seemingly connected everything and had infinite potential, started to be referred to as the cloud. At this time, IaaS and PaaS services were also quickly gaining traction over the internet, and in the late 90s SaaS (Software as a Service) broke in with Salesforce, which offered applications online.
Cloud in the 2000s
In 2006, Google's Eric Schmidt used the term "cloud" to refer to Google's services, and he therefore gets the credit for popularizing the term more than 50 years after the cloud's origins. Amazon seized the opportunity to build on everything the cloud had become and began cloud computing as we know it under the name Amazon Web Services (AWS). AWS launched in 2002 and primarily focused on helping its partners embed e-commerce services in their businesses. Then, in 2006, Amazon moved on to IaaS offerings under EC2 (Elastic Compute Cloud).
They also made their own innovations in cloud computing with On-Demand pricing, which allowed users to pay only for the cloud capacity they were actually using. IBM adopted cloud services in 2007 and announced it would open cloud spaces for enterprises, releasing iconic cloud services such as IBM CloudBurst and IBM SmartCloud. In 2008, Google introduced its own cloud space under Google App Engine, a PaaS service. Microsoft joined shortly after with the announcement of Windows Azure. Then, in 2009, Amazon introduced some of its most sought-after services, including Amazon CloudWatch, RDS (Relational Database Service), Auto Scaling, and Amazon VPN. All of these services are still in use today, showing how aptly Amazon Web Services has kept up with the vision of the cloud. Another well-known e-commerce giant, Alibaba Group, emerged from China in 2009. In 2010, OpenStack became one of the first and most widely used open-source cloud platforms. In 2014, Amazon Web Services introduced serverless computing under the name Lambda, which brought new ways of computing to enterprises, and containerization also started developing in 2014. SaaS (Software as a Service) kept growing in popularity through the 2010s, with new applications swarming the internet, and by 2020 SaaS was estimated to be a $157 billion industry. Cloud computing is here to stay and is probably not going anywhere any time soon. The 2010s were an excellent time for digitization, but survival was still possible without the internet. In 2020, when the Covid-19 pandemic took over the world, the entire world went digital almost overnight. Companies shifted entire infrastructures online, and working from home became common, thanks largely to cloud computing. Multi-cloud and hybrid cloud computing are evolving further as the need for them grows and companies prioritize cloud savings. The world has become increasingly solution-oriented, mainly due to the potential of the cloud and how it seems to evolve interminably. Everyone now depends on the cloud, and we should be grateful to the roots that made cloud computing what it is today, so that the world didn't come to a halt just because we couldn't go out anymore.
Conclusion to Where Did Cloud Start And How Did We Get Here Today
The history of the cloud is rich and dates back more than 70 years. We have come very far in less than a century, and there is still more potential for cloud computing that needs exploring.
<urn:uuid:fe4a2271-529d-4310-939e-49741e1a6368>
CC-MAIN-2023-50
https://cloudcomputingtechnologies.com/where-did-cloud-start-and-how-did-we-get-here-today/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.975947
1,278
3.3125
3
In this upcoming course, you will learn to build a system similar to what you use here on Code With Stein. A learning management system is a platform where teachers can upload content like videos, written articles, quizzes and similar. And students can sign up (and maybe pay) to get access to that content. In this tutorial series, I will start of by using basic Django to build the structure of the platform. And then slowly introduce different key concepts of Django. Intro and demo Stein Ove Helset Hey! My name is Stein Ove Helset. I'm a self-taught software developer with over a decade of experience working full time as a web developer. Through this website and YouTube channel, I have taught thousands of people how to code, build websites, games and similar. If you're looking for an introduction to coding and web development, you've come to the right place.
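To make the course description above a little more concrete, here is a rough sketch of the kind of Django data model such an LMS might use. This is not the course's actual code; the model names (Course, Lesson, Enrollment) and fields are assumptions chosen purely for illustration.

```python
# Hypothetical sketch of LMS-style Django models (not the course's actual code).
# Course, Lesson and Enrollment are assumed names used only for illustration.
from django.conf import settings
from django.db import models


class Course(models.Model):
    """A course created by a teacher, containing lessons."""
    teacher = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    title = models.CharField(max_length=255)
    description = models.TextField(blank=True)
    created_at = models.DateTimeField(auto_now_add=True)


class Lesson(models.Model):
    """A single piece of content: a video, a written article or a quiz."""
    LESSON_TYPES = [("video", "Video"), ("article", "Article"), ("quiz", "Quiz")]
    course = models.ForeignKey(Course, related_name="lessons", on_delete=models.CASCADE)
    lesson_type = models.CharField(max_length=10, choices=LESSON_TYPES)
    title = models.CharField(max_length=255)
    body = models.TextField(blank=True)        # article text or quiz data
    video_url = models.URLField(blank=True)    # used when lesson_type == "video"


class Enrollment(models.Model):
    """Links a student to a course they have signed up (and maybe paid) for."""
    student = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    course = models.ForeignKey(Course, on_delete=models.CASCADE)
    enrolled_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        unique_together = ("student", "course")
```

A layout along these lines would let teachers own courses and lessons while students gain access through enrollments, which is the basic structure the course sets out to build.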
<urn:uuid:5d92b141-48f5-4e65-a7d0-2212ee429788>
CC-MAIN-2023-50
https://codewithstein.com/courses/build-a-learning-management-system-using-django/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.909265
189
3.171875
3
New Quantum Bit Platform Points to New Direction for Low-Cost, Large-Scale Quantum Computers
A paper titled "Single electrons on solid neon as a solid-state qubit platform," published in the journal Nature by Assistant Professor Xufeng Zhang of electrical and computer engineering, who made a key contribution to the quantum microwave measurements, together with researchers at Argonne National Laboratory and other collaborating institutions, demonstrates a fundamentally new quantum bit (qubit) platform. The platform is achieved by freezing neon gas into a solid at a temperature only 10 millidegrees above absolute zero and then trapping a single electron on its surface. Solid neon provides an exceptionally clean environment for the trapped electron and therefore protects its delicate quantum states. As a result, the electron exhibits superb properties as a qubit, such as long coherence times, and it can be easily operated by a superconducting microwave circuit underneath the solid neon. This platform points to a new direction for developing low-cost, large-scale quantum computers and will enable broad applications in quantum information science and technology.
<urn:uuid:6058900d-61cc-4d67-8deb-97cd77007d81>
CC-MAIN-2023-50
https://coe.northeastern.edu/news/new-quantum-bit-platform-points-to-new-direction-for-low-cost-large-scale-quantum-computers/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.891274
218
2.921875
3
Many people have only recently begun to learn about the astonishing healing and infection-fighting benefits of colloidal silver. But truth be told, colloidal silver’s broad-spectrum antimicrobial qualities have been known since the late 1800s when the substance was first produced. Here’s a quick overview of what medical and clinical research experts were saying about colloidal silver over 100 years ago… Hi, Steve Barwick here, for TheSilverEdge.com… Colloidal silver use came into being shortly after Edison harnessed electricity in 1879 and researchers learned how to drive tiny silver micro-particles from pure silver rods suspended in water, by running electricity through them. Chemical techniques for producing colloidal silver were also invented. And soon, numerous forms of colloidal silver were being used by doctors around the world, with names like Collargol, Protargol, Electrargol, Lunasol and many others. What made colloidal silver so popular with doctors? In many cases, it was the vast panoply of infectious disease conditions which colloidal silver rapidly and quite effectively cured. As Dr. Reynold Webb Wilcox, M.D., stated in January 1900, in the New England Medical Monthly and Prescription, in an article titled “Internal Antisepsis”: “…colloidal silver has a very beneficial influence and often effects a rapid cure in recent and chronic sepsis and furunculosis, when secondary changes in the vital organs have not occurred. [Doctors] have treated osteomyelitis, phlegmonous angina, furunculosis, erysipelas, so-called gonorrheal and articular rheumatism, etc., by this method. Various reports, some very enthusiastic, have been presented; on puerperal fever (Peters, Jones, Voorhees), cerebrospinal meningitis (Schirmer),acute mastitis (Cumston), malignant scarlet fever (Crede), divers septic processes (Werler), furunculosis (Wolfram), and finally in purpura in the horse (Dieckerhoff). Wilcox's own experience in septic phlebitis, of which an unusually large percentage has occurred in his typhoid fever cases, has been most satisfactory… In one instance of septic phlebitis following amoebic dysentery the results were almost marvelous.” In layman’s terms, colloidal silver cured infections of the bloodstream, infected boils, infected bone, infection-induced fevers and just about everything you could throw at it if an infectious condition was involved in the disease process. And as Robert Bartholow wrote of silver’s astonishing infection-fighting qualities in A Practical Treatise on Materia Medica and Therapeutics, way back in 1908: “As a topical agent, silver may be used in surgical diseases, wounds, injuries, and in cases of septic decomposition. Wherever diseases -- either pure or mixed infectious -- are caused by the staphylococcus, the streptococcus, and other forms of low organisms, this remedy is effective in a high degree.” And as also reported in 1908, in the journal Post-Graduate (Vol. 23, #10, page 911), from the New York Medical School and Hospital: “The properties of colloidal silver are, therapeutically, the following: antiseptic, inhibiting bacterial growth, inorganic ferment exercising catalytic action. The drug is absolutely harmless and should be used (1) when at the outset the infection assumes a serious aspect; (2) when an infection, at first localized, tends to become generalized." In other words, colloidal silver was used to stop infections from becoming more serious, and to stop already-serious infections from spreading. And as Dr. William Halstead, M.D. 
(see image above), one of the founding fathers of modern surgery, wrote of the infection-fighting qualities of silver way back in 1913: "We may only have scratched the surface of silver’s medical brilliance! Already it is an amazing tool! It stimulates bone-forming cells, cures the most stubborn infections of all kinds...and stimulates healing in skin and other soft tissues. I know of nothing which could quite take its place, nor, have I known anyone to abandon it who had thoroughly familiarized himself with the technique of its application. " Unfortunately, it took until 1975, some 60 years later, for medical science to finally accept silver’s role in stimulating bone-forming cells, and curing stubborn tissue infections around bone breaks, thanks largely to the remarkable clinical research of Dr. Robert O. Becker, M.D. of Syracuse Medical University. But let’s get back to the older medical quotes. As the British Medical Journal stated in February 1917: “Colloidal silver has been used successfully in septic conditions of the mouth including pyorrhea alveolysis, throat, ear, and in generalized septicemia, leucorrhea, cystitis, whooping cough and shingles." It seems there was hardly any kind of infection that colloidal silver didn’t work for! As Alfred E. Searle, author of Colloids in Biology and Medicine wrote in 1919: "Applying colloidal silver to human subjects has been done in a large number of cases with astonishingly successful results...it has the advantage of being rapidly fatal to microbes without toxic action on its host. It is quite stable. It protects rabbits from ten times the lethal dose of tetanus or diphtheria toxin.” As researcher Fred Wilbur Tanner, stated in the journal Bacteriology and Mycology of Foods, in 1919: "Biasiotti (1910) reported that colloidal silver electrically prepared would kill Staphylococcus aureus, B. typhi, and Bacterium diphtheria; in a few hours. The silver is probably united to the protoplasm in some way because Gram positive bacteria are made Gram negative.” As Dr. Henry Crooks, M.D. stated in 1921: "I know of no microbe that is not killed by silver in laboratory experiments in six minutes." And finally, according to the August, 1920 edition of The National Druggist, Vol. 50, page 388: "Colloidal silver is powerfully destructive of toxins of bacterial origin …Experiments on rabbits show that colloidal silver renders subject 'immune' from the effects of large quantities of tetanic or diphtheritic serum. …colloidal silver has proved its value in combating the following ailments among others -- tonsilitis, gonorrhoeal conjunctivitis, spring catarrh, pustular eczema of scalp, septic ulcers of legs, boils, chronic cystiitis, ringwork, soft sores. But we must not prolong the list of the good works of colloidal silver. Suffice it that at the present time it is the most extensively used in medicine of all the sols." In short, colloidal silver was the star infection-fighting agent of the early 1900’s, before the advent of prescription antibiotic drugs like sulfa and penicillin in the 1930’s and 40’s. Silver’s Untimely Demise and Much-Needed Resurrection Once prescription drugs were developed and the modern pharmaceutical industry began forming, colloidal silver usage by medical doctors gradually fell out of favor. But over the ensuing decades, as more and more bacteria began developing resistance to prescription antibiotic drugs, cutting edge doctors began looking back to the past to find out what was used before antibiotic drugs were developed. 
And what they found was, of course, colloidal silver. As newer research was conducted in the 1970’s on silver as an infection-fighting agent, the interest in colloidal silver as a safe and powerful antimicrobial substance was rapidly renewed. And today, usage of safe, natural colloidal silver has become popular with millions of Americans who have experienced its phenomenal healing and infection-fighting benefits. What’s more, many modern-day medical doctors and clinical researchers are embracing colloidal silver usage, as well. As Dr. Joseph Weissman, M.D. board certified immunologist and Assistant Clinical Professor at the University of California Medical School has stated: "Today, many antibiotics are losing the battle with germs. Fortunately, the best germ killer, which was discovered over 2,000 years ago, is finally getting the proper attention from medical science - natural silver. I sincerely recommend that everyone have electrically generated colloidal silver in their home as an antiseptic, antibacterial and antifungal agent." And as Dr. Ron Surowitz, D.O., M.D., former head of the Florida Osteopathic Medical Association has stated: "Sometimes a treatment can be worse than the illness, but not in the case of Colloidal Silver. I have my patients spray or swish Colloidal Silver in their mouths from one to three times daily, depending on the severity of their condition…or, in a dose of one quarter to one teaspoon, and even up to one tablespoon, one to three times daily for situations that are more problematic. I also have them use it in small amounts daily as a preventative...It's interesting how many of my patients improve with the use of Colloidal Silver. It enhances the immune system where other antibiotics cause yeast overgrowth... ...[As an example] one patient had persistent yeast infections. Having had no success with both prescription and non-prescription treatments, she called me, at her wit's end. I suggested that she try a douche of two teaspoons of Colloidal Silver in a quart of water. Three days later she was evaluated in my office. The symptoms of her yeast infection had vanished, and there was no visible sign of infection." And as Herbert Slavin, M.D., founder and director of the Institute of Advanced Medicine, Lauderhill, Florida has stated: "...Ionic silver is increasingly being recognized for its broad-spectrum antimicrobial qualities and the fact that it presents virtually none of the side-effects related to antibiotics. Ionic silver is entirely non-toxic to the body...Reports of any pathogens developing resistance to ionic silver are rare. Some reports indicate it even kills drug-resistant strains of germs. Ionic silver is also a powerful tissue-healing agent, so much so that it has been used topically for decades in burn centers and currently represents one of the fastest growing sectors – if not the fastest growing sector – in wound care today. The fact that ionic silver is effective against a very broad range of bacteria is well established and, due to recent advances in the delivery of ionic silver together with the problems associated with antibiotics, it is being used in a rapidly growing range of dietary-supplement, medical, and industrial products. ...A study at the University of Arizona recently showed ionic silver to be effective against the coronavirus that researchers use as the surrogate for SARS." And as board certified Clinical Nutritionist Byron J. 
Richards, CCN, has stated; "…The high efficacy in the use of silver to kill bacteria and fungus is not in question by anyone. This does not mean it kills every type of bacteria or fungus. And in the ones it does kill it does not mean it kills all of them. It simply means that the antibiotic properties of silver are quite potent – and the risk to human health in terms of toxicity is negligible. This is a far better risk/benefit profile than commonly used antibiotics." Finally, as Dr. Jonathan Wright, M.D., of the famous Tahoma Clinic in Washington State has written: "Colloidal silver just might be the next germ-fighting wonder drug. And not just for the serious threats making headlines: It's also effective against bacterial infections like strep throat, viruses like the flu, and fungal infections like Candida. No matter how much a germ mutates, it can't change enough to escape the damaging effects of colloidal silver. And in the process, the silver doesn't harm human tissue or kill off the good bacteria in the intestine the way antibiotics and other medications do… …Beginning in the 1970s, several independent researchers found that silver ions easily destroy Candida and other fungi. But it wasn't until a pilot study during the mid- 1990s that included human patients suffering from terminal AIDS that medical researchers established solid evidence showing just how quick and effective silver ions can be in the treatment of Candida as well as HIV. In this study, nine individuals who were near death were divided into two subgroups. One group suffered from HIV and a terrible Candida infection. The other group suffered from both HIV and an extreme form of malnutrition (known as Wasting Syndrome). The researchers found that in both groups the colloidal silver was capable of killing pathogens and purging the bloodstream of germ defenses in order to restore the immune system." In short, we have come back full circle to nature’s wonder cure for infection and disease: colloidal silver. What worked in the beginning against infection and disease continues to work today, even as prescription antibiotic drugs continue to fall to the wayside, thanks largely to the crisis of antibiotic-resistant superbugs. And yes, colloidal silver even kills the superbugs, as thoroughly documented with numerous clinical studies in my previous article, “Colloidal Silver versus the Superbugs”. Learn More About Colloidal Silver… If you’d like to learn more about colloidal silver and its astonishing array of infection-fighting benefits, one of the easiest ways is to read through the real-life success stories of literally hundreds of experienced colloidal silver users at this link. Another way to learn more about colloidal silver is to watch some of the short “how to” videos on using colloidal silver, at this link. And even another great way to learn more is to scroll through the over 100 short articles on colloidal silver and its usage, on the Colloidal Silver Update page. If you’re a Facebook member, you can also join the Colloidal Silver Secrets Community on Facebook where over 9,000 members share their experiences with colloidal silver. To join, just click the link in this paragraph, and then “Like” the page when it comes up. Finally, if you’d like to learn how to make your own high-quality colloidal silver for less than 36 cents a quart, with a brand new Micro-Particle Colloidal Silver Generator from The Silver Edge, just click the link in this sentence. 
And don’t forget to claim your FREE copy of the new, 30-page Colloidal Silver Safe Dosage Report by clicking the link in this sentence. Meanwhile, I’ll be back next week with another great article on colloidal silver… Yours for the safe, sane and responsible use of colloidal silver, Steve Barwick, author The Ultimate Colloidal Silver Manual The Ultimate Colloidal Silver Manual Important Note and Disclaimer: The contents of this Ezine have not been evaluated by the Food and Drug Administration. Information conveyed herein is from sources deemed to be accurate and reliable, but no guarantee can be made in regards to the accuracy and reliability thereof. The author, Steve Barwick, is a natural health journalist with over 30 years of experience writing professionally about natural health topics. He is not a doctor. Therefore, nothing stated in this Ezine should be construed as prescriptive in nature, nor is any part of this Ezine meant to be considered a substitute for professional medical advice. Nothing reported herein is intended to diagnose, treat, cure or prevent any disease. The author is simply reporting in journalistic fashion what he has learned during the past 17 years of journalistic research into colloidal silver and its usage. Therefore, the information and data presented should be considered for informational purposes only, and approached with caution. Readers should verify for themselves, and to their own satisfaction, from other knowledgeable sources such as their doctor, the accuracy and reliability of all reports, ideas, conclusions, comments and opinions stated herein. All important health care decisions should be made under the guidance and direction of a legitimate, knowledgeable and experienced health care professional. Readers are solely responsible for their choices. The author and publisher disclaim responsibility and/or liability for any loss or hardship that may be incurred as a result of the use or application of any information included in this Ezine. Copyright 2014 | Life & Health Research Group, LLC | PO Box 1239 | Peoria AZ 85380-1239 | All rights reserved.
<urn:uuid:1ee8604e-cb81-4c4e-b388-2e60aa016565>
CC-MAIN-2023-50
https://colloidalsilversecrets.blogspot.com/2014/02/a-brief-history-of-old-medical-quotes.html
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.943539
3,497
3.078125
3
When using the Media SDK encoder with a YUV source, we can create surfaces in video memory and use the hardware encoder to encode the YUV pictures. I wonder how the copy operation is done. Is DMA used, so that the CPU is free to do other things during the operation? Or are CPU cycles used to copy the YUV pictures into video memory? In some benchmark results, it seems the density of the encoder using YUV sources is lower than the density of the transcoder when elementary bit streams are used. I wonder if this is related to the copy operation.
I'm not sure I completely understand your question. The hardware encoder operates on NV12 (YUV) surfaces that reside in video memory. If the YUV data to be encoded is not already in video memory, the Media SDK library implementation can copy the data from system memory to video memory. Intel is continuously optimizing this operation to be the best for the platform, so you will often notice that newer graphics drivers improve performance. The hardware used for the copy will depend on the platform and implementation. When transcoding, the 'decode' and 'processing' operations may all occur in video memory, and there is no need to copy to or from system memory.
Thanks for your reply. My question is how the YUV picture copy from system memory to video memory is done. For example, is direct memory access (DMA) used, so that the CPU is free to do other jobs? Or is the CPU actually used to do the copy, making the operation CPU intensive? I understand the implementation details may differ between driver versions. I just want to understand the operation from a high-level point of view.
The answer is actually "both", as it depends specifically on the platform's capabilities and driver implementation. You may see significant CPU usage for this operation on some platforms.
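For readers unfamiliar with what the "copy" in this thread actually involves, the sketch below is a rough, library-agnostic illustration (plain Python with NumPy, not Intel Media SDK code) of moving one NV12 frame from system memory into a pitched surface buffer. The surface pitch, frame size and the flat buffer standing in for mapped video memory are assumptions; on a real platform the same per-row copy may be performed by the CPU or offloaded to DMA hardware, which is exactly the distinction the question asks about.

```python
# Illustrative sketch only: how an NV12 frame in system memory maps onto a
# pitched surface buffer. This is NOT Intel Media SDK code; a real driver may
# perform this copy on the CPU or hand it off to DMA hardware.
import numpy as np

def copy_nv12_to_pitched_surface(y_plane, uv_plane, surface, pitch):
    """Copy an NV12 frame (Y plane + interleaved UV plane) row by row into a
    surface whose rows are 'pitch' bytes apart (pitch >= frame width)."""
    height, width = y_plane.shape
    # The Y plane occupies 'height' rows of the surface.
    for row in range(height):
        surface[row * pitch : row * pitch + width] = y_plane[row]
    # The UV plane (half height, same width in bytes) follows the Y plane.
    uv_offset = height * pitch
    for row in range(height // 2):
        surface[uv_offset + row * pitch : uv_offset + row * pitch + width] = uv_plane[row]

# Example: a 1920x1080 frame copied into a surface with an assumed 2048-byte pitch.
w, h, pitch = 1920, 1080, 2048
y = np.zeros((h, w), dtype=np.uint8)           # luma samples
uv = np.zeros((h // 2, w), dtype=np.uint8)     # interleaved chroma samples
surface = np.zeros(pitch * h * 3 // 2, dtype=np.uint8)  # simulated mapped surface
copy_nv12_to_pitched_surface(y, uv, surface, pitch)
```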
<urn:uuid:8b681b02-5ab0-44f0-8abc-8f2dcc33d9f9>
CC-MAIN-2023-50
https://community.intel.com:443/t5/Media-Intel-Video-Processing/Efficiency-in-copying-yuv-pictures-to-surfaces-in-video-memory/td-p/964667
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.914908
389
2.578125
3
A rent agreement, also known as a lease agreement, is a legal contract between a landlord and a tenant that outlines the terms and conditions of renting a property. It specifies the rights and responsibilities of both the landlord and the tenant and provides a framework for the rental arrangement. Rent agreements are used for both residential and commercial properties and typically include the following details:
- Names and addresses of the landlord and the tenant
- Details of the rental property, such as the address and size
- Rent amount and payment schedule
- Security deposit, if applicable
- Start and end date of the rental agreement
- Termination clause and conditions under which the agreement can be terminated
- Maintenance responsibilities of the landlord and tenant
- Late payment charges or penalties, if applicable
- Use of the rental property and any restrictions on its use
- Signatures of both parties and witnesses, if required
Rent agreements can be either oral or written, but a written agreement is recommended to avoid any disputes that may arise in the future. Written agreements provide clarity and can be used as evidence in a court of law if necessary. It's important for both landlords and tenants to understand the terms of the rent agreement before signing it. Landlords should ensure that the agreement complies with the relevant laws and regulations, and tenants should clarify any doubts or questions they have before signing. Overall, rent agreements provide a legal framework for the rental arrangement and help ensure that both landlords and tenants are aware of their respective rights and responsibilities.
How Is A Rent Agreement Formed?
Here are the steps for forming a rent agreement:
Gather Information: The first step in forming a rent agreement is to gather all the necessary information, such as the names and addresses of the landlord and the tenant, details of the rental property, rent amount, payment schedule, and other terms and conditions.
Draft the Agreement: Based on the information gathered, the landlord or their legal representative can draft the rent agreement. The agreement should be clear and concise, outlining the terms and conditions of the rental arrangement.
Review and Revise: Once the agreement is drafted, it should be reviewed and revised to ensure that it complies with the relevant laws and regulations and accurately reflects the terms agreed upon by the landlord and the tenant.
Sign the Agreement: Once the agreement has been reviewed and revised, both the landlord and the tenant should sign it. Witnesses may also be required, depending on the laws in the relevant jurisdiction.
Provide Copies: Once the agreement is signed, both the landlord and the tenant should receive a copy for their records.
Register the Agreement: Depending on the jurisdiction, it may be necessary to register the rent agreement with the local authorities or government agencies. It's important to check the relevant laws and regulations to ensure compliance.
What Are Its Key Benefits?
In India, registering a rent agreement has several benefits for both the landlord and the tenant. Here are some of the key benefits:
Legal Validity: A registered rent agreement is legally valid and enforceable in a court of law. It provides a legal framework for the rental arrangement and can be used as evidence in case of any disputes between the landlord and the tenant.
Protection for Both Parties: A registered rent agreement protects the interests of both the landlord and the tenant. It outlines the terms and conditions of the rental arrangement, including the rent amount, payment schedule, security deposit, maintenance responsibilities, and other important details. This helps to prevent any misunderstandings or disputes between the parties.
Proof of Address: A registered rent agreement serves as proof of address for both the landlord and the tenant. This can be useful for a variety of purposes, such as applying for a passport, opening a bank account, or obtaining other government-related documents.
Compliance with the Law: In some states in India, it is mandatory to register a rent agreement. Failure to do so can result in penalties and legal consequences. Registering a rent agreement ensures compliance with the relevant laws and regulations.
Stamp Duty Payment: In most states in India, stamp duty must be paid on the rent agreement at the time of registration. This helps to generate revenue for the government and is an important source of income for state governments.
What Is The GST Registration For Rent Agreements?
In India, GST (Goods and Services Tax) registration is required for certain businesses and individuals who meet specific criteria, including those engaged in the rental of commercial or residential properties. GST is a tax levied on the supply of goods and services in India. When it comes to rent agreements, if the total rent collected by the landlord during a financial year exceeds the threshold limit of Rs. 20 lakhs (as of January 2023), the landlord is required to register for GST and collect GST from the tenant on the rent amount. The landlord can claim input tax credit on the GST paid on any goods or services used to maintain or improve the rental property. The tenant, on the other hand, can claim input tax credit on the GST paid on the rent amount if they are registered under GST and use the rental property for business purposes. If the tenant is not registered under GST, they cannot claim input tax credit on the GST paid on the rent amount. It's important for landlords and tenants to be aware of the GST registration requirements for rent agreements and comply with the relevant regulations to avoid any legal issues.
What Are Some Essential Pointers To Keep In Mind While Making The GST Registration For Rental Agreements?
Here are 10 essential pointers for rent agreements in India for GST registration purposes:
Name and Address of Landlord and Tenant: The rent agreement should clearly state the names and addresses of the landlord and the tenant.
Rental Property Details: The agreement should specify the details of the rental property, such as the address and size of the property.
Rent Amount: The agreement should clearly state the rent amount that the tenant is required to pay and the payment schedule.
Security Deposit: If a security deposit is collected, it should be clearly mentioned in the agreement along with the terms and conditions for its refund.
Duration of the Agreement: The agreement should specify the start and end date of the rental agreement.
Termination Clause: The agreement should mention the conditions under which the agreement can be terminated by either party.
Maintenance Responsibilities: The agreement should specify the responsibilities of the landlord and the tenant for maintenance and repair of the rental property.
Late Payment Charges: The agreement should mention the consequences of late payment of rent, such as late payment charges or penalties.
Use of the Rental Property: The agreement should specify the permitted uses of the rental property and any restrictions on its use.
GST Registration Details: The agreement should mention the GST registration details of both the landlord and the tenant, if applicable.
By including these essential pointers in the rent agreement, landlords and tenants can ensure compliance with GST registration requirements in India and avoid any legal disputes related to rental agreements. In conclusion, registering a rent agreement is an important legal requirement in India, and it has become even more crucial since the introduction of GST. As a landlord or a tenant, it's essential to understand the 10 essential rent agreement pointers for GST registration in India, including the payment of stamp duty, compliance with the relevant laws and regulations, and the inclusion of important details such as the rent amount, payment schedule, and maintenance responsibilities. By following these pointers, landlords and tenants can protect their interests, ensure compliance with the law, and avoid any legal issues related to the rental arrangement.
Commercial Lease Agreements
<urn:uuid:e6e55383-a560-42c0-b294-9fde33b2881c>
CC-MAIN-2023-50
https://corpbiz.io/learning/10-essentials-rent-agreement-pointers-for-gst-registration-in-india/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.921344
1,561
2.984375
3
PERFECT FOR MUMS TO BE
If you are planning to start a family or are pregnant, eating healthily with the right nutrients is of utmost importance. Folic acid, or folate, is often referred to as a pregnancy superhero. And so is corn.
- This grain is rich in the protein and fiber that you need in pregnancy. One ounce of sweet corn has nearly 5 g of protein and 2.9 g of fiber.
- Nutrient-dense sweet corn is a rich source of minerals and vitamins. A single large serving of sweet corn contains 386 mg of potassium and around 60 mcg of folate. Folic acid is particularly necessary to reduce the chances of birth defects in the unborn child.
- Sweet corn is rich in beta carotene and antioxidants like xanthins and lutein that support the eyesight of your unborn child.
- According to studies conducted at Cornell University, sweet corn contains antioxidants that help fight cancer.
- The grain contains a phenolic compound known as ferulic acid that helps fight tumors and reduces your risk of breast cancer.
- You can find plenty of phytochemicals in sweet corn.
- Vitamin B12 contained in corn helps prevent anemia, which is absolutely undesirable in pregnancy. This essential vitamin also helps in the formation of new blood cells.
- Sweet corn is a major source of beta-carotene, which the body converts into vitamin A. Vitamin A in pregnancy supports healthy mucous membranes and skin in your unborn child. It also helps boost the immune system.
- The Journal of Nutritional Biochemistry states that consumption of corn husk oil helps lower plasma LDL by reducing the absorption of cholesterol in the body. The presence of "good cholesterol" offers plenty of benefits to your body in pregnancy.
A side that's on everyone's side...
Good source of vitamins: Like other cereals, corn is a good source of vitamins, especially B complex vitamins. Vitamins such as thiamin, niacin and folate can be found in corn. These vitamins play a vital role in the development of your child. Thiamin is good for maintaining nerves and brain development. Niacin helps metabolize sugars, proteins and fatty acids. Folate helps in maintaining new cells and reduces the risk of anemia. Other vitamins like vitamin A, vitamin K and vitamin E are also found in corn.
Good for eyesight: The yellow colored seeds of corn are very good for eyesight. This yellow color comes from a biochemical known as zeaxanthin, which has been shown to help preserve eyesight, not only for aging people but also for children, as consuming it from an early age acts as a protector of eyesight (Sommerburg et al., 1998).
Supply of essential minerals: Corn, or maize, is also a good source of minerals. Essential minerals such as phosphorus, potassium, magnesium, zinc and iron, and trace elements such as selenium, are found in corn. Phosphorus is good for bone health, alongside calcium. Potassium is a better electrolyte when compared to sodium, and iron is useful in making hemoglobin.
For the Everyday Corn Lover
- Corn is a rich source of vitamins A, B and E and many minerals.
- The cellulose content of corn is about 7 times that of rice and wheat flours, and the carotene content is 5 times higher.
- Corn starch has a variety of uses, from repositioning wallpaper and manufacturing crayons to packing, storing and moving.
- Did you know that corn contains antioxidants that make you look younger and more beautiful?
- Did you know that corn has carbohydrates that help soccer players sustain enough energy for 90+ minute games?
- Did you know that corn is low in saturated fat and is recommended in your diet plan?
- Did you know that corn has calcium that is essential for everyone especially children?
- Did you know that corn has carbohydrate that gives netball players energy?
- Did you know that corn provides protein that helps weight lifters to build muscles?
- Did you know that corn is a good source of vitamins for a child's nerves and brain development?
Please Note – Quantities stated below are averages only per 100 grams kernel serving. It varies from region to region and type of sweet corn grown. So use it as guidelines only.
|Nutrient||Amount per 100 g||% Daily Value|
|Vitamin A||13 µg||1%|
|Vitamin C||5.5 mg||6%|
|Vitamin D||0 µg||~|
|Vitamin E||0.09 mg||1%|
|Vitamin K||0.4 µg||0%|
|Vitamin B1 (Thiamine)||0.09 mg||8%|
|Vitamin B2 (Riboflavin)||0.06 mg||4%|
|Vitamin B3 (Niacin)||1.68 mg||11%|
|Vitamin B5 (Panthothenic acid)||0.79 mg||16%|
|Vitamin B6 (Pyridoxine)||0.14 mg||11%|
|Vitamin B12||0 µg||~|
|Aspartic acid||252 mg|
|Glutamic acid||655 mg|
|Saturated fatty acids||0.2 g|
|Monounsaturated fatty acids||0.37 g|
|Polyunsaturated fatty acids||0.6 g|
|20:5 n-3 (EPA)||0 mg|
|22:5 n-3 (DPA)||0 mg|
|22:6 n-3 (DHA)||0 mg|
Fun Activities for Kids
Grow your own!
What you will need
1. Five to six corn kernels
2. Paper towels
3. One sandwich-size zipper-lock bag
5. Black markers
1. Wet a paper towel completely and then wring out excess water.
2. Put five to six corn kernels in the center of the paper towel. (Using this many kernels will increase the chances of sprouting.)
3. Put the paper towel and the kernels in the zipper-lock bag so that the kernels can be seen. Close the bag and label it.
4. Lay the bag in a place exposed to natural daylight or a grow lamp, where you can observe it.
5. Check on the bag regularly, water the kernels and watch the corn grow. (When the corn grows too tall for the bag, unzip the top.)
<urn:uuid:fba29f13-ad27-4ea3-a503-ea07ee225850>
CC-MAIN-2023-50
https://countrycorn.com.au/corn-benefits/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.867277
1,386
2.671875
3
How to Use a Compressor – Sound Engineering 101
Learning how to use a compressor is easy if you know how it works. Most effects processors are fairly simple to use; plug in an equalizer (for example), twiddle the controls, listen to the output, and you pretty much know what you're doing – all you need is some experience behind you. Compressors don't fall into this category. Plug them in and listen. What's it doing? Unless someone has told you, you probably won't know. Play with the controls. What do they do? Don't know either. What do the indicators mean? Difficult to tell. It's all a bit frustrating really… Unfortunately, you need to be *told* what a compressor does. Furthermore – even after you know what it does – someone needs to explain why the things that it does are considered useful. You won't figure it out for yourself. Normally – for non-technical people – the explanations of what a compressor does are so bewildering that they end up even more confused than they were before: "x dB in equals y dB out, over a z dB threshold, according to this graph" etc.
What Does a Compressor Do – Compression Explained
Fortunately, I have a friend who explains it very well, and very succinctly: "What does a compressor do, Alan?" "It makes the loud bits quieter." "I see… But surely if it just makes the loud bits quieter, can't you then turn EVERYTHING right up, and make everything really, REALLY loud?" "Exactly." So there you go. Simple, isn't it? A compressor just makes the loud bits quieter, allowing you to crank everything up to maximum volume. But in what situations would this be useful?
Increasing Overall Loudness
Firstly, there's the obvious application of making your music/mp3s sound as loud as possible. This trend is starting to get a bit silly, and is beginning to prevent people from producing albums of good dynamic range. Nevertheless, if you master a rock, pop, hip-hop, EDM or dance track with no compression at all, then the chances are it will sound pitifully quiet compared to the rest of the songs in your playlist. It will probably sound like it has been severely under-recorded. Not to mention that specific genres – especially the bass-heavy and club genres mentioned: hip-hop, R&B, EDM and other sub-genres of dance – are often over-compressed and limited to give that big, loud, in-your-face sound. Compression lets you get a much higher average level onto your mp3s (or CDs if you're still making those) without affecting the music too much.
For Live Sound & Protecting Speakers
A very practical application of compression is in live PA setups such as a rock concert. There is a danger that very, very loud sounds will blow up the loudspeakers as well as risking serious hearing damage for those near the stage. The solution is to put a compressor in place. This makes the loud bits – and in this case only the very loudest of the loud bits – quieter, so as to avoid damage to equipment (and people). Such hard compression of only the very top peaks of the music is called "limiting". In the digital and plugin world, a compressor and a limiter are usually two different plugins. In terms of hardware, there are some compressors that also have a limiting function, as well as individual limiters and compressors. Limiting is often applied to the whole song and is less commonly used on individual instruments and sounds.
For More Even Vocals
Another example of when compression is used is to compensate for the fact that many vocalists have poor microphone technique and little control over their dynamic range. When they sing quietly, they sing *far* too quietly. When they sing loudly, they are *way* too loud. A compressor can reduce the "dynamic range" of the vocalist to a more manageable level, which is why a compressor is sometimes called a "Dynamics Processor".
Sitting Vocals in The Mix
In fact, most vocals will benefit from compression in recording. Evening out the volume of the vocals helps them, as they say, "sit in the mix", which means they're not overpowered by, or overpowering, the other instruments in the song. In the days of analogue recording, the engineer would manually move the fader for the vocal track on the desk up and down depending on how loud or quiet the singer was singing each phrase. This would allow the compressor to work less hard and the gain-reduction effect of the compressor to be less obvious, resulting in a smoother and more natural vocal. This is still a good technique (which we can discuss in another lesson), but these days most home studio guys, especially those recording themselves, rely even more on compression to even out the dynamic range of their voice.
Making Instruments More Even
Bass guitar is another instrument which can be hard to play consistently throughout a song. Any minor errors in the bass guitarist's playing can leave "holes" in the song where the bass seems to disappear. The same goes for all instruments in a song: a compressor can help keep the performance at a consistent level. There are obviously many other instruments and sounds that could benefit from compression at some time. So, as you can see, there are many applications for a compressor. Basically, in any situation where sound volume levels are getting out of control, a compressor can be used to "tame" the extremes of volume and keep it within a reasonable range, entirely according to your needs. Now that we know what a compressor does, we can start to learn how to use one. Since compressors have many different applications, the way that you use a compressor depends very much on what you are trying to achieve with it. In this article, we will look at four main applications of a compressor which are all quite different. Most applications are just variations on these four different uses, so they should serve as a good starting point for most of the things you will want to do. The four main applications that we will look at are:
1. Hard limiting – to prevent speakers or digital recordings from overload
2. Compressing an instrument or vocal
3. Adding "punch" to bass drums and bass guitars
4. Compressing a final mix
In addition, we will look at a specialized fifth example. But before we get into these, let's look at the theory behind compressors and what the controls actually do. This is a little difficult to understand at first, so don't worry if you haven't "got it" the first time around. It will make more sense after you've experimented a bit with a real compressor in front of you. Note that not all compressors have all of these controls, and some compressors are very "minimalist" indeed. If you don't have all these controls, then look at the compressor's instruction manual to see what preset values the "missing" controls are set to.
What the Controls Do
Firstly, in order to compress the volume range of something into a more "workable" volume range, you need to have in your mind an idea of what the lowest "normal" volume level is, what the "loudest" volume level is, and how "loud" you are prepared to let the loudest get.
The "Threshold" control sets the volume level at which the compressor starts to do its work. Below this volume level, the compressor will literally do absolutely nothing. So you basically set the "Threshold" control to the lowest volume level at which you want the compressor to start working. We will discuss in the examples how you actually make this setting. Naturally, if the "Threshold" control is set to maximum, the compressor won't ever do anything at all, because the level of the music is usually way below this level and therefore remains totally unaffected.
The "Ratio" control sets how "powerful" the compressor is. At its lowest setting (1:1), the compressor literally does nothing, and is effectively "switched off". On the other hand, at its highest setting (normally marked 20:1 or even infinity-to-one), the compressor is 100% powerful – so powerful in fact that it TOTALLY PREVENTS the volume level getting even the *slightest* bit louder than the threshold level! Hard to believe? Try it and see. Set the compressor ratio at maximum, play some sound through the compressor and start turning the threshold level down until you hear the effect. If you are playing solo drums through the compressor, the effect is quite astounding. The only problem with doing this is that (naturally) the total volume gets so much quieter, because you are "constraining" it (compressing it) so very much.
Make-Up Gain and Output
That's why compressors and compressor plugins are almost always equipped with a powerful gain control marked "Output" or "Gain make-up" in order to boost the volume level back up to a reasonable level after it has been "squashed" down. Every time you turn the "Threshold" down, you are "constraining" the sound more and more, and making it quieter, so you almost always need to use the "Output" control to boost the level back up again. This is a bit irritating, so several compressors have a switch – normally marked something like "Auto gain make-up" or similar – to automatically boost the output as you turn the "Threshold" down. It's not on every compressor, but it is a nice little feature to have, and saves you fiddling about with the "Output" control all the time. For clarity in the following examples, though, I have assumed you either don't have this switch, or that it is turned off. So far so good. "Threshold", "Ratio", and "Output" are the main controls on a compressor, and "theoretically" give you everything you need. So what are the other controls for?
Soft vs Hard Knee Ratio
Well, sometimes – in the real world – things aren't quite so simple. For example, you can have a vocal that is sometimes too quiet, sometimes too loud, and occasionally way, way, way too loud. Wouldn't it be nice if the compressor somehow had an automatic "Ratio" control? That's why many compressors have a "soft-knee" or "over-easy" control. With the "soft-knee" control turned on, the compressor doesn't simply and immediately "kick in" at the level set by the "Threshold" control – it merely "starts" to work. As the level gets louder and louder, it gradually reaches the full "power" of compression that is set by the "Ratio" control.
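If the "x dB in equals y dB out" arithmetic is easier to follow in code than in words, the short sketch below shows the static level mapping just described, including an optional soft knee. It is a generic illustration in Python, not modelled on any particular compressor; the threshold, ratio, knee width and make-up values are assumptions chosen purely as examples.

```python
# Sketch of a compressor's static level mapping: input level in dB -> output level in dB.
# Not taken from any specific unit; threshold, ratio, knee and make-up gain are examples.

def compressed_level_db(level_db, threshold_db=-20.0, ratio=4.0,
                        knee_db=0.0, makeup_db=0.0):
    """Return the output level (dB) for a given input level (dB)."""
    overshoot = level_db - threshold_db
    if knee_db > 0 and abs(overshoot) <= knee_db / 2:
        # Soft knee: compression fades in gradually around the threshold.
        gain_reduction = ((overshoot + knee_db / 2) ** 2) / (2 * knee_db) * (1 - 1 / ratio)
    elif overshoot > 0:
        # Hard knee: everything above the threshold is reduced by the ratio.
        gain_reduction = overshoot * (1 - 1 / ratio)
    else:
        gain_reduction = 0.0  # below the threshold the compressor does nothing
    return level_db - gain_reduction + makeup_db

# "x dB in equals y dB out": 12 dB over the threshold at 4:1 comes out only 3 dB over it.
print(compressed_level_db(-8.0))             # -> -17.0 (threshold -20 dB, ratio 4:1)
# With the ratio pushed towards infinity (limiting), the output never exceeds the threshold.
print(compressed_level_db(-8.0, ratio=1e9))  # -> approximately -20.0
```

With knee_db set above zero, the compression fades in gradually around the threshold instead of kicking in abruptly, which is exactly the soft-knee behaviour described above.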
So if you wanted to control a vocal that was wildly out of control in terms of levels, you could switch on the "soft-knee" control, set a "Ratio" much higher than normal, and set the "Threshold" control to the quietest "acceptable" vocal sound level. When the vocal exceeds this level with the "soft-knee" control switched on, the compressor starts to compress at fairly moderate levels. If, however, the vocal gets wildly out of control and attempts to get *seriously* loud, then the compressor starts working much harder to pull it back to sensible levels. It's a bit like having an automatic "Ratio" control, with the maximum compression "power" controlled by the setting of the "Ratio" knob on the front panel.
Then there are the "Attack" and "Release" controls. So what do these do? If you've followed this explanation so far, you'll realize that a compressor is a bit like having a smart guy hanging onto a volume control and adjusting it by hand according to the music. But how quickly can this "person" respond? Well, the "Attack" control adjusts how quick this "person" is at turning down the volume when things get too loud. The "Release" control sets how quickly that same "person" can turn the volume back up again when things have calmed down.
Why Do You Need It?
But why would you want to adjust this? Surely you would want it to be instantaneous? (After all, it *is* supposed to be an automatic system…) It turns out that in practice, in many situations, you don't want the volume to be "instantly" cranked down the moment things get too loud. Under certain conditions you can really *hear* the volume being pulled down, and this is very undesirable. Instead, it *sometimes* sounds better if the "person" is a bit sloppy and slow at yanking the volume down. The "Attack" control affects this sloppiness. What about the "Release" control? Well, in a similar way, if the compressor is too fast at turning the volume control back up again, you can hear it working (the audible effect is known as "pumping"). It just sounds "artificial". So the "Release" control adjusts the speed at which the compressor "recovers" after yanking down the volume. The exact speed which sounds "correct" depends on the music, so that's why you can adjust it by hand. The examples following in a moment give some suggested settings, but by all means experiment in order to find the most "natural" sounding setting. And that leads us to another control. It is a switch, sometimes marked "Automatic" and sometimes marked "Peak/RMS". So what does this switch do? Well, as I mentioned before, the "Attack" and "Release" settings really depend on the music you are using the compressor on. But music continually changes. What the "Automatic" or "Peak/RMS" switch does is switch on an automatic setting that attempts to "listen" to the music and continually set the correct "Attack" and "Release" settings for you. Of course it doesn't always do the best job, and that is why you also have manual control if you want it. It is important to realize that with this switch turned on, the "Attack" and "Release" controls are disabled and will do nothing. Some compressors (unfortunately) don't have "Attack" and "Release" controls at all, and are either set to preset values, or permanently set to RMS (automatic). Some of these are very popular for tracking vocals, though, such as the LA-2A, because they are so quick and simple to use.
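As a companion to the static-gain sketch above, the following snippet illustrates the attack and release behaviour just described in the same hedged, illustrative spirit: the gain reduction requested by the level detector is smoothed so that it is applied at the attack speed and removed at the release speed. The sample rate and time constants are arbitrary example values, not settings from any particular unit.

```python
# Sketch of compressor ballistics: smoothing the requested gain reduction with
# separate attack and release time constants. Illustrative only; real designs differ.
import math

def smooth_gain_reduction(target_gr_db, sample_rate=48000.0,
                          attack_ms=10.0, release_ms=200.0):
    """Given the instantaneous gain reduction the level detector asks for
    (one value in dB per sample), return the smoothed reduction actually applied."""
    attack_coeff = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release_coeff = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    smoothed = 0.0
    out = []
    for target in target_gr_db:
        # More reduction requested than currently applied -> attack phase, else release.
        coeff = attack_coeff if target > smoothed else release_coeff
        smoothed = coeff * smoothed + (1.0 - coeff) * target
        out.append(smoothed)
    return out

# A loud burst asks for 9 dB of gain reduction for 100 ms, then nothing:
burst = [9.0] * 4800 + [0.0] * 4800
applied = smooth_gain_reduction(burst)
print(round(applied[479], 2))   # after ~10 ms, roughly two-thirds of the reduction is applied
print(round(applied[9599], 2))  # 100 ms into the release, over half of it is still applied
```

A fast attack with a slower release, as in these example values, corresponds to the behaviour the article recommends in most situations: quick to pull the volume down, more relaxed about letting it back up.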
Compressors without adjustable "Attack" and "Release" controls are best used in a more subtle way, since you can't undo compression if it's been recorded that way to begin with.

There is also an IN/OUT switch (often marked BYPASS). This is essential. It is there so you can switch the compressing action on and off and thus hear the difference your changes have made. To make best use of this switch, you need to set the "Output" control such that the sound appears to be at roughly the same level irrespective of whether the compressor is switched IN or OUT – this allows you to easily make comparisons by listening.

The final control on most stereo or two-channel hardware compressors is called "Link". What is this for? Whenever you adjust the volume control on a stereo mix, you always expect both left and right volume levels to change at the same time, don't you? (Otherwise the mix would wander off to one side or the other.) That's what the "Link" control is for. It makes sure that both left and right volumes always change in time with each other, so the mix stays "in the middle". As an added bonus, the "Link" control *usually* (although not always) disables one set of compressor controls on a two-channel unit and takes all of its settings from just one set of controls. This is because on a stereo signal you normally want *exactly* the same settings on both sides – as well as keeping the volume levels equal. This is not the case on all compressors though, so it is important that you check your manual to find out whether you need to set the controls on both channels to be the same, or whether you only need to use one set of controls (the other ones being disabled). On some compressors, even if one set of controls is disabled, the "Output" controls for each side may be independent (don't ask me why – it does seem a bit silly) – again, you *must* check the manual, as it is not always easy to tell simply by playing with the settings and listening.

Now You Know What All The Controls Do

That completes our "tour" of the controls. I hope you understood it. Read it a couple more times if you don't, and if you're still feeling lost, perhaps it might come to you after you've tried these examples.

Examples & How to Use The Controls

So: now onto the examples…

Example 1: Hard Limiting

The problem: You are doing a live gig or an important digital recording. You want to leave the music completely untouched, but what you don't want is totally unexpected loud peaks causing damage or distortion.

The solution: You want to TOTALLY stop any music or sound exceeding your expected maximum level. This is usually an emergency situation in mixing, although very common in mastering. This is also quite simple to do with a compressor, although if you're mixing in the box (on the computer), a limiter plugin is better.

How to Limit On a Compressor (Without a Dedicated Limiter)

To do this on a compressor, set the "Attack" and "Release" controls to their fastest – after all, it will only "kick in" during cases of emergency, and you want it to respond immediately (to prevent distortion) and also to recover immediately (so no one notices anything happened). Make sure that any "Automatic" or "Peak/RMS" switch is turned off, so that the "Attack" and "Release" controls actually work and are not in automatic mode. The "over-easy" or "soft-knee" switch (if present) should be turned off too. Then set the "Threshold" control to maximum (probably marked +10 dB or +20 dB, but on some digital plugins it may be marked as zero).
This will prevent the compressor doing anything just yet. Then set the "Ratio" control to maximum (normally marked 20:1 or even infinity-to-one). You won't hear anything happening just yet, because the "Threshold" control is set to maximum – effectively bypassing the unit. Now, play the LOUDEST MUSIC SIGNAL YOU EVER EXPECT TO HEAR through the compressor, and look at the levels. Now, slowly turn down the "Threshold" control, carefully listening and looking at the levels. The moment you even *start* to hear a decrease in volume, or see it on the meters, stop and back off a tiny bit. You have found your optimum settings. Just to check, try playing some excessively loud music through the compressor. You will find that it refuses to exceed the maximum level you have set, no matter how loud the input! Needless to say, if you get a bit silly and try to blast the compressor with INCREDIBLY loud music, you may indeed hear the compressor start to distort (it still won't exceed that maximum level though!). But this is just unrealistic. You are setting it up to handle only the most unusual, unexpected and extreme cases, which will be well below the level of distortion.

Example 2: Compression of an Instrument or Vocal

The problem: You are working with a fairly good vocalist. Normally when they sing loudly everything is fine – but every now and again they sing their little heart out so much that the recording either distorts or is simply just way too loud. Unfortunately the vocalist is so unpredictable that when this happens you don't have time to adjust the recording levels, because it happens almost at random and is difficult to predict.

The solution: An ideal application for a compressor! Start with a good recording level for normal recording, and with a fast "Attack" and a moderately slow "Release" on the compressor (ensuring these controls are on a "manual" setting). Also switch ON the "Over-easy" or "Soft-knee" button (if the compressor has one). As before, begin with both the "Threshold" and "Ratio" at maximum. Whilst the vocalist is singing at a fairly QUIET to moderate level, slowly turn down the "Threshold" control until either your ears or the meters detect the slightest faint drop in level. If your compressor has a "Gain reduction" meter, it should *just* begin to indicate a change. Turn up the "Output" control until the quiet part is at a good level for you. Now go to the LOUDEST part of the song and get the vocalist to sing (or play back a recording). With the ratio at maximum, you should now find that – ironically – the sound is far too quiet! Simply turn the ratio control down until the level is just about as loud as seems reasonable. The "Gain reduction" meter – if you have one – will probably be lighting up lots of pretty lamps at this point (unless it is just a boring VU meter). As a final pass on this example, you might want to get the artist to perform the song once through (or play back a "take" if you are compressing on playback), and at this point you might want to play with the "Attack" or "Release" settings to get the most "natural" sound. Be careful if choosing a slow "Attack" though, as it might allow the compressor to "overshoot" and exceed the levels which you so carefully set previously.
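Putting the two earlier sketches together gives a toy feed-forward compressor you can run over a buffer of samples. It is only an illustration of the signal flow described in these examples: it assumes the two hypothetical helpers defined above, uses a very crude per-sample level detector, and the settings shown are merely plausible starting points, not a recipe.

```python
import numpy as np

def compress(signal, sample_rate=48000.0, threshold_db=-18.0, ratio=4.0,
             knee_db=6.0, attack_ms=5.0, release_ms=150.0, makeup_db=6.0):
    """Tiny feed-forward compressor built from the two sketches above."""
    signal = np.asarray(signal, dtype=float)
    level_db = 20.0 * np.log10(np.abs(signal) + 1e-9)       # crude per-sample level
    wanted_db = compressor_static_gain_db(level_db, threshold_db, ratio, knee_db)
    desired_gr_db = wanted_db - level_db                     # 0 below threshold, negative above
    gr_db = smooth_gain_reduction(desired_gr_db, sample_rate, attack_ms, release_ms)
    return signal * 10.0 ** ((gr_db + makeup_db) / 20.0)     # make-up gain = the "Output" control

# A rough vocal starting point in the spirit of Example 2; with ratio=1e9 and
# knee_db=0 the same function behaves like the hard limiter of Example 1.
# vocal_out = compress(vocal_in, threshold_db=-20.0, ratio=4.0, knee_db=6.0,
#                      attack_ms=5.0, release_ms=150.0, makeup_db=6.0)
```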
Helping to Prevent Clipping When Recording

You often hear home studio guys recommend using a compressor when tracking to prevent clipping. In reality, you can just record with enough headroom (record at a lower volume and leave enough room for the loud peaks) so that your vocals don't clip or distort. However, since compression can even out the dynamics of a vocal, performing and listening to compression while tracking a vocal can still be useful.

Example 3: Adding "Punch" to a Sound (Normally Bass Instruments Like Bass Guitar or Bass Drum)

The problem: The artist is performing a fairly rhythmic pattern, but somehow they don't seem to be "punching" through the mix, even though their sound is basically quite good. Every time they play a riff you know you really want to "feel" the "impact" – but it is simply not there.

The solution: Although compressors are normally associated with *reducing* peak levels, did you know that they are capable of actually GENERATING amazing peaks? This technique generally only works with "percussive" sounding instruments like drums, guitar (including bass), and spiky keyboard sounds like "clavinet" that are playing a rhythmically "pulsating" part. The technique relies on the fact that the "Attack" control can be used to make the compressor respond in a sloppy way – thereby allowing loud signals to "overshoot" and generate peaks that weren't even there in the first place! To do this, start with a moderately slow release, a SLOW attack, and with the ratio and threshold at maximum. The "Soft-knee" or "Over-easy" control (if present) should be OFF. Play back the quietest part of the performance and, as before, turn down the "Threshold" gradually. You should find a setting where, although the instrument is starting to get a bit quieter, it is starting to have more "punch" to it. Use the "Output" control to restore the level to a good volume. Now go to the loudest part of the song. You will find at these settings that the instrument is – surprisingly – too quiet. Turn down the ratio until the sound is loud enough. Now check out the quiet part of the song again. You might now find that it is not as punchy anymore, and you might have to turn down the "Threshold" some more (and of course boost the "Output" to compensate). Finally, rehearse the part (or play it back), and adjust the "Attack" to give you the "punchiness" you need overall. The "Release" control is quite critical in this scenario too. If you have it set too fast, you can hear the compressor "breathing" or "pumping" (you'll know what I mean when you hear it!). On the other hand, if you set "Release" too slow, then you will start to lose the "punchiness" – it is a tricky balance.

Example 4: Compressing a Final Mix

Ooooh! This is the trickiest one of the bunch! You will probably have one of two problems. Either (a) the mix overall doesn't sound "punchy" enough – which requires slightly different settings to the previous example – or (b) you have the more common problem: you simply can't get your mix to sound "loud" enough compared to other recordings that you have in your collection.

The problem (a): The mix overall doesn't sound "punchy" enough.

The solution (a): If your mix doesn't sound punchy enough, you have to start with some "preset" settings on your compressor as follows. Start with the "Automatic" or "Peak/RMS" switch turned ON (RMS setting). Music is a complex thing, and a "final mix" even more so. The "Automatic" or "RMS" setting will literally "listen" to your music and try to find the "ideal" settings for both the "Attack" and "Release" controls (disabling the manual knobs while it does so).
If your compressor doesn't have an "Automatic" or "RMS" setting, then set both the "Attack" and "Release" settings to medium. In both cases we will end up adjusting them manually later, so don't worry too much. Set the "Threshold" (as before) to maximum (which "bypasses" the compressor), but this time pre-set the "Ratio" control to about 3:1 or thereabouts. Now, whilst playing the mix, gradually turn down the "Threshold" level until you start to get a more punchy sound. You will (as always) have to turn up the output to compensate. When you can hear the compressor making a difference, try experimenting with the "Attack" and "Release" settings. If you previously set "RMS" or "Automatic" ON, then try to match both "Attack" and "Release" to the same sound as "Automatic" and use that as your starting point. The slower the "Attack", the longer the overshoot. Sometimes a short attack will sound good (keeping transients tight); other times, a slower attack will sound more appropriate. I should mention, it's a good idea to go around ALL the controls in turn, making slight changes until you believe that you have the best settings on all of them. Another good idea is to use the IN/OUT button (or enable/disable button if you're using a compressor software plugin) to compare results with the original – using the "Output" control to match the sound level between the IN and OUT settings, so they are at the same volume – this greatly helps make a good comparison.

The problem (b): You can't get your mix to sound "loud" enough compared to other recordings.

The solution (b): You really need TWO sorts of compression here. Firstly, you need "limiting" set up as per Example 1 previously. Turn down the "Threshold" until you can start to hear the limiter making an unpleasant difference to the mix. Then turn it back up a bit, and try to find the position where you have the best balance between cutting down the peaks and making an undesirable change to the music. In most cases it should be possible to apply quite a lot of limiting without any significant difference to the sound of the track. Now that you have trimmed off the peaks, you can crank up the "Output" to a much more respectable level. This is one of the processes often done in mastering for MP3s and CDs. But you might want your song to sound louder still. If that's the case, then apply another compressor BEFORE the limiter and just try some conventional compression as in solution (a) above (but probably with a faster attack). Many mastering compressors have a compressor AND a limiter combined in one unit for this very purpose. As an alternative approach, set quite fierce compression (5:1 or more), switch on the "over-easy" or "soft-knee" button, and use a fast attack. Start (as always) with the "Threshold" high, and slowly turn it down until you achieve the balance between a good amount of compression and the best sound quality. Adjust the "Release" control to help minimise how much you can "hear" the compressor working. The "Release" speed that works is different depending on the speed and type of music – let your ears judge it.

That concludes our four main examples.

Advanced Use: Side-Chaining

Some hardware analog compressors have a "Side-chain" socket on the back. Most DAWs also offer side-chaining to compressor plugins. How it's set up and routed will vary a little from DAW to DAW. I'll probably do an article on that at some point. So what's it for?
The compressor works by feeding the sound through the compressor itself, but also by feeding a sound to the compressor's "control system". The control system "listens" to this sound and decides how much to turn the volume down. Whatever you send to the side-chain won't be heard through the output of the compressor by the listener. Think of it like a set of instructions: "when you hear this sound, reduce the volume." The listener doesn't actually hear what is sent to the side-chain; it just tells the compressor to reduce the volume when there are peaks in that sound, even though the listener never hears the side-chain input itself. What we hear is the reduction effect only. The release knob controls the duration of the reduction, and the attack knob controls how quickly the reduction happens. For example, if the attack knob is set to 0 ms, the release knob is set to 50 ms and the compressor is set to side-chain, any peak through the side-chain input becomes an instruction to the compressor saying: "when you hear a peak through the side-chain input, instantly (0 ms attack means instant; 2 ms would mean the reduction fades in over 2 milliseconds, and so on) reduce the volume of whatever is presently routed through the compressor's input, for a duration of about 50 ms."

When Is This Used?

In bass-heavy genres such as EDM, hip-hop, R&B, pop and so on, it is very common to side-chain the kick drum to the bass. Doing this will reduce the bass at the time of the kick drum. You set the attack almost as fast as possible without hearing click sounds coming through the sound being side-chained. You can often just get away with an attack of 0, or 5–10 ms. Then you set the release to about the duration of the kick, maybe 50 ms or so, so that you hear the bass dropping when the kick hits without a gap of silence before it starts to rise back up again. Since kick drums and sub-bass in bass-heavy genres often occupy similar frequencies, this reduces the build-up and overlap, allowing you to raise the kick and bass higher before distortion or clipping occurs in the song, while also letting the kick drum punch through the mix more, making it appear louder and more punchy.

You Also Hear This On the Radio When Announcers Speak – So You Can Do the Same For Podcasting

On the radio, most stations send the announcer mics through to a side-chain compressor on each music channel. What this does is automatically turn down the music when the announcer is speaking so that you can hear them over the music. Pay attention the next time you hear a radio announcer speak over a song. Unless they manually turn the fader down on the desk first, you will hear the song quickly reduce in volume when they speak, but rise slightly in between their pauses and when they finish speaking. This is something you can do for podcasts too. To do it yourself you just feed the sound of your voice into the side-chain. In that way you can create a system that automatically fades down music when you speak, and fades it back up when you stop speaking. You have full control over the fade in/out rates using the "Attack" and "Release" controls.
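As a rough illustration of that routing, here is a sketch that reuses the hypothetical smoothing helper from earlier to duck one signal based on the level of another. The function name, parameters, and default values are made up for the example; in a real DAW you would use its own side-chain routing rather than code.

```python
import numpy as np

def duck(music, key, sample_rate=48000.0, threshold_db=-30.0, ratio=8.0,
         attack_ms=5.0, release_ms=60.0):
    """Turn `music` down whenever `key` (the side-chain input, e.g. a kick
    drum or an announcer's mic) gets loud.  The key itself is never heard;
    it only drives the gain reduction applied to the music."""
    music = np.asarray(music, dtype=float)
    key_db = 20.0 * np.log10(np.abs(np.asarray(key, dtype=float)) + 1e-9)
    overshoot = np.maximum(key_db - threshold_db, 0.0)
    desired_gr_db = -overshoot * (1.0 - 1.0 / ratio)   # reduction computed from the KEY level
    gr_db = smooth_gain_reduction(desired_gr_db, sample_rate, attack_ms, release_ms)
    return music * 10.0 ** (gr_db / 20.0)

# Kick-ducks-bass, roughly as described above (values are just a starting point):
# bass_ducked = duck(bass, kick, threshold_db=-25.0, attack_ms=2.0, release_ms=50.0)
```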
Creative Uses of Side-Chaining

You could feed the reverb returns from your lead vocal reverb through the compressor, but plug the dry vocal into the side-chain. That gives you a system where the reverb fades down when the vocalist sings, giving quite a "dry" sound, but returns to a strong "wet" reverb in between words and phrases in the song. It keeps the reverb from messing up the vocal in the parts where the words get busy. Such a technique is also useful for controlling the level of repeat "echo" effects at the end of phrases.

Pumping Music Effect

The best example of this effect is probably the song "Call on Me". Notice how all of the music and even the vocal is side-chained to the kick to create an interesting effect. You hear this on some EDM synth sounds to create more of a pumping type of effect.

De-Essing

You have a vocal, but sounds such as the letters "S" and "T" are sounding really harsh and burst through the mix too much. An extremely common problem. You don't want to equalize them out of the vocal sound, because the vocal sound is actually quite nice apart from those explosive "S" and "T" sounds. This is where de-essing comes in. If you're mixing on the computer it's as simple as inserting a de-esser plugin, selecting a frequency (often between 3–6 kHz, sometimes higher) and setting the threshold; the plugin will try to detect bursts in this frequency range only and reduce them, thereby turning down the "S" and "T" sounds when they occur. Pretty cool, right? Aside from de-esser plugins, there are also hardware de-esser units. These work pretty much the same way as the plugin. If you need a hardware de-esser, this is going to be your best and easiest option.

Make Your Own Hardware De-Esser

If, though, for some reason you're doing live sound on an analogue board without a de-esser rack, or you're in an analogue studio and you don't have a de-esser rack, you can make your own de-esser out of an EQ rack and a compressor rack that has a side-chain input. You'll probably never have to do this but hey, it's handy to know and a fun experiment if you want to try it.

Using Side-Chaining to Create a De-Esser

Remember that the side-chain is a system that lets you insert something – a sound source, or even a sound source that runs through another piece of hardware (or plugin) like a graphic equalizer, for example – immediately before the compressor's control system. Note this is NOT in the main audio path and doesn't affect the sound as such – just the way the compressor responds. This system lets you over-emphasise a certain frequency that you want the compressor to listen out for. Remember the problem from earlier: you have a vocal, but sounds such as the letters "S" and "T" are sounding too loud. Let's reduce them!

How to Do It

The solution: Using a fast "Attack" and quite a fast "Release", set the compressor at about 3:1 and then place an equaliser into the side-chain. Suck out all the bottom end and middle, and apply a boost at around 3 to 6 kHz. You will find that adjusting the "Threshold" will control how powerful the "S" and "T" sounds can get. Overdo it and the vocal can sound very strange. Set it so that it gets the "S" and "T" sounds just how you like them.
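In code, the same idea is just a band-passed copy of the vocal driving the detector while the gain reduction is applied to the full, unfiltered vocal. The sketch below reuses the hypothetical duck() helper from the previous example and assumes SciPy for the filter; the band limits, settings, and names are illustrative only.

```python
import numpy as np
from scipy.signal import butter, lfilter

def de_ess(vocal, sample_rate=48000.0, band_hz=(3000.0, 6000.0),
           threshold_db=-30.0, ratio=4.0, attack_ms=1.0, release_ms=50.0):
    """Frequency-sensitive compression: the detector listens only to the
    sibilant band, but the gain reduction is applied to the whole vocal."""
    low, high = band_hz
    nyquist = sample_rate / 2.0
    b, a = butter(2, [low / nyquist, high / nyquist], btype="bandpass")
    detector = lfilter(b, a, np.asarray(vocal, dtype=float))   # the "EQ in the side-chain"
    return duck(vocal, detector, sample_rate, threshold_db, ratio,
                attack_ms, release_ms)
```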
You Don't Need More Than One Hardware Compressor To Do This

Note: It is understandable to think that you might need two hardware compressors on a vocal – one to perform de-essing, followed by another one doing "normal" compression. This is true when mixing on the computer, where ideally you'd use a de-esser plugin before the compressor plugin. If you're using hardware, though, you can do this all with just one compressor. One compressor can do the two jobs at the same time! Simply compress the vocal as normal. Then insert an equalizer into the side-chain, and apply a small boost around the sibilant region (around 3–6 kHz). This will cause the compressor to over-react on sibilant sounds, thereby de-essing at the same time as compressing. Use the EQ BOOST to control how much the sibilant sounds are CUT. It is VERY, VERY IMPORTANT not to overdo de-essing. If you do, the singer will sound like they have a lisp. Or should that be lithp? 🙂 In any case, once you've screwed it up by overdoing it during recording, there is little you can do to rescue it later – so go easy on it! (You can always de-ess some more during mixing if required.)

It's important to understand that the equalizer you choose to place in the side-chain does NOT affect the frequency response of the sound going THROUGH the compressor, just what the compressor internally LISTENS to. It therefore affects how the overall volume responds to changes in volume at certain frequencies. This is known as "frequency SENSITIVE compression". It is also possible to purchase more complex compressors that actually DO affect the frequency response in different bands, and this is known as "frequency SELECTIVE compression" – there is a big difference between the two, although the names are similar, and even professionals get the two terms mixed up sometimes.

There is no such thing as a compressor setting that will work for every source or every type of source, because all sources are different. A setting that might work for one vocalist in one song where they are singing lots of long notes won't necessarily work for the same singer on another song where they are singing shorter, faster notes. The best settings for shorter, faster notes are short release times; that way the compressor has time to come back up again and work properly for the next note. The best settings for longer notes are long release times; that way the compressor will hold the note for longer rather than only compressing the start of the note. Fast attack times can grab notes faster but can sound less natural, and you will lose the attack and transient of each note if you make it too fast.

Useful as a Starting Point

So any "preset" you might find, if you use it without tweaking, is wrong. That's why you need to take the time to learn what each function does. It's OK to use presets as a starting point, but you're going to need to tweak them. With that said, I can give you some average presets as a starting point – but use your ears and adjust them accordingly. This is purely a loose guide. Generally speaking, you want to adjust the threshold so that you're getting around 5–10 dB of reduction; however, the more even the source sound is, the less you need to compress, and therefore the less reduction you will need. For example, a limited EDM kick drum might need no compression since it's exactly the same every time. So the threshold isn't worth mentioning for any of the presets below.
Some Presets (Just a Guide Only)

Vocals – sung: Ratio 4:1–5:1, 5–10 ms attack, 100–250 ms release
Vocals – rapping: Ratio 4:1–5:1, 5–10 ms attack, 50–100 ms release
Synth bass & sub bass: Ratio 4:1–5:1, 0–10 ms attack, 40–200 ms release
Ratio 4:1–8:1, 5–15 ms attack, 40–250 ms release
Most synths & piano: Ratio 4:1, 5–20 ms attack, 40–250 ms release
Distorted guitar power-chords: Ratio 5:1–8:1, 5–15 ms attack, 40–250 ms release
Master bus or drum bus – subtle limiting: Ratio 2:1, 10 ms attack, 50 ms release

In Summary

Knowing how to use a compressor takes time and practice. It's not something that is easy to learn from an article alone. The more you learn to use them, the more you can predict the behavior of the compressor based on its parameters. At least now, you should hopefully have a better understanding of how compression works and what the controls do. In all of the above examples, the settings and approaches suggested are merely a guide. Your best teacher of compression is your own ears, and the compressors that you own. When you find a setting that really works on a certain instrument, write it down – and make your own preset. It might save you some time later when you next record that same instrument. You need to work with your compressor for a long time, and develop a good working relationship with it, until you can really trust what it is up to. At that point, you can rely completely on your ears rather than presets and typical settings.

Different models of hardware compressors and plugins sound different. That's why people talk with great affection about certain old valve compressors, or perhaps a particular model of DBX compressor that they love. Lots of engineers love the LA-2A compressor because it's so simple to use. Hip-hop and R&B engineers often reach for the Tube-Tech CL 1B compressor. Both of those are opto-compressors. Opto-compressors are often a little more natural sounding on vocals. FET compressors give more of that rock-style punch and often act a little faster. In general, hardware compressors can sound pretty different to each other, and since so many are now emulated in software plugins, so can different compressor plugins. As with all compressors, settings vary a little between equipment and plugins. A setting that sounds great on one compressor might sound terrible on another. This applies to software plugins too, which is surprising, as one would expect the maths and figures to be identical in each one, but that is often not the case. The fact that people have a personal preference for different types of compressed sound means that there will always be a market for compressors from different manufacturers, as well as plugin models of those. There will always be the "classic" compressors that almost everyone likes, and there will also be a number of obscure, quirky units and plugins that only appeal to a select few.

You Need to Hear It!

Compression is an extremely difficult thing to describe in writing, and you really need to hear compression – in all its different forms – to get an understanding of how it can help you. Never apply compression to something simply because other people do. Apply it because you KNOW that you really NEED it and that you UNDERSTAND exactly what it is DOING to the sound. If in doubt, when recording and tracking compress too little rather than too much (it is very difficult – indeed, often impossible – to undo bad compression later), and remember that too little compression during recording can always be made up for when mixing later with a software compressor.
In fact, UAD plugins are handy here: because they run on DSP hardware you can monitor through them with very low latency compared to native software plugins, and you can choose either to record with the compression or to just monitor the compression while recording the uncompressed signal. That's a great tip and makes it very hard to mess things up during recording. As always, practice makes perfect, and I hope that this article has gone some way towards demystifying the process for you.

This article was originally written by Xar at the Serious Sounds Network http://serious-sounds.net/, a sound engineering and music production forum that has now been merged over to Current Sound. This article has been updated and modified by Tom Watson at Current Sound – Los Angeles Recording Studio, to bring it up to date with modern digital recording. Subscribe to the newsletter at the bottom of this page to be informed of more free sound engineering lessons. If you found this article interesting, please click one or more of the social share icons below to share it.
Dr. Cushing was born in Cleveland, Ohio. The fourth generation in his family to become a physician, he showed great promise at Harvard Medical School and in his residency at Johns Hopkins Hospital (1896 to 1900), where he learned cerebral surgery under William S. Halsted. After studying a year in Europe, he introduced the blood pressure sphygmomanometer to the U.S.A. He began a surgical practice in Baltimore while teaching at Johns Hopkins Hospital (1901 to 1911), and gained a national reputation for operations such as the removal of brain tumors. From 1912 until 1932 he was a professor of surgery at Harvard Medical School and surgeon in chief at Peter Bent Brigham Hospital in Boston, with time off during World War I to perform surgery for the U.S. forces in France; out of this experience came his major paper on wartime brain injuries (1918). In addition to his pioneering work in performing and teaching brain surgery, he had been the reigning expert on the pituitary gland since his 1912 publication on the subject; later he discovered the condition of the pituitary now known as "Cushing's disease".

Today, April 8th, is Cushing's Awareness Day. Please wear your Cushing's ribbons, t-shirts, awareness bracelets or Cushing's colors (blue and yellow) and hand out Robin's wonderful Awareness Cards to get a discussion going with anyone who will listen. And don't just raise awareness on April 8. Any day is a good day to raise awareness.

I found this biography fascinating! I found Dr. Cushing's life to be most interesting. I had previously known of him mainly because his name is associated with a disease I had – Cushing's. This book doesn't talk nearly enough about how he came to discover the causes of Cushing's disease, but I found it to be a valuable resource anyway. I was so surprised to learn of all the "firsts" Dr. Cushing brought to medicine and the improvements that came about because of him. Dr. Cushing introduced the blood pressure sphygmomanometer to America and was a pioneer in the use of X-rays. He even won a Pulitzer Prize – not for medicine, but for writing the biography of another doctor (Sir William Osler). Before his day, nearly all brain tumor patients died. He was able to get that figure down to only 5%, unheard of in the early 1900s. This is a very good book to read if you want to learn more about this most interesting, influential and innovative brain surgeon.

What Would Harvey Say?

(BPT) – More than 80 years ago renowned neurosurgeon Dr. Harvey Cushing discovered a tumor on the pituitary gland as the cause of a serious hormone disorder that leads to dramatic physical changes in the body in addition to life-threatening health concerns. The discovery was so profound it came to be known as Cushing's disease. While much has been learned about Cushing's disease since the 1930s, awareness of this rare pituitary condition is still low and people often struggle for years before finding the right diagnosis. Read on to meet the man behind the discovery and get his perspective on the present state of Cushing's disease.

* What would Harvey Cushing say about the time it takes for people with Cushing's disease to receive an accurate diagnosis?

Cushing's disease still takes too long to diagnose! Despite advances in modern technology, the time to diagnosis for a person with Cushing's disease is on average six years.
This is partly due to the fact that symptoms, which may include facial rounding, thin skin and easy bruising, excess body and facial hair and central obesity, can be easily mistaken for other conditions. Further awareness of the disease is needed as early diagnosis has the potential to lead to a more favorable outcome for people with the condition. * What would Harvey Cushing say about the advances made in how the disease is diagnosed? Significant progress has been made as several options are now available for physicians to use in diagnosing Cushing’s disease. In addition to routine blood work and urine testing, health care professionals are now also able to test for biochemical markers – molecules that are found in certain parts of the body including blood and urine and can help to identify the presence of a disease or condition. * What would Harvey Cushing say about disease management for those with Cushing’s disease today? Patients now have choices but more research is still needed. There are a variety of disease management options for those living with Cushing’s disease today. The first line and most common management approach for Cushing’s disease is the surgical removal of the tumor. However, there are other management options, such as medication and radiation that may be considered for patients when surgery is not appropriate or effective. * What would Harvey Cushing say about the importance of ongoing monitoring in patients with Cushing’s disease? Routine check-ups and ongoing monitoring are key to successfully managing Cushing’s disease. The same tests used in diagnosing Cushing’s disease, along with imaging tests and clinical suspicion, are used to assess patients’ hormone levels and monitor for signs and symptoms of a relapse. Unfortunately, more than a third of patients experience a relapse in the condition so even patients who have been surgically treated require careful long-term follow up. * What would Harvey Cushing say about Cushing’s disease patient care? Cushing’s disease is complex and the best approach for patients is a multidisciplinary team of health care professionals working together guiding patient care. Whereas years ago patients may have only worked with a neurosurgeon, today patients are typically treated by a variety of health care professionals including endocrinologists, neurologists, radiologists, mental health professionals and nurses. We are much more aware of the psychosocial impact of Cushing’s disease and patients now have access to mental health professionals, literature, patient advocacy groups and support groups to help them manage the emotional aspects of the disease. Novartis is committed to helping transform the care of rare pituitary conditions and bringing meaningful solutions to people living with Cushing’s disease. Recognizing the need for increased awareness, Novartis developed the “What Would Harvey Cushing Say?” educational initiative that provides hypothetical responses from Dr. Cushing about various aspects of Cushing’s disease management based on the Endocrine Society’s Clinical Guidelines.
Hagiography (from ancient Greek τὸ ἅγιον tò hágion "the holy, sanctuary" or ἅγιος hágios "holy, venerable" and -graphy) comprises both the representation of the lives of saints and the scholarly study of such representations. Hagiographic sources are texts or material remains that are suitable for providing information about the earthly life of the saints, their cult, and the miracles that the respective cult community believes they have performed. The texts include, for example, vitae (saints' lives), translation reports, monastery and diocese chronicles, mentions in other chronicles and other historiographical genres, authentics (authentication documents for relics), calendars, literary genres serving veneration in liturgical manuscripts (e.g. hymns, sequences, antiphons or litanies), and epigraphic testimonies (inscriptions). The material remains include icons and other pictorial representations, cult buildings, cult implements, graves of saints, relics and reliquaries, votive offerings and devotional objects.

Hagiography or hagiology

In order to separate the meanings of hagiography and hagiology, one proposal is to designate only the description of a life (vita) as hagiography, while the scholarly study of it is called hagiology. Accordingly, a more or less scholarly edition containing lives of saints and studies of them is called a hagiologion or hagiologium.

In a figurative sense, the term hagiography, or the adjective hagiographic, describes a biography that represents the person described as a "saint" in the sense of an exemplary person without blemish and presents him to the reader on the one hand as a moral model, on the other hand as an elect of God worthy of cultic veneration. Since such a representation often has one-sidedly encomiastic features, shows an uncritical and euphemistic tendency, neglects historical source criticism and is not committed to a strictly rationalist concept of truth, the expression can also be used in a pejorative sense. Ever since the Reformation, and increasingly since the 19th century, which, with the onset of historical source criticism and the establishment of a rationalist concept of truth shaped by the natural sciences, became increasingly alien to the idea of the supernatural, hagiography has encountered more and more fundamental criticism. The Bollandist enterprise, the Acta Sanctorum, supported by the Jesuit order, sought to defend hagiography against this blanket rejection by critically examining and collecting the tradition.

From a literary perspective, the term hagiography is rejected by the Medieval Latin scholar Walter Berschin, who points out that historical truth cannot be a criterion of genre or quality; instead of hagiography, we should speak of biography. On the other hand, the hagiographic intention results in a certain hagiographic discourse, referred to by Berschin as the biblical background style, which is reflected in the recourse to certain literary models, to biblical examples and to hagiographic topoi. Within this hagiographic discourse there is a further difference between hagiography and ancient biography. As Albrecht Dihle has shown, the latter was based not on historiography but on the interest of philosophical ethics in the morally autonomous individual as a model. In this respect, the hagiographic conception of God's intervention, as a metaphysical power, in historical and biographical processes marks a fundamental difference.
For in this way the saint becomes God's instrument, and every hagiography becomes a piece of salvation history: a testimony to the gracious self-revelation of God in history and to the fulfillment of the promised assurances of salvation. Because of this new outlook, it was precisely the specific, unique event itself that gained importance, while ancient biography was primarily interested in the generalizable moral attitude that manifested itself in an event. The prerequisite for this development was the fact that biography had already developed into a genre of historiography under the special conditions of the Roman Empire.

The history of Christian hagiography began in the 2nd century with descriptions of the lives of martyrs, ascetics or hermits, and holy virgins. In the Middle Ages, the heyday of hagiography, there were biographies of almost all the saints of the church. In the Latin-speaking area alone, the Bibliotheca Hagiographica Latina, with its supplements, lists well over 10,000 entries. An important medieval collection of legends of saints is the Legenda aurea by Jacobus de Voragine, written between 1263 and 1273. In addition to the already mentioned Acta Sanctorum of the Bollandists, further collections followed in the early modern period, such as the Sanctuarium (volumes 1–2, Venice 1474) by Boninus Mombritius (1424–1502?), De probatis vitis Sanctorum ab Al. Lippomano olim [1551–1560] conscriptis nunc primum emendatis et auctis (volumes 1–6, Cologne 1570–1576) by Laurentius Surius (1522–1578), and the Acta primorum martyrum sincera (Paris 1689) by Thierry Ruinart.

Today the historical interest of hagiographic research lies less in the authenticity of the tradition than in the study of collective memory and of how it was handled, as well as in questions of social history and the history of mentalities. Hagiographic sources also play a not insignificant role in research on the history of monasticism, orders and monasteries, dioceses and other ecclesiastical institutions, as well as on the legitimation and representation of rule by the medieval and early modern nobility and kingship.

Sources of Christian hagiography include the vita, passio, miracula, translation reports, letters, lists of saints, calendars, martyrologies or the menologion and synaxarion, as well as liturgical books such as antiphonaries, sacramentaries and books of hours, and finally sources of cult history such as registers of relics, reliquaries and the inscriptions (authentics) inserted into them, memorials, altars and altar titles (inscriptions with the names of the saints), consecration notes (notae dedicationis), sculptures and pictorial representations.

Vita: This type of hagiographic source developed from the trial records (acta) and accounts of the suffering of people condemned to death for their beliefs (passio = 'suffering'); later, life descriptions (vitae) of the martyrs were written. As the persecution of Christians decreased, attention to the characteristics of a saint increased in the lives of confessors, ascetics and bishops, so that their vitae served as a source of hagiographic historiography. The term vita is also used more generally for the tradition of a way of life (conversatio).

Passio: originally referred to the martyrdom report, but was used synonymously with vita from an early stage, without distinction, and was also used for confessors, since the godly life in following Christ was seen as a path of suffering.
Miracula: A striking example of hagiographic historiography is the transmission of miracles in a person's vita. A plausible miraculum (report of a miracle) as a criterion for canonization has been passed down with preference in hagiographic sources, but has not been assumed. Miracle collections, often as the second part of a vita or passio , are therefore a common form of literature. Translation report: describes the raising of the bones , the transfer of the relics and their burial (depositio) at the place of cultic veneration. Translation reports are often the earliest cult evidence. They can appear independently, often in the form of a letter, or as part of a vita or passio . Structure of a classical Christian hagiography Hagiographies were traditionally short texts that were arranged in an anthology chronologically according to the memorial days of the saints. They should set an example for the Christian way of life. Classical hagiography followed a fixed scheme. - Introduction by the author. - Childhood and youth of the saint. Description of virtues and miracles that distinguish the saint from other adolescents. - Life as a charismatic , church official (priest, bishop, abbot), anchorite , ascetic : frequent motifs are victory over temptation, founding a monastery, building churches, battles with the devil, teachings and sermons, proselytizing pagans or heretics , divine visions, prophecies, Healing and other miracles. - Death or martyrdom and tale of miracles, - Further reports of miracles and deeds: sometimes the relics turn out to be indestructible or the saint appears to the bereaved in visions and determines the place where his relics are to be buried and venerated. Punishment miracles in the case of despisers of the cult. - Notes on reliquary surveys and translations. - Comparison with other saints. - Epilogue, prayer, author's epilogue. Not only Christianity, but also other religions, such as Judaism , Islam , Hinduism , Buddhism , Confucianism and Daoism , developed ideas of exemplary and therefore worthy people, some of them long before Christianity came into being, to which the development of diverse memorial and cult forms corresponds. Manuals and resources - Johann Evangelist Stadler, Franz Joseph Heim (ed.): Complete Lexicon of Saints or life stories of all saints, blessed, etc. ... in alphabetical order, with two supplements containing the attributes and the calendar of the saints. Vol. 1–5, Schmid, Augsburg 1858–1882 (via the ecumenical lexicon of saints [see web links] also on the web). - Subsidia Hagiographica. Société des Bollandistes , Brussels 1886ff. (90 volumes so far, including essential aids). - Bibliotheca hagiographica latina antiquae et mediae aetatis. Vol. 1-2. (= Subsidia Hagiographica. Vol. 6). Société des Bollandistes, Brussels 1898–1901 (reprint 1992). - Bibliotheca hagiographica latina antiquae et mediae aetatis. Novelty supplement. Edidit Henricus FROS. Société des Bollandistes, Brussels 1986. - René Aigrain : L 'hagiography. Ses sources, ses méthodes, son histoire. Paris 1953 (repr. 2000). - Bibliotheca sanctorum. Vol. 1–12 + index volume, Rome 1961–1970. - Wolfgang Braunfels et al. (Ed.): Lexicon of Christian iconography . Vol. 5–8 Iconography of the Saints. Herder-Verlag , Freiburg im Breisgau 1973–1976. - Marc Van Uytfanghe : Art. Adoration of saints II (hagiography) . In: Real Lexicon for Antiquity and Christianity . Vol. 14, Stuttgart 1988, Col. 150-183. - Réginald Grégoire : Manuale di agiologia. 
Introduzione alla letteratura agiografica (= Bibliotheca Montisfani. Vol. 12). Fabriano ² 1996. - Claudio Leonardi among others: Art. Hagiography. In: Lexicon of the Middle Ages . Vol. 4, 1989, col. 1840-1862. - Veit Neumann (ed.): Saints. Hagiography as theology . Echter-Verlag, Würzburg 2020, ISBN 978-3-429-05433-5 . - Guy Philippart (Ed.): Hagiographies. Histoire internationale de la littérature hagiographique de latine et vernaculaire, en Occident, des origines à 1550. Tournhout 1994ff. - Alphons M. Rathgeber : legend of saints. Images of life of noble people and holy friends of God. Nuremberg 1936; 2nd edition ibid. - Dieter von der Nahmer : The Latin saints vita. An Introduction to Latin Hagiography. WBG , Darmstadt 1994, ISBN 978-3-534-19190-1 . - Walter Berschin : biography and epoch style. Volumes 1–5, Hiersemann, Stuttgart 1984–2004, ISBN 3-7772-8606-0 . - Bruno Steimer and Thomas Wetzstein (adaptation): Lexicon of Saints and Adoration of Saints (Lexicon for Theology and Church compact). Volume 1-3. Herder, Freiburg im Breisgau et al. 2003, ISBN 978-3-451-28190-7 . - Jakob Torsy: The Big Name Day Calendar . 3720 names and 1560 biographies of our saints. 13th edition, Herder, Freiburg im Breisgau 1976; Reprinted 1989, ISBN 978-3-451-32043-9 . - Otto Wimmer: Handbook of names and saints, with a history of the Christian calendar. 3. Edition. Tyrolia, Innsbruck / Vienna / Munich 1966; from 4th edition 1982 by Otto Wimmer and Hartmann Melzer , under the title: Lexicon of Names and Saints . Nicol, Hamburg 2002, ISBN 3-933203-63-5 . - Gereon Becht-Jördens: Biography as salvation history. A paradigm shift in genre development. Prolegomena to a formal historical interpretation of Einhart's Vita Karoli. In: Andrea Jördens u. a. (Ed.): Quaerite faciem eius semper. Studies on the intellectual-historical relationships between antiquity and Christianity. A gift of thanks for Albrecht Dihle on his 85th birthday from the Heidelberg Church Fathers Colloquium (= Studies on Church History. Vol. 8). Kovac, Hamburg 2008, pp. 1–21. - TJ Heffermann: Sacred Biography. Saints and their Biographers in the Middle Ages. New York / Oxford 1988. - Dieter Hoster: The form of the earliest Latin saints' lives from the Vita Cypriani to the Vita Ambrosii and their ideal of saints. Cologne 1963, (Dissertation University of Cologne, Philosophical Faculty 1963, 161 pages, 21 cm). - Friedrich Prinz : Hagiography and cult propaganda. The role of the commissioners and authors of hagiographic texts of the early Middle Ages. In: Journal of Church History . No. 103, 1992, pp. 174-194. - Friedrich Prinz: The saint and his lifeworld. Reflections on the socio-historical and cultural-historical value of life and miracle stories. In: Monasticism, Culture and Society. Contributions to the Middle Ages, for the author's 60th birthday , Munich: CH Beck 1989, pages 251–268, ISBN 3-406-33650-7 . - Wiebke Schulz-Wackerbarth: Adoration of saints in late antique and early medieval Rome. Hagiography and topography in discourse = contexts. New contributions to historical and systematic theology, Volume 47. Göttingen: Edition Ruprecht 2020, ISBN 978-3-8469-0286-8 - Literature on the subject of hagiography in the catalog of the German National Library - Ernst Tremp: Hagiography. In: Historical Lexicon of Switzerland . 
- Ecumenical Lexicon of Saints - Homepage of the Societé des Bollandistes - Publications on hagiography in the Opac der Regesta Imperii - Guy Philippart: Hagiographes et hagiographie, hagiologes et hagiologie: des mots et des concepts. Published in: Hagiographica. Volume 1 1994 - Walter Berschin: Biography and Epoch Style (see literature below), Vol. 1, pp. 17–24. - Albrecht Dihle : On the ancient biography. In: La biography antique. Huit exposés suivis de discussions. (Entretiens sur l'antiquité classique 44). Fondation Hardt, Vandoeuvres-Genève 1998, pp. 119-146; Albrecht Dihle: Ancient Foundations. In: Walter Berschin (Hrsg.): Biography between Renaissance and Baroque. Mattes, Heidelberg 1993, pp. 1-22; Albrecht Dihle: Studies on the Greek biography. (Treatises of the Academy of Sciences in Göttingen, Phil.-hist. Class 3). 2nd Edition. Goettingen 1970. - Cf. Becht-Jördens: Biography as Salvation History (see literature below). - See Albrecht Dihle: The emergence of the historical biography. (Meeting reports of the Heidelberg Academy of Sciences, Phil.- hist. Class 1986, 3). Winter, Heidelberg 1987. - For more details see the article Acta Sanctorum .
There are a few video codecs that are commonly used today. Two names that come up constantly are FFmpeg and Xvid. Both have their own strengths and weaknesses, but many people do not know the difference between them. In this blog post, we will compare and contrast FFmpeg and Xvid to help you understand which one is right for you.

What is FFmpeg?

FFmpeg is a powerful open-source software suite that is widely used in the multimedia industry. With tools for encoding, decoding, transcoding, muxing, demuxing, streaming, filtering, and playing audio and video files, FFmpeg enables users to create and manipulate digital media content with unprecedented flexibility. Whether you are a professional video editor or a hobbyist just looking to try your hand at filmmaking, FFmpeg has the tools you need to get the job done quickly and efficiently. And with its robust community of developers constantly working to improve the software and add new capabilities, there is no limit to what you can achieve with FFmpeg.

What is Xvid?

Xvid is a popular video codec that was developed as an open-source alternative to commercial MPEG-4 implementations. Unlike many other codecs, Xvid is freely available and can be used on a wide range of platforms and software programs. Because it has such a large community of users, there is also a robust support network for Xvid that provides helpful tips, troubleshooting guides, and tutorials for anyone looking to start using this video format. Whether you are an amateur filmmaker or a professional videographer, Xvid can meet your video encoding needs with its high-quality output and straightforward configuration.

Difference between FFmpeg and Xvid

There are a few key differences between FFmpeg and Xvid. First, FFmpeg is a complete open-source multimedia framework, while Xvid is a single open-source video codec; in FFmpeg's case, anyone can contribute to development and no single company controls the project. Second, FFmpeg offers a far wider range of features than Xvid. For example, FFmpeg can be used to encode and decode a large variety of video and audio formats, while Xvid is limited to encoding MPEG-4 (ASP) video. Third, the quality of the encoded video can be better with FFmpeg, in part because it gives access to more modern codecs and more advanced compression options. Finally, FFmpeg is fast enough to encode and decode many formats in real time without noticeable delay. Overall, FFmpeg is a more powerful and versatile tool than Xvid, making it the better choice for most users.

FFmpeg and Xvid both have their own benefits and drawbacks. FFmpeg is a full toolkit that supports many more formats, while Xvid is a single, lightweight codec. Ultimately, the best choice depends on your needs. If you need to encode videos for a wide range of devices and formats, then FFmpeg is probably the better option. But if you are looking for a lightweight codec that will work with most devices, Xvid is a good choice.
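The two also meet in practice: FFmpeg can drive Xvid-style encoding directly. As a hedged illustration, the snippet below calls the ffmpeg command line from Python to encode an Xvid-flavoured MPEG-4 file with the libxvid encoder. It assumes ffmpeg is on your PATH and was built with libxvid support (otherwise FFmpeg's built-in mpeg4 encoder is the usual fallback); the filenames and settings are placeholders, not recommendations.

```python
import subprocess

# Quality-based rate control for libxvid: lower "-q:v" means higher quality
# (and larger files).  Audio goes to MP3 for broad player compatibility.
cmd = [
    "ffmpeg", "-i", "input.mov",
    "-c:v", "libxvid", "-q:v", "5",
    "-c:a", "libmp3lame", "-b:a", "192k",
    "output.avi",
]
subprocess.run(cmd, check=True)
```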
February 12, 1981
GDR-Iran Exchange of Opinions
This document was made possible with support from Blavatnik Family Foundation

1. GDR-Iran Exchange of Opinions

On February 12, 1981, a government delegation of the Islamic Republic of Iran was staying in the GDR under the leadership of the Minister of Education, Dr. Mohammed Javad Bahonar. He indicated to Comrade Oskar Fischer that his goal was to find out the GDR's position toward Iran's Islamic revolution, the Iraqi invasion of Iran, and the preparedness of the GDR for further cooperation between the two countries.

Bahonar said that the Islamic revolution under Ayatollah Khomeini would be led to victory. The GDR was among the first countries that supported and recognized the Islamic revolution. The Islamic Republic of Iran would like to develop close relations, for mutual benefit, with all countries and governments that recognize the goals and the outcomes of the revolution.

On international problems, Bahonar explained that Iran is interested in securing freedom, ending the arms race, and ensuring popular national independence. He condemned the "spiral" of the arms race and interference in the domestic affairs of nations. He said that Iran is concerned about the presence of the "superpowers" in the Persian Gulf and in the Indian Ocean. He said that the Islamic Republic of Iran supports allowing the nations of this region to take responsibility for its security. According to him, the Iranian people are not indifferent to the increased military presence of the USA in the region. They would not like for "another power to take the place of the USA" in the struggle to remove this danger.

Regarding the Iran-Iraq conflict, Iran shares the view of the GDR that it only serves the purposes of imperialism and must be ended as soon as possible. This is of great significance for the continuation of the revolution in Iran. The Iranian government does not expect the GDR to give up its friendly relations with Iraq. But it asks the question of who is the aggressor. Iran would like the GDR to influence Iraq and pull it back. The position on the Iran-Iraq conflict is, for Iran, an important point for the development of further relations.

On Afghanistan, Bahonar stated that the Islamic Republic of Iran condemns any interference of imperialism, particularly by the USA, in this country. This, however, does not mean that they accept the "presence" of another country. A "government that is forced on the people" can make no decisions that the people do not support. Iran advocates for all peoples of the region to decide their own fate, without external influence and pressure.

Bahonar emphasized that Iran is prepared to expand and deepen bilateral cooperation with the GDR in political, economic, and cultural spheres. Bahonar invited the GDR Minister for Foreign Affairs as well as the Vice President of the National Council to visit Iran. He welcomed proposals to conclude further treaties and demonstrated particular interest in the use of the GDR's experience in the area of public education and higher education, as well as the cooperation of social forces united in the National Front. He requested the sharing of comprehensive informational materials on the GDR's education system.

Comrade Oskar Fischer elaborated on the peace policy of the GDR and the principled standpoint of the GDR toward the Iranian popular revolution.
He indicated that the imperialists are preparing new actions against the peace efforts of the people and that the international situation is coming to a dangerous point. The imperialist course of heavy armament, the acceleration of the arms race, the long-term armament program of NATO, the Brussels missile decision, the so-called new nuclear strategy of the USA, and not least the neutron weapon plans of the USA endanger the peace of the entire world. In this connection Comrade Oskar Fischer emphasized the role of the USSR and the countries of the Socialist community in the struggle for peace, security, detente, and disarmament. The Foreign Minister of the GDR assessed the good relations that have developed between the GDR and Iran in many areas, particularly the economy and trade. He declared the preparedness of the GDR to further develop and deepen the relations between both countries, as well as to extend them into other areas. On this point he presented a number of suggestions, which were positively received by the Iranian partners. Comrade Oskar Fischer presented the position of the GDR on the Iran-Iraq conflict and Afghanistan. Comrade Kirchhoff informed the delegation about the experiences of the National Front during the democratic transformation and the creation of the developed Socialist society of the GDR. The conversations are the first political exchange of opinion between the two governments and have created starting points for further development of political relations. They took place in a sober, open-minded, and constructive atmosphere. Representatives of the German Democratic Republic and the Islamic Republic of Iran discuss the arms race, the presence of superpowers in Afghanistan, the Persian Gulf, and the Indian Ocean, the Iran-Iraq conflict, and the potential for bilateral cooperation between East Germany and Iran. The History and Public Policy Program welcomes reuse of Digital Archive materials for research and educational purposes. Some documents may be subject to copyright, which is retained by the rights holders in accordance with US and international copyright laws. When possible, rights holders have been contacted for permission to reproduce their materials. To enquire about this document's rights status or request permission for commercial use, please contact the History and Public Policy Program at [email protected].
<urn:uuid:9b3b3460-139d-4a0a-a2cf-6a9e8de6d15f>
CC-MAIN-2023-50
https://digitalarchive.wilsoncenter.org/document/gdr-iran-exchange-opinions
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.948181
1,128
2.953125
3
December 10, 1945 Malik, 'On the Question of a United Government in Korea'
ON THE QUESTION OF A SINGLE GOVERNMENT FOR KOREA
The question of the creation of an independent Korea was first posed in the Cairo Declaration signed by Roosevelt, Churchill, and Chiang Kai-shek [Jiang Jieshi]. The liberation of Korea from under Japanese domination puts on the agenda and makes pressing the question of turning this country into an independent and self-governing country, as well as [the question of] the creation of a single government of Korea. All the political and public groups of Korea, regardless of their political views, not only declare a desire to have their own national Korean government, but are also trying to take steps to organize such a government. (The creation of the so-called "People's Government of Korea" in Seoul in September of 1945). The United States of America, through the military command of the American occupation forces in South Korea, supports the idea of the creation of a single all-Korean governing body, and likewise it advances the idea of the economic and political unification of Korea. It is politically inadvisable for the Soviet Union to resist the creation of a single Korea government. The creation of such a government is one of the most important political issues associated with the problem of the future of Korea and capable of determining its future. The nature of the government of Korea cannot fail to interest the Soviet Union inasmuch as the nature of this government will be one of the decisive factors in the determination of the future position of Korea from the point of view of our political, economic, and defense interests in the Far East. By virtue of the above our main task is to take steps for the composition and nature of the activity of the Korean government to promote turning Korea into one of the bastions of our security in the Far East and for Korea not to be turned into an instrument directed against the Soviet Union in the hands of any country unfriendly to us. At the present time the political situation in Korea is in the process of formation. Many various groups, movements, societies, and parties are appearing on the political horizon. There is still no precise definition of these political movements, however one can already note three main political trends right now: the Communist Party of Korea and the trade union and public organizations associated with it, the Democratic Party - the party of the big national bourgeoisie and landowners, and the People's Party, created along the lines of the Kuomintang and combining the most diverse political elements, from the petit bourgeoisie to representatives of the big bourgeoisie. Korean émigrés who have recently returned from the US and Chongqing, where they were active in the roles of all sorts of "provisional governments of Korea" or passing themselves off as candidates for future rulers of Korea after liberation of this country from Japanese dominance are beginning to play a prominent political role in Korea. The Communist Party of Korea characterizes the current political situation in Korea as a stage of a bourgeois-democratic revolution which has arisen as a consequence of the national liberation of Korea which, however, occurred not as a result of their own efforts and the struggle of the Korean people, but with the aid of and under the influence of outside forces, that is, the United Nations, which defeated Japanese imperialism and liberated Korea.
The slogans of the Communist Party of Korea are: the democratic dictatorship of the working class and peasantry, winning over the masses, and the creation of a single democratic front. The main tasks with which the Communist Party of Korea is faced are: "the achievement of the complete national independence of Korea and agrarian reform with confiscation of all the landowners' land and its distribution to the peasants." Since the first days of liberation of Korea the Communist Party of Korea has quickly revived its political activity and notably increased its influence on the masses. The Communist Party of Korea received four ministerial portfolios in the "Korean people's government" created in Seoul on 6 September of this year but which was not recognized by the American occupation authorities and was disbanded on 16 September. There has been a noticeable tendency on the part of the American occupation authorities to limit the political activity of the Korean Communist Party. The question of renaming the Communist Party of Korea the Workers and Peasants Party has even been raised in the CC. It stands to reason that in the formation of a single government of Korea the Americans together and in alliance with Chiang Kai-shek (and possibly with the support of the British[)], and relying on reactionary Korean elements, in particular the pro-American political émigrés who returned to Korea from the US and Chongqing, will oppose the inclusion of Communists and genuinely democratic elements in a single government of Korea in every possible way. By virtue of all the above it would be advisable to adopt the following decisions: 1. Affirm and again declare the independence of Korea. 2. Advocate the creation of a provisional government of Korea. Choose [izbrat'] this government with the participation of all Korean democratic, public, and political organizations. 3. These organizations should elect a temporary committee to prepare for a congress of a representative people's (constituent) assembly of Korea. 4. The congress of the constituent assembly should be preceded by the holding of local broad democratic meetings and in Korea-wide [meetings] of workers, businessmen, and other population groups for a broad discussion and nomination of candidates for delegates to the constituent assembly and for a single government of Korea. 5. Form a special allied commission of representatives of the USSR and the US to carry out the preparatory work, observe, and assist the provisional government as well as the committee to prepare the convening of the Korea-wide constituent assembly. (It will possibly be necessary to also include representatives of China and Britain in this commission). The commission should submit recommendations to the governments of the USSR and US (China and Britain). 6. Create a Mixed Soviet-American Commission of representatives of the Soviet and American commands to solve all the current issues which arise from the fact of the presence of Soviet and American troops on Korean territory. 10 December 45 This document discusses the creation of an independent Korea. Roosevelt, Churchill, and Chiang Kai-shek first presented the idea at the Cairo Conference in 1943. The United States supports the creation of a single Korean state while the USSR opposes it. The document discusses the importance of the answer to the unification question for the Soviet Union's political and economic future as well as its interest in the Far East. 
Associated People & Organizations - Korean reunification question (1945- ) - Korea (North)--Foreign relations--Soviet Union - Korea (North)--Foreign relations--United States - Korea (South)--Foreign relations--United States - Korea (South)--Foreign relations--Soviet Union - Korea--History--Allied occupation, 1945-1948 - Korea--History--Japanese occupation, 1910-1945 The History and Public Policy Program welcomes reuse of Digital Archive materials for research and educational purposes. Some documents may be subject to copyright, which is retained by the rights holders in accordance with US and international copyright laws. When possible, rights holders have been contacted for permission to reproduce their materials. To enquire about this document's rights status or request permission for commercial use, please contact the History and Public Policy Program at [email protected].
<urn:uuid:7b5c68fb-23dd-4d22-a9ca-30029a35d265>
CC-MAIN-2023-50
https://digitalarchive.wilsoncenter.org/document/malik-question-united-government-korea
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.945096
1,524
2.953125
3
What is the Displacement Tracking Matrix? The Displacement Tracking Matrix (DTM) gathers and analyzes data to disseminate critical multi-layered information on the mobility, vulnerabilities, and needs of displaced and mobile populations that enables decision makers and responders to provide these populations with better context-specific assistance. Global Data Institute Established in 2022, IOM's Global Data Institute (GDI) works to enhance the availability and use of data to achieve stronger governance outcomes and positive impacts for migrants and societies in line with IOM's Migration Data Strategy. DTM is one of the founding pillars of the GDI, alongside the Global Migration Data Analysis Centre (GMDAC). This map shows the geographic coverage of DTM data collection around the world as of Nov 2022. The map used here is for illustration purposes only. Names and boundaries do not imply official endorsement or acceptance by IOM.
<urn:uuid:71d2cc0d-da53-430c-b584-34110c7e5a0a>
CC-MAIN-2023-50
https://dtm.iom.int/?f%5B0%5D=country_report_published_date%3A2020&f%5B1%5D=country_report_type_facet%3A20&f%5B2%5D=country_report_type_facet%3A22&title=&field_summary_value=&field_component1_target_id_verf=All&sort_by=field_published_date_value&sort_order=DESC&page=5
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.823388
194
2.546875
3
Article published online to mark the 30th anniversary of European projects. Adult Education – Slovenia, 2016-18 'Students need to learn to effectively communicate, to express themselves, to process information, to be active citizens, so that their education transfers into their employability.' Jose Tomas Pastor Perez is head of the Science and Technology Department at CFPA Mercè Rodoreda, a public learning centre for adults located in the small city of Elche, Spain. In this role, he has positioned himself as an innovative educator and a teaching enthusiast. Through participation in different Erasmus initiatives, Jose has come to view the model of adult education in a new light, one based on asking students 'How can I assist you in reaching your goals?' Reflecting on his experiences, Jose sees his work as complementing and helping innovate school curricula, with courses that stress practical skills for the new knowledge society such as online job-searching techniques, creating online portfolios and social media recruitment. He has also introduced non-formal teaching methods into the learning process. For instance, at his centre students learn about science and technology by creating and overseeing their own science museum. 'Preparing objects for the museum not only helps students learn about aerodynamics or optics but also organise events, conduct guided tours, and interact with the local community. That is much more beneficial than just sitting with a book', he says. His efforts have translated into tangible benefits for the students, securing their entry into the job market, their future employability and career development. While Jose's contribution to quality adult education has been recognised by numerous awards, such as the 'Miguel Hernández award' from the Spanish Ministry of Education, this is merely a positive 'side-effect' of his efforts. 'My main goal is to introduce new innovative teaching methods into the training process in order to offer better services for society. The Erasmus+ programme helps with this significantly.'
<urn:uuid:7e866342-75a3-4750-ac13-f87361369420>
CC-MAIN-2023-50
https://educavia.blogspot.com/2017/11/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.951095
437
2.546875
3
There is no telling what the future holds for this country, but a drive towards sustainability is happening now. This drive could be the result of a growing political climate, or it could be that eco-friendly voices are finally being heard. Whatever the reason, more corporations are asking green-focused questions, like how their products and overall production practices are affecting the environment. To learn more, check out this infographic sponsored by the University of California Riverside's Online Master of Science in Engineering program.
A Closer Look at Corporate Interest in Sustainability
Money talks, and that is exactly what is evident with many corporations. The fact is corporations–as a whole–are spending 877 million dollars on sustainable technologies and processes. This was the total spending in the year 2015. But studies have shown that this number is expected to surpass 1 billion dollars by 2019. What is even more telling is that global consumer growth has been 4 percent for companies that commit to sustainability. Keep in mind that companies that fail to do so have only seen a 1 percent growth at best. Another survey showed that a whopping 66 percent of consumers are willing to pay more for brands that have committed to sustainability. Corporations also have their own perspectives regarding sustainability as well. A survey showed that 72 percent of the CEOs from different companies consider sustainability to be one of their top 10 priorities. And–to further cement just how committed some of these companies are–another survey showed that 40 percent of companies implemented a sustainability program in their business in 2015. Sustainability is definitely making a grand entrance into corporate discussion, so much so that 42 percent of them feel that it has already entered the mainstream and shows no signs of slowing down.
What is LCA and How it is Helping the Movement
Paying attention to how much sustainability can save a company in the long run makes a great case as to why many corporations are expanding their sustainability practices. Sure, there is a financial upheaval at the beginning of any change because companies have to invest time, effort, and capital to make green-conscious changes, but this investment proves to be worth it. One of the ways that companies are attempting to make changes is by first conducting an LCA or a Life Cycle Assessment on their manufacturing. Conducting an LCA helps a company identify how to best cut back on energy and promote sustainability. When evaluating a company, the LCA's assessment process consists of four stages. The first is to identify the purpose of the LCA. The second is to identify both the human and ecological impact the production has. Third is to evaluate opportunities to reduce energy usage and material inputs.
And, lastly, the LCA calculates specific energy and raw material consumption during each stage of production. This intense process allows a company to see where and how to improve productivity in a way that is much more sustainable. Several large companies have already done their part, successfully making bold moves into sustainability. Kraft, for example, was able to achieve 109.5 million pounds of packaging reduction back in 2010. This was accomplished by using recyclable packaging for their roasted peanuts that uses 84 percent less packaging weight than before. Needless to say, Kraft definitely has been saving a lot of money since their initial investment in sustainability. The company has not stopped their efforts to continue improving. Coca-Cola was able to reduce weight for all their containers across the board. For example, 8 oz glass bottles now weigh 50 percent less than before, and 12 oz aluminum cans weigh 30 percent less. These companies could be considered well-known pioneers who are putting a strong spotlight on the importance of sustainability. Of course, they are not the first companies to adopt greener practices nor have they gone as far as other companies. But they are well-known companies, which helps propel the cause.
Issues with LCAs and How to Improve Them
There is no denying the fact that LCAs have provided companies a better way of improving their eco-friendly processes. But that does not mean that the assessment program does not come with a few hiccups that need to be addressed. For one, there is simply not enough data to know how every single process is affecting the environment or human health. This is especially true regarding manufacturing processes that are relatively new and have not been tested. Sometimes it is hard to quantify environmental impacts as well since the environment is an ever-changing factor that is hard to predict. This is one of the reasons why LCAs–even though proven effective–are still relatively inconsistent. But there is definitely room for improvement in more ways than one. Most specialists are attempting to make the LCA process more inclusive as an attempt to control more variables. For example, it would be better to evaluate raw material procurement. The evaluation in this area could expose the sustainability practices of the supplier and help recycle any unused materials. Plus, exposing the supplier's sustainability practices might help bring more attention to the importance of sustainability. More suppliers, along with other contractors that work with a particular company, will see why they, too, must make the switch to sustainability. LCAs could also include a detailed evaluation of distribution as well. The assessment could reveal better distribution channels and routes that might help reduce oil consumption. Updated delivery vehicles might also be highlighted as well. These are just some of the benefits of improving LCAs, but there are several others. It is easy to see that sustainability is becoming an important factor for many corporations. And it makes sense that it is getting traction because it helps companies make money as well as save it. Not to mention the fact that the environment will ultimately gain from the new direction this country is taking regarding sustainability; it will allow the earth to heal from pollution and overconsumption. Hopefully, the changes start to affect more companies because the earth cannot afford any delays.
<urn:uuid:010d064c-6d70-49a5-bb36-89c7d3747cfb>
CC-MAIN-2023-50
https://engineeringonline.ucr.edu/blog/major-corporations-growing-interest-in-sustainable-product-design/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.957457
1,410
2.515625
3
- PROFIT AND LOSS
Profit / Loss is the money a business makes after taking into account all its expenses. Calculated as follows: Total Sales minus Total Expenses
Where Total Sales is more than Total Expenses the business has made a profit.
Where Total Sales is less than Total Expenses the business has made a loss.
- GROSS PROFIT AND NET PROFIT
There are two types of business expense –
- Those that relate to the products being sold or manufactured or to the service being provided. These are known as Cost of Sales or Cost of Goods.
- Those that are incurred in the day-to-day running of a business. These are called Overhead Expenses.
Gross Profit is calculated as follows:
- Total Sales minus Cost of Sales = Gross Profit.
Net Profit is calculated as follows:
- Gross Profit minus Overhead Expenses = Profit or Loss.
Total Sales 2500.00
Less Cost of Sales 1200.00
Gross Profit 1300.00
Less Overhead Expenses 1000.00
Net Profit 300.00
- COST OF SALES
The calculation of Cost of Sales is different for each type of business
- Retail: Cost of Sales is the amount paid for the products which are going to be sold plus the cost of getting them to your business premises.
- Manufacture: Cost of Sales is the amount paid for the raw materials used in making the product plus the amount paid for the labour which went into the making of the product.
- Services: Cost of Sales is the amount paid for the labour which went into performing the service plus the cost of any materials used plus (if applicable) the cost of getting to the customer's premises.
- SELLING PRICE
Selling price is the amount you are going to charge your customers for your products or services. It is calculated as follows: Cost of Sales + Mark Up = Selling Price
- MARK UP AND MARK UP %
This is the amount you are going to add to the cost of sales to arrive at your selling price. It has to be big enough to cover your overhead expenses plus savings. A percentage (known as the Mark Up %) is normally used to do this calculation.
Cost of Sales 150.00
Mark up % = 75% 112.50
Selling Price 262.50
- GROSS MARGIN AND GROSS MARGIN %
Gross margin represents the portion of the sales value which the business wants to keep to be put towards paying for overhead expenses. It is expressed as a percentage.
Total Sales 2500.00
Less Cost of Sales 1200.00
Gross Profit 1300.00
Using the above example, the gross margin % would be calculated as follows: Gross Profit divided by Total Sales x 100 = Gross Margin %
R1300.00 / R2500.00 x 100 = 52%
- WORKING CAPITAL CALCULATION
This is a calculation to find out how much money you need to have available to pay for business expenses plus additional product. The following information is needed:
- The total value of your monthly overhead expenses (use average costs for expenses like electricity and telephone).
- The estimated value of products that you want to buy in the next three months.
To do the calculation, add together the following amounts:
- Total Monthly Overhead Expenses x 3 months;
- Value of products you wish to buy;
- 10 – 15% extra in case of problems.
The answer is the amount of working capital you need.
- START UP CAPITAL
Start-up capital is the amount of money you need to get your business opened up and running. It has to be sufficient to pay for all or some of the following:
- Deposit on premises;
- Equipment / Shop fittings;
- Goods to stock your shop / Raw materials to use;
- Overhead expenses for at least three months.
- OVERHEAD EXPENSES
These are business expenses that have nothing to do with Cost of Sales.
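To make the formulas above concrete, here is a minimal Python sketch (not part of the original glossary) that reproduces the worked examples for gross profit, net profit, selling price and gross margin. The function names, and the 15% contingency and stock figure used for working capital, are my own illustrative choices.

```python
# Worked examples from the glossary: R2,500 sales, R1,200 cost of sales,
# R1,000 overheads, and a 75% mark-up on a R150 cost of sales.

def gross_profit(total_sales, cost_of_sales):
    return total_sales - cost_of_sales

def net_profit(total_sales, cost_of_sales, overheads):
    return gross_profit(total_sales, cost_of_sales) - overheads

def selling_price(cost_of_sales, mark_up_percent):
    return cost_of_sales * (1 + mark_up_percent / 100)

def gross_margin_percent(total_sales, cost_of_sales):
    return gross_profit(total_sales, cost_of_sales) / total_sales * 100

def working_capital(monthly_overheads, planned_stock_purchases, buffer=0.15):
    # Three months of overheads + planned purchases + a 10-15% contingency
    # (15% used here as an assumption).
    return (monthly_overheads * 3 + planned_stock_purchases) * (1 + buffer)

print(gross_profit(2500, 1200))          # 1300  -> R1,300 gross profit
print(net_profit(2500, 1200, 1000))      # 300   -> R300 net profit
print(selling_price(150, 75))            # 262.5 -> R262.50 selling price
print(gross_margin_percent(2500, 1200))  # 52.0  -> 52% gross margin
print(working_capital(5300, 20000))      # 41285.0 (the R20,000 stock figure is hypothetical)
```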
Examples of some of the most common overhead costs: Salaries (not those associated with Cost of Sales); Telephones and cell phones.
- BREAK EVEN POINT / ANALYSIS
This is a calculation to find out how much you have to sell to make enough gross profit to pay for your overhead expenses. The following information is needed:
- Your average mark up;
- The total value of your monthly overhead expenses.
Average mark-up is difficult to estimate especially for new businesses but it gets easier when you have been trading for some time.
- Calculate Gross Margin % as follows: (N.B. You can use any figure for sales in this calculation)
- Sales divided by average mark-up = Cost of Sales
- Sales less Cost of Sales = Gross Profit
- Gross Profit / Sales x 100 = Gross Margin %
- Calculate Break Even Point
- Total Monthly Overhead Costs divided by Gross Margin x 100 = Break Even Point
If the total of your monthly overhead expenses is R5300.00 and your Gross Margin % is 52% then your breakeven point is R10 192.00. In other words you need to sell R10 192.00 worth of product each month to be able to pay for all your monthly expenses (see the calculation sketch after this group of entries).
- CASH FLOW
The amount of money coming into the business from sales and other sources minus the money paid out for whatever reason.
- RETAIL, SERVICE AND MANUFACTURING
Please refer to FAQ No 10 for explanations of these terms.
- BUSINESS ENTITIES / BUSINESS STRUCTURES
There are many different types of business entities / structures available in South Africa. These are the most common.
Sole Trader
- You start trading as yourself;
- You do not have to register the business;
- It is the simplest form of business and costs very little to set up or close down.
- There is no need to have a business name unless you want to;
- You have to declare your income to SARS so keeping proper records is essential;
- You cannot have partners, only employees;
- If your business fails, your creditors can sell all your personal assets (your house, car and furniture) to get their money back.
Partnership
- Two or more people start a business together;
- They are joint owners of the business and are equally responsible for the decisions made on behalf of the business;
- You do not have to register a partnership but it is advisable to have a partnership agreement drawn up;
- Proper record keeping is essential;
- If the business fails, the people who are owed money by the partnership can sell all your personal assets (your house, car and furniture) to get their money back.
Close Corporation (also known as a cc)
- This is the easiest way to set up a formal structure however the Government is currently changing the rules that apply to close corporations.
Private and Public Companies
- These are legal entities heavily controlled by law. They are only used for medium and large businesses.
- WHOLESALERS
Wholesalers buy goods from other businesses and sell them to retailers and manufacturers. They sometimes sell direct to the public.
- CONTRACTS
A contract is a document in which all the terms and conditions of an agreement between two parties are set out. Once it is signed by the two parties, it becomes legally binding upon each one of them. Examples of contracts are lease agreements, employment contracts between employers and employees, partnership agreements.
- RENTAL AGREEMENT
This is a formal agreement between the owner of a building (the lessor) and the person or business (the lessee) renting space in the building. It sets out the monthly rental to be paid by the lessee and the length of time the lessee may occupy the space.
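As with the earlier sketch, the Break Even Point calculation above can be checked with a few lines of Python. This is illustrative rather than definitive: it reads "Sales divided by average mark-up" as dividing by one plus the mark-up percentage (since mark-up is added to cost), and it reproduces the R10 192 figure from R5 300 of monthly overheads and a 52% gross margin.

```python
# Break-even sketch based on the Break Even Point section above.

def gross_margin_percent_from_markup(mark_up_percent, sales=1000.0):
    # Any sales figure works, as the glossary notes; mark-up is applied to cost,
    # so cost of sales = sales / (1 + mark-up %).
    cost_of_sales = sales / (1 + mark_up_percent / 100)
    gross_profit = sales - cost_of_sales
    return gross_profit / sales * 100

def break_even_sales(monthly_overheads, gross_margin_pct):
    return monthly_overheads / gross_margin_pct * 100

print(round(gross_margin_percent_from_markup(75), 1))  # 42.9 -> a 75% mark-up gives roughly a 43% margin
print(round(break_even_sales(5300, 52), 2))             # 10192.31 -> roughly R10 192, matching the example
```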
- EMPLOYER
An employer is a business or person who employs other people to do work for them. Employers have to register with the South African Revenue Services for the payment of PAYE and UIF.
- EMPLOYEE
An employee is someone who works for another person or business. They can be employed part-time or full-time. Employees earning more than R5000.00 per month must register with the South African Revenue Services for the payment of income tax.
- CASUAL LABOURER
A casual labourer is someone who works for less than 24 hours a month for another person or business.
- SALARIES & WAGES
This is what people are paid for their work. Sometimes the words "salary" and "wage" are used interchangeably.
- Generally, however, the word Salary is associated with a person who has a full time position and who is paid monthly.
- Wages is usually associated with someone who is paid daily or weekly or on an hourly basis – perhaps also earning overtime for extra hours worked.
- VALUE ADDED TAX (VAT)
VAT is a tax which is collected by businesses on behalf of the Government. This is done by adding 14% to the selling price of goods. Only businesses whose sales are more than R1 000 000 (one million Rand) per year have to do this.
- INCOME TAX
Income tax is calculated on the profits made by a business or on the earnings of an individual. The rate of tax differs depending on the type of business structure. For sole traders, tax is only payable when the salary paid to the owner plus profits is more than R60 000.00.
South African Revenue Services
- SELLING ON CREDIT / SELLING ON ACCOUNT
When you allow a customer to take the goods they have chosen away from your premises with the understanding that they will pay you at a later date, you are selling on credit or on account. Before doing this you must do the following:
- Get all the customer's particulars such as ID No, place of work, home address and telephone numbers;
- Get the names and telephone numbers of three other businesses who have sold to the customer so that you can check whether the customer has a good payment history;
- Record all the details of the goods taken, the selling prices and the total amount owing;
- Get the customer to sign for the goods;
- Agree on the payment dates.
- MARKET RESEARCH
Finding out what people want and can pay for.
Promotion of goods or services
Offering more than one type of product or service.
Concentrating on one particular type of product or service.
- BUSINESS CYCLE
Fluctuations in business activity caused by the following: High season – low season; Recession – boom; High demand – low demand.
- STOCK ON HAND
The quantity and value of all the products you have not yet sold or the raw materials you have not yet used, whether you have paid for them or not.
- FIXED ASSETS
These are things that your business owns and uses in the everyday running of the business but they are not "used up" – in other words, they last for at least a year and much longer in some cases. Examples of fixed assets: Land and buildings; Equipment and machinery.
- CURRENT ASSETS
These are things that your business owns which will be used up or turned into cash within a year. Examples of current assets: Cash in the Bank; Debtors (money owed to you if you are selling on credit); Stock on Hand.
- LIABILITIES – LONG TERM AND SHORT TERM
These are amounts your business owes to other people or businesses.
- Long term liabilities are debts you will take more than a year to pay off. Example – Loans from the bank.
- Short term liabilities are debts you will be paying off in less than a year.
Example – Amounts owed to your suppliers.
- DEPRECIATION
Even though Fixed Assets last a long time, they eventually wear out so the real (market) value of an asset in the year it was bought would be much higher than its real (market) value after five years. Real or market value is the price you would get if you sold the asset. Depreciation is the amount you put into your financial records each year to reduce the value of your fixed assets to realistic values. It is treated like an expense and reduces the profit earned for the year (see the worked example at the end of this glossary).
- RECORD KEEPING
Record keeping means recording all the activities of the business, especially where money is involved. This should be done on a daily basis. Examples of figures you should record:
- Your daily sales figures;
- The amount you paid to suppliers for products or raw materials;
- If you are a manufacturer – the number of each type of product you made each day;
- If possible, the number of each type of product you sold each day;
- The amount you paid out for stationery or other expenses;
- If you are selling perishable foodstuff – the number of products you had to throw away because they were past their sell-by date.
- SUPPLY AND DEMAND
Supply is the amount (value and quantity) of a product you have available to sell. Demand is the amount of the product the customers want to buy. If you have too much or too many of a product, you will be left with some of them unsold, but if you have too few, you will have lost sales because you were unable to supply all of the customers.
- SELL BY DATE
This is the last date on which you can sell a product because after that it will begin to deteriorate. This only applies to foods and other products such as cosmetics.
- GET RICH SCHEMES
These are offers, usually to do with investing money in someone else's business, which promise big profits in a very short time. They should be avoided because they often end up with money being lost.
- LOAN SHARKS
These are people who offer to lend money at very high interest rates (often illegally high). Loans of this nature should be avoided.
- FRANCHISE
Some businesses, such as retailers and restaurants, allow other businesses to use their name and sell their products under strict conditions set out in a franchise agreement.
- PYRAMID SCHEMES
Pyramid schemes are illegal in South Africa as they are a form of fraud. They are based on a non-sustainable business model that involves the exchange of money, primarily for enrolling other people into the scheme, without a product or services being delivered.
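The Depreciation entry above does not prescribe a particular method, so the sketch below shows straight-line depreciation, one common approach, purely to make the idea concrete. The R50 000 cost and five-year useful life are hypothetical figures, not taken from the glossary.

```python
# Hypothetical example: an asset bought for R50,000, written off in equal
# yearly amounts over 5 years (straight-line depreciation). The glossary
# does not prescribe this method; it is shown only as an illustration.

def straight_line_depreciation(cost, useful_life_years):
    yearly_expense = cost / useful_life_years
    book_value = cost
    schedule = []
    for year in range(1, useful_life_years + 1):
        book_value -= yearly_expense
        schedule.append((year, yearly_expense, book_value))
    return schedule

for year, expense, book_value in straight_line_depreciation(50000, 5):
    print(f"Year {year}: expense R{expense:,.2f}, book value R{book_value:,.2f}")
# Year 1: expense R10,000.00, book value R40,000.00 ... down to R0.00 in year 5
```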
<urn:uuid:5ff37299-7c58-4c27-ba45-2f61070acba3>
CC-MAIN-2023-50
https://epicsolutions.org.za/mechanics/glossary-of-terms/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.949804
3,053
3.390625
3
I need help completing the post-lab data analysis for my report on the measurement of chloride in local water samples. Below are the class’s average volumes of 0.010M AgNO3 added for each of the three samples (tap water, lake water, and effluent). -Volume of silver nitrate used (Tap Water) = 8.22 mL -Volume of silver nitrate used (Lake Water) = 21.10 mL -Volume of silver nitrate used (Effluent Water) = 22.77 mL With that information I have to answer this question: Moles of Cl- reacted Hint: The chemical equation for the reaction of chloride with silver nitrate shows a 1:1 molar ratio between AgNO3 and Cl-: AgNO3 + Cl- -> AgCl + NO3-
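Not part of the original question, but one way to set up the calculation: since the reaction is 1:1, the moles of Cl- equal the molarity of AgNO3 multiplied by the volume used in litres. A short Python sketch using the class-average volumes (check the significant-figure rules in your lab manual):

```python
# moles Cl- = molarity of AgNO3 (mol/L) x volume of AgNO3 used (L),
# because AgNO3 + Cl- -> AgCl + NO3- is a 1:1 reaction.
MOLARITY_AGNO3 = 0.010  # mol/L

volumes_ml = {"Tap water": 8.22, "Lake water": 21.10, "Effluent": 22.77}

for sample, volume_ml in volumes_ml.items():
    moles_cl = MOLARITY_AGNO3 * (volume_ml / 1000.0)  # convert mL to L
    print(f"{sample}: {moles_cl:.2e} mol Cl-")
# Tap water: 8.22e-05 mol, Lake water: 2.11e-04 mol, Effluent: 2.28e-04 mol
```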
<urn:uuid:7fe66f96-57b4-44ff-a14d-45fc0bd890db>
CC-MAIN-2023-50
https://essaysresearch.org/below-are-the-classs-average-volumes-of-0/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.875678
174
3.203125
3
Nov 11, 2019 Zachi I Attia, Jennifer Dugan, John Maidens, Adam Rideout, Francisco Lopez-Jimenez, Peter A Noseworthy, Samuel Asirvatham, Patricia A Pellikka, Dorothy J Ladewig, Gaurav Satam, Steve Pham, Subramaniam Venkatraman, Paul Friedman and Suraj Kapa
An ECG-enabled stethoscope was used on 100 patients referred for an echocardiogram, and an AI algorithm retrained to identify low ejection fraction (EF) on a single ECG lead reliably detected low EF in standard auscultation positions—meaning that now we can easily determine using a simple stethoscope which patients require diagnostic echocardiograms. There are various ways the heart's function can be compromised—often we can't simply pick these up during a routine examination. An important modality we use as an adjunct is the echocardiogram, which is an ultrasound of the heart that gives us a black-and-white picture of what the heart looks like, and how well it is functioning. On the echocardiogram (or simply echo), we measure an interesting value called the ejection fraction (EF). The EF shows us how much blood is being pumped out by the heart in each contraction, and the normal values are usually >50%. Patients who have had a heart attack, or those in heart failure, might have a reduced EF, representing a reduced pumping capability of their heart. These are the patients we generally want to do an echo for. However, not every patient referred for an echocardiogram has abnormal features. The authors wanted to know whether the single lead ECG input we get from the ECG-steth could also be used to determine cardiac function in terms of EF. Normally, this is something we need an echo for, so determining EF just while doing routine auscultation could be a potential game-changer. An AI-based algorithm had previously been found to detect low EF (≤35 percent) with an accuracy of almost 90%, based on input from 12-lead ECGs. Now the brains behind this wanted to check whether input from a single lead ECG (like the one we get from an ECG-steth) could also reliably detect low EF. Around 100 patients who had just come for a routine echo also underwent cardiac auscultation using the ECG-steth, which was providing ECG input on a handheld device. The ECG findings were then 'fed' to an AI algorithm, which had been retrained to detect low EF on a single lead ECG, after having shown promising results for 12-lead ECG analysis. This study shows that it is now possible to reliably detect low EF using single ECG lead analysis obtained from a handheld stethoscope. Meaning that if you are on the rounds, and have an ECG-steth at hand, you can easily detect which patients might have a reduced heart function while doing standard auscultation of the heart. Rapidly being able to diagnose low EF on auscultation can help decide which patients need an emergent transthoracic echocardiogram, and can save valuable time and $$$. Just another one of the wonders AI continues to amaze us with!
<urn:uuid:979b7741-66cf-4aa6-9237-d2e505954c52>
CC-MAIN-2023-50
https://explainthispaper.com/ai-detects-reduced-heart-function-with-a-simple-stethoscope/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.932824
787
2.671875
3
When we dive into the world of mechanical engineering, especially the realm of engines, one component stands out for its pivotal role – the connecting rod. In essence, a connecting rod is a crucial element in various types of machinery, predominantly in internal combustion engines. It acts as a conduit for motion from the piston to the crankshaft, converting reciprocating motion into rotational motion, a key operation principle of these engines. Understanding the Role of a Connecting Rod A connecting rod is an integral part of the engine’s mechanism. It serves as the connecting link between the piston and the crankshaft. When the piston moves up and down, the connecting rod transforms this linear movement into a rotational motion, which drives the crankshaft. The functionality of engines heavily relies on this crucial transformation of motion. Anatomy of a Connecting Rod Connecting rods are engineered with precision to withstand immense forces and pressures. They usually have a long, cylindrical form with two flat ends. One end, typically larger, is designed to connect with the crankshaft. This end is often referred to as the ‘big end.’ The other, smaller end, known as the ‘small end,’ fastens to the piston. These rods are usually crafted from high-strength materials like steel or aluminum, each offering unique properties and advantages. Material Choices and Their Impact The material used to manufacture connecting rods significantly impacts their performance, durability, and the overall functioning of the engine. Steel rods, known for their exceptional strength, are common in high-performance applications. Conversely, aluminum rods are lighter, helping to reduce the overall weight of the engine, thus boosting efficiency. Connecting Rod Failures and Maintenance Even with their robust construction, connecting rods are not impervious to failure. Material fatigue, incorrect installation, or inadequate lubrication are common culprits that can lead to rod failure. Hence, routine inspection and maintenance are paramount for early problem detection and to avoid devastating engine failures. To encapsulate, the connecting rod is a significant component of internal combustion engines, seamlessly converting linear motion into rotational motion, thereby powering the engine. Crafted from resilient materials and designed to endure considerable stress, these rods mandate regular maintenance to ensure their optimal performance and longevity.
<urn:uuid:8e6e7621-16aa-454b-9eb7-b3df5e4e1e37>
CC-MAIN-2023-50
https://fdautoparts.com/post/5719
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.931234
471
3.59375
4
The scientific name for “nursemaid’s elbow” is subluxation of the radial head in the child’s elbow joint. As we know, three bones converge at the elbow: the humerus, the ulna and the radius. When the head of the radius is pulled out of its usual place of insertion in the elbow joint due to a traction movement, it is not able to perform its normal movement. This is why the child is left with the arm immobile along the body, or with the forearm in 90º flexion. This is a very common injury in children from 2 to 6 years of age, who are the ones who usually hold hands with an adult during walks. Its causes are diverse, for example when the child does not want to walk and you “pull” on his arm to encourage him and help him to walk. Or when the child stumbles and we give him a tug to prevent him from falling. It can also be the child himself who causes this injury after a false move or fall. Why does this injury occur? The ligaments are much looser during this infantile stage, which is why it is easier for the bones to slip between the ligaments and slip out of place during traction movements. Often the pain passes quickly and the child stops crying, but will keep the arm immobile. It is then that adults come to the emergency room when they see the immobilisation of the arm. And in fact, the diagnosis of nursemaid’s elbow is simple to make: a careful examination of the arm is all that is needed, without the need for an X-ray. Treatment should be immediate, and a qualified professional should be called in to perform the appropriate manoeuvre to put the elbow back in position. Did you know about this type of injury? Find out more about the importance of paediatric physiotherapy with us.
<urn:uuid:6b291cf7-6efb-42a7-861f-8d5c576b2c5a>
CC-MAIN-2023-50
https://fisiomovebalance.es/en/babysitters-elbow-do-you-know-how-to-recognise-it/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.965751
400
3.671875
4
Have you ever been to one of those palm readers before? Maybe got some insight and interesting predictions about your life. Well, what if we told you that your grip could show us into your future too? How can that be? Well, in several recent studies, researchers have found significant evidence to suggest your grip strength can predict several markers of physical and mental health as well as early all-cause mortality (1). Regardless of adult age! Yeah, that’s pretty scary, and all the reason to start not just building your squeezing power but total body strength too! Having a capable grip is an underrated, but essential aspect of daily tasks and physical performance. We use the power in our hands every day to do many things that you probably took for granted until now. We’ll go more into that momentarily but first, we’d like to bring your attention to a unique finger-focused grip-strengthening exercise called the plate pinch. An isometric lower arm and hand building variation holding (No pun intended) several advantages over other similar grasp strengthening methods. So let’s dig into it! Muscles Worked During The Plate Pinch What part of the body do you use to pinch something? Your fingers of course. Well, there are muscles that move, contract, and allow you to do that, found in the hand, and forearm. Let’s talk about those muscles and the others used during the plate pinch. Arms and hands There are 35 muscles that move your hand and forearm, which are divided into two categories of fibers, flexors and extensors. They run like cables from the forearms, through the wrists, and into the hands where they attach to tendons in the fingers which allows us to squeeze and open the hand. It’s important to note that there are no muscles in the fingers, just bones, tendons, ligaments, connective tissues, blood vessels and nerves. And unlike muscles, they take longer to adapt to resistance loads, and are hence more susceptible to injuries if you add too much weight too quickly. But the fingers do play a massive role in grip related activities. Building a powerful grip requires patience, and slow, steady progress. Here are brief descriptions of grip muscle functions. - Extensors – To enable a full strength grip with your fingers, the wrists need to be somewhat extended to stabilize the wrists. - Flexors – These are the muscles that squeeze and let us grip objects. Here’s an entertaining and educational video lesson on the anatomy of your grip, if you’re interested and have a few spare minutes to watch. We’ll make this one quick cause there’s not much to see here folks. But the plate pinch should start off as a deadlift which engages every muscle from those in the feet, to the legs, glutes, core, back, shoulders, and neck. Then when you’re standing in one pose and holding the plates, those mentioned muscles shift into cruise mode where they’re only engaged isometrically (Stationary, no contraction of lengthening). How To Do The Plate Pinch For this exercise, you’ll need at least one weight plate without handles that allow you to get a full finger wraparound grip. We recommend using the soft coated plates with the standard center hole for sliding it onto a bar. You’ll also need to understand basic deadlifting technique to safely pick up the weights. This is essentially maintaining a straight back, tensing your abs, and lifting with your legs, not your back. - Use proper lifting technique to pick up your weight plates, or roll them to the desired spot where you’ll perform the exercise. 
- Start from a bent-over deadlift position while balancing the weight plates on the floor in your hands. Squeeze or pinch the plates with a full grip using all fingers and your thumbs.
- Stand up, keep your shoulders back, tense your abs and arms, and simply squeeze the weights for 10-20 seconds or until your grip feels tired. From here you can determine if the weight is too light or heavy.
- When you're finished, squat down and place the weight on the floor.
- Take a minute rest and repeat.
For your convenience, here's a video demonstration of the plate pinch.
- There are two common ways to make plate pinches more challenging. Obviously you can grab heavier plates, or grab more than one plate in your hand.
- If possible, we recommend using urethane-coated bumper plates which are good for gripping and if you have to drop the weight, they're safer for the floor and you.
- Choose a weight that allows you to maintain a grip for a minimum of 10 seconds per hold.
- Always be mindful of maintaining safe lifting technique when picking up, transporting, and setting the weights down.
- Try not to move and squirm around during your sets as it could contribute to muscle imbalances due to the potential for uneven movement and muscle compensation.
- Wear lifting gloves if it helps prevent the weight from slipping.
- Avoid curling your wrists.
In This Exercise
- Target muscle group: Wrist and forearms
- Type: Grip strength and mental stamina
- Mechanics: Isolation
- Equipment: Weight plate
- Difficulty: Beginner
Benefits of The Plate Pinch
Plate pinches offer their own unique benefits, but even without that, building a stronger grip is necessary for so many reasons.
Unique Finger-Focused Grip Exercise
Unlike wrist curls and those hand gripping tools found in every variety store, plate pinches train the gripping muscles differently, and in my opinion, potentially more safely. Especially if you're letting the weight roll onto your fingers during wrist curls, which isn't good for them or your elbows. During plate pinches, the weight is not loaded directly and downward onto the finger bones; rather, you're squeezing and holding the weight up against gravity with the tips of your fingers and thumbs, activating more muscle strength and arm tension. And while there are no muscles in the fingers, there are tendons and other contents that adapt to strength training, albeit at a slower rate than muscles, as mentioned above. So while the hands, containing the residual muscles from the forearms, contribute more to gripping power, the fingers are important hooks during exercises. Oftentimes, it's not the muscles that give out when training, it's the grip. So tighten up on your clench, and lock the weight in with the strength of a coconut crab!
May Help You Live Healthier and Longer
According to an article by Harvard, one study tested and monitored the grip strength of roughly 140,000 people over a four-year span. What they found was that for every 11-pound decrease in grip strength, there was a 16 percent higher risk of death, 17 percent higher potential for heart disease, as well as a six and seven percent increased risk of stroke and heart attack, respectively. Moreover, a weak grip can increase the likelihood of developing common chronic diseases such as Type II diabetes, cancer, etc. But what was just as alarming was that the correlation between grip and the aforementioned health factors remained consistent despite different variables, like age, for instance.
In fact, researchers were convinced that grip strength is a more accurate method to assess your biological age, rather than solely looking at one’s chronological age (How old you are in years) which doesn’t always tell the whole story. As the two can be different. “People who maintain their grip strength age more slowly. They stay healthier longer and are stronger throughout their bodies”, explained geriatric medicine specialist Ardeshir Hashmi, MD. So whether you’re a younger or older adult, you should be engaging in weight-bearing activities. A weak grip is bad news for all adult age brackets. And the cherry on top, another study showed accelerated aging in the DNA of its participants with weaker gripping strength (2). Plate pinches are a low intensity, beginner friendly way to start building up your grip, while challenging the arms and engaging other muscles, which in turn can help build and maintain strength. Strengthen Mental Grit Too It’s not a secret that life can be extremely challenging. And oftentimes the answer is to do things that are challenging but rewarding. This can help us not just to be mentally stronger, but it’s one way to discipline ourselves, which carries over into other endeavors. Building mental fortitude gives you the strength to keep going, snap out of a rut, and take on those difficult situations. Yes, it may seem like a motivational speech, however, it’s one of the potential benefits of this grip training variation. Grip Strength is a Basic Human Necessity From opening a water bottle or jar to turning on the water hose, buttoning your pants, carrying items, and engaging in basic body strengthening activities, you’re pretty limited without a capable grip. We’re not saying you need to train your grip like a maniac to continue performing these basic tasks. But maintaining strength is important, and it does correlate to well-being, injury prevention, and according to research how long you’ll live. Plus, plate pinches are a form of total body isometric resistance training which is one of the two most common muscle and strength building techniques. Common Mistakes During The Plate Pinch Without a proper understanding of what we’re trying to accomplish with the plate pinch, you can quickly lose your grip (Pun intended) on its potential, and possibly cause more harm than benefits. Let’s see what those mistakes look like… Using Plates with Handles, aka Cheating So while you could use those weight plates with handles for easy gripping and it’s one way to fortify your hand strength, that’s not the point. We don’t want the fingers wrapped around but rather pinching the weights for a more finger-focused variation. Not Deadlifting The Weight Up You should start with the bottom of the weight plate on the floor and the top in your grip while in a deadlift position. Never just bend over and pick the weight up. Why? Because you should always practice safe lifting habits, and be intentional. Use your legs and the power of proper deadlift technique to pick up the weight. Slipping on Your Form While it’s true you’re just holding the plates without movement, your posture and body need to be appropriate, to support the weight without stressing your shoulder joints, traps, spine, etc. Simply follow the plate pinch instructions included above to do it right! Gripping and Slipping If the plates are constantly slipping out of your hands, there’s no viable way to do this exercise asymmetrically between your left and right side. 
One hand may be stronger than the other, but you’re just creating a mess if your setup is not appropriate to train both sides equally. Either use the proper plates with non-slip coating, lower the weights, or get a pair of gloves to prevent your hands from slipping. Otherwise, you’ll be fighting to keep the weights in your grip with fingers slipping, and your body compensating, creating imbalances over time. 7 Variations and Alternatives Of The Plate Pinch You’d be surprised at how many ways you can hold a weight plate, or the techniques you can do. So we’ll show you some of those plus the best alternative methods to build a crushing grip. Plate Pinch and Carry Like a farmer’s walk but with a half grip, the plate pinch and carry is what its name suggests, you simply walk with plates in hand. What is the benefit of a walking plate pinch? Well it’s quite simple, when you introduce movement, the grip muscles have to work harder to hold the plates. More effort goes into stabilizing the arms when you’re walking, plus you have to focus more with this variation. You’ll also learn to develop more functional grip strength in a more unstable environment (Walking motion). Plate Flip and Catch We wanted to toss (Pun intended) this one in the mix too, because it is technically a plate pinching variation but with a crazy side. You’ll definitely need to use lighter soft coated plates and appropriate flooring (Or head outdoors on grass) as you’ll be letting go of the plate entirely before catching it in mid air over and over again. The key is to time your flips perfectly so that you can grab it in your hand without dropping it. The plate flip and catch adds a more dynamic element to grip training that also challenges your coordination and several functional abilities, while improving your reflexes and certain mental aspects of performance. - Make sure to use your legs and not your back when catching and flipping the plate. - Keep your core muscles tense to protect your back and create efficient transfer during the plate toss. - Don’t wear open toed shoes, unless you want some real problems. Check out a demonstration of the plate flip and catch via the following video. The farmer’s carry is a popular Strongman workout and competition event where the athlete walks for distance while holding two long and heavy implements with bar style handles attached. It’s one of the most recommended grip building techniques, and you can use any type of weights, whether dumbbells, kettlebells, a trap bar, etc. I wrote a full guide on farmer’s walks right here. Thick Bar Exercises Regardless of the exercise, you can use a fat or thick bar that fills up more of your grip. How will that increase your squeezing strength? For one you’ll have to work a little harder to maintain the bar in your hand, but you’ll need to shift your mental motors into a higher gear as well. These things have been around forever and we promise many of you were obsessed with them at one point (Guilty as charged). Sold in most variety stores, they’re no gimmick speaking from personal testing, and what I found from taking on the hardest metal grippers was they’re a fun way to challenge friends and family to see who has a stronger grip. Boys will be boys, right?… Well, you can find hand squeezers in just about every resistance level for anyone from the absolute beginner, to the elder person, or the most extreme beasts that live among us! You can also choose ones covered in foam, made of hard plastic, or straight up aluminum and metal. 
But one issue with these grippers is that people often use them as the only method of strengthening their clamping power. You should be combining techniques for the best all-around results. Also, if you’re using the heaviest grippers, don’t use them every day, as they can place a lot of stress on the fingers and hands. Here’s our write-up on these useful gadgets that includes how to use them and all their benefits.
Finger Extensions
Most gripping moves focus on clamping down or squeezing, but what about the opposite? After all, the hand both closes and opens, and the two actions use different muscles. In the plate pinch, your objective is to squeeze the weight so that it doesn’t fall from your grip; with finger extensions, your goal is to open your hand against the resistance of a rubber band. Related: How to build a stronger grip at home.
Wrist Rollers
A good old-fashioned grip-building method, wrist rollers can be purchased or easily engineered from items you already have lying around the house, like thick plastic tubing, a drill to make a hole, a long piece of durable strap-like material, and a weighted object. There are even simpler ways to construct a wrist roller at home, which we explain in detail in this wrist roller training guide.
How long should I hold the weights for to build a stronger grip?
Don’t focus so much on the time; instead, put your effort into using the appropriate resistance while being able to grip the plates evenly without them slipping. As we mentioned, you should opt for bumper-style plates or wear gloves. Just don’t let the weights slide toward the ground during your sets. Then, once you’ve got the setup and proper technique down pat, you can start timing yourself against the weight loads and progressing in weight.
Where does the plate pinch rank among the best grip strengthening exercises?
While it’s true your forearms and hands contain the gripping muscles, the unique open-grip style of plate pinches can help to strengthen the tendons and other tissues in your fingers for better gripping power, since the fingers play an important role in supporting weight-bearing actions. This is different from, say, wrist curls or the farmer’s walk, where the palms support a lot of the weight. Overall, plate pinches are useful in their own way: not so much for building massive crushing power, but for emphasizing the role of your fingers and hands in gripping activities.
Well, if that information wasn’t a wake-up call, then we don’t know what would constitute one. Just how vital grip strength is to health and well-being isn’t something everyone knows by default. After all, you could just be weak, right? Well, don’t rely on that way of thinking. Hopefully you now have a newfound reason to start incorporating more grip training into your workouts, and the plate pinch is definitely worthy of your attention. With all the compelling research out there, yesterday was the best time to start training, and the second-best time is right now. For the sake of your long-term health, well-being, and living longer!
References:
1. Bohannon RW. Grip Strength: An Indispensable Biomarker For Older Adults. Clin Interv Aging. 2019 Oct 1;14:1681-1691. doi: 10.2147/CIA.S194543. PMID: 31631989; PMCID: PMC6778477.
2. Grip strength is inversely associated with DNA methylation age acceleration. Journal of Cachexia, Sarcopenia and Muscle. 2023;14:108–115.
<urn:uuid:53b0d0e5-5166-4074-98aa-576cc4833be9>
CC-MAIN-2023-50
https://fitnessvolt.com/plate-pinch-guide/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.926722
3,861
3.140625
3
Objective: To investigate the association of breast feeding with height and body mass index in childhood and adulthood.
Design: Historical cohort study, based on long term follow up of the Carnegie (Boyd-Orr) survey of diet and health in pre-war Britain (1937–1939).
Setting: Sixteen urban and rural districts in Britain.
Subjects: A total of 4999 children from 1352 families were surveyed in 1937–1939. Information on infant feeding and childhood anthropometry was available for 2995 subjects.
Main outcome measures: Mean differences in childhood and adult anthropometry between breast and bottle fed subjects.
Results: Breast feeding was associated with the survey district, greater household income, and food expenditure, but not with number of children in the household, birth order, or social class. In childhood, breast fed subjects were significantly taller than bottle fed subjects after controlling for socioeconomic variables. The mean height difference among boys was 0.20 standard deviation (SD) (95% confidence interval (CI) 0.07 to 0.32), and among girls it was 0.14 SD (95% CI 0.02 to 0.27). Leg length, but not trunk length, was the component of height associated with breast feeding. In males, breast feeding was associated with greater adult height (difference: 0.34 SD, 95% CI 0.13 to 0.55); of the two components of height, leg length (0.26 SD, 95% CI 0.02 to 0.50) was more strongly related to breast feeding than trunk length (0.16 SD, 95% CI −0.04 to 0.35). Height and leg length differences were in the same direction but smaller among adult females. There was no association between breast feeding and body mass index in childhood or adulthood.
Conclusions: Compared with bottle fed infants, infants breast fed in the 1920s and 1930s were taller in childhood and adulthood. As stature is associated with health and life expectancy, the possible long term impact of infant feeding on adult mortality patterns merits further investigation.
- leg length
- breast feeding
- Boyd-Orr cohort
The benefits of breast feeding during infancy are well known, but the effects on childhood and adult health are less clear.1 Cohort studies suggest that breast feeding is associated with reduced adult serum cholesterol concentrations,2,3 ischaemic heart disease mortality,4 non-insulin dependent diabetes mellitus,5 and Helicobacter pylori infection,6 although protective effects have not been found in all studies,7 and in some animal models breast feeding is associated with atherogenic cholesterol profiles.8 However, these studies are difficult to interpret for several reasons. Firstly, those who are breast fed may be different from those who are bottle fed, for example with respect to socioeconomic confounding factors related to future health.9 The potential for socioeconomic confounding, and its direction, may be different at different time periods because socioeconomic patterns of breast feeding have varied throughout the 20th century.10,11 Secondly, previous studies of breast feeding and later health have generally adjusted only for one marker of socioeconomic conditions, usually adulthood social class, which may not accurately reflect the underlying confounding exposures of interest.
As well as reflecting genetic factors, height is an accepted marker for childhood diet and health throughout the growing years.12 Childhood and adult stature is in turn related to future cause specific morbidity and mortality, providing evidence for an influence of early life factors on later health.13–15 Leg length appears to be the component of childhood height most sensitive to early adverse dietary and economic exposures.16–19 Leg length is also the component of height most strongly related to cause specific mortality.13 However, the specific dietary and environmental exposures associated with height earlier this century are only starting to be elucidated.16 In a recent study, increased duration of breast feeding was associated with reduced prevalence of childhood overweight and obesity.20 However, this association may reflect incomplete adjustment for social factors associated with breast feeding today,21 which are currently more pronounced than they were earlier this century.22 The Carnegie (Boyd-Orr) survey of diet and health in pre-war Britain was a large cross sectional survey involving 1352 families. As part of this survey, data on breast feeding incidence and duration and a variety of socioeconomic and anthropometric variables were collected.23,24 The aim of this analysis was to investigate the association between breast feeding in the 1930s and (a) a wide range of family level socioeconomic variables, and (b) childhood and adult height, the components of height (leg length and trunk length), and body mass index (BMI).
MATERIALS AND METHODS
The Carnegie survey
The Carnegie survey was a cross sectional study of diet, social circumstances, and health carried out in 16 urban and rural districts in Britain between 1937 and 1939. The sampling frame and examination methods used have been described in detail elsewhere.23,24 Briefly, 1352 families were surveyed, and the cohort is based on the records of 4999 children from these families who were aged between zero and 19 years at the time of the survey.23 The number of subjects has increased slightly since earlier publications as a result of further searches of archived records and contacts with surviving study members. Families were generally identified from the more deprived localities through contacts made by local health workers, and two thirds consented to participation.24 The survey records have been retrieved from archives at the Rowett Research Institute, Aberdeen, and individuals have been traced using the National Health Service Central Register. Cross sectional data on childhood anthropometry and infant feeding methods were recorded for 2995 subjects (60%) from 1072 families. The reduced number available for analysis is mainly because in two centres medical examinations were not performed. Details of the method of infant feeding and duration of breast feeding were obtained from the mother at the time of the survey for each child aged 0–19 years. Thus breast feeding occurred between 1918 and 1939, depending on the age of the child at measurement. We coded subjects as breast fed (at least two weeks) or bottle fed; for breast fed infants, we coded breast feeding duration as: two weeks to < two months; ≥ two to < six months; ≥ six to < 12 months; ≥ 12 months; and unknown. The six month cut off reflects the contemporaneously recommended weaning age.25
Per capita weekly household expenditure on food, income, and household size were obtained from the original survey material.23 Birth order was based on the child’s position among the children living in the household at the time of the survey, and social class of the head of the household was assigned using the Registrar General’s 1931 classification.24 The techniques used to measure standing height, leg length, and body weight have previously been described.23 Standing height was measured to the nearest millimetre with a portable measuring stand. Leg length was measured with a steel tape measure and recorded as the distance from the ground to the summit of the iliac crest. Trunk length was calculated by subtracting leg length from overall height. Using these data internally, age and sex standardised z scores for height, leg length, trunk length, and BMI were computed using polynomial regression techniques.13,16 The z scores for BMI were based on the reciprocal transformation of BMI.26 As height measurement in children under 2 years of age tends to be unreliable and because of a large number of missing data in children of this age and in the 15 and over age band, z scores were calculated only for children aged between 2 and 14 years 9 months as in previous reports13—that is, children breast fed between 1922 and 1937. Internally derived standards have been used because no acceptable cross sectional reference standards exist for childrens’ heights and weights in the 1930s.13 For children 8 years of age and under, each unit of z score for childhood height and leg length is approximately equivalent to 4–6 cm and 3–4 cm respectively, and for children over 8 years of age each unit of z score for childhood height and leg length is approximately equivalent to 6–9 cm and 4–6 cm respectively. Of those aged between 2 years and 14 years 9 months, information on both breast feeding status and height, trunk length, leg length, and BMI was available on 2455, 2366, 2369, and 2444 subjects respectively. For 416 of the subjects with breast feeding data, there were data on measured birth weight in pounds and ounces.27 These data were used to assess whether birth weight was associated with breast feeding in this subsample. Follow up self reported anthropometry Between 1997 and 1998, all 3182 surviving members of the Boyd-Orr cohort traced at the National Health Service Central Register were sent health and lifestyle questionnaires.28 As part of the survey, subjects were asked to report their current height (in feet and inches), inside leg measurement (in inches), and weight in light clothing (in stones and pounds). After two reminders, 1647 completed questionnaires were returned (52% response). From these, information on adult height, trunk length, leg length, and BMI was available for 1029, 746, 761, and 999 respectively of the subjects with childhood breast feeding data. In a validation study on a subsample (294) of these subjects, correlations between self report and measured height and weight were over 0.90 and for leg length were over 0.70.28 Shorter people and older people tended to over-report their height, and the overweight under-reported their weight.28 Surviving members of the Boyd-Orr cohort were also asked about parental height. Data were available on maternal and paternal heights for 787 offspring with breast feeding data. 
The mean maternal and paternal heights reported by individuals in each family were used to assess whether breast feeding was associated with maternal or paternal height in this subsample. All analyses were performed using Stata release 7.0.29 The age, sex, and geographical (survey district) distributions of breast feeding status and breast feeding duration were compared using unpaired t tests, the Wilcoxon rank sum test, and the χ2 test for heterogeneity as appropriate. Mean birth weights and parental heights in never and ever breast fed children were compared using unpaired t tests. Associations between breast feeding and socioeconomic factors were investigated using logistic regression. Clustering effects may have arisen because several cohort members belonged to the same families and therefore shared genetic influences on anthropometry, as well as childhood socioeconomic conditions and propensity to being breast fed. To account for this, we estimated robust standard errors30 adjusted for clustering at the family level for the association between breast feeding and socioeconomic variables. Separate models with additional adjustment for age and survey district were also fitted. Associations between breast feeding and anthropometry (age and sex standardised z scores for height, leg length, trunk length, and BMI) were examined by random effects linear regression modelling using the maximum likelihood estimator. This allowed for between-family differences in mean height. Separate models with additional adjustment for socioeconomic factors (social class, number of children, birth order, and income category, all entered as categorical variables) and survey district were also fitted. All effect estimates were calculated on the same subset of subjects with complete data for the index dependent variable and all covariates adjusted for, in order to ensure an unbiased assessment of confounding. Likelihood ratio tests were used to assess whether the association of breast feeding with anthropometry differed with respect to age, sex, and income group (interaction). To assess effect modification by age on childhood anthropometry, we investigated associations between breast feeding and anthropometry in two age bands: children aged 8 years or less and those aged over 8 years. Eight years of age was chosen so that there were approximately equal numbers of survey members in each group and to ensure that all children in the younger age group were prepubertal. To further investigate the potential for confounding by shared environmental exposures other than breast feeding, the association between breast feeding and stature was also estimated in the subset of families who had at least one pair of children discordant for breast feeding status. Factors associated with breast feeding A total of 2178 (72.7%) subjects were breast fed for at least two weeks (4.3% of subjects were breast fed but for less than two weeks). There were no differences in the proportion of males (72.8%) and females (72.6%) who were breast fed. The median duration of breast feeding was nine months and was the same in males and females (p = 0.46). The mean (SD) ages of breast fed (6.8 (4.0) years) and bottle fed (6.9 (4.0) years) children were similar at the time they were measured in childhood. Breast feeding rates differed substantially in different survey locations, varying from 59.0% to 85.7% (p < 0.0001). 
Breast feeding was not associated with the number of children in the household, the child’s birth order, or the social class of the head of the household (table 1⇓). Families with a weekly per capita income of over 20 shillings were more likely to breast feed (80.4%) than families with an income of less than 10 shillings (72.3%). The adjusted odds ratio (OR) comparing the highest with the lowest income groups was 1.79 (95% confidence interval (CI) 1.04 to 3.08). There was evidence that expenditure on food was associated with breast feeding after adjustment for survey district (OR for trend: 1.14 (95% CI 0.99 to 1.31)), but this was of borderline significance. For the children with birthweight data (n = 416), there was no difference in the mean birth weights of breast fed (7 lbs 8.3 oz) and bottle fed (7 lbs 9.7 oz) children (difference 1.4 oz (95% CI −3.3 to 6.1); p = 0.56). For those with data on parental height (n = 787), there were no differences between breast and bottle fed children in mean maternal (161.2 v 160.5 cm; p = 0.24) and paternal (172.1 v 172.3 cm; p = 0.75) heights. When duration of breast feeding was examined in relation to socioeconomic factors (table 2⇓), there was a suggestion that although children from affluent families were more likely to be breast fed, breast feeding was continued for shorter time periods. Breast feeding and childhood growth Table 3⇓ shows the sex specific mean differences between breast and bottle fed subjects in relation to childhood stature and BMI. There was no association between breast feeding and BMI in childhood. However, breast fed subjects were significantly taller than bottle fed subjects; in fully adjusted models, breast fed boys were on average 0.20 SD (95% CI 0.07 to 0.32) taller than bottle fed boys; breast fed girls were on average 0.14 SD (95% CI 0.02 to 0.27) taller than bottle fed girls. When trunk length and leg length were examined separately, leg length was the component of childhood height most strongly related to breast feeding. The association between breast feeding and leg length appeared to be stronger in boys (0.23 SD; 95% CI 0.10 to 0.36) than girls (0.13 SD; 95% CI 0.01 to 0.26). However, there was no statistical evidence of interaction between sex and breast feeding on childhood leg length (p = 0.28), height (p = 0.83), trunk length (0.48), or BMI (p = 0.11). There was also no statistical evidence of interaction between family income and breast feeding on any of the growth variables. The associations between breast feeding and childhood height and leg length were also observed when the analysis was restricted to families in whom there were children with discordant breast feeding history (height: n = 191 families and 615 individuals; mean height difference (boys and girls), breast v bottle fed: 0.10 SD (95% CI −0.01 to 0.21; p = 0.075); leg length: n = 180 families and 562 individuals; mean leg length difference: 0.15 SD (95% CI 0.03 to 0.26; p = 0.012)). There was no association between breast feeding and trunk length in families in whom there were children with discordant breast feeding history (mean trunk length difference: −0.05 SD (95% CI −0.18 to 0.08; p = 0.45)). The median duration of breast feeding among those siblings who were breast fed in this subsample was nine months (equivalent to the whole sample). There was evidence of interaction between age and breast feeding on height (p = 0.014), leg length (p = 0.006), and trunk length (p = 0.069), but not on BMI (p = 0.18). 
Table 4⇓ presents breast feeding/growth associations in children aged 8 years or less and in those aged over 8 years (the pre-specified age categories). As there was no evidence of interaction between sex and breast feeding on growth, results are presented for boys and girls combined. The positive association of breast feeding with height was greater in those aged over 8 than in those under 8. Therefore, age specific associations of breast feeding with anthropometry were investigated in more detail using two year age bands (fig 1⇓). There was some evidence of a linear increase in the effect of breast feeding on height (p for trend: 0.034) and leg length (p for trend: 0.006), but not trunk length (p for trend: 0.14) with increasing age group. Figure 2⇓ shows the association between duration of breast feeding and childhood anthropometry. There was no evidence of an increase in BMI for each unit increase in duration of breast feeding category compared with bottle feeding as the baseline group (0.02 SD; 95% CI −0.01 to 0.05; p = 0.23). Although there was statistical evidence of an increase in height (0.06 SD; 95% CI 0.03 to 0.09; p < 0.0001) and leg length (0.06 SD; 95% CI 0.03 to 0.09; p < 0.0001) for each unit increase in duration of breast feeding category, there was no clear evidence of a dose response after two months. Breast feeding and adult size Table 5⇓ shows the associations between breast feeding and self reported adult anthropometry. There was no relation between breast feeding and adult BMI, but among males the associations of breast feeding with adult height and leg length seen in children persisted into adulthood in the fully adjusted models (mean height difference: 2.49 cm (95% CI 0.94 to 4.03); mean leg length difference: 1.25 cm (95% CI 0.09 to 2.40)). Expressed in terms of z scores, the fully adjusted mean height and leg length differences were 0.34 SD (95% CI 0.13 to 0.55; p = 0.002) and 0.26 SD (95% CI 0.02 to 0.50; p = 0.034) respectively. There were also positive associations of breast feeding with height and leg length among adult females, but these were smaller than those observed in adult males and of borderline significance. There was no association between breast feeding and adult height when the analysis was restricted to families in whom there were children with discordant breast feeding history (n = 70 families and 178 individuals; mean height difference: −0.81 cm (95% CI −3.55 to 1.93; p = 0.56), although the sample available for this analysis was small. As far as we are aware, this is the first study to investigate the long term influences of breast feeding on both height and weight beyond mid childhood and into adulthood. We found that, in children examined in the late 1930s in the Carnegie (Boyd Orr) survey, breast feeding in the 1920s and 1930s was associated with greater childhood and adult stature in both males and females. Leg length was the component of childhood height that was most strongly related to breast feeding, perhaps reflecting the fact that leg length is the component of height most sensitive to prepubertal influences on growth.16 If the socioeconomic covariates used to adjust for social and economic conditions in the 1930s were insufficiently detailed, or individuals were misclassified, then we may not have completely controlled for socioeconomic confounding (residual confounding).31 However, residual confounding is unlikely because the observed associations were not attenuated after controlling for a range of socioeconomic variables. 
We found no differences in mean parental height between breast fed and bottle fed infants in subsamples of the surveyed children. We also observed an association between breast feeding and childhood height when the analysis was restricted to within family height differences in relation to within family differences in breast feeding in an attempt to control for shared environmental and economic exposures influencing breast feeding and stature. There was no longer an association between breast feeding and later adult height in families with both breast and bottle fed infants (n = 70), although the wide confidence interval highlights the imprecision of this effect estimate. Adult heights were self reported, resulting in some misclassification.28 However, for this source of misclassification to explain the observed associations of breast feeding with adult height and leg length would require elderly adults who were breast fed as infants to over-report their height and leg length compared with elderly adults who were bottle fed. We believe that any misclassification bias is more likely to be non-differential, biasing the results towards the null value of no association rather than explaining the observed results. It is also possible that sickly neonates were less likely to have been breast fed,10 and early poor health rather than breast feeding may have affected their growth.32 Furthermore, within-family differences in feeding patterns may reflect differences between family members in infant health. In observational studies, it is difficult to control for such effects; however, we found no differences in mean birth weights between breast fed and bottle fed infants in a subsample of the surveyed children. Biased recall of breast feeding is unlikely to explain our results because the recall period was relatively short in our study and several reports show close agreement between long term recall and postnatal records.33 Comparison with other studies The constitution of artificial feeds has changed over the course of the last century. In the past, infants who were artificially fed were given fresh cows milk with added sugar, patent preparations of dried cows milk, machine skimmed condensed milk, or patent foods made from wheatflour or arrowroot.10,25 It is difficult to assess whether such differences would be observed between infants breast fed and formula fed today. The quality of alternatives to breast feeding in the 1920s and 1930s may underlie the positive associations observed between breast feeding and linear growth, raising uncertainty about the contemporary relevance of our results. 
The observed associations are, however, suggestive of an important role of infant feeding on later growth, which may be of particular relevance in developing countries where large numbers of people do not have access to modern infant formula milk, clean water, sewage disposal, or fuel.34,35 In developed countries, the association between breast feeding and linear growth suggests that infant nutrition may be a biologically relevant exposure underlying the height and leg length mortality associations observed in cohorts born in the 1920s and 1930s.13,36,37 In agreement with several previous studies, we found no association between breast feeding and childhood BMI,3 although a more recent study showed a protective association of breast feeding on overweight and obesity.20 The differences between the recent study and our results may be explained by differences in the alternatives to breast feeding between 1920 and 1990, or incomplete adjustment for social and economic differences in breast feeding,22 which are more pronounced now than in the 1920s and 1930s.25 The absence of an association between breast feeding and BMI in subjects at age 60–80 years does not exclude the possibility of an association at younger ages. Reports on the influence of breast feeding on childhood height in more recent birth cohorts provide conflicting results. Some show no association between breast feeding and height up to 7 years of age, but unlike this paper there are no studies in later childhood or adulthood.38,39 Another study found that breast fed children were significantly taller than formula fed children at age 7 years, but that the association disappeared in a multivariable model. However, this model included skeletal maturity as a covariate, which may have had the effect of over-controlling for a factor on the causal pathway—for example, if breast feeding influences maturity.40 The breast feeding rates observed in this cohort were relatively high (ever: 72.7%; over six months: 41.9%) compared with more recent breast feeding patterns (ever: 66%; over six months: 22%).21 However, our breast feeding rates mirror those derived from contemporaneously published literature.41 In line with findings from studies conducted in the 1930s and 1940s, we found a relatively weak relation between markers of economic status and a history of ever having been breast fed.42 There are a number of possible reasons for the association between breast feeding and later height. These include (a) setting of the growth trajectory through optimum nutrition32; (b) protection against enteric or respiratory infections which were more common in those who were artificially fed10,25; (c) the potential psychological effects of breast feeding and maternal bonding on future upbringing43; (d) the establishment of taste thresholds and behavioural patterns of eating that influence later growth.25 It is not possible within this dataset to assess which of these is more likely. We found no evidence of interaction between breast feeding and family income on height, suggesting that the impact of infant feeding method is not greater among those who are poorer (who would be expected to be exposed to adverse conditions, predisposing for example to respiratory and enteric infections). The reasons why the association of breast feeding with height was more pronounced after mid-childhood are unclear. 
If breast feeding increased stature by improved nutrition alone, we would have expected breast feeding associations with height to be more pronounced in younger children. There are a number of possible explanations for this finding. Firstly, it may have been a chance result. Secondly, as the surveyed children included in the analysis were aged 2–14 years, the age interaction could reflect changes in the alternatives to breast feeding among different birth cohorts. However, the switch to formula type milk consisting of full cream powders did not begin until the 1940s, after the survey was completed.25 Finally, it has been suggested that physiological or metabolic “programming” occurs at critical periods in early development and determines events in later life.44 Breast feeding influences neonatal levels of various hormones affecting growth,45,46 including insulin-like growth factors.47 Evidence for a “programming” effect of breast feeding comes from studies suggesting that early nutrition influences expression of growth hormone receptor on the growth plates of long bones in late infancy,48 and that the timing of menarche is related to environmental factors operating near birth.49 The pattern of human linear growth can be divided into three phases, each with a specific hormone profile: infancy, childhood, and puberty.50 The possible mechanisms underlying our observation of an association between breast feeding and height above 8 years of age are speculative, but may have plausibly arisen in either of two ways. Firstly, breast feeding may influence growth tempo—that is, the rate at which a child matures—throughout childhood such that breast fed children are more advanced into puberty and hence taller.51 The suggestion of a linear increase in the effect of breast feeding on height and leg length throughout childhood (fig 2⇑) lends support to this possibility. Secondly, breast feeding may differentially influence growth at puberty compared with earlier life. If growth tempo were the whole explanation, differences in height between breast fed and bottle fed children would disappear after puberty and adult heights would be similar. However, we also found an association between breast feeding and adult height, suggesting an effect of breast feeding on height over and above the rate at which a child matures. We have shown that stature was related to breast feeding in children surveyed in the 1930s and that this association persisted into adulthood. We found no evidence that breast feeding in the 1920s and 1930s influenced childhood or adult BMI. Longer term studies have shown that breast feeding is associated with adult mortality and morbidity, but the mechanisms linking breast feeding to later health are unclear.1 One potential mechanism may lie on the same pathway as that linking height to future mortality.13 We thank the following: Professor Philip James, director of The Rowett Research Institute, for the use of the archive, and in particular Walter Duncan, honorary archivist to the Rowett; the staff at the NHS Central Register at Southport and Edinburgh; Sara Bright for data entry; Jonathan Sterne for statistical advice; Mark Taylor for entering breast feeding data; and Professor John Pemberton for information on the conduct of the original survey. We also acknowledge all the research workers who participated in the original survey in 1937–1939. RMM is supported by the Wellcome Trust.
<urn:uuid:d9959c98-0184-4e77-98b7-fc90d92ac3cc>
CC-MAIN-2023-50
https://fn.bmj.com/content/87/3/F193
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.96379
6,119
3
3
A microscope is an optical instrument consisting of lenses for making enlarged or magnified images of minute objects. Robert Hooke (1635–1703) made and used one of the first compound microscopes. The basic difference between a simple and a compound microscope is that a simple microscope uses only one lens, whereas a compound microscope uses more than one lens to further magnify the object. The ability of a microscope to distinguish two adjacent objects as separate and distinct images, rather than as a single blurred image, is called its resolving power. Theoretically, the size of the image can be increased by adding lenses, but magnification is only useful if the enlarged image is clearly visible, so effective magnification depends on the resolving power of the microscope. A microscope’s resolving power depends on the wavelength of the illuminating beam and the optical quality of the lenses; a shorter wavelength gives better resolution. The resolution, or shortest resolvable distance (d), can be calculated using the formula d = λ/2nsinϴ, where d = resolution, λ = wavelength of light, n = refractive index and ϴ = half the angle of the cone of light accepted by the objective. Note: Oil is used with oil-immersion lenses because oil has a higher refractive index than air, which gives the lens a higher numerical aperture, and this greatly improves resolution.
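A rough worked example may help here; the numbers are illustrative assumptions, not values given in the original text. Take green light with λ = 550 nm and an oil-immersion objective whose numerical aperture (n sinϴ) is about 1.4:
d = λ / (2 n sinϴ) = 550 nm / (2 × 1.4) ≈ 196 nm ≈ 0.2 µm
This is the commonly quoted resolution limit of a conventional light microscope. With a dry objective, where the numerical aperture cannot exceed about 1.0 (the refractive index of air), the smallest resolvable distance is correspondingly larger, which is why immersion oil improves resolution.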
<urn:uuid:65867302-219e-4a38-9e9e-1aa1d6a7d62e>
CC-MAIN-2023-50
https://foodtechnotes.com/tag/pcm-microscope/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.918446
248
3.6875
4
(Message 1) “If I was God, I could do better!” You may have uttered similar words or had the same thoughts. Why is it that God doesn’t always seem good, or fair, or even interested in us? Join us as we unpack some good thinking and some bad thinking regarding God. #searchingforabettergod
Psalm 145:17-19 “The Lord is righteous in all his ways and faithful in all he does. The Lord is near to all who call on him, to all who call on him in truth. He fulfills the desires of those who fear him; he hears their cry and saves them.”
Start talking. Find a conversation starter for your group.
Share a time in your life when you were in a situation, job, or relationship in which someone in authority over you didn’t live up to a standard that you thought was deserving of your respect.
Do you recall times in your life when you witnessed a mentoring situation (parent/teacher/friend) and said to yourself, “I will never do ____,” and then, if you’re being honest, you turned around and did some of the same things when you were the mentor? Share an example.
Start thinking and sharing. Ask a question to get your group thinking and to create openness.
Read Habakkuk 1:2-4. How do the events of Habakkuk’s day compare with what’s happening in the world today? Have you ever thought you could do better than God or thought to yourself, “God, how could you let this happen?” Share with the group.
You’re in good company if you have questioned God. The book of Psalms is filled with times that David questions God, but he always ends with trusting him. Read and discuss the following pairs of verses: Psalm 10:1 and Psalm 10:17; Psalm 43:1-2 and Psalm 43:5; Psalm 13:1-2 and Psalm 13:5-6; Psalm 42:9-10 and Psalm 42:11. Share others you may know.
Troy said, “There’s something precious and powerful that happens in our lives when our beginning place is ‘God, I respect and honor you.’” Read and discuss the following verses: Psalm 111:10; Proverbs 1:7; Proverbs 14:27; and Proverbs 15:33.
Read and discuss Jeremiah 29:13, Numbers 23:19, and Isaiah 14:24. Think of your past week of interacting with God. Are you at a place where you can say to God, “I don’t understand, here’s how I feel, and I’m going to trust you, God, no matter what”? Share times when you trusted God even though you weren’t sure of or didn’t agree with the outcome.
Start doing. Commit to a step and live it out this week.
Read Philippians 4:6-7, Proverbs 3:5-6, Psalm 56:3-4, and Psalm 145:17-19. What are Paul, Solomon, and David telling us to do? Discuss some things that will help us make a conscious effort to cry out to God and then trust Him with the outcome. Continue to find ways to encourage each other to memorize verses.
Start praying. Be bold and pray with power.
This week, and with your group now, pray boldly. Expect answers and praise God in the waiting. Praise Him and trust Him even if the answers aren’t what you wanted.
<urn:uuid:1c026e06-ef6a-47d0-b60e-375c896bd056>
CC-MAIN-2023-50
https://generationschurch.com/message/if-i-was-god-i-could-do-better/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.953615
785
2.875
3
Kosovo Map and Satellite Image
The Republic of Kosovo is located in southeastern Europe. Kosovo is bordered by Albania and Montenegro to the west, Serbia to the north and east, and the Republic of North Macedonia to the south. Kosovo declared its independence from Serbia in 2008. Since then, Kosovo has been recognized as a sovereign state by over 100 member states of the United Nations, including the United States.
Kosovo Bordering Countries: Albania, Montenegro, Republic of North Macedonia, Serbia
Regional Maps: Map of Europe, World Map
Where is Kosovo?
Kosovo Satellite Image
Kosovo Cities: Besiane, Brod, Decan, Dragash, Ferizaj, Fushe Kosova, Gjakova, Gjilan, Gjurakoc, Gllogoc, Istog, Janjeve, Junik, Kacanik, Kamenica, Klina, Leposavic, Lipjan, Malisheva, Mitrovica, Novo Brdo, Obilic, Peja, Pristina, Prizren, Rahovec, Shtime, Skenderaj, Strpce, Suva Reka, Viti, Vitomirice, Vushtrri, Zubin Potoc, Zvecan.
Kosovo Locations: Accursed Mountains, Badovc Lake, Batllava Lake, Binacka Morava River, Erenik River, Gazivoda Lake, Ibar River, Klina River, Lepenac River, Llapi River, Radoniq Lake, Sharr Mountains, Sitnica River, Velika Rudoka, White Drin River.
Kosovo Natural Resources: Kosovo has a large reserve of lignite, and this coal is used to generate about 90% of the country's electricity. Other natural resources include bauxite, chrome, kaolin, lead, magnesium, nickel, and zinc.
Kosovo Natural Hazards: Destructive earthquakes are a natural hazard for Kosovo.
Kosovo Environmental Issues: Air pollution, water pollution, and water scarcity are all environmental issues for Kosovo. Additionally, coal mining, construction, and other human activities are causing degradation of the land.
<urn:uuid:98cfbb2b-6e60-42d3-9e82-91525e89f77b>
CC-MAIN-2023-50
https://geology.com/world/kosovo-satellite-image.shtml
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.849411
483
3.171875
3
Pregnancy is a unique and transformative experience, divided into three trimesters, each lasting about three months. While a full-term baby may be born between 37 and 42 weeks, it is important to note that every pregnancy is different. The journey through pregnancy, childbirth, and the postpartum period presents a blend of challenges and rewards. Building a strong support network is essential to navigate this experience successfully. Your healthcare provider can offer valuable guidance and provide access to resources, while your loved ones can offer both emotional and practical support. By gaining an understanding of the distinct stages of this exceptional phase of life and emphasising regular prenatal care, you can help ensure a healthy pregnancy for both yourself and your baby.
The First Trimester (0 to 12 Weeks)
The first trimester comprises the first 12 weeks and is the most crucial time for your baby's development. It is a time of rapid growth and development. During the first 3 months, the fertilised egg (zygote) develops into an embryo and then into a foetus. The foetus develops distinct facial features, limbs, organs, bones and muscles. It has a regular heartbeat; fingers, toes and eyelids have formed; and the nerves and muscles can work together. By the end of the first trimester, the foetus measures around 7.5cm and weighs nearly 30g. Besides the physical development of your baby, you are likely to experience many changes during the first trimester of your pregnancy. These include:
- Nausea and vomiting (morning sickness)
- Tender, swollen breasts
- Mood changes
- Cravings for certain foods
- Frequent urination
- Weight changes
The Second Trimester (13 to 27 Weeks)
The second trimester is often regarded as the most enjoyable period of pregnancy because symptoms like morning sickness, extreme fatigue, and breast tenderness tend to ease. During this phase, you will notice various changes, including the expansion of your belly, the appearance of stretch marks on your abdomen, thighs, breasts, and buttocks, as well as the darkening of your areola (the skin surrounding the nipples). Some swelling may occur in your ankles, fingers, and face. Towards the end of the second trimester, you may even begin to feel the baby's movements. In terms of foetal development, the second trimester brings significant changes as well. The foetus grows to be approximately 30cm long and weighs around 0.7kg by the end of this trimester. Here are some notable developments:
- The first bowel movement, known as meconium, forms in the intestines.
- The foetus gains the ability to see, hear, make sucking motions, and even scratch itself. It can kick, move, and turn from side to side.
- Skin, hair, and nails start to form.
- The lungs develop (though they are not yet functional).
- The foetus establishes a regular sleep-wake pattern.
- In males, the testicles begin to move into the scrotum, and in females, eggs start forming in the ovaries.
- Taste buds begin to develop.
- Bone marrow initiates the production of blood cells.
- Fine hair, known as lanugo, covers the body.
- The eyes shift to the front of the face, and the ears move from the neck to the sides of the head.
The Third Trimester (28 to 40 Weeks)
The third trimester extends from approximately week 28 until delivery, which typically occurs around week 40. By this stage, most of your baby's organs and body systems have already developed, and they will continue to grow and mature.
The bones undergo further hardening, resulting in more noticeable movements. The eyes begin to open, and the lungs reach full formation. The fine hair called lanugo disappears, and a protective waxy coating known as vernix takes its place. Towards the latter part of this trimester, your baby will descend lower in your abdomen and assume a head-down position. During this trimester, you may encounter various discomforts, including heartburn, breathlessness, swelling in the ankles, face, and fingers, difficulty sleeping, and mood swings. Changes in your breasts, such as milk leakage and alterations in nipple appearance, are also common. Frequent urination and the development of haemorrhoids may be experienced as well.
The Fourth Trimester (Postpartum)
<urn:uuid:859992e1-b67a-448e-9e5d-28048b07b5ab>
CC-MAIN-2023-50
https://globmed.co.uk/blog/pregnancy-trimesters-explained/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.941092
1,208
3.328125
3
Struggling with Tooth Sensitivity? Our Dentists Can Help!
Sensitive teeth can cause pain or discomfort on contact with certain stimuli like cold air, hot or cold foods, hot or cold drinks, sweets, or even touch. Our dental office provides various treatments to combat tooth sensitivity. Depending on the type of sensitivity that you have, different methods are available to fit your unique needs. If you have dentinal sensitivity, the middle layer of your tooth is exposed. This could be caused by receding gum lines. One treatment includes an in-office application of a resin coating that covers the exposed area. The results can be effective for up to six months. Pulpal sensitivity comes from the pulp chamber of the tooth, which houses the tooth's nerves and tissues. This sensitivity can be caused by decay, infection, recent fillings, or the grinding of teeth. Many different solutions exist for pulpal sensitivity depending on the symptoms. Talk with your dentist to figure out which options are best for your sensitivity.
<urn:uuid:e9eec090-a3a1-47fc-a69e-4c343332df0c>
CC-MAIN-2023-50
https://greeleydental.com/the-easy-treatment-for-sensitive-teeth-that-actually-works/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.907508
209
2.625
3
Lycoperdon perlatum is also known as the common puffball mushroom, gem-studded puffball, devil’s snuff-box and warted puffball. Lycoperdon literally translated means wolf farts, and this name is used for other puffball species as well. The spores are blown from the top of the common puffball at maturity, hence the literal translation. Lycoperdon perlatum was described in 1796 by Christiaan Persoon; since then it has acquired several other synonyms including Lycoperdon gemmatum Batsch, Lycoperdon perlatum var. perlatum Pers., and Lycoperdon bonordenii Massee [1.]. The common puffball is a ‘common’ mushroom, found amongst other places on commons. Its surface is covered in variously sized pearls or warts; these warts eventually fall off, leaving a brownish mark behind. The common puffball can grow alone, but often will be found growing in groups. Lycoperdon perlatum may be the most common puffball in North America and is relatively easy to identify. It is also widespread throughout Europe and Asia, in addition to Africa, Australia and Central America. The season for harvesting starts in July and ends in November, perhaps later in warmer climates.
Lycoperdon Perlatum Identification and Description
Fruiting body: 3–6 cm in diameter; up to 9 cm tall. Pear-shaped or roundish, sometimes with a flattened top. The flesh is white and marshmallowy, but becomes brown and powdery in texture as the spores are produced. The warts are spine-shaped, with a wide base becoming narrower at the apex.
Gills: None; this is a non-gilled mushroom.
Stem: The stem is white and tapered under the fruiting body, and also has warts on it.
Smell: Mild.
Taste: Nondescript, but works beautifully in dishes with other mushrooms. Some suggest it is slightly sweet, and due to its spongy texture it picks up flavours from other ingredients.
Spores: Spherical; under an electron microscope they show ‘rodlet’ patterns.
Spore color: White when immature, gradually turning yellow-brown.
Edibility: Edible. It is suggested that the mushroom should be eaten while it is still young and the flesh is still white, before it has turned brown or the spores have changed from their white color. The puffball should be cut in half and the color of the flesh examined before consuming.
Habitat: Found in all types of woodland, especially near hardwoods and conifers. Grows on most grazing lands, commons and heaths, and on roadsides or in urban areas.
Phylum: Basidiomycota; Class: Agaricomycetes; Order: Agaricales; Family: Agaricaceae.
Lycoperdon Perlatum Look-alikes
There are ball-shaped fungi called earthballs (Scleroderma citrinum) that look similar to the common puffball; the difference is that earthballs are inedible and may cause poisoning if ingested by humans or animals.
Lycoperdon Perlatum Benefits
Lycoperdon perlatum contains useful, biologically active components [2.]. Using different methods of extraction from the fruiting body of Lycoperdon perlatum (water, methanol and ethanol), the antimicrobial activity of the mushroom was tested against a range of bacteria and yeasts. Antimicrobial activity was demonstrated against Staphylococcus aureus, Pseudomonas aeruginosa, Escherichia coli, Bacillus cereus, Candida albicans and Candida glabrata for the methanol and ethanol Lycoperdon perlatum extracts. The water-based extract was also active against all of the test strains except Pseudomonas aeruginosa [3.].
When compared to other mushrooms, Lycoperdon perlatum showed the most antimicrobial activity in vitro; an inhibition zone of 15 mm (no microbial growth in the presence of the extract) was considered highly active. Lycoperdon perlatum showed an inhibition zone of 24 mm for Bacillus subtilis, with 19 mm and 18 mm for Escherichia coli and Staphylococcus aureus respectively [4.].
Healing properties and prevention of bleeding
North American Indians used puffballs for medicinal purposes, in particular as a styptic (able to stop a wound bleeding when applied). The soft centre of dried, immature puffballs, when broken up and applied onto broken skin or a wound, helps to prevent continued bleeding. The Cherokee Indians also used it as a healing agent for sores [5.]. It is also reported that the fibrous mass left after the spores have escaped the puffball can be used as a wound dressing.
Reactive oxygen species can cause extensive damage to cells, so the search for biologically active compounds to mitigate their effects continues. The antioxidant properties of water extracts of Lycoperdon perlatum were considerable compared with other mushrooms, showing the highest radical-scavenging activity (43.2% at a dose concentration of 4.0 mg/ml) [6.].
Lycoperdon Perlatum Dosage
Lycoperdon perlatum is edible, but only when the mushrooms are young and the flesh is still white. The presence of trace metals in some cohorts of collected mushrooms may limit the consumption volume; however, there is no standard recommended dosage, as there will be batch differences.
Lycoperdon Perlatum Toxicity, Safety & Side Effects
Do not self-administer Lycoperdon perlatum as a substitute for conventional medical treatment. Always seek medical advice before taking any food supplement if pregnant. Lycoperdon perlatum is not a replacement for traditional medicine; it is essential to seek and adhere to medical advice before supplementing the diet for medicinal effects. Lycoperdonosis is a rare respiratory condition that may follow inhalation of large quantities of spores from mature puffballs, including Lycoperdon perlatum [7.]. Lycoperdon perlatum mushrooms sourced from Turkey were not considered edible despite having excellent antioxidant properties; examination of the fruiting bodies after microwave digestion revealed high levels of metals, including iron, zinc, potassium and magnesium [6.]. Analysis of Polish mushrooms also identified high levels of mercury [8.].
References
1. First Nature. [Cited 16/03/2020]. Available from: https://www.first-nature.com/fungi/lycoperdon-perlatum.php.
2. Barros L, et al. Chemical composition and biological properties of Portuguese wild mushrooms: a comprehensive study. J Agric Food Chem, 2008. 56(10): p. 3856-62.
3. Akpi UK, Odoh CK, Ideh E and Adobu U. Antimicrobial activity of Lycoperdon perlatum whole fruit body on common pathogenic bacteria and fungi. African Journal of Clinical and Experimental Microbiology, 2017. 18(2): p. 79-85.
4. Ramesh C and Pattar MG. Antimicrobial properties, antioxidant activity and bioactive compounds from six wild edible mushrooms of Western Ghats of Karnataka, India. Pharmacognosy Research, 2010. 2(2): p. 107-112.
5. Burk WR. Puffball usages among North American Indians. 1983.
6. Sarikurkcu C, et al. Metal concentration and antioxidant activity of edible mushrooms from Turkey. Food Chemistry, 2015. 175: p. 549-555.
7. Strand RD, Neuhauser EB and Sornberger CF. Lycoperdonosis. N Engl J Med, 1967. 277(2): p. 89-91.
8. Falandysz J, et al. Mercury content and its bioconcentration factors in wild mushrooms at Łukta and Morag, northeastern Poland. J Agric Food Chem, 2003. 51(9): p. 2832-6.
<urn:uuid:0d5cc3cd-b3dc-4b50-a976-aa9ca37f17da>
CC-MAIN-2023-50
https://healing-mushrooms.net/lycoperdon-perlatum
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.888319
1,747
2.828125
3
Calculates the logical equivalence of two expressions.
Result = Expression1 Eqv Expression2
Result: A numeric variable that will contain the result of the comparison.
Expression1, Expression2: Expressions that you want to compare.
When testing for equivalence between Boolean expressions, the result is True if both expressions are True or both are False. In a bit-wise comparison, the Eqv operator sets a bit in the result only if the corresponding bit is set in both expressions or in neither expression.
Sub ExampleEqv
    Dim A As Variant, B As Variant, C As Variant, D As Variant ' see #i38265
    Dim vOut As Variant
    A = 10: B = 8: C = 6: D = Null
    vOut = A > B Eqv B > C ' returns -1 (True Eqv True)
    vOut = B > A Eqv B > C ' returns 0 (False Eqv True)
    vOut = A > B Eqv B > D ' returns 0
    vOut = (B > D Eqv B > A) ' returns -1
    vOut = B Eqv A ' returns -3 (bit-wise comparison)
End Sub
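As an additional illustration (this explanation and the short snippet below are not part of the original help text), consider why the last statement returns -3. In binary, 10 is 1010 and 8 is 1000. Eqv sets a result bit wherever the two operands have equal bits: bit 0 and bit 2 are 0 in both values, bit 3 is 1 in both, and every higher bit is 0 in both, so all of those result bits are set; only bit 1 differs (1 in 10, 0 in 8), so that bit is cleared. The result is the two's complement pattern ...11101, which equals -3. In bit-wise terms, Eqv is the complement of Xor:
Sub ExampleEqvBitwise ' additional sketch, not from the original page
    Dim nResult As Integer
    nResult = 10 Eqv 8 ' 1010 Eqv 1000 = ...11101, returns -3
    nResult = Not (10 Xor 8) ' Not 0010 also returns -3
End Sub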
<urn:uuid:f9df5903-b127-4024-8e38-c3ba83bed8ea>
CC-MAIN-2023-50
https://help.libreoffice.org/latest/en-GB/text/sbasic/shared/03060200.html
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.700702
227
3.875
4
In 1927, the Polish-Jewish physical anthropologist Henryk Szpidbaum published an account of his recent expedition to Mandate Palestine on behalf of the Polish Society for the Exploration of the Mental and Physical Condition of the Jews. He had traveled to Palestine not to investigate the Zionist settlers, but rather the Samaritans, an obscure religious group of no more than 150 members living in the town of Nablus. In the introduction to his study, Szpidbaum described the Samaritans as “a living monument [Denkmal] of the biblical period. This tribe can be traced back 2800 years, during which it should be noted that the Samaritans have never left their country of Palestine. Detailed knowledge of this tribe will hopefully help to solve many difficult problems concerning the anthropology of the former inhabitants of Canaan and partially [also of] today’s Jews ” (Szpidbaum 1927). Unfortunately, he warned, the community might soon disappear forever: “The Samaritans believe themselves to be a vanishing tribe [due to] the insufficient number of women. [Footnote:] In order to counter the extinction, the Samaritans try to enter into mixed marriages with Jews. For the time being there is only one such a marriage.” Szpidbaum’s rhetoric about Samaritan antiquity and pending extinction is representative of the concepts of group identities and their historical value in early twentieth-century physical anthropology, which were subsequently absorbed into the emerging field of human population genetics. Between Szpidbaum’s visit to Nablus and the establishment of the Israeli state in 1948, medical practitioners championed new molecular methods for human classification, relying on the frequencies of ABO blood types and the prevalence of inherited medical conditions. However, both the advocates of these new methods and those of traditional anthropometry agreed that specific human populations were particularly important and deserving of study: “living monuments” of the ancient Near East, such as the Samaritans in Palestine, Zoroastrians in Iran, Copts in Egypt, and Assyrians in Iraq, whose shrinking numbers were perceived to signal a total eclipse of the region’s biblical and pre-Islamic history. From the 1920s to the Human Genome Diversity Project, scientists have called for urgent projects of salvage genetics on such communities—sometimes in the name of aiding the groups’ recovery, but always in the name of rescuing history itself from anticipated oblivion. Here, I briefly trace how discourses of physical anthropology and, later, population genetics transformed Middle Eastern minority communities from relics of a religious past, representing an antiquated way of life, to valuable biological remnants of mythical origins. While focusing on the case of the Samaritans, I explore broader questions about how historically marginalized research subjects interact with representatives of the scientific community to create and reshape narratives of ethnogenesis as well as social practices like endogamy. How are these narratives of ancient populations, inscribed upon the bodies of living people, interpreted according to the contingencies of technology and nationalist politics? Henryk Szpidbaum was only one of at least five anthropologists who measured the bodies of the Samaritans at Nablus between 1900 and the 1930s. 
Accordingly, in the 1906 edition of the Jewish Encyclopedia, Harvard anthropologist Henry Minor Huxley wrote that the highly inbred Samaritans had “preserved the ancient type in its purity; and they are to-day the sole, though degenerate, representatives of the ancient Hebrews.” However, as Szpidbaum noted, the rapidly dwindling population and the growth of Zionist settlement in Mandate Palestine challenged both the community’s self-perception and its commitment to endogamy. While Samaritans had consistently regarded themselves as spiritual and biological “cousins” to the Jews, rabbinical Judaism denied Samaritan claims to Israelite ancestry, characterizing them instead as the descendants of Mesopotamians resettled in Palestine after the Assyrian conquest in 722 BCE. Huxley (1906) described how this combination of religious and ethnic beliefs maintained a communal rift between the Samaritans and Palestinian Jews, even when the former were in dire demographic straits: “The Samaritans themselves claim the perfect purity of their stock. Only as a last resort would they seek wives outside their own sect; and in this case they would naturally wish to marry among the people of the most closely allied religion, the Jewish. The Jews hate and despise the Samaritans with the greatest bitterness, and would do all in their power to prevent marriages between the two sects.” These local prejudices were not shared by the secular, socialist-inspired Labor Zionists who arrived in Palestine at the turn of the century. Generally, these settlers dismissed the dogma and practices of Orthodox Judaism and were eager to find local allies in the project of building the modern Jewish state. It was under these circumstances that several Jewish anthropologists and physicians like Szpidbaum, Samuel Weissenberg, and Rina Younovitch presented their data on anthropometric measurements and blood type frequencies as supporting Samaritan ancestry claims (Weissenberg 1933; Younovitch 1933). Their work provided a scientific basis on which to accept the Samaritans’ account of their origins, dramatically revising the traditionally hostile Samaritan-Jewish relationship. Armed with these calculations about the community’s bones and blood, a number of influential Labor Zionists portrayed the Samaritans as the indigenous Palestinian kin to the Ashkenazi Jewish settlers returning from exile. Perhaps the most significant of these figures was Yitzhak Ben-Zvi, a Russian-Jewish historian and future president of Israel (between 1952-63). Around 1907, Ben-Zvi moved in with a Samaritan family—the Tsedakas, who had relocated from Nablus to Jaffa—while conducting research on Samaritan history and traditions. Ben-Zvi was instrumental in convincing the Tsedaka paterfamilias to allow his sons to marry Jewish immigrant women. Out of five such intermarriages arranged between the 1920s and 30s, three were successful and played a key role in the Samaritan community’s demographic recovery. Ben-Zvi soon became a prominent advocate for the community as a whole, seeking to grant the Samaritans Israeli citizenship rights. Although the Law of Return only stipulated that Jews would attain citizenship after immigrating to Israel, Ben-Zvi argued that Samaritans should also be eligible on the grounds that in “racial” terms, they were Israelites. 
In the early 1950s, he orchestrated the creation of a Samaritan neighborhood within the city of Holon (near Tel-Aviv) and encouraged the migration of more Samaritan families out of Nablus (then within Jordanian territory) into Israeli state borders (Schreiber 2014). By the early 1960s, less than 30 years after Szpidbaum’s report, the overall Samaritan population had more than doubled, with about 150 living in Holon and about 200 in Nablus. Yet while the fresh blood of new members and the generally positive relationship with Israel seemed to bode well for the community’s survival, a rising generation of scientists—both Israeli and Anglo-American—posed additional concerns about the Samaritan past and future. In their eyes, Ben-Zvi’s policies of integrating Samaritans into Israeli society threatened both the legitimacy of Jewish nationalism and the Samaritans’ biological uniqueness. Chaim Sheba, the influential Israeli physician and sometime director of the Ministry of Health, challenged the notion that the Samaritans descended from an autochthonous Judaean population at all. For Sheba, who came from an Orthodox Jewish family, the stakes involved in the Samaritans’ genetic identity were at once religious and political: the efforts of his fellow Labor Zionists to rehabilitate the Samaritans as native Israelites undermined the primacy of the biblical accounts of Jewish history that they used to justify their claims to Palestinian land. During the 1950s and 1960s, Sheba committed himself to using genetic data—specifically, the incidence rates of hereditary diseases—to reconstruct the migration patterns of the Jewish Diaspora according to biblical narratives. Ultimately, he argued that because the Samaritans lacked the gene for an enzyme deficiency common among Middle Eastern Jewish communities, genetics confirmed the writings of rabbinical Judaism that claimed the Samaritans actually belonged to “a foreign stock brought over by the Assyrians” (Sheba et al. 1962). Not Samaritans, but only Jews themselves, he contended, embodied the remnants of the ancient Judaean gene pool. Meanwhile, American geneticist Leslie C. Dunn (1963) of Columbia University expressed fears that “out-marriage would shortly terminate [the Samaritan community’s] existence as a biological entity.” In other words, even if the Samaritan community gained members through occasional marriages with Jewish women, whose children and grandchildren in turn benefited from the reduced incidence of genetic diseases, they still faced another kind of extinction by introducing foreign elements into the ancient Hebrew gene pool so diligently preserved by their ancestors. For Dunn and like-minded members of the international scientific community, the healthy new generation of Samaritans represented the community’s declining value for biological and historical research. A young Israeli genetic anthropologist, Batsheva Bonné, soon countered these concerns and reoriented the scientific conversation about the Samaritans’ biological identity to account for the community’s own interests. In 1960, Bonné established a correspondence with Yisra’el Tsedaka: the nephew of Yefet Tsedaka, the community’s de facto leader in Holon. Using the information he supplied about Samaritan family genealogies, she recorded a comprehensive demographic survey of the Holon Samaritans for her master’s thesis at the University of Chicago. 
Her earliest publication from the thesis echoed Sheba’s promotion of the rabbinical Jewish biblical interpretation that the Samaritans descended from “settlers who were transplanted into Palestine” (Bonné 1963). However, in her subsequent doctoral work on the blood-type genetics of the Samaritans, she came to favor the narrative of shared Jewish-Samaritan origins, attributing Samaritans’ genetic difference from Jews to their long reproductive isolation and the effects of endogamy. This shift in representation is certainly related to the long-term relationship she cultivated with the community, involving weekly visits to their neighborhood in Holon. She portrays this relationship as one of mutual respect and appreciation, as opposed to other researchers’ more exploitative approaches to collecting Samaritan blood from the community in Nablus (Burton 2018). In her autobiography, Bonné writes that her presence among the Samaritans “was not that of an anthropologist-scientist with a foreign tribe whose customs and traditions are anchored in another world.” Rather, she claimed, her personal friendships with the Tsedakas and other Samaritan families enabled the discussion of delicate medical information (Bonné-Tamir 2010). In 1966, Bonné published two articles on her doctoral work (one provocatively entitled “Are There Hebrews Left?”), which directly addressed both the conflicting accounts of Samaritan origins and the genetic effects of their recent marriages with Jewish women. She took pains to clarify that these out-marriages had been confined to a single family lineage (namely that of her close friends, the Tsedakas) and argued that the preponderance of genetic evidence supported the Samaritans’ claims of centuries of reproductive isolation (Bonné 1966b). In a challenge to her mentor Sheba, Bonné wrote: “Whether present gene frequencies are related or unrelated to those of many generations—a fact we cannot know—the Samaritans represent a descendant population from the old Hebrew kingdom; not of the total Israelite kingdom but of a small branch of it, as indeed they claim.” Still, she chided both the various geneticists with whom she collaborated as well as previous scholarship for their obsessive reliance on the Samaritans to reconstruct an ancient Judaean gene pool, arguing: “In itself, the usefulness of concluding that the Samaritans are the living representatives of ancient Hebrews, is doubtful” (Bonné 1966a). Bonné instead promoted studies of the medical genetics of the Samaritans, whose small community and endogamous marriage practices had produced high rates of inherited disorders. With the help of the Tsedakas, she initiated a program of premarital genetic counseling (still extant today) for young Samaritans in the hope of reducing the prevalence of these disorders, which has successfully induced the community to consider genetic problems in its matchmaking practices (Schreiber 2014). However, she acknowledged that the Samaritans themselves were interested not only in medical aid, but also in the capacity of genetic information to reinforce the community’s claims of shared ancestry with Jews. For example, she described the response of Yefet Tsedaka, the Holon Samaritans’ leader, to Sheba’s research on the G6PD enzyme deficiency, a maternally inherited disorder common among Middle Eastern Jews but absent among Ashkenazim and Samaritans. 
Whereas Sheba had declared this evidence that Samaritans were not descended from the Israelite tribes, Tsedaka reinterpreted Sheba’s data in a manner consonant with the community’s beliefs. He explained to Bonné that the Samaritans belong to the tribe of Ephraim while the Ashkenazim descend from the tribe of Benjamin—namely, the descendants of the biblical matriarch Rachel. He deduced that Rachel must not have possessed the gene for the enzyme deficiency, whereas the other wives of Jacob did and passed it down to their descendants among the Jewish population (Bonné 1966a). The potential of scientific epistemology to overcome rabbinical prejudices has perhaps influenced Samaritans’ continued cooperation with Israeli and foreign geneticists in the era of DNA sequencing. While Bonné’s publications on the Samaritans, as recently as 2003, explicitly avoid the question of community origins, an independent collaborative group of American and Israeli geneticists sequenced Samaritan mitochondrial and Y-chromosome DNA with the professed aim of re-testing the rabbinical Jewish and Samaritan narratives (Bonné-Tamir et al. 2003). Ultimately, these studies also claimed to support a shared ancestry for Jews and Samaritans predating the Assyrian conquest (Shen et al. 2004; Oefner et al. 2013). Samaritans therefore have reason to look favorably upon genetic research, given that they still face discrimination and occasionally outright harassment from Orthodox Jews in both Israel and the West Bank (Schreiber 2014; Droeber 2014). Like other marginalized Middle Eastern minorities, Samaritans responded to the social and scientific co-production of ethnic nationalism and physical anthropology by reconceptualizing their histories in terms of the maintenance of not only religious but also ethnic, i.e. biological, purity. In many cases, the scientific attention that has imagined these groups as the living repositories of ancient gene pools has also offered affirmation of their ancestry claims, securing their faith in and cooperation with genetic research. Yet while these discourses have reinforced a conflation of biological ancestry and endogamy with community history and identity, the rate of outmarriage among the Samaritans is actually increasing. In fact, they now embrace the practice for the express purpose of decreasing the incidence of genetic diseases in the community, which genetic counseling alone could not achieve. With this “eugenic” justification, Samaritan men (though not women) have been permitted to marry not only Israeli Jews, but also Christian wives imported from Ukraine. The Samaritan understanding of genetics and its significance to community health as well as ethnonationalist politics evidently allows them to represent “living monuments” on their own terms. Having “proved” the authenticity of their origins and attained important political privileges from the Israeli state, Samaritans have consciously chosen to transform their gene pool to ensure the community’s continued growth and survival. Bonné, Batsheva.1963. “The Samaritans: A Demographic Study.” Human Biology 35 (1): 61–89. ———. 1966a. “Are There Hebrews Left?” American Journal of Physical Anthropology 24 (2): 135-145. ———. 1966b. “Genes and Phenotypes in the Samaritan Isolate.” American Journal of Physical Anthropology 24 (1): 1-19. Bonné-Tamir, Batsheva. 2010. Ḥayim ʻim ha-genim: ḥamishim shenot meḥḳar ba-geneṭiḳah shel ʻedot Yiśraʼel. Yerushalayim: Karmel. Bonné-Tamir, Batsheva, et al. 2003. 
“Maternal and Paternal Lineages of the Samaritan Isolate: Mutation Rates and Time to Most Recent Common Male Ancestor.” Annals of Human Genetics 67 (2): 153–164. Burton, Elise K. 2018. “‘Essential Collaborators’: Locating Middle Eastern Geneticists in the Global Scientific Infrastructure, 1950s–1970s.” Comparative Studies in Society and History 60 (1): 119–149. Droeber, Julia. 2014. The Dynamics of Coexistence in the Middle East: Negotiating Boundaries between Christians, Muslims, Jews and Samaritans. London: I. B. Tauris Publishers. Dunn, Leslie. 1953. Letter to Arthur Mourant. 2 October. Arthur E. Mourant Papers, PP/AEM/K.21: Box 29, Wellcome Library, London. Huxley, Henry Minor. 1906. “Samaritans.” Jewish Encyclopedia. Oefner, Peter J., et al. 2013. “Genetics and the History of the Samaritans: Y-Chromosomal Microsatellites and Genetic Affinity between Samaritans and Cohanim.” Human Biology 85 (6): 825–857. Schreiber, Monika. 2014. The Comfort of Kin: Samaritan Community, Kinship, and Marriage. Leiden: Brill Publishers. Sheba, Chaim, et al. 1962. “Epidemiologic Surveys of Deleterious Genes in Different Population Groups in Israel.” American Journal of Public Health 52 (7): 1101–1106. Shen, Peidong, et al. 2004. “Reconstruction of Patrilineages and Matrilineages of Samaritans and Other Israeli Populations from Y-Chromosome and Mitochondrial DNA Sequence Variation.” Human Mutation 24 (3): 248–260. Szpidbaum, Henryk. 1927. “Die Samaritaner: anthropobiologische Studien.” Mitteilungen der Anthropologischen Gesellschaft in Wien 57 (5/6): 139–158. The New Samaritans. 2007. Directed by Sergei Grankin et al. Surrey, United Kingdom: Journeyman Pictures. Weissenberg, Samuel. 1909. “Die autochthone Bevölkerung Palästinas in anthropologischer Beziehung.” Zeitschrift für Demographie und Statistik der Juden 5: 129–139. Younovitch, Rina. 1933. “Etude sérologique des juifs samaritains.” Comptes rendus des séances de la Société de Biologie 85 (112): 970–971. The mentioned condition is glucose-6-phosphate dehydrogenase (G6PD) deficiency, which causes favism in individuals with Mediterranean ancestry. Samaritans in Nablus also face suspicion from their Muslim and Christian Palestinian neighbors, who consider them to be “Jews” in light of their entitlement to Israeli citizenship. See Chapter 8 in Schreiber, The Comfort of Kin. The somewhat sensationalized experience of some of these wives adjusting to the religious restrictions of Samaritan life is portrayed in the documentary “The New Samaritans” (Journeyman, 2007).
<urn:uuid:34fc23f9-44d9-4ead-ad9a-bdf47e7bf242>
CC-MAIN-2023-50
https://histanthro.org/notes/living-monuments/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.925492
4,242
3
3
The failure of the Capital National Bank of Lincoln in January 1893 occurred during a low point in Nebraska’s business and political life. Legislative investigating committees discovered instances of fraud involving the management and disbursement of state funds. C. W. Mosher, president of the failed Capital National, was found to be involved in the fraudulent management of state penitentiary labor. A committee investigating the hospital for the insane at Lincoln discovered overcharges for such items as coal. Betts, Weaver and Company, for example, had charged the state for 438,000 tons of coal but delivered only 336,000 tons. For these and similar fraudulent acts, the Nebraska House of Representatives voted articles of impeachment against the following Republican state officials (all members of the Board of Public Lands and Buildings) who had been in office during the preceding years: John C. Allen, secretary of state; Augustus R. Humphrey, board commissioner; George H. Hastings, attorney general; and John E. Hill, state treasurer; as well as William Leese, former attorney general, who had been a member of the board from 1885 to 1891; and Thomas H. Benton, state auditor from 1889 to 1893. These men were tried before the Nebraska Supreme Court. Criminal prosecutions were also brought in the district court against a number of persons implicated in defrauding the state on its contracts. Delay and political influence resulted in only one conviction, that of one of the worst offenders. He was sentenced to two years in the penitentiary, but served only a few months. In the impeachment trials before the Nebraska Supreme Court, the court dismissed the cases against Hill, Benton, and Leese, holding that the impeachment of a state officer after his term of office had expired was unconstitutional. There remained the impeachment cases against Allen, Humphrey, and Hastings. After an extended trial, a majority opinion held that the delinquencies of the respondents had been due to errors of judgment only and were not impeachable under the constitution.
<urn:uuid:9c05c36e-ea4c-42b2-9d90-08b514edaca4>
CC-MAIN-2023-50
https://history.nebraska.gov/publications_section/mosher-c-w-and-the-capital-national-bank/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.978522
412
2.8125
3
Yes, or something like it. Most accounts have him saying it to Halifax in 1940, though he possibly repeated it on other occasions. It is sometimes reported that he said "J'assume la France" rather than "Je suis la France". In either case, the expression signified his intention to assume responsibility for France. Many of the smaller countries overrun by the Nazis had properly constituted governments-in-exile in London. In some cases the monarch himself (or herself) had also fled to London. For nationals of these countries (those who wanted to fight on at least) the issue of legitimacy did not arise. The legal government continued to function, albeit from London. None of this applied to De Gaulle's initially tiny organisation. For most people in 1940, the legal government of France was that of Petain, based in Vichy. De Gaulle had no monarch, virtually no armed forces, no access to funds (beyond what the British allowed him) and no territory. For his project to succeed he had to create and encourage the idea that a legitimate French state existed beyond Vichy. He could only do so by projecting his personality and by acting - even from the tiniest beginnings - the role of the world statesman.
<urn:uuid:4d7c5162-6a56-4ddb-adbe-217c9fd1a230>
CC-MAIN-2023-50
https://history.stackexchange.com/questions/17817/did-de-gaulle-really-say-i-am-france-je-suis-la-france
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.971229
261
2.640625
3
A lotto is a type of gambling game that involves playing a lottery draw. While this activity is widely popular and has been practiced for centuries, it is still a form of gambling. For this reason, there are many rules that govern lotteries. Listed below are some of the most common rules governing lotteries, along with some of the reasons why you shouldn’t play the lottery. These rules vary from country to country, so make sure to check your local laws before you start playing. Lotteries were banned in England from 1699 to 1709 The late seventeenth and early eighteenth centuries saw a massive increase in lottery ticket prices. These tickets were heavily advertised and sold at astronomical markups. Contractors would purchase the tickets for low prices and resell them at exorbitant markups. Furthermore, the government was not able to collect taxes on side bets placed on the winning numbers. Because of these practices, lotteries were condemned as an unsavory form of mass gambling and a false drawing. French lotteries were abolished in 1836 The history of French lotteries traces back to the late 1500s, but there are significant differences from Italian lotteries. French lotteries first gained popularity during the reign of Francis I, but did not really reach their height until the mid-17th century, when Louis XIV won the top prize in a drawing. The king returned the prize to the government, where it was eventually redistributed. French lotteries were eventually banned in 1836, but the Loterie Nationale was restarted in 1933. Despite being a flop, the French lottery was revived after the end of World War II. The Dutch state-owned Staatsloterij is the oldest running lottery The Netherlands has one of the oldest running lotteries in the world, and in January 2015, EUR 30.3 million was awarded to one player. Five x 1/5 tickets won EUR 6.06 million each. Interestingly, this prize is the third-highest one, but it’s been won countless times. In January 2017, two players in Gelderland and Zeeland split the prize. A player in Friesland won it in December 2013 and in Groningen in December 2011. Lotteries are a form of gambling In the early nineteenth century, British colonists introduced lotteries to the United States, but they were banned in 10 states between 1844 and 1859 due to Christian opposition. However, lotteries soon gained popularity and are now legal in most states. Lotteries have a variety of uses. They can be a form of entertainment, raise funds for charities, and promote philanthropy.
<urn:uuid:0473545e-034a-4d3b-af9c-21c5f0617bb6>
CC-MAIN-2023-50
https://hotel-brongto.com/why-you-shouldnt-play-the-lottery/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.981648
557
2.578125
3
Can you learn to be a good drawer? In fact, say scientists, while some are born with natural talent, anyone can learn to draw well. Researchers at University College London believe those unable to draw are not seeing the world as it really is – and simply need to work on their visual skills. How long does it take to become a good drawer? It takes 5,000-10,000 hours of practice to become very good at any skill; the same is true of drawing. To speed up the learning time, you need to have a professional art teacher or attend a very good art course. What are the 5 basic skills of drawing? The “drawing basics” are the five main skills of drawing. They’re the ability to recognize edges, lines, and angles; to reckon proportion and perspective; to decipher shadow, highlights, and gradations of tone; and lastly, to unconsciously string them all together – which comes with practice. What should I learn first in drawing? The first thing that most drawing tutorials teach you to draw is shapes, starting with a sphere. After all, any object that you see around you can be constructed by using one, or a combination of, three different shapes: A circle – a sphere is a circle in 3D. A square – a cube is a square in 3D. Why is drawing so hard? The pattern our eyes take when we’re drawing something is sequential. They don’t flick about nearly as much. But instead of our eyes flicking all around the object we’re drawing, a typical zigzag pattern, they creep around the outline a bit at a time, pausing as they go. That’s what makes drawing accurately so hard. How can I learn to draw faster? Activities for day two… - Practice drawing objects loosely and quickly. - Time your drawings. Start at 4 minutes and progress to just 1 minute for each drawing. - Try to draw with your shoulder and elbow instead of just with your wrist. - Don’t erase and just focus on capturing the form as quickly as possible. Can I learn drawing by myself? You can learn to draw, as long as you can hold a pencil. Even without natural talent, you will learn to draw if you practice often. With enough motivation and dedication, anyone can learn to draw if they believe in themselves. Taking the first steps is never easy. Is drawing a talent or skill? So is drawing a talent or skill? Drawing is a skill, so you can learn how to draw even if you are not talented. It will take more time and effort, but generally the artists who are not that talented often outperform the talented artists in the long run. Can I learn to draw by copying? Copying art can help you learn how to draw various types of eyes, mouths, feet, cats, dogs, etc. After you practice drawing other artwork, you will gain the confidence and knowledge of how to draw your own illustrations without copying. Can I learn to draw at 30? No, it’s never too late to learn to draw. All it takes is practice. You probably won’t be much good until at least a year in, and by two years in you’ll really start to shine. Is 17 too late to learn to draw? No, it’s not too late to start drawing, especially not at 17 – you’re still young and have time. Drawing is a skill you acquire over time; all you need is consistent daily practice. Start by drawing human forms, then branch out to other things like environments, props, vehicles, etc. Can I become an artist at 30? If you’re closer in age to 30 than 20 and have no drawing experience, you may need to sacrifice a lot of free time. It depends on your goals.
But if you’re looking for a career in your 30s, then you need to be drawing every day as many hours as possible. Same goes for someone in their 30s getting close to 40. Is it too late to become an artist? Whether you’re eight years old or 80, it’s never too late to start making art. Give in to your passion, find the right inspiration and dive right in. One artist, for example, decided to pursue art after completing her education, raising children and having a “real job” while doing lots of arts and crafts on the side. Can a self-taught artist be successful? There is nothing glamorous about being a self-taught artist. If you are disciplined, then you can achieve anything as a self-taught artist that a trained artist could achieve. In fact, formal art training can be restrictive to the learning of some artists, who may be better suited to the self-taught path. What is a self-taught artist called? Outsider art is art by self-taught or naïve art makers. Typically, those labeled as outsider artists have little or no contact with the mainstream art world or art institutions. In many cases, their work is discovered only after their deaths. Should I learn to draw or paint first? Drawing is usually taught first because it allows you to work on the foundational skills with a relatively quick turnaround. Paint takes time to mix and dry, so it is more suited to a final work, but pencils and crayons are quick and decisive – perfect for practice.
<urn:uuid:13c00ef1-5882-4dd9-9f73-26f0932d53ea>
CC-MAIN-2023-50
https://how.co/ht/review-how-to-become-a-good-drawer-84513/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.962409
1,164
3.28125
3
Classism describes discrimination based on social origin. It is directed against people from the poor or working classes. It affects life expectancy and limits access to housing, education, health care, and recognition. With Solidarisch gegen Klassismus (With Solidarity against Classism), Humanity in Action Senior Fellow Francis Seeck and Brigitte Theißl compile the first German-language anthology on anti-classicism. Some people confuse the historical art epoch “classicism,” with “classism” which makes clear that the term “classism” is still little known compared to sexism or racism. This article points out how important this book is and that there is no need to be afraid of that classism wants to replace the concept of class. The anthology deliberately focuses on personal stories and shows that it is important to see experiences through a different point of view. Read the full article from der Freitag here (in German).
<urn:uuid:952cd286-af4c-43eb-b1d0-841f469e59e5>
CC-MAIN-2023-50
https://humanityinaction.org/news_item/alumni-news-germany-francis-seeck-seeing-with-poverty/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.948837
198
3
3
Articles about Empathic Parenting Counseling Homepage Empathic Parenting Counseling Homepage on fostering emotional intelligence and an emotionally connected, trusting bond between parents and children. By being attuned to a child’s emotions and experiences, validating their feelings, and teaching them essential emotional skills, empathic parenting can help set a child up for success in life by establishing healthy attachments and setting a solid foundation of emotional wellbeing. When Little Lungs Encounter Big Risks: Managing Toddler’s Vape Inhalation Children that are empathic tend to be more sensitive to the emotions of others. They often feel the pain or joy that others are feeling, including their thoughts (sometimes) and intentions (good or bad). This can be a challenge when trying to interact with family members or friends who may not understand being an empath. It’s important to remind yourself and your child that they can experience a barrage of excess emotions. It’s also helpful to recognize that this is a normal part of their development and that it’s not necessarily your job to “fix” their emotional reactions. Roots of Empathy is a non-profit organization that supports the emotional health of children in the community by providing parents with education on infant safety and parenting techniques. By educating parents in the cognitive aspect of empathy (perspective taking) and the affective aspect of empathy (emotional connection), we hope to help parents become more responsible citizens and provide their children with a strong, supportive model for parenting. The Roots of Empathy program is run by volunteers, many of whom are local families with young children. Each family commits to volunteering in the classroom with their baby at least once every three weeks for the entire school year. The classroom is filled with a variety of babies that reflect the cultural, racial and linguistic tone of the community.
<urn:uuid:8156eca2-48bb-41da-b699-120581de6bf3>
CC-MAIN-2023-50
https://ilduro.org/category/blog/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.950768
382
2.90625
3
The Importance of Holistic Lifestyle Choices In today’s fast-paced world, it’s easy to get caught up in the hustle and bustle of everyday life. We often find ourselves juggling multiple responsibilities and neglecting our own well-being. However, taking the time to nourish both our body and soul is essential for overall health and happiness. What is Holistic Living? Holistic living is an approach to life that considers the whole person – body, mind, and spirit. It recognizes that these aspects are interconnected and that true well-being can only be achieved by addressing all of them. Unlike conventional medicine, which focuses solely on treating symptoms, holistic living aims to prevent illness and promote optimal health by addressing the root causes of imbalance. It encompasses various practices such as healthy eating, exercise, mindfulness, and self-care. The Power of Healthy Eating One of the fundamental aspects of holistic living is nourishing our bodies with wholesome, nutrient-rich foods. A diet rich in fruits, vegetables, whole grains, and lean proteins provides the essential vitamins, minerals, and antioxidants our bodies need to thrive. By choosing organic and locally sourced foods, we not only support our own health but also contribute to the well-being of the environment. Eating mindfully and savoring each bite allows us to connect with our food and appreciate the nourishment it provides. It’s important to note that holistic living is not about strict diets or deprivation. It’s about finding a balance that works for you and listening to your body’s needs. The Role of Exercise in Holistic Living Regular physical activity is another crucial component of holistic living. Exercise not only helps maintain a healthy weight but also boosts mood, reduces stress, and improves overall well-being. Engaging in activities that you enjoy, such as walking, yoga, or dancing, not only makes exercise more enjoyable but also promotes a sense of fulfillment and connection with your body. Mindfulness and Self-Care In our fast-paced world, it’s easy to get caught up in the constant stream of thoughts and distractions. Practicing mindfulness allows us to slow down, be present, and cultivate a deeper sense of self-awareness. Simple activities such as meditation, deep breathing exercises, or spending time in nature can help us reconnect with ourselves and reduce stress. Self-care practices such as taking a relaxing bath, reading a book, or engaging in a hobby are also essential for nurturing our soul. The Benefits of Holistic Living Adopting a holistic lifestyle can have numerous benefits for both our physical and mental well-being. Some of the benefits include: - Improved overall health and vitality - Increased energy levels - Reduced stress and anxiety - Enhanced mental clarity and focus - Greater sense of purpose and fulfillment Nourishing our body and soul through holistic lifestyle choices is a powerful way to enhance our overall well-being. By taking a holistic approach to life, we can create a harmonious balance between our physical, mental, and spiritual aspects, leading to a happier and healthier life.
<urn:uuid:d1e882ca-d047-4bcc-b5bf-be8ab21a9f0b>
CC-MAIN-2023-50
https://influencergazette.com/nourishing-the-body-and-soul-the-power-of-holistic-lifestyle-choices/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.943096
642
2.703125
3
Building a productive team begins with three key elements: communication, cooperation, and resource allocation. To be successful, a first element of an effective team is good communication. By communicating well, a group of distinct individuals can function like spokes in a wheel. That means everyone knows what their responsibilities are and what the team’s goals are. Problems within the team that develop can be resolved amongst all subordinates associated with them. Hence, good communication is a necessary tool to ensure success. Cooperation is a second essential factor to building a successful team. If there is friction between team members, productivity can be seriously impaired. Hence, it is not favorable to have each worker to perform work just for the sake of his or her personal interests. Team members who compete with one-another at a level beyond friendly competition can adversely affect the team’s functionality. The total production of a proficient team is greater than the sum of all its parts. This is why team cooperation is extremely important. Allocation of resources plays as a third vital element in creating a team that is productive. Apparently, a successful team begins with proficient team members. Such individuals must be assigned duties where their strengths can work to provide optimal output. At the same time, team members can do a variety of task as a means of improving their weaknesses. This involves testing each member and assessing their strengths and weaknesses and assigning them roles accordingly. In addition to these three elements, any prosperous team has the following: - A clearly defined purpose – There is a reason why this team was created. Its purpose serves as its mission and consists of the team’s identity, its main goals, and what it wants to achieve in addition to their goals. - Behavioral norms – Each person must know certain things expected of him or her as rules of behavior, methods of making decisions, team support and interaction between members, and meeting ground rules. Trust is also necessary as all members must have faith in each other to do as they say they will, maintain confidences, give support where needed, and remain consistent in their behavior. - Success indicators – Any team must know what types of outcome it wants to generate, signs that indicate it is successful, how it evaluates its progress, and its reward system for meeting its goals. - Roles and responsibilities – Teams can make decisions effectively by establishing typical roles, who’s in charge of the decision-making process, who manages what type of decisions, and knowing their level of authority of decision making. Although there are differences in roles, levels of experience, and perspectives, the team functions as a partnership. All contributions made are respected and true consensus is attained when appropriate. - Rules of Operation – Knowing how meetings will be conducted by the team is determined by factors as: how often meetings are held, how long will they run, who facilitates them, what kind of agendas are desired (here’s a good meeting minute template), how the agendas will be created, types of meeting notes, to whom and when the notes are distributed, how the members communicate once the meeting is over, and all other details about managing meetings. In addition to operating efficiently, the team endlessly works to improve their efforts. They comprehend the importance of ongoing improvements and how this helps support the overall objectives of the organization. 
By continuous augmentation, the team can drive their performance to higher levels. Finally, process orientation is essential to a team’s success. This means having proficient planning techniques, problem-solving tools, agendas, periodic meetings, as well as ways to improve their own methods of organization and efficient production. Check out our previous articles: - Not Getting the Desired Traffic? Time to Change your Web Design! - Big Ideas Make Big Things Happen – SEO Analysis - Doing the Internal Linking the Right Way - Google’s Search Results Requested To Be Removed - Apple Safari 5 – What’s in it for a Windows user? We hope you enjoyed this article! Please don’t forget to subscribe to our RSS-feed or follow Inspirationfeed on Twitter, Google+, and Facebook! If you enjoyed the following article we humbly ask you to comment, and help us spread the word!
<urn:uuid:249cc9ff-1d3c-4d4d-91c5-cf756d26932f>
CC-MAIN-2023-50
https://inspirationfeed.com/what-makes-a-successful-team/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.957876
864
3.1875
3
Each week, we will choose a new story to read to children aged 0 to 6. Children’s stories are a source of discovery of the world and an open door to the imagination. Story time is a privileged moment to nourish and stimulate a child’s imagination, to engage their creativity, to let them express their dreams and fears, to answer riddles, and to make connections with their daily life. It is also a great opportunity to learn to express yourself, gain self-confidence, enrich your vocabulary, develop a sense of a story’s logic, and acquire notions of history by traveling in time and space. One thing is certain: fairy tales help children to grow.
<urn:uuid:899e8733-2a7f-4899-a223-663cc5bd226b>
CC-MAIN-2023-50
https://institutguylacombe.ca/programs-from-0-to-6-years-old/story-time/?lang=en
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.962674
134
2.921875
3
INSIGHT by the European Commission Today, the EU and Norway have established a Green Alliance to strengthen their joint climate action, environmental protection efforts, and cooperation on the clean energy and industrial transition. The agreement was signed in Brussels by the President of the European Commission, Ursula von der Leyen, and Norway’s Prime Minister, Jonas Gahr Støre. “Norway is a long-standing and reliable partner to the EU and we share a common vision for building a climate-neutral continent. We want our societies and economies to prosper together while reducing emissions, protecting nature, decarbonising our energy systems, and greening our industries. This Green Alliance makes our bond even stronger and allows us to design a better future together.” -President von der Leyen Both sides reiterate their commitment to their respective 2030 targets of at least 55% greenhouse gas emission reductions compared to 1990, and to achieving climate neutrality at the latest by 2050. They aim to keep global temperature rise within the 1.5C limit under the Paris Agreement while ensuring energy security, environmental protection and human rights. The EU and Norway will work closely together to ensure the successful implementation of the Paris Agreement and the historic biodiversity agreement reached at the UN Biodiversity conference COP15. The EU-Norway Green Alliance, prepared and negotiated under the auspices of Executive Vice-President for the European Green Deal Frans Timmermans, will focus on the following priority areas: 〉strengthening efforts to combat climate change including cooperation on climate adaptation, carbon pricing, carbon removals, and carbon capture, transport, utilisation and storage; 〉increasing cooperation on environmental issues with a focus on halting and reversing biodiversity loss, forest degradation and deforestation, promoting circular economy and addressing the full life cycle of plastics, the development of global standards for the management of chemicals and waste and sustainable ocean management; 〉supporting the green industrial transition and further enhancing political and industrial cooperation through strategic partnerships, such as a future Strategic Partnership on Sustainable Raw Materials and Batteries Value Chains; 〉accelerating the clean energy transition with a focus on hydrogen and offshore renewable energy. 〉decarbonising the transport sector across all modes of transport, with special regard to zero GHG emission and zero pollution shipping; 〉increasing regulatory and business cooperation to set global standards for the innovative environmental solutions required to accelerate the transition to circular and net-zero economies; 〉consolidating existing collaboration on research, education, and innovation in the areas of decarbonisation, renewable energy, and bioeconomy; 〉working together to promote sustainable finance and investments to set Europe on a pathway to an environmentally sustainable, climate-neutral and climate resilient economy. A Green Alliance is the most comprehensive form of bilateral engagement established under the European Green Deal, with both parties committing to climate neutrality and to aligning their domestic and international climate policies to pursue this goal. This is only the second agreement of its kind, following the EU-Japan Green Alliance signed in 2021. The EU and Norway also agree to jointly promote ambitious climate action on the global stage. 
To this end, the two parties, as leading major donors of climate finance, will cooperate to support developing countries and emerging economies in the process of implementation of their climate and environment policies. To help keep global temperature rise within the 1.5C limit, the agreement confirms that full respect for the precautionary principle is paramount in the Arctic region.
<urn:uuid:b6d166dd-f643-4e7c-916e-23da2f5b2eb6>
CC-MAIN-2023-50
https://investesg.eu/2023/04/24/new-eu-norway-green-alliance-for-joint-action-climate-environment-energy-and-clean-industry/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.906194
748
2.578125
3
Background and Identification The Apple IIGS is the fifth and most powerful of the Apple II computer family. It is a 16-bit personal computer produced by Apple Computer, Inc. beginning in September 1986. The Apple IIGS features the Macintosh design with resolution and color similar to the Commodore Amiga and Atari ST and remains compatible with earlier Apple II models. The “GS” in the name stands for “Graphics and Sound,” referring to its then-state-of-the-art audio and enhanced multimedia hardware. The Apple IIGS includes a 16-bit 65C816 microprocessor with direct access to megabytes of random-access memory (RAM), plus a mouse. This microprocessor was a dramatic departure from any previous Apple II computer. The Apple IIGS was the first computer produced by Apple to use a color graphical user interface and the Apple Desktop Bus interface for keyboards, mice, and other input devices (color was introduced on the Macintosh II six months after the release of the Apple IIGS). It is the first personal computer to have a wavetable synthesis chip using technology from Ensoniq. The IIGS includes either 256 KB or 1 MB of memory, which is expandable up to 8 MB. Apple ceased IIGS production in December 1992 as it increasingly focused on the Macintosh platform. The IIGS includes Apple’s multi-colored apple logo with the name “Apple IIGS” in the bottom left-hand corner of the computer’s front face. Manufacturer: Apple Computer, Inc. Release date: September 15, 1986 Introductory price: US$999 (equivalent to $2,330 in 2019), excluding monitor Discontinued: December 1992 Operating systems: - Apple ProDOS - Apple GS/OS CPU: 65C816 @ 2.8 MHz Memory: 256 kB or 1 MB (expandable up to 8 MB) Graphics: VGC 12-bpp palette, 320×200, 640×200 Sound: Ensoniq ES5503 DOC 8-bit wavetable synthesis sound chip, 32 channels of mono or 16 of stereo
<urn:uuid:a33e8f30-d8f4-4b3d-875e-a827ccd5079c>
CC-MAIN-2023-50
https://it.ifixit.com/Device/Apple_IIGS
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.893952
437
2.53125
3
Document Type : Applied Article PhD Student, Climatology, Faculty of Humanities, Islamic Azad University of Najafabad Associate Professor, Department of Meteorology, Faculty of Humanities, Islamic Azad University of Najafabad Associate Professor, Rangeland Research Department, Forests and Rangelands Research Institute, Agricultural Research, Education and Extension Organization, Tehran, Iran Changes in climate systems are among the most challenging environmental phenomena. These changes affect environmental characteristics such as rainfall, drought, high-quality waste movement, etc., and may disrupt their regular behavior. Drought is a recurring phenomenon in the climate system whose effects are not limited to arid and semi-arid regions. Several factors contribute to the occurrence of drought, and altering these factors to prevent drought is beyond human power. For this reason, drought conditions can occur in any region of the globe, in rich and poor, wet and dry, developed and developing countries alike. Because drought indicators are valid only for a single location and lack the spatial resolution needed to assess drought, and because of the complexity of the climate system, especially its year-to-year and decadal variability, it is necessary to identify the processes driving these changes and fluctuations. One of the most important factors affecting climate fluctuations from year to year is the influence of climate patterns and indices far from the region (teleconnections). Materials and methods In this study, the average monthly temperature data of the Borujen, Lordegan, Shahrekord, and Koohrang stations were used. Teleconnection pattern data were obtained from NASA; 26 teleconnection indices were used. The results were evaluated seasonally for the years 1397 to 1399 using the Mann-Kendall test. Afterward, the relationship between temperature and drought was examined using the SPEI index. To evaluate the trend of change in mean temperature, the statistical quality and homogeneity of the data from the Borujen, Lordegan, Shahrekord, and Koohrang stations were first evaluated using the run test. Then, drought was characterized using the SPEI drought index once the data series had been confirmed to be homogeneous. The normality of the mean temperature data was then investigated using the Kolmogorov-Smirnov test; where the test was significant, i.e., p less than 0.05, the distribution was considered not normal. The Mann-Kendall test was used to evaluate the significance of the trend, and 95% and 99% confidence intervals were examined. Discussion and Results The descriptive statistics show that the highest average temperatures at the Borujen, Shahrekord, Koohrang, and Lordegan stations occur in July, at 22.74, 23.38, 22.21, and 27.80 °C, respectively, and the lowest average temperatures occur in January, at -0.99, -1.20, -3.85, and 3.76 °C, respectively. The annual averages at the Borujen, Shahrekord, Koohrang, and Lordegan stations are 11.29, 11.48, 9.90, and 15.79 °C, respectively.
The results showed that, in Chaharmahal and Bakhtiari province, the delineation of drought zones based on teleconnection patterns and their relationship with the drought index reflects the passage of precipitation systems over the region. Although rainfall is not uniform across the province, the absence of these distant patterns and systems means a lack of rainfall and drought across the whole province: drought occurs not only in the rainy areas of the province but also in the low-rainfall areas. In the spring observations, it was found that, based on the climatic scenarios, there are no changes in the number of events or even in drought classes compared to the base period. However, in the middle period, drought events and drought classes shift toward moderate to severe drought relative to the North Atlantic and Arctic teleconnection patterns for all stations. The fluctuations of drought and wet periods in Chaharmahal and Bakhtiari province differ from those of other areas, and the relationship between droughts in this area and the negative phase of the pattern has led to drought here. The aim of this study was to identify and zone drought in Chaharmahal and Bakhtiari province with the help of the SPEI drought index, and then to analyze the relationship between each zone and atmospheric-oceanic teleconnection patterns. The results showed that Chaharmahal and Bakhtiari province divides into four distinct zones in terms of the severity of the drought index (the southeastern, northwestern, northern, and southern zones), reflecting the effect of precipitation systems and their passage over the province. Drought and wet periods are seen consecutively in each of the areas (Shahr-e Kurd, Borujen, Lordegan, and Koohrang). The most severe droughts are related to area four (Koohrang). Among the teleconnection patterns, the Western Hemisphere warm pool pattern has the greatest impact on the occurrence of drought in the southwestern regions of the province, and the relationship between this index and drought there is positive. For drought in the Borujen area, most of the teleconnection patterns, including the Atlantic index and the Pacific and North Atlantic decadal oscillation patterns in autumn, are significant. Drought in the southern part of the province (Lordegan) in the warm season (spring and summer) shows a significant relationship with the South Atlantic tropical pattern, the North Atlantic tropical index, and the East Atlantic pattern. Droughts in the northwest (Shahrekord) show a significant relationship with the multivariate ENSO index and the North Atlantic and East Atlantic patterns.
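The methods described above rely on the Mann-Kendall test to judge whether a temperature series shows a significant trend. As a rough illustration of that single step (not the authors' actual code or data), the sketch below computes the Mann-Kendall S statistic and a normal-approximation p-value for a short, invented series; the function name and the sample values are hypothetical.

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test (no tie correction): returns S, Z and a two-sided p-value."""
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)          # sign of each pairwise difference
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance of S under the null hypothesis
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return s, z, p

# Hypothetical seasonal mean temperatures (degrees C) for one station
temps = [11.1, 11.4, 11.2, 11.8, 11.9, 12.1, 11.7, 12.3, 12.4, 12.6]
s, z, p = mann_kendall(temps)
print(f"S={s}, Z={z:.2f}, significant at the 95% level: {p < 0.05}")
```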
<urn:uuid:0b1efce6-9b74-4bc0-951b-46a7bf00f336>
CC-MAIN-2023-50
https://jhsci.ut.ac.ir/article_84401.html?lang=en
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.943564
1,302
2.640625
3
HoME is a version of Smalltalk which can be efficiently executed on a multiprocessor and can be executed in parallel by combining a Smalltalk process with a Mach thread and executing the process on the thread. HoME is nearly the same as ordinary Smalltalk except that multiple processes may execute in parallel. Thus, almost all applications running on ordinary Smalltalk can be executed on HoME without changes in their code. HoME was designed and implemented based on the following fundamental policies: (1) theoretically, an infinite number of processes can become active; (2) the moment a process is scheduled, it becomes active; (3) no process switching occurs; (4) HoME is equivalent to ordinary Smalltalk except for the previous three policies. The performance of the current implementation of HoME running on OMRON LUNA-88K, which had four processors, was measured by benchmarks which execute in parallel with multiple processes. In all benchmarks, the results showed that HoME's performance is much better than HPS on the same workstation.
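HoME's policies are easier to picture with a small analogy: every scheduled process immediately gets its own operating-system thread, and there is no switching between processes. The Python sketch below illustrates only that one-process-per-thread idea under stated assumptions; it is not HoME's Smalltalk/Mach implementation, and all class and function names are invented.

```python
import threading

class Process:
    """Toy stand-in for a Smalltalk process: scheduling it binds it to its own OS thread."""
    def __init__(self, name, work):
        self.name = name
        self._work = work
        self._thread = None

    def schedule(self):
        # Policy (2): the moment a process is scheduled, it becomes active on a dedicated thread.
        # Policy (3): there is no switching between processes; each thread runs to completion.
        self._thread = threading.Thread(target=self._work, name=self.name)
        self._thread.start()

    def join(self):
        self._thread.join()

def make_work(label):
    def work():
        total = sum(i * i for i in range(100_000))
        print(f"{label} finished: {total}")
    return work

# Policy (1): in principle, any number of processes may be active at once.
procs = [Process(f"proc-{i}", make_work(f"proc-{i}")) for i in range(4)]
for p in procs:
    p.schedule()
for p in procs:
    p.join()
```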
<urn:uuid:b3247102-7495-4ea2-8cd6-8ead61dd16d7>
CC-MAIN-2023-50
https://keio.elsevierpure.com/en/publications/the-design-and-implementation-of-home
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.960586
227
2.640625
3
In a study of patients tested for COVID-19, researchers at the University of Chicago Medicine found an association between vitamin D deficiency and the likelihood of becoming infected with the coronavirus. “Vitamin D is important to the function of the immune system and vitamin D supplements have previously been shown to lower the risk of viral respiratory tract infections,” said David Meltzer, Chief of Hospital Medicine at UChicago Medicine and lead author of the study. “Our statistical analysis suggests this may be true for the COVID-19 infection.” The research team looked at 489 patients at UChicago Medicine whose vitamin D level had been measured within a year before being tested for COVID-19. Patients who had vitamin D deficiency (defined as less than 20 nanograms per milliliter of blood) that was not treated were almost twice as likely to test positive for COVID-19 compared to patients who had sufficient levels of the vitamin. It’s important to note that the study only found the two conditions were frequently seen together; it does not prove causation. Meltzer and colleagues are currently planning further clinical trials. Half of Americans are thought to be deficient in vitamin D, with much higher rates seen in African Americans, Hispanics and individuals living in areas like Chicago where it is difficult to get enough sun exposure in winter. (However, research has also shown that some kinds of vitamin D tests don’t detect the form of vitamin D that is present in a majority of African Americans—which means those tests might falsely diagnose vitamin D deficiencies in those individuals. This particular study accepted either kind of test as criteria.) COVID-19 is also more prevalent among African American individuals, older adults, nursing home residents and health care workers—populations who all have increased risk of vitamin D deficiency. “Understanding whether treating vitamin D deficiency changes COVID-19 risk could be of great importance locally, nationally and globally,” said Meltzer, the Fanny L. Pritzker Professor of Medicine. “Vitamin D is inexpensive, generally very safe to take, and can be widely scaled.” Meltzer and his team emphasize the importance of experimental studies to determine whether vitamin D supplementation can reduce the risk, and potentially severity, of COVID-19. They also highlight the need for studies of what strategies for vitamin D supplementation may be most appropriate in specific populations. They have initiated several clinical trials at UChicago Medicine and with partners locally. Written by Gretchen Rubin.
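To make the “almost twice as likely” comparison concrete, the snippet below computes a simple relative risk from a 2×2 summary of test results by vitamin D status. The counts are invented for illustration only and are not the study's data.

```python
def relative_risk(pos_deficient, n_deficient, pos_sufficient, n_sufficient):
    """Relative risk = infection rate among deficient patients / rate among sufficient patients."""
    risk_deficient = pos_deficient / n_deficient
    risk_sufficient = pos_sufficient / n_sufficient
    return risk_deficient / risk_sufficient

# Hypothetical counts, not the study's actual numbers
rr = relative_risk(pos_deficient=30, n_deficient=160,     # deficient: 30 of 160 tested positive
                   pos_sufficient=31, n_sufficient=320)   # sufficient: 31 of 320 tested positive
print(f"Relative risk of a positive test: {rr:.2f}")      # about 1.94, i.e. "almost twice as likely"
```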
Climate change poses an enormous threat to the exercise and enjoyment of human rights in Latin America. Climate impacts spell out a serious list of structural threats that cause harm to people’s lives and rights. Climate change is rooted in inequality, colonialism, and the improper use of natural resources. The economic model imposed on our societies has generated cross-border environmental damage through climate impacts. The large emitters in the North have put our most vulnerable communities at the forefront of climate change impacts. These communities are being forced to suffer loss and damage disproportionately. The socioeconomic and environmental context of the Global South has been shaped by colonialism and extractive capitalism, which historically has diminished people’s welfare. While communities in the Global North still enjoy the benefits of their carbon-intensive economies, in the Global South, the bill for loss and damage is quietly rising at the expense of those living in vulnerable conditions. In Latin America, the issue of loss and damage is imperative as there have been manifestations of climate change for years. How to approach the issue, however, still presents a challenge for the region. Latin American countries lack the tools or do not address the urgency of documenting, analyzing, and reporting the seriousness of the loss and damage experienced in the region. Despite the threatening context for sustainable development and social welfare, the Nationally Determined Contributions (NDCs) for Latin America include little information on loss and damage. This lack of monitoring, analysis, and reporting presents the greatest challenge to a fair, appropriate, and human rights-based response to loss and damage. This analysis contrasts the information available on climate impacts in the region with NDC data from several Latin American countries and establishes some key recommendations to improve the approach.
Individual Retirement Account A means by which an individual can receive certain federal tax advantages while investing for retirement. The federal government has several reasons for encouraging individuals to save money for their retirement. For one, the average life span of a U.S. citizen continues to increase. Assuming that the average age of retirement does not change, workers who retire face more years of retirement and more years to live without a wage or salary. Uncertainty over the future of the federal SOCIAL SECURITY system is another reason. U.S. workers generally contribute deductions from their paychecks to the Social Security fund. In theory, this money will come back to them, usually upon their retirement. But a substantial number of politicians, economists, and scholars contend that the Social Security fund is being drained faster than it is being filled, and that it will go broke in a number of years, leaving retirees to survive without government assistance. Regardless of its future, many people consider the retirement benefits of Social Security to be inadequate, and they look for other methods of funding their retirement years. Many employers offer retirement plans. These plans vary in form but generally offer retirement funds that grow with continued employment. Yet this benefit is not always available to workers. A changing economy has caused some employers to cut back on retirement plans or to cut them out completely. Often, part-time, new, or temporary workers do not qualify for an employer's retirement plan. And individuals who are self-employed may not choose this job benefit. To help people prepare for their retirement, Congress in 1974 established individual retirement accounts (IRAs) (EMPLOYEE RETIREMENT INCOME SECURITY ACT [ERISA] [codified in scattered sections of 5, 18, 26, and 29 U.S.C.A.]). These accounts may take a variety of forms, such as savings accounts at a bank, certificates of deposit, or mutual funds of stocks. Initially, IRAs were available only to people who were not participating in an employer-provided retirement plan. This changed in 1981, when Congress expanded the IRA provisions to include anyone, regardless of participation in an employer's retirement plan (Economic Recovery Tax Act [ERTA] [codified in scattered sections of 26, 42, and 45 U.S.C.A.]). The goal of ERTA was to promote an increased level of personal retire ment savings through uniform discretionary savings arrangements. A movement to bolster the FEDERAL BUDGET by eliminating many existing tax shelters prompted portions of the TAX REFORM ACT OF 1986 (codified in scattered sections of 19, 25, 26, 28, 29, 42, 46, and 49 U.S.C.A.) and another change in IRA laws. This time, Congress limited some of the IRA's tax advantages, making them unavailable to workers who participate in an employer's retirement plan or whose earnings meet or exceed a certain threshold. Yet, other tax advantages remain, and the laws still allow any one to contribute to an IRA, making it a popular investment tool. It is difficult to understand the advantages that an IRA offers without understanding a few basics about federal INCOME TAX law. Generally, a person calculating the amount of tax that he or she owes to the government first determines the amount of income received in the year. This is normally employment income. Tax laws allow the individual to deduct from this figure amounts paid for certain items, such as charitable contributions or interest on a mortgage. 
Some taxpayers choose to take a single standard deduction rather than numerous itemized deductions. In either case, the taxpayer subtracts any allowable deductions from yearly income and then calculates the tax owed on the remainder. Taking deductions is only one of the ways in which a taxpayer may reduce taxes by investing in an IRA. But IRAs have proven to be popular with taxpayers. This popularity has prompted expansion of the federal tax rules to encourage additional savings and investment through IRAs. In 2003 there were 11 types of IRAs:
- Individual Retirement Account
- Individual Retirement Annuity
- Employer and Employee Association Trust Account
- Simplified Employee Pension (SEP-IRA)
- Savings Incentive Match Plan for Employees IRA (SIMPLE IRA)
- Spousal IRA
- Rollover IRA (Conduit IRA)
- Inherited IRA
- Education IRA
- Traditional IRA
- Roth IRA
Despite the many variations, the two most important remain the traditional IRA and the Roth IRA. In traditional IRAs, a single filer may deduct IRA contributions as long as his or her income is less than $95,000 (to qualify for a full contribution) or $95,000-$110,000 to qualify for a partial contribution. Joint filers may deduct IRA contributions as long as their adjusted gross income is less than $150,000 (to qualify for a full contribution). If their adjusted gross income is between $150,000 and $160,000, they may qualify for a partial contribution. IRA contribution limits increased in 2002 and will increase over the next few years. For individual taxpayers, contributions are limited to $3,000 for tax years 2003 and 2004. In tax years 2005 through 2007, contributions are capped at $4,000. They are eventually capped at $5,000 for individual taxpayers in 2008 through 2010. Various plans may constitute employer-maintained retirement plans, such as standard pension plans, profit-sharing or stock-bonus plans, annuities, and government retirement plans. Someone who does not participate in such a plan—whether by choice or not—is entitled to contribute to an IRA up to $3,000 a year or 100 percent of her or his annual income, whichever is less. The amount contributed during the taxable year may then be taken as a deduction. A married taxpayer who files a joint tax return with a spouse who does not work may deduct contributions toward what is called a spousal IRA, or an IRA established for the spouse's benefit. If neither spouse is a participant in an employer-provided retirement plan, up to $4,000 may be deductible. Taxpayers who contribute to Traditional IRAs usually realize tax benefits even when the law does not permit them to take deductions. That is because income earned on Traditional IRA contributions is not taxed until the funds are distributed, which usually occurs at retirement. Income that is allowed to grow, untaxed, for several years, grows faster than income that is taxed each year. To avoid abuses and excessive tax shelters, Congress has placed limits on the extent to which IRAs can be used as a financial tool. Individuals with IRAs may currently make contributions limited to $3,000 a year; contributions exceeding that amount are subject to strict financial penalties by the INTERNAL REVENUE SERVICE each year until the excess is corrected. The owner of an IRA generally may not withdraw funds from that account until age 59½. Premature distributions are subject to a ten percent penalty in addition to regular income tax.
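To make the figures above concrete, here is a small, simplified calculator that applies the single-filer deduction phase-out and the $3,000 annual cap exactly as stated in this entry for tax year 2003, together with the early-withdrawal cost just described. It is an illustration only, not tax advice, and it ignores the many special cases the entry goes on to discuss.

```python
def deductible_contribution(agi, contribution, *,
                            limit=3000, phase_start=95_000, phase_end=110_000):
    """Deductible amount for a single filer under the rules described above:
    full deduction below $95,000, phased out linearly up to $110,000."""
    contribution = min(contribution, limit)          # annual cap
    if agi < phase_start:
        fraction = 1.0
    elif agi >= phase_end:
        fraction = 0.0
    else:
        fraction = (phase_end - agi) / (phase_end - phase_start)
    return round(contribution * fraction, 2)

def early_withdrawal_cost(amount, marginal_rate):
    """Regular income tax plus the 10 percent penalty on a distribution
    taken before age 59 1/2."""
    return amount * marginal_rate + amount * 0.10

if __name__ == "__main__":
    print(deductible_contribution(80_000, 3000))    # 3000.0  (full deduction)
    print(deductible_contribution(102_500, 3000))   # 1500.0  (halfway through phase-out)
    print(deductible_contribution(120_000, 3000))   # 0.0     (no deduction)
    print(early_withdrawal_cost(5000, 0.25))        # 1750.0  (tax plus penalty)
```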
Taxpayers may be able to avoid this premature distribution penalty by "rolling over," or transferring, the distribution amount to another IRA within 60 days. An individual may elect not to withdraw IRA funds at age 59½. However, the law requires IRA owners to withdraw IRA money at age 70½, either in a lump sum or in periodic (at least annual) payments based on a life-expectancy calculation. Failure to comply with this rule can result in a 50 percent penalty on the amount of the required minimum distribution. Contributions to an IRA must stop at age 70½. In 1997, Congress provided for a new type of IRA—the Roth IRA, named for former Senator William V. Roth, Jr. The Roth IRA was part of the Taxpayer Relief Act of 1997, Pub.L. No. 105-34, 111 Stat. 788 (codified as amended in scattered sections of 26 U.S.C.). Contributions to a Roth IRA are not deductible from gross income. Instead, Roth IRAs provide a benefit that is unique among retirement savings schemes: If a taxpayer meets certain requirements, all earnings from the IRA are tax-free when the taxpayer or his or her beneficiary withdraws them. There are other benefits as well, such as no early distribution penalty on certain withdrawals, and no need to take minimum distributions after age 70½. The chief advantage of the Roth IRA is the ability to have investment earnings escape taxation. However, taxpayers may not claim a deduction when they contribute to Roth IRAs. Whether it is more advantageous to use Roth IRAs or traditional IRAs depends on each taxpayer's personal situation. It also depends on what assumptions the taxpayer makes about the future, such as future tax rates and the taxpayer's earnings in the interim. One may open a Roth IRA if he or she is eligible for a regular contribution to a Roth IRA or a rollover or conversion to a Roth IRA. A taxpayer is eligible to make a regular contribution to a Roth IRA even if he or she participates in a retirement plan maintained by his or her employer. These contributions may be as much as $3,000 ($3,500 if 50 or older by the end of the year). There are just two requirements: the taxpayer or taxpayer's spouse must have compensation or ALIMONY income equal to the amount contributed; and the taxpayer's modified adjusted gross income may not exceed certain limits. These limits are the same as in traditional IRAs: $95,000 for single individuals and $150,000 for married individuals filing joint returns. The amount that a taxpayer may contribute is reduced gradually and then completely eliminated when the taxpayer's modified adjusted gross income exceeds $110,000 (single) or $160,000 (married filing jointly). A traditional IRA may be converted to a Roth IRA if modified adjusted gross income is $100,000 or less, and if the taxpayer is either single or files jointly with his or her spouse. Although taxpayers converting traditional IRAs to Roth IRAs must pay tax in the year of the conversion, the long-term savings often greatly outweigh the conversion tax.
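The entry notes that the traditional-versus-Roth choice turns on assumptions about future tax rates. Here is a minimal sketch of that comparison in which every rate and growth figure is assumed purely for illustration: the traditional contribution is made pre-tax and taxed at withdrawal, while the Roth contribution is made from the same pre-tax dollars, taxed now, and withdrawn tax-free.

```python
def traditional_after_tax(pretax_amount, years, annual_return, retirement_rate):
    """Deductible contribution grows tax-deferred, then is taxed at withdrawal."""
    return pretax_amount * (1 + annual_return) ** years * (1 - retirement_rate)

def roth_after_tax(pretax_amount, years, annual_return, current_rate):
    """Same pre-tax dollars: pay tax now, then the balance grows and is
    withdrawn tax-free."""
    return pretax_amount * (1 - current_rate) * (1 + annual_return) ** years

if __name__ == "__main__":
    # Assumed figures, chosen only to show the mechanics.
    amount, years, growth = 3000, 30, 0.06
    print(f"Traditional: ${traditional_after_tax(amount, years, growth, 0.25):,.0f}")
    print(f"Roth:        ${roth_after_tax(amount, years, growth, 0.25):,.0f}")
    # With equal tax rates the two come out the same; a lower expected rate in
    # retirement favors the traditional IRA, a higher one favors the Roth.
```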
Do you need help with NCEA Level 1 Science, Level 2 Science, or Level 3 Science? This article outlines several strategies for improving your performance in NCEA Science. It includes resources, tips, and techniques recommended by experienced teachers, for use students in New Zealand schools. In this article: - NCEA science overview - Why get a good grade - More about unit standards and achievement standards - Top tips to achieve in NCEA science What is NCEA science? Year 11, Year 12, and Year 13 high school students can take NCEA Science. There are a number of different NCEA Science classes and courses available, covering different parts of the curriculum. Students will sit achievement standards, earning credits though internal assessments and end of year exams, which will go towards National Qualifications. NCEA stands for the National Certificate in Educational Achievement and is the main national qualification for secondary school students in New Zealand. Employers recognise NCEA; universities and polytechnics use it as a selection tool. You will choose your syllabus each year from a range of courses and subjects offered by your school. Students are assessed against a numbers of standards in each subject. For example, an NCEA Level 1 science standard is “Investigate implications of heat for everyday life”. An NCEA Level 3 science standard is “Demonstrate understanding of processes in the atmosphere system”. Why is a good grade in science important? Science has applications to so much of everyday life – for example, science offers an understanding of how to produce food, how to work out its energy consumption and how this translates to your energy output, and how it gets from the ground to your plate at night. Success in science is not just about knowing how to carry out general science and experiments. Getting a good grade in your science classes is important in its own right. Good grades are helpful for getting into your most desired course of future study. They are also useful for landing that scholarship or apprenticeship, and for having independent proof of your skillset without needing to constantly demonstrate it. Before we jump into the tips and techniques for tackling NCEA Science, you might have some bigger questions about NCEA. What is an NCEA unit standard or achievement standard? There are two types of assessment standards in NCEA: unit standards and achievement standards. Assessments can receive a grade of Not Achieved, Achieved, Merit, or Excellence, and follow the New Zealand curriculum. Teachers use a range of tests or projects to form the internal assessment for some standards. Other standards are assessed externally at the end of the year by NZQA (the New Zealand Qualifications Authority) either by exam or portfolio. Each standard is worth a number of credits, which the student earns by achieving the standard. If the student performs particularly well, they will receive their credits at a Merit or Excellence level. If they achieve consistently high marks across several standards, then they can receive a Merit or Excellent endorsement for the subject or course as a whole. NCEA Scholarship Exams If you do well in science, you may wish to sit New Zealand Scholarship examinations at the end of your NCEA Level 3 year. The subject requirements are the same as Level 3, although assessed to a much higher standard. Successful candidates receive a financial award. With that out of the way, let’s discuss some strategies for fulfilling your potential! 
Collect evidence for an NCEA Science experiment Technology is a daily part of our lives – we use it to connect with friends, watch YouTube clips or find funny memes relating to the implementation of technology in the classroom. Technology also offers a vast range of resources to help with research and recording findings. Allowing students to be innovative in collecting evidence has taken the pressure off them to only excel in one area, creating more opportunities for all students to do well. “Successful student evidence resulted where teachers showed awareness of the need to provide a range of opportunities to demonstrate their understanding”.https://www.nzqa.govt.nz/ncea/subjects/science/nmr/ Students can demonstrate evidence of science work in many ways, including blogs and video clips. As teachers work with students to present succinct evidence, the quality of work improves. Collecting evidence is particularly relevant when on field trips – to aid in the recollection of findings and the ability to link to clear examples. For Excellence, the marking criteria requests you to demonstrate an understanding of results that incorporate evidence. This separates you out from an achieved mark (given for offering a simple description). Remember that collecting only relevant evidence that directly addresses standard criteria is the goal over collecting large quantities of evidence. Have a look at science exemplars from the NZQA website which showcase past student work, providing commentary that matches unit standard criteria. This gives an understanding of how a grade can move from Achieved to Merit and Merit to Excellence. Here is a section from a Level 1 Science Exemplar (Copyright: NZQA): Focus on quality Further to the above, collecting evidence aids in increasing report quality. Scientific quality is important for ethical considerations, and for maintaining integrity of research. A focus on quality over quantity gives confidence in results which if applied to a real-world scenario is crucial for getting products to market. For example, drug companies need to prove consistency of their products through a series of trials – giving customers confidence in their use. Being able to filter out information or evidence that will not add to the quality of your work is an essential skill to learn. It will clarify your research, keep you on track and assist in the learning experience by diluting unnecessary information. In saying this, experiments need to be tested and proved. Ensuring there is enough valid data to provide a quality scientific response is vital. You need to be able to demonstrate an understanding of the evidence at the curriculum level you are being graded on to reach the Excellence criteria. If drawing diagrams makes it easier to explain or demonstrate a reaction, then do this, but make sure you annotate them so the examiner can follow your chain of thought to ensure your workings are eligible to be marked. Your focus on quality also relates to the information you retain. Learning terminology all in one go results in confusion between concepts (such as: meiosis and fertilisation). Instead, break the concepts down into manageable bite size chunks. “Focus on understanding the concepts of genetics and then learning the words to describe them better”.https://studytime.co.nz/resources/genetics-strategy-guide-ncea-level-1-science/ Learn from the feedback given to you throughout the year by your teacher. 
When you get a question wrong, take time to understand where you made mistakes and where you lost marks. This focus on correcting errors (rather than doing as many practice tests as possible) is where you will find your efforts paying off when it comes to the final examination. Be an active participant It may seem obvious, but many students are not aware of how to be an active class participant. The New Zealand schooling system is based on an interactive approach to the learner’s responsiveness. Therefore to get the most out of your classes you need to be taking notes, listening to the teacher, asking questions and engaging in discussions. The first step you can take is to sit near the front of the class so you can easily hear and see the teacher and avoid distractions. This will assist you in taking quality notes that you can review after class. You do not need to write down everything the teacher says. During class, develop a note taking system that works for you that you can add to later in your own revision time. Try to associate your learning with a real-world example to help you visualise material and retain more information. Forming a study group with fellow students to discuss class material will also help you to remember concepts. Ask questions during class to demonstrate your willingness to learn. You will grow in your understanding of scientific concepts, and your teacher will see your areas of weakness. They will then be better placed to offer you guidance if they notice you are struggling. Work through the study material before looking at the sample problems and solutions. This way you have a holistic understanding of the material and will recall it more readily, rather than using an example to guide you (which won’t be there during the examination). “One of the biggest causes of low performance on science tests is the habit of using the examples to do the homework. You must do the problem yourself; do not let the example do it for you!”https://www.sciencemag.org/careers/2002/02/effective-study-strategies-will-help-you-ace-your-science-courses Remember that your teacher wants you to do well so do not be afraid to approach them to request extra guidance after class hours. This shows that you are eager to succeed and may assist when it comes time to receive additional tutoring for scholarship examinations. Relate your NCEA Science study to the real world Many science concepts relate to the real world. Chemical reactions, graphs, energy, or genetics all have direct applications to the real world. An understanding of how science is being used in these situations will assist you in describing the event or problem, and you can check that your answer makes sense based on what you already know. “The beauty comes in its connections” Relating your study to a real-world context fosters neural pathways that will be triggered during recall in examinations to help you answer the questions. Understanding topics as cohesive units makes them appear less complex. Linking concepts and terms to ideas you already understand and relate to helps with memory recall. Relate your genetics study to what you already know from your own family traits. Use this knowledge when drawing a punnet square as it will help you determine the dominant and recessive genes. For example you may have brown eyes like your mother while your dad has blue eyes which indicate that the blue eye gene is the recessive gene. 
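The eye-colour example above can be worked through programmatically. The sketch below builds a Punnett square for a hypothetical single-gene cross in which the brown-eyed parent is assumed to carry one recessive allele (Bb) and the blue-eyed parent is bb; real eye colour involves several genes, so this is only the simplified model used in Level 1 genetics.

```python
from itertools import product
from collections import Counter

def punnett(parent1, parent2):
    """Return offspring genotype ratios for a one-gene cross, e.g. 'Bb' x 'bb'."""
    offspring = ["".join(sorted(pair))          # uppercase (dominant) allele sorts first
                 for pair in product(parent1, parent2)]
    counts = Counter(offspring)
    total = sum(counts.values())
    return {genotype: n / total for genotype, n in counts.items()}

if __name__ == "__main__":
    # Assumed genotypes: Bb (brown-eyed mother, carrier) x bb (blue-eyed father).
    for genotype, share in punnett("Bb", "bb").items():
        phenotype = "brown" if "B" in genotype else "blue"
        print(f"{genotype}: {share:.0%} ({phenotype} eyes)")
```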
“Many candidates could benefit by practicing creating scenarios where they can apply their learnt knowledge to unknown situation to achieve the uppermost grades”.https://www.nzqa.govt.nz/ncea/subjects/assessment-reports/science-l1/ Sometimes questions purposefully have excess information in them. Just like in the real world, use your problem solving skills to decipher what information is relevant and what is not. Understand chemical reactions You will be required to investigate chemical reactions. This is a process where a substance, either a chemical or a compound, is converted to produce a new substance through rearranging the atoms. You will need to grasp cencepts that are common in external examinations. These include rates of reaction (with ability to graph results) and why ions form and undergo certain changes. Examiners will be impressed if they can see that you can confidently explain what is going on. It’s a good idea to develop a diagram and test yourself on the concepts. Describe how substances are produced through demonstrating the workings involved in chemical equations. It is not enough to answer that carbon and oxygen produce carbon dioxide. You need to show how you know this through the equation. Balancing chemical equations is a skill that (if perfected) will gain an Excellence mark. Prepare using past exam papers, and you will see this skill is always required. Knowing the periodic table and symbols is crucial for these equations. NCEA Science revision We have mentioned above that you should take accurate notes during class – this is vital when it comes to revision. There are many methods of revision. It is important you find the solution that works most effectively for you to get the most out of it. - Utilise the study guide found on the NZQA website - Use suggestions from your teacher to form a study checklist - Organise your assessments and tasks using your checklist as a visual tool. This will mean you don’t miss deadlines or skip important information. Checklists provide motiviation. As we see each item ticked off we become more inclined to continue getting ticks, thereby potentially increasing productivity. Checklists are beneficial for marking off all areas of study in a topic. By breaking the topic of genetics down into components (DNA, genotype and phenotype, genetic variations etc.) you can make sure all areas of study are covered. For example you would use DNA as the heading and under that have your areas of knowledge. ‘I can define, describe and explain the structure of DNA’ ‘I can define and describe chromosomes’ ‘I can define meiosis and mitosis’Genetics Checklist – NCEA Level 1 Science Take a look at the Assessment Schedule and Marking Schedule for each Unit Standard to help you prepare for testing. The Science Formula relating to the Unit Standard will be provided in the test. Familiarity with the formula is necessary in order to carry out calculations, descriptions and graphical interpretations. NCEA Science practice exam questions Complete official examination papers from previous years. This is one of the best ways to test how you are performing in a subject. NZQA publishes old exam papers on their website, which are available from the Science resources page. By completing past papers you will become familiar with the style of questions that will appear in the external exams and get an idea of how the paper is organised. Then, when you arrive at your exam, you will have a good idea of what to expect. 
Another good reason to practice questions from exam papers ahead of time is to get a feel for the level of difficulty of the exam questions. This way you’ll know whether you are on track with your revision, or if you need to do a bit more work beforehand. Some people find it stressful to look at old exam papers and are shocked at how hard the questions look. If you are going to be shocked, it is far better to get that out of the way in the comfort of your own home – instead of waiting for the exam to start. Try to spread out your use of past exam papers, as there are only a few available. Leave most of them for the week or two before the exam. This way, you’re giving yourself time to do more revision, without using them all up before you’ve covered the necessary concepts. It is a great idea to work through at least one past exam paper in the same timeframe of your exam. That way you can practice the questions and answers, and get a feel for how quickly you’ll need to work. Remember to try and leave some time in the exam to take a second look at your work, if you can. Make your resources work for you – utilise textbooks, worksheets and online material to enhance your learning. At the start of your course, your teacher will recommend the textbook they want you to work through. It is also helpful to look at other textbooks to see whether their explanations make more sense to you. When you receive your textbook you may feel overwhelmed by its size, but textbooks are not designed to be read cover to cover. Use the contents page and index page to find the information you need. Flag these pages with post it notes or similar to come back to them. Don’t get weighed down in material in the textbook that is not needed for your level of study. If you are unsure, your teacher will guide you on what you will need to know. Other textbooks and study guides that are worth having a look at are those available from ESA Publications. ESA Publications is a long-established educational publisher in New Zealand. They produce Study Guides and Workbooks for primary and secondary schools covering the New Zealand Curriculum and NCEA. Find ESA’s Science resources here. ESA Digital offers access to ESA’s online learning resources including: - Biology Level 1, Biology Level 2, Biology Level 3 - Chemistry Level 1, Chemistry Level 2, Chemistry Level 3 - Earths Science Level 2, Earths Science Level 3 - Science Level 1 Many teachers now will upload their notes online. Some of these resources will be available only for students at that school, while some will be open to everyone. Search online and see what you find to add to your notes and examples. A number of tutoring and revision websites publish free NCEA science resources or tutorials for students. Some are NCEA tutorial videos, some are written texts, some are textbook ebooks – resources and help can come in a variety of forms. - Study Time: “StudyTime is an online platform dedicated to helping NZ kids make the most of high school.” The site contains checklists, tutorial videos, strategy guides, online tutoring options, and more. - NCEA on TKI: TKI is the online learning basket for the New Zealand Curriculum. The NCEA portion of TKI is mostly targeted at teachers, but students and whanau can also find some useful articles and information there. - No Brain Too Small is an online science site developed by four practicing New Zealand science teachers sharing their resources, from flash cards, to power points to revision notes. 
If you learn best through discussion (and student forums do not work for you) then look into hiring a science tutor. You do not need to be struggling to have a tutor work with you; tutors can extend your current knowledge to help you study for scholarship exams. They sometimes work with small groups as well as in person or online. Here are a few other online resources you may find beneficial: Use a digital platform like LearnWell Digital Digital platforms such as LearnWell Digital use gamification to make learning science more rewarding, allowing you to learn and revise NCEA science online. LearnWell Digital is a superb solution for schools. It is an online learning management system (LMS) that provides an easy-to-use way for teachers and schools to create content pages, quizzes, assessments and more. Not only that, but the LearnWell team has a range of resources available, which teachers can use as they are or can customise for their students’ needs. Teachers can track student progress through the programme. They can gain insights into how students are responding to the topic. It is therefore easy for teachers to identify any areas where further explanation or in-class discussion might be needed. Having quality NCEA resources right there at their fingertips means that teachers can get on with focussing on the students needs, instead of spending hours searching for good material. By having a combined in-class and online programme, schools are better able to provide a flexible work environment. Schools can then supply each child in the class with content that allows them to work at their level. Students can complete work in class or from home. Teachers and students can discuss questions and answers together both in and out of the classroom on an online discussion forum. If you would like to find out more about how LearnWell can help you, please contact us.
Sam Sees Snow
From the series My Reading Neighborhood: Kindergarten Sight Word Stories
It's snowing! What will Sam do outside? This simple story incorporates words from the Kindergarten-level Dolch Sight Word List to build literacy skills.
- Interest Level: Preschool - Grade 1
- Publisher: Lerner Publishing Group
- Number of Pages: 16
Lerner eSource™ offers free digital teaching and learning resources, including Common Core State Standards (CCSS) teaching guides. These guides, created by classroom teachers, offer short lessons and writing exercises that give students specific instruction and practice using Common Core skills and strategies. Lerner eSource also provides additional resources including online activities, downloadable/printable graphic organizers, and additional educational materials that would also support Common Core instruction. Download, share, pin, print, and save as many of these free resources as you like!
My Reading Neighborhood: Kindergarten Sight Word Stories: These supportive texts draw from the Dolch sight words for Kindergarten to help emergent readers practice their literacy skills.
The Soap Creek Valley is located within the traditional homelands of the Ampinefu Band (also known as the Mary's River Band) and the Luckiamute Band of Kalapuya. Since time immemorial, the Kalapuya have lived on and cared for the land that Euro-American settlers later claimed in the Willamette Valley beginning in the 1830s. Euro-Americans' desire for farmland led to individual and, later, government-sanctioned actions to dispossess the Kalapuya and other Indigenous Oregonians of their lands. Devastating diseases created conditions for an even more aggressive displacement of Native peoples from their lands throughout the Oregon Trail period (1840s-1850s), as Indigenous communities were unable to repel the increasing waves of white settlement. Following the Willamette Valley Treaty of 1855 (Kalapuya etc., Treaty), Kalapuya people were forcibly removed by federal law to reservations in Western Oregon. Today, their living descendants are a part of the Confederated Tribes of the Grand Ronde Community of Oregon and the Confederated Tribes of Siletz Indians.
‘This year we see yet another rise in global fossil CO2 emissions, when we need a rapid decline.’ Global carbon emissions in 2022 remain at record levels – with no sign of the decrease that is urgently needed to limit warming to 1.5°C, according to the Global Carbon Project science team. If current emissions levels persist, there is now a 50% chance that global warming of 1.5°C will be exceeded in nine years. The new report projects total global CO2 emissions of 40.6 billion tonnes (GtCO2) in 2022. This is fuelled by fossil CO2 emissions which are projected to rise 1.0% compared to 2021, reaching 36.6 GtCO2 – slightly above the 2019 pre-COVID-19 levels. Emissions from land-use change, such as deforestation, are projected to be 3.9 GtCO2 in 2022. Projected emissions from coal and oil are above their 2021 levels, with oil being the largest contributor to total emissions growth. The growth in oil emissions can be largely explained by the delayed rebound of international aviation following COVID-19 pandemic restrictions. The 2022 picture among major emitters is mixed: emissions are projected to fall in China (0.9%) and the EU (0.8%), and increase in the USA (1.5%) and India (6%), with a 1.7% rise in the rest of the world combined. The remaining carbon budget for a 50% likelihood to limit global warming to 1.5°C has reduced to 380 GtCO2 (exceeded after nine years if emissions remain at 2022 levels) and 1230 GtCO2 to limit to 2°C (30 years at 2022 emissions levels). To reach zero CO2 emissions by 2050 would now require a decrease of about 1.4 GtCO2 each year, comparable to the observed fall in 2020 emissions resulting from COVID-19 lockdowns, highlighting the scale of the action required. Land and ocean, which absorb and store carbon, continue to take up around half of the CO2 emissions. The ocean and land CO2 sinks are still increasing in response to the atmospheric CO2 increase, although climate change reduced this growth by an estimated 4% (ocean sink) and 17% (land sink) over the 2012-2021 decade. This year’s carbon budget shows that the long-term rate of increasing fossil emissions has slowed. The average rise peaked at +3% per year during the 2000s, while growth in the last decade has been about +0.5% per year. The research team – including the University of Exeter, the University of East Anglia (UEA), CICERO and Ludwig-Maximilian-University Munich – welcomed this slow-down, but said it was “far from the emissions decrease we need”. The findings come as world leaders meet at COP27 in Egypt to discuss the climate crisis. We must not allow world events to distract us from the urgent and sustained need to cut our emissions. “There are some positive signs, but leaders meeting at COP27 will have to take meaningful action if we are to have any chance of limiting global warming close to 1.5°C. The Global Carbon Budget numbers monitor the progress on climate action and right now we are not seeing the action required.” Professor Corinne Le Quéré, Royal Society Research Professor at UEA’s School of Environmental Sciences, said: “Our findings reveal turbulence in emissions patterns this year resulting from the pandemic and global energy crises. “If governments respond by turbo charging clean energy investments and planting, not cutting, trees, global emissions could rapidly start to fall. 
“We are at a turning point and must not allow world events to distract us from the urgent and sustained need to cut our emissions to stabilise the global climate and reduce cascading risks.” Land-use changes, especially deforestation, are a significant source of CO2 emissions (about a tenth of the amount from fossil emissions). Indonesia, Brazil and the Democratic Republic of the Congo contribute 58% of global land-use change emissions. Carbon removal via reforestation or new forests counterbalances half of the deforestation emissions, and the researchers say that stopping deforestation and increasing efforts to restore and expand forests constitutes a large opportunity to reduce emissions and increase removals in forests. The Global Carbon Budget report projects that atmospheric CO2 concentrations will reach an average of 417.2 parts per million in 2022, more than 50% above pre-industrial levels. The projection of 40.6 GtCO2 total emissions in 2022 is close to the 40.9 GtCO2 in 2019, which is the highest annual total ever. The Global Carbon Budget report, produced by an international team of more than 100 scientists, examines both carbon sources and sinks. It provides an annual, peer-reviewed update, building on established methodologies in a fully transparent manner. Brendan Montague is editor of The Ecologist. The full Global Carbon Budget report is available online.
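The headline figures in this report follow from simple arithmetic on the numbers quoted above, which the short script below reproduces: the remaining 1.5°C budget divided by 2022 emissions gives roughly nine years, the 2°C budget gives roughly thirty, and a straight-line path from 2022 emissions to zero in 2050 implies a cut of about 1.4 GtCO2 every year.

```python
emissions_2022 = 40.6          # GtCO2 per year (projected total for 2022)
budget_1p5 = 380.0             # GtCO2 remaining for a 50% chance of staying under 1.5 C
budget_2p0 = 1230.0            # GtCO2 remaining for 2 C
base_year, target_year = 2022, 2050

years_left_1p5 = budget_1p5 / emissions_2022
years_left_2p0 = budget_2p0 / emissions_2022
annual_cut_to_zero = emissions_2022 / (target_year - base_year)

print(f"1.5 C budget exhausted in ~{years_left_1p5:.1f} years at 2022 rates")
print(f"2 C budget exhausted in ~{years_left_2p0:.1f} years at 2022 rates")
print(f"Linear path to zero by 2050 needs a cut of ~{annual_cut_to_zero:.2f} GtCO2 per year")
```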
In recent years, a growing body of research has demonstrated profound health and wellness benefits from the therapeutic use of red and near-infrared light. But how exactly do these wavelengths affect the body? Here’s an in-depth look at the science behind infrared and red light therapy. Infrared and red light therapy belongs to a category of treatments known as photobiomodulation or low-level light therapy (LLLT). This involves delivering red or near-infrared light to the body through lasers or LED devices. Wavelengths in the 600-950nm range have specific properties that allow them to penetrate several centimeters below the skin. Cellular mitochondria in the tissue absorb these light photons and convert them into beneficial cellular changes. Photobiomodulation with red/near-infrared light produces a wide range of positive effects in the body through modulating numerous biological processes. But what does the research say about how it works? One of the core effects of photobiomodulation is the increasing production of adenosine triphosphate (ATP) inside cells. ATP is the prime energy source used by your cells. Red and infrared light absorbed by mitochondria boosts ATP synthesis. This was demonstrated in a 2012 study exposing cultured neurons to infrared light, resulting in elevated ATP levels. By increasing cellular energy, infrared/red light enhances tissue repair, reduces pain and inflammation, boosts metabolism, and optimizes overall cell and organ performance. Chronic inflammation is at the root of most diseases. Red and infrared light have scientifically demonstrated powerful anti-inflammatory effects. For example, a 2014 study showed how light therapy reduced inflammatory cytokines in muscles after strenuous exercise. Researchers in this study concluded light therapy “consistently demonstrates anti-inflammatory effects across a diverse range of injuries and diseases.” Light exposure reduces inflammation by lowering oxidative stress, stimulating healing factors, and modulating the immune response. This helps treat pain, autoimmune conditions, respiratory diseases, and more. Dozens of studies show red and infrared light can significantly accelerate the healing of skin, connective tissue, bones, nerves, and muscles. For example, a 2014 study found photobiomodulation cut exercise-induced muscle damage and post-workout soreness in half. Another study showed light therapy increased collagen production in the skin by 146% compared to controls. Light exposure speeds healing by reducing inflammation, stimulating fibroblasts and macrophages, increasing blood flow, and boosting tissue regeneration and growth factors. As the primary sites in cells for light absorption and ATP production, mitochondria play a central role in infrared/red light bioeffects. Research shows photobiomodulation not only increases ATP short term but also enhances mitochondrial function and biogenesis long term. This study demonstrated red light improved mitochondrial movement along neurons’ axons. Enhanced mitochondrial motility promotes better distribution of energy production. Another study found infrared light therapy protected mitochondria from antibiotic drug toxicity. Photobiomodulation improves the health and performance of the cellular powerhouses. Red light significantly enhances blood flow and circulation. Within minutes of red light exposure, vessels dilate as nitric oxide, growth factors, and other beneficial messengers are released. 
Better circulation provides more oxygen and nutrients for healing and optimal cell metabolism. Research shows red and infrared light influences the hypothalamic-pituitary axis and sympathetic nervous system. This modulates stress hormones like cortisol as well as reproductive hormones. Stem cells are key to repairing damaged tissues. Red light exposure triggers a cascade that activates stem cells. Photobiomodulation stimulates macrophages to secrete cytokines that induce mobilization and migration of stem cells to injury sites as shown in this study. Once activated, stem cells then differentiate into whatever cell types are needed to regenerate and repair tissue. While studies clearly demonstrate red and infrared light modulate numerous biological processes, there is still much to discover. Ongoing research continues to further refine optimal wavelengths, dosages, and treatment protocols for different applications. But the fundamental ability of photobiomodulation to improve cellular energy function remains proven and safe. Advancements in LED technology will lead to more affordable, accessible infrared/red light therapy devices for managing pain, optimizing health, and slowing aging. The future is bright for this revolutionary modality! Given these scientifically validated mechanisms of red and infrared light, it’s understandable how photobiomodulation therapy can benefit a diverse assortment of conditions: From elite athletes to the geriatric population, infrared and red light can enhance health and performance in people of all ages. Serious side effects are essentially non-existent. Red light therapy is the name given to a large range of therapies that use certain wavelengths of light to promote healing, improve skin tone, and enhance circulation. Red light therapy is said to be effective for pain management, acne treatment, and the healing of certain sports injuries, among other applications. The following is a true review from the user: “I have used it for several days, great for me. When I use it, l always wear the glasses, then let the red light irradiate on my face and body, normally, 10- 15 mins every session. My face and back have small acne now and I feel a difference in my skin, my inflammation has decreased. And I sleep better than before. My family doctor says red light will also do help for hair growth and pain relief, I will try it longer in the coming winter and will update my review afterward.” “I bought this Red Light Therapy Panel about a month ago, hoping to use it to alleviate my back pain. It came with the panel and all the needed accessories: a power cord, hanging cords and a goggle (shown in picture). To some, it is smaller than shown, but to me, the size is just about right. It would not be as easy to hang if it is too large. Putting it up was pretty easy (I put it up in the bathroom), but you do need to select a place where you can hang it to the right height (so you could shine the red light on your body area of choice), and is close to an electric outlet. Once turned on, the panel is quite bright (see picture). So, it is better to put on the goggle before turning it on. As I am new to this, I took it slow and followed the instructions closely. I stayed with my back about 10 inches away from the panel, kept the sessions for 10 minutes, and 2 times per week. So far, I feel it is working as well as expected. After each session, my back pain feels much alleviated, and over a few weeks time, it does feel progressively better. 
Hopeful this panel will make a real lasting difference in my back health.” Infrared and red light therapy is backed by thousands of clinical trials and over 50 years of scientific research. Studies have revealed photobiomodulation produces widespread benefits in the body by: From skin to muscles and brain, red and infrared light can optimize cellular performance. Daily use provides anti-aging effects and enhanced well-being. Talk to your doctor about adding infrared or red light therapy to your wellness regimen. Consistent home use can promote whole-body health. The supporting science is clear on the efficacy of harnessing red and near-infrared wavelengths for therapeutic benefit.
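Treatment "dose" in photobiomodulation is usually expressed as fluence, the irradiance reaching the skin multiplied by exposure time. Neither review above states the panel's irradiance, so the value in this sketch is a hypothetical placeholder; real irradiance depends on the device and on the distance from it.

```python
def fluence_j_per_cm2(irradiance_mw_per_cm2, minutes):
    """Fluence (J/cm^2) = irradiance (mW/cm^2) x time (s) / 1000."""
    return irradiance_mw_per_cm2 * minutes * 60 / 1000

if __name__ == "__main__":
    # Hypothetical irradiance at the treatment distance -- not a measured figure.
    irradiance = 30.0                       # mW/cm^2
    for session_minutes in (10, 15):        # session lengths mentioned in the reviews
        dose = fluence_j_per_cm2(irradiance, session_minutes)
        print(f"{session_minutes} min at {irradiance} mW/cm^2 -> {dose:.0f} J/cm^2")
```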
Conventions used in this article: - $ – execution on the command line by a non-privileged user - # – execution on the command line by a superuser - the actual command to be executed on the command line or code of program to be compiled - OUTPUT: output produced on the command line by command execution - NOTE: general notes and additional information In simple words a Computer Vision is a scientific field which attempts to provide a sight to the machine. This scientific field has expanded rapidly in recent years. Among researchers this growth is because of many improvements of vision algorithms and among the computer vision hobbyists this is due to the cheaper hardware components and processing power. OpenCV library plays a great role in the Computer Vision field as it helps greatly to reduce cost and preparation time of computer vision research environment needed by university students, hobbyists and professionals. OpenCV also provides a simple to use functions to get the work done in a simple, effective and elegant manner. OpenCV was started by Intel, and later it was transformed to an open source project now available on SourceForge.net. OpenCV library has multi-platform availability, and it is partially written in C++ and C language. Despite the fact that this library is available on many Linux distributions from its relevant package repositories, in this article we will attempt to install and use OpenCV library compiled from a source code downloaded from SourceForge.net web site. The reasons for compiling a source code may include: - new version 2.0.0 recently released and more features available - some bugs fixed which affected Linux OpenCV 1.0.0 versions ( such as cvGetCaptureProperty() etc. ) - more support is available for OpenCV 2.0.0 version than for former 1.0.0 version This article will start with installation of OpenCV on Debian 5.0 ( Lenny ). Later a reader will be guided through a number of examples on how to use OpenCV to display an image, play a video and use camera to capture the video input stream.
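As a preview of the kinds of calls the later examples rely on, here is a minimal sketch written against OpenCV's Python bindings (cv2) rather than the C API of the 2.0.0 release this article compiles; the functions map closely onto their C counterparts such as cvLoadImage and cvCaptureFromCAM. The image file name is a placeholder.

```python
import cv2

# Display an image (placeholder file name).
image = cv2.imread("example.jpg")
if image is not None:
    cv2.imshow("image", image)
    cv2.waitKey(0)

# Capture frames from the default camera and show them until a key is pressed.
capture = cv2.VideoCapture(0)
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    cv2.imshow("camera", frame)
    if cv2.waitKey(30) >= 0:        # ~30 ms per frame; any key stops the loop
        break

capture.release()
cv2.destroyAllWindows()
```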
Why Is Fintech Important? Published: October 29, 2023 Discover the significance of fintech in the world of finance and how it is revolutionizing the industry. Stay updated and understand the importance of embracing this technological advancement. (Many of the links in this article redirect to a specific reviewed product. Your purchase of these products through affiliate links helps to generate commission for LiveWell, at no extra cost. Learn more) Table of Contents In recent years, the world of finance has undergone a significant transformation due to the rise of financial technology, commonly known as Fintech. Fintech refers to the use of innovative technologies to provide financial services in a more efficient, accessible, and user-friendly manner. This includes mobile banking, digital payments, online lending platforms, robo-advisors, and blockchain technology. The impact of Fintech has been revolutionary, disrupting traditional financial institutions and changing the way individuals and businesses manage their finances. With the use of smartphones and the internet becoming increasingly prevalent, Fintech has gained immense popularity and acceptance worldwide, offering a range of benefits and opportunities. One of the primary reasons why Fintech has become so important is its significant economic impact. Fintech has not only revolutionized the financial services sector but also contributed to economic growth and development. The rapid adoption of Fintech solutions has led to the creation of new jobs, increased productivity, and stimulated innovation. Furthermore, Fintech has played a crucial role in increasing financial inclusion. By leveraging technology, Fintech has made financial services accessible to individuals who were previously excluded from the formal banking system. This includes the unbanked and underbanked population, particularly in developing countries. Through mobile banking and digital payment solutions, individuals can now easily and securely access and manage their finances, empowering them with economic opportunities. Fintech has also enhanced efficiency and reduced costs in the financial industry. Traditional financial institutions often have complex and time-consuming processes. However, Fintech has streamlined these processes through automation, artificial intelligence, and machine learning. This has resulted in faster transaction processing, improved customer experience, and reduced operational costs for both financial service providers and consumers. Economic Impact of Fintech The emergence of Fintech has had a profound impact on the global economy. It has not only transformed the financial sector but also contributed significantly to economic growth and development. Here are some key ways in which Fintech is shaping the economy: - Creation of new jobs: The growth of Fintech has led to the creation of a multitude of new job opportunities. Fintech companies require professionals with expertise in technology, data analysis, cybersecurity, and finance, resulting in increased employment in these fields. Moreover, the collaborative ecosystem of Fintech encourages entrepreneurship and the launch of innovative startups, further boosting job creation. - Innovation and productivity: Fintech fosters innovation by leveraging emerging technologies such as artificial intelligence, big data, and blockchain. This innovation drives productivity gains by automating routine tasks, reducing manual errors, and improving overall efficiency. 
For example, automated risk assessment algorithms enable faster and more accurate loan approvals, benefiting both borrowers and lenders. - Increased access to finance: Fintech has expanded access to financial services, particularly in underserved and unbanked populations. The traditional banking system often fails to reach individuals in remote areas or those with limited financial resources. However, Fintech solutions such as mobile banking and digital wallets offer easy, affordable, and secure ways to access and manage money, empowering individuals with financial inclusion and economic opportunities. - Support for small and medium enterprises (SMEs): Fintech has leveled the playing field for SMEs by providing them with easier access to financing and innovative financial tools. Online lending platforms and crowdfunding have made it easier for small businesses to secure loans and investments. Additionally, Fintech has enabled SMEs to efficiently manage tasks such as payroll, accounting, and inventory management, enhancing their competitiveness and growth potential. - Global economic integration: Fintech has facilitated cross-border transactions and expanded international trade by enabling faster, more secure, and cost-effective payment solutions. Blockchain technology, for instance, has the potential to revolutionize supply chain management, reducing paperwork, enhancing transparency, and minimizing fraud in global trade. This increased efficiency in international transactions benefits businesses and consumers alike. The economic impact of Fintech is not limited to individual sectors but extends to the overall economic landscape. Governments and regulatory bodies are increasingly recognizing the potential of Fintech to drive economic growth and are implementing supportive policies and regulations. This ensures a conducive environment for Fintech innovation and adoption, further fueling economic expansion. Increasing Financial Inclusion One of the key advantages of Fintech is its ability to promote financial inclusion by providing access to financial services to individuals who were previously excluded. In many parts of the world, a significant portion of the population remains unbanked or underbanked, meaning they have limited access to traditional banking services. Fintech bridges this gap by leveraging technology and innovative solutions. Here’s how Fintech is increasing financial inclusion: - Mobile banking: Fintech has made banking services accessible to individuals through their mobile phones. This is especially significant in developing countries where smartphones are becoming increasingly prevalent. With mobile banking applications, individuals can open bank accounts, deposit and withdraw money, make payments, and access other financial services with ease, even without a physical branch nearby. - Digital payments: Fintech has revolutionized how people make payments. Digital payment platforms, such as mobile wallets and payment apps, enable secure and convenient transactions without the need for physical cash or traditional banking infrastructure. This benefits individuals who may not have access to traditional banking services, allowing them to participate in the digital economy and make purchases online or in-person. - Online lending platforms: Traditional lending institutions often have strict eligibility criteria that exclude many individuals, particularly those without a credit history or collateral. 
Fintech has introduced online lending platforms that make it easier for individuals to access loans. These platforms use alternative data sources and algorithms to assess creditworthiness, enabling a wider range of borrowers to obtain loans at competitive rates. - Microfinance: Fintech has facilitated the growth of microfinance institutions and peer-to-peer lending platforms. These platforms connect lenders directly with borrowers, eliminating the need for intermediaries and reducing costs. Microfinance institutions leverage technology to provide small loans to individuals and small businesses, empowering them to start or expand their ventures and improve their livelihoods. - Rural and remote access: Fintech has the potential to reach individuals in rural and remote areas through digital channels. Traditional banks often find it economically unviable to establish physical branches in such areas. However, Fintech enables individuals to access financial services through mobile devices and internet connectivity, overcoming geographical barriers and bringing banking services to previously underserved communities. By increasing financial inclusion, Fintech not only provides individuals with access to basic financial services but also empowers them to participate in the formal economy, save money, build credit, and invest in their future. This leads to poverty reduction, economic growth, and improved quality of life. Enhanced Efficiency and Cost Reduction Fintech has brought about a paradigm shift in the financial industry by revolutionizing processes and significantly enhancing efficiency. Traditional financial institutions often have complex and time-consuming operations, which can lead to high costs and inefficiencies. Fintech solutions leverage technology to streamline processes and reduce costs, benefiting both service providers and consumers. Here are some key ways in which Fintech enhances efficiency and reduces costs: - Automation: Fintech leverages automation to replace manual and repetitive tasks with advanced algorithms and artificial intelligence. For example, chatbots and virtual assistants can handle customer inquiries and provide support, reducing the need for human intervention. This not only saves time and resources but also improves the overall customer experience. - Streamlined payment processes: Fintech has revolutionized payment systems, making transactions faster, more convenient, and secure. Traditional payment methods, such as checks and wire transfers, can be slow and cumbersome. However, with Fintech innovations like online banking, mobile wallets, and digital payment platforms, transactions can be completed instantly, eliminating the need for physical paperwork and reducing processing time. - Reduced operational costs: By embracing Fintech, financial institutions can significantly reduce their operational costs. Digital processes eliminate the need for physical branches, reducing overhead expenses such as rent and maintenance. Additionally, Fintech enables the automation of back-office functions like accounting, documentation, and compliance, reducing the need for manual labor and minimizing errors. - Improved risk assessment: Fintech utilizes advanced data analytics and algorithms to assess risk. This enables financial institutions to make more accurate and informed decisions regarding loan approvals, investment strategies, and risk mitigation. 
By leveraging technology to analyze a vast amount of data, Fintech solutions can identify potential risks, detect fraudulent activities, and make risk management more efficient. - Personalized financial services: Fintech allows financial institutions to offer personalized services tailored to individual customer needs. With the help of machine learning algorithms, Fintech platforms can analyze consumer data in real-time, providing customized recommendations, investment advice, and tailored financial products. This not only enhances customer satisfaction but also saves time by eliminating the need for customers to search for suitable options themselves. By enhancing efficiency and reducing costs, Fintech creates value for both financial service providers and consumers. In turn, this leads to increased profitability for businesses, lower fees for customers, and more accessible financial services for a broader population. Disruption of Traditional Financial Institutions The rise of Fintech has disrupted the traditional financial landscape, challenging the dominance of traditional financial institutions such as banks and insurance companies. Fintech startups and innovative solutions have emerged as strong contenders, offering alternative ways of delivering financial services. Here are some key aspects of how Fintech is disrupting traditional financial institutions: - Alternative lending: Fintech has revolutionized lending by providing alternative sources of funding. Traditional banks often have rigid lending criteria and lengthy approval processes. Fintech lending platforms, on the other hand, use advanced algorithms and alternative data sources to assess creditworthiness, making it faster and more accessible for individuals and small businesses to secure loans. This has challenged the traditional loan approval process and put pressure on banks to innovate and adapt. - Digital banking: Fintech has introduced digital-only banks or neobanks that operate exclusively online, without physical branch locations. These digital banks offer a range of banking services, such as account opening, payments, and money management, through mobile apps and websites. The convenience, lower fees, and streamlined processes offered by digital banks have attracted a growing number of customers, posing a threat to traditional banks that rely on physical branches. - Disintermediation: Fintech has facilitated direct transactions between individuals and businesses, reducing the need for intermediaries. Peer-to-peer lending, crowdfunding, and digital payment platforms enable individuals to lend or invest directly in projects or businesses, bypassing traditional financial intermediaries. This disintermediation disrupts the traditional banking model by offering more efficient and cost-effective alternatives for both borrowers and lenders. - Robo-advisors: Fintech has introduced robo-advisors, which are automated investment platforms that provide financial advice based on algorithms and data analysis. Robo-advisors offer low fees, personalized recommendations, and accessibility to a wider range of investors. This has disrupted the wealth management industry where traditional financial advisors typically catered to high-net-worth individuals. Robo-advisors are now challenging traditional wealth management firms by providing accessible and cost-effective investment solutions. 
- Open banking: Open banking initiatives, driven by regulatory requirements, allow customers to share their financial data securely with third-party Fintech companies. This enables Fintech startups to develop innovative products and services by leveraging customer data. Open banking disrupts the traditional banking model by encouraging competition and enabling Fintech companies to offer personalized, data-driven financial solutions that traditional banks may struggle to provide. As Fintech continues to evolve and gain traction, traditional financial institutions are recognizing the need to adapt and embrace innovation. Many are forming partnerships with Fintech startups, investing in technology, and launching their own digital initiatives to stay competitive in the rapidly changing landscape. The disruption caused by Fintech is reshaping the financial industry, driving innovation, and ultimately benefiting consumers by providing them with more choices and improved financial services. Technological Innovation in Financial Services Fintech has brought about numerous technological innovations that have transformed the way financial services are provided and consumed. Technology has enabled financial institutions to enhance their offerings, improve efficiency, and deliver a better customer experience. Here are some key technological innovations in financial services driven by Fintech: - Blockchain: Blockchain technology is revolutionizing various aspects of the financial industry, particularly in areas such as payments, identity verification, and smart contracts. By providing a secure and transparent decentralized ledger, blockchain enables faster and more secure transactions, reduces fraud, and eliminates the need for intermediaries in many financial processes. - Artificial Intelligence (AI) and Machine Learning: AI and machine learning algorithms enable the processing of vast amounts of data and the extraction of valuable insights. In finance, AI is used for risk assessment, fraud detection, customer service chatbots, and investment recommendations. Machine learning algorithms analyze data and patterns to provide personalized financial advice, automate customer interactions, and improve decision-making processes. - Big Data: The availability of large volumes of data has enabled financial institutions to gain deeper insights into customer behavior, market trends, and risk assessment. Big data analytics helps identify patterns, predict customer preferences, and detect fraudulent activities. By leveraging big data, financial institutions can offer personalized products and services, make data-driven decisions, and mitigate risks more effectively. - Mobile and Digital Solutions: Fintech has leveraged the widespread adoption of smartphones and the internet to provide mobile banking solutions, digital wallets, and payment apps. These tools allow individuals to access financial services anytime, anywhere, making transactions more convenient and enhancing financial inclusion. Moreover, mobile and digital solutions enable real-time notifications, easy account management, and secure transactions. - Robotic Process Automation (RPA): RPA automates repetitive and rule-based tasks, reducing human error and increasing operational efficiency. Financial institutions use RPA for tasks such as data entry, loan processing, and account reconciliation. By automating these processes, organizations can save time, reduce costs, and allocate human resources to more complex and strategic tasks. 
- Internet of Things (IoT): IoT devices, such as wearable sensors and smart devices, allow for the collection of data related to personal finance, health, and insurance. This data enables personalized financial services, such as usage-based insurance, personalized health insurance, and personalized offers based on spending habits. The integration of IoT with financial services opens up new opportunities for customization and risk assessment. These technological innovations have not only improved the efficiency and accuracy of financial services but also paved the way for new business models and opportunities. Financial institutions that embrace these advancements can gain a competitive edge, deliver superior customer experiences, and drive financial inclusion by reaching underserved populations. Fostering Innovation and Entrepreneurship Fintech has emerged as a catalyst for innovation and entrepreneurship in the financial sector. It has created a conducive environment that encourages the development of new ideas, technologies, and business models. Here are some key ways in which Fintech fosters innovation and entrepreneurship: - Democratization of access: Fintech has significantly lowered the barriers to entry for entrepreneurs in the financial industry. With the availability of open APIs, cloud computing, and affordable technology, startups can access the necessary infrastructure and resources to build innovative financial solutions. This democratization of access has leveled the playing field, allowing new players to compete with established financial institutions. - Collaborative ecosystem: Fintech has fostered a collaborative ecosystem where partnerships between startups, financial institutions, and technology providers are common. This collaboration allows startups to leverage the expertise, resources, and distribution networks of established players. Financial institutions, on the other hand, benefit from the agility and innovation of startups. This collaborative approach encourages the development of disruptive ideas and accelerates their adoption in the market. - Availability of funding: Fintech startups have access to a wide range of funding options, including venture capital, angel investors, and crowdfunding. The success stories of Fintech unicorns have attracted investors, making it easier for entrepreneurs to secure funding for their ventures. Additionally, Fintech-focused incubators and accelerators provide mentorship, networking opportunities, and financial support to early-stage startups, nurturing their growth and innovation. - Regulatory sandboxes: Regulatory bodies around the world have recognized the importance of fostering innovation in the financial industry. As a result, they have established regulatory sandboxes, which allow startups to test their innovations in a controlled environment. These sandboxes provide a safe space for startups to explore new ideas, validate their business models, and seek regulatory guidance. By facilitating experimentation, regulatory sandboxes support the development of cutting-edge solutions without compromising consumer protection. - Disruptive business models: Fintech has paved the way for new and disruptive business models that challenge traditional approaches. The rise of peer-to-peer lending, crowdfunding, robo-advisory, and digital currencies are examples of Fintech-driven business models that have gained traction in the market. 
These models offer innovative solutions to long-standing challenges and provide entrepreneurs with opportunities to disrupt established industries. - Focus on user experience: Fintech has shifted the focus from traditional financial products to user-centric solutions. Startups in the Fintech space prioritize designing intuitive and user-friendly interfaces that enhance the customer experience. By placing users at the center of the innovation process, Fintech entrepreneurs can identify pain points, develop more tailored solutions, and differentiate themselves in a crowded market. Fintech has created an environment that encourages innovation, collaboration, and entrepreneurship in the financial industry. By leveraging technology and disrupting traditional models, Fintech startups have the opportunity to transform the way financial services are delivered, driving efficiency, customer satisfaction, and financial inclusion. Regulatory Challenges and Opportunities As Fintech continues to revolutionize the financial industry, it also presents both regulatory challenges and opportunities for financial institutions and regulatory bodies. The rapid pace of technological innovation in Fintech often outpaces the development of regulatory frameworks. Here are some key regulatory challenges and opportunities associated with Fintech: - Regulatory uncertainty: The evolving nature of Fintech poses challenges for regulators in terms of adapting existing regulations or creating new ones. As Fintech expands into areas such as blockchain, cryptocurrencies, and digital identity, regulatory bodies face the challenge of balancing innovation and consumer protection. Striking the right balance requires collaboration between regulators, policymakers, and industry stakeholders. - Data privacy and security: Fintech relies heavily on the collection and analysis of large amounts of data. Regulatory frameworks need to address privacy concerns and ensure that consumer data is adequately protected. Fintech companies must comply with data protection regulations, establish robust cybersecurity measures, and provide transparency to users about how their data is used. Addressing these challenges fosters consumer trust and confidence in Fintech solutions. - Consumer protection: Fintech brings new risks that regulators need to address to protect consumers. As financial services become more digital and decentralized, regulations must mitigate risks such as fraud, unauthorized access, and unfair practices. Regulatory initiatives to ensure transparency, disclose risks, and enforce fair lending practices are crucial to maintain consumer trust in the Fintech ecosystem. - International coordination: Fintech operates across borders, requiring regulatory coordination between different jurisdictions. Harmonizing regulations and fostering international collaboration can promote innovation, facilitate cross-border transactions, and ensure a level playing field for Fintech startups and traditional financial institutions. Regulatory sandboxes and information-sharing platforms play a crucial role in facilitating this coordination. - Opportunities for regulatory innovation: Fintech presents opportunities for regulatory innovation that can foster growth and innovation while protecting consumers. Regulatory sandboxes, for example, allow startups to test their innovations within a controlled environment, enabling regulators to gain insights and ensure compliance. 
Open banking initiatives, which promote the secure sharing of consumer information, can spur innovation and competition in the financial industry. - Supporting financial inclusion: Regulatory frameworks can encourage Fintech solutions that promote financial inclusion. By providing clarity and support for innovative business models, regulators can foster greater access to financial services for underserved populations. Regulatory sandboxes and tailored regulations for startups targeting financial inclusion can create an enabling environment for Fintech to address the needs of unbanked and underbanked communities. Overall, striking the right balance between fostering innovation and ensuring consumer protection is a key challenge for regulators in the Fintech space. By embracing innovation, collaborating with industry stakeholders, and tailoring regulations to support responsible Fintech practices, regulatory bodies can maximize the benefits of Fintech while mitigating potential risks. Risks and Security Concerns in Fintech While Fintech brings countless benefits and opportunities, it also introduces risks and security concerns that need to be addressed to ensure the stability and trustworthiness of financial systems. Here are some key risks and security concerns in the Fintech space: - Cybersecurity threats: Fintech platforms handle sensitive financial data, making them prime targets for cyberattacks. Hackers may attempt to steal customer information, commit fraud, or disrupt financial operations. Fintech companies must invest in robust cybersecurity measures, encryption technologies, multi-factor authentication, and regular security audits to protect customer data and maintain the integrity of their systems. - Data privacy: The collection, analysis, and storage of large volumes of personal and financial data raise privacy concerns. Fintech companies must adhere to data protection regulations and ensure transparent practices regarding data usage, sharing, and consent. Prioritizing robust data privacy measures and providing clear privacy policies builds trust and confidence among users. - Lack of regulatory oversight: The rapid growth of Fintech often outpaces the development of regulatory frameworks. This can create gaps in oversight and expose users to potential risks. Regulators need to adapt and establish comprehensive guidelines that address Fintech-specific risks, enforce compliance with data privacy and consumer protection regulations, and ensure fair practices in the industry. - Operational risks: Fintech companies may face operational risks, including system failures, technical glitches, and inadequate business continuity plans. Such risks can disrupt services, compromise customer data, and erode trust. Robust risk management practices, regular system testing, and disaster recovery plans can help mitigate operational risks and ensure the smooth operation of Fintech platforms. - Financial fraud: Fintech platforms can be vulnerable to financial fraud, such as identity theft, phishing attacks, and unauthorized transactions. Rigorous user authentication measures, real-time transaction monitoring, and education on security best practices can safeguard against financial fraud and protect users’ financial assets. - Regulatory compliance: Fintech companies must navigate complex regulatory landscapes to ensure compliance with financial regulations, consumer protection laws, and anti-money laundering (AML) practices. 
Failure to comply with regulations can result in legal consequences and reputational damage. Establishing a strong compliance framework and building partnerships with legal and compliance experts are essential for Fintech firms to navigate the regulatory landscape successfully. Addressing these risks and security concerns requires a collaborative effort among Fintech companies, regulators, and users. Promoting cybersecurity awareness, implementing robust security measures, enforcing data privacy regulations, and enhancing regulatory oversight are crucial to safeguarding the Fintech ecosystem and ensuring a secure and trustworthy environment for financial transactions. Fintech has revolutionized the financial industry, bringing about significant changes and opportunities. The economic impact of Fintech has driven innovation, created new jobs, and enhanced financial inclusion. By leveraging technology, Fintech has increased access to financial services, improved efficiency, and reduced costs. However, Fintech also presents regulatory challenges and security concerns that need to be addressed. Regulatory bodies must adapt to the evolving Fintech landscape, establishing frameworks that encourage innovation while ensuring consumer protection. Cybersecurity threats, data privacy issues, and operational risks require robust measures to protect users’ information and financial assets. Despite these challenges, Fintech offers immense potential for growth, collaboration, and customer-centric solutions. It fosters innovation and entrepreneurship by democratizing access to resources, supporting incubation programs, and encouraging collaboration between startups and established financial institutions. This collaboration drives the development of disruptive business models and promotes healthy competition. As Fintech continues to evolve, it is crucial for all stakeholders, including financial institutions, regulators, entrepreneurs, and consumers, to stay informed, adapt to technological advancements, and safeguard against risks. By doing so, we can maximize the benefits of Fintech, drive financial inclusion, and create a more accessible, efficient, and secure financial ecosystem for all.
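To make the credit-scoring idea mentioned above more concrete (online lenders using alternative data sources and algorithms to assess creditworthiness), here is a minimal, illustrative Python sketch. The features, weights and approval threshold are hypothetical and invented purely for this example; a real lender would learn such weights from repayment data and would be bound by fair-lending and data-protection rules.

```python
# Illustrative sketch only: a toy "alternative data" credit score of the kind
# described in the article. The features, weights, and threshold below are
# hypothetical examples, not any real lender's model.
import math
from dataclasses import dataclass

@dataclass
class Applicant:
    months_of_mobile_money_history: float   # length of mobile-wallet usage
    on_time_utility_payment_rate: float     # fraction of bills paid on time, 0.0-1.0
    avg_monthly_cashflow: float             # in local currency units

# Hypothetical weights; in a real system these would be learned from
# repayment histories, not hand-picked.
WEIGHTS = {
    "months_of_mobile_money_history": 0.05,
    "on_time_utility_payment_rate": 2.5,
    "avg_monthly_cashflow": 0.001,
}
BIAS = -3.0

def default_risk(applicant: Applicant) -> float:
    """Return an estimated probability of default via a logistic function."""
    z = BIAS
    z += WEIGHTS["months_of_mobile_money_history"] * applicant.months_of_mobile_money_history
    z += WEIGHTS["on_time_utility_payment_rate"] * applicant.on_time_utility_payment_rate
    z += WEIGHTS["avg_monthly_cashflow"] * applicant.avg_monthly_cashflow
    creditworthiness = 1.0 / (1.0 + math.exp(-z))   # higher is better
    return 1.0 - creditworthiness

if __name__ == "__main__":
    applicant = Applicant(
        months_of_mobile_money_history=36,
        on_time_utility_payment_rate=0.95,
        avg_monthly_cashflow=800.0,
    )
    risk = default_risk(applicant)
    print(f"Estimated default risk: {risk:.2%}")
    print("Approve" if risk < 0.20 else "Refer to manual review")
```

A sketch like this is only as sound as the data behind it, which is why the data-privacy, fairness and consumer-protection questions raised above matter as much as the algorithm itself.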
<urn:uuid:a0060a66-a639-4576-8af5-5531f1bb8c87>
CC-MAIN-2023-50
https://livewell.com/finance/why-is-fintech-important/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.925865
5,124
2.75
3
Bev Robertson and Pam Jenkins are both avid gardeners and members of Local Food Connect. In March 2017, they ran some workshops on how to use a dehydrator successfully. Here are their notes. Local Food Connect’s Fowlers Ultimate Dehydrator is available for members to borrow. Do you want an easy way to preserve food at home which requires very little equipment? Do you need a food storage method that doesn’t take up much space? Are you keen on providing your family with preservative-free healthy snacks to enjoy? Are you looking for portable food for camping or backpacking? Equipment needed for home food drying Commercial dehydrators will give more consistent results with quick and uniform drying which preserves colour, flavour and texture. Most dehydrators allow you to set the temperature for drying different foods. For example, herbs are best dried at lower temperatures that don’t drive off volatile oils, while meats are typically dried at higher temperatures. Sharp knife, mandoline or food processor for cutting food into thinner pieces. Stainless steel saucepan for cooking and blanching. Clean, sterilised jars/containers with airtight lids for storage of dried foods. What types of food can be dried? Just about anything – vegetables, fruit, herbs, meat. All work out to be much cheaper than supermarket dried goods. Preparing food for dehydrating Before you dehydrate anything, make sure work surfaces, your hands, all food and equipment are thoroughly washed. Generally, food needs to be sliced or chopped into uniform pieces 3–5 mm thick. Pre-treating – some foods can be pre-treated before drying to help preserve colour and flavour. For example, apples, apricots, bananas, peaches, pears and nectarines may darken during drying and storage. Soaking fruit in a bowl of water with a couple of tablespoons of lemon juice helps to reduce browning, as does dipping the sliced fruit in other acidic fruit juices such as orange or pineapple. Certain dried vegetables may become tough and strong-flavoured after a period of storage. Steam blanching them prior to drying inactivates the enzymes and preserves the natural vitamins and minerals. After blanching, transfer the food to ice water to stop the cooking. Fruits and vegetables can be dehydrated without pre-treatment as long as they are exposed to good air circulation and warmth in the dehydrator. How long does it take to dry food? Anywhere from a few hours to days depending on what is being dried, as the moisture content varies. Thinner items dry faster than thick. Each dehydrator works differently and the more you put in, the longer it will take to dry everything. Overnight drying is a good idea. Don’t turn up the temperature in an attempt to dry foods more quickly, as this will seal the outside, leaving moisture within, which will in turn lead to the food spoiling. Spread the food evenly on trays, keeping space between pieces and avoiding overlapping. It’s fine to combine different foods of a similar type and size in the same load; however, don’t dry strongly flavoured foods such as onions with fruits. Rotating the trays occasionally will assist with drying. How do you know when the food is dry enough? Different foods will have different textures when dried. Some will be brittle, whilst others will be leathery and pliable. Check the drying tables in the dehydrator instruction book. Let the food cool. If the food feels squishy it isn’t dry enough. Continue drying. Note that individual pieces in a batch may dry at different rates. 
Simply remove pieces as they are done, cool, place in a jar and seal. After the items are in a tightly sealed jar at room temperature, check for condensation on the inside of the lid after a day or so. If there is condensation, you need to dry the food more or put it in the fridge/freezer to prevent mould. Storing dried foods Airtight, dry containers should be used to store dried foods. You can re-use jars with undamaged, well-sealed lids. When storing large quantities, divide them into smaller batches in case problems develop in a particular batch. Label with the name of the food and the date it was dried. Keep the containers in a dark, dry, cool place. Herbs that are to be used within a few months may be crumbled to save space. However, for best flavour retention in long-term storage, herb leaves should be kept intact and crumbled just before use. Fruit roll-ups can be rolled in plastic wrap and stored in glass jars with a tight-fitting lid. They can be refrigerated or frozen to retain freshness. Using dried foods Many dried foods, especially fruits, are ready to eat as snacks and can be used in baked goods and other recipes. Re-hydrating is the process of adding water back to the dried food. Most vegetables can be re-hydrated by soaking for several minutes to hours, or simply added to soups and casserole dishes.
<urn:uuid:ab4e9773-a879-4aa2-a24f-c0e4f368f139>
CC-MAIN-2023-50
https://localfoodconnect.org.au/eating/success-with-dehydrating/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.941571
1,062
2.5625
3
Received: August 20, 2018; Published: August 27, 2018 *Corresponding author: Andrew Hague, Professor of Advanced Medicine, President of Cell Sonic Limited, Manufacturers of medical equipment, United Kingdom Only by increasing productivity can earnings be increased. This applies to doctors as much as to any other trade. Better techniques enable diseases to be cured that have hitherto been impossible to treat by traditional methods. By these means, a doctor can heal more patients more easily. Thus, a doctor can earn more, and patients can be charged less. In every country, a medical doctor is highly respected and usually paid double the average wage, and often much more, especially if they can be self-employed and charge patients directly. Governments and insurance companies always control the medical providers. Doctors are seen as having the power of life and death over people, and this is feared by politicians who seldom understand medicine. So long as students continue to apply to medical schools, the remuneration is settled by supply and demand. If there are enough doctors, the pay must be enough. If doctors left the profession and fewer students enrolled, the pay would be increased to attract more. All this assumes that a doctor is able to restore health and postpone death. Longevity has increased, so doctors and the medical industry claim to be earning their keep. But is living longer a result of what a doctor can do when called upon to stop an illness? What about the improvements in sanitation that happened over a hundred years ago in the modernised countries and are being installed now in emerging countries? Soap does more to maintain health than aspirin. It has to be said that a headache is never the result of an aspirin deficiency. Plumbers contribute more to health than doctors. Some can earn as much as doctors by working hard and taking on unpleasant jobs. To become a doctor requires more study and expense than ever faces an apprentice plumber, and the skills are different. Think back to school days. Those who passed exams all had good memories. They may have lacked imagination, but when it came to remembering names and dates, they had total recall, and that is all an exam tested. Medical students are similarly tested on their memory. Now ask whether a good memory is more important than an instinct for asking the right questions when it comes to puzzling out the cause of an illness. Having learned a lot of case histories, a doctor may remember that what he sees in the clinic is the same as reported in a textbook and on that basis prescribe drugs which should help. That, unfortunately, is not good practice, and it is usually accompanied by the advice, “If you don’t feel better in a few days, come back and we’ll try something else.” Whether the medicine works or not, the doctor gets paid. By contrast, if the new toilet did not flush, the plumber would not be paid until it worked properly. The doctor will have provided the patient with a medicine that can be justified as correct by reference to existing practice. It will have been used before and tested. Whether the circumstances are the same, one can never be sure, but at least the uncertainties are limited if the drug, and it usually is a drug, has been around long enough to be well tested. This is all about legalities. The doctor’s first responsibility is to protect himself and his colleagues. It is not about doing the best for the patient. A patient who has not been treated by a doctor cannot sue that doctor. 
This is now the first rule of medicine. The language of medicine is deliberately non-conventional. It will be argued that it eliminates misunderstandings and can be more precise. So, who misunderstands? It is always the patient. Who dares to argue with a doctor? His rebuttal will be to let you seek a second opinion. Go for an operation and you will have to sign a disclaimer. Only another doctor would understand the legal clauses. Dentists are the same. As you sit in the waiting room two minutes before the appointment, the nurse presents you with a ten-page document and asks you to sign it; you do, so that if anything goes wrong, it is your fault and you still have to pay. The whole procedure should be changed to put the patient first. In every other business, the customer comes first. Ask any doctor about their pay and they will tell you they do not do it for the money; they do it for the satisfaction of helping people. Does that mean they would work for less pay? No way! It means they know what to say to maintain their status. Britain’s welfare state, one of the world’s first, was built after the tragedy of the Second World War, and Aneurin Bevan created its health service. He wanted all hospitals to be government owned and all doctors to work for the government. The doctors’ unions protested. They were self-employed and could charge whatever the patients would stand. Eventually Bevan had to compromise and allowed the doctors to work for others as well as the government. In other trades, this is known as moonlighting and is generally frowned upon, but for doctors it is regarded as the step up to high earnings. Private medicine was allowed to run alongside the National Health Service and gave the surgeons and those at the top of the NHS an opportunity to put in a few extra hours in private hospitals so that they could bring their total earnings to about five times more than they would get from the NHS alone. Usually insurance companies were covering the costs of treatment in the private hospitals. The doctors were the same whether the patient was in private care or the NHS; the difference was that the floors in the private hospitals were carpeted and patients had individual rooms with a television. The patient would also jump the queue. The NHS would keep the patient waiting, and that wait would be longer if the surgeon was not available due to commitments in the private hospital. Bevan’s socialist ambitions were thwarted, not by the patients, who are the electorate, but by the doctors. Nevertheless, the NHS, now in its 70th year, grumbles along with almost daily complaints about insufficient money. The assumption that spending more on health care will improve health is false and based on pleas from the medical profession. Never having worked in any other job, they think they work harder than anyone else and have higher responsibilities. How often does a doctor say, “Unfortunately, you are not responding to the medication”? It is the patient’s fault. Pay doctors and their support staff according to results and their attitude will change. All other workers are paid on this basis. Productivity is the measure of efficiency and performance. No longer should the medical profession be allowed to hide behind their failure to apply better means of healing. If a doctor is unable to cure a disease, let him sue his medical school or employers for keeping him ignorant. There are many examples where failure to cure or heal should not be tolerated. 
The belief that it is better to let a patient die than treat them with something that has not been proven over many years on thousands of patients is a scam perpetrated for profit by corrupt regulators supported by existing suppliers. Although innovation is constantly called for, the obstacles to innovation in medicine are colossal. Patients should aim their complaints at the politicians who control the laws and regulators governing medical practice. In Britain there are 140 amputations a week, mostly because diabetic ulcers cannot be healed. This figure is rising, having doubled in the last few years. An amputation costs £25,000 plus the cost of equipping the patient with a prosthesis and their rehabilitation. Twenty years ago, the ability of low-powered lithotripters to heal non-healing wounds was reported by orthopaedic surgeons who had observed the effect during treatment: wounds healed better than expected. Roll forward to the present day and NICE, the National Institute for Health and Care Excellence, in Britain states that the technology is still experimental. That statement was made by a lawyer, not a doctor. In Germany, treatments are paid for (reimbursed) by insurance companies, so they only pay for approved treatments. CellSonic VIPP was being used successfully by Dr Christian Busch at Tübingen University. A patient who was not getting adequate treatment, and who knew that Busch with his CellSonic machine could help, complained to her insurer, and a director then telephoned Dr Busch. He agreed that CellSonic is the best of all methods for wound healing and criticised the insurance company for reimbursing the negative-pressure machine, which does not work. The company director agreed and asked for €400,000 for further trials. With adequate trials already done, why should the insurance company want more money? In 2017, the total cost of wound care in Britain was £5.3 billion. The number of wounds in a year is 2.2 million, so the cost to the NHS of each wound is £2,409. As these wounds never heal, the costs run on until the patient dies or has an amputation. Even an amputation often fails to intercept the infection that prevents the wound from closing, so the crisis continues. The cost of healing a wound with CellSonic VIPP, including labour and the machine, is £130.00 plus 30% for cleaning and dressings, totalling £169.00. This gives a saving of £2,240 per wound (£2,409 − £169), which, spread over 2.2 million patients, would be a saving of almost £5 billion a year. This amount is of no interest to the politicians or the NHS management. They think that the answer is to have more to spend on methods that do not work. The refusal of doctors to try machines offered to them free of charge confirms a major failure in the system. They have no incentive to improve. One of the main reasons for absenteeism from work is lower back pain. The nurse at the local clinic will prescribe a painkiller. This will numb the brain, making the pain tolerable; it will not cure the cause of the pain and will not make the person able to perform better at work. Over many years, CellSonic VIPP has been found to cure all types of back pain and has even succeeded in growing new nerves in cases of severed spinal cord. The treatment does not use drugs, so there are no side effects. For lower back pain, the treatment takes a few minutes and can be done by a nurse. 
A severed spinal cord takes longer, and repeat treatments have to be carried out for a few months, but given that this is bringing a patient back from paralysis, it is a breakthrough that restores life to a patient. I met a young girl in India who had been treated and asked her if she could now tell when she needed to go to the toilet. She smiled and realised I appreciated the enormity of her predicament. Her nerves from the waist downwards were working again. She was now getting back her dignity and independence. Only CellSonic VIPP is able to bring about this transformation. A third of the world’s population is affected by cancer at some stage in their life, and it remains a major killer. There is a general belief that cancer is caused by a biochemical failure and that the cure has to be a drug. The aim is to kill cancer cells selectively with deadly poisons, chemotherapy agents whose earliest forms were derived from mustard gas (banned in warfare), but these poisons kill healthy cells as well as mutating ones. Radiation is more dangerous and causes cancer, yet it is used to burn away cancer cells whilst trying to avoid harming adjacent healthy cells. The cure rates from chemo and radiation are miserably low, and during the treatment the patient is tortured. As one correspondent put it to me: “When you educated me on the fact that CellSonic not only produces sound waves but also a short-duration, high-powered electrical field, it looks more likely that you have created a nonsurgical form of irreversible electroporation using a combination of sound waves plus a high-powered electric field. VIPP sound and high-powered electric fields have, to my knowledge, never been combined before to treat cancer. Nobel Prize work if you can live to collect it. This is a paradigm-breaking disruptive technology. WOW! In my opinion, it is the combined effect of sound and electric field that produces the unique effects of VIPP, which is why your technology is different from your competitors.” So, if I can hang around, I may get a Nobel Prize for medicine. Great! The real reward will be to know that deaths and pain have been halted. It surprises me that the medical establishment does not want a cancer cure. In Britain, the 1939 Cancer Act forbids advertising anything to do with cancer. The purpose is to prevent people being given false hope. Why that act was used to threaten a company selling a bright light to help a woman check if there are lumps in her breast, only the woman at the Advertising Standards Authority knows. The breast light had been approved by a renowned doctor. Their website now refers to cancer, so I hope they are not now legally threatened. In the USA, the opioid crisis is killing and costing. A Financial Times article says that pharmaceutical companies will be sued just as tobacco companies were sued for the damage their products inflicted on the innocent public. The US Department of Health and Human Services reports that in the late 1990s, pharmaceutical companies reassured the medical community that patients would not become addicted to opioid pain relievers, and healthcare providers began to prescribe them at greater rates. Increased prescription of opioid medications led to widespread misuse of both prescription and non-prescription opioids before it became clear that these medications could indeed be highly addictive. At the core of this problem is pain, mostly the pain of cancer. CellSonic VIPP has now been approved in the USA to treat pain in stage III and IV cancer and chronic disease patients. 
The CIRBI link is Pro0002913 and the submission was written by Annie Brandt of the Best Answer for Cancer Foundation. CellSonic does not use drugs, so there are no side effects, and the results are apparent within three days. The procedure works on all types of cancers. Big Pharma is in a self-inflicted crisis. CellSonic can help them as well as patients. Cancer is an electrical fault, as explained by the researchers at Bradford University in England. Pharmaceuticals will not resolve an electrical fault. With the title of this article suggesting that doctors should be paid more, the question should now be: if CellSonic can save the life of a cancer patient, what could the doctor charge? To help answer this, we have the value of a life. In August 2018, a jury in California awarded $289 million to a school’s groundskeeper with cancer after finding that the maker of the weed killer he used had not adequately described the risks to human health. The weed killer contained glyphosate and was made by Monsanto, a chemicals company that was recently taken over by Bayer. The German company’s share price plunged after the jury’s decision. Does this mean that CellSonic should approach the victim and offer to clear his cancer for $289 million less a sum to pay for the legal costs he has suffered? The idea is fanciful, but the amount of money is real. This is what a court places on the value of the man’s shortened life and discomfort. There are millions of people suffering as this man is, and their cancers are the result of such things as tobacco, pollution, electrical power lines, smartphones and the stress of divorce. The cost of treating cancer with CellSonic is about the same as healing a wound. It is not even as much as one night’s board in a private hospital. Therefore, the profit margin could be astronomical. It won’t be, because I will not allow it, and it will cost considerably less than the useless cancer treatments now inflicted. In the USA, a cancer patient incurs costs of about $200,000 a year, either personally or from an insurance company, and they die in the fifth year. With doctors paid by results, they would increase their incomes by providing a better service. Only by changing the terms of remuneration will doctors be lifted out of their complacency and be taken back to the basics: by the best means available, to save their patients from pain and death. Like any other tradesman, they need the right tools to do the job, and that equipment is readily available and affordable.
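As a footnote, the wound-care arithmetic quoted earlier can be checked directly. The short Python sketch below simply reproduces the article's own figures (£5.3 billion across 2.2 million wounds per year, against £130 plus 30% per CellSonic treatment); the numbers are the article's estimates, not independently verified data.

```python
# Reproduces the cost arithmetic quoted in the article; the figures are the
# article's own estimates, not independently verified data.
total_wound_care_cost = 5.3e9      # £ per year, Britain, 2017 (per the article)
wounds_per_year = 2.2e6

cost_per_wound_nhs = total_wound_care_cost / wounds_per_year        # ≈ £2,409

cellsonic_base_cost = 130.00                         # labour + machine, per the article
cellsonic_total_cost = cellsonic_base_cost * 1.30    # +30% cleaning and dressings, ≈ £169

saving_per_wound = cost_per_wound_nhs - cellsonic_total_cost        # ≈ £2,240
annual_saving = saving_per_wound * wounds_per_year                  # ≈ £4.9 billion

print(f"NHS cost per wound:  £{cost_per_wound_nhs:,.0f}")
print(f"CellSonic per wound: £{cellsonic_total_cost:,.0f}")
print(f"Saving per wound:    £{saving_per_wound:,.0f}")
print(f"Annual saving:       £{annual_saving / 1e9:.1f} billion")
```

Run as written, it prints a per-wound NHS cost of about £2,409, a treatment cost of £169, and an annual saving of roughly £4.9 billion, which matches the "almost £5 billion" figure in the text.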
<urn:uuid:a22a0b01-3e9d-4f97-9d47-3cf02f78e34f>
CC-MAIN-2023-50
https://lupinepublishers.com/medical-science-journal/fulltext/doctors-should-be-paid-more.ID.000109.php
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.970676
3,541
2.859375
3
To stay healthy, neurons must prevent protein aggregates and defective organelles such as mitochondria from accumulating inside them. We now know that an animal species has found a solution to its neuronal trash problem—one that might also be present in humans and lead to neurodegenerative disease if it becomes dysfunctional. Researchers studying the roundworm C. elegans have discovered that neurons in adult worms possess a previously unrecognized garbage-removal mechanism: The neurons expel large (4-micron diameter) membrane-bound vesicles (dubbed “exophers”) that are filled with clumped protein and damaged cellular organelles, including mitochondria. The findings are described in a paper published in February in Nature. One of the paper’s senior authors is David H. Hall, Ph.D., professor in the Dominick P. Purpura Department of Neuroscience. The researchers observed that inhibiting other avenues of protein degradation—autophagy and proteasomal digestion, for example—enhanced exopher production. And when roundworm neurons were induced to express high levels of neurotoxic huntingtin protein, they produced significantly more exophers than did neurons in control worms. Inducing neurons to express another toxic protein (amyloid-forming human Alzheimer’s disease fragment) yielded similar results. Significantly, neurons stressed by toxic proteins seem to function better after they generate exophers. For example, several strains of roundworm express altered proteins that progressively impair touch sensation. At midlife in these strains, the touch sensitivity of a particular touch-detector neuron was enhanced in worms that produced exophers earlier in their lives compared with worms that had not. After discovering that exophers can also expel mitochondria, the researchers found they could trigger exopher production by stressing, damaging or otherwise impairing mitochondrial quality. For example, increased production of neuronal exophers was observed in roundworm strains in which either of two genes involved in mitochondrial maintenance was rendered defective. What is the fate of exophers and their trash after neurons jettison them? Data supported by electron microscopy suggested that at least some of the material is degraded by neighboring cells of the worm’s hypodermis (the cell layer that secretes its outer cuticle layer). But a portion of the exopher material entered the worm’s body cavity and was scavenged by distant cells. If human neurons possess the equivalent of exophers, the researchers note, then this transfer of potentially toxic material could have implications for neurological disease. Recent findings indicate that mammalian neurons can expel protein aggregates associated with Alzheimer’s, Parkinson’s and prion disease. Once outside the neuron, these aggregates can be taken up by other cells—possibly the way disease damage spreads in the brain. “We propose that exophers are components of a conserved mechanism that constitutes a fundamental, but formerly unrecognized, branch of neuronal proteostasis [protein homeostasis] and mitochondrial quality control, which, when dysfunctional or diminished with age, might actively contribute to pathogenesis in human neurodegenerative disease and brain aging,” the researchers conclude.
<urn:uuid:b14a79e8-eef6-46f0-89ed-f8bfe558cb68>
CC-MAIN-2023-50
https://magazine.einsteinmed.edu/backup/winterspring-2017/protective-mechanism-could-underlie-brain-disease/
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00800.warc.gz
en
0.942273
650
3.84375
4