This post may contain affiliate links.
Santa and his eight tiny reindeer will start making their rounds soon! Today, however, you can read one or more of these reindeer books for kids!
Rudolph the Red-Nosed Reindeer isn’t the only reindeer book on the shelf. Head to your local library and stock up on these.
Below you’ll find classics and new favorites. These books will make a great addition to your holiday lessons this month.
Reindeer Picture Books
Fill your book basket with a great collection of reindeer picture books for kids. Most of these books can be found at your local library or used bookstore.
If you have a hard time finding them, you can order them on Amazon by clicking the images below.
Reindeer | Explore the polar region, one of the most extreme environments on Earth, by following a reindeer through its day as it eats, sleeps, and cares for its young.
The Wild Christmas Reindeer | Little Teeka thought she had to be firm with the reindeer to get them ready for Santa’s important flight, but when her bossy yelling only got their antlers tangled up, she knew she had to try something different.
The Great Reindeer Rebellion | Oh, no! There’s trouble brewing in the North Pole: Santa’s reindeer have gone on strike and he’s auditioning other animals to take their place. But when the cats abandon station to chase some mice and the elephants fall through the roof, what’s Santa to do? Will his eight trusty reindeer ever fly again?
Imogene’s Antlers | One Thursday Imogene wakes up with a pair of antlers growing out of her head and causes a sensation wherever she goes.
I See a Reindeer, but… | Illustrations and humorous rhyming text portraying a young child’s adventure exploring the wonders of the North Pole and several characters seen along the way.
Reindeer Christmas | Late one snowy winter evening, two young children and their grandmother happen upon a weak and weary deer while feeding the animals in the forest. Together the children care for the deer and warm him by the fire until he is nursed back to health, completely unaware of the surprise that awaits them on Christmas morning!
I Wish | When Anja discovers an abandoned reindeer baby in the woods, she cares for it and raises it as her own. They become dear friends and have many adventures together, but as the reindeer grows he wishes to rejoin his kind. So Anja leads him to join the greatest reindeer of all—those of Santa’s sled team.
A New Reindeer Friend | Anna and Elsa are preparing for their kingdom’s very first royal ball! Children ages 2 to 5 will love reading how the royal sisters head outside to find flowers for their party and end up rescuing a baby reindeer—with help from Olaf!
Flight of the Reindeer: The True Story of Santa Claus and His Christmas Mission | In Search of Santa Claus: Those who know him best tell their remarkable tales.
Reindeer Moon | Have you ever been enchanted by Christmas? …been lost in the flame of a candle, the glitter of tinsel, the fragrance of fresh pine boughs? Have you ever seen a Reindeer Moon? There comes a time when every child wonders… “Do reindeer really know how to fly?”
Santa’s Reindeer | Shh! Have you ever heard a reindeer’s sleigh bells in the sky on Christmas Eve? Have you ever heard a reindeer’s hoofbeat on the roof of your house? Or listened to the clatter of antlers outside in the darkness? Of course you haven’t! Santa’s reindeer are so skillful that they can fly in and out of your neighborhood without anyone hearing a thing.
Reindeer | It’s winter on the Arctic tundra. As a blizzard blows around them, a herd of reindeer is searching for food. The land is covered with snow, but the hungry deer know how to find a meal. Using their hard hooves, they dig down through the snow to find moss to eat.
Reindeer | When spring comes to the far north, reindeer become restless. They are ready for the long walk to their summer pastures. Follow the reindeer, and the people who herd them, through the year as they move from the forest to the tundra. There, the reindeer raise their calves, play, and grow antlers until the weather turns cold again.
Remarkable Reindeer | A wonderful picture book full of facts about reindeer. It’s the perfect length for reading to young children.
Uncles and Antlers | Get in the holiday spirit and count from one to eight in this playful tribute to a team of reindeer relatives, each quirky and fun, who help a certain jolly old man bring delight to children each year.
The Naughtiest Reindeer | It’s the night before Christmas and Rudolf is sneezing his little red nose off. So Santa needs another reindeer to help pull the sleigh. Rudolf’s sister Ruby is a little reindeer who always finds herself in big trouble. Will she find a way to be on her best behavior, or will she bring chaos to Christmas Day?
Reindeer Dust | Reindeer Dust is an interactive picture book that engages the imagination through family participation. The book tells the exciting tale of how the Reindeer Dust tradition first began. Designed with the entire family in mind, the book also includes an easy recipe and poem to be read on Christmas Eve.
Olive, the Other Reindeer | Olive, the other reindeer…is a story about Santa Claus’s dog and how he wants to become part of Santa’s reindeer team.
If your little readers are reindeer fans, as well, you will want to explore these fun activities. Check out these sticker books, coloring books, puzzles, and more.
Keep preschoolers engaged this season with a fun set of preschool reindeer printable math and literacy activities. Perfect for December preschool centers!
This cute reindeer paper plate craft not only gets kids in the Christmas spirit, but it helps them fine tune their motor skills as they lace the nose. | <urn:uuid:eec86f81-a583-468e-bc42-b6c0a57976a7> | {
"date": "2020-01-22T04:52:36",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9379698038101196,
"score": 2.65625,
"token_count": 1372,
"url": "https://teachingwithchildrensbooks.com/reindeer-picture-books/"
} |
In the early 1960s, interactive computing began to spread out from the few tender saplings nurtured at Lincoln Lab and MIT – spread in two different senses. First, the computers themselves sprouted tendrils that reached out across buildings, campuses, and towns to allow users to interact at a distance, and to allow many users to do so at the same time. These new time-sharing systems blossomed, accidentally, into platforms for the first virtual, on-line societies. Second, the seeds of interactivity spread across the country, taking root in California. One man was responsible for sowing those first transplants: a psychologist named J.C.R. Licklider.
Joseph Carl Robnett Licklider — known to friends as “Lick” — specialized in psychoacoustics, a field that bridged the gap between imaginary states of mind and the measurable physiology and physics of sound. We met him briefly before, as a consultant in the 1950s FCC Hush-a-Phone hearings. He had honed his skills at the Psycho-Acoustics Laboratory at Harvard during the war, devising techniques to improve the audibility of radio transmissions inside noisy bombers.
Like so many American scientists of his generation, he found ways to continue to meld his interests with military needs after the war, but not because he had a special interest in weaponry or national defense. The only major civilian sources of money for scientific research were two private institutes founded by the industrial titans of the turn of the century: the Rockefeller Foundation and Carnegie Institute. The National Institutes of Health had only a few million dollars to spend, and the National Science Foundation was created only in 1950, with a similarly modest budget. To get funding for interesting science and technology in the 1950s, your best bet was the Department of Defense.
So, in 1950, Licklider joined an acoustics lab at MIT directed by the physicists Leo Beranek and Richard Bolt, and funded almost entirely by the U.S. Navy.1 Once there, his expertise on the interface between the human senses and electronic equipment made him a natural early recruit to MIT’s new air defense project. As part of the Project Charles study group, tasked with figuring out how to implement the Valley Committee air defense report, Licklider pushed for the inclusion of human factors research, and got himself appointed co-director of radar-display development for Lincoln Laboratory.
There, at some point in the mid-1950s, he crossed paths with Wes Clark and the TX-2, and instantly caught the interactive computing bug. He was captivated by the idea of being in total control of a powerful machine that would instantly solve any problem addressed to it. He began to develop an argument for “man-computer symbiosis,” a partnership between human and computer that would amplify humankind’s intellectual power, in the same way that industrial machines had amplified its physical power. He noted that some 85% of his own work time2
…was devoted mainly to activities that were essentially clerical or mechanical: searching, calculating, plotting, transforming, determining the logical or dynamic consequences of a set of assumptions or hypotheses, preparing the way for a decision or an insight. Moreover, my choices of what to attempt and what not to attempt were determined to an embarrassingly great extent by considerations of clerical feasibility, not intellectual capability. …the operations that fill most of the time allegedly devoted to technical thinking are operations that can be performed more effectively by machines than by men.
The overall concept did not stray too far from Vannevar Bush’s Memex, an intellectual amplifier that he sketched in his 1945 “As We May Think,” though Bush’s mix of electro-mechanical and electronic components gave way to a pure electronic digital computer as the central intellectual engine. That computer would use its immense speed to shoulder all the brute-force clerical work involved in any scientific or technical project. People would be unshackled from that drudgery, freed to spend all of their attention on forming hypotheses, building models, and setting goals for the computer to carry out. Such a partnership would provide tremendous benefit to researchers such as himself, of course, but also to national defense, by helping American scientists stay ahead of the Soviets.
Soon after this Damascene encounter, Lick brought his new devotion to interactive computing to a new position at a consulting firm run by his old colleagues, Bolt and Beranek. As a sideline from their academic physics work, the two had dabbled with consulting projects for years, reviewing, for instance, the acoustics of a movie house in Hoboken, New Jersey. Landing the acoustics analysis for the new United Nations building in New York City, however, brought them a slew of additional work, and so they decided to leave MIT and consult full-time. Having acquired a third partner in the meantime, architect Robert Newman, they now went by Bolt, Beranek and Newman (BBN). By 1957, having grown into a mid-sized firm with dozens of employees, Beranek felt that they risked saturating the market for acoustics work. He wanted to extend their expertise beyond sound to the full range of interaction between humans and the built environment, from concert halls to automobiles, across all the senses.
And, so, naturally, he sought out his old colleague Licklider, and recruited him on generous terms as the new vice-president of psychoacoustics. But Beranek had not reckoned with Licklider’s wild enthusiasm for interactive computing. Rather than a psycho-acoustics expert, he had acquired… not a computer expert, exactly, but a computer evangelist, eager to bring others to the light. Within the year, he had convinced Beranek to lay out tens of thousands of dollars to buy a computer, a meager little thing called the LGP-30, made by a defense contractor called Librascope. Having no engineering expertise himself, he brought on another SAGE veteran, Edward Fredkin, to help configure the machine. Despite the fact that the computer did little but distract Licklider from his real work while he tried to learn to program it, he convinced the partners to put down still more money3 to buy a much better computer a year and a half later: DEC’s brand new PDP-1. Licklider sold B, B, and N on the idea that digital computing was the future, and that somehow, sometime, their investment in building expertise in the field would pay off.
Shortly thereafter, Licklider, almost by accident, found himself in the perfect position for spreading the culture of interactivity across the country, as head of a new government computing office.
In the Cold War, every action brought its reaction. Just as the first Soviet atomic bomb had spurred the creation of SAGE, so did the first Soviet satellite in orbit, launched in October 1957, trigger a flurry of responses from the American government. All the more so because, while the Soviets had trailed the U.S. by four years in exploding a fission weapon, in rocketry they seemed to have leaped ahead, beating the Americans to orbit (by about four months, as it turned out).
One of the responses to Sputnik was the creation, in early 1958, of an Advanced Research Projects Agency (ARPA) within the Defense Department. In contrast to the more modest sums available for civilian federal science funding, ARPA was given an initial budget of $520 million, three times the budget of the National Science Foundation, which had itself been tripled in size in response to Sputnik.
Though given a broad charter to work on any advanced projects deemed fit by the Secretary of Defense, it was initially intended to focus on rocketry and space – a vigorous answer to Sputnik. By reporting directly to the Secretary of Defense, ARPA was to rise above debilitating and counterproductive inter-service rivalries and develop a unified, rational plan for the American space program. But in fact, all of its projects in that field were soon stripped away by rival claimants4: the Air Force had no intention of giving up control over military rocketry, and the National Aeronautics and Space Act, signed in July 1958, created a new civilian agency to take over all non-weaponized ventures into space. Having been created, ARPA nonetheless found reasons to survive, acquiring major research projects in ballistic missile defense and nuclear test detection. But it also became a general workshop for pet projects that the various armed services wanted investigated. Intended to be the dog, it had instead become the tail.
The first foray by ARPA into computing was, in a sense, busy work. In 1961, the Air Force had two idle assets on its hands and needed something for them to do. As the first SAGE direction centers neared deployment, the Air Force had brought on the RAND Corporation, based in Santa Monica, California, to train personnel and prepare the twenty-odd computerized air defense centers with operational software. RAND spun off a whole new entity, System Development Corporation (SDC), just to handle this task. SDC’s newly acquired software expertise was a valuable resource for the Air Force, but SAGE was winding down and they were running out of work to do. The Air Force’s second idle asset was a (very expensive) surplus AN/FSQ-32 computer which had been requisitioned from IBM for SAGE but turned out to be unneeded. The Department of Defense solved both problems by assigning ARPA the new research task of command-and-control, to be inaugurated with a $6 million grant to SDC to study command-and-control problems using the Q-32.
ARPA soon decided to regularize this research program as part of a new information processing research office. Around the same time, it had also received a new assignment to create a program in behavioral science. For reasons that are now obscure, ARPA leadership decided to recruit J.C.R. Licklider to oversee both programs. The idea may have come from Gene Fubini, director of research for the Department of Defense, who would have known Lick from his time working on SAGE.
Like Beranek, Jack Ruina, then head of ARPA, had no idea what he was in for when he brought Lick in for an interview. He thought he was getting a behavioral science expert with a dash of computing knowledge on the side. Instead he got the full force of the man-computer symbiosis vision. Computerized command-and-control required interactive computing, Licklider argued, and thus the primary thrust of ARPA’s command-and-control research program should be to push forward the cutting edge of interactive computing. And to Lick that meant time-sharing.
Time-sharing systems originated with the same basic principle as Wes Clark’s TX series: computers should be convenient for the user. But unlike Clark, the proponents of time-sharing believed that a single computer could not be used efficiently by a single person. A researcher might sit for several minutes pondering the output of a program before making a slight change and re-running it. During that interval the computer would have nothing to do, its great power going to waste, at great expense. Even the hundred-millisecond intervals between keystrokes loomed as vast gulfs of wasted time for the computer, in which thousands of computations could have been performed.
All of this processing power need not go to waste, if it could instead be shared among many users. By slicing up the computer’s attention so that it could serve each user in turn, the computer designer could have his cake and eat it – provide the illusion of an interactive computer completely at the user’s command, without wasting most of the capacity of a very expensive piece of hardware.
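The mechanics of that sharing can be pictured with a toy scheduler. The sketch below is a hypothetical Python illustration of the time-slicing idea, not the code of any historical system (CTSS's actual supervisor used a two-level priority queue, as noted in the footnotes): the supervisor cycles through the active users, granting each a short burst of processor time before moving on, so that every user appears to have the machine to themselves.

```python
# Toy illustration of time-slicing (hypothetical; real supervisors such as CTSS
# used multi-level priority queues rather than a simple cycle like this one).
from collections import deque

QUANTUM_MS = 200  # assumed time slice each user receives before the supervisor moves on

class UserJob:
    def __init__(self, name, work_ms):
        self.name = name
        self.remaining_ms = work_ms  # compute time this user's program still needs

def run_supervisor(jobs):
    """Cycle through active users, giving each a short burst of the processor."""
    queue = deque(jobs)
    while queue:
        job = queue.popleft()
        burst = min(QUANTUM_MS, job.remaining_ms)
        job.remaining_ms -= burst
        print(f"served {job.name} for {burst} ms")
        if job.remaining_ms > 0:
            queue.append(job)  # unfinished jobs rejoin the back of the line
        else:
            print(f"{job.name} finished")

run_supervisor([UserJob("corby", 450), UserJob("mccarthy", 300), UserJob("student", 150)])
```

Because each slice is far shorter than the pauses between a human user's keystrokes, every person at a terminal experiences what feels like undivided attention from the machine.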
The concept was latent in SAGE itself, which could serve dozens of different operators simultaneously, each monitoring his own sub-sector of airspace. After meeting Clark, Licklider immediately saw the potential to combine the shared user base of SAGE with the interactive freedom of the TX-0 and TX-2 into a potent new mix, and this formed the basis of his advocacy for man-computer symbiosis, which he proposed to the Department of Defense in a 1957 paper entitled “The Truly Sage System, or Toward Man-Machine System for Thinking.” In that paper he described a computer system for scientists very similar in structure to SAGE, with a light-gun input, and “simultaneous (rapid time-sharing) use of the machine computing and storage facilities by many people.”
Licklider, though, lacked the engineering chops to actually design or build such a system. He managed to learn the basics of programming at BBN, but that was as far as his skills went. The first person to reduce time-sharing theory to practice was John McCarthy, an MIT mathematician. McCarthy wanted constant access to a computer in order to craft his tools and models for manipulating mathematical logic, the first steps, he believed, towards artificial intelligence. He put together a prototype in 1959, consisting of an interactive module bolted onto the university’s batch-processing IBM 704 computer. Ironically, this first “time-sharing” installation had only one interactive console, a single Flexowriter teleprinter.
By the early 1960s, however, the MIT engineering faculty as a whole had become convinced that they should invest wholesale in interactive computing. Every student and faculty member with an interest in programming who got their hands on an interactive machine got hooked. Batch-processing made very efficient use of the computer’s time, but could be hugely wasteful of the researcher’s – the average turnaround time for a job on the 704 was over a day.
A university-wide committee formed to study the long-term solution for the growing demand for computing resources at MIT, and time-sharing advocates predominated. Clark fought a fierce rearguard action, arguing that the move to interactivity should not mean time-sharing. As a practical matter, he argued that time-sharing meant sacrificing interactive video displays and real-time interaction, crucial features of the projects he had been working on with the MIT biophysics lab. But more fundamentally, Clark seemed to have a deep philosophical resistance to the idea of sharing his workspace. As late as 1990, he refused to connect his computer to the Internet, and stated outright that networks “are a mistake” and “don’t work.”5
He and his disciples formed a sub-sub-culture, a tiny offshoot within the already eccentric academic culture of interactive computing. But their arguments in favor of small, un-shared computer workstations did not find purchase with their colleagues.6 Given the cost of even the smallest individual computer at the time, such an approach seemed economically infeasible to the other engineering faculty. Moreover, most assumed at that time that computers – the intellectual power plants of a dawning information age – would benefit from economies of scale, in the same way that physical power plants did. In the spring of 1961, the final report of the long-range study committee sanctioned large-scale time-sharing systems as the way of the future at MIT.
By that time, Fernando Corbató, known to colleagues as “Corby,” was already working to expand the scope of McCarthy’s little experiment. A physicist by training, he learned about computers while working on Whirlwind in 1951 as a grad student at MIT.7 After completing his doctorate he became an administrator for MIT’s newly formed Computation Center, built around the IBM 704. Corbató and his team (initially Marge Merwin and Bob Daley, two of the best programmers in the Center) called their time-sharing system CTSS, for Compatible Time-Sharing System – so-called because it could run simultaneously with the 704’s normal batch-processing operations, seamlessly snatching computer cycles for users as needed. Without this compatibility the project would indeed have been impossible, because Corby had no funding for a new computer on which to build a time-sharing system, and shutting down the existing batch-processing operation was not an option.
At the end of 1961, CTSS could support four terminals. By 1963, MIT hosted two instances of CTSS on 3.5 million dollar transistorized IBM 7094 machines, with roughly ten times the memory capacity and processing power of their 704 predecessor. The system’s supervisor software passed through the active users in a roughly round-robin fashion8, servicing each for a fraction of a second before moving on to the next. Users could store programs and data in their own private, password-protected area in the computer’s disk storage, for later use.9
Each computer could serve roughly twenty terminals. That was enough to not only support a couple of small terminal rooms, but also to begin spreading access to the computer out across Cambridge. Corby and other key individuals had office terminals, and, at some point, MIT began providing home terminals to technical personnel so that they could do system maintenance at odd hours without having to come on-campus. All of these early terminals consisted of a typewriter with some modifications to support reading from and writing to a telephone line, plus a continuous feed of perforated paper instead of individual sheets. Modems connected the terminals via the telephone system to a private exchange on the MIT campus, via which they could reach the CTSS computer. The computer thus extended its sensory apparatus over the telephone, with signals that went from digital to analog and back. This was the first stage in the integration of computers into the telecommunications network. The mixed state of AT&T with respect to regulation facilitated this integration. The core network was still regulated, and required to provide private lines at fixed rates, but a series of FCC decisions had eroded its control over the periphery, and thus it had very little say over what was attached to those lines. MIT needed no permission for its terminals.
The desired goal of Licklider, McCarthy, and Corbató had been to increase the availability of computing power to individual researchers. They had chosen the means, time-sharing, for purely economic reasons – no one could imagine buying and maintaining a computer for every single researcher at MIT. But this choice had produced unintended side-effects, which could never have been realized within Clark’s “one man, one machine” paradigm. A common file area and cross-links between user accounts allowed users to share, collaborate, and build on each other’s work. In 1965, Noel Morris and Tom Van Vleck facilitated this collaboration and communication with a MAIL program that allowed users to exchange messages. When a user sent a message, the program appended it to a special mailbox file in the recipient’s file area. If a user’s mailbox file had any contents, the LOGIN program would indicate it with the message “YOU HAVE MAIL BOX.” The contents of the machine itself were becoming an expression of the community of users, and this social aspect of time-sharing became just as prized at MIT as the initial premise of one-on-one interactive use.
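The mailbox mechanism is simple enough to convey in a few lines. The snippet below is an illustrative Python sketch of the append-to-a-mailbox-file idea described above, not the original CTSS program; the directory layout, file name, and function names are invented for the example.

```python
# Illustrative sketch of the CTSS MAIL idea: append messages to a per-user mailbox
# file, and have login check whether that file has any contents. (Hypothetical
# Python; the paths and names here are invented, not those of the original system.)
import os

def send_mail(sender, recipient, text, root="users"):
    """Append a message to the recipient's mailbox file in their private file area."""
    mailbox = os.path.join(root, recipient, "MAIL_BOX")
    os.makedirs(os.path.dirname(mailbox), exist_ok=True)
    with open(mailbox, "a") as f:
        f.write(f"FROM {sender}: {text}\n")

def login_banner(user, root="users"):
    """At login, report whether anything is waiting in the user's mailbox."""
    mailbox = os.path.join(root, user, "MAIL_BOX")
    if os.path.exists(mailbox) and os.path.getsize(mailbox) > 0:
        return "YOU HAVE MAIL BOX"
    return ""

send_mail("morris", "vanvleck", "The MAIL command seems to work.")
print(login_banner("vanvleck"))  # prints: YOU HAVE MAIL BOX
```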
Lick, having accepted ARPA’s offer and left BBN to take command of ARPA’s new Information Processing Techniques Office (IPTO) in 1962, quickly set about doing exactly what he had promised – focusing ARPA’s computing research efforts on spreading and improving time-sharing hardware and software. He bypassed the normal process of waiting for research proposals to arrive on his desk, to be authorized or rejected, instead going into the field himself and soliciting the research proposals he wanted to authorize.
His first step was to reconfigure the existing SDC command-and-control research project in Santa Monica. Word came down to SDC from Lick’s office that they should curtail their work on command-and-control research, and instead focus their efforts on turning their surplus SAGE computer into a time-sharing system. According to Lick, the basic substrate of time-shared man-machine interaction must come first, and command-and-control would follow. That this prioritization aligned with his own philosophical interests was a happy coincidence. Jules Schwartz, a SAGE veteran, architected the new time-sharing system. Like its contemporary, CTSS, it became a virtual social space, including among its commands a DIAL function for direct text messaging between on-line users, as in one recorded exchange between John Jones and a user identified by the number 9.
Next, to provide funding for the further development of time-sharing at MIT, Licklider found Robert Fano to lead his flagship effort: Project MAC, which lasted into the 1970s.10 Though the designers initially hoped that the new MAC system would support 200 simultaneous users or more, they had not reckoned with the ever-escalating sophistication and complexity of user software, which easily consumed all improvements in hardware speed and efficiency. When launched at MIT in 1969, the system could support about 60 users on its two central processing units (CPUs), roughly the same number per CPU as CTSS. However, the total community of users was much larger than the maximum active load at any given time, with 408 registered users in June 1970.11
Project MAC’s Multics system software also embodied several major advances in design, some of which are still considered advanced features in today’s operating systems: a hierarchical file system with folders that could contain other folders in a tree structure; a hardware-enforced distinction between execution in user and system mode; dynamically linked programs that could pull in software modules as needed during execution; and the ability to add or remove CPUs, memory banks, or disks without bringing down the system. Ken Thompson and Dennis Ritchie, programmers on the Multics project, later created Unix (a pun on the name of its predecessor) to bring some of these concepts to simpler, smaller-scale computer systems.
Lick planted his final seed in Berkeley, at the University of California. Project Genie12, launched in 1963, begat the Berkeley Timesharing System, a smaller-scale, more commercially-oriented complement to the grandiose Project MAC. Though nominally overseen by certain Cal faculty members, it was graduate student Mel Pirtle who really led the time-sharing work, aided by other students such as Chuck Thacker, Peter Deutsch, and Butler Lampson. Some of them had already caught the interactive computing bug in Cambridge before arriving at Berkeley. Deutsch, son of an MIT physics professor and the prototypical computer nerd, implemented the Lisp programming language on a Digital PDP-1 as a teenager before arriving at Cal as an undergrad. Lampson, for his part, had programmed on a PDP-1 at the Cambridge Electron Accelerator as a Harvard student. Pirtle and his team built their time-sharing system on an SDS 930, made by Scientific Data Systems, a new computer company founded in 1961 in Santa Monica.13
SDS back-integrated the Berkeley software into a new product, the SDS 940. It became one of the most widely used time-sharing systems of the late 1960s. Tymshare and Comshare, companies that commercialized time-sharing by selling remote computer services to others, bought dozens of SDS 940s for their customers to use. Pirtle and his team also decided to try their hand in the commercial market, founding Berkeley Computer Corporation (BCC) in 1968, but BCC fell into bankruptcy in the 1969-1970 recession. Much of Pirtle’s team ended up at Xerox’s new Palo Alto Research Center (PARC), where Thacker, Deutsch and Lampson contributed to landmark projects such as the Alto personal workstation, local networking, and the laser printer.
Of course, not every time-sharing project of the early 1960s sprang from Licklider’s purse. News of what was happening at MIT and Lincoln Labs spread through the technical literature, conferences, academic friendships, and personnel transfers. Through these channels other, windblown, seeds took root. At the University of Illinois, Don Bitzer sold his PLATO system to the Department of Defense as a means of reducing the cost of technical education for military personnel. Clifford Shaw created the JOHNNIAC Open Shop System (JOSS), which the Air Force funded in order to improve the ability of RAND employees to perform quick numerical analyses.14 The Dartmouth Time-Sharing System had a direct connection to events at nearby MIT, but was otherwise the most exceptional, being a purely civilian-funded effort sponsored by the National Science Foundation, on the basis that experience with computers would be a necessary part of a general education for the next generation of American leaders.
By the mid-1960s, time-sharing had not taken over the computing ecosystem. Far from it. Traditional batch-processing shops predominated in sales and use, especially outside university campuses. But it had found a niche.
In the summer of 1964, some two years after arriving at ARPA, Licklider moved on again, this time to IBM’s research center north of New York City. For IBM, shocked to have lost the Project MAC contract to rival computer maker General Electric after years of good relations with MIT, Lick would provide some in-house expertise in a trend that seemed to be passing it by. For Lick, the new job offered an opportunity to convert the ultimate bastion of conventional batch computing to the new gospel of interactivity.15
He was succeeded as head of IPTO by Ivan Sutherland, a young computer graphics expert, who was succeeded in turn, in 1966, by Robert Taylor. Licklider’s own 1960 “Man-Computer Symbiosis” paper had made Taylor a convert to interactive computing, and he came to ARPA at Lick’s recommendation, after a stint running a computer research program at NASA. His personality and background formed him in Licklider’s mold, rather than Sutherland’s. A psychologist by training and no technical expert in computer engineering, he compensated with enthusiasm and clear-sighted leadership.
One day in his office, shortly after he took over the IPTO, a thought dawned on Taylor. There he sat, with three different terminals, through which he could connect to the three ARPA-funded time-sharing systems in Cambridge, Berkeley, and Santa Monica. Yet they did not actually connect to one another – he had to intervene physically, with his own mind and body, to transfer information from one to the other.16
The seeds sown by Licklider had borne fruit. He had created a social community of IPTO grantees that spanned many computing sites, each with its own small society of technical experts, gathered around the hearth of a time-sharing computer. The time had come, Taylor thought, to network those sites together. Their individual social and technical structures, once connected, would form a kind of super-organism, whose rhizomes would span the entire continent, reproducing the social benefits of time-sharing on the next higher scale. With that thought began the technical and political struggle that would give birth to ARPANET.
Richard J. Barber Associates, The Advanced Research Projects Agency, 1958-1974 (1975)
Katie Hafner and Matthew Lyon, Where Wizards Stay Up Late: The Origins of the Internet (1996)
Severo M. Ornstein, Computing in the Middle Ages: A View From the Trenches, 1955-1983 (2002)
M. Mitchell Waldrop, The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal (2001)
- Licklider, “Man-Computer Symbiosis”, IRE Transactions on Human Factors in Electronics, March 1960. Interestingly, Licklider assumed that this was merely an intermediate stage of technological development, before computers developed the ability to think fully on their own. ↩
- $150,000, about $1.25 million in today’s dollars. ↩
- Last to go was Project Orion, a spaceship that was to be propelled by dropping nuclear bombs out of its tail and exploding them. ARPA dropped funding in 1959, since it could not justify it as anything other than a civilian space program rightfully belonging to NASA. NASA, for its part, didn’t want to sully its squeaky-clean image by association with nuclear weapons. The Air Force reluctantly provided enough money to keep the lights on, but the project finally died after the 1963 treaty that banned the testing of nuclear weapons in the atmosphere or space. Though the idea is technically sweet, it is difficult to imagine that any government would sanction launching a rocket full of thousands of nuclear weapons into the air. ↩
- Charles Babbage Institute, “Oral history interview with Wesley Clark” (1990). ↩
- Severo Ornstein, Computing in the Middle Ages. After losing support in Cambridge, Clark’s group, whom Ornstein called the “little-dealers,” as against the time-sharing “big-dealers,” set up shop at Washington University, in St. Louis. ↩
- He is also, to the best of my knowledge, the only person mentioned so far in this story who is still alive, as of January 2019, at the age of 92. ↩
- The actual scheduling algorithm was a bit more complex than a pure round-robin, and involved a two-level queue. Fernando J. Corbató, et al., “An Experimental Time-Sharing System”, Proceedings of the Spring Joint Computer Conference (1962). ↩
- David Walden and Tom Van Vleck, eds., The Compatible Time-Sharing System (1961-1973) (2011). You can see Corby describing the state of the system in a television program from 1963. The scheduling system was not strictly round-robin; it actually consisted of a two-level priority queue. ↩
- Variously interpreted as Mathematics And Computation, Multiple-Access Computer, and Machine-Aided Cognition. ↩
- Massachusetts Institute of Technology, “Project MAC Progress Report VII, July 1969 – July 1970” (July 1970). ↩
- The exact origins and original intentions of Project Genie and the Berkeley Timesharing System have not, as far as I can tell, been thoroughly excavated by researchers. Based on the surviving memos, it seems to have involved, at least in part, a plan to build a SAGE-like system using graphical displays and light guns for interaction. ↩
- A whole separate article could be written on the little-known Santa Monica tech scene at the time. RAND Corporation, SDC, and SDS, all headquartered there, were making significant contributions to cutting-edge computing in the early 1960s. ↩
- Shirley L. Marks, “The JOSS Years: Reflections on an Experiment”, December 1971. ↩
- The job didn’t work out. Lick was sidelined and miserable, and his wife felt isolated in the wilderness of Yorktown Heights. He transferred to IBM’s Cambridge office, then ended up back at MIT in 1967 as head of Project MAC. ↩
- Waldrop, The Dream Machine, 262. Whether this actually happened is somewhat difficult to tell in retrospect. Compare to Licklider’s own account in William Aspray and Arthur Norberg, “An Interview with J. C. R. Licklider,” 28 October 1988, which makes it clear that there was in fact only a single console: “…I had a console in my office. It was connected to computers here [i.e. MIT, the interview was conducted in Cambridge] and in California.” In fact there is no technical reason I’m aware of that a single terminal could not have been used to dial into three different computer systems. The clinching piece of evidence that convinces me that Taylor’s account is correct, and there were three terminals (at least by the time he took over), is from Where Wizards Stay Up Late, which actually specifies the three different models used (p.12). It’s hard to believe this level of specificity was simply invented or mis-remembered. Why have three terminals when one could do? Barring some technical limitation I’m unaware of, it’s possible that the users at IPTO wanted to keep the three “conversational” records with each computer clearly distinct, or that it was useful to have multiple people each using a different computer at the same time. ↩
"date": "2020-01-22T06:48:03",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9643476605415344,
"score": 2.859375,
"token_count": 6847,
"url": "https://technicshistory.com/2019/01/24/extending-interactivity/"
} |
Many vocational schools and community colleges offer pharmacy technician programs, some of which are accredited by the American Society of Health-System Pharmacists (ASHP). Associate degree programs usually take about two years to complete, while most certificates can be earned in a year or less. Job training and certification ensure that pharmacy technicians have an understanding of pharmacy operations and protocol, the ability to work with different prescription drugs, and a commitment to abide by ethical standards. A typical pharmacy technician program will include training in multiple areas on both the technical and practical sides of the occupation.
With a dramatic change in technology over the past several years and an increased scope of practice for pharmacists and other health care personnel, it is critical that pharmacists and pharmacy technicians stay abreast of changes. The world of pharmacy is a highly regulated profession, and this is why pharmacy law is included in the coursework in a pharmacy technician program. Laws regulate the recordkeeping and the labeling of all drugs handled within a pharmacy. With a significant change in practice settings, it is important to understand the laws involved in the pharmacy profession to ensure safe and legal practice.
Learn more about the Laws Governing Pharmacy Technician’s Practice.
Working as a pharmacy technician requires an understanding of what your scope of practice is. Providing advice outside of your scope could be considered breaking the law. The study of pharmacy ethics is covered in coursework to provide you and your cohort with an understanding of your moral obligations and virtues in the relationships you establish with patients and other healthcare professionals. As a pharmacy technician, you have a duty to observe the law and uphold the ethical principles of the profession.
Learn more about ethical issues in pharmacy tech.
With a higher degree of integration across a diverse range of healthcare settings and providers, both pharmacists and technicians are enhancing patient care now more than ever. Technicians need a clear understanding of healthcare systems and the role they play in helping to improve patient care. Pharmacies are no longer just order-and-product fulfillment centers; the profession has moved toward leadership within drug therapy management. By elevating the role of pharmacy technicians, other professionals have more time to work with patients one on one. With a model such as this, technicians become part of the care team. Coursework involving an overview of healthcare systems gives pharmacy technician students an overall understanding of the multifaceted practice settings in which a technician may work as part of a team of healthcare professionals.
Healthcare professionals have a language of their own, and pharmacy technician coursework provides an opportunity for future technicians to learn tips and tricks for picking up this unfamiliar terminology. This part of the curriculum prepares students and helps give them confidence and a basic understanding of medical terminology. While much of the terminology will be related to drug names and types, there will also be more general medical terminology to learn. You will be introduced to abbreviations and terms, which you will use daily in your career with co-workers, patients, and other healthcare providers. Knowing medical terms and abbreviations will make your job much easier as you enter your new profession.
Pharmacy technicians need to be familiar with the physical and chemical properties of a drug. Coursework in pharmacology is designed to provide you with an understanding of the function of drugs and the body’s role in processing and reacting to certain medications. You will study the field of toxicology to understand the adverse effects of poisons or chemicals on human beings. In addition, an overview of the body’s immune system and how the body maintains equilibrium or homeostasis will be explained.
Working as a technician, you will have frequent opportunities to apply such knowledge and will have an understanding of why certain medications are stored in a specific way and why certain drugs are administered in particular manners. This, in turn, gives you the confidence to perform your job-related duties correctly. Your overall knowledge of pharmacology will have an effect on how you are perceived by patients and the medical community.
Anatomy and Physiology
In any healthcare setting, the ability to practice safely is of the utmost importance. To achieve this, healthcare providers must have knowledge and understanding of the human body. An introduction to anatomy and physiology in pharmacy technician coursework is geared towards providing students with a general roadmap of the human body while learning about the major organ systems. Students gain knowledge in understanding how the organs work to keep you alive and get an overview of the systems:
- Lymphatic and respiratory systems
- Reproductive systems
Knowing how certain medications will affect how these major organ systems function is an important aspect of working in a pharmacy.
A wide range of knowledge and skills are necessary in order for pharmacy technicians to play a role in the improvement of public health while ensuring the safe and effective use of medications. As part of the healthcare system, a major role for technicians is medication order entry and fulfilling prescriptions. As part of these duties, you must have the ability to calculate individual drug doses and accurately convert between units of measurement.
Accordingly, it is essential that you have a fundamental understanding of everyday math problems that you will encounter regularly working as a pharmacy technician. In your pharmaceutical calculation coursework, you will focus on solving pharmacy-related math problems using ratio-proportion methods of calculation and dimensional analysis. In addition, you will focus on dosing, drug concentrations, and dilutions.
Considering that an error in a dosage calculation or dilution could pose a significant amount of harm to a patient, pharmacy calculations are considered the most important area of study for technicians. To effectively contribute to the daily practice of pharmacy, you must be capable of performing a variety of calculations.
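To make the ratio-proportion and conversion methods concrete, here is a short worked sketch in Python. The drug strengths, doses, and patient weight are invented teaching values chosen only to illustrate the arithmetic; they are not clinical guidance.

```python
# Worked examples of two common pharmacy calculations (hypothetical teaching values only).

def dose_volume_ml(ordered_mg, stock_mg, stock_ml):
    """Ratio-proportion: if stock_mg of drug is contained in stock_ml of solution,
    then stock_mg / stock_ml = ordered_mg / x, so x = ordered_mg * stock_ml / stock_mg."""
    return ordered_mg * stock_ml / stock_mg

def weight_based_dose_mg(mg_per_kg, weight_lb):
    """Convert pounds to kilograms (1 kg is approximately 2.2 lb), then apply a mg/kg order."""
    weight_kg = weight_lb / 2.2
    return mg_per_kg * weight_kg

# A 250 mg dose is ordered and the stock solution holds 125 mg per 5 mL.
print(dose_volume_ml(250, 125, 5))      # 10.0 mL to dispense
# An order of 5 mg/kg for a patient weighing 66 lb.
print(weight_based_dose_mg(5, 66))      # 150.0 mg
```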
During your coursework, it is likely you will have the opportunity to gain clinical experience, although state laws vary in regard to how and when pharmacy technician students gain their on-the-job training and what requirements are necessary. Schools have often partnered with retail drugstores for on-the-job training opportunities. Hands-on training at approved pharmacies or medical centers is another option for students.
State laws vary in certification requirements, as well, but most employers will only hire pharmacy technicians who are certified by the National Healthcare Association (NHA) or the Pharmacy Technician Certification Board (PTCB). Both of these programs require applicants to have a high school diploma.
NHA requires applicants to have at least one year of experience or to have completed a training program, while PTCB requires all applicants to pass an exam. Specialized training is available if you want to work exclusively for a retail drugstore chain. Becoming specialized will allow you to serve as a general pharmacy technician, a central pharmacy operations technician, or a community pharmacy technician.
Seeking certification is recommended to enhance job opportunities and your earning potential. Pharmacy technicians must complete 20 hours of continuing education in order to take the recertification exam, which is a requirement every two years.
As one can probably see, the curriculum is extensive and covers a broad range of topics. The more you can learn, the better off you will be. I wish you all the best in your journey. | <urn:uuid:0ba7ab39-c83e-480d-a9ab-ff8fb171f5d5> | {
"date": "2020-01-22T05:47:30",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9416484236717224,
"score": 2.859375,
"token_count": 1431,
"url": "https://theepharmacytechnicians.com/what-taught-pharmacy-technician-course/"
} |
With the continuous discourse on ideology that is often accompanied by words such as terrorism, globalisation or imperialism, the definition is not only ambiguous but has an unsavoury association with other terms that are themselves vague. Indeed, there certainly exists an adverse meaning to ‘ideology’ as being a belief system that legitimises a doctrine for violence and subordination. But what exactly is ideology? An ideology is said to be, “[a] cultural representation of the social order that makes this order seem immutable and supremely legitimate… placing it beyond change by human agencies, outside the history of human actions and social relations, and beyond the framework of material constraints, which are its ultimate determinants.”[i] According to Karl Marx, ideology or the superstructure is a conceptual method of social organisation. The collective are enticed into believing in ideological and material values, the latter of which is merely invented by the bourgeoisie; the oppressed are thus inadvertently supporting the ruling class’ domination. “Everyone believes his [bourgeois] craft to be the true one… [i]n consciousness – in jurisprudence, politics, etc. – relations become concepts.”[ii] Thus the superstructure contains a collection of historically retained ideas that legitimise the dominant classes.
Conversely, Michel Foucault analysed ideology – what he later names discourse – as a social function of truth that authenticates social stratification and hierarchical arrangements, whereby “like it or not, it [ideology] always stands in virtual opposition to something else which is supposed to count as truth.”[iii] Power in discourse can only emerge effectively when interpretation is no longer needed and is automatically processed as truth, which prompts repression and power. However, power in discourse is not always negative, but provides a pleasant and a productive network that efficiently conditions and closes the gap between politics and culture. This distinctly coincides with the superstructure, for not only are the elite exercising dominance over the masses but ideology exists because citizens desire it. Eric Hobsbawm highlighted the existence of what he referred to as the imagined national community,[iv] namely that the values set within ideological beliefs are merely invented to hold the administration of a State together by motivating a national character and providing political and social cohesion. “Politics is so deeply rooted in the native genius of each nation that the continuity of separate political traditions constantly resist the levelling forces at work in the social and economic spheres of modern life.”[v] However, this does not make the nation ‘unreal’ but should instead be viewed as a concept that enables, “[e]xperience and the interpretation of the world.”[vi]
Ultimately, power requires recognition.
The relationship between power and identity is most obvious in the new concept of the nation: the nation, first as a community of equal individual citizens and then as a community founded upon a shared culture, becomes the legitimate locus of power… strategically, identity not only legitimizes power but provides also an effective instrument for mobilization.[vii]
The legitimisation of ideological constructs often involves Othering or the proposition that x is more legitimate than y within essentialist categorisations, which is the view that all properties in an entity must contain the same attributes. Jean-Paul Sartre claims that the anti-Semite creates the ‘Jew’ by becoming an object representing what is loathed and thus causally becoming the very purpose or reason for his being and identity.[viii] The belief in the existence of properties or characteristics that are either universal or essential consequently legitimises these properties that are apparently eternally fixed. For instance, if the properties in x are eternal or essential, then it must be that the properties in y are not, and in such instances, the legitimisation of x leads to the domination or subjugation of y. Membership thus requires the acknowledgement that certain properties within the entity are eternal or essential, leading to recognition and thus power.
Nevertheless, subjugation is not always violent and can contain positive elements that are tolerated even by those being subjugated.[ix] As an instrument for political and social development, the ideological attitudes to modernisation have often been used as an apparatus in Turkish political rhetoric. Ziya Gökalp, a Turkish sociologist and political activist who influenced Mustafa Kemal Atatürk, claims that there are two functional processes of modernisation that have caused such massive structural changes in society. “The first was in culture-nations (Durkheim’s term for societies) where the advanced division of labor was creating an occupational group structure in which individuals were incorporated… the second level was that of civilisation, which Gökalp saw as the supranational grouping to which different nations belonged and in which they related.”[x] Atatürk believed that secularisation and modernity would gradually relegate the position religion has in both politics and society, yet, along with many secularists, this imagined interpretation of the possible future has thwarted the possibility of understanding alternative social and political processes. Instead, radical fundamentalism and religious and cultural revivalism are interpreted as a retrograde condition where people are reverting back to the old and inferior position because of their failure to adapt to the precipitating social transformations.
“The sense that religion has no place in contemporary politics is evidenced in common claims that people “retreat” or “take refuge” in religion to escape so-called rapid socio-political change. The implication of this language is that theopolitical actors and movements are at odds with historical necessity (almost pathologically so), and should not be as predominant as they are.”[xi] Modernity has paradoxically increased the vitality of religion. Originally thought to be unsympathetic to culture and society, globalisation has instead provided the room for religious and cultural development. Andrew Davison labels this as interpretative perplexity; what we once thought to be clear becomes more perplexing than originally presumed.[xii] Davison attempts to analyse the meaning behind these political prejudices (made especially by political scientists who engage in policy assessment), particularly the convincing idea of historical development and the saturation process of social and political globalisation. Prejudices regarding the apparent direction of secularism have interrupted a better comprehension of theopolitics (theocracy) in contemporary political discourses.
Instead of acknowledging these prejudices and attempting to work comparatively, political theorists and scientists have adopted methodological attitudes that only justify secularisation. Thus, using hermeneutics to explain the interpretation of political language and the deeper expressive meanings behind these interpretations, Davison references Hans-Georg Gadamer’s idea that prejudice guides interpretation.[xiii] Though some have argued that cultural change and development through global expansion and modernity threatens the existence of past traditions and long-established customs, others maintain that it is a necessary historical process that improves the conditions of society. “[P]atterns of behaviour identified as modern tend to prevail over those considered to be traditional… when universalistic norms supersede particularistic ones.”[xiv] Emile Durkheim was an early figure who sought an understanding of the function and significance religion has vis-à-vis maintaining the balance of society. Structural functionalism is a social systems paradigm that analyses how smaller elements in society play a functional role in the whole of the social system.
According to Durkheim, collective representations are conditioned ideals, a type of intellectual and emotional semiotic interaction within a group or society that legitimises shared historical meaning. “It is also a symbolic resource: an actor who does not conceive of him/herself as a link to an historical chain cannot elaborate a discourse of legitimization or a teleological vision that gives a sense to his actions; he/she cannot give a meaning to his/her present combats.”[xv] According to Lowell Dittmer, symbols transcend objective interpretations and are no longer dependent on referential meaning, thus extending space and time.[xvi] Symbols become the autonomous link between a political structure and political psychology, whereby “[s]ymbols tend to merge with ‘language’ on the one hand and with the substantive ‘reality’ that language represents on the other.”[xvii]
Semiotics exposes features of cultural symbolism and the interaction with belief-systems, since group symbols can illustrate peculiar features that the materialist approach to social analysis may not capture. It can provide a useful introduction to the influences and properties of a given culture by reducing communication to symbolic exchanges. “Although it is legitimate to treat social relations – even relations of domination – as symbolic interactions, that is, as relations of communication implying cognition and recognition, one must not forget that the relations of communication par excellence – linguistic exchanges – are also relations of symbolic power in which the power relations between speakers or their respective groups are actualized.”[xviii]
While Sartre believed that all people are essentially free and are built by nothing but the choices that they make, identity and recognition play a pivotal role in current political and social dynamics, which therefore makes identity wholly deterministic. The dichotomy between individuality and the deterministic social environment is that the latter can facilitate the decision-making process; since individuality or freedom is isolating and thus, by extension, fearful, or at the very least the co-deterministic environment substantiates this fear of individuality so as to endorse conformity, what eventuates is the diminishment of one’s humanity.[xix] To overcome this fear and escape from freedom, the individual makes one choice and that is to submit to the precipitating social environment; thus identity becomes symptomatic of this conformity and ‘being’ or individuality becomes unconscious and identity inauthentic. This is particularly effective in a social environment that lacks agencies that support individual autonomy, such as education and justice. Thus prejudice becomes a product of this dynamic between the individual and society and is utilised as a socio-communicative tool to interpret the dialectic of nature and historical determinism, albeit the formula is paradoxically detrimental to a just social environment, since state legitimacy can be undermined by exclusive identity politics and antagonising relations between citizens and the state.
[i] J. Oppenheimer, “Culture and Politics in Druze Ethnicity”, 1:3 (1977) 623
[ii] Karl Marx, The German Ideology (Moscow: Progress Publishers, 1976) 101
[iii] Paul Rabinow, The Foucault Reader (London: Penguin Books, 1984) 60
[iv] E.J. Hobsbawm, Nations and Nationalism Since 1780: Programme, Myth, Reality (New York: Cambridge University Press, 1990) 159. See also Benedict Anderson’s Imagined Communities.
[v] Lucian W. Pye and Sidney Verba, Political Culture and Political Development (New Jersey: Princeton University Press, 1998) 111
[vi] Martin Sokefeld, Struggling for Recognition: The Alevi Movement in Germany and in Transnational Space (New York: Berghahn Books, 2008) 22
[vii] Ibid., 29
[viii] Jean-Paul Sartre, Anti-Semite and Jew: An Exploration of the Etiology of Hate
[ix] Martin Sokefeld, op. cit., 30
[x] Andrew Davison, Secularism and Revivalism in Turkey: A Hermeneutic Reconsideration (New Haven: Yale University Press, 1998) 111
[xi] Ibid., 2
[xii] Davison, op. cit., 114
[xiii] Hans-Georg Gadamer is a German philosopher who wrote Wahrheit und Methode (Truth and Method).
[xiv] Pye and Verba, op. cit., 12
[xv] White and Jongerden, op. cit., 13
[xvi] Lowell Dittmer, “Political Culture and Political Symbolism”, World Politics 29:4 (July 1977) 577. To extend space and time is to emotionally – rather than rationally – accept words to be true even if they are clearly proven false, e.g. Holocaust deniers.
[xvii] Ibid., 558
[xviii] Pierre Bourdieu and John B. Thompson, Language and Symbolic Power (Cambridge, MA: Harvard University Press, 1991) 37
[xix] Jean-Paul Sartre, Critique of Dialectical Reason
If you are looking for a deeper understanding of the ills in our food system and how to address them than “vote with your fork,” this is a book you should read as soon as possible. Land Justice stands in contrast with so many food movement books that never question the basic premise that with a few adjustments, we can correct the excesses of the capitalist marketplace. Eric Holt-Gimenez lays out the book’s basic premise: “Racial injustice and the stark inequities in property and wealth in the US countryside aren’t just a quirk of history, but a structural feature of capitalist agriculture. This means that in order to succeed in building an alternative agrarian future, today’s social movements will have to dismantle those structures. It is the relationships in the food system, and how we govern them, that really matter.” (P. 2)
A collection of essays, Land Justice brings together stories of old injustices and ongoing ones, stories that we all need to hear and take to heart. We learn about the Gullah Geechee farmers, George Washington Carver and Booker T. Whatley, the Republic of New Africa, the Land Loss Prevention Fund, women farmers, white and black, the Acequia Communities, the Mashpee Wampanoag Tribe, Rosalinda Guillen and farm worker organizing in Washington State, the People’s Community Market in Oakland, the Black Community Food Security Network in Detroit, and students taking action in Occupy the Farm in Berkeley.
Prefaces from three voices open Land Justice – a Native American, an African American and a family-scale farmer – voices that must be heard if we are to sort out the strands in the history of the “land problem” in this country and imagine a way forward towards a more just food system.
Winona LaDuke contrasts mainstream industrial, monocrop agriculture with the indigenous approach “based on biodiversity and the use of multiple locally adapted crops.” (P. xii) Plants, LaDuke tells us, are magical and “provide complex nutrients, medicinal values, cultural and spiritual connections, and they feed the soil.” She recounts the struggle of Native Americans for control of their land culminating in the successful class action suit, Keepseagle vs. Vilsack (1999) which won $680 million in reparations. While this award is far from adequate, it marks the resurgence and recovery of indigenous farming that is underway. LaDuke declares that it is time for “decolonization.”
Taking as her chant “This Land is Contested,” LaDonna Redmond laments the Indian removal that preceded the importation of slave labor: “The holocaust of the indigenous set the stage in the US for the rise of capitalism.” (P. xv) The free labor of 12 million enslaved Africans on stolen land “is what built the wealth of the so-called New World.” The Homestead Act, which allowed many landless European immigrants to access land, was not for former slaves. Redmond urges the solidarity of her people with Native Americans, the water and land protectors, and calls for unity against “corporate oligarchy and federal imperialism.” (P. xvii)
Belittling the ‘vote with your fork’ analogy, Iowa farmer George Naylor declares: “We need to recognize how market forces affect farmers, the land, and consumer behavior, and demand policy solutions to achieve a sustainable future.” (P. xix) Naylor insists that “We need to de-commercialize food and land.” To accomplish this, Naylor proposes that we replace the cheap food policy that has enabled corporate dominance, with a system based on “Parity,” the New Deal farm programs involving “conservation-supply management to avoid wasteful, polluting over production; a price support that actually set a floor under the market prices rather than sending out government payments; grain reserves to avoid food shortages and food price spikes; and a quota system that was fair to all farmers and changed the incentives of production.” (P. xxi)
The authors of these essays have every reason to be bitter and pessimistic given what they have experienced and the long history of atrocities that mar our country’s past. Yet, despite the recitals of inhuman cruelty and brutal greed, Land Justice leaves the reader energized and inspired by the writers’ courage and determination. Together they show us a path forward through alliances and collaboration with the marginalized communities represented to “change the politics of property.” This book makes a major contribution to helping us develop a radical and coherent program for transformative change. As Holt-Gimenez concludes his incisive introduction, the authors are “in a struggle to remake society.” It is up to us to harken to their passionate words and to take “land justice” as “both a vision and a clarion call.” (P. 13)
Patients with severe head injury need all the help they can get. Mannitol is one tool that is time-tested and cheap. But how do you decide who gets it and when?
Mannitol is a powerful osmotic diuretic that pulls extracellular water from everywhere, including the brain. By reducing the overall size of the brain, it somewhat lowers the pressure inside the skull (intracranial pressure, or ICP).
Mannitol can be used anytime during the acute phase of trauma care for three indications in patients with head trauma:
- Focal neurologic deficit. This is due to transtentorial herniation, and may manifest clinically as unilateral pupil dilation or hemiparesis. It may also be seen on CT scan.
- Progressive neurologic deterioration. This is typical of rising ICP and can be diagnosed when your previously talking patient becomes lethargic.
- Clinical evidence of high ICP. This is the Cushing response (hypertension with bradycardia). Do not treat this hypertension with other meds; it is a brain-protective mechanism!
The literature does not have any good studies that show effectiveness or survival benefit. However, most trauma professionals have seen the dramatic improvement in neurologic status that can occur after early administration.
Bottom line: Mannitol is cheap and it works! Consider it early if any of the three indications above are seen. And don’t forget to put a urinary catheter in immediately because the diuresis that it causes is impressive. And no studies thus far have been able to prove that hypertonic saline is any better or worse than mannitol.
Waste tires are a persistent and widespread problem in the United States. Many individuals assume that they can handle tire disposal on their own, either by stockpiling used tires on their land or by dumping tires in unregulated areas.
In our previous blog, “Tire Recycling Options and Why They Matter,” we discussed the astonishing statistics about waste tires and the benefits of recycling tires rather than sending them to a landfill or disposing of them yourself.
In this blog, we delve deeper into eight of the potential risks of handling tire disposal on your own rather than partnering with a professional waste tire disposal expert.
1. Landfill Crowding
Many individuals assume that as long as they take their spare tires to a landfill, they have disposed of this type of waste responsibly. However, one of the major issues with tire waste in the United States is the amount of space these non-biodegradable objects take up in landfills.
Tires can cause crowding and will often float to the surface of landfills after they’ve been covered, contributing to high disposal costs with no foreseeable end. Recycling is a much better option.
2. Fire Risk
Stockpiles of waste tires both on private land and in landfills pose serious health, safety, and environmental risks. Scrapped tires are, first and foremost, fire hazards. Tire rubber is highly flammable and particularly appealing to vandals.
Additionally, once a tire fire ignites, the rubber can potentially burn for months before it goes through the available fuel, even in smaller stockpiles. To reduce the risk of fires, tire storage facilities and recycling plants comply with meticulous regulations about the environment tires are kept in.
3. Groundwater Contamination
If you saw a fire burning, your first instinct would likely be to try and use water to put out the flame. While this tactic works for most fires, grease and tire fires are exceptions to the rule. Not only does pouring water on tire fires generally not put them out, but the choice can cause groundwater pollution.
As tires burn, the rubber melts and releases the chemicals used in tire manufacturing. A well-meaning passerby who pours water over this mess actually allows the chemical sludge to spread around and potentially reach fresh water sources.
4. Insect Infestation
While tires do not biodegrade, they do change as they sit in a stockpile or landfill over time. Specifically, waste tires often collect moisture on their surfaces and release methane gas. This combination creates the perfect environment for mosquito infestations.
Illegal dumping grounds, tire landfills, and stockpiles can encourage populations of particularly dangerous mosquitoes and increase the incidences of disease like West Nile virus.
5. Poor Air Quality
In addition to the methane gas release as tires age in direct sunlight and other weather conditions, the high risk of tire fires also contributes to a high risk of air pollution. Tire fires can contaminate local air with the same chemicals that pouring water on a tire fire could spread into the groundwater.
6. Regulation Noncompliance
Because waste tires have become such a serious problem in the United States, most states have implemented regulations about how tires should be dealt with when they are no longer useful. When you resort to DIY methods like stockpiling or dumping, you likely violate your state restrictions or federal regulations.
This type of noncompliance could lead to fines or even criminal charges if your actions directly led to a dangerous situation caused by your waste tires. Instead, work with a disposal and recycling company that is certified to deal with scrap tires.
7. Resource Waste
Tire production requires large quantities of natural resources as well as synthetic chemicals. Old tires are mostly recyclable, which allows these resources to be reused in other products like paving materials, fuel, or insulation.
When tires are allowed to sit in a dump and not decompose, all of the resources used to create those tires go to waste, requiring the use of more of the same resources for manufacturing purposes. As mentioned in our last blog on waste tires, an estimated 77% of scrap tires are not recycled, meaning more than three-quarters of tire resources go to waste.
8. Soil Degradation
Studies of the effects of waste tire piles on the surrounding ecosystem indicate that the chemicals released as tires age can fundamentally alter the local soil. Specifically, waste tires may eradicate the beneficial bacteria that provide nutrients for flora and fauna.
Tire recycling eliminates the long periods of time that waste tires sit unattended in undeveloped areas altering the ecosystem.
The next time you invest in new tires, make sending the old set off for proper storage and recycling the last critical step in the purchase and installation process. Inquire at a reputable local disposal contractor to determine how to properly dispose of your waste tires.
For comprehensive disposal services, including waste tire removal in compliance with Illinois EPA and DEM regulations, trust the experienced team at Tri-State Disposal.
A recent World Wide Fund for Nature-Pakistan (WWF-Pakistan) study recommends awareness-raising campaigns to encourage farmers to adapt to climate change impacts like extreme changes in temperature.
The per-acre yield of the three major crops in the province may fall between 8 and 10 per cent over the next two decades if farmers are not encouraged to adapt to climate change, says Ali Dehalvi, a researcher with the team that worked on the Climate Change in the Indus Eco-region study.
He says this amounts to a loss of up to Rs30,000 per acre for growers of wheat, rice and cotton crops.
The study identifies varying harvest and cultivation timings, choice of crops grown in a year and inputs and various on-farm soil and water conservation techniques as the strategies employed by farmers to adapt to the impacts of climate change. However, Dehlavi says, the proportion of farmers using these strategies is not very high. “Only about 50 percent of our respondents are using these techniques to prevent losses in crop yields,” he says.
Dehlavi calls for awareness raising programmes for farmers on changing patterns of rainfall and extreme temperatures. He says these should be undertaken jointly by agricultural extension officers of the provincial government and staff of non-governmental organisations. “The biggest challenge is to encourage farmers to experiment with these adaptation techniques to see what works in their specific conditions,” he adds.
Meanwhile, Dr Mohsin Iqbal, head of the Coordination and Agricultural Section at the Global Change Impacts Studies Centre in Islamabad, suggests that development of seeds that are resistant to varying temperatures is the only sustainable solution to the impact of climate change in the country. He says extreme temperatures will likely shorten the growth period of crops like wheat and rice and increase the quantity of water required for growth. This will impact yield quality as well as quantity, he adds.
A recent study by Germanwatch places Pakistan sixth amongst countries most affected by extreme weather events between 1994 and 2013. Pakistan Meteorological Department Chief Meteorologist Dr Ghulam Rasul says recent rains in the Punjab and sudden rises and falls in temperature are related to climate change. He says such changes in weather at this time of the year can damage the wheat crop nearing harvest in the province. Rasul says his 2012 research found a 0.5 per cent increase in average temperature in the country between 2000 and 2012.
Published in The Express Tribune, March 18th, 2015.
Online Course
The package components are delivered online:
- The Athlete’s Guide to Diabetes ebook
- Continuing education exam
Renowned researcher and diabetes expert Dr. Sheri Colberg offers best practices and tips for managing blood glucose levels for athletes of all ages. She provides the most up-to-date information on
- insulin and other medications and their effects on exercise,
- nutritional practices and supplements, including low-carbohydrate eating,
- the latest technologies used to manage glucose, including continuous glucose monitoring (CGM),
- injury prevention and treatment as well as tactics for diabetes-related joint issues, and
- mental strategies for maximizing performance and optimizing health.
After reading this book and successfully completing the 50-question multiple-choice exam, you will be able to do the following:
- Summarize the types of exercise training that adults with diabetes should engage in.
- Explain the importance of different types of physical activity to diabetes management.
- Describe the basics of energy systems and how exercise metabolism is altered by diabetes.
- List the types of medications prescribed to manage blood glucose and describe their impact on activity.
- Understand dietary practices related to exercise and alterations for people with diabetes.
- Recognize the emerging role of technologies in diabetes exercise management.
- Assess the safety and effectiveness of exercise by those with diabetes-related health complications.
- Predict the mind-set of an athlete with diabetes that will lead to success.
- List the most common athletic injuries and how to prevent or avoid them.
- Explain the changes to diabetes regimens (diet, insulin) that may be needed during physical activity.
- Summarize the usual practices of active individuals with diabetes who engage in a variety of fitness, endurance, power-endurance, power, and recreational sports activities.
Audience
A continuing education course for strength and conditioning coaches, personal trainers, athletic trainers, and other certified fitness professionals.
Table of Contents
Part I. The Athlete’s Toolbox
Chapter 1. Training Basics for Fitness and Sports
Chapter 2. Balancing Exercise Blood Glucose
Chapter 3. Ups and Downs of Insulin and Other Medications
Chapter 4. Eating Right and Supplementing for Activity
Chapter 5. Using Technology and Monitoring to Enhance Performance
Chapter 6. Thinking and Acting Like an Athlete
Chapter 7. Preventing and Treating Athletic Injuries
Part II. Guidelines for Specific Activities
Chapter 8. Fitness Activities
Chapter 9. Endurance Sports
Chapter 10. Endurance–Power Sports
Chapter 11. Power Sports
Chapter 12. Outdoor Recreation and Sports
Appendix A. Diabetes, Sports, and Related Organizations
Appendix B. Diabetes, Sports, and Nutrition Websites
In times of major scientific and technological advances, we tend to think of our social world as a product of inevitable progress (or “manifest destiny”) originating with the emergence of the Enlightenment movement in eighteenth-century Europe. We associate the Enlightenment with the rigorous thinking of logic and mathematics; and, for the most part, we embrace the Enlightenment ideal that all problems may be solved through the systematic engagement of reasoning. We then view history as a direct path from scientific practices in the eighteenth century to contemporary neuroscience dependent on sophisticated scanning devices.
This view of history tends to overlook the emergence of a Counter-Enlightenment movement towards the end of the eighteenth-century, a period also associated with the rise of romanticism in opposition to the sterile formalities of classical structures. The term itself may have been coined only in the twentieth century by Isaiah Berlin, for whom the movement was highly significant in his study of the history of ideas. While Berlin had no trouble recognizing the material progress due to Enlightenment thinking, he felt that one should also acknowledge the limitations of the Enlightenment stance.
Most important was Berlin’s conviction that reason is not a solution for everything, coupled with a rigorous critique of idealist thinking. Ironically, he could take on the latter issue by accepting the Enlightenment on its own terms. Within that context one assumes that reason will lead one to an “ideal state.” That state may be in the distant future, but one can still gauge one’s progress in approaching it. Berlin, however, decided to extrapolate into that remote future with the innocent question, “What do we do when we get there?” The fallacy of idealism is the very idea that the ideal is a state, some static configuration that can no longer admit of change. In biological terms the only time an organism achieves such a static state is in death, and in ecological terms not even death is static.
Berlin preferred to think that life is not a matter of solving problems, as if all problems could be cast neatly in the mathematical style of Enlightenment thinking. Rather, life is a matter of “making do” in an ever-changing series of situations, both adverse and beneficent. Rather than aspiring to any state, life is an ongoing process, much of which involves taking false steps and then compensating for them. This point of view was nicely captured in the title Berlin gave to one of his collections of essays, The Crooked Timber of Humanity.
If Jacques Offenbach had confronted Berlin’s writing and taken the time to negotiate its often highly convoluted prose, he probably would have nodded in agreement. Most of Offenbach’s operettas are about human foibles and how, in spite of the adversities of those foibles, things can still work out by making the right compensatory moves. Thus, one way to view Les Contes d’Hoffmann (the tales of Hoffmann) is as an effort to escalate such Counter-Enlightenment thinking from operetta to serious opera. E. T. A. Hoffmann was, after all, one of the earliest authors to explore the aesthetic of romanticism, the “literary face” of Counter-Enlightenment thinking.
Laurent Pelly’s staging of the current San Francisco Opera production of Les Contes d’Hoffmann does much to advocate the Counter-Enlightenment message. It is at its most explicit in portraying Spalanzani as some B-movie version of Victor Frankenstein, the ultimate lampoon of Enlightenment thinking. (Mary Shelley’s Frankenstein can easily be read as a confrontation between Enlightenment and Counter-Enlightenment thinking cast in the literary framework of the Romantic movement.) However, there is also Pelly’s decision to play up the diabolical side (reinforced by the libretto text) of the nemesis figure in all four of his guises. It is almost as if Pelly knew that, in Hebrew, satan is a common noun meaning “opposition.” It is because of the presence of opposition that we do not advance along straight lines, growing, instead, that “crooked timber” brought about by false steps and compensations.
The integral edition of Les Contes d’Hoffmann edited by Michael Kaye and Jean-Christophe Keck makes it clear that Offenbach, himself, had to contend with many of his own false steps and compensations while working on this opera. Indeed, death intervened before it could be said that he had resolved those false steps to his satisfaction. (In Enlightenment language, Hoffmann’s score never achieved its “ideal state.”) Thus the legacy of Les Contes d’Hoffmann is one of Counter-Enlightenment thinking applied not only to the characters of the narrative but also to the very nature of making an opera.
Passive and active rainwater harvesting systems can reduce use of potable water for outdoor irrigation in new and existing development. Passive rainwater harvesting involves directing rainwater to the landscape using pipes, channels, berms, and basins, while an active approach involves collection of rainwater from a catchment area, such as a roof, and storage in barrels or cisterns/tanks. Basic rainwater harvesting systems are relatively easy to install.
Rainwater harvesting can be incentivized via a rebate program for existing homes and can be required in new construction through ordinance or development standards.
A growing number of communities in the Southwest have enacted rainwater harvesting ordinances. Santa Fe County, New Mexico, requires that new homes larger than 2,500 square feet include a rainwater harvesting system and a cistern/pump system, with a goal of capturing runoff from at least 85% of the roof area. The Flagstaff Rainwater Harvesting Ordinance (Ordinance 2012-03) requires passive rainwater harvesting techniques for new single-family residential houses in order to keep water onsite, and active systems for other developments. Commercial rainwater capture systems, because of the relatively large size of the catchment area, can capture large volumes of water for on-site irrigation.
Using this non-potable water supply for a non-potable use such as plant irrigation saves higher-quality treated water for drinking, which in turn saves treatment costs for utilities. Harvesting rainwater also reduces storm runoff by retaining and infiltrating water on site. This reduces the volume of pollutant runoff to streams from the urban environment.
A number of communities in Arizona, New Mexico, Texas and California offer rainwater harvesting incentives, including rebates or vouchers for the purchase and installation of systems; these incentives typically focus on the gallons of water that can be stored in a tank. Tucson, Arizona, is unique in that it offers rebates for both passive and active rainwater harvesting systems.
In 2008, the City of Tucson adopted the Tucson Rainwater Collection and Distribution Requirements (Ordinance 10597), the first of its kind in the nation. Development of the ordinance was spurred by an interest in water harvesting as one mechanism to address the water demand of increasing growth and the need to acquire more costly long-term water supplies. Tucson was also motivated by its long-term leadership and innovation in conservation, resource management, technology, policy and regulation, as well as wanting to create sustainable, cost-effective policies for new development.
The ordinance requires all commercial development and site plans to submit a rainwater harvesting plan with a landscape water budget that shows how 50% of the estimated yearly landscape water budget will be provided by a rainwater harvesting system. This requires early collaboration between the project civil engineer and landscape architect to ensure the grading plan captures sufficient rainwater for the landscape areas. Compliance is determined based on site inspections after site grading and prior to issuance of a certificate of occupancy. Failure to meet the rainwater harvesting requirement is considered water wastage subject to monetary fines. The ordinance also prohibits use of private covenants, conditions, and restrictions that prohibit rainwater harvesting.
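To make the 50% requirement concrete, here is a minimal back-of-the-envelope sketch of how a designer might estimate whether rooftop capture alone could supply half of a landscape water budget. It is not drawn from the ordinance or Tucson's design manuals; the roof area, rainfall, capture efficiency, and water budget below are illustrative assumptions, while the 0.623 gallons per square foot per inch of rain figure is a standard conversion factor.

```python
# Rough feasibility check for a commercial rainwater harvesting plan.
# Site-specific numbers are illustrative assumptions, not values from
# the Tucson ordinance or any real project.

GAL_PER_SQFT_PER_INCH = 0.623  # 1 inch of rain on 1 sq ft yields ~0.623 gallons


def annual_capture_gallons(roof_area_sqft, annual_rain_inches, efficiency=0.85):
    """Estimate yearly rooftop capture, discounted for first-flush and other losses."""
    return roof_area_sqft * annual_rain_inches * GAL_PER_SQFT_PER_INCH * efficiency


def meets_requirement(capture_gal, landscape_budget_gal, required_share=0.50):
    """Check whether captured rainwater covers the required share of the budget."""
    return capture_gal >= required_share * landscape_budget_gal


roof_area = 20_000   # sq ft of catchment (assumed)
rainfall = 11.0      # inches per year, roughly Tucson's average
budget = 200_000     # gallons per year landscape water budget (assumed)

capture = annual_capture_gallons(roof_area, rainfall)
print(f"Estimated capture: {capture:,.0f} gallons/year")
print("Meets 50% requirement:", meets_requirement(capture, budget))
```

In practice, Tucson projects often meet the standard with passive earthworks rather than tanks, so a real submission would also credit water directed to planting basins; the sketch above only illustrates the arithmetic of comparing estimated capture to a water budget.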
The ordinance resulted from a stakeholder group convened by the City and composed of builders, developers, environmental groups, and others, which worked to develop an acceptable proposal. The group agreed to the 50% requirement due to concerns by the local homebuilders association that a higher percentage would require the installation of more expensive cistern systems. As a result, passive rainwater harvesting systems are being used to meet the requirements. Even with the compromise, the Tucson Metropolitan Chamber of Commerce publicly opposed the ordinance as potentially driving development outside the city limits. This has not been proven, and at least one major redevelopment project chose to voluntarily incorporate the ordinance prior to adoption, demonstrating that some businesses are interested in building in a way that considers community values.
Tucson Commercial Rainwater Harvesting Ordinance 10597 is found within the City of Tucson Code, Chapter 6, Building, Electricity, Plumbing and Mechanical Code and Development Standard No. 10-03.0.
- Flagstaff Rainwater Harvesting Ordinance
- Santa Fe Rainwater Catchment Systems Ordinance for Commercial and Residential Development
- The Compendium of Rainwater Harvesting Resources can be found here.
- Western Resource Advocates also has resources on water harvesting in The Case for Conservation – Rainwater Harvesting
Congresswoman Haley Stevens (MI-11) has launched the Congressional Plastics Solutions Task Force, a coalition of lawmakers working together with state and local officials and industry representatives to facilitate investment in recycling technologies and promote education on plastics generation and recovery. Congresswoman Stevens has been a leader on recycling issues in Congress, chairing the first Science Committee hearing on recycling in a decade earlier this year. Reps. Kim Schrier (D-WA-08), Paul Tonko (D-NY-20), and Greg Murphy (R-NC-03) participated in the launch meeting.
From the food we eat to the clothes we wear, plastics have shaped every aspect of modern life. At the same time, insufficient strategies for recycling and waste management and the lack of robust secondary markets for plastics are creating steeper recycling costs for municipalities and devastating effects on public health. Less than 9 percent of all plastic created is recycled each year – and we are on course to generate over 12 billion metric tons of plastic waste by 2050. The majority of these plastics are accumulating in landfills and the environment, where they will outlive the next several generations.
The Congressional Plastics Solutions Task Force will convene periodic meetings to:
- Highlight changes and innovations in the private sector;
- Increase understanding of the challenges facing our domestic recycling infrastructure;
- Identify and discuss innovative approaches to plastics generation and recovery, as well as the latest research on those topics;
- Build consensus among Members of Congress around opportunities to address plastic waste.
“Plastics have become fundamental to almost all aspects of our lives, from food storage to 3-D printing technology, and have enabled us to make great technological advances,” said Congresswoman Stevens. “With this progress, however, comes a cost. Some estimates suggest that Americans dispose of 22 million tons of products that could have been recycled every year. We produce far more plastic than we can properly recycle, domestically and internationally. The extent of plastics pollution is becoming ever more apparent and more alarming. The news is not all bleak, however. There are a number of new technologies that are being developed to increase the efficiency and availability of plastics recycling, repurpose more recycled plastics into high-value products, and ultimately, reduce the impact of plastic on the environment and human health. By creating the Congressional Plastics Solutions Task Force, I hope to bring my colleagues together with industry and other stakeholders to build momentum around these emerging technologies and move toward a more sustainable future.”
“I am pleased and excited about the new Congressional Plastics Solutions Task Force that Congresswoman Haley Stevens is putting forth,” said Pat Williams, Canton Township Supervisor. “I have been at the table discussing this issue with Congresswoman Stevens since she took office. Her efforts are absolutely crucial, not only for our local communities but for our entire country.”
“For communities like Plymouth, the environmental benefits of recycling must be weighed against the economic and administrative factors that can make it difficult to maintain municipal recycling programs without increasing costs for residents,” said Paul Sincock, Plymouth City Manager. “As a City Manager, I know that a stronger end market for plastics would increase the value of the recycled goods we collect, helping to preserve and strengthen municipal recycling programs across the country. Congresswoman Stevens has been paying close attention to this issue, inviting me to testify before her Research & Technology subcommittee earlier this year to speak to the challenges of overseeing a municipal recycling program. I applaud the formation of the new Congressional Plastics Solutions Task Force, which will bring stakeholders to the table to help address our growing plastic waste crisis.”
“Rep. Haley Stevens’ leadership on an issue of critical importance to the environment is greatly appreciated,” said American Beverage Association President and CEO Katherine Lugar. “The Congressional Plastics Solutions Task Force will bolster our industry’s efforts to decrease the use of new plastic by increasing the collection of plastic so it can be remade into new products, and not wasted in landfills or as litter. This task force demonstrates how we can work together in support of new ideas and investments that will benefit consumers, conserve resources and protect the environment.”
There’s no question that the seeds of civilisation were sown with the beginning of agriculture. In fact, at different times, agriculture has shaped the rise of civilisation in every region of the world. In the “fertile crescent” between the Tigris and Euphrates rivers, it was rye. In Mesoamerica, it was squash and maize. In Egypt, it was the precursor of modern wheat. In China it was rice. Most of these were wild grasses that became domesticated, likely through a combination of the effects of climate change and inventiveness by small groups of people trying to feed themselves.
Between 13,000 and 7,000 years ago there were a number of alternating cold-dry and warm-wet periods.
During these times, hunting and gathering wasn’t as successful as it had been, and the ranges and habitats of plants and animals changed. As regional climates grew cold and drought-ridden, people collected wild foods to take with them when they moved to warmer, wetter areas and cultivated them.
“Necessity is the mother of invention” and, because humans are resilient and adaptable creatures, agriculture, or the use of deliberate practices for growing and harvesting plants and animals, came into being. People learned about genetic selection. And the fabric of civilisation has ever since been interwoven with the threads of agricultural innovation.
As agriculture spread, people began in earnest to take ownership of places where food grew. The more successful families created wealth for themselves and their communities. It wasn’t long before agriculture came to be associated with political power. The pharaohs of Egypt, Rome’s senate, the Aztec kings controlled the food stores of their civilisations. It’s not much different today. Only it’s not governments that wield the big stick of control over who grows what, where, and how much farmers get paid for their labours—or even who gets to eat—it’s “global market forces” and, increasingly, biotech corporations like Monsanto.
While the marketplace is the determinant for the cost of food commodities, governments set policies and regulatory controls, and sometimes enter into trade agreements that result in ruinous effects on farmers. In the past few decades, this has certainly happened in Canada and in British Columbia.
In Vancouver Island’s Alberni Valley, agriculture as a contributor to the local and regional economies has shrunk by about 70 percent from what it was, in part due to changing government policies. For instance, 25 years ago there were ten dairy farms, four or five hog farms, and a number of commercial poultry, beef, potato, and fruit and vegetable farms. Today, there may be four dairy farms left, no commercial hog, poultry or beef producers, and one or two comparatively small-scale vegetable and berry producers. There is one commercial greenhouse that grows mostly tomatoes and cukes.
When asked why this happened, former dairy farmers Bob and Ann Collins said it was due to a combination of factors. The price of grain rose. All the supporting infrastructure disappeared. Services, such as processing and transportation, were controlled from farther away. Where it used to cost one cent per litre to get locally produced milk to markets, soon all the local milk was trucked away and it cost producers three or more times as much money to get their products to market.
This meant as much as a 20 percent decrease in their already narrow margins. Profitability dropped like a stone. Farmers were told to “go big or go broke.” The few who could afford to “go big” are still in business, but they took up the slack and the others either went broke or went out of the farming business. Some stayed small and took outside jobs to make ends meet. This was hard on farmers and hard on families. Some farmland was sold and subsequently lost to farming.
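The arithmetic behind that 20 percent figure can be sketched quickly. The one-cent and three-cent per-litre transport costs come from the account above; the ten-cent-per-litre baseline margin is an assumed round number used only to show how a two-cent cost increase translates into roughly a one-fifth reduction in margin.

```python
# Back-of-the-envelope illustration of the margin squeeze described above.
# Transport costs are taken from the passage; the baseline margin is an
# assumption chosen for illustration only.

old_transport = 0.01    # $/litre to move locally processed milk to market
new_transport = 0.03    # $/litre after processing and trucking moved away
baseline_margin = 0.10  # $/litre assumed producer margin before the change

extra_cost = new_transport - old_transport
margin_reduction = extra_cost / baseline_margin

print(f"Extra cost: {extra_cost * 100:.0f} cents per litre")
print(f"Margin reduction: {margin_reduction:.0%}")  # about 20%
```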
How does this affect the local economy?
Collins explained: “Take McKinnon’s Dairy. They used to employ 16 people. There were a few part-timers, but mostly full-time employees who made a decent living. The workers lived in the Valley and they spent their money in the Valley, and all the infrastructure that supported the business was in the Valley. But the public, ever on the lookout to save a penny, didn’t give enough support to local producers. A quart of locally produced milk may have cost a few cents more than mass-produced milk from big farms outside the region. So local milk was trucked away and we traded those 16 jobs for a truck driver who brings milk into the Valley a couple of times a week and who might buy a cup of coffee and a sandwich at the local Tim Horton’s.”
Collins says, “I have absolutely no passion for growing food anymore because people have no respect for where it comes from. People have to be consistent and aggressive about seeking out and supporting local producers. But it’s too easy to go to the supermarket and buy cheap food that was grown in Chile or Mexico, or wherever, and that’s been soaked in pesticides and harvested while still unripe.”
About 10 years ago, Ann Collins and Lisa Daniels started Port Alberni’s Farmers’ Market under the auspices of the Farmers’ Institute. It’s been very successful for providing a venue for the public to buy some home-grown produce, including eggs, fruits and vegetables in season, honey products, cut flowers, locally processed meats, and locally made crafts. It’s probably the only farmers market in BC that operates 12 months of the year. While it’s successful and is in the process of expanding, it currently doesn’t allow for the financial scale that farmers must maintain to keep their farms viable and support a family.
Says Collins, “People say they want to save farmland, but if they won’t support local farmers, then they’re asking us to save it at our own expense. It isn’t going to happen. I’m just waiting for the day when a banker and a lawyer come walking up my driveway looking for something to eat.”
“What will save local farming in the future will be for producers to get away from commodity production. Farmers will need to become innovative, to embrace some value-added component that people will pay a fair price for. Whether it’s some form of agri-tourism or a specialty crop, such as a winery, farm diversification is key to rural sustainability.”
[See Watershed Sentinel, Vol 13 No 4, Aug-Oct 2003; “Farming in the New Millennium” and “How You Gonna Keep ‘Em Down on the Farm?”]
Full Moon — 3/8
Last Quarter — 3/14
New Moon — 3/22
First Quarter — 3/30
Daylight Saving Time begins for most states in the U.S. on March 11 at 2 a.m. local time. Advance clocks 1 hour.
The Vernal Equinox occurs on March 20 at 12:14 a.m. CDT, signalling the beginning of Spring. Daylight increases for three months until late June. At this time, the sun appears directly above the equator, meaning that individuals living at the equator have the sun appear directly overhead. The sun does not appear directly overhead from the Chicago area.
The month opens with the spectacular Venus-Jupiter gathering in the western sky, just after sunset. With binoculars and a clear horizon, locate Mercury low in the sky early in the month.
By mid-month, Jupiter and Venus appear close together. While millions of miles apart in space, the two planets appear about 3 degrees (roughly six full-moon widths) apart in the sky. The chart above shows the pair on March 12, one of the nights they appear closest. Notice the view is one hour later, as daylight saving time begins (advance your clock one hour) on March 11.
The animation above shows Venus and Jupiter each night during March 2012 in the early evening sky. Watch the two planets appear to converge and then separate.
March 24: The waxing crescent moon appears below Jupiter and Venus, near the western horizon.
March 25: Jupiter and the moon are paired nicely, with the moon appearing slightly higher and to the right of Jupiter.
March 26: Tonight, Venus and the moon are nicely paired with both objects appearing about the same height above the western horizon. This is the night to catch a classic photographic view of the moon and Venus together.
March 27: The moon stands above Venus and Jupiter as the planetary pair continues to separate.
At the same time that the brilliant group gleams in the western sky, Mars lies low in the eastern sky. It is the brightest starlike object in this part of the sky, but it shines far less brightly than the brilliant duo in the west. Mars appears slightly red-orange, and its color can be distinguished with binoculars. On March 3, Earth passes between the sun and Mars — an opposition. At this time, Mars is about 60 million miles away. An opposition for Mars occurs about every 25 months. Because Mars' orbit is moderately elliptical and this opposition occurs when Mars is near its farthest point from the sun (aphelion), the planet is not as close or as bright as at several previous oppositions.
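The roughly 25-month spacing between Mars oppositions follows directly from the orbital periods of the two planets. The short sketch below uses standard rounded period values; the result is approximate but lands close to the quoted figure.

```python
# Average time between Mars oppositions (the synodic period):
# Earth laps Mars once every 1/(1/P_earth - 1/P_mars) days.

EARTH_YEAR_DAYS = 365.25
MARS_YEAR_DAYS = 686.98

synodic_days = 1.0 / (1.0 / EARTH_YEAR_DAYS - 1.0 / MARS_YEAR_DAYS)
synodic_months = synodic_days / 30.44  # average month length in days

print(f"Synodic period: {synodic_days:.0f} days (~{synodic_months:.1f} months)")
# -> about 780 days, or roughly 25.6 months
```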
The waxing gibbous moon appears near Mars on March 6 and March 7.
A few days later, the Moon appears near Saturn and Spica. Saturn rises just around midnight in the southeastern sky. The chart above shows the planet-star pair with the moon for March 10 and March 11. The constellation Corvus is nearby.
The chart above shows the planets at mid-March 2012. Notice that an imaginary line extended from Earth to Venus goes to Jupiter. That is why the two planets appear close together in our sky, but they are widely separated in space. Additionally notice that our planet is between Mars and the sun — they are on opposite sides of Earth.
In early May, pale yellow carpets some hillsides of Northern Utah. The plants are a non-native known as Dyer’s Woad. This Asian member of the cabbage family has been cultivated as a dye and medicinal plant in Europe and Asia for 2000 years. Dyer’s Woad produces a glorious blue dye, but the process is tricky. No synthetic dye equals the color and characteristics of woad dyes.
Woad had arrived in Utah by 1932 as a seed contaminant. Now it is a noxious weed. Woad has a number of unique abilities that contribute to its vigor. Being a biennial plant, it spends the first year of life as a rosette of leaves, building reserves. In its second year, those reserves allow a woad plant to send forth a tall, lanky stem covered with pale yellow flowers that ultimately yield up to 10,000 seeds per plant.
Although Dyer’s Woad is not toxic, few animals relish it either. The seeds have chemicals that inhibit germination and root elongation in other plants, giving woad a competitive edge. Woad causes millions of dollars in losses each year, so control is a major issue. Herbicides and mechanical removal are best used against the rosettes, but nature has provided a native fungus that views woad as dinner. This rust fungus is very effective at eliminating or severely reducing seed production. Plants infected with the rust fungus are misshapen, wrinkly, and covered in dark spots. Those spots brim with rust spores. Therefore, when removing woad, leave the sickly plants to infect yet more woads.
This is Linda Kervin for Bridgerland Audubon Society.
Photos: Brad Krupp, Utah State University, Bugwood.org
Text: Michael Piep, Utah Native Plant Society
Intermountain Herbarium: http://herbarium.usu.edu/
Washington Weed Board: http://www.nwcb.wa.gov/weed_info/Written_findings
Edmonds, J. 2006. The History of Woad and the Medieval Woad Vat. http://www.lulu.com/product/paperback/the-history-of-woad-and-the-medieval-woad-vat/4928037
Shaw, R.J. 1989. Vascular Plants of Northern Utah. Utah State University Press, Logan, Utah. http://www.usu.edu/usupress/books/index.cfm?isbn=1417
Welsh, S.L., N D. Atwood, S Goodrich & L.C. Higgins. 2008. A Utah Flora, 4th Ed. Brigham Young University, Provo, Utah. http://www.amazon.com/Utah-Flora-Stanley-L-Welsh/dp/0842525564
Hi, I’m Holly Strand from Stokes Nature Center in beautiful Logan Canyon.
One cold dawn in January I glanced out the kitchen window at the snow-covered yard. Something moved and my eyes focused on a large dog-sized animal half hidden and crouched behind some rabbit brush. Funny I thought. I’ve never seen a large dog like that in this neighborhood. I continued to watch the backyard visitor but it was very still and hard to see. I raced upstairs to get a better view from the second floor window. As I reached the window I got a brief glimpse of the animal as it melted away into a large ravine. It was definitely not a dog. Dogs don’t “melt” as a method of locomotion.
Given the size, the color, the time of day, and the way it moved, I’m pretty sure that I saw a mountain lion. In winter our yard is a mountain lion pantry, plentifully stocked with live mule deer steaks browsing on our trees. Undoubtedly that’s what attracted my morning visitor.
The mountain lion, or cougar as it is often called, was once the most widely distributed mammal in the Americas. Nowadays, in the United States, it is mainly restricted to remote areas in the western part of the country, including Utah. According to the Division of Wildlife Resources, the only place in Utah they're not found is in the salt flats west of the Great Salt Lake.
Although they are found everywhere in the state, the animals are rarely seen. They are extremely secretive and largely nocturnal. They usually know where you are before you know where they are, so they can easily avoid human contact .
Mountain lion attacks are extremely rare, and there have been no deaths from them in Utah. Nevertheless, they can kill people, and with wildlife-human confrontations on the increase, it's good to know what to do if you meet one of these big kitties. First of all, don't run from or turn your back to a mountain lion. Its instinct is to chase running animals. Make yourself look as big as possible by raising your arms up high. Speak loudly and fight back if attacked. If you live near mountains or rocky cliff areas, keep a close eye on children and pets, especially at dusk and dawn.
Thanks to the Rocky Mountain Power Foundation for supporting the development of this Wild About Utah program.
For Wild About Utah and Stokes Nature Center, I’m Holly Strand.
Images: Courtesy USDA and US FWS Digital Library
Text: Holly Strand, Stokes Nature Center
Sources & Additional Reading:
Mountain Lion, Wildlife Notebook, Utah Division of Wildlife Resources, http://wildlife.utah.gov/publications/pdf/newlion.pdf
Starving Cougar Attacks Vernal Man, Hans Moran, Deseret News Nov. 12, 1997, http://www.deseretnews.com/article/594408/Starving-cougar-attacks-Vernal-man.html
Mountain Lion, National Geographic, http://animals.nationalgeographic.com/animals/mammals/mountain-lion.html
Americans spend $18 billion a year on deodorant and antiperspirant in a quest to cover up body odor and reduce sweating.1 For many, applying deodorant is a regular part of their morning routine, but it hasn’t always been this way.
The first deodorant, which killed off odor-producing bacteria, wasn’t introduced until 1888. The first antiperspirant, which reduces both bacterial growth and sweat production, came about 15 years later. Even then, however, most people were wary of applying such products to their underarms.
The Smithsonian wrote of these early products, “many people — if they had even heard of the anti-sweat toiletries — thought they were unnecessary, unhealthy or both.”2
It wasn’t until the early to mid-1900s that the idea of regular deodorant usage took off, thanks to a clever copywriter who created controversial advertisements warning women that their armpits might be smelly and they might not even know it.
The strategy of exploiting female insecurity worked, the Smithsonian reported, with sales of one deodorant reaching $1 million by 1927.3
In 2016, we’ve come full circle in a sense, as some people are realizing that applying various personal care products every day isn’t always necessary, effective or, importantly, healthy.
Do you need to worry about the health risks of applying your deodorant?
Let’s take a look…
Antiperspirants May Kill Off Beneficial Armpit Bacteria
It’s becoming widely known that your body’s microbes play an intricate role in your health. You cannot survive without them, and it’s best to work with them, for instance by eating fermented foods and avoiding antibacterial soaps, rather than killing them off indiscriminately.
Researchers recently revealed, however, that habitual use of deodorants and antiperspirants has a significant effect on armpit bacterial density and variation.
For starters, when use of such products was discontinued, there was a marked increase in bacterial density, approaching that which was found among individuals who regularly do not use any such products.
When antiperspirants were applied, bacterial density dramatically declined and differences in the types of bacteria were also noted. According to the study, which was published in the journal PeerJ:4
” … [I]ndividuals who used antiperspirants or deodorants long-term, but who stopped using product for two or more days as part of this study, had armpit communities dominated by Staphylococcaceae, whereas those of individuals in our study who habitually used no products were dominated by Corynebacterium.
Collectively these results suggest a strong effect of product use on the bacterial composition of armpits.
Although stopping the use of deodorant and antiperspirant similarly favors presence of Staphylococcaceae over Corynebacterium, their differential modes of action exert strikingly different effects on the richness of other bacteria living in armpit communities.”
There’s still a lot to learn about what health effects this microbial tweaking may cause, although it’s known that Corynebacterium bacteria, which produce body odor, may help protect against pathogens while Staphylococcaceae bacteria can be beneficial or dangerous.5
Antiperspirants May Increase Odor-Producing Bacteria in Your Armpits
The reason your sweat smells is that the bacteria living in your armpits break down lipids and amino acids found in your sweat into substances that have a distinct odor.
Antiperspirants address this problem using antimicrobial agents to kill bacteria and other ingredients such as aluminum that block your sweat glands. However, separate research has revealed antiperspirants affect the bacterial balance in your armpits, leading to an even more foul-smelling sweat problem.6
Those who used antiperspirants saw a definitive increase in Actinobacteria, which are largely responsible for foul-smelling armpit odor. Other bacteria found living in people’s armpits include Firmicutes and Staphylococcus, but the odors they produce are milder, and they’re not produced quite as readily.
It turned out that the milder, less odor-causing bacteria may be killed off by the aluminum compounds (the active ingredient in most antiperspirants), allowing bacteria that produce more pungent odors to thrive instead.
In some participants, abstaining from antiperspirant caused the population of Actinobacteria to dwindle into virtual nonexistence.
This means using an antiperspirant may make the stink from your armpits more pronounced, while quitting antiperspirants may eventually mellow the smell. The researchers explained in Archives of Dermatological Research:7
“A distinct community difference was seen when the habits were changed from daily use to no use of deodorant/antiperspirant and vice versa … Antiperspirant usage led toward an increase of Actinobacteria, which is an unfavorable situation with respect to body odor development.
These initial results show that axillary cosmetics modify the microbial community and can stimulate odor-producing bacteria.”
Is There a Link Between Antiperspirant and Cancer?
If you look at the ingredients in your antiperspirant, you’ll likely find that it contains aluminum, which acts as a “plug” in your sweat ducts to reduce sweating.
Studies also show a high incidence of breast cancer in the upper outer quadrant of the breast, nearest to where antiperspirants are applied, together with “genomic instability.”9 Back in 2005, researchers concluded:
“Given the wide exposure of the human population to antiperspirants, it will be important to establish dermal absorption in the local area of the breast and whether long term low level absorption could play a role in the increasing incidence of breast cancer.”
In 2013, researchers found increased levels of aluminum in nipple aspirate fluid from women with breast cancer compared to women without the disease. They also detected increased levels of inflammation and oxidative stress, noting:10
” … [O]ur results support the possible involvement of aluminum ions in oxidative and inflammatory status perturbations of breast cancer microenvironment, suggesting aluminum accumulation in breast microenvironment as a possible risk factor for oxidative/inflammatory phenotype of breast cells.”
Parabens in Deodorant May Be Linked to Breast Cancer
Parabens are preservatives that are found in many antiperspirants and deodorants. These chemicals have estrogenic activity in human breast cancer cells, and research published in 2012 found one or more parabens in 99 percent of the 160 tissue samples collected from 40 mastectomies.11
Separate research also detected parabens in 18 of 20 tissue samples from human breast tumors.12
While a definitive link hasn’t been made, the growing collection of research suggests caution is warranted. Considering chemical antiperspirants and deodorants are an optional product, it may be a risk that’s not worth taking.
Are Natural Deodorants Safe?
In general, deodorants may be somewhat safer than antiperspirants simply because they don’t typically contain aluminum. There are many brands of aluminum-free deodorants on the market as well, and some of these are safer alternatives. However, be aware that aluminum is just one of the toxic ingredients in personal care products — you can find other chemical toxins to avoid in your personal care products in the infographic below.
Alternatively, just use plain soap and water. This is what I use, typically in the morning and after I exercise. A paste made from baking soda and water also works as a natural deodorant.
Tips for Reducing Your Body Odor Naturally
Body odor certainly isn’t dangerous, but it can be offensive to others. Not everyone produces smelly sweat under their arms, by the way. About 2 percent of people have a single gene variation that leaves their underarms sweat- and odor-free. It’s the same gene variation that causes dry flaky earwax as opposed to “wet” sticky earwax. Research shows that even these odor-free people typically use deodorants and antiperspirants anyway, even though they don’t need to.13
If you have foul body odor, this is typically related to toxins being expelled; it’s probably not your “natural” scent. If you’re living a “clean” lifestyle, meaning a lifestyle in which you’re minimally exposed to dietary and environmental toxins and therefore have a low toxic burden, your sweat will be close to odorless.
Please don’t attempt to stop your body’s natural sweating by using antiperspirants. Profuse sweating can actually help decrease body odor. Your body releases sweat to help regulate your temperature and prevent you from overheating, and there are many other benefits to it as well.
Sweating helps your body to eliminate toxins, which supports proper immune function and helps prevent diseases related to toxic overload. Sweating may also help kill viruses and bacteria that cannot survive in temperatures above 98.6 degrees Fahrenheit, as well as on the surface of your skin.
Interestingly, research involving bacterial transplants to stop excessive body odor is being conducted. The idea is to fight odor-causing bacteria with their own kind: more bacteria. Researchers explained:14
“We have done transplants with about 15 people, and most of them have been successful … All have had an effect short term, but the bad odor comes back after a few months for some people.”
Another option for eliminating body odor, aside from washing regularly with soap and water, is exposure to sunlight. Ultraviolet light, specifically UVB, is a very potent germicide. I have noticed that tanning my armpits eliminates armpit odor nearly completely, probably because the UVB kills any odor-causing bacteria.
As mentioned, a paste of baking soda and water is an effective deodorant for some people. You can also try dabbing a bit of apple cider vinegar under your arms. If you want a deodorant that smells great and can be put into “stick” form like you may be used to, try Tree Hugger’s natural recipe below.15
Homemade Natural Deodorant With Coconut Oil16
(Latin: a suffix; expressing capacity, fitness to do that which can be handled or managed, suitable skills to accomplish something; capable of being done, something which can be finished, etc.)
A suffix that forms adjectives. The suffix -ible has related meanings; expressing ability, capacity, fitness; capable of, fit for, able to be done, can be done, inclined to, tending to, given to.
This list is only a small sample of the thousands of -able words that exist in English.
2. Of great importance, utility, or service.
3. Having admirable or esteemed qualities or characteristics.
2. Capable of gaining mastery over something; such as, an emotion, passion, or temptation.
2. Characteristic of something that is able, or liable, to change suddenly and unpredictably, or likely to change often: The stock market has variable investments with profits going up and then down; often as a result of statements made by certain government agencies.
3. Descriptive of anything that is inconsistent or uneven in quality or performance; not always the same: Joan's savings account has a variable interest rate which fluctuates daily.
2. Worthy of reverence, especially by religious or historical association: venerable relics.
3. With reference to places, buildings, etc.; hallowed by religious, historic, or other lofty associations: the venerable halls of the abbey.
4. Venerable; abbreviated, Ven. or V.; Roman Catholic Church. Used as a form of address for a person who has reached the first stage of canonization.
5. Used as a form of address for an archdeacon in the Anglican Church or the Episcopal Church.
6. Impressive or interesting because of age, antique appearance, etc.: a venerable oak tree.
7. Extremely old or obsolete; ancient; such as, a venerable house.
2. Able to maintain an independent existence or able to live after birth.
3. Capable of success, or continuing effectiveness; practicable; such as, a viable plan or a viable national economy.
Viable was originally restricted to the senses of "able to grow" and "able to survive"; as, in a viable fetus.
Its extended sense of "able to be done" or "worth doing"; as, in "viable alternatives", is now well established and acceptable in the English language.
2. A descriptive term referring to someone who can be injured or killed, as by a misfortune or a calamity.
2. Something which may be violated, broken, or injured.
Flint and alkaline salts are vitrifiable.
Ever wonder if you’re brushing your teeth correctly?! I know, silly right? You just slap on that toothpaste, scrub and you’re done. Nope! Not so fast. There are actually specific techniques you should use when brushing your teeth.
- Surfaces – Brush ALL surfaces of your teeth. Back, front & biting. Use a circular motion.
- Placement – Put half of the brush on your gums and half on your tooth. Use gentle circular motions all along the gum line. Do this for the back side by your tongue and roof of mouth too!
- Hold – Hold the toothbrush very lightly. Someone should be able to swipe the toothbrush right out of your hand!
- No rinsing! A dentist secret: when you are done, spit – but DO NOT RINSE! The toothpaste has all these fantastic vitamins inside that make your teeth strong. If you rinse with water you are rinsing all the good stuff off. So spit, as many times as you like, and wait at least 30 minutes before eating or drinking anything. This should be before bed.
There you have it – Now you are a pro! You can head into the Christmas holidays with healthy teeth that sparkle and shine!
Dr. Randi Polster
Welcome to the Volcanoes & Super Volcanoes Learning Section
VOLCANOES - a volcano is an opening, or rupture, in a planet's surface or crust, which allows hot magma, volcanic ash and gases to escape from the magma chamber below the surface. Volcanoes are generally found where tectonic plates are diverging or converging. A mid-oceanic ridge, for example the Mid-Atlantic Ridge, has examples of volcanoes caused by divergent tectonic plates pulling apart; the Pacific Ring of Fire has examples of volcanoes caused by convergent tectonic plates coming together. By contrast, volcanoes are usually not created where two tectonic plates slide past one another. Volcanoes can also form where there is stretching and thinning of the Earth's crust in the interiors of plates, e.g., in the East African Rift, the Wells Gray-Clearwater volcanic field and the Rio Grande Rift in North America. This type of volcanism falls under the umbrella of "Plate hypothesis" volcanism. Volcanism away from plate boundaries has also been explained as mantle plumes. These so-called "hotspots", for example Hawaii, are postulated to arise from upwelling diapirs with magma from the core–mantle boundary, 3,000 km deep in the Earth.
SUPER VOLCANOES - a supervolcano is a volcano capable of producing a volcanic eruption with an ejecta volume greater than 1,000 km3 (240 cu mi). This is thousands of times larger than normal volcanic eruptions. Supervolcanoes can occur when magma in the mantle rises into the crust from a hotspot but is unable to break through the crust. Pressure builds in a large and growing magma pool until the crust is unable to contain the pressure. They can also form at convergent plate boundaries (for example, Toba) and continental hotspot locations (for example, the Yellowstone Caldera). Although there are only a handful of Quaternary supervolcanoes, supervolcanic eruptions typically cover huge areas with lava and volcanic ash and cause a long-lasting change to weather (such as the triggering of a small ice age) sufficient to threaten species with extinction.
We ask you to please donate to the Big MaMa Earth Learning Academy, an environmental solution-based non-profit. Your donations will be utilized for continuing education about “Volcanoes & Super Volcanoes” and other important issues.
Our 2019 toolkit provides resources for the public to learn about antimicrobial resistance, as well as resources for healthcare, scientific, and administrative professionals.
Transmissible CRE infections have been recognized for the last two decades. The first major documentation of CRE occurred in Okazaki, Japan, in the 1980s. Today, CRE infections tend to be most common in India and Southeast Asia.
Fundamentally, doctors are blind without diagnostic tools. “It’s like looking for someone in a cave without any light,” says Dr. Mark Miller, Chief Medical Officer at bioMérieux. “You have no idea where you are.”
Intervention materials for KS3
A set of resource materials available at two standards. The blue series is designed to build confidence for pupils who arrive at secondary school having just achieved the expected standard but have insecure understanding. The red series supports rapid progression for those arriving below the expected standard.
The booklets evidence tracking of pupils' progress and scaffold meaningful marking through insightful feedback to help pupils understand how to move forward in their learning.
A set of nine mini booklets to support teacher assessment of the strengths and weaknesses of pupils arriving at, or having exceeded, the expected standard.
The booklets link to the key elements of the primary curriculum and, used as an assessment of prior knowledge, will help the teacher to rapidly diagnose insecurities in understanding. This will enable the focusing of teaching to ensure more able pupils are equipped to progress securely to the highest levels of attainment.
The booklet style promotes the importance of thinking and reasoning skills. This builds confidence and security of understanding.
The main components of the ALBATROS Esi-ToF are shown below. Ions are created at atmospheric pressure by an electrospray source (e.g. ion-spray, nano-spray, micro-spray, E-spray, etc.) or any other source of positive or negative ions.
After passing highly efficient differential pumping stages symbolized by the skimmer and the apertures, the ions are injected orthogonally into the time-of-flight mass spectrometer.
In normal mode of operation the ion gate is open, and there is no gas in the collision cell. Then the ions pass unhindered into the reflector and detector of the mass analyser.
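As an aside that is not part of the original instrument description, the sketch below illustrates how a time-of-flight analyser turns a measured flight time into a mass-to-charge ratio. The drift length and accelerating voltage used here are placeholder values chosen for illustration, not ALBATROS specifications; real instruments fold these constants into calibration coefficients fitted against ions of known mass.

```python
import math

# Idealised linear time-of-flight relation: an ion carrying charge z*e accelerated
# through a potential U leaves with kinetic energy z*e*U = 0.5*m*v**2, so over a
# field-free drift length L its flight time is t = L * sqrt(m / (2*z*e*U)).
E_CHARGE = 1.602176634e-19   # elementary charge in coulombs
AMU = 1.66053906660e-27      # atomic mass unit in kilograms

# Placeholder instrument parameters -- illustrative only, not ALBATROS values.
DRIFT_LENGTH_M = 1.0         # effective field-free flight path in metres
ACCEL_VOLTAGE_V = 10_000.0   # accelerating potential in volts

def flight_time_us(mz):
    """Predicted flight time in microseconds for an ion of mass-to-charge ratio mz."""
    mass_per_charge_kg = mz * AMU
    t_seconds = DRIFT_LENGTH_M * math.sqrt(
        mass_per_charge_kg / (2.0 * E_CHARGE * ACCEL_VOLTAGE_V)
    )
    return t_seconds * 1e6

def mz_from_time(t_us):
    """Invert the relation: recover m/z from a measured flight time in microseconds."""
    t_seconds = t_us * 1e-6
    mass_per_charge_kg = 2.0 * E_CHARGE * ACCEL_VOLTAGE_V * (t_seconds / DRIFT_LENGTH_M) ** 2
    return mass_per_charge_kg / AMU

if __name__ == "__main__":
    for mz in (100.0, 500.0, 1000.0):
        t = flight_time_us(mz)
        print(f"m/z {mz:7.1f} -> t = {t:6.2f} us -> recovered m/z = {mz_from_time(t):7.1f}")
```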
To perform MS/MS experiments, there is a collision cell and an ion gate at the Wiley-McLaren focus. Here a packet of ions is selected, which is broken up in the collision gas. The reflector then analyses the fragments.
Flashcards in Lecture 28 DA Deck (61):
Which bony process makes up most of the nose?
Frontal process of the maxilla.
What is found at the nose midline, separating it in two?
The septal cartilage.
What encloses the nasal cavity?
Two lateral cartilages, one on each side.
What forms the nostril inferiorly?
Two alar cartilages, one for each nostril.
What is a nare?
A nostril; one of the two external openings into the nasal cavity.
What is the nasal septum formed by?
The septal cartilage anteriorly, with the vomer and the perpendicular plate of the ethmoid bone behind it.
Which bone is found at the nose midline, directly posterior to the septum?
The vomer bone.
Where is the ethmoid bone?
Directly above the vomer bone, to which it attaches.
What can be found superior to the ethmoid bone?
The cribriform plate.
What is found superior to the cribriform plate, and what sits next to it?
The crista galli, and the olfactory bulb sits beside it.
Where is the posterior nare located?
At the very back of the nostril cavity at the sphenoid bone.
Which bone are the three conchae found on?
The superior and middle conchae are part of the ethmoid bone; the inferior concha is a separate bone.
What happens if you fracture the cribriform plate?
Leads to infection, haemorrhage and rhinorrhoea.
Does the medial wall of the nasal cavity have any projections?
No, it is smooth; the conchae project only from the lateral wall.
What is the difference between the superficial layer of the nostril versus the nasal cavity?
Nasal membrane is highly vascular and mucous, whereas the nostril has hair and keratinised skin.
Why is the nasal cavity so highly vascular? What significance does this have for asthmatics?
Blood vessels allow air to warm to room temperature. Cold air makes smooth muscle spasm, and bronchioles, which have no cartilaginous support, can spasm in asthmatics. It also humidifies the air.
How much of the nasal cavity is ciliated?
What is the purpose of cilia in the nasal cavity, and how can they be damaged?
They move towards the nasal cavity, to remove pathogens. They are damaged by cigarette smoke.
Where is the olfactory area, and why is it so named?
It is the superior space (top 1/3rd) of both the medial and lateral wall of the nasal cavity. Olfactory nerves pass here.
Where is the nostril vestibule?
The area lined with hair and keratinised skin.
Which area of the nose is most likely damaged in a nosebleed?
The nasal cavity, as it's vascular.
Name the three turbinates, and the wall they hang off.
Superior, middle, and inferior turbinates, hanging off the lateral wall.
What is the purpose of the turbinates?
Produce turbulence as air is breathed, allowing it to be closer to body temperature and higher humidity.
What is a consequence of the nasal cavity having such a small space?
It can easily be congested, especially when infected/inflamed.
What is found beneath each concha?
A meatus, named after its respective concha.
What can be found in the nasal meati?
Superior and middle have openings to the paranasal sinuses, while the inferior has the opening of the nasolacrimal duct.
What are the four main groups of paranasal sinuses?
The frontal, ethmoid, sphenoid and maxillary sinuses.
Are the paranasal sinuses lined with mucosa?
Yes, they are lined with mucosa.
Where is the frontal sinus?
It is most anterior at around the top of the nose and under the eyes.
Where is the ethmoid sinus?
It is posterior to the frontal sinus and a collection of small cavities.
Where is the sphenoid sinus?
Posterior to the ethmoid sinus and slightly inferior. It is one big cavity.
Where are the maxillary sinuses?
Lateral to the cheekbones, on either side of the cheek.
Which of the paranasal sinuses are in the ophthalmic region?
Frontal, ethmoid, and sphenoid.
What happens when paranasal sinuses of the ophthalmic region are blocked?
The pain is referred to the ophthalmic division of the trigeminal nerve, so pain will be felt in the ophthalmic region.
What happens when the maxillary sinus is blocked?
The pain will be referred to the maxillary division of the trigeminal nerve, and pain will be felt in the maxilla/cheek.
What is a danger of molar removal in terms of the maxilla region?
When molars are removed, they can fracture the maxilla and cause an infection of the maxillary sinus.
Where do the ophthalmic paranasal sinuses sit relative to the nasal cavity?
They sit above (superior to) the nasal cavity.
What can be said about the maxillary sinus concerning its drainage? How can it be improved?
It drains less well, and is easier for bacteria to infect it. Patient at a slight decline allows for better drainage.
Where is the opening of the maxillary sinus found?
On the medial wall.
If you have a cold, and are a side sleeper, the more superior nostril is unblocked, while the inferior one is blocked. Explain why.
Superior nostril is above the level of the opening for the sinus and will drain well. Inferior nostril is under the level of the opening and will not drain properly.
To which meati do the paranasal cavities open?
Only the superior and middle meati.
Where does the frontal sinus drain to?
To the middle meatus, to the hiatus semilunaris, at its superior point.
Where does the sphenoid sinus open to?
To the superior meatus, via the sphenoethmoidal recess.
Where are the openings for the posterior ethmoid sinuses found?
Beneath the sphenoethmoidal recess, to the superior meatus.
Where does the maxillary sinus open to?
To the hiatus semilunaris, its inferior point.
Where is the opening for the frontal sinus?
At the frontonasal duct.
Where do anterior air cells drain to?
To the frontonasal duct.
Where do middle air cells drain to?
To the bulla ethmoidalis.
Where is the bulla ethmoidalis found?
Posterior to the hiatus semilunaris.
What drains to the inferior meatus?
Inferior meatus has an orifice of the nasolacrimal duct, which connects to the lacrimal sac via the lacrimal duct.
Where is the lacrimal gland found?
Superolaterally to the eye in the orbital cavity.
Where is the lacrimal sac found?
In the lacrimal fossa, at the medial corner of the orbit.
If you split the nasal cavity into quadrants using an X, what supplies the posterior quadrant?
Both walls are supplied by the sphenopalatine artery. Fracture here can cause a high pressure nose bleed.
If you split the nasal cavity into quadrants using an X, what supplies the superior quadrant?
Supplied by the ethmoidal artery, which is a branch of the ophthalmic artery.
If you split the nasal cavity into quadrants using an X, what supplies the inferior quadrant?
Supplied by the greater palatine artery, which also supplies the upper jaw.
If you split the nasal cavity into quadrants using an X, what supplies the anterior quadrant?
Supplied by the superior labial arteries.
What supplies the ala of the nose?
Lateral nasal branch of the facial artery.
What is the nasal cavity drained by?
Submucosal plexus via ophthalmic, sphenopalatine and facial veins.
If you split the nasal cavity from the anterior nare to the cribriform plate with a line, what is the nerve supply to the superior/anterior and inferior/posterior half?
Anterior/superior half is supplied by the ophthalmic division of the trigeminal nerve.
Posterior/inferior half is supplied by the maxillary division of the trigeminal nerve.
Which nerve is the ophthalmic region of the nasal cavity supplied by?
Anterior ethmoidal nerve, which is a branch of the nasociliary nerve, which is a branch of the ophthalmic nerve, which is the ophthalmic division of the trigeminal nerve.
Iphicrates, (born c. 418 bc—died c. 353), Athenian general known chiefly for his use of lightly armed troops (peltasts); he increased the length of their weapons and improved their mobility by reducing defensive armour.
Iphicrates used his peltasts skillfully in the Corinthian War (395–387), nearly annihilating a battalion of Spartan hoplites near Corinth in 390. After the war he served the Persians as a mercenary commander, then returned to Athens. His expedition (373) to relieve Corcyra of a Spartan siege was successful, but he failed in attempts to recover Amphipolis (367–364).
Retiring to Thrace, Iphicrates fought for the Thracian king Cotys against Athens. The Athenians soon pardoned him and made him a commander in their struggle against their rebelling allies (Social War, 357–355). Iphicrates and two of his colleagues were prosecuted by Chares, the fourth commander, after they had refused to give battle during a violent storm. Iphicrates was probably acquitted but he died soon afterward.
Learn More in these related Britannica articles:
Chares, Athenian general and mercenary commander. In 357 bc Chares regained for Athens the Thracian Chersonese from the Thracian king Cersobleptes. During the Social War (Athens against her allies, 357–355), he commanded the Athenian forces; in 356 he was joined by Iphicrates and Timotheus with reinforcements. Having…
Army, a large organized force armed and trained for war, especially on land. The term may be applied to a large unit organized for independent action, or it may be applied to a nation’s or ruler’s complete military organization for land warfare. Throughout history, the character and organization of…
Tactics, in warfare, the art and science of fighting battles on land, on sea, and in the air. It is concerned with the approach to combat; the disposition of troops and other personalities; the use made of various arms, ships, or aircraft; and the execution of movements for attack or defense. This…
Remembering the Fallen: on this day in 1915, Rifleman Ernest Franklin, 1st Battalion, the Royal Irish Rifles, was killed in action during the Battle of Loos.
At the outbreak of the Great War he had held the position of butler of a house in Drogheda, County Louth in Ireland. By the time of his death, his battalion had spent almost a full year in the trenches. They had prepared for deployment to France during September and October of 1914, then arrived near Laventie in the Pas de Calais at the beginning of November. They saw action at the Battle of Neuve-Chapelle the following March, and while they helped to secure the village they suffered heavy casualties: nineteen officers including their colonel, as well as 440 men from other ranks. Two months later they once again saw success tempered by heavy casualties at the Battle of Aubers Ridge.
Rifleman Franklin lost his life on the first day of the Battle of Loos, which was the biggest attack of 1915 by the British, and the first time that they used poison gas. His family received this information in a letter from the British Red Cross: “I regret to say that the only news about the above is extremely sad. Rifleman Whitford, 8990, Machine Gun Section, 1st Royal Irish Rifles, now in hospital abroad, place unknown, says that Pte Franklin was killed in the action of September 25th, Rfm says :- "I did not see him killed myself , but I saw him lying dead on the field afterwards, when we were retiring. I went close to him and I know that he was dead." We never take a single report of death as final as even eye witnesses are sometimes mistaken. We are therefore continuing our enquiries, besides watching the Prisoners' Lists from Germany for Pte Franklin's name. If you wish to write to Rfm Whitford, the only way to do so is by writing C/o the Record Office at Dublin, marked "Please Forward" With much sympathy for your suspense…”
His body had not been recovered from the battlefield, but it is assumed that he was indeed killed in action, as averred by Rifleman Whitford. His name appears on the Ploegsteert Memorial to the Missing, which is a Commonwealth War Graves Commission memorial at Hainaut in Belgium. The memorial contains the names of 11,367 missing soldiers from the battles which were fought in the area around the village of Ploegsteert.
Ernest, born in Coventry in Warwickshire, was married - his daughter was born the day after his death. | <urn:uuid:12bc444f-6d24-45af-a06c-058127a3144c> | {
"date": "2020-01-22T05:31:41",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9888013005256653,
"score": 2.640625,
"token_count": 544,
"url": "https://www.britisharmedforcesthebest.com/single-post/2018/09/25/Rifleman-Ernest-Franklin-1st-Battalion-the-Royal-Irish-Rifles"
} |
If you're thinking of being more active in 2020 as one of your New Year's resolutions, why not try orienteering?
More and more people are discovering that orienteering is a fun and challenging activity that gets them exploring the great outdoors. They are gaining new skills in finding their way in unknown terrain and crossing rough and sometimes hilly ground.
You are always discovering somewhere new! It's a competitive sport with something for everyone.
Photo credit: Steve Rush (Bristol Orienteering Klub)
The sport of orienteering offers many benefits, but its foremost attraction is that it is fun!
1. Time outdoors is great for us physiologically:
For one it improves our Vitamin D levels. Getting a sufficient amount of vitamin D is important for normal growth and development of bones and teeth, as well as improved resistance against certain diseases. The Vitamin D Council says “your body is designed to get the vitamin D it needs by producing it when your bare skin is exposed to sunlight”.
2. Increased time being outdoors with nature improves people’s health and happiness:
Increased time outdoors with nature has been shown to significantly improve people’s health and happiness. The UK’s first month-long nature challenge, run in 2015 by the University of Derby, involved people "doing something wild" every day for 30 consecutive days. It showed that children exposed to the natural world had increased self-esteem. They also felt it taught them how to take risks, unleashed their creativity and gave them a chance to exercise, play, and discover. In some cases nature can significantly improve the symptoms of Attention Deficit Hyperactivity Disorder (ADHD), providing a calming influence and helping children concentrate. "Intuitively we knew that nature was good for us as humans, but the results were beyond brilliant," said Lucy McRobert, Nature Matters Campaigns Manager for The Wildlife Trusts.
3. Increased cardiovascular capacity:
Orienteering involves walking, jogging and running, often in rough terrain. All three of these activities increase aerobic capacity and cardiovascular strength. The Department of Health in their Start Active, Stay Active report state “regular physical activity can reduce the risk of many chronic conditions including coronary heart disease, stroke, type 2 diabetes, cancer, obesity, mental health problems and musculoskeletal conditions.”
4. Sharpens decision-making skills:
Orienteering offers the development of individual skills in navigating while problem-solving to locate each control. Decision making is paramount: Should I go left or right? Should I climb that hill or go the long way around it? These decisions that constantly arise require thinking more than quick reactions or instinct; again, that is why orienteering is often called the thinking sport.
Research shows even one 30-minute cardio session pumps extra blood to your brain, delivering the oxygen and nutrients it needs to perform at max efficiency. Cardio also floods the brain with chemicals that enhance functions such as memory, problem-solving, and decision-making.
5. A balance between the physical and the mind:
The ultimate quest for the orienteer is to find that balance between mental and physical exertion, to know how fast they can go and still be able to interpret the terrain around them and execute their route choice successfully.
Orienteering is a challenging outdoor adventure sport that exercises both the mind and the body. The aim is to navigate in sequence between control points marked on a unique orienteering map and decide the best route to complete the course in the quickest time.
Events are held by clubs across the country offering courses to suit all technical and physical abilities. Volunteers are always on hand to show you how to get started and give you some tips and tricks that will mean you get the most out of your orienteering experience.
We understand that a lot of people might be quite put off by starting at a competitive event, although there is no need to be, which is why we also offer a way to take part in your own time through permanent orienteering courses. These fixed orienteering routes are located across the UK for you try. You simply download a map and just go!
Many local orienteering clubs run regular coaching sessions, often at mid-week ‘club nights’, or on weekends. If you want to talk about how you can experience orienteering or how you can get involved, click here to find your local club. There are also University Orienteering Clubs across the UK. So if you're heading off to University in the new year then you are encouraged to find out more and access the list of University Orienteering Clubs here.
If you are interested and want to find out more about the sport of orienteering before contacting your local club then this set of Frequently Asked Questions will help.
Shlaes: High Tax Rates and the Lessons of the 1950s
How will the tax rate increases included in this week's budget deal impact the economy? One view receiving lots of attention is that the historical experience of the 1950s suggests that high tax rates are not an impediment to economic growth. After all, the 1950s featured a top marginal federal income tax rate as high as 92%, and the economy grew at an impressive rate — indeed, five years in the 1950s featured a real annual GDP growth rate in excess of 4%.

But the tax and growth experience of the 1950s, as stated above, is misleading. My colleague Amity Shlaes sets the record straight in her recent column for Bloomberg. Shlaes writes that four main "illusions" are often thrown about in discussions of the 1950s tax experience.

First, while official tax rates were indeed sky high in the 1950s, the effective rates (i.e., the rate people actually pay after all deductions and exemptions are figured in) were much lower. When income from capital gains is considered, effective rates were as low as 31% by 1960.

The second fallacy Shlaes clears up is the belief that in the 1950s the "government soaked the rich." While "fairness" was certainly an important justification for the high rates during the 1950s (see the Bush Institute's recent study on this topic by fellow Joseph Thorndike), Shlaes writes that in the 1950s, "those earning more than $100,000 paid less than 5 percent of the taxes collected in the U.S., a far smaller share than the wealthiest shoulder today." Thus the lesson from the 1950s is not that "soaking the rich" leads to growth, but rather that raising rates on the rich can have the opposite effect.

Third, Shlaes notes, the overall tax climate of the 1950s differed compared to our tax climate today. In the 1950s tax rates were headed in a downward trajectory, and people knew it. Today there's much uncertainty about taxes, but the cues coming out of Washington suggest that rates will only climb higher in coming years. While the prospect of lower taxes in the 1950s encouraged growth, today's fear of future tax hikes will likely hold the economy back.

Finally, America's position in the global economy of the 1950s was quite different from its position in 2013. In the 1950s the U.S. enjoyed economic "primacy." In that decade the economies of most other developed countries were still trying to recover from the destruction of WWII. This meant the U.S. faced less international competition and could charge high tax rates without sacrificing too much growth. Today the international scene is much different. Global competition makes tax rates highly relevant, and the U.S. already has higher rates than most other developed countries. While the U.S. could get away with high tax rates in the 1950s, it is no longer afforded that luxury. Continued tax denial will spell continued slow growth.

You can read Shlaes's entire column here.
Japan's lofty 'hydrogen society' vision hampered by cost
[TOKYO] Japan has lofty ambitions to become a "hydrogen society" where homes and fuel-cell cars are powered by the emissions-free energy source, but observers say price and convenience are keeping the plan from taking off.
Prime Minister Shinzo Abe has dubbed hydrogen the "energy of the future", and hopes it will help Tokyo meet the modest emissions targets it has set ahead of a UN climate change conference this month.
Tokyo wants to see cars, buses, and buildings powered by the clean energy in the coming years, and has even laid out plans for a "hydrogen highway" peppered with fuelling stations, all in time for the Tokyo 2020 Olympics.
Japan, which is the world's sixth largest greenhouse gas emitter, has "constructed a vision of society" based on hydrogen, said Pierre-Etienne Franc, director of advanced technologies for French industrial gas firm Air Liquide.
Toyota's hydrogen car, Mirai - which means "future" in Japanese - launched in 2014, after two decades of tireless research. The car recently rolled out in the United States and Europe.
While it has wowed some, production has lagged behind demand and high costs have turned off many consumers.
A Mirai fuel-cell vehicle costs 6.7 million yen, or about US$55,000, nearly double a comparable electric car. Fuel cells work by combining hydrogen and oxygen in an electrochemical reaction, which produces electricity. This can then be used to power vehicles or home generators.
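The article does not go into the cell chemistry, but for reference the textbook half-reactions for a proton-exchange-membrane fuel cell (the design generally used in cars) are shown below; the only chemical product is water, which is why such vehicles are described as emissions-free at the tailpipe.

$$\text{Anode: } 2\,\mathrm{H_2} \rightarrow 4\,\mathrm{H^+} + 4\,e^- \qquad \text{Cathode: } \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \rightarrow 2\,\mathrm{H_2O}$$

$$\text{Overall: } 2\,\mathrm{H_2} + \mathrm{O_2} \rightarrow 2\,\mathrm{H_2O} + \text{electrical energy} + \text{heat}$$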
Environmentally-conscious motorists like the Mirai because unlike conventional cars, it does not emit CO2. It also has a longer cruising range and takes only a few minutes to refuel, compared to several hours required for an electric rival.
"(Fuel-cell vehicles) appear to be the ideal green cars," said Hisashi Nakai, who works in Toyota's strategy planning department.
Mr Nakai dismisses concerns that hydrogen poses a dangerous explosion risk - the gas is highly volatile and flammable - insisting the tank of the car has been rigorously tested and can "withstand any shocks". But he admits price remains a major barrier.
"The main problem is the cost - we have just started, it doesn't happen overnight," he adds.
Air Liquide's Franc also bemoaned heavy regulations on building fuelling stations.
"In its superb ambition, Japan has failed in its strategy with extremely restrictive regulations," he said, referring to safety rules to prevent leakage of the flammable gas.
Building hydrogen stations is two or three times more expensive than in Europe or the US, he said.
With a 395-million yen (S$4.5 million) price tag, the stations remain scarce, although the government has vowed to build 76 of them by early 2016.
Toyota is not the only player: in late October Honda unveiled its own hydrogen car, and Nissan is also involved in the effort.
The hope is that increased competition could drive prices down.
Mr Abe has laid out his vision for a hydrogen market worth one trillion yen annually by 2030.
Equipping houses with hydrogen-producing technology is another part of the plan, with the first green homes unveiled in 2009.
The aim is to equip 1.4 million residences with the technology by 2020, and a staggering 5.3 million only a decade later.
It's a slow journey though. Only 100,000 houses are hydrogen-powered so far, despite government subsidies and efforts by manufacturers, namely Panasonic and Toshiba, to bring down prices.
At two million yen per house, the technology remains out of reach for many.
"The technology is not fully developed, it will likely take several more years before it reaches mass production," said Hubert de Mestier, a former executive with French energy giant Total.
Colourless and odourless, hydrogen is extremely light and takes up a lot of space so it has to be compressed before it is transported and stored, which adds to costs.
It's also not entirely environmentally friendly: greenhouse gas-emitting fossil fuels are often required to generate the gas in the first place.
"Selling the hydrogen economy without changing the method of production is heresy," says Mr Franc.
Speaking at a Detroit automotive conference in January, Tesla chief executive Elon Musk - who advocates battery powered electric cars - rejected hydrogen as a viable alternative fuel citing its volatility and flammability. He added that it was a complicated process to produce energy this way.
He said: "Hydrogen is an energy storage mechanism, it's not a source of energy. So you have to get that energy from somewhere. It's extremely inefficient." Japan has said it would like to produce totally green hydrogen through electrolysis, where the electricity comes from renewable sources such as water, solar or hydraulic, as opposed to gas or oil.
The move comes after Tokyo was forced to boost the use of pricey fossil fuels to fill the gap left by shutting off nuclear reactors in response to the 2011 Fukushima disaster.
But some say the resource-poor country - where 90 per cent of electricity is now produced with fossil fuels - is putting the cart before the horse.
"Japan should not be mistaken. If it is keen on becoming a sustainable country, the government should invest first in renewable energies" said Greenpeace ecologist Ai Kashiwagi.
"After that, hydrogen will come." | <urn:uuid:82d96752-003f-442f-95e2-74dc696d3cc2> | {
"date": "2020-01-22T05:46:47",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9624162316322327,
"score": 2.8125,
"token_count": 1125,
"url": "https://www.businesstimes.com.sg/government-economy/japans-lofty-hydrogen-society-vision-hampered-by-cost"
} |
Poor School Children Records, 1810-1842
“An Act to provide for the education of the poor gratis” was passed by the Pennsylvania General Assembly in 1809. The act required county commissioners to direct township tax assessors to receive the names of all children between the ages of 5 and 12 who resided in the township whose parents were unable to pay for their schooling. This information was recorded at the end of the tax assessments for each township. The names of the children would then go through an appeals process to determine their eligibility and once confirmed they were entitled to free schooling provided at the county’s expense.
To take advantage of this program, children would attend subscription schools located throughout the county. The law did not establish a central “poor school” to teach children eligible for this program. Generally, once a quarter, the teacher or school directors of the individual subscription schools would submit a bill to the county commissioners for reimbursement. These records are indexed separately under Teacher’s Bills.
In 1834 the Public Education Law was passed. This law provided for the establishment of free public schools, which made the 1809 law obsolete. However, Chester County still continued to pay for the schooling of poor children until 1837. Most townships stopped returning the names of poor children after 1837, though several still submitted names until 1842.
The Poor School Children index was created from the Commissioners’ and Treasurer’s Account Books, which record the final lists of children qualified to receive free schooling, in conjunction with the original tax lists. The account books are arranged by year and then by township. The lists of poor school children are typically found on the last pages of each township’s tax lists, usually following the list of freemen. It is recommended that researchers look at both the tax lists and the account books for each entry unless otherwise noted.
The entries always include the full name of the child, township, and date. An extended explanation of some index categories and terms can be found below.
Last Name – In most cases, this is the last name of the head of household who is likely the parent or guardian of the child/children. Some years clearly identify the head of household as the parent, but not in all instances. In some cases a child residing in a household does not share the same surname as the head of household. In an attempt to ensure that these children can be found, they were indexed under both surnames. Most of these entries can be readily identified by looking at the Comments field and seeing a statement to this effect: “Living with …”
Tax Only – These entries are only found in the tax lists and were not recorded in the account books.
Spouse – This information was rarely recorded. If a name was not indexed, then it will not be found on the original record.
Comments – Entries in this field that include “(See tax list for this entry)” typically indicate that more information can be found on the tax list than what is supplied in the account books.
The bills submitted to the Commissioners' Office by teachers seeking reimbursement for teaching and providing supplies to the poor school children designated by the township assessors. The bills may contain the following information:
Full name of teacher
Full name of child
School and township
Number of days child was taught
A lionfish between two electrodes during tests of a prototype of a robot designed to cull the invasive species. ocean support foundation
Invasive lionfish are not recognized as a threat by native fish, allowing them to gorge to the point of obesity.
Lionfish eat everything from shrimp and squid to molluscs and lobster, and have decimated native species populations. ocean support foundation
Studies have shown lionfish wiping out 80-90% of reef biodiversity within weeks of arrival, including species that maintain the reef itself. ocean support foundation
Methods to control lionfish populations have had mixed results.
Spear-fishing derbies have reduced their numbers in localized areas, but have not impacted the wider spread. ocean support foundation
A man eats in a private restaurant in Havana where the lionfish is served.
There are increasing efforts to reduce the lionfish population by marketing them as food, with conservation group REEF producing a dedicated cookbook. YAMIL LAGE/AFP/AFP/Getty Images
Design drawing of the robot, from Robots in Service of the Environment (RISE). The machine combines a remote-operated vehicle with a bespoke electrocution device. It will be equipped with recognition software to ensure the wrong fish are not killed. RISE
RISE was launched by Colin Angle, CEO of iRobot, and the technology was based on the company's previous creation - the Roomba vacuum cleaner.
The new company hopes the new robot can also be cheap and easy to operate in order to appeal to casual users.
RISE does not expect to eliminate the invasive lionfish, but hopes to reduce their numbers sufficiently to allow the ecosystems to recover.
Dr. Maria Montessori referred to education as an aid to life and she referred to independence as the child’s greatest gift. To foster independence, we offer the child choices and the opportunity to do things for oneself. Whenever the child asks for help in doing a task, such as putting on a jacket or buttoning a shirt, it is important that we show the child how to do this task and then undo it so that the child can have the opportunity to do it for himself. Dr. Montessori stated that, “Every unnecessary help is an obstacle to development.”
Within the classroom, the Practical Life area directly impacts and encourages the child’s independence. The Practical Life area gives the child the opportunity to learn how to care for oneself as well as how to care for the environment.
Materials that we use to aid the child in learning to care for oneself include: the dressing frames (which include fastenings such as buttons, snaps or zipper), hand washing and cloth washing.
Materials that aid the child in learning to care for the environment involve sweeping; mopping up a spill; polishing silver, wood and shoes; and table washing.
The materials in the Practical Life area are crucial for the child's early work in the classroom environment. These materials aid the child in developing concentration, building independence, gaining control of movement and following through with a logical sequence of action.
These activities also prepare the child for his or her later work in the Sensorial, Math and Language areas of the classroom.
Outside of the classroom, Practical Life connects easily to life at home, aiding the child in building a bridge between home and school. There are some very simple ways to help your home become more accessible to your child. Some general principles for Montessori at home are:
Our goal is to help the child to do things by him or herself and it's important for children to have the opportunity to do things for themselves so that they know how capable they are! While it takes more planning, it's much easier and better for the child in the long run.
When preparing a space with the child in mind, we offer low shelving with limited numbers of toys, books and games. By offering a limited number, we're increasing the chance for a successful clean up and also those items tend to get more use. You can store the extra toys, books and games in a "treasure chest". Periodically you can bring the treasure chest out for your child to put in an "old" toy and take out a "new" toy. Rotating the toys creates new interest.
When thinking about an eating space for the child, a small, low table and chair are wonderful to use for snacks. There are some great ideas out there to allow your child to be independent in preparing his or her own snacks such as adding a small pitcher in the refrigerator or a low shelf in the pantry with acceptable snack options.
Including your child in preparing the meal, setting the table, clearing the table, loading the dishwasher or sweeping the kitchen after the meal all helps your child to be an active participant in daily life. Offering child-sized cleaning tools along with your own is also a great incentive. And adding stools by the sinks, in the bathroom or other areas of the house gives your child the opportunity to exercise independence.
"All the efforts of growth are efforts to acquire independence. A matter of vital importance to an individual is that he should be able to function by himself. In order to grow and develop, the child needs to acquire independence. When does the child need to begin to do things by himself without our help? The answer is simple. The child needs to do things by himself from the beginning of life, from the moment he is capable of doing things. This urge is revealed again and again by the child. We have so often heard children of a few years of age say: "Help me do it by myself." By helping the child to do things by himself you are helping the independence of the child." --Dr. Maria Montessori.
Working in the garden - developing balance and strength walking with the wheelbarrow.
Building and creating in the sandbox, discovering new textures
and what happens when you mix sand and water.
Working with some of the language objects - learning the names of common
tools and matching the tool to a picture.
Cleaning the window - hand strength and motor coordination, completing a task successfully, and enjoying a clean window!
'Fixing the boat' - language, following directions, prepositions
'Row, row, row your boat' - music, movement, coordination and balance
Washing snack dishes - mixed ages allow the younger children to learn from the older members of the community, and older children take on a leadership role.
Having a great time listening to music!
Working on pincer grip, hand strength and fine motor control.
Whew! We have had a busy 2 weeks and the children are settling into the classroom routine beautifully. Remember: 'Montessori 101', our parent education night, is next Tuesday, September 25. RSVP here
Last week we welcomed our new and returning toddlers and tomorrow morning we will begin our regular schedule. We are looking forward to a wonderful year! Below is some information about the toddler class at COLCMS and some pictures of our toddlers from the phase-in week.
There are many firsts in toddler-hood, and starting school for the first time is often an occasion marked by mixed feelings. From the time your child was born, you have been the center of her world. In ever expanding concentric circles, her world has expanded. First to immediate family, then to other trusted caregivers, then to new friends, and now the whole new world of school. It is our hope that this blog will keep you updated and informed about what is happening in this exciting new world.
- Ms. Stefanie and Ms. Lise
The toddler class provides a safe environment for children to grow and learn based on their particular needs. Tables and chairs are very small and the teacher/child ratio is lower. The environment offers toddlers a special atmosphere of understanding, respect, and support as they explore and grow.
Toddlers have many opportunities throughout the day to work towards developing language, art, music, motor, social, and practical life skills. The practical life area of the classroom provides students an opportunity to care for themselves and the classroom. They wipe up spills, set the table for lunch, clean up after themselves, etc.
Repeated successes help the child build self-esteem through these very real activities. Self-esteem is not something that we can give children, but rather a feeling that comes from within - the satisfaction of knowing 'I can do it!'
Gross and fine motor development is fostered through a variety of manipulative activities which increase in difficulty as the child's skills become more refined. Simple sensorial activities allow the toddlers to respond to the urge to use their whole bodies to explore everything around them. There are many exercises which also help strengthen and develop the necessary muscles needed for writing.
The toddler program is also designed to meet the very young child's sensitive period for language by offering creative concepts to expand their growing vocabularies. Stories, songs, games, poems, objects representing everyday items, books, and language cards all help nurture growing language skills. The toddlers also learn the names for the feelings they experience as they have some of their earliest social interactions.
Self-help skills that lead to independence are another vital part of the Montessori toddler program. Children are gently encouraged to hang up their own jacket, put on an apron, or carry a large puzzle, rather than say 'I can't.' At all times the adults strive to answer the child's need to 'help me do it by myself!'
Guided and independent work, freedom of choice within limits, creative outlets such as art and music, and opportunities for movement all come together to create a safe space for growth.
As a parent you will always be your child's first and most important teacher. Remember, you are not teaching a child, but the adult he will one day become. Have faith in your child and know that in this first of many steps toward independence, you have made an excellent choice at COLCMS!
"If help and salvation are to come,
they can only come from the children,
for the children are the makers of men."
Dr. Maria Montessori
We are an AMI accredited Montessori school growing daily in spirit & intellect!
How many parasang in 1 mm?
The answer is 1.6666666666667E-7.
We assume you are converting between parasang and millimetre.
You can view more details on each measurement unit:
parasang or mm
The SI base unit for length is the metre.
1 metre is equal to 0.00016666666666667 parasang, or 1000 mm.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between parasang and millimetres.
Type in your own numbers in the form to convert the units!
1 parasang to mm = 6000000 mm
2 parasang to mm = 12000000 mm
3 parasang to mm = 18000000 mm
4 parasang to mm = 24000000 mm
5 parasang to mm = 30000000 mm
6 parasang to mm = 36000000 mm
7 parasang to mm = 42000000 mm
8 parasang to mm = 48000000 mm
9 parasang to mm = 54000000 mm
10 parasang to mm = 60000000 mm
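If you want to automate the conversion rather than read it off the chart above, the short Python sketch below hard-codes the factor used on this page (1 parasang = 6,000,000 mm); it is only an illustration, not part of the ConvertUnits.com calculator.

```python
# Conversion factor used on this page: 1 parasang = 6 km = 6,000,000 mm
MM_PER_PARASANG = 6_000_000

def parasang_to_mm(parasang: float) -> float:
    """Convert parasangs to millimetres."""
    return parasang * MM_PER_PARASANG

def mm_to_parasang(mm: float) -> float:
    """Convert millimetres to parasangs (the reverse conversion)."""
    return mm / MM_PER_PARASANG

print(parasang_to_mm(1))   # 6000000.0
print(mm_to_parasang(1))   # 1.666...e-07, matching the answer quoted above
```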
You can do the reverse unit conversion from mm to parasang, or enter any two units below:
A millimetre (American spelling: millimeter, symbol mm) is one thousandth of a metre, which is the International System of Units (SI) base unit of length. The millimetre is part of a metric system. A corresponding unit of area is the square millimetre and a corresponding unit of volume is the cubic millimetre.
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!
Obesity is known to cause severe health problems such as cancer and heart disease, but new research suggests it could also be a key factor in the development of dementia.
Researchers from the University of Oxford found that patients under the age of 70 who are admitted to hospital for obesity-related problems carry a much higher risk of dementia than those who are not obese.
They also found that the highest risk is among those with a record of obesity when they are in their 30s.
The study involved the examination of data from hospital records for the whole of England between 1999 and 2001. In all cases of recorded obesity, researchers looked for evidence that people received care for or died from dementia.
Overall, data was taken from 451,232 people with obesity and measured against a control group. Results showed that for those aged 30 to 39, the risk of developing dementia was 3.5 times higher than for those of the same age who were not obese.
The researchers also found that obese people in their 40s had a 70% increased risk, while those in their 50s had a slightly smaller increased risk of 50%. For those aged 60 to 69, the level of risk dropped again to 40%.
Authors of the study – which was published in the Postgraduate Medical Journal – said: “The risk of dementia in people who are obese in early to mid-adult life seems to be increased.
“The level of risk depends on the age at which they are recorded as being obese (which may be an age or a birth cohort effect) and, while obesity at a younger age is associated with an increased risk of future dementia, obesity in people who have lived to about 60 to 80 years of age seems to be associated with a reduced risk.”
Dr Clare Walton, the research communications manager at the Alzheimer’s Society charity described the results as “striking”, but valuable for highlighting the importance of healthy living for both mental and physical well-being.
“Given the growing body of evidence that being overweight in mid-life rather than in later years seems to be the bigger risk factor for dementia, it is never too early to start making healthy lifestyle choices.
“We know what is good for your heart is good for your head and that the best way of reducing your risk of developing dementia is to eat a balanced diet, maintain a healthy weight, exercise regularly and get your blood pressure and cholesterol checked.”
The carotenoids lutein and zeaxanthin were the most strongly associated with reduced risk of macular degeneration (MD). These are obtained primarily from dark green, leafy vegetables such as spinach, collard greens, kale, mustard greens, and turnip greens. Eating spinach and collard greens five or more times a week was found to noticeably reduce the risk of MD.
- They may protect against photodamage of the retina by filtering out blue light, which is not stopped by the cornea and lens, and which can damage the retina over time
- They may protect against peroxidation of fatty acids in the photoreceptor membrane
- They may protect the blood vessels that supply the macular region.
Lutein and zeaxanthin absorb best when taken with fat. For maximum absorption take supplements with any meal that contains fat or take along with any fatty acid supplements. The typical dose of lutein is between 10 and 20mg per day, and zeaxanthin 1 to 4mg or higher per day.
Lutein / Zeaxanthin can help with the following
Supplementation with lutein (15mg three times per week), but not vitamin E (alpha-tocopherol 100mg three times per week), improved visual acuity and glare sensitivity in a study of 17 patients with age-related cataracts. No significant adverse effects were observed during this two-year study. [Nutrition 2003;19(1): pp.21-4] This comment regarding vitamin E means that vitamin E, at the dose used in this study (alpha-tocopherol 100mg three times per week), did not produce any improvement, while lutein did.
People with diets higher in lutein and zeaxanthin had a lower risk of developing cataract. [American Journal of Clinical Nutrition, 1999, Vol. 69, pps. 272-277]
Lutein, an antioxidant found in spinach and kale, works extremely well in protecting the retina against sunlight damage [Methods Enzymol 1992:213: pp.360-6]. Supplementation with 6mg of Lutein daily may decrease the occurrence of macular degeneration by more than 50% [JAMA 1994:272: pp.1413-20]. Lutein is one of the primary antioxidants for the macula rather than for the lens of the eye.
Six months of lutein 15mg per day, vitamin E 20mg per day and nicotinamide 18mg per day improved electrophysiologic measures of macular function in a pilot study of 30 patients with early age-related maculopathy as well as in eight healthy people who served as controls. [Ophthalmology 2003;110(1): pp.51-60]
The prevention of macular degeneration requires a lower dose of lutein and zeaxanthin than does treating an already existing condition.
However, a study of 2,335 adults in Australia over a period of 5 years suggests that an increased intake of lutein, zeazanthin or other antioxidants may not have a protective effect. [Ophthalmology 2002;109(12): pp.2272-8]
Likely to help
Macular degeneration: Increasingly poor eyesight often accompanied by light sensitivity, distorted vision and a blank or dark patch in the center of vision.
Antioxidant: A chemical compound that slows or prevents oxygen from reacting with other compounds. Some antioxidants have been shown to have cancer-protecting potential because they neutralize free radicals. Examples include vitamins C and E, alpha lipoic acid, beta carotene, the minerals selenium, zinc, and germanium, superoxide dismutase (SOD), coenzyme Q10, catalase, and some amino acids, like cysteine. Other nutrient sources include grape seed extract, curcumin, gingko, green tea, olive leaf, policosanol and pycnogenol.
Retina: A 10-layered, frail nervous tissue membrane of the eye, parallel with the optic nerve. It receives images of outer objects and carries sight signals through the optic nerve to the brain.
Cornea: Transparent structure forming the anterior part of the eye.
Peroxidation: A type of oxidation that results in the formation of peroxides in body tissues which contain high proportions of oxygen.
Fatty acids: Chemical chains of carbon, hydrogen, and oxygen atoms that are part of a fat (lipid) and are the major component of triglycerides. Depending on the number and arrangement of these atoms, fatty acids are classified as either saturated, polyunsaturated, or monounsaturated. They are nutritional substances found in nature which include cholesterol, prostaglandins, and stearic, palmitic, linoleic, linolenic, eicosapentanoic (EPA), and decohexanoic acids. Important nutritional lipids include lecithin, choline, gamma-linoleic acid, and inositol.
Milligram (mg): 1/1,000 of a gram by weight.
Carrying heavy backpacks is common among school-age children, and that's a problem. I know this because I'm a Chiropractor and a mom to two teenagers.
But here are the facts to back me up on this problem and how to prevent back problems for children.
According to a study conducted by the Simmons College Graduate Program in Physical Therapy, over 55 percent of teenagers carry backpacks that exceed 15 percent of their body weight, the limit recommended by the American Chiropractic Association as well as the American Academy of Orthopedic Surgeons. By the end of their teen years, for example, more than half of youths experience at least one low back episode, and according to the research, this increase may be due, at least in part, to improper use of backpacks.
So what should you do? Here are some helpful tips on carrying backpacks properly.
- Make sure your child’s backpack weighs no more than 5 to 10 percent of his or her body weight. A heavier backpack will cause your child to bend forward in an attempt to support the weight on his or her back, rather than on the shoulders, by the straps.
- Buy an appropriate size backpack. The backpack should never hang more than four inches below the waistline. A backpack that hangs too low increases the weight on the shoulders, causing your child to lean forward when walking.
- A backpack with individualized compartments helps in positioning the contents most effectively. Make sure that pointy or bulky objects are packed away from the area that will rest on your child’s back.
- Bigger is not necessarily better. The more room there is in a backpack, the more your child will carry-and the heavier the backpack will be.
- Urge your child to wear both shoulder straps. This will better distribute the weight. Even though hanging one strap over one shoulder might be considered “cooler”, lugging the backpack around by one strap can cause a disproportionate shift of weight to one side, leading to neck and muscle spasms, as well as low-back pain.
- Wide, padded straps are very important. Non-padded straps are uncomfortable, and can dig into your child’s shoulders.
- The shoulder straps should be adjustable so the backpack can be fitted to your child’s body. Straps that are too loose can cause the backpack to dangle uncomfortably and cause spinal misalignment and pain.
- If the backpack is still too heavy, talk to your child’s teacher. Ask if your child could leave the heaviest books at school, and bring home only lighter hand-out materials or workbooks.
- Help your child sort through everything before packing up and see what can be left home that day. Place heaviest items in first, the closer they are to a child’s back, the less strain they’ll put on those muscles.
- Proper backpack lifting tips: Make sure your kids bend their knees when they first lift their packs, to avoid further strain on their back muscles. 1) Face the backpack before you lift it. 2) Bend at the knees. 3) Using both hands, check the weight of the pack. 4) Lift with your legs, not your back. 5) Carefully put one should strap on at a time; never sling the pack onto one shoulder.
- Although the use of rollerpacks -or backpacks on wheels – has become popular in recent years, only those students who are physically able to carry a backpack should use them cautiously and on a limited basis. The plastic frame and wheels add extra pounds to the rollerpacks and they may be heavy to carry when they need to – like on and off school buses and up and down the stairwells. Some school districts have begun banning the use of rollerpacks because they clutter hallways, resulting in dangerous trips and falls.
Hope these tips are helpful for those kids who are going back to school this Fall. By the way, these tips are also appropriate for those of you who carry heavy purses. You should check to see what you can discard from your purse and sort through them to lighten the load on your shoulder. Carrying a heavy purse on one shoulder creates the same problems as those students who carry their heavy backpacks on one shoulder.
Image: American Chiropractic Association
Chatbot Messenger Development
In this pace of life, where in-person human interactions are rare and everyone prefers to interact virtually, messengers have become a quintessential accessory for people to have a conversation. When we talk about people connecting and having conversations through messengers, the foundation of this technology is the chatbot. A chatbot is an automated messaging software that uses AI (Artificial Intelligence) to initiate a conversation with people and can be embedded in any major messaging application. Some of the best AI-powered chatbots are as follows: Watson Assistant, Bold 360, Rulai, LivePerson, Inbenta, Ada and Vergic.
A chatbot is a program developed with the aim of having conversations with people over the internet. Earlier, these technologies were limited to establishing conversations with people who wanted to connect socially, but recent trends show that many companies and applications use AI-powered chatbots to communicate with their customers, resolve their queries and gather feedback and reviews from them. A chatbot can respond to people like a real person, thanks to the combination of predefined scripts and machine learning. Whenever it is asked a question, the chatbot will respond according to the knowledge database available to it at that point in time.
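To make the "predefined scripts plus knowledge database" idea concrete, here is a minimal rule-based chatbot sketch in Python. It is not tied to any of the platforms named in this article; the small keyword table simply stands in for the knowledge database a real bot would consult.

```python
# Minimal rule-based chatbot: looks up keywords in a small "knowledge database".
# A production bot would replace this table with an NLP service and live data.
KNOWLEDGE_BASE = {
    "hello": "Hi there! How can I help you today?",
    "weather": "I don't have live data here, but a real bot would query a weather service.",
    "order": "Please share your order number and I'll look it up.",
    "bye": "Goodbye! Thanks for chatting.",
}

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the first scripted answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in KNOWLEDGE_BASE.items():
        if keyword in text:
            return answer
    return FALLBACK

print(reply("Hello bot"))            # greeting rule fires
print(reply("What's the weather?"))  # weather rule fires
print(reply("Tell me a joke"))       # falls back to the default reply
```

An AI-powered chatbot follows the same request-response loop, but replaces the keyword lookup with a trained language-understanding model.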
The response of a chatbot is defined by its underlying software and access to the database. For example, if you ask Apple's Siri, Samsung's Bixby or Amazon's Alexa about the weather conditions or the latest news, it will respond according to the latest weather reports or news it can access. Nowadays, many leading e-commerce companies are focusing on using chatbots to enhance their customer service experience.
In this technology-oriented world, chatbots are a must for establishing good communication channels between people. Chatbots are used in messaging apps, smart AI-powered smartphones, smart TVs and many other playback devices such as Alexa and MI TV. These systems examine human behavior, thanks to AI technology, and respond accordingly.
Developing a chatbot messenger is not rocket science, but it can be challenging; with a keen eye for detail, one can develop a chatbot of great value. There are two types of chatbots: basic and AI-powered. When it comes to hosting a chatbot, it is very necessary to choose the right platform, such as Facebook Messenger, Slack, Discord, Telegram or Kik. After choosing the right platform, one needs to settle upon the services used for building and developing chatbots, such as the Microsoft Bot Framework, Wit.ai, Api.ai and IBM's Watson. The next step is to select a suitable development platform such as Chatfuel, Texit.in, Octane AI or Motion.ai. For the development of AI and enhancement of the user experience, NLP algorithms are implemented. NLP APIs are used to develop NLP capabilities in a chatbot; some sources for these are Microsoft LUIS, Wit.ai and Api.ai.
The development and building of a chatbot is the primary step. Having developed and built the chatbot for a specific purpose, it becomes necessary to identify the target audience. Promoting the technology is very important to draw more users to it. AI-powered chatbots are made exclusively to enhance communication channels for a better user experience, using the latest tech-driven software along with real-time databases. This AI-integrated technology is surely going to take networking and communication channels to a new level.
Knee replacement is one of the most commonly performed operations in the United States with over 700,000 procedures performed annually (1). Besides providing anesthesia care in the operating room, anesthesiologists are dedicated to providing the best perioperative pain management in order to improve patients’ function and facilitate rehabilitation after surgery. In the past, pain management was limited to the use of opioids (narcotics). Opioids only attack pain in one way, and just adding more opioids does not usually lead to better pain control.
In 2012, the American Society of Anesthesiologists (ASA) published its guidelines for acute pain management in the perioperative setting (2). This document recommends “multimodal analgesia” which means that two or more classes of pain medications or therapies, working with different mechanisms of action, should be used in the treatment of acute pain.
While opioids are still important pain medications, they should be combined with other classes of medications known to help relieve postoperative pain unless contraindicated. These include:
- Non-steroidal anti-inflammatory drugs (NSAIDs): Examples include ibuprofen, diclofenac, ketorolac, celecoxib. NSAIDs act on the prostaglandin system peripherally and work to decrease inflammation.
- Acetaminophen: Acetaminophen acts on central prostaglandin synthesis and provides pain relief through multiple mechanisms.
- Gabapentinoids: Examples include gabapentin and pregabalin. These medications are membrane stabilizers that essentially decrease nerve firing.
The ASA also strongly recommends the use of regional analgesic techniques as part of the multimodal analgesic protocol when indicated.
When compared to opioids alone, epidural analgesia produces lower pain scores and shorter time to achieve physical therapy goals (3). However, higher dose of local anesthetic (numbing medicine) may lead to muscle weakness that can limit activity (4). In addition, epidural analgesia can lead to common side effects (urinary retention, dizziness, itchiness) and is not selective for the operative leg, meaning that the non-operative leg may also become numb.
Femoral Nerve Block
A peripheral nerve block of the femoral nerve is specific to the operative leg. When compared to opioids alone, a femoral nerve block provides better pain control and leads to higher patient satisfaction (5). One area of controversy is whether a single-injection nerve block or catheter-based technique is preferred. There is evidence to support the use of continuous nerve block catheters to extend the pain relief and opioid-sparing benefits of nerve blocks in patients having major surgery like knee replacement. When a continuous femoral nerve block catheter is used, the pain relief is comparable to an epidural but without the epidural-related side effects (6). One legitimate concern raised over the use of femoral nerve blocks in knee replacement patients is the resulting quadriceps muscle weakness (7).
Saphenous Nerve Block (Adductor Canal Block)
The saphenous nerve is the largest sensory branch of the femoral nerve and can be blocked within the adductor canal to provide postoperative pain relief and facilitate rehabilitation (8, 9). In healthy volunteers, quadriceps strength is better preserved when subjects receive an adductor canal block compared to a femoral nerve block (10).
In actual knee replacement patients, quadriceps function decreases regardless of nerve block type after surgery but to a lesser degree with adductor canal blocks (11). Recently there have been reports of quadriceps weakness resulting from adductor canal blocks and catheters that have affected clinical care (12, 13).
According to a large retrospective study of almost 200,000 cases, the incidence of inpatient falls for patients after TKA is 1.6%, and perioperative use of nerve blocks is not associated with increased risk (14). Patient factors that increase the risk of falls include higher age, male sex, sleep apnea, delirium, anemia requiring blood transfusion, and intraoperative use of general anesthesia (14). The bottom line is that all knee replacement patients are at increased risk for falling due to multiple risk factors, and any clinical pathway should include fall prevention strategies and an emphasis on patient safety.
Other Local Anesthetic Techniques
In addition to a femoral nerve or adductor canal block, a sciatic nerve block is sometimes offered to provide a “complete” block of the leg. There are studies for and against this practice. Arguably, the benefit of a sciatic nerve block does not last beyond the first postoperative day (15). Surgeon-administered local anesthetic around the knee joint (local infiltration analgesia) can be combined with nerve block techniques to provide additional postoperative pain relief for the first few hours after surgery (16, 17).
For more information about anesthetic options for knee replacement, please see my post on My Knee Guide.
- The Centers for Disease Control and Prevention. FastStats: Inpatient Surgery. National Hospital Discharge Survey: 2010 table. http://www.cdc.gov/nchs/fastats/inpatient-surgery.htm. Accessed January 30, 2015.
- American Society of Anesthesiologists Task Force on Acute Pain M: Practice guidelines for acute pain management in the perioperative setting: an updated report by the American Society of Anesthesiologists Task Force on Acute Pain Management. Anesthesiology 2012, 116(2):248-273.
- Mahoney OM, Noble PC, Davidson J, Tullos HS: The effect of continuous epidural analgesia on postoperative pain, rehabilitation, and duration of hospitalization in total knee arthroplasty. Clin Orthop Relat Res 1990(260):30-37.
- Raj PP, Knarr DC, Vigdorth E, Denson DD, Pither CE, Hartrick CT, Hopson CN, Edstrom HH: Comparison of continuous epidural infusion of a local anesthetic and administration of systemic narcotics in the management of pain after total knee replacement surgery. Anesth Analg 1987, 66(5):401-406.
- Chan EY, Fransen M, Parker DA, Assam PN, Chua N: Femoral nerve blocks for acute postoperative pain after knee replacement surgery. Cochrane Database Syst Rev 2014, 5:CD009941.
- Barrington MJ, Olive D, Low K, Scott DA, Brittain J, Choong P: Continuous femoral nerve blockade or epidural analgesia after total knee replacement: a prospective randomized controlled trial. Anesth Analg 2005, 101(6):1824-1829.
- Charous MT, Madison SJ, Suresh PJ, Sandhu NS, Loland VJ, Mariano ER, Donohue MC, Dutton PH, Ferguson EJ, Ilfeld BM: Continuous femoral nerve blocks: varying local anesthetic delivery method (bolus versus basal) to minimize quadriceps motor block while maintaining sensory block. Anesthesiology 2011, 115(4):774-781.
- Jenstrup MT, Jaeger P, Lund J, Fomsgaard JS, Bache S, Mathiesen O, Larsen TK, Dahl JB: Effects of adductor-canal-blockade on pain and ambulation after total knee arthroplasty: a randomized study. Acta Anaesthesiol Scand 2012, 56(3):357-364.
- Hanson NA, Allen CJ, Hostetter LS, Nagy R, Derby RE, Slee AE, Arslan A, Auyong DB: Continuous ultrasound-guided adductor canal block for total knee arthroplasty: a randomized, double-blind trial. Anesth Analg 2014, 118(6):1370-1377.
- Kwofie MK, Shastri UD, Gadsden JC, Sinha SK, Abrams JH, Xu D, Salviz EA: The effects of ultrasound-guided adductor canal block versus femoral nerve block on quadriceps strength and fall risk: a blinded, randomized trial of volunteers. Reg Anesth Pain Med 2013, 38(4):321-325.
- Jaeger P, Zaric D, Fomsgaard JS, Hilsted KL, Bjerregaard J, Gyrn J, Mathiesen O, Larsen TK, Dahl JB: Adductor canal block versus femoral nerve block for analgesia after total knee arthroplasty: a randomized, double-blind study. Reg Anesth Pain Med 2013, 38(6):526-532.
- Chen J, Lesser JB, Hadzic A, Reiss W, Resta-Flarer F: Adductor canal block can result in motor block of the quadriceps muscle. Reg Anesth Pain Med 2014, 39(2):170-171.
- Veal C, Auyong DB, Hanson NA, Allen CJ, Strodtbeck W: Delayed quadriceps weakness after continuous adductor canal block for total knee arthroplasty: a case report. Acta Anaesthesiol Scand 2014, 58(3):362-364.
- Memtsoudis SG, Danninger T, Rasul R, Poeran J, Gerner P, Stundner O, Mariano ER, Mazumdar M: Inpatient falls after total knee arthroplasty: the role of anesthesia type and peripheral nerve blocks. Anesthesiology 2014, 120(3):551-563.
- Abdallah FW, Brull R: Is sciatic nerve block advantageous when combined with femoral nerve block for postoperative analgesia following total knee arthroplasty? A systematic review. Reg Anesth Pain Med 2011, 36(5):493-498.
- Mudumbai SC, Kim TE, Howard SK, Workman JJ, Giori N, Woolson S, Ganaway T, King R, Mariano ER: Continuous adductor canal blocks are superior to continuous femoral nerve blocks in promoting early ambulation after TKA. Clin Orthop Relat Res 2014, 472(5):1377-1383.
- Mariano ER, Kim TE, Wagner MJ, Funck N, Harrison TK, Walters T, Giori N, Woolson S, Ganaway T, Howard SK: A randomized comparison of proximal and distal ultrasound-guided adductor canal catheter insertion sites for knee arthroplasty. J Ultrasound Med 2014, 33(9):1653-1662.
The Obama Administration has announced a plan for the creation of a new national Science, Technology, Engineering and Math (STEM) Master Teacher Corps made up of the nation's finest STEM subject educators. The STEM Master Teacher Corps will launch with 50 exceptional teachers established in 50 sites and will be expanded to 10,000 Master Teachers over a four year period.
President Obama said, "If America is going to compete for the jobs and industries of tomorrow, we need to make sure our children are getting the best education possible. Teachers matter, and great teachers deserve our support."
The selected teachers will make a multi-year commitment to the Corps and will received an annual stipend of $20,000 on top of their base salary in return for their expertise, leadership and service. The STEM Master Teacher Corps will be launched with $1 billion from the President's 2013 budget request.
The Obama Administration has also announced that the President would be dedicating $100 million of the existing Teacher Incentive Fund to helping school districts establish career ladders that identify, develop, and leverage highly effective STEM teachers.
Today's announcements align with the President's belief that excellent STEM teaching requires both deep content knowledge and strong teaching skills, and his strong leadership in working to improve STEM education.
There is an application deadline of July 27 and over 30 school districts across America have already expressed interest in competing for this funding.
The plans are an acceptance of a recent recommendation by the President's Council of Advisors on Science and Technology (PCAST) which called for a national STEM Master Teacher Corps to identify and help retain the nation's most talented STEM teachers by building a community of practice among them and raising their profile.
In order to ensure America's students are prepared for success in an increasingly competitive global economy, we must do more to ensure that teaching is highly respected and supported as a profession, and that accomplished, effective teachers are guiding students' learning in every classroom. The Obama Administration's 2013 budget includes a new, $5 billion program – the RESPECT Project, which stands for Recognizing Educational Success, Professional Excellence, and Collaborative Teaching.
Secretary of Education Arne Duncan, White House Domestic Policy Council Director Cecilia Muñoz, White House Office of Science and Technology Policy Director Dr. John Holdren, and PCAST Co-Chair Dr. Eric Lander will meet with outstanding math and science teachers at the White House to discuss efforts to build up the STEM education profession.
Anticipating the need for secure communications at the next level of device connectivity, Microchip have integrated a complete hardware crypto engine into their PIC24F family of microcontrollers. Computers normally use software routines to carry out the number crunching for data encryption, but on low-power microcontrollers this approach generally uses up too much of the processor's resources and is too slow.
Microchip have integrated several security features into the PIC24F family of microcontrollers (identified by their ‘GB2’ suffix) to protect embedded data. The fully featured hardware crypto engine supports the AES, DES and 3DES standards to reduce software overheads, lower power consumption and enable faster throughput. A Random Number Generator is also implemented which can be used to create random keys for data encryption, decryption and authentication to provide a high level of security. For additional protection the One-Time-Programmable (OTP) key storage prevents the encryption key from being read or overwritten.
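For contrast, the snippet below shows what AES encryption looks like when performed in software on a host machine, using the widely available Python cryptography package; it is only an illustration of the software route, not Microchip code. On a "GB2" device this work would instead be offloaded to the on-chip crypto engine, and the key could be generated by the hardware random number generator and locked away in OTP storage. The key, IV and data here are placeholders.

```python
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Placeholder 128-bit key and IV; a hardware engine could draw these from its
# random number generator and keep the key in OTP storage, never exposing it.
key = os.urandom(16)
iv = os.urandom(16)
plaintext = b"exactly 16 bytes"  # AES operates on 16-byte blocks

cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend())

encryptor = cipher.encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = cipher.decryptor()
assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext
```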
These security features increase the integrity of embedded data without sacrificing power consumption. With XLP technology, the “GB2” family achieves 180 µA/MHz Run currents and 18 nA sleep currents for long battery life in portable applications.
It’s not just a myth – electric vehicle (EV) batteries really do suffer in the cold.
That’s the conclusion from a new study undertaken by the American Automobile Association (AAA), which found when temperatures dropped to -6.5°C, the EV range fell by an average of 41%, based on the five models tested.
The organisation claims the study is the first to have used standard, repeatable methodology to confirm the problem and compare the effect of winter temperatures on different models.
It suggests impact on range was very similar among the cars tested, which included the BMW i3s, the Chevrolet Bolt, the Nissan Leaf, the Tesla Model S and the Volkswagen e-Golf.
Tests show that just turning on the EVs at -6.5°C produced a 12% loss in range, a figure which grew when cabin heat and seat heaters were turned on, with range plummeting by 41% – this brings an EV like the Chevrolet Bolt down to just 140 miles per charge.
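As a quick sanity check on those figures, the arithmetic below applies the reported losses to an assumed 238-mile rated range for the Chevrolet Bolt (the EPA figure commonly quoted for that model year, used here as an assumption rather than a number from the AAA study).

```python
# Assumed rated range for the Chevrolet Bolt; not a figure from the AAA study.
rated_range_miles = 238

cold_only_loss = 0.12       # simply switching the car on at -6.5°C
cold_plus_hvac_loss = 0.41  # with cabin heat and seat heaters running

print(rated_range_miles * (1 - cold_only_loss))       # ~209 miles
print(rated_range_miles * (1 - cold_plus_hvac_loss))  # ~140 miles, as quoted above
```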
Greg Brannon, AAA’s Director of Automotive Engineering, said: “We found that the impact of temperature on EVs is significantly more than we expected. It’s something all automakers are going to have to deal with as they push for further EV deployment because it’s something that could surprise consumers.”
A spokesperson for Tesla said: “Based on real-world data from our fleet, which includes millions of long trips taken by real Model S customers, we know with certainty that, even when using heating and air conditioning, the average Model S customer doesn’t experience anywhere near that decrease in range at 20 degrees Fahrenheit (-6.5°C).”
Source: Energy Live News
The critical combination of robotic dynamic contact and guidance, navigation, and control (GNC) has become an increasingly important aspect of European space missions. Capture of uncooperative targets for Active Debris Removal (in the framework of the Clean Space initiative) as well as landing and sampling on low-gravity bodies such as comets, asteroids and small moons present such a combination. To support the development of existing and upcoming missions as well as R&D activities in these high-visibility, technological fields, the need for upgrading the verification capabilities of the Automation and Robotics (A&R) laboratory to better accommodate robotics and GNC activities has become apparent.
Enabling Space Exploration and Planetary Science
Robotics is a fascinating subject, enabling a lot of interesting research activities. However, it is important to understand that for ESA, robots are just a convenient tool to operate scientific payloads in space environments, thereby facilitating ESA's goals in space science and exploration. ESA's Orbital Robotics Lab services focus on proving that a robot and its scientific payload can work as an effective means of scientific investigation/exploration.
CDP: CDP, short for Cisco Discovery Protocol, runs over Layer 2 (the data link layer) on all Cisco routers, bridges, access servers, and switches. CDP allows network management applications to discover Cisco devices that are neighbors of already known devices. CDP runs on all LAN and WAN media that support the SubNetwork Access Protocol (SNAP). Cisco Discovery Protocol (CDP) is a protocol supported by Cisco devices that gives limited information about those devices and is used for automatic discovery of Cisco networking components in a network.
The following are true about CDP:
1. CDP - Cisco Discovery Protocol is a Cisco proprietary Layer 2 protocol.
2. CDP uses a multicast packet sent to the well-known destination address 01-00-0c-cc-cc-cc.
3. CDP packets are sent out with a non-zero TTL after an interface is enabled and with a zero TTL value immediately before an interface is made idle. This enables the neighbouring devices to quickly discover the state of their neighbours.
4. CDP packets will never be forwarded beyond the directly connected devices. To find CDP information on indirectly connected routers, administrators can "telnet" to the intended destination device and run CDP commands there.
The following commands set the CDP timer and holdtime:
R1(config)#cdp timer 30
R1(config)#cdp holdtime 90
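To see how these two values work together, here is a small, purely illustrative Python sketch (not Cisco software) of how a neighbour table entry ages out: advertisements arrive every timer interval, and an entry is dropped only after nothing has been heard for the holdtime.

```python
import time

CDP_TIMER = 30     # seconds between advertisements (cdp timer 30)
CDP_HOLDTIME = 90  # seconds an entry survives without an update (cdp holdtime 90)

neighbors = {}  # device_id -> timestamp of the last advertisement heard

def receive_advertisement(device_id: str) -> None:
    """Record (or refresh) a neighbor when its advertisement arrives."""
    neighbors[device_id] = time.time()

def expire_stale_entries() -> None:
    """Drop neighbors that have been silent for longer than the holdtime."""
    now = time.time()
    for device_id in list(neighbors):
        if now - neighbors[device_id] > CDP_HOLDTIME:
            del neighbors[device_id]

# With these settings a neighbor can miss two consecutive advertisements
# (2 x 30 s) and still be listed; it is removed only after 90 s of silence.
```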
The "Show cdp interface" command displays the status of all interfaces that are running cdp. For determining the neighbouring devices in a Cisco network, you can use the command "show cdp neighbours".
In the output of the show cdp neighbors command, the Device ID column indicates the remote node ID and the Port ID column indicates the remote port.
The command : Switch#show cdp interface [<type> <mod>/<num>]
Displays the CDP information pertaining to a specific interface.
The command : Switch#show cdp neighbors [<type> <mod>/<num> | vlan <vlan-id>][detail]
Displays the cdp information in detail, including the IP address for telnetting to the neighbor device.
LLDP: LLDP (Link Layer Discovery Protocol) is a neighbor discovery protocol that network devices use to advertise information about themselves to other devices on the network. LLDP, like CDP, runs over the data-link layer of your network, which allows it to be used in networks that include non-Cisco devices or devices running different network layer protocols.
LLDP supports a set of attributes that it uses to discover neighbor devices. These attributes contain type, length, and value descriptions and are referred to as TLVs. LLDP supported devices can use TLVs to receive and send information to their neighbors. Details such as configuration information, device capabilities, and device identity can be advertised using this protocol.
To globally disable LLDP the following command is used
Switch(config)#no lldp run
To globally enable LLDP the following command is used
Switch(config)#lldp run
clear lldp counters - Resets the traffic and error counters to zero.
clear lldp table - Deletes the LLDP table of information about neighbors.
show lldp - Displays global information, such as frequency of transmissions, the holdtime for packets being sent, and the delay time for LLDP to initialize on an interface.
show lldp entry entry-name - Displays information about a specific neighbor.
You can enter an asterisk (*) to display all neighbors, or you can enter the name of the neighbor about which you want information.
show lldp errors - Displays LLDP computational errors and overflows.
show lldp interface[interface-id] - Displays information about interfaces where LLDP is enabled. You can limit the display to the interface about which you want information.
show lldp neighbors - displays information about neighbors.
show lldp neighbors[interface-id][detail] - Displays information about neighbors, including device type, interface type and number, holdtime settings, capabilities, and port ID.
You can limit the display to neighbors of a specific interface or expand the display to provide more detailed information.
show lldp traffic - Displays LLDP counters, including the number of packets sent and received, number of packets discarded, and number of unrecognized TLVs.
The table below describes the fields in the show lldp traffic output, which shows statistics for all LLDP traffic on the system.
|Total frames out||Number of LLDP advertisements sent from the device.|
|Total entries aged||Number of LLDP neighbor entries removed due to expiration of the hold time.|
|Total frames in||Number of LLDP advertisements received by the device.|
|Total frames received in error||Number of times the LLDP advertisements contained errors of any type.|
|Total frames discarded||Number of times the LLDP process discarded an incoming advertisement.|
|Total TLVs discarded||Number of times the LLDP process discarded a Type Length Value (TLV) from an LLDP frame.|
|Total TLVs unrecognized||Number of TLVs that could not be processed because the content of the TLV was not recognized by the device or the contents of the TLV were incorrectly specified.|
Accurately adjusting the minimum ventilation to the animals' needs can be quite difficult in the first weeks after introducing the chicks. The ventilation needed is lower than the minimum fan capacity and the air is hard to control because there is not enough volume.
Accurate minimum ventilation is crucial in order to extract harmful waste substances such as CO2, NH3, moisture and dust and to introduce oxygen rich air. This is particularly important with young, still developing animals and in order to prevent respiratory problems. Furthermore, accurate minimum ventilation prevents extra heating costs since it eliminates unnecessary heat removal.
The smart MicroControl feature in the Lumina poultry climate computer ensures a good start for your animals by further optimising the minimum ventilation.
The most accurate minimum ventilation with MicroControl
MicroControl is a completely new way of controlling ventilation, using a smart pattern to alternate the minimum ventilation control between on and off.
The ventilation cycle always starts with a fixed ON time to ensure that the fresh air reaches the centre of the house.
The ventilation then stops so that the fresh air is mixed properly and can spread among the animals.
We call this unique way of modulating MicroControl.
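The control idea can be pictured as a simple duty cycle, as in the illustrative Python sketch below; the timings and fan capacity are invented for the example and are not Fancom parameters.

```python
# Illustrative on/off minimum-ventilation cycle; all numbers are made up.
ON_TIME_S = 60             # fixed ON time so fresh air reaches the centre of the house
CYCLE_TIME_S = 300         # total length of one cycle (ON time plus OFF mixing time)
FAN_CAPACITY_M3H = 10_000  # airflow while the fans are running

def average_ventilation(on_time_s: float, cycle_time_s: float, capacity_m3h: float) -> float:
    """Average airflow over a full cycle, which can sit well below minimum fan capacity."""
    return capacity_m3h * (on_time_s / cycle_time_s)

print(average_ventilation(ON_TIME_S, CYCLE_TIME_S, FAN_CAPACITY_M3H))  # 2000.0 m3/h
```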
MicroControl is more efficient and the minimum ventilation can be controlled much more accurately and economically. The test results are very promising:
- An improved living environment for the animals due to better CO2 and RH values in the house during the crucial first couple of weeks (the hardest weeks of the entire production cycle)
- More stable temperatures in the house and fewer differences between front and back or left and right
- Lower heating costs
- Better litter quality
MicroControl is applied in houses with (Fantura) air inlet valves where there is sufficient room in the ridge of the house to mix the fresh air with the air in the house. MicroControl can be applied in combination with on/off fans and controllable fans. The controllable fans can then also run at low capacities during the ON time. MicroControl works best in combination with the easy to control Fancom fans and iFans.
This modulating MicroControl feature for the incoming air in the latest version of the Lumina 35, 36, 37 and 38 poultry computers is unique and enables users to better adjust the minimum ventilation to the needs of the animals, particularly where it concerns young chicks. The incoming air is used much more efficiently and the animals benefit from a healthy micro-climate that stimulates growth in all circumstances.
This MicroControl option is a standard feature on the Lumina 35, 36, 37 and 38 poultry climate computers, delivered from November 2019. Users of older Lumina computers can update their computers in order to use this option.
We know from astrophysical observations that there is a matter-antimatter asymmetry in the Universe, with everything that we see being made up almost entirely of matter. However, if Big-Bang cosmology is correct, then matter and antimatter would have initially been produced in equal amounts. So where has all the antimatter gone? The dominance of matter over antimatter can only be explained if there is a violation of both charge-conjugation (C) and parity (P) in the laws of physics. CP-violation does indeed occur in the known fundamental particle interactions, but the level of CP-violation is not large enough to explain the observed matter-antimatter asymmetry in the Universe. This means that new particles or new types of particle interactions must exist in order to explain this astrophysical phenomenon.
In this project, we will search for new sources of CP violation using data recorded by the ATLAS experiment at the Large Hadron Collider (LHC). In particular, we plan to study weak-boson scattering processes, which have only recently been observed at the LHC experiments. As part of this project the student will gain experience in the analysis of large datasets from particle physics experiments, will develop new observables to search for CP-violation, and will develop new techniques for identifying the hadronic decays of the weak bosons using machine-learning algorithms.
A minimum of a 2i class UK Masters honours degree or international equivalent is required, or a first degree with an additional Masters degree or international equivalent.
Fifty Years Later, the Immigration Bill That Changed America
by Foster, on News
On the 2016 campaign trail, immigration has been a flash point unlike any other. But as Donald Trump pushes his scheme to build a wall across America’s southern border and Hillary Clinton promises to go further than President Obama in protecting migrants without documentation, a major immigration reform a half-century ago is a reminder that policy changes often don’t go as planned. For today’s politicians, perhaps the biggest takeaway of the Immigration and Nationality Act is to expect unintended consequences.
It was back in 1965, during the depths of the Cold War and the peak of the civil rights movement, that the United States overhauled its immigration laws. Working with liberal Democrats and liberal Republicans (who existed back then), President Lyndon Johnson pushed a bill that did away with the “national origins quota” system. The old quota system, in place since the 1920s, determined who could immigrate to the U.S. based on ethnicity, with a heavy tilt toward Western Europeans—especially the English, Irish and Germans. Only small allotments were granted to Eastern Europeans, Asians and Africans.
That became an issue for the United States in the ‘60s, when new countries were emerging from colonialism, pitting the U.S. and the Soviet Union in a contest for their allegiances. Republican Senator Jacob Javits, a liberal from New York, noted in September 1965 that the immigration system, with its bias toward Western Europeans, “remains today a target for Communist propaganda…making our effort to win over the uncommitted nations more difficult.”
The racial discrimination inherent in the quota system clashed with the idealism of the Civil Rights and Voting Rights Acts. And most of all, the ethnic limits ran contrary to many Americans’ image of their country. “As President Kennedy so aptly stated, we are a ‘nation of immigrants,’” Massachusetts Republican Senator Leverett Saltonstall told his colleagues during the debate on the bill. “There is scarcely an area of our national life that has not been favorably affected by the work of people from other lands.”
By ‘65, however, some conservatives in the U.S. House publicly “worried about the size and scale of future Latin American immigration,” says Dan Tichenor, a professor of political science at the University of Oregon, “and were trying to put barriers in its way.” Liberal lawmakers didn’t like that idea, but they doubted that the new restrictions would have much impact. The limits were high enough, Senator Javits conceded, that immigration from the Western Hemisphere under the new law “would be approximately the same as the level reached last year”—a modest 140,000 or thereabouts. Yet the total number of persons of Mexican origin in the U.S. went from 5 million in 1970, the first census after the act, to almost 34 million today.
The Western Hemisphere cap was one key concession that opponents of Johnson’s immigration reform were able to extract. The other significant change was that visas be prioritized for migrants with family ties in the United States. Johnson and the bill’s supporters backed a system that would have put a priority on skill, which ended up being secondary in the new law.
When Johnson signed the Immigration and Nationality Act at the foot of the Statue of Liberty 50 years ago this October, he declared that the new law undoing the old quota system was “not a revolutionary bill. It does not affect the lives of millions.” In fact, it did. The new system, which opened up American immigration to the world, has dramatically shifted the blend of people coming to the country while contributing to the surge in immigrants from Mexico and Latin America entering the U.S. without documentation—neither of which its authors ever intended.
There were “a whole series of consequences unleashed” by this new law, says UCLA Law professor Hiroshi Motomura, author of Americans in Waiting: The Lost Story of Immigration and Citizenship in the United States. Though the 1965 law eliminated ceilings on visas for specific ethnicities across Asia and Africa, it did keep a cap in place for the Eastern Hemisphere—encompassing migrants from Europe, Africa and Asia. As a compromise, it also set the cap on immigration from the Western Hemisphere for the first time. That’s right: The U.S. used to allow unlimited immigration from Mexico. Even as restrictionists had layered on more and more limits on immigrants, starting with the Chinese in the 1880s, the Japanese around the turn of the century, and the rest of Asia, Africa and much of Europe in the 1920s, the U.S. allowed the open flow of immigration from Canada and nations to the south, part of what was considered a “good neighbor” policy.
The conservatives who backed a system of giving a majority of visas to family members of U.S. citizens “thought we would see an expansion in Southern and Eastern European immigration,” says Tichenor. “They never really anticipated the dramatic increase in Asian and Latin American immigration” that resulted thanks to family unification rules. Essentially, the new law allowed American citizens to obtain visas for not only their small children and spouses, but also their sisters and brothers and adult children, who then became citizens and began the process over again.
That started a slow but steady progression of Asian and Latino migration, which had only small populations in the United States before ‘65. In the 1950s, Europeans made up 56 percent of those immigrants obtaining lawful permanent residence in the U.S., while those from Canada and Latin America were 37 percent, and all of Asia accounted for a measly 5 percent, according to Department of Homeland Security statistics. By this past decade, however, Europeans had dropped to just 14 percent of new lawful permanent residents, compared with 35 percent from Asia and 44 percent from the Americas.
One more factor had a major impact: At the same time immigration law was shifting in 1965, a new national workforce policy was also kicking in. A year earlier, in 1964, the federal government ended what was known as the Bracero Program, launched during World War II’s labor shortages to provide temporary laborers from Mexico to American farms and fields. But the program was rife with worker abuses and ardently opposed by labor unions, which believed the migrants pushed down wages for Americans. That opposition finally succeeded in halting the Bracero Program in ’64, to the consternation of the agriculture industry.
Proponents of the move in the Department of Labor and elsewhere believed they could wean farmers off Mexican labor. But “many of the same people who were coming under the Bracero Program or their relatives or the people who were in those networks continued to come,” says Boston College professor Peter Skerry, an expert on immigration and ethnic politics. It’s just that now they came illegally. Over the ensuing decades, that reality combined with the new caps on migrants from Latin America turned what had been legal migration, illegal.
Economic trends in both Latin America and the U.S. also encouraged more migration. As Motomura explains it, 1965 was the “beginning of a mismatch of the legal immigration system and the demands of the economy.” Specifically, urbanization and economic dislocation drove Mexicans and other Central Americans from rural areas north in search of work, while Americans were obtaining higher levels of education and moving away from menial labor. “In 1950, more than half of the labor force were high school dropouts. Now it’s less than 5 percent,” notes Tamar Jacoby, president of the business-backed coalition ImmigrationWorks USA. The law’s drafters “didn’t foresee that.” That’s an understatement.
The lesson of unintended consequences is something advocates on both sides of today’s immigration debate acknowledge. “The first lesson is: Don’t believe everything a politician tells you. As we’ve seen with all kinds of social innovations from the 1960s and 1970s, the assurances of their promoters turn out to be incomplete or false,” says Mark Krikorian, the head of the Center for Immigration Studies, which advocates for much tighter controls on immigration. He and Jacoby agree that the family migration provisions have pushed the system out of whack. But they’re vehemently divided over whether the country still needs robust immigration, and if unmet labor demand is at the root of America’s glut of undocumented migrants.
Disagreements on immigration ultimately come down to a debate over what America should be and how its economy should work. Though President Johnson promised the law “will not reshape the structure of our daily lives,” the ensuing shifts in population and migration patterns have indeed meant “big changes in American life,” says Skerry, for good and for ill. The last time politicians hashed out a new immigration system, they didn’t entirely weigh those implications. Today’s leaders would be wise to think about the ripple effect before they mess with the borders.
"date": "2020-01-22T06:26:05",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9722696542739868,
"score": 3.484375,
"token_count": 1902,
"url": "https://www.fosterglobal.com/blog/fifty-years-later-the-immigration-bill-that-changed-america/"
} |
Before European colonisation, the Noongar people lived in a deeply symbiotic relationship with each other and in harmony and balance with their natural environment.
As evidenced by archaeological findings on the Upper Swan River, the Noongar people, who are the traditional dwellers of the South West of Australia, have been living in Australia for over 45,000 years.
Their culture was already uniquely and richly established, with wisdom passed down through the generations with love and respect.
"Despite their history of oppression and marginalisation Noongar people have continued to assert their rights and identity. They have a unique, vibrant, identifiable and strong culture existing as one of the largest Aboriginal cultural blocs in Australia. This is no doubt due to the immense strength, support and dynamism of Noongar family groups most of which can trace their lineage back to the early 1800s. In fact most contemporary Noongar people know their ancestry and vast family groups to an astonishing degree." Source: Noongar.org
The story of what happened makes very unpleasant reading, but it is a story that should also be told.
John Host and Chris Owen's book "It's Still in My Heart, This is My Country" won the 2010 Human Rights Literature award. "The award winning book unfolds the largely untold, unknown and assumed extinct history and culture of the Noongar people - the traditional owners of the South West of WA." ~ News UWA
The WA Minister for Culture and the Arts, Mr John Day, said that “the judges felt this book had the potential to alter the path of historical Aboriginal research and that the work has led to a paradigm shift in the way Aboriginal culture and identity are defined and understood.”
Rabbit Proof Fence (2002)
"date": "2020-01-22T05:30:23",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9646450877189636,
"score": 3.4375,
"token_count": 358,
"url": "https://www.fremantlewesternaustralia.com.au/fremantleancienthistory.htm"
} |
The predicted continuation of strong drying and warming trends in the southwestern United States underlies the associated prediction of increased frequency, area, and severity of wildfires in the coming years. As a result, the management of wildfires and fire effects on public lands will continue to be a major land management priority for the foreseeable future. Following fire suppression, the first land management process to occur on burned public lands is the rapid assessment and emergency treatment recommendations provided by the Burned Area Emergency Response (BAER) team. These teams of specialists follow a dynamic protocol to make post-fire treatment decisions based on the best available information, using a range of landscape assessment, predictive modeling, and informational tools in combination with their collective professional expertise. Because the mission of a BAER team is to assess the burned landscape and determine if stabilization treatments are needed to protect valued resources from the immediate fire effects, the evaluation of treatment success generally does not include important longer-term ecological effects of these treatments or the fates of the materials applied over the burned landscape. New tools and techniques that have been designed or modified for BAER team use are presented in conjunction with current post-fire treatment effectiveness monitoring and research. In addition, a case is made to monitor longer-term treatment effects on recovering ecosystems and to make these findings available to BAER teams.
"date": "2020-01-22T05:12:53",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9326128363609314,
"score": 3.1875,
"token_count": 258,
"url": "https://www.fs.usda.gov/rmrs/publications/emergency-post-fire-rehabilitation-treatment-effects-burned-area-ecology-and-long-term"
} |
V. Planning and assessment
“All the world’s a stage, and all the men and the women merely players.”
Just as a good play needs a good story that captivates the viewer, a military interaction requires planning and adaptation. As on stage, the solution of multi-dimensional challenges is accomplished in several acts. The focus of the considerations has been chosen independently of the distinction between operational and tactical levels. Although most of the reference documents tend to focus on operational and strategic planning levels, the planning portion of this chapter is aimed more at staff members who are interested in the influence of their contributions to these levels.
Assessments and situation analysis are the basis of good planning and further operations. An actor has to put himself in all facets of his portrayed character.
A CIMIC staff member makes deductions and draws conclusions by identifying and analysing all key elements of the PMESII (TE) domains. Once the information has been evaluated and products developed, these products contribute to the decision-making process.
Only the interaction of all actors results in a well-sounding symphony.
"date": "2020-01-22T06:12:40",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9636486172676086,
"score": 2.546875,
"token_count": 227,
"url": "https://www.handbook.cimic-coe.org/5.planning-and-assessment/5.1introduction/"
} |
Meaningful health system improvements are hindered when systematic information about prices, quality and utilization levels are not available. All-payer claims databases (APCDs) are an important tool for revealing spending flows within a state and measuring progress over time. To fully realize their value, implementation of an APCD requires broad stakeholder engagement, sufficient funding, participation by consumer representatives, and extensive data access so that the data can be used for a variety of public purposes. APCDs are a necessary step to building healthcare transparency in states.
Every year, billions of lines of healthcare data are generated when healthcare services are billed and paid by insurers. These claims data contain a wealth of information about what services are being provided and what they cost. But these data are often locked up in proprietary datasets owned by insurers or aggregators that often deny access or charge high prices.
All-payer claims databases (APCDs)1 are used to unlock this data by collecting healthcare claims and other data into databases that can be used by a wide variety of stakeholders to monitor and report on provider costs and the use of healthcare services. Armed with this information, policymakers, regulators, payers and other key stakeholders can begin to address unwarranted variation in prices, healthcare waste and other consumer harms.
APCDs are large-scale databases created by states that contain diverse types of healthcare data (see Exhibit 1).2 APCDs usually contain data from medical claims with associated eligibility and provider files. APCDs may also include HMO encounter data and/or pharmacy and dental claims.3 All-payer claims databases differ from insurers' proprietary claims databases in that APCDs bring together data from multiple payers and are assembled and managed in the public interest.
When the data includes Medicaid and Medicare claims as well as fully insured and self-insured commercial claims we call it an all-payer claims database. When it includes only some of these payers it is referred to as a multi-payer claims database. Generally, APCDs are created through state legislation, although in some circumstances they are created by voluntary data reporting arrangements.
All-payer claims databases are beneficial for a wide range of stakeholders, including policymakers, consumers, payers and researchers, and have been touted as a key part of health system transformation because they increase healthcare spending transparency and help inform decision making.
Consumers can benefit from the increased price transparency that APCDs provide, particularly when the data is used to create a consumer-friendly website that enables them to compare cost information for specific procedures across providers. More importantly, they benefit indirectly when the data in the APCD is used by other stakeholders to reduce pricing variation or improve quality.
Policymakers and regulators can use APCD data for a wide variety of purposes. A key use is to understand the health pricing landscape in their state and identify areas of unjustifiably high costs.
For example, policymakers can use all-payer claims data to understand and evaluate the effects of state efforts to improve value for consumers. APCDs provide a complete picture of health spending in a state, enabling policymakers to evaluate whether efforts to control spending in one area represent a net improvement, or whether those efforts lead to spending increases somewhere else. Finally, policymakers can also gain a better understanding of the health status and disease burden of their state population, in order to reduce health disparities and otherwise improve the general health of state residents. Conversely, in the absence of APCD data, state policymakers and others are limited in their ability to monitor state progress on these vital issues.
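As a rough illustration of how claims data can reveal price variation, the hypothetical snippet below groups claim lines by procedure code and compares median allowed amounts across providers. The column names and values are assumptions for demonstration, not a real APCD schema or real prices.

```python
# Sketch: surface price variation by procedure across providers.
# Column names (procedure_code, provider_id, allowed_amount) are assumed,
# not taken from any actual state APCD layout.
import pandas as pd

claims = pd.DataFrame({
    "procedure_code": ["45378", "45378", "45378", "70553", "70553", "70553"],
    "provider_id":    ["A", "B", "C", "A", "B", "C"],
    "allowed_amount": [850.0, 2300.0, 1400.0, 600.0, 1900.0, 950.0],
})

median_price = (claims
                .groupby(["procedure_code", "provider_id"])["allowed_amount"]
                .median()
                .unstack())

# Ratio of the most to the least expensive provider for each procedure.
variation = median_price.max(axis=1) / median_price.min(axis=1)
print(median_price)
print(variation.rename("max/min price ratio"))
```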
Payers may be interested in a more complete picture of spending and provider practice patterns than they can glean from their own claims and encounter data.7 For example, they might use hospital cost data to identify high- and low-value hospitals, successful cost containment strategies, or the prevalence of different diseases at a state level. In addition, employers have used APCDs to track progress of cost, quality and preventive service measures across their employee populations. Employers may also use health status and disease prevalence information to create wellness programs or other targeted interventions for their employee populations.
Researchers are interested in this information in order to study the outcomes of state or federal health reform initiatives on spending and quality, to gain a deeper understanding of disease prevalence or other public health issues, or to better understand provider pricing variations, among other issues.8
All-payer claims databases are a fairly recent innovation with few states having broad implementation (see Exhibit 2). Hence, they are just beginning to provide information for valuable research on trends in cost, quality and utilization. Nonetheless, the APCD Council—a learning collaborative of government, private, nonprofit and academic organizations focused on improving the development and deployment of state-based APCDs—has catalogued more than 40 research studies on the impact of APCDs.9
Below are some examples of reports that have been produced using data from APCDs:
Vermont: Using APCD to Inform Rate Review
Vermont used funds from an ACA rate-review grant to investigate how their multi-payer claims database could inform the rate-review process, such as improving their ability to validate insurance company rate filing applications, medical trend analyses, and generating comparative data for benchmarks.10
New Hampshire: Cost Evaluations
State agencies have created reports from their APCD that focus on healthcare service and health insurance premium costs and costs drivers, enrollment trends and disease patterns.11 New Hampshire also commissioned a study that allowed the Medicaid agency to compare its provider rates with those of the commercial payers when revising its fee schedule.
Utah: Population Health
The Utah Department of Health published a report enabling stakeholders to understand the Utah population in a new way.12 Specifically, the report examined the healthy population of Utah—and their exact location within the state—to identify what specific preventative and routine healthcare they are receiving to keep them healthy.
Oregon: Reports for Policymakers
The Oregon Health Authority published a report using APCD data that examines the outcomes of health system transformation efforts.13
Much has been made of consumer-facing websites that enable consumers to compare prices. While providing clear, actionable information on prices is a worthy goal and only fair to consumers, it is important to realize that only a small portion of overall spending is “shoppable” by consumers.14 Moreover, a number of barriers exist to providing actionable information to consumers. Two evaluations of the New Hampshire website (an early example of a consumer-facing pricing website) found that consumer use of the website has been modest and has yet to encourage consumer price-shopping.15 The evaluations found, however, that the data was useful to policymakers by highlighting the wide gaps in provider prices in the state.
Creating all-payer claims databases is foundational to informing other strategies designed to improve healthcare value for consumers. Implementing these databases involves a host of decisions, all with profound impact on the value of the resulting database.
For states that haven’t enacted APCD legislation, there are several issues to consider, including the development goals, governance and administration, the scope of data collected, funding sources, privacy issues and reporting requirements. For states that have already enacted APCDs, consider whether or not adjustments need to be made to the overarching structure.
Establish Broad APCD Goals
All-payer claims databases can be developed with broad or narrowly defined goals. For example, the goal of Minnesota’s APCD is to provide more information to the state’s health department, and data use is limited to this department.16 On the other end of the spectrum, Maine’s APCD was developed with the broad goal to improve the health of Maine citizens.17
Advocates should work towards an APCD with broad goals and tie it directly to improved health value for the state’s citizens and its government. Experts agree that states need to design APCDs based on a common vision of use and agreement on how the dataset will provide value to a broad group of stakeholders.18
APCD Governance With Consumer Involvement
The governance structures of existing APCDs vary widely (see below for descriptions of typical governance models). Most states with existing APCDs have chosen to have an oversight body that has the authority to collect and disseminate the data. The organization housing the data is typically a state agency, such as the department of insurance or the health department, or an independent nonprofit APCD administrator. Ongoing conversations with stakeholders about measurement strategies, reporting requirements and processes, and projected timelines help to build consensus about the approach and focus of data uses.19
The governance model should be tailored to local conditions but in all instances should include consumer representation on the board or advisory group.
Four Models of State APCD Governance
Model 1: State Health Data/Policy Agency Management
Legislation authorizes the state agency or health data authority to collect and manage data, either internally or through contracts with external vendors. Legislation grants legal authority to enforce penalties for noncompliance and other violations, while separate regulations define reporting requirements. A statutory committee or commission is defined in law, or the state agency appoints an advisory committee. States with this model include Kansas, Maine, Maryland, Massachusetts, Minnesota, Oregon, Tennessee and Utah.
Model 2: Insurance Department Management
The APCD is managed by a state agency responsible for the oversight, regulation and licensing of insurance carriers. Advisory committees of major stakeholders guide decisions. Reporting is mandated under the authority of the Insurance Code, with penalties for noncompliance. The only state with this model is Vermont.
Model 3: Shared Agency Management
Two state agencies with separate authorities share in the governance and management of data collection, reporting and release—such as an agency with health insurance claims expertise and one charged with tracking and improving health status of state residents. The shared responsibilities are defined in statute and expanded on in a Memorandum of Understanding that further defines the scope of authority and the process of decision making. In New Hampshire, for example, the agencies are the Department of Health and Human Services and the Insurance Department.
Model 4: Private APCD Initiatives
A private APCD initiative may be established in states without legislative authority. Data are collected voluntarily from participating carriers with no authority to leverage penalties for nonreporting. A board of directors composed of all major stakeholders guides the decision-making process. Examples of this model include the Wisconsin Health Information Organization and the Washington Health Alliance.
Source: Love, Denise, et al., “All-Payer Claims Databases: State Initiatives to Improve Healthcare Transparency,” The Commonwealth Fund (2010).
Establishing Sustainable APCD Funding
Adequate funding is essential to the success of an APCD. The goal for each state is to build a sustainable APCD system that provides consistent and robust information across the state’s healthcare system over time. Funding requirements vary greatly but could range from $500K to establish a bare-bones system to several million dollars.20
States should identify funding sources as part of the legislative process. Public APCDs may be funded with appropriations or industry fees and assessments. States can also write a non-compliance financial penalty into their legislation to be levied on payers who do not meet the reporting requirements. States may also be able to take advantage of Medicaid matching funds.
Some states expect a portion of APCD funding to come from the future sales or licensing of data products. Another option, used by Wisconsin’s voluntary APCD, is funding through subscription and membership fees. However, this method of funding may include restrictions on the use of data (see below).
APCD Data Access and Privacy Issues
Claims data contains sensitive personal information. Determining how to protect consumers’ privacy while establishing APCDs is one of the most important issues states will face.
States must decide who will be able to access the APCD data and for what purposes.
There is wide variation in state approaches on this matter.
Minnesota only allows the state health department to access their APCD data, a policy which might be seen as a way to protect patients’ privacy but also greatly restricts the ways in which the data can be used. In contrast, both Maine and New Hampshire publish aggregated payment data on a public website where anyone can access it.
About half of the states with APCDs currently only allow de-identified patient information to be collected. This limits the ability to track treatment, outcomes and disparities over time and to drill down on policy implications. To address this problem, while protecting patient privacy, the trend seems to be towards allowing patient identifiers--a number assigned to each patient that is not linked to personally identifiable information.21 This allows for better long-term tracking and connecting with public health and clinical data, all of which is very useful for researchers and other stakeholders.
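One common way to implement such identifiers, shown here only as a sketch and not as any state's actual method, is to derive a keyed hash from personal identifiers so that the same patient always maps to the same code without the code being reversible. The field choices and key handling below are assumptions for illustration.

```python
# Sketch: derive a stable, non-reversible patient identifier from
# personal data using a keyed hash (HMAC). The secret key would be held
# by the data manager and never released with the data.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption for illustration

def patient_identifier(first: str, last: str, dob: str, member_id: str) -> str:
    raw = "|".join(s.strip().lower() for s in (first, last, dob, member_id))
    return hmac.new(SECRET_KEY, raw.encode("utf-8"), hashlib.sha256).hexdigest()

# The same inputs always yield the same identifier, enabling longitudinal
# linkage across payers without exposing who the patient is.
print(patient_identifier("Jane", "Doe", "1980-02-03", "XYZ123"))
```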
It is critical that APCDs adopt broad data access policies to ensure that the value of the data is fully realized. Once privacy considerations are addressed, a variety of stakeholders should be able to access the data at as detailed a level as possible. Further, states should use the data for regular reports and analyses so that policymakers and regulators begin to incorporate these findings into their work.22
Incorporating Medicare and Medicaid Data
The APCD Council recommends the inclusion of Medicaid and Medicare claims data to get a more complete picture of a state's practice patterns and spending. Medicare's fee-for-service program accounts for one fifth of all healthcare spending. Recent changes at CMS have made it easier to obtain Medicare claims data.23 The submission of Medicaid data to the APCD should be coordinated with the state office that stewards the Medicaid data.
Incorporating Encounter Data
Plans that operate under capitation (e.g., a staff-model HMO) do not generate claims in the usual sense. States such as Colorado are working on protocols that allow this data to be added to the APCD for a more complete picture of spending and practice patterns.24
Pharmacy and Dental Data
Pharmacy and dental claims are often generated using a separate system from medical claims and APCD operators will have to explicitly include this data in the data reporting requirements to ensure a complete picture of spending. The APCD Council has been working to create standards that require the collection and inclusion of this information.25
A non-uniform approach to APCD data submission can mean increased costs for all stakeholders. If each state uses a “one-off” data collection, data cannot be easily merged or analyzed across states. Different extracts must be created for each data collection entity, increasing costs for payers submitting data, especially for payers that operate in multiple states.
In an effort to bring standardization to the APCD data collection process, the APCD Council worked with other entities to establish APCD reporting guides for eligibility as well as medical, pharmacy and dental claims files.26
Including Healthcare Quality Information
Claims data, by itself, has the ability to provide some limited quality signals, such as identifying areas of overtreatment or inappropriate treatment (such as overuse of CT scans or Caesarean sections), finding patterns of preventable medical errors and harm, and tallying the associated costs.
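As a toy example of the kind of quality signal described above, the snippet below computes a Caesarean-section rate per facility from delivery claims. The flat "delivery_type" field is an assumption for illustration; in a real APCD this would be derived from DRG or procedure codes on the claim.

```python
# Sketch: a simple quality/utilization signal from claims data --
# the Caesarean-section share of deliveries at each facility.
import pandas as pd

deliveries = pd.DataFrame({
    "facility":      ["H1", "H1", "H1", "H2", "H2", "H2", "H2"],
    "delivery_type": ["c-section", "vaginal", "vaginal",
                      "c-section", "c-section", "vaginal", "c-section"],
})

csection_rate = (deliveries["delivery_type"].eq("c-section")
                 .groupby(deliveries["facility"])
                 .mean())
print(csection_rate.rename("C-section rate"))
```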
A next step for APCDs is to integrate other non-claims data sources, such as patient registries, vital records, clinical data, and patient-reported surveys.27 While challenging, the combined data provide an opportunity for even more valuable analysis.
States like Massachusetts and Colorado are currently working on developing patient safety and quality reports by incorporating data from other sources, for example by combining electronic medical records with claims data to identify opportunities for improving outcomes for Medicaid patients.28 The NH Hospital Scorecard allows consumers to view patient satisfaction, patient safety and clinical quality measures.29
Other possibilities include identifying conflicts of interests that might be driving prescribing or testing patterns. With access to data, researchers can correlate the introduction of a new drug with pharmaceutical sales practices, and discover if there is a pattern of inappropriate prescribing by an individual physician. As important, the data could expose a link between tests and whether or not the referring doctor has a financial stake in the testing lab.
Supporting Health Equity Work
Demographic data, such as race, ethnicity and language preference may also be incorporated into an APCD. This data enables researchers to identify health disparities, and provide evidence for public health and institutional interventions. This data can be collected at the point of enrollment into health insurance programs, as well as at the point of care. However, the collection of this data is not currently standardized; efforts to do so are increasingly found at the state level.
Voluntary Efforts Are More Challenging
APCDs are generally created by state legislation, although in some circumstances they are created by voluntary data reporting arrangements.30
In general, strong legislation will yield a more robust dataset than voluntary efforts. Voluntary initiatives cannot compel data submission by all payers in a state and thus the data can be incomplete. Further, the use of aggregated data may be restricted if one or more contributors of data oppose public release. APCD data often have de-identified personal health data in order to track service use over time. Privacy laws make it difficult for private entities to receive and release de-identified patient data without legal authority.
Examples of states that have established an APCD on a voluntary basis:
Wisconsin: In 2005, the Wisconsin Health Information Organization (WHIO) was created voluntarily by providers, employers, payers and the state to improve healthcare transparency, quality and efficiency in the state. WHIO members and subscribers use the data to identify gaps in care for treatment of chronic conditions and provide real-world data about per episode costs of care, population health, preventable hospital readmissions, variations in prescribing patterns and much more.
California: The nonprofit California Healthcare Performance Information System (CHPIS), founded in 2012 by three of the largest health plans in California and the Pacific Business Group on Health, serves as a voluntary multi-payer claims database. In 2013, CHPIS acquired its first year of CMS fee-for-service Medicare data—for over five million California beneficiaries—and commercial claims for HMO, POS, PPO, Medicare Advantage products from Anthem Blue Cross, Blue Shield of California and United Healthcare. It does not have data on “allowed amounts” or provider fee schedules, but is focused on quality, efficiency, and appropriateness of care. Plans and purchasers have two thirds of the seats on its board, with providers and a consumer group making up the other third. The first public report on physician-level quality ratings is expected later in 2015.
Washington: The Puget Health Alliance, established in 2004, helped to create a purchaser-led, multi-stakeholder, voluntary APCD. The database comprises approximately 65 percent of the non-Medicare claims in the region. The Alliance has since changed its name to the Washington Health Alliance and expanded its activities statewide; access to the Alliance's database by researchers and other interested parties is possible but very limited. In May 2015, the state instituted a law that establishes an APCD and mandates that all health insurers submit data.
Capturing Spending By Uninsured
It is almost impossible for an APCD to capture spending by uninsured individuals because their visits to providers do not generate a “claim” that goes to an insurance company.
Maine is the only state that has incorporated uninsured claims, and then only partially. Maine Health provides identification cards to uninsured individuals using their services to better manage their care and to document uncompensated care. Maine Health then submits pseudo-claims to a third-party administrator (TPA) owned by a national insurer for processing as if they were from insured patients, but no payment is made. Summary information on the uninsured patients is produced by the TPA for Maine Health and claims data files are submitted to the state APCD. From a policy perspective, capturing data on the uninsured is important and Maine has the potential to be a model for the rest of the states.
Denied claims are typically not included in APCDs. While the inclusion of denied claims would increase researchers' and regulators' ability to assess health plans' role in spending flows, their inclusion would increase the amount of data that would have to be collected and stored.
Policymakers and other stakeholders need access to information on healthcare spending in their state to better understand their unique healthcare market and to help make more informed decisions. It is well established that there are wide variations in treatment patterns and what providers charge for the same procedure. It is also well established that much of our healthcare spending is wasteful and that many providers do not align with evidence-based quality standards. State policymakers need to step into this void by enacting and funding robust APCDs that can help them pursue initiatives to bring better healthcare value to the residents of their state.
All-payer claims databases have the potential to help inform significant changes that will benefit consumers, however inadequate funding, overly restrictive data release policies and other issues related to the ability to collect data have restricted APCDs from reaching their full potential as policy making tools.
1 Although we use the term “All-Payer Claims Databases” throughout this issue brief, it is also important to include encounter data from HMOs and integrated systems, such as Kaiser, that do not have claims submitted.
2 Miller, Patrick, Why State All-Payer Claims Databases Matter to Employers, The Bureau of National Affairs, Inc. (2012).
4 Love, Denise, et. al., All-Payer Claims Databases: State Initiatives to Improve Healthcare Transparency, The Commonwealth Fund (September 2010).
5 Miller, Patrick, et. al, “All-Payer Claims Databases: An Overview for Policymakers,”, AcademyHealth (May 2010).
6 Love (September 2010).
7 Miller (May 2010).
9 APCD Showcase Website (http://www.apcdshowcase.org/).
10 Kennedy, Lisa, and James Highland, Assessment of Vermont’s Claims Database to Support Insurance Rate Review, Vermont Department of Banking, Insurance, Securities & Healthcare Administration.
11 Tu, Ha, and Johanna R. Lauer, Impact of Healthcare Price Transparency on Price Variation: The New Hampshire Experience, Center for Studying Health System Change (November 2009).
12 Utah Department of Health, Utah Atlas of Healthcare: Making Cents of Utah's Health Population (October 2010).
13 Oregon Health Authority, Leading Indicators for Oregon’s Healthcare Transformation, http://www.oregon.gov/oha/OHPR/RSCH/docs/All_Payer_all_Claims/Leading_Indicators_April_2
14 See Consumers Union, What’s the Case for Price Transparency In Healthcare?, forthcoming October 2015.
15 White, Chapin, et al., Healthcare Price Transparency: Policy Approaches and Estimated Impacts on Spending, WestHealth Policy Center (May 2014).
16 Minnesota Department of Health, FAQ All Payers Claim Databases
17 State Health Access Data Assistance Center, Maine’s Healthcare Claims Database Website
18 The Network for Excellence in Health Innovation, All Payer Claims Databases: Unlocking the Potential, Issue Brief (December 2014).
19 Green, Linda, et al., Realizing the Potential of All-Payer Claims Databases, The Robert Wood Johnson Foundation (January 2014).
20 For more information, see: APCD Council, Cost and Funding Considerations for a Statewide All-Payer Claims Database (APCD) (March 2011). http://www.apcdcouncil.org/file/79/download?token=UUNHTnXi
21 Miller (2012).
22 Green (2014).
23 The Commonwealth Fund, In Focus: Medicare Data Helps Fill in Picture of Healthcare Performance (April/May 2013).
24 Center for Improving Value in Healthcare, Colorado All Payer Claims Database Annual Report 2014, (February 2015).
25 Learn more about the APCD Council’s Proposed Core Set of Data Elements for Data Submission at http://www.apcdcouncil.org/sites/apcdcouncil.org/files/media/apcd_council_core_data_elements_5-10-12.pdf
26 APCD Council Standards (https://www.apcdcouncil.org/standards)
27 The next step for APCDs will be to integrate other non-claims data sources, such as patient registries, vital records, clinical data and patient reported surveys.
28 APCD Council Showcase, Combining Electronic Medical Records with Claims Data to Identify Opportunities for Improving Outcomes for Medicaid Patients (September 2013).
29 APCD Council Showcase, Scorecard, http://www.apcdshowcase.org/content/nh-hosptial-scorecard-website
30 Peters, Ashley, et al., “The Value of All-Payer Claims Databases to States,” North Carolina Medical Journal (Sept. 5, 2014).
"date": "2020-01-22T04:25:23",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9298843741416931,
"score": 2.96875,
"token_count": 5166,
"url": "https://www.healthcarevaluehub.org/advocate-resources/publications/all-payer-claims-databases-unlocking-data-improve-health-care-value/"
} |
Accelerator physics: alternative material investigated for superconducting radio-frequency cavity resonators
In modern synchrotron sources and free-electron lasers, superconducting radio-frequency cavity resonators are able to supply electron bunches with extremely high energy. These resonators are currently constructed of pure niobium. Now an international collaboration has investigated the potential advantages a niobium-tin coating might offer in comparison to pure niobium.
At present, niobium is the material of choice for constructing superconducting radio-frequency cavity resonators. These will be used in projects at the HZB such as bERLinPro and BESSY-VSR, but also for free-electron lasers such as the XFEL and LCLS-II. However, a coating of niobium-tin (Nb3Sn) could lead to considerable improvements.
Coatings may save money and energy
Superconducting radio-frequency cavity resonators made of niobium must be operated at 2 Kelvin (-271 degrees Celsius), which requires expensive and complicated cryogenic engineering. In contrast, a coating of Nb3Sn might make it possible to operate resonators at 4 Kelvin instead of 2 Kelvin and possibly withstand higher electromagnetic fields without the superconductivity collapsing. In the future, this could save millions of euros in construction and electricity costs for large accelerators, as the cost of cooling would be substantially lower.
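A rough back-of-the-envelope calculation, assuming an ideal Carnot refrigerator rejecting heat at room temperature, illustrates why raising the operating temperature matters; real cryogenic plants are several times less efficient than this ideal, particularly at 2 Kelvin, so the practical savings depend on plant details.

```python
# Sketch: ideal (Carnot) work needed to remove 1 W of heat at cryogenic
# temperature, rejecting it at an assumed room temperature. Real plants
# perform far worse than this ideal, especially for 2 K operation.
T_warm = 300.0  # K, assumed heat-rejection temperature

def carnot_work_per_watt(t_cold: float) -> float:
    return (T_warm - t_cold) / t_cold

for t in (2.0, 4.0):
    print(f"{t:.0f} K: {carnot_work_per_watt(t):.0f} W of ideal work per W removed")
# ~149 W/W at 2 K vs ~74 W/W at 4 K -- roughly a factor of two even before
# accounting for the extra complexity of sub-atmospheric 2 K helium systems.
```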
Experiments in the USA, Canada, Switzerland and HZB
A team led by Prof. Jens Knobloch, who heads the SRF Institute at HZB, has now carried out tests of superconducting samples coated with Nb3Sn by Cornell University, USA, in collaboration with colleagues from the USA, Canada, and Switzerland. The experiments took place at the Paul Scherrer Institute, Switzerland, at TRIUMF, Canada, and the HZB.
“We measured the critical magnetic field strengths of superconducting Nb3Sn samples in both static and radio-frequency fields”, says Sebastian Keckert, first author of the study, who is doing his doctorate as part of the Knobloch team. By combining different measurement methods, they were able to confirm the theoretical prediction that the critical magnetic field of Nb3Sn in radio-frequency fields is higher than that in static magnetic fields. However, theory predicts that the coated material should display a far higher critical magnetic field in a radio-frequency field than was measured. The tests therefore also show that the coating process currently used to produce Nb3Sn might be improved in order to approach the theoretical values more closely.
The publication was featured on the cover of "Superconductor Science and Technology" (2019): Critical fields of Nb3Sn prepared for superconducting cavities; S. Keckert, T. Junginger, T. Buck, D. Hall, P. Kolb, O. Kugeler, R. Laxdal, M. Liepe, S. Posen, T. Prokscha, Z. Salman, A. Suter and J. Knobloch.
"date": "2020-01-22T06:19:53",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9240697026252747,
"score": 2.8125,
"token_count": 654,
"url": "https://www.helmholtz-berlin.de/pubbin/news_seite?nid=20622;sprache=en;seitenid=74699"
} |
St. Paul CME
The story of St. Paul CME cannot be told without relating the story of the renowned farmer and plantation owner, David Dickson. David Dickson was one of the wealthiest planters in Hancock County at a time when Hancock County was arguably the wealthiest county in pre – Civil War Georgia. Even though his land was not regarded as particularly suited to raising cotton and other cash crops, David pioneered agricultural techniques that were way ahead of his time. His fame as a planter was known far and wide.
David Dickson was also an outcast among the white planters of the region because of his open romantic relationship with one of his slaves, Julia Francis. Upon his death, the daughter (Amanda) he had with Julia was set to solely inherit all of Dickson’s landholdings – about 17,000 acres appraised at $309,000.00. After legal battles with the other Dickson would-be heirs that went all the way to the Georgia Supreme Court, Amanda America Dickson won her right to the land and became the wealthiest African-American woman in the country.
To learn more about the life of Amanda Dickson click here.
The St. Paul CME church was organized in 1857 by slaves on the Dickson Plantation. In 1870, property near the brush arbor where services were conducted was deeded over to the church. This property is the site of the present church and cemetery. Lucius Holsey, a well-known Bishop in the CME church, founder of Paine College in Augusta, and former slave of Richard Malcolm Johnston, began his preaching career at this church. There have been many notable church leaders since, and the congregation is still active today, having established a new sanctuary close to the historic one.
St. Paul is off the beaten path. There is a wide open field in the middle of the woods at the junction of St. Paul Church Road, Pine Bloom Road, and an unnamed dirt road that leads back to the Dickson Plantation house. The church grounds offer a quiet, peaceful place to reflect upon the lives of slaves and former slaves as they struggled to define themselves within the limits of their reality. The cemetery is unusually large for such a rural cemetery, and many of the markers are home-made. The congregation must have felt strongly about leaving reminders of the past for the future. Maintenance of the cemetery and church grounds is a daunting task considering the size, but things are well kept. We encourage visitors to be mindful of the sacred history of this place and the stories – both good and bad – that it has to tell.
"date": "2020-01-22T05:17:26",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9831939339637756,
"score": 2.640625,
"token_count": 535,
"url": "https://www.hrcga.org/church/st-paul-cme/"
} |
IDMP IS SHORT FOR 'IDENTIFICATION OF MEDICINAL PRODUCTS'
Here are the 5 ISO Standards defined
IDMP in a nutshell
The IDMP Standards are a set of 5 ISO international standards developed in response to a worldwide demand for internationally harmonised specifications for the identification and description of medicinal products. IDMP provides the basis for the unique identification of medicinal products, which facilitates the activities of medicines regulatory agencies worldwide, in each jurisdiction, across a variety of regulatory activities (development, registration and life-cycle management of medicinal products, pharmacovigilance, and risk management). The standards can also be applied to Investigational Medicinal Products (IMPs).
Messaging specifications are included as an integral part of the IDMP Standards. They describe and protect the integrity of the interactions for the submission of regulated medicinal product information in the context of unique product identification, and they include acknowledgement of receipt as well as validation of the transmitted information. Health Level Seven (HL7) message exchange specifications are normative within the IDMP Standards.
The IDMP Standards are complemented by Implementation Guides, which are currently in development (2015), as well as by Technical Specification (TS) 16791 (which provides guidance for the identification of medicinal products using international supply chain standards, securing traceability, a safe supply chain and other market requirements) and Technical Requirements (TR) 14872 (requirements for the implementation of the standards for the identification of medicinal products for the exchange of regulated medicinal product information), the latter being in development.
Garhwal Division is the home of the Garhwali people. It is an administrative division covering the north-western region of the north Indian state of Uttarakhand. Garhwal Division lies in the Himalayan mountains, and it got its name from the Gahadvala Dynasty of India, who were Rajputs. It is bounded by Tibet on the north, Kumaon Division on the east, Uttar Pradesh on the south and the state of Himachal Pradesh in the north-west. It includes districts such as Chamoli, Pauri Garhwal, Rudraprayag, Tehri Garhwal and Uttarkashi. The people of this region speak the Garhwali language and are known as Garhwalis. The administrative hub of Garhwal Division is Pauri. It is often said that Garhwal was so named because of its 52 gadhs: 52 chieftains, each with his own independent fortress (gadh).
Origin of Garhwal Division
The region of Garhwal Division was previously settled by the Kols, aboriginal inhabitants of the Austro-Asiatic physical type, who were later joined by the Indo-Aryan Khas (Khasas), who arrived from the northwest by the Vedic period. The Khas are believed to be descendants of the ancient Kamboj, of Eastern Iranian origin. Some believe that the Khasas arrived from Tajikistan and share some common physical traits with the Tajik people.
Historians researching Garhwal and Kumaon say that in the beginning there were only three castes: Khas Brahmin, Shilpkar and Khas Rajput. The primary occupation of the Khas Rajputs was law enforcement and zamindari. The Khas Brahmins' main work was to perform religious rituals in the temples and educational centres of the elite class. The Shilpkars worked for the Rajputs and were skilled in handicrafts. Khas surnames originated from place names, such as Bahuguna from Bahugani and Pandey from Pandeygaon. However, the caste of Uttarakhandi people cannot be determined from their surnames.
History of Garhwal Division
The Garhwal Kingdom was set up by the Rajputs. One of the chiefs, Ajai Pai, brought all the small principalities under his own sway and established the Garhwal Kingdom. He and his successors ruled the Garhwal Kingdom, along with the adjacent Tehri Garhwal, without interruption until 1803, when the region was invaded by the Gurkhas.
The Gurkhas ruled the region for twelve years, until serious friction with the British rulers led to the fierce Gurkha War of 1814. After the war, the territory was fully converted into a British district. The British district of Garhwal lay in the Kumaon Division of the United Provinces.
Geography of Garhwal Division
The geography of Garhwal Division consists mainly of rugged mountain ranges spreading in all directions, separated by narrow valleys which in most cases become ravines or deep gorges. The sole level portion of the district was the small, narrow band of waterless forest between the fertile plains and the southern slopes of Rohilkhand. The highest mountains are in the Chamoli District; the main peaks are Nanda Devi (25,643 ft), Kamet (25,446 ft), Chaukhamba (23,419 ft), Trisul (23,360 ft), Dunagiri (23,182 ft) and Kedarnath (22,769 ft). The Alaknanda, one of the main sources of the Ganges River, receives the whole drainage of the district. The Alaknanda River joins the Bhagirathi River at Devprayag, and the united stream takes the name Ganges.
Cultivation and agricultural activity are mainly confined to the immediate vicinity of the rivers, which are used for irrigation.
People of Garhwal Division
People with Garhwali roots are known as Garhwalis; they originate from Indo-Aryan groups that have mainly inhabited the Garhwal Himalayas. Garhwali people, speaking the Garhwali language or other local dialects, live in the Tehri Garhwal, Pauri Garhwal, Dehradun, Haridwar, Uttarkashi, Rudraprayag, Chamoli and Bageshwar districts of Uttarakhand, India. The culture of the people is a mixture of local traditions and the various traditions of immigrants. The bulk of the people are involved in tourism, agriculture and the defense industry.
Language of Garhwal Division
The language of the Garhwal Division is Garhwali, a Central Pahari language which belongs to the Northern Zone of the Indo-Aryan languages and is native to Garhwal. Garhwali is one of the 325 recognized languages of India and is spoken by over 2,267,314 people in the Uttarkashi, Chamoli, Tehri Garhwal, Pauri Garhwal, Dehradun, Haridwar and Rudraprayag districts of Uttarakhand. The language is also spoken by people living in other parts of the country, such as Haryana, Punjab, Himachal Pradesh, Delhi and Uttar Pradesh.
"date": "2020-01-22T05:35:55",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9586809277534485,
"score": 3,
"token_count": 1164,
"url": "https://www.indianetzone.com/56/garhwal_division.htm"
} |
This bind-off method joins two pieces with a single bind off, without any further seaming.
You will need: your two pieces, each placed on its own needle, and a third needle.
Place the two pieces with right sides facing one another, on two separate needles, with the needle points going the same way.
With 3rd needle, knit together the first stitch of front needle with the first stitch of back needle. You get one stitch on 3rd needle.
Knit together the next stitch from front needle with the next stitch from back needle.
You have 2 stitches on 3rd needle.
Pass first stitch on 3rd needle over the last stitch.
You have one stitch on 3rd needle.
Repeat steps 3 and 4 until you have only one stitch left on the 3rd needle.
Cut and pull yarn.
From the right side
From the wrong side.
You can also work the 3-needle bind off from the right side, with wrong sides facing one another. Here is how it will look on the right side.
On the wrong side.
"date": "2020-01-22T06:13:40",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8965677618980408,
"score": 2.5625,
"token_count": 213,
"url": "https://www.instantsdelouise.fr/en/portfolio/3-needles-bind-off/"
} |
Security & Compliance
Everyone gets a suspicious email from time to time. Maybe it asks for personal or account information. Maybe it directs you to a legitimate-seeming website, and there you’re asked to fill in details about yourself. Perhaps, it’s a generic request or it may seem to come from someone specific, like your boss.
No matter the form, these phishing scams have common characteristics: someone wants your information or for you to take action and they don’t have your best interests in mind. The scammers could be looking to loot your bank accounts or set you up for identity theft. Perhaps, they are trying to gather intel for blackmail or to make a political point.
News reports and statistics show that phishers can be successful. They’ve infected big retailers with malware, gotten funds transferred from companies to bogus accounts, and infiltrated government networks. In surveys, more than 9 in 10 computer users fail to identify phishing emails when tested.
So, what can be done to prevent phishing and how can a cloud solution help?
Well, phishing depends on compliance by you or another victim. The solution, then, is simple: don't comply. Specifically, you should:
- never hand over personal or account information in response to an unexpected email request
- avoid clicking links or opening attachments in messages you weren't expecting
- verify unusual or urgent requests through a separate channel before acting on them
- report suspicious messages so that you and others can be alerted
From this list, it’s possible to see how a cloud solution can help. A cloud-based email solution, for example, can identify and thereby help defang malware by blocking access to malicious files and by scanning incoming email. It also can provide a means for the two-way communication needed to alert you and others of phishing attempts. The information gathered also helps tune the software’s response and improve the protection against phishing.
Scams have been getting more sophisticated. Spear phishing attacks, for instance, use highly specific information, such as an email address from a CEO, and target a very narrow group of victims, such as a branch at a foreign office.
While, in theory, user training and education can help prevent nearly all types of phishing attacks, many attacks still succeed because people are human, after all. They are in a hurry and click on a link without thinking. Perhaps they panic because the request appears to come from a legitimate source, has an immediate deadline, and carries with it a threat of significant consequences.
Therefore, a comprehensive anti-phishing solution should also be included with your cloud technology. Scamming techniques are constantly evolving. For instance, newer variations scoop up information from social media and insert that into a phishing request, with the intention of making it seem genuine. To counter that, you need a solution that is adaptable and also incorporates the information generated by the experiences of many users. Putting the power of the cloud to work helps ensure that this happens in an automated fashion. In this way, it complements and enhances user training.
A flowmeter is a sensory device that is used to measure linear, nonlinear, volumetric or the mass flow rate of a liquid or a gas passing through it. Flowmeters typically consist of a primary device, transducer and a transmitter. As fluid passes through the primary device the transducer senses this movement and then the transmitter produces a usable flow signal from the transducer signal.
Flowmeters come in many variants, including volumetric flowmeters, which measure the volume of fluid passing through. Velocity flowmeters measure the velocity of a flowing stream; typical examples include magnetic, turbine, ultrasonic, vortex shedding and fluidic flowmeters. Mass flowmeters measure the mass flow of a flowing stream and include devices such as Coriolis mass and thermal flowmeters. Other flowmeter variants include insertion flowmeters, which measure flow at one location in a pipe, and flowmeters that measure liquid flowing in an open channel.
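As a simple illustration of how a velocity reading relates to a volumetric figure, the sketch below applies the basic relationship Q = v × A for a full circular pipe. The 2 m/s velocity and 100 mm pipe diameter are made-up example values, and real installations also correct for factors such as flow profile and partially filled pipes.

```python
import math

def volumetric_flow_rate(velocity_m_s: float, pipe_diameter_m: float) -> float:
    """Convert a velocity-flowmeter reading into volumetric flow (m^3/s).

    Assumes a completely full, circular pipe: Q = v * A, where A = pi * (d / 2)^2.
    """
    cross_section_area = math.pi * (pipe_diameter_m / 2) ** 2
    return velocity_m_s * cross_section_area

# Example values (assumed): 2.0 m/s measured in a 100 mm (0.1 m) bore pipe.
q = volumetric_flow_rate(2.0, 0.1)
print(f"Volumetric flow: {q:.4f} m^3/s ({q * 1000:.1f} L/s)")  # ~0.0157 m^3/s, ~15.7 L/s
```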
Kempston Controls stocks a select range of flowmeters from market-leading manufacturers such as ABB and Endress Hauser, designed to suit many applications. Give our dedicated sales team a call on +44 (0) 1933 411411 or use the contact form above to discuss your flowmeter requirements.
From the moment that we started KDG, one of the main focuses was to promote gardening to children to help with their development. It can be tricky to know when to start promoting gardening and we are relatively fortunate because T’s birthday is in February so when he turned three, Spring was right around the corner and I couldn’t wait to get him out there!
However, I did take a step back at this point and thought to myself that for a toddler, growing seeds really is something quite alien! Taking something so small, putting it in soil and waiting for it to grow, is quite an odd concept.
Cress Heads is a good place to start
Cress Heads is a superb way to introduce growing to children. It is amazingly easy and you can have some great fun decorating the eggs before planting the seeds. You can literally be as crazy as you like – ours are rather simple. What I love most about it is growing them on cotton wool as it helps children see the progress of them. If you were to plant them into soil, you wouldn’t actually see the seeds! It’s also a great activity to do all year round as you don’t even need to go outside! If you want to learn more about it, check out our post here.
However, cress heads do have their limitations: you can't really see the roots that are growing, and once the plant reaches the top there isn't that much else to see – other than the fun of a great haircut and lovely egg sandwiches!
What will children learn?
There are a number of skills that growing beans in this way will give children. As this is an easy activity to complete, children can nurture the beans throughout their growth, since the beans only require water to grow.
Patience is also a big part of growing and gardening in general. This is a great stepping stone to get some of those skills underway. A little longer than growing cress heads but the results are just incredible – mainly due to the beans being much larger than cress seeds.
Personally, I love doing this activity around the month of March. This allows a good amount of time for the activity to develop, just in time for the garden beans to grow. Plus, children get to watch the same growth in the garden, with the added benefit of beans at the end of it. Within 5 weeks of growing, the beans in the glass were already at the height of our three-year-old and he is just so excited to show anyone that comes to our house. To him, they are huuuuuge!
How long it takes: Around 4 weeks for the beans to fully grow
Equipment needed: Household items & some beans
What you are going to need
- A large clear container (we used a glass)
- Cotton Wool (the long rolls are best – same as the one we used for cress heads)
- Broad bean seed
Put the cotton wool into the glass jar. You won't need too much of it, just enough for the beans to germinate and take root. An amount that fits into one hand is quite enough.
Place the bean into the glass jar, nestled to the side so that you are able to see the root growth when it begins. We actually placed two beans into the glass, one on each side, mainly so that they could have a race to see which one grows the fastest. If your children are older, you can leave one side exposed to the light and cover the other with some tape. This will show the effect that light has on growth. Once placed, add a little water and then wait for the magic to happen.
Soon enough, you will start to see the beans sprout and the root growth start to appear. This is a really exciting time as you see the root come out of the bean. This literally happens over a couple of days and it is worth while making some time each day to look at the progress of the root growth. If your glass is on the windowsill, there can be some sunny days that will dry out the cotton wool so make sure that it is kept moist to encourage further growth.
Once the root is established, you will start to see the plant growing out of the other side of the bean. This happens really quickly and it is certainly worth checking back on a daily basis.
The bean plant
From the moment that you see the plant appear from the bean, it won't be long before the plant is growing out of the glass and heading towards the light.
Remember to keep the seeds watered at all times. You will certainly know if they start to dry out as they will start to wither. All in all, this is great experiment to do with children. You can either do this at the same time as growing beans in the garden or beforehand so that children know what to expect.
Most importantly have fun!
Water is the lifeline of any yard, be it small or large. Watering the lawn and plants on a regular basis is an essential part of having a healthy and vigorous landscape. Water is a very precious natural resource and is also limited in supply. As there is no substitute for water on earth, we must do our bit to conserve this precious natural resource and prevent any sort of wastage. Here are a few tips and tricks to ensure effective use of water in your yard:
Place Plants in Groups – Landscaping 101
Certain plants need more water, whereas others need less. If you group these plants according to their water requirements, watering becomes easier as well as more effective. Native plants are a good option to reduce water wastage, as they survive on the natural supply of water and eliminate the need for regular lawn watering.
Take Care of Landscape Grounds
Mulches and ground cover plants are a good way to save on water. They help the plants and soil retain moisture and thus require less water. They keep the area cool and also stop water from evaporating. Composting the soil is also an effective way to hold moisture for a longer period and keep the plants healthy without additional help. Aerating the soil is another option to prevent frequent watering, as it helps the water penetrate deeper and keep the roots nourished. A professional lawn care service can be hired to implement some of these water conservation ideas.
Lawn Irrigation Process
Choosing an irrigation system wisely can ensure optimal use of the available water without waste. Installing an automated irrigation system that provides a measured and focused way of watering is a good choice. Drip irrigation is one of the most effective ways to save water yet keep the lawn perfectly moist. The time of watering also plays a role: early morning watering prevents evaporation. Also avoid watering during windy weather, as it leads to more water wastage. It is also necessary to keep a check on the pipes and parts to make sure there is no leakage.
Xeriscaping Landscape Design
Choosing plants that are drought tolerant for your garden is called xeriscaping. This method can keep your landscape looking healthy even when there is no water. These are ideal for places which suffer from water scarcity and droughts frequently.
Recycle Water for Lawn Use
Places which receive ample rain offer the scope to use this water for gardening. There are various ways in which this rainwater can be stored and used for watering the plants and lawns. Water used in the kitchen for boiling, steaming or washing can also be used on the landscaped lawn, as it is often more nutrient-rich.
These are some very easy yet very effective ways to conserve one of our most precious natural resources. All it takes is a little planning and proper execution to take care of your yard while making wise use of water.
"date": "2020-01-22T04:56:09",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9602526426315308,
"score": 3.125,
"token_count": 589,
"url": "https://www.lawncareandlandscape.com/blog/top-5-tips-for-conserving-water-in-your-lawn/"
} |
Fort Janeaux (1879-1883) – Also called Janeaux's Post, Fort Turnay, and Medicine Lodge, this trading post was established by Francis A. Janeaux, a licensed Metis Indian trader and later the founder of Lewistown, Montana. He and his wife, Virginia Laverdure Janeaux, established a homestead in the fall of 1879 on Big Spring Creek, and in partnership with the trading firm of Leighton Brothers, Janeaux built a substantial post.

The trading post, which measured about 100 by 150 feet, was surrounded by a stockade with two bastions at diagonal corners. In the middle were several log cabins, one for him and his family, and the others reserved for clerks and interpreters. The post traded buffalo robes, furs, meat, and pemmican with traveling bands of Missouri River Indians and with about 100 families of the Red River Metis.

No sooner had Janeaux established his trading post than he found himself in direct competition with Alfonzo S. Reed and his Reed's Fort Settlement, which was situated just ½ mile away. However, in the end, Janeaux would win out. In 1882, he and his wife donated a plot of 40 acres to develop the townsite of Lewistown, and the following year he sold his store. By 1884, a two-story hotel had been built facing the store, and before long livery stables and saloons surrounded his old trading post. Today, his post would have sat at what is now the intersection of Third Avenue North and Broadway, right in the center of present-day Lewistown, Montana.
By Kathy Weiser-Alexander, updated January 2018.
While we slumber at night, our brains are busy synthesizing information and solidifying memories – processes that allow us to learn and retain data. Neurological activity varies tremendously during different sleep cycles, and prior studies have shown that people learn new information more rapidly and retain it better over the long-term just before nodding off to sleep.
A newly published study from Paris' PSL Research University has expanded on this concept, finding that we tend to retain information better during two specific phases of sleep: Rapid Eye Movement (REM) sleep, when we usually dream, and non-REM sleep stage N2.
The researchers included 20 healthy participants in their sleep study, and evaluated their brain activity by utilizing electroencephalography (EEG), electromyography (EMG) and electrooculography (EOG). During the study, the researchers played white noise that contained several sound repetitions/patterns while the participants were awake, as well as during REM sleep and non-REM sleep. The next morning, the study participants were assessed on their memory recall of patterns they heard.
The participants found it easier to remember the sound patterns in the white noise when it was played during REM sleep and N2 sleep, compared to deeper stages of sleep. Based on these findings, the researchers believe that when information is heard during deep (Stage 3) sleep, it’s more difficult to learn it again compared to hearing it for the first time. They attribute this concept to the notion that the brain casts off unnecessary memories during deep sleep phases.
Research on how we retain and process information during different types of sleep may prove beneficial in the long-run, though the prospect of learning during sleep is still a long way off. In the interim, scientists agree that quality sleep is essential to the brain and our overall health.
Studies have also demonstrated that prolonged periods of low-quality sleep or too little sleep can negatively impact our ability to learn and remember new information.
We all feel better and more productive when well-rested. But did you know there are serious long-term effects associated with sleep deprivation? Consistent sleep loss not only affects cognitive abilities and memory, but can lead to a risk of health problems including depression, high blood pressure, diabetes, heart disease and obesity.
Proper rest and quality sleep are the foundation to a healthy life. One of the easiest ways to ensure you’re slumbering deeply and getting much-needed REM sleep, is getting the right mattress. If you are waking up tired, foggy and grumpy, chances are it’s time to visit Mattress World Northwest, where our Sleep Specialists can help you find the right mattress. We are proud to be Oregon’s #1 mattress retailer, offering the lowest prices on name brands you can trust.
Our extensive inventory includes premium innerspring, memory foam and latex mattresses that come with our unbeatable Comfort Guarantee. Stop by any of our convenient Portland-area mattress showrooms and check out our great mattress deals on models by Simmons, ComfortAire and Stearns & Foster!
As drought and water shortages become California’s new normal, more and more of the water that washes down drains and flushes down toilets is being cleaned and recycled for outdoor irrigation.
But some public officials, taking cues from countries where water scarcity is a fact of life, want to take it further and make treated wastewater available for much more — even drinking.
“This is a potential new source of water for California,” said former Assemblyman Rich Gordon. “We need to find water where we can.”
In a sense, the water we drink today has been recycling since the beginning of time, thanks to the natural water cycle. Recycling wastewater in a treatment plant simply speeds up that process, and experts say the source of water is not as important as its quality.
“There are places in the world where people are drinking recycled water,” Gordon said. “In fact, it’s the water our astronauts drink at the space station.”
Water recycling is more the norm in countries like Singapore, Israel, Saudi Arabia and Australia, which have long had water shortages. Israel reclaims about 80 percent of its wastewater, while Singapore reclaims almost 100 percent. The reclaimed water is extensively used to irrigate agricultural lands and recharge aquifers in Israel, while most of Singapore’s water is used for industrial purposes.
And because sending loads of water into space wasn’t an option, NASA scientists installed the Environmental Control and Life Support System at the International Space Station so astronauts could safely drink recycled water.
A poll from last year revealed that 83 percent of Californians are “ready to use” recycled water “in their everyday lives.” And a spot survey in downtown San Jose supported the poll’s findings.
“I would drink it,” said Ing-Shien Wu, a Mountain View resident who works in San Jose. “Yeah, it sounds weird. Yes, it was once your waste. But in some sense we are recycling the water anyway.
It goes out and it gets evaporated and comes back as rain. So if they have something that’s comparable, sure. Why not?”
Right now, as much as 5 percent of Santa Clara County’s water supply comes from recycled water, all of which is currently designated for non-potable uses such as irrigation for landscaping and golf courses.
The bulk of that recycling happens at four wastewater treatment plants in the county, whose primary job is to remove all the junk from water before it is flushed into San Francisco Bay. But a small portion of the cleaned wastewater gets a second life — it goes through a few more steps that progressively remove the tiniest of pathogens and harmful chemicals.
Making that water fit for public consumption requires more quality checks and more filtering — all of which shakes out at the Silicon Valley Advanced Water Purification Center, located at the base of the bay, along Zanker Road.
Since its opening in 2014, the plant has been producing about 8 million gallons of near-potable water every day. That’s enough to maintain about 10 Palm Springs golf courses for a day.
“We go above and beyond conventional treatments,” said Paolo Baltar, a civil engineer at the Santa Clara Valley Water District.
The Purification Center pumps partially cleaned water from the nearby San Jose Regional Wastewater Facility through thousands of tiny tubes to get rid of pathogens in a process called microfiltration. The water next flows through reverse osmosis membranes to remove salts, and it finally gets bombarded with intense ultraviolet light to break down any remaining chemicals or pathogens. The water quality is monitored constantly.
But before anyone can drink the purified water, it must go through one final cleaning: It is blitzed with hydrogen peroxide to kill any remaining pathogens.
What Santa Clara County is doing is nothing new — Orange County uses the same process to reclaim wastewater to pad its drinking water supply indirectly by pumping the purified water into the ground.
That’s something the Santa Clara Valley Water District hopes to implement in the near future, said Nai Hsueh, a board member of the water agency.
“At this point, the purification center is experimental,” Hsueh said. “But we are aggressively pursuing plans to produce purified water for potable use.”
The agency has two more advanced water purification projects being planned — one a new plant in Sunnyvale, close to the Donald Summers Water Pollution Control Plant, as well as an expansion of the existing Zanker Road plant.
There is statewide interest in directly mixing the purified water into drinking water supplies, but the “ick” factor has been a barrier.
Terms such as “treated wastewater” and “toilet to tap” don’t exactly help that perception.
Singapore, for example, chose the term “NEWater” for its recycled water. The Santa Clara Valley Water District calls it “purified water.”
A brewery in Half Moon Bay has a more creative approach: It periodically makes beer out of recycled wastewater. The brewing process is exactly the same, although the brewery isn’t allowed to sell it yet.
“We’re doing this to get people to be aware that this is water and we should be making use of these technologies that work,” said Lenny Mendonca, owner of the Half Moon Bay Brewing Company.
To address the public perception issue, former Assemblyman Gordon was able to pass Assembly Bill 2022 last year. It enables water agencies in the state to distribute bottles of advanced purified recycled water for educational purposes.
“The idea behind it was for people to get used to the concept that we can actually purify water to drinking standards,” Gordon said.
The new law went into effect in January. But the Santa Clara Valley Water District doesn't seem to have any plans to move forward with the bottling process. In part, this is because the law requires having a bottled water facility approved by the U.S. Food and Drug Administration on site, Baltar said.
The Purification Center on Zanker Road does its own outreach through public tours for people to see and understand the water’s journey from being slightly muddied to becoming crystal clear. What the tours don’t do, however, is have people taste the water for themselves. But regardless, Hsueh said, the tours are useful to convince people of the high quality of the recycled water.
“When people see it,” said Garth Hall, a deputy operating officer at the water district, “they see it’s just water.” | <urn:uuid:1bd2036e-994c-417a-b465-8fd1ad03e340> | {
"date": "2020-01-22T05:33:49",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9579794406890869,
"score": 3.3125,
"token_count": 1409,
"url": "https://www.mercurynews.com/2017/07/05/toilet-to-tap-some-in-drought-prone-california-say-its-time/"
} |
The costs of renewable energy fell to a record low in 2018, according to a new report from the International Renewable Energy Agency (IRENA). Renewable sources are already the cheapest way to generate electricity in many parts of the world, the intergovernmental agency reports, and they're rapidly outpacing the affordability of fossil fuels on a global scale.
Within the next year, electricity generated by onshore wind and solar photovoltaic (PV) technologies will be consistently cheaper than electricity generated by any fossil-fuel source, the report forecasts. On top of the "hidden" costs of fossil fuels — from dangerous mining and drilling operations to the greenhouse gas emissions that are now disrupting climate patterns all over the planet — this is further boosting the economic case for a global shift to renewable energy.
"Renewable power is the backbone of any development that aims to be sustainable," IRENA Director-General Francesco La Camera says in a statement released May 29. "We must do everything we can to accelerate renewables if we are to meet the climate objectives of the Paris Agreement. Today's report sends a clear signal to the international community: Renewable energy provides countries with a low-cost climate solution that allows for scaling up action."
The biggest cost reduction in 2018 was for concentrated solar power (CSP), which saw a 26% drop in its global weighted-average cost of electricity generation, according to IRENA. This was followed by a 14% drop for bioenergy costs, 13% for solar PV and onshore wind, 11% for hydroelectricity, and 1% for geothermal and offshore wind. These reductions are being driven by technological improvements as well as increased production, Reuters reports.
Hydroelectricity remains the cheapest form of renewable power overall, at a global weighted-average cost of just under $0.05 per kilowatt hour (kWh), but several other sources are now commonly below $0.10 per kWh, according to IRENA. That includes onshore wind, at a little more than $0.05 per kWh, and solar PV, which averages less than $0.09 per kWh globally. Even CSP, the most expensive renewable source, increasingly rivals fossil fuels at about $0.19 per kWh. (For comparison, developing a new power plant based on fossil fuels like oil or gas tends to range from $0.05 to $0.15 per kWh, according to Forbes.)
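As a rough, back-of-the-envelope illustration of what those per-kWh generation costs mean, the sketch below prices one year of electricity for a hypothetical household using 10,000 kWh annually. That consumption figure is an assumption for illustration only, and these are generation costs, not the retail prices a household actually pays.

```python
# Generation cost only -- retail bills also include transmission, distribution
# and supplier margins. The 10,000 kWh/year figure is an assumption for
# illustration, not a number from the IRENA report.
ANNUAL_KWH = 10_000

generation_cost_per_kwh = {
    "hydroelectricity (~$0.05)": 0.05,
    "onshore wind (~$0.05)": 0.05,
    "solar PV (~$0.09)": 0.09,
    "fossil fuel, high end (~$0.15)": 0.15,
    "CSP (~$0.19)": 0.19,
}

for source, price in generation_cost_per_kwh.items():
    print(f"{source:<32} ${ANNUAL_KWH * price:>8,.0f} per year")
```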
These are global averages, so the costs are still higher in some countries. But they're also even lower in others — solar PV, for example, has recently fallen as low as $0.03 per kWh in Chile, Mexico, Peru, Saudi Arabia and the United Arab Emirates.
This trend shows no signs of slowing down, IRENA adds. Costs of renewable energy are expected to continue falling into the next decade, especially for solar- and wind-power technologies. More than 75% of onshore wind and 80% of solar PV projects due to be commissioned next year will generate power at lower prices than the cheapest new fossil-fuel options, according to the report. On top of that, IRENA points out, they're on pace to achieve this milestone even without financial assistance.
A New Perspective
Displacing Two-Over-Three Polyrhythms
by Aaron Edgar
This month we’re going to vary the basic phrasing of polyrhythms. Typically, both sides of a polyrhythm begin together on the first note of the rhythm. We can vary this by displacing one or both sides of the rhythm. We’ll focus on a two-over-three polyrhythm in 3/4. Dotted quarter notes comprise the two side of the rhythm, and quarter notes comprise the three side.
Exercise 1 displaces the two side by starting it on the “&” of beat 1. Notice that we still have two- and three-note groups of equally spaced notes within the same time frame.
The two side can be displaced by one more 8th note to start on beat 2. We can also displace the three side by an 8th note so it starts on the “&” of every beat.
Increasing the subdivision makes this concept especially interesting. If we double the subdivision from 8ths to 16ths we can create the same polyrhythm, but with the added option to create versions that have no point in which the two sides occur simultaneously. Exercise 2 demonstrates this concept by starting the two side on the “e” of beat 1 to create a linear polyrhythm.
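If it helps to see these placements away from the kit, the short sketch below prints one bar of 3/4 as a twelve-slot 16th-note grid, marking the quarter-note three side with X and a displaceable two side with O. The function name and the text-grid format are conveniences invented for this example, not anything from the lesson itself.

```python
# One bar of 3/4 at 16th-note resolution: 3 beats x 4 sixteenths = 12 slots.
SLOTS = 12
COUNTS = ["1", "e", "&", "a", "2", "e", "&", "a", "3", "e", "&", "a"]

def two_over_three(offset_16ths: int = 0) -> None:
    """Print the quarter-note three side (X) against a two side (O) of
    dotted quarter notes (every 6 sixteenths), displaced by the given offset."""
    three_side = {0, 4, 8}                                   # beats 1, 2 and 3
    two_side = {(offset_16ths + i * 6) % SLOTS for i in range(2)}
    print(" ".join(COUNTS))
    print(" ".join("X" if slot in three_side else "." for slot in range(SLOTS)))
    print(" ".join("O" if slot in two_side else "." for slot in range(SLOTS)))

# Exercise 2's linear placement: the two side starts on the "e" of beat 1.
two_over_three(offset_16ths=1)
```

Stepping offset_16ths from 0 through 5 walks through all six placements of the two side listed later in this lesson.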
I try to practice unique concepts like these within the context of a groove. This allows me to really feel how the rhythms work with or against the pulse, which is imperative if you want to apply what you’re practicing musically. In the next few examples, we’ll play 8th notes on the hi-hat, the two side of the polyrhythm on the snare, and the three side on the bass drum.
Exercises 3 and 4 demonstrate the two other positions for the two side in which it doesn’t occur simultaneously with the three side.
Exercise 5 displaces the three side by a 16th note to the “e” of each beat.
Exercise 6 displaces both sides of the polyrhythm. For an interesting variation, try accenting the “&” of each beat on the hi-hat.
In Exercise 7, we’re going to embellish the groove slightly. We’ll start the two side on the “&” of beat 1 and our three side on the “a” of beat 1. There’s also one additional bass drum note on beat 1. Accenting the “&” of each beat with the hi-hat adds an upbeat feel.
In the next example, we’ll use the ride bell to represent our three side while playing the two side on the snare starting on the “a” of beat 1.
We can also take a New Breed–style approach by using the three side of this polyrhythm in an ostinato and leaving one limb free to play variations of the two side. In Exercise 9, the bass drum plays the three side on the “a” of each beat with an additional note on beat 1. With your right hand playing the ride cymbal and your left foot playing the “&” of each beat, your left hand is free to play each displacement of the two side. Here’s the ostinato.
Here are the six placements of the two side of a two-over-three 16th-note polyrhythm. Exercise 10 demonstrates the fourth placement.
Beat 1 and the “&” of 2.
The “e” of beat 1 and the “a” of 2.
The “&” of beat 1 and beat 3.
The “ah” of beat 1 and the “e” of 3.
Beat 2 and the “&” of 3.
The “e” of beat 2 and the “ah” of 3.
Next we’ll apply the rhythm to a more challenging pattern. We’ll use an ostinato that includes a snare on the “&” of beat 2 played with the right hand. The left hand plays the two side between a pair of bells or other small effects cymbals, as demonstrated in Exercises 11 and 12.
We can also create unique variations with this polyrhythm by using 16th-note triplets. In 3/4 time, this subdivision contains eighteen 16th-note-triplet partials. The dotted quarter note is equivalent to nine 16th-note triplet partials, while the three side takes up six partials (which equals a quarter note).
Exercise 13 places a basic two-over-three phrasing over a 16th-note-triplet double bass pattern. The three side is played on a China cymbal as quarter notes, and the two side is played on beat 1 and the “&” of beat 2.
Exercise 14 places the snare on the third partial of the 16th-note triplet on beat 2 and the last note of the 16th-note triplet on beat 3.
Also try displacing the three side within the 16th-note triplets. Exercise 15 moves the three side to the “&” of each beat while starting the two side on the second 16th-note triplet partial on beat 2.
As daunting as these examples may seem, always try to make them groove. Don’t lose sight of musicality when diving into the polyrhythmic rabbit hole.
Aaron Edgar plays with the Canadian prog-metal band Third Ion and is a session drummer, clinician, and author. He teaches weekly live lessons on Drumeo.com. You can find his book, Boom!!, as well as information on how to sign up for private lessons, at aaronedgardrum.com.
Part VIII in our Wealth psychology series: Assistant Professor Avni Shah explains why we push harder as we get closer to accomplishing a goal, and how you can make it work for you when it comes to planning for retirement.
- Remember back to when you were a kid running a race. The finish line comes into sight, and you give it that extra push to finish strong.
Where do we get that energy from? Behavioral economists might call this goal gradient theory.
Goal gradient theory says that a fast start leads to greater success. And knowing how well you're doing along the way generates higher achievement. It can work with our savings, too.
Here it is in action. In a study, people were given punch cards for a free coffee. One group received a card with 10 empty spaces to fill, while another group received a card with 12 spaces, but with the first two spaces already punched. Which group do you think was more likely to fill their cards? The group with the two free punches bought more coffee faster. They felt they were closer to reaching that goal of a free coffee.
We make financial decisions based on our perception of money, time, and context. In the case of retirement, it's time that's at play. Retirement, for example, is a pretty big goal, which can seem elusive and overwhelming, especially when it's so far into the future.
One tip. Set small achievable goals for yourself and track your progress towards them. Maybe it's a certain amount to be saved every year. It can be a good way to work around your own procrastination and maintain momentum. You'll work harder when you see the progress you're making.
Learn about Huntington’s disease and how MS Queensland can provide assistance and support.
About Huntington’s disease
Huntington’s disease (HD) is a disease of the brain that is passed down from parent to child. HD is not evident at birth and symptoms will usually not appear until a person is between 35 and 55 years of age. From the onset of symptoms, people with HD have a life expectancy of 10 to 25 years. There is currently no cure for HD but treatments that can help ease certain symptoms are available.
As a genetically acquired disease, you are only at risk of developing the disease if one or both of your parents also has the disease. You cannot “catch” Huntington’s disease and it cannot skip a generation.
How MS Queensland can help
MS Queensland has a long history of helping people with MS and other progressive neurological diseases such as an acquired brain injury. We know that many of the symptoms of and treatments for MS are also common to other PNDs and we offer our knowledge, expertise and understanding in this field to a broad range of people. Some of our services include:
- Service coordination
- NDIS access assistance
- Employment services
The business plan: content, use and model
What is a business plan? How does one establish a convincing business plan? What should a business plan contain? Where can you find a model or an example of a free business plan?
Definition: A business plan is a written document that presents a business project that will be created or a taken-over, rolling out all aspects of the project.
A business plan is not only a financial plan: the financial aspects are not the only elements that should be presented.
The business plan is often developed in Word or Powerpoint, or any software of this type. There is no standard format and no particular length to which you must adhere.
How is a business plan useful?
The business plan has one main objective: TO CONVINCE.
The business plan is drafted for presentation to third parties, who may be future partners, suppliers, customers, members, or financiers: banks, relatives, venture capitalists, or the donors of a crowdfunding operation.
When reading the business plan, these people:
- must understand the idea and the concept
- should be reassured of the adequacy of the project leaders and the project itself
- need to be convinced by the potential of the market
- must be satisfied by the planned marketing and sales strategy
- will have to be persuaded by the proposed economic and organizational model
- should be reassured about all the risks likely to weigh on the company
A business plan may also be necessary when selling a company to determine the valuation of the business.
The business plan can also be written for your own use, that is to say, kept internal to the company: the idea is to ask questions in order to clarify the direction of the company. Writing a business plan allows you to formulate your ideas, to approach all aspects without forgetting anything, and to ask the right questions. It is also a good working tool between partners, allowing for agreement on a common vision.
In the same way that it is wise to have an updated CV at all times, it is wise to have a business plan on Word kept up to date in order to be able to present your project at any time, to anyone interested, on any occasion.
A business plan is simultaneously the summary of a market study, the presentation of the offer, a financial plan and an important step in the life of the entrepreneur. This is a real test of the project owner’s ability to be lucid, realistic, and convincing.
What does a business plan contain?
The business plan consists of several parts (the number and the order below are indicative only):
- an introduction of the project leaders: CV (skills, experience, diplomas …)
- the history of the project and the idea
- the presentation of the product or service
- the presentation of the market: the market study
- the description of the marketing and sales strategy
- the technical means provided for
- the human resources and the target organization
- the choice of legal status and the reasons for this choice
- financial aspects, i.e. financing, profitability and cash flow (See our article on the financial plan)
- Annexes: additional documents or details concerning the above points
What are the keys to designing a quality business plan?
As we have seen above, the # 1 objective of the business plan is to convince.
To write a quality business plan, you must first be convinced by what you write!
- Write simple sentences, get straight to the point.
- Present data as dots or tables rather than as long paragraphs.
- Highlight important data.
- Reinforce your arguments with factual, concrete, verifiable elements.
- Show that you have identified all the risks and that you have a response for each one.
- Seduce the reader by allowing him to project himself using concrete elements: sketches of the product, illustrations, scenarios, examples.
- Use colour to increase enthusiasm.
- Have your document re-read by outsiders.
We all like to get down and dirty once in a while, after all, we ain't nothin' but mammals. But doing it safely is pretty darn important - especially when you take into consideration that STDs (also known as STIs) are among the leading causes of disease and death in Florida.
According to the information on FloridaHealth.gov, HIV, STDs, TB, and viral hepatitis, remain among the leading causes of morbidity and death in Florida, especially among at-risk populations.
Since 2013 cases of STDs have been on the rise. Even cases of syphilis, which were at their all-time low for the US in 2000 according to the Center for Disease Control, have risen drastically especially in Florida. You can see the increase from 2013-2017 on the chart below.
Note: The chart has not been updated for 2018/19 but the Health.Gov STD website page where it was sourced from was updated and modified in February 2019.
The government website further states that anyone who is sexually active is at risk for contracting these diseases, but some groups are more affected including young people from 15-24, gay and bisexual men, and those who have multiple partners. Additionally, nursing and expecting mothers can pass on certain diseases to their child - some of which are fatal and detrimental to the health of their babies.
While most STDs and their symptoms can be treated without incident if caught in time, it's not unheard of for some STDs to cause secondary infections, long-lasting complications, and even cancer.
According to Cancer.net, HPV, the virus that causes genital warts, is the most common STD and remains the leading cause of cervical cancer. Approximately 79 million Americans are currently infected with HPV - that's about 80% of people who are sexually active.
Viral STDs such as HPV, HIV, herpes, and hepatitis C cannot be cured, but they can often be managed with modern medicine.
The best way to avoid STDs is to remain abstinent but there are other ways to keep yourself protected including the use of condoms, vaccines, and asking your potential partners to get tested for diseases before engaging in sexual activity.
More information about STDs in Florida can be found here.
Procompsognathus is a dinosaur that was discovered near Württemberg, Germany in 1909 by Albert Burrer. In 1913, it was named Procompsognathus by Professor Eberhard Fraas – a name which means “before elegant jaw.” It was given this name because its jaws are missing some of the components that would evolve in later dinosaurs.
Some interesting facts about Procompsognathus are that it is not only one of the earliest dinosaurs, living during the Triassic period (about 222 million years ago), but that it is also one of the tiniest dinosaurs to ever live. This dinosaur was only about 10 inches high (at the hips), 3.8 feet long and weighed a mere 2.2 pounds.
This dinosaur belongs to the Order Saurischia – meaning that members of this family were the early ancestors of modern birds. They are also known as “lizard hipped” because they have a hip structure that closely resembles that of a lizard. This includes a pubis bone that points downward and forward.
As you can tell from the Procompsognathus pictures, these dinosaurs very much resembled lizards. And they probably moved very much like a lizard – using quick short bursts to cover ground. This probably made it a pretty efficient hunter for its size.
Its diet probably consisted of small mammals, insects and other reptiles. This means that its diet could have included mammals such as Tritylodon; invertebrates such as spiders, millipedes and centipedes; and reptiles such as Proganochelys. It might even have been a scavenger – feeding off the carcasses of animals that had already died. However, scientists believe this is probably unlikely because this dinosaur has a lot of small pointed teeth – which it wouldn't need if it merely scavenged its meals.
Paleontologists believe that these dinosaurs could run very quickly. While most estimates state that this dinosaur's top speed was probably around 30 miles per hour, other estimates state that Procompsognathus may have been able to reach a top speed of 43 miles per hour. If that's true, then that would mean this dinosaur could have run as fast as a modern day ostrich!
A synthetic chemical similar to the active ingredient in marijuana makes new cells grow in rat brains. What is more, in rats this cell growth appears to be linked with reducing anxiety and depression. The results suggest that marijuana, or its derivatives, could actually be good for the brain.
In mammals, new nerve cells are constantly being produced in a part of the brain called the hippocampus, which is associated with learning, memory, anxiety and depression. Other recreational drugs, such as alcohol, nicotine and cocaine, have been shown to suppress this new growth. Xia Zhang of the University of Saskatchewan in Saskatoon, Canada, and colleagues decided to see what effects a synthetic cannabinoid called HU210 had on rats’ brains.
They found that giving rats high doses of HU210 twice a day for 10 days increased the rate of nerve cell formation, or neurogenesis, in the hippocampus by about 40%.
Just like Prozac?
A previous study showed that the antidepressant fluoxetine (Prozac) also increases new cell growth, and the results indicated that it was this cell growth that caused Prozac’s anti-anxiety effect. Zhang wondered whether this was also the case for the cannabinoid, and so he tested the rats for behavioural changes.
When the rats who had received the cannabinoid were placed under stress, they showed fewer signs of anxiety and depression than rats who had not had the treatment. When neurogenesis was halted in these rats using X-rays, this effect disappeared, indicating that the new cell growth might be responsible for the behavioural changes.
In another study, Barry Jacobs, a neuroscientist at Princeton University, gave mice the natural cannabinoid found in marijuana, THC (delta-9-tetrahydrocannabinol). But he says he detected no neurogenesis, no matter what dose he gave or the length of time he gave it for. He will present his results at the Society for Neuroscience meeting in Washington DC in November.
Jacobs says it could be that HU210 and THC do not have the same effect on cell growth. It could also be the case that cannabinoids behave differently in different rodent species – which leaves open the question of how they behave in humans.
Zhang says more research is needed before it is clear whether cannabinoids could some day be used to treat depression in humans.
Journal reference: Journal of Clinical Investigation (DOI:10.1172/JCI25509)
First of all, when planning business strategies it is necessary to define the goals of the organization. The main overall objective of the organization, the clear reason for its existence, is designated as its mission.
All targets are developed for this mission.
The goals that are developed serve as benchmarks for the subsequent decision-making process. The mission details the status of the company and provides direction and guidance for defining goals and strategies at different levels of development. Formulating the mission involves figuring out what business the firm is engaged in, defining the working principles of the firm under pressure from the external environment, and defining the company culture.
Part of the company's mission is determining the basic needs of consumers and satisfying them effectively, which supports the firm in the future.
Managers often believe that their main mission is profit. Indeed, by satisfying this internal need the firm will eventually be able to survive. But to earn a profit, the firm must monitor the environment in which it operates, taking into account value-based approaches to the market.
The company's general goals are formulated on the basis of the organization's overall mission and the values and goals to which top management is oriented:
• Specific and measurable goals (this helps to establish a clear frame of reference for follow-up and assessment of progress)
• Goals that are oriented in time (it is necessary to define not only what the firm wants to achieve, but also when the result should be achieved)
• Achievable goals (these serve to increase the efficiency of the organization); setting goals that are too difficult to achieve can lead to disastrous results.
• Mutually supporting objectives (actions and decisions that are necessary to achieve one goal shouldn't interfere with other objectives)
The objectives of the organization are divided into economic and non-economic. Non-economic goals include social goals such as improving working conditions. People are the most important factor in the success of the organization, so we shouldn't forget about their interests. The economic goals of the organization can be divided into quantitative and qualitative. An example of a quantitative goal is increasing the firm's market share. An example of a qualitative goal is achieving technological superiority in the industry. The activities of the organization are very diverse, so the organization cannot focus on a single goal and should identify several of the most significant targets for action.
So the business strategy is a combination of different objectives.
Illustrative business plan samples
OGSCapital’s team has assisted thousands of entrepreneurs with top-rate business plan development, consultancy and analysis. They’ve helped thousands of SME owners secure more than $1.5 billion in funding, and they can do the same for you.
A series of free Science Lessons for 7th Grade and 8th Grade, KS3 and Checkpoint Science in preparation for GCSE and IGCSE Science.
Reactions between Metals and Non-metals
Reaction of metals with oxygen
Reaction of metals with sulfur
Burning Magnesium in Air
1. Clean a small strip of magnesium ribbon.
2. Record the mass of the crucible and lid on a balance (mass A1).
3. Place the magnesium ribbon in the crucible, replace the lid and record the mass (mass A2).
4. Place the crucible on the pipe-clay triangle and heat strongly.
5. Re-weigh the crucible and lid (mass A3).
6. Record these results in a suitable table.
7. How has the mass changed? Can you explain these results?
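If the result needs explaining, the magnesium combines with oxygen from the air, so the mass goes up: the gain (mass A3 minus mass A2) is the mass of oxygen that has combined with the metal. The example masses in the last line are illustrative numbers, not results from the practical.

```latex
\begin{align*}
  \text{magnesium} + \text{oxygen} &\rightarrow \text{magnesium oxide} \\
  2\,\mathrm{Mg} + \mathrm{O_2} &\rightarrow 2\,\mathrm{MgO} \\
  \text{e.g. } 0.24\ \mathrm{g\ Mg} + 0.16\ \mathrm{g\ O_2} &\rightarrow 0.40\ \mathrm{g\ MgO}
\end{align*}
```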
Reaction between zinc and sulfur
Zinc and sulfur react when heated. Zinc sulfide, a luminescent compound, is formed.
Reactions of metals with oxygen
Magnesium reacts with oxygen
Copper reacts with oxygen
Iron reacts with oxygen
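For reference, the word and symbol equations below cover the reactions listed above. Copper is shown forming copper(II) oxide and iron the mixed oxide Fe3O4, the usual products when these metals are heated strongly in air or oxygen, though the exact products can vary with the conditions.

```latex
\begin{align*}
  \text{zinc} + \text{sulfur} &\rightarrow \text{zinc sulfide}         & \mathrm{Zn} + \mathrm{S} &\rightarrow \mathrm{ZnS} \\
  \text{magnesium} + \text{oxygen} &\rightarrow \text{magnesium oxide} & 2\,\mathrm{Mg} + \mathrm{O_2} &\rightarrow 2\,\mathrm{MgO} \\
  \text{copper} + \text{oxygen} &\rightarrow \text{copper(II) oxide}   & 2\,\mathrm{Cu} + \mathrm{O_2} &\rightarrow 2\,\mathrm{CuO} \\
  \text{iron} + \text{oxygen} &\rightarrow \text{iron oxide}           & 3\,\mathrm{Fe} + 2\,\mathrm{O_2} &\rightarrow \mathrm{Fe_3O_4}
\end{align*}
```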
Plant a Garden!
Gardening, the process of growing one’s own food, is arguably one of the oldest human activities. Conservative estimates state that the earliest traces of gardening activity show up around seven thousand years ago, while other evidence suggests that our most primitive ancestors began simple gardening to supplement their hunting and foraging much further back. No matter which theory you choose to believe, there is no question that gardening is an activity that has long been ingrained in our souls. Considering our longstanding history with gardening, it appears it is something that should come easily and naturally, but many of us feel a little intimidated at the idea of planting our own gardens. You may have been born with a natural green thumb, or you may have a few insecurities built up around your ability to sustain any type of plant life. The good news is that it is possible to plant and nourish your very own organic garden regardless of your limits in the departments of ability, time and available space.
Before diving into some tips for starting your organic vegetable garden, I feel that it is worth a few minutes of time to talk a little bit about why you should be interested in gardening in the first place. Gardening, digging your hands into the earth and then enjoying the fruits of your labor, boasts some serious positive benefits for your body and spirit, including:
- You get all the benefits of a healthier diet. Whether you eat it fresh, or opt for freezing and canning, gardening is going to encourage healthier eating habits and a greater diversity of fresh foods. You are more likely to prepare fresh meals and eat at home when your food source is your own back yard.
- Gardening is good for your heart. Gardening is good physical exercise and chances are that if you are out in your garden, you are going to spend at least thirty minutes engaging in quality low-impact exercise. Gardens can be built in containers or on raised beds to accommodate people with back and joint problems. It is well known that a sedentary lifestyle contributes to an increased risk of heart disease, and gardening is a healthy, enjoyable way to lower that risk. Additionally, research shows that people who are deficient in vitamin D are also at a greater risk of heart disease. Spending thirty minutes a day out in the glorious sunshine with your garden can help boost your levels of vitamin D.
- Speaking of the benefits of vitamin D, the extra daily dose that you get from gardening helps to build a stronger immune system. Gardening also helps to build up your immune system in another unique way. When you are digging in the earth, you are going to encounter some friendly bacteria by the name of Mycobacterium vaccae. Exposure to these bacteria has been shown to help regulate an improperly functioning immune system and ease symptoms of arthritis, allergies and inflammatory skin conditions.
- Gardening is good for your mental health. While you are out there digging, planting and tending to your garden, you are doing more than just exercising your body; you are also helping to lower your levels of a stress hormone called cortisol. Lower levels of cortisol are connected to better memory function, improved sense of self-esteem, and lessened feelings of anxiety and depression.
- Gardening builds community. People who garden are generally more involved in their neighborhoods and communities. While you are out tending to your garden, you have more opportunity to get involved in conversation with your neighbors and notice the small details of daily neighborhood activities. You can also join a community garden and form friendships with likeminded people while bonding over your shared gardening experience. Growing your own fruits and vegetables opens you up to the reality that we are all connected on a deeper, more meaningful level.
I could go on and on, talking about the benefits of gardening, however, just talking about why it is so wonderful is not enough to make it happen. What you need is some simple, practical advice for starting your own organic garden. Even if you were born with a thumb that doesn’t resemble any shade of green, these tips will help you see that with the right mindset and the right approach, gardening is simple, enjoyable and far from intimidating. Here are ten easy to follow steps for planting and tending to your very own organic vegetable garden.
1. Plan and Start Small
Before you put your hands in the earth, it is important to have some type of game plan of what you want to plant and how you envision it all coming together. The first thing you want to look at is your local climate. Gardening gives you a natural opportunity to eat seasonally by planting and harvesting the foods that grow best in your local environment during the current growing season. Also consider the limits of what you are working with. Do you have limited sunlight? Are you planning on planting in containers rather than in a ground garden? How much fresh produce can you reasonably consume, preserve or give away? These are all important questions to ask yourself before you get started.
Also, a common new gardener mistake is to be overzealous in your ambitions. If you are new to gardening, start small, and start with plants that are easy to grow. It is best to start off with five or fewer plants that are easy to tend to rather than dive into the deep end with a large garden of possibly hundreds of plants. Depending on what area you live in, you can add plants as the season progresses, giving you a chance to grow your garden as your skill and confidence grows along with it. See the list at the end of this article of easy to grow garden plants that are perfect for beginners.
2. Consider Vertical Space
Being limited in terms of space is no reason to not explore your gardening passion. Urban gardening has become a hot trend, proving that you don’t even need a yard of your own to produce a bounty of fresh produce. Trellises can be used to train plants to grow upwards rather than outwards, and are perfect for gardening in small spaces such as patios and balconies. Vertical planting is especially useful for plants that require support and those that grow on outward stretching vines. Examples of plants that make good use of vertical space include tomatoes, peas, cucumbers, melons, squash and smaller varieties of pumpkin. If you are short on space, also look at plants that can be planted overhead in hanging baskets, such as herbs and hot peppers.
3. Choose your Plants
There are a few things that you want to consider when deciding on plants for your garden, especially if it is your first time. Of course, there are basics like choosing plants that will thrive in your climate and soil conditions. However, there is more that you might wish to consider. For instance, what is the purpose of your garden? Are you looking to develop a new hobby and want to start out with a few super easy plants? Are you looking to reduce your grocery bill? Are you interested in a theme garden such as a “pizza garden”, “salad garden” or even a “pickle garden”?
If you are looking to reduce the amount that you spend at the grocery store, consider which types of fruits and vegetables you enjoy the most. Next, take into consideration how much those particular items cost. For instance, potatoes are relatively inexpensive compared to specialty salad greens. If you enjoy both and have limited space, you will save more money by gardening the costlier produce yourself. Additionally, you want to consider how easy a plant is to grow and care for. If you are new to gardening, plants that are more temperamental and difficult to care for might result in frustration and discouragement.
Most beginning gardeners also find it helpful to visit a gardening center and purchase plants that have already been started rather than starting their own plants from seeds. Try out a few seasons of gardening with pre-purchased plants before taking on the task of starting from seed.
4. Prepare the Soil
By choosing plants that grow well in your area, you already have a great start in making sure that your soil and plants are compatible. Even with that, you want to make sure that your soil is nutrient rich. This doesn’t mean reaching for chemical fertilizers. There are plenty of all natural steps to take to increase the nutrient quality of your soil. You can purchase organic fertilizers such as sea minerals and fish fertilizer, or you can also add compost and other plant based waste material, known as mulch, to your garden soil. Decomposing leaves, damp woodchips and even pine needles are all suitable plant waste that will fertilize your soil. When you begin preparing the ground, or your containers, for planting, dig up the first 2-4 inches of soil and mix in the organic fertilizing material. When it comes time to plant, dig a hole, place a little more fertilizing material in the bottom, followed by a layer of soil then your plant. This helps to build a solid, nourishing foundation for your garden plants.
5. Keep Companions Together
Companion gardening is a method of gardening that keeps plants grouped together based on compatibility. Some plants thrive when put together, while others struggle unnecessarily. The idea is that each plant gives and takes something from the soil that it is planted in. Each plant requires resources to thrive. Companion planting ensures that plants complement each other rather than compete for the same precious resources. Examples of compatible combinations include: asparagus and basil, strawberries and bush beans, broccoli and lettuce, and eggplant and peppers.
6. Pest Control
One of the main benefits of growing your own produce is that you know exactly what is on the food you nourish your body with and you can avoid chemical additives and their toxic effects. Bear in mind, however, that one of a gardener’s top woes is the invasion of unwanted garden pests. You do not have to reach for chemical pesticides to protect your crop, though. There are natural steps to take that will prevent pests from coming to visit in the first place. One of the most important things you can do is revisit tip number five and look at companion gardening to deter unwanted garden guests.
7. Water and Sunshine
These two ingredients for a successful garden are so basic that it is easy to overlook them, and that is the exact reason that they need to be mentioned. Most garden plants require “direct sunlight,” which translates to anywhere from four to eight hours of direct sun exposure each day. Even plants that thrive in partial shade still need adequate amounts of sunlight, so keep this in mind when choosing the perfect spot for your garden. Additionally, don’t automatically count on mother nature to take care of your garden’s water needs. Check your soil regularly. It should be damp, but not completely saturated. It should not feel dry and sandy. Of course, there are exceptions to every rule, so make sure you are aware of the individual needs of each of your plants and choose accordingly.
8. Keeping a Continual Harvest and Planning for the Future
When planting a garden, it is tempting to plant everything and fill up every possible inch of space. This is fine if you are looking for one big harvest, but seasoned gardeners know that you can space out the timing of your plants. For example, you might have some spinach going in your container garden. When your first “crop” of spinach is just about ready for harvest is the perfect time to plant some more. This means that you have a continual stream of the fruits and vegetables that you enjoy the most. This is easiest to do if you have a reputable garden center in your area that supplies healthy plants throughout the growing season rather than starting each one from seed yourself. Additionally, know that for best results you will want to switch out what you grow in your garden every two to three years. This assures that the soil does not become overly depleted in certain nutrients because of needing to support the same type of plant season after season.
9. Keep a Gardening Journal
Starting a garden journal today is the best way to plan for tomorrow’s garden. Make note of what you plant, where you plant it and when you plant it. Then make notes regarding soil quality, what types of natural fertilizers you used and other growing conditions. This will help you troubleshoot and make note of your gardening successes. It is also a great place to write memories of your gardening experience along with jotting down recipes to try once it is time to harvest all your hard work.
Now, with these tips under your gardening belt, it is time to head out and get started. Here is a list of some of the easiest garden plants to grow, perfect for beginning gardeners and for those who want to garden, but need low maintenance plants due to busy schedules and daily life commitments.
Top Plants for Beginners
- Green Beans
- Salad Greens
Don’t let your perceived lack of a green thumb stop you from getting outside and immersing yourself in the world of organic gardening. Gardening is more than a pastime; it is an experience that will provide fuel for your body while healing your spirit and brightening your mood. Fresh air, wholesome food and beautiful sunshine. There are few combinations that are more restorative. Start planning today and a bountiful garden will be your reality tomorrow.
"date": "2020-01-22T04:54:11",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9569165706634521,
"score": 2.96875,
"token_count": 2715,
"url": "https://www.oznaturals.com/blogs/the-natural-life/plant-a-garden"
} |
Has The City Of Atlantis Been Discovered In The Eye Of The Sahara?
By: Paul Wagner | Jun. 5, 2019
If you feel challenged by our relatively unconscious society, you may be one of the many dreamers who fantasizes about the lost city of Atlantis. Some believe the Eye of the Sahara in Mauritania holds the secrets we’ve long imagined to be true. Stretching 14.6 miles across, the Eye appears to be from another world. Considering Plato’s writings on the subject, it’s possible that this incredible structure is the final resting place of millions of Atlanteans.
While Plato’s descriptions of Atlantis are epic and mind-blowing, many believe he barely scratched the surface. He described Atlantis as a massive formation of concentric circles, alternating between land and water, similar to how the Eye is seen today. He emphasized that Atlantis was a wealthy, utopian civilization that created the basis for the Athenian democratic model. Plato went on to describe the land as rich in gold, silver, copper, other precious metals, and gemstones.
According to Plato, the story of Atlantis, first told by the ancient Egyptians, has all the elements you’d expect from a culture that was not only ahead of its time, but also wildly arrogant. Atlantis was a leader in academia, architecture, agriculture, technology, diversity, and spiritual empowerment, their navy and military were unmatched, and the Atlantean kings ruled with extreme authority. It’s no surprise that Atlantis fell in ways similar to Rome, and potentially in a similar way to how the United States could fall.
“This power came forth out of the Atlantic Ocean … an island larger than Libya and Asia put together … Now in this island of Atlantis, there was a great and wonderful empire which had rule over the whole island and several others, and over parts of the continent.”
― Plato, Timaeus/Critias
Soon after waging an aggressive, unprovoked war on parts of Asia, the Atlanteans were defeated by the only army willing to defend the continent: the Athenians. Amidst the battles, the Gods thrust violent tsunamis, earthquakes, tornados, hurricanes, and floods upon the Empire of Atlantis. As if admitting its sins, Atlantis burst apart, dissolved into the ocean and desert, and was never seen again.
The Eye of the Sahara, also known as the “Richat Structure” and “Eye of Africa,” is located in the Sahara’s Adrar Plateau in Mauritania, the Islamic Republic in Northwest Africa. This massive geologic, inverse dome contains rocks and sediment dating back to a time before life on Earth.
Visible from space, the Eye of Sahara resembles a massive bullseye, which began to form when the supercontinent Pangaea broke apart. The igneous rocks embedded in the Eye include carbonates and black basalts akin to Hawaii’s Big Island.
The Richat Structure And Atlantis
Many believe Plato’s stories about Atlantis were parables and that he used Atlantis to set the stage for his ideology. Plato’s Atlantean narrative might be in the same vein as James Cameron’s Avatar, in which he warns us that corporate greed and racism can quickly pollute and potentially destroy our civilization.
King Atlas, aka King of Atlantis and namer of the Atlantic Ocean, is the same person as Atlas of Mauritania. Herodotus’s map from 450 BC places Atlantis in the same place as the Eye. The Egyptians, the first tellers of the Atlantis story, were colonized by Atlantis. It’s through their lineages that we came to learn about Atlantis and its precise location.
The circular isle of Atlantis was described to have a diameter of 127 Stadia. 1 Stadia = 607 feet. When you multiply 127 x 607, the result is 77,089 ft. This is equivalent to around 14.6 miles – the diameter of the Eye.
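Making the unit conversion explicit (taking the article’s figure of 607 feet per stadion at face value and using 5,280 feet per mile):

\[ 127 \times 607\ \text{ft} = 77{,}089\ \text{ft}, \qquad \frac{77{,}089\ \text{ft}}{5{,}280\ \text{ft per mile}} \approx 14.6\ \text{miles} \]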
More Similarities Between Atlantis And The Eye Of The Sahara
Solon, Plato’s relative, was an Athenian statesman and poet who traveled to Egypt and learned about Atlantis first-hand. It’s these stories that Solon relayed to Plato.
In Plato’s Critias and Timaeus dialogues, he describes Atlantis as three alternating zones of water and two of land, which could easily be transposed onto the physical structure of the Eye that we know today.
The nearby mountains were seen as representatives of the Gods and celebrated for their lush rivers and waterfalls. These mountains were said to be in the north, the precise location of the Eye’s mountains. When you look at the satellite images of the Eye, you can see the river and water lines that appear throughout the landscape.
Plato described the sea to the south of Atlantis and the desert surrounding the area, which also appears in satellite images.
It was said the fresh water flowed from the center island of Atlantis, which also exists in the center circle of the Eye.
Satellite imagery shows that weather pushed mud across the region, which could easily be attributed to a tsunami, one of the many aspects of the weather system that simultaneously destroyed Atlantis.
Mauritania exports copper and gold, which were plentiful throughout the Empire of Atlantis.
Eye of Sahara from space
Plato reported that elephants, and many other animals, were abundant on Atlantis; many elephant bones have been found near the Eye.
Black, red and lighter colored rocks were reported to be embedded throughout the land of Atlantis. This is also true of the Eye.
There have been thousands of artifacts found in and around the Richat Structure. Most are 12,000 years old or older, which puts them in the time frame of Atlantis. These items include arrowheads, spears, stone spheres, surfboards, oars, ship hulls, and more.
Legend tells us that Atlantis was an empire made of ten kingdoms, with the island of Atlantis as the capital. The God Poseidon gave birth to five sets of twins, ten children in total, each one running one of the ten kingdoms. Having twins is a rare occurrence. It just so happens that the highest birth rate of twins on planet earth is found in Nigeria – very close to Mauritania.
Plato and Solon were known to have integrity and were therefore rarely challenged. Atlantis is the only story of Plato’s that was ever disputed.
One of the strangest aspects of the history of Atlantis is that none of these theories are presented in Wikipedia, and every related page is locked. This includes pages about the Eye, King Atlas, and the God Poseidon. How is it that the universally accepted concept that Atlantis existed is left out of one of society’s most treasured resources?
If you’re still on the fence about the Eye of the Sahara being the location of the city of Atlantis, consider that the City of Troy was thought to be a myth for thousands of years, until it was found, exactly where Homer said it would be.
“There were a great number of elephants in the island, and there was provision for animals of every kind, both for those who live in lakes and marshes and rivers, and also for those who live in the mountains and on the plains, and therefore for the animal which is the largest and most voracious of them.”
― Plato, Timaeus/Critias
Paul Wagner is a 5-time EMMY Award winning writer, a clairvoyant reader, and an Intuitive-Empath. He is the creator of “The Personality Cards,” a powerful and inspiring Oracle-Tarot deck, helpful in life, love and relationships. Paul studied with Lakota shamans in the Pecos Wilderness who nurtured his empathic abilities and taught him the sacred rituals. He tours the world lecturing, and has lived at the ashrams of enlightened masters, including Amma, the Hugging Saint, for whom he’s delivered keynote lectures at her worldwide events. Paul lovingly offers intuitive readings and coaching to help others with self-discovery, decision-making, healing, and forgiveness. Learn more at PaulWagner.com. Book a session with Paul: HERE. | <urn:uuid:3e890e66-a811-43cd-ba89-3efcff3ffa01> | {
"date": "2020-01-22T06:54:05",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9597057700157166,
"score": 2.9375,
"token_count": 1717,
"url": "https://www.paulwagner.com/city-of-atlantis-in-the-sahara"
} |
A recent outbreak of salmonella has reminded the whole industry that it needs to take a fresh look at prevention to avoid a resurgence of cases
Salmonella is one of the top foodborne bacterial diseases worldwide, with eggs and egg-based products being the primary cause of salmonella-related illnesses. A recent outbreak at a packing centre in Kent that led to the Food Standards Agency issuing a product recall was a stark reminder of the risks involved. UK producers have long recognised that there is no single measure to control salmonella; it is instead widely accepted that a holistic industry commitment and approach is needed.
At the recent Elanco Layer Conference, a recurrent theme was avoiding complacency about previous achievements and continuing to strive to improve best practice to reduce salmonella outbreaks in the future. David Heckman, global food safety consultant at Elanco, was very clear when he addressed the conference: “Complacency has no place in salmonella prevention and management.”
SALMONELLA IN EUROPE
Mark Williams, chief executive of the British Egg Industry Council, told delegates that the UK is the fourth largest EU egg producer with 11.2% of the laying hens in the EU market. Despite it being such a big egg-producing nation, the UK’s salmonella enteritidis prevalence is the lowest across Europe – 0.1% – the same as Romania, which has only 2.1% of the EU’s laying flock. Germany has the largest market share at 12.8% and has a salmonella enteritidis prevalence of 0.5%. But in Latvia, salmonella prevalence rises to 5.7%.
The European Centre for Disease Prevention and Control (ECDC) published a report in February 2019 about the proactive control of salmonella. The report stated that a decrease in salmonella variations throughout Europe of just 1% or 2% would lead to a significant decrease in human salmonella cases, reducing them by 6%.
The review also took note of risk factors for laying hens, which revealed a lower occurrence in non-cage compared with cage systems. Conclusive evidence was found to suggest that an increased stocking density, larger farms and stress result in increased occurrence, persistence and spread of salmonella in laying flocks.
THE 3RS OF SALMONELLA PREVENTION
Heckman presented to the conference his plan for prevention of salmonella, which he called the 3Rs: (1) relentless (2) risks and (3) uRgency. Relentless, he said, refers to the relentless pressure that the industry is under from multiple sectors including the media, activists and the Government. Rules around egg production have also become relentless with more rules than in the past; most of which are made around producers.
Risk, Heckman said, is prevalent in the industry and it is essential that we know exactly where it is coming from at all times. When assessing risk, it’s important to focus on three main areas: economic, legal and business, and brand. From an economic perspective, a salmonella outbreak can have catastrophic effects on businesses with the possibility of being unable to sell a product or prices being discounted.
Heckman said that from a business and brand point of view, it’s important to remember the media’s ability to influence public opinion, snowballing the issue and creating lasting negative thoughts towards the brand well beyond the time of the outbreak. While in most cases the accountability is on the consumer to prepare their food correctly, the legal risk when producing a potentially contaminated product means that accountability for illness cannot be shifted to the consumer.
The final R is uRgency, said Heckman. It is crucial to ensure your story is being told correctly and that misinformation on social media is being addressed with urgency. While salmonella is still a real risk, it’s important to remember that the UK has some of the best standards in the world, and, as standards and prevention continue to improve, this risk will decrease.
To ensure the standards are upheld it is important that producers do not become complacent but keep on top of prevention strategies. That way, the UK’s excellent record on salmonella will be maintained.
"date": "2020-01-22T06:21:39",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9592971801757812,
"score": 2.546875,
"token_count": 877,
"url": "https://www.poultrynews.co.uk/health-welfare/food-safety/no-room-for-complacency-on-salmonella.html"
} |
Christine Jorgensen undergoes gender confirmation surgery in Denmark; she started taking hormones in 1950 but underwent surgery in 1952. Jorgensen was "the first transsexual in the United States to publicly announce her change of sexual identity". She died in 1989 due to complications of cancer.
Shortly after Harry Benjamin published The Transsexual Phenomenon, Johns Hopkins opens a gender identity clinic "to diagnose and treat transsexual individuals and to conduct research related to transsexuality." Over the course of the next 10 years "more than forty university-affiliated gender clinics existed throughout the United States."
The Harry Benjamin International Gender Dysphoria Association was established in 1978. It was named after Harry Benjamin, who was considered to be a "pioneer scholar and researcher on transsexualism." The association was established "to foster communication among professionals that were involved in the treatment and research of gender identity disorders."
First issue of the Standard of Care for Gender Identity Disorders
The first version of the Standard of Care for Gender Identity Disorders is issued. These guidelines were made to help professionals in how to treat people with gender identity disorders. This consensus is still published today and is constantly changing as new research is produced.
In the DSM-III, gender identity disorder was introduced as a "disparity between anatomical sex and gender identity". This was broken into three categories, "transsexualism, nontranssexualism, and not otherwise specified"; this classification lasted until 2013, when the DSM-5 came out. Within the DSM-5, the name gender identity disorder was changed to gender dysphoria, and the change was made to "include only a medical designation of people who have suffered due to the gender disparity, thereby respecting the concept of transgender in accepting the diversity of the role of gender".
Julie Hesmondhalgh played trans character Hayley Cropper on hit soap opera Coronation Street. Although Julie herself is not trans, this was a huge stepping stone towards more trans characters being portrayed on television.
The Harry Benjamin International Gender Dysphoria Association is renamed The World Professional Association for Transgender Health. The reason for the name change was "to eliminate the term 'gender dysphoria' and to put an emphasis on overall health and well-being instead of illness"
WHO removes gender identity disorder as a mental illness
2018 - 2019
The World Health Organization approved the removal of gender identity disorder from the International Classifications of Diseases (ICD). They have changed gender identity disorder to gender incongruence which is better known as gender dysphoria. Gender incongruence is now under the chapter of sexual health within the ICD instead of the mental health chapter. | <urn:uuid:a4730ce3-674c-42ef-8a9c-be44d7639bcd> | {
"date": "2020-01-22T06:49:46",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9454259276390076,
"score": 2.71875,
"token_count": 528,
"url": "https://www.preceden.com/timelines/501007-gender-dysphoria--trans-health"
} |
Printing on corrugated board is a complex process; many input variables affect the results to a varying extent. Not only does the printing process itself have an influence on print quality; the pre-conditions of the substrate affect it as well. The topography of the liner surface is one of many important influence factors. As a first step, laboratory tests concerning the influence of the corrugated board production process on the liner surface topography were carried out (Rehberger et al., 2006). The result was that the movement of the liner on a hot plate, as compared to unmoved sheets, is the major criterion in surface roughness changes on coated and uncoated liners. Pilot trials have been carried out, since laboratory tests cannot be scaled up to real conditions. The first pilot trial with an uncoated liner did not result in any surface topography changes in conjunction with gloss, even though the corrugator was set to extreme temperature, pressure and speed conditions. These settings were adjusted to the pre-heater and double facer of the corrugator. The second pilot trial with coated liners, though, showed a clear impact on the topography of the liner surface. Using the STFI-MicroGloss meter, the visually perceivable gloss lines have been analyzed and, as a result, the average gloss line values computed. The results showed that production speed has the highest influence. The topographical measurements with AFM, FRTMicroProf® and CLSM disclosed that these glossy stripes have a much lower nano-scale surface roughness as compared to the raw material. An extreme condition occurs when the corrugator is restarted after a full stop. One collected sample from the start-up showed longish bubbles across the flute. Not only does low speed cause gloss lines; so do the standard settings set by the operator for optimum corrugated board quality.
Finally, printing trials in flexography and ink-jet were performed to determine the gloss influence of the substrate and whether the gloss lines still appear in the print. The print images were measured with the STFI-MicroGloss. The result for the flexographic printed images is that none of the gloss lines from the substrate appears in the print. The same is valid for the ink-jet printed images. Only the gloss from the print is recognizable. Further trials are necessary to shed light on the interrelation between substrate, gloss and print quality. | <urn:uuid:0928f2de-54db-4540-91cd-32fdb41c0bb2> | {
"date": "2020-01-22T04:46:07",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9487199187278748,
"score": 2.625,
"token_count": 494,
"url": "https://www.printing.org/taga-abstracts/t070367"
} |
This interactive workshop will explore how to design and implement engaging, research-based instructional strategies and routines to support a comprehensive literacy program. Offered by the Provincial Outreach Program for the Early Years, participants will focus on providing opportunities for playful literacy activities that connect reading, writing and oral language as well as designing literacy activities that allow for meaningful connections to the Core Competencies.
When: Friday, January 24, 2020
Where: Learning Services – Board room
Time: 9:00 am – 3:00 pm
Register before January 17, 2020 by emailing Wanda Forster, District literacy support teacher, at [email protected].
"date": "2020-01-22T04:48:47",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9139291644096375,
"score": 2.5625,
"token_count": 129,
"url": "https://www.prn.bc.ca/LT/2019/12/19/popey-literacy-learning-through-inquiry-play/"
} |
According to a recent CDC report, oral problems such as cavities and gum disease affect millions of Americans each year. Oral health is often overlooked or valued less than the rest of your body’s wellbeing. However, dental issues will not only seriously affect your mouth but can also impact the rest of your body. In this post, your dentist in Sacramento explains how maintaining good oral hygiene is a strong investment in your overall health.
How Are the Two Connected?
Like the rest of your body, your mouth contains large amounts of bacteria. While this is normally not a cause for concern, poor oral hygiene can lead to oral bacteria traveling to other parts of your body. Your mouth acts as an entry point to your digestive and respiratory tracts. Reduced saliva flow, a side effect of certain medications, also increases the number of bacteria in your mouth. The inflammation caused by gum disease allows these germs to enter your bloodstream.
What Conditions Are Linked?
Your oral health contributes to various afflictions, including:
- Heart attacks, strokes and endocarditis. Heart disease and infection can occur when bacteria from your mouth travel through your bloodstream.
- Pregnancy and birth complications. Premature birth and low birth weight have been linked to gum disease.
- Pneumonia. Specific bacteria from your mouth can be pulled into your lungs, causing respiratory issues.
- Diabetes. Gum disease does not cause diabetes, but can further complicate the disease, impairing the body’s ability to utilize insulin.
How Can I Protect My Health?
The link between your oral and overall health is serious, but the implementation of a strong oral hygiene routine can help keep you healthy. Simple steps include:
- Brush your teeth twice a day with fluoride toothpaste
- Floss and use mouthwash daily
- Maintain a healthy diet and limit your sugar intake
- Replace your toothbrush at least every three months
- Visit your dentist twice a year for checkups and cleanings
- Avoid tobacco use
Caring for your mouth will benefit your entire body and can even lower your risk of developing certain diseases. Those with preexisting conditions should be especially vigilant of their oral health. If you believe you are suffering from an oral health problem, contact your dentist immediately.
About the Author
Dr. Scott Grivas is committed to helping patients of all ages enjoy happy and healthy smiles. With over a decade of experience, Dr. Grivas is committed to providing patients with the most up-to-date and effective care. He is a member of the International Association of Mercury Free Dentists, the American Academy of Cosmetic Dentistry and has been recognized by the International Dental Implant Association. If you have further questions about oral health, he can be reached through his website or at 916-929-9222. | <urn:uuid:ac110b64-eb2e-4889-9d97-505ec0ae3d40> | {
"date": "2020-01-22T06:29:01",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9330618977546692,
"score": 2.921875,
"token_count": 577,
"url": "https://www.sacsmiledesign.com/blog/2019/06/dentist-in-sacramento-overall-health/"
} |
In Visual Arts at Southland Boys’ High School students learn to explore ideas, techniques and skills as a means of visual communication. Students engage in the exploration of Maori visual culture as well as European, Pasifika, Asian and other cultures to enrich their appreciation in the Visual Arts.
The Visual Arts Programme at Southland Boys’ High School is structured around four inter-related strands:
- Understanding Art in context
- Developing practical knowledge
- Developing ideas
- Communicating and interpreting
Within these areas students explore skills, culture, social issues and concepts both individually and collaboratively.
Visual Art is taught as a subject in Years 7-9. At Year 10 it becomes an option and is chosen as an NCEA subject in Years 11-13. At NCEA Achievement Standard level students develop their visual literacy and engage with a wide range of art experiences in increasingly complex thinking and processes. This is a whole year course.
At Years 7-10 students engage in a wide range of art making skills. Years 11-13 explore in depth an area of specialist interest. Visual Art provides opportunities to focus on Painting, Design, Print-making, Sculpture and Photography.
"date": "2020-01-22T04:31:04",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9199663400650024,
"score": 2.90625,
"token_count": 247,
"url": "https://www.sbhs.school.nz/visual-arts/"
} |
Austinites are proud of the contributions we've made as a city to live more sustainably. We recycle, compost, grow our own food, raise chickens and goats, and we're starting to see more electric cars and hybrids on our roads than any other city in Texas. In fact, to help reach that goal Austin made a commitment back in 2007 to be completely carbon neutral by 2020, partly by having 330 vehicles in our city's fleet be plug-in electric automobiles. While that date is certain to slip given our current rate of growth and available programs, we are doing our best collectively to help move Austin in the right direction.
How many electric cars will we have?
According to Jessi Devenyns of the Austin Monitor, "ERCOT is predicting that by 2031, 20 percent of the vehicles on Texas roads will be electric. In Austin, this would translate to 320,000 vehicles producing $128 million a year in e-fuel revenue. It would also mean that these vehicles would make up 10 percent of the energy load on the city’s grid." That's a lot of electric cars and buses!
Now that batteries are coming down in price, electric vehicles are more affordable than ever. Not only is the fuel cost of owning a plug-in electric vehicle much lower than a traditional internal combustion vehicle, but the maintenance costs are significantly lower as well. According to Austin Energy, "The only routine maintenance required when owning a plug-in electric vehicle is the occasional tire rotation: no oil changes, transmission fluid, belts and hoses being replaced, for example."
How will we accommodate all these plug-in cars that need to charge?
Did you know that Austin already has well over 300 charging stations positioned all over the city to help EV owners stay charged throughout their day? Austin Energy has put together an interactive map to show you where you can go to charge your electric vehicle although most EV owners will install a charging station at their own home and several apartment complexes have started offering these to tenants. Some companies, like ChargePoint, are offering interactive apps that allow you to enter your location and it will direct you to the nearest charge station.
What does it cost to install a charging station in a home?
We are starting to see folks use solar panels to power their electric vehicle charging stations. Austin Energy's $2,500 residential rebate program for solar panels is still ongoing, making the choice to commit to electric easier on your wallet and on your mind. According to the website, FIXR, the national average cost to install a charging station in a residential home is around $1,000-$3,000+, depending on the equipment used, the installation contractors, and available rebates. It's important to look around to find the best option for your home and the type of vehicle you are interested in charging.
Roughly 95% of charging of an EV will be done at home. As such, property owners have a variety of options to choose from when installing personal chargers at home. Most people go with the standard 12-hour "overnight" chargers but can upgrade to models that can charge a battery completely in less than 30 minutes. It depends on your lifestyle as well as your budget.
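As a rough sketch of why those times differ (the 60 kWh battery, 7 kW home charger and 120 kW public fast charger below are illustrative assumptions, and charging losses and the slower taper near a full battery are ignored), charge time is roughly battery capacity divided by charger power:

\[ t \approx \frac{E_{\text{battery}}}{P_{\text{charger}}}, \qquad \frac{60\ \text{kWh}}{7\ \text{kW}} \approx 8.6\ \text{h}, \qquad \frac{60\ \text{kWh}}{120\ \text{kW}} = 0.5\ \text{h} \]

This is why a typical home unit is left to charge overnight, while high-power public chargers can do the same job in well under an hour.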
With gasoline prices in flux and cost of materials on the rise, electric vehicle options are looking more appealing to the masses every day. The future is here and it's electric. Will you be ready when it's time to buy your next vehicle? | <urn:uuid:c68f75e5-6c6b-4672-a844-60a0afbcfc3c> | {
"date": "2020-01-22T06:11:09",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9655911922454834,
"score": 2.640625,
"token_count": 708,
"url": "https://www.seeaustinareahouses.com/blog/article/Are-You-Ready-For-An-Electric-Vehicle/BL363E1E6458E44A"
} |
Kids Learning After School (KLAS) provides a safe and enriching environment for students to receive help with academic, physical, and arts enrichment activities, which are based on the needs and interests of participating students at seven school sites within the District: Bishop, Ellis, Fairwood, Lakewood, San Miguel, and Vargas Elementary schools and Columbia Middle School. Students are welcome to join KLAS to improve academic achievement and self-esteem and enjoy the fun and enriching learning activities.
The program is partially funded through a grant from After School Education and Safety Program (ASES) and all students are eligible to participate based on availability and teacher recommendation. For more information, please contact your school principal directly.
KLAS program goals include:
- Supporting student academic achievement to meet state standards
- Increasing self-esteem and improving life skills
- Offering positive interaction in a safe and enriching environment | <urn:uuid:aa619a2e-4538-42c2-ada5-328f2bf5dcc3> | {
"date": "2020-01-22T04:58:35",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9512810111045837,
"score": 2.53125,
"token_count": 184,
"url": "https://www.sesd.org/domain/246"
} |
Considerable evidence suggests that changes to brain activity in the cerebellum are involved in autism spectrum disorder. In addition, disruption of a molecular pathway that controls protein synthesis — the mTORC1 pathway — has been implicated in the disorder. In a mouse model in which the mTORC1 pathway has been selectively disrupted in cerebellar neurons called Purkinje cells (PCs), the mice show numerous behaviors that are consistent with symptoms of autism.
Wade Regehr and his colleagues at Harvard Medical School plan to use electrophysiological approaches to determine the mechanisms underlying the behavioral abnormalities in these mice. Their goal is to gain insight into the cellular changes in the cerebellum, such as alterations to synapses, the junctions between neurons, that lead to behaviors associated with autism. They have designed studies to test the following hypotheses: (1) synaptic inputs to PCs are altered, leading to inappropriate PC activity, (2) the intrinsic excitability of PCs is altered, leading to inappropriate firing and (3) synaptic outputs from PCs are perturbed, thereby altering target-cell firing.
The researchers have begun to test these hypotheses and have already obtained preliminary data suggesting that intrinsic excitability of the PCs is altered. Extending these studies promises to provide new insights into the role of cerebellar PCs and the cognitive role of the cerebellum in autism spectrum disorder. | <urn:uuid:64938aca-fa22-4f14-9717-51aab5a13bef> | {
"date": "2020-01-22T04:53:24",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9428490400314331,
"score": 3.15625,
"token_count": 279,
"url": "https://www.sfari.org/funded-project/underlying-mechanisms-in-a-cerebellum-dependent-model-of-autism/"
} |
by Harriet Hall, MD
Depression affects approximately 10% of Americans. It can be fatal; I found estimates of suicide rates ranging from 2-15% of patients with major depression. When it doesn’t kill, it impairs functioning and can make life almost unbearably miserable. It is a frustrating condition because there is no lab test to diagnose it, no good explanation of its cause, and the treatments are far from ideal.
Jonathan Rottenberg is a psychologist and research scientist who began to study depression after his own recovery from a major depressive illness. He teaches psychology at the University of South Florida, where he is the director of the Mood and Emotion laboratory. He has launched the Come Out of the Dark campaign to start a better, richer national conversation about depression. In a new book The Depths: The Evolutionary Origins of the Depression Epidemic, he reviews insights from recent experiments and asks a number of difficult questions, such as why humans evolved to be subject to incapacitating depressions. He comes up with some startling hypotheses, including the idea that evolution favored depression because of its survival value and that depression is essentially a good thing. He offers his ideas as the basis of a paradigm shift.
Is there an epidemic of depression? Rates of depression have been rising in most but not all countries. Is it a real epidemic, or might we be seeing the influence of increased awareness through the media and better diagnosis? I’m not sure we have enough evidence to be certain at this point.
What is depression? Is it:
- A defect in brain chemistry? This is the basis of drug therapy. The chemical imbalance hypothesis is simplistic, misleading, and essentially wrong. Antidepressants do indeed alter brain chemistry as they relieve symptoms, but that doesn’t necessarily mean that a chemical imbalance caused the problem, and it doesn’t explain what caused the imbalance or why it happened when it did.
- A defect in thoughts? This is the basis of cognitive behavioral therapy (CBT). Is mere thinking enough to think yourself into a depression or out of one? The evidence suggests otherwise.
- A defect in childhood experience? This is the basis of psychoanalysis. Freud’s theories have been largely discredited, and people with the most appalling childhoods can have normal adulthoods.
- Not a defect at all? This is what Rottenberg proposes.
Evolution did not design us to be happy. It designed us to survive and reproduce. The function of mood is to integrate internal with external information to enhance fitness. Mood affects behavior: an anxious mood focuses attention on threats; a good mood broadens attention and leads people to seek out variety; and a negative mood first mobilizes effort, then eventually de-escalates effort when a task proves hopeless, conserving resources that can later be used to better purpose. Our moods occur first: we feel happy or sad, we feel a need to explain why we feel that way, and we think of a reason that would explain the mood. The reasons we come up with are not necessarily the right ones, and often they are mere confabulations.
Low mood has its benefits. Non-depressed people tend to overestimate their abilities, are prone to positive illusions, and demonstrate overconfidence and blindness to faults. When depressed, people are more realistic; they are more deliberate, skeptical, and careful in processing information from the environment.
Low mood can be triggered in animals and humans by factors such as separation from the group, removal to an unfamiliar environment, the inability to escape from a stressful situation, the death of a significant other, scarce food resources, prolonged bodily pain, and social defeat. Low mood serves as an alarm system. It gets our attention and lets us know something is wrong. Depression allows us to stop, retreat to an emotional cocoon, analyze what went wrong, and hopefully change course to avoid future calamities.
But low mood has its costs, too. Whatever the benefits, there are plenty of negative effects like distorted thinking, delusions, suicide, difficulty in concentrating and functioning, and weakened executive functions in the brain.
A shallow depression can be adaptive, but a deep depression is maladaptive. There’s a continuum, and any cut-off point to divide normal from abnormal is arbitrary. Rottenberg thinks low moods used to be helpful in the environment where humans evolved, but that the environment has changed in ways that make low moods less advantageous today.
He describes animal and human experiments that shed light on depression. Animals show signs of depression too. Animals often act as if they are mourning after they lose a significant other. In the “tail test,” rats suspended by their tails conserve their resources better if they give up quickly and stop struggling. Their low mood resolves quickly when the stress is over. Adolescent girls who had depressive symptoms became more disengaged from goals over time, but the more disengaged they were, the better off they were in later assessments, reporting lower levels of depression. In another study, a negative mood was found to enhance the quality and concreteness of persuasive arguments. In a starvation experiment, subjects developed the signs of depression as their bodies reacted to conserve the insufficient calories. Their energy and concentration diminished, they lost all interest in sex, and they ruminated obsessively about food. By preventing action they couldn’t afford, depression contributed to their survival on scanty rations. Their depression lasted longer than the experiment; Rottenberg hypothesizes that this strategy is effective because it holds behavior in place until depleted resources can be rebuilt.
How does this normally-resilient mood system fall into deep depression? Prolonged shocks produced helpless behavior in dogs, so they didn’t even try to escape from shocks when it was possible to escape. Chronic mild stress in rats reduces their pleasure-seeking behavior for months afterwards; their responsiveness to rewards returns when they are given antidepressants. Undergoing several stressors at once increases the likelihood of depression in both animals and humans. Not every animal shows prolonged depression, just as not every human becomes depressed under equivalent stresses. Genetic variation is likely the reason: it has been estimated that 30-40% of susceptibility to depression in humans is genetic.
Some kind of loss is always present in depression, whether it be the death of a child or an imagined loss of status. Bereavement is one kind of depression, once thought to be a separate entity but now considered to be part of the same continuum.
How long do minor depressions last? There are no good treatments for minor depression, and doctors often resort to “watchful waiting.” This may be a mistake: a study showed that after a month, only 6% of patients had recovered. Another study found that 72% of people who had a minor depression were still bothered by one or more symptoms of depression when interviewed a year later. At any given time, 22% of the population has at least one significant symptom of depression. Mild depressions outnumber deep ones six to one. Low-level sadness is so ordinary it is often overlooked. But having a mild depression quintuples the risk of a later major depression.
Depression can be triggered by events, temperaments, and routines such as sleep patterns, night shifts, and artificial light. Fish with different temperaments have different success in different environments; the bold fish are more likely to enter a trap, while wary fish are slower to adapt to changing conditions. Humans have an additional problem: Rottenberg says “Homo sapiens has the distinction of being a species that can become depressed without a major environmental insult.” We think our way into deeper depressions by rumination and self-flagellation. We worry about remote or nonexistent possibilities. When we are depressed we think we ought to be able to fix ourselves; but we can’t, and that makes us even more depressed.
Sometimes depressed people can’t even get out of bed. This reflects a lack of goals. They don’t see any good reasons that would motivate them to get up. Humans can set goals in abstract domains where progress is hard to measure. When they hold on to failing goals, they become depressed. They need to disengage from the failing goals. Self-help books and the ideals of happiness in our society create high expectations and perceived failures. In the West the idea of happiness usually involves high levels of arousal like enthusiasm and excitement; in general, those who place the highest values on that kind of happiness tend to be the least happy. Asians tend to place greater value on low arousal states like calm and serenity.
According to Rottenberg, depression arises not from a defect, but from what we do well: thinking, using language, holding onto ambitious goals, and even our drive to be happy. Rottenberg says “The picture of depression that emerges is richer, more interesting, and in some ways more troubling than defect-model approaches would allow.”
He offers clues about how low moods can be better managed: appreciating the costs of thinking, sometimes accepting a low mood with equanimity, aiming for goals that are high but not too high, knowing when it is time to give up on a goal, and realizing that happiness is not itself a goal but “a fleeting byproduct of progress towards other goals.” Despite the evolutionary directive to become depressed, we retain a margin of control to shape its course.
We have learned that depression comes on more gradually and lifts more gradually than we once thought. We can’t predict whether a patient will respond to any treatment, but that doesn’t mean we shouldn’t keep trying. We used to think antidepressants took 6 weeks to show an effect, but we often see patients improving in the first two weeks, even those taking a placebo! Early improvement doesn’t predict final outcome. Early improvers may face fewer life problems, have an innate resilience, or maybe they are just lucky. Recovered patients may still have some residual depression and fear that a relapse could happen any time. A deep depression can re-program our mood system so that it favors a return to low mood states; but the same brain plasticity also allows for re-re-programming to a more normal state with treatments like mindfulness-based cognitive therapy, which attempts to disconnect sad moods from negative thoughts about the self. Mood can be rebuilt by changing the way we think, our environment, our relationships, and our health habits like sleep, diet, and exercise.
“Just like hunger or pain, moods are survival-relevant mental states that can bind together thoughts, feelings, and memories [and] change our mental priorities.” But they lead to mood-congruent memory, where we retrieve memories that match our current mood and are unable to call up contradictory memories. This can fortify us to change our situation, but it also tends to deepen the depression and makes us mentally less nimble.
Depression can be viewed as an opportunity. Rottenberg describes a patient who used her depression as a lens to re-evaluate everything in her life and re-set her priorities. Her life was better after the depressive episode than before.
Rottenberg calls his ideas the “mood science approach” to depression. He says:
The evolutionary perspective asks us to be patient, to learn to tolerate some degree of low mood, and to listen to what it is that low mood can tell us.
I don’t think his approach qualifies as a “paradigm shift,” but he does provide some valuable insights about this frustrating condition. Some of these insights are speculative, but most are based on recent animal and human research.
I used to try to reduce the stigma and guilt of obesity by telling obese patients that their tendency to store calories as fat was not a bad thing per se: it would give them a survival advantage over thinner people in a starvation environment or an environment of alternating feast and famine. But in our modern environment, where food is plentiful, the survival advantage is with the non-obese. I don’t think what I said made them lose weight more successfully, but I hope it provided some degree of comfort and reduced guilt. In the same way, Rottenberg’s concepts may help destigmatize depression. Depressed patients may feel better about their condition if they are told that it is a result of evolutionary traits that are basically good for us but that sometimes overdo it. If nothing else, the ideas and the experimental evidence Rottenberg presents provide plenty of food for thought.
Harriet Hall, MD, also known as The SkepDoc, is a retired family physician who writes about medicine, so-called complementary and alternative medicine, science, pseudoscience, questionable medical practices and critical thinking. She received her BA and MD from the University of Washington, did her internship in the Air Force (the second female ever to do so), and was the first female graduate of the Air Force family practice residency at Eglin Air Force Base. During a long career as an Air Force physician, she held various positions from flight surgeon to DBMS (Director of Base Medical Services) and did everything from delivering babies to taking the controls of a B-52. She retired with the rank of Colonel. She is an editor and one of the five MD founders of the Science-Based Medicine blog. Dr. Hall writes the SkepDoc column in Skeptic magazine, and is a contributing editor to Skeptic and Skeptical Inquirer, as well as a medical advisor and author of articles on the Quackwatch website. She recently published Women Aren't Supposed to Fly: The Memoirs of a Female Flight Surgeon, co-authored the recently released textbook "Consumer Health: A Guide to Intelligent Decisions," and was appointed to the Executive Council of the Committee for Skeptical Inquiry.
Healthy Skepticism is republishing selections from Dr. Hall’s blog with permission. Please visit Science Based Medicine. | <urn:uuid:471804f6-d792-4138-aa00-129f65a778c7> | {
"date": "2020-01-22T05:05:18",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9583074450492859,
"score": 3.03125,
"token_count": 2889,
"url": "https://www.skepticink.com/health/2014/10/30/depression-re-examined-new-way-look-old-puzzle/"
} |
The GDPR, or General Data Protection Regulation, is an evolution in information protection that was agreed upon by the European Parliament and Council in 2016. It replaces the 1995 Data Protection Directive. It is a law that demands that companies be more accountable for how they use the personal data of EU citizens, and it strengthens the existing rights of individuals in the UK and EU. The regulation obliges companies to recognize the risks they pose to individuals and to make sure those risks are justified. In short, it is a progression of personal data protection. Though effective from 25 May 2018, the groundwork for this law had been laid over the previous two decades. It works on the principles of transparency, fairness, accuracy, security, and respect for the individual's data that an organization wishes to process.
Why Did the Companies Send These Emails?
The purpose of those emails was to ask EU customers to renew their informed consent for further marketing communications and data processing, because under GDPR no organization can process an individual's personal data without explicit consent unless there is another legal basis. The consent needs to be specific and freely given, expressed in plain words or through an explicit affirmation by the individual. While valid, GDPR-compliant consent helps build customer engagement and trust and puts individuals more in control, invalid consent can ruin that trust and even harm the reputation of your business.
General Data Protection Regulation
If you deal with the personal information of people living in the European Union, or if your business is based there, then you are likely to be affected by GDPR. Hence, you too need to obtain consent from your customers. The consent has to be unambiguous, specific, freely given, and informed.
GDPR has introduced specific changes to the existing Data Protection Directive to improve the way businesses deal with personal information. People now have the power to demand that businesses reveal, or to challenge, the data they hold. It offers individuals a chance to review their data and decide whether to give companies consent to process it; they can also withdraw that consent. Under GDPR, consent must be an opt-in choice, and pre-ticked opt-in boxes are not allowed. It should also be separated from other terms and conditions and should not be a condition of signing up for a service.
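To make the documentation requirement concrete, here is a minimal, hypothetical sketch (in Python) of what a recorded consent entry might capture. The field names are illustrative assumptions: the regulation defines principles (specific, informed, freely given, withdrawable), not a schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Illustrative shape of a documented, GDPR-style consent entry."""
    subject_id: str
    purpose: str                  # one specific purpose per record, e.g. "email marketing"
    granted: bool                 # must come from an explicit opt-in, never a pre-ticked box
    wording_shown: str            # the plain-language request the person actually saw
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    withdrawn_at: Optional[datetime] = None   # consent can be withdrawn at any time

# Example: a freely given, specific, documented opt-in
consent = ConsentRecord(
    subject_id="user-123",
    purpose="email marketing",
    granted=True,
    wording_shown="Yes, send me the monthly newsletter.",
)
print(consent.granted and consent.withdrawn_at is None)  # True: consent currently valid
```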
Measures Taken for Individuals’ Data Protection
Businesses must take measures to demonstrate that they meet the principles of information protection, i.e., responsibility and accountability. Such measures include data protection by design and by default, and pseudonymisation.
Protection by Design and by Default
It requires the business to incorporate data protecting designs while developing the business processes for services and goods. For this, the privacy settings should be set at the highest level, and the database processor should take procedural measures that the entire processing cycle complies with GDPR.
Pseudonymisation
It is a method of altering personal data in such a way that the resulting data cannot be attributed to a specific individual without additional information. It is recommended for reducing the risks to data subjects while allowing data processors to fulfill their data protection obligations.
With both measures, the data owners retain the encryption and decryption keys for the records.
Improvement in Individuals’ Right
GDPR is a regulation meant for the protection of citizens in the UK and across the EU. While it enforces data protection requirements on businesses, it also grants certain rights to individuals:
Right to Access
According to GDPR, people have the right to access their information and also seek to know how the businesses are processing their databases. On such request, the businesses are bound to provide an overview of where the information is being used along with a copy of the actual records.
Right to Erasure
The new regulation has replaced the prevailing 'Right to be Forgotten' with the Right to Erasure. Now, data subjects have the right to request deletion of their personal data (including data that is relevant to regulatory agreements).
GDPR and Gambling Regulation
Both the ICO (Information Commissioner's Office) and the Gambling Commission are aware that the use of personal information is vital for tackling issues like gambling-related offences and problem gambling. According to the ICO, GDPR is not meant to prevent companies from taking any step that is required in the public interest to stay in compliance with the regulatory requirements for obtaining a license.
GDPR cannot be used as an excuse for not taking steps that let an organization stay in compliance with license requirements, promote licensing objectives, or support responsible gambling. The ICO will, however, offer assistance and support to help businesses comply with both the regulatory framework and GDPR.
The run-up to the effective date of GDPR prompted many companies to alter their privacy policies to adapt to the new requirements. In doing so, they sent an endless number of emails, messages and on-site notifications, despite having had two years to prepare. This has been widely criticized for causing unnecessary consent fatigue among recipients. Some emails have wrongly asserted that consent had to be obtained by the effective date of GDPR; in reality, prior consent also works as long as it is well-documented and complies with the GDPR requirements. Many phishing emails are falsified versions of these consent requests. There is nothing to panic about: GDPR places its obligations on businesses, and there is time for them to comply. It grants you more rights to better control the usage of your personal information. Decide whether you want to give businesses consent to process your information, or whether you want it erased. After all, it is your consent!
"date": "2020-01-22T05:06:22",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9464327692985535,
"score": 2.828125,
"token_count": 1143,
"url": "https://www.slotozilla.com/blog/gdpr-great-terrible"
} |
Balloons could help the small-satellite revolution reach new heights.
Los Angeles-based startup Leo Aerospace is developing a system that will loft bantam spacecraft using a rocket dropped from a giant hot-air balloon about 60,000 feet (18,000 meters) above Earth's surface.
Such "rockoons" had something of a heyday in the 1950s, when they were employed on dozens of suborbital atmospheric-research flights. But they haven't made much spaceflight noise since. (Today's prominent air-launch vehicles, such as Northrop Grumman's Pegasus rocket and Virgin Galactic's SpaceShipTwo spaceliner, are carried aloft by planes.)
Leo Aerospace's autonomous aerostat, named Regulus, is far more advanced than the simple helium balloons of 60 years ago. Regulus features multiple thrusters to maintain stability and orientation, for example, as well as a proprietary rail system for the three-stage, 33-foot-long (10 m) rocket.
That rocket will be capable of launching 73 lbs. (33 kilograms) of payload to a 340-mile-high (550 kilometers) sun-synchronous orbit, or 126 lbs. (57 kg) to a circular orbit 186 miles (300 km) up, according to Leo Aerospace's website.
The company also plans to conduct suborbital missions using Regulus and a 10-foot-long (3 m) rocket, which will be able to get 220 lbs. (100 kg) to an altitude of 250 miles (400 km).
Those rockets will be expendable, but Regulus is designed for rapid and extensive reuse. Indeed, each individual balloon will be able to fly 100 missions, Leo Aerospace co-founder Bryce Prior said earlier this month during a presentation at the U.S. Air Force's first-ever Space Pitch Day.
And the system is mobile, essentially employing a semitruck as a launchpad.
"We can launch from anywhere that you can fit a cargo container," said Prior, who also serves as Leo Aerospace's head of operations and strategy.
Prior did not disclose exactly how much the company plans to charge for an orbital launch. But he did say the cost will likely be just one to three times what customers currently pay for "ride-share" access on big rockets like SpaceX's Falcon 9.
Ride-share participants don't have total control over where their satellites are deployed; as hitchhikers, they must make do with the mission profile required by the primary payload. Leo Aerospace will offer small-satellite operators that control with a dedicated launch, Prior said.
This business plan is similar to that of Rocket Lab, a pioneer in dedicated small-satellite launches. But Leo Aerospace aims to carve out a niche by focusing on even tinier spacecraft; Rocket Lab's 57-foot-tall (17 m) Electron booster can loft about 500 lbs. (225 kg) to orbit on each roughly $5 million liftoff.
Leo Aerospace has some competition for this slice of the spaceflight pie. Spanish startup Zero 2 Infinity has similar goals, for example, and is also developing a rockoon system, which performed its first rocket-powered test flight in 2017.
Regulus will come online as a high-altitude platform by the middle of next year, if all goes according to plan, Prior said. This system will have utility in its own right, even without a rocket on board. For example, Regulus could help engineers test technology for Mars entry, descent and landing. (The air thins out considerably high above Earth, providing a decent analogue of the Red Planet's atmosphere.)
Leo Aerospace aims to start providing suborbital launches in 2021 and orbital missions by the end of the following year, Prior added.
- Success of Tiny Mars Probes Heralds New Era of Deep-Space Cubesats
- Rocket Lab's Electron Rocket
- World View's 'Stratollite' Balloon Stays Aloft for Record 32 Days
Editor's note: This story originally stated that Leo Aerospace received $750,000 earlier this month at the U.S. Air Force's first Space Pitch Day event. That is not correct; the company presented at Space Pitch Day but did not make a "closed-door pitch" for on-the-spot funding. Leo Aerospace is instead applying for government funding as part of the normal Small Business Innovation Research (SBIR) Phase II process and expects to hear back about a potential award soon.
Mike Wall's book about the search for alien life, "Out There" (Grand Central Publishing, 2018; illustrated by Karl Tate), is out now. Follow him on Twitter @michaeldwall. Follow us on Twitter @Spacedotcom or Facebook. | <urn:uuid:318b422c-acff-4564-80db-32d0f7be4a7d> | {
"date": "2020-01-22T05:53:22",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9281477332115173,
"score": 3.21875,
"token_count": 984,
"url": "https://www.space.com/balloon-launch-rockets-leo-aerospace.html"
} |
Submitted by Communications Team on November 13, 2017 - 2:29pm
For some, shorter days and colder weather can trigger feelings of sadness and fatigue, common symptoms of seasonal affective disorder (SAD). If you find yourself in a seasonal slump, try these tips from Special Tree's Psych and Social Work team to help beat the winter blues.
Seasonal Affective Disorder is a type of depression that recurs seasonally. Typically, we Michigan folks are more likely to experience seasonal depression during the winter months, when our internal body clocks shift due to decreased sunlight; the colder temps don't make things any easier. While some of us look forward to all that winter brings, others will feel like 'tis the season to be melancholy.
Submitted by Communications Team on May 9, 2016 - 1:24pm
In honor of Mental Health Awareness Month, here’s a list of positive coping skills from Special Tree Neuropsychology team that may help when you're experiencing strong emotions such as anger, anxiety, or depression. The purpose of these activities is to help you learn to be more resilient and stress tolerant. These activities are not likely to create more stress or problems so give them a try! Learn more about Special Tree's Neuropsychology services here. | <urn:uuid:ac5da1cc-8922-4256-abdf-76f05d9b4cc7> | {
"date": "2020-01-22T05:11:45",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9098148345947266,
"score": 3.203125,
"token_count": 263,
"url": "https://www.specialtree.com/tags/caregiver-support"
} |
Have you ever come across people who are the dentists’ favorite patients simply because they never have dental problems? It could be because they take perfect care of their teeth by brushing and flossing after every meal. Or it also could be because they take note of the foods they eat to keep their teeth healthy. Whatever the reason, you need to practice good dental behavior by eating right and brushing regularly to avoid dental issues such as plaque, periodontal disease etc. Here are some foods that you should add to your diet to keep your teeth healthy.
First on the list of teeth fortifying foods includes milk. Basically, milk contains calcium which is beneficial to all the bones in your body as well as teeth. Milk keeps the teeth stronger and healthier and also protects you from getting periodontal disease. Also, it fortifies the jawbone keeping it healthy and strong.
Women are more prone to getting periodontal disease if there is lack of enough calcium in their diet. Therefore, it’s important to drink and eat calcium rich foods with milk at the top of the list. More specifically, you need to take skimmed or low-fat milk which gives you all the nutrients without any clogging of the arteries, like that experienced with whole milk.
Salmon, a type of fish, is the best source of Vitamin D which is essential for keeping the teeth strong and healthy. Vitamin D allows for proper absorption of calcium which protects the teeth or gums from any oral disease. Basically, taking milk alone doesn’t do all work but taking salmon or other sources of Vitamin D improves the absorption rate of calcium in the body.
It might look surprising but citrus fruits, more specifically oranges, improve oral health by strengthening the connective tissue as well as the blood vessels. These connective tissues are responsible for maintaining support for the teeth in the jaw. Also, vitamin C found in oranges slows down or prevents the progression rate of gingivitis.
Next to oranges, strawberries are also exceptional sources of Vitamin C, which helps repair your gums and prevent oral diseases. Vitamin C assists in the production of collagen, a crucial protein in maintaining the integrity and strength of your gums. A cup of fresh strawberries every day will certainly do the trick.
Drinking clean water everyday removes any debris left by food and maintains high levels of saliva thereby improving oral health. Saliva is the first line of defense against oral decay since it contains minerals and proteins which are responsible for counteracting acids that affect the enamel. Saliva contains at least 95% of water and it would work wonders if you kept yourself hydrated at all times.
Remember, hydrating yourself with sugary and fizzy drinks is not advisable, since the sugar content of these drinks is harmful to your teeth. Drinking water also displaces any sugary content left in the mouth, thereby keeping you free from oral decay. Add all these foods to your diet to delay the progression of oral diseases or prevent them altogether.
"date": "2020-01-22T05:07:55",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9522675275802612,
"score": 3.21875,
"token_count": 604,
"url": "https://www.spiritdental.com/blog/all/the-best-foods-for-healthy-teeth"
} |
The dangers of dehydration
Make sure you always drink enough fluids
Becoming extremely dehydrated—defined by the World Health Organization as when you lose more than 10% of your body weight in fluid—is a potentially life-threatening condition. Dehydration occurs when you use or lose more fluid than you take in, and your body doesn't have enough water and other fluids to carry out its normal functions.
So, what are the symptoms, and how can you prevent dehydration?
Browse the gallery for an overview of the dangers of dehydration. And remember, always seek expert medical advice about symptoms and causes, diagnosis and treatment.
"date": "2020-01-22T06:39:53",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8509151339530945,
"score": 3.109375,
"token_count": 136,
"url": "https://www.starsinsider.com/health/388230/the-dangers-of-dehydration"
} |
Skill Level: Beg
Pre- requisite: None
Skills acquired (Hardware/ Software):
- Product design
Soft Skills acquired:
- Problem Solving
- Mechanical design concepts
- Echo Location
Real Time Applications : Environmental science, biomimicry, robotics, programming
In this super fun class students create smart autonomous robots that avoid obstacles , go hunting for objects and more!
The Bat inspired autonomous robot that uses echo-location to navigate!
See bat-mobiles, fly drones, and have hands-on robotics building fun!
Have you ever considered how a bat can see without any light?
Biomimicry is the study and emulation of nature to perform tasks or solve problems.
A submarine and a bat share the same capability: both use sonar, or echolocation, to detect objects within their sensory range.
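As a rough illustration of the time-of-flight idea behind both sonar and a bat's echolocation, the sketch below (Python, with made-up numbers, not the actual class material) converts an echo delay into a distance and a simple turn decision:

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # in dry air at about 20 degrees C

def echo_distance_m(echo_delay_s):
    """Distance to an obstacle from the round-trip time of an ultrasonic ping.

    The pulse travels out and back, so the one-way distance is half the
    round trip — the same time-of-flight idea a bat or a sonar uses.
    """
    return SPEED_OF_SOUND_M_PER_S * echo_delay_s / 2

def should_turn(echo_delay_s, safe_distance_m=0.25):
    """Simple obstacle-avoidance rule: turn when an object is too close."""
    return echo_distance_m(echo_delay_s) < safe_distance_m

print(echo_distance_m(0.002))   # ~0.34 m for a 2 ms echo
print(should_turn(0.001))       # True: obstacle about 17 cm away
```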
* Beginners are welcome
"RO-BAT"- Build Super fun Sonar robots - Full day!
Please refer to FAQ for details. | <urn:uuid:7c44d07c-e991-458e-955a-7d215cde856d> | {
"date": "2020-01-22T04:58:30",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.8145879507064819,
"score": 3.03125,
"token_count": 203,
"url": "https://www.stremhq.com/product-page/make-your-own-ro-bat"
} |
Education is an ever swinging pendulum. One thing that is not swinging back in our society is our advancements in technology. Who knew just 10 years ago that most people you would walk by on the street would be carrying a computer in their pockets. With that advancement comes an entirely new generation that does not know life before the internet. If they want to know something, they simply ask Siri for the answer. Soft skills are no longer innate but must be taught through practice and repetition. Data is showing that the student-centered model of education is how these tech-infused students are learning and the more hands-on they are, the better.
(This post may contain affiliate links which won’t change your price but will share some commission. Please read our disclosure policy for more information.)
Some courses seem easier than others to implement student-centered instruction. Certainly, when you think of a science class, you think of the labs that are direct hands-on experimentation. A Social Studies class can be filled with simulations that represent different events from history. But what about a physical education class? Surely those are one-size-fits-all when it comes to activities and learning objectives. If you feel that statement is as solid as it can get, then I would believe you haven’t investigated student-centered physical education before.
Why is Student-Centered Physical Education Necessary?
I remember my gym teacher in my senior year. We were a good group of kids who tried hard, but she was very sports-oriented and if you weren’t “good” at an activity, she got really irritated that you weren’t trying. It’s not that we weren’t trying…there were just some activities we didn’t excel at. It was frustrating for us, but that was back in the days of cookie-cutter instruction, especially in a gym class. Not every student is going to be a star at every activity. That is the case for every course and every topic. Some students will naturally “get” some things and have to work harder at others. Knowing that, and knowing the learning styles of this generation, why wouldn’t we work with that as educators to help them to excel to the best of their abilities?
A physical education class is a unique beast. Some kids are naturally athletic while others have no interest in doing anything physical. Sparking the interest of every student is a difficult task to accomplish. The best thing to do is to find what motivates each and every student. The students who are more athletic will be easier to appeal to, but finding out what those who are on the other end of the spectrum enjoy will be more of a challenge. A student-centered physical education class does just this. Maybe they are more into designing in an engineering type sense. Perhaps they can design some type of obstacle course that correlates to a topic in the curriculum. Perhaps during a dancing unit, you can appeal to a student who enjoys history to try to find some cultural dances from around the world to teach to the other students. This idea is a little outside of the box, but it will give the students more of an opportunity to buy-in to the class.
Another way to increase student buy-in is to encourage goal setting. These don’t have to be large goals, but the unit can be directed towards each student reaching their personal goals. Perhaps someone is terrible at basketball. By the end of the unit, maybe their goal is to have one ball go through the net. You can also pair students up for this either by similar goals or in a mentoring sense to help one another. Maybe those who are really great at basketball can create a “how-to” type project to help those that are struggling. The possibilities are really endless with this, but it takes knowing your students.
Modifying activities and breaking them down into the specific skills that need to be mastered in order to complete the activity with more zest is also a great option. This differentiates, but you do need to be careful with this method; the last thing you want is the less apt students feeling like they are inadequate. Don’t approach this by saying “this is the skill you are working on”, but make it more so that different groups are working on different fundamentals: don’t have some students working on throwing a baseball and others playing an actual game. Think of this as a more drill-based activity that meets your students where their abilities are and help them to excel further.
In an era where our students are often more sedentary than previous generations, it is more important now than ever to try to make sure they’re all engaged in our gym classes, and certainly, student-centered methods will be more appealing to all. If your district has it available, there are also a number of digital tools that can be utilized in the physical education class. Games like PokemonGo and apps like Coacheseye and Vidalyze bring a digital element that still encourages movement and physical activity. Students can also create “how-to” videos for different physical activities and games, which could be utilized in the differentiated model discussed previously. Another simple idea is to invest in a few pedometers that students can wear during the class and have a variety of “step” competitions.
The key to all of this is making sure that all students are engaged. In traditional physical education classes, it is difficult to make sure every student is engaged both enthusiastically and to the best of their physical ability. However, in a student-centered physical education class, just adding some of these different elements helps to sustain interest in each student and have them performing better than if the class were cookie-cutter. To learn more, you can check out the book Student-Centered Physical Education. It focuses on the middle school gym class, but certainly, its concepts can be adapted for older or younger students as you see fit. There are so many options out there to make gym class work for every student in your classroom. I challenge you to try out some student-centered physical education techniques in your next class.
"date": "2020-01-22T04:36:02",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9762367010116577,
"score": 2.796875,
"token_count": 1270,
"url": "https://www.studentcenteredworld.com/student-centered-physical-education/"
} |
“In every walk with nature, one receives far more than he seeks.” – John Muir
Who doesn’t enjoy a walk outdoors in nature? The fact that nature settings are less and less accessible to those who live in cities should be concerning, especially with respect to overall health and well-being. The fact is, however, that continuing research shows nature has multiple benefits for your well-being.
More than 50 percent of the world’s population lives in urban areas and that proportion is projected to increase to 70 percent by 2050. Despite many benefits of urbanization, studies show that the mental health of urban dwellers is negatively affected by their city environment, with greater prevalence of anxiety and mood disorders and an increasing incidence of schizophrenia. Finding that bit of green space in cities or spending time in nature visiting rural areas may do more than provide a temporary escape from concrete, steel and glass.
Being in nature improves creativity and problem-solving.
Ever been stumped, hit a wall, unable to arrive at a well-reasoned decision? Most people have, at one time or another. It isn’t coincidence that talking time out to be in nature can result in a subsequent creativity surge and/or the sudden realization of a workable solution. Beyond that, according to 2012 research published in PLoS One, there is a cognitive advantage that accrues from spending time in a natural environment. Other research published in Landscape and Urban Planning found that complex working memory span improved and a decrease in anxiety and rumination resulted from exposure to natural green space.
Individuals with depression may benefit by interacting with nature.
Research published in the Journal of Affective Disorders in 2012 suggested that individuals with major depressive disorder who engaged in 50-minute walks in a natural setting showed significant memory span increases compared to study participants who walked in an urban setting. It was noted that participants also showed increases in mood, but these effects were not correlated with the memory gains, leading researchers to suggest that other mechanisms may be involved or that replication of previous work is needed.
Reductions in anxiety levels may result from green exercise.
While exercise is nearly universally recommended as a means of improving overall health and well-being, the benefits of green exercise have recently been studied relative to how such activity reduces levels of anxiety. Researchers found that green exercise produced moderate short-term reductions in anxiety, and found that for participants who believed they were exercising in more natural environments, the levels of reduction in anxiety were even greater.
Urban and rural green space may help mitigate stress for children and the elderly.
Relief of stress is an ongoing goal for millions of Americans living in urban areas, as well as for residents of cities across the globe. For children and the elderly, access to parks, playgrounds, gardens and other green areas in cities can help improve the health of these groups vulnerable to some of the challenges of urbanization.
Reduce stress by gardening.
Gardening can produce more than food for the table or aesthetically pleasing plants and landscaping. Working in the garden is also beneficial for reducing acute stress. So says the research from Van Den Berg and Custers (2011) who found reduced levels of salivary cortisol and improved mood following gardening.
A nature walk could help your heart.
Among the many health benefits ascribed to being in nature, say scientists, is the protective mechanism that nature exerts on cardiovascular function. This is due to the association between improved affect and heat reduction from natural environments in urban areas. Other research found that walks in nature reduce blood pressure, adrenaline and noradrenaline and that such protective effects remain after the nature walk concludes. Japanese researchers in a study published in 2011 suggested that habitual walks in a forest environment benefit cardiovascular and metabolic parameters. Another Japanese study of middle-aged males engaging in forest bathing found significantly reduced pulse rate and urinary adrenaline, as well as significantly increased scores for vigor and reduced scores for depression, anxiety, confusion and fatigue.
Mood and self-esteem improve after green exercise.
A 2012 study published in Perspectives in Public Health found that study participants, all of whom experienced mental health issues, engaging in exercise in nature activities showed significant improvements in self-esteem and mood levels. Researchers suggested that combining exercise, social components and nature in future programs may help promote mental healthcare. Research by Barton and Pretty (2010) found that both men and women experienced improvements in self-esteem following green exercise, with the greatest improvements among those with mental illness. The greatest changes in self-esteem occurred with youngest participants, with effects diminishing with age. Mood, on the other hand, showed the least amount of change with the young and the old.
Green space in a living environment increases residents’ general health perception.
Not everyone lives in a natural environment, where abundant trees and open space provide welcoming respite from everyday stress and a convenient outlet for beneficial exercise. However, the addition of thoughtfully-planned open spaces in urban environments can add to city dwellers’ perceptions of their general health. That’s according to 2006 research published in the Journal of Epidemiology and Community Health.
Nature can improve the quality of life for older adults.
As adults age, they often experience diminished quality of life due to medical issues and mental health concerns. In a 2015 study published in Health and Place, researchers found that nature exerts an influential and nuanced effect on the lives of older adults. They further suggested that a better understanding of how seniors experience both health and landscape will better inform methods to improve daily contact with nature that can lead to a higher quality of life for this population.
Natural environments promote women’s everyday emotional health and well-being.
A sedentary lifestyle in urban environments has been linked with poor mental health among women. Yet it takes more than just getting up from the desk in an office environment and taking a quick walk to augment overall emotional health and well-being. There is increasing evidence that public access to natural environments helps women alleviate stress and anxiety and facilitates clarity, reassurance and emotional perspective.
* * *
This article was originally published on Psych Central.
To automatically get my posts, sign up for my RSS feed.
Want to get my free newsletter? Sign up here to receive uplifting messages and daily positive quotes in my Daily Thoughts. You’ll also get the top self-help articles and stories of the week from my blog and more. | <urn:uuid:51bacf2a-5dca-4838-bda5-302a44c8535c> | {
"date": "2020-01-22T05:52:56",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9491995573043823,
"score": 3.0625,
"token_count": 1318,
"url": "https://www.suzannekane.net/tag/well-being-and-nature/"
} |
Method Math will help your students become successful and effective problem solvers! Using the problem-solving ROSE process, students will learn a step-by-step method of how to approach and solve word problems. With plenty of practice using each step of the process, they will also develop skills in analytical and critical thinking, deductive reasoning, and writing. A 15 1/4'' x 21 1/2'' (40cm x 54.6cm) color poster outlining the ROSE process is included with each book. Reproducibles included. 8 1/2'' x 11'' (21.5cm x 28cm). 80 pp. | <urn:uuid:17e53fb9-8306-44c6-9376-5d63b106bafe> | {
"date": "2020-01-22T05:06:25",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9314025044441223,
"score": 3.640625,
"token_count": 131,
"url": "https://www.teachersparadise.com/c/method-math-stepbystep-problem-solving-gr-3-p-285.html"
} |
In 1992, doctor-couple Regi M George and Lalitha visited Sittilingi.
Tucked near the foothills of the Kalrayan and Sitteri hill ranges, this remote tribal village in Dharmapuri district, Tamil Nadu, was cut off from the rest of the modern world.
It was home to ‘Malavasis’ or ‘Hill People’ who eke out a living through rain-fed agriculture.
How did the couple get there?
The couple first met as students of the Government T D Medical College, Alappuzha.
In the early 90s, after completing their medical training, Dr Regi and Dr Lalitha worked in a hospital in Gandhigram. People from far-flung areas travelled miles for the treatment of preventable illnesses like diarrhea and childhood pneumonia.
Rattled by the lack of healthcare access, the couple decided to backpack for a year, and document the most sensitive areas in need of help.
This quest led them to Sittilingi.
What disconcerted them most was the sheer lack of healthcare facilities.
In the advent of any medical emergency, the tribals would have to travel to Salem or to Dharmapuri. Because the nearest hospital was more than 50 kilometers away!
And to find one in the event of surgical intervention meant travel over 100 kilometres!
What pushed the couple further was that this hamlet recorded an infant mortality rate of 150 per 1,000 babies, the highest in all of India!
One out of five babies in the Sittilingi Valley died before their first year, and many mothers died during childbirth.
Located in the middle of a forest, buses ran four times a day. But getting to the bus stand required a long walk, lasting several hours.
It could have been easy for Regi and Lalitha to walk away. But they didn’t.
They decided to stay and make affordable healthcare available to Sittilingi’s two lakh people.
Since then, it’s been 25 years and the couple is only moving forward with their project, Tribal Health Initiative (THI).
The hospital was functioning from a hut that had a single room which operated as an out-patient and in-patient unit. All it had was a 100-W bulb and a bench for the patient to lie on.
Speaking to The Better India, Dr Regi says, “We had no money to buy land, so we set up a small clinic on government land, nothing more than a small hut built by the tribals. We worked out of this hut for three years, conducting deliveries and minor surgeries on the floor. ”
Friends and well-wishers donated funds to build a ten-bedded hospital. Today, they have come a long way from the thatched hut to a 35-bed full-fledged hospital, which is equipped with an ICU and ventilator, a dental clinic, a labour room, a neonatal room, an emergency room, a fully functional laboratory, a modern operation theatre and other facilities like X-Ray, Ultrasound, endoscopy, and echocardiography, like any other modern hospital.
Besides, the infant mortality rate in Sittilingi has reduced to 20 per 1,000, now one of the lowest in India. Moreover, no mothers have died in childbirth in the last ten years!
How did the couple achieve it?
Most deliveries in these areas happened at home. A lack of knowledge about childbirth complications or adequate postnatal care led to a very high rate of infant and maternal mortality.
He shares, “We started training health auxiliaries who were tribal women in their 40s and 50s to identify complications during childbirth. They visited homes in their respective areas during each delivery and ensured hygiene and sanitation. For instance, they checked if the umbilical cord was cut and tied properly.”
He adds that in the case of a complicated pregnancy, they would ensure that the mother was rushed to the hospital as soon as she went into labour. These women also visited the newborn within a week to check upon its health.
When they first began, they had to raise funds, even for the simplest procedures. They were also isolated without family or friends.
Besides, their two boys were young and had no schools in the vicinity. But they did not give up. The boys were home-schooled until class four.
Was there resistance among the tribals? Naturally.
But over the years, looking at their work and hardships, all to give the community the best healthcare, helped the community trust the couple.
“They had not seen a real doctor in a long time. If a child were admitted due to meningitis, the villagers would think it was affected by spirits and look for a witch doctor. In the case of snakebite, they wanted to do a puja. We learnt that one of the most important practices was never to counter their beliefs. If they said they wanted a puja to be conducted, we let them do it by the bedside.”
Dr Regi adds how their idea was to make quality healthcare available and affordable. Even today, deliveries are conducted at costs as low as Rs 1,000, and 80-90 per cent of OPD admissions are reserved for the tribal population.
“Some may think it is biased, but it is really them (the tribals) who need our help the most,” he insists.
How then does the hospital run?
Sustenance is difficult, but the couple isn’t giving up.
“We charge nominal amounts. In most cases, people pay, but there are times when they just give us what they have. So the hospital’s annual turnover, donations from good Samaritans, mostly Indians and NRIs and CSR funds, help us run THI without any government help.”
Are they in need of funds? Yes.
But not for expansion but to further subsidise treatments for the poor.
“We do not want cost to hinder their access to healthcare. So whether they can afford it or not, we want to help them. And of course, there is a constant need for money to keep these services running. We issue a pink card for all the babies born in the hospital, which allows them free care until the age of three. Because this service is free, parents take their children to the hospital. But if this service were to stop because of the lack of funds, the parents won’t get their children to the hospital until their health deteriorates drastically,” he informs.
Similarly, they also run an old age insurance scheme which provides access to free healthcare all-year round at Rs 100.
But their work doesn’t end here. The couple has also started an array of other projects to empower the community.
More than 95 per cent of their staff is tribals. Dr Lalitha has been ensuring that women employed at THI also get employee benefits like Provident Fund (PF) and gratuity.
“Most of our nurses, lab technicians, paramedics, and health auxiliaries are tribal boys and girls, who we have been trained by us or others. It is a hospital for the tribals by the tribals, operating in a 50 km radius, serving one lakh people every year.”
Getting women on board wasn’t easy. Especially when it was uncommon for daughters to work since they were married off early. Today, these women are skilled to the extent that they can run the hospital without supervision.
Under Sittinlingi Organic Farmers’ Association (SOFA), formed in 2004, they have mobilised over 500 farmers to give up the use of pesticides and grow chemical-free food, providing a green solution to long-standing woes of low yield, uncertain incomes, and infertile land.
Preserving culture and the dying arts
The couple is also preserving the history and cultural heritage of the tribe by reviving the dying art of Lambadi embroidery. This art form is an amalgamation of pattern darning, mirror work, cross stitch, overlaid and quilting stitches with borders of ‘Kangura’ patchwork done on loosely-woven dark blue or red handloom base fabric.
Often mistaken as Kutchi (Kachhi) embroidery because of mirror work, the shells and coins are unique to this type of embroidery, with the stitches being different.
Dr Lalitha is working towards promoting the Lambadi handcrafts under the name ‘Porgai’, which stands for ‘pride’ in the Lambadi dialect.
Under the brand ‘Svad’, women entrepreneurs are given credits to make organic products using local produce. They make over 25 organic products which includes powders of different grains, millets and spices, helping them earn additional income.
They also launched a farmer insurance policy, under which every farmer family is insured for Rs 50,000 in case of death. This money is pooled from within the community, where every farmer contributes Rs 100.
Dr Regi observes, “Just building and running a hospital isn’t enough. Whether it is eating healthy chemical-free food by adopting organic farming or promoting entrepreneurship among women, the key to a healthy community is dependent on upliftment in different fields.”
In his final message to other healthcare experts, Dr Regi says, “Our minds were full of doubts when we started. We had no money when we started, but we had sincerity of purpose. And sometimes, you just have to close your eyes, trust yourself and take that leap of faith. Like Paulo Coelho says, ‘When you want something, all the universe conspires in helping you to achieve it.’ The same happened to us. There is a crying need in our country, and we need to extend a helping hand.”
If this story inspired you, donate to their cause.
Bank details for Indian donors
A/c holder’s name: tribal health initiative
A/c. number: 11689302723
Branch: State bank of India, Kotapatty, Harur
IFSC code: SBIN0006244
Donors abroad can get in touch via email at [email protected], and the team will guide them.
(Edited by Shruti Singhal) | <urn:uuid:f1881593-eb5d-4fd6-96cf-e316ea1f1758> | {
"date": "2020-01-22T06:03:53",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9705932140350342,
"score": 2.578125,
"token_count": 2186,
"url": "https://www.thebetterindia.com/180247/tamil-nadu-tribal-affordable-healthcare-regi-lalitha/?utm_source=fb&utm_medium=link&utm_campaign=couple+began+hospital+in+a+hut&utm_content=jovita&fbclid=IwAR2oOtV1H_z8JaT_3PvDp2_iAUufozavNKo26WoN4AJbbmx5zk5S5UcCW9k"
} |
Clay County, Florida
Manatees in the St. Johns River near Green Cove Springs
Part of the Greater Jacksonville Metropolitan area, Clay County is located in northeastern Florida. Clay County contains 601square miles of land area and 43 square miles of water surface. The county seat is Green Cove Springs.
Clay County was carved out of Duval County when it was designated by the Florida Legislature on December 31, 1858. The county was named in honor of Henry Clay, a former Senator from Kentucky who also served as the US Secretary of State for several years in the earlier 1800's.
In the early days, Clay County, with its therapeutic warm springs and temperate climate, was a popular destination for tourists from the northern states. Most folks visiting the area came on steamboats. The local spring water was of such quality that President Grover Cleveland had it shipped to the White House. Then Henry Flagler built the Florida East Coast Railway along the coast and the tourists headed south for places like Palm Beach, Fort Lauderdale and Miami.
During World War II, the American military built and operated several training bases in Clay County, effectively turning Camp Blanding (in the center of Clay County) into the 4th largest city in Florida for a few years. These days, Clay County is a popular residential location for military personnel (and retirees) because of the proximity to still-operating bases in nearby Duval County.
Private Sector, wages or salary: 77%
Government Sector: 18%
Unincorporated, Self-Employed: 5%
Population Density: 311 People per Square Mile
Median Resident Age: 35.9 Years
Cost of Living Index for Clay County: 84.8
Median Household Income: $57,780
Median Home Value: $175,740
Health Care, Construction, Educational Services, Government, Lodging & Food Services, Finance & Insurance Services, Professional Services, Retail Services, Social Services
|Population by Age|
|18 & over|140,695|
|65 & over|22,292|
|Population by Ethnicity|
|Hispanic or Latino|14,609|
|Non Hispanic or Latino|176,256|
|Population by Race|
|Hawaiian or Pacific Islander|214|
|Two or more|5,628|
"date": "2020-01-22T05:41:12",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9171228408813477,
"score": 2.765625,
"token_count": 482,
"url": "https://www.tidewater-florida.com/places/clay/index.htm"
} |
Understanding adolescents is a problem at best, and the adolescent who is in poor health or affected by psychological stress is a far greater conundrum. Art history's emergence as a discipline is normally traced to Hegel, although Winckelmann's The History of the Art of Antiquity (1764) might be seen as a beginning too.(4) Hegel was, of course, a philosopher, and art history's debt and connections to philosophy, and to German scholarship, continue to mark out the discipline.
In the ninth-century inscriptions on the Karchung rdo ring, the foundation of the Lhasa gtsug lag khang, the most revered Lhasa temple, is attributed to the reign of Srong btsan sgam po. The Tibetans had encountered the marvels of Buddhist art as an indirect result of their military expansion toward the Himalaya as well as toward the Silk Routes and China.
They originate from numerous areas in Java, dating from the 7th to the 15th century, the Hindu-Buddhist period in the history of the Indonesian archipelago. If you find citations for articles in BHA, you can request a copy of the article through Interlibrary Loan.
An audio recording about Duchamp's iconoclasm of a hundred years ago has arrived at the centre of 20th-century art history. Along with articles, Art Full Text indexes reproductions of works of art that appear in indexed periodicals.
As of March 2010, the database contained about 1,489,580 titles, including 777,580 articles. This is an ideal tool for art historians, artists, designers, students, and general researchers. How to paint and draw faster: 15 tips for high school Art students. Struggling to keep up with the workload is a common concern for many Art students – particularly those who work with a detailed, realistic approach.
"date": "2020-01-22T04:48:07",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9309528470039368,
"score": 2.765625,
"token_count": 406,
"url": "https://www.tregellysfibers.com/definition-topics.html"
} |
Seattle Floating Bridge- Diversity in the Depths
Seattle's 520, the longest floating bridge in the world, is the result of a very deep lake. Utilizing the existing structure to create a bio-diverse habitat in the middle of a deep lake will serve as a prototype for restoring the shoreline ecology of Lake Washington. The existing 72-mile edge along Lake Washington is constantly interrupted by a variety of development, including retaining walls, docks, concrete and rip rap edges, with little native habitat and large expansive lawns. Most of this existing shoreline does little to sustain a diverse ecosystem along the water's edge. Our proposal seeks to create a floating littoral zone habitat through the deepest part of Lake Washington by utilizing the existing floating pontoons. As the 520 floating bridge is currently part of the salmon migration route, our project aims to recognize the importance of creating a safer trip home for everyone. 'Diversity in the Depths' moves from the depth of the lake bed floor to the edge of the riparian zone, illustrating that the health of what is below the water is ultimately connected not only to our own health, but to the health of the entire ecosystem.
"date": "2020-01-22T04:30:57",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.909886360168457,
"score": 2.75,
"token_count": 236,
"url": "https://www.tsstudio.org/content/seattle-floating-bridge"
} |
Automatic Speech Recognition
Automatic speech recognition (ASR) is a technology that can be used to transcribe spoken words into written text.
Ubiqus Spain uses one form of ASR, Large Vocabulary Continuous Speech Recognition (LVCSR), based on the automatic identification of very short audio sequences. This technology makes it possible to produce a high-quality transcription, provided it is given a high-quality audio recording.
The state of the art of ASR has greatly evolved in recent years, and our R&D team is contributing to its continued advancement.
There are 4 Steps to the Process:
Voice Activity Detection
Firstly, it is important to identify when talking /speech is present during the recording, in order to cut the soundtrack into segments. The machine will then work on each of these segments.
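As a rough sketch of the idea (energy thresholding is one common, simple approach; it is not necessarily the method Ubiqus uses), a minimal voice activity detector keeps only the frames whose energy rises above a threshold:

```python
import numpy as np

def detect_speech_segments(samples, sample_rate, frame_ms=30, threshold_ratio=0.5):
    """Split a 1-D NumPy array of audio samples into speech segments.

    Returns a list of (start_sec, end_sec) tuples. Real VAD systems are more
    sophisticated, but the idea is the same: keep only the frames with enough
    acoustic energy to plausibly be speech.
    """
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)

    energy = (frames.astype(float) ** 2).mean(axis=1)   # energy per frame
    threshold = threshold_ratio * energy.mean()          # crude adaptive threshold
    is_speech = energy > threshold

    segments, start = [], None
    for i, speech in enumerate(is_speech):
        if speech and start is None:
            start = i
        elif not speech and start is not None:
            segments.append((start * frame_ms / 1000, i * frame_ms / 1000))
            start = None
    if start is not None:
        segments.append((start * frame_ms / 1000, n_frames * frame_ms / 1000))
    return segments
```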
Next, it’s important to identify the different speakers in each recording, and to group them into segments according to their identity, solving the problem of ‘who spoke when?’. For this, the machine uses different models containing specific data (languages, voice). It is therefore able to differentiate the subtleties of a language (such as accents for example). Note that at this point, we are still in the “mathematical” treatment of the data.
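A heavily simplified sketch of that grouping step might look like the following; the per-segment "voice fingerprints" are invented numbers standing in for the real speaker features a production system would compute:

```python
import numpy as np

# Hypothetical per-segment voice fingerprints (placeholders for real speaker embeddings).
segments = {
    "seg1": np.array([0.9, 0.1]),
    "seg2": np.array([0.88, 0.12]),
    "seg3": np.array([0.1, 0.95]),
}

def group_by_speaker(segments, threshold=0.9):
    """Greedy diarization sketch: a segment joins the first speaker whose
    reference fingerprint it resembles closely enough, else it starts a new one."""
    speakers = []  # list of (reference_vector, [segment_ids])
    for seg_id, vec in segments.items():
        for ref, members in speakers:
            cos = vec @ ref / (np.linalg.norm(vec) * np.linalg.norm(ref))
            if cos > threshold:
                members.append(seg_id)
                break
        else:
            speakers.append((vec, [seg_id]))
    return [members for _, members in speakers]

print(group_by_speaker(segments))  # [['seg1', 'seg2'], ['seg3']]
```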
This is when the actual transcription starts. A list of possible speech sounds (phonemes) is established for each audio segment. At this point, no full sentences have been generated, only a long list of possibilities, each with a score.
The computer chooses, from among all the phonemes and words learned during the initial training, those that are most likely to form the most accurate sentence (a bit like how a GPS identifies the best route). It is this sentence that is transcribed into the document.
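The toy example below uses hypothetical scores (not Ubiqus's models) to show the principle: acoustic scores and a small bigram language model are combined so that the most plausible sentence wins.

```python
# Candidate words per audio segment with hypothetical acoustic log-scores,
# plus a toy bigram language model. All numbers are invented for illustration.
acoustic = [
    {"recognise": -1.2, "wreck a nice": -1.0},
    {"speech": -0.8, "beach": -0.9},
]
bigram_logprob = {
    ("<s>", "recognise"): -0.7, ("<s>", "wreck a nice"): -2.5,
    ("recognise", "speech"): -0.5, ("recognise", "beach"): -3.0,
    ("wreck a nice", "speech"): -2.0, ("wreck a nice", "beach"): -0.6,
}

def best_sentence(acoustic, bigram_logprob, lm_weight=1.0):
    """Exhaustively score every word path and return the highest-scoring sentence."""
    hypotheses = {("<s>",): 0.0}
    for step in acoustic:
        extended = {}
        for history, score in hypotheses.items():
            for word, ac_score in step.items():
                lm = bigram_logprob.get((history[-1], word), -10.0)  # penalise unseen pairs
                extended[history + (word,)] = score + ac_score + lm_weight * lm
        # keep every extension; a real decoder would prune to a beam here
        hypotheses = extended
    best_path, best_score = max(hypotheses.items(), key=lambda kv: kv[1])
    return " ".join(best_path[1:]), best_score

print(best_sentence(acoustic, bigram_logprob))
# ('recognise speech', -3.2): beats 'wreck a nice beach' once the language model weighs in
```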
This process is applied to every segment of the recording to produce, in the end, the complete transcription.
At the end of this automated process, the document is re-read by our teams, like we do for any other Ubiqus Spain document: On top of verifying the content as a whole, the proofreader will also ensure the speech has been correctly attributed. | <urn:uuid:6a420b60-1dfb-444b-a72b-7b2d2bb1646d> | {
"date": "2020-01-22T06:20:28",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9409770965576172,
"score": 3.15625,
"token_count": 453,
"url": "https://www.ubiqus.es/en/automatic-speech-recognition/"
} |
Make sure your children get plenty of liquids to stay healthy and active this summer, and help them develop good hydration habits for a lifetime.
By Debra Wittrup
Children are much more prone to dehydration than adults because their bodies don't cool down as efficiently, and they are never more at risk than during the heat of summer. The danger arises when fluids are leaving the body through sweating faster than they are being replaced, and severe dehydration can be life-threatening. Taking a few simple precautions will protect your child and allow him to enjoy the summer fun safely.
Perhaps the best way to keep your child hydrated is to get her used to drinking liquids regularly. Offer healthy beverages at every meal and with snacks. And if you know a particularly busy or strenuous day is coming up in your child's schedule, add some extra hydration in her first meal of the day or even the night before. The American College of Sports Medicine recommends drinking the equivalent of a standard bottle of water (16.9 oz.) about 2 hours before vigorous exercise.
Wet Their Whistles
Don't wait until your child is thirsty to offer refreshment; by that time he is already dehydrated. Three studies by the University of Connecticut found that more than half of the children at sports camps were significantly dehydrated despite the availability of water and sports drinks and the encouragement to drink liquids. Get your child in the habit early on by scheduling frequent beverage breaks during activity, about every 20 minutes or so in hot weather. If possible, take all hydration breaks in a shady spot.
Banned from the Sport
When choosing drinks for kids, avoid those that have caffeine, such as iced tea or many sodas. As a diuretic, caffeine can contribute to the dehydration process by increasing fluid loss. In addition, as a stimulant, it can depress the symptoms of dehydration. Beverages such as soda or juice-flavored drinks might taste refreshing, but the high sugar content is unhealthy for many reasons and should be avoided for hydration except as a last resort.
Many fruits are excellent sources of water as well as being a nutritious snack. Offer fruits often during playtime and throw them in the cooler for after-game snacks. Fruit juice has a higher concentration of sugar than whole fruit and because of that, it's not the best beverage choice for hydration during strenuous exercise. But the AAP (American Academy of Pediatrics) does see a place for it among your options: for activity periods longer than three hours, the AAP suggests a drink of half water and half 100-percent juice.
Eat Your Veggies
Always include high-water-content foods in your daily meal planning to help your family stay well-hydrated at all times so strenuous activities don't find them in a deficit. In addition to water, fruit, fruit juice, and many vegetables are excellent sources of hydration. Clear soup, especially when made with vegetables, offers an ideal way to get liquid into the diet along with good nutrition.
As they get older, you won't be able to follow your kids everywhere to ensure they're getting the liquids they need. But you can help them to understand the importance of hydrating frequently for good health. Instill in them early on the habits of frequent beverage breaks and choosing liquids wisely. Help those good habits along by always packing good sources of hydration into their lunchboxes or backpacks as not-so-subtle reminders to keep up the good work! | <urn:uuid:d5eed312-94bc-45d4-9615-03ad15d65332> | {
"date": "2020-01-22T06:09:40",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9658851623535156,
"score": 2.6875,
"token_count": 710,
"url": "https://www.vancouvermartialarts.com/blog/keeping-kids-hydrated"
} |
The U.S. health-care system is the most expensive in the world, costing about $1 trillion more per year than the next-most-expensive system — Switzerland’s. That means U.S. households pay an extra $8,000 per year, compared with what Swiss families pay. Case and Deaton view this extra cost as a “poll tax,” meaning it is levied on every individual regardless of their ability to pay. (Most Americans think of a poll tax as money people once had to pay to register to vote, but “polle” was an archaic German word for “head.” The idea behind a poll tax is that it falls on every head.)
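As a rough back-of-envelope check of how the per-household figure follows from the total, consider the sketch below; the household count is our own assumption (roughly the number of U.S. households around 2019), not a figure taken from Case and Deaton's work.

```python
# Back-of-envelope check of the per-household figure quoted above.
# The household count is an assumption, not a number from the economists.
excess_spending = 1.0e12        # ~$1 trillion more per year than Switzerland
us_households = 128_000_000     # assumed ~128 million U.S. households

print(f"${excess_spending / us_households:,.0f} per household per year")
# -> roughly $7,800, i.e. about $8,000
```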
Despite paying $8,000 more a year than anyone else, American families do not have better health outcomes, the economists argue. Life expectancy in the United States is lower than in Europe.
“We can brag we have the most expensive health care. We can also now brag that it delivers the worst health of any rich country,” Case said.
Case and Deaton, a Nobel Prize winner in economics, made the critical remarks about U.S. health care during a talk at the American Economic Association’s annual meeting, where thousands of economists gather to discuss the health of the U.S. economy and their latest research on what’s working and what’s not.
The two economists have risen to prominence in recent years for their work on America’s “deaths of despair.” They discovered Americans between the ages of 25 and 64 have been committing suicide, overdosing on opioids or dying from alcohol-related problems like liver disease at skyrocketing rates since 2000. These “deaths of despair” have been especially large among white Americans without college degrees as job options have rapidly declined for them.
Their forthcoming book, “Deaths of Despair and the Future of Capitalism,” includes a scathing chapter examining how the U.S. health-care system has played a key role in these deaths. The authors call out pharmaceutical companies, hospitals, device manufacturers and doctors for their roles in driving up costs and creating the opioid epidemic.
In the research looking at the taxing nature of the U.S. health-care system compared with others, Deaton is especially critical of U.S. doctors, pointing out that 16 percent of people in the top 1 percent of income earners are physicians, according to research by Williams College professor Jon Bakija and others.
“We have half as many physicians per head as most European countries, yet they get paid two times as much, on average,” Deaton said in an interview on the sidelines of the AEA conference. “Physicians are a giant rent-seeking conspiracy that’s taking money away from the rest of us, and yet everybody loves physicians. You can’t touch them.”
As calls grow among the 2020 presidential candidates to overhaul America’s health-care system, Case and Deaton have been careful not to endorse a particular policy.
“It’s the waste that we would really like to see disappear,” Deaton said.
After looking at other health systems around the world that deliver better health outcomes, the academics say it’s clear that two things need to happen in the United States: Everyone needs to be in the health system (via insurance or a government-run system like Medicare-for-all), and there must be cost controls, including price caps on drugs and government decisions not to cover some procedures.
The economists say they understand it will be difficult to alter the health-care system, with so many powerful interests lobbying to keep it intact. They pointed to the practice of “surprise billing,” where someone is taken to a hospital — even an “in network” hospital covered by their insurance — but they end up getting a large bill because a doctor or specialist who sees them at the hospital might be considered out of network.
Surprise billing has been widely criticized by people across the political spectrum, yet a bipartisan push in Congress to curb it was killed at the end of last year after lobbying pressure.
“We believe in capitalism, and we think it needs to be put back on the rails,” Case said. | <urn:uuid:e66b3e28-7430-42a3-87b2-5a13cb009e39> | {
"date": "2020-01-22T05:01:17",
"dump": "CC-MAIN-2020-05",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606696.26/warc/CC-MAIN-20200122042145-20200122071145-00216.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9671984910964966,
"score": 2.578125,
"token_count": 898,
"url": "https://www.washingtonpost.com/business/2020/01/07/every-american-family-basically-pays-an-poll-tax-under-us-health-system-top-economists-say/"
} |