Dataset columns: id (string, 47 characters), text (string, 176 to 666k characters), safety_metadata (dict), metadata (dict).
<urn:uuid:f364a9db-153a-44ef-aca7-2bcacdb220c3>
In a way, diving can be compared to flying an airplane: in flying, the hardest parts are taking off and landing. This is true for diving as well; the ascents and descents are often what cause people the most trouble. Once you’re at depth and neutrally buoyant, it’s “easy peasy.” It’s getting to that point and making a nice controlled ascent to your safety stop and the surface that can be a bit tricky. Making a controlled ascent and descent can add to your overall enjoyment of the dive as well as help prevent barotrauma (pressure-related injury).

After a controlled descent you arrive at depth already neutral, calm, relaxed, and ready to enjoy the dive. This is vastly different from the overweighted, negative free fall many divers make: struggling to equalize, landing on the bottom, damaging the reef or wreck, and ruining the visibility. This is a bad way to start a dive, and it takes time to regain control and enjoy the remaining portion of the dive. It is also often the cause of pressure-related injuries to the ears and sinuses, because the diver is descending faster than they can equalize those air spaces. Making a controlled ascent is important for similar reasons. By making a nice controlled ascent to your safety stop, you arrive neutrally buoyant and can hover effortlessly while continuing to enjoy the marine environment around you. You also avoid injury by allowing time for the expanding gas in your air spaces to escape safely. All too often we see divers “float” up and blow right past their safety stop, ending up at the surface wondering what the heck happened. We are going to take a look at how to control your ascent and descent by using the art of buoyancy control.

In order to control your descent, you should remain neutrally buoyant. This seems counterintuitive at first. “If I’m neutral, how am I going to go down?” Well, if you are properly weighted, you should really only be descending when you exhale all the gas out of your lungs, and you should only be descending a couple of feet at a time. We were all taught in our open water course that, to be properly weighted with an empty BCD, you should float at eye level while holding a full breath. When you exhale, you should become slightly negative and descend; when you take another breath, you should become neutral again. As you descend in this controlled manner, you will need to add small amounts of air to your BCD every few feet to counter the compression of your wetsuit/drysuit and BCD and remain neutral. This allows you to easily stop your descent to equalize, acclimate to a thermocline, make contact with your buddy, watch that dolphin swimming by, or avoid landing on the bottom. Now you will arrive at your target depth already neutrally buoyant, air spaces comfortably equalized, and ready to look for that dolphin you spotted on your way down!

Ascending should be pretty much the same process, just in reverse. You should already be neutrally buoyant, so to ascend you should only need to swim lightly towards the surface. Now it’s extremely important to monitor your dive computer’s ascent rate indicator to make sure you are going up at a safe rate, and to vent the expanding gas out of your BCD so you do not become positively buoyant. If you become positive, you can find yourself on an uncontrolled ascent to the surface. As you float up, the air in your BCD can begin to expand faster than you can dump it, which makes you even more buoyant, so you rise faster and the air in your BCD expands even faster.
This is a dangerous situation which can result in barotrauma or even decompression sickness. To avoid this potentially hazardous situation, pause every few feet on your way up to make sure you are not starting to float towards the surface. This way, once you reach your safety stop, you can easily hold your position, hovering weightlessly and showing off your excellent buoyancy skills to that dolphin that decided to come back to play. Your safety stop is an excellent place to make a final weight check. If you are weighted properly, you should have almost NO air in your BCD at the end of the dive and be perfectly neutral. If you are struggling to stay down, you may need another pound or two; if you have to add air to your BCD to remain neutral, you can probably take a little weight off for the next dive.

Like everything else, practice makes better. Luckily, on every dive you have to make at least one descent and ascent, so you may as well use this time to practice. We’ve found that using a line or a sloping bottom as a visual reference is a great way to get your ascents and descents dialed in, and suggest using them to hone your skills whenever available. Even better than practicing on your own is having an instructor help you through it with an SDI Advanced Buoyancy course. Find an instructor near you here! Contact SDI, TDI, and ERDI: if you would like more information, please contact our World Headquarters or your Regional Office.
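The physics behind all of these adjustments is Boyle's law: at a constant temperature, a fixed quantity of gas shrinks as ambient pressure rises and expands as it falls. The short Python sketch below is an illustration added for clarity rather than part of the original article; it assumes the usual rule-of-thumb of roughly one additional atmosphere of pressure per 10 m of seawater, and shows why small additions of air are needed on descent and why unvented gas expands so quickly near the surface.

```python
# Illustrative sketch only (not from the article): Boyle's law applied to the
# gas in a BCD or exposure suit. Assumes roughly 1 extra atmosphere of pressure
# per 10 m of seawater depth, the common rule-of-thumb approximation.

def pressure_atm(depth_m: float) -> float:
    """Approximate absolute pressure (atm) at a given seawater depth."""
    return 1.0 + depth_m / 10.0

def volume_at_depth(surface_volume_l: float, depth_m: float) -> float:
    """Boyle's law, P1*V1 = P2*V2: the volume the same gas occupies at depth."""
    return surface_volume_l * pressure_atm(0.0) / pressure_atm(depth_m)

if __name__ == "__main__":
    # One litre of gas at the surface is halved at 10 m and reduced to a third
    # at 20 m, which is why divers keep adding small amounts of air on descent.
    for depth in (0.0, 10.0, 20.0, 30.0):
        print(f"{depth:>4.0f} m: {volume_at_depth(1.0, depth):.2f} L")

    # Conversely, a litre of gas added at 30 m expands to about 4 litres by the
    # surface: the runaway expansion the article warns about on ascent.
    expanded = 1.0 * pressure_atm(30.0) / pressure_atm(0.0)
    print(f"1.00 L at 30 m -> {expanded:.1f} L at the surface")
```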
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"educational content with no unsafe elements\", \"score\": 0}\n```\n\nThe text provided is entirely focused on diving techniques and safety measures, specifically regarding buoyancy control during ascents and descents. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and educational environments.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThis text does not contain any unsafe or inappropriate content, and it is clearly promoting contact information for organizations without any harmful or inappropriate themes." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1211-0", "llm_safety_score": 0, "reason": "educational content with no unsafe elements" }, { "custom_id": "1211-1", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.tdisdi.com/the-art-of-buoyancy/", "date": "2014-07-30T17:11:52", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510270877.35/warc/CC-MAIN-20140728011750-00374-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9495922923, "token_count": 1083, "score": 2.640625, "int_score": 3 }
<urn:uuid:909bfd65-6b09-45a8-8c4d-025246bb3ab5>
Fire season is officially open. Fire is natural in the foothills of Southern California; many of the plants that have been growing here for the last few thousand years need fire to release their seeds from dormancy. Fire also facilitates cycling of essential nutrients as the ash formed is deposited on the soil surface. Clearing out the dense cover also allows light into formerly dark areas so that dormant seeds may sprout. Many shrubby species will soon resprout from underground stems, and it won’t take many years to have a dense thicket again: a source of fuel that is just ripe for the next spark and puff of wind to create an inferno. These native plants are uniquely adapted to such catastrophic events by having an enlarged underground burl that can quickly form new shoots using stored nutrients. Their deep roots are able to survive the intense temperatures that occur during a fire (even one and a half inches below the surface, the soil may reach 300 degrees Fahrenheit). These stalwarts of the chaparral will, within just a few years, dominate the scenery once again.

The proclivity for this fiery renewal of the native chaparral and the human desire to occupy available space creates a recipe for disaster. As houses are built deeper and deeper into the native vegetation, it is only a matter of time until wildfire strikes near enough to pose a serious threat. Fire will travel wherever there is something to burn. Designing a garden in the wildland/urban interface means choosing plants that don’t result in thickets of secondary, woody growth. Softer perennials, and especially those that don’t have aromatic foliage (a clue that they are full of flammable oils and resins), are obvious choices. Isolate groups of plants in “islands” surrounded by “seas” of groundcovers so that they don’t create “ladders” for the fire to follow.

Other rules to remember are more or less common sense. Trim overhanging branches and keep them at least 15 feet away from roadways. Rake up dead leaves, bark, and twigs, and remove them from roofs and gutters. Be sure to keep palms that hold their old fronds trimmed up, as well. Don’t locate the firewood pile near any structures, and keep vegetation mowed low around it. Trim vegetation while it is still somewhat green, and beware of stones hiding in the grasses that, in collision with a mower or tractor blade, can cause sparks. It’s not a bad idea to have a hose ready nearby to dampen things down and, heaven forbid, get the upper hand quickly should these precautions fail. One last caution: make sure mowers, string-trimmers, and chainsaws are in good condition to avoid electrical shorts and gas leaks that can fuel the tiniest spark.

If choosing plants that are attractive and less likely to burn seems a daunting task, there is one great place to go for inspiration. Visit the Firescape Demonstration Garden on the corner of Mission Ridge Road and Stanwood Drive, right across the street from the firehouse. Everything from groundcovers to flowering perennials is planted here, and trees and shrubs are trimmed to the recommended guidelines. Remember, 95 percent of all wildfires are caused by people. A little caution and advance preparation can increase the odds for a defensible garden that may even provide a measure of protection against wildfire.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"educational content with no unsafe elements\", \"score\": 0}\n```\n\nThe text discusses the natural role of fire in the ecosystem and provides advice on how to create a fire-resistant garden in the wildland/urban interface. There are no unsafe elements, inappropriate language, or harmful themes present." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1212-0", "llm_safety_score": 0, "reason": "educational content with no unsafe elements" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://independent.com/news/2012/may/29/fire-prevention/", "date": "2014-08-01T02:31:42", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273874.36/warc/CC-MAIN-20140728011753-00422-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9488863945, "token_count": 706, "score": 3.546875, "int_score": 4 }
<urn:uuid:f6df6ffd-bbbf-4e08-97f2-5a38a501167a>
Canadarago Lake netting (2008)
The 2008 biennial netting of the 1,900 acre Canadarago Lake was completed in September. This sampling effort involves setting two 150 ft long monofilament gill nets overnight each month from June through September. The objective of this study is to monitor the abundance of walleye and yellow perch. Although the catch of 12.8 walleye/net indicates an abundant population, it represents a 41% decline from the record 21.6 fish/net recorded in 2003. The decline may be related to the growing alewife population in the lake. Since alewife were first discovered in the lake in 1999, the alewife catch has increased 5,633% to the 16.9 alewife/net recorded this year, an increase of 282% since 2006. Additional data analysis will be required to determine whether the growing alewife population is responsible for the decline in walleye abundance since 2003. If the alewife population continues to expand, it is expected that a walleye stocking program will need to be initiated to maintain the high quality walleye fishery in this lake. The catch of 99.3 yellow perch/net is the third highest catch on record for the 14 nettings conducted since 1983. The abundant population suggests that the 2008/09 winter ice fishery for this species should be excellent.
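For readers who want to check the figures, the reported percentage changes follow from simple catch-per-net arithmetic. The short Python sketch below is illustrative only; the 1999 and 2006 alewife catch rates are back-calculated from the stated percentages rather than taken from the survey itself.

```python
# Illustrative arithmetic for the catch-per-net figures quoted above. The 1999
# and 2006 alewife values are implied by the stated percentage increases, not
# read directly from the survey report.

def percent_change(old: float, new: float) -> float:
    """Percent change from an earlier catch rate to a later one."""
    return (new - old) / old * 100.0

walleye_2003, walleye_2008 = 21.6, 12.8
print(f"Walleye 2003 -> 2008: {percent_change(walleye_2003, walleye_2008):.0f}%")  # about -41%

alewife_2008 = 16.9
implied_1999 = alewife_2008 / (1 + 5633 / 100)  # ~0.3 alewife/net if the 5,633% rise holds
implied_2006 = alewife_2008 / (1 + 282 / 100)   # ~4.4 alewife/net if the 282% rise holds
print(f"Implied 1999 alewife catch: {implied_1999:.2f}/net")
print(f"Implied 2006 alewife catch: {implied_2006:.2f}/net")
```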
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"No unsafe content found\", \"score\": 0}\n```\n\nThe provided text is a detailed report on a scientific study related to fish populations in Canadarago Lake. It contains no unsafe, inappropriate, or harmful content. The language is respectful and the themes are related to ecological monitoring and conservation efforts, which are suitable for public and educational use." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1213-0", "llm_safety_score": 0, "reason": "No unsafe content found" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.dec.ny.gov/outdoor/65014.html", "date": "2014-08-01T01:46:18", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273874.36/warc/CC-MAIN-20140728011753-00422-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9244913459, "token_count": 281, "score": 2.546875, "int_score": 3 }
<urn:uuid:bd24caf8-139a-4eca-aba6-9e23b755cc96>
Unfavorable weather conditions in Kauai, Hawaii, forced the cancellation of a six-day launch window for the Low-Density Supersonic Decelerator (LDSD), NASA’s saucer-shaped craft that would perform a landing test amid conditions similar to Mars’ atmosphere. High wind mixed with heavy rain has pummeled the Pacific Missile Range Facility for about two weeks, making it too difficult to properly launch the LDSD, a Rose Bowl-sized craft that was designed by the Jet Propulsion Laboratory and NASA. LDSD was supposed to blast to 180,000 feet using a helium balloon and accelerator and, inside a wind tunnel, descend back down to Earth in conditions similar to Mars’ atmosphere. However, the weather has made it impossible. “NASA will research range availability for the coming weeks and the costs associated with extending the test flight period for launching LDSD’s high-altitude balloon and test vehicle, with programmatic decisions required to proceed,” a NASA statement reads. A decision regarding the project’s future will likely be made Thursday morning, according to a NASA spokeswoman. Kauai and the hinterlands of Australia were the only two viable locations to safely launch the craft. LDSD, if it ever does blast off, could be a major step forward in space landing equipment. No advancements have been made since 1972, not even for landing the Mars Curiosity rover back in 2012.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n*Explanation:*\nThe text discusses weather conditions affecting a scientific launch and does not contain any unsafe, inappropriate, or harmful content. It is entirely suitable for public use and educational environments." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1214-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.sbsun.com/science/20140611/nasas-flying-saucer-launch-window-rained-out-in-hawaii", "date": "2014-08-01T02:15:11", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273874.36/warc/CC-MAIN-20140728011753-00422-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9425190091, "token_count": 295, "score": 2.625, "int_score": 3 }
<urn:uuid:9b238753-c448-42a5-8c4b-a9c4c2a393d4>
COMMITTEE ON AGRICULTURE
Rome, 25-29 January 1999, Red Room
Item 8 of the Provisional Agenda
1. Although as yet only a small industry, organic agriculture is of growing importance in the agriculture sector of a number of countries, irrespective of their stage of development. For example, in several developed countries organic agriculture has come to represent a significant portion of the food system (10 percent in Austria, 7.8 percent in Switzerland)1 and many others are experiencing growth rates that exceed 20 percent annually (e.g. USA, France, Japan, Singapore)2. Some of the developing countries have small domestic organic markets (e.g. Egypt) and a few have begun to seize the lucrative export opportunities presented by organic agriculture (e.g. exports of Mexican coffee, Ugandan cotton). 2. Though only a small percentage of farmers are expected to become organic producers, consumer demand for organically produced food and fibre products provides new market opportunities for farmers and businesses around the world. It also presents new challenges for FAO. For many years, and with great success, the private sector alone has developed the concepts and markets for organic products. However, the surge in consumer interest has created new interest from the public sector, and developing countries are particularly in need of good information. Member countries are requesting FAO assistance as they seek to determine the potential of such markets in specific areas. Governments need to know the potential of organic agriculture to contribute to sustainability in order to direct research and extension efforts. Countries also seek FAO's assistance in deciphering the multitude of rules various traders expect to be followed; increasing international trade in organic products has placed FAO in the forefront of efforts to achieve greater harmony in organic standards. 3. The World Food Summit Plan of Action recognized the importance of "appropriate input technologies, farming techniques and other sustainable methods, such as organic farming, to assist farming operations to be profitable, with the goal of reducing environmental degradation, while creating financial resources within the farming operation." This paper discusses the opportunities and constraints of organic agriculture and the public policies influencing the adoption of organic agricultural practices. The paper proposes a coherent and cross-sectoral FAO programme in organic agriculture with four distinct functions, all aimed at enabling member countries to make informed choices about organic management. COAG is asked to endorse FAO's intention to develop such a coherent programme. 4. An organic label indicates to the consumer that a product was produced using certain production methods. In other words, organic is a process claim rather than a product claim. An apple produced by practices approved for organic production may very well be identical to an apple produced under other agricultural management regimes. 5. Several countries and a multitude of private certification organizations have defined organic agriculture. In the past, differences in these definitions were significant, but the demand for consistency by multinational traders has led to great uniformity. The International Federation of Organic Agriculture Movements (IFOAM), a non-governmental organization that networks internationally and promotes organic agriculture, has established guidelines that have been widely adopted for organic production and processing. 6.
Most recently, the Codex Committee on Food Labelling has debated "Draft Guidelines for the Production, Processing, Labelling and Marketing of Organically Produced Foods"; adoption of a single definition for organic agriculture by the Codex Alimentarius Commission is expected at its next meeting in June, 1999. According to the proposed Codex definition, "organic agriculture is a holistic production management system which promotes and enhances agro-ecosystem health, including biodiversity, biological cycles, and soil biological activity. It emphasises the use of management practices in preference to the use of off-farm inputs, taking into account that regional conditions require locally adapted systems. This is accomplished by using, where possible, agronomic, biological, and mechanical methods, as opposed to using synthetic materials, to fulfil any specific function within the system." 7. Organic agriculture is one of several approaches to sustainable agriculture and many of the techniques used (e.g. inter-cropping, rotation of crops, double-digging, mulching, integration of crops and livestock) are practised under various agricultural systems. What makes organic agriculture unique, as regulated under various laws and certification programmes, is that: (1) almost all synthetic inputs are prohibited,3 and (2) "soil building" crop rotations are mandated.4 The basic rules of organic production are that natural inputs5 are approved and synthetic inputs are prohibited. But there are exceptions in both cases. Certain natural inputs determined by the various certification programmes to be harmful to human health or the environment are prohibited (e.g. arsenic). As well, certain synthetic inputs determined to be essential and consistent with organic farming philosophy are allowed (e.g. insect pheromones). Lists of specific approved synthetic inputs and prohibited natural inputs are maintained by all certification programmes and such a list is under negotiation in Codex. Many certification programmes require additional environmental protection measures in addition to these two requirements. While many farmers in the developing world do not use synthetic inputs, this alone is not sufficient to classify their operations as organic. 8. The demand for organic products has created new export opportunities for the developing world. While some consumers express a preference for locally-grown organic foods, the demand for a variety of foods year-round makes it impossible for any country to source organic food entirely within its own borders. As a result, many developing countries have begun to export organic products successfully (e.g. tropical fruit to the European baby food industry, Zimbabwean herbs to South Africa, six African nations export cotton to the European Community). Typically, organic exports are sold at impressive premiums, often at prices 20 percent higher than identical products produced on non-organic farms. The ultimate profitability of organic farms varies, however, and few studies have assessed the long-term potential for such market premiums. Nevertheless, under the right circumstances the market returns from organic agriculture can potentially contribute to local food security by increasing family incomes. 9. Entering this lucrative market is not easy, however.
Farmers are denied access to developed country organic markets for two to three years after beginning organic management since such countries will not certify land and livestock as organic before that time, arguing that it is necessary for the purging of chemical residues. Under the Draft Codex guidelines, however, products produced on land under organic management for at least one year but less than the two- to three-year standard can be sold as "transitional organic", although few markets have yet developed for such products. 10. In most cases farmers and post-harvest businesses seeking to sell their products in developed countries must hire an organic certification organization to annually inspect and confirm that these farms and businesses adhere to the organic standards established by various trading partners. This service can be expensive, although the cost varies in relation to farm size, volume of production, and the efficiency of the certification organization (e.g. IFOAM certification costs a maximum of 5 percent of sales value, but where local certification organizations exist it reduces to 2 percent of sales value). Few developing countries have certification organizations within their borders, and even when sufficient resources are available to pay for certification, farmers often lack the information to find credible inspectors. 11. While most developing country traders have focused on export markets in the developed world, domestic market opportunities for organic food or eco-food may also be exploited. In China, for example, there is a growing market for "green food" which, according to government grading standards, is produced without certain pesticides and fertilizers and with biological methods. Chinese farmers also produce organic food for export (e.g. tea to the Netherlands, soybeans to Japan). 12. Whether the intent is to sell organic products domestically or abroad, reliable market information is difficult to obtain. There is virtually no systematic production or market survey data being collected with which to assess the rate and pattern of organic market growth. In particular, no projections for the market in the developing world have been made, nor have markets systematically been identified for developing country exports. Estimates of the public's willingness to pay premiums, the impact of regional attitudes and tastes, and the incidence of market fraud have not been undertaken. B. FARM PRODUCTIVITY 13. Farmers will probably experience some loss in yields when converting their operations to organic production. There is a period of time between the discarding of synthetic inputs and sufficient biological activity being restored to the land (e.g. growth in beneficial insect populations, nitrogen fixation from legumes) during which pest suppression and fertility problems are typical. The degree of yield loss varies, however, and depends on factors such as the inherent biological attributes of the farm, farmer expertise, and the extent to which synthetic inputs were used under the previous management system. Where soil fertility is low and biological processes have been seriously disrupted, it may take years to restore the ecosystem to the point where organic production is possible. In such cases other sustainable approaches, which allow judicious use of synthetic chemicals, may be more suitable start-up solutions.
One strategy to survive the difficult transition period involves converting farms to organic production in partial instalments so that the entire operation is not at risk. 14. Most studies find that organic agriculture requires significantly greater labour input than conventional farming. This is especially true in areas of low ecological potential. However, when labour is not a constraint, organic agriculture can benefit underemployed labour in rural communities. Furthermore, the diversification of crops typically found on organic farms, with their various planting and harvesting schedules, may distribute labour demand more evenly, which could help stabilize employment. Land tenure is also critical to the adoption of organic agriculture. It is highly unlikely that tenant farmers would invest the necessary labour and sustain the difficult conversion period without some guarantee of access to the land in later years when the benefits of organic production are attainable. 15. Soil-building rotations need to be designed both from the economic and the technical points of view - uses must be identified for all the crop and livestock products produced. As in all agricultural systems, diversity in production increases income-generating opportunities and can, as in the case of fruits, supply essential health protecting minerals and vitamins to the family diet. It also spreads the risks of failure over a wide range of crops. It is possible that, even on those farms where organic crop yields are lower than those produced under systems which use high levels of inputs, the overall economic yields of the farm will be competitive since organic systems benefit from market premiums and sometimes lowered input costs. 16. The insights generated by organic farmers in their search for site-specific production strategies can be of great benefit to non-organic farmers interested in expanding their management options. However, organic farmers still face huge uncertainties. A lack of information is an obstacle to organic conversion (e.g. surveys find that 63 percent of sub-Saharan African farmers and 73 percent of North American (US and Canada) organic farmers cite a lack of knowledge as the greatest barrier to adoption). Extension personnel rarely receive adequate training in organic methods and studies have shown that they sometimes discourage farmers from converting. Furthermore, institutional support in developing countries is scarce. Professional institutions with a capacity to assist farmers throughout the production, post-production and marketing processes are non-existent in many developing countries. While there are helpful research results that could immediately be extended to farmers, much more is needed. In 1990, FAO sponsored a conference at which organic research needs were identified (e.g. economics of stockless farms, animal husbandry, nitrogen cycling); however, these challenges have largely gone unmet. C. ENVIRONMENTAL IMPACTS AND SUSTAINABILITY 17. The explicit goal of organic agriculture is to contribute to the enhancement of sustainability. Nevertheless, negative impacts may occur and organic agriculture is not an exclusive method for sustainable farming. The soil and water protection and conservation techniques of sustainable agriculture used to combat erosion, compaction, salinization and other forms of degradation are evident in organic farming.
The use of crop rotations, organic manure and mulches improves soil structure and encourages the development of a vigorous population of soil micro-organisms. Mixed and relay cropping provide a more continuous soil cover and thus a shorter period when the soil is fully exposed to the erosive power of the rain, wind and sun. Terracing to conserve moisture and soil is used in appropriate situations, and particular attention is paid in irrigated areas to on-farm water management. Properly managed organic farming reduces or eliminates water pollution and helps conserve water and soil on the farm (although improper use of manure can seriously pollute water). A few developed countries compel or subsidise farmers to use organic techniques as a solution to water pollution problems (e.g. Germany, France). 18. Organic farmers rely on natural pest controls (e.g. biological control, plants with pest control properties) rather than synthetic pesticides which, when misused, are known to kill beneficial organisms (e.g. natural parasites of pests, bees, earthworms), cause pest resistance, and often pollute water and land. Reduction in the use of toxic synthetic pesticides, which the World Health Organization (WHO) estimates to poison three million people each year, should lead to improved health of farm families. 19. Organic farmers aim to make the maximum use of the recyclable fertility in on-farm crop residues (straws, stovers and other non-edible parts) either directly as compost and mulch or through livestock as farmyard manure. Eliminating the use of synthetic nitrogenous fertilizer greatly lowers the risks of nitrogen contamination of water. Crop rotation is a widely used method of fertility maintenance and pest and disease control, which is used in large- and small-scale farming in both developed and developing countries, especially under intensification. Fodder legumes are well-known fertility-building crops and are grown on vast areas in sub-tropical Asia and in semi-arid regions for the dual purpose of feeding livestock and adding nitrogen to the farm fertility cycle. Grain legumes may also produce a reasonable crop without nitrogenous fertilizer. Leguminous crops in rotations add various amounts of nitrogen to the overall farm system through biological fixation; other nitrogen-fixing plants such as Azolla may also be used. 20. Biological nitrogen fixation is a powerful technique but it often requires some addition of minerals to the soil, especially phosphorus. Most certification programmes restrict the use of mineral fertilizers which may be necessary to supplement the organic manure produced on the farm. Natural and organic fertilizers from outside the farm are used (e.g. rock phosphate, potash, guano, seaweed, slaughterhouse by-products, ground limestone, wood-ash). While most certification programmes prohibit the use of sewage sludge and night-soil, they are still used in some places. However, sludge may contain many contaminants including heavy metals which can have a deleterious and cumulative effect on the soil, while night-soil contains human pathogens and must be carefully composted before use. 21. Crop rotations encourage a diversity of food crops, fodder and under-utilized plants; this, in addition to improving overall farm production and fertility, may assist the on-farm conservation of plant genetic resources. Integrating livestock into the system adds income through organic meat, eggs and dairy products, as well as draught animal power.
Tree crops and on-farm forestry integrated into the system provide shade and windbreaks while providing food, income, fuel and wood. Integrated agri-aquaculture may also be found within diverse organic agricultural systems. Economic objectives are not the only motivation of organic farmers; their intent is often to optimize land, animal, and plant interactions, preserve natural nutrient and energy flows, and enhance biodiversity, all of which contribute to the overall objective of sustainable agriculture to preserve natural resources and ecosystems for future generations. A. PUBLIC-PRIVATE PARTNERSHIPS 22. The environmental and economic benefits of organic agriculture have captured the attention of several countries; however, only a small number have enacted policies to assist the organic sector. Most assistance has developed in the private sector, especially by NGOs. This private sector infrastructure is not only recognized by countries, but also encouraged. 23. Farmers and consumers in almost all countries rely on a system of private self-organized producer organizations and independent certifiers which have, over the years, provided an economically-efficient mechanism of certification. The degree to which private organizations are significant is indicated by the fact that IFOAM has some 650 individual and institutional members in over 100 countries, 75 percent of which are in developing countries. However, the network of private certifiers needs to expand; as mentioned earlier, many developing countries still lack certification organizations. 24. Organic programmes, as well as the Draft Codex guidelines, maintain evolving "input lists" of acceptable inputs for organic production, processing aids, and ingredients. Countries, in consultation with civil society organizations, can propose their own input lists to Codex and negotiations will probably take place over what ultimately is acceptable in the international marketplace. Many developing countries have not drawn up material lists and generally lack the resources and training to effectively participate in international standard-setting processes. 25. Engaging in international trade requires quantities of goods well beyond the production capacity of individual small organic farmers. NGOs have supported farmers in their efforts to establish appropriate organizations for collecting, processing and marketing organic products. Countries could support NGOs in building organizations which play a key role in disseminating best practices and ensuring that farmers use adequate production and post-production techniques. B. TECHNICAL SUPPORT 26. Northern developed countries have invested by far the most in organic agriculture research but, even so, the contribution is minimal compared to overall agricultural research (e.g. less than 0.01 percent of the US Department of Agriculture research budget is directed to organic agriculture). The lack of extensive formal organic research, combined with the highly site-specific nature of organic agriculture, suggests that it would be most advantageous for farmers themselves to participate in locally-based, applied field research. Experience with FAO-initiated Integrated Pest Management (IPM) Farmer Field Schools and community forestry projects has shown that farmers, whether owners or tenants, large or small, can practice good scientific methods if they are given orientation and technical support. Countries could also undertake market research.
The USA, for example, analysed the market for organic goods in more than 20 countries, with a focus on opportunities for US exports. 27. Draft Codex guidelines, as well as most national and private organic standards, disallow genetically engineered organisms (GEOs), including transgenic crops, in organic production. By providing consumers with a non-GEO-produced alternative, organic agriculture has secured an important market, at the same time relieving much of the pressure to impose outright bans on GEO-produced food. While current biotechnology applications have been rejected for organic production, it is possible that future scientific developments and further discussions may produce technologies acceptable to consumers (e.g. the value of tissue culture in producing disease-free planting materials). C. DIRECT ASSISTANCE 28. Interest in environmental protection and the preservation of small family farms has led developed European countries to subsidize organic production to varying degrees. The subsidies can provide significant encouragement (e.g. UK provides up to £450 per ha during the conversion period for designated land areas; European Community subsidies have helped 15 percent of Irish organic farmers develop dairy operations). Interested communities would have to be advised about organizational aspects and on procuring and commercializing the necessary inputs as well as on marketing any surplus produce (e.g. seed and planting material of adapted varieties, suitable livestock genitors). 29. Consumer confidence in the integrity of organic claims is essential if goods are to be sold at a premium. Because organic products cannot be distinguished from conventional products, consumers depend entirely on certifiers to truthfully distinguish organic from non-organic goods. Countries could better enforce organic standards by punishing those who engage in fraudulent activities as well as undertaking systematic tracking and measuring of fraud and its impact on the market. 30. FAO has undertaken several activities specific to organic agriculture, most of which are in the early stages of development. An FAO Library bibliographic search identified 49 FAO reports on organic agriculture. The FAO Regional Office for Europe has been supporting meetings of researchers working on organic agriculture, sponsored a conference in 1990 on Biological Farming in Europe, and, most recently, an expert Round Table in 1997 which established a working group on Research Methodologies in Organic Farming under the European System of Co-operative Research Networks in Agriculture (ESCORNA). The Regional Office for Asia and the Pacific has co-ordinated an Asian Bio and Organic Fertilizer Network that for more than a decade has issued annual bulletins on organic recycling in the region. Software has been developed by FAO's Land and Water Development Division to facilitate collecting data on the use of organic nutrients. The Codex Committee on Food Labelling is currently considering Draft Guidelines for the Production, Processing, Marketing and Labelling of Organically Produced Foods. Perspectives and guidelines for post-harvest handling of organic fruits, vegetables, aromatics, and spices in developing countries are under development by the Agroindustries and Post Harvest Management Service.
A focal point on organic agriculture was nominated within the Environment and Natural Resources Service. After obtaining formal liaison status within FAO in 1997, IFOAM met with FAO experts from all technical departments in March, 1998 to identify areas of potential collaboration. 32. Additional FAO efforts will be necessary to respond to the frequent requests from member countries for organic agriculture information and assistance. FAO could effectively collaborate and build partnerships with existing institutions (e.g. national organic programmes/associations, IFOAM and other NGOs, and national and international agricultural research centres) in several areas, presuming that adequate resources are secured. Proposed major functions include: A. PROVISION OF A FORUM 33. As part of its Regular Programme, FAO could provide a forum for member countries to agree on standards for organic production, labelling, and other market-related concerns. Some of this would continue to take place within the Codex Committee on Food Labelling but FAO could, in addition, convene expert groups to respond to emerging issues such as soil building crop rotations, pathogen problems associated with manure use, and the development of post-harvest handling codes. The recommendations of these expert groups could be provided to the Codex Committee on Food Labelling or any Codex-like mechanism that provides a forum for international debate. FAO Codex Regional Committees could further consider regionally-based organic standards and input lists that take into account local production needs as well as international market demands. B. POLICY ANALYSIS AND ADVICE 34. Basic intelligence is needed to fully understand the magnitude and potential of the organic sector. FAO could request organic production and trade data from countries through its regular annual questionnaire. FAO could also develop appropriate standards and classification for data collection efforts by countries and private organizations. FAO could also serve as a clearinghouse for market research by gathering and reporting on information generated by various entities. A roster of resource centres and experts involved in research and marketing could be maintained and networking in developing countries could be promoted. Data collection, although part of FAO's Regular Programme functions, would require dedicated funding. For well-defined, in-country studies, an appropriate mix of regular and extra budgetary funds must be secured. For example, Technical Cooperation Programme resources could be used. Once data has been collected and analysed, FAO could provide advice on how to develop local production priorities in accordance with organic market demands. C. FACILITATION OF RESEARCH AND EXTENSION INFORMATION EXCHANGE 35. Organic farming tends to reveal interesting research questions with implications for the enhancement of sustainable production systems within and beyond organic agriculture. FAO could facilitate communication of these questions to researchers so that they may be considered in international research centres, such as the Consultative Group on International Agricultural Research (CGIAR) and the National Agricultural Research Systems (NARS). Research collaboration could be increased through networks and regular research fora that respond to the cross-sectoral and multi-disciplinary expertise needs of organic agriculture. FAO could also help develop relevant curricula for higher education and appropriate extension and communication programmes.
While some of these network opportunities touch upon FAO's Regular Programme functions including the CGIAR TAC Secretariat and NARS Secretariat, others will require extra-budgetary resources. D. TECHNICAL ASSISTANCE 36. Working with national programmes in pilot activities, FAO could assist in better integrating organic, IPNS, and IPM information to ensure that all such techniques are available to farmers. Among the greatest opportunities for effective FAO involvement is the application of the "Farmer Field School" model in places where market opportunities suggest that organic agriculture would be profitable. FAO-sponsored Farmer Field Schools in organic agriculture could evaluate, on a local basis, the contributions of organic production to food security through adaptive field trials. The highly local nature of organic production means that community-based expertise and organizational capacity is needed. FAO could also promote the development of locally-based certification organizations in developing countries, which could eliminate the costly practice of hiring outside experts to certify organic operations. All of these pilot activities would require the identification and mobilisation of extra-budgetary resources, and could be strengthened by collaboration with member countries' research, development, and extension institutions. 37. FAO has the responsibility to give organic agriculture a legitimate place within sustainable agriculture programmes and assist member countries in their efforts to respond to farmer and consumer demand in this sector. Organic agriculture may contribute to the overall goals of sustainability. First, organic farmers and processors, in their attempts to adhere to rigorous certification standards, may discover new and innovative production technologies that apply to other agricultural systems as well. Second, organic agriculture may provide market opportunities for farmers and processors who choose to alter their practices to meet certain consumer demands. Finally, organic agriculture promotes the national and international public debate on sustainability by creating awareness of environmental and social concerns that merit attention. 38. In light of the above, an Organization-wide, cross-sectoral programme in organic agriculture is needed. Such a programme would focus on: provision of information and cost-effective discussion fora on organic production and trade; institutional support and policy advice to members; facilitation of research, extension and networking; technical assistance for developing skills, organic standards and certification capacities; and pilot projects that explore and promote feasible organic agricultural techniques. 39. The programme would have an institutional core where some activities would be anchored within the Regular Programme and, where possible, strengthened with extra-budgetary resources. FAO, in partnership with public and private institutions, should undertake related programme activities. The Sustainable Development Department, through the Environment and Natural Resources Service of the Research, Extension and Training Division would continue providing co-ordination by promoting and catalysing technical divisions' work on organic agriculture. The Committee on Agriculture may wish to endorse this proposal and to provide guidance on how FAO might best shape a coherent programme on organic agriculture reflecting the needs and opportunities of member countries. 1 Data sources: Austria: Market Brief, prepared by the U.S. 
Department of Agriculture, Foreign Agriculture Service, November 1996; Switzerland: USDA Outlook Conference Proceedings, February 23, 1998. 2 Data sources: USA: Federal Register, Vol. 62, No. 241, December 16, 1997; France: USDA/FAS Market Brief, December 1997; Japan: USDA/FAS Market Brief, September 1994; Singapore: USDA/FAS Market Brief, August 1997. 3 No single definition of "synthetic" exists, although the various material lists of allowed and prohibited inputs for organic production, developed in different countries and by different certification programmes, are fairly consistent, reflecting an implicit agreement on a definition. The few legal definitions of synthetic reflect the common understanding of the term in organic trade. For example, British Columbia, Canada defines "synthetically compounded" as "a process which chemically changes a material extracted from naturally occurring plant, animal or mineral sources, excepting microbiological, mechanical and heat processes." The USA defines synthetic as "a substance that is formulated or manufactured by a chemical process or by a process that chemically changes a substance extracted from naturally occurring plant, animal, or mineral sources, except that such term shall not apply to substances created by naturally occurring biological processes." 4 Crop rotation is the practice of alternating crops grown on a specific field in a planned pattern or sequence in successive crop years. Organic certification programmes require "soil building" crop rotations, meaning that they must be specifically designed to steadily improve soil tilth and fertility while reducing nitrate leaching, weed, pest and disease problems. IFOAM, for example, recommends specific rotations that include legumes and requires the rotation of non-perennial crops "in a manner that minimises pressure from insects, weeds, diseases and other pests, while maintaining or increasing soil, organic matter, fertility, microbial activity and general soil health." Under limited cropping conditions (e.g., mushrooms, perennials) crop rotations may not be applicable; in such cases other methods that contribute to soil fertility may be required by certification programmes. 5 Natural is commonly understood as anything that is non-synthetic.
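As a purely hypothetical numerical illustration of the trade-off described in paragraphs 8 and 10 above: the paper cites organic price premiums of roughly 20 percent and certification fees of about 2 to 5 percent of sales value. The short Python sketch below is not from the document; the sales figure is invented, and only the premium and fee ranges come from the text.

```python
# Hypothetical illustration of the premium vs. certification-cost trade-off
# noted in paragraphs 8 and 10. The baseline sales figure is invented; only the
# 20% premium and the 2-5% fee range are taken from the text.

def net_gain(conventional_sales: float, premium: float, cert_fee_rate: float) -> float:
    """Extra revenue from selling organically, after paying certification fees."""
    organic_sales = conventional_sales * (1 + premium)
    return organic_sales * (1 - cert_fee_rate) - conventional_sales

base_sales = 10_000.0  # invented figure for a hypothetical smallholder, any currency
for fee in (0.02, 0.05):
    gain = net_gain(base_sales, 0.20, fee)
    print(f"certification fee {fee:.0%}: net gain {gain:,.0f} ({gain / base_sales:.0%} of baseline sales)")

# Even at the 5% fee, the 20% premium still yields a net gain of about 14% of
# baseline sales, although this ignores any yield loss during the conversion
# period discussed in paragraph 13.
```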
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is a formal document discussing organic agriculture and does not contain any unsafe or inappropriate content. It is suitable for public use, educational environments, and general audiences.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is a detailed discussion about organic farming practices, market opportunities, and challenges faced by farmers in developing countries. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and receives a score of 0.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe provided text discusses organic agriculture and its environmental impacts without any unsafe or inappropriate content. It is suitable for educational and public use.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe provided text discusses organic agriculture, public-private partnerships, technical support, and direct assistance in a manner that is entirely appropriate and suitable for all audiences. It does not contain any unsafe, inappropriate, or harmful content.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe provided text discusses organic agriculture, FAO activities, and potential collaborations without any unsafe or inappropriate content. It is fully appropriate for all audiences and does not contain any harmful language or themes.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is entirely focused on discussing organic agriculture, sustainable farming practices, and the role of FAO in promoting these techniques. There are no unsafe, inappropriate, or harmful elements present in the text." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1215-0", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1215-1", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1215-2", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1215-3", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1215-4", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1215-5", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.fao.org/unfao/bodies/coag/cOAG15/X0075E.htm", "date": "2014-07-10T23:07:54", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776423075.13/warc/CC-MAIN-20140707234023-00024-ip-10-180-212-248.ec2.internal.warc.gz", "language": "en", "language_score": 0.9354096651, "token_count": 6043, "score": 2.796875, "int_score": 3 }
<urn:uuid:060896a6-e785-4c47-a516-6d712606ca0e>
The major problem limiting the effectiveness of mental health care is the failure to promote well-being due to an excessive contemporary focus on deficits symptomatic of mental ill-being. Our treatments focus on symptoms rather than on causes of dysfunction and fail to promote healthy functioning. As a result, improvements in treatment are usually weak, incomplete, and temporary, whether based on psychopharmacology or psychotherapy (1). In contrast, randomized controlled trials of treatments designed to promote well-being have been shown to reduce dropout, relapse, and recurrence rates compared with treatments for symptoms of disorder (2—4). In addition, psychiatric care to promote well-being reduces the stigma and increases recovery of mental health. In other words, mental health care appears to be more effective than mental illness care. An understanding of personality development provides a systematic way to promote health, which can be defined as an integrated state of physical, mental, social, and spiritual well-being, rather than merely the absence of disease or infirmity (5, 6). More specifically, mental health has been described as "A state of well-being in which the person realizes and uses his or her own abilities, can cope with the normal stresses of life, can work productively and fruitfully, and is able to contribute to his or her community" (6). This definition of mental health corresponds closely to the description of maturity of personality, particularly the character traits of self-directedness, cooperativeness, and self-transcendence, which allow a person to work, love, and serve others. In fact, there is extensive data indicating that individual differences in personality are causal antecedents contributing to the full range of psychopathology (7). Mental health professionals and educators use a wide variety of methods for the development of character and well-being. The traditional mainstays of psychiatric treatment have been various forms of pharmacotherapy and psychotherapy, but there are also many other effective evidence-based treatments that promote greater mind-body awareness by means of work on physical exercise, diet, sleep hygiene, deep breathing exercises, guided imagery, muscular relaxation, or meditation (8—10). The effects of conventional and alternative methods are often indistinguishable, suggesting that some common mechanism is being influenced by complementary pathways (11). How do these diverse biological, psychological, and social interventions influence well-being? One clear answer has been given to this question by sages throughout history (1). Well-being and the integration of personality depend fundamentally on self-awareness, as summarized in the Delphic injunction to "Know Yourself." Rather than relying on the authority of ancient sages, however, here we will consider evidence to evaluate the possibility that character development can be facilitated by diverse methods that enhance a person's awareness of any aspect of their being, including their body, thoughts, or psyche. To do so, we must evaluate ways of measuring the different aspects of being and well-being. Each of these terms and their aspects need to be clearly defined and measured in ways that can be implemented efficiently in clinical practice, as we will describe in detail. We must also consider the mechanisms by which increasing awareness may promote increased well-being and reduced ill-being. 
First, we review the relationship between personality and a wide range of psychiatric disorders. Second, we evaluate the impact of character structure on a wide range of measures of well-being, including positive emotions, negative emotions, life satisfaction, perceived social support, and perceived health. Third, we describe a practical and inexpensive clinical method for facilitating the maturation and integration of personality based on an understanding of the processes of human thought, which underlie changes in personality and well-being. Remember that most psychopathology has a complex or multifactorial causation and that it develops as a nonlinear dynamic system. This means that disorders depend on combinations of multiple genetic and environmental variables and that the same factor may contribute to many different disorders and that the same disorder may arise from interactions of different sets of causes. The effect of a variable can be very different, depending on which other variables it is combined with. As a result, we should not expect simple linear chains of development from molecular variables to clinical syndromes, even though contrary claims are made frequently and just as frequently fail to be confirmed. Complexity of pathogenesis is another way of talking about the "equifinality" and "multifinality" properties of development observed consistently for personality and psychopathology (12, 13). The complexity of personality and psychopathology has been amply confirmed by etiological research. We have no laboratory tests for any psychiatric disorder. Genetic variables, when found, have weak and inconsistent effects when evaluated one at a time because their effects depend on other unmeasured genetic and environmental variables with which they are combined. There are no algorithmic formulae or shortcuts for understanding the complex biopsychosocial processes involved in the development of well-being and its disorders. Personality traits are an important component of the complex development of the full range of psychopathology but do not explain most of the variance in risk for psychiatric disorders. The incomplete determination of psychopathology by personality is partly related to the incompleteness of what is traditionally called personality. A fuller account of all aspects of a person needs to include sexuality, physicality, emotionality, intellect, and spirituality in an integrated developmental perspective (14). Furthermore, the development of psychiatric disorders depends on internal and external variables that pull on the self, such as the appropriateness of a person's life style, goals, and values within the personal, social, and cultural setting in which they live (15). We cannot separate the self or the whole person from their family or culture any more than we can separate a person's body and mind. Efforts to do so result in "brainless" or "mindless" perspectives, which eventually lead to confusion rather than understanding (16, 17). As a result of such complexity, we must be modest about the extent to which personality explains the etiology of psychopathology. Yet it is also clear that personality variables contribute to most psychopathology and that the integration of personality allows such improved adaptation that people can enjoy the way they have learned to live well with the strengths that they always had or the abilities that they can acquire with experience or treatment. 
This is the modest etiological but hopeful therapeutic claim that we think is justified by what we know about personality development: Nearly all human beings have enough capacity for self-awareness to allow them to learn how to live well—with satisfaction, purpose, and a sense of meaning greater than their individual selves, even if their self-awareness is initially limited. Some randomized controlled trials have been done that directly support this claim, using diverse methods to increase awareness of the body, thoughts, and psyche of a person (2, 4, 8—11), but much more is needed before this claim can be considered well-documented. Our limited knowledge of the science of well-being is a well-known reality that every practicing psychiatrist already faces daily with each patient encountered. What all mental health professionals need is a practical way to help people make their lives better. So here we will review the facts that are available to guide such a hopeful person-centered approach to well-being.

Psychiatrists have been divided by interest and training into schools that emphasize the importance of major categories of illness, as described on axis I of DSM, or the development of personality traits, as described on axis II of DSM. Sometimes these distinctions have been confounded with the belief that neurobiology or nature causes the major categories of illness, whereas psychosocial development or nurture causes personality traits. The dichotomies of nature or nurture, neurobiology or psychosocial development, and category or trait are no longer scientifically tenable perspectives (17). The brain does not have centers for different categories of illness; it is modular, with each module having functions that participate in a variety of integrative processes in combination with other modules to allow purposeful adaptation to internal and external events. These modules are extensively interconnected, like the Internet, so that communication operates as a nonlinear dynamic system with quantum-like properties (1).

The dynamics of shifts in brain states are depicted in Figure 1. The two brain states, A and B, in Figure 1 represent different configurations of a person's brain modules. The two brain states or configurations of personality differ in their level of information content and well-being. These states are technically described as "metastable": a person can be stuck in the less well configuration B because the only way to reach the more well configuration A is by temporarily giving up some of the advantages or pleasures associated with configuration B. In clinical terms, people must face who they are and recognize both the advantages and disadvantages of their current way of living and of other potential attitudes toward life before they can change without self-defeating conflict. As a result of such relative stability, configurations of traits can appear to be a categorical disorder temporarily until they radically transform, usually in the flash of a moment of insight leading to a change in outlook.

Figure 1. The Nonlinear Dynamics of Shifts in Brain States Differing in Level of Well-being or Integration of Information Content. (Reproduced with Permission of the Washington University Center for Well-Being.)

It is important for clinicians of all perspectives to recognize that there is substantial similarity in diagnosis and treatment whether viewed through the lens of major clinical syndromes, personality disorders, or the traits they share in common.
The similarity between major syndromes and personality disorders is illustrated in Table 1. Although it may seem surprising to people accustomed to a biomedical model, even the major syndromes of schizophrenia, bipolar disorder, and major depressive disorder are strongly influenced by premorbid personality traits (18—21). As previously mentioned, temperament and character may not explain all of the components of a clinical syndrome, but their contribution is usually sufficient to effect recovery by understanding the personality configuration that led to vulnerability to the disorder.

Table 1. Phenotypic Similarity between Major Clinical Syndromes and Personality Disorders

The components of temperament and character that are most strongly associated with different major psychiatric disorders are summarized in Table 2. A personality dimension is only indicated as high (+) or low (−) when it is regularly observed and useful for therapeutic intervention. The strongest effects are denoted as very high (++) or very low (− −). Table 2 gives the observations that are of practical utility for a clinician interested in diagnosis and treatment, and more details are given in the references cited in the text.

Table 2. Personality Traits That Are Highly Likely to Be High (+) or Low (−) in Different Mental Disorders and States of Mental Health

Extensive findings about temperament and character configurations contributing to schizophrenia have recently been reviewed (21, 22). Perceptual aberrations and magical thinking characterize the character configuration of low self-directedness and high self-transcendence because such people have poor reality testing (i.e., low self-directedness) combined with vivid fantasy (i.e., high self-transcendence). The full schizotypal or disorganized character configuration also includes social intolerance, hostility, and distrust (i.e., low cooperativeness). Schizophrenics also are asocial because they are both anxiety-prone (i.e., high in harm avoidance) and emotionally aloof (i.e., low reward dependence). In a study of schizophrenics and their nonpsychotic siblings, asociality was found to be a mediating variable leading to schizophrenia only when combined with schizotypal character traits or with the neurocognitive deficits typical of schizophrenics, such as poor working memory and low crystallized intelligence (21). More importantly, high self-directedness was strongly correlated with the absence of neurocognitive deficits, such as low crystallized intelligence, in the nonpsychotic siblings of schizophrenics, whereas it is not correlated with intelligence in the general population. In other words, high self-directedness is protective against neurocognitive deficits and psychosis in siblings of schizophrenics (21). High self-directedness is also strongly associated with measures of well-being among schizophrenics in treatment (23).

Patients with major depressive disorders and their siblings have a personality structure different from that of healthy control subjects and their siblings (18). Study of people with major depression and their siblings has allowed a rigorous evaluation of the heritable trait effects of personality as well as the extent to which they are influenced by mood states. The causal importance of personality in major depression can be tested in a sibling study because both personality and major depression are heritable.
In the families of patients with major depression, the Temperament and Character Inventory (TCI) personality traits are all moderately heritable (18), just as they are in samples of healthy people in the general population (24). Likewise, the risk of major depression was increased in the siblings of patients with depression compared with the siblings of healthy control subjects. The current level of depressed mood was correlated with high harm avoidance, low novelty seeking, and low self-directedness, indicating some state effect of mood on self-reports of personality. People with past histories of recurrent depression had particularly high harm avoidance scores, suggesting severity of predisposition and/or scarring from past episodes. Well siblings of patients with depression who have never had depression were lower in harm avoidance, higher in novelty seeking and reward dependence, and lower in cooperativeness. Persistence, cooperativeness, and self-transcendence did not influence the risk of depression in siblings. Other studies confirm the strong impact of high harm avoidance and low self-directedness on risk of depression (25—27). Higher harm avoidance scores reduce the response to antidepressant drugs, so that treatment responses are poorer and slower, and outcomes are worse at 6 and 18 months (28—30). Higher harm avoidance is associated with earlier onset, a greater number of past depressive episodes, and higher future recurrence risks (20, 29, 31, 32).

Temperament configuration influences drug response and can be used to guide choice of drug, dosage, and adjunctive medications according to some studies (33, 34). Both tricyclic antidepressants and selective serotonin reuptake inhibitors reduce harm avoidance scores, which mediates a reduction in depression (30, 34, 35). Lowering harm avoidance increases serotonergic activity (30, 36) and enhances connectivity between the amygdala and the anterior cingulate cortex, which serves as a crossroads to enable emotional regulation, cognitive planning, and purposeful activity (37). Antidepressants and cognitive behavior therapy both increase self-directedness, which protects against relapse over the long term (31).

Patients with bipolar disorder have a personality structure different from that of those with unipolar depression or healthy control subjects. In terms of temperament, people with mania are higher in both novelty seeking and harm avoidance than are control subjects, whereas those with unipolar depression are usually higher only in harm avoidance (19, 38, 39) unless they have some comorbid feature such as substance abuse or suicide attempts (20, 40). The strong association of novelty seeking with bipolar disorder in adults is also observed in milder variants such as hyperthymia or cyclothymia (41, 42) and in prepubertal and adolescent bipolar disorder (43, 44). The full personality profile is usually made up of a sensitive (NHR) or explosive (NHr) temperament and a cyclothymic (sCT) or disorganized (scT) character, which are associated with emotional lability and immaturity, respectively. (In the personality profile, lowercase letters refer to low scores and capital letters refer to high scores for the following traits: N, novelty seeking; H, harm avoidance; R, reward dependence; S, self-directedness; C, cooperativeness; T, self-transcendence.) Unfortunately, it is impossible to disentangle the effects of mood state on reporting in the absence of a sibling study in which mood is measured.
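This letter-code shorthand is compact enough to unpack mechanically. The sketch below is illustrative only; it is not part of the TCI instrument or its scoring software, and the profile strings are simply the codes used in the text.

```python
# Decode TCI profile shorthand such as "NHr" or "scT" into trait levels.
# Convention (from the text): capital letter = high score, lowercase = low score.

TRAIT_NAMES = {
    "n": "novelty seeking",
    "h": "harm avoidance",
    "r": "reward dependence",
    "s": "self-directedness",
    "c": "cooperativeness",
    "t": "self-transcendence",
}

def decode_profile(code: str) -> dict:
    """Map each letter in a profile code to (trait name, 'high' or 'low')."""
    profile = {}
    for letter in code:
        trait = TRAIT_NAMES[letter.lower()]
        profile[trait] = "high" if letter.isupper() else "low"
    return profile

if __name__ == "__main__":
    # "NHr" = explosive temperament; "scT" = disorganized character (see text).
    for code in ("NHr", "sCT", "scT", "sct", "SCT"):
        print(code, "->", decode_profile(code))
```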
When patients with euthymic bipolar disorder have been studied, they have consistently been found to have low persistence (45, 46), but sometimes also high harm avoidance (47) or low novelty seeking (46). However, high novelty seeking predicts a failure to recover euthymia in prospective studies (48, 49), so those who recover to stable euthymia may differ in their antecedent personality traits. Findings about comorbidity also indicate that bipolar disorder cannot be considered as a homogeneous category: bipolar patients with comorbid anxiety have higher harm avoidance and lower self-directedness than other bipolar patients, and bipolar patients with comorbid alcoholism have lower persistence (19). In summary, people with low persistence (i.e., easily discouraged and frustrated) or both harm avoidance and novelty seeking (i.e., conflicting needs for security and excitement) are vulnerable to bipolar disorder, particularly when their character is immature so that they cannot regulate their emotional conflicts, frustration, and disappointments. High harm avoidance is a strong and sensitive predictor of the presence and severity of anxiety disorders in adult and child psychiatric patients and in the general population (50—55), but not everyone with high harm avoidance develops a clinical anxiety disorder (53). Essentially, high harm avoidance increases a person's sensitivity to and difficulty coping with the impact of negative life events (56—58). For example, high harm avoidance augments startle responses when a person's prestimulus outlook is negative, whereas low harm avoidance reduces startle responses when a person's prestimulus outlook is positive (59, 60). As a result of differences in sensitivity to internal and external events, the form of any clinical disorder depends not only on harm avoidance but also on other personality traits that influence the perception of life events (1, 61). Except for obsessive-compulsive disorder (OCD), which is quite distinct in terms of personality, there is much overlap among the other anxiety disorders in both symptoms and personality configurations. Generalized anxiety disorder is most often associated with only high harm avoidance, whereas panic disorder and social anxiety disorder are somewhat more likely to occur in people who are both high in harm avoidance and low in novelty seeking (32, 62). In addition, people with social anxiety disorder are also unlikely to be average in reward dependence—they are likely to be either high in reward dependence (so that they are avoidant and rejection-sensitive) or low in reward dependence (so that they are cautious and aloof). Of course, any of these temperament profiles can be associated with weak development of character traits, particularly low self-directedness, which is moderately correlated with high harm avoidance (62—64). Patients who are higher in character development have pure anxiety disorders that respond better to antidepressants or cognitive therapy than those with less character development (63, 65). Obsessive-compulsive disorder (OCD) has a distinct personality configuration compared with that of healthy control subjects and of those with other anxiety disorders. 
Adult and child patients with OCD and their first-degree relatives have a methodical temperament with very much higher harm avoidance, lower novelty seeking, and lower reward dependence (designated as an nHr temperament profile) compared with healthy control subjects and first-degree relatives of control subjects (66, 67) and are high in persistence (68). In terms of character, patients with OCD usually are low in self-directedness and cooperativeness (65, 69, 70). Personality traits are differentially associated with severity and type of symptoms among patients with OCD: greater symptom severity is correlated with low self-directedness and low cooperativeness, whereas hoarding is associated with higher persistence and/or higher harm avoidance (69—72). Patients with chronic fatigue syndrome have a premorbid personality configuration similar to that seen in OCD except that they are not likely to be low in reward dependence (73). High novelty seeking and low self-directedness are the two personality traits most consistently associated with alcohol and drug abuse. High novelty seeking involves impulsive thrill-seeking and experimentation with new things, such as experimenting with alcohol, cigarettes, or recreational drugs at an early age (74—77). Low self-directedness is just as strongly associated with substance abuse because it involves inadequate executive control of a person's emotional drives (78). Thus, both high novelty seeking and low self-directedness contribute to difficulty inhibiting, delaying, or otherwise controlling impulsive behaviors, such as substance abuse, pathological gambling, or binging (79—82). Novelty seeking interacts with other personality and familial variables in influencing the development of alcohol abuse (74). If a person has one or more alcohol-dependent parents, then high novelty seeking is an even stronger predictor of alcohol dependence in the child than if the child had no alcoholic parent (83). On the other hand, low novelty seeking protects children from developing alcoholism if they have alcoholic parents. Essentially novelty seeking serves as a nonlinear moderator of the familial risk of alcoholism, like an intrinsic ability to either expose a person to risk (if novelty seeking is high) or to protect them from risk (if novelty seeking is low). Likewise the risk of alcohol abuse is increased if a person is extremely high or extremely low in harm avoidance or extremely high or extremely low in reward dependence, but the overall effects of novelty seeking and self-directedness are predominant in most circumstances (74). The overall effect of the extreme deviations of harm avoidance and reward dependence are missed if only group averages are compared. Nevertheless, if individuals low in novelty seeking begin to use alcohol to cope with stress, they may become dependent, particularly if they are high in harm avoidance and/or low in self-directedness, as often occurs in people who begin to drink during a depressive episode. The impact of social pressure and cultural attitudes on substance dependence is particularly evident in the changing relationships between personality and cigarette smoking over the past 50 years. Initiation of regular cigarette smoking is usually associated with high novelty seeking combined with high persistence and low reward dependence (84, 85). Some people initiate smoking to relieve anxiety if they are high in harm avoidance and want to relieve negative affect or stress. 
Likewise, the withdrawal from cigarettes in regular smokers is not a consistent physiological syndrome, but one that depends substantially on personality. Smokers high in novelty seeking experience greater abstinence-induced increases in nicotine withdrawal, negative affect, and cigarette craving than smokers low in novelty seeking. In contrast, smokers high in harm avoidance report greater abstinence-induced increases in negative affect and urges to smoke to relieve stress (84). Reward dependence is not associated with abstinence effects. Cessation of cigarette smoking is strongly moderated by character traits, so that individuals who are highly self-directed are much more likely to succeed in quitting smoking when they decide to do so. In people born before 1950, cigarette smoking was a prosocial behavior and was correlated with high cooperativeness, particularly in men. However, when some people began to regard smoking as an antisocial behavior, it became correlated with low self-directedness and low cooperativeness, as is seen in many people born after 1950 (78). This example illustrates the importance of understanding psychopathology as the expression of dysfunction in a complex dynamic system regulating biopsychosocial adaptation (15). Personality assessment provides a systematic way to understand the motives of a person and the internal and external influences that pull on the self in an ever-changing biological, social, and cultural context. Adequate clinical understanding of mental disorders requires awareness of the meaning of desires, feelings, relationships, and events in relation to the goals and values of a person and the collective expectations of the society within which the person functions. Eating disorders provide well-defined clinical subgroups of individuals, who differ objectively in their weight, subjective features such as body image, and patterns of behavior such as restrictive dieting, binging, and/or purging. Temperament and character traits have been extensively studied in eating disorders and show strong utility for both differential diagnosis and treatment planning (86—88). High harm avoidance and low self-directedness are typical characteristics shared by all groups of patients with eating disorders compared with healthy control subjects, both before and after recovery (88). Restrictor anorexic individuals are perfectionistic and have high harm avoidance, high persistence, and relatively higher self-directedness compared with patients with other eating disorders. In contrast, binging bulimic individuals have approach-avoidance conflicts and are high in harm avoidance and novelty seeking along with lower self-directedness and lower cooperativeness (89—94). Anorexic individuals who also binge or purge are most like bulimic individuals in personality but may be high in persistence like restrictive anorectic patients (87, 93, 94). Likewise obese patients, whether they have food binges or not, are usually like normal-weight bulimic individuals in that they are high in harm avoidance and novelty seeking and low in self-directedness (80, 81). Not all patients with anorexia have the restrictive pattern with high persistence, however; some anorexic individuals are withdrawn and avoidant without enthusiasm for eating, as in individuals who are high in harm avoidance (H) and low in novelty seeking (n), combined with a depressive character structure [low in all three characters (sct)] (95). 
Hence the structure of personality can reveal more than the subdivision of eating disorders according to weight and eating behaviors alone. The temperament and character traits appear to have a causal role in the pathogenesis of eating disorders because they are heritable, expressed early in development, moderately stable throughout life, not explained by mood or metabolic status, and present before treatment and after recovery (86, 88). The role of each personality trait in eating disorders has been examined in relation to different attitudes toward eating and weight. High novelty seeking is correlated with impulsivity and disinhibition, whereas low self-directedness is related to overall restraint in eating and susceptibility to hunger (96). Low self-directedness is the strongest predictor of severity of vomiting in bulimic individuals (97). People with both low self-directedness and high self-transcendence are prone to excessive fantasy and seem to be particularly reactive and susceptible to unrealistic societal messages about slenderness (96). Temperament and character are also important predictors of prognosis in prospective longitudinal studies. Cognitive behavior therapy in bulimic individuals resulted in a reduction of harm avoidance and an elevation of self-directedness compared with pretreatment, but the values for the bulimic individuals continued to differ from the values for healthy control subjects (86, 88). High self-directedness at intake predicts rapid improvement with cognitive behavior therapy and sustained recovery at 1 year in bulimic individuals (86). Dropout rates from brief psychotherapy are high for patients with eating disorders who have borderline features, including high novelty seeking, high harm avoidance, low reward dependence (NHr) and disorganized character traits of low self-directedness, low cooperativeness, and high self-transcendence (scT) (98). Mature character development greatly facilitates treatment by helping with the emotional regulation of anger and frustration (99). Anorexic patients who are high in both self-directedness and cooperativeness at intake are more likely to recover fully with cognitive behavior therapy, whereas those who do not improve are higher in harm avoidance and lower in self-directedness (100). High self-directedness and high persistence are also strong predictors of sustained weight loss in morbidly obese people after gastric banding (101, 102). These findings about the role of personality in the differential diagnosis and treatment of psychopathology indicate that the assessment of personality provides more information than is available from efforts to subdivide patients into categorical groups. The clinical syndromes remain useful because the same personality profile may lead to different problems under different conditions. Yet the personality configurations add much information to help in understanding the pathogenesis of the disorder, in prognostication of outcome, and in planning effective treatment interventions that are individually tailored to the patient's resources and interests. Remember that personality traits occur in all possible combinations, whereas clinicians are accustomed to thinking as if a person has only one basic diagnosis. In fact, we know that there is extensive overlap among the major clinical syndromes, so that having a mood disorder or schizophrenia does not provide any protection from having other disorders. 
The phenomenon of comorbidity has revealed the fundamental inadequacy of categorical diagnosis (103). Once a clinician realizes that the syndromes are not mutually exclusive diseases and that they actually share many functional components, then the clinical utility of multidimensional personality assessment becomes clear. Our diagnostic and treatment processes are greatly facilitated by having a way to understand the patterns of comorbidity that we observe in people by knowing what their personality structure is. Once we know the personality structure, we can often anticipate what other problems individuals may have even if they did not volunteer the information in their initial presenting complaints.

For example, the TCI profile allows clear prediction of what comorbid syndromes are present in people presenting with any diagnosis, such as a major depressive disorder or bipolar disorder (19, 50, 104). If persons presenting with a mood disorder have high harm avoidance, which they usually do, then they are likely to have comorbid anxiety and somatization. If patients with major depression have high novelty seeking, they are likely to have comorbid syndromes associated with poor impulse control such as substance dependence, bulimia, binge eating, or pathological gambling. And vice versa: if patients with depression have substance dependence, we should investigate whether they are high in novelty seeking. If patients with major depression are low in cooperativeness, we can anticipate that they will have problems with hostility or suspiciousness.

The structure of personality forms a clear scaffold on which we can systematically organize our thinking about the functional interactions that occur over the course of people's lives as they shape and adapt to an ever-changing internal and external environment. We can see that every component of personality is always important in adapting, rather than trying to oversimplify human motivation by reducing it to a fuzzy set of idealized categories that are really metastable configurations of dynamic functional processes by which people seek health and happiness with more or less success (105).

PERSONALITY AND WELL-BEING

In addition to understanding the relationship of personality to psychopathology, a mental health professional also needs to understand the relationship of personality to health as a state of physical, mental, social, and spiritual well-being (6). To proceed scientifically, we must be able to measure these aspects of well-being in a reliable way. We studied personality with the TCI and related it to well-validated measures of well-being in 1,102 consecutive community volunteers from the Sharon region of Israel (106). The measures of well-being we used were the Positive and Negative Affect Scale (107), the Satisfaction with Life Scale (108), and the Multidimensional Perceived Social Support Scale (109). Perceived health was also rated on a five-point Likert scale based on the item "How would you rate your health over the past 30 days?" from the 12-item General Health Questionnaire (110). Table 3 summarizes the correlations among these measures. Positive and negative affectivity were largely uncorrelated (r=−0.09), which reminds us that the presence of positive affect (i.e., happiness) is not at all the same as the absence of negative affect (i.e., not being depressed or upset).
Scores for life satisfaction, perceived social support, and perceived health were weakly but positively correlated with one another (r=+0.23 to +0.31), so we formed a Composite Health Index (CHI) as the mean of these three measures. Table 3 shows that each measure of health was strongly correlated with the CHI (r=+0.7 to +0.8). These findings show that health and happiness are correlated but made up of distinguishable components of physical, mental, and social well-being, as suggested by the World Health Organization definition of health (WHO, 1946).

Table 3. Correlations ×100 among Measures of Aspects of Well-Being: Positive Emotionality, Negative Emotionality, Life Satisfaction, Perceived Social Support, Perceived Health, and a Composite Health Index

Next we began to consider the way a person's character profile influenced these components of well-being. From prior work reviewed in the preceding section, we knew that recovery from mental disorders in prospective treatment studies involved greater maturity or adaptability of TCI character traits. In contrast, measures of well-being have little relationship to temperament traits except for harm avoidance (111). Prior work had suggested that a gradient in adaptiveness of character could be described along an approximate gradient shown in Figure 2 from the creative character (designated as SCT) with high self-directedness, high cooperativeness, and high self-transcendence to the depressive character (designated as sct) with low self-directedness, low cooperativeness, and low self-transcendence. Therefore, we sought to evaluate the relationship between character configuration and well-being for each of the measures of well-being in Table 3.

Figure 2. Variation in Negative Affect as a Function of Character Profile.

Overall results are summarized in Table 2 for well-being in a format that parallels that for mental disorders. We will consider each of the measures in sequence to describe the way each character trait has distinct functional effects while interacting with the other traits. First, let's consider the relationship of character structure to the differences between individuals in negative affect ("unhappy"), as depicted in Figure 2. All four character configurations with high self-directedness had less negative affect than those with lower self-directedness, demonstrating a strong association of self-directedness with lower negative affect. Cooperativeness had a weak association when self-transcendence was high: higher cooperativeness was associated with lower negative affect in the contrast between moody and disorganized profiles (sCT versus scT) and between creative and fanatical profiles (SCT versus ScT). Self-transcendence was not associated with lower negative affect in any contrast.

The interactions of character dimensions on positive affect, depicted in Figure 3, were different from those observed for negative affect. Higher self-transcendence was consistently associated with higher positive affect for each of the four possible configurations of self-directedness and cooperativeness (SCT versus SCt, ScT versus Sct, sCT versus sCt, and scT versus sct). Likewise, higher self-directedness was consistently associated with higher positive affect for each of the possible configurations of the other two characters.
The association of higher cooperativeness with positive affect was also highly significant when self-directedness was low: higher cooperativeness was associated with higher positive affect than lower cooperativeness in the contrast of moody versus disorganized profiles (sCT versus scT) and for dependent versus depressive profiles (sCt versus sct). When self-directedness was high, higher cooperativeness was weakly associated with higher positive affect (SCt versus Sct and SCT versus ScT). In other words, the association of character profiles with positive affect was dependent on specific combinations of traits.

Figure 3. Variation in Positive Affect as a Function of Character Profile.

Likewise we evaluated the relationship between character profiles and the other measures of well-being shown in Figure 4. Profiles with higher self-directedness were again consistently associated with better life satisfaction, perceived social support, and perceived health than those with lower self-directedness. In addition, profiles with higher cooperativeness were associated with greater perceived social support than those with lower cooperativeness.

Figure 4. Variation in Life Satisfaction, Perceived Health, and Perceived Social Support as a Function of Character Profile.

Another way to recognize the impact of character structure is to examine the proportion of people with each configuration who are in the top sixth and the bottom sixth of the population in overall health as measured by the CHI. Figure 5 illustrates the strong impact of self-directedness on health.

Figure 5. Comparison of Percentage of People in Each Character Configuration Who are in the Top One-sixth ("Best Health") and Bottom One-sixth ("Worst Health") of the Total Israeli Sample on the CHI.

As summarized in Table 2, high self-directedness is consistently associated with all our measures of well-being: lower negative affect, higher positive affect, greater life satisfaction, greater perceived social support, and better health. Accordingly, highly self-directed people are typically described as hopeful. Cooperativeness is associated with greater social well-being as measured by perceived social support. Accordingly, highly cooperative people are typically described as loving. Self-transcendence is associated with greater positive affect, so self-transcendent people are often described as exuberant or joyful. Self-directedness has pervasive importance for the development of well-being, but each aspect of character makes a unique contribution. These results in a general community sample confirm the results of work reviewed in the prior section showing that character development has a crucial role in recovery from ill-being and the development of well-being.

How then can character and well-being be systematically developed? Today many psychiatrists have limited time for psychotherapy, but nevertheless there is a great unmet need for ways to help facilitate character development. Personality assessment methods such as the TCI do enhance a clinician's practice by promoting a person-centered therapeutic dialogue (112). However, additional methods need to be integrated with personality assessment to promote character development and well-being in a systematic way. We are proposing use of a multi-method psychoeducational course developed by C. R. Cloninger in consultation with the Anthropedia Foundation for the purpose of promoting growth in self-awareness (2).
The course, available on a set of DVDs (Anthropedia, http://anthropedia.org), integrates experiences and practices that promote growth in self-awareness with evidence-based techniques from a wide variety of therapeutic approaches (cognitive-behavioral, person-centered, psychodynamic, logotherapy, well-being therapy, positive psychology, and others) and engages patients in practices that stimulate the development of the prefrontal cortex and lead to well-being (1, 2). The course combines assessment of temperament and character with a self-directed exploration of personal well-being. The course contains five distinct modules or sections (Table 4).

In the first module, a patient takes the TCI along with other brief measures of emotionality and life satisfaction to provide a baseline assessment and to stimulate reflection (107, 108). The patient reflects on what has provided lasting personal satisfaction in the past to improve his or her self-knowledge and become more likely to work toward satisfying goals. This self-exploration can help motivate the patient for constructive change, as well as help to establish a therapeutic alliance (112—115). Self-observation about attitudes characteristic of well-being (1) and related evidence-based exercises that promote satisfaction, such as practicing acts of kindness, suggest ways for the patient to begin to know himself or herself better (116). Functional brain imaging shows that such exercises activate the medial and anterior prefrontal cortex (Brodmann areas 8, 9, and 10), which is implicated in self-aware processes characteristic of self-directedness, cooperativeness, and self-transcendence (1, 117—120).

Table 4. The Know Yourself Course

The second part of the course requires the patient to examine the steps of thought to understand the processes of thought that can lead to greater life satisfaction (Table 5). This extends cognitive-behavioral descriptions of thought to recognize the three major systems of learning and memory that have evolved in the long evolutionary history of human beings (14, 121). Once patients have begun to reflect on and identify what they find satisfying and to realize that their current habits of thinking may be interfering with their health and happiness, modules three to five of the course provide meditations and exercises to help patients activate their prefrontal cortex so that they can be more aware. Each exercise builds on the other but focuses on a different aspect of the being and a different system of learning. Specifically, patients first learn a simple meditation for relaxing the body and calming the emotional brain and then observing their own flow of thoughts to understand without judging or blaming. With this introduction, most people can begin to experience mindfulness and let go of judging and blaming (1, 122, 123). Then the next two modules help to enhance this natural ability.

Table 5. Functional Properties, Information Processing, and Awareness of the Three Major Steps in Thought in Self-aware Consciousness

A second meditation is introduced to stimulate perception using all five senses in a way that is harmonious and joyful. Such voluntary activation of integrative sensory processing is designed to facilitate the perception of unity and harmony, which activates the medial and anterior prefrontal cortex and facilitates the development of rational intuition and emotional regulation.
Use of rational intuition without judging or blaming is associated with deactivation of dorsolateral prefrontal cortex along with activation of the medial and anterior prefrontal cortex (124). Syntactical functions of self-aware consciousness, such as the perception of harmony and improvisation in music (124, 125), depend on the maturity of the late-myelinating terminal association areas in prefrontal, inferior temporal, and inferior parietal cortex (126, 127). Accordingly, patients are encouraged to practice integrative sensory awareness exercises regularly in a wide variety of their daily activities to promote the perception of unity and connectedness while acting spontaneously, as well as to reduce feelings of loneliness and isolation. The final meditation focuses on deepening awareness of one's outlook on life, which is the backdrop that colors a person's perceptions. The ability to shift outlook on situations or temporal viewpoint is a function of contextual "mental time travel" that depends on self-awareness (124, 128). Enhanced awareness of one's outlook and its triggers helps a person become free of the momentum of past conditioning and attitudes, thereby facilitating self-transformation. Essentially the meditations provide exercises that instill calm and awareness of each of the three steps in thought. Each meditation helps in this process until a person is aware of all the steps in thought, which reduces his or her vulnerability to unnecessary conflict and dissatisfaction, as described in more detail elsewhere (1, 112). The practice of most psychiatrists and mental health professionals often permits only short visits and limited time for psychotherapy. The availability of materials such as the Know Yourself course and assessment methods such as the TCI can help to enhance what clinicians can do in their office and help patients by having access to materials that intensify and elevate their clinical experience. We welcome communication and suggestions about how we can further assist you to incorporate a person-centered approach to well-being in your own clinical work.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"No unsafe content identified\", \"score\": 0}\n```\n\nThe text provided is a scholarly discussion on mental health care and well-being. It does not contain any unsafe, inappropriate, or harmful content. The language is respectful, and the themes are focused on promoting mental health and well-being, which are appropriate for all audiences.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe provided text is a scholarly discussion about personality, psychopathology, and well-being. It does not contain any unsafe, inappropriate, or harmful content. The language is respectful and the themes are related to psychological and psychiatric topics, making it fully appropriate for educational and public use.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is a scientific discussion about the relationship between personality traits and various mental disorders. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for public use, educational environments, and general audiences.", "{\"reason\": \"No unsafe content identified\", \"score\": 0}\n```\n\n**Explanation:**\nThe provided text is a detailed discussion of temperament and personality traits in the context of psychiatric disorders, specifically bipolar disorder and anxiety disorders. The text does not contain any unsafe, inappropriate, or harmful content. It is written in a scholarly and academic tone, making it fully appropriate for educational and public use.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is a detailed discussion of personality traits and their associations with various psychological conditions and behaviors. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and educational environments.", "{\"reason\": \"No unsafe content identified\", \"score\": 0}\n```\n\nThe text provided is a scholarly discussion about personality traits and their relationship to eating disorders and obesity. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and receives a score of 0.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is a scholarly discussion about personality, mood disorders, and well-being. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for public use, educational environments, and general audiences.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe text provided is a detailed analysis of the relationship between character dimensions (self-directedness, cooperativeness, and self-transcendence) and various measures of well-being such as negative and positive affect, life satisfaction, perceived social support, and health. The content is entirely academic and does not contain any unsafe, inappropriate, or harmful elements. Therefore, it is fully appropriate for public use, educational environments, and general audiences.", "{\"reason\": \"educational and therapeutic content\", \"score\": 0}\n```\n\nThe text provided is entirely focused on educational and therapeutic content, discussing self-exploration, cognitive-behavioral techniques, and mindfulness exercises. 
It does not contain any unsafe or inappropriate content, profanity, derogatory terms, or references to illegal activities. Therefore, it is fully appropriate for public use, educational environments, and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1216-0", "llm_safety_score": 0, "reason": "No unsafe content identified" }, { "custom_id": "1216-1", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1216-2", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1216-3", "llm_safety_score": 0, "reason": "No unsafe content identified" }, { "custom_id": "1216-4", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1216-5", "llm_safety_score": 0, "reason": "No unsafe content identified" }, { "custom_id": "1216-6", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1216-7", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1216-8", "llm_safety_score": 0, "reason": "educational and therapeutic content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://focus.psychiatryonline.org/article.aspx?articleID=53118&quiz=q3", "date": "2014-07-22T07:19:08", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997857710.17/warc/CC-MAIN-20140722025737-00224-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9447972775, "token_count": 8770, "score": 3.484375, "int_score": 3 }
<urn:uuid:ccf6292e-3d7b-4ef9-b2e3-7d40b3836119>
Eliminate standing water around your farm to reduce the number of mosquitoes biting you and your animals.

This season’s invigorating rains have increased the mosquito population across much of the country. Along with itchy bites, mosquitoes can harbor a host of diseases that put you and your livestock at risk. While it’s impossible to eliminate all the mosquitoes around your farm and home, you can take steps to decrease mosquito habitats on your property.

Mosquitoes love standing water and use anything that holds water as breeding grounds. Still or stagnant water less than 1 inch deep will support mosquito growth. Look for and empty clogged gutters and leaf-filled drains, and drain outlets from air-conditioners, plastic wading pools, dog dishes, old tires, birdbaths and potted plant saucers. Even tire ruts, rotting stumps, old tree holes and puddles need to be addressed. The goal is to eliminate as many sources of standing water as possible.

Bodies of water that cannot be permanently drained, such as bird baths or other landscape water features, can be treated by using “Mosquito Dunks” to control the mosquito’s larval stage. Most of these larvicidal briquettes will last several weeks and are safe to use around animals, including pets.

Eliminating mosquito breeding habitats is only part of the battle. While it will reduce the number of mosquito larvae that reach adulthood, adult mosquitoes can fly in from other areas. Mosquitoes will hide in tall grass, shrubbery and other dark and shaded areas. Keep grass mowed and shrubbery trimmed. If using an insecticide, choose one labeled to kill mosquitoes on shrubbery and perimeter areas of your home. Products containing the active ingredient bifenthrin, carbaryl, cyfluthrin, permethrin or malathion can be used to kill mosquitoes that land in these areas. Read and follow pesticide label directions carefully.

Here are some recommendations from the University of Georgia Cooperative Extension on how to reduce mosquito breeding.

- Clean out eaves and gutters.
- Remove old tires or drill holes in those used for playground equipment to allow for drainage. Cover stacked tires with plastic or store them under a shelter to avoid rain filling them with water.
- Check boats for standing water. Be sure to clear drain holes, turn the boat over, cover it or increase angle to aid drainage.
- Check tarps on equipment that might collect water in pockets or indentations.
- Remove vegetation or obstructions in drainage ditches that prevent the flow of water.
- Turn over or store plastic pots.
- Dispose of broken, unused or discarded toys that can hold water.
- Pick up all beverage containers and cups.
- Replace water in birdbaths twice per week.
- Replace water in pet and other animal feeding dishes or troughs at least twice per week.
- Fill hardwood-tree holes that hold water with sand.
- Dispose of broken or unused kiddie pools.
- Don’t leave garbage can lids lying around upside down.
- Change water in bottom of plant containers, including hanging plants, at least twice per week.
- Fix dripping outdoor faucets that create pools of water.

By reducing mosquito breeding sites and taking a few simple precautions, you can reduce the number of mosquitoes taking a bite out of you and your livestock.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"educational content with no unsafe elements\", \"score\": 0}\n```\n\nThe text provides educational information on reducing mosquito breeding sites and does not contain any unsafe or inappropriate content." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1217-0", "llm_safety_score": 0, "reason": "educational content with no unsafe elements" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.hobbyfarms.com/farm-industry-news/2013/07/05/15-ways-to-reduce-mosquitoes.aspx", "date": "2014-07-22T07:22:43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997857710.17/warc/CC-MAIN-20140722025737-00224-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.910372138, "token_count": 711, "score": 3.1875, "int_score": 3 }
<urn:uuid:50ad4999-91e7-46bb-bbb6-2d317f34eacc>
This theme deals with ethical questions from a variety of perspectives. It is concerned with practical decision-making and the way people think they ought to lead their lives. Ethics explores the possible grounds for making moral decisions and examines notions such as freedom, values, responsibility and virtue. Ethics also entails a reflection upon experiences such as friendship, hospitality and love. A study of applied ethics explores approaches to important issues, some of which may be of international concern.

This theme allows students to explore philosophically such questions as: Are there fundamental moral principles that apply in every situation? How do we decide if a particular action is right or wrong? How should we treat people? Are moral decisions culturally influenced? What do we mean when we say something is right or wrong?

The study of ethical principles—what they might be, whether they are universal and how they influence our actions. This includes deontological and duty-based theories; teleological or consequentialist theories like utilitarianism; and notions of virtue and virtuous character.

The study of the meaning of ethical statements and the foundation of moral judgements. What do we mean when we say an action is right or wrong? Do moral values exist independently of humans? Do moral questions apply only to humans? What is the relationship between statements of fact and statements of value?

The study of ethical approaches to important contemporary issues such as bio-medicine, the environment and politics.

(from the IB Philosophy Guide)
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"No unsafe content found\", \"score\": 0}\n```\n\nThe provided text discusses ethical questions and principles from various perspectives, focusing on moral decision-making, values, and contemporary issues. There is no unsafe, inappropriate, or harmful content in the text." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1218-0", "llm_safety_score": 0, "reason": "No unsafe content found" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.sevenoaksphilosophy.org/ethics.html", "date": "2014-07-22T07:18:51", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997857710.17/warc/CC-MAIN-20140722025737-00224-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9155779481, "token_count": 294, "score": 4.125, "int_score": 4 }
<urn:uuid:1be7929a-2af7-4470-b94f-215b8c4ce31e>
Papyrus of Hunefer
This papyrus was found in the tomb of the scribe Hunefer in Thebes. It dates from the 19th Dynasty, about 1285 BC. It can be seen in the British Museum.

What does the papyrus show?
This is a famous scene from the Book of the Dead, chapter 125. It shows the soul of the deceased (Hunefer) being led to judgment by Anubis. A court of gods sits above. Hunefer's heart is weighed on the scales on the left, against the feather of truth (maat) on the right.

Weighing of the Heart
If the heart is heavy with sin (if Hunefer has done lots of bad things in his life), then his heart will sink lower than the feather of truth. You can see another Anubis in the middle adjusting a weight on the scales. The goddess Maat is on top of the scales. The spell in the papyrus was supposed to help him pass the test.

What if Hunefer has been bad?
If Hunefer has been bad, then a strange monster called Ammut - that's the creature with the crocodile's head and hippo's body - will devour him!

What if Hunefer has been good?
You can see the god of writing, Thoth, recording the result of the weighing. He is holding the rush pen and palette. If Hunefer has been good he will be brought before Osiris (who is shown just to the right of this scene), and allowed into heaven.

What is heaven like?
In the afterlife, the soul lived a blissful life, perhaps working in the fields of reeds with his loved ones, or hunting in the marshes. The picture here shows Sennedjem with his wife Iyneferti, who sows seeds after he has ploughed the soil.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"historical and cultural information about the afterlife\", \"score\": 0}\n```\n\nThe text provides historical and cultural information about the afterlife in ancient Egypt, specifically from the Book of the Dead. It does not contain any unsafe, inappropriate, or harmful content." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1220-0", "llm_safety_score": 0, "reason": "historical and cultural information about the afterlife" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://timetrips.co.uk/papyrus_of_hunefer.htm", "date": "2014-07-23T20:06:20", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883466.67/warc/CC-MAIN-20140722025803-00160-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9622612, "token_count": 383, "score": 2.875, "int_score": 3 }
<urn:uuid:6b111a12-531b-450c-a062-d388f3c5fe8e>
Social media is a must for America’s STEM future
By Wendy Henry, July 15th, 2013

Over the last several decades, the United States has declined as an economic and educational global leader in science, technology, engineering, and math (STEM). To regain economic success and global competitiveness in these fields, government at all levels should launch campaigns that raise STEM awareness and increase student engagement. Actively leveraging social media channels is one potential path to drive K-12 excitement in STEM education and jobs through challenges.

According to the American Council for Technology and Industry Advisory Council’s (ACT-IAC) Quadrennial Government Technology Review, only about a third of bachelor’s degrees earned in the U.S. are in a STEM field, compared to more than half of university degrees earned in China and Japan. With the decline in K-12 interest and engagement in these fields, the U.S. may likely continue to fall short on maintaining an adequate pipeline of STEM-educated professionals. But just how troubling is this situation, and what does it mean for the nation and our future?

Jobs in STEM are increasing three times faster than jobs in the rest of the economy and as a result, by 2018, the U.S. faces a projected deficit of 230,000 qualified advanced-degree STEM professionals. To remain competitive, the U.S. needs to close the gap between the knowledge and skills needed in STEM, and the number of available professionals to fill those needs.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text is focused on educational and policy-related content regarding STEM education and does not contain any unsafe, inappropriate, or harmful elements." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1221-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.eschoolnews.com/2013/07/15/social-media-is-a-must-for-americas-stem-future/", "date": "2014-07-23T20:09:07", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883466.67/warc/CC-MAIN-20140722025803-00160-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9406752586, "token_count": 339, "score": 2.6875, "int_score": 3 }
<urn:uuid:4fcdfffd-e683-4d3c-9236-99fce68e7f38>
In the automobile industry, an electronic control unit (ECU) is an embedded electronic device, basically a digital computer, that reads signals coming from sensors placed at various points in the car and in different components and, depending on this information, controls important units such as the engine and automated operations within the car; it also keeps a check on the performance of some key components. An ECU is made up of hardware and software (firmware). The hardware consists of various electronic components on a PCB, the most important being a microcontroller chip along with an EPROM or a Flash memory chip. The software (firmware) is a set of lower-level code that runs on the microcontroller.

The ECU is characterized by:

· many analog and digital I/O lines (low and high power)
· power device interface/control
· different communication protocols (CAN, KWP-2000, etc.)
· large switching matrices for both low and high power signals
· high voltage tests
· intelligent communication interface adapters (standard or custom)
· automatic fixture recognition and software sequence enable
· power device simulation

2. Different Types of ECUs

Depending on what it is used for, ECUs are named and differentiated as below:

ECM – Engine Control Module. (In the industry the ECM is often simply called the ECU – Engine Control Unit.) The ECM, also known as the EMS (engine management system), is an ECU in an internal combustion engine that controls engine functions such as fuel injection, ignition timing and the idle speed control system. All of these controls are based on data (such as engine coolant temperature, air flow and crank position) received from various sensors. The ECM also "learns" about the engine as we drive the car: learning is the process the ECU uses to track the tolerance changes of the sensors and actuators on the engine; for example, the behavior of the idle-air bypass valve (automatic choke) at idle with the A/C on and off. The ECM stores these learned values in battery-backed RAM so that it doesn't have to start from scratch the next time the engine is turned over. A detailed discussion of ECMs appears later in the paper. Since the enforcement of the Federal Emission Regulations in 1981, ECUs have come into widespread use in most vehicles. In aeronautical applications these systems are popularly called FADECs (Full Authority Digital Engine Control).

EBCM – Electronic Brake Control Module. This is the ECU used in the ABS (anti-lock braking system) of a car. EBCMs were introduced in the early 1970s to improve vehicle braking irrespective of road and weather conditions, though it is only more recently that they have gained widespread popularity. The EBCM regulates the braking system on the basis of five inputs (a simplified sketch of how these inputs might be combined appears after this subsection):

1. The Brake: This input gives the status of the brake pedal, i.e. deflection or assertion. This information is acquired in digital or analog format.
2. The 4WD: This input gives the status, in digital format, of whether the vehicle is in four-wheel-drive mode.
3. The Ignition: This input registers whether the ignition key is in place and whether the engine is running.
4. Vehicle Speed: This input gives the speed of the vehicle.
5. Wheel Speed: In a typical application this is a set of four input signals that convey the speed of each wheel.

This information is used to derive everything the control algorithm needs.
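The sketch below illustrates, in simplified form, how the five EBCM inputs listed above might be represented and combined in a wheel-slip check. The field names, the 20% slip threshold, and the logic are illustrative assumptions, not any manufacturer's ABS algorithm.

```python
# Sketch of the five EBCM inputs described above and a crude wheel-slip check.
# Field names and the slip threshold are illustrative only.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class EbcmInputs:
    brake_pressed: bool            # brake pedal status (digital)
    four_wheel_drive: bool         # 4WD mode engaged
    ignition_on: bool              # ignition / engine running
    vehicle_speed_kph: float       # vehicle speed
    wheel_speeds_kph: Tuple[float, float, float, float]  # one signal per wheel

def wheels_near_lock(inp: EbcmInputs, slip_threshold: float = 0.2):
    """Return indices of wheels turning much slower than the vehicle,
    i.e. candidates for ABS pressure modulation (simplified)."""
    if not (inp.brake_pressed and inp.ignition_on) or inp.vehicle_speed_kph < 5:
        return []
    locked = []
    for i, wheel in enumerate(inp.wheel_speeds_kph):
        slip = (inp.vehicle_speed_kph - wheel) / inp.vehicle_speed_kph
        if slip > slip_threshold:
            locked.append(i)
    return locked

if __name__ == "__main__":
    sample = EbcmInputs(True, False, True, 80.0, (78.0, 30.0, 79.0, 77.0))
    print("Wheels needing ABS intervention:", wheels_near_lock(sample))
```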
PCM – Powertrain Control Module. The PCM is an ECU that monitors and controls speed control, A/C, charging, and the automatic transmission. The inputs fed to the PCM come from the:
· throttle position sensor
· output shaft speed sensor
· vehicle speed sensor
· engine speed sensor (CKP)
· brake switch
· cruise control switches
· overdrive on/off switch
· governor pressure sensor
Using these inputs it performs transmission control, valve control through PWM outputs, and torque converter clutch and transmission protection relay control, and it provides feedback to the driver through the dashboard overdrive lamp.

VCM – Vehicle Control Module. The VCM is an ECU that takes care of systems such as:
· electronic power steering (EPS) systems
· adaptive cruise control (ACC) systems
· airbag control system (ACS) systems
· electronic stability control (ESC) systems
The VCM is typically installed in the middle of the car, between the passenger and engine compartments. It is connected to many kinds of sensors in order to control the various systems in the car. It takes inputs from crash sensors (micro-machined accelerometers) and from sensors that detect occupant weight, seating position, seat belt use, and seat position to determine the force with which the frontal airbags should deploy. Similarly, it takes inputs from steering wheel angle sensors, wheel speed sensors, yaw rate sensors, and lateral acceleration sensors to provide outputs to the ESC for the safest possible driving experience.

BCM – Body Control Module. The BCM is an ECU that takes care of the seating control unit, wiper control, power windows, and power hoods in convertible cars (e.g. the Benz SL Roadster).

3. Typical Inputs/Outputs of an ECU

An ECU consists of a number of functional blocks:
1. Power supply – digital and analog (power for analog sensors)
2. MPU – microprocessor and memory (usually Flash and RAM)
3. Communications link (e.g. CAN bus)
4. Discrete inputs – on/off switch-type inputs
5. Frequency inputs – encoder-type signals (e.g. crank or vehicle speed)
6. Analog inputs – feedback signals from sensors
7. Switch outputs – on/off switch-type outputs
8. PWM outputs – variable frequency and duty cycle (e.g. injector or ignition)
9. Frequency outputs – constant duty cycle (e.g. stepper motor for idle speed control)

An engine control unit typically has many kinds of sensors and actuators connected to it, and it is important to know what kind of I/O each of them requires. Let us look at some of the most typical sensors and actuators connected to an engine control module and the I/O they require.

Manifold Air Temperature Sensor (MAT)
The sensor is a thermistor, normally mounted in the air duct housing of the manifold. The electrical resistance of the thermistor decreases as the temperature rises, and this can be measured using an analog channel with some signal conditioning (excitation, amplification, etc.).

Coolant Temperature Sensor (CTS)
The CTS also uses a thermistor, in this case to detect the temperature of the coolant in the engine, and feeds its voltage signal to an analog input channel of the ECM.

Camshaft/Crankshaft Position Sensor (CPS)
The CPS is very important because it monitors the engine speed and the piston position in the engine. Traditionally, variable reluctance sensors were used to measure this, but nowadays IR sensors and modern rotary encoders are also used. These encoder signals are provided as frequency inputs to the ECU.
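As a small illustration of what the ECU does with such a frequency input, the function below converts the measured period between crank-trigger pulses into engine speed in RPM. The 60-tooth trigger wheel and the microsecond timebase are assumptions made for the example, not a statement about any particular engine.

```c
#include <stdint.h>

#define CRANK_TEETH_PER_REV 60u   /* assumed 60-tooth trigger wheel */

/*
 * Convert the measured time between two successive crank teeth (in microseconds)
 * into engine speed in revolutions per minute.
 */
uint32_t engine_rpm_from_tooth_period(uint32_t tooth_period_us)
{
    if (tooth_period_us == 0u) {
        return 0u;                          /* no pulses yet, engine not turning */
    }
    /* One revolution takes CRANK_TEETH_PER_REV tooth periods, so:          */
    /*   rev_time_us = tooth_period_us * CRANK_TEETH_PER_REV                */
    /*   rpm         = 60 * 1,000,000 / rev_time_us                         */
    uint64_t rev_time_us = (uint64_t)tooth_period_us * CRANK_TEETH_PER_REV;
    return (uint32_t)(60ull * 1000000ull / rev_time_us);
}
```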
Knock Sensor (KS)
The KS is a typical piezoelectric sensor. It picks up the knocking vibration from the cylinder block to which it is fixed, and this complex, dynamic analog signal is fed to the ECU.

Heated Oxygen Sensor (HO2S)
The HO2S measures the oxygen content of the exhaust gas, which indicates how rich or lean the air-fuel mixture is. The sensor is basically made of ceramic zirconia, which is placed in the exhaust manifold enclosed in a closed tube. The zirconia generates a voltage ranging from approximately 1 V maximum in richer conditions down to about 0 V in leaner conditions. This analog signal is passed on to the ECM.

Throttle Position Sensor (TPS)
The TPS is a potentiometer that transforms the throttle position into an output voltage, which is fed to the ECM.

Vehicle Speed Sensor (VSS)
The VSS is placed on the transaxle. It is a pulse generator and provides a digital signal to the ECM.

Manifold Absolute Pressure (MAP)
The MAP sensor measures changes in the intake manifold pressure resulting from engine load and speed changes. The ECM sends a 5-volt reference signal to the MAP sensor. As the pressure in the intake manifold changes, the electrical resistance of the MAP sensor also changes; by monitoring the sensor output voltage, the computer can determine the manifold absolute pressure. The higher the MAP voltage output, the lower the engine vacuum, which requires more fuel. The lower the MAP voltage output, the higher the engine vacuum, which requires less fuel. Under certain conditions, the MAP sensor is also used to measure barometric pressure, which allows the computer to adjust automatically for different altitudes. The computer uses the MAP sensor to control fuel delivery and ignition timing.

These are some of the most important signals the ECM uses to control the fuel injection system efficiently for proper fuel management. The NI hardware that can be used with these sensors can be chosen from the list given below:

4. Different Types of Protocols Used

Automobile protocols can be broken down into the following major categories.

·On-board diagnostics
On-board diagnostics have existed since the early 1980s, but in recent years they have become highly sophisticated, and there are now highly reliable protocols used just for on-board diagnostics. Some of the most frequently used ones are:

OBD-II protocol – One of the most popularly used standards, introduced in the mid 90s; it covers complete engine control and monitoring of the chassis and the accessories. It is used by almost all automakers.

CAN ISO 11898 – Another very popular protocol, used by almost all automakers for on-board diagnostics. The pin details are as below:
Pin 2 - J1850 Bus+
Pin 4 - Chassis Ground
Pin 5 - Signal Ground
Pin 6 - CAN High (J-2284)
Pin 7 - ISO 9141-2 K Line
Pin 10 - J1850 Bus-
Pin 14 - CAN Low (J-2284)
Pin 15 - ISO 9141-2 L Line
Pin 16 - Battery Power

Keyword 2000 and J1850 – These protocols are used mainly by GM and Chrysler for on-board diagnostics. J1850 is a very old protocol and is being phased out.

·Body and Powertrain
Body and powertrain networks may consist of CAN, LIN, or J1850 protocols. CAN is a versatile protocol and is used in many categories of in-vehicle networks. High-speed CAN is often used for powertrain applications such as engine timing to ensure that the car runs efficiently.

LIN – The Local Interconnect Network (LIN) is a UART-based network that was developed strictly for body applications. For example, a LIN network may connect all of the electric devices in a car door. LIN and CAN may coexist. LIN is used mainly by Chrysler, BMW, and Volkswagen.
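On the vehicle itself the CAN controller is built into the ECU's microcontroller, but on a PC-based test or prototyping setup a raw frame can be put on a CAN bus through the Linux SocketCAN interface, sketched below. The interface name "can0", the identifier, and the data bytes are placeholders chosen for the example.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    /* Open a raw CAN socket. */
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (s < 0) { perror("socket"); return 1; }

    /* Bind it to the CAN interface (the name is a placeholder). */
    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");
    if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

    struct sockaddr_can addr = {0};
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

    /* Build and send one frame; the ID and data bytes are illustrative only. */
    struct can_frame frame = {0};
    frame.can_id  = 0x123;
    frame.can_dlc = 2;
    frame.data[0] = 0x11;   /* e.g. a coolant-temperature byte */
    frame.data[1] = 0x22;   /* e.g. a throttle-position byte   */

    if (write(s, &frame, sizeof(frame)) != sizeof(frame)) { perror("write"); return 1; }

    close(s);
    return 0;
}
```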
·Multimedia and Drive-by-wire
MOST – A fiber optic network that has been optimized for use in the automobile. It is designed for use with simple devices such as microphones and speakers along with more complex devices, such as the security devices used to locate stolen automobiles. MOST technology is being developed and promoted by the MOST Cooperation, which includes BMW, Daimler-Chrysler, and Audi.

IDB-1394 – The latest addition to the IDB family of in-vehicle networks, designed for high-speed multimedia applications that require large amounts of information to be moved quickly on a vehicle. Previously known as IDB-M, IDB-1394 is built on IEEE-1394 technology, which has already gained wide acceptance in the consumer electronics community. The IDB-1394 specification defines the automotive-grade physical layers (e.g. cables, connectors), power modes, and the higher-layer protocols needed to assure interoperability of all IDB-1394 devices.

Drive-by-wire protocols have not been fully developed at this point, and there is some disagreement about which protocol will become the industry standard. While FlexRay offers high speed, it is expensive and further away from standardization.

FlexRay – A scalable, flexible high-speed communication system that meets the increasing technical demands of the automobile industry. With its data rate of up to 10 Mbit/s, it is ideal for X-by-wire applications.

5. Design and Testing

The traditional way to develop automotive embedded systems has been to build hardware boards that represent all or part of each ECU and part of its surroundings, often called plant models, and use them for bench testing. Unfortunately, the bench approach has many limitations. First, creating all the needed hardware boards is costly. Second, the performance requirements of the most powerful ECUs (those used for powertrain control) are so demanding that it is no longer possible to build boards that allow adequate measurements to be taken. Finally, and most importantly, this bench testing approach is based on a sequential design process in which hardware is developed, plant model prototypes are built, and only then does software development begin.

To overcome these limitations, control design engineers have adopted a highly efficient design process often referred to as the "V" diagram. Though originally developed to encapsulate the design process of software applications, many different versions of this diagram can be found describing different product design cycles. The one given below is typical of what is used in an ECU design cycle.

In this diagram the general progression of time through the development stages is shown from left to right. However, this is often an iterative process, and actual development will not proceed linearly through these steps. Instead, you will spend time in each step and even have to backtrack from time to time. The goal is to make this cycle as efficient as possible by minimizing the amount of backtracking between steps as well as the time spent in each step. The y-axis of the diagram can be thought of as the level at which the system components are considered. Early in the development, the requirements of the overall system must be considered. As the system is divided into subsystems and components, the process becomes very low-level, down to the point of loading code onto individual processors. Afterwards, components are integrated and tested together until the entire system can enter final production testing.
Therefore the top of the diagram represents the high-level system view and the bottom of the diagram represents a very low-level view. Let's look at these steps one by one.

In the first step, the design engineers document the needs and requirements of the project using spreadsheet or word processing applications. The documentation also covers the various specifications of the engine and the norms it needs to comply with, and it marks the limit levels of the parameters involved in controlling the engine.

Once the specifications are documented, the actual design process begins: first a software model of the ECU and of the engine is built. Once the models are built, the third step is software-in-the-loop simulation. In this step both software models (the ECU model and the engine model) are connected together in a closed feedback loop and simulated to analyze the dynamic characteristics of the entire system. During the simulation the ECU model monitors the output from the engine model and adjusts the inputs to the engine model in order to improve the performance of engine functions such as fuel injection and ignition.
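Whatever tool is used to build the models, the software-in-the-loop step described above boils down to a loop in which the controller model and the plant model exchange signals once per simulated time step. The sketch below illustrates the idea with a deliberately crude first-order "engine" and a proportional-integral idle-speed controller; every constant in it is an illustration value, not calibration data.

```c
#include <stdio.h>

/* Very crude plant: engine speed responds to throttle with a first-order lag. */
static float engine_model(float rpm, float throttle, float dt)
{
    const float gain = 3000.0f;   /* rpm per unit throttle (illustrative)      */
    const float tau  = 0.5f;      /* engine time constant in seconds (made up) */
    float rpm_target = gain * throttle;
    return rpm + (rpm_target - rpm) * dt / tau;
}

int main(void)
{
    const float dt = 0.01f;                  /* 10 ms simulation step      */
    const float setpoint = 800.0f;           /* desired idle speed in rpm  */
    const float kp = 0.0004f, ki = 0.0008f;  /* illustrative PI gains      */

    float rpm = 0.0f, integral = 0.0f;

    for (int step = 0; step < 1000; step++) {        /* 10 s of simulated time */
        /* Controller model: PI law on the speed error. */
        float error = setpoint - rpm;
        integral += error * dt;
        float throttle = kp * error + ki * integral;
        if (throttle < 0.0f) throttle = 0.0f;        /* actuator limits */
        if (throttle > 1.0f) throttle = 1.0f;

        /* Plant model: advance the engine one time step. */
        rpm = engine_model(rpm, throttle, dt);

        if (step % 100 == 0) {
            printf("t=%.1fs rpm=%.0f throttle=%.2f\n", step * dt, rpm, throttle);
        }
    }
    return 0;
}
```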
National Instruments provides three options for building a software model.

1st Option --- LabVIEW
LabVIEW with the Control Design and Simulation Bundle provides a very good platform for designing and modeling the engine and the engine control unit. The bundle consists of three toolkits for this purpose.

System Identification Toolkit and Control Design Toolkit: The System Identification Toolkit contains VIs that help build mathematical models of the dynamical subsystems of an engine. These models can then be integrated to generate a complete model of the engine, which can be analyzed with the control design VIs and used to design the mathematical model of the ECU.

Simulation Module: The Simulation Module provides a simulation environment within LabVIEW. Customers can also build the engine and ECU models in block diagram form (much like Simulink® models) using the VIs available in LabVIEW for logical and arithmetic functions, signal processing, filtering, and so on, along with the dynamical elements, look-up tables, and similar blocks provided by the Simulation Module. After the model is designed, software-in-the-loop simulation can be done in the same simulation environment. The documentation for the model can be prepared using any word processing or spreadsheet application.

2nd Option --- MATRIXx
MATRIXx has similar capabilities to MATLAB®/Simulink®. It is an ideal tool for customers who plan to build their models from scratch and are looking for tools that can handle very complex control design applications and simulate them at high speed. MATRIXx is made up of four products. Using the XMath and SystemBuild software, customers can model their ECUs and engines and then use these models to perform software-in-the-loop simulation. XMath is the basic analysis and visualization environment that controls SystemBuild and the rest of the MATRIXx environment; it also helps in handling data and performing numerical analysis for SystemBuild. SystemBuild is a graphical programming environment that can be used to model and simulate the engine and the ECU system, with a choice of over 80 block types for building complex engine and ECU models. For documenting the model, MATRIXx has a product called DocumentIt, which automatically creates documents in various formats from the models built in SystemBuild.

3rd Option --- LabVIEW + Simulink®
If you have a ready-made engine model built in Simulink® and do not want to rework or rebuild it, there are two options available:
1) Using the Simulation Module, the existing Simulink® model (.mdl file) can be translated into LabVIEW block diagram code in a simple three-step process.
2) The Simulation Interface Toolkit (SIT) is a LabVIEW add-on that provides tools to create a LabVIEW interface that communicates with an existing Simulink® model.
a) Communicate directly between LabVIEW and the Simulink® model. The SIT Server needs to be started from MATLAB® to allow this communication. The Simulink® model can be running on the host PC or on a different PC. The documentation can be done using any word processing or spreadsheet application.
b) Run the model on a real-time system by converting the Simulink® model to a DLL. Once the DLL is built, neither the SIT Server nor the original model is needed.

Rapid Control Prototyping
Rapid control prototyping (RCP) is also called engine-in-the-loop simulation (EIL). It should not be confused with the term rapid prototyping (RP), which refers to a class of technologies that automatically construct physical models from computer-aided design (CAD) data. For RCP, the software ECU model that has been designed is downloaded to a real-time hardware prototyping target. The target can be any real-time hardware (ideally a PXI system or a cRIO system). The software model of the ECU is thus given an I/O interface, which in turn is connected to the sensors and actuators attached to the engine. The software required is LabVIEW, LabVIEW Real-Time, and LabVIEW FPGA (if using cRIO or the 7831R in PXI). If the model is built in MATRIXx, a DLL can be created from it and imported into LabVIEW, where the I/O interface can be provided and the code downloaded to one of the real-time hardware targets mentioned above. If it is a Simulink® model, then after being imported or translated into LabVIEW the I/O interface can be provided in the same way and the code targeted to RT hardware such as PXI or cRIO. As mentioned in the input/output section, appropriate hardware can be used in the PXI or cRIO depending on the parameters being controlled and the sensors being used.

In this step the core ECU model is modified to interface with the I/O available in the actual ECU and is then converted into C code using a C-code generator. In some cases it is converted into Ada code instead. This code is then downloaded as the control algorithm to the 32-bit microcontroller inside the ECU. Currently LabVIEW does not have the capability or tools to convert the block diagram into C code, so those who use the NI Control Design bundle to build the ECU model have to hand-code the model in C. Though this is more tedious than using an automatic C-code generator, many customers prefer it, because in most cases the code generator produces errors in the generated code that are very difficult to debug. If the customer is using MATRIXx to build the model, there is a product called AutoCode that can generate C or Ada code from the SystemBuild model that was built for the ECU.
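Whether it comes out of a code generator or is written by hand, the control algorithm that ends up on the ECU's 32-bit microcontroller is typically a periodic step function of roughly the shape sketched below: read scaled sensor values, look up a base command, apply corrections, and write the actuator output. The table contents, scalings, and cold-enrichment correction here are invented for the illustration and do not represent any generator's output or any real calibration.

```c
#include <stdint.h>

/* Illustrative skeleton of a hand-written ECU control task.                  */
/* Everything here (names, table contents, scaling) is made up for clarity.   */

#define MAP_POINTS 4
#define RPM_POINTS 4

/* Base injection pulse width in microseconds, indexed by load (MAP) and speed. */
static const uint16_t base_pulse_us[MAP_POINTS][RPM_POINTS] = {
    { 1500, 1400, 1300, 1200 },
    { 2200, 2100, 2000, 1900 },
    { 3000, 2900, 2800, 2700 },
    { 3800, 3700, 3600, 3500 },
};

/* Map a raw value onto a table index, clamping at the top of the table. */
static uint8_t bucket(uint16_t value, uint16_t step, uint8_t max_index)
{
    uint16_t i = value / step;
    return (i > max_index) ? max_index : (uint8_t)i;
}

/* Called from a fixed-rate timer interrupt, e.g. every 10 ms. */
void control_step(uint16_t map_kpa, uint16_t rpm, int16_t coolant_c,
                  uint16_t *injector_pulse_us)
{
    uint8_t load_idx  = bucket(map_kpa, 30u, MAP_POINTS - 1);   /* 0..3 */
    uint8_t speed_idx = bucket(rpm, 2000u, RPM_POINTS - 1);     /* 0..3 */

    uint32_t pulse = base_pulse_us[load_idx][speed_idx];

    /* Simple cold-enrichment correction: +20% below 20 degrees C. */
    if (coolant_c < 20) {
        pulse = (pulse * 120u) / 100u;
    }

    *injector_pulse_us = (uint16_t)pulse;
}
```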
Hardware-in-the-loop Simulation (HIL)
Once the code containing the control algorithm has been downloaded to the ECU, we can test the performance of the ECU under extreme conditions, which cannot be achieved in the real world, by performing HIL simulation. In this step the actual ECU is tested by simulating an engine using the engine model created earlier. In the reverse of what was done in RCP, in HIL the software model of the engine is downloaded to real-time hardware and the appropriate I/O interfaces are provided; these I/O lines are then connected to the actual ECU. Various engine conditions can then be simulated and the ECU tested to its limits, which would not be possible if it were tested on an actual engine. As with RCP, the hardware for HIL is chosen according to the signals and sensors to be simulated; it can be selected from the list given in the input/output section.

MATLAB® and Simulink® are registered trademarks of The MathWorks, Inc. Other product and company names listed are trademarks and trade names of their respective companies.
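To complement the controller-side sketches above, the fragment below illustrates the plant side that runs on the real-time target during HIL: a crude manifold-pressure model whose output is converted into the voltage the real ECU expects on its MAP sensor input. The pressure relation and the 0.5 V to 4.5 V transfer function are assumptions for the example, and the call that would actually drive the simulator's analog-output hardware is omitted.

```c
#include <stdio.h>

/*
 * HIL plant-side fragment: each real-time tick, update a crude manifold-pressure
 * model and convert it to the sensor voltage the real ECU expects on its MAP input.
 * All constants are illustrative; a real HIL model would be far more detailed.
 */

static double manifold_pressure_kpa(double throttle, double rpm)
{
    /* More throttle raises pressure; higher rpm pulls more vacuum (made-up relation). */
    double p = 20.0 + 80.0 * throttle - 0.004 * rpm;
    if (p < 15.0)  p = 15.0;     /* strong vacuum limit  */
    if (p > 101.0) p = 101.0;    /* atmospheric limit    */
    return p;
}

static double map_sensor_voltage(double p_kpa)
{
    /* Assume a linear 0.5 V at 15 kPa to 4.5 V at 101 kPa transfer function. */
    return 0.5 + (p_kpa - 15.0) * (4.5 - 0.5) / (101.0 - 15.0);
}

int main(void)
{
    /* Sweep a throttle input and print the voltage that would be fed to the ECU. */
    for (double throttle = 0.0; throttle <= 1.0; throttle += 0.25) {
        double p = manifold_pressure_kpa(throttle, 2000.0);
        printf("throttle=%.2f  MAP=%5.1f kPa  sensor=%.2f V\n",
               throttle, p, map_sensor_voltage(p));
    }
    return 0;
}
```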
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"technical content with no unsafe elements\", \"score\": 0}\n```\n\nThe provided text is purely technical and educational, discussing the functions and types of ECUs in automobiles without any unsafe or inappropriate content.", "{\"reason\": \"technical content with no unsafe elements\", \"score\": 0}\n```", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```", "{\"reason\": \"technical documentation\", \"score\": 0}\n```\n\n**Explanation:**\nThe provided text is a technical documentation about the design and simulation of engine control units and software-in-the-loop simulation processes. It does not contain any unsafe or inappropriate content, profanity, derogatory terms, or references to illegal activities. Therefore, it is fully appropriate for all audiences and educational environments.", "{\"reason\": \"contains technical information without inappropriate content\", \"score\": 0}\n```\n\nThe provided text discusses technical aspects related to software and hardware integration in the context of automotive engineering. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for public use, educational environments, and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1222-0", "llm_safety_score": 0, "reason": "technical content with no unsafe elements" }, { "custom_id": "1222-1", "llm_safety_score": 0, "reason": "technical content with no unsafe elements" }, { "custom_id": "1222-2", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1222-3", "llm_safety_score": 0, "reason": "technical documentation" }, { "custom_id": "1222-4", "llm_safety_score": 0, "reason": "contains technical information without inappropriate content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.ni.com/white-paper/3312/en/", "date": "2014-07-23T20:08:15", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883466.67/warc/CC-MAIN-20140722025803-00160-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.913515985, "token_count": 4824, "score": 3.515625, "int_score": 4 }
<urn:uuid:8bf78301-6b8f-4810-9774-8832ce65dc1d>
Evaluating Groups - Fairly

Do you want to assign group projects, but are unsure of how to grade them fairly? Here is some information on how to assess students' work and how to implement collaborative learning in the classroom.
- Tips for Planning a Group Project
- Establishing Group Rules and Norms
- Techniques for Evaluating Group Projects
- Example Forms
- References and Resources

Tips for Planning a Group Project
- Give students clear directions and specific criteria for the project.
- Form groups of three to five students.
- Ask students to establish group expectations and norms.
- Give students real problems to solve.
- Promote group cohesion by informing the students of both group and individual evaluations.
- Allow risks and mistakes.
- Set time frames to check progress.
- Hold individual members responsible for group sub-tasks.

Establishing Group Rules and Norms
Myers (1992) suggests the following classroom rules:
- Each member of a group is responsible for his/her own work and behavior.
- Group members must help any other group members who ask for it.
- No member of the group can ask the teacher for help unless all members of the group have the same question.

Sometimes, it is helpful to ask group members to articulate their expectations for the learning team. See this form to help groups do that. Take 15 - 20 minutes to do the following. First ask EACH individual student to write down answers to the following questions: "When you work with a group, what do you expect OTHERS to do and how do you expect them to act?" and "What do you expect your TEAM will accomplish this semester?" (form for this) Then get the group members together to review the individual and group expectations and outline BOTH the individual expectations for EACH member of the group and the expectations for the team as a group. The group should also consider and discuss any repercussions and/or consequences of NOT living up to team expectations. The last step is for the team members to all sign off on the expectations.

Techniques for Evaluating Group Projects
- Peer / Self Evaluation of Roles - Students rate themselves as well as other group members on specific criteria, such as responsibility, contributing ideas, finishing tasks, etc. This can be done through various grading forms or by having students write a brief essay on the group's and members' strengths and weaknesses.
- Individual Journals - Students keep a journal of events that occur in each group meeting, including who attended, what was discussed, and plans for future meetings. These can be collected and periodically read by the instructor, who comments on progress. The instructor can provide guidance for the group without directing them.
- Minutes of Group Meetings - Similar to journals are minutes for each group meeting, which are periodically read by the instructor. These include who attended, tasks completed, tasks planned, and the contributors to each task. This provides the instructor with a way of monitoring individual contributions to the group.
- Group and Individual Contribution Grades - Instructors can divide the project grade into percentages for individual and group contribution (a short worked example follows this list). This is especially beneficial if peer and self evaluations are used.

Example Forms
- Grading Form - This tool is used for students evaluating other students' papers. This is a criteria-based form that has points assigned for various criteria.
- Grading Form for Technical Papers - This tool can be used for students to evaluate other students' papers that follow a technical format, such as APA. This is a criteria-based form where points are assigned for each criterion.
- Group Grading Form - This tool can be used for students grading all the other members of a group and their contributions. This is a Likert scale for each member.
- Peer Evaluation - This tool can be used as a grading form for students who participated in a collaborative writing project to evaluate other members' contributions to the final product.
- Group Presentation - This is a grading form for group presentations. It is criteria-based, with students earning a number of points for various sections or criteria.
- Other forms: Group/Team Evaluation; Student Peer Evaluation; Peer Assessment Inventory; Group Case Evaluation Sheet; Peer Evaluation; Peer Evaluation of Group Work; Group Process Evaluation; Grading Scenario for group work; Peer Evaluation Presentation form.
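To make the split described under "Group and Individual Contribution Grades" concrete, here is a small illustrative calculation (the 70/30 weighting is only an example, not a recommendation): if the group's product earns 90 points and a student's individual contribution is rated 80 from the peer and self evaluations, that student's project grade would be 0.70 × 90 + 0.30 × 80 = 87.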
References and Resources
More information on this and other subjects is available from the Walker Teaching Resource Center at 401 Hunter Hall. The following list of articles will be beneficial for future study.
- Carter, J. H. (1995). Dealing with parasites in group projects. Paper presented at the Annual Meeting of the Speech Communication Association (81st, San Antonio, TX, Nov. 18-21, 1995). ERIC # ED 392 100.
- O'Quin, K. (1996). Depth by doing: Cooperative research projects in social psychology. In Proceedings of the Annual Conference on Undergraduate Teaching of Psychology (10th, Ellenville, NY, March 20-22, 1996). ERIC # ED 405 031.
- Mason, E. (1972). Collaborative Learning. New York: Agathon Press, Inc.
- Dale, H. (1997). Co-authoring in the classroom: Creating an environment for collaboration. Theory and Research into Practice (TRIP) Series. National Council of Teachers of English, Urbana, Ill. ERIC # ED 402 625.
- Keaten, J. A. & Richardson, M. E. (1992). A field investigation of peer assessment as part of the student group grading process. Paper presented at the Western Speech Communication Association convention (Albuquerque, NM, Feb. 14, 1992). ERIC # ED 361 753.
- Strong, G. (1993). Teaching writing with small groups. In Thought Currents in English Literature, vol. LXVI, Dec. 1993. ERIC # ED 396 29.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"educational content\", \"score\": 0}\n```\n\n**Explanation:**\nThe text provides educational content on how to grade and manage group projects fairly in an academic setting. It does not contain any unsafe or inappropriate language, themes, or references. Therefore, it is fully appropriate for all audiences, including educational environments.", "{\"reason\": \"No unsafe content found\", \"score\": 0}\n```\n\n**Explanation:**\nThe provided text sample is a list of academic references and citations. It does not contain any unsafe, inappropriate, or harmful content. The references are related to educational practices and research, making the text fully appropriate for all audiences, including children and sensitive environments." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1223-0", "llm_safety_score": 0, "reason": "educational content" }, { "custom_id": "1223-1", "llm_safety_score": 0, "reason": "No unsafe content found" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.utc.edu/walker-center-teaching-learning/faculty-development/online-resources/group-projects.php", "date": "2014-07-23T20:07:51", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883466.67/warc/CC-MAIN-20140722025803-00160-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.8935511112, "token_count": 1170, "score": 4.125, "int_score": 4 }
<urn:uuid:fc0d4f8b-6428-4763-8842-4e53527eb94b>
Charting the Increase in America's Diversity
March 15, 2001

Using 2000 Census data, USA Today has compiled an index that tracks the growth in racial and ethnic diversity in 64 U.S. metropolitan areas. The index calculates the probability that two people picked at random will be of a different race and ethnicity, given the percentage of each race in the overall population. The probability is expressed on a 0-to-100 scale. Here are some of the results:
- Because of the increase in immigration, there is an almost one-in-two chance, or 49 on the 100-point scale, that any two people chosen at random would be of a different race and ethnicity.
- That's up from a one-in-three chance (34) in 1980 and a four-in-10 chance (40) in 1990.
- Even in an already diverse state such as Texas, the index went from 55 to 62 during the 1990s, and in a state with little diversity, such as Iowa, it went from 8 to 14.

The index is based on each of the five race categories recognized by the federal government -- white, black, Asian, American Indian and Native Hawaiian -- and the percentages of Hispanics and non-Hispanics, who can be of any race.

Source: Haya El Nasser and Paul Overberg, "Index Charts Growth in Diversity," USA Today, March 15, 2001.
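A common way to compute an index of this kind from group population shares is one minus the sum of the squared shares, scaled to a 0-to-100 range. The sketch below shows that calculation; it illustrates the general approach, and the newspaper's exact formula, which also folds in the separate Hispanic/non-Hispanic split, may differ in its details. The shares in the example are placeholders, not census figures.

```c
#include <stdio.h>

/*
 * Diversity index in the spirit described above: the probability (times 100)
 * that two people drawn at random belong to different groups, given each
 * group's share of the population.
 */
double diversity_index(const double share[], int groups)
{
    double same = 0.0;
    for (int i = 0; i < groups; i++) {
        same += share[i] * share[i];   /* chance that both picks fall in group i */
    }
    return (1.0 - same) * 100.0;
}

int main(void)
{
    double example_shares[] = { 0.60, 0.20, 0.12, 0.05, 0.03 };  /* must sum to 1 */
    printf("index = %.1f\n", diversity_index(example_shares, 5));
    return 0;
}
```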
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text is purely informational and discusses demographic changes based on census data. It does not contain any unsafe, inappropriate, or harmful content." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1224-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.ncpa.org/sub/dpd/index.php?Article_ID=8579", "date": "2014-07-24T21:42:43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997892495.1/warc/CC-MAIN-20140722025812-00096-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9512250423, "token_count": 299, "score": 3.28125, "int_score": 3 }
<urn:uuid:1acf8703-2440-41b8-830f-dddcbfb85df0>
Despite the oxymoronic name, 'dry water' is very real. This bone-dry water-silica compound could provide a way to transport dangerous liquids and gases safely - inside trillions of water-drop sized packages.

'Dry water' is composed of 95% water, with a thin layer of silica coating each droplet, essentially turning it into a dry powder. When it's mixed with certain liquids or gases, they combine with the water - which then traps them in a silica cage. Hence, they become non-reactive, and are easily transported without worrying about accidental detonation and the like.

'Dry water' was first discovered in the late 60s, and was immediately snatched up by cosmetic companies, eager to make use of its unique properties. It resurfaced in 2006, and researchers at the University of Liverpool have been working on new applications for the hydrate. This substance gleefully combines with both liquids and gases - and this feature makes it very useful.

The primary application would be carbon dioxide sequestering. The 'dry water' can absorb three times as much CO2 by mass as its constituent ingredients could. The research also indicates the substance could be used in a number of other ways: for storing and transporting methane (from natural deposits, or as fuel); as a way of speeding up the reaction between hydrogen gas and maleic acid to produce succinic acid, which is used to make drugs, food ingredients, and consumer products; or to aid in transporting emulsions.

What about getting the stored materials out again, once they've been sequestered? Dr. Ben Carter, a researcher on the project, says it's quite straightforward to separate:

A dry liquid (either pure water or a solution of something dissolved in water) can be separated back to liquid + silica by either of two methods. You can centrifuge it at high speed to force the two apart, or you can add an alcoholic solvent like methanol or ethanol. This reduces the water surface tension as the alcohol penetrates the water droplets, causing the dry liquid to fall apart. If you've stored a gas in DW as a gas hydrate, all you have to do to release it is warm up the material to melt the hydrate (hydrates normally form at 0 degrees C under pressure, and can be stored at -20 degrees C without the need to be kept under further pressure).

The dry water itself is easy enough to manufacture. The hydrophobic silica and water are blended together at 19,000 rpm for 90 seconds, which coats the water droplets completely.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text discusses the properties and potential applications of 'dry water' (a water-silica composite) in a scientific and educational manner. There are no unsafe elements, inappropriate language, or references to illegal activities. The content is suitable for educational environments and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1225-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://io9.com/5618980/dry-water---yes-that-exists---provides-energy-for-futuristic-batteries?tag=oxymorons", "date": "2014-07-28T05:36:40", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510256737.1/warc/CC-MAIN-20140728011736-00284-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.950391233, "token_count": 550, "score": 3.546875, "int_score": 4 }
<urn:uuid:69280ea6-da5f-46e2-9f78-5943eb661f40>
With a three-year grant from NASA, a Cornell chef, nutritionist, food and biological engineer and vegetarian cooking teacher are collaborating to develop and test tasty, nutritious and economical recipes that astronauts can prepare from a limited set of 15 to 30 crops grown in future space habitats. Wheat and potatoes are the staples to be complemented with rice, soy and peanuts, salad crops and fresh herbs, all to be grown hydroponically in artificially lit, temperature-controlled space farms.

"Our goal is to develop a database of food-processing information and a menu of at least 100 primarily vegetarian recipes of familiar and new menu items based on crops raised in a bioregenerative life support system," said Jean Hunter, associate professor of agricultural and biological engineering at Cornell who is heading up the project.

The team also is developing a food-related decision-making strategy for NASA to use in bioregenerative life support systems for multiyear missions such as a lunar scientific colony or Martian surface exploration. Bioregenerative life support, in which plants and microorganisms regenerate air, water and food for the crew, is envisioned for long-term space exploration, starting 15 to 20 years from now.

"Because the cost of transporting food for these missions will be astronomical, only about 15 percent of calories will be from Earth-made foods," added David Levitsky, professor of nutritional sciences and of psychology at Cornell who also is working on the project. "Food plays a critical role in the overall psychological well-being of isolated crews."

For more information, contact Susan Lang, (607) 255-3613.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe provided text discusses a research project focused on developing nutritious and economical recipes for astronauts. It does not contain any unsafe, inappropriate, or harmful content. The text is entirely suitable for public use, educational environments, and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1226-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.designnews.com/author.asp?section_id=1386&doc_id=222857&piddl_msgorder=asc", "date": "2014-07-28T05:20:18", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510256737.1/warc/CC-MAIN-20140728011736-00284-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9321808219, "token_count": 551, "score": 3.3125, "int_score": 3 }
<urn:uuid:5a16a635-b368-4d6c-9377-2712724da1dc>
Last name origins & meanings:
- English: from a pet form of the personal names Giles, Julian, or William (see Gill 1).
- English: topographic name for someone living at the top of a glen or ravine, from northern Middle English gil(l) 'glen' + heved 'head'.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is a description of the origin and meaning of the last name \"Gillett.\" It contains no unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and does not require any score." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1227-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://genealogy.familyeducation.com/surname-origin/gillett", "date": "2014-07-29T12:45:14", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267330.29/warc/CC-MAIN-20140728011747-00332-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.8031052351, "token_count": 84, "score": 2.609375, "int_score": 3 }
<urn:uuid:3eb9840b-c12a-4f88-a976-35c87008e93c>
Mountains to Molehills: Independence
Published: Wednesday, July 4, 2012 at 8:24 a.m.
Last Modified: Wednesday, July 4, 2012 at 8:24 a.m.

On this Independence Day, your Mountains to Molehills columnist encourages folks to think about the meaning of the words of the United States' national anthem — and not just the first stanza. OK, go ahead and admit it. Who didn't know there are four stanzas to the patriotic song that is sung with great pride at many events year-round and probably at least hummed by folks everywhere today?

Let's pause, as we celebrate our independence today, to consider Francis Scott Key's message in the poem he penned in 1814 after witnessing the bombardment of Fort McHenry by British Royal Navy ships in Chesapeake Bay during the Battle of Fort McHenry in the War of 1812. The Star-Spangled Banner became the national anthem in 1931. Here's the entire national anthem:

The Star-Spangled Banner

O say can you see by the dawn's early light,
What so proudly we hailed at the twilight's last gleaming,
Whose broad stripes and bright stars through the perilous fight,
O'er the ramparts we watched, were so gallantly streaming?
And the rockets' red glare, the bombs bursting in air,
Gave proof through the night that our flag was still there;
O say does that star-spangled banner yet wave,
O'er the land of the free and the home of the brave?

On the shore dimly seen through the mists of the deep,
Where the foe's haughty host in dread silence reposes,
What is that which the breeze, o'er the towering steep,
As it fitfully blows, half conceals, half discloses?
Now it catches the gleam of the morning's first beam,
In full glory reflected now shines in the stream:
'Tis the star-spangled banner, O! long may it wave
O'er the land of the free and the home of the brave.

And where is that band who so vauntingly swore
That the havoc of war and the battle's confusion,
A home and a country, should leave us no more?
Their blood has washed out their foul footsteps' pollution.
No refuge could save the hireling and slave
From the terror of flight, or the gloom of the grave:
And the star-spangled banner in triumph doth wave,
O'er the land of the free and the home of the brave.

O thus be it ever, when freemen shall stand
Between their loved home and the war's desolation.
Blest with vict'ry and peace, may the Heav'n rescued land
Praise the Power that hath made and preserved us a nation!
Then conquer we must, when our cause it is just,
And this be our motto: "In God is our trust."
And the star-spangled banner in triumph shall wave
O'er the land of the free and the home of the brave!

LAST BUT NOT LEAST: There is no shortage of July Fourth celebrations in these hills, beginning with the Independence Day parade this morning at 11 down Main Street in Hendersonville. The Heritage Museum Ice Cream Social with free ice cream, plus a watermelon-eating contest, will follow the parade. From 7 to 9 tonight, the Music on Main Street concert series will entertain at the Visitors Information Center. Henderson County's fireworks show will start at dusk, with viewing from the south side of Hendersonville. A large red balloon will be launched this morning, and if you can see the balloon, you will be able to see the fireworks.

◆ In Columbus, the 50th Fabulous Fourth of July celebration will last from 10 a.m. to 10 p.m. with plenty of food, crafts, music, family games, a farmers' market and fireworks.

Reach Lou Parris at 828-694-7836 or [email protected].
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text is a patriotic article about the U.S. national anthem and July 4th celebrations. It contains no unsafe, inappropriate, or harmful content." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1228-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.blueridgenow.com/article/20120704/NEWS/120709920&tc=ix", "date": "2014-07-29T12:49:08", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267330.29/warc/CC-MAIN-20140728011747-00332-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.8997988701, "token_count": 913, "score": 2.875, "int_score": 3 }
<urn:uuid:a9246f27-b4c7-4cd9-9371-ef2de865081f>
Anemia is a condition in which the body does not have enough healthy red blood cells. Red blood cells provide oxygen to body tissues. Normally, red blood cells last for about 120 days in the body. In hemolytic anemia, red blood cells in the blood are destroyed earlier than normal. Hemolytic anemia occurs when the bone marrow is unable to replace the red blood cells that are being destroyed. Immune hemolytic anemia occurs when the immune system mistakenly sees your own red blood cells as foreign substances. Antibodies then develop against the red blood cells. These antibodies attack the red blood cells and cause them to break down too early. Red blood cells may be destroyed due to: Other causes are: You may not have symptoms if the anemia is mild. If the problem develops slowly, the first symptoms may be: If the anemia gets worse, symptoms may include: A test called a complete blood count (CBC) can help diagnose anemia and offer some hints to the type and cause of the problem. Important parts of the CBC include red blood cell count (RBC), hemoglobin, and hematocrit (HCT). These tests can identify the type of hemolytic anemia: Treatment depends on the type and cause of the hemolytic anemia: In rare cases, surgery is needed to take out the spleen. This is because the spleen acts as a filter that removes abnormal cells from the blood. Outcome depends on the type and cause of hemolytic anemia. Severe anemia can make heart disease, lung disease, or cerebrovascular disease worse. Call your health care provider if you develop symptoms of hemolytic anemia.

Anemia - hemolytic

Jager U, Lechner K. Autoimmune hemolytic anemia. In: Hoffman R, Benz EJ Jr, Silberstein LE, et al., eds. Hematology: Basic Principles and Practice. 6th ed. Philadelphia, Pa: Elsevier Saunders; 2012:chap 44.

Price EA, Schrier SS. Extrinsic nonimmune hemolytic anemias. In: Hoffman R, Benz EJ Jr, Silberstein LE, et al., eds. Hematology: Basic Principles and Practice. 6th ed. Philadelphia, Pa: Elsevier Saunders; 2012:chap 45.

Updated by: Todd Gersten, MD, Hematology/Oncology, Florida Cancer Specialists & Research Institute, Wellington, FL. Review provided by VeriMed Healthcare Network. Also reviewed by David Zieve, MD, MHA, Isla Ogilvie, PhD, and the A.D.A.M. Editorial team.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"educational content\", \"score\": 0}\n```\n\nThe text provided is purely educational and informative, discussing medical conditions and treatments in a factual and non-offensive manner. It does not contain any unsafe content, inappropriate language, or themes that would make it unsuitable for public use or educational environments." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1229-0", "llm_safety_score": 0, "reason": "educational content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.nlm.nih.gov/medlineplus/ency/article/000571.htm", "date": "2014-07-29T12:50:58", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267330.29/warc/CC-MAIN-20140728011747-00332-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.8616493344, "token_count": 675, "score": 3.828125, "int_score": 4 }
<urn:uuid:96f90c2c-f516-4efa-9864-7324f4d4115b>
Guide to Self-Studying Japanese

A large proportion of Japanese learners self-study. Finding places to learn Japanese in a classroom environment can be difficult and expensive. Here's a guide on how you can learn Japanese for free and from the comfort of your sofa.

When learning Japanese, the most important step is to learn Hiragana and Katakana, the writing alphabets of Japanese. The best way I've found to do that is to make flashcards. Make sure you practice writing as well as recognizing them; this will not only be a great skill to have but will also reinforce the shapes in your mind.
[Hiragana 42], the best guide I've found to learn the Hiragana (in a day!)
[Hiragana and Katakana Quiz Site]
[Kana Invaders Game]
[Anki] An amazing program that will make sure you never forget any Kana….

The next step is to start learning vocabulary. Where can you find what to learn? Use a site like Memrise to find word lists (for example, there is a word list for all the vocabulary in starter textbooks like Genki), and use the amazing interface to learn them and keep them in your long term memory.
[Memrise] as mentioned above to find and learn vocabulary lists.
[Most Common Words List]
[Anki] An amazing program that will make sure you never forget any Vocabulary….

While encountering vocabulary, you're likely to be coming across super-complicated-looking Kanji. You can learn Kanji through Memrise as above, but there are some other websites that may be of interest.
[Kanji Damage] A great site where you can learn Kanji through mnemonics.
[WaniKani] by the same people who make TextFugu can help you learn Kanji from scratch.
[Anki] An amazing program that will make sure you never forget any Kanji….

The next step is to apply that new vocabulary to grammar points and start making sentences. If you can't get your hands on textbooks like Genki, don't fear! There are a lot of great online grammar resources.
[TextFugu] a highly rated 'online textbook' which will guide you right from the beginning of learning Japanese.
[Guide to Japanese] another online textbook with a lot of grammar points and excellent explanations.

So you probably started to learn Japanese because you have some interest in Japanese media. Time to start using it to your advantage! Aside from the obvious watching Anime, J-dramas and films, why not try Reading Japanese News? Watching Japanese TV? Just make sure you are making these activities productive - note down new vocabulary, add them to Anki, and keep learning! It's much easier to learn things you're interested in.

The most important but difficult part of self-studying Japanese is getting your own compositions checked. Utilize all that grammar and vocabulary and write a short piece; it could be a diary entry or a short essay. Get it recorded for you by a native on RhinoSpike, and checked for grammar and consistency on Lang-8. These sites also give you the chance to connect with Japanese natives, and perhaps start up some language exchanges!

For more resources, take a look at my Ultimate Resources List
Any more tips? Please submit them here!
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"educational content with no unsafe elements\", \"score\": 0}\n```\n\nThe text provided is an educational guide for self-studying Japanese. It contains no unsafe or inappropriate content, uses respectful language, and does not include any themes that could be considered harmful or offensive. Therefore, it is fully appropriate for all audiences and educational environments." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1230-0", "llm_safety_score": 0, "reason": "educational content with no unsafe elements" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://alyssbaskerville.tumblr.com/", "date": "2014-07-30T21:14:52", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510271654.40/warc/CC-MAIN-20140728011751-00380-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.930298388, "token_count": 722, "score": 2.96875, "int_score": 3 }
<urn:uuid:589b95b5-431f-4265-96c9-69b3c16c1b0d>
Eco Complex, Bordentown, NJ

There are five basic steps in the process of developing a successful wetland construction plan. This course describes and develops all of these:

PLANNING YOUR PROJECT: Before developing a wetland construction plan, you must understand the purpose of the wetland and the ultimate goals of the project. Your overall wetland plan must account for location, size, physiognomy, basic hydrologic regime, and current/future land use. The proposed primary function(s) must be specific, measurable, and achievable. The wetland is built by constructing structural elements that support the primary function(s).

SELECTING YOUR CONSTRUCTION MODEL: You must select appropriate models for construction. The hydrogeomorphic model analyzes suitable water supply and landscape position, while the wetland construction model covers basic methods of wetland construction suited to the nature of the water supply.

PREPARING YOUR WATER BUDGET & HYDROGRAPH: Water inputs and outputs must be used to prepare water budgets and hydrographs. Successful hydrograph preparation relies on data collection and interpretation. Using the results of the modeling facilitates the preparation of the hydrograph.

DETERMINING PROPER VEGETATION: Vegetation adaptation, selection, and planting are keys to any successful wetland construction plan. You must verify the adaptive modes of plants suitable for your hydrologic regime, and accurately match these plants with your project. You must locate plants which represent native species assemblages and select proper planting methods.

EVALUATING YOUR SOILS: Sub-grade soil percolation must be suitable for the hydrogeomorphic model you develop. Substrate soils must prove suitable to support the selected vegetation in both texture and nutrient composition.

A visit to a constructed wetland complex in central New Jersey will highlight the "real world" application of classroom instruction.
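As a rough illustration of the water-budget step: suppose that in a given month a site receives 3.0 inches of precipitation and 1.0 inch of surface inflow, and loses 2.5 inches to evapotranspiration and 0.5 inch to seepage and outflow; the storage change for that month is 3.0 + 1.0 - 2.5 - 0.5 = +1.0 inch of water depth. Repeating this balance month by month, with site-specific data rather than these made-up numbers, produces the running storage record from which a hydrograph is drawn.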
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe provided text is a detailed explanation of the steps involved in developing a successful wetland construction plan. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and educational environments." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1231-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://ser.org/events/events-detail-view/wetland-construction-principles-and-techniques", "date": "2014-07-30T21:14:58", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510271654.40/warc/CC-MAIN-20140728011751-00380-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.8783126473, "token_count": 388, "score": 3.078125, "int_score": 3 }
<urn:uuid:33bfb921-f44c-49ad-b060-caf54bc63525>
(BlackDoctor.org) — As spring settles in with rain clouds and sunshine everywhere, temperatures are getting warmer every day, but those warm temperatures also increase migraines brought on by the change in weather. New research suggests that certain weather conditions may trigger migraines and other severe headaches. But frequent sufferers may be surprised by some of the findings.

The study reveals that:
- Regardless of the time of year, an increase in temperature was the biggest weather-related headache trigger. Researchers reported that every 9 degree Fahrenheit increase in temperature raised the headache risk by 7.5%.
- Low barometric air pressure is considered by some to be specific to migraines, but the study found no link between migraines and low-pressure systems. The researchers say lower pressure was associated with a small increase in risk for non-migraine headaches.
- Air pollution was not strongly associated with an increased risk for migraine or non-migraine headaches. But the automobile exhaust pollutant nitrogen dioxide did show a borderline effect on non-migraine headaches.

Weather, Pollution, and Migraines

The study is one of the largest ever to examine the impact of weather and air pollution on headaches. But study lead author Kenneth J. Mukamal, MD, of Boston's Beth Israel Deaconess Medical Center and Harvard School of Public Health, states that an even bigger study would be needed to understand the impact of air pollution on headaches.

"We are not saying that air pollution is not a headache trigger," he says. "What we can say with some confidence is that the effect is not enormous."

Mukamal and colleagues compared the medical records of 7,054 headache patients treated at a Boston hospital's emergency department over a seven-year period to official records of pollution levels and weather conditions in the days before treatment. Specific weather conditions including temperature, barometric pressure, and humidity were also examined at other key time periods.

Although rising temperature was identified as the biggest weather-related headache trigger, the researchers concluded that the impact may not be clinically meaningful.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text discusses medical research related to weather conditions and migraines. It does not contain any unsafe, inappropriate, or harmful content. The language is respectful and the themes are related to health and medical research, making it fully appropriate for all audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1233-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://blackdoctor.org/3291/warmer-temps-bring-migraines/", "date": "2014-08-01T05:46:52", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274581.53/warc/CC-MAIN-20140728011754-00428-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9425485134, "token_count": 439, "score": 2.984375, "int_score": 3 }
<urn:uuid:5b5ce33a-c478-4e50-bb1b-fa8b6fb98d50>
A new book explores why people share some things more than others. Why do some things "go viral" while others fall flat? In his new book Contagious: Why Things Catch On, Jonah Berger, an assistant professor of marketing at the University of Pennsylvania's Wharton School, explores the science of popularity in an age of social media. Q: Great title. So why DO things catch on? A: You might think it's about advertising. You might think it's about having a better product or a higher quality product or you might think it's luck. What the book is all about is how word of mouth and social influence more generally drive people to talk about and share things and drive those things to become popular. Marketers could definitely use these tools. But this book is really a psychology book and a sociology book. It's about understanding why people talk and share. You could think about it as understanding conversations -- the science of what we talk about. Q: How has social media changed this concept? A: Social media definitely quickened the pace of information diffusion. You used to have to share with one person who shared it with someone else. Now with the click of a button, you can share with a thousand -- five thousand, ten thousand -- others. Q: Yet you say that just 7% of word of mouth happens online? How is that? A: We tend to focus a lot on online behavior -- in part because it's easy to see -- but we should remember that lots of social transmission actually happens offline as well. We have breakfast with our family. We see our colleagues at work. We go out to lunch. We have meetings. We're not talking online. We're talking offline face-to-face. While there may be no permanent record of these conversations, they still make up a big part of daily life and have a big impact on our behavior. Q: You talk about certain emotions that may boost sharing. Which ones do that? A: We found that any emotion causes people to share. Positive things get shared a little more than negative but more important is if it's an activating emotion. Think about the difference between anger and sadness. Both of those are negative emotions. But think about what you do when you're angry compared to what you do when you're sad. When you're angry, you want to do something and take action. You want to throw something or yell at someone. But when you're sad, you want to curl up in a ball or watch your favorite movie or eat a bowl of ice cream. Excitement is an activating emotion. Contentment is a de-activating emotion. It's not just whether it makes people feel positive or negative but whether it activates them or not. Q: You mention examples in the book, such as why Susan Boyle's appearance on Britain's Got Talent became one of the most viral videos ever. Can you explain why? A: Susan Boyle is a great example of high-arousal emotion. That video is extremely inspiring. It is not only remarkable that she's such a good singer, it's surprising. It's awe-inspiring. What our work shows is that high-arousal emotions increase sharing and low-arousal emotions decrease sharing. When you're excited, you are fired up and activated. When you're content, you feel relaxed but don't want to do very much. High-arousal emotion drives sharing. Q: In your book, you discuss the psychology of discounts. That's another idea that appeals to most of us. Why is it important that humans get a good deal? A: If something is on a discount and it seems like a big discount, we assume that it's a lucky situation. 
Whereas, if something is regular price, even if that regular price is low, we don't necessarily see it as an opportunity. In the book, we talk about the rule of 100. Imagine a $20 T-shirt and imagine you could have a $5 discount or a 25% discount. They're equivalent in terms of how much money off that is. Yet consumers see that and they would say that the 25% off seems like a better deal. If you offered 25% off of that T-shirt, more people would take you up on it. But it's not that percentage discounts are always better. If you take a $2,000 laptop and you have a 25% off deal or equivalently $500 off, for that discount people would say that the money — the $500 off — seems like a better deal. The same discount seems bigger when it's represented as a percentage if the price is less than $100, but as dollars when it's above $100. Q: You identify six elements you say drive things to become contagious. One of them is Public — when things are more observable, it makes them easier to imitate, which also makes them more likely to be popular. Talk about Gangnam Style, the Korean pop song and distinctive dance that went viral last year. A: One thing that really helped make that video popular was that there was a public dance to go along with it. Thinking about public signals of private behavior, how does someone know if you like a particular song? It's hard to tell unless they see you listening to it. But what this dance did is it made it more public that people liked that particular music, and it was a public signal people adopted even if they weren't listening to the song. Doing that move was a way to communicate they're in the know, but along the way, they're sharing information about the song. They're acting as an advertisement for the song.
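The "rule of 100" described above is easy to capture in a short sketch. This is only an illustration of the heuristic as Berger states it; the function name and structure are our own, not taken from the book.

```python
# Minimal sketch of the "rule of 100" discount-framing heuristic.
# The function name and threshold handling are illustrative, not from the book.

def preferred_discount_framing(price: float, discount: float) -> str:
    """Suggest whether a discount reads better as a percentage or in dollars.

    Below a $100 price, the percentage is the larger number and feels bigger;
    above $100, the absolute dollar amount is the larger number.
    """
    percent_off = 100 * discount / price
    if price < 100:
        return f"{percent_off:.0f}% off"   # $20 T-shirt, $5 off -> "25% off"
    return f"${discount:,.0f} off"         # $2,000 laptop, $500 off -> "$500 off"

print(preferred_discount_framing(20, 5))       # 25% off
print(preferred_discount_framing(2000, 500))   # $500 off
```

Either framing describes the same price cut; the sketch only picks the presentation that, per the interview above, consumers perceive as the bigger deal.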
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"educational and informative content about social influence and sharing\", \"score\": 0}\n```\n\nThe text sample discusses the science of popularity and word of mouth in an educational and informative manner, focusing on the factors that make things \"go viral.\" It does not contain any unsafe or inappropriate content, making it fully suitable for public use, educational environments, or general audiences.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text discusses a popular cultural phenomenon, Gangnam Style, and the concept of public signals and behavior. It does not contain any unsafe, inappropriate, or harmful content." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1234-0", "llm_safety_score": 0, "reason": "educational and informative content about social influence and sharing" }, { "custom_id": "1234-1", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.usatoday.com/story/news/nation/2013/03/03/contagious-berger-book-viral/1957985/", "date": "2014-08-01T06:35:56", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274581.53/warc/CC-MAIN-20140728011754-00428-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.978505671, "token_count": 1158, "score": 2.828125, "int_score": 3 }
<urn:uuid:6ab22ac2-66a5-4bf5-830f-301d2a4525ca>
It seems appropriate that October is Dental Hygiene Month since this month does have one of the biggest candy selling holidays in it. However, there are many more things that cause cavities than just candy. It seems that how often you eat can be just as important as what you eat. That is because food can affect your teeth and your mouth long after you swallow it. Eating cookies with dinner will do less harm to your teeth than eating them as a separate snack. Of course, overall poor nutrition can contribute to periodontal (gum) disease. It also can have other long-term effects on your mouth. To prevent tooth decay, you should follow a few additional guidelines. This can help to keep the amount of acid created by the bacteria on your teeth to a minimum. Here are some tips: 1. Limit between-meal snacking. Fewer snacks mean less acid exposure for your teeth. If you snack, choose foods that are not fermentable carbohydrates. Fermentable carbohydrates break down in your mouth. Other foods don't break down until they move further down the digestive tract. It's the fermentable carbohydrates that work with bacteria to form acids that begin the decay process and eventually destroy teeth. They include the obvious sugary foods, such as cookies, cakes, soft drinks and candy. But they also include less obvious foods, such as bread, crackers, bananas and breakfast cereal. - Best choices – Cheese, chicken or other meats, or nuts. These foods actually may help protect tooth enamel. They do this by neutralizing acids or by providing the calcium and phosphorus needed to put minerals back in the teeth. - Moderate choices – Firm fruits such as apples and pears, and vegetables. Firm fruits contain natural sugars. However, their high water content dilutes the effects of the sugars. These fruits also stimulate the flow of saliva, which fights bacteria and helps protect against decay. Vegetables do not contain enough carbohydrates to be dangerous. - Worst choices – Candy, cookies, cakes, crackers, bread, muffins, potato chips, French fries, pretzels, bananas, raisins, and other dried fruits. These foods provide a source of sugar that certain bacteria can use to produce acid. The problem can be worse if the foods stick to teeth or get caught between them. 2. Limit the amount of soft drinks or any other drinks that contain sugar. These include coffee or tea with added sugar, cocoa and lemonade. Fruit juices contain natural sugars that also can cause decay. Limit the amount of time you take to drink any of these drinks. Avoid sipping them throughout the day. A can of soda that you finish with a meal exposes your teeth to acids for a shorter time than a soda that takes you two hours to drink. - Better choices – Unsweetened tea and water, especially fluoridated water. Tea also has fluoride, which can strengthen tooth enamel. Water helps flush away bits of food. It also can dilute the sugar acids. 3. Avoid sucking on hard candies or mints, even the tiny ones. They have enough sugar to increase the acid produced by bacteria to decay levels. If you need a mint, use the sugarless varieties. 4. Limit very acidic foods because they can make the mouth more acidic. This may contribute to a loss of minerals in the teeth. The effects of acid exposure add up over time. Every little bit counts. 5. Brush your teeth after eating and after drinking sugary drinks, to remove the plaque bacteria that create the destructive acids. If you cannot brush after every meal, brush at least twice a day. 6.
Chew sugarless gum that contains xylitol. This can help reduce the risk of cavities. The gum helps dislodge some of the food stuck to your teeth. It also increases saliva flow to help neutralize the acids. Article adapted from: Simple Steps to Better Dental Health. Santa Fe Skillet Casserole 1 pound ground round 3/4 cup chopped onion 3/4 cup chopped green bell pepper 1 1/2 cups uncooked instant rice 1 1/2 cups low-salt beef broth 1/4 teaspoon salt 1/4 teaspoon black pepper 1 (14.5 ounce) can Mexican-style stewed tomatoes, undrained 3/4 cup (3 ounces) reduced-fat shredded sharp Cheddar cheese - Combine first 3 ingredients in a large nonstick skillet; cook over medium-high heat until beef is browned and vegetables are tender, stirring to crumble beef. (Do not drain.) - Add rice and next 4 ingredients. Cover, reduce heat, and simmer 5 minutes or until rice is tender and liquid is absorbed. Sprinkle with cheese; serve immediately. Yield: 6 servings. Per Serving: Calories 346, Fat 15.8g, Protein 21.1g, Carbohydrates 28.5g, Fiber 2.1g, Cholesterol 62mg, Sodium 488mg, Exchanges: 1 1/2 Starch, 1 Vegetable, 3 Medium-Fat Meat. Pamela Redwine is a nutrition and food safety agent with the MSU Extension Service. Provided by the Oktibbeha County Extension Service. Call (662) 323-5916.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"appropriate dental hygiene advice\", \"score\": 0}\n```\n\nThe text provides educational content about dental hygiene and includes tips on how to prevent tooth decay. The language is respectful and the content is suitable for all audiences, including children and educational environments.", "{\"reason\": \"No unsafe content\", \"score\": 0}\n```\n\nThe text provided is a nutritional information panel and a brief bio of a nutrition and food safety agent. It contains no unsafe or inappropriate content, and is fully appropriate for all audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1235-0", "llm_safety_score": 0, "reason": "appropriate dental hygiene advice" }, { "custom_id": "1235-1", "llm_safety_score": 0, "reason": "No unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.starkvilledailynews.com/node/7607?quicktabs_4=2", "date": "2014-07-11T11:19:25", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776426734.39/warc/CC-MAIN-20140707234026-00024-ip-10-180-212-248.ec2.internal.warc.gz", "language": "en", "language_score": 0.9346100092, "token_count": 1120, "score": 3.34375, "int_score": 3 }
<urn:uuid:4d3e161d-d52a-42f0-a649-5105e3a941de>
The purpose of the starter and choke in a tube light circuit is to start up the tube light. The choke works as a ballast and induces 850V to 1100V at starting, through the make and break of the circuit by the starter. After the tube light begins glowing, the choke works as an inductance and provides 110V to the circuit. The starter is for breaking and closing the circuit with the help of a small relay element, and the choke is for producing a voltage from 230V up to around 1000V to energise the electrons (which are in the argon gas and mercury); the closing and opening of the starter helps the voltage build up. According to the above process, electron collisions occur, so avalanche breakdown occurs and a greater number of electrons are created. So the current flowing inside the tube keeps increasing and the lighting process takes place. If the current kept rising continuously, the tube light would blow, so in order to avoid that we should control the rising current. That can also be done by the choke; without the choke the tube light would be damaged. If you can't believe me, you can check that. The starter is used to provide a high starting current just to ignite or excite the fluoro gas inside the tube (to make and break); then the choke is used to maintain the required current needed by the gas to maintain its fluorescence. This is because the supply is an AC signal (max-min-max-min): the choke will store energy during the rise of current from the source, then when the current is swinging towards its minimum, the choke releases its energy, thereby giving current to the lamp, until the next cycle of rising current from the AC source, when it will again store energy in the form of a magnetic field. (Inductors are used as current regulators: they oppose changes in the current in the circuit.) Try to use a DC source and you don't need the inductor anymore; that is why ballasts are now being replaced by electronic ballasts, which are actually rectifier circuits that output a stable DC current going to the fluorescent lamp. The purpose of this is to eliminate the reactive power drawn by the inductor ballast, which saves extra kVA capacity in the electrical system. A capacitor connected in parallel with the fluorescent lamp is called a starter. To strike a fluorescent lamp, a sudden discharge is required in the tube; a starter provides this initial spark. When a tube light arrangement is connected to the supply, the starter charges and a starter current flows during this session. On reaching the striking voltage, the starter discharges and a sudden current flows in the tube, ionising the gas and giving it the glow. When the tube light is switched on, current flows in a closed circuit between the choke, both electrodes of the tube and the starter. While the circuit is closed, self-inductance is produced in the choke and about 1000 volts is induced across it; at the same instant the starter contacts become open and the induced voltage appears at the tube. Due to this high voltage, current starts flowing through the gas of the tube and the tube becomes brilliant/glowing. After that the starter remains open circuit until the next operation of the tube light, and the choke works as a voltage-dropping device, leaving about 110 V AC for the tube. The choke has two main functions. It (in conjunction with the starter if it has one) causes the tube to ignite by using the back emf to create a plasma in the tube, and it controls the current through the tube when it is ignited. In a gas discharge, such as a fluorescent lamp, current causes resistance to decrease. This is because as more electrons and ions flow through a particular area, they bump into more atoms, which frees up electrons, creating more charged particles.
In this way, current will climb on its own in a gas discharge, as long as there is adequate voltage (and household AC current has a lot of voltage). If the current in a fluorescent light isn't controlled, it can blow out the various electrical components. A fluorescent lamp's ballast works to control this. The simplest sort of ballast, generally referred to as a magnetic ballast, works something like an inductor. A basic inductor consists of a coil of wire in a circuit, which may be wound around a piece of metal. If you've read How Electromagnets Work, you know that when you send electrical current through a wire, it generates a magnetic field. Positioning the wire in concentric loops amplifies this field. This sort of field affects not only objects around the loop, but also the loop itself. Increasing the current in the loop increases the magnetic field, which applies a voltage opposite the flow of current in the wire. In short, a coiled length of wire in a circuit (an inductor) opposes change in the current flowing through it. The transformer elements in a magnetic ballast use this principle to regulate the current in a fluorescent lamp. A ballast can only slow down changes in current -- it can't stop them. But the alternating current powering a fluorescent light is constantly reversing itself, so the ballast only has to inhibit increasing current in a particular direction for a short amount of time. Magnetic ballasts modulate electrical current at a relatively low cycle rate, which can cause a noticeable flicker. Magnetic ballasts may also vibrate at a low frequency. This is the source of the audible humming sound people associate with fluorescent lamps. Modern ballast designs use advanced electronics to more precisely regulate the current flowing through the electrical circuit. Since they use a higher cycle rate, you don't generally notice a flicker or humming noise coming from an electronic ballast. Different lamps require specialized ballasts designed to maintain the specific voltage and current levels needed for varying tube designs. Fluorescent lamps come in all shapes and sizes, but they all work on the same basic principle: An electric current stimulates mercury atoms, which causes them to release ultraviolet photons. These photons in turn stimulate a phosphor, which emits visible light photons. At the most basic level, that's all there is to it! I have a 3-phase 600 volt primary feed and a 120/208 secondary isolating transformer. We are using this system for safety against shock. What is the proper grounding? Does the common on the secondary get an earth ground? I am confused about a generator. I have a generator of 380 kVA, which is 304 kW by that formula; then by the formula I = kW / (V x 1.732 x 0.8), the amperes are 528.673. Is that the load which I put on one phase of the generator, or is this the total 3-phase load on the generator?
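As a rough worked example of the current-limiting role described above, the sketch below sizes a series choke for a mains-frequency tube. The supply, lamp power and lamp voltage figures are illustrative assumptions (a 230 V/50 Hz supply and a roughly 40 W tube running at about 110 V), and treating the running lamp as approximately resistive is a simplification, not a statement about any particular fixture.

```python
import math

# Back-of-the-envelope sizing of a series choke (magnetic ballast).
# All input values are illustrative assumptions, not taken from the text.
V_supply = 230.0   # RMS supply voltage (V)
f = 50.0           # mains frequency (Hz)
P_lamp = 40.0      # lamp power (W)
V_lamp = 110.0     # voltage across the running tube (V)

I_lamp = P_lamp / V_lamp                      # ~0.36 A running current
# With the running lamp roughly resistive, the choke voltage is about 90 degrees
# out of phase with the lamp voltage, so the magnitudes add in quadrature.
V_choke = math.sqrt(V_supply**2 - V_lamp**2)  # ~202 V across the choke
X_L = V_choke / I_lamp                        # inductive reactance (ohms)
L = X_L / (2 * math.pi * f)                   # required inductance (henries)

print(f"lamp current ~{I_lamp:.2f} A, reactance ~{X_L:.0f} ohm, inductance ~{L:.1f} H")
```

The answer lands near 1.8 H, which is in the right ballpark for mains-frequency fluorescent chokes; electronic ballasts avoid this bulky inductor by operating at a much higher cycle rate, as noted above.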
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"technical accuracy and clarity\", \"score\": 0}\n```\n\nThe text provided is a detailed explanation of the components and functions of a tube light circuit. It does not contain any unsafe or inappropriate content. The language is clear and technical, and the themes are educational and informative. Therefore, it is fully appropriate for all audiences, including children and sensitive environments.", "{\"reason\": \"technical and electrical content without safety concerns\", \"score\": 0}\n```\n\nThe text discusses technical and electrical concepts related to magnetic fields, ballasts, and generators. There are no unsafe elements, inappropriate language, or references to illegal activities. The content is suitable for educational and technical audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1236-0", "llm_safety_score": 0, "reason": "technical accuracy and clarity" }, { "custom_id": "1236-1", "llm_safety_score": 0, "reason": "technical and electrical content without safety concerns" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.allinterview.com/showanswers/79053.html", "date": "2014-07-22T14:01:24", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997858962.69/warc/CC-MAIN-20140722025738-00216-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.8882952929, "token_count": 1506, "score": 3.53125, "int_score": 4 }
<urn:uuid:b896cfae-5395-4200-a4bf-89cc46af3e25>
Online Campus Safety Training Making Our Campus Safer (Clery Act) The Clery Act is a federal law enacted in 1990. The Act is named for Jeanne Clery, a 19-year-old Lehigh University freshman, who was raped and murdered by a fellow student while asleep in her dorm room in 1986. Her parents later discovered that there had been 38 violent crimes on the Lehigh campus in the three years before Jeanne's murder, none of which the University had disclosed to students. The Clery Act requires schools to make timely warnings to the campus community about crimes that could pose an ongoing threat to students and employees. Recognizing that many crimes are not reported to the police, the Act requires schools to collect data about crimes on and around their campuses from those referred to as Campus Security Authorities (CSAs). In addition to campus police and non-police security staff, any member of a school's faculty or staff with "significant responsibility for student and campus activities" is considered a CSA and must be trained on the Act's reporting requirements. The Act empowers the Department of Education to levy substantial fines on schools that fail to comply. In 2013, the Campus Sexual Violence Elimination (SaVE) Act amended the Clery Act by, among other things, expanding the reporting obligations under the Act to include statistics on reports of domestic violence, dating violence and stalking. Campus Safety Training Course Summary This 25-minute course explains the essential requirements of the Clery Act and the responsibilities of all CSAs. The topics covered in this course include — - Background of the Clery Act - Campus SaVE Act - The Clery Act: Spirit, intent and enforcement - Campus Security Authorities (CSAs) - CSA duties - What crimes must you report? - What crimes are "campus" crimes? - Documenting the facts - Best practices for documentation - Additional information and resources
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"No unsafe content\", \"score\": 0}\n```\n\nThe provided text is entirely focused on explaining the Clery Act and the responsibilities of Campus Security Authorities (CSAs) in ensuring campus safety. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for public use, educational environments, and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1237-0", "llm_safety_score": 0, "reason": "No unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.wecomply.com/ethics-training/1156956-making-our-campus-safer-clery-act-training-courses-classes", "date": "2014-07-22T14:56:02", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997858962.69/warc/CC-MAIN-20140722025738-00216-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9636937976, "token_count": 395, "score": 3.203125, "int_score": 3 }
<urn:uuid:2668cb01-703e-4559-8694-2d7bfeca1beb>
Tiered storage is a strategic networked storage method where data is stored on various types of storage devices based primarily on criteria such as access patterns, frequency of use, security and data recovery requirements. Generally, as a data maintenance strategy, tier 1 data, which is mission critical and generally updated in real time, should be stored on very reliable, high-quality media, possibly with a data mirroring facility such as RAID. Tier 2 data, which includes frequently referenced useful data from areas such as finance, marketing and operations, could use network attached storage. Tier 3 data, which is generally the accumulated historical data of the business operations, could be stored on comparatively cheaper storage media such as compact discs (CD-Rs) or tapes. The tiered storage solution is an important aspect of ILM (Information Lifecycle Management). A data storage environment consisting of two or more kinds of storage delineated by differences in at least one of these four attributes: Price, Performance, Capacity and Function. Any significant difference in one or more of the four defining attributes can be sufficient to justify a separate storage tier. Note: Storage Tiers are NOT delineated by differences in vendor, architecture, or geometry except where those differences result in clear changes to Price, Performance, Capacity and Function. Definition copied with permission from http://recoveryspecialties.com/glossary.html Tiered storage is useful in efficiently aligning data and device characteristics to address the problem that one size of storage does not fit all applications. By placing less mission critical, lower performance data on older or less expensive devices, and mission critical, higher performance data on more expensive, highly available storage, better efficiencies can be realized. To the extent that storage tiers can be managed with the same resource management software, reductions in total cost of ownership (TCO) may be realized. This is not always practical but is often a desired goal of storage managers.
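A tier-placement policy of the kind described above can be sketched in a few lines. The tier labels, thresholds and data attributes below are invented purely for illustration; a real policy would be driven by an organisation's own ILM rules rather than these hard-coded values.

```python
from dataclasses import dataclass

# Illustrative tier-placement policy based on criticality, access frequency
# and archival status. Thresholds and tier labels are made-up examples.

@dataclass
class Dataset:
    name: str
    mission_critical: bool
    accesses_per_day: float
    historical_archive: bool

def assign_tier(d: Dataset) -> str:
    if d.mission_critical and d.accesses_per_day > 100:
        return "tier 1 (mirrored RAID on a high-end array)"
    if d.historical_archive or d.accesses_per_day < 1:
        return "tier 3 (tape or optical archive)"
    return "tier 2 (network attached storage)"

for d in (
    Dataset("order-database", True, 5000, False),
    Dataset("marketing-reports", False, 20, False),
    Dataset("2001-transactions", False, 0.1, True),
):
    print(f"{d.name} -> {assign_tier(d)}")
```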
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe provided text discusses tiered storage in a technical and informative manner. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and educational environments." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1238-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://it.toolbox.com/wiki/index.php/Tiered_storage", "date": "2014-07-24T01:41:42", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883905.99/warc/CC-MAIN-20140722025803-00152-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9250761271, "token_count": 517, "score": 2.65625, "int_score": 3 }
<urn:uuid:5f238396-756e-4c07-9bd8-0f5461430264>
Monitoring growth: the benefits and challenges of integrating the Born in Bradford research project with routine growth monitoring practice. Source: Community Practitioner (Ten Alps Publishing), October 2009, Volume 82, Issue 10, ISSN 1462-2815. Why monitor child growth? The growth rate of a child is perhaps a better indicator of general health than any other single measure. (1) Ill children often grow slowly, so monitoring growth is an important surveillance tool in all children, and more specifically in any child presenting with a suspected health problem. (2) It involves repeated cross-sectional measurement to identify size and rate of change. Growth data are compared against a reference population, and if a child's growth is unfavourable they are referred to an appropriate specialist. (3) In the UK, growth monitoring typically involves the measurement of both weight and height. (4) It is therefore possible to identify any form of growth disorder involving short or tall stature, and any nutritional problem involving under- or overweight. In 2001, Bundred et al reported an increasing prevalence of childhood overweight and obesity in the UK. (5) Growth monitoring may become a useful tool to detect children who are overweight or obese and refer them, along with their parents, to specialist clinics for advice about exercise and diet. Growth monitoring also produces an important source of data for monitoring child health. (2) Between 2004 and 2006, there was an increase in the use of routine growth data to produce public health reports. (6) Researchers with interests in different aspects of child growth and health have utilised routine growth data--for example, Buchan et al reported substantial findings in obesity epidemiology using routine weight and height data. (7) As the benefits are diverse, the NHS invests extensive resources to ensure that child growth is monitored routinely. In Bradford, a collaboration between the Born in Bradford (BiB) research project (8) and Bradford and Airedale Teaching Primary Care Trust (PCT) has worked to improve growth monitoring standards, so that routine growth data are developed to research calibre. Integrating research with practice BiB is a multi-ethnic longitudinal birth cohort study, which aims to recruit all pregnant women booked to deliver at Bradford Royal Infirmary over a period of two years. The project will follow the development of these babies, utilising routine data to investigate different aspects of child health.
Growth monitoring provides a rich source of data for BiB, and changes to growth monitoring practice have been introduced to develop the quality of these data. Prior to any intervention by BiB, the health visitor standards manual for the PCT stated that weight, length and head circumference (see Box 1) should be measured in all children at prescribed age periods of nought to 28 days, six to eight weeks and seven to nine months. The first change that BiB implemented on growth monitoring practice was the introduction of a new measurement. Health visitors were asked to measure abdominal circumference on all infants, not only those enrolled in BiB, at all three prescribed age periods and tape measures were provided. Abdominal circumference is a good indicator of total body fat and fat distribution, (9) and particularly relevant considering increasing prevalence of childhood obesity. Health visitors were also asked to measure all infants between 10 and 14 days of age instead of at nought to 28 days. In the first few days of life, an average infant loses between 3.5% and 6.6% of their weight at birth, and this is not regained until roughly seven days of age. (10) The 10-to 14-day age period was chosen to ensure that infants are only measured when they have regained this weight. Training and feedback All community practice teachers and one health visitor from each health centre in Bradford were invited to attend a measurement training workshop, organised by BiB in collaboration with the Child Growth Foundation. Health visitors who attended these sessions then organised training days at their own health centres. A measurement protocol that provides precise step-by-step instruction on how to measure and record each dimension was produced and disseminated. A growth monitoring standard was written to incorporate the new measurement protocol, and to provide detailed information about growth monitoring. Among other things, the standard included a rationale for each measurement, information on how to record and plot data on growth reference charts, and guidance on when an infant should be referred to a specialist. Growth monitoring data are entered onto an electronic system by the PCT child health department. BiB has worked with this department to set up protocols for data sharing and extraction, so that growth data are more accessible to researchers. The growth data processed at the child health department could be used to provide audit of performance. For example, BiB has produced statistics reporting the percentage of children who are measured during each prescribed age period. This type of information provides the PCT with a way to assess and improve growth monitoring standards. Discussions are taking place to determine what information provides quality assurance for the PCT and good feedback for health visitors. Benefits for practice The changes to growth monitoring practice in Bradford were introduced to improve the ability of growth monitoring to detect health problems, and to develop the quality of growth data that are collected. Health visitors have been given the necessary tools to measure and interpret child growth. We believe that the introduction of the new measurement has been successful, and 80% of children are now measured for abdominal circumference between nought and 28 days of age. BiB has provided training on the importance and interpretation of abdominal circumference, aiming to improve awareness about obesity among health visitors. 
Growth monitoring has been modified to improve the early detection of childhood overweight and obesity. The training of health visitors and production of a new measurement protocol have helped standardise measurement techniques in Bradford. This reduces measurement error and increases the likelihood that data are reliable. (9) In Bradford, the reliability of routine growth data has been assessed, and measurement error is comparable to anthropometric literature that reports acceptable levels of reliability. (11) This information has provided the PCT with assurance that health visitors measure child growth reliably, and BiB that routine growth data are reliable enough to use for research. This work has led to proposals for the PCT to commission a routine reliability assessment in Bradford, which will provide regular quality assurance for the PCT and act as a form of anthropometric training. Routine reliability assessments will reinforce the importance of measurement standards, and also act as a quality assurance mechanism with feedback to practitioners. Research is now part of everyday growth monitoring practice in Bradford. The health visitors are responsible for data collection, while the child health department of the PCT is responsible for data entry, audit and feedback of performance to practitioners. Challenges of integration Health visitors monitor child growth in over 90% of PCTs, (6) and any changes to growth monitoring practice ought to consider the competing demands on health visitors and the additional work created by such changes. Aligning research with routine practice without increasing the workload of health visitors--and so losing their support for the study--presented a potential problem for BiB. The sustainability of high growth monitoring standards relies upon the continued involvement and dedication of health visitors. Without any information about BiB, health visitors may not fully understand the importance of their contribution to the project. Similarly, a lack of feedback about routine data collection does not emphasise the importance of routine growth monitoring. If PCTs regularly produce individual performance-related information that can be fed back to health visitors, such information can provide quality assurance and public health intelligence for commissioners. With the support of Bradford and Airedale Teaching PCT and practitioners across Bradford, a major research programme on child health has been integrated into routine practice, and we believe that growth monitoring in Bradford should be recognised as a national exemplar. The next challenge for BiB is to develop interventions targeting childhood overweight, obesity and other health problems that can be implemented as part of routine practice. (1) Cameron N. Growth patterns in adverse environments. American Journal of Human Biology, 2007; 19(5): 615-21. (2) Hall DMB. Growth monitoring. Archives of Disease in Childhood, 2000; 82(1): 10-5. (3) Garner P, Panpanich R, Logan S. Is routine growth monitoring effective? A systematic review of trials. Archives of Disease in Childhood, 2000; 82(3): 197-201. (4) Hall DMB, Elliman D. Health for all children (fourth edition). London: Oxford University, 2003. (5) Bundred P, Kitchiner D, Buchan I. Prevalence of overweight and obese children between 1989 and 1998: population based series of cross sectional studies. BMJ, 2001; 322(7282): 326-8. (6) Patterson L, Jarvis P, Verma A, Harrison R, Buchan I.
Measuring children and monitoring obesity: surveys of English primary care trusts 2004 to 2006. Journal of Public Health, 2006; 28(4): 330-6. (7) Buchan IE, Bundred PE, Kitchiner DJ, Cole TJ. Body mass index has risen more steeply in tall than in short three-year-olds: serial cross-sectional surveys 1988 to 2003. International Journal of Obesity, 2007; 31(1): 23-9. (8) Born in Bradford Collaborative Group. Born in Bradford: for a healthy future. Available at: www.borninbradford.nhs.uk (accessed 21 July 2009). (9) Cameron N. The measurement of human growth. Guilford and King's Lynn: Croom Helm, 1984. (10) Macdonald PD, Ross SR, Grant L, Young D. Neonatal weight loss in breast and formula fed infants. Archives of Disease in Childhood Fetal and Neonatal Edition, 2003; 88(6): F472-6. (11) Johnson W, Cameron N, Dickson P, Emsley S, Raynor P, Seymour C, Wright J. The reliability of routine anthropometric data collected by health workers: a cross-sectional study. International Journal of Nursing Studies, 2009; 46(3): 310-6. PhD candidate, Loughborough University Professor of human biology, Loughborough University Born in Bradford project manager, Bradford Institute for Health Research Head of children and family services, Bradford and Airedale Community Health Services Director of research and deputy medical director, Bradford Institute for Health Research Box 1: Measuring head circumference Head circumference is a measurement that was taken routinely by health visitors in Bradford prior to BiB--along with weight and length--at nought to 28 days, six to eight weeks and seven to nine months. The measurement of head circumference is used to detect congenital microcephaly or excess growth due to hydrocephaly. According to the BiB measurement protocol: 'Using the index fingers on each hand, position the tape so that it crosses the most anterior part of the head (midway between the eyebrows and the hair line) and the most posterior part of head (occipital prominence).' BiB changed the first period of measurement to 10 to 14 days, and introduced the measurement of abdominal circumference.
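The reliability work cited above (reference 11) is typically summarised with the technical error of measurement (TEM) for paired repeat measurements. The sketch below shows the standard TEM formula with made-up abdominal-circumference values; it is offered as a generic illustration of that statistic, not as the exact calculation used in the Bradford assessment.

```python
import math

# Technical error of measurement (TEM) for paired repeat measurements:
# TEM = sqrt( sum(d_i^2) / (2 * n) ), where d_i is the difference between
# the two measurements of subject i. The circumference values (cm) are made up.

first  = [38.2, 40.1, 36.5, 39.0, 41.3]
second = [38.5, 39.8, 36.9, 39.2, 41.0]

diffs = [a - b for a, b in zip(first, second)]
tem = math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))
mean = sum(first + second) / (len(first) + len(second))
relative_tem = 100 * tem / mean   # often reported as a percentage of the mean

print(f"TEM = {tem:.2f} cm, relative TEM = {relative_tem:.1f}%")
```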
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"No unsafe content identified\", \"score\": 0}\n```\n\nThe text is a scholarly article discussing the benefits and challenges of integrating the Born in Bradford research project with routine growth monitoring practice. It contains no unsafe, inappropriate, or harmful content.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is entirely focused on a detailed description of a growth monitoring program and its implementation in Bradford. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and educational environments.", "{\"reason\": \"No unsafe content found\", \"score\": 0}\n```" ], "llm_harm_scores_by_chunk": [ { "custom_id": "1239-0", "llm_safety_score": 0, "reason": "No unsafe content identified" }, { "custom_id": "1239-1", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1239-2", "llm_safety_score": 0, "reason": "No unsafe content found" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.biomedsearch.com/article/Monitoring-growth-benefits-challenges-integrating/209163267.html", "date": "2014-07-24T00:17:31", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883905.99/warc/CC-MAIN-20140722025803-00152-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9198256135, "token_count": 2557, "score": 2.703125, "int_score": 3 }
<urn:uuid:aa0b9b9c-898d-4716-a612-715f43d07572>
A gray whale emerges from the ocean waters during a Birch Aquarium whale watching cruise. Credit: Birch Aquarium at Scripps Some of the ocean's most graceful behemoths, the gray whales, are making their way on their migratory journey, a 10,000-mile-long round-trip from the Bering Sea off the coast of Alaska to the lagoons of Mexico's Baja California. And they don't get to take any food breaks along the way. The whales' epic trek means that whale watchers can grab their binoculars and head out to catch glimpses of the beasts' huge tail flukes and backsides as the animals breach the water to get a breath of air. Gray whales, a type of baleen whale that moves along the Pacific Coast of North America, leave their chilly homes in the Arctic in late October for the warm lagoons of Baja California, where the giants give birth to their calves. They also leave behind an Arctic that is just beginning to ice over, meaning they can't feed on the tiny crustaceans called amphipods that dwell at the sea bottom there. "The grey whales will spend their summer months, approximately June through October, up in the Arctic in the Bering and the Chukchi seas, and they're basically trying to get as fat as they can," said Staci Shaut, whale-watching coordinator at the Birch Aquarium at the Scripps Institution of Oceanography in La Jolla, Calif. These large creatures, which weigh about 2,000 pounds (900 kilograms) as adults, put on up to 16 inches (40 centimeters) of blubber over the summer, enough fat to keep them going for their long, food-free trip south. Gray whales are en route to Baja, passing along the coast of San Diego from late December through mid-February. Whale-watching excursions are offered in most areas along the Pacific Coast, including San Diego and Monterey Bay, during those times. The Birch Aquarium offers a tour on a 100-foot (30-meter) boat that leaves from Senago Bay, moves through the harbor and out into the Pacific Ocean. From the boat, whale watchers usually see one or two gray whales on these tours, though on a good day that total can get up to eight. The immensity of the whales is impressively apparent. "They're about half the length of our boat," Shaut told OurAmazingPlanet. "But even from a distance, you can see their backs ... as they rise up to take a breath." "When they go down for their deeper dives, you'll see their fluke or tail and that's about 10 to 12 feet [3 to 4 m] across. That's always everybody's favorite moment when that tail rises up out of the water," Shaut said. Saving the whale While whale-watching can be a thrill, it's also important for conservation, Shaut said. "What we try to stress for our tours [is that] these are amazing animals," Shaut said. "The oceans are getting busier and busier every single day with recreational and just transportation of vessels ... and I think that it's really important to bring awareness that these animals are out there." The gray whale has been close to extinction twice during its existence on Earth, Shaut said. Both times occurred when whaling was allowed: in the 1850s and between 1915 and 1920, she said. During these near-extinction times, the gray whale population dropped from about 20,000 to as low as 500. The number is up again since whaling was banned and now hovers around 20,000 to 22,000. Gray whales are important for their ecosystem.
They are a common prey of Orcas (or killer whales), and because of the plumes they create when feeding on the bottom of the ocean, they put nutrients back into the water, which keep bottom-dwelling crustaceans happy.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is entirely educational and informative, discussing the migration of gray whales and their importance in the ecosystem. There are no unsafe elements, inappropriate language, or harmful themes present." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1240-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.livescience.com/30048-whale-watching-southern-california-conservation.html", "date": "2014-07-24T00:16:37", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883905.99/warc/CC-MAIN-20140722025803-00152-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9724274874, "token_count": 824, "score": 3.03125, "int_score": 3 }
<urn:uuid:4f9a6600-e594-4ad1-936c-ea62049852cd>
1. tube - noun · conduit consisting of a long hollow object (usually cylindrical) used to hold and conduct objects or liquids or gases 2. tube - noun · electronic device consisting of a system of electrodes arranged in an evacuated glass or metal envelope 3. tube - noun · a hollow cylindrical shape 4. tube - noun · (anatomy) any hollow cylindrical body structure 5. tube - noun · an electric railway operating below the surface of the ground (usually in a city); "in Paris the subway system is called the `metro' and in London it is called the `tube' or the `underground'" 6. tube - verb · provide with a tube or insert a tube into 7. tube - verb · convey in a tube; "inside Paris, they used to tube mail" 8. tube - verb · ride or float on an inflated tube; "We tubed down the river on a hot summer day" 9. tube - verb · place or enclose in a tube
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"appropriate and safe content\", \"score\": 0}\n```" ], "llm_harm_scores_by_chunk": [ { "custom_id": "1241-0", "llm_safety_score": 0, "reason": "appropriate and safe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.unscramble.net/dictionary-tube.html", "date": "2014-07-24T00:56:22", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997883905.99/warc/CC-MAIN-20140722025803-00152-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.7933826447, "token_count": 319, "score": 3.15625, "int_score": 3 }
<urn:uuid:fb26ba36-bd02-43a0-b4fa-559a50910720>
Phantom Fingers and Robot Ants On October 27, 1995, robots with emotions and computers with "fingers" mesmerized middle-school students from Patricia Robert Harris Education Center in Washington, DC, and the Ormond Stone Middle School in Centerville, VA. Thomas Massie and James McLurkin, two "twenty-something" MIT graduates, brought their inventions to the National Museum of American History as part of the Lemelson Center's "Innovative Lives" series of lecture-demonstrations by American inventors. Stressing that it is never too early to start exploring the world of invention, Massie and McLurkin talked about their careers to date and then invited the students to join them in some hands-on activities. James McLurkin, a graduate research assistant at MIT, kicked off the morning with his microrobot "ants." These tiny machines that fit easily in the palm of a child's hand work together, much like real ants. McLurkin has programmed the "ants" -- which he builds with the help of friends -- to respond to their environment: the microrobots can hunt for food, pass messages to one another, and even play tag. McLurkin explained the robots' potential for performing simple household jobs and assisting with some medical procedures. He then gathered the middle-schoolers around to experiment with the robots themselves. With flashlights aimed at the ants' light sensors, the students learned how to entice the robots around a maze of barriers on an ant "race course." Thomas Massie, McLurkin's fellow MIT graduate, started off by asking the audience, "How many of you want to be rich and famous? And have fun doing it?" With the audience's unanimous affirmative response to those questions, he explained how he had reached just those goals. As a young boy, Massie enjoyed dismantling things-like his mother's vacuum cleaner-to figure out how they worked. Eventually, he learned not only how to put them back together again, but also to create new objects with parts from discarded "junk." By the age of 12, Massie had built a robot arm out of "junk," launching his career in invention. Today, Massie is the president of SensAble Devices in Cambridge, MA, the company he founded in 1993 to market and produce his invention, the "Phantom Haptic Interface." Similar in design to his robot arms, the Phantom gives computer users the ability to "touch" things that they see on the computer screen. A user inserts his or her finger in a kind of sling on the Phantom. Pushing against the sling in response to an image on the screen-for example, a button-sets motors in the Phantom arm into action, offering resistance in the opposite direction. Massie was the first recipient of the annual $30,000 Lemelson-MIT Student Prize (1995) and SensAble Devices is thriving. The students had a chance to try out the Phantom and enter the world of virtual reality, pushing buttons, fingerpainting, and trying their hand at a bit of brain surgery, one of the Phantom's real-world applications. Massie and McLurkin also had the students dismantling radios provided by the Center to identify the parts inside and think about how they could be used to build new things. Hal Walker, the inaugural "Innovative Lives" role model, stated the goal of the series with clarity and eloquence. He told the students, "Putting your hands to work puts your mind to work." The rapt faces of these young people as they guided microbot ants, took apart radios, and touched things that weren't there made it obvious that the students embraced this philosophy. 
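The "resistance in the opposite direction" that the Phantom provides is often explained with a simple virtual-wall model: when the tracked fingertip penetrates a rendered surface, the device commands a restoring force proportional to the penetration depth. The sketch below is that generic penalty (spring) model only; it is not SensAble's actual control code, and the stiffness and wall position are arbitrary example values.

```python
# Generic "virtual wall" haptic-rendering sketch (penalty/spring model).
# Not the Phantom's real firmware; constants are arbitrary example values.

WALL_Y = 0.0        # wall surface at y = 0 (metres); free space is y > 0
STIFFNESS = 800.0   # N/m, a modest stiffness for the example

def wall_force(finger_y: float) -> float:
    """Restoring force pushing the fingertip back out of the virtual wall."""
    penetration = WALL_Y - finger_y
    if penetration <= 0.0:
        return 0.0                  # outside the wall: no force commanded
    return STIFFNESS * penetration  # inside: push back, proportional to depth

# One pass of a simplified servo loop: read position, command force.
for y in (0.010, 0.001, -0.002, -0.005):
    print(f"finger at {y * 1000:+.1f} mm -> command {wall_force(y):.2f} N")
```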
All text and images © Smithsonian Institution. Updated 5 February 1999.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"educational content\", \"score\": 0}\n```\n\n**Explanation:**\nThe text describes a lecture and demonstration by MIT graduates about robotics and invention, which is educational and appropriate for middle-school students. There are no unsafe elements, inappropriate language, or themes that would make it unsuitable for public use or educational environments." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1242-0", "llm_safety_score": 0, "reason": "educational content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://invention.smithsonian.org/centerpieces/ilives/lecture02.html", "date": "2014-07-25T03:37:38", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997892806.35/warc/CC-MAIN-20140722025812-00088-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9654535651, "token_count": 776, "score": 3.40625, "int_score": 3 }
<urn:uuid:1749e3da-dfce-4d83-bc8e-439a7eedeaa1>
Discussion of all aspects of cellular structure, physiology and communication. Hi, I have a doubt regarding nucleus shape. Can anyone verify the statement below, with regard to my questions: "Except during cell division, the nucleus appears amorphous". From the statement and my understanding of amorphous (without a clearly defined shape): 1) Is the nucleus shape unregulated most of the time? 2) Why is the nucleus shape not amorphous (meaning the nucleus shape is fixed) during cell division? I have tried to google "nucleus shape fixed cell division", but I could not get a satisfying explanation. Hope you may help to explain the statement above, and thank you in advance. Most cells have a well-defined nucleus especially when not in cell division. So, in my opinion that statement is false. Or at least poorly defined. Under a light microscope in a native sample the nucleus may "appear" amorphous, but it is not so in reality. 1) The shape of the nucleus is always regulated, as far as I know. During mitosis the nucleus is disintegrated for a period of time, in a controlled manner. 2) During cell division there is a phase (from prometaphase onwards) where one could perhaps describe the nucleus as amorphous, as it has no defined shape. That statement would thus correctly describe the nucleus during cell division. May I ask where that statement is from or who said it? A textbook? A teacher? And in what context? Hmm, from your explanation, can I assume that: 1) Nucleus shape is not fixed, it changes during cell division (mitosis). 2) Under normal conditions (not during cell division), the nucleus shape remains fixed. I got this sentence from a lecture slide. It is very simplified and the lecturer who wrote this has long retired from my college. That is why I have to post a question here. If you are interested, I can quote my lecture slides. It is not long, just a few lines in fact. Thank you for your feedback. Yeah, I think you can make those two assumptions. Although there are exceptions to this rule as well, like there are exceptions to almost everything in biology. For example, in many cells the nucleus also changes in shape when the cell matures. This is well evident in, for example, neutrophilic leukocytes, whose nuclei become lobular and very irregular-shaped during cell maturation (but still follow a defined and controlled pathway). Another good example is the red blood cell, which has a nucleus in immature stages (erythroblast) but completely loses it when it becomes a mature erythrocyte. Thus, the nucleus shape is typically "fixed" only in resting, mature cells. When the cell activates, for meiosis or for other reasons, the shape often changes. But in almost all cases the nucleus shape is well-defined and the changes are strictly controlled. So perhaps a better term would be "dynamic, but defined", which is almost the same as "fixed" if the cell is mature and/or inactive. Mitosis is a little peculiar case, since during it the nuclear membrane briefly disappears and is re-assembled for both of the daughter cells after the new chromatin molecules have been delivered to their correct destinations. And in my opinion, for most cells cell division is the only stage when the nucleus can be said to be "amorphous", as expressed in the original statement. And the above applies only to animal cells (as mentioned in the thread topic); other organisms do things differently, again. Oh, and feel free to quote those slides.
I can try to figure out what that statement is actually trying to tell us. There are some genetic conditions that can affect nuclear membrane shape, in particular the laminopathy that causes Hutchinson-Gilford progeria. See figure 1 in the following paper for some nuclear membrane images: Scaffidi P, Gordon L, Misteli T (2005) The cell nucleus and aging: Tantalizing clues and hopeful promises. PLoS Biol 3(11): e395. http://www.plosbiology.org/article/info ... io.0030395 In most eukaryotic cells the nucleus is a characteristic feature. There is a double membrane around the nucleus and it contains chromatin material. So the nucleus is an important part of eukaryotic cells because it stores genetic information. During cell division, e.g. in mitosis and meiosis, the nucleus plays an important role. The processes of translation and transcription also occur in it. Chromatin is the combination of DNA and proteins that make up the contents of the nucleus of a cell. As the cell prepares to divide, i.e. enters mitosis or meiosis, chromatin packages more tightly to facilitate segregation of the chromosomes during anaphase. That's a good shot at a summary, Andrewpasha. There are a few errors; understand that I am applauding your effort and hope to help you sharpen your next project. First, a major role of the nucleus during cell division is to temporarily get out of the way; the nuclear membrane disappears for a while. Transcription occurs in the nucleus, but then the mRNA is moved out of the nucleus to the cytosol where translation takes place. Chromatin is an important part of the contents of the nucleus, but other structures (for instance spliceosomes) are also present and vital for gene expression. You are tackling an important topic, common to all large organisms; please accept my critical reading and feedback in the friendly spirit in which I offer it, and do continue your writing. Thanks for guiding me. It's very good for me. Now I understand about transcription and the movement of mRNA. I want to share something about the centrifuge. A centrifuge is a machine that is used to rotate an object around a fixed axis, and during this circular motion the acceleration is the product of the radius and the square of the angular velocity. When this acceleration is expressed relative to the gravitational acceleration, it is known as the relative centrifugal force. It can be written as RCF = rω²/g, where ω is the angular velocity, r the rotational radius and g the gravitational acceleration. The centrifugal separation methods are of three types: differential pelleting, rate-zonal centrifugation and isopycnic (equilibrium) centrifugation. In differential pelleting a sample is subjected to centrifugal separation to allow particles to sediment.
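As a quick worked example of the RCF formula above, the snippet below converts a rotor radius and speed in RPM into multiples of g. The radius and speed are arbitrary illustrative values; only the formula itself comes from the post.

```python
import math

# Relative centrifugal force: RCF = r * omega^2 / g.
# The example rotor radius and speed are arbitrary illustrative values.

def rcf(radius_cm: float, rpm: float) -> float:
    omega = 2 * math.pi * rpm / 60.0   # angular velocity (rad/s)
    g = 981.0                          # gravitational acceleration (cm/s^2)
    return radius_cm * omega ** 2 / g  # dimensionless multiples of g

# Equivalent shortcut often printed on rotors: RCF ~ 1.118e-5 * r(cm) * RPM^2
print(f"RCF ~ {rcf(10.0, 5000):.0f} x g")   # about 2,800 x g at 10 cm and 5,000 RPM
```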
Kerala's foods have deep roots in the state's history and culture. Kerala is also called the Land of Spices, apart from God's Own Country, and most foods in Kerala, both vegetarian and non-vegetarian, are prepared in a distinctive style using locally grown spices, coconut milk, coconut oil, curry leaves, tamarind, mustard seed, and chilli, as well as coconut paste, which makes the food delicious and unique in taste. Coconut and banana chips are an indispensable part of Kerala food. Food in Kerala has its own distinctive delicacies and flavours, so rich in taste that they will leave anyone licking their fingers. Kerala cuisine is prepared with fish, chicken, meat, and vegetables. Rice and tapioca are the staple foods of Kerala, and the main food for lunch and dinner is boiled rice. Kerala foods are normally very spicy. Keralites attach great importance to preparing food and to its presentation, and they normally include a lot of coconut in their dishes, which is the secret of the deliciousness of Kerala food; coconut oil and coconut milk in particular are widely used. Seafood such as fish, crab, and shellfish cooked with several spices is also a tasty Kerala dish. Kerala has a large variety of dishes which are favourites with everyone. Kerala Sadya, the typical Kerala Hindu feast served on a banana leaf, is a sumptuous spread of rice and more than 14 vegetable dishes, topped with payasam or pradhamans, the delicious sweet desserts. Seafood is very popular in Kerala and is consumed with every meal. Various fish, including sardines, mackerel, tuna, rays and shark, are eaten, as are crabs, mussels and oysters. Fried fish and fish curry are popular Kerala dishes, and tapioca with fish curry is Kerala's signature food, which is also very tasty.
The More Things Change...

Bill Quinn gives us a love letter to Sorin Oak. Catherine MacDermott tells us how the view from the red doors continues to inspire. As told to Stacia Hernstrom MLA '05.

Professor of Biology and Computer Science

Sorin Oak is healthy, stately, substantial and at least 200 years old. Probably more. There's no way to know for sure without taking a sample of the core and analyzing the ring growth. But no one wants to risk exposing it to harmful fungi or insects. So, we guess. We extrapolate based on trees of similar size. We monitor the soil and measure the growth. We take the best care of it that we can. What we know for sure is that year in and year out, the same things happen. In the spring, there's a burst of growth. In just a week, Sorin Oak sheds its year-old leaves and replaces them with a new flush of full-grown leaves. The tree flowers. The male parts release pollen into the wind, which fertilizes the female parts. In the summer, the tree is relatively dormant. It is able to minimize water loss, thanks to the thick cuticle on the leaves and the shape of its shade-providing crown. If all goes well, those fertilized flowers turn into acorns in the fall. Some sprout into seedlings. Winter comes, along with another dormant period. Then spring arrives again. Year after year, season after season, with student after student sitting under its branches, Sorin Oak completes this cycle. Some years, the tree thrives. Others, it just survives. But its biology is well-suited to its environment; we know that Sorin Oak is a native tree that endured on its own long before we began caring for it. It's hard to imagine, but it started as an acorn and then became a seedling in the same life cycle it now perpetuates. Its trunk grows about a half inch each year. That means even more shade for students of the next century — or two.

Professor of Business Communication

Buy opportunity with your attitude. Know why you're doing your internship. Timing! Punctuality! Be the first to arrive and the last to leave. These are just a few of the tips that students have given other students over the years about how to succeed in their internships. I compile the list every semester and pass it out to new interns. Students consistently say that their internship experiences help them "put everything together" and give them the chance to contribute, practice and learn in a safe environment. For many, internships are also the "foot in the door" that leads to full-time employment. Back in 1992, we were just beginning to have faculty conversations about experiential learning. (And if I recall correctly, we called it "active learning" at the time.) One representative from each school underwent training with outside experts and then returned to campus as "disciples" of experiential learning. Within a year or two, there was a strong movement toward incorporating service learning into the curriculum, and similarly, a core group of faculty members became active leaders among our schools. Lots of great training took place in those early years, which helped many of us not only see the value of experiential education, but also understand the distinctions among volunteerism, service learning and active learning. I have loved St. Edward's ever since I first walked on campus and stood at those famous red doors overlooking the city — which in many ways represents the world for which we work so hard to prepare our students. Our mission underlies and supports all that we do.
Those who come here (students, faculty and staff) are drawn to this common mission, and in that commonality, there is an environment of camaraderie, care, connection and a place to grow in our humanity.
KOUNOTORI4 completed its mission with re-entry!

The KOUNOTORI4 (HTV4, a cargo transporter to the International Space Station) left the ISS on Sept. 5 and re-entered the atmosphere at 3:37 p.m. on Sept. 7 (Japan Standard Time). The KOUNOTORI4 successfully achieved its mission of taking cargo to the ISS, then completed all its tasks over about 36 days. We would like to send our sincere appreciation to all of you who have been supporting the KOUNOTORI project.

- September 7, 2013 (16:50) - Successful re-entry of H-II Transfer Vehicle "KOUNOTORI 4" (HTV4)
- September 3, 2013 (15:15) - H-II Transfer Vehicle "KOUNOTORI 4" (HTV4) departure from the ISS and re-entry to the atmosphere

About H-II Transfer Vehicle "KOUNOTORI" (HTV)

H-II Transfer Vehicle "KOUNOTORI" (HTV): Key Space Transfer Vehicle

The International Space Station (ISS) represents a global partnership of fifteen nations. The ISS is a versatile research institute and a large observation platform in the unique environment of outer space. In this international project, Japan participates with its first manned space facility, the Japanese Experiment Module "KIBO". "KIBO" means "hope" in Japanese.

The KOUNOTORI is an unmanned cargo transporter launched by the H-IIB launch vehicle. It is designed to deliver up to six tons of supplies, including food, clothes, and experiment devices, to the ISS in orbit at an altitude of about 400 kilometers, and to return with spent equipment, used clothing, and other waste material. The KOUNOTORI, together with its waste material, is incinerated when it re-enters the atmosphere. This transport operation involves a rendezvous with and docking to the ISS, a situation requiring a highly reliable transfer vehicle. The system was therefore developed based on the rendezvous technology accumulated through work on the Engineering Test Satellite VII (ORIHIME/HIKOBOSHI), with the application of fuselage design techniques accumulated during the development of the H-II and H-IIA launch vehicles, and manned space technology used for the Japanese Experiment Module "KIBO." The development of the HTV is aimed at the practical use of a low-cost and highly reliable means of transport to the ISS. It is expected that the practical operation of KOUNOTORI will allow Japan to accumulate know-how that can serve as basic technology for its future projects on the Space Flyer Unit and on manned transportation.

Characteristics of H-II Transfer Vehicle "KOUNOTORI" (HTV)

KOUNOTORI is four meters across and about 10 meters long, a size large enough to accommodate a sightseeing bus. It consists primarily of three parts: (1) a propulsion module installed at the rear, composed of main engines for orbit change, Reaction Control System (RCS) thrusters for position control, fuel and oxidizing reagent tanks, and high-pressure air tanks; (2) an avionics module installed in the center part, with electronic equipment for guidance control, power supply, and telecommunications data processing; and (3) a logistics carrier that stores supplies.

Length: Approx. 9.8 m (including thrusters)
Total mass: Approx. 10,500 kg
Cargo capacity (supplies and equipment): Pressurized cargo: max. 5,200 kg; Unpressurized cargo: max. 1,500 kg
Cargo capacity (waste): Approx. 6,000 kg
Target orbit to ISS: Altitude 350 km to 460 km; Inclination 51.6 degrees
Maximum duration of a mission: Rendezvous flight period about five days; Berthed with the ISS about 45 days; On-orbit emergency stand-by about seven days
States Prepare to Rise to CO2 Challenge as Senate Climate Bill Collapses

Climate proposals due to be unveiled before the Senate would strip 23 US states of their power to act on climate change

In Washington, even Barack Obama's fellow Democrats are reluctant to take up proposals in Congress that would put a cap on greenhouse gas emissions — prompting the sole Republican ally to withdraw his support. In Ottawa, Canada's prime minister, Stephen Harper, has adopted an action plan on climate change that would lead to an increase in greenhouse gas emissions over the next decade. By default, that leaves regional governments as the drivers for tougher action on climate change in what is now becoming a familiar role, the White House admits.

"If the states hadn't taken the positions they have in the last four or five years we wouldn't have any programmes in place," Carol Browner, the White House climate adviser, told reporters recently.

The power of regional governments to deal with climate change is coming into sharper focus because of the lack of progress on national and international agreements to deal with climate change – and because it is under threat. The climate proposals due to be unveiled before the Senate would strip state authorities of their power to act on climate change. In a recent conference call with reporters, environmental authorities from a number of states argued their policies had helped set the pace for reform on a national stage, prodding the federal government forward and serving as a test lab for new policies.

Though Washington and Ottawa have yet to pass cap-and-trade legislation, 23 US states and four Canadian provinces have already put a price on carbon. Between them, the carbon cutting regimes will eventually cover half of America's population and about a third of its emissions and about three-quarters of Canada's population and half of its emissions.

"The bottom line here is that the federal government needs to explicitly recognise the value of state programmes," Mary Nichols, who heads California's air resources board, told reporters.

After leading the way on emissions cuts and vehicle exhaust standards, California is now looking at measures to reduce the greenhouse gas emissions from the plastic components used in car interiors. The state has also set high energy efficiency standards for flatscreen TVs. Even some of the states that have not formally signed on to cap and trade are also moving away from fossil fuels. Colorado this month adopted a plan to meet 30% of its energy needs from renewable sources like wind and solar power by 2020. Arizona has put restrictions on wood burning fireplaces.

State authorities say such forward-looking policies simply make economic sense. Nichols said California's climate law, which called for 25% reduction in greenhouse gas emissions by 2020, had led to the creation of 500,000 new green jobs in the state. The same incentives hold true north of the border. Quebec, for example, has been relentlessly talking up its green credentials to help market its zero emissions hydro-electric power to north-eastern states. The provincial premier, Jean Charest, argues that the decentralised nature of authority in Canada and the US established regional governments as natural leaders. "Regional governments everywhere account for 50% to 80% of what will be done to reduce greenhouse gas emissions," he said. "We are the ones that are going to be the operating arm."
Quebec, thanks to its riches in emissions-free hydro, already had a head start in reducing its carbon footprint. Its per capita emissions of carbon dioxide are 11 tonnes – about half of the Canadian average. For the last few years, Quebec has levied a small tax on petrol to help fund public transit and is facing pressure to raise the charge in the next budget. The province raised the bar even further at the Copenhagen summit by setting the most ambitious targets for cutting greenhouse gas emissions in North America, a 20% cut from 1990 levels by 2020. A month later, Quebec signed on to California's stringent car standards raising fuel efficiency and reducing greenhouse gas emissions in exhaust.

Montreal, whose greater metropolitan area is home to about 5 million people, is also playing a leading role. Its motorists have long boycotted big gas guzzlers in favour of smaller, more economical cars, and Montreal is one of a handful of North American cities with an efficient public transit system. The city has a 30km/h speed limit and has banned idling cars, unless the temperature drops far below zero. It will outlaw dumping paper and other recyclables or organic waste in landfill sites from 2013. It pioneered the Bixi bike sharing scheme, which it is now exporting to London, Melbourne, Boston and Minneapolis.

Charest and others say the division of powers in America and Canada lends itself to regional initiative. "It makes a lot of sense for provinces and states to act because they do have most of the jurisdiction to action on climate change. They have exclusive jurisdiction over energy, on transportation, on urban sprawl, on agriculture — basically over everything that emits CO2," Ribaux said.

Are such regional initiatives enough on their own to compensate for the lack of action by the federal government? Quebec's Charest, who has put the green economy at the core of his premiership, won't go quite that far. "Keeping to 2C [rise in global temperatures] through regional arrangements would be pretty tough," he said. But while they will not, on their own, prevent the most dangerous effects of climate change, the three regional cap-and-trade regimes would manage to stabilise US emissions, said Franz Litz who heads the state climate programme at the World Resources Institute. "It is a significant amount of reduction, but it is not enough to get us where we want to go," said Litz. "It is not the answer, but it is the start." He said the regional initiatives suggested states would continue pressing for action on climate change. Such efforts slowed over the last year with states looking to Congress to take the lead on energy reform. "If it becomes clear that [we are] not going to get something in this Congress I think we will see states evaluating their next moves," he said.

And in staking their leadership on climate, regional players could help pull other parts of Canada and the US in a greener direction. "The states are the ones with boots on the ground," said Vicki Arroyo, director of the Georgetown University climate centre.
Summa Theologica, by St. Thomas Aquinas, , at sacred-texts.com We must, in due sequence, consider the subjective parts of prudence. And since we have already spoken of the prudence with which a man rules himself (Q, seqq.), it remains for us to discuss the species of prudence whereby a multitude is governed. Under this head there are four points of inquiry: (1) Whether a species of prudence is regnative? (2) Whether political and (3) domestic economy are species of prudence? (4) Whether military prudence is? Objection 1: It would seem that regnative should not be reckoned a species of prudence. For regnative prudence is directed to the preservation of justice, since according to Ethic. v, 6 the prince is the guardian of justice. Therefore regnative prudence belongs to justice rather than to prudence. Objection 2: Further, according to the Philosopher (Polit. iii, 5) a kingdom [regnum] is one of six species of government. But no species of prudence is ascribed to the other five forms of government, which are "aristocracy," "polity," also called "timocracy" [*Cf. Ethic. viii, 10], "tyranny," "oligarchy" and "democracy." Therefore neither should a regnative species be ascribed to a kingdom. Objection 3: Further, lawgiving belongs not only to kings, but also to certain others placed in authority, and even to the people, according to Isidore (Etym. v). Now the Philosopher (Ethic. vi, 8) reckons a part of prudence to be "legislative." Therefore it is not becoming to substitute regnative prudence in its place. On the contrary, The Philosopher says (Polit. iii, 11) that "prudence is a virtue which is proper to the prince." Therefore a special kind of prudence is regnative. I answer that, As stated above (Q, AA,10), it belongs to prudence to govern and command, so that wherever in human acts we find a special kind of governance and command, there must be a special kind of prudence. Now it is evident that there is a special and perfect kind of governance in one who has to govern not only himself but also the perfect community of a city or kingdom; because a government is the more perfect according as it is more universal, extends to more matters, and attains a higher end. Hence prudence in its special and most perfect sense, belongs to a king who is charged with the government of a city or kingdom: for which reason a species of prudence is reckoned to be regnative. Reply to Objection 1: All matters connected with moral virtue belong to prudence as their guide, wherefore "right reason in accord with prudence" is included in the definition of moral virtue, as stated above (Q, A, ad 1; FS, Q, A, ad 4). For this reason also the execution of justice in so far as it is directed to the common good, which is part of the kingly office, needs the guidance of prudence. Hence these two virtues---prudence and justice---belong most properly to a king, according to Jer. 23:5: "A king shall reign and shall be wise, and shall execute justice and judgment in the earth." Since, however, direction belongs rather to the king, and execution to his subjects, regnative prudence is reckoned a species of prudence which is directive, rather than to justice which is executive. Reply to Objection 2: A kingdom is the best of all governments, as stated in Ethic. viii, 10: wherefore the species of prudence should be denominated rather from a kingdom, yet so as to comprehend under regnative all other rightful forms of government, but not perverse forms which are opposed to virtue, and which, accordingly, do not pertain to prudence. 
Reply to Objection 3: The Philosopher names regnative prudence after the principal act of a king which is to make laws, and although this applies to the other forms of government, this is only in so far as they have a share of kingly government. Objection 1: It would seem that political prudence is not fittingly accounted a part of prudence. For regnative is a part of political prudence, as stated above (A). But a part should not be reckoned a species with the whole. Therefore political prudence should not be reckoned a part of prudence. Objection 2: Further, the species of habits are distinguished by their various objects. Now what the ruler has to command is the same as what the subject has to execute. Therefore political prudence as regards the subjects, should not be reckoned a species of prudence distinct from regnative prudence. Objection 3: Further, each subject is an individual person. Now each individual person can direct himself sufficiently by prudence commonly so called. Therefore there is no need of a special kind of prudence called political. On the contrary, The Philosopher says (Ethic. vi, 8) that "of the prudence which is concerned with the state one kind is a master-prudence and is called legislative; another kind bears the common name political, and deals with individuals." I answer that, A slave is moved by his master, and a subject by his ruler, by command, but otherwise than as irrational and inanimate beings are set in motion by their movers. For irrational and inanimate beings are moved only by others and do not put themselves in motion, since they have no free-will whereby to be masters of their own actions, wherefore the rectitude of their government is not in their power but in the power of their movers. On the other hand, men who are slaves or subjects in any sense, are moved by the commands of others in such a way that they move themselves by their free-will; wherefore some kind of rectitude of government is required in them, so that they may direct themselves in obeying their superiors; and to this belongs that species of prudence which is called political. Reply to Objection 1: As stated above, regnative is the most perfect species of prudence, wherefore the prudence of subjects, which falls short of regnative prudence, retains the common name of political prudence, even as in logic a convertible term which does not denote the essence of a thing retains the name of "proper." Reply to Objection 2: A different aspect of the object diversifies the species of a habit, as stated above (Q, A). Now the same actions are considered by the king, but under a more general aspect, as by his subjects who obey: since many obey one king in various departments. Hence regnative prudence is compared to this political prudence of which we are speaking, as mastercraft to handicraft. Reply to Objection 3: Man directs himself by prudence commonly so called, in relation to his own good, but by political prudence, of which we speak, he directs himself in relation to the common good. Objection 1: It would seem that domestic should not be reckoned a part of prudence. For, according to the Philosopher (Ethic. vi, 5) "prudence is directed to a good life in general": whereas domestic prudence is directed to a particular end, viz. wealth, according to Ethic. i, 1. Therefore a species of prudence is not domestic. Objection 2: Further, as stated above (Q, A) prudence is only in good people. 
But domestic prudence may be also in wicked people, since many sinners are provident in governing their household. Therefore domestic prudence should not be reckoned a species of prudence. Objection 3: Further, just as in a kingdom there is a ruler and subject, so also is there in a household. If therefore domestic like political is a species of prudence, there should be a paternal corresponding to regnative prudence. Now there is no such prudence. Therefore neither should domestic prudence be accounted a species of prudence. On the contrary, The Philosopher states (Ethic. vi, 8) that there are various kinds of prudence in the government of a multitude, "one of which is domestic, another legislative, and another political." I answer that, Different aspects of an object, in respect of universality and particularity, or of totality and partiality, diversify arts and virtues; and in respect of such diversity one act of virtue is principal as compared with another. Now it is evident that a household is a mean between the individual and the city or kingdom, since just as the individual is part of the household, so is the household part of the city or kingdom. And therefore, just as prudence commonly so called which governs the individual, is distinct from political prudence, so must domestic prudence be distinct from both. Reply to Objection 1: Riches are compared to domestic prudence, not as its last end, but as its instrument, as stated in Polit. i, 3. On the other hand, the end of political prudence is "a good life in general" as regards the conduct of the household. In Ethic. i, 1 the Philosopher speaks of riches as the end of political prudence, by way of example and in accordance with the opinion of many. Reply to Objection 2: Some sinners may be provident in certain matters of detail concerning the disposition of their household, but not in regard to "a good life in general" as regards the conduct of the household, for which above all a virtuous life is required. Reply to Objection 3: The father has in his household an authority like that of a king, as stated in Ethic. viii, 10, but he has not the full power of a king, wherefore paternal government is not reckoned a distinct species of prudence, like regnative prudence. Objection 1: It would seem that military prudence should not be reckoned a part of prudence. For prudence is distinct from art, according to Ethic. vi, 3. Now military prudence seems to be the art of warfare, according to the Philosopher (Ethic. iii, 8). Therefore military prudence should not be accounted a species of prudence. Objection 2: Further, just as military business is contained under political affairs, so too are many other matters, such as those of tradesmen, craftsmen, and so forth. But there are no species of prudence corresponding to other affairs in the state. Neither therefore should any be assigned to military business. Objection 3: Further, the soldiers' bravery counts for a great deal in warfare. Therefore military prudence pertains to fortitude rather than to prudence. On the contrary, It is written (Prov. 24:6): "War is managed by due ordering, and there shall be safety where there are many counsels." Now it belongs to prudence to take counsel. Therefore there is great need in warfare for that species of prudence which is called "military." I answer that, Whatever things are done according to art or reason, should be made to conform to those which are in accordance with nature, and are established by the Divine Reason. 
Now nature has a twofold tendency: first, to govern each thing in itself, secondly, to withstand outward assailants and corruptives: and for this reason she has provided animals not only with the concupiscible faculty, whereby they are moved to that which is conducive to their well-being, but also with the irascible power, whereby the animal withstands an assailant. Therefore in those things also which are in accordance with reason, there should be not only "political" prudence, which disposes in a suitable manner such things as belong to the common good, but also a "military" prudence, whereby hostile attacks are repelled. Reply to Objection 1: Military prudence may be an art, in so far as it has certain rules for the right use of certain external things, such as arms and horses, but in so far as it is directed to the common good, it belongs rather to prudence. Reply to Objection 2: Other matters in the state are directed to the profit of individuals, whereas the business of soldiering is directed to the protection of the entire common good. Reply to Objection 3: The execution of military service belongs to fortitude, but the direction, especially in so far as it concerns the commander-in-chief, belongs to prudence.
Archivo General de Simancas

The General Archive of Simancas (also known by its acronym, AGS) is an official archive located in the castle of Simancas, province of Valladolid, Spain. It was founded in 1540, making it the first official archive of the Crown of Castile. The chronological evolution of the institution has been shaped by the history of the Crown of Castile: moments of strength or retreat of the Spanish monarchy were reflected in the arrival of documents or in shortages of resources. A major milestone occurred in 1588, when Philip II of Spain issued the Instruction for the Government of the Archive of Simancas, a key document for understanding the management of this archive and of others in the peninsula. The damage suffered during the Spanish War of Independence also had a major impact on what is now the institution. Currently, the AGS is a cultural institution under the Ministry of Culture of Spain dedicated to the conservation, cataloguing and investigation of the large holdings kept there. These activities have earned the archive a reputation as one of the cornerstones of the Iberian Peninsula in matters of preservation and custody of documents.

The archive was placed in Simancas, a town some 10 km from Valladolid. The place was not chosen at random: a fortified and easily defended site was selected. During the period of the Reconquista, the village of Simancas was significant as a frontier zone, and its strategic location between the kingdoms of León and Castile later gave it a political role. After the conquest of Toledo and its territory in 1085 the town lost its importance, and by the 13th century it was just one of many towns on the outskirts of Valladolid. However, it soon ceased to be part of the jurisdiction of Valladolid, because in 1465 King Henry IV of Castile rewarded the town of Simancas for remaining faithful to him.

Until 1917, modern and contemporary historians dated the Simancas castle to the time of the Reconquest. In that year, however, Francisco Rodríguez Marín published a paper in which he states that the Simancas castle was taken by Admiral Fadrique under Henry IV and was later demolished and rebuilt by his son, Admiral Alonso Enriquez. Thus, the construction date of the current castle can be placed between the years 1467 and 1480. The castle has been extensively renovated over the centuries, and few traces of its original structure remain.

Among the factors that led to the choice of the castle of Simancas was the fact that, once the frontiers of Christian territory had advanced southwards, the castle no longer had a specific function in peacetime. That meant it had other uses, from weapons cache to state prison, a role it played simultaneously with that of archive. There was also the influence of Francisco de los Cobos, Comendador Mayor de León, who as a member of the court of Carlos I exerted all his influence to place the institution in Simancas. The castle was not designed to house an archive, which means there are problems that have concerned archivists throughout its history. The most important of these is the risk of fire: in the past, sharing space with a jail increased the risk that the documents would be reduced to ashes. Moreover, being a fortress, the building was a prime target in armed conflicts, such as the Spanish War of Independence of 1808-1814.
The place was designed to keep documents, not to host research, so the ornamentation was intended not to decorate but to remind visitors that royal power stood behind the institution. For example, one of the doors of the AGS was decorated with the coat of arms of King Philip II. Furthermore, it has been difficult to accommodate researchers, although some improvements have been made in recent years to address this problem.
Location: Children's Nutrition Research Center
Title: Nutrient contribution of the dinner meal consumed by low-income minority preschool children
Authors: O'Neil, Carol; Nicklas, Theresa; Hughes, Sheryl; Liu, Yan
Submitted to: International Journal of Child Health and Nutrition
Publication Type: Peer Reviewed Journal
Publication Acceptance Date: September 5, 2012
Publication Date: September 25, 2012
Citation: O'Neil, C.E., Nicklas, T.A., Hughes, S.O., Liu, Y. 2012. Nutrient contribution of the dinner meal consumed by low-income minority preschool children. International Journal of Child Health and Nutrition. 1:11-22.

Interpretive Summary: Data from the 2009-2010 National Health and Nutrition Examination Survey showed that nearly 27% of US children 2 to 5 years of age were overweight or obese. There was also considerable variation by race/ethnicity, with Hispanic children having a higher prevalence of overweight/obesity than either black or white preschool children. Low-income children attending Head Start, a school readiness program, may be more likely to be overweight or obese than other preschool children. This study showed that 38.78% of Head Start children were overweight or obese. This study also showed that Head Start children had low intakes of nutrients of public health concern (dietary fiber, vitamin D, calcium, and potassium). Nutrition educators should address healthy eating and consumption of recommended levels of high-fiber foods, including whole grains, fruit, and vegetables, and low-fat dairy with parents of preschool children. Additionally, nutrition education could be provided in Head Start centers to children and their parents. Education needs to be targeted specifically to racially/ethnically diverse groups. This may be an important strategy in the prevention and reduction of childhood obesity and a way to increase overall diet quality.

Technical Abstract: The goal of this study was to examine the energy and nutrient intake at the dinner meal of low-income preschool minority groups, African-Americans and Mexican-Americans, attending Head Start. A cross-sectional study of intake at dinner using digital photography was undertaken. Pictorial records were converted to energy and nutrient intakes using Nutrition Data System for Research nutritional software. Total grams of food and beverages, energy, and macro- and micro-nutrients were determined and compared with recommendations. The study used a home assessment of the dinner meal of children enrolled in Head Start in Houston, Texas. The participants were low-income children (n=214), 3 to 5 years of age. This study showed that 38.78% of Head Start children were overweight or obese. This study also showed that Head Start children had low intakes of nutrients of public health concern (dietary fiber, vitamin D, calcium, and potassium). In summary, children participating in Head Start had low intakes of nutrients of public health concern at the dinner meal. The dinner meal is an integral part of the daily intake of preschool children, and this study suggests that micronutrient intakes could be improved. It is important to educate mothers and children as to what constitutes a nutrient-dense meal and to confront barriers to consumption of these meals.
Ptiliidae of the Maritime Provinces of Canada (Coleoptera): new records and bionomic notes

Publisher: Magnolia Press

The Ptiliidae of the Maritime Provinces of Canada is surveyed. Twenty-nine new provincial records from the Maritime Provinces of Canada are reported, including the first records of the family from Prince Edward Island. Fourteen species are recorded for the first time for the Maritime Provinces as a whole. Acrotrichis josephi (Matthews) is recorded for the first time in eastern North America and Acrotrichis haldemani (LeConte) is recorded for the first time in Canada. The genus Pteryx is reported for the first time in Canada. At least 29 species of Ptiliidae are now known to occur in the region. The fauna is briefly discussed in terms of its overall composition, introduced species, and species associated with particular habitats. Saproxylic species found in mature forests are discussed, and attention is drawn to their possible relationship to undisturbed forest conditions and the scarcity of old-growth habitats in the region.

Subjects: Medicine and Health Sciences; Maritime Provinces. ISSN: 1175-5326
Delta Blues is the oldest and purest form of blues music. It is often claimed that 'the Delta' was the birthplace of the blues, and the first musicians who were recorded playing the blues guitar came from this area. Geographically, the Delta is the fertile area between the Yazoo and Mississippi Rivers, and it also extends to the land across the Mississippi near Helena, Arkansas. Culturally, the Delta was an area of large cotton plantations worked by black slaves and later, sharecroppers. Much of the Delta area was cleared after the American Civil War, when large levees were built on either side of the Mississippi River. Life in the construction camps was tough, with men working in gangs, frequently fighting among themselves, and spending their hard-earned cash on women, gambling and itinerant musicians. By the year 1900, gangs were also building railroads through the Delta area. The Delta starts in Memphis, but its heart is in Clarksdale, Mississippi. Memphis bluesman Gus Cannon claimed to have first heard music played in a blues style in Clarksdale around the turn of the twentieth century. In 1905, in Clarksdale, bandleader W.C. Handy heard a man playing a guitar and singing along with the low, mournful sound made by sliding a knife along the strings. This prompted Handy to start writing blues music, marking the start of its popularity. As Robert Palmer describes the music in his book Deep Blues, "The Mississippi Delta's blues musicians sang with unmatched intensity in a gritty, melodically circumscribed, highly ornamented style that was closer to field hollers than it was to other styles of blues. Guitar and piano accompaniments were percussive and hypnotic, and many Delta guitarists mastered the art of fretting the instrument with a slide or bottleneck that made the instrument 'talk' in strikingly speech-like inflections." Most Delta blues musicians were itinerant loners who occasionally teamed with other musicians to play their music anywhere people with spare change congregated. Often pianists played a two-fisted, eight-to-the-bar style called barrelhouse (from the name of a camp barroom). Three of the most famous barrelhouse pianists in the Delta were Roosevelt Sykes of Helena, Clarksdale native Sunnyland Slim, and Little Brother Montgomery of Kentwood, Louisiana. Good pianists such as these moved to cities like New Orleans, St. Louis, and even Chicago during the 1920s and 1930s. Many of the great Delta blues guitarists, such as Tommy Johnson, Son House, Willie Brown, Robert Johnson, and Howlin' Wolf, learned from guitarist Charley Patton of Dockery's Plantation. Blues music may well have been born in the vicinity of this large cotton plantation near the Sunflower River. The records of many of the Delta's greatest bluesmen playing the blues guitar failed to sell in large quantities, leaving a recorded legacy that is splintered at best. Skip James and Son House in particular were hampered by working with Paramount Records, a label that went bankrupt during the 1930s. These Delta blues guitarists provided great inspiration for the blues rock guitarists who started to emerge in the 1960s, such as Eric Clapton, Jimi Hendrix, Jeff Beck and Jimmy Page. These guitarists openly acknowledged this inspiration, often covering Delta blues classics while adding their own unique style of playing the blues guitar. Here's Eric Clapton's wonderful version of Robert Johnson's classic "Crossroads".
What's the difference between a cross compiler and a compiler?

A cross compiler is a compiler capable of creating executable code for a platform other than the one on which the compiler is run. Cross compiler tools are generally used to generate executables for embedded systems or for multiple platforms. A cross compiler is the tool one must use for a platform on which it is inconvenient or impossible to compile, such as microcontrollers that run with only a minimal amount of memory for their own purpose. It has also become more common to use this kind of tool for paravirtualization, where a system may have one or more platforms in use.

Uses of cross compilers

The fundamental use of a cross compiler is to separate the build environment from the target environment. This is useful in a number of situations:

- Embedded computers, where a device has extremely limited resources. For example, a microwave oven will have an extremely small computer to read its touchpad and door sensor, provide output to a digital display and speaker, and control the machinery for cooking food. This computer will not be powerful enough to run a compiler, a file system, or a development environment. Since debugging and testing may also require more resources than are available on an embedded system, cross-compilation can be more involved and prone to errors than native compilation.

- Compiling for multiple machines. For example, a company may wish to support several different versions of an operating system or to support several different operating systems. By using a cross compiler, a single build environment can be set up to compile for each of these targets.

- Compiling on a server farm. Similar to compiling for multiple machines, a complicated build that involves many compile operations can be executed across any machine that is free, regardless of its brand or current version of an operating system.

- Bootstrapping to a new platform. When developing software for a new platform, or for the emulator of a future platform, one uses a cross compiler to compile necessary tools such as the operating system and a native compiler.

The use of virtual machines (such as Java's JVM) resolves some of the reasons for which cross compilers were developed, since the virtual machine paradigm allows the same compiler output to be used across multiple target systems. Typically the hardware architecture differs (e.g. compiling a program destined for the MIPS architecture on an x86 computer), but cross-compilation is also applicable when only the operating system environment differs, as when compiling a FreeBSD program under Linux, or even just the system library, as when compiling programs with uClibc on a glibc host.
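To ground the definition above, here is a minimal, self-contained sketch: an ordinary C program whose comments show how the same source might be compiled natively versus cross-compiled. The specific toolchain name (arm-linux-gnueabihf-gcc) is only an illustrative assumption on my part, one commonly packaged Linux-targeting ARM cross compiler; it is not mentioned in the original post.

```c
/* hello.c - a minimal program used only to illustrate the build/target split.
 *
 * Native build (the compiler runs on the same kind of machine that will run
 * the program), for example:
 *
 *     gcc -o hello hello.c
 *
 * Cross build (the compiler runs on an x86 development machine but produces
 * an ARM Linux binary), using one commonly packaged toolchain as an example:
 *
 *     arm-linux-gnueabihf-gcc -o hello hello.c
 *
 * The source file is identical in both cases; only the toolchain changes,
 * and with it the instruction set and binary format of the output, which is
 * then copied to the target device to run.
 */
#include <stdio.h>

int main(void)
{
    printf("Compiled on the build machine, executed on the target.\n");
    return 0;
}
```

The point is exactly the one made above: the build environment (for instance, a workstation with plenty of memory and disk) is separated from the target environment (for instance, a small ARM board), which may be unable, or merely inconvenient, to run a compiler itself.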
The 2012 Nobel Peace Prize has been awarded to the European Union. The decision is not as bizarre as one might think. There is a precedent for organisations winning the prize – the International Atomic Energy Agency won in 2005, for instance – and the EU, although it is largely forgotten now, was formed with the aim of making war in Europe not just unthinkable, but materially impossible. Its aims have certainly been achieved. The continent has gone from one which was torn apart by war twice in half a century, having known few years of peace in its entire history, to one which has enjoyed an unprecedented absence of violent struggle – internally, at least. Nonetheless, for all that the award may be appropriate in the big picture, the question of why it was awarded now is less clear. The past couple of years have not been the best in the EU's history, certainly, and the eurocrisis has made a break-up of the union possible for almost the first time since it was founded. Adding to the strangeness is the fact that the Peace Prize is the only one awarded by Norway, rather than Sweden – and Norway isn't even a member of the EU.
This is an introductory lesson on the New England States.

Computers & Internet, Social Studies
Title – Intro to and discovery of the New England states
By – Becca Fresch
Primary Subject – Social Studies
Secondary Subjects – Computers
Grade Level – 5

Lesson Plan for a Fifth Grade Lesson on the New England States

LESSON 4: Introduction and discovery of the New England States
5th grade – 24 students
Instruction Time – 2 classes, 1 hour each

II. RATIONALE AND BACKGROUND
The purpose of this lesson is to give students an opportunity to work together and to introduce them to the New England States. This is the fourth lesson in a unit on the fifty states. This lesson is designed to prepare the students for the lessons that will follow and focus on these states in more depth, as well as to excite them about and engage them in the learning. The students thus far have only worked on the geographic location of the fifty states.

III. Lesson Objectives
1. TLW demonstrate their ability to interact with others in a group to achieve a common goal.
PI: After being placed into small groups, the students will be responsible for researching information on their state. Each group will be responsible for a state travel brochure and a small class presentation.
2. TLW employ the use of the Internet and other research materials.
PI: Using the websites, search engines, and texts provided for the class, students will research and find answers to questions about the states.
3. TLW record facts learned about the states in a travel brochure.
PI: Each group will make a travel brochure in order to try to persuade classmates to visit their state. Each student will be responsible for one page in the brochure as well as a small part in the class presentation.

IV. RESOURCES AND MATERIALS
· Computers and printers in computer lab
· Internet address: www.discovernewengland.com
· Internet address: www.50states.com
· Search engines: Yahoo and Dogpile
· Chalkboard and chalk
· 2 three-hole punches
· 6 three-prong folders
· Crayons and markers
· Construction paper
· Social studies texts

1. The importance and effectiveness of group work.
2. The use of the Internet and other sources for research.
3. The organization of facts and material.

A. Introduction and Motivation
T: Now that we know where all the states are located, we are going to focus in on some of them. The first group that we are going to look at is the New England States. These states are Maine, Vermont, New Hampshire, Connecticut, Rhode Island, and Massachusetts. In order to start this lesson, you will be doing a project.
T: We are going to pretend that the groups you are in were hired by one of the states to improve on the materials that are sent out to tourists. You will also be trying to persuade the rest of the class to visit your state. In order to do this we will be using the computer lab to do research and then create a travel brochure for each state.

B. Lesson Body
1. T: Before we can start, we need to know what we are going to be looking for. The first thing we need to do is make a list of the things that we would like to include in our brochures. What kind of things would we include that might make people want to visit your state?
2. The students and teacher will compile a list on the chalkboard of important information to include. This may include facts about the population, economy, history, landmarks, famous people from the state, capitals, average weather, state symbols, etc.
3.
The teacher will then divide the students into groups of four and assign each group a state. The teacher will then go over the guidelines for the work, · Each student is responsible for recording the information that they find. · Each student will be responsible for one page of the brochure. · The group will combine information and then divide it up between them equally for the brochure. · All recorded information will be turned in with the finished product · Each group member will give a brief presentation on his or her page to the class. 4. While the students are doing their research, the teacher will give any needed help, and when they have found the information the class will return to the room to begin the brochures. After the groups have given their presentations there will be a brief class discussion before they are excused for lunch. The teacher will ask for any insight on the project, whether or not they enjoyed doing it, and if they learned anything they thought was especially interesting. T: Next time we are going to start by looking at Maine, the group that worked on that state is going to be our group of experts. A. Student assessment 1. Assessment Plan a) The student’s ability to use cooperative learning will be assessed informally through observation by the teacher as they are working. Cooperation, help, and socialization skills will all be considered. b) The student’s individual work will be evaluated through the finished brochure as well as the presentation given by each student. Their knowledge on the facts and ability to answer questions afterwards will be used to assess the work as well. c) Their ability to use the Internet and other sources for research will be assessed by the paper of recorded facts that they each turn in with the finished brochure. 2. Reflection on Assessment of Student Performance E-Mail Becca Fresch !
How to Draw Link Welcome to this Here we learn how to draw Link, the hero from the Zelda video games. He is a brave and noble elf warrior rescuing his princess from evil A role model for us all, isn't he? Since he is a rather detailed character this lesson is pretty long. But I split it into many small sections, so it's no big deal for you to take a break in between and return later. Let's get started. to Draw Link - 1: Head and Torso Outline First we draw a very basic sketch of Link. The size of the body parts is all that matters here. Draw everything very lightly in this section, because all of these lines will be erased eventually. 1. Begin by drawing an oval for Links head. We will compare the size of the other body parts to this oval. In total, Links height will be 6 heads, so be sure that 5 more ovals would fit below your first one. 2. Now sketch in Links torso. Its height should be twice as large as the head, but it should only be slightly broader. This is the torso without shoulders. When we later add them together with arms it will look much broader. to Draw Link - 2: Legs & Feet Outline 3. Sketch in the legs in this step, making them 3 heads tall. Let them become narrow at the knees, and broader below again. Draw the feet just as simple blocks for now, without being too much concerned about their actual shape. to Draw Link - 3: Arms & Shield 4. Now it's time to draw in the arms. where the neck meets the torso and draw the shoulders growing out sides. The shoulders point slightly downwards and are not horizontal. The right arm disappears quickly behind Link's large shield. Make the shield so big that it almost reaches down to the knee. Draw the other arm hanging down beside the torso, but let it make a bend at the elbow, so that the lower arm is slightly lifted. Draw a simple circle as a placeholder for the fist to its end. to Draw Link - 4: Sword Outline 5. What would be a hero without a great weapon? Just hiding behind his shield, I guess. But that won't do, so let's give our elf a mighty sword to wield. the sword's narrow grip going through the fist's circle, and attach a triangular pommel to its end. Then add a semi-circular cross-guard to the grip's other end, and draw the long blade reaching out of its Now our basic sketch of Link is complete. We have given him a clear pose and defined his proportions. It's time to move on and draw in some details. How to Draw Link - 5: Face Details This is now the part where we start to draw over our initial sketch. Here we use stronger lines, and erase the old, light lines eventually. Draw the frame for Link's heart-shaped face into the oval, and close it off with his hair at the top. Draw one patch of hair hanging down into the forehead on one side, and another patch of hair hanging down other side of the face. 7. Add lines that close off the hair at the top, and also add two small lines where the hairline is visible on the You can lightly sketch in a cross on Link's face to help you place eyes, nose and mouth correctly. Afterwards, you'll just have to erase Draw the eyes above the middle line and make them rather large. Place the nose a bit to the right. Then draw in the mouth below it. 9. Add a big circle into each eye for the iris, and also draw in the eyebrows above them. 10. Draw a small circle (or rather an oval in my case) for the pupil into each iris. Then add a strand of hair hanging down on each side of the face. How to Draw Link - 6: Ears & Cap 11. Attach the ears as two thin triangles to the sides of the head. 
Put Link's cap on his head afterwards. Let it coil up at the end, as if it is flowing in the wind. 12. Here are the last few details for the head: Draw a line for a fold along the cap's broad end. Sketch a few lines into the patches of hair. Then draw a line through the middle of each ear, so that you get a division from inner and outer ear. The head is complete, and we move on the Link's clothes now. How to Draw Link - 7: Clothing 13. Start with a narrow collar around Link's throat. This is the collar of the inner shirt he is wearing. Then add a broader, spike-shaped collar around it. This one belongs to the green shirt Link is wearing on top of the inner one. 15. Redraw the top of Link's torso. is an additional line coming out from behind the shield, and another one on the other side is marking the side of the chest. Cut off the sleeve on the visible arm a bit below the shoulder. 16. Draw in a belt and add a little fold into the shirt where it touches the belt at the side. Below the belt you draw the bottom of the suit hanging down in many Also give clear lines to the shield in this step. 17. Replace the sketchy lines for links arm with a clear outline, and put a large gauntlet around his lower arm. to Draw Link - 8: Hand and Sword only one of Link's hands is visible it is important that it looks good, because it will get a lot of attention in the drawing. 18. Begin with three lines that define the edge of the fist. 19. Then add a row of four fingers along the bottom. 20. Place the thumb across the first finger, and the fist is complete. 21. Now you can outline the sword hilt in detail and with clear lines. Place a cone into the cross-guard's middle and let two curved shapes grow out of its sides. Now elongate the blade towards the cross-guard, so that it starts at the cone. Make the blade slightly narrower at the beginning and then broader as it leaves the cross-guard. to Draw Link - 9: Legs & Boots Draw in clear lines for the legs now, and take care to let the knees stand out clearly. Cut off the legs just below the knees with curved lines, because the boots are starting here already. 24. Now draw the upper parts of the boots wrapping around the legs. Outline the sides of the boots. Make them slightly curved, and let them become a bit narrower and then broader again at the bottom. Now draw in the feet. One of them is facing towards us, and is shortened by the perspective. Draw its front as an oval, and then connect it with the rest of the boot. Take care to let the heel stand The other foot is looking to the side and is longer for that reason. Make the front like a stretched oval and also take care to accentuate the heel at the back. 27. Add a thin sole at the bottom of the boots. Make a clear separation between heel and frontal part on the boot that's seen from the side. Now the boots are complete, ready to kick some monsters in the butt. to Draw Link - 10: Accessories and Details Now you can erase the last traces of our first sketch. This is what your Link should look like afterwards. Of course there will be minor differences between mine and yours. But let's continue to draw in a few more details. Start out by placing the sword's sheath on his back. There is just a small part sticking out behind his shoulder, and another one can be seen below the shield. 29. The sheath needs something that ties it to Links back, so draw in a girdle that goes diagonally across the torso. Also add a square into the girdle around the waist for the belt buckle. Draw stripes across the shoulder-girdle. 
Then place another square into the belt buckle, and add a small metal rod that keeps it in place. 31. Link has a mini-shield on his sword arm. Tie two stripes around his gauntlet, and then add the shield as a square that is sticking out behind it. 32. Draw stripes on the cross-guard. add a little triangle into the sword's middle and separate the sharp edge of the blade with a line from the rest of the metal. 33. Let's make the boots a bit fancier. a little leather square sticking out at the top, so that the upper edge of the boots doesn't cut into Link's legs while walking. Also draw two stripes on each boot for decoration. 34. Add a metal locket to the sheath's top and a metal chape to its that is left now is Link's shield. It has a rather complicated and detailed pattern on it. You can come up with a simpler design if you like, but here it is the way I saw it on a game screen shot. to Draw Link - 11: The Shield 35. First, draw a frame around the shield. 36. Now let small parts of the iron frame stick into the shield's inner 37. Draw circles into the upper two corners and the bottom. Then split the frame with a straight line there. Let's move to the actual picture on the shield: the top there is a big triangle. Below it is something like a bird with stretched wing. In this step you draw the small "head" of the bird, and the thing, long "body" below it. 38. Erase the line within the small circles in the corners, and then add the wings to the bird. Each wing has four feathers that become gradually smaller from top to 39. Draw these two strange shapes beside the triangle and at the bottom of the shield. Also add the "claws" for the bird. And finally, you draw a small inverted triangle into the big triangle, and place a few lines into these strange shapes beside the triangle and at the shield's bottom. And that's it. Link can rescue the princess now, or go collecting treasures that are hidden in patches of grass. It's just the sort of stuff he's doing.
When you see a person who is visually impaired walking confidently down a busy city sidewalk with his devoted guide dog, you may wonder how the dog knows where to go and what to do. How does a puppy grow up to be a guide dog and take on the enormous responsibility of seeing for his human companion? We asked Leader Dog for the Blind, a non-profit organization that has been training dogs for 71 years, to let us into the life of guide dogs and the people who depend on them. Leader Dogs for the Blind relies on Golden Retrievers, Labradors and German Shepherds most often, and Standard Poodles for people with allergies. These breeds are selected because of their size and temperament, and because of their double coat. "We don't know what part of the country our dogs will end up in," says Rachelle Kniffen, Marketing Specialist at Leader Dogs for the Blind. "They have to be comfortable in any climate." Volunteer host families keep mom and dad dogs in their homes for breeding, and have the wonderful job of birthing the puppies who will grow up to be Leader Dogs. This organization also rescues dogs from shelters for training. As long as they have Golden, Lab or Shepherd in their mix, and have the double coat and temperament, they can make great Leader Dogs. Leader Dog Puppy Training Leader Dogs begin training as soon as they are weaned. Volunteer puppy raisers take puppies home to live in a family environment for a year. These puppies get obedience training like any other dog, but also get to do a whole lot more than your average pup! Wearing a bandana that says Future Leader Dog, the puppy goes to the grocery store, movies, restaurants, church, and sporting events - sometimes even to the school of the puppy raiser's children. The more the puppy experiences as a puppy, the less likely he is to have an adverse reaction to the experience as an adult. Training to be a Leader Dog When the puppy is a year old, he goes to the Leader Dogs for the Blind facility in Michigan for testing and training. Then the real work begins. The dog has to learn to notice obstacles in his owner's path and guide him around them. He has to understand traffic and when it's safe to cross a street. He learns to find the door in a building, and find an empty chair for his owner in a waiting room. Training is done with repetition and praise. It takes a good, obedient dog to learn these skills, but one of the most important skills is disobedience! Leader Dogs Learn to Disobey A guide dog must learn "intelligent disobedience" to keep his owner from harm. It's the last and most difficult thing a guide dog is taught, but one of the most critical. A guide dog will disobey the command "Go right" if going right would cause his owner to fall into a hole. Leader Dogs are trained for all kinds of scenarios that would cause them to have to disobey. But they can't be trained for everything, and sometimes dogs make these decisions on their own. Len and his Leader Dog, Mikey, a large Yellow Lab, live in a senior home in Wisconsin and walk to a coffee shop every morning. One morning, at the first intersection, Mikey stopped as usual. Len listened for traffic, heard none, and gave the command "Forward." Mikey sat down. Len gave the command several more times, and even tugged on his chain. Mikey stepped in front of Len so he could not walk into the street. Mikey then turned around and took Len home. Back in the senior home lobby, Len mentioned to the maintenance man that something was wrong with his dog. He wasn't working. 
The man said, "He shouldn't be working. The fog is so thick out there you can't see three feet in front of your face." Mikey was never "fog trained", as that would be impossible to do. But he determined on his own that it wasn't safe to cross the street.

Training People to Trust their Guide Dog
It's not easy at first for a blind person to put their trust in an animal. Imagine how unnerving it would be to go out in a strange city blindfolded, trusting that a dog you just met will keep you safe. The training at Leader Dogs for the Blind is a 26-day course, in which the trained dog is paired with an untrained person. While the dog knows exactly what to do, the person has to learn how to use and trust the Leader Dog. When Len first got Mikey, during their training, it was Len who wouldn't cross the street. Len could hear traffic, but Mikey pulled to go forward. The trainer told Len that Mikey knows what he's doing. Trust him. Len has since learned how true that is. Len says, "The more you believe in your dog, that he's not going to let you get hurt, the stronger you will be as a team." "There is no way to know how many times Mikey has saved my life," Len says. Mikey, like every Leader Dog, quietly guides his owner away from danger each day. Being blind, the owner often doesn't know what danger he avoided. "I don't have a handicap," Len says. "I have an inconvenience that's cured by the love of a dog."

Leader Dogs and the Cycle of Love
From puppyhood to retirement, Leader Dogs experience a life of love and devotion. Puppy raisers become enormously attached to the pups they raise, and enjoy the pride of seeing them become Leader Dogs. Many describe it as seeing their own children graduate from high school. They get to meet the people who will own the Leader Dog, and they often form a strong attachment. The Leader Dog and owner form a loving bond far stronger than that of most pets and owners. After many years of service, when the Leader Dog is ready to retire, if the owner is unable to care for him, the dog sometimes ends up back where he began, in the loving home of his puppy raiser! How does a dog adjust to living in one home as a puppy, then in a kennel environment during training, then with a whole new owner in a distant city? "Purina Pro Plan Chicken & Rice Formula," Rachelle explains. "No matter what they do or where they go, at dinnertime, they always have Pro Plan in their bowl." Rachelle says the dogs adjust very well, but making sure they get the same Pro Plan diet keeps them from having any digestive upset during their transitions, and gives them a certain level of comfort.

How Purina Supports Leader Dogs for the Blind - and how you can too
Leader Dogs for the Blind is a remarkable organization that provides all of its services to the blind for free. Everything in the 26-day training program - the flight to and from Michigan, room and board, the dog, and all of his supplies - is absolutely free of charge. And once the person is back home with their new Leader Dog, the organization continues to provide support and guidance. Purina and Pet Supplies Plus are donating $25,000 to support Leader Dogs for the Blind. By purchasing your pet food and toys at Pet Supplies Plus, you help support the organization. If you'd like to donate, volunteer as a puppy raiser, or host a breeding dog, go to leaderdog.org.
November is National Diabetes Awareness Month. Type II diabetes affects 25.8 million adults and children in the US. That is a lot of people. And, it does not have to be that way because it is a preventable disease. Diabetes is caused by the body not correctly processing glucose which is a form of sugar in the blood. The most common type of diabetes is type II diabetes. This is when the body does not produce enough insulin and must supplement with insulin treatments and a careful diet. Side effects of diabetes that is not managed correctly include damage to the nerves, eyes, kidneys, heart, and blood vessels. Your health is nothing to play with. My own father had diabetes for several years. And eventually after developing some other health problems that were probably due to the fact that he did not properly manage his diabetes, he passed away. Better’n Eggs wants to bring awareness to how important it is for us to manage and even prevent diabetes. It has been said for quite a while that egg whites are much better for us than egg yolks. Avoiding the extra cholesterol can be a key factor in managing and preventing diabetes. Crystal Farms Better’n Eggs and AllWhites are fat-free, cholesterol-free, and low-calorie alternatives to your typical store bought egg. AllWhites is 100% liquid egg whites, nothing added. Better’n Eggs is 98% liquid egg whites, plus 2% vitamins and minerals. At only half the calories of shell eggs, they each have only 25 calories per serving. And they are also great sources of protein. If you would like to learn more about diabetes join the Mamavation #BetternEggs twitter party with Bob Green November 16, 2011, 8 – 10 pm EST. See the Mamavation site for more information. You can also check out some awesome recipes on the AllWhites and Better’n Eggs website like the Mini Turkey Meatloaves.
US 7932548 B2 Systems and methods are disclosed to form a resistive random access memory (RRAM) by forming a first metal electrode layer; depositing an insulator above the metal electrode layer and etching the insulator to expose one or more metal portions; depositing a Pr1-XCaXMnO3 (PCMO) layer above the insulator and the metal portions, wherein X is between approximately 0.3 and approximately 0.5, to form one or more self-aligned RRAM cells above the first metal electrode; and depositing a second metal electrode layer above the PCMO layer. 1. A resistive random access memory (RRAM) comprising: a. an insulator material deposited and patterned to expose metal openings; and b. a PCMO layer deposited above the patterned insulator material forming one or more self-aligned RRAM cells without any etching of the PCMO layer wherein the PCMO layer is deposited above the insulator material to form an amorphous silicon material, and wherein the PCMO layer is deposited above the metal openings to form polycrystalline PCMO. 2. The RRAM of 3. The RRAM of The present invention relates generally to a self-aligned storage node for a memory device such as a nonvolatile memory device. A memory device such as random access memory (RAM) is a semiconductor device for storing digital information. Data, as digital information, can be written to and read from a RAM. RAMs are fabricated using integrated circuit technology. For example, a DRAM is made of many storage nodes or memory cells and each memory cell has a memory cell transistor and a capacitor. The capacitor is an important element of the memory cell because it stores the digital information. Trench capacitors and stack capacitors are the two major types of DRAM cell capacitors. DRAMs are volatile and thus require power to maintain the data from being lost. Non-volatile alternatives include resistive random access memory (RRAM) devices, each being a cross point type memory array of a planar matrix of spaced memory cells sandwiched between two meshes of conductors running in orthogonal directions above and below the cells. The row conductors running in one direction are referred to as the word lines, and the column conductors extending in a second direction usually perpendicular to the first direction are referred to as the bit lines. The memory cells are usually arranged in a square or rectangular array so that each memory cell unit is connected with one word line and an intersecting bit line. In the RRAM array, the resistance of each memory cell has more than one state, and the data in the memory cell is a function of the resistive state of the cell. The resistive memory cells may include one or more magnetic layers, a fuse or anti-fuse, or any element that stores or generates information by affecting the magnitude of the nominal resistance of the element. Other types of resistive elements used in a resistive RAM array include poly-silicon resistors as part of a read-only memory, and floating gate transistors as part of optical memory, imaging devices or floating gate memory devices. One type of resistive random access memory is a magnetic random access memory (MRAM), in which each memory cell is formed of a plurality of magnetic layers separated by insulating layers. One magnetic layer is called a pinned layer, in which the magnetic orientation is fixed so as not to rotate in the presence of an applied magnetic field in the range of interest. 
Another magnetic layer is referred to as a sense layer, in which the magnetic orientation is variable between a state aligned with the state of the pinned layer and a state in misalignment with the state of the pinned layer. An insulating tunnel barrier layer is sandwiched between the magnetic pinned layer and the magnetic sense layer. This insulating tunnel barrier layer allows quantum mechanical tunneling to occur between the sense layer and the pinned layer. The tunneling is electron spin dependent, making the resistance of the memory cell a function of the relative orientations of the magnetizations of the sense layer and the pinned layer. The variations in the junction resistance for the two states of the sense layer determine the data stored in the memory cell. U.S. Pat. No. 6,169,686, granted to Brug et al. on Jan. 2, 2001, discloses such a magnetic memory cell.

U.S. Pat. No. 6,385,079 discloses a method for designing a resistive random access memory array in which elements are selected with values of resistances that are correlated to maintain a signal-to-noise ratio of 20 decibels or more for the array. A plurality of memory cells are selected and spaced from each other in a matrix of rows and columns, each memory cell being selected to have a junction resistance value of between 0.25 megaohms and 3.60 megaohms. A plurality of conductive row lines are selected and connected between a number N of memory cells in each row, each row and column line being selected to have a row or column unit line resistance value below 0.38 ohms, so that the values of junction resistance are correlated with the values of the row and column unit line resistance to provide a signal-to-noise ratio of 20 decibels or more for the memory array. The values of the row and column unit line resistance are selected so that the total row line resistance for each row is approximately equal to the total column line resistance for each column. The ratio of the junction resistance to the unit line resistance is approximately five million to one, in order to maintain a signal-to-noise ratio of at least 20 decibels in the resistive memory array. For an equal number N of row and column elements, the ratio of junction resistance to total row or column line resistance must therefore be greater than approximately five million divided by N. If N is equal to approximately 1000, the ratio of junction resistance to total row or column line resistance must be approximately 5,000 or greater.

In one aspect, systems and methods are disclosed to form a resistive random access memory (RRAM) by forming a first metal electrode layer; depositing an insulator above the metal electrode layer and etching the insulator to expose one or more metal portions; depositing a Pr1-XCaXMnO3 (PCMO) layer above the insulator and the metal portions, wherein X is between approximately 0.3 and approximately 0.5, to form one or more self-aligned RRAM cells above the first metal electrode; and depositing a second metal electrode layer above the PCMO layer.

In another aspect, a resistive random access memory (RRAM) includes a first metal electrode layer; an insulator material deposited and patterned above the metal electrode layer; a Pr1-XCaXMnO3 (PCMO) layer forming one or more self-aligned RRAM cells above the first metal electrode layer, where X is between approximately 0.3 and approximately 0.5; and a second metal electrode layer deposited above the first metal electrode layer.
In yet another aspect, a resistive random access memory (RRAM) includes an insulator material deposited and patterned; and a PCMO layer deposited above the patterned insulator material forming one or more self-aligned RRAM cells without any etching of the PCMO layer. In another aspect, a resistive random access memory (RRAM) includes an amorphous PCMO or silicon material portion and a polycrystalline PCMO portion adjacent the amorphous material to form one or more self-aligned RRAM cells without any etching thereto. Implementations of the above aspects may include one or more of the following. The PCMO layer is simply deposited over the insulator to form amorphous PCMO. Where the insulator has been patterned to expose the metal, polycrystalline PCMO forms the RRAM without requiring etching of the PCMO layer. This simplifies the manufacturing process significantly and allows smaller geometry to be achieved. The metal layer can be a transition metal such as chromium, platinum, nickel, ruthenium, copper, and others, a silicide such as nickel silicide, and a Mott insulator such as LNO(LaNiO3). As chromium and LNO are “sticky” materials, an adhesion layer is not needed. In other implementations, an insulator layer can be formed between a wafer and the first metal electrode layer. The first metal electrode layer can be formed above a wafer. Via holes connect the first metal electrode layer to circuit on the wafer. An adhesion layer can be formed above the first metal electrode layer. A metal can be formed above the via holes, and the metal is polished using chemical mechanical polishing. The first or second metal electrode layer can be platinum, chromium, iridium, ruthenium, nickel, silicide, nickel, titanium or LNO. A passivation film can be formed above the second electrode metal layer. Each memory cell can be one of: a two terminal memory device, a three terminal memory device. The PCMO layer can be deposited above the insulator material to form an amorphous PCMO. Alternatively, the PCMO layer can be deposited above the metal openings to form polycrystalline PCMO. A plurality of metal layers and vias can connect the RRAM cells to a wafer circuit. The PCMO layers can be stacked to form a 3-dimensional RAM structure. Advantages of the invention may include one or more of the following. The system provides a low-cost high density solid state memory device. The memory units can be interconnected within a wafer, and the wafers are interconnected in a stacked wafer construction of a memory system. The resulting semiconductor device offers high density storage at a very low cost. Moreover, the process can be done at low temperature. The substrate temperature in forming a thin film approximately at the range of 300-400 deg.C. and the process requires a short time. Since the thin film is formed at a very low temperature during substantially the whole process, the process can be applied to a highly integrated device to deposit an additional memory layer with a plurality of elements without damaging other elements previously deposited using conventional deposition. In order that the manner in which the above-recited and other advantages and features of the invention are obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof, which are illustrated, in the appended drawings. 
Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings. Referring now to the drawings in greater detail, there are illustrated therein structure diagrams for a semiconductor processing system and logic flow diagrams for the processes the system will utilize to deposit a memory device, as will be more readily understood from a study of the diagrams.

The resulting RRAM is simple and small in size, with advantages from the non-volatile characteristic of the resistor memory cell and the stability of the memory state. Since a resistor is a passive component and cannot actively influence nearby electrical components, a basic RRAM cell can be just a variable resistor, arranged in a cross point resistor network to form a cross point memory array. To prevent cross talk or parasitic current paths, a RRAM cell can further include a diode, and this combination is sometimes called a 1R1D (or 1D1R) cross point memory cell. To provide better access, a RRAM cell can include an access transistor, as in a DRAM or FRAM cell, and this combination is sometimes called a 1R1T (or 1T1R) cross point memory cell.

The resistance state of a RRAM cell refers to the storing (writing) or sensing (reading) methodology of the RRAM circuit. The term resistance state is related to the resistance value of the memory resistor (the resistance state can then be said to be the resistance of the memory resistor), but sensing the resistance value of the memory resistor often means sensing the voltage across the memory resistor (the resistance state can then be said to be the voltage across the memory resistor), or sensing the current through the memory resistor (the resistance state can then be said to be the current through the memory resistor). The resistance states of the RRAM can be represented by different techniques such as structural state, polarization, or magnetization.

The PCMO layer is simply deposited over the insulator to form amorphous PCMO, which is a typical insulator. Where the insulator has been patterned away to expose the metal layer, the PCMO layer is polycrystalline PCMO, which exhibits RRAM capability. The RRAM is formed without needing any etching of the PCMO, simplifying the manufacturing process significantly and allowing smaller geometry to be achieved. The metal layer can be a transition metal such as chromium, nickel, ruthenium, copper, or nickel silicide, among others. As chromium is a "sticky" material, an adhesion layer is not needed. If the metal layer is platinum, an adhesive layer may be used to allow other materials to adhere to the platinum layer.

The one-step programming process will be discussed first. The selected cell is W2B2, and a first related bit line, B22, will be programmed to the high voltage bit and a second related bit line, B22, will be programmed to the low voltage bit. The word line W2 is floated. All other word lines are biased with half of the programming pulse voltage. Bit line B22 is grounded. The programming pulse, VP, is applied to B22. All other bit lines are biased to the ground potential. As a result, the resistor R22A is at the low resistance-state and the resistor R22B is at the high resistance-state. All memory resistors in bit 2 other than those connected to word line W2 are biased with half of the programming voltage.
Therefore, there is no resistance change. Similarly, each bit line along W2 may be properly biased to program one word at a time. After programming, the two memory cell resistors in any given bit are at the high resistance-state and the low resistance-state, respectively.

The two-step programming is more conventional. Again, cell W2B2 is selected in this example. A first selected memory resistor, R22A, and a second memory resistor, R22B, are programmed to a low resistance-state and to a high resistance-state, respectively. The selected word line, W2, is set to ground potential, and all other word lines are biased to 0.5 VP. A negative programming pulse, having a pulse amplitude of −VP, and a positive programming pulse, having an amplitude of +VP, with appropriate pulse widths, are applied to bit lines B2 and !B2, respectively. The positive and negative programming pulses do not have to be applied simultaneously, and the A resistor and the B resistor may be programmed separately. Memory resistors R22A and R22B are therefore programmed to a low resistance state, RL, and a high resistance state, RH, respectively. All other memory resistors are either pulsed with half of the programming pulse voltage, or not pulsed. Therefore, the resistance of the non-selected memory resistors will not change during this programming operation.

The memory array may be read by applying a read voltage to the word line and detecting (reading) the voltage difference between the bit lines which share the memory cell.

It is to be understood that various terms employed in the description herein are interchangeable. Accordingly, the above description of the invention is illustrative and not limiting. Further modifications will be apparent to one of ordinary skill in the art in light of this disclosure.
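The half-select biasing described above is easy to see numerically. The sketch below is not the patent's circuit: the array size, voltage value, and selected indices are invented purely for illustration. It applies the bias pattern of a single positive programming pulse (selected word line at ground, unselected word lines at half the programming voltage, selected bit line pulsed to VP, unselected bit lines grounded) and prints the voltage that appears across each cell; only the selected cell sees the full VP, while every other cell sees at most VP/2 and therefore keeps its resistance-state.

```c
/*
 * halfselect.c - toy illustration of half-select biasing in a cross-point
 * resistive array. Not taken from the patent; all names and numbers here
 * are made up for illustration only.
 *
 * Bias pattern for one positive programming pulse:
 *   - selected word line:     0 V
 *   - unselected word lines:  VP / 2
 *   - selected bit line:      VP
 *   - unselected bit lines:   0 V
 * A cell switches only if the magnitude of the voltage across it reaches VP,
 * so only the cell at the selected row/column crossing is programmed.
 */
#include <math.h>
#include <stdio.h>

#define ROWS 4   /* word lines */
#define COLS 4   /* bit lines  */

int main(void)
{
    const double vp = 3.0;   /* programming pulse amplitude (arbitrary value) */
    const int sel_row = 2;   /* selected word line index */
    const int sel_col = 1;   /* selected bit line index  */

    for (int r = 0; r < ROWS; ++r) {
        for (int c = 0; c < COLS; ++c) {
            double v_word = (r == sel_row) ? 0.0 : vp / 2.0;
            double v_bit  = (c == sel_col) ? vp  : 0.0;
            double v_cell = v_bit - v_word;   /* voltage across the memory resistor */
            int switches  = fabs(v_cell) >= vp;
            printf("cell[%d][%d]: %+5.2f V %s\n",
                   r, c, v_cell,
                   switches ? "<-- programmed" : "(half-selected or idle)");
        }
    }
    return 0;
}
```

The negative pulse used for the complementary resistor follows the same idea with the polarities reversed.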
From Wikipedia, the free encyclopedia

Rogue waves (also known as freak waves, monster waves, killer waves, extreme waves, and abnormal waves) are relatively large and spontaneous ocean surface waves that occur far out at sea, and are a threat even to large ships and ocean liners. In oceanography, they are more precisely defined as waves whose height is more than twice the significant wave height (Hs or SWH), which is itself defined as the mean of the largest third of waves in a wave record. Therefore rogue waves are not necessarily the biggest waves found at sea; they are, rather, unusually large waves for a given sea state. Rogue waves seem not to have a single distinct cause, but occur where physical factors such as high winds and strong currents cause waves to merge to create a single exceptionally large wave.

Rogue waves, sometimes known as freak waves or "hundred-year waves" (due to the supposed likelihood of their occurrence), are an ocean phenomenon in which winds, currents, non-linear phenomena such as solitons, and other circumstances cause a wave to briefly form that is far larger than the "average" large occurring wave (the significant wave height or 'SWH') of that time and place. Once considered mythical and lacking hard evidence for their existence, rogue waves are now proven to exist and known to be a natural ocean phenomenon. Eyewitness accounts from mariners and damage inflicted on ships had long suggested they occurred; however, their scientific measurement was only positively confirmed following measurements of the "Draupner wave", a rogue wave at the Draupner platform in the North Sea on January 1, 1995. During that event, minor damage was also inflicted on the platform, far above sea level, confirming that the reading was valid. Their existence has also since been confirmed by satellite imagery of the ocean surface.

A rogue wave is distinct from a tsunami. Tsunamis are caused by a massive displacement of water, often resulting from sudden movement of the ocean floor, and the resulting waves propagate at high speed over a wide area. They are more or less unnoticeable in deep water and only become dangerous as they approach the shoreline and the ocean floor becomes shallower; therefore tsunamis do not present a threat to shipping at sea (the only ships lost in the 2004 Asian tsunami were in port). They are also distinct from mega-tsunamis, which are single massive waves caused by landslides within enclosed or limited bodies of water. A rogue wave, on the other hand, is an ocean phenomenon that is not caused by land movement, only lasts briefly, occurs in a limited location, and most often happens far out at sea.

Rogue waves are considered rare but potentially very dangerous, since they can involve the spontaneous formation of massive waves far beyond the usual expectations of ship designers, and can overwhelm the usual capabilities of ocean-going vessels, which are not designed for such encounters.

In oceanography, rogue waves are precisely defined as waves whose height is more than twice the significant wave height (SWH), which is itself defined as the mean of the largest third of waves in a wave record. Therefore rogue waves are not necessarily the biggest waves found at sea; they are, rather, unusually large waves for a given sea state.
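The definition above can be made concrete with a short sketch. The wave record below is invented (it is not data from any real buoy or platform); the code simply averages the largest third of the recorded wave heights to obtain the significant wave height, and then flags any individual wave that exceeds twice that value, which is the threshold used in the definition.

```c
/*
 * rogue.c - illustrating the definition of significant wave height (SWH)
 * and of a rogue wave (height > 2 * SWH). The wave record is invented.
 */
#include <stdio.h>
#include <stdlib.h>

static int cmp_desc(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x < y) - (x > y);          /* sort largest first */
}

int main(void)
{
    /* individual trough-to-crest wave heights, in metres (made-up record) */
    double h[] = { 2.1, 3.4, 2.8, 4.0, 3.1, 2.5, 3.8, 2.9, 3.3, 2.2, 3.6, 11.8 };
    const size_t n = sizeof h / sizeof h[0];

    double sorted[sizeof h / sizeof h[0]];
    for (size_t i = 0; i < n; ++i) sorted[i] = h[i];
    qsort(sorted, n, sizeof sorted[0], cmp_desc);

    /* SWH = mean of the largest third of waves in the record */
    size_t third = n / 3;
    if (third == 0) third = 1;
    double sum = 0.0;
    for (size_t i = 0; i < third; ++i) sum += sorted[i];
    double swh = sum / (double)third;

    printf("significant wave height: %.2f m\n", swh);
    for (size_t i = 0; i < n; ++i)
        if (h[i] > 2.0 * swh)
            printf("wave %zu (%.1f m) exceeds 2 x SWH -> rogue by definition\n", i, h[i]);
    return 0;
}
```

Note that the extreme wave is itself part of the record, so it also raises the computed significant wave height slightly; in practice the statistic is computed over records containing many waves.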
In February 2000, a British oceanographic research vessel, the RRS Discovery, sailing in the Rockall Trough west of Scotland encountered the largest waves ever recorded by scientific instruments in the open ocean, with a SWH of 18.5 metres (61 ft) and individual waves up to 29.1 metres (95 ft). "In 2004 scientists using three weeks of radar images from European Space Agency satellites found ten rogue waves, each 25 metres (82 ft) or higher." Rogue waves have been cited in the media as a likely cause of the sudden, inexplicable disappearance of many ocean-going vessels. One of the very few cases in which evidence exists that may indicate a rogue wave incident is the case of the freighter MS München, lost in 1978. Rogue waves should not be confused with the so-called "hundred-year wave", which is a statistical prediction of the highest wave likely to occur in a hundred-year period in a particular body of water. It is common for mid-ocean storm waves to reach 7 metres (23 ft) in height, and in extreme conditions such waves can reach heights of 15 metres (49 ft). However, for centuries maritime folklore told of the existence of much larger waves — up to 30 metres (98 ft) in height (approximately the height of a 10-story building) — that could appear without warning in mid-ocean, against the prevailing current and wave direction, and often in perfectly clear weather. Such waves were said to consist of an almost vertical wall of water preceded by a trough so deep that it was referred to as a "hole in the sea"; a ship encountering a wave of such magnitude would be unlikely to survive the tremendous pressures exerted by the weight of the breaking water, and would almost certainly be sunk in a matter of seconds or minutes. Some research confirms that observed wave height distribution in general follows well the Rayleigh distribution, but in shallow waters during high energy events, extremely high waves are more rare than this particular model predicts. Rogue waves may also occur in lakes. A phenomenon known as the "Three Sisters" is said to occur in Lake Superior when a series of three large waves forms. The second wave hits the ship's deck before the first wave clears. The third incoming wave adds to the two accumulated backwashes and suddenly overloads the ship deck with tons of water. The phenomenon was implicated in the sinking of the SS Edmund Fitzgerald on Lake Superior in November 1975. In the course of Project MaxWave, researchers from the GKSS Research Centre, using data collected by ESA satellites, identified a large number of radar signatures that have been portrayed as evidence for rogue waves. Further research is under way to develop better methods of translating the radar echoes into sea surface elevation, but at present this technique is not proven. Because the phenomenon of rogue waves is still a matter of active research, it is premature to state clearly what the most common causes are or whether they vary from place to place. The areas of highest predictable risk appear to be where a strong current runs counter to the primary direction of travel of the waves; the area near Cape Agulhas off the southern tip of Africa is one such area; the warm Agulhas current runs to the southwest, while the dominant winds are westerlies. However, since this thesis does not explain the existence of all waves that have been detected, several different mechanisms are likely, with localised variation. 
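Before turning to specific mechanisms, it helps to quantify what the Rayleigh model mentioned above actually predicts. In the commonly used narrow-band approximation, the probability that an individual wave height H exceeds a level h in a sea with significant wave height Hs is

\[ P(H > h) = \exp\left(-2\,\frac{h^{2}}{H_{s}^{2}}\right), \qquad P(H > 2H_{s}) = e^{-8} \approx 3.4\times10^{-4}, \]

that is, roughly one wave in three thousand reaches the rogue threshold under this model. The evaluation for h = 2Hs is given only as a worked illustration of the formula; as noted above, observed statistics can depart from the Rayleigh prediction, for example in shallow water during high-energy events, where extremely high waves are rarer than the model suggests.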
Suggested mechanisms for freak waves include the following:

Diffractive focusing. According to this hypothesis, coast shape or seabed shape directs several small waves to meet in phase. Their crest heights combine to create a freak wave.

Focusing by currents. Waves from one current are driven into an opposing current. This results in a shortening of wavelength, causing shoaling (i.e., an increase in wave height), and oncoming wave trains compress together into a rogue wave. This happens off the South African coast, where the Agulhas current is countered by westerlies.

Nonlinear effects (modulational instability). It seems possible for a rogue wave to occur by natural, nonlinear processes from a random background of smaller waves. In such a case, it is hypothesised, an unusual, unstable wave type may form which 'sucks' energy from other waves, growing to a near-vertical monster itself, before becoming too unstable and collapsing shortly after. One simple model for this is a wave equation known as the nonlinear Schrödinger equation (NLS), in which a normal and perfectly accountable (by the standard linear model) wave begins to 'soak' energy from the waves immediately fore and aft, reducing them to minor ripples compared to other waves (a normalised form of this equation is given after this list). The NLS can be used in deep water conditions. In shallow water, waves are described by the Korteweg–de Vries equation or the Boussinesq equation. These equations also have non-linear contributions and show solitary-wave solutions. A small-scale rogue wave consistent with the nonlinear Schrödinger equation was produced in a laboratory water tank in 2011. In particular, the study of solitons, and especially Peregrine solitons, has supported the idea that non-linear effects could arise in bodies of water.

Normal part of the wave spectrum. Rogue waves are not freaks at all but are part of the normal wave generation process, albeit a rare extremity.

Wind waves. While it is unlikely that wind alone can generate a rogue wave, its effect combined with other mechanisms may provide a fuller explanation of freak wave phenomena. As wind blows over the ocean, energy is transferred to the sea surface. When strong winds from a storm happen to blow in the direction opposing the ocean current, the forces might be strong enough to randomly generate rogue waves. Theories of instability mechanisms for the generation and growth of wind waves (although not of the causes of rogue waves) are provided by Phillips and Miles.

Thermal expansion. When a stable wave group in a warm water column moves into a cold water column, the size of the waves must change because energy must be conserved in the system. Each wave in the group becomes smaller, because cold water holds more wave energy on account of its density. The waves are now spaced further apart and, because of gravity, they will propagate into more waves to fill up the space and become a stable wave group again. If a stable wave group exists in cold water and moves into a warm water column, the waves will get larger and the wavelength will be shorter. The waves will seek equilibrium by attempting to displace the wave amplitude because of gravity. However, by starting with a stable wave group, the wave energy can displace towards the center of the group. If both the front and the back of the wave group are displacing energy towards the center, it can become a rogue wave. This would happen only if the wave group is very large.
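For reference, the nonlinear Schrödinger equation named in the list above is usually written, in one common dimensionless normalisation for the complex wave envelope ψ(x, t), as

\[ i\,\frac{\partial \psi}{\partial t} + \frac{1}{2}\,\frac{\partial^{2}\psi}{\partial x^{2}} + \lvert\psi\rvert^{2}\,\psi = 0. \]

The focusing sign of the cubic term is what allows a nearly uniform wave train to be unstable to slow modulations (the modulational, or Benjamin–Feir, instability), and the Peregrine soliton mentioned above is an exact solution of this equation that grows out of a plane-wave background, reaches a single localised peak, and subsides again. The exact coefficients depend on the physical scaling chosen; the form given here is only the standard normalised one.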
In this case, focusing is primarily due to different waves coming into phase, rather than to any energy-transfer processes. Further analysis of rogue waves using a fully nonlinear model by R.H. Gibbs (2005) brings this mode into question, as it shows that a typical wavegroup focuses in such a way as to produce a significant wall of water, at the cost of a reduced height. A rogue wave, and the deep trough commonly seen before and after it, may last only for some minutes before either breaking or reducing in size again. As well as occurring singly, a rogue wave may be part of a wave packet consisting of a few rogue waves; such rogue wave groups have been observed in nature. Freak waves have been classified into three categories.

A research group at Umeå University, Sweden, showed in August 2006 that normal stochastic wind-driven waves can suddenly give rise to monster waves. The nonlinear evolution of the instabilities was investigated by means of direct simulations of the time-dependent system of nonlinear equations.

The possibility of artificially stimulating rogue wave phenomena has attracted research funding from DARPA, an agency of the United States Department of Defense. Bahram Jalali and other researchers at UCLA studied microstructured optical fibers near the threshold of soliton supercontinuum generation and observed rogue wave phenomena. After modelling the effect, the researchers announced that they had successfully characterised the proper initial conditions for generating rogue waves in any medium. Additional work in optics has pointed to the role played by a nonlinear structure called the Peregrine soliton, which may explain waves that appear and disappear without leaving a trace.

Many reported encounters, however, are known only from media accounts and are not confirmed examples of open-ocean rogue waves. In popular culture, any endangering huge wave is often loosely called a rogue wave, even when it has not been (and most often cannot be) established that the reported event is a rogue wave in the scientific sense — i.e. of a very different nature and characteristics from the surrounding waves in that sea state, and with a very low probability of occurrence according to a Gaussian-process description as valid for linear wave theory. This section lists only a limited selection of notable incidents. Although rogue waves have been cited as credible explanations for unexplained losses, there is to date little clear evidence supporting this hypothesis, nor any case in which the cause has been confirmed, and the claim is contradicted by information held by Lloyd's Register. A press release by the European Space Agency in 2004 claimed that "Severe weather has sunk more than 200 supertankers and container ships exceeding 200 metres in length during the last two decades. Rogue waves are believed to be the major cause in many such cases." However, at the time the claim was made, only 142 ships of that size had been lost at sea in that time frame, all with clear, known causes.
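The purely linear focusing idea, in which many small components briefly come into phase, can be illustrated with a toy calculation. The sketch below is not taken from the studies cited above; it simply compares the surface elevation produced by a set of small cosine components with random phases against the same components aligned in phase at one instant.

```python
import numpy as np

rng = np.random.default_rng(0)
n_components = 50
t = np.linspace(-50.0, 50.0, 4001)           # time axis, arbitrary units
freqs = np.linspace(0.5, 1.5, n_components)  # narrow band of frequencies
amp = 0.2                                    # amplitude of each small component

# Random phases: a typical, irregular sea surface at a fixed point.
random_phases = rng.uniform(0.0, 2.0 * np.pi, n_components)
eta_random = sum(amp * np.cos(2 * np.pi * f * t + p)
                 for f, p in zip(freqs, random_phases))

# Phases chosen so every component crests at t = 0: a transient "freak" peak.
eta_focused = sum(amp * np.cos(2 * np.pi * f * t) for f in freqs)

print(f"max elevation, random phases : {eta_random.max():.2f}")
print(f"max elevation, phase-aligned : {eta_focused.max():.2f}  "
      f"(upper bound {n_components * amp:.2f})")
```

Away from the focusing instant the aligned components quickly drift out of phase again, so the peak is transient, consistent with the observation above that a rogue wave and its trough may last only minutes.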
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is an educational and informative passage about rogue waves from Wikipedia. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and receives a score of 0.", "{\"reason\": \"scientific content\", \"score\": 0}\n```\n\n**Explanation:**\nThe provided text discusses scientific phenomena related to rogue waves, focusing on their formation and the mechanisms behind them. The content is entirely educational and does not contain any unsafe, inappropriate, or harmful elements. Therefore, it is fully appropriate for all audiences and receives a score of 0.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe provided text discusses the scientific aspects of rogue waves, focusing on their formation, characteristics, and research. There is no unsafe, inappropriate, or harmful content in the text. It is entirely suitable for educational and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1262-0", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1262-1", "llm_safety_score": 0, "reason": "scientific content" }, { "custom_id": "1262-2", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://blekko.com/wiki/Rogue_wave?source=672620ff", "date": "2014-07-25T09:15:02", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894140.11/warc/CC-MAIN-20140722025814-00080-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.954990685, "token_count": 2721, "score": 3.890625, "int_score": 4 }
<urn:uuid:c9dad849-0e10-49ae-9c64-fcaf5f1326d7>
Is Steinbeck condemning or celebrating America in The Grapes of Wrath?

In a review of the newly published The Grapes of Wrath in The New Yorker, Clifton Fadiman wrote that the novel "dramatizes so that you cannot forget the terrible facts of a wholesale injustice committed by society...." There are, indeed, obvious socialist sympathies in its anger directed toward "buccaneering capitalism"; at the same time, however, John Steinbeck lauds the American spirit that drives people both to succeed and to endure.

It is in the intercalary chapters that Steinbeck most often criticizes capitalism and the callous treatment of the poor. In Chapter Five, for instance, the owners of the land drive onto the land where the tenant farmers live: "Some of the owner men were kind because they hated what they had to do, and some of them were angry because they hated to be cruel, and some of them were cold because they had long ago found that one could not be an owner unless one were cold. And all of them were caught in something larger than themselves." The callousness of the banks and companies that hold the farmers' loans demands payment, and the owners tell the tenants they must leave: "We're sorry. It's not us. It's the monster. The bank isn't like a man."

In other chapters, Steinbeck depicts the strength of community and family in the camps: "The loss of home became one loss, and the golden time in the West was one dream." In these camps a government develops, with leaders, and the people become migrants rather than the farmers they have been. Codes of conduct develop, and people share and work with one another. Their strength in community is expressed by Ma when she tells Tom he must have patience: "...us people will go on livin' when all them people is gone. Why, Tom, we're the people that live. They ain't gonna wipe us out. Why, we're the people--we go on....A different time's comin'." Tom Joad joins the ranks of the soldiers who will march to the Battle Hymn of the Republic, restore pride to people, and fight against the machinery of callous capitalism that has caused the Dust Bowl.

Clearly, there is evident in this novel Steinbeck's sympathy for union organizers for the migrant workers. But there is also great respect for the individual, the American, who perseveres. Author and critic Richard Gray wrote: "As its title indicates, as well as its narrative drive, The Grapes of Wrath is an angry but also an optimistic book, [recalling] 'The Battle-Hymn of the Republic' with its prophecy of truth marching to victory . . ."
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"No unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe provided text discusses the themes and interpretations of John Steinbeck's novel \"The Grapes of Wrath\" in a scholarly and educational manner. It does not contain any unsafe, inappropriate, or harmful content. The text is suitable for educational environments and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1264-0", "llm_safety_score": 0, "reason": "No unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.enotes.com/homework-help/steinback-condemning-celebrating-america-grapes-430874", "date": "2014-07-25T09:05:50", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894140.11/warc/CC-MAIN-20140722025814-00080-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9758053422, "token_count": 603, "score": 2.796875, "int_score": 3 }
<urn:uuid:cf90cba8-e382-4d27-84b1-24d6888b99f0>
The Hadley cell is a circulation pattern that dominates the tropical atmosphere, with rising motion near the equator, poleward flow 10-15 kilometers above the surface, descending motion in the subtropics, and equatorward flow near the surface. This circulation is intimately related to the trade winds, tropical rainbelts, subtropical deserts and the jet streams.

The major driving force of atmospheric circulation is solar heating, which on average is largest near the equator and smallest at the poles. The atmospheric circulation transports energy polewards, thus reducing the resulting equator-to-pole temperature contrast. The mechanisms by which this is accomplished differ in tropical and extratropical latitudes. Between 30°N and 30°S latitude, this energy transport is accomplished by a relatively simple overturning circulation, with rising motion near the equator, poleward motion near the tropopause, sinking motion in the subtropics, and an equatorward return flow near the surface. In higher latitudes, the energy transport is instead accomplished by cyclones and anticyclones that cause relatively warm air to move polewards and cold air to move equatorwards in the same horizontal plane. The tropical overturning cell is referred to as the Hadley cell. Questions as to why it extends only to 30 degrees latitude and what determines its strength are at the heart of modern dynamical meteorology.

Near the tropopause, as the air moves polewards in the Hadley cell it is turned eastward by the Coriolis force, which deflects winds to the right in the Northern Hemisphere and to the left in the Southern Hemisphere, creating the subtropical jet streams that flow from west to east. One can also think of a ring of air trying to conserve its angular momentum in an absolute reference frame (one not rotating with the Earth). As the ring of air moves polewards, it contracts closer to the axis of rotation, so it must spin faster; this creates jets that rotate more rapidly than the Earth itself and therefore appear, with respect to the surface, as jets flowing from west to east. Analogously, near the surface the equatorward return flow is rotated to the west, or slowed from the perspective of a non-rotating observer, due to its movement away from the axis of rotation. These surface winds, with both an equatorward and a westward component, are referred to as the trade winds.

In the early 1700s, George Hadley, an English lawyer and amateur meteorologist, was dissatisfied with the theory that the astronomer Edmond Halley had proposed for explaining the trade winds. What was no doubt correct in Halley's theory was that solar heating creates upward motion of equatorial air, and that air mass from neighboring latitudes must flow in to replace the risen air mass. But to explain the westward component of the trade winds, Halley had proposed that the Sun, in traveling across the sky, heats the air mass differently over the course of the day. Hadley was not satisfied with that part of Halley's theory, and rightly so. Hadley recognized that Earth's rotation plays a role in the direction taken by an air mass that moves relative to the Earth, and he was the first to do so. Hadley's theory, published in 1735, remained unknown, but it was rediscovered independently several times. Among the rediscoverers was John Dalton, who later learned of Hadley's priority. Over time the mechanism proposed by Hadley became accepted, and over time his name was increasingly attached to it.
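The ring-of-air argument above can be made quantitative. The following snippet is an illustrative calculation, not part of the original article: it assumes a ring of air that starts at rest relative to the surface at the equator and conserves its absolute angular momentum as it moves poleward, and computes the implied westerly wind speed.

```python
import math

OMEGA = 7.292e-5   # Earth's rotation rate, rad/s
A = 6.371e6        # Earth's mean radius, m

def zonal_wind(lat_deg):
    """Westerly wind speed (m/s) of an angular-momentum-conserving ring
    that starts at rest relative to the surface at the equator."""
    phi = math.radians(lat_deg)
    # Conservation of M = (OMEGA * A * cos(phi) + u) * A * cos(phi),
    # with u = 0 at phi = 0, gives:
    return OMEGA * A * math.sin(phi) ** 2 / math.cos(phi)

for lat in (10, 20, 30):
    print(f"{lat:>2} deg latitude: u ~ {zonal_wind(lat):6.1f} m/s")
```

The roughly 130 m/s implied at 30 degrees latitude is far stronger than observed subtropical jets, which is one way of seeing that friction and eddies matter and that angular-momentum conservation alone does not describe the real cell.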
By the end of the 19th century it was shown that Hadley's theory was deficient in several respects. One of the first to account for the dynamics correctly was William Ferrel. It took many decades for the correct theory to become accepted, and even today Hadley's theory can still be encountered occasionally, particularly in popular books and websites. Hadley's theory was the generally accepted theory for long enough that his name became universally attached to the circulation pattern in the tropical atmosphere.

The region of subsidence in the Hadley cell is known as the "horse latitudes". According to the story, in times when ships' captains relied upon the wind to reach their destinations, finding themselves becalmed was usually bad news for any horses aboard, which were thrown overboard in order to conserve precious water. The region in which the equatorward-moving surface flows converge and rise is known as the intertropical convergence zone, or ITCZ, a high-precipitation band of thunderstorms.

Having lost most of its water vapor to condensation and rain in the upward branch of the circulation, the descending air is dry. Low relative humidities are produced as the air is adiabatically warmed by compression as it descends into a region of higher pressure. The subtropics are relatively free of the convection, or thunderstorms, that is common in the equatorial belt of rising motion. Many of the world's deserts are located in these subtropical latitudes.
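To put a number on the adiabatic warming of the descending branch, the short example below uses conservation of potential temperature for dry air, T2 = T1 * (p2/p1)^(R/cp); the pressure levels and starting temperature are assumed for illustration and are not taken from the article.

```python
R_OVER_CP = 0.286  # R/cp for dry air

def descend(temp_k, p_start_hpa, p_end_hpa):
    """Temperature (K) of a dry air parcel after adiabatic descent
    from p_start_hpa to p_end_hpa."""
    return temp_k * (p_end_hpa / p_start_hpa) ** R_OVER_CP

# Assumed example: a parcel at about -23 C near 500 hPa subsiding to 850 hPa.
t_start = 250.0
t_end = descend(t_start, 500.0, 850.0)
print(f"Parcel warms from {t_start - 273.15:.1f} C "
      f"to {t_end - 273.15:.1f} C on descent.")
```

Because the parcel's moisture content does not change while its temperature rises by several tens of degrees, its relative humidity falls sharply, which is the mechanism behind the dry, cloud-free subtropics described above.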
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"educational content\", \"score\": 0}\n```\n\nThe text provided is a detailed explanation of the Hadley cell, which is a scientific and educational topic. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences, including children and sensitive environments.", "{\"reason\": \"No unsafe content\", \"score\": 0}\n```" ], "llm_harm_scores_by_chunk": [ { "custom_id": "1265-0", "llm_safety_score": 0, "reason": "educational content" }, { "custom_id": "1265-1", "llm_safety_score": 0, "reason": "No unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.reference.com/browse/Hadley%20cell", "date": "2014-07-25T09:48:07", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894140.11/warc/CC-MAIN-20140722025814-00080-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9660524726, "token_count": 1020, "score": 4.03125, "int_score": 4 }
<urn:uuid:db6b26ba-959e-4639-99c8-e6681ed188f3>
In response to today's article in the Sunday Telegraph, the National Trust sets out its position on the future of the Orford Ness lighthouse.

Orford Ness National Nature Reserve has been owned and cared for by the National Trust since 1993. The current lighthouse dates from 1792 and is the third to be built on the site. At no time has the Trust owned the lighthouse on the site, and whilst we care for the Orford Ness landscape, it is Natural England that holds the authority for determining permissions on any coastal defence work.

Orford Ness is one of the finest examples of a vegetated shingle spit in Europe and an internationally important nature reserve, and it has been given multiple designations for the importance of its landscape, including being a Site of Special Scientific Interest (SSSI), a Natura 2000 site, which is the highest level of European conservation designation, and a site protected under the 1971 Ramsar Convention for its wetlands and marshes that offer vital habitats for breeding birds and wildlife. The entire Ness is subject to tidal forces and wave action and is a constantly changing natural landscape. Its changing form and dynamic nature are part of what makes it one of the rarest vegetated shingle spits in the world.

In 2010, when Trinity House announced the decision to decommission the lighthouse, we worked collaboratively to agree a position that we would allow natural forces to dictate the future of the building. Whilst we recognise how iconic this building is in the local area, we do not believe that attempts at stabilisation with soft defences such as sandbags would be effective or appropriate, as they could be washed away in one storm. Hard defences such as rock armour may protect the lighthouse in the short term, but could accelerate erosion elsewhere. In the last 20 years approximately 80 metres of shingle spit has been lost from in front of the lighthouse, and so far this winter almost four metres has been eroded, leaving the base of the lighthouse some 11 metres from the beach. As sad as it is, we feel that any attempts to defend the lighthouse where it stands would either be unsuccessful, or cause unacceptable damage to what is a fragile habitat of international importance.

As well as the fragile landscape and wildlife habitats that we protect from excessive damage and disturbance, Orford Ness is also steeped in military history and was a secret MoD test site from 1913 until the mid-1980s. Indeed, over the years, an enormous amount of unexploded ordnance has been discovered, and visitors are urged to keep to designated routes.

Orford Ness is a unique National Trust property. This wild and unique site offers self-guided visits and the chance to experience the feeling of remoteness and isolation that are key to its atmosphere. When choosing to limit the number of visitors we considered not only the fragile landscape, but also feedback from visitors that they prefer to feel isolated and alone. Accessible only by boat, there is also a practical limit to the number of visitors who can be taken to and from the Ness each day. There has been an annual open day at the lighthouse for at least ten years, and whilst it is still safe to do so we will continue to assist where we can with these. Moving forward, we repeat our previous view that removing and preserving key historical artefacts from inside the lighthouse must be a priority, and we have been working with Trinity House as it begins this process.
We will continue to work with Mr Gold wherever possible and wish to maintain a productive relationship over the coming years.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe text provided is a factual and informative statement about the Orford Ness lighthouse and the National Trust's position on its future. It does not contain any unsafe, inappropriate, or harmful content. The language is respectful and the themes discussed are related to conservation, history, and environmental protection, which are suitable for all audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1266-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://eastofenglandnt.wordpress.com/2014/01/12/national-trusts-response-to-article-in-the-sunday-telegraph-about-orford-lighthouse/", "date": "2014-07-28T12:20:25", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510259834.25/warc/CC-MAIN-20140728011739-00296-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9633310437, "token_count": 718, "score": 2.671875, "int_score": 3 }
<urn:uuid:aaf591f7-98da-4123-b423-9b910a48e95a>
According to researchers at the Scripps Research Institute, chemists have tricked an enzyme essential in Alzheimer's disease into blocking its own debilitating action. A team led by Nobel laureate K. Barry Sharpless developed the chemical trick, called "click chemistry," to make a molecule that blocks the neurotransmitter destruction caused by the brain enzyme acetylcholinesterase. The destructive enzyme catalyzes the so-called "click reaction" that creates its own worst enemy -- its most potent inhibitor.

"Think of this as a Trojan Horse approach for battling disease, but this horse goes the Greeks one better," Sharpless, W.M. Keck Professor of Chemistry at Scripps, said in a prepared statement. Once the disease-causing enzyme assembles the horse, Sharpless explained, out march the inhibitors, chemical soldiers who stop the enzyme cold. By assembling its own inhibitor, the enzyme actually designs the drug that best halts its progress.

According to James Olds, a neuroscientist and director of the Krasnow Institute for Neurobiology at George Mason University in Fairfax, this is indeed a revolutionary method in rational drug design. "It may have great applicability to improving the specificity of drugs, particularly those currently used in the treatment of neurological and psychiatric diseases."

Molecular drug designers make inhibitor molecules that fit snugly into the active sites of target enzymes, blocking their action on other molecules. Many diseases -- cancer, AIDS, Alzheimer's, arthritis and anthrax, for instance -- induce unnatural functions in enzymes. Inhibiting enzyme action can treat these diseases. Enzyme active sites are also highly specific, so drugs react only with the sites and nowhere else. A major problem with current drugs is that they are "dirty": they have side effects -- often untoward -- because they interact with proteins or enzymes other than the ones intended.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is a scientific article discussing research findings from the Scripps Research Institute. It does not contain any unsafe, inappropriate, or harmful content. The language is respectful, and the themes are related to scientific research and medical advancements, making it fully appropriate for public use, educational environments, and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1267-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.bio-medicine.org/medicine-news/Researchers-trick-Alzheimers-Enzyme-1104-1/", "date": "2014-07-28T13:14:49", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510259834.25/warc/CC-MAIN-20140728011739-00296-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9073111415, "token_count": 537, "score": 3.3125, "int_score": 3 }
<urn:uuid:81fa9d57-6355-4a6a-9a57-7786925399a6>
Regarding how the terms we use frame our perceptions, Mehran Khalili on the terms for people from other places: Immigration, a lexicon: You’re a ‘migrant’ when you’re very poor; ‘immigrant’ when you’re not so poor; and ‘expat’ when you’re rich. Semantics matter: the images evoked by the words used to refer to a group of people will, over time, help to define what we think about that group and how we act towards it. And in the case of people migrating to Greece who have in recent years been badly mishandled by the state, the use of neutral language in reporting by international media and NGOs is vital. Literally speaking, ‘migrants’ and ‘expats’ do indeed have the same meaning. But since practical usage can be something else entirely, feed both terms into Google Images to see how they’re illustrated. ‘Expats’ in Greece are depicted as white; ‘migrants’ as darker-skinned. Who decides what ethnicity a person should have to be called a migrant? Who decides what socio-economic background, or legal status, qualifies someone for the ‘expat’ label? Let’s scrap the ‘migrant’ label and call everyone living outside their native country an expat.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"neutral language\", \"score\": 0}\n```\n\n**Explanation:**\nThe text discusses the use of language and its impact on perceptions and does not contain any unsafe, inappropriate, or harmful content. It is suitable for educational and public use and does not include any offensive language, adult themes, or references to illegal activities." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1269-0", "llm_safety_score": 0, "reason": "neutral language" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://disinfo.com/2013/11/immigrants-versus-expats/", "date": "2014-07-29T21:05:06", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267865.20/warc/CC-MAIN-20140728011747-00344-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9417286515, "token_count": 294, "score": 2.625, "int_score": 3 }
<urn:uuid:bc6fed38-8a39-450c-86cc-46e00dad9b56>
Not long ago, we featured a pretty cool-looking chart illustrating the many minimum wages that have applied at the federal level and in various states in the U.S. since 1994. Today, we're streamlining things a bit to determine the national average minimum wage for the United States!

To do that, we've calculated the percentage share of each state's population with respect to the combined population of all 50 states and the District of Columbia, and multiplied each state's share of the U.S. population by the greater of either the federal minimum wage or the state's minimum wage. We then summed up the results for each year from 1994 through 2012 to find the population-weighted national average minimum wage for the United States. Those basic results are presented below:

And here are the results for each year again, this time adjusted for inflation to be in terms of 2012 U.S. dollars!

In these charts, the biggest deviations from the federal minimum wage in any given year can mainly be attributed to large-population states that have set their minimum wages well above the level set by the federal government. The largest deviation occurred at the beginning of 2007, when states like California ($7.50), Florida ($6.67), Illinois ($6.50), Massachusetts ($7.50), New York ($7.15) and Washington ($7.93) had set their minimum wages significantly above the U.S. minimum wage of $5.15 per hour. Together, these six states accounted for almost one-third of the U.S. population in 2007, which was enough, when combined with the higher-than-federal minimum wages of smaller-population states, to boost the population-weighted national average minimum wage to $6.35 per hour, 23% higher than the federal minimum wage on 1 January 2007.

The timing of when these large-population states increased their minimum wages over the years also explains an apparent anomaly for those analyzing U.S. national employment data: namely, why increases in the federal minimum wage would not appear to generate the large reductions in the number of employed that might be expected in economic theory. By increasing their minimum wages ahead of increases in the federal minimum wage, many states would bear the brunt of reduced employment earlier. By the time the federal minimum wage was increased, a good portion of the job loss that might reasonably be expected if it were the only minimum wage in the U.S. would have already taken place.

We think that factor goes a long way toward explaining why the Age 15-24 population of the U.S. with incomes saw a net decline during the years from 2004 through 2006, which were otherwise characterized by solid economic growth in the U.S. It would seem that all it took to make that decline happen during these years was for the large-population states of Florida, Illinois, New Jersey, New York and Wisconsin to rashly boost their minimum wages above the federal minimum wage level of $5.15 per hour, as those five states together account for over one-fifth of the U.S. population. Meanwhile, virtually all of the net decline in the number of Americans between the ages of 15 and 24 with incomes during these years occurred at the levels of annual income that would be most directly affected by the minimum wage increases that occurred in each of these states. (Note: The data for the Age 15-24 segment of the U.S.
population is the most likely to show the real effects of minimum wage increases, because American teens and young adults make up approximately half of all individuals earning wages at or near the federal minimum wage level.) By the time the federal minimum wage was increased to $5.85 per hour nearly two-thirds of the way through 2007, the impact that might otherwise have occurred was muted: the number of American 15-24 year-olds with incomes declined much less than might have been expected from the 13.5% increase in the federal minimum wage that took effect on 24 July 2007. And that's what separates our minimum wage impact analysis from other efforts that look only at the federal minimum wage - we've accounted for the different minimum wages that most affect the population of the United States!
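For readers who want to reproduce the weighting scheme described above, here is a minimal sketch. The handful of states and figures in it are placeholders chosen for illustration only; they are not the full 50-state (plus D.C.) data set used in the article.

```python
# Hypothetical inputs: state -> (population, state minimum wage in $/hr).
states = {
    "California": (36_000_000, 7.50),
    "Texas":      (23_000_000, 5.15),
    "New York":   (19_000_000, 7.15),
    "Washington": ( 6_400_000, 7.93),
}
federal_minimum = 5.15  # $/hr, early 2007

total_population = sum(pop for pop, _ in states.values())

# Each state counts at the greater of its own minimum wage or the federal one,
# weighted by its share of the combined population.
weighted_average = sum(
    (pop / total_population) * max(wage, federal_minimum)
    for pop, wage in states.values()
)

print(f"Population-weighted average minimum wage: ${weighted_average:.2f}/hr")
```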
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe provided text is entirely focused on the historical and demographic analysis of minimum wage rates in the United States. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and receives a score of 0." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1270-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.businessinsider.com/the-national-average-minimum-wage-2013-3", "date": "2014-07-29T21:16:24", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267865.20/warc/CC-MAIN-20140728011747-00344-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9601983428, "token_count": 890, "score": 3.140625, "int_score": 3 }
<urn:uuid:1f61d773-435e-4d80-af70-30fe868a072a>
Northern Prairie Wildlife Research Center
Establishment of Seeded Grasslands for Wildlife Habitat in the Prairie
Fig 2. Diagram of seeded grassland plot that provides dabbling duck nest sites in close proximity to surrounding wetland complex that is used by breeding pairs, nesting hens and broods.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"No unsafe content found\", \"score\": 0}\n```\n\nThe provided text is a description of a scientific diagram related to wildlife research and habitat establishment. It does not contain any unsafe, inappropriate, or harmful content." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1271-0", "llm_safety_score": 0, "reason": "No unsafe content found" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.npwrc.usgs.gov/resource/habitat/grasland/fig2.htm", "date": "2014-07-29T20:56:28", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267865.20/warc/CC-MAIN-20140728011747-00344-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.8161846995, "token_count": 71, "score": 2.703125, "int_score": 3 }
<urn:uuid:00d2e669-1790-44ef-bf1c-5bc6bae7231e>
What is an Alternative Break?

An alternative break program places teams of college or high school students in communities to engage in community service and experiential learning during their summer, fall, winter, weekend or spring breaks. Students perform short-term projects for community agencies and learn about issues such as literacy, poverty, racism, hunger, homelessness and the environment.

The objectives of an alternative break program are to involve college students in community-based service projects and to give students opportunities to learn about the problems faced by members of communities with whom they otherwise may have had little or no direct contact. Being completely immersed in diverse environments enables participants to experience, discuss, and understand social issues in a significant way. The intensity of the experience increases the likelihood that participants will transfer the lessons learned on-site back to their own communities even after the alternative break ends. Break Away seeks to use alternative breaks as a springboard into lifelong active citizenship... where the community becomes a priority in an individual's life decisions.

Examples of trips students have organized are:
- tutoring migrant farmworkers in Florida
- building homes in Appalachia
- registering voters in rural Mississippi
- working with the homeless in Washington, DC

In the spring of 2006, an estimated 36,000 students across the nation participated in an alternative break experience.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"No unsafe content found\", \"score\": 0}\n```\n\nThe provided text discusses alternative break programs, which are educational and community-focused initiatives. There is no unsafe, inappropriate, or harmful content in the text." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1272-0", "llm_safety_score": 0, "reason": "No unsafe content found" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://alternativebreaks.org/Alternative_Breaks.asp", "date": "2014-07-31T05:21:26", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272584.13/warc/CC-MAIN-20140728011752-00392-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.93340379, "token_count": 264, "score": 3.03125, "int_score": 3 }
<urn:uuid:e1414682-5510-4346-9e33-aae81149b375>
The plane crash in Ukraine brings up many questions related to loss and grief. How will those left behind cope with the devastating event? How can we support them? With regard to how young people cope with bereavement, Mariken Spuij's recent PhD thesis provides new insights. She studied grief and its pathological extreme, Prolonged Grief Disorder, focusing on three topics:
- the phenomenology and correlates of Prolonged Grief Disorder
- the role of negative thinking in Prolonged Grief Disorder
- the development and piloting of the 'Grief-Help' intervention for children

Prolonged Grief Disorder
Pathological grief is characterized by persistent severe distress for more than six months after the loss, including symptoms such as separation distress, disbelief regarding the death, numbness, and a sense that life is meaningless. Many of the symptoms are normal grief reactions; it is their intensity and duration that signal a need for additional support. Similar to the relation between trauma and PTSD, Dr. Spuij and colleagues concluded that... Continue reading

How many children develop Posttraumatic Stress Disorder after a traumatic experience such as an assault, a car crash, war or disaster? William Yule – one of the godfathers of child traumatic stress research – once pointed out that rates reported in separate studies varied from 0 to 100%. So what is the average rate to be expected? With an international team of collaborators, we conducted a meta-analysis to answer this question. The wide variety in individual study results suggests that various factors may be in play: apparently not every type of exposure, set of circumstances or group of children is related to similar PTSD rates. In addition, there may be methodological variation between the studies. Continue reading

Worldwide, more than 175,000 new cases of childhood cancer are diagnosed each year. Georgie Johnstone, a recent vacation scholar at the Trauma Recovery Lab, talks you through some thought-provoking new research on cancer and PTSD. Overall, in children under 15 years living in the industrialised world, childhood cancer is the 4th most common cause of death. However, childhood cancer is no longer the death sentence it once was, with overall survival rates in high-income countries now at about 80 percent. How are survivors affected by the potentially traumatic experience of their diagnosis and treatment, and how does it impact on the rest of their life and that of their family? Research has indicated that cancer survivors are at an increased risk not only from somatic late effects related to cancer and treatment, but also for depression, anxiety and antisocial behaviour. Lifetime prevalence of cancer-related PTSD has been estimated at 20-35% in survivors and 27-54% in their parents. However, new research in the Journal of Clinical Oncology has challenged these estimates.

The risk of a focusing illusion
We know that parents are incredibly important for children's recovery from a traumatic event. Social support is one of the strongest predictors of trauma recovery. On the other hand, parental distress after trauma is related to children's posttraumatic stress later on. But how exactly do parents influence children's trajectory after something bad has happened? Continue reading

The days when we focused only on pathology after trauma are long gone. In the past two decades, the literature that adopts a 'well-being' rather than a 'deficit' model has grown substantially. In 1990, the field had published 4 English papers on resilience.
Today, there are 858. But what is resilience? And how do we become more resilient? The topic is not without controversy. In a panel session at the ISTSS conference, Professors Rachel Yehuda, George Bonanno, Ann Masten, and Catherine Panter-Brick gave their views. Here is what I took home: What is resilience? Continue reading
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text discusses grief, bereavement, and trauma in a sensitive and educational manner, focusing on research and support for children and families dealing with loss and trauma. There is no unsafe content, inappropriate language, or harmful themes." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1273-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://trauma-recovery.net/", "date": "2014-07-31T05:21:12", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272584.13/warc/CC-MAIN-20140728011752-00392-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9512358308, "token_count": 796, "score": 2.765625, "int_score": 3 }
<urn:uuid:a7abdfc0-33e3-487e-af0f-edf016385461>
September 25, 2002

Evaluation of the Prolonged Use of Fund Resources

1. In recent years, the IMF has been at the center of an array of criticism concerning the adverse side-effects of its interventions in the countries that it supports financially, the effectiveness of the economic programs that form the basis for this support, and even its very role in today's international financial system, with the fiercest critics arguing that it has outlived the mission for which it was created. While the criticism is extensive, it is not unanimous and there are many who recognize that the IMF has an important role to play in promoting stability in the international financial system and especially in helping member countries manage balance of payments problems in a manner consistent with the pursuit of other economic objectives.

2. The purpose of the Independent Evaluation Office is to undertake independent studies, which will further the objectives of strengthening the learning culture of the IMF and contribute to transparency by providing objective assessment of the effectiveness of IMF activities in various fields. This study, which is the first report of the IEO, focuses on the phenomenon of prolonged use of IMF resources by a number of countries. It is an issue that is closely related to the broader debate about the IMF's role and has been the subject of external criticism and also internal concern.

3. External observers have criticized prolonged use from a number of perspectives, suggesting that:1
· it contradicts the IMF's mandate as set forth in the Articles of Agreement, which stress that IMF resources should be made available to members "temporarily"2 to cope with balance of payments disequilibria.
· it suggests a lack of effectiveness of IMF-supported programs, as the repeated need to make use of IMF resources indicates a persistence of the balance of payments difficulties which such programs are intended to solve (Meltzer (2000) and Vasquez (2000)). In recent years, some critics have argued that lack of ownership, leading to poor program implementation, and program design flaws are key factors underlying the lack of effectiveness of IMF-supported programs and therefore a root cause of prolonged use.
· it may encourage over-indebtedness, because a prolonged "IMF seal of approval" encourages overlending in an environment where there is insufficient attention to debt sustainability (Bandow (1994)); in heavily indebted countries it could reflect a strategy of "defensive lending" by the IMF and other multilaterals to avoid default (Birdsall (2001)); and
· it may hinder institutional development in the borrowing countries by giving the IMF an overly intrusive presence in their policy-making process, thereby compromising the development of responsible, democratic institutions that correct their own mistakes and respond to changes in external conditions (Meltzer (2000)).

4. The issue has been discussed in the Executive Board on several occasions.
These discussions have brought out several arguments suggesting that frequent recourse to IMF arrangements is less of a problem than critics contend and in any case may take place for good reasons and be fully compatible with both the IMF's mandate and a broadly defined sense of economic efficiency:
· This could be the case for countries subject to frequent external shocks and for countries where external imbalances have deep-seated structural causes, which cannot be overcome over the short term, or only at a great cost to economic prosperity, an outcome IMF-supported programs are specifically intended to avoid. The latter category could include many low-income and transition economies.
· Since the mid-1980s, most IMF arrangements intended to tackle the latter kind of balance of payments difficulties for low-income countries have been funded from special accounts (i.e. the SAF, ESAF, and PRGF) and consequently have not exerted pressure on the revolving nature of general resources.3 Besides, it has been argued that the views of the official international community on the appropriate length of IMF financial involvement have changed over time, at least for the low-income group of countries, and the IMF has been given a mandate to provide support, via the PRGF, for programs intended to foster growth and poverty reduction over a longer time horizon.
· Prolonged use of IMF-supported programs may simply reflect the unwillingness of other lenders (be they private or public) to provide financing without the "seal of approval" they consider an IMF-supported program to be. This raises important questions with regard to the appropriate balance between the IMF's lending and surveillance activities.

5. This evaluation investigates the issue of prolonged use by addressing three broad sets of questions. First, how extensive is prolonged use and what are its consequences for borrowing countries and the IMF? Second, to what extent is the phenomenon an inevitable consequence of conscious decisions made by the shareholders regarding the IMF's role in the changed international environment? Third, to what degree can it be accounted for by shortcomings in the design of IMF-supported programs and how could the effectiveness of the programs be improved? We find that both systemic factors associated with the IMF's role and program design and implementation issues have contributed to the extent of prolonged use. In addition, the topic of prolonged use touches upon many aspects of the IMF's operations; in undertaking the evaluation we have identified a number of issues that, while germane to this topic, are of much broader relevance.

6. The evaluation has relied upon a combination of methodological approaches, including (i) empirical and econometric analysis of a wide range of prolonged users, taken both as a whole and in various relevant groupings; (ii) detailed case studies of three country cases (Pakistan, Philippines, and Senegal) that have been among the most prolonged users of IMF resources and illustrate different aspects of prolonged use.
These case studies have involved country visits to seek the views of the authorities and a wide range of other stakeholders;4 detailed reviews of both internal and publicly available IMF documents; and interviews with relevant IMF staff, management, and Executive Directors; (iii) more limited desk reviews of two countries (Jamaica and Morocco) that appear to have "graduated" from IMF financial support, drawing on IMF documents, interviews with staff and some senior officials; (iv) written submissions by the authorities in a number of prolonged use cases responding to an IEO questionnaire on specific issues; and (v) responses received to IEO questionnaires sent to official donors and creditors, members of the private financial sector, and a broad range of IMF mission chiefs designed to seek their views on a range of issues including the broader institutional framework and incentives that influence decisions on programs. In the empirical work, we have made a distinction, wherever possible, between prolonged use of general and concessional resources, since expectations of the IMF's role are, to some extent, different for the two groups. 7. We relied heavily upon existing databases for all our quantitative analyses. In addition to general databases on economic indicators (including that maintained for the World Economic Outlook), we used internal IMF databases on programs (e.g. the database on the monitoring of arrangements-MONA-and those on IMF arrangements and financial transactions), as well as those set up on an ad hoc basis for policy reviews or working papers (e.g. databases on waivers and structural conditionality, both created in early 2001). We faced a number of difficulties in using these databases: in addition to timeframe limitations, we found a number of gaps and inconsistencies between databases compiled from different sources as well as, occasionally, between the databases and program documentation. Whenever possible, we corrected these inconsistencies. However, it is clear that a further strengthening of the MONA database in particular is needed to help improve the basis for internal and external ex post assessments. We make a recommendation to this effect in Chapter VIII. 8. The three countries chosen for the detailed case studies illustrate different aspects of prolonged use. Philippines had had the longest involvement with IMF-supported programs, stretching back to the early 1960s, with only relatively brief interruptions until it finally exited from such programs in 2000. It has used only the IMF's general resources. Pakistan had repeated, but discontinuous, use of IMF resources during the period 1970-88 and, since 1988, has had a long series of arrangements-almost all of which suffered from substantial policy slippages and soon went off-track. The only exception was the 2000 stand-by arrangement, which was completed as scheduled. Pakistan has used both general and concessional resources and is currently implementing a PRGF-supported program. Senegal has had an almost continuous succession of IMF arrangements since 1979, except for a nearly two-year period around 1992-93 (i.e. prior to the devaluation of the CFA franc). Periods of strong adjustment have been interspersed with periods of policy slippages, often around electoral cycles. Since 1986, it has used primarily concessional resources, although it has also used a number of stand-by arrangements, either to supplement access levels or as a transition to multi-year arrangements. 9. 
The mandate of the IEO is to evaluate the operations of the IMF, not the policies or actions of country authorities. Hence, the focus of this paper on the former. This does not mean that we believe all of the reasons for prolonged use lie within the IMF, or that full implementation of our recommendations would eliminate all of the problems in the countries themselves that have contributed to a sometimes prolonged and difficult adjustment process. But these issues are clearly the responsibility of the countries concerned.

10. The organization of the report is as follows. Chapter II proposes a definition of prolonged use and discusses its extent; Chapter III summarizes the policy discussions that have taken place within the IMF with respect to prolonged use and the strategy that evolved for dealing with it. Chapter IV examines the characteristics of the prolonged users. Chapter V discusses the lessons from the prolonged users for the effectiveness of IMF-supported programs, drawing on both cross-country evidence and the country case studies. Chapter VI discusses a number of issues concerning IMF governance raised by the phenomenon of prolonged use, including the appropriate balance between IMF surveillance and lending arrangements. Chapter VII discusses a number of other implications of prolonged use for the borrowing countries and for the IMF. Chapter VIII summarizes the main conclusions of the report and makes recommendations geared to limiting the scope of prolonged use and reducing its drawbacks as well as for improving the overall effectiveness of IMF operations. Supplementary material to several of the chapters is provided in the Annexes. The detailed country case studies for Pakistan, Philippines, and Senegal, as well as the shorter case studies of Jamaica and Morocco, are contained in a separate volume.

1 Both the main criticisms and their counterarguments have been presented in more detail in the Terms of Reference of this paper, published on the IEO's website (www.imf.org/ieo) on March 15, 2002. In this report, the terms "prolonged use" and "repeat use" will be interpreted to mean the same thing. (See Chapter II.)
2 Such temporariness is dictated by the need to ensure the revolving nature of IMF resources.
3 To the extent that the PRGF Trust is expected to be self-sustaining at some point, concessional resources would also need to be revolving.
4 A list of those with whom the evaluation team had discussions in the case study countries is given in the companion volume on the case studies.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is a formal evaluation of the IMF's prolonged use of financial resources. It discusses various perspectives and arguments without containing any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for public use, educational environments, and general audiences.", "{\"reason\": \"no unsafe content identified\", \"score\": 0}\n```\n\n*Explanation:*\nThe provided text is a detailed evaluation of the IMF's operations and does not contain any unsafe, inappropriate, or harmful content. It discusses the evaluation methods, case studies, and broader implications without any offensive language, illegal activities, or harmful themes.", "{\"reason\": \"No unsafe content identified\", \"score\": 0}\n```\n\n*Explanation:*\nThe provided text discusses issues related to IMF governance and the implications of prolonged use of IMF resources. The content is focused on financial and economic topics and does not contain any unsafe, inappropriate, or harmful elements. It is suitable for public use, educational environments, and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1274-0", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1274-1", "llm_safety_score": 0, "reason": "no unsafe content identified" }, { "custom_id": "1274-2", "llm_safety_score": 0, "reason": "No unsafe content identified" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.imf.org/external/NP/ieo/2002/pu/index.htm", "date": "2014-07-31T05:26:57", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272584.13/warc/CC-MAIN-20140728011752-00392-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9493633509, "token_count": 2293, "score": 2.53125, "int_score": 3 }
<urn:uuid:be0d5d4c-01a7-4b08-b50c-d08035bbf3ac>
What does hunger mean to you? This question can mean a number of things to a number of different people, depending on geography, socioeconomic status and more. Nerys Udy, a student in New Zealand, presents her take in an essay about poverty in Southern Africa.

Picture this: a New Zealand teenager, let's call her Stephanie, is worried. She spent the last of her allowance at the movies and now she can't buy the formal dress that she is desperate for. Her mum is refusing to give her more money, and Steph is sure she'll have to wear something dreadful and be laughed off the dance floor. On the other side of the world, a Zimbabwean teenager, let's call her Patience, is worried. She spent the last of her money on food days ago, and now she is desperately hungry. She can't ask her mum to help, because her mum died years ago, along with her father, and Patience is sure she's going to starve or have to do something dreadful to get food.

When it comes to indulgences, it seems it is all too easy for the world to disregard issues such as the extreme poverty and hunger that are the reality for approximately 12 percent of Earth's population. Yet as the UN World Food Programme says, "Hunger is the world's greatest solvable problem." So why aren't we solving it? Hunger is not only solvable, but it is quite literally one of the worst problems currently facing our planet. As such, it is up to all citizens of the world to stand up and work for change, rather than leaving it to someone else simply because it does not affect them personally.

In the southern African country of Zimbabwe, there have been particularly high levels of food insecurity since 2001, due to natural disasters such as droughts and poor harvests. Economic instability due to hyperinflation and political volatility has resulted in high unemployment, which has also increased food insecurity. Furthermore, Zimbabwe has the world's fifth highest rate of HIV/AIDS, which only exacerbates already volatile conditions.

"We'd like to see a world where everyone has enough to eat, where no child goes to bed hungry," says WFP's Southern Africa spokesperson David Orr. To achieve this goal, the World Food Programme implements several initiatives throughout the year in Zimbabwe. From October through to March, it runs the "Cash for Cereals" programme, which allows local people to purchase cereal from nearby markets. Between May and October, WFP begins its Cash and Food for Assets activities. "These are designed to strengthen communities' ability to handle weather-related and other shocks to their food security by creating community assets such as dams or water harvesting and irrigation systems," Orr explains.

The eradication of smallpox must have once seemed nigh on impossible, with two million people dying from it every year at its height. But today we live free from this disease, and there is no reason good enough why we should not live free from the extreme poverty seen in Zimbabwe, which really is a disease in itself. In fact, if New Zealanders want to join the fight against poverty, plenty of small things can be done. It could be as simple as reading the international section of the newspaper and ensuring you are aware of what is happening in the world, believes Laura Gemmell, a New Zealander who has worked as a journalist for the humanitarian organisation World Vision for almost two years.
"Think about what your skills are and whether you'd like to work for an aid agency, volunteer overseas or get a group of like-minded individuals together and brainstorm how you can make a difference." At the end of the day, it all comes down to how committed one is to making a difference. As worn as it sounds, if we all work together we can and will bring about change. (Nerys Udy is a 17-year-old student in her last year at Marlborough Girls' College in New Zealand. As part of her media studies course, Nerys chose to research and write a feature article looking at poverty in Southern Africa, to inspire her classmates to get involved in making a difference, through organisations like the World Food Programme.)
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe text discusses poverty and hunger in a factual and educational manner. It does not contain any unsafe, inappropriate, or harmful content. The language is respectful, and the themes are appropriate for a general audience, including students and the public." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1275-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.wfp.org/blog/blog/whose-problem-it-anyways", "date": "2014-07-31T06:20:40", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272584.13/warc/CC-MAIN-20140728011752-00392-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9668046236, "token_count": 922, "score": 3.09375, "int_score": 3 }
<urn:uuid:86890b4c-e7a1-4784-8c59-dde01e03cfac>
Lab 8: Series 3 - To do on the night before this lab: You and your partner will return to the lab to make an overnight broth culture of one of the colonies that you are sure contains the gene of interest (determined from your visualization of successfully amplified, appropriately sized DNA seen on your gel photo). The sub-culture you will set up tonight will create many identical copies of bacteria that carry the plasmid containing your gene of interest.
- Find your LB+amp plate in the glass-front refrigerator in a rack labeled with your lab day. Make sure some bacteria remain on the plate from a colony that showed successful gene amplification in the PCR product.
- Begin by pouring (DO NOT PUT A PIPET INTO THE STOCK LB!!) 10 ml of sterile LB broth from one of the stock containers in the refrigerator into a sterile orange-capped 15 ml conical tube. You will use the volumetric marks on the tube for measuring the media rather than using a pipet. Make sure the LB stock does not look cloudy (indicating contamination by a previous user) and take care not to contaminate it yourself.
- Add 10 microliters of the 50 mg/ml ampicillin stock (also found in the refrigerator). Calculate the effective concentration of ampicillin that you will have in your LB tube (remember V1 x C1 = V2 x C2) and record that information in your lab notebook.
- Replace the cap of your LB+amp broth and invert the tube several times to mix the contents.
- Label two sterile glass culture tubes (found in a rack in the lab) with tape in your team color. Label one with "pL4440 and the gene name" and your initials. Label the other with your initials only.
- Using a 5 or 10 ml sterile disposable pipet, pipet 4 ml of your working solution of LB+ampicillin broth into each of the 2 tubes. Be careful not to touch the tip to anything non-sterile.
- Inoculate the broth with your bacteria by using a sterile toothpick to scrape your candidate colony off the plate. Be sure not to touch the plate with the toothpick except on the desired colony, and don't pick up any satellite colonies! Make sure the toothpick falls into the sterile broth. (The second tube of broth, labeled with just your initials, is a control and should not be inoculated with bacteria, as it is your control for contamination.)
- Balance the 2 tubes across from each other on the rotating wheel in the incubator at the front of the room (as you come in the door).
- Incubate these broth cultures at 37°C overnight. Do not forget to make sure the wheel is rotating when you leave!
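The concentration calculation the protocol asks for can be sanity-checked in a few lines. The sketch below is only an illustration; the helper function and the choice to treat the extra 10 µl of added volume as negligible are my own simplifications, not part of the protocol.

```python
# Dilution arithmetic for the ampicillin working stock: V1 x C1 = V2 x C2.
def final_concentration(c1_mg_per_ml, v1_ml, v2_ml):
    """Return C2 = (C1 * V1) / V2 for a simple dilution."""
    return c1_mg_per_ml * v1_ml / v2_ml

stock_conc = 50.0   # mg/ml ampicillin stock (from the protocol)
stock_vol = 0.010   # ml, i.e. the 10 microliters added
final_vol = 10.0    # ml of LB broth; the added 10 ul is ignored here

c2 = final_concentration(stock_conc, stock_vol, final_vol)
print(f"Working concentration: {c2:.3f} mg/ml ({c2 * 1000:.0f} ug/ml)")
# Expected output: 0.050 mg/ml, i.e. roughly 50 ug/ml of ampicillin.
```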
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe provided text is a laboratory protocol for a genetic engineering experiment. It does not contain any unsafe, inappropriate, or harmful content. The text is entirely focused on a scientific procedure and does not include any language, themes, or references that would be considered unsafe or inappropriate for public use, educational environments, or general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1276-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://openwetware.org/index.php?title=BISC219/F12:_RNAi_Lab_8&diff=619307&oldid=619306", "date": "2014-08-01T14:56:19", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274987.43/warc/CC-MAIN-20140728011754-00440-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9176918268, "token_count": 595, "score": 2.8125, "int_score": 3 }
<urn:uuid:0cc9fba8-6685-431f-a13f-a2805af0bbc2>
SMITHERS LAKE. Smithers Lake is on Dry Creek in the Brazos River Basin ten miles southeast of Richmond in Fort Bend County. The project is owned and operated by the Houston Lighting and Power Company as the cooling-water supply for a steam-electric generating station. Construction was started on August 22, 1956, and was completed on July 1, 1957. Water had already been impounded in a small lake, and the date for beginning of impoundment in the new lake was October 15, 1957. The lake has a capacity of 18,000 acre-feet and a surface area of 2,140 acres, at an elevation of sixty-six feet above mean sea level. Runoff is supplemented by water purchased from the canal system of the Richmond Rice Association as needed to keep the lake elevation at normal level for the plant operation. The drainage area above the dam is 24.2 square miles. The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article: Seth D. Breeding, "SMITHERS LAKE," Handbook of Texas Online (http://www.tshaonline.org/handbook/online/articles/ros12), accessed July 12, 2014. Uploaded on June 15, 2010. Published by the Texas State Historical Association.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe provided text is a factual description of Smither's Lake and its operational details. It contains no unsafe, inappropriate, or offensive content. The language is neutral and the themes are educational and informative, making it fully appropriate for all audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1278-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.tshaonline.org/handbook/online/articles/ros12", "date": "2014-07-12T09:49:14", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776432860.32/warc/CC-MAIN-20140707234032-00024-ip-10-180-212-248.ec2.internal.warc.gz", "language": "en", "language_score": 0.9599992633, "token_count": 269, "score": 2.6875, "int_score": 3 }
<urn:uuid:8a4e73f5-e346-4dae-9fac-f874e2fae445>
Obsessive-compulsive disorder (OCD) is a type of anxiety disorder. If you have OCD, you have frequent, upsetting thoughts called obsessions. To try to control the thoughts, you feel an overwhelming urge to repeat certain rituals or behaviors. These are called compulsions. Examples of obsessions are a fear of germs or a fear of being hurt. Compulsions include washing your hands, counting, checking on things, or cleaning. With OCD, the thoughts and rituals cause distress and get in the way of your daily life. Researchers think brain circuits may not work properly in people who have OCD. It tends to run in families. The symptoms often begin in children or teens. Treatments include therapy, medicines, or both. One type of therapy, cognitive behavioral therapy, is useful for treating OCD. Source: National Institute of Mental Health
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe provided text is a factual explanation of OCD and its symptoms, treatments, and causes. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and educational environments." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1279-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://medicalcenter.osu.edu/patientcare/healthcare_services/mental-behavioral/need-help/ocd/Pages/index.aspx", "date": "2014-07-22T23:36:57", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997869778.45/warc/CC-MAIN-20140722025749-00200-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9288210869, "token_count": 178, "score": 3.96875, "int_score": 4 }
<urn:uuid:0b1552bb-bf67-4f58-9dc9-22f43d353784>
New research from the University of Calgary (Calgary, Canada) has identified two genes in the opium poppy that are involved in the synthesis of codeine and morphine. This discovery could simplify the pharmaceutical manufacturing of these two pain relievers. The opium poppy has long been depended upon for its natural ability to create these two powerful pain-fighters. But despite widespread medical use, codeine must be derived from morphine, which still needs to be extracted from the seeds of the poppy plant. For decades, researchers have searched for a synthetic mechanism to manufacture these drugs, but have struggled to understand how the opium poppy generates these substances. "The enzymes encoded by these two genes have eluded plant biochemists for a half-century," Peter Facchini, professor in the department of biological sciences at the University of Calgary, said in a press release. "In finding not only the enzymes but also the genes, we've made a major step forward. It is equivalent [to] finding a gene involved in cancer or other genetic disorders." To find the exact genes that control the production of codeine and morphine, Facchini worked with post-doctoral researcher Jillian Hagel to evaluate 23,000 different genes in search of codeine O-demethylase (codm) and thebaine 6-O-demethylase (t6odm). These genes produce the enzymes that the opium poppy uses to convert codeine into morphine. Called non-heme dioxygenases, codm and t6odm are the only genes known to catalyze the synthesis of codeine and morphine. By inhibiting the expression of t6odm and codm in the opium poppy, the researchers tested their discovery and found that virus-induced gene silencing successfully blocked the creation of the two substances. Although codeine can be extracted directly from the plant, it is not found in the same abundance as morphine, because codeine must be created from the original morphine. In the human body, an enzyme in the liver reverses the synthesis of codeine and turns it back into morphine. Because of this, codeine and morphine have the same effects on the human brain. According to Facchini, the process of synthesizing and harvesting codeine will no longer be necessary if the genes he and Hagel discovered are used to make a knock-out version of the opium poppy that only creates codeine. "We can potentially create plants that will stop production at codeine," said Facchini. "We are also working toward the synthesis of codeine and other opiate drugs more efficiently and economically in controlled bioprocessing facilities. Our discovery now makes it possible to use microorganisms to produce opiate drugs and other important pharmaceuticals." According to Facchini, these drugs are important because they generate a lot of revenue for the pharmaceutical industry. Canada is among the world's top consumers of codeine, spending more than $100 million every year to import opiates. "The evolution of these two genes in a single plant species has had such a huge impact on humanity over the past several thousand years," said Facchini. "Our discovery allows this unique genetic power to be harnessed in many important ways." The researchers will now focus on using the codeine gene to produce the drugs in a yeast or bacteria vector. The paper, "Dioxygenases catalyze the O-demethylation steps of morphine biosynthesis in opium poppy," was published March 14 in Nature Chemical Biology.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe text discusses scientific research related to the opium poppy and the synthesis of codeine and morphine. It is entirely focused on educational and informative content, with no inappropriate language, themes, or references to illegal activities. Therefore, it is safe for public use and educational environments." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1280-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.biotechniques.com/news/Researchers-unlock-the-opium-poppys-pain-fighting-genes/biotechniques-208313.html", "date": "2014-07-22T23:35:41", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997869778.45/warc/CC-MAIN-20140722025749-00200-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9324843884, "token_count": 722, "score": 3.328125, "int_score": 3 }
<urn:uuid:1f46a869-d2d0-488b-9f38-14e5afbe798d>
At the latest school board meeting, Paula Ledford and Nancy Cantrell sought the approval of board members for an Early Childhood Literacy Campaign that they wish to begin in this area. Ledford and Cantrell presented research that indicates that 40% of children are not at the desired cognitive level when they reach kindergarten. Based on assessments created by a school board member in Washington state, student testing scores show that this trend is on the rise. This is why the Reading Foundation was created. Ledford commented that if children begin school already behind in learning, then they will most likely remain behind throughout their educational careers. Therefore, according to Ledford, it is time to "catch the kids up before that learning gap gets worse." Ledford also explained that the first two or three years of a child's life are critical in the learning process and that parents are the key. Children whose parents spend time reading to them and working with them on their language, math and cognitive skills are farther ahead than children whose parents do not. Because parent involvement is essential to improving a child's learning before they reach kindergarten, rather than trying to catch them up afterward, Ledford and Cantrell proposed beginning a program to raise parents' awareness of this growing problem. They agree that the solution lies in educating parents and training them to use tools they already have access to every day. "Right now we are just in the planning stages of this campaign. But we are partnering with the PR department of Drake Enterprises to help spread the word any way we can," Ledford said. "The solution is real simple: make parents aware and get them actively working with their children; and this doesn't involve something that they don't already have access to." Here in Macon County the statistics are slightly better than the national average: only 17% are highly at risk and 36% are at some risk. There are already a few Pre-K programs in Macon County, such as the Reading Rover and More At Four. However, Ledford and Cantrell believe that there is more that can be done and that a goal of 80% of students reaching kindergarten prepared is very reasonable. Although this program will not be using school board funding, members of the board heartily approved the plan and unanimously voted to continue with the project.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe text discusses a school board meeting and an Early Childhood Literacy Campaign. It focuses on educational initiatives and statistics related to children's cognitive development and parental involvement. There is no inappropriate language, adult themes, illegal activities, or harmful content. The text is entirely suitable for public use and educational environments." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1281-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.maconnews.com/news/community/1105-board-renews-commitment-to-childhood-literacy-campaign", "date": "2014-07-22T23:38:33", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997869778.45/warc/CC-MAIN-20140722025749-00200-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.978936255, "token_count": 468, "score": 2.75, "int_score": 3 }
<urn:uuid:a8c5566d-c161-4d59-8f56-c8c02b783c17>
SHIRLEY -- As we walk along the streams and rivers of New England's small bucolic towns, we often stumble upon fieldstone walls and concrete pilings, shadows of the former hustle and bustle of numerous busy sawmills. At the recent opening of the Shirley Historical Society's latest exhibit, "Shirley's Sawmills Over the Centuries," SHS Curator and Director Meredith Marcinkewicz took visitors through the history of that industry, highlighting a dozen 18th- and 19th-century sawmills that once stood along Mulpus and Catacunemaug Brooks and the Nashua River. Sawmills were the most common type of mills found in most 19th-century New England towns, and by 1840, said Marcinkewicz, America boasted more than 31,000 sawmills. Most were owned and operated by farmers of above-average means, who often ran them seasonally, as water levels and the demands of their work permitted. Due to the expense of transporting high-weight, low-value lumber, most sawmills served their local neighborhoods. Shirley started seeing mills beginning around 1739, with a clothier mill on the Squannacook River in a part of West Groton that became Shirley in 1753, and Samuel Hazen's first saw and gristmills on Catacunemaug Brook around 1748. "History is all around us," Marcinkewicz said, pointing to a slide of the red Umbagog mill building off of Leominster Road. "Parts of this historic Shirley sawmill may have been here 150 years, yet hundreds of people pass it every day and don't even notice it." Marcinkewicz said she found much of the history of Shirley's sawmills in the Reverend Seth Chandler's "History of Shirley," published in 1883, and in Ethel Bolton's "Shirley Uplands and Intervales," published in 1914, but she also had help from the Farnsworth and Jurga families, as well as Paul Pryzbyla and other local families. Farnsworth Lumber Co., founded by Samuel Farnsworth in 1837, is still operating in Shirley today. According to Marcinkewicz, Farnsworth's actually cut the floorboards of the museum. Historic and current photographs of Farnsworth's can be viewed at the museum. The sawmill display includes photographs, maps, a collection of antique wooden planes, various woodworking tools, and Melvin Longley's collection of wood cut from 30 different kinds of trees that grow in Shirley. The exhibit will remain open through April.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"historical context\", \"score\": 0}\n```\n\n**Explanation:**\nThe text discusses the historical context of sawmills in Shirley, Massachusetts, and does not contain any unsafe or inappropriate content. It is suitable for educational and public use, providing historical information without any offensive or harmful elements." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1282-0", "llm_safety_score": 0, "reason": "historical context" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.sentinelandenterprise.com/community/ci_22510504/shirleys-history-logged-sawmill-sites?source=rss", "date": "2014-07-23T00:11:53", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997869778.45/warc/CC-MAIN-20140722025749-00200-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9712509513, "token_count": 551, "score": 2.625, "int_score": 3 }
<urn:uuid:5000b8b4-493b-4933-8859-ae1e07945c08>
From a video produced by LearnLiberty.org, a project of the Institute for Humane Studies: "Prof. Don Boudreaux explains what economists mean when they talk about unintended consequences. Essentially, unintended consequences are the large outcomes that emerge from the actions made by many individuals. These outcomes can be good or bad. Therefore, when analyzing various polices, we must be extremely careful to distinguish between intentions and results." Boudreaux concludes the video with this: "We live in this incredibly complex world. When we take any action, we know that the consequences of those actions are going to extend out very far. We can see those consequences only a little bit in front of us. We can't trace them out fully. And it applies whether or not you believe in big government, tiny government, and medium-sized government. Yes, it's difficult in many cases to trace out how the incentives will have real-world effects. But that difficulty does not excuse us from the task of pursuing it. We can't just simply say, oh the intentions of the policymakers are good, therefore we can be assured that the results will be good. That's cheating. We just can't do that. That's very bad public policy." Understanding this is especially important as we in Wichita and the surrounding area prepare to undertake a comprehensive government plan for sustainable communities. The video's page is Unintended Consequences, and the video can also be viewed at YouTube.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"educational content with no unsafe elements\", \"score\": 0}\n```\n\nThe text provided is from an educational video discussing unintended consequences in economics and public policy. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and receives a score of 0." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1284-0", "llm_safety_score": 0, "reason": "educational content with no unsafe elements" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://wichitaliberty.org/free-markets/intentions-and-results/", "date": "2014-07-24T08:25:30", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888216.78/warc/CC-MAIN-20140722025808-00136-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9431426525, "token_count": 316, "score": 2.625, "int_score": 3 }
<urn:uuid:b19fdf50-815e-450f-adfa-6bd421a3ec00>
World Renowned Expert Introduces Climate Change Course to the E&EM Curriculum The E&EM program introduced a new course in the fall 2009 semester on Climate Change: Policy, Impacts and Response. Taught by internationally recognized climate change expert Dr. Rachael Jonassen, the course covered climate change science, impacts, mitigation, adaptation, national programs for addressing climate change and international issues. Dr. Jonassen was a lead author of the Strategic Plan for the U.S. Climate Change Science Program and the recently released A Federal Leader’s Guide to Climate Change (2009). Dr. Jonassen leads the climate change practice at LMI, a not-for-profit government consulting firm. She formerly was Program Director at the National Science Foundation, leading research into the carbon cycle as part of the U.S. Global Change Research Program. Dr. Jonassen also served on the interagency working group for carbon cycle, where she helped manage the first State of the Carbon Cycle Report, led the effort at international cooperation on bilateral climate change agreements, and organized the first all-investigators meeting on the carbon cycle. Before that, she maintained a major research program on local impacts of climate change for twenty-five years as a professor of geology at Kent State University.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"No unsafe content\", \"score\": 0}\n```\n\nThe provided text is entirely focused on introducing a course on climate change and does not contain any unsafe, inappropriate, or harmful content. It is suitable for educational environments and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1285-0", "llm_safety_score": 0, "reason": "No unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.gwu.edu/~eemnews/previousissues/f09_art9.html", "date": "2014-07-24T08:44:05", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888216.78/warc/CC-MAIN-20140722025808-00136-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.917702198, "token_count": 266, "score": 2.71875, "int_score": 3 }
<urn:uuid:cefdcc31-5cb1-47f3-a65d-5183e9dd361c>
Since a historian, armed with hindsight, can see almost all the angles to a story that were not available to its characters, he can see the larger picture and put things in context. Admiral Patricio Montojo, commander in chief of the Navy in the Philippines, was recalled to Madrid to explain the Spanish defeat in Manila Bay. Montojo was handicapped from the start, as can be seen from the torrent of urgent cables he sent to Madrid appealing for supplies and ammunition. Montojo, for example, was promised ''protected cruisers'' to supplement the floating antiques euphemistically called the Spanish Far Eastern Fleet under his command. When the Isla de Cuba and the Isla de Luzon finally arrived in Manila, he found them mere gunboats. Montojo's cruisers, the Reina Cristina and the Castilla, had their defects. Guns were missing from both. The impotent Castilla was a leaking wooden contraption with powerless engines, and had to be towed everywhere she went--including the scene of battle. Mines and torpedoes Manila did have, but they were so barnacle-encrusted, old-fashioned and sparsely distributed that they were practically useless. Guns were to be found in Corregidor and El Fraile, but, like the ships and mines, they were more fit for museums than for modern warfare. Little wonder they did not resist when the American squadron entered Manila Bay. Spanish guns had no modern sighting and range-finding devices. Worse, they were low on ammunition. Manila was defenseless, and yet Montojo, to save Spanish honor, had to put up a fight. In his defense, Montojo prepared papers that would absolve him of his defeat in Manila Bay. A sampling of the cables illustrates his pathetic position. On March 26, 1898 Montojo cabled Madrid: ''... I have been actively taking all precautions. Torpedoes and boats few and deficient. I await superior orders. I have no instructions.'' Expecting a message of resignation in the face of defeat, Montojo was surprised by Madrid's reply on March 27, 1898, which said: ''... approve all precautions taken in these circumstances regretting not being able to send reinforcements since they are needed here.'' On April 11, Montojo warned his superiors that the Americans ''have more than 50 cannons. Mean speed 17 knots. They will come as soon as war is declared.'' On April 23, Montojo asked for further advice: ''... Before the immense superiority of the enemy's squadron of eight good ships against four deficient ones, I met with my captains and our majority opinion is to defend Subic Bay, leaving our squadron there in expectation of being able to take advantage of a favorable opportunity to defeat the enemy in detail or by surprise... I pray Your Excellency answer me whether you approve or not.'' The Navy minister in Madrid replied promptly on April 24: ''... Received your telegram dated yesterday.'' Montojo wanted Dewey to certify: first, that there were no shore fortifications or submarine mines in Subic; second, that the destruction of the Spanish fleet would have been more complete and devastating in Subic than Cavite because of the depth of the water; third, that the Americans did not find the Spanish unready, that they put up a good fight, but that they lost ''not for lack of valor but principally because we had poor ships.'' Dewey, magnanimous in victory, granted Montojo's request and concluded with these words: ''... The fighting of your flagship, which was singled out for attack, was especially worthy of a place in the traditions of valor of your nation...
I very much regret that calumnies have been cast against you, and am confident that your honor cannot be dimmed by them.'' This was to be the first time the Americans upheld Spanish honor; the second time would be in the taking of Manila. PHILIPPINE DAILY INQUIRER, April 20, 1998. (http://www.fapenet.org/ncc/ncc/news/honor.txt)
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"historical context\", \"score\": 0}\n```\n\n**Explanation:**\nThe text discusses historical events and does not contain any unsafe, inappropriate, or offensive content. It provides a detailed account of the situation faced by Admiral Patricio Montojo during the Spanish-American War, focusing on the challenges and decisions he made. The language used is respectful and the themes are historical and educational in nature, making it fully appropriate for all audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1286-0", "llm_safety_score": 0, "reason": "historical context" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.spanamwar.com/montojophil.htm", "date": "2014-07-24T08:25:35", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888216.78/warc/CC-MAIN-20140722025808-00136-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9755157232, "token_count": 866, "score": 2.75, "int_score": 3 }
<urn:uuid:2c8540d8-b074-4bbd-829c-8fe404e0ff7e>
Renke Samusch, Dominik Baumecker and Moritz Zockler
Web 2.0 im Fremdsprachenunterricht
Renke, Dominik and Moritz, who are all teacher trainees, have written a very interesting and detailed article which explores the use of comic and video creation software for teaching storytelling in a motivating and engaging way. In the article they mention, in particular, Bitstrips, xtranormal, pixton and strip generator, but they also provide a link to Alan Levine's wiki resource, acknowledging that many more comic and video creation tools exist. Renke, Dominik and Moritz believe that students become more engaged in the writing process when they realise that they can use these tools to bring their own stories alive by adding a visual element through pictures and video. Being able to use the tools provides a great way of encouraging student creativity. The choice of images/characters and how they are put together is limited only by the student's imagination. Also, the student's final production can then form the basis of a verbal presentation to the rest of the class, which helps in practicing and developing spoken language skills. Obviously this can also lead to great opportunities for team work where groups of students are tasked with producing their story and presentation. As a bonus, Renke, Dominik and Moritz point out that organisational skills and multimedia skills are also learnt during the process, even though they are not the main focus of the task. Renke, Dominik and Moritz feel that younger students may need more support in using such tools and that, in such cases, it "is crucial" that their teachers are competent in using the tools before introducing the exercise. This is often not the case for older students, who are very quick to pick up new programs and may even teach their teachers a thing or two! As well as students creating work, teachers can also prepare stories in advance, thus controlling "the quality of the language" to be used. This can be good for practicing reading skills while still allowing students to add their own creativity to the final visualization. However, Renke, Dominik and Moritz warn that the topic should be chosen carefully so that it will inspire students to produce their comic/video. Teacher-prepared comics/videos could also be used as the initial prompt for creative writing, maybe based around the current vocabulary that is being studied. The students use this work as the basis for their story and presentation. Renke, Dominik and Moritz believe this is a great way of promoting storytelling and can be used with all age groups. Through these tasks, students are introduced to the idea of building plots and using dialogue appropriately. Encouraging students to role play their final presentations is another way of engaging and motivating the students in the writing process. Renke, Dominik and Moritz also provide good examples of where these tools can be used for more than just storytelling, such as for learning:
- cultural studies
- history etc.
They also note some potential drawbacks to watch out for:
- time constraints - for learning and using the tools
- overusing the tools - trying to use all the features
- overexposure to the tools - reducing student motivation
You can buy a copy of the book online directly from the publisher (Verlag Werner Hülsbusch); if you have any queries, email Verlag Werner Hülsbusch at [email protected]
The release of 'Web 2.0 im Fremdsprachenunterricht' is accompanied by a series of free webinars where the authors will present and discuss their articles.
Information about these online meetings, which will be held in the presenter's mother tongue, can be found at Fremdsprachenunterricht 2.0. You can register for any of the webinars at Fortbildung Online. Recordings of the webinars will also be published on the LPM's website.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text is a detailed academic article discussing the use of comic and video creation tools in foreign language education. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and educational environments." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1287-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://usingictinfe.blogspot.com/2013/01/digital-storytelling.html", "date": "2014-07-25T13:40:18", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894275.63/warc/CC-MAIN-20140722025814-00072-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9309648871, "token_count": 823, "score": 2.9375, "int_score": 3 }
<urn:uuid:1a775b48-8c82-4e3b-98cb-22ba23500862>
This question is from the National University of Colombia entrance exam in 2006 (translated from Spanish). The function y = 4cos(x/2) - 3sin(2x) is of period: I learned to find the period of a cosine term by dividing 2π by the coefficient of x, so 2π/(1/2) = 4π I thought, and the period for the sine term would be 2π/2 = π, and the least common multiple of these seems like it would make the answer 4π, but the answer sheet says D, 2π, and I can't figure out why. I have their entrance exam in two days and I need to understand all this. Please help me. Thank you
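One way to sanity-check the reasoning is to test candidate periods numerically. The sketch below is my own illustration, not part of the exam. With the function exactly as written, only 4π passes the test, which matches the questioner's logic; the answer key's 2π may reflect a slightly different intended expression (for example, 4cos²(x/2) in place of 4cos(x/2), which would indeed give period 2π).

```python
import numpy as np

# Numerically check whether T is a period of f by comparing f(x + T) with f(x)
# on a grid of sample points. Illustration only.
def is_period(f, T, n=2000, tol=1e-9):
    x = np.linspace(0.0, 30.0, n)
    return bool(np.allclose(f(x + T), f(x), atol=tol))

f = lambda x: 4 * np.cos(x / 2) - 3 * np.sin(2 * x)

for k in (1, 2, 4):
    T = k * np.pi
    print(f"T = {k}*pi -> period? {is_period(f, T)}")
# Output: only T = 4*pi returns True for the function as written,
# i.e. the least common multiple of 4*pi (cosine term) and pi (sine term).
```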
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"educational content\", \"score\": 0}\n```\n\n**Explanation:**\nThe text is an educational query about a mathematical problem and does not contain any unsafe or inappropriate content. It is suitable for public use, educational environments, and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1290-0", "llm_safety_score": 0, "reason": "educational content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://mathhelpforum.com/trigonometry/87899-period-function.html", "date": "2014-07-28T16:43:58", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261249.37/warc/CC-MAIN-20140728011741-00302-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9806033373, "token_count": 141, "score": 2.859375, "int_score": 3 }
<urn:uuid:8e6739de-2da9-40d3-87f3-bf9d539b78a6>
"We just started our new wastewater treatment plant," says George. "I think that will drastically reduce our BOD, although this again is an example of our environmental policy. We do not have to treat the water any more than we already do. We can send it to the local POTW and just pay a BOD surcharge, which is what we've done for years. But we're going to meet the goals." Akzo Nobel Aerospace Coatings' Waukegan plant said it would reduce volatile organic compound (VOC) emissions from wash tanks by 84 percent by cutting the number of solvent-containing parts washers and by using "greener" cleaners. It also committed to decreasing hazardous materials use, overall VOC emissions and electrical energy use. Balancing the reductions with production increases poses the greatest hurdle, says Akzo Nobel's Flack. "The idea is to continue to increase production, increase efficiencies and try to minimize any additions of emissions, waste ," those key areas that we've worked so hard to try to reduce." The DuPont Spruance plant pledged to increase the use of recycled Kevlar materials from 0 pounds (lb) per year to 180,000 lb per year by recycling prepolymers generated during process startups and shutdowns. In addition, the facility said it would cut air toxics, hazardous waste and solid waste. "We're well on our way to meeting our commitments," maintains DuPont's Dunn. "Our annual [carbon disulfide] emissions were 50,000 pounds ," we said we'd take that to zero, and we've done that. We said we'd reduce nonhazardous waste in Kevlar, and we've cut that by over 90 percent already." Johnson & Johnson's future commitments are based on the joint efforts of all 51 participating facilities. The company said it would reduce carbon dioxide (CO2) emissions by 2 percent by improving management practices and using alternative energy sources. It also committed to further reductions in raw material use, water use and solid waste generation. "It is a big challenge," emphasizes Johnson & Johnson's Pai. "Because we've achieved so much already under our old goals, now it becomes harder and harder to achieve additional improvements." And some reductions are more difficult to achieve than others, adds Pai. "For example, CO2 emissions ," that is the biggest challenge we have at the moment because our company is growing rapidly," she stresses. "This is a goal that measures absolute reductions. What we have done is tried to implement the use of more renewable energy ," for example, using solar energy, wind, hydroelectric power." These super-achievers and other Performance Track participants have worked long and hard to achieve environmental excellence. They are quick to point out, however, the program's benefits. "Performance Track has gotten us a lot of attention from government," says Madison Chemical's George. "One of our senators, Sen. Evan Bayh, dropped by the other day for photo opportunities and to announce some programs he's supporting. The commissioner of the Indiana Department of Environmental Management dropped by because they're very pleased with the way we conduct things. When you take these kinds of persons through the plant, it really makes an impression on the employees." DuPont's Dunn said his facility also was values the recognition for its achievements. "I put myself in the shoes of the community, and I realized that if we are doing the things we're being recognized for and say we're doing, it's natural that we would be expected to be part of Performance Track," he says. 
Johnson & Johnson's Pai sees the program as "that additional push, that incentive" the company needs to meet its next-generation goals. And, again, the formal recognition aspect of the program is a boon. Program members also benefit from EPA-enabled information sharing, notes Julie K. Spyres, Performance Track's director for program development and member services. EPA holds an event each spring to allow members to share best practices and network. The event can be a golden opportunity for those seeking innovative solutions to persistent problems. In addition, EPA's regions host events in which program participants have "opportunities for a lot more dialogue in lieu of a presentation style," she adds. Performance Track is still a work in progress, and EPA is evaluating additional incentives that might one day benefit program participants. For example, the agency recently proposed one rulemaking that would allow hazardous waste generators within Performance Track to extend hazardous waste storage, and another that would reduce regulatory paperwork for Performance Track facilities. And soon to be unveiled is a database of leadership practices, or best practices, says Spyres. "All of our members will be highlighted in that database," she says. "It's going to cover a lot of environmental best practices from all over the country that we learn about from different awards programs." Also in the works is a mentoring program, notes Spyres. "Not only will it be for our existing members to mentor potential members," she explains, "but we're also going to use it as a way to try to link up our membership pool ," so they have a way of asking very specific questions for which they would like us to help find answers." Moreover, the agency teamed up with the U.S. Occupational Safety and Health Administration (OSHA) to explore opportunities for joint workshops and other activities for facilities that are members of both Performance Track and OSHA's Voluntary Protection Program (VPP). The agencies also are looking into ways to recognize facilities that participate in both programs. "We really want to use [Performance Track] to improve our ability at a facility and company level to measure environmental performance and to demonstrate that we can go from a very compliance-based approach to a performance-based approach," emphasizes EPA's Fiorino. "At some point, my hope is that we'll really start to build, with legal authority perhaps, some sort of a sharing approach to regulations ," where there are high performers who meet not a different set of performance expectations, but that EPA would sort of oversee and monitor differently, given their track record." OK, so maybe Performance Track doesn't represent a complete turnaround in EPA's regulatory approach. But at least it's a start. The next open application period for EPA's National Environmental Performance Track program begins on Feb. 1, 2003. For more information, visit www.epa.gov/performancetrack. CP
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n*Explanation:*\nThe provided text discusses environmental policies and initiatives related to wastewater treatment, VOC emissions, and other environmental goals. It does not contain any unsafe, inappropriate, or harmful content. The language is respectful, and the themes are focused on environmental and industrial practices, which are suitable for public and educational environments.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n*Explanation:*\nThe provided text discusses the National Environmental Performance Track program and its objectives. It does not contain any unsafe, inappropriate, or harmful content. The language is respectful, and the themes are related to environmental performance and regulatory compliance, which are suitable for public and educational environments." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1291-0", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1291-1", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.chemicalprocessing.com/articles/2002/224/?start=3", "date": "2014-07-28T16:39:27", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261249.37/warc/CC-MAIN-20140728011741-00302-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9679868817, "token_count": 1317, "score": 2.59375, "int_score": 3 }
<urn:uuid:1207a2e3-b349-4dc0-b974-2be028a0f46b>
The promising new treatments for cancer that you describe are good news (13 October, p 38). However, such treatments have inherent limitations because they are based on a fundamentally mistaken view of cancer centred on the role of individual genes. This prevents true understanding of how cancer develops and how it can be cured. One problem is that many of the same genes said to cause cancer are also involved in healthy development. Consideration of genes alone cannot reveal the cause of the phenotypes that distinguish cancers from normal developing tissue. That means we are still treating the effects and not the causes of cancer. In a monograph published last year, I presented a new paradigm of how cancer works (arxiv.org/abs/1110.5865). Genes make up less than 5 per cent of the human genome. The other 95 per cent is non-coding. The causes of cancer are networks in the ...
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe text discusses scientific and medical topics related to cancer treatments and research. It does not contain any unsafe, inappropriate, or offensive content. The language is respectful and the themes are related to medical and scientific advancements, which are appropriate for general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1292-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.newscientist.com/article/mg21628920.400", "date": "2014-07-28T16:26:24", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261249.37/warc/CC-MAIN-20140728011741-00302-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9386745095, "token_count": 213, "score": 2.9375, "int_score": 3 }
<urn:uuid:339f0934-af58-467c-93ac-c779a7af93c8>
Odo I, Count of Troyes
Odo (or Eudes) I (died 10 June 871) was the Count of Troyes from 852 to 859. His ancestry is not known for certain. Onomastics would place him in the extended family of Odo I, Count of Orléans. The most recent studies make him a son of Robert, Count of Oberrheingau and Wormsgau, and Waldrada, a daughter of Odo of Orléans. If this theory is true, he was the elder brother of Robert the Strong. Like the rest of his family, he was a loyal follower of Charles the Bald. Though well-endowed with estates in Austrasia, like his brother Robert, he abandoned these after the Treaty of Verdun (843) in order to rejoin Charles the Bald. In 846, he was granted lands in the region of Châteaudun, made Count of Anjou, and wed to Wandilmodis. In 852, after the death of Aleran, Count of Troyes, he was nominated to hold his vacant office and his brother Robert succeeded him in Anjou. In 858, Charles named his son Louis the Stammerer to the ducatus Cenomannicus and Robert, angered by his loss of influence there, revolted and called in the aid of Louis the German. Odo soon joined him. The brothers were subsequently expelled from their counties and Troyes confiscated and confided to one Rudolph. Odo may have recovered Troyes after Rudolph's death (866), but perhaps not. In any case, his brother made his submission in 861 and was given the March of Neustria. Odo's eponymous son was found in Troyes in 876. By Wandilmodis, he left three children:
- de Saint Phalle, Édouard. "Comtes de Troyes et de Poitiers au IXe siècle : histoire d'un double échec." In Christian Settipani and Katharine S. B. Keats-Rohan, Onomastique et Parenté dans l'Occident médiéval. 2000.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"historical content with no unsafe elements\", \"score\": 0}\n```" ], "llm_harm_scores_by_chunk": [ { "custom_id": "1293-0", "llm_safety_score": 0, "reason": "historical content with no unsafe elements" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://en.wikipedia.org/wiki/Odo_I,_Count_of_Troyes", "date": "2014-07-30T01:39:17", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268363.15/warc/CC-MAIN-20140728011748-00350-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9703840613, "token_count": 490, "score": 2.640625, "int_score": 3 }
<urn:uuid:5080b6b8-6611-46b2-821a-1f79525807f8>
The Survival of the Fittists
Understanding the role of replication in research is crucial for the interpretation of scientific advances
An Exception to the Rule
The stage is now set for us to shift our gaze to research done in the East. Chinese medical research, for example, is almost invisible to Western scientists, but the reverse is not true: Chinese researchers seem well aware of major findings in the West, although they are probably less familiar with the more minor publications. Keeping in mind the phenomenon of shrinking effect sizes, if we looked carefully at the findings of Chinese medical researchers as they strive to replicate Western medical findings, we would expect to find the same shrinkage as is the rule in the West. Is this what happens? Zhenglun Pan, of Shandong Provincial Hospital in Shandong, China, and a team of international scholars did a large meta-analysis of dozens of studies done in China that were meant to be replications of earlier studies. They then redid the same meta-analysis with studies from other Asian (but non-Chinese) researchers, as well as non-Asian, non-Chinese researchers. The studies they considered, in the field of genetic epidemiology, seemed to find effect sizes at or surpassing those found in the alpha study. The authors call this a "reverse Tower of Babel" bias. Although the bias was greatest in Chinese studies, it was also found, to a lesser extent, in non-Chinese Asian research. Replication studies on the same subject by non-Chinese, non-Asian researchers found the smallest effect sizes of all. (Summaries of two of the meta-analyses by Pan and colleagues are shown on the facing page.) Several speculative reasons for this effect come to mind—perhaps it is a matter of cultural norms; perhaps there is an interaction between treatment and ethnicity. Thus far we must await further research to determine its sources.
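A toy simulation can make the shrinking-effect-size pattern concrete. The sketch below is my own illustration with invented numbers, not data from Pan's meta-analysis: it shows how publishing the most striking of many noisy first attempts (the alpha study) leads later, unselected replications to look smaller on average.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.30    # assumed true standardized effect (invented)
study_noise = 0.15    # sampling error of each study's estimate
n_candidates = 20     # noisy first attempts; the most striking one is published
n_replications = 50   # later replication attempts, unselected

# The "alpha" study is the largest estimate among the candidates.
alpha_study = (true_effect + study_noise * rng.standard_normal(n_candidates)).max()

# Replications are plain draws around the same true effect.
replications = true_effect + study_noise * rng.standard_normal(n_replications)

print(f"alpha study effect:      {alpha_study:.2f}")
print(f"mean replication effect: {replications.mean():.2f}")
# The replications cluster near the true effect and therefore look "shrunken"
# relative to the selected alpha study -- the usual Western pattern that the
# article contrasts with the "reverse Tower of Babel" bias.
```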
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text discusses scientific research, replication studies, and cultural differences in medical research without any inappropriate language, harmful themes, or unsafe content. It is suitable for educational and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1294-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.americanscientist.org/issues/pub/2012/5/the-survival-of-the-fittists/4", "date": "2014-07-30T01:00:09", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268363.15/warc/CC-MAIN-20140728011748-00350-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9581467509, "token_count": 387, "score": 2.53125, "int_score": 3 }
<urn:uuid:e1f40569-5849-4fa9-b923-fa2603eced20>
SEATTLE - The Duwamish River has long been known as a Superfund site, running through the oldest and most heavily industrialized zone in the city. But if you stayed out of the water, would you be just as healthy as somebody who lived in Magnolia or Laurelhurst? The answer, according to a new report, is a resounding no. The report, assembled by the Duwamish River Cleanup Coalition/Technical Advisory Group and Just Health Action, finds that air pollution from diesel exhaust, toxic emissions from industry and even noise levels play a role in cutting life expectancy in the neighborhoods of South Park, Georgetown and parts of Beacon Hill. Report authors studied nine Seattle zip codes and compared rates of disease, from childhood asthma to lung cancer to diabetes. The neighborhoods in question are in zip code 98108. The report analyzes everything from the percentage of tree canopy to the amount of city park space per resident, along with a wide variety of other environmental and economic factors, although it did not consider personal health habits such as smoking or diet, which can affect lung cancer and heart disease rates, all of which are higher in 98108 than in other zip codes in the city. South Park and Georgetown, especially, mix heavy industry with residential neighborhoods where people live. The authors conclude that people who live here are living shortened lives-- eight years shorter than Seattle and King County on average, and 13 years less than neighborhoods like Laurelhurst, which is far from the foundries and machine shops that are common in 98108. For more information: http://duwamishcleanup.org/
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text is a factual report about environmental and health issues in certain neighborhoods of Seattle. It does not contain any unsafe, inappropriate, or harmful content. The language is respectful and the themes are related to public health and environmental concerns, which are appropriate for general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1295-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.king5.com/news/Where-you-live-affects-how-long-you-life-200338141.html?ref=next", "date": "2014-07-30T02:10:40", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268363.15/warc/CC-MAIN-20140728011748-00350-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9566795826, "token_count": 317, "score": 2.5625, "int_score": 3 }
<urn:uuid:70dda50a-98d2-4105-a0db-d5cce677d615>
Autism is a behaviorally defined neurodevelopmental disorder that includes persistent deficits in communication, reciprocal social interaction, and play. During the past two decades, family and twin studies have provided evidence for a significant genetic component in autism. The risk of autism in siblings of autistic probands is approximately 45 times greater than that in the general population (1). Moreover, the concordance rate for autism ranges from 36% to 91% in monozygotic twin pairs and from 0% to 24% in dizygotic twin pairs (2–4). Several reports have described a subset of relatives of individuals with autism who display social and language deficits that are qualitatively similar to those of the autistic proband but milder in severity (5, 6). This constellation of impairments, which relatives of probands exhibit at higher rates than do people in the general population, has been characterized as the broader phenotype for autism (7–11). Although evidence for the broad phenotype in families varies across studies (3, 12, 13), investigations of monozygotic twins (7) and of families with multiple-incidence autism (9, 10) have demonstrated support for the concept. In comparisons of families with multiple-incidence autism to families with Down’s syndrome, Piven and colleagues reported significantly higher rates of social and communication deficits and stereotyped behaviors (9) and of psychiatric disorder (10) in the families of individuals with autism. Following up on a study by Folstein and Rutter (2), Le Couteur and co-workers (7) evaluated a group of 28 monozygotic pairs and 20 dizygotic same-sex pairs in whom one or both twins had autism, and they found social and communication deficits in 77% of the monozygotic co-twins (compared to 5% of the dizygotic twins) who were discordant for autism. Rutter and colleagues (8) have suggested that in such families a genetic substrate for autism may produce the social and cognitive deficits that are included in the broader phenotype but that an interaction between the genetic substrate and unknown deleterious environmental factors culminates in a "second hit" that ultimately produces the narrow phenotype for autism. Little is known about the differences in neuroanatomy that could ultimately be associated with the narrow and broader behavioral phenotypes for autism. The findings from imaging studies of individuals with the narrow phenotype have not been consistent. Although several studies have indicated higher than normal overall brain volume (14–16) and high regional (parietal, temporal, occipital) brain volumes (17), a cross-sectional study demonstrated that high volumes of cerebral gray, cerebral white, and cerebellar white matter in early childhood were followed by low volumes in late childhood and adolescence (18). In a case report that compared the neuroanatomy of one monozygotic twin pair discordant for the narrow phenotype of autism to five age- and gender-matched unaffected comparison subjects, we found dysfunction of two separate brain clusters (19). Relative to the comparison subjects, both the twin with autism and his co-twin exhibited lower volumes in the frontal lobe and the superior temporal gyrus. The twin with autism also exhibited lower volumes in noncortical structures, including the caudate nucleus, the amygdala, the hippocampus, and the cerebellar vermis. 
On the basis of the findings from that preliminary report, in the present study we investigated the neuroanatomic phenotypes of 16 pairs of monozygotic twins of which seven twin pairs were concordant and nine twin pairs were discordant for the narrow phenotype for autism. We sought to examine the extent to which twin pair differences in clinical phenotype were associated with differences in neuroanatomic phenotype. In this study we examined volumetric differences in lobar regions and the cerebellum. Since large ventricles have been found in previous studies of individuals with autism (14, 15), ventricular volumes were also included in the analyses. (Mesial temporal and subcortical structures will be investigated in a subsequent study.) We hypothesized that 1) clinically concordant twin pairs would exhibit a higher degree of neuroanatomic concordance than clinically discordant twin pairs and 2) all co-twins within discordant pairs would exhibit volumetric differences from comparison subjects in the lobes of the cerebral cortex, whereas only the autistic co-twins within the discordant pairs would exhibit differences in the ventricles and the cerebellum. The original group consisted of 54 children, including 18 pairs of monozygotic twins and 18 age- and gender-matched unaffected peers. One twin pair was excluded from the analyses for this report because their scans were acquired in the sagittal plane, thereby producing segmented data that were not comparable to segmented data acquired in the coronal plane (20). A second twin pair was excluded for reasons to be described. The final study group consisted of 48 children, including 16 pairs of monozygotic twins (N=32) and 16 age- and gender-matched unaffected peers. Fourteen of the twin pairs, and their matched comparison subjects, were boys. The mean age of the twin pairs was 8.4 years (SD=2.6, range=5.3–13.8). The mean age of the comparison subjects was 8.3 years (SD=2.4, range=5.4–13.9). Families were recruited through the Autism Society of America, the National Alliance for Autism Research, the Kennedy Krieger Center for Autism and Related Disorders, and clinical neurologists at Kennedy Krieger Institute and Johns Hopkins Hospital. Subject recruitment followed a two-step process. A medical history form and the Autism Behavior Checklist (21) were initially mailed to interested families in order to screen twin pairs for potential participation in the study. Subjects with a genetic disorder or a history of status epilepticus were excluded. The behavioral criterion for potential participation in the study was based on the Autism Behavior Checklist: if at least one of the twins scored above 57 (representing the narrow phenotype for autism) on the Autism Behavior Checklist, the family was contacted by telephone by a project staff member. The Autism Diagnostic Inventory—Revised (22) was administered (separately for each co-twin) in a series of telephone interviews with a parent of the twins by a project staff member (L.A.S.) who was trained in the reliable administration of the instrument. If the diagnosis of autism was confirmed by the Autism Diagnostic Inventory for at least one co-twin of each twin pair, the family was invited to the Kennedy Krieger Institute for additional assessment. At our research center, the Autism Diagnostic Observational Scale—Generic (23) was administered to each co-twin by a reliable examiner (L.A.S.). 
The final diagnosis of each child was based on both the Autism Diagnostic Inventory and the Autism Diagnostic Observational Scale. In order to qualify for the diagnosis of autism, each child had to meet the criteria for autism on the Autism Diagnostic Inventory and score within one point of meeting the criteria for autism on the Autism Diagnostic Observational Scale. No twin pair that qualified for the study on the basis of the Autism Behavior Checklist was then excluded on the basis of the Autism Diagnostic Inventory. However, for three twin pairs, the two instruments produced discrepant results regarding concordance between co-twins: one twin pair identified as discordant by the Autism Behavior Checklist (i.e., only one co-twin in the pair obtained a score higher than 57) was concordant according to the Autism Diagnostic Inventory, and two twin pairs identified as concordant by the Autism Behavior Checklist were discordant according to the Autism Diagnostic Inventory. In addition, one of the twin pairs whose scores indicated concordance on the Autism Behavior Checklist and discordance on the Autism Diagnostic Inventory was ultimately excluded from the study on the basis of the Autism Diagnostic Observational Scale, on which neither co-twin scored within the autistic range. The unaffected singleton comparison subjects, individually matched by age and gender to each twin pair, were recruited from the office of a local pediatrician. Letters describing the study were sent by the pediatrician to parents of patients who, according to their medical records, did not have a history of developmental delays, emotional disturbance, or learning disabilities. Parents then contacted one of us (W.R.K. or C.P.B.) if they were interested in participating in the study. They were screened first by telephone (in order to rule out current emotional, developmental, or learning difficulties) and then invited to participate in the study if the telephone interview confirmed that their children met the inclusion criteria. Developmental diagnoses, including language delay, and psychiatric diagnoses (e.g., attention deficit hyperactivity disorder) were ruled out by using parental report and child psychiatric checklists (e.g., the Child Behavior Checklist). Accordingly, none of the comparison subjects had any known neurologic, developmental, learning, or psychiatric disorder. Parents of all subjects signed informed consent forms approved by the institutional review board of Johns Hopkins Hospital. DNA fingerprinting probes were used to confirm zygosity in all twin pairs. Eight independent loci were tested for each twin pair. The DNA profiles for all twin pairs were statistically identical at every locus, indicating that the probability of monozygosity for each twin pair was approximately 99.99%. Image Acquisition and Processing Magnetic resonance imaging (MRI) was performed on a 1.5-T GE Signa scanner (GE Medical Systems, Milwaukee). A three-dimensional high-resolution MRI scan was acquired for each subject by using a T1-weighted spoiled gradient echo sequence (TR=35–45 msec, TE=5–7 msec, number of excitations=1) with a flip angle of 45°, matrix size of 256×128, and field of view of 20–24 cm, yielding 124 coronal slices with a slice thickness of 1.5 mm and an in-plane resolution of 0.131 mm. The image data were imported into the program BrainImage (A.L. Reiss, Stanford University; http://spnl.stanford.edu/tools/brainimage.htm) for visualization, processing, and quantitation. 
The data were prepared for measurement by correcting for radio frequency inhomogeneity artifacts, removing voxels representing nonbrain material, and reformatting the images to produce data sets representing cubic voxels (24). These data sets were opened into the multiplanar visualization module of BrainImage, so that three orthogonal representations of the data could be viewed simultaneously. The isolated brain tissue was subdivided into cerebral lobes and subcortical, brainstem, and cerebellar regions according to a method based on the revised Talairach atlas (25) that is specific for measurement in pediatric study groups (26–28). With this approach, high levels of sensitivity and specificity are achieved for all volumes of lobar brain regions based on the revised Talairach atlas (28). Each region was then segmented into gray, white, and ventricular compartments by using a constrained fuzzy algorithm that assigns voxels to one or more tissue categories on the basis of intensity values and tissue boundaries (29). Intraclass correlation coefficients (ICCs) were calculated in order to determine the degree of concordance in brain volumes in both the concordant and discordant twin pairs. We used r-to-z transformations to compare the magnitudes of the ICCs for the two groups. Multiple analyses of covariance (ANCOVAs) were conducted in order to compare the brain volumes of the discordant twin pairs and singleton comparison subjects. Although limiting our volumetric analyses to the discordant pairs and comparison subjects reduced statistical power to some extent, this strategy permitted us to focus specifically on the extent to which discordance in clinical phenotype was reflected in differences in cerebral, cerebellar, and ventricular regions of interest. These analyses were conducted with diagnostic group as the independent variable, age and whole brain volume as the covariates, and volumes of neuroanatomic regions of interest as the dependent variables. Post hoc analyses were conducted with the Bonferroni-Dunn statistic to correct for multiple comparisons. Linear regression analyses were conducted in order to determine the relation between brain volumes and severity of autism (represented by scores on the Autism Diagnostic Inventory and Autism Diagnostic Observational Scale) in all twins with autism and their co-twins. Seven pairs of twins had scores on the Autism Diagnostic Inventory and Autism Diagnostic Observational Scale that qualified both of them for the diagnosis of autism; those twin pairs were designated as concordant for the narrow phenotype for autism. The remaining nine pairs of twins were designated as discordant. Analyses of variance were conducted to compare the scores on the Autism Diagnostic Inventory and Autism Diagnostic Observational Scale (t1) of the twin pairs. The differences between scores of the twins within the concordant pairs were not significant. Within the discordant twin pairs, Bonferroni-Dunn post hoc analyses revealed that the autistic co-twin received significantly higher scores on all but one summary measure; the exception was the rating of abnormal development on the Autism Diagnostic Inventory. Within the discordant pairs, six of the nine co-twins who did not meet the criteria for autism on the Autism Diagnostic Inventory were reported by a parent to have exhibited language or social delays before participation in the study. 
In at least three cases, the delays were severe enough to warrant a diagnosis (before enrollment in the study) of developmental delay or pervasive developmental disorder. Two of the three remaining children whose histories were negative for social and language delay nevertheless exhibited subthreshold scores on the Autism Diagnostic Observational Scale that indicated mild impairment of communication and social skills. According to these criteria, the co-twins of the twins with autism were characterized as exhibiting the broad autism phenotype. We refer to these co-twins as "broad-phenotype co-twins." (It should be noted, however, that one twin pair was discordant for both traditional autism and the broader phenotype.) The mean age of the concordant twin pairs was 9.6 years (SD=2.6), and the mean age of the discordant pairs was 7.6 years (SD=2.4). This difference was not statistically significant. Valid intellectual testing was completed on all comparison subjects, as well as on eight of the 14 twins in the concordant pairs and 17 of the 18 in the discordant pairs. (IQ testing could not be completed on the remaining seven children because of behavioral, attentional, and severe language deficits.) The IQs of the co-twins in the concordant pairs did not differ significantly (p=0.21) from each other, whereas the IQs of the co-twins in the discordant pairs did (t=–2.98, df=13, p=0.01). The IQs of the twins in the concordant and discordant pairs differed significantly from those of the comparison subjects. The twins in the concordant pairs had a mean IQ of 63.2 (SD=22.2), the co-twins with autism in the discordant pairs had a mean IQ of 69.6 (SD=18.7), the broad-phenotype co-twins in the discordant pairs had a mean IQ of 94.4 (SD=13.4), and the comparison subjects had a mean IQ of 123.6 (SD=9.7). After reviewing school records of the participants whose IQs we were unable to assess directly, we found that the frequency of mental retardation was somewhat higher for the twins in the concordant pairs (11 of 14 twins) than for the twins affected with autism in the discordant pairs (three of nine twins) (χ2=5.25, df=3, p=0.07).

Comparison of Concordant and Discordant Twin Pairs

As already stated, the degree of neuroanatomic concordance within twin pairs was determined with the ICC. The Fisher r-to-z transformation was then used to compare the difference in magnitude of the ICCs for the clinically concordant and discordant pairs. As Table 2 illustrates, the ICCs for cerebral gray and white volumes were high for all sets of twin pairs, regardless of degree of concordance in clinical phenotype. In contrast, the ICCs for gray and white cerebellar volumes were significantly lower for clinically discordant twin pairs than for clinically concordant twin pairs. The ICCs for ventricular volumes were somewhat (although not significantly) lower for the clinically discordant pairs than for the concordant pairs.

Neuroanatomic Differences Between Twins in Discordant Pairs

The following analyses were conducted with data from a subgroup consisting of the nine discordant twin pairs and the total group of 16 unaffected comparison subjects. Three groups were compared: co-twins with autism, broad-phenotype co-twins, and unaffected comparison subjects (Table 3). The groups did not differ significantly in whole brain volume. At the cortical level, cerebral gray and white volumes were compared with ANCOVAs, with whole brain volume entered as the covariate. (Although the group differences in whole brain volume were not significant, it was included in the model in order to investigate the extent to which its variation accounted for differences in specific tissue and regional volumes.) Cerebral gray matter volume did not differ between groups. Cerebral white matter volume was 11.5% lower in the co-twins with autism and 7.5% lower in the broad-phenotype co-twins than in the comparison subjects. Both diagnostic group (F=9.2, df=2, 1, 30, p=0.0008) and whole brain volume (F=212.9, df=2, 1, 30, p=0.0001) contributed significantly to the variance in cerebral white matter volume. Follow-up multiple ANCOVAs were conducted to determine whether specific lobar regions accounted for the differences in total cerebral white matter volume among the children with autism, their broad-phenotype co-twins, and the unaffected comparison subjects. Diagnostic group (autistic co-twin, broad-phenotype co-twin, comparison subject) was entered as the factor, whole brain volume was entered as a covariate, and frontal, parietal, temporal, and occipital white matter volumes were entered as dependent variables. The Wilks's lambda values for both diagnostic group (lambda=0.46, F=3.3, df=8, 54, p=0.004) and whole brain volume (lambda=0.11, F=54.1, df=4, 27, p=0.0001) were significant. Relative to the typically developing children, both the autistic and broad-phenotype co-twins had significantly smaller frontal white matter (F=4.8, df=2, 1, 30, p=0.02), temporal white matter (F=11.0, df=2, 1, 30, p=0.0003), and occipital white matter (F=4.3, df=2, 1, 30, p=0.03) volumes. The cerebellar gray matter, cerebellar white matter, and ventricular volumes of the three groups (autistic co-twins, broad-phenotype co-twins, comparison subjects) were compared with an ANCOVA in which whole brain volume was entered as a covariate. The cerebellar gray and white matter volumes did not differ between groups. Both diagnostic group (F=8.2, df=2, 1, 30, p=0.002) and whole brain volume (F=17.8, df=2, 1, 30, p=0.0002) contributed significantly to the variance in ventricular volume. Planned comparisons indicated that these findings were due to significantly higher ventricular volume in the co-twins with autism than in the comparison subjects (p=0.002, Bonferroni-Dunn statistic).

Association Between Behavioral Phenotype and Brain Volumes

Data from all twin pairs were used for these analyses. The association between brain volumes and the behavioral phenotype for autism was determined by calculating the within-twin-pair difference in cerebral gray matter volume, cerebral white matter volume, cerebellar gray matter volume, and cerebellar white matter volume. The resulting difference scores were correlated with within-twin-pair difference scores derived from the sum of the communication and social scores on the Autism Diagnostic Observational Scale. (The summary score was used in order to reduce the number of comparisons.) The difference score for the Autism Diagnostic Observational Scale was not associated with the difference score for either cerebral gray or white matter volume. There was a nearly significant association between the Autism Diagnostic Observational Scale difference score and the difference score for cerebellar gray matter (r=0.48, N=16, p=0.07) and a significant association with the difference score for cerebellar white matter (r=0.55, N=16, p=0.04).
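For readers who want a concrete feel for the twin-concordance statistics reported above, the sketch below shows one common way to compute an intraclass correlation for paired twin volumes and to compare two ICCs with a Fisher r-to-z transformation. It is only an illustration: the data are invented, the one-way ICC(1,1) form and the r-to-z comparison are assumptions for illustration (the paper does not state which ICC variant or exact test was used), and a real analysis would read regional volumes from the segmentation output rather than simulate them.

```python
# Minimal sketch of the twin-pair concordance statistics: a one-way ICC(1,1)
# for an (n_pairs, 2) array of regional volumes, plus a Fisher r-to-z test
# for the difference between two ICCs. All numbers below are invented.
import numpy as np
from scipy import stats

def icc_oneway(pairs):
    """One-way, single-rater ICC(1,1): rows are twin pairs, columns are co-twins."""
    pairs = np.asarray(pairs, dtype=float)
    n, k = pairs.shape
    grand_mean = pairs.mean()
    pair_means = pairs.mean(axis=1)
    ms_between = k * np.sum((pair_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((pairs - pair_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def compare_correlations(r1, n1, r2, n2):
    """Approximate two-tailed p-value for the difference of two independent correlations."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return 2 * stats.norm.sf(abs((z1 - z2) / se))

# Invented cerebellar white matter volumes (cm^3) for 7 concordant and
# 9 discordant twin pairs -- not the study's data.
rng = np.random.default_rng(0)
base = rng.normal(20, 2, 7)
concordant = np.column_stack([base, base]) + rng.normal(0, 0.3, (7, 2))  # similar within pairs
discordant = rng.normal(20, 2, (9, 2))                                   # unrelated within pairs

icc_c, icc_d = icc_oneway(concordant), icc_oneway(discordant)
print(f"ICC concordant = {icc_c:.2f}, ICC discordant = {icc_d:.2f}, "
      f"p for difference = {compare_correlations(icc_c, 7, icc_d, 9):.3f}")
```

In practice the same comparison would simply be repeated for each region of interest (cerebral gray, cerebral white, cerebellar gray, cerebellar white, ventricles).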
The present study is the first, to our knowledge, to use a design of monozygotic twins concordant and discordant for the narrow phenotype for autism to investigate neuroanatomic variation in children with autism spectrum disorder. We found that within discordant pairs, both the autistic and broad-phenotype co-twins exhibited lower white (but not gray) matter volumes in the frontal, temporal, and occipital lobes than did unaffected comparison subjects. The finding that both autistic and broad-phenotype co-twins exhibited low cerebral white matter volume supports our hypothesis and is consistent with the deficits in executive function, higher-order cognitive processing, and language that characterize children with autistic spectrum disorders. Although a caveat to these findings is the potential contribution of the large discrepancy in IQs between the twin pairs and comparison subjects, previous volumetric studies of children with autism provide a developmental context in which to place these findings. Several studies (18, 30, 31) give cross-sectional evidence for an overproduction of cerebral gray and white matter in very young children (<5 years of age) with autism that appears to give way to hypoplasia or "normalization" of gray and white matter in children as they approach adolescence. Our subjects were somewhat older than the subjects used in at least one of these studies, and therefore we may be observing the effects of abnormally slowed cerebral white matter growth only. It is not clear, however, why the autism-specific low volume was limited to white matter. It has been hypothesized that autism is associated with impairment of structural (32) or functional (33–35) connectivity between specific regions of the brain. In a PET study, Castelli and colleagues (35) observed that adults with autism or Asperger syndrome exhibited low functional connectivity between the extrastriate cortex and the superior temporal sulcus at the temporoparietal junction while viewing animated sequences that elicited the description of mental states. In a spectroscopy study, Murphy and coworkers (36) found high concentrations of N-acetylaspartate and choline in the prefrontal cortex of individuals with autism. Whereas N-acetylaspartate is associated with neuronal integrity, choline is found in large concentrations in myelin sheaths. Accordingly, these differences may represent alterations in axons that derail the efficient transfer of information from the prefrontal cortex to other brain regions. Future studies using diffusion tensor imaging are needed to confirm the presence of alterations in the axonal connections of individuals with autism; potential findings from such studies coupled with the present findings of structural abnormalities in white matter would support the hypothesis of impaired connectivity in autism. The finding that co-twins did not differ from each other or from comparison subjects in gray or white cerebellar volume was unexpected, although not surprising in light of the long-standing controversy over the extent to which the cerebellum is affected in autism. Differences between individuals with autism and comparison subjects in the volumes of cerebellar hemispheres and in the area of the cerebellar vermis have not been found consistently. Although several studies have shown that either the tissue (16, 31) or the white matter (18) of the cerebellar hemispheres is higher in individuals with autism, the differences are not consistently disproportionate to differences in whole brain volume. 
The current findings (including the results from comparisons of concordant and discordant twin pairs, discussed in the following) suggest that the precise nature of abnormalities of the cerebellum in autism is still not well understood. Our hypothesis that clinically concordant twin pairs would exhibit a higher degree of neuroanatomic concordance than clinically discordant twin pairs was supported to a large extent. Whereas cerebral gray and white volumes were highly correlated for both clinically concordant and discordant pairs, cerebellar and ventricular volumes were more highly correlated for clinically concordant pairs than for clinically discordant pairs. This finding of neuroanatomic discordance in monozygotic twin pairs would not have been predicted from studies of unaffected, typically developing monozygotic twin pairs, for whom both cerebral and cerebellar volumes have been shown to be highly correlated (37–39). Previous studies of unaffected monozygotic twin pairs have led to the conclusion that key aspects of both cerebral and cerebellar development are under tight genetic control in typical subject groups. As we will discuss, however, our findings suggest that for individuals with autism, the cerebellum may not be under as high a degree of genetic control. Moreover, the within-twin-pair differences in cerebellar white and gray volumes (but not cerebral volumes) were associated with within-twin-pair differences in Autism Diagnostic Observational Scale scores, suggesting that cerebellar (as opposed to cerebral) anatomy may serve as an important mediating feature between genetic liability and the expression of behavioral features of autism. We do not know the etiology or pathology of these phenotypic differences in neuroanatomy and associated behavior between individuals whose genetic makeup is identical. One explanation is that within-pair differences in the expression of susceptibility genes could lead to differences in the regulation of brain development. The overexpression of neurotrophins, which regulate axonal and dendritic development as well as synaptogenesis (40), has been found in the blood of infants who were later diagnosed with autism spectrum disorder (41). Accordingly, development of cerebellar tissue could be compromised in co-twins who overexpress neurotrophins, undermining the development and efficiency of the neuronal connectivity necessary for adequate language and social development. In support of this notion is the finding that brain-derived neurotrophic factor, one of the neurotrophins found at high levels in children who later were diagnosed as autistic, is a growth factor for neuronal populations such as cerebellar-based Purkinje cells (41). (However, levels of brain-derived neurotrophic factor were also high in children later diagnosed with mental retardation without autism, so the association may not be specific.) As suggested earlier, an alternative explanation is that in order for severe autism to develop in one twin, genetic susceptibility to the disorder must be accompanied by deleterious environmental events that do not necessarily affect the child’s co-twin (8). These environmental events could include prenatal or perinatal factors. Previous studies have been inconsistent in their support of an association between adverse prenatal and perinatal events and autistic spectrum disorders (42–45), although a more recent study supports such an association (46). 
Although this association is not robust, or specific to severe autism, these environmental factors could include nutritional variation due to malposition of one co-twin (47), prenatal trauma in the form of reduced blood flow or oxygen, or twin-to-twin transfusion resulting in significantly lower birth weight in the more severely affected co-twin. These conditions could affect the extent to which certain brain regions develop concordantly in co-twins (48). Postnatal factors could include delivery complications or respiratory distress affecting one co-twin to a greater extent than the other. Although the co-twins in our study did not appear to differ in the incidence of these events, our data were based on retrospective parental responses to a general checklist of perinatal and postnatal problems as well as medical interventions required. These responses were most likely not as reliable as review of hospital birth records and pose a limitation to this study. The finding that cerebellar volumes diverge in discordant twins and are associated with the severity of the behavioral phenotype for autism suggests that variability in cerebellar morphometry may be mediated by nongenetic risk factors. Although this interpretation would not be consistent with findings from previously cited studies of the cerebellum (37, 39), which has been found to be under significant genetic control in normal subjects, it is possible that the genes that underlie the phenotype for autism affect the cerebellum in a way that increases its susceptibility to environmental risk factors. Moreover, insofar as cerebellar development is experience dependent (49, 50), the less affected (broad-phenotype) co-twin’s capacity to adapt to environmental stimuli may result in further "experience-guided" brain development (51), including greater cerebellar-cortical connections. Accordingly, preservation of cerebellar integrity in one co-twin may result in a cascade of events that permits the development of a relatively mildly affected neuroanatomic and behavioral phenotype in that co-twin. Neuroanatomic differences between concordant and discordant twin pairs can be elucidated further with studies that use larger groups of not only monozygotic but also dizygotic twin pairs. Such studies, which should also include unaffected twin pairs, would help identify the relative contributions of genetic liability and environmental risk factors for autism. In addition, environmental risk factors, including potential prenatal, perinatal, and postnatal disruptions in development, should be integrated systematically into such studies. Moreover, the inclusion of comparison subjects with IQs in the average range would reduce the potential contribution of IQ to findings of anatomic differences. Continuing to clarify and differentiate the neuroanatomic pathways that subserve the entire spectrum of autistic disorders ultimately will permit specific and appropriately timed clinical interventions to children who display the full range of phenotypic expression of this disorder. Received Feb. 21, 2003; revision received July 15, 2003; accepted July 24, 2003. From the Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore; the Kennedy Krieger Institute, Baltimore; and the Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, Calif. Address reprint requests to Dr. 
Kates, Department of Psychiatry, Upstate Medical University, State University of New York, 750 East Adams St., Syracuse, NY 13210; [email protected] (e-mail). The authors thank Drs. Stewart Mostofsky, Andrew Zimmerman, Walter E. Kaufmann, and Martha B. Denckla for their contributions to the conceptualization and subject recruitment for this study, and Yung Liu, Suzanna Fryers, and Kristine Ducret for data processing. Supported by a grant to Dr. Kates from the National Alliance for Autism Research and the Autism Society of America and by grants to Dr. Reiss from NIMH (MH-50047 and MH-01142) and the National Institute of Child Health and Human Development (HD-31715).
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"No unsafe content identified\", \"score\": 0}\n```\n\n**Explanation:**\nThe provided text is a scientific article discussing autism and its genetic and neuroanatomical aspects. It does not contain any unsafe, inappropriate, or harmful content. The language is respectful and the themes are educational and scientific in nature, making it fully appropriate for public use, educational environments, and general audiences.", "{\"reason\": \"No unsafe content found\", \"score\": 0}\n```\n\nThe provided text is a detailed description of a study methodology and does not contain any unsafe, inappropriate, or harmful content. It is suitable for academic and educational environments.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe provided text is a detailed description of a scientific study involving MRI scans and twin pairs. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and receives a score of 0.", "{\"reason\": \"No unsafe content identified\", \"score\": 0}\n```\n\nThe provided text is a detailed scientific analysis of a study related to IQ testing and neuroanatomic differences in twin pairs. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for public use, educational environments, and general audiences.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is a scientific research summary discussing brain volumes and their association with autism and related phenotypes. It contains no unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for public use, educational environments, and general audiences.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is a scientific study discussing the neuroanatomy of autism and does not contain any unsafe, inappropriate, or harmful content. It is fully appropriate for public use, educational environments, and general audiences.", "{\"reason\": \"No unsafe content found\", \"score\": 0}\n```\n\nThe text provided is a scientific research summary discussing the findings of a study on autism and cerebellar volumes in twins. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for public use, educational environments, and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1296-0", "llm_safety_score": 0, "reason": "No unsafe content identified" }, { "custom_id": "1296-1", "llm_safety_score": 0, "reason": "No unsafe content found" }, { "custom_id": "1296-2", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1296-3", "llm_safety_score": 0, "reason": "No unsafe content identified" }, { "custom_id": "1296-4", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1296-5", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1296-6", "llm_safety_score": 0, "reason": "No unsafe content found" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://ajp.psychiatryonline.org/article.aspx?articleid=176689", "date": "2014-07-31T10:54:29", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272940.33/warc/CC-MAIN-20140728011752-00398-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9504558444, "token_count": 6549, "score": 3.0625, "int_score": 3 }
<urn:uuid:b9095025-0015-45a3-892c-59179fd7cb07>
Capuchins in Tamilnadu

1. The first Catholic missionaries in Tamilnadu

As early as 1291 and 1292 John of Monte Corvino came to Mylapore in Chennai, stayed there for one year and baptized about 100 persons. He belonged to the Franciscan Order, which was founded by St. Francis of Assisi in the year 1209. The Franciscans were the very first Catholic religious missionaries to India after individual missionaries like St. Thomas the Apostle, Pantaenus and Frumentius.

2. The Arrival of the Order of Friars Minor Capuchin (OFM Cap.) in Tamilnadu (Ordinis Fratrum Minorum Capuccinorum)

The Capuchins, a distinct branch of the Franciscans, came into existence in the Church in 1528, in the period of the Counter-Reformation, as a reform movement initiated by Matteo of Bascio. The Congregation de Propaganda Fide was established by Pope Gregory XV in 1622 with the help of the Capuchins; the Capuchin preacher Jerome of Narni played an important role in it, and a Capuchin cardinal was appointed as its first prefect. In 1633 Pope Urban VIII invited missionaries for the proclamation of the Gospel to non-Christians, and the Capuchins cooperated closely with this office.

3. Cultural background at the arrival of the Capuchins in India (1632)

The Vijayanagar dynasty, ruling through its Nayaks, held the present territory of Tamilnadu. The great Tamil spiritual figures Thaymanavar, Kumarakurubar and Sivapragasiyar lived in this period. Through the arrival of St. Thomas the Apostle, Vasco da Gama and other Portuguese travellers, Christianity had been planted in Tamilnadu. St. Thomas Church in Mylapore was administered by Portuguese priests as early as 1538, and the Luz Church was built by the Portuguese Franciscans. In 1635 there were 120 Portuguese families and 200 Indian Christian families in Madras, and outside Madras about 6,000 Indian Christians lived in Tamilnadu. All these people were under the Padroado (Portuguese patronage).

In 1632 six Capuchins came to India to continue the work begun earlier by the Franciscans. They arrived at Pondicherry as chaplains of the French trading company; when the French company closed its branch in Pondicherry the chaplains too had to leave India in 1634, and they were recalled later in 1671. The early Capuchins spent their time evangelizing the local people by way of their life and ministry. Evangelization meant for them bringing people to the Catholic faith and establishing churches; in the course of time the establishment of the Order was given more importance. The Congregation for the Propagation of the Faith, started in 1622, sent the Capuchins far and near: Capuchins were sent to Syria in 1626, and in 1632 six Capuchins landed in Pondicherry. The Capuchins settled in Madras in 1642. The novitiate was opened on Sunday, 26 February 1922, in Sardana, and the Indian commissariat was established in 1930. In the beginning the friars were chaplains to the French people. The first friary to be established in Tamilnadu was Amalashram in Srirangam, Trichy, in 1943. On 14 April 1963 the Capuchin Order in India took the shape of the autonomous Indian Capuchin Province of the Immaculate Heart of Mary. Till 1972 the Order in India had its single headquarters at Shanti Ashram, Coimbatore. It was then divided into four jurisdictions and has since grown to fifteen jurisdictions, which together form the Conference of Capuchin Major Superiors of India (CCMSI). The Tamilnadu region became a separate province on 17 May 1972 under the patronage of Amala Annai (the Immaculate Conception); at that time it had fifty friars in seven houses.
4. Madras mission (1642-1845)

The Madras mission was founded by Fr. Ephrem in 1642. The Capuchins remained in Madras until it was made a vicariate apostolic and the mission was handed over to the Irish Augustinians in 1845. St. Andrew's Church and the Luz Church are evidence of the Capuchin work in Madras. Pope Urban VIII appointed Fr. Ephrem as the first vicar apostolic in 1642; he was assisted by Fr. Zeno, and Chennai was made an apostolic prefecture. The Capuchin mission came to an end on 19 October 1845 with the death of Fr. Michael Valanz. Thus the mission lasted for 213 years (1632-1845). The Capuchins who served as vicars apostolic were as follows:
- Fr. Ephraim 1642-1695
- Fr. Michael Angel Borge 1695-1708
- Fr. Reno 1708-1713
- Fr. Thomas Poitier 1713-1742
- Fr. Savarini Savai 1742-1768
- Fr. Victor Nairoth 1768-1785
- Fr. Berdinand Alois 1785-1796
- Fr. Benjamin 1796-1796
- Fr. Benedict 1797-1807
- Fr. John Baptist 1807-1819
- Fr. Fidelis 1819-1824
- Fr. John Baptist 1825-1834

5. Pondicherry mission (1632-1634 & 1674-1828)

The Capuchins first came to India at Pondicherry on 8 January 1632; the mission was under the Aleppo monastery in Syria. According to Nobert par le tec, the Capuchins landed on 26 June 1642 with a letter from King Louis XIII of France. A government document dated 27 August 1664 says that the French traders wanted to spread the Catholic faith even though they had come for business. From 1632 to 1688 only Capuchins were serving in the mission area of Pondicherry, so the Capuchins are the pioneers of Christianity in Pondicherry. They went back in 1634, returned once again in 1674, and ministered there up to 1828. They built many churches, established mission stations and brought people to the Catholic faith. The first church in Pondicherry was also built by the Capuchins; it was dedicated to St. Peter (or Lazar). Alamparai, Karaiyampathu, Marakanam, Alampakam, Dadalur, Manjakumpam, Kdapakam and Kurusukupam are some of the mission stations of the Capuchins. The Church of Mary of the Angels, commonly known as the Caps church, was built by the Capuchins in 1709. They also worked among the tribes in the Chithur hill stations. The Capuchins and the Jesuits had a good relationship: St. John de Britto was invited by the Capuchins, stayed in Pondicherry and also did pastoral ministry there. Fr. Spiritus (+1738) was inspired by the inculturated Christian liturgy of Fr. Robert de Nobili and paved the way for a Tamil liturgy. It is believed that the Capuchins first landed in Kurusukupam; the church there was built later, in 1848, and it is here that the Franciscan Sisters of Bon Secours were founded. In Pondicherry the Capuchins were involved in educational ministry as well. To help this noble cause the Franciscan Ursuline sisters were invited on 8 September 1738, and in 1827 the Sisters of St. Joseph of Cluny were invited through the government to help in the field of education. The vicars apostolic of the Pondicherry mission were:
- Fr. Sebastin Never 1776-1780
- Fr. Donatus Aurelius 1780-1786
- Fr. Hillarius 1786-1788
- Fr. Dmasas 1788-1792
- Fr. Benedictus 1792-

6. The Capuchins as pioneers of the mission in Madras and Pondicherry

The Church of Mary of the Angels was built by the Capuchins, as was St. Andrew's Church in Royapuram. The first church in Madras was built by the Capuchin Fr. Ephrem at Fort St. George. The first English school in India was also established by the Capuchin Fr. Ephrem, in 1642; it functioned in St. Andrew's Church inside Fort St. George. He wanted to help the children of English businessmen who did not have time to see to their children's education.
In the course of time the children of Portuguese and Irish businessmen also joined them, and in time the school was financed by the Madras business people's association. After the death of Fr. Ephrem the school continued in the same place for some years and was then shifted to the Church of St. Mary at Parry's Corner, Madras. The Capuchins were involved in bringing people to the Catholic faith, worked for Christian unity, and had good relationships with people of other religions such as Islam and Hinduism. Fr. Ephrem (+1695), the pioneer of the Madras mission, was a friend of the Golkonda king Abdulla Khuta Khan. He was a gifted linguist and mathematician. He was welcomed by the British township of Fort St. George; the governor of the fort received him and, with permission from Rome, he consented to live in Madras. He confirmed about three thousand Catholics in a single service.

7. New beginning of the Capuchins in Tamilnadu

Amalashram was founded in 1943. Between 1834 and 1943, a span of 109 years, the work of the Capuchins was not much thought of. But it is because of the prayers and blessings of the early friars that the Capuchin mission in Tamilnadu has been blessed to the extent of having a General Definitor in Rome in the person of Fr. John Antony OFM Cap. from 2006 onwards, a general spiritual assistant of the Secular Franciscan Order in the person of Fr. Irudayasamy OFM Cap. from 2003 to 2009, and Fr. Nityasahayam OFM Cap. as secretary of the justice, peace and ecology commission of the FABC (Federation of Asian Bishops' Conferences).

8. The Animators of the Tamilnadu Province

Fr. Jonathan (1975-1978) was the first provincial minister; he was re-elected in March 1975 for another three years. He was followed by Fr. Mathias (1978-1984), who was provincial for two terms, and then Fr. Aruldass (1984-1990) took up the animation for another two terms. Then Fr. Jesudass (1990-1993) was elected provincial, followed by Fr. Percy Joseph Raj (1993-1996), Fr. John Antony, who became provincial in 1996, and Fr. Divakar (2002-2005); presently the province is led by Fr. Alphonse Charles (2006-), the eighth provincial of the Tamilnadu province. Some of them were elected for three terms, some for two and some for one.

9. Present Ministries

One of the unique stamps of the Capuchins is psycho-socio-spiritual animation at the animation centre and at the Anugraha Arts and Science College in Nochiodaipatty near Dindugul. Other ministries include the spiritual animation of the Franciscan Sisters of St. Thomas (since 1995), the spreading of popular devotions, popular missions, preaching on important occasions, novena preaching, conducting retreats and recollections, education (4 institutions), social work, animation centres (5), parish ministries (14 parishes), and the publication of magazines and books. The Voice of Assisi was published from Amalashram from 1945 to 1953, and the Assisi printing press functioned at Amalashram, Trichy. The Capuchins also carry out many social works in Tamilnadu: there are six children's homes in places like Trichy, Dindugul, Sivagangai, the Nilgiris and Arcot, caring for five hundred children. The Capuchins also work in collaboration with RTU (Reaching the Unreached) in the district of Theni, where about a thousand children of poor families are given a new life.

10. Seminaries for Formation

There are seven stages of formation to become a full-fledged Capuchin priest; all these stages together require almost 11 years of training after higher secondary school.
The formation centres for six of these stages are listed below; the remaining stage, the regency, is spent in any one of the friaries of the Capuchins.
- Aspirancy: Paduvai Ashram, Tuticorin (one year)
- Postulancy: Shanti Ashram, Coimbatore (one year)
- Novitiate: Assisi Ashram, Nagercoil (one year)
- Post-novitiate: Anbagam, Okkur, Sivagangai (one year, now optional)
- Philosophate: The Friary, Kotagiri, Nilgiris (3 years)
- Theologate: Francesco, Samayapuram, Trichy (4 years)

On 6 October 1993 the Capuchin Theological College was affiliated with St. Peter's Institute of Theology, Bangalore. At this institute not only Capuchins but also students belonging to the Carmelites, the Heralds of Good News, the Rosarians, the Order of the Mother of God, and the Diocese of Chengelpattu are educated. So far more than one hundred and fifty priests have been trained here for ordination, and they are working not only in Tamilnadu but also in different states of India and in many countries on other continents.

11. Foreign mission

The Tamil Capuchins are also serving in other countries such as Zimbabwe, Burkina Faso, France, Italy, Canada, the Gulf countries and the USA. In Malaysia the Tamil Capuchins were invited for preaching around the 1960s; in the course of time a Capuchin friary was founded, local youth joined the Capuchin seminary, and the house was handed over to the Philippine province. On 26 June 1989 the Tamil Capuchins were asked to take up mission work in Zimbabwe, on the African continent, and five friars were sent. Friars are now working in places like Matabisa, Brunabeg and Bulawayo; some young men there have joined, and within a few years it will become a self-sufficient jurisdiction with the help of the local Capuchins. In Burkina Faso the Capuchin mission is flourishing; local students are now being recruited, and in the course of a few years it too will become self-sufficient.

12. Capuchins in Tamilnadu Today

Capuchin friaries in Tamilnadu, in chronological order:
- Amalashram, Trichy (1943)
- The Friary, Kotagiri (1949)
- Shanthi Ashram, Coimbatore (1959)
- Nirmala Ashram, Thirumangalam, Madurai (1970)
- Paduvai Ashram, Tuticorin (1971)
- Mathias Friary, Ashoknagar, Chennai (1971)
- Assisi Ashram, Nagarcoil (1979)
- Gnanalaya, Contonment, Trichy (1983)
- Arulagam, V.K.Puram, Palayamkottai (1987)
- Anbagam, Okkur, Sivagangai (1992)
- Arunodaya, Vanampatty, Dindugal (1994)
- St.John's Friary, K.M.Kottai, Sivagangai (1996)
- Anugraha, Nochiodaipatty, Dindugal (1998)
- St.Antony's Church, Arcot, Vellore (1998)
- Vidivelli, Sundampaty, Dharmapuri (1998)
- Infant Jesus Church, Vallam, Chengalpet (2000)
- Jeevanathi, Sedamangalam, Salem (2001)
- Kaviyalaya, Okkur, Sivagangai (2001)
- CLARES, Karumandapam, Trichy (2003)
- Karanaikadal, Thenkuda, Sivagangai (2006)
- Sangamem, Perugamani, Trichy (2007)
- St.Francis Ashram, Thiyagathurugam, Pondichery (2008)
- Francesco, Samayapuram, Kumbakonam (2008)
- Reaching the Unreached RTU, Battlekundu, Madurai (2009)
- Franciscan Centre, Tolgate, Kumbakonam
- Anjugraman, Nagarcoil, Kottar

13. The First Tamil Capuchin Brother and Priest

Br. Cassian of Erudada, originally Arulapan, of the Kankirakodu parish in the Kottar diocese, became the first Tamil Capuchin friar in 1934; it thus took about three hundred years before local candidates were taken into the Capuchin Order (1632-1934). He died on 6 April 1966 and was buried in the Capuchin friary cemetery of Kollam. The first Tamil Capuchin priest was ordained in 1951.
He lives even today in the Assisi Ashram, Nagarcoil; his name is Fr. Eusebius OFM Cap.

14. Tamil Capuchins in other States

The Tamil Capuchin friars worked in Andhra for twelve years and then handed over the mission to the Capuchins of the St. Joseph Province of Kerala, who were already working in Andhra. Tamil friars also went to the Andaman Islands and worked there as missionaries for some time. Fr. Nithya Sahayam OFM Cap. worked in the peace, justice and ecology commission of the CBCI and is now working with the FABC.

15. The Present Tamilnadu Capuchins at the International Level

The province has 23 lay brothers, 103 priests and 64 temporarily professed members, altogether 190 Capuchins working in 24 friaries. When the friars working outside the province of Tamilnadu are taken into account, the number of Tamil Capuchin friars is 251, including 22 friars in the custody and 39 friars in different places. The Amala Annai province is presently the 7th largest province, with 251 Capuchin friars; the largest is the Province of Langobardica in Italy, with 351 friars. The Capuchin Order presently has 136 administrative jurisdictions in total, known as provinces (83), general vice provinces (9), vice provinces (17), custodies (17), provincial delegations (7) and houses of presence (23). The total number of Capuchin friars in the world is 10,412, and the Order is present in 106 countries.

The division and erection of the new province, 25 April 2012: after sufficient study, the plans have been made and finalized, and the present Amala Annai province will be divided into two on the coming 26th of April. The function will be held at Amalashram, Srirangam, Trichy. The province will be divided into a north and a south province. The south province will have its headquarters in the parish of Thirumangalam, under the patronage of Our Lady Queen of Peace, in the archdiocese of Madurai, and the north will continue to have its headquarters in Trichy, at Gnanalaya in the parish of the Melapudur Cathedral, under the patronage of Our Lady of the Immaculate Conception. The General Minister, Very Rev. Fr. Mauro Jöhri OFM Cap., will declare the new provinces erected in the presence of all the Capuchin provincials and the Capuchins of Tamilnadu. It will be a very important moment not only for all the Capuchins in Tamilnadu but also for all the Franciscans, the religious, and the 40 lakh Roman Catholics of Tamilnadu, as we are known to all of them through our effective pastoral, social and spiritual ministries, commanding respect and reverence from the people of other churches and religions; our founder, St. Francis of Assisi, is one of the most popular saints among all the men of God, known for interreligious dialogue and mission.

Thus the Capuchins in Tamilnadu, taking St. Francis of Assisi as their role model, try to spread the values of love, peace and harmony in different fields like education, mission, preaching, social work and parish ministry in and outside the state. Their dedication and selfless sacrifice help many people to know and experience the love of God and to praise him in return.

Fr. T. Antonysamy, OFM Cap., Francesco, Capuchin Theological College, Vengankudi North, Pallividai (P.O.), Samayapuram, Trichy – 621 112.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"historical and religious content\", \"score\": 0}\n```\n\nThe text provided is purely historical and religious in nature, detailing the arrival and activities of Catholic missionaries in Tamil Nadu. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences.", "{\"reason\": \"historical and religious content\", \"score\": 0}\n```\n\n**Explanation:**\nThe text provided is a detailed historical account of the Capuchin mission in Pondicherry and Madras. It contains no unsafe, inappropriate, or offensive content. The language is respectful and the themes are historical and religious in nature, making it fully appropriate for all audiences.", "{\"reason\": \"historical and religious content\", \"score\": 0}\n```\n\nThe text provided is a historical and religious narrative about the Capuchin mission in Tamil Nadu, India. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and receives a score of 0.", "{\"reason\": \"No unsafe content identified\", \"score\": 0}\n```\n\nThe provided text is a detailed historical account of the Capuchin friaries and missions in Tamil Nadu and other countries. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences.", "{\"reason\": \"No unsafe content found\", \"score\": 0}\n```" ], "llm_harm_scores_by_chunk": [ { "custom_id": "1297-0", "llm_safety_score": 0, "reason": "historical and religious content" }, { "custom_id": "1297-1", "llm_safety_score": 0, "reason": "historical and religious content" }, { "custom_id": "1297-2", "llm_safety_score": 0, "reason": "historical and religious content" }, { "custom_id": "1297-3", "llm_safety_score": 0, "reason": "No unsafe content identified" }, { "custom_id": "1297-4", "llm_safety_score": 0, "reason": "No unsafe content found" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://sribhagavathiedu.blogspot.in/2012/05/capuchins-in-tamilnadu.html", "date": "2014-07-31T09:24:42", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272940.33/warc/CC-MAIN-20140728011752-00398-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9536524415, "token_count": 4486, "score": 3.25, "int_score": 3 }
<urn:uuid:299eaa7d-20be-4615-8d66-bcb6dca198b7>
Access to high-speed Internet, also called broadband, helps individuals quickly find products, services, and resources that can assist you on a daily basis. However, some of Indiana's citizens do not have access to broadband. We, at the state, are working to improve this situation.

How can you help? We need your help to accurately identify where broadband coverage exists, and to determine its level of performance. In order to do this, we would like you to answer a few questions. We will test your connection speed while you provide answers. We will collect this data and provide it in map format to others to determine how and where to improve broadband coverage across the state.

The National Telecommunications and Information Administration defines broadband as the data transmission technology that provides two-way data transmission to and from the Internet with advertised speeds of at least 768 kilobits per second (kbps) downstream and at least 200 kbps upstream to end users, or providing sufficient capacity in a middle mile project to support the provision of broadband service to end users within the project area.

Broadband Availability Map

Note: By definition, the National Telecommunications and Information Administration considers a census block to have broadband service available if any part of it has service available.
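As a rough illustration of how the NTIA definition quoted above could be applied to a single speed-test result, the snippet below checks a measured connection against the 768 kbps downstream / 200 kbps upstream floor. The function name and the example speeds are ours, not part of the state's survey tool.

```python
def meets_ntia_broadband(download_kbps: float, upload_kbps: float) -> bool:
    """Return True if a measured connection meets the NTIA broadband floor."""
    return download_kbps >= 768 and upload_kbps >= 200

# Example: a 1.5 Mbps / 256 kbps DSL line qualifies; a 56k dial-up modem does not.
print(meets_ntia_broadband(1536, 256))   # True
print(meets_ntia_broadband(56, 33.6))    # False
```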
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is entirely informational and educational, discussing the importance of broadband access and how to contribute to improving it. There are no unsafe elements, inappropriate language, or harmful themes present." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1298-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.in.gov/gis/BroadbandQuestionnaire.htm", "date": "2014-07-31T10:10:21", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510272940.33/warc/CC-MAIN-20140728011752-00398-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9184251428, "token_count": 290, "score": 3.015625, "int_score": 3 }
<urn:uuid:7e658e84-7986-41a4-84a1-7e86ee2845e0>
This article explores how statistics can be interpreted in different ways to yield different conclusions. It describes a pair of class activities. In the first, the results are interpreted to "show" that taking a group rather than an individual perspective is ultimately beneficial to the individual. In the second, a variation is added "showing" that telling the truth is better than lying.

Grade Level: Middle School (6-8)
Curriculum Topic Benchmarks: M6.3.1
Subject Keywords: Probability, Mean, Average
Author(s): Glenn Simonelli
PUMAS ID: 12_04_02_1
Date Received: 2002-12-04
Date Revised: 2003-02-12
Date Accepted: 2003-02-16

As yet, no Activities/Lesson Plans have been accepted for this example.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"appropriate educational content\", \"score\": 0}\n```" ], "llm_harm_scores_by_chunk": [ { "custom_id": "1300-0", "llm_safety_score": 0, "reason": "appropriate educational content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://pumas.gsfc.nasa.gov/examples/index.php?id=85", "date": "2014-08-01T22:27:43", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510275393.46/warc/CC-MAIN-20140728011755-00446-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.8926154971, "token_count": 203, "score": 4.21875, "int_score": 4 }
<urn:uuid:3b04e70a-4be4-4cd8-8185-50c77e5e6d28>
Exelon Generation owns and operates a number of intermediate and peaking fossil power plants in several states, including Pennsylvania, Maryland and Illinois. These units are used as needed for peak demand periods and load balancing. Originally built in the 1950s to produce steam to run the steel mills, Fairless Hills now generates electricity. The power plant uses landfill gas from Waste Management's Tullytown and Grows landfills as its primary fuel source. Since Exelon purchased the plant in 1997, improvements have been made to expand the plant and increase its reliability. Fairless Hills is now the second largest landfill gas power plant in the United States. In 1997, Fairless Hills received the Pennsylvania Governor's Award for Environmental Excellence.

All Exelon facilities employ sophisticated emergency response plans to protect public health and safety. These plans are reviewed and approved by state governments and regulatory agencies. The emergency planning includes coordination with local counties, municipalities and school districts.

Supporting the Local Economy

Exelon Power's plants pay local real estate taxes every year to support county and township government, area schools, libraries, park districts and other taxing bodies where our plants are located. Fairless Hills aims to be a good neighbor and is very active in the local community.
- Charitable Giving & Volunteering. The power plant donates to a variety of charitable activities including the local United Way chapters, the Pennsbury Partners Program, and the Pennsbury School District.
- Education. As part of Exelon 2020, the Fairless Hills Renewable Education Center, which opened in October 2008, offers regular tours of the facility and information on renewable energy and other Exelon renewable energy efforts. The Education Center features a classroom with six interactive exhibits. Since its opening, the Center has hosted more than 500 students from Bucks County schools.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe provided text is entirely factual and informative, discussing the operations and community involvement of a power plant. There are no unsafe, inappropriate, or harmful elements present." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1301-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.exeloncorp.com/PowerPlants/fairlesshills/Pages/profile.aspx", "date": "2014-08-01T22:32:56", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510275393.46/warc/CC-MAIN-20140728011755-00446-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9401014447, "token_count": 379, "score": 2.625, "int_score": 3 }
<urn:uuid:27d9134a-94f0-4b5c-a551-7cbad4c9b36a>
Metal electrodes are increasingly being used in brain implants that help treat depression and the tremors of Parkinson’s disease, and in ever more sophisticated prosthetic devices. In spite of these successes, conventional metal electrodes have major limitations: performance deteriorates over time, and it’s difficult to design electrodes that are efficient at both sending and receiving electrical signals. Now researchers at the University of Texas are developing electrodes that are more efficient at both sending and receiving electrical stimuli. These electrodes, which are coated with carbon nanotubes, could lead to neural implants that monitor how they affect the neurons that they stimulate, conserving battery life and reducing side effects. Researchers led by Edward Keefer at the University of Texas Southwestern Medical Center developed a simple method for coating electrodes with carbon nanotubes. The coated electrodes were better at recording neural activity than were bare electrodes when implanted in mice and in a monkey. Importantly, the coated electrodes provided less-noisy recordings than bare ones did. They also required less power to operate. And the nanotubes enhanced the electrodes’ ability to both record and stimulate neural activity more than any other coating previously reported. Today’s neural prosthetics are good at sending electrical signals but not at receiving them, says Ravi Bellamkonda, director of the Neurological Biomaterials and Therapeutics group at Georgia Tech. Thus, the batteries in deep-brain stimulators–implanted devices used to treat Parkinson’s–last only three years because the devices are constantly on. “You want to see if the neuron is quiet,” says Bellamkonda. A feedback-enabled device that powered off when not needed could potentially use the same battery for a few more years. The University of Texas researchers’ technique for modifying electrodes is simple. Electrodes are placed in a water-based solution of carbon nanotubes; when a small voltage is applied to sites on the electrodes, carbon nanotubes localize there and can be fixed. Joseph Pancrazio, a neuroscientist at the National Institute of Neurological Disorders and Stroke, says that Keefer’s electrode modification “is something that can be done readily.” This means that other labs experimenting with neural prosthetics are likely to adopt the technique. By contrast, Pancrazio says, other methods for interfacing carbon nanotubes with neurons have required the use of special substrates and must be done at very high temperatures.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n*Explanation:*\nThe provided text discusses advancements in brain implant technology and neural prosthetics. It does not contain any unsafe, inappropriate, or harmful content. The language is respectful and the themes are related to scientific and medical research, which are appropriate for general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1302-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.technologyreview.com/news/410864/nanotubes-on-the-brain/", "date": "2014-08-01T22:55:29", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510275393.46/warc/CC-MAIN-20140728011755-00446-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9417088628, "token_count": 505, "score": 3.46875, "int_score": 3 }
<urn:uuid:bc30093e-32c0-4268-bb0f-ebfbae52ea37>
Le Thoronet Abbey, one of the wonders of twelfth-century Cistercian architecture and still revered by architects today, nestles in a wooded valley in Provence, South of France. This book is a pictorial appreciation of the abbey, photographed by Lucien Herve in the mid-1950s and introduced by Le Corbusier. 'The pictures in this book are witnesses to the truth,' says Le Corbusier of Herve's photographs of the Romanesque abbey. Herve's exquisite study presents the building throughout the course of a day, depicting the changing play of light and shadow on its stone vaulted exterior and interior. Highly textured and almost abstract in quality, his photographs reveal how the abbey is defined as much by light as by the conventions of Romanesque architecture, to communicate the intense spirituality of the Cistercian monastic order. Arranged according to the canonical hours of prayer, Herve's photographs are complemented by quotations from the psalms and the saints. An essay by Father Samuel of the Abbey of Notre-Dame de Sept-Fons provides a profound insight into the Cistercian monastic order, while the renowned state-of-the-art architect John Pawson contributes a personal appreciation of this fine example of Cistercian architecture.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"No unsafe content found\", \"score\": 0}\n```\n\nThe text is entirely focused on describing the architectural and historical significance of Le Thoronet Abbey. It contains no inappropriate language, adult themes, illegal activities, or any other unsafe elements." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1303-0", "llm_safety_score": 0, "reason": "No unsafe content found" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.campusbooks.com/books/professional-technical/architecture/international/9780714840031_Lucien-Herve-Lucien-Herve_Architecture-of-Truth.html", "date": "2014-07-12T21:20:46", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776435439.76/warc/CC-MAIN-20140707234035-00024-ip-10-180-212-248.ec2.internal.warc.gz", "language": "en", "language_score": 0.9288873672, "token_count": 273, "score": 2.515625, "int_score": 3 }
<urn:uuid:b91206d1-96ab-4abc-a185-5716d60bbeaf>
Artist's impression of ICESat in orbit

|Mission type||Remote sensing|
|Mission duration||7 years|
|Launch mass||970 kilograms (2,140 lb)|
|Start of mission|
|Launch date||13 January 2003, 00:45:00 UTC|
|Rocket||Delta II 7320-10 D294|
|Launch site||Vandenberg SLC-2W|
|End of mission|
|Decay date||30 August 2010, 09:00 UTC|
|Perigee||586 kilometers (364 mi)|
|Apogee||594 kilometers (369 mi)|

ICESat (Ice, Cloud, and land Elevation Satellite), part of NASA's Earth Observing System, was a satellite mission for measuring ice sheet mass balance, cloud and aerosol heights, as well as land topography and vegetation characteristics. ICESat was launched on 13 January 2003 on a Delta II rocket from Vandenberg Air Force Base in California into a near-circular, near-polar orbit with an altitude of approximately 600 km. It operated for seven years before being retired in February 2010, after its scientific payload shut down and scientists were unable to restart it. The ICESat mission was designed to provide elevation data needed to determine ice sheet mass balance as well as cloud property information, especially for stratospheric clouds common over polar areas. It provided topography and vegetation data around the globe, in addition to the polar-specific coverage over the Greenland and Antarctic ice sheets. The satellite also proved useful in assessing important forest characteristics, including tree density.

The sole instrument on ICESat was the Geoscience Laser Altimeter System (GLAS), a space-based LIDAR. GLAS combined a precision surface LIDAR with a sensitive dual-wavelength cloud and aerosol LIDAR. The GLAS lasers emitted infrared and visible laser pulses at 1064 and 532 nm wavelengths. As ICESat orbited, GLAS produced a series of approximately 70 m diameter laser spots separated by nearly 170 m along the spacecraft's ground track. During the commissioning phase of the mission, ICESat was placed into an orbit that allowed the ground track to repeat every 8 days. During August and September 2004, the satellite was maneuvered into a 91-day repeating ground track for the main portion of the mission.

ICESat was designed to operate for three to five years. Testing indicated that each GLAS laser should last for two years, requiring GLAS to carry three lasers in order to fulfil the nominal mission length. During initial on-orbit test operations, a pump diode module on the first GLAS laser failed prematurely on 29 March 2003. A subsequent investigation indicated that corrosive degradation of the pump diodes, due to an unexpected but known reaction between indium solder and gold bonding wires, had possibly reduced the reliability of the lasers. Consequently, the total operational life of the GLAS instrument was expected to be less than a year. After two months of full operation in the fall of 2003, the operational plan for GLAS was changed, and it was operated for one-month periods out of every three to six months in order to extend the time series of measurements, particularly for the ice sheets. The last laser failed on 11 October 2009, and following attempts to restart it, the satellite was retired in February 2010. Between 23 June and 14 July 2010, the spacecraft was manoeuvred into a lower orbit in order to speed up orbital decay. On 14 August 2010 it was decommissioned, and at 09:00 UTC on 30 August 2010 it reentered the atmosphere.
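The quoted spot spacing can be sanity-checked from the orbit altitude alone. The sketch below is a back-of-the-envelope calculation, not something taken from the article: it assumes a circular orbit, a spherical Earth, and a GLAS pulse repetition rate of 40 Hz, a figure not given in the text above and included here only as an illustrative assumption.

```python
# Rough consistency check of the ~170 m along-track spot spacing quoted above,
# starting from the ~600 km orbit altitude. The 40 Hz pulse rate is an assumed
# value, and a spherical Earth with a circular orbit is assumed throughout.
import math

MU = 398_600.4418        # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0        # mean Earth radius, km
altitude = 600.0         # km, from the text
pulse_rate = 40.0        # Hz (assumed GLAS laser repetition rate)

r = R_EARTH + altitude
v_orbit = math.sqrt(MU / r)               # orbital speed, km/s
v_ground = v_orbit * R_EARTH / r          # ground-track speed, km/s
spacing_m = v_ground * 1000 / pulse_rate  # metres between successive laser spots

print(f"{spacing_m:.0f} m between spots")
```

With those assumptions the ground-track speed comes out near 6.9 km/s, giving roughly 173 m between shots, consistent with the "nearly 170 m" quoted above.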
NASA is in the process of designing a follow-on mission, ICESat-2, to continue studying polar ice changes, and biomass and carbon in vegetation. The new satellite is tentatively planned for launch in July 2016 on a Delta 2 rocket. For the period of time in between the two satellites NASA's Operation IceBridge is using a DC-8 aircraft to measure ice thickness and collect other data. - Krebs, Gunter. "ICESAT (EOS-LAM)". Gunter's Space Page. Retrieved 31 August 2010. - McDowell, Jonathan. "Launch Log". Jonathan's Space Report. Retrieved 31 August 2010. - Clark, Stephen (2010-02-25). "ICESat mission complete after seven years in orbit". Spaceflight Now. Retrieved 2010-02-25. - BBC News. "Space laser spies for woodpeckers." December 17, 2010. http://www.bbc.co.uk/news/science-environment-11867165 - "Laser Diode Pump Assembly". NASA. - Schutz, B. E.; Zwally, H. J.; Shuman, C. A.; Hancock, D.; DiMarzio, J. P. (2005). "Overview of the ICESat Mission" (PDF). Geophys. Res. Lett. 32: L21S01. Bibcode:2005GeoRL..3221S01S. doi:10.1029/2005GL024009. - "NASA's Successful Ice Cloud and Land Elevation Mission Comes to an End". NASA. 27 August 2010. Retrieved 31 August 2010. - Clark, Stephen (30 August 2010). "ICESat takes a plunge to conclude successful mission". Spaceflight Now. Retrieved 31 August 2010. - "ICESat-2". NASA Goddard Space Flight Center. 26 October 2011. Retrieved 2011-11-05. - "Operation ICE Bridge". NASA Earth Science Project Office. Retrieved 2009-11-13. - ICESat (Goddard Space Flight Center) - ICESat (Center for Space Research, University of Texas) - The Ice, Cloud and land Elevation Satellite (ICESat): Summary Mission Timeline and Performance Relative to Pre-Launch Mission Success Criteria Goddard Space Flight Center
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"educational and historical content\", \"score\": 0}\n```\n\nThe text sample provided is entirely educational and historical in nature, detailing the mission of ICESat, its launch, orbit, and operational details. There is no unsafe content, inappropriate language, or themes that would warrant a score higher than 0.", "{\"reason\": \"No unsafe content\", \"score\": 0}\n```\n\nThe provided text is a collection of references and citations related to scientific and space exploration missions. It does not contain any unsafe, inappropriate, or harmful content." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1304-0", "llm_safety_score": 0, "reason": "educational and historical content" }, { "custom_id": "1304-1", "llm_safety_score": 0, "reason": "No unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://en.wikipedia.org/wiki/ICESat", "date": "2014-07-23T04:27:48", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997873839.53/warc/CC-MAIN-20140722025753-00192-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.8893857002, "token_count": 1234, "score": 3.578125, "int_score": 4 }
<urn:uuid:36bf4f9b-cf69-40a5-879a-265f63bb85f8>
United Nations/Germany Expert Meeting on Space-based Information for Flood and Drought Risk Reduction

This year's United Nations/Germany Expert Meeting on the Use of Space-based Information for Flood and Drought Risk Reduction will focus on the use of space technologies to improve disaster risk reduction. Floods and droughts will serve as examples of hazard types that have recently affected countries around the globe. Recent examples are the UK floods in February 2014, the floods in Central Europe in June 2013, the floods following super typhoon Haiyan in the Philippines in November 2013, the droughts in Eastern Africa in 2011 and the droughts in Bolivia in 2013. In its current Situation and Outlook issued in January 2014, WMO foresees an enhanced possibility of the development of a weak El Niño around the middle of 2014. Its effects on floods and droughts need to be taken into account in national plans targeting disaster risk reduction. A recent article in Nature stated that El Niño events may start occurring once every decade, doubling their frequency.

Effective disaster risk management helps prevent natural hazards like floods and droughts from turning into disasters. Space technologies, especially Earth observation and global navigation satellite systems, provide crucial information for hazard assessment, vulnerability analysis, and risk assessment. The potential contribution of space-based information to disaster risk management is not yet fully exploited: technical solutions are not tailored enough to the needs on the ground, and the information is rarely easily accessible for disaster managers. With this expert meeting, UN-SPIDER is aiming to fulfil one of its core functions: serving as a bridge between the different space and disaster risk management communities.

Objectives and Expected Outcomes

The main objective of the expert meeting is to promote the use of space-based applications to assess and to manage flood and drought risks. Expected outcomes include:
• Recommendations for the improvement of flood and drought risk management through the use of space-based information;
• Elements to contribute to the post-2015 framework for disaster risk reduction (HFA2) and to sustainable development;
• Elements to contribute to the upcoming World Conference on Disaster Risk Reduction to take place in Sendai, Japan, in March 2015.
These outcomes will feed into UN-SPIDER's outreach activities, capacity building, Knowledge Portal, and Technical Advisory Support.

Financial Support to Selected Participants

Taking into consideration the limited financial resources available for this expert meeting, a number of qualified applicants from developing countries who have expressed the need for financial support will be offered financial support to attend the expert meeting. This may include the provision of a round-trip air ticket between Bonn and the applicant's international airport of departure, or daily subsistence allowances to cover board and lodging for the duration of the expert meeting. Any changes made to the air tickets must be the responsibility of the participants. Due to this limited availability of financial resources, applicants and their nominating organizations are strongly encouraged to find additional sources of sponsorship to allow them to attend the expert meeting. Qualified participants whose nominating agency/organization agrees to fund round-trip travel and/or living expenses will be considered on a priority basis.
Point of Contact Antje Hecheltjen (Ms.), UN-SPIDER Office for Outer Space Affairs, UN Campus Bonn Tel: +49 (0) 228 815 0677, Fax: +49 (0)228 815 0699, E-mail: [email protected] |Stefan Voigt (DLR): Earth Observation contributions to flood and drought risk assessment||Download| |Ray Williamson (SWF): Bridging the gap: Satellite data, information, and Disaster Management||Download| |Luc St-Pierre and Juan Carlos Villagran: UN-SPIDER's efforts in flood and drought risk management||Download| |Session 1: Space technologies for disaster risk reduction - Regional examples| |AKM Monowar H. Akhand (Ministry of Public Administration Bangladesh): Preparedness strategies, local administration and the role of space-based data in reduction of flood risk in Bangladesh||Download| |Byron Anangwe (RCMRD, Kenya): Disaster Risk Reduction dynamics and Earth Observation - African scenarios||Download| |Session 2: Flood Risk Reduction| |Sebastian Pisi (DKKV, Germany): Assessment of coping capacities relating to the 2013 floods in Germany||Download| |André Twele and Franz Hummel (DLR, Germany): Combining automated satellite based flood mapping and exposure mapping for flood risk assessment and management||Download| |Wazir Khan (Pakistan): Application of Space Technology for Disaster Management in Pakistan||Download| |Session 3: Drought Risk Reduction| |Jeiran Amiraslani (ISA, Iran): Role of E-Learning in knowledge promotion and capacity building for monitoring and assessment of natural disasters - A case study for drought monitoring||Download| |Peter Hoefsloot (ARC, Netherlands): African Risk Capacity (ARC) & World Food Programme||Download| |John M. Ferrell (US SOUTHCOMM): United States Southern Command Technology Approaches for Enabling Multi-National Activities||Download| |Session 4: The Way Forward| |Luna Abu-Swaireh (UNISDR): Towards the post-2015 framework for disaster risk reduction||Download| |Bonn Expert Meeting 2014 - Programme of Activities||Download| |Bonn Expert Meeting 2014 - Information for Participants||Download| |Bonn Expert Meeting 2014 - Flyer||Download| |Bonn Expert Meeting 2014 - Information Note||Download| |Bonn Expert Meeting 2014 - Background Paper Discussion Groups||Download|
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe provided text is entirely focused on a scientific and educational meeting related to the use of space-based information for disaster risk reduction. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for public use, educational environments, and general audiences.", "{\"reason\": \"No unsafe content found\", \"score\": 0}\n```\n\nThe provided text sample is a list of contact information and session details for a meeting related to disaster risk reduction and space technologies. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for public use, educational environments, and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1305-0", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1305-1", "llm_safety_score": 0, "reason": "No unsafe content found" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://un-spider.org/BonnExpertMeeting2014", "date": "2014-07-23T03:50:03", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997873839.53/warc/CC-MAIN-20140722025753-00192-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.8642306924, "token_count": 1178, "score": 2.609375, "int_score": 3 }
<urn:uuid:190b4e4c-5056-4664-85d5-9222f385baec>
After reading Christopher Marlowe's Doctor Faustus, how might one explain the meaning of "sloth" and link it to the contents of the play?

During Christopher Marlowe's time and for centuries before then, "sloth" was considered one of the "seven deadly sins." In fact, Sloth appears as a character in Marlowe's play Doctor Faustus in precisely that capacity. Sloth was traditionally defined as apathy or laziness, especially in matters involving one's own soul and the souls of others.

Doctor Faustus is guilty of sloth, as even the opening scene of the play implies. In that scene, Faustus demonstrates again and again that he is incapable of the hard work and true commitment demanded by various callings in life. He rejects one possible career after another, partly because he cannot focus on any single one of them. Despite his great learning, he is in some ways very intellectually lazy. Faustus himself seems to recognize this problem; in his very first words of the play, he urges himself to "Settle thy studies, Faustus, and begin / To sound the depth of that thou wilt profess . . ." The irony of this statement, of course, is that for the next 64 lines he demonstrates anything but depth or a settled attitude toward his studies. Instead, he flits from one option to another because he is too mentally lazy to devote himself whole-heartedly to any single one of them.

Faustus's opening speech reveals his intellectual sloth in other ways, too. This is particularly evident when he quotes from the Bible (a book he should know very well since he has been studying it for years) and either deliberately or inadvertently omits crucial phrasing from the relevant passage. Thus, he proclaims that "Stipendium peccati mors est" and then immediately translates these Latin words into English: "The reward of sin is death." However, he neglects to cite the rest of the passage from Romans 6.23: "For the wages of sin is death; but the gift of God is eternal life through Jesus Christ our Lord." Perhaps Faustus, at this point in his life, does not want to remember the second half of the verse, since doing so would mean recalling his need to commit himself to Christ. Such commitment would involve dedication, self-denial, and the ability to resist temptation. Faustus is too lazy – too slothful – to undertake this kind of difficult task.

It isn't surprising, therefore, that Faustus reveals such a strong interest in magic. Magic, almost by definition, involves making difficult things seem easy. Magic, almost by definition, also involves an absence of hard work. By simply casting a spell or exercising some mysterious power, a magician achieves, in an instant, results that would otherwise involve laborious effort. Faustus's fascination with magic, then, is a logical reflection of his fundamentally slothful nature.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"No unsafe content identified\", \"score\": 0}\n```\n\n**Explanation:**\nThe provided text is an academic discussion about the concept of \"sloth\" in Christopher Marlowe's play \"Doctor Faustus.\" It does not contain any unsafe, inappropriate, or harmful content. The language is respectful, and the themes are educational and historical. Therefore, it is fully appropriate for public use, educational environments, and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1306-0", "llm_safety_score": 0, "reason": "No unsafe content identified" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.enotes.com/homework-help/please-explain-meaning-sloth-link-content-doctor-338828", "date": "2014-07-23T04:04:31", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997873839.53/warc/CC-MAIN-20140722025753-00192-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9679939747, "token_count": 659, "score": 3.015625, "int_score": 3 }
<urn:uuid:7fac6b3a-d480-4029-b5e9-874950f2e537>
Among all people age 18 or older, 73 percent have been married at least one time in their life.

Alcohol abuse, as described in the DSM-IV, is a psychiatric diagnosis describing the recurring use of alcoholic beverages despite its negative consequences. Alcohol abuse is sometimes referred to by the less specific term alcoholism. However, many definitions of alcoholism exist, and only some are compatible with alcohol abuse. There are two types of alcoholics: those who have antisocial and pleasure-seeking tendencies, and those who are anxiety-ridden: people who are able to go without drinking for long periods of time but are unable to control themselves once they start. Binge drinking is another form of alcohol abuse. According to research done through international surveys, the heaviest drinkers happen to be the United Kingdom's adolescent generation. When differentiating between alcohol abuse and alcoholism, one should remember that alcohol abuse applies when the abuser has recently faced serious consequences of their drinking, whereas an alcoholic has experienced withdrawal in the same time period.

Drinking culture refers to the customs and practices associated with the consumption of alcoholic beverages. Although alcoholic beverages and social attitudes toward drinking vary around the world, nearly every civilization has independently discovered the processes of brewing beer, fermenting wine, and distilling spirits. Alcohol and its effects have been present in societies throughout history. Drinking is documented in the Hebrew and Christian Bibles, in the Qur'an, in art history, in Greek literature as old as Homer, and in Confucius's Analects.

Binge drinking is more common in men than it is in women. Among students in the USA, approximately 50 percent of men and 39 percent of women binge drink. Racial differences exist in binge drinking, with Hispanics, followed by white people, having the highest levels. Individuals of African descent have a lower level of binge drinking, followed by those of Asian descent. In the case of Asians, their low level of binge drinking may be due to the presence of a variant of the aldehyde dehydrogenase gene (ALDH2, chromosome 12) in many (but by no means the vast majority) that results in poor metabolism of alcohol, which leads to severe adverse effects such as facial flushing.

1.6% of the U.S. population (2012). A Stateside Puerto Rican or Puerto Rican American (Spanish: Puertorriqueño estadounidense) is an American born in either Puerto Rico or the United States who is of full or partial Puerto Rican origin and has lived a significant part of their life in one of the states of the United States or the District of Columbia.

In journalism, a human interest story is a feature story that discusses a person or people in an emotional way. It presents people and their problems, concerns, or achievements in a way that brings about interest, sympathy or motivation in the reader or viewer. Human interest stories may be "the story behind the story" about an event, organization, or otherwise faceless historical happening, such as about the life of an individual soldier during wartime, an interview with a survivor of a natural disaster, a random act of kindness or a profile of someone known for a career achievement.

A social issue (also called a social problem or a social situation) is an issue that relates to society's perception of people's personal lives.
Different cultures have different perceptions and what may be "normal" behavior in one society may be a significant social issue in another society. Social issues are distinguished from economic issues. Some issues have both social and economic aspects, such as immigration. There are also issues that don't fall into either category, such as wars. Thomas Paine, in Rights of Man and Common Sense, addresses man's duty to "allow the same rights to others as we allow ourselves". The failure to do so causes the birth of a social issue.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"general information on alcohol abuse and social issues\", \"score\": 0}\n```\n\nThe text provided is a factual and educational discussion about alcohol abuse, social issues, and cultural differences. It does not contain any unsafe or inappropriate content, profanity, derogatory terms, or references to illegal activities. Therefore, it is fully appropriate for all audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1308-0", "llm_safety_score": 0, "reason": "general information on alcohol abuse and social issues" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://answerparty.com/question/answer/what-percentage-of-americans-get-married-at-least-once-in-their-life", "date": "2014-07-24T12:21:47", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888866.9/warc/CC-MAIN-20140722025808-00128-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9584780931, "token_count": 788, "score": 3.265625, "int_score": 3 }
<urn:uuid:ee12f741-2e74-4d63-8449-85b744bc58be>
Devotees throng around the chariots as they wait to pull them during the annual Hindu festival "Rath Yatra," or Chariot procession, at Puri in Odisha.

Ratha Yatra, the Festival of Chariots of Shri Jagannath, is celebrated every year at Puri, the temple town in Odisha, on the second (dwitiya) day of shukla pakshya (waxing cycle of the moon) of Ashadh Maas (3rd month in the lunar calendar). The festival is also known as Gundicha Jatra, Ghosa Jatra, Navadina Jatra, Dasavatara Jatra and by a variety of other names. The presiding deities of the Jagannath Temple, Puri's main temple (Lord Jagannath, Lord Balabhadra and Goddess Subhadra), together with the celestial wheel (Sudarshana Chakra), are taken out from the temple precincts in an elaborate ritual procession to their respective chariots. The huge, colourfully decorated chariots are drawn by a multitude of devotees on the bada danda, the grand avenue to the Gundicha Temple (Gundicha - King Indradyumna's Queen), two miles away to the north. On their way back from the Gundicha Temple, the three deities stop for a while near the Mausi Maa Temple (Aunt's abode) and have an offering of the Poda Pitha, a special type of pancake supposed to be the Lord's favourite. After a stay of seven days, the deities return to their abode.

Three richly decorated chariots, resembling temple structures, are pulled through the street of Puri called Badadanda. The three chariots of Balarama, Subhadra and Jagannatha are newly constructed every year with wood of specified trees like phassi, dhausa, etc. They are customarily brought from the ex-princely state of Dasapalla by a specialist team of carpenters who have hereditary rights and privileges for the same. The logs are traditionally set afloat as rafts in the river Mahanadi. These are collected near Puri and then transported by road. Lord Jagannatha's chariot is called Nandighosa. It is forty-five feet high and forty-five feet square at the wheel level. It has sixteen wheels, each of seven-foot diameter, and is decked with a cover made of red and yellow cloth.

The festival commemorates the annual journey of Lord Jagannath, Lord Balabhadra, and their sister Subhadra to their aunt's temple, the Gundicha Temple, which is situated at a distance of 2 km from their temple. This is the only day when devotees who are not allowed in the temple premises, such as non-Hindus and foreigners, can get a glimpse of the deities. During the festival, devotees from all over the world go to Puri with an earnest desire to help the priests pull the Lords' chariots with ropes. They consider this a pious deed and risk their lives in the huge crowd. The huge processions accompanying the chariots play devotional songs with drums, tambourines, trumpets etc. Children line the streets through which the chariots will pass and add to the mass chorus. The Rath carts themselves are approximately 45 feet (14 m) high and are pulled by the thousands of pilgrims who turn up for the event; the chariots are built anew each year only from a particular type of tree. Millions of devotees congregate at Puri for this annual event from all over the country and abroad. It is also broadcast live on many Indian and foreign television channels, as well as on many websites that telecast the Jagannath Rath Yatra live.

This footage is part of the professionally-shot broadcast stock footage archive of Wilderness Films India Ltd., the largest collection of HD imagery from South Asia.
The Wilderness Films India collection comprises 50,000+ hours of high quality broadcast imagery, mostly shot on HDCAM / SR 1080i High Definition, Alexa, SR, HDV and XDCAM. Write to us for licensing this footage on a broadcast format, for use in your production! We are happy to be commissioned to film for you or else provide you with broadcast crewing and production solutions across South Asia. We pride ourselves in bringing the best of India and South Asia to the world... Please subscribe to our channel wildfilmsindia on Youtube for a steady stream of videos from across India. Also, visit and enjoy your journey across India at www.clipahoy.com, India's first video-based social networking experience! Reach us at rupindang @ gmail . com and [email protected]
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text describes a traditional Hindu festival and does not contain any unsafe, inappropriate, or harmful content. It is fully appropriate for all audiences and does not include any language, themes, or references that would be considered unsafe.", "{\"reason\": \"No unsafe content identified\", \"score\": 0}\n```\n\nThe provided text sample does not contain any unsafe or inappropriate content. It appears to be metadata for a video or article, including contact information, duration, publication date, update date, and view count. There are no unsafe elements such as inappropriate language, harmful themes, or references to illegal activities." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1310-0", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1310-1", "llm_safety_score": 0, "reason": "No unsafe content identified" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://article.wn.com/view/2014/06/30/The_Lords_rath_rolls_out_in_Allahabad_Kashi/", "date": "2014-07-25T18:22:19", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894473.81/warc/CC-MAIN-20140722025814-00064-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9369057417, "token_count": 1034, "score": 2.765625, "int_score": 3 }
<urn:uuid:60093941-7fe3-4847-99fe-7c1ce226a0e7>
The Q-Cochran analysis of variance, based on the Q-Cochran test, is described by Cochran (1950). This test is an extended McNemar test for several dependent groups. It is used to verify hypotheses about symmetry between several measurements of the X feature. The analysed feature can take only 2 values; for the analysis, they are coded as 1 and 0. The test assumes measurement on a nominal scale (dichotomous variables, i.e. variables with two categories) and a dependent model.

H0: all the "incompatible" observed frequencies are equal,
H1: not all the "incompatible" observed frequencies are equal,
where the "incompatible" observed frequencies are the observed frequencies calculated when the value of the analysed feature differs across measurements.

POST-HOC tests (such as the Dunn test used below) are available for the Q-Cochran ANOVA.

We want to compare the difficulty of 3 test questions. To do this, we select a sample of 20 people from the analysed population. Every person from the sample answers the 3 test questions. Next, we check the correctness of the answers (an answer can be correct or wrong) and record the scores in a table (not reproduced here).

H0: the individual questions received the same number of correct answers in the analysed population,
H1: there are different numbers of correct and wrong answers for the individual test questions in the analysed population.

Comparing the p-value, p = 0.007699, with the significance level α = 0.05, we conclude that the individual test questions have different difficulty levels. We resume the analysis to perform a POST-HOC test and, in the test option window, select the Dunn POST-HOC test. The POST-HOC analysis indicates that there are differences between the 2nd and 1st questions and between the 2nd and 3rd questions. The difference arises because the second question is easier than the first and the third ones (the number of correct answers to the second question is higher).
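Since the worked example's data table did not survive extraction, a minimal sketch of how the Q statistic itself can be computed from a 0/1 table may be useful. It follows the standard formula Q = (k−1)[k·ΣC_j² − N²] / [k·N − ΣR_i²], where k is the number of dependent measurements, C_j are the column totals, R_i the row totals and N the grand total; under H0, Q is compared with a chi-square distribution with k−1 degrees of freedom. The function name, the made-up data and the use of NumPy/SciPy below are illustrative assumptions, not part of the PQStat software the example describes.

```python
# Minimal sketch of Cochran's Q test on a hypothetical 20 x 3 table of
# correct (1) / wrong (0) answers. The data are made up because the original
# table did not survive extraction, so the p-value will not match p = 0.007699.
import numpy as np
from scipy.stats import chi2

def cochrans_q(data):
    """data: subjects x treatments array of 0/1 values."""
    data = np.asarray(data)
    n, k = data.shape                 # n subjects, k dependent measurements
    col_totals = data.sum(axis=0)     # C_j: e.g. correct answers per question
    row_totals = data.sum(axis=1)     # R_i: e.g. correct answers per person
    grand_total = data.sum()          # N
    q = (k - 1) * (k * np.sum(col_totals**2) - grand_total**2) \
        / (k * grand_total - np.sum(row_totals**2))
    p_value = chi2.sf(q, df=k - 1)    # Q ~ chi-square with k-1 df under H0
    return q, p_value

# Hypothetical answers for 20 people on 3 questions (columns = questions).
rng = np.random.default_rng(0)
answers = rng.integers(0, 2, size=(20, 3))
q, p = cochrans_q(answers)
print(f"Q = {q:.3f}, p = {p:.4f}")
```

With the original 20 × 3 table of answers, the same computation should reproduce the quoted p = 0.007699.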
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe provided text is a detailed explanation of the Q-Cochran analysis of variance, which is a statistical method used for hypothesis testing. The content is entirely academic and does not contain any unsafe, inappropriate, or harmful elements. Therefore, it is fully appropriate for public use, educational environments, and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1311-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://pqstat.com/?mod_f=anova_q_cochrana", "date": "2014-07-25T18:14:23", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894473.81/warc/CC-MAIN-20140722025814-00064-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9193725586, "token_count": 424, "score": 3.546875, "int_score": 4 }
<urn:uuid:1cc5a900-3311-4873-95b6-0b722a4272de>
Definitions for orchidaceae This page provides all possible meanings and translations of the word orchidaceae Orchidaceae, family Orchidaceae, orchid family(noun) enormous cosmopolitan family of perennial terrestrial or epiphytic plants with fleshy tubers or rootstocks and unusual flowers The Orchidaceae are a diverse and widespread family of flowering plants with blooms that are often colorful and often fragrant, commonly known as the orchid family. Along with the Asteraceae, they are one of the two largest families of flowering plants, with between 21,950 and 26,049 currently accepted species, found in 880 genera. Selecting which of the two families is larger is still under debate, as concrete numbers on such enormous families are constantly in flux. Regardless, the number of orchid species equals more than twice the number of bird species, and about four times the number of mammal species. The family also encompasses about 6–11% of all seed plants. The largest genera are Bulbophyllum, Epidendrum, Dendrobium and Pleurothallis. The family also includes Vanilla, Orchis, and many commonly cultivated plants such as Phalaenopsis and Cattleya. Moreover, since the introduction of tropical species in the 19th century, horticulturists have produced more than 100,000 hybrids and cultivars. U.S. National Library of Medicine A plant family of the order Orchidales, subclass Liliidae, class Liliopsida (monocotyledons). All orchids have the same bilaterally symmetrical flower structure, with three sepals, but the flowers vary greatly in color and shape.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe provided text is a neutral and informative definition of the Orchidaceae family. It does not contain any unsafe, inappropriate, or harmful content. The text is suitable for educational and general audiences and uses respectful language." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1312-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.definitions.net/definition/orchidaceae", "date": "2014-07-25T18:42:36", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894473.81/warc/CC-MAIN-20140722025814-00064-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.9074087739, "token_count": 377, "score": 3.484375, "int_score": 3 }
<urn:uuid:c66bd4fa-ad1c-45d6-ab00-37286c9d305c>
SINCE the approximate size and reproductive rate of the moose herd, and rate of kill by the wolves are known, deductions can be made about the long-range effect of predation on moose numbers. Unrefined calculations suggest that annual production and loss in the moose herd are about equal. If the winter rate of kill for the large pack (one moose per 3 days) applies year around, this pack removes about 122 moose per year. The smaller packs (totaling five or six members) probably kill about a third as many, or 41, giving a total annual mortality of 163 moose. An estimated 564 are present in late May when calves are born. Since calves composed 25 percent of the summer observations, extrapolation suggests that at least 188 calves are produced, indicating that the population would remain stable or increase slightly. However, a precise evaluation obviously is not this simple. A more thorough appraisal of moose-population dynamics requires consideration of two key figures, annual calf production, and percentage of yearlings in the total population. As used here, the term "short-yearling" is a calf in its first winter or spring, and "long-yearling" is an animal 1 to 1-1/2 years old. Pregnancy rates are not known for the Isle Royale herd, so data from other studies must be used. In British Columbia, 75 percent of 80 adult uteri were pregnant, including some from before the end of the breeding season (Edwards and Ritcey, 1958). Pimlott (1959b) found that in Newfoundland 81 percent of 239 adults taken after November were gravid, and he believes this is less than the actual percentage. The assumed rate for Isle Royale moose is 80 percent, a conservative estimate, in view of the fact that heavily cropped populations usually are most productive. Calf-production figures are derived from the pregnancy and twinning rates, and number of adult cows present in calving season. The estimated size of the herd on March 1 is 600 (page 98), including 102 yearlings (17 percent), but by calving season it should decrease to about 564 because of continued predation. The known kill of the large pack is 19 adults and 17 calves in 110 days (based on data from three winters, table 11). If the small packs take a third as many moose, the kill for the entire population is 25 adults and 23 calves in 110 days. At this rate, the wolves remove 19 adults and 17 calves from March 1 to May 20, when calving season begins, so the herd then should contain about 479 adults and 85 yearlings. Half of the 479, or 239, would be adult cows, assuming an even sex ratio. If 80 percent (191) breed and bear an average of 1.19 calves each (see page 105), the calf crop is 227. Yearling-total population ratios for Isle Royale are given in table 13, and the average annual ratio for early March is believed to be about 17 percent (page 106), which agrees with statistics from other areas. Figures from de Vos (1956) indicate that long-yearlings composed 10 percent of the population minus calves in the Chapleau Crown Game Preserve, and 13 percent in the general Chapleau District of Ontario. The ratio calculated from Knowlton (1960) for the Gravelly Mountains of Montana was 25 percent, for short-yearlings. Pimlott (1959b) found that reported ratios varied from 9 percent to 23 percent. Such variations might result from differences in methods of obtaining the figures. Some are based on summer observations, others on hunter-kill data and still others on winter aerial observations. 
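Before the comparison with other study areas continues, it may help to see the herd bookkeeping above collected in one place. The short sketch below simply replays the report's arithmetic; the choice of Python is arbitrary, and every constant is taken from the text rather than from any new data.

```python
# Replays the report's arithmetic: March 1 herd -> pre-calving herd -> calf crop.
herd_march1 = 600                       # estimated herd on March 1
yearlings_march1 = 102                  # 17 percent of 600
large_pack_kill_110d = {"adults": 19, "calves": 17}   # table 11, per 110 days
scale_all_packs = 4 / 3                 # small packs add about a third again
days_to_calving = 81                    # March 1 to May 20

kill_per_day = {k: v * scale_all_packs / 110 for k, v in large_pack_kill_110d.items()}
adults_lost = round(kill_per_day["adults"] * days_to_calving)     # ~19
calves_lost = round(kill_per_day["calves"] * days_to_calving)     # ~17

adults_precalving = (herd_march1 - yearlings_march1) - adults_lost   # ~479
yearlings_precalving = yearlings_march1 - calves_lost                # ~85

cows = adults_precalving // 2           # assumed even sex ratio -> ~239
pregnant = round(cows * 0.80)           # assumed 80 percent pregnancy rate -> ~191
calf_crop = round(pregnant * 1.19)      # 1.19 calves per pregnant cow -> ~227

print(adults_precalving, yearlings_precalving, calf_crop)   # 479 85 227
```

Comparing the resulting calf crop of about 227 with the mortality worked out in the following paragraphs is what underlies the conclusion that the herd is roughly in balance.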
Probably the study most comparable to the Isle Royale work is that of Spencer and Chatelain (1953). Pimlott calculated that short-yearlings composed 17 percent of their 9,436 winter aerial observations made in four Alaskan areas over a 3-year period. Most low ratios reported were from summer, whereas the Isle Royale figure applies in March, so it compares favorably with the others. This ratio is significant because it is an excellent indicator of annual increment to the herd. Mortality statistics demonstrate that very few individuals aged 1 to 5 are lost (table 18). As calculated above, an estimated 85 animals survive their first year. Having estimated calf production and annual increment, we can compare them with expected annual mortality. Since the calculated kill is 25 adults and 23 calves per 110 days in winter (see above), annual adult mortality approximates 83 animals, assuming the same rate of adult kill year round. The rate of calf kill cannot be projected for the entire year because summer calves are so much smaller than calves in winter. If the rate is constant from November 1 to May 20, 42 calves are consumed in this period. An indication of summer calf loss can be obtained from the percentage of calves present in the autumn population. The only autumn sample taken showed that 22 percent of 150 moose were calves (page 104). Theoretically, 40 adults should have perished between May 20 and late October, when the survey was made, leaving 524. If calves composed 22 percent of the total population, then 148 calves survived; this indicates that approximately 79 died from May 20 to November 1. On this basis, annual mortality would be 83 adults and 121 calves (42 plus 79), or a total of 204. Since approximately 227 animals are believed to be produced each year, the herd would increase annually by about 23, on the basis of the above computations. However, more substantial figures show that about 85 calves survive to their first year; 227 minus 85 equals 142 calves lost, 21 more than previously calculated. This discrepancy could result from a mistaken assumption that the winter rate of calf kill applies from November to May. Most likely more calves are taken in autumn and early winter, when they are smaller, more numerous and presumably more vulnerable. If this is true, a more realistic figure than 42 for calf mortality from November to May would be 142 minus 79 (summer kill), or 63. The annual calf kill is a useful figure, but the statistics most indicative of the future trend in moose numbers are annual adult kill and annual increment. The calculated figures are 83 and 85 respectively. If these approximate actual numbers, the Isle Royale herd will remain stable for as long as they apply. The annual-kill figure can be checked by comparing the approximate weights of animals killed with the total annual consumption (based on figures averaged from all three winter study periods). The large pack consumed a total of approximately 20,295 pounds in 110 days, or 184 pounds per day (page 77), and if the smaller packs ate a third as much, consumption for the entire population would be about 245 pounds per day, or 89,425 per year. The weight of animals killed is more difficult to determine. Assuming that 85 adults are taken annually and that each provides about 800 pounds of food, then adults contribute 68,000 pounds per year. If 63 calves at 275 pounds are killed between November 1 and May 20, they provide 17,325 pounds. 
The estimated 79 calves taken between May 20 and November 1 should average about 81 pounds apiece, calculated from weekly calf weights given by Peterson (1955), Denniston (1956), and Dodds (1959), so these furnish approximately 6,399 pounds. The three estimates total 91,724 pounds, which compares well with the calculated annual consumption. The close agreement is not important, since most of the figures are estimates; the significant point is that both numbers are within the same order of magnitude.

Another figure that compares favorably with production and loss statistics is the summer ratio of calves to total population (25 percent) based on field observations (page 103). The calculated calf production (227) is 33 percent of the estimated herd, but numbers undoubtedly dwindle rapidly during the first few weeks because of predation. Probably the loss rate declines as calves grow and provide more food. Since the summer calf ratio is an average of observations from about May 20 to September 20, it should be less than the percentage present on May 20 and more than the September ratio. Although the latter is unknown, the figure for November, based on 150 moose, is 22 percent. Thus the conservative estimate of the average summer ratio falls into line.

The following non-assessable factors could affect the production-loss calculations, but probably none is influential enough to destroy the worth of the proposed figures: the importance of beavers as summer food, possible waste of adult moose killed by small packs in summer, moose mortality other than predation, and differences in wolves' summer food requirements. Regarding the latter factor, E. H. McCleery, who has maintained a wolf kennel for years, wrote me that he feeds his animals an estimated five-sixths as much in summer as in winter.

The computed proportion of calves in the summer kill does not correlate well with the wolf-scat analysis (table 10). Calf hair composed 48 percent of the occurrences from May to August, and adult hair constituted 16 percent. If 40 adults and 79 calves are killed between May 20 and November 1, calves furnish about 6,399 pounds of food, whereas adults provide about 32,000. There could be several reasons that the scat analysis might not accurately indicate absolute or even relative ratios of calves to adults consumed: (1) Calves are covered with a higher proportion of hair than are adults; (2) all hair is consumed from summer calf kills, whereas large chunks of hide are left at adult kills, at least in winter; (3) a wolf could eat much meat from an adult without getting hair, but this would be difficult with a calf; and (4) when an adult is killed in summer, probably the wolves travel little until it is finished, so most scats would be left nearby; however, wolves probably finish a calf quickly and then continue, leaving a higher proportion of scats containing calf remains on trails. These and other possible biases indicate that scat-analysis figures are not a valid check on calculated kill rates.

Postulated seasonal trends in the moose herd are diagramed in figure 102. This model is not a precise estimate of actual numbers; rather it is an idealized scheme based on limited data. As such, it should be useful for considering the effects of wolves on moose numbers, even though future work may necessitate its modification. The age-structure curve of the moose herd also can be plotted.
Remains of 39 ageable adult moose were discovered at random and segregated into wear classes indicating relative age (table 18); these provide estimates of the percentage of mortality (from wolf predation and all other factors) occurring in each class. Assuming an annual increment of 85 yearlings and a mortality of 85 adults, one can determine the number of individuals in each wear class by subtracting the calculated mortality from the previous class, starting with 85 members in class I. When these are plotted on a graph, a profile of the age structure of the herd (just before calves are dropped) results (figure 103 and table 22). The total of moose in all wear classes, 493, compares favorably with the estimated population size just before calving season (564).

TABLE 22. CALCULATED AGE COMPOSITION OF THE MOOSE HERD JUST BEFORE CALVING SEASON [table body not reproduced; footnote b: calculated annual adult mortality and annual increment is 85]

The browse-moose-wolf complex can be summarized for the Isle Royale ecosystem in terms of weight, on the basis of data from this study and one figure from the literature. Since no attempt was made to measure browse consumption of moose, figures from other studies will be used. Hickie (1937) determined that a captive moose requires 25 pounds of browse per day, and Kellum (1941) found that captive animals that were supplied "unlimited" food consumed 40 to 50 pounds per day in winter and 50 to 60 in summer. He believes that summer consumption appears higher because of the high water content of summer foods. In addition, Palmer (1944) estimated, on the basis of tests with caribou, reindeer, and musk-oxen, that a 1,200-pound moose requires about 35 pounds per day. Because wild moose must gather their food, they probably do not consume as much as captive animals, so I will assume that an adult eats 25 pounds daily in winter and 35 pounds in summer.

The Isle Royale wolf population annually devours an estimated 89,425 pounds of moose (page 163), which equals about 112 adults at 800 pounds each. (Since browse consumption figures are based on adults, the wolves' consumption must be converted to adults only.) If the summer rate applies from May 1 to September 1 and the winter rate for the rest of the year, each moose eats about 10,325 pounds per year; the 112 would consume 1,156,400 pounds annually. Since the average Isle Royale wolf is assumed to weigh 72 pounds (page 77), the entire population should weigh about 1,512 pounds. The ratio of moose to browse is 7.7 percent; of wolves to moose, 1.7 percent; and of wolves to browse, 0.13 percent. Thus, yearly, about 762 pounds of browse are consumed for each 59 pounds of moose, in turn consumed for each 1 pound of wolf.

The above calculations demonstrate the tremendous energy loss that occurs from one trophic level to another. However, since it takes an estimated 564 moose to produce the weight or number consumed, the annual weight of browse consumed is more realistically in the neighborhood of 5,823,300 pounds, or 3,851 pounds of browse per pound of wolf! The true amount of available or total browse, versus the amount consumed by the herd, is unknown.
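The weight pyramid in the final paragraphs is again straightforward arithmetic, and a compact sketch makes the chain of assumptions explicit. All figures are the ones quoted in the text; the split of the year into 120 summer days and 245 winter days is an inference needed to reproduce the 10,325-pound annual browse figure, and the count of 21 wolves is simply the stated 1,512 pounds of wolf divided by the assumed 72-pound average.

```python
# Browse -> moose -> wolf weight pyramid, using the figures quoted above.
browse_per_moose_year = 120 * 35 + 245 * 25      # 10,325 lb per adult (summer/winter rates)
moose_consumed_lbs = 89_425                      # moose eaten by the wolves per year
adult_equivalents = round(moose_consumed_lbs / 800)                    # ~112 adult moose
browse_for_consumed_moose = adult_equivalents * browse_per_moose_year  # 1,156,400 lb
wolf_population_lbs = 21 * 72                    # 1,512 lb (21 wolves at 72 lb average)

print(round(100 * moose_consumed_lbs / browse_for_consumed_moose, 1))   # 7.7  (% moose/browse)
print(round(100 * wolf_population_lbs / moose_consumed_lbs, 1))         # 1.7  (% wolf/moose)
print(round(100 * wolf_population_lbs / browse_for_consumed_moose, 2))  # 0.13 (% wolf/browse)

# Browse actually eaten by the whole herd of ~564 moose:
herd_browse = 564 * browse_per_moose_year        # 5,823,300 lb
print(round(herd_browse / wolf_population_lbs))  # ~3851 lb of browse per lb of wolf

# Cross-check from the preceding paragraphs: weight of animals killed
# (85 adults, 63 winter calves, 79 summer calves) versus consumption.
kill_weight = 85 * 800 + 63 * 275 + 79 * 81      # 91,724 lb against ~89,425 lb consumed
print(kill_weight)
```

The kill-weight total of 91,724 pounds and the consumption estimate of 89,425 pounds agree to within a few percent, which is the "same order of magnitude" check made in the text.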
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"content is educational and appropriate\", \"score\": 0}\n```\n\nThe text provided is a detailed analysis of moose population dynamics and predation effects, which is entirely educational and appropriate for public use, especially in scientific or educational contexts. There are no unsafe elements, inappropriate language, or themes that would warrant a higher safety score.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe provided text is a detailed analysis of moose population dynamics and predation rates. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and receives a score of 0.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is a detailed scientific analysis of wolf predation on moose in the Isle Royale ecosystem. It contains no unsafe, inappropriate, or offensive content. The language is academic and the themes are related to wildlife studies, making it fully appropriate for public use, educational environments, and general audiences.", "{\"reason\": \"No unsafe content detected\", \"score\": 0}\n```\n\nThe provided text discusses ecological ratios and energy transfer in a trophic system. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and receives a score of 0." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1313-0", "llm_safety_score": 0, "reason": "content is educational and appropriate" }, { "custom_id": "1313-1", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1313-2", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1313-3", "llm_safety_score": 0, "reason": "No unsafe content detected" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.nps.gov/history/history/online_books/fauna7/fauna6.htm", "date": "2014-07-25T19:11:53", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997894473.81/warc/CC-MAIN-20140722025814-00064-ip-10-33-131-23.ec2.internal.warc.gz", "language": "en", "language_score": 0.951849401, "token_count": 2846, "score": 3.875, "int_score": 4 }
<urn:uuid:7dccc3af-30f9-4288-bb85-028cb21758b8>
Translating Evidence into Recommendations

Making recommendations for clinical practice involves considerations that extend beyond scientific evidence. Direct scientific evidence is of pre-eminent interest, but such issues as cost effectiveness, resource prioritization, logistical factors, ethical and legal concerns, and patient and societal expectations should also be considered. Historically, the Task Force has taken a conservative, evidence-based approach to this process, making recommendations that reflect primarily the state of the evidence and refraining from making recommendations when they cannot be supported by evidence. This is done with the understanding that clinicians and policymakers must still consider additional factors in making their own decisions (34). The Task Force sees its purpose as providing users with information about the extent to which recommendations are supported by evidence, allowing them to make more informed decisions about implementation.

Another important issue in making recommendations is the amount and quality of evidence required. As evidence is rarely adequate to provide decision makers with completely valid information about all important outcomes for the population of interest, those creating guidelines must consider how far they are willing to generalize from imperfect evidence. As noted in the Extrapolation and Generalization section, the Task Force believes that such generalizations can be made under defined conditions.

The general principles the Task Force follows in making recommendations are outlined in Table 5. Most of these principles have been discussed in other parts of this article. They involve both the factors considered by the Task Force in making recommendations (e.g., the most salient types of evidence, feasibility, harms, economic costs, and its target population) and the way in which it considers these factors (e.g., the place of subjectivity, the importance of the population perspective, and the extent to which the evidence connects the service with positive net benefits for patients).

Table 5. Principles for making recommendations

- Task Force recommendations are evidence based: They require scientific evidence that persons who receive the preventive service experience better health outcomes than those who do not and that the benefits are large enough to outweigh the harms. The Task Force emphasizes evidence that directly links the preventive service with health outcomes. Indirect evidence may be sufficient if it supports the principal links in the analytic framework. Although the Task Force acknowledges that subjective judgments do enter into the evaluation of evidence and the weighing of benefits and harms, its recommendations are not based largely on opinion. The Task Force is explicit about the scientific rationale for its recommendations.
- The outcomes that matter most in weighing the evidence and making recommendations are health benefits and harms. In considering potential benefits, the Task Force focuses on absolute reductions in the risk of outcomes that people can feel or care about.
In considering potential harms, the Task Force examines harms of all types, including physical, psychological, and nonmedical harms that may occur sooner or later as a result of the preventive service. Where possible, the Task Force considers the feasibility of future widespread implementation of the preventive service in making recommendations. The Task Force generally takes a population perspective in weighing the magnitude of benefits against the magnitude of harms. In some situations, it may recommend a service with a large potential benefit for a small proportion of the population. In assessing net benefits, the Task Force subjectively estimates the population's value for each benefit and harm. When the Task Force judges that the perceived balance of benefits and harms is likely to vary substantially within the population, it may abandon general recommendations and suggest shared decision making at the individual level.
- Where possible, the Task Force considers the total economic costs that result from providing a preventive service, both to individuals and to society, in making recommendations, but costs are not the first priority. When the Task Force recommends against a preventive service for economic reasons, it states so explicitly. The Task Force does not modify its recommendations to accommodate concerns about insurance coverage of preventive services, medicolegal liability, or legislation, but users of the recommendations may need to do so. Recommendations apply only to asymptomatic persons or those with unrecognized signs or symptoms of the target condition for which the preventive service is intended. They also apply only to preventive services initiated in the clinical setting.

Codes and Wording of Statements

As in the past, the Task Force assigns letter codes to its recommendations and uses standardized phrasing for each category of recommendations (Table 6), but the details have changed from previous versions. The original five-letter scheme, which included an E recommendation category that was rarely used (6), has been replaced with a four-letter scheme that allows only one classification for recommendations against routinely providing a preventive service (D).

Table 6. Standard recommendation language, USPSTF

A. Language[a]: The USPSTF strongly recommends that clinicians routinely provide [the service] to eligible patients. (The USPSTF found good evidence that [the service] improves important health outcomes and concludes that benefits substantially outweigh harms.)

B. Language[a]: The USPSTF recommends that clinicians routinely provide [the service] to eligible patients. (The USPSTF found at least fair evidence that [the service] improves important health outcomes and concludes that benefits outweigh harms.)

C. Language[a]: The USPSTF makes no recommendation for or against routine provision of [the service]. (The USPSTF found at least fair evidence that [the service] can improve health outcomes but concludes that the balance of the benefits and harms is too close to justify a general recommendation.)

D. Language[a]: The USPSTF recommends against routinely providing [the service] to asymptomatic patients. (The USPSTF found at least fair evidence that [the service] is ineffective or that harms outweigh benefits.)

I. Language[a]: The USPSTF concludes that the evidence is insufficient to recommend for or against routinely providing [the service].
(Evidence that [the service] is effective is lacking, of poor quality, or conflicting and the balance of benefits and harms cannot be determined.)

[a] All statements specify the population for which the recommendation is intended and are followed by a rationale statement providing information about the overall grade of evidence and the net benefit from implementing the service.

Previous definitions for letter codes focused on whether the evidence supported "including the preventive service in the periodic health examination." Current thinking is that preventive services should also be delivered in other contexts, such as illness visits. The new wording thus focuses on whether the service should be "routinely provided."

In the past, the Task Force assigned a C code to recommendations with "insufficient evidence to make a recommendation." Previous Task Forces used this code for a wide assortment of circumstances and thus assigned it to a large proportion of the preventive services they reviewed. Evidence could be insufficient because no studies existed, available studies were of poor quality, studies were of reasonable quality but conflicting, or results were consistent but the magnitude of net benefit was small. The C recommendation, because of its location in the hierarchical ranking of recommendation grades, implies that the service is less worthy of implementation than services that receive an A or a B recommendation. The current Task Force believes that such pejorative conclusions should be applied only when the evidence provides a basis for inferring that the magnitude of net benefit is smaller than for interventions that merit higher ratings. In other instances, in which evidence is of poor quality or conflicting, the possibility of substantial benefit (or substantial harm) cannot be excluded on scientific grounds and thus the Task Force can make no evidence-based judgments about the service. To address these cases, the Task Force has created a new recommendation category, the I recommendation (insufficient evidence). It has also intentionally chosen a letter distant from the A-D hierarchy to signal its reluctance to pass judgment about the effectiveness of the interventions that receive this rating. The Task Force gives an I recommendation when studies are lacking or of poor quality or when they produce conflicting results that do not permit conclusions about likely benefits and harms.

For the A-D recommendations, the Task Force has adopted a more formalized process for translating the evidence into group judgments about how strongly to recommend the intervention than had been applied in the past. In earlier years, the simplistic notion was that services supported by RCTs always received A recommendations. The new approach recognizes that the importance of providing the preventive service depends not only on the quality of the evidence but also on the magnitude of net benefit to patients or populations. In an effort to ensure that both dimensions—quality and magnitude—are addressed systematically in assigning letter codes, the Task Force now uses a recommendation grid (Table 7) that makes the process more explicit.

Table 7. Recommendation grid

|Quality of evidence||Substantial net benefit||Moderate net benefit||Small net benefit||Zero/negative net benefit|
|Good||A||B||C||D|
|Fair||B||B||C||D|
|Poor||I||I||I||I|

As shown, code A indicates that the quality of evidence is good and the magnitude of net benefits is substantial: The Task Force "strongly recommends" that these services be routinely provided (Table 6).
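Read as a decision rule, Table 7 maps two judgments (the quality of the evidence and the magnitude of net benefit) onto a single letter code. The sketch below is not part of the Task Force's published materials; it is simply one way to encode the grid, with the cell assignments inferred from the surrounding discussion, and the B, C, D, and I codes it returns are explained in the paragraphs that follow.

```python
# Illustrative encoding of the recommendation grid (Table 7).
# Rows: quality of evidence; columns: magnitude of net benefit.
# Cell assignments follow the discussion in the surrounding text.

GRID = {
    "good": {"substantial": "A", "moderate": "B", "small": "C", "zero/negative": "D"},
    "fair": {"substantial": "B", "moderate": "B", "small": "C", "zero/negative": "D"},
}

def recommendation_code(quality: str, net_benefit: str) -> str:
    """Return the letter code for a (quality of evidence, net benefit) pair."""
    if quality == "poor":
        # Poor evidence yields an I regardless of apparent benefit: the
        # magnitude of net benefit cannot be judged from poor evidence.
        return "I"
    return GRID[quality][net_benefit]

assert recommendation_code("good", "substantial") == "A"
assert recommendation_code("fair", "small") == "C"
assert recommendation_code("poor", "substantial") == "I"
```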
The B code indicates that the Task Force has found that either the quality of the evidence or the magnitude of net benefits (or both) is less than would be needed to warrant an A. Primary care providers should not necessarily give higher priority to A over B services. Setting priorities for offering, providing, or reimbursing these services should include consideration of time and resource requirements, which are beyond the scope of the Task Force's review. Other groups have undertaken this important work (35).

The C code indicates that the quality of evidence is either good or fair but that the magnitude of net benefits, as judged in the subjective process outlined above, is too small to make a general recommendation. In these cases, the Task Force "makes no recommendation for or against routinely providing the service." Clinicians and policymakers may choose to offer the service for other reasons—such as considerations other than scientific evidence or because benefits for individual patients are expected to exceed those observed in studies—but the Task Force rating is meant to advise them that existing evidence does not document substantial net benefit for the average patient.

The D code indicates that the evidence is good or fair but that net benefit is probably either zero or negative. In these situations, the Task Force recommends against routine use of the service.

When the evidence is poor, the Task Force cannot distinguish between substantial or moderate net benefits on the one hand and small or zero/negative net benefits on the other. In these cases, the Task Force uses code I to indicate that it cannot make a recommendation for or against routinely providing the service. Because extant evidence cannot yet clarify whether the net benefits of the service are large or small (or negative), this rating advises clinicians and policymakers that determination of whether to provide these services routinely cannot be based on evidence; such decisions must be based on factors other than science.

Drafting the Report

In its earliest days, background papers and recommendations of the Task Force were written by individual panel members assigned to those topics. In later years, they were written by staff with close oversight by the Task Force. In time a sharp demarcation has evolved between descriptions of the evidence and recommendations. Thus, for the third Task Force, topic teams led by EPC staff write systematic evidence reviews. These reviews define the strengths and limits of the evidence but stop short of making recommendations. Systematic evidence reviews typically include the full version (available from AHRQ and accessible on its Web site) and a shorter summary such as those available online. As a work product prepared under contract for AHRQ, the systematic evidence reviews must be approved by the agency before public release. The reviews remain pure descriptions of the science; because they are published separately, groups other than the Task Force can use them to formulate their own guidelines and recommendations. The summary reviews are typically coupled with a "recommendation and rationale" document, written by the Task Force, which contains recommendations and their supporting rationales. Recommendations, which cross the line from science into policy, are based on formal voting procedures that include explicit rules for determining the views of the majority. The Task Force has an explicit policy concerning conflict of interest.
All members and EPC staff disclose at each meeting if they have an important financial, organizational, or intellectual conflict for each topic being discussed. Task Force members and EPC staff with conflicts can participate in discussions about evidence, but members abstain from voting on recommendations about the topic in question. Recommendations are independent of the government. They neither require clearance from nor represent the policy of AHRQ or the U.S. Public Health Service, although efforts are made to consult with relevant agencies to reduce unnecessary discrepancies among guidelines.

The Task Force chair or liaisons on the topic team generally compose the first draft of the recommendation and rationale statement, which the full panel then reviews and edits. These statements have the general structure of the chapters in previous editions of the Guide to Clinical Preventive Services (6). Specifically, they include a recommendation statement and code, a rationale statement, and a brief discussion of clinical interventions. The clinical intervention section is meant to provide more specific information and guidance to clinicians about the service, sometimes discussing factors beyond the quality of the evidence and the magnitude of net benefit that must be considered with implementation.

Before the Task Force makes its final determinations about recommendations on a given preventive service, the EPC and AHRQ send a draft systematic evidence review to four to six external experts and to federal agencies and professional and disease-based health organizations with interests in the topic. They ask the experts to examine the review critically for accuracy and completeness and to respond to a series of specific questions about the document. After assembling these external review comments and documenting the proposed response to key comments, the topic team presents this information to the Task Force in memo form. In this way, the Task Force can consider these external comments and a final version of the systematic review before it votes on its final recommendations about the service.

Methods for making evidence-based practice policies are evolving. At one extreme, guidelines panels could insist on direct evidence or point to any information gaps to justify a negative recommendation for almost any service. Such an approach would result in positive recommendations only for services that had a very narrow confidence interval for net benefit, but many effective services would not be recommended. At the other extreme, guideline groups that accept incomplete data and allow easy extrapolation make many positive recommendations, but they have less certainty that the services they recommend actually produce more benefit than harm. In avoiding these extremes, the Task Force has wrestled with several gaps in existing methodology for assessing the quality of evidence, for integrating bodies of evidence, and for translating evidence into guidelines. It continues to address several knotty questions: Can criteria for the internal validity of studies be consistently applied across preventive services? How reliable are such criteria in identifying studies with misleading results? How much weight should be given to various degrees of information gaps, particularly those concerning potential harms and generalizations from research studies to everyday practice? Should the Task Force modify any of these methods when dealing with counseling services?
More methodologic research is warranted in several key areas. Principal among these are efforts to determine the best factors to consider in using evidence-based principles to guide judgments about the magnitude of benefits and harms when the available evidence is fair in quality and when gaps exist in the framework supporting effectiveness. These and other challenges will make the methods of the Task Force, like those of other evidence-based guideline programs, a work in progress for many years.

This paper was developed by the Research Triangle Institute-University of North Carolina at Chapel Hill (RTI-UNC) and the Oregon Health Sciences University (OHSU) Evidence-Based Practice Centers under contracts from the Agency for Healthcare Research and Quality (contract nos. 290-97-0011 and 290-97-0018, respectively). We acknowledge the assistance of Jacqueline Besteman, J.D., M.A., EPC Program Officer; the AHRQ staff working with the third Task Force; and the staffs of the EPCs at RTI-UNC and at OHSU for their many hours of work in support of this effort. We also acknowledge the assistance of the Counseling and Behavioral Issues Work Group of the Task Force, Evelyn Whitlock, M.D., M.P.H., convenor. Finally, we also acknowledge the major contribution of the entire third U.S. Preventive Services Task Force for its support and intellectual stimulation. The authors of this article are responsible for its contents, including any clinical or treatment recommendations. No statement in this article should be construed as an official position of the Agency for Healthcare Research and Quality or the U.S. Department of Health and Human Services.

References and Notes

1. Field MJ, Lohr KN, eds. Guidelines for clinical practice: from development to use. Washington, DC: National Academy Press, 1992 (for Institute of Medicine). 2. Woolf SH, George JN. Evidence-based medicine: interpreting studies and setting policy. Hematol Oncol Clin N Amer 2000, 14:761-784. 3. Mulrow CD, Cook D, eds. Systematic reviews: synthesis of best evidence for health care decisions. Philadelphia: American College of Physicians, 1998. 4. Cook D, Giacomini M. The trials and tribulations of clinical practice guidelines. JAMA 1999, 281:1950-1951. 5. Lawrence RS, Mickalide AD, Kamerow DB, Woolf SH. Report of the U.S. Preventive Services Task Force. JAMA 1990, 263:436-437. 6. U.S. Preventive Services Task Force. Guide to clinical preventive services: report of the U.S. Preventive Services Task Force, 2nd ed, Washington, DC: Office of Disease Prevention and Health Promotion, U.S. Government Printing Office, 1996. 7. Eddy DM. Clinical decision making: from theory to practice. A collection of essays from JAMA. Boston: Jones and Bartlett Publishers, 1995. 8. Pignone MP, Phillips CJ, Atkins D, Teutsch SM, Mulrow CD, Lohr KN. Screening and treating adults for lipids disorders. Am J Prev Med 2001;20(suppl 3):77-89. 9. Briss PA, Zaza S, Pappaioanou M, et al. Developing an evidence-based guide to community preventive services: methods. Am J Prev Med 2000;18(suppl 1):35-43. 10. Meade MO, Richardson WS. Selecting and appraising studies for a systematic review. In: Mulrow CD, Cook D, eds. Systematic reviews: synthesis of best evidence for health care decisions. Philadelphia: American College of Physicians, 1998:81-90. 11. Woolf SH, DiGuiseppi CG, Atkins D, Kamerow DB. Developing evidence-based clinical practice guidelines: lessons learned by the U.S. Preventive Services Task Force.
Ann Rev Public Health 1996, 17:511-538. 12. Battista RN, Fletcher SW. Making recommendations on preventive practices: methodological issues. Am J Prev Med 1988;4(suppl 4):53-67. 13. Mulrow C, Langhorne P, Grimshaw J. Integrating heterogeneous pieces of evidence in systematic reviews. In: Mulrow CD, Cook D, eds. Systematic reviews: synthesis of best evidence for health care decisions. Philadelphia: American College of Physicians, 1998:103-12. 14. Nelson HD, Helfand M. Screening for chlamydial infection. Am J Prev Med 2001;20(suppl 3):95-107. 15. Helfand M, Mahon SM, Eden KB, Frame PS, Orleans CT. Screening for skin cancer. Am J Prev Med 2001;20(suppl 3):47-58. 16. Wilson JMG, Junger G. Principles and practice of screening for disease. Geneva: World Health Organization, 1968 (Public Health Papers No. 34). 17. Frame PS, Carlson SJ. A critical review of periodic health screening using specific screening criteria. J Fam Pract 1975;2:29-36, 123-9, 189-94, 283-9. 18. Bucher HC, Guyatt GH, Cook DJ, Holbrook A, McAlister F.A. Users' guides to the medical literature. XIX. Applying clinical trial results. A. How to use an article measuring the effect of an intervention on surrogate end points. JAMA 1999, 282:771-778. 19. Gøtzsche PC, Liberati A, Torri V, Rossetti L. Beware of surrogate outcome measures. Int J Tech Assess Health Care 1996, 12:238-246. 20. Lohr KN, Carey TS. Assessing "best evidence": issues in grading the quality of studies for systematic reviews. J Qual Improv 1999, 25:470-479. 21. Hornberger J, Wrone E. When to base clinical policies on observational versus randomized trial data. Ann Intern Med 1997, 127:697-703. 22. Feinstein AR, Horwitz RI. Problems in the "evidence" of "evidence-based medicine." Am J Med 1997, 103:529-535. 23. Oxman AD, Cook DJ, Guyatt GH, Evidence-Based Medicine Working Group. Users' guides to the medical literature: how to use an overview. JAMA 1994;272:1367-71. 24. Mulrow CD, Linn WD, Gaul MK, Pugh JA. Assessing quality of a diagnostic test evaluation. J Gen Intern Med 1989, 4:288-295. 25. Guyatt GH, Sackett DL, Cook DJ, Evidence-Based Medicine Working Group. Users' guides to the medical literature. I. How to use an article about therapy or prevention. A. Are the results of the study valid? JAMA 1993;270:2598-601. 26. Laupacis A, Wells G, Richardson WS, Tugwell P, Evidence-Based Medicine Working Group. Users' guides to the medical literature V. How to use an article about prognosis. JAMA 1994;272:234-7. 27. Russell MA, Wilson C, Taylor C, Baker CD. Effect of general practitioners' advice against smoking. BMJ 1979, 2:231-235. 28. Eddy DM. Comparing benefits and harms: the balance sheet. JAMA 1990;263:2493, 2498, 2501. 29. Braddick M, Stuart M, Hrachovec J. The use of balance sheets in developing clinical guidelines. J Am Board Fam Pract 1999, 12:48-54. 30. Ewart RM. Primum non nocere and the quality of evidence: rethinking the ethics of screening. J Am Board Fam Pract 2000, 13:188-196. 31. Fletcher SW, Black W, Harris R, Rimer B, Shapiro S. Report of the International Workshop on Screening for Breast Cancer. J Natl Cancer Inst 1993, 85:644-656. 32. Elmore JG, Barton MB, Moceri VM, Polk S, Arena PJ, Fletcher SW. Ten-year risk of false positive screening mammograms and clinical breast examinations. N Engl J Med 1998, 338:1089-1096. 33. Nease RF Jr, Kneeland T, O'Connor GT, et al. Variation in patient utilities for outcomes of the management of chronic stable angina: implications for clinical practice guidelines. JAMA 1995, 273:1185-1190. 34. Woolf SH, Dickey LL. 
Differing perspectives on preventive care guidelines: a new look at the mammography controversy. Am J Prev Med 1999, 17:260-268. 35. Coffield AB, Maciosek MV, McGinnis JM, et al. Priorities among recommended clinical preventive services. Am J Prev Med 2001. In press.

[a] Harris: School of Medicine and Cecil G. Sheps Center for Health Services Research, University of North Carolina at Chapel Hill, NC.
[b] Helfand: Division of Medical Informatics and Outcomes Research, and Evidence-based Practice Center, Oregon Health Sciences University and Portland Veterans Affairs Medical Center, Portland, OR.
[c] Woolf: Department of Family Practice, Medical College of Virginia, Virginia Commonwealth University, Fairfax, VA.
[d] Lohr: Research Triangle Institute, Research Triangle Park, and University of North Carolina at Chapel Hill, Program on Health Outcomes and School of Public Health, Chapel Hill, NC.
[e] Mulrow: Department of Medicine, University of Texas Health Science Center, San Antonio, TX.
[f] Teutsch: Outcomes Research and Management, Merck & Co, Inc., West Point, PA.
[g] Atkins: Center for Practice and Technology Assessment, Agency for Healthcare Research and Quality, Rockville, MD.

Other members of the Methods Work Group include: Alfred O. Berg, M.D., M.P.H., University of Washington School of Medicine; Karen B. Eden, Ph.D., Oregon Health Sciences University; John Feightner, M.D., M.Sc., FCFP, University of Western Ontario-Parkwood Hospital; Susan Mahon, M.P.H., Oregon Health Sciences University; and Michael Pignone, M.D., M.P.H., University of North Carolina School of Medicine.

Copyright and Source Information

This document is in the public domain within the United States as stated in AHRQ's license agreement with the American Journal of Preventive Medicine. For information on reprinting, contact Randie Siegel, Director, Division of Printing and Electronic Publishing, Agency for Healthcare Research and Quality, Suite 501, 2101 East Jefferson Street, Rockville, MD 20852. Requests for linking or to incorporate content in electronic resources should be sent to: https://info.ahrq.gov.

Source: Harris RP, Helfand M, Woolf SH, Lohr KN, Mulrow CD, Teutsch SM, Atkins D, for the Methods Work Group, third U.S. Preventive Services Task Force. Current methods of the U.S. Preventive Services Task Force: a review of the process. Am J Prev Med 2001;20(3S):21-35 (http://www.elsevier.com/locate/ajpmonline).
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe provided text discusses the process of making recommendations for clinical practice, focusing on the importance of scientific evidence, cost-effectiveness, resource prioritization, and ethical considerations. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and educational environments.", "{\"reason\": \"standard recommendation language, no unsafe content\", \"score\": 0}\n```\n\nThe text provided is a standard recommendation language from the USPSTF (United States Preventive Services Task Force) and does not contain any unsafe or inappropriate content. It discusses the criteria and process for making recommendations regarding preventive services, which is entirely appropriate for public use and educational environments.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided discusses the methodology and processes involved in creating clinical preventive service guidelines. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and receives a score of 0.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provided is a scholarly discussion on the reliability of criteria for identifying studies with misleading results and the challenges in using evidence-based principles to guide judgments about the magnitude of benefits and harms. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and receives a score of 0.", "{\"reason\": \"No unsafe content\", \"score\": 0}\n```\n\nThe provided text is a collection of references and citations from various medical and academic sources. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for public use, educational environments, and general audiences.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```" ], "llm_harm_scores_by_chunk": [ { "custom_id": "1314-0", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1314-1", "llm_safety_score": 0, "reason": "standard recommendation language, no unsafe content" }, { "custom_id": "1314-2", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1314-3", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1314-4", "llm_safety_score": 0, "reason": "No unsafe content" }, { "custom_id": "1314-5", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://archive.ahrq.gov/clinic/ajpmsuppl/harris3.htm", "date": "2014-07-28T20:31:32", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261958.8/warc/CC-MAIN-20140728011741-00308-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9043175578, "token_count": 5577, "score": 2.671875, "int_score": 3 }
<urn:uuid:ed582845-04ff-427a-84a2-a21c77f8b7b0>
Law and Policy

Religious Beliefs and Cultural Practices

- Common Beliefs and Cultural Practices
The Provider's Guide to Quality & Culture
This section provides medical practitioners with basic information on the culture, belief system, health strengths, and health challenges of people from some of the regions and groups of the world. Cultural Groups gives detailed information about several cultural groups.
- Cultural Diversity
Myers, Ehren. (2006). RNotes: Nurse's Clinical Pocket Guide. F.A. Davis
The table from RNotes by Ehren Myers includes cultural characteristics of groups that are widely dispersed throughout the U.S. Page 1: African-Americans, Arab-Americans, Asian-Americans and Bosnian-Americans. Page 2: Native-Americans, Mexican-Americans and Russian-Americans
- Cultural Diversity: End-of-Life
Marianjoy Medical Library
Many factors including culture and ethnicity determine thoughts about death. This section provides an ethical overview related to death and dying and links to several resources. All major religions support organ, tissue and eye donation. Still, within each religion there are different schools of thought.

Bioethics Resources on the Web

- Bioethics Research Library
Kennedy Institute of Ethics, Georgetown University
Provides an interdisciplinary and multi-format collection on ethical issues related to healthcare, biomedical research, biotechnology, and the environment.
- Ethics in Medicine
University of Washington School of Medicine
Offers case studies in public health ethics as a teaching tool for physicians and medical students.
- The Hastings Center
Independent, nonpartisan, and nonprofit bioethics research institute founded in 1969. The Center's mission is to address fundamental ethical issues in the areas of health, medicine, and the environment as they affect individuals, communities, and societies.
- NIH's Bioethics Resources on the Web
The information accessible through this portal spans a wide range of topics, including institutional review boards, ethics committees, clinical ethics, international research ethics issues, genetics, and neuroethics, among others.

MedlinePlus® Provides authoritative consumer health information. Pertinent topics include:
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe provided text sample is a collection of references and descriptions related to cultural diversity, religious beliefs, and ethical considerations in healthcare. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for public use, educational environments, and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1315-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.marianjoylibrary.org/Diversity/ethics.aspx", "date": "2014-07-28T20:27:53", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261958.8/warc/CC-MAIN-20140728011741-00308-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.8636220694, "token_count": 436, "score": 2.90625, "int_score": 3 }
<urn:uuid:5ca9921d-5489-4545-a8b1-9168fc8d73c0>
- The definition of byre means a barn for cows. An example of a byre is a barn on a British dairy farm. This cow barn in England might be called a byre.
- Brit. a cow barn. Origin of byre: Middle English & Old English hut; akin to bower.
- A barn for cows. Origin of byre: Middle English, from Old English bȳre; see bheuə- in Indo-European roots.
- From Old English bȳre.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"No unsafe content found\", \"score\": 0}\n```\n\nThe text provided is a definition and origin of the term \"byre,\" which is a neutral and educational term. It does not contain any unsafe, inappropriate, or harmful content." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1316-0", "llm_safety_score": 0, "reason": "No unsafe content found" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.yourdictionary.com/byre", "date": "2014-07-28T20:29:04", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261958.8/warc/CC-MAIN-20140728011741-00308-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.7996822596, "token_count": 113, "score": 3.125, "int_score": 3 }
<urn:uuid:50945692-1cdf-44c4-83cd-5bc60881464a>
Early Silurian to Devonian thalloid plants characterized by pseudoparenchymatous (false parenchyma) tissue comprised of filaments, surrounded by a thick cuticle. Some produce thick-walled spores. Those colonizing soils may have been the first tier on the landscape.

Prototaxites (Late Silurian to Late Devonian) - "Trunks" up to 1 m wide and 2 m long. Internal structure of large (13-35 µm) and small tubes (5.3 µm in diameter) that may branch. Some small tubes may exhibit vesicle-like swellings. Thick walls of large tubes contain anastomosing filamentous components. Thin walls consist of a septum with a complex pore. Thirteen species represented by compression-impressions & permineralizations. Affinities with the Fungi (F. Hueber, pers. comm.)

Pachytheca (Late Silurian to Early Devonian) - Algae consisting of a single spherical body 1.5-5 mm in diameter composed of an inner and outer zone. The inner zone consists of multicellular filaments oriented randomly, but in the outer zone they are radially oriented.

Parka (Early Devonian) - A dorsiventral thallus, circular to ovoid up to 5 cm, with clustered spores. The thallus is cellular, upper and lower epidermis with parenchyma, and attached by a ventral holdfast. Spores appear cuticularized, 28-34 µm in diameter, interpreted to be in a sporangium at the surface.

Spongiophyton (Middle Devonian) - Remains have been recovered from fluvial sediments, indicating their terrestrial origin. Thalloid branching plant thought to have a habit similar to modern liverworts (Gensel et al., 1991) that is typically preserved as a cuticular sheath with a porate (200-300 micrometer raised areas with holes) and non-porate surface. Porate cuticle 3X as thick as non-porate surface. No evidence of vascularization. Recently, Stein et al. (1993) have suggested that these plants are lichens.

Earliest vascular plants in the Late Silurian (Pridolian; 414-408) are assigned to the Order RHYNIALES (RHYNIOPHYTES). Basic characteristics of the group include:

These rhizomatous plants are the first group to grow upwards towards the light, with a determinate growth habit to the aerial parts of the organism. Many "stands" are found consisting of only one type of plant in the area. This is indicative of a colonizing, pioneering (r-strategist) behavior.

Cooksonia - Small plant attaining heights of 10 cm, dichotomously branched with globose terminal sporangia. Found in monotypic stands.

Rhynia - Attaining heights of 18-50 cm, attached to rhizomatous axes with monopodial branching (one half of the dichotomy becomes physically dominant), terminal sporangium fusiform. Found either in monotypic stands or integrated with other rhyniophytalean/zosterophytalean plants.

Nothia - Naked invaginated axes with pear-shaped sporangia (apical dehiscence) on adaxially (downward) recurved stalks. Sporangia may be helical or whorled in arrangement. Found with other rhyniophytes.

Renalia - Longest axis recorded is 11 cm, dichotomous, with round to reniform (kidney-shaped) terminal sporangia. Pseudomonopodial in growth habit of monotypic stands. Displays characteristics of both Rhyniophytes and Zosterophyllophytes.

A silicified Devonian peat bog known as the Rhynie Chert is one locality in which many of the earliest vascular plants were first identified. The official Rhynie Chert homepage can give you up-to-date information on research focused on this Fossil Lagerstätte!
Zosterophyllum - Dichotomously branched aerial axes with peculiar (infrequent) lateral branches that result in an H-shaped bauplan (body architecture). Cauline sporangia are clustered near the top of the axes. Commonly found as monotypic assemblages.

Sawdonia - Aerial parts up to 30 cm tall (may have attained heights of 0.5 m), pseudomonopodial (anatomically the axes appear dichotomous, but the plant takes on the appearance of a central axis with laterals) from rhizome with lateral dichotomous axes. Apices are coiled (circinate or fiddle-heads such as found in ferns). Lateral spines have no vascular tissue. Reniform (kidney-shaped) sporangia are loosely aggregated into spikes. Commonly found in monotypic assemblages.

Gosslingia - Aerial stems up to 0.5 m in height with dichotomous branching but a strong centralized axis resulting in an imperfect pseudomonopodial habit. Small short vascularized branches (<2 mm long) occur along the axis. Commonly found as monotypic assemblages.

Rebuchia - Plant of densely branched, tufted aspect with distinct upright axes up to 15 cm in height. Some dichotomous axes terminated by dense clusters of stalked, opposite to subopposite sporangia.

Psilophyton - Variable branching of the erect stem (pseudomonopodial, dichotomous or trichotomous) that emerged from a rhizomatous mat. Stem with variously shaped spines. Paired sporangia terminate the ultimate dichotomy of the lateral branches; homosporous. Central axis may be as much as 3.5 cm in diameter; maximum height may have been > 1 m.

Trimerophyton - Lateral branches originate in a spiral pattern from an erect stem. Lateral branches trifurcate (divide into three) close to the main axis, and continue to divide apically. Each branch ends with two terminal third-order branches upon which are two pendant sporangia. No leaves are known; estimate of height not available.

Pertica - Lateral branches are tetrastichous (four branches in an opposite pattern), forming a clockwise spiral, and originate from a main axis. Individual branches dichotomize many times at right angles to each other. Axes have small bumps (papillae). Clusters of sporangia (32-256) terminate branches; trilete spores are homosporous. Main axis 1.5 cm in diameter; maximum height may have been several meters.

Two clades of lycopsids are recognized - Eligulates and Ligulates. These can be separated by the presence of a small flap of tissue, the ligule, within a modified area on the adaxial surface of the leaf.

Asteroxylon attained a height of 0.5 meters and consisted of erect monopodial branches originating from a flat-lying (prostrate) rhizome. The xylem cylinder is an actinostele (star-shaped). Axes are covered with "enations" (appear to be leaves but lack vascular tissue) with stomata. Kidney-shaped sporangia (reniform) are borne in the axil of an enation. Vascular tissue has been identified in the sporangial area.

Upright and prostrate axes of Drepanophycus (Siegenian-Frasnian) are up to 4 cm in diameter, bearing stout leaves (vascular tissue is present) that may appear thorn-like to curved. The axes may dichotomize infrequently, although some forms have a complex K- or H-branching pattern. These plants possess kidney-shaped sporangia and may have attained heights of several meters.

Protolepidodendron was a herbaceous plant with axes that may be up to 6 mm in diameter (maximum height 30 cm); these are covered with helically-arranged leaves. The leaves fork once (two tips) toward the distal end.
Oval to kidney-shaped sporangia occur on the (adaxial) upper surface of sporophylls.

Colpodexylon is a larger plant, with axes attaining 2.5 cm in diameter and a length of 60 cm. Leaves are helically arranged and three-forked. Sporophylls bear an adaxial, elliptical sporangium.

Leclercqia, originating in the Emsian and widely dispersed in the Middle Devonian, had axes that may be as large as 7 mm in diameter and up to 50 cm in length (estimated height 1 m). Leaves are helically arranged and densely packed; leaf apices are five-parted. Stomates are found both on the leaves and stems. Fertile leaves bear an adaxial sporangium, and these are intermixed with sterile leaves along the upright axes. Spores are trilete; homosporous.

Most of the Early Devonian plants were small in stature, and vegetation was not stratified initially. Tiering of these plant communities began with the appearance of pseudomonopodial and monopodial growth habits, the result of a change in growth and development of the growing meristem.

The earliest body fossils of unequivocal terrestrial arthropods have been isolated from the same locality as the earliest vascular plants (Cooksonia) of Pridoli age. Edwards (1996) has suggested that the decomposer/microherbivore/predator soil and litter communities identified in the Lower and Middle Devonian extend back at least into the Silurian. There is evidence for terrestrial plant-animal interaction in the Lower Devonian, and this is based upon spore-dominated coprolites believed to have been produced by litter-feeding myriapods.

These early plant clades existed into the Middle Devonian where newly evolved plants were larger in size and more complex in architecture. Although the earliest landscapes may have appeared vegetationally homogenous, Raymond et al. (1985) have identified three major phytogeographic regions:
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"content is educational and appropriate\", \"score\": 0}\n```\n\nThe text provided is a detailed description of early Silurian to Devonian thalloid plants and their characteristics. It is entirely educational and does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences, including children and sensitive environments.", "{\"reason\": \"educational content\", \"score\": 0}\n```\n\n**Explanation:**\nThe text provided is a detailed description of various paleobotanical species and their characteristics. It does not contain any unsafe, inappropriate, or harmful content. The language is technical and educational, making it fully appropriate for all audiences, including children and sensitive environments.", "{\"reason\": \"No unsafe content found\", \"score\": 0}\n```\n\nThe provided text is a scientific description of early terrestrial life and does not contain any unsafe, inappropriate, or harmful content. It is fully appropriate for educational and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1317-0", "llm_safety_score": 0, "reason": "content is educational and appropriate" }, { "custom_id": "1317-1", "llm_safety_score": 0, "reason": "educational content" }, { "custom_id": "1317-2", "llm_safety_score": 0, "reason": "No unsafe content found" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://personal.colby.edu/~ragastal/Paleobotany/devontiering.html", "date": "2014-07-30T05:05:04", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268660.14/warc/CC-MAIN-20140728011748-00356-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9198110104, "token_count": 2185, "score": 3.484375, "int_score": 3 }
<urn:uuid:8f01e44b-1773-4319-9b97-1b5892e713df>
Sydney Harbour seafood

All commercial fishing in Sydney Harbour has been halted due to elevated levels of dioxins detected in some fish and seafood. The area affected includes all of Port Jackson and its tributaries. Recreational fishing is not closed but eating of Harbour fish should be limited (see How much can I eat?). Fish can be caught and released.

Facts About Dioxins and Port Jackson (Sydney Harbour/Parramatta River)

What are dioxins?
Dioxins are a group of chlorinated chemicals that originate from industrial pollution or are generated in natural processes such as bush fires. They are chemically stable and can stay in the environment for a long time. Dioxins can accumulate in the body fat of animals and humans.

Where do dioxins come from?
Residues of dioxins in seafood caught in Sydney Harbour/Parramatta River are likely to have their sources in contaminated sediments on the bottom of Homebush Bay. Current levels are likely due to many years of industrial activities previously along those waterways. On land, the main source of dioxin emission in Australia is through bushfires and burning of agricultural stubble.

What are the health risks associated with dioxins?
We are all exposed to very low levels of dioxin, mostly via the food we eat. At those low levels, there are no known health risks. Long term high level exposure to dioxins can sometimes cause a range of ill effects, including reproductive disorders and cancer. However, as concluded in the Food Standards Australia New Zealand Risk Assessment report on seafood from Sydney Harbour, the health risk associated with consuming seafood from Sydney Harbour/Parramatta River is low for the average Australian consumer. The Government has closed the Harbour as a precautionary measure.

What if I've been eating fish from Sydney Harbour?
An increased health risk associated with dioxins only occurs from long term exposure to high levels of dioxin. It is unlikely that people consuming seafood from Port Jackson would have noticeable health effects from the seafood, even if the dietary advice below is occasionally exceeded (see How much can I eat?). NSW Health advises people to contact their doctor if they are concerned about consumption of fish from Sydney Harbour/Parramatta River. The risk is considered low.

I fish for recreation and I want to eat the catch. How much can I eat?
An Expert Panel has advised the Government that people can continue to eat fish from Sydney Harbour providing they follow strict dietary advice.
- No seafood caught west of the Sydney Harbour Bridge should be eaten. You should release your catch.
- For seafood caught east of the Sydney Harbour Bridge, generally no more than 150 grams per month should be consumed.
- Testing of popular species has allowed for more specific dietary advice. The table below provides advice on the maximum consumption of specific species.

Table: Recommended maximum intake based on eating a single species caught east of the Sydney Harbour Bridge.
|Species||Number of 150 gram serves||Amount per month|
|Prawns||4 per month||600g|
|Crabs||5 per month||750g|
|Bream||1 per month||150g|
|Dusky Flathead||12 per month||1800g|
|Fan-belly Leatherjacket||24 per month||3600g|
|Flounder||12 per month||1800g|
|Kingfish||12 per month||1800g|
|Luderick||12 per month||1800g|
|Sand Whiting||8 per month||1200g|
|Sea Mullet||1 every 3 months||50g|
|Silver Biddie||1 per month||150g|
|Silver Trevally||5 per month||750g|
|Tailor||1 per month||150g|
|Trumpeter Whiting||12 per month||1800g|
|Yellowtail Scad||8 per month||1200g|
|Squid||4 per month||600g|

Note 1: The advice in the table is provided where only one species is being eaten in the month. Each guideline number of serves is therefore the recommended total intake for the month. For example, eating 150 grams of Bream and 600 grams of prawns in one month would exceed the recommended intake.

Note 2: This table will be updated if new results become available. The last update was 19 December 2007.

This dietary advice is based on testing of seafood species.

What about commercial fishing?
The NSW Government has suspended all commercial fishing in Port Jackson (Port Jackson is the area inside Sydney Harbour, Parramatta River, Lane Cove River and Middle Harbour).

How do I know if fish I've bought are from Port Jackson?
All Port Jackson fish and crustaceans have been removed from sale. (Port Jackson is the area inside Sydney Harbour, Parramatta River, Lane Cove River and Middle Harbour).

How long do dioxins last?
Dioxins can last for decades in the environment.

What about imported seafood?
The NSW Food Authority recently tested imported prawns and fish for dioxins. The dioxin results for the imported seafood were well below the European Union limit for dioxins in food of less than 4TEQpg/g for dioxin.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text provides information about the closure of commercial fishing in Sydney Harbour due to elevated levels of dioxins detected in some fish and seafood. It does not contain any unsafe, inappropriate, or harmful content. The language is respectful, and the themes are related to environmental and health concerns, which are appropriate for public use and educational environments.", "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\n*Explanation:*\nThe provided text discusses the removal of fish and crustaceans from sale due to dioxin contamination and provides information about the duration of dioxins in the environment and the results of imported seafood testing. There is no unsafe, inappropriate, or harmful content in the text." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1318-0", "llm_safety_score": 0, "reason": "no unsafe content" }, { "custom_id": "1318-1", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.foodauthority.nsw.gov.au/consumers/keeping-food-safe/special-care-foods/sydney-harbour-seafood/", "date": "2014-07-30T04:58:21", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268660.14/warc/CC-MAIN-20140728011748-00356-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9300945401, "token_count": 1114, "score": 3.0625, "int_score": 3 }
<urn:uuid:431f3d76-b783-4e93-9469-7ae2189c49b8>
Known in Arabic as al-Muallaqah ("The Suspended"), the Hanging Church is the most famous Coptic church in Cairo. The church is dedicated to the Virgin Mary and is thus also known as Sitt Mariam or St. Mary's Church. The Hanging Church is named for its location above a gatehouse of the Roman fortress in Old Cairo; its nave is suspended over a passage. The church is approached by 29 steps; early travelers to Cairo dubbed it "the Staircase Church." History of the Hanging Church The Hanging Church was built in the 7th century, probably on the site of a 3rd or 4th century church for the soldiers of the bastion. It has been rebuilt several times since then, including a major rebuild under Patriarch Abraham in the 10th century. By the 11th century, the Hanging Church became the official residence of the Coptic patriarchs of Alexandria and several Coptic synods were held in the church. The main furnishings - the pulpit and screens - date from the 13th century. What to See at the Hanging Church Entrance to the Hanging Church is via a beautifully-decorated gate on Shar'a Mari Girgis Street. This leads into an open courtyard, flanked by mosaics, from which there are 29 steps to the church. At the top of the stairs are three wooden doors decorated with geometric patterns, framed with decorative carvings in the stone wall. Inside, only the section to the right of the sanctuary, above the southern bastion, is considered original. Nevertheless, the Hanging Church remains one of the most impressive churches in the city, remarkable for its marble pulpit, inlaid screens, icons and murals. The timber roof of the nave recalls Noah's Ark. The 11th-century marble pulpit surmounts 13 graceful pillars, representing Jesus and the 12 disciples. As customary in Coptic churches, one of the pillars is black, representing Judas, and another is grey, for doubting Thomas. Its steps are carved with a shell and a cross. The oldest icon in the Hanging Church dates from the 8th century. Many other artifacts from this church are now displayed in the Coptic Museum, including a lintel showing Christ's entry into Jerusalem that dates from the 5th or 6th century. In the eastern end of the church are three sanctuaries with altars, dedicated to the Virgin Mary (center); St. George (left) and St. John the Baptist (right). The central iconostasis (sanctuary screen) dates from the 12th or 13th century. It is made of ebony inlaid with ivory, carved with geometric designs and crosses, and has a row of seven large icons. The center icon depicts the Christ Enthroned; to his right is the Virgin Mary, the Archangel Gabriel and St. Peter; to his left are John the Baptist, the Archangel Michael and St. Paul. The left screen has a design of squares with crosses in alternate ivory and ebony, surmounted by 17 icons depicting scenes from the martyrdom of St. George. The right screen dates from the 13th century and has a cruciform pattern. Across the top are seven small icons representing the life of St. John the Baptist. All the icons on these screens are the work of a single Armenian artist, Orhan Karabedian, and date from 1777. 
Quick Facts on the Hanging Church
|Names:||al-Muallaqah · Hanging Church · Sitt Mariam · Staircase Church|
|Dates:||7th, 10th C|
|Visitor and Contact Information|
|Coordinates:||30.005388° N, 31.230081° E|
|Address:||Shar'a Mari Girgis|
|Hours:||Daily 9-4 (except during services)|
|Lodging:||View hotels near the Hanging Church|
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"historical and cultural information\", \"score\": 0}\n```\n\nThe text provides historical and cultural information about the Hanging Church in Cairo, which is appropriate for all audiences and does not contain any unsafe content." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1319-0", "llm_safety_score": 0, "reason": "historical and cultural information" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.sacred-destinations.com/egypt/cairo-hanging-church", "date": "2014-07-30T04:59:08", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510268660.14/warc/CC-MAIN-20140728011748-00356-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9483492374, "token_count": 903, "score": 2.5625, "int_score": 3 }
<urn:uuid:acf54046-b958-4d86-9ba7-f0534bac95b0>
Okay, history buffs. We've got a challenge for you: Learn as much as you possibly can from the 700-odd maps just uploaded to the University of Richmond's Digital Scholarship Lab. Your brain will thank you later.

The maps come from Charles O. Paullin's 1932 Historical Geography of the United States—and they're enthralling. The history bit really sucks you in, as it takes you everywhere from the layout of "Indian battles" from the 1500s on, to the breakdown of votes in presidential elections from the 19th century. While the maps have been around for quite some time, the new digital version animates them, pouring life into these otherwise static images.

The Paullin map is actually a piece of history in and of itself. Beginning with the 1874 Statistical Atlas of the United States, cartographers went beyond simply mapping geographical features and political borders to include things like census data. Map experts say that Paullin's book represented "a culmination" of the new trend, and the results are splendid. With this new online version, for example, you can see how long it would have taken to travel from New York to the rest of the country in 1857 and you can watch the country's center of population shift westward throughout its 300-year history.

Click through to the University of Richmond's historical atlas for the full experience. [NYT]
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"No unsafe content found\", \"score\": 0}\n```\n\n*Explanation:*\nThe text is purely educational and historical in nature, providing information about a collection of maps and their significance. It does not contain any inappropriate language, harmful themes, or references to illegal activities. Therefore, it is fully appropriate for all audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1320-0", "llm_safety_score": 0, "reason": "No unsafe content found" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://gizmodo.com/you-might-want-to-loosen-the-hatband-there-a-little-sp-1491056078/@zanzan42", "date": "2014-07-31T13:30:03", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273350.41/warc/CC-MAIN-20140728011753-00404-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9550747871, "token_count": 285, "score": 3.015625, "int_score": 3 }
<urn:uuid:b3101756-d214-4d5c-8ba5-19cce569ae0d>
In summer 1941 Erwin Rommel was Hitler's favourite general: he had driven the British out of Libya and stood poised to invade Egypt. He seemed unbeatable. So the British decided to have him killed.

The British opened their counter-attack with a series of special forces raids, including the first-ever operation by the newly formed SAS. Rommel was one of the targets. Michael Asher reveals how poor planning and incompetence in high places led to disaster in the desert, and how fantastic bravery and brilliant improvisation enabled a handful of the Commandos to escape.

Classic real-life adventure, written by best-selling desert expert and novelist Michael Asher.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"historical context without unsafe content\", \"score\": 0}\n```\n\n**Explanation:**\nThe text provides historical context about Erwin Rommel and the actions taken by the British during World War II. It does not contain any unsafe, inappropriate, or harmful content. The language is respectful, and the themes are historical and military, which are suitable for educational and general audiences." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1321-0", "llm_safety_score": 0, "reason": "historical context without unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://www.campusbooks.com/books/biographies-memoirs/leaders-notable-people/military/9780297846857_MICHAEL-ASHER-ASHER-Michael_Get-Rommel-The-Secret-Briti.html", "date": "2014-07-31T14:12:53", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273350.41/warc/CC-MAIN-20140728011753-00404-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9519997239, "token_count": 142, "score": 2.828125, "int_score": 3 }
<urn:uuid:b61bd5a2-92f2-4e4f-aed5-b6e4134a3c6d>
purgatory (ˈpɜːɡətərɪ, -trɪ)
1. chiefly RC Church: a state or place in which the souls of those who have died in a state of grace are believed to undergo a limited amount of suffering to expiate their venial sins and become purified of the remaining effects of mortal sin
2. a place or condition of suffering or torment, esp one that is temporary
[C13: from Old French purgatoire, from Medieval Latin pūrgātōrium, literally: place of cleansing, from Latin pūrgāre to purge]

In the teaching of the Roman Catholic Church, purgatory is the condition of souls of the dead who die with some punishment (though not damnation) due them for their sins. Purgatory is conceived as a condition of suffering and purification that leads to union with God in heaven. Purgatory is not mentioned in the Bible; Catholic authorities defend the teaching on purgatory by arguing that prayer for the dead is an ancient practice of Christianity and that this practice assumes that the dead can be in a state of suffering, a state that the living can improve by their prayers.

Note: A "purgatory" is, by extension, any place of suffering, usually for past misdeeds.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"educational content\", \"score\": 0}\n```\n\nThe text provided is purely educational and informative, detailing the concept of purgatory in the context of the Roman Catholic Church. It does not contain any unsafe, inappropriate, or harmful content. Therefore, it is fully appropriate for all audiences and educational environments." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1323-0", "llm_safety_score": 0, "reason": "educational content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://dictionary.reference.com/browse/purgatory", "date": "2014-08-02T03:02:32", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510276250.57/warc/CC-MAIN-20140728011756-00452-ip-10-146-231-18.ec2.internal.warc.gz", "language": "en", "language_score": 0.9532099366, "token_count": 282, "score": 3.203125, "int_score": 3 }
<urn:uuid:bd8a452e-6a96-405b-b473-f70a6635a327>
The sabar, a drum of Serer origin, is a traditional drum from the West African nation of Senegal (it is also played in the Gambia). It is generally played with one hand and one stick. Among its most renowned exponents is the Senegalese musician Doudou N'Diaye Rose.

The sabar was used to communicate with other villages: the different rhythms correspond to phrases and could be heard for over 15 kilometers. Sabar also refers to the style of music played on this drum.
{ "final_harm_score": 0, "embedding_harm_score": 0, "llm_harm_score": 0, "llm_score_assessment_by_chunk": [ "{\"reason\": \"no unsafe content\", \"score\": 0}\n```\n\nThe text discusses the cultural and historical significance of the sabar drum without any inappropriate, offensive, or harmful content. It is suitable for all audiences and does not contain any unsafe elements." ], "llm_harm_scores_by_chunk": [ { "custom_id": "1325-0", "llm_safety_score": 0, "reason": "no unsafe content" } ] }
{ "dump": "CC-MAIN-2014-23", "url": "http://en.wikipedia.org/wiki/Sabar", "date": "2014-07-13T09:38:07", "file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776437493.50/warc/CC-MAIN-20140707234037-00024-ip-10-180-212-248.ec2.internal.warc.gz", "language": "en", "language_score": 0.9575663805, "token_count": 148, "score": 2.90625, "int_score": 3 }